Doctoral dissertations on the topic "Data Quantitation"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.

Browse the 50 best scholarly doctoral dissertations on the topic "Data Quantitation".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the work's metadata.

Browse doctoral dissertations from many different fields of study and compile accurate bibliographies.

1

Lee, Wooram. "Protein Set for Normalization of Quantitative Mass Spectrometry Data". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/54554.

Abstract:
Mass spectrometry has been recognized as a prominent analytical technique for peptide and protein identification and quantitation. With the advent of soft ionization methods, such as electrospray ionization and matrix-assisted laser desorption/ionization, mass spectrometry has opened a new era for protein and proteome analysis. Due to its high-throughput and high-resolution character, along with the development of powerful data analysis software tools, mass spectrometry has become the most popular method for quantitative proteomics. Stable isotope labeling and label-free quantitation methods are widely used in quantitative mass spectrometry experiments. Proteins with stable expression levels and key roles in basic cellular functions, such as actin, tubulin and glyceraldehyde-3-phosphate dehydrogenase, are frequently utilized as internal controls in biological experiments. However, recent studies have shown that the expression level of such commonly used housekeeping proteins depends on cell type, cell cycle or disease status, and that it can change as a result of a biochemical stimulation. Such phenomena can, therefore, substantially compromise the use of these proteins for data validation. In this work, we propose a novel set of proteins for quantitative mass spectrometry that can be used either for data normalization or for validation purposes. The protein set was generated from cell cycle experiments performed with MCF-7, an estrogen receptor positive breast cancer cell line, and MCF-10A, a non-tumorigenic immortalized breast cell line. The protein set was selected from a list of 3700 proteins identified in the different cellular sub-fractions and cell cycle stages of MCF-7/MCF-10A cells, based on the stability of spectral count data (CV < 30%) generated with an LTQ ion trap mass spectrometer. A total of 34 proteins qualified as endogenous standards for the nuclear cell fraction and 75 for the cytoplasmic cell fraction. The validation of these proteins was performed with a complementary, Her2+, SKBR-3 cell line. Based on the outcome of these experiments, it is anticipated that the proposed protein set will find applicability for data normalization/validation in a broader range of mechanistic biological studies that involve the use of cell lines.
Master of Science
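The selection rule described above — keep proteins whose spectral counts stay stable across fractions and cell-cycle stages (CV < 30%) — reduces to a few lines of code. A minimal sketch under that assumption, with synthetic counts standing in for the LTQ spectral count data:

```python
# Flag proteins whose spectral counts are stable enough (CV < 30 %) to act
# as endogenous normalization standards; the counts below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50, size=(3700, 8)).astype(float)  # proteins x conditions

means = counts.mean(axis=1)
cv = counts.std(axis=1, ddof=1) / means          # coefficient of variation
stable = (cv < 0.30) & (means > 0)

print(f"{stable.sum()} of {counts.shape[0]} proteins qualify as standards")
```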
2

McQueen, Peter. "Alternative strategies for proteomic analysis and relative protein quantitation". Wiley-VCH, 2015. http://hdl.handle.net/1993/30850.

Abstract:
The main approach to studying the proteome is a technique called data-dependent acquisition (DDA). In DDA, peptides are analyzed by mass spectrometry to determine the protein composition of a biological isolate. However, DDA is limited in its ability to analyze the proteome, in that it only selects the most abundant ions for analysis, and different protein identifications can result even if the same sample is analyzed multiple times in succession. Data-independent acquisition (DIA) is a newly developed method that should be able to overcome these limitations and improve our ability to analyze the proteome. We used an implementation of DIA (SWATH) to perform relative protein quantitation in the model bacterial system Clostridium stercorarium grown on two different carbohydrate sources, and found that it provided precise quantitation of proteins and was overall more consistent in its ability to identify components of the proteome than DDA. Relative quantitation of proteins is an important method that can determine which proteins are important to a biochemical process of interest. How we determine which proteins are differentially regulated between different conditions is an important question in proteomic analysis. We developed a new approach to analyzing differential protein expression that uses the variation between biological replicates to determine which proteins are differentially regulated between two conditions. This analysis showed that a large proportion of proteins identified by quantitative proteomic analysis can be differentially regulated and that these proteins are in fact related to biological processes. Analyzing changes in protein expression is a useful tool that can pinpoint many key processes in biological systems. However, these techniques fail to take into account that enzyme activity is regulated by factors other than the level of expression. Activity-based protein profiling (ABPP) is a method that can determine the activity state of an enzyme in whole-cell proteomes. We found that enzyme activity can change in response to a number of different conditions and that these changes do not always correspond with compositional changes. Mass spectrometry techniques were also used to identify serine hydrolases and characterize their expression in this organism.
February 2016
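The replicate-variation idea can be illustrated with a toy per-protein test that weighs the between-condition shift in mean log2 intensity against the spread of the biological replicates; Welch's t-test below is a generic stand-in, not the exact procedure developed in the thesis, and the condition names are placeholders:

```python
# Per-protein differential-regulation check: is the between-condition shift
# large relative to replicate-to-replicate variability?
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# log2 intensities of one protein, three biological replicates per condition.
condition_a = rng.normal(20.0, 0.3, size=3)
condition_b = rng.normal(21.1, 0.3, size=3)   # ~2-fold shift (1.1 in log2)

t, p = stats.ttest_ind(condition_a, condition_b, equal_var=False)
print(f"log2 fold change = {condition_b.mean() - condition_a.mean():.2f}, p = {p:.3g}")
```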
3

Wang, Xin. "A Novel Approach for Automatic Quantitation of 31P Magnetic Resonance Spectroscopy Data". University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1236271757.

4

Manso, Jalice Y. "Sensor fusion of IR, NIR, and Raman spectroscopic data for polymorph quantitation of an agrochemical compound". Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 37 p, 2009. http://proquest.umi.com/pqdweb?did=1694432951&sid=2&Fmt=2&clientId=8331&RQT=309&VName=PQD.

5

Lien, Tonje Gulbrandsen. "Statistical Analysis of Quantitative PCR Data". Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for matematiske fag, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13094.

Abstract:
This thesis seeks to develop a better understanding of the analysis of gene expression to find the amount of transcript in a sample. The mainstream method used is called the Polymerase Chain Reaction (PCR), and it exploits DNA's ability to replicate. The comparative CT method estimates the starting fluorescence level f0 by assuming constant amplification in each PCR cycle, and it uses the fluorescence level which has risen above a certain threshold. We present a generalization of this method, in which different threshold values can be used. The main aim of this thesis is to evaluate a new method called the Enzymological method. It estimates f0 by considering a cycle-dependent amplification and uses a larger part of the fluorescence curves than the two CT methods. All methods are tested on dilution series in which the dilution factors are known. In one of the datasets studied, the Clusterin dilution dataset, we get better estimates from the Enzymological method compared to the two CT methods.
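The comparative CT estimate reduces to one formula: with constant per-cycle efficiency E, fluorescence grows as f(c) = f0 (1+E)^c, so f0 = f_threshold / (1+E)^CT. A minimal sketch of that estimator with a movable threshold, in the spirit of the generalization described above (curve and values are illustrative):

```python
import numpy as np

def estimate_f0(fluorescence, threshold, efficiency=1.0):
    """Comparative-CT estimate of starting fluorescence f0, assuming constant
    per-cycle amplification: threshold = f0 * (1 + efficiency)**CT."""
    i = int(np.flatnonzero(fluorescence >= threshold)[0])
    # Fractional threshold cycle CT by linear interpolation between cycles.
    ct = i - 1 + (threshold - fluorescence[i - 1]) / (fluorescence[i] - fluorescence[i - 1])
    return threshold / (1.0 + efficiency) ** ct

curve = 1e-9 * 2.0 ** np.arange(40)           # ideal doubling from f0 = 1e-9
for thr in (1e-4, 1e-3, 1e-2):                # different thresholds, same f0
    print(f"threshold {thr:.0e} -> f0 ~ {estimate_f0(curve, thr):.2e}")
```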
6

Kafatos, George. "Statistical analysis of quantitative seroepidemiological data". Thesis, Open University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539408.

7

Martin, Anthony John Michael. "Quantitative data validation (automated visual evaluations)". Thesis, De Montfort University, 1999. http://hdl.handle.net/2086/6256.

Abstract:
Historically, validation has been performed on a case study basis employing visual evaluations, gradually inspiring confidence through continual application. At present, the method of visual evaluation is the most prevalent form of data analysis, as the brain is the best pattern recognition device known. However, the human visual/perceptual system is a complicated mechanism, prone to many types of physical and psychological influences. Fatigue is a major source of inaccuracy within the results of subjects performing complex visual evaluation tasks, whilst physical and experiential differences, along with age, have an enormous bearing on the visual evaluation results of different subjects. It is to this end that automated methods of validation must be developed to produce repeatable, quantitative and objective verification results. This thesis details the development of the Feature Selective Validation (FSV) method. The FSV method comprises two component measures based on amplitude differences and feature differences. These measures are combined employing a measured level of subjectivity to form an overall assessment of the comparison in question, or global difference. The three measures within the FSV method are strengthened by statistical analysis in the form of confidence levels based on amplitude, feature or global discrepancies between compared signals. Highly detailed diagnostic information on the location and magnitude of discrepancies is also made available through the employment of graphical (discrete) representations of the three measures. The FSV method also benefits from the ability to mirror human perception, whilst producing information which directly relates human variability and the confidence associated with it. The FSV method builds on the common language of engineers and scientists alike, employing categories which relate to human interpretations of comparisons, namely: 'ideal', 'excellent', 'very good', 'good', 'fair', 'poor' and 'extremely poor'.
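A heavily simplified sketch of the FSV idea follows: an amplitude difference measure (ADM) from the signals themselves, a feature difference measure (FDM) from their derivatives, and a global difference measure (GDM) combining the two point by point. The normalisations below are illustrative stand-ins, not the exact definitions later standardised for FSV (IEEE Std 1597.1):

```python
import numpy as np

def fsv_like(x, y):
    """Toy ADM/FDM/GDM in the spirit of FSV (not the standard's formulas)."""
    norm = np.mean(np.abs(x)) + np.mean(np.abs(y))
    adm = np.abs(x - y) / norm                      # amplitude differences
    dx, dy = np.gradient(x), np.gradient(y)
    dnorm = np.mean(np.abs(dx)) + np.mean(np.abs(dy))
    fdm = np.abs(dx - dy) / dnorm                   # feature (shape) differences
    gdm = np.sqrt(adm**2 + fdm**2)                  # point-wise global difference
    return adm.mean(), fdm.mean(), gdm.mean()

t = np.linspace(0, 1, 500)
measured = np.sin(2 * np.pi * 5 * t)
simulated = 0.9 * np.sin(2 * np.pi * 5 * t + 0.1)
print(["%.3f" % v for v in fsv_like(measured, simulated)])
```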
8

Babari, Parvaneh. "Quantitative Automata and Logic for Pictures and Data Words". Doctoral thesis, Universitätsbibliothek Leipzig, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-221165.

Abstract:
Mathematical logic and automata theory are two scientific disciplines with a close relationship that is not only fundamental for many theoretical results but also forms the basis of a coherent methodology for the verification and synthesis of computing systems. This connection goes back to the 1960s and the fundamental work of Büchi, Elgot and Trakhtenbrot, which shows the expressive equivalence of automata and logical systems such as monadic second-order logic on finite and infinite words. This allowed the handling of specifications (where global system properties are stated) and implementations (which involve the definition of the local steps in order to satisfy the global goals laid out in the specifications) in a single framework. This connection has been extended to and well-investigated for many other structures such as trees, finite pictures, timed words and data words. For many computer science applications, however, quantitative phenomena need to be modelled as well. Examples are vagueness and uncertainty of a statement, length of time periods, spatial information, and resource consumption. Weighted automata, introduced by Schützenberger, are prominent models for quantitative aspects of systems. The framework of weighted monadic second-order logic over words was first introduced by Droste and Gastin. They gave a characterization of the quantitative behavior of weighted finite automata as the semantics of monadic second-order sentences within their logic. Meanwhile, the idea of weighted logics was also applied to devices recognizing more general structures such as weighted tree automata and weighted automata on infinite words or traces. The main goal of this thesis is to give logical characterizations for weighted automata models on pictures and data words, as well as for Büchi-tiling systems, in the spirit of the classical Büchi-Elgot theorem. As the second goal, we deal with the synchronizing problem for data words. Below, we briefly summarize the contents of this thesis. Informally, a two-dimensional string is called a picture and is defined as a rectangular array of symbols taken from a finite alphabet. A two-dimensional language (or picture language) is a set of pictures. Picture languages have been intensively investigated by several research groups. In Chapter 1, we define weighted two-dimensional on-line tessellation automata (W2OTA) taking weights from a new weight structure called a picture valuation monoid. This new weighted picture automaton model can be used to model several applications, e.g. the average density of a picture. Such aspects could not be modelled by the semiring-weighted picture automaton model. The behavior of this automaton model is a picture series mapping pictures over an alphabet to elements of a picture valuation monoid. As one of our main results, we prove a Nivat theorem for W2OTA. It shows that recognizable picture series can be obtained precisely as projections of particularly simple unambiguously recognizable series restricted to unambiguous recognizable picture languages. In addition, we introduce a weighted monadic second-order logic (WMSO) which can model the average density of pictures. As the other main result, we show that W2OTA and a suitable fragment of our weighted MSO logic are expressively equivalent. In Chapter 2, we generalize the notion of finite pictures to +ω-pictures, i.e., pictures which have a finite number of rows and an infinite number of columns.
We extend conventional tiling systems with a Büchi acceptance condition in order to define the class of Büchi-tiling recognizable +ω-picture languages. The class of recognizable +ω-picture languages is indeed a natural generalization of ω-regular languages. We show that the class of all Büchi-tiling recognizable +ω-picture languages has similar closure properties as the class of tiling recognizable languages of finite pictures: it is closed under projection, union, and intersection, but not under complementation. While for languages of finite pictures, tiling recognizability and EMSO-definability coincide, the situation is quite different for languages of +ω-pictures. In this setting, the notion of tiling recognizability does not even cover the language of all +ω-pictures over Σ = {a, b} in which the letter a occurs at least once – a picture language that can easily be defined in first-order logic. As a consequence, EMSO is too strong for being captured by the class of tiling recognizable +ω-picture languages. On the other hand, EMSO is too weak for being captured by the class of all Büchi-tiling recognizable +ω-picture languages. To obtain a logical characterization of this class, we introduce the logic EMSO∞, which extends EMSO with existential quantification of infinite sets. Additionally, using combinatorial arguments, we show that the Büchi characterization theorem for ω-regular languages does not carry over to the Büchi-tiling recognizable +ω-picture languages. In Chapter 3, we consider the connection between weighted register automata and weighted logic on data words. Data words are sequences of pairs where the first element is taken from a finite alphabet (as in classical words) and the second element is taken from an infinite data domain. Register automata, introduced by Francez and Kaminski, provide a widely studied model for reasoning on data words. These automata can be considered as classical nondeterministic finite automata equipped with a finite set of registers which are used to store data in order to compare them with some data in the future. In this chapter, for quantitative reasoning on data words, we introduce weighted register automata over commutative data semirings equipped with a collection of binary data functions, in the spirit of the classical theory of weighted automata. Whereas in the models of register automata known from the literature data are usually compared with respect to equality or a linear order, here we allow data comparison by means of an arbitrary collection of binary data relations. This approach makes it easy to incorporate timed automata and weighted timed automata into our framework. Motivated by the seminal Büchi-Elgot-Trakhtenbrot theorem about the expressive equivalence of finite automata and monadic second-order (MSO) logic, and by the weighted MSO logic of Droste and Gastin, we introduce weighted MSO logic on data words and give a logical characterization of weighted register automata. In Chapter 4, we study the concept of synchronizing data words in register automata. The synchronizing problem for data words asks whether there exists a data word that sends all states of the register automaton to a single state. The class of register automata that we consider here has a decidable non-emptiness problem, and the subclass of nondeterministic register automata with a single register has a decidable non-universality problem.
We provide complexity bounds for the synchronizing problem in the family of deterministic register automata with k registers (k-DRA) and in the family of nondeterministic register automata with a single register (1-NRA), and prove undecidability of the problem for k-NRA in general. To this end, we prove that, for k-DRA, inputting data words with only 2k + 1 distinct data values from the infinite data domain is sufficient to synchronize. Then, we show that the synchronizing problem for k-DRA is in general PSPACE-complete, and that it is in NLOGSPACE for 1-DRA. For nondeterministic register automata (NRA), we show that Ackermann(n) distinct data values, where n is the number of states of the register automaton, might be necessary to synchronize. Then, by means of a construction proving that the synchronizing problem and the non-universality problem in 1-NRA are interreducible, we show Ackermann-completeness of the problem for 1-NRA. For k-NRA, however, we prove that the problem is in general undecidable due to the unbounded length of synchronizing data words.
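Synchronization is easiest to see on a plain DFA, of which register automata are a data-enriched generalization. A minimal sketch, assuming a toy 3-state automaton; the greedy pair-merging below is the classical argument (an automaton is synchronizing iff every pair of states can be merged), while register automata need the data-value machinery the thesis develops:

```python
from collections import deque

# Toy 3-state automaton over {a, b}; delta[state][letter] -> state.
delta = {
    0: {"a": 1, "b": 0},
    1: {"a": 2, "b": 0},
    2: {"a": 0, "b": 2},
}
LETTERS = ("a", "b")

def apply_word(state, word):
    for a in word:
        state = delta[state][a]
    return state

def merge_word(p, q):
    """BFS for a word sending both p and q to the same state."""
    prev = {(p, q): None}
    queue = deque([(p, q)])
    while queue:
        pair = queue.popleft()
        if pair[0] == pair[1]:              # merged: rebuild the word backwards
            word = []
            while prev[pair] is not None:
                pair, letter = prev[pair]
                word.append(letter)
            return "".join(reversed(word))
        for a in LETTERS:
            nxt = (delta[pair[0]][a], delta[pair[1]][a])
            if nxt not in prev:
                prev[nxt] = (pair, a)
                queue.append(nxt)
    return None                              # some pair can never be merged

def synchronizing_word():
    """Greedy pair merging; each round strictly shrinks the state set."""
    current, word = set(delta), ""
    while len(current) > 1:
        p, q, *_ = current
        w = merge_word(p, q)
        if w is None:
            return None
        word += w
        current = {apply_word(s, w) for s in current}
    return word

print(synchronizing_word())   # e.g. "bab" sends all three states to state 0
```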
9

王漣 and Lian Wang. "A study on quantitative association rules". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223588.

10

Wang, Lian. "A study on quantitative association rules". Hong Kong: University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2118561X.

11

Ahmad, Yasmeen. "Management, visualisation & mining of quantitative proteomics data". Thesis, University of Dundee, 2012. https://discovery.dundee.ac.uk/en/studentTheses/6ed071fc-e43b-410c-898d-50529dc298ce.

Abstract:
Exponential data growth in the life sciences demands cross-discipline work that brings together computing and the life sciences in a usable manner that can enhance knowledge and understanding in both fields. High-throughput approaches, advances in instrumentation and the overall complexity of mass spectrometry data have made it impossible for researchers to manually analyse data using existing market tools. By applying a user-centred approach to effectively capture the domain knowledge and experience of biologists, this thesis has bridged the gap between computation and biology through the PepTracker software (http://www.peptracker.com). This software provides a framework for the systematic detection and analysis of proteins that can be correlated with biological properties to expand the functional annotation of the genome. The tools created in this study aim to place analysis capabilities back in the hands of biologists, who are experts in evaluating their data. Another major advantage of the PepTracker suite is the implementation of a data warehouse, which manages and collates highly annotated experimental data from numerous experiments carried out by many researchers. This repository captures the collective experience of a laboratory, which can be accessed via user-friendly interfaces. Rather than viewing datasets as isolated components, this thesis explores the potential that can be gained from collating datasets in a "super-experiment" ideology, leading to the formation of broad-ranging questions and promoting biology-driven lines of questioning. This has been uniquely implemented by integrating tools and techniques from the field of Business Intelligence with the life sciences, and successfully shown to aid in the analysis of proteomic interaction experiments. Having conquered a means of documenting a static proteomics snapshot of cells, the proteomics field is progressing towards understanding the extremely complex nature of cell dynamics. PepTracker facilitates this by providing the means to gather and analyse many protein properties to generate new biological insight, as demonstrated by the identification of novel protein isoforms.
12

Monticelli, Fabio Carlo. "Pupillenverhalten als Parameter zur Beurteilung zentral-nervöser Beeinträchtigungen durch Drogen und Medikamente quantitative Analyse mittels Compact Integrated Pupillograph (CIP) am Beispiel von Patienten im Substitutionsprogramm und auffälligen Fahrzeuglenkern". Berlin wvb, Wiss. Verl, 2007. http://www.wvberlin.de/data/inhalt/monticelli.html.

13

Burger, George William. "Quantitative Analysis of Cross-Country Flight Performance Data". Connect to this title online, 2005. http://hdl.handle.net/1811/297.

Abstract:
Thesis (Honors)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 66 p.; also includes graphics. Includes bibliographical references (p. 65-66). Available online via Ohio State University's Knowledge Bank.
14

Rehse, Sabine. "Registration and Quantitative Image Analysis of SPM Data". Doctoral thesis, Universitätsbibliothek Chemnitz, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800742.

Abstract:
Nonlinear distortions of scanning probe microscopy (SPM) images degrade the quality of nanotomography images and SPM image sequences. This thesis presents a new nonlinear registration method that builds on an algorithm developed for medical applications and extends it to handle SPM data. Nonlinear registration makes it possible to image various nanostructured materials over large areas (1 µm x 1 µm) at a resolution of 10 nm, which permits a considerably more detailed quantitative analysis of the data. To this end, a new data reduction and visualization method for block copolymer microdomain networks was introduced: two- and three-dimensional microdomain structures are reduced to their skeletons, branch points are colour-coded, and the resulting graph is visualized. The number of different skeleton branch types can be tracked over time. The method was compared with local Minkowski measures of the original greyscale images; it yields morphological and geometric information on different length scales.
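A minimal sketch of the skeleton-based reduction described above, assuming a 2D binary microdomain mask; scikit-image's skeletonize stands in for the thesis's own implementation, and the neighbour-count rule is the crudest possible junction detector:

```python
import numpy as np
from skimage.morphology import skeletonize

def branch_points(skeleton):
    """Label skeleton pixels by their number of skeleton neighbours:
    1 = end point, 2 = ordinary path pixel, >= 3 = branch (junction) point."""
    padded = np.pad(skeleton.astype(np.uint8), 1)
    neighbours = sum(
        np.roll(np.roll(padded, dy, 0), dx, 1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return neighbours[1:-1, 1:-1] * skeleton

rng = np.random.default_rng(7)
binary = rng.random((256, 256)) > 0.6        # placeholder microdomain mask
counts = branch_points(skeletonize(binary))
print("junctions:", int(np.sum(counts >= 3)), "end points:", int(np.sum(counts == 1)))
```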
15

Mertens, Benjamin. "Bringing 3D and quantitative data in flexible endoscopy". Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209275.

Abstract:
In the near future, computational power will be widely used in endoscopy rooms. It will enable the augmented reality already implemented in some types of surgery. Before reaching this point, a preliminary step is the development of a 3D reconstruction endoscope. In addition, endoscopists suffer from a lack of quantitative data to evaluate dimensions and distances, notably for polyp size measurement.

In this thesis, a contribution to a more robust 3D reconstruction endoscopic device is proposed. The structured light technique is used and implemented with a diffractive optical element. Two patterns are developed and compared: the first is based on the spatial-neighbourhood coding strategy, the second on the direct-coding strategy. The latter is implemented on a diffractive optical element and used in an endoscopic 3D reconstruction device. It is tested in several conditions and shows excellent quantitative results, but its robustness against bad visual conditions (occlusions, liquids, specular reflections, ...) must be improved.

Based on this technology, an endoscopic ruler is developed, dedicated to addressing endoscopists' lack of a measurement system. The pattern is simplified to a single line to be more robust. Quantitative data show sub-pixel accuracy, and the device is robust in all tested cases. The system has then been validated with a gastroenterologist by measuring polyps. Compared to the literature in this field, this device performs better and is more accurate.
Doctorat en Sciences de l'ingénieur
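In the simplest pinhole-camera view, the single-line ruler reduces to laser triangulation: the projected line shifts across the image in inverse proportion to depth. A toy sketch under that assumption; the focal length and baseline are illustrative values, not the thesis's calibration:

```python
def depth_mm(u_line_px, u_infinity_px, focal_px=800.0, baseline_mm=5.0):
    """Depth from the lateral shift (disparity) of the projected line:
    Z = f * b / d, the standard triangulation relation."""
    disparity = u_line_px - u_infinity_px
    if disparity <= 0:
        raise ValueError("line must shift toward the projector with depth")
    return focal_px * baseline_mm / disparity

# A 40 px shift with this toy geometry puts the surface ~100 mm away.
print(depth_mm(360.0, 320.0))
```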
16

Pititto, Silvia. "Generazione automatica di visualizzazioni di open data quantitativi". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6218/.

Abstract:
The challenges of Information Visualisation and the limitations of existing visualisation systems led to the creation of a new system for the automatic generation of visualisations of quantitative Open Data, which is presented in this thesis.
17

Xin, Bowen. "Multimodal Data Fusion and Quantitative Analysis for Medical Applications". Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/26678.

Abstract:
Medical big data is not only enormous in its size, but also heterogeneous and complex in its data structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging vital field to address this urgent challenge, aiming to process and analyze the complex, diverse and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Thus, multimodal data fusion benefits a wide range of quantitative medical applications, including personalized patient care, more optimal medical operation plans, and preventive public health. Though there has been extensive research on computational approaches for multimodal fusion, there are three major challenges of multimodal data fusion in quantitative medical applications, summarized as feature-level fusion, information-level fusion and knowledge-level fusion:

• Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional, small-sample multimodal medical datasets, which hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" problem and address the criteria for discovering interpretable, relevant, non-redundant and generalizable multimodal biomarkers.

• Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion guided by label supervision, there is a lack of methods to explicitly explore inter-modal relationships in medical applications. Unsupervised multimodal learning is able to mine inter-modal relationships as well as reduce the usage of labor-intensive data and explore potentially undiscovered biomarkers; however, mining discriminative information without label supervision is an upcoming challenge. Furthermore, the interpretation of complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, which hinders the exploration of multimodal interaction in disease mechanisms.

• Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions using either feature engineering or deep learning has been investigated in recent years, both methods neglect the importance of inter-region spatial relationships. Thus, a topological profiling tool for multi-focus regions is in high demand, and is still missing in current feature engineering and deep learning methods. Furthermore, incorporating domain knowledge with knowledge distilled from multi-focus regions is another challenge in knowledge-level fusion.

To address the three challenges in multimodal data fusion, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, our major contributions in this thesis include:

• To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant and generalizable multimodal biomarkers from high-dimensional, small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria, including representativeness, robustness, discriminability, and non-redundancy, are exploited by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and nomograms are employed to further enhance feature interpretability in machine learning models.

• To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data, and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module is proposed to decipher the complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.

• To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for the incorporation of domain knowledge. Different from conventional feature engineering and deep learning, our DTA framework is able to explicitly quantify inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is subsequently tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into the Adaptive Community Profile, summarizing the tracked multi-scale community topology with additional customizable, clinically important factors.
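Since the information-level framework builds on canonical correlation analysis, a plain CCA example makes the starting point concrete. This minimal sketch uses random placeholder data and scikit-learn's linear CCA, i.e. only the classical core, not the deep interpretable variant proposed in the thesis:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X_imaging = rng.normal(size=(100, 50))     # e.g. radiomic features per patient
X_clinical = rng.normal(size=(100, 10))    # e.g. lab biomarkers per patient

cca = CCA(n_components=2)
U, V = cca.fit_transform(X_imaging, X_clinical)

# Canonical correlations: agreement between the paired projections.
for k in range(U.shape[1]):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"component {k}: canonical correlation = {r:.2f}")
```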
18

Bussola, Francesco. "Quantitative analysis of smartphone PPG data for heart monitoring". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18150/.

Abstract:
The field of app-based PPG monitoring of cardiac activity is promising, yet classification of heart rhythms into normal sinus rhythm (NSR) or atrial fibrillation (Afib) is difficult in the case of noisy measurements. In this work, we aim at characterizing a dataset of 1572 subjects whose signals have been crowdsourced by collecting measurements via a dedicated smartphone app, using the embedded camera. We evaluate the distributions of three features of our signals: the peak area, the amplitude and the time interval between two successive pulses. We evaluate whether some factors affect the distributions, discovering that the strongest effects are for age and BMI groupings. We evaluate the agreement of results between the R, G and B acquisition channels, finding good agreement between the first two. After surveying signal quality indexes in the literature, we use a subset of them in a classification task, combined with the dynamic time warping distance, a technique that matches a signal to a template. We achieve an accuracy of 89% on the test set for binary quality classification. On the chaotic temporal series we evaluate the appearance of different types of rhythms on Poincaré plots, and we quantify the results by a measure of their 3D spread. We perform this on a set of 20 subjects, 10 NSR and 10 Afib, finding significant differences between their 3D morphologies. We extend our analysis to the larger dataset, obtaining some significant results.
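The Poincaré-plot analysis mentioned above has a standard 2D quantification that is easy to sketch: plot each inter-beat interval against the next and measure the spread across and along the identity line (SD1/SD2). The thesis's 3D-spread measure is analogous; the intervals below are toy values:

```python
import numpy as np

def poincare_sd(ibi):
    """SD1/SD2 of the Poincaré plot (x_i, x_{i+1}) of inter-beat intervals."""
    x, y = ibi[:-1], ibi[1:]
    sd1 = np.std(y - x) / np.sqrt(2)   # spread across the identity line
    sd2 = np.std(y + x) / np.sqrt(2)   # spread along the identity line
    return sd1, sd2

ibi = np.array([0.80, 0.82, 0.79, 1.05, 0.64, 0.98])  # toy, irregular rhythm
sd1, sd2 = poincare_sd(ibi)
print(f"SD1 = {sd1 * 1000:.0f} ms, SD2 = {sd2 * 1000:.0f} ms")
```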
19

Yee, Thomas William. "The Analysis of binary data in quantitative plant ecology". Thesis, University of Auckland, 1993. http://hdl.handle.net/2292/1973.

Abstract:
The analysis of presence/absence data of plant species by regression analysis is the subject of this thesis. A nonparametric approach is emphasized, and methods which take into account correlations between species are also considered. In particular, generalized additive models (GAMs) are used; these are applied to species' responses to greenhouse scenarios and to examine multispecies interactions. Parametric models are used to estimate optimal conditions for the presence of species and to test several niche theory hypotheses. An extension of GAMs called vector GAMs is proposed; these provide a means for defining nonparametric versions of the following models: multivariate regression, the proportional and nonproportional odds models, the multiple logistic regression model, and bivariate binary regression models such as the bivariate probit model and the bivariate logistic model. Some theoretical properties of vector GAMs are deduced from those pertaining to ordinary GAMs, and their relationship with the generalized estimating equations (GEE) approach is elucidated.
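A rough analogue of a binary-response GAM for presence/absence data can be sketched as a logistic model on spline-expanded predictors. This scikit-learn stand-in illustrates the smooth, unimodal niche-response idea only, not the vector GAM machinery developed in the thesis; the environmental gradient is synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(1)
altitude = rng.uniform(0, 1500, size=300)            # environmental gradient
# Presence probability peaks at mid altitude (a unimodal niche response).
p = np.exp(-((altitude - 700) / 250) ** 2)
present = rng.random(300) < p

gam_like = make_pipeline(
    SplineTransformer(degree=3, n_knots=8),          # smooth basis expansion
    LogisticRegression(max_iter=1000),
)
gam_like.fit(altitude.reshape(-1, 1), present)
print(gam_like.predict_proba([[700.0]])[0, 1])       # fitted presence prob. at the optimum
```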
20

Michelson, Daniel Brause. "Quality control of weather radar data for quantitative application". Thesis, University of Salford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400829.

21

Gerber, Meredith L. "Graphical interface for quantitative monitoring of 3D MRI data". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33707.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (leaf 62).
The recent development of techniques in magnetic resonance imaging allows for the noninvasive monitoring of cartilage for disease progression, effects of lifestyle change, and results of medical interventions. In particular, the dGEMRIC technique has been used. Prior dGEMRIC data have been two-dimensional. Magnetic resonance equipment can currently produce three-dimensional dGEMRIC data, but software that analyzes three-dimensional data sets is lacking in practicality. This research improved existing software to better handle three-dimensional dGEMRIC data sets. Improvements were made to better facilitate (1) image section selection, (2) segmentation, (3) T1 mapping, and (4) statistical data analysis.
by Meredith L. Gerber
M.Eng.
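dGEMRIC analysis rests on per-voxel T1 mapping, which at its core is a nonlinear fit of an inversion-recovery signal model to intensities acquired at several inversion times. A hedged sketch with synthetic data and an illustrative three-parameter magnitude model, not the thesis's exact pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, b, t1):
    """Magnitude inversion-recovery model S(TI) = |a * (1 - b * exp(-TI/T1))|."""
    return np.abs(a * (1.0 - b * np.exp(-ti / t1)))

ti = np.array([50.0, 100.0, 200.0, 400.0, 800.0, 1600.0])   # inversion times (ms)
true = ir_signal(ti, a=1000.0, b=1.9, t1=450.0)
noisy = true + np.random.default_rng(2).normal(0, 5, ti.size)

params, _ = curve_fit(ir_signal, ti, noisy, p0=(900.0, 2.0, 500.0))
print(f"estimated T1 = {params[2]:.0f} ms")   # ~450 ms for this synthetic voxel
```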
22

Gharabaghi, Sara. "Quantitative Susceptibility Mapping (QSM) Reconstruction from MRI Phase Data". Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1610018553822445.

23

Lind, Marcus. "Automatic Segmentation of Knee Cartilage Using Quantitative MRI Data". Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138403.

Abstract:
This thesis investigates whether support vector machine classification is a suitable approach for performing automatic segmentation of knee cartilage using quantitative magnetic resonance imaging data. The data sets used are part of a clinical project that investigates whether patients who have suffered recent knee damage will develop cartilage damage. The thesis therefore also investigates whether the segmentation results can be used to predict the clinical outcome of the patients. Two methods that perform the segmentation using support vector machine classification are implemented and evaluated. The evaluation indicates that it is a good approach for the task, but the implemented methods need to be further improved and tested on more data sets before clinical use. It was not possible to relate the cartilage properties to clinical outcome using the segmentation results. However, the investigation demonstrated good promise of how the segmentation results, if they are improved, can be used in combination with quantitative magnetic resonance imaging data to analyze how the cartilage properties change over time or vary between knees.
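The voxel-wise classification idea is easy to sketch: describe each voxel by a small feature vector derived from the quantitative MRI data and train an SVM to label it cartilage or background. Features and labels below are synthetic placeholders, not the thesis's actual feature set:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Per-voxel feature vectors, e.g. (relaxation value, intensity, distance to bone).
X = rng.normal(size=(5000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)      # toy cartilage labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```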
24

Zhang, Wenbing. "A method and program for quantitative description of fracture data and fracture data extrapolation from scanline or wellbore data". May be available electronically, 2001. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

25

Ramya, Sravanam Ramya. "Empirical Study on Quantitative Measurement Methods for Big Image Data : An Experiment using five quantitative methods". Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13466.

Abstract:
Context. With the increasing demand for image processing in multimedia applications, research on image quality assessment has received great interest. While the goal of Image Quality Assessment is to find efficient Image Quality Metrics that are closely related to human visual perception, over the last three decades much effort has been put in by researchers, and a large body of literature has developed around emerging Image Quality Assessment techniques. In this regard, emphasis is given to Full-Reference Image Quality Assessment research, where quality measurement algorithms are analyzed against the original reference image, as that is much closer to perceptual visual quality.

Objectives. In this thesis we investigate five widely used Image Quality Metrics (Peak Signal to Noise Ratio (PSNR), Structural SIMilarity Index (SSIM), Feature SIMilarity Index (FSIM), Visual Saliency Index (VSI), and Universal Quality Index (UQI)), perform an experiment on a chosen image dataset (of images with different types of distortions arising from different image processing applications), and identify the most efficient metric with respect to the dataset used. This analysis could be helpful to researchers working on big image data projects, where selecting an appropriate Image Quality Metric is of major significance. Our study details the dataset used and the experimental results, where the image set strongly influences the outcome.

Methods. The goal of this study is achieved by conducting a literature review of existing Image Quality Assessment research and Image Quality Metrics, and by performing an experiment. The image dataset used in the experiment was obtained from the LIVE Image Quality Assessment database. Matlab was used to run the image processing experiments. Descriptive analysis (including statistical analysis) was employed to analyze the results obtained from the experiment.

Results. For the distortion types involved (JPEG 2000, JPEG compression, white Gaussian noise, Gaussian blur), SSIM was the most efficient at measuring image quality after distortion for JPEG 2000 compressed and white-Gaussian-noise images, and PSNR was the most efficient for JPEG compression and Gaussian blur images, with respect to the original image.

Conclusions. From this study it is evident that SSIM and PSNR are efficient for Image Quality Assessment on the dataset used. It is also evident that the level of distortion in the image dataset strongly influences the results; in our case SSIM and PSNR perform efficiently for the database used.
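Two of the five metrics compared above are quick to demonstrate: PSNR, defined as 10·log10(MAX² / MSE), and SSIM. The sketch below uses synthetic images and scikit-image's implementations rather than the Matlab pipeline used in the thesis:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(4)
reference = rng.random((128, 128))
# Simulate a white-Gaussian-noise distortion of the reference image.
distorted = np.clip(reference + rng.normal(0, 0.05, reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, distorted, data_range=1.0)
ssim = structural_similarity(reference, distorted, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```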
26

Otaka, Akihisa. "Quantitative Analyses of Cell Aggregation Behavior Using Cell Trajectory Data". 京都大学 (Kyoto University), 2014. http://hdl.handle.net/2433/188580.

27

Chen, Qing. "Mining exceptions and quantitative association rules in OLAP data cube". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0024/MQ51312.pdf.

28

Butler, Stephanie T. "Is Quantitative Data-Driven instruction appropriate in visual arts education?" Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1587885.

Abstract:

The use of quantitative data-driven instruction and assessment in the visual arts curriculum could impact student creativity if employed within the visual arts, a content area that primarily uses qualitative pedagogy and assessment. In this paper I examine the effect on measured creativity of using Quantitative Data-Driven Assessment compared to Authentic Assessment in the visual arts curriculum. This initial experimental research exposed eighth-grade visual arts students to Authentic Assessment in one group and Quantitative Data-Driven Assessment in another. Two experiments were conducted from the results. In the first experiment, the post-test art works of both groups are compared for mean creativity scores as rated by an independent expert panel of art educators. The second experiment compares gains in pre-test/post-test creativity as assessed by the teacher. Gains in mean creativity scores are compared between groups. Differences in assessment motivations are discussed as possible influencing factors.

29

Almadi, Kanika. "Quantitative study of the movie industry based on IMDb data". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113502.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 47).
Big Data Analytics is an emerging business capability that provides far more intelligence to companies, enabling them to make well-informed decisions and better formulate their business strategies. This has been made possible by the easy accessibility of immense volumes of data stored securely in the cloud. As a result, online product review platforms have gained enormous popularity and successfully provide various services to consumers, primarily via user-generated content. The thesis takes raw, unstructured data available on the IMDb website, cleans it up, and organizes it in a structured format suitable for quick analysis by various analytical software packages. The thesis then examines the available literature on analytics of IMDb movie datasets and identifies that little work has been carried out on predicting the financial success of movies. The thesis therefore carries out data analytics on the IMDb movie sets and highlights several parameters, such as movie interconnectedness and the director's credentials, which correlate positively with movie gross revenue. The thesis thereafter loosely defines a movie innovation index encompassing parameters such as the number of references, the number of follows and the number of remakes, and discusses how the abundance of some of these parameters has a positive impact on the box office success of a movie. Conversely, a movie lacking these parameters, and thereby characterized as innovative, may not be as well received by audiences, leading to poor box office performance. The thesis also proposes that the director's credentials in the film industry, measured by his/her total number of Oscar nominations and awards, have a positive impact on the financial success of the movie and the director's own career advancement.
by Kanika Almadi.
S.M. in Engineering and Management
30

Harradon, Michael Robert. "Quantitative spectral data acquisition and analysis with modular smartphone assemblies". Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/100662.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 37).
A low-cost cell phone spectrometer using the image sensor in the cell phone camera is developed and analyzed. The spectrometer design is optimized for sensitivity and spectral resolution. Calibration techniques are developed to enable robust data collection across different phone models with minimal equipment. Novel algorithms for robust calibration with minimal equipment are described and implemented. The spectrometer is then characterized for use in colorimetric systems. Finally, the cell phone spectrometer is used in a forensic application for dating blood spots based on time-dependent oxidation-induced spectral changes.
by Michael Robert Harradon.
M. Eng.
31

Song, Tingting. "Data analysis for quantitative determinations of polar lipid molecular species". Kansas State University, 2010. http://hdl.handle.net/2097/6907.

Abstract:
Master of Science
Department of Statistics
Gary L. Gadbury
This report presents an analysis of data resulting from a lipidomics experiment. The experiment sought to determine the changes in the lipidome of big bluestem prairie grass when exposed to stressors. The two stressors were drought (versus a watered condition) and a rust infection (versus no infection), and were whole plot treatments arranged in a 2 by 2 factorial. A split plot treatment factor was the position on a sampled leaf (top half versus bottom half). In addition, samples were analyzed at different times, representing a blocking factor. A total of 110 samples were used and, for each sample, concentrations of 137 lipids were obtained. Many lipids were not detected for certain samples and, in some cases, a lipid was not detected in most samples. Thus, each lipid was analyzed separately using a modeling strategy that involved a combination of mixed effects linear models and a categorical analysis technique, with the latter used for certain lipids to determine if a pattern of observed zeros was associated with the treatment condition(s). In addition, p-values from tests of fixed effects in a mixed effect model were computed three different ways and compared. Results in general show that the drought condition has the greatest effect on the concentrations of certain lipids, followed by the effect of position on the leaf. Of least effect on lipid concentrations was the rust condition.
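A per-lipid model of the kind described — whole-plot factors drought and rust, split-plot factor leaf position, and the analysis batch as a random blocking effect — can be sketched with statsmodels. The file and column names below are hypothetical, purely to show the model form:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy file with one row per sample for a single lipid:
# columns: concentration, drought, rust, position, batch
df = pd.read_csv("lipid_long_format.csv")

model = smf.mixedlm(
    "concentration ~ drought * rust * position",   # fixed-effect factorial
    data=df,
    groups=df["batch"],                            # batch as random intercept
)
fit = model.fit()
print(fit.summary())
```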
32

Mehra, Chetan Saran. "Constructing smart financial portfolios from data driven quantitative investment models". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/404673/.

Abstract:
Portfolio managers have access to large amounts of financial time series data, which is rich in structure and information. Such structure, at varying time horizons and frequencies, exhibits different characteristics, such as momentum and mean reversion, to mention two. The key challenge in building a smart portfolio is first to identify and model the relevant data regimes operating at different time frames, and then to convert them into an investment model targeting each regime separately. Regimes in financial time series can change over a period of time, i.e. they are heterogeneous. This has implications for a model, as it may stop being profitable once the regime it is targeting has stopped or evolved into another one. Changing regimes, or regimes evolving into other regimes, are one of the key reasons why we should have several independent models targeting the relevant regimes at a particular point in time. In this thesis we present a smart portfolio management approach that advances existing methods and beats the Sharpe ratio of other methods, including the efficient frontier. Our smart portfolio is a two-tier framework. In the first tier we build four quantitative investment models, with each model targeting a pattern at a different time horizon. We build two market-neutral models using the pairs methodology, and the other two models use the momentum approach in the equity market. In the second tier we build a set of meta models that allocate capital to tier one, using the Kelly criterion, to build a meta portfolio of quantitative investment models. Our approach is smart at several levels. Firstly, we target patterns that occur in financial data at different time horizons and create high-probability investment models; hence we make better use of data. Secondly, we calculate the optimal bet size using Kelly at each time step to maximise returns. Finally, we avoid making investments in loss-making models and hence make smarter allocation of capital.
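The tier-two allocation rests on the Kelly criterion: for a strategy with win probability p and win/loss payoff ratio b, the Kelly fraction is f* = p − (1 − p)/b. A compact sketch; the four model names and edge numbers are illustrative, loosely mirroring the two pairs and two momentum models described above:

```python
def kelly_fraction(p_win, payoff_ratio):
    """Fraction of capital to bet; clipped at 0 (do not fund losing models)."""
    f = p_win - (1.0 - p_win) / payoff_ratio
    return max(f, 0.0)

models = {
    "pairs_fast":  kelly_fraction(0.55, 1.0),
    "pairs_slow":  kelly_fraction(0.55, 1.2),
    "momentum_1m": kelly_fraction(0.55, 1.5),
    "momentum_6m": kelly_fraction(0.45, 1.1),   # negative edge -> weight 0
}
total = sum(models.values()) or 1.0
weights = {name: f / total for name, f in models.items()}
print(weights)
```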
33

Yu, Haipeng. "Designing and modeling high-throughput phenotyping data in quantitative genetics". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/97579.

Abstract:
Quantitative genetics aims to bridge the genome-to-phenome gap. The advent of high-throughput genotyping technologies has accelerated the progress of genome-to-phenome mapping, but a challenge remains in phenotyping. Various high-throughput phenotyping (HTP) platforms have been developed recently to obtain economically important phenotypes in an automated fashion with less human labor and reduced costs. However, effective ways of designing HTP have not been investigated thoroughly. In addition, high-dimensional HTP data pose a big challenge for statistical analysis by increasing computational demands. A new strategy for modeling high-dimensional HTP data and elucidating the interrelationships among these phenotypes is needed. Previous studies used pedigree-based connectedness statistics to study the design of phenotyping. The availability of genetic markers provides a new opportunity to evaluate connectedness based on genomic data, which can serve as a means to design HTP. This dissertation first discusses the utility of connectedness spanning three studies. In the first study, I introduced genomic connectedness and compared it with traditional pedigree-based connectedness. The relationship between genomic connectedness and prediction accuracy based on cross-validation was investigated in the second study. The third study introduced a user-friendly connectedness R package, which provides a suite of functions to evaluate the extent of connectedness. In the last study, I proposed a new statistical approach to model high-dimensional HTP data by leveraging the combination of confirmatory factor analysis and Bayesian networks. Collectively, the results from the first three studies suggested the potential usefulness of applying genomic connectedness to design HTP. The statistical approach I introduced in the last study provides a new avenue to model high-dimensional HTP data holistically, to further help us understand the interrelationships among phenotypes derived from HTP.
Doctor of Philosophy
Quantitative genetics aims to bridge the genome to phenome gap. With the advent of genotyping technologies, the genomic information of individuals can be included in a quantitative genetic model. A new challenge is to obtain sufficient and accurate phenotypes in an automated fashion with less human labor and reduced costs. The high-throughput phenotyping (HTP) technologies have emerged recently, opening a new opportunity to address this challenge. However, there is a paucity of research in phenotyping design and modeling high-dimensional HTP data. The main themes of this dissertation are 1) genomic connectedness that could potentially be used as a means to design a phenotyping experiment and 2) a novel statistical approach that aims to handle high-dimensional HTP data. In the first three studies, I first compared genomic connectedness with pedigree-based connectedness. This was followed by investigating the relationship between genomic connectedness and prediction accuracy derived from cross-validation. Additionally, I developed a connectedness R package that implements a variety of connectedness measures. The fourth study investigated a novel statistical approach by leveraging the combination of dimension reduction and graphical models to understand the interrelationships among high-dimensional HTP data.
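Genomic connectedness statistics are typically computed from the genomic relationship matrix, so a VanRaden-style G matrix is a natural starting sketch; the marker matrix below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
n_animals, n_markers = 20, 500
M = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # 0/1/2 genotypes

p = M.mean(axis=0) / 2.0                  # allele frequencies per marker
Z = M - 2.0 * p                           # centred genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))   # VanRaden's first G matrix

# Off-diagonal blocks of G (relationships across management units) are the
# raw material for genomic connectedness statistics such as PEVD or CD.
print(G.shape, G.diagonal().mean())
```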
34

Lacerda, Fred W. "Comparative advantages of graphic versus numeric representation of quantitative data". Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/49817.

35

Kjelso, Morten. "A quantitative evaluation of data compression in the memory hierarchy". Thesis, Loughborough University, 1997. https://dspace.lboro.ac.uk/2134/10596.

Abstract:
This thesis explores the use of lossless data compression in the memory hierarchy of contemporary computer systems. Data compression may realise performance benefits by increasing the capacity of a level in the memory hierarchy and by improving the bandwidth between two levels in the memory hierarchy. Lossless data compression is already widely used in parts of the memory hierarchy. However, most of these applications are characterised by targeting inexpensive and relatively low-performance devices such as magnetic disk and tape devices. The consequence of this is that the benefits of data compression are not realised to their full potential. This research aims to understand how the benefits of data compression can be realised for levels of the memory hierarchy which have a greater impact on system performance and system cost. This thesis presents a review of data compression in the memory hierarchy and argues that main memory compression has the greatest potential to improve system performance. The review also identifies three key issues relating to the use of data compression in the memory hierarchy. Quantitative investigations are presented to address these issues for main memory data compression. The first investigation is into memory data, and shows that memory data from a range of Unix applications typically compresses to half its original size. The second investigation develops three memory compression architectures, taking into account the results of the previous investigation. Furthermore, the management of compressed data is addressed, and management methods are developed which achieve storage efficiencies in excess of 90% and typically complete allocation and deallocation operations with only a few memory accesses. The experimental work then culminates in a performance investigation. This shows that when memory resources are stretched, hardware-based memory compression can improve system performance by up to an order of magnitude. Furthermore, software-based memory compression can improve system performance by up to a factor of 2. Finally, the performance models and quantitative results contained in this thesis enable us to identify under what conditions memory compression offers performance benefits. This may help designers incorporate memory compression into future computer systems.
Styles: APA, Harvard, Vancouver, ISO, etc.
36

Talouarn, Estelle. "Utilisation des données de séquence pour la cartographie fine et l'évaluation génomique des caractères d'intérêt des caprins laitiers français". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0067.

Full text source
Abstract:
French dairy goats recently entered the genomics era with the development of a DNA chip in the 2010s and the first QTL detections and genomic evaluations. The availability of whole-genome sequence data for farm animals opens up new opportunities. The VarGoats project is an international 1,000-genomes resequencing program designed to provide sequence information for the Capra hircus species. Studying the quality of imputation to the sequence level is a necessary first step before using imputed sequences in association analyses and genomic evaluations. The main objective of this work was to study the possible integration of sequence data into the French dairy goat breeding programs. Setting up a quality check for the sequence data represented a sizable part of this thesis; it was based on a literature review and on the comparison between the available 50k genotypes and the filtered sequences. Out of the initial 97,889,899 SNPs and 12,304,043 indels, we eventually retained 23,338,436 variants, including 40,491 SNPs of the Illumina GoatSNP50 BeadChip. A preliminary study of imputation from 50k genotypes to sequence level was then performed, with the aim of obtaining a sufficient number of good-quality imputed sequences. Several imputation methods (family-based or population-based) and several software tools were tested using the 829 sequenced animals available across international goat breeds. Within-breed imputation led to genotype and allele concordance rates of 0.74 and 0.86 in Saanen, and 0.76 and 0.87 in Alpine, respectively; correlations were 0.26 and 0.24 in Alpine and Saanen, respectively. The imputed sequences of males confirmed QTL previously identified on 50k genotypes and allowed the detection of new regions of interest. The exhaustiveness of sequence data offered an unprecedented opportunity to deepen our understanding of a QTL region on chromosome 19 in the Saanen breed, a region associated with production, udder type and udder health traits, as well as semen production traits. This analysis did not identify any candidate mutation. Nevertheless, we propose a simple way to identify particular genomic and phenotypic profiles in the Saanen breed from a 50k genotype, a method that could prove useful for early prediction in France and internationally. Finally, bringing together all the previous results, we studied the impact of integrating imputed sequence data from chromosome 19 on the accuracy of evaluations in French Saanen. Several evaluation models were compared: single-step GBLUP (ssGBLUP) and weighted single-step GBLUP (WssGBLUP), using different panels of imputed variants. The best results were obtained with ssGBLUP including the 50k genotypes and the imputed variants of the chromosome 19 QTL region (between 24.72 and 28.38 Mb), with an average accuracy gain of +6.2% across the evaluated traits. The update of the goat SNP chip, to which I contributed, offers a further opportunity to improve the accuracy of evaluations: it significantly improves the quality of genomic predictions (between 3.1 and 6.4% depending on the scenario) while limiting the computation time associated with imputation. This work confirms the value of using sequence data in the French dairy goat breeding programs and opens up the prospect of integrating it into routine evaluations.
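The genotype and allele concordance rates quoted above are simple agreement statistics between true and imputed genotypes. A minimal illustrative sketch, with genotypes coded as 0/1/2 allele counts, invented data, and one common way of defining allele concordance:

    import numpy as np

    def genotype_concordance(true_g, imputed_g):
        """Fraction of genotype calls (0/1/2) that match exactly."""
        return float(np.mean(true_g == imputed_g))

    def allele_concordance(true_g, imputed_g):
        """Fraction of alleles that agree: each genotype carries two
        alleles, so a 1-vs-2 mismatch still agrees on one allele."""
        return 1.0 - float(np.abs(true_g - imputed_g).mean()) / 2.0

    true_g    = np.array([0, 1, 2, 2, 1, 0])
    imputed_g = np.array([0, 1, 1, 2, 2, 0])
    print(genotype_concordance(true_g, imputed_g))  # ~0.67
    print(allele_concordance(true_g, imputed_g))    # ~0.83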
Styles: APA, Harvard, Vancouver, ISO, etc.
37

Wang, Dingqian. "Quantitative analysis with machine learning models for multi-parametric brain imaging data". Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/22245.

Full text source
Abstract:
Gliomas are the most common primary malignant brain tumors in adults. With the dramatic increase in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Computer-aided image analysis might partially overcome these shortcomings thanks to its capacity to measure multilevel features quantitatively and reproducibly on multi-parametric medical data. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features at the morphological, structural, cellular and molecular levels, derived from multi-modality medical images, should be integrated into computer-aided analysis; the difference in image quality between modalities is a further challenge in the field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to gain additional practical predictive value. Our major contributions are: 1. To address image-quality differences and observer dependence in histological diagnosis, we proposed an automated machine-learning brain-tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker KI-67. For each WSI, we extract both visual parameters, such as morphology, and sub-visual parameters, including first-order and second-order features. A quantitative interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations, LIME) is then used to measure the contribution of features for a single case. Most grading systems based on machine learning models are considered "black boxes", whereas with this system the clinically trusted reasoning can be revealed; the quantitative analysis and explanation may help clinicians better understand the disease and choose optimal treatments for improving clinical outcomes. 2. Building on this grading platform, we introduced multimodal Magnetic Resonance Images (MRIs) into our research and proposed a new imaging-tissue correlation based approach, called RA-PA-Thomics, to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated data set and compared with prior art. The experimental data set includes public data sets and image information from two hospitals; experimental results indicate that the model improves the accuracy of glioma grading and genotyping.
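The interpretability step above uses LIME; since the platform's exact configuration is not given here, the sketch below swaps in scikit-learn's permutation importance as a simpler, model-agnostic way to score feature contributions. Data and feature names are synthetic stand-ins for the WSI features and KI-67:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for visual/sub-visual WSI features plus KI-67
    X, y = make_classification(n_samples=300, n_features=6,
                               n_informative=3, random_state=0)
    names = ["morphology", "first_order", "second_order",
             "texture", "shape", "ki67"]  # illustrative feature names

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Shuffle one feature at a time on held-out data; the score drop
    # is that feature's contribution to the grading decision
    result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
    for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
        print(f"{name:12s} {imp:.3f}")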
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Skogsberg, Peter. "Quantitative indicators of a successful mobile application". Thesis, KTH, Radio Systems Laboratory (RS Lab), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-123976.

Full text source
Abstract:
The smartphone industry has grown immensely in recent years. The two leading platforms, Google Android and Apple iOS, each feature marketplaces offering hundreds of thousands of software applications, or apps. The vast selection has facilitated a maturing industry, with new business and revenue models emerging. As an app developer, basic statistics and data for one's apps are available via the marketplace, but also via third-party data sources. This report examines how mobile software is evaluated and rated quantitatively by both end-users and developers, and which metrics are relevant in this context. A selection of freely available third-party data sources and app monitoring tools is discussed, followed by an introduction to several relevant statistical methods and data mining techniques. The main objective of this thesis project is to investigate whether findings from app statistics can provide understanding of how to design more successful apps that attract more downloads and/or more revenue. After the theoretical background, a practical implementation is discussed in the form of an in-house application statistics web platform. This was developed together with the app developer company The Mobile Life, who also provided access to app data for 16 of their published iOS and Android apps. The implementation uses automated download and import from online data sources, and provides a web-based graphical user interface that displays the data using tables and charts. Using mathematical software, a number of statistical methods were applied to the collected dataset. Analysis findings include different categories (clusters) of apps, correlations between metrics such as an app's market ranking and its number of downloads, a long-tailed distribution of keywords used in app reviews, regression models for the distribution of downloads, and an experimental application of Pareto's 80-20 rule, which was found to fit the gathered dataset. Recommendations to the app company include embedding session-tracking libraries such as Google Analytics into future apps; this would allow collection of in-depth metrics such as session length and user retention, enabling more interesting pattern discovery.
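The Pareto 80-20 check mentioned above reduces to asking what share of total downloads the top fifth of apps accounts for; a minimal sketch on invented download counts:

    import numpy as np

    # Invented download counts for ten apps
    downloads = np.array([120_000, 45_000, 9_000, 4_500, 2_200,
                          1_800, 900, 400, 150, 60])

    ranked = np.sort(downloads)[::-1]        # most-downloaded first
    top20 = max(1, round(0.2 * len(ranked)))
    share = ranked[:top20].sum() / ranked.sum()
    print(f"top 20% of apps hold {share:.0%} of downloads")  # ~90% here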
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Polyvyanyy, Artem, and Dominik Kuropka. "A quantitative evaluation of the enhanced topic-based vector space model". Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2009/3381/.

Full text source
Abstract:
This contribution presents a quantitative evaluation procedure for information retrieval models and the results of this procedure applied to the enhanced Topic-based Vector Space Model (eTVSM). Since the eTVSM is an ontology-based model, its effectiveness heavily depends on the quality of the underlying ontology. The model has therefore been tested with different ontologies to evaluate their impact on the effectiveness of the eTVSM. At the highest level of abstraction, the following results were observed during our evaluation. First, the theoretically derived statement that the eTVSM has effectiveness similar to the classic Vector Space Model when a trivial ontology is used (every term is a concept, independent of any other concept) was confirmed. Second, we were able to show that the effectiveness of the eTVSM rises if an ontology is used that can resolve synonyms; we were able to derive such an ontology automatically from WordNet. Third, we observed that more powerful ontologies automatically derived from WordNet dramatically dropped the effectiveness of the eTVSM, clearly below the effectiveness level of the Vector Space Model. Fourth, we were able to show that a manually created and optimized ontology raises the effectiveness of the eTVSM to a level clearly above the best effectiveness levels we have found in the literature for the Latent Semantic Indexing model with comparable document sets.
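For reference, the classic Vector Space Model used as the baseline above reduces to cosine similarity between term-weight vectors; a minimal sketch with TF-IDF weighting (documents and query invented, not from the evaluation corpus):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = ["data compression in the memory hierarchy",
            "lossless compression of main memory pages",
            "ontology based information retrieval models"]
    query = ["memory compression"]

    vec = TfidfVectorizer()
    D = vec.fit_transform(docs)   # TF-IDF document-term matrix
    q = vec.transform(query)      # query mapped into the same term space

    for doc, s in sorted(zip(docs, cosine_similarity(q, D).ravel()),
                         key=lambda t: -t[1]):
        print(f"{s:.2f}  {doc}")

An ontology-based model such as the eTVSM replaces this raw term space with concept vectors, which is where resolving synonyms helps.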
Styles: APA, Harvard, Vancouver, ISO, etc.
40

Gao, Yang. "On the integration of qualitative and quantitative methods in data fusion". Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240463.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Edgar, Jill Marie. "Hate crime in Canada: A quantitative analysis of victimization survey data". Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6209.

Full text source
Abstract:
Hate crime victimization in Canada is a criminal justice issue that has received insufficient attention. To address this lack of information, Statistics Canada included two questions concerning hate crime in the 1999 administration of the General Social Survey. The data from this survey were analyzed for this thesis. Differences between hate crime and non-hate crime respondents were examined. Subsequently, the three most frequently reported hate crime motivation categories of race/ethnicity, sex and culture were compared. The results of the analysis revealed that while differences exist between hate crime and non-hate crime respondents, the main differences appeared between respondents reporting sex-motivated hate crimes and those in the two remaining categories of race/ethnicity and culture. The main variations were in the reasons respondents cited for not reporting the incident to the police and in their psychological reactions to the event. Those who perceived their victimization to be based upon their race/ethnicity or culture did not report the incident to the police because they felt it was not important enough, while respondents victimized on the basis of their sex indicated that they did not bring the incident to the attention of the police because they felt the "police do nothing". While respondents in all three motivation categories examined in this study reported being fearful as a result of their victimization, those who perceived themselves as the victim of a sex-based hate crime were substantially less likely than those victimized on account of their race/ethnicity or culture to report that they were not affected that much.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Fursov, Ilya. "Quantitative application of 4D seismic data for updating thin-reservoir models". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/2968.

Full text source
Abstract:
A range of methods which allow quantitative integration of 4D seismic and reservoir simulation are developed. These methods are designed to work with thin reservoirs, where the seismic response is normally treated in a map-based sense due to the limited vertical resolution of seismic. The first group of methods are fast-track procedures for predicting future saturation fronts and estimating reservoir permeability. The inputs to these methods are pressure and saturation maps, intended to be derived from time-lapse seismic attributes. The procedures employ a streamline representation of the fluid flow and a finite-difference discretisation of the flow equations. The underlying ideas are drawn from the literature and merged with some innovative new ideas, particularly for the implementation and use; however, my conclusions on the applicability of the methods are different from their literature counterparts, and are more conservative. The fast-track procedures are advantageous in terms of speed compared to history matching techniques, but lack coupling between the quantities which describe the reservoir fluid flow: permeabilities, pressures, and saturations. For this reason, these methods are very sensitive to input noise and currently cannot be applied to real datasets with a robust outcome. Seismic history matching is the second major method considered here for integrating 4D seismic data with the reservoir simulation model. Although more computationally demanding, history matching is capable of tolerating high levels of input noise and is more readily applicable to real datasets. The proposed implementation of seismic modelling within the history matching loop is based on a linear regression between the time-lapse seismic attribute maps and the reservoir dynamic parameter maps, thus avoiding petro-elastic and seismic trace modelling; the idea for such a regression is developed from a pressure/saturation inversion approach found in the literature. Testing of the seismic history matching workflow, with the associated uncertainty estimation, is performed on a synthetic model. A reduction of the forecast uncertainties is observed after adding the 4D seismic information to the history matching process. It is found that a proper formulation of the covariance matrices for the seismic errors is essential to obtain favourable forecasts with small levels of bias. Finally, the procedure is applied to a North Sea field dataset, where a marginal reduction in the prediction uncertainties is observed for the wells located close to the major seismic anomalies. Overall, it is demonstrated that the proposed seismic history matching technique is capable of integrating 4D seismic data with the simulation model and increasing confidence in the latter.
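The regression-based seismic modelling step can be illustrated with ordinary least squares: fit a linear map between dynamic reservoir parameters and a seismic attribute on a synthetic map, then reuse it inside the matching loop. All coefficients and noise levels below are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500  # flattened map cells

    # Synthetic "truth": attribute = a*pressure + b*saturation + noise
    pressure   = rng.uniform(10.0, 30.0, n)   # invented pressure scale
    saturation = rng.uniform(0.1, 0.9, n)
    attribute  = 0.8 * pressure - 5.0 * saturation + rng.normal(0.0, 0.5, n)

    # Least-squares fit of the linear proxy standing in for
    # petro-elastic and seismic trace modelling
    X = np.column_stack([pressure, saturation, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, attribute, rcond=None)
    print("fitted [a, b, intercept]:", np.round(coef, 2))  # close to [0.8, -5.0, 0.0]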
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Schiller, Benjamin J. "Data Biology: A quantitative exploration of gene regulation and underlying mechanisms". Thesis, University of California, San Francisco, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3587899.

Full text source
Abstract:

Regulation of gene expression is a fundamental biological process required to adapt the full set of hereditary information (i.e., the genome) to the varied environments that any organism encounters. Here, we elucidate two distinct forms of gene regulation – of endogenous genes by binding of transcription factors to information-containing genomic sequences and of selfish genes (“transposons”) by targeting of small RNAs to repetitive genomic sequences – using a wide array of approaches.

To study regulation by transcription factors, we used glucocorticoid receptor (GR), a hormone-activated, DNA-binding protein that controls inflammation, metabolism, stress responses and other physiological processes. In vitro, GR binds as an inverted dimer to two imperfectly palindromic “half sites” separated by a “spacer”. Moreover, GR binds different sequences with distinct conformations, as demonstrated by nuclear magnetic resonance spectroscopy (NMR) and other biophysical methods.

In vivo, GR employs different functional surfaces when regulating different genes. We investigated whether sequences bound by GR in vivo might be a composite of several motifs, each biased toward utilization of a particular pattern of functional surfaces of GR. Using microarrays and deep sequencing, we characterized gene expression and genomic occupancy by GR, with and without glucocorticoid treatment, of cells expressing GR alleles bearing differences in three known functional surfaces. We found a “sub-motif”, the GR “half site”, that relates to utilization of the dimerization interface and directs genomic binding by GR in a distinct conformation.

To study repression of transposons, we characterized the production and function of small RNAs in the yeast Cryptococcus neoformans. We found that target transcripts are distinguished by suboptimal introns and inefficient splicing. We identified a complex, SCANR, required for the synthesis of small RNAs, and demonstrated that it physically associates with the spliceosome. We propose that recognition of gene products by SCANR is in kinetic competition with splicing, thereby further promoting small RNA production from target transcripts.

To achieve these results, we developed new bioinformatics tools: twobitreader, a small Python package for efficient extraction of genomic sequences; scripter, a flexible back-end for easily creating scripts and pipelines; and seriesoftubes, a pipeline built upon scripter for the analysis of deep sequencing data.
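As a usage note, twobitreader exposes a UCSC .2bit genome as a dict-like object whose chromosomes support slicing; a minimal sketch, where the file path and coordinates are illustrative assumptions:

    import twobitreader  # pip install twobitreader

    genome = twobitreader.TwoBitFile("hg19.2bit")  # path assumed for illustration
    chrom = genome["chr1"]                          # chromosomes act like sequences
    seq = chrom[100000:100050]                      # 50 bases, 0-based coordinates
    print(str(seq).upper())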

Styles: APA, Harvard, Vancouver, ISO, etc.
44

Rangert, Emma. "Integration of Quantitative User Data Into the Agile Website Development Process". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-231004.

Full text source
Abstract:
At Valtech AB, website analysis with Google Analytics takes place at the end of the development process: measurements to gather quantitative user data are added once the website is live, rather than as part of development. Through interviews and a literature study, the Lean Startup philosophy and lean user experience were examined, with focus on the minimum viable product, hypothesis writing, and the build-measure-learn feedback loop. This thesis work proposes a process for integrating website analysis with Google Analytics into the agile work process. The proposed process includes early definition of the website's business impact, key performance indicators and main conversions, together with early set-up of the Google Analytics accounts and of a dashboard screen monitoring the main macro conversions. Furthermore, hypothesis writing and the creation of minimum viable products are included, as is the consideration of previous measurements in backlog prioritization and refinement meetings. Finally, continuous presentation of data measurements at the sprint demo is important. The conclusions of this thesis work are that the motivation for performing agile website analysis depends on the development team members and their knowledge of website analysis; that the customer must take an active part in the measurement process, learning enough to take over the analysis at project completion; and that quantitative measurements should be followed up with qualitative ones by asking why users acted in a certain way.
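A macro-conversion KPI of the kind monitored on the proposed dashboard is simply converted sessions over all sessions; a minimal sketch over an invented event log (the field names are assumptions, not the Google Analytics schema):

    from collections import defaultdict

    # Invented event log: (session_id, event) pairs
    events = [("s1", "visit"), ("s1", "signup"),
              ("s2", "visit"),
              ("s3", "visit"), ("s3", "signup"),
              ("s4", "visit")]

    sessions = defaultdict(set)
    for sid, event in events:
        sessions[sid].add(event)

    # Macro conversion: sessions that completed the "signup" goal
    converted = sum(1 for evts in sessions.values() if "signup" in evts)
    print(f"conversion rate: {converted / len(sessions):.0%}")  # 50%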
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Arroniz, Inigo. "EXTRACTING QUANTITATIVE INFORMATION FROM NONNUMERIC MARKETING DATA: AN AUGMENTED LATENT SEMANTIC ANALYSIS APPROACH". Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3083.

Full text source
Abstract:
Despite the widespread availability and importance of nonnumeric data, marketers do not have the tools to extract information from large amounts of nonnumeric data. This dissertation attempts to fill this void: I developed a scalable methodology that is capable of extracting information from extremely large volumes of nonnumeric data. The proposed methodology integrates concepts from information retrieval and content analysis to analyze textual information. This approach avoids a pervasive difficulty of traditional content analysis, namely the classification of terms into predetermined categories, by creating a linear composite of all terms in the document and then weighting the terms according to their inferred meaning. In the proposed approach, meaning is inferred from the collocation of the term across all the texts in the corpus. It is assumed that there is a lower-dimensional space of concepts that underlies word usage. The semantics of each word are inferred by identifying its various contexts in a document and across documents (i.e., in the corpus). After the semantic similarity space is inferred from the corpus, the words in each document are weighted to obtain their representation in the lower-dimensional semantic similarity space, effectively mapping the terms to the concept space and ultimately creating a score that measures the concept of interest. I propose an empirical application of the outlined methodology. For this empirical illustration, I revisit an important marketing problem: the effect of movie critics on the performance of movies. In the extant literature, researchers have used an overall numerical rating of the review to capture the content of movie reviews. I contend that valuable information present in the textual materials remains uncovered. I use the proposed methodology to extract this information from the nonnumeric text contained in a movie review. The proposed setting is particularly attractive for validating the methodology, because it allows a simple test of the text-derived metrics by comparing them to the numeric ratings provided by the reviewers. I empirically show the application of this methodology and of traditional computer-aided content-analytic methods to study an important marketing topic, the effect of movie critics on movie performance. In the empirical application, I use two datasets that combined contain more than 9,000 movie reviews nested in more than 250 movies. I restudy this marketing problem in the light of directly obtaining information from the reviews, instead of following the usual practice of using an overall rating or a classification of the review as either positive or negative. I find that adding the direct content and structure of the review contributes a significant amount of explanatory power as a determinant of movie performance, even in the presence of actual reviewer overall ratings (stars) and other controls. This effect is robust across distinct operationalizations of both the review content and the movie performance metrics. In fact, my findings suggest that as we move from sales to profitability to financial return measures, the role of the content of the review, and therefore the critic's role, becomes increasingly important.
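The mapping of term vectors into a lower-dimensional concept space described above is essentially latent semantic analysis; a minimal sketch using truncated SVD over TF-IDF weights, with invented reviews standing in for the corpus:

    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    reviews = ["a gripping plot and superb acting",
               "superb acting carries a thin plot",
               "the special effects are loud and empty",
               "loud effects, empty story, poor acting"]

    tfidf = TfidfVectorizer().fit_transform(reviews)    # term space
    svd = TruncatedSVD(n_components=2, random_state=0)  # concept space
    scores = svd.fit_transform(tfidf)                   # one row per review

    # Reviews sharing vocabulary contexts land close together in the
    # 2-D semantic space; a concept score is a coordinate in it.
    print(scores.round(2))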
Ph.D.
Department of Marketing
Business Administration
Business Administration PhD
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Wüchner, Tobias [Verfasser]. "Behavior-based Malware Detection with Quantitative Data Flow Analysis / Tobias Wüchner". Berlin : epubli, 2016. http://d-nb.info/1120172470/34.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Sisodiya, Sanjay Mull. "Qualitative and quantitative analysis of MRI data from patients with epilepsy". Thesis, University College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362884.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Po, Bruce C. (Bruce Chou-hsin) 1977. "Graphical interface for quantitative T1 and T2 mapping of MRI data". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86679.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Towers, David Peter. "The automatic and quantitative analysis of interferometric and optical fringe data". Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108038/.

Full text source
Abstract:
Optical interference techniques are used for a wide variety of industrial measurements. Using holographic interferometry or electronic speckle pattern interferometry, whole-field measurements can be made on diffusely reflecting surfaces to sub-wavelength accuracy. Interference fringes are formed by comparing two states of an object. The interference phase contains information regarding the optical path difference between the two object states, and is related to the object deformation. The automatic extraction of the phase is critical for optical fringe methods to be applied as a routine tool; solving this problem is the main topic of the thesis. All stages in the analysis have been considered: fringe field recording methods, reconstructing the data into a digital form, and automatic image processing algorithms to solve for the interference phase. A new method for reconstructing holographic fringe data has been explored, producing a system with considerably reduced sensitivity to environmental changes. An analysis of the reconstructed fringe pattern showed that most errors in the phase measurements are linear, and two methods for error compensation are proposed. The optimum resolution which can be attained using the method is λ/90, or 4 nanometers. The fringe data were digitised using a framestore and a solid-state CCD camera. The image processing follows three distinct stages: filtering the input data, forming a 'wrapped' phase map by either quasi-heterodyne analysis or the Fourier transform method, and phase unwrapping. The primary objective was to form a fully automatic fringe analysis package applicable to general fringe data. Automatic processing has been achieved by making local measurements of fringe field characteristics. The number of iterations of an averaging filter is optimised according to a measure of the fringe field's signal-to-noise ratio. In phase unwrapping, it has been identified that discontinuities in the data are more likely in regions of high spatial frequency fringes. This factor has been incorporated into a new algorithm where regions of discontinuous data are isolated according to local variations in the fringe period and data consistency. These methods have been found to give near-optimum results in many cases. The analysis is fully automated and can be performed in a relatively short time, ≈ 10 minutes on a SUN 4 processor. Applications to static deflections, vibrating objects, axisymmetric flames and transonic air flows are presented. Static deflection data from both holographic interferometry and ESPI are shown. The range of fringe fields which can be analysed is limited by the resolution of the digital image data obtainable from commercially available devices. For the quantitative analysis of three-dimensional flows, imaging the fringe data is difficult due to large variations in localisation depth; two approaches to overcome this problem are discussed for the specific case of burner flame analysis.
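Phase unwrapping, the final stage above, removes the artificial 2π jumps in a wrapped phase map; in one dimension it is a single NumPy call, as in this minimal synthetic sketch:

    import numpy as np

    x = np.linspace(0, 4 * np.pi, 200)
    true_phase = 1.5 * x                          # smooth, monotonically growing phase
    wrapped = np.angle(np.exp(1j * true_phase))   # fold into (-pi, pi]

    unwrapped = np.unwrap(wrapped)                # undo the 2*pi jumps
    print(np.allclose(unwrapped, true_phase))     # True (starting offsets coincide here)

In general the unwrapped result is only defined up to a constant multiple of 2π, and two-dimensional unwrapping across noisy fringe fields is substantially harder, which is why the thesis isolates discontinuous regions first.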
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Teltzrow, Maximilian. "A quantitative analysis of e-commerce". Doctoral thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005. http://dx.doi.org/10.18452/15297.

Full text source
Abstract:
The role and perception of the World Wide Web in its different usage contexts is shifting, from an early focus on pure Web interaction with customers, information seekers and other users towards the Web as one component of a multi-channel information and communication strategy. This development allows companies to collect, analyze and exploit a growing amount of digital consumer information. While companies profit from such data (e.g., for marketing purposes and usability improvements), the analysis and usage of online data has significantly increased consumers' privacy concerns, which in turn are a major obstacle to successful e-commerce. A company must therefore respect privacy requirements in its data analysis and usage practices, and must communicate its privacy practices efficiently. The aim of this thesis is to explore the border between the seemingly competing interests of online consumers and companies. Privacy on the Internet is investigated from a consumer perspective, privacy requirements are specified, and recommendations for better privacy management in companies are suggested; the proposed solutions allow the resolution of conflicting goals between companies' data-usage practices and consumers' privacy concerns. Special emphasis is placed on retailers operating multiple distribution channels, which have become the dominant players in e-commerce. The contributions are as follows. (1) Measuring antecedents of trust in multi-channel retailing: a structural equation model explaining consumer trust in a multi-channel retailer is presented; trust is a central precondition of purchase intention. Perceived reputation and the perceived size of the physical stores significantly influence trust in the online shop, confirming the hypothesis of cross-channel effects between the physical store network and the online shop, while perceived privacy has the strongest influence on trust; the results argue for further integrating distribution channels and improving privacy communication. (2) Design and test of a Web analysis framework: a set of 82 metrics for measuring the online success of Web sites is presented, introducing new conversion metrics and customer segmentation approaches, with an emphasis on metrics for multi-channel retailers; the framework is tested on Web data from a large multi-channel retailer and an information site. (3) Prototype of a privacy-preserving Web analysis service: since the analysis of Web data must adhere to privacy restrictions, the impact of legislative and self-imposed privacy requirements on the framework is discussed, and a privacy-preserving Web analysis service is presented that calculates the proposed metrics and indicates when an analysis is not compliant with privacy requirements; a syntactical extension of an established privacy standard is proposed. (4) Extended analysis of consumer privacy concerns: personalization systems, an important application of Web analysis results, become more effective with more user information and therefore raise particularly strong privacy concerns; consumer privacy concerns are categorized in a meta-study of 30 privacy surveys, their impact on personalization systems is described, and research approaches to privacy-preserving personalization are discussed. (5) Development of a privacy communication design: a novel user-interface design is presented in which privacy practices are explained in context and the customer benefit of data disclosure is clearly stated. In a user experiment comparing two versions of a personalized Web shop, subjects who interacted with the new interface design were significantly more willing to share personal data, rated the privacy practices and the perceived benefit of data disclosure higher, and made considerably more purchases.
Styles: APA, Harvard, Vancouver, ISO, etc.