Theses on the topic "Data projection"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Data projection".
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
McWilliams, Brian Victor Parulian. "Projection based models for high dimensional data". Thesis, Imperial College London, 2011. http://hdl.handle.net/10044/1/9577.
Sibley, Christy N. "Analyzing Navy Officer Inventory Projection Using Data Farming". Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6868.
The Navy's Strategic Planning and Analysis Directorate (OPNAV N14) uses a complex model to project officer status in the coming years. The Officer Strategic Analysis Model (OSAM) projects officer status using an initial inventory, historical loss rates, and dependent functions for accessions, losses, lateral transfers, and promotions that reflect Navy policy and U.S. law. OSAM is a tool for informing decision makers as they consider potential policy changes, or analyze the impact of policy changes already in place, by generating Navy Officer inventory projections for a specified time horizon. This research explores applications of data farming for potential improvement of OSAM. An analysis of OSAM inventory forecast variations over a large number of scenarios while changing multiple input parameters enables assessment of key inputs. This research explores OSAM through applying the principles of design of experiments, regression modeling, and nonlinear programming. The objectives of this portion of the work include identifying critical parameters, determining a suitable measure of effectiveness, assessing model sensitivities, evaluating performance across a spectrum of loss adjustment factors, and determining appropriate values of key model inputs for future use in forecasting Navy officer inventory.
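The data-farming loop this abstract describes, projecting the inventory under many combinations of input parameters and comparing outcomes, can be sketched as follows. The one-line projection model and all parameter values are hypothetical stand-ins, not OSAM's actual dynamics:

```python
import itertools

def project_inventory(initial, years, loss_rate, accessions):
    """Project an officer inventory forward with a toy recurrence:
    each year a fraction is lost and a fixed number is accessed."""
    inv = initial
    for _ in range(years):
        inv = inv * (1.0 - loss_rate) + accessions
    return inv

def sensitivity_grid(initial=50_000, years=10):
    """Evaluate the projection over a grid of input parameters
    (a small 'data farm') to see which inputs drive the outcome."""
    loss_rates = [0.05, 0.07, 0.09]
    accession_levels = [3_000, 4_000, 5_000]
    results = {}
    for lr, acc in itertools.product(loss_rates, accession_levels):
        results[(lr, acc)] = project_inventory(initial, years, lr, acc)
    return results

farm = sensitivity_grid()
best = max(farm, key=farm.get)    # parameter pair with the largest end inventory
worst = min(farm, key=farm.get)   # parameter pair with the smallest end inventory
```

Comparing the spread of outcomes as each parameter varies (with the others held fixed) is the simplest way to flag critical parameters before fitting a regression metamodel.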
Eslava-Gomez, Guillermina. "Projection pursuit and other graphical methods for multivariate data". Thesis, University of Oxford, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.236118.
Ebert, Matthias. "Non-ideal projection data in X-ray computed tomography". [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605022.
Cropanese, Frank C. "Synthesis of low k1 projection lithography utilizing interferometry /". Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/1235.
Folgieri, R. "Ensembles based on Random Projection for gene expression data analysis". Doctoral thesis, Università degli Studi di Milano, 2008. http://hdl.handle.net/2434/45878.
Bolton, Richard John. "Multivariate analysis of multiproduct market research data". Thesis, University of Exeter, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302542.
Texto completoKishimoto, Paul Natsuo. "Transport demand in China : estimation, projection, and policy assessment". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120664.
Texto completoCataloged from PDF version of thesis. "Some pages in the original document contain text that runs off the edge of the page"--Disclaimer Notice page.
Includes bibliographical references.
China's rapid economic growth in the twenty-first century has driven, and been driven by, concomitant motorization and growth of passenger and freight mobility, leading to greater energy demand and environmental impacts. In this dissertation I develop methods to characterize the evolution of passenger transport demand in a rapidly-developing country, in order to support projection and policy assessment. In Essay #1, I study the role that vehicle tailpipe and fuel quality standards ("emissions standards") can play vis-à-vis economy-wide carbon pricing in reducing emissions of pollutants that lead to poor air quality. I extend a global, computable general equilibrium (CGE) model resolving 30 Chinese provinces by separating freight and passenger transport subsectors, road and non-road modes, and household-owned vehicles; and then linking energy demand in these subsectors to a province-level inventory of primary pollutant emissions and future policy targets. While climate policy yields an air quality co-benefit by inducing shifts away from dirtier fuels, this effect is weak within the transport sector. Current emissions standards can drastically reduce transportation emissions, but their overall impact is limited by transport's share in total emissions, which varies across provinces. I conclude that the two categories of measures examined are complementary, and the effectiveness of emissions standards relies on enforcement in removing older, higher-polluting vehicles from the roads. In Essay #2, I characterize Chinese households' demand for transport by estimating the recently-developed, Exact affine Stone index (EASI) demand system on publicly-available data from non-governmental, social surveys. 
Flexible EASI demands are particularly useful in China's rapidly-changing economy and transport system, because they capture ways that income elasticities of demand, and household transport budgets, vary with incomes; with population and road network densities; and with the supply of alternative transport modes. I find transport demand to be highly elastic (ε_x = 1.46) at low incomes, and that the income-elasticity of demand declines but remains greater than unity as incomes rise, so that the share of transport in households' spending rises monotonically from 1.6% to 7.5%; a wider, yet lower, range than in some previous estimates. While no strong effects of city-level factors are identified, these and other non-income effects account for a larger portion of budget share changes than rising incomes. Finally, in Essay #3, I evaluate the predictive performance of the EASI demand system by testing the sensitivity of model fit to the data available for estimation, in comparison with the less flexible, but widely used, Almost Ideal demand system (AIDS). In rapidly-evolving countries such as China, survey data without nationwide coverage can be used to characterize transport systems, but the omission of cities and provinces could bias results. To examine this possibility, I estimate demand systems on data subsets and test their predictions against observations for the withheld fraction. I find that simple EASI specifications slightly outperform AIDS under cross-validation; these offer a ready replacement in standalone and CGE applications. However, a trade-off exists between accuracy and the inclusion of policy-relevant covariates when data omit areas with high values of these variables.
Also, while province-level fixed effects control for unobserved heterogeneity across units that may bias parameter estimates, they increase prediction error in out-of-sample applications, revealing that the influence of local conditions on household transport expenditure varies significantly across China's provinces. The results motivate targeted transport data collection that better spans variation across city types and attributes; and the validation technique aids transport modelers in designing and validating demand specifications for projection and assessment.
by Paul Natsuo Kishimoto.
Ph. D. in Engineering Systems
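The income elasticities reported in the abstract above come from a flexible demand system; the basic mechanics of recovering an elasticity from household data can be illustrated with a constant-elasticity toy model, plain OLS on logs over synthetic households. This is not the EASI system itself (whose elasticities vary with income), just the underlying idea:

```python
import math
import random

def log_log_elasticity(incomes, transport_spending):
    """OLS slope of log(spending) on log(income): the income
    elasticity under a constant-elasticity specification."""
    lx = [math.log(x) for x in incomes]
    ly = [math.log(y) for y in transport_spending]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    cov = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    var = sum((a - mx) ** 2 for a in lx)
    return cov / var

# Synthetic households generated with a true elasticity of 1.4
random.seed(0)
incomes = [random.uniform(10_000, 100_000) for _ in range(500)]
spending = [0.001 * x ** 1.4 for x in incomes]
eps = log_log_elasticity(incomes, spending)  # recovers 1.4 (data are noiseless)
```

An elasticity above 1, as here, means the transport budget share rises with income, which is the pattern the essay documents for China.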
Divak, Martin. "Simulated SAR with GIS data and pose estimation using affine projection". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-66303.
Gentle, David John. "Tomographic image reconstruction from incomplete projection data with application to industry". Thesis, University of Surrey, 1990. http://epubs.surrey.ac.uk/842931/.
Badcock, Julie. "Projection methods for use in the analysis of multivariate process data". Thesis, University of Exeter, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.272980.
Weingessel, Andreas, Martin Natter, and Kurt Hornik. "Using independent component analysis for feature extraction and multivariate data projection". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/1424/1/document.pdf.
Texto completoSeries: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
Maguire, Ralph Paul. "Application of pharmacokinetic models to projection data in positron emission tomography". Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/844467/.
Landgraf, Andrew J. "Generalized Principal Component Analysis: Dimensionality Reduction through the Projection of Natural Parameters". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437610558.
Vamulapalli, Harika Rao. "On Dimensionality Reduction of Data". ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1211.
Texto completoChen, Mingqing. "Development of a diaphragm tracking algorithm for megavoltage cone beam CT projection data". Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/228.
Texto completoMalla, Noor. "Partitioning XML data, towards distributed and parallel management". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112154/document.
With the widespread diffusion of XML as a format for representing data generated and exchanged over the Web, a number of query and update engines have been designed and implemented over the last decade. One class of engine playing a crucial role in many applications is the « main-memory » system, distinguished by the fact that it is easy to manage and to integrate in a programming environment. On the other hand, main-memory systems have scalability issues, as they load the entire document in main memory before processing. This thesis presents an XML partitioning technique that allows main-memory engines to process a class of XQuery expressions (queries and updates), which we dub « iterative », on arbitrarily large input documents. We provide a static analysis technique to recognize these expressions. The static analysis is based on paths extracted from the expression and does not need additional schema information. We provide algorithms using path information for partitioning the input documents, so that the query or update can be evaluated separately on each part in order to compute the final result. These algorithms admit a streaming implementation, whose effectiveness is experimentally validated. Besides enabling scalability, our approach is also characterized by the fact that it is easily implementable in a MapReduce framework, thus enabling parallel query/update evaluation on the partitioned data.
Schäfer, Matthias Jörg [Verfasser]. "Visual Analytics for Improving Exploration and Projection of Multi-Dimensional Data / Matthias Jörg Schäfer". Konstanz : Bibliothek der Universität Konstanz, 2015. http://d-nb.info/1079391789/34.
Zeng, Xubin, and Kerrie Geil. "Global warming projection in the 21st century based on an observational data-driven model". AMER GEOPHYSICAL UNION, 2016. http://hdl.handle.net/10150/622341.
Mueller, Klaus. "Fast and accurate three-dimensional reconstruction from cone-beam projection data using algebraic methods /". The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487950658545496.
Texto completoCoimbra, Danilo Barbosa. "Multidimensional projections for the visual exploration of multimedia data". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-11112016-184130/.
Texto completoO advento contínuo de novas tecnologias tem criado um tipo rico e crescente de fontes de informação disponíveis para análise e investigação. Neste contexto, a análise de dados multidi- mensional é consideravelmente importante quando se lida com grandes e complexos conjuntos de dados. Dentre as possibilidades ao analisar esses tipos de dados, a aplicação de técnicas de visualização pode auxiliar o usuário a encontrar e entender os padrões, tendências e estabelecer novas metas. Alguns exemplos de aplicações de visualização de análise de dados multidimen- sionais vão de classificação de imagens, nuvens semântica de palavras, e análise de grupos de coleção de documentos, à exploração de conteúdo multimídia. Esta tese apresenta vários métodos de visualização para explorar de forma interativa conjuntos de dados multidimensionais que visam de usuários especializados aos casuais, fazendo uso de ambas representações estáticas e dinâmicas criadas por projeções multidimensionais. Primeiramente, apresentamos uma técnica de projeção multidimensional que preserva fielmente distância e que pode lidar com qualquer tipo de dados com alta-dimensionalidade, demonstrando cenários de aplicações em ambos os casos de multimídia e coleções de documentos de texto. Em seguida, abordamos a tarefa de interpretar as projeções em 2D, calculando erros de vizinhança. Posteriormente, apresentamos um conjunto de visualizações interativas que visam ajudar os usuários com essas tarefas, revelando a qualidade de uma projeção em 3D, aplicadas em diferentes cenários de alta dimensionalidade. Na parte final, discutimos duas abordagens diferentes para obter percepções sobre dados multimídia, em particular vídeos de futebol. Enquanto a primeira abordagem utiliza projeções multidimensionais, a segunda faz uso de uma eficiente metáfora visual para auxiliar usuários não especialistas em navegar e obter conhecimento em partidas de futebol.
Böckmann, Christine, and Janos Sarközi. "The ill-posed inversion of multiwavelength lidar data by a hybrid method of variable projection". Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2007/1484/.
Texto completoChavez, Daniel. "Parallelizing Map Projection of Raster Data on Multi-core CPU and GPU Parallel Programming Frameworks". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190883.
Texto completoKartprojektioner är en central del av geografiska informationssystem och en otalig mängd av kartprojektioner används idag. Omprojiceringen mellan olika kartprojektioner sker regelbundet i ett geografiskt informationssystem och den kan parallelliseras med flerkärniga CPU:er och GPU:er. Denna masteruppsats implementerar en parallel och analytisk omprojicering av rasterdata i C/C++ med ramverken Pthreads, C++11 STL threads, OpenMP, Intel TBB, CUDA och OpenCL. Uppsatsen jämför de olika implementationernas exekveringstider på tre rasterdata av varierande storlek, där OpenMP hade bäst speedup på 6, 6.2 och 5.5. GPU-implementationerna var 293 % snabbare än de snabbaste CPU-implementationerna, där profileringen visar att de senare spenderade mest tid på trigonometriska funktioner. Resultaten visar att GPU:n är bäst lämpad för omprojicering av rasterdata, medan OpenMP är den snabbaste inom CPU ramverken.
Dudziak, William James. "PRESENTATION AND ANALYSIS OF A MULTI-DIMENSIONAL INTERPOLATION FUNCTION FOR NON-UNIFORM DATA: MICROSPHERE PROJECTION". University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1183403994.
Texto completoSalter, James Martin. "Uncertainty quantification for spatial field data using expensive computer models : refocussed Bayesian calibration with optimal projection". Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/30114.
Texto completoLlerena, Soledad Espezua. "Redução dimensional de dados de alta dimensão e poucas amostras usando Projection Pursuit". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/18/18153/tde-10102013-150240/.
Texto completoReducing the dimension of datasets is an important step in pattern recognition and machine learning processes. PP has emerged as a relevant technique for that purpose. PP aims to find projections of the data in low dimensional spaces where interesting structures are revealed. Despite the success of PP in many dimension reduction problems, the literature shows a limited application of it in dataset with large amounts of features and few samples, such as those obtained in molecular biology. In this work we study ways to take advantage of the potential of PP in order to deal with problems of large dimensionalities and few samples. Among the main contributions of this work are: i) SPPM, an improved method for searching projections, based on a genetic algorithm and specialized crossover operators; and ii) Block-SPPM and W-SPPM, two strategies of applying SPPM in problems with more attributes than samples. The first strategy is based on partitioning the attribute space while the later is based on a precompaction of the data followed by a projection search. Experimental evaluations over public gene-expression datasets showed the efficacy of the proposals in improving the accuracy of popular classifiers with respect to several representative dimension reduction methods, being W-SPPM the strategy with the best compromise between accuracy and computational cost.
Witt, Micah. "Proton Computed Tomography: Matrix Data Generation Through General Purpose Graphics Processing Unit Reconstruction". CSUSB ScholarWorks, 2014. https://scholarworks.lib.csusb.edu/etd/2.
Texto completoEdberg, Alexandra. "Monitoring Kraft Recovery Boiler Fouling by Multivariate Data Analysis". Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230906.
Texto completoDetta arbete handlar om inkruster i sodapannan pa Montes del Plata, Uruguay. Multivariat dataanalys har anvands for att analysera den stora datamangd som fanns tillganglig for att undersoka hur olika parametrar paverkar inkrusterproblemen. Principal·· Component Analysis (PCA) och Partial Least Square Projection (PLS) har i detta jobb anvants. PCA har anvants for att jamfora medelvarden mellan tidsperioder med hoga och laga inkrusterproblem medan PLS har anvants for att studera korrelationen mellan variablema och darmed ge en indikation pa vilka parametrar som kan tankas att andras for att forbattra tillgangligheten pa sodapannan. Resultaten visar att sodapannan tenderar att ha problem med inkruster som kan hero pa fdrdelningen av luft, pa svartlutens tryck eller pa torrhalten i svartluten. Resultaten visar ocksa att multivariat dataanalys ar ett anvandbart verktyg for att analysera dessa typer av inkrusterproblem.
Fiterau, Madalina. "Discovering Compact and Informative Structures through Data Partitioning". Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/792.
Texto completoGreen, Patrick Corey. "Decision Support for Operational Plantation Forest Inventories through Auxiliary Information and Simulation". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/103054.
Texto completoDoctor of Philosophy
Informed forest management requires accurate, up-to-date information. Groundbased sampling (inventory) is commonly used to generate estimates of forest characteristics such as total wood volume, stem density per unit area, heights, and regeneration survival. As the importance of assessing forest resources has increased, resources are often not available to conduct proper assessments. In this research, the incorporation of ancillary information in planted loblolly pine (Pinus taeda L.) forest inventory was investigated. Additionally, a simulation study investigated the effects of two forest inventory data aggregation methods on predictions and projections of future forest conditions. Forest regeneration surveys are important for assessing conditions immediately after tree planting. An unmanned aircraft system was evaluated for its ability to capture imagery that could be used to automate seedling counting. The imagery was found to be unreliable for use in accurately detecting seedlings in the conditions evaluated. Following establishment, forest conditions are assessed at additional points in forest development. Using a class of statistical estimators known as small-area estimation, a combination of ground and light detection and ranging data generated more confident estimates of forest conditions. Further investigation found that more coarse ancillary information can be used with similar confidence in the conditions evaluated. Forest inventory data are used to generate estimates of future conditions needed for management decisions. The final component of this research found that there are significant differences between two inventory data aggregation strategies when forest conditions are highly spatially variable. The results of this research are of interest to forest managers who regularly assess forest resources with inventories and models. The incorporation of ancillary information has potential to enhance forest resource assessments. 
Further, managers have guidance on strategies for using this information for estimating future conditions.
Needham, Jessica. "Harnessing demographic data for cross-scale analysis of forest dynamics". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:156850fa-3148-45a6-b2f8-ada9dd3f6a7f.
Texto completoSwinson, Michael D. "Statistical Modeling of High-Dimensional Nonlinear Systems: A Projection Pursuit Solution". Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-11232005-204333/.
Texto completoShapiro, Alexander, Committee Member ; Vidakovic, Brani, Committee Member ; Ume, Charles, Committee Member ; Sadegh, Nader, Committee Chair ; Liang, Steven, Committee Member. Vita.
Spreyer, Kathrin. "Does it have to be trees? : Data-driven dependency parsing with incomplete and noisy training data". Phd thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2012/5749/.
Texto completoWir präsentieren eine neuartige Herangehensweise an das Trainieren von daten-gesteuerten Dependenzparsern auf unvollständigen Annotationen. Unsere Parser sind einfache Varianten von zwei bekannten Dependenzparsern, nämlich des transitions-basierten Malt-Parsers sowie des graph-basierten MST-Parsers. Während frühere Arbeiten zum Parsing mit unvollständigen Daten die Aufgabe meist in Frameworks für unüberwachtes oder schwach überwachtes maschinelles Lernen gebettet haben, behandeln wir sie im Wesentlichen mit überwachten Lernverfahren. Insbesondere schlagen wir "agnostische" Parser vor, die jegliche Fragmentierung der Trainingsdaten vor ihren daten-gesteuerten Lernkomponenten verbergen. Wir stellen Versuchsergebnisse mit Trainingsdaten vor, die mithilfe von Annotationsprojektion gewonnen wurden. Annotationsprojektion ist ein Verfahren, das es uns erlaubt, innerhalb eines Parallelkorpus Annotationen von einer Sprache auf eine andere zu übertragen. Bedingt durch begrenzten crosslingualen Parallelismus und fehleranfällige Wortalinierung ist die Ausgabe des Projektionsschrittes jedoch üblicherweise verrauscht und unvollständig. Gerade dies macht projizierte Annotationen zu einer angemessenen Testumgebung für unsere fragment-fähigen Parser. Unsere Ergebnisse belegen, dass (i) Dependenzparser, die auf großen Mengen von projizierten Annotationen trainiert wurden, größere Genauigkeit erzielen als die zugrundeliegenden direkten Projektionen, und dass (ii) die Genauigkeit unserer agnostischen, fragment-fähigen Parser der Genauigkeit der Originalparser (trainiert auf streng gefilterten, komplett projizierten Bäumen) annähernd gleichgestellt ist. Schließlich zeigen wir mit künstlich fragmentierten Gold-Standard-Daten, dass (iii) der Verlust an Genauigkeit selbst dann bescheiden bleibt, wenn bis zu 50% aller Kanten in den Trainingsdaten fehlen.
Niskanen, M. (Matti). "A visual training based approach to surface inspection". Doctoral thesis, University of Oulu, 2003. http://urn.fi/urn:isbn:9514270673.
Texto completoPatel, Rahul. "Maximum Likelihood – Expectation Maximum Reconstruction with Limited Dataset for Emission Tomography". Akron, OH : University of Akron, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1175781554.
Texto completo"May, 2007." Title from electronic thesis title page (viewed 04/26/2009) Advisor, Dale Mugler; Co-Advisor, Anthony Passalaqua; Committee member, Daniel Sheffer; Department Chair, Daniel Sheffer; Dean of the College, George K. Haritos; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
Bergfors, Linus. "Explorative Multivariate Data Analysis of the Klinthagen Limestone Quarry Data". Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-122575.
Today's quarry planning at Klinthagen is coarse, which provides an opportunity to introduce new methods to improve the quarry's gain and efficiency. Nordkalk AB, active at Klinthagen, wishes to start a new quarry at a nearby location. To exploit future quarries in an efficient manner and ensure production quality, multivariate statistics may help gather important information.
In this thesis the possibilities of the multivariate statistical approaches of Principal Component Analysis (PCA) and Partial Least Squares (PLS) regression were evaluated on the Klinthagen bore data. PCA data were spatially interpolated by Kriging, which also was evaluated and compared to IDW interpolation.
Principal component analysis supplied an overview of the variables relations, but also visualised the problems involved when linking geophysical data to geochemical data and the inaccuracy introduced by lacking data quality.
The PLS regression further emphasised the geochemical-geophysical problems, but also showed good precision when applied to strictly geochemical data.
Spatial interpolation by Kriging did not result in significantly better approximations than the less complex control interpolation by IDW.
In order to improve the information content of the data when modelled by PCA, a more discrete sampling method would be advisable. The data quality may cause trouble, though with today's sampling technique it was considered to be of minor consequence.
To predict a single geophysical component from chemical variables, further geophysical data are needed to complement the existing data and achieve satisfactory PLS models.
The stratified rock composition caused trouble when spatially interpolated. Further investigations should be performed to develop more suitable interpolation techniques.
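The IDW control interpolation that Kriging was compared against can be sketched in a few lines. Power-2 weighting is assumed, and the sample coordinates and grades are invented:

```python
def idw(samples, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from
    (location, value) samples; exact at the sample locations."""
    num = den = 0.0
    for (x, y), value in samples:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value          # exactly on a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * value
        den += w
    return num / den

# Hypothetical bore-hole locations with a measured quality value.
bore = [((0.0, 0.0), 52.1), ((10.0, 0.0), 48.7), ((0.0, 10.0), 50.3)]
grade = idw(bore, (5.0, 5.0))
```

Unlike Kriging, IDW ignores the spatial covariance structure of the samples, which is why it serves as the simple baseline in comparisons like the one above.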
Cho, Jang Ik. "Partial EM Procedure for Big-Data Linear Mixed Effects Model, and Generalized PPE for High-Dimensional Data in Julia". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case152845439167999.
Texto completoCarraher, Lee A. "Approximate Clustering Algorithms for High Dimensional Streaming and Distributed Data". University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1511860805777818.
Texto completoLin, Christie. "Linear regression analysis of 2D projection image data of 6 degrees-of-freedom transformed 3D image sets for stereotactic radiation therapy". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/76969.
Texto completoCataloged from PDF version of thesis.
Includes bibliographical references (p. 104-106).
Patient positioning is crucial to accurate dose delivery during radiation therapy to ensure the proper localization of dose to the target tumor volume. In patient positioning for stereotactic radiation therapy treatment, classical image registration methods are computationally costly and imprecise. We developed an automatic, fast, and robust 2D-3D registration method to improve accuracy and speed of identifying 6 degrees-of-freedom (DoF) transformations during patient positioning for stereotactic radiotherapy by creating a model of characteristic shape distributions to determine the linear relationship between two real-time orthogonal 2D projection images and the 3D volume image. We defined a preprocessed sparse base set of shape distributions that characterize 2D digitally reconstructed radiograph (DRR) images from a range of independent transformations of the volume. The algorithm calculates the 6-DoF transformation of the patient based upon two orthogonal real-time 2D images by correlating the images against the base set. The algorithm has positioning accuracy to at least 1 pixel, equivalent to 0.5098 mm accuracy given this image resolution. The shape distribution of each 2D image is created in MATLAB in an average of 0.017 s. The online algorithm allows for rapid and accurate position matching of the images, providing the transformation needed to align the patient on average in 0.5276 s. The shape distribution algorithm affords speed, robustness, and accuracy of patient positioning during stereotactic radiotherapy treatment for small-order 6-DoF transformations as compared with existing techniques for the quantification of patient setup where both linear and rotational deviations occur. This algorithm also indicates the potential for rapid, high-precision patient positioning from the interpolation and extrapolation of the linear relationships based upon shape distributions. Key words: shape distribution, image registration, patient positioning, radiation therapy
by Christie Lin.
S.M. and S.B.
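The "characteristic shape distributions" underlying this registration method follow the shape-distribution idea of Osada et al.: a histogram of distances between random point pairs serves as a compact, pose-invariant signature of an image or shape. A minimal sketch on point sets, where the bin count and sample count are arbitrary choices:

```python
import math
import random

def shape_distribution(points, n_pairs=2000, n_bins=16, seed=7):
    """D2 shape distribution: normalized histogram of distances
    between randomly chosen point pairs, a translation- and
    rotation-invariant signature of a point set."""
    rng = random.Random(seed)
    dists = [math.dist(rng.choice(points), rng.choice(points))
             for _ in range(n_pairs)]
    dmax = max(dists) or 1.0
    hist = [0] * n_bins
    for d in dists:
        hist[min(int(n_bins * d / dmax), n_bins - 1)] += 1
    return [h / n_pairs for h in hist]

def l1_distance(h1, h2):
    """Compare two signatures; 0 means identical histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

square = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
shifted = [(x + 3.0, y + 2.0) for x, y in square]
# Translating the shape leaves its distance histogram unchanged.
```

Matching a live image's signature against a precomputed base set of signatures, as in the thesis, reduces registration to comparing short histograms rather than full images.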
Rademacher, Eric W. "The Path to Accurate Pre-Election Forecasts: An Analysis of the Impact of Data Adjustment Techniques on Pre-Election Projection Estimates". University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1021921989.
Texto completoRademacher, Eric W. "The path to accurate pre-election forecasts an analysis of the impact of data adjustment techniques on pre-election projection estimates /". Cincinnati, Ohio : University of Cincinnati, 2002. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=ucin1021921989.
Texto completoBlanchard, Pierre. "Fast hierarchical algorithms for the low-rank approximation of matrices, with applications to materials physics, geostatistics and data analysis". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0016/document.
Texto completoAdvanced techniques for the low-rank approximation of matrices are crucial dimension reduction tools in many domains of modern scientific computing. Hierarchical approaches like H2-matrices, in particular the Fast Multipole Method (FMM), benefit from the block low-rank structure of certain matrices to reduce the cost of computing n-body problems to O(n) operations instead of O(n2). In order to better deal with kernels of various kinds, kernel independent FMM formulations have recently arisen such as polynomial interpolation based FMM. However, they are hardly tractable to high dimensional tensorial kernels, therefore we designed a new highly efficient interpolation based FMM, called the Uniform FMM, and implemented it in the parallel library ScalFMM. The method relies on an equispaced interpolation grid and the Fast Fourier Transform (FFT). Performance and accuracy were compared with the Chebyshev interpolation based FMM. Numerical experiments on artificial benchmarks showed that the loss of accuracy induced by the interpolation scheme was largely compensated by the FFT optimization. First of all, we extended both interpolation based FMM to the computation of the isotropic elastic fields involved in Dislocation Dynamics (DD) simulations. Second of all, we used our new FMM algorithm to accelerate a rank-r Randomized SVD and thus efficiently generate multivariate Gaussian random variables on large heterogeneous grids in O(n) operations. Finally, we designed a new efficient dimensionality reduction algorithm based on dense random projection in order to investigate new ways of characterizing the biodiversity, namely from a geometric point of view
Carvalho, Edigleison Francelino. "Probabilistic incremental learning for image recognition : modelling the density of high-dimensional data". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/90429.
Full text
Nowadays several sensory systems provide data in flows, and these measured observations are frequently high-dimensional, i.e., the number of measured variables is large and the observations arrive in a sequence. This is in particular the case of robot vision systems. Unsupervised and supervised learning with such data streams is challenging, because the algorithm should be capable of learning from each observation and then discarding it before considering the next one, but several methods require the whole dataset in order to estimate their parameters and are therefore not suitable for online learning. Furthermore, many approaches suffer from the so-called curse of dimensionality (BELLMAN, 1961) and cannot handle high-dimensional input data. To overcome the problems described above, this work proposes a new probabilistic and incremental neural network model, called Local Projection Incremental Gaussian Mixture Network (LP-IGMN), which is capable of performing life-long learning with high-dimensional data, i.e., it can learn continuously, considering the stability of the current model's parameters, and automatically adjust its topology taking into account the subspace boundary found by each hidden neuron. The proposed method can find the intrinsic subspace where the data lie, which is called the principal subspace. Orthogonal to the principal subspace are the dimensions that are noisy or carry little information, i.e., with small variance, and they are described by a single estimated parameter. Therefore, LP-IGMN is robust to different sources of data and can deal with a large number of noisy and/or irrelevant variables in the measured data. To evaluate LP-IGMN we conducted several experiments using simulated and real datasets. We also demonstrated several applications of our method in image recognition tasks.
The results have shown that the LP-IGMN performance is competitive with, and usually superior to, other state-of-the-art approaches, and it can be successfully used in applications that require life-long learning in high-dimensional spaces.
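The one-pass constraint discussed above (learn from each observation, then discard it) can be illustrated with a generic incremental estimator of per-dimension Gaussian parameters; this is only a sketch of the streaming idea, not the LP-IGMN model itself:

```python
import numpy as np

class OnlineGaussian:
    """Incrementally estimates per-dimension mean and variance from a stream
    using Welford's algorithm: each observation is used once and discarded.
    Generic illustration only, not the LP-IGMN algorithm."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)        # running sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        return self.m2 / max(self.n - 1, 1)

rng = np.random.default_rng(0)
est = OnlineGaussian(3)
data = rng.normal(loc=2.0, scale=0.5, size=(10000, 3))
for x in data:
    est.update(x)                      # one pass, no storage of past samples
print(est.mean, est.variance())
```

Welford's update is numerically stable and reproduces the batch mean and (ddof=1) variance without ever holding the dataset in memory.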
Etemadpour, Ronak [Verfasser], Lars [Akademischer Betreuer] Linsen, Bettina [Akademischer Betreuer] Olk, Rosane [Akademischer Betreuer] Minghim and Eric [Akademischer Betreuer] Monson. "Human Perception in Using Projection Methods for Multidimensional Data Visualization / Ronak Etemadpour. Betreuer: Lars Linsen. Gutachter: Lars Linsen ; Bettina Olk ; Rosane Minghim ; Eric Monson". Bremen: IRC-Library, Information Resource Center der Jacobs University Bremen, 2013. http://d-nb.info/1087274915/34.
Full text
Hamilton, Lei Hou. "Reduced-data magnetic resonance imaging reconstruction methods: constraints and solutions". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42707.
Full text
Benmoussat, Mohammed Seghir. "Hyperspectral imagery algorithms for the processing of multimodal data : application for metal surface inspection in an industrial context by means of multispectral imagery, infrared thermography and stripe projection techniques". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4347/document.
Full text
The work presented in this thesis deals with the quality control and inspection of industrial metallic surfaces. The purpose is the generalization and application of hyperspectral imagery methods to multimodal data such as multi-channel optical images and multi-temporal thermographic images. In the first application, data cubes are built from multi-component images to detect surface defects within flat metallic parts. The best performances are obtained with multi-wavelength illumination in the visible and near-infrared ranges, and detection using the spectral angle mapper with the mean spectrum as a reference. The second application turns to the use of thermographic imaging for the inspection of nuclear metal components to detect surface and subsurface defects. A 1D approach is proposed, based on using the kurtosis to select one principal component (PC) from the first PCs obtained after reducing the original data cube with the principal component analysis (PCA) algorithm. The proposed PCA-1PC method gives good performance with non-noisy and homogeneous data, while SVD with anomaly detection algorithms gives the most consistent results and is quite robust to perturbations such as an inhomogeneous background. Finally, an approach based on fringe analysis and structured light techniques in the case of deflectometric recordings is presented for the inspection of free-form metal surfaces. After determining the parameters describing the sinusoidal stripe patterns, the proposed approach consists in projecting a list of phase-shifted patterns and calculating the corresponding phase images. Defect location is based on detecting and analyzing the stripes within the phase images.
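The phase-shifted pattern projection described in this abstract can be illustrated with the classic four-step phase-shifting scheme on synthetic fringes; the thesis' actual deflectometric pipeline is more involved, so this is only a minimal sketch:

```python
import numpy as np

# Four-step phase shifting: record four images of a sinusoidal stripe pattern
# shifted by pi/2 and recover the wrapped phase with an arctangent formula.
# The scene below is synthetic (flat surface, ideal camera).
H, W = 64, 64
x = np.linspace(0.0, 4 * 2 * np.pi, W)              # four fringe periods
phase_true = np.tile(x, (H, 1)) % (2 * np.pi)
A, B = 0.5, 0.4                                      # background and modulation
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [A + B * np.cos(phase_true + d) for d in shifts]

# I4 - I2 = 2B sin(phi), I1 - I3 = 2B cos(phi), so:
phase = np.arctan2(I4 - I2, I1 - I3)                 # wrapped phase in (-pi, pi]
```

On real surfaces, defects distort the recovered phase image, which is what enables their detection in the stripes.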
Medvedev, Viktor. "Tiesioginio sklidimo neuroninių tinklų taikymo daugiamačiams duomenims vizualizuoti tyrimai". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080204_162347-54385.
Full text
The research area of this work is the analysis of multidimensional data and ways of improving the apprehension of such data. Data apprehension is a rather complicated problem, especially if the data refer to a complex object or phenomenon described by many parameters. The research object of the dissertation is artificial neural networks for multidimensional data projection. General topics related to this object: multidimensional data visualization; dimensionality-reduction algorithms; errors of data projection; the projection of new data; strategies for retraining the neural network that visualizes multidimensional data; optimization of the control parameters of the neural network for multidimensional data projection; parallel computing. The key aim of the work is to develop and improve methods to efficiently minimize visualization errors of multidimensional data by using artificial neural networks. The results of the research are applied in solving some practical problems. Human physiological data that describe the human functional state have been investigated.
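One common way to quantify the projection errors mentioned above is Sammon's stress between pairwise distances before and after projection; a minimal sketch, not tied to the dissertation's specific neural networks:

```python
import numpy as np

def sammon_stress(X, Y):
    """Sammon's projection error between high-dimensional points X and their
    low-dimensional images Y (row i of Y is the projection of row i of X).
    Smaller is better; 0 means all pairwise distances are preserved."""
    def pdists(Z):
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        return d[np.triu_indices(len(Z), k=1)]   # upper-triangle distances
    dx, dy = pdists(X), pdists(Y)
    return np.sum((dx - dy) ** 2 / dx) / np.sum(dx)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
print(sammon_stress(X, X[:, :2]))   # naive projection onto first 2 coordinates
print(sammon_stress(X, X))          # identical distances -> stress 0.0
```

Projection networks such as those studied in the dissertation are typically trained to drive exactly this kind of stress measure toward zero.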
Goto, Daniela Bento Fonsechi. "Estimação de maxima verossimilhança para processo de nascimento puro espaço-temporal com dados parcialmente observados". [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306192.
Full text
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: The goal of this work is to study maximum likelihood estimation for a spatial pure birth process under two different sampling schemes: a) permanent observation in a fixed time interval [0, T]; b) observation of the process only after a fixed time T. Under scheme b) we do not know the birth times, so we have a missing-data problem. We can write the likelihood function for the nonhomogeneous pure birth process on a compact set through the method of projection described by Garcia and Kurtz (2008), as the projection of the likelihood function. The fact that the projected likelihood can be interpreted as an expectation suggests that Monte Carlo methods can be used to compute estimators. Results on almost-sure convergence and convergence in distribution are obtained for the approximants to the maximum likelihood estimator. Simulation studies show that the approximants are appropriate.
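The idea that a likelihood interpretable as an expectation can be approximated by Monte Carlo and then maximized can be shown on a toy missing-data model; this is only a generic illustration (a homogeneous Poisson count observed at time T), not the Garcia-Kurtz projection method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: we observe only the number N of events of a rate-theta Poisson
# process on [0, T]; the event times themselves are missing.
T, theta_true = 10.0, 2.0
n_obs = rng.poisson(theta_true * T)

def mc_log_likelihood(theta, n, T, n_sim=200_000, rng=rng):
    # L(theta) = P(N = n) written as E[1{N = n}] and estimated as a
    # Monte Carlo frequency over simulated replicates.
    sims = rng.poisson(theta * T, size=n_sim)
    return np.log(np.mean(sims == n) + 1e-12)

# Maximize the Monte Carlo likelihood over a parameter grid.
grid = np.linspace(0.5, 4.0, 36)
theta_hat = grid[np.argmax([mc_log_likelihood(t, n_obs, T) for t in grid])]
print(n_obs, theta_hat)   # theta_hat should be near the exact MLE n_obs / T
```

In this toy case the exact maximum likelihood estimator is n/T, which lets the Monte Carlo approximant be checked directly.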
Master's
Inference in Stochastic Processes
Master in Statistics
Pagliosa, Lucas de Carvalho. "Visualização e exploração de dados multidimensionais na web". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-08042016-103144/.
Full text
With the growing number and variety of data, the need to analyze and understand what they represent and how they are related has become crucial. Visualization techniques based on multidimensional projections have gained space and interest as one of the possible tools to aid this problem, providing a simple and quick way to identify patterns, recognize trends and extract features previously not obvious in the original set. However, the projection of a data set into a smaller space may not be sufficient in some cases to answer or clarify certain questions posed by the user, making posterior analysis of the projection crucial for the exploration and understanding of the data. Thus, interactivity in the visualization, tailored to the user's needs, is an essential factor for analysis. In this context, the main objective of this master's project is to create attribute-based visual metaphors, through statistical measures and artifacts for detecting noise and similar groups, to assist the exploration and analysis of projected data. In addition, it is proposed to make available, in Web browsers, the multidimensional data visualization techniques developed by the Group of Visual and Geometric Processing at ICMC-USP. The development of the project as a Web platform was inspired by the difficulty of installing and running certain visualization projects, mainly due to different versions of IDEs, compilers and operating systems. In addition, making the project available online for execution aims to facilitate access to and dissemination of the proposed techniques for the general public.
Vitale, Raffaele. "Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation". Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90442.
Full text
The present doctoral thesis, conceived mainly to support and reinforce the relationship between academia and industry, was developed in collaboration with Shell Global Solutions (Amsterdam, The Netherlands) in the effort to apply and possibly extend the already consolidated latent-variable-based approaches (i.e., Principal Component Analysis - PCA - Partial Least Squares regression - PLS - or discriminant PLS - PLSDA) to the resolution of complex problems not only in the fields of process improvement and optimization, but also in the broader environment of multivariate data analysis. To this end, in all the chapters we propose new, efficient algorithmic solutions to address disparate tasks, from calibration transfer in spectroscopy to real-time modelling of data streams. The manuscript is divided into the following six parts, focused on various topics of interest: Part I - Preface, where we present a summary of this research work and give its main objectives and justifications together with a brief introduction to PCA, PLS and PLSDA; Part II - On kernel-based extensions of PCA, PLS and PLSDA, where we present the potential of kernel techniques, possibly coupled to specific variants of the recently rediscovered pseudo-sample projection, formulated by the English statistician John C. Gower, and compare their performance with respect to more classical methodologies in four applications to different scenarios: segmentation of Red-Green-Blue (RGB) images, discrimination and monitoring of batch processes, and analysis of mixture designs of experiments; Part III - On the selection of the number of factors in PCA by permutation testing, where we provide an extensive guide on how to accomplish the selection of PCA components by means of permutation tests and a complete illustration of an original algorithmic procedure implemented for this purpose; Part IV - On modelling common and distinctive sources of variability in multi-set data analysis, where we discuss several practical aspects of the analysis of common and distinctive components of two data blocks (carried out by methods such as Simultaneous Component Analysis - SCA - Distinctive and Common Simultaneous Component Analysis - DISCO-SCA - Adapted Generalized Singular Value Decomposition - Adapted GSVD - ECO-POWER, Canonical Correlation Analysis - CCA - and 2-block Orthogonal Projections to Latent Structures - O2PLS). We also present a new computational strategy for determining the number of common factors underlying two data matrices sharing the same row or column dimension, and two novel approaches for calibration transfer between near-infrared spectrometers; Part V - On real-time processing and modelling of high-dimensional data streams, where we design the On-The-Fly Processing (OTFP) tool, a new system for the rational handling of multi-channel measurements recorded in real time; Part VI - Epilogue, where we present the final conclusions, outline future perspectives, and include the annexes.
Vitale, R. (2017). Novel chemometric proposals for advanced multivariate data analysis, processing and interpretation [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90442