Dissertations on the topic "Computer animation – Data processing"
Format your citation in APA, MLA, Chicago, Harvard, and other styles
Explore the top 50 dissertations for research on the topic "Computer animation – Data processing."
Next to each work in the list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the corresponding details are available in the metadata.
Browse dissertations across a variety of disciplines and compile your bibliography correctly.
Stainback, Pamela Barth. "Computer animation : the animation capabilities of the Genigraphics 100C /." Online version of thesis, 1990. http://hdl.handle.net/1850/11460.
Yun, Hee Cheol. "Compression of computer animation frames." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13070.
Marshall, Dana T. "The exploitation of image construction data and temporal/image coherence in ray traced animation /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008386.
Winnemöller, Holger. "Implementing non-photorealistic rendering enhancements with real-time performance." Thesis, Rhodes University, 2002. http://hdl.handle.net/10962/d1003135.
Повний текст джерелаKMBT_363
Adobe Acrobat 9.54 Paper Capture Plug-in
Judice, Sicilia Ferreira Ponce Pasini. "Animação de Fluidos via Modelos do Tipo Lattice Gas e Lattice Boltzmann." Laboratório Nacional de Computação Científica, 2009. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=181.
Physically based techniques for the animation of fluids (gases or liquids) have attracted the attention of the computer graphics community. Traditional fluid animation methods take a top-down viewpoint, using 2D/3D mesh-based approaches motivated by the Eulerian Finite Element (FE) and Finite Difference (FD) methods in conjunction with the Navier-Stokes equations. Alternatively, lattice methods, comprising the Lattice Gas Cellular Automata (LGCA) and the Lattice Boltzmann Method (LBM), can be used. The basic idea behind these methods is that the macroscopic dynamics of a fluid result from the collective behavior of many microscopic particles. Such bottom-up approaches require modest computational resources, in both memory and computation. In this work, we consider fluid animation for computer graphics applications using an LGCA method called FHP and an LBM method called D2Q9, both two-dimensional models. We propose 3D fluid animation techniques based on FHP and D2Q9 as well as interpolation methods. We then present two animation frameworks based on these lattice methods, one for real-time and one for off-line use. In the experimental results, we emphasize the simplicity and power of the presented models when combined with efficient rendering techniques, and we compare their efficiency.
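The bottom-up lattice idea this abstract describes can be made concrete with a toy D2Q9 update. The sketch below is a generic textbook BGK lattice-Boltzmann step in NumPy with periodic boundaries, not code from the dissertation; the grid size, relaxation time `tau`, and all names are illustrative assumptions.

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights.
c = np.array([(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1),
              (1, 1), (-1, 1), (-1, -1), (1, -1)])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Maxwell-Boltzmann equilibrium distribution, truncated to second order."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return rho * w[:, None, None] * (1 + cu + 0.5 * cu**2 - usq)

def lbm_step(f, tau=0.6):
    """One D2Q9 step: streaming along each lattice direction, then BGK collision."""
    # Streaming: shift each population along its velocity (periodic boundaries).
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    # Macroscopic moments (density and velocity).
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK relaxation toward equilibrium; mass is conserved exactly.
    f += (equilibrium(rho, ux, uy) - f) / tau
    return f, rho
```

Both streaming and collision conserve total mass, which is the invariant that makes these bottom-up schemes stable and cheap to check.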
Goes, Fernando Ferrari de. "Analise espectral de superficies e aplicações em computação grafica." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275916.
Master's dissertation – Universidade Estadual de Campinas, Instituto de Computação
Abstract: Many applications in computer graphics consist of the analysis and manipulation of surface geometry. The Laplace-Beltrami operator has eigenvalues and eigenfunctions that characterize the geometry of manifolds, providing powerful tools for geometry processing. In this dissertation, we review the spectral properties of the Laplace-Beltrami operator and apply them in computer graphics. In particular, we introduce new approaches to the problems of semantic segmentation and atlas generation on surfaces.
Master's degree in Computer Science (Computer Graphics)
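For readers unfamiliar with the spectral machinery this abstract refers to, the sketch below computes the eigendecomposition of a uniform graph Laplacian, a common discrete stand-in for the Laplace-Beltrami operator, and uses the Fiedler vector for a crude two-way spectral split. It is a generic illustration, not the dissertation's method, which operates on surface meshes (typically with cotangent weights); the ring graph and all names are assumptions for the example.

```python
import numpy as np

def graph_laplacian(n_vertices, edges):
    """Uniform graph Laplacian L = D - A over a vertex graph,
    a common discrete stand-in for the Laplace-Beltrami operator."""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

# Eigenvalues act as "frequencies" on the surface; low-frequency
# eigenvectors (the Fiedler vector in particular) drive spectral segmentation.
edges = [(i, (i + 1) % 8) for i in range(8)]  # an 8-vertex ring as a toy "surface"
L = graph_laplacian(8, edges)
evals, evecs = np.linalg.eigh(L)              # ascending; evals[0] is ~0
fiedler = evecs[:, 1]                         # second-smallest eigenvector
segment = fiedler > 0                         # two-way spectral split
```

The smallest eigenvalue is always zero (constant eigenvector); the next ones encode increasingly fine geometric detail, which is what makes them usable for segmentation and parameterization.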
John, Nigel William. "Techniques for planning computer animation." Thesis, University of Bath, 1989. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329568.
Luther, Kurt. "Supporting and transforming leadership in online creative collaboration." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45822.
Hunter, Jane Louise. "Integrated sound synchronisation for computer animation." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239569.
Whited, Brian Scott. "Tangent-ball techniques for shape processing." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31670.
Committee Chair: Jarek Rossignac; Committee Member: Greg Slabaugh; Committee Member: Greg Turk; Committee Member: Karen Liu; Committee Member: Maryann Simmons. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Caplan, Elizabeth A. "The Effects Of Animated Textual Instruction On Learners' Written Production Of German Modal Verb Sentences." Scholar Commons, 2002. http://purl.fcla.edu/fcla/etd/SFE0000042.
Loizidou, Stephania M. "Dynamic analysis of anthropomorphic manipulators in computer animation." Thesis, London Metropolitan University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314630.
Madhavavapeddy, Neganand. "A methodology for the real-time computer animation of articulated structures." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268214.
Chuang, Yung-Yu. "New models and methods for matting and compositing /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6849.
Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.
Hudson, James. "Processing large point cloud data in computer graphics." Connect to this title online, 2003. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1054233187.
Title from first page of PDF file. Document formatted into pages; contains xix, 169 p.; also includes graphics (some col.). Includes bibliographical references (p. 159-169). Available online via OhioLINK's ETD Center.
Fernandez, Noemi. "Statistical information processing for data classification." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.
Nar, Selim. "A Virtual Human Animation Tool Using Motion Capture Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609683/index.pdf.
Golab, Lukasz. "Sliding Window Query Processing over Data Streams." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2930.
This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time, therefore there exist opportunities for resource sharing among similar queries.
The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that the knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, as well as novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times, therefore synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.
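The insert/expire patterns this abstract exploits, where tuples arrive and leave a sliding window in predictable order so results can be incrementally "undone", can be illustrated with a minimal time-based aggregate. This is a generic sketch, not code from the dissertation; the window length and names are illustrative, and tuples are assumed to arrive in timestamp order.

```python
from collections import deque

class SlidingWindowSum:
    """Time-based sliding-window aggregate that incrementally subtracts
    expired tuples instead of recomputing the sum from scratch."""
    def __init__(self, window_size):
        self.window_size = window_size
        self.items = deque()   # (timestamp, value), in arrival order
        self.total = 0

    def insert(self, timestamp, value):
        # Insertion pattern: new tuples always enter at the window's front.
        self.items.append((timestamp, value))
        self.total += value
        self._expire(timestamp)

    def _expire(self, now):
        # Deletion pattern: tuples expire strictly in arrival order, so
        # only the oldest end of the window ever needs checking.
        while self.items and self.items[0][0] <= now - self.window_size:
            _, old = self.items.popleft()
            self.total -= old   # "undo" the expired tuple's contribution
```

Because both ends of the window move monotonically, maintenance is O(1) amortized per tuple, which is the kind of pattern-driven saving the dissertation generalizes.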
Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
Battle, Timothy P. "A computer-aided design scheme for drainage and runoff systems." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25077.
Faculty of Applied Science, Department of Civil Engineering, Graduate.
Lau, Yan Nam. "Adaptive approach to triangulation of isosurface in volume data /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20LAUY.
Includes bibliographical references (leaves 91-95). Also available in electronic version. Access restricted to campus users.
Derksen, Timothy J. (Timothy John). "Processing of outliers and missing data in multivariate manufacturing data." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38800.
Includes bibliographical references (leaf 64).
by Timothy J. Derksen.
M.Eng.
Bao, Shunxing. "Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing." Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.
Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., Network file system file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. Results for laboratory-based approaches reveal that performance is impeded by standard network switches, since typical processing can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. On the other hand, the grid may be costly to use due to the dedicated resources used to execute the tasks and the lack of elasticity. With the increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.
Despite this promise, our studies have revealed that existing big data frameworks exhibit different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Because traditional software technologies and platforms execute the analysis stages sequentially, any errors in the pipeline are detected only at the later stages, even though the sources of error predominantly lie in the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework, Hadoop & HBase for Medical Image Processing (HadoopBase-MIP), which develops a range of performance optimization algorithms and models a number of system behaviors for data storage, data access, and data processing. We also describe prototypes built to support empirical verification of these system behaviors. Furthermore, we report a discovery made during the development of HadoopBase-MIP: a new type of contrast for enhancing deep brain structures in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercial big data / high-performance computing cluster with a cheap, scalable, and geographically distributed file system.
Lian, Xiang. "Efficient query processing over uncertain data /." View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20LIAN.
Da, Yanan. "A Big Spatial Data System for Efficient and Scalable Spatial Data Processing." Thesis, Southern Illinois University at Edwardsville, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10682760.
Today, a large amount of spatial data is generated from a variety of sources, such as mobile devices, sensors, and satellites. Traditional spatial data processing techniques no longer satisfy the efficiency and scalability requirements for large-scale spatial data processing. Existing Big Data processing frameworks such as Hadoop and Spark have been extended to support effective large-scale spatial data processing. In addition to processing data in distributed schemes utilizing computer clusters for efficiency and scalability, single node performance can also be improved by making use of multi-core processors. In this thesis, we investigate approaches to parallelize line segment intersection algorithms for spatial computations on multi-core processors, which can be used as node-level algorithms for distributed spatial data processing. We first provide our design of line segment intersection algorithms and introduce parallelization techniques. Then, we describe experimental results using multiple data sets and speed ups are examined with varying numbers of processing cores. Equipped with the efficient underlying algorithm for spatial computation, we investigate how to build a native big spatial data system from the ground up. We provide a system design for distributed large-scale spatial data management and processing using a two-level hash based Quadtree index as well as algorithms for spatial operations.
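The node-level primitive discussed in this abstract, testing whether two line segments intersect, can be sketched with the classic orientation predicate; a pairwise check over a partition of the candidate pairs is then straightforward to spread across cores. This is the standard textbook test, not the dissertation's parallel algorithm, and the function names are illustrative.

```python
def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right turn, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    """Given p, q, r collinear, test whether r lies within the bounding box of pq."""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s1, s2):
    """Orientation-based segment intersection test, including touching and
    collinear cases. Independent pairs can be checked on separate cores,
    e.g. by splitting the candidate-pair list into per-core chunks."""
    (p1, q1), (p2, q2) = s1, s2
    d1, d2 = orient(p2, q2, p1), orient(p2, q2, q1)
    d3, d4 = orient(p1, q1, p2), orient(p1, q1, q2)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)) and 0 not in (d1, d2, d3, d4):
        return True  # proper crossing
    if d1 == 0 and on_segment(p2, q2, p1): return True
    if d2 == 0 and on_segment(p2, q2, q1): return True
    if d3 == 0 and on_segment(p1, q1, p2): return True
    if d4 == 0 and on_segment(p1, q1, q2): return True
    return False
```

With integer coordinates the predicate is exact; with floats, robust spatial systems typically substitute an adaptive-precision orientation test.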
JULARDZIJA, MIRHET. "Processing RAW image data in mobile units." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-27724.
Wang, Jiayin. "Building Efficient Large-Scale Big Data Processing Platforms." Thesis, University of Massachusetts Boston, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10262281.
In the era of big data, many cluster platforms and resource management schemes are created to satisfy the increasing demands on processing a large volume of data. A general setting of big data processing jobs consists of multiple stages, and each stage represents a generally defined data operation such as filtering and sorting. To parallelize the job execution in a cluster, each stage includes a number of identical tasks that can be concurrently launched at multiple servers. Practical clusters often involve hundreds or thousands of servers processing a large batch of jobs. Resource management, which governs cluster resource allocation and job execution, is extremely critical for system performance.
Generally speaking, there are three main challenges in resource management of the new big data processing systems. First, while there are various pending tasks from different jobs and stages, it is difficult to determine which ones deserve the priority to obtain the resources for execution, considering the tasks' different characteristics such as resource demand and execution time. Second, there exists dependency among the tasks that can be concurrently running. For any two consecutive stages of a job, the output data of the former stage is the input data of the later one. The resource management has to comply with such dependency. The third challenge is the inconsistent performance of the cluster nodes. In practice, the run-time performance of every server varies. The resource management needs to dynamically adjust the resource allocation according to the performance change of each server.
The resource management in the existing platforms and prior work often relies on fixed user-specific configurations, and assumes consistent performance in each node. The performance, however, is not satisfactory under various workloads. This dissertation aims to explore new approaches to improving the efficiency of large-scale big data processing platforms. In particular, the run-time dynamic factors are carefully considered when the system allocates the resources. New algorithms are developed to collect run-time data and predict the characteristics of jobs and the cluster. We further develop resource management schemes that dynamically tune the resource allocation for each stage of every running job in the cluster. New findings and techniques in this dissertation will certainly provide valuable and inspiring insights to other similar problems in the research community.
Li, Quanzhong. "Indexing and path query processing for XML data." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/290141.
曾偉明 and Wai-ming Peter Tsang. "Computer aided ultrasonic flaw detection and characterization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31231007.
Grinman, Alex J. "Natural language processing on encrypted patient data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113438.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
While many industries can benefit from machine learning techniques for data analysis, they often do not have the technical expertise nor computational power to do so. Therefore, many organizations would benefit from outsourcing their data analysis. Yet, stringent data privacy policies prevent outsourcing sensitive data and may stop the delegation of data analysis in its tracks. In this thesis, we put forth a two-party system where one party capable of powerful computation can run certain machine learning algorithms from the natural language processing domain on the second party's data, where the first party is limited to learning only specific functions of the second party's data and nothing else. Our system provides simple cryptographic schemes for locating keywords, matching approximate regular expressions, and computing frequency analysis on encrypted data. We present a full implementation of this system in the form of an extensible software library and a command line interface. Finally, we discuss a medical case study where we used our system to run a suite of unmodified machine learning algorithms on encrypted free-text patient notes.
by Alex J. Grinman.
M. Eng.
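The keyword-location capability described in the abstract above can be illustrated with a toy deterministic-token scheme: the client derives a pseudorandom token per word with a keyed HMAC, and the server matches tokens without ever seeing plaintext. This sketch is deliberately much weaker than the thesis's actual constructions (deterministic tokens leak word-frequency patterns), and all function names and the sample note are illustrative assumptions.

```python
import hashlib
import hmac

def keyword_token(key: bytes, word: str) -> bytes:
    """Deterministic per-word token: HMAC-SHA256 of the normalized word."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).digest()

def encrypt_document(key: bytes, text: str) -> list:
    """The client uploads only tokens; the raw words never leave the client."""
    return [keyword_token(key, w) for w in text.split()]

def search(tokens: list, trapdoor: bytes) -> list:
    """The server returns keyword positions while seeing only pseudorandom tokens."""
    return [i for i, tok in enumerate(tokens) if tok == trapdoor]
```

Without the key, the server cannot invert a token back to its word; with it, the client can issue a "trapdoor" for any keyword of interest.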
Westlund, Kenneth P. (Kenneth Peter). "Recording and processing data from transient events." Thesis, Massachusetts Institute of Technology, 1988. https://hdl.handle.net/1721.1/129961.
Includes bibliographical references.
by Kenneth P. Westlund Jr.
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Yuen, Jeanne Y. Y. "Computer Go-Muku." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=64063.
Parker, Greg. "Robust processing of diffusion weighted image data." Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/61622/.
Scheffler, Carl. "Articulated structure from motion." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=init_2988_1177923873.
Tuft, David O. "System for collision detection between deformable models built on axis aligned bounding boxes and GPU based culling /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1689.pdf.
Allander, Karl, Jim Svanberg, and Mattias Klittmark. "Intresseväckande Animation : Utställningsmaterial för Mälsåkerprojektet." Thesis, Mälardalen University, Department of Innovation, Design and Product Development, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-703.
During the summer of 2006, Mälsåker Castle opens to the public, and one of the planned projects is a multimedia exhibition about the Norwegians who were secretly trained there during the Second World War.
The authors of this report are all students of information design, specializing in illustration, at Mälardalen University. Since we have a strong interest in animation, this project seemed very appealing.
We were part of a project group comprising computer scientists, text designers, and illustrators. The illustrators' part of the project was to create the visual material in an engaging way. The report examines possible solutions for animation, style, visual dramaturgy, and the technical conditions required for a successful end result. It describes the approach taken to reach these goals, including methods such as literature studies, discussions, and user testing. The exhibition format is experimental, and the work has therefore produced a result that can be regarded as a case study in itself.
Testing of the finished material shows that, given the conditions, we achieved a good result. In this context, an animated slideshow works well as informative exhibition material.
Heller, Kelly K. "Satisfiability checking for quality assurance in relational data processing." Thesis, California State University, Long Beach, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1524200.
Developers using the SQL language lack relational SQL debugging tools. SQL also suffers from a lack of sufficient automated frameworks for unit testing and regression testing of queries. This thesis begins with an SQL bug taxonomy. The focus is on Core SQL as defined in the SQL:1999 standard. Features in Core SQL remain virtually unchanged through the latest standard, SQL:2011. Our bug taxonomy illustrates common coding defects related to NULL, LEFT JOIN, selectivity assumptions, and ordering assumptions. We highlight ways that silent failures can impact a database-dependent application. Our proposal for addressing SQL development errors is a verification tool (named POQ) that leverages annotated postconditions attached to each query statement. The formalism of domain relational calculus is the basis for axiomatizing queries. Subsequently, a theorem prover searches for counterexamples that demonstrate when the postcondition fails. Alternative projects using SMT solvers and static analysis methods are also discussed.
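One of the LEFT JOIN/NULL defects in the taxonomy this abstract mentions is easy to reproduce. The sqlite3 sketch below uses hypothetical tables (not from the thesis) to show the classic silent failure: a WHERE filter on the right-hand table discards the NULL-extended rows, quietly turning the outer join into an inner join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders(id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 'open');
""")

# Buggy: filtering the right table in WHERE eliminates rows where
# o.status IS NULL, so unmatched customers silently disappear.
buggy = cur.execute("""
    SELECT c.name FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.status = 'open'
""").fetchall()

# Fixed: the filter belongs in the ON clause, preserving the outer join.
fixed = cur.execute("""
    SELECT c.name FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id AND o.status = 'open'
""").fetchall()
```

Both queries run without error and return plausible rows, which is exactly why this class of defect motivates postcondition checking rather than reliance on manual inspection.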
Lin, Wen-Ya. "A Load Balancing Data Allocation for Parallel Query Processing." NSUWorks, 1998. http://nsuworks.nova.edu/gscis_etd/670.
Park, Noseong. "Top-K Query Processing in Edge-Labeled Graph Data." Thesis, University of Maryland, College Park, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10128677.
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with predicates. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed.
Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMS can also be viewed as subgraph matching over large graphs.
Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features.
The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of an answer, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned.
An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers - irrespective of whether they are vertices or substitution, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask.
The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
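The pruning idea shared by the four models above can be sketched generically: keep a min-heap of the k best answers found so far, and skip any candidate whose cheap optimistic upper bound cannot beat the current k-th best score. The function below is an illustrative reduction of that threshold strategy, not the proposal's algorithms; the scoring and bound functions are assumptions supplied by the caller.

```python
import heapq
import itertools

def top_k_answers(candidates, k, score, upper_bound):
    """Threshold pruning for top-k retrieval: maintain a min-heap of the k
    best scores seen so far, and avoid fully scoring any candidate whose
    cheap upper bound cannot exceed the current k-th best score."""
    heap, tie = [], itertools.count()    # tie-breaker keeps heap entries comparable
    for cand in candidates:
        if len(heap) == k and upper_bound(cand) <= heap[0][0]:
            continue                     # pruned without paying for a full score
        s = score(cand)
        entry = (s, next(tie), cand)
        if len(heap) < k:
            heapq.heappush(heap, entry)
        elif s > heap[0][0]:
            heapq.heapreplace(heap, entry)
    return [(s, cand) for s, _, cand in sorted(heap, reverse=True)]
```

The savings grow with the gap between the cheap bound and the expensive exact score, which in subgraph matching means many partial substitutions never need to be expanded at all.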
李淑儀 and Shuk-yee Wendy Lee. "Computer aided facilities design." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B31208277.
Haley, Brent Kreh. "A Pipeline for the Creation, Compression, and Display of Streamable 3D Motion Capture Based Skeletal Animation Data." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1300989069.
Ding, Luping. "Metadata-aware query processing over data streams." Worcester, Mass. : Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-042208-194826/.
Karlsson, Tobias. "Kroppsspråk i 3d-animation : Lögner hos 3d-karaktärer." Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5022.
Sungoor, Ala M. H. "Genomic signal processing for enhanced microarray data clustering." Thesis, Kingston University, 2009. http://eprints.kingston.ac.uk/20310/.
Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves.
We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
by Jiawen Chen.
Ph.D.
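The bilateral grid described in the abstract above can be illustrated with a minimal numpy sketch (this is not Chen's GPU implementation): splat pixel values into a 3D grid whose third axis is intensity, blur the grid, and slice the result back at each pixel. A 1-2-1 blur and nearest-cell splat stand in for the Gaussian blur and trilinear accumulation of the real method.

```python
import numpy as np

def _blur3(a):
    # 1-2-1 blur along each axis: a cheap stand-in for the grid Gaussian.
    for ax in range(a.ndim):
        a = 0.25 * np.roll(a, 1, ax) + 0.5 * a + 0.25 * np.roll(a, -1, ax)
    return a

def bilateral_grid_filter(img, sigma_s=8, sigma_r=0.1):
    """Edge-aware smoothing via a bilateral grid (sketch).

    img: 2D float array in [0, 1]. sigma_s / sigma_r are the spatial and
    range (intensity) sampling rates of the grid.
    """
    h, w = img.shape
    gh, gw = h // sigma_s + 2, w // sigma_s + 2
    gd = int(round(1.0 / sigma_r)) + 2

    # Splat: accumulate homogeneous (value, weight) pairs; intensity
    # becomes a third Euclidean coordinate, so strong edges map to
    # distant grid cells and do not mix under the blur.
    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys // sigma_s).ravel()
    gx = (xs // sigma_s).ravel()
    gz = np.clip((img / sigma_r).astype(int), 0, gd - 1).ravel()
    data = np.zeros((gh, gw, gd))
    weight = np.zeros((gh, gw, gd))
    np.add.at(data, (gy, gx, gz), img.ravel())
    np.add.at(weight, (gy, gx, gz), 1.0)

    # Blur: a smooth function on the grid is piecewise-smooth in image space.
    data, weight = _blur3(data), _blur3(weight)

    # Slice: read the filtered value back at each pixel's grid position.
    out = data[gy, gx, gz] / np.maximum(weight[gy, gx, gz], 1e-8)
    return out.reshape(h, w)
```

Applied to a step edge, the filter smooths each side while leaving the step itself intact, which is the defining behavior of the bilateral filter.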
Jakubiuk, Wiktor. "High performance data processing pipeline for connectome segmentation." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106122.
Full text of source. "December 2015." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-88).
By investigating neural connections, neuroscientists try to understand the brain and reconstruct its connectome. Automated connectome reconstruction from high-resolution electron microscopy is a challenging problem, as all neurons and synapses in a volume have to be detected. A mm³ of high-resolution brain tissue takes roughly a petabyte of space, which state-of-the-art pipelines are unable to process to date. A high-performance, fully automated image processing pipeline is proposed. Using a combination of image processing and machine learning algorithms (convolutional neural networks and random forests), the pipeline constructs a 3-dimensional connectome from 2-dimensional cross-sections of a mammal's brain. The proposed system achieves a low error rate (comparable with the state of the art) and is capable of processing volumes of hundreds of gigabytes in size. The main contributions of this thesis are multiple algorithmic techniques for 2-dimensional pixel classification at varying accuracy/speed trade-offs, as well as a fast object segmentation algorithm. The majority of the system is parallelized for multi-core machines, and with minor additional modification is expected to work in a distributed setting.
by Wiktor Jakubiuk.
M. Eng. in Computer Science and Engineering
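A toy sketch of the segmentation stage described in the abstract, under the simplifying assumption that the upstream classifier (CNN or random forest) has already produced a per-pixel boundary-probability map: threshold it and label connected interior regions. The function name and the sequential BFS flood fill are illustrative; the thesis's actual parallel algorithm is not reproduced here.

```python
import numpy as np
from collections import deque

def segment_objects(prob_map, threshold=0.5):
    """Label objects given per-pixel boundary probabilities.

    Pixels whose probability falls below `threshold` are object interior;
    each 4-connected interior region receives a distinct positive label
    (0 marks boundary pixels).
    """
    interior = prob_map < threshold
    h, w = prob_map.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for sy in range(h):
        for sx in range(w):
            if not interior[sy, sx] or labels[sy, sx]:
                continue
            count += 1
            labels[sy, sx] = count
            q = deque([(sy, sx)])
            while q:  # BFS flood fill over 4-connected neighbors
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and interior[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = count
                        q.append((ny, nx))
    return labels, count
```

Two interior regions separated by a high-probability membrane column come out with two distinct labels, mirroring how neuron cross-sections are separated by detected cell boundaries.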
Nguyen, Qui T. "Robust data partitioning for ad-hoc query processing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106004.
Full text of source. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-62).
Data partitioning can significantly improve query performance in distributed database systems. Most proposed data partitioning techniques choose the partitioning based on a particular expected query workload or use a simple upfront scheme, such as uniform range partitioning or hash partitioning on a key. However, these techniques do not adequately address the case where the query workload is ad-hoc and unpredictable, as in many analytic applications. The HYPER-PARTITIONING system aims to fill that gap, by using a novel space-partitioning tree on the space of possible attribute values to define partitions incorporating all attributes of a dataset. The system creates a robust upfront partitioning tree, designed to benefit all possible queries, and then adapts it over time in response to the actual workload. This thesis evaluates the robustness of the upfront hyper-partitioning algorithm, describes the implementation of the overall HYPER-PARTITIONING system, and shows how hyper-partitioning improves the performance of both selection and join queries.
by Qui T. Nguyen.
M. Eng.
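A k-d-tree-style median split that cycles through all attributes in round-robin order gives the flavor of such an upfront space-partitioning tree: every attribute bounds some level of the tree, so a selection predicate on any attribute can prune partitions. This sketch is an assumed scheme for illustration; the exact HYPER-PARTITIONING split rule and its workload-driven adaptation step are not reproduced here.

```python
import numpy as np

def build_partitions(rows, attrs, max_rows=100, depth=0):
    """Recursively split `rows` (a 2D numpy array) into leaf partitions.

    attrs: column indices to cycle through; each level of the tree splits
    at the median of the next attribute, so all attributes of the dataset
    are incorporated without knowing the query workload upfront.
    """
    if len(rows) <= max_rows:
        return [rows]
    col = attrs[depth % len(attrs)]
    pivot = np.median(rows[:, col])
    left = rows[rows[:, col] <= pivot]
    right = rows[rows[:, col] > pivot]
    if len(left) == 0 or len(right) == 0:  # degenerate split; stop here
        return [rows]
    return (build_partitions(left, attrs, max_rows, depth + 1)
            + build_partitions(right, attrs, max_rows, depth + 1))
```

Each leaf is bounded in every attribute by the pivots on its root-to-leaf path, which is what lets an ad-hoc range query skip partitions whose bounding box it cannot intersect.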
Du, Toit André Johan. "Preslab - micro-computer analysis and design of prestressed concrete slabs." Master's thesis, University of Cape Town, 1988. http://hdl.handle.net/11427/17057.
Full text of source. A micro-computer based package for the analysis and design of prestressed flat slabs is presented. The constant strain triangle and the discrete Kirchhoff plate-bending triangle are combined to provide an efficient "shell" element. These triangles are used for the finite element analysis of prestressed flat slabs. An efficient out-of-core solver for sets of linear simultaneous equations is presented. This solver was developed especially for micro-computers. Subroutines for the design of prestressed flat slabs cover the principal stresses in the top and bottom fibres of the plate, Wood/Armer moments, and untensioned steel areas calculated according to Clark's recommendations. Extensive pre- and post-processing facilities are presented. Several plotting routines were developed to aid the user in understanding the behaviour of the structure under load and prestressing.
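The abstract's out-of-core solver itself is not reproduced here, but a banded Cholesky sketch shows why finite element stiffness matrices admit such solvers: only entries within the half bandwidth are touched when eliminating a row, so the factorization can proceed block-by-block with only a band-width window of rows in memory. This is a plain in-memory numpy sketch under that assumption, not the Preslab code.

```python
import numpy as np

def banded_cholesky_solve(A, b, bw):
    """Solve A x = b for a symmetric positive-definite matrix A whose
    nonzeros lie within half bandwidth `bw` (typical of FE stiffness
    matrices). The factor L inherits the band, so each row needs only
    the previous `bw` rows -- the property an out-of-core solver exploits.
    """
    n = len(b)
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        lo = max(0, i - bw)
        for j in range(lo, i):
            k0 = max(0, j - bw)
            s = A[i, j] - L[i, k0:j] @ L[j, k0:j]
            L[i, j] = s / L[j, j]
        L[i, i] = np.sqrt(A[i, i] - L[i, lo:i] @ L[i, lo:i])
    # Forward substitution (L y = b), then back substitution (L^T x = y).
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - L[i + 1:, i] @ x[i + 1:]) / L[i, i]
    return x
```

For a tridiagonal system (bw = 1) the factorization visits O(n) entries instead of O(n²), which is what made such solvers practical on 1980s micro-computers.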
Samson, Margaret Kingman, 1950. "Computer Aids for Facility Layout." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276400.
Full text of source