Doctoral dissertations on the topic "Processing Science"
Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles
Browse the 50 best doctoral dissertations on the topic "Processing Science".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic citation for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, provided the relevant parameters are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile accurate bibliographies.
Goh, Siew Wei (Chemistry, Faculty of Science, UNSW). "Application of surface science to sulfide mineral processing". Awarded by: University of New South Wales, School of Chemistry, 2006. http://handle.unsw.edu.au/1959.4/32912.
Full text
Chen, Siheng. "Data Science with Graphs: A Signal Processing Perspective". Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/724.
Full text
Baklar, Mohammed Adnan. "Processing organic semiconductors". Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1311.
Full text
Kagiri, T. (Thomas). "Designing strategic information systems in complexity science concepts". Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201306061561.
Full text
Nourian, Arash. "Approaches for privacy aware image processing in clouds". Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119518.
Pełny tekst źródłaLe cloud computing est idéal pour le stockage d'image et le traitement parce qu'il fournit le stockage énormément évolutif et des ressources de traitement au bas prix. Un des inconvénients majeurs de cloud computing, cependant, est le manque de mécanismes robustes pour les utilisateurs pour contrôler la vie privée des données qu'ils mettent en gérance aux clouds comme des photos. Une façon d'améliorer la vie privée et la sécurité de photos stockées dans des clouds est de crypter les photos avant le stockage d'eux. Cependant, utilisant le chiffrage pour garantir les informations tenues dans les photos écarte appliquer n'importe quelles transformations d'image tandis qu'ils sont tenus dans les serveurs tiers. Pour aborder cette question, nous avons développé les régimes de codage d'image qui améliorent la vie privée des données d'image qui est externalisée aux clouds pour le traitement. Nous utilisons un modèle de hybrid cloud pour mettre en œuvre nos régimes proposés. Contrairement aux régimes de chiffrage d'image précédemment proposés, nos régimes de codage permettent aux formes différentes de niveau de pixel, le niveau de bloc et le traitement d'image binaire d'avoir lieu dans les clouds tandis que l'image réelle n'est pas révélée au fournisseur de cloud. Nos régimes de codage utilisent une carte de chat chaotique pour transformer l'image après qu'il est masqué avec une image ambiante arbitrairement choisie ou mixte avec d'autres images. Un prototype simplifié des systèmes de traitement d'image a été mis en œuvre et les résultats expérimentaux et l'analyse de sécurité détaillée pour chaque régime proposé sont présentés dans cette thèse. Nous utilisons l'image commune traitant des tâches de démontrer la capacité de notre régime d'exécuter des calculs sur la vie privée des images améliorées. Une variété de niveau de pixel, le niveau de bloc et des filtres binaires a été mise en œuvre pour supporter le traitement d'image sur des images codées dans le système. L'opérationnel des frais généraux supplémentaire selon nos régimes de refléter des filtres est environ 18% le en moyenne.
Lee, Li 1975. "Distributed signal processing". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.
Full text
McCormick, Martin (Martin Steven). "Digital pulse processing". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78468.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (p. 71-74).
This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time-domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse-domain provides many benefits including: lower area and power consumption, error tolerance, signal serialization and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse-processing are explored including filtering and signal processing, solving differential equations, optimization, the minsum / Viterbi algorithm, and the decoding of low-density parity-check codes (LDPC). These applications often match the performance of ideal continuous-time analog systems but only require simple digital hardware. Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation-oscillators.
by Martin McCormick.
S.M.
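The McCormick entry above treats integrate-and-fire encoding as equivalent to first-order sigma-delta modulation. The sketch below is a minimal, assumed version of such an encoder with a moving-average reconstruction; the threshold, carry-over behaviour, and filter length are illustrative choices, not the thesis design.

```python
import numpy as np

def integrate_and_fire(signal: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """First-order sigma-delta style encoder: fire a unit pulse whenever the
    running integral of the input crosses the threshold, then subtract it."""
    accumulator = 0.0
    pulses = np.zeros_like(signal)
    for i, x in enumerate(signal):
        accumulator += x
        if accumulator >= threshold:
            pulses[i] = 1.0
            accumulator -= threshold   # carry the remainder, don't reset to zero
    return pulses

# Hypothetical usage: encode a slow positive signal, then reconstruct it by
# low-pass filtering the pulse train (a plain moving average here).
t = np.linspace(0.0, 1.0, 4000)
x = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * t)       # values stay within (0, 1)
pulses = integrate_and_fire(x)
window = 200
x_hat = np.convolve(pulses, np.ones(window) / window, mode="same")
```

The local average of the pulse train approximates the local mean of the input, which is the sense in which the pulse domain carries the signal.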
Eldar, Yonina Chana 1973. "Quantum signal processing". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16805.
Full text
Includes bibliographical references (p. 337-346).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Quantum signal processing (QSP) as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results, as well as new classes of frames including oblique frame expansions, that are then applied to the development of a general framework for sampling in arbitrary spaces. Building upon the problem of optimal quantum measurement design, we develop and discuss applications of optimal methods that construct a set of vectors.
(cont.) We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with least-squares inner product shaping leads to interesting processing techniques that often exhibit improved performance over traditional methods. In particular, we formulate a new viewpoint toward matched filter detection that leads to the notion of minimum mean-squared error covariance shaping. Using this concept we develop an effective linear estimator for the unknown parameters in a linear model, referred to as the covariance shaping least-squares estimator. Applying this estimator to a multiuser wireless setting, we derive an efficient covariance shaping multiuser receiver for suppressing interference in multiuser communication systems.
by Yonina Chana Eldar.
Ph.D.
Yang, Heechun. "Modeling the processing science of thermoplastic composite tow prepreg materials". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17217.
Full text
McEwen, Gordon John. "Colour image processing for textile fibre matching in forensic science". Thesis, Queen's University Belfast, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336101.
Full text
Papadopoulos, Stavros. "Authenticated query processing /". View abstract or full-text, 2010. http://library.ust.hk/cgi/db/thesis.pl?CSED%202010%20PAPADO.
Full text
Mehrabi, Saeed. "Advanced natural language processing and temporal mining for clinical discovery". Thesis, Indiana University - Purdue University Indianapolis, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10032405.
Pełny tekst źródłaThere has been vast and growing amount of healthcare data especially with the rapid adoption of electronic health records (EHRs) as a result of the HITECH act of 2009. It is estimated that around 80% of the clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered opportunities to extract information from unstructured clinical texts needed for various clinical applications. A popular method for enabling secondary uses of EHRs is information or concept extraction, a subtask of NLP that seeks to locate and classify elements within text based on the context. Extraction of clinical concepts without considering the context has many complications, including inaccurate diagnosis of patients and contamination of study cohorts. Identifying the negation status and whether a clinical concept belongs to patients or his family members are two of the challenges faced in context detection. A negation algorithm called Dependency Parser Negation (DEEPEN) has been developed in this research study by taking into account the dependency relationship between negation words and concepts within a sentence using the Stanford Dependency Parser. The study results demonstrate that DEEPEN, can reduce the number of incorrect negation assignment for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. Additionally, an NLP system consisting of section segmentation and relation discovery was developed to identify patients’ family history. To assess the generalizability of the negation and family history algorithm, data from a different clinical institution was used in both algorithm evaluations. The temporal dimension of extracted information from clinical records representing the trajectory of disease progression in patients was also studied in this project. Clinical data of patients who lived in Olmsted County (Rochester, MN) during 1966 to 2010 was analyzed in this work. The patient records were modeled by diagnosis matrices with clinical events as rows and their temporal information as columns. Deep learning algorithm was used to find common temporal patterns within these diagnosis matrices.
Khor, Eng Keat. "Processing digital television streams". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38113.
Full text
He, Bingsheng. "Cache-oblivious query processing /". View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20HE.
Full text
Woodbeck, Kris. "On neural processing in the ventral and dorsal visual pathways using the programmable Graphics Processing Unit". Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27660.
Full text
Duval, Alexandra M. "Valorization of Carrot Processing Waste". DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2155.
Full text
Reid, D. J. "Picture-text processing and the learning of science by secondary schoolchildren". Thesis, University of Manchester, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383231.
Full text
Liu, Ying. "Query optimization for distributed stream processing". [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3274258.
Pełny tekst źródłaSource: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4597. Adviser: Beth Plale. Title from dissertation home page (viewed Apr. 21, 2008).
Bui, Quoc Cuong 1974. "Containerless processing of stainless steel". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9578.
Full text
Vita.
Includes bibliographical references (leaves 74-75).
The rapid solidification of a Fe-12 wt% Cr-16 wt% Ni alloy in containerless processing conditions was investigated using an electromagnetic levitation facility. High-speed video and pyrometry allowed the study of phase selection and secondary nucleation mechanisms for that alloy as well as measurements of delay times and growth velocities. Double recalescence events were observed for the first time at temperatures near the T0 temperature of the metastable ferritic phase, defining a value of the critical undercooling for metastable bcc nucleation significantly lower than previously reported. Phase selection during recalescence was successfully performed by use of different trigger materials: at temperatures below the bcc T0, a bcc Fe trigger induced the primary nucleation of the metastable bcc phase, which subsequently transformed into the stable fcc phase, while an fcc Ni trigger caused the nucleation of the equilibrium fcc phase. Growth velocities were characterized for the [gamma] phase growing into the undercooled liquid, the [delta] phase growing into the undercooled liquid, and the [gamma] phase growing into the semi-solid primary bcc. It was found that a critical undercooling exists at which the growth velocity of the primary ferritic phase is equal to that of the secondary austenitic phase into the primary semi-solid. At undercoolings below this critical value, the equilibrium [gamma] can overwhelm the primary [delta] and break into the undercooled liquid. Such a double recalescence event can therefore appear as a single event depending on the geometry of the detection equipment. From this observation, a model based on velocity and delay time arguments was proposed to explain discrepancies with earlier works.
by Quoc Cuong Bui.
S.M.
Weston, Margan Marianna Mackenzie. "Experimental Optical Quantum Science: Efficient Multi-Photon Sources for Quantum Information Science". Thesis, Griffith University, 2017. http://hdl.handle.net/10072/367516.
Full text
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Natural Sciences
Science, Environment, Engineering and Technology
Full Text
SONG, HYO-JIN. "PROCESSING PHASE TRANS". University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1132249697.
Full text
Yarosz, Matthew James 1978. "Security sphere : digital image processing". Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86750.
Full text
Includes bibliographical references (p. 75).
by Matthew James Yarosz.
M.Eng. and S.B.
Gharbi, Michael (Michael Yanis). "Learning efficient image processing pipelines". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120437.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [125]-138).
The high resolution of modern cameras puts significant performance pressure on image processing pipelines. Tuning the parameters of these pipelines for speed is subject to stringent image quality constraints and requires significant efforts from skilled programmers. Because quality is driven by perceptual factors with which most quantitative image metrics correlate poorly, developing new pipelines involves long iteration cycles, alternating between software implementation and visual evaluation by human experts. These concerns are compounded on modern computing platforms, which are increasingly mobile and heterogeneous. In this dissertation, we apply machine learning towards the design of high-performance, high-fidelity algorithms whose parameters can be optimized automatically on large-scale datasets via gradient based methods. We present applications to low-level image restoration and high performance image filtering on mobile devices.
by Michaël Gharbi.
Ph. D.
Herrmann, Frederick P. (Frederick Paul). "An integrated associative processing system". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38020.
Full text
Includes bibliographical references (p. 97-105).
by Frederick Paul Herrmann.
Ph.D.
Vasconcellos, Brett W. (Brett William) 1977. "Parallel signal-processing for everyone". Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9097.
Full text
Includes bibliographical references (p. 65-67).
We designed, implemented, and evaluated a signal-processing environment that runs on a general-purpose multiprocessor system, allowing easy prototyping of new algorithms and integration with applications. The environment allows the composition of modules implementing individual signal-processing algorithms into a functional application, automatically optimizing their performance. We decompose the problem into four independent components: signal processing, data management, scheduling, and control. This simplifies the programming interface and facilitates transparent parallel signal processing. For tested applications, our system both runs efficiently on single-processor systems and achieves near-linear speedups on symmetric-multiprocessor (SMP) systems.
by Brett W. Vasconcellos.
M.Eng.
Baran, Thomas A. (Thomas Anthony). "Conservation in signal processing systems". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/74991.
Full text
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 205-209).
Conservation principles have played a key role in the development and analysis of many existing engineering systems and algorithms. In electrical network theory for example, many of the useful theorems regarding the stability, robustness, and variational properties of circuits can be derived in terms of Tellegen's theorem, which states that a wide range of quantities, including power, are conserved. Conservation principles also lay the groundwork for a number of results related to control theory, algorithms for optimization, and efficient filter implementations, suggesting potential opportunity in developing a cohesive signal processing framework within which to view these principles. This thesis makes progress toward that goal, providing a unified treatment of a class of conservation principles that occur in signal processing systems. The main contributions in the thesis can be broadly categorized as pertaining to a mathematical formulation of a class of conservation principles, the synthesis and identification of these principles in signal processing systems, a variational interpretation of these principles, and the use of these principles in designing and gaining insight into various algorithms. In illustrating the use of the framework, examples related to linear and nonlinear signal-flow graph analysis, robust filter architectures, and algorithms for distributed control are provided.
by Thomas A. Baran.
Ph.D.
Kitchens, Jonathan Paul. "Acoustic vector-sensor array processing". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60098.
Full text
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 145-148).
Existing theory yields useful performance criteria and processing techniques for acoustic pressure-sensor arrays. Acoustic vector-sensor arrays, which measure particle velocity and pressure, offer significant potential but require fundamental changes to algorithms and performance assessment. This thesis develops new analysis and processing techniques for acoustic vector-sensor arrays. First, the thesis establishes performance metrics suitable for vector sensor processing. Two novel performance bounds define optimality and explore the limits of vector-sensor capabilities. Second, the thesis designs non-adaptive array weights that perform well when interference is weak. Obtained using convex optimization, these weights substantially improve conventional processing and remain robust to modeling errors. Third, the thesis develops subspace techniques that enable near-optimal adaptive processing. Subspace processing reduces the problem dimension, improving convergence or shortening training time.
by Jonathan Paul Kitchens.
Ph.D.
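The Kitchens abstract above concerns optimal and adaptive weighting of sensor arrays. As general background rather than the thesis's vector-sensor method, the sketch below computes standard MVDR (Capon) weights w = R^{-1} v / (v^H R^{-1} v) for an assumed pressure-sensor line array.

```python
import numpy as np

def mvdr_weights(R: np.ndarray, steering: np.ndarray) -> np.ndarray:
    """Minimum-variance distortionless-response weights for covariance R and
    steering vector v: w = R^{-1} v / (v^H R^{-1} v)."""
    Rinv_v = np.linalg.solve(R, steering)
    return Rinv_v / (steering.conj() @ Rinv_v)

# Hypothetical usage: 8-element half-wavelength line array, look direction 20 degrees.
n, look_deg = 8, 20.0
phase = np.pi * np.sin(np.deg2rad(look_deg)) * np.arange(n)
v = np.exp(1j * phase)
R = np.eye(n) + 0.1 * np.ones((n, n))   # toy covariance: noise plus weak interference
w = mvdr_weights(R, v)
look_gain = abs(w.conj() @ v)           # equals 1 by the distortionless constraint
```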
Eisenstein, Jacob (Jacob Richard). "Gesture in automatic discourse processing". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44401.
Full text
Includes bibliographical references (p. 145-153).
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features - extracted automatically from video - yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
by Jacob Eisenstein.
Ph.D.
Pardo-Guzman, Dino Alejandro. "Design and processing of organic electroluminescent devices". Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284162.
Full text
Hartami, A. (Aprilia). "Designing a persuasive game to raise environmental awareness among children:a design science research". Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201805312053.
Pełny tekst źródłaGolab, Lukasz. "Sliding Window Query Processing over Data Streams". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2930.
Full text
This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time; therefore there exist opportunities for resource sharing among similar queries.
The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that the knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, as well as novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times, therefore synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.
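Golab's abstract above centres on maintaining answers over sliding windows as tuples arrive and expire. A minimal sketch of that pattern, a time-based sliding-window sum kept incrementally with a deque, is shown below; the window length and the choice of aggregate are assumptions.

```python
from collections import deque

class SlidingWindowSum:
    """Time-based sliding window: keep only tuples newer than `window_seconds`
    and maintain their running sum incrementally."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.items = deque()   # (timestamp, value) pairs in arrival order
        self.total = 0.0

    def insert(self, timestamp: float, value: float) -> float:
        self.items.append((timestamp, value))
        self.total += value
        self._expire(timestamp)
        return self.total

    def _expire(self, now: float) -> None:
        # "Undo" the contribution of tuples that have slid out of the window.
        while self.items and self.items[0][0] <= now - self.window:
            _, old_value = self.items.popleft()
            self.total -= old_value

# Hypothetical usage: a 10-second window over a stream of (time, value) tuples.
w = SlidingWindowSum(10.0)
for t, v in [(0, 5.0), (3, 2.0), (9, 1.0), (12, 4.0)]:
    print(t, w.insert(t, v))
```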
MacDonald, Darren T. "Image segment processing for analysis and visualization". Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27641.
Full text
Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems". [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.
Pełny tekst źródłaSource: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
Zsilavecz, Guido. "Rita+ : an SGML based document processing system". Master's thesis, University of Cape Town, 1993. http://hdl.handle.net/11427/13531.
Full text
Rita+ is a structured syntax directed document processing system, which allows users to edit documents interactively, and display these documents in a manner determined by the user. The system is SGML (Standard Generalized Markup Language) based in that it reads and saves files as SGML marked up documents, and uses SGML document type definitions as templates for document creation. The display or layout of the document is determined by the Rita Semantic Definition Language (RSDL). With RSDL it is possible to assign semantic actions quickly to an SGML file. Semantic definitions also allow users to export documents to serve as input for powerful batch formatters. Each semantic definition file is associated with a specific document type definition. The Rita+ Editor uses the SGML document type definition to allow the user to create structurally correct documents, and allows the user to create these in an almost arbitrary manner. The Editor displays the user document together with the associated document structure in a different window. Documents are created by selecting document elements displayed in a dynamic menu. This menu changes according to the current position in the document structure. As it is possible to have documents which are incomplete in the sense that required document elements are missing, Rita+ will indicate which document elements are required to complete the document with a minimum number of insertions, by highlighting the required elements in the dynamic menu. The Rita+ system is built on top of an existing system, to which SGML and RSDL support, as well as incomplete document support and menu marking, have been added.
Smith, Charles Eldon. "An information-processing theory of party identification /". The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487848531364663.
Full text
Ikiz, Yuksel. "Fiber Length Measurement by Image Processing". NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000809-225316.
Pełny tekst źródłaIKIZ, YUKSEL. Fiber Length Measurement by Image Processing. (Under the direction of Dr. Jon P. Rust.) This research studied the accuracy and feasibility of cotton fiber length measurement by image processing as an alternative to existing systems. Current systems have some weaknesses especially in Short Fiber Content (SFC) determination, which is becoming an important length parameter in industry. Seventy-two treatments of five factors were analyzed for length and time measurements by our own computer program. The factors are: Sample preparation (without fiber crossover and with fiber crossover), lighting (backlighting and frontlighting), resolution (37-micron, 57-micron, 106-micron, and 185-micron), preprocessing (4-neighborhood and 8-neighborhood), and processing (outlining, thinning, and adding broken skeletons). The best results in terms of accuracy, precision and analysis time for images without fiber crossovers were: 106-micron resolution with frontlighting using an 8-neighborhood thresholding algorithm and using an outline algorithm for length determination. With fiber crossovers, 57-micron resolution with backlighting using an 8-neighborhood thresholding algorithm and using a thinning algorithm combined with an adding algorithm for combining broken skeletons. Using the above conditions, 1775 area can be analyzed using our current equipment in 15 seconds. In the case of images with crossovers, only 117 can be analyzed in 15 seconds. This research demonstrates that successful sample preparation without fiber crossovers would create the best fiber length measurement technique, however with fiber crossovers the system efficiency has been proven as well.
Martínez-Ayers, Raúl Andrés 1977. "Formation and processing of rheocast microstructures". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28883.
Full text
Includes bibliographical references (leaves 108-114).
The importance of semi-solid metal processing derives primarily from its ability to form high integrity parts from lightweight alloys. Since the discovery of the semi-solid metal microstructure, most part production was by reheating of billets which possessed a suitable microstructure ("thixocasting"). However, it is now apparent that there are significant advantages of forming semi-solid slurry directly from liquid alloy ("rheocasting") and efficient rheocasting processes have been engineered. In this work, experimental and analytical approaches were taken to study how non-dendritic microstructures form and evolve in Al-4.5wt%Cu alloy during the earliest stages of solidification. Experimental results showed that particles in quenched rheocast alloy were already spheroidal, and free of entrapped eutectic, after 5 seconds of solidification time. Spheroidal particles were also formed by reheating equiaxed dendrites of approximately 10 [micro]m radius above the eutectic temperature for 5 seconds, but these spheroids contained entrapped eutectic. In both rheocasting and reheating experiments, the average particle radius was found to increase with solidification time at a rate that closely follows the classical dendrite arm ripening curve. Particle growth models developed were compared with the average particle radius measurements, and particle solute content measurements. The maximum cooling rate to maintain spheroidal interface stability at various solid fractions was studied experimentally. A modified stability model which considered particle interaction through solute field overlap was developed and found to be in good agreement with experimental data. A simple method for the foundry to determine the maximum cooling rate for a given slurry was proposed. The fluidity of rheocast A357 alloy slurries was contrasted with the fluidity of superheated liquid. Rheocast slurries with 37% solid particles were found to flow about half as far as fully liquid alloy superheated 20°C above the liquidus.
by Raul A. Martinez-Ayers.
Ph.D.
Bhattacharya, Dipankar. "Neural networks for signal processing". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21924.pdf.
Full text
Benjamin, Jim Isaac. "Quadtree algorithms for image processing /". Online version of thesis, 1991. http://hdl.handle.net/1850/11078.
Full text
Liu, Fuyu. "Query processing in location-based services". Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4634.
Full text
ID: 029050964; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Thesis (Ph.D.)--University of Central Florida, 2010; Includes bibliographical references (p. 138-145).
Ph.D.
Doctorate
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
Bomström, H. (Henri). "Improving video game designer workflow in procedural content generation-based game design:a design science approach". Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063237.
Full text
Liu, Xun. "Free Wind Path Detecting in Google Map Using Image Processing". Thesis, University of Gävle, Department of Industrial Development, IT and Land Management, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-6920.
Pełny tekst źródłaUrban heat island has become a serious environment problem in nowadays. One of the reasons is the buildings block the wind blow. For planning a scientific urban geometry to reduce the influence of urban heat island, one of the solutions is finding the free wind paths. There many previous works based on GIS, this paper will offer a new method found on image processing in digital map. Using MATLAB instead of GIS device, the algorithm provides a simple and easy-using way to detect the free wind paths in the eight directions.
Boufounos, Petros T. 1977. "Signal processing for DNA sequencing". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/17536.
Full text
Includes bibliographical references (p. 83-86).
DNA sequencing is the process of determining the sequence of chemical bases in a particular DNA molecule-nature's blueprint of how life works. The advancement of biological science has created a vast demand for sequencing methods, which needs to be addressed by automated equipment. This thesis tries to address one part of that process, known as base calling: it is the conversion of the electrical signal (the electropherogram) collected by the sequencing equipment into a sequence of letters drawn from {A, T, C, G} that corresponds to the sequence in the molecule sequenced. This work formulates the problem as a pattern recognition problem, and observes its striking resemblance to the speech recognition problem. We, therefore, propose combining Hidden Markov Models and Artificial Neural Networks to solve it. In the formulation we derive an algorithm for training both models together. Furthermore, we devise a method to create very accurate training data, requiring minimal hand-labeling. We compare our method with the de facto standard, PHRED, and produce comparable results. Finally, we propose alternative HMM topologies that have the potential to significantly improve the performance of the method.
by Petros T. Boufounos.
M.Eng. and S.B.
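Boufounos's abstract above casts base calling as decoding a hidden base sequence from the electropherogram with HMMs and neural networks. The sketch below is only a generic log-space Viterbi decoder over a tiny hand-made HMM with four base states; the observation model (index of the strongest channel at each peak) and all probabilities are invented for illustration.

```python
import numpy as np

STATES = ["A", "C", "G", "T"]

def viterbi(observations, log_trans, log_emit, log_prior):
    """Most likely state sequence for a discrete-emission HMM (log-space Viterbi)."""
    n_states = len(log_prior)
    T = len(observations)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_prior + log_emit[:, observations[0]]
    for t in range(1, T):
        for s in range(n_states):
            candidates = score[t - 1] + log_trans[:, s]
            back[t, s] = int(np.argmax(candidates))
            score[t, s] = candidates[back[t, s]] + log_emit[s, observations[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

# Hypothetical usage: each observation is which of the four channels peaked (0..3);
# emissions favour the matching base, transitions are uniform.
log_prior = np.log(np.full(4, 0.25))
log_trans = np.log(np.full((4, 4), 0.25))
log_emit = np.log(0.05 + 0.80 * np.eye(4))
print(viterbi([0, 0, 2, 3, 1], log_trans, log_emit, log_prior))  # ['A', 'A', 'G', 'T', 'C']
```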
Chiu, William. "Processing of Supported Silica Membranes for Gas Separation". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1349815421.
Full text
Luo, Chuanjiang. "Laplace-based Spectral Method for Point Cloud Processing". The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1388661251.
Full text
Shi, Xun. "Effect of Processing and Formulations on Cocoa Butter". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420772706.
Full text
Wang, Jiayin. "Building Efficient Large-Scale Big Data Processing Platforms". Thesis, University of Massachusetts Boston, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10262281.
Pełny tekst źródłaIn the era of big data, many cluster platforms and resource management schemes are created to satisfy the increasing demands on processing a large volume of data. A general setting of big data processing jobs consists of multiple stages, and each stage represents generally defined data operation such as ltering and sorting. To parallelize the job execution in a cluster, each stage includes a number of identical tasks that can be concurrently launched at multiple servers. Practical clusters often involve hundreds or thousands of servers processing a large batch of jobs. Resource management, that manages cluster resource allocation and job execution, is extremely critical for the system performance.
Generally speaking, there are three main challenges in resource management of the new big data processing systems. First, while there are various pending tasks from different jobs and stages, it is difficult to determine which ones deserve the priority to obtain the resources for execution, considering the tasks' different characteristics such as resource demand and execution time. Second, there exists dependency among the tasks that can be concurrently running. For any two consecutive stages of a job, the output data of the former stage is the input data of the latter one. The resource management has to comply with such dependency. The third challenge is the inconsistent performance of the cluster nodes. In practice, the run-time performance of every server is varying. The resource management needs to dynamically adjust the resource allocation according to the performance change of each server.
The resource management in the existing platforms and prior work often relies on fixed user-specific configurations, and assumes consistent performance in each node. The performance, however, is not satisfactory under various workloads. This dissertation aims to explore new approaches to improving the efficiency of large-scale big data processing platforms. In particular, the run-time dynamic factors are carefully considered when the system allocates the resources. New algorithms are developed to collect run-time data and predict the characteristics of jobs and the cluster. We further develop resource management schemes that dynamically tune the resource allocation for each stage of every running job in the cluster. New findings and techniques in this dissertation will certainly provide valuable and inspiring insights to other similar problems in the research community.
Li, Quanzhong. "Indexing and path query processing for XML data". Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/290141.
Full text
Le, Hoang Duc Khanh (Computer Science & Engineering, Faculty of Engineering, UNSW). "Visual dataflow language for image processing". Awarded by: University of New South Wales, Computer Science & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40776.
Full text
Wang, Alice 1975. "Eigenstructure based speech processing in noise". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46218.
Full text