
Dissertations / Theses on the topic 'Processing Science'



Consult the top 50 dissertations / theses for your research on the topic 'Processing Science.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Goh, Siew Wei (Chemistry, Faculty of Science, UNSW). "Application of surface science to sulfide mineral processing." Awarded by: University of New South Wales, School of Chemistry, 2006. http://handle.unsw.edu.au/1959.4/32912.

Full text
Abstract:
Surface spectroscopic techniques have been applied to facets of the flotation beneficiation and hydrometallurgical extraction of sulfide minerals to enhance the fundamental understanding of these industrially important processes. As a precursor to the determination of surface chemical composition, the sub-surface properties of some sulfide minerals that have not previously been fully characterised were also investigated. The electronic properties of α-NiS and β-NiS (millerite), Ni3S2 (heazlewoodite), (Ni,Fe)9S8 (pentlandite), CuFe2S3 (cubanite), CuFeS2 (chalcopyrite), Cu5FeS4 (bornite) and CuS (covellite) were investigated by conventional and synchrotron X-ray photoelectron spectroscopy (XPS) and near-edge X-ray absorption fine structure (NEXAFS) spectroscopy, augmented by ab initio density-of-states calculations and NEXAFS spectral simulations. Particular aspects studied included the relationship between sulfur coordination number and core electron binding energies, the higher than expected core electron binding energies for the sulfur in the metal-excess nickel sulfides, and the formal oxidation states of the Cu and Fe in Cu-Fe sulfides. It was concluded that the binding energy dependence on coordination number was less than previously believed, that Ni-Ni bonding was the most likely explanation for the unusual properties of the Ni sulfides, and that there was no convincing evidence for Cu(II) in sulfides as had been claimed. Most of the NEXAFS spectra simulated by the FEFF8 and WIEN2k ab initio codes agreed well with experimental spectra, and the calculated densities of states were useful in rationalising the observed properties. XPS, static secondary ion mass spectrometry (SIMS) and NEXAFS spectroscopy were used to investigate thiol flotation collector adsorption on several sulfides in order to determine the way in which the collector chemisorbs to the mineral surface, to differentiate monolayer from multilayer coverage, and to characterise the multilayer species. It was found that static SIMS alone was able to differentiate monolayer from multilayer coverage, and together with angle-resolved NEXAFS spectroscopy, was also able to confirm that 2-mercaptobenzothiazole interacted through both its N and exocyclic S atoms. The altered layers formed on chalcopyrite and heazlewoodite during acid leaching were examined primarily by means of threshold S KLL Auger electron spectroscopy, but no evidence for buried interfacial species was obtained.
2

Chen, Siheng. "Data Science with Graphs: A Signal Processing Perspective." Research Showcase @ CMU, 2016. http://repository.cmu.edu/dissertations/724.

Full text
Abstract:
A massive amount of data is being generated at an unprecedented level from a diversity of sources, including social media, internet services, biological studies, physical infrastructure monitoring and many others. The necessity of analyzing such complex data has led to the birth of an emerging framework, graph signal processing. This framework offers a unified and mathematically rigorous paradigm for the analysis of high-dimensional data with complex and irregular structure. It extends fundamental signal processing concepts such as signals, Fourier transform, frequency response and filtering, from signals residing on regular lattices, which have been studied by the classical signal processing theory, to data residing on general graphs, which are called graph signals. In this thesis, we consider five fundamental tasks on graphs from the perspective of graph signal processing: representation, sampling, recovery, detection and localization. Representation, aiming to concisely model shapes of graph signals, is at the heart of the proposed techniques. Sampling followed by recovery, aiming to reconstruct an original graph signal from a few selected samples, is applicable in semi-supervised learning and user profiling in online social networks. Detection followed by localization, aiming to identify and localize targeted patterns in noisy graph signals, is related to many real-world applications, such as localizing virus attacks in cyber-physical systems, localizing stimuli in brain connectivity networks, and mining traffic events in city street networks, to name just a few. We illustrate the power of the proposed tools on two real-world problems: fast resampling of 3D point clouds and mining of urban traffic data.
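The graph-signal-processing primitives this abstract names (graph Fourier transform, frequency, filtering, recovery) all reduce to linear algebra on a graph Laplacian. Below is a minimal sketch, with a hypothetical 4-node path graph and an arbitrary frequency cutoff standing in for the thesis's far more sophisticated sampling and recovery machinery:

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-node path graph.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

# Laplacian eigenvectors act as Fourier modes; eigenvalues act as
# graph frequencies, ordered from smooth to oscillatory.
freqs, U = np.linalg.eigh(L)

x = np.array([1.0, 0.9, -0.2, 1.1])   # a graph signal (one value per node)
x_hat = U.T @ x                       # graph Fourier transform
x_hat[freqs > 1.5] = 0.0              # crude low-pass graph filter
x_smooth = U @ x_hat                  # inverse transform
print(x_smooth)                       # smoothed signal on the nodes
```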
3

Baklar, Mohammed Adnan. "Processing organic semiconductors." Thesis, Queen Mary, University of London, 2010. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1311.

Full text
Abstract:
In recent years, there has been considerable interest in organic semiconducting materials due to their potential to enable, amongst other things, low-cost flexible opto-electronic applications, such as large-area integrated circuitry boards, organic light-emitting diodes (OLEDs) and organic photovoltaics (OPVs). Promisingly, improved electronic performance and device structures have been realized, with e.g. OLEDs entering the market and organic field-effect transistors (OFETs) reaching the performance of amorphous silicon devices; however, it would be too early to state that the field of organic semiconductors has witnessed the sought-after technological revolution. Initial progress in the field was mostly due to synthetic efforts in the form of enhanced regularity and purity of currently used materials, the creation of new molecular species, etc. In this thesis we show that the advancement of physico-chemical aspects – notably materials processing – and the realisation of increased order and control of the solid-state structure are critical to realizing the full intrinsic potential that organic semiconductors possess. We first investigated how the bulk charge-transport properties of the liquid-crystalline semiconductor poly(2,5-bis(3-dodecylthiophen-2-yl)thieno[3,2-b]thiophene) (pBTTT-C12) can be enhanced by annealing in the mesophase. To this end, temperature treatment for a period of hours was necessary to realize good bulk charge transport in the out-of-plane direction. This behaviour is in strong contrast to in-plane charge transport as measured in thin-film field-effect structures, for which it was shown that annealing times of 10 min and less are often sufficient to enhance device performance. Our observation may aid future efforts to optimize the use of pBTTT polymers in electronic devices in which good bulk charge transport is required, such as OPVs. In the second part of the thesis, we explored ink-jet printing of pBTTT-C12 in order to realize precise deposition of this material into pre-defined structures. In organic electronic applications this can, amongst other things, enable deposition of different semiconductors or reduction of the unwanted conduction pathways that often result in undesirable parasitic ‘cross-talk’, for instance between pixels in display products. We demonstrate the integration of ink-jet printed transistors into unipolar digital logic gates that display the highest signal gain reported for unipolar-based logic gates. Finally, recognizing that a broad range of conjugated organic species fall into the category of “plastic crystals”, we explored the option to process this class of materials in the solid state. We find that solid-state compression moulding can indeed be applied effectively to a wide spectrum of organic small-molecule and polymeric semiconductors without adversely affecting the favourable intrinsic electronic characteristics of these materials. On the contrary, we often observe significantly enhanced bulk charge transport and essentially identical field-effect transistor performance when compared with solution- or melt-processed equivalents. We thus illustrate that fabrication of functional organic structures does not necessitate the use of solution-processing methods, which often require removal of 99 wt% or more of solvent or precursor side-products, nor the application of cumbersome vapour deposition technologies.
4

Kagiri, T. (Thomas). "Designing strategic information systems in complexity science concepts." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201306061561.

Full text
Abstract:
Aligning strategic information systems (SISs) with business objectives poses a major challenge: alignment is a complex and dynamic process, especially with shifting business strategies. Advancement in the digital age has altered the dynamism of SISs; deployment of these ubiquitous technologies has increased competitive capabilities in organizations, as well as turned SIS management into a chaotic affair. SISs are turbulent, non-linear, open, dancing systems: these features have introduced uncertainty and unpredictability, resulting in problematic alignment of SISs. Complexity science is thought to offer a solution to this alignment chaos, as it is concerned with the dynamism of the system. The research in this thesis focused on SIS design, specifically designing SISs in conformity with complexity science concepts, and it analyses the use of complexity science in SIS design. Complexity science encompasses complex adaptive and evolving systems (CAS & CES), both referred to as complex systems hereafter. To achieve SIS alignment with business objectives, organizations would need to design not only alignment applications but also aligned network structures that enable strategic knowledge sharing for competitive advantage. The diversity and non-linear topology of SISs no longer allow organizations to isolate themselves from the outside world; in the digital era, competitive advantages arise from shared information. This thesis proposed the use of the Knowledge Assets Value Dynamics Map (KAVDM) to determine the priority and order for sharing knowledge assets and for value conversion in organizations, so as to implement the proposed design. The aligned network topology introduces the network paradigm, which deals with inter-organizational network strategies and focuses on organizational systems connectivity for competitive advantage. Complexity science was explored for SIS networked interactions and inter-organizational alliance benefits, demonstrating the use of dynamic network alignment that adapts and evolves as organizations' knowledge needs rise. The resultant SIS network was redundant, self-regulating, and self-learning. This thesis used simple Generative Network Automata (GNA) to demonstrate that the distributed network topology can adapt and evolve simultaneously as a single computational framework, conforming to the complexity concepts. The thesis employed a design science research methodology combined with analytical research: it explored existing knowledge on complexity science and investigated and analyzed its possibilities for improving SIS design. It also suggested how SISs should be designed in conformity with complexity science concepts. The thesis' contribution is a conceptual model, offered as a suggestion to the information systems research community. A literature review was applied while studying existing knowledge in SISs, open data, the network paradigm, and complexity science. Most publications came from MIS Quarterly (MISQ), Information Systems Research (ISR), IEEE Xplore, and the Journal of Strategic Information Systems (JSIS).
5

Nourian, Arash. "Approaches for privacy aware image processing in clouds." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=119518.

Full text
Abstract:
Cloud computing is ideal for image storage and processing because it provides enormously scalable storage and processing resources at low cost. One of the major drawbacks of cloud computing, however, is the lack of robust mechanisms for the users to control the privacy of the data they farm out to the clouds, such as photos. One way of enhancing the privacy and security of photos stored in clouds is to encrypt the photos before storing them. However, using encryption to secure the information held in the photos precludes applying any image processing operations while they are held in the third-party servers. To address this issue, we have developed image encoding schemes that enhance the privacy of image data that is outsourced to the clouds for processing. We utilize a hybrid cloud model to implement our proposed schemes. Unlike previously proposed image encryption schemes, our encoding schemes allow different forms of pixel-level, block-level, and binary image processing to take place in the clouds while the actual image is not revealed to the cloud provider. Our encoding schemes use a cat map transformation to encode the image after it is masked with an arbitrarily chosen ambient image or mixed with other images. A simplified prototype of the image processing systems was implemented, and the experimental results and detailed security analysis for each proposed scheme are presented in this thesis. We use common image processing tasks to demonstrate the ability of our scheme to perform computations on privacy-enhanced images. A variety of pixel-level, block-level and binary filters have been implemented to support image processing on encoded images in the system. The operational overhead added by our schemes to image filters is roughly 18% on average.
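For readers unfamiliar with the cat map transformation the abstract mentions, the sketch below shows the basic idea: a cat map permutes pixel positions (hiding where things are) after a mask hides pixel values. The XOR mask and all data here are illustrative assumptions, not the thesis's actual encoding scheme:

```python
import numpy as np

def cat_map(img, iterations=1):
    """Scramble pixel positions of a square image with the cat map
    (x, y) -> ((x + y) mod n, (x + 2y) mod n); the map is a bijection
    (determinant 1), so the data owner can invert it exactly."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        scrambled[(x + y) % n, (x + 2 * y) % n] = out
        out = scrambled
    return out

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (8, 8), dtype=np.uint8)   # the private image
ambient = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # arbitrary ambient image
masked = secret ^ ambient        # hide pixel values (XOR mask: an assumption)
encoded = cat_map(masked, 3)     # hide pixel positions before outsourcing
print(encoded.shape)
```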
6

Lee, Li 1975. "Distributed signal processing." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86436.

Full text
7

McCormick, Martin (Martin Steven). "Digital pulse processing." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78468.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-74).
This thesis develops an exact approach for processing pulse signals from an integrate-and-fire system directly in the time domain. Processing is deterministic and built from simple asynchronous finite-state machines that can perform general piecewise-linear operations. The pulses can then be converted back into an analog or fixed-point digital representation through a filter-based reconstruction. Integrate-and-fire is shown to be equivalent to the first-order sigma-delta modulation used in oversampled noise-shaping converters. The encoder circuits are well known and have simple construction using both current and next-generation technologies. Processing in the pulse domain provides many benefits including: lower area and power consumption, error tolerance, signal serialization and simple conversion for mixed-signal applications. To study these systems, discrete-event simulation software and an FPGA hardware platform are developed. Many applications of pulse processing are explored, including filtering and signal processing, solving differential equations, optimization, the min-sum/Viterbi algorithm, and the decoding of low-density parity-check (LDPC) codes. These applications often match the performance of ideal continuous-time analog systems but only require simple digital hardware. Keywords: time-encoding, spike processing, neuromorphic engineering, bit-stream, delta-sigma, sigma-delta converters, binary-valued continuous-time, relaxation-oscillators.
by Martin McCormick.
S.M.
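The equivalence this abstract draws between integrate-and-fire encoding and first-order sigma-delta modulation is easy to see in code: both accumulate the error between the input and the fed-back pulses and fire when a threshold is crossed. A minimal sketch (the test signal, reconstruction window, and threshold are illustrative choices, not the thesis's designs):

```python
import numpy as np

def sigma_delta_encode(x):
    """First-order sigma-delta: integrate the error between the input
    and the fed-back pulse, then fire +1/-1 by thresholding the
    integrator -- the same accumulate-and-fire loop as an
    integrate-and-fire encoder."""
    acc, pulses = 0.0, []
    for sample in x:                  # input assumed within [-1, 1]
        acc += sample - (pulses[-1] if pulses else 0.0)
        pulses.append(1.0 if acc >= 0.0 else -1.0)
    return np.array(pulses)

t = np.linspace(0.0, 1.0, 2000)
x = 0.5 * np.sin(2 * np.pi * 5 * t)              # oversampled input signal
p = sigma_delta_encode(x)                        # +/-1 pulse stream
rec = np.convolve(p, np.ones(50) / 50, "same")   # filter-based reconstruction
print(np.max(np.abs(rec[100:-100] - x[100:-100])))  # residual is small
```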
8

Eldar, Yonina Chana 1973. "Quantum signal processing." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/16805.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2002.
Includes bibliographical references (p. 337-346).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Quantum signal processing (QSP) as formulated in this thesis, borrows from the formalism and principles of quantum mechanics and some of its interesting axioms and constraints, leading to a novel paradigm for signal processing with applications in areas ranging from frame theory, quantization and sampling methods to detection, parameter estimation, covariance shaping and multiuser wireless communication systems. The QSP framework is aimed at developing new or modifying existing signal processing algorithms by drawing a parallel between quantum mechanical measurements and signal processing algorithms, and by exploiting the rich mathematical structure of quantum mechanics, but not requiring a physical implementation based on quantum mechanics. This framework provides a unifying conceptual structure for a variety of traditional processing techniques, and a precise mathematical setting for developing generalizations and extensions of algorithms. Emulating the probabilistic nature of quantum mechanics in the QSP framework gives rise to probabilistic and randomized algorithms. As an example we introduce a probabilistic quantizer and derive its statistical properties. Exploiting the concept of generalized quantum measurements we develop frame-theoretical analogues of various quantum-mechanical concepts and results, as well as new classes of frames including oblique frame expansions, that are then applied to the development of a general framework for sampling in arbitrary spaces. Building upon the problem of optimal quantum measurement design, we develop and discuss applications of optimal methods that construct a set of vectors.
We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with least-squares inner product shaping leads to interesting processing techniques that often exhibit improved performance over traditional methods. In particular, we formulate a new viewpoint toward matched filter detection that leads to the notion of minimum mean-squared error covariance shaping. Using this concept we develop an effective linear estimator for the unknown parameters in a linear model, referred to as the covariance shaping least-squares estimator. Applying this estimator to a multiuser wireless setting, we derive an efficient covariance shaping multiuser receiver for suppressing interference in multiuser communication systems.
by Yonina Chana Eldar.
Ph.D.
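A small illustration of the frame expansions discussed in this abstract: an overcomplete set of vectors analyzes a signal redundantly, and the canonical dual frame (computed here with a pseudoinverse) reconstructs it exactly. The three-vector "Mercedes" frame in R² is a standard textbook example, not one taken from the thesis:

```python
import numpy as np

# Three unit vectors 120 degrees apart: an overcomplete frame for R^2.
F = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])   # rows are the frame vectors

x = np.array([0.7, -1.2])
coeffs = F @ x                       # redundant analysis ("measurement")
x_rec = np.linalg.pinv(F) @ coeffs   # synthesis with the canonical dual frame
print(np.allclose(x, x_rec))         # True: perfect reconstruction
```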
9

Yang, Heechun. "Modeling the processing science of thermoplastic composite tow prepreg materials." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/17217.

Full text
10

McEwen, Gordon John. "Colour image processing for textile fibre matching in forensic science." Thesis, Queen's University Belfast, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336101.

Full text
11

Papadopoulos, Stavros. "Authenticated query processing /." View abstract or full-text, 2010. http://library.ust.hk/cgi/db/thesis.pl?CSED%202010%20PAPADO.

Full text
12

Mehrabi, Saeed. "Advanced natural language processing and temporal mining for clinical discovery." Thesis, Indiana University - Purdue University Indianapolis, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10032405.

Full text
Abstract:

There has been a vast and growing amount of healthcare data, especially with the rapid adoption of electronic health records (EHRs) as a result of the HITECH Act of 2009. It is estimated that around 80% of the clinical information resides in the unstructured narrative of an EHR. Recently, natural language processing (NLP) techniques have offered opportunities to extract information from unstructured clinical texts needed for various clinical applications. A popular method for enabling secondary uses of EHRs is information or concept extraction, a subtask of NLP that seeks to locate and classify elements within text based on the context. Extraction of clinical concepts without considering the context has many complications, including inaccurate diagnosis of patients and contamination of study cohorts. Identifying the negation status of a concept and whether a clinical concept belongs to the patient or his family members are two of the challenges faced in context detection. A negation algorithm called Dependency Parser Negation (DEEPEN) was developed in this research study by taking into account the dependency relationship between negation words and concepts within a sentence, using the Stanford Dependency Parser. The study results demonstrate that DEEPEN can reduce the number of incorrect negation assignments for patients with positive findings, and therefore improve the identification of patients with the target clinical findings in EHRs. Additionally, an NLP system consisting of section segmentation and relation discovery was developed to identify patients' family history. To assess the generalizability of the negation and family history algorithms, data from a different clinical institution were used in both algorithm evaluations. The temporal dimension of information extracted from clinical records, representing the trajectory of disease progression in patients, was also studied in this project. Clinical data of patients who lived in Olmsted County (Rochester, MN) from 1966 to 2010 were analyzed in this work. The patient records were modeled by diagnosis matrices with clinical events as rows and their temporal information as columns. A deep learning algorithm was used to find common temporal patterns within these diagnosis matrices.
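The dependency-based negation idea behind DEEPEN can be sketched as follows: rather than scanning a fixed window of words, follow the parse tree from a negation token and check whether the clinical concept lies in its syntactic scope. This sketch uses spaCy and its small English model purely for illustration (DEEPEN itself is built on the Stanford Dependency Parser, and its actual rules are more involved):

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the model is installed

def concept_negated(sentence, concept):
    """Report whether a negation token's syntactic scope covers the concept."""
    doc = nlp(sentence)
    for tok in doc:
        if tok.dep_ == "neg":                       # e.g. "not", "n't"
            scope = {t.text.lower() for t in tok.head.subtree}
            if concept.lower() in scope:
                return True
    return False

print(concept_negated("The patient does not have pneumonia.", "pneumonia"))  # True
print(concept_negated("Pneumonia was confirmed on imaging.", "pneumonia"))   # False
```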

13

Khor, Eng Keat. "Processing digital television streams." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/38113.

Full text
14

He, Bingsheng. "Cache-oblivious query processing /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?CSED%202008%20HE.

Full text
15

Woodbeck, Kris. "On neural processing in the ventral and dorsal visual pathways using the programmable Graphics Processing Unit." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27660.

Full text
Abstract:
We describe a system of biological inspiration that represents both pathways of the primate visual cortex. Our model is applied to multi-class object recognition and the creation of disparity maps from stereo images. All processing is done using the programmable graphics processor; we show that the Graphics Processing Unit (GPU) is a very natural platform for modeling the highly parallel nature of the brain. Each visual processing area in our model is closely based on the properties of the associated area within the brain. Our model covers areas V1 and V2, area V3 of the dorsal pathway and V4 of the ventral pathway of the primate visual cortex. Our model is able to programmatically tune its parameters to select the optimal cells with which to process any visual field. We define a biological feature descriptor that is appropriate for both multi-class object recognition and stereo disparity. We demonstrate that this feature descriptor is also able to match well under changes to rotation, scale and object pose. Our model is tested on the Caltech 101 object dataset and the Middlebury stereo dataset, performing well in both cases. We show that a significant speedup is achieved by using the GPU for all neural computation. Our results strengthen the case for using both the GPU and biologically-motivated techniques in computer vision.
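The V1 stage of models like this one is conventionally built from banks of Gabor filters, each tuned to an orientation and spatial frequency. A CPU-only sketch of a single orientation-tuned cell (the thesis runs such computations on the GPU; sizes and tuning parameters here are arbitrary):

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, theta=0.0, wavelength=6.0, sigma=3.0):
    """A V1 simple-cell receptive field: a Gaussian-windowed grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to preferred axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

img = np.zeros((32, 32))
img[:, 16] = 1.0                                 # a vertical bar stimulus
for theta, name in [(0.0, "vertical cell"), (np.pi / 2, "horizontal cell")]:
    response = convolve2d(img, gabor_kernel(theta=theta), mode="same")
    print(name, np.abs(response).max())          # the vertical cell wins
```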
16

Duval, Alexandra M. "Valorization of Carrot Processing Waste." DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2155.

Full text
Abstract:
Commercial carrot processors produce up to 175,000 tons of carrot waste annually. Carrot Mash (CM) is the term referring to the waste by-product of peeled baby carrot processing. Transportation of carrot processing waste is expensive due to its high water content (approx. 83-95%). Because CM is high in bioactive compounds (carotenoids) and dietary fibers, its conversion into a value-added by-product is expected to be of interest to the carrot processing industry. Hemicellulose-rich plant materials have proven to be a source of oligosaccharides, which are known for their beneficial prebiotic activity. The objectives of this research were to: 1) determine the effect of mechanical treatments on the extraction of water and bioactive compounds and evaluate the functional properties of carrot mash; 2) incorporate dried carrot mash into a beef patty and evaluate changes in pH, color, cooking yield, and texture; 3) apply an enzymatic treatment to carrot mash to promote the conversion of polysaccharides to oligosaccharides for prebiotic benefits. Mechanical separation of liquid and solid fractions by way of expeller pressing was efficient in extracting liquid while simultaneously increasing total solids by nearly 200%, the extraction of carotenoids by 1000%, and polyphenol content by nearly 97%. Mechanical treatments increased the fat-binding capacity on average by 183% compared to untreated mash. The addition of unpressed or expeller-pressed carrot mash increased the cooking yield of a beef patty by 3-13% without significantly changing its textural properties. Enzymatically treating the carrot mash significantly increased the concentration of oligosaccharides, up to 2.3%. These results suggest that carrot processing wastes can be physically and enzymatically modified and have immense potential to be utilized as a functional ingredient in human food rather than being landfilled, composted or used as animal feed.
17

Reid, D. J. "Picture-text processing and the learning of science by secondary schoolchildren." Thesis, University of Manchester, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383231.

Full text
18

Liu, Ying. "Query optimization for distributed stream processing." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3274258.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-07, Section: B, page: 4597. Adviser: Beth Plale. Title from dissertation home page (viewed Apr. 21, 2008).
19

Bui, Quoc Cuong 1974. "Containerless processing of stainless steel." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/9578.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 1998.
Vita.
Includes bibliographical references (leaves 74-75).
The rapid solidification of a Fe-12 wt% Cr-16 wt% Ni alloy in containerless processing conditions was investigated using an electromagnetic levitation facility. High-speed video and pyrometry allowed the study of phase selection and secondary nucleation mechanisms for that alloy as well as measurements of delay times and growth velocities. Double recalescence events were observed for the first time at temperatures near the T₀ temperature of the metastable ferritic phase, defining a value of the critical undercooling for metastable bcc nucleation significantly lower than previously reported. Phase selection during recalescence was successfully performed by use of different trigger materials: at temperatures below the bcc T₀, a bcc Fe trigger induced the primary nucleation of the metastable bcc phase, which subsequently transformed into the stable fcc phase, while an fcc Ni trigger caused the nucleation of the equilibrium fcc phase. Growth velocities were characterized for the γ phase growing into the undercooled liquid, the δ phase growing into the undercooled liquid, and the γ phase growing into the semi-solid primary bcc. It was found that a critical undercooling exists at which the growth velocity of the primary ferritic phase is equal to that of the secondary austenitic phase into the primary semi-solid. At undercoolings below this critical value, the equilibrium γ can overwhelm the primary δ and break into the undercooled liquid. Such a double recalescence event can therefore appear as a single event depending on the geometry of the detection equipment. From this observation, a model based on velocity and delay time arguments was proposed to explain discrepancies with earlier works.
by Quoc Cuong Bui.
S.M.
20

Weston, Margan Marianna Mackenzie. "Experimental Optical Quantum Science: Efficient Multi-Photon Sources for Quantum Information Science." Thesis, Griffith University, 2017. http://hdl.handle.net/10072/367516.

Full text
Abstract:
Quantum optics promises excellent capabilities for experimental demonstrations of quantum information processing. So far, the practical implementation of protocols displaying a clear quantum advantage has been limited, as they rely on perfect quantum states and high-fidelity measurements. Additionally, the fragile nature of the quantum states makes them susceptible to imperfections in measurement devices and environmental loss. Two of the key challenges for quantum technologies are producing single photons, and pairs of entangled photons, more efficiently, and overcoming the detrimental effects of photon loss. The research presented in this thesis tackles the practical realisation of quantum-enhanced protocols by improving spontaneous parametric down-conversion for producing photon pairs. I present a spontaneous parametric down-conversion source of frequency-uncorrelated polarisation-entangled photon pairs at telecom wavelengths. The source provides photon pairs that display, simultaneously, the key properties for high-performance quantum information tasks and the investigation of fundamental quantum physics. Specifically, the source provides high heralding efficiency, high quantum state purity and high entangled-state fidelity at the same time. Among the different tests applied, perfect non-classical interference between photons from independent sources with a visibility of (100 ± 5)% was observed. Additionally, the polarisation-unentangled version of the source achieved symmetric heralding efficiencies of up to (82 ± 2)%. The following two experiments presented in this thesis make use of these high-performance sources to implement an entanglement verification protocol over a high-loss quantum channel, and to perform a quantum metrology experiment, both of which were not previously achievable with existing sources or designs.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Natural Sciences
Science, Environment, Engineering and Technology
21

SONG, HYO-JIN. "PROCESSING PHASE TRANS." University of Cincinnati / OhioLINK, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1132249697.

Full text
22

Yarosz, Matthew James 1978. "Security sphere : digital image processing." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86750.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 75).
by Matthew James Yarosz.
M.Eng. and S.B.
23

Gharbi, Michael (Michael Yanis). "Learning efficient image processing pipelines." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120437.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [125]-138).
The high resolution of modern cameras puts significant performance pressure on image processing pipelines. Tuning the parameters of these pipelines for speed is subject to stringent image quality constraints and requires significant efforts from skilled programmers. Because quality is driven by perceptual factors with which most quantitative image metrics correlate poorly, developing new pipelines involves long iteration cycles, alternating between software implementation and visual evaluation by human experts. These concerns are compounded on modern computing platforms, which are increasingly mobile and heterogeneous. In this dissertation, we apply machine learning towards the design of high-performance, high-fidelity algorithms whose parameters can be optimized automatically on large-scale datasets via gradient based methods. We present applications to low-level image restoration and high performance image filtering on mobile devices.
by Michaël Gharbi.
Ph. D.
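The core idea, parameters of an image-processing pipeline optimized by gradient methods against a quality metric, fits in a few lines. In this toy sketch (synthetic images, a single blend parameter, plain gradient descent, all illustrative assumptions) the "pipeline" is one denoising step whose weight is learned by minimizing MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.uniform(0.0, 1.0, (64, 64))          # synthetic ground truth
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def box_blur(img):                               # stand-in denoising stage
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] + img[1:-1, 1:-1]) / 5
    return out

blurred = box_blur(noisy)
alpha = 0.0                                      # the pipeline parameter
for _ in range(200):                             # gradient descent on MSE
    out = alpha * blurred + (1.0 - alpha) * noisy
    grad = 2.0 * np.mean((out - clean) * (blurred - noisy))  # dMSE/dalpha
    alpha -= 5.0 * grad
print(alpha)                                     # learned blend weight
```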
24

Herrmann, Frederick P. (Frederick Paul). "An integrated associative processing system." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38020.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 97-105).
by Frederick Paul Herrmann.
Ph.D.
25

Vasconcellos, Brett W. (Brett William) 1977. "Parallel signal-processing for everyone." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/9097.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (p. 65-67).
We designed, implemented, and evaluated a signal-processing environment that runs on a general-purpose multiprocessor system, allowing easy prototyping of new algorithms and integration with applications. The environment allows the composition of modules implementing individual signal-processing algorithms into a functional application, automatically optimizing their performance. We decompose the problem into four independent components: signal processing, data management, scheduling, and control. This simplifies the programming interface and facilitates transparent parallel signal processing. For tested applications, our system both runs efficiently on single-processor systems and achieves near-linear speedups on symmetric-multiprocessor (SMP) systems.
by Brett W. Vasconcellos.
M.Eng.
26

Baran, Thomas A. (Thomas Anthony). "Conservation in signal processing systems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/74991.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 205-209).
Conservation principles have played a key role in the development and analysis of many existing engineering systems and algorithms. In electrical network theory for example, many of the useful theorems regarding the stability, robustness, and variational properties of circuits can be derived in terms of Tellegen's theorem, which states that a wide range of quantities, including power, are conserved. Conservation principles also lay the groundwork for a number of results related to control theory, algorithms for optimization, and efficient filter implementations, suggesting potential opportunity in developing a cohesive signal processing framework within which to view these principles. This thesis makes progress toward that goal, providing a unified treatment of a class of conservation principles that occur in signal processing systems. The main contributions in the thesis can be broadly categorized as pertaining to a mathematical formulation of a class of conservation principles, the synthesis and identification of these principles in signal processing systems, a variational interpretation of these principles, and the use of these principles in designing and gaining insight into various algorithms. In illustrating the use of the framework, examples related to linear and nonlinear signal-flow graph analysis, robust filter architectures, and algorithms for distributed control are provided.
by Thomas A. Baran.
Ph.D.
27

Kitchens, Jonathan Paul. "Acoustic vector-sensor array processing." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/60098.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 145-148).
Existing theory yields useful performance criteria and processing techniques for acoustic pressure-sensor arrays. Acoustic vector-sensor arrays, which measure particle velocity and pressure, offer significant potential but require fundamental changes to algorithms and performance assessment. This thesis develops new analysis and processing techniques for acoustic vector-sensor arrays. First, the thesis establishes performance metrics suitable for vector sensor processing. Two novel performance bounds define optimality and explore the limits of vector-sensor capabilities. Second, the thesis designs non-adaptive array weights that perform well when interference is weak. Obtained using convex optimization, these weights substantially improve conventional processing and remain robust to modeling errors. Third, the thesis develops subspace techniques that enable near-optimal adaptive processing. Subspace processing reduces the problem dimension, improving convergence or shortening training time.
by Jonathan Paul Kitchens.
Ph.D.
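For intuition about what a vector sensor adds: each sensor measures pressure plus three particle-velocity components, so even a single sensor has a direction-dependent steering vector [1, u] and can be "steered" by weighting its channels. A conventional delay-and-sum sketch with illustrative geometry and noise levels (the thesis's optimized and adaptive weights are more elaborate):

```python
import numpy as np

def steering(u):                       # pressure channel + 3 velocity channels
    return np.concatenate(([1.0], u))  # plane-wave response from direction u

rng = np.random.default_rng(2)
u_src = np.array([1.0, 0.0, 0.0])                 # true source direction
s = rng.normal(size=500)                          # source waveform
x = np.outer(steering(u_src), s) + 0.3 * rng.normal(size=(4, 500))

for u_look in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    a = steering(u_look)
    y = (a / (a @ a)) @ x                         # delay-and-sum output
    print(u_look, round(np.var(y), 2))            # power peaks on the source
```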
28

Eisenstein, Jacob (Jacob Richard). "Gesture in automatic discourse processing." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/44401.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
Includes bibliographical references (p. 145-153).
Computers cannot fully understand spoken language without access to the wide range of modalities that accompany speech. This thesis addresses the particularly expressive modality of hand gesture, and focuses on building structured statistical models at the intersection of speech, vision, and meaning. My approach is distinguished in two key respects. First, gestural patterns are leveraged to discover parallel structures in the meaning of the associated speech. This differs from prior work that attempted to interpret individual gestures directly, an approach that was prone to a lack of generality across speakers. Second, I present novel, structured statistical models for multimodal language processing, which enable learning about gesture in its linguistic context, rather than in the abstract. These ideas find successful application in a variety of language processing tasks: resolving ambiguous noun phrases, segmenting speech into topics, and producing keyframe summaries of spoken language. In all three cases, the addition of gestural features - extracted automatically from video - yields significantly improved performance over a state-of-the-art text-only alternative. This marks the first demonstration that hand gesture improves automatic discourse processing.
by Jacob Eisenstein.
Ph.D.
29

Pardo-Guzman, Dino Alejandro. "Design and processing of organic electroluminescent devices." Diss., The University of Arizona, 2000. http://hdl.handle.net/10150/284162.

Full text
Abstract:
The present dissertation compiles three aspects of my Ph.D. work on OLED device design, fabrication and characterization. The first chapter is a review of the concepts and theories describing the mechanisms of organic electroluminescence. The second chapter makes use of these concepts to articulate some basic principles for the design of efficient and stable OLEDs. The third chapter describes the main characterization and sample preparation techniques used throughout this dissertation. Chapter IV describes the processing of efficient organic electroluminescent (EL) devices with ITO\TPD\AlQ₃\Mg:Ag structures. The screen printing technique was used in an unusual mode to render thin films of a hole transport polymeric blend on the order of 60-80 nm. EL devices were then fabricated on top of these screen-printed films to provide ∼0.9% quantum efficiencies, comparable to spin coating with the same structures. Various polymer:TPD and solvent combinations were studied to find the paste with the best rheological properties. The same technique was also used to deposit a patterned MEH-PPV film. Chapter V describes my research work on the wetting of TPD on ITO substrates. The wetting was monitored by following its surface morphology evolution as a function of temperature. The effect of these surface changes was then correlated to the I-V-L characteristics of devices made with these TPD films. The surface roughness, measured with tapping AFM, showed island formation at temperatures as low as 50-60°C. I also investigated the effect of the purity of materials like AlQ₃ on the device EL performance, as described in Chapter VI. In order to improve the purity of these environmentally degradable complexes, a new in situ purification technique was developed, with excellent enhancement of the EL cell properties. The in situ purification process was then used to purify/deposit organic dyes with improved film formation and EL characteristics.
30

Hartami, A. (Aprilia). "Designing a persuasive game to raise environmental awareness among children:a design science research." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201805312053.

Full text
Abstract:
Considering the environmental problems that occur globally, environmental education was perceived in this study as an important key to dealing with those problems. This study was carried out to support environmental education by exploring persuasive games as a way to raise environmental awareness. Through a design science research (DSR) approach, the EcoScout Game was designed as a mobile persuasive game for conveying some existing environmental issues to children. During the DSR, a playtest phase was conducted to evaluate the first level of the EcoScout Game, which contains persuasion goals to keep the environment clean and to dispose of waste correctly. The playtest involved 10 participants aged 4 to 6 years, who provided various responses. A majority of the participants showed interest in playing the game. While the participants were playing the game, all of them understood that they must keep the environment clean. More than half of the participants (60%) understood and were motivated to follow the waste disposal rules. Several improvements to the game are proposed, especially to help children who are not yet able to read the textual labels and descriptions in the game. Further development and research are required to advance the game and to confirm its effectiveness in persuading children. However, this study showed the potential of persuasive games for raising environmental awareness among children in the context of environmental education. Finally, this study demonstrated the potential of applying DSR to persuasive game design.
31

Golab, Lukasz. "Sliding Window Query Processing over Data Streams." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2930.

Full text
Abstract:
Database management systems (DBMSs) have been used successfully in traditional business applications that require persistent data storage and an efficient querying mechanism. Typically, it is assumed that the data are static, unless explicitly modified or deleted by a user or application. Database queries are executed when issued and their answers reflect the current state of the data. However, emerging applications, such as sensor networks, real-time Internet traffic analysis, and on-line financial trading, require support for processing of unbounded data streams. The fundamental assumption of a data stream management system (DSMS) is that new data are generated continually, making it infeasible to store a stream in its entirety. At best, a sliding window of recently arrived data may be maintained, meaning that old data must be removed as time goes on. Furthermore, as the contents of the sliding windows evolve over time, it makes sense for users to ask a query once and receive updated answers over time.

This dissertation begins with the observation that the two fundamental requirements of a DSMS are dealing with transient (time-evolving) rather than static data and answering persistent rather than transient queries. One implication of the first requirement is that data maintenance costs have a significant effect on the performance of a DSMS. Additionally, traditional query processing algorithms must be re-engineered for the sliding window model because queries may need to re-process expired data and "undo" previously generated results. The second requirement suggests that a DSMS may execute a large number of persistent queries at the same time, therefore there exist opportunities for resource sharing among similar queries.

The purpose of this dissertation is to develop solutions for efficient query processing over sliding windows by focusing on these two fundamental properties. In terms of the transient nature of streaming data, this dissertation is based upon the following insight. Although the data keep changing over time as the windows slide forward, the changes are not random; on the contrary, the inputs and outputs of a DSMS exhibit patterns in the way the data are inserted and deleted. It will be shown that the knowledge of these patterns leads to an understanding of the semantics of persistent queries, lower window maintenance costs, as well as novel query processing, query optimization, and concurrency control strategies. In the context of the persistent nature of DSMS queries, the insight behind the proposed solution is that various queries may need to be refreshed at different times, therefore synchronizing the refresh schedules of similar queries creates more opportunities for resource sharing.
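The transient-data requirement described above is concrete in even the simplest window operator: new tuples update a running aggregate, and expired tuples must have their contribution "undone" rather than recomputed from scratch. A minimal time-based sliding-window average (the window length and stream values are illustrative):

```python
from collections import deque

class SlidingWindowAvg:
    """Average over a time-based sliding window, maintained incrementally."""
    def __init__(self, window):
        self.window, self.items, self.total = window, deque(), 0.0

    def insert(self, timestamp, value):
        self.items.append((timestamp, value))
        self.total += value
        # "Undo" the contribution of tuples that slid out of the window.
        while self.items[0][0] <= timestamp - self.window:
            self.total -= self.items.popleft()[1]

    def answer(self):                  # the persistent query's current answer
        return self.total / len(self.items)

q = SlidingWindowAvg(window=10)
for t, v in [(1, 5.0), (4, 7.0), (12, 3.0), (15, 1.0)]:
    q.insert(t, v)
    print(t, q.answer())               # updated answer as the window slides
```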
32

MacDonald, Darren T. "Image segment processing for analysis and visualization." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27641.

Full text
Abstract:
This thesis is a study of the probabilistic relationship between objects in an image and image appearance. We give a hierarchical, probabilistic criterion for the Bayesian segmentation of photographic images. We validate the segmentation against the Berkeley Segmentation Data Set, where human subjects were asked to partition digital images into segments each representing a 'distinguished thing'. We show that there exists a strong dependency between the hierarchical segmentation criterion, based on our assumptions about the visual appearance of objects, and the distribution of ground truth data. That is, if two pixels have similar visual properties then they will often have the same ground truth state. Segmentation accuracy is quantified by measuring the information cross-entropy between the ground truth probability distribution and an estimate obtained from the segmentation. We consider the proposed method for estimating joint ground truth probability to be an important tool for future image analysis and visualization work.
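The evaluation measure described, information cross-entropy between the ground-truth distribution and the segmentation's estimate, is straightforward to compute; lower values mean the segmentation predicts the ground truth better. The distributions below are made up for illustration:

```python
import numpy as np

def cross_entropy(p_truth, q_est, eps=1e-12):
    """Cross-entropy in bits between ground truth p and an estimate q."""
    return -np.sum(p_truth * np.log2(np.clip(q_est, eps, 1.0)))

p = np.array([0.7, 0.2, 0.1])          # ground-truth state probabilities
print(cross_entropy(p, np.array([0.6, 0.3, 0.1])))  # close estimate: low
print(cross_entropy(p, np.array([0.1, 0.1, 0.8])))  # poor estimate: high
```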
33

Vijayakumar, Nithya Nirmal. "Data management in distributed stream processing systems." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278228.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Computer Science, 2007.
Source: Dissertation Abstracts International, Volume: 68-09, Section: B, page: 6093. Adviser: Beth Plale. Title from dissertation home page (viewed May 9, 2008).
34

Zsilavecz, Guido. "Rita+ : an SGML based document processing system." Master's thesis, University of Cape Town, 1993. http://hdl.handle.net/11427/13531.

Full text
Abstract:
Bibliography: leaves 81-83.
Rita+ is a structured, syntax-directed document processing system which allows users to edit documents interactively and to display these documents in a manner determined by the user. The system is SGML (Standard Generalized Markup Language) based, in that it reads and saves files as SGML marked-up documents and uses SGML document type definitions as templates for document creation. The display or layout of the document is determined by the Rita Semantic Definition Language (RSDL). With RSDL it is possible to assign semantic actions quickly to an SGML file. Semantic definitions also allow users to export documents to serve as input for powerful batch formatters. Each semantic definition file is associated with a specific document type definition. The Rita+ editor uses the SGML document type definition to allow the user to create structurally correct documents, and allows the user to create these in an almost arbitrary manner. The editor displays the user document together with the associated document structure in a different window. Documents are created by selecting document elements displayed in a dynamic menu. This menu changes according to the current position in the document structure. As it is possible to have documents which are incomplete, in the sense that required document elements are missing, Rita+ will indicate which document elements are required to complete the document with a minimum number of insertions, by highlighting the required elements in the dynamic menu. The Rita+ system is built on top of an existing system, to which SGML and RSDL support, as well as incomplete-document support and menu marking, have been added.
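The dynamic menu Rita+ derives from an SGML document type definition can be sketched as a lookup against a content model: at each editing position, only the elements the grammar allows next are offered, and required-but-missing elements can be highlighted. The toy "report" grammar below is hypothetical, not Rita+'s actual DTD handling:

```python
CONTENT_MODEL = {                      # hypothetical DTD-like content models
    "report": ["title", "abstract", "section"],   # required children, in order
    "section": ["heading", "para"],
}

def dynamic_menu(element, children_so_far):
    """Return the next element required to complete this element, if any."""
    required = CONTENT_MODEL.get(element, [])
    if len(children_so_far) < len(required):
        return required[len(children_so_far)]     # highlight this in the menu
    return None                                   # element is already complete

print(dynamic_menu("report", ["title", "abstract"]))  # -> "section"
print(dynamic_menu("section", ["heading", "para"]))   # -> None (complete)
```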
35

Smith, Charles Eldon. "An information-processing theory of party identification /." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487848531364663.

Full text
36

Ikiz, Yuksel. "Fiber Length Measurement by Image Processing." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000809-225316.

Full text
Abstract:

IKIZ, YUKSEL. Fiber Length Measurement by Image Processing. (Under the direction of Dr. Jon P. Rust.) This research studied the accuracy and feasibility of cotton fiber length measurement by image processing as an alternative to existing systems. Current systems have some weaknesses, especially in Short Fiber Content (SFC) determination, which is becoming an important length parameter in industry. Seventy-two treatments of five factors were analyzed for length and time measurements by our own computer program. The factors are: sample preparation (without fiber crossover and with fiber crossover), lighting (backlighting and frontlighting), resolution (37-micron, 57-micron, 106-micron, and 185-micron), preprocessing (4-neighborhood and 8-neighborhood), and processing (outlining, thinning, and adding broken skeletons). The best results in terms of accuracy, precision and analysis time for images without fiber crossovers were: 106-micron resolution with frontlighting, using an 8-neighborhood thresholding algorithm and an outline algorithm for length determination. With fiber crossovers: 57-micron resolution with backlighting, using an 8-neighborhood thresholding algorithm and a thinning algorithm combined with an adding algorithm for joining broken skeletons. Using the above conditions, 1775 area can be analyzed using our current equipment in 15 seconds. In the case of images with crossovers, only 117 can be analyzed in 15 seconds. This research demonstrates that successful sample preparation without fiber crossovers would create the best fiber length measurement technique; however, with fiber crossovers the system has been proven effective as well.
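The processing steps compared in this study (thresholding, outlining, thinning) are standard image-processing operations; the thinning-based length measurement, for instance, can be sketched with scikit-image on an already-binarized image. The synthetic "fiber" and the calibration factor below are illustrative assumptions:

```python
import numpy as np
from skimage.morphology import skeletonize

img = np.zeros((50, 50), dtype=bool)
img[10:40, 24:27] = True              # a 3-pixel-wide vertical "fiber"

skeleton = skeletonize(img)           # thinning: reduce to a 1-px centerline
length_px = int(skeleton.sum())       # skeleton pixel count ~ fiber length
microns_per_px = 106                  # e.g. the 106-micron resolution above
print(length_px * microns_per_px, "microns (estimated fiber length)")
```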

37

Martínez-Ayers, Raúl Andrés 1977. "Formation and processing of rheocast microstructures." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28883.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2004.
Includes bibliographical references (leaves 108-114).
The importance of semi-solid metal processing derives primarily from its ability to form high-integrity parts from lightweight alloys. Since the discovery of the semi-solid metal microstructure, most part production was by reheating of billets which possessed a suitable microstructure ("thixocasting"). However, it is now apparent that there are significant advantages to forming semi-solid slurry directly from liquid alloy ("rheocasting"), and efficient rheocasting processes have been engineered. In this work, experimental and analytical approaches were taken to study how non-dendritic microstructures form and evolve in Al-4.5wt%Cu alloy during the earliest stages of solidification. Experimental results showed that particles in quenched rheocast alloy were already spheroidal, and free of entrapped eutectic, after 5 seconds of solidification time. Spheroidal particles were also formed by reheating equiaxed dendrites of approximately 10 μm radius above the eutectic temperature for 5 seconds, but these spheroids contained entrapped eutectic. In both rheocasting and reheating experiments, the average particle radius was found to increase with solidification time at a rate that closely follows the classical dendrite arm ripening curve. The particle growth models developed were compared with the average particle radius measurements and particle solute content measurements. The maximum cooling rate to maintain spheroidal interface stability at various solid fractions was studied experimentally. A modified stability model which considered particle interaction through solute field overlap was developed and found to be in good agreement with experimental data. A simple method for the foundry to determine the maximum cooling rate for a given slurry was proposed. The fluidity of rheocast A357 alloy slurries was contrasted with the fluidity of superheated liquid. Rheocast slurries with 37% solid particles were found to flow about half as far as fully liquid alloy superheated 20°C above the liquidus.
by Raul A. Martinez-Ayers.
Ph.D.
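For reference, the "classical dendrite arm ripening curve" the abstract cites is usually written as a cube-root coarsening law; this is the standard textbook form, not an equation quoted from the thesis:

```latex
% Classical ripening law: the average particle (or arm) radius r grows
% with the cube root of coarsening time t, from initial radius r_0.
r^{3} - r_{0}^{3} = K\,t \quad\Longrightarrow\quad r \propto t^{1/3}
```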
APA, Harvard, Vancouver, ISO, and other styles
38

Bhattacharya, Dipankar. "Neural networks for signal processing." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21924.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Benjamin, Jim Isaac. "Quadtree algorithms for image processing /." Online version of thesis, 1991. http://hdl.handle.net/1850/11078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Fuyu. "Query processing in location-based services." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4634.

Full text
Abstract:
With the advances in wireless communication technology and advanced positioning systems, a variety of Location-Based Services (LBS) have become available to the public. Mobile users can issue location-based queries to probe their surrounding environments. One important type of query in LBS is the moving monitoring query over mobile objects. Due to the high frequency of location updates and the expensive cost of continuous query processing, server computation capacity and wireless communication bandwidth are the two limiting factors for large-scale deployment of moving object database systems. To address both scalability factors, distributed computing has been considered. These schemes enable moving objects to participate as peers in query processing, substantially reducing the demand on server computation and the wireless communications associated with location updates. In the first part of this dissertation, we propose a distributed framework to process moving monitoring queries over moving objects in a spatial network environment. In the second part, in order to reduce the communication cost, we leverage both on-demand data access and periodic broadcast to design a new hybrid distributed solution for moving monitoring queries in an open space environment. Location-based services make our daily life more convenient. However, to receive the services, one has to reveal one's location and query information when issuing location-based queries. This could lead to a privacy breach if this personal information is possessed by untrusted parties. In the third part, we introduce a new privacy protection measure called query l-diversity, and provide two cloaking algorithms to achieve both location k-anonymity and query l-diversity to better protect user privacy. In the fourth part, we design a hybrid three-tier architecture to help reduce privacy exposure. Finally, in the fifth part, we propose to use the Road Network Embedding technique to process privacy-protected queries.
ID: 029050964; System requirements: World Wide Web browser and PDF reader.; Mode of access: World Wide Web.; Thesis (Ph.D.)--University of Central Florida, 2010.; Includes bibliographical references (p. 138-145).
Ph.D.
Doctorate
Department of Electrical Engineering and Computer Science
Engineering and Computer Science
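A minimal sketch of the combined location k-anonymity / query l-diversity test named in the abstract (names and data layout are our own; the dissertation's cloaking algorithms construct the regions, which this check merely validates):

```python
# Sketch of the privacy test: a cloaked region satisfies location
# k-anonymity if it covers at least k distinct users, and query
# l-diversity if those users issue at least l distinct query types,
# so an individual query cannot be linked back to one user.

def is_private(region_queries: list[tuple[str, str]], k: int, l: int) -> bool:
    """region_queries: (user_id, query_type) pairs inside a cloaked region."""
    users = {u for u, _ in region_queries}
    query_types = {q for _, q in region_queries}
    return len(users) >= k and len(query_types) >= l

print(is_private([("u1", "bar"), ("u2", "hospital"), ("u3", "bar")], k=3, l=2))  # True
```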
APA, Harvard, Vancouver, ISO, and other styles
41

Bomström, H. (Henri). "Improving video game designer workflow in procedural content generation-based game design:a design science approach." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063237.

Full text
Abstract:
The time and money spent on video games are rapidly increasing, as annual U.S. game industry consumer spending has reached 23.5 billion dollars. The cost of producing video game content has grown in accordance with consumer demand. Artificial intelligence (AI) has been suggested as a way to scale production costs with the demand. In addition to lowering content production costs, AI enables the creation of new forms of gameplay that are not possible with the current toolbox of the industry. The utilization of AI in game design is currently difficult, as it requires both theoretical knowledge and practical expertise. This thesis improved game designer workflow in procedural content generation (PCG) based game design by explicating the necessary theoretical frameworks and practical steps needed to adopt AI-based practices in game design. Game designer workflow in PCG-based game design was improved by utilizing the design science research method (DSR). The constructed artefact was determined to be a method in accordance with the DSR knowledge contribution framework, and it was evaluated by using the Quick & Simple strategy from the FEDS framework. The risks related to artefact construction were assessed in accordance with the RMF4DSR framework. The metrics used to measure the performance of the artefact were determined by employing the GQM framework. Finally, the proposed method was evaluated by following it in constructing a simple PCG-based game with an accompanying AI system. The evaluation was performed by utilizing the FEDS framework in an artificial setting. After gathering and analysing the data from the artefact construction and evaluation, the method was modified to address its shortcomings. The produced design method is the main contribution of this thesis. The proposed method lowers the threshold for adopting PCG-based game design practices, and it helps designers, developers, and researchers by providing concrete and actionable steps to follow. The necessary theoretical frameworks and decision points are presented in a single method that demystifies the process of designing PCG-based games. Additional theoretical knowledge has been contributed by studying the topic from a practical perspective and extracting requirements from an actual design process. The method can be used as a practical cookbook for PCG-based projects and as a theoretical base for further studies on PCG-based game design. Future research tasks include evaluating the proposed method in an organizational context with real users. An organizational context also warrants means of managing risks in PCG-based game design projects. Finally, generator evaluation and explicit guidance on generator control are important future research topics.
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Xun. "Free Wind Path Detecting in Google Map Using Image Processing." Thesis, University of Gävle, Department of Industrial Development, IT and Land Management, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-6920.

Full text
Abstract:

The urban heat island has become a serious environmental problem nowadays. One of its causes is that buildings block the flow of wind. When planning an urban geometry that reduces the influence of the urban heat island, one solution is to find the free wind paths. While many previous works are based on GIS, this paper offers a new method founded on image processing of a digital map. Using MATLAB instead of a GIS tool, the algorithm provides a simple and easy-to-use way to detect the free wind paths in the eight compass directions.
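A toy version of the direction-scanning idea, assuming a binary building mask (our own illustration in Python rather than the thesis's MATLAB code):

```python
# Sketch of "free wind path" detection: walk across a binary building mask
# along one of the eight compass directions and check that no building
# pixel blocks the straight line before it leaves the map.
import numpy as np

DIRECTIONS = {"E": (0, 1), "W": (0, -1), "S": (1, 0), "N": (-1, 0),
              "SE": (1, 1), "NW": (-1, -1), "SW": (1, -1), "NE": (-1, 1)}

def is_free_path(buildings: np.ndarray, start: tuple[int, int], direction: str) -> bool:
    """True if the line from `start` exits the map without hitting a building."""
    dr, dc = DIRECTIONS[direction]
    r, c = start
    while 0 <= r < buildings.shape[0] and 0 <= c < buildings.shape[1]:
        if buildings[r, c]:          # 1 = building pixel blocks the wind
            return False
        r, c = r + dr, c + dc
    return True
```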

APA, Harvard, Vancouver, ISO, and other styles
43

Boufounos, Petros T. 1977. "Signal processing for DNA sequencing." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/17536.

Full text
Abstract:
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002.
Includes bibliographical references (p. 83-86).
DNA sequencing is the process of determining the sequence of chemical bases in a particular DNA molecule, nature's blueprint of how life works. The advancement of biological science has created a vast demand for sequencing methods, which needs to be addressed by automated equipment. This thesis addresses one part of that process, known as base calling: the conversion of the electrical signal (the electropherogram) collected by the sequencing equipment into a sequence of letters drawn from {A, T, C, G} that corresponds to the sequence of the molecule sequenced. This work formulates the problem as a pattern recognition problem and observes its striking resemblance to the speech recognition problem. We therefore propose combining Hidden Markov Models and Artificial Neural Networks to solve it. In this formulation we derive an algorithm for training both models together. Furthermore, we devise a method to create very accurate training data, requiring minimal hand-labeling. We compare our method with the de facto standard, PHRED, and produce comparable results. Finally, we propose alternative HMM topologies that have the potential to significantly improve the performance of the method.
by Petros T. Boufounos.
M.Eng. and S.B.
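To sketch the HMM half of the proposed HMM/ANN combination, here is a toy Viterbi decoder over the four bases; in the thesis's formulation the per-frame observation scores would come from the neural network, whereas here they are arbitrary log-likelihoods (illustrative only):

```python
# Toy Viterbi decoding over the four DNA bases. Observation scores stand in
# for the neural network outputs of the HMM/ANN base caller sketched above.
import numpy as np

BASES = ["A", "T", "C", "G"]

def viterbi(obs_scores: np.ndarray, trans: np.ndarray) -> str:
    """obs_scores: (frames, 4) log-likelihoods; trans: (4, 4) log-transitions."""
    n, k = obs_scores.shape
    delta = obs_scores[0].copy()            # best score ending in each state
    back = np.zeros((n, k), dtype=int)      # best predecessor pointers
    for t in range(1, n):
        scores = delta[:, None] + trans     # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + obs_scores[t]
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):           # backtrack the best state sequence
        path.append(back[t][path[-1]])
    return "".join(BASES[s] for s in reversed(path))
```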
APA, Harvard, Vancouver, ISO, and other styles
44

Chiu, William. "Processing of Supported Silica Membranes for Gas Separation." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1349815421.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Luo, Chuanjiang. "Laplace-based Spectral Method for Point Cloud Processing." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1388661251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Shi, Xun. "Effect of Processing and Formulations on Cocoa Butter." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420772706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Jiayin. "Building Efficient Large-Scale Big Data Processing Platforms." Thesis, University of Massachusetts Boston, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10262281.

Full text
Abstract:

In the era of big data, many cluster platforms and resource management schemes are created to satisfy the increasing demands on processing a large volume of data. A general setting of big data processing jobs consists of multiple stages, and each stage represents a generally defined data operation such as filtering and sorting. To parallelize the job execution in a cluster, each stage includes a number of identical tasks that can be concurrently launched at multiple servers. Practical clusters often involve hundreds or thousands of servers processing a large batch of jobs. Resource management, which governs cluster resource allocation and job execution, is extremely critical for system performance.

Generally speaking, there are three main challenges in resource management of the new big data processing systems. First, while there are various pending tasks from different jobs and stages, it is difficult to determine which ones deserve priority to obtain the resources for execution, considering the tasks' different characteristics such as resource demand and execution time. Second, there exists dependency among the tasks that can be concurrently running. For any two consecutive stages of a job, the output data of the former stage is the input data of the latter. The resource management has to comply with such dependency. The third challenge is the inconsistent performance of the cluster nodes. In practice, the run-time performance of every server varies. The resource management needs to dynamically adjust the resource allocation according to the performance change of each server.

The resource management in existing platforms and prior work often relies on fixed user-specific configurations, and assumes consistent performance on each node. The performance, however, is not satisfactory under various workloads. This dissertation aims to explore new approaches to improving the efficiency of large-scale big data processing platforms. In particular, the run-time dynamic factors are carefully considered when the system allocates the resources. New algorithms are developed to collect run-time data and predict the characteristics of jobs and the cluster. We further develop resource management schemes that dynamically tune the resource allocation for each stage of every running job in the cluster. New findings and techniques in this dissertation will certainly provide valuable and inspiring insights to other similar problems in the research community.
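As a toy illustration of run-time-aware scheduling in this spirit (the names and the greedy policy are our own, not the dissertation's algorithms), one can estimate each ready task's finish time from the currently observed server speeds and dispatch accordingly:

```python
# Sketch of run-time-aware dispatch: rather than a fixed configuration,
# predict each task's finish time from each server's observed speed and
# assign the highest-priority ready task (dependencies met) to the server
# that would finish it soonest.
import heapq

def dispatch(ready_tasks, servers):
    """ready_tasks: [(priority, task_id, work_units)]; servers: {id: speed}."""
    servers = dict(servers)                     # avoid mutating the caller's view
    heapq.heapify(ready_tasks)                  # smallest priority value first
    assignments = []
    while ready_tasks and servers:
        _, task, work = heapq.heappop(ready_tasks)
        best = min(servers, key=lambda s: work / servers[s])
        assignments.append((task, best, work / servers[best]))
        del servers[best]                       # one task per server per round
    return assignments

print(dispatch([(0, "sort-1", 100), (1, "filter-2", 40)], {"s1": 2.0, "s2": 1.0}))
```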

APA, Harvard, Vancouver, ISO, and other styles
48

Li, Quanzhong. "Indexing and path query processing for XML data." Diss., The University of Arizona, 2004. http://hdl.handle.net/10150/290141.

Full text
Abstract:
XML has emerged as a new standard for information representation and exchange on the Internet. To efficiently process XML data, we propose the extended preorder numbering scheme, which determines the ancestor-descendant relationship between nodes in the hierarchy of XML data in constant time, and adapts to the dynamics of XML data by allocating extra space. Based on this numbering scheme, we propose sort-merge based algorithms, εA-Join and εε-Join, to process ancestor-descendant path expressions. The experimental results showed an order of magnitude performance improvement over conventional methods. We further propose the partition-based algorithms, which can be chosen by a query optimizer according to the characteristics of the input data. For complex path expressions with branches, we propose the Containment B⁺-tree (CB-tree) index and the IndexTwig algorithm. The CB-tree, which is an extension of the B⁺-tree, supports both the containment query and the reverse containment query. It is an effective indexing scheme for XML documents with or without a small number of recursions. The proposed IndexTwig algorithm works with any index supporting containment and reverse containment queries, such as the CB-tree. We also introduce a simplified output model, which outputs only the necessary result of a path expression. The output model enables the Fast Existence Test (FET) optimization to skip unnecessary data and avoid generating unwanted results. Also in this dissertation, we introduce techniques to process the predicates in XML path expressions using the EVR-tree. The EVR-tree combines the advantages of indexing on values or elements individually using B⁺-trees. It utilizes the high value selectivity and/or high structural selectivity, and provides ordered element access by using a priority queue. At the end of the dissertation, we introduce the XISS/R system, which is an implementation of the XML Indexing and Storage System (XISS) on top of a relational database. The XISS/R includes a web-based user interface and a XPath query engine to translate XPath queries into efficient SQL statements.
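The constant-time ancestor-descendant test enabled by an extended preorder numbering can be sketched as follows, assuming each node carries an (order, size) pair with descendants numbered inside the half-open interval (order, order + size] (a simplified reading of the scheme; exact details are in the dissertation):

```python
# Minimal sketch of the extended preorder ancestor-descendant test. The
# extra "size" slack beyond a node's current descendants is what lets the
# scheme absorb later insertions without renumbering the whole document.

def is_ancestor(anc: tuple[int, int], desc: tuple[int, int]) -> bool:
    """anc, desc: (order, size) pairs assigned to XML nodes."""
    a_order, a_size = anc
    d_order, _ = desc
    return a_order < d_order <= a_order + a_size

# Example numbering: root=(1, 100), a child=(2, 10), a non-descendant=(120, 5).
print(is_ancestor((1, 100), (2, 10)))    # True
print(is_ancestor((1, 100), (120, 5)))   # False
```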
APA, Harvard, Vancouver, ISO, and other styles
49

Le, Hoang Duc Khanh Computer Science & Engineering Faculty of Engineering UNSW. "Visual dataflow language for image processing." Awarded by:University of New South Wales. Computer Science & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40776.

Full text
Abstract:
Most current data-flow visual programming languages (DFVPLs) support flow control to facilitate experiments and complex problems. However, current approaches in DFVPLs still remain inefficient. We show that inadequacies in existing visual programming languages may be magnified in applications involving image analysis. These include a lack of efficient communication mechanisms and a strong dependency on human involvement to customise properties. For instance, properties in one computational component cannot be shared with other components. Moreover, conditional expressions used in control components hold data values that are unrelated to those computational components. Furthermore, since image processing libraries usually only explicitly support pipeline processing, as exemplified by the widely used Insight Toolkit for Medical Image Segmentation and Registration (ITK), a looping algorithm would be difficult to implement without a feedback mechanism supported by the visual language itself. We propose a data-flow visual programming language that encompasses several novel control constructs and parameterised computational units. These components are facilitated by a novel hybrid data-flow model. We also present several conceptual models and design alternatives for control constructs. Several mechanisms and techniques are provided to enhance data propagation for these components. We demonstrate, in an environment that utilises ITK as the underlying processing engine, that the inadequacies in existing DFVPLs can be satisfactorily addressed through the visual components proposed in this thesis.
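A rough, self-contained sketch of the feedback construct the abstract argues pipeline-only toolkits lack (our own toy model, not the proposed language): a loop node re-feeds a stage's output into its input until a termination condition holds, which is how an iterative image-processing algorithm could be expressed in a dataflow graph:

```python
# Toy "loop node" for a dataflow graph: apply `stage` repeatedly, feeding
# its output back as input, until `done` reports convergence.

def loop_node(stage, done, data, max_iters=100):
    """Iterate `stage` with feedback until `done(data)` or max_iters."""
    for _ in range(max_iters):
        data = stage(data)
        if done(data):
            break
    return data

# Toy iterative stage: shrink a value until it falls below a threshold.
result = loop_node(stage=lambda x: x * 0.5, done=lambda x: x < 1.0, data=37.0)
print(result)  # 0.578125
```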
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Alice 1975. "Eigenstructure based speech processing in noise." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46218.

Full text
APA, Harvard, Vancouver, ISO, and other styles