Dissertations / Theses on the topic 'PROCESSING FRAMEWORK'

Consult the top 50 dissertations / theses for your research on the topic 'PROCESSING FRAMEWORK.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Childress, Lawrence. "The Loss-Processing Framework." Digital Commons @ East Tennessee State University, 2021. https://dc.etsu.edu/etd/3896.

Full text
Abstract:
The circumstances of responding to loss due to human death are among the most stressful experiences encountered in life. Although grief’s symptoms are typically considered essential to their gradual diminishment, possible negative impacts of complications related to grief are also well known, and have been associated with detriments to mental and physical health. Grief, however, can also generate transformative positive change. Thus, albeit ineludible, responding to loss is not uniformly experienced, expressed, or understood. It is also culturally-shaped, making attempts to define “normal” grief, as well as to label some grief “abnormal”—and to medicalize it—possibly problematic. Bereavement (the situation surrounding a death) and mourning (the publicly expressed response to loss due to death) are changing. Some of these changes (e.g., the increase in hospice care settings prior to deaths, and alterations in the ritual responses following all deaths—irrespective of their context) may have important implications for avoiding grief’s possible complications and for promoting its potential benefits. An improved alignment of grief theory, research, and practice is warranted; but theories of grief are diverse, and historically have not been empirically well-supported. This research articulates a new grief model, the loss-processing framework, featuring three dimensional components (perception, orientation, and direction). As a first step toward validation of the framework, also included is an empirical study examining retrospective descriptive reports of adult loss response relating to the first of these three dimensions (perception). As an interpretive, translational approach to understanding grief, the loss-processing framework may serve to positively impact grieving, health, and life quality.
APA, Harvard, Vancouver, ISO, and other styles
2

Pluskal, Jan. "Framework for Captured Network Communication Processing." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-413326.

Full text
Abstract:
This thesis discusses approaches to extracting and analysing data from captured network communication. It evaluates currently available solutions, both individual tools and complete environments for network forensic analysis. Analysis of these tools revealed shortcomings that prevent the integration of existing solutions for the purposes of the SEC6NET project, and goals were set that the proposed solution must meet. Based on these goals and on knowledge gained from earlier prototypes, the problem was decomposed into functionally related blocks, which were implemented as independent modules capable of cooperating with each other. After every change in the implementation, correct functionality is verified by unit test suites covering the major part of the code. Before development began, it was necessary to assess the current situation in both the commercial and open-source spheres. A comparison of the tools used for forensic network analysis gave us a clear idea of which part of the market our solution should target and which functionality is poorly handled in the individual tools. The main requirements and the direction of development were then established. Development of the reconstruction framework started with designing the architecture and decomposing the processing of captured communication into coherent parts handled by individual modules. Previous knowledge and experience gained while developing the reconstruction tool Reconsuite helped us form the processing pipeline through which the data passes. Next, basic components were designed that work with captured communication in various PCAP file formats, split the communication into conversations, perform defragmentation at the IP level and, for TCP communication, reassemble the individual flows. In the early phase of development we focused on communication encapsulated in the low-level protocols Ethernet, IPv4/IPv6, TCP and UDP. After defining the component interfaces, further research into network protocols was necessary, together with algorithms for processing captured traffic that differs from the standard and therefore cannot be handled by the well-known procedures described in RFCs or implemented in operating-system kernels. Because the capture-processing stage does not take part in the communication itself, data lost or corrupted during capture, or routed along a different path, cannot be recovered by retransmission; other mechanisms must be used to mark or recover the missing data - the algorithms performing IP defragmentation and TCP reassembly. After implementation and testing, a problem was discovered with the separation of individual TCP sessions that could not be solved with the original design. After analysing the problem, the architecture of the processing pipeline was changed, increasing the amount of reconstructed data by tens of percent. The final part describes the methodology used to benchmark the implemented solution and to compare it with existing tools. Because reconstruction of application data is too specific a task, the performance comparison measured only processing speed and memory consumption for flow separation, IPv4 defragmentation and TCP reassembly, i.e. operations common to all reconstruction tools. The comparison showed that Netfox.Framework outperforms its competitors Wireshark and Network Monitor in both processing speed and memory consumption. Both generated traffic and samples of real communication captured in a laboratory environment were used as test data.
APA, Harvard, Vancouver, ISO, and other styles
3

Cevik, Alper. "A Medical Image Processing And Analysis Framework." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612965/index.pdf.

Full text
Abstract:
Medical image analysis is one of the most critical studies in the field of medicine, since the results of the analysis guide radiologists in diagnosis, treatment planning, and verification of the administered treatment. Therefore, accuracy in the analysis of medical images is at least as important as accuracy in the data acquisition processes. Medical images require the sequential application of several image post-processing techniques in order to be used for quantification and analysis of the intended features. The main objective of this thesis study is to build an application framework which enables analysis and quantification of several features in medical images with minimized input-dependency of the results. The intended application presents a software environment that enables sequential application of medical image processing routines and provides support for radiologists in the diagnosis, treatment planning and treatment verification phases of neurodegenerative diseases and brain tumors, thus reducing the divergence in the results of operations applied to medical images. Within the scope of this thesis study, a comprehensive literature review is performed, and a new medical image processing and analysis framework - including modules responsible for automation of separate processes and for several types of measurements such as real tumor volume and real lesion area - is implemented. The performance of the fully-automated segmentation module is evaluated against standards introduced by the Neuro Imaging Laboratory, UCLA, and the fully-automated registration module with the Normalized Cross-Correlation metric. Results have shown a success rate above 90 percent for both modules. Additionally, a number of experiments have been designed and performed using the implemented application. An accurate, flexible, and robust software application is expected to be accomplished on the basis of this thesis study and to be used in the field of medicine as a contributor, even by non-engineer professionals.
APA, Harvard, Vancouver, ISO, and other styles
4

Westin, Carl-Fredrik. "A Tensor Framework for Multidimensional Signal Processing." Doctoral thesis, Linköpings universitet, Bildbehandling, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54274.

Full text
Abstract:
This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "normalized convolution". The method performs local expansion of a signal in a chosen filter basis which does not necessarily have to be orthonormal. A key feature of the method is that it can deal with uncertain data when additional certainty statements are available for the data and/or the filters. It is shown how false operator responses due to missing or uncertain data can be significantly reduced or eliminated using this technique. Perhaps the best known of such effects are the various 'edge effects' which invariably occur at the edges of the input data set. The method is an example of the signal/certainty philosophy, i.e. the separation of both data and operator into a signal part and a certainty part. An estimate of the certainty must accompany the data. Missing data are simply handled by setting the certainty to zero. Localization or windowing of operators is done using an applicability function, the operator equivalent of certainty, not by changing the actual operator coefficients. Spatially or temporally limited operators are handled by setting the applicability function to zero outside the window. The use of tensors in estimation of local structure and orientation using spatiotemporal quadrature filters is reviewed and related to dual tensor bases. The tensor representation conveys the degree and type of local anisotropy. For image sequences, the shape of the tensors describes the local structure of the spatiotemporal neighbourhood and provides information about local velocity. The tensor representation also conveys information for deciding whether true flow or only normal flow is present. It is shown how normal flow estimates can be combined into a true flow using averaging of this tensor field description. Important aspects of representation and techniques for grouping local orientation estimates into global line information are discussed. The uniformity of some standard parameter spaces for line segmentation is investigated. The analysis shows that, to avoid discontinuities, great care should be taken when choosing the parameter space for a particular problem. A new parameter mapping well suited for line extraction, the Möbius strip parameterization, is defined. The method has similarities to the Hough Transform. Estimation of local frequency and bandwidth is also discussed. Local frequency is an important concept which provides an indication of the appropriate range of scales for subsequent analysis. One-dimensional and two-dimensional examples of local frequency estimation are given. The local bandwidth estimate is used for defining a certainty measure. The certainty measure enables the use of a normalized averaging process, increasing the robustness and accuracy of the frequency statements.
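As a rough illustration of the idea described above, the sketch below applies one-dimensional normalized convolution with numpy: the signal is accompanied by a certainty map (zero for missing samples), the operator is localized by an applicability window, and the output is the ratio of the certainty-weighted filtered signal to the filtered certainty. The Gaussian applicability and all names are illustrative choices, not taken from the thesis.

```python
import numpy as np

def normalized_convolution_1d(signal, certainty, applicability):
    """Normalized convolution with a single constant (DC) basis function.

    signal        : 1-D array of samples (missing samples may hold any value)
    certainty     : 1-D array in [0, 1], zero where data are missing
    applicability : 1-D window describing the spatial localization of the operator
    """
    num = np.convolve(signal * certainty, applicability, mode="same")
    den = np.convolve(certainty, applicability, mode="same")
    # Where the local certainty mass is zero, no estimate is possible.
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)

# Toy example: a ramp with a gap of missing samples.
x = np.linspace(0.0, 1.0, 50)
c = np.ones_like(x)
c[20:30] = 0.0                                      # mark samples 20..29 as missing
g = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)    # Gaussian applicability window
print(normalized_convolution_1d(x, c, g)[18:32])    # gap is bridged without edge artefacts
```

With a single constant basis function this reduces to the normalized averaging mentioned at the end of the abstract; the full method expands each neighbourhood in a larger filter basis.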
APA, Harvard, Vancouver, ISO, and other styles
5

張少能 and Siu-nang Bruce Cheung. "A concise framework of natural language processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31208563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

van Schaik, Sebastiaan Johannes. "A framework for processing correlated probabilistic data." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:91aa418d-536e-472d-9089-39bef5f62e62.

Full text
Abstract:
The amount of digitally-born data has surged in recent years. In many scenarios, this data is inherently uncertain (or: probabilistic), such as data originating from sensor networks, image and voice recognition, location detection, and automated web data extraction. Probabilistic data requires novel and different approaches to data mining and analysis, which explicitly account for the uncertainty and the correlations therein. This thesis introduces ENFrame: a framework for processing and mining correlated probabilistic data. Using this framework, it is possible to express both traditional and novel algorithms for data analysis in a special user language, without having to explicitly address the uncertainty of the data on which the algorithms operate. The framework will subsequently execute the algorithm on the probabilistic input, and perform exact or approximate parallel probability computation. During the probability computation, correlations and provenance are succinctly encoded using probabilistic events. This thesis contains novel contributions in several directions. An expressive user language – a subset of Python – is introduced, which allows a programmer to implement algorithms for probabilistic data without requiring knowledge of the underlying probabilistic model. Furthermore, an event language is presented, which is used for the probabilistic interpretation of the user program. The event language can succinctly encode arbitrary correlations using events, which are the probabilistic counterparts of deterministic user program variables. These highly interconnected events are stored in an event network, a probabilistic interpretation of the original user program. Multiple techniques for exact and approximate probability computation (with error guarantees) of such event networks are presented, as well as techniques for parallel computation. Adaptations of multiple existing data mining algorithms are shown to work in the framework, and are subsequently subjected to an extensive experimental evaluation. Additionally, a use-case is presented in which a probabilistic adaptation of a clustering algorithm is used to predict faults in energy distribution networks. Lastly, this thesis presents techniques for integrating a number of different probabilistic data formalisms for use in this framework and in other applications.
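The event-based probability computation described above can be illustrated with a deliberately naive sketch: events are boolean functions of independent random variables, correlations arise because events share variables, and an exact probability is obtained by enumerating possible worlds. This shows only the semantics (roughly the possible-worlds view used in probabilistic databases); the thesis's contribution is the succinct event-network encoding and the exact and approximate computation techniques, which the sketch does not reproduce.

```python
import itertools

def exact_probability(event, var_probs):
    """Exact probability of `event` over independent Boolean variables.

    event     : callable taking a world dict {var: bool} and returning True/False
    var_probs : dict {var: P(var = True)}
    """
    names = list(var_probs)
    total = 0.0
    for values in itertools.product([False, True], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for v in names:
            weight *= var_probs[v] if world[v] else 1.0 - var_probs[v]
        if event(world):
            total += weight
    return total

probs = {"x1": 0.7, "x2": 0.4}
in_cluster_a = lambda w: w["x1"] and not w["x2"]
in_cluster_b = lambda w: w["x1"] and w["x2"]
# Both events depend on x1, so they are correlated; enumeration handles that exactly.
print(exact_probability(in_cluster_a, probs), exact_probability(in_cluster_b, probs))
```

Enumeration is exponential in the number of variables, which is precisely why compact event representations and approximation with error guarantees matter.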
APA, Harvard, Vancouver, ISO, and other styles
7

Cheung, Siu-nang Bruce. "A concise framework of natural language processing /." [Hong Kong : University of Hong Kong], 1989. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12432544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Karlsson, Per. "A GPU-based framework for efficient image processing." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112093.

Full text
Abstract:
This thesis tries to answer how to design a framework for image processing on the GPU, supporting the common environments OpenGL GLSL, OpenCL and CUDA. A generalized view of GPU image processing is presented. The framework is called gpuip and is implemented in C++ but also wrapped with Python bindings. The framework is cross-platform and works on Windows, Mac OS X and Unix operating systems. The thesis also involves the work of creating two executable programs that use the gpuip framework. One of the programs has a graphical user interface and the other is command-line only. Both programs are developed in Python. Performance tests are created to compare the GPU environments against a single-core CPU implementation. All the GPU implementations in the gpuip framework are significantly faster than the CPU when executing the presented test cases. On average, the framework is two orders of magnitude faster than the single-core CPU.
APA, Harvard, Vancouver, ISO, and other styles
9

Hongslo, Anders. "Stream Processing in the Robot Operating System framework." Thesis, Linköpings universitet, Artificiell intelligens och integrerad datorsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79846.

Full text
Abstract:
Streams of information rather than static databases are becoming increasingly important with the rapid changes involved in a number of fields such as finance, social media and robotics. DyKnow is a stream-based knowledge processing middleware which has been used in autonomous Unmanned Aerial Vehicle (UAV) research. ROS (Robot Operating System) is an open-source robotics framework providing hardware abstraction, device drivers, communication infrastructure, tools, libraries as well as other functionalities. This thesis describes a design and a realization of stream processing in ROS based on the stream-based knowledge processing middleware DyKnow. It describes how relevant information in ROS can be selected, labeled, merged and synchronized to provide streams of states. There are a lot of applications for such stream processing such as execution monitoring or evaluating metric temporal logic formulas through progression over state sequences containing the features of the formulas. Overviews are given of DyKnow and ROS before comparing the two and describing the design. The stream processing capabilities implemented in ROS are demonstrated through performance evaluations which show that such stream processing is fast and efficient. The resulting realization in ROS is also readily extensible to provide further stream processing functionality.
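A minimal sketch of the kind of stream selection and synchronization described above, written against the standard ROS 1 Python API (rospy and message_filters) rather than the DyKnow-based realization itself; the topic names and message types are placeholders chosen for illustration.

```python
#!/usr/bin/env python
import rospy
from message_filters import Subscriber, ApproximateTimeSynchronizer
from sensor_msgs.msg import NavSatFix, Imu   # placeholder message types

def state_callback(fix_msg, imu_msg):
    # The two input streams have been time-aligned; merge them into one labeled
    # "state" sample, e.g. for execution monitoring or for progressing metric
    # temporal logic formulas over the resulting state sequence.
    rospy.loginfo("state: lat=%.6f yaw_rate=%.3f",
                  fix_msg.latitude, imu_msg.angular_velocity.z)

rospy.init_node("stream_sync_example")
fix_sub = Subscriber("/uav/fix", NavSatFix)
imu_sub = Subscriber("/uav/imu", Imu)
# Align messages whose timestamps differ by at most 0.1 s.
sync = ApproximateTimeSynchronizer([fix_sub, imu_sub], queue_size=10, slop=0.1)
sync.registerCallback(state_callback)
rospy.spin()
```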
APA, Harvard, Vancouver, ISO, and other styles
10

Allott, Nicholas Mark. "A natural language processing framework for automated assessment." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Moraes, Sergio A. S. "A distributed processing framework with application to graphics." Thesis, University of Sussex, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387338.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Zineddin, Bachar. "Microarray image processing : a novel neural network framework." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5713.

Full text
Abstract:
Due to the vast success of bioengineering techniques, a series of large-scale analysis tools has been developed to discover the functional organization of cells. Among them, cDNA microarray technology has emerged as a powerful tool that enables biologists to study thousands of genes simultaneously within an entire organism, and thus obtain a better understanding of the gene interaction and regulation mechanisms involved. Although microarray technology has been developed so as to offer high tolerances, there exists high signal irregularity across the surface of the microarray image. Imperfections in the microarray image generation process cause noise of many types, which contaminates the resulting image. These errors and noise propagate down through, and can significantly affect, all subsequent processing and analysis. Therefore, to realize the potential of such technology it is crucial to obtain high quality image data that indeed reflect the underlying biology in the samples. One of the key steps in extracting information from a microarray image is segmentation: identifying which pixels within an image represent which gene. This area of spotted microarray image analysis has received relatively little attention compared with the advances in subsequent analysis stages, and the lack of advanced image analysis, including segmentation, results in sub-optimal data being used in all downstream analysis methods. Although there has recently been much research on microarray image analysis and many methods have been proposed, some methods produce better results than others. In general, the most effective approaches require considerable run-time (processing) power to process an entire image. Furthermore, there has been little progress on developing sufficiently fast yet efficient and effective algorithms for segmentation of the microarray image using a highly sophisticated framework such as Cellular Neural Networks (CNNs). It is, therefore, the aim of this thesis to investigate and develop novel methods for processing microarray images. The goal is to produce results that outperform the currently available approaches in terms of PSNR, k-means and ICC measurements.
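To make the segmentation step concrete, the sketch below runs a plain two-class k-means on pixel intensities to separate a spot from its background in a synthetic patch. This is only a generic baseline for spotted-microarray segmentation, not the CNN-based approach investigated in the thesis, and all values are invented for the example.

```python
import numpy as np

def segment_spot(patch, iters=20):
    """Two-class k-means on pixel intensities: foreground (spot) vs background."""
    c_bg, c_fg = float(patch.min()), float(patch.max())   # initial cluster centres
    for _ in range(iters):
        fg = np.abs(patch - c_fg) < np.abs(patch - c_bg)   # assign pixels to nearest centre
        if fg.any():
            c_fg = patch[fg].mean()
        if (~fg).any():
            c_bg = patch[~fg].mean()
    return fg                                              # boolean mask of spot pixels

rng = np.random.default_rng(0)
patch = rng.normal(10, 2, (32, 32))                        # noisy background
yy, xx = np.mgrid[:32, :32]
patch[(yy - 16) ** 2 + (xx - 16) ** 2 < 64] += 40          # synthetic bright spot
mask = segment_spot(patch)
print("estimated spot area (pixels):", int(mask.sum()))
```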
APA, Harvard, Vancouver, ISO, and other styles
13

Nazemi, Gelyan Sepideh. "Distributed optimisation framework for in-network data processing." Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/42539.

Full text
Abstract:
In an information network consisting of different types of communication devices equipped with various types of sensors, it is inevitable that a huge amount of data will be generated. Considering practical network constraints such as bandwidth and energy limitations, storing, processing and transmitting this very large volume of data is very challenging, if not impossible. However, In-Network Processing (INP) has opened a new door to possible solutions for optimising the utilisation of network resources. INP methods primarily aim to aggregate (e.g., by compression, fusion and averaging) data from different sources with the objective of reducing the data volume for further transfer, thus reducing energy consumption and increasing the network lifetime. However, processing data often results in an imprecise outcome such as irrelevancy, incompleteness, etc. Therefore, besides characterising the Quality of Information (QoI) in these systems, which is important, it is also crucial to consider the effect of further data processing on the measured QoI associated with each specific piece of information. Typically, the greater the degree of data aggregation, the higher the computation energy cost that is incurred. However, as the volume of data is reduced after aggregation, less energy is needed for subsequent data transmission and reception. Furthermore, aggregation of data can cause deterioration of QoI. Therefore, there is a trade-off among the QoI requirement, the energy consumed by computation, and the energy consumed by communication. We define the optimal data reduction rate parameter as the degree to which data can be efficiently reduced while guaranteeing the required QoI for the end user. Using wireless sensor networks for illustration, we concentrate on designing a distributed framework to facilitate control of the INP process at each node while satisfying the end user's QoI requirements. We formulate the INP problem as a non-linear optimisation problem with the objective of minimising the total energy consumption throughout the network subject to a given QoI requirement for the end user. The proposed problem is intrinsically non-convex and, in general, hard to solve. Given the non-convexity and hardness of the problem, we propose a novel approach that can reduce its computational complexity. Specifically, we prove that under the assumption of uniform parameter settings, the complexity of the proposed problem can be reduced significantly, which may make it feasible for each node with a limited energy supply to carry out the computation. Moreover, we propose an optimal solution by transforming the original problem into an equivalent one. Using the theory of duality optimisation, we prove that under a set of reasonable cost and topology assumptions, the optimal solution can be efficiently obtained despite the non-convexity of the problem. Furthermore, we propose an effective and efficient distributed, iterative algorithm that can converge to the optimal solution. We evaluate our proposed complexity reduction framework under different parameter settings, and show that a problem with N variables can be reduced to a problem with logN variables, a significant reduction in the complexity of the problem. The validity and performance of the proposed distributed optimisation framework have been evaluated through extensive simulation. We show that the proposed distributed algorithm can converge to the optimal solution very fast. The behaviour of the proposed framework has been examined under different parameter settings and checked against the optimal solution obtained via an exhaustive search algorithm. The results show quick and efficient convergence for the proposed algorithm.
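The trade-off described above can be made concrete with a small numerical sketch: assume, purely for illustration, that computation energy grows with the data-reduction rate while transmission energy falls with it, and that QoI degrades as more data are aggregated; the optimal reduction rate is then the cheapest rate that still meets the QoI requirement. The cost and QoI models below are invented for the example and are not the formulation used in the thesis.

```python
import numpy as np

rates = np.linspace(0.0, 0.9, 91)        # fraction of data removed by aggregation

def total_energy(r, e_comp=2.0, e_tx=10.0):
    # Illustrative model: computation cost rises with aggregation effort,
    # transmission cost is proportional to the remaining data volume.
    return e_comp * r + e_tx * (1.0 - r)

def qoi(r):
    # Illustrative model: information quality decays as data are reduced.
    return 1.0 - 0.8 * r ** 1.5

qoi_requirement = 0.7
feasible = qoi(rates) >= qoi_requirement
best = rates[feasible][np.argmin(total_energy(rates[feasible]))]
print(f"optimal data reduction rate ~ {best:.2f}, "
      f"energy = {total_energy(best):.2f}, QoI = {qoi(best):.2f}")
```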
APA, Harvard, Vancouver, ISO, and other styles
14

Van der Byl, Andrew. "A parallel processing framework for spectral based computations." Doctoral thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/11522.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
Today, great advances have been made; however, the tenet of ‘design first, figure out how to program later’ still lingers in the corridors of Silicon Valley. The focus of this study, however, is not on making a contribution to compilers or software development, nor on determining an efficient generic parallel processing architecture for all classes of computing. Instead, this study adopts a different design approach, where a class of computing is first selected and analyzed before determining a suitable hardware structure which can be tailored to the class being considered. The class of computing under investigation in this work is Spectral Methods, which, by its very nature, has its own processing and data communication requirements. The purpose of this study is to investigate the processing and data handling requirements of the Spectral Methods class, and to design a suitable framework to support this class. The approach is different from past traditions - the hardware framework is based on software requirements, and in a sense is designed for the processing required, rather than the other way around.
APA, Harvard, Vancouver, ISO, and other styles
15

Heintz, Fredrik. "DyKnow : A Stream-Based Knowledge Processing Middleware Framework." Doctoral thesis, Linköping : Department of Computer and Information Science, Linköpings universitet, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Cline, George E. "A control framework for distributed (parallel) processing environments." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-12042009-020227/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Yang, Bin, and 杨彬. "A novel framework for binning environmental genomic fragments." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45789344.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Simonis, Volker. "A framework for processing and presenting parallel text corpora." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=971862257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

ISHIKAWA, Yoshiharu, and Fengrong LI. "Query Processing in a Traceable P2P Record Exchange Framework." Institute of Electronics, Information and Communication Engineers, 2010. http://hdl.handle.net/2237/14955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Inam, Ul Haq Muhammad. "Texture analysis in the Logarithmic Image Processing (LIP) framework." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-00998492.

Full text
Abstract:
This thesis looks at the evaluation of textures from two different perspectives within the Logarithmic Image Processing (LIP) framework. In the first part, after introducing the concept of textures and reviewing some classical approaches to texture evaluation, an original approach to texture evaluation called the covariogram is presented, derived from similarity metrics such as distances or correlations. The classical covariogram, derived from classical similarity metrics, and the LIP covariogram are then applied to several images, and the efficiency of the LIP version is clearly shown for darkened images. The last two chapters offer a new approach that considers the gray levels of an image as the phases of a medium. Each phase simulates the percolation of a liquid through the medium, defining percolation trajectories. Propagation from one pixel to another is taken as easy or difficult depending on the difference of the gray-level intensities. Finally, different parameters associated with these trajectories are derived, such as fractality from fractal dimensions and the mean histogram, on the basis of which a preliminary experiment on the classification of random textures is carried out to determine the relevance of this idea. Obviously, this study is only a first approach and requires additional work to obtain a reliable classification method.
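For readers unfamiliar with the LIP framework, the sketch below implements its two basic operations (LIP addition and LIP scalar multiplication) for gray-tone functions on the scale [0, M), following the standard Jourlin-Pinoli definitions, and compares a toy covariogram-like profile before and after LIP darkening. The simplified covariogram here is only an illustration, not the measure defined in the thesis.

```python
import numpy as np

M = 256.0   # upper bound of the gray-tone scale

def lip_add(f, g):
    """LIP addition: f (+) g = f + g - f*g/M."""
    return f + g - f * g / M

def lip_scalar(lmbda, f):
    """LIP scalar multiplication: lambda (x) f = M - M*(1 - f/M)**lambda."""
    return M - M * (1.0 - f / M) ** lmbda

def simple_covariogram(img, max_shift):
    """Toy covariogram: correlation between the image and horizontal shifts of itself."""
    vals = []
    for h in range(1, max_shift + 1):
        a, b = img[:, :-h].ravel(), img[:, h:].ravel()
        vals.append(np.corrcoef(a, b)[0, 1])
    return np.array(vals)

rng = np.random.default_rng(1)
texture = rng.uniform(0, 200, (64, 64))
darkened = lip_add(texture, 120.0)          # uniformly LIP-darken the texture
print(simple_covariogram(texture, 5))
print(simple_covariogram(darkened, 5))      # the shape of the profile is preserved
```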
APA, Harvard, Vancouver, ISO, and other styles
21

BELMONTE, LEONARDO MENDES. "A MODEL AND AN IMPLEMENTATION FRAMEWORK FOR SETS PROCESSING." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=9534@1.

Full text
Abstract:
This dissertation proposes an information processing model based on sets that can be seen as a generalization of the classic graph model for hypertexts. The model presupposes a semantic model of an application domain, from which sets of objects are defined. Information processing tasks that users should execute, with the support of the application, are described as functional compositions of operations applied over the information items and over the defined sets. This type of model allows the construction of applications with direct-manipulation interfaces over items and sets, and includes, among others, the faceted navigation interfaces currently found on the Web. In this type of interface, the user builds the functional composition that represents the desired computation incrementally, through direct manipulation of elements in the interface. This dissertation specifies the model and presents an implementation in the .Net environment. Beyond the definition of items and sets, a Domain Specific Language (DSL) is generated that allows the direct expression of operations on items and sets. The proposed model and its implementation are integrated with a framework for generating direct-manipulation interfaces over sets, which is the focus of another dissertation. A case study is presented, using the model, its implementation and the integration with the interface, illustrating how the approach facilitates different types of tasks frequently performed by Web application users.
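A toy sketch of the set-composition idea: items from a hypothetical catalog domain are selected and combined by composable set operations, the way a faceted-navigation interface would build the computation incrementally. The domain, attributes and operation names are invented for illustration and have no relation to the dissertation's DSL or .Net implementation.

```python
# Hypothetical item catalog; each item is a dict of attribute facets.
items = [
    {"id": 1, "type": "book",  "topic": "databases", "year": 2004},
    {"id": 2, "type": "paper", "topic": "hypertext", "year": 2006},
    {"id": 3, "type": "book",  "topic": "hypertext", "year": 2001},
    {"id": 4, "type": "paper", "topic": "databases", "year": 2006},
]

def facet(attribute, value):
    """A facet is a set-valued operation: the ids of all items with attribute == value."""
    return {it["id"] for it in items if it[attribute] == value}

# The user composes operations incrementally, as in faceted navigation:
step1 = facet("topic", "hypertext")              # narrow by topic
step2 = step1 & facet("type", "paper")           # intersect with another facet
step3 = step2 | facet("year", 2001)              # union with a third selection
print(step1, step2, step3)                       # {2, 3} {2} {2, 3}
```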
APA, Harvard, Vancouver, ISO, and other styles
22

Modi, John J., Tony L. Essman, Douglas H. Brandon, Joe W. Waller, Robert S. Hester, Fern L. Pham, Vien X. Bui, Dan C. Green, and Mark G. Kerzie. "JOINT FRAMEWORK PROJECT: A FLIGHT TEST DATA PROCESSING SYSTEM." International Foundation for Telemetering, 2003. http://hdl.handle.net/10150/605348.

Full text
Abstract:
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The Joint Framework Project (JFP) is an effort to conjoin the software data processing pipeline frameworks between Lockheed Martin’s Flight Test Data Centers. The JFP integrates the existing Data Processing Framework (DPFW) with the Joint Enterprise Test System (JETS) data products concept of pipelines. The JFP is constructed with simple governing concepts of data pipes and filters, engineered to manage post flight and real time test data for LM Aeronautics Flight Test mission support, and the results are presented here. The JFP is an Object-Oriented dynamically configurable framework that supports LM Aeronautics Flight Test programs. The JFP uses the Adaptive Communications Environment (ACE) framework, an open source high-performance networking package, to implement the components. The joint framework project provides a real time and an interactive / background post flight test data processing environment reproducing MIL-STD 1553, ARINC 429, Pulse Code Modulation (PCM), Time Space Position Information (TSPI), Digital Video, and High Speed Data Bus (HSDB) data streams for flight test and discipline engineers. The architecture supports existing requirements for the flight test centers, and provides a remarkably flexible environment for integrating enhancements. The JFP is a collaborative effort consisting of LM Aero Flight Test software teams at Marietta, Fort Worth, Edwards Air Force Base, and Palmdale. A prototype will be presented of the JFP addressing the data specific treatment of demultiplexing, decommutation, filtering, data merging, engineering unit conversion, and data reporting. An overview of the distributed architecture is presented, and the potential for the JFP extensibility to support future flight test program requirements is discussed.
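The pipe-and-filter concept underlying the JFP (there realized in C++ on top of the ACE framework) can be sketched generically: each filter transforms a record and passes it downstream, and a pipeline is simply the composition of filters. The filter names below are illustrative stand-ins for stages such as decommutation and engineering-unit conversion, not the JFP's actual components, and the scaling constants are invented.

```python
from functools import reduce

def make_pipeline(*filters):
    """Compose filters left to right; each filter maps one record to one record."""
    return lambda record: reduce(lambda rec, f: f(rec), filters, record)

# Illustrative filter stages for a telemetry record.
def decommutate(rec):
    rec["words"] = [rec["raw"] >> s & 0xFF for s in (8, 0)]   # split the raw frame into words
    return rec

def eu_convert(rec):
    rec["temperature_C"] = rec["words"][0] * 0.5 - 40.0       # counts -> engineering units
    return rec

def report(rec):
    print(f"frame {rec['frame']}: {rec['temperature_C']:.1f} C")
    return rec

pipeline = make_pipeline(decommutate, eu_convert, report)
pipeline({"frame": 1, "raw": 0x9C3A})   # -> frame 1: 38.0 C
```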
APA, Harvard, Vancouver, ISO, and other styles
23

Dworaczyk, Wiltshire Austin Aaron. "CUDA ENHANCED FILTERING IN A PIPELINED VIDEO PROCESSING FRAMEWORK." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1072.

Full text
Abstract:
The processing of digital video has long been a significant computational task for modern x86 processors. With every video frame composed of one to three planes, each consisting of a two-dimensional array of pixel data, and a video clip comprising thousands of such frames, the sheer volume of data is significant. With the introduction of new high definition video formats such as 4K or stereoscopic 3D, the volume of uncompressed frame data is growing ever larger. Modern CPUs offer performance enhancements for processing digital video through SIMD instructions such as SSE2 or AVX. However, even with these instruction sets, CPUs are limited by their inherently sequential design, and can only operate on a handful of bytes in parallel. Even processors with a multitude of cores only execute on an elementary level of parallelism. GPUs provide an alternative, massively parallel architecture. GPUs differ from CPUs by providing thousands of throughput-oriented cores, instead of a maximum of tens of generalized “good enough at everything” x86 cores. The GPU’s throughput-oriented cores are far more adept at handling large arrays of pixel data, as many video filtering operations can be performed independently. This computational independence allows for pixel processing to scale across hundreds or even thousands of device cores. This thesis explores the utilization of GPUs for video processing, and evaluates the advantages and caveats of porting the modern video filtering framework, Vapoursynth, over to running entirely on the GPU. Compute-heavy GPU-enabled video processing results in up to a 108% speedup over an SSE2-optimized, multithreaded CPU implementation.
APA, Harvard, Vancouver, ISO, and other styles
24

Kalidindi, Arvind R. (Arvind Rama). "An alloy selection and processing framework for nanocrystalline materials." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120208.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 2018
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 106-115).
Nanocrystalline materials have a unique set of properties due to their nanometer-scale grain size. To harness these properties, grain growth in these materials needs to be suppressed, particularly in order to process bulk nanocrystalline components and to use them reliably. Alloying the material with the right elements has the potential to produce remarkably stable nanocrystalline states, particularly if the nanocrystalline state is thermodynamically stable against grain growth. This thesis builds upon previous models for selecting alloy combinations that lead to thermodynamic stability against grain growth, by developing frameworks that extend to negative enthalpy of mixing systems and ordered grain boundary complexions. These models are used to develop a generalized stability criterion based on bulk thermodynamic parameters, which can be used to select alloy systems that are formally stable against grain growth. A robust statistical mechanics framework is developed for reliable thermodynamic observations using Monte Carlo simulations to produce free energy diagrams and phase diagrams for stable nanocrystalline alloys.
by Arvind R. Kalidindi.
Ph. D.
Ph.D. Massachusetts Institute of Technology, Department of Materials Science and Engineering
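The Monte Carlo simulations mentioned in the abstract can be illustrated with a minimal lattice-gas sketch: solute and solvent site occupancies are swapped and accepted with the Metropolis criterion, conserving composition while sampling equilibrium configurations. The nearest-neighbour Hamiltonian and all parameters here are invented for the example; they are not the alloy thermodynamic model or the free-energy sampling scheme developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_swap(spins, beta, J, steps=10000):
    """Composition-conserving Monte Carlo on a periodic square lattice.

    spins : 2-D array of 0 (solvent) / 1 (solute)
    J     : nearest-neighbour interaction energy (illustrative Hamiltonian only)
    """
    n = spins.shape[0]
    def local_energy(s, i, j):
        nb = s[(i + 1) % n, j] + s[(i - 1) % n, j] + s[i, (j + 1) % n] + s[i, (j - 1) % n]
        return J * s[i, j] * nb
    for _ in range(steps):
        i1, j1, i2, j2 = rng.integers(0, n, 4)
        if spins[i1, j1] == spins[i2, j2]:
            continue
        e_old = local_energy(spins, i1, j1) + local_energy(spins, i2, j2)
        spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]
        e_new = local_energy(spins, i1, j1) + local_energy(spins, i2, j2)
        if rng.random() >= np.exp(-beta * (e_new - e_old)):                 # Metropolis criterion
            spins[i1, j1], spins[i2, j2] = spins[i2, j2], spins[i1, j1]     # reject: swap back
    return spins

lattice = (rng.random((32, 32)) < 0.2).astype(int)     # 20 at.% solute
equilibrated = metropolis_swap(lattice, beta=2.0, J=-1.0)
print("solute fraction preserved:", equilibrated.mean())
```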
APA, Harvard, Vancouver, ISO, and other styles
25

Schrijvers, Tom. "Overview of the monadic constraint programming framework." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4141/.

Full text
Abstract:
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first-class and extremely flexible.
APA, Harvard, Vancouver, ISO, and other styles
26

Yeung, Sai Kit. "Stochastic framework for inverse consistent registration /." View abstract or full-text, 2005. http://library.ust.hk/cgi/db/thesis.pl?BIEN%202005%20YEUNG.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Teymourian, Kia [Verfasser]. "A Framework for Knowledge-Based Complex Event Processing / Kia Teymourian." Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1063331803/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Tianyu Tom. "Toward an interpretive framework of two-dimensional speech-signal processing." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65520.

Full text
Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-179).
Traditional representations of speech are derived from short-time segments of the signal and result in time-frequency distributions of energy such as the short-time Fourier transform and spectrogram. Speech-signal models of such representations have had utility in a variety of applications such as speech analysis, recognition, and synthesis. Nonetheless, they do not capture spectral, temporal, and joint spectrotemporal energy fluctuations (or "modulations") present in local time-frequency regions of the time-frequency distribution. Inspired by principles from image processing and evidence from auditory neurophysiological models, a variety of two-dimensional (2-D) processing techniques have been explored in the literature as alternative representations of speech; however, speech-based models are lacking in this framework. This thesis develops speech-signal models for a particular 2-D processing approach in which 2-D Fourier transforms are computed on local time-frequency regions of the canonical narrowband or wideband spectrogram; we refer to the resulting transformed space as the Grating Compression Transform (GCT). We argue for a 2-D sinusoidal-series amplitude modulation model of speech content in the spectrogram domain that relates to speech production characteristics such as pitch/noise of the source, pitch dynamics, formant structure and dynamics, and offset/onset content. Narrowband- and wideband-based models are shown to exhibit important distinctions in interpretation and oftentimes "dual" behavior. In the transformed GCT space, the modeling results in a novel taxonomy of signal behavior based on the distribution of formant and onset/offset content in the transformed space via source characteristics. Our formulation provides a speech-specific interpretation of the concept of "modulation" in 2-D processing in contrast to existing approaches that have done so either phenomenologically through qualitative analyses and/or implicitly through data-driven machine learning approaches. One implication of the proposed taxonomy is its potential for interpreting transformations of other time-frequency distributions such as the auditory spectrogram which is generally viewed as being "narrowband"/"wideband" in its low/high-frequency regions. The proposed signal model is evaluated in several ways. First, we perform analysis of synthetic speech signals to characterize its properties and limitations. Next, we develop an algorithm for analysis/synthesis of spectrograms using the model and demonstrate its ability to accurately represent real speech content. As an example application, we further apply the models in cochannel speaker separation, exploiting the GCT's ability to distribute speaker-specific content and often recover overlapping information through demodulation and interpolation in the 2-D GCT space. Specifically, in multi-pitch estimation, we demonstrate the GCT's ability to accurately estimate separate and crossing pitch tracks under certain conditions. Finally, we demonstrate the model's ability to separate mixtures of speech signals using both prior and estimated pitch information. Generalization to other speech-signal processing applications is proposed.
by Tianyu Tom Wang.
Ph.D.
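A minimal numerical sketch of the 2-D transform at the heart of the GCT described above: compute a narrowband spectrogram, extract a local time-frequency patch, window it, and take its 2-D Fourier transform. This reproduces only the transform step with illustrative parameter choices; the speech-specific modeling, taxonomy and separation algorithms are the thesis's contribution and are not shown.

```python
import numpy as np
from scipy.signal import spectrogram, get_window

fs = 8000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sign(np.sin(2 * np.pi * 120 * t))          # crude harmonic-rich "voiced" signal

# Narrowband spectrogram: a long analysis window resolves individual harmonics.
f, tt, S = spectrogram(x, fs=fs, window=get_window("hamming", 512),
                       noverlap=448, mode="magnitude")
logS = np.log(S + 1e-10)

# Local time-frequency patch and its 2-D Fourier transform (the GCT of that region).
patch = logS[20:52, 10:42]                        # 32 frequency bins x 32 frames
patch = patch - patch.mean()                      # remove DC before transforming
w2d = np.outer(np.hanning(32), np.hanning(32))    # 2-D taper to reduce leakage
gct = np.fft.fftshift(np.fft.fft2(patch * w2d))
print("dominant GCT component at", np.unravel_index(np.argmax(np.abs(gct)), gct.shape))
```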
APA, Harvard, Vancouver, ISO, and other styles
29

Khairo-Sindi, Mazin Omar. "Framework for web log pre-processing within web usage mining." Thesis, University of Manchester, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488456.

Full text
Abstract:
Web mining is gaining popularity by the day, and the role of the web in providing invaluable information about users' behaviour and navigational patterns is now highly appreciated by information technology specialists and businesses alike. Nevertheless, given the enormity of the web and the complexities involved in delivering and retrieving electronic information, one can imagine the difficulties involved in extracting a set of minable objects from the raw and huge web log data. Together with the fact that web mining is a new science, this may explain why research on data pre-processing is still limited in scope. And, although the debate on major issues is still gaining momentum, attempts to establish a coherent and accurate web usage pre-processing framework are still non-existent. As a contribution to the existing debate, this research aims at formulating a workable, reliable, and coherent pre-processing framework. The present study will address the following issues: enhance and maximise knowledge about every visit made to a given website from multiple web logs, even when they have different schemas; improve the process of eliminating excessive web log data that are not related to users' behaviour; modify the existing approaches to session identification in order to obtain more accurate results; and eliminate redundant data that result from repeatedly adding cached data to the web logs, regardless of whether or not the added page is a frameset. In addition to the suggested improvements, the study will also introduce a novel task, namely "automatic web log integration". This will make it possible to integrate different web logs with different schemas into a unified data set. Finally, the study will incorporate unnecessary information, particularly that pertaining to malicious website visits, into the non-user-request removal task. Put together, both the suggested improvements and the novel tasks result in a coherent pre-processing framework. To test the reliability and validity of the framework, a website is created in order to perform the necessary experimental work, and a prototype pre-processing tool is devised and employed to support it.
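One of the pre-processing tasks mentioned above, session identification, is commonly approximated with an inactivity-timeout heuristic: group log records by visitor and start a new session whenever the gap between consecutive requests exceeds a threshold. The sketch below uses the conventional 30-minute timeout and an invented log format; it is a baseline illustration, not the modified session-identification approach developed in the thesis.

```python
from datetime import datetime, timedelta

TIMEOUT = timedelta(minutes=30)

def sessionize(records):
    """records: iterable of (visitor_id, timestamp, url), in any order."""
    sessions = []
    current_by_visitor = {}
    for visitor, ts, url in sorted(records, key=lambda r: (r[0], r[1])):
        current = current_by_visitor.get(visitor)
        if current is None or ts - current[-1][0] > TIMEOUT:
            current = []                                # start a new session
            current_by_visitor[visitor] = current
            sessions.append((visitor, current))
        current.append((ts, url))
    return sessions

log = [
    ("10.0.0.5", datetime(2004, 3, 1, 9, 0), "/index.html"),
    ("10.0.0.5", datetime(2004, 3, 1, 9, 10), "/products.html"),
    ("10.0.0.5", datetime(2004, 3, 1, 11, 0), "/index.html"),   # > 30 min gap: new session
    ("10.0.0.9", datetime(2004, 3, 1, 9, 5), "/index.html"),
]
for visitor, pages in sessionize(log):
    print(visitor, [u for _, u in pages])
```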
APA, Harvard, Vancouver, ISO, and other styles
30

Ravali, Yeluri. "BALLWORLD: A FRAMEWORK FOR LEARNING STATISTICAL INFERENCE AND STREAM PROCESSING." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1498769835817335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Keceli, Fuat. "Dataflow interchange format and a framework for processing dataflow graphs." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1831.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2004.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
32

Hanus, Michael, and Sven Koschnicke. "An ER-based framework for declarative web programming." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4144/.

Full text
Abstract:
We describe a framework to support the implementation of web-based systems to manipulate data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, supports authentication, authorization, session handling, and the composition of individual operations to user processes. Furthermore and most important, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous works on declarative database programming and web user interface construction in Curry.
APA, Harvard, Vancouver, ISO, and other styles
33

Archer, Cynthia. "A framework for representing non-stationary data with mixtures of linear models /." Full text open access at:, 2002. http://content.ohsu.edu/u?/etd,585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Guest, Thomas. "Nonlinear design of geophysical surveys and processing strategies." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4903.

Full text
Abstract:
The principal aim of all scientific experiments is to infer knowledge about a set of parameters of interest through the process of data collection and analysis. In the geosciences, large sums of money are spent on the data analysis stage but much less attention is focussed on the data collection stage. Statistical experimental design (SED), a mature field of statistics, uses mathematically rigorous methods to optimise the data collection stage so as to maximise the amount of information recorded about the parameters of interest. The uptake of SED methods in geophysics has been limited as the majority of SED research is based on linear and linearised theories whereas most geophysical methods are highly nonlinear and therefore the developed methods are not robust. Nonlinear SED methods are computationally demanding and hence to date the methods that do exist limit the designs to be either very simplistic or computationally infeasible and therefore cannot be used in an industrial setting. In this thesis, I firstly show that it is possible to design industry scale experiments for highly nonlinear problems within a computationally tractable time frame. Using an entropy based method constructed on a Bayesian framework I introduce an iteratively-constructive method that reduces the computational demand by introducing one new datum at a time for the design. The method reduces the multidimensional design space to a single-dimensional space at each iteration by fixing the experimental setup of the previous iteration. Both a synthetic experiment using a highly nonlinear parameter-data relationship, and a seismic amplitude versus offset (AVO) experiment are used to illustrate that the results produced by the iteratively-constructive method closely match the results of a global design method at a fraction of the computational cost. This new method thus extends the class of iterative design methods to nonlinear problems, and makes fully nonlinear design methods applicable to higher dimensional industrial scale problems. Using the new iteratively-constructive method, I show how optimal trace profiles for processing amplitude versus angle (AVA) surveys that account for all prior petrophysical information about the target reservoir can be generated using totally nonlinear methods. I examine how the optimal selections change as our prior knowledge of the rock parameters and reservoir fluid content change, and assess which of the prior parameters has the largest effect on the selected traces. The results show that optimal profiles are far more sensitive to prior information about reservoir porosity than information about saturating fluid properties. By applying ray tracing methods the AVA results can be used to design optimal processing profiles from seismic datasets, for multiple targets each with different prior model uncertainties. Although the iteratively-constructive method can be used to design the data collection stage it has been used here to select optimal data subsets post-survey. Using a nonlinear Bayesian SED method I show how industrial scale amplitude versus offset (AVO) data collection surveys can be constructed to maximise the information content contained in AVO crossplots, the principal source of petrophysical information from seismic surveys. 
The results show that the optimal design is highly dependent on the model parameters when a low number of receivers is being used, but that a single optimal design exists for the complete range of parameters once the number of receivers is increased above a threshold value. However, when acquisition and processing costs are considered, I find that, in the case of AVO experiments, a design with constant spatial receiver separation is close to optimal. This explains why regularly-spaced, 2D seismic surveys have performed so well historically, not only from the point of view of noise attenuation and imaging in which homogeneous data coverage confers distinct advantages, but also as providing data to constrain subsurface petrophysical information. Finally, I discuss the implications of the new methods developed and assess which areas of geophysics would benefit from applying SED methods during the design stage.
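The iteratively-constructive idea can be illustrated with a toy Bayesian design loop: candidate measurement positions are scored by the expected reduction in posterior uncertainty, the best single datum is added, the already-chosen data are fixed, and the search repeats over the one-dimensional space of the next datum. The linear-Gaussian model below is chosen so the information gain has a closed form; the thesis uses entropy-based criteria for fully nonlinear problems, which this sketch does not attempt to reproduce.

```python
import numpy as np

candidates = np.linspace(-1.0, 1.0, 21)     # possible measurement offsets
noise_var = 0.1
V = np.eye(2)                               # prior covariance of (slope, intercept)

design = []
for _ in range(4):                          # add one new datum per iteration
    gains = []
    for x in candidates:
        h = np.array([x, 1.0])              # linear-Gaussian observation y = a*x + b + noise
        gains.append(0.5 * np.log(1.0 + h @ V @ h / noise_var))   # expected information gain
    best = candidates[int(np.argmax(gains))]
    design.append(best)
    h = np.array([best, 1.0])
    # Fix the chosen datum and update the posterior covariance (rank-1 Kalman-style update).
    V = V - np.outer(V @ h, h @ V) / (noise_var + h @ V @ h)

print("greedily selected measurement offsets:", design)
print("posterior standard deviations:", np.sqrt(np.diag(V)))
```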
APA, Harvard, Vancouver, ISO, and other styles
35

Fleites, Fausto C. "A Scalable Multimedia Content Processing Framework with Application to TV Shopping." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1452.

Full text
Abstract:
The advent of smart TVs has reshaped the TV-consumer interaction by combining TVs with mobile-like applications and access to the Internet. However, consumers are still unable to seamlessly interact with the contents being streamed. An example of such limitation is TV shopping, in which a consumer makes a purchase of a product or item displayed in the current TV show. Currently, consumers can only stop the current show and attempt to find a similar item in the Web or an actual store. It would be more convenient if the consumer could interact with the TV to purchase interesting items. Towards the realization of TV shopping, this dissertation proposes a scalable multimedia content processing framework. Two main challenges in TV shopping are addressed: the efficient detection of products in the content stream, and the retrieval of similar products given a consumer-selected product. The proposed framework consists of three components. The first component performs computational and temporal aware multimedia abstraction to select a reduced number of frames that summarize the important information in the video stream. By both reducing the number of frames and taking into account the computational cost of the subsequent detection phase, this component allows the efficient detection of products in the stream. The second component realizes the detection phase. It executes scalable product detection using multi-cue optimization. Additional information cues are formulated into an optimization problem that allows the detection of complex products, i.e., those that do not have a rigid form and can appear in various poses. After the second component identifies products in the video stream, the consumer can select an interesting one for which similar ones must be located in a product database. To this end, the third component of the framework consists of an efficient, multi-dimensional, tree-based indexing method for multimedia databases. The proposed index mechanism serves as the backbone of the search. Moreover, it is able to efficiently bridge the semantic gap and perception subjectivity issues during the retrieval process to provide more relevant results.
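As a simplified stand-in for the tree-based multimedia index in the third component, the sketch below indexes product feature vectors with a k-d tree and retrieves the nearest neighbours of a consumer-selected product. The feature extraction step, the semantic-gap handling and the dissertation's actual index structure are out of scope; the product ids and 16-dimensional descriptors are invented for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)

# Hypothetical product database: one feature vector (e.g., a colour/shape descriptor) per product.
product_ids = [f"product-{i:04d}" for i in range(1000)]
features = rng.random((1000, 16))

index = cKDTree(features)                    # build the multi-dimensional index once

def similar_products(query_vector, k=5):
    """Return the ids of the k products whose features are closest to the query."""
    distances, rows = index.query(query_vector, k=k)
    return [(product_ids[r], float(d)) for r, d in zip(rows, distances)]

# The consumer selects a detected product; its feature vector becomes the query.
selected = features[123] + rng.normal(0, 0.01, 16)
for pid, dist in similar_products(selected):
    print(pid, round(dist, 3))
```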
APA, Harvard, Vancouver, ISO, and other styles
36

Zheng, Lizhi. "A generic parallel processing framework for real-time software video compression." Thesis, Brunel University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.412432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mikhalev, Alexander. "Image processing and agent-based framework for the geolocation of emitters." Thesis, Cranfield University, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/4647.

Full text
Abstract:
The research presented in this thesis concerns the task of geolocating radio frequency emitters, addressing in particular the problem of geolocating a non-collaborative emitter. The thesis presents a novel algorithm for RF emitter geolocation based on the image processing technique known as the Hough Transform. Comparison of this algorithm with traditional approaches to geolocation showed a number of benefits, such as robustness, accuracy and advanced fusion capability. Applying the Hough Transform to data fusion allowed the use of the modern concepts of agent-based fusion and cluster-level fusion, thus moving the solution of the geolocation problem to an upper level of the fusion hierarchy. The work on the Hough Transform led to a comparison of Bayesian and non-Bayesian approaches to the geolocation task. Exploiting this comparison led to the derivation of a generalized estimator. This estimator highlighted a number of mathematical functions which can be exploited for geolocation and data fusion. These functions have been tested for the purpose of data fusion in geolocation, and the Hough Transform was found to be a useful alternative approach to data fusion for the geolocation of RF emitters.
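To make the Hough-style voting idea concrete, the sketch below geolocates an emitter from bearing-only measurements: each sensor's line of bearing is rasterized into a common accumulator grid, and the cell with the most votes is taken as the emitter position. The grid size, sensor layout and noise level are illustrative, and bearings-only voting is only one possible variant; the thesis develops the method and its fusion properties in much more depth.

```python
import numpy as np

GRID = 200                     # accumulator covers a 10 km x 10 km area
CELL = 10000.0 / GRID

def accumulate_bearing(acc, sensor_xy, bearing_rad, max_range=15000.0, step=10.0):
    """Vote along one line of bearing into the accumulator (one vote per cell per line)."""
    cells = set()
    for r in np.arange(0.0, max_range, step):
        x = sensor_xy[0] + r * np.cos(bearing_rad)
        y = sensor_xy[1] + r * np.sin(bearing_rad)
        i, j = int(x // CELL), int(y // CELL)
        if 0 <= i < GRID and 0 <= j < GRID:
            cells.add((i, j))
    for i, j in cells:
        acc[i, j] += 1

rng = np.random.default_rng(3)
emitter = np.array([6200.0, 3100.0])
sensors = [np.array(p) for p in [(0.0, 0.0), (0.0, 9000.0), (9000.0, 500.0)]]

acc = np.zeros((GRID, GRID))
for s in sensors:
    true_bearing = np.arctan2(*(emitter - s)[::-1])                  # atan2(dy, dx)
    accumulate_bearing(acc, s, true_bearing + rng.normal(0, 0.01))   # noisy measurement

i, j = np.unravel_index(np.argmax(acc), acc.shape)
print("estimated emitter position:", ((i + 0.5) * CELL, (j + 0.5) * CELL))
```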
APA, Harvard, Vancouver, ISO, and other styles
38

Marupudi, Surendra Brahma. "Framework for Semantic Integration and Scalable Processing of City Traffic Events." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1472505847.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Mikhalev, A. "Image processing and agent-based framework for the geolocation of emitters." Thesis, Department of Informatics and Sensors, 2010. http://dspace.lib.cranfield.ac.uk/handle/1826/4647.

Full text
Abstract:
The research presented in this thesis concerns the task of geolocating radio frequency (RF) emitters, addressing in particular the problem of geolocating a non-collaborative emitter. The thesis presents a novel algorithm for RF emitter geolocation based on the image processing technique known as the Hough Transform. A comparison of this algorithm with traditional approaches to geolocation showed a number of benefits, such as robustness, accuracy, and advanced fusion capability. Applying the Hough Transform to data fusion made it possible to use the modern concepts of agent-based fusion and cluster-level fusion, thus moving the solution of the geolocation problem to an upper level of the fusion hierarchy. The work on the Hough Transform led to a comparison of Bayesian and non-Bayesian approaches to the geolocation task, which in turn led to the derivation of a generalized estimator. This estimator highlighted a number of mathematical functions that can be exploited for geolocation and data fusion. These functions have been tested for the purpose of data fusion in geolocation, and the Hough Transform was found to be a useful alternative approach to data fusion for the geolocation of RF emitters.
APA, Harvard, Vancouver, ISO, and other styles
40

Keane, John F. "A framework for molecular signal processing and detection in biological cells /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/6126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yin, Deming. "A framework for network RTK data processing based on grid computing." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/28596/1/Deming_Yin_Thesis.pdf.

Full text
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating network and single-base RTK systems, together with multiple GNSS constellations, for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK)
These four challenges lead to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using a Grid Computing facility, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research; it is a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new NRTK framework based on Grid Computing, while some aspects of the system's performance remain to be improved in future work.
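As a toy illustration of the layered request flow just described (client portal, service layer, execution layer), the following Python sketch schedules simplified RTK jobs across a small set of named grid nodes; the node names, the round-robin scheduling rule, and the process_rtk stub are assumptions for illustration, not the thesis implementation.

    # Layered dispatch of RTK jobs to grid nodes (illustrative sketch only).
    from concurrent.futures import ThreadPoolExecutor
    import itertools

    GRID_NODES = ["node-a", "node-b", "node-c", "node-d", "node-e"]   # assumed five-node grid
    _round_robin = itertools.cycle(range(len(GRID_NODES)))

    def process_rtk(node, station_id, rtcm_frames):
        """Execution layer: placeholder for the RTK computation run on one grid node."""
        return {"node": node, "station": station_id, "frames": len(rtcm_frames)}

    def service_layer(request, pool):
        """Service layer: pick a node (round robin here) and submit the job to it."""
        node = GRID_NODES[next(_round_robin)]
        return pool.submit(process_rtk, node, request["station"], request["rtcm"])

    def client_portal(requests):
        """Client layer: accept user requests and return the scheduled results."""
        with ThreadPoolExecutor(max_workers=len(GRID_NODES)) as pool:
            futures = [service_layer(r, pool) for r in requests]
            return [f.result() for f in futures]

    results = client_portal([{"station": "QUT01", "rtcm": [b"\xd3..."]},
                             {"station": "QUT02", "rtcm": [b"\xd3...", b"\xd3..."]}])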
APA, Harvard, Vancouver, ISO, and other styles
42

Yin, Deming. "A framework for network RTK data processing based on grid computing." Queensland University of Technology, 2009. http://eprints.qut.edu.au/28596/.

Full text
Abstract:
Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at the centimetre accuracy level in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states, and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated operating network and single-base RTK systems, together with multiple GNSS constellations, for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:
• Multiple GNSS constellations and multiple frequencies
• Large-scale, wide-area NRTK services with a network of networks
• Complex computation algorithms and processes
• A greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK)
These four challenges lead to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using a Grid Computing facility, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed in this research; it is a layered framework consisting of: 1) a client layer in the form of a Grid portal; 2) a service layer; 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open-source software is adopted to download real-time RTCM data from multiple reference stations through the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new NRTK framework based on Grid Computing, while some aspects of the system's performance remain to be improved in future work.
APA, Harvard, Vancouver, ISO, and other styles
43

Tymoshenko, Kateryna. "A General Framework for Exploiting Background Knowledge in Natural Language Processing." Doctoral thesis, Università degli studi di Trento, 2012. https://hdl.handle.net/11572/368094.

Full text
Abstract:
The two key aspects of natural language processing (NLP) applications based on machine learning techniques are the learning algorithm and the feature representation of the documents, entities, or words that have to be manipulated. Until now, the majority of approaches have exploited syntactic features, while semantic feature extraction has suffered from the low coverage of the available knowledge resources and the difficulty of matching text and ontology elements. Nowadays, the Semantic Web has made available a large amount of logically encoded world knowledge called Linked Open Data (LOD). However, extending state-of-the-art natural language applications to use LOD resources is not a trivial task, for a number of reasons including the ambiguity of natural language and the heterogeneity and ambiguity of the schemes adopted by different LOD resources. In this thesis we define a general framework for supporting NLP with semantic features extracted from LOD. The main idea behind the framework is to (i) map terms in text to the unique resource identifiers (URIs) of LOD concepts through Wikipedia mediation; (ii) use the URIs to obtain background knowledge from LOD; and (iii) integrate the obtained knowledge as semantic features into machine learning algorithms. We evaluate the framework by means of case studies on coreference resolution and relation extraction. Additionally, we propose an approach for increasing the accuracy of the mapping step based on the "one sense per discourse" hypothesis. Finally, we present an open-source Java tool for extracting LOD knowledge through SPARQL endpoints and converting it to NLP features.
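A minimal Python sketch of steps (ii) and (iii) might query a SPARQL endpoint for the rdf:type assertions of a mapped URI and expose them as binary features; the DBpedia endpoint, the example URI, and the feature naming below are illustrative assumptions rather than the interface of the thesis tool.

    # Turn LOD type assertions into classifier features (illustrative sketch only).
    from SPARQLWrapper import SPARQLWrapper, JSON

    def lod_type_features(uri, endpoint="https://dbpedia.org/sparql"):
        sparql = SPARQLWrapper(endpoint)
        sparql.setQuery(f"SELECT DISTINCT ?type WHERE {{ <{uri}> a ?type }}")
        sparql.setReturnFormat(JSON)
        bindings = sparql.query().convert()["results"]["bindings"]
        # One binary feature per asserted type, e.g. "type=http://dbpedia.org/ontology/City": 1
        return {f"type={b['type']['value']}": 1 for b in bindings}

    features = lod_type_features("http://dbpedia.org/resource/Trento")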
APA, Harvard, Vancouver, ISO, and other styles
44

Tymoshenko, Kateryna. "A General Framework for Exploiting Background Knowledge in Natural Language Processing." Doctoral thesis, University of Trento, 2012. http://eprints-phd.biblio.unitn.it/870/1/tymoshenko-thesis-submitted.pdf.

Full text
Abstract:
The two key aspects of natural language processing (NLP) applications based on machine learning techniques are the learning algorithm and the feature representation of the documents, entities, or words that have to be manipulated. Until now, the majority of approaches have exploited syntactic features, while semantic feature extraction has suffered from the low coverage of the available knowledge resources and the difficulty of matching text and ontology elements. Nowadays, the Semantic Web has made available a large amount of logically encoded world knowledge called Linked Open Data (LOD). However, extending state-of-the-art natural language applications to use LOD resources is not a trivial task, for a number of reasons including the ambiguity of natural language and the heterogeneity and ambiguity of the schemes adopted by different LOD resources. In this thesis we define a general framework for supporting NLP with semantic features extracted from LOD. The main idea behind the framework is to (i) map terms in text to the unique resource identifiers (URIs) of LOD concepts through Wikipedia mediation; (ii) use the URIs to obtain background knowledge from LOD; and (iii) integrate the obtained knowledge as semantic features into machine learning algorithms. We evaluate the framework by means of case studies on coreference resolution and relation extraction. Additionally, we propose an approach for increasing the accuracy of the mapping step based on the "one sense per discourse" hypothesis. Finally, we present an open-source Java tool for extracting LOD knowledge through SPARQL endpoints and converting it to NLP features.
APA, Harvard, Vancouver, ISO, and other styles
45

Goh, Ong Sing. "A framework and evaluation of conversation agents." Murdoch University, 2008. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20081020.134601.

Full text
Abstract:
This project details the development of a novel and practical framework for the development of conversation agents (CAs), or conversation robots. CAs are software programs that can be used to provide a natural interface between humans and computers. In this study, 'conversation' refers to real-time dialogue exchange between human and machine, which may range from web chatting to "on-the-go" conversation through mobile devices. In essence, the project proposes a "smart and effective" communication technology in which an autonomous agent is able to carry out simulated human conversation via multiple channels. The CA developed in this project is termed "Artificial Intelligence Natural-language Identity" (AINI), and AINI is used to illustrate the implementation and testing carried out in this project. Up to now, most CAs have been developed with the short-term objective of serving as tools to convince users that they are talking with real humans, as in the case of the Turing Test. Traditional designs have mainly relied on ad-hoc approaches and hand-crafted domain knowledge. Such approaches make it difficult for a fully integrated system to be developed and modified for other domain applications and tasks. The proposed framework in this thesis addresses such limitations. Overcoming the weaknesses of previous systems has been the key challenge in this study. The research has provided a better understanding of the system requirements and has led to a systematic approach for constructing intelligent CAs based on an agent architecture using a modular N-tiered approach. This study demonstrates an effective implementation and exploration of the new paradigm of Computer Mediated Conversation (CMC) through CAs. The most significant aspect of the proposed framework is its ability to re-use and encapsulate expertise such as domain knowledge, natural language query, and the human-computer interface through plug-in components. As a result, the developer does not need to change the framework implementation for different applications. The proposed system provides interoperability among heterogeneous systems and has the flexibility to be adapted for other languages, interface designs, and domain applications. A modular design of the knowledge representation facilitates the creation of the CA knowledge bases. This enables easier integration of open-domain and domain-specific knowledge, with the ability to provide answers to broader queries. In order to build the knowledge base for the CAs, this study has also proposed a mechanism to gather information from commonsense collaborative knowledge and online web documents. The proposed Automated Knowledge Extraction Agent (AKEA) has been used for the extraction of unstructured knowledge from the Web. It is also important to establish the trustworthiness of the sources of information, and this thesis therefore introduces a Web Knowledge Trust Model (WKTM) for that purpose. In order to assess the proposed framework, relevant tools and application modules have been developed, and an evaluation of their effectiveness has been carried out to validate the performance and accuracy of the system. Both laboratory and public experiments with online users in real time have been carried out. The results have shown that the proposed system is effective. In addition, it has been demonstrated that the CA can be implemented on the Web, in mobile services, and in Instant Messaging (IM).
In the real-time human-machine conversation experiment, it was shown that AINI is able to carry out conversations with human users by providing spontaneous interaction in an unconstrained setting. The study observed that AINI and humans share common properties in linguistic features and paralinguistic cues. These human-computer interactions have been analysed and have contributed to the understanding of how users interact with CAs. Such knowledge is also useful for the development of conversation systems that exploit the commonalities found in these interactions. While AINI was found to have difficulties in responding to some forms of paralinguistic cues, this points to directions for further work to improve CA performance in the future.
APA, Harvard, Vancouver, ISO, and other styles
46

Hopper, Michael A. "A compiler framework for multithreaded parallel systems." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Birkenes, Øystein. "A Framework for Speech Recognition using Logistic Regression." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1599.

Full text
Abstract:

Although discriminative approaches like the support vector machine or logistic regression have had great success in many pattern recognition applications, they have achieved only limited success in speech recognition. Two of the difficulties often encountered are that 1) speech signals typically have variable lengths, and 2) speech recognition is a sequence labeling problem, where each spoken utterance corresponds to a sequence of words or phones.

In this thesis, we present a framework for automatic speech recognition using logistic regression. We solve the difficulty of variable-length speech signals by including a mapping in the logistic regression framework that transforms each speech signal into a fixed-dimensional vector. The mapping is defined either explicitly, with a set of hidden Markov models (HMMs) for use in penalized logistic regression (PLR), or implicitly, through a sequence kernel to be used with kernel logistic regression (KLR). Unlike previous work that has used HMMs in combination with a discriminative classification approach, we jointly optimize the logistic regression parameters and the HMM parameters using a penalized likelihood criterion.

Experiments show that joint optimization improves the recognition accuracy significantly. The sequence kernel we present is motivated by the dynamic time warping (DTW) distance between two feature vector sequences. Instead of considering only the optimal alignment path, we sum up the contributions from all alignment paths. Preliminary experiments with the sequence kernel show promising results.
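The "sum over all alignment paths" idea can be sketched with a simple dynamic program in which each cell accumulates the contributions of every monotone alignment ending there; the Gaussian local similarity and its bandwidth in the Python sketch below are illustrative assumptions, not the kernel defined in the thesis.

    # All-alignment-paths sequence kernel (illustrative sketch only).
    import numpy as np

    def all_paths_kernel(X, Y, gamma=0.5):
        """X, Y: sequences of feature vectors with shapes (n, d) and (m, d)."""
        n, m = len(X), len(Y)
        K = np.zeros((n + 1, m + 1))
        K[0, 0] = 1.0                                   # the empty alignment
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                local = np.exp(-gamma * np.sum((X[i - 1] - Y[j - 1]) ** 2))
                # Sum over the three ways an alignment path can reach cell (i, j).
                K[i, j] = local * (K[i - 1, j] + K[i, j - 1] + K[i - 1, j - 1])
        return K[n, m]

    k = all_paths_kernel(np.random.randn(20, 13), np.random.randn(25, 13))  # e.g. 13-dim MFCC frames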

A two-step approach is used for handling the sequence labeling problem. In the first step, a set of HMMs is used to generate an N-best list of sentence hypotheses for a spoken utterance. In the second step, these sentence hypotheses are rescored using logistic regression on the segments in the N-best list. A garbage class is introduced in the logistic regression framework in order to get reliable probability estimates for the segments in the N-best lists. We present results on both a connected digit recognition task and a continuous phone recognition task.

APA, Harvard, Vancouver, ISO, and other styles
48

Gawley, Darren John. "Towards an estimation framework for some problems in computer vision." Title page, abstract and table of contents only, 2004. http://web4.library.adelaide.edu.au/theses/09PH/09phg284.pdf.

Full text
Abstract:
Thesis (Ph.D.)--University of Adelaide, School of Computer Science and Cooperative Research Centre for Sensor Signal and Information Processing, 2004.
"September 2004" Includes bibliographical references (leaves 119-126).
APA, Harvard, Vancouver, ISO, and other styles
49

Chung, Wilson C. "Adaptive subband video coding in a rate-distortion-constrained framework." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/15459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Pyla, Hari Krishna. "Tempest: A Framework for High Performance Thermal-Aware Distributed Computing." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33198.

Full text
Abstract:
Compute clusters are consuming more power at higher densities than ever before. This results in increased thermal dissipation, the need for powerful cooling systems, and ultimately a reduction in system reliability as temperatures increase. Over the past several years, the research community has reacted to this problem by producing software tools such as HotSpot and Mercury to estimate system thermal characteristics and validate thermal-management techniques. While these tools are flexible and useful, they suffer several limitations: they can be cumbersome for the average user, they may take significant time and expertise to port to different systems, and they produce detail and accuracy at the expense of execution times long enough to prohibit iterative testing. We propose a fast, easy-to-use, accurate, and portable software framework called Tempest (for temperature estimator) that leverages emergent thermal sensors to enable users to profile, evaluate, and reduce the thermal characteristics of systems and applications. In this thesis, we illustrate the use of Tempest to analyze the thermal effects of various parallel benchmarks in clusters. We also show how users can analyze the effects of thermal optimizations on cluster applications. Dynamic Voltage and Frequency Scaling (DVFS) reduces the power consumption of high-performance clusters by reducing processor voltage during periods of low utilization. We designed Tempest to measure the runtime effects of processor frequency on thermals. Our experiments indicate that HPC workload characteristics greatly impact the effects of DVFS on temperature. We propose a thermal-aware DVFS scheduling approach that proactively controls processor voltage across a cluster by evaluating and predicting trends in processor temperature. We identify approaches that can maintain temperature thresholds and reduce temperature with minimal impact on performance. Our results indicate that proactive, temperature-aware scheduling of DVFS can reduce cluster-wide processor thermals by more than 10 degrees Celsius, the threshold for improving electronic reliability by 50%.
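A minimal Python sketch of proactive, temperature-aware DVFS in the spirit described above fits a linear trend to recent temperature samples, predicts ahead, and steps the frequency down (or back up) before a threshold is crossed; the sensor and governor hooks, thresholds, and frequency steps are placeholders for illustration, not Tempest's implementation.

    # Proactive temperature-aware DVFS control loop (illustrative sketch only).
    import time
    from collections import deque
    import numpy as np

    FREQ_STEPS_MHZ = [1000, 1400, 1800, 2200, 2600]   # assumed available frequency steps
    THRESHOLD_C, LOOKAHEAD_S, PERIOD_S = 70.0, 30.0, 5.0

    def read_cpu_temp_c():
        # Placeholder: replace with a real sensor read (e.g. a sysfs thermal zone on Linux).
        return 55.0 + 10.0 * np.random.rand()

    def set_cpu_freq_mhz(freq):
        # Placeholder: replace with a real governor call; here we just report the choice.
        print(f"target frequency: {freq} MHz")

    def control_loop():
        history = deque(maxlen=12)                     # roughly the last minute of samples
        level = len(FREQ_STEPS_MHZ) - 1                # start at the highest frequency
        while True:
            history.append((time.time(), read_cpu_temp_c()))
            if len(history) >= 3:
                t, temp = np.array(history).T
                slope, intercept = np.polyfit(t - t[0], temp, 1)
                predicted = intercept + slope * (t[-1] - t[0] + LOOKAHEAD_S)
                if predicted > THRESHOLD_C and level > 0:
                    level -= 1                          # proactively slow down before overheating
                elif predicted < THRESHOLD_C - 5.0 and level < len(FREQ_STEPS_MHZ) - 1:
                    level += 1                          # recover performance when headroom returns
                set_cpu_freq_mhz(FREQ_STEPS_MHZ[level])
            time.sleep(PERIOD_S)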
Master of Science
APA, Harvard, Vancouver, ISO, and other styles