Dissertations / Theses on the topic 'Financial engineering Data processing'

To see the other types of publications on this topic, follow the link: Financial engineering Data processing.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Financial engineering Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Fernandez, Noemi. "Statistical information processing for data classification." FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.

Full text
Abstract:
This thesis introduces new algorithms for analysis and classification of multivariate data. Statistical approaches are devised for the objectives of data clustering, data classification and object recognition. An initial investigation begins with the application of fundamental pattern recognition principles. Where such fundamental principles meet their limitations, statistical and neural algorithms are integrated to augment the overall approach for an enhanced solution. This thesis provides a new dimension to the problem of classification of data as a result of the following developments: (1) application of algorithms for object classification and recognition; (2) integration of a neural network algorithm which determines the decision functions associated with the task of classification; (3) determination and use of the eigensystem using newly developed methods with the objectives of achieving optimized data clustering and data classification, and dynamic monitoring of time-varying data; and (4) use of the principal component transform to exploit the eigensystem in order to perform the important tasks of orientation-independent object recognition, and dimensionality reduction of the data so as to optimize processing time without compromising accuracy in the analysis of this data.
APA, Harvard, Vancouver, ISO, and other styles
2

Chiu, Cheng-Jung. "Data processing in nanoscale profilometry." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36677.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995.
Includes bibliographical references (p. 176-177).
New developments on the nanoscale are taking place rapidly in many fields. Instrumentation used to measure and understand the geometry and properties of small-scale structures is therefore essential. One of the most promising devices for carrying measurement science into the nanoscale is the scanning probe microscope. A prototype of a nanoscale profilometer based on the scanning probe microscope has been built in the Laboratory for Manufacturing and Productivity at MIT. A sample is placed on a precision flip stage and different sides of the sample are scanned under the SPM to acquire the surface topography of each side. To reconstruct the original three-dimensional profile, techniques such as digital filtering, edge identification, and image matching are investigated and implemented in the computer programs that post-process the data, with greater emphasis placed on the nanoscale application. The important programming issues are addressed as well. Finally, this system's error sources are discussed and analyzed.
by Cheng-Jung Chiu.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
3

Koriziz, Hariton. "Signal processing methods for the modelling and prediction of financial data." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504921.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Laurila, M. (Mikko). "Big data in Finnish financial services." Bachelor's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201711243156.

Full text
Abstract:
This thesis aims to explore the concept of big data and to create an understanding of big data maturity in the Finnish financial services industry. The research questions of this thesis are “What kind of big data solutions are being implemented in the Finnish financial services sector?” and “Which factors impede faster implementation of big data solutions in the Finnish financial services sector?”. Big data, being a concept usually linked with huge data sets and economies of scale, is an interesting topic for research in Finland, a market in which the size of data sets is somewhat limited by the size of the market. This thesis includes a literature review on the concept of big data and on earlier literature about the Finnish big data landscape, as well as a qualitative content analysis of available public information on big data maturity in the context of the Finnish financial services market. The results of this research show that in Finland big data is utilized to some extent, at least by the larger organizations. Financial-services-specific big data solutions include, for example, the automation of application handling in insurance. The clearest and most specific factors slowing the development of big data maturity in the industry are the lack of a competent workforce and compliance projects for new regulations consuming development resources. These results can be used as an overview of the state of big data maturity in the Finnish financial services industry. This study also lays a solid foundation for further research in the form of interviews, which would provide more in-depth data.
APA, Harvard, Vancouver, ISO, and other styles
5

Siu, Ka Wai. "Numerical algorithms for data analysis with imaging and financial applications." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/550.

Full text
Abstract:
In this thesis, we study models and numerical algorithms for data analysis, with applications to image processing and financial forecasting. The thesis is composed of two parts, namely tensor regression and data assimilation methods for image restoration. We start with investigating the tensor regression problem in Chapter 2. It is a generalization of classical regression that can accommodate and analyze much more information by using multi-dimensional arrays. Since the regression problem admits multiple solutions, we propose a regularized tensor regression model. By imposing a low-rank property on the solution and considering the structure of the tensor product, we develop an algorithm which is suitable for scalable implementations. The regularization method is used to select useful solutions, which depend on the application. The proposed model is solved by the alternating minimization method, and we prove the convergence of the objective function values and iterates by the maximization-minimization (MM) technique. We study different factors which affect the performance of the algorithm, including sample sizes, solution ranks and noise levels. Applications include image compression and financial forecasting. In Chapter 3, we apply filtering methods from data assimilation to image restoration problems. Traditionally, data assimilation methods optimally combine a predictive state from a dynamical system with real, partial observations. The motivation is to improve the model forecast using real observations. We construct artificial dynamics for the non-blind deblurring problem. By making use of the spatial information of a single image, a span of ensemble members is constructed. A two-stage use of the ensemble transform Kalman filter (ETKF) is adopted to deblur corrupted images. The theoretical background of the ETKF and the use of artificial dynamics via the stage augmentation method are provided. Numerical experiments include image and video processing. Concluding remarks and a discussion of future extensions are included in Chapter 4.
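To make the alternating-minimization idea concrete, here is a minimal sketch of the matrix (2-way) special case of regularized low-rank regression, fitted by alternating ridge steps over the two factor matrices. The data, rank and regularization weight are invented for illustration; this is not the algorithm or code from the thesis.

```python
import numpy as np

def lowrank_matrix_regression(X, y, rank=2, lam=1e-2, iters=50, seed=0):
    """Fit y_i ~ <U V^T, X_i> with U (p1 x R), V (p2 x R) by alternating ridge steps."""
    n, p1, p2 = X.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((p1, rank))
    V = rng.standard_normal((p2, rank))
    for _ in range(iters):
        # Fix V: features Z_i = X_i V, then solve a ridge problem for vec(U).
        Z = np.einsum('ijk,kr->ijr', X, V).reshape(n, p1 * rank)
        U = np.linalg.solve(Z.T @ Z + lam * np.eye(p1 * rank), Z.T @ y).reshape(p1, rank)
        # Fix U: features Z_i = X_i^T U, then solve a ridge problem for vec(V).
        Z = np.einsum('ikj,kr->ijr', X, U).reshape(n, p2 * rank)
        V = np.linalg.solve(Z.T @ Z + lam * np.eye(p2 * rank), Z.T @ y).reshape(p2, rank)
    return U, V

# Synthetic demo: recover a rank-2 coefficient matrix from noisy linear measurements.
rng = np.random.default_rng(1)
W_true = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
X = rng.standard_normal((500, 8, 6))
y = np.einsum('ijk,jk->i', X, W_true) + 0.01 * rng.standard_normal(500)
U, V = lowrank_matrix_regression(X, y, rank=2)
print(np.linalg.norm(U @ V.T - W_true) / np.linalg.norm(W_true))
```

Because each step exactly minimizes the penalized least-squares objective over one factor while the other is held fixed, the objective value decreases monotonically under this alternation.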
APA, Harvard, Vancouver, ISO, and other styles
6

Pan, Howard W. (Howard Weihao) 1973. "Integrating financial data over the Internet." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/37812.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (leaves 65-66).
This thesis examines the issues and value-added, from both the technical and economic perspective, of solving the information integration problem in the retail banking industry. In addition, we report on an implementation of a prototype for the Universal Banking Application using currently available technologies. We report on some of the issues we discovered and the suggested improvements for future work.
by Howard W. Pan.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Derksen, Timothy J. (Timothy John). "Processing of outliers and missing data in multivariate manufacturing data." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38800.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaf 64).
by Timothy J. Derksen.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
8

Nyström, Simon, and Joakim Lönnegren. "Processing data sources with big data frameworks." Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188204.

Full text
Abstract:
Big data is a concept that is expanding rapidly. As more and more data is generated and garnered, there is an increasing need for efficient solutions that can be utilized to process all this data in attempts to gain value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of relatively small files. More specifically, the purpose is to test two frameworks that can be used for processing big data. The frameworks that are tested against each other are Apache NiFi and Apache Storm. A method is devised in order to, firstly, construct a data flow and, secondly, construct a method for testing the performance and scalability of the frameworks running this data flow. The results reveal that Apache Storm is faster than Apache NiFi for the sort of task that was tested. As the number of nodes included in the tests went up, the performance did not always follow. This indicates that adding more nodes to a big data processing pipeline does not always result in a better performing setup and that, sometimes, other measures must be taken to improve performance.
APA, Harvard, Vancouver, ISO, and other styles
9

徐順通 and Sung-thong Andrew Chee. "Computerisation in Hong Kong professional engineering firms." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31263124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Bostanudin, Nurul Jihan Farhah. "Computational methods for processing ground penetrating radar data." Thesis, University of Portsmouth, 2013. https://researchportal.port.ac.uk/portal/en/theses/computational-methods-for-processing-ground-penetrating-radar-data(d519f94f-04eb-42af-a504-a4c4275d51ae).html.

Full text
Abstract:
The aim of this work was to investigate signal processing and analysis techniques for Ground Penetrating Radar (GPR) and its use in the civil engineering and construction industry. GPR is the general term applied to techniques which employ radio waves, typically in the megahertz and gigahertz range, to map structures and features buried in the ground or in manmade structures. GPR measurements can suffer from a large amount of noise. This is primarily caused by interference from other radio-wave-emitting devices (e.g., cell phones, radios, etc.) that are present in the surrounding area of the GPR system during data collection. In addition to noise, the presence of clutter – reflections from non-target objects buried in the vicinity of the target – can make GPR measurements difficult to understand and interpret, even for skilled human GPR analysts. This thesis is concerned with the improvements and processes that can be applied to GPR data in order to enhance the target detection and characterisation process, particularly with multivariate signal processing techniques. These primarily include Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Both techniques have been investigated, implemented and compared regarding their abilities to separate the target-originating signals from the noise and clutter type signals present in the data. A combination of PCA and ICA (SVDPICA) and two-dimensional PCA (2DPCA) are the specific approaches adopted and further developed in this work. The ability of these methods to reduce the amount of clutter and unwanted signals present in GPR data has been investigated and reported in this thesis, suggesting that their use in the automated analysis of GPR images is a possibility. Further analysis carried out in this work concentrated on analysing the performance of the developed multivariate signal processing techniques and, at the same time, investigating the possibility of identifying and characterising the features of interest in pre-processed GPR images. The driving idea behind this part of the work was to extract the resonant modes present in the individual traces of each GPR image and to use the properties of those poles to characterise the target. Three related but different methods have been implemented and applied in this work – Extended Prony, Linear Prediction Singular Value Decomposition and Matrix Pencil methods. In addition to these approaches, the PCA technique has been used to reduce the dimensionality of the extracted traces and to compare signals measured in various experimental setups. Performance analysis shows that Matrix Pencil offers the best results.
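As a rough companion to the multivariate pre-processing described above, the sketch below removes the leading principal components of a B-scan (a traces-by-samples matrix); in practice these components tend to capture the ground bounce and clutter common to all traces. It is a generic PCA background-removal sketch on synthetic data, not the SVDPICA or 2DPCA implementations developed in the thesis.

```python
import numpy as np

def pca_clutter_removal(bscan, n_remove=1):
    """Subtract the first n_remove principal components from a GPR B-scan.

    bscan: 2-D array (n_traces, n_samples); the strongest components usually
    model clutter/background reflections shared by all traces.
    """
    centered = bscan - bscan.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    clutter = U[:, :n_remove] @ np.diag(s[:n_remove]) @ Vt[:n_remove, :]
    return centered - clutter

# Synthetic demo: a horizontal clutter band plus a weak hyperbola-like target.
rng = np.random.default_rng(0)
n_traces, n_samples = 64, 256
clutter = np.tile(np.sin(np.linspace(0, 20, n_samples)), (n_traces, 1))
target = np.zeros((n_traces, n_samples))
for i in range(n_traces):
    target[i, 100 + abs(i - 32)] = 1.0          # crude diffraction hyperbola
data = clutter + 0.2 * target + 0.05 * rng.standard_normal((n_traces, n_samples))
cleaned = pca_clutter_removal(data, n_remove=1)
print(np.abs(cleaned).max(), np.abs(data - clutter).max())
```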
APA, Harvard, Vancouver, ISO, and other styles
12

Grinman, Alex J. "Natural language processing on encrypted patient data." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113438.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
While many industries can benefit from machine learning techniques for data analysis, they often have neither the technical expertise nor the computational power to do so. Therefore, many organizations would benefit from outsourcing their data analysis. Yet, stringent data privacy policies prevent outsourcing sensitive data and may stop the delegation of data analysis in its tracks. In this thesis, we put forth a two-party system where one party capable of powerful computation can run certain machine learning algorithms from the natural language processing domain on the second party's data, where the first party is limited to learning only specific functions of the second party's data and nothing else. Our system provides simple cryptographic schemes for locating keywords, matching approximate regular expressions, and computing frequency analysis on encrypted data. We present a full implementation of this system in the form of an extendible software library and a command line interface. Finally, we discuss a medical case study where we used our system to run a suite of unmodified machine learning algorithms on encrypted free-text patient notes.
by Alex J. Grinman.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
13

Westlund, Kenneth P. (Kenneth Peter). "Recording and processing data from transient events." Thesis, Massachusetts Institute of Technology, 1988. https://hdl.handle.net/1721.1/129961.

Full text
Abstract:
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Includes bibliographical references.
by Kenneth P. Westlund Jr.
APA, Harvard, Vancouver, ISO, and other styles
14

Setiowijoso, Liono. "Data Allocation for Distributed Programs." PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5102.

Full text
Abstract:
This thesis shows that both data and code must be efficiently distributed to achieve good performance in a distributed system. Most previous research has either tried to distribute code structures to improve parallelism or to distribute data to reduce communication costs. Code distribution (exploiting functional parallelism) is an effort to distribute or to duplicate function codes to optimize parallel performance. On the other hand, data distribution tries to place data structures as close as possible to the function codes that use them, so that communication cost can be reduced. In particular, dataflow researchers have primarily focused on code partitioning and assignment. We have adapted existing data allocation algorithms for use with an existing dataflow-based system, ParPlum. ParPlum allows the execution of dataflow graphs on networks of workstations. To evaluate the impact of data allocation, we extended ParPlum to more effectively handle data structures. We then implemented tools to extract from dataflow graphs information that is relevant to the mapping algorithms and fed this information to our version of a data distribution algorithm. To see the relation between code and data parallelism, we added optimizations for the distribution of the loop function components and the data structure access components. All of this is done automatically without programmer or user involvement. We ran a number of experiments using matrix multiplication as our workload. We used different numbers of processors and different existing partitioning and allocation algorithms. Our results show that automatic data distribution greatly improves the performance of distributed dataflow applications. For example, with 15 x 15 matrices, applying data distribution speeds up execution about 80% on 7 machines. Using data distribution and our code optimizations on 7 machines speeds up execution over the base case by 800%. Our work shows that it is possible to make efficient use of distributed networks with compiler support and shows that both code mapping and data mapping must be considered to achieve optimal performance.
APA, Harvard, Vancouver, ISO, and other styles
15

Jakovljevic, Sasa. "Data collecting and processing for substation integration enhancement." Texas A&M University, 2003. http://hdl.handle.net/1969/93.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Aygar, Alper. "Doppler Radar Data Processing And Classification." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.

Full text
Abstract:
In this thesis, improving the performance of the automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. Target types are car, truck, bus, tank, helicopter, moving man and running man. The input to this thesis is the output of real Doppler radar signals which were normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctoral thesis by Erdogan (2002). TRP vectors are normalized and homogenized Doppler radar target signals with respect to target speed, target aspect angle and target range. Some target classes have repetitions in time in their TRPs. By using these repetitions, improvement of the target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis) are implemented and applied to the normalized Doppler radar signals for efficient feature extraction and dimension reduction. These techniques transform the input vectors, which are the normalized Doppler radar signals, to another space. The effects of these feature extraction algorithms, and of the use of the repetitions in Doppler radar target signals, on Doppler radar target classification performance are studied.
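A minimal sketch of the processing chain the abstract describes, PCA for dimension reduction followed by an SVM classifier, using scikit-learn on synthetic stand-in vectors; the actual TRP data, class definitions and parameter choices of the thesis are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for normalized radar target signatures: 7 classes, 128-sample vectors.
rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 7, 60, 128
X = np.vstack([rng.standard_normal((n_per_class, n_features)) + 0.5 * c
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# PCA reduces the signal dimension before the SVM decision stage.
model = make_pipeline(PCA(n_components=20), SVC(kernel='rbf', C=10.0, gamma='scale'))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```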
APA, Harvard, Vancouver, ISO, and other styles
17

Lu, Feng. "Big data scalability for high throughput processing and analysis of vehicle engineering data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207084.

Full text
Abstract:
"Sympathy for Data" is a platform that is utilized for Big Data automation analytics. It is based on visual interface and workflow configurations. The main purpose of the platform is to reuse parts of code for structured analysis of vehicle engineering data. However, there are some performance issues on a single machine for processing a large amount of data in Sympathy for Data. There are also disk and CPU IO intensive issues when the data is oversized and the platform need fits comfortably in memory. In addition, for data over the TB or PB level, the Sympathy for data needs separate functionality for efficient processing simultaneously and scalable for distributed computation functionality. This paper focuses on exploring the possibilities and limitations in using the Sympathy for Data platform in various data analytic scenarios within the Volvo Cars vision and strategy. This project re-writes the CDE workflow for over 300 nodes into pure Python script code and make it executable on the Apache Spark and Dask infrastructure. We explore and compare both distributed computing frameworks implemented on Amazon Web Service EC2 used for 4 machine with a 4x type for distributed cluster measurement. However, the benchmark results show that Spark is superior to Dask from performance perspective. Apache Spark and Dask will combine with Sympathy for Data products for a Big Data processing engine to optimize the system disk and CPU IO utilization. There are several challenges when using Spark and Dask to analyze large-scale scientific data on systems. For instance, parallel file systems are shared among all computing machines, in contrast to shared-nothing architectures. Moreover, accessing data stored in commonly used scientific data formats, such as HDF5 is not tentatively supported in Spark. This report presents research carried out on the next generation of Big Data platforms in the automotive industry called "Sympathy for Data". The research questions focusing on improving the I/O performance and scalable distributed function to promote Big Data analytics. During this project, we used the Dask.Array parallelism features for interpretation the data sources as a raster shows in table format, and Apache Spark used as data processing engine for parallelism to load data sources to memory for improving the big data computation capacity. The experiments chapter will demonstrate 640GB of engineering data benchmark for single node and distributed computation mode to evaluate the Sympathy for Data Disk CPU and memory metrics. Finally, the outcome of this project improved the six times performance of the original Sympathy for data by developing a middleware SparkImporter. It is used in Sympathy for Data for distributed computation and connected to the Apache Spark for data processing through the maximum utilization of the system resources. This improves its throughput, scalability, and performance. It also increases the capacity of the Sympathy for data to process Big Data and avoids big data cluster infrastructures.
APA, Harvard, Vancouver, ISO, and other styles
18

Chung, Kit-lun, and 鐘傑麟. "Intelligent agent for Internet Chinese financial news retrieval." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30106503.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

陳詠儀 and Wing-yi Chan. "The smart card technology in the financial services." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31268596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Trigueiros, Duarte. "Neural network based methods in the extraction of knowledge from accounting and financial data." Thesis, University of East Anglia, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.292217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves. We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
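The bilateral grid in the first half of the abstract can be sketched in a few dozen lines: splat pixels into a coarse 3-D grid indexed by (x, y, intensity), blur the grid, then slice it back at the original pixel coordinates. The simplified grayscale version below, with assumed sampling rates, is only an illustration of the idea, not the GPU implementation from the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=8.0, sigma_r=0.1):
    """Approximate bilateral filtering of a grayscale image in [0, 1] via a bilateral grid."""
    h, w = img.shape
    gh, gw = int(np.ceil(h / sigma_s)) + 3, int(np.ceil(w / sigma_s)) + 3
    gd = int(np.ceil(1.0 / sigma_r)) + 3

    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys / sigma_s + 1).ravel()
    gx = (xs / sigma_s + 1).ravel()
    gz = (img / sigma_r + 1).ravel()

    # Splat: accumulate (value, weight) homogeneous pairs into the nearest grid cells.
    data = np.zeros((gh, gw, gd))
    weights = np.zeros((gh, gw, gd))
    idx = (np.rint(gy).astype(int), np.rint(gx).astype(int), np.rint(gz).astype(int))
    np.add.at(data, idx, img.ravel())
    np.add.at(weights, idx, 1.0)

    # Blur: a small Gaussian over the grid yields edge-aware smoothing after slicing.
    data = gaussian_filter(data, sigma=1.0)
    weights = gaussian_filter(weights, sigma=1.0)

    # Slice: trilinear interpolation back at each pixel's grid coordinates.
    coords = np.vstack([gy, gx, gz])
    num = map_coordinates(data, coords, order=1)
    den = map_coordinates(weights, coords, order=1)
    return (num / np.maximum(den, 1e-8)).reshape(h, w)

# Demo on a noisy step edge: the edge is preserved while the flat regions are smoothed.
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = np.clip(img + 0.1 * np.random.default_rng(0).standard_normal(img.shape), 0, 1)
out = bilateral_grid_filter(noisy)
print(out[:, :16].std(), out[:, 48:].std())
```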
by Jiawen Chen.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
22

Jakubiuk, Wiktor. "High performance data processing pipeline for connectome segmentation." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106122.

Full text
Abstract:
Thesis: M. Eng. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2016.
"December 2015." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-88).
By investigating neural connections, neuroscientists try to understand the brain and reconstruct its connectome. Automated connectome reconstruction from high-resolution electron microscopy is a challenging problem, as all neurons and synapses in a volume have to be detected. A cubic millimeter of high-resolution brain tissue takes roughly a petabyte of space, which state-of-the-art pipelines are unable to process to date. A high-performance, fully automated image processing pipeline is proposed. Using a combination of image processing and machine learning algorithms (convolutional neural networks and random forests), the pipeline constructs a 3-dimensional connectome from 2-dimensional cross-sections of a mammal's brain. The proposed system achieves a low error rate (comparable with the state-of-the-art) and is capable of processing volumes hundreds of gigabytes in size. The main contributions of this thesis are multiple algorithmic techniques for 2-dimensional pixel classification with varying accuracy and speed trade-offs, as well as a fast object segmentation algorithm. The majority of the system is parallelized for multi-core machines, and with minor additional modifications is expected to work in a distributed setting.
by Wiktor Jakubiuk.
M. Eng. in Computer Science and Engineering
APA, Harvard, Vancouver, ISO, and other styles
23

Nguyen, Qui T. "Robust data partitioning for ad-hoc query processing." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106004.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-62).
Data partitioning can significantly improve query performance in distributed database systems. Most proposed data partitioning techniques choose the partitioning based on a particular expected query workload or use a simple upfront scheme, such as uniform range partitioning or hash partitioning on a key. However, these techniques do not adequately address the case where the query workload is ad-hoc and unpredictable, as in many analytic applications. The HYPER-PARTITIONING system aims to fill that gap, by using a novel space-partitioning tree on the space of possible attribute values to define partitions incorporating all attributes of a dataset. The system creates a robust upfront partitioning tree, designed to benefit all possible queries, and then adapts it over time in response to the actual workload. This thesis evaluates the robustness of the upfront hyper-partitioning algorithm, describes the implementation of the overall HYPER-PARTITIONING system, and shows how hyper-partitioning improves the performance of both selection and join queries.
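A toy sketch of the kind of space-partitioning tree the abstract refers to: a k-d-style tree that cycles through all attributes and splits each node at the median, so every attribute contributes to the upfront partitioning and range queries only touch the partitions they overlap. This is a generic illustration, not the HYPER-PARTITIONING algorithm itself (which in addition adapts the tree to the observed workload).

```python
import numpy as np

def build_partitions(rows, attrs, depth=0, leaf_size=1000):
    """Recursively split `rows` (2-D array) on the median of one attribute per level."""
    if len(rows) <= leaf_size:
        return {"rows": rows}                        # leaf partition
    attr = attrs[depth % len(attrs)]                 # cycle through all attributes
    split = np.median(rows[:, attr])
    left, right = rows[rows[:, attr] <= split], rows[rows[:, attr] > split]
    if len(left) == 0 or len(right) == 0:            # degenerate split, stop here
        return {"rows": rows}
    return {"attr": attr, "split": split,
            "left":  build_partitions(left, attrs, depth + 1, leaf_size),
            "right": build_partitions(right, attrs, depth + 1, leaf_size)}

def query(node, predicates):
    """Collect partitions that may satisfy `predicates` = {attr: (lo, hi)} ranges."""
    if "rows" in node:
        return [node["rows"]]
    lo, hi = predicates.get(node["attr"], (-np.inf, np.inf))
    parts = []
    if lo <= node["split"]:
        parts += query(node["left"], predicates)
    if hi > node["split"]:
        parts += query(node["right"], predicates)
    return parts

# Demo: 100k rows, 4 attributes; a selection on attribute 2 touches only some leaves.
data = np.random.default_rng(0).random((100_000, 4))
tree = build_partitions(data, attrs=[0, 1, 2, 3], leaf_size=5_000)
touched = query(tree, {2: (0.0, 0.25)})
print(len(touched), sum(len(p) for p in touched))
```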
by Qui T. Nguyen.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
24

Bao, Shunxing. "Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing." Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.

Full text
Abstract:

Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., Network file system file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The results for laboratory-based approaches reveal that performance is impeded by standard network switches since typical processing can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. On the other hand, the grid may be costly to use due to the dedicated resources used to execute the tasks and lack of elasticity. With increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.

Despite this promise, our studies have revealed that existing big data frameworks exhibit different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Due to the sequential nature of executing the analysis stages on traditional software technologies and platforms, any errors in the pipeline are only detected at the later stages, despite the sources of errors predominantly being the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework - Hadoop & HBase for Medical Image Processing (HadoopBase-MIP) - which develops a range of performance optimization algorithms and employs a number of system behavior models for data storage, data access and data processing. We also describe how to build prototypes to help verify system behaviors empirically. Furthermore, we present a discovery, made during the development of HadoopBase-MIP, of a new type of contrast for deep brain structure enhancement in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercial big data / high-performance computing cluster with a cheap, scalable and geographically distributed file system.
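The hierarchy mentioned above (project, subject, session, scan, slice) is typically flattened into a single row key so that related slices sort and store contiguously. A minimal, hypothetical sketch of such a key scheme follows; it only illustrates the idea and is not the HadoopBase-MIP implementation.

```python
def make_row_key(project, subject, session, scan, slice_idx):
    """Compose a lexicographically sortable row key preserving the imaging hierarchy.

    Zero-padding keeps numeric components ordered under byte-wise key sorting.
    """
    return "{}|{}|{}|{}|{:06d}".format(project, subject, session, scan, slice_idx)

def scan_prefix(project, subject=None, session=None, scan=None):
    """Row-key prefix for retrieving a whole project/subject/session/scan at once."""
    parts = [p for p in (project, subject, session, scan) if p is not None]
    return "|".join(parts) + "|"

# All slices of one scan share a prefix, so a single prefix scan fetches them together.
keys = [make_row_key("proj01", "subj0042", "sess01", "scanT1", i) for i in range(3)]
print(keys)
print(scan_prefix("proj01", "subj0042"))
```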

APA, Harvard, Vancouver, ISO, and other styles
25

Hatchell, Brian. "Data base design for integrated computer-aided engineering." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Waite, Martin. "Data structures for the reconstruction of engineering drawings." Thesis, Nottingham Trent University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Einstein, Noah. "SmartHub: Manual Wheelchair Data Extraction and Processing Device." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555352793977171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Chaudhuri, Shomesh Ernesto. "Financial signal processing : applications to asset-market dynamics and healthcare finance." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117839.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 139-144).
The seemingly random fluctuations of price and value produced by information flow and complex interactions across a diverse population of stakeholders has motivated the extensive use of stochastic processes to analyze both capital markets and the regulatory approval process in healthcare. This thesis approaches the statistical analysis of such processes through the lens of signal processing, with a particular emphasis on studying how dynamics evolve over time. We begin with a brief introduction to financial signal processing in Part I, before turning to specific applications in the main body of the thesis. In Part II, we apply spectral analysis to understand and quantify the relationship between asset-market dynamics across multiple time horizons, and show how this framework can be used to improve portfolio and risk management. Using the Fourier transform, we decompose asset-return alphas, betas and covariances into distinct frequency components, allowing us to identify the relative importance of specific time horizons in determining each of these quantities. Our approach can be applied to any portfolio, and is particularly useful for comparing the forecast power of multiple investment strategies. Part III addresses the growing interest from the healthcare industry, regulators and patients to include Bayesian adaptive methods in the regulatory approval process of new therapies. By applying sequential likelihood ratio tests to a Bayesian decision analysis framework that assigns asymmetric weights to false approvals and false rejections, we are able to design adaptive clinical trials that maximize the value to current and future patients and consequently, public health. We also consider the possibility that as the process unfolds, drug sponsors might stop a trial early if new information suggests market prospects are not as favorable as originally forecasted. We show that clinical trials that can be modified as data are observed are more valuable than trials without this flexibility.
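The frequency decomposition of betas described above can be illustrated with a short sketch: by Parseval's theorem, the covariance between demeaned asset and market returns splits into additive contributions from each Fourier frequency, so band-level betas sum exactly to the ordinary regression beta. The band cutoffs and synthetic returns below are illustrative assumptions, not the thesis's estimator.

```python
import numpy as np

def beta_by_frequency(asset, market, n_bands=2):
    """Split the OLS beta of `asset` on `market` into additive frequency-band contributions."""
    a = asset - asset.mean()
    m = market - market.mean()
    n = len(a)
    A, M = np.fft.rfft(a), np.fft.rfft(m)

    # Parseval: sum_t a_t m_t is a weighted sum over one-sided frequencies of Re(A_k conj(M_k)).
    cross = np.real(A * np.conj(M))
    power = np.abs(M) ** 2
    weights = np.full(len(A), 2.0)      # interior bins appear twice in the two-sided spectrum
    weights[0] = 1.0                    # DC bin appears once
    if n % 2 == 0:
        weights[-1] = 1.0               # Nyquist bin appears once for even n
    var_m = np.sum(weights * power)

    edges = np.linspace(0, len(A), n_bands + 1).astype(int)
    return np.array([np.sum(weights[lo:hi] * cross[lo:hi]) / var_m
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Demo: an asset that tracks slow market movements more strongly than fast ones.
rng = np.random.default_rng(0)
n = 2048
market = rng.standard_normal(n)
slow = np.convolve(market, np.ones(20) / 20, mode="same")      # low-frequency component
asset = 1.5 * slow + 0.2 * (market - slow) + 0.1 * rng.standard_normal(n)

band_betas = beta_by_frequency(asset, market, n_bands=2)
dm = market - market.mean()
ols_beta = np.dot(asset - asset.mean(), dm) / np.dot(dm, dm)
print(band_betas, band_betas.sum(), ols_beta)   # the band betas sum to the OLS beta
```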
by Shomesh Ernesto Chaudhuri.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Guttman, Michael. "Sampled-data IIR filtering via time-mode signal processing." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86770.

Full text
Abstract:
In this work, the design of sampled-data infinite impulse response filters based on time-mode signal processing circuits is presented. Time-mode signal processing (TMSP), defined as the processing of sampled analog information using time-difference variables, has become one of the more popular emerging technologies in circuit design. As TMSP is still relatively new, much development is needed to extend the technology into a general signal-processing tool. In this work, a set of general building blocks will be introduced that perform the most basic mathematical operations in the time mode. By arranging these basic structures, higher-order time-mode systems, specifically time-mode filters, will be realized. Three second-order time-mode filters (low-pass, band-reject, high-pass) are modeled using MATLAB and simulated in Spectre to verify the design methodology. Finally, a damped integrator and a second-order low-pass time-mode IIR filter are both implemented using discrete components.
APA, Harvard, Vancouver, ISO, and other styles
30

Breest, Martin, Paul Bouché, Martin Grund, Sören Haubrock, Stefan Hüttenrauch, Uwe Kylau, Anna Ploskonos, Tobias Queck, and Torben Schreiter. "Fundamentals of Service-Oriented Engineering." Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2009/3380/.

Full text
Abstract:
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background is needed; this paper provides that knowledge, together with an overview of service-oriented architectures.
APA, Harvard, Vancouver, ISO, and other styles
31

Faber, Marc. "On-Board Data Processing and Filtering." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596433.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
One of the requirements resulting from mounting pressure on flight test schedules is the reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has ramifications such as the demand for recording and processing not just raw measurement data but also data converted to engineering units in real time, as well as for an optimized use of the bandwidth available for the telemetry downlink and, ultimately, for shortening the duration of procedures intended to disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to successfully address these needs consists of implementing more CPU intelligence and processing power directly on the on-board flight test equipment. This provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to easier-to-handle engineering units. This leads to a faster extraction and analysis of the actual data contents of the on-board signals and busses. Another central goal is the efficient use of the available bandwidth for telemetry. Real-time data reduction via intelligent filtering is one approach to achieving this challenging objective. The data filtering process should be performed alongside an all-data-capture recording, and the user should be able to easily select the data of interest without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should provide a seamless and transparent data transmission, making quick data analysis viable. On-board data processing and filtering has the potential to become the main future path for handling the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
APA, Harvard, Vancouver, ISO, and other styles
32

Hinrichs, Angela S. (Angela Soleil). "An architecture for distributing processing on realtime data streams." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Marcus, Adam Ph D. Massachusetts Institute of Technology. "Optimization techniques for human computation-enabled data processing systems." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78454.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 119-124).
Crowdsourced labor markets make it possible to recruit large numbers of people to complete small tasks that are difficult to automate on computers. These marketplaces are increasingly widely used, with projections of over $1 billion being transferred between crowd employers and crowd workers by the end of 2012. While crowdsourcing enables forms of computation that artificial intelligence has not yet achieved, it also presents crowd workflow designers with a series of challenges including describing tasks, pricing tasks, identifying and rewarding worker quality, dealing with incorrect responses, and integrating human computation into traditional programming frameworks. In this dissertation, we explore the systems-building, operator design, and optimization challenges involved in building a crowd-powered workflow management system. We describe a system called Qurk that utilizes techniques from databases such as declarative workflow definition, high-latency workflow execution, and query optimization to aid crowd-powered workflow developers. We study how crowdsourcing can enhance the capabilities of traditional databases by evaluating how to implement basic database operators such as sorts and joins on datasets that could not have been processed using traditional computation frameworks. Finally, we explore the symbiotic relationship between the crowd and query optimization, enlisting crowd workers to perform selectivity estimation, a key component in optimizing complex crowd-powered workflows.
by Adam Marcus.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
34

Stein, Oliver. "Intelligent Resource Management for Large-scale Data Stream Processing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-391927.

Full text
Abstract:
With the increasing trend of using cloud computing resources, the efficient utilization of these resources becomes more and more important. Data stream processing is a paradigm gaining in popularity, with tools such as Apache Spark Streaming or Kafka widely available, and companies are shifting towards real-time monitoring of data such as sensor networks and financial data, and towards anomaly detection. However, it is difficult for users to efficiently make use of cloud computing resources, and studies show that a lot of energy and compute hardware is wasted. We propose an approach to optimizing resource usage in cloud computing environments designed for data stream processing frameworks, based on bin packing algorithms. Test results show that resource usage is substantially improved as a result, with future improvements suggested to increase it further. The solution was implemented as an extension of the HarmonicIO data stream processing framework and evaluated through simulated workloads.
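The abstract does not spell out which bin-packing heuristic is used, so the following is a generic first-fit-decreasing sketch of the underlying idea: packing stream-processing tasks, ordered by resource demand, onto as few hosts as possible. Task demands and host capacity are made up for illustration.

```python
def first_fit_decreasing(demands, capacity):
    """Pack task resource demands into the fewest hosts of a given capacity (heuristic)."""
    bins = []                                   # each bin = [remaining_capacity, [tasks]]
    for demand in sorted(demands, reverse=True):
        for b in bins:
            if b[0] >= demand:                  # first host with enough room
                b[0] -= demand
                b[1].append(demand)
                break
        else:                                   # no host fits: open a new one
            bins.append([capacity - demand, [demand]])
    return bins

# Demo: CPU demands (in cores) of streaming tasks packed onto 8-core hosts.
tasks = [4.0, 2.5, 2.5, 2.0, 1.5, 1.5, 1.0, 1.0, 0.5, 3.5]
hosts = first_fit_decreasing(tasks, capacity=8.0)
for i, (free, assigned) in enumerate(hosts):
    print(f"host {i}: tasks={assigned}, free={free}")
```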
APA, Harvard, Vancouver, ISO, and other styles
35

DeMaio, William (William Aloysius). "Data processing and inference methods for zero knowledge nuclear disarmament." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106698.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 63-64).
It is hoped that future nuclear arms control treaties will call for the dismantlement of stored nuclear warheads. To make the authenticated decommissioning of nuclear weapons agreeable, methods must be developed to validate the structure and composition of nuclear warheads without it being possible to gain knowledge about these attributes. Nuclear resonance fluorescence (NRF) imaging potentially enables the physically-encrypted verification of nuclear weapons in a manner that would meet treaty requirements. This thesis examines the physics behind NRF, develops tools for processing resonance data, establishes methodologies for simulating information gain during warhead verification, and tests potential inference processes. The influence of several inference parameters are characterized, and success is shown in predicting the properties of an encrypting foil and the thickness of a warhead in a one-dimensional verification scenario.
by William DeMaio.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
36

Gardener, Michael Edwin. "A multichannel, general-purpose data logger." Thesis, Cape Technikon, 1986. http://hdl.handle.net/20.500.11838/2179.

Full text
Abstract:
Thesis (Diploma (Electrical Engineering))--Cape Technikon, 1986.
This thesis describes the implementation of a general-purpose, microprocessor-based Data Logger. The Hardware allows analog data acquisition from one to thirty two channels with 12 bit resolution and at a data throughput of up to 2KHz. The data is logged directly to a Buffer memory and from there, at the end of each 109, it is dumped to an integral cassette data recorder. The recorded data can be transfered from the logger to a desk-top computer, via the IEEE 488 port, for further processing and display. All log parameters are user selectable by means of menu prompted keyboard entry and a Real-Time clock (RTC) provides date and time information automatically.
APA, Harvard, Vancouver, ISO, and other styles
37

Baker, Alison M. "Restructuring Option Chain Data Sets Using Matlab." Digital WPI, 2010. https://digitalcommons.wpi.edu/etd-theses/473.

Full text
Abstract:
Large data sets are required to store all of the information contained in option chains. The data set we work with includes all U.S. exchange-traded put and call options. This data set is part of a larger data set commonly referred to as the National Best Bid Offer (NBBO) data set. The national best bid and offer is a Securities and Exchange Commission (SEC) term for the best available bid and ask prices. Brokers must guarantee investors these prices on their trades. We have acquired data for the 5-year period from 2005 to 2009 for all U.S. traded options. Each year of data is approximately 6 gigabytes. The company from which we acquired the data (DeltaNeutral - Options Data And End Of Day Downloads, 2010) also has a software package, OptimalTrader, to process the data. For this data to be used in research projects, the data must be accessible by specific underlying security for selected date ranges. This type of data is more useful to the financial mathematics student than the output given by the software provided by DeltaNeutral. The software used in this data manipulation is Matlab. Each individual file of original data was parsed, and new files were written with some reformatting in which the original data was largely reorganized. The new organization will make searching for information from one stock or any specific group of stocks easier. We have created three m-files in Matlab which deal with reformatting the data, error handling, and searching through the original or reformatted data. The result is that new datasets can be created for further study and manipulation. Future students working with this data should find this method, the toolset, and the newly constructed datasets to be useful tools in working with options data and examining option chains.
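A present-day equivalent of the reorganization described above is easy to sketch with pandas: parse the daily option-chain files and split them into one file per underlying symbol, so a single symbol's chain can be pulled for a date range. The column names and paths below are assumptions about the vendor format, not the thesis's actual m-files.

```python
import glob
import os
import pandas as pd

def restructure_by_underlying(input_glob, out_dir):
    """Re-group daily option-chain files into one file per underlying symbol."""
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(input_glob)):
        daily = pd.read_csv(path, parse_dates=["quote_date", "expiration"])  # assumed columns
        for symbol, chain in daily.groupby("underlying"):
            out_path = os.path.join(out_dir, f"{symbol}.csv")
            chain.to_csv(out_path, mode="a", index=False,
                         header=not os.path.exists(out_path))

def load_chain(symbol, out_dir, start, end):
    """Fetch one underlying's puts and calls for a date range from the restructured files."""
    chain = pd.read_csv(os.path.join(out_dir, f"{symbol}.csv"),
                        parse_dates=["quote_date", "expiration"])
    mask = (chain["quote_date"] >= start) & (chain["quote_date"] <= end)
    return chain.loc[mask].sort_values(["expiration", "strike"])

# Example usage (paths and column names are hypothetical):
# restructure_by_underlying("raw/options_2005*.csv", "by_symbol")
# spy_chain = load_chain("SPY", "by_symbol", "2005-01-01", "2005-03-31")
```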
APA, Harvard, Vancouver, ISO, and other styles
38

Adonis, Ridoh. "An empirical investigation into the information management systems at a South African financial institution." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2474.

Full text
Abstract:
Thesis (MTech (Business Administration))--Cape Peninsula University of Technology, 2016.
The study has been triggered by the increase in information breaches in organisations. Organisations may have policies and procedures, strategies and systems in place in order to mitigate the risk of information breaches; however, data breaches are still on the rise. Governments across the world have or are putting in place laws around data protection which organisations have to align their process, strategies and systems to. The continuous and rapid emergence of new technology is making it even easier for information breaches to occur. In particular, the focus of this study is aimed at the information management systems in a selected financial institution in South Africa. Based on the objectives, this study: explored the shortfalls of information security on a South African financial institution; investigated whether data remains separate while privacy is ensured; investigated responsiveness of business processes on information management; investigated the capability of systems on information management; investigated the strategies formulated for information management and finally, investigated projects and programmes aimed at addressing information management. Quantitative, as well as qualitative analysis, was employed whereby questionnaires were sent to employees who were employed at junior management positions. Semi- structured in-depth interviews were self-administered whereby the researcher interviewed senior management at the organisation. These senior managers from different value chains are responsible for implementing information management policies and strategy.
APA, Harvard, Vancouver, ISO, and other styles
39

Nedstrand, Paul, and Razmus Lindgren. "Test Data Post-Processing and Analysis of Link Adaptation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121589.

Full text
Abstract:
Analysing the performance of cell phones and other devices wirelessly connected to mobile networks is key when validating whether the standard of the system is achieved. This justifies having testing tools that can produce a good overview of the data between base stations and cell phones to show the performance of the cell phone. This master thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics are the correlation between two parameters in the traffic data in the channel (e.g. throughput over the channel condition). The tool is oriented towards the analysis of link adaptation, and from the produced graphs the testing personnel at Ericsson will be able to analyse the performance of one or several mobile devices. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful for Ericsson, we had test personnel answer a survey on its usability and user-friendliness.
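The kind of graph the tool produces, one traffic parameter plotted against another such as throughput versus channel quality, can be sketched with pandas and matplotlib as below; the trace is synthesized here and the column names are assumptions, so this is an illustration rather than the Ericsson tool itself.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Assumed trace format: one row per transport block with a reported CQI and the
# achieved throughput; a real export of the traffic data would be loaded instead.
rng = np.random.default_rng(0)
cqi = rng.integers(1, 16, size=5000)
throughput = 400 * cqi + rng.normal(0, 300, size=5000)
trace = pd.DataFrame({"cqi": cqi, "throughput_kbps": throughput})

# Aggregate throughput per reported channel-quality indicator (CQI).
stats = trace.groupby("cqi")["throughput_kbps"].agg(["mean", "std", "count"])

fig, ax = plt.subplots()
ax.errorbar(stats.index, stats["mean"], yerr=stats["std"], fmt="o-")
ax.set_xlabel("Reported CQI")
ax.set_ylabel("Throughput [kbit/s]")
ax.set_title("Throughput vs. channel condition (link adaptation)")
fig.savefig("throughput_vs_cqi.png", dpi=150)
print(stats)
```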
APA, Harvard, Vancouver, ISO, and other styles
40

Narayanan, Shruthi (Shruthi P. ). "Real-time processing and visualization of intensive care unit data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119537.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 83).
Intensive care unit (ICU) patients undergo detailed monitoring so that copious information regarding their condition is available to support clinical decision-making. Full utilization of the data depends heavily on its quantity, quality and manner of presentation to the physician at the bedside of a patient. In this thesis, we implemented a visualization system to aid ICU clinicians in collecting, processing, and displaying available ICU data. Our goals for the system are: to be able to receive large quantities of patient data from various sources, to compute complex functions over the data that are able to quantify an ICU patient's condition, to plot the data using a clean and interactive interface, and to be capable of live plot updates upon receiving new data. We made significant headway toward our goals, and we succeeded in creating a highly adaptable visualization system that future developers and users will be able to customize.
by Shruthi Narayanan.
M. Eng.
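The live-update goal mentioned in the abstract can be illustrated with a small, self-contained sketch; the data source below is a stand-in random stream, not the thesis' actual ICU feeds or visualization system.

    import collections, random
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    window = collections.deque(maxlen=300)        # rolling window of the most recent samples
    fig, ax = plt.subplots()
    line, = ax.plot([], [])
    ax.set_xlabel("sample"); ax.set_ylabel("heart rate [bpm]")

    def update(_frame):
        window.append(60 + random.gauss(0, 2))    # stand-in for a newly received bedside sample
        line.set_data(range(len(window)), list(window))
        ax.relim(); ax.autoscale_view()
        return (line,)

    ani = FuncAnimation(fig, update, interval=1000)   # redraw once per second as new data arrives
    plt.show()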
APA, Harvard, Vancouver, ISO, and other styles
41

Shih, Daphne Yong-Hsu. "A data path for a pixel-parallel image processing system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/40570.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 65).
by Daphne Yong-Hsu Shih.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
42

Saltin, Joakim. "Interactive visualization of financial data : Development of a visual data mining tool." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-181225.

Full text
Abstract:
In this project, a prototype visual data mining tool was developed, allowing users to interactively investigate large multi-dimensional datasets visually (using 2D visualization techniques) using so-called drill-down, roll-up and slicing operations. The project included all steps of the development, from writing specifications and designing the program to implementing and evaluating it. Using ideas from data warehousing, custom methods for storing pre-computed aggregations of data (commonly referred to as materialized views) and retrieving data from these were developed and implemented in order to achieve higher performance on large datasets. View materialization enables the program to easily fetch or calculate a view using other views, something which can yield significant performance gains if view sizes are much smaller than the underlying raw dataset. The choice of which views to materialize was done in an automated manner using a well-known algorithm - the greedy algorithm for view materialization - which selects the fraction of all possible views that is likely (but not guaranteed) to yield the best performance gain. The use of materialized views was shown to have good potential to increase performance for large datasets, with an average speedup (compared to on-the-fly queries) between 20 and 70 for a test dataset containing 500,000 rows. The end result was a program combining flexibility with good performance, which was also reflected by good scores in a user-acceptance test, with participants from the company where this project was carried out.
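The greedy view-selection heuristic the abstract refers to (Harinarayan, Rajaraman and Ullman's algorithm) can be sketched compactly as below; the view names and sizes are made up for illustration and do not come from the thesis.

    def greedy_views(size, answers, top, k):
        """Pick up to k views to materialise, maximising estimated query-cost savings."""
        selected = [top]                       # the raw dataset ("top" view) is always available
        def cheapest_source(w):
            return min(size[v] for v in selected if w in answers[v])
        for _ in range(k):
            best, best_benefit = None, 0
            for v in size:
                if v in selected:
                    continue
                # Benefit: total reduction in query cost for every view v can answer.
                benefit = sum(max(0, cheapest_source(w) - size[v]) for w in answers[v])
                if benefit > best_benefit:
                    best, best_benefit = v, benefit
            if best is None:                   # no remaining view helps
                break
            selected.append(best)
        return selected

    # Tiny illustrative lattice: raw data plus two coarser aggregations ("views").
    size = {"raw": 500_000, "by_month_account": 60_000, "by_month": 5_000}
    answers = {"raw": {"raw", "by_month_account", "by_month"},
               "by_month_account": {"by_month_account", "by_month"},
               "by_month": {"by_month"}}
    print(greedy_views(size, answers, "raw", k=2))   # -> ['raw', 'by_month_account', 'by_month']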
APA, Harvard, Vancouver, ISO, and other styles
43

Akleman, Ergun. "Pseudo-affine functions : a non-polynomial implicit function family to describe curves and surfaces." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/15409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kardos, Péter. "Performance optimization of the online data processing software of a high-energy physics experiment." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-404475.

Full text
Abstract:
The LHCb experiment probes the differences between matter and anti-matter by examining particle collisions. Like any modern high energy physics experiment, LHCb relies on a complex hardware and software infrastructure to collect and analyze the data generated from particle collisions. To filter out unimportant data before writing it to permanent storage, particle collision events have to be processed in real-time, which requires a lot of computing power. This thesis focuses on performance optimizations of several parts of the real-time data processing software: i) one of the particle path reconstruction steps; ii) the particle path refining step; iii) the data structures used by the real-time reconstruction algorithms. The thesis investigates and employs techniques such as vectorization, cache-friendly memory structures, microarchitecture analysis, and memory allocation optimizations. The resulting performance-optimized code uses today's many-core, data-parallel, superscalar processors to their full potential in order to meet the performance demands of the experiment. The thesis results show that the reconstruction step got 3 times faster, the refinement step got 2 times faster, and the changes to the data model allowed vectorization of most reconstruction algorithms.
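The kind of data-model change that enables vectorization can be illustrated generically with a structure-of-arrays layout; the sketch below uses Python/NumPy for brevity and is not LHCb code.

    import numpy as np

    n = 1_000_000
    # Structure-of-arrays: one contiguous array per field (cache-friendly, SIMD-friendly),
    # instead of one object per hit.
    hits = {"x": np.random.rand(n), "y": np.random.rand(n), "z": np.random.rand(n)}

    # Vectorised distance-from-beamline for every hit at once; with an object-per-hit
    # layout this would be a million-iteration scalar loop instead.
    r = np.sqrt(hits["x"]**2 + hits["y"]**2)
    close_to_axis = r < 0.1          # boolean mask, also computed in bulk
    print(close_to_axis.sum(), "hits within 0.1 of the beam axis")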
APA, Harvard, Vancouver, ISO, and other styles
45

Jungner, Andreas. "Ground-Based Synthetic Aperture Radar Data processing for Deformation Measurement." Thesis, KTH, Geodesi och satellitpositionering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199677.

Full text
Abstract:
This thesis describes a first hands-on experience working with a Ground-Based Synthetic Aperture Radar (GB-SAR) at the Institute of Geomatics in Castelldefels (Barcelona, Spain), used to exploit radar interferometry usually employed on space-borne platforms. We describe the key concepts of a GB-SAR as well as the data processing procedure to obtain deformation measurements. A large part of the thesis work has been devoted to the development of GB-SAR processing tools such as coherence and interferogram generation, automating the co-registration process, geocoding of GB-SAR data and the adaptation of existing satellite SAR tools to GB-SAR data. Finally, a series of field campaigns was conducted to test the instrument in different environments and to collect data necessary to develop GB-SAR processing tools as well as to discover capabilities and limitations of the instrument. The key outcome of the field campaigns is that the high coherence necessary to conduct interferometric measurements can be obtained with a long temporal baseline. Several factors that affect the result are discussed, such as the reflectivity of the observed scene, the image co-registration and the illuminating geometry.
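The two GB-SAR products named above, the interferogram and the coherence, are commonly computed from two co-registered complex images as in the sketch below; the estimation window size is an illustrative choice, not the thesis' setting.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def interferogram_and_coherence(s1, s2, win=5):
        """s1, s2: co-registered complex radar images of the same scene."""
        cross = s1 * np.conj(s2)
        interferogram = np.angle(cross)                     # interferometric phase
        # Coherence: magnitude of the locally averaged cross product, normalised.
        num = np.abs(uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win))
        den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                      uniform_filter(np.abs(s2) ** 2, win))
        coherence = num / np.maximum(den, 1e-12)            # in [0, 1]; high values enable InSAR
        return interferogram, coherence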
APA, Harvard, Vancouver, ISO, and other styles
46

van, Schaik Sebastiaan Johannes. "A framework for processing correlated probabilistic data." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:91aa418d-536e-472d-9089-39bef5f62e62.

Full text
Abstract:
The amount of digitally-born data has surged in recent years. In many scenarios, this data is inherently uncertain (or: probabilistic), such as data originating from sensor networks, image and voice recognition, location detection, and automated web data extraction. Probabilistic data requires novel and different approaches to data mining and analysis, which explicitly account for the uncertainty and the correlations therein. This thesis introduces ENFrame: a framework for processing and mining correlated probabilistic data. Using this framework, it is possible to express both traditional and novel algorithms for data analysis in a special user language, without having to explicitly address the uncertainty of the data on which the algorithms operate. The framework will subsequently execute the algorithm on the probabilistic input, and perform exact or approximate parallel probability computation. During the probability computation, correlations and provenance are succinctly encoded using probabilistic events. This thesis contains novel contributions in several directions. An expressive user language – a subset of Python – is introduced, which allows a programmer to implement algorithms for probabilistic data without requiring knowledge of the underlying probabilistic model. Furthermore, an event language is presented, which is used for the probabilistic interpretation of the user program. The event language can succinctly encode arbitrary correlations using events, which are the probabilistic counterparts of deterministic user program variables. These highly interconnected events are stored in an event network, a probabilistic interpretation of the original user program. Multiple techniques for exact and approximate probability computation (with error guarantees) of such event networks are presented, as well as techniques for parallel computation. Adaptations of multiple existing data mining algorithms are shown to work in the framework, and are subsequently subjected to an extensive experimental evaluation. Additionally, a use-case is presented in which a probabilistic adaptation of a clustering algorithm is used to predict faults in energy distribution networks. Lastly, this thesis presents techniques for integrating a number of different probabilistic data formalisms for use in this framework and in other applications.
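The role of shared events in encoding correlations can be illustrated with a toy example; the sketch below is generic Python, not ENFrame's user or event language, and computes exact probabilities by brute-force enumeration over independent Boolean variables.

    from itertools import product

    def prob(event, var_probs):
        """Exact P(event); var_probs maps variable name -> P(variable is True)."""
        names = list(var_probs)
        total = 0.0
        for values in product([True, False], repeat=len(names)):
            world = dict(zip(names, values))
            weight = 1.0
            for v, val in world.items():
                weight *= var_probs[v] if val else 1.0 - var_probs[v]
            if event(world):
                total += weight
        return total

    vp = {"x": 0.3, "y": 0.5}
    in_cluster_a = lambda w: w["x"]                 # event: point belongs to cluster A
    in_cluster_b = lambda w: w["x"] and w["y"]      # correlated with A through the shared variable x
    print(prob(in_cluster_a, vp), prob(in_cluster_b, vp),
          prob(lambda w: in_cluster_a(w) and in_cluster_b(w), vp))  # joint != product of marginals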
APA, Harvard, Vancouver, ISO, and other styles
47

Korziuk, Kamil, and Tomasz Podbielski. "Engineering Requirements for platform, integrating health data." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16089.

Full text
Abstract:
In today's world people are increasingly on the move and population ageing is rising significantly, and new technologies are trying to do their best to meet people's expectations. The results of a survey conducted with elderly participants during a technology conference at Blekinge Institute of Technology showed that none of them had any kind of assistance in their homes, although they would need it. This Master's thesis presents human health state monitoring with a focus on fall detection. Health care systems will not completely eliminate cases in which people fall, but further study of the causes can help prevent them. In this thesis, the integration of sensors for vital parameter measurements and human position, together with evaluation of the measured data, is presented. The thesis is based on specific technologies compatible with the Arduino Uno and Arduino Mega microcontrollers, measurement sensors, and data exchange between a database, MATLAB/Simulink and a web page. Sensors integrated into one common system make it possible to examine the patient's health state and call for assistance in case of health decline or serious risk of injury. System efficiency was evaluated over many series of measurements. In the first phase, a comparison between different filters was carried out to choose the one with the best performance. Kalman filtering and a trim parameter for the accelerometer were used to obtain satisfactory results and the final human fall detection algorithm. The acquired measurements and data evaluation showed that Kalman filtering allows high performance to be reached and gives the most reliable results. In the second phase, sensor placement was tested. The collected data showed that human falls are correctly recognized by the system with high accuracy. As a result, the designed system allows measurement of human health and vital state: temperature, heartbeat, position and activity. Additionally, the system provides an online overview of the current health state, historical data and an IP camera preview when an alarm is raised due to a poor health condition.
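The two ingredients named in the abstract, Kalman filtering of accelerometer data and a fall-detection rule, can be sketched as follows; the noise parameters and the 2.5 g threshold are illustrative assumptions, not the thesis' calibrated values, and this host-side Python stands in for the Arduino/MATLAB implementation.

    import math

    def kalman_1d(measurements, q=0.01, r=0.5):
        """Smooth a 1-D signal with a random-walk Kalman filter (q: process, r: measurement noise)."""
        x, p = measurements[0], 1.0           # state estimate and its variance
        out = []
        for z in measurements:
            p += q                            # predict
            k = p / (p + r)                   # Kalman gain
            x += k * (z - x)                  # update with measurement z
            p *= (1 - k)
            out.append(x)
        return out

    def detect_falls(ax, ay, az, threshold_g=2.5):
        """Flag samples whose smoothed acceleration magnitude exceeds the impact threshold."""
        magnitude = [math.sqrt(x*x + y*y + z*z) for x, y, z in zip(ax, ay, az)]
        smoothed = kalman_1d(magnitude)
        return [i for i, g in enumerate(smoothed) if g > threshold_g]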
APA, Harvard, Vancouver, ISO, and other styles
48

R, S. Umesh. "Algorithms for processing polarization-rich optical imaging data." Thesis, Indian Institute of Science, 2004. http://hdl.handle.net/2005/96.

Full text
Abstract:
This work mainly focuses on signal processing issues related to continuous-wave, polarization-based direct imaging schemes. Here, we present a mathematical framework to analyze the performance of the Polarization Difference Imaging (PDI) and Polarization Modulation Imaging (PMI). We have considered three visualization parameters, namely, the polarization intensity (PI), Degree of Linear Polarization (DOLP) and polarization orientation (PO) for comparing these schemes. The first two parameters appear frequently in literature, possibly under different names. The last parameter, polarization orientation, has been introduced and elaborated in this thesis. We have also proposed some extensions/alternatives for the existing imaging and processing schemes and analyzed their advantages. Theoretically and through Monte-Carlo simulations, we have studied the performance of these schemes under white and coloured noise conditions, concluding that, in general, the PMI gives better estimates of all the parameters. Experimental results corroborate our theoretical arguments. PMI is shown to give asymptotically efficient estimates of these parameters, whereas PDI is shown to give biased estimates of the first two and is also shown to be incapable of estimating PO. Moreover, it is shown that PDI is a particular case of PMI. The property of PDI, that it can yield estimates at lower variances, has been recognized as its major strength. We have also shown that the three visualization parameters can be fused to form a colour image, giving a holistic view of the scene. We report the advantages of analyzing chunks of data and bootstrapped data under various circumstances. Experiments were conducted to image objects through calibrated scattering media and natural media like mist, with successful results. Scattering media prepared with polystyrene microspheres of diameters 2.97 µm, 0.06 µm and 0.13 µm dispersed in water were used in our experiments. An intensified charge coupled device (CCD) camera was used to capture the images. Results showed that imaging could be performed beyond an optical thickness of 40 for particles with 0.13 µm diameter. For larger particles, the depth to which we could image was much smaller. An experiment using an incoherent source yielded better results than with coherent sources, which we attribute to the speckle noise induced by coherent sources. We have suggested a harmonic based imaging scheme, which can perhaps be used when we have a mixture of scattering particles. We have also briefly touched upon the possible post processing that can be performed on the obtained results, and as an example, shown segmentation based on a PO imaging result.
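For reference, the three visualization parameters can be written down using the common four-angle definitions; the sketch below shows standard formulas for illustration and is not the thesis' specific PDI/PMI estimators.

    import numpy as np

    def polarization_parameters(i0, i45, i90, i135):
        """Intensity images behind a linear polarizer at 0, 45, 90 and 135 degrees."""
        s1 = i0 - i90                       # linear Stokes components
        s2 = i45 - i135
        s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
        pi_img = np.sqrt(s1**2 + s2**2)                 # polarization intensity (PI)
        dolp = pi_img / np.maximum(s0, 1e-12)           # degree of linear polarization (DOLP)
        po = 0.5 * np.arctan2(s2, s1)                   # polarization orientation (PO), radians
        pdi = i0 - i90                                  # classic polarization-difference image
        return pi_img, dolp, po, pdi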
APA, Harvard, Vancouver, ISO, and other styles
49

McCaney, Patrick Michael 1980. "Emotional response modeling in financial markets : Boston Stock Exchange data analysis." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28481.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 57-58).
In this thesis, physiological data is analyzed in the context of financial risk processing, specifically investigating the effects of financial trading decisions and situations on the physiological responses of professional market makers. The data for this analysis comes from an experiment performed on market makers at the Boston Stock Exchange. This analysis involved significant preprocessing of large financial and physiological data sets. Short-term and long-term analyses of financial and performance-based event markers of the data are performed and the results interpreted. There are two main conclusions. First, negative performance events are found to be the main driver of physiological responses; positive performance events produce minimal deviations from baseline physiological signals. Second, a long-term analysis of events yields more substantial physiological changes than a short-term analysis.
by Patrick Michael McCaney.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
50

Xie, Tian, and 謝天. "Development of a XML-based distributed service architecture for product development in enterprise clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
