Follow this link to see other types of publications on the topic: Civil engineering Data processing.

Dissertations / Theses on the topic "Civil engineering Data processing"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Consult the top 50 dissertations / theses for your research on the topic "Civil engineering Data processing".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is available in the metadata.

Browse dissertations / theses from many scientific fields and compile a correct bibliography.

1

Sinske, A. N. (Alexander Nicholas). "Comparative evaluation of the model-centred and the application-centred design approach in civil engineering software". Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52741.

Full text
Abstract:
Thesis (PhD)--University of Stellenbosch, 2002.
ENGLISH ABSTRACT: In this dissertation the traditional model-centred (MC) design approach for the development of software in the civil engineering field is compared to a newly developed application-centred (AC) design approach. In the MC design, software models play the central role. A software model maps part of the world, for example its visualization or analysis, onto the memory space of the computer. Characteristic of the MC design is that the identifiers of objects are unique and persistent only within the name scope of a model, and that the classes which define the objects are components of the model. In the AC design all objects of the engineering task are collected in an application. The identifiers of the objects are unique and persistent within the name scope of the application, and classes are no longer components of a model but components of the software platform. This means that an object can be a part of several models. It is investigated whether the demands on information and communication in modern civil engineering processes can be satisfied using the MC design approach. The investigation is based on the evaluation of existing software for the analysis and design of a sewer reticulation system of realistic dimensions and complexity. Structural, quantitative, as well as engineering complexity criteria are used to evaluate the design. For the evaluation of the quantitative criteria, in addition to the actual Duration of Execution, a User Interaction Count, the Persistent Data Size, and a Basic Instruction Count based on a source code complexity analysis are introduced. The analysis of the MC design shows that the solution of an engineering task requires several models. The interaction between the models proves to be complicated and inflexible due to the limitation of object identifier scope: the engineer is restricted to the concepts of the software developer, who must provide static bridges between models in the form of data files or software transformers. The concept of the AC design approach is then presented and implemented in a new software application written in Java. This application is also extended for the distributed computing scenario. New basic classes are defined to manage the static and dynamic behaviour of objects, and to ensure the consistent and persistent state of objects in the application. The same structural and quantitative analyses are performed using the same test data sets as for the MC application. It is shown that the AC design approach is superior to the MC design approach with respect to structural, quantitative and engineering complexity criteria. With respect to the design structure, the limitation of object identifier scope, and thus the requirement for bridges between models, falls away, which is of particular value for the distributed computing scenario. Although the new object management routines introduce an overhead in the duration of execution for the AC design compared to a hypothetical MC design with only one model and no software bridges, the advantages of the design structure outweigh this potential disadvantage.
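A minimal sketch of the application-centred idea described above, written in Python rather than the Java used in the thesis; the class and attribute names are purely illustrative assumptions, not the dissertation's design.

```python
# Hypothetical sketch: application-centred (AC) object management, where every
# engineering object gets an identifier that is unique and persistent within the
# application, so the same object can take part in several models at once.
import itertools

class Application:
    """Application-wide registry; identifiers persist across all models."""
    def __init__(self):
        self._next_id = itertools.count(1)
        self.objects = {}                     # id -> object

    def register(self, obj):
        oid = next(self._next_id)
        self.objects[oid] = obj
        return oid

class Pipe:
    def __init__(self, diameter_m):
        self.diameter_m = diameter_m

app = Application()
pipe_id = app.register(Pipe(diameter_m=0.3))

# Models hold only identifiers, not private copies of the objects, so no
# "bridge" files or transformers are needed between models.
hydraulic_model = {"pipes": [pipe_id]}
visual_model = {"drawables": [pipe_id]}
assert app.objects[hydraulic_model["pipes"][0]] is app.objects[visual_model["drawables"][0]]
```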
2

Bostanudin, Nurul Jihan Farhah. "Computational methods for processing ground penetrating radar data". Thesis, University of Portsmouth, 2013. https://researchportal.port.ac.uk/portal/en/theses/computational-methods-for-processing-ground-penetrating-radar-data(d519f94f-04eb-42af-a504-a4c4275d51ae).html.

Full text
Abstract:
The aim of this work was to investigate signal processing and analysis techniques for Ground Penetrating Radar (GPR) and its use in the civil engineering and construction industry. GPR is the general term applied to techniques which employ radio waves, typically in the megahertz and gigahertz range, to map structures and features buried in the ground or in manmade structures. GPR measurements can suffer from a large amount of noise. This is primarily caused by interference from other radio-wave-emitting devices (e.g., cell phones, radios, etc.) present in the surrounding area of the GPR system during data collection. In addition to noise, the presence of clutter – reflections from non-target objects buried underground in the vicinity of the target – can make GPR measurements difficult to understand and interpret, even for skilled human GPR analysts. This thesis is concerned with the improvements and processes that can be applied to GPR data in order to enhance the target detection and characterisation process, particularly with multivariate signal processing techniques. These primarily include Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Both techniques have been investigated, implemented and compared regarding their abilities to separate the target-originating signals from the noise and clutter-type signals present in the data. A combination of PCA and ICA (SVDPICA) and two-dimensional PCA (2DPCA) are the specific approaches adopted and further developed in this work. The ability of those methods to reduce the amount of clutter and unwanted signals present in GPR data has been investigated and reported in this thesis, suggesting that their use in automated analysis of GPR images is a possibility. Further analysis carried out in this work concentrated on analysing the performance of the developed multivariate signal processing techniques and, at the same time, investigating the possibility of identifying and characterising the features of interest in pre-processed GPR images. The driving idea behind this part of the work was to extract the resonant modes present in the individual traces of each GPR image and to use the properties of those poles to characterise the target. Three related but different methods have been implemented and applied in this work – Extended Prony, Linear Prediction Singular Value Decomposition and Matrix Pencil methods. In addition to these approaches, the PCA technique has been used to reduce the dimensionality of extracted traces and to compare signals measured in various experimental setups. Performance analysis shows that Matrix Pencil offers the best results.
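A minimal sketch of PCA-based clutter suppression on a synthetic GPR B-scan, one of the multivariate techniques the abstract compares; the data, array sizes and the choice of one stripped component are assumptions for illustration only.

```python
# PCA-based clutter suppression for a GPR B-scan (traces stored column-wise).
# Dominant principal components tend to capture ground bounce and clutter that
# is common to all traces; removing them leaves the weaker target reflections.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_traces = 512, 64
clutter = np.outer(np.sin(np.linspace(0, 20, n_samples)), np.ones(n_traces))
target = np.zeros((n_samples, n_traces))
target[200:220, 30:34] = 1.0                       # small buried reflector
bscan = clutter + target + 0.05 * rng.standard_normal((n_samples, n_traces))

X = bscan - bscan.mean(axis=1, keepdims=True)      # remove the mean trace
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
k = 1                                              # strip the first k components
cleaned = X - (U[:, :k] * s[:k]) @ Vt[:k, :]

print("residual energy before:", round(float(np.linalg.norm(X)), 1),
      "after:", round(float(np.linalg.norm(cleaned)), 1))
```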
3

Yang, Su. "PC-grade parallel processing and hardware acceleration for large-scale data analysis". Thesis, University of Huddersfield, 2009. http://eprints.hud.ac.uk/id/eprint/8754/.

Full text
Abstract:
Arguably, modern graphics processing units (GPUs) are the first commodity desktop parallel processors. Although GPU programming originated from interactive rendering in graphical applications such as computer games, researchers in the field of general-purpose computation on GPUs (GPGPU) are showing that the power, ubiquity and low cost of GPUs make them an ideal alternative platform for high-performance computing. This has resulted in extensive exploration of using the GPU to accelerate general-purpose computations in many engineering and mathematical domains outside of graphics. However, limited by the development complexity caused by the graphics-oriented concepts and development tools for GPU programming, GPGPU has mainly been discussed in the academic domain so far and has not yet fully fulfilled its promise in the real world. This thesis aims at exploiting GPGPU in the practical engineering domain and presents a novel contribution to GPGPU-driven linear time invariant (LTI) systems that are employed by the signal processing techniques in stylus-based or optical-based surface metrology and data processing. The core contributions that have been achieved in this project can be summarized as follows. Firstly, a thorough survey of the state of the art of GPGPU applications and their development approaches has been carried out in this thesis. In addition, the category of parallel architecture pattern that the GPGPU belongs to has been specified, which formed the foundation of the GPGPU programming framework design in the thesis. Following this specification, a GPGPU programming framework is deduced as a general guideline to the various GPGPU programming models that are applied to a large diversity of algorithms in scientific computing and engineering applications. Considering the evolution of the GPU's hardware architecture, the proposed framework covers the transition from graphics-originated concepts for GPGPU programming based on legacy GPUs to the abstraction of the stream processing pattern represented by the compute unified device architecture (CUDA), in which the GPU is considered not only a graphics device but also a streaming coprocessor of the CPU. Secondly, the proposed GPGPU programming framework is applied to practical engineering applications, namely surface metrological data processing and image processing, to generate programming models that carry out parallel computing for the corresponding algorithms. The acceleration performance of these models is evaluated in terms of the speed-up factor and the data accuracy, which enabled the generation of quantifiable benchmarks for evaluating consumer-grade parallel processors. It shows that the GPGPU applications outperform the CPU solutions by up to 20 times without significant loss of data accuracy or any noticeable increase in source code complexity, which further validates the effectiveness of the proposed general GPGPU programming framework. Thirdly, this thesis devised methods for carrying out result visualization directly on the GPU, by storing processed data in local GPU memory and making use of the GPU's rendering features, to achieve real-time interaction. The algorithms employed in this thesis included various filtering techniques, the discrete wavelet transform, and the fast Fourier transform, which cover the common operations implemented in most LTI systems in the spatial and frequency domains.
Considering the employed GPUs' hardware designs, especially the structure of the rendering pipelines, and the characteristics of the algorithms, the series of proposed GPGPU programming models has proven its feasibility, practicality, and robustness in real engineering applications. The developed GPGPU programming framework, as well as the programming models, are anticipated to be adaptable to future consumer-level computing devices and other computationally demanding applications. In addition, it is envisaged that the principles and methods devised in the framework design are likely to have significant benefits outside the sphere of surface metrology.
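A sketch of the kind of speed-up and data-accuracy benchmark the abstract describes, assuming a simple LTI moving-average filter as the workload; the `gpu_filter` function here is only a placeholder standing in for a real GPU kernel, not code from the thesis.

```python
# Benchmark skeleton: compare a CPU implementation against a (placeholder) GPU
# implementation of the same LTI filter, reporting speed-up factor and the
# maximum absolute deviation between the two result sets.
import time
import numpy as np

def cpu_filter(signal, kernel):
    return np.convolve(signal, kernel, mode="same")

def gpu_filter(signal, kernel):              # placeholder for a CUDA/GPU kernel
    return np.convolve(signal, kernel, mode="same")

signal = np.random.default_rng(1).standard_normal(2_000_000)
kernel = np.ones(31) / 31.0                  # simple moving-average (LTI) filter

t0 = time.perf_counter(); ref = cpu_filter(signal, kernel); t_cpu = time.perf_counter() - t0
t0 = time.perf_counter(); out = gpu_filter(signal, kernel); t_gpu = time.perf_counter() - t0

speedup = t_cpu / t_gpu
max_abs_error = float(np.max(np.abs(ref - out)))   # data-accuracy check
print(f"speed-up x{speedup:.1f}, max abs error {max_abs_error:.2e}")
```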
4

Oosthuizen, Daniel Rudolph. "Data modelling of industrial steel structures". Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53346.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: AP230 of STEP is an application protocol for structural steel-framed buildings. Product data relating to steel structures is represented in a model that captures analysis, design and manufacturing views. The information requirements described in AP230 were analysed with the purpose of identifying a subset of entities that are essential for the description of simple industrial steel frames, with the view to being able to describe the structural concept and to perform the structural analysis and design of such structures. Having identified the essential entities, a relational database model for these entities was developed. Planning, analysis and design applications will use the database to collaboratively exchange data relating to the structure. The comprehensiveness of the database model was investigated by mapping a simple industrial frame to the database model. Access to the database is provided by a set of classes called the database representative classes. The data representatives are instances that have the same selection identifiers and attributes as the corresponding information units in the database. The data representatives' primary tasks are to store themselves in the database and to retrieve their state from the database. A graphical user interface application, programmed in Java and used for the description of the structural concept, with the capacity to store the concept in the database and retrieve it again through the use of the database representative classes, was also created as part of this project.
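An illustrative sketch of a "database representative" object that stores itself in a relational database and restores its state from it, in the spirit of the classes described above; it uses Python and sqlite3 rather than the thesis's Java code, and the table and attribute names are assumptions.

```python
# A data representative knows its own identifier and can persist/retrieve its
# state, so planning, analysis and design tools can share the same record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beam (id TEXT PRIMARY KEY, section TEXT, length_m REAL)")

class BeamRepresentative:
    def __init__(self, beam_id, section=None, length_m=None):
        self.beam_id, self.section, self.length_m = beam_id, section, length_m

    def store(self):
        conn.execute("INSERT OR REPLACE INTO beam VALUES (?, ?, ?)",
                     (self.beam_id, self.section, self.length_m))
        conn.commit()

    def retrieve(self):
        row = conn.execute("SELECT section, length_m FROM beam WHERE id = ?",
                           (self.beam_id,)).fetchone()
        self.section, self.length_m = row
        return self

BeamRepresentative("B1", "IPE200", 6.0).store()
print(BeamRepresentative("B1").retrieve().__dict__)
```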
5

Lo, Kin-keung, and 羅建強. "An investigation of computer assisted testing for civil engineering students in a Hong Kong technical institute". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B38627000.

Full text
6

Gkoktsi, K. "Compressive techniques for sub-Nyquist data acquisition & processing in vibration-based structural health monitoring of engineering structures". Thesis, City, University of London, 2018. http://openaccess.city.ac.uk/19192/.

Full text
Abstract:
Vibration-based structural health monitoring (VSHM) is an automated method for assessing the integrity and performance of dynamically excited structures through the processing of structural vibration response signals acquired by arrays of sensors. From a technological viewpoint, wireless sensor networks (WSNs) offer less obtrusive, more economical, and more rapid VSHM deployments in civil structures compared to their tethered counterparts, especially in monitoring large-scale and geometrically complex structures. However, WSNs are constrained by certain practical issues related to local power supply at the sensors and restrictions on the amount of wirelessly transmitted data due to increased power consumption and bandwidth limitations in wireless communications. The primary objective of this thesis is to resolve the above issues by considering sub-Nyquist data acquisition and processing techniques that involve simultaneous signal acquisition and compression before transmission. This drastically reduces the sampling and transmission requirements, leading to power savings of up to 85-90% compared to conventional approaches at the Nyquist rate. Within this context, current state-of-the-art VSHM approaches exploit the theory of compressive sensing (CS) to acquire structural responses with non-uniform random sub-Nyquist sampling schemes. By exploiting the sparse structure of the analysed signals in a known vector basis (i.e., few non-zero signal coefficients), the original time-domain signals are reconstructed on the uniform Nyquist grid by solving an underdetermined optimisation problem subject to signal sparsity constraints. However, CS sparse recovery is a computationally intensive problem that strongly depends on, and is limited by, the sparsity attributes of the measured signals on a pre-defined expansion basis. This sparsity information, though, is unknown in real-time VSHM deployments, while it is adversely affected by the noisy environments encountered in practice. To address the above limitations of CS-based VSHM methods, this research study proposes three alternative approaches for energy-efficient VSHM using compressed structural response signals under ambient vibrations. The first approach aims to enhance the sparsity information of vibrating structural responses by considering their representation in the wavelet transform domain using various oscillatory functions with different frequency-domain attributes. In this respect, a novel data-driven damage detection algorithm is developed herein, which emerges as a fusion of the CS framework with the Relative Wavelet Entropy (RWE) damage index. By processing sparse signal coefficients on the harmonic wavelet transform for two comparative structural states (i.e., damaged versus healthy state), CS-based RWE damage indices are retrieved from a significantly reduced number of wavelet coefficients without reconstructing the structural responses in the time domain. The second approach involves a novel signal-agnostic sub-Nyquist spectral estimation method free from sparsity constraints, which is proposed herein as a viable alternative for power-efficient WSNs in VSHM applications. The developed method relies on Power Spectrum Blind Sampling (PSBS) techniques together with a deterministic multi-coset sampling pattern, capable of acquiring stationary structural responses at sub-Nyquist rates without imposing sparsity conditions.
Based on a network of wireless sensors operating with the same sampling pattern, auto/cross power spectral density estimates are computed directly from the compressed data by solving an overdetermined optimisation problem, thus bypassing the computationally intensive signal reconstruction operations in the time domain. This innovative approach can be fused with standard operational modal analysis algorithms to estimate the inherent resonant frequencies and modal deflected shapes of structures under low-amplitude ambient vibrations with minimum power, computational and memory requirements at the sensor, while outperforming pertinent CS-based approaches. Based on the extracted modal information, numerous data-driven damage detection strategies can be further employed to evaluate the condition of the monitored structures. The third approach of this thesis proposes a noise-immune damage detection method capable of capturing small shifts in structural natural frequencies before and after a seismic event of low intensity using compressed acceleration data contaminated with broadband noise. This novel approach relies on a recently established sub-Nyquist pseudo-spectral estimation method which combines the deterministic co-prime sub-Nyquist sampling technique with the multiple signal classification (MUSIC) pseudo-spectrum estimator. This is also a signal-agnostic and signal-reconstruction-free method that treats structural response signals as wide-sense stationary stochastic processes to retrieve, with very high resolution, auto-power spectral densities and structural natural frequency estimates directly from compressed data while filtering out additive broadband noise.
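A toy sketch of the deterministic multi-coset sampling step mentioned above: a Nyquist-rate record is split into blocks and only a fixed set of coset offsets is kept in each block. The signal, block length and coset pattern are invented for illustration and do not reproduce the thesis's PSBS estimator.

```python
# Deterministic multi-coset sampling on a Nyquist grid: keep only a few coset
# offsets per block of N samples; the kept samples are what a wireless sensor
# would transmit, giving the compression the abstract refers to.
import numpy as np

fs = 200.0                                    # Nyquist-rate sampling frequency [Hz]
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 3.2 * t) + 0.5 * np.sin(2 * np.pi * 7.9 * t)  # "ambient" response

N = 16                                        # block length (number of cosets)
kept_cosets = [0, 3, 7, 11]                   # deterministic pattern, 4 of 16 cosets
mask = np.zeros(len(x), dtype=bool)
for c in kept_cosets:
    mask[c::N] = True

compressed = x[mask]
ratio = 1 - compressed.size / x.size
print(f"samples kept: {compressed.size}/{x.size}  (compression {ratio:.0%})")
# 4 of 16 cosets -> 75% fewer samples; sparser patterns push this toward the
# 85-90% savings reported in the abstract.
```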
7

Bakhary, Norhisham. "Structural condition monitoring and damage identification with artificial neural network". University of Western Australia. School of Civil and Resource Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0102.

Full text
Abstract:
Many methods have been developed and studied to detect damage through the change of dynamic response of a structure. Due to their capability to recognize patterns and to correlate non-linear and non-unique problems, Artificial Neural Networks (ANN) have received increasing attention for use in detecting damage in structures based on vibration modal parameters. Most successful works reported in the application of ANN for damage detection are limited to numerical examples and small controlled experimental examples only. This is because of two main constraints on its practical application in detecting damage in real structures. They are: 1) the inevitable existence of uncertainties in vibration measurement data and in finite element modeling of the structure, which may lead to erroneous prediction of structural conditions; and 2) the enormous computational effort required to reliably train an ANN model when it involves structures with many degrees of freedom. Therefore, most applications of ANN in damage detection are limited to structural systems with a small number of degrees of freedom and quite significant damage levels. In this thesis, a probabilistic ANN model is proposed to take into consideration the uncertainties in the finite element model and the measured data. Rosenblueth's point estimate method is used to reduce the calculations in training and testing the probabilistic ANN model. The accuracy of the probabilistic model is verified by Monte Carlo simulations. Using the probabilistic ANN model, the statistics of the stiffness parameters can be predicted and are used to calculate the probability of damage existence (PDE) in each structural member. The reliability and efficiency of this method is demonstrated using both numerical and experimental examples. In addition, a parametric study is carried out to investigate the sensitivity of the proposed method to different damage levels and to different uncertainty levels. As an ANN model requires enormous computational effort in training when the number of degrees of freedom is relatively large, a substructuring approach employing multi-stage ANNs is proposed to tackle the problem. Through this method, a structure is divided into several substructures and each substructure is assessed separately with an independently trained ANN model for that substructure. Once the damaged substructures are identified, second-stage ANN models are trained for these substructures to identify the damage locations and severities of the structural elements in the substructures. Both numerical and experimental examples are used to demonstrate the probabilistic multi-stage ANN methods. It is found that this substructuring ANN approach greatly reduces the computational effort while increasing the damage detectability because a finer element mesh can be used. It is also found that the probabilistic model gives better damage identification than the deterministic approach. A sensitivity analysis is also conducted to investigate the effect of substructure size, support condition and different uncertainty levels on the damage detectability of the proposed method. The results demonstrate that the detectability level of the proposed method is independent of the structure type, but dependent on the boundary condition, substructure size and uncertainty level.
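A minimal sketch of Rosenblueth's two-point estimate method, the uncertainty-propagation device mentioned in the abstract: for K uncorrelated inputs, the model is evaluated at the 2^K combinations of mean ± standard deviation instead of thousands of Monte Carlo samples. The toy response function and parameter values are assumptions, not the thesis's ANN model.

```python
# Rosenblueth's point estimate method (2^K model evaluations) versus Monte Carlo
# for propagating input uncertainty through a simple response function.
from itertools import product
import numpy as np

def model(E, I):                    # toy response: mid-span stiffness of a 6 m beam
    return 48.0 * E * I / 6.0**3

means = np.array([30e9, 8e-5])      # E [Pa], I [m^4] -- illustrative values
stds = np.array([3e9, 1e-5])

# Evaluate at every (mean +/- std) combination; each point has weight 1/2^K.
points = [model(*(means + np.array(s) * stds)) for s in product((-1, 1), repeat=2)]
pe_mean = float(np.mean(points))
pe_std = float(np.sqrt(np.mean(np.square(points)) - pe_mean**2))

# Monte Carlo reference with many more model runs.
rng = np.random.default_rng(0)
samples = model(rng.normal(means[0], stds[0], 100_000),
                rng.normal(means[1], stds[1], 100_000))
print(f"point estimate: {pe_mean:.3e} +/- {pe_std:.3e}")
print(f"Monte Carlo   : {samples.mean():.3e} +/- {samples.std():.3e}")
```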
8

De, Kock Jacobus M. (Jacobus Michiel). "An overview of municipal information systems of Drakenstein municipality with reference to the Actionit open decision support framework". Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52684.

Full text
Abstract:
Thesis (MScEng)--University of Stellenbosch, 2002.
ENGLISH ABSTRACT: ActionIT is a project undertaken by a consortium consisting of the CSIR, Simeka Management Consulting, the University of Pretoria and the University of Stellenbosch for the Innovation Fund of the Department of Arts, Culture, Science and Technology in South Africa. Their objective is to create a basic specification for selected information exchange that is compatible with all levels of government. The comparison between existing information systems at municipal level and the ActionIT specifications will be investigated for the purpose of exposing shortcomings on both sides. Appropriate features of existing information systems will be identified for the purpose of enhancing the ActionIT specifications. The ActionIT project is presently in its user requirement and conceptual model definition phase, and this thesis aims to assist in providing information that may be helpful in future developments. The study undertaken in this thesis requires the application of analytical theory and a working knowledge of information systems and databases in order to: 1. Research existing information systems and relevant engineering data at local municipal authorities. Also important will be the gathering of information regarding systems currently in use, and the format in which information is stored and utilised at municipalities. 2. Do an adequate analysis of the contents of recorded information. This information will establish background knowledge on the operations of local authorities and a clearer understanding of information systems. 3. Evaluate to what degree existing information systems comply with the ActionIT specifications. This will be the main focus of this thesis. Thus the focus of this thesis is to record (provide an overview of) activities in a municipal environment and the interaction with the environment on information system level, where standards provided by ActionIT as an Open Decision Support Framework can be of value.
9

Camp, Nicholas Julian. "A model for the time dependent behaviour of rock joints". Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/21138.

Full text
Abstract:
This thesis is a theoretical investigation into the time-dependent behaviour of rock joints. Much of the research work that has been conducted to date in the area of finite element analysis has been involved with the development of special elements to deal with these discontinuities. A comprehensive literature survey is undertaken, highlighting some of the significant contributions to the modelling of joints. It is then shown how internal variables can be used to model discontinuities in the rock mass. A finite element formulation is described, resulting in a system of equations which can easily be adapted to cope with various constitutive behaviours on the discontinuities. In particular, a viscoplastic relationship which uses a homogeneous, hyperbolic yield function is adopted. The viscoplastic relationship can be used for both time-dependent (creep) and quasi-static (elasto-plastic) problems. Time-dependent behaviour requires a time integration scheme and therefore a generalised explicit/implicit scheme is chosen. The resulting numerical algorithms are all implemented in the finite element program NOSTRUM. Various examples are presented to illustrate certain features of both the formulation and the numerical algorithm. Jointed rock beams and a jointed infinite rock mass are modelled assuming plane strain conditions. Reasons are proposed to explain the predicted behaviour. The results of the analysis show that the internal variable formulation successfully models time-dependent joint movements in a continuous medium. The method gives good qualitative results which agree with observations in deep-level mines. It is recommended that quantitative mine observations be used to calibrate the model so that usable predictions of joint movement can be made. This would enable any new developments to be implemented in the model. Further work on implicit methods might allow greater modelling flexibility by reducing computer run times.
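A rough sketch of a generalised explicit/implicit (theta) time-integration step for a one-dimensional overstress-type viscoplastic slip law on a joint. The yield function, parameter values and fixed-point solution of the implicit part are illustrative assumptions, not the thesis's hyperbolic formulation or the NOSTRUM implementation.

```python
# Theta-scheme creep step for a joint: theta = 0 is explicit, theta = 1 implicit.
import numpy as np

k, c, phi = 5.0e3, 10.0, np.radians(30)       # shear stiffness, cohesion, friction angle
sigma_n, fluidity = 100.0, 1.0e-3             # normal stress [kPa], viscoplastic fluidity
theta, dt = 0.5, 0.01

def vp_rate(tau):
    """Viscoplastic slip rate from a simple Mohr-Coulomb-style overstress."""
    f = abs(tau) - (c + sigma_n * np.tan(phi))
    return fluidity * max(f, 0.0) * np.sign(tau)

tau, u_vp = 80.0, 0.0                          # current shear stress and plastic slip
u_total_rate = 0.05                            # imposed total slip rate
for step in range(200):
    rate_n = vp_rate(tau)
    tau_new = tau                              # fixed-point iterations for the implicit part
    for _ in range(20):
        rate_mix = (1 - theta) * rate_n + theta * vp_rate(tau_new)
        tau_new = tau + k * dt * (u_total_rate - rate_mix)
    u_vp += dt * ((1 - theta) * rate_n + theta * vp_rate(tau_new))
    tau = tau_new

print(f"shear stress after creep: {tau:.1f} kPa, accumulated plastic slip: {u_vp:.4f} m")
```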
10

Goy, Cristina. "Displacement Data Processing and FEM Model Calibration of a 3D-Printed Groin Vault Subjected to Shaking-Table Tests". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20061/.

Full text
Abstract:
The present thesis is part of the wider work required by the SEBESMOVA3D (SEeismic BEhavior of Scaled MOdels of groin VAults made by 3D printers) project, whose first motivation is the preservation of the cultural heritage in case of seismic events. Therefore, the main topic of the thesis is the analysis of the seismic response of scaled models of groin vaults, made of plastic 3D-printed bricks filled with mortar, and subjected to shaking-table tests performed at the EQUALS laboratory of the University of Bristol. The work has been developed along two parallel tracks: the processing of the displacement data acquired in situ and the calibration of a FEM model.
11

Glick, Travis Bradley. "Utilizing High-Resolution Archived Transit Data to Study Before-and-After Travel-Speed and Travel-Time Conditions". PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/4065.

Full text
Abstract:
Travel times, operating speeds, and service reliability influence costs and service attractiveness. This paper outlines an approach to quantify how these metrics change after a modification of roadway design or transit routes using archived transit data. The Tri-County Metropolitan Transportation District of Oregon (TriMet), Portland's public transportation provider, archives automatic vehicle location (AVL) data for all buses as part of their bus dispatch system (BDS). This research combines three types of AVL data (stop event, stop disturbance, and high-resolution) to create a detailed account of transit behavior; this probe data gives insights into the behavior of transit as well as general traffic. The methodology also includes an updated approach to confidence interval estimation that more accurately represents the range of speed and travel-time percentile estimates. This methodology is applied to three test cases using a month of AVL data collected before and after the implementation of each roadway change. The results of the test cases highlight the broad applicability of this approach to before-and-after studies.
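A small sketch of a before-and-after comparison of travel-time percentiles with bootstrap confidence intervals, in the spirit of the percentile-estimate intervals mentioned above; the travel-time samples are synthetic and the bootstrap approach is a generic stand-in, not the method developed in the thesis.

```python
# Compare travel-time percentiles for "before" and "after" periods and attach
# bootstrap confidence intervals to the "after" estimates.
import numpy as np

rng = np.random.default_rng(7)
before = rng.gamma(shape=9.0, scale=60.0, size=800)   # seconds, one month "before"
after = rng.gamma(shape=9.0, scale=54.0, size=800)    # one month "after"

def percentile_ci(x, q, n_boot=2000, alpha=0.05):
    """Bootstrap confidence interval for the q-th percentile of x."""
    stats = [np.percentile(rng.choice(x, size=x.size, replace=True), q)
             for _ in range(n_boot)]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

for q in (50, 85, 95):
    b, a = np.percentile(before, q), np.percentile(after, q)
    lo, hi = percentile_ci(after, q)
    print(f"P{q}: before {b:6.0f} s  after {a:6.0f} s  (95% CI after: {lo:.0f}-{hi:.0f} s)")
```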
12

Ragnucci, Beatrice. "Data analysis of collapse mechanisms of a 3D printed groin vault in shaking table testing". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22365/.

Full text
Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, which is built by the assembly of blocks made of a 3D-printed plastic skin filled with mortar. The choice of the groin vault is due to the large presence of this vulnerable roofing system in the historical heritage. Experimental tests on the shaking table are carried out to explore the vault response under two support boundary conditions, involving four lateral confinement modes. The processing of the marker displacement data has made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. There then follows a numerical evaluation, to provide the orders of magnitude of the displacements associated with the previous mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the last objective is the definition of a critical elongation between two diagonal bricks and consequently of a diagonal portion. This study aims to continue the previous work and to take another step forward in the research on ground-motion effects on masonry structures.
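A toy sketch of the kind of post-processing described above: the change in length of a diagonal segment between two tracked markers is computed from their coordinates at rest and at peak response, then compared with a critical elongation. All coordinates and the threshold are invented, not the experimental values.

```python
# Diagonal elongation between two markers, before/after a shaking run.
import numpy as np

# marker positions [m] at rest and at the frame of maximum response (synthetic)
p1_rest, p2_rest = np.array([0.00, 0.00, 1.20]), np.array([1.00, 1.00, 1.55])
p1_peak, p2_peak = np.array([0.01, -0.02, 1.19]), np.array([1.03, 1.04, 1.53])

length_rest = np.linalg.norm(p2_rest - p1_rest)
length_peak = np.linalg.norm(p2_peak - p1_peak)
elongation = length_peak - length_rest

critical_elongation = 0.02                     # assumed threshold [m]
status = "exceeds" if elongation > critical_elongation else "is below"
print(f"diagonal elongation: {elongation * 1000:.1f} mm ({status} the assumed limit)")
```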
13

Gichamo, Tseganeh Zekiewos. "Advancing Streamflow Forecasts Through the Application of a Physically Based Energy Balance Snowmelt Model With Data Assimilation and Cyberinfrastructure Resources". DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7463.

Full text
Abstract:
The Colorado Basin River Forecast Center (CBRFC) provides forecasts of streamflow for purposes such as flood warning and water supply. Much of the water in these basins comes from spring snowmelt, and the forecasters at CBRFC currently employ a suite of models that include a temperature-index snowmelt model. While the temperature-index snowmelt model works well for weather and land cover conditions that do not deviate from those historically observed, the changing climate and alterations in land use necessitate the use of models that do not depend on calibrations based on past data. This dissertation reports work done to overcome these limitations through using a snowmelt model based on physically invariant principles that depends less on calibration and can directly accommodate weather and land use changes. The first part of the work developed an ability to update the conditions represented in the model based on observations, a process referred to as data assimilation, and evaluated resulting improvements to the snowmelt driven streamflow forecasts. The second part of the research was the development of web services that enable automated and efficient access to and processing of input data to the hydrological models as well as parallel processing methods that speed up model executions. These tasks enable the more detailed models and data assimilation methods to be more efficiently used for streamflow forecasts.
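A highly simplified illustration of the data-assimilation idea mentioned above: a modelled snow water equivalent (SWE) state is corrected toward an observation using a scalar Kalman-style gain. This is a generic textbook update with invented numbers, not the assimilation scheme developed in the dissertation.

```python
# Scalar Kalman-style analysis step for snow water equivalent (SWE).
model_swe, model_var = 120.0, 25.0**2      # model estimate [mm] and its error variance
obs_swe, obs_var = 150.0, 10.0**2          # observed SWE [mm] and observation error variance

gain = model_var / (model_var + obs_var)   # Kalman gain for a scalar state
analysis_swe = model_swe + gain * (obs_swe - model_swe)
analysis_var = (1.0 - gain) * model_var

print(f"prior {model_swe:.0f} mm, obs {obs_swe:.0f} mm -> "
      f"analysis {analysis_swe:.1f} mm (std {analysis_var**0.5:.1f} mm)")
```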
14

Assefha, Sabina, and Matilda Sandell. "Evaluation of digital terrain models created in post processing software for UAS-data : Focused on point clouds created through block adjustment and dense image matching". Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-26976.

Full text
Abstract:
Lately, Unmanned Aerial Systems (UAS) have been used more frequently in surveying. With broader use come higher demands on the uncertainty of such measurements. The post-processing software is an important factor that affects the uncertainty of the finished product. It is therefore vital to evaluate how results differ between software packages and how parameters contribute. In UAS photogrammetry, images are acquired with an overlap, which makes it possible to generate point clouds in photogrammetric software. These point clouds are often used to create Digital Terrain Models (DTM). The purpose of this study is to evaluate how the level of uncertainty differs when processing the same UAS data through block adjustment and dense image matching in two different photogrammetric post-processing software packages. The software used are UAS Master and Pix4D. The objective is also to investigate how the level of extraction in UAS Master and the setting for image scale in Pix4D affect the results when generating point clouds. Three terrain models were created in each software package using the same set of data, changing only the extraction level and the image scale in UAS Master and Pix4D respectively. 26 control profiles were measured with network-RTK in the area of interest to calculate the root mean square (RMS) and mean deviation in order to verify and compare the uncertainty of the terrain models. The study shows that results vary when processing the same UAS data in different software. The study also shows that the extraction level in UAS Master and the image scale in Pix4D impact the results differently. In UAS Master the uncertainty decreases with higher extraction level when generating terrain models. A clear pattern regarding the image scale setting in Pix4D could not be determined. Both software packages were able to produce elevation models with an RMS value of around 0.03 m. The mean deviation in all models created in this study was below 0.02 m, which is the requirement for class 1 in the technical specification SIS-TS 21144:2016. However, the mean deviation for the ground type gravel in the terrain model created in UAS Master at a low extraction level exceeds the demands for class 1. This indicates that all but one of the created models fulfil the requirements for class 1, which is the class with the strictest requirements.
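A small sketch of the verification step described above: elevation deviations between a terrain model and control points are summarised as RMS and mean deviation and compared against the 0.02 m class 1 limit cited from SIS-TS 21144:2016. The deviations are synthetic, and the exact definition of "mean deviation" in the specification should be checked before reuse.

```python
# RMS and mean deviation of DTM elevations against network-RTK control points.
import numpy as np

rng = np.random.default_rng(3)
dtm_z = 100.0 + rng.normal(0.0, 0.02, size=26)   # model elevations at 26 control profiles
ctrl_z = np.full(26, 100.0)                      # control elevations (synthetic)

dz = dtm_z - ctrl_z
rms = np.sqrt(np.mean(dz**2))
mean_dev = np.mean(np.abs(dz))                   # taken here as mean absolute deviation

print(f"RMS = {rms:.3f} m, mean deviation = {mean_dev:.3f} m")
print("class 1 requirement (mean deviation < 0.02 m) met:", bool(mean_dev < 0.02))
```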
15

Thorell, Marcus, and Mattias Andersson. "Implementering av HAZUS-MH i Sverige : Möjligheter och hinder". Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72666.

Full text
Abstract:
When modeling risks for natural disasters, GIS is a fundamental tool. HAZUS-MH is a GIS-based risk analysis tool developed by the American agency FEMA. HAZUS-MH has a well-developed methodology for modeling natural disasters, which is something that is in demand at the European level within the flood directive framework. Hence, there is an interest in implementing HAZUS-MH for non-US conditions. The aim of the study is to deepen the knowledge needed for the implementation and use of HAZUS-MH in Sweden. To enable implementation, Swedish data must be processed to match the data structure of HAZUS-MH. The methods of this study are a literature review of previous studies and manuals, and data processing. Experiences from the data processing were collected to build a manual for data processing and to evaluate opportunities and obstacles for implementation in Sweden. The result shows what the system requirements and other settings for using HAZUS-MH look like. The other settings include, among other things, the connection to the HAZUS-MH database. For the adaptation of Swedish data, the data requirements (administrative division, inventory data and hydrological data), the data processing (recommended workflow to fill shape-files and attribute tables with information) and the data import are described. The result also describes the application of HAZUS-MH with Swedish data. This study identifies several possibilities of HAZUS-MH; the opportunities for creating risk and vulnerability maps and for data import are the largest. The time required to perform the adaptation of Swedish data was approximately 15 working days. This study estimates that, with the help of manuals for the adaptation, this time could be shortened to approximately 3 working days. If the process of adapting data is automated, this time could be shortened further. The largest obstacle identified in this study is the data collection process; to use the full potential of HAZUS-MH, extensive data collection is needed. Another obstacle is the limitation of hydrological data; external hydrological data is necessary to obtain as accurate an analysis as possible. Further research in the field should, according to this study, focus on methods of collecting data and on the development of an automatic process for managing data.
16

Fernandez, Noemi. "Statistical information processing for data classification". FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.

Full text
Abstract:
This thesis introduces new algorithms for analysis and classification of multivariate data. Statistical approaches are devised for the objectives of data clustering, data classification and object recognition. An initial investigation begins with the application of fundamental pattern recognition principles. Where such fundamental principles meet their limitations, statistical and neural algorithms are integrated to augment the overall approach for an enhanced solution. This thesis provides a new dimension to the problem of classification of data as a result of the following developments: (1) application of algorithms for object classification and recognition; (2) integration of a neural network algorithm which determines the decision functions associated with the task of classification; (3) determination and use of the eigensystem using newly developed methods with the objectives of achieving optimized data clustering and data classification, and dynamic monitoring of time-varying data; and (4) use of the principal component transform to exploit the eigensystem in order to perform the important tasks of orientation-independent object recognition, and dimensionality reduction of the data such as to optimize the processing time without compromising accuracy in the analysis of this data.
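A minimal sketch of the eigensystem / principal component transform step referred to above: the covariance eigenvectors of a multivariate data set are used to project the data onto a lower-dimensional space before classification. The two-class synthetic data and the trivial decision rule are assumptions for illustration only.

```python
# PCA via the covariance eigensystem, used for dimensionality reduction
# ahead of clustering/classification.
import numpy as np

rng = np.random.default_rng(2)
class_a = rng.normal([0, 0, 0, 0], 0.5, size=(100, 4))
class_b = rng.normal([3, 3, 0, 0], 0.5, size=(100, 4))
X = np.vstack([class_a, class_b])

Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]                  # sort eigenpairs by variance
components = eigvecs[:, order[:2]]                 # keep two principal components

scores = Xc @ components                           # dimensionality-reduced data
labels = (scores[:, 0] > 0).astype(int)            # trivial 1-D decision rule
print("explained variance:", np.round(eigvals[order][:2] / eigvals.sum(), 2))
print("class split along first component:", np.bincount(labels))
```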
17

Chiu, Cheng-Jung. "Data processing in nanoscale profilometry". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36677.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995.
Includes bibliographical references (p. 176-177).
New developments on the nanoscale are taking place rapidly in many fields. Instrumentation used to measure and understand the geometry and properties of small-scale structures is therefore essential. One of the most promising devices to take measurement science into the nanoscale is the scanning probe microscope. A prototype of a nanoscale profilometer based on the scanning probe microscope has been built in the Laboratory for Manufacturing and Productivity at MIT. A sample is placed on a precision flip stage and different sides of the sample are scanned under the SPM to acquire their separate surface topographies. To reconstruct the original three-dimensional profile, many techniques such as digital filtering, edge identification, and image matching are investigated and implemented in the computer programs that post-process the data, with particular emphasis placed on the nanoscale application. The important programming issues are addressed, too. Finally, this system's error sources are discussed and analyzed.
by Cheng-Jung Chiu.
M.S.
18

Derksen, Timothy J. (Timothy John). "Processing of outliers and missing data in multivariate manufacturing data". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38800.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (leaf 64).
by Timothy J. Derksen.
M.Eng.
19

Nyström, Simon, and Joakim Lönnegren. "Processing data sources with big data frameworks". Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188204.

Full text
Abstract:
Big data is a concept that is expanding rapidly. As more and more data is generated and garnered, there is an increasing need for efficient solutions that can be used to process all this data in attempts to gain value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of relatively small files. More specifically, the purpose is to test two frameworks that can be used for processing big data. The frameworks that are tested against each other are Apache NiFi and Apache Storm. A method is devised in order to, firstly, construct a data flow and, secondly, construct a method for testing the performance and scalability of the frameworks running this data flow. The results reveal that Apache Storm is faster than Apache NiFi at the sort of task that was tested. As the number of nodes included in the tests went up, the performance did not always do the same. This indicates that adding more nodes to a big data processing pipeline does not always result in a better-performing setup and that, sometimes, other measures must be taken to improve performance.
20

Zhao, Wenguang S. M. Massachusetts Institute of Technology. "Modeling of ultrasonic processing". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33738.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2005.
Includes bibliographical references (leaves 53-55).
This paper presents a finite element analysis (FEA) of ultrasonic processing of an aerospace-grade carbon-epoxy composite laminate. An ultrasonic (approximately 30 kHz) loading horn is applied to a small region at the laminate surface, which produces a spatially nonuniform strain energy field within the material. A fraction of this strain energy is dissipated during each ultrasonic loading cycle depending on the temperature-dependent viscoelastic response of the material. This dissipation produces a rapid heating, yielding temperature increases of over 100 °C in approximately 1 s and permitting the laminate to be consolidated prior to full curing in an autoclave or other equipment. The spatially nonuniform, nonlinear, and coupled nature of this process, along with the large number of experimental parameters, makes trial-and-error analysis of the process intractable, and the FEA approach is valuable in process development and optimization.
by Wenguang Zhao.
S.M.
Gli stili APA, Harvard, Vancouver, ISO e altri
21

徐順通 e Sung-thong Andrew Chee. "Computerisation in Hong Kong professional engineering firms". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31263124.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
22

Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Vann, A. M. "Intelligent monitoring of civil engineering systems". Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238845.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Grinman, Alex J. "Natural language processing on encrypted patient data". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113438.

Testo completo
Abstract (sommario):
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
While many industries can benefit from machine learning techniques for data analysis, they often have neither the technical expertise nor the computational power to do so. Therefore, many organizations would benefit from outsourcing their data analysis. Yet, stringent data privacy policies prevent outsourcing sensitive data and may stop the delegation of data analysis in its tracks. In this thesis, we put forth a two-party system where one party capable of powerful computation can run certain machine learning algorithms from the natural language processing domain on the second party's data, where the first party is limited to learning only specific functions of the second party's data and nothing else. Our system provides simple cryptographic schemes for locating keywords, matching approximate regular expressions, and computing frequency analysis on encrypted data. We present a full implementation of this system in the form of an extensible software library and a command line interface. Finally, we discuss a medical case study where we used our system to run a suite of unmodified machine learning algorithms on encrypted free-text patient notes.
by Alex J. Grinman.
M. Eng.
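One simple way to support keyword location over data the server cannot read is to index deterministic keyword tokens rather than plaintext. The toy sketch below uses HMAC tokens; it is only an illustration of the keyword-matching idea (and leaks repeated-keyword patterns), not the cryptographic constructions developed in the thesis, and the key and example note are hypothetical.

```python
# Toy keyword-token index: the client stores HMAC(key, word) tokens instead of
# plaintext, and later searches by recomputing the token for a query word.
# This is a leaky illustration only, not the thesis's scheme.
import hmac
import hashlib

KEY = b"client-secret-key"          # hypothetical client-held key

def token(word: str) -> str:
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()

def index_note(note: str) -> set[str]:
    """What the client uploads: a set of keyword tokens, no plaintext."""
    return {token(w) for w in note.split()}

server_index = index_note("patient reports chest pain and shortness of breath")
print(token("pain") in server_index)      # True
print(token("fever") in server_index)     # False
```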
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Westlund, Kenneth P. (Kenneth Peter). "Recording and processing data from transient events". Thesis, Massachusetts Institute of Technology, 1988. https://hdl.handle.net/1721.1/129961.

Testo completo
Abstract (sommario):
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Includes bibliographical references.
by Kenneth P. Westlund Jr.
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Setiowijoso, Liono. "Data Allocation for Distributed Programs". PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5102.

Testo completo
Abstract (sommario):
This thesis shows that both data and code must be efficiently distributed to achieve good performance in a distributed system. Most previous research has either tried to distribute code structures to improve parallelism or to distribute data to reduce communication costs. Code distribution (exploiting functional parallelism) is an effort to distribute or duplicate function code to optimize parallel performance. Data distribution, on the other hand, tries to place data structures as close as possible to the function code that uses them, so that communication cost can be reduced. In particular, dataflow researchers have primarily focused on code partitioning and assignment. We have adapted existing data allocation algorithms for use with an existing dataflow-based system, ParPlum. ParPlum allows the execution of dataflow graphs on networks of workstations. To evaluate the impact of data allocation, we extended ParPlum to handle data structures more effectively. We then implemented tools to extract from dataflow graphs the information relevant to the mapping algorithms and fed this information to our version of a data distribution algorithm. To see the relation between code and data parallelism we added optimizations for the distribution of the loop function components and the data structure access components. All of this is done automatically, without programmer or user involvement. We ran a number of experiments using matrix multiplication as our workload, with different numbers of processors and different existing partitioning and allocation algorithms. Our results show that automatic data distribution greatly improves the performance of distributed dataflow applications. For example, with 15 x 15 matrices, applying data distribution speeds up execution by about 80% on 7 machines. Using data distribution together with our code optimizations on 7 machines speeds up execution over the base case by 800%. Our work shows that it is possible to make efficient use of distributed networks with compiler support and that both code mapping and data mapping must be considered to achieve optimal performance.
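The benefit of placing data near the code that consumes it can be illustrated with a row-block partitioning of the matrix-multiplication workload used in the experiments. The sketch below uses Python's multiprocessing and illustrates the general idea of data distribution only; it is not ParPlum or the thesis's allocation algorithms, and the matrix sizes and worker count are arbitrary.

```python
# Row-block partitioned matrix multiplication: each worker receives only the
# block of A it needs plus B, illustrating data distribution for the workload
# used in the experiments. Not the ParPlum/dataflow implementation.
import numpy as np
from multiprocessing import Pool

def multiply_block(args):
    a_block, b = args
    return a_block @ b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(15, 15))
    B = rng.normal(size=(15, 15))
    n_workers = 3
    blocks = np.array_split(A, n_workers, axis=0)       # partition A by rows
    with Pool(n_workers) as pool:
        parts = pool.map(multiply_block, [(blk, B) for blk in blocks])
    C = np.vstack(parts)
    print(np.allclose(C, A @ B))                        # True
```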
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Vemulapalli, Eswar Venkat Ram Prasad 1976. "Architecture for data exchange among partially consistent data models". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84814.

Testo completo
Abstract (sommario):
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002.
Includes bibliographical references (leaves 75-76).
by Eswar Venkat Ram Prasad Vemulapalli.
S.M.
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Jakovljevic, Sasa. "Data collecting and processing for substation integration enhancement". Texas A&M University, 2003. http://hdl.handle.net/1969/93.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Aygar, Alper. "Doppler Radar Data Processing And Classification". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.

Testo completo
Abstract (sommario):
In this thesis, improving the performance of automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. Target types are car, truck, bus, tank, helicopter, moving man, and running man. The input to this thesis is the output of real Doppler radar signals that were normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctoral thesis by Erdogan (2002). TRP vectors are Doppler radar target signals normalized and homogenized with respect to target speed, target aspect angle, and target range. Some target classes have repetitions in time in their TRPs, and the use of these repetitions to improve target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization), and ICA (Independent Component Analysis) are implemented and applied to the normalized Doppler radar signals for efficient feature extraction and dimension reduction. These techniques transform the input vectors, the normalized Doppler radar signals, to another space. The effects of these feature extraction algorithms, and of the use of the repetitions in Doppler radar target signals, on Doppler radar target classification performance are studied.
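The classification chain described here (feature extraction followed by KNN or SVM) maps directly onto standard library calls. A minimal sketch follows, with randomly generated placeholders standing in for the TRP vectors and class labels (they are not the thesis data), and PCA used as one example of the listed feature extraction methods.

```python
# Minimal sketch: PCA feature reduction followed by KNN and SVM classification.
# X and y are hypothetical stand-ins for the TRP vectors and target-class labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 128))          # placeholder radar feature vectors
y = rng.integers(0, 7, size=700)         # 7 classes: car, truck, bus, tank, ...

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(PCA(n_components=20), clf)   # dimension reduction + classifier
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```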
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Lu, Feng. "Big data scalability for high throughput processing and analysis of vehicle engineering data". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207084.

Testo completo
Abstract (sommario):
"Sympathy for Data" is a platform that is utilized for Big Data automation analytics. It is based on visual interface and workflow configurations. The main purpose of the platform is to reuse parts of code for structured analysis of vehicle engineering data. However, there are some performance issues on a single machine for processing a large amount of data in Sympathy for Data. There are also disk and CPU IO intensive issues when the data is oversized and the platform need fits comfortably in memory. In addition, for data over the TB or PB level, the Sympathy for data needs separate functionality for efficient processing simultaneously and scalable for distributed computation functionality. This paper focuses on exploring the possibilities and limitations in using the Sympathy for Data platform in various data analytic scenarios within the Volvo Cars vision and strategy. This project re-writes the CDE workflow for over 300 nodes into pure Python script code and make it executable on the Apache Spark and Dask infrastructure. We explore and compare both distributed computing frameworks implemented on Amazon Web Service EC2 used for 4 machine with a 4x type for distributed cluster measurement. However, the benchmark results show that Spark is superior to Dask from performance perspective. Apache Spark and Dask will combine with Sympathy for Data products for a Big Data processing engine to optimize the system disk and CPU IO utilization. There are several challenges when using Spark and Dask to analyze large-scale scientific data on systems. For instance, parallel file systems are shared among all computing machines, in contrast to shared-nothing architectures. Moreover, accessing data stored in commonly used scientific data formats, such as HDF5 is not tentatively supported in Spark. This report presents research carried out on the next generation of Big Data platforms in the automotive industry called "Sympathy for Data". The research questions focusing on improving the I/O performance and scalable distributed function to promote Big Data analytics. During this project, we used the Dask.Array parallelism features for interpretation the data sources as a raster shows in table format, and Apache Spark used as data processing engine for parallelism to load data sources to memory for improving the big data computation capacity. The experiments chapter will demonstrate 640GB of engineering data benchmark for single node and distributed computation mode to evaluate the Sympathy for Data Disk CPU and memory metrics. Finally, the outcome of this project improved the six times performance of the original Sympathy for data by developing a middleware SparkImporter. It is used in Sympathy for Data for distributed computation and connected to the Apache Spark for data processing through the maximum utilization of the system resources. This improves its throughput, scalability, and performance. It also increases the capacity of the Sympathy for data to process Big Data and avoids big data cluster infrastructures.
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Smith, Alexander D. "Computerized modeling of geotechnical stratigraphic data". Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14360.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil Engineering, 1989.
Archives copy bound in 1 v.; Barker copy in 2 v.
Includes bibliographical references (leaves 246-251).
by Alexander Donnan Smith.
Ph.D.
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Chen, Jiawen (Jiawen Kevin). "Efficient data structures for piecewise-smooth video processing". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves. We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
by Jiawen Chen.
Ph.D.
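The bilateral grid idea (splat image values into a coarse 3D grid indexed by position and intensity, blur the grid, then slice it back) lends itself to a compact NumPy illustration. The sketch below is a simplified grayscale re-implementation of the published idea under assumed parameters; it is not the thesis's GPU code.

```python
# Simplified grayscale bilateral-grid filter (splat / blur / slice).
# Assumes img is a 2-D float image in [0, 1]; parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=16, sigma_r=0.1):
    h, w = img.shape
    gh = int(np.ceil(h / sigma_s)) + 1
    gw = int(np.ceil(w / sigma_s)) + 1
    gr = int(np.ceil(1.0 / sigma_r)) + 1
    data = np.zeros((gh, gw, gr))     # accumulated intensity
    weight = np.zeros((gh, gw, gr))   # homogeneous weight channel

    ys, xs = np.mgrid[0:h, 0:w]
    gy = np.round(ys / sigma_s).astype(int)
    gx = np.round(xs / sigma_s).astype(int)
    gz = np.round(img / sigma_r).astype(int)
    np.add.at(data, (gy, gx, gz), img)    # splat values into the grid
    np.add.at(weight, (gy, gx, gz), 1.0)

    data = gaussian_filter(data, sigma=1.0)      # blurring the grid does the filtering
    weight = gaussian_filter(weight, sigma=1.0)

    coords = np.stack([ys / sigma_s, xs / sigma_s, img / sigma_r])
    num = map_coordinates(data, coords, order=1)     # trilinear slicing
    den = map_coordinates(weight, coords, order=1)
    return num / np.maximum(den, 1e-8)
```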
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Jakubiuk, Wiktor. "High performance data processing pipeline for connectome segmentation". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106122.

Testo completo
Abstract (sommario):
Thesis: M. Eng. in Computer Science and Engineering, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February 2016.
"December 2015." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-88).
By investigating neural connections, neuroscientists try to understand the brain and reconstruct its connectome. Automated connectome reconstruction from high-resolution electron microscopy is a challenging problem, as all neurons and synapses in a volume have to be detected. A cubic millimetre of high-resolution brain tissue occupies roughly a petabyte of space, which state-of-the-art pipelines are unable to process to date. A high-performance, fully automated image processing pipeline is proposed. Using a combination of image processing and machine learning algorithms (convolutional neural networks and random forests), the pipeline constructs a 3-dimensional connectome from 2-dimensional cross-sections of a mammal's brain. The proposed system achieves a low error rate (comparable with the state of the art) and is capable of processing volumes hundreds of gigabytes in size. The main contributions of this thesis are multiple algorithmic techniques for 2-dimensional pixel classification with varying accuracy and speed trade-offs, as well as a fast object segmentation algorithm. The majority of the system is parallelized for multi-core machines and, with minor additional modification, is expected to work in a distributed setting.
by Wiktor Jakubiuk.
M. Eng. in Computer Science and Engineering
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Nguyen, Qui T. "Robust data partitioning for ad-hoc query processing". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106004.

Testo completo
Abstract (sommario):
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-62).
Data partitioning can significantly improve query performance in distributed database systems. Most proposed data partitioning techniques choose the partitioning based on a particular expected query workload or use a simple upfront scheme, such as uniform range partitioning or hash partitioning on a key. However, these techniques do not adequately address the case where the query workload is ad-hoc and unpredictable, as in many analytic applications. The HYPER-PARTITIONING system aims to fill that gap, by using a novel space-partitioning tree on the space of possible attribute values to define partitions incorporating all attributes of a dataset. The system creates a robust upfront partitioning tree, designed to benefit all possible queries, and then adapts it over time in response to the actual workload. This thesis evaluates the robustness of the upfront hyper-partitioning algorithm, describes the implementation of the overall HYPER-PARTITIONING system, and shows how hyper-partitioning improves the performance of both selection and join queries.
by Qui T. Nguyen.
M. Eng.
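The idea of a space-partitioning tree over all attributes can be illustrated with a small kd-tree-style partitioner that cycles through attributes and splits at the median. This sketch only illustrates the general idea of upfront multi-attribute partitioning; it is not the HYPER-PARTITIONING algorithm or its workload-adaptation step, and the table contents are synthetic.

```python
# Minimal kd-tree-style partitioner over all attributes of a dataset.
# Illustration only; not the thesis's HYPER-PARTITIONING algorithm.
import numpy as np

def build_partitions(rows, max_rows=4, depth=0):
    """Recursively split `rows` (2-D array) on attribute depth % n_attrs at its median."""
    if len(rows) <= max_rows:
        return [rows]                       # leaf: one partition
    attr = depth % rows.shape[1]
    median = np.median(rows[:, attr])
    left = rows[rows[:, attr] <= median]
    right = rows[rows[:, attr] > median]
    if len(left) == 0 or len(right) == 0:   # degenerate split, stop here
        return [rows]
    return (build_partitions(left, max_rows, depth + 1)
            + build_partitions(right, max_rows, depth + 1))

table = np.random.default_rng(1).normal(size=(32, 3))   # 32 rows, 3 attributes
parts = build_partitions(table)
print(len(parts), "partitions:", [len(p) for p in parts])
```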
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Bao, Shunxing. "Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing". Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.

Testo completo
Abstract (sommario):

Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., network file system file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. Results for laboratory-based approaches reveal that performance is impeded by standard network switches, since typical processing can saturate network bandwidth during transfer from storage to processing nodes even for moderate-sized studies. The grid, on the other hand, may be costly to use due to the dedicated resources required to execute the tasks and its lack of elasticity. With the increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.

Despite this promise, our studies have revealed that existing big data frameworks exhibit performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Due to the sequential nature of executing the analysis stages with traditional software technologies and platforms, any errors in the pipeline are only detected at the later stages, even though the sources of error predominantly lie in the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework, Hadoop & HBase for Medical Image Processing (HadoopBase-MIP), which develops a range of performance optimization algorithms and models a number of system behaviors for data storage, data access, and data processing. We also describe how to build prototypes for empirical verification of system behavior. Furthermore, we report a discovery made during the development of HadoopBase-MIP: a new type of contrast for deep brain structure enhancement in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercial big data / high-performance computing cluster with a cheap, scalable, and geographically distributed file system.
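The hierarchical organization of imaging data mentioned above (project, subject, session, scan) is typically encoded in the HBase row key so that related records stay contiguous and can be retrieved with a prefix scan. A minimal sketch using the happybase client is below; the host, table, and key components are hypothetical, and this is not HadoopBase-MIP's own storage layer.

```python
# Hierarchical row-key sketch for imaging data in HBase via the happybase client.
# Host, table, and key components are hypothetical; not the HadoopBase-MIP code.
import happybase

connection = happybase.Connection("hbase-thrift-host")   # assumes a Thrift gateway
table = connection.table("imaging")

# Row key encodes project / subject / session / scan, keeping a study contiguous.
row_key = b"proj01|subj0042|sess02|scan07"
table.put(row_key, {b"img:slice_0001": b"<binary slice bytes>"})

# A prefix scan retrieves everything for one subject without touching other data.
for key, data in table.scan(row_prefix=b"proj01|subj0042|"):
    print(key, list(data))
```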

Gli stili APA, Harvard, Vancouver, ISO e altri
36

Mohd, Yunus Mohd Zulkifli. "Geospatial data management throughout large area civil engineering projects". Thesis, University of Newcastle Upon Tyne, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360241.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Hatchell, Brian. "Data base design for integrated computer-aided engineering". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16744.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Waite, Martin. "Data structures for the reconstruction of engineering drawings". Thesis, Nottingham Trent University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328794.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Einstein, Noah. "SmartHub: Manual Wheelchair Data Extraction and Processing Device". The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555352793977171.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
40

Guttman, Michael. "Sampled-data IIR filtering via time-mode signal processing". Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86770.

Testo completo
Abstract (sommario):
In this work, the design of sampled-data infinite impulse response filters based on time-mode signal processing circuits is presented. Time-mode signal processing (TMSP), defined as the processing of sampled analog information using time-difference variables, has become one of the more popular emerging technologies in circuit design. As TMSP is still relatively new, much development is needed to extend the technology into a general signal-processing tool. In this work, a set of general building blocks is introduced that perform the most basic mathematical operations in the time mode. By arranging these basic structures, higher-order time-mode systems, specifically time-mode filters, are realized. Three second-order time-mode filters (low-pass, band-reject, high-pass) are modeled using MATLAB and simulated in Spectre to verify the design methodology. Finally, a damped integrator and a second-order low-pass time-mode IIR filter are both implemented using discrete components.
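For reference, the discrete-time behaviour that such a second-order low-pass IIR filter targets can be expressed with a standard digital biquad. A sketch using SciPy follows; the sampling rate, cutoff, and test signal are illustrative assumptions, and the thesis's circuits realize equivalent behaviour with time-difference variables rather than sampled voltages.

```python
# Discrete-time reference for a second-order low-pass IIR filter.
# Parameters are illustrative; the thesis implements this behaviour in time mode.
import numpy as np
from scipy.signal import butter, lfilter

fs = 48_000.0                     # sampling rate (illustrative)
fc = 2_000.0                      # cutoff frequency (illustrative)
b, a = butter(N=2, Wn=fc / (fs / 2), btype="low")   # second-order Butterworth

t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 10_000 * t)
y = lfilter(b, a, x)   # y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
print(np.round(b, 4), np.round(a, 4))
```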
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Roome, Stephen John. "The industrial application of digital signal processing". Thesis, City University London, 1989. http://openaccess.city.ac.uk/7405/.

Testo completo
Abstract (sommario):
This thesis describes an investigation into the application of digital signal processing techniques to the solution of industrial signal processing problems. The investigation took the form of three case studies chosen to illustrate the variety of possible applications. The first was the computer simulation of a digital microwave communications link which utilised narrowband FM modulation and partial response techniques. In order to ensure that the behaviour of the simulation reliably matched that of the modelled system, it was found necessary to have a sound theoretical background, an implementation using good software engineering methodology, and methodical testing and validation. The second case study was a comprehensive investigation of adaptive noise cancelling systems, concentrating on issues important for practical implementation of the technique: stability and convergence of the adaptation algorithm, misadjustment noise, and effects due to realizability constraints. It was found that theoretical predictions of the system's behaviour were in good agreement with the results of computer simulation, except for the level of output misadjustment noise. In order to make the mathematics of the LMS algorithm tractable, it was assumed that the input data formed a series of uncorrelated vectors. It was found that this assumption is only appropriate for the prediction of misadjustment noise when the reference input is uncorrelated. The final case study concerned the automatic detection and assessment of pressing faults on gramophone records for quality assurance purposes. A pattern recognition technique for identifying the signals due to gramophone record defects and a numerical method for assessing the perceived severity of the defects were developed empirically. Prototype equipment was designed, built and tested in extended field trials. The equipment was shown to be superior to previous equipment developed using analogue signal processing techniques. These case studies demonstrate that digital signal processing is a powerful and widely applicable technique for the solution of industrial signal processing problems. Solutions may be theoretical or obtained by experiment or simulation. The strengths and weaknesses of each approach are illustrated.
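The adaptive noise cancelling arrangement in the second case study follows the classic LMS structure: a primary input containing signal plus noise, a reference input correlated with the noise, and a weight vector adapted to minimize the output power. A minimal NumPy sketch of that structure is below, with illustrative parameters and a synthetic noise path; it is not the thesis's simulation code.

```python
# Classic LMS adaptive noise canceller: primary = signal + filtered noise,
# reference = raw noise; the adaptive filter learns the noise path so that
# the cancellation error approximates the clean signal. Parameters illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 5000, 16, 0.01
signal = np.sin(2 * np.pi * 0.01 * np.arange(n))
noise = rng.normal(size=n)
noise_path = np.array([0.6, -0.3, 0.1])                 # unknown path to primary sensor
primary = signal + np.convolve(noise, noise_path, mode="full")[:n]

w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise[k - taps:k][::-1]        # most recent reference samples
    y = w @ x                          # estimate of the noise in the primary input
    e = primary[k] - y                 # cancellation error ~ recovered signal
    w += 2 * mu * e * x                # LMS weight update
    out[k] = e

print("residual error power:", np.mean((out[taps:] - signal[taps:]) ** 2))
```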
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Breest, Martin, Paul Bouché, Martin Grund, Sören Haubrock, Stefan Hüttenrauch, Uwe Kylau, Anna Ploskonos, Tobias Queck e Torben Schreiter. "Fundamentals of Service-Oriented Engineering". Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2009/3380/.

Testo completo
Abstract (sommario):
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background, together with an overview of service-oriented architectures, is needed and is given in this paper.
Gli stili APA, Harvard, Vancouver, ISO e altri
43

Faber, Marc. "On-Board Data Processing and Filtering". International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596433.

Testo completo
Abstract (sommario):
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
One of the requirements resulting from mounting pressure on flight test schedules is the reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has ramifications such as the demand for recording and processing not just raw measurement data but also data converted to engineering units in real time, as well as for an optimized use of the bandwidth available for telemetry downlink, and ultimately for shortening the duration of procedures intended to disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to successfully address these needs consists in implementing more CPU intelligence and processing power directly on the on-board flight test equipment. This provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to easier-to-handle engineering units. This leads to faster extraction and analysis of the actual data contents of the on-board signals and busses. Another central goal is the efficient use of the available telemetry bandwidth. Real-time data reduction via intelligent filtering is one approach to achieve this challenging objective. The data filtering process should be performed simultaneously on an all-data-capture recording, and the user should be able to easily select the interesting data without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should provide a seamless and transparent data transmission, making quick data analysis viable. On-board data processing and filtering has the potential to become the main future path for handling the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Morikawa, Takayuki. "Incorporating stated preference data in travel demand analysis". Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14326.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Lin, Keng-Fan. "Hybrid Analysis for Synthetic Aperture Radar Data Stack". Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10267516.

Testo completo
Abstract (sommario):

Demand for Earth observation has risen in the past few decades. As technology has advanced, remote sensing techniques have become more and more essential in various applications, such as landslide recognition, land use monitoring, and ecological observation. Among the existing techniques, synthetic aperture radar (SAR) has the advantage of making day-and-night acquisitions in any weather conditions. These characteristics make SAR suitable for delivering reliable measurements over cloudy areas and for performing measurements without any external energy source. However, SAR images suffer from lower spatial and spectral resolution compared to optical ones. The coherent nature of radar signals also results in speckle, which makes the acquired images noisy.

To overcome the aforementioned issues, one can consider analyzing a long series of SAR-based observations over the same area. In that sense, spatial correlations of the image pixels can be studied based on similarity of temporal statistics. Adaptive image processing can thus be created. In the past, such an adaptive procedure was only applied for slow movement detection using SAR interferometry (InSAR). For the first time, we propose a full framework that allows processing the SAR images in an adaptive manner without losing the original resolution. This framework, namely Hybrid Analysis for Synthetic Aperture Radar (HASAR), exploits information in single-polarized/multi-temporal data stacks and focuses on two applications: change detection and image classification. Three techniques are developed in this study. First, we propose a new hypothesis testing procedure to identify pixels behaving similarly over time. Compared with conventional methods, the proposed test provides similarity measurements regardless of temporal variabilities and outliers. Its effectiveness paves the way for the following two techniques. Second, we develop an automatic change detection approach which utilizes spatiotemporal observations obtained by the first technique to locate abrupt changes in the imaged areas. Compared with existing methods, this approach does not require parameter-tuning procedures, giving a fully unsupervised solution for multi-temporal change analysis. Last, we deliver an efficient solution for classifying single-polarized datasets. A first-level classifier is implemented to analyze the spatiotemporal observations previously mentioned. Different from any other existing methods, the proposed method does not need polarimetric information for solving the multi-class problem. Its effectiveness greatly improves the added value of the single-polarization datasets.

Various experiments have been made to test the effectiveness of each proposed technique. First, the results from TerraSAR-X datasets in Los Angeles and Hong Kong show that the proposed testing procedure delivers effective extraction of statistically homogeneous pixels. Next, the changes detected from ERS-02 datasets in Taiwan match the ground truth well. Compared with conventional pairwise change analysis, the proposed multi-temporal change analysis provides many more observations that can be used for change analysis, and the detected changes can be better located through the temporal statistics. Only historically significant changes are considered as changes, which greatly reduces the false alarm rate. Finally, the results of multi-class classification from TanDEM-X and COSMO-SkyMed datasets in Los Angeles and Chicago, respectively, reveal high classification accuracies without complicated training procedures, providing an entirely new solution for classifying single-polarized datasets. By collectively utilizing different attributes (amplitude/coherence), dimensionalities (space/time), and processing approaches (pixel-based/object-based), the HASAR system augments the information content of the SAR data stacks. It therefore shows high potential for continuous Earth monitoring.
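The first step, identifying pixels whose temporal statistics are similar, can be illustrated with an off-the-shelf two-sample test over pixel time series. In the sketch below, the Kolmogorov–Smirnov test is a stand-in for the thesis's own hypothesis test, and the amplitude stack is synthetic rather than real SAR data.

```python
# Grouping "statistically homogeneous" pixels by comparing amplitude time
# series with a two-sample KS test. The KS test is a stand-in for the thesis's
# proposed test; the stack is synthetic, not SAR data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_images, h, w = 30, 8, 8
stack = rng.rayleigh(scale=1.0, size=(n_images, h, w))              # toy amplitude stack
stack[:, 4:, 4:] = rng.rayleigh(scale=2.5, size=(n_images, 4, 4))   # statistically different patch

ref = (2, 2)                                  # reference pixel
alpha = 0.05
homogeneous = np.zeros((h, w), dtype=bool)
for i in range(h):
    for j in range(w):
        _, p = ks_2samp(stack[:, ref[0], ref[1]], stack[:, i, j])
        homogeneous[i, j] = p > alpha         # cannot reject "same distribution"

print(homogeneous.astype(int))
```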

Gli stili APA, Harvard, Vancouver, ISO e altri
46

Hinrichs, Angela S. (Angela Soleil). "An architecture for distributing processing on realtime data streams". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11418.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
47

Marcus, Adam Ph D. Massachusetts Institute of Technology. "Optimization techniques for human computation-enabled data processing systems". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78454.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 119-124).
Crowdsourced labor markets make it possible to recruit large numbers of people to complete small tasks that are difficult to automate on computers. These marketplaces are increasingly widely used, with projections of over $1 billion being transferred between crowd employers and crowd workers by the end of 2012. While crowdsourcing enables forms of computation that artificial intelligence has not yet achieved, it also presents crowd workflow designers with a series of challenges including describing tasks, pricing tasks, identifying and rewarding worker quality, dealing with incorrect responses, and integrating human computation into traditional programming frameworks. In this dissertation, we explore the systems-building, operator design, and optimization challenges involved in building a crowd-powered workflow management system. We describe a system called Qurk that utilizes techniques from databases such as declarative workflow definition, high-latency workflow execution, and query optimization to aid crowd-powered workflow developers. We study how crowdsourcing can enhance the capabilities of traditional databases by evaluating how to implement basic database operators such as sorts and joins on datasets that could not have been processed using traditional computation frameworks. Finally, we explore the symbiotic relationship between the crowd and query optimization, enlisting crowd workers to perform selectivity estimation, a key component in optimizing complex crowd-powered workflows.
by Adam Marcus.
Ph.D.
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Zheng, Xiao. "Mid-spatial frequency control for automated functional surface processing". Thesis, University of Huddersfield, 2018. http://eprints.hud.ac.uk/id/eprint/34723/.

Testo completo
Abstract (sommario):
Functional surfaces interact with surrounding substances, such as another solid, a liquid, a gas, or acoustic or electromagnetic waves, in order to achieve a required effect. Surfaces are increasingly required with complex forms and ever-increasing precision, and can be very challenging to make. In particular, mid-spatial frequency (MSF) ripples are difficult to avoid for various reasons, especially the changing misfit between a polishing tool and a complex workpiece surface as the tool moves across it. Current surface processing techniques are limited in their ability to effectively control or remove MSF errors for the following reasons: i) lack of the ability to conform to complex working surfaces, as with grinding and lapping; ii) low material removal rate, as with magnetorheological finishing and fluid jet polishing; iii) high cost (typically for ion beam figuring); iv) constraints on the size of the workpiece, as with stressed lap polishing and stressed mirror polishing. This thesis reports on the development of enhanced techniques, both to understand the formation of MSF errors on aspherical surfaces and to mitigate them, increasing overall production efficiency. This has been achieved by: 1) developing a novel stressed mirror technique providing a universal platform for aspheric experiments; 2) presenting results and analysis of kinetic simulations to understand the working mechanism of the non-Newtonian material under different stress conditions; 3) developing a non-Newtonian tool, used in a novel way, to manage misfit between an aspherical workpiece and the tool surface, achieving a peak-to-valley MSF error better than 10 nm on an off-axis aspheric part; 4) using bonded diamond pads with various diamond sizes in a 'grolishing' (hybrid grinding and polishing) procedure to achieve extremely high material removal rates (up to 267 mm³/min) while controlling MSF errors to 10 nm peak-to-valley on flat and spherical surfaces; 5) producing an aspherical surface, after grolishing with a 3-micron diamond pad, with a texture of sufficient quality to be measured directly by an interferometer, something usually achieved only after polishing.
Gli stili APA, Harvard, Vancouver, ISO e altri
49

Hou, Xianxu. "An investigation of deep learning for image processing applications". Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/52056/.

Testo completo
Abstract (sommario):
Significant strides have been made in computer vision over the past few years due to recent developments in deep learning, especially deep convolutional neural networks (CNNs). Based on advances in GPU computing, innovative model architectures, and large-scale datasets, CNNs have become the workhorse behind state-of-the-art performance for most computer vision tasks. For instance, the most advanced deep CNNs are able to achieve and even surpass human-level performance in image classification tasks. Deep CNNs have demonstrated the ability to learn very powerful image features or representations in a supervised manner. However, in spite of the impressive performance, it is still very difficult to interpret and understand the learned deep features compared to traditional human-crafted ones. It is not very clear what has been learned in the deep features and how to apply them to other tasks such as traditional image processing problems. In this thesis, we focus on exploring deep features extracted from pretrained deep convolutional neural networks, based on which we develop new techniques to tackle different traditional image processing problems. First we consider the task of quickly filtering out irrelevant information in an image. In particular, we develop a method for exploiting the object specific channel (OSC) of pretrained deep CNNs, in which neurons are activated by the presence of specific objects in the input image. Building on the basic OSC features and using face detection as a specific example, we introduce a multi-scale approach to constructing robust face heatmaps for rapidly filtering out non-face regions, thus significantly improving search efficiency for potential face candidates. Finally we develop a simple and compact face detector for unconstrained settings with state-of-the-art performance. Second we turn to the task of producing visually pleasing images. We investigate two generative models, the variational autoencoder (VAE) and the generative adversarial network (GAN), and propose to construct objective functions for training generative models by incorporating pretrained deep CNNs. As a result, high quality face images can be generated with realistic facial parts like a clear nose and mouth as well as the tiny texture of hair. Moreover, the learned latent vectors demonstrate the capability of capturing conceptual and semantic information of facial images, which can be used to achieve state-of-the-art performance in facial attribute prediction. Third we consider image information augmentation and reduction tasks. We propose a deep feature consistent principle to measure the similarity between two images in feature space. Based on this principle, we investigate several traditional image processing problems for both image information augmentation (companding and inverse halftoning) and reduction (downscaling, decolorization and HDR tone mapping). The experiments demonstrate the effectiveness of deep learning based solutions to these traditional low-level image processing problems. These approaches enjoy many advantages of neural network models, such as being easy to use and deploy and being trained end-to-end as a single learning problem without hand-crafted features. Last, we investigate objective methods for measuring perceptual image quality and propose a new deep feature based image quality assessment (DFB-IQA) index that measures the inconsistency between the distorted image and the reference image in feature space. The proposed DFB-IQA index performs very well and behaves consistently with subjective mean opinion scores when applied to distorted images created from a variety of different types of distortions. Our work contributes to a growing literature that demonstrates the power of deep learning in solving traditional signal processing problems and advances the state of the art on different tasks.
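The deep feature consistent principle amounts to comparing two images through the activations of a fixed pretrained CNN rather than pixel by pixel. A minimal PyTorch sketch of such a feature-space distance follows, using VGG-19 activations; the layer choices and equal weighting are illustrative assumptions, not the exact DFB-IQA formulation.

```python
# Feature-space distance between two images using fixed VGG-19 activations,
# as a sketch of the deep-feature-consistent idea; layer choices are
# illustrative assumptions, not the exact DFB-IQA index.
import torch
import torchvision.models as models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

LAYERS = {3, 8, 17}          # a few early/mid conv activations (assumption)

def deep_feature_distance(x, y):
    """x, y: (1, 3, H, W) tensors normalized like ImageNet inputs."""
    dist = 0.0
    fx, fy = x, y
    for i, layer in enumerate(vgg):
        fx, fy = layer(fx), layer(fy)
        if i in LAYERS:
            dist = dist + torch.nn.functional.mse_loss(fx, fy)
        if i >= max(LAYERS):
            break
    return dist

a = torch.rand(1, 3, 224, 224)
b = torch.rand(1, 3, 224, 224)
print(deep_feature_distance(a, b).item())
```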
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Kuang, Zheng. "Parallel diffractive multi-beam ultrafast laser micro-processing". Thesis, University of Liverpool, 2011. http://livrepository.liverpool.ac.uk/1333/.

Testo completo
Abstract (sommario):
During the last decade, ultrashort pulse lasers have been employed for high precision surface micro-structuring of materials such as metals, semiconductors and dielectrics with little thermal damage. Due to the ultra-high intensity of focussed femtosecond pulses (I > 10¹² W/cm²), nonlinear absorption can be induced at the focus, leading to highly localised material ablation or modification. This is now opening up applications ranging from integrated optics, through multi-photon induced refractive index engineering, to precision surface modification for silicon scribing and solar cell fabrication. To ensure non-thermal material processing, the input fluence (F) of the ultrashort pulse laser must be kept in the low regime (F ∼ 1 J cm⁻²), a few times above the well defined ablation threshold. Accordingly, μJ (10⁻⁶ J) level pulse energy input is often required for ultrashort pulse laser fine micro/nano-surface structuring. Running at a one kilohertz repetition rate, many current ultrashort pulse laser systems can provide mJ (10⁻³ J) level output pulse energy. Accordingly, significant attenuation of the laser output is required for many applications, which wastes a great deal of energy. With this limitation in mind, holographic multiple-beam ultrashort pulse laser processing, where the mJ pulse energy is split into many desired diffracted beams with an arbitrary geometric arrangement, is proposed in this thesis. The multi-beam patterns are generated by phase modulation using computer generated holograms (CGHs) displayed on a Spatial Light Modulator (SLM). The ability to address these devices in real time and synchronize them with scanning methods adds further flexibility to the processing. The results obtained in this thesis demonstrate high precision micro-fabrication of different kinds of materials with greatly increased processing efficiency and throughput, showing many potential industrial applications.
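Computer-generated holograms for multi-beam spot patterns are commonly computed with iterative Fourier-transform (Gerchberg–Saxton-type) algorithms. The NumPy sketch below shows that standard algorithm only as a generic illustration of CGH calculation for a phase-only SLM; it is not necessarily the method used in the thesis, and the spot layout and iteration count are arbitrary.

```python
# Gerchberg-Saxton style iterative Fourier-transform algorithm for computing a
# phase-only hologram producing a desired far-field spot pattern. Generic CGH
# illustration only; not necessarily the thesis's exact method.
import numpy as np

N = 256
target = np.zeros((N, N))
for dy, dx in [(-40, -40), (-40, 40), (40, -40), (40, 40), (0, 0)]:
    target[N // 2 + dy, N // 2 + dx] = 1.0           # desired diffracted spots
target_amp = np.fft.ifftshift(np.sqrt(target))       # centre pattern -> FFT coordinates

phase = np.random.default_rng(0).uniform(0, 2 * np.pi, (N, N))
for _ in range(50):
    far = np.fft.fft2(np.exp(1j * phase))            # propagate to the focal plane
    far = target_amp * np.exp(1j * np.angle(far))    # keep phase, impose target amplitude
    near = np.fft.ifft2(far)                         # back to the SLM plane
    phase = np.angle(near)                           # phase-only constraint (uniform amplitude)

hologram = np.mod(phase, 2 * np.pi)                  # phase map to display on the SLM
print(hologram.shape, hologram.min(), hologram.max())
```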
Gli stili APA, Harvard, Vancouver, ISO e altri
