Dissertations on the topic "Civil engineering Data processing"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Civil engineering Data processing."
Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.
Browse dissertations in a wide range of disciplines and compile your bibliography correctly.
Sinske, A. N. (Alexander Nicholas). „Comparative evaluation of the model-centred and the application-centred design approach in civil engineering software“. Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52741.
ENGLISH ABSTRACT: In this dissertation the traditional model-centred (MC) design approach for the development of software in the civil engineering field is compared to a newly developed application-centred (AC) design approach. In the MC design, software models play the central role. A software model maps part of the world, for example its visualization or analysis, onto the memory space of the computer. Characteristic of the MC design is that the identifiers of objects are unique and persistent only within the name scope of a model, and that the classes which define the objects are components of the model. In the AC design all objects of the engineering task are collected in an application. The identifiers of the objects are unique and persistent within the name scope of the application, and classes are no longer components of a model but components of the software platform. This means that an object can be a part of several models. It is investigated whether the demands on information and communication in modern civil engineering processes can be satisfied using the MC design approach. The investigation is based on the evaluation of existing software for the analysis and design of a sewer reticulation system of realistic dimensions and complexity. Structural, quantitative, as well as engineering complexity criteria are used to evaluate the design. For the evaluation of the quantitative criteria, in addition to the actual Duration of Execution, a User Interaction Count, the Persistent Data Size, and a Basic Instruction Count based on a source code complexity analysis are introduced. The analysis of the MC design shows that the solution of an engineering task requires several models. The interaction between the models proves to be complicated and inflexible due to the limitation of object identifier scope: the engineer is restricted to the concepts of the software developer, who must provide static bridges between models in the form of data files or software transformers. The concept of the AC design approach is then presented and implemented in a new software application written in Java. This application is also extended for the distributed computing scenario. New basic classes are defined to manage the static and dynamic behaviour of objects, and to ensure the consistent and persistent state of objects in the application. The same structural and quantitative analyses are performed using the same test data sets as for the MC application. It is shown that the AC design approach is superior to the MC design approach with respect to structural, quantitative and engineering complexity criteria. With respect to the design structure, the limitation of object identifier scope, and thus the requirement for bridges between models, falls away, which is of particular value for the distributed computing scenario. Although the new object management routines introduce an overhead in the duration of execution for the AC design compared to a hypothetical MC design with only one model and no software bridges, the advantages of the design structure outweigh this potential disadvantage.
AFRIKAANSE OPSOMMING: (Afrikaans version of the English abstract above.)
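To make the application-centred idea from the abstract above concrete, here is a minimal Python sketch (the thesis itself used Java; all class and attribute names below are invented for illustration) of objects whose identifiers live in the application scope, so that several models can reference the same object:

```python
# Minimal sketch of the application-centred idea: object identifiers are unique
# within the application, so one object can take part in several models.
import itertools

class Application:
    """Owns all engineering objects and hands out application-wide identifiers."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.objects = {}              # id -> object

    def register(self, obj):
        oid = next(self._ids)
        self.objects[oid] = obj
        return oid

class Pipe:
    def __init__(self, length_m, diameter_m):
        self.length_m = length_m
        self.diameter_m = diameter_m

app = Application()
pipe_id = app.register(Pipe(length_m=12.0, diameter_m=0.3))

# Two "models" (visualisation and hydraulic analysis) reference the same object
# by its application-scoped id instead of holding private copies of it.
visual_model = {"shown": [pipe_id]}
analysis_model = {"analysed": [pipe_id]}
assert app.objects[visual_model["shown"][0]] is app.objects[analysis_model["analysed"][0]]
```

Under a model-centred design, each model would instead hold its own copy of the pipe, and a static bridge would be needed to keep the copies consistent.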
Bostanudin, Nurul Jihan Farhah. „Computational methods for processing ground penetrating radar data“. Thesis, University of Portsmouth, 2013. https://researchportal.port.ac.uk/portal/en/theses/computational-methods-for-processing-ground-penetrating-radar-data(d519f94f-04eb-42af-a504-a4c4275d51ae).html.
Yang, Su. „PC-grade parallel processing and hardware acceleration for large-scale data analysis“. Thesis, University of Huddersfield, 2009. http://eprints.hud.ac.uk/id/eprint/8754/.
Oosthuizen, Daniel Rudolph. „Data modelling of industrial steel structures“. Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53346.
ENGLISH ABSTRACT: AP230 of STEP is an application protocol for structural steel-framed buildings. Product data relating to steel structures is represented in a model that captures analysis, design and manufacturing views. The information requirements described in AP230 were analysed with the purpose of identifying a subset of entities that are essential for the description of simple industrial steel frames, with the view to being able to describe the structural concept and to perform the structural analysis and design of such structures. Having identified the essential entities, a relational database model for these entities was developed. Planning, analysis and design applications will use the database to collaboratively exchange data relating to the structure. The comprehensiveness of the database model was investigated by mapping a simple industrial frame to the database model. Access to the database is provided by a set of classes called the database representative classes. The data-representatives are instances that have the same selection identifiers and attributes as the corresponding information units in the database. The data-representatives' primary tasks are to store themselves in the database and to retrieve their state from the database. A graphical user interface application, programmed in Java and used to describe the structural concept, with the capacity to store the concept in the database and retrieve it again through the database representative classes, was also created as part of this project.
AFRIKAANSE OPSOMMING: (Afrikaans version of the English abstract above.)
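As a rough sketch of what a "database representative" class of the kind described above might look like (using Python and sqlite3 rather than the project's Java classes; the table layout is an assumption for illustration):

```python
# Illustrative sketch: an object that can store itself in a relational table
# and restore its state from it. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE beam (id INTEGER PRIMARY KEY, section TEXT, length_m REAL)")

class BeamRepresentative:
    def __init__(self, oid, section=None, length_m=None):
        self.oid, self.section, self.length_m = oid, section, length_m

    def store(self):
        conn.execute("INSERT OR REPLACE INTO beam (id, section, length_m) VALUES (?, ?, ?)",
                     (self.oid, self.section, self.length_m))
        conn.commit()

    def retrieve(self):
        row = conn.execute("SELECT section, length_m FROM beam WHERE id = ?",
                           (self.oid,)).fetchone()
        self.section, self.length_m = row
        return self

BeamRepresentative(1, "IPE 200", 6.0).store()
print(BeamRepresentative(1).retrieve().__dict__)
```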
Lo, Kin-keung, and 羅建強. „An investigation of computer assisted testing for civil engineering students in a Hong Kong technical institute“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1988. http://hub.hku.hk/bib/B38627000.
Gkoktsi, K. „Compressive techniques for sub-Nyquist data acquisition & processing in vibration-based structural health monitoring of engineering structures“. Thesis, City, University of London, 2018. http://openaccess.city.ac.uk/19192/.
Bakhary, Norhisham. „Structural condition monitoring and damage identification with artificial neural network“. University of Western Australia. School of Civil and Resource Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0102.
De Kock, Jacobus M. (Jacobus Michiel). „An overview of municipal information systems of Drakenstein municipality with reference to the Actionit open decision support framework“. Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/52684.
ENGLISH ABSTRACT: ActionIT is a project undertaken by a consortium consisting of the CSIR, Simeka Management Consulting, the University of Pretoria and the University of Stellenbosch for the Innovation Fund of the Department of Arts, Culture, Science and Technology in South Africa. Their objective is to create a basic specification for selected information exchange that is compatible with all levels of government. The comparison between existing information systems at municipal level and ActionIT specifications will be investigated for the purpose of exposing shortcomings on both sides. Appropriate features of existing information systems will be identified for the purpose of enhancing the ActionIT specifications. The ActionIT project is presently in its user requirement and conceptual model definition phase, and this thesis aims to assist in providing information that may be helpful in future developments. The study undertaken in this thesis requires the application of analytical theory and a working knowledge of information systems and databases in order to: 1. Research existing information systems and relevant engineering data at local municipal authorities. Also important will be the gathering of information regarding systems currently in use, and the format in which information is stored and utilised at municipalities. 2. Do an adequate analysis of the contents of recorded information. This information will establish background knowledge on the operations of local authorities and a clearer understanding of information systems. 3. Evaluate to what degree existing information systems comply with ActionIT specifications. This will be the main focus of this thesis. Thus the focus of this thesis is to record (provide an overview of) activities in a municipal environment and the interaction with that environment on information system level, where standards provided by ActionIT as an Open Decision Support Framework can be of value.
AFRIKAANSE OPSOMMING: (Afrikaans version of the English abstract above.)
Camp, Nicholas Julian. „A model for the time dependent behaviour of rock joints“. Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/21138.
Goy, Cristina. „Displacement Data Processing and FEM Model Calibration of a 3D-Printed Groin Vault Subjected to Shaking-Table Tests“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20061/.
Glick, Travis Bradley. „Utilizing High-Resolution Archived Transit Data to Study Before-and-After Travel-Speed and Travel-Time Conditions“. PDXScholar, 2017. https://pdxscholar.library.pdx.edu/open_access_etds/4065.
Ragnucci, Beatrice. „Data analysis of collapse mechanisms of a 3D printed groin vault in shaking table testing“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22365/.
Gichamo, Tseganeh Zekiewos. „Advancing Streamflow Forecasts Through the Application of a Physically Based Energy Balance Snowmelt Model With Data Assimilation and Cyberinfrastructure Resources“. DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7463.
Assefha, Sabina, and Matilda Sandell. „Evaluation of digital terrain models created in post processing software for UAS-data : Focused on point clouds created through block adjustment and dense image matching“. Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-26976.
Unmanned Aerial Systems (UAS) are used increasingly often for data capture in geodetic surveying. As the range of applications grows, so do the demands on the measurement uncertainty of these surveys. The post-processing software used is one factor that affects the measurement uncertainty of the final product. It is therefore important to evaluate how different software packages affect the end result and what role the chosen parameters play. In UAS photogrammetry, overlapping images are captured to generate point clouds, which in turn can be processed into digital terrain models (DTMs). The purpose of this study is to evaluate how the measurement uncertainty differs when the same data is processed through block adjustment and dense image matching in two different software packages. The software packages used in the study are UAS Master and Pix4D. A further aim is to investigate how the chosen extraction level in UAS Master and the chosen image scale in Pix4D affect the result when terrain models are generated. Three terrain models were created in UAS Master with different extraction levels and another three were created in Pix4D with different image scales. 26 control profiles were measured with network RTK in the study area to compute the mean deviation and the root-mean-square (RMS) value, in order to verify and compare the measurement uncertainty of the models. The study shows that the end result varies when the same data is processed in different software packages. It also shows that the chosen extraction level in UAS Master and the chosen image scale in Pix4D affect the results differently. In UAS Master the measurement uncertainty decreases with increasing extraction level; in Pix4D it is harder to discern a clear pattern. Both software packages were able to produce terrain models with an RMS value around 0.03 m. The mean deviation in all models is below 0.02 m, which is the requirement for class 1 in the technical specification SIS-TS 21144:2016. However, the mean deviation for the ground type gravel in the UAS Master model with a low extraction level exceeds the requirements for class 1. All but one of the terrain models therefore meet the requirements for class 1, the class with the most stringent requirements.
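The verification step described above, comparing model heights against RTK-measured control profiles via mean deviation and RMS, reduces to a few lines; the numbers below are placeholders for real data:

```python
# Compare terrain-model heights against RTK check points and report
# mean deviation (systematic offset) and RMS (overall uncertainty).
import numpy as np

model_heights = np.array([10.012, 10.155, 9.981, 10.203])   # sampled from the DTM
check_heights = np.array([10.000, 10.170, 9.990, 10.180])   # network-RTK control profiles

diff = model_heights - check_heights
mean_deviation = diff.mean()
rms = np.sqrt(np.mean(diff ** 2))
print(f"mean deviation = {mean_deviation:.3f} m, RMS = {rms:.3f} m")
```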
Thorell, Marcus, and Mattias Andersson. „Implementering av HAZUS-MH i Sverige : Möjligheter och hinder“. Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72666.
When modeling risks for natural disasters, GIS is a fundamental tool. HAZUS-MH is a GIS-based risk analysis tool developed by the American authority FEMA. HAZUS-MH has a well-developed methodology for modeling natural disasters, which is something that is demanded at the European level within the flood directive framework. Hence, there is an interest in implementing HAZUS-MH for non-US conditions. The aim of the study is to deepen the knowledge needed for the implementation and use of HAZUS-MH in Sweden. To enable implementation, Swedish data must be processed to match the data structure of HAZUS-MH. The methods for this study are a literature review of previous studies and manuals, and data processing. Experiences from the data processing were collected to build a manual for data processing and to evaluate opportunities and obstacles for implementation in Sweden. The result shows what the system requirements and other settings for using HAZUS-MH look like; the other settings include the connection to the HAZUS-MH database, et cetera. For the adaptation of Swedish data, the requirements concerning data (administrative division, inventory data and hydrological data), data processing (recommended workflow to fill shapefiles and attribute tables with information) and data import are described. The result also describes the application of HAZUS-MH with Swedish data. This study identifies several possibilities of HAZUS-MH; the opportunities for creating risk and vulnerability maps and for data import are the largest. The time required to perform the adaptation of Swedish data was approximately 15 working days. This study estimates that, with the help of manuals for the adaptation, this time could be shortened to approximately 3 working days. If the process of adapting data were automated, this time could be shortened further. The largest obstacle found in this study is the data collection process; to use the full potential of HAZUS-MH, extensive data collection is needed. Another obstacle is the limitation of hydrological data: external hydrological data is necessary to get as accurate an analysis as possible. Further research in the field should, according to this study, focus on methods of collecting data and the development of an automatic process for managing data.
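A heavily simplified sketch of the data-adaptation step discussed above, mapping fields of a Swedish inventory record onto fields expected by HAZUS-MH; the field names on both sides are illustrative assumptions, not the actual HAZUS-MH schema:

```python
# Rename and recode attributes of a Swedish building-inventory record so they
# match an assumed HAZUS-MH-style field layout (names are illustrative only).
hazus_field_map = {
    "byggnadstyp": "OccupancyClass",
    "byggar":      "YearBuilt",
    "area_m2":     "Area",
}

def adapt_record(swedish_record):
    adapted = {hazus_field_map[k]: v for k, v in swedish_record.items() if k in hazus_field_map}
    adapted["Area"] = adapted["Area"] * 10.7639   # assume target field is in square feet
    return adapted

print(adapt_record({"byggnadstyp": "RES1", "byggar": 1978, "area_m2": 120.0}))
```

Automating exactly this kind of renaming and unit conversion is what the study suggests could cut the roughly 15 working days of manual adaptation down further.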
Fernandez, Noemi. „Statistical information processing for data classification“. FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.
Chiu, Cheng-Jung. „Data processing in nanoscale profilometry“. Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36677.
Includes bibliographical references (p. 176-177).
New developments on the nanoscale are taking place rapidly in many fields. Instrumentation used to measure and understand the geometry and properties of small-scale structures is therefore essential. One of the most promising devices for taking measurement science into the nanoscale is the scanning probe microscope (SPM). A prototype of a nanoscale profilometer based on the scanning probe microscope has been built in the Laboratory for Manufacturing and Productivity at MIT. A sample is placed on a precision flip stage and different sides of the sample are scanned under the SPM to acquire their separate surface topographies. To reconstruct the original three-dimensional profile, techniques such as digital filtering, edge identification, and image matching are investigated and implemented in computer programs that post-process the data, with greater emphasis placed on the nanoscale application. The important programming issues are addressed, too. Finally, this system's error sources are discussed and analyzed.
by Cheng-Jung Chiu.
M.S.
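Two of the post-processing steps named in this abstract, digital filtering and edge identification, can be sketched for a 1-D surface trace as follows (window size and threshold are illustrative):

```python
# Smooth a noisy height profile with a moving average, then flag candidate
# edges where the height gradient exceeds a threshold.
import numpy as np

def moving_average(profile, window=5):
    kernel = np.ones(window) / window
    return np.convolve(profile, kernel, mode="same")

def find_edges(profile, threshold):
    gradient = np.abs(np.diff(profile))
    return np.where(gradient > threshold)[0]   # indices where the height jumps

z = np.concatenate([np.zeros(50), np.ones(50)]) + 0.02 * np.random.randn(100)
smoothed = moving_average(z)
print("edge candidates near index:", find_edges(smoothed, threshold=0.1))
```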
Derksen, Timothy J. (Timothy John). „Processing of outliers and missing data in multivariate manufacturing data“. Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38800.
Includes bibliographical references (leaf 64).
by Timothy J. Derksen.
M.Eng.
Nyström, Simon, and Joakim Lönnegren. „Processing data sources with big data frameworks“. Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188204.
Big data is a concept that is growing rapidly. As more and more data is generated and collected, there is a growing need for efficient solutions that can be used to process all this data in an attempt to extract value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of files of relatively small size. More specifically, it is to test two frameworks that can be used for big data processing. The two frameworks tested against each other are Apache NiFi and Apache Storm. A method is described for, first, constructing a data flow and, second, constructing a way to test the performance and scalability of the frameworks running that data flow. The results reveal that Apache Storm is faster than NiFi for the type of test that was performed. When the number of nodes included in the tests was increased, performance did not always increase. This shows that increasing the number of nodes in a big data processing chain does not always lead to better performance and that other measures are sometimes required to increase performance.
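A minimal sketch of the kind of throughput measurement implied by the abstract above, counting how many small files per second a processing function can handle (the per-file work is a stand-in):

```python
# Feed a directory of small files through a processing function and
# report files per second; process() is a placeholder for the real work.
import time
from pathlib import Path

def process(path: Path) -> int:
    return len(path.read_bytes())

def throughput(directory: str) -> float:
    files = list(Path(directory).glob("*"))
    start = time.perf_counter()
    for f in files:
        process(f)
    elapsed = time.perf_counter() - start
    return len(files) / elapsed if elapsed > 0 else float("inf")

# print(f"{throughput('/data/incoming'):.1f} files/s")   # example call
```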
Zhao, Wenguang S. M. Massachusetts Institute of Technology. „Modeling of ultrasonic processing“. Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33738.
Includes bibliographical references (leaves 53-55).
This paper presents a finite element analysis (FEA) of ultrasonic processing of an aerospace-grade carbon-epoxy composite laminate. An ultrasonic (approximately 30 kHz) loading horn is applied to a small region at the laminate surface, which produces a spatially nonuniform strain energy field within the material. A fraction of this strain energy is dissipated during each ultrasonic loading cycle depending on the temperature-dependent viscoelastic response of the material. This dissipation produces a rapid heating, yielding temperature increases over 100 °C in approximately 1 s and permitting the laminate to be consolidated prior to full curing in an autoclave or other equipment. The spatially nonuniform, nonlinear, and coupled nature of this process, along with the large number of experimental parameters, makes trial-and-error analysis of the process intractable, and the FEA approach is valuable in process development and optimization.
by Wenguang Zhao.
S.M.
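The heating mechanism described above can be checked with a back-of-the-envelope calculation: for sinusoidal strain of amplitude eps0 at frequency f, a viscoelastic material with loss modulus E'' dissipates pi*E''*eps0^2 per cycle and unit volume. All material values in this sketch are assumed, not taken from the thesis:

```python
# Order-of-magnitude estimate of ultrasonic viscoelastic heating.
import math

f      = 30e3    # ultrasonic frequency, Hz
eps0   = 0.003   # strain amplitude (assumed)
E_loss = 0.2e9   # loss modulus, Pa (assumed)
rho    = 1600.0  # density, kg/m^3 (assumed)
cp     = 1000.0  # specific heat, J/(kg*K) (assumed)

q = f * math.pi * E_loss * eps0**2   # dissipated power per unit volume, W/m^3
dT_dt = q / (rho * cp)               # adiabatic heating rate, K/s
print(f"q = {q:.2e} W/m^3, dT/dt = {dT_dt:.0f} K/s")
```

With these assumed values the heating rate comes out on the order of 100 K/s, consistent with the abstract's statement of a temperature rise of over 100 °C in roughly one second.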
徐順通 and Sung-thong Andrew Chee. „Computerisation in Hong Kong professional engineering firms“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31263124.
Wang, Yi. „Data Management and Data Processing Support on Array-Based Scientific Data“. The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.
Vann, A. M. „Intelligent monitoring of civil engineering systems“. Thesis, University of Bristol, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.238845.
Grinman, Alex J. „Natural language processing on encrypted patient data“. Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113438.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
While many industries can benefit from machine learning techniques for data analysis, they often do not have the technical expertise or computational power to do so. Therefore, many organizations would benefit from outsourcing their data analysis. Yet, stringent data privacy policies prevent outsourcing sensitive data and may stop the delegation of data analysis in its tracks. In this thesis, we put forth a two-party system where one party capable of powerful computation can run certain machine learning algorithms from the natural language processing domain on the second party's data, where the first party is limited to learning only specific functions of the second party's data and nothing else. Our system provides simple cryptographic schemes for locating keywords, matching approximate regular expressions, and computing frequency analysis on encrypted data. We present a full implementation of this system in the form of an extendible software library and a command line interface. Finally, we discuss a medical case study where we used our system to run a suite of unmodified machine learning algorithms on encrypted free text patient notes.
by Alex J. Grinman.
M. Eng.
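As a greatly simplified stand-in for the keyword-location idea (not the thesis's actual cryptographic scheme), deterministic keyed tokens already show how one party can match keywords without ever seeing the plaintext words:

```python
# The data owner replaces each word with a keyed HMAC token; the analysis
# party matches query tokens against the tokenized note.
import hmac, hashlib

KEY = b"data-owner-secret"

def token(word: str) -> str:
    return hmac.new(KEY, word.lower().encode(), hashlib.sha256).hexdigest()

def encrypt_note(note: str) -> list[str]:
    return [token(w) for w in note.split()]

def find_keyword(encrypted_note: list[str], query_token: str) -> list[int]:
    return [i for i, t in enumerate(encrypted_note) if t == query_token]

note = encrypt_note("patient reports chest pain after exercise")
print(find_keyword(note, token("pain")))    # -> [3]
```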
Westlund, Kenneth P. (Kenneth Peter). „Recording and processing data from transient events“. Thesis, Massachusetts Institute of Technology, 1988. https://hdl.handle.net/1721.1/129961.
Includes bibliographical references.
by Kenneth P. Westlund Jr.
Thesis (B.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1988.
Setiowijoso, Liono. „Data Allocation for Distributed Programs“. PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/5102.
Vemulapalli, Eswar Venkat Ram Prasad 1976. „Architecture for data exchange among partially consistent data models“. Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/84814.
Includes bibliographical references (leaves 75-76).
by Eswar Venkat Ram Prasad Vemulapalli.
S.M.
Jakovljevic, Sasa. „Data collecting and processing for substation integration enhancement“. Texas A&M University, 2003. http://hdl.handle.net/1969/93.
Aygar, Alper. „Doppler Radar Data Processing And Classification“. Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.
Lu, Feng. „Big data scalability for high throughput processing and analysis of vehicle engineering data“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207084.
Smith, Alexander D. „Computerized modeling of geotechnical stratigraphic data“. Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14360.
Archives copy bound in 1 v.; Barker copy in 2 v.
Includes bibliographical references (leaves 246-251).
by Alexander Donnan Smith.
Ph.D.
Chen, Jiawen (Jiawen Kevin). „Efficient data structures for piecewise-smooth video processing“. Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/66003.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-102).
A number of useful image and video processing techniques, ranging from low level operations such as denoising and detail enhancement to higher level methods such as object manipulation and special effects, rely on piecewise-smooth functions computed from the input data. In this thesis, we present two computationally efficient data structures for representing piecewise-smooth visual information and demonstrate how they can dramatically simplify and accelerate a variety of video processing algorithms. We start by introducing the bilateral grid, an image representation that explicitly accounts for intensity edges. By interpreting brightness values as Euclidean coordinates, the bilateral grid enables simple expressions for edge-aware filters. Smooth functions defined on the bilateral grid are piecewise-smooth in image space. Within this framework, we derive efficient reinterpretations of a number of edge-aware filters commonly used in computational photography as operations on the bilateral grid, including the bilateral filter, edge-aware scattered data interpolation, and local histogram equalization. We also show how these techniques can be easily parallelized onto modern graphics hardware for real-time processing of high definition video. The second data structure we introduce is the video mesh, designed as a flexible central data structure for general-purpose video editing. It represents objects in a video sequence as 2.5D "paper cutouts" and allows interactive editing of moving objects and modeling of depth, which enables 3D effects and post-exposure camera control. In our representation, we assume that motion and depth are piecewise-smooth, and encode them sparsely as a set of points tracked over time. The video mesh is a triangulation over this point set and per-pixel information is obtained by interpolation. To handle occlusions and detailed object boundaries, we rely on the user to rotoscope the scene at a sparse set of frames using spline curves. We introduce an algorithm to robustly and automatically cut the mesh into local layers with proper occlusion topology, and propagate the splines to the remaining frames. Object boundaries are refined with per-pixel alpha mattes. At its core, the video mesh is a collection of texture-mapped triangles, which we can edit and render interactively using graphics hardware. We demonstrate the effectiveness of our representation with special effects such as 3D viewpoint changes, object insertion, depth-of-field manipulation, and 2D to 3D video conversion.
by Jiawen Chen.
Ph.D.
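A minimal sketch of the bilateral grid described above: intensities are splatted into a coarse 3-D grid over (x, y, intensity), the grid is blurred, and filtered values are sliced back out at each pixel. Grid resolution and sigmas are illustrative, and the real data structure is far more optimized (and GPU-parallel):

```python
# Naive bilateral-grid filter: splat, blur, slice.
import numpy as np
from scipy.ndimage import gaussian_filter

def bilateral_grid_filter(img, sigma_s=8, sigma_r=0.1):
    h, w = img.shape
    gx = (np.arange(h) / sigma_s).astype(int)
    gy = (np.arange(w) / sigma_s).astype(int)
    gz = (img / sigma_r).astype(int)
    grid_val = np.zeros((gx.max() + 1, gy.max() + 1, gz.max() + 1))
    grid_wt = np.zeros_like(grid_val)
    for i in range(h):                       # splat pixels into the grid
        for j in range(w):
            grid_val[gx[i], gy[j], gz[i, j]] += img[i, j]
            grid_wt[gx[i], gy[j], gz[i, j]] += 1.0
    grid_val = gaussian_filter(grid_val, sigma=1.0)   # blur in grid space
    grid_wt = gaussian_filter(grid_wt, sigma=1.0)
    out = np.empty_like(img)
    for i in range(h):                       # slice back at pixel coordinates
        for j in range(w):
            v, wgt = grid_val[gx[i], gy[j], gz[i, j]], grid_wt[gx[i], gy[j], gz[i, j]]
            out[i, j] = v / wgt if wgt > 0 else img[i, j]
    return out

img = np.random.rand(64, 64)
print(bilateral_grid_filter(img).shape)      # (64, 64)
```

Because smoothing happens in the lifted (x, y, intensity) space, pixels on opposite sides of a strong edge fall into different intensity bins and are not averaged together, which is what makes the filter edge-aware.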
Jakubiuk, Wiktor. „High performance data processing pipeline for connectome segmentation“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106122.
"December 2015." Cataloged from PDF version of thesis.
Includes bibliographical references (pages 83-88).
By investigating neural connections, neuroscientists try to understand the brain and reconstruct its connectome. Automated connectome reconstruction from high resolution electron microscopy is a challenging problem, as all neurons and synapses in a volume have to be detected. A mm3 of high-resolution brain tissue takes roughly a petabyte of space, which the state-of-the-art pipelines are unable to process to date. A high-performance, fully automated image processing pipeline is proposed. Using a combination of image processing and machine learning algorithms (convolutional neural networks and random forests), the pipeline constructs a 3-dimensional connectome from 2-dimensional cross-sections of a mammal's brain. The proposed system achieves a low error rate (comparable with the state of the art) and is capable of processing volumes of 100's of gigabytes in size. The main contributions of this thesis are multiple algorithmic techniques for 2-dimensional pixel classification of varying accuracy and speed trade-off, as well as a fast object segmentation algorithm. The majority of the system is parallelized for multi-core machines, and with minor additional modification is expected to work in a distributed setting.
by Wiktor Jakubiuk.
M. Eng. in Computer Science and Engineering
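Using stock components as stand-ins for the pipeline's own classifiers, the two stages named above (per-pixel classification followed by object segmentation) can be sketched as:

```python
# Per-pixel random-forest classification followed by connected-component
# segmentation of the thresholded probability map; data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.ndimage import label

rng = np.random.default_rng(0)
features = rng.random((1000, 4))                 # per-pixel feature vectors (placeholder)
labels = (features[:, 0] > 0.5).astype(int)      # placeholder ground truth

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)
prob_map = clf.predict_proba(rng.random((32 * 32, 4)))[:, 1].reshape(32, 32)

segments, n_segments = label(prob_map > 0.5)     # 2-D connected components
print(n_segments, "candidate regions")
```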
Nguyen, Qui T. „Robust data partitioning for ad-hoc query processing“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106004.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 59-62).
Data partitioning can significantly improve query performance in distributed database systems. Most proposed data partitioning techniques choose the partitioning based on a particular expected query workload or use a simple upfront scheme, such as uniform range partitioning or hash partitioning on a key. However, these techniques do not adequately address the case where the query workload is ad-hoc and unpredictable, as in many analytic applications. The HYPER-PARTITIONING system aims to fill that gap, by using a novel space-partitioning tree on the space of possible attribute values to define partitions incorporating all attributes of a dataset. The system creates a robust upfront partitioning tree, designed to benefit all possible queries, and then adapts it over time in response to the actual workload. This thesis evaluates the robustness of the upfront hyper-partitioning algorithm, describes the implementation of the overall HYPER-PARTITIONING system, and shows how hyper-partitioning improves the performance of both selection and join queries.
by Qui T. Nguyen.
M. Eng.
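A simplified sketch of an upfront space-partitioning tree in the spirit described above, recursively splitting rows on the median of a cycling attribute so that every attribute contributes to the partitioning (the real system's tree construction and workload adaptation are more involved):

```python
# Recursive median splits over cycling attributes; leaves are partitions.
def build_partitions(rows, attributes, depth=0, max_rows=2):
    if len(rows) <= max_rows or not attributes:
        return [rows]
    attr = attributes[depth % len(attributes)]
    values = sorted(r[attr] for r in rows)
    median = values[len(values) // 2]
    left = [r for r in rows if r[attr] < median]
    right = [r for r in rows if r[attr] >= median]
    if not left or not right:
        return [rows]
    return (build_partitions(left, attributes, depth + 1, max_rows) +
            build_partitions(right, attributes, depth + 1, max_rows))

rows = [{"age": a, "salary": s} for a, s in [(25, 40), (32, 55), (41, 70), (29, 48), (57, 90)]]
for p in build_partitions(rows, ["age", "salary"]):
    print(p)
```

A selection on either attribute can then prune whole subtrees instead of scanning every partition, which is the benefit an ad-hoc, unpredictable workload gets from incorporating all attributes upfront.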
Bao, Shunxing. „Algorithmic Enhancements to Data Colocation Grid Frameworks for Big Data Medical Image Processing“. Thesis, Vanderbilt University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=13877282.
Large-scale medical imaging studies to date have predominantly leveraged in-house, laboratory-based or traditional grid computing resources for their computing needs, where the applications often use hierarchical data structures (e.g., Network file system file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance for laboratory-based approaches reveals that performance is impeded by standard network switches, since typical processing can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. On the other hand, the grid may be costly to use due to the dedicated resources used to execute the tasks and the lack of elasticity. With the increasing availability of cloud-based big data frameworks, such as Apache Hadoop, cloud-based services for executing medical imaging studies have shown promise.
Despite this promise, our studies have revealed that existing big data frameworks exhibit different performance limitations for medical imaging applications, which calls for new algorithms that optimize their performance and suitability for medical imaging. For instance, Apache HBase's data distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). Big data medical image processing applications involving multi-stage analysis often exhibit significant variability in processing times, ranging from a few seconds to several days. Due to the sequential nature of executing the analysis stages with traditional software technologies and platforms, any errors in the pipeline are only detected at the later stages, despite the sources of errors predominantly being the highly compute-intensive first stage. This wastes precious computing resources and incurs prohibitively higher costs for re-executing the application. To address these challenges, this research proposes a framework - Hadoop & HBase for Medical Image Processing (HadoopBase-MIP) - which develops a range of performance optimization algorithms and employs a number of system behavior models for data storage, data access and data processing. We also introduce how to build prototypes to help verify system behavior empirically. Furthermore, we introduce a discovery made during the development of HadoopBase-MIP about a new type of contrast for deep brain structure enhancement in medical imaging. Finally, we show how to carry the Hadoop-based framework design forward into a commercialized big data / high-performance computing cluster with a cheap, scalable and geographically distributed file system.
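One mitigation implied above is to encode the imaging hierarchy into the row key so that related records stay adjacent in a key-ordered store such as HBase; the key layout below is an assumption for illustration, not the framework's actual scheme:

```python
# Build hierarchical row keys (project / subject / session / scan / slice) so
# that lexicographic ordering keeps slices of the same scan together.
def row_key(project, subject, session, scan, slice_idx):
    return f"{project}|{subject}|{session}|{scan}|{slice_idx:05d}"

keys = [row_key("ProjA", "subj12", "s1", "scan2", i) for i in (3, 1, 2)]
print(sorted(keys))   # adjacent keys -> adjacent storage regions
```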
Mohd, Yunus Mohd Zulkifli. „Geospatial data management throughout large area civil engineering projects“. Thesis, University of Newcastle Upon Tyne, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360241.
Hatchell, Brian. „Data base design for integrated computer-aided engineering“. Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16744.
Waite, Martin. „Data structures for the reconstruction of engineering drawings“. Thesis, Nottingham Trent University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328794.
Einstein, Noah. „SmartHub: Manual Wheelchair Data Extraction and Processing Device“. The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555352793977171.
Guttman, Michael. „Sampled-data IIR filtering via time-mode signal processing“. Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86770.
In this thesis, the design of sampled-data filters with an infinite impulse response based on time-mode signal processing is presented. Time-mode signal processing (TMSP), defined as the processing of sampled analog information using time differences as the variables, has become one of the most popular emerging circuit design techniques. Since TMSP is still relatively new, much development is still required to extend this technology into a general signal processing tool. In this research, a set of building blocks capable of realizing most mathematical operations in the time domain is introduced. By arranging these elementary structures, higher-order time-mode systems, more specifically time-mode filters, are realized. Three second-order time-domain filters (low-pass, band-pass and high-pass) are modeled in MATLAB and simulated in Spectre to verify the design methodology. Finally, a damped integrator and a second-order time-mode IIR low-pass filter are implemented with discrete components.
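For comparison with the time-mode filters described above, the conventional discrete-time counterpart of a second-order IIR section is a biquad; the sketch below uses a standard Butterworth design purely as an illustration:

```python
# Second-order (biquad) IIR low-pass filter applied to a sample stream.
import numpy as np
from scipy.signal import butter, lfilter

b, a = butter(N=2, Wn=0.2)    # 2nd-order low-pass, cutoff at 0.1*fs (Wn is relative to Nyquist)
x = np.random.randn(256)      # input samples
y = lfilter(b, a, x)          # y[n] = b0*x[n]+b1*x[n-1]+b2*x[n-2]-a1*y[n-1]-a2*y[n-2]
print(len(y), y[:3])
```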
Roome, Stephen John. „The industrial application of digital signal processing“. Thesis, City University London, 1989. http://openaccess.city.ac.uk/7405/.
Breest, Martin, Paul Bouché, Martin Grund, Sören Haubrock, Stefan Hüttenrauch, Uwe Kylau, Anna Ploskonos, Tobias Queck and Torben Schreiter. „Fundamentals of Service-Oriented Engineering“. Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2009/3380/.
Faber, Marc. „On-Board Data Processing and Filtering“. International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596433.
One of the requirements resulting from mounting pressure on flight test schedules is the reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has ramifications such as the demand for recording and processing not just raw measurement data but also data converted to engineering units in real time, as well as for an optimized use of the bandwidth available for telemetry downlink, and ultimately for shortening the duration of procedures intended to disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to successfully address these needs consists in implementing more CPU intelligence and processing power directly on the on-board flight test equipment. This provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to easier-to-handle engineering units. This leads to a faster extraction and analysis of the actual data contents of the on-board signals and busses. Another central goal is the efficient use of the available bandwidth for telemetry. Real-time data reduction via intelligent filtering is one approach to achieve this challenging objective. The data filtering process should be performed simultaneously on an all-data-capture recording, and the user should be able to easily select the interesting data without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should provide seamless and transparent data transmission, making quick data analysis viable. On-board data processing and filtering has the potential to become the main future path to handling the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
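The two on-board steps discussed above, conversion to engineering units and intelligent filtering before downlink, can be sketched as follows (calibration values and the filter rule are illustrative):

```python
# Convert raw sensor counts to engineering units, then forward only the
# samples that pass a user-defined filter to reduce the telemetry load.
def to_engineering_units(raw_counts, scale=0.05, offset=-10.0):
    return raw_counts * scale + offset          # e.g. counts -> degrees Celsius

def filter_for_downlink(samples, predicate):
    return [s for s in samples if predicate(s)]

raw = [180, 205, 400, 210]
eu = [to_engineering_units(c) for c in raw]
print(filter_for_downlink(eu, lambda t: t > 0.0))   # keep only samples above 0 degC
```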
Morikawa, Takayuki. „Incorporating stated preference data in travel demand analysis“. Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14326.
Lin, Keng-Fan. „Hybrid Analysis for Synthetic Aperture Radar Data Stack“. Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10267516.
Demand for Earth observation has risen in the past few decades. As technology has advanced, remote sensing techniques have become more and more essential in various applications, such as landslide recognition, land use monitoring, and ecological observation. Among the existing techniques, synthetic aperture radar (SAR) has the advantage of making day-and-night acquisitions in any weather conditions. These characteristics make SAR suitable for delivering reliable measurements over cloudy areas and for performing measurements without any external energy source. However, SAR images suffer from lower spatial and spectral resolution compared to optical ones. The coherent nature of radar signals also results in speckle, which makes the acquired images noisy.
To overcome the aforementioned issues, one can consider analyzing a long series of SAR-based observations over the same area. In that sense, spatial correlations of the image pixels can be studied based on similarity of temporal statistics. Adaptive image processing can thus be created. In the past, such an adaptive procedure was only applied for slow movement detection using SAR interferometry (InSAR). For the first time, we propose a full framework that allows processing the SAR images in an adaptive manner without losing the original resolution. This framework, namely Hybrid Analysis for Synthetic Aperture Radar (HASAR), exploits information in single-polarized/multi-temporal data stacks and focuses on two applications: change detection and image classification. Three techniques are developed in this study. First, we propose a new hypothesis testing procedure to identify pixels behaving similarly over time. Compared with conventional methods, the proposed test provides similarity measurements regardless of temporal variabilities and outliers. Its effectiveness paves the way for the following two techniques. Second, we develop an automatic change detection approach which utilizes spatiotemporal observations obtained by the first technique to locate abrupt changes in the imaged areas. Compared with existing methods, this approach does not require parameter-tuning procedures, giving a fully unsupervised solution for multi-temporal change analysis. Last, we deliver an efficient solution for classifying single-polarized datasets. A first-level classifier is implemented to analyze the spatiotemporal observations previously mentioned. Different from any other existing methods, the proposed method does not need polarimetric information for solving the multi-class problem. Its effectiveness greatly improves the added value of the single-polarization datasets.
Various experiments have been made to test the effectiveness of each proposed technique. First of all, the results from TerraSAR-X datasets in Los Angeles and Hong Kong signify that the proposed testing procedure is able to deliver effective extraction of statistically homogeneous pixels. Next, the detected changes from ERS-02 datasets in Taiwan show good matches with ground truth. Compared with conventional pairwise change analysis, the proposed multi-temporal change analysis provides much more observations that can be used for change analysis. The detected changes can be better located through the temporal statistics. Only historically significant changes will be considered as changes, which greatly reduces the false alarm rate. Finally, the results of multi-class classification from TanDEM-X and COSMO-SkyMed datasets in Los Angeles and Chicago, respectively, reveal high classification accuracies without complicated training procedures. It thus provides an entirely new solution for classifying the single-polarized datasets. By collectively utilizing different attributes (amplitude/coherence), dimensionalities (space/time), and processing approaches (pixel-based/object-based), the HASAR system augments the information content of the SAR data stacks. It, therefore, shows high potentials for continuous Earth monitoring.
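As a generic stand-in for the similarity test described above (HASAR's actual statistic differs), a two-sample Kolmogorov-Smirnov test can group pixels whose temporal amplitude statistics look alike:

```python
# Group pixels in a small patch by testing whether their temporal samples
# could come from the same distribution as a reference pixel's samples.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
stack = rng.gamma(shape=2.0, scale=1.0, size=(30, 5, 5))   # 30 acquisitions, 5x5 patch

ref = stack[:, 2, 2]                        # reference pixel's temporal samples
homogeneous = np.zeros((5, 5), dtype=bool)
for i in range(5):
    for j in range(5):
        result = ks_2samp(ref, stack[:, i, j])
        homogeneous[i, j] = result.pvalue > 0.05   # cannot reject "same distribution"
print(homogeneous)
```

Pixels flagged as statistically homogeneous can then be averaged adaptively, reducing speckle without blurring across genuinely different land covers and without losing the original resolution.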
Hinrichs, Angela S. (Angela Soleil). „An architecture for distributing processing on realtime data streams“. Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11418.
Marcus, Adam Ph D. Massachusetts Institute of Technology. „Optimization techniques for human computation-enabled data processing systems“. Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78454.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 119-124).
Crowdsourced labor markets make it possible to recruit large numbers of people to complete small tasks that are difficult to automate on computers. These marketplaces are increasingly widely used, with projections of over $1 billion being transferred between crowd employers and crowd workers by the end of 2012. While crowdsourcing enables forms of computation that artificial intelligence has not yet achieved, it also presents crowd workflow designers with a series of challenges including describing tasks, pricing tasks, identifying and rewarding worker quality, dealing with incorrect responses, and integrating human computation into traditional programming frameworks. In this dissertation, we explore the systems-building, operator design, and optimization challenges involved in building a crowd-powered workflow management system. We describe a system called Qurk that utilizes techniques from databases such as declarative workflow definition, high-latency workflow execution, and query optimization to aid crowd-powered workflow developers. We study how crowdsourcing can enhance the capabilities of traditional databases by evaluating how to implement basic database operators such as sorts and joins on datasets that could not have been processed using traditional computation frameworks. Finally, we explore the symbiotic relationship between the crowd and query optimization, enlisting crowd workers to perform selectivity estimation, a key component in optimizing complex crowd-powered workflows.
by Adam Marcus.
Ph.D.
Zheng, Xiao. „Mid-spatial frequency control for automated functional surface processing“. Thesis, University of Huddersfield, 2018. http://eprints.hud.ac.uk/id/eprint/34723/.
Hou, Xianxu. „An investigation of deep learning for image processing applications“. Thesis, University of Nottingham, 2018. http://eprints.nottingham.ac.uk/52056/.
Kuang, Zheng. „Parallel diffractive multi-beam ultrafast laser micro-processing“. Thesis, University of Liverpool, 2011. http://livrepository.liverpool.ac.uk/1333/.