Theses on the topic "Data processing"
Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles.
Consult the top 50 dissertations and doctoral theses for your research on the topic "Data processing".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the work's metadata.
Browse theses from many different scientific areas and compile a correct bibliography.
Long, Christopher C. "Data Processing for NASA's TDRSS DAMA Channel". International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611474.
Full text
Presently, NASA's Space Network (SN) does not have the ability to receive random messages from satellites using the system. Scheduling of the service must be done by the owner of the spacecraft through Goddard Space Flight Center (GSFC). The goal of NASA is to improve the current system so that random messages generated on board the satellite can be received by the SN. The messages will be requests for service that the satellite's control system deems necessary. These messages will then be sent to the owner of the spacecraft, where appropriate action and scheduling can take place. This new service is known as the Demand Assignment Multiple Access system (DAMA).
Sun, Wenjun. "Parallel data processing for semistructured data". Thesis, London South Bank University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434394.
Full text
Giordano, Manfredi. "Autonomic Big Data Processing". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14837/.
Full text
Rydell, Joakim. "Advanced MRI Data Processing". Doctoral thesis, Linköping : Department of Biomedical Engineering, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10038.
Full text
Irick, Nancy. "Post Processing Data Analysis". International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606091.
Testo completoOnce the test is complete, the job of the Data Analyst has begun. Files from the various acquisition systems are collected. It is the job of the analyst to put together these files in a readable format so the success or failure of the test can be attained. This paper will discuss the process of breaking down these files, comparing data from different systems, and methods of presenting the data.
Castro Fernandez, Raul. "Stateful data-parallel processing". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/31596.
Testo completoNyström, Simon, e Joakim Lönnegren. "Processing data sources with big data frameworks". Thesis, KTH, Data- och elektroteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188204.
Full text
Big data is a concept that is growing quickly. As more and more data is generated and collected, there is an increasing need for efficient solutions that can be used to process all this data in an attempt to extract value from it. The purpose of this thesis is to find an efficient way to quickly process a large number of files of relatively small size. More specifically, it is to test two frameworks that can be used for big data processing. The two frameworks tested against each other are Apache NiFi and Apache Storm. A method is described for, firstly, constructing a data flow and, secondly, constructing a method for testing the performance and scalability of the frameworks running the data flow. The results reveal that Apache Storm is faster than Apache NiFi for the kind of test that was performed. When the number of nodes involved in the tests was increased, performance did not always increase. This shows that increasing the number of nodes in a big data processing chain does not always lead to better performance and that other measures are sometimes needed to increase performance.
Mai, Luo. "Towards efficient big data processing in data centres". Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/64817.
Full text
Mueller, Guenter. "DIGITAL DATA RECORDING: NEW WAYS IN DATA PROCESSING". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606505.
Full text
With the introduction of digital data recorders, new ways of data processing have been developed. The three most important improvements are discussed in this paper: A) By processing PCM data from a digital recorder through the SCSI interface, our ground station has developed software to detect the synchronization pattern of the PCM data and then perform software frame decommutation. Many advantages will be found with this method. B) New digital recorders already use the CCSDS Standard as the internal recording format. Once this technique is implemented in our ground station's software and becomes part of our software engineering team's general know-how, the switch to CCSDS telemetry in the future will require no quantum leap in effort. C) Digital recorders offer a very new application: writing data to a digital tape in the recorder's own format allows the replay of data using the recorder's interfaces; i.e. writing vibration data from the host system to tape, using the analog format of the digital recorder, allows the analysis of the data either in analog form, using the analog interface of the recorder, or in digital form.
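To illustrate the software frame decommutation step described in this abstract, here is a minimal sketch (not the ground station's actual code): it scans a raw PCM byte stream for a sync pattern and slices out fixed-length minor frames. The sync word value, frame length and file name are assumptions made for the example.

```python
# Minimal sketch of software frame decommutation: scan a raw PCM byte
# stream for a sync pattern and slice fixed-length frames.
# The sync word, frame length and file name below are illustrative
# assumptions, not values taken from the paper.

SYNC_WORD = bytes.fromhex("FE6B2840")   # assumed 32-bit frame sync pattern
FRAME_LEN = 256                          # assumed minor-frame length in bytes

def decommutate(stream: bytes):
    """Yield (offset, frame_bytes) for every frame found in the stream."""
    pos = 0
    while True:
        pos = stream.find(SYNC_WORD, pos)
        if pos < 0 or pos + FRAME_LEN > len(stream):
            break
        yield pos, stream[pos:pos + FRAME_LEN]
        pos += FRAME_LEN  # assume frames are contiguous once locked

if __name__ == "__main__":
    with open("pcm_dump.bin", "rb") as f:   # hypothetical capture file
        data = f.read()
    for offset, frame in decommutate(data):
        print(f"frame at byte {offset}, {len(frame)} bytes")
```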
Macias, Filiberto. "Real Time Telemetry Data Processing and Data Display". International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611405.
Full text
The Telemetry Data Center (TDC) at White Sands Missile Range (WSMR) is now beginning to modernize its existing telemetry data processing system. Modern networking and interactive graphical displays are now being introduced. This infusion of modern technology will allow the TDC to provide our customers with enhanced data processing and display capability. The intent of this project is to outline this undertaking.
Neukirch, Maik. "Non Stationary Magnetotelluric Data Processing". Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/284932.
Testo completoBrewster, Wayne Allan. "Space tether - radar data processing". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA289654.
Full text
Thesis advisor(s): Richard Christopher Olsen, Ralph Hippenstiel. "September 1994." Bibliography: p. 71. Also available online.
Caon, John. "Multi-channel radiometric data processing /". Title page, abstract and contents only, 1993. http://web4.library.adelaide.edu.au/theses/09SB/09sbc235.pdf.
Full text
Cover title: Advantages of multi-channel radiometric processing. Two maps have overlays. National map series reference Forbes, N.S.W. 1:250,000 Sheet SI/55-7. Includes bibliographical references (leaf 38).
Rupprecht, Lukas. "Network-aware big data processing". Thesis, Imperial College London, 2017. http://hdl.handle.net/10044/1/52455.
Testo completoChiu, Cheng-Jung. "Data processing in nanoscale profilometry". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36677.
Full text
Includes bibliographical references (p. 176-177).
New developments on the nanoscale are taking place rapidly in many fields. Instrumentation used to measure and understand the geometry and property of the small scale structure is therefore essential. One of the most promising devices to head the measurement science into the nanoscale is the scanning probe microscope. A prototype of a nanoscale profilometer based on the scanning probe microscope has been built in the Laboratory for Manufacturing and Productivity at MIT. A sample is placed on a precision flip stage and different sides of the sample are scanned under the SPM to acquire its separate surface topography. To reconstruct the original three dimensional profile, many techniques like digital filtering, edge identification, and image matching are investigated and implemented in the computer programs to post process the data, and with greater emphasis placed on the nanoscale application. The important programming issues are addressed, too. Finally, this system's error sources are discussed and analyzed.
by Cheng-Jung Chiu.
M.S.
Garlick, Dean, Glen Wada and Pete Krull. "SPIRIT III Data Verification Processing". International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/608393.
Full text
This paper will discuss the functions performed by the Spatial Infrared Imaging Telescope (SPIRIT) III Data Processing Center (DPC) at Utah State University (USU). The SPIRIT III sensor is the primary instrument on the Midcourse Space Experiment (MSX) satellite; and as builder of this sensor system, USU is responsible for developing and operating the associated DPC. The SPIRIT III sensor consists of a six-color long-wave infrared (LWIR) radiometer system, an LWIR spectrographic interferometer, contamination sensors, and housekeeping monitoring systems. The MSX spacecraft recorders can capture up to 8+ gigabytes of data a day from this sensor. The DPC is subsequently required to provide a 24-hour turnaround to verify and qualify these data by implementing a complex set of sensor and data verification and quality checks. This paper addresses the computing architecture, distributed processing software, and automated data verification processes implemented to meet these requirements.
Ostroumov, Ivan Victorovich. "Real time sensors data processing". Thesis, Polit. Challenges of science today: XIV International Scientific and Practical Conference of Young Researchers and Students, April 2–3, 2014 : theses. – К., 2014. – 35p, 2014. http://er.nau.edu.ua/handle/NAU/26582.
Full text
Silva, João Paulo Sá da. "Data processing in Zynq APSoC". Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14703.
Full text
Field-Programmable Gate Arrays (FPGAs) were invented by Xilinx in 1985, i.e. less than 30 years ago. The influence of FPGAs on many directions in engineering is growing continuously and rapidly. There are many reasons for such progress and the most important are the inherent reconfigurability of FPGAs and relatively cheap development cost. Recent field-configurable micro-chips combine the capabilities of software and hardware by incorporating multi-core processors and reconfigurable logic enabling the development of highly optimized computational systems for a vast variety of practical applications, including high-performance computing, data, signal and image processing, embedded systems, and many others. In this context, the main goals of the thesis are to study the new micro-chips, namely the Zynq-7000 family and to apply them to two selected case studies: data sort and Hamming weight calculation for long vectors.
Field-Programmable Gate Arrays (FPGAs) were invented by Xilinx in 1985, i.e. less than 30 years ago. The influence of FPGAs is growing continuously and rapidly in many branches of engineering. There are several reasons for this evolution; the most important are their inherent reconfiguration capability and low development costs. The most recent FPGA-based micro-chips combine software and hardware capabilities by incorporating multi-core processors and reconfigurable logic, enabling the development of highly optimized computational systems for a wide variety of practical applications, including high-performance computing, data, signal and image processing, embedded systems, and many others. In this context, the main objective of this work is to study these new micro-chips, namely the Zynq-7000 family, in order to find the best ways to exploit the advantages of this system, using case studies such as data sorting and Hamming weight calculation for long vectors.
Wang, Yue-Jin. "Adaptive data processing satellite positioning". Thesis, Queensland University of Technology, 1994.
Search for full text
Rydman, Oskar. "Data processing of Controlled Source Audio Magnetotelluric (CSAMT) Data". Thesis, Uppsala universitet, Geofysik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-387246.
Full text
The project deals with three methods for improving the signal quality of Controlled Source Audio Magnetotellurics (CSAMT) data; these are implemented and their advantages and disadvantages are discussed. The methods treated are: removal of trends from the time series in the time domain instead of in the frequency domain; implementation of a coherence test to identify "bad" data segments and remove them from further calculations; and implementation of a method to both detect and remove transients (data spikes) from the time series in order to reduce the background noise in the frequency spectrum. Both the removal of trends and the removal of transients show a positive effect on data quality, even though the differences are relatively small (both around 1-10%). Because of limitations in the measurement data, no meaningful coherence test could be designed. Overall, the processes discussed in the report have improved the data quality and can be seen as groundwork for further improvements in the area.
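As a rough illustration of two of the processing steps mentioned in this abstract (time-domain detrending and transient removal), the sketch below uses NumPy/SciPy; the filter kernel size and the spike threshold are assumptions, not parameters taken from the thesis.

```python
# Illustrative sketch of time-domain detrending and transient (spike)
# removal for a 1-D time series; kernel size and threshold are assumed.
import numpy as np
from scipy.signal import detrend, medfilt

def clean_series(x: np.ndarray, kernel: int = 11, n_sigma: float = 5.0) -> np.ndarray:
    """Remove a linear trend, then replace spikes by a running median."""
    y = detrend(x, type="linear")              # time-domain detrend
    baseline = medfilt(y, kernel_size=kernel)  # robust local baseline
    resid = y - baseline
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust std (MAD)
    spikes = np.abs(resid) > n_sigma * sigma
    y[spikes] = baseline[spikes]               # despike: fall back to the baseline
    return y

if __name__ == "__main__":
    t = np.linspace(0.0, 10.0, 2000)
    sig = np.sin(2 * np.pi * 1.5 * t) + 0.02 * t   # signal plus slow drift
    sig[500] += 8.0                                 # injected transient
    print(clean_series(sig)[:5])
```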
Chitondo, Pepukayi David Junior. "Data policies for big health data and personal health data". Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2479.
Full text
Health information policies are constantly becoming a key feature in directing information usage in healthcare. After the passing of the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 and the Affordable Care Act (ACA) passed in 2010, in the United States, there has been an increase in health systems innovations. Coupling this health systems hype is the current buzz concept in Information Technology, "big data". The prospects of big data are full of potential, even more so in the healthcare field where the accuracy of data is life critical. How big health data can be used to achieve improved health is now the goal of the current health informatics practitioner. Even more exciting is the amount of health data being generated by patients via personal handheld devices and other forms of technology that exclude the healthcare practitioner. This patient-generated data is also known as Personal Health Records (PHR). To achieve meaningful use of PHRs and healthcare data in general through big data, a couple of hurdles have to be overcome. First and foremost is the issue of privacy and confidentiality of the patients whose data is concerned. Second is the perceived trustworthiness of PHRs by healthcare practitioners. Other issues to take into context are data rights and ownership, data suppression, IP protection, data anonymisation and re-identification, information flow and regulations, as well as consent biases. This study sought to understand the role of data policies in the process of data utilisation in the healthcare sector, with added interest in PHR utilisation as part of big health data.
Wang, Yi. "Data Management and Data Processing Support on Array-Based Scientific Data". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1436157356.
Full text
Lloyd, Ian J. "Data processing and individual freedom : data protection and beyond". Thesis, University of Strathclyde, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233213.
Full text
Aygar, Alper. "Doppler Radar Data Processing And Classification". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.
Full text
Fernandez, Noemi. "Statistical information processing for data classification". FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.
Full text
Bernecker, Thomas. "Similarity processing in multi-observation data". Diss., lmu, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-154119.
Full text
Cukrowski, Jacek, and Manfred M. Fischer. "Efficient Organization of Collective Data-Processing". WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/4148/1/WSG_DP_6498.pdf.
Full text
Series: Discussion Papers of the Institute for Economic Geography and GIScience
Jones, Jonathan A. "Nuclear magnetic resonance data processing methods". Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.
Full text
Hein, C. S. "Integrated topics in geochemical data processing". Thesis, University of Bristol, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.354700.
Full text
Sun, Youshun 1970. "Processing of randomly obtained seismic data". Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/59086.
Full text
Includes bibliographical references (leaves 62-64).
by Youshun Sun.
S.M. in Geosystems
Bisot, Clémence. "Spectral Data Processing for Steel Industry". Thesis, KTH, Optimeringslära och systemteori, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175880.
Full text
For the steel industry, knowing and understanding the surface characteristics of a steel strip at each step of the production process is a key factor in controlling the quality of the final product. The increased quality requirements of recent years have made this task ever more important. The surfaces of new steel grades with complex chemical compositions have properties that are particularly difficult to handle; for these grades, surface inspection is both critical and difficult. One of the techniques used to inspect surface quality is spectral analysis. ArcelorMittal, the world's leading integrated steel and mining company, has in recent years led several projects to investigate the possibility of using measurement instruments to measure the light spectrum of its products at various stages of production. In this thesis we have developed mathematical models and statistical tools for handling signals measured with spectrometers within the framework of different research projects at ArcelorMittal.
Faber, Marc. "On-Board Data Processing and Filtering". International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596433.
Full text
One of the requirements resulting from mounting pressure on flight test schedules is the reduction of time needed for data analysis, in pursuit of shorter test cycles. This requirement has ramifications such as the demand for recording and processing not just raw measurement data but also data converted to engineering units in real time, as well as for an optimized use of the bandwidth available for telemetry downlink and ultimately for shortening the duration of procedures intended to disseminate pre-selected recorded data among different analysis groups on the ground. A promising way to successfully address these needs consists in implementing more CPU intelligence and processing power directly on the on-board flight test equipment. This provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to more easy-to-handle engineering units. This leads to a faster extraction and analysis of the actual data contents of the on-board signals and busses. Another central goal is the efficient use of the available bandwidth for telemetry. Real-time data reduction via intelligent filtering is one approach to achieving this challenging objective. The data filtering process should be performed simultaneously on an all-data-capture recording, and the user should be able to easily select the interesting data without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should generate a seamless and transparent data transmission, making quick data analysis viable. On-board data processing and filtering has the potential to become the main future path for handling the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
Brown, Barbie, Parminder Ghuman, Johnny Medina and Randy Wilke. "A DESKTOP SATELLITE DATA PROCESSING SYSTEM". International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607552.
Full text
The international space community, including the National Aeronautics and Space Administration (NASA), the European Space Agency (ESA), the Japanese National Space Agency (NASDA) and others, is committed to using the Consultative Committee for Space Data Systems (CCSDS) recommendations for low earth orbiting satellites. With the advent of the CCSDS standards and the availability of direct broadcast data from a number of current and future spacecraft, a large number of users could have access to earth science data. However, to allow for the largest possible user base, the cost of processing this data must be as low as possible. By utilizing Very Large Scale Integration (VLSI) Application-Specific Integrated Circuits (ASIC), pipelined data processing, and advanced software development technology and tools, highly integrated CCSDS data processing can be attained in a single desktop system. This paper describes a prototype desktop system based on the Peripheral Component Interconnect (PCI) bus that performs CCSDS standard frame synchronization, bit transition density decoding, Cyclical Redundancy Check (CRC) error checking, Reed-Solomon decoding, data unit sorting, packet extraction, annotation and other CCSDS service processing. Also discussed is software technology used to increase the flexibility and usability of the desktop system. The reproduction cost for the system described is less than 1/8th the current cost of commercially available CCSDS data processing systems.
Turver, Kim D. "Batch Processing of Flight Test Data". International Foundation for Telemetering, 1993. http://hdl.handle.net/10150/611885.
Full text
Boeing's Test Data Retrieval System not only acts as an interface between the Airborne Data Acquisition System and a mainframe computer but also does batch mode processing of data at faster than real time. Analysis engineers request time intervals and measurements of interest. Time intervals and measurements requested are acquired from the flight tape, converted to first order engineering units, and output to 3480 data cartridge tape for post processing. This allows all test data to be stored and only the data of interest to be processed at any given time.
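The conversion to first-order engineering units mentioned above is, in essence, a per-measurement linear scale-and-offset applied to the samples that fall in the requested time interval. A minimal sketch of that idea follows; the measurement name and calibration coefficients are invented for the example and do not reflect Boeing's actual system.

```python
# Minimal sketch of first-order engineering-unit conversion for a requested
# time interval; calibration values and measurement names are made up.
from dataclasses import dataclass

@dataclass
class Calibration:
    scale: float    # engineering units per raw count
    offset: float   # engineering units at zero counts

CAL = {"ENG_OIL_TEMP": Calibration(scale=0.25, offset=-40.0)}   # hypothetical measurement

def to_engineering_units(samples, name, t_start, t_end):
    """samples: iterable of (time_s, raw_counts); returns (time_s, value_EU)."""
    cal = CAL[name]
    return [(t, raw * cal.scale + cal.offset)
            for t, raw in samples
            if t_start <= t <= t_end]

if __name__ == "__main__":
    raw = [(0.0, 200), (0.5, 210), (1.0, 220), (1.5, 240)]
    print(to_engineering_units(raw, "ENG_OIL_TEMP", 0.5, 1.5))
```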
White, Allan P., and Richard K. Dean. "Real-Time Test Data Processing System". International Foundation for Telemetering, 1989. http://hdl.handle.net/10150/614650.
Full text
The U.S. Army Aviation Development Test Activity at Fort Rucker, Alabama needed a real-time test data collection and processing capability for helicopter flight testing. The system had to be capable of collecting and processing both FM and PCM data streams from analog tape and/or a telemetry receiver. The hardware and software were to be off the shelf whenever possible. The integration was to result in a stand-alone telemetry collection and processing system.
Eccles, Lee H., and John J. Muckerheide. "FLIGHT TEST AIRBORNE DATA PROCESSING SYSTEM". International Foundation for Telemetering, 1986. http://hdl.handle.net/10150/615393.
Full text
The Experimental Flight Test organization of the Boeing Commercial Airplane Company has an onboard data reduction system known as the Airborne Data Analysis/Monitor System or ADAMS. ADAMS has evolved over the last 11 years from a system built around a single minicomputer to a system using two minicomputers to a distributed processing system based on microprocessors. The system is built around two buses. One bus is used for passing setup and control information between elements of the system. This is burst type data. The second bus is used for passing periodic data between the units. This data originates in the sensors installed by Flight Test or in the Black Boxes on the airplane. These buses interconnect a number of different processors. The Application Processor is the primary data analysis processor in the system. It runs the application programs and drives the display devices. A number of Application Processors may be installed. The File Processor handles the mass storage devices and such common peripheral devices as the printer. The Acquisition Interface Assembly is the entry point for data into ADAMS. It accepts serial PCM data from either the data acquisition system or the tape recorder. This data is then concatenated, converted to engineering units, and passed to the rest of the system for further processing and display. Over 70 programs have been written to support activities on the airplane. Programs exist to aid the instrumentation engineer in preparing the system for flight and to minimize the amount of paper which must be dealt with. Additional programs are used by the analysis engineer to evaluate the aircraft performance in real time. These programs cover the tests from takeoff through cruise testing and aircraft maneuvers to landing. They are used to analyze everything from brake performance to fuel consumption. Using these programs has reduced the amount of data reduction done on the ground and in many cases eliminated it completely.
Tamura, Yoshiaki. "Study on Precise Tidal Data Processing". 京都大学 (Kyoto University), 2000. http://hdl.handle.net/2433/157208.
Full text
Kyoto University (京都大学)
0048
New degree system, doctorate by dissertation (新制・論文博士)
Doctor of Science (博士(理学))
乙第10362号
論理博第1378号
新制||理||1180(附属図書館)
UT51-2000-F428
(Principal examiner) Professor 竹本 修三; Associate Professor 福田 洋一; Professor 古澤 保
Qualified under Article 4, Paragraph 2 of the Degree Regulations (学位規則第4条第2項該当)
Nasr, Kamil. "Comparison of Popular Data Processing Systems". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293494.
Full text
Data processing is generally defined as the collection and transformation of data in order to extract meaningful information. Data processing involves a multitude of processes such as validation, sorting, summarization and aggregation, to name a few. Many analytics engines are available today for large-scale data processing, namely Apache Spark, Apache Flink and Apache Beam. Each of these engines has its own advantages and disadvantages. In this thesis report, we used all three of these engines to process data from the carbon monoxide daily summary dataset in order to determine emission levels per area and unit of time. We then compared the performance of these three engines using different metrics. The results showed that Apache Beam, while offering greater convenience when writing programs, was slower than Apache Flink and Apache Spark. The Spark Runner in Beam was the fastest runner, and Apache Spark was the fastest data processing framework overall.
Marselli, Catherine. "Data processing of a navigation microsystem". Université de Franche-Comté. UFR des sciences et techniques, 1998. http://www.theses.fr/1998BESA2078.
Full text
This research is part of a Swiss-French academic project whose goal was the determination of some limits in the design and use of microtechnologies and microsystems, using as a common-thread example a navigation system based on microaccelerometers and angular rate microsensors (gyros). The entire project was divided into four parts, including design at the component level as well as at the system level. This PhD report describes the data processing of the navigation microsystem realised at the Electronics and Signal Processing Laboratory of the Institute of Microtechnology, University of Neuchâtel. Current low-cost microsensors are less expensive but less accurate than mechanical or optical sensors. In a navigation system, the accelerometer and gyro outputs are integrated, leading to the accumulation of errors. Thus, the measured trajectory quickly becomes wrong and a corrective system has to be designed. Hence, the goal of the data processing system is to compute the navigation parameters (position, velocity, orientation) while preventing the trajectory from diverging, following two approaches: reducing the sensor errors, and updating the trajectory regularly using an aiding navigation system.
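To make the error-accumulation problem mentioned in this abstract concrete: integrating a biased accelerometer twice turns a small constant sensor error into a position error that grows quadratically with time, which is why an aiding system is needed. A minimal one-dimensional sketch with illustrative numbers only:

```python
# 1-D illustration of why inertial dead reckoning drifts: a small constant
# accelerometer bias, integrated twice, produces a quadratically growing
# position error. All numbers are illustrative assumptions.
import numpy as np

dt = 0.01                        # 100 Hz sampling (assumed)
t = np.arange(0.0, 60.0, dt)     # one minute of data
true_acc = np.zeros_like(t)      # vehicle actually at rest
bias = 0.01                      # assumed 0.01 m/s^2 sensor bias
meas_acc = true_acc + bias

vel = np.cumsum(meas_acc) * dt   # first integration: velocity
pos = np.cumsum(vel) * dt        # second integration: position

print(f"position error after {t[-1]:.0f} s: {pos[-1]:.1f} m")
# roughly 0.5 * bias * t^2 = 0.5 * 0.01 * 60^2 = 18 m, hence the need for aiding
```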
Ives, Zachary G. "Efficient query processing for data integration /". Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/6864.
Full text
Lian, Xiang. "Efficient query processing over uncertain data /". View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?CSED%202009%20LIAN.
Full text
Neeli, Sandeep Wilamowski Bogdan M. "Internet data acquisition, search and processing". Auburn, Ala., 2009. http://hdl.handle.net/10415/1969.
Full text
Lei, Chuan. "Recurring Query Processing on Big Data". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/550.
Full text
Liu, Kun. "Multi-View Oriented 3D Data Processing". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0273/document.
Full text
Point cloud refinement and surface reconstruction are two fundamental problems in geometry processing. Most of the existing methods have been targeted at range sensor data and turned out to be ill-adapted to multi-view data. In this thesis, two novel methods are proposed, one for each of the two problems, with special attention to multi-view data. The first method smooths point clouds originating from multi-view reconstruction without impairing the data. The problem is formulated as a nonlinear constrained optimization and addressed as a series of unconstrained optimization problems by means of a barrier method. The second method triangulates point clouds into meshes using an advancing-front strategy directed by a sphere-packing criterion. The method is algorithmically simple and can produce high-quality meshes efficiently. Experiments on synthetic and real-world data have been conducted as well, which demonstrate the robustness and the efficiency of the methods. The developed methods are suitable for applications which require accurate and consistent position information, such as photogrammetry and tracking in computer vision.
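The barrier-method formulation described in this abstract (a constrained problem solved as a sequence of unconstrained ones) can be sketched generically as follows; the toy objective and constraints are placeholders and do not reproduce the thesis' actual smoothing energy.

```python
# Generic log-barrier sketch: a constrained problem min f(x) s.t. g(x) <= 0
# is solved as a sequence of unconstrained problems
#   min f(x) - mu * sum(log(-g(x))),  with mu shrinking toward 0.
# The toy objective and constraints below are stand-ins, not the thesis'
# actual smoothing energy.
import numpy as np
from scipy.optimize import minimize

def f(x):                       # toy objective: favors large, smooth x
    return np.sum((x[1:] - x[:-1]) ** 2) - 2.0 * np.sum(x)

def g(x):                       # toy constraints: |x_i| <= 1 for every i
    return np.abs(x) - 1.0

def barrier_solve(x0, mu=1.0, shrink=0.2, rounds=6):
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        def obj(z, m=mu):
            slack = -g(z)
            if np.any(slack <= 0.0):
                return 1e9                      # reject infeasible trial points
            return f(z) - m * np.sum(np.log(slack))
        x = minimize(obj, x, method="Nelder-Mead").x   # unconstrained subproblem
        mu *= shrink                            # tighten the barrier each round
    return x

if __name__ == "__main__":
    print(barrier_solve(np.zeros(8)))           # entries approach the bound +1
```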
Liu, Kun. "Multi-View Oriented 3D Data Processing". Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0273.
Full text
Point cloud refinement and surface reconstruction are two fundamental problems in geometry processing. Most of the existing methods have been targeted at range sensor data and turned out to be ill-adapted to multi-view data. In this thesis, two novel methods are proposed, one for each of the two problems, with special attention to multi-view data. The first method smooths point clouds originating from multi-view reconstruction without impairing the data. The problem is formulated as a nonlinear constrained optimization and addressed as a series of unconstrained optimization problems by means of a barrier method. The second method triangulates point clouds into meshes using an advancing-front strategy directed by a sphere-packing criterion. The method is algorithmically simple and can produce high-quality meshes efficiently. Experiments on synthetic and real-world data have been conducted as well, which demonstrate the robustness and the efficiency of the methods. The developed methods are suitable for applications which require accurate and consistent position information, such as photogrammetry and tracking in computer vision.
Dao, Quang Minh. "High performance processing of metagenomics data". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS203.
Full text
The assessment and characterization of the gut microbiome has become a focus of research in the area of human autoimmune diseases. Many diseases such as obesity, inflammatory bowel disease (IBD), lean or obese twins, colorectal cancers and so on (Qin et al. 2010; Turnbaugh et al. 2009) have already been found to be associated with changes in the human microbiome. To investigate these relationships, quantitative metagenomics (QM) studies based on sequencing data can be performed. Understanding the role of the microbiome in human health and how it can be modulated is becoming increasingly relevant for precision medicine and for the medical management of chronic diseases. Results from such QM studies, which report the organisms present in the samples and profile their abundances, are used for further analyses. The terms microbiome and microbiota are used interchangeably to describe the community of microorganisms that live in a given environment. The development of high-throughput DNA sequencing technologies has boosted microbiome research through the study of microbial genomes, allowing a more precise quantification of microbial and functional abundance. However, microbiome data analysis is challenging because it involves high-dimensional, structured, multivariate, sparse data and because of the compositional structure of microbiome data. Data preprocessing is typically implemented as a pipeline (workflow) of third-party tools that each process input files and produce output files. The pipelines are often deep, with ten or more tools, which can be very diverse, written in different languages such as R, Python, Perl, etc. and integrated into different frameworks (Leipzig 2017) such as Galaxy, Apache Taverna, Toil, etc. The challenge with existing approaches is that they are not always efficient with very large datasets, in terms of scalability of the individual tools in a metagenomics pipeline, and their execution speed has also not met the expectations of bioinformaticians. To date, more and more data are captured or generated in many different research areas such as Physics, Climatology, Sociology, Remote Sensing and Management, as well as bioinformatics. Indeed, Big Data Analytics (BDA) describes the unprecedented growth of data generated and collected from all kinds of data sources as mentioned above. This growth can be in the volume of data, in the speed of data moving in and out, or in the speed of analyzing data, which depends on high-performance computing (HPC) technologies. In the past few decades since the invention of the computer, HPC has contributed significantly to our quality of life - driving scientific innovation, enhancing engineering design and consumer goods manufacturing, as well as strengthening national and international security. This has been recognised and emphasised by both government and industry, with major ongoing investments in areas encompassing weather forecasting, scientific research and development as well as drug design and healthcare outcomes. In many ways, those two worlds (HPC and big data) are slowly, but surely, converging. They are the keys to overcoming the limitations of bioinformatics analysis in general and quantitative metagenomics analysis in particular. Within the scope of this thesis, we contributed a novel bioinformatics framework and pipeline called QMSpy, which helps bioinformaticians overcome limitations related to the HPC and big data domains in the context of quantitative metagenomics.
QMSpy tackles two challenges introduced by large-scale NGS data: (i) sequencing data alignment - a computation-intensive task - and (ii) quantifying metagenomics objects - a memory-intensive task. By leveraging a powerful distributed computing engine (Apache Spark), in combination with the workflow management of big data processing (Hortonworks Data Platform), QMSpy allows us not only to bypass [...]
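As a rough illustration of the kind of distributed aggregation such a Spark-based pipeline performs (this is not QMSpy's actual code; the file layout and column names are invented), a PySpark job that counts mapped reads per gene per sample could look like this:

```python
# Illustrative PySpark sketch of a distributed abundance-quantification step:
# sum mapped-read counts per gene across samples. File layout and column
# names are invented; this is not QMSpy's actual code.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gene-abundance-sketch").getOrCreate()

# Hypothetical alignment summary: one row per (sample, read, gene) hit.
hits = spark.read.csv("hdfs:///data/alignments.csv", header=True, inferSchema=True)

abundance = (hits
             .groupBy("sample_id", "gene_id")
             .agg(F.count("*").alias("read_count")))

abundance.write.mode("overwrite").parquet("hdfs:///data/gene_abundance.parquet")
spark.stop()
```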
Bilalli, Besim. "Learning the impact of data pre-processing in data analysis". Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/587221.
Full text
There is a clear correlation between data availability and data analysis; hence, as the availability of data increases (inevitably, according to Moore's law), the need to analyze data increases as well. This definitely involves many more people, not necessarily experts, in performing analytical tasks. However, the distinct, challenging and time-consuming steps of the data analysis process overwhelm non-experts, who require help (for example, automation or recommendations). One of the most important and most time-consuming steps is data pre-processing. Pre-processing data is challenging, and at the same time it has a great impact on the analysis. In this respect, previous work has focused on providing user assistance in data pre-processing, but without taking into account the impact on the result of the analysis. The goal has therefore generally been to make it possible to analyze the data through pre-processing, not to improve the result. In contrast, this thesis aims to develop methods that provide assistance in data pre-processing with the sole objective of improving the result of the analysis (for example, increasing the predictive accuracy of a classifier). To this end, we propose a method and define an architecture that employs ideas from meta-learning to find the relationship between transformations (pre-processing operators) and data mining algorithms (classification algorithms). This eventually makes it possible to rank and recommend transformations according to their potential impact on the analysis. To achieve this goal, we first study the currently available methods and systems that provide user assistance, both for individual steps of data analysis and for the whole process. We then classify the metadata that the different systems use, focusing specifically on those that use metadata for meta-learning. We apply a method to study the predictive power of the metadata, and we extract and select the most relevant metadata. Finally, we focus on user assistance in the data pre-processing step. We devise an architecture and build a tool, PRESISTANT, which, given a classification algorithm, is able to recommend pre-processing operators that, once applied, positively impact the final result (for example, they increase predictive accuracy). Our results show that providing user assistance in data pre-processing with the objective of improving the result of the analysis is feasible and very useful for non-experts. Moreover, this thesis is a step in the direction of demystifying the idea that the non-trivial task of pre-processing data is only within the reach of experts.
Derksen, Timothy J. (Timothy John). "Processing of outliers and missing data in multivariate manufacturing data". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/38800.
Full text
Includes bibliographical references (leaf 64).
by Timothy J. Derksen.
M.Eng.
Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing". University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.
Full text
Chintala, Venkatram Reddy. "Digital image data representation". Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183128563.
Full text