Theses on the topic "Scant data"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Scant data".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Corbin, Max. "Surface fitting head scan data sets". Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175886726.
Full text
Fontanarava, Julien. "Signal Extraction from Scans of Electrocardiograms". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248430.
Full text
In this thesis, we propose a deep learning approach for fully automated digitization of ECG graphs. We digitize the ECG graphs in three steps: layout detection, column-wise signal segmentation, and finally signal retrieval, each performed by a convolutional network. These networks are inspired by networks used for object detection and pixel-wise segmentation. We train each network on synthetic images that reflect the challenges of the real-world data; the use of these realistic synthetic images aims to make our models robust to the variations of real-world ECG graphs. Compared with computer-vision benchmarks, our networks show promising results. Our signal retrieval network significantly outperforms our implementation of the benchmark, and our column segmentation model shows robustness to overlapping signals, an issue of signal segmentation that computer-vision methods cannot handle. Overall, this fully automated pipeline provides an improvement in time and precision for physicians who wish to digitize their ECG databases.
Agirnas, Emre. "Multi-scan Data Association Algorithm For Multitarget Tracking". Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605646/index.pdf.
Full text
…s performance is better than that of the JPDA method. Moreover, a survey of the target tracking literature is presented, including the basics of multitarget tracking systems and existing data association methods.
Le, Bas Timothy P. "Processing techniques for TOBI side-scan sonar data". Thesis, University of Reading, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360112.
Full text
Khoodoruth, B. Dhalila S. Y. "Detection, classification and visualization of CT Scan data". Pau, 2009. http://www.theses.fr/2009PAUU3001.
Full text
The dissertation covers the detection, classification, and visualization of brain trauma lesions from computed tomography. Various geometrical methods have been studied, such as hybrid methods, feature extraction, level sets, watershed, and region growing, which are analyzed with respect to their methodological aspects and the evaluation of their constraints. The pixel intensities, gradient magnitude, affinity map, and catchment basins of these methods are validated over the various ranges of constraint evaluations that we have established. We have also contributed a deduction of the most appropriate detection method for each specific feature in the trauma lesions. We contribute a new methodology for the feature-based contour extraction of the lesion that uses bilateral filtering, anisotropic diffusion properties, watershed, and mathematical morphology operators based mainly on the gradient function. The gradients of the gray-level values of watershed pixels are transformed after flooding and substituted by the gradient magnitude of the diffusion anisotropy. The classification of these lesions is evaluated by pattern recognition: we propose to classify these traumatic brain injuries from CT scans using the k-means and Markov random field algorithms, which have been implemented and tested for each feature of the various lesions. Entropies of these CT scans have been calculated to obtain an optimized statistical evaluation for each feature lesion, such as brain atrophy, subdural hygroma, subdural haematoma, extracranial haematoma, and non-haemorrhagic contusion. These methods are compared to assess their performance and statistical accuracy with respect to the feature-based lesion sets, which are analyzed and evaluated statistically, from the intensities to the pixel values and the estimated volumes.
The numerical interpretation of each specific feature enables a proper assessment of the evolutionary stages of the feature-based lesions. Our last contributions are based mainly on the clinical aspects of these evaluated interpretations of the feature-based lesion sets. Two future directions of the research work follow: first, a multilayer neural network with sparse distributions and a switching linear dynamical system for simultaneous feature detection and classification; second, an implementation of a brain atlas from trauma cases to typical cases through pixel-based structuring, for heterogeneous regrouping of the anatomy and real-time visualization.
Tomé, Diego Gomes. "A near-data select scan operator for database systems". Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/53293.
Full text
Co-advisor: Marco Antonio Zanata Alves
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 21/12/2017
Includes references: p. 61-64
Abstract: A large burden of processing read-mostly databases consists of moving data around the memory hierarchy rather than processing data in the processor. The data movement is penalized by the performance gap between the processor and the memory, the well-known problem called the memory wall. The emergence of smart memories, such as the new Hybrid Memory Cube (HMC), allows mitigating the memory wall problem by executing instructions in logic chips integrated into a stack of DRAMs. These memories can enable not only in-memory databases but also in-memory computation of database operations. In this dissertation, we focus on near-data query processing to reduce data movement through the memory and cache hierarchy. We focus on the select scan database operator, because the scanning of columns moves large amounts of data prior to other operations like joins (i.e., push-down optimization). Initially, we evaluate the execution of the select scan using the HMC as an ordinary DRAM. Then, we introduce extensions to the HMC Instruction Set Architecture (ISA) to execute our near-data select scan operator inside the HMC, called HMC-Scan. In particular, we extend the HMC ISA with HMC-Scan to internally resolve instruction dependencies. To support branch-less evaluation of the select scan and to transform control-flow dependencies into data-flow dependencies (i.e., predicated execution), we propose another HMC ISA extension called HIPE-Scan. HIPE-Scan leads to less interaction between the processor and the HMC during the execution of query filters that depend on in-memory data. We implemented the near-data select scan in row-, column-, and vector-wise query engines for x86 and for the two HMC extensions, HMC-Scan and HIPE-Scan, achieving performance improvements of up to 3.7× for HMC-Scan and 5.6× for HIPE-Scan when executing Query 6 from the 1 GB TPC-H database on the column-wise engine. Keywords: In-Memory DBMS, Hybrid Memory Cube, Processing-in-Memory.
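The predicated execution idea behind HIPE-Scan can be illustrated with a small sketch. This is our own NumPy illustration of the general technique, not the HMC-Scan/HIPE-Scan ISA: the filter predicate is evaluated as a bit mask, so no per-row control-flow decision is needed.

```python
import numpy as np

def select_scan_branchy(col, lo, hi):
    """Row-at-a-time select scan: one branch per row (control dependency)."""
    out = []
    for v in col:
        if lo <= v < hi:          # mispredicted branches stall the pipeline
            out.append(v)
    return out

def select_scan_predicated(col, lo, hi):
    """Branch-free select scan: the predicate is computed as a 0/1 mask,
    turning the control dependency into a data dependency."""
    mask = (col >= lo) & (col < hi)   # predicated evaluation
    return col[mask]                  # gather the qualifying values

col = np.array([5, 17, 3, 42, 23, 8])
hits = select_scan_predicated(col, 5, 25)   # values in [5, 25)
```

Both functions return the same rows; the predicated form simply trades branches for mask arithmetic, which is the property that lets the filter run close to the data without CPU decisions.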
Seiler, Alexander. "Improved methods in reverse engineering using CMM scan data". Thesis, Nottingham Trent University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239711.
Full text
Xiao, Yijun. "Segmentation and modelling of whole human body scan data". Thesis, University of Glasgow, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426616.
Full text
Zacharia, Nadime. "Compression and decompression of test data for scan-based designs". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44048.pdf.
Full text
Fang, Haian. "Optimal estimation of head scan data with generalized cross validation". Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179344603.
Full text
Zacharia, Nadime. "Compression and decompression of test data for scan based designs". Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20218.
Full text
The design of the decompression unit is treated in depth and a design is proposed that minimizes the amount of extra hardware required. In fact, the design of the decompression unit uses flip-flops already on the chip: it is implemented without inserting any additional flip-flops.
The proposed scheme is applied in two different contexts: (1) in (external) deterministic-stored testing, to reduce the memory requirements imposed on the test equipment; and (2) in built-in self test, to design a test pattern generator capable of generating deterministic patterns with modest area and memory requirements.
Experimental results are provided for the largest ISCAS'89 benchmarks. All of these results show that the proposed technique greatly reduces the amount of test data while requiring little area overhead. Compression factors of more than 20 are reported for some circuits.
El-Shehaly, Mai Hassan. "A Visualization Framework for SiLK Data exploration and Scan Detection". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34606.
Full text
Master of Science
Adeniyi, Olanrewaju Ari. "FUSION OF ULTRASONIC C-SCAN DATA WITH FINITE ELEMENT ANALYSIS". OpenSIUC, 2012. https://opensiuc.lib.siu.edu/theses/909.
Full text
Ling, Li. "Local Feature Correspondence on Side-Scan Sonar Seafloor Images". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291803.
Full text
In underwater environments, perception and navigation systems often rely on sonar technology. Side-scan sonar (SSS) provides high-resolution, photorealistic images of the seafloor at a relatively low cost. These images can be used for place recognition and for navigation of autonomous underwater vehicles (AUVs). Local feature matching consists of the detection, description, and matching of keypoints in overlapping images, and is an important building block for AUV navigation. Deep-learning-based methods have been at the forefront of feature matching for camera images, whereas feature matching for SSS images is still dominated by traditional methods such as SIFT and RootSIFT. This thesis uses SSS images of seafloor areas where bottom trawling has occurred for feature matching. D2-Net is a detect-and-describe VGG16-based network architecture designed and tested for feature matching on camera images; in this thesis, the method is adapted to SSS images. The loss function uses triplet margin ranking so that the network learns to detect distinctive keypoints and to produce similar descriptors for matching pixels. The method was evaluated on non-trivial SSS image pairs and outperformed RootSIFT.
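The triplet margin ranking loss mentioned in the abstract above can be sketched in a few lines. This is our own NumPy illustration with made-up descriptors, not the thesis's training code:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin ranking loss: pull descriptors of matching pixels
    together and push non-matching ones at least `margin` further apart."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # matching distance
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # non-matching distance
    return float(np.maximum(0.0, margin + d_pos - d_neg).mean())

a = np.array([[1.0, 0.0]])        # anchor descriptor
p = np.array([[0.9, 0.0]])        # descriptor of the matching pixel
easy_n = np.array([[0.0, 1.0]])   # clearly different descriptor
hard_n = np.array([[0.95, 0.0]])  # near-duplicate "hard" negative

loss_easy = triplet_margin_loss(a, p, easy_n)  # 0: margin already satisfied
loss_hard = triplet_margin_loss(a, p, hard_n)  # > 0: gradient pushes apart
```

The hard negative is what drives learning: only triplets that violate the margin contribute a non-zero loss.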
Teran, Espinoza Aldo. "Acoustic-Inertial Forward-Scan Sonar Simultaneous Localization and Mapping". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287368.
Full text
The increasing availability and versatility of forward-scan (FS) imaging sonars (also known as forward-looking sonars, or FLS) have attracted the interest of the robotics community in tackling the difficult problem of robotic perception in low-visibility underwater scenarios. Processing the incoming data from an imaging sonar is challenging, since it captures a 2D acoustic image of the 3D scene instead of providing simple range measurements as other sonar technologies do (e.g., multibeam sonar). Complex post-processing and sensor-fusion methods are therefore required to extract useful information from the sonar image. This thesis describes the development, validation, and implementation of an acoustic-inertial localization and mapping algorithm that processes sonar images captured by an FS sonar together with inertial measurements to solve simultaneous localization and mapping (SLAM) with an underwater sensor. A sonar measurement constraint is built by detecting and matching features from two consecutive sonar images in a degeneracy-aware two-view bundle adjustment. The sonar measurements are fused with preintegrated inertial measurements in a factor-graph representation. The state-of-the-art incremental smoothing and mapping (iSAM2) solver is used to enable real-time localization. A Python simulator was developed to evaluate the performance of the two-view bundle adjustment algorithm. Results are presented and discussed both from computer simulations in Gazebo using the Robot Operating System (ROS) and from real-world tests in a controlled environment with a custom-built sensor suite. Sonar image generation, sensor drift, and computational complexity proved difficult to handle, reducing the performance and robustness of the current implementation of the SLAM solution.
However, the current work will serve as a stepping stone for future work and collaboration on underwater localization and mapping using FS sonars.
Kirkvik, Ann-Silje. "Completing a model based on laser scan generated point cloud data". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10489.
Full text
This paper is a master's thesis for the Department of Computer and Information Science at the Norwegian University of Science and Technology, spring 2008. It is a study of hole filling in three-dimensional surface models obtained from scanned real-world objects. The goal of this project is to find solutions that are capable of filling an incomplete model in a plausible and visually pleasing manner. To reach this goal, both theoretical studies and practical testing were performed. This paper presents a theoretical foundation, needed to gain a greater understanding of the problem, and the results from the testing phase. This knowledge and experience is then used to present a possible solution to the hole-filling problem. The conclusion of this project is that automatic procedures that are thoroughly documented in the literature fail to perform in a satisfactory manner when the data set becomes too complicated. The Nidaros Cathedral is such a difficult data set, and will require a customized and user-guided solution to meet the goals of this project.
Desai, Grishma Mahesh. "Automated extraction of abdominal aortic aneurysm geometries from CT scan data". Thesis, University of Hull, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441672.
Full text
Xie, Yiping. "Machine Learning for Inferring Depth from Side-scan Sonar Images". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264835.
Full text
Underwater navigation with autonomous underwater vehicles (AUVs) is important for marine scientific research and depends heavily on the type of sonar used. AUVs are typically equipped with both side-scan sonar and multibeam sonar, since each has its advantages and limitations. Side-scan sonar has a wider range than multibeam sonar and is much cheaper, but cannot provide accurate depth measurements. This thesis investigates whether machine learning methods could be used to translate side-scan data into multibeam data with high accuracy, so that underwater navigation could be performed by AUVs equipped only with side-scan sonar. The approach is based on several machine learning methods, including generative and discriminative models, and examines whether such models can infer seafloor depth from side-scan data alone. The models tested and compared include regression and generative adversarial networks, as well as different CNN-based architectures such as U-Net and ResNet. As an experimental study, this project has already demonstrated the ability and great potential of machine-learning-based methods that extract latent representations from side-scan sonar and can estimate depth with reasonable accuracy. Further improvements could be made to enhance performance and stability, which could potentially be verified on AUV platforms in real time.
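The depth-inference task described above, learning a mapping from side-scan intensity features to multibeam-style depth, can be sketched with a toy stand-in. This is our own synthetic NumPy illustration: a linear least-squares model replaces the thesis's CNN regressors, and the data are simulated, not real sonar:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the learning task: predict multibeam-style depth
# from per-pixel side-scan intensity features.
n_pixels, n_features = 200, 8
X = rng.normal(size=(n_pixels, n_features))            # side-scan features
w_true = rng.normal(size=n_features)                   # unknown mapping
depth = X @ w_true + 0.01 * rng.normal(size=n_pixels)  # noisy "multibeam" depth

w_hat, *_ = np.linalg.lstsq(X, depth, rcond=None)      # fit the regressor
rmse = float(np.sqrt(np.mean((X @ w_hat - depth) ** 2)))  # near the noise floor
```

The CNN architectures in the thesis replace the linear map with learned convolutional features, but the supervised setup (side-scan in, depth out, minimize reconstruction error) is the same.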
Hornung, Maximilian. "Deep Learning-Based Identification of Ischemic Regions in Native Head CT Scans". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272129.
Full text
Stroke is one of the most important causes of death and disability worldwide. Rapid diagnosis is of crucial importance for stroke treatment. In clinical routine, a non-contrast computed tomography scan is performed immediately to determine whether a stroke is ischemic or hemorrhagic, and therapy is planned based on the result. In the case of an ischemic stroke, early infarct signs may appear due to increased water uptake. These signs can be subtle, especially when observed only shortly after symptom onset, but have the potential to provide a crucial first assessment of the location and extent of the infarct. In this project, we train a deep neural network to predict the infarct core from CT images in an image-to-image fashion. To facilitate the exploitation of anatomical correspondences, learning is carried out in the standardized coordinate system of a brain atlas into which all images are deformably registered. In addition to binary infarct-core masks, perfusion maps such as cerebral blood volume and flow are used as additional training targets, providing more physiological information on which the neural network can be trained. The method is evaluated using cross-validation on the training dataset of 141 patients. For validation, we measure the overlap with the observed masks and the quality of the localization, with both manual and automatic assessment of the affected ASPECTS regions. The additional targets are shown to improve the results significantly, achieving an area under the curve of 0.835 against automatic classification of ASPECTS regions, and yielding a distance of 0 mm between the prediction maximum and the stroke infarct core in the majority of severe stroke cases with an infarct core volume greater than 70 ml.
Jones, Lewys. "Applications of focal-series data in scanning-transmission electron microscopy". Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:a6f2a4d5-e77a-47a5-b2d7-aab4b7069ce2.
Texto completoLan, Liang. "Data Mining Algorithms for Classification of Complex Biomedical Data". Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214773.
Full text
Ph.D.
In my dissertation, I will present my research, which contributes to solving the following three open problems in biomedical informatics: (1) multi-task approaches for microarray classification; (2) multi-label classification for gene and protein function prediction from multi-source biological data; (3) spatial scan for movement data. In microarray classification, samples belong to several predefined categories (e.g., cancer vs. control tissues), and the goal is to build a predictor that classifies a new tissue sample based on its microarray measurements. When faced with small-sample, high-dimensional microarray data, most machine learning algorithms would produce an overly complicated model that performs well on training data but poorly on new data. To reduce the risk of over-fitting, feature selection becomes an essential technique in microarray classification. However, standard feature selection algorithms are bound to underperform when the size of the microarray data is particularly small. The best remedy is to borrow strength from external microarray datasets. In this dissertation, I will present two new multi-task feature filter methods which can improve classification performance by utilizing external microarray data. The first method aggregates the feature selection results from multiple microarray classification tasks; the resulting multi-task feature selection can be shown to improve the quality of the selected features and lead to higher classification accuracy. The second method jointly selects a small gene set with maximal discriminative power and minimal redundancy across multiple classification tasks by solving an objective function with integer constraints. In the protein function prediction problem, gene functions are predicted from a predefined set of possible functions (e.g., the functions defined in the Gene Ontology).
Gene function prediction is a complex classification problem characterized by the following aspects: (1) a single gene may have multiple functions; (2) the functions are organized in a hierarchy; (3) unbalanced training data for each function (far fewer positive than negative examples); (4) missing class labels; (5) availability of multiple biological data sources, such as microarray data, genome sequence, and protein-protein interactions. As participants in the 2011 Critical Assessment of Function Annotation (CAFA) challenge, our team achieved the highest AUC accuracy among 45 groups. In the competition, we gained by focusing on the fifth aspect of the problem. Thus, in this dissertation, I will discuss several schemes to integrate the prediction scores from multiple data sources and show their results. Interestingly, the experimental results show that a simple averaging integration method is competitive with other state-of-the-art data integration methods. The original spatial scan algorithm is used for the detection of spatial overdensities: the discovery of spatial subregions with significantly higher scores according to some density measure. This algorithm is widely used for identifying clusters of disease cases (e.g., identifying environmental risk factors for child leukemia). However, the original spatial scan algorithm only works on static spatial data. In this dissertation, I will propose one possible solution for spatial scan on movement data.
Temple University--Theses
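The spatial scan statistic discussed in the abstract above looks for the subregion whose observed counts most exceed their baseline. A toy NumPy sketch on a 1D grid (our own illustration of the general over-density idea, not the dissertation's movement-data algorithm):

```python
import numpy as np

def poisson_llr(c, b, C, B):
    """Kulldorff-style Poisson log-likelihood ratio for a candidate zone
    with c observed / b expected cases, out of C / B on the whole map."""
    if c <= b * C / B:                      # only over-densities count
        return 0.0
    inside = c * np.log(c / b)
    outside = 0.0 if C == c else (C - c) * np.log((C - c) / (B - b))
    return inside + outside - C * np.log(C / B)

def spatial_scan_1d(counts, baseline, max_len=4):
    """Scan every contiguous window up to max_len cells and return the
    best (score, (start, end)) zone. Real scan statistics add Monte Carlo
    replications to attach a p-value to the winning score."""
    C, B = counts.sum(), baseline.sum()
    best = (0.0, None)
    for i in range(len(counts)):
        for j in range(i + 1, min(i + max_len, len(counts)) + 1):
            score = poisson_llr(counts[i:j].sum(), baseline[i:j].sum(), C, B)
            if score > best[0]:
                best = (score, (i, j))
    return best

counts   = np.array([2.0, 1.0, 9.0, 11.0, 2.0, 1.0])   # observed cases
baseline = np.array([3.0, 3.0, 3.0, 3.0, 3.0, 3.0])    # expected cases
score, zone = spatial_scan_1d(counts, baseline)        # zone covers cells 2-3
```

On this toy map the scan correctly isolates the two over-dense cells; real implementations enumerate circular or irregular zones over geographic coordinates instead of 1D windows.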
Langø, Hans Martin, and Morten Tylden. "Surface Reconstruction and Stereoscopic Video Rendering from Laser Scan Generated Point Cloud Data". Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9589.
Full text
This paper contains studies of the process of creating three-dimensional objects from point clouds. The main goal of this master's thesis was to process a point cloud of the Nidaros Cathedral, mainly as a pilot project to create a standard procedure for future projects with similar goals. The main challenges were two-fold: both processing the data and creating stereoscopic videos presenting it. The approach to solving the problems included the study of earlier work on similar subjects, learning algorithms and tools, and finding the best procedures through trial and error. This resulted in a visually pleasing model of the cathedral, as well as a stereoscopic video demonstrating it from all angles. The conclusion of the thesis is a pilot project demonstrating the different operations needed to overcome the challenges encountered during the work. The focus has been on presenting the procedures in such a way that they might be used in future projects of a similar nature.
Fraker, Shannon E. "Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data". Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29511.
Full text
Ph. D.
Nolte, Zachary. "Mosquito popper: a multiplayer online game for 3D human body scan data segmentation". Thesis, University of Iowa, 2017. https://ir.uiowa.edu/etd/5585.
Texto completoLi, Jian. "Investigating the effect of the DGNSS SCAT-I data link on VOR signal reception". Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178220159.
Texto completoBoyanapally, Deepthi. "MERGING OF FINGERPRINT SCANS OBTAINED FROM MULTIPLE CAMERAS IN 3D FINGERPRINT SCANNER SYSTEM". UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/510.
Texto completoPeng, Peng. "A Measurement Approach to Understanding the Data Flow of Phishing From Attacker and Defender Perspectives". Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/96401.
Full text
Master of Science
A phishing attack is a fraudulent attempt to lure target users into giving away sensitive information such as usernames, passwords, and credit card details. Cybercriminals usually build phishing websites (mimicking a trustworthy entity) and trick users into revealing important credentials. However, the data flow of the phishing process is still unclear. From the attackers' perspective, we want to know how attackers collect the sensitive information stolen by phishing websites. From the defenders' perspective, we are trying to figure out how online scan engines (e.g., VirusTotal) detect phishing URLs and how reliable their detection results are. In this thesis, we perform an empirical measurement to help answer the two questions above. By monitoring and analyzing a large number of real-world phishing websites, we draw a clear picture of the credential-sharing process during phishing attacks. Also, by building our own phishing websites and submitting them to VirusTotal for scanning, we find that more rigorous methodologies for using VirusTotal labels are desperately needed.
Balduzzi, Mathilde. "Plant canopy modeling from Terrestrial LiDAR System distance and intensity data". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20203.
Full text
The challenge of this thesis is to reconstruct the 3D geometry of vegetation from the distance and intensity data provided by a 3D LiDAR scanner. A shape-from-shading method based on propagation is developed and combined with a Kalman-type fusion method to obtain an optimal reconstruction of the leaves.
-Introduction- Analysis of the LiDAR data shows that point cloud quality is variable and depends upon the measurement set-up. When the LiDAR laser beam reaches the edge of a surface (or a steeply inclined surface), it also integrates background measurements; such configurations produce outliers. This situation is common when measuring foliage, since foliage generally has a fragmented and complex shape. The LiDAR data are then of poor quality, and the quantity of leaves in a scan makes manual correction of outliers tedious. The goal of this thesis is to develop a methodology that integrates the LiDAR intensity data with the distance data in order to correct those outliers automatically.
-Shape-from-shading- The principle of shape-from-shading (SFS) is to reconstruct distance values from the intensities of a photographed object. The camera (LiDAR sensor) and the light source (LiDAR laser) share the same direction and are placed at infinity relative to the surface, which makes the effect of distance on intensity negligible and validates the hypothesis of an orthographic camera. In addition, the relationship between the incidence angle of the light beam and the intensity is known. Thanks to the analysis of the LiDAR data, we are able to choose the better of the two data sources, distance or intensity, for the purpose of leaf reconstruction. An SFS algorithm that propagates along iso-intensity regions is developed; this type of algorithm allows us to integrate a Kalman-type fusion method.
-Mathematical design of the method- The patches of the surface corresponding to the iso-intensity regions are patches of constant-slope surfaces, also called sand-pile surfaces. We use those surfaces to rebuild the 3D geometry of the scanned surfaces. We show that from the 3D knowledge of an iso-intensity region we can construct those sand-pile surfaces. The contours of the first iso-intensity regions (the propagation seeds) are initialized with the 3D LiDAR data. The lines of greatest slope of those surfaces are generated, and by propagating those lines (and thus the corresponding sand-pile surface) we build the other contour of the iso-intensity region. The reconstruction is then propagated iteratively.
-Kalman filter- This propagation can be considered as the computation of a trajectory on the reconstructed surface. In our study framework, the distance data are always available (3D scanner data). It is thus possible to choose which data source (intensity vs. distance) is better for reconstructing the object surface; this is done with a Kalman-type fusion filter.
-Algorithm- To carry out a reconstruction by propagation, the iso-intensity regions must be ordered. Once the propagation seeds are found, they are initialized with the distances provided by the LiDAR, and for each node of the hierarchy (corresponding to an iso-intensity region) the sand-pile surface reconstruction is performed.
-Manuscript- The thesis manuscript gathers five chapters. First, we give a short description of the LiDAR technology and an overview of traditional 3D surface reconstruction from point clouds. We then review the state of the art of shape-from-shading methods. LiDAR intensity is studied in a third chapter, to define the strategy for correcting the distance effect and to establish the incidence-angle vs. intensity relationship. A fourth chapter gives the principal results of this thesis: it gathers the theoretical approach of the SFS algorithm developed here, with its description and its results when applied to synthetic images. Finally, a last chapter presents results of leaf reconstruction.
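The Kalman-type choice between the two distance sources can be illustrated with a one-dimensional sketch. This is our own inverse-variance illustration with made-up numbers, not the thesis's filter:

```python
def kalman_fuse(z_range, var_range, z_sfs, var_sfs):
    """One scalar Kalman update fusing two distance estimates: the gain K
    leans on the SFS value exactly when the LiDAR range is noisy."""
    K = var_range / (var_range + var_sfs)
    fused = z_range + K * (z_sfs - z_range)   # state update
    fused_var = (1.0 - K) * var_range         # fused variance shrinks
    return fused, fused_var

# Leaf interior: the range measurement is reliable, keep it almost as-is.
d_in, v_in = kalman_fuse(2.00, 0.01, 2.30, 1.0)
# Leaf edge: the range is an outlier (mixed edge/background return),
# so the fused distance moves toward the SFS-propagated estimate.
d_edge, v_edge = kalman_fuse(3.50, 4.0, 2.05, 0.01)
```

The same weighting logic, applied point by point along the propagation trajectory, is what lets intensity-derived estimates override range outliers at leaf edges while leaving reliable interior points essentially untouched.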
Kulunk, Hasan Salih. "Lakebed Characterization Using Side-Scan Data for Investigating the Latest Lake Superior Coastal Environment Conditions". Thesis, Michigan Technological University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10683388.
Texto completo
This thesis provides a review of the development of hydrographic survey equipment and supporting geospatial equipment and technology such as GPS. Using SonarWiz, a sonar image processing software package, lakebed classification methodologies were evaluated for mapping Buffalo Reef, located near Gay, Michigan, in Lake Superior. The goal was to develop an approach to mapping the reef bed and delineating various components of the lake bottom, including stamp sands that are migrating from the abandoned Gay copper-processing stamp mill toward the reef. This contamination of the reef is having an adverse effect on habitats important to local flora and fauna.
Sonar data were collected with an Edgetech 4125 side-scan sonar and an Iver3, a fully autonomous underwater vehicle with bathymetry and side-scan capabilities. Both systems are owned and operated by the Great Lakes Research Center at Michigan Technological University.
Sonar image post-processing was completed using SonarWiz 7, ArcGIS 10.5, and ERDAS Imagine. The resulting classification comprises six information classes: cobble; cobble/stamp sand with different intensity returns (low, medium, and high); trend of stamp sand; sandy waves; and shadow, which mostly indicates rock/bedrock. The cobble/stamp sand class had two distinct spectral classes (high- and low-intensity returns) for the Iver3, and three distinct spectral classes (high-, medium-, and low-intensity returns) for the Edgetech 4125. The Edgetech 4125 classification excluded shadow areas automatically.
The final step was an interpretation of lakebed features based on ground truth samples and photographic images from the bottom surface. Recommendations for future research are presented.
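The intensity-based class structure described in this abstract can be illustrated with a toy threshold classifier. The thresholds and exact class labels below are hypothetical stand-ins, not the supervised classification actually performed in SonarWiz and ERDAS Imagine:

```python
def classify_pixel(intensity, shadow=20, low=90, high=170):
    """Toy side-scan backscatter classifier on a 0-255 intensity scale.
    Thresholds are illustrative only, not taken from the thesis."""
    if intensity < shadow:
        return "shadow (mostly rock/bedrock)"
    if intensity < low:
        return "low-intensity cobble/stamp sand"
    if intensity < high:
        return "medium-intensity cobble/stamp sand"
    return "high-intensity cobble/stamp sand"
```

A per-pixel rule like this is the simplest possible scheme; the thesis distinguishes spectral classes per sensor, so in practice the thresholds would differ between the Iver3 and the Edgetech 4125.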
Moritz, Malte y Anton Pettersson. "Estimation of Local Map from Radar Data". Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111916.
Texto completo
Karlsson, Rasmus. "Exploring a video game AI bot that scans and reacts to its surroundings in real-time". Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76737.
Texto completo
Donglikar, Swapneel B. "Design for Testability Techniques to Optimize VLSI Test Cost". Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/43712.
Texto completo
Master of Science
Read, Simon. "Methods for the improved implementation of the spatial scan statistic when applied to binary labelled point data". Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555124.
Texto completoManuel, Melissa Barnes Ulrich Pamela V. Connell Lenda Jo. "Using 3D body scan measurement data and body shape assessment to build anthropometric profiles of tween girls". Auburn, Ala, 2009. http://hdl.handle.net/10415/1585.
Texto completoCho, Jang Ik. "Partial EM Procedure for Big-Data Linear Mixed Effects Model, and Generalized PPE for High-Dimensional Data in Julia". Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case152845439167999.
Texto completoPersson, Andreas. "3D Scan-based Navigation using Multi-Level Surface Maps". Thesis, Örebro University, School of Science and Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-11211.
Texto completo
The field of research connected to mobile robot navigation is much broader than the scope of this thesis. In this report, the navigation topic is therefore narrowed down primarily to the mapping and scan matching techniques that were used to achieve the overall navigation task. The work presented here is based on an existing robot platform that provides 3D point clouds from 3D scanning, together with functionality for planning and following a path. This thesis presents how a scan matching algorithm is used to secure the alignment between successive point clouds. Since the computational time of nearest-neighbour search is a commonly discussed aspect of scan matching, techniques for decreasing this computational time are also suggested. With alignment secured, the challenge was to represent the point clouds with a map model. Since the point clouds are three-dimensional, a mapping technique is presented that provides rough 3D representations of the environment. A problem that arose with a 3D map representation is that the given path-planning functionality requires a 2D representation. This is addressed by translating the 3D map at a specific height level into a 2D map usable for path planning, for which this report suggests a novel traversability analysis approach using a tree structure.
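A common way to cut nearest-neighbour cost in scan matching is spatial bucketing, so each query inspects only nearby points instead of the whole cloud. A small 2D sketch of the idea (the cell size and function names are illustrative; the thesis does not specify its exact nearest-neighbour technique here):

```python
import math
from collections import defaultdict

def build_grid(points, cell=1.0):
    """Hash 2D points into square buckets to prune nearest-neighbour search."""
    grid = defaultdict(list)
    for p in points:
        grid[(int(p[0] // cell), int(p[1] // cell))].append(p)
    return grid

def nearest(grid, q, cell=1.0):
    """Search only the query's bucket and its 8 neighbours.
    Exact only when the true nearest neighbour lies within one cell."""
    cx, cy = int(q[0] // cell), int(q[1] // cell)
    best, best_d = None, float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for p in grid.get((cx + dx, cy + dy), []):
                d = math.dist(p, q)
                if d < best_d:
                    best, best_d = p, d
    return best
```

In an ICP-style loop this lookup replaces the brute-force scan over every map point; a kd-tree gives the same pruning without the one-cell-radius caveat.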
Dutton, James Allen. "Developing articulated human models from laser scan data for use as avatars in real time networked virtual environments". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397086.
Texto completo
Thesis advisors: Bachmann, Eric; Yun, Xiaoping. "September 2001." Includes bibliographical references (p. 47-49). Also available online.
Lontoc-Roy, Melinda. "Three-dimensional visualization in situ and complexity analysis of crop root systems using CT scan data : a primer". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82282.
Texto completoQuerel, Richard Robert y University of Lethbridge Faculty of Arts and Science. "IRMA calibrations and data analysis for telescope site selection". Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2007, 2007. http://hdl.handle.net/10133/675.
Texto completoxii, 135 leaves : ill. ; 28 cm. --
Joshi, Shriyanka. "Reverse Engineering of 3-D Point Cloud into NURBS Geometry". University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1595849563494564.
Texto completoZafalon, Zaira Regina [UNESP]. "Scan for MARC: princípios sintáticos e semânticos de registros bibliográficos aplicados à conversão de dados analógicos para o formato MARC 21 bibliográfico". Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/103386.
Texto completo
The research presents as its central theme the study of the bibliographic record conversion process. The object of study is framed by an understanding of analogic bibliographic record conversion to the MARC21 Bibliographic format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The thesis in this research is that the syntactic and semantic principles of bibliographic records, defined by description and visualization cataloguing schemes, present in the descriptive metadata structure standards and content standards, determine the bibliographic record conversion process to the MARC21 Bibliographic Format. In the light of this, the purpose of this research is to develop a theoretical study of the syntax and semantics of bibliographic records, grounded in the linguistic theories of Saussure and Hjelmslev, which can underlie analogic bibliographic record conversion to the MARC21 Bibliographic Format using a computational interpreter. To this end, the general aim was to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, based on Saussurean and Hjelmslevian linguistic studies of human language manifestations, which can be applicable to a computational interpreter designed for the conversion of bibliographic records to the MARC21 Bibliographic Format. To attain this goal, the following specific objectives were identified, in two groups and related to the theoretical-conceptual model of bibliographic record syntax and semantics and to the conversion process of the records, respectively: to make explicit the relationship between the syntax and semantics of bibliographic records... (Complete abstract click electronic access below)
Anil, Engin Burak. "Utilization of As-is Building Information Models Obtained from Laser Scan Data for Evaluation of Earthquake Damaged Reinforced Concrete Buildings". Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/499.
Texto completoJett, David B. "Selection of flip-flops for partial scan paths by use of a statistical testability measure". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-12302008-063234/.
Texto completoZafalon, Zaira Regina. "Scan for MARC : princípios sintáticos e semânticos de registros bibliográficos aplicados à conversão de dados analógicos para o formato MARC 21 bibliográfico /". Marília : [s.n.], 2012. http://hdl.handle.net/11449/103386.
Texto completoBanca: Dulce Maria Baptista
Banca: Edberto Ferneda
Banca: Elisa Campos Machado
Banca: Ricardo César Gonçalves Sant'Ana
Resumo: A pesquisa apresenta como tema nuclear o estudo do processo de conversão de registros bibliográficos. Delimita-se o objeto de estudo pelo entendimento da conversão de registros bibliográficos analógicos para o formato MARC21 Bibliográfico, a partir da análise sintática e semântica de registros descritos segundo padrões de estrutura de metadados descritivos e padrões de conteúdo. A tese nesta pesquisa é a de que os princípios sintáticos e semânticos de registros bibliográficos, definidos pelos esquemas de descrição e de visualização na catalogação, presentes nos padrões de estrutura de metadados descritivos e nos padrões de conteúdo, determinam o processo de conversão de registros bibliográficos para o Formato MARC21 Bibliográfico. Em vista desse panorama, a proposição desta pesquisa é desenvolver um estudo teórico sobre a sintaxe e a semântica de registros bibliográficos, pelo viés da Linguística, com Saussure e Hjelmslev, que subsidiem a conversão de registros bibliográficos analógicos para o Formato MARC21 Bibliográfico em um interpretador computacional. Com esta proposta, estabelece-se, como objetivo geral, desenvolver um modelo teórico-conceitual de sintaxe e semântica em registros bibliográficos, a partir de estudos linguísticos saussureanos e hjelmslevianos das manifestações da linguagem humana, que seja aplicável a um interpretador computacional voltado à conversão de registros bibliográficos ao formato MARC21 Bibliográfico. Para o alcance de tal objetivo recorre-se aos seguintes objetivos específicos, reunidos em dois grupos e voltados, respectivamente, ao modelo teórico-conceitual da estrutura sintática e semântica de registros bibliográficos, e ao processo de conversão de seus registros: explicitar a relação entre a sintaxe e a semântica... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: The research presents as its central theme the study of the bibliographic record conversion process. The object of study is framed by an understanding of analogic bibliographic record conversion to the MARC21 Bibliographic format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The thesis in this research is that the syntactic and semantic principles of bibliographic records, defined by description and visualization cataloguing schemes, present in the descriptive metadata structure standards and content standards, determine the bibliographic record conversion process to the MARC21 Bibliographic Format. In the light of this, the purpose of this research is to develop a theoretical study of the syntax and semantics of bibliographic records, grounded in the linguistic theories of Saussure and Hjelmslev, which can underlie analogic bibliographic record conversion to the MARC21 Bibliographic Format using a computational interpreter. To this end, the general aim was to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, based on Saussurean and Hjelmslevian linguistic studies of human language manifestations, which can be applicable to a computational interpreter designed for the conversion of bibliographic records to the MARC21 Bibliographic Format. To attain this goal, the following specific objectives were identified, in two groups and related to the theoretical-conceptual model of bibliographic record syntax and semantics and to the conversion process of the records, respectively: to make explicit the relationship between the syntax and semantics of bibliographic records... (Complete abstract click electronic access below)
Doutor
Feulner, Martin [Verfasser] y Sigrid [Akademischer Betreuer] Liede-Schumann. "Taxonomical use of floral scent data in apomictic taxa of Hieracium and Sorbus derived from hybridization / Martin Feulner. Betreuer: Sigrid Liede-Schumann". Bayreuth : Universität Bayreuth, 2013. http://d-nb.info/1059352567/34.
Texto completoSimán, Frans Filip. "Assessment of Machine Learning Applied to X-Ray Fluorescence Core Scan Data from the Zinkgruvan Zn-Pb-Ag Deposit, Bergslagen, Sweden". Thesis, Luleå tekniska universitet, Geovetenskap och miljöteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-82050.
Texto completoMason, Terry. "ADVANCES IN WIDEBAND VHS CASSETTE RECORDING". International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608887.
Texto completo
In recent years, many designers have turned to digital techniques as a means of improving the fidelity of instrumentation data recorders. However, single- and multi-channel recorders based on professional VHS transports are now available which use innovative methods for achieving near-perfect timebase accuracy, inter-channel timing, and group-delay specifications for long-duration wideband analog recording applications. This paper discusses some of the interesting technical problems involved and demonstrates that VHS cassette recorders are now a convenient and low-cost proposition for high-precision multi-channel wideband data recording.
Schönström, Linus. "Programming a TEM for magnetic measurements : DMscript code for acquiring EMCD data in a single scan with a q-E STEM setup". Thesis, Uppsala universitet, Tillämpad materialvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-306167.
Texto completoAhlström, Daniel. "Minimizing memory requirements for deterministic test data in embedded testing". Thesis, Linköping University, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54655.
Texto completo
Embedded and automated tests reduce maintenance costs for embedded systems installed in remote locations. Testing multiple components of an embedded system, connected on a scan chain, using deterministic test patterns stored in the system provides high fault coverage but requires large system memory. This thesis presents an approach to reduce test-data memory requirements through a test controller program, exploiting the observation that a system often contains multiple components of the same type. The program uses deterministic test patterns specific to each component type, stored in system memory, to create fully defined test patterns when needed. Because the stored patterns are per component type, the program can reuse them across multiple tests and several times within the same test. The program can also test parts of a system without affecting the normal functional operation of the remaining components and without increasing test-data memory requirements. Two experiments were conducted to determine how much the approach reduces test-data memory requirements. The results show up to a 26.4% reduction for the ITC'02 SOC test benchmarks and an average 60% reduction for designs generated to gather statistical data.
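The memory saving described in this abstract comes from storing one deterministic pattern per component type and expanding it per instance at test time. A schematic sketch of that expansion (component names and pattern contents are invented for illustration):

```python
def expand_chain(chain, patterns):
    """Concatenate the stored per-type test pattern for every component
    instance on the scan chain, producing a fully defined test vector."""
    return "".join(patterns[component_type] for component_type in chain)

# Three component instances on the chain, but only two patterns
# are kept in memory -- one per component TYPE:
patterns = {"A": "1010", "B": "11"}
vector = expand_chain(["A", "B", "A"], patterns)
```

Storing per-type rather than per-instance patterns is what lets the controller test a subset of the chain, or repeat a pattern within a test, with no growth in stored test data.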
Mccart, James A. "Goal Attainment On Long Tail Web Sites: An Information Foraging Approach". Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3686.
Texto completo