Dissertations on the topic "Scant data"

To see other types of publications on this topic, follow the link: Scant data.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Scant data".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Corbin, Max. "Surface fitting head scan data sets." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175886726.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Fontanarava, Julien. "Signal Extraction from Scans of Electrocardiograms." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-248430.

Full text of the source
Abstract:
In this thesis, we propose a Deep Learning method for fully automated digitization of ECG (Electrocardiogram) sheets. We perform the digitization of ECG sheets in three steps: layout detection, column-wise signal segmentation, and finally signal retrieval - each of them performed by a Convolutional Neural Network. These steps leverage advances in the fields of object detection and pixel-wise segmentation due to the rise of CNNs in image processing. We train each network on synthetic images that reflect the challenges of real-world data. The use of these realistic synthetic images aims to make our models robust to the variability of real-world ECG sheets. Compared with computer vision benchmarks, our networks show promising results. Our signal retrieval network significantly outperforms our implementation of the benchmark. Our column segmentation model shows robustness to overlapping signals, an issue of signal segmentation that computer vision methods are not equipped to deal with. Overall, this fully automated pipeline provides a gain in time and precision for physicians willing to digitize their ECG database.
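For a concrete sense of the final signal-retrieval step described above, here is a minimal sketch (not taken from the thesis) of how a binarized ECG trace mask could be converted back into a 1D waveform; the scaling constants and the function name are illustrative assumptions.

```python
import numpy as np

def mask_to_signal(mask, mv_per_pixel=0.02, seconds_per_pixel=0.004):
    """Recover a 1D waveform from a binary trace mask (rows x cols).

    For every column containing foreground pixels, the amplitude is taken as
    the mean row index of those pixels; empty columns are filled by linear
    interpolation. Scaling constants are illustrative placeholders only.
    """
    rows, cols = mask.shape
    amplitude = np.full(cols, np.nan)
    for c in range(cols):
        ys = np.flatnonzero(mask[:, c])
        if ys.size:
            amplitude[c] = ys.mean()
    valid = ~np.isnan(amplitude)
    amplitude = np.interp(np.arange(cols), np.flatnonzero(valid), amplitude[valid])
    baseline = np.median(amplitude)
    signal_mv = (baseline - amplitude) * mv_per_pixel   # image y-axis points down
    time_s = np.arange(cols) * seconds_per_pixel
    return time_s, signal_mv
```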
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Agirnas, Emre. "Multi-scan Data Association Algorithm For Multitarget Tracking." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605646/index.pdf.

Full text of the source
Abstract:
The data association problem in multitarget tracking is the determination of the relationship between targets and the measurements arriving from the sensors of the tracking system. The performance of a multitarget tracking system is strongly related to the chosen data association method and tracking algorithm, and incorrect data association degrades the state estimation of targets. In this thesis, we propose a new multi-scan data association algorithm for multitarget tracking systems, implemented in MATLAB. The performance of the new algorithm is compared with the JPDA method for tracking multiple targets. In the simulations, linear models are used and the uncertainties in the sensor and motion models are modeled by Gaussian densities. Simulation results are presented and show that the new algorithm's performance is better than that of the JPDA method. Moreover, a survey of the target tracking literature is presented, covering the basics of multitarget tracking systems and existing data association methods.
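As context for the data association problem the abstract describes, the following is a minimal single-scan, gated nearest-neighbour baseline in NumPy; it is not the multi-scan algorithm proposed in the thesis, and the gate value and function name are illustrative assumptions.

```python
import numpy as np

def gated_nearest_neighbor(predicted, measurements, S, gate=9.21):
    """Assign each predicted target position the closest measurement inside a
    Mahalanobis gate (chi-square 99% for 2 DOF ~ 9.21). Greedy single-scan
    association, shown only as a baseline for comparison."""
    S_inv = np.linalg.inv(S)           # innovation covariance, shared for simplicity
    assignments, used = {}, set()
    for i, z_hat in enumerate(predicted):
        best_j, best_d2 = None, gate
        for j, z in enumerate(measurements):
            if j in used:
                continue
            v = z - z_hat
            d2 = float(v @ S_inv @ v)  # squared Mahalanobis distance
            if d2 < best_d2:
                best_j, best_d2 = j, d2
        if best_j is not None:
            assignments[i] = best_j
            used.add(best_j)
    return assignments                 # target index -> measurement index
```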
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Le, Bas Timothy P. "Processing techniques for TOBI side-scan sonar data." Thesis, University of Reading, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.360112.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Khoodoruth, B. Dhalila S. Y. "Detection, classification and visualization of CT Scan data." Pau, 2009. http://www.theses.fr/2009PAUU3001.

Full text of the source
Abstract:
The dissertation covers the detection, classification and visualization of brain trauma lesions from Computed Tomography. Various geometrical methods are studied, such as hybrid approaches, feature extraction, level sets, watershed and region growing, and are analyzed with respect to their methodological aspects and constraints. The pixel intensities, gradient magnitude, affinity map and catchment basins used by these methods are validated over various ranges of constraints, and the most appropriate detection method is identified for each specific feature of the trauma lesions. We contribute a new methodology for feature-based contour extraction of a lesion that combines bilateral filtering, anisotropic diffusion, the watershed algorithm and mathematical morphology operators based mainly on the gradient function: the gradient of the grey-level values of the watershed pixels is transformed after flooding and substituted by the gradient magnitude of the anisotropic diffusion. The classification of these lesions is carried out by pattern recognition: the k-means and Markov random field algorithms are implemented and tested for each feature of the various lesions, and entropies of the CT scans are computed to obtain an optimized statistical evaluation for each lesion type, namely brain atrophy, subdural hygroma, subdural haematoma, extracranial haematoma and non-haemorrhagic contusion. These methods are compared to assess their performance and statistical accuracy with respect to the feature-based lesion sets, which are analyzed statistically from intensity and pixel values to estimated volumes. The numerical interpretation of each specific feature enables a proper assessment of the evolutionary stage of the lesions. Our last contributions concern the clinical aspects of these interpretations. Future directions of the research include a multilayer neural network with sparse distributions and a switching linear dynamical system for simultaneous feature detection and classification, and the implementation of a brain atlas relating trauma cases to typical cases through a pixel-based structuring of the anatomy and real-time visualization.
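To illustrate the kind of gradient-based watershed pipeline the abstract refers to, the sketch below combines bilateral smoothing, a Sobel gradient and marker-controlled watershed with scikit-image; the thresholds and markers are assumptions, and bilateral filtering merely stands in for the anisotropic-diffusion step of the thesis.

```python
import numpy as np
from skimage.restoration import denoise_bilateral
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_lesion(ct_slice, low=0.2, high=0.6):
    """Marker-controlled watershed on a smoothed CT slice (values in [0, 1]).

    The `low`/`high` thresholds are illustrative, not the thesis values."""
    smooth = denoise_bilateral(ct_slice, sigma_color=0.05, sigma_spatial=3)
    gradient = sobel(smooth)                # gradient-magnitude landscape to flood
    markers = np.zeros_like(ct_slice, dtype=int)
    markers[smooth < low] = 1               # background marker
    markers[smooth > high] = 2              # candidate lesion marker
    return watershed(gradient, markers)     # label image: 1 = background, 2 = lesion
```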
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Tomé, Diego Gomes. "A near-data select scan operator for database systems." Repositório Institucional da UFPR, 2017. http://hdl.handle.net/1884/53293.

Full text of the source
Abstract:
Advisor: Eduardo Cunha de Almeida
Co-advisor: Marco Antonio Zanata Alves
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 21/12/2017
Includes references: p. 61-64
Abstract: A large burden of processing read-mostly databases consists of moving data around the memory hierarchy rather than processing data in the processor. The data movement is penalized by the performance gap between the processor and the memory, which is the well-known problem called the memory wall. The emergence of smart memories, such as the new Hybrid Memory Cube (HMC), allows mitigating the memory wall problem by executing instructions in logic chips integrated into a stack of DRAMs. These memories can enable not only in-memory databases but also in-memory computation of database operations. In this dissertation, we focus on near-data query processing to reduce data movement through the memory and cache hierarchy. We focus on the select scan database operator, because the scanning of columns moves large amounts of data prior to other operations like joins (i.e., push-down optimization). Initially, we evaluate the execution of the select scan using the HMC as an ordinary DRAM. Then, we introduce extensions to the HMC Instruction Set Architecture (ISA) to execute our near-data select scan operator inside the HMC, called HMC-Scan. In particular, we extend the HMC ISA with HMC-Scan to internally resolve instruction dependencies. To support branch-less evaluation of the select scan and transform control-flow dependencies into data-flow dependencies (i.e., predicated execution), we propose another HMC ISA extension called HIPE-Scan. HIPE-Scan leads to less interaction between the processor and the HMC during the execution of query filters that depend on in-memory data. We implemented the near-data select scan in the row-, column- and vector-wise query engines for x86 and for the two HMC extensions, HMC-Scan and HIPE-Scan, achieving performance improvements of up to 3.7× for HMC-Scan and 5.6× for HIPE-Scan when executing Query 6 of the 1 GB TPC-H database on the column-wise engine. Keywords: In-Memory DBMS, Hybrid Memory Cube, Processing-in-Memory.
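The contrast between branching and predicated (branch-less) evaluation that motivates HIPE-Scan can be sketched in a few lines of NumPy; this is only an illustration of the general idea of turning a control-flow predicate into a data dependency, not the HMC instruction-set extension itself.

```python
import numpy as np

def select_scan_branching(column, lo, hi):
    """Row-at-a-time scan with a control-flow branch per value."""
    out = []
    for v in column:
        if lo <= v <= hi:
            out.append(v)
    return np.array(out)

def select_scan_predicated(column, lo, hi):
    """Branch-less variant: the predicate becomes a bit mask (a data dependency),
    which is the style of evaluation pushed near the data in the abstract."""
    mask = (column >= lo) & (column <= hi)   # predicate evaluated for every row
    return column[mask]                      # compaction driven by the mask

# Example in the spirit of a TPC-H Q6-style range filter (synthetic data):
col = np.random.randint(0, 100, size=1_000_000)
assert np.array_equal(select_scan_branching(col, 10, 20),
                      select_scan_predicated(col, 10, 20))
```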
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Seiler, Alexander. "Improved methods in reverse engineering using CMM scan data." Thesis, Nottingham Trent University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239711.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Xiao, Yijun. "Segmentation and modelling of whole human body scan data." Thesis, University of Glasgow, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426616.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Zacharia, Nadime. "Compression and decompression of test data for scan-based designs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0004/MQ44048.pdf.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Fang, Haian. "Optimal estimation of head scan data with generalized cross validation." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1179344603.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Zacharia, Nadime. "Compression and decompression of test data for scan based designs." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20218.

Full text of the source
Abstract:
Traditional methods to test integrated circuits (ICs) require enormous amounts of memory, which makes them increasingly expensive and unattractive. This thesis addresses this issue for scan-based designs by proposing a method to compress and decompress input test patterns. By storing the test patterns in a compressed format, the amount of memory required to test ICs can be reduced to manageable levels. The thesis describes the compression and decompression scheme in detail. The proposed method relies on the insertion of a decompression unit on the chip. During test application, the patterns are decompressed by the decompression unit as they are applied. Hence, decompression is done on-the-fly in hardware and does not slow down test application.
The design of the decompression unit is treated in depth and a design is proposed that minimizes the amount of extra hardware required. In fact, the design of the decompression unit uses flip-flops already on the chip: it is implemented without inserting any additional flip-flops.
The proposed scheme is applied in two different contexts: (1) in (external) deterministic-stored testing, to reduce the memory requirements imposed on the test equipment; and (2) in built-in self test, to design a test pattern generator capable of generating deterministic patterns with modest area and memory requirements.
Experimental results are provided for the largest ISCAS'89 benchmarks. All of these results show that the proposed technique greatly reduces the amount of test data while requiring little area overhead. Compression factors of more than 20 are reported for some circuits.
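To make the storage argument concrete, here is a toy run-length compression/decompression pair for a scan test vector; the actual scheme in the thesis relies on an on-chip hardware decompressor rather than RLE, so this only illustrates why storing compressed patterns and expanding them on the fly reduces tester memory.

```python
def rle_compress(bits):
    """Run-length encode a scan test vector given as a string of '0'/'1'."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_decompress(runs):
    """Expand runs back into the original vector, mimicking on-the-fly decompression."""
    return "".join(symbol * length for symbol, length in runs)

vector = "0000000011110000000000111"
assert rle_decompress(rle_compress(vector)) == vector
```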
Styles: APA, Harvard, Vancouver, ISO, etc.
12

El-Shehaly, Mai Hassan. "A Visualization Framework for SiLK Data exploration and Scan Detection." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34606.

Full text of the source
Abstract:
Network packet traces, despite having a lot of noise, contain priceless information, especially for investigating security incidents or troubleshooting performance problems. However, given the gigabytes of flows crossing a typical medium-sized enterprise network every day, spotting malicious activity and analyzing trends in network behavior becomes a tedious task. Further, computational mechanisms for analyzing such data usually take substantial time to reach interesting patterns and often mislead the analyst into reaching false positives, benign traffic being identified as malicious, or false negatives, where malicious activity goes undetected. Therefore, the appropriate representation of network traffic data to the human user has been an issue of concern recently. Much of the focus, however, has been on visualizing TCP traffic alone while adapting visualization techniques for the data fields that are relevant to this protocol's traffic, rather than on the multivariate nature of network security data in general, and the fact that forensic analysis, in order to be fast and effective, has to take into consideration different parameters for each protocol. In this thesis, we bring together two powerful tools from different areas of application: SiLK (System for Internet-Level Knowledge), for command-based network trace analysis; and ComVis, a generic information visualization tool. We integrate the power of both tools by aiding simplified interaction between them, using a simple GUI, for the purpose of visualizing network traces, characterizing interesting patterns, and fingerprinting related activity. To obtain realistic results, we applied the visualizations to anonymized packet traces from Lawrence Berkeley National Laboratory, captured during selected hours across three months. We used a sliding-window approach in visually examining traces for two protocols: ICMP and UDP. The main contribution of this research is a protocol-specific framework of visualization for ICMP and UDP data. We explored relevant header fields and the visualizations that worked best for each of the two protocols separately. The resulting views led us to a number of guidelines that can be vital in the creation of "smart books" describing best practices in using visualization and interaction techniques to maintain network security, while creating visual fingerprints which were found unique for individual types of scanning activity. Our visualizations use a multiple-views approach that incorporates the power of two-dimensional scatter plots, histograms, parallel coordinates, and dynamic queries.
Master of Science
Styles: APA, Harvard, Vancouver, ISO, etc.
13

Adeniyi, Olanrewaju Ari. "FUSION OF ULTRASONIC C-SCAN DATA WITH FINITE ELEMENT ANALYSIS." OpenSIUC, 2012. https://opensiuc.lib.siu.edu/theses/909.

Full text of the source
Abstract:
An abstract of the thesis of Olanrewaju Ari Adeniyi for the Master of Science degree in Mechanical Engineering and Energy Processes, presented June 2012 at Southern Illinois University Carbondale. Title: Fusion of Ultrasonic C-Scan Data with Finite Element Analysis. Major Professor: Dr. Tsuchin Philip Chu. Ultrasonic testing is a highly valued method in the field of non-destructive testing (NDT). It is an engineering tool that allows for non-invasive testing and evaluation. It is used widely in the aerospace industry to determine the integrity of complex materials without the use of destructive measures. This method of testing can be utilized to provide a multitude of parameters such as material properties and thicknesses. It can also be used to test for discrepancies in test specimens such as voids, impurities, delamination and other defects that could degrade the integrity of a structure. The problem is that this method is limited in the area of evaluation of end results. Results are generated in the form of data images and are evaluated by qualitative or quantitative image assessment. Simulation models are created from an image, which limits the accuracy of the analysis. The integration of ultrasonic C-scan data with Finite Element Analysis (FEA) addresses these issues. It allows models to be generated from ultrasonic C-scan data, which provides the means to conduct accurate FEA simulations. The fusion of ultrasonic C-scan data with computational methods, such as FEA, allows tested materials to be subjected to loading conditions that may be experienced in actual use. The results from FEA can provide the localized stress and strain fields generated by the loading conditions. The success of this analysis relies on the ability to generate high-quality C-scan data to create accurate CAD models. The generation of high-quality scans will produce vital analysis information such as material properties, thickness, voids, surface inclusions and other critical deformities, all of which will be used to generate a CAD analysis. With the ultrasonic data generated, finite element analysis can be utilized to further evaluate tested specimens. This technique has been applied to an isotropic aluminum block standard and an anisotropic Carbon Fiber Reinforced Polymer sample, both with known defects.
Styles: APA, Harvard, Vancouver, ISO, etc.
14

Ling, Li. "Local Feature Correspondence on Side-Scan Sonar Seafloor Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291803.

Full text of the source
Abstract:
In underwater environments, perception and navigation systems are heavily dependent on acoustic-wave-based sonar technology. Side-scan sonar (SSS) provides high-resolution, photo-realistic images of the seafloor at a relatively cheap price. These images could be considered potential candidates for place recognition and navigation of autonomous underwater vehicles (AUVs). Local feature correspondence matching, or the detection, description and matching of keypoints in overlapping images, is a necessary building block for AUV navigation. Recent deep-learning-based research has resulted in state-of-the-art local correspondence models for camera images. For SSS images, however, deep-learning-based studies are limited and handcrafted methods such as SIFT and RootSIFT still dominate the field. In this study, SSS images taken from a seafloor area with bottom-trawling marks were used for correspondence matching. D2-Net, a detect-and-describe VGG16-based network architecture designed for and tested on camera image correspondence, was fine-tuned for SSS image correspondence. Using a triplet margin ranking loss, the network was trained to simultaneously detect salient keypoints and produce similar descriptors for corresponding pixels and dissimilar descriptors for non-corresponding pixels. When evaluated on the non-trivial SSS image pairs in the test dataset, the best-performing D2-Net-based network was found to outperform the RootSIFT baseline in terms of the number of detected keypoints, keypoint repeatability and mean matching accuracy at thresholds above 10 pixels.
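A common way to evaluate such descriptors is mutual nearest-neighbour matching; the short NumPy sketch below shows that step in isolation (the descriptors themselves would come from a network such as D2-Net or from RootSIFT; the function name is an assumption).

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Match L2-normalised descriptors (N x D and M x D) by mutual nearest
    neighbours and return index pairs (i, j). The thresholds and networks used
    in the thesis differ; this only illustrates the matching protocol."""
    sim = desc_a @ desc_b.T        # cosine similarity matrix for unit-norm rows
    nn_ab = sim.argmax(axis=1)     # best B-match for each A-descriptor
    nn_ba = sim.argmax(axis=0)     # best A-match for each B-descriptor
    matches = [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
    return np.array(matches)
```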
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Teran, Espinoza Aldo. "Acoustic-Inertial Forward-Scan Sonar Simultaneous Localization and Mapping." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287368.

Full text of the source
Abstract:
The increasing accessibility and versatility of forward-scan (FS) imaging sonars (also known as forward-looking sonars or FLS) have spurred the interest of the robotics community seeking to solve the difficult problem of robotic perception in low-visibility underwater scenarios. Processing the incoming data from an imaging sonar is challenging, since it captures an acoustic 2D image of the 3D scene instead of providing straightforward range measurements like other sonar technologies do (e.g. multibeam sonar). Hence, complex post-processing and sensor fusion techniques are required to extract useful information from the sonar image. The present report details the development, validation and implementation of an acoustic-inertial localization and mapping algorithm that processes sonar images captured by an FS sonar together with inertial measurements to solve the simultaneous localization and mapping (SLAM) problem with an underwater sensor suite. A sonar odometry pose constraint is computed by detecting and matching features from two consecutive sonar images in a degeneracy-aware two-view bundle adjustment. The sonar odometry measurements are fused with preintegrated inertial measurements in a minimal pose-graph representation. The state-of-the-art iSAM2 (Incremental Smoothing and Mapping) solver is used to allow for real-time localization. A Python simulator was developed to evaluate the performance of the two-view bundle adjustment algorithm. Results are presented and discussed from both computer simulations in Gazebo using the Robot Operating System (ROS) and from real-world tests in a controlled environment with an in-house developed sensor suite. Sonar image degeneracies, sensor drift, and computational complexity proved to be hard to tackle, reducing the performance and robustness of the current implementation of the SLAM solution. However, the current work will serve as a stepping stone for future work and collaboration in underwater localization and mapping using FS sonars.
Styles: APA, Harvard, Vancouver, ISO, etc.
16

Kirkvik, Ann-Silje. "Completing a model based on laser scan generated point cloud data." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10489.

Full text of the source
Abstract:

This paper is a master thesis for the Department of Computer and Information Science at the Norwegian University of Science and Technology, spring 2008. It is a study of hole filling in three-dimensional surface models obtained from scans of real-world objects. The goal of this project is to find solutions capable of filling an incomplete model in a plausible and visually pleasing manner. To reach this goal, both theoretical studies and practical testing were performed. This paper presents the theoretical foundation needed to gain a greater understanding of the problem, together with the results from the testing phase. This knowledge and experience is then used to present a possible solution to the hole-filling problem. The conclusion of this project is that automatic procedures that are thoroughly documented in the literature fail to perform satisfactorily when the data set becomes too complicated. The Nidaros Cathedral is such a difficult data set and will require a customized, user-guided solution to meet the goals of this project.

Styles: APA, Harvard, Vancouver, ISO, etc.
17

Desai, Grishma Mahesh. "Automated extraction of abdominal aortic aneurysm geometries from CT scan data." Thesis, University of Hull, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441672.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
18

Xie, Yiping. "Machine Learning for Inferring Depth from Side-scan Sonar Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264835.

Full text of the source
Abstract:
Underwater navigation using Autonomous Underwater Vehicles (AUVs), which is significant for marine science research, depends heavily on the acoustic method, sonar. Typically, AUVs are equipped with both side-scan sonars and multibeam sonars, since each has its advantages and limitations. Side-scan sonars have a much wider range than multibeam sonars and are at the same time much cheaper, yet they cannot provide accurate depth measurements. This thesis investigates whether a machine-learned method can be used to translate side-scan sonar data into multibeam data with high accuracy, so that underwater navigation could be done by AUVs equipped only with side-scan sonars. The approaches considered in this thesis are based on machine learning methods, including generative models and discriminative models. The objective is to investigate the feasibility of machine-learning-based models for inferring depth from side-scan sonar images. Different models, including regression and Generative Adversarial Networks, are tested and compared, as are different CNN-based architectures such as U-Net and ResNet. As an experimental trial, this project has already shown the ability and great potential of machine-learning-based methods to extract latent representations from side-scan sonars and infer the depth with reasonable accuracy. Further improvements could be made to performance and stability so that the approach can potentially be verified on AUV platforms in real time.
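As a rough illustration of the image-to-image regression setting (not the exact architectures compared in the thesis), the following PyTorch sketch trains a tiny convolutional encoder-decoder to map a side-scan patch to a per-pixel depth map; the shapes, sizes, class name and synthetic tensors are assumptions.

```python
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Toy fully convolutional regressor from a 1-channel side-scan patch to a
    per-pixel depth map; a small stand-in for the U-Net/ResNet variants."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
sss_patch = torch.rand(8, 1, 64, 64)   # synthetic stand-in for SSS intensity
depth_gt = torch.rand(8, 1, 64, 64)    # synthetic stand-in for multibeam depth
loss = nn.functional.mse_loss(model(sss_patch), depth_gt)
loss.backward()                        # one illustrative training step
optimizer.step()
```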
Styles: APA, Harvard, Vancouver, ISO, etc.
19

Hornung, Maximilian. "Deep Learning-Based Identification of Ischemic Regions in Native Head CT Scans." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272129.

Full text of the source
Abstract:
Stroke is one of the major causes of death and disability worldwide. Fast diagnosis is of critical importance for stroke treatment. In clinical routine, a non-contrast CT (NCCT) is typically acquired immediately to determine whether the stroke is ischemic or hemorrhagic and to plan therapy accordingly. In case of ischemia, early signs of infarction may appear due to increased water uptake. These signs may be subtle, especially if observed only shortly after symptom onset, but hold the potential to provide a crucial first assessment of the location and extent of the infarction. In this work, we train a deep neural network to predict the infarct core from NCCT in an image-to-image fashion. To facilitate the exploitation of anatomic correspondences, learning is carried out in the standardized coordinate system of a brain atlas to which all images are deformably registered. Apart from binary infarct core masks, perfusion maps such as cerebral blood volume and flow are employed as additional training targets to enrich the physiologic information available to the model. The method is evaluated using cross-validation on the training data set consisting of 141 cases. For validation, we measure the overlap with the ground-truth masks, the localisation performance and the agreement with both manual and automatic assessment of affected ASPECTS regions. It is shown that the additional targets improve the results significantly, achieving an area under the curve of 0.835 when compared with the automated assessment in ASPECTS region classification and providing a distance of 0 mm between the prediction maximum and the indicated stroke infarct core in the majority of severe strokes with an infarct core volume greater than 70 ml.
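The evaluation quantities mentioned above can be illustrated in a few lines of Python; the numbers below are synthetic placeholders, not the study's data, and serve only to show how a region-level AUC and a mask overlap (Dice) might be computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(pred_mask, gt_mask):
    """Dice overlap between a predicted and a ground-truth infarct mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

# Region-level evaluation in the spirit of the ASPECTS comparison:
# per-region model outputs vs. the automated assessment (synthetic values).
region_scores = np.array([0.9, 0.2, 0.7, 0.1, 0.4, 0.8])
region_labels = np.array([1, 0, 1, 0, 0, 1])
print("AUC:", roc_auc_score(region_labels, region_scores))
```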
Styles: APA, Harvard, Vancouver, ISO, etc.
20

Jones, Lewys. "Applications of focal-series data in scanning-transmission electron microscopy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:a6f2a4d5-e77a-47a5-b2d7-aab4b7069ce2.

Full text of the source
Abstract:
Since its development, the scanning transmission electron microscope has rapidly found uses right across the material sciences. Its use of a finely focussed electron probe rastered across samples offers the microscopist a variety of imaging and spectroscopy signals in parallel. These signals are individually intuitive to interpret, and collectively immensely powerful as a research tool. Unsurprisingly then, much attention is concentrated on the optical quality of the electron probes used. The introduction of multi-pole hardware to correct optical distortions has yielded a step-change in imaging performance; now with spherical and other remnant aberrations greatly reduced, larger probe-forming apertures are suddenly available. Probes formed by such apertures exhibit a much improved and routinely sub-Angstrom diffraction-limited resolution, as well as a greatly increased probe current for spectroscopic work. The superb fineness of the electron beams and enormous magnifications now achievable make the STEM instrument one of the most sensitive scientific instruments developed by man, and this thesis will deal with two core issues that suddenly become important in this new aberration-corrected era. With this new-found sensitivity comes the risk of imaging distortion from outside influences such as acoustic or mechanical vibrations. These can corrupt the data in an unsatisfactory manner and counter the natural interpretability of the technique. Methods to identify and diagnose this distortion are discussed, and a new technique developed to restore the corrupted data is presented. Secondly, the subtleties of probe shape in the multi-pole corrected STEM are extensively evaluated via simulation, with the contrast-transfer capabilities across defocus explored in detail. From this investigation a new technique of STEM focal-series reconstruction (FSR) is developed to compensate for the small remnant aberrations that still persist – recovering the sample object function free from any optical distortion. In both cases the methodologies were developed into automated computer codes, and example restorations from the two techniques are shown (separately, although in principle the scan-corrected output is compatible with FSR). The performance of these results has been quantified with respect to several factors including image resolution, signal-to-noise ratio, sample drift, low-frequency instability, and quantitative image intensity. The techniques developed are offered as practical tools for the microscopist wishing to push the performance of their instrument just that little bit further.
Styles: APA, Harvard, Vancouver, ISO, etc.
21

Lan, Liang. "Data Mining Algorithms for Classification of Complex Biomedical Data." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214773.

Full text of the source
Abstract:
Computer and Information Science
Ph.D.
In my dissertation, I present research that contributes to solving the following three open problems from biomedical informatics: (1) multi-task approaches for microarray classification; (2) multi-label classification for gene and protein function prediction from multi-source biological data; (3) spatial scan for movement data. In microarray classification, samples belong to several predefined categories (e.g., cancer vs. control tissues) and the goal is to build a predictor that classifies a new tissue sample based on its microarray measurements. When faced with small-sample, high-dimensional microarray data, most machine learning algorithms would produce an overly complicated model that performs well on training data but poorly on new data. To reduce the risk of over-fitting, feature selection becomes an essential technique in microarray classification. However, standard feature selection algorithms are bound to underperform when the size of the microarray data is particularly small. The best remedy is to borrow strength from external microarray datasets. In this dissertation, I present two new multi-task feature filter methods which can improve the classification performance by utilizing external microarray data. The first method aggregates the feature selection results from multiple microarray classification tasks. The resulting multi-task feature selection can be shown to improve the quality of the selected features and lead to higher classification accuracy. The second method jointly selects a small gene set with maximal discriminative power and minimal redundancy across multiple classification tasks by solving an objective function with integer constraints. In the protein function prediction problem, gene functions are predicted from a predefined set of possible functions (e.g., the functions defined in the Gene Ontology). Gene function prediction is a complex classification problem characterized by the following aspects: (1) a single gene may have multiple functions; (2) the functions are organized in a hierarchy; (3) the training data for each function are unbalanced (far fewer positive than negative examples); (4) class labels may be missing; (5) multiple biological data sources are available, such as microarray data, genome sequence and protein-protein interactions. As participants in the 2011 Critical Assessment of Function Annotation (CAFA) challenge, our team achieved the highest AUC accuracy among 45 groups; in the competition, we gained by focusing on the fifth aspect of the problem. Thus, in this dissertation, I discuss several schemes to integrate the prediction scores from multiple data sources and show their results. Interestingly, the experimental results show that a simple averaging integration method is competitive with other state-of-the-art data integration methods. The original spatial scan algorithm is used for the detection of spatial overdensities: the discovery of spatial subregions with significantly higher scores according to some density measure. This algorithm is widely used for identifying clusters of disease cases (e.g., identifying environmental risk factors for child leukemia). However, the original spatial scan algorithm only works on static spatial data. In this dissertation, I propose one possible solution for spatial scan on movement data.
Temple University--Theses
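As an illustration of the multi-task filter idea in the first contribution described above (not the dissertation's exact method), the sketch below aggregates per-task univariate feature scores by average rank and keeps the top genes; the scores are random placeholders and the function name is an assumption.

```python
import numpy as np

def multitask_feature_ranking(score_matrix, top_k=50):
    """Aggregate per-task univariate feature scores (tasks x genes) into one
    ranking by averaging within-task ranks, then keep the top_k genes."""
    ranks = score_matrix.argsort(axis=1).argsort(axis=1)   # higher score -> higher rank
    mean_rank = ranks.mean(axis=0)                         # borrow strength across tasks
    return np.argsort(mean_rank)[::-1][:top_k]             # indices of selected genes

# Example: 3 external microarray tasks, 200 candidate genes with random scores.
scores = np.random.rand(3, 200)
selected = multitask_feature_ranking(scores, top_k=10)
```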
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Langø, Hans Martin, and Morten Tylden. "Surface Reconstruction and Stereoscopic Video Rendering from Laser Scan Generated Point Cloud Data." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9589.

Full text of the source
Abstract:

This paper contains studies of the process of creating three-dimensional objects from point clouds. The main goal of this master thesis was to process a point cloud of the Nidaros Cathedral, mainly as a pilot project to create a standard procedure for future projects with similar goals. The main challenges were twofold: processing the data and creating stereoscopic videos presenting it. The approach to solving these problems included the study of earlier work on similar subjects, learning algorithms and tools, and finding the best procedures through trial and error. This resulted in a visually pleasing model of the cathedral, as well as a stereoscopic video demonstrating it from all angles. The conclusion of the thesis is a pilot project demonstrating the different operations needed to overcome the challenges encountered during the work. The focus has been on presenting the procedures in such a way that they might be used in future projects of a similar nature.

Styles: APA, Harvard, Vancouver, ISO, etc.
23

Fraker, Shannon E. "Evaluation of Scan Methods Used in the Monitoring of Public Health Surveillance Data." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/29511.

Full text of the source
Abstract:
With the recent increase in the threat of biological terrorism as well as the continual risk of other diseases, research in public health surveillance and disease monitoring has grown tremendously. There is an abundance of data available in all sorts of forms. Hospitals, federal and local governments, and industries are all collecting data and developing new methods to be used in the detection of anomalies. Many of these methods are developed, applied to a real data set, and incorporated into software. This research, however, takes a different view of the evaluation of these methods. We feel that there needs to be solid statistical evaluation of proposed methods no matter the intended area of application. Using proof-by-example does not seem reasonable as the sole evaluation criterion, especially concerning methods that have the potential to have a great impact on our lives. For this reason, this research focuses on determining the properties of some of the most common anomaly detection methods. A distinction is made between metrics used for retrospective historical monitoring and those used for prospective ongoing monitoring, with the focus on the latter situation. Metrics such as the recurrence interval and time-to-signal measures are therefore the most applicable. These metrics, in conjunction with control charts such as exponentially weighted moving average (EWMA) charts and cumulative sum (CUSUM) charts, are examined. Two new time-to-signal measures, the average time between signal events and the average signal event length, are introduced to better compare the recurrence interval with the time-to-signal properties of surveillance schemes. The relationship commonly thought to exist between the recurrence interval and the average time to signal is shown not to exist once autocorrelation is present in the statistics used for monitoring. This means that closer consideration needs to be paid to the selection of which of these metrics to report. The properties of a commonly applied scan method are also studied carefully in the strictly temporal setting. The counts of incidences are assumed to occur independently over time and to follow a Poisson distribution. Simulations are used to evaluate the method under changes in various parameters. In addition, there are two methods proposed in the literature for the calculation of the p-value: an adjustment based on the tests for previous time periods, and the use of the recurrence interval with no adjustment for previous tests. The difference between these two methods is also considered. The quickness of the scan method in detecting an increase in the incidence rate, the number of false alarm events that occur, and how long the method continues to signal after the increased threat has passed are all of interest. These estimates from the scan method are compared to other attribute monitoring methods, mainly the Poisson CUSUM chart. It is shown that the Poisson CUSUM chart is typically faster in the detection of the increased incidence rate.
Ph. D.
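To make the monitoring setup concrete, here is a minimal one-sided Poisson CUSUM sketch of the kind compared against the scan method above; the rates, reference value and threshold are illustrative, not the dissertation's settings.

```python
import numpy as np

def poisson_cusum(counts, mu0, mu1, h):
    """One-sided Poisson CUSUM for detecting a shift from rate mu0 to mu1 > mu0.
    Uses the standard reference value k = (mu1 - mu0) / ln(mu1 / mu0) and
    signals when the statistic exceeds h."""
    k = (mu1 - mu0) / np.log(mu1 / mu0)
    s, signals = 0.0, []
    for t, x in enumerate(counts):
        s = max(0.0, s + x - k)
        if s > h:
            signals.append(t)   # time index of a signal event
            s = 0.0             # restart after signalling
    return signals

# 100 in-control days (rate 2/day) followed by an increased rate (5/day):
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2, 100), rng.poisson(5, 20)])
print(poisson_cusum(counts, mu0=2, mu1=5, h=5))
```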
Styles: APA, Harvard, Vancouver, ISO, etc.
24

Nolte, Zachary. "Mosquito popper: a multiplayer online game for 3D human body scan data segmentation." Thesis, University of Iowa, 2017. https://ir.uiowa.edu/etd/5585.

Full text of the source
Abstract:
Game with a purpose (GWAP) is a concept that aims to turn the hours everyday people spend playing video games into valuable data. The main objective of this research is to prove the feasibility of using the concept of GWAP for the segmentation and labeling of massive amounts of 3D human body scan data. The rationale behind using GWAP as a method for mesh segmentation and labeling is that the current methods use expensive, time-consuming computational algorithms to accomplish this task. Furthermore, the computer algorithms are not as detailed and specific as what natural human ability can achieve in segmentation tasks. The method presented in this paper overcomes the shortcomings of computer algorithms by introducing the concept of GWAP for human model segmentation. The actual process of segmenting and labeling the mesh becomes a form of entertainment rather than a tedious process, from which segmentation data is produced as a by-product. In addition, the natural capabilities of the human visual processing system are harnessed to identify and label various parts of the 3D human body shape, which in turn gives more detail and specificity in segmentation. The effectiveness of the proposed game-play mechanism is proven by experiments conducted in this study.
Styles: APA, Harvard, Vancouver, ISO, etc.
25

Li, Jian. "Investigating the effect of the DGNSS SCAT-I data link on VOR signal reception." Ohio : Ohio University, 1996. http://www.ohiolink.edu/etd/view.cgi?ohiou1178220159.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
26

Boyanapally, Deepthi. "MERGING OF FINGERPRINT SCANS OBTAINED FROM MULTIPLE CAMERAS IN 3D FINGERPRINT SCANNER SYSTEM." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/510.

Full text of the source
Abstract:
Fingerprints are the most accurate and widely used biometrics for human identification due to their uniqueness and their rapid and easy means of acquisition. Contact-based techniques of fingerprint acquisition, like traditional ink and live-scan methods, are not user friendly, reduce the capture area and cause deformation of fingerprint features. Also, improper skin conditions and worn friction ridges lead to poor-quality fingerprints. A non-contact, high-resolution, high-speed scanning system has been developed to acquire a 3D scan of a finger using the structured light illumination technique. The 3D scanner system consists of three cameras and a projector, with each camera producing a 3D scan of the finger. By merging the 3D scans obtained from the three cameras, a nail-to-nail fingerprint scan is obtained. However, the scans from the cameras do not merge perfectly. The main objective of this thesis is to calibrate the system well enough that the 3D scans obtained from the three cameras merge or align automatically. The error in merging is reduced by compensating for the radial distortion present in the projector of the scanner system. The error in merging after radial distortion correction is then measured using the projector coordinates of the scanner system.
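The projector compensation mentioned above rests on the standard polynomial radial-distortion model; the sketch below applies that forward model to a grid of normalised coordinates (the coefficients and helper name are assumptions, not the scanner's calibration values).

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2, center=(0.0, 0.0)):
    """Forward radial distortion of normalised coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4). Compensating the projector amounts to
    estimating k1, k2 and inverting this mapping."""
    c = np.asarray(center, dtype=float)
    p = xy - c
    r2 = np.sum(p ** 2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return c + p * factor

# Distort a 5x5 grid of normalised projector coordinates with toy coefficients.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5)), -1).reshape(-1, 2)
distorted = apply_radial_distortion(grid, k1=-0.05, k2=0.01)
```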
Styles: APA, Harvard, Vancouver, ISO, etc.
27

Peng, Peng. "A Measurement Approach to Understanding the Data Flow of Phishing From Attacker and Defender Perspectives." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/96401.

Full text of the source
Abstract:
Phishing has been a major concern due to its active role in recent data breaches and state-sponsored attacks. While existing works have extensively analyzed phishing websites and detection methods, there is still a limited understanding of the data flow of the phishing process. In this thesis, we perform an empirical measurement to draw a clear picture of the data flow of phishing from both the attacker and defender perspectives. First, from the attackers' perspective, we want to know how attackers collect the sensitive information stolen from victims throughout the end-to-end phishing attack process. We collected more than 179,000 real-world phishing URLs and built a measurement tool to feed fake credentials to live phishing sites and monitor how the credential information is shared with the phishing server and potentially with third-party collectors on the client side. We also obtained phishing kits to analyze how credentials are sent to attackers and third parties on the server side. Then, from the defenders' perspective, online scan engines such as VirusTotal are heavily used by phishing defenders to label phishing URLs; however, the data flow behind phishing detection by those scan engines is still unclear. So we built our own phishing websites and submitted them to VirusTotal for scanning to understand how VirusTotal works and the quality of its labels. Our study reveals the key mechanisms for information sharing during phishing attacks and the need to develop more rigorous methodologies to assess and make use of the labels obtained from VirusTotal.
Master of Science
A phishing attack is a fraudulent attempt to lure target users into giving away sensitive information such as usernames, passwords and credit card details. Cybercriminals usually build phishing websites (mimicking a trustworthy entity) and trick users into revealing important credentials. However, the data flow of the phishing process is still unclear. From the attackers' perspective, we want to know how attackers collect the sensitive information stolen by phishing websites. On the other hand, from the defenders' perspective, we are trying to figure out how online scan engines (e.g., VirusTotal) detect phishing URLs and how reliable their detection results are. In this thesis, we perform an empirical measurement to help answer the two questions above. By monitoring and analyzing a large number of real-world phishing websites, we draw a clear picture of the credential-sharing process during phishing attacks. Also, by building our own phishing websites and submitting them to VirusTotal for scanning, we find that more rigorous methodologies for using VirusTotal labels are desperately needed.
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Balduzzi, Mathilde. "Plant canopy modeling from Terrestrial LiDAR System distance and intensity data." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20203.

Повний текст джерела
Анотація:
Le défi de cette thèse est de reconstruire la géométrie 3D de la végétation à partir des données de distance et d'intensité fournies par un scanner de type LiDAR. Une méthode de « shape-from-shading » par propagation est développée pour être combinée avec une méthode de fusion de données type filtre de Kalman pour la reconstruction optimale des surfaces foliaires.-Introduction-L'analyse des données LiDAR nous permet de dire que la qualité du nuage de point est variable en fonction de la configuration de la mesure : lorsque le LiDAR mesure le bord d'une surface ou une surface fortement inclinée, il intègre dans sa mesure une partie de l'arrière plan. Ces configurations de mesures produisent des points aberrants. On retrouve souvent ce type de configuration pour la mesure de feuillages puisque ces derniers ont des géométries fragmentées et variables. Les scans sont en général de mauvaise qualité et la quantité d'objets présents dans le scan rend la suppression manuelle des points aberrants fastidieuse. L'objectif de cette thèse est de développer une méthodologie permettant d'intégrer les données d'intensité LiDAR aux distances pour corriger automatiquement ces points aberrants. -Shape-From-Shading-Le principe du Shape-From-Shading (SFS) est de retrouver les valeurs de distance à partir des intensités d'un objet pris en photo. La caméra (capteur LiDAR) et la source de lumière (laser LiDAR) ont la même direction et sont placés à l'infini relativement à la surface, ce qui rend l'effet de la distance sur l'intensité négligeable et l'hypothèse d'une caméra orthographique valide. En outre, la relation entre angle d'incidence lumière/surface et intensité est connue. Par la nature des données LiDAR, nous pourrons choisir la meilleure donnée entre distance et intensité à utiliser pour la reconstruction des surfaces foliaires. Nous mettons en place un algorithme de SFS par propagation le long des régions iso-intenses pour pouvoir intégrer la correction de la distance grâce à l'intensité via un filtre de type Kalman. -Design mathématique de la méthode-Les morceaux de surface correspondant aux régions iso-intenses sont des morceaux de surfaces dites d'égales pentes, ou de tas de sable. Nous allons utiliser ce type de surface pour reconstruire la géométrie 3D correspondant aux images d'intensité.Nous démontrons qu'à partir de la connaissance de la 3D d'un bord d'une région iso-intense, nous pouvons retrouver des surfaces paramétriques correspondant à la région iso-intense qui correspondent aux surfaces de tas de sable. L'initialisation de la région iso-intense initiale (graine de propagation) se fait grâce aux données de distance LiDAR. Les lignes de plus grandes pentes de ces surfaces sont générées. Par propagation de ces lignes (et donc génération du morceau de la surface en tas de sable), nous déterminons l'autre bord de la région iso-intense. Puis, par itération, nous propagerons la reconstruction de la surface. -Filtre de Kalman-Nous pouvons considérer cette propagation des lignes de plus grande pente comme étant le calcul d'une trajectoire sur la surface à reconstruire. Dans le cadre de notre étude, la donnée de distance est toujours disponible (données du scanner 3D). Ainsi il est possible de choisir, lors de la propagation, quelle donnée (distance ou intensité) utiliser pour la reconstruction. Ceci peut être fait notamment grâce à une fusion de type Kalman. -Algorithme-Pour procéder à la reconstruction par propagation, il est nécessaire d'hiérarchiser les domaines iso-intenses de l'image. 
Une fois que les graines de propagation sont repérées, elles sont initialisées avec l'image des distances. Enfin, pour chacun des nœuds de la hiérarchie (représentant un domaine iso-intense), la reconstruction d'un tas de sable est faite. C'est lors de cette dernière étape qu'une fusion de type Kalman peut être introduite
The challenge of this thesis is to reconstruct the 3D geometry of vegetation from the distance and intensity data provided by a LiDAR scanner. A shape-from-shading method based on propagation is developed and combined with a Kalman-type data fusion method to obtain an optimal reconstruction of the leaves. -Introduction- Analysis of the LiDAR data shows that point cloud quality is variable and depends on the measurement set-up: when the laser beam reaches the edge of a surface (or a steeply inclined surface), the measurement also integrates part of the background. Such configurations produce outliers, and they are common when measuring foliage, since foliage generally has a fragmented and complex shape. The resulting scans are of poor quality, and the number of leaves in a scan makes manual correction of outliers tedious. The goal of this thesis is to develop a methodology that integrates the LiDAR intensity data with the distance data in order to correct those outliers automatically. -Shape-from-shading- The principle of shape-from-shading (SFS) is to reconstruct distance values from the intensities of a photographed object. The camera (LiDAR sensor) and the light source (LiDAR laser) share the same direction and are placed at infinity relative to the surface, which makes the effect of distance on intensity negligible and the assumption of an orthographic camera valid. In addition, the relationship between the beam incidence angle and the intensity is known. Thanks to the LiDAR data analysis, we can choose the better of the two data sources, distance or intensity, for leaf reconstruction. An SFS algorithm that propagates along iso-intensity regions is developed; this type of algorithm allows a Kalman-type fusion method to be integrated. -Mathematical design of the method- The patches of surface corresponding to iso-intensity regions are patches of so-called constant-slope, or sand-pile, surfaces. We use these surfaces to rebuild the 3D geometry of the scanned surfaces. We show that, from knowledge of the 3D contour of an iso-intensity region, we can construct the corresponding sand-pile surfaces. The contour of the first iso-intensity region (the propagation seed) is initialized with the 3D LiDAR data. The lines of greatest slope of these surfaces are generated, and by propagating them (and thus generating the corresponding sand-pile surface) we build the other contour of the iso-intensity region. The reconstruction is then propagated iteratively. -Kalman filter- This propagation of the lines of greatest slope can be viewed as computing a trajectory on the surface being reconstructed. In our framework, the distance data are always available (3D scanner data), so at each propagation step it is possible to choose which data source (intensity or distance) to use for the reconstruction. This can be done with a Kalman-type fusion. -Algorithm- To carry out reconstruction by propagation, the iso-intensity regions of the image must first be ordered. Once the propagation seeds are found, they are initialized with the distances provided by the LiDAR. For each node of the hierarchy (corresponding to an iso-intensity region), the sand-pile surface reconstruction is performed; it is in this last step that the Kalman-type fusion is introduced. -Manuscript- The manuscript comprises five chapters. The first gives a short description of LiDAR technology and an overview of traditional 3D surface reconstruction from point clouds. The second reviews the state of the art in shape-from-shading methods. The third studies LiDAR intensity in order to define the strategy for correcting the distance effect and to establish the incidence-angle versus intensity relationship. The fourth gathers the principal results of the thesis: the theoretical development of the SFS algorithm, its description, and its results on synthetic images. Finally, the last chapter presents results of leaf reconstruction.
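The Kalman-style fusion of a shape-from-shading prediction with a LiDAR range can be illustrated with a scalar update. The sketch below is a generic one-dimensional Kalman fusion step under assumed variances, not the thesis's actual filter.

```python
def kalman_fuse(z_pred, var_pred, z_lidar, var_lidar):
    """Fuse a shape-from-shading depth prediction with a LiDAR range
    measurement (scalar Kalman update). Returns fused depth and variance."""
    gain = var_pred / (var_pred + var_lidar)
    z = z_pred + gain * (z_lidar - z_pred)
    var = (1.0 - gain) * var_pred
    return z, var

# Example: an unreliable LiDAR return near a leaf edge (large variance)
# barely moves the SFS prediction, while a clean return dominates it.
print(kalman_fuse(z_pred=2.00, var_pred=0.01, z_lidar=2.30, var_lidar=1.0))    # edge point
print(kalman_fuse(z_pred=2.00, var_pred=0.01, z_lidar=2.05, var_lidar=0.001))  # clean point
```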
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Kulunk, Hasan Salih. "Lakebed Characterization Using Side-Scan Data for Investigating the Latest Lake Superior Coastal Environment Conditions." Thesis, Michigan Technological University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10683388.

Повний текст джерела
Анотація:

This thesis provides a review of the development of hydrographic survey equipment and of supporting geospatial equipment and technology such as GPS. Using SonarWiz, a sonar image processing software package, lakebed classification methodologies were evaluated for mapping Buffalo Reef, located in Lake Superior near Gay, Michigan. The goal was to develop an approach to mapping the reef bed and delineating various components of the lake bottom, including stamp sands, which are migrating from the abandoned Gay copper-processing stamp mill to the reef. This contamination of the reef is having an adverse effect on habitats important to local flora and fauna.

Sonar data were collected with an Edgetech 4125 side-scan sonar and an Iver3, a fully autonomous underwater vehicle with bathymetry and side-scan capabilities. Both systems are owned and operated by the Great Lakes Research Center at Michigan Technological University.

Sonar image post-processing was completed using SonarWiz 7, ArcGIS 10.5 and ERDAS Imagine. The resulting classification is composed of six information classes: cobble, cobble/stamp sand with different intensity returns (low, medium and high), trend of stamp sand, sandy waves, and shadow, which mostly indicates rock or bedrock. The cobble/stamp sand class showed two distinct spectral classes for the Iver3 (high- and low-intensity returns) and three distinct spectral classes for the Edgetech 4125 (high-, medium- and low-intensity returns). The Edgetech 4125 classification excluded the shadow area automatically.

The final step was an interpretation of lakebed features based on ground truth samples and photographic images from the bottom surface. Recommendations for future research are presented.
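A hedged sketch of the kind of supervised lakebed classification described above: backscatter tiles are reduced to simple intensity statistics and fed to a random forest. The class names, features and synthetic tiles are illustrative only and are not taken from the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tile_features(tile):
    """Simple intensity statistics for one backscatter image tile."""
    t = tile.astype(float)
    return [t.mean(), t.std(), np.percentile(t, 10), np.percentile(t, 90)]

# Illustrative training data: in practice tiles would come from ground-truthed mosaics.
rng = np.random.default_rng(0)
tiles = [rng.normal(loc=m, scale=s, size=(32, 32)) for m, s in
         [(40, 5), (120, 20), (200, 15), (10, 3)] * 25]
labels = ["shadow", "stamp sand", "cobble", "shadow"] * 25   # hypothetical classes

X = np.array([tile_features(t) for t in tiles])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict([tile_features(rng.normal(118, 18, size=(32, 32)))]))
```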

Стилі APA, Harvard, Vancouver, ISO та ін.
30

Moritz, Malte, and Anton Pettersson. "Estimation of Local Map from Radar Data." Thesis, Linköpings universitet, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-111916.

Повний текст джерела
Анотація:
Autonomous features in vehicles are already a big part of the automotive field, and many companies are now looking for ways to make vehicles fully autonomous. Autonomous vehicles need information about the surrounding environment. This information is extracted from exteroceptive sensors, and today vehicles often use laser scanners for this purpose. Laser scanners are expensive and fragile, so it is interesting to investigate whether cheaper radar sensors could be used instead. One big challenge for autonomous vehicles is to use the exteroceptive sensors to extract the position of the vehicle while at the same time building a map of the environment. Simultaneous Localization and Mapping (SLAM) is a well-explored area for laser scanners but much less explored for radar. This thesis investigates whether radar sensors mounted on a truck can be used to create a map of the area where the truck drives. The truck was equipped with ego-motion sensors and radars, and the data from them were fused to obtain the position of the truck and a map of the surrounding environment; in other words, a SLAM algorithm was implemented. The map is represented by an Occupancy Grid Map (OGM), which should contain only static objects, and it is updated probabilistically using a binary Bayes filter. To localize the truck with the help of the motion sensors, an Extended Kalman Filter (EKF) is used together with a map and a scan matching method. All these methods are combined to create a SLAM algorithm. A range-rate filter is used to remove noise and non-static measurements from the radar data. The results of this thesis show that it is possible to use radar sensors to create a map of a truck's surroundings. The quality of the map is considered good, and details such as the space between parked trucks, signs and light posts can be distinguished. It has also been shown that methods with low performance on their own can work very well together with other methods in the SLAM algorithm. Overall the SLAM algorithm works well, but positioning problems might occur when driving in unexplored areas with few objects. A real-time system has also been implemented, and the map can be viewed while the truck is manoeuvred.
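The binary Bayes update of the Occupancy Grid Map mentioned above is commonly implemented in log-odds form. The sketch below shows that update for a single cell with illustrative sensor probabilities; it is not the thesis's implementation.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prev, hit, p_hit=0.7, p_miss=0.4):
    """Binary Bayes (log-odds) update of one occupancy grid cell.
    `hit` is True if a radar return fell in the cell, False if the beam
    passed through it. The sensor probabilities are illustrative."""
    return l_prev + logodds(p_hit if hit else p_miss)

l = 0.0                                  # prior log-odds (p = 0.5)
for observation in [True, True, False, True]:
    l = update_cell(l, observation)
p_occupied = 1.0 - 1.0 / (1.0 + np.exp(l))
print(round(p_occupied, 3))
```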
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Karlsson, Rasmus. "Exploring a video game AI bot that scans and reacts to its surroundings in real-time." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-76737.

Повний текст джерела
Анотація:
The buzz surrounding artificial intelligence continues to grow. AI is currently used in a wide variety of systems and appliances, such as video games, virtual personal assistants, and self-driving cars. This paper explores the possibility of a self-learning AI that can play the classic arcade game Q*BERT using only screenshots as input. It is tested on several different screen sizes, and the results are collected and compared with those of a human player as well as with results from previous research. The results are fairly positive: while the AI had a hard time matching the human player on average score, it came close to the highest score.
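A minimal sketch of the screenshot-only input pipeline such an agent typically relies on: frames are converted to small grayscale arrays and an action is picked epsilon-greedily from Q-values. The frame size, action count and stubbed Q-values are assumptions; the learning algorithm itself is omitted and nothing here reproduces the author's agent.

```python
import numpy as np

def preprocess(frame, out_size=(84, 84)):
    """Convert an RGB screenshot to a small grayscale array (naive
    nearest-neighbour downscale, enough for a sketch)."""
    gray = frame.mean(axis=2)
    ys = np.linspace(0, gray.shape[0] - 1, out_size[0]).astype(int)
    xs = np.linspace(0, gray.shape[1] - 1, out_size[1]).astype(int)
    return gray[np.ix_(ys, xs)] / 255.0

def epsilon_greedy(q_values, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
screenshot = rng.integers(0, 256, size=(210, 160, 3))   # fake Q*BERT frame
state = preprocess(screenshot)
q = np.zeros(6)                                          # stub Q-values, 6 actions assumed
print(state.shape, epsilon_greedy(q, epsilon=0.1, rng=rng))
```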
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Donglikar, Swapneel B. "Design for Testability Techniques to Optimize VLSI Test Cost." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/43712.

Повний текст джерела
Анотація:
High test data volume and long test application time are two major concerns when testing scan-based circuits. The Illinois Scan (ILS) architecture has been shown to be effective in addressing both issues: it achieves a high degree of test data compression, thereby reducing both test data volume and test application time. The degree of test data volume reduction depends on the fault coverage achievable in the broadcast mode, which in turn depends on the actual configuration of the individual scan chains, i.e., the number of chains and the mapping of the circuit's flip-flops to scan chain positions. Current methods for constructing scan chains in ILS are either ad hoc or use test pattern information from an a-priori automatic test pattern generation (ATPG) run. In this thesis, we present novel low-cost techniques to construct an ILS scan configuration for a given design. These techniques efficiently utilize circuit topology information and optimize the flip-flop assignment to scan chain locations without much compromise in broadcast-mode fault coverage, thereby eliminating the need for an a-priori ATPG run or any test set information. In addition, we propose a new scan architecture that combines the broadcast mode of ILS with the Random Access Scan architecture to enable further test data volume reduction, beyond that of a conventional ILS architecture configured with the aforementioned heuristics, at a reasonable area overhead. Experimental results on the ISCAS'89 benchmark circuits show that the proposed ILS configuration methods achieve on average 5% more fault coverage in the broadcast mode and on average 15% more reduction in test data volume and test application time than existing methods. The proposed architecture achieves, on average, an additional 9% and 33% reduction in test data volume and test application time, respectively, on top of our proposed ILS configuration heuristics.
Master of Science
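As a back-of-the-envelope illustration of why the broadcast mode of Illinois Scan reduces test application time (see the abstract above), the sketch below compares the shift cycles needed to load patterns through one long serial chain against loading several parallel chains in broadcast mode, with a serial fallback for patterns that cannot be broadcast. All numbers are illustrative, not from the thesis.

```python
def serial_scan_cost(n_flops, n_patterns):
    """One long chain: every pattern shifts through all flip-flops."""
    return n_patterns * n_flops

def broadcast_scan_cost(n_flops, n_chains, n_broadcast, n_serial):
    """ILS-style cost: broadcast patterns load all chains in parallel
    (chain length = n_flops / n_chains); patterns that fail in broadcast
    mode fall back to a full serial load."""
    chain_len = n_flops // n_chains
    return n_broadcast * chain_len + n_serial * n_flops

# Illustrative circuit: 10,000 flip-flops, 16 chains,
# 900 patterns applied in broadcast mode, 100 needing serial mode.
print(serial_scan_cost(10_000, 1_000))
print(broadcast_scan_cost(10_000, 16, 900, 100))
```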
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Read, Simon. "Methods for the improved implementation of the spatial scan statistic when applied to binary labelled point data." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555124.

Повний текст джерела
Анотація:
This thesis investigates means of improving, applying, and measuring the success of the Spatial Scan Statistic (SSS) when applied to binary labelled point data (BLPD). As the SSS is an established means of detecting anomalies (also known as clusters) in spatial data, this work has potential application in many fields, notably epidemiology. Firstly, the thesis considers the capacity of the SSS to correctly identify the presence of anomalies, irrespective of location. The most important contribution is the identification that p-values produced by the standard algorithm for implementing the SSS are sometimes conservative, and thus may lead to lower-than-expected statistical power. A novel means of rectifying this is presented, along with a study of how it can be used in conjunction with an existing technique (Gumbel smoothing) for reducing the computational expense of the SSS. A novel version of the SSS for BLPD is also derived and tested, together with an alternative algorithm for selecting circular scan windows. Secondly, the thesis considers the capacity of the SSS to correctly identify the location of anomalies. This is an under-researched area, and the work is relevant to all forms of data to which the SSS is applied, not just BLPD. A synthesis of current research is presented as a five-level framework, facilitating the comparison and hybridisation of existing spatial accuracy measures for the SSS. Two novel measures of spatial accuracy are derived, both compatible with this framework: one works in conjunction with power, the other is independent of power. Both use a single parameter to encapsulate complex information about spatial accuracy performance, which previously required two or more parameters, or an arbitrarily weighted combination of two or more parameters. All novel techniques are benchmark-tested against established software, and the statistical significance of performance improvements is measured.
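For readers unfamiliar with the statistic itself, the sketch below implements the standard Bernoulli (binary-label) spatial scan statistic with circular windows and a Monte Carlo p-value. It reflects the textbook formulation, not the thesis's improved variants or its correction for conservative p-values.

```python
import numpy as np

def llr_bernoulli(c, n, C, N):
    """Kulldorff log-likelihood ratio for the Bernoulli model:
    c of n points inside the window are cases, C of N overall."""
    def xlogx_ratio(a, b):                    # a * log(a / b), with 0 * log(0) = 0
        return a * np.log(a / b) if a > 0 else 0.0
    if n == 0 or n == N or c / n <= (C - c) / (N - n):
        return 0.0
    inside = xlogx_ratio(c, n) + xlogx_ratio(n - c, n)
    outside = xlogx_ratio(C - c, N - n) + xlogx_ratio(N - n - (C - c), N - n)
    null = xlogx_ratio(C, N) + xlogx_ratio(N - C, N)
    return inside + outside - null

def scan_statistic(xy, labels, radii, n_mc=99, seed=0):
    """Best circular window centred on data points, with a Monte Carlo
    p-value obtained by permuting the binary labels."""
    rng = np.random.default_rng(seed)
    N, C = len(labels), int(labels.sum())

    def best_llr(lab):
        best = 0.0
        for centre in xy:
            d = np.linalg.norm(xy - centre, axis=1)
            for r in radii:
                inside = d <= r
                best = max(best, llr_bernoulli(int(lab[inside].sum()), int(inside.sum()), C, N))
        return best

    observed = best_llr(labels)
    null = [best_llr(rng.permutation(labels)) for _ in range(n_mc)]
    p = (1 + sum(v >= observed for v in null)) / (n_mc + 1)
    return observed, p

xy = np.random.default_rng(1).uniform(size=(60, 2))
labels = (xy[:, 0] > 0.7).astype(int)          # synthetic cluster of cases on one side
print(scan_statistic(xy, labels, radii=[0.1, 0.2]))
```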
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Manuel, Melissa Barnes Ulrich Pamela V. Connell Lenda Jo. "Using 3D body scan measurement data and body shape assessment to build anthropometric profiles of tween girls." Auburn, Ala, 2009. http://hdl.handle.net/10415/1585.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Cho, Jang Ik. "Partial EM Procedure for Big-Data Linear Mixed Effects Model, and Generalized PPE for High-Dimensional Data in Julia." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case152845439167999.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
36

Persson, Andreas. "3D Scan-based Navigation using Multi-Level Surface Maps." Thesis, Örebro University, School of Science and Technology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-11211.

Повний текст джерела
Анотація:

The field of research connected to mobile robot navigation is much broader than the scope of this thesis. In this report, the navigation topic is therefore narrowed down primarily to the mapping and scan matching techniques used to achieve the overall navigation task. The work presented is based on an existing robot platform that provides 3D point clouds from 3D scanning, together with functionality for planning and following a path. The report presents how a scan matching algorithm is used to secure the alignment between successive point clouds. Since the computational time of nearest neighbour search is a commonly discussed aspect of scan matching, techniques for decreasing this computational time are also suggested. With the alignment secured, the challenge lies in representing the point clouds with a map model. Since the point clouds are three-dimensional, a mapping technique is presented that provides a rough 3D representation of the environment. A problem that arises with a 3D map representation is that the given path planning functionality requires a 2D representation. This is addressed by translating the 3D map at a specific height level into a 2D map usable for path planning, for which the report suggests a novel traversability analysis approach based on a tree structure.
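The nearest-neighbour cost discussed above is usually tamed with a k-d tree. The sketch below shows one point-to-point ICP-style alignment step using scipy's cKDTree and the SVD-based rigid-transform solution; it is a generic illustration, not the report's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: match each source point to its
    nearest target point (k-d tree), then solve for the rigid transform
    with the SVD-based Kabsch solution."""
    tree = cKDTree(target)
    _, idx = tree.query(source)              # nearest-neighbour correspondences
    matched = target[idx]

    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

rng = np.random.default_rng(0)
target = rng.uniform(size=(500, 3))
source = target + np.array([0.05, -0.02, 0.01])   # translated copy of the target cloud
R, t = icp_step(source, target)
print(np.round(t, 3))                              # roughly recovers the negative offset
```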

 

Стилі APA, Harvard, Vancouver, ISO та ін.
37

Dutton, James Allen. "Developing articulated human models from laser scan data for use as avatars in real time networked virtual environments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397086.

Повний текст джерела
Анотація:
Thesis (M.S. in Modeling, Virtual Environments, and Simulation)--Naval Postgraduate School, Sept. 2001.
Thesis advisors: Bachmann, Eric ; Yun, Xiaoping. "September 2001." Includes bibliographical references (p. 47-49). Also Available online.
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Lontoc-Roy, Melinda. "Three-dimensional visualization in situ and complexity analysis of crop root systems using CT scan data : a primer." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=82282.

Повний текст джерела
Анотація:
The importance of root systems for soil-based resource acquisition by plants has long motivated researchers to quantify the complexity of root system structures. However, most of those studies proceeded from 2-D spatial data, and thus lacked the relevance of a 3-D analysis. In this project, helical CT scanning was applied to study root systems with an unprecedented level of accuracy, using non-destructive and non-invasive 3-D imaging that allowed for a spatio-temporal analysis. The appropriate CT scan parameters and configuration were determined for root systems of maize seedlings grown in sand and loamy sand. It was found that the soil conditions allowing for better visualization were sand before watering and loamy sand after watering. Root systems were CT scanned and visualized either at a single moment in time or repeatedly on successive days. Complexity analysis was performed by estimating the fractal dimension on skeletonized 3-D images of root systems.
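The fractal-dimension estimate mentioned above is typically obtained by box counting. The sketch below computes a box-counting dimension for a skeletonized 3-D binary volume; the toy volume and box sizes are illustrative, not the thesis's data.

```python
import numpy as np

def box_count(volume, s):
    """Number of s x s x s boxes containing at least one foreground voxel."""
    z, y, x = volume.shape
    v = volume[:z - z % s, :y - y % s, :x - x % s]
    blocks = v.reshape(z // s, s, y // s, s, x // s, s)
    return int(blocks.any(axis=(1, 3, 5)).sum())

def fractal_dimension(volume, sizes=(2, 4, 8, 16)):
    """Box-counting dimension: slope of log N(s) against log(1/s)."""
    counts = [box_count(volume, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy 'root system': a few voxel lines inside a 64^3 volume.
vol = np.zeros((64, 64, 64), dtype=bool)
vol[:, 32, 32] = True
vol[32, :, 32] = True
vol[20:50, 20:50, 20] = np.eye(30, dtype=bool)
print(round(fractal_dimension(vol), 2))   # close to 1 for line-like structures
```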
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Querel, Richard Robert, and University of Lethbridge Faculty of Arts and Science. "IRMA calibrations and data analysis for telescope site selection." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2007, 2007. http://hdl.handle.net/10133/675.

Повний текст джерела
Анотація:
Our group has developed a 20 μm passive atmospheric water vapour monitor. The Infrared Radiometer for Millimetre Astronomy (IRMA) has been commissioned and deployed for site testing for the Thirty Meter Telescope (TMT) and the Giant Magellan Telescope (GMT). Measuring precipitable water vapour (PWV) requires both a sophisticated atmospheric model (BTRAM) and an instrument (IRMA). Atmospheric models depend on atmospheric profiles. Most profiles are generic in nature, representing only a latitude in some cases. Site-specific atmospheric profiles are required to accurately simulate the atmosphere above any location on Earth. These profiles can be created from publicly available archives of radiosonde data, that offer nearly global coverage. Having created a site-specific profile and model, it is necessary to determine the PWV sensitivity to the input parameter uncertainties used in the model. The instrument must also be properly calibrated. In this thesis, I describe the radiometric calibration of the IRMA instrument, and the creation and analysis of site-specific atmospheric models for use with the IRMA instrument in its capacity as an atmospheric water vapour monitor for site testing.
xii, 135 leaves : ill. ; 28 cm. --
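The sensitivity analysis described in the abstract above can be illustrated with a one-at-a-time finite-difference scheme around a black-box model. The toy function below merely stands in for BTRAM; the parameter names, values and uncertainties are invented for the example.

```python
def pwv_sensitivity(model, params, uncertainties, rel_step=0.01):
    """One-at-a-time sensitivity: finite-difference derivative of the model
    output with respect to each parameter, scaled by that parameter's
    assumed uncertainty."""
    base = model(**params)
    contributions = {}
    for name, value in params.items():
        step = abs(value) * rel_step or rel_step
        perturbed = dict(params, **{name: value + step})
        dpdx = (model(**perturbed) - base) / step
        contributions[name] = dpdx * uncertainties[name]
    return base, contributions

# Stand-in model: a toy expression, NOT the BTRAM radiative-transfer code.
def toy_pwv(surface_temp, lapse_rate, scale_height):
    return 0.02 * surface_temp - 5.0 * lapse_rate + 0.8 * scale_height

params = {"surface_temp": 280.0, "lapse_rate": 6.5, "scale_height": 2.0}
sigma = {"surface_temp": 2.0, "lapse_rate": 0.5, "scale_height": 0.2}
print(pwv_sensitivity(toy_pwv, params, sigma))
```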
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Joshi, Shriyanka. "Reverse Engineering of 3-D Point Cloud into NURBS Geometry." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1595849563494564.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Zafalon, Zaira Regina [UNESP]. "Scan for MARC: princípios sintáticos e semânticos de registros bibliográficos aplicados à conversão de dados analógicos para o formato MARC 21 bibliográfico." Universidade Estadual Paulista (UNESP), 2012. http://hdl.handle.net/11449/103386.

Повний текст джерела
Анотація:
The research presents as its central theme the study of the bibliographic record conversion process. The object of study is framed by an understanding of analog bibliographic record conversion to the MARC21 Bibliographic format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The thesis of this research is that the syntactic and semantic principles of bibliographic records, defined by the description and visualization schemes used in cataloguing and present in descriptive metadata structure standards and content standards, determine the process of converting bibliographic records to the MARC21 Bibliographic Format. In the light of this, the purpose of this research is to develop a theoretical study of the syntax and semantics of bibliographic records, grounded in the linguistic theories of Saussure and Hjelmslev, which can underlie the conversion of analog bibliographic records to the MARC21 Bibliographic Format using a computational interpreter. To this end, the general aim was to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, based on Saussurean and Hjelmslevian linguistic studies of the manifestations of human language, applicable to a computational interpreter designed for the conversion of bibliographic records to the MARC21 Bibliographic Format. To attain this goal, the following specific objectives were identified, in two groups, related respectively to the theoretical-conceptual model of bibliographic record syntax and semantics and to the conversion process of the records: to make explicit the relationship between the syntax and semantics of bibliographic records... (Complete abstract: click electronic access below)
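As a toy illustration of the target of such a conversion (and emphatically not the interpreter developed in the thesis), the sketch below maps a simple descriptive record to a few MARC21 fields. The tags used (100 main entry, 245 title, 260 publication) follow the public MARC21 bibliographic documentation; the mapping logic and the plain-dictionary representation are assumptions made for the example.

```python
def to_marc21(record):
    """Map a simple descriptive record (dict) to MARC21-style fields.
    Only a handful of tags are illustrated."""
    fields = []
    if "author" in record:
        fields.append(("100", "1#", {"a": record["author"]}))
    if "title" in record:
        fields.append(("245", "10", {"a": record["title"]}))
    if "publisher" in record or "year" in record:
        fields.append(("260", "##", {"b": record.get("publisher", ""),
                                     "c": record.get("year", "")}))
    return fields

card = {"author": "Zafalon, Zaira Regina",
        "title": "Scan for MARC",
        "publisher": "Universidade Estadual Paulista",
        "year": "2012"}
for tag, indicators, subfields in to_marc21(card):
    print(tag, indicators, " ".join(f"${c} {v}" for c, v in subfields.items() if v))
```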
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Anil, Engin Burak. "Utilization of As-is Building Information Models Obtained from Laser Scan Data for Evaluation of Earthquake Damaged Reinforced Concrete Buildings." Research Showcase @ CMU, 2015. http://repository.cmu.edu/dissertations/499.

Повний текст джерела
Анотація:
Objective, accurate, and fast assessment of damage to buildings after an earthquake is crucial for timely remediation of material losses and for the safety of building occupants. Currently, visual inspection of buildings for damage and manual identification of damage severity are the primary evaluation methods. However, visual inspections have a number of shortcomings. Several research studies have shown that visual observations and inspector judgment differ amongst inspectors in terms of the thoroughness and reliability of the inspection, the details included in inspection reports, and the results of the damage assessment. Automated damage assessment could help in evaluating damaged buildings by reducing the dependency on subjective data collection and evaluation of the damage observations. Laser scanning is a promising tool for field data collection for post-earthquake damage assessment, as laser scanners are able to produce accurate and dense 3D measurements of the environment. Laser scan data can be processed to extract damage indicators. Identifying the damage severity requires the damage indicators to be related to the building components in 3D space, as well as to the structural configuration of the building, the details of the reinforcement, and the actual material properties. A Building Information Model (BIM) within which a structural system and damage are represented can serve as the underlying information source for damage assessment and post-earthquake seismic performance evaluation. However, further research is required for utilizing laser scan data and as-is BIMs generated from laser scan data for storing and reasoning about damaged buildings. In order to address the challenges and needs stated above, (1) the unique characteristics of laser scan data, which can potentially limit the reliability of the scanner data for crack identification under certain scenarios, were investigated; (2) the information requirements for representing and reasoning about damage conditions were formalized; (3) a representation schema for damaged conditions was developed; and (4) reasoning mechanisms were studied for identifying the damage modes and severities of components using the identified damage parameters and structural properties. The research methods involved experiments to identify the characteristics of laser scanners for damage detection, investigation of damage assessment guidelines, and investigation and analysis of Building Information Modeling standards. The results of the investigation of damage assessment standards were used to identify the information requirements for the representation of damage and to develop the representation schema. Validation studies include: (1) validation of the information requirements by an analysis quantifying the sensitivity of damage assessment to the identified damage parameters; (2) validation of the generality of the representation schema for masonry components; and (3) validation of the reasoning mechanisms with a user study. The contributions include: (1) characterization of two laser scanners for detecting earthquake-induced cracks; (2) identification of the information requirements for visual assessment of earthquake damage on reinforced concrete shear walls; (3) a schema for representing earthquake damage to support visual assessment; and (4) an approach for identifying the damage mode and severity of reinforced concrete walls.
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Jett, David B. "Selection of flip-flops for partial scan paths by use of a statistical testability measure." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-12302008-063234/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Zafalon, Zaira Regina. "Scan for MARC : princípios sintáticos e semânticos de registros bibliográficos aplicados à conversão de dados analógicos para o formato MARC 21 bibliográfico /." Marília : [s.n.], 2012. http://hdl.handle.net/11449/103386.

Повний текст джерела
Анотація:
Orientador: Plácida Leopoldina Ventura Amorim da Costa Santos
Banca: Dulce Maria Baptista
Banca: Edberto Ferneda
Banca: Elisa Campos Machado
Banca: Ricardo César Gonçalves Sant'Ana
Resumo:A pesquisa apresenta como tema nuclear o estudo do processo de conversão de registros bibliográficos. Delimita-se o objeto de estudo pelo entendimento da conversão de registros bibliográficos analógicos para o formato MARC21 Bibliográfico, a partir da análise sintática e semântica de registros descritos segundo padrões de estrutura de metadados descritivos e padrões de conteúdo. A tese nesta pesquisa é a de que os princípios sintáticos e semânticos de registros bibliográficos, definidos pelos esquemas de descrição e de visualização na catalogação, presentes nos padrões de estrutura de metadados descritivos e nos padrões de conteúdo, determinam o processo de conversão de registros bibliográficos para o Formato MARC21 Bibliográfico. Em vista desse panorama, a proposição desta pesquisa é desenvolver um estudo teórico sobre a sintaxe e a semântica de registros bibliográficos, pelo viés da Linguística, com Saussure e Hjelmslev, que subsidiem a conversão de registros bibliográficos analógicos para o Formato MARC21 Bibliográfico em um interpretador computacional. Com esta proposta, estabelece-se, como objetivo geral, desenvolver um modelo teórico-conceitual de sintaxe e semântica em registros bibliográficos, a partir de estudos lingüísticos saussureanos e hjelmslevianos das manifestações da linguagem humana, que seja aplicável a um interpretador computacional voltado à conversão de registros bibliográficos ao formato MARC21 Bibliográfico. Para o alcance de tal objetivo recorre-se aos seguintes objetivos específicos, reunidos em dois grupos e voltados, respectivamente ao modelo teórico-conceitual da estrutura sintática e semântica de registros bibliográficos, e ao processo de conversão de seus registros: explicitar a relação entre a sintaxe e a semântica... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: The research presents as its central theme the study of the bibliographic record conversion process. The object of study is framed by an understanding of analog bibliographic record conversion to the MARC21 Bibliographic format, based on a syntactic and semantic analysis of records described according to descriptive metadata structure standards and content standards. The thesis of this research is that the syntactic and semantic principles of bibliographic records, defined by the description and visualization schemes used in cataloguing and present in descriptive metadata structure standards and content standards, determine the process of converting bibliographic records to the MARC21 Bibliographic Format. In the light of this, the purpose of this research is to develop a theoretical study of the syntax and semantics of bibliographic records, grounded in the linguistic theories of Saussure and Hjelmslev, which can underlie the conversion of analog bibliographic records to the MARC21 Bibliographic Format using a computational interpreter. To this end, the general aim was to develop a theoretical-conceptual model of the syntax and semantics of bibliographic records, based on Saussurean and Hjelmslevian linguistic studies of the manifestations of human language, applicable to a computational interpreter designed for the conversion of bibliographic records to the MARC21 Bibliographic Format. To attain this goal, the following specific objectives were identified, in two groups, related respectively to the theoretical-conceptual model of bibliographic record syntax and semantics and to the conversion process of the records: to make explicit the relationship between the syntax and semantics of bibliographic records... (Complete abstract: click electronic access below)
Doutor
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Feulner, Martin [Verfasser], and Sigrid [Akademischer Betreuer] Liede-Schumann. "Taxonomical use of floral scent data in apomictic taxa of Hieracium and Sorbus derived from hybridization / Martin Feulner. Betreuer: Sigrid Liede-Schumann." Bayreuth : Universität Bayreuth, 2013. http://d-nb.info/1059352567/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Simán, Frans Filip. "Assessment of Machine Learning Applied to X-Ray Fluorescence Core Scan Data from the Zinkgruvan Zn-Pb-Ag Deposit, Bergslagen, Sweden." Thesis, Luleå tekniska universitet, Geovetenskap och miljöteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-82050.

Повний текст джерела
Анотація:
Lithological core logging is a subjective and time-consuming endeavour that could possibly be automated; the question is whether, and to what extent, this automation would affect the resulting core logs. This study presents a case from the Zinkgruvan Zn-Pb-Ag mine, Bergslagen, Sweden, in which Classification and Regression Trees and K-means clustering on the Self-Organising Map were applied to X-Ray Fluorescence lithogeochemistry data derived from automated core scan technology. These two methods are assessed through comparison to manual core logging. It is found that the X-Ray Fluorescence data are not sufficiently accurate or precise for fully automated lithological classification, since not all elements are successfully quantified. Furthermore, not all lithologies can be distinguished by lithogeochemistry alone, further hindering automated lithological classification. This study concludes that: 1) K-means on the Self-Organising Map is the most successful approach, although this may be influenced by the method of domain validation; 2) the choice of ground truth for learning is important both for supervised learning and for assessing machine learning accuracy; and 3) geology, data resolution and choice of elements are important parameters for machine learning. Both the supervised method (Classification and Regression Trees) and the unsupervised method (K-means clustering applied to Self-Organising Maps) show potential to assist core logging procedures.
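A hedged sketch of the two families of methods named above, applied directly to element concentrations rather than to SOM node weights (that intermediate step is omitted): a CART classifier trained against a manual core log, and a K-means clustering of the same data. The synthetic "XRF" values and class names are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for XRF core-scan data: rows are scan points, columns are
# element concentrations (e.g. Si, Fe, Zn, Pb); the values are made up.
X = np.vstack([rng.normal([30, 5, 0.1, 0.1], 1.0, size=(100, 4)),    # "metavolcanic"
               rng.normal([20, 10, 3.0, 1.5], 1.0, size=(100, 4))])  # "mineralised"
logged = np.array(["metavolcanic"] * 100 + ["mineralised"] * 100)    # manual log as ground truth

# Supervised: CART trained against the manual core log.
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, logged)
print("CART accuracy vs manual log:", cart.score(X, logged))

# Unsupervised: K-means clusters compared with the manual log. (The thesis
# clusters SOM node weights rather than raw points; that step is omitted here.)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```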
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Mason, Terry. "ADVANCES IN WIDEBAND VHS CASSETTE RECORDING." International Foundation for Telemetering, 1992. http://hdl.handle.net/10150/608887.

Повний текст джерела
Анотація:
International Telemetering Conference Proceedings / October 26-29, 1992 / Town and Country Hotel and Convention Center, San Diego, California
In recent years, many designers have turned to digital techniques as a means of improving the fidelity of instrumentation data recorders. However, single- and multi-channel recorders based on professional VHS transports are now available which use innovative methods to achieve near-perfect timebase accuracy, inter-channel timing and group delay specifications for long-duration wideband analog recording applications. This paper discusses some of the interesting technical problems involved and demonstrates that VHS cassette recorders are now a convenient and low-cost proposition for high-precision multi-channel wideband data recording.
Стилі APA, Harvard, Vancouver, ISO та ін.
48

Schönström, Linus. "Programming a TEM for magnetic measurements : DMscript code for acquiring EMCD data in a single scan with a q-E STEM setup." Thesis, Uppsala universitet, Tillämpad materialvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-306167.

Повний текст джерела
Анотація:
Code written in the DigitalMicrograph® scripting language enables a new experimental design for acquiring the magnetic dichroism in EELS. Called the q-E STEM setup, it provides simultaneous acquisition of the dichroic pairs of spectra (eliminating major error sources) while preserving the real-space resolution of STEM. This gives the setup great potential for real-space maps of magnetic moments, which can be instrumental in furthering the understanding of, e.g., interfacial magnetic effects. The report includes a thorough presentation of the created acquisition routine, a detailed outline of future work and a brief introduction to the DMscript language.
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Ahlström, Daniel. "Minimizing memory requirements for deterministic test data in embedded testing." Thesis, Linköping University, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54655.

Повний текст джерела
Анотація:

Embedded and automated tests reduce maintenance costs for embedded systems installed in remote locations. Testing multiple components of an embedded system, connected on a scan chain, using deterministic test patterns stored in the system provides high fault coverage but requires large system memory. This thesis presents an approach to reduce test data memory requirements by using a test controller program that exploits the observation that a system contains multiple components of the same type. The program uses deterministic test patterns specific to each component type, stored in system memory, to create fully defined test patterns when needed. By storing deterministic test patterns specific to each component type, the program can reuse the patterns across multiple tests and several times within the same test. The program can also test parts of a system without affecting the normal functional operation of the remaining components and without increasing test data memory requirements. Two experiments were conducted to determine how much the test data memory requirements are reduced using the approach presented in this thesis. The results show up to a 26.4% reduction of test data memory requirements for the ITC'02 SOC test benchmarks and, on average, a 60% reduction for designs generated to gain statistical data.
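The expansion of per-type deterministic patterns into full scan-chain patterns can be sketched in a few lines. The component types, patterns and chain below are hypothetical; the point is only that storage scales with the number of component types rather than the number of instances.

```python
def build_chain_pattern(chain, patterns_by_type, pattern_index):
    """Concatenate the type-specific pattern for every component on the
    chain into one fully defined scan-chain pattern."""
    return "".join(patterns_by_type[ctype][pattern_index] for ctype in chain)

# Hypothetical system: three component types, four instances on the chain.
patterns_by_type = {
    "uart":  ["0101", "1100"],
    "timer": ["111000", "000111"],
    "gpio":  ["10", "01"],
}
chain = ["uart", "gpio", "uart", "timer"]

for i in range(2):
    print(f"pattern {i}:", build_chain_pattern(chain, patterns_by_type, i))

# Memory saved: store 2 patterns per type instead of 2 per instance.
stored = sum(len(p) for pats in patterns_by_type.values() for p in pats)
expanded = sum(len(build_chain_pattern(chain, patterns_by_type, i)) for i in range(2))
print("stored bits:", stored, "vs fully expanded bits:", expanded)
```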

Стилі APA, Harvard, Vancouver, ISO та ін.
50

Mccart, James A. "Goal Attainment On Long Tail Web Sites: An Information Foraging Approach." Scholar Commons, 2009. http://scholarcommons.usf.edu/etd/3686.

Повний текст джерела
Анотація:
This dissertation sought to explain goal achievement at limited-traffic “long tail” Web sites using Information Foraging Theory (IFT). The central thesis of IFT is that individuals are driven by a metaphorical sense of smell that guides them through patches of information in their environment. An information patch is an area of the search environment with similar information. Information scent is the driving force behind why a person makes a navigational selection amongst a group of competing options. As foragers are assumed to be rational, scent is a mechanism for reducing search costs by increasing the accuracy of judging which option leads to the information of value. IFT was originally developed for a “production rule” environment, where a user performs an action when the conditions of a rule are met. However, using IFT in clickstream research required conceptualizing the ideas of information scent and patches in a non-production-rule environment. To meet that end, this dissertation asked three research questions regarding (1) how to learn information patches, (2) how to learn trails of scent, and (3) how to combine both concepts to create a Clickstream Model of Information Foraging (CMIF). The learning of patches and trails was accomplished by using contrast sets, which distinguish between individuals who achieved a goal and those who did not. User- and site-centric versions of the CMIF, which extended and operationalized IFT, presented and evaluated hypotheses. The user-centric version had four hypotheses and examined product purchasing behavior from panel data, whereas the site-centric version had nine hypotheses and predicted contact form submission using data from a Web hosting company. In general, the results show that patches and trails exist on several Web sites, and the majority of hypotheses were supported in each version of the CMIF. This dissertation contributes to the literature by providing a theoretically grounded model that tests and extends IFT; introducing a methodology for learning patches and trails; detailing a methodology for preprocessing clickstream data for long tail Web sites; and focusing on traditionally under-studied long tail Web sites.
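A minimal sketch of the contrast-set idea used above for learning patches and trails: candidate page sets are kept when their support differs enough between sessions that achieved the goal and sessions that did not. The sessions, candidates and threshold are illustrative; the dissertation's actual algorithm is more involved.

```python
def support(itemset, sessions):
    """Fraction of sessions that contain every page in the itemset."""
    hits = sum(itemset <= set(s) for s in sessions)
    return hits / len(sessions)

def contrast_sets(candidates, achieved, not_achieved, min_diff=0.2):
    """Keep candidate page sets whose support differs between the two groups
    by at least min_diff (a simplified contrast-set criterion)."""
    results = []
    for cand in candidates:
        diff = support(cand, achieved) - support(cand, not_achieved)
        if abs(diff) >= min_diff:
            results.append((sorted(cand), round(diff, 2)))
    return results

# Hypothetical clickstreams: sets of pages visited per session.
achieved = [{"home", "pricing", "contact"}, {"home", "features", "contact"},
            {"pricing", "contact"}]
not_achieved = [{"home", "blog"}, {"home", "features"}, {"blog"}]
candidates = [{"contact"}, {"pricing"}, {"home"}, {"blog"}]
print(contrast_sets(candidates, achieved, not_achieved))
```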
Стилі APA, Harvard, Vancouver, ISO та ін.