Academic literature on the topic 'Scanner data'

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Scanner data.' Each source can be cited in the style you need: APA, MLA, Harvard, Chicago, Vancouver, and others. Where the metadata makes it available, the full text of a publication can be downloaded as a PDF and its abstract read online.

Journal articles on the topic "Scanner data"

1

Bohner, Lauren, Daniel Habor, Klaus Radermacher, Stefan Wolfart, and Juliana Marotti. "Scanning of a Dental Implant with a High-Frequency Ultrasound Scanner: A Pilot Study." Applied Sciences 11, no. 12 (June 14, 2021): 5494. http://dx.doi.org/10.3390/app11125494.

Abstract:
The purpose of this in vitro study was to assess the trueness of a dental implant scanned using an intraoral high-frequency ultrasound prototype and compared with conventional optical scanners. An acrylic resin cast containing a dental implant at position 11 was scanned with a fringe projection 3D sensor for use as a reference dataset. The same cast was scanned 10 times for each group. Ultrasound scanning was performed with a high-frequency probe (42 MHz, aperture diameter of 4 mm and focus length of 8 mm), and 3D images were reconstructed based on the depth of each surface point echo. Optical scans were performed in a laboratory and with an intraoral scanner. A region of interest consisting of the dental implant site was segmented and matched to the reference dataset. Trueness was defined as the closeness between experimental data and the reference surface. Statistical analysis was performed with one-way ANOVA and post-hoc tests with a significance level of p = 0.05. No statistical difference was found among the evaluated scanners. The mean deviation error was 57.40 ± 17.44 µm for the ultrasound scanner, 75.40 ± 41.43 µm for the laboratory scanner and 38.55 ± 24.34 µm for the intraoral scanner. The high-frequency ultrasound scanner showed similar trueness to optical scanners for digital implant impression.
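
For readers who want to reproduce the kind of comparison reported above, a minimal sketch follows: it approximates the deviation of each scan from the reference surface by nearest-neighbour distances and runs a one-way ANOVA across scanner groups. The synthetic surface, noise levels, and repeat counts are illustrative assumptions, not the authors' data or exact pipeline (Python, NumPy/SciPy).

# Minimal sketch: deviation of each scan from a reference surface approximated
# by nearest-neighbour distances, followed by a one-way ANOVA across scanner
# groups. All point sets and noise levels are synthetic and purely illustrative.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import f_oneway

def mean_deviation(scan_points, reference_points):
    """Mean unsigned distance from each scan point to the closest reference point (mm)."""
    distances, _ = cKDTree(reference_points).query(scan_points)
    return distances.mean()

# Dense reference surface standing in for the fringe-projection reference dataset.
xx, yy = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
reference = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(xx.size)])

rng = np.random.default_rng(0)
groups = {"ultrasound": [], "laboratory": [], "intraoral": []}
for name, noise in zip(groups, (0.06, 0.08, 0.04)):       # hypothetical noise levels in mm
    for _ in range(10):                                     # 10 repeated scans per scanner
        scan = reference + rng.normal(0.0, noise, reference.shape)
        groups[name].append(mean_deviation(scan, reference))

f_stat, p_value = f_oneway(*groups.values())                # one-way ANOVA across the three scanners
for name, values in groups.items():
    print(f"{name}: {np.mean(values) * 1000:.1f} ± {np.std(values) * 1000:.1f} µm")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")
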
2

Ruzgienė, Birutė, Renata Bagdžiūnaitė, and Vilma Ruginytė. "SCANNING AERIAL PHOTOS USING A NON-PROFESSIONAL SCANNER." Geodesy and Cartography 38, no. 3 (October 1, 2012): 118–21. http://dx.doi.org/10.3846/20296991.2012.728901.

Abstract:
For scanning analog aerial photographs, digital photogrammetry normally requires specific and expensive photogrammetric scanners. However, a simple A4-format scanner can be useful for some special photogrammetric tasks, which motivates analyzing the possibilities of scanning photographic material with such a device. The paper investigates the peculiarities of scanning analog aerial photos using a scanner that processes pictures smaller than A4 format. The achieved results are compared with digital data obtained using a professional photogrammetric scanner. Experimental photogrammetric measurements have shown that aerial photographs scanned by a non-professional scanner satisfy the accuracy requirements for topographic mapping at a scale of 1:5000.
3

Rabah, Chaima Ben, Gouenou Coatrieux, and Riadh Abdelfattah. "Boosting up Source Scanner Identification Using Wavelets and Convolutional Neural Networks." Traitement du Signal 37, no. 6 (December 31, 2020): 881–88. http://dx.doi.org/10.18280/ts.370601.

Abstract:
In this paper, we present a conceptually innovative method for source scanner identification (SSI), that is to say, identifying the scanner at the origin of a scanned document. Solutions from literature can distinguish between scanners of different brands and models but fail to differentiate between scanners of the same models. To overcome this issue, the approach we propose takes advantage of a convolutional neural network (CNN) to automatically extract intrinsic scanner features from the distribution of the coefficients of the diagonal high-frequency (HH) sub-band of the discrete stationary wavelet transform (SWT) of scanned images. Such information serves as a reliable characteristic to classify scanners of different/same brands and models. Experiments conducted on a set of 8 scanners yielded a model with an accuracy of 99.31% at the block level and 100% at the full image level, showcasing the potential of using deep learning for SSI and outperforming existing schemes from literature. The influence of the model’s parameters such as the input size, the training data size, the number of layers, and the number of nodes in the fully connected layer as well as the effect of the pre-processing step were investigated.
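
A minimal sketch of the feature pipeline described in the abstract is given below: the diagonal high-frequency (HH) sub-band of a level-1 stationary wavelet transform of an image block is computed with PyWavelets and passed to a small convolutional network. The block size, the 'haar' wavelet, and the network layout are assumptions for illustration, not the architecture published in the paper (Python, PyWavelets and PyTorch).

# Minimal sketch: SWT HH sub-band extraction plus a small CNN classifier over
# scanned-image blocks. Illustrative only; not the paper's exact architecture.
import numpy as np
import pywt
import torch
import torch.nn as nn

def hh_subband(block, wavelet="haar"):
    """Return the diagonal high-frequency (HH) coefficients of a level-1 2-D SWT."""
    (_, (_, _, cD)), = pywt.swt2(block.astype(float), wavelet, level=1)
    return cD

class ScannerCNN(nn.Module):
    def __init__(self, n_scanners=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_scanners)  # sized for 256x256 input blocks

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

block = np.random.randint(0, 256, size=(256, 256))            # stand-in for a scanned image block
hh = torch.tensor(hh_subband(block), dtype=torch.float32)[None, None]
logits = ScannerCNN()(hh)                                      # per-scanner scores for this block
print(logits.shape)                                            # torch.Size([1, 8])
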
4

Nestle, U., S. Kremp, D. Hellwig, A. Grgic, H. G. Buchholz, W. Mischke, C. Gromoll, et al. "Multi-centre calibration of an adaptive thresholding method for PET-based delineation of tumour volumes in radiotherapy planning of lung cancer." Nuklearmedizin 51, no. 03 (2012): 101–10. http://dx.doi.org/10.3413/nukmed-0452-11-12.

Abstract:
Purpose: To evaluate the calibration of an adaptive thresholding algorithm (contrast-oriented algorithm) for FDG PET-based delineation of tumour volumes in eleven centres with respect to scanner types and image data processing by phantom measurements. Methods: A cylindrical phantom with spheres of different diameters was filled with FDG realizing different signal-to-background ratios and scanned using 5 Siemens Biograph PET/CT scanners, 5 Philips Gemini PET/CT scanners, and one Siemens ECAT-ART PET scanner. All scans were analysed by the contrast-oriented algorithm implemented in two different software packages. For each site, the threshold SUVs of all spheres best matching the known sphere volumes were determined. Calibration parameters a and b were calculated for each combination of scanner and image-analysis software package. In addition, “scanner-type-specific” calibration curves were determined from all values obtained for each combination of scanner type and software package. Both kinds of calibration curves were used for volume delineation of the spheres. Results: Only minor differences in calibration parameters were observed for scanners of the same type (Δa ≤ 4%, Δb ≤ 14%) provided that identical imaging protocols were used whereas significant differences were found comparing calibration parameters of the ART scanner with those of scanners of different type (Δa ≤ 60%, Δb ≤ 54%). After calibration, for all scanners investigated the calculated SUV thresholds for auto-contouring did not differ significantly (all p > 0.58). The resulting sphere volumes deviated by less than –7% to +8% from the true values. Conclusion: After multi-centre calibration the use of the contrast-oriented algorithm for FDG PET-based delineation of tumour volumes in the different centres using different scanner types and specific imaging protocols is feasible.
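
The abstract does not spell out the functional form of the contrast-oriented algorithm, so the sketch below assumes a simple linear calibration, threshold SUV = a·SUV_signal + b·SUV_background, and fits a and b by least squares to hypothetical phantom sphere measurements; all numbers are made up for illustration.

# Minimal sketch of fitting two calibration parameters (a, b) for an adaptive
# threshold from phantom data. The linear form and the values are assumptions.
import numpy as np

# Hypothetical phantom measurements: one row per sphere / signal-to-background setting.
suv_signal     = np.array([12.0, 9.5, 7.0, 5.2, 3.8, 2.9])   # mean SUV inside the sphere
suv_background = np.array([ 1.5, 1.5, 1.4, 1.6, 1.5, 1.4])   # surrounding background SUV
suv_threshold  = np.array([ 3.4, 2.9, 2.4, 2.1, 1.9, 1.7])   # threshold best matching the known volume

# Least-squares estimate of a and b for one scanner / software combination.
design = np.column_stack([suv_signal, suv_background])
(a, b), *_ = np.linalg.lstsq(design, suv_threshold, rcond=None)
print(f"a = {a:.3f}, b = {b:.3f}")

def auto_contour_threshold(signal, background):
    """Threshold SUV used for auto-contouring on a calibrated scanner (assumed linear form)."""
    return a * signal + b * background

print(auto_contour_threshold(8.0, 1.5))
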
5

Stangeland, Marcus, Trond Engjom, Martin Mezl, Radovan Jirik, Odd Gilja, Georg Dimcevski, and Kim Nylund. "Interobserver Variation of the Bolus-and-Burst Method for Pancreatic Perfusion with Dynamic Contrast-Enhanced Ultrasound." Ultrasound International Open 03, no. 03 (June 2017): E99–E106. http://dx.doi.org/10.1055/s-0043-110475.

Abstract:
Purpose: Dynamic contrast-enhanced ultrasound (DCE-US) can be used for calculating organ perfusion. By combining bolus injection with burst replenishment, the actual mean transit time (MTT) can be estimated. Blood volume (BV) can be obtained by scaling the data to a vessel on the imaging plane. The study aim was to test interobserver agreement for repeated recordings using the same ultrasound scanner and agreement between results on two different scanner systems. Materials and Methods: Ten patients under evaluation for exocrine pancreatic failure were included. Each patient was scanned two times on a GE Logiq E9 scanner, by two different observers, and once on a Philips IU22 scanner, after a bolus of 1.5 ml Sonovue. A 60-second recording of contrast enhancement was performed before the burst and the scan continued for another 30 s for reperfusion. We performed data analysis using MATLAB-based DCE-US software. An artery in the same depth as the region of interest (ROI) was used for scaling. The measurements were compared using the intraclass correlation coefficient (ICC) and Bland Altman plots. Results: The interobserver agreement on the Logiq E9 for MTT (ICC=0.83, confidence interval (CI) 0.46–0.96) was excellent. There was poor agreement for MTT between the Logiq E9 and the IU22 (ICC=−0.084, CI −0.68–0.58). The interobserver agreement for blood volume measurements was excellent on the Logiq E9 (ICC=0.9286, CI 0.7250–0.98) and between scanners (ICC=0.86, CI=0.50–0.97). Conclusion: Interobserver agreement was excellent using the same scanner for both parameters and between scanners for BV, but the comparison between two scanners did not yield acceptable agreement for MTT. This was probably due to incomplete bursting of bubbles in some of the recordings on the IU22.
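
A Bland-Altman comparison of the kind used above can be sketched in a few lines; the paired MTT values below are hypothetical and only illustrate the bias and limits-of-agreement computation.

# Minimal sketch of a Bland-Altman agreement analysis. The MTT values are made up.
import numpy as np

def bland_altman(a, b):
    """Return bias and 95 % limits of agreement between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

mtt_observer1 = [21.4, 18.9, 25.2, 30.1, 22.8, 19.5, 27.3, 24.0, 20.7, 26.1]  # seconds, hypothetical
mtt_observer2 = [22.0, 19.4, 24.6, 29.0, 23.5, 20.1, 26.8, 23.2, 21.3, 25.4]

bias, lower, upper = bland_altman(mtt_observer1, mtt_observer2)
print(f"bias = {bias:.2f} s, limits of agreement = [{lower:.2f}, {upper:.2f}] s")
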
6

Provenzale, James M., Brian A. Taylor, Elisabeth A. Wilde, Michael Boss, and Walter Schneider. "Analysis of variability of fractional anisotropy values at 3T using a novel diffusion tensor imaging phantom." Neuroradiology Journal 31, no. 6 (July 24, 2018): 581–86. http://dx.doi.org/10.1177/1971400918789383.

Abstract:
We employed a novel diffusion tensor imaging phantom to study intra- and interscanner reproducibility on two 3T magnetic resonance (MR) scanners. Using a phantom containing thousands of hollow micron-size tubes in complex arrays, we performed two experiments using a b value of 1000 s/mm2 on two Siemens 3T Trio scanners. First, we performed 12-direction scans. Second, on one scanner, we performed two 64-direction protocols with different repetition times (TRs). We used a one-way analysis of variance to calculate differences between scanners and the Mann-Whitney U test to assess differences between 12-direction and 64-direction data. We calculated the coefficient of variation (CoV) for intrascanner and interscanner data. For 12-direction protocols, mean fractional anisotropy (FA) was 0.3003 for Scanner 1 (four scans) and 0.3094 for Scanner 2 (three scans). The lowest FA value on Scanner 1 was 2.56 standard deviations below the mean of Scanner 2. For 64-direction scans, mean FA was 0.2640 for 4000 ms TR and 0.2582 for 13,200 ms TR scans. For 12-direction scans, within-scanner CoV was 0.0326 for Scanner 1 and 0.0240 for Scanner 2; between-scanner CoV was 0.032. For 64-direction scans, CoV was 0.056 for TR 4000 ms and 0.0533 for TR 13,200 ms. The difference between median FA values of 12-direction and 64-direction scans was statistically significant (p < 0.001). We found relatively good reproducibility on any single MR scanner. FA values from one scanner were sometimes significantly below the mean FA of another scanner, which has important implications for clinical use of DTI.
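
The within- and between-scanner coefficient of variation (CoV = standard deviation divided by the mean) reported above can be computed as in the sketch below; the individual per-scan FA values are invented to roughly match the reported means, and the between-scanner definition used here (CoV of the two scanner means) is an assumption.

# Minimal sketch of within- and between-scanner CoV for phantom FA values.
import numpy as np

fa_scanner1 = np.array([0.292, 0.298, 0.305, 0.306])   # four 12-direction scans (hypothetical values)
fa_scanner2 = np.array([0.305, 0.309, 0.314])           # three 12-direction scans (hypothetical values)

def cov(x):
    """Coefficient of variation: sample standard deviation divided by the mean."""
    return np.std(x, ddof=1) / np.mean(x)

within_1 = cov(fa_scanner1)
within_2 = cov(fa_scanner2)
between = cov(np.array([fa_scanner1.mean(), fa_scanner2.mean()]))
print(f"within-scanner CoV: {within_1:.4f}, {within_2:.4f}; between-scanner CoV: {between:.4f}")
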
7

Xu, Ji Hong, Xiao Lin Dai, and Shu Ping Gao. "A Study on Data Acquisition from Sections of Virtual Coat Profile." Advanced Materials Research 230-232 (May 2011): 1204–9. http://dx.doi.org/10.4028/www.scientific.net/amr.230-232.1204.

Abstract:
Data were obtained by separately scanning a manikin and coats with a [TC]2 3D body scanner. The method, which uses the [TC]2 scanner as the experimental tool and a double conversion of the scanned data format to obtain sets of torso geometric sections, was analyzed. The main program source code for the torso processing is provided in the paper. Geometric algorithms for the point cloud data and curve data in these sections are provided, based on intercepting horizontal sections, vertical sections, and other arbitrary oblique sections through the torso geometric cross-section.
8

Chen, Kai, Kai Zhan, Xiaocong Yang, and Da Zhang. "Accuracy Improvement Method of a 3D Laser Scanner Based on the D-H Model." Shock and Vibration 2021 (May 25, 2021): 1–9. http://dx.doi.org/10.1155/2021/9965904.

Abstract:
The three-dimensional (3D) laser scanner, with characteristics such as the acquisition of huge point clouds and non-contact measurement, has revolutionized the surveying and mapping industry. Nonetheless, guaranteeing measurement precision remains the critical factor that determines the quality of a 3D laser scanner. Hence, this study proposes an error analysis and calibration method for 3D laser scanners based on the D-H model: it applies the D-H model from robotics to the 3D laser scanner coordinate system for calculating point cloud coordinates and derives the corresponding error model, comprehensively analyzes six external parameters and seven inner structure parameters that affect the point coordinate error, and designs two calibration platforms for the inner structure parameters. To validate the proposed method, a SOKKIA total station and the BLSS-PE 3D laser scanner were used to obtain the centre coordinates of test target spheres, after which the external parameters were evaluated and the point coordinates corrected. Comparing point coordinates computed with and without the inner structure parameters, the experiment revealed that the BLSS-PE 3D laser scanner's precision improved when the inner structure parameters were considered, demonstrating that the error analysis and calibration method is correct and feasible.
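
The Denavit-Hartenberg (D-H) model mentioned above chains 4×4 homogeneous transforms, one per rotation axis, to map a range measurement into Cartesian coordinates. A minimal sketch follows; the parameter values are placeholders rather than the BLSS-PE scanner's actual geometry.

# Minimal sketch of a D-H chain mapping a range measurement to XYZ coordinates.
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard 4x4 D-H homogeneous transform for one joint/axis."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Horizontal scan angle, then vertical mirror angle, then the measured range along
# the beam; the fixed offsets (d, a, alpha) stand in for the "inner structure
# parameters" that calibration would refine.
horizontal = dh_matrix(theta=np.radians(30.0), d=0.05, a=0.0, alpha=np.pi / 2)
vertical   = dh_matrix(theta=np.radians(10.0), d=0.0,  a=0.02, alpha=0.0)
beam_point = np.array([12.345, 0.0, 0.0, 1.0])            # measured range along the beam axis (m)
xyz = (horizontal @ vertical @ beam_point)[:3]
print(xyz)
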
9

Elbrecht, Pirjo, Jaak Henno, and Knut Joosep Palm. "Body Measurements Extraction from 3D Scanner Data." Applied Mechanics and Materials 339 (July 2013): 372–77. http://dx.doi.org/10.4028/www.scientific.net/amm.339.372.

Abstract:
The growing power of computing and the development of 3D-graphics methods for human body modeling and simulation, together with 3D image capture technologies based on 3D scanners, have driven the rapid development of digital tailoring: a set of methods in which made-to-measure clothing is produced by 3D scanning a customer, extracting the essential measurements from the obtained point cloud, and then automatically producing a garment matching the customer's exact measures. Extracting exact measures from the roughly 200,000 data points produced by a 3D scanner is a complex problem that has not yet been well investigated.
10

Liebold, F., and H. G. Maas. "Integrated Georeferencing of LiDAR and Camera Data Acquired from a Moving Platform." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 191–96. http://dx.doi.org/10.5194/isprsarchives-xl-3-191-2014.

Abstract:
This paper presents an approach for modeling the trajectory of a moving platform equipped with a laser scanner and a camera. In most cases, GNSS and INS are used to determine the orientation of the platform, but GNSS is sometimes unavailable, most notably in indoor applications, and INS suffers from poor error propagation without GNSS support. In addition, the accuracy of GNSS and low-cost INS is limited and often not equivalent to the accuracy potential of laser scanners. For the camera, there is a well-known alternative for obtaining the orientation parameters via triangulation, for instance by employing structure-from-motion techniques. It is more challenging to find an alternative for the laser scanner because of its sequential data acquisition. In the approach shown here, we propose to use a camera in combination with structure-from-motion techniques as the basis for determining the laser scanner trajectory parameters. For that purpose, we use piece-wise polynomial models of the trajectory, supported by time-stamped matches between laser scanner and camera data.
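
The piece-wise polynomial trajectory idea can be sketched as follows: fit a low-order polynomial to time-stamped camera poses per time segment and evaluate it at the laser scanner's time stamps. Segment length, polynomial degree, and the pose values are illustrative assumptions.

# Minimal sketch: piece-wise polynomial fit of one trajectory component.
import numpy as np

t_camera = np.linspace(0.0, 10.0, 51)                          # camera frame time stamps (s)
x_camera = 0.8 * t_camera + 0.05 * np.sin(1.3 * t_camera)      # one pose component (hypothetical)

def piecewise_polyfit(t, x, segment=2.5, degree=3):
    """Fit one polynomial per time segment; return a list of (t_start, t_end, coeffs)."""
    pieces = []
    for start in np.arange(t.min(), t.max(), segment):
        mask = (t >= start) & (t <= start + segment)
        if mask.sum() > degree:
            pieces.append((start, start + segment, np.polyfit(t[mask], x[mask], degree)))
    return pieces

def evaluate(pieces, t_query):
    for start, end, coeffs in pieces:
        if start <= t_query <= end:
            return np.polyval(coeffs, t_query)
    raise ValueError("time stamp outside fitted trajectory")

pieces = piecewise_polyfit(t_camera, x_camera)
print(evaluate(pieces, 3.7))    # interpolated platform position component at a laser time stamp
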

Dissertations / Theses on the topic "Scanner data"

1

Bae, Kwang-Ho. "Automated registration of unorganised point clouds from terrestrial laser scanners." Curtin University of Technology, Department of Spatial Sciences, 2006. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=16596.

Abstract:
Laser scanners provide a three-dimensional sampled representation of the surfaces of objects. The spatial resolution of the data is much higher than that of conventional surveying methods. The data collected from different locations of a laser scanner must be transformed into a common coordinate system. If good a priori alignment is provided and the point clouds share a large overlapping region, existing registration methods, such as the Iterative Closest Point (ICP) or Chen and Medioni’s method, work well. In practical applications of laser scanners, partially overlapping and unorganised point clouds are provided without good initial alignment. In these cases, the existing registration methods are not appropriate since it becomes very difficult to find the correspondence of the point clouds. A registration method, the Geometric Primitive ICP with the RANSAC (GPICPR), using geometric primitives, neighbourhood search, the positional uncertainty of laser scanners, and an outlier removal procedure is proposed in this thesis. The change of geometric curvature and approximate normal vector of the surface formed by a point and its neighbourhood are used for selecting the possible correspondences of point clouds. In addition, an explicit expression of the position uncertainty of measurement by laser scanners is presented in this dissertation and this position uncertainty is utilised to estimate the precision and accuracy of the estimated relative transformation parameters between point clouds. The GP-ICPR was tested with both simulated data and datasets from close range and terrestrial laser scanners in terms of its precision, accuracy, and convergence region. It was shown that the GP-ICPR improved the precision of the estimated relative transformation parameters as much as a factor of 5.
In addition, the rotational convergence region of the GP-ICPR on the order of 10°, which is much larger than the ICP or its variants, provides a window of opportunity to utilise this automated registration method in practical applications such as terrestrial surveying and deformation monitoring.
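
For orientation, the sketch below shows a plain point-to-point ICP skeleton (nearest-neighbour correspondences plus an SVD-based rigid fit). The GP-ICPR method described in the thesis additionally uses geometric primitives, the positional uncertainty of the scanner, and RANSAC-style outlier removal, none of which is reproduced here.

# Minimal point-to-point ICP skeleton on synthetic point clouds.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)                          # nearest-neighbour correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current

rng = np.random.default_rng(1)
target = rng.uniform(0, 1, (2000, 3))
angle = np.radians(8.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])        # misaligned copy of the target cloud
aligned = icp(source, target)
print(np.abs(aligned - target).mean())                         # small residual after alignment
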
2

Foekens, Eijte Willem. "Scanner data based marketing modelling : empirical applications /." Capelle a/d IJssel : Labyrint Publ, 1995. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007021048&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

3

Trost, Daniel Roland. "Organic produce demand estimation utilizing retail scanner data." Thesis, Montana State University, 1999. http://etd.lib.montana.edu/etd/1999/trost/TrostD1999.pdf.

Abstract:
Retail demand relationships for organic and non-organic bananas, garlic, onions, and potatoes are examined using scanner data from a retail co-operative food store located in Bozeman, Montana. A level version Rotterdam demand specification is used in a six-equation system to estimate Hicksian demand elasticities. The own-price elasticity for organic onions is negative and significant. All other own-price elasticities are not significantly different from zero. This indicates consumers may not be very price sensitive for the goods in question. With few exceptions, the cross-price elasticities which are significant are also positive. Income elasticities are mostly significant and positive. Elasticity measurement may be somewhat imprecise due to a lack of variability in prices and an ambiguous error structure. Key factors influencing the quantities of the produce items purchased include the number of children in a household, the average age of adults in a household, and employment status of the primary grocery shopper. Educational status did not have any significant impact on quantities purchased.
4

Tóvári, Dániel. "Segmentation Based Classification of Airborne Laser Scanner Data." [S.l. : s.n.], 2006. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000006285.

5

Preuksakarn, Chakkrit. "Reconstructing plant architecture from 3D laser scanner data." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20116/document.

Abstract:
In the last decade, very realistic rendering of plant architectures have been produced in computer graphics applications. However, in the context of biology and agronomy, acquisition of accurate models of real plants is still a tedious task and a major bottleneck for the construction of quantitative models of plant development. Recently, 3D laser scanners made it possible to acquire 3D images on which each pixel has an associate depth corresponding to the distance between the scanner and the pinpointed surface of the object. Standard geometrical reconstructions fail on plants structures as they usually contain a complex set of discontinuous or branching surfaces distributed in space with varying orientations. In this thesis, we present a method for reconstructing virtual models of plants from laser scanning of real-world vegetation. Measuring plants with laser scanners produces data with different levels of precision. Points set are usually dense on the surface of the main branches, but only sparsely cover thin branches. The core of our method is to iteratively create the skeletal structure of the plant according to local density of point set. This is achieved thanks to a method that locally adapts to the levels of precision of the data by combining a contraction phase and a local point tracking algorithm. In addition, we present a quantitative evaluation procedure to compare our reconstructions against expertised structures of real plants. For this, we first explore the use of an edit distance between tree graphs. Alternatively, we formalize the comparison as an assignment problem to find the best matching between the two structures and quantify their differences
6

Töpel, Johanna. "Initial Analysis and Visualization of Waveform Laser Scanner Data." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2864.

Abstract:

Conventional airborne laser scanner systems output the three-dimensional coordinates of the surface location hit by the laser pulse. The data storage capacity and processing speeds available today have made it possible to digitally sample and store the entire reflected waveform instead of only extracting the coordinates. Research has shown that return waveforms can give even more detailed insights into the vertical structure of surface objects, surface slope, roughness and reflectivity than the conventional systems. One of the most important advantages of registering the waveforms is that it gives users the possibility to define for themselves how range is calculated in post-processing.

In this thesis different techniques have been tested to visualize a waveform data set in order to get a better understanding of the waveforms and how they can be used to improve methods for classification of ground objects.

A pulse detection algorithm based on the EM algorithm has been implemented and tested. The algorithm outputs the position and width of the echo pulses. One result of this thesis is that echo pulses reflected by vegetation tend to be wider than those reflected by, for example, a road. Another is that up to five echo pulses can be detected, compared to the two echo pulses detected by the conventional system.
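
The echo positions and widths discussed above can be recovered from a waveform by fitting a sum of Gaussians; the sketch below uses non-linear least squares rather than the thesis's EM formulation, and the waveform itself is synthetic.

# Minimal sketch: recover echo position and width by fitting two Gaussians to a
# synthetic return waveform (least squares stands in for the EM approach).
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

t = np.arange(0, 60.0, 0.5)                                   # sample times (ns)
truth = two_gaussians(t, 1.0, 20.0, 1.5, 0.6, 35.0, 3.0)      # narrow ground echo + wider vegetation echo
waveform = truth + np.random.default_rng(2).normal(0, 0.02, t.size)

p0 = [1.0, 18.0, 2.0, 0.5, 33.0, 2.0]                          # rough initial guess (peak picking in practice)
params, _ = curve_fit(two_gaussians, t, waveform, p0=p0)
for i in (0, 3):
    a, m, s = params[i:i + 3]
    print(f"echo at {m:.2f} ns, width (sigma) {s:.2f} ns, amplitude {a:.2f}")
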

7

Payette, Francois. "Applications of a sampling strategy for the ERBE scanner data." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61784.

8

Henning, Jason Gregory. "Modeling Forest Canopy Distribution from Ground-Based Laser Scanner Data." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28431.

Abstract:
A commercially available, tripod mounted, ground-based laser scanner was used to assess forest canopies and measure individual tree parameters. The instrument is comparable to scanning airborne light detection and ranging (lidar) technology but gathers data at higher resolution over a more limited scale. The raw data consist of a series of range measurements to visible surfaces taken at known angles relative to the scanner. Data were translated into three dimensional (3D) point clouds with points corresponding to surfaces visible from the scanner vantage point. A 20 m x 40 m permanent plot located in upland deciduous forest at Coweeta, NC was assessed with 41 and 45 scans gathered during periods of leaf-on and leaf-off, respectively. Data management and summary needs were addressed, focusing on the development of registration methods to align point clouds collected from multiple vantage points and minimize the volume of the plot canopy occluded from the scanner's view. Automated algorithms were developed to extract points representing tree bole surfaces, bole centers and ground surfaces. The extracted points served as the control surfaces necessary for registration. Occlusion was minimized by combining aligned point clouds captured from multiple vantage points with 0.1% and 0.34% of the volume scanned being occluded from view under leaf-off and leaf-on conditions, respectively. The point cloud data were summarized to estimate individual tree parameters including diameter at breast height (dbh), upper stem diameters, branch heights and XY positions of trees on the plot. Estimated tree positions were, on average, within 0.4 m of tree positions measured independently on the plot. Canopy height models, digital terrain models and 3D maps of the density of canopy surfaces were created using aligned point cloud data. Finally spatially explicit models of the horizontal and vertical distribution of plant area index (PAI) and leaf area index (LAI) were generated as examples of useful data summaries that cannot be practically collected using existing methods.
Ph. D.
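
One of the individual-tree parameters mentioned above, stem diameter, is commonly estimated by fitting a circle to a thin horizontal slice of stem points. The sketch below uses an algebraic (Kasa) least-squares circle fit on synthetic data; it is not the dissertation's algorithm, only an illustration of the idea.

# Minimal sketch: dbh estimate from a horizontal slice of stem points via a Kasa circle fit.
import numpy as np

def fit_circle(xy):
    """Return centre (cx, cy) and radius of the least-squares circle through 2-D points."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (a1, a2, a3), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = a1 / 2.0, a2 / 2.0
    r = np.sqrt(a3 + cx ** 2 + cy ** 2)
    return cx, cy, r

rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, 400)
radius_true = 0.21                                    # a 42 cm dbh tree
slice_points = np.column_stack([
    1.5 + radius_true * np.cos(angles),               # stem centred at x = 1.5 m, y = -0.8 m
    -0.8 + radius_true * np.sin(angles),
]) + rng.normal(0, 0.004, (400, 2))                   # scanner noise

cx, cy, r = fit_circle(slice_points)
print(f"stem position ({cx:.2f}, {cy:.2f}) m, dbh = {2 * r * 100:.1f} cm")
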
9

Nalani, Hetti Arachchige. "Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-159872.

Abstract:
Up-to-date 3D urban models are becoming increasingly important in various urban application areas, such as urban planning, virtual tourism, and navigation systems. Many of these applications often demand the modelling of 3D buildings, enriched with façade information, and also single trees among other urban objects. Nowadays, Mobile Laser Scanning (MLS) technique is being progressively used to capture objects in urban settings, thus becoming a leading data source for the modelling of these two urban objects. The 3D point clouds of urban scenes consist of large amounts of data representing numerous objects with significant size variability, complex and incomplete structures, and holes (noise and data gaps) or variable point densities. For this reason, novel strategies on processing of mobile laser scanning point clouds, in terms of the extraction and modelling of salient façade structures and trees, are of vital importance. The present study proposes two new methods for the reconstruction of building façades and the extraction of trees from MLS point clouds. The first method aims at the reconstruction of building façades with explicit semantic information such as windows, doors and balconies. It runs automatically during all processing steps. For this purpose, several algorithms are introduced based on the general knowledge on the geometric shape and structural arrangement of façade features. The initial classification has been performed using a local height histogram analysis together with a planar growing method, which allows for classifying points as object and ground points. The point cloud that has been labelled as object points is segmented into planar surfaces that could be regarded as the main entity in the feature recognition process. Knowledge of the building structure is used to define rules and constraints, which provide essential guidance for recognizing façade features and reconstructing their geometric models. In order to recognise features on a wall such as windows and doors, a hole-based method is implemented. Some holes that resulted from occlusion could subsequently be eliminated by means of a new rule-based algorithm. Boundary segments of a feature are connected into a polygon representing the geometric model by introducing a primitive shape based method, in which topological relations are analysed taking into account the prior knowledge about the primitive shapes. Possible outlines are determined from the edge points detected from the angle-based method. The repetitive patterns and similarities are exploited to rectify geometrical and topological inaccuracies of the reconstructed models. Apart from developing the 3D façade model reconstruction scheme, the research focuses on individual tree segmentation and derivation of attributes of urban trees. The second method aims at extracting individual trees from the remaining point clouds. Knowledge about trees specially pertaining to urban areas is used in the process of tree extraction. An innovative shape based approach is developed to transfer this knowledge to machine language. The usage of principal direction for identifying stems is introduced, which consists of searching point segments representing a tree stem. The output of the algorithm is, segmented individual trees that can be used to derive accurate information about the size and locations of each individual tree. The reliability of the two methods is verified against three different data sets obtained from different laser scanner systems. 
The results of both methods are quantitatively evaluated using a set of measures pertaining to the quality of the façade reconstruction and tree extraction. The performance of the developed algorithms referring to the façade reconstruction, tree stem detection and the delineation of individual tree crowns as well as their limitations are discussed. The results show that MLS point clouds are suited to document urban objects rich in details. From the obtained results, accurate measurements of the most important attributes relevant to the both objects (building façades and trees), such as window height and width, area, stem diameter, tree height, and crown area are obtained acceptably. The entire approach is suitable for the reconstruction of building façades and for the extracting trees correctly from other various urban objects, especially pole-like objects. Therefore, both methods are feasible to cope with data of heterogeneous quality. In addition, they provide flexible frameworks, from which many extensions can be envisioned
10

Natter, Martin, and Markus Feurstein. "Correcting for CBC model bias. A hybrid scanner data - conjoint model." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/880/1/document.pdf.

Abstract:
Choice-Based Conjoint (CBC) models are often used for pricing decisions, especially when scanner data models cannot be applied. To date, it is unclear how CBC models perform in terms of forecasting real-world shop data. In this contribution, we measure the performance of a Latent Class CBC model not by means of an experimental hold-out sample but via aggregate scanner data. We find that the CBC model does not accurately predict real-world market shares, thus leading to wrong pricing decisions. In order to improve its forecasting performance, we propose a correction scheme based on scanner data. Our empirical analysis shows that the hybrid method improves the performance measures considerably. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"

Books on the topic "Scanner data"

1

[Name missing]. Scanner data and price indexes. Chicago, IL: University of Chicago Press, 2002.

2

McCann, John M., and John P. Gallagher. Expert Systems for Scanner Data Environments. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-011-3923-6.

3

Cavuoto, James, ed. The color scanner book. Torrance, California: Micro Pub. Press, 1995.

4

Campbell, Jeffrey R. Rigid prices: Evidence from U.S. scanner data. [Chicago, Ill.]: Federal Reserve Bank of Chicago, 2005.

5

Sithole, George. Segmentation and classification of airborne laser scanner data. Delft: Nederlandse Commissie voor Geodesie, 2005.

6

Beale, Stephen. The scanner book: A complete guide to the use and applications of desktop scanners. Torrance, California: Micro Pub. Co., 1989.

7

Beale, Stephen. The scanner handbook: A complete guide to the use and applications of desktop scanners. Oxford: Heinemann Newtech, 1990.

8

Tibrewala, Vikas. Nonstationary conditional trend analysis: An application to scanner panel data. Fontainebleau: INSEAD, 1992.

9

Bucklin, Randolph E. Commercial adoption of advances in the analysis of scanner data. Cambridge, Mass: Marketing Science Institute, 1998.

10

Weinberg, Bruce. Building an information strategy for scanner data: A conference summary. Cambridge, Mass: Marketing Science Institute, 1989.


Book chapters on the topic "Scanner data"

1

Enesi, Indrit, and Blerina Zanaj. "Implementing Steganocryptography in Scanner and Angio-Scanner Medical Images." In Mobile Networks for Biometric Data Analysis, 109–20. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39700-9_9.

2

Biedl, Therese, Stephane Durocher, and Jack Snoeyink. "Reconstructing Polygons from Scanner Data." In Algorithms and Computation, 862–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10631-6_87.

3

Cook, J. C., A. D. Goodson, and J. W. R. Griffiths. "Low Frequency Sector Scanner Using NLA." In Underwater Acoustic Data Processing, 47–53. Dordrecht: Springer Netherlands, 1989. http://dx.doi.org/10.1007/978-94-009-2289-1_4.

4

Shang, Shize, and Xiangwei Kong. "Printer and Scanner Forensics." In Handbook of Digital Forensics of Multimedia Data and Devices, 375–410. Chichester, UK: John Wiley & Sons, Ltd, 2015. http://dx.doi.org/10.1002/9781118705773.ch10.

5

De Vitiis, Claudia, Alessio Guandalini, Francesca Inglese, and Marco Dionisio Terribili. "Sampling Schemes Using Scanner Data for the Consumer Price Index." In New Statistical Developments in Data Science, 203–17. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21158-5_16.

6

Hug, Christoph. "Extracting Artificial Surface Objects from Airborne Laser Scanner Data." In Automatic Extraction of Man-Made Objects from Aerial and Space Images (II), 203–12. Basel: Birkhäuser Basel, 1997. http://dx.doi.org/10.1007/978-3-0348-8906-3_20.

7

Wittink, Dick R., and John C. Porter. "Aggregation Bias Resulting from Nonlinearity in Scanner Retail Data." In Operations Research Proceedings 1991, 357–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-46773-8_94.

8

Okada, A., and A. Miyauchi. "Predicting the Amount of Purchase by a Procedure Using Multidimensional Scaling: An Application to Scanner Data on Beer." In Classification, Data Analysis, and Data Highways, 401–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72087-1_43.

9

Baltsavias, E. P., and M. Crosetto. "Test and Calibration of a DTP Scanner for GIS Data Acquisition." In Data Acquisition and Analysis for Multimedia GIS, 141–50. Vienna: Springer Vienna, 1996. http://dx.doi.org/10.1007/978-3-7091-2684-4_12.

10

Buonamici, Francesco, Monica Carfagni, Luca Puggelli, Michaela Servi, and Yary Volpe. "A Fast and Reliable Optical 3D Scanning System for Human Arm." In Lecture Notes in Mechanical Engineering, 268–73. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70566-4_43.

Abstract:
The article discusses the design of an acquisition system for the 3D surface of human arms. The system is composed of a 3D optical scanner implementing stereoscopic depth sensors and of acquisition software responsible for processing the raw data. The 3D data acquired by the scanner are used as the starting point for manufacturing custom-made 3D-printed casts. Specifically, the article discusses the choices made in the development of an improved version of an existing system presented in [1] and presents the results achieved by the devised system.

Conference papers on the topic "Scanner data"

1

Vieira, Miguel, and Kenji Shimada. "Segmentation of Noisy Laser-Scanner Generated Meshes With Piecewise Polynomial Approximations." In ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/detc2004-57475.

Abstract:
Laser scanners offer a fast and simple way of collecting large amounts of geometric data from real-world objects. Although this aspect makes them attractive for design and reverse engineering, the laser-scanner data is often noisy and not partitioned into meaningful surfaces. A good partitioning, or segmentation, of the scanner data has uses including feature detection, surface boundary generation, surface fitting, and surface reconstruction. This paper presents a method for segmenting noisy three-dimensional surface meshes created from laser-scanned data into distinct regions closely approximated by explicit surfaces. The algorithm first estimates mesh curvatures and noise levels and then uses the curvature data to construct seed regions around each vertex. If a seed region meets certain criteria, it is assigned a region number and is grown into a set of connected vertices approximated by a bicubic polynomial surface. All the vertices in a region are within known distance and surface normal tolerances from their underlying surface approximations. The algorithm works on noisy or smooth data and requires little or no user interaction. We demonstrate the effectiveness of the segmentation on real-world examples.
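
A stripped-down version of the seed-and-grow idea can be sketched as region growing over neighbouring points whose normals stay within an angular tolerance. The normals, the tolerance, and the k-nearest-neighbour graph are assumed inputs; the curvature estimation, noise modelling, and bicubic surface fitting described in the paper are not reproduced.

# Minimal sketch: normal-based region growing from a seed point over a k-NN graph.
import numpy as np
from scipy.spatial import cKDTree
from collections import deque

def grow_region(points, normals, seed, angle_tol_deg=15.0, k=12):
    tree = cKDTree(points)
    cos_tol = np.cos(np.radians(angle_tol_deg))
    region, queue = {seed}, deque([seed])
    while queue:
        current = queue.popleft()
        _, neighbours = tree.query(points[current], k=k)
        for n in np.atleast_1d(neighbours):
            # accept a neighbour if its normal stays close to the seed's normal
            if n not in region and np.dot(normals[n], normals[seed]) > cos_tol:
                region.add(n)
                queue.append(n)
    return np.array(sorted(region))

rng = np.random.default_rng(4)
points = rng.uniform(0, 1, (3000, 3))
points[:, 2] = 0.0                                      # a flat patch
normals = np.tile([0.0, 0.0, 1.0], (3000, 1))           # all normals point up
print(grow_region(points, normals, seed=0).size)        # most of the patch joins the region
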
2

Schmitz, Anne, and Davide Piovesan. "A Novel Methodology to Determine Optimal Active Marker Scanner Placement." In ASME 2017 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/imece2017-70285.

Abstract:
Motion capture is a common technology used to measure human motion. One key aspect of collecting an ample amount of accurate data is maximizing the motion capture volume. This maximization is typically done by trial and error. The purpose of this study was to develop a better method of determining the camera placement that maximizes the captured volume. Two active marker scanners were methodically placed at various locations around a circle centered on a forceplate. The scanner placements of optimal coverage were defined as the positions of the scanners at which the area of overlap between the two scanners was at a maximum and the area of uncovered walkway at a minimum. The optimal placement occurred when one scanner was placed at (−1.718, 2.459) meters with respect to the forceplate origin and the other at (0.7691, −2.9) meters. Although these dimensions are specific to the constraints and walkway of our lab, the method can be adapted to any lab size. Establishing this setup is crucial so that accurate, un-occluded marker data can be collected in future studies.
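
The placement search can be prototyped as a grid search over candidate positions on a circle, scoring each pair by overlap area minus uncovered walkway area. The 4 m usable range, the walkway dimensions, and the scoring rule in the sketch below are illustrative assumptions, not the study's setup.

# Minimal sketch: exhaustive search for the best pair of scanner positions on a circle.
import numpy as np
from shapely.geometry import Point, box

walkway = box(-3.0, -1.0, 3.0, 1.0)                      # 6 m x 2 m walkway around the forceplate
scan_range = 4.0                                          # assumed usable scanner radius (m)

angles = np.radians(np.arange(0, 360, 15))
ring = [(3.1 * np.cos(a), 3.1 * np.sin(a)) for a in angles]                      # candidate positions
coverage = {pos: Point(pos).buffer(scan_range).intersection(walkway) for pos in ring}

def score(pos_a, pos_b):
    """Overlap between the two scanners' coverage minus the uncovered walkway area."""
    overlap = coverage[pos_a].intersection(coverage[pos_b]).area
    uncovered = walkway.area - coverage[pos_a].union(coverage[pos_b]).area
    return overlap - uncovered

best = max(((a, b) for a in ring for b in ring if a != b), key=lambda pair: score(*pair))
print("best scanner positions:", np.round(best, 2))
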
3

Johnston, R. A., and N. B. Price. "RBF patching of laser scanner data." In 2008 23rd International Conference Image and Vision Computing New Zealand (IVCNZ). IEEE, 2008. http://dx.doi.org/10.1109/ivcnz.2008.4762077.

4

Huang, Yunbao, and Xiaoping Qian. "A Dynamic Sensing-and-Modeling Approach to 3D Point- and Area-Sensor Integration." In ASME 2006 International Manufacturing Science and Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/msec2006-21105.

Abstract:
The recent advancement of 3D non-contact laser scanners enables fast measurement of parts by generating a huge amount of coordinate data for a large surface area in a short time. In contrast, traditional tactile probes in coordinate measurement machines (CMM) generate more accurate coordinate data points at a much slower pace. The combination of laser scanners and touch probes can therefore potentially lead to more accurate, faster and denser measurement. In this paper, we develop a dynamic sensing-and-modeling approach for integrating a tactile point sensor and an area laser scanner to improve the measurement speed and quality. The part is first laser scanned to capture the overall shape of the object. It is then probed via a tactile sensor at positions that are dynamically determined to reduce the measurement uncertainty, based on a novel next-best-point formulation. Technically, we use the Kalman filter to fuse the laser-scanned point cloud and tactile points and to incrementally update the surface model based on the dynamically probed points. We solve the next-best-point problem by transforming the B-spline surface’s uncertainty distribution into a higher-dimensional uncertainty surface so that the convex hull property of the B-spline surface can be utilized to dramatically reduce the search time and to guarantee the optimality of the resulting point. Three examples in this paper demonstrate that the dynamic sensing-and-modeling approach effectively integrates the area laser scanner and the point touch probe and leads to significant measurement time savings (at least severalfold in all three cases). Further benefits of this dynamic approach include reduced surface uncertainty, due to the maximum uncertainty control through next-best-point sensing, and improved surface accuracy in surface reconstruction through the use of the Kalman filter to account for various sensor noise.
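
The fusion step can be illustrated with a scalar Kalman update that refines a laser-derived surface height using a more precise tactile-probe measurement; the noise figures are invented, and the paper applies the filter to B-spline surface parameters rather than a single height value.

# Minimal sketch: scalar Kalman update fusing a laser estimate with a tactile-probe measurement.
laser_height, laser_var = 10.112, 0.05 ** 2     # laser estimate (mm) and its variance (assumed)
probe_height, probe_var = 10.087, 0.005 ** 2    # tactile probe measurement and its variance (assumed)

gain = laser_var / (laser_var + probe_var)      # Kalman gain for the scalar case
fused_height = laser_height + gain * (probe_height - laser_height)
fused_var = (1.0 - gain) * laser_var

print(f"fused height = {fused_height:.4f} mm, std = {fused_var ** 0.5:.4f} mm")
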
5

Bozkurt, Nesli, Ugur Halici, Ilkay Ulusoy, and Erdem Akagunduz. "3D data processing for enhancement of face scanner data." In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136431.

6

Gong, Bo, William C. Messner, Tuviah E. Schlesinger, Hadas Shragai, Daniel D. Stancil, and Jinhui Zhai. "Ultrahigh-performance optical servo system using an electro-optic beam scanner." In Optical Data Storage, edited by Douglas G. Stinson and Ryuichi Katayama. SPIE, 2000. http://dx.doi.org/10.1117/12.399375.

7

Szabo, Csaba, Stefan Korecko, and Branislav Sobota. "Processing 3D scanner data for virtual reality." In 2010 10th International Conference on Intelligent Systems Design and Applications (ISDA). IEEE, 2010. http://dx.doi.org/10.1109/isda.2010.5687085.

8

Sun, Xianfang, Paul L. Rosin, Ralph R. Martin, and Frank C. Langbein. "Noise in 3D laser range scanner data." In 2008 IEEE International Conference on Shape Modeling and Applications (SMI). IEEE, 2008. http://dx.doi.org/10.1109/smi.2008.4547945.

9

Atasoy, Guzide, Pingbo Tang, Jiansong Zhang, and Burcu Akinci. "Visualizing Laser Scanner Data for Bridge Inspection." In 27th International Symposium on Automation and Robotics in Construction. International Association for Automation and Robotics in Construction (IAARC), 2010. http://dx.doi.org/10.22260/isarc2010/0042.

10

Chen, Qibao, Yi Chiu, Adrian J. Devasahayam, Michael A. Seigler, David N. Lambeth, Tuviah E. Schlesinger, and Daniel D. Stancil. "Waveguide optical scanner with increased deflection sensitivity for optical data storage." In Optical Data Storage '94, edited by David K. Campbell, Martin Chen, and Koichi Ogawa. SPIE, 1994. http://dx.doi.org/10.1117/12.190192.


Reports on the topic "Scanner data"

1

Ng, Serena. Opportunities and Challenges: Lessons from Analyzing Terabytes of Scanner Data. Cambridge, MA: National Bureau of Economic Research, August 2017. http://dx.doi.org/10.3386/w23673.

2

Taylor, James L. Establishing Measurement Uncertainty for the Digital Temperature Scanner Using Calibration Data. Fort Belvoir, VA: Defense Technical Information Center, November 2013. http://dx.doi.org/10.21236/ada594984.

3

Faber, Benjamin, and Thibault Fally. Firm Heterogeneity in Consumption Baskets: Evidence from Home and Store Scanner Data. Cambridge, MA: National Bureau of Economic Research, January 2017. http://dx.doi.org/10.3386/w23101.

4

Chevalier, Judith, Anil Kashyap, and Peter Rossi. Why Don't Prices Rise During Periods of Peak Demand? Evidence from Scanner Data. Cambridge, MA: National Bureau of Economic Research, October 2000. http://dx.doi.org/10.3386/w7981.

5

Guha, Rishab, and Serena Ng. A Machine Learning Analysis of Seasonal and Cyclical Sales in Weekly Scanner Data. Cambridge, MA: National Bureau of Economic Research, May 2019. http://dx.doi.org/10.3386/w25899.

6

Boffo, C., and P. Bauer. FIONDA (Filtering Images of Niobium Disks Application): Filter application for Eddy Current Scanner data analysis. Office of Scientific and Technical Information (OSTI), May 2005. http://dx.doi.org/10.2172/15020167.

7

Smyre, J. L., M. E. Hodgson, B. W. Moll, A. L. King, and Yang Cheng. Daytime multispectral scanner aerial surveys of the Oak Ridge Reservation, 1992--1994: Overview of data processing and analysis by the Environmental Restoration Remote Sensing Program, Fiscal year 1995. Office of Scientific and Technical Information (OSTI), November 1995. http://dx.doi.org/10.2172/204019.

8

Brewster, S. B. Jr, M. E. Howard, and J. E. Shines. A multispectral scanner survey of the Tonopah Test Range, Nevada. Date of survey: August 1993. Office of Scientific and Technical Information (OSTI), August 1994. http://dx.doi.org/10.2172/10196597.

9

Holden, N. E., and S. Ramavataram. Integral charged particle nuclear data bibliography: Literature scanned from April 11, 1987 through November 10, 1988. Office of Scientific and Technical Information (OSTI), December 1988. http://dx.doi.org/10.2172/6187647.

10

Holden, N. E., S. Ramavataram, and C. L. Dunford. Integral charged particle nuclear data bibliography: Literature scanned from April 1, 1986 through April 10, 1987. Office of Scientific and Technical Information (OSTI), April 1987. http://dx.doi.org/10.2172/6163940.
