Doctoral dissertations on the topic "Scanner data"


Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the 50 best doctoral dissertations on the topic "Scanner data".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in ".pdf" format and read its abstract online, provided the relevant details are available in the work's metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

Bae, Kwang-Ho. "Automated registration of unorganised point clouds from terrestrial laser scanners". Curtin University of Technology, Department of Spatial Sciences, 2006. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=16596.

Full text source
Abstract:
Laser scanners provide a three-dimensional sampled representation of the surfaces of objects. The spatial resolution of the data is much higher than that of conventional surveying methods. The data collected from different locations of a laser scanner must be transformed into a common coordinate system. If a good a priori alignment is provided and the point clouds share a large overlapping region, existing registration methods, such as the Iterative Closest Point (ICP) or Chen and Medioni's method, work well. In practical applications of laser scanners, partially overlapping and unorganised point clouds are provided without good initial alignment. In these cases, the existing registration methods are not appropriate, since it becomes very difficult to find the correspondence of the point clouds. A registration method, the Geometric Primitive ICP with RANSAC (GP-ICPR), using geometric primitives, neighbourhood search, the positional uncertainty of laser scanners, and an outlier removal procedure, is proposed in this thesis. The change of geometric curvature and the approximate normal vector of the surface formed by a point and its neighbourhood are used for selecting the possible correspondences of point clouds. In addition, an explicit expression of the positional uncertainty of measurements by laser scanners is presented in this dissertation, and this uncertainty is utilised to estimate the precision and accuracy of the estimated relative transformation parameters between point clouds. The GP-ICPR was tested with both simulated data and datasets from close-range and terrestrial laser scanners in terms of its precision, accuracy, and convergence region. It was shown that the GP-ICPR improved the precision of the estimated relative transformation parameters by as much as a factor of 5.
In addition, the rotational convergence region of the GP-ICPR, on the order of 10°, is much larger than that of the ICP or its variants and provides a window of opportunity to utilise this automated registration method in practical applications such as terrestrial surveying and deformation monitoring.
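As context for this abstract, the sketch below shows one iteration of the classical point-to-point ICP that the GP-ICPR builds on; it is a minimal illustration (NumPy/SciPy, illustrative function name), not the thesis' algorithm, which adds geometric primitives, positional uncertainty, and RANSAC-style outlier removal.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One classical ICP iteration (illustrative): pair each source point
    with its nearest destination point, then solve the rigid transform in
    closed form via the Kabsch/SVD method."""
    _, idx = cKDTree(dst).query(src)      # nearest-neighbour correspondences
    matched = dst[idx]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # optimal rotation, det(R) = +1
    t = mu_d - R @ mu_s                   # optimal translation
    return R, t
```

Iterating this step until the parameters stabilise gives the baseline whose narrow convergence region motivates the GP-ICPR.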
APA, Harvard, Vancouver, ISO, and other styles
2

Foekens, Eijte Willem. "Scanner data based marketing modelling : empirical applications /". Capelle a/d IJssel : Labyrint Publ, 1995. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=007021048&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
3

Trost, Daniel Roland. "Organic produce demand estimation utilizing retail scanner data". Thesis, Montana State University, 1999. http://etd.lib.montana.edu/etd/1999/trost/TrostD1999.pdf.

Full text source
Abstract:
Retail demand relationships for organic and non-organic bananas, garlic, onions, and potatoes are examined using scanner data from a retail co-operative food store located in Bozeman, Montana. A level-version Rotterdam demand specification is used in a six-equation system to estimate Hicksian demand elasticities. The own-price elasticity for organic onions is negative and significant. All other own-price elasticities are not significantly different from zero, which indicates that consumers may not be very price-sensitive for the goods in question. With few exceptions, the significant cross-price elasticities are positive. Income elasticities are mostly significant and positive. Elasticity measurement may be somewhat imprecise due to a lack of variability in prices and an ambiguous error structure. Key factors influencing the quantities of the produce items purchased include the number of children in a household, the average age of adults in a household, and the employment status of the primary grocery shopper. Educational status did not have any significant impact on quantities purchased.
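As a hedged illustration of the central quantity here, an own-price elasticity can be read off the slope of a log-log demand regression; this stand-in is far simpler than the level-version Rotterdam system actually estimated, and the function name is hypothetical.

```python
import numpy as np

def own_price_elasticity(qty, price):
    """OLS fit of ln(q) = a + e * ln(p); the slope e is the own-price
    elasticity (e near 0 means price-insensitive demand, as found above)."""
    X = np.column_stack([np.ones(len(price)), np.log(price)])
    coef, *_ = np.linalg.lstsq(X, np.log(qty), rcond=None)
    return coef[1]
```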
APA, Harvard, Vancouver, ISO, and other styles
4

Tóvári, Dániel. "Segmentation Based Classification of Airborne Laser Scanner Data". [S.l. : s.n.], 2006. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000006285.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
5

Preuksakarn, Chakkrit. "Reconstructing plant architecture from 3D laser scanner data". Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20116/document.

Full text source
Abstract:
In the last decade, very realistic renderings of plant architectures have been produced in computer graphics applications. However, in the context of biology and agronomy, the acquisition of accurate models of real plants is still a tedious task and a major bottleneck for the construction of quantitative models of plant development. Recently, 3D laser scanners have made it possible to acquire 3D images in which each pixel has an associated depth corresponding to the distance between the scanner and the pinpointed surface of the object. Standard geometrical reconstructions fail on plant structures, as they usually contain a complex set of discontinuous or branching surfaces distributed in space with varying orientations. In this thesis, we present a method for reconstructing virtual models of plants from laser scans of real-world vegetation. Measuring plants with laser scanners produces data with different levels of precision. Point sets are usually dense on the surface of the main branches, but only sparsely cover thin branches. The core of our method is to iteratively create the skeletal structure of the plant according to the local density of the point set. This is achieved with a method that locally adapts to the levels of precision of the data by combining a contraction phase and a local point tracking algorithm. In addition, we present a quantitative evaluation procedure to compare our reconstructions against expert-reconstructed structures of real plants. For this, we first explore the use of an edit distance between tree graphs. Alternatively, we formalise the comparison as an assignment problem to find the best matching between the two structures and quantify their differences.
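The contraction phase described above can be pictured with a generic neighbourhood-centroid contraction; this sketch (with assumed parameters k and step) is only one ingredient of such a pipeline and is not the thesis' locally adaptive algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def contract_once(points, k=16, step=0.5):
    """Pull every point toward the centroid of its k nearest neighbours;
    applied repeatedly, a branch-like cloud collapses toward a skeletal
    curve from which a line graph can be traced."""
    _, idx = cKDTree(points).query(points, k=k)
    centroids = points[idx].mean(axis=1)      # (n, 3) local centroids
    return points + step * (centroids - points)
```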
APA, Harvard, Vancouver, ISO, and other styles
6

Töpel, Johanna. "Initial Analysis and Visualization of Waveform Laser Scanner Data". Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2864.

Full text source
Abstract:

Conventional airborne laser scanner systems output the three-dimensional coordinates of the surface location hit by the laser pulse. The data storage capacity and processing speeds available today have made it possible to digitally sample and store the entire reflected waveform, instead of extracting only the coordinates. Research has shown that return waveforms can give even more detailed insights into the vertical structure of surface objects, surface slope, roughness and reflectivity than the conventional systems. One of the most important advantages of registering the waveforms is that it lets the user define, in post-processing, how range is calculated.

In this thesis, different techniques have been tested to visualize a waveform data set in order to get a better understanding of the waveforms and how they can be used to improve methods for classification of ground objects.

A pulse detection algorithm, based on the EM algorithm, has been implemented and tested. The algorithm outputs the position and width of the echo pulses. One result of this thesis is that echo pulses reflected by vegetation tend to be wider than those reflected by, for example, a road. Another result is that up to five echo pulses can be detected, compared with the two echo pulses detected by a conventional system.
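A minimal sketch of EM-based pulse detection of the kind described, assuming echo pulses are Gaussian and treating the normalised waveform amplitudes as weights over the time axis; the initialisation and fixed pulse count are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

def em_pulse_fit(t, amp, n_pulses=2, iters=50):
    """Fit a mixture of Gaussian echo pulses to a sampled waveform with EM;
    returns pulse positions (mu) and widths (sigma)."""
    w = amp / amp.sum()                       # waveform as weights on t
    mu = np.linspace(t.min(), t.max(), n_pulses + 2)[1:-1]
    sigma = np.full(n_pulses, (t.max() - t.min()) / (4 * n_pulses))
    pi = np.full(n_pulses, 1.0 / n_pulses)
    for _ in range(iters):
        # E-step: responsibility of each pulse for each time sample
        dens = pi * np.exp(-0.5 * ((t[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: weighted updates, the waveform energy acting as weights
        wk = (w[:, None] * resp).sum(axis=0)
        mu = (w[:, None] * resp * t[:, None]).sum(axis=0) / wk
        sigma = np.sqrt((w[:, None] * resp * (t[:, None] - mu) ** 2)
                        .sum(axis=0) / wk)
        pi = wk
    return mu, sigma
```

Wide fitted sigmas would then flag vegetation-like echoes, in line with the result above.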

APA, Harvard, Vancouver, ISO, and other styles
7

Payette, Francois. "Applications of a sampling strategy for the ERBE scanner data". Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61784.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
8

Henning, Jason Gregory. "Modeling Forest Canopy Distribution from Ground-Based Laser Scanner Data". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28431.

Full text source
Abstract:
A commercially available, tripod-mounted, ground-based laser scanner was used to assess forest canopies and measure individual tree parameters. The instrument is comparable to scanning airborne light detection and ranging (lidar) technology but gathers data at higher resolution over a more limited scale. The raw data consist of a series of range measurements to visible surfaces taken at known angles relative to the scanner. Data were translated into three-dimensional (3D) point clouds with points corresponding to surfaces visible from the scanner vantage point. A 20 m × 40 m permanent plot located in upland deciduous forest at Coweeta, NC was assessed with 41 and 45 scans gathered during periods of leaf-on and leaf-off, respectively. Data management and summary needs were addressed, focusing on the development of registration methods to align point clouds collected from multiple vantage points and minimize the volume of the plot canopy occluded from the scanner's view. Automated algorithms were developed to extract points representing tree bole surfaces, bole centers and ground surfaces. The extracted points served as the control surfaces necessary for registration. Occlusion was minimized by combining aligned point clouds captured from multiple vantage points, with 0.1% and 0.34% of the scanned volume occluded from view under leaf-off and leaf-on conditions, respectively. The point cloud data were summarized to estimate individual tree parameters including diameter at breast height (dbh), upper stem diameters, branch heights and XY positions of trees on the plot. Estimated tree positions were, on average, within 0.4 m of tree positions measured independently on the plot. Canopy height models, digital terrain models and 3D maps of the density of canopy surfaces were created using aligned point cloud data. Finally, spatially explicit models of the horizontal and vertical distribution of plant area index (PAI) and leaf area index (LAI) were generated as examples of useful data summaries that cannot be practically collected using existing methods.
Ph. D.
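One of the tree parameters above, dbh, is commonly estimated by fitting a circle to a thin horizontal slice of bole points; the algebraic (Kasa) least-squares fit below is a generic sketch, not necessarily the estimator used in the dissertation.

```python
import numpy as np

def fit_circle(xy):
    """Kasa circle fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + c linearly;
    for a stem slice at breast height (1.37 m), dbh is about 2 * r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    (cx, cy, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r
```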
APA, Harvard, Vancouver, ISO, and other styles
9

Nalani, Hetti Arachchige. "Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-159872.

Full text source
Abstract:
Up-to-date 3D urban models are becoming increasingly important in various urban application areas, such as urban planning, virtual tourism, and navigation systems. Many of these applications demand the modelling of 3D buildings, enriched with façade information, as well as of single trees among other urban objects. Nowadays, Mobile Laser Scanning (MLS) is progressively being used to capture objects in urban settings, and it is becoming a leading data source for the modelling of these two kinds of urban objects. The 3D point clouds of urban scenes consist of large amounts of data representing numerous objects with significant size variability, complex and incomplete structures, holes (noise and data gaps), and variable point densities. For this reason, novel strategies for processing mobile laser scanning point clouds, in terms of the extraction and modelling of salient façade structures and trees, are of vital importance. The present study proposes two new methods for the reconstruction of building façades and the extraction of trees from MLS point clouds. The first method aims at the reconstruction of building façades with explicit semantic information such as windows, doors and balconies. It runs automatically through all processing steps. For this purpose, several algorithms are introduced based on general knowledge of the geometric shape and structural arrangement of façade features. The initial classification is performed using a local height histogram analysis together with a planar region-growing method, which allows points to be classified as object or ground points. The point cloud labelled as object points is segmented into planar surfaces, which are regarded as the main entities in the feature recognition process. Knowledge of the building structure is used to define rules and constraints, which provide essential guidance for recognizing façade features and reconstructing their geometric models. In order to recognise features on a wall, such as windows and doors, a hole-based method is implemented. Some holes that result from occlusion can subsequently be eliminated by means of a new rule-based algorithm. Boundary segments of a feature are connected into a polygon representing the geometric model by introducing a primitive-shape-based method, in which topological relations are analysed taking into account prior knowledge about the primitive shapes. Possible outlines are determined from the edge points detected with an angle-based method. Repetitive patterns and similarities are exploited to rectify geometrical and topological inaccuracies of the reconstructed models. Apart from developing the 3D façade model reconstruction scheme, the research focuses on individual tree segmentation and the derivation of attributes of urban trees. The second method aims at extracting individual trees from the remaining point clouds. Knowledge about trees specific to urban areas is used in the process of tree extraction. An innovative shape-based approach is developed to encode this knowledge in algorithmic form. The use of the principal direction of the point distribution for identifying stems is introduced, which consists of searching for point segments representing a tree stem. The output of the algorithm is segmented individual trees, which can be used to derive accurate information about the size and location of each individual tree. The reliability of the two methods is verified against three different data sets obtained from different laser scanner systems.
The results of both methods are quantitatively evaluated using a set of measures pertaining to the quality of the façade reconstruction and tree extraction. The performance of the developed algorithms with respect to façade reconstruction, tree stem detection and the delineation of individual tree crowns, as well as their limitations, is discussed. The results show that MLS point clouds are suited to documenting urban objects rich in detail, and that accurate measurements of the most important attributes of both object classes (building façades and trees), such as window height and width, area, stem diameter, tree height, and crown area, can be obtained. The entire approach is suitable for reconstructing building façades and for correctly extracting trees and distinguishing them from other urban objects, especially pole-like objects. Both methods can therefore cope with data of heterogeneous quality. In addition, they provide flexible frameworks from which many extensions can be envisioned.
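Planar segmentation is the backbone of the façade method; as a hedged stand-in for the height-histogram and region-growing steps described above, here is a generic RANSAC plane detector of the kind routinely applied to MLS point clouds.

```python
import numpy as np

def ransac_plane(pts, n_iter=500, tol=0.02, seed=0):
    """Detect the dominant plane: sample 3 points, form a plane, and keep
    the hypothesis with the most points within tol (metres) of it."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:      # degenerate (collinear) sample
            continue
        n /= np.linalg.norm(n)
        inliers = np.abs((pts - p0) @ n) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```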
APA, Harvard, Vancouver, ISO, and other styles
10

Natter, Martin, and Markus Feurstein. "Correcting for CBC model bias. A hybrid scanner data - conjoint model". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/880/1/document.pdf.

Full text source
Abstract:
Choice-Based Conjoint (CBC) models are often used for pricing decisions, especially when scanner data models cannot be applied. To date, it is unclear how CBC models perform in terms of forecasting real-world shop data. In this contribution, we measure the performance of a Latent Class CBC model not by means of an experimental hold-out sample but via aggregate scanner data. We find that the CBC model does not accurately predict real-world market shares, thus leading to wrong pricing decisions. In order to improve its forecasting performance, we propose a correction scheme based on scanner data. Our empirical analysis shows that the hybrid method improves the performance measures considerably. (author's abstract)
Series: Report Series SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
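One simple variant of the correction scheme mentioned in the abstract above (an assumption on our part, not necessarily the paper's exact scheme) calibrates per-alternative factors that reconcile logit shares from CBC utilities with the aggregate scanner-data shares of a base period, then applies them to new price scenarios.

```python
import numpy as np

def logit_shares(u):
    """Market shares implied by a multinomial logit over CBC utilities."""
    e = np.exp(u - u.max())
    return e / e.sum()

def calibrate(u_base, observed_shares):
    """Correction factors from a calibration period with known shares."""
    return observed_shares / logit_shares(u_base)

def predict(u_new, k):
    """Corrected share forecast for a new scenario, renormalised to 1."""
    s = k * logit_shares(u_new)
    return s / s.sum()
```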
APA, Harvard, Vancouver, ISO, and other styles
11

Tankielun, Adam. "Data post-processing and hardware architecture of electromagnetic near field scanner". Aachen Shaker, 2007. http://d-nb.info/987676512/04.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
12

Tankielun, Adam. "Data post-processing and hardware architecture of electromagnetic near-field scanner /". Aachen : Shaker, 2008. http://d-nb.info/987676512/04.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Stockton, Matthew C. "Applications of demand analysis for the dairy industry using household scanner data". Texas A&M University, 2004. http://hdl.handle.net/1969.1/1330.

Full text source
Abstract:
This study illustrates the use of the ACNielsen Homescan Panel (HSD) in three separate demand analyses of dairy products: (1) the effect of using cross-sectional data in a New Empirical Industrial Organization (NEIO) study of ice cream firm mergers in San Antonio; (2) the estimation of hedonic price models for fluid milk by quart, half-gallon and gallon container sizes; (3) the estimation of a demand system including white milk, flavored milk, carbonated soft drinks, bottled water, and fruit juice by various container sizes. In the NEIO study, a standard LA/AIDS demand system was used to estimate elasticities evaluating seven simulated mergers of ice cream manufacturers in San Antonio in 1999. Unlike previously published NEIO work, it is the first to use cross-sectional data to address the issue associated with inventory effects. Using the method developed by Capps, Church and Love, none of the simulated price effects associated with the mergers was statistically different from zero at the 5% level. In 1995, Nerlove proposed a quantity-dependent hedonic model as a viable alternative to the conventional price-dependent hedonic model as a means to ascertain consumer willingness to pay for the characteristics of a given good. We revisited Nerlove's work, validating his model using transactional data indigenous to the HSD. Hedonic models, both price-dependent and quantity-dependent, were estimated for the characteristics of fat content, container type, and brand designation for the container sizes of gallon, half-gallon, and quart. A rigorous explanation of the interpretation of the estimates derived from the two hedonic models was discussed. Using the Almost Ideal Demand System (AIDS), a matrix of own-price, cross-price, and expenditure elasticities was estimated involving various container sizes of white milk, flavored milk, carbonated soft drinks, bottled water, and fruit juices, using a cross-section of the 1999 HSD. We described the price imputations and the handling of censored observations used to develop the respective elasticities. These elasticities provided information about intra-product relationships (same product but different sizes), intra-size relationships (different products, same container size), and inter-product relationships (different products and different sizes). This container size issue is unique in the extant literature on the non-alcoholic beverage industry.
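For orientation, the LA/AIDS budget-share equations referred to above can be estimated one at a time by OLS using Stone's price index; the sketch below ignores the censoring and price imputation the study handles, so it is an illustration rather than the thesis' procedure.

```python
import numpy as np

def la_aids_fit(W, logP, logx, i):
    """OLS estimates of the i-th LA/AIDS share equation,
        w_i = a_i + sum_j gamma_ij * ln p_j + b_i * ln(x / P*),
    with Stone's index ln P* = sum_j w_j * ln p_j.
    W: (T, n) budget shares; logP: (T, n) log prices; logx: (T,) log outlay."""
    stone = (W * logP).sum(axis=1)
    X = np.column_stack([np.ones(len(logx)), logP, logx - stone])
    coef, *_ = np.linalg.lstsq(X, W[:, i], rcond=None)
    return coef[0], coef[1:-1], coef[-1]    # a_i, gamma_i, b_i
```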
APA, Harvard, Vancouver, ISO, and other styles
14

Schilling, Anita. "Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-155698.

Full text source
Abstract:
Research on forest ecosystems receives high attention, especially nowadays with regard to sustainable management of renewable resources and climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also in the scope of commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive. For detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique. The object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparably high stand density and the many occlusions resulting from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods with complementary properties for obtaining skeletal structures from scanned trees. First, we explain a method that retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk. The trunk is determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method, which processes individual scans. Furthermore, we introduce a novel concept to manage TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into subcomponents. A Principal Curve is computed from the 3D point set that is associated with a subcomponent. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results, and we provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full skeletons of trees, which approximate the branching structure. The level of detail is mainly governed by the voxel space, and therefore smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy. The method is sensitive to noise in the boundary, but the results are very promising. There are plenty of possibilities to enhance the method's robustness. The combination of the strengths of both presented methods needs to be investigated further and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
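The tip-to-trunk path search of the first method can be pictured as a shortest-path query on the occupied-voxel graph; this breadth-first sketch over a 26-neighbourhood is an illustration under that assumption and omits the thesis' weighting and tip detection.

```python
import numpy as np
from collections import deque

def voxel_path(occ, tip, root):
    """Shortest 26-connected path through occupied voxels from a branch
    tip to the trunk voxel; the union of such paths sketches a skeleton.
    occ: boolean 3D array; tip, root: voxel index triples."""
    steps = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
             for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
    prev = {tip: None}
    queue = deque([tip])
    while queue:
        v = queue.popleft()
        if v == root:                     # walk predecessors back to tip
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for d in steps:
            w = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
            if (w not in prev
                    and all(0 <= w[a] < occ.shape[a] for a in range(3))
                    and occ[w]):
                prev[w] = v
                queue.append(w)
    return None                           # tip and root not connected
```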
APA, Harvard, Vancouver, ISO, and other styles
15

Tankielun, Adam [Verfasser]. "Data Post-Processing and Hardware Architecture of Electromagnetic Near-Field Scanner / Adam Tankielun". Aachen : Shaker, 2008. http://d-nb.info/1164342266/34.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
16

Mayer, Andreas F. (Andreas Frank) Carleton University Dissertation Management Studies. "A comparative study on new product diffusion models: a scanner data based study". Ottawa, 1993.

Find full text source
APA, Harvard, Vancouver, ISO, and other styles
17

Douros, I. "Calculating the curvature shape characteristics of the human body from 3D scanner data". Thesis, University College London (University of London), 2004. http://discovery.ucl.ac.uk/1446738/.

Full text source
Abstract:
In recent years, there have been significant advances in the development and manufacturing of 3D scanners capable of capturing detailed (external) images of whole human bodies. Such hardware offers the opportunity to collect information that could be used to describe, interpret and analyse the shape of the human body for a variety of applications where shape information plays a vital role (e.g. apparel sizing and customisation; medical research in fields such as nutrition, obesity/anorexia and perceptual psychology; ergonomics for vehicle and furniture design). However, the representations delivered by such hardware typically consist of unstructured or partially structured point clouds, whereas it would be desirable to have models that allow shape-related information to be more immediately accessible. This thesis describes a method of extracting the differential geometry properties of the body surface from unorganized point cloud datasets. In effect, this is a way of constructing curvature maps that allows the detection on the surface of features that are deformable (such as ridges) rather than reformable under certain transformations. Such features could subsequently be used to interpret the topology of a human body and to enable classification according to its shape, rather than its size (as is currently the standard practice for many of the applications concerned). The background, motivation and significance of this research are presented in chapter one. Chapter two is a literature review describing the previous and current attempts to model 3D objects in general and human bodies in particular, as well as the mathematical and technical issues associated with the modelling. Chapter three presents an overview of the methodology employed throughout the research, the assumptions regarding the data to be processed, and the strategy for evaluating the results at each stage of the methodology. Chapter four describes an algorithm (and some variations) for approximating the local surface geometry around a given point of the input data set by means of a least-squares minimization. The output of such an algorithm is a surface patch described in an analytic (implicit) form, which is necessary for the next step described below. The case is made for using implicit surfaces rather than more popular 3D surface representations such as parametric forms or height functions. Chapter five describes the processing needed for calculating curvature-related characteristics for each point of the input surface. This utilises the implicit surface patches generated by the algorithm described in the previous chapter, and enables the construction of a "curvature map" of the original surface, which incorporates rich information such as the principal curvatures, shape indices and curvature directions. Chapter six describes a family of algorithms for calculating features such as ridges and umbilic points on the surface from the curvature map, in a manner that bypasses the problem of separating a vector field (i.e. the principal curvature directions) across the entire surface of an object. An alternative approach, using the focal surface information, is also considered briefly in comparison. The concluding chapter summarises the results from all steps of the processing and evaluates them in relation to the requirements set in chapter one. Directions for further research are also proposed.
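To make the curvature-map idea concrete, the sketch below fits a quadric height function in a PCA tangent frame and reads principal curvatures off the shape operator; note the thesis argues for implicit surface patches rather than height functions, so this is a deliberately simplified illustration, not the method of chapter four.

```python
import numpy as np

def principal_curvatures(nbhd):
    """Approximate k1, k2 at the centroid of a small neighbourhood (m, 3):
    rotate into a PCA frame (last axis ~ surface normal), fit
    z = a*x^2 + b*x*y + c*y^2, and take eigenvalues of the shape operator."""
    c = nbhd - nbhd.mean(axis=0)
    _, _, Vt = np.linalg.svd(c, full_matrices=False)
    local = c @ Vt.T                     # columns: tangent, tangent, normal
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.column_stack([x**2, x * y, y**2])
    (a, b, cq), *_ = np.linalg.lstsq(A, z, rcond=None)
    S = np.array([[2 * a, b], [b, 2 * cq]])  # Hessian = shape operator here
    k1, k2 = np.linalg.eigvalsh(S)
    return k1, k2
```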
APA, Harvard, Vancouver, ISO, and other styles
18

Burman, Helén. "Calibration and orientation of airborne image and laser scanner data using GPS and INS". Doctoral thesis, KTH, Geodesy and Photogrammetry, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2970.

Full text source
Abstract:

GPS and INS measurements provide positions and attitudes that can be used for direct orientation of airborne sensors. This research improves the final results by performing simultaneous adjustments of GPS, INS and image or laser scanner data. The first part of this thesis deals with in-air initialisation of INS attitude using the GPS and INS velocity difference. This is an improvement over initialisation on the ground. Even better results can probably be obtained if accelerometer biases are modelled and horizontal accelerations made larger.

The second part of this thesis deals with GPS/INS orientation of aerial images. Theoretical investigations have been made to find the expected accuracy of stereo models and orthophotos oriented by GPS/INS. Direct orientation will be compared to block triangulation. Triangulation can, to a greater extent, model systematic errors in image and GPS coordinates. Further, the precision in attitude after triangulation is better than that found in present INS performance. On the other hand, direct orientation can provide more effective data processing, since there is no need for finding or measuring tie points or ground control points. In strip triangulation, the number of ground control points can be reduced, since INS attitude measurements control error propagation through the strip. Even if consecutive images are strongly correlated in direct orientation, it is advisable to perform a relative orientation to minimise stereo model deformations.

The third part of this thesis deals with matching laser scanner data. Both elevation and intensity data are used for matching, and the differences between overlapping strips are modelled as exterior orientation errors. Special attention is paid to determining the misalignment between the INS and the laser scanner coordinate systems. We recommend flying in four different directions over an area with elevation and/or intensity gradients. In this way, the misalignment can be found without any ground control. This method can also be used with other imaging sensors, e.g. an aerial camera.

Keywords: Airborne, Camera, Laser scanner, GPS, INS, Adjustment, Matching.

APA, Harvard, Vancouver, ISO, and other styles
19

Kämpchen, Nico. "Feature-level fusion of laser scanner and video data for advanced driver assistance systems". [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-59588.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
20

Andersen, Hans-Erik. "Estimation of critical forest structure metrics through the spatial analysis of airborne laser scanner data /". Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5579.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
21

Rajwade, Jaisingh. "PARTIAL-DATA INTERPOLATION DURING ARCING OF AN X-RAY TUBE IN A COMPUTED TOMOGRAPHY SCANNER". Cleveland State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=csu1304966508.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
22

Boyanapally, Deepthi. "MERGING OF FINGERPRINT SCANS OBTAINED FROM MULTIPLE CAMERAS IN 3D FINGERPRINT SCANNER SYSTEM". UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/510.

Full text source
Abstract:
Fingerprints are the most accurate and widely used biometrics for human identification due to their uniqueness and their rapid, easy acquisition. Contact-based techniques of fingerprint acquisition, like traditional ink and live-scan methods, are not user-friendly, reduce the capture area, and cause deformation of fingerprint features. In addition, improper skin conditions and worn friction ridges lead to poor-quality fingerprints. A non-contact, high-resolution, high-speed scanning system has been developed to acquire a 3D scan of a finger using the structured light illumination technique. The 3D scanner system consists of three cameras and a projector, with each camera producing a 3D scan of the finger. By merging the 3D scans obtained from the three cameras, a nail-to-nail fingerprint scan is obtained. However, the scans from the cameras do not merge perfectly. The main objective of this thesis is to calibrate the system well enough that the 3D scans obtained from the three cameras merge or align automatically. The merging error is reduced by compensating for the radial distortion present in the projector of the scanner system. The residual merging error after radial distortion correction is then measured using the projector coordinates of the scanner system.
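Radial distortion compensation of the kind described is often written with the standard even-order (Brown) model; the two-coefficient sketch below, inverted by fixed-point iteration, is a generic illustration with assumed parameter names, not the calibration actually derived in the thesis.

```python
import numpy as np

def undistort(xy, k1, k2, center):
    """Invert distorted = undistorted * (1 + k1*r^2 + k2*r^4) by
    fixed-point iteration; xy and center are in projector coordinates."""
    p = xy - center
    u = p.copy()
    for _ in range(10):                  # a few iterations suffice
        r2 = (u ** 2).sum(axis=1, keepdims=True)
        u = p / (1 + k1 * r2 + k2 * r2 ** 2)
    return u + center
```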
APA, Harvard, Vancouver, ISO, and other styles
23

Balduzzi, Mathilde. "Plant canopy modeling from Terrestrial LiDAR System distance and intensity data". Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20203.

Full text source
Abstract:
The challenge of this thesis is to reconstruct the 3D geometry of vegetation from the distance and intensity data provided by a 3D LiDAR scanner. A shape-from-shading method based on propagation is developed and combined with a Kalman-type fusion method to obtain an optimal reconstruction of the leaves. -Introduction- Analysis of the LiDAR data shows that point cloud quality varies with the measurement set-up: when the LiDAR laser beam reaches the edge of a surface (or a steeply inclined surface), the measurement also integrates part of the background. Such configurations produce outliers. They are common when measuring foliage, since foliage generally has a fragmented and complex shape. The resulting data are of poor quality, and the number of leaves in a scan makes manual correction of outliers tedious. The goal of this thesis is to develop a methodology that integrates the LiDAR intensity data with the distances to correct these outliers automatically. -Shape-from-shading- The shape-from-shading (SFS) principle is to recover distance values from the intensities of a photographed object. The camera (LiDAR sensor) and the light source (LiDAR laser) share the same direction and are placed at infinity relative to the surface, which makes the effect of distance on intensity negligible and the hypothesis of an orthographic camera valid. In addition, the relationship between the beam's angle of incidence and the intensity is known. Thanks to the analysis of the LiDAR data, we can choose the better of the two data sources, distance or intensity, for reconstructing the leaf surfaces. A propagation SFS algorithm along iso-intensity regions is developed; this type of algorithm allows a Kalman-type fusion to be integrated. -Mathematical design of the method- The surface patches corresponding to the iso-intensity regions are patches of so-called constant-slope, or sand-pile, surfaces. We use this type of surface to rebuild the 3D geometry corresponding to the intensity images. We show that, from knowledge of the 3D position of one boundary of an iso-intensity region, we can construct the parametric sand-pile surfaces corresponding to that region. The contour of the initial iso-intensity region (the propagation seed) is initialised with the 3D LiDAR data. The lines of greatest slope of these surfaces are generated. By propagating these lines (and thus generating the sand-pile surface patch), we determine the other boundary of the iso-intensity region. The reconstruction is then propagated iteratively. -Kalman filter- This propagation of the lines of greatest slope can be regarded as computing a trajectory on the surface being reconstructed. In our framework, the distance data are always available (3D scanner data), so it is possible to choose, during propagation, which data source (distance or intensity) to use for the reconstruction. This can be done with a Kalman-type fusion. -Algorithm- To proceed with the reconstruction by propagation, the iso-intensity regions of the image must first be ordered hierarchically. Once the propagation seeds are found, they are initialised with the distance image. Then, for each node of the hierarchy (corresponding to an iso-intensity region), a sand-pile surface is reconstructed; it is at this last step that the Kalman-type fusion is introduced. -Manuscript- The thesis manuscript comprises five chapters. First, we give a short description of LiDAR technology and an overview of traditional 3D surface reconstruction from point clouds. We then review the state of the art of shape-from-shading methods.
LiDAR intensity is studied in a third chapter to define the strategy for correcting the distance effect and to establish the incidence angle versus intensity relationship. A fourth chapter gives the principal results of this thesis: the theoretical development of the SFS algorithm, its description, and its results when applied to synthetic images. Finally, a last chapter presents results of leaf reconstruction.
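The distance/intensity fusion can be pictured as inverse-variance (Kalman-gain) blending of two depth estimates per surface point; this static one-dimensional sketch illustrates the principle, not the thesis' full filter along the propagation trajectories.

```python
def fuse(z_lidar, var_lidar, z_sfs, var_sfs):
    """Kalman-style update: the gain weights each source by the other's
    variance, so the less reliable measurement contributes less."""
    gain = var_lidar / (var_lidar + var_sfs)
    z = z_lidar + gain * (z_sfs - z_lidar)
    var = (1.0 - gain) * var_lidar
    return z, var
```

Near edges, where LiDAR distances are outlier-prone, var_lidar grows and the fused depth leans on the shape-from-shading estimate.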
APA, Harvard, Vancouver, ISO, and other styles
24

Posadas, Benedict Kit. "AN APPLICATION OF ARTIFICIAL INTELLIGENCE TECHNIQUES IN CLASSIFYING TREE SPECIES WITH LiDAR AND MULTI-SPECTRAL SCANNER DATA". MSSTATE, 2008. http://sun.library.msstate.edu/ETD-db/theses/available/etd-07142008-113351/.

Full text source
Abstract:
Tree species identification is an important element in many forest resources applications such as wildlife habitat management, inventory, and forest damage assessment. Field data collection for large or mountainous areas is often cost-prohibitive, and good estimates of the number and spatial arrangement of species or species groups cannot be obtained. Knowledge-based and neural network species classification models were constructed for remotely sensed data of conifer stands located in the lower mountain regions near McCall, Idaho, and compared to field data. Analyses for each modeling system were made based on multi-spectral sensor (MSS) data alone and on MSS plus LiDAR (light detection and ranging) data. The neural network system produced models identifying five of six species with 41% to 88% producer accuracies and greater overall accuracies than the knowledge-based system. The neural network analysis that included a LiDAR-derived elevation variable plus multi-spectral variables gave the best overall accuracy, at 63%.
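A hedged sketch of the neural-network side of such a comparison, using scikit-learn's MLP on synthetic stand-in features (multi-spectral bands plus a LiDAR-derived elevation); the data, architecture and six-class labelling here are illustrative assumptions only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))        # 4 MSS bands + 1 LiDAR elevation value
y = rng.integers(0, 6, size=300)     # six species labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```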
APA, Harvard, Vancouver, ISO, and other styles
25

Larsson, Sören. "An industrial robot as carrier of a laser profile scanner : motion control, data capturing and path planning /". Örebro : Örebro universitet, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-1738.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
26

Heng, Yan. "Three essays on differentiated products and heterogeneous consumer preferences: the case of table eggs". Diss., Kansas State University, 2015. http://hdl.handle.net/2097/18993.

Full text source
Abstract:
Doctor of Philosophy
Department of Agricultural Economics
Hikaru Hanawa Peterson
Consumers' food demand has been found to be affected not only by prices and income, but also by their increasing concern about factors like health benefits, animal welfare, and environmental impacts. Thus, many food producers have differentiated and advertised their products using relevant attributes. The increasing demand for and supply of differentiated food products have raised questions regarding consumer preferences and producer strategies. This dissertation consists of three essays and empirically examines the egg market to shed light on related issues. The first question this study aims to answer is whether consumers are willing to pay a premium for livestock and dairy products associated with improved animal welfare. Consumers' attitudes towards such products not only affect manufacturers' production decisions, but also influence policy makers and current legislation. Using a national online survey with choice experiments, the first essay found that consumers in the study sample valued eggs produced in an animal-friendly environment, suggesting incentives for producers to adopt animal-welfare-friendly practices. On an actual shopping trip, consumers usually need to choose from products with multiple attributes and labels. Studying how consumers with heterogeneous preferences process this information simultaneously and make decisions is important for producers seeking to target interested consumer segments and implement more effective labeling strategies. In the second essay, a different national online survey was administered. The analysis, using a latent class model, categorized the sample respondents into four classes, and their preferences toward attributes and various label combinations differed across classes. Scanner data, which record actual purchase choices, are an important source of information for studying consumer preferences. Diverging from the traditional demand approaches, which are limited in studying differentiated product markets using scanner data, this study used a random coefficient logit model to overcome potential limitations and examine the demand relationships as well as price competition in the differentiated egg market. The third essay found that conventional and private-label eggs yielded higher margins due to less elastic demand, and it cautioned producers of specialty eggs, which are usually sold at high prices despite their much more elastic demand.
27

Rahayem, Mohamed. "Planar segmentation for Geometric Reverse Engineering using data from a laser profile scanner mounted on an industrial robot". Licentiate thesis, Örebro University, Department of Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-2318.

Abstract:

Laser scanners in combination with devices for accurate orientation, such as Coordinate Measuring Machines (CMMs), are often used in Geometric Reverse Engineering (GRE) to measure point data. The industrial robot as an orientation device has relatively low accuracy, but offers the advantages of being numerically controlled, fast, flexible, rather cheap, and compatible with industrial environments. It is therefore of interest to investigate whether it can be used in this application.

This thesis will describe a measuring system consisting of a laser profile scanner mounted on an industrial robot with a turntable. It will also give an introduction to Geometric Reverse Engineering (GRE) and describe an automatic GRE process using this measuring system. The thesis also presents a detailed accuracy analysis supported by experiments that show how 2D profile data can be used to achieve a higher accuracy than the basic accuracy of the robot. The core topic of the thesis is the investigation of a new technique for planar segmentation. The new method is implemented in the GRE system and compared with an implementation of a more traditional method.

Results from practical experiments show that the new method is much faster, while being equally accurate or better.
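The core primitive behind most planar segmentation schemes is a local plane fit. The sketch below shows a generic total-least-squares fit by SVD; it is a standard building block, not the specific algorithm developed in the thesis.

import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the best-fit plane through an (n, 3) array."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]        # last right singular vector: direction of least variance

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.01]])
centroid, normal = fit_plane(pts)
print("plane normal ~", np.round(normal, 3))   # close to (0, 0, ±1) for this flat patch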

28

Hayes, Ladson. "Techniques for facilitating the registration and rectification of satellite data with examples using data from the advanced very high resolution radiometer and the Landsat multispectral scanner". Thesis, University of Dundee, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303169.

29

Hofmann, Alexandra. "An Approach to 3D Building Model Reconstruction from Airborne Laser Scanner Data Using Parameter Space Analysis and Fusion of Primitives". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2005. http://nbn-resolving.de/urn:nbn:de:swb:14-1121943034550-40151.

Abstract:
Within this work an approach was developed which utilises airborne laser scanner data to generate 3D building models. These 3D building models may be used for technical and environmental planning. The approach has to meet certain requirements, such as working automatically and robustly while being flexible in use yet still practicable. The approach starts with small point clouds, each containing one building at a time, extracted from the laser scanner data set by a pre-segmentation scheme. The laser scanner point cloud of each building is analysed separately. A 2.5D Delaunay triangle mesh (TIN) is computed from the laser scanner point cloud. For each triangle, the orientation parameters in space (orientation, slope, and perpendicular distance to the barycentre of the laser scanner point cloud) are determined and mapped into a parameter space. As buildings are composed of planar features (primitives), triangles representing these features should form groups in parameter space. A cluster analysis technique is utilised to find and outline these groups/clusters. The clusters found in parameter space represent planar objects in object space. Grouping adjacent triangles in object space, which correspond to points in parameter space, enables the interpolation of planes through the ALS points that form the triangles. For each cluster point group, a plane in object space is interpolated. All planes derived from the data set are intersected with their appropriate neighbours. From this, a roof topology is established which describes the shape of the roof and ensures that each plane has knowledge of its direct adjacent neighbours. Walls are added to the intersected roof planes, and the resulting virtual 3D building model is written to a VRML (Virtual Reality Modeling Language) file. Besides developing the 3D building model reconstruction scheme, this research focuses on the geometric reconstruction and the derivation of attributes of 3D building models. The developed method was tested on data sets obtained from different laser scanner systems, and the study also shows the potential and the limits of the method when applied to these different data sets.
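A compact way to picture the parameter-space step described above: each triangle is reduced to (orientation, slope, distance) and clustered, so triangles lying on the same roof face fall into the same cluster. The sketch below, with hand-made numbers and DBSCAN as a stand-in clustering method, illustrates the idea rather than the dissertation's implementation.

import numpy as np
from sklearn.cluster import DBSCAN

def triangle_params(p0, p1, p2, barycentre):
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    slope = np.degrees(np.arccos(abs(n[2])))       # tilt from the horizontal
    aspect = np.degrees(np.arctan2(n[1], n[0]))    # orientation of the face
    dist = abs(np.dot(n, barycentre - p0))         # perpendicular distance
    return np.array([aspect, slope, dist])

bc = np.array([0.5, 0.5, 1.0])
print(triangle_params(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 1.0, 1.0]), bc))      # e.g. [-90, 45, 0.354]

# hand-made parameter vectors for triangles on two opposite roof faces
params = np.array([[45, 30, 2.0], [44, 31, 2.1], [46, 30, 1.9],
                   [225, 30, 2.0], [226, 29, 2.1], [224, 31, 2.0]], float)
print(DBSCAN(eps=5.0, min_samples=2).fit_predict(params))  # two clusters, one per face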
30

MacDonald, David Ross. "The assessment of LANDSAT multi-spectral scanner and thematic mapper data for geological investigation, using four examples from South Australia /". Title page, contents and abstract only, 1988. http://web4.library.adelaide.edu.au/theses/09S.B/09s.bm1351.pdf.

31

White, Katharine L. "What is the future of brand name beef? : A price analysis of branding incentives and other attributes for retail beef using sales scanner data". Thesis, Manhattan, Kan. : Kansas State University, 2010. http://hdl.handle.net/2097/3551.

32

Wang, Chao. "Point clouds and thermal data fusion for automated gbXML-based building geometry model generation". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/54008.

Abstract:
Existing residential and small commercial buildings now represent the greatest opportunity to improve building energy efficiency. Building energy simulation analysis is becoming increasingly important because the analysis results can assist decision makers in improving building energy efficiency and reducing environmental impacts. However, manually measuring the as-is conditions of building envelopes, including geometry and thermal values, is still a labor-intensive, costly, and slow process. Thus, the primary objective of this research was to automatically collect and extract the as-is geometry and thermal data of the building envelope components and create a gbXML-based building geometry model. In the proposed methodology, a rapid and low-cost data collection hardware system was designed by integrating 3D laser scanners and an infrared (IR) camera. Secondly, several algorithms were created to automatically recognize various components of the building envelope as objects from the collected raw data. The extracted 3D semantic geometric model was then automatically saved in an industry-standard file format for data interoperability. The feasibility of the proposed method was validated through three case studies. The contributions of this research include 1) a customized low-cost hybrid data collection system that fuses various data into a thermal point cloud, and 2) an automatic method of extracting building envelope components and their geometry to generate a gbXML-based building geometry model. The broader impact of this research is that it offers a new way to collect as-is building data without impeding occupants’ daily life and provides an easier way for laypeople to understand the energy performance of their buildings via 3D thermal point cloud visualization.
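The fusion step can be pictured as projecting each 3D point into the IR image and attaching the temperature of the pixel it lands on. The sketch below uses a pinhole model with placeholder intrinsics and a synthetic temperature image; it illustrates the idea, not the thesis's calibrated pipeline.

import numpy as np

K = np.array([[320.0, 0.0, 160.0],     # assumed pinhole intrinsics of the IR camera
              [0.0, 320.0, 120.0],
              [0.0, 0.0, 1.0]])
thermal = np.full((240, 320), 20.0)    # synthetic temperature image (deg C)
thermal[100:140, 140:180] = 35.0       # a warm patch, e.g. a thermal bridge

points = np.array([[0.0, 0.0, 2.0],    # 3D points in the camera frame
                   [0.5, 0.3, 2.5]])
uvw = (K @ points.T).T
uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)            # perspective division
temps = thermal[uv[:, 1], uv[:, 0]]                    # sample temperature per point
thermal_points = np.c_[points, temps]                  # x, y, z, temperature
print(thermal_points)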
33

Stuhr, Lina. "Grain Reduction in Scanned Image Sequences under Time Constraints". Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17577.

Abstract:

This thesis is about improving the image quality of image sequences scanned by the film scanner GoldenEye. Film grain is often seen as an artistic effect in film sequences, but scanned images can be more grainy or noisy than intended. To remove the grain and noise as well as sharpen the images, a few known image enhancement methods were implemented, tested, and evaluated. An original thresholding method based on the dyadic wavelet transform was also tested. MATLAB was used as the benchmark platform, but one method was also implemented in C/C++. Some of the methods produce satisfactory image results, but none of them meets the time constraints. A few speed-up ideas are therefore suggested at the end of the thesis, together with a method to correct the colour of the sequences.
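As a rough sketch of the wavelet route (using an ordinary decimated transform as a stand-in for the dyadic one, PyWavelets rather than the thesis code, and an assumed threshold), grain suppression amounts to soft-thresholding the detail coefficients and inverting:

import numpy as np
import pywt

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # synthetic grain/noise

coeffs = pywt.wavedec2(noisy, "db2", level=2)
thr = 0.15                                               # assumed threshold
coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft") for d in lvl)
                        for lvl in coeffs[1:]]
denoised = pywt.waverec2(coeffs, "db2")[:64, :64]
print("error std before/after:",
      round(float((noisy - clean).std()), 3),
      round(float((denoised - clean).std()), 3))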

34

Fried, Samantha Jo. "Landsat in Contexts: Deconstructing and Reconstructing the Data-to-Action Paradigm in Earth Remote Sensing". Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/89431.

Abstract:
There is a common theme at play in our talk of data generally, of digital earth data more specifically, and of environmental monitoring most specifically: more data leads to more action and, ultimately, to societal good. This data-to-action framework is troubled. Its taken-for-grantedness prevents us from attending to the processes between data and action. It also dampens our drive to investigate the contexts of that data, that action, and that envisioned societal good. In this dissertation, I deconstruct this data-to-action model in the context of Landsat, the United States' first natural resource management satellite. First, I talk about the ways in which Landsat's data and instrumentation hold conflicting narratives and values within them. Therefore, Landsat data does not automatically or easily yield action toward environmental preservation, or toward any unified societal good. Furthermore, I point out a parallel dynamic in STS, where critique is somewhat analogous to data. We want our critiques to yield action, and to guide us toward a more just technoscience. However, critiques—like data—require intentional, reconstructive interventions toward change. Here is an opportunity for a diffractive intervention: one in which we read STS and remote sensing through each other, to create space for interdisciplinary dialogue around environmental preservation. A focus on this shared goal, I argue, is imperative. At stake are issues of environmental degradation, dwindling resources, and climate change. I conclude with beginnings rather than endings: with suggestions for how we might begin to create infrastructure that attends to that forgotten space between data, critique, action, and change.
Doctor of Philosophy
I have identified a problem I call the data-to-action paradigm. When we scroll around on Facebook and find articles, citing pages and pages of statistics, on our rapidly melting glaciers and increasingly unpredictable weather patterns, we are existing within this paradigm. We have been offered evidence of looming, catastrophic change, but no suggestions on what to do about it. This is not only happening with climatological data and large-scale environmental systems modelling; rather, it is a general problem across the field of Earth Remote Sensing. The origins of this data-to-action paradigm, I argue, can be found in old and new rhetoric about Landsat, the United States’ first natural resource management satellite. This rhetoric often says that data from Landsat and other natural resource management satellites is a way toward societal good: the more data we have, the more good will proliferate in the world. However, we haven’t been specific about what that good might look like, and what kinds of actions we might take toward that good using this data. This is because, I argue, Earth systems science is politically complicated, with many different conceptions of societal good. In order to be more specific about how we might use this data toward some kind of good, we must (1) explore the history of environmental data and figure out where this rhetoric comes from (which I do in this dissertation), and (2) encourage interdisciplinary collaborations between Earth Remote Sensing scientists, social scientists, and humanists, to flesh out more specifically the connections between digital Earth data, its analyses, and subsequent civic action on such data.
35

Jacobsen, Anne. "Analysing airborne optical remote sensing data from a hyperspectral scanner and implications for environmental mapping and monitoring results from a study of casi data and Danish semi-natural dry grasslands /". Roskilde : Danmarks Miljøundersøgelser, 2001. http://www.dmu.dk/1_viden/2_Publikationer/3_Ovrige/default.asp.

36

Pazuniak, Orest V. "Do Labels Make A Difference: Estimating The Impacts Of Vermont’s Gmo Labeling Law On Perceptions And Prices". ScholarWorks @ UVM, 2018. https://scholarworks.uvm.edu/graddis/974.

Abstract:
Vermont is the first and only state in the US to have established mandatory labels for food containing genetically modified organisms (GMOs). This thesis investigates the impact of the mandatory labeling law on prices, quantities sold, and opinions of GMOs. First, grocery store scanner data from Vermont and Oregon are compared using triple-difference (difference-in-difference-in-difference) models. Next, survey response data from Vermont, Oregon, and Colorado are compared using difference-in-difference models. The findings reveal a general price premium for non-GMO goods of $0.05/oz across all states and periods, that mandatory labeling laws do not result in a short-term change in the quantities sold or prices of GMO products, and that both mandatory labeling laws and failed mandatory labeling referendums increase support for GMOs in the food supply. The implication of this research is that mandatory GMO labels did not affect short-term prices or sales and increased the level of support for GMOs.
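The difference-in-difference logic behind these comparisons can be condensed to the coefficient on a treated-times-post interaction. The sketch below uses simulated data, not the thesis's Vermont/Oregon samples, and sets the true labeling effect to zero.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({"treated": rng.integers(0, 2, n),   # 1 = labeling state
                   "post": rng.integers(0, 2, n)})     # 1 = after the law
effect = 0.00                                          # assumed true effect
df["log_price"] = (0.5 + 0.05 * df.treated + 0.02 * df.post
                   + effect * df.treated * df.post
                   + 0.1 * rng.standard_normal(n))
m = smf.ols("log_price ~ treated * post", data=df).fit()
print("DiD estimate:", round(m.params["treated:post"], 3))   # close to zero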
37

Nalani, Hetti Arachchige [Verfasser], Hans-Gerd [Akademischer Betreuer] Maas, Eberhard [Akademischer Betreuer] Gülch i Norbert [Akademischer Betreuer] Haala. "Automatic Reconstruction of Urban Objects from Mobile Laser Scanner Data / Hetti Arachchige Nalani. Gutachter: Hans-Gerd Maas ; Eberhard Gülch ; Norbert Haala. Betreuer: Hans-Gerd Maas". Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://d-nb.info/1069093025/34.

38

Schilling, Anita [Verfasser], Hans-Gerd [Akademischer Betreuer] Maas, Norbert [Akademischer Betreuer] Pfeifer i Uwe [Akademischer Betreuer] Petersohn. "Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data / Anita Schilling. Gutachter: Hans-Gerd Maas ; Norbert Pfeifer ; Uwe Petersohn. Betreuer: Hans-Gerd Maas". Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://d-nb.info/106904069X/34.

39

Persson, Per-Göran. "Modeling the impact of sales promotion on store profits". Doctoral thesis, Handelshögskolan i Stockholm, Centrum för Konsumentmarknadsföring (CCM), 1995. http://urn.kb.se/resolve?urn=urn:nbn:se:hhs:diva-883.

Abstract:
Millions of dollars are spent each year on sales promotion in grocery retailing. Despite this, no one really knows whether promotions for individual items have positive, neutral, or negative effects on the store’s total profits. In this dissertation, an attempt is made to build a framework model that can be used to predict and evaluate how specific promotions affect item-level, category-level, and store-level profits. One major contribution is that the proposed model explicitly incorporates store-traffic and cannibalization effects. The book consists of four parts: the first part is a review of the literature on sales promotion. The second part describes the development of a model for measuring the profit of retailer promotions. The third part shows the impact of sales promotion on profits for a number of hypothetical products. The fourth part measures the profit impact of sales promotions for 15 items in three product categories. The data used in this study were collected in a Swedish supermarket in cooperation with ICA.
Diss. Stockholm : Handelshögskolan, 1995
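The store-level accounting in such a framework can be illustrated with invented numbers: the net effect of a promotion is the item's own profit change, minus the margin lost on cannibalized substitutes, plus profit from extra store traffic. This is an illustration of the bookkeeping, not the dissertation's actual model.

margin_item, extra_units = 2.0, 300             # promoted item: margin ($) and lift
cannibalized_units, margin_substitute = 120, 3.0
extra_trips, profit_per_trip = 80, 5.0          # store-traffic effect

item_profit = margin_item * extra_units
category_effect = -cannibalized_units * margin_substitute
traffic_effect = extra_trips * profit_per_trip
print("net store-level effect:", item_profit + category_effect + traffic_effect)
# 600 - 360 + 400 = 640: profitable once traffic is counted, despite cannibalization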
40

Holmberg, Carina. "Stores and consumers : two perspectives on food purchasing". Doctoral thesis, Stockholm : Foundation for Distribution Research, Economic Research Institute, Stockholm School of Economics [Fonden för handels- och distributionsforskning, Ekonomiska forskningsinstitutet vid Handelshögsk.], 1996. http://www.hhs.se/efi/summary/418.htm.

41

Kapicová, Eva. "Přístupy k řešení digitalizace dokumentů". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-72508.

Abstract:
This thesis focuses on document imaging and describes different approaches to the implementation of document imaging systems. The first part presents the theoretical background of document imaging, supplemented by current statistics, remarks on the situation on the Czech market, and an overview of hardware and software support for digitization. The second part examines different approaches to document imaging systems taken by Czech companies. The main objective of the thesis is a practical example of selecting an appropriate document imaging solution, considering three types of approaches: complete outsourcing, in-house outsourcing, or a custom in-house solution. The TEI (Total Economic Impact) method by Forrester Research is used to compare the economic impact of the different approaches. The contribution of this thesis lies primarily in describing the state of document imaging in the Czech Republic and in examining the different solutions and approaches; the theoretical part is also supplemented with data from current surveys.
42

Oliveira, Pezente Aline (De Souza Oliveira Pezente). "Predictive demand models in the food and agriculture sectors : an analysis of the current models and results of a novel approach using machine learning techniques with retail scanner data". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117950.

Abstract:
Thesis: S.M. in Management of Technology, Massachusetts Institute of Technology, Sloan School of Management, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 51-53).
Agricultural commodity production and consumption are typically misaligned: commodities are often produced periodically (certain crops are harvested once a year) but consumed continuously throughout the year. These temporal mismatches require both commodity consumers (food industries) and producers (farmers) to predict future consumption from limited, unreliable information about future demand and from the available historical data. Consequently, the lack of an appropriate understanding of the actual food consumption trend leads producers in some cases to make wrong bets, which eventually causes food waste, price volatility, and excess commodity stocks. The commodities market has a good view of short-term supply fundamentals but still lacks powerful tools and frameworks to estimate the long-term demand fundamentals that will drive future supply. This thesis studies commodity demand forecasting using Nielsen retail scanner data and machine learning techniques to construct nonlinear parametric models of commodity consumption, with U.S. sugar cane as the use case. By combining Nielsen retail scanner data from January 2006 to December 2015, covering a sample of 30% of U.S. retailers, wholesalers, and small shops and a basket of products that have sugar as one of their main components, we were able to construct out-of-sample forecasts that significantly improve the prediction of sugar demand compared with the classical baseline of a historical moving average.
by Aline Oliveira Pezente.
S.M. in Management of Technology
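A minimal sketch of the evaluation idea, with a synthetic monthly demand series and a gradient-boosting model standing in for the thesis's machine learning setup, compares the out-of-sample error of a learned forecast against a historical-average baseline:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = np.arange(120)                                      # ten years of monthly data
demand = 100 + 10 * np.sin(2 * np.pi * t / 12) + 0.2 * t + rng.normal(0, 3, t.size)

X = np.c_[t % 12, t]                                    # month-of-year and trend
train, test = slice(0, 96), slice(96, 120)
model = GradientBoostingRegressor(random_state=0).fit(X[train], demand[train])
ml_pred = model.predict(X[test])
baseline = np.full(24, demand[train].mean())            # historical-average baseline

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("baseline RMSE:", round(rmse(demand[test], baseline), 2))
print("ML RMSE:", round(rmse(demand[test], ml_pred), 2))   # learns trend and season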
43

Weeks, Scarla Jeanne. "Spatial and temporal variability of chlorophyll concentrations from nimbus-7 coastal zone colour scanner data in the Benguela upwelling system and the sub-tropical convergence region south of Africa". Master's thesis, University of Cape Town, 1993. http://hdl.handle.net/11427/21857.

Abstract:
Bibliography: pages 68-74.
South African oceanographers were engaged in collecting hydrographic and biological sea-truth data in order to calibrate the CZCS measurements from the NIMBUS-7 satellite over the Benguela Upwelling region and along the east coast of South Africa during the period 1978 to 1981. A brief overview of the CZCS validation programme and its application to the South African marine environment is given, followed by an analysis of level-III CZCS data obtained from NASA for the region 10°-60°S and 10°-100°E. This area includes the Benguela Upwelling system on the continental shelf and the Southern Ocean with the Subtropical Convergence zone south of Africa. High annual chlorophyll values (5 mg m⁻³) occurred in the Benguela shelf region, typical of other upwelling systems in the world ocean, and the data show a strong interannual signal in the seven years of composited data from 1978-1985, with maxima in 1982. Two distinct regimes were found in the Benguela Upwelling system, the seasonal variations of pigment concentration in the northern and southern Benguela regions being out of phase. In the Southern Ocean, chlorophyll values were generally low (0.15 mg m⁻³), with the strongest signal (1.5 mg m⁻³) found at the southern border of the Agulhas retroflection region and its frontal boundary with the colder subantarctic water to the south. The high chlorophyll values found in this region are ten times the typical open Southern Ocean values. There is a clear interannual signal in the CZCS data for this Subtropical Convergence region, with a low value in 1979 rising to a maximum in 1981 and then decreasing to another low value in 1985; there appears to be no pronounced seasonal variation in the Subtropical Convergence data. Reasons for the strong signal in the surface chlorophyll concentrations at the front between the Agulhas Return Current and the Southern Ocean are discussed, and it is shown that the Agulhas Plateau sets up a topographic Rossby wave in the Agulhas Return Current which can be clearly identified in the CZCS signal. The large expanse of the Subtropical Convergence region is found to be able to sustain, for limited periods of time, a standing stock of phytoplankton similar in magnitude to that on the Benguela shelf. A brief analysis of sea surface temperature versus chlorophyll concentration shows that the relationship between the two parameters takes the form of an inverted parabola, with a temperature window within which maximum chlorophyll concentrations are found.
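The inverted-parabola relationship can be recovered by fitting a downward-opening quadratic of chlorophyll against SST and reading off the vertex; the numbers below are synthetic, not the CZCS values.

import numpy as np

rng = np.random.default_rng(0)
sst = np.linspace(12, 24, 50)                         # synthetic SST range (deg C)
chl = -0.02 * (sst - 17.0) ** 2 + 1.2 + rng.normal(0, 0.05, sst.size)
a, b, c = np.polyfit(sst, chl, 2)                     # chl ~ a*sst^2 + b*sst + c
print("peak chlorophyll near SST =", round(-b / (2 * a), 1), "deg C")   # ~17.0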
44

Wang, Xiaojin. "ESSAYS ON AGRICULTURAL MARKET AND POLICIES: IMPORTED SHRIMP, ORGANIC COFFEE, AND CIGARETTES IN THE UNITED STATES". UKnowledge, 2016. http://uknowledge.uky.edu/agecon_etds/41.

Abstract:
This dissertation focuses on topics in the areas of agricultural and food policy, international trade, and agricultural markets and marketing. It is structured as three papers. The first paper, Chapter 1, evaluates the impact of agricultural trade policies. Imported shrimp, which comprise nearly ninety percent of all United States shrimp consumption, have become the subject of antidumping and countervailing duty investigations in the past decade. I estimate the import demand for shrimp in the United States from 1999-2014 using Barten’s synthetic model, and test the hypothesis of possible structural breaks in import demand introduced by various trade policies: antidumping/countervailing duty investigations and impositions, and import refusals due to safety and environmental issues. Results show that these import-restricting policies have significant effects on import demand for shrimp, indicating that omitting them would lead to biased estimates. Chapter 2, the second paper, examines how the burden of state cigarette taxes is divided between producers/retailers and consumers, using Nielsen store-level scanner data on cigarette prices from convenience stores over the period 2011-2012. Cigarette taxes were found to be more than fully passed through to retail prices on average, suggesting that consumers bear excess burden and that market power exists in the cigarette industry. Utilizing information on the attributes of cigarette products, we demonstrated that tax incidence varied by brand and package size: pass-through rates for premium brands and carton-packaged cigarettes are higher than those for discount brands and cigarettes in packs, respectively, indicating possibly different demand elasticities across product tiers. Chapter 3, the third paper, identifies the demographic characteristics of households buying organic coffee by examining the factors that influence the probability that a consumer will buy organic coffee and the factors that affect the amount of organic coffee purchased. Using nationally representative household-level data from 55,470 households over the period 2011 to 2013 (Nielsen Homescan) and a censored demand model, we find that economic and demographic factors play a crucial role in the household choice of purchasing organic coffee. Furthermore, households are less sensitive to own-price changes for organic coffee than for conventional coffee.
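The pass-through idea in Chapter 2 reduces to regressing retail price on the tax: a coefficient above one indicates over-shifting. A simulated sketch (not the Nielsen data or the paper's controls):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
tax = rng.uniform(0.5, 4.0, 500)                       # state tax per pack ($)
price = 4.0 + 1.1 * tax + rng.normal(0, 0.2, 500)      # assumed true pass-through 1.1
m = sm.OLS(price, sm.add_constant(tax)).fit()
print("estimated pass-through:", round(m.params[1], 2))   # about 1.1, i.e. > 1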
45

Elshiewy, Ossama [Verfasser], Yasemin [Akademischer Betreuer] Boztuğ, Till [Akademischer Betreuer] Dannewald i Maik [Akademischer Betreuer] Hammerschmidt. "The Impact of Voluntary Front-of-Pack Nutrition-Label Introduction on Purchase Behavior : Three Studies Analyzing Supermarket Scanner Data / Ossama Elshiewy. Gutachter: Yasemin Boztug ; Till Dannewald ; Maik Hammerschmidt. Betreuer: Yasemin Boztug". Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2015. http://d-nb.info/1066427453/34.

46

Elshiewy, Ossama [Verfasser], Yasemin [Akademischer Betreuer] Boztuğ, Till [Akademischer Betreuer] Dannewald i Maik [Akademischer Betreuer] Hammerschmidt. "The Impact of Voluntary Front-of-Pack Nutrition-Label Introduction on Purchase Behavior : Three Studies Analyzing Supermarket Scanner Data / Ossama Elshiewy. Gutachter: Yasemin Boztug ; Till Dannewald ; Maik Hammerschmidt. Betreuer: Yasemin Boztug". Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:7-11858/00-1735-0000-0022-5DA5-3-5.

47

Richter, Martin. "Systém pro sdílení skenerů po síti". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-264963.

Abstract:
The purpose of this master's thesis is the creation of a system capable of sharing scanners over a computer network. The target scanner interfaces are TWAIN and WIA on the Microsoft Windows operating system, and SANE on GNU/Linux. The C++ programming language, the Boost libraries, and the Qt framework were used to implement the programming part of this work. Several smaller helper libraries that are useful even outside this work were implemented, most notably the TWAIN++ framework. The resulting system enables the user to share scanners over the network and to scan using any of the aforementioned interfaces.
48

Lee, Nien-Lung. "Feature Recognition From Scanned Data Points /". The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487868114111376.

49

Haberjahn, Mathias. "Multilevel Datenfusion konkurrierender Sensoren in der Fahrzeugumfelderfassung". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16856.

Abstract:
This thesis aims to contribute to increasing the accuracy and reliability of sensor-based recognition and tracking of objects in a vehicle's surroundings. Based on a detection system consisting of a stereo camera and a laser scanner, newly developed procedures are introduced for the whole processing chain of the sensor data. In addition, a new framework for the fusion of heterogeneous sensor data is introduced; by combining the fusion results from the different processing levels, object detection can be improved. After a short description of the sensor setup, the developed procedures for calibration and mutual orientation are introduced. For the segmentation of the spatial point data, existing procedures are extended to include the measuring accuracy and the measurement characteristics of the sensor. In the subsequent object tracking, a new computation-optimized approach for associating the related object hypotheses is presented, together with a model for the dynamic determination and tracking of an object reference point, which exceeds the track accuracy of classical tracking of the object centre. The introduced fusion framework makes it possible to merge the sensor data at three different processing levels (point, object, and track level). A sensor-independent approach for the low-level fusion of point data is demonstrated which delivers the most precise object description in comparison with the other fusion levels and the single sensors. For the higher fusion levels, new procedures were developed to discover and reduce detection and processing errors, benefiting from the competing sensor information. Finally, it is described how the error-reducing procedures of the upper fusion levels can be combined with the optimal object description of the lower fusion level for an ideal overall object description. The effectiveness of the newly developed methods was verified by simulation or in real measurement scenarios.
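The benefit of low-level fusion of competing sensors can be seen in the classic inverse-variance combination of two position estimates, where the fused variance is below either sensor's. The variances below are illustrative assumptions, not the thesis's sensor model.

import numpy as np

z_stereo, var_stereo = np.array([10.2, 3.1]), 0.30     # position (m) and variance
z_laser, var_laser = np.array([10.0, 3.0]), 0.05

w = (1.0 / var_stereo) / (1.0 / var_stereo + 1.0 / var_laser)
fused = w * z_stereo + (1.0 - w) * z_laser
var_fused = 1.0 / (1.0 / var_stereo + 1.0 / var_laser)
print("fused position:", fused.round(2), "variance:", round(var_fused, 3))
# var_fused ~ 0.043 < min(0.30, 0.05): the fused estimate is tighter than either sensor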
50

Söderström, Simon. "Detect obstacles for forest machinery from laser scanned data". Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-174911.
