Selection of scientific literature on the topic "Velodyne LiDAR"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Velodyne LiDAR".

Next to each work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Velodyne LiDAR"

1

Alsadik, Bashar. "Performance Assessment of Mobile Laser Scanning Systems Using Velodyne HDL-32E". Surveying and Geospatial Engineering Journal 1, no. 1 (01.01.2021): 28–33. http://dx.doi.org/10.38094/sgej116.

Abstract:
Mapping systems using multi-beam LiDARs are nowadays widely used for different geospatial applications, ranging from indoor projects to outdoor city-wide projects. These mobile mapping systems can be either ground-based or aerial and are mostly equipped with inertial navigation systems (INS). The Velodyne HDL-32 is a well-known 360° spinning multi-beam laser scanner that is widely used in outdoor and indoor mobile mapping systems. The performance of such LiDARs is an ongoing research topic and is quite important for quality assurance and quality control. The performance of this LiDAR type is correlated with many factors, related either to the device itself or to the design of the mobile mapping system. Regarding design, most mapping systems are equipped with a single Velodyne HDL-32 at a specific orientation angle that differs among manufacturers. The LiDAR orientation angle has a significant impact on performance in terms of the density and coverage of the produced point clouds. Furthermore, during the lifetime of this multi-beam LiDAR, one or more beams may become defective; the scanner then either stays in production or is returned to the manufacturer for repair, which costs time and money. In this paper, the design impact of a mobile laser scanning (MLS) system equipped with a single Velodyne HDL-32E is analysed, and a clear relationship is given between the orientation angle of the LiDAR and the output point density. The ideal angular orientation of a single Velodyne HDL-32E in a mobile mapping system is found to be 35°. Furthermore, we investigated the degradation of point density when one of the 32 beams is defective and quantified the percentage of density loss; to the best of our knowledge, this has not been presented in the literature before. It is found that a defective beam of the Velodyne HDL-32E causes a maximum point density loss of about 8% on the ground and 4% on facades.
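The density analysis described in this abstract can be reproduced in spirit with a toy simulation: trace the 32 beams of a tilted, spinning scanner onto a flat ground plane and count how many ground returns are lost when a single beam is disabled. The sketch below is an illustration, not the author's code; the vertical angle span matches the published HDL-32E specification, while the sensor height, tilt, and angular sampling are arbitrary assumptions.

    import numpy as np

    def ground_hits(tilt_deg, sensor_height=2.0, n_beams=32, n_azimuth=1800,
                    max_range=70.0, disabled_beam=None):
        """Count simulated returns that reach the ground plane z = 0."""
        vert = np.radians(np.linspace(-30.67, 10.67, n_beams))   # HDL-32E beam elevations
        az = np.radians(np.linspace(0.0, 360.0, n_azimuth, endpoint=False))
        tilt = np.radians(tilt_deg)
        # Mount tilt modelled as a rotation about the platform's forward (x) axis.
        R = np.array([[1.0, 0.0, 0.0],
                      [0.0, np.cos(tilt), -np.sin(tilt)],
                      [0.0, np.sin(tilt),  np.cos(tilt)]])
        hits = 0
        for beam, ev in enumerate(vert):
            if beam == disabled_beam:
                continue                                          # simulate a dead beam
            d = np.stack([np.cos(ev) * np.cos(az),                # ray directions,
                          np.cos(ev) * np.sin(az),                # sensor frame
                          np.full_like(az, np.sin(ev))])
            d = R @ d                                             # rotate into platform frame
            down = d[2] < 0                                       # rays heading towards the ground
            t = -sensor_height / d[2, down]                       # range to the plane z = 0
            hits += int(np.count_nonzero(t <= max_range))
        return hits

    full = ground_hits(tilt_deg=35.0)
    one_dead = ground_hits(tilt_deg=35.0, disabled_beam=0)
    print(f"ground-point loss with one dead beam: {100.0 * (1.0 - one_dead / full):.1f}%")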
2

Jozkow, G., C. Toth and D. Grejner-Brzezinska. "UAS TOPOGRAPHIC MAPPING WITH VELODYNE LiDAR SENSOR". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-1 (02.06.2016): 201–8. http://dx.doi.org/10.5194/isprsannals-iii-1-201-2016.

Abstract:
Unmanned Aerial System (UAS) technology is nowadays widely used in small-area topographic mapping due to its low cost and the good quality of the derived products. Since cameras typically used with UAS have some limitations, e.g. they cannot penetrate vegetation, LiDAR sensors are increasingly getting attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform; nevertheless, UAS LiDAR is still an emerging technology. One issue related to using LiDAR sensors on UAS is the limited performance of the navigation sensors used on UAS platforms. Therefore, various hardware and software solutions are investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with a Velodyne laser scanner and cameras. Attention was primarily paid to trajectory reconstruction performance, which is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not perform sufficiently well, the estimated camera poses could increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including a comparison with point clouds obtained from dense image matching. The results showed the need for more investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the image-derived point cloud, may still be sufficient for certain mapping applications where optical imagery is not useful.
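The georeferencing step this abstract revolves around can be illustrated with a minimal direct-georeferencing sketch (an illustration, not the authors' pipeline): every return is rotated and translated from the sensor frame into the mapping frame using the platform pose interpolated from the reconstructed trajectory. The boresight matrix and lever arm are placeholders that would come from system calibration.

    import numpy as np
    from scipy.spatial.transform import Rotation, Slerp

    def georeference(points, stamps, traj_t, traj_pos, traj_quat,
                     boresight=np.eye(3), lever_arm=np.zeros(3)):
        """points: (N, 3) sensor-frame returns; stamps: (N,) firing times.
        traj_t, traj_pos, traj_quat: trajectory times, positions and attitudes."""
        slerp = Slerp(traj_t, Rotation.from_quat(traj_quat))        # attitude interpolation
        R_wb = slerp(stamps).as_matrix()                             # body-to-world, (N, 3, 3)
        t_wb = np.stack([np.interp(stamps, traj_t, traj_pos[:, k]) for k in range(3)], axis=1)
        p_body = points @ boresight.T + lever_arm                    # sensor -> body frame
        return np.einsum('nij,nj->ni', R_wb, p_body) + t_wb          # body -> mapping frame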
3

Tazir, M. L., and N. Seube. "COMPARISON OF UAV LIDAR ODOMETRY OF ROTATING AND FIXED VELODYNE PLATFORMS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2020 (06.08.2020): 527–34. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2020-527-2020.

Abstract:
Three-dimensional LiDAR rangefinders are increasingly integrated into unmanned aerial vehicles (UAVs) due to their direct access to 3D information, their high accuracy and refresh rate, and their tendency to be lightweight and increasingly affordable. However, all commercial LiDARs offer only a limited vertical resolution. To cope with this problem, one solution is to rotate the LiDAR about an axis passing through its centre, adding a degree of freedom and allowing more overlap, which significantly enlarges the sensor's coverage and yields a complete spherical field of view (FOV). In this paper, we explore this solution in detail in the context of drones, comparing the rotating and fixed configurations for a multi-layer LiDAR (MLL) of the Velodyne Puck Lite type. We investigate its impact on the LiDAR Odometry (LO) process by comparing the trajectories obtained from the two configurations, as well as through qualitative comparisons of the resulting maps.
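The rotating-mount idea can be sketched as follows (an illustration under assumed geometry, not the paper's implementation): every return is de-rotated by the mount angle at its timestamp, so that accumulating the points turns the Puck's 30° vertical field of view into a near-spherical one.

    import numpy as np

    def derotate(points, stamps, mount_rate_hz, axis=np.array([1.0, 0.0, 0.0])):
        """points: (N, 3) sensor-frame returns; stamps: (N,) seconds; axis: unit mount axis."""
        angles = 2.0 * np.pi * mount_rate_hz * stamps          # mount angle at each return
        out = np.empty_like(points)
        for i, (p, a) in enumerate(zip(points, angles)):
            c, s = np.cos(a), np.sin(a)
            # Rodrigues' rotation formula about the mount axis through the sensor centre.
            out[i] = p * c + np.cross(axis, p) * s + axis * np.dot(axis, p) * (1.0 - c)
        return out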
4

Okunsky, M. V., and N. V. Nesterova. "Velodyne LIDAR method for sensor data decoding". IOP Conference Series: Materials Science and Engineering 516 (26.04.2019): 012018. http://dx.doi.org/10.1088/1757-899x/516/1/012018.

5

Atanacio-Jiménez, Gerardo, José-Joel González-Barbosa, Juan B. Hurtado-Ramos, Francisco J. Ornelas-Rodríguez, Hugo Jiménez-Hernández, Teresa García-Ramirez and Ricardo González-Barbosa. "LIDAR Velodyne HDL-64E Calibration Using Pattern Planes". International Journal of Advanced Robotic Systems 8, no. 5 (01.01.2011): 59. http://dx.doi.org/10.5772/50900.

Abstract:
This work describes a method for calibrating the Velodyne HDL-64E scanning LIDAR system. The principal contributions are a calibration pattern signature, the mathematical model, and the numerical algorithm for computing the calibration parameters of the LIDAR. The main objective of this calibration pattern is to minimize systematic errors due to the geometric calibration factors. An algorithm for solving for the intrinsic and extrinsic parameters is described. Finally, the uncertainty is calculated from the standard deviation of the calibration errors.
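A drastically simplified form of plane-based calibration (a sketch, not the paper's full intrinsic/extrinsic model) estimates one range offset per laser by minimising point-to-plane residuals against a known calibration plane n·x = d:

    import numpy as np
    from scipy.optimize import least_squares

    def calibrate_range_offsets(points, beam_ids, plane_n, plane_d, n_beams=64):
        """points: (N, 3) raw returns on the plane; beam_ids: (N,) laser index of each return."""
        ranges = np.linalg.norm(points, axis=1)
        dirs = points / ranges[:, None]                          # unit ray directions

        def residuals(offsets):
            corrected = (ranges + offsets[beam_ids])[:, None] * dirs
            return corrected @ plane_n - plane_d                 # signed distance to the plane

        return least_squares(residuals, x0=np.zeros(n_beams)).x  # metres, one offset per beam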
6

Zhang, Liang, Qingquan Li, Ming Li, Qingzhou Mao and Andreas Nüchter. "Multiple Vehicle-like Target Tracking Based on the Velodyne LiDAR*". IFAC Proceedings Volumes 46, no. 10 (June 2013): 126–31. http://dx.doi.org/10.3182/20130626-3-au-2035.00058.

7

Erke, Shang, Dai Bin, Nie Yiming, Xiao Liang and Zhu Qi. "A fast calibration approach for onboard LiDAR-camera systems". International Journal of Advanced Robotic Systems 17, no. 2 (01.03.2020): 172988142090960. http://dx.doi.org/10.1177/1729881420909606.

Abstract:
Outdoor surveillance and security robots have a wide range of industrial, military, and civilian applications. In order to achieve autonomous navigation, the LiDAR-camera system is widely applied by outdoor surveillance and security robots. Calibration of the LiDAR-camera system is essential for robots to correctly acquire scene information. This article proposes a fast calibration approach that differs from traditional calibration algorithms. The proposed approach combines two independent calibration processes, the calibration of the LiDAR and of the camera to the robot platform, to obtain the relationship between the LiDAR sensor and the camera sensor. A novel approach to calibrating the LiDAR to the robot platform is applied to improve accuracy and robustness. A series of indoor experiments was carried out, and the results show that the proposed approach is effective and efficient. Finally, it is applied to our own outdoor security robot platform, which employs two Velodyne HDL-32 LiDARs and a color camera, to detect both positive and negative obstacles in a field environment. This real application illustrates the robustness of the proposed approach.
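The core idea, composing two independent sensor-to-platform calibrations into a LiDAR-to-camera extrinsic, reduces to a product of homogeneous transforms. The sketch below is a hedged illustration; the 4x4 matrices are placeholders for whatever the two calibration steps deliver.

    import numpy as np

    def lidar_to_camera(T_platform_lidar, T_platform_camera):
        """Both inputs are 4x4 homogeneous transforms mapping sensor frame -> platform frame."""
        # LiDAR point -> platform frame -> camera frame.
        return np.linalg.inv(T_platform_camera) @ T_platform_lidar

    # Example: a LiDAR point expressed in the camera frame (identity placeholders).
    p_lidar = np.array([2.0, 0.5, -0.2, 1.0])              # homogeneous coordinates
    p_camera = lidar_to_camera(np.eye(4), np.eye(4)) @ p_lidar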
8

Lassiter, H. Andrew, Travis Whitley, Benjamin Wilkinson and Amr Abd-Elrahman. "Scan Pattern Characterization of Velodyne VLP-16 Lidar Sensor for UAS Laser Scanning". Sensors 20, no. 24 (21.12.2020): 7351. http://dx.doi.org/10.3390/s20247351.

Abstract:
Many lightweight lidar sensors employed for UAS lidar mapping feature a fan-style laser emitter-detector configuration which results in a non-uniform pattern of laser pulse returns. As the role of UAS lidar mapping grows in both research and industry, it is imperative to understand the behavior of the fan-style lidar sensor to ensure proper mission planning. This study introduces sensor modeling software for scanning simulation and analytical equations developed in-house to characterize the non-uniform return density (i.e., scan pattern) of the fan-style sensor, with special focus given to a popular fan-style sensor, the Velodyne VLP-16 laser scanner. The results indicate that, despite the high pulse frequency of modern scanners, areas of poor laser pulse coverage are often present along the scanning path under typical mission parameters. These areas of poor coverage appear in a variety of shapes and sizes which do not necessarily correspond to the forward speed of the scanner or the height of the scanner above the ground, highlighting the importance of scan simulation for proper mission planning when using a fan-style sensor.
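A toy version of such a scan-pattern simulation is sketched below (an illustration with assumed mounting: spin axis along the flight direction, fan pointing straight down at azimuth zero, nominal VLP-16 geometry of 16 beams between -15° and +15°; height, speed, and azimuth resolution are arbitrary). Plotting the returned x-y coordinates reproduces the kind of non-uniform ground pattern the paper characterises.

    import numpy as np

    def vlp16_ground_pattern(height=50.0, speed=5.0, duration=1.0,
                             spin_hz=10.0, firings_per_rev=1800):
        vert = np.radians(np.linspace(-15.0, 15.0, 16))            # 16 beam elevations
        az = np.radians(np.linspace(0.0, 360.0, firings_per_rev, endpoint=False))
        pts = []
        for rev in range(int(duration * spin_hz)):
            t = (rev + az / (2.0 * np.pi)) / spin_hz               # firing times in this revolution
            x0 = speed * t                                          # platform position along track
            for ev in vert:
                # World ray direction: spin axis along flight (x), fan down at azimuth 0.
                d = np.stack([np.full_like(az, np.sin(ev)),
                              np.cos(ev) * np.sin(az),
                              -np.cos(ev) * np.cos(az)])
                hit = d[2] < 0                                      # downward rays only
                s = height / -d[2, hit]                             # slant range to the ground
                pts.append(np.stack([x0[hit] + s * d[0, hit],
                                     s * d[1, hit],
                                     np.zeros(hit.sum())], axis=1))
        return np.vstack(pts)

    pattern = vlp16_ground_pattern()
    print(pattern.shape)   # plot pattern[:, 0] vs pattern[:, 1] to see the coverage gaps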
9

Bula, Jason, Marc-Henri Derron and Gregoire Mariethoz. "Dense point cloud acquisition with a low-cost Velodyne VLP-16". Geoscientific Instrumentation, Methods and Data Systems 9, no. 2 (12.10.2020): 385–96. http://dx.doi.org/10.5194/gi-9-385-2020.

Abstract:
This study develops a low-cost terrestrial lidar system (TLS) for dense point cloud acquisition. Our system consists of a VLP-16 lidar scanner produced by Velodyne, which we have placed on a motorized rotating platform. This allows us to continuously change the scanning direction and densify the scan. Axis correction is performed in post-processing to obtain accurate scans. The system has been compared indoors with a high-cost system, showing an average absolute difference of ±2.5 cm. Stability tests demonstrated an average distance of ±2 cm between repeated scans with our system. The system has been tested in abandoned mines with promising results. It has a very low price (approximately USD 4000) and opens the door to surveying risky areas where the risk of instrument loss is high but the information is valuable.
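The indoor comparison against a high-cost scanner boils down to a cloud-to-cloud check. A minimal sketch (an illustration, assuming the two clouds are already registered in a common frame) computes the mean absolute nearest-neighbour distance:

    import numpy as np
    from scipy.spatial import cKDTree

    def cloud_to_cloud(reference, test):
        """reference: (N, 3) high-end scan; test: (M, 3) low-cost scan, same frame."""
        d, _ = cKDTree(reference).query(test)   # nearest reference point for every test point
        return d.mean(), d.std()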

Dissertations on the topic "Velodyne LiDAR"

1

Zhao, Guanyi. "Fusion of Ladybug3 omnidirectional camera and Velodyne Lidar". Thesis, KTH, Geodesi och satellitpositionering, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172431.

Abstract:
The advent of autonomous vehicles is accelerating the transformation of the car industry. Volvo Car Corporation has the ambition of developing the next generation of autonomous vehicles. Within Volvo's Active Safety CAE group, engineers have initiated a series of related research projects to enhance safety functions for autonomous vehicles, and this thesis work was carried out at Active Safety CAE with their support. Perception plays a pivotal role in autonomous driving; therefore, the idea of improving vision by fusing two different types of data, from a Velodyne HDL-64E S3 High Definition LiDAR sensor and a Ladybug3 camera respectively, is proposed. This report presents the whole process of fusing point clouds and image data. An experiment for collecting and synchronizing multi-sensor data streams was carried out by building a platform that supports mounting the Velodyne, the Ladybug3, and their accessories, as well as the connection to a GPS unit and a laptop. The related software and programming environment for recording, synchronizing, and storing the data are also described. Synchronization is mainly achieved by matching timestamps between the different datasets, and creating timestamp log files is the primary task in synchronization. The external calibration between the Velodyne and the Ladybug3 camera, needed to match the two datasets correctly, is the focus of this report. A semi-automatic calibration method with very little human intervention is developed, using a checkerboard to acquire a small set of feature correspondences between the laser point cloud and the image. Based on these correspondences, the displacement is computed. Using the computed result, the laser points are back-projected into the image; if the original and back-projected images are sufficiently consistent, the transformation parameters are accepted. The displacement between the camera and the laser scanner is estimated in two separate steps: first, the pose of the checkerboard in the image is estimated to obtain its depth in the camera coordinate system; then the transformation between the camera and the laser scanner is computed in three-dimensional space. The datasets are finally fused by combining the color information from the image with the range information from the point cloud. Other applications related to data fusion are developed to support future work. Finally, a conclusion is drawn and possible improvements are identified for future work; for example, better calibration accuracy might be achieved with other methods, and adding texture to the cloud points would generate a more realistic model.
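The back-projection and colorization steps described above can be sketched with a single pinhole camera standing in for one Ladybug3 head (a simplification; the intrinsics K and the LiDAR-to-camera extrinsics R, t are assumed to come from the calibration just described):

    import numpy as np

    def colorize(points, image, K, R, t):
        """points: (N, 3) LiDAR frame; image: (H, W, 3); K: 3x3 intrinsics; R, t: LiDAR -> camera."""
        cam = points @ R.T + t                        # points in the camera frame
        front = cam[:, 2] > 0                         # keep points in front of the camera
        uvw = cam[front] @ K.T
        uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
        h, w = image.shape[:2]
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        colors = np.zeros((points.shape[0], 3), dtype=image.dtype)
        idx = np.flatnonzero(front)[ok]
        colors[idx] = image[uv[ok, 1], uv[ok, 0]]     # sample RGB at the projected pixel
        return colors                                 # per-point color, zero if not visible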
2

Zhang, Erik. "Integration of IMU and Velodyne LiDAR sensor in an ICP-SLAM framework". Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193653.

Abstract:
Simultaneous localization and mapping (SLAM) of an unknown environment is a critical step for many autonomous processes. In this work, we propose a solution that does not rely on storing descriptors of the environment and performing descriptor filtering. In contrast to most SLAM-based methods, this work operates on general sparse point clouds, with the underlying generalized ICP (GICP) algorithm used for point cloud registration. This thesis presents a modified GICP method and an investigation of how, and whether, an IMU can assist the SLAM process through different ways of integrating the IMU measurements. All the data in this thesis were sampled from a LiDAR scanner mounted on top of a UAV, a car, or a backpack. The suggested modification of GICP has been shown to improve robustness in a forest environment. The results from urban measurements indicate that the IMU contributes by reducing the overall angular drift, which in the long run contributes most to the loop-closure error.
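A compact scan-to-scan odometry loop in the spirit of this thesis is sketched below; it uses Open3D's point-to-plane ICP as a stand-in for the modified GICP, and takes an IMU-predicted relative pose, when available, only as the initial guess, which is one of the integration strategies investigated. The voxel size and correspondence distance are assumptions.

    import numpy as np
    import open3d as o3d

    def lidar_odometry(scans, imu_guesses=None, voxel=0.3, max_dist=1.0):
        """scans: list of (N, 3) arrays; imu_guesses: optional list of 4x4 relative-pose predictions."""
        pose = np.eye(4)
        poses = [pose.copy()]
        prev = None
        for k, pts in enumerate(scans):
            pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts)).voxel_down_sample(voxel)
            pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
            if prev is not None:
                init = imu_guesses[k] if imu_guesses is not None else np.eye(4)
                reg = o3d.pipelines.registration.registration_icp(
                    pc, prev, max_dist, init,
                    o3d.pipelines.registration.TransformationEstimationPointToPlane())
                pose = pose @ reg.transformation      # accumulate the relative motion
                poses.append(pose.copy())
            prev = pc
        return poses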
3

Marko, Peter. "Detekce objektů v laserových skenech pomocí konvolučních neuronových sítí" [Object detection in laser scans using convolutional neural networks]. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445509.

Abstract:
This thesis is aimed at detecting horizontal road marking lines in a point cloud obtained by mobile laser mapping. The system works interactively in cooperation with the user, who marks the beginning of the traffic line. The program then gradually detects the remaining parts of the traffic line and creates its vector representation. First, the point cloud is projected onto a horizontal plane, creating a 2D image that is segmented by a U-Net convolutional neural network. The segmentation marks one traffic line and is converted to a polyline that can be used in a geo-information system. During testing, the U-Net achieved a segmentation accuracy of 98.8%, a specificity of 99.5%, and a sensitivity of 72.9%. The estimated polyline reached an average deviation of 1.8 cm.
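The first step, projecting the cloud onto a horizontal plane as a 2D image for the network, can be sketched as follows (a hedged illustration with an assumed 5 cm cell size; the thesis' actual rasterisation parameters may differ):

    import numpy as np

    def rasterize_topdown(points, intensity, cell=0.05):
        """points: (N, 3); intensity: (N,); cell: raster cell size in metres."""
        xy = points[:, :2]
        mins = xy.min(axis=0)
        ij = np.floor((xy - mins) / cell).astype(int)
        h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
        acc = np.zeros((h, w), dtype=np.float32)
        cnt = np.zeros((h, w), dtype=np.int32)
        np.add.at(acc, (ij[:, 1], ij[:, 0]), intensity)    # sum intensities per cell
        np.add.at(cnt, (ij[:, 1], ij[:, 0]), 1)
        return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)   # mean intensity image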
4

Kannan, Krishnaswamy. "Development of a reference software platform for the Velodyne VLP-16 LiDARS". Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206080.

Abstract:
Recent advancements in sensor technologies have paved the way towards the realization of autonomous vehicles. The choice of sensors used in such vehicles plays a pivotal role in their successful operation under different conditions. The sensors used in current-generation semi-autonomous vehicles provide reliable performance for normal use cases. However, their reliability drops significantly in extreme use cases due to sensing limitations such as poor sensitivity to certain materials, limited range, and a restricted field of view (blind spots). Hence, a deeper understanding of the limitations of these sensors is required to characterize the sensor specifications needed for future fully autonomous vehicles. This thesis work focuses on developing a reference software platform that captures the ground truth required for benchmarking the sensors used in current-generation semi-autonomous vehicles. A modular, scalable software framework for the reference platform, called the LiDAR Software Platform, is proposed, and four functions based on this framework are developed: point cloud fusion, point cloud mapping, point cloud obstacle detection, and point cloud map obstacle detection. The platform is built using state-of-the-art Velodyne VLP-16 3D LiDARs (Light Detection and Ranging), the Robot Operating System (ROS), and the Point Cloud Library (PCL). A novel approach for improving the density of the point clouds by fusing the data from two Velodyne VLP-16 LiDARs is proposed and successfully implemented, and a high-density 3D point cloud map of the environment with centimeter accuracy is generated. Finally, the minimum point cloud density required for robust detection of an obstacle at 50 meters from the truck is determined.
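At its core, the point cloud fusion function reduces to transforming the second scanner's returns into the first scanner's frame with the extrinsic calibration between the two VLP-16 units and concatenating the clouds. A minimal sketch (illustrative only; T_a_b is an assumed calibration result):

    import numpy as np

    def fuse(cloud_a, cloud_b, T_a_b):
        """cloud_a: (N, 3); cloud_b: (M, 3); T_a_b: 4x4 transform, frame B -> frame A."""
        b_in_a = cloud_b @ T_a_b[:3, :3].T + T_a_b[:3, 3]   # express B's returns in frame A
        return np.vstack([cloud_a, b_in_a])                 # denser combined cloud in frame A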
5

Lin, Ying-chen, and 林映辰. "Large scale 3D scene registration using data from Velodyne LIDAR". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90027455243843548464.

Abstract:
Master's thesis, National University of Kaohsiung, Department of Computer Science and Information Engineering, academic year 99.
With the rapid development of 3D computing technology, the acquisition of range data has become a necessary activity for numerous applications. Large-scene reconstruction, which aims to gather depth information about the environment using range sensors, is a challenging problem: the scale of the scanned data is much larger than that of any regular-sized object. Laser rangefinders are perhaps the most frequently used sensors in scene reconstruction applications. In our work, a 3D reconstruction system equipped with a Velodyne LIDAR is built. In order to improve the range and density of the reconstruction, the system implements a modified Iterative Closest Point (ICP) algorithm. A combination of strategies such as worst rejection and feature extraction is proposed to achieve a significant improvement. The experiments show that the heuristic registration algorithm is capable of aligning multiple LIDAR scans consisting of large numbers of 3D coordinates faster and more accurately than the conventional ICP algorithm.
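The "worst rejection" strategy mentioned above can be illustrated with a compact trimmed-ICP loop (a sketch, not the thesis code): at each iteration the largest-residual correspondences are discarded before the rigid transform is re-estimated with the SVD-based Kabsch solution.

    import numpy as np
    from scipy.spatial import cKDTree

    def trimmed_icp(source, target, iters=30, keep=0.8):
        """Align source (N, 3) to target (M, 3); keep is the fraction of best matches retained."""
        tree = cKDTree(target)
        src = source.copy()
        T = np.eye(4)
        for _ in range(iters):
            d, idx = tree.query(src)
            order = np.argsort(d)[: int(keep * len(src))]    # reject the worst correspondences
            p, q = src[order], target[idx[order]]
            pc, qc = p - p.mean(0), q - q.mean(0)
            U, _, Vt = np.linalg.svd(pc.T @ qc)               # Kabsch: best-fit rotation
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                          # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = q.mean(0) - R @ p.mean(0)
            src = src @ R.T + t                               # apply the incremental step
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T = step @ T                                      # accumulate the total transform
        return T, src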

Conference papers on the topic "Velodyne LiDAR"

1

Velas, Martin, Michal Spanel, Michal Hradis and Adam Herout. "CNN for IMU assisted odometry estimation using velodyne LiDAR". In 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2018. http://dx.doi.org/10.1109/icarsc.2018.8374163.

2

Bergelt, Rene, Owes Khan and Wolfram Hardt. "Improving the intrinsic calibration of a Velodyne LiDAR sensor". In 2017 IEEE SENSORS. IEEE, 2017. http://dx.doi.org/10.1109/icsens.2017.8234357.

3

Velas, Martin, Michal Spanel, Michal Hradis and Adam Herout. "CNN for very fast ground segmentation in velodyne LiDAR data". In 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC). IEEE, 2018. http://dx.doi.org/10.1109/icarsc.2018.8374167.

4

Halterman, Ryan, and Michael Bruch. "Velodyne HDL-64E lidar for unmanned surface vehicle obstacle detection". In SPIE Defense, Security, and Sensing, edited by Grant R. Gerhart, Douglas W. Gage and Charles M. Shoemaker. SPIE, 2010. http://dx.doi.org/10.1117/12.850611.

5

Solis, E. D. Bravo, A. Miranda Neto and B. Nina Huallpa. "Pearson's Correlation Coefficient for Discarding Redundant Information: Velodyne Lidar Data Analysis". In 2015 12th Latin American Robotics Symposium (LARS) and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR). IEEE, 2015. http://dx.doi.org/10.1109/lars-sbr.2015.34.

6

Chu, Phuong Minh, Seoungjae Cho, Sungdae Sim, Kiho Kwak, Yong Woon Park and Kyungeun Cho. "Removing past data of dynamic objects using static Velodyne LiDAR sensor". In 2016 16th International Conference on Control, Automation and Systems (ICCAS). IEEE, 2016. http://dx.doi.org/10.1109/iccas.2016.7832519.

7

Yu, Yong, Zhenhai Gao, Bing Zhu and Jian Zhao. "Recognition and Classification of Vehicle Target Using the Vehicle-Mounted Velodyne LIDAR". In SAE 2014 World Congress & Exhibition. Warrendale, PA: SAE International, 2014. http://dx.doi.org/10.4271/2014-01-0322.

8

Goll, Stanislav A., Sergey S. Luksha, Vladimir S. Leushkin and Alexandr G. Borisov. "Construction of the local patency map on the data from Velodyne LiDAR". In 2016 5th Mediterranean Conference on Embedded Computing (MECO). IEEE, 2016. http://dx.doi.org/10.1109/meco.2016.7525736.

9

Oh, Seontaek, Ji-Hwan Yu, Azim Eskandarian and Young-Keun Kim. "Design of Yaw and Tilt Alignment Inspection System for a Low-Resolution 3D LiDAR Using Planar Target". In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-23601.

Abstract:
It is crucial to attach a LiDAR (Light Detection and Ranging), a 3D sensor for mobile platforms such as robots and vehicles, within an acceptable misalignment so that the relative positions of surrounding obstacles are detected accurately. This paper proposes a novel inspection method to estimate the yaw and tilt alignment of a LiDAR after the sensor is attached. The proposed method uses a planar target board to estimate the yaw and tilt angles of a low-resolution LiDAR with milli-degree precision. Experiments were conducted to evaluate the accuracy and precision of the proposed method using a designed test bench that can control the reference pointing angle of the LiDAR from −10 to 10 degrees. The experimental results on a Velodyne VLP-16, a low-resolution LiDAR with 16 channels, showed that the proposed system estimates the yaw and tilt angles with an accuracy and precision of 0.1 degrees.
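The geometric core of such an inspection, reading the yaw and tilt misalignment from the normal of a plane fitted to the returns on the target board, can be sketched as follows (a hedged illustration of the principle, not the paper's full system):

    import numpy as np

    def plane_yaw_tilt(target_points):
        """target_points: (N, 3) LiDAR returns on the planar target board."""
        centroid = target_points.mean(axis=0)
        _, _, Vt = np.linalg.svd(target_points - centroid)
        n = Vt[-1]                                    # plane normal (direction of least variance)
        if n @ centroid < 0:                          # orient the normal away from the sensor
            n = -n
        yaw = np.degrees(np.arctan2(n[1], n[0]))      # rotation of the normal about the vertical axis
        tilt = np.degrees(np.arcsin(np.clip(n[2], -1.0, 1.0)))   # elevation of the normal
        return yaw, tilt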
10

Akhtar, Rayyan, Huabiao Qin and Guancheng Chen. "Velodyne LiDAR and monocular camera data fusion for depth map and 3D reconstruction". In Eleventh International Conference on Digital Image Processing, edited by Xudong Jiang and Jenq-Neng Hwang. SPIE, 2019. http://dx.doi.org/10.1117/12.2539863.
