To view the other types of publications on this topic, follow this link: Système de multi-Camera.

Journal articles on the topic "Système de multi-Camera"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "Système de multi-Camera".

Next to every entry in the list of references there is an "Add to bibliography" button. Use it, and a bibliographic reference to the chosen work will be generated automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the academic publication as a PDF and read an online annotation of the work if the relevant parameters are present in the metadata.

Browse journal articles in a wide variety of disciplines and compile your bibliography correctly.

1. Guo Peiyao, 郭珮瑶, 蒲志远 Pu Zhiyuan, and 马展 Ma Zhan. "多相机系统:成像增强及应用". Laser & Optoelectronics Progress 58, no. 18 (2021): 1811013. http://dx.doi.org/10.3788/lop202158.1811013.

2. Xiao Yifan, 肖一帆, and 胡伟 Hu Wei. "基于多相机系统的高精度标定". Laser & Optoelectronics Progress 60, no. 20 (2023): 2015003. http://dx.doi.org/10.3788/lop222787.

3. Zhao Yanfang, 赵艳芳, 孙鹏 Sun Peng, 董明利 Dong Mingli, 刘其林 Liu Qilin, 燕必希 Yan Bixi, and 王君 Wang Jun. "多相机视觉测量系统在轨自主定向方法". Laser & Optoelectronics Progress 61, no. 10 (2024): 1011003. http://dx.doi.org/10.3788/lop231907.

4. Ren Guoyin, 任国印, 吕晓琪 Xiaoqi Lü, and 李宇豪 Li Yuhao. "多摄像机视场下基于一种DTN的多人脸实时跟踪系统". Laser & Optoelectronics Progress 59, no. 2 (2022): 0210004. http://dx.doi.org/10.3788/lop202259.0210004.

5. Kulathunga, Geesara, Aleksandr Buyval, and Aleksandr Klimchik. "Multi-Camera Fusion in Apollo Software Distribution". IFAC-PapersOnLine 52, no. 8 (2019): 49–54. http://dx.doi.org/10.1016/j.ifacol.2019.08.047.

6. Mehta, S. S., and T. F. Burks. "Multi-camera Fruit Localization in Robotic Harvesting". IFAC-PapersOnLine 49, no. 16 (2016): 90–95. http://dx.doi.org/10.1016/j.ifacol.2016.10.017.

7. R. Kennady et al. "A Nonoverlapping Vision Field Multi-Camera Network for Tracking Human Build Targets". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3 (March 31, 2023): 366–69. http://dx.doi.org/10.17762/ijritcc.v11i3.9871.

Abstract:
This research presents a procedure for tracking human build targets in a multi-camera network with nonoverlapping vision fields. The proposed approach consists of three main steps: single-camera target detection, single-camera target tracking, and multi-camera target association and continuous tracking. The multi-camera target association includes target characteristic extraction and the establishment of topological relations. Target characteristics are extracted based on the HSV (Hue, Saturation, and Value) values of each human build movement target, and the space-time topological relations of the multi-camera network are established using the obtained target associations. This procedure enables the continuous tracking of human build movement targets in large scenes, overcoming the limitations of monitoring within the narrow field of view of a single camera.
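
The target-characteristic step above is, in essence, per-target HSV color histograms compared across cameras. As a rough illustration (not the cited paper's code; the bin counts, the correlation metric, and the acceptance threshold are assumptions), such features can be extracted and matched with OpenCV as follows:

```python
# Minimal sketch of HSV-histogram target features for cross-camera
# association (illustrative; not the cited paper's implementation).
import cv2
import numpy as np

def hsv_signature(bgr_patch, bins=(16, 8, 8)):
    """Normalized 3D HSV histogram of a detected target patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def similarity(sig_a, sig_b):
    """Histogram correlation in [-1, 1]; higher means more similar."""
    return cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_CORREL)

def associate(target_patch, candidate_patches, threshold=0.6):
    """Match a target leaving camera A to candidates entering camera B."""
    target_sig = hsv_signature(target_patch)
    scores = [similarity(target_sig, hsv_signature(p))
              for p in candidate_patches]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```
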

8. Guler, Puren, Deniz Emeksiz, Alptekin Temizel, Mustafa Teke, and Tugba Taskaya Temizel. "Real-time multi-camera video analytics system on GPU". Journal of Real-Time Image Processing 11, no. 3 (March 27, 2013): 457–72. http://dx.doi.org/10.1007/s11554-013-0337-2.

9. Huang, Sunan, Rodney Swee Huat Teo, and William Wai Lun Leong. "Multi-Camera Networks for Coverage Control of Drones". Drones 6, no. 3 (March 3, 2022): 67. http://dx.doi.org/10.3390/drones6030067.

Abstract:
Multiple unmanned multirotor (MUM) systems are becoming a reality. They have a wide range of applications such as for surveillance, search and rescue, monitoring operations in hazardous environments and providing communication coverage services. Currently, an important issue in MUM is coverage control. In this paper, an existing coverage control algorithm has been extended to incorporate a new sensor model, which is downward facing and allows pan-tilt-zoom (PTZ). Two new constraints, namely view angle and collision avoidance, have also been included. Mobile network coverage among the MUMs is studied. Finally, the proposed scheme is tested in computer simulations.

10. Wang, Liang. "Multi-Camera Calibration Based on 1D Calibration Object". ACTA AUTOMATICA SINICA 33, no. 3 (2007): 0225. http://dx.doi.org/10.1360/aas-007-0225.

11. Moreno, Patricio, Juan Francisco Presenza, Ignacio Mas, and Juan Ignacio Giribet. "Aerial Multi-Camera Robotic Jib Crane". IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 4103–8. http://dx.doi.org/10.1109/lra.2021.3065299.

12. Fu, Qiang, Xiang-Yang Chen, and Wei He. "A Survey on 3D Visual Tracking of Multicopters". International Journal of Automation and Computing 16, no. 6 (October 1, 2019): 707–19. http://dx.doi.org/10.1007/s11633-019-1199-2.

Abstract:
Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSMs) using cheap, off-the-shelf cameras. This paper first gives an overview of the three key technologies of a 3D VTSMs: multi-camera placement, multi-camera calibration, and pose estimation for multicopters. Then, some representative 3D visual tracking systems for multicopters are introduced. Finally, the future development of the 3D VTSMs is analyzed and summarized.

13. Wu, Yi-Chang, Ching-Han Chen, Yao-Te Chiu, and Pi-Wei Chen. "Cooperative People Tracking by Distributed Cameras Network". Electronics 10, no. 15 (July 25, 2021): 1780. http://dx.doi.org/10.3390/electronics10151780.

Abstract:
In video surveillance applications, reliable people detection and tracking are always challenging tasks. A conventional single-camera surveillance system may encounter difficulties such as a narrow angle of view and dead space. In this paper, we propose a multi-camera network architecture with an inter-camera hand-off protocol for cooperative people tracking. We use the YOLO model to detect multiple people in the video scene and incorporate the particle swarm optimization algorithm to track each person's movement. When a person leaves the area covered by one camera and enters an area covered by another, these cameras exchange relevant information for uninterrupted tracking. A motion smoothness (MS) metric is proposed for evaluating the tracking quality of the multi-camera network system. For experimental evaluation, we used a three-camera system to track two persons in an overlapping scene. Most person-tracking offsets across frames were lower than 30 pixels, and only 0.15% of the frames showed abrupt increases in offset. The experimental results reveal that our multi-camera system achieves robust, smooth tracking performance.

14. Yang, Yi, Di Tang, Dongsheng Wang, Wenjie Song, Junbo Wang, and Mengyin Fu. "Multi-camera visual SLAM for off-road navigation". Robotics and Autonomous Systems 128 (June 2020): 103505. http://dx.doi.org/10.1016/j.robot.2020.103505.

15. Feng, Xin, Xiao Lv, Junyu Dong, Yongshun Liu, Fengfeng Shu, and Yihui Wu. "Double-Glued Multi-Focal Bionic Compound Eye Camera". Micromachines 14, no. 8 (July 31, 2023): 1548. http://dx.doi.org/10.3390/mi14081548.

Abstract:
Compound eye cameras are a vital component of bionics. Compound eye lenses are currently used in light field cameras, surveillance imaging, medical endoscopes, and other fields. However, the resolution of compound eye lenses is still low, which limits their applications. In this study, photolithography and negative-pressure molding were used to create a double-glued multi-focal bionic compound eye camera. The compound eye camera has 83 microlenses, with ommatidium diameters ranging from 400 μm to 660 μm, and a 92.3° field-of-view angle. The double-gluing structure significantly improves the optical performance of the compound eye lens, and the spatial resolution of the ommatidia is 57.00 lp/mm. Additionally, the measurement of speed is investigated. This double-glued compound eye camera has numerous potential applications in the military, machine vision, and other fields.

16. Liu, Xinhua, Jie Tian, Hailan Kuang, and Xiaolin Ma. "A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board". Electronics 11, no. 4 (February 17, 2022): 627. http://dx.doi.org/10.3390/electronics11040627.

Abstract:
In multi-camera 3D reconstruction, each camera must be calibrated individually and the cameras must also be calibrated jointly in stereo, and the calibration accuracy directly affects the quality of the system's 3D reconstruction. Many researchers focus on optimizing the calibration algorithm and improving calibration accuracy after the calibration-board pattern coordinates have been obtained, ignoring the impact of pattern-coordinate extraction on calibration accuracy. This paper therefore proposes a multi-camera stereo calibration method based on a circular calibration board that focuses on the extraction of pattern features during the calibration process. The method performs subpixel edge acquisition based on the Franklin matrix and circular feature extraction on the circular calibration-board pattern collected by the camera, and then applies Zhang's calibration method to calibrate the cameras. Experimental results show that, compared with the traditional calibration method, this method achieves better calibration accuracy, reducing the average multi-camera reprojection error by more than 0.006 pixels.
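
For readers who want to reproduce the general pipeline, the sketch below calibrates a single camera from a circular calibration board using OpenCV's circle-grid detector and Zhang's method. It is a minimal stand-in, not the paper's implementation: the Franklin-matrix subpixel edge step is omitted, and the board geometry and image filenames are assumptions.

```python
# Minimal sketch of calibrating one camera from a circular calibration
# board (Zhang's method via OpenCV). The cited paper's Franklin-matrix
# subpixel edge extraction is not shown; OpenCV's blob-based circle-grid
# detector stands in for it.
import cv2
import numpy as np

PATTERN = (4, 11)   # circles per row/column of an asymmetric grid
SPACING = 0.02      # center-to-center distance in meters (assumed)

# 3D object points of the asymmetric circle grid in the board frame.
objp = np.array([[(2 * c + r % 2) * SPACING, r * SPACING, 0.0]
                 for r in range(PATTERN[1]) for c in range(PATTERN[0])],
                dtype=np.float32)

obj_points, img_points, size = [], [], None
for path in ["view0.png", "view1.png", "view2.png"]:  # sample images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, centers = cv2.findCirclesGrid(
        gray, PATTERN, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Zhang's calibration: intrinsics K, distortion d, per-view extrinsics.
rms, K, d, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print(f"RMS reprojection error: {rms:.4f} px")
```
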

17. Alsadik, Bashar, Fabio Remondino, and Francesco Nex. "Simulating a Hybrid Acquisition System for UAV Platforms". Drones 6, no. 11 (October 25, 2022): 314. http://dx.doi.org/10.3390/drones6110314.

Abstract:
Currently, there is a rapid trend in the production of airborne sensors consisting of multi-view cameras or hybrid sensors, i.e., a LiDAR scanner coupled with one or multiple cameras to enrich the data acquisition in terms of colors, texture, completeness of coverage, and accuracy. However, current UAV hybrid systems are mainly equipped with a single camera, which is not sufficient to view the facades of buildings or other complex objects without flying double flight paths at a defined oblique angle. This entails extensive flight planning, longer acquisition, extra costs, and additional data handling. In this paper, a multi-view camera system similar to the conventional Maltese-cross configuration used in standard aerial oblique camera systems is simulated. The proposed camera system is integrated with a multi-beam LiDAR to build an efficient UAV hybrid system. To design the low-cost UAV hybrid system, two types of cameras are investigated and proposed, namely the MAPIR Survey and the SenseFly SODA, integrated with a multi-beam digital Ouster OS1-32 LiDAR sensor. Two simulated UAV flight experiments are created with a dedicated methodology and processed with photogrammetric methods. The results show that, with a flight speed of 5 m/s and an image overlap of 80/80, an average density of up to 1500 pts/m² can be achieved with adequate facade coverage in one-pass flight strips.

18. Dexheimer, Eric, Patrick Peluse, Jianhui Chen, James Pritts, and Michael Kaess. "Information-Theoretic Online Multi-Camera Extrinsic Calibration". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 4757–64. http://dx.doi.org/10.1109/lra.2022.3145061.

19. Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping". Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.

20. Li, Yun-Lun, Hao-Ting Li, and Chen-Kuo Chiang. "Multi-Camera Vehicle Tracking Based on Deep Tracklet Similarity Network". Electronics 11, no. 7 (March 24, 2022): 1008. http://dx.doi.org/10.3390/electronics11071008.

Abstract:
Multi-camera vehicle tracking at city scale has received much attention in recent years. It is quite challenging, involving large scale differences, frequent occlusions, and appearance differences caused by varying viewing angles. In this research, we propose the Tracklet Similarity Network (TSN) for a multi-target multi-camera (MTMC) vehicle tracking system based on evaluating the similarity between vehicle tracklets. In addition, a novel component, the Candidates Intersection Ratio (CIR), is proposed to refine the similarity. It provides an association scheme that builds the multi-camera tracking results as a tree structure. Based on these components, an end-to-end vehicle tracking system is proposed. The experimental results demonstrate an 11% improvement in the evaluation score compared to the conventional similarity baseline.

21. Oh, Hyondong, Dae-Yeon Won, Sung-Sik Huh, David Hyunchul Shim, Min-Jea Tahk, and Antonios Tsourdos. "Indoor UAV Control Using Multi-Camera Visual Feedback". Journal of Intelligent & Robotic Systems 61, no. 1-4 (December 1, 2010): 57–84. http://dx.doi.org/10.1007/s10846-010-9506-8.

22. Wang, Chuan, Shijie Liu, Xiaoyan Wang, and Xiaowei Lan. "Time Synchronization and Space Registration of Roadside LiDAR and Camera". Electronics 12, no. 3 (January 20, 2023): 537. http://dx.doi.org/10.3390/electronics12030537.

Abstract:
A sensing system consisting of Light Detection and Ranging (LiDAR) and a camera provides complementary information about the surrounding environment. To take full advantage of the multi-source data provided by different sensors, accurate fusion of multi-source sensor information is needed. Time synchronization and space registration are the key technologies that affect fusion accuracy. Because of differences in data acquisition frequency and deviations in startup time between the LiDAR and the camera, their data acquisition easily becomes asynchronous, which has a significant influence on subsequent data fusion. Therefore, a time synchronization method for multi-source sensors based on frequency self-matching is developed in this paper. Without changing the sensor frequencies, the sensor data are processed to obtain the same number of data frames with matching ID numbers, so that the LiDAR and camera data correspond one to one. Finally, the data frames are merged into new data packets to realize time synchronization between the LiDAR and the camera. Building on the time synchronization, spatial synchronization is achieved with a nonlinear optimization algorithm over the joint calibration parameters, which effectively reduces the reprojection error during sensor spatial registration. The accuracy of the proposed time synchronization method is 99.86%, and the space registration accuracy is 99.79%, better than the calibration method of the Matlab calibration toolbox.
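
The core of such time synchronization is pairing frames from sensors that run at different rates. Below is a minimal sketch: a generic nearest-timestamp pairing that stands in for the paper's frequency self-matching, with an illustrative tolerance value.

```python
# Minimal sketch of pairing LiDAR and camera frames by nearest timestamp
# (a generic stand-in for the paper's frequency self-matching; the
# 25 ms tolerance is an illustrative assumption).
from bisect import bisect_left

def pair_frames(lidar_ts, camera_ts, tol=0.025):
    """Return (lidar_idx, camera_idx) pairs whose timestamps differ
    by at most `tol` seconds. Both lists must be sorted ascending."""
    pairs = []
    for i, t in enumerate(lidar_ts):
        j = bisect_left(camera_ts, t)
        # Nearest camera frame is either camera_ts[j-1] or camera_ts[j].
        best = min((k for k in (j - 1, j) if 0 <= k < len(camera_ts)),
                   key=lambda k: abs(camera_ts[k] - t))
        if abs(camera_ts[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

lidar = [0.00, 0.10, 0.20, 0.30]   # 10 Hz LiDAR timestamps (seconds)
camera = [0.000, 0.033, 0.066, 0.099, 0.132,
          0.165, 0.198, 0.231, 0.264, 0.297]  # ~30 Hz camera
print(pair_frames(lidar, camera))  # [(0, 0), (1, 3), (2, 6), (3, 9)]
```
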

23. Kim, Jinwoo, and Seokho Chi. "Multi-camera vision-based productivity monitoring of earthmoving operations". Automation in Construction 112 (April 2020): 103121. http://dx.doi.org/10.1016/j.autcon.2020.103121.

24. Dinc, Semih, Farbod Fahimi, and Ramazan Aygun. "Mirage: an O(n) time analytical solution to 3D camera pose estimation with multi-camera support". Robotica 35, no. 12 (February 16, 2017): 2278–96. http://dx.doi.org/10.1017/s0263574716000874.

Abstract:
Mirage is a camera pose estimation method that analytically solves for the pose parameters in linear time for multi-camera systems. It uses a reference camera pose to calculate the pose by minimizing the 2D projection error between reference and actual pixel coordinates. Previously, Mirage was successfully applied to the trajectory tracking (visual servoing) problem. In this study, a comprehensive evaluation of Mirage is performed, focusing in particular on camera pose estimation. Experiments were performed using simulated and real data in noisy and noise-free environments, and the results are compared with state-of-the-art techniques. Mirage outperforms the other methods, generating fast and accurate results in all tested environments.
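
Mirage itself is analytical, but the reprojection-error formulation it minimizes can be illustrated with OpenCV's iterative solvePnP. The sketch below is illustrative only; the intrinsics and point correspondences are made-up assumptions, and this is not the authors' solver.

```python
# Minimal sketch of camera pose estimation by minimizing 2D reprojection
# error with OpenCV's solvePnP (an iterative stand-in for Mirage's
# analytical solver; intrinsics and points are illustrative assumptions).
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume no lens distortion

# Known coplanar 3D reference points (e.g., markers on a rig), meters.
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                       [0.1, 0.1, 0], [0.05, 0.05, 0]], dtype=np.float64)
# Their measured pixel coordinates in the current image.
image_pts = np.array([[320, 240], [400, 238], [322, 160],
                      [402, 158], [361, 199]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
proj, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
err = np.linalg.norm(proj.reshape(-1, 2) - image_pts, axis=1).mean()
print("rotation vector:", rvec.ravel(), "translation:", tvec.ravel())
print(f"mean reprojection error: {err:.2f} px")
```
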

25. Pedersini, F., A. Sarti, and S. Tubaro. "Accurate and simple geometric calibration of multi-camera systems". Signal Processing 77, no. 3 (September 1999): 309–34. http://dx.doi.org/10.1016/s0165-1684(99)00042-0.

26. Su, Shan, Li Yan, Hong Xie, Changjun Chen, Xiong Zhang, Lyuzhou Gao, and Rongling Zhang. "Multi-Level Hazard Detection Using a UAV-Mounted Multi-Sensor for Levee Inspection". Drones 8, no. 3 (March 6, 2024): 90. http://dx.doi.org/10.3390/drones8030090.

Abstract:
This paper introduces a developed multi-sensor integrated system comprising a thermal infrared camera, an RGB camera, and a LiDAR sensor, mounted on a lightweight unmanned aerial vehicle (UAV). This system is applied to the inspection tasks of levee engineering, enabling the real-time, rapid, all-day, all-round, and non-contact acquisition of multi-source data for levee structures and their surrounding environments. Our aim is to address the inefficiencies, high costs, limited data diversity, and potential safety hazards associated with traditional methods, particularly concerning the structural safety of dam bodies. In the preprocessing stage of multi-source data, techniques such as thermal infrared data enhancement and multi-source data alignment are employed to enhance data quality and consistency. Subsequently, a multi-level approach to detecting and screening suspected risk areas is implemented, facilitating the rapid localization of potential hazard zones and assisting in assessing the urgency of addressing these concerns. The reliability of the developed multi-sensor equipment and the multi-level suspected hazard detection algorithm is validated through on-site levee engineering inspections conducted during flood disasters. The application reliably detects and locates suspected hazards, significantly reducing the time and resource costs associated with levee inspections. Moreover, it mitigates safety risks for personnel engaged in levee inspections. Therefore, this method provides reliable data support and technical services for levee inspection, hazard identification, flood control, and disaster reduction.

27. Li, Congcong, Jing Li, Yuguang Xie, Jiayang Nie, Tao Yang, and Zhaoyang Lu. "Multi-camera joint spatial self-organization for intelligent interconnection surveillance". Engineering Applications of Artificial Intelligence 107 (January 2022): 104533. http://dx.doi.org/10.1016/j.engappai.2021.104533.

28. Gu, Yuantao, Yilun Chen, Zhengwei Jiang, and Kun Tang. "Particle Filter Based Multi-Camera Integration for Face 3D-Pose Tracking". International Journal of Wavelets, Multiresolution and Information Processing 4, no. 4 (December 2006): 677–90. http://dx.doi.org/10.1142/s0219691306001531.

Abstract:
Face tracking has many visual applications, such as human-computer interfaces, video communications, and surveillance. Color-based particle trackers have proved robust and versatile at a modest computational cost. In this paper, a probabilistic method for integrating multi-camera information is introduced to track 3D pose variations of the human face. The proposed method fuses information coming from several calibrated cameras via one color-based particle filter. The algorithm relies on the following novelties. First, the human head, rather than the face, is defined as the target: to distinguish the face region from the hair region, a dual-color ball is used to model the human head in 3D space. Second, to enhance robustness to illumination changes, the Fisher criterion is applied to measure the separability of the face and hair regions on the color histogram, so that the color distribution template can be adapted at the proper time. Finally, the algorithm runs in a distributed framework, so the computation is shared equally by all client processors. To demonstrate the performance of the proposed algorithm, several visual tracking scenarios were tested in an office environment with three to four calibrated cameras. Experiments show that accurate tracking results are achieved, even in difficult scenarios such as complete occlusion and the presence of other skin-colored objects. Furthermore, the additional information in the tracking results, including the head posture and face orientation, can be used for further work such as face recognition and eye-gaze estimation, which is also explained by purpose-designed experiments.
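
To make the filtering machinery concrete, here is a minimal bootstrap particle filter in the same spirit. It is a sketch under stated assumptions, not the paper's tracker: the dual-color-ball likelihood is replaced by a placeholder function, and the motion-model noise level is arbitrary.

```python
# Minimal bootstrap particle filter sketch for 3D head tracking
# (illustrative; the cited paper's dual-color-ball likelihood is
# replaced by a placeholder `color_likelihood` you would supply).
import numpy as np

rng = np.random.default_rng(0)
N = 500                                     # number of particles
particles = rng.normal(0.0, 0.5, (N, 3))    # initial 3D head positions (m)
weights = np.full(N, 1.0 / N)

def color_likelihood(positions):
    """Placeholder: score each hypothesized 3D position against the
    observed color histograms of all calibrated cameras."""
    return np.exp(-0.5 * np.sum(positions ** 2, axis=1))  # dummy peak at 0

for frame in range(100):                    # one iteration per video frame
    # 1. Predict: random-walk motion model (std. dev. is an assumption).
    particles += rng.normal(0.0, 0.02, particles.shape)
    # 2. Update: weight particles by the multi-camera color likelihood.
    weights *= color_likelihood(particles)
    weights /= weights.sum()
    # 3. Estimate: posterior mean of the head position.
    estimate = weights @ particles
    # 4. Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)
```
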

29. Svanström, Fredrik, Fernando Alonso-Fernandez, and Cristofer Englund. "Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities". Drones 6, no. 11 (October 26, 2022): 317. http://dx.doi.org/10.3390/drones6110317.

Abstract:
Automatic detection of flying drones is a key issue where its presence, especially if unauthorized, can create risky situations or compromise security. Here, we design and evaluate a multi-sensor drone detection system. In conjunction with standard video cameras and microphone sensors, we explore the use of thermal infrared cameras, pointed out as a feasible and promising solution that is scarcely addressed in the related literature. Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest. The sensing solutions are complemented with an ADS-B receiver, a GPS receiver, and a radar module. However, our final deployment has not included the latter due to its limited detection range. The thermal camera is shown to be a feasible solution as good as the video camera, even if the camera employed here has a lower resolution. Two other novelties of our work are the creation of a new public dataset of multi-sensor annotated data that expands the number of classes compared to existing ones, as well as the study of the detector performance as a function of the sensor-to-target distance. Sensor fusion is also explored, showing that the system can be made more robust in this way, mitigating false detections of the individual sensors.

30. Li, Jincheng, Guoqing Deng, Wen Zhang, Chaofan Zhang, Fan Wang, and Yong Liu. "Realization of CUDA-based real-time multi-camera visual SLAM in embedded systems". Journal of Real-Time Image Processing 17, no. 3 (November 27, 2019): 713–27. http://dx.doi.org/10.1007/s11554-019-00924-4.

31. Chen, Andrew Tzer-Yeu, Morteza Biglari-Abhari, and Kevin I.-Kai Wang. "Investigating fast re-identification for multi-camera indoor person tracking". Computers & Electrical Engineering 77 (July 2019): 273–88. http://dx.doi.org/10.1016/j.compeleceng.2019.06.009.

32. Wang, Sheng, Zhisheng You, and Yuxi Zhang. "A Novel Multi-Projection Correction Method Based on Binocular Vision". Electronics 12, no. 4 (February 10, 2023): 910. http://dx.doi.org/10.3390/electronics12040910.

Abstract:
To improve the accuracy of multi-projection correction and fusion, a multi-projection correction method based on binocular vision is proposed. To date, most existing methods are based on a single-camera mode, which can lose the depth information of the display wall and fail to capture the details of its geometric structure accurately. The proposed method uses the depth information of a binocular camera to build a high-precision 3D model of the display wall, so the exact CAD dimensions of the wall need not be known in advance, and the method can be applied to display walls of any shape. Calibrating the binocular camera reduces its radial and eccentric aberrations. The projector projects encoded structured-light stripes, and after the binocular camera collects the deformed stripes on the projection screen, the high-precision 3D structure of the screen is reconstructed from the phase relationship. A screen-to-projector sub-pixel-level mapping is thereby established, achieving high-precision geometric correction. In addition, through the one-to-one mapping between phase information and 3D space points, accurate point cloud matching among multiple binocular phase sets can be established, so the method can be applied to any number of projectors. Experimental results on various special-shaped projection screens show that, compared to the single-camera-based method, the proposed method improves the geometric correction accuracy of multi-projection stitching by more than 20%. The method also offers strong universality, high measurement accuracy, and rapid measurement speed, indicating wide application potential in many fields.

33. Sasaki, Kazuyuki, Yasuo Sakamoto, Takashi Shibata, and Yasufumi Emori. "The Multi-Purpose Camera: A New Anterior Eye Segment Analysis System". Ophthalmic Research 22, no. 1 (1990): 3–8. http://dx.doi.org/10.1159/000267056.

34. Kornuta, Tomasz, and Cezary Zieliński. "Robot Control System Design Exemplified by Multi-Camera Visual Servoing". Journal of Intelligent & Robotic Systems 77, no. 3-4 (September 15, 2013): 499–523. http://dx.doi.org/10.1007/s10846-013-9883-x.

35. Hung, Michael Chien-Chun, and Kate Ching-Ju Lin. "Joint sink deployment and association for multi-sink wireless camera networks". Wireless Communications and Mobile Computing 16, no. 2 (August 19, 2014): 209–22. http://dx.doi.org/10.1002/wcm.2509.

36. Tran, Nha, Toan Nguyen, Minh Nguyen, Khiet Luong, and Tai Lam. "Global-local attention with triplet loss and label smoothed crossentropy for person re-identification". IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 4 (December 1, 2023): 1883. http://dx.doi.org/10.11591/ijai.v12.i4.pp1883-1891.

Abstract:
Person re-identification (Person Re-ID) is a research direction on tracking and identifying people in surveillance camera systems with non-overlapping camera perspectives. Despite much research on this topic, some practical problems remain unsolved: in reality, human subjects can easily be obscured by obstructions such as other people, trees, luggage, umbrellas, signs, cars, and motorbikes. In this paper, we propose a multi-branch deep learning network architecture in which one branch represents global features and two branches represent local features. Dividing the input image into small parts and varying the number of parts between the two branches helps the model represent the features better. In addition, we add an attention module to the ResNet50 backbone that enhances important human characteristics and suppresses irrelevant information. To improve robustness, the model is trained by combining triplet loss and label smoothing cross-entropy loss (LSCE). Experiments on the Market1501 and Duke multi-target multi-camera (DukeMTMC) datasets achieved 96.04% rank-1 and 88.11% mean average precision (mAP) on Market1501, and 88.78% rank-1 and 78.6% mAP on DukeMTMC, outperforming some state-of-the-art methods.
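
The combined training objective described above is easy to assemble from standard components. The sketch below uses PyTorch's built-in triplet and label-smoothed cross-entropy losses; the margin, smoothing factor, and toy embedding network are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a combined Re-ID training loss: triplet loss plus
# label-smoothed cross-entropy (illustrative settings, toy model).
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.3)
lsce = nn.CrossEntropyLoss(label_smoothing=0.1)  # needs PyTorch >= 1.10

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 128))
classify = nn.Linear(128, 751)       # 751 identities, as in Market1501

def reid_loss(anchor, positive, negative, labels):
    """anchor/positive/negative: image batches; labels: anchor IDs."""
    f_a, f_p, f_n = embed(anchor), embed(positive), embed(negative)
    logits = classify(f_a)
    return triplet(f_a, f_p, f_n) + lsce(logits, labels)

# Toy batch: 8 images of size 3x64x32 per branch.
a, p, n = (torch.randn(8, 3, 64, 32) for _ in range(3))
labels = torch.randint(0, 751, (8,))
loss = reid_loss(a, p, n, labels)
loss.backward()
print(float(loss))
```
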

37. Wang, Haoyu, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu, and Bisheng Yang. "Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones". Drones 8, no. 4 (April 2, 2024): 137. http://dx.doi.org/10.3390/drones8040137.

Abstract:
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and the cost of common commercial Mocap systems is high. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system, which can quickly and robustly capture the accurate position and orientation of the robot. Firstly, based on training a real-time object detector, an object-filtering algorithm using class and confidence is designed to eliminate false detections. Secondly, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with the advanced LiDAR-Inertial Simultaneous Localization and Mapping (SLAM) algorithm. According to the experimental results, SLAM generates drifts during turns, while our method can overcome the drifts and accumulated errors of SLAM, making the trajectory more stable and accurate. In addition, the pose estimation speed of our system can reach 30 Hz.

38. Popovic, Vladan, Kerem Seyid, Abdulkadir Akin, Ömer Cogal, Hossein Afshari, Alexandre Schmid, and Yusuf Leblebici. "Image Blending in a High Frame Rate FPGA-based Multi-Camera System". Journal of Signal Processing Systems 76, no. 2 (November 8, 2013): 169–84. http://dx.doi.org/10.1007/s11265-013-0858-8.

39. Fan, Zhijie, Zhiwei Cao, Xin Li, Chunmei Wang, Bo Jin, and Qianjin Tang. "Video Surveillance Camera Identity Recognition Method Fused With Multi-Dimensional Static and Dynamic Identification Features". International Journal of Information Security and Privacy 17, no. 1 (March 9, 2023): 1–18. http://dx.doi.org/10.4018/ijisp.319304.

Abstract:
With the development of smart cities, video surveillance networks have become an important infrastructure for urban governance. However, by replacing or tampering with surveillance cameras, an important front-end device, attackers are able to access the internal network. In order to identify illegal or suspicious camera identities in advance, a camera identity identification method that incorporates multidimensional identification features is proposed. By extracting the static information of cameras and dynamic traffic information, a camera identity system that incorporates explicit, implicit, and dynamic identifiers is constructed. The experimental results show that the explicit identifiers have the highest contribution, but they are easy to forge; the dynamic identifiers rank second, but the traffic preprocessing is complex; the static identifiers rank last but are indispensable. Experiments on 40 cameras verified the effectiveness and feasibility of the proposed identifier system for camera identification, and the accuracy of identification reached 92.5%.

40. Kim, Juhwan, and Dongsik Jo. "Optimal Camera Placement to Generate 3D Reconstruction of a Mixed-Reality Human in Real Environments". Electronics 12, no. 20 (October 13, 2023): 4244. http://dx.doi.org/10.3390/electronics12204244.

Abstract:
Virtual reality and augmented reality are increasingly used for immersive engagement by utilizing information from real environments. In particular, the three-dimensional model data underlying virtual places can be authored manually with commercial modeling toolkits, but with advances in sensing technology, computer vision techniques can also be used to create virtual environments. Specifically, a 3D reconstruction approach can generate a single 3D model from image information obtained from various scenes in real environments using several cameras (multi-cameras). The goal is to generate a 3D model with excellent precision; however, rules for choosing the optimal number of cameras and their settings when capturing real environments (e.g., actual people) with several cameras in unconventional positions are lacking. In this study, we propose an optimal camera placement strategy for acquiring high-quality 3D data using irregularly placed multiple cameras in real environments, which is essential for organizing image information while acquiring human data in three-dimensional real space. Our results show that installation costs can be lowered by arranging a minimum number of cameras in an arbitrary space, and that automated virtual-human generation with high accuracy can be achieved using optimal irregular camera locations.

41. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera". International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.10036663.

42. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera". International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.113840.

43. Wang, Bo, Jiayao Hou, Yanyan Ma, Fei Wang, and Fei Wei. "Multi-DS Strategy for Source Camera Identification in Few-Shot Sample Data Sets". Security and Communication Networks 2022 (September 6, 2022): 1–14. http://dx.doi.org/10.1155/2022/8716884.

Abstract:
Source camera identification (SCI) is an intriguing problem in digital forensics that identifies the source device of given images. However, most existing works require sufficient training samples to ensure performance. In this work, we propose a method based on a semi-supervised ensemble learning (multi-DS) strategy, which extends the labeled data set by a multi-distance-based clustering strategy and then calibrates the pseudo-labels through a self-correction mechanism. Next, we iteratively perform the calibration-appending-training process to improve the model. We designed comprehensive experiments, and our model achieves satisfactory performance on benchmark public databases (Dresden, VISION, and SOCRatES).

44. Ferraguti, Federica, Chiara Talignani Landi, Silvia Costi, Marcello Bonfè, Saverio Farsoni, Cristian Secchi, and Cesare Fantuzzi. "Safety barrier functions and multi-camera tracking for human–robot shared environment". Robotics and Autonomous Systems 124 (February 2020): 103388. http://dx.doi.org/10.1016/j.robot.2019.103388.

45. Vandendriessche, Jurgen, Bruno da Silva, Lancelot Lhoest, An Braeken, and Abdellah Touhafi. "M3-AC: A Multi-Mode Multithread SoC FPGA Based Acoustic Camera". Electronics 10, no. 3 (January 29, 2021): 317. http://dx.doi.org/10.3390/electronics10030317.

Abstract:
Acoustic cameras allow the visualization of sound sources using microphone arrays and beamforming techniques. The required computational power increases with the number of microphones in the array and with the acoustic image resolution, in particular when targeting real time. Such constraints limit the use of acoustic cameras in many wireless sensor network applications (surveillance, industrial monitoring, etc.). In this paper, we propose a multi-mode System-on-Chip (SoC) Field-Programmable Gate Array (FPGA) architecture capable of satisfying the high computational demand while providing wireless communication for remote control and monitoring. The architecture produces real-time acoustic images of 240 × 180 resolution, scalable to 640 × 480 by exploiting the multithreading capabilities of the hard-core processor. Furthermore, the timing costs of the different operational modes and resolutions are investigated in order to maintain a real-time system under wireless sensor network constraints.
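
The beamforming at the heart of an acoustic camera can be illustrated with a time-domain delay-and-sum sketch. Everything below (array geometry, sample rate, steering grid) is an assumption for illustration; the cited FPGA design is a far more optimized implementation of the same idea.

```python
# Minimal time-domain delay-and-sum beamforming sketch, the core
# operation behind an acoustic camera (illustrative assumptions only).
import numpy as np

C = 343.0                  # speed of sound (m/s)
FS = 48_000                # sample rate (Hz)
mics = np.array([[x, y, 0.0]            # 4x4 planar array, 5 cm pitch
                 for x in np.arange(4) * 0.05
                 for y in np.arange(4) * 0.05])

def steered_power(signals, direction):
    """Power of the array output steered toward a unit `direction`.
    `signals` is (num_mics, num_samples); far-field assumption."""
    delays = mics @ direction / C                 # seconds per mic
    shifts = np.round((delays - delays.min()) * FS).astype(int)
    n = signals.shape[1] - shifts.max()
    summed = sum(sig[s:s + n] for sig, s in zip(signals, shifts))
    return float(np.mean((summed / len(mics)) ** 2))

def acoustic_image(signals, n_az=24, n_el=12):
    """Scan a coarse azimuth/elevation grid to build one acoustic image."""
    image = np.zeros((n_el, n_az))
    for i, el in enumerate(np.linspace(-np.pi / 3, np.pi / 3, n_el)):
        for j, az in enumerate(np.linspace(-np.pi / 2, np.pi / 2, n_az)):
            d = np.array([np.cos(el) * np.sin(az),
                          np.sin(el),
                          np.cos(el) * np.cos(az)])
            image[i, j] = steered_power(signals, d)
    return image  # overlay on the video frame to visualize sources
```
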

46. Park, K. S., S. Y. Chang, and S. K. Youn. "Topology optimization of the primary mirror of a multi-spectral camera". Structural and Multidisciplinary Optimization 25, no. 1 (March 1, 2003): 46–53. http://dx.doi.org/10.1007/s00158-002-0271-6.

47. Lin, Shifeng, and Ning Wang. "Cloud robotic grasping of Gaussian mixture model based on point cloud projection under occlusion". Assembly Automation 41, no. 3 (April 5, 2021): 312–23. http://dx.doi.org/10.1108/aa-11-2020-0170.

Abstract:
Purpose: In multi-robot cooperation, the cloud can share sensor data, which helps robots perceive their environment better. For cloud robotics, grasping is an essential capability, and the information source for grasping is usually a visual sensor. However, due to the uncertainty of the working environment, the vision sensor's view may be blocked by unknown objects. This paper aims to solve the problem of robot grasping when the vision sensor is occluded, by sharing the information of multiple vision sensors in the cloud. Design/methodology/approach: First, the random sample consensus algorithm and principal component analysis (PCA) are used to detect the desktop range. Then, the minimum bounding rectangle of the occluded area is obtained by the PCA algorithm, and the candidate camera view range is obtained by plane segmentation. The candidate view range is combined with the manipulator workspace to obtain the camera pose and drive the arm to photograph the occluded desktop area. Finally, a Gaussian mixture model (GMM) is used to approximate the shape of the object's projection; for every single Gaussian component, a grasping rectangle is generated and evaluated to select the most suitable one. Findings: A variety of occlusion scenarios for cloud robots are tested. Experimental results show that the proposed algorithm can capture images of the occluded desktop and successfully grasp objects in the occluded area. Originality/value: Existing work rarely uses active multi-sensor approaches to solve the occlusion problem. This paper presents a new solution that can be applied to multi-cloud-robot working environments through cloud sharing, helping robots perceive the environment better, and it proposes a method to obtain the grasping rectangle based on GMM shape approximation of the point cloud projection. Experiments show that the proposed methods work well.

48. González-Galván, Emilio J., Sergio R. Cruz-Ramírez, Michael J. Seelinger, and J. Jesús Cervantes-Sánchez. "An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision". Robotics and Computer-Integrated Manufacturing 19, no. 5 (October 2003): 387–400. http://dx.doi.org/10.1016/s0736-5845(03)00048-6.

49. Tao, Kekai, Gaoge Lian, Yongshun Liu, Huaming Xing, Yi Xing, Xiangdong Su, Xin Feng, and Yihui Wu. "Design and Integration of the Single-Lens Curved Multi-Focusing Compound Eye Camera". Micromachines 12, no. 3 (March 21, 2021): 331. http://dx.doi.org/10.3390/mi12030331.

Abstract:
Compared with traditional optical systems, the single-lens curved compound eye imaging system has superior optical performance, such as a large field of view (FOV), small size, and high portability. However, defocus and low resolution hinder the further development of single-lens curved compound eye imaging systems. In this study, a nonuniform curved compound eye with multiple focal lengths was designed to solve the defocus problem. A two-step gas-assisted process, combined with photolithography, soft lithography, and ultraviolet curing, was proposed to fabricate ommatidia with a large numerical aperture precisely. High-resolution ommatidia were fabricated and arranged in five rings. The imaging experiments demonstrate that the high-resolution, small-volume single-lens curved compound eye imaging system has significant advantages in large-field imaging and rapid recognition.

50. Nuger, Evgeny, and Beno Benhabib. "Multi-Camera Active-Vision for Markerless Shape Recovery of Unknown Deforming Objects". Journal of Intelligent & Robotic Systems 92, no. 2 (February 9, 2018): 223–64. http://dx.doi.org/10.1007/s10846-018-0773-0.
