Journal articles on the topic "Système de multi-Camera"

Follow this link to see other types of publications on the topic: Système de multi-Camera.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Système de multi-Camera".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1. Guo Peiyao, 郭珮瑶, 蒲志远 Pu Zhiyuan, and 马展 Ma Zhan. "多相机系统:成像增强及应用". Laser & Optoelectronics Progress 58, no. 18 (2021): 1811013. http://dx.doi.org/10.3788/lop202158.1811013.

2. Xiao Yifan, 肖一帆, and 胡伟 Hu Wei. "基于多相机系统的高精度标定". Laser & Optoelectronics Progress 60, no. 20 (2023): 2015003. http://dx.doi.org/10.3788/lop222787.

3. Zhao Yanfang, 赵艳芳, 孙鹏 Sun Peng, 董明利 Dong Mingli, 刘其林 Liu Qilin, 燕必希 Yan Bixi, and 王君 Wang Jun. "多相机视觉测量系统在轨自主定向方法". Laser & Optoelectronics Progress 61, no. 10 (2024): 1011003. http://dx.doi.org/10.3788/lop231907.

4. Ren Guoyin, 任国印, 吕晓琪 Xiaoqi Lü, and 李宇豪 Li Yuhao. "多摄像机视场下基于一种DTN的多人脸实时跟踪系统". Laser & Optoelectronics Progress 59, no. 2 (2022): 0210004. http://dx.doi.org/10.3788/lop202259.0210004.

5. Kulathunga, Geesara, Aleksandr Buyval, and Aleksandr Klimchik. "Multi-Camera Fusion in Apollo Software Distribution". IFAC-PapersOnLine 52, no. 8 (2019): 49–54. http://dx.doi.org/10.1016/j.ifacol.2019.08.047.

6. Mehta, S. S., and T. F. Burks. "Multi-camera Fruit Localization in Robotic Harvesting". IFAC-PapersOnLine 49, no. 16 (2016): 90–95. http://dx.doi.org/10.1016/j.ifacol.2016.10.017.

7. Kennady, R., et al. "A Nonoverlapping Vision Field Multi-Camera Network for Tracking Human Build Targets". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3 (March 31, 2023): 366–69. http://dx.doi.org/10.17762/ijritcc.v11i3.9871.

Abstract:
This research presents a procedure for tracking human build targets in a multi-camera network with nonoverlapping vision fields. The proposed approach consists of three main steps: single-camera target detection, single-camera target tracking, and multi-camera target association and continuous tracking. The multi-camera target association includes target characteristic extraction and the establishment of topological relations. Target characteristics are extracted based on the HSV (Hue, Saturation, and Value) values of each human build movement target, and the space-time topological relations of the multi-camera network are established using the obtained target associations. This procedure enables the continuous tracking of human build movement targets in large scenes, overcoming the limitations of monitoring within the narrow field of view of a single camera.
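As a side note for readers, here is a minimal Python/OpenCV sketch of the kind of HSV appearance signature this abstract describes; the bounding-box format, bin counts, and the Bhattacharyya comparison are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def hsv_signature(frame_bgr, box):
    """Normalized hue-saturation histogram for a person's bounding box (x, y, w, h)."""
    x, y, w, h = box
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # 30 hue x 32 saturation bins; the value channel is dropped for some
    # robustness to illumination differences between cameras (assumption).
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

def appearance_distance(sig_a, sig_b):
    """Bhattacharyya distance between two signatures: 0 = identical, 1 = disjoint."""
    return cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_BHATTACHARYYA)
```

Signatures of this kind could then feed the cross-camera target association the abstract outlines.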
8. Guler, Puren, Deniz Emeksiz, Alptekin Temizel, Mustafa Teke, and Tugba Taskaya Temizel. "Real-time multi-camera video analytics system on GPU". Journal of Real-Time Image Processing 11, no. 3 (March 27, 2013): 457–72. http://dx.doi.org/10.1007/s11554-013-0337-2.

9. Huang, Sunan, Rodney Swee Huat Teo, and William Wai Lun Leong. "Multi-Camera Networks for Coverage Control of Drones". Drones 6, no. 3 (March 3, 2022): 67. http://dx.doi.org/10.3390/drones6030067.

Abstract:
Multiple unmanned multirotor (MUM) systems are becoming a reality. They have a wide range of applications such as for surveillance, search and rescue, monitoring operations in hazardous environments and providing communication coverage services. Currently, an important issue in MUM is coverage control. In this paper, an existing coverage control algorithm has been extended to incorporate a new sensor model, which is downward facing and allows pan-tilt-zoom (PTZ). Two new constraints, namely view angle and collision avoidance, have also been included. Mobile network coverage among the MUMs is studied. Finally, the proposed scheme is tested in computer simulations.
10. Wang, Liang. "Multi-Camera Calibration Based on 1D Calibration Object". Acta Automatica Sinica 33, no. 3 (2007): 0225. http://dx.doi.org/10.1360/aas-007-0225.

11. Moreno, Patricio, Juan Francisco Presenza, Ignacio Mas, and Juan Ignacio Giribet. "Aerial Multi-Camera Robotic Jib Crane". IEEE Robotics and Automation Letters 6, no. 2 (April 2021): 4103–8. http://dx.doi.org/10.1109/lra.2021.3065299.

12. Fu, Qiang, Xiang-Yang Chen, and Wei He. "A Survey on 3D Visual Tracking of Multicopters". International Journal of Automation and Computing 16, no. 6 (October 1, 2019): 707–19. http://dx.doi.org/10.1007/s11633-019-1199-2.

Abstract:
Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSMs) using cheap, off-the-shelf cameras. This paper first gives an overview of the three key technologies of 3D VTSMs: multi-camera placement, multi-camera calibration, and pose estimation for multicopters. Then, some representative 3D visual tracking systems for multicopters are introduced. Finally, the future development of 3D VTSMs is analyzed and summarized.
13. Wu, Yi-Chang, Ching-Han Chen, Yao-Te Chiu, and Pi-Wei Chen. "Cooperative People Tracking by Distributed Cameras Network". Electronics 10, no. 15 (July 25, 2021): 1780. http://dx.doi.org/10.3390/electronics10151780.

Abstract:
In the application of video surveillance, reliable people detection and tracking are always challenging tasks. A conventional single-camera surveillance system may encounter difficulties such as a narrow angle of view and dead space. In this paper, we propose a multi-camera network architecture with an inter-camera hand-off protocol for cooperative people tracking. We use the YOLO model to detect multiple people in the video scene and incorporate the particle swarm optimization algorithm to track each person's movement. When a person leaves the area covered by one camera and enters an area covered by another, these cameras exchange relevant information for uninterrupted tracking. A motion smoothness (MS) metric is proposed for evaluating the tracking quality of the multi-camera networking system. For experimental evaluation, we used a three-camera system to track two persons in an overlapping scene. Most tracked-person offsets at different frames were lower than 30 pixels, and only 0.15% of the frames showed abrupt increases in pixel offset. The experimental results reveal that our multi-camera system achieves robust, smooth tracking performance.
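The abstract does not give the exact MS formula, so the following is only an illustrative stand-in: it computes the per-frame pixel offsets of one track and the fraction that stays below the 30-pixel level reported above.

```python
import numpy as np

def frame_offsets(track_xy):
    """Pixel displacement between consecutive frames for one track.

    track_xy: (N, 2) sequence of per-frame (x, y) centers of a tracked person.
    """
    diffs = np.diff(np.asarray(track_xy, dtype=float), axis=0)
    return np.linalg.norm(diffs, axis=1)

track = [(100, 200), (104, 203), (109, 207), (160, 250)]   # toy trajectory
offsets = frame_offsets(track)
print(offsets)                # [ 5.    6.4  66.71] approximately
print(np.mean(offsets < 30))  # fraction of smooth frame-to-frame transitions
```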
14. Yang, Yi, Di Tang, Dongsheng Wang, Wenjie Song, Junbo Wang, and Mengyin Fu. "Multi-camera visual SLAM for off-road navigation". Robotics and Autonomous Systems 128 (June 2020): 103505. http://dx.doi.org/10.1016/j.robot.2020.103505.

15. Feng, Xin, Xiao Lv, Junyu Dong, Yongshun Liu, Fengfeng Shu, and Yihui Wu. "Double-Glued Multi-Focal Bionic Compound Eye Camera". Micromachines 14, no. 8 (July 31, 2023): 1548. http://dx.doi.org/10.3390/mi14081548.

Abstract:
Compound eye cameras are a vital component of bionics. Compound eye lenses are currently used in light-field cameras, surveillance imaging, medical endoscopes, and other fields. However, the resolution of compound eye lenses is still low at present, which limits their application scenarios. Photolithography and negative-pressure molding were used to create a double-glued multi-focal bionic compound eye camera in this study. The compound eye camera has 83 microlenses, with ommatidium diameters ranging from 400 μm to 660 μm, and a 92.3° field-of-view angle. The double-gluing structure significantly improves the optical performance of the compound eye lens, and the spatial resolution of the ommatidia is 57.00 lp/mm. Additionally, speed measurement is investigated. This double-glued compound eye camera has numerous potential applications in the military, machine vision, and other fields.
16. Liu, Xinhua, Jie Tian, Hailan Kuang, and Xiaolin Ma. "A Stereo Calibration Method of Multi-Camera Based on Circular Calibration Board". Electronics 11, no. 4 (February 17, 2022): 627. http://dx.doi.org/10.3390/electronics11040627.

Abstract:
In multi-camera 3D reconstruction, each camera must be calibrated individually while the cameras are also stereo-calibrated jointly, and the calibration accuracy directly affects the quality of the system's 3D reconstruction. Many researchers focus on optimizing the calibration algorithm and improving calibration accuracy after the calibration-board pattern coordinates have been obtained, ignoring the impact of the accuracy of the pattern coordinate extraction itself. This paper therefore proposes a multi-camera stereo calibration method based on a circular calibration board that focuses on the extraction of pattern features during the calibration process. The method performs subpixel edge extraction based on the Franklin matrix and circular feature extraction on the calibration-board pattern collected by the camera, and then applies Zhang's calibration method to calibrate the cameras. Experimental results show that, compared with the traditional calibration method, this approach yields better calibration quality and accuracy, reducing the average reprojection error of the multi-camera system by more than 0.006 pixels.
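The Franklin-matrix subpixel edge extraction is specific to the paper, but the surrounding pipeline (circle-center detection on a circular board, then Zhang-style calibration) can be sketched with stock OpenCV; the board geometry, file names, and symmetric-grid detector below are assumptions.

```python
import cv2
import numpy as np

pattern = (7, 6)    # circle centers per row/column (assumed board layout)
spacing = 0.02      # meters between adjacent centers (assumed)

# Ideal 3D positions of the circle centers on the board plane (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * spacing

obj_points, img_points = [], []
for fname in ["view0.png", "view1.png"]:        # hypothetical captures
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(
        gray, pattern, flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Zhang-style single-camera calibration; repeating this per camera and then
# running cv2.stereoCalibrate yields the multi-camera extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
```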
17. Alsadik, Bashar, Fabio Remondino, and Francesco Nex. "Simulating a Hybrid Acquisition System for UAV Platforms". Drones 6, no. 11 (October 25, 2022): 314. http://dx.doi.org/10.3390/drones6110314.

Abstract:
Currently, there is a rapid trend in the production of airborne sensors consisting of multi-view cameras or hybrid sensors, i.e., a LiDAR scanner coupled with one or multiple cameras to enrich the data acquisition in terms of colors, texture, completeness of coverage, accuracy, etc. However, current UAV hybrid systems are mainly equipped with a single camera, which is not sufficient to view the facades of buildings or other complex objects without flying double flight paths at a defined oblique angle. This entails extensive flight planning, longer acquisition, extra costs, and more data handling. In this paper, a multi-view camera system similar to the conventional Maltese-cross configurations used in standard aerial oblique camera systems is simulated. This proposed camera system is integrated with a multi-beam LiDAR to build an efficient UAV hybrid system. To design the low-cost UAV hybrid system, two types of cameras are investigated and proposed, namely the MAPIR Survey and the SenseFly SODA, integrated with a multi-beam digital Ouster OS1-32 LiDAR sensor. Two simulated UAV flight experiments are created with a dedicated methodology and processed with photogrammetric methods. The results show that, with a flight speed of 5 m/s and an image overlap of 80/80, an average density of up to 1500 pts/m² can be achieved with adequate facade coverage in one-pass flight strips.
18. Dexheimer, Eric, Patrick Peluse, Jianhui Chen, James Pritts, and Michael Kaess. "Information-Theoretic Online Multi-Camera Extrinsic Calibration". IEEE Robotics and Automation Letters 7, no. 2 (April 2022): 4757–64. http://dx.doi.org/10.1109/lra.2022.3145061.

19. Xu, Jian, Chunjuan Bo, and Dong Wang. "A novel multi-target multi-camera tracking approach based on feature grouping". Computers & Electrical Engineering 92 (June 2021): 107153. http://dx.doi.org/10.1016/j.compeleceng.2021.107153.

20. Li, Yun-Lun, Hao-Ting Li, and Chen-Kuo Chiang. "Multi-Camera Vehicle Tracking Based on Deep Tracklet Similarity Network". Electronics 11, no. 7 (March 24, 2022): 1008. http://dx.doi.org/10.3390/electronics11071008.

Abstract:
Multi-camera vehicle tracking at the city scale has received much attention in the last few years. It is quite challenging, with large scale differences, frequent occlusions, and appearance differences caused by differing viewing angles. In this research, we propose the Tracklet Similarity Network (TSN) for a multi-target multi-camera (MTMC) vehicle tracking system based on evaluating the similarity between vehicle tracklets. In addition, a novel component, the Candidates Intersection Ratio (CIR), is proposed to refine the similarity. It provides an association scheme that builds the multi-camera tracking results into a tree structure. Based on these components, an end-to-end vehicle tracking system is proposed. The experimental results demonstrate an 11% improvement in the evaluation score compared to the conventional similarity baseline.
21. Oh, Hyondong, Dae-Yeon Won, Sung-Sik Huh, David Hyunchul Shim, Min-Jea Tahk, and Antonios Tsourdos. "Indoor UAV Control Using Multi-Camera Visual Feedback". Journal of Intelligent & Robotic Systems 61, no. 1-4 (December 1, 2010): 57–84. http://dx.doi.org/10.1007/s10846-010-9506-8.

22. Wang, Chuan, Shijie Liu, Xiaoyan Wang, and Xiaowei Lan. "Time Synchronization and Space Registration of Roadside LiDAR and Camera". Electronics 12, no. 3 (January 20, 2023): 537. http://dx.doi.org/10.3390/electronics12030537.

Abstract:
The sensing system consisting of Light Detection and Ranging (LiDAR) and a camera provides complementary information about the surrounding environment. To take full advantage of the multi-source data provided by different sensors, an accurate fusion of multi-source sensor information is needed. Time synchronization and space registration are the key technologies that determine the fusion accuracy of multi-source sensors. Because of differences in data acquisition frequency and deviations in startup time between the LiDAR and the camera, their data acquisition easily becomes asynchronous, which significantly affects subsequent data fusion. Therefore, a time synchronization method for multi-source sensors based on frequency self-matching is developed in this paper. Without changing the sensor frequency, the sensor data are processed to obtain the same number of data frames, each pair sharing the same ID number, so that the LiDAR and camera data correspond one to one. Finally, data frames are merged into new data packets to realize time synchronization between LiDAR and camera. Building on the time synchronization, spatial synchronization is achieved with a nonlinear optimization algorithm over the joint calibration parameters, which effectively reduces the reprojection error during sensor spatial registration. The accuracy of the proposed time synchronization method is 99.86% and the space registration accuracy is 99.79%, better than the calibration method of the Matlab calibration toolbox.
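A minimal sketch of the frame-pairing idea, assuming each stream reduces to a list of timestamps in seconds: every LiDAR scan is matched to the nearest camera frame and the pair shares a frame ID; the 50 ms skew tolerance is an assumption.

```python
import numpy as np

def pair_streams(lidar_ts, camera_ts, max_skew=0.05):
    """Return (frame_id, lidar_idx, camera_idx) triples for pairs within max_skew."""
    camera_ts = np.asarray(camera_ts)
    pairs = []
    for frame_id, t in enumerate(lidar_ts):
        j = int(np.argmin(np.abs(camera_ts - t)))    # nearest camera frame
        if abs(camera_ts[j] - t) <= max_skew:        # tolerate small skew only
            pairs.append((frame_id, frame_id, j))
    return pairs

lidar = [0.00, 0.10, 0.20, 0.30]                                # 10 Hz scans
camera = [0.00, 0.033, 0.066, 0.10, 0.133, 0.166, 0.20, 0.233]  # ~30 Hz frames
print(pair_streams(lidar, camera))   # [(0, 0, 0), (1, 1, 3), (2, 2, 6)]
```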
23. Kim, Jinwoo, and Seokho Chi. "Multi-camera vision-based productivity monitoring of earthmoving operations". Automation in Construction 112 (April 2020): 103121. http://dx.doi.org/10.1016/j.autcon.2020.103121.

24. Dinc, Semih, Farbod Fahimi, and Ramazan Aygun. "Mirage: an O(n) time analytical solution to 3D camera pose estimation with multi-camera support". Robotica 35, no. 12 (February 16, 2017): 2278–96. http://dx.doi.org/10.1017/s0263574716000874.

Abstract:
Mirage is a camera pose estimation method that analytically solves the pose parameters in linear time for multi-camera systems. It utilizes a reference camera pose to calculate the pose by minimizing the 2D projection error between reference and actual pixel coordinates. Previously, Mirage has been successfully applied to the trajectory tracking (visual servoing) problem. In this study, a comprehensive evaluation of Mirage is performed, focusing particularly on camera pose estimation. Experiments have been performed using simulated and real data in noisy and noise-free environments, and the results are compared with state-of-the-art techniques. Mirage outperforms the other methods, generating fast and accurate results in all tested environments.
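Mirage's closed-form solver cannot be reconstructed from the abstract alone; purely as context, the snippet below evaluates the 2D reprojection error that such pose estimators minimize, using OpenCV and toy geometry values.

```python
import cv2
import numpy as np

obj = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]], np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)
rvec = np.zeros(3, np.float32)                  # candidate rotation (Rodrigues)
tvec = np.array([0.0, 0.0, 1.0], np.float32)    # candidate translation (m)

proj, _ = cv2.projectPoints(obj, rvec, tvec, K, np.zeros(5))
observed = proj.reshape(-1, 2) + 0.5            # pretend pixel measurements
err = np.linalg.norm(proj.reshape(-1, 2) - observed, axis=1).mean()
print("mean reprojection error (px):", err)     # ~0.707 for this toy offset
```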
25. Pedersini, F., A. Sarti, and S. Tubaro. "Accurate and simple geometric calibration of multi-camera systems". Signal Processing 77, no. 3 (September 1999): 309–34. http://dx.doi.org/10.1016/s0165-1684(99)00042-0.

26. Su, Shan, Li Yan, Hong Xie, Changjun Chen, Xiong Zhang, Lyuzhou Gao, and Rongling Zhang. "Multi-Level Hazard Detection Using a UAV-Mounted Multi-Sensor for Levee Inspection". Drones 8, no. 3 (March 6, 2024): 90. http://dx.doi.org/10.3390/drones8030090.

Abstract:
This paper introduces a developed multi-sensor integrated system comprising a thermal infrared camera, an RGB camera, and a LiDAR sensor, mounted on a lightweight unmanned aerial vehicle (UAV). This system is applied to the inspection tasks of levee engineering, enabling the real-time, rapid, all-day, all-round, and non-contact acquisition of multi-source data for levee structures and their surrounding environments. Our aim is to address the inefficiencies, high costs, limited data diversity, and potential safety hazards associated with traditional methods, particularly concerning the structural safety of dam bodies. In the preprocessing stage of multi-source data, techniques such as thermal infrared data enhancement and multi-source data alignment are employed to enhance data quality and consistency. Subsequently, a multi-level approach to detecting and screening suspected risk areas is implemented, facilitating the rapid localization of potential hazard zones and assisting in assessing the urgency of addressing these concerns. The reliability of the developed multi-sensor equipment and the multi-level suspected hazard detection algorithm is validated through on-site levee engineering inspections conducted during flood disasters. The application reliably detects and locates suspected hazards, significantly reducing the time and resource costs associated with levee inspections. Moreover, it mitigates safety risks for personnel engaged in levee inspections. Therefore, this method provides reliable data support and technical services for levee inspection, hazard identification, flood control, and disaster reduction.
27. Li, Congcong, Jing Li, Yuguang Xie, Jiayang Nie, Tao Yang, and Zhaoyang Lu. "Multi-camera joint spatial self-organization for intelligent interconnection surveillance". Engineering Applications of Artificial Intelligence 107 (January 2022): 104533. http://dx.doi.org/10.1016/j.engappai.2021.104533.

28. Gu, Yuantao, Yilun Chen, Zhengwei Jiang, and Kun Tang. "Particle Filter Based Multi-Camera Integration for Face 3D-Pose Tracking". International Journal of Wavelets, Multiresolution and Information Processing 04, no. 04 (December 2006): 677–90. http://dx.doi.org/10.1142/s0219691306001531.

Abstract:
Face tracking has many visual applications, such as human-computer interfaces, video communications, and surveillance. Color-based particle trackers have proved robust and versatile for a modest computational cost. In this paper, a probabilistic method for integrating multi-camera information is introduced to track human face 3D-pose variations. The proposed method fuses information coming from several calibrated cameras via one color-based particle filter. The algorithm relies on the following novelties. First, the human head, rather than the face alone, is defined as the target. To distinguish the face region from the hair region, a dual-color-ball is utilized to model the human head in 3D space. Second, to enhance robustness to illumination variation, the Fisher criterion is applied to measure the separability of the face region and the hair region on the color histogram, so the color distribution template can be adapted at the proper time. Finally, the algorithm runs on a distributed framework, with the computation shared equally by all client processors. To demonstrate the performance of the proposed algorithm, several visual tracking scenarios are tested in an office environment with three to four calibrated cameras. Experiments show that accurate tracking results are achieved even in difficult scenarios, such as complete occlusion and distraction by other skin-colored objects. Furthermore, the additional information in the tracking results, including head posture and face orientation, can be used for further work such as face recognition and eye-gaze estimation, which is also demonstrated by carefully designed experiments.
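As a rough sketch of the measurement update in a color-based particle filter of this general kind (not the paper's dual-color-ball model): each particle's neighborhood is scored against a template hue histogram and particles are resampled by weight; the patch size, bin count, and Gaussian weighting are assumptions.

```python
import cv2
import numpy as np

def color_likelihood(frame_hsv, particles, template, half=20, sigma=0.2):
    """Weight each particle (x, y) by color similarity around it."""
    weights = np.empty(len(particles))
    for k, (x, y) in enumerate(particles.astype(int)):
        patch = frame_hsv[max(y - half, 0):y + half, max(x - half, 0):x + half]
        hist = cv2.calcHist([patch], [0], None, [30], [0, 180])  # hue only
        cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
        d = cv2.compareHist(template, hist, cv2.HISTCMP_BHATTACHARYYA)
        weights[k] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return weights / weights.sum()

def resample(particles, weights):
    """Multinomial resampling: draw particles in proportion to their weights."""
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

In a multi-camera setup like the one described, weights from each calibrated view would be combined before resampling.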
29. Svanström, Fredrik, Fernando Alonso-Fernandez, and Cristofer Englund. "Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities". Drones 6, no. 11 (October 26, 2022): 317. http://dx.doi.org/10.3390/drones6110317.

Abstract:
Automatic detection of flying drones is a key issue, as their presence, especially if unauthorized, can create risky situations or compromise security. Here, we design and evaluate a multi-sensor drone detection system. In conjunction with standard video cameras and microphone sensors, we explore the use of thermal infrared cameras, pointed out as a feasible and promising solution that is scarcely addressed in the related literature. Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest. The sensing solutions are complemented with an ADS-B receiver, a GPS receiver, and a radar module, although the final deployment did not include the radar due to its limited detection range. The thermal camera is shown to be a solution as feasible as the video camera, even though the camera employed here has a lower resolution. Two other novelties of our work are the creation of a new public dataset of multi-sensor annotated data that expands the number of classes compared to existing ones, and the study of detector performance as a function of sensor-to-target distance. Sensor fusion is also explored, showing that the system can be made more robust in this way, mitigating false detections by the individual sensors.
30. Li, Jincheng, Guoqing Deng, Wen Zhang, Chaofan Zhang, Fan Wang, and Yong Liu. "Realization of CUDA-based real-time multi-camera visual SLAM in embedded systems". Journal of Real-Time Image Processing 17, no. 3 (November 27, 2019): 713–27. http://dx.doi.org/10.1007/s11554-019-00924-4.

31. Chen, Andrew Tzer-Yeu, Morteza Biglari-Abhari, and Kevin I.-Kai Wang. "Investigating fast re-identification for multi-camera indoor person tracking". Computers & Electrical Engineering 77 (July 2019): 273–88. http://dx.doi.org/10.1016/j.compeleceng.2019.06.009.

32. Wang, Sheng, Zhisheng You, and Yuxi Zhang. "A Novel Multi-Projection Correction Method Based on Binocular Vision". Electronics 12, no. 4 (February 10, 2023): 910. http://dx.doi.org/10.3390/electronics12040910.

Abstract:
In order to improve the accuracy of multi-projection correction and fusion, a multi-projection correction method based on binocular vision is proposed. To date, most existing methods are based on a single-camera mode, which may lose the depth information of the display wall and fail to capture the details of its geometric structure. The proposed method uses the depth information of a binocular camera to build a high-precision 3D model of the display wall, so there is no need to know the specific CAD dimensions of the display wall in advance; the method can also be applied to display walls of any shape. By calibrating the binocular vision camera, the radial and eccentric aberrations of the camera can be reduced. The projector projects encoded structured-light stripes, and the high-precision 3D structural information of the projection screen is reconstructed from the phase relationship after the binocular camera collects the deformed stripes on the screen. Thus, a screen-to-projector sub-pixel-level mapping is established, and a high-precision geometric correction is achieved. In addition, by means of the one-to-one mapping between the phase information and the three-dimensional space points, accurate point cloud matching among multiple binocular phase sets can be established, so the method can be applied to any number of projectors. Experimental results on various special-shaped projection screens show that, compared to the single-camera-based method, the proposed method improves the geometric correction accuracy of multi-projection stitching by more than 20%. The method also offers strong universality, high measurement accuracy, and rapid measurement speed, indicating wide application potential in many fields.
33. Sasaki, Kazuyuki, Yasuo Sakamoto, Takashi Shibata, and Yasufumi Emori. "The Multi-Purpose Camera: A New Anterior Eye Segment Analysis System". Ophthalmic Research 22, no. 1 (1990): 3–8. http://dx.doi.org/10.1159/000267056.

34. Kornuta, Tomasz, and Cezary Zieliński. "Robot Control System Design Exemplified by Multi-Camera Visual Servoing". Journal of Intelligent & Robotic Systems 77, no. 3-4 (September 15, 2013): 499–523. http://dx.doi.org/10.1007/s10846-013-9883-x.

35. Hung, Michael Chien-Chun, and Kate Ching-Ju Lin. "Joint sink deployment and association for multi-sink wireless camera networks". Wireless Communications and Mobile Computing 16, no. 2 (August 19, 2014): 209–22. http://dx.doi.org/10.1002/wcm.2509.

36. Tran, Nha, Toan Nguyen, Minh Nguyen, Khiet Luong, and Tai Lam. "Global-local attention with triplet loss and label smoothed crossentropy for person re-identification". IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 4 (December 1, 2023): 1883. http://dx.doi.org/10.11591/ijai.v12.i4.pp1883-1891.

Abstract:
Person re-identification (Person Re-ID) is a research direction on tracking and identifying people in surveillance camera systems with non-overlapping camera perspectives. Despite much research on this topic, some practical problems remain unsolved: in reality, human subjects can easily be obscured by obstructions such as other people, trees, luggage, umbrellas, signs, cars, and motorbikes. In this paper, we propose a multi-branch deep learning network architecture in which one branch represents global features and two branches represent local features. Dividing the input image into small parts and varying the number of parts between the two branches helps the model represent the features better. In addition, we add an attention module to the ResNet50 backbone that enhances important human characteristics and eliminates irrelevant information. To improve robustness, the model is trained by combining triplet loss and label smoothing cross-entropy loss (LSCE). Experiments are carried out on the Market1501 and Duke multi-target multi-camera (DukeMTMC) datasets; our method achieved 96.04% rank-1 and 88.11% mean average precision (mAP) on Market1501, and 88.78% rank-1 and 78.6% mAP on DukeMTMC, performing better than some state-of-the-art methods.
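A minimal PyTorch sketch of the combined objective named here, triplet loss plus label-smoothed cross-entropy; the margin, the smoothing factor, the 1:1 weighting, and the toy shapes are assumptions.

```python
import torch
import torch.nn as nn

triplet = nn.TripletMarginLoss(margin=0.3)
lsce = nn.CrossEntropyLoss(label_smoothing=0.1)

def reid_loss(anchor, positive, negative, logits, labels):
    """anchor/positive/negative: (B, D) embeddings; logits: (B, C); labels: (B,)."""
    return triplet(anchor, positive, negative) + lsce(logits, labels)

# Toy usage with random tensors standing in for network outputs.
B, D, C = 8, 128, 751    # batch, embedding size, identities (e.g., Market1501)
emb = lambda: torch.randn(B, D, requires_grad=True)
loss = reid_loss(emb(), emb(), emb(), torch.randn(B, C), torch.randint(0, C, (B,)))
loss.backward()
```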
37. Wang, Haoyu, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu, and Bisheng Yang. "Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones". Drones 8, no. 4 (April 2, 2024): 137. http://dx.doi.org/10.3390/drones8040137.

Abstract:
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and the cost of common commercial Mocap systems is high. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system, which can quickly and robustly capture the accurate position and orientation of the robot. Firstly, based on training a real-time object detector, an object-filtering algorithm using class and confidence is designed to eliminate false detections. Secondly, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with the advanced LiDAR-Inertial Simultaneous Localization and Mapping (SLAM) algorithm. According to the experimental results, SLAM generates drifts during turns, while our method can overcome the drifts and accumulated errors of SLAM, making the trajectory more stable and accurate. In addition, the pose estimation speed of our system can reach 30 Hz.
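As a hedged illustration of the final triangulation step described above, here is the two-view case with OpenCV; the projection matrices and pixel coordinates are placeholders for real calibration output.

```python
import cv2
import numpy as np

# P = K [R | t] for each camera (3x4); identity intrinsics/rotation toy values,
# with the second camera offset by a 0.2 m baseline.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[0.2], [0.0], [0.0]])])

pt0 = np.array([[0.10], [0.05]])   # marker (normalized pixels) in camera 0
pt1 = np.array([[0.30], [0.05]])   # the same marker seen in camera 1

X_h = cv2.triangulatePoints(P0, P1, pt0, pt1)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print("marker position (m):", X)   # expect roughly (0.1, 0.05, 1.0)
```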
38. Popovic, Vladan, Kerem Seyid, Abdulkadir Akin, Ömer Cogal, Hossein Afshari, Alexandre Schmid, and Yusuf Leblebici. "Image Blending in a High Frame Rate FPGA-based Multi-Camera System". Journal of Signal Processing Systems 76, no. 2 (November 8, 2013): 169–84. http://dx.doi.org/10.1007/s11265-013-0858-8.

39. Fan, Zhijie, Zhiwei Cao, Xin Li, Chunmei Wang, Bo Jin, and Qianjin Tang. "Video Surveillance Camera Identity Recognition Method Fused With Multi-Dimensional Static and Dynamic Identification Features". International Journal of Information Security and Privacy 17, no. 1 (March 9, 2023): 1–18. http://dx.doi.org/10.4018/ijisp.319304.

Abstract:
With the development of smart cities, video surveillance networks have become an important infrastructure for urban governance. However, by replacing or tampering with surveillance cameras, important front-end devices, attackers are able to access the internal network. In order to identify illegal or suspicious camera identities in advance, a camera identity recognition method that incorporates multi-dimensional identification features is proposed. By extracting static camera information and dynamic traffic information, a camera identity system incorporating explicit, implicit, and dynamic identifiers is constructed. The experimental results show that the explicit identifiers contribute the most but are easy to forge; the dynamic identifiers rank second, but their traffic preprocessing is complex; the static identifiers rank last but are indispensable. Experiments on 40 cameras verified the effectiveness and feasibility of the proposed identifier system for camera identification, with an identification accuracy of 92.5%.
40. Kim, Juhwan, and Dongsik Jo. "Optimal Camera Placement to Generate 3D Reconstruction of a Mixed-Reality Human in Real Environments". Electronics 12, no. 20 (October 13, 2023): 4244. http://dx.doi.org/10.3390/electronics12204244.

Abstract:
Virtual reality and augmented reality are increasingly used for immersive engagement by utilizing information from real environments. In particular, three-dimensional model data, the basis for creating virtual places, can be manually developed using commercial modeling toolkits, but with the advancement of sensing technology, computer vision can also be used to create virtual environments. Specifically, a 3D reconstruction approach can generate a single, highly precise 3D model from image information obtained from various scenes in real environments using several cameras. However, rules for choosing the optimal number of cameras and their settings when capturing real environments (e.g., actual people) with several cameras in unconventional positions are lacking. In this study, we propose an optimal camera placement strategy for acquiring high-quality 3D data with an irregular camera arrangement, which is essential for organizing image information while acquiring human data in three-dimensional real space with multiple irregularly placed cameras. Our results show that installation costs can be lowered by arranging a minimum number of cameras in an arbitrary space, and that automated virtual-human creation with high accuracy can be achieved using optimal irregular camera locations.
41. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera". International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.10036663.

42. Chebi, Hocine. "Novel greedy grid-voting algorithm for optimisation placement of multi-camera". International Journal of Sensor Networks 35, no. 3 (2021): 170. http://dx.doi.org/10.1504/ijsnet.2021.113840.

43. Wang, Bo, Jiayao Hou, Yanyan Ma, Fei Wang, and Fei Wei. "Multi-DS Strategy for Source Camera Identification in Few-Shot Sample Data Sets". Security and Communication Networks 2022 (September 6, 2022): 1–14. http://dx.doi.org/10.1155/2022/8716884.

Abstract:
Source camera identification (SCI) is an intriguing problem in digital forensics that identifies the source device of given images. However, most existing works require sufficient training samples to ensure performance. In this work, we propose a method based on a semi-supervised ensemble learning (multi-DS) strategy, which extends the labeled data set by a multi-distance-based clustering strategy and then calibrates the pseudo-labels through a self-correction mechanism. Next, we iteratively perform the calibration-appending-training process to improve the model. We design comprehensive experiments, and our model achieves satisfactory performance on public benchmark databases (Dresden, VISION, and SOCRatES).
44. Ferraguti, Federica, Chiara Talignani Landi, Silvia Costi, Marcello Bonfè, Saverio Farsoni, Cristian Secchi, and Cesare Fantuzzi. "Safety barrier functions and multi-camera tracking for human–robot shared environment". Robotics and Autonomous Systems 124 (February 2020): 103388. http://dx.doi.org/10.1016/j.robot.2019.103388.

45. Vandendriessche, Jurgen, Bruno da Silva, Lancelot Lhoest, An Braeken, and Abdellah Touhafi. "M3-AC: A Multi-Mode Multithread SoC FPGA Based Acoustic Camera". Electronics 10, no. 3 (January 29, 2021): 317. http://dx.doi.org/10.3390/electronics10030317.

Abstract:
Acoustic cameras allow the visualization of sound sources using microphone arrays and beamforming techniques. The required computational power increases with the number of microphones in the array and the resolution of the acoustic images, particularly when targeting real time. Such constraints limit the use of acoustic cameras in many wireless sensor network applications (surveillance, industrial monitoring, etc.). In this paper, we propose a multi-mode System-on-Chip (SoC) Field-Programmable Gate Array (FPGA) architecture capable of satisfying the high computational demand while providing wireless communication for remote control and monitoring. This architecture produces real-time acoustic images at 240 × 180 resolution, scalable to 640 × 480, by exploiting the multithreading capabilities of the hard-core processor. Furthermore, the timing costs of different operational modes and resolutions are investigated to maintain a real-time system under wireless sensor network constraints.
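The contribution here is the SoC/FPGA realization, but the core operation of an acoustic camera is beamforming; for context only, a minimal delay-and-sum sketch in Python, with the array geometry, sampling rate, and plane-wave model as assumptions.

```python
import numpy as np

def delay_and_sum(signals, mic_xy, direction, fs, c=343.0):
    """Align and sum microphone signals for a plane wave from `direction`.

    signals: (M, N) array, one row of samples per microphone.
    mic_xy: (M, 2) microphone positions in meters.
    direction: unit 2D vector pointing from the array toward the source.
    """
    delays = mic_xy @ np.asarray(direction) / c      # arrival lead per mic (s)
    shifts = np.round((delays.max() - delays) * fs).astype(int)
    n = signals.shape[1] - int(shifts.max())         # common aligned length
    return sum(signals[m, s:s + n] for m, s in enumerate(shifts)) / len(signals)
```

Scanning `direction` over a grid of steering angles and mapping each output's power onto the image plane is what produces the acoustic image.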
46. Park, K. S., S. Y. Chang, and S. K. Youn. "Topology optimization of the primary mirror of a multi-spectral camera". Structural and Multidisciplinary Optimization 25, no. 1 (March 1, 2003): 46–53. http://dx.doi.org/10.1007/s00158-002-0271-6.

47. Lin, Shifeng, and Ning Wang. "Cloud robotic grasping of Gaussian mixture model based on point cloud projection under occlusion". Assembly Automation 41, no. 3 (April 5, 2021): 312–23. http://dx.doi.org/10.1108/aa-11-2020-0170.

Abstract:
Purpose: In multi-robot cooperation, the cloud can share sensor data, which helps robots better perceive the environment. For cloud robotics, grasping is an essential capability, and its information source is usually a vision sensor. However, due to the uncertainty of the working environment, the vision sensor may be blocked by unknown objects. This paper proposes a solution to the robot grasping problem when the vision sensor is occluded, by sharing the information of multiple vision sensors in the cloud. Design/methodology/approach: First, the random sample consensus (RANSAC) algorithm and principal component analysis (PCA) are used to detect the desktop range. Then, the minimum bounding rectangle of the occluded area is obtained by PCA, and the candidate camera view range is obtained by plane segmentation. The candidate view range is combined with the manipulator workspace to obtain the camera pose, and the arm is driven to photograph the occluded desktop area. Finally, a Gaussian mixture model (GMM) is used to approximate the shape of the object projection, and for each single Gaussian component a grasp rectangle is generated and evaluated to select the most suitable one. Findings: A variety of occlusion scenarios for cloud robots are tested. Experimental results show that the proposed algorithm can capture images of the occluded desktop and successfully grasp objects in the occluded area. Originality/value: In existing work, there are few studies on using active multi-sensing to solve the occlusion problem. This paper presents a new solution that can be applied to multi-cloud-robot working environments through cloud sharing, helping robots perceive the environment better, and proposes a method for obtaining grasp rectangles based on GMM shape approximation of the point cloud projection. Experiments show that the proposed methods work well.
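As an illustrative sketch of the GMM shape-approximation step in the findings (not the authors' code), the snippet below fits a mixture to a 2D point-cloud projection and reads one grasp rectangle per component from its mean and principal axes; the component count and the 2-sigma sizing are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
points_2d = rng.normal([0.0, 0.0], [0.05, 0.01], size=(400, 2))  # toy projection

gmm = GaussianMixture(n_components=2, random_state=0).fit(points_2d)

for mean, cov in zip(gmm.means_, gmm.covariances_):
    # Principal axes of each Gaussian give the rectangle's pose and extent.
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    angle = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
    width, height = 4.0 * np.sqrt(eigvals[::-1])     # ~2-sigma box per axis
    print(f"grasp rect: center={mean}, angle={angle:.1f} deg, "
          f"size=({width:.3f}, {height:.3f})")
```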
48. González-Galván, Emilio J., Sergio R. Cruz-Ramírez, Michael J. Seelinger, and J. Jesús Cervantes-Sánchez. "An efficient multi-camera, multi-target scheme for the three-dimensional control of robots using uncalibrated vision". Robotics and Computer-Integrated Manufacturing 19, no. 5 (October 2003): 387–400. http://dx.doi.org/10.1016/s0736-5845(03)00048-6.

49. Tao, Kekai, Gaoge Lian, Yongshun Liu, Huaming Xing, Yi Xing, Xiangdong Su, Xin Feng, and Yihui Wu. "Design and Integration of the Single-Lens Curved Multi-Focusing Compound Eye Camera". Micromachines 12, no. 3 (March 21, 2021): 331. http://dx.doi.org/10.3390/mi12030331.

Abstract:
Compared with a traditional optical system, a single-lens curved compound eye imaging system has superior optical performance, such as a large field of view (FOV), small size, and high portability. However, defocus and low resolution hinder the further development of such systems. In this study, a nonuniform curved compound eye design with multiple focal lengths was used to solve the defocus problem. A two-step gas-assisted process, combined with photolithography, soft photolithography, and ultraviolet curing, was proposed for precisely fabricating ommatidia with a large numerical aperture. Ommatidia with high resolution were fabricated and arranged in five rings. The imaging experiments demonstrated that this high-resolution, small-volume single-lens curved compound eye imaging system has significant advantages in large-field imaging and rapid recognition.
50. Nuger, Evgeny, and Beno Benhabib. "Multi-Camera Active-Vision for Markerless Shape Recovery of Unknown Deforming Objects". Journal of Intelligent & Robotic Systems 92, no. 2 (February 9, 2018): 223–64. http://dx.doi.org/10.1007/s10846-018-0773-0.
