Journal articles on the topic 'Multi-Camera System'

To see the other types of publications on this topic, follow the link: Multi-Camera System.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Multi-Camera System.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Zirui, and Jun Cheng. "Multi-Camera Tracking Helmet System." Journal of Image and Graphics 1, no. 2 (2013): 76–79. http://dx.doi.org/10.12720/joig.1.2.76-79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

MOMIYAMA, Takumi, and Tsuyoshi SHIMIZU. "202 Calibration Flexible multi camera system." Proceedings of Yamanashi District Conference 2014 (2014): 31–32. http://dx.doi.org/10.1299/jsmeyamanashi.2014.31.

3

Detchev, I., M. Mazaheri, S. Rondeel, and A. Habib. "Calibration of multi-camera photogrammetric systems." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 101–8. http://dx.doi.org/10.5194/isprsarchives-xl-1-101-2014.

Abstract:
Due to the low-cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a-priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.
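The relative orientation constraint at the heart of this calibration can be sketched in a few lines of numpy (an illustrative toy model, not the authors' implementation): for a rigidly mounted pair of cameras, the camera-to-camera rotation and translation recovered from the two absolute poses must be identical at every exposure epoch, and that constancy is exactly what a built-in relative orientation constraint enforces.

```python
import numpy as np

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Fixed (unknown to the calibration) relative orientation between the cameras.
R_rel_true = rotz(0.1)
t_rel_true = np.array([0.5, 0.0, 0.0])

# Simulate absolute poses of camera 1 at three epochs; camera 2 is rigidly attached.
estimates = []
for a in (0.0, 0.7, 1.3):
    R1 = rotz(a)
    t1 = np.array([a, 2 * a, 0.0])
    R2 = R_rel_true @ R1                 # pose composition for a rigid rig
    t2 = R_rel_true @ t1 + t_rel_true
    # Recover the relative orientation from the two absolute poses.
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    estimates.append((R_rel, t_rel))

# The recovered relative orientation is the same at every epoch.
max_rot_dev = max(np.abs(R - R_rel_true).max() for R, _ in estimates)
max_trans_dev = max(np.abs(t - t_rel_true).max() for _, t in estimates)
```

In a real adjustment the constraint is imposed the other way around: the bundle adjustment estimates one shared relative orientation instead of independent per-epoch poses.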
4

Aizawa, Kiyoharu. "Multi-camera Surveillance System using Wireless LAN." Journal of Life Support Engineering 18, Supplement (2006): 3. http://dx.doi.org/10.5136/lifesupport.18.supplement_3.

5

Wierzbicki, Damian. "Multi-Camera Imaging System for UAV Photogrammetry." Sensors 18, no. 8 (July 26, 2018): 2433. http://dx.doi.org/10.3390/s18082433.

Abstract:
In the last few years, it has been possible to observe a considerable increase in the use of unmanned aerial vehicles (UAV) equipped with compact digital cameras for environment mapping. The next stage in the development of photogrammetry from low altitudes was the development of the imagery data from UAV oblique images. Imagery data was obtained from side-facing directions. As in professional photogrammetric systems, it is possible to record footprints of tree crowns and other forms of the natural environment. The use of a multi-camera system will significantly reduce one of the main UAV photogrammetry limitations (especially in the case of multirotor UAV) which is a reduction of the ground coverage area, while increasing the number of images, increasing the number of flight lines, and reducing the surface imaged during one flight. The approach proposed in this paper is based on using several head cameras to enhance the imaging geometry during one flight of UAV for mapping. As part of the research work, a multi-camera system consisting of several cameras was designed to increase the total Field of View (FOV). Thanks to this, it will be possible to increase the ground coverage area and to acquire image data effectively. The acquired images will be mosaicked in order to limit the total number of images for the mapped area. As part of the research, a set of cameras was calibrated to determine the interior orientation parameters (IOPs). Next, the method of image alignment using the feature image matching algorithms was presented. In the proposed approach, the images are combined in such a way that the final image has a joint centre of projections of component images. The experimental results showed that the proposed solution was reliable and accurate for the mapping purpose. The paper also presents the effectiveness of existing transformation models for images with a large coverage subjected to initial geometric correction due to the influence of distortion.
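The ground-coverage argument above can be made concrete with a back-of-the-envelope sketch (made-up numbers, not the paper's design figures): fanning out several cameras widens the total horizontal FOV, and the covered ground width grows with the tangent of the half-angle.

```python
import math

# Approximate combined horizontal FOV for a fan of n cameras, assuming each
# camera is rotated so it shares a fixed angular overlap with its neighbour
# (illustrative geometry only).
def combined_fov_deg(n_cameras, fov_deg, overlap_deg):
    return n_cameras * fov_deg - (n_cameras - 1) * overlap_deg

single = 60.0
total = combined_fov_deg(3, single, 10.0)   # three cameras, 10 deg overlap

# Relative gain in ground width at a fixed flying height.
ground_gain = math.tan(math.radians(total / 2)) / math.tan(math.radians(single / 2))
```

The tangent makes the gain strongly non-linear: tripling the angular FOV here widens the imaged strip by far more than a factor of three.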
6

Zhou, Zhipeng, Dong Yin, Jinwen Ding, Yuhao Luo, Mingyue Yuan, and Chengfeng Zhu. "Collaborative Tracking Method in Multi-Camera System." Journal of Shanghai Jiaotong University (Science) 25, no. 6 (May 29, 2020): 802–10. http://dx.doi.org/10.1007/s12204-020-2188-x.

7

Leipner, Anja, Rilana Baumeister, Michael J. Thali, Marcel Braun, Erika Dobler, and Lars C. Ebert. "Multi-camera system for 3D forensic documentation." Forensic Science International 261 (April 2016): 123–28. http://dx.doi.org/10.1016/j.forsciint.2016.02.003.

8

Hsu, Che-Hao, Wen-Huang Cheng, Yi-Leh Wu, Wen-Shiung Huang, Tao Mei, and Kai-Lung Hua. "CrossbowCam: a handheld adjustable multi-camera system." Multimedia Tools and Applications 76, no. 23 (June 5, 2017): 24961–81. http://dx.doi.org/10.1007/s11042-017-4852-1.

9

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Abstract:
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, which is used in aerial photogrammetry. First, the extrinsic parameters of any two cameras in a multi-camera system are calibrated, and the extrinsic matrix is optimized by the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by using the global optimization method. A simulation experiment and a physical verification experiment are designed to validate the proposed algorithm. The experimental results show that this method is operable. The rotation error angle of the camera's extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
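The re-projection error that such an optimization minimizes can be sketched for a simple pinhole model (an illustrative toy setup, not the authors' code): project known 3D points with a perturbed extrinsic estimate and compare against the originally observed image coordinates.

```python
import numpy as np

# Pinhole projection x = K (R X + t), with a made-up intrinsic matrix K.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(K, R, t, X):
    x = K @ (R @ X + t)
    return x[:2] / x[2]          # perspective division to pixel coordinates

R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])
points = [np.array([0.1, -0.2, 4.0]), np.array([-0.3, 0.1, 5.0])]
observed = [project(K, R, t, X) for X in points]

# Perturb the translation as a stand-in for an imperfect extrinsic estimate;
# the residuals in pixels are what the optimizer drives toward zero.
t_bad = t + np.array([0.01, 0.0, 0.0])
residuals = [project(K, R, t_bad, X) - u for X, u in zip(points, observed)]
rmse = np.sqrt(np.mean([r @ r for r in residuals]))
```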
10

Zhang, Lai-gang, Zhong-hui Wei, Xin He, and Qun Sun. "Multi-Camera Measurement System Based on Multi-Constraint Fusion Algorithm." Chinese Journal of Liquid Crystals and Displays 28, no. 4 (2013): 608–14. http://dx.doi.org/10.3788/yjyxs20132804.0608.

11

Cui, Hainan, and Shuhan Shen. "MMA: Multi-Camera Based Global Motion Averaging." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 490–98. http://dx.doi.org/10.1609/aaai.v36i1.19927.

Abstract:
In order to fully perceive the surrounding environment, many intelligent robots and self-driving cars are equipped with a multi-camera system. Based on this system, the structure-from-motion (SfM) technology is used to realize scene reconstruction, but the fixed relative poses between cameras in the multi-camera system are usually not considered. This paper presents a tailor-made multi-camera based motion averaging system, where the fixed relative poses are utilized to improve the accuracy and robustness of SfM. Our approach starts by dividing the images into reference images and non-reference images, and edges in view-graph are divided into four categories accordingly. Then, a multi-camera based rotating averaging problem is formulated and solved in two stages, where an iterative re-weighted least squares scheme is used to deal with outliers. Finally, a multi-camera based translation averaging problem is formulated and a l1-norm based optimization scheme is proposed to compute the relative translations of multi-camera system and reference camera positions simultaneously. Experiments demonstrate that our algorithm achieves superior accuracy and robustness on various data sets compared to the state-of-the-art methods.
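The iterative re-weighted least squares scheme used to handle outliers can be shown on a deliberately simplified scalar example (toy data; the paper averages full 3D rotations over a view-graph): residual-dependent weights pull the estimate toward the robust l1 solution instead of the outlier-contaminated mean.

```python
# Toy IRLS robust average: one gross outlier among five scalar measurements.
measurements = [0.50, 0.52, 0.49, 0.51, 2.00]   # last value is the outlier
estimate = sum(measurements) / len(measurements)  # start from the plain mean

for _ in range(50):
    # l1-approximating weights: the smaller the residual, the larger the weight,
    # so the outlier's influence shrinks at every iteration.
    weights = [1.0 / max(abs(m - estimate), 1e-6) for m in measurements]
    estimate = sum(w * m for w, m in zip(weights, measurements)) / sum(weights)
```

The iterations converge toward the median (0.51), whereas the plain mean (0.804) is dragged far off by the single bad edge.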
12

Orts-Escolano, Sergio, Jose Garcia-Rodriguez, Vicente Morell, Miguel Cazorla, Jorge Azorin, and Juan Garcia-Chamizo. "Parallel Computational Intelligence-Based Multi-Camera Surveillance System." Journal of Sensor and Actuator Networks 3, no. 2 (April 11, 2014): 95–112. http://dx.doi.org/10.3390/jsan3020095.

13

Habib, Ayman, Ivan Detchev, and Eunju Kwak. "Stability Analysis for a Multi-Camera Photogrammetric System." Sensors 14, no. 8 (August 18, 2014): 15084–112. http://dx.doi.org/10.3390/s140815084.

14

Black, James, Dimitrios Makris, and Tim Ellis. "Hierarchical database for a multi-camera surveillance system." Pattern Analysis and Applications 7, no. 4 (December 2004): 430–46. http://dx.doi.org/10.1007/s10044-005-0243-8.

15

Leizea, Ibai, Imanol Herrera, and Pablo Puerto. "Calibration Procedure of a Multi-Camera System: Process Uncertainty Budget." Sensors 23, no. 2 (January 4, 2023): 589. http://dx.doi.org/10.3390/s23020589.

Abstract:
The Automated six Degrees of Freedom (DoF) definition of industrial components has become an added value in production processes as long as the required accuracy is guaranteed. This is where multi-camera systems are finding their niche in the market. These systems provide, among other things, the ease of automating tracking processes without human intervention and knowledge about vision and/or metrology. In addition, the cost of integrating a new sensor into the complete system is negligible compared to other multi-tracker systems. The increase in information from different points of view in multi-camera systems raises the accuracy, based on the premise that the more points of view, the lower the level of uncertainty. This work is devoted to the calibration procedures of multi-camera systems, which is decisive to achieve high performance, with a particular focus on the uncertainty budget. Moreover, an evaluation methodology has been carried out, which is key to determining the level of accuracy of the measurement system.
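The premise "the more points of view, the lower the level of uncertainty" can be checked with a tiny Monte-Carlo sketch (an idealized model with independent, equally noisy cameras; real systems also gain from geometry, not just from averaging):

```python
import random
import statistics

# n cameras each give an independent, equally noisy observation of the same
# coordinate; the fused estimate is their average.
random.seed(0)

def std_of_mean(n_cameras, trials=2000, sigma=1.0):
    means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n_cameras))
             for _ in range(trials)]
    return statistics.stdev(means)

s1 = std_of_mean(1)   # single camera
s4 = std_of_mean(4)   # four cameras: standard error shrinks by about sqrt(4) = 2
```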
16

Nishio, Norihiro, Yuki Deguchi, Takahiro Sugiyama, and Yoichi Takebayashi. "Multi-Camera Shooting Support System for Novices in a Compact Studio." Advanced Materials Research 222 (April 2011): 329–32. http://dx.doi.org/10.4028/www.scientific.net/amr.222.329.

Abstract:
We developed a multi-camera shooting support system in a compact studio for novice cameramen. We analyzed the problems they faced when shooting video programs. We have suggested that our automated multi-camera controls and the adjustment of camera angles for face detection reduce the burden on novices. Our experiment shows that a novice cameraman alone can carry out the task of shooting a video program.
17

Khule, Shruti, Supriya Jaybhay, Pranjal Metkari, and Balasaheb Balkhande. "Smart Surveillance System Real-Time Multi-Person Multi-Camera Tracking at the Edge." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (April 30, 2022): 1196–98. http://dx.doi.org/10.22214/ijraset.2022.40954.

Abstract:
Nowadays, new Artificial Intelligence (AI) and Deep Learning based processing methods are replacing traditional computer vision algorithms. On the other hand, the rise of the Internet of Things (IoT) and edge computing has led to many research works that propose distributed video-surveillance systems based on this notion. Usually, such advanced systems process massive volumes of data in different computing facilities. Instead, this paper presents a system that incorporates AI algorithms into low-power embedded devices. The computer vision technique, which is commonly used in surveillance applications, is designed to identify, count, and monitor people's movements in the area. A distributed camera system is required for this application. The proposed AI system detects people in the monitored area using a MobileNet-SSD architecture. This algorithm can keep track of people in the surveillance footage, providing the number of people present in the frame. The proposed framework is both privacy-aware and scalable, supporting a processing pipeline on the edge consisting of person detection, tracking, and robust person re-identification. The expected results show the usefulness of deploying this smart camera node throughout a distributed surveillance system. Keywords: edge analytics, person detection, person re-identification, deep learning, embedded systems, artificial intelligence.
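One concrete building block of such detection-and-tracking pipelines is the intersection-over-union (IoU) overlap test used to associate boxes across frames and cameras; a minimal sketch (a generic helper, not the paper's code):

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2): the standard
# score used to decide whether a new detection matches an existing track.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))   # 25 / 175
```

A typical tracker accepts the match when the score exceeds a threshold such as 0.5.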
18

Ince, Omer Faruk, and Jun-Sik Kim. "TIMA SLAM: Tracking Independently and Mapping Altogether for an Uncalibrated Multi-Camera System." Sensors 21, no. 2 (January 8, 2021): 409. http://dx.doi.org/10.3390/s21020409.

Abstract:
We present a novel simultaneous localization and mapping (SLAM) system that extends the state-of-the-art ORB-SLAM2 for multi-camera usage without precalibration. In this system, each camera is tracked independently on a shared map, and the extrinsic parameters of each camera in the fixed multi-camera system are estimated online up to a scalar ambiguity (for RGB cameras). Thus, the laborious precalibration of extrinsic parameters between cameras becomes needless. By optimizing the map, the keyframe poses, and the relative poses of the multi-camera system simultaneously, observations from the multiple cameras are utilized robustly, and the accuracy of the shared map is improved. The system is not only compatible with RGB sensors but also works on RGB-D cameras. For RGB cameras, the performance of the system was tested on the well-known EuRoC/ASL and KITTI datasets, which are in the stereo configuration for indoor and outdoor environments, respectively, as well as on our dataset, which consists of three cameras with small overlapping regions. For the RGB-D tests, we created a dataset that consists of two cameras for an indoor environment. The experimental results showed that the proposed method successfully provides an accurate multi-camera SLAM system without precalibration of the multi-cameras.
20

Alsadik, Bashar, Fabio Remondino, and Francesco Nex. "Simulating a Hybrid Acquisition System for UAV Platforms." Drones 6, no. 11 (October 25, 2022): 314. http://dx.doi.org/10.3390/drones6110314.

Abstract:
Currently, there is a rapid trend in the production of airborne sensors consisting of multi-view cameras or hybrid sensors, i.e., a LiDAR scanner coupled with one or multiple cameras to enrich the data acquisition in terms of colors, texture, completeness of coverage, accuracy, etc. However, the current UAV hybrid systems are mainly equipped with a single camera that will not be sufficient to view the facades of buildings or other complex objects without having double flight paths with a defined oblique angle. This entails extensive flight planning, acquisition duration, extra costs, and data handling. In this paper, a multi-view camera system which is similar to the conventional Maltese cross configurations used in the standard aerial oblique camera systems is simulated. This proposed camera system is integrated with a multi-beam LiDAR to build an efficient UAV hybrid system. To design the low-cost UAV hybrid system, two types of cameras are investigated and proposed, namely the MAPIR Survey and the SenseFly SODA, integrated with a multi-beam digital Ouster OS1-32 LiDAR sensor. Two simulated UAV flight experiments are created with a dedicated methodology and processed with photogrammetric methods. The results show that with a flight speed of 5 m/sec and an image overlap of 80/80, an average density of up to 1500 pts/m2 can be achieved with adequate facade coverage in one-pass flight strips.
21

Perfetti, L., C. Polari, and F. Fassi. "FISHEYE MULTI-CAMERA SYSTEM CALIBRATION FOR SURVEYING NARROW AND COMPLEX ARCHITECTURES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 877–83. http://dx.doi.org/10.5194/isprs-archives-xlii-2-877-2018.

Abstract:
Narrow spaces and passages are not a rare encounter in cultural heritage, the shape and extension of those areas place a serious challenge on any techniques one may choose to survey their 3D geometry. Especially on techniques that make use of stationary instrumentation like terrestrial laser scanning. The ratio between space extension and cross section width of many corridors and staircases can easily lead to distortions/drift of the 3D reconstruction because of the problem of propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to contain the degree of freedom of the photogrammetric network, thereby containing the drift of long data set as well. The idea is that of employing a multi-camera system composed of several fisheye cameras and to implement distances and relative orientation constraints, as well as the pre-calibration of the internal parameters for each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, that encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images and a data set obtained with a monocular fisheye configuration using a full frame DSLR. Results show improved accuracy, down to millimetres, using a rigidly constrained multi-camera.
22

Jiang, Mingjun, Zihan Zhang, Kohei Shimasaki, Shaopeng Hu, and Idaku Ishii. "Multi-Thread AI Cameras Using High-Speed Active Vision System." Journal of Robotics and Mechatronics 34, no. 5 (October 20, 2022): 1053–62. http://dx.doi.org/10.20965/jrm.2022.p1053.

Abstract:
In this study, we propose a multi-thread artificial intelligence (AI) camera system that can simultaneously recognize remote objects in desired multiple areas of interest (AOIs), which are distributed in a wide field of view (FOV), by using a single image sensor. The proposed multi-thread AI camera consists of an ultrafast active vision system and a convolutional neural network (CNN)-based ultrafast object recognition system. The ultrafast active vision system can function as multiple virtual cameras with high spatial resolution by synchronizing the exposure of a high-speed camera and the movement of an ultrafast two-axis mirror device at hundreds of hertz, and the CNN-based ultrafast object recognition system simultaneously recognizes the acquired high-frame-rate images in real time. The desired AOIs for monitoring can be automatically determined after rapidly scanning pre-placed visual anchors in the wide FOV at hundreds of fps with object recognition. The effectiveness of the proposed multi-thread AI camera system was demonstrated by conducting several wide-area monitoring experiments on quick response (QR) codes and persons in a spacious natural scene, such as a meeting room, which is too wide for a single still camera with a wide-angle lens to acquire clear images of simultaneously.
23

Chen, Shuya, Zhiyu Xiang, Nan Zou, Yiman Chen, and Chengyu Qiao. "Multi-stereo 3D reconstruction with a single-camera multi-mirror catadioptric system." Measurement Science and Technology 31, no. 1 (October 23, 2019): 015102. http://dx.doi.org/10.1088/1361-6501/ab3be4.

24

Jung, Taek-Hoon, Benjamin Cates, In-Kyo Choi, Sang-Heon Lee, and Jong-Min Choi. "Multi-Camera-Based Person Recognition System for Autonomous Tractors." Designs 4, no. 4 (December 9, 2020): 54. http://dx.doi.org/10.3390/designs4040054.

Abstract:
Recently, the development of autonomous tractors is being carried out as an alternative to solving the labor shortage problem of agricultural workers due to an aging population and a low birth rate. As the level of autonomous driving technology advances, tractor manufacturers should develop technology with the safety of their customers as a top priority. In this paper, we suggest a person recognition system for the entire environment of the tractor using a four-channel camera mounted on the tractor and the NVIDIA Jetson Xavier platform. The four-channel frame synchronization and preprocessing were performed, and the methods of recognizing people in the agricultural environment were combined using the YOLO-v3 algorithm. Among the many objects provided by the COCO dataset for training the YOLO-v3 algorithm, only person objects were extracted and the network was trained. A total of 8602 image frames were collected at the LSMtron driving test field to measure the recognition performance of actual autonomous tractors. The collected images were required to express the various postures of agricultural workers (e.g., parts of the body obscured by crops, squatting, etc.) that may appear in the agricultural environment. The person object labeling was performed manually for the collected test datasets. For this test dataset, a comparison of the person recognition performance of the standard YOLO-v3 (80-class detection) and our YOLO-v3 (person detection only) was performed. As a result, our system showed 88.43% precision and 86.19% recall. This was 0.71% higher precision and 2.3 fps faster than the standard YOLO-v3. This recognition performance was judged to be sufficient considering the working conditions of autonomous tractors.
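The reported precision and recall follow from simple counts of matched detections; a sketch with hypothetical counts (not the paper's raw numbers):

```python
# A detection counts as a true positive when it matches a ground-truth person
# box (e.g., by IoU >= 0.5). The counts below are invented for illustration.
true_positives = 862
false_positives = 113   # detections with no matching ground-truth person
false_negatives = 138   # ground-truth persons that were missed

precision = true_positives / (true_positives + false_positives)  # ~0.884
recall = true_positives / (true_positives + false_negatives)     # 0.862
```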
25

Zhou, Jiaqing, Yue Wu, Xiangrong Zeng, Xin Long, and Guang Jin. "Multi-view Blind Deconvolution Method of Camera Array System." Journal of Computer-Aided Design & Computer Graphics 30, no. 8 (2018): 1446. http://dx.doi.org/10.3724/sp.j.1089.2018.16792.

26

Javed, Zeeshan, and Gon-Woo Kim. "Omni-directional Visual-LiDAR SLAM for Multi-Camera System." Journal of Korea Robotics Society 17, no. 3 (September 1, 2022): 353–58. http://dx.doi.org/10.7746/jkros.2022.17.3.353.

27

Dinh, B. M., A. N. Timofeev, V. V. Korotaev, and T. V. Turgalieva. "Multi-camera system for determining complex-shape fruit size." Izvestiâ vysših učebnyh zavedenij. Priborostroenie 64, no. 8 (August 31, 2021): 656–66. http://dx.doi.org/10.17586/0021-3454-2021-64-8-656-666.

28

Olagoke, Adeshina Sirajdin, Haidi Ibrahim, and Soo Siang Teoh. "Literature Survey on Multi-Camera System and Its Application." IEEE Access 8 (2020): 172892–922. http://dx.doi.org/10.1109/access.2020.3024568.

29

Uemura, Tomomasa, Sumio Yamada, Fujio Yamamoto, Hiroaki Nakajima, and Hisashi Usui. "Development and Application of a Simple Multi-Camera System." Journal of the Visualization Society of Japan 11, Supplement1 (1991): 67–70. http://dx.doi.org/10.3154/jvs.11.supplement1_67.

30

Zhang, Tian-shu, Guang Jin, and Chun-yu Liu. "Optical system design of multi-angle coupled framing camera." Chinese Optics 11, no. 4 (2018): 615–22. http://dx.doi.org/10.3788/co.20181104.0615.

31

Wang, Xinlei. "Novel Calibration Method for the Multi-Camera Measurement System." Journal of the Optical Society of Korea 18, no. 6 (December 25, 2014): 746–52. http://dx.doi.org/10.3807/josk.2014.18.6.746.

32

Yang, Tianlong, Qiancheng Zhao, Xian Wang, and Dongzhao Huang. "Accurate calibration approach for non-overlapping multi-camera system." Optics & Laser Technology 110 (February 2019): 78–86. http://dx.doi.org/10.1016/j.optlastec.2018.07.054.

33

Maas, Hans-Gerd. "Image sequence based automatic multi-camera system calibration techniques." ISPRS Journal of Photogrammetry and Remote Sensing 54, no. 5-6 (December 1999): 352–59. http://dx.doi.org/10.1016/s0924-2716(99)00029-5.

34

IWAKUMA, Keigo, and Yuji YAMAKAWA. "Multiple Objects Tracking with High-speed Multi-camera System." Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2020 (2020): 2P2-J15. http://dx.doi.org/10.1299/jsmermd.2020.2p2-j15.

35

Guler, Puren, Deniz Emeksiz, Alptekin Temizel, Mustafa Teke, and Tugba Taskaya Temizel. "Real-time multi-camera video analytics system on GPU." Journal of Real-Time Image Processing 11, no. 3 (March 27, 2013): 457–72. http://dx.doi.org/10.1007/s11554-013-0337-2.

36

Dan, Xizuo, Junrui Li, Qihan Zhao, Fangyuan Sun, Yonghong Wang, and Lianxiang Yang. "A Cross-Dichroic-Prism-Based Multi-Perspective Digital Image Correlation System." Applied Sciences 9, no. 4 (February 16, 2019): 673. http://dx.doi.org/10.3390/app9040673.

Abstract:
A robust three-perspective digital image correlation (DIC) system based on a cross dichroic prism and single three charge-coupled device (3CCD) color cameras is proposed in this study. Images from three different perspectives are captured by a 3CCD camera using the cross dichroic prism and two planar mirrors. These images are then separated by different CCD channels to perform correlation calculation with an existing multi-camera DIC algorithm. The proposed system is considerably more compact than the conventional multi-camera DIC system. In addition, the proposed system has no loss of spatial resolution compared with the traditional single-camera DIC system. The principle and experimental setup of the proposed system is described in detail, and a series of tests is performed to validate the system. Experimental results show that the proposed system performs well in displacement, morphology, and strain measurement.
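The correlation calculation underlying DIC subset matching is typically a zero-normalized cross-correlation (ZNCC); a minimal numpy sketch with a synthetic speckle pattern (illustrative only, not the authors' algorithm):

```python
import numpy as np

def zncc(a, b):
    # Zero-normalized cross-correlation: invariant to brightness offset and
    # contrast scaling, the similarity score at the heart of DIC matching.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(1)
image = rng.random((40, 40))      # stand-in for a speckle pattern
subset = image[10:20, 15:25]      # 10x10 reference subset

# Locate the subset by maximizing ZNCC over all integer positions.
best, best_pos = -2.0, None
for r in range(31):
    for c in range(31):
        score = zncc(subset, image[r:r + 10, c:c + 10])
        if score > best:
            best, best_pos = score, (r, c)
```

A full DIC solver refines this integer match to sub-pixel displacements and strains; the exhaustive search here is only the coarse first step.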
37

Guo, Peiyao, Zhiyuan Pu, and Zhan Ma. "Multi-Camera Systems: Imaging Enhancement and Applications" [多相机系统：成像增强及应用]. Laser & Optoelectronics Progress 58, no. 18 (2021): 1811013. http://dx.doi.org/10.3788/lop202158.1811013.

38

Gieseler, Oliver, Hubert Roth, and Jürgen Wahrburg. "A novel 4 camera multi-stereo tracking system for application in surgical navigation systems." tm - Technisches Messen 87, no. 7-8 (July 26, 2020): 451–58. http://dx.doi.org/10.1515/teme-2019-0144.

Abstract:
In this paper, we present a novel 4-camera stereo system for application as an optical tracking component in navigation systems in computer-assisted surgery. This shall replace a common stereo camera system in several applications. The objective is to provide a tracking component consisting of four single industrial cameras. The system can be built up flexibly in the operating room, e.g. at the operating room lamp. The concept is characterized by independent, arbitrary camera mounting poses and demands easy on-site calibration procedures of the camera setup. Following a short introduction describing the environment, motivation and advantages of the new camera system, a simulation of the camera setup and arrangement is depicted in Section 2. From this, we gather important information and parameters for the hardware setup, which is described in Section 3. Section 4 includes the calibration of the cameras. Here, we illustrate the background of the camera model and the applied calibration procedures, a comparison of calibration results obtained with different calibration programs, and a new concept for fast and easy extrinsic calibration.
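Once such a rig is calibrated, a tracked marker is located by intersecting viewing rays from two or more cameras; a midpoint triangulation sketch for one camera pair (an illustrative textbook method, not necessarily the authors'):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    # Find ray parameters s, t minimizing |(o1 + s*d1) - (o2 + t*d2)|, then
    # return the midpoint of the closest approach between the two rays.
    A = np.stack([d1, -d2], axis=1)           # 3x2 system in (s, t)
    s, t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Synthetic setup: two camera centers observing one marker (made-up geometry).
target = np.array([0.2, -0.1, 3.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d1 = target - o1
d2 = target - o2
X = triangulate_midpoint(o1, d1 / np.linalg.norm(d1),
                         o2, d2 / np.linalg.norm(d2))
```

With noisy image measurements the rays no longer intersect exactly, and the midpoint (or a reprojection-error minimization over all four cameras) gives the estimate.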
39

Abdullah, Hasanen S., and Sana A. Jabber. "Faces Tracking from Multi-Surveillance Camera (FTMSC)." Muthanna Journal of Pure Science 4, no. 2 (October 19, 2017): 23–32. http://dx.doi.org/10.52113/2/04.02.2017/23-32.

Full text
Abstract:
The development of a robust, integrated multi-camera surveillance system is an important requirement for ensuring public safety and security. Re-identifying and tracking one or more targets across different scenes with surveillance cameras remains an important and difficult problem due to occlusion, significant changes of view, and lighting variation across cameras. In this paper, a traditional surveillance system is developed and supported by intelligent techniques. The system can process all cameras in parallel to track people across different scenes (places). In addition, it shows information about authorized people appearing on the surveillance cameras and issues a warning whistle to alert security personnel if an unauthorized person appears. The Viola–Jones approach is used to detect faces, and each target face is then classified as one of the authorized faces or not by using Local Binary Patterns (LBP).
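The LBP descriptor used for the recognition stage above is simple enough to illustrate directly. The following NumPy sketch computes a basic 8-neighbour LBP code image and its normalised histogram; the bit weights, neighbour ordering, and function names are one common convention chosen for illustration, not necessarily the authors' implementation:

```python
import numpy as np

def lbp_8neighbour(img):
    """Basic 8-neighbour Local Binary Pattern for a 2-D grayscale array.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    greater than or equal to the centre contributes one bit to an 8-bit
    code. Border pixels are left as zero for simplicity.
    """
    img = np.asarray(img, dtype=np.int32)
    codes = np.zeros_like(img)
    # (row offset, col offset, bit weight) for the 8 neighbours, clockwise
    offsets = [(-1, -1, 1), (-1, 0, 2), (-1, 1, 4), (0, 1, 8),
               (1, 1, 16), (1, 0, 32), (1, -1, 64), (0, -1, 128)]
    centre = img[1:-1, 1:-1]
    for dr, dc, weight in offsets:
        neighbour = img[1 + dr:img.shape[0] - 1 + dr,
                        1 + dc:img.shape[1] - 1 + dc]
        codes[1:-1, 1:-1] += weight * (neighbour >= centre)
    return codes

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, usable as a face descriptor."""
    codes = lbp_8neighbour(img)[1:-1, 1:-1]  # ignore the zeroed border
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In a recogniser of the kind the abstract describes, such histograms would typically be computed per face region and compared (e.g., by chi-squared distance) against the histograms of authorized faces.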
APA, Harvard, Vancouver, ISO, and other styles
40

Zong, Weijia, Zhouyi Wang, Qiang Xing, Junjie Zhu, Liuwei Wang, Kai Qin, Hemin Bai, Min Yu, and Zhendong Dai. "The Method of Multi-Camera Layout in Motion Capture System for Diverse Small Animals." Applied Sciences 8, no. 9 (September 5, 2018): 1562. http://dx.doi.org/10.3390/app8091562.

Full text
Abstract:
Motion capture based on multiple cameras is widely used in the quantification of animal locomotor behaviors; it is one of the main research methods for revealing the physical laws of animal locomotion and for inspiring the design and realization of bionic robots. It has been found that the multi-camera layout pattern greatly affects the quality of motion capture. Because research covers animals of diverse species, determining the most appropriate layout pattern to achieve excellent capture performance remains an unresolved challenge. To improve capture accuracy, this investigation focuses on the method of multi-camera layout for a motion capture system for diverse animals with significant differences in outward appearance and locomotor behavior. The demand boundaries of motion capture are determined according to the appearance types (shapes and space volume) and behavioral characteristics of the animals, resulting in a principle for matching the typical multi-camera layout patterns (arch, annular, and half-annular) with diverse animals. The results of the calibration experiments show that the average standard deviation rate (ASDR) of the multi-camera system in the half-annular layout pattern (0.52%) is clearly smaller than that of the other two patterns, while its intersecting volume is the largest of the three. The ASDR at different depths of field in the half-annular layout demonstrates that a greater depth of field is conducive to improving the precision of the motion capture system. Laboratory motion capture experiments with small animals (geckos and spiders), using the multi-camera system locked to the 3-D force measuring platform in a half-annular layout pattern, indicate that their ASDR can reach less than 3.8% and their average capturing deviation rates (ACDR) are 3.43% and 1.74%, respectively. In this report, the correlations between the motion capture demand boundaries of small animals and the characteristics of the multi-camera layout patterns were determined to advance motion capture experimental technology for all kinds of small animals, which can provide effective support for the understanding of animal locomotion.
APA, Harvard, Vancouver, ISO, and other styles
41

Wu, Yi-Chang, Ching-Han Chen, Yao-Te Chiu, and Pi-Wei Chen. "Cooperative People Tracking by Distributed Cameras Network." Electronics 10, no. 15 (July 25, 2021): 1780. http://dx.doi.org/10.3390/electronics10151780.

Full text
Abstract:
In video surveillance applications, reliable people detection and tracking are always challenging tasks. A conventional single-camera surveillance system may encounter difficulties such as a narrow angle of view and dead space. In this paper, we propose a multi-camera network architecture with an inter-camera hand-off protocol for cooperative people tracking. We use the YOLO model to detect multiple people in the video scene and incorporate the particle swarm optimization algorithm to track each person's movement. When a person leaves the area covered by one camera and enters an area covered by another, these cameras exchange relevant information for uninterrupted tracking. A motion smoothness (MS) metric is proposed for evaluating the tracking quality of the multi-camera networking system. We used a three-camera system to track two persons in an overlapping scene for experimental evaluation. Most per-frame tracking offsets were lower than 30 pixels, and only 0.15% of the frames showed abrupt increases in offset. The experimental results reveal that our multi-camera system achieves robust, smooth tracking performance.
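The abstract does not define the MS metric, but its reported figures (offsets below 30 pixels, a small fraction of abrupt increases) suggest a statistic over inter-frame track offsets. The sketch below is one plausible reading, not the authors' formula; the function names and the 30-pixel and 3× jump thresholds are illustrative assumptions:

```python
import math

def frame_offsets(track):
    """Euclidean offset between consecutive (x, y) positions of a track."""
    return [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]

def smoothness_report(track, offset_limit=30.0, jump_factor=3.0):
    """Summarise tracking smoothness for one target.

    Returns the fraction of inter-frame offsets below `offset_limit`
    (pixels) and the fraction of frames whose offset jumps by more than
    `jump_factor` times the previous offset (an "abrupt increase").
    """
    offsets = frame_offsets(track)
    if not offsets:
        return 1.0, 0.0
    below = sum(o < offset_limit for o in offsets) / len(offsets)
    abrupt = sum(
        offsets[i] > jump_factor * max(offsets[i - 1], 1.0)
        for i in range(1, len(offsets))
    )
    return below, abrupt / len(offsets)
```

A smooth track would score close to (1.0, 0.0); a hand-off glitch between cameras would show up as an abrupt-increase frame.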
APA, Harvard, Vancouver, ISO, and other styles
42

Álvarez, Hugo, Marcos Alonso, Jairo R. Sánchez, and Alberto Izaguirre. "A Multi Camera and Multi Laser Calibration Method for 3D Reconstruction of Revolution Parts." Sensors 21, no. 3 (January 24, 2021): 765. http://dx.doi.org/10.3390/s21030765.

Full text
Abstract:
This paper describes a method for calibrating multi-camera and multi-laser 3D triangulation systems, particularly those using Scheimpflug adapters. Under this configuration, the focus plane of the camera is located at the laser plane, making it difficult to use traditional calibration methods such as chessboard-pattern-based strategies. Our method uses a conical calibration object whose intersections with the laser planes generate stepped line patterns that can be used to calculate the camera–laser homographies. The calibration object has been designed to calibrate scanners for revolving surfaces, but the approach can easily be extended to linear setups. The experiments carried out show that the proposed system achieves a precision of 0.1 mm.
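The camera–laser homographies mentioned above can, in general, be fitted from point correspondences (such as those the stepped line patterns provide) with the standard direct linear transform (DLT). The NumPy sketch below is the textbook formulation, not the authors' implementation, and the point sets are synthetic:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via the DLT.

    `src` and `dst` are (N, 2) arrays of corresponding points, N >= 4.
    Returns H normalised so that H[2, 2] == 1.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(rows))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pts):
    """Map (N, 2) points through homography `h`."""
    pts = np.asarray(pts, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    mapped = np.hstack([pts, ones]) @ h.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice a least-squares fit over many correspondences (as the stepped patterns would supply) makes the estimate robust to detection noise.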
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Yi, and J. Chen. "A Intelligent Wheelchair Obstacle Avoidance System Based on Multi-Sensor Fusion Technology." Key Engineering Materials 455 (December 2010): 121–26. http://dx.doi.org/10.4028/www.scientific.net/kem.455.121.

Full text
Abstract:
In this paper, an intelligent wheelchair obstacle avoidance system based on multi-sensor data fusion technology is introduced. It presents the hardware architecture of the wheelchair and develops a sonar and camera data acquisition system on the VC++ platform, with which the sonar and camera sensor information collection and data processing can be completed. A T-S-model-based fuzzy neural network multi-sensor data fusion method is used for intelligent wheelchair obstacle avoidance. Simulations were run to test the method in different environments; the method can effectively integrate the sonar and camera information and give appropriate control signals to avoid obstacles.
APA, Harvard, Vancouver, ISO, and other styles
44

Pan, Yingwei, Yue Chen, Qian Bao, Ning Zhang, Ting Yao, Jingen Liu, and Tao Mei. "Smart Director: An Event-Driven Directing System for Live Broadcasting." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–18. http://dx.doi.org/10.1145/3448981.

Full text
Abstract:
Live video broadcasting normally requires a multitude of skills and expertise with domain knowledge to enable multi-camera productions. As the number of cameras keeps increasing, directing a live sports broadcast has become more complicated and challenging than ever before. Broadcast directors need to be much more concentrated, responsive, and knowledgeable during the production. To relieve directors from their intensive efforts, we develop an innovative automated sports broadcast directing system, called Smart Director, which aims at mimicking the typical human-in-the-loop broadcasting process to automatically create near-professional broadcasting programs in real time by using a set of advanced multi-view video analysis algorithms. Inspired by the so-called “three-event” construction of sports broadcasts [14], we build our system with an event-driven pipeline consisting of three consecutive novel components: (1) Multi-View Event Localization, which detects events by modeling multi-view correlations; (2) Multi-View Highlight Detection, which ranks camera views by visual importance for view selection; and (3) the Auto-Broadcasting Scheduler, which controls the production of broadcast videos. To the best of our knowledge, our system is the first end-to-end automated directing system for multi-camera sports broadcasting, completely driven by the semantic understanding of sports events. It is also the first system to solve the novel problem of multi-view joint event detection by cross-view relation modeling. We conduct both objective and subjective evaluations on a real-world multi-camera soccer dataset, which demonstrate that the quality of our auto-generated videos is comparable to that of human-directed videos. Thanks to its faster response, our system is able to capture more fast-passing and short-duration events that are usually missed by human directors.
APA, Harvard, Vancouver, ISO, and other styles
45

Yaguchi, Hiroaki, Nikolaus Zaoputra, Naotaka Hatao, Kimitoshi Yamazaki, Kei Okada, and Masayuki Inaba. "View-Based Localization Using Head-Mounted Multi Sensors Information." Journal of Robotics and Mechatronics 21, no. 3 (June 20, 2009): 376–83. http://dx.doi.org/10.20965/jrm.2009.p0376.

Full text
Abstract:
In view-based navigation, view sequences are constructed by considering only the appearance of images. This approach works only in limited situations, because the structure of the environment and the camera poses under 3D camera motion are not considered. In this paper, we construct a multi-sensor system using an omnidirectional camera, a motion sensor, and laser range finders. Using this system, we propose a method of constructing view sequences that takes 3D camera poses into account.
APA, Harvard, Vancouver, ISO, and other styles
46

Chi, Jiannan, Jiahui Liu, Feng Wang, Yingkai Chi, and Zeng-Guang Hou. "3-D Gaze-Estimation Method Using a Multi-Camera-Multi-Light-Source System." IEEE Transactions on Instrumentation and Measurement 69, no. 12 (December 2020): 9695–708. http://dx.doi.org/10.1109/tim.2020.3006681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Zhang, Xinman, Weiyong Gong, and Xuebin Xu. "Magnetic Ring Multi-Defect Stereo Detection System Based on Multi-Camera Vision Technology." Sensors 20, no. 2 (January 10, 2020): 392. http://dx.doi.org/10.3390/s20020392.

Full text
Abstract:
Magnetic rings are the most widely used magnetic material product in industry. The existing manual defect detection method for magnetic rings has high cost, low efficiency, and low precision. To address this issue, a magnetic ring multi-defect stereo detection system based on multi-camera vision technology is developed to automate the inspection of magnetic rings. The system can detect surface defects and measure ring height simultaneously. Two image processing algorithms are proposed: the image edge removal algorithm (IERA) and the magnetic ring location algorithm (MRLA). On the basis of these two algorithms, connected-domain filtering methods for crack, fiber, and large-area defects are established to complete defect inspection. The system achieves a recognition rate of 100% for defects such as cracks, adhesion, hanger adhesion, and pitting. Furthermore, the recognition rates for fiber and foreign matter defects reach 92.5% and 91.5%, respectively. The detection speed exceeds 120 magnetic rings per minute, and the precision is within 0.05 mm. Both precision and speed meet the requirements of real-time quality inspection in actual production.
APA, Harvard, Vancouver, ISO, and other styles
48

Khoramshahi, Ehsan, Mariana Campos, Antonio Tommaselli, Niko Vilijanen, Teemu Mielonen, Harri Kaartinen, Antero Kukko, and Eija Honkavaara. "Accurate Calibration Scheme for a Multi-Camera Mobile Mapping System." Remote Sensing 11, no. 23 (November 25, 2019): 2778. http://dx.doi.org/10.3390/rs11232778.

Full text
Abstract:
Mobile mapping systems (MMS) are increasingly used for many photogrammetric and computer vision applications, especially because they enable fast and accurate geospatial data generation. The accuracy of point positioning in an MMS mainly depends on the quality of the calibration, the accuracy of sensor synchronization, the accuracy of georeferencing, and the stability of the geometric configuration of space intersections. In this study, we focus on multi-camera calibration (interior and relative orientation parameter estimation) and MMS calibration (mounting parameter estimation). The objective was to develop a practical scheme for rigorous and accurate system calibration of a photogrammetric mapping station equipped with a multi-projective camera (MPC) and a global navigation satellite system (GNSS) and inertial measurement unit (IMU) for direct georeferencing. The proposed technique comprises two steps. First, the interior orientation parameters of each individual camera in the MPC and the relative orientation parameters of each camera with respect to the first camera are estimated. In the second step, the offset and misalignment between the MPC and the GNSS/IMU are estimated. The global accuracy of the proposed method was assessed using independent check points. A correspondence map for a panorama is introduced that provides metric information. Our results highlight that the proposed calibration scheme reaches centimeter-level global accuracy for 3D point positioning. This level of global accuracy demonstrates the feasibility of the proposed technique and its potential to fit accurate mapping purposes.
APA, Harvard, Vancouver, ISO, and other styles
49

Li, Jing, Jing Xu, Fangwei Zhong, Xiangyu Kong, Yu Qiao, and Yizhou Wang. "Pose-Assisted Multi-Camera Collaboration for Active Object Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 759–66. http://dx.doi.org/10.1609/aaai.v34i01.5419.

Full text
Abstract:
Active Object Tracking (AOT) is crucial to many vision-based applications, e.g., mobile robots and intelligent surveillance. However, a number of challenges arise when deploying active tracking in complex scenarios, e.g., the target is frequently occluded by obstacles. In this paper, we extend single-camera AOT to a multi-camera setting, where cameras track a target in a collaborative fashion. To achieve effective collaboration among cameras, we propose a novel Pose-Assisted Multi-Camera Collaboration System, which enables a camera to cooperate with the others by sharing camera poses for active object tracking. In the system, each camera is equipped with two controllers and a switcher: the vision-based controller tracks targets based on observed images, and the pose-based controller moves the camera in accordance with the poses of the other cameras. At each step, the switcher decides which of the two controllers' actions to take according to the visibility of the target. The experimental results demonstrate that our system outperforms all the baselines and is capable of generalizing to unseen environments. The code and demo videos are available on our website https://sites.google.com/view/pose-assisted-collaboration.
APA, Harvard, Vancouver, ISO, and other styles
50

Bapin, Yerzhigit, Kanat Alimanov, and Vasilios Zarikas. "Camera-Driven Probabilistic Algorithm for Multi-Elevator Systems." Energies 13, no. 23 (November 24, 2020): 6161. http://dx.doi.org/10.3390/en13236161.

Full text
Abstract:
A fast and reliable vertical transportation system is an important component of modern office buildings. Optimization of elevator control strategies can easily be done using state-of-the-art artificial intelligence (AI) algorithms. This study presents a novel method for optimal dispatching of conventional passenger elevators using information obtained from surveillance cameras. It is assumed that a real-time video stream is processed by an image processing system that determines the number of passengers and items waiting for an elevator car in hallways and riding the lifts. These numbers are also associated with a given uncertainty probability. The efficiency of our novel elevator control algorithm is achieved not only through the probabilistic use of the number of people and/or items waiting but also through the demand to exhaustively serve a crowded floor, directing to it as many elevators as are available and filling them up to the maximum allowed weight. The proposed algorithm takes into account the uncertainty that can arise from inaccuracy of the image processing system, introducing the concept of an effective number of people and items using Bayesian networks. The aim is to reduce waiting time. According to the simulation results, the implementation of the proposed algorithm reduced passenger journey time. The proposed approach was tested on a 10-storey office building with five elevator cars and traffic size and intensity varying from 10 to 300 and from 0.01 to 3, respectively. The results showed that, for interfloor traffic conditions, the average travel time for scenarios with varying traffic size and intensity improved by 39.94% and 19.53%, respectively.
APA, Harvard, Vancouver, ISO, and other styles