Academic literature on the topic 'Multi-Camera System'

Create an accurate reference in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-Camera System.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multi-Camera System"

1

Zhang, Zirui, and Jun Cheng. "Multi-Camera Tracking Helmet System." Journal of Image and Graphics 1, no. 2 (2013): 76–79. http://dx.doi.org/10.12720/joig.1.2.76-79.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

MOMIYAMA, Takumi, and Tsuyoshi SHIMIZU. "202 Calibration Flexible multi camera system." Proceedings of Yamanashi District Conference 2014 (2014): 31–32. http://dx.doi.org/10.1299/jsmeyamanashi.2014.31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Detchev, I., M. Mazaheri, S. Rondeel, and A. Habib. "Calibration of multi-camera photogrammetric systems." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 101–8. http://dx.doi.org/10.5194/isprsarchives-xl-1-101-2014.

Full text
Abstract:
Due to the low cost and off-the-shelf availability of consumer-grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. This system calibration should be performed as efficiently as possible, because it may need to be completed many times. Multi-camera system calibration involves the estimation of the interior orientation parameters of each involved camera and the estimation of the relative orientation parameters among the cameras. This paper first reviews a method for multi-camera system calibration with built-in relative orientation constraints. A system stability analysis algorithm is then presented which can be used to assess different system calibration outcomes. The paper explores the required calibration configuration for a specific system in two situations: major calibration (when both the interior orientation parameters and relative orientation parameters are estimated), and minor calibration (when the interior orientation parameters are known a priori and only the relative orientation parameters are estimated). In both situations, system calibration results are compared using the system stability analysis methodology.
APA, Harvard, Vancouver, ISO, and other styles
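The relative orientation parameters discussed in the abstract of entry 3 follow by composing the two cameras' world poses. A minimal numpy sketch (function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def relative_orientation(R1, t1, R2, t2):
    """Relative pose of camera 2 with respect to camera 1.

    Illustrative sketch. Each (R_i, t_i) maps world points into
    camera i: x_i = R_i @ X + t_i. The relative orientation maps
    camera-1 coordinates into camera 2: x2 = R_rel @ x1 + t_rel.
    """
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel
```

In a calibration with built-in relative orientation constraints, these inter-camera parameters are treated as fixed (or jointly estimated) across all exposures rather than re-derived per frame.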
4

Aizawa, Kiyoharu. "Multi-camera Surveillance System using Wireless LAN." Journal of Life Support Engineering 18, Supplement (2006): 3. http://dx.doi.org/10.5136/lifesupport.18.supplement_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wierzbicki, Damian. "Multi-Camera Imaging System for UAV Photogrammetry." Sensors 18, no. 8 (July 26, 2018): 2433. http://dx.doi.org/10.3390/s18082433.

Full text
Abstract:
In the last few years, it has been possible to observe a considerable increase in the use of unmanned aerial vehicles (UAV) equipped with compact digital cameras for environment mapping. The next stage in the development of photogrammetry from low altitudes was the development of the imagery data from UAV oblique images. Imagery data was obtained from side-facing directions. As in professional photogrammetric systems, it is possible to record footprints of tree crowns and other forms of the natural environment. The use of a multi-camera system will significantly reduce one of the main UAV photogrammetry limitations (especially in the case of multirotor UAV) which is a reduction of the ground coverage area, while increasing the number of images, increasing the number of flight lines, and reducing the surface imaged during one flight. The approach proposed in this paper is based on using several head cameras to enhance the imaging geometry during one flight of UAV for mapping. As part of the research work, a multi-camera system consisting of several cameras was designed to increase the total Field of View (FOV). Thanks to this, it will be possible to increase the ground coverage area and to acquire image data effectively. The acquired images will be mosaicked in order to limit the total number of images for the mapped area. As part of the research, a set of cameras was calibrated to determine the interior orientation parameters (IOPs). Next, the method of image alignment using the feature image matching algorithms was presented. In the proposed approach, the images are combined in such a way that the final image has a joint centre of projections of component images. The experimental results showed that the proposed solution was reliable and accurate for the mapping purpose. The paper also presents the effectiveness of existing transformation models for images with a large coverage subjected to initial geometric correction due to the influence of distortion.
APA, Harvard, Vancouver, ISO, and other styles
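The mosaicking approach in entry 5 combines images so that they share a joint centre of projection. When two views share one projection centre, pixels are related by a rotation-only homography H = K2 R K1^-1; the sketch below illustrates this standard relation (names are illustrative, not the paper's code):

```python
import numpy as np

def rotation_homography(K1, K2, R):
    """Homography between two cameras with a common projection centre.

    Illustrative sketch: a ray seen at pixel p1 in camera 1 appears in
    camera 2 at p2 ~ H @ p1, with H = K2 @ R @ inv(K1).
    """
    return K2 @ R @ np.linalg.inv(K1)

def warp_pixel(H, u, v):
    """Apply the homography to one pixel and dehomogenise."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```

With component images related this way, mosaicking reduces to warping every camera's image into a common reference view before blending.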
6

Zhou, Zhipeng, Dong Yin, Jinwen Ding, Yuhao Luo, Mingyue Yuan, and Chengfeng Zhu. "Collaborative Tracking Method in Multi-Camera System." Journal of Shanghai Jiaotong University (Science) 25, no. 6 (May 29, 2020): 802–10. http://dx.doi.org/10.1007/s12204-020-2188-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Leipner, Anja, Rilana Baumeister, Michael J. Thali, Marcel Braun, Erika Dobler, and Lars C. Ebert. "Multi-camera system for 3D forensic documentation." Forensic Science International 261 (April 2016): 123–28. http://dx.doi.org/10.1016/j.forsciint.2016.02.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hsu, Che-Hao, Wen-Huang Cheng, Yi-Leh Wu, Wen-Shiung Huang, Tao Mei, and Kai-Lung Hua. "CrossbowCam: a handheld adjustable multi-camera system." Multimedia Tools and Applications 76, no. 23 (June 5, 2017): 24961–81. http://dx.doi.org/10.1007/s11042-017-4852-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yin, Lei, Xiangjun Wang, Yubo Ni, Kai Zhou, and Jilong Zhang. "Extrinsic Parameters Calibration Method of Cameras with Non-Overlapping Fields of View in Airborne Remote Sensing." Remote Sensing 10, no. 8 (August 16, 2018): 1298. http://dx.doi.org/10.3390/rs10081298.

Full text
Abstract:
Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters. Therefore, it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, which is used in aerial photogrammetry. First, the extrinsic parameters of any two cameras in a multi-camera system are calibrated, and the extrinsic matrix is optimized by the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system by using a global optimization method. A simulation experiment and a physical verification experiment were designed to test the theoretical approach. The experimental results show that the method is workable. The rotation error angle of the camera's extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
APA, Harvard, Vancouver, ISO, and other styles
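The unification step in entry 9 — expressing every camera in a single system reference frame — amounts to chaining the pairwise extrinsic transforms. A small numpy sketch of that composition (illustrative names, not the paper's implementation):

```python
import numpy as np

def to_matrix(R, t):
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def unify_to_reference(pairwise):
    """Chain pairwise extrinsics T_{i+1 <- i} so that every camera is
    expressed relative to camera 0, the system reference frame.

    pairwise: list of 4x4 transforms mapping frame i into frame i+1.
    Returns the list [T_{0<-0}, T_{1<-0}, T_{2<-0}, ...].
    """
    Ts = [np.eye(4)]
    for T_step in pairwise:
        Ts.append(T_step @ Ts[-1])
    return Ts
```

In practice each chained estimate accumulates error, which is why the paper follows the chaining with a global optimization over all cameras.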
10

ZHANG Lai-gang 张来刚, WEI Zhong-hui 魏仲慧, HE Xin 何昕, and SUN Qun 孙群. "Multi-Camera Measurement System Based on Multi-Constraint Fusion Algorithm." Chinese Journal of Liquid Crystals and Displays 28, no. 4 (2013): 608–14. http://dx.doi.org/10.3788/yjyxs20132804.0608.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Multi-Camera System"

1

Vibeck, Alexander. "Synchronization of a Multi Camera System." Thesis, Linköpings universitet, Datorseende, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119408.

Full text
Abstract:
In a synchronized multi-camera system it is imperative that the synchronization error between the different cameras is as close to zero as possible and that the jitter of the presumed frame rate is as small as possible. This is even more important when these systems are used in an autonomous vehicle trying to sense its surroundings: we would never hand over control to an autonomous vehicle if we couldn't trust the data it uses for moving around. The purpose of this thesis was to build a synchronization setup for a multi-camera system using state-of-the-art RayTrix digital cameras that will be used in the iQMatic project involving autonomous heavy-duty vehicles. The iQMatic project is a collaboration between several Swedish industrial partners and universities. The work also involved software development for the multi-camera system. Two synchronization techniques were implemented and then analysed against the system requirements: a hardware trigger, i.e. an external trigger using a microcontroller, and a software trigger using the API of the digital cameras. Experiments were conducted by testing the different trigger modes with the developed multi-camera software. The conclusions show that the hardware trigger is preferable in this particular system, as it showed more stability and better statistics against the system requirements than the software trigger. However, the thesis also shows that additional experiments are needed for a more accurate analysis.
APA, Harvard, Vancouver, ISO, and other styles
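The two quantities the thesis in entry 1 evaluates — frame-rate jitter and inter-camera synchronization error — can be computed directly from recorded frame timestamps. A minimal sketch (function name and data layout are assumptions, not the thesis code):

```python
import numpy as np

def sync_stats(timestamps, target_fps):
    """Per-camera frame-interval jitter and worst-case inter-camera skew.

    Illustrative sketch. timestamps: dict camera_name -> 1-D array of
    frame times in seconds, one entry per corresponding frame.
    Returns (jitter, skew): jitter is the standard deviation of the
    frame intervals per camera; skew is the largest timestamp spread
    across cameras for any single frame index.
    """
    period = 1.0 / target_fps
    jitter = {name: float(np.std(np.diff(ts) - period))
              for name, ts in timestamps.items()}
    stacked = np.vstack(list(timestamps.values()))
    skew = float(np.max(stacked.max(axis=0) - stacked.min(axis=0)))
    return jitter, skew
```

A hardware trigger would be expected to drive both numbers toward zero, while a software trigger inherits the scheduling latency of the host.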
2

Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems." The Australian National University. Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Full text
Abstract:
The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. The use of many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research had been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed, modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using our proposed linear method. Third, we present a linear 6-point method (5 points from the same camera and 1 point from another camera) that estimates the relative motion of multi-camera systems where cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast searching method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an earlier stage, we reduced the computation time of solving the LP.

We tested our proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution under the L∞ norm for estimating the relative motion of multi-camera systems could be achieved.
APA, Harvard, Vancouver, ISO, and other styles
3

Mortensen, Daniel T. "Foreground Removal in a Multi-Camera System." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7669.

Full text
Abstract:
Traditionally, whiteboards have been used to brainstorm, teach, and convey ideas to others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed which can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Related research has previously been performed on combining multiple images together, identifying and removing unrelated objects (also referred to as foreground) in a single image, and correcting for warping differences between camera frames. However, this is the first time anyone has attempted to solve this problem using a multi-camera system. The main components of this problem include stitching the input images together, identifying foreground material, and replacing the foreground information with the most recent background (desired) information. The problem can be subdivided into two main components: fusing multiple images into one cohesive frame, and detecting/removing foreground objects. For the first component, homographic transformations are used to create a mathematical mapping from the input image to the desired reference frame, and blending techniques are then applied to remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling in conjunction with additional classification algorithms were used.
APA, Harvard, Vancouver, ISO, and other styles
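The "replace foreground with the most recent background" step described in entry 3 can be sketched as a masked update of a stored background image (a simplified illustration, not the thesis implementation; the real system also models the foreground statistically):

```python
import numpy as np

def update_background(background, frame, foreground_mask):
    """Keep the most recent whiteboard (background) content.

    Illustrative sketch. Where foreground_mask is True (a person, an
    arm), the previously stored background pixel is retained; elsewhere
    the new frame is accepted as the current whiteboard state.

    background, frame: (H, W, 3) arrays; foreground_mask: (H, W) bool.
    """
    return np.where(foreground_mask[..., None], background, frame)
```

Run per stitched frame, this keeps the broadcast view showing only the writing surface, with occluders replaced by the last unobstructed content.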
4

Zhou, Han, and 周晗. "Intelligent video surveillance in a calibrated multi-camera system." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B45989217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Åkesson, Ulrik. "Design of a multi-camera system for object identification, localisation, and visual servoing." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.

Full text
Abstract:
In this thesis, the development of a stereo camera system for an intelligent tool is presented. The task of the system is to identify and localise objects so that the tool can guide a robot. Different approaches to object detection have been implemented and evaluated, and the system's ability to localise objects has been tested. The results show that the system can achieve a localisation accuracy below 5 mm.
APA, Harvard, Vancouver, ISO, and other styles
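The millimetre-level localisation claim in entry 5 rests on stereo triangulation. For a rectified pair, depth follows from the standard relation Z = f·B/d; a minimal sketch (not the thesis code):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.

    Illustrative sketch. focal_px: focal length in pixels,
    baseline_m: camera separation in metres, disparity_px: horizontal
    pixel offset of the same point between the two views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Since the depth error grows as ΔZ ≈ Z²·Δd/(f·B), sub-5 mm accuracy constrains how far from the cameras the objects can be for a given baseline and resolution.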
6

Turesson, Eric. "Multi-camera Computer Vision for Object Tracking: A comparative study." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.

Full text
Abstract:
Background: Video surveillance is a growing area that can help deter crime, support investigations, and gather statistics. These are just some areas where video surveillance can aid society. However, its efficiency could be increased by introducing tracking, more specifically tracking between cameras in a network. Automating this process could reduce the need for humans to monitor and review footage, since the system can track and inform the relevant people on its own. This has a wide array of application areas, such as forensic investigation, crime alerting, and tracking down people who have disappeared. Objectives: We want to investigate the common setup of real-time multi-target multi-camera tracking (MTMCT) systems, how the components in an MTMCT system affect each other and the complete system, and how image enhancement affects the MTMCT. Methods: To achieve our objectives, we conducted a systematic literature review to gather information. Using this information, we implemented an MTMCT system in which we evaluated the components to see how they interact in the complete system. Lastly, we implemented two image enhancement techniques to see how they affect the MTMCT. Results: As we discovered, MTMCT is most often constructed using detection for discovering objects, tracking to follow the objects within a single camera, and a re-identification method to ensure that objects across cameras have the same ID. The different components have a considerable effect on each other and can sabotage or improve one another; for example, the quality of the bounding boxes affects the data that re-identification can extract. We found that the image enhancement we used did not introduce any significant improvement. Conclusions: The most common structure for MTMCT is detection, tracking, and re-identification. From our findings, we can see that all the components affect each other, but re-identification is the one most affected by the other components and by image enhancement. The two tested image enhancement techniques could not introduce enough improvement, but other image enhancement methods could be used to make the MTMCT perform better. The MTMCT system we constructed did not manage to reach real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
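The re-identification stage described in entry 6 — assigning the same ID to an object seen by different cameras — is typically done by comparing appearance feature vectors. A minimal sketch of such a matcher (names and the threshold value are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def reidentify(gallery, query, threshold=0.7):
    """Match a query appearance feature against known track identities.

    Illustrative sketch. gallery: dict track_id -> L2-normalised
    feature vector; query: L2-normalised feature vector of a new
    detection. Returns the best-matching track id by cosine
    similarity, or None if no similarity exceeds the threshold
    (i.e. the detection starts a new identity).
    """
    best_id, best_sim = None, threshold
    for tid, feat in gallery.items():
        sim = float(np.dot(feat, query))  # cosine similarity for unit vectors
        if sim > best_sim:
            best_id, best_sim = tid, sim
    return best_id
```

As the abstract notes, the quality of the detector's bounding boxes directly limits how discriminative these features can be.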
7

Bachnak, Rafic A. "Development of a stereo-based multi-camera system for 3-D vision." Ohio : Ohio University, 1989. http://www.ohiolink.edu/etd/view.cgi?ohiou1172005477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Becklinger, Nicole Lynn. "Design and test of a multi-camera based orthorectified airborne imaging system." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/461.

Full text
Abstract:
Airborne imaging platforms have been applied to such diverse areas as surveillance, natural disaster monitoring, cartography, and environmental research. However, airborne imaging data can be expensive, out of date, or difficult to interpret. This work introduces an Orthorectified Airborne Imaging (OAI) system designed to provide near-real-time images in Google Earth. The OAI system consists of a six-camera airborne image collection system and a ground-based image processing system. Images and position data are transmitted from the air to the ground station using a point-to-point (PTP) data link antenna connection. Upon reaching the ground station, image processing software combines the six individual images into a larger stitched image. Stitched images are processed to remove distortions and then rotated so that north points up (orthorectification). Because the OAI images are very large, they must be broken down into a series of progressively higher-resolution tiles called an image pyramid before being loaded into Google Earth. A KML programming technique called a super overlay is used to load the image pyramid into Google Earth. A program and graphical user interface created in C# generate the KML super overlay files according to user specifications. Image resolution and the location of the area being imaged relative to the aircraft are functions of altitude and the position of the imaging cameras. Placement of OAI images in Google Earth allows the user to take advantage of the placemarks, street names, and navigation features native to the Google Earth environment.
APA, Harvard, Vancouver, ISO, and other styles
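The image pyramid behind the super overlay in entry 8 halves the image at each level until the coarsest level fits one tile. A small sketch of the bookkeeping (tile size and function names are illustrative assumptions, not the thesis code):

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of pyramid levels until the image fits in one tile.

    Illustrative sketch: each level halves both dimensions, as in a
    typical super-overlay tiling scheme.
    """
    levels = 1
    while width > tile or height > tile:
        width = math.ceil(width / 2)
        height = math.ceil(height / 2)
        levels += 1
    return levels

def tiles_at_level(width, height, level, tile=256):
    """Tile grid (columns, rows) at a given level; level 0 is full resolution."""
    scale = 2 ** level
    w = math.ceil(width / scale)
    h = math.ceil(height / scale)
    return math.ceil(w / tile), math.ceil(h / tile)
```

Google Earth then loads only the tiles whose level matches the current zoom, which is what keeps arbitrarily large stitched images responsive.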
9

Beriault, Silvain. "Multi-camera system design, calibration and three-dimensional reconstruction for markerless motion capture." Thesis, University of Ottawa (Canada), 2008. http://hdl.handle.net/10393/27957.

Full text
Abstract:
Recently, significant advances have been made in many sub-areas regarding the problem of markerless human motion capture. However, markerless solutions still tend to introduce major simplifications, especially in early stages of the process, that temper the robustness and the generality of any subsequent modules and, consequently, of the whole application. This thesis concentrates on improving the aspects of multi-camera system design, multi-camera calibration and shape-from-silhouette volumetric reconstruction. In Chapter 3, a thoughtful system analysis is first proposed with the objective of achieving an optimal synchronized multi-camera system. Chapter 4 proposes an easy-to-use multi-camera calibration technique to estimate the relative positioning and orientation of every camera with sub-pixel accuracy. In Chapter 5 a robust shape-from-silhouette algorithm, with precise voxel coloring, is developed. Overall, the proposed framework is successful to reconstruct various 3D human postures and, in particular, complex and self-occlusive pianist postures in real-world (minimally constrained) scenes.
APA, Harvard, Vancouver, ISO, and other styles
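The shape-from-silhouette reconstruction in entry 9 keeps exactly the voxels whose projections fall inside every camera's silhouette. A compact numpy sketch of that test (a simplified illustration, not the thesis algorithm, which also performs voxel coloring):

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Shape-from-silhouette: retain voxels seen as foreground in all views.

    Illustrative sketch. voxels: (N, 3) candidate voxel centres;
    cameras: list of 3x4 projection matrices; silhouettes: list of
    2-D boolean masks, one per camera. A voxel survives only if its
    projection lands inside every silhouette.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        p = homog @ P.T                      # project all voxels at once
        u = np.round(p[:, 0] / p[:, 2]).astype(int)
        v = np.round(p[:, 1] / p[:, 2]).astype(int)
        inside = (0 <= u) & (u < sil.shape[1]) & (0 <= v) & (v < sil.shape[0])
        keep &= inside                        # off-image voxels are carved away
        keep[inside] &= sil[v[inside], u[inside]]
    return voxels[keep]
```

The result is the visual hull, an outer bound on the true shape that tightens as cameras are added, which is why the thesis begins with multi-camera placement and calibration.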
10

Santos de Freitas, Rafael Luiz. "Multi-Camera Surveillance System for Time and Motion Studies of Timber Harvesting Operations." UKnowledge, 2019. https://uknowledge.uky.edu/forestry_etds/48.

Full text
Abstract:
Timber harvesting is an important activity in the state of Kentucky; however, there is still a lack of information about the procedure used by the local loggers. The stump to landing transport of logs with skidders is often the most expensive and time-consuming task in timber harvesting operations. This thesis evaluated the feasibility of using a multi-camera system for time and motion studies of timber harvesting operations. It was installed in 5 skidders in 3 different harvesting sites in Kentucky. The time stamped video provided accurate time consumption data for each work phase of the skidders, which was used to fit linear regressions and find the influence of skidding distance, skid-trail gradient, and load size on skidding time. The multi-camera systems were found to be a reliable tool for time and motion studies in timber harvesting sites. Six different time equations and two speed equations were fitted for skidding cycles and sections of skid-trails, for skidders that are both loaded and unloaded. Skid-trail gradient and load size did not have an influence on skidding time. There is a need for future studies of different variables that could affect skidding time and, consequently, cost.
APA, Harvard, Vancouver, ISO, and other styles
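The time equations in entry 10 are linear regressions of phase duration on predictors such as skidding distance. The fit can be sketched in a few lines of numpy (variable names and the sample values are illustrative, not the thesis data):

```python
import numpy as np

def fit_skidding_time(distance_m, time_s):
    """Least-squares line time = a * distance + b.

    Illustrative sketch of fitting one of the time equations from the
    camera-timestamped work-phase durations.
    """
    a, b = np.polyfit(distance_m, time_s, 1)
    return a, b

def predict(a, b, distance_m):
    """Predicted skidding time for a given distance."""
    return a * distance_m + b
```

The abstract's finding that gradient and load size had no influence corresponds to those predictors' coefficients being statistically indistinguishable from zero in such fits.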

Books on the topic "Multi-Camera System"

1

HST Calibration Workshop (4th 2002 Baltimore, Md.). The 2002 HST calibration workshop: Hubble after the installation of the ACS and the NICMOS cooling system : proceedings of a workshop held at the Space Telescope Science Institute, Baltimore, Maryland, October 17 and 18, 2002. Baltimore, MD: Published and distributed by the Space Telescope Science Institute, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Knorr, Moritz. Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing. Saint Philip Street Press, 2020.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cerqueira, Manuel D. Gated SPECT MPI. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0006.

Full text
Abstract:
Protocols for SPECT MPI have evolved over the last 40 years based on the following factors: available radiotracers and gamma camera imaging systems, alternative methods of stress, the needs and demands of patients and referring physicians, the need for radiation dose reduction, and optimization of laboratory efficiency. Initially, studies were performed using dynamic exercise planar multi-day Thallium-201 (Tl-201) studies. Pharmacologic stress agents were not available, and novel methods of stress included swallowed esophageal pacing leads, cold pressor limb immersion, direct atrial pacing, crushed dipyridamole tablets, and even the use of intravenous ergonovine maleate. Eventually, intravenous dobutamine, dipyridamole, adenosine, and regadenoson became available to allow reliable and safe pharmacologic stress for patients unable to exercise. Tomographic SPECT camera systems replaced planar units, and Tc-99m agents offered better imaging characteristics than Tl-201. These gamma camera systems, radiopharmaceutical agents, and pharmacologic stress agents were all available by the mid-1990s and still represent the majority of MPI being performed today.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Multi-Camera System"

1

Javed, Omar, and Mubarak Shah. "Knight Surveillance System Deployment." In Automated Multi-Camera Surveillance: Algorithms and Practice, 1–5. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78881-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Qi, Xinlei, Yaqing Ding, Jin Xie, and Jian Yang. "Planar Motion Estimation for Multi-camera System." In Lecture Notes in Computer Science, 116–29. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-02375-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gillich, Eugen, Jan-Friedrich Ehlenbröker, Jan Leif Hoffmann, and Uwe Mönks. "A Low-Cost Multi-Camera System With Multi-Spectral Illumination." In Technologien für die intelligente Automation, 302–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2020. http://dx.doi.org/10.1007/978-3-662-59895-5_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Alidoost, Fatemeh, Gerrit Austen, and Michael Hahn. "A Multi-camera Mobile System for Tunnel Inspection." In iCity. Transformative Research for the Livable, Intelligent, and Sustainable City, 211–24. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92096-8_13.

Full text
Abstract:
AbstractThe safety, proper maintenance, and renovation of tunnel structures have become a critical problem for urban management in view of the aging of tunnels. Tunnel inspection and inventory are regulated by construction laws and must be carried out at regular intervals. Advances in digitalization and machine vision technologies enable the development of an automated and BIM-based system to collect data from tunnel surfaces. In this study, a tunnel inspection system using vision-based systems and the related principles are introduced to measure the tunnel surfaces efficiently. In addition, the main components and requirements for subsystems are presented, and different challenges in data acquisition and point cloud generation are explained based on investigations during initial experiments.
APA, Harvard, Vancouver, ISO, and other styles
5

Marcinkowski, Piotr, Adam Korzeniewski, and Andrzej Czyżewski. "Human Tracking in Multi-camera Visual Surveillance System." In Communications in Computer and Information Science, 277–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21512-4_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mliki, Hazar, Mariem Naffeti, and Emna Fendri. "Bimodal Person Re-identification in Multi-camera System." In Advanced Concepts for Intelligent Vision Systems, 554–65. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70353-4_47.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dockstader, Shiloh L., and A. Murat Tekalp. "Biometric Feature Extraction in a Multi-Camera Surveillance System." In Multisensor Surveillance Systems, 219–34. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0371-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Collins, Robert T., Omead Amidi, and Takeo Kanade. "Acquiring Multi-View Video with an Active Camera System." In Multisensor Surveillance Systems, 135–47. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0371-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Han, Song, Xiaojing Gu, and Xingsheng Gu. "An Accurate Calibration Method of a Multi Camera System." In Communications in Computer and Information Science, 491–501. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-6370-1_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Manolova, Agata, Stanislav Panev, and Krasimir Tonchev. "Human Gaze Tracking With An Active Multi-Camera System." In Biometric Authentication, 176–88. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13386-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multi-Camera System"

1

Salman, Bakhita, Mohammed I. Thanoon, Saleh Zein-Sabatto, and Fenghui Yao. "Multi-camera Smart Surveillance System." In 2017 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2017. http://dx.doi.org/10.1109/csci.2017.78.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Napoletano, Paolo, and Francesco Tisato. "An attentive multi-camera system." In IS&T/SPIE Electronic Imaging, edited by Kurt S. Niel and Philip R. Bingham. SPIE, 2014. http://dx.doi.org/10.1117/12.2042652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Behera, Reena Kumari, Pallavi Kharade, Suresh Yerva, Pranali Dhane, Ankita Jain, and Krishnan Kutty. "Multi-camera based surveillance system." In 2012 World Congress on Information and Communication Technologies (WICT). IEEE, 2012. http://dx.doi.org/10.1109/wict.2012.6409058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yang, Feng, Zhao Liming, Zhang Yi, and Kuang Hengyang. "Multi-camera System Depth Estimation." In 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC). IEEE, 2022. http://dx.doi.org/10.1109/itoec53115.2022.9734714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hussain, Muddsser, Rong Xie, Liang Zhang, Mehmood Nawaz, and Malik Asfandyar. "Multi-target tracking identification system under multi-camera surveillance system." In 2016 International Conference on Progress in Informatics and Computing (PIC). IEEE, 2016. http://dx.doi.org/10.1109/pic.2016.7949516.

6

Dias, João, and Pedro Mendes Jorge. "People tracking with multi-camera system." In ICDSC '15: International Conference on Distributed Smart Cameras. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2789116.2789141.

7

Specker, Andreas, Daniel Stadler, Lucas Florin, and Jurgen Beyerer. "An Occlusion-aware Multi-target Multi-camera Tracking System." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2021. http://dx.doi.org/10.1109/cvprw53098.2021.00471.

8

Dong, Chenwei, Junlei Zhou, Weipeng Wen, and Si Chen. "Deep Learning Based Multi-Target Multi-Camera Tracking System." In ICCAI '22: 2022 8th International Conference on Computing and Artificial Intelligence. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3532213.3532276.

9

Szaloki, David, Norbert Koszo, Kristof Csorba, and Gabor Tevesz. "Marker localization with a multi-camera system." In 2013 IEEE International Conference on System Science and Engineering (ICSSE). IEEE, 2013. http://dx.doi.org/10.1109/icsse.2013.6614647.

10

Razalli, Husniza, Mohammed Hazim Alkawaz, and Aizat Syazwan Suhemi. "Smart IOT Surveillance Multi-Camera Monitoring System." In 2019 IEEE 7th Conference on Systems, Process and Control (ICSPC). IEEE, 2019. http://dx.doi.org/10.1109/icspc47137.2019.9067984.


Reports on the topic "Multi-Camera System"

1

Frankel, Martin, and Jon A. Webb. Design, Implementation, and Performance of a Scalable Multi-Camera Interactive Video Capture System,. Fort Belvoir, VA: Defense Technical Information Center, June 1995. http://dx.doi.org/10.21236/ada303255.

2

Davis, Tim, Frank Lang, Joe Sinneger, Paul Stabile, and John Tower. Multi-Band Infrared Camera Systems. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada294028.

3

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chicksexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Abstract:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included development of the robust chick handling and conveyor system, optical system improvement, online dynamic motion imaging of chicks, multi-image-sequence optimal feather extraction and detection, and pattern recognition. Mechanical System Engineering: The third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design spec of 3 chicks per second. In the center of the conveyor, a high-speed camera with a UV-sensitive optical system, shown in Fig. 2, was installed; it captures multiple frames of each chick (45 images, system selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO created a joint hardware and software protocol that uses sequential images of the chick in its fall motion to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction in dynamic motion and pattern recognition. Improving Chick Wing Deployment: The mechanical system for chick conveying, and especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated during the third study year. As a natural behavior, chicks tend to deploy their wings as a means of balancing their body when a sudden change in vertical movement is applied. In the previous two years, this was achieved by causing the chicks to move in free fall, under earth gravity (g), along a short vertical distance.
The chicks always tended to deploy their wings, but not always in a wide, horizontally open position. Such a position is required in order to obtain a successful image under the video camera. Moreover, the cells carrying the chicks stopped abruptly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the image of the wings to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a pre-designed variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down at a variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) experienced without the block. The tests showed better deployment and wider wing opening, as well as better balance along the movement. The design of additional block slopes is under investigation; slopes that create decelerations of 0.7g, 0.9g, and variable decelerations are being designed to further improve the movement path and the images.
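The braking geometry described in this abstract follows from basic constant-deceleration kinematics. A minimal sketch, assuming an illustrative drop height (the 0.10 m value is hypothetical, not taken from the report; only the 0.8g deceleration figure is):

```python
import math

G = 9.81  # standard gravity, m/s^2

def stop_distance(drop_height_m: float, decel_g: float) -> float:
    """Distance needed to bring a falling cell to rest at a constant
    deceleration of decel_g * g after a free fall of drop_height_m.
    From v^2 = 2*g*h and 0 = v^2 - 2*(decel_g*g)*d, d = h / decel_g."""
    v = math.sqrt(2 * G * drop_height_m)  # speed at the end of the free fall
    return v ** 2 / (2 * decel_g * G)     # braking distance along the ramp

# hypothetical 0.10 m drop, braked at 0.8 g as in the report
print(round(stop_distance(0.10, 0.8), 3))  # 0.125
```

Note that the braking distance depends only on the ratio h / (decel_g), so gentler decelerations (0.7g) need proportionally longer ramps than harder ones (0.9g).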
4

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Abstract:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, biomass, etc., are obtained by manual sampling of the field in a grid pattern. Maps of various field attributes are then prepared from these samples by the "Inverse Distance" interpolation method or by Kriging. An improved interpolation method was developed which is based on minimizing the overall curvature of the resulting map. Such maps are the ground-truth reference used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "Prototype Plots", e.g. 15x15 blocks of 2 m pixels, whereby the block size is 30x30 m. This averaging reduces the datasets to a manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) for generating predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands, generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y = f(X) was used, wherein Y was the reference map and X = NDVI was the predictor. The FFT regression method applies the "Wavelet Based", "Pixel Block" and "Image Rotation" transforms to the reference and remote images, prior to the Fast Fourier Transform (FFT) regression method with the "Phase Lock" option.
A complex-domain-based map Yfft is derived by least-squares minimization between the amplitude matrices of X and Y, via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft, respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods which "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor repeatability problem we had in multi-spectral imaging was exacerbated in hyperspectral imaging. We were able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube. Thereby, it is possible to select the bands of higher repeatability for inclusion in the prediction model while bands of low repeatability are excluded. Further segregation of high- and low-repeatability bands takes place in the prediction model algorithm, which is based on a combination of a "Genetic Algorithm" and "Partial Least Squares" (PLS-GA).
In summary, a modus operandi was developed for deriving important plant growth attribute maps (yield, leaf nitrate, biomass, and sugar percentage in beets) from remote sensing imagery, with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies which specialize in providing remotely sensed data to farmers.
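The NDVI used as the predictor in this abstract has a standard definition, (NIR - Red) / (NIR + Red). A minimal NumPy sketch, with toy reflectance values that are illustrative only and not drawn from the report:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red).
    eps guards against division by zero on dark (near-zero) pixels."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# toy 2x2 reflectance bands for the near-infrared and red channels
nir = np.array([[0.6, 0.5], [0.4, 0.3]])
red = np.array([[0.2, 0.1], [0.2, 0.3]])
print(np.round(ndvi(nir, red), 3))
```

NDVI ranges from -1 to 1, with dense green vegetation typically well above 0.3; in the report's pipeline such per-pixel values would then be averaged into the 30x30 m "Prototype Plot" blocks before regression.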
5

Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, September 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents, and a death toll of nearly 800, in 2017. The objective of this research work is to provide a working prototype of an Advanced Driver Assistance System that can be installed in present-day vehicles. By integrating two sensing modes to examine biometric expressions of drowsiness, a camera and a micro-Doppler radar sensor, our system achieves high reliability, with over 95% accuracy in its drowsy driver detection capabilities. The camera is used to monitor the driver's eyes, mouth, and head movement and to recognize when a discrepancy occurs in the driver's blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver's head movement to be captured both during the day and at night. Through data fusion and deep learning, the ability to quickly analyze and classify a driver's behavior under various conditions such as lighting, pose variation, and facial expression in a real-time monitoring system is achieved.
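The camera-side blink monitoring described here is commonly implemented with the eye aspect ratio (EAR) over facial landmarks. This is a minimal sketch of that generic technique, not the report's actual implementation; the landmark ordering, threshold, and frame counts are illustrative assumptions:

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio over six (x, y) eye landmarks ordered as in the
    common 68-point face model: EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|).
    EAR drops toward 0 as the eye closes."""
    d = math.dist
    p1, p2, p3, p4, p5, p6 = eye
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

def drowsy_frames(ear_series, threshold=0.2, min_consec=15):
    """Flag drowsiness when EAR stays below threshold for min_consec
    consecutive frames (roughly 0.5 s at 30 fps with these defaults)."""
    run = 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run >= min_consec:
            return True
    return False

# toy open-eye landmarks: horizontal span 4, vertical spans 2 -> EAR = 0.5
open_eye = [(0, 0), (1, 1), (3, 1), (4, 0), (3, -1), (1, -1)]
print(round(eye_aspect_ratio(open_eye), 2))  # 0.5
```

In a fused system like the one described, such a per-frame cue would be only one input stream, combined with yawning, head-drop, and radar-derived head-motion features before classification.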
6

Warrick, Arthur W., Gideon Oron, Mary M. Poulton, Rony Wallach, and Alex Furman. Multi-Dimensional Infiltration and Distribution of Water of Different Qualities and Solutes Related Through Artificial Neural Networks. United States Department of Agriculture, January 2009. http://dx.doi.org/10.32747/2009.7695865.bard.

Abstract:
The project exploits the use of Artificial Neural Networks (ANN) to describe infiltration and water and solute distribution in the soil during irrigation. It provides a method of simulating water and solute movement in the subsurface which, in principle, is different from and has some advantages over the more common approach of numerically modeling the flow and transport equations. The five objectives were (i) numerically develop a database for the prediction of water and solute distribution for irrigation; (ii) develop predictive models using ANN; (iii) develop an experimental (laboratory) database of water distribution over time within a transparent flow cell, using a high-resolution CCD video camera; (iv) conduct field studies to provide basic data for developing and testing the ANN; and (v) investigate the inclusion of water quality [salinity and organic matter (OM)] in an ANN model used for predicting infiltration and subsurface water distribution. A major accomplishment was the successful use of Moment Analysis (MA) to characterize "plumes of water" applied by various types of irrigation (including drip and gravity sources). The general idea is to describe the subsurface water patterns statistically in terms of only a few (often 3) parameters which can then be predicted by the ANN. It was shown that ellipses (in two dimensions) or ellipsoids (in three dimensions) can be depicted about the center of the plume. Any fraction of water added can be related to a "probability" curve relating the size of the ellipse (or ellipsoid) that contains that amount of water. The initial test of an ANN to predict the moments (and hence the water plume) was with numerically generated data for infiltration from surface and subsurface drip line and point sources in three contrasting soils.
The underlying dataset consisted of 1,684,500 vectors (5 soils x 5 discharge rates x 3 initial conditions x 1,123 nodes x 20 print times), where each vector had eleven elements consisting of initial water content, hydraulic properties of the soil, flow rate, time, and space coordinates. The output is an estimate of subsurface water distribution for essentially any soil property, initial condition, or flow rate from a drip source. Following the formal development of the ANN, we prepared a "user-friendly" version in a spreadsheet environment (in "Excel"). The input data are selected from appropriate values and the output is instantaneous, resulting in a picture of the resulting water plume. The MA has also proven valuable, on its own merit, in the description of flow in soil under laboratory conditions for both wettable and repellent soils. This includes non-Darcian flow examples and redistribution as well as infiltration. Field experiments were conducted in different agricultural fields with various water qualities in Israel. The obtained results will be the basis for further ANN model development. Regions of high repellence were identified primarily under the canopy of various orchard crops, including citrus and persimmons. Also, increasing OM in the applied water led to greater repellency. Major scientific implications are that the ANN offers an alternative to conventional flow and transport modeling and that MA is a powerful technique for describing subsurface water distributions for normal (wettable) and repellent soil. Implications of the field measurements point to the special role of OM in affecting wettability, both from the irrigation water and from soil accumulation below canopies. Implications for agriculture are that a modified approach to drip system design should be adopted for open-area crops and orchards, taking into account the OM components both in the soil and in the applied waters.
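The Moment Analysis described in this abstract reduces a plume to a few low-order spatial moments. A minimal NumPy sketch of the generic 2-D computation, with a synthetic symmetric plume standing in for measured data (the grid and field values are illustrative, not from the report); the eigenvectors and eigenvalues of the returned covariance matrix give the axes of the descriptive ellipse:

```python
import numpy as np

def plume_moments(theta, x, y):
    """Centroid (first moments) and covariance matrix (second central
    moments) of a 2-D water-content field theta(y, x) sampled on the
    coordinate vectors x and y."""
    X, Y = np.meshgrid(x, y)
    m = theta.sum()                                  # zeroth moment (total mass)
    cx = (theta * X).sum() / m                       # centroid x
    cy = (theta * Y).sum() / m                       # centroid y
    cov = np.array([
        [(theta * (X - cx) ** 2).sum(),        (theta * (X - cx) * (Y - cy)).sum()],
        [(theta * (X - cx) * (Y - cy)).sum(),  (theta * (Y - cy) ** 2).sum()],
    ]) / m
    return (cx, cy), cov

# synthetic symmetric plume centered on the grid origin
x = y = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(x, y)
theta = np.exp(-(X ** 2 + Y ** 2) / 0.1)
(cx, cy), cov = plume_moments(theta, x, y)
```

For this symmetric test field the centroid falls at the origin and the covariance is isotropic, so the ellipse degenerates to a circle; an asymmetric measured plume would yield a tilted ellipse, and scaling it captures any chosen fraction of the applied water, as in the "probability curve" idea above.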
