Selection of scientific literature on the topic "Système de multi-Camera"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Système de multi-Camera".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the corresponding parameters are available in its metadata.

Journal articles on the topic "Système de multi-Camera"

1. Guo Peiyao, 郭珮瑶, 蒲志远 Pu Zhiyuan, and 马展 Ma Zhan. "多相机系统:成像增强及应用". Laser & Optoelectronics Progress 58, no. 18 (2021): 1811013. http://dx.doi.org/10.3788/lop202158.1811013.
2. Xiao Yifan, 肖一帆, and 胡伟 Hu Wei. "基于多相机系统的高精度标定". Laser & Optoelectronics Progress 60, no. 20 (2023): 2015003. http://dx.doi.org/10.3788/lop222787.
3. Zhao Yanfang, 赵艳芳, 孙鹏 Sun Peng, 董明利 Dong Mingli, 刘其林 Liu Qilin, 燕必希 Yan Bixi, and 王君 Wang Jun. "多相机视觉测量系统在轨自主定向方法". Laser & Optoelectronics Progress 61, no. 10 (2024): 1011003. http://dx.doi.org/10.3788/lop231907.
4. Ren Guoyin, 任国印, 吕晓琪 Xiaoqi Lü, and 李宇豪 Li Yuhao. "多摄像机视场下基于一种DTN的多人脸实时跟踪系统". Laser & Optoelectronics Progress 59, no. 2 (2022): 0210004. http://dx.doi.org/10.3788/lop202259.0210004.
5. Kulathunga, Geesara, Aleksandr Buyval, and Aleksandr Klimchik. "Multi-Camera Fusion in Apollo Software Distribution". IFAC-PapersOnLine 52, no. 8 (2019): 49–54. http://dx.doi.org/10.1016/j.ifacol.2019.08.047.
6. Mehta, S. S., and T. F. Burks. "Multi-camera Fruit Localization in Robotic Harvesting". IFAC-PapersOnLine 49, no. 16 (2016): 90–95. http://dx.doi.org/10.1016/j.ifacol.2016.10.017.
7. Kennady, R., et al. "A Nonoverlapping Vision Field Multi-Camera Network for Tracking Human Build Targets". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 3 (March 31, 2023): 366–69. http://dx.doi.org/10.17762/ijritcc.v11i3.9871.

Abstract: This research presents a procedure for tracking human build targets in a multi-camera network with nonoverlapping vision fields. The proposed approach consists of three main steps: single-camera target detection, single-camera target tracking, and multi-camera target association and continuous tracking. The multi-camera target association includes target characteristic extraction and the establishment of topological relations. Target characteristics are extracted based on the HSV (Hue, Saturation, and Value) values of each human build movement target, and the space-time topological relations of the multi-camera network are established using the obtained target associations. This procedure enables the continuous tracking of human build movement targets in large scenes, overcoming the limitations of monitoring within the narrow field of view of a single camera.
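The appearance-matching step this abstract describes — associating targets across non-overlapping cameras by their HSV characteristics — can be sketched roughly as follows. This is a minimal illustration only, not the authors' implementation; the 16-bin hue histogram, the greedy matching, and the 0.5 similarity threshold are all assumptions:

```python
def hue_histogram(pixels, bins=16):
    """Normalised histogram over the hue channel of (h, s, v) pixels, h in [0, 360)."""
    hist = [0.0] * bins
    for h, s, v in pixels:
        hist[int(h / 360.0 * bins) % bins] += 1.0
    total = sum(hist) or 1.0
    return [c / total for c in hist]

def histogram_intersection(p, q):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(p, q))

def associate(targets_a, targets_b, threshold=0.5):
    """Greedy cross-camera association: match each target seen in camera A
    to the most similar target seen in camera B, if above the threshold."""
    matches = {}
    for ida, ha in targets_a.items():
        best_id, best_sim = None, threshold
        for idb, hb in targets_b.items():
            sim = histogram_intersection(ha, hb)
            if sim > best_sim:
                best_id, best_sim = idb, sim
        matches[ida] = best_id
    return matches
```

In a full system these matches, accumulated over time, are what establish the space-time topological relations between cameras that the paper mentions.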
8. Guler, Puren, Deniz Emeksiz, Alptekin Temizel, Mustafa Teke, and Tugba Taskaya Temizel. "Real-time multi-camera video analytics system on GPU". Journal of Real-Time Image Processing 11, no. 3 (March 27, 2013): 457–72. http://dx.doi.org/10.1007/s11554-013-0337-2.
9. Huang, Sunan, Rodney Swee Huat Teo, and William Wai Lun Leong. "Multi-Camera Networks for Coverage Control of Drones". Drones 6, no. 3 (March 3, 2022): 67. http://dx.doi.org/10.3390/drones6030067.

Abstract: Multiple unmanned multirotor (MUM) systems are becoming a reality. They have a wide range of applications, such as surveillance, search and rescue, monitoring operations in hazardous environments, and providing communication coverage services. Currently, an important issue in MUM is coverage control. In this paper, an existing coverage control algorithm has been extended to incorporate a new sensor model, which is downward facing and allows pan-tilt-zoom (PTZ). Two new constraints, namely view angle and collision avoidance, have also been included. Mobile network coverage among the MUMs is studied. Finally, the proposed scheme is tested in computer simulations.
10. Wang, Liang. "Multi-Camera Calibration Based on 1D Calibration Object". Acta Automatica Sinica 33, no. 3 (2007): 0225. http://dx.doi.org/10.1360/aas-007-0225.

Theses on the topic "Système de multi-Camera"

1. Mennillo, Laurent. "Reconstruction 3D de l'environnement dynamique d'un véhicule à l'aide d'un système multi-caméras hétérogène en stéréo wide-baseline". Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC022/document.

Abstract: This Ph.D. thesis, which was carried out in the automotive industry in association with Renault Group, mainly focuses on the development of advanced driver-assistance systems and autonomous vehicles. The progress made by the scientific community during the last decades in the fields of computer science and robotics has been so important that it now enables the implementation of complex embedded systems in vehicles. These systems, primarily designed to provide assistance in simple driving scenarios and emergencies, now aim to offer fully autonomous transport. Multibody SLAM methods currently used in autonomous vehicles often rely on high-performance and expensive onboard sensors such as LIDAR systems. Digital video cameras, on the other hand, are much cheaper, which has led to their increased use in newer vehicles to provide driving-assistance functions such as parking assistance or emergency braking. Furthermore, this relatively common implementation now makes it possible to consider their use for reconstructing the dynamic environment surrounding a vehicle in three dimensions. From a scientific point of view, existing multibody visual SLAM techniques can be divided into two categories of methods. The first and older category concerns stereo methods, which use several cameras with overlapping fields of view in order to reconstruct the observed dynamic scene. Most of these methods use identical stereo pairs with a short baseline, which allows dense matching of feature points to estimate disparity maps that are then used to compute the motions in the scene. The other category concerns monocular methods, which use only one camera during the reconstruction process, meaning that they have to compensate for the ego-motion of the acquisition system in order to estimate the motion of the other objects. These methods are more difficult in that they have to address several additional problems, such as motion segmentation, which consists in clustering the initial data into separate subspaces representing the individual movement of each object, as well as estimating the relative scale of these objects before their aggregation within the static scene. The industrial motivation for this work lies in reusing the multi-camera systems already present in actual vehicles, mostly composed of a front camera accompanied by several surround fisheye cameras in wide-baseline stereo, which has led to the development of a multibody reconstruction method dedicated to such heterogeneous systems. The proposed method is incremental and allows the reconstruction of sparse mobile points as well as their trajectories using several geometric constraints. Finally, a quantitative and qualitative evaluation conducted on two separate datasets, one of which was developed during this thesis in order to present characteristics similar to existing multi-camera systems, is provided.
2. Petit, Benjamin. "Téléprésence, immersion et interactions pour la reconstruction 3D temps-réel". PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00584001.

Abstract: Online immersive and collaborative 3D environments are rapidly emerging. They raise the issues of the sense of presence in virtual worlds, of immersion, and of interaction capabilities. Multi-camera 3D systems make it possible to extract geometric information (a 3D model) about the observed scene from photometric information. A textured digital model can then be computed in real time and used to ensure the user's presence in the digital space. In this thesis we studied how to couple the presence capability provided by such a system with visual immersion and co-located interaction. This led to an application combining a head-mounted display, an optical tracking system, and a multi-camera system, so that users can see their own 3D model correctly aligned with their body and mixed with virtual objects. We also set up a telepresence experiment across three sites (Bordeaux, Grenoble, Orléans) that lets several users meet in 3D and collaborate remotely. The textured 3D model gives a very strong impression of the other person's presence and reinforces physical interaction through body language and facial expressions. Finally, we studied how to extract velocity information from the camera data: using optical flow and 2D/3D correspondences, we can estimate the dense motion of the 3D model. This data extends the interaction capabilities by enriching the 3D model.
3. Kim, Jae-Hak. "Camera Motion Estimation for Multi-Camera Systems". PhD thesis, The Australian National University, Research School of Information Sciences and Engineering, 2008. http://thesis.anu.edu.au./public/adt-ANU20081211.011120.

Abstract: The estimation of motion of multi-camera systems is one of the most important tasks in computer vision research. Recently, some issues have been raised about general camera models and multi-camera systems. Using many cameras as a single camera has been studied [60], and the epipolar geometry constraints of general camera models have been theoretically derived. Methods for calibration, including a self-calibration method for general camera models, have been studied [78, 62]. Multi-camera systems are an example of practically implementable general camera models, and they are widely used in many applications nowadays because of both the low cost of digital charge-coupled device (CCD) cameras and the high resolution of multiple images from wide fields of view. To our knowledge, no research has been conducted on the relative motion of multi-camera systems with non-overlapping views to obtain a geometrically optimal solution.

In this thesis, we solve the camera motion problem for multi-camera systems by using linear methods and convex optimization techniques, and we make five substantial and original contributions to the field of computer vision. First, we focus on the problem of translational motion of omnidirectional cameras, which are multi-camera systems, and present a constrained minimization method to obtain robust estimation results. Given known rotation, we show that bilinear and trilinear relations can be used to build a system of linear equations, and singular value decomposition (SVD) is used to solve the equations. Second, we present a linear method that estimates the relative motion of generalized cameras, in particular in the case of non-overlapping views. We also present four types of generalized cameras which can be solved using our proposed modified SVD method. This is the first study to find linear relations for certain types of generalized cameras and to perform experiments using the proposed linear method. Third, we present a linear six-point method (five points from the same camera and one point from another camera) that estimates the relative motion of multi-camera systems where the cameras have no overlapping views. In addition, we discuss the theoretical and geometric analyses of multi-camera systems as well as certain critical configurations where the scale of translation cannot be determined. Fourth, we develop a global solution under an L∞ norm error for the relative motion problem of multi-camera systems using second-order cone programming. Finally, we present a fast search method to obtain a global solution under an L∞ norm error for the relative motion problem of multi-camera systems with non-overlapping views, using a branch-and-bound algorithm and linear programming (LP). By testing the feasibility of the LP at an earlier stage, we reduce the computation time of solving the LP.

We tested the proposed methods in experiments with synthetic and real data. The Ladybug2 camera, for example, was used in the experiment on estimating the translation of omnidirectional cameras and in estimating the relative motion of non-overlapping multi-camera systems. These experiments showed that a global solution using L∞ to estimate the relative motion of multi-camera systems can be achieved.
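The SVD step this abstract refers to — solving a system of linear equations built from bilinear/trilinear motion constraints — reduces to a homogeneous least-squares problem. A generic sketch of that reduction (the toy constraint rows below are invented for illustration; the real rows come from the thesis's epipolar relations):

```python
import numpy as np

def nullspace_solution(A):
    """Least-squares solution of A x = 0 with ||x|| = 1: the right
    singular vector of A associated with its smallest singular value."""
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]

# Toy example: each row is a constraint orthogonal to the unknown
# translation direction t = (1, 2, 2)/3 (made-up ground truth).
A = np.array([[2.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [2.0, 0.0, -1.0]])
t_est = nullspace_solution(A)  # recovers ±(1, 2, 2)/3
```

The unit-norm constraint resolves the scale ambiguity inherent in homogeneous systems; the sign remains ambiguous and is normally fixed by a cheirality check.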
4. Kim, Jae-Hak. "Camera motion estimation for multi-camera systems". View thesis entry in Australian Digital Theses Program, 2008. http://thesis.anu.edu.au/public/adt-ANU20081211.011120/index.html.
5. Jiang, Xiaoyan. "Multi-Object Tracking-by-Detection Using Multi-Camera Systems". München: Verlag Dr. Hut, 2016. http://d-nb.info/1084385325/34.
6. Krucki, Kevin C. "Person Re-identification in Multi-Camera Surveillance Systems". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1448997579.
7. Hammarlund, Emil. "Target-less and targeted multi-camera color calibration". Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-33876.

Abstract: Multiple camera arrays are seeing more widespread use in a variety of applications, be it for research purposes or for enhancing the viewing experience in entertainment. However, when using multiple cameras, the images produced are often not color consistent, for a variety of reasons such as differences in lighting, chip-level differences, etc. To address this, a multitude of color calibration algorithms exist. This paper examines two color calibration algorithms, one targeted and one target-less. Both methods were implemented in Python using the libraries OpenCV, Matplotlib, and NumPy. Once the algorithms had been implemented, they were evaluated based on two metrics: color range homogeneity and color accuracy to target values. The targeted color calibration algorithm was more effective at improving the color accuracy to ground truth than the target-less algorithm, but the target-less algorithm deteriorated the color range homogeneity less than the targeted algorithm. After both methods were tested, an improvement of the targeted color calibration algorithm was attempted. The resulting images were then evaluated based on the same two criteria as before: the modified version of the targeted algorithm performed better than the original with respect to color range homogeneity while maintaining a similar level of color accuracy to ground truth. Furthermore, when the color range homogeneity of the modified targeted algorithm was compared with that of the target-less algorithm, the two performed similarly. Based on these results, it was concluded that the targeted color calibration was superior to the target-less algorithm.
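The targeted calibration idea described here — mapping measured colour-chart values onto known target values — is commonly posed as an affine least-squares fit per camera. A minimal sketch under that assumption (not the thesis's exact algorithm; the patch colours and distortion below are made up):

```python
import numpy as np

def fit_color_correction(measured, target):
    """Fit an affine colour correction (a 4x3 matrix M) minimising
    ||[rgb, 1] @ M - target||^2 over the chart patches (least squares)."""
    X = np.hstack([measured, np.ones((len(measured), 1))])
    M, *_ = np.linalg.lstsq(X, target, rcond=None)
    return M

def apply_correction(M, rgb):
    """Apply the fitted correction to an N x 3 array of RGB values."""
    return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M

# Synthetic example: a channel-wise gain + offset distortion applied
# to five colour-chart patches, then recovered by the fit.
target = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [0.5, 0.2, 0.7]])
measured = target * np.array([0.9, 1.1, 0.8]) + np.array([0.05, -0.02, 0.01])
M = fit_color_correction(measured, target)
```

With one such matrix per camera, all cameras in the array can be pulled toward the same chart values, which is the essence of the "targeted" approach the abstract compares.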
8. Åkesson, Ulrik. "Design of a multi-camera system for object identification, localisation, and visual servoing". Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44082.

Abstract: In this thesis, the development of a stereo camera system for an intelligent tool is presented. The task of the system is to identify and localise objects so that the tool can guide a robot. Different approaches to object detection have been implemented and evaluated, and the system's ability to localise objects has been tested. The results show that the system can achieve a localisation accuracy below 5 mm.
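The localisation step of such a stereo system can be illustrated with the standard rectified-stereo triangulation formula (a generic sketch; the focal length, baseline, and principal point below are invented values, not the thesis's hardware):

```python
def triangulate_rectified(xl, xr, y, f, baseline, cx, cy):
    """3-D point from a rectified stereo pair: depth Z = f * B / d,
    where d = xl - xr is the disparity in pixels, f the focal length
    in pixels, B the baseline in metres, and (cx, cy) the principal point."""
    d = xl - xr
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z
```

For instance, with f = 800 px, a 0.1 m baseline, and principal point (320, 240), a feature observed at xl = 370, xr = 270, y = 260 triangulates to (0.05, 0.02, 0.8) metres. The formula also shows why millimetre-level accuracy depends on sub-pixel disparity estimation: depth error grows with Z²/(f·B).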
9. Turesson, Eric. "Multi-camera Computer Vision for Object Tracking: A comparative study". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21810.

Abstract: Background: Video surveillance is a growing area that can help deter crime, support investigations, or gather statistics. Its efficiency could be increased by introducing tracking, more specifically tracking between cameras in a network. Automating this process could reduce the need for humans to monitor and review footage, since the system can track and inform the relevant people on its own. This has a wide array of uses, such as forensic investigation, crime alerting, or tracking down people who have disappeared. Objectives: We investigate the common setup of real-time multi-target multi-camera tracking (MTMCT) systems, how the components in an MTMCT system affect each other and the complete system, and how image enhancement can affect the MTMCT. Methods: To achieve our objectives, we conducted a systematic literature review to gather information. Using this information, we implemented an MTMCT system and evaluated how its components interact in the complete system. Lastly, we implemented two image enhancement techniques to see how they affect the MTMCT. Results: Most often, an MTMCT system is constructed from a detector for discovering objects, a tracker to follow the objects within a single camera, and a re-identification method to ensure that objects across cameras share the same ID. The components have a considerable effect on each other and can either sabotage or improve one another; for example, the quality of the bounding boxes affects the data from which re-identification can extract features. The image enhancement we used did not introduce any significant improvement. Conclusions: The most common structure for MTMCT is detection, tracking, and re-identification. From our findings, all the components affect each other, but re-identification is the one most affected by the other components and by image enhancement. The two tested image enhancement techniques could not introduce enough improvement, but other image enhancements could be used to make the MTMCT perform better. The MTMCT system we constructed did not manage to reach real-time performance.
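The detection → single-camera tracking → re-identification structure this thesis identifies can be illustrated at its middle stage with a minimal IoU-based tracker (a generic sketch, not the author's implementation; the greedy assignment and the 0.3 threshold are assumptions):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_tracks(tracks, detections, next_id, thr=0.3):
    """One step of single-camera tracking: greedily assign each detection
    to the free track with highest IoU above thr; unmatched detections
    start new tracks. Returns the updated assignment and next free ID."""
    assigned = {}
    free = dict(tracks)  # track_id -> last known box
    for det in detections:
        best_tid, best_iou = None, thr
        for tid, box in free.items():
            s = iou(box, det)
            if s > best_iou:
                best_tid, best_iou = tid, s
        if best_tid is None:
            best_tid = next_id
            next_id += 1
        else:
            free.pop(best_tid)
        assigned[best_tid] = det
    return assigned, next_id
```

In a full MTMCT system, the boxes kept per track are exactly what the re-identification stage crops and embeds, which is why, as the thesis observes, poor bounding boxes degrade re-identification downstream.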
10. Nadella, Suman. "Multi camera stereo and tracking patient motion for SPECT scanning systems". Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082905-161037/.

Abstract: Thesis (M.S.), Worcester Polytechnic Institute. Keywords: feature matching in multiple cameras; multi-camera stereo computation; patient motion tracking; SPECT imaging. Includes bibliographical references (pp. 84–88).

Books on the topic "Système de multi-Camera"

1. Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2004.
2. Beach, David Michael. Multi-camera benchmark localization for mobile robot networks. 2005.
3. Knorr, Moritz. Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing. Saint Philip Street Press, 2020.
4. Cerqueira, Manuel D. Gated SPECT MPI. Oxford University Press, 2015. http://dx.doi.org/10.1093/med/9780199392094.003.0006.

Abstract: Protocols for SPECT MPI have evolved over the last 40 years based on the following factors: available radiotracers and gamma camera imaging systems, alternative methods of stress, the needs and demands of patients and referring physicians, the need for radiation dose reduction, and optimization of laboratory efficiency. Initially, studies were performed using dynamic exercise planar multi-day Thallium-201 (Tl-201) protocols. Pharmacologic stress agents were not available, and novel methods of stress included swallowed esophageal pacing leads, cold pressor limb immersion, direct atrial pacing, crushed dipyridamole tablets, and even the use of intravenous ergonovine maleate. Eventually, intravenous dobutamine, dipyridamole, adenosine, and regadenoson became available to allow reliable and safe pharmacologic stress for patients unable to exercise. Tomographic SPECT camera systems replaced planar units, and Tc-99m agents offered better imaging characteristics than Tl-201. These gamma camera systems, radiopharmaceutical agents, and pharmacologic stress agents were all available by the mid-1990s and still represent the majority of MPI performed today.

Book chapters on the topic "Système de multi-Camera"

1. Porikli, Fatih. "Multi-Camera Surveillance". In Multisensor Surveillance Systems, 183–98. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-0371-2_10.
2. Jirafe, Apurva, Mayuri Jibhe, and V. R. Satpute. "Camera Handoff for Multi-camera Surveillance". In Algorithms for Intelligent Systems, 267–74. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4862-2_29.
3. Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. "Omnidirectional Multi-Camera Systems Design". In Design and Implementation of Real-Time Multi-Sensor Vision Systems, 69–88. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8_4.
4. Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. "Miniaturization of Multi-Camera Systems". In Design and Implementation of Real-Time Multi-Sensor Vision Systems, 89–115. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8_5.
5. Javed, Omar, and Mubarak Shah. "Knight Surveillance System Deployment". In Automated Multi-Camera Surveillance: Algorithms and Practice, 1–5. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78881-4_6.
6. Frahm, Jan-Michael, Kevin Köser, and Reinhard Koch. "Pose Estimation for Multi-camera Systems". In Lecture Notes in Computer Science, 286–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-28649-3_35.
7. Brückner, Marcel, and Joachim Denzler. "Active Self-calibration of Multi-camera Systems". In Lecture Notes in Computer Science, 31–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15986-2_4.
8. Bae, Soonmin. "Dense 3D Reconstruction in Multi-camera Systems". In Progress in Optomechatronic Technologies, 51–59. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05711-8_6.
9. Matsuyama, Takashi, Shohei Nobuhara, Takeshi Takai, and Tony Tung. "Multi-camera Systems for 3D Video Production". In 3D Video and Its Applications, 17–44. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4120-4_2.
10. Popovic, Vladan, Kerem Seyid, Ömer Cogal, Abdulkadir Akin, and Yusuf Leblebici. "State-of-the-Art Multi-Camera Systems". In Design and Implementation of Real-Time Multi-Sensor Vision Systems, 13–31. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59057-8_2.

Conference papers on the topic "Système de multi-Camera"

1. Dondo, Diego Gonzalez, Fernando Trasobares, Leandro Yoaquino, Julian Padilla, and Javier Redolfi. "Calibration of multi-camera systems". In 2015 XVI Workshop on Information Processing and Control (RPIC). IEEE, 2015. http://dx.doi.org/10.1109/rpic.2015.7497094.
2. Salman, Bakhita, Mohammed I. Thanoon, Saleh Zein-Sabatto, and Fenghui Yao. "Multi-camera Smart Surveillance System". In 2017 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2017. http://dx.doi.org/10.1109/csci.2017.78.
3. Napoletano, Paolo, and Francesco Tisato. "An attentive multi-camera system". In IS&T/SPIE Electronic Imaging, edited by Kurt S. Niel and Philip R. Bingham. SPIE, 2014. http://dx.doi.org/10.1117/12.2042652.
4. Behera, Reena Kumari, Pallavi Kharade, Suresh Yerva, Pranali Dhane, Ankita Jain, and Krishnan Kutty. "Multi-camera based surveillance system". In 2012 World Congress on Information and Communication Technologies (WICT). IEEE, 2012. http://dx.doi.org/10.1109/wict.2012.6409058.
5. Yang, Feng, Zhao Liming, Zhang Yi, and Kuang Hengyang. "Multi-camera System Depth Estimation". In 2022 IEEE 6th Information Technology and Mechatronics Engineering Conference (ITOEC). IEEE, 2022. http://dx.doi.org/10.1109/itoec53115.2022.9734714.
6

Costache, Alexandru, Dan Popescu, Cosmin Popa und Stefan Mocanu. „Multi-Camera Video Surveillance“. In 2019 22nd International Conference on Control Systems and Computer Science (CSCS). IEEE, 2019. http://dx.doi.org/10.1109/cscs.2019.00096.

7

Kumar, Avinash, Manjula Gururaj, Kalpana Seshadrinathan, and Ramkumar Narayanswamy. "Multi-capture Dynamic Calibration of Multi-camera Systems". In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2018. http://dx.doi.org/10.1109/cvprw.2018.00238.

8

Wu, Haoyu, Shaomin Xiong, and Toshiki Hirano. "A Real-Time Human Recognition and Tracking System With a Dual-Camera Setup". In ASME 2019 28th Conference on Information Storage and Processing Systems. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/isps2019-7469.

Annotation:
Most surveillance camera systems are still controlled and monitored by humans. Smart surveillance camera systems have been proposed to automatically understand the captured scene, identify objects of interest, detect abnormalities, etc. However, most surveillance cameras are either wide-angle or pan-tilt-zoom (PTZ). When a camera is in wide-view mode, small objects can be hard to recognize; when it is zoomed in on an object of interest, the global view is not covered and important events outside the zoomed view are missed. In this paper, we propose a system composed of a wide-angle camera and a PTZ camera. The system captures the wide view and the zoomed view at the same time, taking advantage of both. A real-time human detection and identification algorithm based on a neural network is developed. The system can efficiently and effectively recognize humans, distinguish different identities, and follow the person of interest using the PTZ camera. A multi-target multi-camera (MTMC) system is built on top of the original system: multiple cameras are placed at different locations to observe different views, and the same person appearing in any camera is recognized as the same person, while different persons are distinguished across all cameras.
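The cross-camera re-identification step described in this abstract can be sketched as a nearest-neighbour search over appearance embeddings. The cosine-similarity matcher, the 0.8 threshold, and the toy 3-dimensional vectors below are illustrative assumptions, not the paper's actual algorithm:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_identity(embedding, gallery, threshold=0.8):
    """Return the gallery ID most similar to the query embedding,
    or register a new ID if no stored identity exceeds the threshold."""
    best_id, best_sim = None, -1.0
    for person_id, ref in gallery.items():
        sim = cosine_similarity(embedding, ref)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    if best_sim >= threshold:
        return best_id
    new_id = len(gallery)
    gallery[new_id] = embedding
    return new_id

gallery = {0: [1.0, 0.0, 0.0]}                   # identity 0, seen by camera A
print(match_identity([0.9, 0.1, 0.0], gallery))  # same person on camera B -> 0
print(match_identity([0.0, 1.0, 0.0], gallery))  # unseen person -> new ID 1
```

In a real MTMC deployment the embeddings would come from the detection network, and the gallery would be shared by all camera streams so that an identity assigned in one view is reused in every other view.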
9

Zhong, Jianghua, W. Bastiaan Kleijn, and Xiaoming Hu. "Video quality improvement for multi-camera systems using camera control". In 2014 33rd Chinese Control Conference (CCC). IEEE, 2014. http://dx.doi.org/10.1109/chicc.2014.6896924.

10

Zhao, Chunhui, Bin Fan, Jinwen Hu, Limin Tian, Zhiyuan Zhang, Sijia Li, and Quan Pan. "Pose estimation for multi-camera systems". In 2017 IEEE International Conference on Unmanned Systems (ICUS). IEEE, 2017. http://dx.doi.org/10.1109/icus.2017.8278403.


Reports by organizations on the topic "Système de multi-Camera"

1

Davis, Tim, Frank Lang, Joe Sinneger, Paul Stabile, and John Tower. Multi-Band Infrared Camera Systems. Fort Belvoir, VA: Defense Technical Information Center, December 1994. http://dx.doi.org/10.21236/ada294028.

2

Frankel, Martin, and Jon A. Webb. Design, Implementation, and Performance of a Scalable Multi-Camera Interactive Video Capture System. Fort Belvoir, VA: Defense Technical Information Center, June 1995. http://dx.doi.org/10.21236/ada303255.

3

Tao, Yang, Amos Mizrach, Victor Alchanatis, Nachshon Shamir, and Tom Porter. Automated imaging broiler chick sexing for gender-specific and efficient production. United States Department of Agriculture, December 2014. http://dx.doi.org/10.32747/2014.7594391.bard.

Annotation:
Extending the previous two years of research results (Mizrach et al., 2012; Tao, 2011, 2012), the third year's efforts in both Maryland and Israel were directed towards the engineering of the system. The activities included robust chick handling and development of its conveyor system, optical system improvement, online dynamic motion imaging of chicks, optimal feather extraction and detection from multi-image sequences, and pattern recognition.

Mechanical System Engineering: The third model of the mechanical chick handling system with a high-speed imaging system was built as shown in Fig. 1. This system has improved chick holding cups and motion mechanisms that enable chicks to open their wings through the view section. The mechanical system has achieved a speed of 4 chicks per second, which exceeds the design spec of 3 chicks per second. In the center of the conveyor, a high-speed camera with a UV-sensitive optical system, shown in Fig. 2, was installed; it captures multiple frames per chick (45 images, system-selectable) as the chick passes through the view area. Through intensive discussions and efforts, the PIs of Maryland and ARO created a joint hardware and software protocol that uses sequential images of the chick in its falling motion to capture the opening wings and extract the optimal opening positions. This approach enables reliable feather feature extraction in dynamic motion and pattern recognition.

Improving Chick Wing Deployment: The mechanical system for chick conveying, and especially the section that causes chicks to deploy their wings wide open under the fast video camera and the UV light, was investigated during the third study year. As a natural behavior, chicks tend to deploy their wings as a means of balancing their body when a sudden change in vertical movement is applied. In the previous two years, this was achieved by causing the chicks to move in free fall, under Earth's gravity (g), along a short vertical distance. The chicks always tended to deploy their wings, but not always in a wide, horizontally open position; such a position is required in order to obtain a successful image under the video camera. Moreover, the cells carrying the chicks bumped suddenly at the end of the free-fall path, which caused the chicks' legs to collapse inside the cells and the wing images to become blurred. To improve the movement and prevent the chicks' legs from collapsing, a slowing-down mechanism was designed and tested. This was done by installing a plastic block, printed with a pre-designed variable slope (Fig. 3), at the end of the path of the falling cells (Fig. 4). The cells move down with variable velocity according to the block slope and reach zero velocity at the end of the path. The slope was designed so that the deceleration becomes 0.8g instead of the free-fall gravity (g) present without the block. The tests showed better deployment and wider wing opening, as well as better balance along the movement. The design of additional block slope sizes is under investigation: slopes that create decelerations of 0.7g and 0.9g, as well as variable decelerations, are being designed to further improve the movement path and the images.
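The 0.8g deceleration described in this annotation can be checked with elementary kinematics: a cell falling from height h reaches v = sqrt(2gh), and stopping at a constant deceleration a takes d = v²/(2a). The 0.20 m drop height below is a hypothetical value, since the report does not state the fall distance:

```python
G = 9.81  # standard gravity, m/s^2

def impact_velocity(drop_height):
    """Velocity after a free fall of drop_height metres: v = sqrt(2*g*h)."""
    return (2 * G * drop_height) ** 0.5

def stopping_distance(velocity, deceleration_g):
    """Distance needed to stop from `velocity` at a constant deceleration
    of deceleration_g * g: d = v**2 / (2 * a)."""
    return velocity ** 2 / (2 * deceleration_g * G)

# Hypothetical 0.20 m free-fall section before the slowing block:
v = impact_velocity(0.20)
print(round(v, 2))                           # 1.98 m/s at the block
print(round(stopping_distance(v, 0.8), 2))   # 0.25 m (= h / 0.8)
```

Note that for any drop height the required ramp length is h/0.8 = 1.25h, which is why a gentler 0.7g slope needs a proportionally longer block than the 0.9g one.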
4

Anderson, Gerald L., and Kalman Peleg. Precision Cropping by Remotely Sensed Prototype Plots and Calibration in the Complex Domain. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7585193.bard.

Annotation:
This research report describes a methodology whereby multi-spectral and hyperspectral imagery from remote sensing is used for deriving predicted field maps of selected plant growth attributes required for precision cropping. A major task in precision cropping is to establish areas of the field that differ from the rest of the field and share a common characteristic. Yield distribution maps can be prepared by yield monitors, which are available for some harvester types. Other field attributes of interest in precision cropping, e.g. soil properties, leaf nitrate, biomass, etc., are obtained by manual sampling of the field in a grid pattern. Maps of various field attributes are then prepared from these samples by the "Inverse Distance" interpolation method or by Kriging. An improved interpolation method was developed which is based on minimizing the overall curvature of the resulting map. Such maps are the ground-truth reference used for training the algorithm that generates the predicted field maps from remote sensing imagery. Both the reference and the predicted maps are stratified into "Prototype Plots", e.g. 15x15 blocks of 2 m pixels, whereby the block size is 30x30 m. This averaging reduces the datasets to a manageable size and significantly improves the typically poor repeatability of remote sensing imaging systems. In the first two years of the project we used the Normalized Difference Vegetation Index (NDVI) for generating predicted yield maps of sugar beets and corn. The NDVI was computed from image cubes of three spectral bands generated by an optically filtered three-camera video imaging system. A two-dimensional FFT-based regression model Y = f(X) was used, wherein Y was the reference map and X = NDVI was the predictor. The FFT regression method applies the "Wavelet Based", "Pixel Block" and "Image Rotation" transforms to the reference and remote images prior to the Fast Fourier Transform (FFT) regression with the "Phase Lock" option.
A complex-domain map Yfft is derived by least-squares minimization between the amplitude matrices of X and Y, via the 2D FFT. For one-time predictions, the phase matrix of Y is combined with the amplitude matrix of Yfft, whereby an improved predicted map Yplock is formed. Usually, the residuals of Yplock versus Y are about half of the values of Yfft versus Y. For long-term predictions, the phase matrix of a "field mask" is combined with the amplitude matrices of the reference image Y and the predicted image Yfft. The field mask is a binary image of a pre-selected region of interest in X and Y. The resultant maps Ypref and Ypred are modified versions of Y and Yfft, respectively. The residuals of Ypred versus Ypref are even lower than the residuals of Yplock versus Y. The maps Ypref and Ypred represent a close consensus of two independent imaging methods which "view" the same target. In the last two years of the project our remote sensing capability was expanded by the addition of a CASI II airborne hyperspectral imaging system and an ASD hyperspectral radiometer. Unfortunately, the cross-noise and poor repeatability problem we had in multi-spectral imaging was exacerbated in hyperspectral imaging. We have been able to overcome this problem by over-flying each field twice in rapid succession and developing the Repeatability Index (RI). The RI quantifies the repeatability of each spectral band in the hyperspectral image cube; thereby, it is possible to select the bands of higher repeatability for inclusion in the prediction model while excluding bands of low repeatability. Further segregation of high- and low-repeatability bands takes place in the prediction model algorithm, which is based on a combination of a "Genetic Algorithm" and "Partial Least Squares" (PLS-GA).
In summary, a modus operandi was developed for deriving important plant growth attribute maps (yield, leaf nitrate, biomass, and sugar percent in beets) from remote sensing imagery with sufficient accuracy for precision cropping applications. This achievement is remarkable, given the inherently high cross-noise between the reference and remote imagery as well as the highly non-repeatable nature of remote sensing systems. The above methodologies may be readily adopted by commercial companies which specialize in providing remotely sensed data to farmers.
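The two pre-processing steps named in this annotation, the per-pixel NDVI and the averaging into prototype-plot blocks, can be sketched as follows; the tiny 2x2 example image and the block size are illustrative assumptions, not data from the report:

```python
def ndvi(nir, red):
    """Per-pixel Normalized Difference Vegetation Index:
    NDVI = (NIR - R) / (NIR + R), guarded against a zero denominator."""
    return [[(n - r) / (n + r) if (n + r) else 0.0
             for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir, red)]

def block_average(image, block):
    """Average `image` over non-overlapping block x block windows,
    mimicking the stratification into prototype plots."""
    rows, cols = len(image), len(image[0])
    out = []
    for i in range(0, rows, block):
        out_row = []
        for j in range(0, cols, block):
            vals = [image[a][b]
                    for a in range(i, min(i + block, rows))
                    for b in range(j, min(j + block, cols))]
            out_row.append(sum(vals) / len(vals))
        out.append(out_row)
    return out

nir = [[0.8, 0.8], [0.6, 0.6]]   # near-infrared band (toy values)
red = [[0.2, 0.2], [0.2, 0.2]]   # red band (toy values)
index = ndvi(nir, red)           # approximately [[0.6, 0.6], [0.5, 0.5]]
print(block_average(index, 2))   # one 2x2 prototype-plot mean, ~0.55
```

In the report the blocks are 15x15 pixels (30x30 m plots); the averaging both shrinks the dataset and suppresses the pixel-level noise that makes raw remote imagery poorly repeatable.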