To view the other types of publications on this topic, follow the link: Active stereo vision.

Journal articles on the topic "Active stereo vision"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Active stereo vision".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read the online annotation of the work, if the relevant parameters are provided in the metadata.

Browse journal articles from a wide variety of disciplines and compile your bibliography correctly.

1

Grosso, E., and M. Tistarelli. "Active/dynamic stereo vision". IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 9 (1995): 868–79. http://dx.doi.org/10.1109/34.406652.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Jang, Mingyu, Hyunse Yoon, Seongmin Lee, Jiwoo Kang, and Sanghoon Lee. "A Comparison and Evaluation of Stereo Matching on Active Stereo Images". Sensors 22, no. 9 (April 26, 2022): 3332. http://dx.doi.org/10.3390/s22093332.

Annotation:
The relationship between the disparity and depth information of corresponding pixels is inversely proportional. Thus, in order to accurately estimate depth from stereo vision, it is important to obtain accurate disparity maps, which encode the difference between horizontal coordinates of corresponding image points. Stereo vision can be classified as either passive or active. Active stereo vision generates pattern texture, which passive stereo vision does not have, on the image to fill the textureless regions. In passive stereo vision, many surveys have discovered that disparity accuracy is heavily reliant on attributes, such as radiometric variation and color variation, and have found the best-performing conditions. However, in active stereo matching, the accuracy of the disparity map is influenced not only by those affecting the passive stereo technique, but also by the attributes of the generated pattern textures. Therefore, in this paper, we analyze and evaluate the relationship between the performance of the active stereo technique and the attributes of pattern texture. When evaluating, experiments are conducted under various settings, such as changing the pattern intensity, pattern contrast, number of pattern dots, and global gain, that may affect the overall performance of the active stereo matching technique. Through this evaluation, our discovery can act as a noteworthy reference for constructing an active stereo system.
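The inverse relationship between disparity and depth that this abstract opens with can be made concrete: for a rectified stereo pair with focal length f (in pixels) and baseline B, depth is Z = f·B/d. A minimal sketch, with illustrative values for f, B, and the disparities (not taken from the paper):

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
# The focal length, baseline, and disparities below are illustrative.

def depth_from_disparity(d_px: float, f_px: float, baseline_m: float) -> float:
    """Return depth in meters for a disparity of d_px pixels."""
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / d_px

f_px = 700.0       # focal length in pixels
baseline_m = 0.12  # camera baseline in meters

for d in (10.0, 20.0, 40.0):
    z = depth_from_disparity(d, f_px, baseline_m)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
# Doubling the disparity halves the depth: the relationship is inverse.
```

This is why disparity errors matter most for distant (small-disparity) points: a one-pixel error at d = 10 shifts the depth estimate far more than the same error at d = 40.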
3

Gasteratos, Antonios. "Tele-Autonomous Active Stereo-Vision Head". International Journal of Optomechatronics 2, no. 2 (June 13, 2008): 144–61. http://dx.doi.org/10.1080/15599610802081753.

4

Wang, Yexin, Fuqiang Zhou, and Yi Cui. "Single-camera active stereo vision system using fiber bundles". Chinese Optics Letters 12, no. 10 (2014): 101301–4. http://dx.doi.org/10.3788/col201412.101301.

5

Samson, Eric, Denis Laurendeau, Marc Parizeau, Sylvain Comtois, Jean-François Allan, and Clément Gosselin. "The Agile Stereo Pair for active vision". Machine Vision and Applications 17, no. 1 (February 23, 2006): 32–50. http://dx.doi.org/10.1007/s00138-006-0013-7.

6

Feller, Michael, Jae-Sang Hyun, and Song Zhang. "Active Stereo Vision for Precise Autonomous Vehicle Control". Electronic Imaging 2020, no. 16 (January 26, 2020): 258–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-257.

Annotation:
This paper describes the development of a low-cost, low-power, accurate sensor designed for precise feedback control of an autonomous vehicle to a hitch. The solution that has been developed uses an active stereo vision system, combining classical stereo vision with a low-cost, low-power laser speckle projection system, which solves the correspondence problem experienced by classic stereo vision sensors. A third camera is added to the sensor for texture mapping. A model test of the hitching problem was developed using an RC car and a target to represent a hitch. A control system is implemented to precisely control the vehicle to the hitch. The system can successfully control the vehicle from within 35° of perpendicular to the hitch, to a final position with an overall standard deviation of 3.0 mm of lateral error and 1.5° of angular error.
7

Ko, Jung-Hwan. "Active Object Tracking System based on Stereo Vision". Journal of the Institute of Electronics and Information Engineers 53, no. 4 (April 25, 2016): 159–66. http://dx.doi.org/10.5573/ieie.2016.53.4.159.

8

Porta, J. M., J. J. Verbeek, and B. J. A. Kröse. "Active Appearance-Based Robot Localization Using Stereo Vision". Autonomous Robots 18, no. 1 (January 2005): 59–80. http://dx.doi.org/10.1023/b:auro.0000047287.00119.b6.

9

Wang, Yongchang, Kai Liu, Qi Hao, Xianwang Wang, D. L. Lau, and L. G. Hassebrook. "Robust Active Stereo Vision Using Kullback-Leibler Divergence". IEEE Transactions on Pattern Analysis and Machine Intelligence 34, no. 3 (March 2012): 548–63. http://dx.doi.org/10.1109/tpami.2011.162.

10

Mohamed, Abdulla, Phil F. Culverhouse, Ricardo De Azambuja, Angelo Cangelosi, and Chenguang Yang. "Automating Active Stereo Vision Calibration Process with Cobots". IFAC-PapersOnLine 50, no. 2 (December 2017): 163–68. http://dx.doi.org/10.1016/j.ifacol.2017.12.030.

11

Jung, Keonhwa, Seokjung Kim, Sungbin Im, Taehwan Choi, and Minho Chang. "A Photometric Stereo Using Re-Projected Images for Active Stereo Vision System". Applied Sciences 7, no. 10 (October 13, 2017): 1058. http://dx.doi.org/10.3390/app7101058.

12

Yau, Wei-Yun, and Han Wang. "Active Visual Feedback Control of Robot Manipulator". Journal of Robotics and Mechatronics 9, no. 3 (June 20, 1997): 231–38. http://dx.doi.org/10.20965/jrm.1997.p0231.

Annotation:
This paper describes an approach to controlling the robot manipulator using an active stereo camera system as the feedback mechanism. In the conventional system, increasing the precision of the hand-eye system inevitably reduces its operating range. It is also not robust to perturbations of the vision system, which are commonly encountered in real-world applications. The proposed hand-eye system addresses these limitations and shortcomings. In this paper, the concept of a pseudo image space, which has three dimensions, is introduced. A relationship between the pseudo image space and the robot space is established via a mapping matrix. Using the mapping matrix together with visual feedback to update it, algorithms are developed to control the robot manipulator. No camera calibration to recover the homogeneous transformation matrix of the stereo vision system is required. Thus, the hand-eye system is robust to changes in the camera orientation. A method to cater for focal length changes in the vision system is also described. This allows the hand-eye system to select the resolution and accuracy suitable for its task. The proposed method has been tested in an actual implementation to verify its robustness and accuracy.
13

Fan, Di, Yanyang Liu, Xiaopeng Chen, Fei Meng, Xilong Liu, Zakir Ullah, Wei Cheng, Yunhui Liu, and Qiang Huang. "Eye Gaze Based 3D Triangulation for Robotic Bionic Eyes". Sensors 20, no. 18 (September 15, 2020): 5271. http://dx.doi.org/10.3390/s20185271.

Annotation:
Three-dimensional (3D) triangulation based on active binocular vision has increasing amounts of applications in computer vision and robotics. An active binocular vision system with non-fixed cameras needs to calibrate the stereo extrinsic parameters online to perform 3D triangulation. However, the accuracy of stereo extrinsic parameters and disparity have a significant impact on 3D triangulation precision. We propose a novel eye gaze based 3D triangulation method that does not use stereo extrinsic parameters directly in order to reduce the impact. Instead, we drive both cameras to gaze at a 3D spatial point P at the optical center through visual servoing. Subsequently, we can obtain the 3D coordinates of P through the intersection of the two optical axes of both cameras. We have performed experiments to compare with previous disparity based work, named the integrated two-pose calibration (ITPC) method, using our robotic bionic eyes. The experiments show that our method achieves comparable results with ITPC.
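The geometric idea in this abstract, recovering P from the intersection of the two optical axes, can be sketched as the midpoint of the closest approach of two 3D rays (in practice the axes are skew, so a least-squares midpoint is taken rather than an exact intersection). The camera positions and gaze directions below are hypothetical, not from the paper:

```python
# Triangulate a fixation point as the midpoint of the closest approach of
# the two cameras' optical axes. Positions and directions are hypothetical.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gaze_triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach of rays p1 + s*d1 and p2 + t*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("optical axes are parallel")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * u for p, u in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t * u for p, u in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two "eyes" 0.1 m apart, both gazing at the point (0.05, 0.0, 2.0):
left, right = [0.0, 0.0, 0.0], [0.1, 0.0, 0.0]
target = [0.05, 0.0, 2.0]
d_left = [t - p for t, p in zip(target, left)]
d_right = [t - p for t, p in zip(target, right)]
print(gaze_triangulate(left, d_left, right, d_right))  # ~[0.05, 0.0, 2.0]
```

The appeal of the gaze-based formulation is visible here: only the ray origins and directions enter the computation, so no explicit disparity map is needed.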
14

Hu, Shaopeng, Mingjun Jiang, Takeshi Takaki, and Idaku Ishii. "Real-Time Monocular Three-Dimensional Motion Tracking Using a Multithread Active Vision System". Journal of Robotics and Mechatronics 30, no. 3 (June 20, 2018): 453–66. http://dx.doi.org/10.20965/jrm.2018.p0453.

Annotation:
In this study, we developed a monocular stereo tracking system to be used as a marker-based, three-dimensional (3-D) motion capture system. This system aims to localize dozens of markers on multiple moving objects in real time by switching five hundred different views in 1 s. The ultrafast mirror-drive active vision used in our catadioptric stereo tracking system can accelerate a series of operations for multithread gaze control with video shooting, computation, and actuation within 2 ms. By switching between five hundred different views in 1 s, with real-time video processing for marker extraction, our system can function as virtual left and right pan-tilt tracking cameras, operating at 250 fps to simultaneously capture and process pairs of 512 × 512 stereo images with different views via the catadioptric mirror system. We conducted several real-time 3-D motion experiments to capture multiple fast-moving objects with markers. The results demonstrated the effectiveness of our monocular 3-D motion tracking system.
15

Enescu, V., G. De Cubber, K. Cauwerts, H. Sahli, E. Demeester, D. Vanhooydonck, and M. Nuttin. "Active stereo vision-based mobile robot navigation for person tracking". Integrated Computer-Aided Engineering 13, no. 3 (July 17, 2006): 203–22. http://dx.doi.org/10.3233/ica-2006-13302.

16

Xue, Ting, and Bin Wu. "Reparability measurement of vision sensor in active stereo visual system". Measurement 49 (March 2014): 275–82. http://dx.doi.org/10.1016/j.measurement.2013.12.008.

17

Barone, Sandro, Paolo Neri, Alessandro Paoli, and Armando Viviano Razionale. "Flexible calibration of a stereo vision system by active display". Procedia Manufacturing 38 (2019): 564–72. http://dx.doi.org/10.1016/j.promfg.2020.01.071.

18

Krotkov, Eric, and Ruzena Bajcsy. "Active vision for reliable ranging: Cooperating focus, stereo, and vergence". International Journal of Computer Vision 11, no. 2 (October 1993): 187–203. http://dx.doi.org/10.1007/bf01469228.

19

Yau, Wei Yun, and Han Wang. "Fast Relative Depth Computation for an Active Stereo Vision System". Real-Time Imaging 5, no. 3 (June 1999): 189–202. http://dx.doi.org/10.1006/rtim.1997.0114.

20

Yamashita, Atsushi, Toru Kaneko, Shinya Matsushita, Kenjiro T. Miura, and Suekichi Isogai. "Camera Calibration and 3-D Measurement with an Active Stereo Vision System for Handling Moving Objects". Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 304–13. http://dx.doi.org/10.20965/jrm.2003.p0304.

Annotation:
In this paper, we propose a fast, easy camera calibration and 3-D measurement method with an active stereo vision system for handling moving objects whose geometric models are known. We use stereo cameras that change direction independently to follow moving objects. To gain extrinsic camera parameters in real time, a baseline stereo camera (parallel stereo camera) model and projective transformation of stereo images are used by considering epipolar constraints. To make use of 3-D measurement results for a moving object, the manipulator hand approaches the object. When the manipulator hand and object are near enough to be situated in a single image, very accurate camera calibration is executed to calculate the manipulator size in the image. Our calibration is simple and practical because it does not need to calibrate all camera parameters. The computation time for real-time calibration is not large because we need only search for one parameter in real time by deciding the relationship between all parameters in advance. Our method does not need complicated image processing or matrix calculation. Experimental results show that the accuracy of 3-D reconstruction of a cubic box whose edge is 60 mm long is within 1.8 mm when the distance between the camera and the box is 500 mm. Total computation time for object tracking, camera calibration, and manipulation control is within 0.5 seconds.
21

LI, ZE-NIAN, and FRANK TONG. "RECIPROCAL-WEDGE TRANSFORM IN ACTIVE STEREO". International Journal of Pattern Recognition and Artificial Intelligence 13, no. 01 (February 1999): 25–48. http://dx.doi.org/10.1142/s0218001499000033.

Annotation:
The Reciprocal-Wedge Transform (RWT) facilitates space-variant image representation. In this paper a V-plane projection method is presented as a model for imaging using the RWT. It is then shown that space-variant sensing with this new RWT imaging model is suitable for fixation control in active stereo that exhibits vergence and versional eye movements and scanpath behaviors. A computational interpretation of stereo fusion in relation to disparity limit in space-variant imagery leads to the development of a computational model for binocular fixation. The vergence-version movement sequence is implemented as an effective fixation mechanism using the RWT imaging. A fixation system is presented to show the various modules of camera control, vergence and version.
22

DU, FENGLEI, and MICHAEL BRADY. "A FOUR DEGREE-OF-FREEDOM ROBOT HEAD FOR ACTIVE VISION". International Journal of Pattern Recognition and Artificial Intelligence 08, no. 06 (December 1994): 1439–69. http://dx.doi.org/10.1142/s021800149400070x.

Annotation:
The design of a robot head for active computer vision tasks is described. The stereo head/eye platform uses a common elevation configuration and has four degrees of freedom. The joints are driven by DC servo motors coupled with incremental optical encoders and backlash-minimizing gearboxes. The details of the mechanical design, head controller design, the architecture of the system, and the design criteria for various specifications are presented.
23

Du, Qin Jun, Xue Yi Zhang, and Xing Guo Huang. "Modeling and Analysis of a Humanoid Robot Active Stereo Vision Platform". Applied Mechanics and Materials 55-57 (May 2011): 868–71. http://dx.doi.org/10.4028/www.scientific.net/amm.55-57.868.

Annotation:
A humanoid robot is not only expected to walk stably but is also required to perform manipulation tasks autonomously in our work and living environments. This paper discusses the visual perception and the object manipulation, based on visual servoing, of a humanoid robot; an active robot vision model is built, and then the 3D location principle, the calibration method, and the precision of this model are analyzed. This active robot vision system with two DOF enlarges the robot's visual field, and stereo is the simplest camera configuration for obtaining 3D position information.
24

Charles, Priya, and A. V. Patil. "Non parametric methods of disparity computation". International Journal of Engineering & Technology 7, no. 2.6 (March 11, 2018): 28. http://dx.doi.org/10.14419/ijet.v7i2.6.10062.

Annotation:
Disparity is inversely proportional to depth. Information about depth is a key factor in many real-time applications, such as computer vision applications, medical diagnosis, and model precision. Disparity is measured first in order to calculate the depth that suits real-world applications. There are two approaches, viz. active and passive methods. Due to its cost effectiveness, the passive approach is the most popular. In spite of this, its measurements are limited by occlusion, a large number of objects, and textured areas. So, effective and efficient stereo depth estimation algorithms have taken a toll on researchers. The important goal of stereo vision algorithms is the calculation of the disparity map between two images captured at the same time. These pictures are taken using two cameras. We have implemented the non-parametric algorithms for stereo vision, viz. the Rank and Census transforms, on both single-processor and multicore processors, and the results show a time-efficiency gain of 1500 times.
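The census transform named in this abstract is non-parametric: each pixel is replaced by a bit string recording which of its neighbors are darker than it, and matching costs are Hamming distances between those strings, which makes the cost invariant to monotonic brightness changes. A minimal single-pixel sketch (the window radius and toy patches are illustrative, not from the paper):

```python
# Census transform over a square window: each pixel becomes a bit string of
# comparisons against its neighbors; the matching cost between two pixels is
# the Hamming distance of their bit strings. Toy data, for illustration only.

def census(img, y, x, r=1):
    """Census bit string for pixel (y, x) with an r-radius window."""
    center = img[y][x]
    bits = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # the center pixel is not compared with itself
            bits = (bits << 1) | (1 if img[y + dy][x + dx] < center else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two census strings."""
    return bin(a ^ b).count("1")

# Two 3x3 patches with identical structure but a global brightness offset,
# which the census transform is invariant to:
patch_a = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
patch_b = [[v + 100 for v in row] for row in patch_a]  # +100 brightness
print(hamming(census(patch_a, 1, 1), census(patch_b, 1, 1)))  # 0
```

This brightness invariance is what makes census-based matching robust to the radiometric differences between two real cameras, at the price of losing the absolute intensity information.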
25

Zoppi, Matteo, and Rezia Molfino. "ArmillEye: Flexible Platform for Underwater Stereo Vision". Journal of Mechanical Design 129, no. 8 (August 8, 2006): 808–15. http://dx.doi.org/10.1115/1.2735338.

Annotation:
The paper describes ArmillEye, a 3-degree-of-freedom (DOF) flexible hybrid platform designed for agile underwater stereoptic vision. Effective telecontrol systems of remotely operated vehicles require active and dexterous camera support in order to allow the operator to easily and promptly change the point of view, also improving the virtual reconstruction of the environment in difficult operative conditions (dirtiness, turbulence, and partial occlusion). The same concepts hold for visual servoing of autonomous underwater vehicles. ArmillEye was designed for this specific application; it is based on the concept of using a parallel-hybrid mechanism architecture that, in principle, allows us to minimize the ad hoc waterproof boxes (generally only for cameras) while the actuators, fixed to the base of the mechanism, can be placed in the main body of the underwater vehicle. This concept proved effective and was previously proposed for underwater arms. The synthesis of ArmillEye followed the specific aims of visual telecontrol and servoing, specifying vision workspace, dexterity, and dynamics parameters. Two versions of ArmillEye are proposed: the first one with two cameras to obtain stereoptic vision by using two viewpoints (two rotational freedoms with a fixed tilt or pan axis and vergence); the second one with one camera operated to obtain stereoptic vision by using one viewpoint (two rotational freedoms with a fixed tilt or pan axis and extrusion).
26

TANG, CHENG-YUAN, ZEN CHEN, and YI-PING HUNG. "AUTOMATIC DETECTION AND TRACKING OF HUMAN HEADS USING AN ACTIVE STEREO VISION SYSTEM". International Journal of Pattern Recognition and Artificial Intelligence 14, no. 02 (March 2000): 137–66. http://dx.doi.org/10.1142/s0218001400000118.

Annotation:
A new head tracking algorithm for automatically detecting and tracking human heads in complex backgrounds is proposed. By using an elliptical model for the human head, our Maximum Likelihood (ML) head detector can reliably locate human heads in images having complex backgrounds and is relatively insensitive to illumination and rotation of the human heads. Our head detector consists of two channels: the horizontal and the vertical channels. Each channel is implemented by multiscale template matching. Using a hierarchical structure in implementing our head detector, the execution time for detecting the human heads in a 512×512 image is about 0.02 second in a Sparc 20 workstation (not including the time for image acquisition). Based on the ellipse-based ML head detector, we have developed a head tracking method that can monitor the entrance of a person, detect and track the person's head, and then control the stereo cameras to focus their gaze on this person's head. In this method, the ML head detector and the mutually-supported constraint are used to extract the corresponding ellipses in a stereo image pair. To implement a practical and reliable face detection and tracking system, further verification using facial features, such as eyes, mouth and nostrils, may be essential. The 3D position computed from the centers of the two corresponding ellipses is then used for fixation. An active stereo head has been used to perform the experiments and has demonstrated that the proposed approach is feasible and promising for practical uses.
27

Tang Yiping, Lu Shaohui, Wu Ting, and Han Guodong. "Pipe morphology defects inspection system with active stereo omnidirectional vision sensor". Infrared and Laser Engineering 45, no. 11 (2016): 1117005. http://dx.doi.org/10.3788/irla201645.1117005.

28

Tang Yiping, Lu Shaohui, Wu Ting, and Han Guodong. "Pipe morphology defects inspection system with active stereo omnidirectional vision sensor". Infrared and Laser Engineering 45, no. 11 (2016): 1117005. http://dx.doi.org/10.3788/irla20164511.1117005.

29

Huber, Eric, and David Kortenkamp. "A behavior-based approach to active stereo vision for mobile robots". Engineering Applications of Artificial Intelligence 11, no. 2 (April 1998): 229–43. http://dx.doi.org/10.1016/s0952-1976(97)00078-x.

30

Okubo, Atsushi, Atsushi Nishikawa, and Fumio Miyazaki. "Selective acquisition of 3D structure with an active stereo vision system". Systems and Computers in Japan 30, no. 12 (November 15, 1999): 1–15. http://dx.doi.org/10.1002/(sici)1520-684x(19991115)30:12<1::aid-scj1>3.0.co;2-6.

31

XU, TINGTING, TIANGUANG ZHANG, KOLJA KÜHNLENZ, and MARTIN BUSS. "ATTENTIONAL OBJECT DETECTION WITH AN ACTIVE MULTI-FOCAL VISION SYSTEM". International Journal of Humanoid Robotics 07, no. 02 (June 2010): 223–43. http://dx.doi.org/10.1142/s0219843610002076.

Annotation:
A biologically inspired foveated attention system in an object detection scenario is proposed. Bottom-up attention uses wide-angle stereo camera data to select a sequence of fixation points. Successive snapshots of high foveal resolution using a telephoto camera enable highly accurate object recognition based on SIFT algorithm. Top-down information is incrementally estimated and integrated using a Kalman-filter, enabling parameter adaptation to changing environments due to robot locomotion. In the experimental evaluation, all the target objects were detected in different backgrounds. Significant improvements in flexibility and efficiency are achieved.
32

Bi, Songlin, Menghao Wang, Jiaqi Zou, Yonggang Gu, Chao Zhai, and Ming Gong. "Dental Implant Navigation System Based on Trinocular Stereo Vision". Sensors 22, no. 7 (March 27, 2022): 2571. http://dx.doi.org/10.3390/s22072571.

Annotation:
Traditional dental implant navigation systems (DINS) based on binocular stereo vision (BSV) have limitations, for example, weak anti-occlusion abilities, as well as problems with feature point mismatching. These shortcomings limit the operators’ operation scope, and the instruments may even cause damage to the adjacent important blood vessels, nerves, and other anatomical structures. Trinocular stereo vision (TSV) is introduced to DINS to improve the accuracy and safety of dental implants in this study. High positioning accuracy is provided by adding cameras. When one of the cameras is blocked, spatial positioning can still be achieved, and doctors can adjust to system tips; thus, the continuity and safety of the surgery is significantly improved. Some key technologies of DINS have also been updated. A bipolar line constraint algorithm based on TSV is proposed to eliminate the feature point mismatching problem. A reference template with active optical markers attached to the jaw measures head movement. A T-type template with active optical markers is used to obtain the position and direction of surgery instruments. The calibration algorithms of endpoint, axis, and drill are proposed for 3D display of the surgical instrument in real time. With the preoperative path planning of implant navigation software, implant surgery can be carried out. Phantom experiments are carried out based on the system to assess the feasibility and accuracy. The results show that the mean entry deviation, exit deviation, and angle deviation are 0.55 mm, 0.88 mm, and 2.23 degrees, respectively.
33

Chung, Jae-Moon, and Tadashi Nagata. "Binocular vision planning with anthropomorphic features for grasping parts by robots". Robotica 14, no. 3 (May 1996): 269–79. http://dx.doi.org/10.1017/s0263574700019585.

Annotation:
SUMMARY: Planning of an active vision system having anthropomorphic features, such as binocularity, foveas, and gaze control, is proposed. The aim of the vision system is to provide robots with the pose information of an adequate object to be grasped by the robots. For this, the paper describes a viewer-oriented fixation point frame and its calibration, active motion and gaze control of the vision system, disparity filtering, zoom control, and estimation of the pose of a specific portion of a selected object. On the basis of the importance of contour information and the scheme of stereo vision in recognizing objects by humans, the occluding contour pairs of objects are used as inputs in order to demonstrate the proposed visual planning.
34

Shibata, Masaaki, and Taiga Honma. "A Control Technique for 3D Object Tracking on Active Stereo Vision Robot". IEEJ Transactions on Electronics, Information and Systems 125, no. 3 (2005): 536–37. http://dx.doi.org/10.1541/ieejeiss.125.536.

35

Chen, Chichyang, and Y. F. Zheng. "Passive and active stereo vision for smooth surface detection of deformed plates". IEEE Transactions on Industrial Electronics 42, no. 3 (June 1995): 300–306. http://dx.doi.org/10.1109/41.382141.

36

Wallner, F., and R. Dillman. "Real-time map refinement by use of sonar and active stereo-vision". Robotics and Autonomous Systems 16, no. 1 (November 1995): 47–56. http://dx.doi.org/10.1016/0921-8890(95)00147-8.

37

Nishikawa, Atsushi, Shinpei Ogawa, Noriaki Maru, and Fumio Miyazaki. "Reconstruction of object surfaces by using occlusion information from active stereo vision". Systems and Computers in Japan 28, no. 9 (August 1997): 86–97. http://dx.doi.org/10.1002/(sici)1520-684x(199708)28:9<86::aid-scj10>3.0.co;2-f.

38

Deris, A., I. Trigonis, A. Aravanis, and E. K. Stathopoulou. "DEPTH CAMERAS ON UAVs: A FIRST APPROACH". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 231–36. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-231-2017.

Annotation:
Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision, and robotics. Various technologies, active as well as passive, are used to serve this purpose, such as laser scanning, photogrammetry, and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles, including stereo vision, infrared light, and laser beams. In this study we investigate the use of the newly designed Stereolabs ZED depth camera, based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and the scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated, based on qualitative and quantitative criteria, with respect to the ones derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.
39

Grace, A. E., D. Pycock, H. T. Tillotson, and M. S. Snaith. "Active shape from stereo for highway inspection". Machine Vision and Applications 12, no. 1 (July 1, 2000): 7–15. http://dx.doi.org/10.1007/s001380050119.

40

Wu, Tao T., and Jianan Y. Qu. "Optical imaging for medical diagnosis based on active stereo vision and motion tracking". Optics Express 15, no. 16 (August 2, 2007): 10421. http://dx.doi.org/10.1364/oe.15.010421.

41

Das, S., and N. Ahuja. "Performance analysis of stereo, vergence, and focus as depth cues for active vision". IEEE Transactions on Pattern Analysis and Machine Intelligence 17, no. 12 (1995): 1213–19. http://dx.doi.org/10.1109/34.476513.

42

Dipanda, A., S. Woo, F. Marzani, and J. M. Bilbault. "3-D shape reconstruction in an active stereo vision system using genetic algorithms". Pattern Recognition 36, no. 9 (September 2003): 2143–59. http://dx.doi.org/10.1016/s0031-3203(03)00049-9.

43

Dankers, Andrew, Nick Barnes, and Alex Zelinsky. "MAP ZDF segmentation and tracking using active stereo vision: Hand tracking case study". Computer Vision and Image Understanding 108, no. 1-2 (October 2007): 74–86. http://dx.doi.org/10.1016/j.cviu.2006.10.013.

44

PAU, L. F. "AN INTELLIGENT CAMERA FOR ACTIVE VISION". International Journal of Pattern Recognition and Artificial Intelligence 10, No. 01 (February 1996): 33–42. http://dx.doi.org/10.1142/s0218001496000049.

Annotation:
Much research is currently under way on the processing of one- or two-camera imagery, possibly combined with other sensors and actuators, with the aim of achieving attentive vision, i.e. selectively processing some parts of a scene, possibly at a different resolution. Attentive vision is in turn an element of active vision, in which the outcome of the image processing triggers changes in the image acquisition geometry and/or the environment. Almost all of this research assumes classical imaging, scanning, and conversion geometries, such as raster-based scanning and the processing of several digitized outputs on separate image processing units. A consortium of industrial companies comprising Digital Equipment Europe, Thomson CSF, and a few others has taken a more radical view. To meet active vision requirements in industry, an intelligent camera is being designed and built, comprising three basic elements: a unique Thomson CSF CCD sensor architecture with random addressing; the DEC Alpha 21064 275 MHz processor chip, sharing the same internal data bus as the digital sensor output; and a generic library of basic image manipulation, control, and image processing functions, executed directly in the sensor-internal bus-processor unit, so that only higher-level results or commands are exchanged with the processing environment. Extensions to color imaging (with lower spatial resolution) and to stereo imaging are relatively straightforward. The basic sensor is 1024×1024 pixels with 2×10-bit addresses and a 2.5 ms (400 frames/second) image data rate compatible with the Alpha bus and 64-bit addressing. For attentive vision, several connected fields of at most 40,000 pixels and at least 5×3 pixels can be read and addressed within each 2.5 ms image frame. Readout is nondestructive, and the 64-bit image processing addressing allows 8 full pixel readouts in a single word.
The main difficulties identified are the access and reading delays, the signal levels, and the dimensioning of some buffer arrays in the processor. The commercial applications initially targeted are industrial inspection, traffic control, and document imaging. In all of these fields, selective position-dependent processing takes place, followed by feature-dependent processing. Very large savings are expected both in terms of solution costs to end users and development time, as well as major performance gains for the ultimate processes. The reader will appreciate that at this stage no further implementation details can be given.
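The selective readout described above can be sketched as a bounds check over randomly addressable fields; everything here is a hypothetical interface modeling the stated limits (1024×1024 sensor, fields between 5×3 and 40,000 pixels), not the consortium's actual API:

```python
# Hypothetical model of the attentive-vision readout limits stated above.
FRAME_W, FRAME_H = 1024, 1024   # sensor resolution from the text
MAX_FIELD_PIXELS = 40_000       # per-field pixel limit from the text
MIN_FIELD_W, MIN_FIELD_H = 5, 3 # minimum field size from the text

def validate_field(x, y, w, h):
    """Check that a requested readout field fits the stated sensor
    limits; return its pixel count if valid."""
    if w < MIN_FIELD_W or h < MIN_FIELD_H:
        raise ValueError("field below minimum size 5x3")
    if w * h > MAX_FIELD_PIXELS:
        raise ValueError("field exceeds 40 000-pixel limit")
    if x < 0 or y < 0 or x + w > FRAME_W or y + h > FRAME_H:
        raise ValueError("field outside sensor area")
    return w * h

# A 200x200 attentive field: 40 000 pixels, exactly at the limit.
print(validate_field(100, 100, 200, 200))  # 40000
```

Several such fields per 2.5 ms frame would then be scheduled by the on-sensor processing library rather than by raster order.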
45

YI, Ying Min, and Yu Hui. "Simultaneous Localization and Mapping with Identification of Landmarks Based on Monocular Vision". Advanced Materials Research 366 (October 2011): 90–94. http://dx.doi.org/10.4028/www.scientific.net/amr.366.90.

Annotation:
Object identification is a central issue in robot simultaneous localization and mapping (SLAM) with monocular vision. In this paper, an algorithm for a wheeled robot's simultaneous localization and mapping with identification of landmarks based on monocular vision is proposed. In the observation step, landmark identification and position location are performed by image processing and analysis, which converts the vision image projection of the wheeled robot and the geometrical relations of spatial objects into calculations of the robot's distance and angle relative to the landmarks. The overall algorithm follows the recursive order of prediction, observation, data association, update, and mapping to achieve simultaneous localization and map building. Compared with active vision, three-dimensional vision, and stereo vision algorithms, the proposed algorithm is able to identify environmental objects and achieve smooth movement.
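The conversion of an image projection into a relative distance and angle can be sketched with a pinhole model and a flat-floor assumption; all parameters here (camera height, focal length, principal point) are illustrative assumptions, not the paper's actual derivation:

```python
import math

def landmark_range_bearing(u_px, v_px, cx, cy, focal_px,
                           cam_height_m, tilt_rad=0.0):
    """Estimate ground range and bearing to a floor landmark from one
    image: pinhole model, camera at known height over a flat floor."""
    bearing = math.atan2(u_px - cx, focal_px)           # horizontal angle
    elev = math.atan2(v_px - cy, focal_px) + tilt_rad   # angle below horizon
    if elev <= 0:
        raise ValueError("landmark above horizon; flat-floor model fails")
    rng = cam_height_m / math.tan(elev)                 # ground distance
    return rng, bearing

# Illustrative observation: pixel (400, 380), principal point (320, 240),
# 500 px focal length, camera 0.3 m above the floor.
r, b = landmark_range_bearing(400, 380, 320, 240, 500, 0.3)
```

A range/bearing pair of this kind is exactly what the prediction-observation-update recursion consumes at each step.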
46

Sumetheeprasit, Borwonpob, Ricardo Rosales Martinez, Hannibal Paul, and Kazuhiro Shimonomura. "Long-Range 3D Reconstruction Based on Flexible Configuration Stereo Vision Using Multiple Aerial Robots". Remote Sensing 16, No. 2 (07.01.2024): 234. http://dx.doi.org/10.3390/rs16020234.

Annotation:
Aerial robots, or unmanned aerial vehicles (UAVs), are widely used in 3D reconstruction tasks employing a wide range of sensors. In this work, we explore the use of wide-baseline and non-parallel stereo vision for fast and movement-efficient long-range 3D reconstruction with multiple aerial robots. Each viewpoint of the stereo vision system is carried by a separate aerial robot, facilitating the adjustment of various parameters, including baseline length, configuration axis, and inward yaw tilt angle. Additionally, multiple aerial robots with different parameter sets can be used at once: multiple baselines allow 3D monitoring at several depth ranges simultaneously, and the combined use of horizontal and vertical stereo improves the quality and completeness of depth estimation. In simulation, depth estimation at distances of up to 400 m with less than 10% error is demonstrated using only 10 m of active flight distance. Additionally, estimation at distances of up to 100 m, with flight distances of up to 10 m on the vertical and horizontal axes, is demonstrated in an outdoor mapping experiment using the developed prototype UAVs.
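The benefit of a wide inter-UAV baseline can be quantified with the standard first-order stereo error model ΔZ ≈ Z²·Δd/(f·B): depth error grows quadratically with range but shrinks linearly with baseline. The numbers below are illustrative assumptions, not the paper's actual system parameters:

```python
def depth_error(z_m, baseline_m, focal_px, disparity_err_px):
    """First-order stereo depth uncertainty: grows with the square of
    depth, shrinks linearly with baseline length."""
    return z_m ** 2 * disparity_err_px / (focal_px * baseline_m)

# Illustrative: target at 400 m, 1000 px focal length, 0.25 px
# matching error; 10 m inter-UAV baseline vs 0.2 m on-board baseline.
wide = depth_error(400.0, 10.0, 1000.0, 0.25)    # 4.0 m   (1% of range)
narrow = depth_error(400.0, 0.2, 1000.0, 0.25)   # 200.0 m (50% of range)
print(wide, narrow)
```

Under these assumed values, only the distributed wide-baseline configuration stays below the 10%-error regime reported for long ranges.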
47

Wang, Xin, and Pieter Jonker. "An Advanced Active Vision System with Multimodal Visual Odometry Perception for Humanoid Robots". International Journal of Humanoid Robotics 14, No. 03 (25.08.2017): 1750006. http://dx.doi.org/10.1142/s0219843617500062.

Annotation:
By using active vision to perceive their surroundings instead of passively receiving information, humans develop the ability to explore unknown environments. Research on humanoid robot active vision already has a half-century history; it covers comprehensive research areas and many studies have been done. The current trend is to use a stereo setup or a Kinect with neck movements to realize active vision; human perception, however, combines eye and neck movements. This paper presents an advanced active vision system that works in a similar way to human vision. The main contributions are: a set of controllers that mimic eye and neck movements, including saccadic, pursuit, vestibulo-ocular reflex, and vergence eye movements; an adaptive selection mechanism based on object properties that automatically chooses an optimal tracking algorithm; and a novel Multimodal Visual Odometry Perception method that combines stereopsis and convergence to enable robots to perform both precise action in action space and scene exploration in personal space. Experimental results prove the effectiveness and robustness of the system. Moreover, the system runs under real-time constraints with low-cost cameras and motors, providing an affordable solution for industrial applications.
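The vergence movements mentioned above have a simple geometric core: for a symmetric two-camera head, each eye rotates inward by atan(b/2Z) so both optical axes intersect at a fixation point at depth Z. This is a simplified model for illustration, not the paper's actual controller:

```python
import math

def vergence_angles(target_depth_m, interocular_m):
    """Symmetric vergence: each camera rotates inward so both optical
    axes meet at the fixation point. Simplified geometric model."""
    per_eye = math.atan2(interocular_m / 2.0, target_depth_m)
    return per_eye, 2.0 * per_eye  # per-eye angle, total vergence angle

# Illustrative: 0.065 m camera separation, target 0.5 m away.
per_eye, total = vergence_angles(target_depth_m=0.5, interocular_m=0.065)
print(math.degrees(total))
```

Conversely, a measured vergence angle yields the fixation depth, which is one way stereopsis and convergence can be fused in a perception method like the one described.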
48

SHIBATA, Masaaki, and Taiga HONMA. "Visual Tracking Control for Static Pose and Dynamic Response on Active Stereo Vision Robot". Journal of the Japan Society for Precision Engineering, Contributed Papers 71, No. 8 (2005): 1036–40. http://dx.doi.org/10.2493/jspe.71.1036.

49

Busboom, A., and R. J. Schalkoff. "Active stereo vision and direct surface parameter estimation: curve-to-curve image plane mappings". IEE Proceedings - Vision, Image, and Signal Processing 143, No. 2 (1996): 109. http://dx.doi.org/10.1049/ip-vis:19960162.

50

NAGAHAMA, Kotaro, Shota SHIRAYAMA, Ryusuke UEKI, Mitsuharu KOJIMA, Kei OKADA, and Masayuki INABA. "2P1-D19 Gaze Control to Human and Handling Objects for Humanoid's Stereo Active Vision". Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) 2010 (2010): _2P1—D19_1—_2P1—D19_4. http://dx.doi.org/10.1299/jsmermd.2010._2p1-d19_1.

