To see other types of publications on this topic, follow the link: Computer vision; Active.

Journal articles on the topic 'Computer vision; Active'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Computer vision; Active.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Aloimonos, John, Isaac Weiss, and Amit Bandyopadhyay. "Active vision." International Journal of Computer Vision 1, no. 4 (January 1988): 333–56. http://dx.doi.org/10.1007/bf00133571.

2

McGarrity, C., and R. L. Dalglish. "An autonomous computer peripheral for active 3D vision." Measurement Science and Technology 7, no. 11 (November 1, 1996): 1591–604. http://dx.doi.org/10.1088/0957-0233/7/11/008.

3

Sipe, M. A., and D. Casasent. "Feature space trajectory methods for active computer vision." IEEE Transactions on Pattern Analysis and Machine Intelligence 24, no. 12 (December 2002): 1634–43. http://dx.doi.org/10.1109/tpami.2002.1114854.

4

de Croon, G. C. H. E., I. G. Sprinkhuizen-Kuyper, and E. O. Postma. "Comparing active vision models." Image and Vision Computing 27, no. 4 (March 2009): 374–84. http://dx.doi.org/10.1016/j.imavis.2008.06.004.

5

Pinhanez, Claudio S. "Behavior-Based Active Vision." International Journal of Pattern Recognition and Artificial Intelligence 8, no. 6 (December 1994): 1493–526. http://dx.doi.org/10.1142/s0218001494000723.

Abstract:
A vision system was built using a behavior-based model, the subsumption architecture. The so-called active eye moves the camera's axis through the environment, detecting areas with a high concentration of edges with the help of a kind of saccadic movement. The design and implementation process is detailed in the article, with particular attention to the fovea-like sensor structure that enables the active eye to use local information efficiently to control its movements. Numerical measures of the eye's behavior were developed and applied to evaluate the incremental building process and the effects of the saccadic movements on the whole system. A higher-level behavior was also implemented, with the purpose of detecting long straight edges in the image, producing pictures similar to hand drawings. Robustness and efficiency problems are addressed at the end of the paper. The results suggest that interesting behaviors can be achieved using simple vision methods and algorithms if their results are properly interconnected and timed.
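
The saccade strategy the abstract describes, moving the gaze toward the image region densest in edges, can be illustrated with a short sketch. The grid size, the gradient-based edge detector, and the fixation rule below are our own illustrative assumptions, not Pinhanez's implementation:

```python
# Minimal sketch (not the paper's implementation) of a saccade policy that
# moves the gaze toward the image region with the highest edge density.
import numpy as np

def next_fixation(gray: np.ndarray, grid: int = 8) -> tuple[int, int]:
    """Return (row, col) pixel coordinates of the next fixation point."""
    gy, gx = np.gradient(gray.astype(float))      # cheap edge detector
    mag = np.hypot(gx, gy)                        # edge magnitude
    h, w = mag.shape
    ch, cw = h // grid, w // grid
    # Sum edge magnitude over a coarse grid of cells.
    density = mag[:ch * grid, :cw * grid].reshape(grid, ch, grid, cw).sum(axis=(1, 3))
    r, c = np.unravel_index(np.argmax(density), density.shape)
    # Saccade to the centre of the densest cell.
    return r * ch + ch // 2, c * cw + cw // 2

if __name__ == "__main__":
    frame = np.random.rand(240, 320)              # stand-in for a camera frame
    print(next_fixation(frame))
```
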
6

Pahlavan, Kourosh, and Jan-Olof Eklundh. "Mechatronics of active vision." Mechatronics 4, no. 2 (March 1994): 113–23. http://dx.doi.org/10.1016/0957-4158(94)90038-8.

7

Sipe, M. A., and D. Casasent. "Global feature space neural network for active computer vision." Neural Computing & Applications 7, no. 3 (September 1998): 195–215. http://dx.doi.org/10.1007/bf01414882.

8

Tistarelli, M., and E. Grosso. "Active vision-based face authentication." Image and Vision Computing 18, no. 4 (March 2000): 299–314. http://dx.doi.org/10.1016/s0262-8856(99)00059-1.

9

Allen, M. J., F. J. Marin, F. García-Lagos, N. E. Gough, and Q. Mehdi. "Fuzzy processing for active vision." Integrated Computer-Aided Engineering 10, no. 3 (June 27, 2003): 267–85. http://dx.doi.org/10.3233/ica-2003-10304.

10

Serebrennyi, Vladimir, Andrei Boshliakov, and Georgii Ovsiankin. "Active stabilization in robotic vision systems." MATEC Web of Conferences 161 (2018): 03019. http://dx.doi.org/10.1051/matecconf/201816103019.

Abstract:
The article considers promising approaches to the design of active stabilization systems based on parallel kinematics mechanisms, and possible applications of such systems. Attention is drawn to the fact that it is important to compensate not only for fluctuations of the stabilized object but also for vibrations of the body itself. Based on the analysis, it was concluded that mechanisms with parallel kinematics are a promising basis for active stabilization systems. A mathematical model of the hexapod was obtained, from which a computer model was built in the Simulink package. Analysis of this model confirmed the feasibility of using a parallel kinematics mechanism in an active stabilization system and yielded requirements for the system's actuators.
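
The inverse kinematics such a hexapod model rests on reduces to computing six leg lengths from the platform pose. The following is a minimal sketch under an assumed joint geometry; it is not the authors' Simulink model:

```python
# A minimal inverse-kinematics sketch for a hexapod (Stewart platform).
# Joint positions below are illustrative placeholders, not the authors' geometry.
import numpy as np

def rotation(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Z-Y-X Euler rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def leg_lengths(base: np.ndarray, plat: np.ndarray,
                t: np.ndarray, rpy: tuple) -> np.ndarray:
    """Actuator lengths l_i = ||t + R*b_i - a_i|| for the six legs."""
    R = rotation(*rpy)
    return np.linalg.norm(t + plat @ R.T - base, axis=1)

# Illustrative geometry: joints on circles of radius 0.5 m (base) / 0.3 m (top).
ang = np.deg2rad(np.arange(0, 360, 60))
base = np.stack([0.5 * np.cos(ang), 0.5 * np.sin(ang), np.zeros(6)], axis=1)
plat = np.stack([0.3 * np.cos(ang + 0.3), 0.3 * np.sin(ang + 0.3), np.zeros(6)], axis=1)

# Counter-rotate the platform to cancel a measured body tilt of 2 degrees.
print(leg_lengths(base, plat, np.array([0, 0, 0.4]), (np.deg2rad(-2), 0, 0)))
```
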
11

Swain, Michael J., and Markus A. Stricker. "Promising directions in active vision." International Journal of Computer Vision 11, no. 2 (October 1993): 109–26. http://dx.doi.org/10.1007/bf01469224.

12

Ilie, Adrian, and Greg Welch. "Online control of active camera networks for computer vision tasks." ACM Transactions on Sensor Networks 10, no. 2 (January 2014): 1–40. http://dx.doi.org/10.1145/2530283.

13

Kim, Joowan, Myung-Hwan Jeon, Younggun Cho, and Ayoung Kim. "Dark Synthetic Vision: Lightweight Active Vision to Navigate in the Dark." IEEE Robotics and Automation Letters 6, no. 1 (January 2021): 143–50. http://dx.doi.org/10.1109/lra.2020.3035137.

14

Kragic, Danica. "From active perception to deep learning." Science Robotics 3, no. 23 (October 17, 2018): eaav1778. http://dx.doi.org/10.1126/scirobotics.aav1778.

15

Icasio-Hernández, O., Y. I. Curiel-Razo, C. C. Almaraz-Cabral, S. R. Rojas-Ramirez, and J. J. González-Barbosa. "Measurement Error with Different Computer Vision Techniques." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 227–35. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-227-2017.

Abstract:
The goal of this work is to offer a comparison of measurement error across different computer vision techniques for 3D reconstruction and to allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour, and fringe profilometry, and finds the measurement error and its uncertainty using different gauges. We measured several known dimensional and geometric standards. We compared the results of the techniques (average errors, standard deviations, and uncertainties), obtaining a guide for identifying the tolerances each technique can achieve and choosing the best one.
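
The statistics such a comparison relies on are straightforward; a back-of-envelope sketch, with made-up readings and an assumed coverage factor k = 2, might look like this:

```python
# A sketch of the error statistics a gauge-based comparison rests on:
# average error against a nominal value, standard deviation, and an
# expanded uncertainty with coverage factor k = 2. Readings are made up.
import statistics

nominal = 50.000                                     # gauge block length, mm
readings = [50.012, 49.985, 50.020, 49.990, 50.008]  # one technique's output

errors = [r - nominal for r in readings]
mean_err = statistics.mean(errors)
s = statistics.stdev(errors)                         # sample standard deviation
u = s / len(readings) ** 0.5                         # standard uncertainty of the mean
U = 2 * u                                            # expanded uncertainty (k=2)
print(f"mean error {mean_err:+.4f} mm, s {s:.4f} mm, U(k=2) {U:.4f} mm")
```
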
16

Floreano, Dario, Toshifumi Kato, Davide Marocco, and Eric Sauser. "Coevolution of active vision and feature selection." Biological Cybernetics 90, no. 3 (March 1, 2004): 218–28. http://dx.doi.org/10.1007/s00422-004-0467-5.

17

Pau, L. F. "An Intelligent Camera for Active Vision." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 1 (February 1996): 33–42. http://dx.doi.org/10.1142/s0218001496000049.

Abstract:
Much research is currently going on about the processing of one- or two-camera imagery, possibly combined with other sensors and actuators, with a view to achieving attentive vision, i.e. selectively processing some parts of a scene, possibly at a different resolution. Attentive vision in turn is an element of active vision, where the outcome of the image processing triggers changes in the image acquisition geometry and/or the environment. Almost all of this research assumes classical imaging, scanning and conversion geometries, such as raster-based scanning and processing of several digitized outputs on separate image processing units. A consortium of industrial companies comprising Digital Equipment Europe, Thomson CSF, and a few others has taken a more radical view. To meet active vision requirements in industry, an intelligent camera is being designed and built, comprising three basic elements: a unique Thomson CSF CCD sensor architecture with random addressing; the DEC Alpha 21064 275 MHz processor chip, sharing the same internal data bus as the digital sensor output; and a generic library of basic image manipulation, control and image processing functions, executed right in the sensor-internal bus-processor unit, so that only higher-level results or commands are exchanged with the processing environment. Extensions to color imaging (with lower spatial resolution) and to stereo imaging are relatively straightforward. The basic sensor is 1024×1024 pixels with 2×10-bit addresses and a 2.5 ms (400 frames/second) image data rate compatible with the Alpha bus and 64-bit addressing. For attentive vision, several connex fields of at most 40,000 pixels and at least 5×3 pixels can be read and addressed within each 2.5 ms image frame. There is nondestructive readout, and the image processing addressing over 64 bits allows for 8 full pixel readouts in one single word. The main difficulties have been identified as the access and reading delays, the signal levels, and the dimensioning of some buffer arrays in the processor. The commercial applications targeted initially will be in industrial inspection, traffic control and document imaging. In all of these fields, selective position-dependent processing takes place, followed by feature-dependent processing. Very large savings are expected in terms of solution costs to the end users and development time, as well as major performance gains for the ultimate processes. The reader will appreciate that at this stage no further implementation details can be given.
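
A quick check of the sensor figures quoted above (a 1024×1024 sensor read every 2.5 ms, attentive fields of at most 40,000 pixels) gives a feel for the bandwidth involved; the arithmetic below is ours, only the quoted numbers come from the abstract:

```python
# Quick arithmetic on the sensor figures quoted in the abstract.
full_frame = 1024 * 1024          # pixels in a full frame
frame_time = 2.5e-3               # seconds per frame (400 frames/s)
pixel_rate = full_frame / frame_time
print(f"full-frame readout: {pixel_rate:.3e} pixels/s")   # ~4.19e8 pixels/s

roi_max = 40_000                  # largest connex attentive field
print(f"a max-size field carries {full_frame / roi_max:.0f}x less data "
      f"than a full frame")       # ~26x, the payoff of random addressing
```
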
18

Armstrong, P. J., and J. Antonis. "The development of an active computer vision system for reverse engineering." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 214, no. 7 (July 2000): 615–18. http://dx.doi.org/10.1243/0954405001518305.

19

Grauman, Kristen, and Serge Belongie. "Editorial: Special Issue on Active and Interactive Methods in Computer Vision." International Journal of Computer Vision 108, no. 1-2 (April 26, 2014): 1–2. http://dx.doi.org/10.1007/s11263-014-0724-6.

20

Musik, Christoph, and Matthias Zeppelzauer. "Computer Vision and the Digital Humanities." Audiovisual Data in Digital Humanities 7, no. 14 (December 31, 2018): 59. http://dx.doi.org/10.18146/2213-0969.2018.jethc153.

Abstract:
Automated computer vision methods and tools offer new ways of analysing audio-visual material in the realm of the Digital Humanities (DH). While there are some promising results where these tools can be applied, there are basic challenges, such as algorithmic bias and a lack of sufficient transparency, that require one to use these tools carefully, productively and responsibly. When it comes to the socio-technical understanding of computer vision tools and methods, a major unit of sociological analysis, attentiveness, and access for configuration (for both computer vision scientists and DH scholars) is what computer science calls "ground truth". What is specified in the ground truth is the template or rule to follow, e.g. what an object looks like. This article aims at providing scholars in the DH with knowledge about how automated tools for image analysis work and how they are constructed. Based on these insights, the paper introduces an approach called "active learning" that can help to configure these tools in ways that fit the specific requirements and research questions of the DH in a more adaptive and user-centered way. We argue that both objectives need to be addressed, as this is, by all means, necessary for a successful implementation of computer vision tools in the DH and related fields.
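
The "active learning" approach the article introduces is, at its core, a query loop: train on the few items a scholar has labeled, ask for labels where the model is least certain, and retrain. A minimal pool-based sketch with uncertainty sampling follows; the features, model, and budget are illustrative assumptions, not the authors' pipeline:

```python
# A minimal pool-based active-learning loop with uncertainty sampling.
# Data, model, and annotation budget are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 64))            # stand-in image features
y_pool = (X_pool[:, 0] > 0).astype(int)         # stand-in "ground truth"

# Seed the labeled set with a few examples of each class.
pos = np.where(y_pool == 1)[0][:5]
neg = np.where(y_pool == 0)[0][:5]
labeled = list(np.concatenate([pos, neg]))
budget = 50                                     # annotation budget

for _ in range(budget):
    clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)          # highest near the decision boundary
    uncertainty[labeled] = -np.inf              # never re-query labeled items
    labeled.append(int(np.argmax(uncertainty))) # ask the "scholar" to label it

print(f"labeled {len(labeled)} of {len(X_pool)} items")
```
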
21

Howell, A. Jonathan, and Hilary Buxton. "Active vision techniques for visually mediated interaction." Image and Vision Computing 20, no. 12 (October 2002): 861–71. http://dx.doi.org/10.1016/s0262-8856(02)00095-1.

22

Samarin, A. I., L. N. Podladchikova, M. V. Petrushan, and D. G. Shaposhnikov. "Active Vision: From Theory to Application." Optical Memory and Neural Networks 28, no. 3 (July 2019): 185–91. http://dx.doi.org/10.3103/s1060992x19030068.

23

Shafik, M., and B. Mertsching. "Enhanced motion parameters estimation for an active vision system." Pattern Recognition and Image Analysis 18, no. 3 (September 2008): 370–75. http://dx.doi.org/10.1134/s1054661808030024.

24

Fiala, John C., Ronald Lumia, Karen J. Roberts, and Albert J. Wavering. "TRICLOPS: A tool for studying active vision." International Journal of Computer Vision 12, no. 2-3 (April 1994): 231–50. http://dx.doi.org/10.1007/bf01421204.

25

Du, Fenglei, and Michael Brady. "A Four Degree-of-Freedom Robot Head for Active Vision." International Journal of Pattern Recognition and Artificial Intelligence 8, no. 6 (December 1994): 1439–69. http://dx.doi.org/10.1142/s021800149400070x.

Abstract:
The design of a robot head for active computer vision tasks is described. The stereo head/eye platform uses a common elevation configuration and has four degrees of freedom. The joints are driven by DC servo motors coupled with incremental optical encoders and backlash-minimizing gearboxes. The details of the mechanical design, the head controller design, the architecture of the system, and the design criteria for various specifications are presented.
26

Noyer, J. C., C. Boucher, and M. Benjelloun. "3D particle tracking using an active vision." Pattern Recognition Letters 24, no. 9-10 (June 2003): 1227–40. http://dx.doi.org/10.1016/s0167-8655(02)00304-5.

27

Polli, Andrea. "Active Vision: Controlling Sound with Eye Movements." Leonardo 32, no. 5 (October 1999): 405–11. http://dx.doi.org/10.1162/002409499553479.

Abstract:
In this article, the author discusses the inspiration, concept, and technology behind her sound performance work using eye movements, in relation to current research on human eye movement. She also compares the playing of the eye-tracking instrument to research on musical improvisation using unconventional musical instruments and "active music."
28

Takeuchi, Yoshinori, Noboru Ohnishi, and Noboru Sugie. "Active vision system based on information theory." Systems and Computers in Japan 29, no. 11 (October 1998): 31–39. http://dx.doi.org/10.1002/(sici)1520-684x(199810)29:11<31::aid-scj4>3.0.co;2-t.

29

Rabie, Tamer, Baher Abdulhai, Amer Shalaby, and Ahmed El-Rabbany. "Mobile Active-Vision Traffic Surveillance System for Urban Networks." Computer-Aided Civil and Infrastructure Engineering 20, no. 4 (July 2005): 231–41. http://dx.doi.org/10.1111/j.1467-8667.2005.00390.

30

Halverson, Tim, and Anthony J. Hornof. "A Computational Model of “Active Vision” for Visual Search in Human–Computer Interaction." Human–Computer Interaction 26, no. 4 (December 30, 2011): 285–314. http://dx.doi.org/10.1080/07370024.2011.625237.

31

Juan, Olivier, Renaud Keriven, and Gheorghe Postelnicu. "Stochastic Motion and the Level Set Method in Computer Vision: Stochastic Active Contours." International Journal of Computer Vision 69, no. 1 (April 1, 2006): 7–25. http://dx.doi.org/10.1007/s11263-006-6849-5.

32

Bederson, Benjamin B., Richard S. Wallace, and Eric Schwartz. "A miniaturized space-variant active vision system: Cortex-I." Machine Vision and Applications 8, no. 2 (March 1995): 101–9. http://dx.doi.org/10.1007/bf01213475.

33

Al Smadi, Takialddin. "Modern Technology for Image Processing and Computer Vision: A Review." Journal of Advanced Sciences and Engineering Technologies 1, no. 2 (May 21, 2018): 17–23. http://dx.doi.org/10.32441/jaset.v1i2.178.

Abstract:
This survey outlines the use of computer vision in image and video processing for multidisciplinary applications, in both academia and industry, that are active in this field. The scope of the paper covers the theoretical and practical aspects of image and video processing, in addition to computer vision, from essential research to the evolution of applications. Various subjects of image processing and computer vision are demonstrated, spanning the evolution of mobile augmented reality (MAR) applications, augmented reality with 3D modeling and real-time depth imaging, and video processing algorithms for higher-depth video compression. On the mobile-platform side, an automatic computer vision system for citrus fruit has been implemented, and Bayesian classification with boundary growing is used to detect text in video scenes. The paper also illustrates the usability of a hand-based interactive method for portable projectors based on augmented reality.
34

Loványi, István, and Ákos Nagy. "3D robot vision using laser based active lighting." Mechatronics 3, no. 2 (April 1993): 173–80. http://dx.doi.org/10.1016/0957-4158(93)90048-7.

35

Xu, De, and Qingbin Wang. "A new vision measurement method based on active object gazing." International Journal of Advanced Robotic Systems 14, no. 4 (July 1, 2017): 172988141771598. http://dx.doi.org/10.1177/1729881417715984.

Abstract:
A new vision measurement system is developed with two cameras. One is fixed in pose to serve as a monitor camera. It finds and tracks objects in image space. The other is actively rotated to track the object in Cartesian space, working as an active object-gazing camera. The intrinsic parameters of the monitor camera are calibrated. The view angle corresponding to the object is calculated from the object’s image coordinates and the camera’s intrinsic parameters. The rotation angle of the object-gazing camera is measured with an encoder. The object’s depth is computed with the rotation angle and the view angle. Then the object’s three-dimensional position is obtained with its depth and normalized imaging coordinates. The error analysis is provided to assess the measurement accuracy. The experimental results verify the effectiveness of the proposed vision system and measurement method.
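
The depth computation the abstract outlines condenses to two formulas: a view angle from the monitor camera's intrinsics, and triangulation against the gazing camera's encoder angle. The pinhole model, coplanar camera layout, and numbers below are illustrative assumptions, not the authors' calibration:

```python
# A hedged sketch of the depth computation the abstract describes: the view
# angle comes from the monitor camera's intrinsics, the gazing camera's
# rotation angle from its encoder, and depth follows by triangulation.
import math

def view_angle(u: float, cx: float, fx: float) -> float:
    """Horizontal angle (rad) of the object ray from the optical axis."""
    return math.atan2(u - cx, fx)

def depth(alpha: float, beta: float, baseline: float) -> float:
    """x = z*tan(alpha) seen from the monitor camera, x - b = z*tan(beta)
    from the gazing camera, hence z = b / (tan(alpha) - tan(beta))."""
    return baseline / (math.tan(alpha) - math.tan(beta))

# Example: pixel 420 in a camera with cx = 320, fx = 800, gazing camera
# rotated -5 degrees, cameras 0.2 m apart.
a = view_angle(420.0, 320.0, 800.0)
z = depth(a, math.radians(-5.0), 0.2)
x = z * math.tan(a)
print(f"object at x = {x:.3f} m, depth z = {z:.3f} m")
```
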
36

de Oliveira Schultz Ascari, Rúbia Eliza, Luciano Silva, and Roberto Pereira. "Computer Vision applied to improve interaction and communication of people with motor disabilities: A systematic mapping." Technology and Disability 33, no. 1 (February 24, 2021): 11–28. http://dx.doi.org/10.3233/tad-200308.

Abstract:
BACKGROUND: The use of computers as a communication tool by people with disabilities can serve as an effective alternative to promote social interactions and the more inclusive and active participation of people in society. OBJECTIVE: This paper presents a systematic mapping of the literature that provides a survey of scientific contributions where Computer Vision is applied to enable users with motor and speech impairments to access computers easily, allowing them to exert their communicative abilities. METHODS: The mapping was conducted employing searches that identified 221 potentially eligible scientific articles published between 2009 and 2019, indexed in the ACM, IEEE, Science Direct, and Springer databases. RESULTS: From the retrieved papers, 33 were selected and categorized into themes of interest to this research: Human-Computer Interaction, Human-Machine Interaction, Human-Robot Interaction, Recreation, and surveys. Most of the chosen studies use sets of predefined gestures, low-cost cameras, and tracking of a specific body region for gestural interaction. CONCLUSION: The results offer an overview of the Computer Vision techniques used in applied research on Assistive Technology for people with motor and speech disabilities, pointing out opportunities and challenges in this research domain.
37

Mirolli, Marco, Tomassino Ferrauto, and Stefano Nolfi. "Categorisation through evidence accumulation in an active vision system." Connection Science 22, no. 4 (November 19, 2010): 331–54. http://dx.doi.org/10.1080/09540091.2010.505976.

38

Oe, Shunichiro. "Special Issue on Vision." Journal of Robotics and Mechatronics 11, no. 2 (April 20, 1999): 87. http://dx.doi.org/10.20965/jrm.1999.p0087.

Abstract:
The widely used term "Computer Vision" applies when computers are substituted for human visual information processing. As real-world objects, except for characters, symbols, figures and photographs created by people, are three-dimensional (3D), their two-dimensional (2D) images obtained by camera are produced by compressing 3D information into 2D. Many methods of 2D image processing and pattern recognition have been developed and widely applied to industrial and medical processing, etc. Research work enabling computers to recognize 3D objects from 3D information extracted from 2D images has been carried out in intelligent robotics. Many techniques have been developed and some applied practically in scene analysis or 3D measurement. These practical applications are based on image sensing, image processing, pattern recognition, image measurement, extraction of 3D information, and image understanding. New techniques are constantly appearing. The title of this special issue is "Vision", and it features 8 papers ranging from basic computer vision theory to industrial applications. These papers include the following: Kohji Kamejima proposes a method to detect self-similarity in random image fields, the basis of human visual processing. Akio Nagasaka et al. developed a way to identify a real scene in real time using run-length encoding of video feature sequences; this technique will become a basis for active video recording and new robotic machine vision. Toshifumi Honda presents a method for visual inspection of solder joints by 3D image analysis, a very important issue in the inspection of printed circuit boards. Saburo Okada et al. contribute a new technique for simultaneous measurement of shape and normal vector for specular objects. These methods are all useful for obtaining 3D information. Masato Nakajima presents a human face identification method for security monitoring using 3D gray-level information. Kenji Terada et al. propose a method of automatically counting passing people using image sensing. These two technologies are very useful for access control. Yoji Ogawa presents a new image processing method for automatic welding in turbid water under a non-preparatory environment. Liu Wei et al. develop a method for detection and management of cutting-tool wear using visual sensors. We are certain that all of these papers will contribute greatly to the development of vision systems in robotics and mechatronics.
39

Gafurov, Artur M., and Oleg P. Yermolayev. "Automatic Gully Detection: Neural Networks and Computer Vision." Remote Sensing 12, no. 11 (May 28, 2020): 1743. http://dx.doi.org/10.3390/rs12111743.

Abstract:
Transition from manual (visual) interpretation to fully automated gully detection is an important task for the quantitative assessment of modern gully erosion, especially when it comes to large mapping areas. Existing approaches to semi-automated gully detection are based either on object-oriented selection from multispectral images or on gully selection based on a probabilistic model obtained using digital elevation models (DEMs). These approaches cannot be used for the assessment of gully erosion in the European part of Russia, the area most affected by gully erosion, due to the lack of a national large-scale DEM and the limited resolution of open-source multispectral satellite images. An approach based on convolutional neural networks for automated gully detection on RGB syntheses of ultra-high-resolution satellite images, publicly available for the test region in the east of the Russian Plain with intensive basin erosion, has been proposed and developed. The Keras library and the U-Net convolutional neural network architecture were used for training. Preliminary results of applying the trained gully erosion convolutional neural network (GECNN) show that the algorithm performs well in detecting active gullies and differentiates gullies well from other linear forms of slope erosion such as rills and balkas, but it still makes errors in detecting complex gully systems. GECNN also misses a gully in 10% of cases and, in another 10% of cases, identifies a gully where there is none. To solve these problems, it is necessary to further train the neural network on an enlarged training data set.
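
A minimal U-Net in Keras, the library and architecture the abstract names, can be sketched as follows. The depth, filter counts, and input size are illustrative assumptions; the authors' trained GECNN is not reproduced here:

```python
# A minimal U-Net sketch in Keras for binary segmentation (gully mask).
from tensorflow.keras import Model, layers

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(size=256):
    inp = layers.Input((size, size, 3))              # RGB satellite tile
    c1 = conv_block(inp, 16)                         # encoder
    c2 = conv_block(layers.MaxPooling2D()(c1), 32)
    bn = conv_block(layers.MaxPooling2D()(c2), 64)   # bottleneck
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(bn)
    c3 = conv_block(layers.concatenate([u2, c2]), 32)  # skip connection
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 16)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)  # per-pixel gully mask
    return Model(inp, out)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
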
40

Ross, Robert, Lyle Parsons, Ba Son Thai, Richard Hall, and Meha Kaushik. "An IoT Smart Rodent Bait Station System Utilizing Computer Vision." Sensors 20, no. 17 (August 19, 2020): 4670. http://dx.doi.org/10.3390/s20174670.

Abstract:
Across the world, billions of dollars of damage are attributed to rodents, resulting in them being classified collectively as the biggest animal pest in the world. At a commercial scale, most pest control companies employ the labour-intensive approach of deploying and manually monitoring rodenticide bait stations. In this paper we present RatSpy, a visual, low-power bait-station monitoring system that wirelessly reports both on bait levels and on intruders entering the bait station. The smart bait stations report data back to a custom-designed cloud platform. The system's performance was evaluated under realistic field conditions (on an active cattle farm), with initial results showing significant potential in terms of reducing manual labour, improving scalability, and improving data collection.
41

Iwatani, Yasushi. "Task Selection for Control of Active-Vision Systems." IEEE Transactions on Robotics 26, no. 4 (August 2010): 720–25. http://dx.doi.org/10.1109/tro.2010.2050517.

42

Chalimbaud, Pierre, and François Berry. "Embedded Active Vision System Based on an FPGA Architecture." EURASIP Journal on Embedded Systems 2007 (2007): 1–12. http://dx.doi.org/10.1155/2007/35010.

43

Sharma, Rajeev, and Narayan Srinivasa. "A framework for active vision-based robot control using neural networks." Robotica 16, no. 3 (May 1998): 309–27. http://dx.doi.org/10.1017/s0263574798000381.

Abstract:
Assembly robots that use an active camera system for visual feedback can achieve greater flexibility, including the ability to operate in an uncertain and changing environment. Incorporating active vision into a robot control loop involves some inherent difficulties, including calibration, and the need for redefining the servoing goal as the camera configuration changes. In this paper, we propose a novel self-organizing neural network that learns a calibration-free spatial representation of 3D point targets in a manner that is invariant to changing camera configurations. This representation is used to develop a new framework for robot control with active vision. The salient feature of this framework is that it decouples active camera control from robot control. The feasibility of this approach is established with the help of computer simulations and experiments with the University of Illinois Active Vision System (UIAVS).
44

Faraji, Mehdi, and Anup Basu. "Simplified Active Calibration." Image and Vision Computing 91 (November 2019): 103799. http://dx.doi.org/10.1016/j.imavis.2019.08.003.

45

Nutt, Kyle J., Nils Hempler, Gareth T. Maker, Graeme P. A. Malcolm, Miles J. Padgett, and Graham M. Gibson. "Developing a portable gas imaging camera using highly tunable active-illumination and computer vision." Optics Express 28, no. 13 (June 8, 2020): 18566. http://dx.doi.org/10.1364/oe.389634.

46

Zhu, Dianchen, Huiying Wen, and Yichuan Deng. "Pro-active warning system for the crossroads at construction sites based on computer vision." Engineering, Construction and Architectural Management 27, no. 5 (January 7, 2020): 1145–68. http://dx.doi.org/10.1108/ecam-06-2019-0325.

Abstract:
Purpose: To remedy the shortcomings of manual management, especially for traffic accidents that occur at crossroads, the purpose of this paper is to develop a pro-active warning system for the crossroads at construction sites. Although prior studies have made efforts to develop warning systems for construction sites, most of them paid attention to the construction process, while accidents that occur at crossroads have largely been overlooked. Design/methodology/approach: After summarizing the main causes of accidents occurring at crossroads, a pro-active warning system providing six countermeasure functions was designed. Several computer vision approaches and a prediction algorithm were applied and adapted to realize these functions. Findings: A 12-hour video filming a crossroad at a construction site was selected as the original data. The test results show that all designed functions operate normally, several predicted dangerous situations can be detected, and corresponding warnings can be given. To validate the applicability of the system, another 36 hours of video data were chosen for a performance test, and the findings indicate that all applied algorithms fit the data well. Originality/value: Computer vision algorithms have been widely used in previous studies to process video data or monitoring information; however, few have demonstrated high applicability in identifying and classifying the different participants at construction sites. In addition, none of these studies attempted to use a dynamic prediction algorithm to predict risky events, which could provide significant information for active warnings.
47

Zhao, Yue, Xiaodan Lv, and Aiju Wang. "A Nonlinear Camera Self-calibration Approach Based on Active Vision." International Journal of Digital Content Technology and its Applications 5, no. 4 (April 30, 2011): 34–42. http://dx.doi.org/10.4156/jdcta.vol5.issue4.5.

48

Lam, C. P., S. Venkatesh, and G. A. W. West. "Hypothesis Verification Using Parametric Models and Active Vision Strategies." Computer Vision and Image Understanding 68, no. 2 (November 1997): 209–36. http://dx.doi.org/10.1006/cviu.1997.0554.

49

González-Rubio, Jesús, and Francisco Casacuberta. "Cost-sensitive active learning for computer-assisted translation." Pattern Recognition Letters 37 (February 2014): 124–34. http://dx.doi.org/10.1016/j.patrec.2013.06.007.

50

Enescu, V., G. De Cubber, K. Cauwerts, H. Sahli, E. Demeester, D. Vanhooydonck, and M. Nuttin. "Active stereo vision-based mobile robot navigation for person tracking." Integrated Computer-Aided Engineering 13, no. 3 (July 17, 2006): 203–22. http://dx.doi.org/10.3233/ica-2006-13302.
