Academic literature on the topic 'Computer vision algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer vision algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer vision algorithm"

1

Kotyk, Vladyslav, and Oksana Lashko. "Software Implementation of Gesture Recognition Algorithm Using Computer Vision." Advances in Cyber-Physical Systems 6, no. 1 (January 23, 2021): 21–26. http://dx.doi.org/10.23939/acps2021.01.021.

Full text
Abstract:
This paper examines the main methods and principles of image formation and display for a sign language recognition algorithm using computer vision, intended to improve communication for people with hearing and speech impairments. The algorithm effectively recognizes gestures and displays the information as labels. A system comprising the main modules for implementing the algorithm has been designed: perception, image transformation and processing, and the creation of a neural network, using artificial intelligence tools, to train a model that predicts labels for input gestures. The aim of this work is to create a complete program implementing a real-time gesture recognition algorithm using computer vision and machine learning.
APA, Harvard, Vancouver, ISO, and other styles
2

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms." Experimental and Clinical Medicine 89, no. 4 (December 17, 2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Full text
Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and developing further surgical tactics. The aim of the work is to improve the diagnosis of foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method, and principal component analysis are used as computer vision tools. The computer vision algorithm makes it possible to determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the detail of foreign bodies in the lungs and hold significant promise for in-depth processing of spiral computed tomography data. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Full text
Abstract:
Many computer vision applications need keypoint correspondences between images taken under different viewing conditions. Generally speaking, traditional algorithms target either good invariance to affine transformation or speed of computation. Nowadays, the wide use of computer vision algorithms on handheld devices such as mobile phones and on embedded devices with low memory and computational capability has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To address the whole process, this paper covers keypoint detection, description, and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2-optimized popcount. The experimental results show that our algorithm is fast to compute, has low memory usage, and is invariant to viewpoint change, blur, brightness change, and JPEG compression.
APA, Harvard, Vancouver, ISO, and other styles
4

Camps, Octavia I., Linda G. Shapiro, and Robert M. Haralick. "A probabilistic matching algorithm for computer vision." Annals of Mathematics and Artificial Intelligence 10, no. 1-2 (March 1994): 85–124. http://dx.doi.org/10.1007/bf01530945.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhuang, Yizhou, Weimin Chen, Tao Jin, Bin Chen, He Zhang, and Wen Zhang. "A Review of Computer Vision-Based Structural Deformation Monitoring in Field Environments." Sensors 22, no. 10 (May 16, 2022): 3789. http://dx.doi.org/10.3390/s22103789.

Full text
Abstract:
Computer vision-based structural deformation monitoring techniques have been studied in a large number of applications in the field of structural health monitoring (SHM). Numerous laboratory tests and short-term field applications have contributed to forming the basic framework of computer vision deformation monitoring systems, moving toward long-term stable monitoring in field environments. The major contribution of this paper is to analyze the mechanisms influencing the measurement accuracy of computer vision deformation monitoring systems from two perspectives, physical impacts and target-tracking-algorithm impacts, and to present existing solutions. Physical impacts include hardware and environmental effects, while target-tracking-algorithm impacts cover image preprocessing, measurement efficiency, and accuracy. The applicability and limitations of computer vision monitoring algorithms are summarized.
APA, Harvard, Vancouver, ISO, and other styles
6

Petrivskyi, Volodymyr. "Some features of creating a computer vision system." Modeling Control and Information Technologies, no. 5 (November 21, 2021): 72–74. http://dx.doi.org/10.31713/mcit.2021.22.

Full text
Abstract:
This paper presents some features of computer vision models and algorithms. An algorithm for training a neural network for object recognition is proposed and described. The peculiarity of the proposed approach is the parallel training of networks with subsequent selection of the most accurate one. The presented experimental results confirm the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
7

HARALICK, ROBERT M. "PROPAGATING COVARIANCE IN COMPUTER VISION." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 05 (August 1996): 561–72. http://dx.doi.org/10.1142/s0218001496000347.

Full text
Abstract:
This paper describes how to propagate approximately additive random perturbations through any kind of vision algorithm step in which the appropriate random perturbation model for the estimated quantity produced by the vision step is also an additive random perturbation. We assume that the vision algorithm step can be modeled as a calculation (linear or non-linear) that produces an estimate minimizing an implicit scalar function of the input quantity and the calculated estimate. The only additional assumptions are that the scalar function has finite second partial derivatives and that the random perturbations are small enough that the relationship between the scalar function evaluated at the ideal but unknown input and output quantities and at the observed input quantity and perturbed output quantity can be approximated sufficiently well by a first-order Taylor series expansion. The paper finally discusses the issue of verifying that the derived statistical behavior agrees with the experimentally observed statistical behavior.
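To first order, propagation of this kind reduces to pushing the input covariance through the Jacobian of the estimator: cov_y ≈ J cov_x Jᵀ. A minimal numeric sketch, using a hypothetical polar-to-Cartesian conversion as the "vision step" (the example and its numbers are invented, not taken from the paper):

```python
import numpy as np

def propagate_covariance(jacobian, x0, cov_x):
    """First-order propagation: if y = f(x) and x has covariance cov_x,
    then cov_y ~= J cov_x J^T, where J is the Jacobian of f at x0."""
    J = jacobian(x0)
    return J @ cov_x @ J.T

# Hypothetical vision step: polar measurement (r, theta) -> Cartesian (x, y).
def jac_polar_to_cartesian(p):
    r, th = p
    return np.array([[np.cos(th), -r * np.sin(th)],
                     [np.sin(th),  r * np.cos(th)]])

x0 = np.array([2.0, 0.0])         # ideal input: r = 2, theta = 0
cov_in = np.diag([0.01, 0.0004])  # small input perturbations, as assumed
cov_out = propagate_covariance(jac_polar_to_cartesian, x0, cov_in)
# cov_out = diag(0.01, 0.0016): the angular variance is scaled by r^2 = 4
```

The small-perturbation assumption matters: the linearization is only trustworthy while the higher-order Taylor terms stay negligible over the spread of the noise.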
APA, Harvard, Vancouver, ISO, and other styles
8

Patel, Mitesh, Sara Lal, Diarmuid Kavanagh, and Peter Rossiter. "Fatigue Detection Using Computer Vision." International Journal of Electronics and Telecommunications 56, no. 4 (November 1, 2010): 457–61. http://dx.doi.org/10.2478/v10177-010-0062-8.

Full text
Abstract:
Long-duration driving is a significant cause of fatigue-related accidents involving cars, airplanes, trains, and other means of transport. This paper presents the design of a detection system that can be used to detect fatigue in drivers. The system is based on computer vision, with the main focus on eye blink rate. We propose an eye detection algorithm that extracts the face image from the video image, evaluates the eye region, and finally detects the iris of the eye using the binary image. The advantage of this system is that the algorithm works without any constraint on the background, as the face is detected using a skin segmentation technique. The detection performance of this system was tested using video images recorded under laboratory conditions. The applicability of the system is discussed in light of fatigue detection for drivers.
APA, Harvard, Vancouver, ISO, and other styles
9

Sokolov, Sergey, Andrey Boguslavsky, and Sergei Romanenko. "Implementation of the visual data processing algorithms for onboard computing units." Robotics and Technical Cybernetics 9, no. 2 (June 30, 2021): 106–11. http://dx.doi.org/10.31776/rtcj.9204.

Full text
Abstract:
Based on a short analysis of modern hardware and software experience with autonomous mobile robots, the role of computer vision systems in the structure of those robots is considered. A number of onboard computer configurations and implementations of algorithms for visual data capture and processing are described. In the original configuration space, the «algorithms-hardware» plane is considered. The real-time vision system framework is used for software design. Experiments are described with a computing module based on the Intel/Altera Cyclone IV FPGA (implementing the histogram computation algorithm and Canny's algorithm) and with a computing module based on a Xilinx FPGA (implementing sparse and dense optical flow algorithms). The implementation of a graph segmentation algorithm for grayscale images is also considered and analyzed. Results of the first experiments are presented.
APA, Harvard, Vancouver, ISO, and other styles
10

Kushnir, A., and B. Kopchak. "DEVELOPMENT OF COMPUTER VISION-BASED AUTOMATIC FLAME DETECTION ALGORITHM USING MATLAB SOFTWARE ENVIRONMENT." Fire Safety 36 (July 20, 2020): 49–58. http://dx.doi.org/10.32447/20786662.36.2020.05.

Full text
Abstract:
Introduction. Fire detection systems play an important role in protecting objects from fires and saving lives. In traditional fire detection systems, fire detectors detect fires by combustion by-products, such as smoke, temperature, and flame radiation. This principle is effective, but unfortunately the fire detector responds with a significant delay if the ignition source is not in close proximity to it. In addition, such systems have a high rate of false positives. The most promising area for early fire detection is the use of computer vision-based fire detection systems, as they detect fires rather than their combustion products. Such systems, like traditional fire detection systems, analyze the signs of a fire, such as smoke, flames, and even the air temperature, by means of the image coming directly from the cameras, which significantly increases the range of the system. Unlike traditional systems, they are more efficient, do not require indoor spaces, have high performance, and minimize the number of false positives. In addition, when notifying the operator of a fire, the video system can provide an image of the probable ignition place. Fire detection algorithms are quite complex because the signs of a fire are non-static. Today, more and more researchers are trying to develop algorithms and methods that detect fires at an early stage in a video stream with high accuracy and without false positives. There are four main approaches to creating such algorithms: flame colour segmentation, motion detection in the image, analysis of spatial changes in brightness, and analysis of temporal changes in boundaries. Each approach requires the development of its own individual algorithm, and combining them is quite a difficult task. However, all the algorithms are based on the process of selecting colours in the image that are characteristic of fire. Many algorithms use two or three approaches and provide good results.
This paper considers using the MATLAB software environment and its standard packages to create a flame detection system.
Purpose. The research aims to develop an algorithm for automatic flame detection in images based on pixel analysis, which identifies the colour of the flame and the flame area using the MATLAB software environment, in order to further create a reliable computer vision-based flame detection system.
Results. The MATLAB software environment includes the Image Acquisition Toolbox and the Image Processing Toolbox, compatible environments for developing real-time imaging applications whose input can come from digital video cameras, satellite and aviation onboard sensors, and other scientific devices. Using them, one can implement new ideas, including the development of fire detection algorithms. Unlike smoke, a flame has fairly uniform intensity compared with other objects, which is why there are so many flame-based fire detection algorithms. In practice, however, developing an effective algorithm is not easy, because the image under study may contain objects that share characteristics with flames. Pixels with the colour characteristic of flame must be selected in the image. At this stage, various images containing flames were analyzed in the RGB colour model, and the mean intensity and standard deviation of their R, G, and B channels were determined. Image segmentation was performed on the basis of the obtained values, with the aim of highlighting the flame in the image. However, the image may contain other objects whose pixel intensities match the flame pixel intensities, so objects other than the flame may also appear in the segmented image. Given the chosen segmentation method, we can assume that the flame occupies the largest area in the image. Therefore, a second, area-based criterion was added to the flame search, which made it possible to remove objects that do not belong to the flame. In the final stage, the flame in the image is highlighted by a rectangle.
Conclusions. The possibility of using the MATLAB software environment with the Image Acquisition Toolbox and Image Processing Toolbox packages to create a computer vision-based flame detection system is considered. The functions of these packages make it possible to implement new ideas when creating algorithms for automatic fire detection. The article develops an algorithm for automatic flame detection in an image based on analysis of flame colour pixels and flame area. Various images with flames in the RGB colour model were analyzed, their mean values and standard deviations were determined, and image segmentation was performed on the basis of the obtained values. Experimental studies in the MATLAB software environment have proved the efficiency of the developed algorithm. To create a reliable computer vision-based flame detection system in the future, it is proposed to develop an algorithm that, in addition to analyzing flame colour pixels and flame area, would also analyze the boundaries, shape, and flicker of the flame.
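The colour-plus-area criterion described in the abstract can be sketched outside MATLAB as well. The following Python/NumPy version is an illustrative approximation: the mean/std thresholds and the test image are hypothetical, and SciPy's connected-component labelling stands in for the paper's area analysis.

```python
import numpy as np
from scipy import ndimage

def detect_flame(img, mean_rgb, std_rgb, k=2.0):
    """Segment pixels whose colour lies within k standard deviations of the
    flame's mean RGB value, keep the largest connected region, and return
    its bounding rectangle as (x_min, y_min, x_max, y_max)."""
    lo, hi = mean_rgb - k * std_rgb, mean_rgb + k * std_rgb
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    labels, n = ndimage.label(mask)           # connected components
    if n == 0:
        return None
    areas = ndimage.sum(mask, labels, index=range(1, n + 1))
    best = 1 + int(np.argmax(areas))          # label of the largest region
    ys, xs = np.nonzero(labels == best)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic test image: a large "flame" patch plus a small distractor
# in the same (assumed) flame colour.
img = np.zeros((20, 20, 3))
img[5:10, 5:12] = [220, 120, 30]              # hypothetical flame colour
img[0:2, 0:2] = [220, 120, 30]                # small flame-coloured object
box = detect_flame(img, np.array([220, 120, 30]), np.array([20, 20, 20]))
# box is the rectangle around the larger region only
```

The area criterion is what discards the distractor: both regions pass the colour test, but only the largest component is kept.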
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Computer vision algorithm"

1

Anani-Manyo, Nina K. "Computer Vision and Building Envelopes." Kent State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=kent1619539038754026.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mac, Aodha O. "Supervised algorithm selection for flow and other computer vision problems." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1426968/.

Full text
Abstract:
Motion estimation is one of the core problems of computer vision. Given two or more frames from a video sequence, the goal is to find the temporal correspondence for one or more points from the sequence. For dense motion estimation, or optical flow, a dense correspondence field is sought between the pair of frames. A standard approach to optical flow involves constructing an energy function and then using some optimization scheme to find its minimum. These energy functions are hand designed to work well generally, with the intention that the global minimum corresponds to the ground truth temporal correspondence. As an alternative to these heuristic energy functions we aim to assess the quality of existing algorithms directly from training data. We show that the addition of an offline training phase can improve the quality of motion estimation. For optical flow, decisions such as which algorithm to use and when to trust its accuracy, can all be learned from training data. Generating ground truth optical flow data is a difficult and time consuming process. We propose the use of synthetic data for training and present a new dataset for optical flow evaluation and a tool for generating an unlimited quantity of ground truth correspondence data. We use this method for generating data to synthesize depth images for the problem of depth image super-resolution and show that it is superior to real data. We present results for optical flow confidence estimation with improved performance on a standard benchmark dataset. Using a similar feature representation, we extend this work to occlusion region detection and present state of the art results for challenging real scenes. Finally, given a set of different algorithms we treat optical flow estimation as the problem of choosing the best algorithm from this set for a given pixel. However, posing algorithm selection as a standard classification problem assumes that class labels are disjoint. 
For each training example it is assumed that there is only one class label that correctly describes it, and that all other labels are equally bad. To overcome this, we propose a novel example dependent cost-sensitive learning algorithm based on decision trees where each label is instead a vector representing a data point's affinity for each of the algorithms. We show that this new algorithm has improved accuracy compared to other classification baselines on several computer vision problems.
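The affinity-vector idea can be illustrated with an ordinary multi-output regression tree standing in for the thesis's custom cost-sensitive trees. The features, affinity scores, and three-algorithm setup below are all invented for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Invented training data: per-pixel feature vectors plus, for each pixel, an
# affinity score for each of three candidate flow algorithms (e.g. negative
# endpoint error). Toy rule: algorithm 0 wins when feature 0 is positive,
# algorithm 1 wins otherwise, and algorithm 2 is always poor.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
affinity = np.stack([np.sign(X[:, 0]),
                     -np.sign(X[:, 0]),
                     np.full(len(X), -2.0)], axis=1)

# A multi-output regression tree predicts the whole affinity vector at once,
# so "almost as good" algorithms are not penalized as hard wrong labels.
model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, affinity)

def select_algorithm(features):
    """Pick the algorithm with the highest predicted affinity."""
    return int(np.argmax(model.predict(features.reshape(1, -1))))
```

Regressing the full affinity vector, rather than classifying a single winner, is the point of the construction: the loss reflects how much worse each alternative actually is at that pixel.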
APA, Harvard, Vancouver, ISO, and other styles
3

Zakaria, Marwan F. "An automated vision system using a fast 2-dimensional moment invariants algorithm /." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Lichang. "Non-invasive detection algorithm of thermal comfort based on computer vision." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-241082.

Full text
Abstract:
Wasted building energy consumption is a major challenge worldwide, and real-time detection of human thermal comfort is an effective way to address it. As the name suggests, this means detecting a person's comfort level in real time and non-invasively. However, due to factors such as individual differences in thermal comfort and climatic elements (temperature, humidity, illumination, etc.), there is still a long way to go before this strategy can be implemented in real life. From another perspective, current HVAC (heating, ventilating, and air-conditioning) systems cannot provide flexible interaction channels for adjusting the atmosphere and naturally fail to satisfy users' requirements. All of this indicates the need to develop a detection method for human thermal comfort. In this thesis, a non-invasive detection method for human thermal comfort is proposed from two perspectives: macro human postures and skin textures. In the posture part, OpenPose is used to analyze the position coordinates of human body key points in images, for example the elbow, knee, and hipbone, and the results are interpreted in terms of thermal comfort. For skin textures, a deep neural network is used to predict the temperature of human skin from images. Based on Fanger's theory of thermal comfort, the results of both parts are satisfying: subjects' postures can be captured and interpreted into different thermal comfort levels (hot, cold, and comfortable), and the absolute prediction error of the neural network is less than 0.125 degrees centigrade, which is the equipment error of the thermometer used in data acquisition. With the solution proposed in this thesis, it is promising to non-invasively detect users' thermal comfort level from postures and skin textures. Finally, the conclusion and future work are discussed in the final chapter.
APA, Harvard, Vancouver, ISO, and other styles
5

Anderson, Travis M. "Motion detection algorithm based on the common housefly eye." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400965531&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chavez, Aaron J. "A fast interest point detection algorithm." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bergendahl, Jason Robert. "A computationally efficient stereo vision algorithm for adaptive cruise control." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43389.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 55-56).
by Jason Robert Bergendahl.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
8

Ng, Brian Walter. "Wavelet based image texture segmentation using a modified K-means algorithm." Title page, table of contents and abstract only, 2003. http://web4.library.adelaide.edu.au/theses/09PH/09phn5759.pdf.

Full text
Abstract:
"August, 2003" Bibliography: p. 261-268. In this thesis, wavelet transforms are chosen as the primary analytical tool for texture analysis. Specifically, Dual-Tree Complex Wavelet Transform is applied to the texture segmentation problem. Several possibilities for feature extraction and clustering steps are examined, new schemes being introduced and compared to known techniques.
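The clustering step of such a pipeline can be sketched with a plain k-means over per-pixel feature vectors. This is a generic illustration only; the thesis's modified K-means and its Dual-Tree Complex Wavelet features are not reproduced here.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means: assign each feature vector to its nearest centroid,
    recompute centroids as cluster means, and repeat for a fixed number
    of iterations."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen feature vectors (a copy).
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated toy "texture feature" blobs cluster cleanly.
feats = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0)])
labels, cents = kmeans(feats, 2)
```

In a texture-segmentation setting, each feature vector would hold per-pixel wavelet subband responses, and the cluster labels form the segmentation map.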
APA, Harvard, Vancouver, ISO, and other styles
9

Ramswamy, Lakshmy. "PARZSweep a novel parallel algorithm for volume rendering of regular datasets /." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-04012003-140443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Johnson, Amanda R. "A pose estimation algorithm based on points to regions correspondence using multiple viewpoints." Laramie, Wyo. : University of Wyoming, 2008. http://proquest.umi.com/pqdweb?did=1798480891&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Computer vision algorithm"

1

Jeong, Hong. Architectures for computer vision: From algorithm to chip with Verilog. Singapore: John Wiley & Sons Singapore Pte. Ltd., 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Svetlana, Lazebnik, Perona Pietro, Sato Yoichi, Schmid Cordelia, and SpringerLink (Online service), eds. Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part III. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Svetlana, Lazebnik, Perona Pietro, Sato Yoichi, Schmid Cordelia, and SpringerLink (Online service), eds. Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part II. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Svetlana, Lazebnik, Perona Pietro, Sato Yoichi, Schmid Cordelia, and SpringerLink (Online service), eds. Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VI. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Svetlana, Lazebnik, Perona Pietro, Sato Yoichi, Schmid Cordelia, and SpringerLink (Online service), eds. Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part VII. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Svetlana, Lazebnik, Perona Pietro, Sato Yoichi, Schmid Cordelia, and SpringerLink (Online service), eds. Computer Vision – ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shukla, K. K. Efficient Algorithms for Discrete Wavelet Transform: With Applications to Denoising and Fuzzy Inference Systems. London: Springer London, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

José, Braz, and SpringerLink (Online service), eds. Computer Vision, Imaging and Computer Graphics. Theory and Applications: International Joint Conference, VISIGRAPP 2010, Angers, France, May 17-21, 2010. Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Csurka, Gabriela. Computer Vision, Imaging and Computer Graphics. Theory and Applications: International Joint Conference, VISIGRAPP 2011, Vilamoura, Portugal, March 5-7, 2011. Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ryszard, Tadeusiewicz, Chmielewski Leszek J, Wojciechowski Konrad, and SpringerLink (Online service), eds. Computer Vision and Graphics: International Conference, ICCVG 2012, Warsaw, Poland, September 24-26, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Computer vision algorithm"

1

Flach, Boris, and Vaclav Hlavac. "Expectation-Maximization Algorithm." In Computer Vision, 1–4. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_692-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Barbu, Adrian, and Song-Chun Zhu. "Swendsen-Wang Algorithm." In Computer Vision, 1–4. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_721-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Zhengyou. "Eight-Point Algorithm." In Computer Vision, 239–40. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Flach, Boris, and Vaclav Hlavac. "Expectation Maximization Algorithm." In Computer Vision, 265–68. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Barbu, Adrian. "Swendsen-Wang Algorithm." In Computer Vision, 783–85. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_721.

6. Baker, Simon. "Inverse Compositional Algorithm." In Computer Vision, 426–28. Boston, MA: Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_759.

7. Zhang, Zhengyou. "Eight-Point Algorithm." In Computer Vision, 370–71. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_156.

8. Barbu, Adrian, and Song-Chun Zhu. "Swendsen-Wang Algorithm." In Computer Vision, 1227–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_721.

9. Baker, Simon. "Inverse Compositional Algorithm." In Computer Vision, 706–8. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_759.

10. Flach, Boris, and Vaclav Hlavac. "Expectation-Maximization Algorithm." In Computer Vision, 409–12. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_692.


Conference papers on the topic "Computer vision algorithm"

1. Efimov, Aleksey Igorevich, and Dmitry Igorevich Ustukov. "Comparative Analysis of Stereo Vision Algorithms Implementation on Various Architectures." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-484-489.

Abstract:
A comparative analysis of the functionality of stereo vision algorithms on various hardware architectures has been carried out. Quantitative results of the stereo vision algorithms' implementation are presented, taking into account the specifics of the underlying hardware. A description of an original algorithm for calculating the depth map using a summed-area table is given; the complexity of the algorithm does not depend on the size of the search window. The article presents the content and results of implementing the stereo vision method on standard-architecture computers, including a multi-threaded implementation, a single-board computer, and an FPGA. The proposed results may be of interest in the design of vision systems for applied tasks.
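
The window-size-independent complexity mentioned in the abstract comes from the summed-area (integral) image, which answers any rectangular window sum in constant time. A generic sketch of that property — not the authors' implementation — looks like this:

```python
import numpy as np

def summed_area_table(img):
    """Integral image: sat[i, j] = sum of img[:i+1, :j+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def window_sum(sat, top, left, h, w):
    """Sum over img[top:top+h, left:left+w] in O(1), regardless of window size."""
    total = sat[top + h - 1, left + w - 1]
    if top > 0:
        total -= sat[top - 1, left + w - 1]
    if left > 0:
        total -= sat[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += sat[top - 1, left - 1]
    return total
```

In a block-matching depth-map loop, each candidate disparity's matching cost over the search window then costs four lookups instead of h × w additions.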
2. Yaroslavtseva, A. I. "Development of Software for the Orientation System of a Mobile Robotic System in Space Based on Stereo Vision." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-584-596.

Abstract:
Stereo vision algorithms may be of great interest for the design of computer vision systems for autonomous mobile objects. The article presents the content and results of implementing the stereo vision method on a single-board computer. It is worth noting that for many mobile systems, energy efficiency, size, and cost are more important than raw performance. As a result of the work, a software shell implementing the stereo vision algorithm was deployed on a Raspberry Pi single-board computer running the Linux operating system, and the performance of single-board computers with various technical characteristics was compared. The proposed results may be of interest in the design of vision systems for applied tasks.
3. Li, Jianhua, and Lin Liao. "Multi-Resolution-Based Contour Corner Extraction Algorithm for Computer Vision-Based Measurement." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85890.

Abstract:
Corner-based registration of the industry-standard contour and the actual product contour is one of the key steps in industrial computer vision-based measurement. However, existing corner extraction algorithms do not achieve satisfactory results on the standard contour and the deformed contour of the actual product. This paper proposes a multi-resolution contour corner extraction algorithm for computer vision-based measurement. The algorithm first obtains corners at multiple resolutions, then sums the weighted corner values, and finally chooses the points with the appropriate corner values as the final contour corners. The experimental results show that the proposed multi-resolution algorithm outperforms the original algorithm in corner matching and helps in subsequent product measurements.
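
The weighted-sum fusion step described in the abstract can be sketched generically — assuming per-resolution corner-response maps already resampled to a common size; the paper's actual detector, weights, and threshold are not specified here:

```python
import numpy as np

def fuse_corner_responses(responses, weights, threshold):
    """Weighted sum of per-resolution corner-response maps (all resampled
    to the same shape); pixels whose fused response exceeds the threshold
    are kept as final contour-corner candidates."""
    fused = sum(w * r for w, r in zip(weights, responses))
    ys, xs = np.nonzero(fused > threshold)
    return fused, list(zip(ys.tolist(), xs.tolist()))
```

A corner that responds consistently across resolutions accumulates weight from every map, while single-scale noise stays below the threshold — which is the intuition behind the multi-resolution scheme.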
4. Casasent, David P., Anqi Ye, and Ashit Talukder. "Detection algorithm fusion concepts for computer vision." In Photonics East '95, edited by David P. Casasent. SPIE, 1995. http://dx.doi.org/10.1117/12.222659.

5. Laptev, V. V., N. V. Laptev, A. H. Ozdiev, and O. M. Gerget. "Analysis of Parking Space Using Computer Vision." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-508-517.

Abstract:
The article deals with a system for parking space analysis based on processing video monitoring through image segmentation performed with an artificial neural network (ANN). Two key algorithms included in the parking space analysis system are described in detail: 1) an image quality analysis algorithm; 2) an adaptive image analysis algorithm. The main purpose of the first algorithm is to evaluate the feasibility of ANN image analysis based on the recognition of object boundaries. The idea of the second algorithm, adaptive image analysis, is as follows: to analyze images of different sizes and scales by bringing the input image to a single dimensional format and then splitting it into patches. After the ANN analyzes the resulting parts of the image, the algorithm recombines them into a whole picture identical to the input format. The final stage of the proposed system is an algorithm for classifying multiple parking spaces, based on the combined analysis of the segmentation mask obtained from the ANN and a pre-prepared polygonal grid of parking spaces in the image. The paper describes the data used to achieve the obtained results and the characteristics of the training of the semantic segmentation model, giving performance statistics.
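
The split-then-recombine idea can be sketched for a grayscale image — a minimal illustration, not the authors' code; the patch size and edge-replication padding are assumptions:

```python
import numpy as np

def split_into_patches(img, patch):
    """Pad a 2-D image so both sides divide by `patch`, then cut into tiles."""
    H, W = img.shape
    ph, pw = (-H) % patch, (-W) % patch
    padded = np.pad(img, ((0, ph), (0, pw)), mode="edge")
    tiles = [padded[i:i + patch, j:j + patch]
             for i in range(0, padded.shape[0], patch)
             for j in range(0, padded.shape[1], patch)]
    return tiles, padded.shape

def merge_patches(tiles, padded_shape, patch, orig_shape):
    """Reassemble tiles onto the padded canvas, then crop to the original size."""
    out = np.zeros(padded_shape, dtype=tiles[0].dtype)
    idx = 0
    for i in range(0, padded_shape[0], patch):
        for j in range(0, padded_shape[1], patch):
            out[i:i + patch, j:j + patch] = tiles[idx]
            idx += 1
    return out[:orig_shape[0], :orig_shape[1]]
```

In the described system, the ANN would segment each tile independently between the split and merge steps; the round trip itself is lossless.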
6. Babayan, P. V., O. N. Burkina, and V. S. Muraviev. "Research of the Exposure Time Correction Algorithm for Vision Systems." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-613-619.

Abstract:
Computer vision is used in many areas, from stationary video surveillance systems to mobile robots with artificial intelligence. The ability of video sensors to adapt automatically to changes in illumination is significant for high image quality, so the development and study of exposure correction algorithms is an important problem. The paper describes an exposure time correction algorithm designed for use in computer vision systems. Three main ways to change light sensitivity in a video camera are known: controlling the camera's exposure time, changing the sensor sensitivity, and adjusting the lens aperture size. Each of these approaches affects the properties of the generated images in its own way and has its own advantages and limitations. The approaches that are simplest to implement are often based on exposure time correction. The authors propose a heuristic algorithm based on image entropy maximization. The results of simulation and testing of the algorithm in real conditions are presented, along with a comparison against the camera's auto-exposure mode. Based on the results of the research, it is concluded that the proposed algorithm performs better with a similar final exposure time while having low computational complexity.
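
The entropy-maximization criterion can be illustrated with a toy search over candidate exposure times — a hedged sketch; the `capture` callback and the exhaustive scan are illustrative assumptions, not the authors' heuristic control loop:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_exposure(capture, candidates):
    """Capture a frame at each candidate exposure time and keep the one
    whose image entropy is highest (richest tonal distribution)."""
    scored = [(image_entropy(capture(t)), t) for t in candidates]
    return max(scored)[1]
```

Over- and under-exposed frames pile pixels into a few gray levels and score low entropy; a well-exposed frame spreads the histogram and scores high, which is why entropy is a convenient single-number exposure criterion.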
7. Prakash, R. Om, and Chandran Saravanan. "Autonomous robust helipad detection algorithm using computer vision." In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT). IEEE, 2016. http://dx.doi.org/10.1109/iceeot.2016.7755163.

8. Gui, Wu, and Tao Jun. "Chinese chess recognition algorithm based on computer vision." In 2014 26th Chinese Control And Decision Conference (CCDC). IEEE, 2014. http://dx.doi.org/10.1109/ccdc.2014.6852759.

9. Ma, Ruyi. "Image processing technology based on computer vision algorithm." In 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA). IEEE, 2022. http://dx.doi.org/10.1109/icpeca53709.2022.9719261.

10. Maresca, Massimo. "Packet-switching algorithm for SIMD computers and its application to parallel computer vision." In Fibers '91, Boston, MA, edited by Michael J. W. Chen. SPIE, 1991. http://dx.doi.org/10.1117/12.25324.


Reports on the topic "Computer vision algorithm"

1. Riseman, Edward. Intermediate Level Computer Vision Processing Algorithm Development for the Content Addressable Array Parallel Processor. Fort Belvoir, VA: Defense Technical Information Center, November 1986. http://dx.doi.org/10.21236/ada179515.

2. Riseman, Edward. Intermediate Level Computer Vision Processing Algorithm Development for the Content Addressable Array Parallel Processor. Fort Belvoir, VA: Defense Technical Information Center, November 1987. http://dx.doi.org/10.21236/ada192086.

3. Ferrell, Regina, Deniz Aykac, Thomas Karnowski, and Nisha Srinivas. A Publicly Available, Annotated Dataset for Naturalistic Driving Study and Computer Vision Algorithm Development. Office of Scientific and Technical Information (OSTI), January 2021. http://dx.doi.org/10.2172/1760158.

4. Poggio, Tomaso, and James Little. Parallel Algorithms for Computer Vision. Fort Belvoir, VA: Defense Technical Information Center, March 1988. http://dx.doi.org/10.21236/ada203947.

5. Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), November 2022. http://dx.doi.org/10.21079/11681/45902.

Abstract:
The ability to detect and classify buried objects using thermal infrared imaging is affected by the environmental conditions at the time of imaging, which leads to an inconsistent probability of detection. For example, periods of dense overcast or recent precipitation events result in the suppression of the soil temperature difference between the buried object and soil, thus preventing detection. This work introduces an environmentally informed framework to reduce the false alarm rate in the classification of regions of interest (ROIs) in thermal IR images containing buried objects. Using a dataset that consists of thermal images containing buried objects paired with the corresponding environmental and meteorological conditions, we employ a machine learning approach to determine which environmental conditions are the most impactful on the visibility of the buried objects. We find the key environmental conditions include incoming shortwave solar radiation, soil volumetric water content, and average air temperature. For each image, ROIs are computed using a computer vision approach and these ROIs are coupled with the most important environmental conditions to form the input for the classification algorithm. The environmentally informed classification algorithm produces a decision on whether the ROI contains a buried object by simultaneously learning on the ROIs with a classification neural network and on the environmental data using a tabular neural network. On a given set of ROIs, we have shown that the environmentally informed classification approach improves the detection of buried objects within the ROIs.
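
The two-branch combination described above (a classification network on the ROI plus a tabular network on the environmental data) can be reduced to a minimal late-fusion sketch. Here the two learned branches are replaced by fixed feature vectors and a linear scorer purely to show the shape of the combination; none of this reflects the report's actual architecture:

```python
import numpy as np

def fused_score(roi_feat, env_feat, w_roi, w_env, bias):
    """Late fusion: a linear score over image-branch (ROI) features and
    environment-branch features, squashed to a buried-object probability."""
    z = roi_feat @ w_roi + env_feat @ w_env + bias
    return 1.0 / (1.0 + np.exp(-z))
```

In the full system, `roi_feat` would come from the classification network and `env_feat` would encode the key conditions the study identifies (shortwave solar radiation, soil volumetric water content, average air temperature), letting unfavorable conditions pull the score down and suppress false alarms.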
6. Schoening, Timm. OceanCV. GEOMAR, 2022. http://dx.doi.org/10.3289/sw_5_2022.

Abstract:
OceanCV provides computer vision algorithms and tools for underwater image analysis. This includes image processing, pattern recognition, machine learning and geometric algorithms but also functionality for navigation data processing, data provenance etc.
7. Varastehpour, Soheil, Hamid Sharifzadeh, and Iman Ardekani. A Comprehensive Review of Deep Learning Algorithms. Unitec ePress, 2021. http://dx.doi.org/10.34074/ocds.092.

Abstract:
Deep learning algorithms are a subset of machine learning algorithms that aim to explore several levels of the distributed representations from the input data. Recently, many deep learning algorithms have been proposed to solve traditional artificial intelligence problems. In this review paper, some of the up-to-date algorithms of this topic in the field of computer vision and image processing are reviewed. Following this, a brief overview of several different deep learning methods and their recent developments are discussed.
8. Tarasova, Olena Yuriivna, and Iryna Serhiivna Mintii. Web application for facial wrinkle recognition. Kryvyi Rih: KDPU, 2022. http://dx.doi.org/10.31812/123456789/7012.

Abstract:
Facial recognition technology is named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance, and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face; commonly used landmarks include, for example, the eyes, nose, or mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks: some use only basic landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in an image. The purpose of our work is to create a software system for recognizing faces in photos and identifying wrinkles on the face. An algorithm for determining the presence, location, and geometry of various types of wrinkles on the face is implemented.
9. Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, September 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. 
Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.
10. Hofer, Martin, Tomas Sako, Arturo Martinez Jr., Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Applying Artificial Intelligence on Satellite Imagery to Compile Granular Poverty Statistics. Asian Development Bank, December 2020. http://dx.doi.org/10.22617/wps200432-2.

Abstract:
This study outlines a computational framework to enhance the spatial granularity of government-published poverty estimates, citing data from the Philippines and Thailand. Computer vision techniques were applied on publicly available medium resolution satellite imagery, household surveys, and census data from the two countries. The results suggest that even using publicly accessible satellite imagery, predictions generally aligned with the distributional structure of government-published poverty estimates after calibration. The study further examines the robustness of the resulting estimates to user-specified algorithmic parameters and model specifications.