Journal articles on the topic 'Computer vision algorithm'




Consult the top 50 journal articles for your research on the topic 'Computer vision algorithm.'



Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Kotyk, Vladyslav, and Oksana Lashko. "Software Implementation of Gesture Recognition Algorithm Using Computer Vision." Advances in Cyber-Physical Systems 6, no. 1 (January 23, 2021): 21–26. http://dx.doi.org/10.23939/acps2021.01.021.

Abstract:
This paper examines the main methods and principles of image formation and presents a sign language recognition algorithm that uses computer vision to improve communication between people with hearing and speech impairments. The algorithm makes it possible to recognize gestures effectively and to display the results as text labels. A system that includes the main modules for implementing this algorithm has been designed. The modules cover image perception, transformation and processing, and the creation of a neural network, using artificial intelligence tools, to train a model for predicting the labels of input gestures. The aim of this work is to create a complete program implementing a real-time gesture recognition algorithm using computer vision and machine learning.
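Abstracts like this one describe a capture-classify-overlay loop. Below is a minimal, hedged sketch of such a loop in Python with OpenCV and Keras; the model file gesture_cnn.h5, the 64x64 input size, and the label list are hypothetical placeholders, not artifacts of the cited work.

```python
# Hedged sketch of a real-time gesture-label loop; model and labels are
# hypothetical placeholders, not the paper's trained network.
import cv2
import numpy as np
from tensorflow.keras.models import load_model  # assumes TensorFlow/Keras

model = load_model("gesture_cnn.h5")            # hypothetical trained CNN
labels = ["hello", "yes", "no", "thanks"]       # hypothetical gesture labels

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(roi[np.newaxis, ...], verbose=0)[0]
    text = labels[int(np.argmax(probs))]        # predicted gesture label
    cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("gesture", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```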
2

Bunin, Y. V., E. V. Vakulik, R. N. Mikhaylusov, V. V. Negoduyko, K. S. Smelyakov, and O. V. Yasinsky. "Estimation of lung standing size with the application of computer vision algorithms." Experimental and Clinical Medicine 89, no. 4 (December 17, 2020): 87–94. http://dx.doi.org/10.35339/ekm.2020.89.04.13.

Abstract:
Evaluation of spiral computed tomography data is important for improving the diagnosis of gunshot wounds and developing further surgical tactics. The aim of the work is to improve the results of diagnosing foreign bodies in the lungs by using computer vision algorithms. Image gradation correction, interval segmentation, threshold segmentation, a three-dimensional wave method and the principal components method are used as the computer vision toolset. The use of computer vision algorithms makes it possible to determine the size of a foreign body in the lung with an error of 6.8 to 7.2%, which is important for in-depth diagnosis and the development of further surgical tactics. Computer vision techniques increase the detail with which foreign bodies in the lungs are rendered and have significant prospects for the in-depth processing of spiral computed tomography data. Keywords: computer vision, spiral computed tomography, lungs, foreign bodies.
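Threshold segmentation followed by size measurement, one of the steps named above, can be sketched as follows; the Hounsfield-unit threshold and pixel spacing are assumed values, not taken from the paper.

```python
# A minimal sketch of thresholding plus measurement on a CT slice;
# hu_threshold and spacing_mm are assumptions, not the paper's values.
import numpy as np
from scipy import ndimage

def foreign_body_sizes(ct_slice, spacing_mm=0.7, hu_threshold=2000):
    """Return the circle-equivalent diameter (mm) of each dense object."""
    mask = ct_slice > hu_threshold              # metal is far denser than tissue
    labels, n = ndimage.label(mask)             # connected-component labelling
    sizes = []
    for i in range(1, n + 1):
        area_mm2 = np.sum(labels == i) * spacing_mm ** 2
        sizes.append(2.0 * np.sqrt(area_mm2 / np.pi))
    return sizes
```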
3

Xu, Zheng Guang, Chen Chen, and Xu Hong Liu. "An Efficient View-Point Invariant Detector and Descriptor." Advanced Materials Research 659 (January 2013): 143–48. http://dx.doi.org/10.4028/www.scientific.net/amr.659.143.

Abstract:
Many computer vision applications need keypoint correspondence between images taken under different viewing conditions. Generally speaking, traditional algorithms target applications with either good invariance to affine transformation or good speed of computation. Nowadays, the widespread use of computer vision algorithms on handheld devices such as mobile phones, and on embedded devices with low memory and computational capability, has set the goal of making descriptors faster to compute and more compact while remaining robust to affine transformation and noise. To best address the whole process, this paper covers keypoint detection, description and matching. Binary descriptors are computed by comparing the intensities of pairs of sampling points in image patches, and they are matched by Hamming distance using an SSE 4.2-optimized popcount. In the experimental results, we show that our algorithm is fast to compute, has lower memory usage, and is invariant to view-point change, blur change, brightness change, and JPEG compression.
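Binary-descriptor matching of this kind is readily sketched with OpenCV: ORB produces binary descriptors, and a brute-force matcher with NORM_HAMMING compares them via popcount-based Hamming distance (SIMD-accelerated in optimized builds). This is a generic stand-in, not the paper's own detector/descriptor; the input filenames are placeholders.

```python
# ORB binary descriptors matched by Hamming distance (popcount under the hood).
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best Hamming distance: {matches[0].distance}")
```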
4

Camps, Octavia I., Linda G. Shapiro, and Robert M. Haralick. "A probabilistic matching algorithm for computer vision." Annals of Mathematics and Artificial Intelligence 10, no. 1-2 (March 1994): 85–124. http://dx.doi.org/10.1007/bf01530945.

5

Zhuang, Yizhou, Weimin Chen, Tao Jin, Bin Chen, He Zhang, and Wen Zhang. "A Review of Computer Vision-Based Structural Deformation Monitoring in Field Environments." Sensors 22, no. 10 (May 16, 2022): 3789. http://dx.doi.org/10.3390/s22103789.

Abstract:
Computer vision-based structural deformation monitoring techniques have been studied in a large number of applications in the field of structural health monitoring (SHM). Numerous laboratory tests and short-term field applications contributed to the basic framework of computer vision deformation monitoring systems, moving towards long-term, stable monitoring in field environments. The major contribution of this paper is to analyze the mechanisms that influence the measuring accuracy of computer vision deformation monitoring systems from two perspectives, physical factors and target-tracking algorithm factors, and to present the existing solutions. Physical factors include hardware and environmental impacts, while target-tracking algorithm factors include image preprocessing, measurement efficiency and accuracy. The applicability and limitations of computer vision monitoring algorithms are summarized.
6

Petrivskyi, Volodymyr. "Some features of creating a computer vision system." Modeling Control and Information Technologies, no. 5 (November 21, 2021): 72–74. http://dx.doi.org/10.31713/mcit.2021.22.

Abstract:
In the paper, some features of computer vision models and algorithms are presented. An algorithm for training a neural network for object recognition is proposed and described. The peculiarity of the proposed approach is the parallel training of several networks with the subsequent selection of the most accurate one. The presented experimental results confirm the effectiveness of the proposed approach.
7

HARALICK, ROBERT M. "PROPAGATING COVARIANCE IN COMPUTER VISION." International Journal of Pattern Recognition and Artificial Intelligence 10, no. 05 (August 1996): 561–72. http://dx.doi.org/10.1142/s0218001496000347.

Abstract:
This paper describes how to propagate approximately additive random perturbations through any kind of vision algorithm step in which the appropriate random perturbation model for the estimated quantity produced by the vision step is also an additive random perturbation. We assume that the vision algorithm step can be modeled as a calculation (linear or non-linear) that produces an estimate minimizing an implicit scalar function of the input quantity and the calculated estimate. The only assumption is that the scalar function has finite second partial derivatives and that the random perturbations are small enough that the relationship between the scalar function evaluated at the ideal but unknown input and output quantities and at the observed input quantity and perturbed output quantity can be approximated sufficiently well by a first-order Taylor series expansion. The paper finally discusses the issue of verifying that the derived statistical behavior agrees with the experimentally observed statistical behavior.
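The core first-order result being described is that for y = f(x) with small additive perturbations, Cov(y) ≈ J Cov(x) J^T, where J is the Jacobian of f. Below is a generic numerical sketch of that propagation, not Haralick's derivation for any specific vision step.

```python
# First-order covariance propagation: Cov(y) ≈ J Cov(x) J^T with a
# forward-difference Jacobian. Generic sketch, example values assumed.
import numpy as np

def propagate_covariance(f, x, cov_x, eps=1e-6):
    y = f(x)
    J = np.zeros((y.size, x.size))
    for j in range(x.size):                 # numerical Jacobian, column by column
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - y) / eps
    return J @ cov_x @ J.T

# Example: polar-to-Cartesian conversion of a noisy (r, theta) measurement.
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
print(propagate_covariance(f, np.array([10.0, 0.5]), np.diag([0.01, 1e-4])))
```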
8

Patel, Mitesh, Sara Lal, Diarmuid Kavanagh, and Peter Rossiter. "Fatigue Detection Using Computer Vision." International Journal of Electronics and Telecommunications 56, no. 4 (November 1, 2010): 457–61. http://dx.doi.org/10.2478/v10177-010-0062-8.

Abstract:
Long-duration driving is a significant cause of fatigue-related accidents involving cars, airplanes, trains and other means of transport. This paper presents the design of a detection system which can be used to detect fatigue in drivers. The system is based on computer vision, with the main focus on eye blink rate. We propose an algorithm for eye detection that extracts the face image from the video image, evaluates the eye region and then detects the iris of the eye using the binary image. The advantage of this system is that the algorithm works without any constraint on the background, as the face is detected using a skin segmentation technique. The detection performance of this system was tested using video images recorded under laboratory conditions. The applicability of the system is discussed in light of fatigue detection for drivers.
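The paper builds its own skin-segmentation pipeline; as a rough stand-in, the sketch below uses OpenCV's stock Haar cascades for the same face-then-eye-region narrowing, and treats a face frame with no detectable eyes as a candidate "eyes closed" frame for blink-rate counting. The input filename is a placeholder.

```python
# Face-then-eyes detection with OpenCV's bundled Haar cascades; a proxy
# for the paper's skin-segmentation approach, not its implementation.
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("driver.png")                # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
    eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w])
    # Counting face-but-no-eyes frames over time approximates a blink rate.
    print("face at", (x, y, w, h), "eyes found:", len(eyes))
```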
9

Sokolov, Sergey, Andrey Boguslavsky, and Sergei Romanenko. "Implementation of the visual data processing algorithms for onboard computing units." Robotics and Technical Cybernetics 9, no. 2 (June 30, 2021): 106–11. http://dx.doi.org/10.31776/rtcj.9204.

Abstract:
Based on a short analysis of modern hardware and software experience for autonomous mobile robots, the role of computer vision systems in the structure of those robots is considered. A number of onboard computer configurations and implementations of algorithms for visual data capture and processing are described. In the original configuration space, the "algorithms-hardware" plane is considered. For software design, a real-time vision system framework is used. Experiments with a computing module based on the Intel/Altera Cyclone IV FPGA (implementing the histogram computation algorithm and the Canny algorithm) and with a computing module based on a Xilinx FPGA (implementing sparse and dense optical flow algorithms) are described. The implementation of a graph-based segmentation algorithm for grayscale images is also considered and analyzed. Results of the first experiments are presented.
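The FPGA modules above have direct software counterparts in OpenCV; the sketch below runs the same three operations (intensity histogram, Canny edges, dense optical flow) on two consecutive grayscale frames as a reference baseline. Frame filenames are placeholders.

```python
# Software reference for the operations the paper maps to FPGAs.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

hist = cv2.calcHist([curr], [0], None, [256], [0, 256])  # intensity histogram
edges = cv2.Canny(curr, 100, 200)                        # Canny edge map
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)  # dense flow
print(hist.argmax(), edges.sum(), np.abs(flow).mean())
```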
10

Kushnir, A., and B. Kopchak. "DEVELOPMENT OF COMPUTER VISION-BASED AUTOMATIC FLAME DETECTION ALGORITHM USING MATLAB SOFTWARE ENVIRONMENT." Fire Safety 36 (July 20, 2020): 49–58. http://dx.doi.org/10.32447/20786662.36.2020.05.

Abstract:
Introduction. Fire detection systems play an important role in protecting objects from fire and saving lives. In traditional fire detection systems, fire detectors detect fires via their combustion by-products, such as smoke, temperature and flame radiation. This principle is effective but, unfortunately, the fire detector works with a significant delay if the ignition source is not in close proximity to it. In addition, such systems have a high frequency of false positives. The most promising area for early fire detection is the use of computer vision based fire detection systems, as they detect fires rather than their combustion products. Such systems, like traditional fire detection systems, analyze the signs of a fire, such as smoke, flames and even the air temperature, by means of the image coming directly from the cameras, due to which the range of the system increases significantly. Unlike traditional systems, they are more efficient, do not require indoor spaces, have high performance and minimize the number of false positives. In addition, when notifying the operator about a fire, the video system can provide an image of the probable ignition place. Fire detection algorithms are quite complex because the signs of a fire are non-static. Today, more and more scientists are trying to develop algorithms and methods that will detect fires at an early stage in the video stream with high accuracy and without false positives. When creating such algorithms, there are four main approaches.
These are flame colour segmentation, motion detection in the image, analysis of spatial changes in brightness, and analysis of temporal changes in boundaries. Each approach requires the development of its own individual algorithm, and combining them is quite a difficult task. However, all algorithms are based on the process of selecting colours in the image that are characteristic of fire. There are many algorithms that use two or three approaches, and they provide good results. Using the MATLAB software environment and its standard packages to create a flame detection system is considered in this paper. Purpose. The research aims to develop an algorithm for automatic flame detection in images based on pixel analysis, which identifies the colour of the flame and the flame area using the MATLAB software environment, in order to further create a reliable computer vision-based flame detection system. Results. The MATLAB software environment includes the Image Acquisition Toolbox and Image Processing Toolbox, which are compatible environments for developing real-time imaging applications whose input can come from digital video cameras, satellite and aviation on-board sensors, and other scientific devices. Using them, one can implement new ideas, including the development of fire detection algorithms. The flame has a fairly uniform intensity compared to the intensities of other objects, unlike smoke. That is why there are so many flame-based fire detection algorithms. However, in practice, developing an effective algorithm is not an easy task, because the image under study may contain objects that have signs of flame. In the image, one needs to select the pixels with the characteristic colour inherent to the flame. At this stage, various images with flames in the RGB colour model were analyzed, and the mean value of their intensity and the standard deviation (R, G and B) were determined. Image segmentation was performed on the basis of the obtained values. The purpose of segmentation was to highlight the flame in the image. However, there may be other objects in the image whose pixel intensities match the flame pixel intensities; as a result, in addition to the flame, other objects may be highlighted in the segmented image. Based on the selected segmentation method, we can assume that the flame occupies the largest area in the image. Therefore, another criterion, based on area, was chosen for the flame search, which made it possible to remove other objects that do not belong to the flame. In the final stage, the flame in the image is highlighted by a rectangle. Conclusions. The possibility of using the MATLAB software environment with the Image Acquisition Toolbox and Image Processing Toolbox packages to create a computer vision based flame detection system is considered. The functions of the packages make it possible to implement new ideas when creating algorithms for automatic fire detection. The article develops an algorithm for automatic flame detection in the image based on the analysis of flame colour pixels and flame area. Various images with flames in the RGB colour model were analyzed, and their mean value and standard deviation were determined. Image segmentation was performed on the basis of the obtained values. Experimental studies in the MATLAB software environment have proved the efficiency of the developed algorithm.
To create a reliable computer vision based flame detection system in the future, it is proposed to develop an algorithm that would analyze the boundaries, shape and flicker of the flame in addition to analyzing the flame colour pixels and flame area.
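The colour-statistics segmentation described above is easy to prototype. The paper works in MATLAB; the sketch below reproduces the same steps with Python/OpenCV as a stand-in: threshold pixels against assumed per-channel flame mean and standard deviation, keep the largest connected region, and box it. The channel statistics are placeholders, not the paper's values.

```python
# Flame segmentation by colour statistics, then largest-region selection.
import cv2
import numpy as np

img = cv2.imread("scene.png")                       # hypothetical BGR image
mean = np.array([60, 140, 230], dtype=np.float32)   # assumed B, G, R flame mean
std = np.array([40, 50, 25], dtype=np.float32)      # assumed B, G, R std
k = 2.0                                             # mean +/- k*std window

lower = np.clip(mean - k * std, 0, 255).astype(np.uint8)
upper = np.clip(mean + k * std, 0, 255).astype(np.uint8)
mask = cv2.inRange(img, lower, upper)

n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
if n > 1:
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip background
    x, y, w, h, _ = stats[biggest]
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
```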
11

Wang, Xianghan, Jie Jiang, Yingmei Wei, Lai Kang, and Yingying Gao. "Research on Gesture Recognition Method Based on Computer Vision." MATEC Web of Conferences 232 (2018): 03042. http://dx.doi.org/10.1051/matecconf/201823203042.

Abstract:
Gesture recognition is an important mode of human-computer interaction. Over time, people are no longer satisfied with gesture recognition based on wearable devices, but hope to perform gesture recognition in a more natural way. Computer vision-based gesture recognition can transfer human feelings and instructions to computers conveniently and efficiently, and can significantly improve the efficiency of human-computer interaction. Gesture recognition based on computer vision mainly relies on hidden Markov models, the dynamic time warping algorithm and neural network algorithms. The process is roughly divided into three steps: image collection, hand segmentation, and gesture recognition and classification. This paper reviews computer vision-based gesture recognition methods of the past 20 years, analyses the state of research in China and abroad, summarizes current developments and the advantages and disadvantages of different gesture recognition methods, and looks forward to the development trend of gesture recognition technology in the next stage.
12

Erokhin, D. Y., A. B. Feldman, and S. E. Korepanov. "DETECTION AND TRACKING OF MOVING OBJECTS WITH REAL-TIME ONBOARD VISION SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 67–71. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-67-2017.

Abstract:
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimating and compensating geometric transformations of images, an algorithm for detecting moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be used to create onboard vision systems for aircraft, including small and unmanned aircraft.
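A common software pattern for the first two algorithms described above: estimate the global, camera-induced transform between frames, warp the previous frame to cancel it, then difference to expose independent motion. The sketch below is a generic version under those assumptions, not the authors' implementation; frame filenames and thresholds are placeholders.

```python
# Ego-motion compensation followed by frame differencing.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

p0 = cv2.goodFeaturesToTrack(prev, 400, 0.01, 7)
p1, st, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)
M, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])  # camera motion

stabilized = cv2.warpAffine(prev, M, prev.shape[::-1])  # cancel camera motion
diff = cv2.absdiff(curr, stabilized)
_, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # moving-object mask
```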
13

Chow, Bona, and Constantino Reyes-Aldasoro. "Automatic Gemstone Classification Using Computer Vision." Minerals 12, no. 1 (December 31, 2021): 60. http://dx.doi.org/10.3390/min12010060.

Abstract:
This paper presents a computer-vision-based methodology for automatic image-based classification of 2042 training images and 284 unseen (test) images divided into 68 categories of gemstones. A series of feature extraction techniques (33 in total, including colour histograms in the RGB, HSV and CIELAB spaces, local binary pattern, Haralick texture and grey-level co-occurrence matrix properties) were used in combination with different machine-learning algorithms (Logistic Regression, Linear Discriminant Analysis, K-Nearest Neighbour, Decision Tree, Random Forest, Naive Bayes and Support Vector Machine). Deep-learning classification with ResNet-18 and ResNet-50 was also investigated. The optimal combination was provided by a Random Forest algorithm with the RGB eight-bin colour histogram and local binary pattern features, with an accuracy of 69.4% on unseen images; the algorithms required 0.0165 s to process the 284 test images. These results were compared against three expert gemmologists with at least 5 years of experience in gemstone identification, who obtained accuracies between 42.6% and 66.9% and took 42–175 min to classify the test images. As expected, the human experts took much longer than the computer vision algorithms, which in addition provided marginally higher accuracy. Although these experiments included a relatively low number of images, the superiority of computer vision over humans is in line with what has been reported in other areas of study, and it is encouraging to further explore the application in gemmology and related areas.
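The winning combination reported above can be sketched with scikit-learn and scikit-image: an 8-bin-per-channel colour histogram plus a local-binary-pattern histogram, fed to a Random Forest. The feature details are simplified relative to the paper, and the training data is assumed to exist.

```python
# Colour-histogram + LBP features into a Random Forest (simplified sketch).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def features(img_bgr):
    # Note: OpenCV loads images as BGR; the histogram covers all 3 channels.
    col_hist = cv2.calcHist([img_bgr], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3).flatten()
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10))
    return np.concatenate([col_hist, lbp_hist])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# X = np.stack([features(im) for im in train_images]); clf.fit(X, y_train)
```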
14

Iskra, N. A. "APPROACH TO IMAGE ANALYSIS FOR COMPUTER VISION SYSTEMS." Doklady BGUIR 18, no. 2 (March 31, 2020): 62–70. http://dx.doi.org/10.35596/1729-7648-2020-18-2-62-70.

Abstract:
This paper suggests an approach to semantic image analysis for use in computer vision systems. The aim of the work is to develop, and to study, a method for automatically constructing a semantic model that formalizes the spatial relationships between objects in an image. A distinctive feature of this model is the detection of salient objects, thanks to which the construction algorithm analyzes significantly fewer relations between objects, which can greatly reduce image processing time and the resources spent on processing. Attention is paid to the selection of a neural network algorithm for object detection in an image as a preliminary stage of model construction. Experiments were conducted on test datasets from the Visual Genome database, developed by researchers from Stanford University to evaluate object detection algorithms, image captioning models, and other relevant image analysis tasks. When assessing the performance of the model, the accuracy of spatial relation recognition was evaluated. Further experiments interpreted the resulting model, namely image annotation, i.e. generating a textual description of the image content. The experimental results were compared with similar results obtained with neural-network-based algorithms on the same dataset by other researchers, as well as by the author of this paper earlier. Up to a 60% improvement in image captioning quality (according to the METEOR metric) compared with neural network methods has been shown. In addition, the use of this model allows partial cleansing and normalization of data for training neural network architectures, which are widely used in image analysis, among other tasks. The prospects of using this technique in situational monitoring are considered. The disadvantages of this approach are some simplifications in the construction of the model, which will be addressed in its further development.
15

Gao, Terry. "Detection and Tracking Cows by Computer Vision and Image Classification Methods." International Journal of Security and Privacy in Pervasive Computing 13, no. 1 (January 2021): 1–45. http://dx.doi.org/10.4018/ijsppc.2021010101.

Abstract:
In this paper, cow recognition and tracking in video sequences is studied. For the recognition phase, the paper discusses and analyzes different classification algorithms and feature extraction algorithms, and cow detection is transformed into a binary classification problem. The detection method extracts the cow's features using multiple-feature fusion. These features include edge features reflecting the cow's body contour, grey values, and spatial position relationships. In addition, the algorithm detects the cow's body with a classifier trained by the Gentle AdaBoost algorithm. Experiments show that the method has good detection performance when the target is deformed or the contrast between target and background is low. Compared with general target detection algorithms, this method reduces the miss rate and improves detection precision; the detection rate can reach 97.3%. For the tracking phase, the popular compressive tracking (CT) algorithm is improved. The learning rate is changed by adaptively calculating the pap distance of image blocks. Moreover, the update of the target model is stopped to avoid introducing error and noise when the classification response values are negative. The experimental results show that the improved tracking algorithm can effectively handle mistaken target-model updates when there are large occlusions or frequent changes of pose. For the detection and tracking of the cow body, a detection and tracking framework for cow images is built and the detector is combined with the tracking framework. Tests of the algorithm on video sequences in complex environments indicate that the detection algorithm based on improved compressive sensing shows good tracking performance against changing and complicated backgrounds.
16

Asadi, Hamed, Guoyang Zhou, Vaneet Aggarwal, and Denny Yu. "Computer Vision Algorithm to Identify High Force Exertions." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 1586. http://dx.doi.org/10.1177/1071181320641378.

17

Dorj, Ulzii-Orshikh, Keun-kwang Lee, and Malrey Lee. "A Computer Vision Algorithm for Tangerine Yield Estimation." International Journal of Bio-Science and Bio-Technology 5, no. 5 (October 31, 2013): 101–10. http://dx.doi.org/10.14257/ijbsbt.2013.5.5.11.

18

Ye, Ya Lin, Ning Shan, Qian Zhang, and Ke Li Yang. "B-Spline Wavelets Vision Detection Technology Based on Adaptive Filter." Applied Mechanics and Materials 198-199 (September 2012): 284–87. http://dx.doi.org/10.4028/www.scientific.net/amm.198-199.284.

Abstract:
Edges are the most important information for computer vision. Wavelet edge detection can reduce noise disturbance, but it also loses weak edges. This paper presents a new algorithm for edge detection. After sharpening image edges with an adaptive filter algorithm, the algorithm detects edges using B-spline wavelets. The new algorithm has higher precision than conventional algorithms.
19

Finlayson, Graham D. "Colour and illumination in computer vision." Interface Focus 8, no. 4 (June 15, 2018): 20180008. http://dx.doi.org/10.1098/rsfs.2018.0008.

Abstract:
In computer vision, illumination is considered to be a problem that needs to be ‘solved’. The colour cast due to illumination is removed to support colour-based image recognition and stable tracking (in and out of shadows), among other tasks. In this paper, I review historical and current algorithms for illumination estimation. In the classical approach, the illuminant colour is estimated by an ever more sophisticated analysis of simple image summary statistics often followed by a bias correction step. Bias correction is a function applied to the estimates made by a given illumination estimation algorithm to correct consistent errors in the estimations. Most recently, the full power, and much higher complexity, of deep learning has been deployed (where, effectively, the definition of the image statistics of interest and the type of analysis carried out are found as part of an overall optimization). In this paper, I challenge the orthodoxy of deep learning, i.e. that it is the best approach for illuminant estimation. We instead focus on the final bias correction stage found in many simple illumination estimation algorithms. There are two key insights in our method. First, we argue that the bias must be corrected in an exposure invariant way. Second, we show that this bias correction amounts to ‘solving for a homography’. Homography-based illuminant estimation is shown to deliver leading illumination estimation performance (at a very small fraction of the complexity of deep learning methods).
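The simplest of the "image summary statistics" estimators this review discusses is grey-world: assume the average scene reflectance is achromatic, so the image mean itself estimates the illuminant. The sketch below shows that estimate plus a von-Kries-style correction; the homography-based bias correction that is the paper's actual contribution is not reproduced here.

```python
# Grey-world illuminant estimation and correction (baseline sketch only).
import numpy as np

def grey_world_correct(img):                 # img: float RGB array in [0, 1]
    illuminant = img.reshape(-1, 3).mean(axis=0)
    illuminant /= np.linalg.norm(illuminant)
    gain = (1.0 / np.sqrt(3.0)) / illuminant # map estimate to neutral white
    return np.clip(img * gain, 0.0, 1.0)
```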
20

Tang, Yi, Jin Qiu, and Ming Gao. "Fuzzy Medical Computer Vision Image Restoration and Visual Application." Computational and Mathematical Methods in Medicine 2022 (June 21, 2022): 1–10. http://dx.doi.org/10.1155/2022/6454550.

Abstract:
In order to shorten image registration time and improve imaging quality, this paper proposes a fuzzy medical computer vision image information recovery algorithm based on fuzzy sparse representation. Firstly, by constructing a computer vision image acquisition model, the visual features of the fuzzy medical computer vision image are extracted, and feature registration of the image is carried out using 3D visual reconstruction technology. Then, by establishing a multidimensional histogram structure model, the wavelet multidimensional scale feature detection method is used to extract grayscale features of fuzzy medical computer vision images. Finally, the fuzzy sparse representation algorithm is used to automatically optimize the images. The experimental results show that the proposed method has a short image registration time, less than 10 ms, and a high peak signal-to-noise ratio (PSNR): with 700 pixels, the PSNR can reach 83.5 dB, which makes the method suitable for computer image restoration.
21

He, Peng, and Feng Gao. "Study on Lane Detection Based on Computer Vision." Advanced Materials Research 765-767 (September 2013): 2229–32. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.2229.

Abstract:
Lane detection is a crucial component of automotive driver assistance systems, aiming to increase the safety, convenience and efficiency of driving. This paper develops a vision-based algorithm for detecting road lanes by extracting edges and finding straight lines using an improved Hough transform. The experimental results indicate that this algorithm is effective and precise. Furthermore, the algorithm paves the way for the implementation of automotive driver assistance systems.
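The edge-extraction-plus-Hough pipeline described here maps directly onto OpenCV's probabilistic Hough transform; below is a minimal sketch, with thresholds and the input filename as assumptions rather than the paper's values.

```python
# Canny edges followed by probabilistic Hough line detection.
import cv2
import numpy as np

frame = cv2.imread("road.png")                    # hypothetical dashcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=20)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)   # draw lane candidates
```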
22

Wang, Hai. "Research on the Real-Time Infrared Tracking Athletics Image Registration Based on Computer Vision." Advanced Materials Research 791-793 (September 2013): 2002–6. http://dx.doi.org/10.4028/www.scientific.net/amr.791-793.2002.

Abstract:
With the development of computer hardware, computer vision technology has been applied in the engineering field. Using computer vision technology, the image tracking registration algorithm is researched in depth, and, based on rigid-body transformation, affine and projective transformations of images and the linear superposition method, this paper develops a wavelet reconstruction algorithm and program. The paper uses the general image processing software MATLAB to process images of track and field sports, and then numerically simulates computer-vision infrared tracking image registration. We find that the standard deviation of the wavelet reconstruction algorithm reaches 72.3258, the maximum among the four algorithms; entropy and joint entropy reach 5.2332 and 6.2369, respectively; and after the pixel count reaches 800, the coincidence degree is still maintained at more than 90%. From the fusion degree of the images and the standard deviation of grey entropy, it can be seen that the image registration achieves a good effect.
23

Kumar, Ravindra. "Study of Impact of Computer Vision in Detecting Human Emotions." International Journal of Innovative Technology and Exploring Engineering 10, no. 11 (September 30, 2021): 82–83. http://dx.doi.org/10.35940/ijitee.j9394.09101121.

Abstract:
Emotions play a powerful role in people's thinking and behavior. Emotions act as a compulsion to take action and can influence daily life decisions. Human facial expressions show that humans share the same set of emotions. From this setting, the concept of emotion-sensing facial recognition arose. Researchers have been working actively on computer vision algorithms that help determine the emotions of an individual and the set of intentions accompanying those emotions. Emotion-sensing facial-expression computers are designed using data-centric machine learning skills and achieve their purpose through emotion identification and the set of intentions related to the emotion obtained.
24

Guo, Lang, and Jian Wang. "A Facial Expression Recognition Algorithm Based on Computer Vision." Applied Mechanics and Materials 380-384 (August 2013): 4057–60. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.4057.

Abstract:
Analyzing the defects of two-dimensional facial expression recognition algorithm, this paper proposes a new three-dimensional facial expression recognition algorithm. The algorithm is tested in JAFFE facial expression database. The results show that the proposed algorithm dynamically determines the size of the local neighborhood according to the manifold structure, effectively solves the problem of facial expression recognition, and has good recognition rate.
25

Ivorra, Eugenio, Mario Ortega, José Catalán, Santiago Ezquerro, Luis Lledó, Nicolás Garcia-Aracil, and Mariano Alcañiz. "Intelligent Multimodal Framework for Human Assistive Robotics Based on Computer Vision Algorithms." Sensors 18, no. 8 (July 24, 2018): 2408. http://dx.doi.org/10.3390/s18082408.

Abstract:
Assistive technologies help all persons with disabilities to improve their accessibility in all aspects of their life. The AIDE European project contributes to the improvement of current assistive technologies by developing and testing a modular and adaptive multimodal interface customizable to the individual needs of people with disabilities. This paper describes the computer vision algorithms part of the multimodal interface developed inside the AIDE European project. The main contribution of this computer vision part is the integration with the robotic system and with the other sensory systems (electrooculography (EOG) and electroencephalography (EEG)). The technical achievements solved herein are the algorithm for the selection of objects using the gaze, and especially the state-of-the-art algorithm for the efficient detection and pose estimation of textureless objects. These algorithms were tested in real conditions, and were thoroughly evaluated both qualitatively and quantitatively. The experimental results of the object selection algorithm were excellent (object selection over 90%) in less than 12 s. The detection and pose estimation algorithms evaluated using the LINEMOD database were similar to the state-of-the-art method, and were the most computationally efficient.
26

Niu, Cai-Wen, Shi-Hui Dong, and Yan Zeng. "Research on Improved Algorithm of Mobile Robot Vision SLAM Based on Depth Camera." 電腦學刊 (Journal of Computers) 33, no. 5 (October 2022): 197–211. http://dx.doi.org/10.53106/199115992022103305017.

Abstract:
Aiming at the problems of low accuracy, poor real-time performance and large cumulative error in traditional depth-camera visual simultaneous localization and mapping (SLAM) algorithms, this paper proposes an improved visual SLAM algorithm for mobile robots based on a depth camera. Following the realization process of robot vision, a feature point extraction algorithm based on feature extraction and the watershed image-segmentation algorithm is proposed for the front end, which improves the real-time performance of the algorithm. The RE-RANSAC algorithm is then used to eliminate mismatched feature points and improve matching accuracy, accumulated error is eliminated through closed-loop detection, and finally the robot mapping process is completed. Simulation experiments prove the feasibility of the improved algorithm, and the robot's mapping and trajectory estimation are completed.
27

Romachev, Artem, Valentin Kuznetsov, Egor Ivanov, and Benndorf Jörg. "Flotation froth feature analysis using computer vision technology." E3S Web of Conferences 192 (2020): 02022. http://dx.doi.org/10.1051/e3sconf/202019202022.

Abstract:
The possibility of applying machine vision to the evaluation of flotation efficiency was studied. An algorithm for froth image analysis was developed with the aim of obtaining the bubble size distribution. The algorithm consists of two parts: image processing and object detection. The algorithm's operation was verified on sulfide flotation froth. As a result, mathematical correlations between air flow rate, mean bubble diameter and surface-area bubble flux were established.
28

Lebedev, A. O., and V. V. Vasil’ev. "UAV Control Algorithm in Automatic Mode Using Computer Vision." Optoelectronics, Instrumentation and Data Processing 57, no. 4 (July 2021): 406–11. http://dx.doi.org/10.3103/s8756699021040075.

29

Dai, Huiming, Xin Zhang, and Dacheng Yang. "Road traffic sign recognition algorithm based on computer vision." International Journal of Computational Vision and Robotics 8, no. 1 (2018): 85. http://dx.doi.org/10.1504/ijcvr.2018.090026.

30

Dai, Huiming, Dacheng Yang, and Xin Zhang. "Road traffic sign recognition algorithm based on computer vision." International Journal of Computational Vision and Robotics 8, no. 1 (2018): 85. http://dx.doi.org/10.1504/ijcvr.2018.10008251.

31

Kim, Junghwan, and Shik Kim. "Autonomous-flight Drone Algorithm use Computer vision and GPS." IEMEK Journal of Embedded Systems and Applications 11, no. 3 (June 30, 2016): 193–200. http://dx.doi.org/10.14372/iemek.2016.11.3.193.

32

Xu, Pengfei, Zhiqing Zhou, and Zexun Geng. "Technical Research on Moving Target Monitoring and Intelligent Tracking Algorithm Based on Machine Vision." Wireless Communications and Mobile Computing 2022 (May 27, 2022): 1–13. http://dx.doi.org/10.1155/2022/7277926.

Abstract:
Machine vision is an important branch of the rapid development of modern artificial intelligence, and it is a key technology to convert the image information of monitoring targets into digital signals. However, due to the wide range of machine vision applications, this research focuses on its application in video surveillance. In the era of artificial intelligence, the detection and tracking of moving objects have always been a key issue in video surveillance. The simulation of human vision is realized by combining the relevant functions of the computer and the image acquisition device, which enables the computer to have the ability to recognize the surrounding environment through images. The intelligent video analysis technology can automatically analyze and extract the key useful information from the video source with the powerful data processing ability of the computer, so as to realize the computer’s “understanding” of the video. It allows the computer to “understand” what is shown in the video or what kind of “event” happened and provides a new method and reliable basis for accident detection and accident analysis. Therefore, after a brief introduction to machine vision, moving target monitoring methods, and intelligent tracking algorithms, this paper will focus on moving target monitoring and intelligent tracking strategies for video surveillance. In addition, this paper will focus on introducing the principle of intelligent tracking algorithm through formulas and compare the accuracy and success rate of target monitoring and intelligent tracking between the machine vision-based algorithm and other algorithms during the experiment. Finally, experiments show that the monitoring and tracking effect of machine vision combined with “cloud” is the best, and the overall average can reach 85.7%. Based on this, this paper fully confirms the feasibility of the moving target monitoring and intelligent tracking algorithm based on machine vision.
33

Zhang, Zheng, Cong Huang, Fei Zhong, Bote Qi, and Binghong Gao. "Posture Recognition and Behavior Tracking in Swimming Motion Images under Computer Machine Vision." Complexity 2021 (May 20, 2021): 1–9. http://dx.doi.org/10.1155/2021/5526831.

Abstract:
This study explores gesture recognition and behavior tracking in swimming motion images under computer machine vision, and expands the application of moving target detection and tracking algorithms based on computer machine vision in this field. The objectives are realized through moving target detection and tracking, a Gaussian mixture model, an optimized correlation filtering algorithm, and the Camshift tracking algorithm. Firstly, the Gaussian algorithm is introduced into target tracking and detection to reduce filtering loss and make the acquired motion posture more accurate. Secondly, an improved kernel correlation filter tracking algorithm is proposed, training multiple filters, which can clearly and accurately obtain the motion trajectory of the monitored target. Finally, the Kalman algorithm is combined with the Camshift algorithm for optimization, which completes the tracking and recognition of moving targets. The experimental results show that the target tracking and detection method can obtain the movement of the template object relatively completely, and the kernel correlation filter tracking algorithm can also obtain the movement speed of the target object accurately. In addition, the accuracy of the Camshift tracking algorithm can reach 86.02%. The results of this study can provide reliable data support and a reference for expanding the application of moving target detection and tracking methods.
34

Gaonkar, Needhi U. "Road Traffic Analysis Using Computer Vision." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2002–6. http://dx.doi.org/10.22214/ijraset.2021.37630.

Abstract:
Traffic analysis plays an important role in a transportation system for traffic management. This paper proposes a video-based vehicle detection and counting system based on computer vision. In most transportation systems, cameras are installed in fixed locations. Vehicle detection is the most important requirement of the traffic analysis task; vehicle detection, tracking, classification and counting are very useful to people and government for traffic flow, highway monitoring and traffic planning. Vehicle analysis supplies information about traffic flow and traffic peak times on the road. The goal of visual object detection is to locate the vehicle position, and tracking in successive frames then connects target vehicles across frames. Recognizing what kind of vehicle appears in an ongoing video is also helpful for traffic analysis: this system can classify vehicles into bicycle, bus, truck, car and motorcycle. The system uses a video-based vehicle counting method on highway traffic video captured by a CCTV camera. The project presents an analysis of the tracking-by-detection approach, with detection by YOLO (You Only Look Once) and tracking by the SORT (Simple Online and Realtime Tracking) algorithm. Keywords: Vehicle detection, Vehicle tracking, Vehicle counting, YOLO, SORT, Analysis, Kalman filter, Hungarian algorithm.
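A minimal detect-then-track counting sketch in the spirit of this pipeline is shown below. It assumes the ultralytics package for YOLO (an assumption, not the author's code), and abbreviates the SORT stage (Kalman filter plus Hungarian assignment) to the library's built-in tracker for brevity; the video filename is a placeholder.

```python
# Detect-then-track vehicle counting; each persistent track id counts once.
from ultralytics import YOLO  # assumed dependency: pip install ultralytics

model = YOLO("yolov8n.pt")    # pretrained COCO model (car, bus, truck, ...)
counted_ids = set()

for result in model.track(source="highway.mp4", stream=True, persist=True):
    for box in result.boxes:
        if box.id is not None:
            counted_ids.add(int(box.id))   # track id assigned by the tracker
print("vehicles seen:", len(counted_ids))
```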
35

FOGGIA, PASQUALE, GENNARO PERCANNELLA, CARLO SANSONE, and MARIO VENTO. "A GRAPH-BASED ALGORITHM FOR CLUSTER DETECTION." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 05 (August 2008): 843–60. http://dx.doi.org/10.1142/s0218001408006557.

Abstract:
In some computer vision applications there is a need to group only part of the whole dataset into one or more clusters. This happens, for example, when samples of interest for the application at hand are present together with many noisy samples. In this paper we present a graph-based algorithm for cluster detection that is particularly suited for detecting clusters of any size and shape, without the need to specify either the actual number of clusters or other parameters. The algorithm has been tested on data coming from two different computer vision applications. A comparison with four other state-of-the-art graph-based algorithms is also provided, demonstrating the effectiveness of the proposed approach.
36

Jankar, Ms P. A. "Computer Vision Crowd Detection System." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 10, 2021): 438–41. http://dx.doi.org/10.22214/ijraset.2021.34996.

Abstract:
Nowadays, everyone is facing the problem of a very dangerous disease, COVID-19. The virus is transmitted when an infected person coughs, sneezes or exhales. Avoiding close face-to-face contact with other people is therefore the best way to minimize the spread of coronavirus disease 2019 (COVID-19). COVID-19 spreads mainly among people who are in close contact for a prolonged period, a problem that mostly occurs in public places like colleges, schools, malls and stations. We need a surveillance system that can monitor and detect whether people are following social distancing or not. This paper proposes an artificial intelligence system for monitoring the distancing of groups of persons using images. An algorithm is also implemented for measuring and ordering the distances between persons and automatically checking whether social distancing rules are respected.
37

Kadoura, A., and E. P. Small. "Tracking Productivity in Real-time Using Computer Vision." IOP Conference Series: Materials Science and Engineering 1218, no. 1 (January 1, 2022): 012041. http://dx.doi.org/10.1088/1757-899x/1218/1/012041.

Abstract:
The construction industry is lagging behind other industries in terms of productivity gains, with stagnant growth over several decades. The reasons for the lack of growth are complex and multifaceted, yet all causes and the resulting effects are realized on the output of the workers and machinery on-site. Often, typical site management practices do not identify productivity issues in a timely fashion, when corrective actions may impact construction activity progress. Better management tools are required to optimize the productivity of on-site crews. This promises to positively impact the completion of individual tasks, work packages, projects, and ultimately the industry with respect to productivity, which is the driving force behind this research effort. The approach pursued focuses on utilizing commercially available cameras together with computer vision and artificial intelligence. With these tools, the main objective is to develop an algorithm capable of providing real-time feedback on the productivity of workers. Ideally, the algorithm will further identify each type of activity being conducted, thereby capturing useful productivity data. The initial model is a simple classifier that checks whether a worker performs work or stands idle. This is performed through algorithms that identify and track a person's pose and joints and translate them to data points that can be evaluated and translated into helpful productivity measures. Finally, successfully developing a model capable of providing real-time productivity data will allow project managers and planners to better manage and utilize on-site resources. Additionally, since a large amount of data is collected and saved, trends in productivity levels can be tracked and studied further to optimize and improve them.
38

Zhao, Yi Zhi, Huan Wang, and Guo Cai Yin. "Research on Mean Shift Algorithm." Advanced Materials Research 756-759 (September 2013): 4021–25. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.4021.

Abstract:
Computer vision is a diverse and relatively new field of study. Object tracking plays a crucial role as a preliminary step for high-level image processing in the field of computer vision. However, the mean shift algorithm has some defects in target tracking: using a fixed bandwidth for probability density estimation usually causes under- or over-smoothing; a moving target often undergoes partial or complete occlusion due to the complexity of the background; and background pixels in the object model induce localization error in object tracking. Therefore, this paper elaborates several algorithms that solve some of these problems. After discussing the application of mean shift in the field of target tracking, the paper presents an improved mean shift algorithm that combines mean shift with a Kalman filter.
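The combination the paper arrives at can be sketched with OpenCV: mean shift climbs the mode of a colour back-projection, and a Kalman filter smooths the window position and can bridge short occlusions. The window coordinates, video filename, and noise covariances below are placeholders, not the paper's values.

```python
# Mean shift tracking with a Kalman filter smoothing the measurements.
import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")               # hypothetical input
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                     # assumed initial window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

kf = cv2.KalmanFilter(4, 2)                       # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(back, (x, y, w, h), term)
    kf.correct(np.array([[x], [y]], np.float32))  # fuse mean-shift measurement
    pred = kf.predict()                           # smoothed/predicted position
```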
39

Vora, Parshva, and Sudhir Shrestha. "Detecting Diabetic Retinopathy Using Embedded Computer Vision." Applied Sciences 10, no. 20 (October 17, 2020): 7274. http://dx.doi.org/10.3390/app10207274.

Abstract:
Diabetic retinopathy is one of the leading causes of vision loss in the United States and other countries around the world. People who have diabetic retinopathy may not have symptoms until the condition becomes severe, which may eventually lead to vision loss. Thus, medically underserved populations are at an increased risk of diabetic retinopathy-related blindness. In this paper, we present development efforts on an embedded vision algorithm that can classify healthy versus diabetic retinopathic images. A convolutional neural network and a k-fold cross-validation process were used. We used 88,000 labeled high-resolution retina images obtained from the publicly available Kaggle/EyePacs database. The trained algorithm was able to detect diabetic retinopathy with up to 76% accuracy. Although the accuracy needs to be further improved, the presented results represent a significant step towards detecting diabetic retinopathy using embedded computer vision. This technology has the potential to detect diabetic retinopathy without a visit to an eye specialist in remote and medically underserved locations, which can have significant implications in reducing diabetes-related vision loss.
40

Watanabe, Toshihiko, Takeshi Kamai, and Tomoki Ishimaru. "Robust Estimation of Camera Homography by Fuzzy RANSAC Algorithm with Reinforcement Learning." Journal of Advanced Computational Intelligence and Intelligent Informatics 19, no. 6 (November 20, 2015): 833–42. http://dx.doi.org/10.20965/jaciii.2015.p0833.

Abstract:
The computer vision approach involves many modeling problems in dealing with noise caused by sensing units such as cameras and projectors. To improve computer vision modeling performance, a robust modeling technique must be developed for the essential models in the system. The RANSAC and LMedS algorithms have been widely applied to such issues, but their performance deteriorates as the noise ratio increases, and their calculation time tends to increase in actual applications. In this study, we propose a new fuzzy RANSAC algorithm for homography estimation based on the reinforcement learning concept. The performance of the algorithm is evaluated by modeling synthetic data and by camera homography experiments. The results show the method to be effective in improving calculation time, model optimality, and robustness of modeling performance.
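For reference, the classical (non-fuzzy) RANSAC homography estimation that this paper improves on is a one-liner in OpenCV; the synthetic point data below is a placeholder standing in for real feature matches.

```python
# Baseline RANSAC homography with OpenCV (not the paper's fuzzy variant).
import cv2
import numpy as np

# src/dst are matched point sets; here synthesized from a known homography.
src = (np.random.rand(50, 1, 2) * 640).astype(np.float32)
H_true = np.array([[1, 0.1, 5], [0, 1, -3], [0, 0, 1]], np.float32)
dst = cv2.perspectiveTransform(src, H_true)

H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
print("inlier ratio:", inlier_mask.mean())
```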
41

Liu, Yishu, and Jun Li. "Brand Marketing Decision Support System Based on Computer Vision and Parallel Computing." Wireless Communications and Mobile Computing 2022 (March 30, 2022): 1–14. http://dx.doi.org/10.1155/2022/7416106.

Abstract:
With the rapid development of information technology, decision support systems that can assist business managers in making scientific decisions have become a focus of research. At present there are few related studies, and at the brand-marketing level few of them combine smart technology. Based on computer vision technology and parallel computing algorithms, this paper presents an in-depth study of brand marketing decision support systems. First, computer vision technology and the Viola-Jones face detection framework are used to detect consumers' faces, and the classic convolutional neural network model AlexNet is used for gender judgment and age prediction to analyze consumer groups. Then, parallel computing is used to optimize the genetic algorithm and improve its running speed. The brand marketing decision support system is designed on the basis of the above technology and algorithms, the relevant data of the L brand are analyzed, and the functional structure of the system is divided into three parts: customer market analysis, performance evaluation, and demand forecasting. The ROC curve of the Viola-Jones face detection framework shows its superior performance. After 500 iterations of the AlexNet model, the validation-set loss of the network is stable at 1.8, and the validation-set accuracy is stable at 38%. The parallel genetic algorithm runs 1.8 times faster than the serial genetic algorithm at the lowest and 9 times faster at the highest. The minimum prediction error is 0.17% and the maximum is 2%, which shows that the system can make accurate predictions based on previous years' data. Computer vision is a technique that converts still image or video data into a decision or a new representation; all such transformations are done to accomplish a specific purpose. Therefore, a brand marketing decision support system based on computer vision and parallel computing can help managers make scientific decisions, save production costs, reduce inventory pressure, and enhance the brand's competitive advantage.
42

Gil, Ángel, Miguel Márquez, Erica Chacón, and Angélica Ramirez. "An Artificial Vision-Based Computer Interface." Journal of Konbin 8, no. 1 (January 1, 2008): 27–34. http://dx.doi.org/10.2478/v10040-008-0097-4.

Abstract:
An application has been developed to assist individuals who have no or limited mobility of their upper limbs in using software applications. The system uses a web camera for recognizing movement patterns of the user's face. The image analysis emulates the basic functions of the computer mouse. The face detection process is carried out by an algorithm that uses a Haar-like cascade classifier. The application was developed in C++ to be used on the Windows XP platform. Trials of the application have shown excellent acceptance and short training-time requirements.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Hongxia, Xue-Xue Kang, and Yang Yu. "Nighttime Pedestrian Ranging Algorithm Based on Monocular Vision." Cybernetics and Information Technologies 16, no. 5 (October 1, 2016): 156–67. http://dx.doi.org/10.1515/cait-2016-0062.

Full text
Abstract:
Since traditional computer vision ranging algorithms are imperfect in pertinence and precision, a nighttime monocular-vision pedestrian ranging method is proposed for vehicular infrared night-vision goggles. First, the method calibrates the internal and external parameters of the infrared night-vision goggles; then it corrects the distortion of the collected vehicular infrared night-vision images; finally, it ranges target pedestrians using the nighttime monocular-vision pedestrian ranging algorithm. The experimental results show that this method is well targeted, highly precise, and performs well in real time, and that it has good practicability.
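A rough sketch of the two geometric steps (distortion correction with calibrated intrinsics, then pinhole-model ranging from a known pedestrian height) might look as follows; the intrinsics, distortion coefficients, and the 1.7 m height are placeholder assumptions, not values from the paper:

```python
import cv2
import numpy as np

# Intrinsics and distortion coefficients would come from offline calibration
# (e.g., cv2.calibrateCamera on a checkerboard); the values here are placeholders.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("ir_frame.png")               # hypothetical infrared night-vision frame
undistorted = cv2.undistort(img, K, dist)      # correct lens distortion first

# Pinhole ranging: if a pedestrian of real height H (metres) spans h pixels,
# the distance along the optical axis is approximately Z = f * H / h.
def pedestrian_distance(bbox_h_px, real_height_m=1.7, fy=K[1, 1]):
    return fy * real_height_m / bbox_h_px

print(f"~{pedestrian_distance(120):.1f} m for a 120-px-tall detection")
```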
APA, Harvard, Vancouver, ISO, and other styles
44

Singh, Amit, Albert Haque, Alexandre Alahi, Serena Yeung, Michelle Guo, Jill R. Glassman, William Beninati, Terry Platchek, Li Fei-Fei, and Arnold Milstein. "Automatic detection of hand hygiene using computer vision technology." Journal of the American Medical Informatics Association 27, no. 8 (July 26, 2020): 1316–20. http://dx.doi.org/10.1093/jamia/ocaa115.

Full text
Abstract:
Objective: Hand hygiene is essential for preventing hospital-acquired infections but is difficult to accurately track. The gold standard (human auditors) is insufficient for assessing true overall compliance. Computer vision technology has the ability to perform more accurate appraisals. Our primary objective was to evaluate whether a computer vision algorithm could accurately observe hand hygiene dispenser use in images captured by depth sensors. Materials and Methods: Sixteen depth sensors were installed on one hospital unit. Images were collected continuously from March to August 2017. Utilizing a convolutional neural network, a machine learning algorithm was trained to detect hand hygiene dispenser use in the images. The algorithm’s accuracy was then compared with simultaneous in-person observations of hand hygiene dispenser usage. The concordance rate between human observation and the algorithm’s assessment was calculated. Ground truth was established by blinded annotation of the entire image set. Sensitivity and specificity were calculated for both human- and machine-level observation. Results: A concordance rate of 96.8% was observed between human and algorithm (kappa = 0.85). Concordance among the 3 independent auditors who established ground truth was 95.4% (Fleiss’s kappa = 0.87). Sensitivity and specificity of the machine learning algorithm were 92.1% and 98.3%, respectively. Human observations showed sensitivity and specificity of 85.2% and 99.4%, respectively. Conclusions: A computer vision algorithm was equivalent to human observation in detecting hand hygiene dispenser use. Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold standard, given its ability for continuous coverage of a unit in space and time.
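The reported metrics are standard confusion-matrix quantities; the sketch below recomputes sensitivity, specificity, concordance, and Cohen's kappa from illustrative counts chosen to roughly match the reported figures (the study's raw counts are not given in the abstract):

```python
# Metrics used in the study, computed from a 2x2 confusion matrix against
# ground truth. The counts below are illustrative placeholders.
def observer_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    concordance = (tp + tn) / n                  # raw agreement with ground truth
    # Cohen's kappa: agreement corrected for chance agreement.
    p_o = concordance
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return sensitivity, specificity, concordance, kappa

sens, spec, conc, kappa = observer_metrics(tp=175, fp=15, fn=15, tn=795)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} "
      f"concordance={conc:.3f} kappa={kappa:.2f}")
```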
APA, Harvard, Vancouver, ISO, and other styles
45

Álvarez-Tuñón, Olaya, Alberto Jardón, and Carlos Balaguer. "Generation and Processing of Simulated Underwater Images for Infrastructure Visual Inspection with UUVs." Sensors 19, no. 24 (December 12, 2019): 5497. http://dx.doi.org/10.3390/s19245497.

Full text
Abstract:
The development of computer vision algorithms for navigation or object detection is one of the key issues of underwater robotics. However, extracting features from underwater images is challenging due to the presence of lighting defects, which need to be counteracted. This requires good environmental knowledge, either as a dataset or as a physical model; the lack of available data and the high variability of the conditions make the development of robust enhancement algorithms difficult. A framework for the development of underwater computer vision algorithms is presented, consisting of a method for underwater imaging simulation and an image enhancement algorithm, both integrated into the open-source robotics simulator UUV Simulator. The imaging simulation is based on a novel combination of the scattering model and style transfer techniques. The use of style transfer allows a realistic simulation of different environments without any prior knowledge of them. Moreover, an enhancement algorithm has been developed that successfully corrects the imaging defects in any given scenario, for either real or synthetic images. The proposed approach thus showcases a novel framework for the development of underwater computer vision algorithms for SLAM, navigation, or object detection in UUVs.
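The scattering component of the simulation is the classic image-formation model I = J·t + B·(1 − t) with transmission t = exp(−β·d); a minimal sketch under assumed per-channel coefficients (the paper's style-transfer stage is omitted here) follows:

```python
import cv2
import numpy as np

def simulate_underwater(img_bgr, depth_m, beta=(0.1, 0.3, 0.8), backlight=(0.7, 0.6, 0.1)):
    """Apply the scattering image-formation model
       I = J * t + B * (1 - t),  t = exp(-beta * d),
    with per-channel attenuation beta in BGR order (red attenuates fastest
    underwater). Parameter values are illustrative, not from the paper."""
    J = img_bgr.astype(np.float32) / 255.0
    t = np.exp(-np.asarray(beta, np.float32) * depth_m)   # per-channel transmission
    B = np.asarray(backlight, np.float32)                 # veiling-light colour
    I = J * t + B * (1.0 - t)
    return (np.clip(I, 0, 1) * 255).astype(np.uint8)

clean = cv2.imread("pipeline.png")                        # hypothetical in-air image
hazy = simulate_underwater(clean, depth_m=3.0)            # synthetic underwater view
```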
APA, Harvard, Vancouver, ISO, and other styles
46

Huang, Shen Gui, Nian Duan, Xiu Yu Chen, and Xi Peng Xu. "Application of Image Mosaic Method in the Detection of Grinding Wheel Topography." Advanced Materials Research 500 (April 2012): 302–7. http://dx.doi.org/10.4028/www.scientific.net/amr.500.302.

Full text
Abstract:
The topography of a grinding wheel can be obtained quickly and accurately by applying computer vision methods to its inspection. However, the application of computer vision to wheel-topography inspection is restricted by the trade-off between field of view and resolution in traditional detection methods. In the present paper, the 3D topography of a diamond grinding wheel was reconstructed by combining an image mosaic technique with corner detection, image matching, and image fusion algorithms. The image mosaic technique was found to be effective in resolving the contradiction between field of view and resolution, rapidly yielding a high-resolution image of the wheel topography over a wider field of view and thereby providing a valuable reference for quantitative evaluation of grinding wheel performance.
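The mosaic pipeline named in the abstract (corner detection, matching, fusion) can be sketched with off-the-shelf OpenCV components; ORB corners, brute-force matching, and paste-based fusion below are stand-ins for the paper's unspecified algorithms:

```python
import cv2
import numpy as np

def mosaic_pair(img1, img2):
    """Stitch two overlapping grayscale micrographs: corner detection (ORB),
    descriptor matching, homography estimation, and a simple paste-based fusion."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Hamming-distance brute-force matching with cross-checking for reliability.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))  # warp img2 into img1's frame
    canvas[0:h, 0:w] = img1                            # naive fusion: overwrite overlap
    return canvas
```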
APA, Harvard, Vancouver, ISO, and other styles
47

Johan, Elisa Belinda, and Aminuddin Rizal. "Allergen Recognition in Food Ingredients with Computer Vision." Ultima Computing : Jurnal Sistem Komputer 13, no. 2 (December 30, 2021): 44–49. http://dx.doi.org/10.31937/sk.v13i2.2051.

Full text
Abstract:
The recognition and classification of food is very important: it can be useful for consumers who are careful in choosing the foods they consume, considering that some food ingredients are allergens that can cause allergies in some people. This paper aims to design and build an Android-based system to detect food ingredients, making it easier for consumers to obtain information about all allergens contained in a food product. The application is created by implementing an Optical Character Recognition (OCR) algorithm and using the Boyer-Moore algorithm for word matching (string matching). Experiments were performed with trials of OCR, Boyer-Moore, different light sources, and technical (uncommon) words. Our experiments show that more than 90% accuracy is obtained across the different scenarios applied.
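The string-matching stage is the classic Boyer-Moore idea; below is a minimal sketch using only the bad-character rule (the full algorithm adds the good-suffix rule), applied to hypothetical OCR output:

```python
def boyer_moore_search(text, pattern):
    """Boyer-Moore with the bad-character rule only.
    Returns the index of the first match, or -1 if the pattern is absent."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return -1
    last = {c: i for i, c in enumerate(pattern)}  # last occurrence of each character
    s = 0
    while s <= n - m:
        j = m - 1
        while j >= 0 and pattern[j] == text[s + j]:  # compare right to left
            j -= 1
        if j < 0:
            return s  # full match at shift s
        # Shift so the mismatched text character aligns with its last
        # occurrence in the pattern (or skips past it entirely if absent).
        s += max(1, j - last.get(text[s + j], -1))
    return -1

ocr_text = "ingredients: wheat flour, sugar, soy lecithin, milk powder"
for allergen in ("soy", "peanut", "milk"):
    print(allergen, "found" if boyer_moore_search(ocr_text, allergen) >= 0 else "not found")
```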
APA, Harvard, Vancouver, ISO, and other styles
48

Supriyatin, Wahyu. "Perbandingan Metode Sobel, Prewitt, Robert dan Canny pada Deteksi Tepi Objek Bergerak." ILKOM Jurnal Ilmiah 12, no. 2 (August 27, 2020): 112–20. http://dx.doi.org/10.33096/ilkom.v12i2.541.112-120.

Full text
Abstract:
Computer vision is a field of image processing. To recognize a shape, an initial stage of image processing is required, namely edge detection. The objects tracked in computer vision here are moving objects (video). Edge detection is used to recognize the edges of objects and to reduce existing noise. The edge detection algorithms used in this research are Sobel, Prewitt, Roberts, and Canny. Tests were carried out on three videos taken from the Matlab library, using Matlab's Simulink tools. The edge and overlay test results show that the Prewitt algorithm yields better edge detection results than the other algorithms: it produces edges that are smoother, clearer, and more faithful to the original object. The Canny algorithm failed to produce edges on the video objects. The Sobel and Roberts algorithms can detect edges, but not as clearly as Prewitt, because some edges are missing.
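The four operators can also be reproduced outside Simulink; a Python/OpenCV equivalent is sketched below, where Prewitt and Roberts are applied via explicit convolution kernels since OpenCV has no dedicated call for them (the input file is a placeholder for one frame of a test video):

```python
import cv2
import numpy as np

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # one frame of the test video

# Sobel: first-derivative operator with built-in smoothing.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
sobel = cv2.magnitude(gx, gy)

# Prewitt and Roberts: apply their kernels directly with filter2D.
prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
prewitt_y = prewitt_x.T
prewitt = cv2.magnitude(cv2.filter2D(gray, cv2.CV_32F, prewitt_x),
                        cv2.filter2D(gray, cv2.CV_32F, prewitt_y))

roberts_x = np.array([[1, 0], [0, -1]], np.float32)
roberts_y = np.array([[0, 1], [-1, 0]], np.float32)
roberts = cv2.magnitude(cv2.filter2D(gray, cv2.CV_32F, roberts_x),
                        cv2.filter2D(gray, cv2.CV_32F, roberts_y))

# Canny: gradient + non-maximum suppression + hysteresis thresholding.
canny = cv2.Canny(gray, 100, 200)
```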
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Jun, and Xiao Hua Ni. "Angle Measurement Based on Computer Vision." Applied Mechanics and Materials 456 (October 2013): 115–19. http://dx.doi.org/10.4028/www.scientific.net/amm.456.115.

Full text
Abstract:
In order to improve the precision and speed of angle measurement, a new method for measuring the angle of a workpiece based on computer vision testing technology is presented in this paper. After the image of the workpiece is obtained, the first step is image preprocessing; then edge detection is applied to the measured workpiece image using the Canny algorithm, so that the specific features of the workpiece edges are fully extracted; line detection is then accomplished using the Hough transform; finally, the angle value is obtained by means of an angle calculation. Practical engineering examples and simulation experiments show that the method has stronger anti-interference ability and higher accuracy and speed than traditional methods.
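The measurement pipeline (preprocessing, Canny edges, Hough line detection, angle calculation) maps directly onto OpenCV primitives; a minimal sketch, with thresholds and input image as assumptions rather than the paper's values:

```python
import cv2
import numpy as np

gray = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
gray = cv2.GaussianBlur(gray, (5, 5), 0)   # preprocessing step of the pipeline

edges = cv2.Canny(gray, 50, 150)           # Canny edge extraction

# Standard Hough transform: each line is returned as (rho, theta),
# where theta is the angle of the line's normal in radians.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)

if lines is not None and len(lines) >= 2:
    theta1, theta2 = lines[0][0][1], lines[1][0][1]
    angle = abs(np.degrees(theta1 - theta2))
    angle = min(angle, 180 - angle)        # interior angle between the two edges
    print(f"measured workpiece angle: {angle:.2f} degrees")
```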
APA, Harvard, Vancouver, ISO, and other styles
50

Li, Nan. "Metal Jewelry Craft Design Based on Computer Vision." Computational Intelligence and Neuroscience 2022 (June 15, 2022): 1–11. http://dx.doi.org/10.1155/2022/3843421.

Full text
Abstract:
Combining computer vision technology with craft design yields a new design and production method that breaks through the limitations of traditional jewelry creation and provides new possibilities for realizing complex jewelry structures. When technology no longer becomes the bottleneck of artistic expression, the space of art is greatly expanded, and technology-led design has become a new way to assist jewelry artists in subjective creation. According to the various thoughts and ideas in a design, establishing corresponding algorithm rules and parameters can generate a scheme through computation. A design result obtained in this way not only has a scientifically logical basis but, owing to the intelligent design process, can also go beyond the normal space of imagination. This paper applies computer vision technology to modern jewelry design, analyzes several aspects of its application in craft design, and combines the latest technical means to put forward algorithms for verification. The results show that computer vision can significantly improve the efficiency of craft design.
APA, Harvard, Vancouver, ISO, and other styles