Academic literature on the topic 'Segmentation; Feature tracking; Computer vision'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Segmentation; Feature tracking; Computer vision.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Segmentation; Feature tracking; Computer vision"

1

Kushwah, Chandra Pal. "Review on Semantic Segmentation of Satellite Images Using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 9, no. VII (July 31, 2021): 3820–29. http://dx.doi.org/10.22214/ijraset.2021.37204.

Full text
Abstract:
Image segmentation is a key subject of image processing and image evaluation, with applications such as scene understanding, medical image analysis, robotic vision, video tracking, augmented reality, and image compression. Semantic segmentation is an integral aspect of image comprehension and is essential for image processing tasks, yet it remains a complex process in computer vision applications. Many techniques have been developed to tackle the issue, spanning self-driving cars, human interaction, robotics, medical science, agriculture, and more. In a short period, satellite imagery can provide a great deal of large-scale knowledge about the earth's surface, saving time. With the growth and development of satellite image sensors, the resolution of recorded objects has improved alongside advanced image processing techniques. To improve the performance of deep learning models in a broad range of vision applications, important work has recently been carried out to evaluate deep learning approaches to image segmentation. This paper provides a detailed overview of image segmentation and describes its techniques, such as region-, edge-, feature-, threshold-, and model-based methods. It also covers semantic segmentation, satellite imagery, and deep learning and its techniques, such as DNN, CNN, RNN, and RBM. Among these, CNN is one of the most efficient deep learning techniques and can be used with the U-Net model in further work.
APA, Harvard, Vancouver, ISO, and other styles
2

KONWAR, LAKHYADEEP, ANJAN KUMAR TALUKDAR, and KANDARPA KUMAR SARMA. "Robust Real Time Multiple Human Detection and Tracking for Automatic Visual Surveillance System." WSEAS TRANSACTIONS ON SIGNAL PROCESSING 17 (August 6, 2021): 93–98. http://dx.doi.org/10.37394/232014.2021.17.13.

Full text
Abstract:
Detection of humans for visual surveillance systems plays a most important role in the advancement of future automation systems. Human detection and tracking are important for future automatic visual surveillance systems (AVSS). In this paper we propose a flexible technique for proper human detection and tracking for the design of an AVSS. We use graph cut to segment humans as foreground by eliminating the background, extract feature points using HOG, apply an SVM classifier for proper classification, and finally use a particle filter to track the detected humans. Our system can easily detect and track humans despite poor lighting conditions and variations in color, size, shape, and clothing due to the use of the HOG feature descriptor and the particle filter. Because we use a graph-cut-based segmentation technique, our system can handle occlusion at about 88%. Due to the use of HOG to extract features, our system works properly in indoor as well as outdoor environments, with 97.61% automatic human detection accuracy and 92% automatic detection-and-tracking accuracy for multiple humans.
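The HOG descriptor at the core of this pipeline can be illustrated with a minimal sketch: each pixel in a cell votes into an orientation bin, weighted by its gradient magnitude. The single-cell function below is a toy illustration of the idea in pure Python, not the authors' implementation (names and parameters are assumptions):

```python
import math

def cell_hog(patch, bins=9):
    """Orientation histogram for one cell of a grayscale patch.

    Each interior pixel votes into an unsigned-orientation bin,
    weighted by its gradient magnitude (central differences).
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang // (180.0 / bins)) % bins] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]                 # L1-normalised descriptor
```

In the full detector, such cell histograms are concatenated over a detection window and fed to the classifier (an SVM in this paper).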
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Yiqing, Jun Chu, Lu Leng, and Jun Miao. "Mask-Refined R-CNN: A Network for Refining Object Details in Instance Segmentation." Sensors 20, no. 4 (February 13, 2020): 1010. http://dx.doi.org/10.3390/s20041010.

Full text
Abstract:
With the rapid development of flexible vision sensors and visual sensor networks, computer vision tasks, such as object detection and tracking, are entering a new phase. Accordingly, the more challenging comprehensive task, including instance segmentation, can develop rapidly. Most state-of-the-art network frameworks, for instance, segmentation, are based on Mask R-CNN (mask region-convolutional neural network). However, the experimental results confirm that Mask R-CNN does not always successfully predict instance details. The scale-invariant fully convolutional network structure of Mask R-CNN ignores the difference in spatial information between receptive fields of different sizes. A large-scale receptive field focuses more on detailed information, whereas a small-scale receptive field focuses more on semantic information. So the network cannot consider the relationship between the pixels at the object edge, and these pixels will be misclassified. To overcome this problem, Mask-Refined R-CNN (MR R-CNN) is proposed, in which the stride of ROIAlign (region of interest align) is adjusted. In addition, the original fully convolutional layer is replaced with a new semantic segmentation layer that realizes feature fusion by constructing a feature pyramid network and summing the forward and backward transmissions of feature maps of the same resolution. The segmentation accuracy is substantially improved by combining the feature layers that focus on the global and detailed information. The experimental results on the COCO (Common Objects in Context) and Cityscapes datasets demonstrate that the segmentation accuracy of MR R-CNN is about 2% higher than that of Mask R-CNN using the same backbone. The average precision of large instances reaches 56.6%, which is higher than those of all state-of-the-art methods. In addition, the proposed method requires low time cost and is easily implemented. 
The experiments on the Cityscapes dataset also prove that the proposed method has great generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Xinyu, Hongbo Gao, Chong Xue, Jianhui Zhao, and Yuchao Liu. "Real-time vehicle detection and tracking using improved histogram of gradient features and Kalman filters." International Journal of Advanced Robotic Systems 15, no. 1 (January 1, 2018): 172988141774994. http://dx.doi.org/10.1177/1729881417749949.

Full text
Abstract:
Intelligent transportation systems and safety driver-assistance systems are important research topics in the field of transportation and traffic management. This study investigates the key problems in front-vehicle detection and tracking based on computer vision. A video of a driven vehicle on an urban structured road is used to predict the subsequent motion of the front vehicle. This study provides the following contributions. (1) A new adaptive threshold segmentation algorithm is presented in the image preprocessing phase. This algorithm is resistant to interference from complex environments. (2) Symmetric computation based on a traditional histogram of gradient (HOG) feature vector is added in the vehicle detection phase. The symmetric HOG feature with AdaBoost classification improves the detection rate of the target vehicle. (3) A motion model based on an adaptive Kalman filter is established. Experiments show that the Kalman filter model's prediction provides a reliable region for eliminating the interference of shadows and sharply decreasing the miss rate.
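The Kalman-filter motion model described above follows the standard predict/update cycle. A minimal 1-D constant-velocity sketch in pure Python is shown below; the class name, state layout, and noise values are illustrative assumptions, not the paper's (adaptive) filter:

```python
class ConstantVelocityKF:
    """1-D Kalman filter with state [position, velocity] and
    position-only measurements (H = [1, 0]); F = [[1, dt], [0, 1]]."""

    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]             # x_k = F x_{k-1}
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q  (Q approximated as diag(q, q))
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + self.q,
                   p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]                     # predicted position (search region)

    def update(self, z):
        y = z - self.x[0]                    # innovation
        s = self.P[0][0] + self.r            # innovation variance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain K = P H^T / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

In a detector-tracker loop, `predict()` gives the search region for the next frame and `update()` folds in the detected vehicle position.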
APA, Harvard, Vancouver, ISO, and other styles
5

Yao, Li Feng, and Jian Fei Ouyang. "Catching Data from Displayers by Machine Vision." Advanced Materials Research 566 (September 2012): 124–29. http://dx.doi.org/10.4028/www.scientific.net/amr.566.124.

Full text
Abstract:
With the emergence of eHealth, the importance of keeping digital personal health statistics is quickly rising in demand. Many current health assessment devices output values to the user without a method of digitally saving the data. This paper presents a method to directly translate the numeric displays of such devices into digital records using machine vision. A wireless machine vision system is designed to image the display, and a tracking algorithm based on SIFT (Scale Invariant Feature Transform) is developed to recognize the numerals in the captured images. First, a local camera captures an image of the display and transfers it wirelessly to a remote computer, which generates gray-scale and binary versions of the image for further processing. Next, the computer applies the watershed segmentation algorithm to divide the image into regions of individual values. Finally, the SIFT features of the segmented images are extracted in sequence and matched one by one with the SIFT features of the ten standard digits from 0 to 9 to recognize the digits on the device's display. The proposed approach can obtain the data directly from the display quickly and accurately with high environmental tolerance. The numeral recognition achieves over 99.2% accuracy and processes an image in less than one second. The proposed method has been applied in the E-health Station, a physiological parameter measuring system that integrates a variety of commercial instruments, such as an OMRON digital thermometer, oximeter, sphygmomanometer, glucometer, and fat monitor, to give a more complete physiological health measurement.
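The final step, matching a segmented digit against the ten reference digits, can be illustrated with a much simpler stand-in than SIFT: normalized cross-correlation against binary templates. This is a toy sketch with 3×5 glyphs, not the paper's feature-based matcher (glyph shapes and names are illustrative):

```python
import math

# toy 3x5 binary glyphs standing in for the reference digits
TEMPLATES = {
    "0": [1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1],
    "1": [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
}

def ncc(a, b):
    """Zero-mean normalised cross-correlation of two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def classify_digit(glyph):
    """Label of the template with the highest correlation score."""
    return max(TEMPLATES, key=lambda d: ncc(glyph, TEMPLATES[d]))
```

The same argmax-over-references structure applies when the similarity score comes from SIFT keypoint matches instead of raw correlation.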
APA, Harvard, Vancouver, ISO, and other styles
6

Khalid, Nida, Munkhjargal Gochoo, Ahmad Jalal, and Kibum Kim. "Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System." Sustainability 13, no. 2 (January 19, 2021): 970. http://dx.doi.org/10.3390/su13020970.

Full text
Abstract:
Due to the constantly increasing demand for automatic tracking and recognition systems, there is a need for more proficient, intelligent and sustainable human activity tracking. The main purpose of this study is to develop an accurate and sustainable human action tracking system that is capable of error-free identification of human movements irrespective of the environment in which those actions are performed. Therefore, in this paper we propose a stereoscopic Human Action Recognition (HAR) system based on the fusion of RGB (red, green, blue) and depth sensors. These sensors give an extra depth of information which enables the three-dimensional (3D) tracking of each and every movement performed by humans. Human actions are tracked according to four features, namely, (1) geodesic distance; (2) 3D Cartesian-plane features; (3) joints Motion Capture (MOCAP) features and (4) way-points trajectory generation. In order to represent these features in an optimized form, Particle Swarm Optimization (PSO) is applied. After optimization, a neuro-fuzzy classifier is used for classification and recognition. Extensive experimentation is performed on three challenging datasets: A Nanyang Technological University (NTU) RGB+D dataset; a UoL (University of Lincoln) 3D social activity dataset and a Collective Activity Dataset (CAD). Evaluation experiments on the proposed system proved that a fusion of vision sensors along with our unique features is an efficient approach towards developing a robust HAR system, having achieved a mean accuracy of 93.5% with the NTU RGB+D dataset, 92.2% with the UoL dataset and 89.6% with the Collective Activity dataset. The developed system can play a significant role in many computer vision-based applications, such as intelligent homes, offices and hospitals, and surveillance systems.
APA, Harvard, Vancouver, ISO, and other styles
7

Shilpa, Mohan Kumar, et al. "An Effective Framework Using Region Merging and Learning Machine for Shadow Detection and Removal." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 10, 2021): 2506–14. http://dx.doi.org/10.17762/turcomat.v12i2.2098.

Full text
Abstract:
Moving cast shadows of moving objects significantly degrade the performance of many high-level computer vision applications such as object tracking, object classification, behavior recognition and scene interpretation. Because they possess motion characteristics similar to their objects, moving cast shadow detection is still challenging. In this paper, the foreground is detected by background subtraction and the shadow is detected by a combination of Mean-Shift and Region Merging segmentation. Using the Gabor method, we obtain the moving targets with texture features. According to the characteristics of shadow in HSV space and the texture features, the shadow is detected and removed to eliminate shadow interference in the subsequent processing of moving targets. Finally, to guarantee the integrity of shadows and objects for further image processing, a simple post-processing procedure is designed to refine the results, which also drastically improves the accuracy of moving shadow detection. Extensive experiments on publicly available datasets demonstrate that the performance of the proposed framework is superior to representative state-of-the-art methods.
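The "characteristics of shadow in HSV space" are commonly encoded as the classical rule that a shadowed pixel keeps roughly the background's hue and saturation but has a lower value. A minimal per-pixel test of that rule is sketched below; the thresholds are illustrative assumptions, not this paper's values:

```python
def is_shadow(fg_hsv, bg_hsv, alpha=0.4, beta=0.93, tau_s=0.15, tau_h=30.0):
    """Classical HSV shadow test: darker, but similar chromaticity.

    fg_hsv / bg_hsv: (hue in degrees, saturation in [0,1], value in [0,1])
    for the same pixel in the current frame and the background model.
    """
    h_f, s_f, v_f = fg_hsv
    h_b, s_b, v_b = bg_hsv
    if v_b == 0:
        return False
    ratio = v_f / v_b                      # shadow dims the pixel...
    dh = abs(h_f - h_b)
    dh = min(dh, 360.0 - dh)               # hue distance on the colour circle
    return (alpha <= ratio <= beta         # ...but not all the way to black
            and abs(s_f - s_b) <= tau_s    # saturation barely changes
            and dh <= tau_h)               # hue barely changes
```

Pixels passing this test are removed from the foreground mask before the moving targets are processed further.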
APA, Harvard, Vancouver, ISO, and other styles
8

Kim, Byung-Gyu, and Dong-Jo Park. "Unsupervised video object segmentation and tracking based on new edge features." Pattern Recognition Letters 25, no. 15 (November 2004): 1731–42. http://dx.doi.org/10.1016/j.patrec.2004.07.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abdulghafoor, Nuha, and Hadeel Abdullah. "Enhancement Performance of Multiple Objects Detection and Tracking for Real-time and Online Applications." International Journal of Intelligent Engineering and Systems 13, no. 6 (December 31, 2020): 533–45. http://dx.doi.org/10.22266/ijies2020.1231.47.

Full text
Abstract:
Multi-object detection and tracking systems represent one of the basic and important tasks of surveillance and video traffic systems. Recently, proposed tracking algorithms have focused on the detection mechanism and have shown significant improvement in performance in the field of computer vision. However, they face many challenges and problems, such as occlusions and trajectory fragmentation, in addition to an increasing number of identity switches and false-positive tracks. In this work, an algorithm is proposed that integrates appearance and visibility features to improve the tracker's performance. It enables us to track multiple objects throughout the video, bridge longer periods of occlusion, and reduce the number of ID switches. Effective and accurate datasets, tools, and metrics were used to measure the efficiency of the proposed algorithm. The experimental results show a great improvement in tracker performance, with accuracy higher than 65%, achieving competitive performance with existing algorithms.
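A standard building block when linking detections across frames, alongside the appearance features this paper fuses, is greedy association on bounding-box overlap (IoU). The sketch below shows only that geometric half, with names chosen for illustration rather than taken from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def greedy_match(tracks, detections, min_iou=0.3):
    """Pair each track with its best unclaimed detection above min_iou."""
    scored = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    pairs, used_tracks, used_dets = [], set(), set()
    for score, ti, di in scored:
        if score < min_iou:
            break                      # remaining pairs overlap too little
        if ti in used_tracks or di in used_dets:
            continue                   # each track/detection matched once
        pairs.append((ti, di))
        used_tracks.add(ti)
        used_dets.add(di)
    return pairs
```

Unmatched detections would start new tracks; unmatched tracks are carried through occlusion or eventually dropped.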
APA, Harvard, Vancouver, ISO, and other styles
10

Volkov, Vladimir Yu, Oleg A. Markelov, and Mikhail I. Bogachev. "IMAGE SEGMENTATION AND OBJECT SELECTION BASED ON MULTI-THRESHOLD PROCESSING." Journal of the Russian Universities. Radioelectronics 22, no. 3 (July 2, 2019): 24–35. http://dx.doi.org/10.32603/1993-8985-2019-22-3-24-35.

Full text
Abstract:
Introduction. Detection, isolation, selection and localization of variously shaped objects in images are essential in a variety of applications. Computer vision systems utilizing television and infrared cameras, synthetic-aperture surveillance radars, as well as laser and acoustic remote sensing systems are prominent examples. Problems such as object identification, tracking and matching, as well as combining information from images available from different sources, are essential. Objective. Design of image segmentation and object selection methods based on multi-threshold processing. Materials and methods. The segmentation methods are classified according to the objects they deal with, including (i) pixel-level threshold estimation and clustering methods, (ii) boundary detection methods, (iii) region-level methods, and (iv) other classifiers, including many non-parametric methods, such as machine learning, neural networks, fuzzy sets, etc. The keynote feature of the proposed approach is that the choice of the optimal threshold for image segmentation among a variety of tested methods is carried out using a posteriori information about the selection results. Results. The results of the proposed approach are compared against the results obtained using the well-known binary integration method. The comparison is carried out using both simulated objects of known shape with additive synthesized noise and observational remote sensing imagery. Conclusion. The article discusses the advantages and disadvantages of the proposed approach for the selection of objects in images, and provides recommendations for their use.
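Choosing an "optimal threshold" from a set of candidates is often grounded in Otsu's criterion: pick the grey level that maximises the between-class variance of the histogram. A minimal single-threshold version is sketched below as a building block; it is not the paper's multi-threshold, a-posteriori selection scheme:

```python
def otsu_threshold(hist):
    """Grey level t maximising between-class variance for the split
    {level <= t} vs {level > t}, given a grey-level histogram."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_b += h                          # background (low-class) weight
        if w_b == 0:
            continue
        w_f = total - w_b                 # foreground (high-class) weight
        if w_f == 0:
            break
        sum_b += t * h
        m_b = sum_b / w_b                 # background mean
        m_f = (sum_all - sum_b) / w_f     # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Sweeping many candidate thresholds and scoring the resulting segmentations mirrors, in miniature, the paper's idea of judging thresholds a posteriori by their selection results.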
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Segmentation; Feature tracking; Computer vision"

1

Wiles, Charles S. "Closing the loop on multiple motions." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Graves, Alex. "GPU-Accelerated Feature Tracking." Wright State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=wright1462372516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Möller, Sebastian. "Image Segmentation and Target Tracking using Computer Vision." Thesis, Linköpings universitet, Datorseende, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-68061.

Full text
Abstract:
In this master thesis the possibility of detecting and tracking objects in multispectral infrared video sequences is investigated. The current method, which uses fixed-size rectangles, has significant disadvantages. These disadvantages are addressed by using image segmentation to estimate the shape of the object. The result of the image segmentation is used to determine the infrared contrast of the object. Our results show that some objects yield very good segmentation, tracking, and shape detection. The objects that perform best are the flares and countermeasures, but helicopters seen from the side, with significant movement, are also better detected with our method. The motion of the object is very important, since movement is the main component in successful shape detection; this is so because helicopters are much colder than flares and engines. Detecting the presence and position of moving objects is easier and can be done quite successfully even with helicopters. Using structure tensors, we can also detect the presence and estimate the position of stationary objects.
In this master's thesis, the possibilities of detecting and tracking objects of interest in multispectral infrared video sequences are investigated. The current method, which uses rectangles of fixed size, has its disadvantages. These are addressed by using image segmentation to estimate the shape of the desired targets. Beyond detection and tracking, we also try to find the shape and contour of objects of interest, so that the more exact fit can be used in contrast calculations. This segmented contour replaces the old fixed rectangles previously used to compute the intensity contrast of objects at the infrared wavelengths. The results presented show that for some objects, such as countermeasures and flares, it is easier to obtain a good contour and good tracking than it is for helicopters, which were another desired target type. The difficulties that arise with helicopters are largely due to their being much cooler, so that parts of the helicopter can be completely hidden in the noise from the image sensor. To compensate for this, methods are used that assume the object moves considerably in the video, so that motion can be used as a detection parameter. This gives good results for video sequences in which the target moves substantially relative to its size.
APA, Harvard, Vancouver, ISO, and other styles
4

Rowe, Simon Michael. "Robust feature search for active tracking." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pretorius, Eugene. "An adaptive feature-based tracking system." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/1441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lan, Xiangyuan. "Multi-cue visual tracking: feature learning and fusion." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/319.

Full text
Abstract:
As an important and active research topic in the computer vision community, visual tracking is a key component in many applications ranging from video surveillance and robotics to human-computer interaction. In this thesis, we propose new appearance models based on multiple visual cues and address several research issues in feature learning and fusion for visual tracking. Feature extraction and feature fusion are two key modules in constructing the appearance model for a tracked target with multiple visual cues. Feature extraction aims to extract informative features for visual representation of the tracked target, and many kinds of hand-crafted feature descriptors which capture different types of visual information have been developed. However, since large appearance variations, e.g., occlusion and illumination changes, may occur during tracking, the target samples may be contaminated/corrupted. As such, the extracted raw features may not be able to capture the intrinsic properties of the target appearance. Besides, without explicitly imposing discriminability, the extracted features may potentially suffer from the background distraction problem. To extract uncontaminated discriminative features from multiple visual cues, this thesis proposes a novel robust joint discriminative feature learning framework which is capable of 1) simultaneously and optimally removing corrupted features and learning reliable classifiers, and 2) exploiting the consistent and feature-specific discriminative information of multiple features. In this way, the features and classifiers learned from potentially corrupted tracking samples can be better utilized for target representation and foreground/background discrimination. As shown by the Data Processing Inequality, information fusion at the feature level contains more information than that at the classifier level. In addition, not all visual cues/features are reliable, and thereby combining all the features may not achieve better tracking performance.
As such, it is more reasonable to dynamically select and fuse multiple visual cues for visual tracking. Based on the aforementioned considerations, this thesis proposes a novel joint sparse representation model in which feature selection, fusion, and representation are performed optimally in a unified framework. By taking advantage of sparse representation, unreliable features are detected and removed while reliable features are fused at the feature level for target representation. In order to capture the non-linear similarity of features, the model is further extended to perform feature fusion in kernel space. Experimental results demonstrate the effectiveness of the proposed model. Since different visual cues extracted from the same object should share some commonalities in their representations, and each feature should also have some diversity to reflect its complementarity in appearance modeling, another important problem in feature fusion is how to learn the commonality and diversity in the fused representations of multiple visual cues to enhance tracking accuracy. Different from existing multi-cue sparse trackers which only consider the commonalities among the sparsity patterns of multiple visual cues, this thesis proposes a novel multiple sparse representation model for multi-cue visual tracking which jointly exploits the underlying commonalities and diversities of different visual cues by decomposing multiple sparsity patterns. Moreover, this thesis introduces a novel online multiple metric learning scheme to efficiently and adaptively incorporate the appearance proximity constraint, which ensures that the learned commonalities of multiple visual cues are more representative. Experimental results on tracking benchmark videos and other challenging videos show that the proposed tracker achieves better performance than the existing sparsity-based trackers and other state-of-the-art trackers.
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Shijun. "Video object segmentation and tracking using VSnakes /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/6038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Roberts, Jonathan Michael. "Attentive visual tracking and trajectory estimation for dynamic scene segmentation." Thesis, University of Southampton, 1994. https://eprints.soton.ac.uk/250163/.

Full text
Abstract:
Intelligent Co-Pilot Systems (ICPS) offer the next challenge to vehicle-highway automation. The key to ICPSs is the detection of moving objects (other vehicles) from a moving observer using a visual sensor. The aim of the work presented in this thesis was to design and implement a feature detection and tracking strategy that is capable of tracking image features independently, in parallel, and in real time, and of clustering/segmenting features by utilising the inherent temporal information contained within feature trajectories. Most images contain areas that are of little or no interest to vision tasks. An attentive, data-driven approach to feature detection and tracking is proposed which aims to increase the efficiency of feature detection and tracking by focusing attention onto relevant regions of the image likely to contain scene structure. This attentive algorithm lends itself naturally to parallelisation, and results from a parallel implementation are presented. A scene may be segmented into independently moving objects based on the assumption that features belonging to the same object will move in an identical way in three dimensions (this assumes objects are rigid). A model for scene segmentation is proposed that uses information contained within feature trajectories to cluster, or group, features into independently moving objects. This information includes: image-plane position, time-to-collision of a feature with the image plane, and the type of motion observed. The Multiple Model Adaptive Estimator (MMAE) algorithm is extended to cope with constituent filters with different states (MMAE2) in an attempt to accurately estimate the time-to-collision of a feature and provide a reliable idea of the type of motion observed (in the form of a model belief measure). Finally, poor state initialisation is identified as a likely prime cause of poor Extended Kalman Filter (EKF) performance (and hence poor MMAE2 performance) when using high-order models.
The idea of the neurofuzzy initialised EKF (NF-EKF) is introduced which attempts to reduce the time for an EKF to converge by improving the accuracy of the EKF's initial state estimates.
APA, Harvard, Vancouver, ISO, and other styles
9

Roychoudhury, Shoumik. "Tracking Human in Thermal Vision using Multi-feature Histogram." Master's thesis, Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/203794.

Full text
Abstract:
Electrical Engineering
M.S.E.E.
This thesis presents a multi-feature histogram approach to track a person in thermal vision. Illumination variation is a primary constraint on the performance of object tracking in the visible spectrum. A thermal infrared (IR) sensor, which measures the heat energy emitted from an object, is less sensitive to illumination variations; therefore, thermal vision has an immense advantage for object tracking in varying illumination conditions. Kernel-based approaches such as the mean shift tracking algorithm, which uses a single-feature histogram for object representation, have gained popularity in the field of computer vision due to their efficiency and robustness in tracking non-rigid objects against significantly complex backgrounds. However, due to the low resolution of IR images, gray-level intensity information alone is not a sufficiently strong cue for object representation using a histogram. A multi-feature histogram, combining gray-level intensity information and edge information, generates an object representation that is more robust in thermal vision. The objective of this research is to develop a robust human tracking system which can autonomously detect, identify and track a person in a complex thermal IR scene. In this thesis the tracking procedure has been adapted from the well-known and efficient mean shift tracking algorithm and modified to enable the fusion of multiple features, increasing the robustness of the tracking procedure in thermal vision. In order to identify the object of interest before tracking, rapid human detection in the thermal IR scene is achieved using the AdaBoost classification algorithm. Furthermore, a computationally efficient body pose recognition method is developed which uses Hu invariant moments for matching object shapes.
An experimental setup consisting of a Forward Looking Infrared (FLIR) camera, mounted on a Pioneer P3-DX mobile robot platform was used to test the proposed human tracking system in both indoor and uncontrolled outdoor environments. The performance evaluation of the proposed tracking system on the OTCBVS benchmark dataset shows improvement in tracking performance in comparison to the traditional mean-shift tracking algorithm. Moreover, experimental results in different indoor and outdoor tracking scenarios involving different appearances of people show tracking is robust under cluttered background, varying illumination and partial occlusion of target object.
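The core similarity measure in mean-shift tracking is the Bhattacharyya coefficient between the target's histogram and a candidate's; fusing intensity and edge cues then amounts to comparing normalised histograms of both features. The sketch below illustrates that comparison only; the fusion weighting and names are assumptions, not the thesis implementation:

```python
import math

def normalise(hist):
    """Scale a histogram so its bins sum to one."""
    s = sum(hist) or 1.0
    return [v / s for v in hist]

def bhattacharyya(p, q):
    """Similarity of two discrete distributions; 1.0 means identical."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def fused_similarity(int_p, edge_p, int_q, edge_q, w=0.5):
    """Blend intensity-histogram and edge-histogram similarities
    between a target (p) and a candidate (q)."""
    return (w * bhattacharyya(normalise(int_p), normalise(int_q))
            + (1 - w) * bhattacharyya(normalise(edge_p), normalise(edge_q)))
```

In the tracking loop, the candidate window maximising this fused score (via mean-shift iterations on the derived pixel weights) becomes the target's new location.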
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
10

Fang, Jian. "Optical Imaging and Computer Vision Technology for Corn Quality Measurement." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/733.

Full text
Abstract:
The official U.S. standards for corn have been available for almost one hundred years, and the corn grading system has been gradually updated over that time. In this thesis, we investigated a fast corn grading system that includes a mechanical part and a computer recognition part. The mechanical system delivers the corn kernels onto the display plate. For the computer recognition algorithms, we extracted common features from each corn kernel and classified them to measure grain quality.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Segmentation; Feature tracking; Computer vision"

1

Video segmentation and its applications. New York: Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ngan, King Ngi, and Hongliang Li. Video Segmentation and Its Applications. Springer, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ngan, King Ngi, and Hongliang Li. Video Segmentation and Its Applications. Springer, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Segmentation; Feature tracking; Computer vision"

1

Kwolek, Bogdan. "Foreground Segmentation via Segments Tracking." In Computer Vision and Graphics, 270–81. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-02345-3_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Özuysal, Mustafa, Vincent Lepetit, François Fleuret, and Pascal Fua. "Feature Harvesting for Tracking-by-Detection." In Computer Vision – ECCV 2006, 592–605. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11744078_46.

3

Makadia, Ameesh. "Feature Tracking for Wide-Baseline Image Retrieval." In Computer Vision – ECCV 2010, 310–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15555-0_23.

4

Southall, B., J. A. Marchant, T. Hague, and B. F. Buxton. "Model based tracking for navigation and segmentation." In Computer Vision — ECCV'98, 797–811. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0055705.

5

Raja, Yogesh, Stephen J. McKenna, and Shaogang Gong. "Segmentation and tracking using colour mixture models." In Computer Vision — ACCV'98, 607–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63930-6_173.

6

Zhang, Zhenli, Xiangyu Zhang, Chao Peng, Xiangyang Xue, and Jian Sun. "ExFuse: Enhancing Feature Fusion for Semantic Segmentation." In Computer Vision – ECCV 2018, 273–88. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_17.

7

Le, Hieu, Vu Nguyen, Chen-Ping Yu, and Dimitris Samaras. "Geodesic Distance Histogram Feature for Video Segmentation." In Computer Vision – ACCV 2016, 275–90. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54181-5_18.

8

Gehrig, Daniel, Henri Rebecq, Guillermo Gallego, and Davide Scaramuzza. "Asynchronous, Photometric Feature Tracking Using Events and Frames." In Computer Vision – ECCV 2018, 766–81. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01258-8_46.

9

Zheng, Linyu, Ming Tang, Yingying Chen, Jinqiao Wang, and Hanqing Lu. "Learning Feature Embeddings for Discriminant Model Based Tracking." In Computer Vision – ECCV 2020, 759–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58555-6_45.

10

Alismail, Hatem, Brett Browning, and Simon Lucey. "Enhancing Direct Camera Tracking with Dense Feature Descriptors." In Computer Vision – ACCV 2016, 535–51. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54190-7_33.


Conference papers on the topic "Segmentation; Feature tracking; Computer vision"

1

Allili, Mohand Said, and Djemel Ziou. "Using Feature Selection For Object Segmentation and Tracking." In Fourth Canadian Conference on Computer and Robot Vision. IEEE, 2007. http://dx.doi.org/10.1109/crv.2007.67.

2

Sun, Chuan, Marshall Tappen, and Hassan Foroosh. "Feature-Independent Action Spotting without Human Localization, Segmentation, or Frame-wise Tracking." In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.344.

3

Allili, Mohand Said, and Djemel Ziou. "Object of Interest segmentation and Tracking by Using Feature Selection and Active Contours." In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007. http://dx.doi.org/10.1109/cvpr.2007.383449.

4

Ring, Dan, and Anil Kokaram. "Feature-Cut: Video object segmentation through local feature correspondences." In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457644.

5

Brendel, William, and Sinisa Todorovic. "Video object segmentation by tracking regions." In 2009 IEEE 12th International Conference on Computer Vision (ICCV). IEEE, 2009. http://dx.doi.org/10.1109/iccv.2009.5459242.

6

Ren, Xiaofeng, and Jitendra Malik. "Tracking as Repeated Figure/Ground Segmentation." In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2007. http://dx.doi.org/10.1109/cvpr.2007.383177.

7

Ding, Henghui, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, and Gang Wang. "Boundary-Aware Feature Propagation for Scene Segmentation." In 2019 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2019. http://dx.doi.org/10.1109/iccv.2019.00692.

8

Ikedo, Ryota, and Kazuhiro Hotta. "Feature Sharing Cooperative Network for Semantic Segmentation." In 16th International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2021. http://dx.doi.org/10.5220/0010312505770584.

9

Allili, Mohand Saïd, Djemel Ziou, Nizar Bouguila, and Sabri Boutemedjet. "Unsupervised Feature Selection and Learning for Image Segmentation." In 2010 Canadian Conference on Computer and Robot Vision. IEEE, 2010. http://dx.doi.org/10.1109/crv.2010.44.

10

"Sparse Motion Segmentation using Propagation of Feature Labels." In International Conference on Computer Vision Theory and Applications. SciTePress - Science and Technology Publications, 2013. http://dx.doi.org/10.5220/0004281203960401.
