Journal articles on the topic "Robust Object Model"

To see other types of publications on this topic, follow the link: Robust Object Model.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Consult the top 50 journal articles for your research on the topic "Robust Object Model".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online, when this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Kim, Sungho, Gijeong Jang, Wang-Heon Lee, and In So Kweon. "Combined Model-Based 3D Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 7 (November 2005): 839–52. http://dx.doi.org/10.1142/s0218001405004368.

Abstract:
This paper presents a combined model-based 3D object recognition method motivated by the robust properties of human vision. The human visual system (HVS) is very efficient and robust in identifying and grabbing objects, in part because of its properties of visual attention, contrast mechanism, feature binding, multiresolution and part-based representation. In addition, the HVS combines bottom-up and top-down information effectively using a combined model representation. We propose integrating these aspects within a Monte Carlo framework in which object recognition is regarded as a parameter optimization problem: the bottom-up process initializes parameters, and the top-down process optimizes them. Experimental results show that the proposed recognition model is feasible for 3D object identification and pose estimation.
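The bottom-up-initialize / top-down-optimize loop described in this abstract can be sketched generically. This is an illustrative toy, not the authors' implementation: the quadratic `pose_score` and the `TARGET` pose are invented stand-ins for a real model-matching score over pose parameters.

```python
import random

def monte_carlo_optimize(score, init, sigma=0.5, iters=300, seed=1):
    """Monte Carlo hill climbing: perturb the current parameter vector
    and keep a perturbation whenever it improves the score."""
    rng = random.Random(seed)
    best, best_score = list(init), score(init)
    for _ in range(iters):
        candidate = [p + rng.gauss(0.0, sigma) for p in best]
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Hypothetical "top-down" score: how well a pose hypothesis (x, y, angle)
# matches a known target pose (higher is better, 0 is a perfect match).
TARGET = (1.0, -2.0, 0.3)
def pose_score(pose):
    return -sum((p - t) ** 2 for p, t in zip(pose, TARGET))

# The "bottom-up" step supplies a coarse initial guess; top-down refines it.
pose, final = monte_carlo_optimize(pose_score, [0.0, 0.0, 0.0])
```

A real recognizer would replace `pose_score` with a rendering-and-comparison step against image features; the optimization skeleton stays the same.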
2

Dong, Qiujie, Xuedong He, Haiyan Ge, Qin Liu, Aifu Han, and Shengzong Zhou. "Improving Model Drift for Robust Object Tracking." Multimedia Tools and Applications 79, no. 35–36 (July 7, 2020): 25801–15. http://dx.doi.org/10.1007/s11042-020-09032-z.

3

Wang, Yong, Xian Wei, Hao Shen, Xuan Tang, and Hui Yu. "Adaptive Model Updating for Robust Object Tracking." Signal Processing: Image Communication 80 (February 2020): 115656. http://dx.doi.org/10.1016/j.image.2019.115656.

4

Lee, Hyungtak, Seongju Kang, and Kwangsue Chung. "Robust Data Augmentation Generative Adversarial Network for Object Detection." Sensors 23, no. 1 (December 23, 2022): 157. http://dx.doi.org/10.3390/s23010157.

Abstract:
Generative adversarial network (GAN)-based data augmentation is used to enhance the performance of object detection models. It comprises two stages: training the GAN generator to learn the distribution of a small target dataset, and sampling data from the trained generator to enhance model performance. In this paper, we propose a pipelined model, called robust data augmentation GAN (RDAGAN), that aims to augment small datasets used for object detection. First, clean images and a small dataset containing images from various domains are input into the RDAGAN, which then generates images that are similar to those in the input dataset. The RDAGAN divides the image generation task between two networks: an object generation network and an image translation network. The object generation network generates images of the objects located within the bounding boxes of the input dataset, and the image translation network merges these images with clean images. A quantitative experiment confirmed that the generated images improve the YOLOv5 model's fire detection performance. A comparative evaluation showed that RDAGAN can maintain the background information of input images and localize the object generation location. Moreover, ablation studies demonstrated that all components and objects included in the RDAGAN play pivotal roles.
5

Abdellaoui, Mehrez, and Ali Douik. "Robust Object Tracker in Video via Discriminative Model." Studies in Informatics and Control 28, no. 3 (October 9, 2019): 337–46. http://dx.doi.org/10.24846/v28i3y201910.

6

Medley, Daniela O., Carlos Santiago, and Jacinto C. Nascimento. "Deep Active Shape Model for Robust Object Fitting." IEEE Transactions on Image Processing 29 (2020): 2380–94. http://dx.doi.org/10.1109/tip.2019.2948728.

7

Zhong, Wei, Huchuan Lu, and Ming-Hsuan Yang. "Robust Object Tracking via Sparse Collaborative Appearance Model." IEEE Transactions on Image Processing 23, no. 5 (May 2014): 2356–68. http://dx.doi.org/10.1109/tip.2014.2313227.

8

Nai, Ke, Zhiyong Li, Guiji Li, and Shanquan Wang. "Robust Object Tracking via Local Sparse Appearance Model." IEEE Transactions on Image Processing 27, no. 10 (October 2018): 4958–70. http://dx.doi.org/10.1109/tip.2018.2848465.

9

Wang, Chong, and Kai-Qi Huang. "VFM: Visual Feedback Model for Robust Object Recognition." Journal of Computer Science and Technology 30, no. 2 (March 2015): 325–39. http://dx.doi.org/10.1007/s11390-015-1526-1.

10

Vajda, Peter, Ivan Ivanov, Lutz Goldmann, Jong-Seok Lee, and Touradj Ebrahimi. "Robust Duplicate Detection of 2D and 3D Objects." International Journal of Multimedia Data Engineering and Management 1, no. 3 (July 2010): 19–40. http://dx.doi.org/10.4018/jmdem.2010070102.

Abstract:
In this paper, the authors analyze their graph-based approach for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on the features extracted from training images to avoid explicit and complex 3D object modeling. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Different limitations of this approach are analyzed by evaluating performance with respect to the number of training images and calculation of optimal parameters in a number of applications. Furthermore, effectiveness of object duplicate detection algorithm is measured over different object classes. The authors’ method is shown to be robust in detecting the same objects even when images with objects are taken from different viewpoints or distances.
11

Akhtar, Malik Javed, Rabbia Mahum, Faisal Shafique Butt, Rashid Amin, Ahmed M. El-Sherbeeny, Seongkwan Mark Lee, and Sarang Shaikh. "A Robust Framework for Object Detection in a Traffic Surveillance System." Electronics 11, no. 21 (October 22, 2022): 3425. http://dx.doi.org/10.3390/electronics11213425.

Abstract:
Object recognition is the technique of specifying the location of various objects in images or videos. Numerous algorithms exist for recognizing objects, such as R-CNN, Fast R-CNN, Faster R-CNN, HOG, R-FCN, SSD, SSP-net, SVM, CNN and YOLO, based on machine learning and deep learning techniques. Although these models have been employed in various object detection applications, tiny object detection still faces the challenge of low precision, so it is essential to develop a lightweight and robust detection model that can find tiny objects with high precision. In this study, we suggest an enhanced YOLOv2 (You Only Look Once version 2) algorithm for object detection, i.e., vehicle detection and recognition in surveillance videos. We modified the base network of YOLOv2 by reducing the number of parameters and replacing it with DenseNet. We employed the DenseNet-201 architecture for feature extraction in our improved model, which extracts the most representative features from the images; the dense architecture of the base network also makes the proposed model more compact. We chose DenseNet-201 as the base network because of the direct connections among all layers, which help to extract valuable information from the very first layer and pass it to the final layer. Datasets gathered from Kaggle and KITTI were used for training the proposed model, and we cross-validated performance using the MS COCO and Pascal VOC datasets. Extensive experimentation demonstrates that our algorithm beats existing vehicle detection approaches, with an average precision of 97.51%.
12

Zhmud, V. A., A. S. Vostrikov, A. Yu. Ivoilov, and G. V. Sablina. "Synthesis of Robust PID Controllers by Double Optimization Method." Mekhatronika, Avtomatizatsiya, Upravlenie 21, no. 2 (February 10, 2020): 67–74. http://dx.doi.org/10.17587/mau.21.67-73.

Abstract:
Designing adaptive controllers solves the problem of controlling an object with non-stationary parameters. However, if the parameters of the object do not change too much, or if only a certain interval of their variation is known, an adaptive controller may not be required, since the problem can be solved with a robust controller. A robust controller provides acceptable control quality even if the parameters of the object's mathematical model change within some predetermined interval. One known design method is numerical optimization of the controller over an ensemble of systems in which the object models differ while the controller models are identical; the ensemble uses object models with extreme parameter values. The disadvantage of this method is that too many systems must be modeled and optimized simultaneously when several parameters vary. In addition, the worst combination of model parameters may lie not at the boundary but in the middle of the interval, in which case the method is not applicable. This article proposes and analyzes, on a numerical example, an alternative method of designing a robust controller. Its essence is numerical optimization of the regulator for the model with the worst combination of values of all modifiable parameters. The search for the worst combination is also carried out by numerical optimization: a combination of model parameters is found for which the best choice of regulator coefficients gives the worst system performance. The problem is solved in several optimization cycles with alternating cost functions. The utility of the method is illustrated numerically on a third-order dynamic object with a series-connected delay element.
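The alternating worst-case search the abstract describes can be illustrated with a deliberately tiny example. The first-order discrete plant, the proportional controller, the parameter grids and the cost function below are all invented for illustration; they are not the paper's third-order delayed object or its optimizer.

```python
def step_cost(kp, a, steps=100):
    """Sum of squared tracking errors of a unit-step response for the
    discrete plant y[t+1] = a*y[t] + 0.1*u[t] under proportional control."""
    y, cost = 0.0, 0.0
    for _ in range(steps):
        u = kp * (1.0 - y)          # P controller acting on the error
        y = a * y + 0.1 * u         # plant update
        cost += (1.0 - y) ** 2
    return cost

def worst_plant(kp, a_grid):
    """Inner optimization: the plant parameter the controller handles worst."""
    return max(a_grid, key=lambda a: step_cost(kp, a))

def tune_robust(kp_grid, a_grid, cycles=3):
    """Outer cycles alternate the two cost functions: find the worst
    plant for the current gain, then the best gain for that plant."""
    kp = kp_grid[0]
    for _ in range(cycles):
        a = worst_plant(kp, a_grid)
        kp = min(kp_grid, key=lambda k: step_cost(k, a))
    return kp

kp = tune_robust(kp_grid=[0.5, 1.0, 2.0, 4.0, 8.0], a_grid=[0.7, 0.8, 0.9])
```

The design choice mirrors the abstract: instead of simulating one system per extreme parameter combination, only the currently worst plant model is optimized against in each cycle.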
13

Liu, Sheng, Yangqing Wang, Fengji Dai, and Jingxiang Yu. "Simultaneous 3D Motion Detection, Long-Term Tracking and Model Reconstruction for Multi-Objects." International Journal of Humanoid Robotics 16, no. 4 (August 2019): 1950017. http://dx.doi.org/10.1142/s0219843619500178.

Abstract:
Motion detection and object tracking play important roles in unsupervised human–machine interaction systems. Nevertheless, human–machine interaction becomes invalid when the system fails to detect scene objects correctly because of occlusion and a limited field of view; robust long-term tracking of scene objects is therefore vital. In this paper, we present a 3D motion detection and long-term tracking system with simultaneous 3D reconstruction of dynamic objects. To achieve high-precision motion detection, the proposed method provides an optimization framework with a novel motion pose estimation energy function, by which the 3D motion pose of each object can be estimated independently. We also develop an accurate object-tracking method that combines 2D visual information and depth, and incorporate a novel boundary-optimization segmentation based on both to improve the robustness of tracking significantly. In addition, we introduce a new fusion and updating strategy in the 3D reconstruction process, which brings higher robustness to 3D motion detection. Experimental results show that, for synthetic sequences, the root-mean-square error (RMSE) of our system is much smaller than that of Co-Fusion (CF); our system performs extremely well in 3D motion detection accuracy. In the case of occlusion or out-of-view scenarios on real scene data, CF suffers loss of tracking or object-label changes; by contrast, our system keeps tracking robustly and maintains the correct labels for each dynamic object. Our system is therefore robust to occlusion and out-of-view application scenarios.
14

Kabir, Raihan, Yutaka Watanobe, Md Rashedul Islam, Keitaro Naruse, and Md Mostafizer Rahman. "Unknown Object Detection Using a One-Class Support Vector Machine for a Cloud–Robot System." Sensors 22, no. 4 (February 10, 2022): 1352. http://dx.doi.org/10.3390/s22041352.

Abstract:
Inter-robot communication and high computational power are challenging issues for deploying indoor mobile robot applications with sensor data processing. This paper therefore presents an efficient cloud-based multirobot framework with inter-robot communication and high computational power for deploying autonomous mobile robots in indoor applications. Deployment of usable indoor service robots requires uninterrupted movement and enhanced robot vision with robust classification of objects and obstacles using vision sensor data in the indoor environment. However, state-of-the-art methods suffer degraded indoor object and obstacle recognition for multiobject vision frames and unknown objects in complex and dynamic environments. From this point of view, this paper proposes a new object segmentation model to separate objects from a multiobject robotic view-frame. In addition, we present a support vector data description (SVDD)-based one-class support vector machine for detecting unknown objects, in an outlier-detection fashion, for the classification model. A cloud-based convolutional neural network (CNN) model with a SoftMax classifier is used for training and identification of objects in the environment, and an incremental learning method is introduced for adding unknown objects to the robot's knowledge. A cloud–robot architecture implemented in a Node-RED environment validates the proposed model. A benchmark object image dataset from an open resource repository and images captured in the lab environment were used to train the models. The proposed model showed good object detection and identification results; its performance was compared with three state-of-the-art models and found to outperform them. Moreover, the usability of the proposed system is enhanced by the unknown-object detection, incremental learning, and cloud-based framework.
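The SVDD idea of enclosing known-class feature vectors in a minimal hypersphere, and flagging anything outside it as unknown, can be caricatured in a few lines. This is a crude mean-and-radius stand-in for a real SVDD or one-class SVM (no kernels, no slack variables), and the feature vectors are made up:

```python
import math

class SimpleDataDescription:
    """Toy stand-in for SVDD: enclose the known-class feature vectors in a
    sphere (mean centre, radius = farthest training distance); any feature
    vector falling outside the sphere is flagged as an unknown object."""
    def fit(self, X):
        n, d = len(X), len(X[0])
        self.center = [sum(x[j] for x in X) / n for j in range(d)]
        self.radius = max(self._dist(x) for x in X)
        return self

    def _dist(self, x):
        return math.dist(x, self.center)

    def is_unknown(self, x):
        return self._dist(x) > self.radius

# Features of known objects cluster near the origin...
known = [[0.1, 0.2], [0.0, -0.1], [-0.2, 0.1], [0.15, 0.05]]
dd = SimpleDataDescription().fit(known)
# ...so a far-away feature vector is reported as unknown.
```

A real SVDD instead minimizes the sphere radius subject to slack penalties, so a few training outliers do not inflate the boundary the way the max-distance radius above does.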
15

To, F. W., and K. M. Tsang. "Recognition of Partially Occluded Objects Using an Orthogonal Complex AR Model Approach." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 1 (February 1999): 85–107. http://dx.doi.org/10.1142/s0218001499000069.

Abstract:
The orthogonal complex AR model approach is extended to the recognition of partially occluded objects. The 2-D boundary of an object is extracted and divided into a number of line segments by locating the turning points. Each line segment is resampled to a fixed number of data points, and a complex AR model is used to represent each line segment. An orthogonal estimator is implemented to determine the correct model order and to estimate the corresponding AR model parameters. Each line segment is associated with a minimal feature vector denoting the estimated AR model parameters, and similar line segments in different patterns have similar AR model parameters. Recognition of an occluded object based on matching the line segments of two patterns is derived. An algorithm for finding the turning points has also been devised that is quite robust to variations in object size and orientation, image resolution, and noise in the object image. Experimental results show the effectiveness of the proposed approach for recognizing objects which are partly covered or partially occluded, and the approach is robust to different object sizes and orientations.
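To make the AR-model representation of a boundary segment concrete, here is a hedged sketch of fitting second-order AR coefficients to a 1-D resampled signal by ordinary least squares. This is a generic real-valued AR fit, not the paper's complex AR model or its orthogonal order-selecting estimator:

```python
def fit_ar2(x):
    """Least-squares fit of a1, a2 in x[t] ~ a1*x[t-1] + a2*x[t-2],
    solving the 2x2 normal equations by Cramer's rule."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        p1, p2 = x[t - 1], x[t - 2]       # the two lagged predictors
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        b1 += x[t] * p1; b2 += x[t] * p2
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Synthetic "boundary segment" generated by a known AR(2) process, so the
# fit should recover the coefficients 1.5 and -0.7 almost exactly.
sig = [1.0, 0.9]
for _ in range(30):
    sig.append(1.5 * sig[-1] - 0.7 * sig[-2])
a1, a2 = fit_ar2(sig)
```

Because similar segments yield similar coefficient vectors, matching segments then reduces to comparing these small feature vectors, which is the property the recognition step in the paper relies on.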
16

Shimada, Atsushi, Satoshi Yoshinaga, and Rin-ichiro Taniguchi. "Maintenance of Blind Background Model for Robust Object Detection." IPSJ Transactions on Computer Vision and Applications 3 (2011): 148–59. http://dx.doi.org/10.2197/ipsjtcva.3.148.

17

Jurie, Frederic. "Robust Hypothesis Verification: Application to Model-Based Object Recognition." Pattern Recognition 32, no. 6 (June 1999): 1069–81. http://dx.doi.org/10.1016/s0031-3203(98)00126-5.

18

Yang, Min, MingTao Pei, YuWei Wu, and YunDe Jia. "Learning Online Structural Appearance Model for Robust Object Tracking." Science China Information Sciences 58, no. 3 (January 9, 2015): 1–14. http://dx.doi.org/10.1007/s11432-014-5177-6.

19

Zhou, Yun, Jianghong Han, Xiaohui Yuan, Zhenchun Wei, and Richang Hong. "Inverse Sparse Group Lasso Model for Robust Object Tracking." IEEE Transactions on Multimedia 19, no. 8 (August 2017): 1798–810. http://dx.doi.org/10.1109/tmm.2017.2689918.

20

Zhao, Jianwei, Weidong Zhang, and Feilong Cao. "Robust Object Tracking Using a Sparse Coadjutant Observation Model." Multimedia Tools and Applications 77, no. 23 (June 4, 2018): 30969–91. http://dx.doi.org/10.1007/s11042-018-6132-0.

21

Lee, Suk-Hwan, and Ki-Ryong Kwon. "Robust 3D Mesh Model Hashing Based on Feature Object." Digital Signal Processing 22, no. 5 (September 2012): 744–59. http://dx.doi.org/10.1016/j.dsp.2012.04.015.

22

Velez, J., G. Hemann, A. S. Huang, I. Posner, and N. Roy. "Modelling Observation Correlations for Active Exploration and Robust Object Detection." Journal of Artificial Intelligence Research 44 (July 14, 2012): 423–53. http://dx.doi.org/10.1613/jair.3516.

Abstract:
Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene. This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments.
23

Wang, Yanjiang, Yujuan Qi, and Yongping Li. "Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking." Scientific World Journal 2013 (2013): 1–13. http://dx.doi.org/10.1155/2013/793013.

Abstract:
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changes or the occlusion recovers, and it outperforms traditional particle filter-based tracking methods.
24

Montaño, Andrés, and Raúl Suárez. "Robust Dexterous Telemanipulation Following Object-Orientation Commands." Industrial Robot: An International Journal 44, no. 5 (August 21, 2017): 648–57. http://dx.doi.org/10.1108/ir-12-2015-0226.

Abstract:
Purpose: This paper aims to present a procedure to change the orientation of a grasped object using dexterous manipulation. The manipulation is controlled by teleoperation in a very simple way, with commands introduced by an operator using a keyboard. Design/methodology/approach: The paper presents a teleoperation scheme, hand kinematics, and a manipulation strategy for manipulating different objects using the Schunk Dexterous Hand (SDH2). A state machine is used to model the teleoperation actions and the system states. A virtual link is used to include the contact point in the hand kinematics of the SDH2. Findings: Experiments were conducted to evaluate the proposed approach with different objects, varying the initial grasp configuration and the sequence of actions commanded by the operator. Originality/value: The proposed approach uses a shared telemanipulation scheme to perform dexterous manipulation; in this scheme, the operator sends high-level commands, and a local system uses this information, jointly with tactile measurements and the current status of the system, to generate proper setpoints for the low-level control of the fingers, which may be a commercial closed one. The main contribution of this work is the mentioned local system, simple enough for practical applications and robust enough to avoid object falls.
25

Wang, Dong, Huchuan Lu, and Chunjuan Bo. "Fast and Robust Object Tracking via Probability Continuous Outlier Model." IEEE Transactions on Image Processing 24, no. 12 (December 2015): 5166–76. http://dx.doi.org/10.1109/tip.2015.2478399.

26

Ayzenberg, Vladislav, Sami Yousif, and Stella Lourenco. "The Medial Axis as a Robust Model of Object Representation." Journal of Vision 16, no. 12 (September 1, 2016): 169. http://dx.doi.org/10.1167/16.12.169.

27

Cottrell, G., and C. Kanan. "Robust Object and Face Recognition Using a Biologically Plausible Model." Journal of Vision 10, no. 7 (August 13, 2010): 1015. http://dx.doi.org/10.1167/10.7.1015.

28

Bo, Chunjuan, Junxing Zhang, Junjie Liu, and Qiang Yao. "Robust Online Object Tracking via the Convex Hull Representation Model." Neurocomputing 289 (May 2018): 44–54. http://dx.doi.org/10.1016/j.neucom.2018.02.013.

29

Han, Guang, Xingyue Wang, Jixin Liu, Ning Sun, and Cailing Wang. "Robust Object Tracking Based on Local Region Sparse Appearance Model." Neurocomputing 184 (April 2016): 145–67. http://dx.doi.org/10.1016/j.neucom.2015.07.122.

30

Li, Zhenxin, Xuande Zhang, Long Xu, and Weiqiang Zhang. "Fast and Robust Visual Tracking with Few-Iteration Meta-Learning." Sensors 22, no. 15 (August 4, 2022): 5826. http://dx.doi.org/10.3390/s22155826.

Abstract:
Visual object tracking has been a major research topic in the field of computer vision for many years. Object tracking aims to identify and localize objects of interest in subsequent frames, given the bounding box in the first frame. Tracking algorithms are also required to be robust and to run in real time. These requirements create some unique challenges: a model can easily overfit when only a very small training dataset of objects is available during offline training, and too many iterations in the model-optimization process during offline training, or in the model-update process during online tracking, lead to poor real-time performance. We address these problems by introducing a meta-learning method based on fast optimization. Our proposed tracking architecture mainly contains two parts: a base learner and a meta learner. The base learner is primarily a target-versus-background classifier, together with a regression network that predicts the object bounding box. The primary goal of the transformer-based meta learner is to learn the representations used by the classifier. The accuracy of our proposed algorithm on OTB2015 and LaSOT is 0.930 and 0.688, respectively, and it performs well on the VOT2018 and GOT-10k datasets. Combined with comparative experiments on real-time performance, these results show that our algorithm is fast and robust.
31

Sauvet, Bruno, François Lévesque, SeungJae Park, Philippe Cardou, and Clément Gosselin. "Model-Based Grasping of Unknown Objects from a Random Pile." Robotics 8, no. 3 (September 6, 2019): 79. http://dx.doi.org/10.3390/robotics8030079.

Abstract:
Grasping an unknown object in a pile is no easy task for a robot: it is often difficult to distinguish different objects, objects occlude one another, object proximity limits the number of feasible grasps available, and so forth. In this paper, we propose a simple approach to grasping unknown objects one by one from a random pile. The proposed method is divided into three main actions: over-segmentation of the images, a decision algorithm, and ranking according to a grasp robustness index. Thus, the robot is able to distinguish the objects in the pile, choose the best candidate for grasping among these objects, and pick the most robust grasp for this candidate. With this approach, we can clear out a random pile of unknown objects, as shown in the experiments reported herein.
32

Wiharta, Dewa Made. "Particle Filter-Based Object Tracking Using Joint Features of Color and Local Binary Pattern Histogram Fourier." Kursor 8, no. 2 (December 12, 2016): 79. http://dx.doi.org/10.28961/kursor.v8i2.64.

Abstract:
Object tracking is defined as the problem of estimating object location in image sequences. In general, object tracking in real time and in complex environments is affected by many uncertainties. In this research we use a sequential Monte Carlo method, known as the particle filter, to build an object tracking algorithm. The particle filter, thanks to its multiple hypotheses, is known to be a robust method for object tracking. The performance of a particle filter is determined by how the particles are distributed, and the distribution is governed by the system model being used. In this research, a modified system model is proposed to manage the particle distribution and achieve better performance. Object representation also plays an important role in object tracking. Here, we combine a color histogram and texture from the Local Binary Pattern Histogram Fourier (LBPHF) operator as features. Our experiments show that the proposed system model delivers more robust tracking, especially for objects with sudden changes in speed and direction. The proposed joint feature is able to capture objects with changing shape and has better accuracy than a single color feature or the joint color-texture features from other LBP variants.
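The predict-weight-resample loop underlying such trackers can be sketched in one dimension. This is a generic bootstrap particle filter with an invented Gaussian likelihood; the paper's actual observation model compares color and LBPHF histograms, not this toy distance:

```python
import math
import random

def particle_filter(observations, n=500, motion_noise=1.0, seed=0):
    """Bootstrap particle filter for a 1-D position: diffuse particles with
    the motion model, weight them by measurement likelihood, resample, and
    report the mean of the resampled particles for each frame."""
    rng = random.Random(seed)
    particles = [rng.uniform(-10.0, 10.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: propagate each hypothesis through the motion model.
        particles = [p + rng.gauss(0.0, motion_noise) for p in particles]
        # Weight: Gaussian likelihood of the measurement z per particle.
        weights = [math.exp(-0.5 * (z - p) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample: concentrate particles where the weight mass lies.
        particles = rng.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates

# Track a target moving steadily to the right.
track = particle_filter([0.0, 1.0, 2.0, 3.0, 4.0])
```

The system-model modification proposed in the paper would change the predict step above; swapping in a richer appearance model only changes how the weights are computed.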
33

Xie, Ying Hong, and Cheng Dong Wu. "Robust Visual Tracking Using Particle Filtering on SL(3) Group." Applied Mechanics and Materials 457–458 (October 2013): 1028–31. http://dx.doi.org/10.4028/www.scientific.net/amm.457-458.1028.

Abstract:
The process of imaging an object in a camera is essentially a projective transformation. This paper therefore proposes a novel visual tracking method using particle filtering on the SL(3) group to predict the changes of the target area boundaries at the next moment, which serves as the dynamic model. Meanwhile, covariance matrices are applied for the observation model. Extensive experiments show that the proposed method achieves stable and accurate tracking of objects with significant geometric deformation, even nonrigid objects.
34

Song, Zhiguo, Jifeng Sun, Jialin Yu, and Shengqing Liu. "Robust Visual Tracking via Patch Descriptor and Structural Local Sparse Representation." Algorithms 11, no. 8 (August 15, 2018): 126. http://dx.doi.org/10.3390/a11080126.

Abstract:
Appearance models play an important role in visual tracking. Effective modeling of the appearance of tracked objects is still a challenging problem because of appearance changes caused by factors such as partial occlusion, illumination variation and deformation. In this paper, we propose a tracking method based on a patch descriptor and structural local sparse representation. In our method, the object is first divided into multiple non-overlapping patches, and the patch sparse coefficients are obtained by structural local sparse representation. Second, each patch is further decomposed into several sub-patches. The patch descriptors are defined as the proportion of sub-patches whose reconstruction error is less than a given threshold. Finally, the appearance of an object is modeled by the patch descriptors and the patch sparse coefficients. Furthermore, in order to adapt to appearance changes of an object and alleviate model drift, an outlier-aware template update scheme is introduced. Experimental results on a large benchmark dataset demonstrate the effectiveness of the proposed method.
35

Cao, Chuqing, and Hanwei Liu. "Grasp Pose Detection Based on Shape Simplification." International Journal of Humanoid Robotics 18, no. 3 (June 2021): 2150006. http://dx.doi.org/10.1142/s0219843621500067.

Abstract:
For robots in an unstructured work environment, grasping unknown objects that have neither model data nor RGB data is very important. The key to autonomous robotic grasping lies not only in judging the object type but also in capturing the object's shape. We present a new grasping approach based on the basic compositions of objects: simplifying complex objects aids the description of object shape and provides effective cues for selecting grasping strategies. First, a depth camera is used to obtain partial 3D data of the target object. The 3D data are then segmented, and the segmented parts are simplified to a cylinder, a sphere, an ellipsoid, or a parallelepiped according to their geometric and semantic shape characteristics. The grasp pose is constrained according to the simplified shape features, and the core part of the object is used for grasp training with deep learning. The grasping model was evaluated in simulation and robot experiments, and the results show that the grasp score learned with the simplified constraints is more robust to gripper pose uncertainty than one learned without them.
APA, Harvard, Vancouver, ISO, etc. styles
36

WANG, DONG, GANG YANG, and HUCHUAN LU. « TRI-TRACKING: COMBINING THREE INDEPENDENT VIEWS FOR ROBUST VISUAL TRACKING ». International Journal of Image and Graphics 12, no. 03 (July 2012): 1250021. http://dx.doi.org/10.1142/s0219467812500210.

Full text
Abstract:
Robust tracking is a challenging problem, owing to the intrinsic appearance variability of objects caused by in-plane or out-of-plane rotation and changes in extrinsic factors such as illumination, occlusion, background clutter, and local blur. In this paper, we present a novel tri-tracking framework that combines different views (different models using independent features) for robust object tracking. This framework exploits a hybrid discriminative-generative model based on online semi-supervised learning. Only the first frame is needed for parameter initialization; tracking then proceeds automatically in the remaining frames, with the model updated online to capture changes in both object appearance and background. Our tri-tracking approach makes three main contributions. First, we propose a tracking framework that combines a generative model and a discriminative model, together with different cues that complement each other. Second, by introducing a third tracker, we resolve the difficulty of combining two classification results in a co-training framework when they contradict each other. Third, we propose a principled way of combining different views based on their discriminative power. Experiments on challenging videos demonstrate that the proposed tri-tracking framework is robust.
APA, Harvard, Vancouver, ISO, etc. styles
37

Oiwa, Daimu, Shinji Fukui, Yuji Iwahori, Tsuyoshi Nakamura, Boonserm Kijsirikul, and M. K. Bhuyan. « Probabilistic Background Model by Density Forests for Tracking ». International Journal of Software Innovation 5, no. 2 (April 2017): 1–16. http://dx.doi.org/10.4018/ijsi.2017040101.

Full text
Abstract:
This paper proposes a tracking method that is robust when the target intersects objects of similar appearance; the method is aimed at image sequences taken by a moving camera. Because the proposed method is based on the particle filter, and tracking methods relying on color information tend to mistakenly track a background region or an object whose color resembles the target's, the method constructs a probabilistic background model from histograms of optical flow and defines the likelihood function so that the likelihood over the target object's region becomes large. This increases tracking accuracy. The probabilistic background model is built with density forests, which can infer a probability density quickly; introducing them lets the proposed method run faster than the authors' previous approach. Results are demonstrated through experiments on real videos of outdoor scenes.
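The role of the background model in the likelihood can be shown schematically (this is an assumed combination rule for illustration only, not the paper's exact likelihood; in practice the density forests would supply `background_prob`):

```python
import numpy as np

def reweight_particles(color_likelihood, background_prob):
    """Down-weight particles whose region the background model
    explains well, so background areas with target-like colors
    receive low weight."""
    w = np.asarray(color_likelihood, dtype=float) * \
        (1.0 - np.asarray(background_prob, dtype=float))
    return w / w.sum()  # normalize to a distribution

# Two particles match the target's color equally well, but the second
# lies on a region the background model explains with probability 0.9.
weights = reweight_particles([0.8, 0.8], [0.1, 0.9])  # -> [0.9, 0.1]
```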
APA, Harvard, Vancouver, ISO, etc. styles
38

Williams, Christopher K. I., and Michalis K. Titsias. « Greedy Learning of Multiple Objects in Images Using Robust Statistics and Factorial Learning ». Neural Computation 16, no. 5 (1 May 2004): 1039–62. http://dx.doi.org/10.1162/089976604773135096.

Full text
Abstract:
We consider data that are images containing views of multiple objects. Our task is to learn about each of the objects present in the images. This task can be approached as a factorial learning problem, where each image must be explained by instantiating a model for each of the objects present with the correct instantiation parameters. A major problem with learning a factorial model is that as the number of objects increases, there is a combinatorial explosion of the number of configurations that need to be considered. We develop a method to extract object models sequentially from the data by making use of a robust statistical method, thus avoiding the combinatorial explosion, and present results showing successful extraction of objects from real images.
APA, Harvard, Vancouver, ISO, etc. styles
39

Wyatte, Dean, Tim Curran, and Randall O'Reilly. « The Limits of Feedforward Vision: Recurrent Processing Promotes Robust Object Recognition when Objects Are Degraded ». Journal of Cognitive Neuroscience 24, no. 11 (November 2012): 2248–61. http://dx.doi.org/10.1162/jocn_a_00282.

Full text
Abstract:
Everyday vision requires robustness to a myriad of environmental factors that degrade stimuli. Foreground clutter can occlude objects of interest, and complex lighting and shadows can decrease the contrast of items. How does the brain recognize visual objects despite these low-quality inputs? On the basis of predictions from a model of object recognition that contains excitatory feedback, we hypothesized that recurrent processing would promote robust recognition when objects were degraded by strengthening bottom–up signals that were weakened because of occlusion and contrast reduction. To test this hypothesis, we used backward masking to interrupt the processing of partially occluded and contrast reduced images during a categorization experiment. As predicted by the model, we found significant interactions between the mask and occlusion and the mask and contrast, such that the recognition of heavily degraded stimuli was differentially impaired by masking. The model provided a close fit of these results in an isomorphic version of the experiment with identical stimuli. The model also provided an intuitive explanation of the interactions between the mask and degradations, indicating that masking interfered specifically with the extensive recurrent processing necessary to amplify and resolve highly degraded inputs, whereas less degraded inputs did not require much amplification and could be rapidly resolved, making them less susceptible to masking. Together, the results of the experiment and the accompanying model simulations illustrate the limits of feedforward vision and suggest that object recognition is better characterized as a highly interactive, dynamic process that depends on the coordination of multiple brain areas.
APA, Harvard, Vancouver, ISO, etc. styles
40

Islam, Md, Guoqing Hu, and Qianbo Liu. « Online Model Updating and Dynamic Learning Rate-Based Robust Object Tracking ». Sensors 18, no. 7 (26 June 2018): 2046. http://dx.doi.org/10.3390/s18072046.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
41

Luo, Bo, Chao Liang, Weijian Ruan, and Ruimin Hu. « Stable and salient patch‐based appearance model for robust object tracking ». Electronics Letters 52, no. 18 (September 2016): 1522–24. http://dx.doi.org/10.1049/el.2016.0122.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
42

Mordecai, Yaniv, and Dov Dori. « Model-based risk-oriented robust systems design with object-process methodology ». International Journal of Strategic Engineering Asset Management 1, no. 4 (2013): 331. http://dx.doi.org/10.1504/ijseam.2013.060467.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
43

Dai, Yi, and Bin Liu. « Robust video object tracking via Bayesian model averaging-based feature fusion ». Optical Engineering 55, no. 8 (5 August 2016): 083102. http://dx.doi.org/10.1117/1.oe.55.8.083102.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
44

Rui Yao. « Robust Model-Free Multi-Object Tracking with Online Kernelized Structural Learning ». IEEE Signal Processing Letters 22, no. 12 (December 2015): 2401–5. http://dx.doi.org/10.1109/lsp.2015.2488678.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
45

Naiel, Mohamed A., M. Omair Ahmad, M. N. S. Swamy, Jongwoo Lim, and Ming-Hsuan Yang. « Online multi-object tracking via robust collaborative model and sample selection ». Computer Vision and Image Understanding 154 (January 2017): 94–107. http://dx.doi.org/10.1016/j.cviu.2016.07.003.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
46

Zhao, Zhiqiang, Ping Feng, Tianjiang Wang, Fang Liu, Caihong Yuan, Jingjuan Guo, Zhijian Zhao, and Zongmin Cui. « Dual-scale structural local sparse appearance model for robust object tracking ». Neurocomputing 237 (May 2017): 101–13. http://dx.doi.org/10.1016/j.neucom.2016.09.031.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
47

Xie, Chengjun, Jieqing Tan, Peng Chen, Jie Zhang, and Lei He. « Multi-scale patch-based sparse appearance model for robust object tracking ». Machine Vision and Applications 25, no. 7 (27 August 2014): 1859–76. http://dx.doi.org/10.1007/s00138-014-0632-3.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
48

Tang, Chuanming, Peng Qin, and Jianlin Zhang. « Robust Template Adjustment Siamese Network for Object Visual Tracking ». Sensors 21, no. 4 (20 February 2021): 1466. http://dx.doi.org/10.3390/s21041466.

Full text
Abstract:
Most existing trackers address visual tracking by extracting an appearance template from the first frame and using it to localize the target in the current frame. Unfortunately, they typically face the challenge of model degeneration, which easily results in model drift and target loss. To address this issue, a novel Template Adjustment Siamese Network (TA-Siam) is proposed in this paper. TA-Siam consists of two simple subnetworks: a template adjustment subnetwork for feature extraction and a classification-regression subnetwork for bounding box prediction. The template adjustment module adaptively uses features from subsequent frames to adjust the current template, allowing the template to follow target appearance variation over long sequences and effectively overcoming the model drift problem of Siamese networks. To reduce classification errors, rhombus labels are proposed in TA-Siam. For more efficient learning and faster convergence, the proposed tracker uses a more effective regression loss during training. Extensive experiments and comparisons with other trackers are conducted on the challenging benchmarks VOT2016, VOT2018, OTB50, OTB100, GOT-10K, and LaSOT. TA-Siam achieves state-of-the-art performance at a speed of 45 FPS.
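TA-Siam's adjustment module is learned end-to-end; the problem it addresses is visible in the classic linear template update that Siamese trackers otherwise rely on (a sketch with hypothetical names, not the paper's method; the fixed rate trades adaptation speed against drift):

```python
import numpy as np

def update_template(template, new_feature, rate=0.1):
    """Exponential moving average of template features: a small rate
    adapts slowly (risking staleness), a large rate adapts quickly
    (risking drift onto distractors)."""
    return (1.0 - rate) * np.asarray(template, dtype=float) \
        + rate * np.asarray(new_feature, dtype=float)

t = update_template(np.zeros(3), np.ones(3), rate=0.1)  # -> [0.1, 0.1, 0.1]
```

A fixed rule like this cannot distinguish genuine appearance change from occluders, which is the motivation for a learned, content-dependent adjustment.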
APA, Harvard, Vancouver, ISO, etc. styles
49

Shen, Xiangjun, Jinghui Zhou, Zhongchen Ma, Bingkun Bao, and Zhengjun Zha. « Cross-Domain Object Representation via Robust Low-Rank Correlation Analysis ». ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (30 November 2021): 1–20. http://dx.doi.org/10.1145/3458825.

Full text
Abstract:
Cross-domain data has become very popular recently, since multiple viewpoints and different sensors tend to yield better data representations. In this article, we propose a novel cross-domain object representation algorithm (RLRCA) which not only explores the complexity of multiple relationships among variables through canonical correlation analysis (CCA) but also uses a low-rank model to reduce the effect of noisy data. To the best of our knowledge, this is the first attempt to smoothly integrate CCA and a low-rank model to uncover correlated components across different domains while suppressing the effect of noisy or corrupted data. To make the algorithm flexible enough to address various cross-domain object representation problems, two instantiations of RLRCA are proposed, from the feature space and the sample space, respectively. In this way, a better cross-domain object representation is achieved by effectively learning the intrinsic CCA features and taking full advantage of cross-domain object alignment information while pursuing low-rank representations. Extensive experimental results on the CMU PIE, Office-Caltech, Pascal VOC 2007, and NUS-WIDE-Object datasets demonstrate that the proposed models outperform several state-of-the-art cross-domain low-rank methods in image clustering and classification tasks at various corruption levels.
APA, Harvard, Vancouver, ISO, etc. styles
50

Lei, Haijun, Hai Xie, Wenbin Zou, Xiaoli Sun, Kidiyo Kpalma, and Nikos Komodakis. « Hierarchical Saliency Detection via Probabilistic Object Boundaries ». International Journal of Pattern Recognition and Artificial Intelligence 31, no. 06 (30 March 2017): 1755010. http://dx.doi.org/10.1142/s0218001417550102.

Full text
Abstract:
Although many computational models have been proposed for saliency detection, few take object boundary information into account. This paper presents a hierarchical saliency detection model incorporating probabilistic object boundaries, based on the observation that salient objects are generally surrounded by explicit boundaries and contrast with their surroundings. We perform adaptive thresholding on an ultrametric contour map, which yields hierarchical image segmentations, and compute a saliency map for each layer based on the proposed robust center bias, border bias, color dissimilarity, and spatial coherence measures. The final saliency map is obtained after a linear weighted combination of the multi-layer saliency maps and a Bayesian enhancement procedure. Extensive experimental results on three challenging benchmark datasets demonstrate that the proposed model outperforms eight state-of-the-art saliency detection models.
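The linear weighted combination of per-layer saliency maps mentioned above can be sketched as follows (the weights and maps here are illustrative; the paper's actual weighting is not reproduced):

```python
import numpy as np

def combine_layers(saliency_maps, weights):
    """Linearly combine per-layer saliency maps with normalized
    weights, then rescale the fused map to [0, 1]."""
    maps = np.asarray(saliency_maps, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = np.tensordot(w, maps, axes=1)  # weighted sum over layers
    lo, hi = fused.min(), fused.max()
    return (fused - lo) / (hi - lo) if hi > lo else fused

layers = [np.array([[0.0, 1.0]]), np.array([[0.5, 0.5]])]
fused = combine_layers(layers, weights=[2, 1])  # -> [[0.0, 1.0]]
```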
APA, Harvard, Vancouver, ISO, etc. styles