Journal articles on the topic "Local visual feature"


The 50 most relevant journal articles on the topic "Local visual feature" are listed below, with abstracts where available.


1. Jia, Xi Bin, and Mei Xia Zheng. "Video Based Visual Speech Feature Model Construction". Applied Mechanics and Materials 182-183 (June 2012): 1367–71. http://dx.doi.org/10.4028/www.scientific.net/amm.182-183.1367.

Abstract:
This paper aims to give a solution for the construction of a Chinese visual speech feature model based on HMM. We propose and discuss three kinds of representation models of visual speech: lip geometrical features, lip motion features, and lip texture features. The model that combines the advantages of local LBP and global DCT texture information shows better performance than any single feature, and likewise the model that combines local LBP with geometrical information outperforms a single feature. By computing the recognition rate of the visemes from the model, the paper shows that an HMM describing the dynamics of speech, coupled with the combined feature describing global and local texture, is the best model.
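
The fused local/global texture idea can be made concrete. Below is a minimal Python sketch, not the authors' exact pipeline, that concatenates a local LBP histogram with low-frequency global DCT coefficients for a lip-region patch; the patch normalization, bin counts, and number of retained coefficients are assumptions.

```python
# Hedged sketch: fuse a local LBP histogram with global DCT coefficients.
import numpy as np
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern

def lip_texture_feature(gray_patch, n_dct=32):
    """gray_patch: 2-D float array holding a normalized lip region."""
    # Local texture: uniform LBP histogram (8 neighbours, radius 1 -> 10 bins).
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Global texture: low-frequency 2-D DCT coefficients (top-left block).
    coeffs = dct(dct(gray_patch, axis=0, norm="ortho"), axis=1, norm="ortho")
    global_part = coeffs[:8, :8].flatten()[:n_dct]
    return np.concatenate([hist, global_part])
```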

2. Wang, Yin-Tien, Chen-Tung Chi, and Ying-Chieh Feng. "Robot mapping using local invariant feature detectors". Engineering Computations 31, no. 2 (25 February 2014): 297–316. http://dx.doi.org/10.1108/ec-01-2013-0024.

Abstract:
Purpose – To build a persistent map with visual landmarks is one of the most important steps for implementing visual simultaneous localization and mapping (SLAM). The corner detector is a common method utilized to detect visual landmarks for constructing a map of the environment. However, due to the scale-variant characteristic of corner detection, extensive computational cost is needed to recover the scale and orientation of corner features in SLAM tasks. The purpose of this paper is to build the map using a local invariant feature detector, namely speeded-up robust features (SURF), to detect scale- and orientation-invariant features as well as provide a robust representation of visual landmarks for SLAM.

Design/methodology/approach – SURF are scale- and orientation-invariant features which have higher repeatability than that obtained by other detection methods. Furthermore, SURF algorithms have better processing speed than other scale-invariant detection methods. The procedures of detection, description and matching of regular SURF algorithms are modified in this paper in order to provide a robust representation of visual landmarks in SLAM. A sparse representation is also used to describe the environmental map and to reduce the computational complexity of state estimation using an extended Kalman filter (EKF). Furthermore, effective procedures of data association and map management for SURF features in SLAM are also designed to improve the accuracy of robot state estimation.

Findings – Experimental works were carried out on an actual system with binocular vision sensors to prove the feasibility and effectiveness of the proposed algorithms. EKF SLAM with the modified SURF algorithms was applied in the experiments, including the evaluation of accurate state estimation as well as the implementation of large-area SLAM. The performance of the modified SURF algorithms was compared with that obtained by regular SURF algorithms. The results show that SURF with less-dimensional descriptors is the most suitable representation of visual landmarks. Meanwhile, the integrated system is successfully validated to fulfill the capabilities of a visual SLAM system.

Originality/value – The contribution of this paper is the novel approach to overcoming the problem of recovering the scale and orientation of visual landmarks in SLAM tasks. This research also extends the usability of local invariant feature detectors in SLAM tasks by utilizing their robust representation of visual landmarks. Furthermore, the data association and map management designed for SURF-based mapping in this paper also give another perspective for improving the robustness of SLAM systems.
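
For readers unfamiliar with SURF, the hedged OpenCV sketch below shows plain SURF detection and matching between two camera views, the baseline the paper's modified procedures build on; it is not the modified algorithm itself, and it needs opencv-contrib-python built with the non-free module enabled.

```python
# Regular SURF detection and brute-force matching between two views.
import cv2

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive landmark correspondences.
bf = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in bf.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]
print(len(good), "tentative landmark correspondences")
```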

3. Sun, Huadong, Xu Zhang, Xiaowei Han, Xuesong Jin, and Zhijie Zhao. "Commodity Image Classification Based on Improved Bag-of-Visual-Words Model". Complexity 2021 (17 March 2021): 1–10. http://dx.doi.org/10.1155/2021/5556899.

Abstract:
With the increasing scale of e-commerce, the complexity of image content makes commodity image classification face great challenges. Image feature extraction often determines the quality of the final classification results. At present, image feature extraction mainly includes underlying visual features and intermediate semantic features. The intermediate semantics of an image act as a bridge between the underlying features and the advanced semantics of the image, which can make up for the semantic gap to a certain extent and has strong robustness. As a typical intermediate semantic representation method, the bag-of-visual-words (BoVW) model has received extensive attention in image classification. However, the traditional BoVW model loses the location information of local features, and its local feature descriptors mainly focus on the texture and shape information of local regions while lacking the expression of color information. Therefore, in this paper an improved bag-of-visual-words model is presented, which contains three aspects of improvement: (1) multiscale local region extraction; (2) local feature description by speeded up robust features (SURF) and the color vector angle histogram (CVAH); and (3) a diagonal concentric rectangular pattern. Experimental results show that the three improvements to the BoVW model are complementary; compared with the traditional BoVW and the BoVW adopting SURF + SPM, the classification accuracy of the improved BoVW is increased by 3.60% and 2.33%, respectively.
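
The underlying BoVW machinery that the paper improves can be sketched in a few lines. ORB descriptors stand in here for the paper's SURF + CVAH descriptors, and the codebook size and normalization are illustrative assumptions.

```python
# Minimal BoVW: local descriptors -> k-means codebook -> word histogram.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(train_images, k=200):
    orb = cv2.ORB_create()
    descs = []
    for img in train_images:                      # grayscale uint8 arrays
        _, d = orb.detectAndCompute(img, None)
        if d is not None:
            descs.append(d.astype(np.float32))
    return MiniBatchKMeans(n_clusters=k).fit(np.vstack(descs))

def bovw_histogram(img, codebook):
    _, d = cv2.ORB_create().detectAndCompute(img, None)
    words = codebook.predict(d.astype(np.float32))
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / max(hist.sum(), 1)              # L1-normalized word histogram
```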

4. Manandhar, Dipu, Kim-Hui Yap, Zhenwei Miao, and Lap-Pui Chau. "Lattice-Support repetitive local feature detection for visual search". Pattern Recognition Letters 98 (October 2017): 123–29. http://dx.doi.org/10.1016/j.patrec.2017.09.021.

5. Yang, Hong-Ying, Yong-Wei Li, Wei-Yi Li, Xiang-Yang Wang, and Fang-Yu Yang. "Content-based image retrieval using local visual attention feature". Journal of Visual Communication and Image Representation 25, no. 6 (August 2014): 1308–23. http://dx.doi.org/10.1016/j.jvcir.2014.05.003.

6. Dong, Baoyu, and Guang Ren. "A New Scene Classification Method Based on Local Gabor Features". Mathematical Problems in Engineering 2015 (2015): 1–14. http://dx.doi.org/10.1155/2015/109718.

Abstract:
A new scene classification method is proposed based on the combination of local Gabor features with a spatial pyramid matching model. First, new local Gabor feature descriptors are extracted from dense sampling patches of scene images. These local feature descriptors are embedded into a bag-of-visual-words (BOVW) model, which is combined with a spatial pyramid matching framework. The new local Gabor feature descriptors have sufficient discrimination abilities for dense regions of scene images. The efficient feature vectors of scene images can then be obtained by the K-means clustering method and visual word statistics. Second, in order to decrease classification time and improve accuracy, an improved kernel principal component analysis (KPCA) method is applied to reduce the dimensionality of the pyramid histogram of visual words (PHOW). The principal components with greater interclass separability are retained in the feature vectors, which are used for scene classification by the linear support vector machine (SVM) method. The proposed method is evaluated on three commonly used scene datasets. Experimental results demonstrate the effectiveness of the method.
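
A minimal sketch of dense local Gabor responses, in the spirit of, but not identical to, the descriptors described above; the filter-bank parameters and patch size are assumptions.

```python
# Dense local Gabor features: filter bank + mean response per sampling patch.
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(2, 4), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    return [cv2.getGaborKernel((ksize, ksize), s, t, lambd=10.0, gamma=0.5)
            for s in sigmas for t in thetas]

def dense_gabor_features(gray, patch=16):
    responses = [cv2.filter2D(gray.astype(np.float32), -1, k) for k in gabor_bank()]
    feats = []
    for y in range(0, gray.shape[0] - patch + 1, patch):
        for x in range(0, gray.shape[1] - patch + 1, patch):
            feats.append([r[y:y+patch, x:x+patch].mean() for r in responses])
    return np.asarray(feats)      # one descriptor per dense sampling patch
```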

7. Gao, Yuhang, and Long Zhao. "Coarse TRVO: A Robust Visual Odometry with Detector-Free Local Feature". Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 5 (20 September 2022): 731–39. http://dx.doi.org/10.20965/jaciii.2022.p0731.

Abstract:
The visual SLAM system requires precise localization. To obtain consistent feature matching results, visual features acquired by neural networks are increasingly used to replace traditional hand-crafted features in situations with weak texture, motion blur, or repeated patterns. However, most deep-learning-enhanced SLAM systems improve accuracy at the cost of decreased efficiency. In this paper, we propose Coarse TRVO, a visual odometry system that uses deep learning for feature matching. The deep learning network uses CNN and transformer structures to provide dense, high-quality end-to-end matches for a pair of images, even in indistinctive settings where low-texture regions or repeating patterns occupy the majority of the field of view. Meanwhile, we made the proposed model compatible with the NVIDIA TensorRT runtime to boost the performance of the algorithm. After obtaining the matching point pairs, the camera pose is solved in an optimized way by minimizing the re-projection error of the feature points. Experiments based on multiple datasets and real environments show that Coarse TRVO achieves higher robustness and relative positioning accuracy in comparison with current mainstream visual SLAM systems.

8. N. Sultani, Zainab, and Ban N. Dhannoon. "Modified Bag of Visual Words Model for Image Classification". Al-Nahrain Journal of Science 24, no. 2 (1 June 2021): 78–86. http://dx.doi.org/10.22401/anjs.24.2.11.

Abstract:
Image classification is acknowledged as one of the most critical and challenging tasks in computer vision. The bag of visual words (BoVW) model has proven very efficient for image classification tasks, since it can effectively represent distinctive image features in vector space. In this paper, BoVW using Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) descriptors is adapted for image classification. We propose a novel image classification system using image local feature information obtained from both SIFT and ORB local feature descriptors. As a result, the constructed SO-BoVW model presents highly discriminative features, enhancing classification performance. Experiments on the Caltech-101 and Flowers datasets prove the effectiveness of the proposed method.

9. Aw, Y. K., Robyn Owens, and John Ross. "An analysis of local energy and phase congruency models in visual feature detection". Journal of the Australian Mathematical Society. Series B. Applied Mathematics 40, no. 1 (July 1998): 97–122. http://dx.doi.org/10.1017/s0334270000012406.

Abstract:
A variety of approaches have been developed for the detection of features such as edges, lines, and corners in images. Many techniques presuppose the feature type, such as a step edge, and use the differential properties of the luminance function to detect the location of such features. The local energy model provides an alternative approach, detecting a variety of feature types in a single pass by analysing order in the phase components of the Fourier transform of the image. The local energy model is usually implemented by calculating the envelope of the analytic signal associated with the image function. Here we analyse the accuracy of such an implementation, and show that in certain cases the feature location is only approximately given by the local energy model. Orientation selectivity is another aspect of the local energy model, and we show that a feature is only correctly located at a peak of the local energy function when local energy has a zero gradient in two orthogonal directions at the peak point.
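
For reference, the analytic-signal implementation the authors analyse admits a compact one-dimensional statement (a sketch, with b a zero-mean band-pass filter and the Hilbert transform supplying the quadrature pair):

```latex
% Local energy of a band-passed image profile I(x); features sit at maxima of E.
E(x) = \sqrt{F^{2}(x) + H^{2}(x)}, \qquad F = b * I, \quad H = \mathcal{H}\{F\}
```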

10. Han, Xian-Hua, and Yen-Wei Chen. "Biomedical Imaging Modality Classification Using Combined Visual Features and Textual Terms". International Journal of Biomedical Imaging 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/241396.

Abstract:
We describe an approach for automatic modality classification in the medical image retrieval task of the 2010 CLEF cross-language image retrieval campaign (ImageCLEF). This paper focuses on the process of feature extraction from medical images and fuses the different extracted visual features with a textual feature for modality classification. To extract visual features from the images, we used histogram descriptors of edge, gray, or color intensity and block-based variation as global features, and a SIFT histogram as a local feature. For the textual feature of image representation, a binary histogram of some predefined vocabulary words from image captions is used. Then, we combine the different features using normalized kernel functions for SVM classification. Furthermore, for some easily misclassified modality pairs, such as CT and MR or PET and NM modalities, a local classifier is used to distinguish samples within the pair and improve performance. The proposed strategy is evaluated with the modality dataset provided by ImageCLEF 2010.

11. Wang, Di, Hongying Zhang, and Yanhua Shao. "A Robust Invariant Local Feature Matching Method for Changing Scenes". Wireless Communications and Mobile Computing 2021 (28 December 2021): 1–13. http://dx.doi.org/10.1155/2021/8927822.

Abstract:
The precise evaluation of camera position and orientation is a momentous procedure in most machine vision tasks, especially visual localization. Aiming at the shortcomings of local features in dealing with changing scenes, and at the problem of realizing a robust end-to-end network that works from feature detection through matching, an invariant local feature matching method for changing-scene image pairs is proposed; it is a network that integrates feature detection, descriptor constitution, and feature matching. In the feature point detection and descriptor construction stage, joint training is carried out based on a neural network. To obtain local features with solid robustness to viewpoint and illumination changes, the Vector of Locally Aggregated Descriptors based on Neural Network (NetVLAD) module is introduced to compute the degree of correlation of description vectors from one image to its counterpart. Then, to enhance the relationship between relevant local features of image pairs, an attentional graph neural network (AGNN) is introduced, and the Sinkhorn algorithm is used to match them; finally, the local feature matching results between image pairs are output. The experimental results show that, compared with existing algorithms, the proposed method enhances the robustness of local features to varying scenes and performs better in terms of homography estimation, matching precision, and recall, and, when the requirements of the visual localization system on the environment are met, the end-to-end network tasks can be realized.
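
The Sinkhorn step mentioned above can be illustrated with a toy implementation. Production matchers of this kind usually work in the log domain with an extra "dustbin" row and column for unmatched features; this plain version only shows the alternating normalization idea.

```python
# Toy Sinkhorn normalization toward a doubly stochastic assignment matrix.
import numpy as np

def sinkhorn(scores, n_iters=20, eps=1e-8):
    """scores: (M, N) non-negative similarities between two feature sets."""
    P = np.asarray(scores, dtype=np.float64) + eps
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)   # columns sum to 1
    return P                                # soft feature-assignment matrix
```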

12. Liu, Xianglong, Bo Lang, Yi Xu, and Bo Cheng. "Feature grouping and local soft match for mobile visual search". Pattern Recognition Letters 33, no. 3 (February 2012): 239–46. http://dx.doi.org/10.1016/j.patrec.2011.10.002.

13. Liu, Chang, Zhuocheng Zou, Yuan Miao, and Jun Qiu. "Light field quality assessment based on aggregation learning of multiple visual features". Optics Express 30, no. 21 (30 September 2022): 38298. http://dx.doi.org/10.1364/oe.467754.

Abstract:
Light field imaging is a way to represent human vision from a computational perspective. It contains more visual information than traditional imaging systems. As a basic problem of light field imaging, light field quality assessment has received extensive attention in recent years. In this study, we explore the characteristics of light field data for different visual domains (spatial, angular, coupled, projection, and depth), study the multiple visual features of a light field, and propose a non-reference light field quality assessment method based on aggregation learning of multiple visual features. The proposed method has four key modules: multi-visual representation of a light field, feature extraction, feature aggregation, and quality assessment. It first extracts the natural scene statistics (NSS) features from the central view image in the spatial domain. It extracts gray-level co-occurrence matrix (GLCM) features both in the angular domain and in the spatial-angular coupled domain. Then, it extracts the rotation-invariant uniform local binary pattern (LBP) features of depth map in the depth domain, and the statistical characteristics of the local entropy (SDLE) features of refocused images in the projection domain. Finally, the multiple visual features are aggregated to form a visual feature vector for the light field. A prediction model is trained by support vector machines (SVM) to establish a light field quality assessment method based on aggregation learning of multiple visual features.
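
As an illustration of one ingredient, the GLCM statistics used in the angular and coupled domains can be computed with scikit-image as below; the distances, angles, and property list are assumptions rather than the paper's exact settings.

```python
# GLCM texture statistics for one gray-level image (uint8, 256 levels).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```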

14. Fawad, Muhammad Jamil Khan, and MuhibUr Rahman. "Person Re-Identification by Discriminative Local Features of Overlapping Stripes". Symmetry 12, no. 4 (17 April 2020): 647. http://dx.doi.org/10.3390/sym12040647.

Abstract:
The human visual system can recognize a person based on physical appearance, even if extreme spatio-temporal variations exist. However, the surveillance systems deployed so far fail to re-identify an individual who travels through non-overlapping cameras' fields of view. Person re-identification (Re-ID) is the task of associating individuals across disjoint camera views. In this paper, we propose a robust feature extraction model named Discriminative Local Features of Overlapping Stripes (DLFOS) that can associate corresponding actual individuals in a disjoint visual surveillance system. The proposed DLFOS model accumulates discriminative features from the local patch of each overlapping stripe of the pedestrian appearance. The concatenation of the histogram of oriented gradients, the Gaussian of color, and the magnitude operator of CJLBP brings robustness to the final feature vector. The experimental results show that our proposed feature extraction model achieves a rank@1 matching rate of 47.18% on VIPeR, 64.4% on CAVIAR4REID, and 62.68% on Market1501, outperforming recently reported models from the literature and validating the advantage of the proposed model.

15. He, Xuan, Wang Gao, Chuanzhen Sheng, Ziteng Zhang, Shuguo Pan, Lijin Duan, Hui Zhang, and Xinyu Lu. "LiDAR-Visual-Inertial Odometry Based on Optimized Visual Point-Line Features". Remote Sensing 14, no. 3 (27 January 2022): 622. http://dx.doi.org/10.3390/rs14030622.

Abstract:
This study presents a LiDAR-Visual-Inertial Odometry (LVIO) system based on optimized visual point-line features, which can effectively compensate for the limitations of a single sensor in real-time localization and mapping. Firstly, an improved line feature extraction in scale space and a constraint matching strategy, using the least square method, are proposed to provide richer visual features for the front end of LVIO. Secondly, multi-frame LiDAR point clouds are projected into the visual frame for feature depth correlation. Thirdly, the initial estimation results of Visual-Inertial Odometry (VIO) are used to optimize the scan matching accuracy of LiDAR. Finally, a factor graph based on a Bayesian network is proposed to build the LVIO fusion system, in which a GNSS factor and a loop factor are introduced to constrain LVIO globally. The evaluations on indoor and outdoor datasets show that the proposed algorithm is superior to other state-of-the-art algorithms in real-time efficiency, positioning accuracy, and mapping effect. Specifically, the average RMSE of absolute trajectory in the indoor environment is 0.075 m and that in the outdoor environment is 3.77 m. These experimental results prove that the proposed algorithm can effectively solve the problems of line feature mismatching and the accumulated error of local sensors in mobile carrier positioning.

16. Chi, Jianning, Xiaosheng Yu, Yifei Zhang, and Huan Wang. "A Novel Local Human Visual Perceptual Texture Description with Key Feature Selection for Texture Classification". Mathematical Problems in Engineering 2019 (4 February 2019): 1–20. http://dx.doi.org/10.1155/2019/3756048.

Abstract:
This paper proposes a novel local texture description method which defines six human visual perceptual characteristics and selects a minimal subset of relevant as well as nonredundant features based on principal component analysis (PCA). We give the six texture characteristics originally defined by Tamura et al. novel definitions and local metrics, so that these measurements reflect the human perception of each characteristic more precisely. Then, we propose a PCA-based feature selection method exploiting the structure of the principal components of the feature set to find a subset of the original feature vector, where the features reflect the most representative characteristics of the textures in the given image dataset. Experiments on different publicly available large datasets demonstrate that the proposed method provides superior classification performance over most state-of-the-art feature description methods with respect to accuracy and efficiency.

17. Zhang, Peng, and Wenfen Liu. "DLALoc: Deep-Learning Accelerated Visual Localization Based on Mesh Representation". Applied Sciences 13, no. 2 (13 January 2023): 1076. http://dx.doi.org/10.3390/app13021076.

Abstract:
Visual localization, i.e., camera pose localization within a known three-dimensional (3D) model, is a basic component of numerous applications such as autonomous driving cars and augmented reality systems. The most widely used methods from the literature are based on local feature matching between a query image that needs to be localized and database images with known camera poses and local features. However, this approach still struggles with different illumination conditions and seasonal changes. Additionally, the scene is normally represented by a sparse structure-from-motion point cloud that has corresponding local features to match. This scene representation depends heavily on the local feature type, and changing the local feature type requires an expensive feature-matching step to regenerate the 3D model. Moreover, state-of-the-art matching strategies are too resource intensive for some real-time applications. Therefore, in this paper, we introduce a novel framework called deep-learning accelerated visual localization (DLALoc) based on mesh representation. In detail, we employ a dense 3D model, i.e., a mesh, to represent a scene, which can provide more robust 2D-3D matches than 3D point clouds and database images. We obtain the corresponding 3D points from the depth map rendered from the mesh. Under this scene representation, we use a pretrained multilayer perceptron combined with homotopy continuation to calculate the relative pose of the query and database images. We also use the scale consistency of 2D-3D matches to perform efficient random sample consensus and find the best 2D inlier set for the subsequent perspective-n-point localization step. Furthermore, we evaluate the proposed visual localization pipeline experimentally on the Aachen DayNight v1.1 and RobotCar Seasons datasets. The results show that the proposed approach achieves state-of-the-art accuracy and shortens the localization time by about a factor of five.

18. Ujala Razaq, Muhammad Muneeb Ullah, and Muhammad Usman. "Local and Deep Features for Robust Visual Indoor Place Recognition". Open Journal of Science and Technology 3, no. 2 (4 August 2020): 140–47. http://dx.doi.org/10.31580/ojst.v3i2.1475.

Abstract:
This study focuses on the area of visual indoor place recognition (e.g., in an office setting, automatically recognizing different places, such as offices, corridors, wash rooms, etc.). The potential applications include robot navigation, augmented reality, and image retrieval. However, the task is extremely demanding because of the variations in appearance in such dynamic setups (e.g., view-point, occlusion, illumination, scale, etc.). Recently, the Convolutional Neural Network (CNN) has emerged as a powerful learning mechanism, able to learn deep higher-level features when provided with a comparatively large quantity of labeled training data. Here, we exploit the generic nature of CNN features for robust visual place recognition on the challenging COLD dataset. We employ CNNs pre-trained on the related tasks of object and scene classification for deep feature extraction on the COLD images. We demonstrate that these off-the-shelf features, when combined with a simple linear SVM classifier, outperform their bag-of-features counterpart. Moreover, a simple combination scheme, combining the local bag-of-features and higher-level deep CNN features, produces outstanding results on the COLD dataset.
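
A hedged sketch of the off-the-shelf-CNN-plus-linear-SVM recipe the study describes; the backbone (ResNet-18), layer, and preprocessing are assumptions, and the weights API shown requires torchvision 0.13 or newer.

```python
# Off-the-shelf deep features from a pretrained CNN, fed to a linear SVM.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()          # expose the 512-D pooled features
resnet.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406],
                                    [0.229, 0.224, 0.225])])

@torch.no_grad()
def deep_feature(pil_image):
    return resnet(preprocess(pil_image).unsqueeze(0)).squeeze(0).numpy()

# places_clf = LinearSVC().fit(train_features, train_labels)   # then predict
```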

19. Forschack, Norman, Søren K. Andersen, and Matthias M. Müller. "Global Enhancement but Local Suppression in Feature-based Attention". Journal of Cognitive Neuroscience 29, no. 4 (April 2017): 619–27. http://dx.doi.org/10.1162/jocn_a_01075.

Abstract:
A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs) flickering at a different frequency each to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed RDKs in the center to discriminate coherent motion events in the attended from the unattended color RDK, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For peripherally located RDKs, we found the expected SSVEP amplitude increase, relative to precue baseline when color matched the one of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to precue baseline, when the peripheral color matched the unattended one of the central RDK, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.

20. Awad, Ali Ismail, and M. Hassaballah. "Bag-of-Visual-Words for Cattle Identification from Muzzle Print Images". Applied Sciences 9, no. 22 (15 November 2019): 4914. http://dx.doi.org/10.3390/app9224914.

Abstract:
Cattle, buffalo and cow identification plays an influential role in cattle traceability from birth to slaughter, understanding disease trajectories and large-scale cattle ownership management. Muzzle print images are considered discriminating cattle biometric identifiers for biometric-based cattle identification and traceability. This paper presents an exploration of the performance of the bag-of-visual-words (BoVW) approach in cattle identification using local invariant features extracted from a database of muzzle print images. Two local invariant feature detectors—namely, speeded-up robust features (SURF) and maximally stable extremal regions (MSER)—are used as feature extraction engines in the BoVW model. The performance evaluation criteria include several factors, namely, the identification accuracy, processing time and the number of features. The experimental work measures the performance of the BoVW model under a variable number of input muzzle print images in the training, validation, and testing phases. The identification accuracy values when utilizing the SURF feature detector and descriptor were 75%, 83%, 91%, and 93% for when 30%, 45%, 60%, and 75% of the database was used in the training phase, respectively. However, using MSER as a points-of-interest detector combined with the SURF descriptor achieved accuracies of 52%, 60%, 67%, and 67%, respectively, when applying the same training sizes. The research findings have proven the feasibility of deploying the BoVW paradigm in cattle identification using local invariant features extracted from muzzle print images.

21. von der Heydt, Rüdiger, and Nan R. Zhang. "Figure and ground: how the visual cortex integrates local cues for global organization". Journal of Neurophysiology 120, no. 6 (1 December 2018): 3085–98. http://dx.doi.org/10.1152/jn.00125.2018.

Abstract:
Inferring figure-ground organization in two-dimensional images may require different complementary strategies. For isolated objects, it has been shown that mechanisms in visual cortex exploit the overall distribution of contours, but in images of cluttered scenes where the grouping of contours is not obvious, that strategy would fail. However, natural scenes contain local features, specifically contour junctions, that may contribute to the definition of object regions. To study the role of local features in the assignment of border ownership, we recorded single-cell activity from visual cortex in awake behaving Macaca mulatta. We tested configurations perceived as two overlapping figures in which T- and L-junctions depend on the direction of overlap, whereas the overall distribution of contours provides no valid information. While recording responses to the occluding contour, we varied direction of overlap and variably masked some of the critical contour features to determine their influences and their interactions. On average, most features influenced the responses consistently, producing either enhancement or suppression depending on border ownership. Different feature types could have opposite effects even at the same location. Features far from the receptive field produced effects as strong as near features and with the same short latency. Summation was highly nonlinear: any single feature produced more than two-thirds of the effect of all features together. These findings reveal fast and highly specific organization mechanisms, supporting a previously proposed model in which “grouping cells” integrate widely distributed edge signals with specific end-stopped signals to modulate the original edge signals by feedback.

NEW & NOTEWORTHY: Seeing objects seems effortless, but defining objects in a scene requires sophisticated neural mechanisms. For isolated objects, the visual cortex groups contours based on overall distribution, but this strategy does not work for cluttered scenes. Here, we demonstrate mechanisms that integrate local contour features like T- and L-junctions to resolve clutter. The process is fast, evaluates widely distributed features, and gives any single feature a decisive influence on figure-ground representation.

22. Xiong, Jian, Xinzhong Zhu, Jie Yuan, Ran Shi, and Hao Gao. "Perceptual visual security assessment by fusing local and global feature similarity". Computers & Electrical Engineering 91 (May 2021): 107071. http://dx.doi.org/10.1016/j.compeleceng.2021.107071.

23. Stach, Silke, Julie Benard, and Martin Giurfa. "Local-feature assembling in visual pattern recognition and generalization in honeybees". Nature 429, no. 6993 (June 2004): 758–61. http://dx.doi.org/10.1038/nature02594.

24. Dragoi, Valentin, Jitendra Sharma, Earl K. Miller, and Mriganka Sur. "Dynamics of neuronal sensitivity in visual cortex and local feature discrimination". Nature Neuroscience 5, no. 9 (5 August 2002): 883–91. http://dx.doi.org/10.1038/nn900.

25. Ren, Zhiquan, Yue Deng, and Qionghai Dai. "Local visual feature fusion via maximum margin multimodal deep neural network". Neurocomputing 175 (January 2016): 427–32. http://dx.doi.org/10.1016/j.neucom.2015.10.076.

26. Zhang, Chunjie, Xian Xiao, Junbiao Pang, Chao Liang, Yifan Zhang, and Qingming Huang. "Beyond visual word ambiguity: Weighted local feature encoding with governing region". Journal of Visual Communication and Image Representation 25, no. 6 (August 2014): 1387–98. http://dx.doi.org/10.1016/j.jvcir.2014.05.010.

27. Xiao, Feng, and Qiuxia Wu. "Visual saliency based global–local feature representation for skin cancer classification". IET Image Processing 14, no. 10 (21 August 2020): 2140–48. http://dx.doi.org/10.1049/iet-ipr.2019.1018.

28. Lee, J., K. Roh, D. Wagner, and H. Ko. "Robust local feature extraction algorithm with visual cortex for object recognition". Electronics Letters 47, no. 19 (2011): 1075. http://dx.doi.org/10.1049/el.2011.1832.

29. Da, Zikai, Yu Gao, Zihan Xue, Jing Cao, and Peizhen Wang. "Local and Global Feature Aggregation-Aware Network for Salient Object Detection". Electronics 11, no. 2 (12 January 2022): 231. http://dx.doi.org/10.3390/electronics11020231.

Abstract:
With the rise of deep learning technology, salient object detection algorithms based on convolutional neural networks (CNNs) are gradually replacing traditional methods. The majority of existing studies, however, have focused on the integration of multi-scale features, thereby ignoring other significant characteristics. To address this problem, we fully utilize the features to alleviate redundancy. In this paper, a novel CNN named the local and global feature aggregation-aware network (LGFAN) is proposed. It combines a Visual Geometry Group (VGG) backbone for feature extraction, an attention module for high-quality feature filtering, and an aggregation module with a mechanism for rich salient features to ease the dilution process on the top-down pathway. Experimental results on five public datasets demonstrate that the proposed method improves computational efficiency while maintaining favorable performance.

30. Zhou, Zhili, Meimin Wang, Yi Cao, and Yuecheng Su. "CNN Feature-Based Image Copy Detection with Contextual Hash Embedding". Mathematics 8, no. 7 (17 July 2020): 1172. http://dx.doi.org/10.3390/math8071172.

Abstract:
As one of the important techniques for protecting the copyrights of digital images, content-based image copy detection has attracted a lot of attention in the past few decades. Traditional content-based copy detection methods usually extract local hand-crafted features and then quantize these features to visual words with the bag-of-visual-words (BOW) model to build an inverted index file for rapid image matching. Recently, deep learning features, such as features derived from convolutional neural networks (CNN), have been proven to outperform hand-crafted features in many applications of computer vision. However, it is not feasible to directly apply existing global CNN features for copy detection, since they are usually sensitive to partial content-discarding attacks, such as cropping and occlusion. Thus, we propose a local CNN feature-based image copy detection method with contextual hash embedding. We first extract local CNN features from images and then quantize them to visual words to construct an index file. Then, as the BOW quantization process decreases the discriminability of these features to some extent, a contextual hash sequence is captured from a relatively large region surrounding each CNN feature and embedded into the index file to improve the features' discriminability. Extensive experimental results demonstrate that the proposed method achieves superior performance compared to the related works in the copy detection task.
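
The inverted-index idea at the core of such BOW-based copy detection is easy to sketch; this toy version omits the paper's contextual hash verification and simply votes by shared visual words.

```python
# Toy inverted index: visual word id -> images containing it.
from collections import defaultdict

index = defaultdict(set)

def add_image(image_id, word_ids):
    for w in set(word_ids):
        index[w].add(image_id)

def query(word_ids):
    votes = defaultdict(int)
    for w in set(word_ids):
        for image_id in index[w]:
            votes[image_id] += 1     # more shared words, stronger candidate
    return sorted(votes, key=votes.get, reverse=True)
```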

31. Bilquees, Samina, Hassan Dawood, Hussain Dawood, Nadeem Majeed, Ali Javed, and Muhammad Tariq Mahmood. "Noise Resilient Local Gradient Orientation for Content-Based Image Retrieval". International Journal of Optics 2021 (14 July 2021): 1–19. http://dx.doi.org/10.1155/2021/4151482.

Abstract:
In a world of multimedia information, where users seek accurate results against a search query and demand relevant multimedia content retrieval, developing an accurate content-based image retrieval (CBIR) system is difficult due to the presence of noise in the image. The performance of the CBIR system is impaired by this noise. To estimate the distance between the query and database images, CBIR systems use image feature representation. The noise or artifacts present within the visual data might confuse the CBIR when retrieving relevant results. Therefore, we propose the Noise Resilient Local Gradient Orientation (NRLGO) feature representation, which overcomes the noise factor within the visual information and strengthens the CBIR to retrieve accurate and relevant results. The proposed NRLGO consists of three steps: estimation and removal of noise to protect the local visual structure; extraction of color, texture, and local contrast features; and, at the end, generation of a microstructure for visual representation. The Manhattan distance between the query image and the database image is used to measure their similarity. The proposed technique was tested using the Corel dataset, which contains 10000 images from 100 different categories. The outcomes of the experiment signify that the proposed NRLGO has higher retrieval performance in comparison with state-of-the-art techniques.

32. Norhisham Razali, Mohd, Noridayu Manshor, Alfian Abdul Halin, Norwati Mustapha, and Razali Yaakob. "Fuzzy encoding with hybrid pooling for visual dictionary in food recognition". Indonesian Journal of Electrical Engineering and Computer Science 21, no. 1 (1 January 2021): 179. http://dx.doi.org/10.11591/ijeecs.v21.i1.pp179-195.

Abstract:
The tremendous number of food images in social media services can be exploited through food recognition for healthcare benefits and food industry marketing. The main challenge in food recognition is the large variability of food appearance, which often generates highly diverse and ambiguous descriptions of local features. Ironically, these ambiguous descriptions of local features trigger information loss in visual dictionary construction under hard assignment practices. Current methods based on hard assignment and the Fisher vector approach to constructing a visual dictionary unexpectedly cause errors from the uncertainty problem during visual word assignation. This research proposes a method that combines a soft assignment technique, using a fuzzy encoding approach, with a maximum pooling technique to aggregate the features, producing a highly discriminative and robust visual dictionary across various local features and machine learning classifiers. The local features, obtained with the MSER detector and SURF descriptor, were encoded using the fuzzy encoding approach. A support vector machine (SVM) with a linear kernel was employed to evaluate the effect of fuzzy encoding. The results of the experiments demonstrate a noteworthy classification performance of the fuzzy encoding approach compared to the traditional approach based on hard assignment and the Fisher vector technique. The effects of uncertainty and plausibility were minimized, along with a more discriminative and compact visual dictionary representation.

33. Sarwar, Amna, Zahid Mehmood, Tanzila Saba, Khurram Ashfaq Qazi, Ahmed Adnan, and Habibullah Jamal. "A novel method for content-based image retrieval to improve the effectiveness of the bag-of-words model using a support vector machine". Journal of Information Science 45, no. 1 (20 June 2018): 117–35. http://dx.doi.org/10.1177/0165551518782825.

Abstract:
The advancements in multimedia technologies have resulted in the growth of image databases. Retrieving images from such databases using the visual attributes of the images is a challenging task due to the close visual appearance among these attributes, which also introduces the issue of the semantic gap. In this article, we recommend a novel method based on the bag-of-words (BoW) model, which performs visual-words integration of the local intensity order pattern (LIOP) feature and the local binary pattern variance (LBPV) feature to reduce the issue of the semantic gap and enhance the performance of content-based image retrieval (CBIR). The recommended method uses LIOP and LBPV features to build two smaller visual vocabularies (one from each feature), which are integrated together to build a larger visual vocabulary that also contains the complementary features of both descriptors. For efficient CBIR, a smaller visual vocabulary improves recall, while a larger visual vocabulary improves the precision or accuracy of the CBIR. The comparative analysis of the recommended method is performed on three image databases, namely WANG-1K, WANG-1.5K, and Holidays. The experimental analysis on these image databases proves its robust performance as compared with recent CBIR methods.

34. Qu, Haicheng, Siqi Zhao, and Wanjun Liu. "Fine-Grained Visual Classification with Aggregated Object Localization and Salient Feature Suppression". Journal of Physics: Conference Series 2171, no. 1 (1 January 2022): 012036. http://dx.doi.org/10.1088/1742-6596/2171/1/012036.

Abstract:
Fine-grained visual classification (FGVC) aims to classify sub-classes of objects within the same super-class. For FGVC tasks, it is necessary to find subtle yet discriminative information in local areas. However, traditional FGVC approaches tend to extract strongly discriminative features and overlook some subtle yet useful ones. Besides, current methods ignore the influence of background noise on feature extraction. Therefore, aggregated object localization combined with salient feature suppression is proposed as a stacked network. First, the feature maps extracted by the coarse network are fed into aggregated object localization to obtain the complete foreground object in an image. Secondly, the refined features obtained by zooming in on the complete foreground object are fed into the fine network. Finally, after finer network processing, the feature maps are fed into the salient feature suppression module to find more valuable region-discriminative features for classification. Experimental results on two datasets show that our proposed method achieves superior results compared with state-of-the-art methods.

35. Wu, Hui, Min Wang, Wengang Zhou, Yang Hu, and Houqiang Li. "Learning Token-Based Representation for Image Retrieval". Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (28 June 2022): 2703–11. http://dx.doi.org/10.1609/aaai.v36i3.20173.

Abstract:
In image retrieval, deep local features learned in a data-driven manner have been demonstrated effective to improve retrieval performance. To realize efficient retrieval on large image database, some approaches quantize deep local features with a large codebook and match images with aggregated match kernel. However, the complexity of these approaches is non-trivial with large memory footprint, which limits their capability to jointly perform feature learning and aggregation. To generate compact global representations while maintaining regional matching capability, we propose a unified framework to jointly learn local feature representation and aggregation. In our framework, we first extract local features using CNNs. Then, we design a tokenizer module to aggregate them into a few visual tokens, each corresponding to a specific visual pattern. This helps to remove background noise, and capture more discriminative regions in the image. Next, a refinement block is introduced to enhance the visual tokens with self-attention and cross-attention. Finally, different visual tokens are concatenated to generate a compact global representation. The whole framework is trained end-to-end with image-level labels. Extensive experiments are conducted to evaluate our approach, which outperforms the state-of-the-art methods on the Revisited Oxford and Paris datasets.

36. Mao, Keming, Renjie Tang, Xinqi Wang, Weiyi Zhang, and Haoxiang Wu. "Feature Representation Using Deep Autoencoder for Lung Nodule Image Classification". Complexity 2018 (2018): 1–11. http://dx.doi.org/10.1155/2018/3078374.

Abstract:
This paper focuses on the problem of lung nodule image classification, which plays a key role in early lung cancer diagnosis. In this work, we propose a novel model for lung nodule image feature representation that incorporates both local and global characteristics. First, lung nodule images are divided into local patches with superpixels. Then these patches are transformed into fixed-length local feature vectors using an unsupervised deep autoencoder (DAE). The visual vocabulary is constructed based on the local features, and a bag of visual words (BOVW) is used to describe the global feature representation of the lung nodule image. Finally, the softmax algorithm is employed for lung nodule type classification, which assembles the whole training process in an end-to-end mode. Comprehensive evaluations are conducted on the widely used, publicly available ELCAP lung image database. Experimental results with regard to different parameter settings, data augmentation, model sparsity, classifier algorithms, and model ensembles validate the effectiveness of our proposed approach.
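
A minimal sketch of the unsupervised patch-autoencoder stage described above; the layer sizes and the flattened-patch input format are assumptions.

```python
# Unsupervised autoencoder mapping flattened patches to fixed-length codes.
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, in_dim=256, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        code = self.encoder(x)       # local feature vector used by the BOVW stage
        return self.decoder(code), code

# train with nn.MSELoss() between input patches and their reconstructions
```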

37. Xu, Shuai Hua, Sheng Qi Guan, and Long Long Chen. "Steel Strip Defect Detection Based on Human Visual Attention Mechanism Model". Applied Mechanics and Materials 530-531 (February 2014): 456–62. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.456.

Abstract:
According to the characteristics of steel strip, this paper proposes a strip defect detection algorithm based on a visual attention mechanism. First, the color, brightness, and orientation characteristics of the input image are extracted to form simple feature maps. Second, the features are processed: defective attention regions of colored defect images are obtained by threshold segmentation of the color characteristics, while wavelet decomposition of the brightness and direction features of colorless defect images forms multi-feature subgraphs. Feature difference maps are then constructed through center-surround difference operations on the decomposed feature maps and fused into a feature saliency map. Finally, defect targets are segmented using a local threshold method and region growing. The experimental results show that this method can rapidly and accurately detect defects in strip images while improving detection efficiency.

38. Krishnan, Divya Lakshmi, Rajappa Muthaiah, Anand Madhukar Tapas, and Krithivasan Kannan. "Evaluation of Local Feature Detectors for the Comparison of Thermal and Visual Low Altitude Aerial Images". Defence Science Journal 68, no. 5 (12 September 2018): 473–79. http://dx.doi.org/10.14429/dsj.68.11233.

Abstract:
Local features are key regions of an image suitable for applications such as image matching and fusion. Detection of targets under varying atmospheric conditions via aerial images is a typical defence application where multi-spectral correlation is essential. This study focuses on local features for the comparison of thermal and visual aerial images. State-of-the-art differential and intensity-comparison-based features are evaluated over the dataset. An improved affine-invariant feature is proposed with a new saliency measure. The performances of the existing and the proposed features are measured with a ground truth transformation estimated for each of the image pairs. Among the state-of-the-art local features, Speeded Up Robust Features exhibited the highest average repeatability, 57 per cent. The proposed detector produces features with an average repeatability of 64 per cent. Future work includes the design of techniques for retrieval of corresponding regions.

39. Deng, Ruizhe, Yang Zhao, and Yong Ding. "Hierarchical Feature Extraction Assisted with Visual Saliency for Image Quality Assessment". Journal of Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/4752378.

Abstract:
Image quality assessment (IQA) is desired to evaluate the perceptual quality of an image in a manner consistent with subjective rating. Considering the characteristics of hierarchical visual cortex, a novel full reference IQA method is proposed in this paper. Quality-aware features that human visual system is sensitive to are extracted to describe image quality comprehensively. Concretely, log Gabor filters and local tetra patterns are employed to capture spatial frequency and local texture features, which are attractive to the primary and secondary visual cortex, respectively. Moreover, images are enhanced before feature extraction with the assistance of visual saliency maps since visual attention affects human evaluation of image quality. The similarities between the features extracted from distorted image and corresponding reference images are synthesized and mapped into an objective quality score by support vector regression. Experiments conducted on four public IQA databases show that the proposed method outperforms other state-of-the-art methods in terms of both accuracy and robustness; that is, it is highly consistent with subjective evaluation and is robust across different databases.

40. Yohanes, Banu Wirawan. "Images Similarity based on Bags of SIFT Descriptor and K-Means Clustering". Techné: Jurnal Ilmiah Elektroteknika 18, no. 02 (9 October 2019): 137–46. http://dx.doi.org/10.31358/techne.v18i02.217.

Abstract:
Content-based image retrieval has developed and received much attention in computer vision, supported by the ubiquity of the Internet and digital devices. The bag-of-words method from text-based image retrieval trains images' local features to build a visual vocabulary. These visual words are used to represent local features, which are quantized before clustering into a number of bags. Here, the scale-invariant feature transform descriptor is used as the local feature of images that are compared with each other to find their similarity. It is robust to clutter and partial visibility compared to global features. The main goal of this research is to build and use a vocabulary to measure image similarity across two tiny image datasets. The K-means clustering algorithm is used to find the centroid of each cluster at different k values. The experimental results show that the bag-of-keypoints method has the potential to be implemented in content-based information retrieval.
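
The bag-of-keypoints pipeline in this abstract maps directly onto standard tools. In the sketch below, `kmeans` is assumed to be a scikit-learn KMeans model already fitted on SIFT descriptors collected from training images.

```python
# SIFT bags compared by cosine similarity of visual-word histograms.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def similarity(gray_a, gray_b, kmeans):
    hists = []
    for g in (gray_a, gray_b):
        _, des = sift.detectAndCompute(g, None)
        words = kmeans.predict(des)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        hists.append(hist / np.linalg.norm(hist))
    return float(hists[0] @ hists[1])    # cosine similarity of the two bags
```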

41. Frontoni, Emanuele, Adriano Mancini, and Primo Zingaretti. "Feature Group Matching: A Novel Method to Filter Out Incorrect Local Feature Matchings". International Journal of Pattern Recognition and Artificial Intelligence 28, no. 05 (31 July 2014): 1450012. http://dx.doi.org/10.1142/s0218001414500128.

Abstract:
Finding correct correspondences between two images is the major aspect of problems such as appearance-based robot localization and content-based image retrieval. Local feature matching has become a commonly used method to compare images, even though it is highly probable that at least some of the matchings/correspondences it detects are incorrect. In this paper, we describe a novel approach to local feature matching, named Feature Group Matching (FGM), to select stable features and obtain a more reliable similarity value between two images. The proposed technique is demonstrated to be translation, rotation, and scaling invariant. Experimental evaluation was performed on large and heterogeneous datasets of images using SIFT and SURF, the current state-of-the-art feature extractors. Results show that FGM avoids almost 95% of incorrect matchings, reduces visual aliasing (the number of images considered similar), and increases both robot localization and image retrieval accuracy by an average of 13%.
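
FGM itself is described only at a high level here; for reference, the sketch below implements the standard ratio-test-plus-RANSAC filter that such methods are typically compared against, not FGM.

```python
# Baseline match filtering: Lowe's ratio test, then RANSAC homography inliers.
import cv2
import numpy as np

def filtered_matches(kp1, des1, kp2, des2):
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in bf.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return []
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```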

42. Nishimura, Koki, Masaki Yanabe, Masaya Hashimoto, Tomoyuki Araki, Tomohiro Nagatani, Takeru Moriyama, and Shunji Maeda. "Proposal of Electronic Component Visual Inspection Method with Histogram-Based Local Feature". Journal of the Japan Society for Precision Engineering 84, no. 7 (5 July 2018): 664–70. http://dx.doi.org/10.2493/jjspe.84.664.

43. Matsuzaki, Kohei, Kazuyuki Tasaka, and Hiromasa Yanagihara. "Local Feature Reliability Measure Consistent with Match Conditions for Mobile Visual Search". IEICE Transactions on Information and Systems E101.D, no. 12 (1 December 2018): 3170–80. http://dx.doi.org/10.1587/transinf.2018edp7107.

44. Liu, Risheng, Shanshan Bai, Zhixun Su, Changcheng Zhang, and Chunhai Sun. "Robust visual tracking via L0 regularized local low-rank feature learning". Journal of Electronic Imaging 24, no. 3 (22 May 2015): 033012. http://dx.doi.org/10.1117/1.jei.24.3.033012.

45. Xiong, Wei, Lefei Zhang, Bo Du, and Dacheng Tao. "Combining local and global: Rich and robust feature pooling for visual recognition". Pattern Recognition 62 (February 2017): 225–35. http://dx.doi.org/10.1016/j.patcog.2016.08.006.

46. Wang, Yong, Xian Wei, Lu Ding, Xiaoliang Tang, and Huanlong Zhang. "A robust visual tracking method via local feature extraction and saliency detection". Visual Computer 36, no. 4 (9 April 2019): 683–700. http://dx.doi.org/10.1007/s00371-019-01646-1.

47. Tan, Weimin, Bo Yan, Ke Li, and Qi Tian. "Image Retargeting for Preserving Robust Local Feature: Application to Mobile Visual Search". IEEE Transactions on Multimedia 18, no. 1 (January 2016): 128–37. http://dx.doi.org/10.1109/tmm.2015.2500727.

48. Jodogne, S. R., and J. H. Piater. "Closed-Loop Learning of Visual Control Policies". Journal of Artificial Intelligence Research 28 (26 March 2007): 349–91. http://dx.doi.org/10.1613/jair.2110.

Abstract:
In this paper we present a general, flexible framework for learning mappings from images to actions by interacting with the environment. The basic idea is to introduce a feature-based image classifier in front of a reinforcement learning algorithm. The classifier partitions the visual space according to the presence or absence of a few highly informative local descriptors that are incrementally selected in a sequence of attempts to remove perceptual aliasing. We also address the problem of fighting overfitting in such a greedy algorithm. Finally, we show how high-level visual features can be generated when the power of local descriptors is insufficient for completely disambiguating the aliased states. This is done by building a hierarchy of composite features that consist of recursive spatial combinations of visual features. We demonstrate the efficacy of our algorithms by solving three visual navigation tasks and a visual version of the classical "Car on the Hill" control problem.

49. Hou, Zhaoyang, Kaiyun Lv, Xunqiang Gong, and Yuting Wan. "A Remote Sensing Image Fusion Method Combining Low-Level Visual Features and Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network". Remote Sensing 15, no. 2 (6 January 2023): 344. http://dx.doi.org/10.3390/rs15020344.

Abstract:
Remote sensing image fusion can effectively solve the inherent contradiction between spatial resolution and spectral resolution of imaging systems. At present, fusion methods for remote sensing images based on multi-scale transforms usually set fusion rules according to local feature information and a pulse-coupled neural network (PCNN), but there are problems: a single local feature used as the fusion rule cannot effectively extract feature information, PCNN parameter setting is complex, and spatial correlation is poor. To this end, a fusion method for remote sensing images that combines low-level visual features and a parameter-adaptive dual-channel pulse-coupled neural network (PADCPCNN) in a non-subsampled shearlet transform (NSST) domain is proposed in this paper. In the low-frequency sub-band fusion process, a low-level visual feature fusion rule is constructed by combining three local features, local phase congruency, local abrupt measure, and local energy information, to enhance the extraction ability of feature information. In the process of high-frequency sub-band fusion, the structure and parameters of the dual-channel pulse-coupled neural network (DCPCNN) are optimized, including: (1) the multi-scale morphological gradient is used as an external stimulus to enhance the spatial correlation of the DCPCNN; and (2) parameter-adaptive representation is implemented according to the difference box-counting, the Otsu threshold, and the image intensity, to address the complexity of parameter setting. Five sets of remote sensing image data from different satellite platforms and ground objects are selected for experiments. The proposed method is compared with 16 other methods and evaluated from qualitative and quantitative aspects. The experimental results show that, compared with the average value of the sub-optimal method over the five sets of data, the proposed method improves by 0.006, 0.009, 0.009, 0.035, 0.037, 0.042, and 0.020, respectively, on the seven evaluation indexes of information entropy, mutual information, average gradient, spatial frequency, spectral distortion, ERGAS, and visual information fidelity, indicating that the proposed method has the best fusion effect.

50. Varga, Domonkos. "A Human Visual System Inspired No-Reference Image Quality Assessment Method Based on Local Feature Descriptors". Sensors 22, no. 18 (7 September 2022): 6775. http://dx.doi.org/10.3390/s22186775.

Abstract:
Objective quality assessment of natural images plays a key role in many fields related to imaging and sensor technology. Thus, this paper introduces an innovative quality-aware feature extraction method for no-reference image quality assessment (NR-IQA). To be more specific, a sequence of HVS-inspired filters is applied to the color channels of an input image to enhance those statistical regularities in the image to which the human visual system is sensitive. From the obtained feature maps, the statistics of a wide range of local feature descriptors are extracted to compile quality-aware features, since they treat images from the human visual system's point of view. To prove the efficiency of the proposed method, it was compared to 16 state-of-the-art NR-IQA techniques on five large benchmark databases, i.e., CLIVE, KonIQ-10k, SPAQ, TID2013, and KADID-10k. It is demonstrated that the proposed method is superior to the state of the art in terms of three different performance indices.