Journal articles on the topic '3D saliency'

Consult the top 50 journal articles for your research on the topic '3D saliency.'

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Jiao, Yuzhong, Mark Ping Chan Mok, Kayton Wai Keung Cheung, Man Chi Chan, Tak Wai Shen, and Yiu Kei Li. "Dynamic Zero-Parallax-Setting Techniques for Multi-View Autostereoscopic Display." Electronic Imaging 2020, no. 2 (January 26, 2020): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-098.

Abstract:
The objective of this paper is to investigate the dynamic computation of the Zero-Parallax-Setting (ZPS) for multi-view autostereoscopic displays, in order to alleviate blurry 3D vision for images with large disparity. Saliency detection techniques yield a saliency map, a topographic representation of the visually dominant locations in a scene, which can be used to predict what attracts viewers' attention, i.e., the region of interest. Recently, deep learning techniques have been applied to saliency detection; deep learning-based salient object detection methods have the advantage of highlighting most of the salient objects, and with the help of a depth map the spatial distribution of salient objects can be computed. In this paper, we compare two dynamic ZPS techniques based on visual attention: 1) maximum saliency computed by the Graph-Based Visual Saliency (GBVS) algorithm and 2) the spatial distribution of salient objects computed by a convolutional neural network (CNN)-based model. Experiments show that both methods help improve the 3D effect of autostereoscopic displays, and that the technique based on the spatial distribution of salient objects achieves better 3D performance than the maximum-saliency method.
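
To make the comparison concrete: once a saliency map and a depth map are available, the ZPS depth can be taken either at the single most salient pixel or from the saliency-weighted depth distribution. A minimal sketch in Python/NumPy under that reading (the function names and the weighted-mean formulation are illustrative assumptions, not taken from the paper):

    import numpy as np

    def max_saliency_zps(saliency: np.ndarray, depth: np.ndarray) -> float:
        """Variant 1: zero-parallax depth at the most salient pixel."""
        y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
        return float(depth[y, x])

    def distribution_zps(saliency: np.ndarray, depth: np.ndarray) -> float:
        """Variant 2: zero-parallax depth from the spatial distribution of
        salient objects, here the saliency-weighted mean of the depth map."""
        w = saliency / (saliency.sum() + 1e-8)
        return float((w * depth).sum())

Shifting the disparity range so that this depth falls on the display plane keeps the attended region free of excessive parallax.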
2

A K, Aswathi, and Namitha T N. "3D Saliency Detection." International Journal of Engineering Trends and Technology 47, no. 6 (May 25, 2017): 353–55. http://dx.doi.org/10.14445/22315381/ijett-v47p257.

3

Zhang, Ya, Chunyi Chen, Xiaojuan Hu, Ling Li, and Hailan Li. "Saliency detection of textured 3D models based on multi-view information and texel descriptor." PeerJ Computer Science 9 (October 25, 2023): e1584. http://dx.doi.org/10.7717/peerj-cs.1584.

Abstract:
Saliency-driven mesh simplification methods have shown promising results in maintaining visual detail, but effective simplification requires accurate 3D saliency maps. Conventional mesh saliency detection methods may not capture salient regions in textured 3D models. To address this issue, we propose a novel saliency detection method that fuses saliency maps from multi-view projections of textured models. Specifically, we introduce a texel descriptor that combines local convexity and chromatic aberration to capture texel saliency at multiple scales. Furthermore, we created a novel dataset that reflects human eye fixation patterns on textured models, which serves as an objective evaluation benchmark. Our experimental results demonstrate that our saliency-driven method outperforms existing approaches on several evaluation metrics. The source code can be accessed at https://github.com/bkballoon/mvsm-fusion and the dataset at DOI 10.5281/zenodo.8131602.
4

Favorskaya, M. N., and L. C. Jain. "Saliency detection in deep learning era: trends of development." Information and Control Systems, no. 3 (June 21, 2019): 10–36. http://dx.doi.org/10.31799/1684-8853-2019-3-10-36.

Abstract:
Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s, and in recent years saliency detection has become one of the most actively studied topics in the theory of the Convolutional Neural Network (CNN). Many original solutions using CNNs have been proposed for salient object detection and even event detection. Purpose: A detailed survey of saliency detection methods in the deep learning era allows one to understand the current possibilities of the CNN approach for visual analysis conducted by human eye tracking and digital image processing. Results: The survey reflects the recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos became possible using recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. We also present a short description of public image and video datasets with annotated salient objects or events, as well as the metrics most often used to evaluate results. Practical relevance: This survey contributes to the study of rapidly developing deep learning methods for saliency detection in images and videos.
5

Liu, Tao, Zhixiang Fang, Qingzhou Mao, Qingquan Li, and Xing Zhang. "A cube-based saliency detection method using integrated visual and spatial features." Sensor Review 36, no. 2 (March 21, 2016): 148–57. http://dx.doi.org/10.1108/sr-07-2015-0110.

Abstract:
Purpose: Spatial features are important for scene saliency detection, yet image-based visual saliency detection methods fail to incorporate the spatial aspects of 3D scenes. This paper aims to propose a cube-based method that improves saliency detection by integrating visual and spatial features in 3D scenes. Design/methodology/approach: In the presented approach, a multiscale cube pyramid is used to organize the 3D image scene and mesh model; each 3D cube in this pyramid represents a space unit, analogous to a pixel in the multiscale image pyramid of image saliency models. In each 3D cube, color, intensity, and orientation features are extracted from the image, and a quantitative concave-convex descriptor is extracted from the 3D space. A Gaussian filter is then applied to this pyramid of cubes, with an extended center-surround difference introduced to compute cube-based 3D scene saliency. Findings: The precision-recall rate and receiver operating characteristic curve are used to evaluate the method against state-of-the-art methods. The results show that the proposed method outperforms traditional image-based methods, especially in 3D scenes. Originality/value: This paper presents a method that improves on the image-based visual saliency model.
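
The extended center-surround difference is the step that translates most directly into code: smooth the per-cube feature grid at a fine (center) and a coarse (surround) scale and accumulate the absolute differences. A simplified sketch assuming the cube features already sit on a regular 3D grid (the scale pairs are illustrative, not the paper's settings):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def center_surround_saliency(feature, scale_pairs=((1, 4), (1, 8), (2, 8))):
        """Accumulate |center - surround| over several scale pairs on a 3D
        grid of per-cube features (color, intensity, orientation, or the
        concave-convex descriptor)."""
        f = feature.astype(float)
        sal = np.zeros_like(f)
        for c, s in scale_pairs:
            sal += np.abs(gaussian_filter(f, sigma=c) - gaussian_filter(f, sigma=s))
        sal -= sal.min()
        return sal / (sal.max() + 1e-8)  # normalize to [0, 1]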
6

Yuan, Jing, Yang Cao, Yu Kang, Weiguo Song, Zhongcheng Yin, Rui Ba, and Qing Ma. "3D Layout encoding network for spatial‐aware 3D saliency modelling." IET Computer Vision 13, no. 5 (July 10, 2019): 480–88. http://dx.doi.org/10.1049/iet-cvi.2018.5591.

7

Hamidi, Mohamed, Aladine Chetouani, Mohamed El Haziti, Mohammed El Hassouni, and Hocine Cherifi. "Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection." Information 10, no. 2 (February 18, 2019): 67. http://dx.doi.org/10.3390/info10020067.

Abstract:
Three-dimensional models have been used extensively in applications including computer-aided design (CAD), video games, and medical imaging, owing to improvements in computer processing capability and network bandwidth. The need for 3D mesh watermarking schemes that protect copyright has therefore increased considerably. In this paper, a blind, robust 3D mesh watermarking method based on mesh saliency and the wavelet transform is proposed for copyright protection. The watermark is inserted by quantizing the wavelet coefficients using quantization index modulation (QIM) according to the mesh saliency of the 3D semiregular mesh. The synchronizing primitive is the distance between the mesh center and the salient points, taken in descending order. The experimental results show the high imperceptibility of the proposed scheme while ensuring good robustness against a wide range of attacks including smoothing, additive noise, element reordering, and similarity transformations.
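
The QIM step at the core of the embedding is compact enough to show inline: each selected wavelet coefficient is snapped onto one of two interleaved quantization lattices according to its watermark bit, and extraction asks which lattice the received coefficient is closer to. A generic sketch (the step size delta and array shapes are illustrative, not the paper's settings):

    import numpy as np

    def qim_embed(coeffs, bits, delta=0.05):
        """Snap each coefficient onto the lattice selected by its bit:
        bit 0 -> multiples of delta, bit 1 -> offset by delta / 2."""
        offset = bits.astype(float) * (delta / 2)
        return np.round((coeffs - offset) / delta) * delta + offset

    def qim_extract(coeffs, delta=0.05):
        """Recover the bits by choosing the nearer of the two lattices."""
        d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
        shifted = coeffs - delta / 2
        d1 = np.abs(shifted - np.round(shifted / delta) * delta)
        return (d1 < d0).astype(np.uint8)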
8

Chen, Yanxiang, Yifei Pan, Minglong Song, and Meng Wang. "Image retargeting with a 3D saliency model." Signal Processing 112 (July 2015): 53–63. http://dx.doi.org/10.1016/j.sigpro.2014.11.001.

9

Lin, Hongyun, Chunyu Lin, Yao Zhao, and Anhong Wang. "3D saliency detection based on background detection." Journal of Visual Communication and Image Representation 48 (October 2017): 238–53. http://dx.doi.org/10.1016/j.jvcir.2017.06.011.

10

Wang, Junle, M. P. Da Silva, P. Le Callet, and V. Ricordel. "Computational Model of Stereoscopic 3D Visual Saliency." IEEE Transactions on Image Processing 22, no. 6 (June 2013): 2151–65. http://dx.doi.org/10.1109/tip.2013.2246176.

11

Bao, Lei, Xiongwei Zhang, Yunfei Zheng, and Yang Li. "Video saliency detection using 3D shearlet transform." Multimedia Tools and Applications 75, no. 13 (June 23, 2015): 7761–78. http://dx.doi.org/10.1007/s11042-015-2692-4.

12

Zong, Baobao (纵宝宝), Chaofeng Li (李朝锋), and Qingbing Sang (桑庆兵). "3D Image Saliency Detection Based on Log-Gabor Filtering and Saliency Map Fusion Optimization." Laser & Optoelectronics Progress 56, no. 8 (2019): 081003. http://dx.doi.org/10.3788/lop56.081003.

13

Pan, Gang, An-Zhi Wang, Bao-Lei Xu, and Weihua Ou. "Multi-scale Feature Fusion for 3D Saliency Detection." Journal of Physics: Conference Series 1651 (November 2020): 012128. http://dx.doi.org/10.1088/1742-6596/1651/1/012128.

14

Miao, Yongwei, and Jieqing Feng. "Perceptual-saliency extremum lines for 3D shape illustration." Visual Computer 26, no. 6-8 (April 9, 2010): 433–43. http://dx.doi.org/10.1007/s00371-010-0458-6.

15

Zhang, Li-na, Shi-yao Wang, Jun Zhou, Jian Liu, and Chun-gang Zhu. "3D grasp saliency analysis via deep shape correspondence." Computer Aided Geometric Design 81 (August 2020): 101901. http://dx.doi.org/10.1016/j.cagd.2020.101901.

16

Wang, Weiming, Haiyuan Chao, Jing Tong, Zhouwang Yang, Xin Tong, Hang Li, Xiuping Liu, and Ligang Liu. "Saliency-Preserving Slicing Optimization for Effective 3D Printing." Computer Graphics Forum 34, no. 6 (January 28, 2015): 148–60. http://dx.doi.org/10.1111/cgf.12527.

17

Banitalebi-Dehkordi, Amin, and Panos Nasiopoulos. "Saliency inspired quality assessment of stereoscopic 3D video." Multimedia Tools and Applications 77, no. 19 (March 6, 2018): 26055–82. http://dx.doi.org/10.1007/s11042-018-5837-4.

18

Liu, Zhengyi, Tengfei Song, and Feng Xie. "RGB-D image saliency detection from 3D perspective." Multimedia Tools and Applications 78, no. 6 (July 31, 2018): 6787–804. http://dx.doi.org/10.1007/s11042-018-6319-4.

19

Lézoray, Olivier, and Anass Nouri. "3D mesh saliency from local spiral hop descriptors." Electronic Imaging 35, no. 17 (January 16, 2023): 103–1. http://dx.doi.org/10.2352/ei.2023.35.17.3dia-103.

20

Xu, Tao, Songmin Jia, Zhengyin Dong, and Xiuzhi Li. "Obstacles Regions 3D-Perception Method for Mobile Robots Based on Visual Saliency." Journal of Robotics 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/720174.

Abstract:
A novel 3D perception method for obstacle regions in indoor environments, based on Improved Salient Region Extraction (ISRE), is proposed for mobile robots. The model acquires the original image with a Kinect sensor and then derives an Original Salience Map (OSM) and an Intensity Feature Map (IFM) from it using a saliency filtering algorithm. The IFM is used as the neuron input of a pulse-coupled neural network (PCNN). To make the ignition range more exact, the PCNN ignition pulse input is further refined: the internal neuron activity of the PCNN is pointwise multiplied by the binarized saliency image of the OSM to determine the final ignition pulse input. The salient binarized regions are then extracted through multiple iterations of the improved PCNN. Finally, the binarized area is mapped onto the depth map obtained by the Kinect sensor, allowing the mobile robot to localize obstacles. The method was evaluated on a mobile robot (Pioneer3-DX), and the experimental results demonstrate the feasibility and effectiveness of the proposed algorithm.
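
The saliency-gating step reads naturally as code: a stripped-down PCNN whose internal activity is pointwise multiplied by the binarized OSM before thresholding, so only salient neurons can ignite. A minimal sketch assuming the IFM and the binarized OSM are given as same-sized arrays (the PCNN simplification and all parameter values are my assumptions, not the paper's):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def saliency_gated_pcnn(ifm, osm_bin, n_iter=10, beta=0.2,
                            a_l=1.0, a_e=0.5, v_l=1.0, v_e=20.0):
        """Simplified PCNN: linking input L, saliency-gated activity U,
        firing output Y, and a decaying dynamic threshold E."""
        L = np.zeros_like(ifm, dtype=float)
        E = np.ones_like(ifm, dtype=float)
        Y = np.zeros_like(ifm, dtype=float)
        fired = np.zeros(ifm.shape, dtype=bool)
        for _ in range(n_iter):
            L = np.exp(-a_l) * L + v_l * uniform_filter(Y, size=3)
            U = ifm * (1.0 + beta * L) * osm_bin  # point multiplication with OSM
            Y = (U > E).astype(float)
            E = np.exp(-a_e) * E + v_e * Y        # large threshold jump after firing
            fired |= Y.astype(bool)
        return fired                              # accumulated ignition region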
21

Ruiu, Pietro, Lorenzo Mascia, and Enrico Grosso. "Saliency-Guided Point Cloud Compression for 3D Live Reconstruction." Multimodal Technologies and Interaction 8, no. 5 (May 3, 2024): 36. http://dx.doi.org/10.3390/mti8050036.

Abstract:
3D modeling and reconstruction are critical to creating immersive XR experiences, providing realistic virtual environments, objects, and interactions that increase user engagement and enable new forms of content manipulation. Today, 3D data can be easily captured using off-the-shelf, specialized headsets; very often, these tools provide real-time, albeit low-resolution, integration of continuously captured depth maps. This approach is generally suitable for basic AR and MR applications, where users can easily direct their attention to points of interest and benefit from a fully user-centric perspective. However, it proves to be less effective in more complex scenarios such as multi-user telepresence or telerobotics, where real-time transmission of local surroundings to remote users is essential. Two primary questions emerge: (i) what strategies are available for achieving real-time 3D reconstruction in such systems? and (ii) how can the effectiveness of real-time 3D reconstruction methods be assessed? This paper explores various approaches to the challenge of live 3D reconstruction from typical point cloud data. It first introduces some common data flow patterns that characterize virtual reality applications and shows that achieving high-speed data transmission and efficient data compression is critical to maintaining visual continuity and ensuring a satisfactory user experience. The paper thus introduces the concept of saliency-driven compression/reconstruction and compares it with alternative state-of-the-art approaches.
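
The simplest reading of saliency-driven compression is a sampling budget biased toward salient points. The sketch below is a toy stand-in for the compression schemes the paper discusses; the keep ratio and the small probability floor (so flat regions are not dropped entirely) are arbitrary choices:

    import numpy as np

    def saliency_sample(points, saliency, keep_ratio=0.2, floor=0.05):
        """points: (n, 3) array; saliency: (n,) non-negative scores.
        Keep a fraction of the points, sampled without replacement with
        probability proportional to saliency plus a floor."""
        n_keep = max(1, int(len(points) * keep_ratio))
        p = saliency + floor
        p = p / p.sum()
        idx = np.random.choice(len(points), size=n_keep, replace=False, p=p)
        return points[idx]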
22

Shafiq, Muhammad Amir, Zhiling Long, Haibin Di, and Ghassan AlRegib. "A novel attention model for salient structure detection in seismic volumes." Applied Computing and Intelligence 1, no. 1 (2021): 31–45. http://dx.doi.org/10.3934/aci.2021002.

Abstract:
A new approach to seismic interpretation is proposed to leverage visual perception and human visual system modeling. Specifically, a saliency detection algorithm based on a novel attention model is proposed for identifying subsurface structures within seismic data volumes. The algorithm employs the 3D-FFT and a multi-dimensional spectral projection, which decomposes local spectra into three distinct components, each depicting variations along a different dimension of the data. Subsequently, a novel directional center-surround attention model is proposed to incorporate directional comparisons around each voxel for saliency detection within each projected dimension. Next, the resulting saliency maps along each dimension are combined adaptively to yield a consolidated saliency map, which highlights structures characterized by subtle variations and relative motion with respect to their neighboring sections. A priori information about the seismic data can be either embedded into the proposed attention model through the directional comparisons, or incorporated into the algorithm by specifying a template when combining saliency maps adaptively. Experimental results on two real seismic datasets, from the North Sea (Netherlands) and the Great South Basin (New Zealand), demonstrate the effectiveness of the proposed algorithm for detecting salient seismic structures of different natures and appearances in one shot, which differs significantly from traditional seismic interpretation algorithms. The results further demonstrate that the proposed method outperforms comparable state-of-the-art saliency detection algorithms designed for natural images and videos, which are inadequate for seismic imaging data.
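
A very coarse sketch of the directional idea: cut the volume into cubes, project each cube's 3D-FFT power spectrum onto each of the three dimensions, score every cube by how much its directional feature deviates from its surround, and combine the three maps. Everything here is simplified (the paper compares directionally around each voxel and combines the maps adaptively; this sketch uses a plain sum):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def directional_spectral_saliency(vol, win=16):
        """Toy directional spectral saliency on a cube grid of a volume."""
        nz, ny, nx = (s // win for s in vol.shape)
        feats = np.zeros((3, nz, ny, nx))
        for i in range(nz):
            for j in range(ny):
                for k in range(nx):
                    cube = vol[i*win:(i+1)*win, j*win:(j+1)*win, k*win:(k+1)*win]
                    power = np.abs(np.fft.fftn(cube)) ** 2
                    for d in range(3):  # project the spectrum onto dimension d
                        other = tuple(a for a in range(3) if a != d)
                        feats[d, i, j, k] = np.log1p(power.sum(axis=other)[1:].sum())
        # center-surround: deviation of each cube from its local neighborhood
        sal = sum(np.abs(f - uniform_filter(f, size=3)) for f in feats)
        return sal / (sal.max() + 1e-8)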
23

Zou, Wenbin, Shengkai Zhuo, Yi Tang, Shishun Tian, Xia Li, and Chen Xu. "STA3D: Spatiotemporally attentive 3D network for video saliency prediction." Pattern Recognition Letters 147 (July 2021): 78–84. http://dx.doi.org/10.1016/j.patrec.2021.04.010.

24

Shi, Zhenfeng, Hao Luo, and Xiamu Niu. "Saliency-based structural degradation evaluation of 3D mesh simplification." IEICE Electronics Express 8, no. 3 (2011): 161–67. http://dx.doi.org/10.1587/elex.8.161.

25

Qi, Feng, Debin Zhao, Shaohui Liu, and Xiaopeng Fan. "3D visual saliency detection model with generated disparity map." Multimedia Tools and Applications 76, no. 2 (January 29, 2016): 3087–103. http://dx.doi.org/10.1007/s11042-015-3229-6.

26

Banerjee, Subhashis, Sushmita Mitra, and B. Uma Shankar. "Automated 3D segmentation of brain tumor using visual saliency." Information Sciences 424 (January 2018): 337–53. http://dx.doi.org/10.1016/j.ins.2017.10.011.

27

Banitalebi-Dehkordi, Amin, Mahsa T. Pourazad, and Panos Nasiopoulos. "A learning-based visual saliency prediction model for stereoscopic 3D video (LBVS-3D)." Multimedia Tools and Applications 76, no. 22 (November 23, 2016): 23859–90. http://dx.doi.org/10.1007/s11042-016-4155-y.

28

Messai, Oussama, Aladine Chetouani, Fella Hachouf, and Zianou Ahmed Seghir. "Deep Quality evaluator guided by 3D Saliency for Stereoscopic Images." Electronic Imaging 2021, no. 11 (January 18, 2021): 110–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.11.hvei-110.

Abstract:
Due to the use of 3D content in various applications, Stereo Image Quality Assessment (SIQA) has attracted increasing attention as a way to ensure a good viewing experience for users. Several methods have thus been proposed in the literature, with a clear improvement for deep learning-based methods. This paper introduces a new deep learning-based no-reference SIQA method that uses the cyclopean view hypothesis and human visual attention. First, the cyclopean image is built taking into account binocular rivalry, which covers the asymmetric-distortion case. Second, the saliency map is computed taking the depth information into account; it serves to extract patches from the most perceptually relevant regions. Finally, a modified version of the pre-trained VGG-19 is fine-tuned and used to predict the quality score from the selected patches. The performance of the proposed metric has been evaluated on the 3D LIVE phase I and phase II databases. Compared with state-of-the-art metrics, our method gives better results.
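
The saliency-guided patch selection can be sketched directly: rank patches by mean saliency and keep the top ones for the quality network. A hedged illustration (patch size, patch count, and the non-overlapping grid are my choices, not necessarily the paper's):

    import numpy as np

    def select_salient_patches(image, saliency, patch=32, n_patches=16):
        """Rank non-overlapping patches by mean saliency; the top-k patches
        would then be scored by the fine-tuned quality network."""
        h, w = saliency.shape
        scored = []
        for y in range(0, h - patch + 1, patch):
            for x in range(0, w - patch + 1, patch):
                s = saliency[y:y+patch, x:x+patch].mean()
                scored.append((s, image[y:y+patch, x:x+patch]))
        scored.sort(key=lambda t: t[0], reverse=True)
        return [p for _, p in scored[:n_patches]]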
29

Jiang, Yibo, Hui Bi, Hui Li, and Zhihao Xu. "Automatic and Accurate 3D Measurement Based on RGBD Saliency Detection." IEICE Transactions on Information and Systems E102.D, no. 3 (March 1, 2019): 688–89. http://dx.doi.org/10.1587/transinf.2018edl8212.

30

Chen, Yanxiang, Yifei Pan, Minglong Song, and Meng Wang. "Improved seam carving combining with 3D saliency for image retargeting." Neurocomputing 151 (March 2015): 645–53. http://dx.doi.org/10.1016/j.neucom.2014.05.089.

31

Zhao, Yitian, Yonghuai Liu, Yongjun Wang, Baogang Wei, Jian Yang, Yifan Zhao, and Yongtian Wang. "Region-based saliency estimation for 3D shape analysis and understanding." Neurocomputing 197 (July 2016): 1–13. http://dx.doi.org/10.1016/j.neucom.2016.01.012.

32

Miao, Yongwei, Jieqing Feng, and Renato Pajarola. "Visual saliency guided normal enhancement technique for 3D shape depiction." Computers & Graphics 35, no. 3 (June 2011): 706–12. http://dx.doi.org/10.1016/j.cag.2011.03.017.

33

Jeong, Se-Won, and Jae-Young Sim. "Saliency Detection for 3D Surface Geometry Using Semi-regular Meshes." IEEE Transactions on Multimedia 19, no. 12 (December 2017): 2692–705. http://dx.doi.org/10.1109/tmm.2017.2710802.

34

Wang, Haoqian, Bing Yan, Xingzheng Wang, Yongbing Zhang, and Yi Yang. "Accurate saliency detection based on depth feature of 3D images." Multimedia Tools and Applications 77, no. 12 (August 16, 2017): 14655–72. http://dx.doi.org/10.1007/s11042-017-5052-8.

35

Mangal, Anuj, Hitendra Garg, and Charul Bhatnagar. "Modified ResNet50 model and semantic segmentation based image co-saliency detection." Journal of Information and Optimization Sciences 44, no. 6 (2023): 1035–42. http://dx.doi.org/10.47974/jios-1331.

Abstract:
Co-salient object identification is an emerging branch of visual saliency detection that aims to find salient patterns appearing across several groups of images. It has the potential to benefit a wide variety of applications, including detection of objects of interest, more robust object recognition, animation synthesis, image query handling, 3D object reconstruction, and object co-segmentation. To build the modified ResNet50 model, the hyperparameters are adjusted in this work to increase accuracy while minimizing loss. The modified network is trained on HOG features, together with the corresponding ground-truth images, to mine more significant features. For a more streamlined outcome, the proposed system is built using the SGDM optimizer. During testing, the network separates relevant from irrelevant images and generates the co-saliency maps of the relevant ones. Integrating the associated and prominent characteristics of the image yields the appropriate ground truth for each image. The proposed method reports a better F1 score (98.7%) and MAE (0.089) than state-of-the-art models.
36

Wen, Falin, Qinghui Wang, Ruirui Zou, Ying Wang, Fenglin Liu, Yang Chen, Linghao Yu, Shaoyi Du, and Chengzhi Yuan. "A Salient Object Detection Method Based on Boundary Enhancement." Sensors 23, no. 16 (August 10, 2023): 7077. http://dx.doi.org/10.3390/s23167077.

Abstract:
Visual saliency refers to the human ability to quickly focus on important parts of the visual field, which is a crucial aspect of image processing, particularly in fields like medical imaging and robotics. Understanding and simulating this mechanism is crucial for solving complex visual problems. In this paper, we propose a salient object detection method based on boundary enhancement, which is applicable to both 2D and 3D sensor data. To address the large scale variation of salient objects, our method introduces a multi-level feature aggregation module that enhances the expressive ability of fixed-resolution features by letting adjacent features complement each other. Additionally, we propose a multi-scale information extraction module to capture local contextual information at different scales for the back-propagated level-by-level features, which allows for better measurement of the composition of the feature map after back-fusion. To tackle the low confidence of boundary pixels, we also introduce a boundary extraction module that extracts the boundary information of salient regions; this information is then fused with the salient-target information to further refine the saliency prediction. During training, our method uses a mixed loss function to constrain the model at two levels: pixels and images. The experimental results demonstrate that our boundary-enhanced method detects well targets of different scales, multiple targets, linear targets, and targets in complex scenes. We compare our method with the best methods on four conventional datasets and achieve an average improvement of 6.2% in mean absolute error (MAE). Overall, our approach shows promise for improving the accuracy and efficiency of salient object detection in a variety of settings, including 2D/3D semantic analysis and the reconstruction/inpainting of image, video, and point cloud data.
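
A two-level ("pixels and images") training constraint is commonly realized as a pixel-wise cross-entropy term plus an image-wide overlap term. Below is a sketch of such a mixed loss in PyTorch; this is one standard formulation, not necessarily the exact loss used in the paper:

    import torch
    import torch.nn.functional as F

    def mixed_loss(logits, target):
        """Pixel level: BCE per pixel. Image level: soft IoU over the whole
        map. Both tensors have shape (B, 1, H, W); target is a binary mask."""
        bce = F.binary_cross_entropy_with_logits(logits, target)
        p = torch.sigmoid(logits)
        inter = (p * target).sum(dim=(1, 2, 3))
        union = (p + target - p * target).sum(dim=(1, 2, 3))
        iou = 1.0 - (inter + 1.0) / (union + 1.0)  # smoothed soft-IoU loss
        return bce + iou.mean()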
37

Lyu, Wei, Wei Wu, Lin Zhang, Zhaohui Wu, and Zhong Zhou. "Laplacian-based 3D mesh simplification with feature preservation." International Journal of Modeling, Simulation, and Scientific Computing 10, no. 02 (April 2019): 1950002. http://dx.doi.org/10.1142/s1793962319500028.

Abstract:
We propose a novel Laplacian-based algorithm that simplifies triangle surface meshes and can provide different preservation ratios of geometric features. Our efficient and fast algorithm takes a 3D mesh model as input and first detects geometric features using a Laplacian-based shape descriptor (L-descriptor). It then performs an optimized clustering approach that combines a Laplacian operator with the K-means clustering algorithm to classify vertices. Moreover, we introduce a Laplacian-weighted cost function based on the L-descriptor to perform feature weighting and error-statistics comparison, which are further used to change the deletion order of the model elements and preserve salient features. Our algorithm can provide different preservation ratios of geometric features and may be extended to handle arbitrary mesh topologies. Our experiments on a variety of 3D surface meshes demonstrate the advantages of our algorithm in terms of accuracy and applicability while preserving salient geometric features.
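
The L-descriptor idea can be approximated in a few lines with the uniform (umbrella) Laplacian: a vertex that lies far from the centroid of its 1-ring neighbors carries strong geometric detail. This is a simplified stand-in for the paper's descriptor, with an assumed mesh representation:

    import numpy as np

    def umbrella_laplacian_descriptor(verts, neighbors):
        """verts: (n, 3) vertex positions; neighbors: list of index lists,
        the 1-ring of each vertex. Returns per-vertex feature strength."""
        d = np.zeros(len(verts))
        for i, ring in enumerate(neighbors):
            if ring:
                d[i] = np.linalg.norm(verts[i] - verts[ring].mean(axis=0))
        return d

In a simplification loop, elements with low descriptor values would then be deleted first, which preserves the salient features.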
38

Leal Narvaez, Esmeide Alberto, German Sanchez Torres, and John William Branch Bedoya. "Point cloud saliency detection via local sparse coding." DYNA 86, no. 209 (April 1, 2019): 238–47. http://dx.doi.org/10.15446/dyna.v86n209.75958.

Abstract:
The human visual system (HVS) can process large quantities of visual information instantly. Visual saliency perception is the process of locating and identifying regions with a high degree of saliency from a visual standpoint. Mesh saliency detection has been studied extensively in recent years, but few studies have focused on 3D point cloud saliency detection. The estimation of visual saliency is important for computer graphics tasks such as simplification, segmentation, shape matching, and resizing. In this paper, we present a method for the direct detection of saliency on unorganized point clouds. First, our method computes a set of overlapping neighborhoods and estimates a descriptor vector for each point inside them. Then, the descriptor vectors are used as a natural dictionary in order to apply a sparse coding process. Finally, we estimate a saliency map of the point neighborhoods based on the Minimum Description Length (MDL) principle. Experimental results show that the proposed method achieves results similar to those in the literature and in some cases even improves on them. It captures the geometry of the point clouds without using any topological information and achieves acceptable performance. The effectiveness and robustness of our approach are shown by comparison with previous studies in the literature.
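
The dictionary idea can be made concrete: code each point's descriptor over the dictionary formed by the other descriptors, and treat points that reconstruct poorly from a few atoms as salient. The sketch below deliberately substitutes the reconstruction residual for the paper's MDL criterion, so it is a proxy rather than the published method:

    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    def sparse_coding_saliency(desc, k=5):
        """desc: (n, d) descriptor per point neighborhood. Each row is coded
        over a leave-one-out dictionary; a large residual means the point is
        hard to explain by its peers, i.e. salient. O(n^2): a sketch only."""
        n = len(desc)
        D = desc.T                        # atoms as columns, shape (d, n)
        sal = np.zeros(n)
        for i in range(n):
            Di = np.delete(D, i, axis=1)  # exclude the point being coded
            code = orthogonal_mp(Di, desc[i], n_nonzero_coefs=k)
            sal[i] = np.linalg.norm(desc[i] - Di @ code)
        return sal / (sal.max() + 1e-8)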
39

Pham, Nam, and Sergey Fomel. "Uncertainty and interpretability analysis of encoder-decoder architecture for channel detection." GEOPHYSICS 86, no. 4 (July 1, 2021): O49–O58. http://dx.doi.org/10.1190/geo2020-0409.1.

Abstract:
We have adopted a method to understand the uncertainty and interpretability of a Bayesian convolutional neural network for detecting 3D channel geobodies in seismic volumes. We measure heteroscedastic aleatoric uncertainty and epistemic uncertainty. Epistemic uncertainty captures the uncertainty of the network parameters, whereas heteroscedastic aleatoric uncertainty accounts for noise in the seismic volumes. We train a network modified from the U-Net architecture on 3D synthetic seismic volumes and then apply it to field data. Tests on 3D field data sets from the Browse Basin, offshore Australia, and from Parihaka in New Zealand show that uncertainty volumes are related to geologic uncertainty, model mispicks, and input noise. We analyze model interpretability on these data sets by creating saliency volumes with gradient-weighted class activation mapping. We find that the model takes a global-to-local approach to localize channel geobodies, and we examine the importance of different model components in the overall strategy. Using channel probability, uncertainty, and saliency volumes, interpreters can accurately identify channel geobodies in 3D seismic volumes and also understand the model predictions.
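
Epistemic uncertainty of the kind described is often estimated with Monte-Carlo dropout: keep dropout stochastic at inference, run several forward passes, and read the uncertainty from the variance of the predicted channel probabilities. A generic sketch of that common realization of a Bayesian CNN (not necessarily the authors' exact scheme):

    import torch

    def mc_dropout_uncertainty(model, x, n_samples=20):
        """Mean channel probability and its variance over stochastic passes.
        model.train() keeps dropout active; in practice one would switch on
        only the dropout layers and leave batch-norm statistics frozen."""
        model.train()
        with torch.no_grad():
            probs = torch.stack([torch.sigmoid(model(x))
                                 for _ in range(n_samples)])
        return probs.mean(dim=0), probs.var(dim=0)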
40

Wei, Guangshun, Long Ma, Chen Wang, Christian Desrosiers, and Yuanfeng Zhou. "Multi-Task Joint Learning of 3D Keypoint Saliency and Correspondence Estimation." Computer-Aided Design 141 (December 2021): 103105. http://dx.doi.org/10.1016/j.cad.2021.103105.

41

Messai, Oussama, Aladine Chetouani, Fella Hachouf, and Zianou Ahmed Seghir. "3D saliency guided deep quality predictor for no-reference stereoscopic images." Neurocomputing 478 (March 2022): 22–36. http://dx.doi.org/10.1016/j.neucom.2022.01.002.

42

Fang, Yuming, Guanqun Ding, Jia Li, and Zhijun Fang. "Deep3DSaliency: Deep Stereoscopic Video Saliency Detection Model by 3D Convolutional Networks." IEEE Transactions on Image Processing 28, no. 5 (May 2019): 2305–18. http://dx.doi.org/10.1109/tip.2018.2885229.

43

Filipe, Silvio, Laurent Itti, and Luis A. Alexandre. "BIK-BUS: Biologically Motivated 3D Keypoint Based on Bottom-Up Saliency." IEEE Transactions on Image Processing 24, no. 1 (January 2015): 163–75. http://dx.doi.org/10.1109/tip.2014.2371532.

44

Castellani, U., M. Cristani, S. Fantoni, and V. Murino. "Sparse points matching by combining 3D mesh saliency with statistical descriptors." Computer Graphics Forum 27, no. 2 (April 2008): 643–52. http://dx.doi.org/10.1111/j.1467-8659.2008.01162.x.

45

Guo, Yu, Fei Wang, and Jingmin Xin. "Point-wise saliency detection on 3D point clouds via covariance descriptors." Visual Computer 34, no. 10 (June 26, 2017): 1325–38. http://dx.doi.org/10.1007/s00371-017-1416-3.

46

Bulbul, Abdullah, Sami Arpa, and Tolga Capin. "A clustering-based method to estimate saliency in 3D animated meshes." Computers & Graphics 43 (October 2014): 11–20. http://dx.doi.org/10.1016/j.cag.2014.04.003.

47

Sun, Zhenhao, Xu Wang, Qiudan Zhang, and Jianmin Jiang. "Real-Time Video Saliency Prediction Via 3D Residual Convolutional Neural Network." IEEE Access 7 (2019): 147743–54. http://dx.doi.org/10.1109/access.2019.2946479.

48

Saleh, Kamel, and Mark Sumner. "A SVM-3D Based Encoderless Control of a Fault-Tolerant PMSM Drive." Electronics 9, no. 7 (July 4, 2020): 1095. http://dx.doi.org/10.3390/electronics9071095.

Abstract:
This paper presents a novel technique for encoderless speed control of a permanent magnet synchronous motor (PMSM) after the loss of one phase. The importance of this work is that it presents solutions for maintaining system operation under various conditions, which increases the reliability of the whole drive system to meet the safety requirements of certain applications. To achieve this, a fault-tolerant inverter modulated through a three-dimensional space vector pulse-width modulation technique (3D-SVPWM) is used. In addition, an algorithm to obtain the exact position of the saturation saliency under the loss of one phase is introduced to achieve closed-loop field-oriented encoderless speed control and to further enhance the reliability of the whole drive system. This algorithm is based on measuring the transient stator current responses of the motor to the switching actions of the insulated-gate bipolar transistors (IGBTs). According to the operating condition (normal, or one phase lost), the saliency position signals are then constructed from the dynamic current responses. Simulation results demonstrate the effectiveness of the saliency tracking technique under normal conditions and under the loss of one phase. Moreover, the results verify the reliability achieved for the whole drive system through continuous operation under the loss of one phase and under encoderless speed control.
49

Nie, Weizhi, Lu Qu, Minjie Ren, Qi Liang, Yuting Su, Yangyang Li, and Hao Jin. "Two-Stream Network Based on Visual Saliency Sharing for 3D Model Recognition." IEEE Access 8 (2020): 5979–89. http://dx.doi.org/10.1109/access.2019.2963511.

50

Abouelaziz, Ilyass, Aladine Chetouani, Mohammed El Hassouni, Longin Jan Latecki, and Hocine Cherifi. "3D visual saliency and convolutional neural network for blind mesh quality assessment." Neural Computing and Applications 32, no. 21 (October 19, 2019): 16589–603. http://dx.doi.org/10.1007/s00521-019-04521-1.
