Journal articles on the topic "3D saliency"

Below are the 50 best journal articles for studies on the topic "3D saliency", with abstracts where available in the metadata.

1

Jiao, Yuzhong, Mark Ping Chan Mok, Kayton Wai Keung Cheung, Man Chi Chan, Tak Wai Shen, and Yiu Kei Li. "Dynamic Zero-Parallax-Setting Techniques for Multi-View Autostereoscopic Display". Electronic Imaging 2020, no. 2 (January 26, 2020): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-098.

Abstract:
The objective of this paper is to investigate the dynamic computation of the Zero-Parallax-Setting (ZPS) for multi-view autostereoscopic displays, in order to effectively alleviate blurry 3D vision for images with large disparity. Saliency detection techniques yield a saliency map, a topographic representation of the visually dominant locations in an image. Using a saliency map, we can predict what attracts viewers' attention, i.e., the region of interest. Recently, deep learning techniques have been applied to saliency detection, and deep learning-based salient object detection methods have the advantage of highlighting most of the salient objects. With the help of a depth map, the spatial distribution of the salient objects can be computed. In this paper, we compare two dynamic ZPS techniques based on visual attention: 1) maximum saliency computed by the Graph-Based Visual Saliency (GBVS) algorithm and 2) the spatial distribution of salient objects computed by a convolutional neural network (CNN)-based model. Experiments show that both methods help improve the 3D effect of autostereoscopic displays. Moreover, the dynamic ZPS technique based on the spatial distribution of salient objects achieves better 3D performance than the maximum-saliency-based method.
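The paper gives no formula here, but the underlying idea, shifting the zero-parallax plane to the disparity of the most salient region so that the region of interest is rendered sharpest, can be sketched as follows. This is a minimal illustration with hypothetical inputs: `saliency` would come from GBVS or a CNN model, `disparity` from stereo matching.

```python
import numpy as np

def dynamic_zps(disparity, saliency, top_fraction=0.05):
    """Illustrative zero-parallax setting: place the display plane at the
    saliency-weighted disparity of the most salient region.

    disparity : (H, W) per-pixel disparity map
    saliency  : (H, W) saliency map, any scale
    """
    s, d = saliency.ravel(), disparity.ravel()
    # Keep only the most salient pixels (the assumed region of interest).
    k = max(1, int(top_fraction * s.size))
    idx = np.argpartition(s, -k)[-k:]
    # The saliency-weighted mean disparity of that region becomes the ZPS.
    w = s[idx] / (s[idx].sum() + 1e-12)
    return float((w * d[idx]).sum())

# Views are then shifted so objects at this disparity land at zero parallax.
rng = np.random.default_rng(0)
zps = dynamic_zps(rng.normal(0.0, 4.0, (240, 320)), rng.random((240, 320)))
```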
2

A K, Aswathi, and Namitha T N. "3D Saliency Detection". International Journal of Engineering Trends and Technology 47, no. 6 (May 25, 2017): 353–55. http://dx.doi.org/10.14445/22315381/ijett-v47p257.

3

Zhang, Ya, Chunyi Chen, Xiaojuan Hu, Ling Li, and Hailan Li. "Saliency detection of textured 3D models based on multi-view information and texel descriptor". PeerJ Computer Science 9 (October 25, 2023): e1584. http://dx.doi.org/10.7717/peerj-cs.1584.

Abstract:
Saliency-driven mesh simplification methods have shown promising results in maintaining visual detail, but effective simplification requires accurate 3D saliency maps. Conventional mesh saliency detection methods may not capture salient regions in 3D models with texture. To address this issue, we propose a novel saliency detection method that fuses saliency maps from multi-view projections of textured models. Specifically, we introduce a texel descriptor that combines local convexity and chromatic aberration to capture texel saliency at multiple scales. Furthermore, we created a novel dataset that reflects human eye fixation patterns on textured models, which serves as an objective evaluation benchmark. Our experimental results demonstrate that our saliency-driven method outperforms existing approaches on several evaluation metrics. The source code is available at https://github.com/bkballoon/mvsm-fusion and the dataset at https://doi.org/10.5281/zenodo.8131602.
4

Favorskaya, M. N., and L. C. Jain. "Saliency detection in deep learning era: trends of development". Information and Control Systems, no. 3 (June 21, 2019): 10–36. http://dx.doi.org/10.31799/1684-8853-2019-3-10-36.

Abstract:
Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches have been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of convolutional neural networks (CNNs). Many original solutions using CNNs have been proposed for salient object detection and even for event detection. Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted by eye tracking and digital image processing. Results: The survey reflects recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos became possible using the recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. We also present a short description of public image and video datasets with annotated salient objects or events, as well as the metrics commonly used to evaluate results. Practical relevance: This survey is a contribution to the study of rapidly developing deep learning methods for saliency detection in images and videos.
5

Liu, Tao, Zhixiang Fang, Qingzhou Mao, Qingquan Li, and Xing Zhang. "A cube-based saliency detection method using integrated visual and spatial features". Sensor Review 36, no. 2 (March 21, 2016): 148–57. http://dx.doi.org/10.1108/sr-07-2015-0110.

Abstract:
Purpose: The spatial feature is important for scene saliency detection. Scene-based visual saliency detection methods fail to incorporate the spatial aspects of 3D scenes. This paper aims to propose a cube-based method that improves saliency detection by integrating visual and spatial features in 3D scenes. Design/methodology/approach: In the presented approach, a multiscale cube pyramid is used to organize the 3D image scene and mesh model. Each 3D cube in this pyramid represents a space unit, similar to a pixel in the multiscale image pyramid of an image saliency model. In each 3D cube, color, intensity, and orientation features are extracted from the image, and a quantitative concave-convex descriptor is extracted from the 3D space. A Gaussian filter is then applied to this pyramid of cubes, with an extended center-surround difference introduced to compute the cube-based 3D scene saliency. Findings: The precision-recall rate and receiver operating characteristic curve are used to evaluate the method against other state-of-the-art methods. The results show that the proposed method outperforms traditional image-based methods, especially for 3D scenes. Originality/value: This paper presents a method that improves on image-based visual saliency models.
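As an illustration of the center-surround computation the abstract refers to, here is a minimal 2D analogue on a Gaussian pyramid; the scales and normalization are assumptions for the sketch, not the paper's parameters, and the cube-based 3D variant would aggregate features per cube instead of per pixel.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def center_surround_saliency(feature, center_scales=(2, 3), deltas=(3, 4)):
    """Itti-style center-surround differences on a Gaussian pyramid.

    feature : (H, W) feature map (color, intensity, orientation, or an
              aggregated concave-convex descriptor in the 3D cube setting).
    """
    # Gaussian pyramid: level i is the feature blurred and halved i times.
    pyramid = [feature.astype(float)]
    for _ in range(max(center_scales) + max(deltas)):
        pyramid.append(gaussian_filter(pyramid[-1], sigma=1.0)[::2, ::2])

    sal = np.zeros_like(pyramid[min(center_scales)])
    for c in center_scales:
        for d in deltas:
            s = c + d
            # Upsample the coarse "surround" level to the "center" level size.
            factor = (pyramid[c].shape[0] / pyramid[s].shape[0],
                      pyramid[c].shape[1] / pyramid[s].shape[1])
            diff = np.abs(pyramid[c] - zoom(pyramid[s], factor, order=1))
            # Resize to a common map size and accumulate.
            f2 = (sal.shape[0] / diff.shape[0], sal.shape[1] / diff.shape[1])
            sal += zoom(diff, f2, order=1)
    return sal / (sal.max() + 1e-12)

smap = center_surround_saliency(np.random.default_rng(5).random((256, 256)))
```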
6

Yuan, Jing, Yang Cao, Yu Kang, Weiguo Song, Zhongcheng Yin, Rui Ba, and Qing Ma. "3D Layout encoding network for spatial-aware 3D saliency modelling". IET Computer Vision 13, no. 5 (July 10, 2019): 480–88. http://dx.doi.org/10.1049/iet-cvi.2018.5591.

7

Hamidi, Mohamed, Aladine Chetouani, Mohamed El Haziti, Mohammed El Hassouni, and Hocine Cherifi. "Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection". Information 10, no. 2 (February 18, 2019): 67. http://dx.doi.org/10.3390/info10020067.

Abstract:
Three-dimensional models have been used extensively in several applications, including computer-aided design (CAD), video games, and medical imaging, owing to improvements in computer processing power and network bandwidth. Consequently, the need for 3D mesh watermarking schemes that protect copyright has increased considerably. In this paper, a blind robust 3D mesh watermarking method based on mesh saliency and the wavelet transform is proposed for copyright protection. The watermark is inserted by quantizing the wavelet coefficients using quantization index modulation (QIM) according to the mesh saliency of the 3D semiregular mesh. The synchronizing primitive is the distance between the mesh center and the salient points, in descending order. The experimental results show the high imperceptibility of the proposed scheme while ensuring good robustness against a wide range of attacks, including smoothing, additive noise, element reordering, and similarity transformations.
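A minimal sketch of the QIM step: each bit is embedded by snapping a wavelet coefficient to one of two interleaved quantization lattices, and extracted by checking which lattice is nearer. The step size `delta` is an illustrative assumption; the paper's saliency-adaptive embedding is not reproduced here.

```python
import numpy as np

def qim_embed(coeff, bit, delta=0.05):
    """Embed one bit: multiples of delta encode 0, half-offset multiples encode 1."""
    return float(np.round(coeff / delta - bit / 2) * delta + bit * delta / 2)

def qim_extract(coeff, delta=0.05):
    """Recover the bit from the nearer of the two quantization lattices."""
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    d1 = abs(coeff - (np.round(coeff / delta - 0.5) + 0.5) * delta)
    return 0 if d0 <= d1 else 1

bits = [1, 0, 1]
marked = [qim_embed(c, b) for c, b in zip([0.123, -0.456, 0.789], bits)]
assert [qim_extract(c) for c in marked] == bits
```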
8

Chen, Yanxiang, Yifei Pan, Minglong Song, and Meng Wang. "Image retargeting with a 3D saliency model". Signal Processing 112 (July 2015): 53–63. http://dx.doi.org/10.1016/j.sigpro.2014.11.001.

9

Lin, Hongyun, Chunyu Lin, Yao Zhao, and Anhong Wang. "3D saliency detection based on background detection". Journal of Visual Communication and Image Representation 48 (October 2017): 238–53. http://dx.doi.org/10.1016/j.jvcir.2017.06.011.

10

Wang, Junle, M. P. Da Silva, P. Le Callet, and V. Ricordel. "Computational Model of Stereoscopic 3D Visual Saliency". IEEE Transactions on Image Processing 22, no. 6 (June 2013): 2151–65. http://dx.doi.org/10.1109/tip.2013.2246176.

11

Bao, Lei, Xiongwei Zhang, Yunfei Zheng, and Yang Li. "Video saliency detection using 3D shearlet transform". Multimedia Tools and Applications 75, no. 13 (June 23, 2015): 7761–78. http://dx.doi.org/10.1007/s11042-015-2692-4.

12

Zong, Baobao 纵宝宝, Chaofeng Li 李朝锋, and Qingbing Sang 桑庆兵. "3D Image Saliency Detection Based on Log-Gabor Filtering and Saliency Map Fusion Optimization". Laser & Optoelectronics Progress 56, no. 8 (2019): 081003. http://dx.doi.org/10.3788/lop56.081003.

13

Pan, Gang, An-Zhi Wang, Bao-Lei Xu, and Weihua Ou. "Multi-scale Feature Fusion for 3D Saliency Detection". Journal of Physics: Conference Series 1651 (November 2020): 012128. http://dx.doi.org/10.1088/1742-6596/1651/1/012128.

14

Miao, Yongwei, and Jieqing Feng. "Perceptual-saliency extremum lines for 3D shape illustration". Visual Computer 26, no. 6-8 (April 9, 2010): 433–43. http://dx.doi.org/10.1007/s00371-010-0458-6.

15

Zhang, Li-na, Shi-yao Wang, Jun Zhou, Jian Liu, and Chun-gang Zhu. "3D grasp saliency analysis via deep shape correspondence". Computer Aided Geometric Design 81 (August 2020): 101901. http://dx.doi.org/10.1016/j.cagd.2020.101901.

16

Wang, Weiming, Haiyuan Chao, Jing Tong, Zhouwang Yang, Xin Tong, Hang Li, Xiuping Liu, and Ligang Liu. "Saliency-Preserving Slicing Optimization for Effective 3D Printing". Computer Graphics Forum 34, no. 6 (January 28, 2015): 148–60. http://dx.doi.org/10.1111/cgf.12527.

17

Banitalebi-Dehkordi, Amin, and Panos Nasiopoulos. "Saliency inspired quality assessment of stereoscopic 3D video". Multimedia Tools and Applications 77, no. 19 (March 6, 2018): 26055–82. http://dx.doi.org/10.1007/s11042-018-5837-4.

18

Liu, Zhengyi, Tengfei Song, and Feng Xie. "RGB-D image saliency detection from 3D perspective". Multimedia Tools and Applications 78, no. 6 (July 31, 2018): 6787–804. http://dx.doi.org/10.1007/s11042-018-6319-4.

19

Lézoray, Olivier, and Anass Nouri. "3D mesh saliency from local spiral hop descriptors". Electronic Imaging 35, no. 17 (January 16, 2023): 103–1. http://dx.doi.org/10.2352/ei.2023.35.17.3dia-103.

20

Xu, Tao, Songmin Jia, Zhengyin Dong, and Xiuzhi Li. "Obstacles Regions 3D-Perception Method for Mobile Robots Based on Visual Saliency". Journal of Robotics 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/720174.

Abstract:
A novel 3D perception method for obstacle regions, based on Improved Salient Region Extraction (ISRE), is proposed for mobile robots in indoor environments. The model acquires the original image with a Kinect sensor and then obtains an Original Salience Map (OSM) and an Intensity Feature Map (IFM) from the original image using a saliency filtering algorithm. The IFM is used as the neuron input of a pulse-coupled neural network (PCNN). To make the ignition range more exact, the PCNN ignition pulse input is further improved as follows: point-wise multiplication is applied between the PCNN internal neuron activity and the binarized salience image of the OSM, which determines the final ignition pulse input. The salient binarized region is finally extracted through multiple iterations of the improved PCNN. The binarized area is then mapped onto the depth map obtained by the Kinect sensor, enabling the mobile robot to localize obstacles. The method was evaluated on a mobile robot (Pioneer 3-DX). The experimental results demonstrate the feasibility and effectiveness of the proposed algorithm.
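A highly simplified sketch of the gating idea (not the authors' full ISRE pipeline): the binarized OSM is point-wise multiplied into the feeding input, so only salient pixels can ever ignite over the iterations. All constants here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def masked_pcnn_segmentation(ifm, osm, iters=10, osm_thresh=0.5,
                             beta=0.2, decay=0.7, v_theta=20.0):
    """Toy PCNN firing loop with a saliency-gated feeding input."""
    feed = ifm * (osm >= osm_thresh)           # point-wise multiplication step
    theta = np.ones_like(ifm)                  # dynamic firing threshold
    y = np.zeros_like(ifm)                     # pulses from the last pass
    fired = np.zeros_like(ifm, dtype=bool)
    for _ in range(iters):
        link = uniform_filter(y, size=3)       # coupling from firing neighbors
        u = feed * (1.0 + beta * link)         # internal neuron activity
        y = (u > theta).astype(float)          # neurons igniting this pass
        theta = decay * theta + v_theta * y    # refractory period for fired ones
        fired |= y.astype(bool)
    return fired                               # binarized salient region

rng = np.random.default_rng(1)
region = masked_pcnn_segmentation(rng.random((48, 64)), rng.random((48, 64)))
```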
21

Ruiu, Pietro, Lorenzo Mascia, and Enrico Grosso. "Saliency-Guided Point Cloud Compression for 3D Live Reconstruction". Multimodal Technologies and Interaction 8, no. 5 (May 3, 2024): 36. http://dx.doi.org/10.3390/mti8050036.

Abstract:
3D modeling and reconstruction are critical to creating immersive XR experiences, providing realistic virtual environments, objects, and interactions that increase user engagement and enable new forms of content manipulation. Today, 3D data can be easily captured using off-the-shelf, specialized headsets; very often, these tools provide real-time, albeit low-resolution, integration of continuously captured depth maps. This approach is generally suitable for basic AR and MR applications, where users can easily direct their attention to points of interest and benefit from a fully user-centric perspective. However, it proves to be less effective in more complex scenarios such as multi-user telepresence or telerobotics, where real-time transmission of local surroundings to remote users is essential. Two primary questions emerge: (i) what strategies are available for achieving real-time 3D reconstruction in such systems? and (ii) how can the effectiveness of real-time 3D reconstruction methods be assessed? This paper explores various approaches to the challenge of live 3D reconstruction from typical point cloud data. It first introduces some common data flow patterns that characterize virtual reality applications and shows that achieving high-speed data transmission and efficient data compression is critical to maintaining visual continuity and ensuring a satisfactory user experience. The paper thus introduces the concept of saliency-driven compression/reconstruction and compares it with alternative state-of-the-art approaches.
22

Shafiq, Muhammad Amir, Zhiling Long, Haibin Di, and Ghassan AlRegib. "A novel attention model for salient structure detection in seismic volumes". Applied Computing and Intelligence 1, no. 1 (2021): 31–45. http://dx.doi.org/10.3934/aci.2021002.

Abstract:
A new approach to seismic interpretation is proposed to leverage visual perception and human visual system modeling. Specifically, a saliency detection algorithm based on a novel attention model is proposed for identifying subsurface structures within seismic data volumes. The algorithm employs the 3D-FFT and a multi-dimensional spectral projection, which decomposes local spectra into three distinct components, each depicting variations along different dimensions of the data. Subsequently, a novel directional center-surround attention model is proposed to incorporate directional comparisons around each voxel for saliency detection within each projected dimension. Next, the resulting saliency maps along each dimension are combined adaptively to yield a consolidated saliency map, which highlights various structures characterized by subtle variations and relative motion with respect to their neighboring sections. A priori information about the seismic data can either be embedded into the proposed attention model through the directional comparisons, or incorporated into the algorithm by specifying a template when combining saliency maps adaptively. Experimental results on two real seismic datasets, from the North Sea (Netherlands) and the Great South Basin (New Zealand), demonstrate the effectiveness of the proposed algorithm for detecting salient seismic structures of different natures and appearances in one shot, which differs significantly from traditional seismic interpretation algorithms. The results further demonstrate that the proposed method outperforms comparable state-of-the-art saliency detection algorithms for natural images and videos, which are inadequate for seismic imaging data.
23

Zou, Wenbin, Shengkai Zhuo, Yi Tang, Shishun Tian, Xia Li, and Chen Xu. "STA3D: Spatiotemporally attentive 3D network for video saliency prediction". Pattern Recognition Letters 147 (July 2021): 78–84. http://dx.doi.org/10.1016/j.patrec.2021.04.010.

24

Shi, Zhenfeng, Hao Luo, and Xiamu Niu. "Saliency-based structural degradation evaluation of 3D mesh simplification". IEICE Electronics Express 8, no. 3 (2011): 161–67. http://dx.doi.org/10.1587/elex.8.161.

25

Qi, Feng, Debin Zhao, Shaohui Liu, and Xiaopeng Fan. "3D visual saliency detection model with generated disparity map". Multimedia Tools and Applications 76, no. 2 (January 29, 2016): 3087–103. http://dx.doi.org/10.1007/s11042-015-3229-6.

26

Banerjee, Subhashis, Sushmita Mitra, and B. Uma Shankar. "Automated 3D segmentation of brain tumor using visual saliency". Information Sciences 424 (January 2018): 337–53. http://dx.doi.org/10.1016/j.ins.2017.10.011.

27

Banitalebi-Dehkordi, Amin, Mahsa T. Pourazad, and Panos Nasiopoulos. "A learning-based visual saliency prediction model for stereoscopic 3D video (LBVS-3D)". Multimedia Tools and Applications 76, no. 22 (November 23, 2016): 23859–90. http://dx.doi.org/10.1007/s11042-016-4155-y.

28

Messai, Oussama, Aladine Chetouani, Fella Hachouf, and Zianou Ahmed Seghir. "Deep Quality evaluator guided by 3D Saliency for Stereoscopic Images". Electronic Imaging 2021, no. 11 (January 18, 2021): 110–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.11.hvei-110.

Abstract:
Due to the use of 3D content in various applications, Stereo Image Quality Assessment (SIQA) has attracted increasing attention as a way to ensure a good viewing experience for users. Several methods have thus been proposed in the literature, with a clear improvement from deep learning-based methods. This paper introduces a new deep learning-based no-reference SIQA method that uses the cyclopean view hypothesis and human visual attention. First, the cyclopean image is built, accounting for binocular rivalry and thereby covering the asymmetric distortion case. Second, the saliency map is computed taking depth information into account; it serves to extract patches from the most perceptually relevant regions. Finally, a modified version of the pre-trained VGG-19 is fine-tuned and used to predict the quality score from the selected patches. The performance of the proposed metric has been evaluated on the 3D LIVE phase I and phase II databases. Compared with state-of-the-art metrics, our method gives better results.
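The saliency-guided patch extraction can be sketched as below; the grid, patch size, and patch count are assumptions for illustration, and the fine-tuned VGG-19 that scores the patches is not shown.

```python
import numpy as np

def top_salient_patches(image, saliency, patch=32, n_patches=16):
    """Pick the most salient non-overlapping patches so a CNN quality
    predictor only sees perceptually relevant regions.

    image    : (H, W, C) cyclopean image
    saliency : (H, W) saliency map (here assumed to be depth-aware)
    """
    h, w = saliency.shape
    scored = []
    for y in range(0, h - patch + 1, patch):       # non-overlapping grid
        for x in range(0, w - patch + 1, patch):
            scored.append((saliency[y:y+patch, x:x+patch].mean(), y, x))
    scored.sort(reverse=True)                      # most salient first
    return np.stack([image[y:y+patch, x:x+patch]
                     for _, y, x in scored[:n_patches]])

rng = np.random.default_rng(2)
patches = top_salient_patches(rng.random((224, 224, 3)),
                              rng.random((224, 224)))   # (16, 32, 32, 3)
```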
29

Jiang, Yibo, Hui Bi, Hui Li, and Zhihao Xu. "Automatic and Accurate 3D Measurement Based on RGBD Saliency Detection". IEICE Transactions on Information and Systems E102.D, no. 3 (March 1, 2019): 688–89. http://dx.doi.org/10.1587/transinf.2018edl8212.

30

Chen, Yanxiang, Yifei Pan, Minglong Song, and Meng Wang. "Improved seam carving combining with 3D saliency for image retargeting". Neurocomputing 151 (March 2015): 645–53. http://dx.doi.org/10.1016/j.neucom.2014.05.089.

31

Zhao, Yitian, Yonghuai Liu, Yongjun Wang, Baogang Wei, Jian Yang, Yifan Zhao, and Yongtian Wang. "Region-based saliency estimation for 3D shape analysis and understanding". Neurocomputing 197 (July 2016): 1–13. http://dx.doi.org/10.1016/j.neucom.2016.01.012.

32

Miao, Yongwei, Jieqing Feng, and Renato Pajarola. "Visual saliency guided normal enhancement technique for 3D shape depiction". Computers & Graphics 35, no. 3 (June 2011): 706–12. http://dx.doi.org/10.1016/j.cag.2011.03.017.

33

Jeong, Se-Won, and Jae-Young Sim. "Saliency Detection for 3D Surface Geometry Using Semi-regular Meshes". IEEE Transactions on Multimedia 19, no. 12 (December 2017): 2692–705. http://dx.doi.org/10.1109/tmm.2017.2710802.

34

Wang, Haoqian, Bing Yan, Xingzheng Wang, Yongbing Zhang, and Yi Yang. "Accurate saliency detection based on depth feature of 3D images". Multimedia Tools and Applications 77, no. 12 (August 16, 2017): 14655–72. http://dx.doi.org/10.1007/s11042-017-5052-8.

35

Mangal, Anuj, Hitendra Garg, and Charul Bhatnagar. "Modified ResNet50 model and semantic segmentation based image co-saliency detection". Journal of Information and Optimization Sciences 44, no. 6 (2023): 1035–42. http://dx.doi.org/10.47974/jios-1331.

Abstract:
Co-salient object identification is an emerging branch of visual saliency detection that tries to find salient patterns appearing across several groups of images. The proposed work has the potential to benefit a wide variety of applications, including detection of objects of interest, more robust object recognition, animation synthesis, handling of input image queries, 3D object reconstruction, and object co-segmentation. To build the modified ResNet50 model, hyperparameters are adjusted in the current work to increase accuracy while minimizing loss. The modified network is trained on HOG features, along with the corresponding ground-truth images, to mine more significant features. For a more streamlined outcome, the proposed system is built using the SGDM optimizer. During testing, the network distinguishes relevant from irrelevant images and generates an appropriate co-saliency map for the relevant ones. Integrating the associated and prominent characteristics of the images yields the appropriate ground truth for each image. The proposed method reports a better F1 score of 98.7% and an MAE of 0.089 when compared with state-of-the-art models.
36

Wen, Falin, Qinghui Wang, Ruirui Zou, Ying Wang, Fenglin Liu, Yang Chen, Linghao Yu, Shaoyi Du, and Chengzhi Yuan. "A Salient Object Detection Method Based on Boundary Enhancement". Sensors 23, no. 16 (August 10, 2023): 7077. http://dx.doi.org/10.3390/s23167077.

Abstract:
Visual saliency refers to the human ability to quickly focus on important parts of the visual field, a crucial aspect of image processing, particularly in fields like medical imaging and robotics. Understanding and simulating this mechanism is crucial for solving complex visual problems. In this paper, we propose a salient object detection method based on boundary enhancement, applicable to both 2D and 3D sensor data. To address the large scale variation of salient objects, our method introduces a multi-level feature aggregation module that enhances the expressive ability of fixed-resolution features by using adjacent features to complement each other. Additionally, we propose a multi-scale information extraction module that captures local contextual information at different scales for the level-by-level back-propagated features, allowing a better measurement of the composition of the feature map after back-fusion. To tackle the low confidence of boundary pixels, we also introduce a boundary extraction module that extracts the boundary information of salient regions; this information is then fused with the salient object information to further refine the saliency prediction results. During training, our method uses a mixed loss function that constrains model training at two levels: pixels and images. The experimental results demonstrate that our boundary-enhanced salient object detection method performs well on objects of different scales, multiple objects, linear objects, and objects in complex scenes. We compare our method with the best methods on four conventional datasets and achieve an average improvement of 6.2% in the mean absolute error (MAE). Overall, our approach shows promise for improving the accuracy and efficiency of salient object detection in a variety of settings, including 2D/3D semantic analysis and reconstruction/inpainting of image, video, and point cloud data.
37

Lyu, Wei, Wei Wu, Lin Zhang, Zhaohui Wu, and Zhong Zhou. "Laplacian-based 3D mesh simplification with feature preservation". International Journal of Modeling, Simulation, and Scientific Computing 10, no. 02 (April 2019): 1950002. http://dx.doi.org/10.1142/s1793962319500028.

Abstract:
We propose a novel Laplacian-based algorithm that simplifies triangle surface meshes and can provide different preservation ratios of geometric features. Our efficient and fast algorithm takes a 3D mesh model as input and initially detects geometric features using a Laplacian-based shape descriptor (L-descriptor). The algorithm then performs an optimized clustering approach that combines the Laplacian operator with K-means clustering to classify vertices. Moreover, we introduce a Laplacian-weighted cost function based on the L-descriptor to perform feature weighting and error statistics comparison, which are further used to change the deletion order of the model elements and preserve salient features. Our algorithm can provide different preservation ratios of geometric features and may be extended to handle arbitrary mesh topologies. Our experiments on a variety of 3D surface meshes demonstrate the advantages of our algorithm in terms of accuracy and applicability, and in preserving salient geometric features.
38

Leal Narvaez, Esmeide Alberto, German Sanchez Torres, and John William Branch Bedoya. "Point cloud saliency detection via local sparse coding". DYNA 86, no. 209 (April 1, 2019): 238–47. http://dx.doi.org/10.15446/dyna.v86n209.75958.

Abstract:
The human visual system (HVS) can process large quantities of visual information instantly. Visual saliency perception is the process of locating and identifying regions with a high degree of saliency from a visual standpoint. Mesh saliency detection has been studied extensively in recent years, but few studies have focused on 3D point cloud saliency detection. The estimation of visual saliency is important for computer graphics tasks such as simplification, segmentation, shape matching, and resizing. In this paper, we present a method for the direct detection of saliency on unorganized point clouds. First, our method computes a set of overlapping neighborhoods and estimates a descriptor vector for each point inside each neighborhood. Then, the descriptor vectors are used as a natural dictionary in order to apply a sparse coding process. Finally, we estimate a saliency map of the point neighborhoods based on the Minimum Description Length (MDL) principle. Experimental results show that the proposed method achieves results similar to those in the literature, and in some cases improves on them. It captures the geometry of the point clouds without using any topological information and achieves acceptable performance. The effectiveness and robustness of our approach are shown by comparison with previous studies in the literature.
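As a rough illustration of the idea (a generic reconstruction-residual proxy, not the paper's MDL criterion): descriptors that the dictionary of all other descriptors reconstructs poorly are the hard-to-describe, hence salient, ones.

```python
import numpy as np

def omp_residual(D, x, n_nonzero=5):
    """Orthogonal matching pursuit over dictionary D (columns are atoms);
    returns the norm of the reconstruction residual of x."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return float(np.linalg.norm(residual))

def sparse_coding_saliency(descriptors, n_nonzero=5):
    """Saliency of each point neighborhood = how badly its descriptor is
    sparse-coded by the 'natural dictionary' of all other descriptors."""
    X = descriptors / (np.linalg.norm(descriptors, axis=1, keepdims=True) + 1e-12)
    sal = np.empty(len(X))
    for i in range(len(X)):
        D = np.delete(X, i, axis=0).T       # leave-one-out dictionary
        sal[i] = omp_residual(D, X[i], n_nonzero)
    return sal / (sal.max() + 1e-12)

rng = np.random.default_rng(3)
saliency = sparse_coding_saliency(rng.random((60, 16)))  # one value per point
```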
39

Pham, Nam, and Sergey Fomel. "Uncertainty and interpretability analysis of encoder-decoder architecture for channel detection". GEOPHYSICS 86, no. 4 (July 1, 2021): O49–O58. http://dx.doi.org/10.1190/geo2020-0409.1.

Abstract:
We have adopted a method to understand the uncertainty and interpretability of a Bayesian convolutional neural network for detecting 3D channel geobodies in seismic volumes. We measure heteroscedastic aleatoric uncertainty and epistemic uncertainty. Epistemic uncertainty captures the uncertainty of the network parameters, whereas heteroscedastic aleatoric uncertainty accounts for noise in the seismic volumes. We train a network modified from the U-Net architecture on 3D synthetic seismic volumes and then apply it to field data. Tests on 3D field data sets from the Browse Basin, offshore Australia, and from Parihaka, New Zealand, show that uncertainty volumes are related to geologic uncertainty, model mispicks, and input noise. We analyze model interpretability on these data sets by creating saliency volumes with gradient-weighted class activation mapping. We find that the model takes a global-to-local approach to localize channel geobodies, and we examine the importance of different model components in the overall strategy. Using channel probability, uncertainty, and saliency volumes, interpreters can accurately identify channel geobodies in 3D seismic volumes and also understand the model's predictions.
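The gradient-weighted class activation mapping (Grad-CAM) computation behind such saliency volumes is short once a convolutional layer's activations and gradients are in hand; obtaining those from the trained network is not shown here.

```python
import numpy as np

def grad_cam_3d(activations, gradients):
    """Grad-CAM for a 3D network: channel weights are the gradients
    global-average-pooled over the volume; the saliency volume is the
    ReLU of the weighted sum of activation channels.

    activations : (C, D, H, W) feature maps of the chosen conv layer
    gradients   : (C, D, H, W) d(score)/d(activations) from backprop
    """
    weights = gradients.mean(axis=(1, 2, 3))          # GAP over the volume
    cam = np.tensordot(weights, activations, axes=1)  # weighted channel sum
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    return cam / (cam.max() + 1e-12)

rng = np.random.default_rng(4)
sal_volume = grad_cam_3d(rng.random((8, 16, 16, 16)),
                         rng.normal(size=(8, 16, 16, 16)))  # (16, 16, 16)
```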
40

Wei, Guangshun, Long Ma, Chen Wang, Christian Desrosiers, and Yuanfeng Zhou. "Multi-Task Joint Learning of 3D Keypoint Saliency and Correspondence Estimation". Computer-Aided Design 141 (December 2021): 103105. http://dx.doi.org/10.1016/j.cad.2021.103105.

41

Messai, Oussama, Aladine Chetouani, Fella Hachouf, and Zianou Ahmed Seghir. "3D saliency guided deep quality predictor for no-reference stereoscopic images". Neurocomputing 478 (March 2022): 22–36. http://dx.doi.org/10.1016/j.neucom.2022.01.002.

42

Fang, Yuming, Guanqun Ding, Jia Li, and Zhijun Fang. "Deep3DSaliency: Deep Stereoscopic Video Saliency Detection Model by 3D Convolutional Networks". IEEE Transactions on Image Processing 28, no. 5 (May 2019): 2305–18. http://dx.doi.org/10.1109/tip.2018.2885229.

43

Filipe, Silvio, Laurent Itti, and Luis A. Alexandre. "BIK-BUS: Biologically Motivated 3D Keypoint Based on Bottom-Up Saliency". IEEE Transactions on Image Processing 24, no. 1 (January 2015): 163–75. http://dx.doi.org/10.1109/tip.2014.2371532.

44

Castellani, U., M. Cristani, S. Fantoni, and V. Murino. "Sparse points matching by combining 3D mesh saliency with statistical descriptors". Computer Graphics Forum 27, no. 2 (April 2008): 643–52. http://dx.doi.org/10.1111/j.1467-8659.2008.01162.x.

45

Guo, Yu, Fei Wang, and Jingmin Xin. "Point-wise saliency detection on 3D point clouds via covariance descriptors". Visual Computer 34, no. 10 (June 26, 2017): 1325–38. http://dx.doi.org/10.1007/s00371-017-1416-3.

46

Bulbul, Abdullah, Sami Arpa, and Tolga Capin. "A clustering-based method to estimate saliency in 3D animated meshes". Computers & Graphics 43 (October 2014): 11–20. http://dx.doi.org/10.1016/j.cag.2014.04.003.

47

Sun, Zhenhao, Xu Wang, Qiudan Zhang, and Jianmin Jiang. "Real-Time Video Saliency Prediction Via 3D Residual Convolutional Neural Network". IEEE Access 7 (2019): 147743–54. http://dx.doi.org/10.1109/access.2019.2946479.

48

Saleh, Kamel, and Mark Sumner. "A SVM-3D Based Encoderless Control of a Fault-Tolerant PMSM Drive". Electronics 9, no. 7 (July 4, 2020): 1095. http://dx.doi.org/10.3390/electronics9071095.

Abstract:
This paper presents a novel technique for encoderless speed control of a permanent magnet synchronous motor (PMSM) in the case of a loss of one phase. The importance of this work is that it presents solutions to maintain the operation of the system in various conditions, increasing the reliability of the whole drive system to meet the safety requirements of some applications. To achieve that, a fault-tolerant inverter modulated through a three-dimensional space vector pulse width modulation (3D-SVPWM) technique is used. Besides that, an algorithm to obtain the exact position of the saturation saliency in the case of a loss of one phase is introduced, to achieve closed-loop field-oriented encoderless speed control and further enhance the reliability of the whole drive system. This algorithm is based on measuring the transient stator current responses of the motor due to the switching actions of the insulated-gate bipolar transistors (IGBTs). Then, according to the operating condition (normal or a loss of one phase), the saliency position signals are constructed from the dynamic current responses. Simulation results are provided to demonstrate the effectiveness of the saliency tracking technique under normal and loss-of-one-phase conditions. Moreover, the results verify the maximum reliability of the whole drive system achieved in this work through continuous operation under a loss of one phase and under encoderless speed control.
49

Nie, Weizhi, Lu Qu, Minjie Ren, Qi Liang, Yuting Su, Yangyang Li, and Hao Jin. "Two-Stream Network Based on Visual Saliency Sharing for 3D Model Recognition". IEEE Access 8 (2020): 5979–89. http://dx.doi.org/10.1109/access.2019.2963511.

50

Abouelaziz, Ilyass, Aladine Chetouani, Mohammed El Hassouni, Longin Jan Latecki, and Hocine Cherifi. "3D visual saliency and convolutional neural network for blind mesh quality assessment". Neural Computing and Applications 32, no. 21 (October 19, 2019): 16589–603. http://dx.doi.org/10.1007/s00521-019-04521-1.
