A selection of scientific literature on the topic "3D saliency"
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "3D saliency".
Journal articles on the topic "3D saliency"
Jiao, Yuzhong, Mark Ping Chan Mok, Kayton Wai Keung Cheung, Man Chi Chan, Tak Wai Shen, and Yiu Kei Li. "Dynamic Zero-Parallax-Setting Techniques for Multi-View Autostereoscopic Display." Electronic Imaging 2020, no. 2 (January 26, 2020): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-098.
A K, Aswathi, and Namitha T N. "3D Saliency Detection." International Journal of Engineering Trends and Technology 47, no. 6 (May 25, 2017): 353–55. http://dx.doi.org/10.14445/22315381/ijett-v47p257.
Zhang, Ya, Chunyi Chen, Xiaojuan Hu, Ling Li, and Hailan Li. "Saliency detection of textured 3D models based on multi-view information and texel descriptor." PeerJ Computer Science 9 (October 25, 2023): e1584. http://dx.doi.org/10.7717/peerj-cs.1584.
Favorskaya, M. N., and L. C. Jain. "Saliency detection in deep learning era: trends of development." Information and Control Systems, no. 3 (June 21, 2019): 10–36. http://dx.doi.org/10.31799/1684-8853-2019-3-10-36.
Liu, Tao, Zhixiang Fang, Qingzhou Mao, Qingquan Li, and Xing Zhang. "A cube-based saliency detection method using integrated visual and spatial features." Sensor Review 36, no. 2 (March 21, 2016): 148–57. http://dx.doi.org/10.1108/sr-07-2015-0110.
Yuan, Jing, Yang Cao, Yu Kang, Weiguo Song, Zhongcheng Yin, Rui Ba, and Qing Ma. "3D Layout encoding network for spatial-aware 3D saliency modelling." IET Computer Vision 13, no. 5 (July 10, 2019): 480–88. http://dx.doi.org/10.1049/iet-cvi.2018.5591.
Hamidi, Mohamed, Aladine Chetouani, Mohamed El Haziti, Mohammed El Hassouni, and Hocine Cherifi. "Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection." Information 10, no. 2 (February 18, 2019): 67. http://dx.doi.org/10.3390/info10020067.
Chen, Yanxiang, Yifei Pan, Minglong Song, and Meng Wang. "Image retargeting with a 3D saliency model." Signal Processing 112 (July 2015): 53–63. http://dx.doi.org/10.1016/j.sigpro.2014.11.001.
Lin, Hongyun, Chunyu Lin, Yao Zhao, and Anhong Wang. "3D saliency detection based on background detection." Journal of Visual Communication and Image Representation 48 (October 2017): 238–53. http://dx.doi.org/10.1016/j.jvcir.2017.06.011.
Wang, Junle, M. P. DaSilva, P. LeCallet, and V. Ricordel. "Computational Model of Stereoscopic 3D Visual Saliency." IEEE Transactions on Image Processing 22, no. 6 (June 2013): 2151–65. http://dx.doi.org/10.1109/tip.2013.2246176.
Dissertations on the topic "3D saliency"
Zhao, Yitian. "Detections and applications of saliency on 3D surfaces by using retinex theory." Thesis, Aberystwyth University, 2013. http://hdl.handle.net/2160/83baa3e3-fe5c-4e1d-a3d8-e63d95bed13e.
In addition, the comparative studies show that the proposed techniques outperform state-of-the-art methods and have clear advantages.
Wang, Junle. "From 2D to stereoscopic-3D visual saliency: revisiting psychophysical methods and computational modeling." Nantes, 2012. http://www.theses.fr/2012NANT2072.
Visual attention is one of the most important mechanisms deployed in the human visual system to reduce the amount of information that our brain needs to process. Increasing effort is being dedicated to the study of visual attention, particularly to its computational modeling. In this thesis, we present studies focusing on several aspects of visual attention research. Our work falls into two main parts: the first concerns the ground truths used in studies of visual attention; the second contains studies on modeling visual attention under stereoscopic-3D (S-3D) viewing conditions. In the first part, our work starts by assessing the reliability of fixation density maps (FDMs) from different eye-tracking databases. We then quantitatively identify the similarities and differences between fixation density maps and visual importance maps, which have been two widely used ground truths for attention-related applications. Next, to address the lack of ground truth in the 3D visual attention modeling community, we conduct a binocular eye-tracking experiment to create a new eye-tracking database for S-3D images. In the second part, we start by examining the impact of depth on visual attention under S-3D viewing conditions. We first describe a so-called "depth bias" in the viewing of synthetic S-3D content on planar stereoscopic displays, and then extend our study from synthetic stimuli to natural-content S-3D images. We propose a depth-saliency-based model of 3D visual attention, which relies on the depth contrast of the scene, and compare two different ways of incorporating depth information into an S-3D visual attention model. Next, we study the difference in center bias between 2D and S-3D viewing conditions, and further integrate the center bias into S-3D visual attention modeling.
Finally, based on the assumption that visual attention can be used to improve the Quality of Experience of 3D-TV when combined with blur, we study the influence of blur on depth perception and blur's relationship with binocular disparity.
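As an illustration of the depth-contrast idea described in this abstract, here is a minimal center-surround sketch in Python. The function name, window sizes, and box-filter formulation are my own assumptions for illustration, not the thesis' actual model:

```python
import numpy as np

def depth_contrast_saliency(depth, center=3, surround=15):
    """Toy center-surround depth-contrast map: each pixel's saliency is
    the absolute difference between a small local mean depth and a larger
    surround mean depth, normalized to [0, 1]."""
    def box_mean(img, k):
        # k x k box filter via 2D cumulative sums, with edge padding
        pad = k // 2
        p = np.pad(img, pad, mode='edge')
        c = np.cumsum(np.cumsum(p, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column for easy differencing
        h, w = img.shape
        return (c[k:k + h, k:k + w] - c[:h, k:k + w]
                - c[k:k + h, :w] + c[:h, :w]) / (k * k)

    s = np.abs(box_mean(depth, center) - box_mean(depth, surround))
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
```

On a depth map containing an object closer than its background, this map peaks along depth discontinuities, which is the kind of depth contrast the abstract's model exploits.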
Munaretti, Rodrigo Barni. "Perceptual guidance in mesh processing and rendering using mesh saliency." Biblioteca Digital de Teses e Dissertações da UFRGS, 2007. http://hdl.handle.net/10183/12673.
Considerations of perceptual information are quickly gaining importance in mesh representation, analysis, and display research. User studies, eye tracking, and other techniques are able to provide ever more useful insights for many user-centric systems, which form the bulk of computer graphics applications. In this work we build upon the concept of Mesh Saliency, an automatic measure of visual importance for triangle meshes based on models of low-level human visual attention, improving it, extending it, and integrating it with different applications. We extend the concept of Mesh Saliency to encompass deformable objects, showing how a vertex-level saliency map can be constructed that accurately captures the regions of high perceptual importance over a range of mesh poses or deformations. We define multi-pose saliency as a multi-scale aggregate of curvature values over a locally stable vertex neighborhood, together with deformations over multiple poses, and we replace the Euclidean distance with the geodesic distance, thereby providing superior estimates of the local neighborhood. Results show that multi-pose saliency generates more visually appealing mesh simplifications than single-pose mesh saliency. We also apply Mesh Saliency to the problems of mesh segmentation and view-dependent rendering, introducing a segmentation technique that partitions an object into a set of face clusters, each encompassing a group of locally interesting features. Mesh Saliency is incorporated into a propagative mesh clustering framework, guiding cluster seed selection and triangle propagation costs and leading to convergence of face clusters around perceptually important features. We compare our technique with different fully automatic segmentation algorithms, showing that it provides similar or better segmentation without the need for user input.
Since the proposed clustering algorithm is especially suitable for multi-resolution rendering, we illustrate the application of our clustering results through a saliency-guided view-dependent rendering system, achieving significant framerate increases with little loss of visual detail.
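The abstract's notion of saliency as a multi-scale aggregate of curvature can be sketched as a difference of Gaussian-weighted curvature averages, in the spirit of the mesh saliency measure the thesis builds on. The function name and parameters below are hypothetical, and the Euclidean distance stands in for the geodesic distance the thesis advocates:

```python
import numpy as np

def mesh_saliency(vertices, curvature, sigma):
    """Single-scale mesh saliency sketch: the saliency of a vertex is
    |G(curvature, sigma) - G(curvature, 2*sigma)|, where G is a
    Gaussian-weighted average of per-vertex curvature over nearby
    vertices. O(n^2) memory; intended only for small toy meshes."""
    # pairwise squared Euclidean distances between vertices
    d2 = ((vertices[:, None, :] - vertices[None, :, :]) ** 2).sum(-1)

    def gauss_avg(s):
        w = np.exp(-d2 / (2.0 * s * s))
        w[d2 > (2.0 * s) ** 2] = 0.0  # cut the kernel off beyond 2*s
        return (w * curvature[None, :]).sum(1) / w.sum(1)

    return np.abs(gauss_avg(sigma) - gauss_avg(2.0 * sigma))
```

A mesh with uniform curvature yields zero saliency everywhere, matching the intuition that saliency flags regions that differ from their surroundings rather than regions of high absolute curvature.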
Joubert, Deon. "Saliency grouped landmarks for use in vision-based simultaneous localisation and mapping." Diss., University of Pretoria, 2013. http://hdl.handle.net/2263/40834.
Dissertation (MEng)--University of Pretoria, 2013. Electrical, Electronic and Computer Engineering.
Fraihat, Hossam. "Contribution à la perception visuelle multi-résolution de l'environnement 3D : application à la robotique autonome." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1065/document.
The research work carried out within the framework of this thesis concerns the development of a system for perception and saliency detection in a 3D environment, taking advantage of a pseudo-3D representation. Our contribution and the resulting concept derive from the hypothesis that the depth of an object with respect to the robot is an important factor in saliency detection. On this basis, a saliency-based vision system for the 3D environment has been proposed, designed, and validated on a platform including a robot equipped with a pseudo-3D sensor. The implementation of the aforementioned concept and its design were first validated on the pseudo-3D KINECT vision system; in a second step, the concept and the algorithms were extended to the aforementioned robotic platform. The main contributions of this thesis can be summarized as follows: A) a state of the art of the various sensors for acquiring depth information, as well as of different methods for detecting 2D and pseudo-3D saliency; B) a study of a pseudo-3D visual saliency system, based on the development of a robust algorithm for detecting salient objects; C) the implementation of a depth estimation system, in centimeters, for the Pepper robot; D) the implementation of the proposed concepts and methods on the aforementioned platform. The studies carried out and the experimental validations confirmed that the proposed approaches increase the autonomy of robots in a real 3D environment.
El Haje, Noura. "A heterogeneous data-based proposal for procedural 3D cities visualization and generalization." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30238.
This thesis project was born from a collaboration between the research team VORTEX / Visual objects: from reality to expression (now REVA: Real Expression Artificial Life) at IRIT, the Institute of Research in Computer Science of Toulouse, on the one hand, and education professionals, companies, and public entities on the other. The SCOLA collaborative project is essentially an online learning platform based on the use of serious games in schools. It helps users acquire and track predefined skills, and it provides teachers with a new, flexible tool that creates pedagogical scenarios and personalizes student records. Several contributions have been attributed to IRIT; one of these is to propose a solution for the automatic creation of 3D environments to integrate into the game scenario. This solution aims to spare 3D graphic designers from manually modeling detailed and large 3D environments, which can be very expensive and time-consuming. Various applications and prototypes have been developed to allow users to generate and visualize their own virtual world, primarily from a set of rules. There is, however, no single representation scheme for the virtual world, owing to the heterogeneity and diversity of 3D content design, especially for city models. This constraint has led us to base our project largely on real 3D urban data instead of custom data predefined by the game designer. Advances in computer graphics, high computing capabilities, and Web technologies have revolutionized data reconstruction and visualization techniques. These techniques are applied in a variety of areas, from video games and simulations to movies that use procedurally generated spaces and character animations. Although modern computer games do not have the same hardware and memory restrictions as older games, procedural generation is frequently used to create unique games, maps, levels, characters, or other randomized facets on each run.
Currently, the trend is shifting towards GIS (Geographical Information Systems) to create urban worlds, especially after their successful adoption around the world in many application areas. GIS are more specifically dedicated to applications such as simulation, disaster management, and urban planning, and their use in games remains more limited; one example is the game "Minecraft", whose latest version offers maps built from real-world city geodata. [...]
Ben Salah, Imeen. "Extraction d'un graphe de navigabilité à partir d'un nuage de points 3D enrichis." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR070/document.
Cameras have become increasingly common in vehicles, smartphones, and advanced driver assistance systems. The areas of application of these cameras in the world of intelligent transportation systems are becoming more and more varied: pedestrian detection, line-crossing detection, navigation, and so on. Vision-based navigation has reached a certain maturity in recent years through the use of advanced technologies. Vision-based navigation systems have the considerable advantage of being able to directly use the visual information already existing in the environment, without having to adapt any element of the infrastructure. In addition, unlike systems using GPS, they can be used outdoors and indoors without any loss of precision. This gives systems based on computer vision a clear advantage. A major area of research currently focuses on mapping, which represents an essential step for navigation. This step raises a substantial memory management problem because of the huge amount of information collected by each sensor: the memory space required to accommodate the map of a small city is measured in tens of gigabytes, or even thousands when one wants to cover large spaces. This makes it impossible to integrate such a map into a mobile system such as a smartphone, a camera embedded in a vehicle, or a robot. The challenge is to develop new algorithms that minimize the memory needed to operate a navigation system using only computer vision. It is in this context that our project consists in developing a new system able to summarize a 3D map built from the visual information collected by several sensors. The summary is a set of spherical views that keep the same level of visibility in all directions, while also guaranteeing, at a lower cost, a good level of precision and speed during navigation.
The summary map of the environment will contain geometric, photometric, and semantic information.
Walter, Nicolas. "Détection de primitives par une approche discrète et non linéaire : application à la détection et la caractérisation de points d'intérêt dans les maillages 3D." PhD thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00808216.
Der volle Inhalt der QuelleEl, Sayed Abdul Rahman. „Traitement des objets 3D et images par les méthodes numériques sur graphes“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH19/document.
Skin detection involves detecting pixels corresponding to human skin in a color image. Faces constitute an important category of stimulus owing to the wealth of information they convey: before recognizing any person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, for tasks such as face detection, adult 3D-object filtering, and gesture recognition. In addition, saliency detection on 3D meshes is an important preprocessing phase for many computer vision applications. 3D segmentation based on salient regions has been widely used in many computer vision applications, such as 3D shape matching, object alignment, 3D point-cloud smoothing, searching images on the web, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task, for various reasons generally related to the variability of the shape and color to be detected (different hues from one person to another, different orientations and sizes, lighting conditions), especially for images from the web captured under different light conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, differencing of two consecutive images, optical flow computation), and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin color and salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we provide 3D face detection approaches using linear programming and data mining. We also adapt our proposed methods to the problems of simplifying 3D point clouds and matching 3D objects, and we show the robustness and efficiency of our proposed methods through different experimental results.
Finally, we show the stability and robustness of our methods with respect to noise.
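As a concrete instance of the "color-based approaches" this abstract lists (and not the graph-based method the thesis itself proposes), a classic explicit RGB rule classifies a pixel as skin when its color falls inside a hand-tuned region of color space. The function name and the exact thresholds below are illustrative assumptions:

```python
import numpy as np

def skin_mask_rgb(img):
    """Classify each pixel of an H x W x 3 RGB image as skin/non-skin
    using a simple explicit RGB rule. Returns an H x W boolean mask."""
    img = img.astype(np.int16)  # avoid uint8 overflow in differences
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return ((r > 95) & (g > 40) & (b > 20)
            & (img.max(-1) - img.min(-1) > 15)   # enough color spread
            & (np.abs(r - g) > 15) & (r > g) & (r > b))
```

Such explicit rules are fast but brittle under the lighting variability the abstract emphasizes, which is precisely the motivation for more robust, optimization-based formulations.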
Ricci, Thomas. „Individuazione di punti salienti in dati 3D mediante rappresentazioni strutturate“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3968/.
Books on the topic "3D saliency"
Lee, Christoph I. 3D CT Colonography for Colorectal Cancer Screening. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190223700.003.0046.
Lee, Christoph I. Management of Lung Nodules Detected by CT. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780190223700.003.0045.
Book chapters on the topic "3D saliency"
Leroy, Julien, and Nicolas Riche. "Toward 3D Visual Saliency Modeling." In From Human Attention to Computational Attention, 305–30. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4939-3435-5_17.
Pirri, Fiora, Matia Pizzoli, and Arnab Sinha. "Coherence Fields for 3D Saliency Prediction." In Biologically Inspired Cognitive Architectures 2012, 251–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-34274-5_45.
Yang, Yu-Bin, Tong Lu, and Jin-Jie Lin. "Saliency Regions for 3D Mesh Abstraction." In Advances in Multimedia Information Processing - PCM 2009, 292–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10467-1_25.
Son, Jeongho, Dongkyu Kim, Hak-Yeol Choi, Han-Ul Jang, and Sunghee Choi. "Perceptual 3D Watermarking Using Mesh Saliency." In Information Science and Applications 2017, 315–22. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4154-9_37.
Lara, Graciela, Angélica De Antonio, Adriana Peña, Mirna Muñoz, and Edwin Becerra. "3D objects' shape relevance for saliency measure." In Advances in Intelligent Systems and Computing, 241–50. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69341-5_22.
Ma, Bo, Eakta Jain, and Alireza Entezari. "3D Saliency from Eye Tracking with Tomography." In Eye Tracking and Visualization, 185–98. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-47024-5_11.
Yan, Feng, Fei Wang, Yu Guo, and Peilin Jiang. "Saliency-Guided Smoothing for 3D Point Clouds." In Intelligent Computing Theories and Application, 165–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63309-1_16.
Taher, Hamed, Muhammad Rushdi, Muhammad Islam, and Ahmed Badawi. "Adaptive Saliency-Weighted 2D-to-3D Video Conversion." In Computer Analysis of Images and Patterns, 737–48. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-23117-4_63.
Ding, Guanqun, and Yuming Fang. "Video Saliency Detection by 3D Convolutional Neural Networks." In Communications in Computer and Information Science, 245–54. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_23.
Cen, Jiajing, Pei An, Gaojie Chen, Junxiong Liang, and Jie Ma. "PSS: Point Semantic Saliency for 3D Object Detection." In Artificial Intelligence, 408–19. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93046-2_35.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "3D saliency"
Pirri, Fiora, Matia Pizzoli, Daniele Rigato, and Redjan Shabani. "3D Saliency maps." In 2011 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE, 2011. http://dx.doi.org/10.1109/cvprw.2011.5981736.
Wang, Yao, Qi Dai, Mihai Bâce, Karsten Klein, and Andreas Bulling. "Saliency3D: A 3D Saliency Dataset Collected on Screen." In ETRA '24: The 2024 Symposium on Eye Tracking Research and Applications. New York, NY, USA: ACM, 2024. http://dx.doi.org/10.1145/3649902.3653350.
Yao, Houpu (Hope), and Max Yi Ren. "Impressionist: A 3D Peekaboo Game for Crowdsourcing Shape Saliency." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-60081.
Kobyshev, Nikolay, Hayko Riemenschneider, Andras Bodis-Szomoru, and Luc Van Gool. "3D Saliency for Finding Landmark Buildings." In 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016. http://dx.doi.org/10.1109/3dv.2016.35.
Wan, Pengfei, Yunlong Feng, Gene Cheung, Ivan V. Bajic, Oscar C. Au, and Yusheng Ji. "3D motion in visual saliency modeling." In ICASSP 2013 - 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013. http://dx.doi.org/10.1109/icassp.2013.6637969.
ALfarasani, Dalia A., Thomas Sweetman, Yu-Kun Lai, and Paul L. Rosin. "Learning to Predict 3D Mesh Saliency." In 2022 26th International Conference on Pattern Recognition (ICPR). IEEE, 2022. http://dx.doi.org/10.1109/icpr56361.2022.9956579.
Wu, Zongwei, Shriarulmozhivarman Gobichettipalayam, Brahim Tamadazte, Guillaume Allibert, Danda Pani Paudel, and Cedric Demonceaux. "Robust RGB-D Fusion for Saliency Detection." In 2022 International Conference on 3D Vision (3DV). IEEE, 2022. http://dx.doi.org/10.1109/3dv57658.2022.00052.
Piao, Yongri, Zhengkun Rong, Miao Zhang, Xiao Li, and Huchuan Lu. "Deep Light-field-driven Saliency Detection from a Single View." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/127.
Li, Hai, Weicai Ye, Guofeng Zhang, Sanyuan Zhang, and Hujun Bao. "Saliency Guided Subdivision for Single-View Mesh Reconstruction." In 2020 International Conference on 3D Vision (3DV). IEEE, 2020. http://dx.doi.org/10.1109/3dv50981.2020.00120.
Liu, Peng, Michael Reale, Xing Zhang, and Lijun Yin. "Saliency-guided 3D head pose estimation on 3D expression models." In the 15th ACM. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2522848.2522864.