Selection of scientific literature on the topic "RGB-D Image"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "RGB-D Image".

Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "RGB-D Image"

1

Uddin, Md Kamal, Amran Bhuiyan, and Mahmudul Hasan. "Fusion in Dissimilarity Space Between RGB-D and Skeleton for Person Re-Identification." International Journal of Innovative Technology and Exploring Engineering 10, no. 12 (2021): 69–75. http://dx.doi.org/10.35940/ijitee.l9566.10101221.

Full text of the source
Abstract:
Person re-identification (Re-id) is one of the important tools of video surveillance systems, which aims to recognize an individual across the multiple disjoint sensors of a camera network. Despite the recent advances on RGB camera-based person re-identification methods under normal lighting conditions, Re-id researchers fail to take advantages of modern RGB-D sensor-based additional information (e.g. depth and skeleton information). When traditional RGB-based cameras fail to capture the video under poor illumination conditions, RGB-D sensor-based additional information can be advantageous to
APA, Harvard, Vancouver, ISO, and other citation styles
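The dissimilarity-space fusion summarized above can be illustrated generically: compute one query-gallery distance matrix per modality (RGB appearance, skeleton), normalize each, and blend them before ranking. This is a hedged sketch of score-level fusion in general, not the paper's exact pipeline; the function names and the blending weight `alpha` are illustrative assumptions.

```python
import numpy as np

def fuse_dissimilarity(d_rgb, d_skel, alpha=0.5):
    """Blend two query-gallery distance matrices after min-max
    normalisation, so neither modality's scale dominates the ranking."""
    def norm(d):
        return (d - d.min()) / (d.max() - d.min() + 1e-12)
    return alpha * norm(d_rgb) + (1.0 - alpha) * norm(d_skel)

def rank_gallery(fused):
    """For each query row, return gallery indices sorted by increasing
    fused dissimilarity (rank-1 match first)."""
    return np.argsort(fused, axis=1)
```

A low `alpha` leans on the skeleton cue, which is the regime the abstract motivates for poor illumination.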
3

Li, Hengyu, Hang Liu, Ning Cao, et al. "Real-time RGB-D image stitching using multiple Kinects for improved field of view." International Journal of Advanced Robotic Systems 14, no. 2 (2017): 172988141769556. http://dx.doi.org/10.1177/1729881417695560.

Full text of the source
Abstract:
This article concerns the problems of a defective depth map and limited field of view of Kinect-style RGB-D sensors. An anisotropic diffusion based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data calculated by registering color images can be used to stitch depth and color images into a depth and color panoramic image concurrently in real time. Experiments show that the
APA, Harvard, Vancouver, ISO, and other citation styles
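Aligning a depth map with a color image, as in the stitching pipeline described above, amounts to back-projecting depth pixels to 3D and reprojecting them through the color camera. A minimal pinhole-model sketch (no lens distortion, no occlusion handling; all names are illustrative, not taken from the paper):

```python
import numpy as np

def align_depth_to_color(depth, K_d, K_c, R, t):
    """Reproject a depth map (metres; 0 = invalid) from the depth
    camera's frame onto the color camera's image plane."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0

    # Back-project valid depth pixels to 3D points in the depth frame.
    pix = np.stack([u.ravel()[valid], v.ravel()[valid],
                    np.ones(int(valid.sum()))])
    pts = np.linalg.inv(K_d) @ pix * z[valid]

    # Rigidly transform into the color frame and project with K_c.
    pts_c = R @ pts + t[:, None]
    uc = np.round(pts_c[0] / pts_c[2] * K_c[0, 0] + K_c[0, 2]).astype(int)
    vc = np.round(pts_c[1] / pts_c[2] * K_c[1, 1] + K_c[1, 2]).astype(int)

    # Scatter the transformed depths into the color image grid.
    aligned = np.zeros_like(depth)
    inside = (uc >= 0) & (uc < W) & (vc >= 0) & (vc < H)
    aligned[vc[inside], uc[inside]] = pts_c[2, inside]
    return aligned
```

With identical intrinsics and an identity transform this is a no-op, which makes the geometry easy to sanity-check.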
4

Wu, Yan, Jiqian Li, and Jing Bai. "Multiple Classifiers-Based Feature Fusion for RGB-D Object Recognition." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 05 (2017): 1750014. http://dx.doi.org/10.1142/s0218001417500148.

Full text of the source
Abstract:
RGB-D-based object recognition has been enthusiastically investigated in the past few years. RGB and depth images provide useful and complementary information. Fusing RGB and depth features can significantly increase the accuracy of object recognition. However, previous works just simply take the depth image as the fourth channel of the RGB image and concatenate the RGB and depth features, ignoring the different power of RGB and depth information for different objects. In this paper, a new method which contains three different classifiers is proposed to fuse features extracted from RGB image a
APA, Harvard, Vancouver, ISO, and other citation styles
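The two baselines this abstract argues against are easy to state concretely: depth as a fourth image channel (early fusion) and simple feature concatenation (late fusion). A hedged sketch of both, with illustrative names:

```python
import numpy as np

def stack_rgbd(rgb, depth):
    """Early fusion: normalise depth to [0, 1] and append it as a
    fourth channel, giving an H x W x 4 'RGB-D' array."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return np.dstack([rgb, d])

def concat_features(f_rgb, f_depth):
    """Late fusion: concatenate per-modality feature vectors before
    feeding a single classifier."""
    return np.concatenate([f_rgb, f_depth])
```

Both treat the modalities with fixed, object-independent weight, which is precisely the limitation the paper's multi-classifier scheme targets.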
5

Kitzler, Florian, Norbert Barta, Reinhard W. Neugschwandtner, Andreas Gronauer, and Viktoria Motsch. "WE3DS: An RGB-D Image Dataset for Semantic Segmentation in Agriculture." Sensors 23, no. 5 (2023): 2713. http://dx.doi.org/10.3390/s23052713.

Full text of the source
Abstract:
Smart farming (SF) applications rely on robust and accurate computer vision systems. An important computer vision task in agriculture is semantic segmentation, which aims to classify each pixel of an image and can be used for selective weed removal. State-of-the-art implementations use convolutional neural networks (CNN) that are trained on large image datasets. In agriculture, publicly available RGB image datasets are scarce and often lack detailed ground-truth information. In contrast to agriculture, other research areas feature RGB-D datasets that combine color (RGB) with additional distanc
APA, Harvard, Vancouver, ISO, and other citation styles
6

Zheng, Huiming, and Wei Gao. "End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (2024): 7562–70. http://dx.doi.org/10.1609/aaai.v38i7.28588.

Full text of the source
Abstract:
As a kind of 3D data, RGB-D images have been extensively used in object tracking, 3D reconstruction, remote sensing mapping, and other tasks. In the realm of computer vision, the significance of RGB-D images is progressively growing. However, the existing learning-based image compression methods usually process RGB images and depth images separately, which cannot entirely exploit the redundant information between the modalities, limiting the further improvement of the Rate-Distortion performance. With the goal of overcoming the defect, in this paper, we propose a learning-based dual-branch RGB
APA, Harvard, Vancouver, ISO, and other citation styles
7

Peroš, Josip, Rinaldo Paar, Vladimir Divić, and Boštjan Kovačić. "Fusion of Laser Scans and Image Data—RGB+D for Structural Health Monitoring of Engineering Structures." Applied Sciences 12, no. 22 (2022): 11763. http://dx.doi.org/10.3390/app122211763.

Full text of the source
Abstract:
A novel method for structural health monitoring (SHM) by using RGB+D data has been recently proposed. RGB+D data are created by fusing image and laser scan data, where the D channel represents the distance, interpolated from laser scanner data. RGB channel represents image data obtained by an image sensor integrated in robotic total station (RTS) telescope, or on top of the telescope i.e., image assisted total station (IATS). Images can also be obtained by conventional cameras, or cameras integrated with RTS (different kind of prototypes). RGB+D image combines the advantages of the two measuri
APA, Harvard, Vancouver, ISO, and other citation styles
8

Yan, Zhiqiang, Hongyuan Wang, Qianhao Ning, and Yinxi Lu. "Robust Image Matching Based on Image Feature and Depth Information Fusion." Machines 10, no. 6 (2022): 456. http://dx.doi.org/10.3390/machines10060456.

Full text of the source
Abstract:
In this paper, we propose a robust image feature extraction and fusion method to effectively fuse image feature and depth information and improve the registration accuracy of RGB-D images. The proposed method directly splices the image feature point descriptors with the corresponding point cloud feature descriptors to obtain the fusion descriptor of the feature points. The fusion feature descriptor is constructed based on the SIFT, SURF, and ORB feature descriptors and the PFH and FPFH point cloud feature descriptors. Furthermore, the registration performance based on fusion features is tested
APA, Harvard, Vancouver, ISO, and other citation styles
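The "splicing" of image and point-cloud descriptors described above is, at its core, concatenation. A hedged sketch, assuming per-part L2 normalisation so that a 128-D SIFT part does not swamp a 33-D FPFH part (the weighting scheme is an assumption for illustration, not taken from the paper):

```python
import numpy as np

def fuse_descriptor(img_desc, cloud_desc, w=0.5):
    """Splice an image feature descriptor (e.g. 128-D SIFT) with the
    3D descriptor of the corresponding point (e.g. 33-D FPFH).
    Each part is L2-normalised, then weighted by w and (1 - w)."""
    a = np.asarray(img_desc, float)
    b = np.asarray(cloud_desc, float)
    a = a / (np.linalg.norm(a) + 1e-12)
    b = b / (np.linalg.norm(b) + 1e-12)
    return np.concatenate([w * a, (1.0 - w) * b])
```

Matching then proceeds with ordinary nearest-neighbour search on the fused vectors, exactly as with single-modality descriptors.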
9

Yuan, Yuan, Zhitong Xiong, and Qi Wang. "ACM: Adaptive Cross-Modal Graph Convolutional Neural Networks for RGB-D Scene Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9176–84. http://dx.doi.org/10.1609/aaai.v33i01.33019176.

Full text of the source
Abstract:
RGB image classification has achieved significant performance improvement with the resurge of deep convolutional neural networks. However, mono-modal deep models for RGB image still have several limitations when applied to RGB-D scene recognition. 1) Images for scene classification usually contain more than one typical object with flexible spatial distribution, so the object-level local features should also be considered in addition to global scene representation. 2) Multi-modal features in RGB-D scene classification are still under-utilized. Simply combining these modal-specific features suff
APA, Harvard, Vancouver, ISO, and other citation styles
10

Wang, Z., T. Li, L. Pan, and Z. Kang. "SCENE SEMANTIC SEGMENTATION FROM INDOOR RGB-D IMAGES USING ENCODE-DECODER FULLY CONVOLUTIONAL NETWORKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 397–404. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-397-2017.

Full text of the source
Abstract:
With increasing attention for the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. The depth information can help to distinguish the regions which are difficult to be segmented out from the RGB images with similar color or texture in the indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encode-Decoder Fully Convolutional Networks for RGB-D image clas
APA, Harvard, Vancouver, ISO, and other citation styles
More sources

Dissertations on the topic "RGB-D Image"

1

Murgia, Julian. "Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique." Thesis, Belfort-Montbéliard, 2016. http://www.theses.fr/2016BELF0289/document.

Full text of the source
Abstract:
This thesis is set in the context of video surveillance and focuses more specifically on the robust detection of moving objects in an image sequence. Good moving-object detection is an essential prerequisite for any processing applied to those objects in many applications, such as car or person tracking, passenger counting in public transport, detection of dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or the control of autonomous vehicles. A very large number of these
APA, Harvard, Vancouver, ISO, and other citation styles
2

Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.

Full text of the source
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; they avoid feature extraction and matching entirely. The goal is to produce accurate 3D pose and structure estimates. The cost functions presented minimize the sensor error, since the measurements are neither transformed nor modified. In photometric camera pose estimation, the 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear
APA, Harvard, Vancouver, ISO, and other citation styles
3

Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.

Full text of the source
Abstract:
In this thesis we explore the concepts and components which can be used as individual building blocks for producing immersive virtual reality (VR) content from a single RGB-D sensor. We identify the properties of immersive VR videos and propose a system composed of a foreground/background separator, a dynamic scene re-constructor and a shape completer. We initially explore the foreground/background separator component in the context of video summarization. More specifically, we examined how to extract trajectories of moving objects from video sequences captured with a static camera. We the
APA, Harvard, Vancouver, ISO, and other citation styles
4

Kadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.

Full text of the source
Abstract:
In this thesis, we address the problems of person detection and pose estimation in the operating room (OR), two key components for the development of surgical assistance applications. We perceive the room using RGB-D cameras, which provide complementary visual information about the scene. This information makes it possible to develop methods better suited to the specific difficulties of ORs, such as clutter, textureless surfaces, and occlusions. We present new approaches that take advantage of temporal information
APA, Harvard, Vancouver, ISO, and other citation styles
5

Meilland, Maxime. "Cartographie RGB-D dense pour la localisation visuelle temps-réel et la navigation autonome." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://tel.archives-ouvertes.fr/tel-00686803.

Full text of the source
Abstract:
In the context of autonomous navigation in urban environments, precise vehicle localization is important for safe and reliable navigation. Because existing low-cost sensors such as GPS lack precision, other sensors, themselves low-cost, must also be used. Cameras measure rich and precise photometric information about the environment, but require advanced processing algorithms to obtain information about the geometry and the position of the camera in the environment. This problem is known as
APA, Harvard, Vancouver, ISO, and other citation styles
6

Villota, Juan Carlos Perafán. "Adaptive registration using 2D and 3D features for indoor scene reconstruction." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-17042017-090901/.

Full text of the source
Abstract:
Pairwise alignment between point clouds is an important task in building 3D maps of indoor environments with partial information. The combination of 2D local features with depth information provided by RGB-D cameras are often used to improve such alignment. However, under varying lighting or low visual texture, indoor pairwise frame registration with sparse 2D local features is not a particularly robust method. In these conditions, features are hard to detect, thus leading to misalignment between consecutive pairs of frames. The use of 3D local features can be a solution as such features come
APA, Harvard, Vancouver, ISO, and other citation styles
7

Shi, Yangyu. "Infrared Imaging Decision Aid Tools for Diagnosis of Necrotizing Enterocolitis." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40714.

Full text of the source
Abstract:
Neonatal necrotizing enterocolitis (NEC) is one of the most severe digestive tract emergencies in neonates, involving bowel edema, hemorrhage, and necrosis, and can lead to serious complications including death. Since it is difficult to diagnose early, the morbidity and mortality rates are high due to severe complications in later stages of NEC and thus early detection is key to the treatment of NEC. In this thesis, a novel automatic image acquisition and analysis system combining a color and depth (RGB-D) sensor with an infrared (IR) camera is proposed for NEC diagnosis. A design for sensors
APA, Harvard, Vancouver, ISO, and other citation styles
8

Baban, A. Erep Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.

Full text of the source
Abstract:
Malnutrition, whether it stems from insufficient or excessive nutrient intake, is a global public health challenge affecting billions of people. It affects all organ systems and is a major risk factor for non-communicable diseases such as cardiovascular disease, diabetes, and certain cancers. Assessing dietary intake is crucial for preventing malnutrition, but it remains a challenge. Traditional dietary assessment methods are laborious and prone to bias. Advances in AI have made it possible to design VBD
APA, Harvard, Vancouver, ISO, and other citation styles
9

Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.

Full text of the source
Abstract:
Access to 3D image sequences has now become commonplace, thanks to recent advances in the development of depth sensors and in methods for deriving 3D information from 2D images. The computer vision community therefore has high expectations for the integration of 3D information. Indeed, research has shown that the performance of some applications can be improved by integrating 3D information. However, problems remain to be solved for the analysis and segmentation
APA, Harvard, Vancouver, ISO, and other citation styles
10

Řehánek, Martin. "Detekce objektů pomocí Kinectu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236602.

Full text of the source
Abstract:
With the release of the Kinect device new possibilities appeared, allowing a simple use of image depth in image processing. The aim of this thesis is to propose a method for object detection and recognition in a depth map. Well known method Bag of Words and a descriptor based on Spin Image method are used for the object recognition. The Spin Image method is one of several existing approaches to depth map which are described in this thesis. Detection of object in picture is ensured by the sliding window technique. That is improved and speeded up by utilization of the depth information.
APA, Harvard, Vancouver, ISO, and other citation styles
More sources

Books on the topic "RGB-D Image"

1

Rosin, Paul L., Yu-Kun Lai, Ling Shao, and Yonghuai Liu, eds. RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Rosin, Paul L., Yonghuai Liu, Ling Shao, and Yu-Kun Lai. RGB-D Image Analysis and Processing. Springer International Publishing AG, 2020.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

RGB-D Image Analysis and Processing. Springer, 2019.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2016.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "RGB-D Image"

1

Civera, Javier, and Seong Hun Lee. "RGB-D Odometry and SLAM." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Zollhöfer, Michael. "Commodity RGB-D Sensors: Data Acquisition." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_1.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Malleson, Charles, Jean-Yves Guillemaut, and Adrian Hilton. "3D Reconstruction from RGB-D Data." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Ren, Tongwei, and Ao Zhang. "RGB-D Salient Object Detection: A Review." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_9.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Cong, Runmin, Hao Chen, Hongyuan Zhu, and Huazhu Fu. "Foreground Detection and Segmentation in RGB-D Images." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_10.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Sahin, Caner, Guillermo Garcia-Hernando, Juil Sock, and Tae-Kyun Kim. "Instance- and Category-Level 6D Object Pose Estimation." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_11.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Zhang, Song-Hai, and Yu-Kun Lai. "Geometric and Semantic Modeling from RGB-D Data." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_12.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Schwarz, Max, and Sven Behnke. "Semantic RGB-D Perception for Cognitive Service Robots." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_13.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Spinsante, Susanna. "RGB-D Sensors and Signal Processing for Fall Detection." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_14.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Moyà-Alcover, Gabriel, Ines Ayed, Javier Varona, and Antoni Jaume-i-Capó. "RGB-D Interactive Systems on Serious Games for Motor Rehabilitation Therapy and Therapeutic Measurements." In RGB-D Image Analysis and Processing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_15.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "RGB-D Image"

1

Teng, Qianqian, and Xianbo He. "RGB-D Image Modeling Method Based on Transformer: RDT." In 2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC). IEEE, 2024. http://dx.doi.org/10.1109/aiotc63215.2024.10748282.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Wang, Kexuan, Chenhua Liu, Huiguang Wei, Li Jing, and Rongfu Zhang. "RFNET: Refined Fusion Three-Branch RGB-D Salient Object Detection Network." In 2024 IEEE International Conference on Image Processing (ICIP). IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647308.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Fouad, Islam I., Sherine Rady, and Mostafa G. M. Mostafa. "Efficient image segmentation of RGB-D images." In 2017 12th International Conference on Computer Engineering and Systems (ICCES). IEEE, 2017. http://dx.doi.org/10.1109/icces.2017.8275331.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Li, Shijie, Rong Li, and Juergen Gall. "Semantic RGB-D Image Synthesis." In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00101.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Zhang, Xiaoxiong, Sajid Javed, Ahmad Obeid, Jorge Dias, and Naoufel Werghi. "Gender Recognition on RGB-D Image." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191068.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Zhang, Shaopeng, Ming Zhong, Gang Zeng, and Rui Gan. "Joining geometric and RGB features for RGB-D semantic segmentation." In The Second International Conference on Image, Video Processing and Artificial Intelligence, edited by Ruidan Su. SPIE, 2019. http://dx.doi.org/10.1117/12.2541645.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Li, Benchao, Wanhua Li, Yongyi Tang, Jian-Fang Hu, and Wei-Shi Zheng. "GL-PAM RGB-D Gesture Recognition." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451157.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Shibata, Toshihiro, Yuji Akai, and Ryo Matsuoka. "Reflection Removal Using RGB-D Images." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451639.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Valognes, Julien, Maria A. Amer, and Niloufar Salehi Dastjerdi. "Effective keyframe extraction from RGB and RGB-D video sequences." In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2017. http://dx.doi.org/10.1109/ipta.2017.8310120.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Hui, Tak-Wai, and King Ngi Ngan. "Depth enhancement using RGB-D guided filtering." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025778.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
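Guided filtering of the kind named in this last entry smooths a depth map while borrowing edges from the registered color image. A self-contained numpy sketch of the classic guided filter (He et al.), offered as an illustration of the general technique rather than the paper's exact method; function names and parameters are illustrative.

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1) x (2r+1) window via integral images,
    with edge replication at the borders."""
    pad = np.pad(img.astype(float), r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    H, W = img.shape
    s = (c[2*r+1:, 2*r+1:] - c[:H, 2*r+1:]
         - c[2*r+1:, :W] + c[:H, :W])
    return s / (2 * r + 1) ** 2

def guided_filter(guide, depth, r=4, eps=1e-3):
    """Filter `depth` using a grayscale `guide` in [0, 1]: the output is
    a locally linear function of the guide, so guide edges survive."""
    I, p = guide.astype(float), depth.astype(float)
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # per-window linear slope
    b = mean_p - a * mean_I             # per-window offset
    return box(a, r) * I + box(b, r)    # average overlapping windows
```

Larger `eps` behaves more like plain smoothing; smaller `eps` transfers guide edges more aggressively into the depth map.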