Table of Contents
A selection of scientific literature on the topic "RGB-D Image"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "RGB-D Image".
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.
Journal articles on the topic "RGB-D Image"
Uddin, Md Kamal, Amran Bhuiyan, and Mahmudul Hasan. "Fusion in Dissimilarity Space Between RGB-D and Skeleton for Person Re-Identification". International Journal of Innovative Technology and Exploring Engineering 10, no. 12 (October 30, 2021): 69–75. http://dx.doi.org/10.35940/ijitee.l9566.10101221.
Li, Hengyu, Hang Liu, Ning Cao, Yan Peng, Shaorong Xie, Jun Luo, and Yu Sun. "Real-time RGB-D image stitching using multiple Kinects for improved field of view". International Journal of Advanced Robotic Systems 14, no. 2 (March 1, 2017): 172988141769556. http://dx.doi.org/10.1177/1729881417695560.
Wu, Yan, Jiqian Li, and Jing Bai. "Multiple Classifiers-Based Feature Fusion for RGB-D Object Recognition". International Journal of Pattern Recognition and Artificial Intelligence 31, no. 05 (February 27, 2017): 1750014. http://dx.doi.org/10.1142/s0218001417500148.
Kitzler, Florian, Norbert Barta, Reinhard W. Neugschwandtner, Andreas Gronauer, and Viktoria Motsch. "WE3DS: An RGB-D Image Dataset for Semantic Segmentation in Agriculture". Sensors 23, no. 5 (March 1, 2023): 2713. http://dx.doi.org/10.3390/s23052713.
Zheng, Huiming, and Wei Gao. "End-to-End RGB-D Image Compression via Exploiting Channel-Modality Redundancy". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 7 (March 24, 2024): 7562–70. http://dx.doi.org/10.1609/aaai.v38i7.28588.
Peroš, Josip, Rinaldo Paar, Vladimir Divić, and Boštjan Kovačić. "Fusion of Laser Scans and Image Data—RGB+D for Structural Health Monitoring of Engineering Structures". Applied Sciences 12, no. 22 (November 19, 2022): 11763. http://dx.doi.org/10.3390/app122211763.
Yan, Zhiqiang, Hongyuan Wang, Qianhao Ning, and Yinxi Lu. "Robust Image Matching Based on Image Feature and Depth Information Fusion". Machines 10, no. 6 (June 8, 2022): 456. http://dx.doi.org/10.3390/machines10060456.
Yuan, Yuan, Zhitong Xiong, and Qi Wang. "ACM: Adaptive Cross-Modal Graph Convolutional Neural Networks for RGB-D Scene Recognition". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9176–84. http://dx.doi.org/10.1609/aaai.v33i01.33019176.
Wang, Z., T. Li, L. Pan, and Z. Kang. "Scene Semantic Segmentation from Indoor RGB-D Images Using Encode-Decoder Fully Convolutional Networks". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W7 (September 12, 2017): 397–404. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w7-397-2017.
Kanda, Takuya, Kazuya Miyakawa, Jeonghwang Hayashi, Jun Ohya, Hiroyuki Ogata, Kenji Hashimoto, Xiao Sun, Takashi Matsuzawa, Hiroshi Naito, and Atsuo Takanishi. "Locating Mechanical Switches Using RGB-D Sensor Mounted on a Disaster Response Robot". Electronic Imaging 2020, no. 6 (January 26, 2020): 16–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.6.iriacv-016.
Der volle Inhalt der QuelleDissertationen zum Thema "RGB-D Image"
Murgia, Julian. "Segmentation d'objets mobiles par fusion RGB-D et invariance colorimétrique". Thesis, Belfort-Montbéliard, 2016. http://www.theses.fr/2016BELF0289/document.
This PhD thesis falls within the scope of video surveillance, and more precisely focuses on the detection of moving objects in image sequences. In many applications, good detection of moving objects is an indispensable prerequisite for any processing applied to these objects, such as tracking people or cars, counting passengers, detecting dangerous situations in specific environments (level crossings, pedestrian crossings, intersections, etc.), or controlling autonomous vehicles. The reliability of computer-vision-based systems requires robustness against difficult conditions, often caused by lighting (day/night, shadows), weather (rain, wind, snow...), and the topology of the observed scene (occlusion...). The work detailed in this PhD thesis aims at reducing the impact of illumination conditions by improving the quality of the detection of mobile objects in indoor or outdoor environments, at any time of day. To that end, we propose three strategies working in combination to improve the detection of moving objects:
i) using colorimetric invariants and/or color spaces that provide invariant properties;
ii) using a passive stereoscopic camera (in outdoor environments) and the Microsoft Kinect active camera (in indoor environments) to partially reconstruct the 3D environment, providing an additional dimension (depth information) to the background/foreground subtraction algorithm;
iii) a new fusion algorithm based on fuzzy logic to combine color and depth information with a certain level of uncertainty for pixel classification.
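As a rough illustration of strategy iii), the sketch below fuses per-pixel color and depth differences into a fuzzy foreground membership before thresholding. It is a toy under stated assumptions (linear membership functions, a fixed depth weight, zero depth treated as a missing measurement), not the fusion algorithm developed in the thesis.

```python
# Minimal sketch of fuzzy color/depth fusion for background subtraction.
# Membership functions, scales, and the aggregation rule are assumptions
# made for illustration only.
import numpy as np

def foreground_membership(frame, background, scale):
    """Map per-pixel absolute difference to a fuzzy membership in [0, 1]."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return np.clip(diff / scale, 0.0, 1.0)

def fuse_color_depth(color, color_bg, depth, depth_bg,
                     color_scale=40.0, depth_scale=200.0, w_depth=0.6):
    # Fuzzy memberships from each modality.
    mu_color = foreground_membership(color, color_bg, color_scale).mean(axis=-1)
    mu_depth = foreground_membership(depth, depth_bg, depth_scale)
    # Depth sensors return 0 where measurement failed; trust color there.
    valid = (depth > 0) & (depth_bg > 0)
    fused = np.where(valid,
                     w_depth * mu_depth + (1.0 - w_depth) * mu_color,
                     mu_color)
    return fused > 0.5  # defuzzify into a binary foreground mask

# Usage with synthetic data: one bright, near object on a flat background.
color_bg = np.full((120, 160, 3), 100, np.uint8)
depth_bg = np.full((120, 160), 3000, np.uint16)        # millimeters
color = color_bg.copy(); color[40:80, 60:100] = 220
depth = depth_bg.copy(); depth[40:80, 60:100] = 1200
mask = fuse_color_depth(color, color_bg, depth, depth_bg)
print(mask.sum(), "foreground pixels")                 # ~40*40 = 1600
```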
Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement". PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.
Lai, Po Kong. "Immersive Dynamic Scenes for Virtual Reality from a Single RGB-D Camera". Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39663.
Kadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images". Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with the numerous challenges present in the OR, e.g. clutter, textureless surfaces, and occlusions. We present novel part-based approaches that take advantage of depth, multi-view, and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
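One building block that multi-view RGB-D reasoning of this kind relies on is transferring a detection between calibrated cameras via depth. The minimal sketch below backprojects a detected pixel into 3D and reprojects it into a second view; the intrinsics and the camera-to-camera transform are invented for the example and the thesis's actual part-based models are far richer.

```python
# Sketch: use depth to move a 2D detection from camera A to camera B.
# K, R_ab, and t_ab are assumed example values, not calibration from the thesis.
import numpy as np

K = np.array([[525.0, 0.0, 320.0],   # pinhole intrinsics: fx, fy, cx, cy
              [0.0, 525.0, 240.0],
              [0.0,   0.0,   1.0]])

def backproject(u, v, depth_m, K):
    """Lift a pixel with metric depth into a 3D camera-frame point."""
    x = (u - K[0, 2]) * depth_m / K[0, 0]
    y = (v - K[1, 2]) * depth_m / K[1, 1]
    return np.array([x, y, depth_m])

def project(p_cam, K):
    """Project a 3D camera-frame point back onto the image plane."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Assumed rigid transform from camera A to camera B: 1 m baseline along x.
R_ab = np.eye(3)
t_ab = np.array([-1.0, 0.0, 0.0])

p_a = backproject(400, 260, 2.5, K)   # detection at (400, 260), 2.5 m away
p_b = R_ab @ p_a + t_ab               # same point in camera B's frame
print(project(p_b, K))                # where camera B should see the person
```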
Meilland, Maxime. "Cartographie RGB-D dense pour la localisation visuelle temps-réel et la navigation autonome". PhD thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://tel.archives-ouvertes.fr/tel-00686803.
Villota, Juan Carlos Perafán. "Adaptive registration using 2D and 3D features for indoor scene reconstruction". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-17042017-090901/.
Der volle Inhalt der QuelleO alinhamento entre pares de nuvens de pontos é uma tarefa importante na construção de mapas de ambientes em 3D. A combinação de características locais 2D com informação de profundidade fornecida por câmeras RGB-D são frequentemente utilizadas para melhorar tais alinhamentos. No entanto, em ambientes internos com baixa iluminação ou pouca textura visual o método usando somente características locais 2D não é particularmente robusto. Nessas condições, as características 2D são difíceis de serem detectadas, conduzindo a um desalinhamento entre pares de quadros consecutivos. A utilização de características 3D locais pode ser uma solução uma vez que tais características são extraídas diretamente de pontos 3D e são resistentes a variações na textura visual e na iluminação. Como situações de variações em cenas reais em ambientes internos são inevitáveis, essa tese apresenta um novo sistema desenvolvido com o objetivo de melhorar o alinhamento entre pares de quadros usando uma combinação adaptativa de características esparsas 2D e 3D. Tal combinação está baseada nos níveis de estrutura geométrica e de textura visual contidos em cada cena. Esse sistema foi testado com conjuntos de dados RGB-D, incluindo vídeos com movimentos irrestritos da câmera e mudanças naturais na iluminação. Os resultados experimentais mostram que a nossa proposta supera aqueles métodos que usam características 2D ou 3D separadamente, obtendo uma melhora da precisão no alinhamento de cenas em ambientes internos reais.
Shi, Yangyu. "Infrared Imaging Decision Aid Tools for Diagnosis of Necrotizing Enterocolitis". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40714.
Baban, a. erep Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.
Der volle Inhalt der QuelleMalnutrition, including under- and overnutrition, is a global health challenge affecting billions of people. It impacts all organ systems and is a significant risk factor for noncommunicable diseases such as cardiovascular diseases, diabetes, and some cancers. Assessing food intake is crucial for preventing malnutrition but remains challenging. Traditional methods for dietary assessment are labor-intensive and prone to bias. Advancements in AI have made Vision-Based Dietary Assessment (VBDA) a promising solution for automatically analyzing food images to estimate portions and nutrition. However, food image segmentation in VBDA faces challenges due to food's non-rigid structure, high intra-class variation (where the same dish can look very different), inter-class resemblance (where different foods appear similar) and scarcity of publicly available datasets.Almost all food segmentation research has focused on Asian and Western foods, with no datasets for African cuisines. However, African dishes often involve mixed food classes, making accurate segmentation challenging. Additionally, research has largely focus on RGB images, which provides color and texture but may lack geometric detail. To address this, RGB-D segmentation combines depth data with RGB images. Depth images provide crucial geometric details that enhance RGB data, improve object discrimination, and are robust to factors like illumination and fog. Despite its success in other fields, RGB-D segmentation for food is underexplored due to difficulties in collecting food depth images.This thesis makes key contributions by developing new deep learning models for RGB (mid-DeepLabv3+) and RGB-D (ESeNet-D) image segmentation and introducing the first food segmentation datasets focused on African food images. Mid-DeepLabv3+ is based on DeepLabv3+, featuring a simplified ResNet backbone with and added skip layer (middle layer) in the decoder and SimAM attention mechanism. This model offers an optimal balance between performance and efficiency, matching DeepLabv3+'s performance while cutting computational load by half. ESeNet-D consists on two encoder branches using EfficientNetV2 as backbone, with a fusion block for multi-scale integration and a decoder employing self-calibrated convolution and learned interpolation for precise segmentation. ESeNet-D outperforms many RGB and RGB-D benchmark models while having fewer parameters and FLOPs. Our experiments show that, when properly integrated, depth information can significantly improve food segmentation accuracy. We also present two new datasets: AfricaFoodSeg for “food/non-food” segmentation with 3,067 images (2,525 for training, 542 for validation), and CamerFood focusing on Cameroonian cuisine. CamerFood datasets include CamerFood10 with 1,422 images from ten food classes, and CamerFood15, an enhanced version with 15 food classes, 1,684 training images, and 514 validation images. Finally, we address the challenge of scarce depth data in RGB-D food segmentation by demonstrating that Monocular Depth Estimation (MDE) models can aid in generating effective depth maps for RGB-D datasets
Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.
Der volle Inhalt der QuelleAccess to the 3D images at a reasonable frame rate is widespread now, thanks to the recent advances in low cost depth sensors as well as the efficient methods to compute 3D from 2D images. As a consequence, it is highly demanding to enhance the capability of existing computer vision applications by incorporating 3D information. Indeed, it has been demonstrated in numerous researches that the accuracy of different tasks increases by including 3D information as an additional feature. However, for the task of indoor scene analysis and segmentation, it remains several important issues, such as: (a) how the 3D information itself can be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from 3D image and cluster them with methods based on finite statistical mixture models. We consider Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel Model Based Clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application for 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color image, also called RGB-D image. To this aim, first we propose a statistical image generation model for RGB-D image. Then, we propose novel RGB-D segmentation method using a joint color-spatial-axial clustering and a statistical planar region merging method. Results show that, the proposed method is comparable with the state of the art methods and requires less computation time. Moreover, it opens interesting perspectives to fuse color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable for clustering different types of data, such as speech, gene expressions, etc. Moreover, they can be used for complex tasks, such as joint image-speech data analysis
Řehánek, Martin. "Detekce objektů pomocí Kinectu" [Object Detection Using the Kinect]. Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236602.
Books on the topic "RGB-D Image"
Rosin, Paul L., Yu-Kun Lai, Ling Shao, and Yonghuai Liu, eds. RGB-D Image Analysis and Processing. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3.
Rosin, Paul L., Yonghuai Liu, Ling Shao, and Yu-Kun Lai. RGB-D Image Analysis and Processing. Springer International Publishing AG, 2020.
RGB-D Image Analysis and Processing. Springer, 2019.
Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.
Kohli, Pushmeet, Zhengyou Zhang, Ling Shao, and Jungong Han. Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2016.
Computer Vision and Machine Learning with RGB-D Sensors. Springer, 2014.
Den vollen Inhalt der Quelle findenHester, Desirae. Picture Book of dσdge Chαrgєrs: An Album Consist of Compelling Photos of dσdge Chαrgєrs with High Quality Images As a Special Gift for Friends, Family, Lovers, Relative. Independently Published, 2022.
Book chapters on the topic "RGB-D Image"
Civera, Javier, and Seong Hun Lee. "RGB-D Odometry and SLAM". In RGB-D Image Analysis and Processing, 117–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_6.
Zollhöfer, Michael. "Commodity RGB-D Sensors: Data Acquisition". In RGB-D Image Analysis and Processing, 3–13. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_1.
Malleson, Charles, Jean-Yves Guillemaut, and Adrian Hilton. "3D Reconstruction from RGB-D Data". In RGB-D Image Analysis and Processing, 87–115. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_5.
Ren, Tongwei, and Ao Zhang. "RGB-D Salient Object Detection: A Review". In RGB-D Image Analysis and Processing, 203–20. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_9.
Cong, Runmin, Hao Chen, Hongyuan Zhu, and Huazhu Fu. "Foreground Detection and Segmentation in RGB-D Images". In RGB-D Image Analysis and Processing, 221–41. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_10.
Sahin, Caner, Guillermo Garcia-Hernando, Juil Sock, and Tae-Kyun Kim. "Instance- and Category-Level 6D Object Pose Estimation". In RGB-D Image Analysis and Processing, 243–65. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_11.
Zhang, Song-Hai, and Yu-Kun Lai. "Geometric and Semantic Modeling from RGB-D Data". In RGB-D Image Analysis and Processing, 267–82. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_12.
Schwarz, Max, and Sven Behnke. "Semantic RGB-D Perception for Cognitive Service Robots". In RGB-D Image Analysis and Processing, 285–307. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_13.
Spinsante, Susanna. "RGB-D Sensors and Signal Processing for Fall Detection". In RGB-D Image Analysis and Processing, 309–34. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_14.
Moyà-Alcover, Gabriel, Ines Ayed, Javier Varona, and Antoni Jaume-i-Capó. "RGB-D Interactive Systems on Serious Games for Motor Rehabilitation Therapy and Therapeutic Measurements". In RGB-D Image Analysis and Processing, 335–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-28603-3_15.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "RGB-D Image"
Teng, Qianqian, and Xianbo He. "RGB-D Image Modeling Method Based on Transformer: RDT". In 2024 3rd International Conference on Artificial Intelligence, Internet of Things and Cloud Computing Technology (AIoTC), 386–89. IEEE, 2024. http://dx.doi.org/10.1109/aiotc63215.2024.10748282.
Wang, Kexuan, Chenhua Liu, Huiguang Wei, Li Jing, and Rongfu Zhang. "RFNET: Refined Fusion Three-Branch RGB-D Salient Object Detection Network". In 2024 IEEE International Conference on Image Processing (ICIP), 741–46. IEEE, 2024. http://dx.doi.org/10.1109/icip51287.2024.10647308.
Fouad, Islam I., Sherine Rady, and Mostafa G. M. Mostafa. "Efficient image segmentation of RGB-D images". In 2017 12th International Conference on Computer Engineering and Systems (ICCES). IEEE, 2017. http://dx.doi.org/10.1109/icces.2017.8275331.
Li, Shijie, Rong Li, and Juergen Gall. "Semantic RGB-D Image Synthesis". In 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. http://dx.doi.org/10.1109/iccvw60793.2023.00101.
Zhang, Xiaoxiong, Sajid Javed, Ahmad Obeid, Jorge Dias, and Naoufel Werghi. "Gender Recognition on RGB-D Image". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191068.
Zhang, Shaopeng, Ming Zhong, Gang Zeng, and Rui Gan. "Joining geometric and RGB features for RGB-D semantic segmentation". In The Second International Conference on Image, Video Processing and Artificial Intelligence, edited by Ruidan Su. SPIE, 2019. http://dx.doi.org/10.1117/12.2541645.
Li, Benchao, Wanhua Li, Yongyi Tang, Jian-Fang Hu, and Wei-Shi Zheng. "GL-PAM RGB-D Gesture Recognition". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451157.
Shibata, Toshihiro, Yuji Akai, and Ryo Matsuoka. "Reflection Removal Using RGB-D Images". In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451639.
Valognes, Julien, Maria A. Amer, and Niloufar Salehi Dastjerdi. "Effective keyframe extraction from RGB and RGB-D video sequences". In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2017. http://dx.doi.org/10.1109/ipta.2017.8310120.
Hui, Tak-Wai, and King Ngi Ngan. "Depth enhancement using RGB-D guided filtering". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025778.