Dissertations / Theses on the topic 'RGB-Depth Image'
Consult the top 16 dissertations / theses for your research on the topic 'RGB-Depth Image.'
Deng, Zhuo. "RGB-DEPTH IMAGE SEGMENTATION AND OBJECT RECOGNITION FOR INDOOR SCENES." Diss., Temple University Libraries, 2016. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/427631.
Ph.D.
With the advent of the Microsoft Kinect, the landscape of many vision-related tasks has changed. First, using an active infrared structured-light sensor, the Kinect directly provides depth information that is hard to infer from traditional RGB images. Second, RGB and depth information are generated synchronously and can be easily aligned, which makes their direct integration possible. In this thesis, I propose several algorithms and systems that focus on how to integrate depth information with traditional visual appearance to address different computer vision applications, covering both low-level (image segmentation, class-agnostic object proposals) and high-level (object detection, semantic segmentation) tasks.

To first understand whether and how depth information helps improve computer vision performance, I start with image segmentation, a fundamental problem that has been studied extensively in natural color images. We propose an unsupervised segmentation algorithm that is carefully crafted to balance the contributions of color and depth features in RGB-D images. The segmentation problem is formulated as a Maximum Weight Independent Set (MWIS) problem: given superpixels obtained from different layers of a hierarchical segmentation, the saliency of each superpixel is estimated from a balanced combination of features originating from depth, gray-level intensity, and texture. We evaluate segmentation quality with five standard measures on the commonly used NYU-v2 RGB-Depth dataset. A surprising message from the experiments is that unsupervised segmentation of RGB-D images yields results comparable to supervised segmentation.

In image segmentation, an image is partitioned into several groups of pixels (or superpixels). We take one step further and investigate the problem of assigning class labels to every pixel, i.e., semantic scene segmentation. We propose a novel image region labeling method that augments the CRF formulation with hard mutual-exclusion (mutex) constraints, so our approach can use the rich and accurate 3D geometric structure coming from the Kinect in a principled manner. The final labeling must satisfy all mutex constraints, which allows us to eliminate configurations that violate common-sense physics, such as placing a floor above a nightstand. Three classes of mutex constraints are proposed: a global object co-occurrence constraint, a relative height relationship constraint, and a local support relationship constraint.

Segments obtained from image segmentation can be either too fine or too coarse. A full object region not only conveys global features but also arguably enriches contextual features, since confusing background is separated out. We propose a novel unsupervised framework for automatically generating bottom-up, class-independent object candidates for detection and recognition in cluttered indoor environments. Utilizing the raw depth map, we propose a novel plane segmentation algorithm that divides an indoor scene into predominant planar regions and non-planar regions. Based on this partition, we can effectively predict object locations and their spatial extents. Our approach generates object proposals from five sources: Non-planar Regions (NPR), Planar Regions (PR), Detected Planes (DP), Merged Detected Planes (MDP), and Hierarchical Clustering (HC) of 3D point clouds.
Object region proposals include both bounding boxes and instance segments. Although 2D computer vision methods can roughly identify where objects lie on the image plane, their true locations and poses in the physical 3D world are difficult to determine due to factors such as occlusion and the uncertainty arising from perspective projection. Yet it is natural for humans to understand, from a still image, how far objects are from the viewer, what their poses are, and what their full extents are. Such capabilities are extremely desirable for applications like robot navigation, grasp estimation, and Augmented Reality (AR). To fill this gap, we address the problem of amodal 3D object detection: the task is not only to localize objects in the 3D world, but also to estimate their physical sizes and poses, even if only parts of them are visible in the RGB-D image. Recent approaches have harnessed the point cloud from the depth channel to exploit features directly in 3D space and have demonstrated superiority over traditional 2D representations. We revisit amodal 3D detection while staying within the 2D representation framework, directly relating 2D visual appearance to 3D objects, and propose a novel 3D object detection system that simultaneously predicts objects' 3D locations, physical sizes, and orientations in indoor scenes.
Temple University--Theses
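The MWIS selection step in the abstract above can be illustrated with a toy sketch (not the dissertation's exact algorithm): superpixels drawn from different hierarchy layers conflict when they overlap, and the most salient non-conflicting ones are kept greedily. All names and values below are illustrative.

```python
# Greedy approximation of a Maximum Weight Independent Set over superpixels:
# keep the most salient superpixel, discard everything overlapping it, repeat.
import numpy as np

def greedy_mwis(saliency, overlaps):
    """saliency: (n,) weights; overlaps: set of (i, j) conflicting index pairs."""
    conflict = {i: set() for i in range(len(saliency))}
    for i, j in overlaps:
        conflict[i].add(j)
        conflict[j].add(i)
    chosen, banned = [], set()
    for i in np.argsort(saliency)[::-1]:    # visit superpixels by weight, descending
        if int(i) not in banned:
            chosen.append(int(i))
            banned |= conflict[int(i)]      # exclude all overlapping superpixels
    return chosen

# toy example: superpixel 0 overlaps 1, and 2 overlaps 3
print(greedy_mwis(np.array([0.9, 0.5, 0.4, 0.8]), {(0, 1), (2, 3)}))  # [0, 3]
```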
Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods for computing 3D from 2D images. Consequently, there is strong demand for enhancing existing computer vision applications by incorporating 3D information; numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. For the task of indoor scene analysis and segmentation, however, several important issues remain: (a) how can the 3D information itself be exploited, and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we take image normals as the prominent feature of a 3D image and cluster them with methods based on finite statistical mixture models. We adopt the Bregman Soft Clustering method to ensure computationally efficient clustering, and we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. Combining these, we propose novel model-based clustering methods, which we empirically validate on synthetic data before demonstrating their application to 3D/depth image analysis. We then extend these methods to segment synchronized 3D and color images, also called RGB-D images: we first propose a statistical image generation model for RGB-D images, and then a novel RGB-D segmentation method using joint color-spatial-axial clustering followed by a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods while requiring less computation time, and it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech and gene expressions, and can be used for complex tasks such as joint image-speech data analysis.
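As a rough illustration of clustering unit surface normals on the sphere, here is spherical k-means, the hard-assignment limit of the von Mises-Fisher mixtures used in the thesis (the actual method performs Bregman Soft Clustering over full mixture densities, which this sketch does not reproduce; the `normals` input is assumed to be precomputed unit vectors):

```python
# A minimal sketch: cluster precomputed unit normals by cosine similarity.
import numpy as np

def spherical_kmeans(normals, k, iters=50, seed=0):
    """normals: (n, 3) float array of unit vectors; returns labels and
    unit-norm cluster directions (the vMF mean directions in the hard limit)."""
    rng = np.random.default_rng(seed)
    mu = normals[rng.choice(len(normals), size=k, replace=False)].copy()
    labels = np.zeros(len(normals), dtype=int)
    for _ in range(iters):
        labels = np.argmax(normals @ mu.T, axis=1)   # assign by max cosine
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                m = members.sum(axis=0)
                n = np.linalg.norm(m)
                if n > 0:
                    mu[j] = m / n                    # renormalised mean direction
    return labels, mu
```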
Baban, a. erep Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.
Malnutrition, including under- and overnutrition, is a global health challenge affecting billions of people. It impacts all organ systems and is a significant risk factor for noncommunicable diseases such as cardiovascular disease, diabetes, and some cancers. Assessing food intake is crucial for preventing malnutrition but remains challenging: traditional methods for dietary assessment are labor-intensive and prone to bias. Advances in AI have made Vision-Based Dietary Assessment (VBDA) a promising solution for automatically analyzing food images to estimate portions and nutrition. However, food image segmentation in VBDA faces challenges due to food's non-rigid structure, high intra-class variation (the same dish can look very different), inter-class resemblance (different foods can appear similar), and the scarcity of publicly available datasets.

Almost all food segmentation research has focused on Asian and Western foods, with no datasets for African cuisines, whose dishes often involve mixed food classes that make accurate segmentation challenging. Additionally, research has largely focused on RGB images, which provide color and texture but may lack geometric detail. RGB-D segmentation addresses this by combining depth data with RGB images: depth images provide crucial geometric details that complement RGB data, improve object discrimination, and are robust to factors like illumination and fog. Despite its success in other fields, RGB-D segmentation for food is underexplored because food depth images are difficult to collect.

This thesis makes key contributions by developing new deep learning models for RGB (mid-DeepLabv3+) and RGB-D (ESeNet-D) image segmentation, and by introducing the first food segmentation datasets focused on African food images. Mid-DeepLabv3+ is based on DeepLabv3+, featuring a simplified ResNet backbone with an added skip layer (middle layer) in the decoder and the SimAM attention mechanism; it offers an optimal balance between performance and efficiency, matching DeepLabv3+'s performance while halving the computational load. ESeNet-D consists of two encoder branches using EfficientNetV2 as the backbone, with a fusion block for multi-scale integration and a decoder employing self-calibrated convolution and learned interpolation for precise segmentation; it outperforms many RGB and RGB-D benchmark models while having fewer parameters and FLOPs. Our experiments show that, when properly integrated, depth information can significantly improve food segmentation accuracy. We also present two new datasets: AfricaFoodSeg for “food/non-food” segmentation, with 3,067 images (2,525 for training, 542 for validation), and CamerFood, focusing on Cameroonian cuisine; the latter includes CamerFood10, with 1,422 images across ten food classes, and CamerFood15, an enhanced version with 15 food classes, 1,684 training images, and 514 validation images. Finally, we address the scarcity of depth data in RGB-D food segmentation by demonstrating that Monocular Depth Estimation (MDE) models can generate effective depth maps for RGB-D datasets.
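For illustration only (this is not ESeNet-D, and the layer sizes are made up), a minimal PyTorch sketch of the general two-branch RGB-D design, with separate color and depth encoders whose features are fused before a dense prediction head:

```python
# A minimal two-branch RGB-D segmentation network: depth and color are
# encoded separately, concatenated, fused, and decoded to per-pixel logits.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class TwoBranchRGBDSeg(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.rgb_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.fuse = nn.Conv2d(128, 64, 1)          # channel-wise fusion
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel class scores

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)
        logits = self.head(self.fuse(f))
        # upsample back to input resolution for dense prediction
        return nn.functional.interpolate(logits, size=rgb.shape[2:],
                                         mode="bilinear", align_corners=False)

x_rgb = torch.randn(1, 3, 128, 128)
x_d = torch.randn(1, 1, 128, 128)
print(TwoBranchRGBDSeg(num_classes=16)(x_rgb, x_d).shape)  # (1, 16, 128, 128)
```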
Řehánek, Martin. "Detekce objektů pomocí Kinectu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236602.
Full textSANTOS, LEANDRO TAVARES ARAGAO DOS. "GENERATING SUPERRESOLVED DEPTH MAPS USING LOW COST SENSORS AND RGB IMAGES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28673@1.
Three-dimensional reconstruction of real scenes has many applications. The rise of low-cost sensors like the Kinect suggests developing reconstruction systems cheaper than existing ones, even though the data provided by such devices are poorer than those from more sophisticated sensors. Academic and commercial initiatives, such as those described in Tong et al. [1] and Cui et al. [2], try to solve this problem. Building on them, this work modifies the super-resolution algorithm described by Mitzel et al. [3] so that its computations also consider the colored images provided by the Kinect, following the approach of Cui et al. [2]. This change improved the super-resolved depth maps, mitigating interference caused by sudden motion in the captured scene. The tests confirm the improvement of the generated maps and analyze the impact of CPU and GPU implementations of the algorithms in this super-resolution step. The work is restricted to this step; the subsequent stages of 3D reconstruction have not been implemented.
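A standard way to let a high-resolution color image guide depth super-resolution, in the same spirit as the modification described above (though this is not the thesis's Mitzel-based algorithm), is joint bilateral upsampling (Kopf et al., 2007). A naive, unoptimized sketch with illustrative parameters:

```python
# Colour-guided depth upsampling: each high-res output pixel averages nearby
# low-res depth samples, weighted by spatial distance and by colour similarity
# in the high-res guide image. Zero depth values are treated as holes.
import numpy as np

def joint_bilateral_upsample(depth_lo, rgb_hi, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """depth_lo: (h, w) float depth; rgb_hi: (H, W, 3) floats in [0, 1];
    scale = H // h (integer upsampling factor)."""
    H, W, _ = rgb_hi.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position on low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= qy < depth_lo.shape[0]
                            and 0 <= qx < depth_lo.shape[1]):
                        continue
                    d = depth_lo[qy, qx]
                    if d == 0:                     # skip missing depth samples
                        continue
                    # spatial weight, measured in low-res coordinates
                    ws = np.exp(-((qy - yl) ** 2 + (qx - xl) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight from the high-res colour guide
                    gy = min(qy * scale, H - 1)
                    gx = min(qx * scale, W - 1)
                    dc = rgb_hi[y, x] - rgb_hi[gy, gx]
                    wr = np.exp(-np.dot(dc, dc) / (2 * sigma_r ** 2))
                    num += ws * wr * d
                    den += ws * wr
            if den > 0:
                out[y, x] = num / den
    return out
```

The nested loops are for clarity only; a practical implementation would vectorize them or run on the GPU, which is precisely the CPU-versus-GPU trade-off the thesis measures.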
Thörnberg, Jesper. "Combining RGB and Depth Images for Robust Object Detection using Convolutional Neural Networks." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-174137.
Full textMöckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work we study a depth estimation problem in which a narrow field-of-view depth image and a wide field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that feeding the depth image to the network improves the result for the region outside the provided depth, compared with an existing method that predicts depth from an RGB image alone. We investigate several architectures and depth-image field-of-view sizes, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger depth field of view provides a greater advantage, and that the model's accuracy decreases with distance from the provided depth. Our results also show that models using the noisy, low-resolution depth performed on par with models using the unmodified depth.
Hammond, Patrick Douglas. "Deep Synthetic Noise Generation for RGB-D Data Augmentation." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7516.
Full textTu, Chieh-Min, and 杜介民. "Depth Image Inpainting with RGB-D Camera." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/k4m42a.
I-Shou University (義守大學), Department of Information Engineering, academic year 103 (2014-15)
Since Microsoft released the inexpensive Kinect sensor as a new natural user interface, stereoscopic imaging has moved from synthesizing multiple color views to synthesizing a color image together with a depth image. However, captured depth images may lose some depth values, so the stereoscopic effect is often poor. This thesis develops an object-based depth inpainting method built on the Kinect RGB-D camera. First, background differencing, frame differencing, and depth thresholding are used to segment foreground objects from a dynamic background image. The hole-inpainting task is then split between the background area, which is inpainted from a background depth image, and the foreground area, which is inpainted with a best-fit neighborhood depth value. Experimental results show that this inpainting method helps fill holes and improves contour edges and image quality.
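A minimal sketch of the background/foreground split described above, assuming a foreground mask and an accumulated background depth model are already available; the nearest-neighbour fill stands in for the thesis's best-fit neighbourhood rule, and all names are illustrative:

```python
# Fill depth holes (value 0) differently for background and foreground:
# background holes copy from a background depth model, foreground holes
# borrow the nearest valid foreground depth.
import numpy as np

def inpaint_depth(depth, fg_mask, bg_depth):
    """depth: (H, W) with 0 = hole; fg_mask: bool foreground mask;
    bg_depth: background depth model accumulated over hole-free frames."""
    out = depth.copy()
    holes = depth == 0
    bg_holes = holes & ~fg_mask
    out[bg_holes] = bg_depth[bg_holes]          # background: use the model
    ys, xs = np.nonzero(holes & fg_mask)        # foreground holes
    vy, vx = np.nonzero((depth > 0) & fg_mask)  # valid foreground pixels
    if len(vy):
        for y, x in zip(ys, xs):
            i = np.argmin((vy - y) ** 2 + (vx - x) ** 2)
            out[y, x] = depth[vy[i], vx[i]]     # nearest valid neighbour
    return out
```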
Lin, Shih-Pi, and 林士筆. "In-air Handwriting Chinese Character Recognition Base on RGB Image without Depth Information." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/2mhfzk.
National Central University (國立中央大學), Department of Computer Science and Information Engineering, academic year 107 (2018-19)
As technology changes rapidly, human-computer interaction (HCI) is no longer limited to the keyboard. Existing handwriting products provide features dense and stable enough to recognize handwriting trajectories, but for Chinese characters it is relatively difficult for machines to obtain a stable trajectory compared with English letters and numerals. In the past, in-air hand detection and tracking often relied on devices with depth information; the Kinect, for example, uses two infrared cameras to obtain depth, which makes such devices more expensive. Using the RGB information of a single camera for object detection and tracking has therefore become a trend in recent years. Using an RGB camera as the HCI medium for in-air handwriting requires accurate hand detection and stable tracking, and the handwriting trajectory is written in one continuous stroke, so it contains both real strokes and virtual (pen-up) strokes, which increases the difficulty of recognition. The hand database used to build the model contains self-recorded handwriting videos and relevant hand datasets collected from the Internet. Adding Multiple Receptive Fields (MRF) during data processing, which scales each ground-truth box and treats the scaled box as a new object, increases the robustness of detection. This thesis uses YOLO v3 as the core neural network model and adds a Convolutional Recurrent Neural Network (CRNN) to turn YOLO into a time-sequential network that stabilizes tracking. Analysis of the experimental results shows that hand detection becomes more robust after the data are processed with MRF, and that the converted YOLO improves the stability of hand tracking. Overall, using several Chinese character recognition methods, the accuracy of recognizing in-air handwritten Chinese character trajectories is about 96.33%.
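The multiple-receptive-field augmentation described above might look like the following toy sketch, in which each ground-truth hand box is rescaled about its centre and the scaled copy is added as a new training object; the scale factors are illustrative guesses, not the thesis's values:

```python
# Toy MRF-style augmentation: originals plus rescaled copies of each box.
def mrf_augment(boxes, scales=(0.8, 1.2), img_w=640, img_h=480):
    """boxes: list of (x1, y1, x2, y2); returns originals plus scaled copies,
    clipped to the image bounds."""
    out = list(boxes)
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        w, h = x2 - x1, y2 - y1
        for s in scales:
            out.append((max(0, cx - s * w / 2), max(0, cy - s * h / 2),
                        min(img_w, cx + s * w / 2), min(img_h, cy + s * h / 2)))
    return out

print(mrf_augment([(100, 100, 200, 200)]))
```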
Lu, Kaiyue. "Learning to Enhance RGB and Depth Images with Guidance." PhD thesis, 2022. http://hdl.handle.net/1885/258498.
Full textHuang, BO-XI, and 黃博熙. "Improve Alignment of RGB and Depth Images for KINECT V2." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/xmzj9h.
Feng Chia University (逢甲大學), Department of Information Engineering, academic year 106 (2017-18)
With the vigorous development of computer vision, 3D reconstruction has found increasingly wide use in recent years in fields such as industrial design, archaeological research, and the entertainment industry, making more accurate reconstruction data all the more important. Instruments for 3D reconstruction used to be quite expensive; the Microsoft KINECT V2, however, is not only cheaper but also provides both depth and color information. Because the KINECT V2 color and depth cameras are not at the same horizontal position, the images they capture cannot be aligned directly; instead, the depth image coordinates must be transformed into alignment with the color image. The main purpose of this thesis is to align the KINECT V2 color and depth images by a coordinate transformation method and to compare it with an alignment method based on an affine transformation. Furthermore, since the images captured by the two cameras may be distorted, the cameras must be calibrated before alignment to achieve higher accuracy. The thesis is divided into three parts. The first is camera calibration, comprising intrinsic, extrinsic, and distortion correction, improving mainly on the methods of D. C. Herrera [1] and Zhengyou Zhang [2]. The second uses the coordinate conversion method to align the color and depth images. The third compares the proposed coordinate transformation method with the affine transformation method using the root mean square error of the aligned images.
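The coordinate-transformation alignment reduces to standard depth-to-color registration: back-project each depth pixel with the depth camera's intrinsics, apply the rigid depth-to-color transform, and re-project with the color camera's intrinsics. A minimal sketch assuming calibrated parameters (all placeholders, not KINECT V2 factory values):

```python
# Warp a depth map into the color camera's image plane via calibrated
# intrinsics (K_d, K_c) and a rigid depth-to-color extrinsic (R, t).
import numpy as np

def register_depth_to_color(depth, K_d, K_c, R, t, color_shape):
    """depth: (H, W) in metres, 0 = invalid; K_d, K_c: 3x3 intrinsics;
    R (3x3), t (3,): transform from depth frame to color frame."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W]
    z = depth.ravel().astype(np.float64)
    m = z > 0
    u, v, z = u.ravel()[m], v.ravel()[m], z[m]
    # back-project valid depth pixels to 3D in the depth camera frame
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    P = R @ np.vstack([x, y, z]) + t.reshape(3, 1)   # depth -> color frame
    # project into the color image plane
    uc = np.round(K_c[0, 0] * P[0] / P[2] + K_c[0, 2]).astype(int)
    vc = np.round(K_c[1, 1] * P[1] / P[2] + K_c[1, 2]).astype(int)
    ok = (uc >= 0) & (uc < color_shape[1]) & (vc >= 0) & (vc < color_shape[0])
    out = np.zeros(color_shape[:2])
    out[vc[ok], uc[ok]] = z[ok]   # nearest-pixel splat; ignores occlusion
    return out
```

A real pipeline would also undistort both images first, which is exactly why the thesis calibrates the cameras before aligning.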
Russo, Paolo. "Broadening deep learning horizons: models for RGB and depth images adaptation." Doctoral thesis, 2020. http://hdl.handle.net/11573/1365047.
Full texthao, Huang wei, and 黃偉豪. "Accurate Alignment of RGB and Depth Images for KINECT by Camera Calibration." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16807955249441238409.
Feng Chia University (逢甲大學), Department of Information Engineering, academic year 103 (2014-15)
Three-dimensional reconstruction is widely applied in fields such as robot vision, medical imaging, and archaeology. Microsoft recently released the KINECT as a tool for three-dimensional reconstruction: it not only provides fairly accurate depth values, but is also much cheaper than traditional depth cameras. The main topic of this thesis is the alignment of the KINECT's color and depth images. Because the Kinect's color and depth cameras sit at different horizontal positions, the captured images cannot be accurately aligned as-is, yet alignment of the two images is necessary for subsequent image processing and applications; calibrating the two cameras beforehand also yields a more accurate alignment. The research is therefore divided into two parts. The first is camera calibration, including internal and external calibration, based mainly on the method proposed by Zhengyou Zhang [7]. The second is the alignment itself: we model the difference between the two images with an affine transformation, find the correspondence between them, and use it to transform the depth image. Finally, experiments validate the proposed approach: after calibration the images become visibly less distorted, and the alignment brings the color and depth images significantly closer.
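The affine relationship between the two images can be estimated from matched point pairs by linear least squares; a minimal sketch of the general idea (illustrative, not the thesis's exact procedure):

```python
# Fit a 2D affine map dst ~ A @ [x, y, 1]^T from point correspondences.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (n, 2) corresponding points; returns the 2x3 affine matrix.
    Needs n >= 3 non-collinear pairs."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])      # (n, 3) homogeneous sources
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                 # (2, 3)

# toy check: scale by (1.1, 0.9) and translate by (5, -2)
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src @ np.array([[1.1, 0.0], [0.0, 0.9]]) + np.array([5.0, -2.0])
print(np.round(fit_affine(src, dst), 3))  # [[1.1 0.  5. ] [0.  0.9 -2. ]]
```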
Kang, Wen-Yao, and 康文耀. "Combined RGB and Depth Images to Detect Crop Rows for Sanshin Green Onion." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/dy9y8h.
National Ilan University (國立宜蘭大學), Master's Program, Department of Biomechatronic Engineering, academic year 106 (2017-18)
Sanshin green onion is an important economic crop in Ilan. Because Ilan's weather is wet and rainy, most crops are grown with ridge-and-furrow farming, and the cultivation and management of Sanshin green onion are very labor-intensive and time-consuming. Detecting the crop row lines of green onions on the ridges by machine vision is therefore a critical technology for an unmanned field vehicle. The purpose of this study is to propose a method that combines the results from the color image and the depth image acquired by a Kinect sensor to automatically detect crop row lines under varying conditions of light brightness and growth period. A Kinect sensor was used to acquire both color and depth images of the crop rows of a Sanshin green onion field. In the color image, green onion features were segmented from the background by the difference of the RGB components, and the Hough transform was then used to find crop lines from feature points selected by the proposed grid squares. In the depth image, the features were segmented by inspecting depth differences along horizontal lines, again followed by the Hough transform. The slopes of the detected lines and the distances between their upper and lower endpoints were used to decide whether the color image result, the depth image result, or both were correct; when both were correct, the final result was obtained with specific weighting factors derived from the success rates and average errors observed for the color and depth images in the experiments. The experiments showed that the average success rate of finding crop lines from color images was 90.4% on sunny days and 93.5% on cloudy days; from depth images, 53.8% on sunny days and 91.3% on cloudy days. The proposed combining method detected crop lines at a rate of 92.3% on sunny days and 100% on cloudy days, effectively improving the overall success rate compared with using only the color or the depth image.
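The color-image branch can be sketched as an excess-green segmentation followed by a probabilistic Hough transform; the thresholds below are illustrative placeholders, not the study's tuned values or its exact grid-square feature selection:

```python
# Segment green vegetation with an excess-green index, then fit candidate
# crop row lines with OpenCV's probabilistic Hough transform.
import cv2
import numpy as np

def detect_crop_rows(bgr):
    b, g, r = cv2.split(bgr.astype(np.int16))
    mask = ((2 * g - r - b) > 40).astype(np.uint8) * 255   # excess-green index
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=bgr.shape[0] // 3, maxLineGap=20)
    return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)
```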
Gama, Filipe Xavier da Graça. "Efficient processing techniques for depth maps generated by rgb-d sensors." Master's thesis, 2015. http://hdl.handle.net/10400.8/2524.