Academic literature on the topic "Imagerie RGB"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the topical lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Imagerie RGB".
Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Imagerie RGB"
Vigneau, Nathalie, Corentin Chéron, Aleixandre Verger and Frédéric Baret. "Imagerie aérienne par drone : exploitation des données pour l'agriculture de précision". Revue Française de Photogrammétrie et de Télédétection, no. 213 (April 26, 2017): 125–31. http://dx.doi.org/10.52638/rfpt.2017.203.
Shen, Xin, Lin Cao, Bisheng Yang, Zhong Xu and Guibin Wang. "Estimation of Forest Structural Attributes Using Spectral Indices and Point Clouds from UAS-Based Multispectral and RGB Imageries". Remote Sensing 11, no. 7 (April 3, 2019): 800. http://dx.doi.org/10.3390/rs11070800.
Priyankara, Prabath and Takehiro Morimoto. "UAV Based Agricultural Crop Canopy Mapping for Crop Field Monitoring". Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-303-2019.
Purwanto, Anang Dwi and Wikanti Asriningrum. "Identification of Mangrove Forests Using Multispectral Satellite Imageries". International Journal of Remote Sensing and Earth Sciences (IJReSES) 16, no. 1 (October 30, 2019): 63. http://dx.doi.org/10.30536/j.ijreses.2019.v16.a3097.
Chhatkuli, S., T. Satoh and K. Tachibana. "Multi Sensor Data Integration for an Accurate 3D Model Generation". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 103–6. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-103-2015.
Argyrou, Argyro, Athos Agapiou, Apostolos Papakonstantinou and Dimitrios D. Alexakis. "Comparison of Machine Learning Pixel-Based Classifiers for Detecting Archaeological Ceramics". Drones 7, no. 9 (September 13, 2023): 578. http://dx.doi.org/10.3390/drones7090578.
Mawardi, Sonny, Emi Sukiyah and Iyan Haryanto. "Morphotectonic Characteristics Of Cisadane Watersshed Based On Satellite Images Analysis". Jurnal Geologi dan Sumberdaya Mineral 20, no. 3 (August 22, 2019): 175. http://dx.doi.org/10.33332/jgsm.geologi.v20i3.464.
Vanbrabant, Yasmin, Stephanie Delalieux, Laurent Tits, Klaas Pauly, Joke Vandermaesen and Ben Somers. "Pear Flower Cluster Quantification Using RGB Drone Imagery". Agronomy 10, no. 3 (March 17, 2020): 407. http://dx.doi.org/10.3390/agronomy10030407.
Simes, Tomás, Luís Pádua and Alexandra Moutinho. "Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery". Remote Sensing 16, no. 1 (December 20, 2023): 30. http://dx.doi.org/10.3390/rs16010030.
Semah, Franck. "Imagerie médicale et épilepsies". Revue Générale Nucléaire, no. 4 (August 2001): 36–37. http://dx.doi.org/10.1051/rgn/20014036.
Texto completoTesis sobre el tema "Imagerie RGB"
Lefévre, Soizic. "Caractérisation de la qualité des raisins par imagerie". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS017.
Identifying the health condition of grapes at harvest time is a major issue for producing quality wines. To address it, data are acquired by spectrometry, hyperspectral imaging and RGB imaging on grape samples during harvest. Several pre-treatments adapted to each type of data are applied, such as normalization, reduction, extraction of characteristic vectors, and segmentation of useful areas. From an imaging point of view, the false-color reconstruction of hyperspectral images, far from reality, does not allow all the intra-class diversity to be labelled. The visual quality of RGB imaging, on the other hand, enables accurate class labelling. From this labelling, classifiers such as support vector machines, random forests, maximum likelihood estimation, spectral mapping and k-means are tested and trained on labelled datasets. Depending on the nature of the data, the most effective classifier is applied to whole images of grape clusters or crates of grapes of several grape varieties from different parcels. The quality indices obtained from RGB image processing are very close to the estimates made by experts in the field.
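As an illustration of the kind of supervised pipeline this abstract describes, the sketch below trains a support vector machine on labelled pixel features and evaluates it on held-out data. It is a minimal, hypothetical example: the features, class count and data are placeholders, and the SVM is only one of the classifiers the thesis lists, not the author's actual implementation.

```python
# Minimal sketch: supervised classification of labelled pixel features,
# assuming features have already been extracted from RGB/hyperspectral data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: 500 labelled pixels, 10 spectral/colour features each,
# 3 hypothetical classes (e.g. healthy, botrytised, dried berries).
X = rng.normal(size=(500, 10))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Normalisation followed by an SVM classifier, one of the tested options.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```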
Kacete, Amine. "Unconstrained Gaze Estimation Using RGB-D Camera". Thesis, CentraleSupélec, 2016. http://www.theses.fr/2016SUPL0012/document.
In this thesis, we tackled the automatic gaze estimation problem in unconstrained user environments. This work belongs to the computer vision research field applied to the perception of humans and their behaviors. Many existing industrial solutions are commercialized and provide acceptable accuracy in gaze estimation. These solutions often rely on complex hardware, such as infrared cameras (embedded in a head-mounted device or in a remote system), making them intrusive, highly constrained by the user's environment and inappropriate for large-scale public use. We focus on estimating gaze using cheap, low-resolution and non-intrusive devices like the Kinect sensor, and we develop new methods to address challenging conditions such as head pose changes, illumination variations and large user-sensor distances. In this work we investigated different gaze estimation paradigms. We first developed two automatic gaze estimation systems following two classical approaches: feature-based and semi appearance-based approaches. The major limitation of such paradigms lies in their design, which assumes total independence between the eye appearance and head pose blocks. To overcome this limitation, we converged to a novel paradigm which aims at unifying the two previous components and building a global gaze manifold; we explored two global approaches across the experiments using synthetic and real RGB-D gaze samples.
Kadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images". Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with numerous challenges present in the OR, e.g. clutter, textureless surfaces and occlusions. We present novel part-based approaches that take advantage of depth, multi-view and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
Devanne, Maxime. "3D human behavior understanding by shape analysis of human motion and pose". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10138/document.
The emergence of RGB-D sensors providing the 3D structure of both the scene and the human body offers new opportunities for studying human motion and understanding human behaviors. However, the design and development of models for behavior recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, the complexity of human motion and possible interactions with the environment. In this thesis, we first focus on the action recognition problem by representing a human action as the trajectory of the 3D coordinates of human body joints over time, thus capturing simultaneously the body shape and the dynamics of the motion. The action recognition problem is then formulated as the problem of computing the similarity between trajectory shapes in a Riemannian framework. Experiments carried out on four representative benchmarks demonstrate the potential of the proposed solution in terms of accuracy/latency for low-latency action recognition. Second, we extend the study to more complex behaviors by analyzing the evolution of the human pose shape to decompose the motion stream into short motion units. Each motion unit is then characterized by the motion trajectory and depth appearance around hand joints, so as to describe the human motion and interaction with objects. Finally, the sequence of temporal segments is modeled through a Dynamic Naive Bayesian Classifier. Experiments on four representative datasets evaluate the potential of the proposed approach in different contexts, including recognition and online detection of behaviors.
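To make the trajectory-comparison idea concrete, here is a deliberately simplified sketch: skeleton trajectories are resampled to a common length and compared with a plain Euclidean distance inside a nearest-neighbour classifier. The joint count, labels and data are invented, and the Euclidean distance stands in for the Riemannian shape metric used in the thesis; it is not the author's method.

```python
# Simplified sketch: nearest-neighbour action recognition on skeleton
# trajectories, using temporal resampling plus a Euclidean distance as a
# stand-in for a proper trajectory-shape metric.
import numpy as np

def resample(traj: np.ndarray, n: int = 50) -> np.ndarray:
    """Linearly resample a (frames, joints*3) trajectory to n frames."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])], axis=1)

def distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(resample(a) - resample(b)))

def classify(query: np.ndarray, gallery: list) -> str:
    """Return the label of the closest gallery trajectory."""
    return min(gallery, key=lambda item: distance(query, item[0]))[1]

# Placeholder data: two labelled trajectories of 20 joints (60 coordinates per frame).
rng = np.random.default_rng(1)
gallery = [(rng.normal(size=(40, 60)), "wave"), (rng.normal(size=(65, 60)), "sit down")]
print(classify(rng.normal(size=(55, 60)), gallery))
```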
Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement". PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.
Texto completoAlston, Laure. "Spectroscopie de fluorescence et imagerie optique pour l'assistance à la résection de gliomes : conception et caractérisation de systèmes de mesure et modèles de traitement des données associées, sur fantômes et au bloc opératoire". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1295/document.
Gliomas are infiltrative brain tumors that remain hard to cure, notably because of the difficulty of precisely delineating their margins during surgery. Intraoperative 5-ALA-induced protoporphyrin IX (PpIX) fluorescence microscopy has shown its relevance in assisting neurosurgeons but lacks sensitivity. In this thesis, we perform a spectroscopic clinical trial on 10 patients under the assumption that the collected fluorescence is a linear combination of the contributions of two states of PpIX whose proportions vary with the density of tumor cells. This work starts with the development of an intraoperative, portable, real-time fluorescence spectroscopic device that provides multi-wavelength excitation. We then demonstrate its use on PpIX phantoms with tissue-mimicking properties. This first allows a reference emitted spectrum to be obtained for each state separately, and then permits the development of a fitting model that adjusts any emitted spectrum as a linear combination of the references in the 608–637 nm spectral band. Next, we present the steps taken to obtain approval for the clinical trial, especially the risk analysis. The in vivo data analysis is then presented, showing that we detect fluorescence where current microscopes cannot, which could reflect a change in PpIX state from the glioma center to its margins. Moreover, the relevance of multi-wavelength excitation is highlighted, as the correlation between the three measured spectra of a same sample decreases with the density of tumor cells. Finally, the complementary need to intraoperatively identify cerebral functional areas is addressed with optical measurements as a perspective, and other properties of PpIX on phantoms are also examined.
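The fitting step described here lends itself to a short worked example. The sketch below decomposes a measured spectrum over the 608–637 nm band as a non-negative linear combination of two reference PpIX spectra. The reference shapes, widths and noise level are made-up placeholders, and non-negative least squares is just one plausible way to implement such a fit, not necessarily the model developed in the thesis.

```python
# Illustrative sketch: fit a measured emission spectrum as a non-negative
# linear combination of two reference PpIX spectra over 608-637 nm.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(608.0, 638.0, 1.0)  # 608-637 nm band, 1 nm steps

def gaussian(center: float, width: float) -> np.ndarray:
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Placeholder reference spectra for the two PpIX states (shapes are invented).
ref_state1 = gaussian(620.0, 5.0)
ref_state2 = gaussian(634.0, 6.0)
A = np.column_stack([ref_state1, ref_state2])

# Synthetic "measured" spectrum: 30% state 1, 70% state 2, plus noise.
rng = np.random.default_rng(2)
measured = 0.3 * ref_state1 + 0.7 * ref_state2 + rng.normal(scale=0.01, size=wavelengths.size)

# Non-negative least squares recovers the contribution of each reference.
coeffs, residual = nnls(A, measured)
proportions = coeffs / coeffs.sum()
print("estimated proportions (state 1, state 2):", proportions.round(3))
```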
Chakib, Reda. "Acquisition et rendu 3D réaliste à partir de périphériques "grand public"". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0101/document.
Digital imaging, from image synthesis to computer vision, is undergoing a strong evolution, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is rising rapidly, contributes to the strong demand for this type of camera for 3D scanning needs. The objective of this thesis is to acquire and master know-how in the capture/acquisition of 3D models, in particular regarding the rendered aspect. The realization of a 3D scanner from an RGB-D camera is part of the goal. During the acquisition phase, especially for a portable device, there are two main problems: the one related to the reference frame of each capture, and the final rendering of the reconstructed object.
Chiron, Guillaume. "Système complet d’acquisition vidéo, de suivi de trajectoires et de modélisation comportementale pour des environnements 3D naturellement encombrés : application à la surveillance apicole". Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS030/document.
This manuscript provides the basis for a complete video-surveillance chain for naturally cluttered environments. In this context, we identify and solve the wide spectrum of methodological and technological barriers inherent to: 1) the acquisition of video sequences in natural conditions, 2) the image processing problems, 3) the multi-target tracking ambiguities, 4) the discovery and modeling of recurring behavioral patterns, and 5) the data fusion. The application context of our work is the monitoring of honeybees, and in particular the study of the trajectories of bees in flight in front of their hive. This thesis is part of a feasibility and prototyping study carried out within the two interdisciplinary projects EPERAS and RISQAPI (undertaken in collaboration with the INRA institute and the French National Museum of Natural History). For us, computer scientists, and for the biologists who accompanied us, it is a completely new area of investigation, for which the scientific knowledge usually essential for such applications is still in its infancy. Unlike existing approaches for monitoring insects, we propose to tackle the problem in three-dimensional space through the use of a high-frequency stereo camera. In this context, we detail our new target detection method, which we call HIDS segmentation. Concerning the computation of trajectories, we explored several tracking approaches, relying on more or less a priori knowledge, which are able to deal with the extreme conditions of the application (e.g. many targets, small in size, following chaotic movements). Once the trajectories are collected, we organize them according to a given hierarchical data structure and apply a Bayesian nonparametric approach to discover emergent behaviors within the colony of insects. The exploratory analysis of the trajectories generated by the crowded scene is performed through an unsupervised classification method applied simultaneously over different levels of semantics, where the number of clusters for each level is not defined a priori but estimated from the data only. This approach has been validated thanks to a ground truth generated by a Multi-Agent System. We then tested it on real data.
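To make the "number of clusters estimated from the data" idea concrete, the sketch below groups trajectory descriptors with a Dirichlet-process Gaussian mixture, which leaves unneeded components with negligible weight. The descriptors and parameters are hypothetical, and this generic model is a stand-in rather than the specific hierarchical Bayesian nonparametric approach developed in the thesis.

```python
# Minimal sketch: nonparametric-style clustering of trajectory descriptors,
# where the effective number of clusters is inferred from the data.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)

# Placeholder descriptors: 300 trajectories summarized by 6 features each
# (e.g. mean speed, curvature, duration), drawn from three loose groups.
features = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 6)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 6)),
    rng.normal(loc=-3.0, scale=0.5, size=(100, 6)),
])

# Dirichlet-process prior: components that are not needed get negligible weight.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
labels = dpgmm.fit_predict(features)
print("clusters actually used:", np.unique(labels).size)
```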
Muske, Manideep Sai Yadav. "To Detect Water-Puddle On Driving Terrain From RGB Imagery Using Deep Learning Algorithms". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21229.
Texto completoFernández, Gallego José Armando. "Image processing techniques for plant phenotyping using RGB and thermal imagery = Técnicas de procesamiento de imágenes RGB y térmicas como herramienta para fenotipado de cultivos". Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/669111.
Global cereal stocks must increase to meet growing demand. Currently, maize, rice and wheat are the main crops worldwide; other cereals such as barley, sorghum and oats are also well placed on the list. Crop productivity is directly affected by climate change factors such as heat, drought, floods or storms. Researchers agree that global climate change is having a major impact on crop productivity, which is why many studies have focused on climate change scenarios and, more specifically, on abiotic stress. In the case of heat stress, for example, high temperatures between anthesis and grain filling can reduce grain yield. To cope with climate change and future environmental scenarios, plant breeding is one of the main alternatives; breeding techniques are even considered to contribute more to yield increases than crop management. Breeding programs focus on identifying genotypes with high yield and quality to act as progenitors and on promoting the best individuals to develop new plant varieties. Breeders use phenotypic data, the performance of plants and crops, and genetic information to improve yield through selection (GxE, where G and E denote genetic and environmental factors). Plant phenotyping concerns the observable (or measurable) characteristics of the plant as the crop grows, as well as the association between the plant's genetic background and its response to the environment (GxE). In traditional phenotyping, measurements are scored manually, which is tedious, time-consuming and prone to subjective errors. Nowadays, however, technology is involved in many applications, and from the plant phenotyping point of view it has been incorporated as a tool. The use of image processing techniques that integrate sensors and algorithms is therefore an alternative for evaluating these characteristics automatically (or semi-automatically).
Books on the topic "Imagerie RGB"
King, Jane Valerie. Validation of a Specific Technique of Relaxation with Guided Imagery (RGI) on State Anxiety in Graduate Nursing Students. 1987.
Book chapters on the topic "Imagerie RGB"
Lorenzo-Navarro, Javier, Modesto Castrillón-Santana and Daniel Hernández-Sosa. "An Study on Re-identification in RGB-D Imagery". In Lecture Notes in Computer Science, 200–207. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35395-6_28.
Bakalos, Nikolaos, Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, Kassiani Papasotiriou and Matthaios Bimpas. "Fusing RGB and Thermal Imagery with Channel State Information for Abnormal Activity Detection Using Multimodal Bidirectional LSTM". In Cyber-Physical Security for Critical Infrastructures Protection, 77–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69781-5_6.
Matos, João Pedro, Artur Machado, Ricardo Ribeiro and Alexandra Moutinho. "Automatic People Detection Based on RGB and Thermal Imagery for Military Applications". In Robot 2023: Sixth Iberian Robotics Conference, 201–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59167-9_17.
Pádua, Luís, Nathalie Guimarães, Telmo Adão, Pedro Marques, Emanuel Peres, António Sousa and Joaquim J. Sousa. "Classification of an Agrosilvopastoral System Using RGB Imagery from an Unmanned Aerial Vehicle". In Progress in Artificial Intelligence, 248–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30241-2_22.
Rozanda, Nesdi Evrilyan, M. Ismail and Inggih Permana. "Segmentation Google Earth Imagery Using K-Means Clustering and Normalized RGB Color Space". In Computational Intelligence in Data Mining - Volume 1, 375–86. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-2205-7_36.
Niu, Qinglin, Haikuan Feng, Changchun Li, Guijun Yang, Yuanyuan Fu, Zhenhai Li and Haojie Pei. "Estimation of Leaf Nitrogen Concentration of Winter Wheat Using UAV-Based RGB Imagery". In Computer and Computing Technologies in Agriculture XI, 139–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06179-1_15.
Sukkar, Abdullah and Mustafa Turker. "Tree Detection from Very High Spatial Resolution RGB Satellite Imagery Using Deep Learning". In Recent Research on Geotechnical Engineering, Remote Sensing, Geophysics and Earthquake Seismology, 145–49. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-43218-7_34.
Guarin, Arnold, Homero Ortega and Hans Garcia. "Acquisition System Based in a Low-Cost Optical Architecture with Single Pixel Measurements and RGB Side Information for UAV Imagery". In Applications of Computational Intelligence, 155–67. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36211-9_13.
Zefri, Yahya, Imane Sebari, Hicham Hajji and Ghassane Aniba. "A Channel-Based Attention Deep Semantic Segmentation Model for the Extraction of Multi-type Solar Photovoltaic Arrays from Large-Scale Orthorectified UAV RGB Imagery". In Intelligent Sustainable Systems, 421–29. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7660-5_36.
Prabhakar, Dolonchapa and Pradeep Kumar Garg. "Applying a Deep Learning Approach for Building Extraction From High-Resolution Remote Sensing Imagery". In Advances in Geospatial Technologies, 157–79. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-7319-1.ch008.
Conference papers on the topic "Imagerie RGB"
Han, Yiding, Austin Jensen and Huifang Dou. "Programmable Multispectral Imager Development as Light Weight Payload for Low Cost Fixed Wing Unmanned Aerial Vehicles". In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87741.
Taylor, Camillo and Anthony Cowley. "Parsing Indoor Scenes Using RGB-D Imagery". In Robotics: Science and Systems 2012. Robotics: Science and Systems Foundation, 2012. http://dx.doi.org/10.15607/rss.2012.viii.051.
LaRocque, Armand, Brigitte Leblon, Melanie-Louise Leblanc and Angela Douglas. "Surveying Migratory Waterfowl using UAV RGB Imagery". In IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021. http://dx.doi.org/10.1109/igarss47720.2021.9553747.
WuDunn, Marc, James Dunn and Avideh Zakhor. "Point Cloud Segmentation using RGB Drone Imagery". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191266.
WuDunn, Marc, Avideh Zakhor, Samir Touzani and Jessica Granderson. "Aerial 3D building reconstruction from RGB drone imagery". In Geospatial Informatics X, edited by Kannappan Palaniappan, Gunasekaran Seetharaman, Peter J. Doucette and Joshua D. Harguess. SPIE, 2020. http://dx.doi.org/10.1117/12.2558399.
Rueda, Hoover, Daniel Lau and Gonzalo R. Arce. "RGB detectors on compressive snapshot multi-spectral imagers". In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418223.
Swamy, Shravan Kumar, Klaus Schwarz, Michael Hartmann and Reiner M. Creutzburg. "RGB and IR imagery fusion for autonomous driving". In Multimodal Image Exploitation and Learning 2023, edited by Sos S. Agaian, Stephen P. DelMarco and Vijayan K. Asari. SPIE, 2023. http://dx.doi.org/10.1117/12.2664336.
Chen, Xiwen, Bryce Hopkins, Hao Wang, Leo O’Neill, Fatemeh Afghah, Abolfazl Razi, Peter Fulé, Janice Coen, Eric Rowell and Adam Watts. "Wildland Fire Detection and Monitoring using a Drone-collected RGB/IR Image Dataset". In 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2022. http://dx.doi.org/10.1109/aipr57179.2022.10092208.
Yokoya, Naoto and Akira Iwasaki. "Airborne unmixing-based hyperspectral super-resolution using RGB imagery". In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6947019.
Schau, H. C. "Estimation of low-resolution visible spectra from RGB imagery". In SPIE Defense, Security, and Sensing, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2009. http://dx.doi.org/10.1117/12.814650.
Texto completoInformes sobre el tema "Imagerie RGB"
Bhatt, Parth, Curtis Edson and Ann MacLean. Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS). Michigan Technological University, September 2022. http://dx.doi.org/10.37099/mtu.dc.michigantech-p/16366.
Ley, Matt, Tom Baldvins, Hannah Pilkington, David Jones and Kelly Anderson. Vegetation classification and mapping project: Big Thicket National Preserve. National Park Service, 2024. http://dx.doi.org/10.36967/2299254.
Ley, Matt, Tom Baldvins, David Jones, Hanna Pilkington and Kelly Anderson. Vegetation classification and mapping: Gulf Islands National Seashore. National Park Service, May 2023. http://dx.doi.org/10.36967/2299028.