Academic literature on the topic 'Imagerie RGB'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Imagerie RGB.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Imagerie RGB"
Vigneau, Nathalie, Corentin Chéron, Aleixandre Verger, and Frédéric Baret. "Imagerie aérienne par drone : exploitation des données pour l'agriculture de précision." Revue Française de Photogrammétrie et de Télédétection, no. 213 (April 26, 2017): 125–31. http://dx.doi.org/10.52638/rfpt.2017.203.
Shen, Xin, Lin Cao, Bisheng Yang, Zhong Xu, and Guibin Wang. "Estimation of Forest Structural Attributes Using Spectral Indices and Point Clouds from UAS-Based Multispectral and RGB Imageries." Remote Sensing 11, no. 7 (April 3, 2019): 800. http://dx.doi.org/10.3390/rs11070800.
Priyankara, Prabath, and Takehiro Morimoto. "UAV Based Agricultural Crop Canopy Mapping for Crop Field Monitoring." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-303-2019.
Purwanto, Anang Dwi, and Wikanti Asriningrum. "Identification of Mangrove Forests Using Multispectral Satellite Imageries." International Journal of Remote Sensing and Earth Sciences (IJReSES) 16, no. 1 (October 30, 2019): 63. http://dx.doi.org/10.30536/j.ijreses.2019.v16.a3097.
Chhatkuli, S., T. Satoh, and K. Tachibana. "Multi Sensor Data Integration for an Accurate 3D Model Generation." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 103–6. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-103-2015.
Argyrou, Argyro, Athos Agapiou, Apostolos Papakonstantinou, and Dimitrios D. Alexakis. "Comparison of Machine Learning Pixel-Based Classifiers for Detecting Archaeological Ceramics." Drones 7, no. 9 (September 13, 2023): 578. http://dx.doi.org/10.3390/drones7090578.
Mawardi, Sonny, Emi Sukiyah, and Iyan Haryanto. "Morphotectonic Characteristics of Cisadane Watershed Based on Satellite Images Analysis." Jurnal Geologi dan Sumberdaya Mineral 20, no. 3 (August 22, 2019): 175. http://dx.doi.org/10.33332/jgsm.geologi.v20i3.464.
Vanbrabant, Yasmin, Stephanie Delalieux, Laurent Tits, Klaas Pauly, Joke Vandermaesen, and Ben Somers. "Pear Flower Cluster Quantification Using RGB Drone Imagery." Agronomy 10, no. 3 (March 17, 2020): 407. http://dx.doi.org/10.3390/agronomy10030407.
Simes, Tomás, Luís Pádua, and Alexandra Moutinho. "Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery." Remote Sensing 16, no. 1 (December 20, 2023): 30. http://dx.doi.org/10.3390/rs16010030.
Semah, Franck. "Imagerie médicale et épilepsies." Revue Générale Nucléaire, no. 4 (August 2001): 36–37. http://dx.doi.org/10.1051/rgn/20014036.
Full textDissertations / Theses on the topic "Imagerie RGB"
Lefévre, Soizic. "Caractérisation de la qualité des raisins par imagerie." Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS017.
Identifying the health condition of grapes at harvest time is a major challenge for producing quality wines. To address it, data are acquired by spectrometry, hyperspectral imaging and RGB imaging on grape samples during harvest. Several pre-processing steps adapted to each type of data are applied, such as normalization, reduction, extraction of characteristic vectors, and segmentation of useful areas. From an imaging point of view, the false-colour reconstruction of hyperspectral images, far from reality, does not allow all the intra-class diversity to be labelled. On the other hand, the visual quality of RGB imaging enables accurate class labelling. From this labelling, classifiers such as support vector machines, random forests, maximum likelihood estimation, spectral mapping and k-means are trained and tested on labelled datasets. Depending on the nature of the data, the most effective classifier is applied to whole images of grape clusters or crates of grapes from several grape varieties and different parcels. The quality indices obtained from RGB image processing are very close to the estimates made by experts in the field.
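For illustration only, the sketch below shows how two of the classifiers mentioned in this abstract (a random forest and an SVM) could be trained and compared on labelled RGB features with scikit-learn; the feature extraction, labels and data are placeholder assumptions, not the pipeline used in the thesis.

```python
# Minimal sketch (not the thesis's exact pipeline): train and compare two classifiers
# on labelled RGB pixel features. The features and labels below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.random((600, 3))          # one row per labelled pixel/patch: [R, G, B] scaled to [0, 1]
y = rng.integers(0, 3, size=600)  # hypothetical class indices (e.g. healthy / damaged / dried)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            SVC(kernel="rbf", gamma="scale")):
    clf.fit(X_train, y_train)     # train on the labelled base
    print(type(clf).__name__)
    print(classification_report(y_test, clf.predict(X_test)))
```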
Kacete, Amine. "Unconstrained Gaze Estimation Using RGB-D Camera." Thesis, CentraleSupélec, 2016. http://www.theses.fr/2016SUPL0012/document.
In this thesis, we tackle the automatic gaze estimation problem in unconstrained user environments. This work belongs to the computer vision research field applied to the perception of humans and their behaviors. Many industrial solutions are commercialized and provide acceptable accuracy in gaze estimation. These solutions often rely on complex hardware, such as arrays of infrared cameras (embedded in a head-mounted or remote system), making them intrusive, highly constrained by the user's environment and unsuitable for large-scale public use. We focus on estimating gaze using cheap, low-resolution and non-intrusive devices such as the Kinect sensor. We develop new methods to address challenging conditions such as head pose changes, illumination conditions and large user-sensor distances. In this work we investigated different gaze estimation paradigms. We first developed two automatic gaze estimation systems following two classical approaches: a feature-based and a semi-appearance-based approach. The major limitation of such paradigms lies in their design, which assumes total independence between the eye-appearance and head-pose blocks. To overcome this limitation, we converged on a novel paradigm that unifies the two previous components and builds a global gaze manifold; we explored two global approaches in our experiments, using synthetic and real RGB-D gaze samples.
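Purely as an illustration of the unified paradigm described above (eye appearance and head pose treated as a single input rather than independent blocks), one could concatenate both cues and learn a single regressor; the feature sizes, data and the choice of a random forest below are assumptions, not the thesis's actual model.

```python
# Illustrative only: one regressor over concatenated eye-appearance and head-pose features,
# in the spirit of a unified gaze manifold. All values are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
eye_patches = rng.random((1000, 15 * 9))   # flattened grey-level eye patches (hypothetical size)
head_pose = rng.random((1000, 3)) - 0.5    # yaw, pitch, roll (placeholder values)
gaze = rng.random((1000, 2)) - 0.5         # target gaze angles: yaw, pitch (placeholders)

X = np.hstack([eye_patches, head_pose])    # appearance and pose handled jointly
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, gaze)
print(model.predict(X[:3]))                # predicted gaze angles for three samples
```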
Kadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with numerous challenges present in the OR, e.g. clutter, textureless surfaces and occlusions. We present novel part-based approaches that take advantage of depth, multi-view and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
Devanne, Maxime. "3D human behavior understanding by shape analysis of human motion and pose." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10138/document.
The emergence of RGB-D sensors providing the 3D structure of both the scene and the human body offers new opportunities for studying human motion and understanding human behaviors. However, designing behavior recognition models that are both accurate and efficient is challenging due to the variability of the human pose, the complexity of human motion and possible interactions with the environment. In this thesis, we first focus on the action recognition problem by representing a human action as the trajectory of the 3D coordinates of the body joints over time, thus capturing simultaneously the body shape and the dynamics of the motion. Action recognition is then formulated as computing the similarity between the shapes of trajectories in a Riemannian framework. Experiments carried out on four representative benchmarks demonstrate the potential of the proposed solution in terms of accuracy and latency for low-latency action recognition. Second, we extend the study to more complex behaviors by analyzing the evolution of the human pose shape to decompose the motion stream into short motion units. Each motion unit is then characterized by the motion trajectory and the depth appearance around the hand joints, so as to describe the human motion and interaction with objects. Finally, the sequence of temporal segments is modeled with a Dynamic Naive Bayesian Classifier. Experiments on four representative datasets evaluate the potential of the proposed approach in different contexts, including recognition and online detection of behaviors.
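The sketch below illustrates, in a much simplified form, the idea of comparing the shapes of joint trajectories: each trajectory is resampled, centred and scale-normalised, then compared with a plain Euclidean distance. The thesis itself uses an elastic Riemannian shape metric, which this stand-in does not reproduce; the data and labels are invented.

```python
# Simplified stand-in for trajectory-shape comparison between skeleton sequences.
import numpy as np

def normalize(traj: np.ndarray, n_samples: int = 50) -> np.ndarray:
    """traj: (T, 3*J) array of stacked 3D joint coordinates over T frames."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.column_stack([np.interp(t_new, t_old, traj[:, d])
                                 for d in range(traj.shape[1])])
    resampled -= resampled.mean(axis=0)            # remove translation
    scale = np.linalg.norm(resampled)
    return resampled / scale if scale > 0 else resampled

def shape_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(normalize(a) - normalize(b)))

def classify(query, gallery, labels):
    """Nearest-neighbour action recognition over a labelled gallery of trajectories."""
    dists = [shape_distance(query, g) for g in gallery]
    return labels[int(np.argmin(dists))]

# Toy usage with random trajectories standing in for real skeleton data (15 joints x 3 coords)
rng = np.random.default_rng(0)
gallery = [rng.random((int(rng.integers(40, 80)), 45)) for _ in range(9)]
labels = ["wave", "sit", "walk"] * 3
print(classify(rng.random((60, 45)), gallery, labels))
```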
Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.
Alston, Laure. "Spectroscopie de fluorescence et imagerie optique pour l'assistance à la résection de gliomes : conception et caractérisation de systèmes de mesure et modèles de traitement des données associées, sur fantômes et au bloc opératoire." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1295/document.
Gliomas are infiltrative brain tumors that remain difficult to cure, notably because of the difficulty of precisely delineating their margins during surgery. Intraoperative 5-ALA-induced protoporphyrin IX (PpIX) fluorescence microscopy has shown its relevance for assisting neurosurgeons but lacks sensitivity. In this thesis, we perform a spectroscopic clinical trial on 10 patients under the assumption that the collected fluorescence is a linear combination of the contributions of two states of PpIX whose proportions vary with the density of tumor cells. This work starts with the development of an intraoperative, portable, real-time fluorescence spectroscopic device that provides multi-wavelength excitation. We then demonstrate its use on PpIX phantoms with tissue-mimicking properties. This first yields a reference emission spectrum for each state separately and then permits the development of a fitting model that adjusts any emitted spectrum as a linear combination of the references in the 608-637 nm spectral band. Next, we present the steps taken to obtain approval for the clinical trial, especially the risk analysis. The in vivo data analysis is then presented, showing that we detect fluorescence where current microscopes cannot, which may indicate a change in PpIX state from the glioma center to its margins. Moreover, the relevance of multi-wavelength excitation is highlighted, as the correlation between the three spectra measured on the same sample decreases with the density of tumor cells. Finally, the complementary need to intraoperatively identify cerebral functional areas is addressed with optical measurements as a perspective, and other properties of PpIX on phantoms are also reported.
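The fitting model described above can be illustrated with a small non-negative least-squares sketch: a measured spectrum on the 608-637 nm band is expressed as a linear combination of two reference PpIX spectra. The reference shapes, peak positions and sampling grid below are placeholders, not the published calibration data.

```python
# Illustrative sketch of fitting a measured emission spectrum as a non-negative linear
# combination of two reference PpIX spectra on 608-637 nm. References are placeholders.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.arange(608.0, 638.0, 1.0)                   # nm, assumed 1 nm sampling
ref_state1 = np.exp(-0.5 * ((wavelengths - 620.0) / 5.0) ** 2)  # placeholder reference spectrum
ref_state2 = np.exp(-0.5 * ((wavelengths - 634.0) / 5.0) ** 2)  # placeholder reference spectrum

def fit_contributions(measured: np.ndarray):
    """Return non-negative weights (a1, a2) and residual norm for
    measured ~ a1 * ref_state1 + a2 * ref_state2."""
    A = np.column_stack([ref_state1, ref_state2])
    coeffs, residual = nnls(A, measured)
    return coeffs, residual

# Example with a synthetic 30 % / 70 % mixture
measured = 0.3 * ref_state1 + 0.7 * ref_state2
coeffs, residual = fit_contributions(measured)
print(coeffs, residual)
```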
Chakib, Reda. "Acquisition et rendu 3D réaliste à partir de périphériques "grand public"." Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0101/document.
Digital imaging, from image synthesis to computer vision, is evolving rapidly, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is experiencing a rapid rise, contributes to the strong demand for this type of camera for 3D scanning purposes. The objective of this thesis is to acquire and master know-how in the capture and acquisition of 3D models, with particular attention to rendering. Building a 3D scanner from an RGB-D camera is part of this goal. During the acquisition phase, especially with a portable device, two main problems arise: registering each capture in a common reference frame, and the final rendering of the reconstructed object.
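As a minimal illustration of the reference-frame problem mentioned above, the sketch below back-projects a depth image into a 3D point cloud using pinhole intrinsics and moves it into a common world frame with the capture's estimated pose; the intrinsics, depth map and pose are placeholder values, not parameters from the thesis.

```python
# Minimal sketch of the geometric step behind fusing RGB-D captures.
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5          # assumed pinhole intrinsics
depth = np.full((480, 640), 1.5, dtype=np.float32)   # placeholder depth map, metres

v, u = np.indices(depth.shape)                       # pixel row (v) and column (u) indices
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)  # homogeneous coords

pose = np.eye(4)                                     # capture-to-world transform (placeholder)
points_world = (pose @ points.T).T[:, :3]            # all captures expressed in one frame
print(points_world.shape)
```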
Chiron, Guillaume. "Système complet d’acquisition vidéo, de suivi de trajectoires et de modélisation comportementale pour des environnements 3D naturellement encombrés : application à la surveillance apicole." Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS030/document.
This manuscript provides the basis for a complete video-surveillance chain for naturally cluttered environments. In this setting, we identify and address the wide spectrum of methodological and technological barriers inherent to: 1) the acquisition of video sequences in natural conditions, 2) image processing problems, 3) multi-target tracking ambiguities, 4) the discovery and modeling of recurring behavioral patterns, and 5) data fusion. The application context of our work is the monitoring of honeybees, and in particular the study of the trajectories of bees in flight in front of their hive. In fact, this thesis is part of a feasibility and prototyping study carried out by the two interdisciplinary projects EPERAS and RISQAPI (undertaken in collaboration with the INRA institute and the French National Museum of Natural History). For us computer scientists, and for the biologists who accompanied us, it is a completely new area of investigation in which the scientific knowledge usually essential for such applications is still in its infancy. Unlike existing approaches for monitoring insects, we propose to tackle the problem in three-dimensional space through the use of a high-frequency stereo camera. In this context, we detail our new target detection method, which we call HIDS segmentation. Concerning the computation of trajectories, we explored several tracking approaches, relying on varying degrees of a priori knowledge, that are able to deal with the extreme conditions of the application (e.g. many small targets following chaotic movements). Once the trajectories are collected, we organize them in a hierarchical data structure and apply a Bayesian nonparametric approach for discovering emergent behaviors within the insect colony. The exploratory analysis of the trajectories generated by the crowded scene is performed with an unsupervised classification method operating simultaneously over different semantic levels, where the number of clusters at each level is not defined a priori but estimated from the data only. This approach has been validated against a ground truth generated by a Multi-Agent System and then tested on real data.
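To illustrate the "number of clusters estimated from the data" idea, the sketch below uses a truncated Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture) as a stand-in for the thesis's hierarchical Bayesian nonparametric model; the per-trajectory descriptors are invented placeholders.

```python
# Sketch of nonparametric clustering where the effective number of behaviour clusters
# is inferred from the data rather than fixed in advance. Descriptors are placeholders.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(1)
descriptors = rng.random((500, 4))   # hypothetical per-trajectory features (speed, curvature, ...)

dpgmm = BayesianGaussianMixture(
    n_components=20,                               # upper bound, not the final count
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=1,
).fit(descriptors)

labels = dpgmm.predict(descriptors)
n_effective = np.unique(labels).size               # clusters actually used by the model
print(f"effective number of behaviour clusters: {n_effective}")
```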
Muske, Manideep Sai Yadav. "To Detect Water-Puddle On Driving Terrain From RGB Imagery Using Deep Learning Algorithms." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21229.
Fernández Gallego, José Armando. "Image processing techniques for plant phenotyping using RGB and thermal imagery = Técnicas de procesamiento de imágenes RGB y térmicas como herramienta para fenotipado de cultivos." Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/669111.
Global cereal stocks must increase to meet growing demand. Maize, rice and wheat are currently the main crops worldwide; other cereals such as barley, sorghum and oats also rank high on the list. Crop productivity is directly affected by climate-change factors such as heat, drought, flooding and storms. Researchers agree that global climate change is having a major impact on crop productivity, which is why many studies have focused on climate-change scenarios and, more specifically, on abiotic stress. In the case of heat stress, for example, high temperatures between anthesis and grain filling can reduce grain yield. Plant breeding is one of the main alternatives for coping with climate change and future environmental scenarios; breeding techniques are even considered to contribute more to yield increases than crop management. Breeding programs focus on identifying genotypes with high yield and quality to act as parents and on promoting the best individuals to develop new plant varieties. Breeders use phenotypic data, plant and crop performance, and genetic information to improve yield through selection (GxE, where G and E denote genetic and environmental factors). Plant phenotyping concerns the observable (or measurable) characteristics of the plant as the crop grows, as well as the association between the plant's genetic background and its response to the environment (GxE). In traditional phenotyping, measurements are scored manually, which is tedious, time-consuming and prone to subjective error. Nowadays, however, technology is involved in many applications and has been incorporated as a tool for plant phenotyping. Image processing techniques that integrate sensors and algorithms are therefore an alternative for evaluating these characteristics automatically (or semi-automatically).
Books on the topic "Imagerie RGB"
King, Jane Valerie. Validation of a Specific Technique of Relaxation with Guided Imagery (RGI) on State Anxiety in Graduate Nursing Students. 1987.
Book chapters on the topic "Imagerie RGB"
Lorenzo-Navarro, Javier, Modesto Castrillón-Santana, and Daniel Hernández-Sosa. "An Study on Re-identification in RGB-D Imagery." In Lecture Notes in Computer Science, 200–207. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35395-6_28.
Bakalos, Nikolaos, Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, Kassiani Papasotiriou, and Matthaios Bimpas. "Fusing RGB and Thermal Imagery with Channel State Information for Abnormal Activity Detection Using Multimodal Bidirectional LSTM." In Cyber-Physical Security for Critical Infrastructures Protection, 77–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69781-5_6.
Matos, João Pedro, Artur Machado, Ricardo Ribeiro, and Alexandra Moutinho. "Automatic People Detection Based on RGB and Thermal Imagery for Military Applications." In Robot 2023: Sixth Iberian Robotics Conference, 201–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59167-9_17.
Pádua, Luís, Nathalie Guimarães, Telmo Adão, Pedro Marques, Emanuel Peres, António Sousa, and Joaquim J. Sousa. "Classification of an Agrosilvopastoral System Using RGB Imagery from an Unmanned Aerial Vehicle." In Progress in Artificial Intelligence, 248–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30241-2_22.
Rozanda, Nesdi Evrilyan, M. Ismail, and Inggih Permana. "Segmentation Google Earth Imagery Using K-Means Clustering and Normalized RGB Color Space." In Computational Intelligence in Data Mining - Volume 1, 375–86. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-2205-7_36.
Niu, Qinglin, Haikuan Feng, Changchun Li, Guijun Yang, Yuanyuan Fu, Zhenhai Li, and Haojie Pei. "Estimation of Leaf Nitrogen Concentration of Winter Wheat Using UAV-Based RGB Imagery." In Computer and Computing Technologies in Agriculture XI, 139–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06179-1_15.
Sukkar, Abdullah, and Mustafa Turker. "Tree Detection from Very High Spatial Resolution RGB Satellite Imagery Using Deep Learning." In Recent Research on Geotechnical Engineering, Remote Sensing, Geophysics and Earthquake Seismology, 145–49. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-43218-7_34.
Guarin, Arnold, Homero Ortega, and Hans Garcia. "Acquisition System Based in a Low-Cost Optical Architecture with Single Pixel Measurements and RGB Side Information for UAV Imagery." In Applications of Computational Intelligence, 155–67. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36211-9_13.
Zefri, Yahya, Imane Sebari, Hicham Hajji, and Ghassane Aniba. "A Channel-Based Attention Deep Semantic Segmentation Model for the Extraction of Multi-type Solar Photovoltaic Arrays from Large-Scale Orthorectified UAV RGB Imagery." In Intelligent Sustainable Systems, 421–29. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7660-5_36.
Prabhakar, Dolonchapa, and Pradeep Kumar Garg. "Applying a Deep Learning Approach for Building Extraction From High-Resolution Remote Sensing Imagery." In Advances in Geospatial Technologies, 157–79. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-7319-1.ch008.
Full textConference papers on the topic "Imagerie RGB"
Han, Yiding, Austin Jensen, and Huifang Dou. "Programmable Multispectral Imager Development as Light Weight Payload for Low Cost Fixed Wing Unmanned Aerial Vehicles." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87741.
Taylor, Camillo, and Anthony Cowley. "Parsing Indoor Scenes Using RGB-D Imagery." In Robotics: Science and Systems 2012. Robotics: Science and Systems Foundation, 2012. http://dx.doi.org/10.15607/rss.2012.viii.051.
LaRocque, Armand, Brigitte Leblon, Melanie-Louise Leblanc, and Angela Douglas. "Surveying Migratory Waterfowl using UAV RGB Imagery." In IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021. http://dx.doi.org/10.1109/igarss47720.2021.9553747.
WuDunn, Marc, James Dunn, and Avideh Zakhor. "Point Cloud Segmentation using RGB Drone Imagery." In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191266.
WuDunn, Marc, Avideh Zakhor, Samir Touzani, and Jessica Granderson. "Aerial 3D building reconstruction from RGB drone imagery." In Geospatial Informatics X, edited by Kannappan Palaniappan, Gunasekaran Seetharaman, Peter J. Doucette, and Joshua D. Harguess. SPIE, 2020. http://dx.doi.org/10.1117/12.2558399.
Rueda, Hoover, Daniel Lau, and Gonzalo R. Arce. "RGB detectors on compressive snapshot multi-spectral imagers." In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418223.
Swamy, Shravan Kumar, Klaus Schwarz, Michael Hartmann, and Reiner M. Creutzburg. "RGB and IR imagery fusion for autonomous driving." In Multimodal Image Exploitation and Learning 2023, edited by Sos S. Agaian, Stephen P. DelMarco, and Vijayan K. Asari. SPIE, 2023. http://dx.doi.org/10.1117/12.2664336.
Chen, Xiwen, Bryce Hopkins, Hao Wang, Leo O’Neill, Fatemeh Afghah, Abolfazl Razi, Peter Fulé, Janice Coen, Eric Rowell, and Adam Watts. "Wildland Fire Detection and Monitoring using a Drone-collected RGB/IR Image Dataset." In 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2022. http://dx.doi.org/10.1109/aipr57179.2022.10092208.
Yokoya, Naoto, and Akira Iwasaki. "Airborne unmixing-based hyperspectral super-resolution using RGB imagery." In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6947019.
Schau, H. C. "Estimation of low-resolution visible spectra from RGB imagery." In SPIE Defense, Security, and Sensing, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2009. http://dx.doi.org/10.1117/12.814650.
Reports on the topic "Imagerie RGB"
Bhatt, Parth, Curtis Edson, and Ann MacLean. Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS). Michigan Technological University, September 2022. http://dx.doi.org/10.37099/mtu.dc.michigantech-p/16366.
Ley, Matt, Tom Baldvins, Hannah Pilkington, David Jones, and Kelly Anderson. Vegetation classification and mapping project: Big Thicket National Preserve. National Park Service, 2024. http://dx.doi.org/10.36967/2299254.
Ley, Matt, Tom Baldvins, David Jones, Hanna Pilkington, and Kelly Anderson. Vegetation classification and mapping: Gulf Islands National Seashore. National Park Service, May 2023. http://dx.doi.org/10.36967/2299028.