Selected scientific literature on the topic "Imagerie RGB"

Cite a source in APA, MLA, Chicago, Harvard and many other styles


Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Imagerie RGB".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online if it is present in the metadata.

Journal articles on the topic "Imagerie RGB":

1

Vigneau, Nathalie, Corentin Chéron, Aleixandre Verger, and Frédéric Baret. "Imagerie aérienne par drone : exploitation des données pour l'agriculture de précision". Revue Française de Photogrammétrie et de Télédétection, no. 213 (April 26, 2017): 125–31. http://dx.doi.org/10.52638/rfpt.2017.203.

Abstract:
As drone technology becomes more accessible and national regulations governing drone flights begin to emerge, many companies now use drones to acquire imagery. Among them, AIRINOV has chosen to specialize in agriculture and offers its services to farmers as well as to experimenters. AIRINOV operates senseFly eBee drones. The drone has a wingspan of 1 m, weighs 700 g including payload, and its flight is entirely automatic. The flight is programmed in advance and then controlled by an autopilot connected to an onboard GPS receiver and an inertial measurement unit. These sensors record the position and attitude of the drone during its flight, making it possible to geolocate the acquired images. A study carried out with ground targets established that the absolute positioning accuracy of the images is 2.06 m; however, registration on points with known coordinates yields georeferencing with centimetre-level accuracy. Alongside conventional (RGB) cameras, AIRINOV uses a four-band multispectral sensor. The sensor's wavelengths are configurable but are generally green, red, red edge and near infrared. These wavelengths allow not only the monitoring of vegetation indices such as NDVI but also access to biochemical and biophysical variables through inversion of a radiative transfer model. A study conducted jointly with INRA Avignon and CREAF gives access to the Green Area Index (GAI) and the chlorophyll content (Cab) of rapeseed, wheat, maize and barley. This article presents GAI estimates with an RMSE of 0.25 and Cab estimates with an RMSE of 4.75 micrograms/cm2. The quality of the estimates, combined with the drone's high revisit capacity and the multiplicity of available indicators, demonstrates the great value of drones for phenotyping and for monitoring field trial platforms.
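
The vegetation-index monitoring mentioned above rests on simple band arithmetic; as a hedged sketch (not code from the article), NDVI can be computed per pixel from co-registered red and near-infrared bands:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    # Guard against division by zero on dark or no-data pixels.
    return np.where(denom > 0, (nir - red) / denom, 0.0)
```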
2

Shen, Xin, Lin Cao, Bisheng Yang, Zhong Xu, and Guibin Wang. "Estimation of Forest Structural Attributes Using Spectral Indices and Point Clouds from UAS-Based Multispectral and RGB Imageries". Remote Sensing 11, no. 7 (April 3, 2019): 800. http://dx.doi.org/10.3390/rs11070800.

Abstract:
Forest structural attributes are key indicators for parameterization of forest growth models, which play key roles in understanding the biophysical processes and function of the forest ecosystem. In this study, UAS-based multispectral and RGB imageries were used to estimate forest structural attributes in planted subtropical forests. The point clouds were generated from multispectral and RGB imageries using the digital aerial photogrammetry (DAP) approach. Different suites of spectral and structural metrics (i.e., wide-band spectral indices and point cloud metrics) derived from multispectral and RGB imageries were compared and assessed. The selected spectral and structural metrics were used to fit partial least squares (PLS) regression models individually and in combination to estimate forest structural attributes (i.e., Lorey's mean height (HL) and volume (V)), and the capabilities of multispectral- and RGB-derived spectral and structural metrics in predicting forest structural attributes in various stem density forests were assessed and compared. The results indicated that the derived DAP point clouds had very good visual quality and that most of the structural metrics extracted from the multispectral DAP point cloud were highly correlated with the metrics derived from the RGB DAP point cloud (R2 > 0.75). Although the models including only spectral indices had the capability to predict forest structural attributes with relatively high accuracies (R2 = 0.56–0.69, relative Root-Mean-Square Error (RMSE) = 10.88–21.92%), the models with spectral and structural metrics had higher accuracies (R2 = 0.82–0.93, relative RMSE = 4.60–14.17%). Moreover, the models fitted using multispectral- and RGB-derived metrics had similar accuracies (∆R2 = 0–0.02, ∆ relative RMSE = 0.18–0.44%). In addition, the combo models fitted with stratified sample plots had relatively higher accuracies than those fitted with all of the sample plots (∆R2 = 0–0.07, ∆ relative RMSE = 0.49–3.08%), and the accuracies increased with increasing stem density.
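
As a minimal sketch of the modelling step summarized above, assuming a plot-level feature matrix of spectral indices and point-cloud metrics and a measured response such as Lorey's mean height (all names and data below are illustrative, not from the paper):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
X = rng.random((50, 12))        # 50 plots x 12 spectral/structural metrics (placeholder)
y = 5 + 15 * rng.random(50)     # Lorey's mean height in metres (placeholder)

pls = PLSRegression(n_components=4)
y_pred = cross_val_predict(pls, X, y, cv=5).ravel()

rmse = np.sqrt(np.mean((y - y_pred) ** 2))
print(f"relative RMSE: {100 * rmse / y.mean():.2f}%")
```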
3

Priyankara, Prabath, and Takehiro Morimoto. "UAV Based Agricultural Crop Canopy Mapping for Crop Field Monitoring". Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-303-2019.

Abstract:
Nowadays, mapping the agricultural crop canopy at different growing stages provides vital data for crop field monitoring, more so than field-based observations in large-scale crop fields. By mapping the crop canopy, it is easy to analyse the status of an agricultural field using different vegetation indices, and the data can further be used to estimate yield. This gives farmers and decision makers timely and reliable spatial information. Mapping the crop canopy at different growing stages is very challenging with satellite imagery, mainly because of high cloud coverage at recording time; the cost of satellite imagery also rises in proportion to its spatial resolution, ordering takes time, and some growing stages may be missed. This problem can be solved by using low-cost RGB-based UAV imagery, which can be acquired at low altitudes (below the clouds) where and when necessary. This study is therefore aimed at mapping a maize crop canopy using RGB-based UAV imagery. UAV flights at different growth stages were carried out with a high-resolution RGB camera over a maize field in Ampara District, Sri Lanka. For accurate crop canopy mapping, very high-resolution multi-temporal ortho-mosaicked images with centimetre-level spatial resolution were derived from the UAV imagery using free and open-source image processing platforms. The resulting multi-temporal ortho-mosaics can be used to map and monitor the crop field in a precise and efficient manner. This information is very important for farmers and decision makers to properly manage crop fields.
4

Purwanto, Anang Dwi, and Wikanti Asriningrum. "IDENTIFICATION OF MANGROVE FORESTS USING MULTISPECTRAL SATELLITE IMAGERIES". International Journal of Remote Sensing and Earth Sciences (IJReSES) 16, no. 1 (October 30, 2019): 63. http://dx.doi.org/10.30536/j.ijreses.2019.v16.a3097.

Abstract:
The visual identification of mangrove forests is strongly constrained by the choice of RGB composite. This research aims to determine the best RGB band combination for identifying mangrove forest in Segara Anakan, Cilacap, using the Optimum Index Factor (OIF) method. The OIF method uses the standard deviation and the correlation coefficients of a combination of three image bands. The image data comprise Landsat 8 imagery acquired on 30 May 2013, Sentinel-2A imagery acquired on 18 March 2018, and SPOT 6 imagery acquired on 10 January 2015. The results show that the band composites 564 (NIR+SWIR+Red) from Landsat 8 and 8a114 (Vegetation Red Edge+SWIR+Red) from Sentinel-2A are the best RGB composites for identifying mangrove forest, in addition to 341 (Red+NIR+Blue) from SPOT 6. The near-infrared (NIR) and short-wave infrared (SWIR) bands play an important role in delineating mangrove forests: vegetation reflects strongly at NIR wavelengths, and the SWIR band is very sensitive to evaporation and to the identification of wetlands.
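
For reference, the OIF of a three-band combination is the ratio of the summed band standard deviations to the summed absolute pairwise correlation coefficients; the combination with the highest OIF carries the most information with the least redundancy. A minimal sketch under that definition (the band arrays are placeholders):

```python
import numpy as np
from itertools import combinations

def oif(b1: np.ndarray, b2: np.ndarray, b3: np.ndarray) -> float:
    """Optimum Index Factor: sum of band std devs / sum of |pairwise correlations|."""
    bands = [b.ravel().astype(np.float64) for b in (b1, b2, b3)]
    std_sum = sum(b.std() for b in bands)
    corr_sum = sum(abs(np.corrcoef(x, y)[0, 1]) for x, y in combinations(bands, 2))
    return std_sum / corr_sum

rng = np.random.default_rng(0)
nir, swir, red = (rng.random((100, 100)) for _ in range(3))
print(oif(nir, swir, red))
```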
5

Chhatkuli, S., T. Satoh, and K. Tachibana. "MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 103–6. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-103-2015.

Abstract:
The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces an overall decent 3D city model and is generally suited to generating 3D models of building roofs and non-complex terrain. However, the 3D model generated automatically from aerial imagery generally lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, it often suffers from undulated road surfaces, non-conforming building shapes, and loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, and details under bridges. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helped to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated in the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
6

Argyrou, Argyro, Athos Agapiou, Apostolos Papakonstantinou, and Dimitrios D. Alexakis. "Comparison of Machine Learning Pixel-Based Classifiers for Detecting Archaeological Ceramics". Drones 7, no. 9 (September 13, 2023): 578. http://dx.doi.org/10.3390/drones7090578.

Abstract:
Recent improvements in low-altitude remote sensors and image processing analysis can be utilised to support archaeological research. Over the last decade, the increased use of remote sensing sensors and their products for archaeological science and cultural heritage studies has been reported in the literature. Therefore, different spatial and spectral analysis datasets have been applied to recognise archaeological remains or map environmental changes over time. Recently, more thorough object detection approaches have been adopted by researchers for the automated detection of surface ceramics. In this study, we applied several supervised machine learning classifiers using red-green-blue (RGB) and multispectral high-resolution drone imagery over a simulated archaeological area to evaluate their performance towards semi-automatic surface ceramic detection. The overall results indicated that low-altitude remote sensing sensors and advanced image processing techniques can be innovative in archaeological research. Nevertheless, the study results also pointed out existing research limitations in the detection of surface ceramics, which affect the detection accuracy. A novel, robust methodology was developed to address the "accuracy paradox" of imbalanced data samples and to optimise archaeological surface ceramic detection. At the same time, this study attempted to fill a gap in the literature by blending AI methodologies for non-uniformly distributed classes. Indeed, detecting surface ceramics in RGB or multispectral drone imagery should be reconsidered as an "imbalanced data distribution" problem. To address this paradox, novel approaches need to be developed.
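
The "accuracy paradox" named above is that raw accuracy can look high on imbalanced data even for a classifier that misses the rare class; a hedged sketch (placeholder data, not the study's pipeline) contrasts it with balanced accuracy and class weighting:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 5))
y = (rng.random(2000) < 0.05).astype(int)   # ~5% "ceramic" pixels: rare class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = RandomForestClassifier(class_weight="balanced", random_state=1).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy:         ", accuracy_score(y_te, pred))           # inflated by majority class
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))  # mean recall per class
```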
7

Mawardi, Sonny, Emi Sukiyah, and Iyan Haryanto. "Morphotectonic Characteristics Of Cisadane Watersshed Based On Satellite Images Analysis". Jurnal Geologi dan Sumberdaya Mineral 20, no. 3 (August 22, 2019): 175. http://dx.doi.org/10.33332/jgsm.geologi.v20i3.464.

Abstract:
The Cisadane Watershed is one of the most rapidly growing areas in terms of infrastructure development, and has developed into a centre of residential, industrial, administrative and other economic activities. The purpose of this paper is to use remote sensing satellite imagery to identify the morphotectonic characteristics of the Cisadane watershed both qualitatively and quantitatively. Stereomodel processing, stereoplotting and stereocompilation on the TerraSAR-X Digital Surface Model (DSM) and SPOT 6 imagery produced a Digital Terrain Model (DTM) image unaffected by land cover. A fusion of the DTM and Landsat 8 RGB 567+8 images is used to interpret the distribution of lithology, geomorphological units and lineaments, which are an indication of geological structures. The morphotectonic characteristics of the sub-watersheds were assessed quantitatively through bifurcation ratio (Rb) calculations, which indicate tectonic deformation. Based on the qualitative and quantitative analysis of the satellite images, the upstream, middle and downstream parts of the Cisadane Watershed have been deformed.

Keywords: satellite images, morphotectonic, DSM, DTM, Cisadane Watershed.
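
The bifurcation ratio mentioned above is, in Horton's definition, the ratio between the number of streams of one order and the number of streams of the next higher order, Rb = N_u / N_(u+1); a toy example (the stream counts are illustrative, not from the paper):

```python
# Streams per Strahler order for one sub-watershed (illustrative counts).
stream_counts = {1: 58, 2: 13, 3: 3, 4: 1}

orders = sorted(stream_counts)
ratios = [stream_counts[u] / stream_counts[u + 1] for u in orders[:-1]]
mean_rb = sum(ratios) / len(ratios)

# Values outside the commonly cited ~3-5 range for natural drainage networks
# are often read as a sign of structural/tectonic control.
print(ratios, round(mean_rb, 2))
```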
8

Vanbrabant, Yasmin, Stephanie Delalieux, Laurent Tits, Klaas Pauly, Joke Vandermaesen, and Ben Somers. "Pear Flower Cluster Quantification Using RGB Drone Imagery". Agronomy 10, no. 3 (March 17, 2020): 407. http://dx.doi.org/10.3390/agronomy10030407.

Abstract:
High quality fruit production requires the regulation of the crop load on fruit trees by reducing the number of flowers and fruitlets early in the growing season, if the bearing is too high. Several automated flower cluster quantification methods based on proximal and remote imagery have been proposed to estimate flower cluster numbers, but their overall performance is still far from satisfactory. For other methods, the ability to estimate flower clusters within a tree is unknown, since they were only tested on images from one perspective. One of the main reported bottlenecks is the presence of occluded flowers due to the limitations of the top-view perspective of the platform-sensor combinations. In order to tackle this problem, the multi-view perspective offered by the Red–Green–Blue (RGB) colored dense point clouds retrieved from drone imagery is compared and evaluated against the field-based flower cluster number per tree. Experimental results obtained on a dataset of two pear tree orchards (N = 144) demonstrate that our 3D object-based method, a combination of pixel-based classification with the stochastic gradient boosting algorithm and density-based clustering (DBSCAN), significantly outperforms the state-of-the-art in flower cluster estimations from the 2D top-view (R2 = 0.53), with R2 > 0.7 and RRMSE < 15%.
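
A minimal sketch of the clustering stage named above: DBSCAN grouping 3D points previously classified as "flower" into clusters (the eps/min_samples values and the point cloud are illustrative, not those of the study):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
flower_points = rng.random((500, 3))   # placeholder (x, y, z) points labelled "flower"

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(flower_points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # label -1 marks noise
print(f"estimated flower clusters: {n_clusters}")
```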
9

Simes, Tomás, Luís Pádua, and Alexandra Moutinho. "Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery". Remote Sensing 16, no. 1 (December 20, 2023): 30. http://dx.doi.org/10.3390/rs16010030.

Abstract:
Wildfires present a significant threat to ecosystems and human life, requiring effective prevention and response strategies. Equally important is the study of post-fire damages, specifically burnt areas, which can provide valuable insights. This research focuses on the detection and classification of burnt areas and their severity using RGB and multispectral aerial imagery captured by an unmanned aerial vehicle. Datasets containing features computed from multispectral and/or RGB imagery were generated and used to train and optimize support vector machine (SVM) and random forest (RF) models. Hyperparameter tuning was performed to identify the best parameters for a pixel-based classification. The findings demonstrate the superiority of multispectral data for burnt area and burn severity classification with both RF and SVM models. While the RF model achieved a 95.5% overall accuracy for the burnt area classification using RGB data, the RGB models encountered challenges in distinguishing between mildly and severely burnt classes in the burn severity classification. However, the RF model incorporating mixed data (RGB and multispectral) achieved the highest accuracy of 96.59%. The outcomes of this study contribute to the understanding and practical implementation of machine learning techniques for assessing and managing burnt areas.
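
As a hedged sketch of the model optimization described above (the pixel-feature table, grid values and scoring are illustrative, not the study's exact setup):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder pixel features (e.g., band values and indices) and severity labels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```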
10

Semah, Franck. "Imagerie médicale et épilepsies". Revue Générale Nucléaire, no. 4 (August 2001): 36–37. http://dx.doi.org/10.1051/rgn/20014036.


Theses on the topic "Imagerie RGB":

1

Lefévre, Soizic. "Caractérisation de la qualité des raisins par imagerie". Electronic Thesis or Diss., Reims, 2023. http://www.theses.fr/2023REIMS017.

Abstract:
Identifying the health condition of grapes at harvest time is a major issue in producing quality wines. To meet this challenge, data were acquired by spectrometry, hyperspectral imaging and RGB imaging on grape samples during the harvest. Several pre-treatments adapted to each type of data are applied, such as normalization, reduction, extraction of characteristic vectors and segmentation of useful areas. From an imaging point of view, the false-colour reconstruction of hyperspectral images, being far from reality, does not allow labelling of all the intra-class diversity. The visual quality of RGB imaging, on the other hand, enables accurate class labelling. From this labelling, classifiers such as support vector machines, random forests, maximum likelihood estimation, spectral matching and k-means are tested and trained on the labelled bases. Depending on the nature of the data, the best-performing classifier is applied to whole images of grape clusters or crates of grapes of several varieties from different parcels. The quality indices obtained from RGB image processing are very close to the estimates made by experts in the field.
2

Kacete, Amine. "Unconstrained Gaze Estimation Using RGB-D Camera". Thesis, CentraleSupélec, 2016. http://www.theses.fr/2016SUPL0012/document.

Abstract:
In this thesis, we tackled the automatic gaze estimation problem in unconstrained user environments. This work takes place in the computer vision research field applied to the perception of humans and their behaviors. Many existing industrial solutions are commercialized and provide acceptable accuracy in gaze estimation. These solutions often use complex hardware, such as infrared cameras (embedded in a head-mounted device or in a remote system), making them intrusive, heavily constrained by the user's environment and inappropriate for large-scale public use. We focus on estimating gaze using cheap, low-resolution and non-intrusive devices like the Kinect sensor. We develop new methods to address challenging conditions such as head pose changes, illumination conditions and large user-sensor distances. In this work we investigated different gaze estimation paradigms. We first developed two automatic gaze estimation systems following two classical approaches: feature-based and semi-appearance-based. The major limitation of such paradigms lies in their way of designing gaze systems, which assumes total independence between the eye appearance and head pose blocks. To overcome this limitation, we converged to a novel paradigm which aims at unifying the two previous components and building a global gaze manifold; we explored two global approaches across the experiments by using synthetic and real RGB-D gaze samples.
3

Kadkhodamohammadi, Abdolrahim. "3D detection and pose estimation of medical staff in operating rooms using RGB-D images". Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD047/document.

Abstract:
In this thesis, we address the two problems of person detection and pose estimation in Operating Rooms (ORs), which are key ingredients in the development of surgical assistance applications. We perceive the OR using compact RGB-D cameras that can be conveniently integrated in the room. These sensors provide complementary information about the scene, which enables us to develop methods that can cope with numerous challenges present in the OR, e.g. clutter, textureless surfaces and occlusions. We present novel part-based approaches that take advantage of depth, multi-view and temporal information to construct robust human detection and pose estimation models. Evaluation is performed on new single- and multi-view datasets recorded in operating rooms. We demonstrate very promising results and show that our approaches outperform state-of-the-art methods on this challenging data acquired during real surgeries.
4

Devanne, Maxime. "3D human behavior understanding by shape analysis of human motion and pose". Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10138/document.

Abstract:
The emergence of RGB-D sensors providing the 3D structure of both the scene and the human body offers new opportunities for studying human motion and understanding human behaviors. However, the design and development of models for behavior recognition that are both accurate and efficient is a challenging task due to the variability of the human pose, the complexity of human motion and possible interactions with the environment. In this thesis, we first focus on the action recognition problem by representing a human action as the trajectory of the 3D coordinates of human body joints over time, thus capturing simultaneously the body shape and the dynamics of the motion. The action recognition problem is then formulated as the problem of computing the similarity between the shapes of trajectories in a Riemannian framework. Experiments carried out on four representative benchmarks demonstrate the potential of the proposed solution in terms of accuracy and latency for low-latency action recognition. Second, we extend the study to more complex behaviors by analyzing the evolution of the human pose shape to decompose the motion stream into short motion units. Each motion unit is then characterized by the motion trajectory and the depth appearance around the hand joints, so as to describe the human motion and the interaction with objects. Finally, the sequence of temporal segments is modeled through a Dynamic Naive Bayesian Classifier. Experiments on four representative datasets evaluate the potential of the proposed approach in different contexts, including recognition and online detection of behaviors.
5

Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement". PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.

Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; they avoid feature extraction and matching entirely. The goal is to produce accurate 3D pose and structure estimates. The cost functions presented minimize the sensor error, since the measurements are not transformed or modified. In photometric camera pose estimation, the 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, the 3D structure is refined using a number of additional views and an image-based cost metric. The main application domains in this work are indoor reconstruction, robotics and augmented reality. The overall goal of the project is to improve image-based estimation methods and to produce computationally efficient methods that can be adopted in real applications. The main questions addressed in this work are: What is an efficient formulation for an image-based 3D pose estimation and structure refinement task? How should the computation be organized to enable an efficient real-time implementation? What are the practical considerations in using image-based estimation methods in applications such as augmented reality and 3D reconstruction?
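
As a worked equation, the standard formulation of such a direct (photometric) cost, written here for illustration and not quoted from the thesis: the camera pose parameters are estimated by minimizing the squared intensity differences between a reference image and the current image over warped pixel coordinates,

```latex
\min_{\xi} \; E(\xi) \;=\; \sum_{i} \Big( I\big(w(\mathbf{x}_i;\, \xi)\big) - I_{\mathrm{ref}}(\mathbf{x}_i) \Big)^{2}
```

where the warp w projects each pixel (with known depth) into the current view under the pose parameters, so no feature extraction or matching is required.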
6

Alston, Laure. "Spectroscopie de fluorescence et imagerie optique pour l'assistance à la résection de gliomes : conception et caractérisation de systèmes de mesure et modèles de traitement des données associées, sur fantômes et au bloc opératoire". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1295/document.

Abstract:
Gliomas are infiltrative brain tumors which are still hardly curable, notably because of the difficulty of precisely delineating their margins during surgery. Intraoperative 5-ALA-induced protoporphyrin IX (PpIX) fluorescence microscopy has shown its relevance for assisting neurosurgeons but lacks sensitivity. In this thesis, we perform a spectroscopic clinical trial on 10 patients under the assumption that the collected fluorescence is a linear combination of the contributions of two states of PpIX, whose proportions vary with the density of tumor cells. This work starts with the development of an intraoperative, portable and real-time fluorescence spectroscopic device that provides multi-wavelength excitation. We then demonstrate its use on PpIX phantoms with tissue-mimicking properties. This first enables a reference emitted spectrum to be obtained for each state separately, and then permits the development of a fitting model that adjusts any emitted spectrum as a linear combination of the two references in the 608-637 nm spectral band. Next, we present the steps taken to obtain approval for the clinical trial, especially the risk analysis. In vivo data analysis is then presented, showing that we detect fluorescence where current microscopes cannot, which could indicate a change in PpIX state from the glioma center to its margins. Besides, the relevance of multi-wavelength excitation is highlighted, as the correlation between the three spectra measured for a same sample decreases with the density of tumor cells. Finally, the complementary need to intraoperatively identify cerebral functional areas is tackled with optical measurements as a perspective, and other properties of PpIX, such as its fluorescence lifetime and two-photon fluorescence, are also investigated on phantoms.
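
A minimal sketch of the fitting model described above: a measured emission spectrum on the 608-637 nm band approximated as a non-negative linear combination of two reference spectra (the Gaussian-shaped references below are placeholders; the two PpIX states are commonly associated with emission peaks near 620 and 634 nm):

```python
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(608, 637, 120)                 # wavelengths, nm
ref_a = np.exp(-0.5 * ((wl - 620) / 6.0) ** 2)  # placeholder reference, state A
ref_b = np.exp(-0.5 * ((wl - 634) / 6.0) ** 2)  # placeholder reference, state B

rng = np.random.default_rng(7)
measured = 0.3 * ref_a + 0.7 * ref_b + 0.01 * rng.random(wl.size)

A = np.column_stack([ref_a, ref_b])
coeffs, _ = nnls(A, measured)                   # non-negative least squares
print(coeffs / coeffs.sum())                    # estimated state proportions
```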
7

Chakib, Reda. "Acquisition et rendu 3D réaliste à partir de périphériques "grand public"". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0101/document.

Abstract:
Digital imaging, from image synthesis to computer vision, is experiencing a strong evolution, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is undergoing a rapid rise, contributes to the strong demand for this type of camera for 3D scanning needs. The objective of this thesis is to acquire and master know-how in the field of capture/acquisition of 3D models, in particular regarding realistic rendering. The realization of a 3D scanner from an RGB-D camera is part of this goal. During the acquisition phase, especially for a portable device, two main problems arise: the problem related to the reference frame of each capture, and the final rendering of the reconstructed object.
8

Chiron, Guillaume. "Système complet d’acquisition vidéo, de suivi de trajectoires et de modélisation comportementale pour des environnements 3D naturellement encombrés : application à la surveillance apicole". Thesis, La Rochelle, 2014. http://www.theses.fr/2014LAROS030/document.

Abstract:
This manuscript provides the basis for a complete video surveillance chain for naturally cluttered environments. We identify and address a wide spectrum of methodological and technological barriers inherent to: 1) the acquisition of video sequences in natural conditions, 2) image processing problems, 3) multi-target tracking ambiguities, 4) the discovery and modeling of recurring behavioral patterns, and 5) data fusion. The application context of our work is the monitoring of honeybees, and in particular the study of the trajectories of bees in flight in front of their hive. In fact, this thesis is part of a feasibility and prototyping study carried out within the two interdisciplinary projects EPERAS and RISQAPI (undertaken in collaboration with the INRA institute and the French National Museum of Natural History). For us computer scientists, and for the biologists who accompanied us, it is a completely new area of investigation, for which the domain knowledge usually essential for such applications is still in its infancy. Unlike existing approaches for monitoring insects, we propose to tackle the problem in three-dimensional space through the use of a high-frequency stereo camera. In this context, we detail our new target detection method, which we call HIDS segmentation. Concerning the computation of trajectories, we explored several tracking approaches, relying on more or less a priori knowledge, that are able to deal with the extreme conditions of the application (e.g. many small targets following chaotic movements). Once the trajectories are collected, we organize them according to a hierarchical data structure and apply a Bayesian nonparametric approach for discovering emergent behaviors within the insect colony. The exploratory analysis of the trajectories generated by the crowded scene is performed with an unsupervised classification method, simultaneously over different semantic levels, where the number of clusters for each level is not defined a priori but estimated from the data only. This approach was first validated using a pseudo ground truth generated by a Multi-Agent System, and then tested on real data.
9

Muske, Manideep Sai Yadav. "To Detect Water-Puddle On Driving Terrain From RGB Imagery Using Deep Learning Algorithms". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21229.

Abstract:
Background: With the emerging application of autonomous vehicles in the automotive industry, several efforts have been made towards the complete adoption of autonomous vehicles. One of the several problems in creating autonomous technology is the detection of water puddles, which can damage internal components and cause the vehicle to lose control. This thesis focuses on the detection of water puddles in on-road and off-road conditions with the use of Deep Learning models. Objectives: The thesis focuses on finding suitable Deep Learning algorithms for detecting water puddles, after which an experiment is performed with the chosen algorithms. The algorithms are then compared with each other based on the performance evaluation of the trained models. Methods: The study uses a literature review to find the appropriate Deep Learning algorithms to answer the first research question, followed by an experiment to compare and evaluate the selected algorithms. Metrics used to compare the algorithms include accuracy, precision, recall, F1 score, training time, and detection speed. Results: The literature review indicated that Faster R-CNN and SSD are suitable algorithms for object detection applications. The experimental results indicated that on the basis of accuracy, recall, and F1 score, Faster R-CNN is the better-performing algorithm, whereas on the basis of precision, training time, and detection speed, SSD is the faster algorithm. Conclusions: After carefully analyzing the results, Faster R-CNN is preferred for its better performance, because in the real-life scenario the thesis targets, correctly predicting water puddles is key.
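
For reference, the comparison metrics listed above all derive from the detector's true/false positive and negative counts; a small sketch (the counts are illustrative):

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy, precision, recall and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

print(detection_metrics(tp=80, fp=10, fn=20, tn=890))
```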
10

Fernández, Gallego José Armando. "Image processing techniques for plant phenotyping using RGB and thermal imagery = Técnicas de procesamiento de imágenes RGB y térmicas como herramienta para fenotipado de cultivos". Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/669111.

Abstract:
World cereal stocks need to increase in order to meet growing demand. Currently, maize, rice and wheat are the main crops worldwide, while other cereals such as barley, sorghum, oats or different millets also rank highly. Crop productivity is directly affected by climate change factors such as heat, drought, floods or storms, and researchers agree that global climate change is having a major impact on it. Several studies have therefore focused on climate change scenarios and, more specifically, on abiotic stresses in cereals. In the case of heat stress, for instance, high temperatures between anthesis and grain filling can decrease grain yield. In order to deal with climate change and future environmental scenarios, plant breeding is one of the main alternatives; breeding is even considered to contribute a larger component of yield growth than management. Plant breeding programs focus on identifying genotypes with high yield and quality to act as parents, and on advancing the best individuals among the segregating population to develop new plant varieties. Breeders use phenotypic data, plant and crop performance, and genetic information to improve yield by selection (GxE, with G and E indicating genetic and environmental factors). More factors must be taken into account to increase yield, such as the education of farmers, economic incentives and the use of new technologies (GxExM, with M indicating management). Plant phenotyping concerns the observable (or measurable) characteristics of the plant during crop growth, as well as the association between the plant's genetic background and its response to the environment (GxE). In traditional phenotyping, measurements are collected manually, which is tedious, time-consuming and prone to subjective errors. Nowadays, technology has been incorporated into plant phenotyping as a tool: the use of image processing techniques integrating sensors and algorithms is an alternative for assessing these traits automatically (or semi-automatically). Images have become a useful tool for plant phenotyping because data from the sensors are most frequently processed and analyzed as images in two (2D) or three (3D) dimensions. An image is an arrangement of pixels in regular Cartesian coordinates, i.e. a matrix in which each pixel holds a numerical value representing the number of photons captured by the sensor within the exposure time. An image is therefore the optical representation of an object illuminated by a radiating source. The main characteristics of an image are defined by the spectral and spatial properties of the sensor, with the spatial properties of the resulting image also heavily dependent on the sensor platform (which determines the distance from the target object).
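
As a tiny illustration of the image-as-matrix description above (the file name is hypothetical; only standard numpy/Pillow behaviour is assumed):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("plot_rgb.jpg"))   # hypothetical RGB photo of a field plot
print(img.shape, img.dtype)                    # (rows, cols, 3), typically uint8

red = img[..., 0].astype(np.float64)
green = img[..., 1].astype(np.float64)
blue = img[..., 2].astype(np.float64)

# Simple per-pixel green fraction, one of many RGB-derived vegetation cues.
green_fraction = green / np.clip(red + green + blue, 1, None)
```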

Books on the topic "Imagerie RGB":

1

King, Jane Valerie. VALIDATION OF A SPECIFIC TECHNIQUE OF RELAXATION WITH GUIDED IMAGERY (RGI) ON STATE ANXIETY IN GRADUATE NURSING STUDENTS. 1987.


Book chapters on the topic "Imagerie RGB":

1

Lorenzo-Navarro, Javier, Modesto Castrillón-Santana, and Daniel Hernández-Sosa. "An Study on Re-identification in RGB-D Imagery". In Lecture Notes in Computer Science, 200–207. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35395-6_28.

2

Bakalos, Nikolaos, Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, Kassiani Papasotiriou, and Matthaios Bimpas. "Fusing RGB and Thermal Imagery with Channel State Information for Abnormal Activity Detection Using Multimodal Bidirectional LSTM". In Cyber-Physical Security for Critical Infrastructures Protection, 77–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69781-5_6.

Abstract:
In this paper, we present a multimodal deep model for detection of abnormal activity, based on bidirectional Long Short-Term Memory neural networks (LSTM). The proposed model exploits three different input modalities: RGB imagery, thermographic imagery and Channel State Information from Wi-Fi signal reflectance to estimate human intrusion and suspicious activity. The fused multimodal information is used as input in a Bidirectional LSTM, which has the benefit of being able to capture temporal interdependencies in both past and future time instances, a significant aspect in the discussed unusual activity detection scenario. We also present a Bayesian optimization framework that fine-tunes the Bidirectional LSTM parameters in an optimal manner. The proposed framework is evaluated on real-world data from a critical water infrastructure protection and monitoring scenario, and the results indicate a superior performance compared to other unimodal and multimodal approaches and classification models.
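
A hedged sketch of this kind of fusion architecture, assuming per-frame feature vectors for each modality have already been extracted (all dimensions are illustrative; the paper's exact architecture and its Bayesian hyperparameter tuning are not reproduced here):

```python
import torch
import torch.nn as nn

class MultimodalBiLSTM(nn.Module):
    """Concatenate per-frame RGB, thermal and CSI features; classify with a BiLSTM."""

    def __init__(self, rgb_dim=512, thermal_dim=512, csi_dim=128,
                 hidden=256, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(rgb_dim + thermal_dim + csi_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, rgb, thermal, csi):
        # Each input: (batch, time, feature_dim); fuse by concatenation per time step.
        x = torch.cat([rgb, thermal, csi], dim=-1)
        out, _ = self.lstm(x)            # out: (batch, time, 2 * hidden)
        return self.head(out[:, -1])     # logits from the last time step

model = MultimodalBiLSTM()
logits = model(torch.randn(4, 16, 512), torch.randn(4, 16, 512),
               torch.randn(4, 16, 128))
print(logits.shape)                      # torch.Size([4, 2])
```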
3

Matos, João Pedro, Artur Machado, Ricardo Ribeiro, and Alexandra Moutinho. "Automatic People Detection Based on RGB and Thermal Imagery for Military Applications". In Robot 2023: Sixth Iberian Robotics Conference, 201–12. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-59167-9_17.

4

Pádua, Luís, Nathalie Guimarães, Telmo Adão, Pedro Marques, Emanuel Peres, António Sousa, and Joaquim J. Sousa. "Classification of an Agrosilvopastoral System Using RGB Imagery from an Unmanned Aerial Vehicle". In Progress in Artificial Intelligence, 248–57. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30241-2_22.

5

Rozanda, Nesdi Evrilyan, M. Ismail, and Inggih Permana. "Segmentation Google Earth Imagery Using K-Means Clustering and Normalized RGB Color Space". In Computational Intelligence in Data Mining - Volume 1, 375–86. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-2205-7_36.

6

Niu, Qinglin, Haikuan Feng, Changchun Li, Guijun Yang, Yuanyuan Fu, Zhenhai Li, and Haojie Pei. "Estimation of Leaf Nitrogen Concentration of Winter Wheat Using UAV-Based RGB Imagery". In Computer and Computing Technologies in Agriculture XI, 139–53. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-06179-1_15.

7

Sukkar, Abdullah, and Mustafa Turker. "Tree Detection from Very High Spatial Resolution RGB Satellite Imagery Using Deep Learning". In Recent Research on Geotechnical Engineering, Remote Sensing, Geophysics and Earthquake Seismology, 145–49. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-43218-7_34.

8

Guarin, Arnold, Homero Ortega, and Hans Garcia. "Acquisition System Based in a Low-Cost Optical Architecture with Single Pixel Measurements and RGB Side Information for UAV Imagery". In Applications of Computational Intelligence, 155–67. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-36211-9_13.

9

Zefri, Yahya, Imane Sebari, Hicham Hajji, and Ghassane Aniba. "A Channel-Based Attention Deep Semantic Segmentation Model for the Extraction of Multi-type Solar Photovoltaic Arrays from Large-Scale Orthorectified UAV RGB Imagery". In Intelligent Sustainable Systems, 421–29. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-7660-5_36.

10

Prabhakar, Dolonchapa, and Pradeep Kumar Garg. "Applying a Deep Learning Approach for Building Extraction From High-Resolution Remote Sensing Imagery". In Advances in Geospatial Technologies, 157–79. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-7319-1.ch008.

Abstract:
As data science is applied to the mapping of buildings, great attention has been given to the potential of deep learning and new data sources. Given that convolutional neural networks (CNNs) dominate image classification tasks, automating the building extraction process is becoming more and more common. Increased access to unstructured data (such as imagery and text) and developments in deep learning and computer vision algorithms have improved the possibility of automating the extraction of building attributes from satellite images in a cost-effective and large-scale manner. By applying intelligent software-based solutions to satellite imagery, the manual process of acquiring features such as building footprints, which is time-consuming and expensive, can be expedited. Buildings can be recovered from RGB images and identified with high accuracy. This chapter offers suggestions to accelerate the development of DL-centred building extraction techniques using remotely sensed images.
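
As a hedged, minimal sketch of CNN-based building extraction framed as per-pixel (semantic) segmentation — a toy fully convolutional network, not the chapter's model:

```python
import torch
import torch.nn as nn

# Toy fully convolutional network: one building/background logit per pixel.
seg_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=1),
)

rgb_tile = torch.randn(1, 3, 256, 256)    # placeholder satellite tile
target = torch.zeros(1, 1, 256, 256)      # placeholder footprint mask
logits = seg_net(rgb_tile)
loss = nn.BCEWithLogitsLoss()(logits, target)
print(logits.shape, float(loss))
```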

Conference proceedings on the topic "Imagerie RGB":

1

Han, Yiding, Austin Jensen, and Huifang Dou. "Programmable Multispectral Imager Development as Light Weight Payload for Low Cost Fixed Wing Unmanned Aerial Vehicles". In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
In this paper, we develop a lightweight and cost-efficient multispectral imager payload for low-cost, fixed-wing UAVs (Unmanned Aerial Vehicles) that need no runway for takeoff and landing. The imager is band-reconfigurable, covering both the visual (RGB) and near-infrared (NIR) spectrum. The number of RGB and NIR sensors is scalable, depending on the demands of specific applications. The UAV's onboard microcomputer programs and controls the imager system, synchronizing each camera individually to capture airborne imagery. It also bridges the payload to the UAV system by sending and receiving message packets. The airborne imagery is time-stamped with the corresponding local and geodetic coordinate data measured by the onboard IMU (Inertial Measurement Unit) and GPS (Global Positioning System) module. Subsequently, the imagery is orthorectified using the recorded geo-referencing data. Applications of such an imager system include multispectral remote sensing, ground mapping, target recognition, etc. In this paper, we outline the technologies, demonstrate experimental results from actual UAV flight missions, and compare the results with our previous imager system.
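As an illustration of the geotagging step this abstract describes (time-stamping imagery against GPS/IMU data), here is a minimal sketch that matches each image timestamp to the nearest autopilot log fix; the log format, field layout, and values are hypothetical, not taken from the paper.

```python
# Sketch: geotag an image by finding the GPS/IMU fix closest in time.
# The (time_s, lat_deg, lon_deg, alt_m) log entries below are made up.
import bisect

gps_log = [
    (100.0, 41.7400, -111.8100, 1520.0),
    (100.5, 41.7401, -111.8098, 1521.0),
    (101.0, 41.7402, -111.8096, 1522.0),
]
log_times = [fix[0] for fix in gps_log]  # must be sorted for bisect

def geotag(image_time_s):
    """Return the logged fix closest in time to an image timestamp."""
    i = bisect.bisect_left(log_times, image_time_s)
    candidates = gps_log[max(i - 1, 0):i + 1]  # neighbors on either side
    return min(candidates, key=lambda fix: abs(fix[0] - image_time_s))

print(geotag(100.7))  # -> the 100.5 s fix: (100.5, 41.7401, -111.8098, 1521.0)
```

A production payload would typically interpolate between fixes and fold in IMU attitude for orthorectification, but the nearest-fix lookup captures the core synchronization idea.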
2

Taylor, Camillo, and Anthony Cowley. "Parsing Indoor Scenes Using RGB-D Imagery". In Robotics: Science and Systems 2012. Robotics: Science and Systems Foundation, 2012. http://dx.doi.org/10.15607/rss.2012.viii.051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

LaRocque, Armand, Brigitte Leblon, Melanie-Louise Leblanc, and Angela Douglas. "Surveying Migratory Waterfowl using UAV RGB Imagery". In IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021. http://dx.doi.org/10.1109/igarss47720.2021.9553747.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

WuDunn, Marc, James Dunn, and Avideh Zakhor. "Point Cloud Segmentation using RGB Drone Imagery". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

WuDunn, Marc, Avideh Zakhor, Samir Touzani, and Jessica Granderson. "Aerial 3D building reconstruction from RGB drone imagery". In Geospatial Informatics X, edited by Kannappan Palaniappan, Gunasekaran Seetharaman, Peter J. Doucette, and Joshua D. Harguess. SPIE, 2020. http://dx.doi.org/10.1117/12.2558399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rueda, Hoover, Daniel Lau, and Gonzalo R. Arce. "RGB detectors on compressive snapshot multi-spectral imagers". In 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2015. http://dx.doi.org/10.1109/globalsip.2015.7418223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Swamy, Shravan Kumar, Klaus Schwarz, Michael Hartmann, and Reiner M. Creutzburg. "RGB and IR imagery fusion for autonomous driving". In Multimodal Image Exploitation and Learning 2023, edited by Sos S. Agaian, Stephen P. DelMarco, and Vijayan K. Asari. SPIE, 2023. http://dx.doi.org/10.1117/12.2664336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Xiwen, Bryce Hopkins, Hao Wang, Leo O'Neill, Fatemeh Afghah, Abolfazl Razi, Peter Fulé, Janice Coen, Eric Rowell, and Adam Watts. "Wildland Fire Detection and Monitoring using a Drone-collected RGB/IR Image Dataset". In 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR). IEEE, 2022. http://dx.doi.org/10.1109/aipr57179.2022.10092208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yokoya, Naoto, and Akira Iwasaki. "Airborne unmixing-based hyperspectral super-resolution using RGB imagery". In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6947019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Schau, H. C. "Estimation of low-resolution visible spectra from RGB imagery". In SPIE Defense, Security, and Sensing, edited by Sylvia S. Shen and Paul E. Lewis. SPIE, 2009. http://dx.doi.org/10.1117/12.814650.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Organization reports on the topic "Imagerie RGB":

1

Bhatt, Parth, Curtis Edson, and Ann MacLean. Image Processing in Dense Forest Areas using Unmanned Aerial System (UAS). Michigan Technological University, September 2022. http://dx.doi.org/10.37099/mtu.dc.michigantech-p/16366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
Imagery collected via Unmanned Aerial System (UAS) platforms has become popular in recent years due to improvements in Digital Single-Lens Reflex (DSLR) cameras (centimeter and sub-centimeter resolution), lower operating costs compared to human-piloted aircraft, and the ability to collect data over areas with limited ground access. Many different applications (e.g., forestry, agriculture, geology, archaeology) already take advantage of UAS data. Although there are numerous UAS image processing workflows, the approach can differ for each application. In this study, we developed a processing workflow for UAS imagery collected over a dense forest area (e.g., coniferous/deciduous forest and contiguous wetlands) that allows users to process large datasets with acceptable mosaicking and georeferencing errors. Imagery was acquired with near-infrared (NIR) and red, green, blue (RGB) cameras with no ground control points. Image quality from two different UAS collection platforms was assessed. Agisoft Metashape, a photogrammetric suite that uses SfM (Structure from Motion) techniques, was used to process the imagery. The results showed that a UAS with a consumer-grade Global Navigation Satellite System (GNSS) onboard had better image alignment than a UAS with a lower-quality GNSS.
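As a rough sketch of the SfM processing chain the report describes, the block below drives Agisoft Metashape through its Python API. The method names follow Metashape's published API, but exact signatures and defaults vary between versions, and all paths are hypothetical; treat this as an outline under those assumptions rather than the authors' actual script.

```python
# Outline of a Metashape SfM workflow: align -> depth maps -> DEM ->
# orthomosaic. Requires a licensed Agisoft Metashape Python module;
# check method signatures against your installed version.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("/data/uas_flight/*.JPG"))  # hypothetical paths

chunk.matchPhotos()       # feature detection and matching (SfM)
chunk.alignCameras()      # bundle adjustment: camera poses + sparse cloud
chunk.buildDepthMaps()    # dense matching
chunk.buildDem()          # digital elevation model
chunk.buildOrthomosaic()  # georeferenced, mosaicked orthoimagery

doc.save("/data/uas_flight/project.psx")
chunk.exportRaster("/data/uas_flight/ortho.tif")  # raster export (recent API versions)
```

Mosaicking and georeferencing error would then be checked against independent check points, since the report's imagery was flown without ground control.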
2

Ley, Matt, Tom Baldvins, Hannah Pilkington, David Jones, and Kelly Anderson. Vegetation classification and mapping project: Big Thicket National Preserve. National Park Service, 2024. http://dx.doi.org/10.36967/2299254.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
The Big Thicket National Preserve (BITH) vegetation inventory project classified and mapped vegetation within the administrative boundary and estimated thematic map accuracy quantitatively. The National Park Service (NPS) Vegetation Mapping Inventory Program provided technical guidance. The overall process included initial planning and scoping, imagery procurement, vegetation classification field data collection, data analysis, imagery interpretation/classification, accuracy assessment (AA), and report writing and database development. Initial planning and scoping meetings took place during May 2016 in Kountze, Texas, where representatives gathered from BITH, the NPS Gulf Coast Inventory and Monitoring Network, and Colorado State University. The project acquired new 2014 orthoimagery (30 cm, 4-band (RGB and CIR)) from the Hexagon Imagery Program. Supplemental imagery for the interpretation phase included Texas Natural Resources Information System (TNRIS) 2015 50 cm leaf-off 4-band imagery from the Texas Orthoimagery Program (TOP), Farm Service Agency (FSA) 100 cm (2016) and 60 cm (2018) National Aerial Imagery Program (NAIP) imagery, and current and historical true-color Google Earth and Bing Maps imagery. In addition to aerial and satellite imagery, 2017 Neches River Basin Light Detection and Ranging (LiDAR) data was obtained from the United States Geological Survey (USGS) and TNRIS to analyze vegetation structure at BITH. The preliminary vegetation classification included 110 United States National Vegetation Classification (USNVC) associations. Existing vegetation and mapping data combined with vegetation plot data contributed to the final vegetation classification. Quantitative classification using hierarchical clustering and professional expertise was supported by vegetation data collected from 304 plots surveyed between 2016 and 2019 and 110 additional observation plots. The final vegetation classification includes 75 USNVC associations and 27 park special types, comprising 80 forest and woodland, 7 shrubland, 12 herbaceous, and 3 sparse vegetation types. The final BITH map consists of 51 map classes. Land cover classes include five types: pasture / hay ground agricultural vegetation; non-vegetated / barren land, borrow pit, cut bank; developed, open space; developed, low to high intensity; and water. The 46 vegetation classes represent 102 associations or park specials. Of these, 75 represent natural vegetation associations within the USNVC, and 27 types represent unpublished park specials. Of the 46 vegetation map classes, 26 represent a single USNVC association/park special, 7 map classes contain two USNVC associations/park specials, 4 map classes contain three USNVC associations/park specials, and 9 map classes contain four or more USNVC associations/park specials. Forest and woodland types had an abundance of Pinus taeda, Liquidambar styraciflua, Ilex opaca, Ilex vomitoria, Quercus nigra, and Vitis rotundifolia. Shrubland types were dominated by Pinus taeda, Ilex vomitoria, Triadica sebifera, Liquidambar styraciflua, and/or Callicarpa americana. Herbaceous types had an abundance of Zizaniopsis miliacea, Juncus effusus, Panicum virgatum, and/or Saccharum giganteum. The final BITH vegetation map consists of 7,271 polygons totaling 45,771.8 ha (113,104.6 ac). Mean polygon size is 6.3 ha (15.6 ac). Of the total area, 43,314.4 ha (107,032.2 ac) or 94.6% represents natural or ruderal vegetation.
Developed areas such as roads, parking lots, and campgrounds comprise 421.9 ha (1,042.5 ac) or 0.9% of the total. Open water accounts for approximately 2,034.9 ha (5,028.3 ac) or 4.4% of the total mapped area. Within the natural or ruderal vegetation types, forest and woodland types were the most extensive at 43,022.19 ha (106,310.1 ac) or 94.0%, followed by herbaceous vegetation types at 129.7 ha (320.5 ac) or 0.3%, sparse vegetation types at 119.2 ha (294.5 ac) or 0.3%, and shrubland types at 43.4 ha (107.2 ac) or 0.1%. A total of 784 AA samples were collected to evaluate the map's thematic accuracy. When each AA sample was evaluated for a variety of potential errors, a number of the disagreements were overturned. It was determined that 182 plot records disagreed due to either an erroneous field call or a change in the vegetation since the imagery date, and 79 disagreed due to a true map classification error. Those records identified as incorrect due to an erroneous field call or changes in vegetation were considered correct for the purpose of the AA. As a simple plot-count proportion, the reconciled overall accuracy was 89.9% (705/784). The spatially weighted overall accuracy was 92.1% with a Kappa statistic of 89.6%; this method gives more weight to larger map classes in the park. Five map classes had accuracies below 80%. After discussing preliminary results with the park, we retained those map classes because the community was rare, the map class provided desired detail for management, or the accuracy was reasonably close to the 80% target. When the 90% AA confidence intervals were included, an additional eight classes had thematic accuracies that extended below 80%. In addition to the vegetation polygon database and map, several products are provided to support park resource management, including the vegetation classification, a field key to the associations, local association descriptions, a photographic database, the project geodatabase, ArcGIS .mxd files for map posters, and the aerial imagery acquired for the project. The project geodatabase links the spatial vegetation data layer to the vegetation classification, plot photos, project boundary extent, AA points, and PLOTS database sampling data. The geodatabase includes USNVC hierarchy tables allowing for spatial queries of data associated with a vegetation polygon or sample point. All geospatial products are projected using North American Datum 1983 (NAD83) in Universal Transverse Mercator (UTM) Zone 15 N. The final report includes methods and results, contingency tables showing AA results, field forms, a species list, and a guide to imagery interpretation. These products provide useful information to assist with management of park resources and inform future management decisions. Use of standard national vegetation classification and mapping protocols facilitates effective resource stewardship by ensuring compatibility and widespread use throughout the NPS as well as other federal and state agencies. Products support a wide variety of resource assessments and park management and planning needs. Associated information provides a structure for framing and answering critical scientific questions about vegetation communities and their relationship to environmental processes across the landscape.
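The accuracy figures quoted above (overall accuracy as a plot-count proportion, plus a Kappa statistic) follow standard contingency-table arithmetic; the sketch below shows the computation on a made-up 3x3 confusion matrix, not the report's actual table.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix.
# The 3x3 matrix here is illustrative only.
import numpy as np

cm = np.array([[50, 3, 2],
               [4, 60, 1],
               [2, 2, 40]], dtype=float)  # rows = map class, cols = reference class

n = cm.sum()
overall_accuracy = np.trace(cm) / n                   # proportion of correct calls
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # expected chance agreement
kappa = (overall_accuracy - p_e) / (1 - p_e)          # agreement beyond chance
print(f"OA = {overall_accuracy:.1%}, kappa = {kappa:.3f}")
```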
3

Ley, Matt, Tom Baldvins, David Jones, Hanna Pilkington, and Kelly Anderson. Vegetation classification and mapping: Gulf Islands National Seashore. National Park Service, May 2023. http://dx.doi.org/10.36967/2299028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract (summary):
The Gulf Islands National Seashore (GUIS) vegetation inventory project classified and mapped vegetation on park-owned lands within the administrative boundary and estimated thematic map accuracy quantitatively. The project began in June 2016. The National Park Service (NPS) Vegetation Mapping Inventory Program provided technical guidance. The overall process included initial planning and scoping, imagery procurement, field data collection, data analysis, imagery interpretation/classification, accuracy assessment (AA), and report writing and database development. Initial planning and scoping meetings took place during May 2016 in Ocean Springs, Mississippi, where representatives gathered from GUIS, the NPS Gulf Coast Inventory and Monitoring Network, and Colorado State University. Primary imagery used for interpretation consisted of 4-band (RGB and CIR) orthoimages from 2014 and 2016 with resolutions of 15 centimeters (cm) (Florida only) and 30 cm. Supplemental imagery with varying coverage across the study area included National Aerial Imagery Program 50 cm imagery for Mississippi (2016) and Florida (2017), 15 and 30 cm true-color Digital Earth Model imagery for Mississippi (2016 and 2017), and current and historical true-color Google Earth and Bing Maps imagery. National Oceanic and Atmospheric Administration National Geodetic Survey 30 cm true-color imagery from 2017 (post Hurricane Nate) supported remapping the Mississippi barrier islands after Hurricane Nate. The preliminary vegetation classification included 59 United States National Vegetation Classification (USNVC) associations. Existing vegetation and mapping data combined with vegetation plot data contributed to the final vegetation classification. Quantitative classification using hierarchical clustering and professional expertise was supported by vegetation data collected from 250 plots in 2016 and 29 plots in 2017 and 2018, as well as other observational data. The final vegetation classification includes 39 USNVC associations and 5 park special types; 18 forest and woodland, 7 shrubland, 17 herbaceous, and 2 sparse vegetation types were identified. The final GUIS map consists of 38 map classes. Land cover classes include four types: non-vegetated barren land / borrow pit, developed open space, developed low to high intensity, and water/ocean. Of the 34 vegetation map classes, 26 represent a single USNVC association/park special, six map classes contain two USNVC associations/park specials, and two map classes contain three USNVC associations/park specials. Forest and woodland associations had an abundance of sand pine (Pinus clausa), slash pine (Pinus elliottii), sand live oak (Quercus geminata), yaupon (Ilex vomitoria), wax myrtle (Morella cerifera), and saw palmetto (Serenoa repens). Shrubland associations supported dominant species such as eastern baccharis (Baccharis halimifolia), yaupon (Ilex vomitoria), wax myrtle (Morella cerifera), saw palmetto (Serenoa repens), and sand live oak (Quercus geminata). Herbaceous associations commonly included camphorweed (Heterotheca subaxillaris), needlegrass rush (Juncus roemerianus), bitter seabeach grass (Panicum amarum var. amarum), gulf bluestem (Schizachyrium maritimum), saltmeadow cordgrass (Spartina patens), and sea oats (Uniola paniculata). The final GUIS vegetation map consists of 1,268 polygons totaling 35,769.0 hectares (ha) or 88,387.2 acres (ac). Mean polygon size excluding water is 3.6 ha (8.9 ac).
The most abundant land cover class is open water/ocean, which accounts for approximately 31,437.7 ha (77,684.2 ac) or 87.9% of the total mapped area. Natural and ruderal vegetation consists of 4,176.8 ha (10,321.1 ac) or 11.6% of the total area. Within the natural and ruderal vegetation types, herbaceous types are the most extensive with 1,945.1 ha (4,806.4 ac) or 46.5%, followed by forest and woodland types with 804.9 ha (1,989.0 ac) or 19.3%, sparse vegetation types with 726.9 ha (1,796.1 ac) or 17.4%, and shrubland types with 699.9 ha (1,729.5 ac) or 16.8%. Developed open space, which can include a matrix of roads, parking lots, park-like areas, and campgrounds, accounts for 153.8 ha (380.0 ac) or 0.43% of the total mapped area. Artificially non-vegetated barren land is rare and only accounts for 0.74 ha (1.82 ac) or 0.002% of the total area. We collected 701 AA samples to evaluate the thematic accuracy of the vegetation map. Final thematic accuracy, as a simple proportion of correct versus incorrect field calls, is 93.0%. Overall weighted map class accuracy is 93.6%, where the area of each map class was weighted in proportion to the percentage of total park area; this method gives more weight to larger map classes in the park. Each map class had an individual thematic accuracy goal of at least 80%. The hurricane impact area map class was the only class that fell below this target, with an accuracy of 73.5%. The vegetation communities impacted by the hurricane are highly dynamic and regenerated quickly following the disturbance event, contributing to map class disagreement during the accuracy assessment phase. No other map classes fell below the 80% accuracy threshold. In addition to the vegetation polygon database and map, several products are provided to support park resource management, including the vegetation classification, a field key to the associations, local association descriptions, a photographic database, the project geodatabase, ArcGIS .mxd files for map posters, and the aerial imagery acquired for the project. The project geodatabase links the spatial vegetation data layer to the vegetation classification, plot photos, project boundary extent, AA points, and the PLOTS database. The geodatabase includes USNVC hierarchy tables allowing for spatial queries of data associated with a vegetation polygon or sample point. All geospatial products are projected using North American Datum 1983 (NAD83) in Universal Transverse Mercator (UTM) Zone 16 N. The final report includes methods and results, contingency tables showing AA results, field forms, a species list, and a guide to imagery interpretation. These products provide useful information to assist with management of park resources and inform future management decisions. Use of standard national vegetation classification and mapping protocols facilitates effective resource stewardship by ensuring compatibility and widespread use throughout the NPS as well as other federal and state agencies. Products support a wide variety of resource assessments and park management and planning needs. Associated information provides a structure for framing and answering critical scientific questions about vegetation communities and their relationship to environmental processes across the landscape.
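The area-weighted overall accuracy described above can be reproduced with simple arithmetic: each map class's plot-count accuracy is weighted by that class's share of the mapped area. The class names, sample counts, and area pairings below are illustrative only, not the report's actual AA table.

```python
# Area-weighted overall accuracy: per-class accuracies weighted by each
# class's proportion of the mapped area. All numbers are illustrative.
classes = {
    # name: (correct AA samples, total AA samples, mapped hectares)
    "herbaceous":        (90, 95, 1945.1),
    "forest_woodland":   (70, 78,  804.9),
    "sparse_vegetation": (60, 70,  726.9),
    "shrubland":         (55, 65,  699.9),
}

total_area = sum(area for _, _, area in classes.values())
weighted_oa = sum((c / t) * (area / total_area)
                  for c, t, area in classes.values())
print(f"area-weighted OA = {weighted_oa:.1%}")
```

Compared with the simple plot-count proportion, this weighting keeps a rare but poorly mapped class from dragging down the headline figure, which is why the report states both.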
