
Journal articles on the topic "Imagerie RGB"



Consult the top 50 journal articles for your research on the topic "Imagerie RGB".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Vigneau, Nathalie, Corentin Chéron, Aleixandre Verger, and Frédéric Baret. "Imagerie aérienne par drone : exploitation des données pour l'agriculture de précision". Revue Française de Photogrammétrie et de Télédétection, no. 213 (April 26, 2017): 125–31. http://dx.doi.org/10.52638/rfpt.2017.203.

Abstract
As drone technology becomes more accessible and national regulations governing drone flights begin to emerge, many companies now use drones to acquire imagery. Among them, AIRINOV has chosen to specialise in agriculture and offers its services to farmers and crop-trial experimenters. AIRINOV operates senseFly eBee drones. The drone has a wingspan of 1 m and weighs 700 g including payload, and its flight is entirely automatic. The flight is programmed in advance and then controlled by an autopilot connected to an onboard GPS receiver and inertial measurement unit. These sensors record the position and attitude of the drone during flight, allowing the acquired images to be geolocated. A study using ground targets established that the absolute positioning accuracy of the images is 2.06 m; however, registration to points with known coordinates yields centimetre-level georeferencing. Alongside conventional (RGB) cameras, AIRINOV uses a four-band multispectral sensor. Its wavelengths are configurable but are generally green, red, red edge, and near infrared. These wavelengths allow not only the monitoring of vegetation indices such as the NDVI, but also access to biochemical and biophysical variables through the inversion of a radiative transfer model. A study conducted jointly with INRA Avignon and CREAF provides access to the Green Area Index (GAI) and chlorophyll content (Cab) for rapeseed, wheat, maize, and barley. This article presents estimates of GAI with an RMSE of 0.25 and of Cab with an RMSE of 4.75 micrograms/cm2. The quality of these estimates, combined with the drone's high revisit capacity and the multiplicity of available indicators, demonstrates the great value of drones for phenotyping and for monitoring crop trial platforms.
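
The NDVI mentioned above is a simple normalized ratio of near-infrared and red reflectance. As a rough illustration, here is a minimal per-pixel computation with made-up reflectance values standing in for the drone's red and NIR bands:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Normalized Difference Vegetation Index, computed pixel-wise.

    nir, red: reflectance arrays of identical shape (floats in [0, 1]);
    eps guards against division by zero over non-reflective pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical 2 x 2 reflectance patches, for illustration only.
nir = np.array([[0.55, 0.60], [0.20, 0.50]])
red = np.array([[0.08, 0.10], [0.15, 0.09]])
print(ndvi(nir, red))  # values near 1 indicate dense green vegetation
```
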
2

Shen, Xin, Lin Cao, Bisheng Yang, Zhong Xu, and Guibin Wang. "Estimation of Forest Structural Attributes Using Spectral Indices and Point Clouds from UAS-Based Multispectral and RGB Imageries". Remote Sensing 11, no. 7 (April 3, 2019): 800. http://dx.doi.org/10.3390/rs11070800.

Abstract
Forest structural attributes are key indicators for parameterization of forest growth models, which play key roles in understanding the biophysical processes and function of the forest ecosystem. In this study, UAS-based multispectral and RGB imageries were used to estimate forest structural attributes in planted subtropical forests. The point clouds were generated from multispectral and RGB imageries using the digital aerial photogrammetry (DAP) approach. Different suites of spectral and structural metrics (i.e., wide-band spectral indices and point cloud metrics) derived from multispectral and RGB imageries were compared and assessed. The selected spectral and structural metrics were used to fit partial least squares (PLS) regression models individually and in combination to estimate forest structural attributes (i.e., Lorey’s mean height (HL) and volume (V)), and the capabilities of multispectral- and RGB-derived spectral and structural metrics in predicting forest structural attributes in various stem density forests were assessed and compared. The results indicated that the derived DAP point clouds had perfect visual effects and that most of the structural metrics extracted from the multispectral DAP point cloud were highly correlated with the metrics derived from the RGB DAP point cloud (R2 > 0.75). Although the models including only spectral indices had the capability to predict forest structural attributes with relatively high accuracies (R2 = 0.56–0.69, relative Root-Mean-Square-Error (RMSE) = 10.88–21.92%), the models with spectral and structural metrics had higher accuracies (R2 = 0.82–0.93, relative RMSE = 4.60–14.17%). Moreover, the models fitted using multispectral- and RGB-derived metrics had similar accuracies (∆R2 = 0–0.02, ∆ relative RMSE = 0.18–0.44%). In addition, the combo models fitted with stratified sample plots had relatively higher accuracies than those fitted with all of the sample plots (∆R2 = 0–0.07, ∆ relative RMSE = 0.49–3.08%), and the accuracies increased with increasing stem density.
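
As a rough illustration of the modelling step described here, the sketch below fits a partial least squares regression to plot-level predictors and evaluates it by cross-validation with scikit-learn. The data are synthetic, and the feature layout (spectral indices plus point-cloud height metrics) is only an assumption about the paper's setup:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical plot-level predictors: e.g., three wide-band spectral indices
# plus three DAP point-cloud height percentiles (names are placeholders).
X = rng.normal(size=(40, 6))
# Hypothetical response: Lorey's mean height (m), loosely tied to X.
y = 15 + X @ np.array([1.2, 0.8, 0.0, 2.0, 0.5, 0.0]) + rng.normal(0, 0.5, size=40)

pls = PLSRegression(n_components=3)  # latent components; tune by CV in practice
y_hat = cross_val_predict(pls, X, y, cv=5).ravel()

rmse = np.sqrt(mean_squared_error(y, y_hat))
print(f"R2 = {r2_score(y, y_hat):.2f}, relative RMSE = {100 * rmse / y.mean():.1f}%")
```
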
3

Priyankara, Prabath, and Takehiro Morimoto. "UAV Based Agricultural Crop Canopy Mapping for Crop Field Monitoring". Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-303-2019.

Abstract
Nowadays, mapping the agricultural crop canopy at different growing stages provides vital data for crop field monitoring, more so than field-based observations in large-scale crop fields. By mapping the crop canopy, it is easy to analyse the status of a crop field using different vegetation indices, and the data can further be used to estimate yield. This provides timely and reliable spatial information to farmers and decision makers. Mapping the crop canopy at different growing stages is very challenging with satellite imagery, mainly because of high cloud coverage during acquisition. Also, the cost of satellite imagery rises in proportion to its spatial resolution, ordering an image takes time, and some growing stages may be missed. This problem can be solved by using low-cost RGB-based UAV imagery, which can be acquired at low altitudes (below the clouds) where and when necessary. This study therefore aimed at mapping a maize crop canopy using RGB-based UAV imagery. UAV flights at different growth stages were carried out with a high-resolution RGB camera over a maize field in Ampara District, Sri Lanka. For accurate crop canopy mapping, very high-resolution multi-temporal ortho-mosaicked images with centimetre-level spatial resolution were derived from the UAV imagery using free and open-source image processing platforms. The resulting multi-temporal ortho-mosaics can be used to map and monitor the crop field in a precise and efficient manner. This information is very important for farmers and decision makers to properly manage crop fields.
4

Purwanto, Anang Dwi, and Wikanti Asriningrum. "IDENTIFICATION OF MANGROVE FORESTS USING MULTISPECTRAL SATELLITE IMAGERIES". International Journal of Remote Sensing and Earth Sciences (IJReSES) 16, no. 1 (October 30, 2019): 63. http://dx.doi.org/10.30536/j.ijreses.2019.v16.a3097.

Abstract
The visual identification of mangrove forests is greatly constrained by the choice of RGB composite. This research aims to determine the best RGB composite combination for identifying mangrove forest in Segara Anakan, Cilacap, using the Optimum Index Factor (OIF) method. The OIF method uses the standard deviation values and correlation coefficients of a combination of three image bands. The image data comprise Landsat 8 imagery acquired on 30 May 2013, Sentinel 2A imagery acquired on 18 March 2018, and SPOT 6 imagery acquired on 10 January 2015. The results show that the band composites 564 (NIR+SWIR+Red) from Landsat 8 and 8a114 (Vegetation Red Edge+SWIR+Red) from Sentinel 2A are the best RGB composites for identifying mangrove forest, in addition to 341 (Red+NIR+Blue) from SPOT 6. The near-infrared (NIR) and short-wave infrared (SWIR) bands play an important role in identifying mangrove forests. The properties of vegetation are reflected strongly at NIR wavelengths, and the SWIR band is very sensitive to evaporation and to the identification of wetlands.
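
The OIF itself is a simple statistic: the sum of the standard deviations of three candidate bands divided by the sum of the absolute values of their pairwise correlation coefficients. A small sketch that ranks all three-band combinations of a band dictionary this way (the rasters here are synthetic placeholders, not Landsat data):

```python
import numpy as np
from itertools import combinations

def oif(bands: dict) -> list:
    """Rank all 3-band combinations by the Optimum Index Factor:
    OIF = sum of band standard deviations / sum of |pairwise correlations|."""
    scores = []
    for trio in combinations(bands, 3):
        flat = [bands[b].ravel().astype(np.float64) for b in trio]
        std_sum = sum(a.std() for a in flat)
        corr_sum = sum(abs(np.corrcoef(flat[i], flat[j])[0, 1])
                       for i, j in combinations(range(3), 2))
        scores.append((std_sum / corr_sum, trio))
    return sorted(scores, reverse=True)

# Hypothetical 64 x 64 rasters standing in for satellite bands.
rng = np.random.default_rng(1)
base = rng.normal(size=(64, 64))
bands = {"red": base + rng.normal(0, 0.3, (64, 64)),
         "nir": rng.normal(size=(64, 64)),
         "swir": rng.normal(size=(64, 64)),
         "blue": base + rng.normal(0, 0.2, (64, 64))}
best_score, best_trio = oif(bands)[0]
print(best_trio, round(best_score, 2))  # highest-OIF composite for display
```
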
5

Chhatkuli, S., T. Satoh, and K. Tachibana. "MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-4/W5 (May 11, 2015): 103–6. http://dx.doi.org/10.5194/isprsarchives-xl-4-w5-103-2015.

Abstract
The aim of this paper is to introduce a novel technique for integrating two different data sets, a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and non-complex terrain. However, 3D models automatically generated from aerial imagery generally suffer from a lack of accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, they often exhibit undulated road surfaces, non-conforming building shapes, and the loss of minute details such as street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details such as isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated data set, i.e. the final 3D model, was generally noise-free and without unnecessary details.
6

Argyrou, Argyro, Athos Agapiou, Apostolos Papakonstantinou, and Dimitrios D. Alexakis. "Comparison of Machine Learning Pixel-Based Classifiers for Detecting Archaeological Ceramics". Drones 7, no. 9 (September 13, 2023): 578. http://dx.doi.org/10.3390/drones7090578.

Abstract
Recent improvements in low-altitude remote sensors and image processing analysis can be utilised to support archaeological research. Over the last decade, the increased use of remote sensing sensors and their products for archaeological science and cultural heritage studies has been reported in the literature. Therefore, different spatial and spectral analysis datasets have been applied to recognise archaeological remains or map environmental changes over time. Recently, more thorough object detection approaches have been adopted by researchers for the automated detection of surface ceramics. In this study, we applied several supervised machine learning classifiers using red-green-blue (RGB) and multispectral high-resolution drone imageries over a simulated archaeological area to evaluate their performance towards semi-automatic surface ceramic detection. The overall results indicated that low-altitude remote sensing sensors and advanced image processing techniques can be innovative in archaeological research. Nevertheless, the study results also pointed out existing research limitations in the detection of surface ceramics, which affect the detection accuracy. The development of a novel, robust methodology aimed to address the “accuracy paradox” of imbalanced data samples for optimising archaeological surface ceramic detection. At the same time, this study attempted to fill a gap in the literature by blending AI methodologies for non-uniformly distributed classes. Indeed, detecting surface ceramics using RGB or multi-spectral drone imageries should be reconsidered as an ‘imbalanced data distribution’ problem. To address this paradox, novel approaches need to be developed.
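
To make the "accuracy paradox" concrete: on data where the target class is rare, plain accuracy can look excellent while the classifier misses most targets. A small scikit-learn sketch with synthetic "ceramic vs. background" pixels, using class re-weighting and class-sensitive metrics (all data and parameters are illustrative, not the authors'):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical pixel features (e.g., RGB + multispectral band values); only
# ~2% of pixels belong to the "ceramic" class, mimicking a rare target.
X = rng.normal(size=(5000, 5))
y = (rng.random(5000) < 0.02).astype(int)
X[y == 1] += 0.8  # give the rare class a weak spectral signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" re-weights samples inversely to class frequency.
clf = RandomForestClassifier(class_weight="balanced", random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Plain accuracy looks deceptively high because "no ceramic" dominates.
print("accuracy:         ", round(accuracy_score(y_te, pred), 3))
print("balanced accuracy:", round(balanced_accuracy_score(y_te, pred), 3))
print("F1 (ceramic):     ", round(f1_score(y_te, pred), 3))
```
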
7

Mawardi, Sonny, Emi Sukiyah, and Iyan Haryanto. "Morphotectonic Characteristics Of Cisadane Watersshed Based On Satellite Images Analysis". Jurnal Geologi dan Sumberdaya Mineral 20, no. 3 (August 22, 2019): 175. http://dx.doi.org/10.33332/jgsm.geologi.v20i3.464.

Abstract
The Cisadane Watershed is one of the most rapidly growing areas in terms of infrastructure development, and it has developed into a center of residential, industrial, administrative, and other economic activities. The purpose of this paper is to use remote sensing satellite imagery to identify the morphotectonic characteristics of the Cisadane Watershed both qualitatively and quantitatively. Stereomodel processing, stereoplotting, and stereocompilation of the TerraSAR-X Digital Surface Model (DSM) and SPOT 6 imagery produced a Digital Terrain Model (DTM) image that is not affected by land cover. A fusion of the DTM and Landsat 8 RGB 567+8 images was used to interpret the distribution of lithology, geomorphological units, and lineaments, which are an indication of geological structures. The morphotectonic characteristics of the sub-watersheds were assessed through a bifurcation ratio (Rb) calculation, which indicates tectonic deformation. Based on the qualitative and quantitative analysis of the satellite images, the upstream, middle, and downstream parts of the Cisadane Watershed have been deformed.

Keywords: satellite images, morphotectonic, DSM, DTM, Cisadane Watershed.
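
The bifurcation ratio used here relates the number of streams of one Strahler order to the number of the next higher order, Rb = N_i / N_(i+1). A worked example with hypothetical stream counts:

```python
# Hypothetical stream-order counts for one sub-watershed (Strahler orders 1-4).
stream_counts = {1: 92, 2: 21, 3: 5, 4: 1}

# Bifurcation ratio between consecutive orders: Rb = N_i / N_(i+1).
orders = sorted(stream_counts)
rb = [stream_counts[i] / stream_counts[i + 1] for i in orders[:-1]]
mean_rb = sum(rb) / len(rb)
print([round(r, 2) for r in rb], round(mean_rb, 2))
# A mean Rb well outside the ~3-5 range typical of homogeneous basins is often
# read as a sign of structural (tectonic) control on the drainage network.
```
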
8

Vanbrabant, Yasmin, Stephanie Delalieux, Laurent Tits, Klaas Pauly, Joke Vandermaesen, and Ben Somers. "Pear Flower Cluster Quantification Using RGB Drone Imagery". Agronomy 10, no. 3 (March 17, 2020): 407. http://dx.doi.org/10.3390/agronomy10030407.

Abstract
High quality fruit production requires the regulation of the crop load on fruit trees by reducing the number of flowers and fruitlets early in the growing season, if the bearing is too high. Several automated flower cluster quantification methods based on proximal and remote imagery methods have been proposed to estimate flower cluster numbers, but their overall performance is still far from satisfactory. For other methods, the performance of the method to estimate flower clusters within a tree is unknown since they were only tested on images from one perspective. One of the main reported bottlenecks is the presence of occluded flowers due to limitations of the top-view perspective of the platform-sensor combinations. In order to tackle this problem, the multi-view perspective from the Red–Green–Blue (RGB) colored dense point clouds retrieved from drone imagery are compared and evaluated against the field-based flower cluster number per tree. Experimental results obtained on a dataset of two pear tree orchards (N = 144) demonstrate that our 3D object-based method, a combination of pixel-based classification with the stochastic gradient boosting algorithm and density-based clustering (DBSCAN), significantly outperforms the state-of-the-art in flower cluster estimations from the 2D top-view (R2 = 0.53), with R2 > 0.7 and RRMSE < 15%.
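
The clustering half of this pipeline can be sketched with scikit-learn's DBSCAN: points already labelled "flower" by the pixel classifier are grouped into spatial clusters, and the cluster count becomes the flower-cluster estimate. The coordinates, eps, and min_samples below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)

# Hypothetical 3D coordinates of points a pixel-based classifier labelled
# "flower": three synthetic clusters plus scattered noise points.
clusters = [rng.normal(c, 0.03, size=(60, 3))
            for c in ([0, 0, 2], [1, 0, 2.2], [0.5, 1, 2.1])]
noise = rng.uniform(-1, 2, size=(20, 3))
pts = np.vstack(clusters + [noise])

# eps is the neighbour radius in scene units (metres here, by assumption);
# min_samples suppresses sparse false positives.
labels = DBSCAN(eps=0.1, min_samples=10).fit_predict(pts)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # label -1 = noise
print("estimated flower clusters:", n_clusters)
```
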
9

Simes, Tomás, Luís Pádua, and Alexandra Moutinho. "Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery". Remote Sensing 16, no. 1 (December 20, 2023): 30. http://dx.doi.org/10.3390/rs16010030.

Abstract
Wildfires present a significant threat to ecosystems and human life, requiring effective prevention and response strategies. Equally important is the study of post-fire damages, specifically burnt areas, which can provide valuable insights. This research focuses on the detection and classification of burnt areas and their severity using RGB and multispectral aerial imagery captured by an unmanned aerial vehicle. Datasets containing features computed from multispectral and/or RGB imagery were generated and used to train and optimize support vector machine (SVM) and random forest (RF) models. Hyperparameter tuning was performed to identify the best parameters for a pixel-based classification. The findings demonstrate the superiority of multispectral data for burnt area and burn severity classification with both RF and SVM models. While the RF model achieved a 95.5% overall accuracy for the burnt area classification using RGB data, the RGB models encountered challenges in distinguishing between mildly and severely burnt classes in the burn severity classification. However, the RF model incorporating mixed data (RGB and multispectral) achieved the highest accuracy of 96.59%. The outcomes of this study contribute to the understanding and practical implementation of machine learning techniques for assessing and managing burnt areas.
10

Semah, Franck. "Imagerie médicale et épilepsies". Revue Générale Nucléaire, no. 4 (August 2001): 36–37. http://dx.doi.org/10.1051/rgn/20014036.

11

Aarts, L., A. LaRocque, B. Leblon, and A. Douglas. "USE OF UAV IMAGERY FOR EELGRASS MAPPING IN ATLANTIC CANADA". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (August 3, 2020): 287–92. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-287-2020.

Abstract
Eelgrass beds are critical in coastal ecosystems and can be useful as a measure of nearshore ecosystem health. Population declines have been seen around the world, including in Atlantic Canada. Restoration has the potential to aid the eelgrass population. Traditionally, field-level protocols would be used to monitor restoration; however, using unmanned aerial vehicles (UAVs) would be faster, more cost-efficient, and produce images with higher spatial resolution. This project used RGB UAV imagery and data acquired over five sites with eelgrass beds in the northern part of the Shediac Bay (New Brunswick, Canada). The images were mosaicked using Pix4Dmapper and PCI Geomatica. Each RGB mosaic was tested for the separability of four different classes (eelgrass bed, deep water channels, sand floor, and mud floor), and training areas were created for each class. The Maximum-likelihood classifier was then applied to each mosaic for creating a map of the five sites. With an average and overall accuracy higher than 98% and a Kappa coefficient higher than 0.97, the Pix4D RGB mosaic was superior to the PCI Geomatica RGB mosaic with an average accuracy of 89%, an overall accuracy of 87%, and a Kappa coefficient of 0.83. This study indicates that mapping eelgrass beds with UAV RGB imagery is possible, but that the mosaicking step is critical. However, some factors need to be considered for creating a better map, such as acquiring the images during overcast conditions to reduce the difference in sun illumination, and the effects of glint or cloud shadow on the images.
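
The accuracy figures reported here (overall accuracy and the Kappa coefficient) can be reproduced from a confusion matrix. A short sketch with hypothetical validation labels for the four mapped classes (eelgrass, channel, sand, mud):

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

# Hypothetical reference vs. predicted labels for 500 validation pixels,
# encoded 0-3 for the four classes; ~95% of predictions agree by design.
rng = np.random.default_rng(7)
y_true = rng.integers(0, 4, size=500)
y_pred = np.where(rng.random(500) < 0.95, y_true, rng.integers(0, 4, size=500))

print(confusion_matrix(y_true, y_pred))
print("overall accuracy:", round(accuracy_score(y_true, y_pred), 3))
# Kappa discounts the agreement expected by chance alone.
print("kappa:", round(cohen_kappa_score(y_true, y_pred), 3))
```
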
12

Berndt, Emily, Nicholas Elmer, Lori Schultz, and Andrew Molthan. "A Methodology to Determine Recipe Adjustments for Multispectral Composites Derived from Next-Generation Advanced Satellite Imagers". Journal of Atmospheric and Oceanic Technology 35, no. 3 (March 2018): 643–64. http://dx.doi.org/10.1175/jtech-d-17-0047.1.

Abstract
The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) began creating multispectral [i.e., red–green–blue (RGB)] composites in the early 2000s with the advent of the Meteosat-8 Spinning Enhanced Visible and Infrared Imager (SEVIRI). As new satellite sensors—for example, the Himawari-8 Advanced Himawari Imager (AHI) and the Geostationary Operational Environmental Satellite Advanced Baseline Imager (ABI)—become available, there is a need to adjust the EUMETSAT RGB standard thresholds (i.e., recipes) to account for differences in spectral characteristics, spectral response, and atmospheric absorption in order to maintain an interpretation consistent with legacy composites. For the purpose of comparing RGB composites derived from nonoverlapping geostationary sensors, an adjustment technique was applied to the Suomi National Polar-Orbiting Partnership (Suomi-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) to create an intermediate reference sensor (i.e., SEVIRI proxy). Brightness temperature offset values between each AHI and SEVIRI proxy band centered near 3.9, 8.6, 11.0, and 12.0 µm were determined with this technique and through line-by-line radiative transfer model simulations. The relationship between the measured brightness temperature of AHI and the SEVIRI proxy was determined through linear regression, similar to research by the Japan Meteorological Agency. The linear regression coefficients were utilized to determine the RGB recipe adjustments. Adjusting the RGB recipes to account for the differences in spectral characteristics results in RGB composites consistent with legacy EUMETSAT composites. The methodology was applied to an example of the Nighttime Microphysics RGB, confirming the Japan Meteorological Agency adjustments and demonstrating a simple methodology to determine recipe adjustments for RGB composites derived with next-generation sensors.
13

Wu, Jingru, Qixia Man, Xinming Yang, Pinliang Dong, Xiaotong Ma, Chunhui Liu, and Changyin Han. "Fine Classification of Urban Tree Species Based on UAV-Based RGB Imagery and LiDAR Data". Forests 15, no. 2 (February 19, 2024): 390. http://dx.doi.org/10.3390/f15020390.

Abstract
Rapid and accurate classification of urban tree species is crucial for the protection and management of urban ecology. However, tree species classification remains a great challenge because of the high spatial heterogeneity and biodiversity. Addressing this challenge, in this study, unmanned aerial vehicle (UAV)-based high-resolution RGB imagery and LiDAR data were utilized to extract seven types of features, including RGB spectral features, texture features, vegetation indexes, HSV spectral features, HSV texture features, height feature, and intensity feature. Seven experiments involving different feature combinations were conducted to classify 10 dominant tree species in urban areas with a Random Forest classifier. Additionally, Plurality Filling was applied to further enhance the accuracy of the results as a post-processing method. The aim was to explore the potential of UAV-based RGB imagery and LiDAR data for tree species classification in urban areas, as well as evaluate the effectiveness of the post-processing method. The results indicated that, compared to using RGB imagery alone, the integrated LiDAR and RGB data could improve the overall accuracy and the Kappa coefficient by 18.49% and 0.22, respectively. Notably, among the features based on RGB, the HSV and its texture features contribute most to the improvement of accuracy. The overall accuracy and Kappa coefficient of the optimal feature combination could achieve 73.74% and 0.70 with the Random Forest classifier, respectively. Additionally, the Plurality Filling method could increase the overall accuracy by 11.76%, which could reach 85.5%. The results of this study confirm the effectiveness of RGB imagery and LiDAR data for urban tree species classification. Consequently, these results could provide a valuable reference for the precise classification of tree species using UAV remote sensing data in urban areas.
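
The Plurality Filling post-processing is, in essence, a neighbourhood majority vote over the classified map. One plausible reading of that step, sketched with scipy (the window size and label map are assumptions, not the paper's configuration):

```python
import numpy as np
from scipy.ndimage import generic_filter

def plurality_fill(label_map: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel with the most frequent label in its size x size
    neighbourhood, a simple majority-vote smoothing of a classified map."""
    def plurality(window: np.ndarray):
        values, counts = np.unique(window, return_counts=True)
        return values[np.argmax(counts)]
    return generic_filter(label_map, plurality, size=size, mode="nearest")

# Hypothetical 6 x 6 tree-species label map with salt-and-pepper errors.
labels = np.array([[1, 1, 1, 2, 2, 2],
                   [1, 3, 1, 2, 2, 2],
                   [1, 1, 1, 2, 1, 2],
                   [4, 4, 4, 4, 4, 4],
                   [4, 4, 2, 4, 4, 4],
                   [4, 4, 4, 4, 4, 4]])
print(plurality_fill(labels))  # isolated mislabels are voted away
```
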
14

Koukiou, Georgia. "Perceptually Optimal Color Representation of Fully Polarimetric SAR Imagery". Journal of Imaging 8, no. 3 (March 7, 2022): 67. http://dx.doi.org/10.3390/jimaging8030067.

Abstract
The four bands of fully polarimetric SAR data convey scattering characteristics of the Earth’s background, but perceptually are not very easy for an observer to use. In this work, the four different channels of fully polarimetric SAR images, namely HH, HV, VH, and VV, are combined so that a color image of the Earth’s background is derived that is perceptually excellent for the human eye and at the same time provides accurate information regarding the scattering mechanisms in each pixel. Most of the elementary scattering mechanisms are related to specific color and land cover types. The innovative nature of the proposed approach is due to the two different consecutive coloring procedures. The first one is a fusion procedure that moves all the information contained in the four polarimetric channels into three derived RGB bands. This is achieved by means of Cholesky decomposition and brings to the RGB output the correlation properties of a natural color image. The second procedure moves the color information of the RGB image to the CIELab color space, which is perceptually uniform. The color information is then evenly distributed by means of color equalization in the CIELab color space. After that, the inverse procedure to obtain the final RGB image is performed. These two procedures bring the PolSAR information regarding the scattering mechanisms on the Earth’s surface onto a meaningful color image, the appearance of which is close to Google Earth maps. Simultaneously, they give better color correspondence to various land cover types compared with existing SAR color representation methods.
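
The first coloring step can be sketched as follows: decorrelate the four polarimetric channels, keep three components, and re-correlate them with the Cholesky factor of a natural image's band correlation matrix. The sketch below illustrates the linear algebra only; the PCA reduction, the target correlation values, and the synthetic intensities are assumptions rather than the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stacked PolSAR intensities (HH, HV, VH, VV), flattened pixels.
pol = rng.gamma(2.0, 1.0, size=(4, 10000))

# Step 1: reduce to three decorrelated components (PCA on the 4 channels).
pol_c = pol - pol.mean(axis=1, keepdims=True)
_, eigvecs = np.linalg.eigh(np.cov(pol_c))
comps = eigvecs[:, -3:].T @ pol_c                   # top-3 components
white = comps / comps.std(axis=1, keepdims=True)    # unit variance

# Step 2: impose the band correlation of a natural RGB photo via the
# Cholesky factor of its correlation matrix (values here are assumptions).
target_corr = np.array([[1.00, 0.85, 0.70],
                        [0.85, 1.00, 0.88],
                        [0.70, 0.88, 1.00]])
L = np.linalg.cholesky(target_corr)
rgb = L @ white                                     # correlated "RGB" bands

print(np.corrcoef(rgb))  # close to target_corr by construction
```
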
15

García-Fernández, Marta, Enoc Sanz-Ablanedo, and José Ramón Rodríguez-Pérez. "High-Resolution Drone-Acquired RGB Imagery to Estimate Spatial Grape Quality Variability". Agronomy 11, no. 4 (March 30, 2021): 655. http://dx.doi.org/10.3390/agronomy11040655.

Abstract
Remote-sensing techniques can help reduce the time and resources spent collecting samples of crops and analyzing quality variables. The main objective of this work was to demonstrate that it is possible to obtain information on the distribution of must quality variables from conventional photographs. Georeferenced berry samples were collected and analyzed in the laboratory, and RGB images were taken using a low-cost drone, from which an orthoimage was made. Transformation equations were calculated to obtain absolute reflectances for the different bands and to calculate 10 vegetation indices plus two newly proposed indices. Correlations of the 12 indices with values for 15 must quality variables were calculated in terms of Pearson's correlation coefficients. Significant correlations were obtained for 100-berries weight (0.77), malic acid (−0.67), alpha amino nitrogen (−0.59), the phenolic maturation index (0.69), and the total polyphenol index (0.62), with 100-berries weight and the total polyphenol index obtaining the best results with the proposed RGB-based vegetation index 2 and RGB-based vegetation index 3. Our findings indicate that must variables important for the production of quality wines can be related to the RGB bands in conventional digital images, potentially improving and aiding management and increasing productivity.
16

Niu, Yaxiao, Liyuan Zhang, Huihui Zhang, Wenting Han, and Xingshuo Peng. "Estimating Above-Ground Biomass of Maize Using Features Derived from UAV-Based RGB Imagery". Remote Sensing 11, no. 11 (May 28, 2019): 1261. http://dx.doi.org/10.3390/rs11111261.

Abstract
The rapid, accurate, and economical estimation of crop above-ground biomass at the farm scale is crucial for precision agricultural management. The unmanned aerial vehicle (UAV) remote-sensing system has a great application potential with the ability to obtain remote-sensing imagery with high temporal-spatial resolution. To verify the application potential of consumer-grade UAV RGB imagery in estimating maize above-ground biomass, vegetation indices and plant height derived from UAV RGB imagery were adopted. To obtain a more accurate observation, plant height was directly derived from UAV RGB point clouds. To search the optimal estimation method, the estimation performances of the models based on vegetation indices alone, based on plant height alone, and based on both vegetation indices and plant height were compared. The results showed that plant height directly derived from UAV RGB point clouds had a high correlation with ground-truth data with an R2 value of 0.90 and an RMSE value of 0.12 m. The above-ground biomass exponential regression models based on plant height alone had higher correlations for both fresh and dry above-ground biomass with R2 values of 0.77 and 0.76, respectively, compared to the linear regression model (both R2 values were 0.59). The vegetation indices derived from UAV RGB imagery had great potential to estimate maize above-ground biomass with R2 values ranging from 0.63 to 0.73. When estimating the above-ground biomass of maize by using multivariable linear regression based on vegetation indices, a higher correlation was obtained with an R2 value of 0.82. There was no significant improvement of the estimation performance when plant height derived from UAV RGB imagery was added into the multivariable linear regression model based on vegetation indices. When estimating crop above-ground biomass based on UAV RGB remote-sensing system alone, looking for optimized vegetation indices and establishing estimation models with high performance based on advanced algorithms (e.g., machine learning technology) may be a better way.
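
The exponential height-biomass model reported here has the form AGB = a * exp(b * h) and can be fitted by nonlinear least squares. A sketch with synthetic plot data (the coefficients and units are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)

# Hypothetical plot data: canopy height (m) from UAV point clouds vs.
# measured fresh above-ground biomass (kg/m2), with multiplicative noise.
height = rng.uniform(0.3, 2.5, size=60)
agb = 0.4 * np.exp(1.1 * height) * rng.lognormal(0, 0.15, size=60)

def exp_model(h, a, b):
    return a * np.exp(b * h)

(a, b), _ = curve_fit(exp_model, height, agb, p0=(1.0, 1.0))
pred = exp_model(height, a, b)
print(f"AGB = {a:.2f} * exp({b:.2f} * h),  R2 = {r2_score(agb, pred):.2f}")
```
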
17

Eltner, A., D. Mader, N. Szopos, B. Nagy, J. Grundmann, and L. Bertalan. "USING THERMAL AND RGB UAV IMAGERY TO MEASURE SURFACE FLOW VELOCITIES OF RIVERS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 717–22. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-717-2021.

Abstract
This study assesses the suitability of RGB and thermal infrared imagery acquired from a UAV for measuring surface flow velocities of rivers. The reach of a medium-scale river in Hungary is investigated. Image sequences with a frame rate of 2 Hz were captured with two sensors, an RGB and an uncooled thermal camera, at a flying height that ensures the visibility of both shores. The interior geometry of both cameras was calibrated with an in-house designed target field. The image sequences were automatically co-registered to account for UAV movements during image acquisition. The TIR data were processed to keep loss-free image information solely in the water area and to enhance the signal-to-noise ratio. Image velocimetry, with PIV applied to the TIR data and PTV applied to the RGB data, was used to retrieve surface flow velocities. Comparison between the RGB and TIR data reveals an average deviation of about 0.01 m/s. Future studies are needed to evaluate the transferability to other non-regulated river reaches.
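
The core of PIV is locating the cross-correlation peak between an image patch and its counterpart in the next frame, then converting the pixel offset to a velocity via the frame interval and ground sampling distance. A minimal single-patch sketch on synthetic frames (dt matches the 2 Hz frame rate; the 0.05 m/px GSD is an assumption):

```python
import numpy as np
from scipy.signal import fftconvolve

def patch_velocity(frame0, frame1, y, x, half=16, dt=0.5, gsd=0.05):
    """Estimate surface velocity at (y, x) by cross-correlating a patch of
    frame0 with the same window in frame1 (the core idea behind PIV).
    dt: frame interval in s (2 Hz -> 0.5 s); gsd: metres per pixel."""
    a = frame0[y - half:y + half, x - half:x + half].astype(float)
    b = frame1[y - half:y + half, x - half:x + half].astype(float)
    a -= a.mean(); b -= b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="same")  # cross-correlation map
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    dy -= half; dx -= half                             # offset from patch centre
    return np.hypot(dy, dx) * gsd / dt                 # speed in m/s

# Hypothetical pair of co-registered frames with the pattern shifted 3 px.
rng = np.random.default_rng(2)
f0 = rng.random((128, 128))
f1 = np.roll(f0, shift=(0, 3), axis=(0, 1))
print(round(patch_velocity(f0, f1, 64, 64), 3))  # ~3 px * 0.05 m / 0.5 s = 0.3
```
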
18

Fernandez-Gallego, Jose, Ma Buchaillot, Nieves Aparicio Gutiérrez, María Nieto-Taladriz, José Araus, and Shawn Kefauver. "Automatic Wheat Ear Counting Using Thermal Imagery". Remote Sensing 11, no. 7 (March 28, 2019): 751. http://dx.doi.org/10.3390/rs11070751.

Abstract
Ear density is one of the most important agronomical yield components in wheat. Ear counting is time-consuming and tedious as it is most often conducted manually in field conditions. Moreover, different sampling techniques are often used resulting in a lack of standard protocol, which may eventually affect inter-comparability of results. Thermal sensors capture crop canopy features with more contrast than RGB sensors for image segmentation and classification tasks. An automatic thermal ear counting system is proposed to count the number of ears using zenithal/nadir thermal images acquired from a moderately high resolution handheld thermal camera. Three experimental sites under different growing conditions in Spain were used on a set of 24 varieties of durum wheat for this study. The automatic pipeline system developed uses contrast enhancement and filter techniques to segment image regions detected as ears. The approach is based on the temperature differential between the ears and the rest of the canopy, given that ears usually have higher temperatures due to their lower transpiration rates. Thermal images were acquired, together with RGB images and in situ (i.e., directly in the plot) visual ear counting from the same plot segment for validation purposes. The relationship between the thermal counting values and the in situ visual counting was fairly weak (R2 = 0.40), which highlights the difficulties in estimating ear density from one single image-perspective. However, the results show that the automatic thermal ear counting system performed quite well in counting the ears that do appear in the thermal images, exhibiting high correlations with the manual image-based counts from both thermal and RGB images in the sub-plot validation ring (R2 = 0.75–0.84). Automatic ear counting also exhibited high correlation with the manual counting from thermal images when considering the complete image (R2 = 0.80). The results also show a high correlation between the thermal and the RGB manual counting using the validation ring (R2 = 0.83). Methodological requirements and potential limitations of the technique are discussed.
19

Tubau Comas, A., J. Valente, and L. Kooistra. "AUTOMATIC APPLE TREE BLOSSOM ESTIMATION FROM UAV RGB IMAGERY". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 4, 2019): 631–35. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-631-2019.

Abstract
Apple trees often set large amounts of fruit, which results in small, low-quality fruit. Thinning in apple orchards is used to improve the quality of the apples by reducing the number of flowers or fruits the tree is producing. The current method used to estimate how much thinning is necessary is to measure flowering intensity, currently done by human visual inspection of trees in the orchard. The use of ground-level images of apple trees to measure flowering intensity and its spatial variation through orchards has been researched with promising results. This research explores the potential of UAV RGB high-resolution imagery to measure flowering intensity. Image segmentation techniques were used to segment the white pixels, which correspond to the apple flowers, in the orthophoto and the single photos. Single trees were cropped from the single photos and from the orthophoto, and correlation was measured between the percentage of white pixels per tree and flowering intensity, and between the percentage of white pixels per tree and flower clusters. The resulting correlation is low, with a maximum of 0.54 for the correlation between white pixels per tree and flower clusters when using the orthophoto. Those results show the complexity of working with drone images, but alternative approaches remain to be investigated before discarding the use of UAV RGB imagery for the estimation of flowering intensity.
20

Chen, Jianqu, Xunmeng Li, Kai Wang, Shouyu Zhang, Jun Li, and Mingbo Sun. "Assessment of intertidal seaweed biomass based on RGB imagery". PLOS ONE 17, no. 2 (February 24, 2022): e0263416. http://dx.doi.org/10.1371/journal.pone.0263416.

Abstract
The Above-Ground Biomass (AGB) of seaweeds is the most fundamental ecological parameter, as it is the material and energy basis of intertidal ecosystems. There is therefore a need to develop an efficient survey method that has less impact on the environment. With the advent of technology and the availability of popular imaging devices such as smartphones and cameras, intertidal seaweed wet biomass can be surveyed by remote sensing using common RGB imaging sensors. In this paper, 143 in situ sites of seaweed in the intertidal zone of GouQi Island, ShengSi County, Zhejiang Province, were sampled and biomass inversions were performed. The hyperspectral data of seaweed at different growth stages were analyzed, and the variation range was found to be small (visible light range < 0.1). Through Principal Component Analysis (PCA), most of the variance was explained by the first principal component, with loadings of more than 90% for the three kinds of seaweed. Through Pearson correlation analysis, 24 spectral feature parameters, 9 texture feature parameters (27 in total across the three RGB bands), and combined spectral-texture parameters of the images were screened, and regression prediction was performed using two methods, Random Forest (RF) and Gradient Boosted Decision Tree (GBDT), combined with the Pearson correlation coefficients. Compared with RF, GBDT had better fitting accuracy in the inversion of seaweed biomass; the highest R2 was obtained when the top 17, 17, and 11 parameters with the strongest correlations were selected by Pearson's correlation coefficient for the regression prediction for Ulva australis, Sargassum thunbergii, and Sargassum fusiforme, respectively. The R2 for Ulva australis was 0.784, with RMSE 156.129, MAE 50.691, and MAPE 28.201; the R2 for Sargassum thunbergii was 0.854, with RMSE 790.487, MAE 327.108, and MAPE 19.039; and the R2 for Sargassum fusiforme was 0.808, with RMSE 445.067, MAE 180.172, and MAPE 28.822. The study combines in situ surveys with machine learning methods, which has the advantage of being accessible, efficient, and environmentally friendly, and can provide technical support for intertidal seaweed surveys.
21

Pádua, Luís, Pedro Marques, Jonáš Hruška, Telmo Adão, Emanuel Peres, Raul Morais, and Joaquim Sousa. "Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery". Remote Sensing 10, no. 12 (November 29, 2018): 1907. http://dx.doi.org/10.3390/rs10121907.

Abstract
This study aimed to characterize vineyard vegetation through multi-temporal monitoring using a commercial low-cost rotary-wing unmanned aerial vehicle (UAV) equipped with a consumer-grade red/green/blue (RGB) sensor. Ground-truth data and UAV-based imagery were acquired on nine distinct dates, covering the most significant vegetative growing cycle until harvesting season, over two selected vineyard plots. The acquired UAV-based imagery underwent photogrammetric processing resulting, per flight, in an orthophoto mosaic, used for vegetation estimation. Digital elevation models were used to compute crop surface models. By filtering vegetation within a given height-range, it was possible to separate grapevine vegetation from other vegetation present in a specific vineyard plot, enabling the estimation of grapevine area and volume. The results showed high accuracy in grapevine detection (94.40%) and low error in grapevine volume estimation (root mean square error of 0.13 m and correlation coefficient of 0.78 for height estimation). The accuracy assessment showed that the proposed method based on UAV-based RGB imagery is effective and has potential to become an operational technique. The proposed method also allows the estimation of grapevine areas that can potentially benefit from canopy management operations.
22

Yassine, H., K. Tout, and M. Jaber. "IMPROVING LULC CLASSIFICATION FROM SATELLITE IMAGERY USING DEEP LEARNING – EUROSAT DATASET". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 28, 2021): 369–76. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-369-2021.

Abstract
Machine learning (ML) has proven useful for a very large number of applications in several domains and has seen remarkable growth in remote-sensing image analysis over the past few years. In this work, Deep Learning (DL), a subset of machine learning, was applied to achieve better classification of Land Use Land Cover (LULC) in satellite imagery using Convolutional Neural Networks (CNNs). The EuroSAT benchmark dataset, built from Sentinel-2 satellite images, is used as the training dataset. Sentinel-2 provides images with 13 spectral bands, but surprisingly little attention has been paid to these features in deep learning models: the majority of applications have focused only on RGB, due to the high availability of RGB models in computer vision. While RGB gives an accuracy of 96.83% using a CNN, we present two approaches to improve the classification performance of Sentinel-2 images. In the first approach, features are extracted from the 13 spectral bands of Sentinel-2 instead of RGB, which leads to an accuracy of 98.78%. In the second approach, features are extracted from the 13 spectral bands of Sentinel-2 in addition to calculated indices used in LULC, such as the Blue Ratio (BR), the Vegetation Index based on Red Edge (VIRE), and the Normalized Near Infrared (NNIR), which gives a better accuracy of 99.58%.
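
A minimal CNN for 13-band patch classification, in the spirit of the approach described (the architecture, 64 x 64 patch size, and layer widths below are illustrative assumptions, not the authors' network), sketched in PyTorch:

```python
import torch
import torch.nn as nn

class LULCNet(nn.Module):
    """Small CNN mapping a 13-band patch to one of 10 LULC classes."""
    def __init__(self, in_bands: int = 13, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = LULCNet()
dummy = torch.randn(4, 13, 64, 64)  # batch of 4 hypothetical patches
logits = model(dummy)
print(logits.shape)                 # torch.Size([4, 10])
```
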
23

Iskandar, Beni, I. Nengah Surati Jaya, and Muhammad Buce Saleh. "Crown closure segmentation on wetland lowland forest using the mean shift algorithm". Indonesian Journal of Electrical Engineering and Computer Science 24, no. 2 (November 1, 2021): 965. http://dx.doi.org/10.11591/ijeecs.v24.i2.pp965-977.

Abstract
The availability of high and very high-resolution imagery is helpful for forest inventory, particularly for measuring stand variables such as canopy dimensions, canopy density, and crown closure. This paper describes the examination of the mean shift (MS) algorithm on wetland lowland forest. The study objective was to find the optimal parameters for crown closure segmentation of Pleiades-1B and SPOT-6 imagery. The study shows that crown closure in the red band of the Pleiades-1B image is well segmented using the parameter combination (hs: 6, hr: 5, M: 33), with an overall accuracy of 88.93% and a Kappa accuracy of 73.76%, while for the red, green, blue (RGB) composite of the SPOT-6 image, the optimal parameter combination was (hs: 2, hr: 8, M: 11), with an overall accuracy of 85.72% and a Kappa accuracy of 68.33%. The Pleiades-1B image, with a spatial resolution of 0.5 m, provides better accuracy than the SPOT-6 image with its 1.5 m spatial resolution. The differences between single spectral, synthetic, and RGB bands do not significantly affect the accuracy of segmentation. The study concluded that the segmentation of high and very high-resolution images gives promising results for forest inventory.
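
Mean shift segmentation with a spatial bandwidth hs and a range (spectral) bandwidth hr is available in OpenCV as pyrMeanShiftFiltering; the minimum-region-size parameter M used in the paper has no direct equivalent there and would need an extra region-merging step. A rough sketch on a synthetic RGB tile:

```python
import cv2
import numpy as np

# Hypothetical 8-bit RGB crop standing in for a SPOT-6 composite tile.
rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
img = cv2.GaussianBlur(img, (7, 7), 0)  # give the noise some spatial structure

# sp plays the role of hs (spatial bandwidth) and sr of hr (range bandwidth);
# (hs: 2, hr: 8) is the paper's optimal combination for the SPOT-6 composite.
hs, hr = 2, 8
segmented = cv2.pyrMeanShiftFiltering(img, sp=hs, sr=hr)
print(segmented.shape, segmented.dtype)  # same size, colour-flattened regions
```
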
24

Rau, J. Y., J. P. Jhan, and C. Y. Huang. "ORTHO-RECTIFICATION OF NARROW BAND MULTI-SPECTRAL IMAGERY ASSISTED BY DSLR RGB IMAGERY ACQUIRED BY A FIXED-WING UAS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (August 26, 2015): 67–74. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-67-2015.

Abstract
The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire imagery of high spectral, spatial, and temporal resolution for various remote sensing applications. However, because each band covers a range of only 10 nm, the images have low resolution and a low signal-to-noise ratio, which makes them unsuitable for image matching and digital surface model (DSM) generation. Meanwhile, the spectral correlation among the 12 bands of MiniMCA images is low, so it is difficult to perform tie-point matching and aerial triangulation at the same time. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher-spatial-resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, the two kinds of sensors can be carried at the same time or individually. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark2 DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, since all MiniMCA-12 lenses have different perspective centers and viewing angles, the original 12 channels exhibit a significant band misregistration effect. Thus, the first issue encountered is to reduce this effect. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm, and the images overlap by almost 98%, we propose a modified projective transformation (MPT) method together with two systematic error correction procedures to register all 12 bands of imagery in the same image space. This means that the 12 bands of images acquired at the same exposure time have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). Thus, in the aerial triangulation stage, the master band of the MiniMCA-12 is treated as a reference channel to link with the DSLR RGB images; all reference images from the master band and all RGB images are triangulated at the same time in the same coordinate system of ground control points (GCPs). Since the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images alone, even if they cannot be recognized in the MiniMCA images. Furthermore, a one-meter gridded DSM is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme achieves an average misregistration residual of 0.33 pixels, and the co-registration errors among the 12 MiniMCA ortho-images and between the MiniMCA and Canon RGB ortho-images are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable, and accurate for future remote sensing applications.
25

Shi, Weibo, Xiaohan Liao, Jia Sun, Zhengjian Zhang, Dongliang Wang, Shaoqiang Wang, Wenqiu Qu, et al. "Optimizing Observation Plans for Identifying Faxon Fir (Abies fargesii var. Faxoniana) Using Monthly Unmanned Aerial Vehicle Imagery". Remote Sensing 15, no. 8 (April 21, 2023): 2205. http://dx.doi.org/10.3390/rs15082205.

Abstract
Faxon fir (Abies fargesii var. faxoniana), as a dominant tree species in the subalpine coniferous forest of Southwest China, has strict requirements regarding the temperature and humidity of the growing environment. Therefore, the dynamic and continuous monitoring of Faxon fir distribution is very important to protect this highly sensitive ecological environment. Here, we combined unmanned aerial vehicle (UAV) imagery and convolutional neural networks (CNNs) to identify Faxon fir and explored the identification capabilities of multispectral (five bands) and red-green-blue (RGB) imagery under different months. For a case study area in Wanglang Nature Reserve, Southwest China, we acquired monthly RGB and multispectral images on six occasions over the growing season. We found that the accuracy of RGB imagery varied considerably (the highest intersection over union (IoU), 83.72%, was in April and the lowest, 76.81%, was in June), while the accuracy of multispectral imagery was consistently high (IoU > 81%). In April and October, the accuracy of the RGB imagery was slightly higher than that of multispectral imagery, but for the other months, multispectral imagery was more accurate (IoU was nearly 6% higher than those of the RGB imagery for June). Adding vegetation indices (VIs) improved the accuracy of the RGB models during summer, but there was still a gap to the multispectral model. Hence, our results indicate that the optimized time of the year for identifying Faxon fir using UAV imagery is during the peak of the growing season when using a multispectral imagery. During the non-growing season, RGB imagery was no worse or even slightly better than multispectral imagery for Faxon fir identification. Our study can provide guidance for optimizing observation plans regarding data collection time and UAV loads and could further help enhance the utility of UAVs in forestry and ecological research.
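
The intersection-over-union score the study reports is straightforward to compute from binary masks. A tiny worked example:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union for binary masks (the accuracy measure
    reported for Faxon fir delineation)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

# Hypothetical 4 x 4 masks: the prediction misses one pixel and adds one.
truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
pred  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
print(round(iou(pred, truth), 3))  # 3 / 5 = 0.6
```
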
26

Fu, Yuanyuan, Guijun Yang, Zhenhai Li, Xiaoyu Song, Zhenhong Li, Xingang Xu, Pei Wang, and Chunjiang Zhao. "Winter Wheat Nitrogen Status Estimation Using UAV-Based RGB Imagery and Gaussian Processes Regression". Remote Sensing 12, no. 22 (November 18, 2020): 3778. http://dx.doi.org/10.3390/rs12223778.

Abstract
Predicting the crop nitrogen (N) nutrition status is critical for optimizing nitrogen fertilizer application. The present study examined the ability of multiple image features derived from unmanned aerial vehicle (UAV) RGB images for winter wheat N status estimation across multiple critical growth stages. The image features consisted of RGB-based vegetation indices (VIs), color parameters, and textures, which represented image features of different aspects and different types. To determine which N status indicators could be well-estimated, we considered two mass-based N status indicators (i.e., the leaf N concentration (LNC) and plant N concentration (PNC)) and two area-based N status indicators (i.e., the leaf N density (LND) and plant N density (PND)). Sixteen RGB-based VIs associated with crop growth were selected. Five color space models, including RGB, HSV, L*a*b*, L*c*h*, and L*u*v*, were used to quantify the winter wheat canopy color. The combination of Gaussian processes regression (GPR) and Gabor-based textures with four orientations and five scales was proposed to estimate the winter wheat N status. The gray level co-occurrence matrix (GLCM)-based textures with four orientations were extracted for comparison. The heterogeneity in the textures of different orientations was evaluated using the measures of mean and coefficient of variation (CV). The variable importance in projection (VIP) derived from partial least square regression (PLSR) and a band analysis tool based on Gaussian processes regression (GPR-BAT) were used to identify the best performing image features for the N status estimation. The results indicated that (1) the combination of RGB-based VIs or color parameters only could produce reliable estimates of PND and the GPR model based on the combination of color parameters yielded a higher accuracy for the estimation of PND (R2val = 0.571, RMSEval = 2.846 g/m2, and RPDval = 1.532), compared to that based on the combination of RGB-based VIs; (2) there was no significant heterogeneity in the textures of different orientations and the textures of 45 degrees were recommended in the winter wheat N status estimation; (3) compared with the RGB-based VIs and color parameters, the GPR model based on the Gabor-based textures produced a higher accuracy for the estimation of PND (R2val = 0.675, RMSEval = 2.493 g/m2, and RPDval = 1.748) and the PLSR model based on the GLCM-based textures produced a higher accuracy for the estimation of PNC (R2val = 0.612, RMSEval = 0.380%, and RPDval = 1.601); and (4) the combined use of RGB-based VIs, color parameters, and textures produced comparable estimation results to using textures alone. Both VIP-PLSR and GPR-BAT analyses confirmed that image textures contributed most to the estimation of winter wheat N status. The experimental results reveal the potential of image textures derived from high-definition UAV-based RGB images for the estimation of the winter wheat N status. They also suggest that a conventional low-cost digital camera mounted on a UAV could be well-suited for winter wheat N status monitoring in a fast and non-destructive way.
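
The texture-plus-GPR pipeline can be sketched as follows: build a Gabor filter bank with four orientations and five scales, summarize each filtered image by its mean magnitude, and regress the N indicator on those features with a Gaussian process. Everything below (patch data, frequencies, targets, kernel choice) is synthetic and illustrative, not the paper's configuration:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)

def gabor_features(img: np.ndarray) -> np.ndarray:
    """Mean Gabor magnitude for 4 orientations x 5 scales (20 features),
    mirroring the filter-bank layout described in the abstract."""
    feats = []
    for theta in np.arange(4) * np.pi / 4:          # 0, 45, 90, 135 degrees
        for freq in (0.05, 0.1, 0.2, 0.3, 0.4):     # assumed scales
            real, imag = gabor(img, frequency=freq, theta=theta)
            feats.append(np.hypot(real, imag).mean())
    return np.array(feats)

# Hypothetical canopy image patches and plant N density targets (g/m2).
patches = rng.random((15, 32, 32))
pnd = rng.uniform(2, 12, size=15)

X = np.array([gabor_features(p) for p in patches])
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, pnd)
mean, std = gpr.predict(X[:3], return_std=True)  # predictions with uncertainty
print(mean.round(2), std.round(2))
```
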
27

Li, Hui, Linhai Jing, Changyong Dou, and Haifeng Ding. "A Comprehensive Assessment of the Pansharpening of the Nighttime Light Imagery of the Glimmer Imager of the Sustainable Development Science Satellite 1". Remote Sensing 16, no. 2 (January 8, 2024): 245. http://dx.doi.org/10.3390/rs16020245.

Abstract
The Sustainable Development Science Satellite 1 (SDGSAT-1) satellite, launched in November 2021, is dedicated to providing data detailing the "traces of human activities" for the implementation of the United Nations' 2030 Agenda for Sustainable Development and global scientific research. The glimmer imager (GI) that is equipped on SDGSAT-1 can provide nighttime light (NL) data with a 10 m panchromatic (PAN) band and red, green, and blue (RGB) bands of 40 m resolution, which can be used for a wide range of applications, such as urban expansion, population studies of cities, and the economics of cities, as well as nighttime aerosol thickness monitoring. The 10 m PAN band can be fused with the 40 m RGB bands to obtain a 10 m RGB NL image, which can be used to identify the intensity and type of night lights and the spatial distribution of road networks, and to improve the monitoring accuracy of sustainable development goal (SDG) indicators related to city developments. Existing remote sensing image fusion algorithms are mainly developed for daytime optical remote sensing images. Compared with daytime optical remote sensing images, NL images are characterized by a large amount of dark (low-value) pixels and high background noise. To investigate whether daytime optical image fusion algorithms are suitable for the fusion of GI NL images and which image fusion algorithms are the best choice for GI images, this study conducted a comprehensive evaluation of thirteen state-of-the-art pansharpening algorithms in terms of quantitative indicators and visual inspection using four GI NL datasets. The results showed that PanNet, GLP_HPM, GSA, and HR outperformed the other methods and provided stable performances among the four datasets. Specifically, PanNet offered UIQI values ranging from 0.907 to 0.952 for the four datasets, whereas GSA, HR, and GLP_HPM provided UIQI values ranging from 0.770 to 0.856. The three methods based on convolutional neural networks achieved more robust and better visual effects than the methods using multiresolution analysis at the original scale. According to the experimental results, PanNet shows great potential in the fusion of SDGSAT-1 GI imagery due to its robust performance and relatively short training time. The quality metrics generated at the degraded scale were highly consistent with visual inspection, but those used at the original scale were inconsistent with visual inspection.
28

Feng, Haikuan, Huilin Tao, Zhenhai Li, Guijun Yang, and Chunjiang Zhao. "Comparison of UAV RGB Imagery and Hyperspectral Remote-Sensing Data for Monitoring Winter Wheat Growth". Remote Sensing 14, no. 15 (August 8, 2022): 3811. http://dx.doi.org/10.3390/rs14153811.

Full text
Abstract
Although crop-growth monitoring is important for agricultural managers, it has always been a difficult research topic. However, unmanned aerial vehicles (UAVs) equipped with RGB and hyperspectral cameras can now acquire high-resolution remote-sensing images, which facilitates and accelerates such monitoring. To explore the effect of monitoring a single crop-growth indicator versus multiple indicators, this study combines six growth indicators (plant nitrogen content, above-ground biomass, plant water content, chlorophyll, leaf area index, and plant height) into a new comprehensive growth index (CGI). We investigate the performance of RGB imagery and hyperspectral data for monitoring crop growth based on multi-time estimation of the CGI. The CGI is estimated from vegetation indices based on UAV hyperspectral data using linear, nonlinear, multiple linear regression (MLR), partial least squares regression (PLSR), and random forest (RF) methods. The results are as follows: (1) The RGB-imagery indices red reflectance (r), the excess-red index (EXR), the vegetation atmospherically resistant index (VARI), and the modified green-red vegetation index (MGRVI), as well as the spectral indices consisting of the linear combination index (LCI), the modified simple ratio index (MSR), the simple ratio vegetation index (SR), and the normalized difference vegetation index (NDVI), are more strongly correlated with the CGI than with any single growth-monitoring indicator. (2) CGI estimation models are constructed from a single RGB-imagery index and a single spectral index; the optimal RGB-imagery index for each of the four growth stages, in order, is r, r, r, EXR, while the optimal spectral index is LCI for all four growth stages. (3) The MLR, PLSR, and RF methods are used to estimate the CGI, with the MLR method producing the best estimates. (4) Finally, the CGI is estimated more accurately using the UAV hyperspectral indices than using the RGB-image indices.
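The RGB-image indices named in this abstract are simple functions of the normalized chromatic coordinates; a sketch of common formulations follows (these follow standard usage in the literature, and the exact definitions used in the paper may differ slightly):

import numpy as np

def rgb_indices(img):
    # img: H x W x 3 float array. r, g, b are normalized chromatic
    # coordinates of the red, green and blue channels.
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    total = R + G + B + 1e-9
    r, g, b = R / total, G / total, B / total
    exr = 1.4 * r - g                           # excess-red index (EXR)
    vari = (g - r) / (g + r - b + 1e-9)         # atmospherically resistant (VARI)
    mgrvi = (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + 1e-9)  # modified green-red (MGRVI)
    return r, exr, vari, mgrvi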
APA, Harvard, Vancouver, ISO, and other styles
29

López-García, Patricia, Diego Intrigliolo, Miguel A. Moreno, Alejandro Martínez-Moreno, José Fernando Ortega, Eva Pilar Pérez-Álvarez, and Rocío Ballesteros. "Machine Learning-Based Processing of Multispectral and RGB UAV Imagery for the Multitemporal Monitoring of Vineyard Water Status". Agronomy 12, no. 9 (September 7, 2022): 2122. http://dx.doi.org/10.3390/agronomy12092122.

Full text
Abstract
The development of unmanned aerial vehicles (UAVs) and light sensors has enabled new approaches for high-resolution remote sensing applications. High spatial and temporal resolution spectral data acquired by multispectral and conventional cameras (red, green, blue (RGB) sensors) onboard UAVs can be useful for plant water status determination and, as a consequence, for irrigation management. A study in a vineyard located in south-eastern Spain was carried out during the 2018, 2019, and 2020 seasons to assess the potential of these techniques. Different water qualities and irrigation start dates throughout the growth cycle were imposed. Flights with RGB and multispectral cameras mounted on a UAV were performed throughout the growth cycle, and orthoimages were generated. These orthoimages were segmented to include only vegetation and used to calculate the green canopy cover (GCC). The stem water potential was measured, and the water stress integral (Sψ) was obtained during each irrigation season. Multiple linear regression techniques and artificial neural network (ANN) models, with multispectral and RGB bands as well as GCC as inputs, were trained and tested to simulate the Sψ. The results showed that the information in the visible domain was highly related to the Sψ in the 2018 season. For all the other years and combinations of years, multispectral ANNs performed slightly better. Differences in the spatial resolution and radiometric quality of the RGB and multispectral geomatic products explain the good model performance with each type of data. Additionally, RGB cameras cost less and are easier to use than multispectral cameras, and RGB images are simpler to process than multispectral images. Therefore, RGB sensors are a good option for predicting whole-vineyard water status. In any case, punctual field measurements are still required to generate a general model that estimates the water status in any season and vineyard.
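Green canopy cover, as used above, is the vegetated fraction of each segmented orthoimage chip; a minimal sketch follows, using an excess-green threshold as a stand-in segmenter (the paper's own segmentation procedure is not reproduced here):

import numpy as np

def green_canopy_cover(img, thresh=0.1):
    # img: H x W x 3 RGB orthoimage chip. Vegetation is segmented with a
    # simple excess-green threshold and GCC is the vegetated pixel fraction.
    total = img.sum(axis=-1) + 1e-9
    r, g, b = (img[..., i] / total for i in range(3))
    exg = 2 * g - r - b               # excess green index
    return float((exg > thresh).mean())   # GCC in [0, 1]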
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Qiang, Yongping Zhang, Min Xie, Zhiwei Zhao, Lei Yang, Jie Liu, and Dingyi Hou. "Estimation of Fv/Fm in Spring Wheat Using UAV-Based Multispectral and RGB Imagery with Multiple Machine Learning Methods". Agronomy 13, no. 4 (March 29, 2023): 1003. http://dx.doi.org/10.3390/agronomy13041003.

Full text
Abstract
The maximum quantum efficiency of photosystem II (Fv/Fm) is a widely used indicator of photosynthetic health in plants. Remote sensing of Fv/Fm using MS (multispectral) and RGB imagery has the potential to enable high-throughput screening of plant health in agricultural and ecological applications. This study aimed to estimate Fv/Fm in spring wheat at an experimental base in Hanghou County, Inner Mongolia, from 2020 to 2021. RGB and MS images were obtained at the wheat flowering stage using a Da-Jiang Phantom 4 multispectral drone. A total of 51 vegetation indices were constructed, and the Fv/Fm of wheat on the ground was measured simultaneously using a Handy PEA plant efficiency analyzer. The performance of 26 machine learning algorithms for estimating Fv/Fm from RGB and multispectral imagery was compared. The findings revealed that a majority of the multispectral vegetation indices and approximately half of the RGB vegetation indices demonstrated a strong correlation with Fv/Fm, as evidenced by an absolute correlation coefficient greater than 0.75. The Gradient Boosting Regressor (GBR) was the optimal estimation model for RGB, with the important features being RGBVI and ExR. The Huber model was the optimal estimation model for MS, with the important feature being MSAVI2. Automatic Relevance Determination (ARD) was the optimal estimation model for the combination (RGB + MS), with the important features being SIPI, ExR, and VEG. The highest accuracy was achieved using the ARD model with RGB + MS vegetation indices on the test set (MAE = 0.019, MSE = 0.001, RMSE = 0.024, R2 = 0.925, RMSLE = 0.014, MAPE = 0.026). The combined analysis suggests that using vegetation indices (SIPI, ExR, and VEG) extracted from UAV RGB and MS imagery as model inputs, together with the ARD model, can significantly improve the accuracy of Fv/Fm estimation at the flowering stage. This approach provides new technical support for rapid and accurate monitoring of Fv/Fm in spring wheat in the Hetao Irrigation District.
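A minimal sketch of one of the regressors named above (the Gradient Boosting Regressor, the optimal RGB model) on a vegetation-index matrix follows; X, y, and index_names are placeholders for data assembled as in the abstract, and the split and hyperparameters are illustrative assumptions:

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# X: plots x vegetation indices; y: ground-measured Fv/Fm.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
pred = gbr.predict(X_test)
print(f"MAE={mean_absolute_error(y_test, pred):.3f}  R2={r2_score(y_test, pred):.3f}")
print(dict(zip(index_names, gbr.feature_importances_)))  # cf. RGBVI/ExR importance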
APA, Harvard, Vancouver, ISO, and other styles
31

Lussem, U., A. Bolten, M. L. Gnyp, J. Jasper, and G. Bareth. "EVALUATION OF RGB-BASED VEGETATION INDICES FROM UAV IMAGERY TO ESTIMATE FORAGE YIELD IN GRASSLAND". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 1215–19. http://dx.doi.org/10.5194/isprs-archives-xlii-3-1215-2018.

Full text
Abstract
Monitoring forage yield throughout the growing season is of key importance to support management decisions on grasslands/pastures. Especially on intensively managed grasslands, where nitrogen fertilizer and/or manure are applied regularly, precision agriculture applications are beneficial to support sustainable, site-specific management decisions on fertilizer treatment, grazing management and yield forecasting to mitigate potential negative impacts. To support these management decisions, timely and accurate information on plant parameters (e.g. forage yield) is needed with a high spatial and temporal resolution. However, in highly heterogeneous plant communities such as grasslands, assessing in-field variability non-destructively, e.g. to determine adequate fertilizer application, remains challenging. Biomass/yield estimation in particular, as an important parameter in assessing grassland quality and quantity, is rather laborious. Forage yield (dry or fresh matter) is mostly measured manually with rising plate meters (RPM) or ultrasonic sensors (handheld or mounted on vehicles). Thus, in-field variability either cannot be assessed for the entire field or only at the cost of disturbing the sward. Using unmanned aerial vehicles (UAVs) equipped with consumer-grade RGB cameras, in-field variability can be assessed by computing RGB-based vegetation indices. In this contribution we test and evaluate the robustness of RGB-based vegetation indices for estimating dry matter forage yield on a recently established experimental grassland site in Germany. Furthermore, the RGB-based VIs are compared to indices computed from the Yara N-Sensor. The results show a good correlation of forage yield with RGB-based VIs such as the NGRDI, with R² values of 0.62.
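The NGRDI mentioned above is the normalized green-red difference index, (G − R)/(G + R); a short sketch computing it per plot and fitting a simple linear yield model follows, where plot_chips and dry_matter_yield are assumed placeholders rather than the authors' data:

import numpy as np

def ngrdi(img):
    # Normalized green-red difference index from an H x W x 3 RGB array.
    G = img[..., 1].astype(float)
    R = img[..., 0].astype(float)
    return (G - R) / (G + R + 1e-9)

# Simple linear fit of per-plot mean NGRDI against forage yield.
plot_means = np.array([ngrdi(chip).mean() for chip in plot_chips])
slope, intercept = np.polyfit(plot_means, dry_matter_yield, 1)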
APA, Harvard, Vancouver, ISO, and other styles
32

Dabrowski, R., A. Orych, A. Jenerowicz, and P. Walczykowski. "PRELIMINARY RESULTS FROM THE PORTABLE IMAGERY QUALITY ASSESSMENT TEST FIELD (PIQuAT) OF UAV IMAGERY FOR IMAGERY RECONNAISSANCE PURPOSES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (August 26, 2015): 111–15. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-111-2015.

Full text
Abstract
The article presents a set of initial results of a quality assessment study of two different types of sensors mounted on an unmanned aerial vehicle, carried out over a specially designed and constructed test field. The PIQuAT (Portable Imagery Quality Assessment Test Field) was designed specifically for determining the quality parameters of UAV sensors, especially in terms of spatial, spectral and radiometric resolution and chosen geometric aspects. The sensors used include a multispectral framing camera and a high-resolution RGB sensor. The flights were conducted from a number of altitudes ranging from 10 m to 200 m above the test field. Acquiring data at a number of different altitudes allowed the authors to evaluate the obtained results and check for possible linearity of the calculated quality assessment parameters. The radiometric properties of the sensors were evaluated from images of the grayscale target section of the PIQuAT field. The spectral resolution of the imagery was determined based on a number of test samples with known spectral reflectance curves. These reference spectral reflectance curves were then compared with the spectral reflectance coefficients at the wavelengths registered by the miniMCA camera. Before conducting these experiments in field conditions, the interior orientation parameters were calculated for the miniMCA and the RGB sensor in laboratory conditions. These parameters include the actual pixel size on the detector, distortion parameters, the calibrated focal length (CFL) and the coordinates of the principal point of autocollimation (for the miniMCA, for each of the six channels separately).
APA, Harvard, Vancouver, ISO, and other styles
33

Jollet, D., U. Rascher, and M. Müller-Linow. "Assessing yield quality parameters in bush bean via RGB imagery". Acta Horticulturae, no. 1327 (November 2021): 421–28. http://dx.doi.org/10.17660/actahortic.2021.1327.56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Sebastian, C., B. Boom, T. van Lankveld, E. Bondarev, and P. H. N. De With. "BOOTSTRAPPED CNNS FOR BUILDING SEGMENTATION ON RGB-D AERIAL IMAGERY". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-4 (September 19, 2018): 187–92. http://dx.doi.org/10.5194/isprs-annals-iv-4-187-2018.

Full text
Abstract
Detection of buildings and other objects from aerial images has various applications in urban planning and map making. Automated building detection from aerial imagery is a challenging task, as it is prone to varying lighting conditions, shadows and occlusions. Convolutional Neural Networks (CNNs) are robust against some of these variations, although they fail to distinguish easy and difficult examples. We train a detection algorithm from RGB-D images to obtain a segmented mask by using the CNN architecture DenseNet. First, we improve the performance of the model by applying a statistical re-sampling technique called bootstrapping and demonstrate that more informative examples are retained. Second, the proposed method outperforms the non-bootstrapped version while utilizing only one-sixth of the original training data, obtaining a precision-recall break-even of 95.10% on our aerial imagery dataset.
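The bootstrapping described above re-samples training examples by loss; one common online variant, sketched below in PyTorch, back-propagates only the hardest pixels per batch. This is an illustrative formulation under that assumption, not necessarily the paper's exact re-sampling scheme:

import torch
import torch.nn.functional as F

def bootstrapped_bce(logits, target, keep_frac=0.25):
    # Online bootstrapped binary cross-entropy for segmentation: only the
    # hardest (highest-loss) pixels per image contribute to the gradient,
    # so easy examples stop dominating training.
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    loss = loss.view(loss.size(0), -1)          # flatten pixels per image
    k = max(1, int(keep_frac * loss.size(1)))
    hardest, _ = loss.topk(k, dim=1)
    return hardest.mean()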
APA, Harvard, Vancouver, ISO, and other styles
35

Rodriguez-Gallo, Yakdiel, Byron Escobar-Benitez, and Jony Rodriguez-Lainez. "Robust Coffee Rust Detection Using UAV-Based Aerial RGB Imagery". AgriEngineering 5, no. 3 (August 21, 2023): 1415–31. http://dx.doi.org/10.3390/agriengineering5030088.

Full text
Abstract
Timely detection of pests and diseases in crops is essential to mitigate severe damage and economic losses, especially in the context of climate change. This paper describes a method for detecting the presence of coffee leaf rust (CLR) using two databases: RoCoLe and a database obtained from an unmanned aerial vehicle (UAV) equipped with an RGB camera. The developed method follows a two-stage approach: in the first stage, images are processed using ImageJ software, while in the second stage, Python is used to implement morphological filters and the Hough transform for rust identification. The algorithm's performance is evaluated using the chi-square test, and its discriminatory capacity is assessed through the generation of a Receiver Operating Characteristic (ROC) curve. Additionally, Cohen's kappa method is used to assess the agreement among observers, while Kendall's rank correlation coefficient (KRCC) measures the correlation between the observers' criteria and the classifications generated by the method. The results demonstrate that the developed method achieved an efficiency of 97% in detecting coffee rust in the RoCoLe dataset and over 93.5% in UAV images. These findings suggest that the developed method could in future be implemented on a UAV for rust detection.
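The morphology-plus-Hough stage described above can be sketched with OpenCV as below; the input path, HSV hue window, and Hough parameters are illustrative assumptions, not the authors' calibrated values:

import cv2

img = cv2.imread("leaf.jpg")                  # placeholder input path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Rust lesions are yellow-orange; this hue window is illustrative only.
mask = cv2.inRange(hsv, (10, 80, 80), (30, 255, 255))
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove speckle
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=60, param2=18, minRadius=3, maxRadius=30)
n_lesions = 0 if circles is None else circles.shape[1]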
APA, Harvard, Vancouver, ISO, and other styles
36

Psiroukis, Vasilis, George Papadopoulos, Aikaterini Kasimati, Nikos Tsoulias, and Spyros Fountas. "Cotton Growth Modelling Using UAS-Derived DSM and RGB Imagery". Remote Sensing 15, no. 5 (February 22, 2023): 1214. http://dx.doi.org/10.3390/rs15051214.

Full text
Abstract
Modeling cotton plant growth is an important aspect of improving cotton yields and fiber quality and optimizing land management strategies. High-throughput phenotyping (HTP) systems, including those using high-resolution imagery from unmanned aerial systems (UAS) combined with sensor technologies, can accurately measure and characterize phenotypic traits such as plant height, canopy cover, and vegetation indices. However, manual assessment of plant characteristics is still widely used in practice; it is time-consuming, labor-intensive, and prone to human error. In this study, we investigated the use of a data-processing pipeline to estimate cotton plant height using UAS-derived visible-spectrum vegetation indices and photogrammetric products. Experiments were conducted at an experimental cotton field in Aliartos, Greece, using a DJI Phantom 4 UAS at five different stages of the 2022 summer cultivation season. Ground Control Points (GCPs) were marked in the field and used for georeferencing and model optimization. The imagery was used to generate dense point clouds, which were then used to create Digital Surface Models (DSMs), while Digital Elevation Models (DEMs) were interpolated from RTK GPS measurements. Three vegetation indices were calculated using visible-spectrum reflectance data from the generated orthomosaic maps, and ground coverage of the cotton canopy was also calculated using binary masks. Finally, the correlations between the indices and crop height were examined. The results showed that the vegetation indices, especially the Green Chromatic Coordinate (GCC) and Normalized Excessive Green (NExG) indices, had high correlations with cotton height in the earlier growth stages, exceeding 0.70, while vegetation cover showed a more consistent trend throughout the season, exceeding 0.90 at the beginning of the season.
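Plant height from a DSM/DEM pair reduces to a canopy height model (DSM minus DEM); a minimal sketch follows, where the file names and the binary canopy mask veg_mask are assumed placeholders:

import numpy as np
import rasterio

# dsm.tif (photogrammetric surface) and dem.tif (terrain interpolated from
# RTK-GPS points) are placeholder file names.
with rasterio.open("dsm.tif") as s, rasterio.open("dem.tif") as t:
    dsm, dem = s.read(1), t.read(1)
chm = dsm - dem                    # canopy height model, metres
chm[chm < 0] = 0                   # clamp photogrammetric noise
plant_height = np.nanpercentile(chm[veg_mask], 95)  # robust top-of-canopy height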
APA, Harvard, Vancouver, ISO, and other styles
37

Surový, P., N. A. Ribeiro, A. C. Oliveira, and Ľ. Scheer. "Discrimination of vegetation from the background in high resolution colour remote sensed imagery". Journal of Forest Science 50, no. 4 (January 11, 2012): 161–70. http://dx.doi.org/10.17221/4611-jfs.

Full text
Abstract
Different transformations of RGB colour space were compared to develop the best method for discriminating vegetation from the background in open pure cork oak stands in southern Portugal in high-resolution colour imagery. The normalised difference index, the i1i2i3 colour space and other indices developed for classic band imagery were recalculated for near-infrared imagery and tested. A new method for fully automated thresholding was developed and tested. The newly developed index shows equal accuracy but provides the smallest overestimation error and retains the largest range of grey levels for subsequent shape analysis.
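The paper's fully automated thresholding method is not detailed in the abstract; a common stand-in, sketched here under that caveat, applies Otsu's threshold to a normalised-difference green index:

import numpy as np
from skimage.filters import threshold_otsu

def vegetation_mask(img):
    # Automatic vegetation/background separation via Otsu's threshold on a
    # normalised-difference green index; this is an illustrative substitute,
    # not the paper's own thresholding method.
    R = img[..., 0].astype(float)
    G = img[..., 1].astype(float)
    ndi = (G - R) / (G + R + 1e-9)
    return ndi > threshold_otsu(ndi)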
APA, Harvard, Vancouver, ISO, and other styles
38

Agapiou, Athos. "Vegetation Extraction Using Visible-Bands from Openly Licensed Unmanned Aerial Vehicle Imagery". Drones 4, no. 2 (June 26, 2020): 27. http://dx.doi.org/10.3390/drones4020027.

Full text
Abstract
Red–green–blue (RGB) cameras attached to commercial unmanned aerial vehicles (UAVs) can support remote-observation small-scale campaigns by mapping an area of interest to within a few centimeters' accuracy. Vegetated areas need to be identified either for masking purposes (e.g., to exclude vegetated areas from the production of a digital elevation model (DEM)) or for monitoring vegetation anomalies, especially in precision agriculture applications. However, while the detection of vegetated areas is of great importance for several UAV remote sensing applications, this type of processing can be quite challenging. Usually, healthy vegetation is extracted in the near-infrared part of the spectrum (approximately 760–900 nm), which is not captured by visible (RGB) cameras. In this study, we explore several visible (RGB) vegetation indices in different environments, using various UAV sensors and cameras, to validate their performance. For this purpose, openly licensed unmanned aerial vehicle (UAV) imagery was downloaded "as is" and analyzed. The overall results are presented in the study. As was found, the green leaf index (GLI) provided the optimum results for all case studies.
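The green leaf index singled out above has a standard closed form, GLI = (2G − R − B) / (2G + R + B); a short sketch follows, where uav_img is an assumed placeholder array:

import numpy as np

def gli(img):
    # Green leaf index for an H x W x 3 RGB array:
    # GLI = (2G - R - B) / (2G + R + B), with values in [-1, 1].
    R, G, B = (img[..., i].astype(float) for i in range(3))
    return (2 * G - R - B) / (2 * G + R + B + 1e-9)

veg = gli(uav_img) > 0   # positive GLI is commonly taken as vegetation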
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Jiali, Ming Chen, Weidong Zhu, Liting Hu, and Yasong Wang. "A Combined Approach for Retrieving Bathymetry from Aerial Stereo RGB Imagery". Remote Sensing 14, no. 3 (February 7, 2022): 760. http://dx.doi.org/10.3390/rs14030760.

Full text
Abstract
Shallow water bathymetry is critical to understanding and managing marine ecosystems. Bathymetric inversion models using airborne/satellite multispectral data are an efficient way to retrieve shallow bathymetry due to the affordable cost of airborne/satellite images and the reduced field work required. With the increasing availability and popularity of unmanned aerial vehicle (UAV) imagery, this paper explores a new approach to obtaining bathymetry from UAV visible-band (RGB) images. A combined approach is therefore proposed for retrieving bathymetry from aerial stereo RGB imagery: the combination of a new stereo triangulation method (an improved projection-image-based two-medium stereo triangulation method) and spectral inversion models. In general, the inversion models require bathymetry reference points, which are not always available; the proposed approach therefore employs the new stereo triangulation method to obtain reliable bathymetric points, which act as the reference points for the inversion models. Using various numbers of triangulation points as reference points together with a Geographically Weighted Regression (GWR) model, a series of experiments was conducted using UAV RGB images of a small island, and the results were validated against LiDAR points. The promising results indicate that the proposed approach is an efficient technique for shallow water bathymetry retrieval and, together with UAV platforms, could be deployed easily for a broad range of applications within marine environments.
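Two-medium (air-water) triangulation must correct for refraction at the water surface; a first-order, near-nadir baseline correction simply scales the apparent depth by the refractive index of water. The sketch below shows only this baseline, not the paper's improved projection-image method:

N_WATER = 1.34   # approximate refractive index of seawater

def correct_depth(apparent_depth_m, n=N_WATER):
    # First-order two-medium correction for near-nadir rays: refraction at
    # the air-water interface makes the bottom appear shallower, so the true
    # depth is roughly n times the apparent photogrammetric depth. This is a
    # baseline correction, not the paper's improved triangulation method.
    return n * apparent_depth_m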
APA, Harvard, Vancouver, ISO, and other styles
40

Chandel, Narendra S., Yogesh A. Rajwade, Kumkum Dubey, Abhilash K. Chandel, A. Subeesh, and Mukesh K. Tiwari. "Water Stress Identification of Winter Wheat Crop with State-of-the-Art AI Techniques and High-Resolution Thermal-RGB Imagery". Plants 11, no. 23 (December 2, 2022): 3344. http://dx.doi.org/10.3390/plants11233344.

Full text
Abstract
Timely crop water stress detection can help precision irrigation management and minimize yield loss. A two-year study was conducted on non-invasive winter wheat water stress monitoring using state-of-the-art computer vision and thermal-RGB imagery inputs. Field treatment plots were irrigated using two irrigation systems (flood and sprinkler) at four rates (100, 75, 50, and 25% of crop evapotranspiration [ETc]). A total of 3200 images under different treatments were captured at critical growth stages, that is, 20, 35, 70, 95, and 108 days after sowing, using a custom-developed thermal-RGB imaging system. Crop and soil response measurements of canopy temperature (Tc), relative water content (RWC), soil moisture content (SMC), and relative humidity (RH) were significantly affected by the irrigation treatments, showing the lowest Tc (22.5 ± 2 °C) and highest RWC (90%) and SMC (25.7 ± 2.2%) for 100% ETc, and the highest Tc (28 ± 3 °C) and lowest RWC (74%) and SMC (20.5 ± 3.1%) for 25% ETc. The RGB and thermal imagery were then used as inputs to feature-extraction-based deep learning models (AlexNet, GoogLeNet, Inception V3, MobileNet V2, ResNet50), while RWC, SMC, Tc, and RH were the inputs to function-approximation models (Artificial Neural Network (ANN), Kernel Nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), and Long Short-Term Memory (DL-LSTM)) to classify stressed/non-stressed crops. Among the feature-extraction-based models, ResNet50 outperformed the others, showing a discriminant accuracy of 96.9% with RGB and 98.4% with thermal imagery inputs. Overall, classification accuracy was higher for thermal imagery than for RGB imagery inputs. The DL-LSTM had the highest discriminant accuracy, 96.7%, and the lowest error among the function-approximation models for classifying stress/non-stress. The study suggests that computer vision coupled with thermal-RGB imagery can be instrumental in the high-throughput mitigation and management of crop water stress.
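A minimal transfer-learning sketch for a binary stressed/non-stressed classifier on one of the backbones named above (ResNet50) follows; train_loader is an assumed DataLoader, and how thermal images are fed to a three-channel network (channel replication or a custom first layer) is a design choice not shown here:

import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet50 with a two-class head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for imgs, labels in train_loader:   # (image, label) batches, assumed
    optimizer.zero_grad()
    loss = criterion(model(imgs), labels)
    loss.backward()
    optimizer.step()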
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Jiuyuan, Jingshan Lu, Qiuyan Zhang, Qimo Qi, Gangjun Zheng, Fadi Chen, Sumei Chen, Fei Zhang, Weimin Fang, and Zhiyong Guan. "Estimation of Garden Chrysanthemum Crown Diameter Using Unmanned Aerial Vehicle (UAV)-Based RGB Imagery". Agronomy 14, no. 2 (February 6, 2024): 337. http://dx.doi.org/10.3390/agronomy14020337.

Full text
Abstract
Crown diameter is one of the crucial indicators for evaluating the adaptability, growth quality, and ornamental value of garden chrysanthemums. To accurately obtain crown diameter, this study employed an unmanned aerial vehicle (UAV) equipped with an RGB camera to capture orthorectified canopy images of 64 varieties of garden chrysanthemums at different growth stages. Three methods, namely the RGB color space, the hue-saturation-value (HSV) color space, and the mask region-based convolutional neural network (Mask R-CNN), were employed to estimate the crown diameter of garden chrysanthemums. The results revealed that Mask R-CNN exhibited the best performance in crown diameter estimation (sample number = 2409, R2 = 0.9629, RMSE = 2.2949 cm). Following closely, the HSV color space-based model exhibited strong performance (sample number = 2409, R2 = 0.9465, RMSE = 3.4073 cm). Both of these methods were effective in estimating crown diameter across the entire growth period. In contrast, the RGB color space-based model exhibited slightly lower performance (sample number = 1065, R2 = 0.9011, RMSE = 3.3418 cm) and was only applicable during periods when the entire plant was predominantly green. These findings provide theoretical and technical support for utilizing UAV-based imagery to estimate the crown diameter of garden chrysanthemums.
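Once a per-plant canopy mask is available (e.g. from an instance segmenter such as Mask R-CNN), a crown diameter can be read off as the equivalent-circle diameter of each region; a sketch follows, with the ground sampling distance gsd_m as an assumed parameter rather than the paper's value:

import numpy as np
from skimage.measure import label, regionprops

def crown_diameters(mask, gsd_m=0.01):
    # Per-plant crown diameter from a binary canopy mask: the diameter of
    # the circle with the same area as each labeled region, scaled by the
    # ground sampling distance (gsd_m, metres per pixel).
    diameters = []
    for region in regionprops(label(mask)):
        d_px = 2.0 * np.sqrt(region.area / np.pi)
        diameters.append(d_px * gsd_m)
    return diameters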
APA, Harvard, Vancouver, ISO, and other styles
42

Makinde, Esther, and Oluwaseun Oyelade. "Land Cover Mapping Using Sentinel-1 SAR Satellite Imagery of Lagos State for 2017". Proceedings 2, no. 22 (October 29, 2018): 1399. http://dx.doi.org/10.3390/proceedings2221399.

Full text
Abstract
For several years, Landsat imagery has been used for land cover mapping analysis. However, cloud cover constitutes a major obstacle to land cover classification in coastal tropical regions, including Lagos state. In this work, a land cover map of Lagos state is created using Sentinel-1 Synthetic Aperture Radar (SAR) imagery. To this aim, Sentinel-1 SAR dual-pol (VV+VH) Interferometric Wide swath mode (IW) data acquired over Lagos state, Nigeria in 2017 were used. Results include an RGB composite of the image and a classified image, with an overall accuracy of 0.757 and a kappa value of about 0.719. The classification therefore passed the accuracy assessment. It is concluded that Sentinel-1 SAR data can be effectively exploited for producing an acceptably accurate land cover map of Lagos state, with relevant advantages for areas with cloud cover.
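An RGB composite from dual-pol SAR is conventionally built by assigning the two polarizations and their ratio to the three display channels; a sketch follows, noting that the channel assignment is a common convention and not necessarily the one used in the paper:

import numpy as np

def dualpol_rgb(vv_db, vh_db):
    # False-colour composite from dual-pol backscatter in dB:
    # R = VV, G = VH, B = VV/VH ratio (a dB difference), each stretched to
    # [0, 1] between its 2nd and 98th percentiles.
    def stretch(x):
        lo, hi = np.nanpercentile(x, (2, 98))
        return np.clip((x - lo) / (hi - lo + 1e-9), 0, 1)
    return np.dstack([stretch(vv_db), stretch(vh_db), stretch(vv_db - vh_db)])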
APA, Harvard, Vancouver, ISO, and other styles
43

Alldieck, Thiemo, Chris Bahnsen, and Thomas Moeslund. "Context-Aware Fusion of RGB and Thermal Imagery for Traffic Monitoring". Sensors 16, no. 11 (November 18, 2016): 1947. http://dx.doi.org/10.3390/s16111947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Abu Sari, Mohd Yazid, Yana Mazwin Mohmad Hassim, Rahmat Hidayat, and Asmala Ahmad. "Monitoring Rice Crop and Paddy Field Condition Using UAV RGB Imagery". JOIV: International Journal on Informatics Visualization 5, no. 4 (December 31, 2021): 469. http://dx.doi.org/10.30630/joiv.5.4.742.

Full text
Abstract
Effective crop management practices are very important to sustaining crop production. With the emergence of Industrial Revolution 4.0 (IR 4.0), precision farming has become a key element of modern agriculture, helping farmers maintain the sustainability of crop production. Unmanned aerial vehicles (UAVs), also known as drones, are widely used in agriculture as one of the potential technologies to collect data and monitor crop condition. Managing and monitoring paddy fields, especially at larger scales, is one of the biggest challenges for farmers. Traditionally, paddy field and crop condition are monitored and observed manually by the farmers, which may lead to inaccurate observation of the plot due to the large area. Therefore, this study proposes the application of unmanned aerial vehicles and RGB imagery for monitoring rice crop development and paddy field condition. A UAV integrated with an RGB digital camera was used to collect data in the paddy field. Results show that early monitoring of rice crops is important for identifying crop condition. Therefore, aerial imagery analysis from UAVs can help improve rice crop management and is eventually expected to increase rice crop production.
APA, Harvard, Vancouver, ISO, and other styles
45

Soria, Xavier, Angel Sappa, and Riad Hammoud. "Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Images". Sensors 18, no. 7 (June 27, 2018): 2059. http://dx.doi.org/10.3390/s18072059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Paccaud, Philippe, and D. A. Barry. "Obstacle detection for lake-deployed autonomous surface vehicles using RGB imagery". PLOS ONE 13, no. 10 (October 22, 2018): e0205319. http://dx.doi.org/10.1371/journal.pone.0205319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Tedore, Cynthia, and Sönke Johnsen. "Using RGB displays to portray color realistic imagery to animal eyes". Current Zoology 63, no. 1 (June 30, 2016): 27–34. http://dx.doi.org/10.1093/cz/zow076.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Laslier, Marianne, Laurence Hubert-Moy, Thomas Corpetti, and Simon Dufour. "Monitoring the colonization of alluvial deposits using multitemporal UAV RGB-imagery". Applied Vegetation Science 22, no. 4 (October 2019): 561–72. http://dx.doi.org/10.1111/avsc.12455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pádua, Luís, Pedro Marques, Jonáš Hruška, Telmo Adão, José Bessa, António Sousa, Emanuel Peres, Raul Morais, and Joaquim J. Sousa. "Vineyard properties extraction combining UAS-based RGB imagery with elevation data". International Journal of Remote Sensing 39, no. 15-16 (May 10, 2018): 5377–401. http://dx.doi.org/10.1080/01431161.2018.1471548.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Davidson, Corey, Vishnu Jaganathan, Arun Narenthiran Sivakumar, Joby M. Prince Czarnecki, and Girish Chowdhary. "NDVI/NDRE prediction from standard RGB aerial imagery using deep learning". Computers and Electronics in Agriculture 203 (December 2022): 107396. http://dx.doi.org/10.1016/j.compag.2022.107396.

Full text
APA, Harvard, Vancouver, ISO, and other styles