Journal articles on the topic 'Multispectral aerial video imagery'

Consult the top 50 journal articles for your research on the topic 'Multispectral aerial video imagery.'

1

Morozov, A. N., A. L. Nazolin, and I. L. Fufurin. "Optical and Spectral Methods for Detection and Recognition of Unmanned Aerial Vehicles." Radio Engineering, no. 2 (May 17, 2020): 39–50. http://dx.doi.org/10.36027/rdeng.0220.0000167.

Abstract:
The paper considers the problem of detecting and identifying unmanned aerial vehicles (UAVs) against animate and inanimate objects, and of identifying their payload, by optical and spectral-optical methods. A state-of-the-art analysis has shown that radar methods for detecting small UAVs have a dead zone at distances of 250–700 m, which makes optical detection methods important in this range. The application possibilities and improvements of the optical scheme for detecting UAVs at long distances of about 1–2 km are considered. Location is performed by an object's intrinsic infrared (IR) radiation using IR cameras and thermal imagers, as well as by a laser rangefinder (lidar). The paper gives examples of successful dynamic detection and recognition of objects from video images by graph-theoretic and neural-network methods using the Faster R-CNN, YOLO, and SSD network models, including from a single received frame. The possibility of using available spectral-optical methods to analyze the chemical composition of materials, which can be employed for remote identification of UAV coating materials as well as for detecting trace amounts of matter on a UAV's surface, has been studied. The advantages and disadvantages of luminescent spectroscopy with UV illumination, Raman spectroscopy, differential absorption spectroscopy based on a tunable UV laser, spectral imaging methods (hyper-/multispectral images), and diffuse reflectance laser spectroscopy using infrared tunable quantum cascade lasers (QCLs) are shown. To assess the limiting distances for detecting and identifying UAVs, and for identifying the chemical composition of an object by optical and spectral-optical methods, an experimental setup (a hybrid lidar UAV identification complex) is described and expected to be useful; its structure and performance are presented.
Such studies are aimed at developing a scientific basis for remote detection, identification, tracking, and determination of UAV parameters, and for attributing UAVs to different groups, by optical location and spectroscopy methods, as well as at automatic optical UAV recognition in various environments against a background of moving wildlife. The proposed solution is to combine optical location and spectral analysis methods with methods of statistics, graph theory, deep learning, neural networks, and automatic control, which is an interdisciplinary fundamental scientific task.
2

Mian, O., J. Lutes, G. Lipa, J. J. Hutton, E. Gavelle, and S. Borghini. "ACCURACY ASSESSMENT OF DIRECT GEOREFERENCING FOR PHOTOGRAMMETRIC APPLICATIONS ON SMALL UNMANNED AERIAL PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W4 (March 17, 2016): 77–83. http://dx.doi.org/10.5194/isprs-archives-xl-3-w4-77-2016.

Abstract:
Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limit the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete, ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed for systems integrators creating mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, the DMS addresses the airframe's requirements for high-accuracy position and orientation for tasks such as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control.

This paper presents results using a DMS comprising an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without ground control points on a Microdrones md4-1000 platform, in tests conducted by Applanix and Avyon. The APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera with a 36 MP, 4.9-micron CCD producing images of 7360 columns by 4912 rows. It was configured with a 50 mm AF-S Nikkor f/1.8 lens and subsequently with a 35 mm Zeiss Sonnar T* FE F2.8 lens.

Both camera/lens combinations and the APX-15 were mounted on a Microdrones md4-1000 quad-rotor VTOL UAV. The Sony a7R and each lens combination were focused and calibrated terrestrially at the Applanix camera calibration facility, then integrated with the APX-15 GNSS-Inertial system using a custom mount designed specifically for UAV applications. The mount is constructed to maintain the stability of both the interior orientation and the IMU boresight calibration under shock and vibration, turning the Sony a7R into a metric imaging solution.

In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these flights was to assess the performance of the DMS APX-15 direct georeferencing system under various scenarios, and to examine how it can be used to produce accurate map products without ground control points and with reduced sidelap. Reducing sidelap for survey missions performed by small UAVs can significantly increase their mapping productivity.

The area mapped during the first flight campaign comprised a 250 m x 300 m block and a 775 m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450 m long corridor over Fryer Dam (on the Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas.

The flight over the block area included 8 north-south lines and 1 cross strip flown at 80 m AGL, resulting in a ~1 cm GSD. The flight over the railway corridor included 2 north-south lines, also flown at 80 m AGL. Similarly, the flight over the dam corridor included 2 north-south lines flown at 50 m AGL. The focus of this paper is the analysis of the results obtained from the two corridors.

Test results from both areas were processed using direct georeferencing techniques and compared for accuracy against the known positions of the ground control points in each test area. The GNSS-Inertial data collected by the APX-15 was post-processed in Single Base mode in POSPac UAV, using a base station located in the project area. For the block and railway corridor, the base station's position was precisely determined by processing a 12-hour session using the CSRS-PPP post-processing service; for the flight over Fryer Dam, the base station's position was determined by processing a 4-hour session with the same service. POSPac UAV's camera calibration and quality control (CalQC) module was used to refine the camera interior orientation parameters using an Integrated Sensor Orientation (ISO) approach, and POSPac UAV was also used to generate the exterior orientation parameters for the images collected during the test flights.

The Inpho photogrammetric software package was used to develop the final map products for both corridors under various scenarios. The imagery was first imported into an Inpho project with the updated focal length, principal point offsets, and exterior orientation parameters. A Digital Terrain/Surface Model (DTM/DSM) was then extracted from the stereo imagery, after which the raw images were orthorectified to produce an orthomosaic product.
4

Jayroe, Clinton W., William H. Baker, and Amy B. Greenwalt. "Using Multispectral Aerial Imagery to Evaluate Crop Productivity." Crop Management 4, no. 1 (2005): 1–7. http://dx.doi.org/10.1094/cm-2005-0205-01-rs.

5

Bruce, Robert W., Istvan Rajcan, and John Sulik. "Classification of Soybean Pubescence from Multispectral Aerial Imagery." Plant Phenomics 2021 (August 4, 2021): 1–11. http://dx.doi.org/10.34133/2021/9806201.

Abstract:
The accurate determination of soybean pubescence is essential for plant breeding programs and cultivar registration. Currently, soybean pubescence is classified visually, which is a labor-intensive and time-consuming activity. Additionally, the three classes of phenotypes (tawny, light tawny, and gray) may be difficult to visually distinguish, especially the light tawny class where misclassification with tawny frequently occurs. The objectives of this study were to solve both the throughput and accuracy issues in the plant breeding workflow, develop a set of indices for distinguishing pubescence classes, and test a machine learning (ML) classification approach. A principal component analysis (PCA) on hyperspectral soybean plot data identified clusters related to pubescence classes, while a Jeffries-Matusita distance analysis indicated that all bands were important for pubescence class separability. Aerial images from 2018, 2019, and 2020 were analyzed in this study. A 60-plot test (2019) of genotypes with known pubescence was used as reference data, while whole-field images from 2018, 2019, and 2020 were used to examine the broad applicability of the classification methodology. Two indices, a red/blue ratio and blue normalized difference vegetation index (blue NDVI), were effective at differentiating tawny and gray pubescence types in high-resolution imagery. A ML approach using a support vector machine (SVM) radial basis function (RBF) classifier was able to differentiate the gray and tawny types (83.1% accuracy and kappa=0.740 on a pixel basis) on images where reference training data was present. The tested indices and ML model did not generalize across years to imagery that did not contain the reference training panel, indicating limitations of using aerial imagery for pubescence classification in some environmental conditions. 
High-throughput classification of gray and tawny pubescence types is possible using aerial imagery, but light tawny soybeans remain difficult to classify and may require training data from each field season.
6

Yang, Bo, Timothy L. Hawthorne, Hannah Torres, and Michael Feinman. "Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data." Drones 3, no. 3 (July 27, 2019): 60. http://dx.doi.org/10.3390/drones3030060.

Abstract:
High-resolution mapping of coastal habitats is invaluable for resource inventory, change detection, and inventory for aquaculture applications. However, coastal areas, especially the interior of mangroves, are often difficult to access. An Unmanned Aerial Vehicle (UAV) equipped with a multispectral sensor affords an opportunity to improve upon satellite imagery for coastal management because of its very high spatial resolution, multispectral capability, and ability to collect real-time observations. Despite the recent and rapid development of UAV mapping applications, few articles have quantitatively compared UAV multispectral mapping methods with more conventional remote sensing data such as satellite imagery. The objective of this paper is to quantitatively demonstrate the improvements a multispectral UAV mapping technique offers for mapping and assessing coastal land cover. We performed multispectral UAV mapping fieldwork trials over Indian River Lagoon along the central Atlantic coast of Florida. Ground Control Points (GCPs) were collected to generate a rigorously geo-referenced dataset of UAV imagery and to support comparison with geo-referenced satellite and aerial imagery. Multispectral satellite imagery (Sentinel-2) was also acquired to map land cover for the same region. NDVI and object-oriented classification methods were used to compare UAV and satellite mapping capabilities. Compared with aerial images acquired from the Florida Department of Environmental Protection, the UAV multispectral mapping method used in this study provided more detailed information on the physical conditions of the study area, improved land feature delineation, and a significantly better mapping product than coarser-resolution satellite imagery. The study demonstrates a replicable UAV multispectral mapping method useful for study sites that lack high-quality data.
7

Kramber, W. J., A. J. Richardson, P. R. Nixon, and K. Lulla. "Principal component analysis of aerial video imagery." International Journal of Remote Sensing 9, no. 9 (September 1988): 1415–22. http://dx.doi.org/10.1080/01431168808954949.

8

Zhang, Yanchao, Wen Yang, Ying Sun, Christine Chang, Jiya Yu, and Wenbo Zhang. "Fusion of Multispectral Aerial Imagery and Vegetation Indices for Machine Learning-Based Ground Classification." Remote Sensing 13, no. 8 (April 7, 2021): 1411. http://dx.doi.org/10.3390/rs13081411.

Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging and promising platforms for carrying different types of cameras for remote sensing. The application of multispectral vegetation indices for ground cover classification has been widely adopted and has proved its reliability. However, the fusion of spectral bands and vegetation indices for machine learning-based land surface investigation has hardly been studied. In this paper, we studied the fusion of spectral band information from UAV multispectral images and derived vegetation indices for almond plantation classification using several machine learning methods. We acquired multispectral images over an almond plantation using a UAV. First, a multispectral orthoimage was generated from the acquired images using SfM (Structure from Motion) photogrammetry. Eleven vegetation indices were derived from the multispectral orthoimage. Then, 593 data points containing multispectral bands and vegetation indices were randomly collected and prepared for this study. After comparing six machine learning algorithms (Support Vector Machine, K-Nearest Neighbor, Linear Discriminant Analysis, Decision Tree, Random Forest, and Gradient Boosting), we selected three (SVM, KNN, and LDA) to study the fusion of multispectral band information and derived vegetation indices for classification. As vegetation indices were added, the classification accuracy of all three selected machine learning methods gradually increased, then dropped. 
Our results revealed that: (1) spectral information from multispectral images can be used for machine learning-based ground classification, and among all methods, SVM had the best performance; (2) combining multispectral bands with vegetation indices improves classification accuracy compared to spectral bands alone for all three selected methods; (3) among all indices, NDEGE, NDVIG, and NDVGE consistently improved classification accuracies, while others may reduce accuracy. Machine learning methods (SVM, KNN, and LDA) can be used to classify almond plantations from multispectral orthoimages, and fusing multispectral bands with vegetation indices can improve machine learning-based classification accuracy if the indices are properly selected.
9

Yang, Chenghai, Charles P. C. Suh, and John K. Westbrook. "Early identification of cotton fields using mosaicked aerial multispectral imagery." Journal of Applied Remote Sensing 11, no. 1 (January 12, 2017): 016008. http://dx.doi.org/10.1117/1.jrs.11.016008.

10

Soni, Ayush, Alexander Loui, Scott Brown, and Carl Salvaggio. "High-quality multispectral image generation using Conditional GANs." Electronic Imaging 2020, no. 8 (January 26, 2020): 86–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-086.

Abstract:
In this paper, we demonstrate the use of a conditional Generative Adversarial Network (cGAN) framework for producing high-fidelity multispectral aerial imagery from low-fidelity imagery of the same kind. The motivation is that it is easier, faster, and often less costly to produce low-fidelity images than high-fidelity images using the various available techniques, such as physics-driven synthetic image generation models. Once the cGAN is trained and tuned in a supervised manner on a dataset of paired low- and high-quality aerial images, it can be used to enhance new, lower-quality baseline images of a similar type to produce more realistic, high-fidelity multispectral image data. This approach can potentially save significant time and effort compared to traditional approaches of producing multispectral images.
11

Csatho, Bea M., Cornelis J. Van Der Veen, and Catherine M. Tremper. "Trimline Mapping from Multispectral Landsat ETM+ Imagery." Géographie physique et Quaternaire 59, no. 1 (October 30, 2006): 49–62. http://dx.doi.org/10.7202/013736ar.

Abstract:
Multispectral Landsat ETM+ imagery is used to study the ice-marginal region in the vicinity of Jakobshavn Isfjord, west Greenland. In particular, the trimline indicating margin retreat since the maximum stand attained during the Little Ice Age is reconstructed and compared with earlier maps based on aerial photogrammetry and ground surveys. Applying supervised classification, fourteen different surface types were identified, ranging from snow and ice, debris-covered ice, and water of differing turbidity to different types of vegetative land cover. After similar classes were merged into five distinctly different classes, a digitized geomorphologic map was used to assess the accuracy of the classification. The positional accuracy of the trimline was checked using results from a GPS survey along the northern slope of the Jakobshavn fjord. By merging three spectral bands with the panchromatic band, a pan-sharpened image with a spatial resolution of 15 m is obtained that clearly shows morphological features on the ice surface, as well as glacial geomorphology at increased resolution.
12

Yang, Chenghai, and J. H. Everitt. "Comparison of hyperspectral imagery with aerial photography and multispectral imagery for mapping broom snakeweed." International Journal of Remote Sensing 31, no. 20 (October 20, 2010): 5423–38. http://dx.doi.org/10.1080/01431160903369626.

13

Olivetti, Diogo, Henrique Roig, Jean-Michel Martinez, Henrique Borges, Alexandre Ferreira, Raphael Casari, Leandro Salles, and Edio Malta. "Low-Cost Unmanned Aerial Multispectral Imagery for Siltation Monitoring in Reservoirs." Remote Sensing 12, no. 11 (June 8, 2020): 1855. http://dx.doi.org/10.3390/rs12111855.

Abstract:
The recent and continuous development of unmanned aerial vehicles (UAV) and small cameras with different spectral resolutions and imaging systems promotes new remote sensing platforms that can supply ultra-high spatial and temporal resolution, filling the gap between ground-based surveys and orbital sensors. This work aimed to monitor siltation in two large rural and urban reservoirs by recording water color variations within a savanna biome in the central region of Brazil using a low cost and very light unmanned platform. Airborne surveys were conducted using a Parrot Sequoia camera (~0.15 kg) onboard a DJI Phantom 4 UAV (~1.4 kg) during dry and rainy seasons over inlet areas of both reservoirs. Field measurements of total suspended solids (TSS) and water clarity were made jointly with the airborne survey campaigns. Field hyperspectral radiometry data were also collected during two field surveys. Bio-optical models for TSS were tested for all spectral bands of the Sequoia camera. The near-infrared single band was found to perform the best (R2: 0.94; RMSE: 7.8 mg L−1) for a 0–180 mg L−1 TSS range and was used to produce time series of TSS concentration maps of the study areas. This flexible platform enabled monitoring of the increase of TSS concentration at a ~13 cm spatial resolution in urban and rural drainages in the rainy season. Aerial surveys allowed us to map TSS load fluctuations in a 1 week period during which no satellite images were available due to continuous cloud coverage in the rainy season. This work demonstrates that a low-cost configuration allows dense TSS monitoring at the inlet areas of reservoirs and thus enables mapping of the sources of sediment inputs, supporting the definition of mitigation plans to limit the siltation process.
14

Pourazar, Hossein, Farhad Samadzadegan, and Farzaneh Dadrass Javan. "Aerial multispectral imagery for plant disease detection: radiometric calibration necessity assessment." European Journal of Remote Sensing 52, sup3 (July 23, 2019): 17–31. http://dx.doi.org/10.1080/22797254.2019.1642143.

15

Borgogno-Mondino, E., A. Lessio, L. Tarricone, V. Novello, and L. de Palma. "A comparison between multispectral aerial and satellite imagery in precision viticulture." Precision Agriculture 19, no. 2 (March 25, 2017): 195–217. http://dx.doi.org/10.1007/s11119-017-9510-0.

16

Su, Jinya, Cunjia Liu, Matthew Coombes, Xiaoping Hu, Conghao Wang, Xiangming Xu, Qingdong Li, Lei Guo, and Wen-Hua Chen. "Wheat yellow rust monitoring by learning from multispectral UAV aerial imagery." Computers and Electronics in Agriculture 155 (December 2018): 157–66. http://dx.doi.org/10.1016/j.compag.2018.10.017.

17

Prior, Elizabeth M., Frances C. O’Donnell, Christian Brodbeck, Wesley N. Donald, George Brett Runion, and Stephanie L. Shepherd. "Measuring High Levels of Total Suspended Solids and Turbidity Using Small Unoccupied Aerial Systems (sUAS) Multispectral Imagery." Drones 4, no. 3 (September 8, 2020): 54. http://dx.doi.org/10.3390/drones4030054.

Abstract:
Due to land development, high concentrations of suspended sediment are produced from erosion after rain events. Sediment basins are commonly used for the settlement of suspended sediments before discharge. Stormwater regulations may require frequent sampling and monitoring of these basins, both of which are time and labor intensive. Potential remedies are small, unoccupied aerial systems (sUAS). The goal of this study was to demonstrate whether sUAS multispectral imagery could measure high levels of total suspended solids (TSS) and turbidity in a sediment basin. The sediment basin at the Auburn University Erosion and Sediment Control Testing Facility was used to simulate a local 2-year, 24-h storm event with a 30-min flow rate. Water samples were collected at three depths in two locations every 15 min for six hours with corresponding sUAS multispectral imagery. Multispectral pixel values were related to TSS and turbidity in separate models using multiple linear regressions. TSS and turbidity regression models had coefficients of determination (r2) values of 0.926 and 0.851, respectively. When water column measurements were averaged, the r2 values increased to 0.965 and 0.929, respectively. The results indicated that sUAS multispectral imagery is a viable option for monitoring and assessing sediment basins during high-concentration events.
18

Sangeetha, A. "Damaged Building Detection from Satellite Multispectral Imagery." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3367–73. http://dx.doi.org/10.22214/ijraset.2021.35706.

Abstract:
Damaged-building footprint detection in satellite and aerial imagery is crucial for city management. Building detection is a fundamental but challenging problem, mainly because it requires correct recovery of building footprints from high-resolution images. Buildings are one of the key pieces of cadastral information related to population and cities, and are fundamental to urban planning and policymaking. Critical infrastructures, such as public transport, electricity, water distribution networks, or postal and delivery services, rely heavily on accurate population and building maps. On top of that, it is essential to get real-life, up-to-date information about buildings whenever there is a need for disaster risk management, risk assessment, or emergency relief. Accurate and fine-grained information about the extent of damage to buildings is essential for directing humanitarian aid and disaster response operations in the immediate aftermath of a natural calamity. Satellite and UAV (drone) imagery has been used for this purpose in recent years, sometimes aided by computer vision algorithms. Existing computer vision approaches for building damage assessment typically rely on a two-stage pipeline: building detection using an object detection model, followed by damage assessment through classification of the detected building tiles. These multi-stage methods are not end-to-end trainable and suffer from poor overall results. We propose a UNet segmentation model that can simultaneously segment buildings and assess the damage levels to individual buildings, and that can be trained end-to-end. We trained the model using the xView2 challenge dataset.
19

Khodaei, B., F. Samadzadegan, F. Dadras Javan, and H. Hasani. "3D SURFACE GENERATION FROM AERIAL THERMAL IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (December 11, 2015): 401–5. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-401-2015.

Abstract:
Aerial thermal imagery has recently been applied to quantitative analysis of several scenes. For mapping purposes based on aerial thermal imagery, a high-accuracy photogrammetric process is necessary. However, due to the low geometric resolution and low contrast of thermal imaging sensors, there are some challenges in precise 3D measurement of objects. In this paper the potential of thermal video in 3D surface generation is evaluated. In the pre-processing step, the thermal camera is geometrically calibrated using a calibration grid, based on emissivity differences between the background and the targets. Then, Digital Surface Model (DSM) generation from thermal video imagery is performed in four steps. Initially, frames are extracted from the video, then tie points are generated by the Scale-Invariant Feature Transform (SIFT) algorithm. Bundle adjustment is then applied and the camera position and orientation parameters are determined. Finally, a multi-resolution dense image matching algorithm is used to create a 3D point cloud of the scene. The potential of the proposed method is evaluated based on thermal imagery covering an industrial area. The thermal camera has a 640×480 Uncooled Focal Plane Array (UFPA) sensor, is equipped with a 25 mm lens, and was mounted on an Unmanned Aerial Vehicle (UAV). The obtained results show the comparable accuracy of the 3D model generated from thermal images with respect to the DSM generated from visible images; however, the thermal-based DSM is somewhat smoother, with a lower level of texture. Comparing the generated DSM with the 9 measured GCPs in the area shows that the Root Mean Square Error (RMSE) is smaller than 5 decimetres in both the X and Y directions and 1.6 metres in the Z direction.
20

Khaliq, Aleem, Lorenzo Comba, Alessandro Biglia, Davide Ricauda Aimonino, Marcello Chiaberge, and Paolo Gay. "Comparison of Satellite and UAV-Based Multispectral Imagery for Vineyard Variability Assessment." Remote Sensing 11, no. 4 (February 20, 2019): 436. http://dx.doi.org/10.3390/rs11040436.

Abstract:
In agriculture, remotely sensed data play a crucial role in providing valuable information on crop and soil status to perform effective management. Several spectral indices have proven to be valuable tools in describing crop spatial and temporal variability. In this paper, a detailed analysis and comparison of vineyard multispectral imagery, provided by a decametric-resolution satellite and low-altitude Unmanned Aerial Vehicle (UAV) platforms, is presented. The effectiveness of Sentinel-2 imagery and of high-resolution UAV aerial images was evaluated by considering the well-known relation between the Normalised Difference Vegetation Index (NDVI) and crop vigour. After being pre-processed, the data from the UAV were compared with the satellite imagery by computing three different NDVI indices to properly analyse the unbundled spectral contribution of the different elements in the vineyard environment, considering: (i) the whole cropland surface; (ii) only the vine canopies; and (iii) only the inter-row terrain. The results show that the raw decametric-resolution satellite imagery could not be directly used to reliably describe vineyard variability. Indeed, the contribution of inter-row surfaces to the remotely sensed dataset may affect the NDVI computation, leading to biased crop descriptors. On the contrary, vigour maps computed from the UAV imagery, considering only the pixels representing crop canopies, proved to be more closely related to the in-field assessment than the satellite imagery. The proposed method may be extended to other crop typologies grown in rows or without an intensive layout, where crop canopies do not extend over the whole surface or where the presence of weeds is significant.
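The per-pixel NDVI computation that the comparison above rests on is straightforward to sketch. The band values and the canopy mask below are illustrative stand-ins, not data from the paper:

```python
import numpy as np

# Illustrative red and near-infrared reflectance arrays (hypothetical values)
red = np.array([[0.10, 0.40], [0.08, 0.35]])
nir = np.array([[0.60, 0.45], [0.55, 0.40]])

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel
ndvi = (nir - red) / (nir + red)

# A hypothetical canopy mask: True where a pixel belongs to a vine canopy.
# Averaging only masked pixels mirrors the canopy-only NDVI index above;
# using all pixels would mix in the inter-row terrain contribution.
canopy_mask = np.array([[True, False], [True, False]])
canopy_ndvi = ndvi[canopy_mask].mean()
```

Restricting the average to masked canopy pixels is what removes the inter-row bias the abstract describes.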
21

Grulke, Nancy, Jason Maxfield, Phillip Riggan, and Charlie Schrader-Patton. "Pre-Emptive Detection of Mature Pine Drought Stress Using Multispectral Aerial Imagery." Remote Sensing 12, no. 14 (July 21, 2020): 2338. http://dx.doi.org/10.3390/rs12142338.

Abstract:
Drought, ozone (O3), and nitrogen deposition (N) alter foliar pigments and tree crown structure in ways that may be remotely detectable. Remote sensing tools that pre-emptively identify trees susceptible to environmental stresses could inform forest managers in advance of tree mortality risk. Jeffrey pine, a component of the economically important and widespread western yellow pines of North America, was investigated in the southern Sierra Nevada. Transpiration of mature trees differed by 20% between microsites with adequate (mesic (M)) vs. limited (xeric (X)) water availability, as described in a previous study. In this study, in-the-crown morphological traits (needle chlorosis, branchlet diameter, and frequency of needle defoliators and dwarf mistletoe) were significantly correlated with aerially detected, sub-crown spectral traits: upper-crown NDVI, high-resolution (R) near-infrared (NIR) Scalar (inverse of NDVI), and THERM Δ (the difference between upper- and mid-crown temperature). A classification tree model sorted trees into X and M microsites with THERM Δ alone (20% error), which was partially validated at a second site with only mesic trees (2% error). Random forest separated M and X site trees with additional spectra (17% error). Imagery taken once, from an aerial platform with sub-crown resolution, under the challenge of drought stress, was effective in identifying droughted trees within the context of other environmental stresses.
22

Li, Yundong, Antonios Kontsos, and Ivan Bartoli. "Automated Rust-Defect Detection of a Steel Bridge Using Aerial Multispectral Imagery." Journal of Infrastructure Systems 25, no. 2 (June 2019): 04019014. http://dx.doi.org/10.1061/(asce)is.1943-555x.0000488.

23

Mcnally, Adrian J. D., and Samuel J. P. Mckenzie. "Combining multispectral aerial imagery and digital surface models to extract urban buildings." Journal of Maps 7, no. 1 (January 2011): 51–59. http://dx.doi.org/10.4113/jom.2011.1152.

24

Melville, Bethany, Adrian Fisher, and Arko Lucieer. "Ultra-high spatial resolution fractional vegetation cover from unmanned aerial multispectral imagery." International Journal of Applied Earth Observation and Geoinformation 78 (June 2019): 14–24. http://dx.doi.org/10.1016/j.jag.2019.01.013.

25

Chrétien, L. P., J. Théau, and P. Ménard. "WILDLIFE MULTISPECIES REMOTE SENSING USING VISIBLE AND THERMAL INFRARED IMAGERY ACQUIRED FROM AN UNMANNED AERIAL VEHICLE (UAV)." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (August 26, 2015): 241–48. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-241-2015.

Abstract:
Wildlife aerial surveys require time and significant resources. Multispecies detection could reduce costs to a single census for species that coexist spatially. Traditional methods are demanding for observers in terms of concentration and are not adapted to multispecies censuses. The processing of multispectral aerial imagery acquired from an unmanned aerial vehicle (UAV) represents a potential solution for multispecies detection. The method used in this study is based on a multicriteria object-based image analysis applied to visible and thermal infrared imagery acquired from a UAV. This project aimed to detect American bison, fallow deer, gray wolves, and elk located in separate enclosures with a known number of individuals. Results showed that all bison and elk were detected without errors, while for deer and wolves, 0–2 individuals per flight line were mistaken for ground elements or went undetected. This approach also detected, simultaneously and separately, the four targeted species even in the presence of other untargeted ones. These results confirm the potential of multispectral imagery acquired from a UAV for wildlife censuses. Its operational application remains limited to small areas, owing to current regulations and available technology. Standardization of the workflow will help to reduce the time and expertise required for such technology.
26

Zhao, Yuan, Song, Ding, Lin, Liang, and Zhang. "Use of Unmanned Aerial Vehicle Imagery and Deep Learning UNet to Extract Rice Lodging." Sensors 19, no. 18 (September 6, 2019): 3859. http://dx.doi.org/10.3390/s19183859.

Abstract:
Rice lodging severely affects harvest yield. Traditional evaluation methods and manual on-site measurement are time-consuming, labor-intensive, and cost-intensive. In this study, a new method for rice lodging assessment based on a deep learning UNet (U-shaped Network) architecture was proposed. A UAV (unmanned aerial vehicle) equipped with a high-resolution digital camera and a three-band multispectral camera was used to synchronously collect lodged and non-lodged rice images at an altitude of 100 m. After splicing and cropping the original images, datasets with lodged and non-lodged rice image samples were established through augmentation for building a UNet model. The research results showed that the dice coefficients on the RGB (Red, Green and Blue) image and multispectral image test sets were 0.9442 and 0.9284, respectively. The rice lodging recognition effect using RGB images without feature extraction is better than that of multispectral images. The findings of this study are useful for rice lodging investigations with different optical sensors, and provide an important method for large-area, high-efficiency, and low-cost rice lodging monitoring research.
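The dice coefficient reported above measures overlap between a predicted segmentation mask and a reference mask. A minimal sketch with small synthetic masks (not the paper's data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match
    return 2.0 * intersection / denom if denom else 1.0

# Synthetic 4x4 lodging masks (1 = pixel classified as lodged rice)
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
score = dice_coefficient(pred, truth)  # 2*3 / (4+3) = 6/7
```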
27

Suo, McGovern, and Gilmer. "Coastal Dune Vegetation Mapping Using a Multispectral Sensor Mounted on an UAS." Remote Sensing 11, no. 15 (August 2, 2019): 1814. http://dx.doi.org/10.3390/rs11151814.

Abstract:
Vegetation mapping, identifying the type and distribution of plant species, is important for analysing vegetation dynamics, quantifying spatial patterns of vegetation evolution, analysing the effects of environmental changes and predicting spatial patterns of species diversity. Such analysis can contribute to the development of targeted land management actions that maintain biodiversity and ecological functions. This paper presents a methodology for 3D vegetation mapping of a coastal dune complex using a multispectral camera mounted on an unmanned aerial system, with particular reference to the Buckroney dune complex in Co. Wicklow, Ireland. Unmanned aerial systems (UAS), also known as unmanned aerial vehicles (UAV) or drones, have enabled high-resolution and high-accuracy ground-based data to be gathered quickly and easily on-site. The Sequoia multispectral sensor used in this study has green, red, red edge and near-infrared wavebands, and a regular camera with red, green and blue wavebands (RGB camera), to capture both visible and near-infrared (NIR) imagery of the land surface. The workflow of 3D vegetation mapping of the study site included establishing coordinated ground control points, planning the flight mission and camera parameters, acquiring the imagery, processing the image data and performing feature classification. The data processing outcomes included an orthomosaic model, a 3D surface model and multispectral imagery of the study site, in the Irish Transverse Mercator (ITM) coordinate system. The planimetric resolution of the RGB sensor-based outcomes was 0.024 m, while the multispectral sensor-based outcomes had a planimetric resolution of 0.096 m. High-resolution vegetation mapping was successfully generated from these data processing outcomes. There were 235 sample areas (1 m × 1 m) used for the accuracy assessment of the classification of the vegetation mapping.
Feature classification was conducted using nine different classification strategies to examine the efficiency of multispectral sensor data for vegetation and contiguous land cover mapping. The nine classification strategies included combinations of spectral bands and vegetation indices. Results show classification accuracies, based on the nine different classification strategies, ranging from 52% to 75%.
28

Easterday, Kelly, Chippie Kislik, Todd Dawson, Sean Hogan, and Maggi Kelly. "Remotely Sensed Water Limitation in Vegetation: Insights from an Experiment with Unmanned Aerial Vehicles (UAVs)." Remote Sensing 11, no. 16 (August 9, 2019): 1853. http://dx.doi.org/10.3390/rs11161853.

Abstract:
Unmanned aerial vehicles (UAVs) equipped with multispectral sensors present an opportunity to monitor vegetation with on-demand high spatial and temporal resolution. In this study we use multispectral imagery from quadcopter UAVs to monitor the progression of a water manipulation experiment on a common shrub, Baccharis pilularis (coyote brush), at the Blue Oak Ranch Reserve (BORR) ~20 km east of San Jose, California. We recorded multispectral imagery at several altitudes at nearly hourly intervals to explore the relationship between two common spectral indices, NDVI (normalized difference vegetation index) and NDRE (normalized difference red edge index), and leaf water content and water potential as physiological metrics of plant water status, across a gradient of water deficit. An examination of the spatial and temporal thresholds at which water limitations were most detectable revealed that the best separation between levels of water deficit was at higher resolution (lower flying height), and in the morning (NDVI) and early morning (NDRE). We found that both measures were able to identify moisture deficit across treatments; however, NDVI was better able to distinguish between treatments than NDRE and was more positively correlated with field measurements of leaf water content. Finally, we explored how relationships between spectral indices and water status changed when the imagery was scaled to the coarser resolutions provided by satellite-based imagery (PlanetScope). We found that PlanetScope data were able to capture the overall trend in treatments but unable to capture subtle changes in water content. These kinds of experiments, which evaluate the relationship between direct field measurements and UAV camera sensitivity, are needed to enable translation of field-based physiology measurements to landscape or regional scales.
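Both indices used in the study are normalised band differences; NDRE simply replaces the red band with the red-edge band. A sketch with illustrative reflectance values (the watered/stressed numbers are hypothetical, chosen only to show how the indices respond):

```python
import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    # Same normalised-difference form, with the red-edge band replacing red
    return (nir - red_edge) / (nir + red_edge)

# Hypothetical canopy reflectances for a well-watered vs water-stressed shrub
nir_w, red_w, red_edge_w = 0.55, 0.06, 0.30   # well-watered
nir_s, red_s, red_edge_s = 0.40, 0.12, 0.28   # stressed

watered = (ndvi(nir_w, red_w), ndre(nir_w, red_edge_w))
stressed = (ndvi(nir_s, red_s), ndre(nir_s, red_edge_s))
```

With these illustrative values both indices drop under stress, which is the kind of separation between treatment levels the experiment examined.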
29

Fletcher, Reginald S., and Allan T. Showler. "Surveying kaolin-treated cotton plots with airborne multispectral digital video imagery." Computers and Electronics in Agriculture 54, no. 1 (October 2006): 1–7. http://dx.doi.org/10.1016/j.compag.2006.06.004.

30

Vlachopoulos, O., B. Leblon, J. Wang, A. Haddadi, A. LaRocque, and G. Patterson. "MAPPING BARLEY LODGING WITH UAS MULTISPECTRAL IMAGERY AND MACHINE LEARNING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2021 (June 28, 2021): 203–8. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2021-203-2021.

Abstract:
Unmanned Aircraft Systems (UAS) have been demonstrated to be cost- and time-effective remote sensing platforms for precision agriculture applications and crop damage monitoring. In this study, lodging damage on barley crops was mapped from UAS imagery acquired over multiple barley fields with extensive lodging damage in two aerial surveys. A Random Forests classification model was trained and tested for the discrimination of lodged barley, with an overall accuracy of 99.7% on the validation dataset. The crop areas with lodging were automatically delineated by vector analysis and compared to manually delineated areas using two spatial accuracy metrics, the Area Goodness of Fit (AGoF) and the Boundary Mean Positional Error (BMPE). The average AGoF was 97.95% and the average BMPE was 0.235 m.
31

Durfee, Nicole, Carlos Ochoa, and Ricardo Mata-Gonzalez. "The Use of Low-Altitude UAV Imagery to Assess Western Juniper Density and Canopy Cover in Treated and Untreated Stands." Forests 10, no. 4 (March 29, 2019): 296. http://dx.doi.org/10.3390/f10040296.

Abstract:
Monitoring vegetation characteristics and ground cover is crucial to determine appropriate management techniques in western juniper (Juniperus occidentalis Hook.) ecosystems. Remote-sensing techniques have been used to study vegetation cover; yet, few studies have applied these techniques using unmanned aerial vehicles (UAV), specifically in areas of juniper woodlands. We used ground-based data in conjunction with low-altitude UAV imagery to assess vegetation and ground cover characteristics in a paired watershed study located in central Oregon, USA. The study comprised a treated watershed (most juniper removed) and an untreated watershed. Research objectives were to: (1) evaluate the density and canopy cover of western juniper in a treated (juniper removed) and an untreated watershed; and, (2) assess the effectiveness of using low-altitude UAV-based imagery to measure juniper-sapling population density and canopy cover. Ground-based measurements were used to assess vegetation features in each watershed and as a means to verify analysis from aerial imagery. Visual imagery (red, green, and blue wavelengths) and multispectral imagery (red, green, blue, near-infrared, and red-edge wavelengths) were captured using a quadcopter-style UAV. Canopy cover in the untreated watershed was estimated using two different methods: vegetation indices and support vector machine classification. Supervised classification was used to assess juniper sapling density and vegetation cover in the treated watershed. Results showed that vegetation indices that incorporated near-infrared reflectance values estimated canopy cover within 0.7% to 4.1% of ground-based calculations. Canopy cover estimates at the untreated watershed using supervised classification were within 0.9% to 2.3% of ground-based results.
Supervised classification applied to fall imagery using multispectral bands provided the best estimates of juniper sapling density compared to imagery taken in the summer or to using visual imagery. Study results suggest that low-altitude multispectral imagery obtained using small UAV can be effectively used to assess western juniper density and canopy cover.
32

Mathews, Adam J. "A Practical UAV Remote Sensing Methodology to Generate Multispectral Orthophotos for Vineyards." International Journal of Applied Geospatial Research 6, no. 4 (October 2015): 65–87. http://dx.doi.org/10.4018/ijagr.2015100104.

Abstract:
This paper explores the use of compact digital cameras to remotely estimate spectral reflectance based on unmanned aerial vehicle imagery. Two digital cameras, one unaltered and one altered, were used to collect four bands of spectral information (blue, green, red, and near-infrared [NIR]). The altered camera had its internal hot mirror removed to allow the sensor to be additionally sensitive to NIR. Through on-ground experimentation with spectral targets and a spectroradiometer, the sensitivity and abilities of the cameras were observed. This information along with on-site collected spectral data were used to aid in converting aerial imagery digital numbers to estimates of scaled surface reflectance using the empirical line method. The resulting images were used to create spectrally-consistent orthophotomosaics of a vineyard study site. Individual bands were subsequently validated with in situ spectroradiometer data. Results show that red and NIR bands exhibited the best fit (R2: 0.78 for red; 0.57 for NIR).
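The empirical line method mentioned above fits a per-band linear mapping from camera digital numbers (DN) to reflectance using ground targets of known reflectance. A minimal sketch with hypothetical calibration values (the DNs and reflectances below are made up for illustration):

```python
import numpy as np

# Hypothetical calibration targets: camera digital numbers paired with
# spectroradiometer-measured reflectance for a single band (e.g. red).
target_dn = np.array([30.0, 90.0, 150.0, 210.0])
target_reflectance = np.array([0.05, 0.22, 0.41, 0.58])

# Empirical line method: least-squares fit of reflectance = gain*DN + offset
gain, offset = np.polyfit(target_dn, target_reflectance, 1)

# Apply the fitted line to convert an image band from DN to reflectance
band_dn = np.array([[60.0, 120.0], [180.0, 45.0]])
band_reflectance = gain * band_dn + offset
```

In practice one such line is fitted independently for each band (blue, green, red, NIR), which is what makes the resulting orthophotomosaics spectrally consistent across flights.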
33

Wallhead, Matthew, Heping Zhu, John Sulik, and William Stump. "A Workflow for Extracting Plot-level Biophysical Indicators From Aerially Acquired Multispectral Imagery." Plant Health Progress 18, no. 2 (January 1, 2017): 95–96. http://dx.doi.org/10.1094/php-04-17-0025-ps.

Abstract:
Advances in technologies associated with unmanned aerial vehicles (UAVs) have allowed researchers, farmers and agribusinesses to incorporate UAVs coupled with imaging systems into data collection and decision making. Multispectral imagery allows for a quantitative assessment of biophysical indicators or plant health status. What is needed now is a standardized workflow for the quantitative assessment of plant biophysical indicators using low-altitude, high-resolution, UAV-acquired multispectral imagery. With this need in mind, the authors developed and proposed a workflow for extracting plot-level vegetation index means from orthomosaics. As the use of UAVs and associated data collection activities expands, it will become increasingly important that these data are properly incorporated and utilized by agricultural professionals.
34

Zhou, Jing, Dennis Yungbluth, Chin Nee Vong, Andrew Scaboo, and Jianfeng Zhou. "Estimation of the Maturity Date of Soybean Breeding Lines Using UAV-Based Multispectral Imagery." Remote Sensing 11, no. 18 (September 4, 2019): 2075. http://dx.doi.org/10.3390/rs11182075.

Abstract:
Physiological maturity date is a critical parameter for the selection of breeding lines in soybean breeding programs. The conventional method to estimate the maturity dates of breeding lines uses visual ratings of pod senescence by experts, which is subjective, labor-intensive and time-consuming. Unmanned aerial vehicle (UAV)-based phenotyping systems provide a high-throughput and powerful tool for capturing crop traits using remote sensing, image processing and machine learning technologies. The goal of this study was to investigate the potential of predicting maturity dates of soybean breeding lines using UAV-based multispectral imagery. Maturity dates of 326 soybean breeding lines were recorded using visual ratings from the beginning maturity stage (R7) to the full maturity stage (R8), and aerial multispectral images were taken during this period on 27 August, 14 September and 27 September 2018. One hundred and thirty features were extracted from the five-band multispectral images. The maturity dates of the soybean lines were predicted and evaluated using partial least squares regression (PLSR) models with 10-fold cross-validation. Twenty image features with importance to the estimation were selected and their changing rates between each two of the data collection days were calculated. The best prediction (R2 = 0.81, RMSE = 1.4 days) was made by the PLSR model with five components using image features taken on 14 September and their changing rates between 14 September and 27 September, leading to the conclusion that UAV-based multispectral imagery is promising and practical for estimating maturity dates of soybean breeding lines.
35

Zhang, Suming, Gengxing Zhao, Kun Lang, Baowei Su, Xiaona Chen, Xue Xi, and Huabin Zhang. "Integrated Satellite, Unmanned Aerial Vehicle (UAV) and Ground Inversion of the SPAD of Winter Wheat in the Reviving Stage." Sensors 19, no. 7 (March 27, 2019): 1485. http://dx.doi.org/10.3390/s19071485.

Abstract:
Chlorophyll is the most important component of crop photosynthesis, and the reviving stage is an important period during the rapid growth of winter wheat. Therefore, rapid and precise monitoring of chlorophyll content in winter wheat during the reviving stage is of great significance. The satellite-UAV-ground integrated inversion method is an innovative solution. In this study, the core region of the Yellow River Delta (YRD) is used as the study area. Ground measurement data, UAV multispectral imagery and Sentinel-2A multispectral imagery are used as data sources. First, representative plots in the Hekou District were selected as the core test area, and 140 ground sampling points were selected. Based on the measured SPAD values and UAV multispectral images, UAV-based SPAD inversion models were constructed, and the most accurate model was selected. Second, by comparing satellite and UAV imagery, a reflectance correction for the satellite imagery was performed. Finally, based on the UAV-based inversion model and satellite imagery after reflectance correction, inversion results for SPAD values were obtained at multiple scales. The results showed that the green, red, red-edge and near-infrared bands were significantly correlated with SPAD values. The modeling precisions of the best inversion model are R2 = 0.926, Root Mean Squared Error (RMSE) = 0.63 and Mean Absolute Error (MAE) = 0.92, and the verification precisions are R2 = 0.934, RMSE = 0.78 and MAE = 0.87. The Sentinel-2A imagery after reflectance correction has a pronounced inversion effect; the SPAD values in the study area were concentrated between 40 and 60, showing an increasing trend from the eastern coast to the southwest and west, with obvious spatial differences. This study synthesizes the advantages of satellite, UAV and ground methods, and the proposed satellite-UAV-ground integrated inversion method has important implications for real-time, rapid and precise collection of SPAD values at multiple scales.
36

Albetis, Johanna, Sylvie Duthoit, Fabio Guttler, Anne Jacquin, Michel Goulard, Hervé Poilvé, Jean-Baptiste Féret, and Gérard Dedieu. "Detection of Flavescence dorée Grapevine Disease Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery." Remote Sensing 9, no. 4 (March 24, 2017): 308. http://dx.doi.org/10.3390/rs9040308.

37

Brodie, Juliet, Lauren V. Ash, Ian Tittley, and Chris Yesson. "A comparison of multispectral aerial and satellite imagery for mapping intertidal seaweed communities." Aquatic Conservation: Marine and Freshwater Ecosystems 28, no. 4 (May 8, 2018): 872–81. http://dx.doi.org/10.1002/aqc.2905.

38

Marston, Zachary P. D., Theresa M. Cira, Erin W. Hodgson, Joseph F. Knight, Ian V. Macrae, and Robert L. Koch. "Detection of Stress Induced by Soybean Aphid (Hemiptera: Aphididae) Using Multispectral Imagery from Unmanned Aerial Vehicles." Journal of Economic Entomology 113, no. 2 (November 29, 2019): 779–86. http://dx.doi.org/10.1093/jee/toz306.

Abstract:
Soybean aphid, Aphis glycines Matsumura (Hemiptera: Aphididae), is a common pest of soybean, Glycine max (L.) Merrill (Fabales: Fabaceae), in North America, requiring frequent scouting as part of an integrated pest management plan. Current scouting methods are time-consuming and provide incomplete coverage of soybean. Unmanned aerial vehicles (UAVs) are capable of collecting high-resolution imagery that offers more detailed coverage of agricultural fields than traditional scouting methods. Recently, it was documented that changes to the spectral reflectance of soybean canopies caused by aphid-induced stress could be detected from ground-based sensors; however, it remained unknown whether these changes could also be detected from UAV-based sensors. Small-plot trials were conducted in 2017 and 2018 where cages were used to manipulate aphid populations. Additional open-field trials were conducted in 2018 where insecticides were used to create a gradient of aphid pressure. Whole-plant soybean aphid densities were recorded along with UAV-based multispectral imagery. Simple linear regressions were used to determine whether UAV-based multispectral reflectance was associated with aphid populations. Our findings indicate that near-infrared reflectance decreased with increasing soybean aphid populations in caged trials when cumulative aphid days surpassed the economic injury level, and in open-field trials when soybean aphid populations were above the economic threshold. These findings provide the first documentation of soybean aphid-induced stress being detected from UAV-based multispectral imagery and advance the use of UAVs for remote scouting of soybean aphid and other field crop pests.
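The simple linear regression used above to relate aphid pressure to canopy reflectance can be sketched as follows. The aphid-day and NIR values are hypothetical, chosen only to illustrate the negative association the study reports:

```python
import numpy as np

# Hypothetical plot-level data: cumulative aphid-days vs mean canopy NIR
# reflectance from the UAV imagery (illustrative values only).
aphid_days = np.array([0.0, 2000.0, 5000.0, 10000.0, 20000.0, 40000.0])
nir_reflectance = np.array([0.52, 0.51, 0.49, 0.46, 0.42, 0.36])

# Simple linear regression: reflectance = slope * aphid_days + intercept
slope, intercept = np.polyfit(aphid_days, nir_reflectance, 1)

# Coefficient of determination for the fit
fitted = slope * aphid_days + intercept
ss_res = np.sum((nir_reflectance - fitted) ** 2)
ss_tot = np.sum((nir_reflectance - nir_reflectance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

A significantly negative slope is the pattern consistent with the finding that NIR reflectance decreased as aphid populations grew.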
39

MAUSEL, P. W., M. A. KARASKA, C. Y. MAO, D. E. ESCOBAR, and J. H. EVERITT. "Insights into secchi transparency through computer analysis of aerial multispectral video data." International Journal of Remote Sensing 12, no. 12 (December 1991): 2485–92. http://dx.doi.org/10.1080/01431169108955282.

40

Boon, M. A., and S. Tesfamichael. "WETLAND VEGETATION INTEGRITY ASSESSMENT WITH LOW ALTITUDE MULTISPECTRAL UAV IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W6 (August 23, 2017): 55–62. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w6-55-2017.

Abstract:
Multispectral sensors were until recently too heavy and bulky for use on Unmanned Aerial Vehicles (UAVs), but this has changed in recent times and they are now commercially available. The usage of these sensors is mostly directed towards the agricultural sector, where the focus is on precision farming. Applications of these sensors for mapping of wetland ecosystems are rare. Here, we evaluate the performance of low-altitude multispectral UAV imagery to determine the state of wetland vegetation in a localised spatial area. Specifically, NDVI derived from multispectral UAV imagery was used to inform the determination of the integrity of the wetland vegetation. Furthermore, we tested different software applications for the processing of the imagery. The advantages and disadvantages we experienced with these applications are also briefly presented in this paper. A JAG-M fixed-wing imaging system equipped with a MicaSense RedEdge multispectral camera was utilised for the survey. A single surveying campaign was undertaken in early autumn over a 17 ha study area at the Kameelzynkraal farm, Gauteng Province, South Africa. Structure-from-motion photogrammetry software was used to reconstruct the camera positions and terrain features to derive a high-resolution orthorectified mosaic. The MicaSense Atlas cloud-based data platform, Pix4D and PhotoScan were utilised for the processing. The WET-Health level one methodology was followed for the vegetation assessment, where wetland health is a measure of the deviation of a wetland's structure and function from its natural reference condition. An on-site evaluation of the vegetation integrity was first completed. Disturbance classes were then mapped using the high-resolution multispectral orthoimages and NDVI.
The WET-Health vegetation module, completed with the aid of the multispectral UAV products, indicated that the vegetation of the wetland is largely modified ("D" PES category) and that its condition is expected to deteriorate (change score) in the future. However, a lower impact score was determined utilising the multispectral UAV imagery and NDVI. The result is a more accurate estimation of the impacts in the wetland.
41

Chancia, Robert, Jan van Aardt, Sarah Pethybridge, Daniel Cross, and John Henderson. "Predicting Table Beet Root Yield with Multispectral UAS Imagery." Remote Sensing 13, no. 11 (June 2, 2021): 2180. http://dx.doi.org/10.3390/rs13112180.

Full text
Abstract:
Timely and accurate monitoring has the potential to streamline crop management, harvest planning, and processing in the growing table beet industry of New York state. We used an unmanned aerial system (UAS) combined with a multispectral imager to monitor table beet (Beta vulgaris ssp. vulgaris) canopies in New York during the 2018 and 2019 growing seasons. We assessed the optimal pairing of a reflectance band or vegetation index with canopy area to predict table beet yield components of small sample plots using leave-one-out cross-validation. The most promising models were for table beet root count and mass using imagery taken during emergence and canopy closure, respectively. We created augmented plots, composed of random combinations of the study plots, to further exploit the importance of early canopy growth area. We achieved an R2 = 0.70 and root mean squared error (RMSE) of 84 roots (~24%) for root count, using 2018 emergence imagery. The same model resulted in an RMSE of 127 roots (~35%) when tested on the unseen 2019 data. Harvested root mass was best modeled with canopy closing imagery, with an R2 = 0.89 and RMSE = 6700 kg/ha using 2018 data. We applied the model to the 2019 full-field imagery and found an average yield of 41,000 kg/ha (~40,000 kg/ha average for upstate New York). This study demonstrates the potential for table beet yield models using a combination of radiometric and canopy structure data obtained at early growth stages. Additional imagery of these early growth stages is vital to develop a robust and generalized model of table beet root yield that can handle imagery captured at slightly different growth stages between seasons.
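The plot-level modelling described above, pairing a vegetation index with canopy area and evaluating with leave-one-out cross-validation, can be sketched as follows; the data here are synthetic and the variable names are illustrative stand-ins, not the study's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 30
# Synthetic per-plot measurements (hypothetical, for illustration only).
canopy_area = rng.uniform(0.2, 1.0, n)                   # fractional canopy area
vi = 0.3 + 0.5 * canopy_area + rng.normal(0, 0.02, n)    # a vegetation index
root_count = 200 + 300 * canopy_area + rng.normal(0, 20, n)

# Pair the index with canopy area as predictors of the yield component.
X = np.column_stack([vi, canopy_area])
y = root_count

# Leave-one-out: each plot is predicted by a model fit on all the others.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
rmse = mean_squared_error(y, pred) ** 0.5
print(f"LOO R2 = {r2_score(y, pred):.2f}, RMSE = {rmse:.0f} roots")
```

With only ~50 small plots per season, leave-one-out makes full use of the data while still giving an out-of-sample error estimate.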
APA, Harvard, Vancouver, ISO, and other styles
42

Suo, C., E. McGovern, and A. Gilmer. "VEGETATION MAPPING OF A COASTAL DUNE COMPLEX USING MULTISPECTRAL IMAGERY ACQUIRED FROM AN UNMANNED AERIAL SYSTEM." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 421–27. http://dx.doi.org/10.5194/isprs-archives-xlii-1-421-2018.

Full text
Abstract:
<p><strong>Abstract.</strong> Vegetation mapping, identifying the distribution of plant species, is important for analysing vegetation dynamics, quantifying spatial patterns of vegetation evolution, analysing the effects of environmental change on vegetation, and predicting spatial patterns of species diversity. Such analysis can contribute to the development of targeted land management actions that maintain biodiversity and ecological functions. This paper presents a methodology for 3D vegetation mapping of a coastal dune complex using a multispectral camera mounted on an Unmanned Aerial System (UAS), with particular reference to the Buckroney dune complex in Co. Wicklow, Ireland. UAS, also known as Unmanned Aerial Vehicles (UAVs) or drones, have enabled high-resolution and high-accuracy ground-based data to be gathered quickly and easily on-site. The Sequoia multispectral camera used in this study has green, red, red-edge and near-infrared wavebands, and a normal RGB camera, to capture both visible and NIR images of the land surface. The workflow of 3D vegetation mapping of the study site included establishing ground control points, planning the flight mission and camera parameters, acquiring the imagery, processing the image data and performing feature classification. The data processing outcomes include an orthomosaic model, a 3D surface model and multispectral images of the study site, in the Irish Transverse Mercator coordinate system, with a planimetric resolution of 0.024<span class="thinspace"></span>m and a georeferenced Root-Mean-Square (RMS) error of 0.111<span class="thinspace"></span>m. There were 235 sample areas (1<span class="thinspace"></span>m<span class="thinspace"></span>&times;<span class="thinspace"></span>1<span class="thinspace"></span>m) used for the accuracy assessment of the classification of the vegetation mapping.
Feature classification was conducted using three different classification strategies to examine the efficiency of multispectral sensor data for vegetation mapping. Vegetation type classification accuracies ranged from 60<span class="thinspace"></span>% to 70<span class="thinspace"></span>%. This research illustrates the efficiency of data collection at the Buckroney dune complex and the high accuracy and high resolution of the vegetation mapping of the site using a multispectral sensor mounted on a UAS.</p>
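The accuracy assessment step, comparing classified map labels against reference sample areas, reduces to a confusion matrix and its trace. A minimal sketch with hypothetical class labels (the class names and counts are illustrative, not the study's):

```python
import numpy as np

def overall_accuracy(reference, predicted, n_classes):
    """Build a confusion matrix and compute overall accuracy."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1              # rows: reference, columns: predicted
    return cm, cm.trace() / cm.sum()

# Toy labels for ten sample areas: 0 = grass, 1 = shrub, 2 = bare sand.
ref  = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0]
pred = [0, 1, 1, 1, 2, 0, 0, 1, 2, 0]
cm, oa = overall_accuracy(ref, pred, 3)
print(cm)
print(f"overall accuracy = {oa:.0%}")
```

The same matrix also yields per-class producer's and user's accuracies, which are often reported alongside the overall figure.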
APA, Harvard, Vancouver, ISO, and other styles
43

Stavrakoudis, Dimitris, Dimitrios Katsantonis, Kalliopi Kadoglidou, Argyris Kalaitzidis, and Ioannis Gitas. "Estimating Rice Agronomic Traits Using Drone-Collected Multispectral Imagery." Remote Sensing 11, no. 5 (March 6, 2019): 545. http://dx.doi.org/10.3390/rs11050545.

Full text
Abstract:
Knowledge of rice nitrogen (N) requirements and uptake capacity is fundamental for the development of improved N management. This paper presents empirical models for predicting agronomic traits that are relevant to yield and N requirements of rice (Oryza sativa L.) through remotely sensed data. Multiple linear regression models were constructed at key growth stages (at tillering and at booting), using as inputs reflectance values and vegetation indices obtained from a compact multispectral sensor (green, red, red-edge, and near-infrared channels) onboard an unmanned aerial vehicle (UAV). The models were constructed using field data and images from two consecutive years in a number of experimental rice plots in Greece (Thessaloniki Regional Unit), applying four different N treatments (C0: 0 N kg∙ha−1, C1: 80 N kg∙ha−1, C2: 160 N kg∙ha−1, and C4: 320 N kg∙ha−1). Models for estimating the current crop status (e.g., N uptake at the time of image acquisition) and predicting the future one (e.g., N uptake of grains at maturity) were developed and evaluated. At the tillering stage, high accuracies (R2 ≥ 0.8) were achieved for N uptake and biomass. At the booting stage, similarly high accuracies were achieved for yield, N concentration, N uptake, biomass, and plant height, using inputs from either two or three images. The results of the present study can be useful for providing N recommendations for the two top-dressing fertilizations in rice cultivation, through a cost-efficient workflow.
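A multiple linear regression of an agronomic trait on band-derived predictors, as described above, can be sketched with ordinary least squares; the predictors, coefficients, and noise levels below are synthetic stand-ins, not the paper's fitted models:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
# Hypothetical plot-level predictors: red-edge reflectance and NDVI.
red_edge = rng.uniform(0.2, 0.5, n)
ndvi = rng.uniform(0.4, 0.9, n)
# Hypothetical response: N uptake driven by both predictors plus noise.
n_uptake = 20 + 80 * ndvi - 30 * red_edge + rng.normal(0, 3, n)

# Design matrix with an intercept column; solve ordinary least squares.
X = np.column_stack([np.ones(n), ndvi, red_edge])
coef, *_ = np.linalg.lstsq(X, n_uptake, rcond=None)

# In-sample goodness of fit.
fitted = X @ coef
ss_res = np.sum((n_uptake - fitted) ** 2)
ss_tot = np.sum((n_uptake - n_uptake.mean()) ** 2)
print(f"R2 = {1 - ss_res / ss_tot:.2f}")
```

In practice such models are fit per growth stage, since the relationship between reflectance and N status shifts as the canopy develops.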
APA, Harvard, Vancouver, ISO, and other styles
44

Heim, René, Ian Wright, Peter Scarth, Angus Carnegie, Dominique Taylor, and Jens Oldeland. "Multispectral, Aerial Disease Detection for Myrtle Rust (Austropuccinia psidii) on a Lemon Myrtle Plantation." Drones 3, no. 1 (March 7, 2019): 25. http://dx.doi.org/10.3390/drones3010025.

Full text
Abstract:
Disease management in agriculture often assumes that pathogens are spread homogeneously across crops. In practice, pathogens can manifest in patches. Currently, disease detection is predominantly carried out by human assessors, which can be slow and expensive. A remote sensing approach holds promise. Current satellite sensors cannot spatially resolve individual plants or lack the temporal resolution to monitor pathogenesis. Here, we used multispectral imaging and unmanned aerial systems (UAS) to explore whether myrtle rust (Austropuccinia psidii) could be detected on a lemon myrtle (Backhousia citriodora) plantation. Multispectral aerial imagery was collected from fungicide-treated and untreated tree canopies, the fungicide being used to control myrtle rust. Spectral vegetation indices and single spectral bands were used to train a random forest classifier. Treated and untreated trees could be classified with high accuracy (95%). Important predictors for the classifier were the near-infrared (NIR) and red-edge (RE) spectral bands. Taking into account some limitations, which are discussed herein, our work suggests potential for mapping myrtle rust-related symptoms from aerial multispectral images. Similar studies could focus on pinpointing disease hotspots to adjust management strategies and to feed epidemiological models.
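Training a random forest on spectral bands to separate treated from untreated canopies, as in the study, might look like the following sketch; the simulated features, class separation, and hyperparameters are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n = 200
# Simulated per-canopy features: hypothetically, treated (healthier) crowns
# show higher NIR and red-edge reflectance than rust-affected ones.
treated = rng.integers(0, 2, n)
nir = 0.35 + 0.15 * treated + rng.normal(0, 0.03, n)
red_edge = 0.25 + 0.10 * treated + rng.normal(0, 0.03, n)
red = 0.08 + rng.normal(0, 0.01, n)            # uninformative band
X = np.column_stack([nir, red_edge, red])

X_tr, X_te, y_tr, y_te = train_test_split(X, treated, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy = {acc:.2f}")
print("feature importances:", clf.feature_importances_)
```

The `feature_importances_` attribute mirrors the paper's finding that NIR and red-edge carry the discriminative signal.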
APA, Harvard, Vancouver, ISO, and other styles
45

Song, Bonggeun, and Kyunghun Park. "Detection of Aquatic Plants Using Multispectral UAV Imagery and Vegetation Index." Remote Sensing 12, no. 3 (January 25, 2020): 387. http://dx.doi.org/10.3390/rs12030387.

Full text
Abstract:
In this study, aquatic plants in a small reservoir were detected using multispectral UAV (Unmanned Aerial Vehicle) imagery and various vegetation indices. A Firefly UAV, which has both fixed-wing and rotary-wing flight modes, was flown over the study site four times. A RedEdge camera was mounted on the UAV to acquire multispectral images. These images were used to analyze the NDVI (Normalized Difference Vegetation Index), ENDVI (Enhanced Normalized Difference Vegetation Index), NDREI (Normalized Difference RedEdge Index), NGRDI (Normalized Green-Red Difference Index), and GNDVI (Green Normalized Difference Vegetation Index). In terms of multispectral characteristics, waterside plants showed the highest reflectance in Rnir, while floating plants had a higher reflectance in Rre. During the hottest season (on 25 June), the vegetation indices were at their highest, and the habitat expanded near the edge of the reservoir. Among the vegetation indices, NDVI was the highest and NGRDI was the lowest. In particular, NGRDI had a higher value on the water surface and was not useful for detecting aquatic plants. NDVI and GNDVI, which showed the clearest difference between aquatic plants and the water surface, were determined to be the most effective vegetation indices for detecting aquatic plants. Accordingly, vegetation indices derived from multispectral UAV imagery proved effective for detecting aquatic plants. A further study will include a field survey in order to acquire and analyze more accurate imagery information.
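The indices compared above are simple band ratios. A sketch with toy reflectance values (not measurements from the reservoir) illustrates why an index built only from visible bands, such as NGRDI, separates plants from water less cleanly than NIR-based indices:

```python
import numpy as np

def indices(g, r, re, nir):
    """A few of the normalized-difference indices compared in the study."""
    return {
        "NDVI":  (nir - r) / (nir + r),
        "GNDVI": (nir - g) / (nir + g),
        "NDREI": (nir - re) / (nir + re),
        "NGRDI": (g - r) / (g + r),   # visible bands only
    }

# Toy reflectances: plants are bright in RE/NIR; water absorbs NIR strongly.
plant = indices(g=0.12, r=0.06, re=0.30, nir=0.45)
water = indices(g=0.05, r=0.04, re=0.03, nir=0.02)
for k in plant:
    print(f"{k}: plant = {plant[k]:+.2f}, water = {water[k]:+.2f}")
```

With these toy values the plant-water contrast is large for NDVI and GNDVI and small for NGRDI, consistent with the study's ranking of the indices.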
APA, Harvard, Vancouver, ISO, and other styles
46

Benjamin, Adam R., Amr Abd-Elrahman, Lyn A. Gettys, Hartwig H. Hochmair, and Kyle Thayer. "Monitoring the Efficacy of Crested Floatingheart (Nymphoides cristata) Management with Object-Based Image Analysis of UAS Imagery." Remote Sensing 13, no. 4 (February 23, 2021): 830. http://dx.doi.org/10.3390/rs13040830.

Full text
Abstract:
This study investigates the use of unmanned aerial systems (UAS) mapping for monitoring the efficacy of invasive aquatic vegetation (AV) management on a floating-leaved AV species, Nymphoides cristata (CFH). The study site consists of 48 treatment plots (TPs). Based on six unique flights over two days at three different flight altitudes, using both a multispectral and an RGB sensor, accuracy assessment of the final object-based image analysis (OBIA)-derived classified images yielded overall accuracies ranging from 89.6% to 95.4%. The multispectral sensor was significantly more accurate than the RGB sensor at measuring CFH areal coverage within each TP only at the highest multispectral spatial resolution (2.7 cm/pix at 40 m altitude). When measuring the response in AV community area between the day of treatment and two weeks after treatment, there was no significant difference between the temporal area change from the reference datasets and the area changes derived from either the RGB or multispectral sensor. Thus, water resource managers need to weigh small gains in accuracy from using multispectral sensors against other operational considerations such as the additional processing time due to increased file sizes, higher equipment procurement costs, and longer flight durations in the field.
APA, Harvard, Vancouver, ISO, and other styles
47

d'Angelo, P., G. Kuschk, and P. Reinartz. "Evaluation of Skybox Video and Still Image products." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1 (November 7, 2014): 95–99. http://dx.doi.org/10.5194/isprsarchives-xl-1-95-2014.

Full text
Abstract:
The SkySat-1 satellite, launched by Skybox Imaging on November 21, 2013, opens a new chapter in civilian earth observation, as it is the first civilian satellite to image a target in high-definition panchromatic video for up to 90 seconds. The small satellite, with a mass of 100 kg, carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 &times; 1080 pixels at 30 frames per second, and still imagery with a swath of 8 km in the panchromatic band plus multispectral images with 4 bands. Using super-resolution techniques, sub-meter accuracy is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 &times; 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. The image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60-second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques, such as pairwise matching and multi-image matching, are used and compared. As no ground-truth height reference model is available to the authors, comparisons on flat surfaces and between differently matched DSMs are performed. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.
APA, Harvard, Vancouver, ISO, and other styles
48

Baiocchi, V., A. Bianchi, C. Maddaluno, and M. Vidale. "PANSHARPENING TECHNIQUES TO DETECT MASS MONUMENT DAMAGING IN IRAQ." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5/W1 (May 15, 2017): 121–26. http://dx.doi.org/10.5194/isprs-archives-xlii-5-w1-121-2017.

Full text
Abstract:
The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious reasons of safety. For the same reasons, the use of classical aerial photogrammetry is not advisable, so it was natural to turn to multispectral Very High Resolution (VHR) satellite imagery. Nowadays, VHR satellite image resolutions are very close to those of airborne photogrammetric images, and the images are usually acquired in multispectral mode. The combination of the various bands of the images is called pansharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the specific multispectral characteristics of the satellite used and the particular application. In this paper, a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. <br><br> The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the possible capabilities of identifying damage using VHR images.
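Pansharpening itself can be illustrated with the simple Brovey transform, one of many possible algorithms (the paper does not prescribe this particular one); each multispectral band is rescaled so the per-pixel intensity matches the panchromatic band:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-9):
    """Brovey transform: scale each resampled MS band by pan / intensity.

    ms  : (bands, H, W) multispectral array already resampled to the pan grid
    pan : (H, W) panchromatic band at full resolution
    """
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))

# Toy 2x2 scene with 3 bands (illustrative values only).
ms = np.array([[[0.2, 0.3], [0.4, 0.5]],
               [[0.1, 0.2], [0.3, 0.4]],
               [[0.3, 0.4], [0.5, 0.6]]])
pan = np.array([[0.25, 0.35], [0.45, 0.55]])
sharp = brovey_pansharpen(ms, pan)
# The sharpened bands keep their spectral ratios while the mean intensity
# now matches the pan band pixel-for-pixel.
print(np.allclose(sharp.mean(axis=0), pan))
```

Because Brovey preserves band ratios but distorts absolute radiometry, alternatives such as Gram-Schmidt or wavelet-based fusion are often preferred when spectral fidelity matters, which is exactly the choice the guidelines above address.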
APA, Harvard, Vancouver, ISO, and other styles
49

Wan, Liang, Jiangpeng Zhu, Xiaoyue Du, Jiafei Zhang, Xiongzhe Han, Weijun Zhou, Xiaopeng Li, et al. "A model for phenotyping crop fractional vegetation cover using imagery from unmanned aerial vehicles." Journal of Experimental Botany 72, no. 13 (May 8, 2021): 4691–707. http://dx.doi.org/10.1093/jxb/erab194.

Full text
Abstract:
Fractional vegetation cover (FVC) is the key trait of interest for characterizing crop growth status in crop breeding and precision management. Accurate quantification of FVC among different breeding lines, cultivars, and growth environments is challenging, especially because of the large spatiotemporal variability in complex field conditions. This study presents an ensemble modeling strategy for phenotyping crop FVC from unmanned aerial vehicle (UAV)-based multispectral images by coupling the PROSAIL model with a gap probability model (PROSAIL-GP). Seven field experiments for four main crops were conducted, and canopy images were acquired using a UAV platform equipped with RGB and multispectral cameras. The PROSAIL-GP model successfully retrieved FVC in oilseed rape (Brassica napus L.) with coefficient of determination, root mean square error (RMSE), and relative RMSE (rRMSE) of 0.79, 0.09, and 18%, respectively. The robustness of the proposed method was further examined in rice (Oryza sativa L.), wheat (Triticum aestivum L.), and cotton (Gossypium hirsutum L.), and a high accuracy of FVC retrieval was obtained, with rRMSEs of 12%, 6%, and 6%, respectively. Our findings suggest that the proposed method can efficiently retrieve crop FVC from UAV images at a high spatiotemporal domain, which should be a promising tool for precision crop breeding.
APA, Harvard, Vancouver, ISO, and other styles
50

Thelen, Kurt D., A. N. Kravchenko, and Chad D. Lee. "Use of Optical Remote Sensing for Detecting Herbicide Injury in Soybean." Weed Technology 18, no. 2 (June 2004): 292–97. http://dx.doi.org/10.1614/wt-03-049r2.

Full text
Abstract:
Experiments were conducted from 2000 to 2002 at two locations each year to determine whether lactofen and imazethapyr injury to soybean could be detected using digital aerial imagery and ground-based optical remote sensing. Lactofen and imazethapyr were applied at base rates of 105 and 71 g/ha, respectively, and at 0, 2X, and 4X rates. Treated plots were evaluated between 7 and 21 d after treatment for crop injury using a ground-based radiometer and a system using computer analysis of digital aerial imagery. Both the ground-based radiometer and the digital aerial imagery were effective in detecting herbicide injury under most conditions. The digital aerial imagery system was found to be more sensitive in detecting herbicide injury than the ground-based radiometer system. Herbicide or herbicide rate had a significant effect on normalized difference vegetation index (NDVI) values derived from digital aerial imagery in four of four site-years. NDVI values derived from a multispectral ground-based radiometer were significant for herbicide or herbicide rate in four of six site-years. NDVI values from treated plots were subtracted from the NDVI value of the untreated check to generate a ΔNDVI. The resulting ΔNDVI values from the ground-based radiometer system were significant for herbicide or herbicide rate in six of six site-years. Neither optical remote-sensing system was effective at estimating actual application rates of lactofen and imazethapyr across a broad range of field and weather conditions, due to temporal and spatial variability in crop response to the herbicides.
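The ΔNDVI computation described above is simple arithmetic; a sketch with hypothetical plot means (the numbers are illustrative, not the study's measurements):

```python
import numpy as np

# Hypothetical plot-mean NDVI values, structured as in the study:
ndvi_untreated_check = 0.82
ndvi_treated = np.array([0.80, 0.74, 0.65])  # base, 2X, and 4X herbicide rates

# Injury signal: subtract each treated plot's NDVI from the untreated check,
# so larger delta values indicate greater injury.
delta_ndvi = ndvi_untreated_check - ndvi_treated
print(delta_ndvi)
```

Referencing every plot against an untreated check in this way removes field-to-field baseline differences, which is why ΔNDVI was significant in more site-years than raw NDVI.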
APA, Harvard, Vancouver, ISO, and other styles
