Academic literature on the topic 'Multispectral aerial video imagery'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multispectral aerial video imagery.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Multispectral aerial video imagery"

1

Morozov, A. N., A. L. Nazolin, and I. L. Fufurin. "Optical and Spectral Methods for Detection and Recognition of Unmanned Aerial Vehicles." Radio Engineering, no. 2 (May 17, 2020): 39–50. http://dx.doi.org/10.36027/rdeng.0220.0000167.

Full text
Abstract:
The paper considers the problem of detecting and identifying unmanned aerial vehicles (UAVs) against animate and inanimate objects, and of identifying their payload, by optical and spectral-optical methods. A state-of-the-art analysis has shown that radar methods for detecting small UAVs have a dead zone at distances of 250-700 m, so optical detection methods are important in this range. The application possibilities and improvements of the optical scheme for detecting UAVs at long distances of about 1-2 km are considered. Location is performed by the object's intrinsic infrared (IR) radiation using IR cameras and thermal imagers, as well as a laser rangefinder (LIDAR). The paper gives examples of successful dynamic detection and recognition of objects in video images by graph-theoretic and neural-network methods using the Faster R-CNN, YOLO, and SSD network models, including detection from a single received frame. The possibility of using available spectral-optical methods to analyze the chemical composition of materials has been studied; such methods can be employed for remote identification of UAV coating materials, as well as for detecting trace amounts of matter on the surface. The advantages and disadvantages of luminescent spectroscopy with UV illumination, Raman spectroscopy, differential absorption spectroscopy based on a tunable UV laser, spectral imaging methods (hyper-/multispectral images), and diffuse reflectance laser spectroscopy using tunable infrared quantum cascade lasers (QCLs) are shown. To assess the potential limiting distances for detecting and identifying UAVs, as well as for identifying the chemical composition of an object, by optical and spectral-optical methods, the described experimental setup (a hybrid lidar UAV identification complex) is expected to be useful. The structure of the experimental setup and its performance are described.
Such studies are aimed at developing the scientific basis for remote detection, identification, tracking, and determination of UAV parameters and of a UAV's belonging to different groups by optical location and spectroscopy methods, as well as for automatic optical UAV recognition in various environments against a background of moving wildlife. The proposed solution is to combine optical location and spectral analysis methods with methods from statistics, graph theory, deep learning, neural networks, and automatic control, which is an interdisciplinary fundamental scientific task.
2

Mian, O., J. Lutes, G. Lipa, J. J. Hutton, E. Gavelle, and S. Borghini. "ACCURACY ASSESSMENT OF DIRECT GEOREFERENCING FOR PHOTOGRAMMETRIC APPLICATIONS ON SMALL UNMANNED AERIAL PLATFORMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W4 (March 17, 2016): 77–83. http://dx.doi.org/10.5194/isprs-archives-xl-3-w4-77-2016.

Full text
Abstract:
Efficient mapping from unmanned aerial platforms cannot rely on aerial triangulation using known ground control points. The cost and time of setting ground control, added to the need for increased overlap between flight lines, severely limits the ability of small VTOL platforms, in particular, to handle mapping-grade missions of all but the very smallest survey areas. Applanix has brought its experience in manned photogrammetry applications to this challenge, setting out the requirements for increasing the efficiency of mapping operations from small UAVs, using survey-grade GNSS-Inertial technology to accomplish direct georeferencing of the platform and/or the imaging payload. The Direct Mapping Solution for Unmanned Aerial Vehicles (DMS-UAV) is a complete and ready-to-integrate OEM solution for Direct Georeferencing (DG) on unmanned aerial platforms. Designed as a solution for systems integrators to create mapping payloads for UAVs of all types and sizes, the DMS produces directly georeferenced products for any imaging payload (visual, LiDAR, infrared, multispectral imaging, even video). Additionally, DMS addresses the airframe's requirements for high-accuracy position and orientation for such tasks as precision RTK landing and Precision Orientation for Air Data Systems (ADS), Guidance and Control.

This paper presents results using a DMS comprising an Applanix APX-15 UAV with a Sony a7R camera to produce highly accurate orthorectified imagery without Ground Control Points on a Microdrones md4-1000 platform, conducted by Applanix and Avyon. APX-15 UAV is a single-board, small-form-factor GNSS-Inertial system designed for use on small, lightweight platforms. The Sony a7R is a prosumer digital RGB camera sensor, with a 36MP, 4.9-micron CCD producing images at 7360 columns by 4912 rows. It was configured with a 50mm AF-S Nikkor f/1.8 lens and subsequently with a 35mm Zeiss Sonnar T* FE F2.8 lens. Both camera/lens combinations and the APX-15 were mounted to a Microdrones md4-1000 quad-rotor VTOL UAV. The Sony a7R and each lens combination were focused and calibrated terrestrially using the Applanix camera calibration facility, and then integrated with the APX-15 GNSS-Inertial system using a custom mount specifically designed for UAV applications. The mount is constructed in such a way as to maintain the stability of both the interior orientation and the IMU boresight calibration over shock and vibration, thus turning the Sony a7R into a metric imaging solution.

In July and August 2015, Applanix and Avyon carried out a series of test flights of this system. The goal of these test flights was to assess the performance of the DMS APX-15 direct georeferencing system under various scenarios. Furthermore, an examination of how the DMS APX-15 can be used to produce accurate map products without the use of ground control points and with reduced sidelap was also carried out. Reducing the sidelap for survey missions performed by small UAVs can significantly increase the mapping productivity of these platforms.

The area mapped during the first flight campaign was a 250m x 300m block and a 775m long railway corridor in a rural setting in Ontario, Canada. The second area mapped was a 450m long corridor over a dam known as Fryer Dam (over the Richelieu River in Quebec, Canada). Several ground control points were distributed within both test areas.

The flight over the block area included 8 North-South lines and 1 cross strip flown at 80m AGL, resulting in a ~1cm GSD. The flight over the railway corridor included 2 North-South lines also flown at 80m AGL. Similarly, the flight over the dam corridor included 2 North-South lines flown at 50m AGL. The focus of this paper was to analyse the results obtained from the two corridors.

Test results from both areas were processed using Direct Georeferencing techniques, and then compared for accuracy against the known positions of ground control points in each test area. The GNSS-Inertial data collected by the APX-15 were post-processed in Single Base mode, using a base station located in the project area via POSPac UAV. For the block and railway corridor, the base station's position was precisely determined by processing a 12-hour session using the CSRS-PPP Post Processing service. Similarly, for the flight over Fryer Dam, the base station's position was precisely determined by processing a 4-hour session using the CSRS-PPP Post Processing service. POSPac UAV's camera calibration and quality control (CalQC) module was used to refine the camera interior orientation parameters using an Integrated Sensor Orientation (ISO) approach. POSPac UAV was also used to generate the Exterior Orientation parameters for images collected during the test flight.

The Inpho photogrammetric software package was used to develop the final map products for both corridors under various scenarios. The imagery was first imported into an Inpho project, with updated focal length, principal point offsets and Exterior Orientation parameters. First, a Digital Terrain/Surface Model (DTM/DSM) was extracted from the stereo imagery, following which the raw images were orthorectified to produce an orthomosaic product.
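The ~1cm GSD reported for the 80m AGL block flight can be cross-checked with the standard nadir ground-sample-distance approximation GSD = H x pixel pitch / focal length; this formula is the textbook approximation, not taken from the paper, and is sketched here in Python:

```python
# Nadir ground sample distance: GSD = flying height * pixel pitch / focal length.
# All inputs in metres; result in metres on the ground per image pixel.
def gsd_m(height_m, pixel_pitch_m, focal_length_m):
    return height_m * pixel_pitch_m / focal_length_m

# Sony a7R (4.9-micron pixels) with the 50mm lens at 80 m AGL:
print(round(gsd_m(80.0, 4.9e-6, 0.050) * 100, 2), "cm")
```

This gives about 0.8 cm, consistent with the ~1cm GSD quoted in the abstract.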
4

Jayroe, Clinton W., William H. Baker, and Amy B. Greenwalt. "Using Multispectral Aerial Imagery to Evaluate Crop Productivity." Crop Management 4, no. 1 (2005): 1–7. http://dx.doi.org/10.1094/cm-2005-0205-01-rs.

Full text
5

Bruce, Robert W., Istvan Rajcan, and John Sulik. "Classification of Soybean Pubescence from Multispectral Aerial Imagery." Plant Phenomics 2021 (August 4, 2021): 1–11. http://dx.doi.org/10.34133/2021/9806201.

Full text
Abstract:
The accurate determination of soybean pubescence is essential for plant breeding programs and cultivar registration. Currently, soybean pubescence is classified visually, which is a labor-intensive and time-consuming activity. Additionally, the three classes of phenotypes (tawny, light tawny, and gray) may be difficult to visually distinguish, especially the light tawny class where misclassification with tawny frequently occurs. The objectives of this study were to solve both the throughput and accuracy issues in the plant breeding workflow, develop a set of indices for distinguishing pubescence classes, and test a machine learning (ML) classification approach. A principal component analysis (PCA) on hyperspectral soybean plot data identified clusters related to pubescence classes, while a Jeffries-Matusita distance analysis indicated that all bands were important for pubescence class separability. Aerial images from 2018, 2019, and 2020 were analyzed in this study. A 60-plot test (2019) of genotypes with known pubescence was used as reference data, while whole-field images from 2018, 2019, and 2020 were used to examine the broad applicability of the classification methodology. Two indices, a red/blue ratio and blue normalized difference vegetation index (blue NDVI), were effective at differentiating tawny and gray pubescence types in high-resolution imagery. A ML approach using a support vector machine (SVM) radial basis function (RBF) classifier was able to differentiate the gray and tawny types (83.1% accuracy and kappa=0.740 on a pixel basis) on images where reference training data was present. The tested indices and ML model did not generalize across years to imagery that did not contain the reference training panel, indicating limitations of using aerial imagery for pubescence classification in some environmental conditions. 
High-throughput classification of gray and tawny pubescence types is possible using aerial imagery, but light tawny soybeans remain difficult to classify and may require training data from each field season.
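The two indices this abstract reports as effective can be written down directly; in the sketch below the band values are hypothetical, and the blue-NDVI form (NIR - blue)/(NIR + blue) is the usual convention, assumed here rather than quoted from the paper:

```python
# Two simple band indices for separating soybean pubescence classes.
def red_blue_ratio(red, blue):
    return red / blue

def blue_ndvi(nir, blue):
    # Blue NDVI: NDVI with the blue band substituted for red (common convention).
    return (nir - blue) / (nir + blue)

# Hypothetical per-pixel reflectances for two plot pixels:
tawny = {"red": 0.32, "blue": 0.05, "nir": 0.55}
gray  = {"red": 0.24, "blue": 0.08, "nir": 0.52}
for name, px in (("tawny", tawny), ("gray", gray)):
    print(name, round(red_blue_ratio(px["red"], px["blue"]), 2),
          round(blue_ndvi(px["nir"], px["blue"]), 2))
```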
6

Yang, Bo, Timothy L. Hawthorne, Hannah Torres, and Michael Feinman. "Using Object-Oriented Classification for Coastal Management in the East Central Coast of Florida: A Quantitative Comparison between UAV, Satellite, and Aerial Data." Drones 3, no. 3 (July 27, 2019): 60. http://dx.doi.org/10.3390/drones3030060.

Full text
Abstract:
High resolution mapping of coastal habitats is invaluable for resource inventory, change detection, and inventory of aquaculture applications. However, coastal areas, especially the interior of mangroves, are often difficult to access. An Unmanned Aerial Vehicle (UAV), equipped with a multispectral sensor, affords an opportunity to improve upon satellite imagery for coastal management because of its very high spatial resolution, multispectral capability, and opportunity to collect real-time observations. Despite the recent and rapid development of UAV mapping applications, few articles have quantitatively compared how much UAV multispectral mapping improves upon more conventional remote sensing data, such as satellite imagery. The objective of this paper is to quantitatively demonstrate the improvements a multispectral UAV mapping technique offers for mapping and assessing coastal land cover. We performed multispectral UAV mapping fieldwork trials over Indian River Lagoon along the central Atlantic coast of Florida. Ground Control Points (GCPs) were collected to generate a rigorous geo-referenced dataset of UAV imagery and to support comparison to geo-referenced satellite and aerial imagery. Multispectral satellite imagery (Sentinel-2) was also acquired to map land cover for the same region. NDVI and object-oriented classification methods were used for comparison between UAV and satellite mapping capabilities. Compared with aerial images acquired from the Florida Department of Environmental Protection, the UAV multispectral mapping method used in this study provided more detailed information on the physical conditions of the study area, improved land feature delineation, and a significantly better mapping product than coarser-resolution satellite imagery. The study demonstrates a replicable UAV multispectral mapping method useful for study sites that lack high quality data.
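The NDVI comparison mentioned in this abstract uses the standard index (NIR - red)/(NIR + red), computed per pixel; a tiny pure-Python sketch over hypothetical red/NIR band values:

```python
# Per-pixel NDVI over a small 2x2 red/NIR chip (reflectance values are
# illustrative; high NDVI indicates dense vegetation such as mangrove canopy).
def ndvi(nir, red):
    return (nir - red) / (nir + red)

red_band = [[0.10, 0.40], [0.12, 0.38]]
nir_band = [[0.60, 0.45], [0.58, 0.44]]
ndvi_map = [[round(ndvi(n, r), 2) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
print(ndvi_map)  # left column: vegetated pixels; right column: bare/built pixels
```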
7

Kramber, W. J., A. J. Richardson, P. R. Nixon, and K. Lulla. "Principal component analysis of aerial video imagery." International Journal of Remote Sensing 9, no. 9 (September 1988): 1415–22. http://dx.doi.org/10.1080/01431168808954949.

Full text
8

Zhang, Yanchao, Wen Yang, Ying Sun, Christine Chang, Jiya Yu, and Wenbo Zhang. "Fusion of Multispectral Aerial Imagery and Vegetation Indices for Machine Learning-Based Ground Classification." Remote Sensing 13, no. 8 (April 7, 2021): 1411. http://dx.doi.org/10.3390/rs13081411.

Full text
Abstract:
Unmanned Aerial Vehicles (UAVs) are emerging and promising platforms for carrying different types of cameras for remote sensing. The application of multispectral vegetation indices for ground cover classification has been widely adopted and has proved its reliability. However, the fusion of spectral bands and vegetation indices for machine learning-based land surface investigation has hardly been studied. In this paper, we studied the fusion of spectral band information from UAV multispectral images with derived vegetation indices for almond plantation classification using several machine learning methods. We acquired multispectral images over an almond plantation using a UAV. First, a multispectral orthoimage was generated from the acquired multispectral images using SfM (Structure from Motion) photogrammetry methods. Eleven vegetation indices were proposed based on the multispectral orthoimage. Then, 593 data points that contained multispectral bands and vegetation indices were randomly collected and prepared for this study. After comparing six machine learning algorithms (Support Vector Machine, K-Nearest Neighbor, Linear Discriminant Analysis, Decision Tree, Random Forest, and Gradient Boosting), we selected three (SVM, KNN, and LDA) to study the fusion of multispectral band information and derived vegetation indices for classification. As vegetation indices were added, the classification accuracy of all three selected machine learning methods gradually increased, then dropped.
Our results revealed that: (1) spectral information from multispectral images can be used for machine learning-based ground classification, and among all methods, SVM had the best performance; (2) the combination of multispectral bands and vegetation indices can improve classification accuracy compared to spectral bands alone for all three selected methods; (3) among all VIs, NDEGE, NDVIG, and NDVGE consistently improved classification accuracies, while others may reduce accuracy. Machine learning methods (SVM, KNN, and LDA) can be used to classify almond plantations from multispectral orthoimages, and the fusion of multispectral bands with vegetation indices can improve machine learning-based classification accuracy if the indices are properly selected.
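The fusion idea in this abstract, concatenating raw band values with derived indices into one feature vector before classification, can be sketched minimally; the data, the single NDVI feature, and the 1-nearest-neighbour classifier below are illustrative stand-ins, not the paper's eleven indices or its tuned SVM/KNN/LDA models:

```python
import math

# Fusion of raw bands and a derived index into one feature vector.
def features(green, red, nir):
    ndvi = (nir - red) / (nir + red)
    return [green, red, nir, ndvi]        # spectral bands + vegetation index

# Tiny hypothetical training set: one vegetated and one bare-soil sample.
train = [(features(0.08, 0.10, 0.60), "almond"),
         (features(0.15, 0.30, 0.35), "soil")]

def knn1(sample):
    # 1-nearest-neighbour in the fused feature space (Euclidean distance).
    return min(train, key=lambda t: math.dist(t[0], sample))[1]

print(knn1(features(0.09, 0.12, 0.55)))   # -> almond
```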
9

Yang, Chenghai, Charles P. C. Suh, and John K. Westbrook. "Early identification of cotton fields using mosaicked aerial multispectral imagery." Journal of Applied Remote Sensing 11, no. 1 (January 12, 2017): 016008. http://dx.doi.org/10.1117/1.jrs.11.016008.

Full text
10

Soni, Ayush, Alexander Loui, Scott Brown, and Carl Salvaggio. "High-quality multispectral image generation using Conditional GANs." Electronic Imaging 2020, no. 8 (January 26, 2020): 86–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-086.

Full text
Abstract:
In this paper, we demonstrate the use of a Conditional Generative Adversarial Network (cGAN) framework for producing high-fidelity, multispectral aerial imagery using low-fidelity imagery of the same kind as input. The motivation is that it is easier, faster, and often less costly to produce low-fidelity images than high-fidelity images using the various available techniques, such as physics-driven synthetic image generation models. Once the cGAN network is trained and tuned in a supervised manner on a data set of paired low- and high-quality aerial images, it can then be used to enhance new, lower-quality baseline images of a similar type to produce more realistic, high-fidelity multispectral image data. This approach can potentially save significant time and effort compared to traditional approaches of producing multispectral images.

Dissertations / Theses on the topic "Multispectral aerial video imagery"

1

Hodgson, Lucien Guy. "Cotton crop condition assessment using aerial video imagery." University of Canberra. Applied Science, 1991. http://erl.canberra.edu.au./public/adt-AUC20060725.144909.

Full text
Abstract:
Cotton crop condition was assessed from an analysis of multispectral aerial video imagery. Visible-near infrared imagery of two cotton fields was collected towards the end of the 1990 crop. The digital analysis was based on image classification, and the accuracies were assessed using the Kappa coefficient of agreement. The earliest of three images proved to be best for distinguishing plant variety. Vegetation index images were better for estimating potential yield than the original multispectral image; so too were multi-channel images that were transformed using vegetation indices or principal component analysis. The seedbed preparation rig used, the nitrogen application rate and three plant varieties, a weed species and two cotton cultivars, could all be discriminated from the imagery. Accuracies were moderate for the discrimination of plant variety, tillage treatment and nitrogen treatment, and low for the estimation of potential yield.
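The Kappa coefficient of agreement used in this thesis to assess classification accuracy can be computed directly from a confusion matrix; a pure-Python sketch with a hypothetical two-class matrix:

```python
# Cohen's kappa: chance-corrected agreement between a classified map and
# reference data, computed from a confusion matrix (rows: truth, cols: predicted).
def kappa(cm):
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n            # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)
             for i in range(len(cm))) / n ** 2                # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 2-class assessment (e.g. cotton cultivar vs. weed pixels):
cm = [[40, 10],
      [5, 45]]
print(round(kappa(cm), 3))  # -> 0.7, "moderate" agreement in the usual scale
```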
2

Gurram, Prudhvi K. "Automated 3D object modeling from aerial video imagery /." Online version of thesis, 2009. http://hdl.handle.net/1850/11207.

Full text
3

Salmon, Summer Anne. "A New Technique for Measuring Runup Variation Using Sub-Aerial Video Imagery." The University of Waikato, 2008. http://hdl.handle.net/10289/2511.

Full text
Abstract:
Video monitoring of beaches is becoming the preferred method for observing changes to nearshore morphology. Consequently, this work investigates a new technique for predicting the probability of inundation that is based on measuring runup variation using video. Runup is defined as the water-level elevation maxima on the foreshore relative to the still water level, and the waterline is defined as the position where the mean water level (MWL) intersects the beach face. Tairua and Pauanui Beaches, on the northeast coast of the North Island of New Zealand, were used as the field sites in this study; they represent two very different beaches with the same incoming wave and meteorological conditions. Tairua is most frequently in an intermediate beach state, whereas Pauanui is usually flatter in nature. In order to rectify runup observations, an estimate of the runup elevation (Z) was needed. This was estimated by measuring the variation of the waterline over a tidal cycle from time-averaged video images during a storm event, which provided the beach morphology statistics (beach slope (a) and beach intercept (b)) used in the rectification Z = aX + b. The maximum swash excursions were digitized from time-stacks and rectified to provide runup time series of 20 minutes' duration. Field calibrations revealed a videoed waterline that was seaward of the surveyed waterline. Quantification of this error gave a vertical offset of 0.33m at Tairua and 0.25m at Pauanui. At Tairua, incident wave energy was dominant in the swash zone, and the runup distributions followed a Rayleigh distribution. At Pauanui, the flatter beach, the runup distributions were approximately bimodal due to the dominance of infragravity energy in the swash signal. The slope of the beach was a major control on the runup elevation; runup at Pauanui was directly affected by the deepwater wave height and the tide, while at Tairua there was no correlation.
Overall, the results of the study indicate realistic runup measurements over a wide range of time scales and, importantly, during storm events. However, comparisons of videoed runup with empirical runup formulae revealed larger deviations as beach steepness increased. Further tests need to be carried out to see whether this is a limitation of the technique used to measure runup. The runup statistics are consistently higher at Tairua, suggesting that swash runs up higher on steeper beaches. However, because of the characteristics of flatter beaches (such as high water tables and low drainage efficiencies), the impact of extreme runup elevations on such beaches is more critical with regard to erosion and/or inundation. The coastal environment is of great importance to Māori. Damage to the coast and coastal waahi tapu (places of spiritual importance) caused by erosion and inundation adversely affects the spiritual and cultural well-being of Māori. For this reason, a chapter was dedicated to investigating the practices used by Māori to protect and preserve the coasts in accordance with tikanga Māori (Māori protocols). Mimicking nature was and still is a practice used by Māori to restore the beaches after erosive events, and includes replanting native dune plants and using natural materials on the beaches to stabilize the dunes. Tapu and rahui (the power and influence of the gods) were imposed on communities to prohibit and prevent free access to either food resources or a particular place, in order to protect people and/or resources. Interpretations of Māori oral histories provide insights into past local hazards and inform about the safety and viability of certain activities within an area. Environmental indicators were used to identify and forecast extreme weather conditions locally.
Māori knowledge of past hazards, and the coastal environment as a whole, is a valuable resource and provides a unique source of expertise that can contribute to current coastal hazards management plans in New Zealand and provide insights about the areas that may again be impacted by natural hazards.
APA, Harvard, Vancouver, ISO, and other styles
4

Wolkesson, Henrik. "Realtime Mosaicing of Video Stream from µUAV." Thesis, Linköpings universitet, Datorseende, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-76357.

Full text
Abstract:
This is a master thesis of the Master of Science degree program in Applied Physics and Electrical Engineering (Y) at Linköping University. The goal of the projectis to develop an application for creating a map in real time from a video camera on a miniature unmanned aerial vehicle. This thesis project and report is a first exploratory study for this application. It implements a prototype method and evaluates it on sample sequences from an on-board video camera. The method first looks for good points to follow in the image and then tracks them in a sequence.The image is then pasted, or merged, together with previous images so that points from the different images align. Two methods to find good points to follow are examined with focus on real-time performance. The result is that the much faster FAST detector method yielded satisfactory results good enough to replace the slower standard method of the Harris-Stephens corner detector. It is also examined whether it is possible to assume that the ground is a flat surface in this application or if a computationally more expensive method estimating altitude information has to be used. The result is that at high altitudes or when the ground is close to flat in reality and the camera points straight downwards a two-dimensional method will do. If flying lower or with high objects in the picture, which is often the case in this application, it must to be taken into account that the points really are at different heights, hence the ground can not be assumed to be flat.
APA, Harvard, Vancouver, ISO, and other styles
5

Potter, Thomas Noel 1959. "The use of multispectral aerial video to determine land cover for hydrological simulations in small urban watersheds." Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/291381.

Full text
Abstract:
Airborne multispectral video was evaluated as a tool for obtaining urban land cover information for hydrological simulations. Land cover data were obtained for a small urban watershed in Tucson, Arizona using four methods: multispectral aerial video (2 meter and 4 meter pixel resolution), National High Altitude Photography (NHAP), multispectral satellite imagery from Système Pour l'Observation de la Terre (SPOT), and conventional survey. A semi-automated land cover classification produced four classes: vegetation, buildings, pavement, and bare soil. The land cover data from each classification were used as input to a runoff simulation model. Runoff values generated by each simulation were compared to observed runoff. A chi-square goodness-of-fit test indicated that SPOT produced land cover data most similar to the conventional classification. In the curve number model, the SPOT data produced simulated runoff values most similar to observed runoff.
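The chi-square comparison used in this study can be sketched as follows: pixel counts per class from each remote-sensing classification (observed) are tested against the conventional survey (expected), and the classification with the smallest statistic fits best. All counts below are hypothetical placeholders, not the thesis data.

```python
def chi_square(observed, expected):
    """Chi-square goodness-of-fit statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Class order: vegetation, buildings, pavement, bare soil (pixel counts).
survey = [3200, 2100, 3900, 800]   # conventional survey (expected)
spot   = [3100, 2250, 3850, 800]   # hypothetical SPOT classification
video  = [2700, 2600, 3700, 1000]  # hypothetical aerial-video classification

# Smaller statistic = land cover distribution closer to the survey.
best = min([("SPOT", chi_square(spot, survey)),
            ("video", chi_square(video, survey))], key=lambda t: t[1])
```

With these made-up counts the SPOT-like classification wins, mirroring the abstract's reported outcome; in practice the statistic would be compared against a critical value for the appropriate degrees of freedom.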
APA, Harvard, Vancouver, ISO, and other styles
6

Maier, Kathrin. "Direct multispectral photogrammetry for UAV-based snow depth measurements." Thesis, KTH, Geoinformatik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254566.

Full text
Abstract:
Due to the changing climate and atypical meteorological events now occurring in the Arctic regions, more accurate snow quality predictions are needed in order to support the Sámi reindeer herding communities in northern Sweden that struggle to adapt to the rapidly changing Arctic climate. Spatial snow depth distribution is a crucial parameter not only for assessing snow quality but also for multiple environmental research and social land use purposes. This need contrasts with the limited availability of affordable and efficient snow monitoring methods capable of estimating such an extremely variable parameter in both space and time. In this thesis, a novel approach to determining spatial snow depth distribution in challenging alpine terrain is presented and tested during a field campaign performed in Tarfala, Sweden in April 2019. A multispectral camera capturing five spectral bands at wavelengths between 470 and 860 nanometers on board a small Unmanned Aerial Vehicle is deployed to derive 3D snow surface models via photogrammetric image processing techniques. The main advantage over conventional photogrammetric surveys is the utilization of accurate RTK positioning technology, which enables direct georeferencing of the images and thus eliminates the need for ground control points and for dangerous, time-consuming fieldwork. The continuous snow depth distribution is retrieved by differencing two digital surface models corresponding to the snow-free and snow-covered study areas. An extensive error assessment based on ground measurements is performed, including an analysis of the impact of multispectral imagery. Uncertainties introduced by the black-box photogrammetric processing environment remain, but are accounted for in the error source analysis.
The results of this project demonstrate that the proposed methodology is capable of producing high-resolution 3D snow-covered surface models (< 7 cm/pixel) of alpine areas up to 8 hectares in a fast, reliable and cost-efficient way. The overall RMSE of the snow depth estimates is 7.5 cm for data acquired in ideal survey conditions. The proposed method furthermore assists in closing the scale gap between discrete point measurements and regional-scale remote sensing, and in complementing large-scale remote sensing data by providing an adequate validation source. As part of the Swedish cooperation project ’Snow4all’, the findings of this project are used to support and validate large-scale snow models for improved snow quality prediction in northern Sweden.
Due to climate change and naturally occurring meteorological events in the Arctic, more accurate snow quality forecasts are needed to support the Sámi reindeer herding communities in northern Sweden, which struggle to adapt to the rapidly changing Arctic climate. Spatial snow depth distribution is a crucial parameter not only for assessing snow quality but also for several environmental research and social land use purposes. This stands in contrast to the current availability of affordable and efficient snow monitoring methods for estimating such an extremely variable parameter in time and space. In this work, a new method for determining spatial snow depth distribution in challenging alpine terrain is presented and tested during a field study carried out in Tarfala in northern Sweden in April 2019. Using photogrammetric image processing techniques, 3D snow surface models were retrieved with a multispectral camera mounted on a small unmanned aerial vehicle. A key advantage, compared with conventional photogrammetric surveys, is the use of accurate RTK positioning technology, which enables direct georeferencing and eliminates the need for ground control points. The continuous snow depth distribution is retrieved by differencing the surface models of the snow-free and snow-covered study areas. An extensive error assessment based on ground measurements is performed, including an analysis of the effect of multispectral imagery. The results of this study show that the developed method can produce high-resolution 3D snow-covered surface models (< 7 cm/pixel) of alpine areas of up to 8 hectares in a fast, reliable and cost-effective way. The overall RMSE of the estimated snow depth is 7.5 cm for data acquired under ideal survey conditions. As part of the Swedish project "Snow4all", the results of the project are used to improve and validate large-scale snow models to better predict snow quality in northern Sweden.
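The DSM-differencing step at the heart of this method can be sketched in a few lines of NumPy: snow depth is the per-cell difference between the snow-covered and snow-free surface models, and the RMSE is computed against ground probe measurements. The elevations and probe values below are illustrative, not the Tarfala data.

```python
import numpy as np

# Hypothetical 1D elevation profiles standing in for the two
# photogrammetric digital surface models (metres above sea level).
dsm_snow_free = np.array([1102.4, 1103.1, 1104.0, 1105.2, 1106.9])
dsm_snow_on   = np.array([1103.1, 1103.9, 1104.6, 1106.3, 1107.5])

# Snow depth = snow-covered surface minus snow-free surface, per cell.
snow_depth = dsm_snow_on - dsm_snow_free

# Error assessment against hypothetical manual snow-probe measurements
# taken at the same locations (metres).
probes = np.array([0.72, 0.78, 0.63, 1.05, 0.58])
rmse = float(np.sqrt(np.mean((snow_depth - probes) ** 2)))
```

In the full workflow both DSMs are gridded rasters that must be co-registered first; direct RTK georeferencing is what lets the two surveys align without ground control points.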
APA, Harvard, Vancouver, ISO, and other styles
7

Apostolopoulos, Andreas K., and Riley O. Tisdale. "Dissemination and storage of tactical unmanned aerial vehicle digital video imagery at the Army Brigade Level." Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA374041.

Full text
Abstract:
Thesis (M.S. in Information Technology Management), Naval Postgraduate School, September 1999.
"September 1999". Thesis advisor(s): Orin E. Marvel, William Haga, Brad Naegle. Includes bibliographical references (p. 159-162). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
8

Apostolopoulos, Andreas K., and Riley O. Tisdale. "Dissemination and storage of tactical unmanned aerial vehicle digital video imagery at the Army Brigade Level." Thesis, Monterey, California. Naval Postgraduate School, 1999. http://hdl.handle.net/10945/26490.

Full text
Abstract:
Approved for public release; distribution is unlimited
The Department of Defense Joint Technical Architecture has mandated a migration from analog to digital technology in the Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) community. The Tactical Unmanned Aerial Vehicle (TUAV) and Tactical Control System (TCS) are two brigade imagery intelligence systems that the Army will field within the next three years to achieve information superiority on the modern digital battlefield. These two systems provide the brigade commander with an imagery collection and processing capability never before deployed under brigade control. The deployment of the Warfighter Information Network (WIN), within three to five years, will ensure that a digital dissemination network is in place to handle the transmission bandwidth requirements of large digital video files. This thesis examines the storage and dissemination capabilities of this future brigade imagery system. It calculates a minimum digital storage capacity requirement for the TCS Imagery Product Library, analyzes available storage media based on performance, and recommends a high-capacity storage architecture based on modern fault tolerance and performance technology. A video streaming technique is also recommended that utilizes the digital interconnectivity of the WIN for dissemination of video imagery throughout the brigade.
APA, Harvard, Vancouver, ISO, and other styles
9

Heiner, Benjamin Kurt. "Construction of Large Geo-Referenced Mosaics from MAV Video and Telemetry Data." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/1804.

Full text
Abstract:
Miniature Aerial Vehicles (MAVs) are quickly gaining acceptance as a platform for performing remote sensing or surveillance of remote areas. However, because MAVs are typically flown close to the ground (1000 feet or less in altitude), their field of view for any one image is relatively small. In addition, the context of the video (where and at what orientation the observed objects are, and the relationship between images) is unclear from any one image. To overcome these problems, we propose a geo-referenced mosaicking method that creates a mosaic from the captured images and geo-references it using information from the MAV IMU/GPS unit. Our method utilizes bundle adjustment within a constrained optimization framework together with topology refinement. Using real MAV video, we have demonstrated our mosaic creation process on over 900 frames. Our method has been shown to produce high-quality mosaics geo-referenced to within 7 m using tightly synchronized MAV telemetry data and to within 30 m using only GPS information (i.e., no roll and pitch information).
APA, Harvard, Vancouver, ISO, and other styles
10

Andersen, Evan D. "A Surveillance System to Create and Distribute Geo-Referenced Mosaics Using SUAV Video." BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1679.

Full text
Abstract:
Small Unmanned Aerial Vehicles (SUAVs) are an attractive choice for many surveillance tasks. However, video from an SUAV can be difficult to use in its raw form. In addition, the limitations inherent in the SUAV platform inhibit the distribution of video to remote users. To solve these problems, we propose a system to automatically create geo-referenced mosaics of video frames. We also present three novel techniques we have developed to improve ortho-rectification and geo-location accuracy of the mosaics. The most successful of these techniques is able to reduce geo-location error by a factor of 15 with minimal computational overhead. The proposed system overcomes communications limitations by transmitting the mosaics to a central server where they can easily be accessed by remote users via the Internet. Using flight test results, we show that the proposed mosaicking system achieves real-time performance and produces high-quality, accurately geo-referenced imagery.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Multispectral aerial video imagery"

1

Apostolopoulos, Andreas K. Dissemination and storage of tactical unmanned aerial vehicle digital video imagery at the Army Brigade Level. Monterey, Calif: Naval Postgraduate School, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

King, Douglas John. Development of a multispectral aerial video system and its application in forest and land cover type analysis. 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Dissemination and Storage of Tactical Unmanned Aerial Vehicle Digital Video Imagery at the Army Brigade Level. Storming Media, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Multispectral aerial video imagery"

1

Brauchle, Jörg, Steven Bayer, and Ralf Berger. "Automatic Ship Detection on Multispectral and Thermal Infrared Aerial Images Using MACS-Mar Remote Sensing Platform." In Image and Video Technology, 382–95. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92753-4_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Messina, Gaetano, Vincenzo Fiozzo, Salvatore Praticò, Biagio Siciliani, Antonio Curcio, Salvatore Di Fazio, and Giuseppe Modica. "Monitoring Onion Crops Using Multispectral Imagery from Unmanned Aerial Vehicle (UAV)." In New Metropolitan Perspectives, 1640–49. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-48279-4_154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chen, Liang-Chien, Tee-Ann Teo, Chi-Heng Hsieh, and Jiann-Yeou Rau. "Reconstruction of Building Models with Curvilinear Boundaries from Laser Scanner and Aerial Imagery." In Advances in Image and Video Technology, 24–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11949534_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Vasile, Alexandru N., Luke J. Skelly, Karl Ni, Richard Heinrichs, and Octavia Camps. "Efficient City-Sized 3D Reconstruction from Ultra-High Resolution Aerial and Ground Video Imagery." In Advances in Visual Computing, 347–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24028-7_32.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kuhnert, Lars, Markus Ax, Matthias Langer, Duong Nguyen Van, and Klaus-Dieter Kuhnert. "Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration." In Autonome Mobile Systeme 2009, 145–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10284-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"Benthic Habitats and the Effects of Fishing." In Benthic Habitats and the Effects of Fishing, edited by T. D. Clayton, J. C. Brock, and C. W. Wright. American Fisheries Society, 2005. http://dx.doi.org/10.47886/9781888569605.ch21.

Full text
Abstract:
For ecologists and managers of seagrass systems, the spatial context provided by remote sensing has proven to be an important complement to in situ assessments and measurements. The spatial extent of seagrass beds has been mapped most commonly with conventional aerial photography. Additional remote mapping and monitoring tools applied to seagrass studies include optical satellite sensors, airborne multispectral scanners, underwater video cameras, and towed sonar systems. An additional tool that shows much promise is airborne, waveform-resolving lidar (light detection and ranging). Now used routinely for high-resolution bathymetric and topographic surveys, lidar systems operate by emitting a laser pulse, then measuring its two-way travel time from the plane to the reflecting surface(s) below and back to a detector co-located with the laser transmitter. Using a novel, waveform-resolving lidar system developed at NASA, the Experimental Advanced Airborne Research Lidar (EAARL), we are investigating the possibility of using the additional information contained in the returned laser pulse (waveform) for the purposes of benthic habitat mapping. Preliminary analyses indicate that seagrass beds can potentially be delineated on the basis of apparent bathymetry, returned waveform shape and amplitude, and (horizontal) spatial texture. A complete set of georectified digital camera imagery is also collected during each EAARL overflight and can aid in mapping efforts. Illustrative examples are shown from seagrass beds in the turbid waters of Tampa Bay and the relatively clear waters of the Florida Keys.
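The ranging principle mentioned in this abstract (two-way travel time of the laser pulse) reduces to a one-line conversion; the timing value below is illustrative, and the speed of light in air is approximated by the vacuum value (bathymetric lidar additionally corrects for the slower speed in water).

```python
C = 299_792_458.0  # speed of light, m/s (vacuum value as an approximation)

def range_from_travel_time(t_seconds):
    """One-way distance from a two-way (out-and-back) pulse travel time."""
    return C * t_seconds / 2.0

# A return arriving 2 microseconds after emission lies roughly 300 m away.
d = range_from_travel_time(2e-6)
```

Waveform-resolving systems like EAARL record the full returned intensity over time rather than a single trigger, so one emitted pulse can yield both a water-surface and a bottom range from the same waveform.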
APA, Harvard, Vancouver, ISO, and other styles
7

Rango, Albert, and Jerry Ritchie. "Applications of Remotely Sensed Data from the Jornada Basin." In Structure and Function of a Chihuahuan Desert Ecosystem. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780195117769.003.0019.

Full text
Abstract:
As on other rangelands, remote sensing data have seen little application for measurement and monitoring within the Jornada Basin. Although remote sensing data in the form of aerial photographs were acquired as far back as 1935 over portions of the Jornada Basin, little reliance was placed on these data. With the launch of Earth resources satellites in 1972, a variety of sensors have become available to collect remote sensing data. These sensors are typically satellite-based but can also be deployed from other platforms, including ground-based towers and hand-held apparatus, low-altitude aircraft, and high-altitude aircraft, with various resolutions (now as good as 0.61 m) and spectral capabilities. A multispectral, multispatial, and multitemporal remote sensing approach would be ideal for extrapolating ground-based point and plot knowledge to large areas or landscape units viewed from satellite-based platforms. This chapter details the development and applications of long-term remotely sensed data sets that are used in concert with other long-term data to provide more comprehensive knowledge for management of rangeland across this basin and as a template for their use in rangeland management in other regions. In concert with the ongoing Jornada Basin research program of ground measurements, in 1995 we began to collect remotely sensed data from ground, airborne, and satellite platforms to provide spatial and temporal data on the physical and biological state of basin rangeland.
Data on distribution and reflectance of vegetation were measured on the ground along preestablished transects with detailed vegetation surveys (cover, composition, and height); with hand-held and yoke-mounted spectral and thermal radiometers; from aircraft flown at different elevations with spectral and thermal radiometers, infrared thermal radiometers, multispectral video, digital imagers, and laser altimeters; and from space with Landsat Thematic Mapper (TM), IKONOS, QuickBird, Terra/Aqua, and other satellite-based sensors. These different platforms (ground, aircraft, and satellite) allow evaluation of landscape patterns and states at different scales. One general use of these measurements will be to quantify the hydrologic budget and plant response to changes in components in the water and energy balance at different scales and to evaluate techniques of scaling data.
APA, Harvard, Vancouver, ISO, and other styles
8

Mathews, Adam J. "A Practical UAV Remote Sensing Methodology to Generate Multispectral Orthophotos for Vineyards." In Unmanned Aerial Vehicles, 271–94. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8365-3.ch012.

Full text
Abstract:
This paper explores the use of compact digital cameras to remotely estimate spectral reflectance based on unmanned aerial vehicle imagery. Two digital cameras, one unaltered and one altered, were used to collect four bands of spectral information (blue, green, red, and near-infrared [NIR]). The altered camera had its internal hot mirror removed to allow the sensor to be additionally sensitive to NIR. Through on-ground experimentation with spectral targets and a spectroradiometer, the sensitivity and abilities of the cameras were observed. This information along with on-site collected spectral data were used to aid in converting aerial imagery digital numbers to estimates of scaled surface reflectance using the empirical line method. The resulting images were used to create spectrally-consistent orthophotomosaics of a vineyard study site. Individual bands were subsequently validated with in situ spectroradiometer data. Results show that red and NIR bands exhibited the best fit (R2: 0.78 for red; 0.57 for NIR).
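The empirical line method described in this abstract fits a linear relation between image digital numbers (DN) and the known reflectance of on-ground calibration targets, then applies it per band to convert imagery to estimated surface reflectance. The target DN and reflectance values below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Hypothetical calibration targets: camera digital numbers observed over
# panels whose reflectance is known from a spectroradiometer.
target_dn          = np.array([30.0, 90.0, 150.0, 210.0])
target_reflectance = np.array([0.05, 0.22, 0.40, 0.57])

# Empirical line: reflectance ~= gain * DN + offset (least-squares fit).
gain, offset = np.polyfit(target_dn, target_reflectance, 1)

# Apply the fitted line to a hypothetical band of aerial imagery DNs.
band_dn = np.array([[45.0, 120.0],
                    [180.0, 60.0]])
band_reflectance = gain * band_dn + offset
```

In practice one line is fitted per spectral band, since each band's sensor response and illumination differ; the paper validates the resulting bands against in situ spectroradiometer data.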
APA, Harvard, Vancouver, ISO, and other styles
9

Mathews, Adam J. "A Practical UAV Remote Sensing Methodology to Generate Multispectral Orthophotos for Vineyards." In Geospatial Intelligence, 298–322. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8054-6.ch014.

Full text
Abstract:
This paper explores the use of compact digital cameras to remotely estimate spectral reflectance based on unmanned aerial vehicle imagery. Two digital cameras, one unaltered and one altered, were used to collect four bands of spectral information (blue, green, red, and near-infrared [NIR]). The altered camera had its internal hot mirror removed to allow the sensor to be additionally sensitive to NIR. Through on-ground experimentation with spectral targets and a spectroradiometer, the sensitivity and abilities of the cameras were observed. This information along with on-site collected spectral data were used to aid in converting aerial imagery digital numbers to estimates of scaled surface reflectance using the empirical line method. The resulting images were used to create spectrally-consistent orthophotomosaics of a vineyard study site. Individual bands were subsequently validated with in situ spectroradiometer data. Results show that red and NIR bands exhibited the best fit (R2: 0.78 for red; 0.57 for NIR).
APA, Harvard, Vancouver, ISO, and other styles
10

Essa, Almabrok, Paheding Sidike, and Vijayan K. Asari. "Efficient Key Frame Selection Approach for Object Detection in Wide Area Surveillance Applications." In Censorship, Surveillance, and Privacy, 609–23. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7113-1.ch032.

Full text
Abstract:
This paper presents an efficient preprocessing algorithm for object detection in wide area surveillance video analysis. The proposed key-frame selection method utilizes the pixel intensity differences among subsequent frames to automatically select only the frames that contain the desired contextual information and discard the rest of the insignificant frames. For improving effectiveness and efficiency, a batch updating based on a modular representation strategy is also incorporated. Experimental results show that the proposed key frame selection technique has a significant positive performance impact on wide area surveillance applications such as automatic object detection and recognition in aerial imagery.
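The selection criterion described in this abstract (keep a frame only when it differs enough from the last kept frame) can be sketched as follows. The threshold and the toy frames are illustrative assumptions, not the paper's parameters or its modular batch-updating scheme.

```python
import numpy as np

def select_key_frames(frames, threshold=10.0):
    """Keep a frame when its mean absolute pixel-intensity difference
    from the most recently kept frame exceeds the threshold."""
    keep = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(float) -
                              frames[keep[-1]].astype(float)))
        if diff > threshold:
            keep.append(i)
    return keep

# Toy sequence: frames 0-2 are nearly identical (slow drift), frame 3
# introduces new content, frame 4 is again close to frame 3.
base = np.full((8, 8), 100.0)
frames = [base, base + 1.0, base + 2.0, base + 40.0, base + 41.0]
kept = select_key_frames(frames)
```

Discarding the near-duplicate frames up front is what makes downstream object detection tractable on wide-area video, where most consecutive frames carry almost no new information.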
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Multispectral aerial video imagery"

1

Chai, Dengfeng, and Qunsheng Peng. "Spatiotemporal alignment of multi-sensor aerial video sequences." In MIPPR 2005 SAR and Multispectral Image Processing, edited by Liangpei Zhang, Jianqing Zhang, and Mingsheng Liao. SPIE, 2005. http://dx.doi.org/10.1117/12.654910.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salvado, Ana Beatriz, Ricardo Mendonca, Andre Lourenco, Francisco Marques, J. P. Matos-Carvalho, Luis Miguel Campos, and Jose Barata. "Semantic Navigation Mapping from Aerial Multispectral Imagery." In 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE). IEEE, 2019. http://dx.doi.org/10.1109/isie.2019.8781301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fitzgerald, G. J., D. J. Hunsaker, E. M. Barnes, T. R. Clarke, R. Roth, and P. J. Pinter, Jr. "Estimating Cotton Crop Water Use from Multispectral Aerial Imagery, 2003." In World Water and Environmental Resources Congress 2005. Reston, VA: American Society of Civil Engineers, 2005. http://dx.doi.org/10.1061/40792(173)525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Williams, Elmer, Michael A. Pusateri, and David Siviter. "Multicamera-multispectral video library - An algorithm development tool." In 2008 37th IEEE Applied Imagery Pattern Recognition Workshop. IEEE, 2008. http://dx.doi.org/10.1109/aipr.2008.4906477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hassan-Esfahani, Leila, Alfonso Torres-Rua, Andres M. Ticlavilca, Austin Jensen, and Mac McKee. "Topsoil moisture estimation for precision agriculture using unmmaned aerial vehicle multispectral imagery." In IGARSS 2014 - 2014 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2014. http://dx.doi.org/10.1109/igarss.2014.6947175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rottensteiner, F., J. Trinder, S. Clode, K. Kubik, and B. Lovell. "Building detection by Dempster-Shafer fusion of LIDAR data and multispectral aerial imagery." In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004. IEEE, 2004. http://dx.doi.org/10.1109/icpr.2004.1334203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vasu, Bhavan, Faiz Ur Rahman, and Andreas Savakis. "Aerial-CAM: Salient Structures and Textures in Network Class Activation Maps of Aerial Imagery." In 2018 IEEE 13th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP). IEEE, 2018. http://dx.doi.org/10.1109/ivmspw.2018.8448567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Viguier, Raphael, Chung Ching Lin, Hadi AliAkbarpour, Filiz Bunyak, Sharathchandra Pankanti, Guna Seetharaman, and Kannappan Palaniappan. "Automatic Video Content Summarization Using Geospatial Mosaics of Aerial Imagery." In 2015 IEEE International Symposium on Multimedia (ISM). IEEE, 2015. http://dx.doi.org/10.1109/ism.2015.124.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Loveland, Rohan C., and Edward Rosten. "Acquisition and registration of aerial video imagery of urban traffic." In Optical Engineering + Applications, edited by Andrew G. Tescher. SPIE, 2008. http://dx.doi.org/10.1117/12.796785.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Al-Arab, Manal, Alfonso Torres-Rua, Andres Ticlavilca, Austin Jensen, and Mac McKee. "Use of high-resolution multispectral imagery from an unmanned aerial vehicle in precision agriculture." In IGARSS 2013 - 2013 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2013. http://dx.doi.org/10.1109/igarss.2013.6723419.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Multispectral aerial video imagery"

1

Cooke, B., and A. Saucier. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery. New Orleans, LA: U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station, 1995. http://dx.doi.org/10.2737/so-rn-380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Becker, Sarah, Megan Maloney, and Andrew Griffin. A multi-biome study of tree cover detection using the Forest Cover Index. Engineer Research and Development Center (U.S.), September 2021. http://dx.doi.org/10.21079/11681/42003.

Full text
Abstract:
Tree cover maps derived from satellite and aerial imagery directly support civil and military operations. However, distinguishing tree cover from other vegetative land covers is an analytical challenge. While the commonly used Normalized Difference Vegetation Index (NDVI) can identify vegetative cover, it does not consistently distinguish between tree and low-stature vegetation. The Forest Cover Index (FCI) algorithm was developed to take the multiplicative product of the red and near-infrared bands and apply a threshold to separate tree cover from non-tree cover in multispectral imagery (MSI). Previous testing focused on one study site using 2-m resolution commercial MSI from WorldView-2 and 30-m resolution imagery from Landsat-7. New testing in this work used 3-m imagery from PlanetScope and 10-m imagery from Sentinel-2 at sites across 12 biomes in South and Central America and North Korea. Overall accuracy ranged between 23% and 97% for Sentinel-2 imagery and between 51% and 98% for PlanetScope imagery. Future research will focus on automating the identification of the threshold that separates tree cover from other land covers, exploring use of the output for machine learning applications, and incorporating ancillary data such as digital surface models and existing tree cover maps.
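The Forest Cover Index described in this abstract is the per-pixel product of the red and NIR bands, thresholded to separate tree cover. The band reflectances and the threshold below are illustrative assumptions, not the report's calibration: with these values tree pixels score low because canopy reflectance in the red band is low.

```python
import numpy as np

def forest_cover_index(red, nir):
    """FCI: multiplicative product of the red and near-infrared bands."""
    return red.astype(float) * nir.astype(float)

# Hypothetical 2x2 scene (surface reflectance): left column is canopy
# (low red, high NIR), right column is bare soil / low vegetation.
red = np.array([[0.04, 0.30],
                [0.05, 0.25]])
nir = np.array([[0.45, 0.35],
                [0.50, 0.30]])

fci = forest_cover_index(red, nir)
tree_mask = fci < 0.05  # illustrative threshold, not the published one
```

Automating the choice of this threshold per scene is exactly the open problem the report names for future work.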
APA, Harvard, Vancouver, ISO, and other styles