Journal articles on the topic 'Airborne video imagery'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 29 journal articles for your research on the topic 'Airborne video imagery.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1. Pickup, G., V. H. Chewings, and G. Pearce. "Procedures for correcting high resolution airborne video imagery." International Journal of Remote Sensing 16, no. 9 (June 1995): 1647–62. http://dx.doi.org/10.1080/01431169508954502.

2. Everitt, J. H., M. A. Hussey, D. E. Escobar, P. R. Nixon, and B. Pinkerton. "Assessment of grassland phytomass with airborne video imagery." Remote Sensing of Environment 20, no. 3 (December 1986): 299–306. http://dx.doi.org/10.1016/0034-4257(86)90050-7.

3. Wu, Sijie, Kai Zhang, Shaoyi Li, and Jie Yan. "Learning to Track Aircraft in Infrared Imagery." Remote Sensing 12, no. 23 (December 6, 2020): 3995. http://dx.doi.org/10.3390/rs12233995.

Abstract:
Airborne target tracking in infrared imagery remains a challenging task. The airborne target usually has a low signal-to-noise ratio and shows different visual patterns. The features adopted in the visual tracking algorithm are usually deep features pre-trained on ImageNet, which are not tightly coupled with the current video domain and therefore might not be optimal for infrared target tracking. To this end, we propose a new approach to learn the domain-specific features, which can be adapted to the current video online without pre-training on large datasets. Considering that only a few samples of the initial frame can be used for online training, general feature representations are encoded to the network for a better initialization. The feature learning module is flexible and can be integrated into tracking frameworks based on correlation filters to improve the baseline method. Experiments on airborne infrared imagery are conducted to demonstrate the effectiveness of our tracking algorithm.
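
To make the correlation-filter baseline mentioned above concrete, here is a minimal single-channel MOSSE-style filter in NumPy. This is a generic sketch of the technique, not the authors' tracker: the patch size, learning rate, regularization constant, and synthetic test patch are illustrative assumptions, and the cosine window and multi-channel deep features a real tracker would use are omitted.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

class MosseFilter:
    """Minimal single-channel correlation filter with online adaptation."""

    def __init__(self, patch, lam=1e-3, lr=0.125):
        self.lam, self.lr = lam, lr
        G = np.fft.fft2(gaussian_response(patch.shape))
        F = np.fft.fft2(patch)
        self.A = G * np.conj(F)           # filter numerator accumulator
        self.B = F * np.conj(F) + lam     # filter denominator accumulator

    def respond(self, patch):
        """Correlation response for a new patch; the peak marks the target."""
        F = np.fft.fft2(patch)
        return np.real(np.fft.ifft2((self.A / self.B) * F))

    def update(self, patch):
        """Online adaptation to the current video domain (running average)."""
        G = np.fft.fft2(gaussian_response(patch.shape))
        F = np.fft.fft2(patch)
        self.A = (1 - self.lr) * self.A + self.lr * G * np.conj(F)
        self.B = (1 - self.lr) * self.B + self.lr * (F * np.conj(F) + self.lam)

# Toy usage on a synthetic patch; real use would crop around the target.
rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))
tracker = MosseFilter(patch)
resp = tracker.respond(patch + 0.1 * rng.standard_normal((64, 64)))
print("response peak:", np.unravel_index(resp.argmax(), resp.shape))
tracker.update(patch)
```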

4. Cook, C. G., D. E. Escobar, J. H. Everitt, I. Cavazos, A. F. Robinson, and M. R. Davis. "Utilizing airborne video imagery in kenaf management and production." Industrial Crops and Products 9, no. 3 (March 1999): 205–10. http://dx.doi.org/10.1016/s0926-6690(98)00033-8.

5. Edirisinghe, A., J. P. Louis, and G. E. Chapman. "Potential for Calibrating Airborne Video Imagery Using Preflight Calibration Coefficients." Photogrammetric Engineering & Remote Sensing 70, no. 5 (May 1, 2004): 573–80. http://dx.doi.org/10.14358/pers.70.5.573.

6. Ifimov, Gabriela, Tomas Naprstek, Joshua M. Johnston, Juan Pablo Arroyo-Mora, George Leblanc, and Madeline D. Lee. "Geocorrection of Airborne Mid-Wave Infrared Imagery for Mapping Wildfires without GPS or IMU." Sensors 21, no. 9 (April 27, 2021): 3047. http://dx.doi.org/10.3390/s21093047.

Abstract:
The increase in annual wildfires in many areas of the world has triggered international efforts to deploy sensors on airborne and space platforms to map these events and understand their behaviour. During the summer of 2017, an airborne flight campaign acquired mid-wave infrared imagery over active wildfires in Northern Ontario, Canada. However, it suffered multiple position-based equipment issues, thus requiring a non-standard geocorrection methodology. This study presents the approach, which utilizes a two-step semi-automatic geocorrection process that outputs image mosaics from airborne infrared video input. The first step extracts individual video frames that are combined into orthoimages using an automatic image registration method. The second step involves the georeferencing of the imagery using pseudo-ground control points to a fixed coordinate system. The output geocorrected datasets in units of radiance can then be used to derive fire products such as fire radiative power density (FRPD). Prior to the georeferencing process, the Root Mean Square Error (RMSE) associated with the imagery was greater than 200 m. After the georeferencing process was applied, an RMSE below 30 m was reported, and the computed FRPD estimations are within expected values across the literature. As such, this alternative geocorrection methodology successfully salvages an otherwise unusable dataset and can be adapted by other researchers who do not have access to accurate positional information for airborne infrared flight campaigns over wildfires.
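
The second (georeferencing) step described above can be illustrated with a generic least-squares affine fit to pseudo-ground control points, reporting the RMSE of the control-point residuals in map units. This is a sketch of the general technique, not the study's implementation; the pixel and map coordinates below are invented for illustration.

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform mapping image coords to map coords."""
    A = np.hstack([img_xy, np.ones((len(img_xy), 1))])   # [x, y, 1] rows
    params, *_ = np.linalg.lstsq(A, map_xy, rcond=None)  # shape (3, 2)
    return params

def rmse(params, img_xy, map_xy):
    """Root-mean-square error of the GCP residuals, in map units (metres)."""
    A = np.hstack([img_xy, np.ones((len(img_xy), 1))])
    residuals = A @ params - map_xy
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))

# Hypothetical pseudo-GCPs: pixel positions and assumed map coordinates.
img_xy = np.array([[120.0, 80.0], [640.0, 95.0], [610.0, 470.0], [130.0, 455.0]])
map_xy = np.array([[5.431e5, 5.352e6], [5.438e5, 5.352e6],
                   [5.437e5, 5.347e6], [5.431e5, 5.347e6]])

params = fit_affine(img_xy, map_xy)
print(f"GCP RMSE: {rmse(params, img_xy, map_xy):.1f} m")
```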

7. Fletcher, Reginald S., and Allan T. Showler. "Surveying kaolin-treated cotton plots with airborne multispectral digital video imagery." Computers and Electronics in Agriculture 54, no. 1 (October 2006): 1–7. http://dx.doi.org/10.1016/j.compag.2006.06.004.

8. Everitt, James H., David E. Escobar, Mario A. Alaniz, Ricardo Villarreal, and Michael R. Davis. "Distinguishing Brush and Weeds on Rangelands Using Video Remote Sensing." Weed Technology 6, no. 4 (December 1992): 913–21. http://dx.doi.org/10.1017/s0890037x00036472.

Abstract:
This paper describes the application of a relatively new remote sensing tool, airborne video imagery, for distinguishing weed and brush species on rangelands. Plant species studied were false broomweed, spiny aster, and Chinese tamarisk. A multispectral video system that acquired color-infrared (CIR) composite imagery and its simultaneously synchronized three-band [near-infrared (NIR), red, and yellow-green] narrowband images was used for the false broomweed and spiny aster experiments. A conventional color camcorder video system was used to study Chinese tamarisk. False broomweed and spiny aster could be detected on CIR composite and NIR narrowband imagery, while Chinese tamarisk could be distinguished on conventional color imagery. Quantitative data obtained from digitized video images of the three species showed that their digital values were statistically different (P = 0.05) from those of associated vegetation and soil. Computer analyses of video images showed that populations of the three species could be quantified from associated vegetation. This technique permits area estimates of false broomweed, spiny aster, and Chinese tamarisk populations on rangeland and wildland areas.
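
The quantification step described above can be sketched as a simple density slice: pixels whose digital values fall in the range established for a target species are counted and converted to area. This is a generic illustration, not the authors' procedure; the threshold range, pixel size, and synthetic image are assumptions.

```python
import numpy as np

def species_area(band, lo, hi, pixel_area_m2):
    """Estimate area occupied by a target class whose digital values
    fall in [lo, hi], distinct from surrounding vegetation and soil."""
    mask = (band >= lo) & (band <= hi)
    return mask.sum() * pixel_area_m2, mask

# Synthetic 8-bit NIR frame: bright target canopies over a darker background.
rng = np.random.default_rng(1)
band = rng.normal(90, 10, (200, 200))
band[60:90, 40:120] = rng.normal(180, 8, (30, 80))   # simulated infestation
band = band.clip(0, 255)

area, mask = species_area(band, 150, 255, pixel_area_m2=0.25)
print(f"estimated infested area: {area:.0f} m^2")
```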

9. Everitt, James H., David E. Escobar, Ricardo Villarreal, Mario A. Alaniz, and Michael R. Davis. "Integration of Airborne Video, Global Positioning System and Geographic Information System Technologies for Detecting and Mapping Two Woody Legumes on Rangelands." Weed Technology 7, no. 4 (December 1993): 981–87. http://dx.doi.org/10.1017/s0890037x00038112.

Abstract:
Blackbrush acacia and huisache, two troublesome woody legumes on Texas rangelands, could be distinguished on conventional color aerial video imagery. The integration of a global positioning system with the video imagery permitted latitude/longitude coordinates of blackbrush acacia and huisache infestations to be recorded on each image. Global positioning system coordinates were entered into a geographic information system to map blackbrush acacia and huisache populations over an extensive rangeland area.

10. Stutte, G. W., and C. A. Stutte. "Use of Near-Infrared Video for Localizing Nitrogen Stress in Peach Orchards." HortTechnology 2, no. 2 (April 1992): 224–27. http://dx.doi.org/10.21273/horttech.2.2.224.

Abstract:
Computer analysis of airborne, broad-band, near-infrared (NIR, 710 to 1100 nm) video imagery of peach tree canopies was used to determine spatial variability of cumulative stress in two peach orchards. A significant quadratic correlation was found between leaf-N content and the normalized mean pixel intensity (MPI) of the digital imagery of NIR canopy reflectance. This correlation was used to establish MPI estimates of N-stressed trees in the orchard. The relationship was used to localize site-specific spatial variability in a commercial peach orchard. The underlying soil type was found to be closely associated with the spatial variability in NIR imagery in the commercial peach orchard. Assessing spatial variability in the orchard with NIR video permits early localization of potentially low productivity regions within an orchard.
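
The quadratic leaf-N/MPI relationship described above can be illustrated with an ordinary polynomial least-squares fit. The paired values below are invented for illustration, and the 2.2% leaf-N stress cut-off is an arbitrary assumption, not the study's calibration.

```python
import numpy as np

# Hypothetical paired observations: leaf nitrogen (%) and normalized mean
# pixel intensity (MPI) of NIR canopy reflectance; values are invented.
leaf_n = np.array([1.8, 2.1, 2.4, 2.7, 3.0, 3.3, 3.6])
mpi    = np.array([0.42, 0.48, 0.55, 0.60, 0.63, 0.645, 0.65])

# Second-order polynomial least-squares fit, MPI as a function of leaf-N,
# mirroring the quadratic correlation reported in the abstract.
coeffs = np.polyfit(leaf_n, mpi, deg=2)
fit = np.poly1d(coeffs)

# Coefficient of determination (r^2) for the fit.
ss_res = np.sum((mpi - fit(leaf_n)) ** 2)
ss_tot = np.sum((mpi - mpi.mean()) ** 2)
print("coefficients:", np.round(coeffs, 4))
print(f"r^2 = {1 - ss_res / ss_tot:.3f}")

# Use the fitted curve to flag trees whose MPI suggests N stress,
# e.g. below the MPI predicted for an assumed 2.2% leaf-N cut-off.
threshold_mpi = fit(2.2)
print(f"MPI threshold for N stress: {threshold_mpi:.3f}")
```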

11. Everitt, James H., David E. Escobar, Mario A. Alaniz, Michael R. Davis, and James V. Richerson. "Using Spatial Information Technologies to Map Chinese Tamarisk (Tamarix chinensis) Infestations." Weed Science 44, no. 1 (March 1996): 194–201. http://dx.doi.org/10.1017/s0043174500093759.

Abstract:
This paper describes the application of airborne video data with global positioning system and geographic information system technologies for detecting and mapping Chinese tamarisk infestations in the southwestern United States. Study areas were along the Colorado River in southwestern Arizona, the Rio Grande River in extreme west Texas, and the Pecos River in west-central Texas. Chinese tamarisk could be readily distinguished on conventional color video imagery in late November when its foliage turned a yellow-orange to orange-brown color prior to leaf drop. The integration of the global positioning system with the video imagery permitted latitude/longitude coordinates of Chinese tamarisk infestations to be recorded on each image. The global positioning system latitude/longitude coordinates were entered into a geographic information system to map Chinese tamarisk populations along the three river systems.

12. Greene, Roger H. "Airborne Video Digital Data for Resource Analysis and Management in the Northeast." Northern Journal of Applied Forestry 5, no. 2 (June 1, 1988): 117–20. http://dx.doi.org/10.1093/njaf/5.2.117.

Abstract:
Airborne video data in digital form provides an inexpensive alternative to aerial photography for obtaining up-to-date information on the size, kinds, and distribution of forest types. Its capability to be incorporated into a geographic information system can augment the value of information produced during analysis. In Maine, Landmark Applied Technologies has developed and is using a system which includes acquiring the video imagery, extracting scenes in digital form, analyzing these data, and incorporating them into an Intergraph GIS to provide a mechanism for rapid updating of spatial data bases. North. J. Appl. For. 5:117-120, June 1988.

13. Prost, C., A. Zerger, and P. Dare. "A multilayer feedforward neural network for automatic classification of eucalyptus forests in airborne video imagery." International Journal of Remote Sensing 26, no. 15 (August 10, 2005): 3275–93. http://dx.doi.org/10.1080/01431160500114722.

14. Edirisinghe, A., G. E. Chapman, and J. P. Louis. "A simplified method for retrieval of ground level reflectance of targets from airborne video imagery." International Journal of Remote Sensing 22, no. 6 (January 2001): 1127–41. http://dx.doi.org/10.1080/01431160117786.

15. Cusicanqui, Johnny, Norman Kerle, and Francesco Nex. "Usability of aerial video footage for 3-D scene reconstruction and structural damage assessment." Natural Hazards and Earth System Sciences 18, no. 6 (June 8, 2018): 1583–98. http://dx.doi.org/10.5194/nhess-18-1583-2018.

Abstract:
Remote sensing has evolved into the most efficient approach to assess post-disaster structural damage in extensively affected areas through the use of spaceborne data. For smaller and, in particular, complex urban disaster scenes, multi-perspective aerial imagery obtained with unmanned aerial vehicles and derived dense color 3-D models are increasingly being used. These types of data allow the direct and automated recognition of damage-related features, supporting an effective post-disaster structural damage assessment. However, the rapid collection and sharing of multi-perspective aerial imagery is still limited due to tight or lacking regulations and legal frameworks. A potential alternative is aerial video footage, which is typically acquired and shared by civil protection institutions or news media and which tends to be the first type of airborne data available. Nevertheless, inherent artifacts and the lack of suitable processing means have long limited its potential use in structural damage assessment and other post-disaster activities. In this research the usability of modern aerial video data was evaluated based on a comparative quality and application analysis of video data and multi-perspective imagery (photos), and their derivative 3-D point clouds created using current photogrammetric techniques. Additionally, the effects of external factors, such as topography and the presence of smoke and moving objects, were determined by analyzing two different earthquake-affected sites: Tainan (Taiwan) and Pescara del Tronto (Italy). Results demonstrated similar usabilities for video and photos, shown by a difference of only 2 cm between the accuracies of video- and photo-based 3-D point clouds. Despite the low video resolution, the usability of these data was compensated for by a small ground sampling distance. Low quality and limited applicability resulted not from video characteristics but from non-data-related factors, such as changes in the scene, lack of texture, or moving objects. We conclude that not only are current video data more rapidly available than photos, but they also have a comparable ability to assist in image-based structural damage assessment and other post-disaster activities.

16. Phinn, Stuart, Janet Franklin, Allen Hope, Douglas Stow, and Laura Huenneke. "Biomass Distribution Mapping Using Airborne Digital Video Imagery and Spatial Statistics in a Semi-Arid Environment." Journal of Environmental Management 47, no. 2 (June 1996): 139–64. http://dx.doi.org/10.1006/jema.1996.0042.

17. Everitt, J. H., D. E. Escobar, M. A. Alaniz, and M. R. Davis. "Using airborne middle-infrared (1.45–2.0 μm) video imagery for distinguishing plant species and soil conditions." Remote Sensing of Environment 22, no. 3 (August 1987): 423–28. http://dx.doi.org/10.1016/0034-4257(87)90093-9.

18. Churnside, James H., Alexei F. Sharov, and Ronald A. Richter. "Aerial surveys of fish in estuaries: a case study in Chesapeake Bay." ICES Journal of Marine Science 68, no. 1 (September 7, 2010): 239–44. http://dx.doi.org/10.1093/icesjms/fsq138.

Abstract:
Churnside, J. H., Sharov, A. F., and Richter, R. A. 2011. Aerial surveys of fish in estuaries: a case study in Chesapeake Bay. – ICES Journal of Marine Science, 68: 239–244. The performance of a near-nadir, airborne lidar was compared with that of an airborne imagery (video) system for surveys of Atlantic menhaden (Brevoortia tyrannus) in Chesapeake Bay, USA. Lidar had a greater probability of detecting a school (0.93 vs. 0.73) as a result of its greater depth penetration, a lesser probability of false identification (0.05 vs. 0.13) because it was less dependent on surface conditions and ambient illumination, and less variability (coefficient of variability of 0.34 vs. 0.73) in repeated coverage of the same area. Video had a lower statistical uncertainty in school detection (relative standard error 0.04 vs. 0.07) as a result of its greater swath width. The average depth penetration of lidar was 12 m, and the average depth of detected schools was 3 m. The performance of both techniques decreased with increasing windspeed, although the effect was smaller for lidar. The school area inferred by the two techniques was nearly the same. An examination of the missed schools and false identifications in lidar and video suggests that a combination of the two techniques would reduce most of the uncertainty associated with the use of either technique alone.

19. Lim, H. S., M. Z. Matjafri, and K. Abdullah. "Land Use/Cover Classification over Small Areas Using Conventional Digital Camcorder Imagery Based on Frequency-Based Contextual and Neural Network Classification Techniques." Advanced Materials Research 650 (January 2013): 658–63. http://dx.doi.org/10.4028/www.scientific.net/amr.650.658.

Abstract:
An airborne survey was conducted to produce land cover/use maps. The feasibility of using a conventional digital camcorder to acquire remotely sensed data was investigated, and the imagery for land cover mapping using remote sensing techniques was evaluated. The study area was the Universiti Sains Malaysia campus, Penang, located in Peninsular Malaysia. Digital images were taken from a low-altitude light aircraft, a Cessna 172Q, at an average altitude of 2.4384 km above sea level. The use of a digital camcorder as a sensor to capture digital images is more economical compared with other airborne sensors. This technique is designed to overcome the problem of obtaining cloud-free photographs from a satellite platform in equatorial regions. Digital video imagery was taken in the red, green, and blue bands. A comparison between frequency-based contextual and neural network classification techniques for analyzing digital camcorder imagery is presented. Frequency-based contextual and neural network classification techniques were applied to the digital camcorder spectral bands (red, green, and blue) to extract the thematic information from the acquired scenes. The classified map was compared with the ground truth data, and accuracy was evaluated by an error matrix. Results indicate that a conventional digital camcorder can be used to acquire digital imagery for land cover/use mapping of a small area of coverage.
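
The accuracy-assessment step described above (comparing a classified map against ground truth with an error matrix) can be sketched generically as follows. The class labels and 15% error rate are synthetic assumptions, and Cohen's kappa is included as a common companion statistic, though the abstract does not state which accuracy measures were reported.

```python
import numpy as np

def error_matrix(truth, predicted, n_classes):
    """Confusion (error) matrix: rows = ground truth, cols = classified map."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth.ravel(), predicted.ravel()):
        m[t, p] += 1
    return m

def overall_accuracy(m):
    return m.trace() / m.sum()

def kappa(m):
    """Cohen's kappa: agreement beyond chance, common in map accuracy work."""
    n = m.sum()
    po = m.trace() / n
    pe = (m.sum(axis=0) * m.sum(axis=1)).sum() / n ** 2
    return (po - pe) / (1 - pe)

# Toy ground-truth and classified labels for three cover classes.
rng = np.random.default_rng(2)
truth = rng.integers(0, 3, 500)
pred = truth.copy()
flip = rng.random(500) < 0.15            # simulate 15% misclassification
pred[flip] = rng.integers(0, 3, flip.sum())

m = error_matrix(truth, pred, 3)
print(m)
print(f"overall accuracy: {overall_accuracy(m):.3f}, kappa: {kappa(m):.3f}")
```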

20. Brown, Carl E., Mervin F. Fingas, and Richard Marois. "Laser Fluorosensor Demonstration Flights around the Southern Coast of Newfoundland." International Oil Spill Conference Proceedings 2005, no. 1 (May 1, 2005): 1001–5. http://dx.doi.org/10.7901/2169-3358-2005-1-1001.

Abstract:
Several oil spill remote sensing flights were conducted by Environment Canada off the southern coast of Newfoundland, Canada, in late February and early March 2004. These flights were undertaken to demonstrate the capabilities of the Scanning Laser Environmental Airborne Fluorosensor (SLEAF) in real-life situations in the North Atlantic and Newfoundland coastal regions in late winter weather conditions. Geo-referenced infrared, ultraviolet, color video, and digital still imagery was collected along with the laser fluorosensor data. Brief testing of a Generation III night vision camera was also conducted. Flights were conducted in the shipping lanes around the Newfoundland coast, out to the Hibernia and Terra Nova oil platforms, and over known oil seep areas. Details of the analysis of laser fluorescence data collected during these flights will be presented along with a summary of the remote sensing flights.

21. Coops, N. C., and P. C. Catling. "Utilising Airborne Multispectral Videography to Predict Habitat Complexity in Eucalypt Forests for Wildlife Management." Wildlife Research 24, no. 6 (1997): 691. http://dx.doi.org/10.1071/wr96099. (Further information: http://www.ffp.csiro.au/nfm/mdq/.)

Abstract:
Airborne videographic remote sensing is a relatively recent technology that can provide inexpensive and high-spatial-resolution imagery for forest management. This paper presents a methodology that allows videographic data to be modelled to predict habitat complexity in eucalypt forests. Within the eucalypt forests of south-eastern New South Wales, plots were located on the imagery, and the local variance of the videography within each plot was computed on the assumption that changes in local variance provided an indication of forest structure, and thus the habitat complexity of the site. The near-infrared (NIR) channel demonstrated the most variation, as that channel provided an indication of photosynthetic activity and, as a result, the variation between canopy, understorey, ground cover, soil and shadow provided a highly variable response in the video imagery. Habitat-complexity scores were used to record forest structure, and the relationship between the NIR variance and field habitat-complexity scores was highly significant (P < 0.001; r2 = 0.75; n = 29). From this relationship, maps of the habitat-complexity scores were predicted from the videography at 2-m spatial resolution. The model was extrapolated across a 1 × 1 km subset of the video data and field verification showed that the predicted scores corresponded closely with the field scores. Studies have demonstrated the relationship between habitat-complexity scores and the distribution and abundance of different mammalian fauna. This method allows predictions of habitat-complexity scores to be spatially extrapolated and used to stratify the landscape into regions for both the modelling of faunal habitat and to predict the composition, distribution and abundance of some faunal groups across the landscape. Ultimately, the management of forest habitats for wildlife will depend on the availability of accurate maps of the diversity and extent of habitats over large areas and/or in difficult terrain.
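
The core computation described above, local variance of the NIR channel within plots as a structural proxy, can be sketched in NumPy as follows. The window size, plot extents, and synthetic NIR band are illustrative assumptions; the paper's actual plot design and regression are not reproduced.

```python
import numpy as np

def plot_local_variance(nir, plots, window=5):
    """Mean local variance of the NIR channel inside each plot.
    Local variance is computed in a sliding window; higher values are
    taken as a proxy for structural (habitat) complexity."""
    pad = window // 2
    padded = np.pad(nir, pad, mode="reflect")
    # Sliding-window variance via a strided view (no SciPy dependency).
    win = np.lib.stride_tricks.sliding_window_view(padded, (window, window))
    local_var = win.var(axis=(-1, -2))
    return [float(local_var[r0:r1, c0:c1].mean()) for r0, r1, c0, c1 in plots]

# Synthetic 2-m resolution NIR band: one smooth plot, one structurally rough.
rng = np.random.default_rng(3)
nir = rng.normal(100, 2, (120, 120))
nir[60:, :] += rng.normal(0, 15, (60, 120))       # rougher lower half

plots = [(10, 40, 10, 40), (70, 100, 10, 40)]     # (row0, row1, col0, col1)
print(plot_local_variance(nir, plots))             # second plot scores higher
```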

22. Ulhaq, Anwaar, Peter Adams, Tarnya E. Cox, Asim Khan, Tom Low, and Manoranjan Paul. "Automated Detection of Animals in Low-Resolution Airborne Thermal Imagery." Remote Sensing 13, no. 16 (August 19, 2021): 3276. http://dx.doi.org/10.3390/rs13163276.

Abstract:
Detecting animals to estimate abundance can be difficult, particularly when the habitat is dense or the target animals are fossorial. The recent surge in the use of thermal imagers in ecology and their use in animal detections can increase the accuracy of population estimates and improve the subsequent implementation of management programs. However, the use of thermal imagers results in many hours of captured flight videos which require manual review for confirmation of species detection and identification. Therefore, the perceived cost and efficiency trade-off often restricts the use of these systems. Additionally, for many off-the-shelf systems, the exported imagery can be quite low resolution (<9 Hz), increasing the difficulty of using automated detections algorithms to streamline the review process. This paper presents an animal species detection system that utilises the cost-effectiveness of these lower resolution thermal imagers while harnessing the power of transfer learning and an enhanced small object detection algorithm. We have proposed a distant object detection algorithm named Distant-YOLO (D-YOLO) that utilises YOLO (You Only Look Once) and improves its training and structure for the automated detection of target objects in thermal imagery. We trained our system on thermal imaging data of rabbits, their active warrens, feral pigs, and kangaroos collected by thermal imaging researchers in New South Wales and Western Australia. This work will enhance the visual analysis of animal species while performing well on low, medium and high-resolution thermal imagery.

23. Hein, Daniel, Thomas Kraft, Jörg Brauchle, and Ralf Berger. "Integrated UAV-Based Real-Time Mapping for Security Applications." ISPRS International Journal of Geo-Information 8, no. 5 (May 8, 2019): 219. http://dx.doi.org/10.3390/ijgi8050219.

Abstract:
Security applications such as management of natural disasters and man-made incidents crucially depend on the rapid availability of a situation picture of the affected area. UAV-based remote sensing systems may constitute an essential tool for capturing aerial imagery in such scenarios. While several commercial UAV solutions already provide acquisition of high quality photos or real-time video transmission via radio link, generating instant high-resolution aerial maps is still an open challenge. For this purpose, the article presents a real-time processing tool chain, enabling generation of interactive aerial maps during flight. Key element of this tool chain is the combination of the Terrain Aware Image Clipping (TAC) algorithm and 12-bit JPEG compression. As a result, the data size of a common scenery can be reduced to approximately 0.4% of the original size, while preserving full geometric and radiometric resolution. Particular attention was paid to minimize computational costs to reduce hardware requirements. The full workflow was demonstrated using the DLR Modular Airborne Camera System (MACS) operated on a conventional aircraft. In combination with a commercial radio link, the latency between image acquisition and visualization in the ground station was about 2 s. In addition, the integration of a miniaturized version of the camera system into a small fixed-wing UAV is presented. It is shown that the described workflow is efficient enough to instantly generate image maps even on small UAV hardware. Using a radio link, these maps can be broadcasted to on-site operation centers and are immediately available to the end-users.

24. Gstaiger, V., H. Römer, D. Rosenbaum, and F. Henkel. "Airborne Camera System for Real-Time Applications – Support of a National Civil Protection Exercise." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 30, 2015): 1189–94. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-1189-2015.

Abstract:
In the VABENE++ project of the German Aerospace Center (DLR), powerful tools are being developed to aid public authorities and organizations with security responsibilities as well as traffic authorities when dealing with disasters and large public events. One focus lies on the acquisition of high resolution aerial imagery, its fully automatic processing, analysis and near real-time provision to decision makers in emergency situations. For this purpose a camera system was developed to be operated from a helicopter with light-weight processing units and a microwave link for fast data transfer. In order to meet end-users' requirements, DLR works closely with the German Federal Office of Civil Protection and Disaster Assistance (BBK) within this project. One task of BBK is to establish, maintain and train the German Medical Task Force (MTF), which is deployed nationwide in case of large-scale disasters. In October 2014, several units of the MTF were deployed for the first time in the framework of a national civil protection exercise in Brandenburg. The VABENE++ team joined the exercise and provided near real-time aerial imagery, videos and derived traffic information to support the direction of the MTF and to identify needs for further improvements and developments. In this contribution the authors introduce the new airborne camera system together with its near real-time processing components and share experiences gained during the national civil protection exercise.

25. Fernandes, Richard, Christian Prevost, Francis Canisius, Sylvain G. Leblanc, Matt Maloley, Sarah Oakes, Kiyomi Holman, and Anders Knudby. "Monitoring snow depth change across a range of landscapes with ephemeral snowpacks using structure from motion applied to lightweight unmanned aerial vehicle videos." Cryosphere 12, no. 11 (November 13, 2018): 3535–50. http://dx.doi.org/10.5194/tc-12-3535-2018.

Abstract:
Differencing of digital surface models derived from structure from motion (SfM) processing of airborne imagery has been used to produce snow depth (SD) maps with between ∼2 and ∼15 cm horizontal resolution and accuracies of ±10 cm over relatively flat surfaces with little or no vegetation and over alpine regions. This study builds on these findings by testing two hypotheses across a broader range of conditions: (i) that the vertical accuracy of SfM processing of imagery acquired by commercial low-cost unmanned aerial vehicle (UAV) systems can be adequately modelled using conventional photogrammetric theory and (ii) that SD change can be more accurately estimated by differencing snow-covered elevation surfaces rather than differencing a snow-covered and snow-free surface. A total of 71 UAV missions were flown over five sites, ranging from short grass to a regenerating forest, with ephemeral snowpacks. Point cloud geolocation performance agreed with photogrammetric theory that predicts uncertainty is proportional to UAV altitude and linearly related to horizontal uncertainty. The root-mean-square difference (RMSD) over the observation period, in comparison to the average of in situ measurements along ∼50 m transects, ranged from 1.58 to 10.56 cm for weekly SD and from 2.54 to 8.68 cm for weekly SD change. RMSD was not related to microtopography as quantified by the snow-free surface roughness. SD change uncertainty was unrelated to vegetation cover but was dominated by outliers corresponding to rapid in situ melt or onset; the median absolute difference of SD change ranged from 0.65 to 2.71 cm. These results indicate that the accuracy of UAV-based estimates of weekly snow depth change was, excepting conditions with deep fresh snow, substantially better than for snow depth and was comparable to in situ methods.
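
The central idea above, differencing two snow-covered surfaces rather than a snow-covered and snow-free pair, and scoring the result against transects with RMSD, can be sketched as follows. The DSM noise level, snow depths, and transect values are synthetic assumptions chosen only to make the example run.

```python
import numpy as np

def sd_change(dsm_week1, dsm_week2):
    """Snow depth change: difference of two snow-covered surface models,
    avoiding the extra error of a snow-free reference surface."""
    return dsm_week2 - dsm_week1

def rmsd(map_values, transect_values):
    """Root-mean-square difference against in situ transect measurements."""
    d = np.asarray(map_values) - np.asarray(transect_values)
    return float(np.sqrt(np.mean(d ** 2)))

# Synthetic DSMs (metres): uniform 4 cm accumulation plus SfM noise.
rng = np.random.default_rng(4)
terrain = rng.normal(250.0, 0.5, (100, 100))
dsm1 = terrain + 0.20 + rng.normal(0, 0.02, terrain.shape)   # 20 cm snow
dsm2 = terrain + 0.24 + rng.normal(0, 0.02, terrain.shape)   # 24 cm snow

change = sd_change(dsm1, dsm2)
print(f"mean SD change: {change.mean() * 100:.1f} cm")

# Compare map values sampled along a transect with probe measurements.
map_samples = change[50, ::10] * 100                 # cm
probe = np.full_like(map_samples, 4.0)               # all probes read 4 cm
print(f"RMSD vs transect: {rmsd(map_samples, probe):.2f} cm")
```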

26. Baum, Bryan A., Andrew J. Heymsfield, Ping Yang, and Sarah T. Bedka. "Bulk Scattering Properties for the Remote Sensing of Ice Clouds. Part I: Microphysical Data and Models." Journal of Applied Meteorology 44, no. 12 (December 1, 2005): 1885–95. http://dx.doi.org/10.1175/jam2308.1.

Abstract:
This study reports on the use of in situ data obtained in midlatitude and tropical ice clouds from airborne sampling probes and balloon-borne replicators as the basis for the development of bulk scattering models for use in satellite remote sensing applications. Airborne sampling instrumentation includes the two-dimensional cloud (2D-C), two-dimensional precipitation (2D-P), high-volume precipitation spectrometer (HVPS), cloud particle imager (CPI), and NCAR video ice particle sampler (VIPS) probes. Herein the development of a comprehensive set of microphysical models based on in situ measurements of particle size distributions (PSDs) is discussed. Two parameters are developed and examined: ice water content (IWC) and median mass diameter Dm. Comparisons are provided between the IWC and Dm values derived from in situ measurements obtained during a series of field campaigns held in the midlatitude and tropical regions and those calculated from a set of modeled ice particles used for light-scattering calculations. The ice particle types considered in this study include droxtals, hexagonal plates, solid columns, hollow columns, aggregates, and 3D bullet rosettes. It is shown that no single habit accurately replicates the derived IWC and Dm values, but a mixture of habits can significantly improve the comparison of these bulk microphysical properties. In addition, the relationship between Dm and the effective particle size Deff, defined as 1.5 times the ratio of ice particle volume to projected area for a given PSD, is investigated. Based on these results, a subset of microphysical models is chosen as the basis for the development of ice cloud bulk scattering models in Part II of this study.
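
The two bulk parameters discussed above can be sketched from a particle size distribution if one assumes a mass-dimension power law m = aD^b (the coefficients below follow the common Brown–Francis form and are illustrative, not necessarily those used in the study).

```python
import numpy as np

def median_mass_diameter(d_um, n_conc, a=0.00294, b=1.9):
    """Median mass diameter Dm of a particle size distribution: the size
    that splits the cumulative ice mass in half. Mass per particle follows
    an assumed power law m = a * D^b (coefficients are illustrative)."""
    mass_per_bin = n_conc * a * (d_um * 1e-4) ** b   # D in cm for the power law
    cum = np.cumsum(mass_per_bin)
    return float(np.interp(cum[-1] / 2.0, cum, d_um))

def ice_water_content(d_um, n_conc, a=0.00294, b=1.9):
    """Total ice water content: the mass summed over all size bins."""
    return float(np.sum(n_conc * a * (d_um * 1e-4) ** b))

# Hypothetical gamma-shaped PSD: bin mid-point diameters (micrometres)
# and number concentrations (arbitrary units).
d = np.linspace(20, 2000, 100)
n = (d / 300.0) ** 2 * np.exp(-d / 300.0)

print(f"Dm  = {median_mass_diameter(d, n):.0f} um")
print(f"IWC = {ice_water_content(d, n):.3e} (arbitrary mass units)")
```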

27. Di Traglia, Federico, Sonia Calvari, Luca D'Auria, Teresa Nolesini, Alessandro Bonaccorso, Alessandro Fornaciai, Antonietta Esposito, Antonio Cristaldi, Massimiliano Favalli, and Nicola Casagli. "The 2014 Effusive Eruption at Stromboli: New Insights from In Situ and Remote-Sensing Measurements." Remote Sensing 10, no. 12 (December 14, 2018): 2035. http://dx.doi.org/10.3390/rs10122035.

Abstract:
In situ and remote-sensing measurements have been used to characterize the run-up phase and the phenomena that occurred during the August–November 2014 flank eruption at Stromboli. Data comprise videos recorded by the visible and infrared camera network, ground displacement recorded by the permanent-sited Ku-band, Ground-Based Interferometric Synthetic Aperture Radar (GBInSAR) device, seismic signals (band 0.02–10 Hz), and high-resolution Digital Elevation Models (DEMs) reconstructed based on Light Detection and Ranging (LiDAR) data and tri-stereo PLEIADES-1 imagery. This work highlights the importance of considering data from in situ sensors and remote-sensing platforms in monitoring active volcanoes. Comparison of data from live-cams, tremor amplitude, localization of Very-Long-Period (VLP) source and amplitude of explosion quakes, and ground displacements recorded by GBInSAR in the crater terrace provide information about the eruptive activity, nowcasting the shift in eruptive style of explosive to effusive. At the same time, the landslide activity during the run-up and onset phases could be forecasted and tracked using the integration of data from the GBInSAR and the seismic landslide index. Finally, the use of airborne and space-borne DEMs permitted the detection of topographic changes induced by the eruptive activity, allowing for the estimation of a total volume of 3.07 ± 0.37 × 106 m3 of the 2014 lava flow field emplaced on the steep Sciara del Fuoco slope.
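
The volume estimate quoted above is the kind of number obtained by differencing pre- and post-eruption DEMs over the flow field. A minimal sketch follows; the DEMs, cell size, noise floor, and lobe geometry are synthetic assumptions, not the study's data.

```python
import numpy as np

def flow_volume(dem_pre, dem_post, cell_size_m, min_change=0.5):
    """Erupted volume from DEM differencing: sum positive elevation change
    over the flow field and multiply by cell area. Changes below an
    assumed noise floor (min_change, metres) are ignored."""
    dz = dem_post - dem_pre
    dz[dz < min_change] = 0.0
    return float(dz.sum() * cell_size_m ** 2)

# Synthetic pre-/post-eruption DEMs (metres), 2 m cells: a lava lobe of
# 8 m average thickness over a 200 x 150-cell footprint.
rng = np.random.default_rng(5)
pre = rng.normal(400.0, 1.0, (500, 500))
post = pre + rng.normal(0, 0.1, pre.shape)           # differencing noise
post[100:300, 200:350] += 8.0                        # emplaced lava lobe

vol = flow_volume(pre, post, cell_size_m=2.0)
print(f"estimated flow volume: {vol / 1e6:.2f} x 10^6 m^3")
```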

28. Pitt, Douglas G., Robert G. Wagner, Ronald J. Hall, Douglas J. King, Donald G. Leckie, and Ulf Runesson. "Use of remote sensing for forest vegetation management: A problem analysis." Forestry Chronicle 73, no. 4 (August 1, 1997): 459–77. http://dx.doi.org/10.5558/tfc73459-4.

Abstract:
Forest managers require accurate and timely data that describe vegetation conditions on cutover areas to assess vegetation development and prescribe actions necessary to achieve forest regeneration objectives. Needs for such data are increasing with current emphasis on ecosystem management, escalating silvicultural treatment costs, evolving computer-based decision support tools, and demands for greater accountability. Deficiencies associated with field survey methods of data acquisition (e.g. high costs, subjectivity, and low spatial and temporal coverage) frequently limit decision-making effectiveness. The potential for remotely sensed data to supplement field-collected forest vegetation management data was evaluated in a problem analysis consisting of a comprehensive literature review and consultation with remote sensing and vegetation management experts at a national workshop. Among currently available sensors, aerial photographs appear to offer the most suitable combination of characteristics, including high spatial resolution, stereo coverage, a range of image scales, a variety of film, lens, and camera options, capability for geometric correction, versatility, and moderate cost. A flexible strategy that employs a sequence of 1:10,000-, 1:5,000-, and 1:500-scale aerial photographs is proposed to: 1) accurately map cutover areas, 2) facilitate location-specific prescriptions for silvicultural treatments, sampling, buffer zones, wildlife areas, etc., and 3) monitor and document conditions and activities at specific points during the regeneration period. Surveys that require very detailed information on smaller plants (<0.5-m tall) and/or individual or rare plant species are not likely to be supported by current remote sensing technologies. Recommended areas for research include: 1) digital frame cameras, or other cost-effective digital imagers, as replacements for conventional cameras, 2) computer-based classification and interpretation algorithms for digital image data, 3) relationships between image measures and physical measures, such as leaf-area index and biomass, 4) imaging standards, 5) airborne video, laser altimeters, and radar as complementary sensors, and 6) remote sensing applications in partial cutting systems. Key words: forest vegetation management, regeneration, remote sensing, aerial photography

29. van den Bergh, Jarrett, Ved Chirayath, Alan Li, Juan L. Torres-Pérez, and Michal Segal-Rozenhaimer. "NeMO-Net – Gamifying 3D Labeling of Multi-Modal Reference Datasets to Support Automated Marine Habitat Mapping." Frontiers in Marine Science 8 (April 21, 2021). http://dx.doi.org/10.3389/fmars.2021.645408.

Abstract:
NASA NeMO-Net, The Neural Multimodal Observation and Training Network for global coral reef assessment, is a convolutional neural network (CNN) that generates benthic habitat maps of coral reefs and other shallow marine ecosystems. To segment and classify imagery accurately, CNNs require curated training datasets of considerable volume and accuracy. Here, we present a citizen science approach to create these training datasets through a novel 3D classification game for mobile and desktop devices. Leveraging citizen science, the NeMO-Net video game generates high-resolution 3D benthic habitat labels at the subcentimeter to meter scales. The video game trains users to accurately identify benthic categories and semantically segment 3D scenes captured using NASA airborne fluid lensing, the first remote sensing technology capable of mitigating ocean wave distortions, as well as in situ 3D photogrammetry and 2D satellite remote sensing. An active learning framework is used in the game to allow users to rate and edit other user classifications, dynamically improving segmentation accuracy. Refined and aggregated data labels from the game are used to train NeMO-Net’s supercomputer-based CNN to autonomously map shallow marine systems and augment satellite habitat mapping accuracy in these regions. We share the NeMO-Net game approach to user training and retention, outline the 3D labeling technique developed to accurately label complex coral reef imagery, and present preliminary results from over 70,000 user classifications. To overcome the inherent variability of citizen science, we analyze criteria and metrics for evaluating and filtering user data. Finally, we examine how future citizen science and machine learning approaches might benefit from label training in 3D space using an active learning framework. Within 7 months of launch, NeMO-Net has reached over 300 million people globally and directly engaged communities in coral reef mapping and conservation through ongoing scientific field campaigns, uninhibited by geography, language, or physical ability. As more user data are fed into NeMO-Net’s CNN, it will produce the first shallow-marine habitat mapping products trained on 3D subcm-scale label data and merged with m-scale satellite data that could be applied globally when data sets are available.