Academic literature on the topic 'Unmanned Aerial System Imagery Technology advances'

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Unmanned Aerial System Imagery Technology advances.'


Journal articles on the topic "Unmanned Aerial System Imagery Technology advances"

1

Marcaccio, J. V., C. E. Markle, and P. Chow-Fraser. "Unmanned Aerial Vehicles Produce High-Resolution, Seasonally-Relevant Imagery for Classifying Wetland Vegetation." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1/W4 (August 26, 2015): 249–56. http://dx.doi.org/10.5194/isprsarchives-xl-1-w4-249-2015.

Abstract:
With recent advances in technology, personal aerial imagery acquired with unmanned aerial vehicles (UAVs) has transformed the way ecologists can map seasonal changes in wetland habitat. Here, we use a multi-rotor (consumer quad-copter, the DJI Phantom 2 Vision+) UAV to acquire a high-resolution (< 8 cm) composite photo of a coastal wetland in summer 2014. Using validation data collected in the field, we determine if a UAV image and a SWOOP (Southwestern Ontario Orthoimagery Project) image (collected in spring 2010) differ in their classification of dominant vegetation type and percent cover of three plant classes: submerged aquatic vegetation, floating aquatic vegetation, and emergent vegetation. The UAV imagery was more accurate than available SWOOP imagery for mapping percent cover of the submergent and floating vegetation categories, but both were able to accurately determine the dominant vegetation type and percent cover of emergent vegetation. Our results underscore the value and potential for affordable UAVs (complete quad-copter system < $3,000 CAD) to revolutionize the way ecologists obtain imagery and conduct field research. In Canada, new UAV regulations make this an easy and affordable way to obtain multiple high-resolution images of small (< 1.0 km²) wetlands, or portions of larger wetlands, throughout a year.
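The accuracy comparison in this abstract amounts to checking image-derived class labels against field-validation plots. A minimal sketch of such an agreement check, using entirely hypothetical labels rather than the study's data:

```python
from collections import Counter

def class_agreement(reference, predicted):
    """Overall accuracy plus a (reference, predicted) cross-tabulation."""
    hits = sum(r == p for r, p in zip(reference, predicted))
    return hits / len(reference), Counter(zip(reference, predicted))

# Hypothetical plot labels for three cover classes:
# S = submerged, F = floating, E = emergent aquatic vegetation
field = ["S", "S", "F", "E", "E", "F", "S", "E"]
uav   = ["S", "S", "F", "E", "E", "F", "F", "E"]
acc, table = class_agreement(field, uav)
```

Here `acc` is the fraction of plots where the UAV class matches the field data, and off-diagonal entries of `table` show which classes are confused.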
2

Zhao, Biquan, Jiating Li, P. Stephen Baenziger, Vikas Belamkar, Yufeng Ge, Jian Zhang, and Yeyin Shi. "Automatic Wheat Lodging Detection and Mapping in Aerial Imagery to Support High-Throughput Phenotyping and In-Season Crop Management." Agronomy 10, no. 11 (November 12, 2020): 1762. http://dx.doi.org/10.3390/agronomy10111762.

Abstract:
The latest advances in unmanned aerial vehicle (UAV) technology and convolutional neural networks (CNNs) allow us to detect crop lodging in a more precise and accurate way. However, the performance and generalization of a model capable of detecting lodging when the plants may show different spectral and morphological signatures have not been investigated much. This study investigated and compared the performance of models trained using aerial imagery collected at two growth stages of winter wheat with different canopy phenotypes. Specifically, three CNN-based models were trained with aerial imagery collected at the early grain-filling stage only, at physiological maturity only, and at both stages. Results show that the multi-stage model trained with images from both growth stages outperformed the models trained with images from individual growth stages on all testing data. The mean accuracy of the multi-stage model was 89.23% for both growth stages, while the means of the other two models were 52.32% and 84.90%, respectively. This study demonstrates the importance of diversity of training data in big data analytics, and the feasibility of developing a universal decision support system for wheat lodging detection and mapping across multiple growth stages with high-resolution remote sensing imagery.
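The finding that pooled multi-stage training data generalizes better can be illustrated with a toy classifier; the 1-nearest-neighbour model and the "greenness" feature values below are hypothetical stand-ins for the CNNs and imagery used in the study:

```python
def knn1(train, x):
    """1-nearest-neighbour label for a scalar feature."""
    return min(train, key=lambda fx: abs(fx[0] - x))[1]

def accuracy(train, test):
    return sum(knn1(train, x) == y for x, y in test) / len(test)

# Hypothetical 'greenness' features: label 1 = lodged, 0 = standing.
early   = [(0.20, 1), (0.25, 1), (0.80, 0), (0.85, 0)]  # grain filling
mature  = [(0.55, 1), (0.60, 1), (0.90, 0), (0.95, 0)]  # maturity
test_mt = [(0.58, 1), (0.92, 0)]                        # maturity test set

acc_single = accuracy(early, test_mt)           # trained on one stage only
acc_multi  = accuracy(early + mature, test_mt)  # pooled multi-stage training
```

The single-stage model fails on mature-stage plots because the signature of lodged wheat shifts between stages; pooling the stages restores accuracy.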
3

Chang, Anjin, Jinha Jung, Murilo M. Maeda, Juan A. Landivar, Henrique D. R. Carvalho, and Junho Yeom. "Measurement of Cotton Canopy Temperature Using Radiometric Thermal Sensor Mounted on the Unmanned Aerial Vehicle (UAV)." Journal of Sensors 2020 (August 19, 2020): 1–7. http://dx.doi.org/10.1155/2020/8899325.

Abstract:
Canopy temperature is an important variable directly linked to a plant’s water status. Recent advances in unmanned aerial vehicle (UAV) and sensor technology provide a great opportunity to obtain high-quality imagery for crop monitoring and high-throughput phenotyping (HTP) applications. In this study, a UAV-based thermal system was developed to directly measure canopy temperature, skipping the traditional radiometric calibration process, which is time-consuming and complicates data processing. Raw thermal imagery collected over a cotton field was converted to surface temperature using the Software Development Kit (SDK) provided by the sensor company. A canopy temperature map was generated using Structure from Motion (SfM), and a Thermal Stress Index (TSI) was calculated for the test site. UAV temperature measurements were compared to ground measurements acquired by net radiometers and thermocouples. Temperature differences between UAV and ground measurements were less than 5%, and UAV measurements proved to be more stable. The proposed UAV system was successful in showing temperature differences between cotton genotypes. In conclusion, the system described in this study could possibly be used to monitor crop water status in a field setting, which should prove helpful for precision agriculture and crop research.
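The exact TSI formulation is not given in the abstract; a common way to normalise canopy temperature into a stress index is the CWSI-style scaling between wet and dry reference temperatures, sketched below with hypothetical values:

```python
import numpy as np

def stress_index(canopy_t, t_wet, t_dry):
    """CWSI-style normalisation of canopy temperature to [0, 1]."""
    idx = (canopy_t - t_wet) / (t_dry - t_wet)
    return np.clip(idx, 0.0, 1.0)

# Hypothetical canopy temperatures (deg C) from a calibrated thermal mosaic
canopy = np.array([[28.0, 31.0],
                   [34.0, 26.0]])
tsi = stress_index(canopy, t_wet=26.0, t_dry=36.0)
```

Under this assumed formulation, 0 indicates a fully transpiring (unstressed) canopy and 1 a non-transpiring (stressed) one.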
4

Chávez, José Luis, Alfonso F. Torres-Rua, Wayne E. Woldt, Huihui Zhang, Christopher C. Robertson, Gary W. Marek, Dong Wang, Derek M. Heeren, Saleh Taghvaeian, and Christopher M. U. Neale. "A Decade of Unmanned Aerial Systems in Irrigated Agriculture in the Western U.S." Applied Engineering in Agriculture 36, no. 4 (2020): 423–36. http://dx.doi.org/10.13031/aea.13941.

Abstract:
Highlights: Unmanned aerial systems (UAS) are able to provide data for precision irrigation management. Improvements are needed regarding UAS platforms, sensors, processing software, and regulations. Integration of multi-scale imagery into scientific irrigation scheduling tools is needed for technology adoption. Abstract: Several research institutes, laboratories, academic programs, and service companies around the United States have been developing programs to utilize small unmanned aerial systems (sUAS) as an instrument to improve the efficiency of in-field water and agronomic management. This article describes a decade of research and development efforts focused on UAS technologies and methodologies developed for irrigation management, including the evolution of aircraft and sensors in contrast to data from satellites. Federal Aviation Administration (FAA) regulations for UAS operation in agriculture have been synthesized, along with proposed modifications to enhance UAS contributions to irrigated agriculture. Although it is feasible to use sUAS technology to produce maps of actual crop coefficients, actual crop evapotranspiration, and soil water deficits for irrigation management, the technology and regulations need to evolve further to facilitate successful wide adoption and application. Improvements and standards are needed in terms of camera spectral band ranges, radiometric resolutions and associated calibrations, fuel/power technology for longer missions, better imagery-processing software, and easier FAA approval of higher-altitude flight missions, among other issues.
Furthermore, sUAS technology would play a larger role in irrigated agriculture if the integration of multi-scale data (sUAS, ground-based or proximal, satellite) and soil water sensors were addressed, including the need for advances in processing large amounts of data from multiple and different sources, and integration into scientific irrigation scheduling (SIS) systems for convenience of decision making. Desirable technological innovations and features of the next generation of UAS platforms, sensors, software, and methods for irrigated agriculture are discussed. Keywords: Agricultural water management, Irrigation prescription mapping, Irrigation scheduling, Precision irrigation, Remote sensing, Sensors, Spatial crop evapotranspiration, Unmanned aerial systems.
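The scientific irrigation scheduling (SIS) loop the authors describe can be sketched as a daily root-zone water balance in the style of FAO-56; the reference ET, crop coefficient, and depletion threshold below are hypothetical:

```python
def irrigation_schedule(et0, kc, rain, mad_mm):
    """Track root-zone depletion; flag irrigation once depletion exceeds MAD."""
    depletion, events = 0.0, []
    for day, (et, k, p) in enumerate(zip(et0, kc, rain)):
        depletion = max(depletion + k * et - p, 0.0)   # ETc = Kc * ET0
        if depletion > mad_mm:       # management-allowed depletion reached
            events.append(day)
            depletion = 0.0          # assume refill to field capacity
    return events

# Hypothetical daily reference ET (mm), crop coefficients, and rainfall (mm)
days = irrigation_schedule(
    et0=[6.0] * 6, kc=[1.1] * 6, rain=[0, 0, 5, 0, 0, 0], mad_mm=15.0)
```

In the integrated system the article calls for, sUAS-derived maps of actual crop coefficients would feed the `kc` input per management zone.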
5

Jin, Xiuliang, Zhenhai Li, and Clement Atzberger. "Editorial for the Special Issue “Estimation of Crop Phenotyping Traits using Unmanned Ground Vehicle and Unmanned Aerial Vehicle Imagery”." Remote Sensing 12, no. 6 (March 13, 2020): 940. http://dx.doi.org/10.3390/rs12060940.

Abstract:
High-throughput crop phenotyping is harnessing the potential of genomic resources for the genetic improvement of crop production under changing climate conditions. As global food security is not yet assured, crop phenotyping has received increased attention during the past decade. This special issue (SI) collects 30 papers reporting research on the estimation of crop phenotyping traits using unmanned ground vehicle (UGV) and unmanned aerial vehicle (UAV) imagery. Such platforms were previously not widely available. The special issue includes papers presenting recent advances in the field, with 22 UAV-based papers and 12 UGV-based articles. The special issue covers 16 RGB sensor papers, 11 papers on multi-spectral imagery, and a further 4 papers on hyperspectral and 3D data acquisition systems. A total of 13 plant phenotyping traits, including morphological, structural, and biochemical traits, are covered. Twenty different data processing and machine learning methods are presented. In this way, the special issue provides a good overview of potential applications of the platforms and sensors for providing crop phenotyping traits in a timely, cost-efficient, and objective manner. With the fast development of sensor technology and image processing algorithms, we expect that the estimation of crop phenotyping traits supporting crop breeding scientists will gain even more attention in the future.
6

Neumann, John L., and Paula J. Durlach. "Human Factors and Trainability of Piloting a Simulated Micro Unmanned Aerial Vehicle." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 23 (September 2005): 2055–59. http://dx.doi.org/10.1177/154193120504902312.

Abstract:
In 2004, the U.S. Army Research Institute's (ARI) Simulator Systems Research Unit began studies involving the training requirements for operators of a micro-unmanned aerial vehicle (MAV). Our research involved the use of a touch-screen operator control interface developed for the DARPA MAV Advanced Technology Demonstration. This control system allowed operators to plan and run autonomous flight missions or to tele-operate a simulated MAV around a static synthetic environment. An initial study focused primarily on the usability of the system. Extensive heuristic evaluations were conducted by seven volunteers with backgrounds in human factors and military training systems. Each evaluator completed a self-paced training session including exercises that tested their ability to perform various control functions. Lack of immediate feedback from touch-screen inputs and missing or obscure status information formed the basis of several of the usability issues. Manually piloting the MAV presented the most difficulty to operators. As such, a second study was conducted that focused specifically on manual control tasks. In this study, participants were trained on manual control of the MAV, and then completed four increasingly difficult missions, requiring them to pilot the vehicle through the synthetic environment. This experiment was designed to compare the effect of supplemental sensor imagery on mission performance. During Study 1, operators could choose to view a sensor image taken from a fixed camera pointed 15 degrees below horizontal or straight down (90 degrees), but only one view was available at a time. During Study 2, operators were provided with three sensor views simultaneously. The 15-degree view was presented in a primary sensor window, and two additional views were displayed in smaller windows below it. The camera angle of one of these supplemental views was the manipulated independent variable — 30, 60, or 90 degrees from horizontal. 
The remaining window always contained an overhead satellite view (downward view from 5000 feet above the MAV). Data were collected on time to complete each mission, the number of physical interactions each user made with the interface, SME ratings, workload, and usability. Results indicated that mission requirements had a greater effect on performance and workload ratings than the camera angle of the supplemental view, though the camera angle of the supplemental view did affect mission time required to capture images of designated target buildings. Session averages of workload, usability, mission completion time, and SME rating were significantly inter-correlated. Higher SME ratings were associated with lower participant ratings of workload, shorter mission completion times, and higher usability ratings.
7

Sultonov, Furkat, Jun-Hyun Park, Sangseok Yun, Dong-Woo Lim, and Jae-Mo Kang. "Mixer U-Net: An Improved Automatic Road Extraction from UAV Imagery." Applied Sciences 12, no. 4 (February 13, 2022): 1953. http://dx.doi.org/10.3390/app12041953.

Abstract:
Automatic road extraction from unmanned aerial vehicle (UAV) imagery has been one of the major research topics in the area of remote sensing analysis due to its importance in a wide range of applications such as urban planning, road monitoring, intelligent transportation systems, and automatic road navigation. Thanks to recent advances in Deep Learning (DL), the tedious manual segmentation of roads can be automated. However, the majority of these models are computationally heavy and, thus, are not suitable for UAV remote-sensing tasks with limited resources. To alleviate this bottleneck, we propose two lightweight models based on depthwise separable convolutions and a ConvMixer inception block. Both models take advantage of the computational efficiency of depthwise separable convolutions and the multi-scale processing of the inception module, and combine them in the encoder–decoder architecture of U-Net. Specifically, we replace the standard convolution layers used in U-Net with ConvMixer layers. Furthermore, in order to learn images at different scales, we apply the ConvMixer layer within the Inception module. Finally, we incorporate pathway networks along the skip connections to minimize the semantic gap between encoder and decoder. In order to validate the performance and effectiveness of the models, we adopt the Massachusetts Roads dataset. One incarnation of our models is able to beat U-Net’s performance with 10× fewer parameters, and DeepLabV3’s performance with 12× fewer parameters, in terms of the mean intersection over union (mIoU) metric. For further validation, we have compared our models against four baselines in total and used additional metrics such as precision (P), recall (R), and F1 score.
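The parameter savings from depthwise separable convolutions, which both proposed models rely on, follow from a simple count: a standard k x k convolution versus a depthwise pass plus a 1 x 1 pointwise pass.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)
sep = depthwise_separable_params(3, 64, 128)
ratio = std / sep
```

For a 3 x 3 layer with 64 input and 128 output channels, the separable form needs roughly 8x fewer weights, which is where lightweight encoder–decoder variants recover most of their savings.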
8

Niu, Yaxiao, Liyuan Zhang, Huihui Zhang, Wenting Han, and Xingshuo Peng. "Estimating Above-Ground Biomass of Maize Using Features Derived from UAV-Based RGB Imagery." Remote Sensing 11, no. 11 (May 28, 2019): 1261. http://dx.doi.org/10.3390/rs11111261.

Abstract:
The rapid, accurate, and economical estimation of crop above-ground biomass at the farm scale is crucial for precision agricultural management. The unmanned aerial vehicle (UAV) remote-sensing system has great application potential given its ability to obtain remote-sensing imagery with high temporal-spatial resolution. To verify the application potential of consumer-grade UAV RGB imagery in estimating maize above-ground biomass, vegetation indices and plant height derived from UAV RGB imagery were adopted. To obtain a more accurate observation, plant height was directly derived from UAV RGB point clouds. To identify the optimal estimation method, the estimation performances of models based on vegetation indices alone, on plant height alone, and on both vegetation indices and plant height were compared. The results showed that plant height directly derived from UAV RGB point clouds had a high correlation with ground-truth data, with an R2 value of 0.90 and an RMSE value of 0.12 m. The above-ground biomass exponential regression models based on plant height alone had higher correlations for both fresh and dry above-ground biomass, with R2 values of 0.77 and 0.76, respectively, compared to the linear regression model (both R2 values were 0.59). The vegetation indices derived from UAV RGB imagery had great potential to estimate maize above-ground biomass, with R2 values ranging from 0.63 to 0.73. When estimating the above-ground biomass of maize by using multivariable linear regression based on vegetation indices, a higher correlation was obtained, with an R2 value of 0.82. There was no significant improvement in the estimation performance when plant height derived from UAV RGB imagery was added into the multivariable linear regression model based on vegetation indices.
When estimating crop above-ground biomass based on a UAV RGB remote-sensing system alone, looking for optimized vegetation indices and establishing estimation models with high performance based on advanced algorithms (e.g., machine learning technology) may be a better way.
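The exponential biomass–height model the study favours, B = a·exp(b·h), can be fitted by ordinary least squares after a log transform; the plot heights and biomass values below are hypothetical, not the paper's data:

```python
import math

def fit_exponential(heights, biomass):
    """Least-squares fit of B = a * exp(b * h) via log-linearisation."""
    n = len(heights)
    ys = [math.log(b) for b in biomass]       # ln B = ln a + b * h
    hbar = sum(heights) / n
    ybar = sum(ys) / n
    b = sum((h - hbar) * (y - ybar) for h, y in zip(heights, ys)) / \
        sum((h - hbar) ** 2 for h in heights)
    a = math.exp(ybar - b * hbar)
    return a, b

# Hypothetical plot data: plant height (m) vs fresh biomass (kg/m^2)
h = [0.5, 1.0, 1.5, 2.0]
B = [0.30, 0.55, 1.00, 1.80]
a, b = fit_exponential(h, B)
```

The fitted curve can then predict biomass for new plant-height maps, e.g. `a * math.exp(b * 1.0)` for a 1 m canopy.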
9

Ulhaq, Anwaar, and Douglas Pinto Sampaio Gomes. "Editorial for the Special Issue “Advances in Object and Activity Detection in Remote Sensing Imagery”." Remote Sensing 14, no. 8 (April 12, 2022): 1844. http://dx.doi.org/10.3390/rs14081844.

Abstract:
Advances in data collection and accessibility, such as unmanned aerial vehicle (UAV) technology, the availability of satellite imagery, and the increasing performance of deep learning models, have had significant impacts on solving various remote sensing problems and proposing new applications ranging from vegetation and wildlife monitoring to crowd monitoring [...]
10

Ułanowicz, Leszek, and Ryszard Sabak. "Unmanned aerial vehicles supporting imagery intelligence using the structured light technology." Archives of Transport 58, no. 2 (June 30, 2021): 35–45. http://dx.doi.org/10.5604/01.3001.0014.8796.

Abstract:
One of the possible tasks for unmanned aerial vehicles (UAVs) is field capturing of object images. The field capturing of object images (scenes) is possible owing to the UAV being equipped with photographic cameras, TV cameras, infrared cameras, or synthetic aperture radars (SAR). The result of the recognition is a metric mapping of space, i.e. flat 2D images. In order to increase the quality of image recognition, it is necessary to search for and develop stereoscopic visualization with the possibility of its mobile use. The pioneering approach presented in the research paper is using a UAV with an imagery intelligence system based on structured light technology for air reconnaissance of objects over a selected area or in a given direction in the field. The outcome of imagery intelligence is three-dimensional (3D imaging) information on the geometry of an observed scene. The visualization with a stereoscopic interface proposed in the work allows for a natural perception of the depth of the scene and mutual spatial relationships, as well as seeing which objects are closer and which are further. The essence of the article is to present the application of three-dimensional vision measurement technology on UAVs. The paper presents an analysis of the possibilities of using UAVs for image recognition and a method of image recognition based on structured light technology using the projection of Gray code fringes. The designed image recognition system based on the structured light technology is described. It also discusses the task modules forming a measuring head, i.e., the projection, detection, and calculation modules, and the exchange of control or measurement data between imaging system components. It presents the results of tests on the possibility of rapidly acquiring images using a UAV.
The test results and the analyses indicate that using a UAV with an imaging technology based on structured light can contribute to improving the abilities to detect, identify, locate, and monitor objects at close range, within a selected direction outdoors or indoors.
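Gray code fringe projection works because adjacent projector columns differ in exactly one bit, so a single mis-read bit displaces a pixel by at most one column. A minimal sketch of the encoding and decoding (an 8-column projector is assumed for brevity):

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent values differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code read back from a camera pixel."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Stripe patterns for an 8-column projector (3 bit-planes, MSB first):
# pattern[b][c] is the on/off state of column c in bit-plane b.
patterns = [[(gray_encode(c) >> b) & 1 for c in range(8)] for b in (2, 1, 0)]

# A pixel that observed bits 0, 1, 1 across the three projected patterns
# sits in projector column:
col = gray_decode(0b011)
```

Triangulating the decoded column against the camera ray then yields the 3D point, which is how the structured light head recovers scene geometry.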

Dissertations / Theses on the topic "Unmanned Aerial System Imagery Technology advances"

1

Zvara, Ondrej. "UAV - based imagery processing using structure from motion and remote sensing technology." Master's thesis, 2015. http://hdl.handle.net/10362/14571.

Abstract:
With the recent advances in technology and the miniaturization of devices such as GPS or IMU, unmanned aerial vehicles became a feasible platform for remote sensing applications. The use of UAVs compared to conventional aerial platforms provides a set of advantages, such as higher spatial resolution of the derived products. UAV-based imagery obtained with consumer-grade cameras introduces a set of problems that have to be solved, e.g. rotational or angular differences, or unknown or insufficiently precise IO and EO camera parameters. In this work, UAV-based imagery of RGB and CIR type was processed using two different workflows based on the PhotoScan and VisualSfM software solutions, resulting in DSM and orthophoto products. The influence of feature detection and matching parameters on result quality and processing time was examined, and the optimal parameter setup is presented. Products of both workflows were compared in terms of quality and spatial accuracy, along with their processing times. Finally, the obtained products were used to demonstrate vegetation classification. The contribution of IHS transformations was examined with respect to the classification accuracy.
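The "higher spatial resolution" advantage of UAV platforms comes down to ground sampling distance (GSD); a standard back-of-the-envelope formula for a nadir-looking frame camera, with hypothetical camera parameters:

```python
def ground_sampling_distance(sensor_width_mm, focal_mm, altitude_m, image_width_px):
    """GSD in cm/pixel: ground footprint of the frame divided by pixel count."""
    return (sensor_width_mm * altitude_m * 100) / (focal_mm * image_width_px)

# Hypothetical consumer camera: 13.2 mm sensor, 8.8 mm lens, 4000 px wide,
# flown at 100 m above ground
gsd = ground_sampling_distance(13.2, 8.8, altitude_m=100, image_width_px=4000)
```

Halving the flight altitude halves the GSD, which is why low-flying UAVs out-resolve conventional aerial or satellite products.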

Books on the topic "Unmanned Aerial System Imagery Technology advances"

1

Fahroo, Fariba. Recent Advances in Research on Unmanned Aerial Vehicles. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

2

Wang, Le Yi, Fariba Fahroo, and George Yin. Recent Advances in Research on Unmanned Aerial Vehicles. Springer, 2013.


Book chapters on the topic "Unmanned Aerial System Imagery Technology advances"

1

Emmanuel, Uche. "Review of Agricultural Unmanned Aerial Vehicles (UAV) Obstacle Avoidance System." In Aeronautics - New Advances. IntechOpen, 2022. http://dx.doi.org/10.5772/intechopen.103037.

Abstract:
Unmanned aerial vehicles (UAVs) are being used for commercial, scientific, agricultural, and infrastructural enhancement to alleviate maladies. The objective of this chapter is to review existing capabilities and ongoing studies to overcome difficulties associated with the deployment of agricultural unmanned aerial vehicles in obstacle-rich farms for pesticide and fertilizer application. A review of the literature makes it apparent that the potential for real-time and near-real-time operation exists, but the development of technology for quality imagery and rapid processing leading to real-time response is still needed. Infrared, time-of-flight, and millimetre-wavelength radar sensors for detecting farm and flight-environment obstacles appear promising. The autonomous mental development algorithm and simultaneous localization and mapping technology are, however, ahead of others in achieving autonomous identification of obstacles and real-time obstacle avoidance. They are, therefore, found fit for further study and development for deployment on agricultural unmanned aerial vehicles in obstacle-rich farms.
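Of the sensors reviewed, time-of-flight devices have the simplest ranging principle: distance is half the echo's round-trip path at the speed of light. A minimal sketch (the 66.7 ns echo time is an illustrative value):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s):
    """Range from a time-of-flight sensor: half the round-trip path length."""
    return C * round_trip_s / 2.0

# A ~66.7 ns echo corresponds to an obstacle roughly 10 m ahead
d = tof_distance(66.7e-9)
```

The nanosecond timescales involved are why ToF sensors need fast timing electronics but can still report ranges quickly enough for real-time avoidance.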
2

Moutinho, Alexandra, Maria João Sousa, Miguel Almeida, Miguel Bergano, Diogo Henriques, Dário Silva, Domingos Barbosa, Luis Nero, and António Navarro. "Eye in the Sky - Using High-Altitude Balloons for Decision Support in Wildfire Operations." In Advances in Forest Fire Research 2022, 181–86. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_29.

Abstract:
The large fires that have occurred in recent years around the world have shown that the management of these events becomes much more difficult when the extension of the burnt area or the location of the fire fronts, among other information fundamental for those managing operations in theatre, is not known. In addition, the large involvement of firefighting resources often challenges the operability of the communications system, sometimes making it dysfunctional or of limited operability. This was a reality seen in several large fires, from which we highlight the fire of Pedrógão Grande that occurred in Portugal in June 2017. Aware of the need to improve operational technology for fire monitoring, the Eye in the Sky project has been developed. Our approach within this project is to design and develop an easy-to-deploy kit that can be launched from anywhere and fulfil these needs, providing quality real-time aerial imagery of wildfires and guaranteeing emergency communications. The proposed solution consists of a high-altitude balloon that will carry a flying-wing unmanned aerial vehicle with two different high-value types of payloads that meet these two specific functions of imagery collection in the visible and infrared ranges and communications repeater. The authors like to refer to the Eye in the Sky solution as a satellite dedicated to a specific fire. Naturally, the development of such a solution raises several challenges, particularly in terms of its operational functioning, the optimised use of the balloon/glider pair with the associated payload, the capture and automatic detection of fire images and their georeferencing, and the communications system between the payload, the ground station, and the users in the command centre. The several challenges that have arisen in the development of this solution, as well as the current developments, are presented here.
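The appeal of a high-altitude platform for a communications repeater follows from the line-of-sight horizon, which grows with the square root of altitude; a rough sketch with illustrative altitudes (terrain and atmospheric refraction ignored):

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius, m

def line_of_sight_radius_km(altitude_m):
    """Geometric horizon distance for a platform at the given altitude."""
    return math.sqrt(2 * EARTH_R * altitude_m) / 1000.0

# A balloon payload at 20 km altitude vs a conventional UAV at 120 m
r_balloon = line_of_sight_radius_km(20_000)
r_uav = line_of_sight_radius_km(120)
```

The order-of-magnitude gap in coverage radius is what makes the "satellite dedicated to a specific fire" analogy apt for a stratospheric balloon.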
3

Raimi, Lukman, and Ramotu Sule. "Plausibility of Precision Agriculture as a COVID-19-Compliant Digital Technology for Food Security and Agricultural Productivity in Nigeria." In Handbook of Research on Strategies and Interventions to Mitigate COVID-19 Impact on SMEs, 383–402. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-7436-2.ch019.

Abstract:
The chapter discusses the plausibility of precision agriculture (PA) as a COVID-19-compliant digital technology to tackle the challenges of food insecurity and low agricultural productivity in Nigeria. In all the countries where PA has been adopted, it includes a wide array of disruptive technologies such as the global positioning system (GPS), satellite imagery, monitoring control systems (MCS) using intelligent sensors, artificial plant growing techniques (APGT), soilless cultivation systems (SCS), drones/unmanned aerial vehicles (UAVs), artificial intelligence (AI), the internet of things (IoT), robotics, variable rate technology, GPS-based soil sampling, and telematics. Adopting PA in commercial agriculture will improve food production, increase food exports and gross domestic product (GDP), and strengthen the actualization of sustainable food security. Ultimately, the chapter concludes with policy suggestions for mitigating the rising trends in food insecurity, food inflation, and low agricultural productivity.
4

Ye, X. W., T. Jin, and P. Y. Chen. "Application of Computer Vision Technology to Structural Health Monitoring of Engineering Structures." In Optoelectronics in Machine Vision-Based Theories and Applications, 256–68. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-5751-7.ch009.

Abstract:
Computer vision technology has seen great advances and been applied in a variety of industrial fields. It has some unique advantages over traditional technologies, such as high speed, high accuracy, low noise, and resistance to electromagnetic interference. In the last decade, computer vision technology has been widely employed in the field of structural health monitoring (SHM). Much specific hardware and many algorithms have been developed to meet different kinds of monitoring demands. This chapter presents three application scenarios of computer vision technology for health monitoring of engineering structures: bridge inspection and evaluation with an unmanned aerial vehicle (UAV), recognition and surveillance of foreign-object intrusion for railway systems, and identification and tracking of concrete cracking. The principles and procedures of the three application scenarios are addressed, followed by experimental studies, and possibilities and ideas for the application of computer vision technology to other monitoring items are also discussed.
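For the concrete-cracking scenario, a crude first step is intensity thresholding, since cracks image darker than the surrounding surface; this sketch with a hypothetical 4 x 4 patch is an illustration, not the chapter's algorithm:

```python
import numpy as np

def crack_ratio(gray, threshold):
    """Mask of dark pixels and their fraction, a crude proxy for cracking."""
    mask = gray < threshold
    return mask, mask.mean()

# Hypothetical grayscale patch of a concrete surface (low values = dark crack)
patch = np.array([[200, 200,  30, 200],
                  [200,  25, 200, 200],
                  [ 20, 200, 200, 200],
                  [200, 200, 200, 200]])
mask, ratio = crack_ratio(patch, threshold=50)
```

Tracking `ratio` (or the mask's skeleton length and width) over repeated inspections is one way such systems quantify crack growth.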

Conference papers on the topic "Unmanned Aerial System Imagery Technology advances"

1

Sarath, Raj N. S., Jerrin Thadathil Varghese, and Feni Pandya. "Unmanned Aerial Vehicle for Human Tracking using Face Recognition System." In 2019 Advances in Science and Engineering Technology International Conferences (ASET). IEEE, 2019. http://dx.doi.org/10.1109/icaset.2019.8714440.

2

Legowo, Ari, Erwin Sulaeman, and Danial Rosli. "Review on System Identification for Quadrotor Unmanned Aerial Vehicle (UAV)." In 2019 Advances in Science and Engineering Technology International Conferences (ASET). IEEE, 2019. http://dx.doi.org/10.1109/icaset.2019.8714531.

3

Choi, Sung-suk, and Eung-kon Kim. "Design of construction stability test system using small unmanned aerial vehicle based on image processing." In 2014 16th International Conference on Advanced Communication Technology (ICACT). Global IT Research Institute (GIRI), 2014. http://dx.doi.org/10.1109/icact.2014.6779171.

4

Gajjar, Yogin. "Monitoring of Pipeline RoU Using Remote Sensing and GIS Techniques." In ASME 2017 India Oil and Gas Pipeline Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/iogpc2017-2428.

Abstract:
Safe pipeline transportation of energy resources is a major concern. Every natural gas pipeline operator's primary objective is to operate and maintain its pipeline network in such a way that it continuously provides uninterrupted service to customers without any accidents that could adversely impact the environment and the reputation of the organization. Various surveillance methods are used in natural gas pipelines as part of direct integrity assessment. Traditionally, surveillance is conducted by line walking, supplemented by vehicular patrols over the linear corridor. This process has various shortcomings in terms of efficacy, accuracy, cost, and safety, as it depends purely upon the inspector's ability to detect anomalies. It is in the interest of any operator to maintain the value of its pipelines and to protect them effectively against damage caused by third parties. As a result of global progress in high-resolution remote sensing and image processing technology, it is possible to use digital surveillance methods for monitoring of the pipeline Right of Use (RoU). Digital surveillance is done using remote sensing and Geographical Information System (GIS) techniques. Remote sensing based pipeline surveillance refers to the monitoring and detection of changes on the RoU and around pipeline networks. This paper elaborates on the development and implementation of a digital solution that uses images from satellites and unmanned aerial vehicles (UAVs) to detect instances of encroachments and third-party activities on the pipeline RoU. Such a solution provides the capability of running advanced analytics on captured images and enables automated detection of anomalies that may often go unnoticed during manual inspection.
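Change detection on the RoU reduces, in its simplest form, to differencing co-registered images from successive passes; the tiny arrays below are hypothetical, and real pipelines add radiometric normalisation and object-level analytics on top:

```python
import numpy as np

def change_mask(before, after, threshold):
    """Pixels whose brightness changed by more than `threshold` between passes."""
    return np.abs(after.astype(int) - before.astype(int)) > threshold

# Hypothetical co-registered 2 x 2 brightness patches from two survey passes
before = np.array([[100, 100],
                   [100, 100]])
after  = np.array([[102, 180],
                   [100,  98]])  # one pixel changed markedly (e.g. encroachment)
mask = change_mask(before, after, threshold=30)
```

Flagged regions would then be reviewed or classified to separate genuine encroachments from seasonal or illumination changes.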
APA, Harvard, Vancouver, ISO, and other styles
5

Shetty, Devdas, and Louis Manzione. "Unmanned Aerial Vehicles (UAV): Design Trends." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-64518.

Full text
Abstract:
This paper looks at trends in design procedures for Unmanned Aerial Vehicles (UAVs). Rapid advances in technology are enabling more and more capability to be placed on smaller airframes, which is spurring a large increase in the number of UAVs being deployed by the military. The military role of the UAV is growing at an unprecedented rate. A UAV is an aircraft with no pilot on board; it can be remotely controlled (e.g., flown by a pilot at a ground control station) or can fly autonomously based on pre-programmed flight plans or more complex dynamic automation systems. A variety of design configurations are in use. The primary driving parameter in all UAVs is the need to maximize available wing area and wing effectiveness while minimizing the required storage volume. The major factors in determining the relative merit of the different concepts are structural viability, mechanical complexity, and overall system survivability under G-forces. This paper examines some of the design methodologies and a hardware-in-the-loop simulation environment used to support and validate UAV hardware and software development.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Unmanned Aerial System Imagery Technology advances"

1

Yan, Yujie, and Jerome F. Hajjar. Automated Damage Assessment and Structural Modeling of Bridges with Visual Sensing Technology. Northeastern University, May 2021. http://dx.doi.org/10.17760/d20410114.

Full text
Abstract:
Recent advances in visual sensing technology have gained much attention in the field of bridge inspection and management. Coupled with advanced robotic systems, state-of-the-art visual sensors can be used to obtain accurate documentation of bridges without the need for any special equipment or traffic closure. The captured visual sensor data can be post-processed to gather meaningful information about the bridge structures and hence to support bridge inspection and management. However, state-of-the-practice data post-processing approaches require substantial manual operations, which can be time-consuming and expensive. The main objective of this study is to develop methods and algorithms to automate the post-processing of the visual sensor data towards the extraction of three main categories of information. 1) Object information, such as object identity, shapes, and spatial relationships: a novel heuristic-based method is proposed to automate the detection and recognition of the main structural elements of steel girder bridges in both terrestrial and unmanned aerial vehicle (UAV)-based laser scanning data. Domain knowledge of the geometric and topological constraints of the structural elements is modeled and utilized as heuristics to guide the search and to reject erroneous detection results. 2) Structural damage information, such as damage locations and quantities: to support the assessment of damage associated with small deformations, an advanced crack assessment method is proposed to enable automated detection and quantification of concrete cracks in critical structural elements based on UAV-based visual sensor data. For damage associated with large deformations, building on the surface-normal-based method proposed in Guldur et al. (2014), a new algorithm is developed to enhance the robustness of damage assessment for structural elements with curved surfaces. 3) Three-dimensional volumetric models: the object information extracted from the laser scanning data is exploited to create a complete geometric representation of each structural element. In addition, mesh generation algorithms are developed to automatically convert the geometric representations into conformal all-hexahedron finite element meshes, which can finally be assembled into a finite element model of the entire bridge. To validate the effectiveness of the developed methods and algorithms, several field data collections were conducted to obtain both visual sensor data and physical measurements from experimental specimens and in-service bridges. The data were collected using terrestrial laser scanners combined with images, as well as laser scanners and cameras mounted on unmanned aerial vehicles.
APA, Harvard, Vancouver, ISO, and other styles
