Journal articles on the topic 'DL. Archives'

Consult the top 50 journal articles for your research on the topic 'DL. Archives.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Buyukdemircioglu, M., S. Kocaman, and M. Kada. "DEEP LEARNING FOR 3D BUILDING RECONSTRUCTION: A REVIEW." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 359–66. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-359-2022.

Abstract:
3D building reconstruction using Earth Observation (EO) data (aerial and satellite imagery, point clouds, etc.) is an important and active research topic in different fields, such as photogrammetry, remote sensing, computer vision and Geographic Information Systems (GIS). Nowadays, 3D city models have become an essential part of 3D GIS environments, and they can be used in many applications and analyses in urban areas. Conventional 3D building reconstruction methods depend heavily on the data quality and source, and manual effort is still needed for generating the object models. Several tasks in photogrammetry and remote sensing, such as image segmentation, classification, and 3D reconstruction, have been revolutionized by deep learning (DL) methods. In this study, we provide a review of the state-of-the-art machine learning and in particular DL methods for 3D building reconstruction for the purpose of city modelling using EO data. This is the first review with a focus on object model generation based on DL methods and EO data. A brief overview of recent building reconstruction studies with DL is also given. We investigate different DL architectures, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), and combinations of conventional approaches with DL, and report their advantages and disadvantages. An outlook on future developments of 3D building modelling based on DL is also presented.
2

Palma, V. "TOWARDS DEEP LEARNING FOR ARCHITECTURE: A MONUMENT RECOGNITION MOBILE APP." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 551–56. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-551-2019.

Abstract:
In recent years, the diffusion of large image datasets and unprecedented computational power have boosted the development of a class of artificial intelligence (AI) algorithms referred to as deep learning (DL). Among DL methods, convolutional neural networks (CNNs) have proven particularly effective in computer vision, finding applications in many disciplines. This paper introduces a project aimed at studying CNN techniques in the field of architectural heritage, a research stream that is still to be developed. The first steps and results in the development of a mobile app to recognize monuments are discussed. While AI is just beginning to interact with the built environment through mobile devices, heritage technologies have long been producing and exploring digital models and spatial archives. The interaction between DL algorithms and state-of-the-art information modeling is addressed as an opportunity to both exploit heritage collections and optimize new object recognition techniques.
3

Comesaña Cebral, L. J., J. Martínez Sánchez, E. Rúa Fernández, and P. Arias Sánchez. "HEURISTIC GENERATION OF MULTISPECTRAL LABELED POINT CLOUD DATASETS FOR DEEP LEARNING MODELS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 571–76. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-571-2022.

Abstract:
Deep Learning (DL) models need sufficiently large datasets for training, especially those that deal with point clouds. Artificial generation of these datasets can complement real ones by improving the learning rate of DL architectures. Light Detection and Ranging (LiDAR) scanners can also be studied by comparing their performance with artificial point clouds. A methodology for simulating LiDAR-based artificial point clouds is presented in this work in order to obtain training datasets that are already labelled for DL models. In addition to the geometry design, a spectral simulation is also performed so that every point in each cloud has its three-dimensional coordinates (x, y, z), a label designating the category it belongs to (vegetation, traffic sign, road pavement, …) and an intensity estimate based on physical properties such as reflectance.
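As a rough illustration of the record structure described in this abstract (not the authors' simulator), the following Python sketch generates a tiny labelled cloud in which every point carries coordinates, a class label and a reflectance-derived intensity; the class set and reflectance values are invented for the example:

```python
import numpy as np

# Hypothetical illustration: each point = (x, y, z, label, intensity),
# with intensity derived from an assumed per-class reflectance.
rng = np.random.default_rng(42)
classes = {0: ("road_pavement", 0.20), 1: ("vegetation", 0.45), 2: ("traffic_sign", 0.80)}

def synth_points(n, label, reflectance):
    xy = rng.uniform(0.0, 10.0, size=(n, 2))            # planimetric coords (m)
    z = rng.normal(0.0 if label == 0 else 1.5, 0.3, n)  # crude height model
    intensity = reflectance * rng.normal(1.0, 0.05, n)  # reflectance + sensor noise
    return np.column_stack([xy, z, np.full(n, label), intensity])

cloud = np.vstack([synth_points(1000, k, refl) for k, (_, refl) in classes.items()])
print(cloud.shape)  # (3000, 5): x, y, z, label, intensity
```
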
4

Cao, Y., and M. Scaioni. "LABEL-EFFICIENT DEEP LEARNING-BASED SEMANTIC SEGMENTATION OF BUILDING POINT CLOUDS AT LOD3 LEVEL." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 449–56. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-449-2021.

Abstract:
In recent research, fully supervised Deep Learning (DL) techniques and large amounts of pointwise labels have been employed to train segmentation networks applied to buildings' point clouds. However, finely labelled building point clouds are hard to find, and manually annotating pointwise labels is time-consuming and expensive. Consequently, the application of fully supervised DL for semantic segmentation of building point clouds at the LoD3 level is severely limited. To address this issue, we propose a novel label-efficient DL network that obtains per-point semantic labels of LoD3 building point clouds with limited supervision. In general, it consists of two steps. The first step (Autoencoder, AE) is composed of a Dynamic Graph Convolutional Neural Network-based encoder and a folding-based decoder, designed to extract discriminative global and local features from input point clouds by reconstructing them without any labels. The second step is semantic segmentation. By supplying a small amount of task-specific supervision, a segmentation network is proposed for semantically segmenting the encoded features acquired from the pre-trained AE. Experimentally, we evaluate our approach on the ArCH dataset. Compared to fully supervised DL methods, we find that our model achieves state-of-the-art results on unseen scenes, with only 10% of the labelled training data required by fully supervised methods as input.
5

He, H., K. Gao, W. Tan, L. Wang, S. N. Fatholahi, N. Chen, M. A. Chapman, and J. Li. "IMPACT OF DEEP LEARNING-BASED SUPER-RESOLUTION ON BUILDING FOOTPRINT EXTRACTION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 31–37. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-31-2022.

Abstract:
Automated building footprint extraction from High Spatial Resolution (HSR) remote sensing images plays an important role in urban planning and management, and in hazard and disease control. However, HSR images are not always available in practice. In these cases, super-resolution methods, especially deep learning (DL)-based ones, can provide higher spatial resolution images from lower resolution inputs. DL-based super-resolution methods are widely used in a variety of remote sensing applications, yet few studies have focused on the impact of DL-based super-resolution on building footprint extraction. As such, we present an exploration of this topic. Specifically, we first super-resolve the Massachusetts Building Dataset using bicubic interpolation, a pre-trained Super-Resolution CNN (SRCNN), a pre-trained Residual Channel Attention Network (RCAN), and a pre-trained Residual Feature Aggregation Network (RFANet). Then, using the dataset at its original resolution as well as the four super-resolved versions, we employ the High-Resolution Network (HRNet) v2 to extract building footprints. Our experiments show that super-resolving either the training or the test dataset using the latest high-performance DL-based super-resolution method can improve the accuracy of building footprint extraction. Although SRCNN-based building footprint extraction gives the highest Overall Accuracy, Intersection over Union and F1 score, we suggest using the latest super-resolution methods to process images before building footprint extraction, due to the fixed scale ratio of the pre-trained SRCNN and its slow convergence in training.
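For context, the first stage of such a pipeline can be sketched as a simple upscaling step applied before footprint extraction. The sketch below uses bicubic interpolation, the non-DL baseline named in the abstract; a pretrained SR network (SRCNN, RCAN or RFANet) would replace this call in the stronger variants, and the file names are placeholders:

```python
from PIL import Image

# Minimal sketch of the pipeline's first stage: upscale a low-resolution
# tile before footprint extraction. Bicubic is the non-DL baseline; a
# pretrained super-resolution network would replace this resize call.
def upscale_tile(path_in: str, path_out: str, scale: int = 4) -> None:
    img = Image.open(path_in)
    w, h = img.size
    img.resize((w * scale, h * scale), resample=Image.BICUBIC).save(path_out)

upscale_tile("tile_lr.png", "tile_sr.png", scale=4)  # hypothetical file names
```
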
6

Nurunnabi, A., F. N. Teferle, D. F. Laefer, F. Remondino, I. R. Karas, and J. Li. "kCV-B: BOOTSTRAP WITH CROSS-VALIDATION FOR DEEP LEARNING MODEL DEVELOPMENT, ASSESSMENT AND SELECTION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-4/W3-2022 (December 2, 2022): 111–18. http://dx.doi.org/10.5194/isprs-archives-xlviii-4-w3-2022-111-2022.

Abstract:
This study investigates the inability of two popular data-splitting techniques, train/test split and k-fold cross-validation, to create training and validation data sets that achieve sufficient generality for supervised deep learning (DL) methods. This failure is mainly caused by their limited ability to create new data. The bootstrap, by contrast, is a computer-based statistical resampling method that has been used efficiently for estimating the distribution of a sample estimator and for assessing a model without knowledge of the population. This paper couples cross-validation and the bootstrap to combine their respective advantages as data generation strategies and to achieve better generalization of a DL model. This paper contributes by: (i) developing an algorithm for better selection of training and validation data sets, (ii) exploring the potential of the bootstrap for drawing statistical inference on the necessary performance metrics (e.g., mean square error), and (iii) introducing a method that can assess and improve the efficiency of a DL model. The proposed method is applied to semantic segmentation and is demonstrated via a DL-based classification algorithm, PointNet, on aerial laser scanning point cloud data.
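A minimal sketch of the coupling idea, not the paper's exact kCV-B algorithm: inside each cross-validation fold, bootstrap replicates of the training indices yield a distribution of the chosen metric (MSE here) rather than a single number. The dummy data and the fold-mean "model" are placeholders for a real DL fit:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.utils import resample

X, y = np.random.rand(200, 3), np.random.rand(200)    # placeholder data
mse_samples = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for b in range(20):                               # B bootstrap replicates per fold
        bs_idx = resample(train_idx, random_state=b)  # sample with replacement
        # a real DL model would be fitted on X[bs_idx], y[bs_idx] here;
        # the bootstrap-sample mean stands in as a trivial predictor
        pred = np.full(len(val_idx), y[bs_idx].mean())
        mse_samples.append(np.mean((y[val_idx] - pred) ** 2))
print(f"MSE: mean={np.mean(mse_samples):.4f}, std={np.std(mse_samples):.4f}")
```
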
7

Nurunnabi, A., F. N. Teferle, J. Li, R. C. Lindenbergh, and A. Hunegnaw. "AN EFFICIENT DEEP LEARNING APPROACH FOR GROUND POINT FILTERING IN AERIAL LASER SCANNING POINT CLOUDS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2021 (June 28, 2021): 31–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2021-31-2021.

Abstract:
Ground surface extraction is one of the classic tasks in airborne laser scanning (ALS) point cloud processing, used for three-dimensional (3D) city modelling, infrastructure health monitoring, and disaster management. Many methods have been developed over the last three decades. Recently, Deep Learning (DL) has become the dominant technique for 3D point cloud classification. DL methods used for classification can be categorized into end-to-end and non-end-to-end approaches. One of the main challenges of using supervised DL approaches is obtaining a sufficient amount of training data; the main advantage of a supervised non-end-to-end approach is that it requires less training data. This paper introduces a novel local feature-based, non-end-to-end DL algorithm that generates a binary classifier for ground point filtering. It studies feature relevance and investigates three models that are different combinations of features. The method is free from the limitations of point clouds' irregular data structure and varying data density, which pose the biggest challenge for convolutional neural networks. The new algorithm does not require transforming data into regular 3D voxel grids or any rasterization. The performance of the new method has been demonstrated on two ALS datasets covering urban environments. The method successfully labels ground and non-ground points in the presence of steep slopes and height discontinuities in the terrain. Experiments in this paper show that the algorithm achieves around 97% in both F1-score and model accuracy for ground point labelling.
8

Rehman, Amir, Muhammad Azhar Iqbal, Huanlai Xing, and Irfan Ahmed. "COVID-19 Detection Empowered with Machine Learning and Deep Learning Techniques: A Systematic Review." Applied Sciences 11, no. 8 (April 10, 2021): 3414. http://dx.doi.org/10.3390/app11083414.

Abstract:
COVID-19 has infected 223 countries and caused 2.8 million deaths worldwide (at the time of writing this article), and the death rate is increasing continuously. Early diagnosis of COVID patients is a critical challenge for medical practitioners, governments, organizations, and countries seeking to overcome the rapid spread of the deadly virus in any geographical area. In this situation, previous epidemic evidence on Machine Learning (ML) and Deep Learning (DL) techniques encouraged researchers to play a significant role in detecting COVID-19, and the rising scope of ML/DL methodologies in the medical domain also advocates for their significant role in COVID-19 detection. This systematic review presents the ML and DL techniques practiced in this era to predict, diagnose, classify, and detect the coronavirus. In this study, the data were retrieved from three prevalent full-text archives, i.e., ScienceDirect, Web of Science, and PubMed, using a search code strategy on 16 March 2021. Using professional assessment, among the 961 articles retrieved by the initial query, only 40 articles focusing on ML/DL-based COVID-19 detection schemes were selected. Findings are presented as a country-wise distribution of publications, article frequency, data collection methods, analyzed datasets, sample sizes, and applied ML/DL techniques. Precisely, this study reveals that ML/DL technique accuracy lies between 80% and 100% when detecting COVID-19. The RT-PCR-based model with a Support Vector Machine (SVM) exhibited the lowest accuracy (80%), whereas the X-ray-based model achieved the highest accuracy (99.7%) using a deep convolutional neural network. However, current studies have shown that an anal swab test is highly accurate for detecting the virus. Moreover, this review addresses the limitations of COVID-19 detection along with a detailed discussion of the prevailing challenges and future research directions, which eventually highlight outstanding issues.
9

Caglar, Ozgur, Erdem Karadeniz, Irem Ates, Sevilay Ozmen, and Mehmet Dumlu Aydin. "Vagosympathetic imbalance induced thyroiditis following subarachnoid hemorrhage: a preliminary study." Journal of Research in Clinical Medicine 8, no. 1 (May 6, 2020): 17. http://dx.doi.org/10.34172/jrcm.2020.017.

Abstract:
Introduction: This preliminary study evaluates the possible responsibility of ischemia-induced vagosympathetic imbalances following subarachnoid hemorrhage (SAH) for the onset of autoimmune thyroiditis. Methods: Twenty-two rabbits were chosen from our former experimental animals, five of which were picked from healthy rabbits as controls (nG-I=5). A sham group (nG-II=5) and animals with thyroid pathologies (nG-III=12) were also included after a one-month-long experimental SAH follow-up. Thyroid hormone levels were measured weekly, and the animals were decapitated. Thyroid glands, superior cervical ganglia, and intracranial parts of vagal nerve sections obtained from our tissue archives were re-examined with routine/immunohistochemical methods. Thyroid hormone levels, hormone-filled total follicle volumes (TFVs) per cubic millimeter, degenerated neuron density (DND) of vagal nuclei and neuron density of superior cervical ganglia were measured and statistically compared. Results: The mean neuron density of both superior cervical ganglia was estimated as 8230±983/mm3 in study group animals with severe thyroiditis, 7496±787/mm3 in the sham group and 6416±510/mm3 in animals with normal thyroid glands. In the control group (group I), T3 was 107±11 μg/dL, T4 was 1.43±0.32 μg/dL and TSH was <0.5, while mean TFV was 43%/mm3 and DND of vagal nuclei was 3±1/mm3. In the sham group (group II), T3 was 96±11 μg/dL, T4 was 1.21±0.9 μg/dL and TSH was >0.5, while TFV was 38%/mm3 and DND of vagal nuclei was 13±4. In the study group, T3 was 54±8 μg/dL, T4 was 1.07±0.3 μg/dL and TSH was >0.5, while TFV was 27%/mm3 and DND of vagal nuclei was 42±9/mm3. Conclusion: Sympathovagal imbalance characterized by relative sympathetic hyperactivity based on vagal insufficiency should be considered as a new causative agent for hypothyroidism.
10

Andrade, R. B., G. A. O. P. Costa, G. L. A. Mota, M. X. Ortega, R. Q. Feitosa, P. J. Soto, and C. Heipke. "EVALUATION OF SEMANTIC SEGMENTATION METHODS FOR DEFORESTATION DETECTION IN THE AMAZON." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 22, 2020): 1497–505. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1497-2020.

Abstract:
Deforestation is a wide-reaching problem, responsible for serious environmental issues such as biodiversity loss and global climate change. Containing approximately ten percent of all biomass on the planet and home to one tenth of the known species, the Amazon biome has faced significant deforestation pressure in the last decades. Devising efficient deforestation detection methods is, therefore, key to combating illegal deforestation and to aiding in the conception of public policies directed at promoting sustainable development in the Amazon. In this work, we implement and evaluate a deforestation detection approach based on a fully convolutional Deep Learning (DL) model: DeepLabv3+. We compare the results obtained with the devised approach to those obtained with previously proposed DL-based methods (Early Fusion and Siamese Convolutional Network) using Landsat OLI-8 images acquired at different dates, covering a region of the Amazon forest. In order to evaluate the sensitivity of the methods to the amount of training data, we also evaluate them using varying training sample set sizes. The results show that all tested variants of the proposed method significantly outperform the other DL-based methods in terms of overall accuracy and F1-score. The gains in performance were even more substantial when limited amounts of samples were used in training the evaluated methods.
11

Panella, F., J. Boehm, Y. Loo, A. Kaushik, and D. Gonzalez. "DEEP LEARNING AND IMAGE PROCESSING FOR AUTOMATED CRACK DETECTION AND DEFECT MEASUREMENT IN UNDERGROUND STRUCTURES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 829–35. http://dx.doi.org/10.5194/isprs-archives-xlii-2-829-2018.

Abstract:
This work presents the combination of Deep Learning (DL) and image processing to produce an automated crack recognition and defect measurement tool for civil structures. The authors focus on tunnel structures and surveys, and have developed an end-to-end tool for asset management of underground structures. In order to maintain the serviceability of tunnels, regular inspection is needed to assess their structural status. The traditional survey method is visual inspection: simple, but slow and relatively expensive, and the quality of the output depends on the ability and experience of the engineer as well as on the total workload (stress and tiredness may influence the ability to observe and record information). As a result of these issues, in the last decade there has been a desire to automate monitoring using new inspection methods. The present paper has the goal of combining DL with traditional image processing to create a tool able to detect, locate and measure structural defects.
12

Tejeswari, B., S. K. Sharma, M. Kumar, and K. Gupta. "BUILDING FOOTPRINT EXTRACTION FROM SPACE-BORNE IMAGERY USING DEEP NEURAL NETWORKS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 641–47. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-641-2022.

Abstract:
One of the most important and detailed features contained within basemaps is the building. Though pre-trained Deep Learning (DL) models are available for Building Feature Extraction (BFE), they are not efficient at predicting buildings in other locations. This study explores the need for, and the major issues of, implementing DL models for BFE from Very High Resolution Remote Sensing (VHRS) satellite data for a given area. Though advanced DL models have been invented, implementing them demands a huge amount of training data, and building typologies are highly dependent on the context of the study area, including soil characteristics, culture/lifestyle/economy, architectural style and building byelaws. The study identifies the availability of sufficient training data on contextual buildings as one of the main concerns for effective model training. The study aims to extract the buildings present in the study area from Pleiades 1A (2019) RGB VHRS data using a simple Mask R-CNN instance segmentation model trained on native contextual buildings. Here, an automated method of generating location-specific training data for a given area is followed using the Google Maps API (2021). The generated training data, when used to train the deep learning architecture and applied to the input data, yielded promising results: specificity of about 98.41%, predictive accuracy of 96.20% and an F1 score of 0.89 were achieved. The methods adopted can assist planning/governing bodies in accelerating qualitative urban map preparation.
13

Nurunnabi, A., and F. N. Teferle. "RESAMPLING METHODS FOR A RELIABLE VALIDATION SET IN DEEP LEARNING BASED POINT CLOUD CLASSIFICATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2022 (May 30, 2022): 617–24. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2022-617-2022.

Abstract:
A validation data set plays a pivotal role in tuning a machine learning model trained in a supervised manner. Many existing algorithms select a part of the available data by random sampling to produce a validation set. However, this approach can be prone to overfitting. One should follow careful data splitting to obtain reliable training and validation sets that can produce a generalized model with good performance on unseen (test) data. Data splitting based on resampling techniques involves repeatedly drawing samples from the available data. Hence, resampling methods can give better generalization power to a model, because they can produce and use many training and/or validation sets. These techniques are computationally expensive, but with increasingly available high-performance computing facilities one can exploit them. Though a multitude of resampling methods exist, investigation of their influence on the generality of deep learning (DL) algorithms is limited due to their non-linear black-box nature. This paper contributes by: (1) investigating the generalization capability of the four most popular resampling methods: k-fold cross-validation (k-CV), repeated k-CV (Rk-CV), Monte Carlo CV (MC-CV) and the bootstrap, for creating training and validation data sets used for developing, training and validating DL-based point cloud classifiers (e.g., PointNet; Qi et al., 2017a), (2) justifying Mean Square Error (MSE) as a statistically consistent estimator, and (3) exploring the use of MSE as a reliable performance metric for supervised DL. Experiments in this paper are performed on both synthetic and real-world aerial laser scanning (ALS) point clouds.
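For orientation, three of the four resampling schemes compared here map directly onto scikit-learn splitters, as the hedged sketch below shows (the bootstrap has no splitter class and is omitted); the data are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold, RepeatedKFold, ShuffleSplit

X = np.arange(100).reshape(-1, 1)   # placeholder sample ids
splitters = {
    "k-CV":  KFold(n_splits=5, shuffle=True, random_state=0),
    "Rk-CV": RepeatedKFold(n_splits=5, n_repeats=3, random_state=0),
    "MC-CV": ShuffleSplit(n_splits=15, test_size=0.2, random_state=0),
}
# Each splitter yields index pairs that could feed any point cloud classifier.
for name, sp in splitters.items():
    n = sum(1 for _ in sp.split(X))
    print(f"{name}: {n} train/validation splits")
```
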
14

Can, R., S. Kocaman, and A. O. Ok. "A WEBGIS FRAMEWORK FOR SEMI-AUTOMATED GEODATABASE UPDATING ASSISTED BY DEEP LEARNING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B5-2021 (June 30, 2021): 13–19. http://dx.doi.org/10.5194/isprs-archives-xliii-b5-2021-13-2021.

Abstract:
The automation of geoinformation (GI) collection and interpretation has been a fundamental goal for many researchers. Developments in various sensors, platforms, and algorithms have been contributing to the achievement of this goal. In addition, the contributions of citizen science (CitSci) and volunteered geographical information (VGI) concepts have become evident and extensive for geodata collection and interpretation in an era where information has the utmost importance for solving societal and environmental problems. Web- and mobile-based Geographical Information Systems (GIS) have facilitated the broad and frequent use of GI by people from any background, thanks to the accessibility and simplicity of the platforms. On the other hand, the increased use of GI has also yielded a great increase in the demand for GI in different application areas. Thus, new algorithms and platforms allowing human intervention are greatly needed for semi-automatic GI extraction to increase accuracy. By integrating novel artificial intelligence (AI) methods, including deep learning (DL) algorithms, into WebGIS interfaces, this task can be achieved. Volunteers with limited knowledge of GIS software can thus be supported to perform accurate processing and to make guided decisions. In this study, a web-based geospatial AI (GeoAI) platform was developed for map updating, using the image processing results obtained from a DL algorithm to assist volunteers. The platform includes vector drawing and editing capabilities and employs a spatial database management system to store the final maps. The system is flexible and can utilise various DL methods for image segmentation.
15

Martin, David R., Joshua A. Hanson, Rama R. Gullapalli, Fred A. Schultz, Aisha Sethi, and Douglas P. Clark. "A Deep Learning Convolutional Neural Network Can Recognize Common Patterns of Injury in Gastric Pathology." Archives of Pathology & Laboratory Medicine 144, no. 3 (June 27, 2019): 370–78. http://dx.doi.org/10.5858/arpa.2019-0004-oa.

Abstract:
Context.— Most deep learning (DL) studies have focused on neoplastic pathology, with the realm of inflammatory pathology remaining largely untouched. Objective.— To investigate the use of DL for nonneoplastic gastric biopsies. Design.— Gold standard diagnoses were blindly established by 2 gastrointestinal pathologists. For phase 1, 300 classic cases (100 normal, 100 Helicobacter pylori, 100 reactive gastropathy) that best displayed the desired pathology were scanned and annotated for DL analysis. A total of 70% of the cases for each group were selected for the training set, and 30% were included in the test set. The software assigned colored labels to the test biopsies, which corresponded to the area of the tissue assigned a diagnosis by the DL algorithm, termed area distribution (AD). For phase 2, an additional 106 consecutive nonclassical gastric biopsies from our archives were tested in the same fashion. Results.— For phase 1, receiver operating curves showed near perfect agreement with the gold standard diagnoses at an AD percentage cutoff of 50% for normal (area under the curve [AUC] = 99.7%) and H pylori (AUC = 100%), and 40% for reactive gastropathy (AUC = 99.9%). Sensitivity/specificity pairings were as follows: normal (96.7%, 86.7%), H pylori (100%, 98.3%), and reactive gastropathy (96.7%, 96.7%). For phase 2, receiver operating curves were slightly less discriminatory, with optimal AD cutoffs reduced to 40% across diagnostic groups. The AUCs were 91.9% for normal, 100% for H pylori, and 94.0% for reactive gastropathy. Sensitivity/specificity pairings were as follows: normal (73.7%, 79.6%), H pylori (95.7%, 100%), and reactive gastropathy (100%, 62.5%). Conclusions.— A convolutional neural network can serve as an effective screening tool/diagnostic aid for H pylori gastritis.
16

Adão, T., T. M. Pinho, L. Pádua, N. Santos, A. Sousa, J. J. Sousa, and E. Peres. "USING VIRTUAL SCENARIOS TO PRODUCE MACHINE LEARNABLE ENVIRONMENTS FOR WILDFIRE DETECTION AND SEGMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W8 (August 20, 2019): 9–15. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w8-9-2019.

Abstract:
Today's climatic proneness to extreme conditions, together with human activity, has been triggering a series of wildfire-related events that put ecosystems at risk, as well as animal and plant patrimony, while threatening dwellers near rural or urban areas. When intervention teams (firefighters, civil protection, police) acknowledge these events, they have usually already escalated to proportions that are hardly controllable, mainly due to wind gusts, fuel-like soil conditions, and other conditions that favour fire spreading.

Currently, there is a wide range of camera-capable sensing systems that can be complemented with useful location data (for example, unmanned aerial systems (UAS) with integrated cameras and IMU/GPS sensors, or stationary surveillance systems) and processing components capable of fostering the detection and monitoring of wildfire events, thus providing accurate and reliable data for decision support. Regarding detection and monitoring specifically, Deep Learning (DL) has been successfully applied to tasks involving classification and/or segmentation of objects of interest in several fields, such as agriculture, forestry and other similar areas. Usually, for an effective imagery-based DL application, datasets must rely on heavy and burdensome logistics to gather a representative problem formulation. What if putting together a dataset could be supported by customizable virtual environments representing faithful situations to train machines, as already occurs for human training in particular tasks (rescue operations, surgeries, industry assembly, etc.)?

This work proposes not only a system to produce faithful virtual environments to complement and/or even supplant the need for dataset-gathering logistics, while eventually dealing with hypothetical scenarios considering climate change events, but also tools for synthesizing wildfire environments for DL application. It will therefore enable extending existing fire datasets with new data generated by human interaction and supervision, viable for training a computational entity. To that end, a study is presented to assess to what extent virtually generated data can contribute to an effective DL system aiming to identify and segment fire, bearing in mind future developments of active monitoring systems to detect fire events in a timely manner and hopefully provide decision support to operational teams.
17

Micheal, A. A., K. Vani, S. Sanjeevi, and C. H. Lin. "A TOOL TO ENHANCE THE CAPACITY FOR DEEP LEARNING BASED OBJECT DETECTION AND TRACKING WITH UAV DATA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B5-2020 (August 24, 2020): 221–26. http://dx.doi.org/10.5194/isprs-archives-xliii-b5-2020-221-2020.

Abstract:
UAV deployment has expanded from critical to day-to-day scenarios for various purposes, such as waste collection, live entertainment, product delivery and town mapping. Object tracking-based UAV applications such as traffic monitoring, wildlife monitoring and surveillance have undergone a phenomenal transformation due to deep learning-based methodologies. With such transformation, there is also a lack of resources to practically explore UAV images and videos with deep learning methodologies. Hence, a deep learning-based object detection and tracking tool for UAV data (DL-ODT-UAV) is proposed to fill the learning gap, especially among students. DL-ODT-UAV is a resource for acquiring basic knowledge about UAVs and deep learning-based object detection and tracking. It integrates various object annotators, object detectors and an object tracker. Single-object detection and tracking is performed with YOLO as the object detector and an LSTM as the object tracker. Faster R-CNN is adopted for multiple-object detection. By exploring the tool, students' ability to approach problems related to deep learning methodologies will improve considerably.
18

Sunil, A., V. V. Sajithvariyar, V. Sowmya, R. Sivanpillai, and K. P. Soman. "IDENTIFYING OIL PADS IN HIGH SPATIAL RESOLUTION AERIAL IMAGES USING FASTER R-CNN." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-M-3-2021 (August 10, 2021): 155–61. http://dx.doi.org/10.5194/isprs-archives-xliv-m-3-2021-155-2021.

Abstract:
Deep learning (DL) methods are used for identifying objects in aerial and ground-based images. Detecting vehicles, roads, buildings, and crops are examples of object identification applications using DL methods. Identifying complex natural and man-made features continues to be a challenge. Oil pads are an example of complex built features due to their shape, size, and the presence of other structures like sheds. This work applies the Faster Region-based Convolutional Neural Network (R-CNN), a DL-based object recognition method, to identifying oil pads in high spatial resolution (1 m), true-color aerial images. Faster R-CNN is a region-based object identification method, consisting of a Region Proposal Network (RPN) that helps to find the areas where the target may be present in the images. If the target is present, the Faster R-CNN algorithm identifies the corresponding area of an image as foreground and the rest as background. The algorithm was trained with oil pad locations that were manually annotated from orthorectified imagery acquired in 2017. Eighty percent of the annotated images were used for training, and the number of epochs was increased from 100 to 1000 in increments of 100, with a fixed length of 1000. After determining the optimal number of epochs, the performance of the algorithm was evaluated with an independent set of validation images consisting of frames with and without oil pads. Results indicate that the Faster R-CNN algorithm can be used for identifying oil pads in aerial images.
19

Nurunnabi, A., F. N. Teferle, J. Li, R. C. Lindenbergh, and S. Parvaz. "INVESTIGATION OF POINTNET FOR SEMANTIC SEGMENTATION OF LARGE-SCALE OUTDOOR POINT CLOUDS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W5-2021 (December 23, 2021): 397–404. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w5-2021-397-2021.

Abstract:
Semantic segmentation of point clouds is indispensable for 3D scene understanding. Point clouds reliably capture the geometry of objects, including shape, size, and orientation. Deep learning (DL) has been recognized as the most successful approach for image semantic segmentation. Applied to point clouds, the performance of many DL algorithms degrades, because point clouds are often sparse and have an irregular data format. As a result, point clouds are regularly first transformed into voxel grids or image collections. PointNet was the first promising algorithm that feeds point clouds directly into the DL architecture. Although PointNet achieved remarkable performance on indoor point clouds, its performance has not been extensively studied on large-scale outdoor point clouds. To the best of our knowledge, no study on large-scale aerial point clouds has investigated the sensitivity of the hyper-parameters used in PointNet. This paper evaluates PointNet's performance for semantic segmentation on three large-scale Airborne Laser Scanning (ALS) point clouds of urban environments. Reported results show that PointNet has potential in large-scale outdoor scene semantic segmentation. A notable limitation of PointNet is that it does not consider the local structure induced by the metric space formed by each point's local neighbours. Experiments show that PointNet is markedly sensitive to hyper-parameters such as batch size, block partition and the number of points in a block. For one ALS dataset, we observe a significant difference between overall accuracies of 67.5% and 72.8% for block sizes of 5 m × 5 m and 10 m × 10 m, respectively. Results also show that the performance of PointNet depends on the selection of input vectors.
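The block partition that the experiments vary can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and the block size and point count are the kind of hyper-parameters the paper probes:

```python
import numpy as np

# Tile an ALS cloud into block_size x block_size cells in XY and sample a
# fixed number of points per block, since PointNet expects fixed-size inputs.
def make_blocks(points: np.ndarray, block_size: float = 5.0, n_points: int = 4096):
    rng = np.random.default_rng(0)
    ij = np.floor((points[:, :2] - points[:, :2].min(axis=0)) / block_size).astype(int)
    blocks = []
    for key in np.unique(ij, axis=0):
        idx = np.where((ij == key).all(axis=1))[0]
        pick = rng.choice(idx, n_points, replace=len(idx) < n_points)
        blocks.append(points[pick])
    return np.stack(blocks)   # (n_blocks, n_points, n_features)

cloud = np.random.rand(50000, 3) * [100, 100, 20]  # placeholder ALS points
print(make_blocks(cloud, block_size=5.0).shape)
```
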
20

Akiyama, T. S., J. Marcato Junior, W. N. Gonçalves, P. O. Bressan, A. Eltner, F. Binder, and T. Singer. "DEEP LEARNING APPLIED TO WATER SEGMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 14, 2020): 1189–93. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-1189-2020.

Abstract:
The use of deep learning (DL) with convolutional neural networks (CNNs) to monitor surface water can be a valuable supplement to costly and labour-intensive standard gauging stations. This paper presents the application of a recent CNN semantic segmentation method (SegNet) to automatically segment river water in imagery acquired by RGB sensors. This approach can be used as a new supporting tool, because there are only a few studies using DL techniques to monitor water resources. The study area is a medium-scale river (the Wesenitz) located in the east of Germany. The captured images reflect different periods of the day over a period of approximately 50 days, allowing for the analysis of the river in different environmental conditions and situations. In the experiments, we evaluated input image resolutions of 256 × 256 and 512 × 512 pixels to assess their influence on the performance of river segmentation. The performance of the CNN was measured with the pixel accuracy and IoU metrics, revealing an accuracy of 98% and 97%, respectively, for both resolutions, indicating that our approach is efficient for segmenting water in RGB imagery.
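The two reported metrics are straightforward to compute for binary water masks; a minimal sketch with random placeholder masks:

```python
import numpy as np

# Pixel accuracy: fraction of pixels where prediction matches ground truth.
def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred == gt).mean())

# IoU: intersection over union of the positive (water) class.
def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union else 1.0

pred = np.random.rand(256, 256) > 0.5  # placeholder segmentation output
gt = np.random.rand(256, 256) > 0.5    # placeholder ground truth
print(pixel_accuracy(pred, gt), iou(pred, gt))
```
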
21

De Geyter, S., M. Bassier, and M. Vergauwen. "AUTOMATED TRAINING DATA CREATION FOR SEMANTIC SEGMENTATION OF 3D POINT CLOUDS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-5/W1-2022 (February 3, 2022): 59–67. http://dx.doi.org/10.5194/isprs-archives-xlvi-5-w1-2022-59-2022.

Abstract:
The creation of as-built Building Information Modelling (BIM) models is currently mostly manual, which makes it time-consuming and error-prone. A crucial step that remains to be automated is the interpretation of the point clouds and the modelling of the BIM geometry. Research has shown that, despite the advancements in semantic segmentation, the Deep Learning (DL) networks used in the interpretation do not achieve the accuracy necessary for market adoption. One of the main reasons is a lack of sufficient and representative labelled data to train these models. In this work, the possibility of using already conducted Scan-to-BIM projects to automatically generate highly needed training data in the form of labelled point clouds is investigated. More specifically, a pipeline is presented that uses real-world point clouds and their corresponding manually created BIM models. In doing so, realistic and representative training data are created. The presented paper focuses on the semantic segmentation of 6 common structural BIM classes representing the main structure of a building. The experiments show that the pipeline successfully creates new training data for a recent DL network.
22

Li, Feng, Liu Han, Zhu Liujun, Huang Yinyou, and Guo Song. "URBAN VEGETATION MAPPING BASED ON THE HJ-1 NDVI RECONSTRCTION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B8 (June 23, 2016): 867–71. http://dx.doi.org/10.5194/isprs-archives-xli-b8-867-2016.

Abstract:
HJ-1A/B NDVI (HJ NDVI) time-series data possess relatively high spatio-temporal resolution, which is significant for research on urban areas. However, their application is hindered by noise resulting from the restrictions of imaging quality and the limits of the satellite platform, so NDVI noise reduction is necessary. Several noise-reduction techniques, including the asymmetric Gaussian (AG) filter, the double logistic (DL) filter, the Savitzky-Golay (S-G) filter and the harmonic analysis of NDVI time series (HANTS), have been used to carry out NDVI time-series reconstruction; based on a comparison of the different filters, the S-G filter is optimal for application to urban areas. Finally, urban vegetation mapping is carried out based on the reconstructed HJ NDVI.
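The S-G reconstruction step is available off the shelf; a minimal sketch on a synthetic NDVI series (window length and polynomial order are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.signal import savgol_filter

# Smooth a noisy NDVI time series with a sliding polynomial fit (S-G filter).
t = np.linspace(0, 2 * np.pi, 73)                    # ~5-day composites, one year
ndvi_clean = 0.4 + 0.25 * np.sin(t)                  # idealised seasonal curve
ndvi_noisy = ndvi_clean + np.random.normal(0, 0.05, t.size)  # cloud/sensor noise
ndvi_sg = savgol_filter(ndvi_noisy, window_length=11, polyorder=3)
print(f"RMSE before: {np.sqrt(np.mean((ndvi_noisy - ndvi_clean)**2)):.3f}, "
      f"after S-G: {np.sqrt(np.mean((ndvi_sg - ndvi_clean)**2)):.3f}")
```
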
23

Buyukdemircioglu, M., R. Can, and S. Kocaman. "DEEP LEARNING BASED ROOF TYPE CLASSIFICATION USING VERY HIGH RESOLUTION AERIAL IMAGERY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 28, 2021): 55–60. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-55-2021.

Abstract:
Automatic detection, segmentation and reconstruction of buildings in urban areas from Earth Observation (EO) data are still challenging for many researchers. The roof is one of the most important elements of a building model. Three-dimensional geographical information system (3D GIS) applications generally require the roof type and roof geometry for performing various analyses on the models, such as energy efficiency. Conventional segmentation and classification methods are often based on features like corners, edges and line segments. In parallel with developments in computer hardware and artificial intelligence (AI) methods, including deep learning (DL), image features can be extracted automatically. As a DL technique, convolutional neural networks (CNNs) can be used for image classification tasks, but they require a large amount of high-quality training data to obtain accurate results. The main aim of this study was to generate a roof type dataset from very high-resolution (10 cm) orthophotos of Cesme, Turkey, and to classify the roof types using a shallow CNN architecture. The training dataset consists of 10,000 roof images and their labels. Six roof type classes, namely flat, hip, half-hip, gable, pyramid and complex roofs, were used for the classification in the study area. The prediction performance of the shallow CNN model used here was compared with the results obtained from fine-tuning three well-known pre-trained networks, i.e., VGG-16, EfficientNetB4 and ResNet-50. The results show that although our CNN has slightly lower performance in terms of overall accuracy, it is still acceptable for many applications using sparse data.
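For readers unfamiliar with the term, a "shallow CNN" for six roof classes might look like the following Keras sketch; the layer sizes and input resolution are assumptions, not the architecture actually trained in the paper:

```python
import tensorflow as tf

# A minimal two-convolution classifier for six roof type classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),       # roof image patches
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),   # flat, hip, half-hip, ...
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
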
24

Hasan, A., M. R. Udawalpola, C. Witharana, and A. K. Liljedahl. "COUNTING ICE-WEDGE POLYGONS FROM SPACE: USE OF COMMERCIAL SATELLITE IMAGERY TO MONITOR CHANGING ARCTIC POLYGONAL TUNDRA." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-M-3-2021 (August 10, 2021): 67–72. http://dx.doi.org/10.5194/isprs-archives-xliv-m-3-2021-67-2021.

Abstract:
The microtopography associated with ice wedge polygons (IWPs) governs the Arctic ecosystem from local to regional scales due to its impacts on the flow and storage of water and, therefore, on vegetation and carbon. Increasing subsurface temperatures in Arctic permafrost landscapes cause differential ground settlements, followed by a series of adverse microtopographic transitions at a sub-decadal scale. The entire Arctic has been imaged at 0.5 m or finer resolution by commercial satellite sensors, and the dramatic microtopographic transformation of low-centered into high-centered IWPs can be identified using sub-meter resolution commercial satellite imagery. In this exploratory study, we employed a Deep Learning (DL)-based object detection and semantic segmentation method, Mask R-CNN, to automatically map IWPs from commercial satellite imagery. Different tundra vegetation types have distinct spectral, spatial and textural characteristics, which in turn determine the semantics of the overlying IWPs. Landscape complexity translates into image complexity, affecting DL model performance. Scarcity of labelled training images, inadequate training samples for some types of tundra and class imbalance are other key challenges in this study. We implemented image augmentation methods to introduce variety into the training data and trained models separately for different tundra types. The augmentation methods show promising results, but the models trained on separate tundra types seem to suffer from a lack of annotated data.
25

Asma Abdul Qadeer, Rabia Mehmood, Saadia Baraan, Nadia Junaid, Sara Bashir Kant, and Sarah Habib. "Anemia among pregnant women a major concern for achieving universal health coverage." Professional Medical Journal 30, no. 01 (January 1, 2023): 63–67. http://dx.doi.org/10.29309/tpmj/2023.30.01.7097.

Abstract:
Objective: To assess the frequency of anemia among pregnant females visiting the Rawal Institute of Health Sciences and to find out the risk factors contributing to anemia. Study Design: Cross-sectional descriptive study. Setting: Rawal Institute of Health Sciences, Islamabad, Pakistan. Period: May to July 2019. Material & Methods: A study was carried out to find the frequency of anemia among 100 pregnant women through non-probability convenience sampling at RIHS using a structured questionnaire. Blood hemoglobin concentration data were collected from their antenatal archives. Results: Hemoglobin level was found to be less than 7 g/dl in 3% of the pregnant females, 6% had moderate anemia, and 68% were mildly anemic. The overall frequency of anemic pregnant women was found to be 77%. Conclusion: In conclusion, the frequency of anemia in this study population was high. According to our study, this high frequency is related to inadequate diet, stress, multiple pregnancies and menorrhagia.
26

Belwalkar, A., A. Nath, and O. Dikshit. "SPECTRAL-SPATIAL CLASSIFICATION OF HYPERSPECTRAL REMOTE SENSING IMAGES USING VARIATIONAL AUTOENCODER AND CONVOLUTION NEURAL NETWORK." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5 (November 19, 2018): 613–20. http://dx.doi.org/10.5194/isprs-archives-xlii-5-613-2018.

Abstract:
In this paper, we propose a spectral-spatial feature extraction framework based on deep learning (DL) for hyperspectral image (HSI) classification. In this framework, a variational autoencoder (VAE) is used for extraction of spectral features from two widely used hyperspectral datasets: Kennedy Space Center, Florida, and University of Pavia, Italy. Additionally, a convolutional neural network (CNN) is utilized to obtain spatial features. The spatial and spectral feature vectors are then stacked together to form a joint feature vector. Finally, the joint feature vector is trained using multinomial logistic regression (softmax regression) for prediction of class labels. The classification performance analysis is done through generation of the confusion matrix, which is then used to calculate Cohen's kappa (κ) to obtain a quantitative measure of classification performance. The results show that the κ value is higher than 0.99 for both HSI datasets.
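Cohen's kappa follows directly from the confusion matrix; a minimal sketch (sklearn.metrics.cohen_kappa_score gives the same value from raw labels):

```python
import numpy as np

# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
def cohens_kappa(cm: np.ndarray) -> float:
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

cm = np.array([[50, 1, 0], [2, 47, 1], [0, 1, 48]])      # toy confusion matrix
print(f"kappa = {cohens_kappa(cm):.3f}")
```
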
27

Mirmazloumi, S. M., Á. F. Gambin, Y. Wassie, A. Barra, R. Palamà, M. Crosetto, O. Monserrat, and B. Crippa. "INSAR DEFORMATION TIME SERIES CLASSIFICATION USING A CONVOLUTIONAL NEURAL NETWORK." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 30, 2022): 307–12. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-307-2022.

Abstract:
Temporal analysis of deformation Time Series (TS) provides detailed information on various natural and human-made displacements. Interferometric Synthetic Aperture Radar (InSAR) generates millimetre-scale products, indicating the chronological behaviour of detected targets via TS products. Deep Learning (DL) can handle a massive load of InSAR TS to separate significant movements from non-moving targets. To this end, we employed a supervised Convolutional Neural Network (CNN) model to distinguish five deformation trends: Stable, Linear, Quadratic, Bilinear, and Phase Unwrapping Error (PUE). Considering the several hyper-parameters of a CNN model, we trained numerous combinations to find the most accurate one, using 5,000 samples extracted with a Persistent Scatterer Interferometry (PSI) technique from Sentinel-1 images over the Granada region, Spain. The overall accuracy of the model exceeds 92%. Deformations in three landslide cases were also detected over the same area, including the Cortijo de Lorenzo, El Arrecife, and Rules Viaduct areas.
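A supervised CNN over such one-dimensional time series could be sketched as below; the series length and layer sizes are assumptions for illustration, not the tuned configuration the authors selected:

```python
import tensorflow as tf

# 1D CNN mapping one deformation TS per scatterer to one of five trend classes
# (Stable, Linear, Quadratic, Bilinear, PUE).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 1)),   # assumed series length of 60 epochs
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```
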
28

Teoh, Jeremy Yuen Chun, Ning Hong Chan, Siu Ming Mak, Anthony Wing Ip Lo, Chung Ying Leung, Yin Hui, In Chak Law, et al. "Inflammatory Myofibroblastic Tumours of the Urinary Bladder: Multi-Centre 18-Year Experience." Urologia Internationalis 94, no. 1 (July 18, 2014): 31–36. http://dx.doi.org/10.1159/000358732.

Abstract:
Objective: To review a series of inflammatory myofibroblastic tumours (IMTs) of the urinary bladder in 10 hospitals in Hong Kong. Methods: A database search of the pathology archives of 10 hospitals in Hong Kong from 1995 to 2013 was performed using the key words 'inflammatory myofibroblastic tumour', 'inflammatory pseudotumour' and 'spindle cell lesion'. Patient characteristics, clinical features, histological features, immunohistochemical staining results and treatment outcomes were reviewed. Results: Nine cases of IMT of the urinary bladder were retrieved. The mean age was 45.4 ± 22.8 years (range 11-78). Eight patients (88.9%) presented with haematuria and 5 patients (55.6%) had anaemia, with a mean haemoglobin level of 6.8 ± 1.3 g/dl. Histologically, the majority of patients (77.8%) had a compact spindle cell pattern. Anaplastic lymphoma kinase staining was positive in 75% of cases. During a mean follow-up period of 43.4 months (range 8-94), none of the patients developed any local recurrence or distant metastasis. Conclusions: A high index of suspicion of IMT should be maintained for young patients presenting with bleeding bladder tumours and significant anaemia. IMTs of the urinary bladder run a benign disease course, and a good prognosis can be achieved after surgical resection.
29

He, Z., H. He, J. Li, M. A. Chapman, and H. Ding. "A SHORT-CUT CONNECTIONS-BASED NEURAL NETWORK FOR BUILDING EXTRACTION FROM HIGH RESOLUTION ORTHOIMAGERY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 39–44. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-39-2022.

Abstract:
Extracting building footprints from high-resolution remote sensing images using deep learning-based (DL-based) methods is one of the current research interest areas. However, the extraction results generally suffer from blurred edges, rounded corners and loss of detail. Hence, this article presents a detail-oriented deep learning network named eU-Net (enhanced U-Net). In the method adopted in this study, imagery is sent into a pre-module, consisting of the Canny edge detector, Principal Component Analysis (PCA) and inter-band ratio operations, before being fed into the network. Skip connections are then used in the network to reduce the loss of detail at edges and corners. The encoding and decoding modules of the network are redesigned to expand the perceptual field with shortcut connections and stacked layers. Finally, a Dropout module is added to the bottom layer of the network to avoid over-fitting. The experimental results indicate that the method used in this study outperforms other commonly used and state-of-the-art methods, namely FCN-8s, U-Net, DeepLabv3 and Fast-SCNN.
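The pre-module lends itself to a short sketch: stack a Canny edge map, the first principal component and an inter-band ratio as extra input channels. The thresholds and the choice of ratio below are assumptions, not the paper's settings:

```python
import cv2
import numpy as np

# Illustrative pre-module: augment an RGB tile with edge, PCA and ratio channels.
def pre_module(rgb: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0  # assumed thresholds
    flat = rgb.reshape(-1, 3).astype(np.float32)
    flat -= flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)           # PCA via SVD
    pc1 = (flat @ vt[0]).reshape(rgb.shape[:2])
    ratio = rgb[..., 0] / (rgb[..., 1].astype(np.float32) + 1e-6) # assumed R/G ratio
    return np.dstack([rgb.astype(np.float32) / 255.0, edges, pc1, ratio])

rgb = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # placeholder tile
print(pre_module(rgb).shape)  # (256, 256, 6)
```
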
30

Pirotti, F., C. Zanchetta, M. Previtali, and S. Della Torre. "DETECTION OF BUILDING ROOFS AND FACADES FROM AERIAL LASER SCANNING DATA USING DEEP LEARNING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W11 (May 5, 2019): 975–80. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w11-975-2019.

Abstract:
In this work we test the predictive power of deep learning for the detection of buildings from aerial laser scanner point cloud information. Automatic extraction of built features from remote sensing data is of extreme interest for many applications. In particular, recent paradigms for 3D mapping of buildings, such as CityGML and BIM, can benefit from an initial determination of building geometries. In this work we used a LiDAR dataset of an urban environment from the ISPRS benchmark on urban object detection. The dataset is labelled with eight classes, two of which were used for this investigation: roofs and facades. The objective is to test how a TensorFlow deep learning neural network can predict these two classes. Results show that for the "roof" and "facade" semantic classes, respectively, recall is 84% and 76% and precision is 72% and 63%. The number and distribution of correct points represent the geometry well, thus allowing their use as support for CityGML and BIM modelling. Further tuning of the hidden layers of the DL model will likely improve results and will be tested in future investigations.
31

Murray, J., I. Sargent, D. Holland, A. Gardiner, K. Dionysopoulou, S. Coupland, J. Hare, C. Zhang, and P. M. Atkinson. "OPPORTUNITIES FOR MACHINE LEARNING AND ARTIFICIAL INTELLIGENCE IN NATIONAL MAPPING AGENCIES: ENHANCING ORDNANCE SURVEY WORKFLOW." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B5-2020 (August 24, 2020): 185–89. http://dx.doi.org/10.5194/isprs-archives-xliii-b5-2020-185-2020.

Abstract:
National Mapping Agencies (NMAs) are frequently tasked with providing highly accurate geospatial data for a range of customers. Traditionally, this challenge has been met by combining the collection of remote sensing data with extensive fieldwork, and the manual interpretation and processing of the combined data. Consequently, this task is a significant logistical undertaking which benefits the production of high-quality output, but which is extremely expensive to deliver. Therefore, novel approaches that can automate feature extraction and classification from remotely sensed data are of great potential interest to NMAs across the entire sector. Using research undertaken at Great Britain's NMA, Ordnance Survey (OS), as an example, this paper provides an overview of recent advances at an NMA in the use of artificial intelligence (AI), including machine learning (ML) and deep learning (DL) based applications, for example in automating feature extraction and classification from remotely sensed aerial imagery. In addition, recent OS research in applying deep (convolutional) neural network architectures to image classification is also described. This overview is intended to be useful to other NMAs who may be considering the adoption of similar approaches within their workflows.
32

Yadav, R., A. Nascetti, and Y. Ban. "BUILDING CHANGE DETECTION USING MULTI-TEMPORAL AIRBORNE LIDAR DATA." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 31, 2022): 1377–83. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-1377-2022.

Full text
Abstract:
Abstract. Building change detection is essential for monitoring urbanization, disaster assessment, urban planning and frequently updating maps. 3D structure information from airborne light detection and ranging (LiDAR) is very effective for detecting urban changes, but the 3D point cloud from airborne laser scanning (ALS) holds an enormous amount of unordered and irregularly sparse information. Handling such data is tricky and consumes a large amount of memory. Most of this information is not necessary when looking for a particular type of urban change. In this study, we propose an automatic method that reduces the 3D point clouds to a much smaller representation without losing the information required for detecting building changes. The method utilizes the Deep Learning (DL) model U-Net for segmenting buildings from the background. The produced segmentation maps are then processed further to detect changes, and the results are refined using morphological methods. For the change detection task, we used multi-temporal airborne LiDAR data acquired over Stockholm in 2017 and 2019. The changes in buildings are classified into four types: 'newly built', 'demolished', 'taller' and 'shorter'. The detected changes are visualized in one map for better interpretation.
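A minimal sketch of the morphological refinement step mentioned above (the exact operators and thresholds are assumptions): clean a binary change mask with opening/closing and drop tiny connected components.

```python
import numpy as np
from scipy import ndimage

def refine_change_map(change_mask, min_area=50):
    """Clean a binary building-change mask: morphological opening/closing
    to remove speckle, then drop connected components below min_area pixels."""
    mask = ndimage.binary_opening(change_mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep_ids = np.where(sizes >= min_area)[0] + 1
    return np.isin(labels, keep_ids)
```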
APA, Harvard, Vancouver, ISO, and other styles
33

El Kohli, S., Y. Jannaj, M. Maanan, and H. Rhinane. "DEEP LEARNING: NEW APPROACH FOR DETECTING SCHOLAR EXAMS FRAUD." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W3-2021 (January 10, 2022): 103–7. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w3-2021-103-2022.

Full text
Abstract:
Abstract. Cheating in exams is a worldwide phenomenon that hinders efforts to assess the skills and growth of students. With scientific and technological progress, it has become possible to develop detection systems, in particular a system to monitor the movements and gestures of candidates during an exam, individually or collectively. Deep learning (DL) concepts are widely used in image processing and machine learning applications. Our system builds on advances in artificial intelligence, particularly 3D Convolutional Neural Networks (3D CNNs), object detection methods, OpenCV and especially Google TensorFlow, to provide real-time, optimized computer vision. In the proposed approach, we provide a detection system able to predict fraud during exams, using a 3D CNN to generate a model from 7,638 selected images and an object detector to identify prohibited items. These experimental studies yield a detection performance with 95% accuracy of correlation between the training and validation data sets.
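For readers unfamiliar with 3D CNNs, the skeleton below shows the general shape of such a model in TensorFlow: convolutions over stacked video frames rather than single images. The clip length, spatial size and filter counts are illustrative assumptions, not the authors' architecture.

```python
import tensorflow as tf

# Small 3D CNN over short video clips (frames x height x width x RGB).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 112, 112, 3)),
    tf.keras.layers.Conv3D(32, (3, 3, 3), activation="relu"),
    tf.keras.layers.MaxPooling3D((1, 2, 2)),
    tf.keras.layers.Conv3D(64, (3, 3, 3), activation="relu"),
    tf.keras.layers.MaxPooling3D((2, 2, 2)),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # fraud / no fraud
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```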
APA, Harvard, Vancouver, ISO, and other styles
34

Treccani, D., J. Balado, A. Fernández, A. Adami, and L. Díaz-Vilariño. "A DEEP LEARNING APPROACH FOR THE RECOGNITION OF URBAN GROUND PAVEMENTS IN HISTORICAL SITES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B4-2022 (June 1, 2022): 321–26. http://dx.doi.org/10.5194/isprs-archives-xliii-b4-2022-321-2022.

Full text
Abstract:
Abstract. Urban management is a topic of great interest to local administrators, particularly because it is strongly connected to smart city issues and can have a great impact on making cities more sustainable. Considering the management of the physical accessibility of cities in particular, the possibility of automating data collection in urban areas is of great interest. Focusing on historical centres and the urban areas of cities and historical sites, their ground surfaces are generally characterised by a multitude of different pavements. To strengthen the management of such urban areas, a comprehensive mapping of the different pavements can be very useful. In this paper, the survey of a historical city (Sabbioneta, in northern Italy), carried out with a Mobile Mapping System (MMS), was used as a starting point. The approach presented here exploits Deep Learning (DL) to classify the different pavings. Firstly, the points belonging to the ground surfaces of the point cloud were selected and the point cloud was rasterised. The raster images were then used to perform material classification with a Deep Learning approach, implementing a U-Net coupled with ResNet-18. Five different classes of materials were identified, namely sampietrini, bricks, cobblestone, stone and asphalt. The average accuracy of the result is 94%.
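A minimal sketch of the rasterisation step the abstract mentions, assuming ground points carry RGB colors and that a simple mean-color grid is sufficient (the authors' cell size and aggregation rule are not given):

```python
import numpy as np

def rasterize_ground(points, colors, cell=0.05):
    """Project ground points (N, 3) with RGB colors (N, 3) onto a 2D grid;
    each cell keeps the mean color of the points that fall inside it."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    ij = np.floor((xy - mins) / cell).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w, 3), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.int64)
    np.add.at(img, (ij[:, 1], ij[:, 0]), colors)
    np.add.at(cnt, (ij[:, 1], ij[:, 0]), 1)
    return img / np.maximum(cnt, 1)[..., None]
```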
APA, Harvard, Vancouver, ISO, and other styles
35

Teffahi, H., and N. Teffahi. "EMAP-DCNN: A NOVEL MATHEMATICAL MORPHOLOGY AND DEEP LEARNING COMBINED FRAMEWORK FOR HYPERSPECTRAL IMAGE CLASSIFICATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 21, 2020): 479–86. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-479-2020.

Full text
Abstract:
Abstract. The classification of hyperspectral images (HSI) with high spectral and spatial resolution represents an important and challenging task in image processing and remote sensing (RS), due to the computational complexity and high dimensionality of remote sensing images. The spatial and spectral characteristics of pixels are crucial for hyperspectral image classification, and to take both into account, various classification and feature extraction methods have been developed to improve the spectral-spatial classification of remote sensing images for thematic mapping purposes such as agricultural mapping, urban mapping, and emergency mapping in case of natural disasters. In recent years, mathematical morphology and deep learning (DL) have been recognized as prominent feature extraction techniques that lead to remarkable spectral-spatial classification performance. Among them, Extended Multi-Attribute Profiles (EMAP) and Dense Convolutional Neural Networks (DCNN) are considered robust and powerful approaches; the work in this paper builds on these two techniques for the feature extraction stage, combining them in two different manners to construct the EMAP-DCNN framework. The experiments were conducted on two popular datasets: the 'Indian Pines' and 'Houston' hyperspectral datasets. Experimental results demonstrate that the two proposed variants of the EMAP-DCNN framework, denoted EMAP-DCNN 1 and EMAP-DCNN 2, provide competitive performance compared with some state-of-the-art spectral-spatial classification methods based on deep learning.
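To give a flavour of morphological feature extraction, here is a heavily simplified sketch: a plain morphological profile built from openings and closings at increasing scales. Note this is not a full EMAP (which uses attribute filters on component trees); it is only an assumption-laden illustration of the idea of stacking morphological responses as features.

```python
import numpy as np
from skimage.morphology import opening, closing, disk

def morphological_profile(band, radii=(1, 3, 5)):
    """Simplified morphological profile of one image band: the original band
    plus openings and closings with disks of increasing radius."""
    layers = [band]
    for r in radii:
        se = disk(r)
        layers += [opening(band, se), closing(band, se)]
    return np.stack(layers, axis=-1)  # (H, W, 1 + 2 * len(radii))
```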
APA, Harvard, Vancouver, ISO, and other styles
36

Meshkini, K., F. Bovolo, and L. Bruzzone. "A 3D CNN APPROACH FOR CHANGE DETECTION IN HR SATELLITE IMAGE TIME SERIES BASED ON A PRETRAINED 2D CNN." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2022 (May 30, 2022): 143–50. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2022-143-2022.

Full text
Abstract:
Abstract. Over recent decades, Change Detection (CD) has been intensively investigated due to the availability of High Resolution (HR) multi-spectral, multi-temporal remote sensing images. Deep Learning (DL) based methods such as Convolutional Neural Networks (CNNs) have recently received increasing attention in CD problems, demonstrating high potential. However, most CNN-based CD methods are designed for bi-temporal image analysis. Here, we propose a Three-Dimensional (3D) CNN-based CD approach that can effectively deal with HR image time series and process spatial-spectral-temporal features. The method is unsupervised and thus does not require the complex task of collecting labelled multi-temporal data. Since only a few pretrained 3D CNNs are available, and they are not suitable for remote sensing CD analysis, the proposed approach starts with a pretrained 2D CNN architecture trained on remote sensing images for semantic segmentation and develops a 3D CNN architecture using a transfer learning technique to jointly deal with spatial, spectral and temporal information. A layerwise feature reduction strategy is performed to select the most informative features, and a pixelwise, year-based Change Vector Analysis (CVA) is employed to identify changed pixels. Experimental results on a long time series of Landsat 8 images for an area located in Saudi Arabia confirm the effectiveness of the proposed approach.
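The core of CVA is simple enough to sketch: the per-pixel magnitude of the feature difference between two dates, thresholded into a binary change map. The thresholding rule below (mean plus two standard deviations) is an assumption, not the authors' criterion.

```python
import numpy as np

def change_vector_analysis(feat_t1, feat_t2, threshold=None):
    """Pixelwise Change Vector Analysis on (H, W, C) feature maps from two
    dates: returns the change magnitude and a binary change map."""
    magnitude = np.linalg.norm(feat_t2 - feat_t1, axis=-1)
    if threshold is None:  # simple data-driven threshold (assumption)
        threshold = magnitude.mean() + 2 * magnitude.std()
    return magnitude, magnitude > threshold
```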
APA, Harvard, Vancouver, ISO, and other styles
37

Yassine, H., K. Tout, and M. Jaber. "IMPROVING LULC CLASSIFICATION FROM SATELLITE IMAGERY USING DEEP LEARNING – EUROSAT DATASET." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2021 (June 28, 2021): 369–76. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2021-369-2021.

Full text
Abstract:
Abstract. Machine learning (ML) has proven useful for a very large number of applications in several domains, and has seen remarkable growth in remote sensing image analysis over the past few years. Deep Learning (DL), a subset of machine learning, was applied in this work to achieve better classification of Land Use Land Cover (LULC) in satellite imagery using Convolutional Neural Networks (CNNs). The EuroSAT benchmark data set, built from Sentinel-2 satellite images, is used as the training data set. Sentinel-2 provides images with 13 spectral bands, but surprisingly little attention has been paid to these features in deep learning models; the majority of applications use only RGB, owing to the high availability of RGB models in computer vision. While RGB gives an accuracy of 96.83% using a CNN, we present two approaches to improve the classification performance on Sentinel-2 images. In the first approach, features are extracted from all 13 spectral bands of Sentinel-2 instead of RGB, which leads to an accuracy of 98.78%. In the second approach, features are extracted from the 13 spectral bands together with calculated indices used in LULC, such as the Blue Ratio (BR), the Vegetation Index based on Red Edge (VIRE) and the Normalized Near Infrared (NNIR), which gives a better accuracy of 99.58%.
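The key practical point is feeding a CNN all 13 bands instead of 3. A minimal Keras sketch is below; the EuroSAT patch size (64 × 64) and class count (10) are from the public dataset, while the layer configuration is an assumption.

```python
import tensorflow as tf

# Small CNN accepting all 13 Sentinel-2 bands of 64x64 EuroSAT patches.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 13)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 EuroSAT classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```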
APA, Harvard, Vancouver, ISO, and other styles
38

Murtiyoso, A., and P. Grussenmeyer. "AUTOMATIC POINT CLOUD NOISE MASKING IN CLOSE RANGE PHOTOGRAMMETRY FOR BUILDINGS USING AI-BASED SEMANTIC LABELLING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 389–93. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-389-2022.

Full text
Abstract:
Abstract. The use of AI in semantic segmentation has grown significantly in recent years, aided by developments in computing power and the availability of annotated images for training data. However, in the context of close-range photogrammetry, although it works with 2D images, AI is still used mostly for 3D point cloud segmentation. In this paper, we propose a simple method to apply such techniques in close-range photogrammetry by benefitting from deep learning-based semantic segmentation. Specifically, AI was used to detect unwanted objects in a scene involving the 3D reconstruction of a historical building façade. For these purposes, classes such as sky, trees and electricity poles were considered noise. Masks were then created from the results, constraining the dense image matching process to the wanted classes only. In this regard, the resulting dense point cloud essentially projected the 2D semantic labels into 3D space, thus excluding noise and unwanted object classes from the 3D scene. Our results were compared to manual image masking and achieved comparable quality while requiring only a fraction of the processing time when using a pre-trained DL network.
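A sketch of the masking idea, assuming a per-pixel label map is already available from a segmentation network; the label ids for the noise classes are hypothetical placeholders.

```python
import numpy as np
import cv2

# Assumed label ids for the unwanted classes (sky, trees, poles).
NOISE_IDS = [2, 8, 11]

def build_matching_mask(label_map):
    """Turn a per-pixel semantic label map into a binary mask (255 = keep)
    that can constrain dense image matching to wanted classes only."""
    keep = ~np.isin(label_map, NOISE_IDS)
    return keep.astype(np.uint8) * 255

# cv2.imwrite("img_0001_mask.png", build_matching_mask(labels))
```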
APA, Harvard, Vancouver, ISO, and other styles
39

Sajithvariyar, V. V., S. Aswin, V. Sowmya, K. P. Soman, R. Sivanpillai, and G. K. Brown. "ANALYSIS OF FOUR GENERATOR ARCHITECTURES OF C-GAN, LOSS FUNCTION, AND ANNOTATION METHOD FOR EPIPHYTE IDENTIFICATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-M-3-2021 (August 10, 2021): 149–53. http://dx.doi.org/10.5194/isprs-archives-xliv-m-3-2021-149-2021.

Full text
Abstract:
Abstract. Deep learning (DL) models require timely updates to remain reliable and robust in prediction, classification and segmentation tasks. When a deep learning model is tested with a limited test set, its drawbacks are not revealed. Every deep learning baseline model needs timely updates through incorporating more data, changes in architecture, and hyperparameter tuning. This work focuses on updating the Conditional Generative Adversarial Network (C-GAN) based epiphyte identification deep learning model by incorporating four different GAN generator architectures and two different loss functions. The four generator architectures used in this task are ResNet-6, ResNet-9, ResNet-50 and ResNet-101. A new annotation method, called background-removed annotation, was tested to analyse the improvement in the epiphyte identification protocol. All the results obtained from the model by changing the above parameters are reported using two common evaluation metrics. Based on the parameter tuning experiment, ResNet-6 and ResNet-9 with binary cross-entropy (BCE) as the loss function attained higher scores; ResNet-6 with MSE as the loss function also performed well. The new annotation with background removal had minimal effect on identifying the epiphytes.
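The two objectives being compared correspond to the standard GAN loss (BCE) and the LSGAN-style loss (MSE). A hedged Keras sketch of the generator side of that choice, under the assumption that the discriminator outputs raw logits:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
mse = tf.keras.losses.MeanSquaredError()

def generator_loss(disc_fake_logits, use_mse=False):
    """Generator tries to make the discriminator output 'real' (ones);
    use_mse switches between the BCE and MSE (LSGAN-style) objectives."""
    target = tf.ones_like(disc_fake_logits)
    return (mse(target, disc_fake_logits) if use_mse
            else bce(target, disc_fake_logits))
```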
APA, Harvard, Vancouver, ISO, and other styles
40

Bramhe, V. S., S. K. Ghosh, and P. K. Garg. "EXTRACTION OF BUILT-UP AREAS USING CONVOLUTIONAL NEURAL NETWORKS AND TRANSFER LEARNING FROM SENTINEL-2 SATELLITE IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3 (April 30, 2018): 79–85. http://dx.doi.org/10.5194/isprs-archives-xlii-3-79-2018.

Full text
Abstract:
With rapid globalization, the extent of built-up areas is continuously increasing. Extraction of features for classifying built-up areas that are more robust and abstract has been a leading research topic for many years. Various studies have been carried out in which spatial information along with spectral features has been utilized to enhance classification accuracy. Still, these feature extraction techniques require a large number of user-specific parameters and are generally application specific. On the other hand, recently introduced Deep Learning (DL) techniques require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring of areas, a Sentinel-2 image has been used in this study for built-up area extraction. In this work, pre-trained Convolutional Neural Networks (ConvNets), i.e. Inception-v3 and VGGNet, are employed for transfer learning. Since these networks are trained on the generic images of the ImageNet dataset, which have very different characteristics from satellite images, the weights of the networks are fine-tuned using data derived from Sentinel-2 images. To compare accuracies with existing shallow networks, two state-of-the-art classifiers, i.e. Gaussian Support Vector Machine (SVM) and Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give 84.31% and 82.86% overall accuracy respectively, while the fine-tuned ConvNets give 89.43% overall accuracy using VGGNet and 92.10% using Inception-v3. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
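A minimal Keras sketch of this kind of fine-tuning, assuming 3-band patches for simplicity (the paper uses 4 channels, which would require adapting the first convolution since ImageNet weights expect 3 channels); patch size, head layers and learning rate are assumptions.

```python
import tensorflow as tf

# ImageNet-pretrained VGG16 fine-tuned for a built-up / not built-up task.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(64, 64, 3))
base.trainable = True  # fine-tune the pretrained weights rather than freeze

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```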
APA, Harvard, Vancouver, ISO, and other styles
41

Radhakrishnan, Nita, Neema Tiwari, Savitri Singh, Jyotsna Madan, Devajit Nath, Usha Bindal, and Ravi Shankar. "‘Hemophagocytosis’ a small yet relevant bone marrow aspirate finding: experience of a tertiary care pediatric centre in India." Asian Journal of Medical Sciences 12, no. 1 (January 1, 2021): 75–80. http://dx.doi.org/10.3126/ajms.v12i1.31067.

Full text
Abstract:
Background: Hemophagocytosis (HS) is an interesting finding observed in bone marrow, lymph nodes, CSF and other parts of the reticuloendothelial system, but at times it is overlooked or not incorporated in reports. Demonstration of hemophagocytosis is one criterion in the diagnosis of Hemophagocytic Lymphohistiocytosis (HLH). Aims and Objective: To evaluate hemophagocytosis as an important finding in pediatric bone marrows with different clinical diagnoses. Materials and Methods: A retrospective descriptive analysis was carried out of the bone marrow aspirates of 73 patients showing any degree of hemophagocytosis (out of 440 bone marrow aspirates), retrieved from the archives of the Department of Pathology during the period from May 2017 to May 2020. Only cases where microscopic examination revealed hemophagocytosis (73 cases) were included in the study. Results: On analysing the data of the 73 bone marrow aspirates, 11 cases (1 primary, 10 secondary) were confirmed clinicopathologically as Hemophagocytic Lymphohistiocytosis; 9 cases (2 metastasis, 4 infective, 1 acute leukemia, 1 nutritional deficiency and 1 hypocellular marrow with degenerative changes) were not suspected to have HLH clinically but showed serum ferritin >500 ng/mL and bone marrow aspirate hemophagocytosis, favouring a diagnosis of secondary HLH (WHO 2004). Conclusion: We present a spectrum of differential diagnoses presenting with hemophagocytosis in the pediatric population and its clinico-biochemical correlation in assessing progression to HLH.
APA, Harvard, Vancouver, ISO, and other styles
42

Alidoost, F., H. Arefi, and F. Tombari. "BUILDING OUTLINE EXTRACTION FROM AERIAL IMAGES USING CONVOLUTIONAL NEURAL NETWORKS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 57–61. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-57-2019.

Full text
Abstract:
Abstract. Automatic detection and extraction of buildings from aerial images are considerable challenges in many applications, including disaster management, navigation, urbanization monitoring, emergency response, and 3D city mapping and reconstruction. The most important problem, however, is to precisely localize buildings from single aerial images where there is no additional information such as LiDAR point cloud data or high-resolution Digital Surface Models (DSMs). In this paper, a Deep Learning (DL)-based approach is proposed to localize buildings, estimate relative height information, and extract building boundaries using a single aerial image. In order to detect buildings and extract their bounding boxes, a Fully Connected Convolutional Neural Network (FC-CNN) is trained to classify building and non-building objects. We also introduce a novel Multi-Scale Convolutional-Deconvolutional Network (MS-CDN), including skip connection layers, to predict normalized DSMs (nDSMs) from a single image. The extracted bounding boxes as well as the predicted nDSMs are then employed by an Active Contour Model (ACM) to provide precise building boundaries. The experiments show that, even with noise in the predicted nDSMs, the proposed method performs well on single aerial images with different building shapes. The quality rate for building detection is about 86% and the RMSE for nDSM prediction is about 4 m. The accuracy of boundary extraction is about 68%. Since the proposed framework is based on a single image, it could be employed for real-time applications.
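A sketch of the final refinement step under stated assumptions: scikit-image's snake-based active contour is used as a stand-in for the authors' ACM, initialised as an ellipse inscribed in a detected bounding box; the snake parameters are illustrative.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_building_boundary(image_gray, bbox):
    """Snap a rough bounding box to a building outline with a snake.
    bbox = (row0, col0, row1, col1); returns (N, 2) contour points."""
    r0, c0, r1, c1 = bbox
    t = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([(r0 + r1) / 2 + (r1 - r0) / 2 * np.sin(t),
                            (c0 + c1) / 2 + (c1 - c0) / 2 * np.cos(t)])
    return active_contour(gaussian(image_gray, 3), init,
                          alpha=0.015, beta=10.0, gamma=0.001)
```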
APA, Harvard, Vancouver, ISO, and other styles
43

Clini, P., R. Nespeca, and L. Ruggeri. "VIRTUAL IN REAL. INTERACTIVE SOLUTIONS FOR LEARNING AND COMMUNICATION IN THE NATIONAL ARCHAEOLOGICAL MUSEUM OF MARCHE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5/W1 (May 17, 2017): 647–54. http://dx.doi.org/10.5194/isprs-archives-xlii-5-w1-647-2017.

Full text
Abstract:
Today ICTs are favourable additions to museum exhibitions. This work aims to realize an innovative system of digital exploitation of artefacts in the National Archaeological Museum of Marche (MANaM), in order to create a shared museum that will improve the knowledge of cultural contents through the paradigms of "learning by interacting" and "edutainment". The main novelty is the implementation of stand-alone multimedia installations for digital artefacts that combine real and virtual scenarios in order to enrich experience, knowledge and multi-sensory perception. A Digital Library (DL) is created using Close Range Photogrammetry (CRP) techniques applied to 21 archaeological artefacts belonging to different categories. Enriched with other data (texts, images, multimedia), all 3D models flow into a cloud data server from which they are recalled in the individual exhibitions. In particular, we have chosen three types of technological solutions: VISUAL, TACTILE and SPATIAL. All the solutions take into account the possibility of group interaction, allowing an appropriate number of users to participate. Sharing the experience enables greater involvement, generating a communicative effectiveness much higher than that of a solitary visit. From the "Museum Visitors Behaviour Analysis" we obtain a survey of users' needs and of the efficiency of the interactive solutions. The main result of this work is the educational impact in terms of an increase in visitors, especially students, increased learning of historical and cultural content, and greater user involvement during the visit to the museum.
APA, Harvard, Vancouver, ISO, and other styles
44

Capone, M., and E. Lanzara. "PARAMETRIC LIBRARY FOR RIBBED VAULTS INDEXING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-1-2021 (August 28, 2021): 107–15. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-1-2021-107-2021.

Full text
Abstract:
Abstract. This paper presents part of a broader research project on ribbed vaults. The main goal is to generate a parametric object library for ribbed vaults, suitable for HBIM systems, for structural analysis and for Cultural Heritage dissemination. Starting from the study of treatises, we analyzed the different classification systems and the different terminology used for ribbed vault components in different languages, especially English, French, Spanish and Italian; our aim is to improve multilingual vocabularies. In our research we defined an experimental workflow to generate a library of ribbed vaults based on the geometric rules from treatises and a controlled vocabulary; comparing these 3D models with point clouds allows us to identify the rule used, or to define a new rule, and therefore to build complex parametric models based on reality-based surveys. We are improving our parametric model using different geometric rules from Spanish, French and English manuals. We can generate the reality-based model using the same parametric model; in this case the input data is the rib geometry extracted from the point cloud. We use a generative tool to analyze the curves from the point cloud and to draw the borders. We are going to test our tool on several case studies for historical architectural element indexing, for geometry reconstruction in an HBIM environment, and for point cloud segmentation in a DL process.
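To make "geometric rules" concrete, here is one hypothetical parametric rule (not taken from the treatises the authors used): a two-centred pointed arch rib generated from its span and rise, the kind of primitive such a library could expose.

```python
import numpy as np

def pointed_arch_rib(span, rise, n=50):
    """Two-centred pointed arch as a polyline: two circular arcs meeting at
    the apex, derived from span and rise (an illustrative rule only)."""
    half = span / 2.0
    c = (rise**2 - half**2) / (2 * half)   # arc centre offset on springing line
    r = c + half                           # arc radius through springing + apex
    theta = np.linspace(np.pi, np.arctan2(rise, -c), n)
    left = np.column_stack([c + r * np.cos(theta), r * np.sin(theta)])
    right = left[::-1] * np.array([-1.0, 1.0])  # mirror for the second arc
    return np.vstack([left, right])
```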
APA, Harvard, Vancouver, ISO, and other styles
45

Giri, A., V. V. Sajith Variyar, V. Sowmya, R. Sivanpillai, and K. P. Soman. "MULTIPLE OIL PAD DETECTION USING DEEP LEARNING." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-2-2022 (July 25, 2022): 91–96. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-2-2022-91-2022.

Full text
Abstract:
Abstract. Deep learning (DL) algorithms are widely used for detecting objects such as roads, vehicles and buildings in aerial images. However, object detection is still considered challenging for complex structures, and oil pads are one such example due to their shape, orientation and background reflection. A recent study used the Faster Region-based Convolutional Neural Network (FR-CNN) to detect a single oil pad at the center of an image of size 256 × 256. For real-time applications, however, it is necessary to detect multiple oil pads in aerial images irrespective of their orientation. In this study, FR-CNN was trained to detect multiple oil pads. We cropped images containing multiple oil pads from high spatial resolution images to train the model. The network was trained for 100 epochs using 164 training images and tested with 50 images in 3 different categories: images containing a single oil pad, multiple oil pads, and no oil pad. Model performance was evaluated using standard metrics: precision, recall and F1 score. The final model trained for multiple oil pad detection achieved, as a weighted average over the 50 test images, a precision of 0.67, a recall of 0.80, and an F1 score of 0.73. The 0.80 recall score indicates that 80% of the oil pads in the test set were identified. The presence in the test images of features such as cleared areas, rock structures and sand patterns with high visual similarity to the target resulted in a low precision score.
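For reference, the metrics quoted above follow directly from true-positive, false-positive and missed-detection counts; the counts in the example are illustrative values chosen to be consistent with the reported scores, not the study's actual tallies.

```python
def detection_scores(tp, fp, fn):
    """Standard detection metrics from true/false positives and misses."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: precision 0.67, recall 0.80, F1 about 0.73.
print(detection_scores(tp=40, fp=20, fn=10))
```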
APA, Harvard, Vancouver, ISO, and other styles
46

Udawalpola, M. R., C. Witharana, A. Hasan, A. Liljedahl, M. Ward Jones, and B. Jones. "AUTOMATED RECOGNITION OF PERMAFROST DISTURBANCES USING HIGH-SPATIAL RESOLUTION SATELLITE IMAGERY AND DEEP LEARNING MODELS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-2-2022 (July 25, 2022): 203–8. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-2-2022-203-2022.

Full text
Abstract:
Abstract. The accelerated warming of the high Arctic has intensified the extensive thawing of permafrost. Retrogressive thaw slumps (RTSs) are considered the most active landforms in Arctic permafrost, and an increase in RTSs has been observed in the Arctic in recent decades. Continuous monitoring of RTSs is important to understand climate change-driven disturbances in the region. Manual detection of these landforms is extremely difficult as they occur over exceptionally large areas. Only very few studies have explored the utility of very high spatial resolution (VHSR) commercial satellite imagery for the automated mapping of RTSs. We have developed a deep learning (DL) convolutional neural network (CNN) based workflow to automatically detect RTSs from VHSR satellite imagery. This study systematically compared the performance of different DLCNN model architectures and varying backbones. Our candidate CNN models include DeepLabV3+, UNet, UNet++, Multi-scale Attention Net (MA-Net), and Pyramid Attention Network (PAN) with ResNet50, ResNet101 and ResNet152 backbones. The RTS modeling experiment was conducted on Banks Island and Ellesmere Island in Canada. The UNet++ model demonstrated the highest accuracy (F1 score of 87%) with the ResNet50 backbone, at the expense of training and inferencing time. The PAN, DeepLabV3+, MA-Net, and UNet models reported moderate F1 scores of 72%, 75%, 80%, and 81% respectively. Our findings unravel the performances of different DLCNNs in imagery-enabled RTS mapping and provide useful insights for operationalizing the mapping application across the Arctic.
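All five candidate architectures and the ResNet backbones are available in the segmentation_models_pytorch package; whether the authors used that package is an assumption, but the sketch below shows how such a comparison can be set up with it.

```python
import segmentation_models_pytorch as smp

def make_model(arch, backbone="resnet50", classes=1):
    """Instantiate one candidate architecture with a chosen ResNet encoder."""
    archs = {
        "unet": smp.Unet,
        "unet++": smp.UnetPlusPlus,
        "deeplabv3+": smp.DeepLabV3Plus,
        "manet": smp.MAnet,
        "pan": smp.PAN,
    }
    return archs[arch](encoder_name=backbone,
                       encoder_weights="imagenet",
                       in_channels=3, classes=classes)

model = make_model("unet++", "resnet50")  # the best-scoring pair in the study
```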
APA, Harvard, Vancouver, ISO, and other styles
47

Witharana, C., M. A. E. Bhuiyan, and A. K. Liljedahl. "BIG IMAGERY AND HIGH PERFORMANCE COMPUTING AS RESOURCES TO UNDERSTAND CHANGING ARCTIC POLYGONAL TUNDRA." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIV-M-2-2020 (November 17, 2020): 111–16. http://dx.doi.org/10.5194/isprs-archives-xliv-m-2-2020-111-2020.

Full text
Abstract:
Abstract. Permafrost thaw has been observed at several locations across the Arctic tundra in recent decades; however, the pan-Arctic extent and spatiotemporal dynamics of thaw remain poorly explained. Thaw-induced differential ground subsidence and dramatic microtopographic transitions, such as the transformation of low-centered ice-wedge polygons (IWPs) into high-centered IWPs, can be characterized using very high spatial resolution (VHSR) commercial satellite imagery. Arctic researchers demand an accurate estimate of the distribution of IWPs and their status across the tundra domain. The entire Arctic has been imaged at 0.5 m resolution by commercial satellite sensors; however, mapping efforts are still limited to small scales and confined to manual or semi-automated methods. Knowledge discovery through artificial intelligence (AI), big imagery, and high performance computing (HPC) resources is just starting to be realized in Arctic science. Large-scale deployment of VHSR imagery resources requires sophisticated computational approaches to automated image interpretation coupled with efficient use of HPC resources. We are in the process of developing an automated Mapping Application for Permafrost Land Environment (MAPLE) by combining big imagery, AI, and HPC resources. MAPLE uses deep learning (DL) convolutional neural network (CNN) algorithms on HPCs to automatically map IWPs from VHSR commercial satellite imagery across large geographic domains. We trained and tasked a DLCNN semantic object instance segmentation algorithm to automatically classify IWPs from VHSR satellite imagery. Overall, our findings demonstrate the robust performance of the IWP mapping algorithm in diverse tundra landscapes and lay a firm foundation for its operational-level application in the repeated documentation of circumpolar permafrost disturbances.
APA, Harvard, Vancouver, ISO, and other styles
48

Joshi, D., and C. Witharana. "ROADSIDE FOREST MODELING USING DASHCAM VIDEOS AND CONVOLUTIONAL NEURAL NETS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-M-2-2022 (July 25, 2022): 135–40. http://dx.doi.org/10.5194/isprs-archives-xlvi-m-2-2022-135-2022.

Full text
Abstract:
Abstract. Tree failure is a primary cause of storm-related power outages throughout the United States. Roadside vegetation management is therefore critical for electric utility companies to prevent power outages during extreme weather conditions. It is difficult to execute roadside vegetation management practices at the landscape level without proper monitoring of the physical structure and health condition of roadside forests. Remote sensing images and LiDAR are widely used to characterize the forest edge; however, the limited temporal and spatial resolution of most of those datasets is a big challenge. There is also a need for a ground-level dataset that provides the vertical profile of forest trees, so that forest structure and health can be characterized more accurately and optimal management strategies recommended according to local forest conditions. For the first time, we introduce dashcam videos as an alternative to existing aerial remote sensing data sources for characterizing roadside forest condition using deep learning (DL) convolutional neural network (CNN) algorithms. In this study, we used dashcam videos taken along the roadside during leaf-on and leaf-off conditions and in various weather conditions. We trained DLCNN models based on the U-Net and YOLOv5 architectures to classify multilayer vegetation and to detect utility poles and tree trunks alongside the road. Our experimental results suggest that a dashcam can be a viable, complementary way to characterize roadside vegetation and can be used in the management of roadside forests as a cost-effective data acquisition mechanism for utility companies.
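As a hedged illustration of the detection half of this pipeline, the sketch below loads a stock YOLOv5 model via torch.hub and runs it on a single frame; the authors' custom pole/trunk weights are not public here, so the pretrained COCO model and the file name are stand-ins.

```python
import torch

# Off-the-shelf YOLOv5 as a stand-in for the trained pole/trunk detector.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("dashcam_frame.jpg")    # path, URL, or numpy image
detections = results.pandas().xyxy[0]   # boxes, confidences, class names
print(detections[["name", "confidence"]].head())
```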
APA, Harvard, Vancouver, ISO, and other styles
49

Nurunnabi, A., F. N. Teferle, D. F. Laefer, R. C. Lindenbergh, and A. Hunegnaw. "A TWO-STEP FEATURE EXTRACTION ALGORITHM: APPLICATION TO DEEP LEARNING FOR POINT CLOUD CLASSIFICATION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-2/W1-2022 (February 25, 2022): 401–8. http://dx.doi.org/10.5194/isprs-archives-xlvi-2-w1-2022-401-2022.

Full text
Abstract:
Abstract. Most deep learning (DL) methods that are not end-to-end use several multi-scale and multi-type hand-crafted features that make the network challenging, more computationally intensive, and vulnerable to overfitting. Furthermore, reliance on empirically-based feature dimensionality reduction may lead to misclassification. In contrast, efficient feature management can reduce storage and computational complexity, build better classifiers, and improve overall performance. Principal Component Analysis (PCA) is a well-known dimension reduction technique that has been used for feature extraction. This paper presents a two-step PCA-based feature extraction algorithm that employs a variant of feature-based PointNet (Qi et al., 2017a) for point cloud classification. The paper extends the PointNet framework for use on large-scale aerial LiDAR data, and contributes by (i) developing a new feature extraction algorithm, (ii) exploring the impact of dimensionality reduction in feature extraction, and (iii) introducing a non-end-to-end PointNet variant for per-point classification in point clouds. This is demonstrated on aerial laser scanning (ALS) point clouds. The algorithm successfully reduces the dimension of the feature space without sacrificing performance, as benchmarked against the original PointNet algorithm. When tested on the well-known Vaihingen data set, the proposed algorithm achieves an Overall Accuracy (OA) of 74.64% using 9 input vectors and 14 shape features, whereas with the same 9 input vectors and only 5 PCs (principal components built from the 14 shape features) it achieves a higher OA of 75.36%, which demonstrates the effect of efficient dimensionality reduction.
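The dimensionality-reduction step in miniature: compress 14 per-point shape features to 5 principal components before classification. The random matrix below is a placeholder for the paper's actual shape features.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for a real (num_points, 14) matrix of hand-crafted shape features.
shape_features = np.random.rand(100_000, 14)

pca = PCA(n_components=5)
pcs = pca.fit_transform(shape_features)      # (100000, 5) input to the classifier
print(pca.explained_variance_ratio_.sum())   # variance retained by the 5 PCs
```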
APA, Harvard, Vancouver, ISO, and other styles
50

Hoang, Thien, Luceri Patricia, and Valentina Del Signore. "RF35 | PSAT304 A Case of COVID-19 Infection; COVID-19 mRNA-Based Vaccine Induced Human Anti-Mouse Antibodies (HAMA) Interfering With TSH Testing." Journal of the Endocrine Society 6, Supplement_1 (November 1, 2022): A863—A864. http://dx.doi.org/10.1210/jendso/bvac150.1785.

Full text
Abstract:
Abstract. A 71-year-old male presented to the endocrine clinic for evaluation of hypothyroidism which had recently been difficult for his primary care physician to control. He initially developed hypothyroidism in 1990 after neck radiation for squamous cell carcinoma of his sternocleidomastoid muscle. He had previously been controlled for many years on levothyroxine 150 mcg daily, with TSH readings in the normal reference range. The patient was infected with COVID-19 in 2020 and was hospitalized for 10 days. He was treated with 5 days of Remdesivir and 10 days of Decadron. Six months after the infection he received an mRNA COVID-19 vaccination. Upon his first visit to the endocrine clinic after COVID-19 infection and vaccination, his TSH was elevated at 90 uIU/mL with an elevated free thyroxine of 1.8 ng/dL, even after multiple attempts by his primary care physician to increase his levothyroxine, which was up to 225 mcg daily before he was referred to endocrinology. He is compliant with his levothyroxine and takes it on an empty stomach first thing in the morning without any of his other medications. He is not taking iron supplements, biotin or any other supplements or new medications. TSH values had been normal 1 month prior to his COVID-19 infection and were first noted to be abnormal 8 months after his COVID-19 infection and 2 months after his mRNA COVID-19 vaccine. No TSH was drawn between his COVID-19 infection and his vaccination. It was suspected that he might have a factitious elevation in his TSH due to antibody interference. Labs were drawn for TSH with human anti-mouse antibody (HAMA) treatment, which resulted in a markedly suppressed TSH of 0.007 uIU/mL with an elevated free thyroxine of 1.94 ng/dL. HAMA antibodies came back elevated at 552 ng/mL. References: Klee, G., 2000. Human Anti-Mouse Antibodies. Archives of Pathology & Laboratory Medicine, 124(6), pp. 921-923. Koshida, S., Asanuma, K., Kuribayashi, K., Goto, M., Tsuji, N., Kobayashi, D., Tanaka, M. and Watanabe, N., 2010. Prevalence of human anti-mouse antibodies (HAMAs) in routine examinations. Clinica Chimica Acta, 411(5-6), pp. 391-394. Presentation: Saturday, June 11, 2022, 1:00 p.m. - 3:00 p.m.; Monday, June 13, 2022, 1:00 p.m. - 1:05 p.m.
APA, Harvard, Vancouver, ISO, and other styles