
Journal articles on the topic "Geo-processing workflow"


Consult the top 45 journal articles for research on the topic "Geo-processing workflow".


1. Schäffer, Bastian, and Theodor Foerster. "A client for distributed geo-processing and workflow design." Journal of Location Based Services 2, no. 3 (September 2008): 194–210. http://dx.doi.org/10.1080/17489720802558491.

2. Chen, Nengcheng, Liping Di, Genong Yu, and Jianya Gong. "Geo-processing workflow driven wildfire hot pixel detection under sensor web environment." Computers & Geosciences 36, no. 3 (March 2010): 362–72. http://dx.doi.org/10.1016/j.cageo.2009.06.013.

3. Lemmens, R., B. Toxopeus, L. Boerboom, M. Schouwenburg, B. Retsios, W. Nieuwenhuis, and C. Mannaerts. "IMPLEMENTATION OF A COMPREHENSIVE AND EFFECTIVE GEOPROCESSING WORKFLOW ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W8 (July 11, 2018): 123–27. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w8-123-2018.

Abstract: Many projects and research efforts implement geo-information (GI) workflows, ranging from very basic ones to complicated software processing chains. The creation of these workflows normally needs considerable expertise and sharing them is often hampered by undocumented and non-interoperable geoprocessing implementations. We believe that the visual representation of workflows can help in the creation, sharing and understanding of software processing of geodata. In our efforts we aim at bridging abstract and concrete workflow representations for the sake of easing the creation and sharing of simple geoprocessing logic within and across projects.

We have implemented a first version of our workflow approach in one of our current projects. MARIS, the Mara Rangeland Information System, is being developed in the Mau Mara Serengeti Sustainable Water Initiative (MaMaSe). It is a web client that uses the Integrated Land and Water Information System (ILWIS), our open source Remote Sensing and GIS software. It aims to integrate historic, near real time and near future forecast of rainfall, biomass, carrying capacity and livestock market information for the sustainable management of rangelands by conservancies in the Maasai Mara in Kenya. More importantly it aims to show results of a carrying capacity model implemented in a comprehensive geoprocessing workflow.

In this paper we briefly describe our software and show the workflow implementation strategy and discuss the innovative aspects of our approach as well as our project evaluation and the opportunities for further grounding of our software development.
4. Li, Chunlin, Jun Liu, Min Wang, and Youlong Luo. "Fault-tolerant scheduling and data placement for scientific workflow processing in geo-distributed clouds." Journal of Systems and Software 187 (May 2022): 111227. http://dx.doi.org/10.1016/j.jss.2022.111227.

5. Chen, Nengcheng, Liping Di, Genong Yu, and Jianya Gong. "Automatic On-Demand Data Feed Service for AutoChem Based on Reusable Geo-Processing Workflow." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 3, no. 4 (December 2010): 418–26. http://dx.doi.org/10.1109/jstars.2010.2049094.

6. Chen, Wuhui, Incheon Paik, and Patrick C. K. Hung. "Transformation-Based Streaming Workflow Allocation on Geo-Distributed Datacenters for Streaming Big Data Processing." IEEE Transactions on Services Computing 12, no. 4 (July 1, 2019): 654–68. http://dx.doi.org/10.1109/tsc.2016.2614297.

7. Toschi, I., E. Nocerino, F. Remondino, A. Revolti, G. Soria, and S. Piffer. "GEOSPATIAL DATA PROCESSING FOR 3D CITY MODEL GENERATION, MANAGEMENT AND VISUALIZATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 527–34. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-527-2017.

Abstract: Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above – http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.
8. Iacone, Brooke, Ginger R. H. Allington, and Ryan Engstrom. "A Methodology for Georeferencing and Mosaicking Corona Imagery in Semi-Arid Environments." Remote Sensing 14, no. 21 (October 27, 2022): 5395. http://dx.doi.org/10.3390/rs14215395.

Abstract:
High-resolution Corona imagery acquired by the United States through spy missions in the 1960s presents an opportunity to gain critical insight into historic land cover conditions and expand the timeline of available data for land cover change analyses, particularly in regions such as Northern China where data from that era are scarce. Corona imagery requires time-intensive pre-processing, and the existing literature lacks the necessary detail required to replicate these processes easily. This is particularly true in landscapes where dynamic physical processes, such as aeolian desertification, reshape topography over time or regions with few persistent features for use in geo-referencing. In this study, we present a workflow for georeferencing Corona imagery in a highly desertified landscape that contained mobile dunes, shifting vegetation cover, and a few reference points. We geo-referenced four Corona images from Inner Mongolia, China using uniquely derived ground control points and Landsat TM imagery with an overall accuracy of 11.77 m, and the workflow is documented in sufficient detail for replication in similar environments.
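As an aside, the GCP-based georeferencing step this abstract describes can be sketched with GDAL's Python bindings. The snippet below is a generic illustration rather than the authors' code: the file names, pixel/map coordinates and the UTM zone are invented placeholders.

```python
# All file names, coordinates and the EPSG code below are invented.
from osgeo import gdal

src = gdal.Open("corona_frame.tif")

# gdal.GCP(map_x, map_y, elevation, pixel, line): image points matched to
# reference coordinates, e.g. derived from rectified Landsat TM scenes.
gcps = [
    gdal.GCP(423500.0, 4782300.0, 0, 1204.5, 870.0),
    gdal.GCP(431900.0, 4779150.0, 0, 5330.0, 1912.5),
    gdal.GCP(427420.0, 4771080.0, 0, 3101.0, 6045.0),
    # ... more well-distributed points
]

tmp = gdal.Translate("corona_gcps.tif", src, GCPs=gcps, outputSRS="EPSG:32650")
# Second-order polynomial warp; tps=True (thin plate spline) is an alternative
# where shifting dunes leave few stable features.
gdal.Warp("corona_georef.tif", tmp, dstSRS="EPSG:32650",
          polynomialOrder=2, resampleAlg="cubic")
```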
9. Lucas, G. "CONSIDERING TIME IN ORTHOPHOTOGRAPHY PRODUCTION: FROM A GENERAL WORKFLOW TO A SHORTENED WORKFLOW FOR A FASTER DISASTER RESPONSE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 19, 2015): 249–55. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-249-2015.

Abstract: This article deals with production time of orthophoto imagery with a medium-size digital frame camera. The workflow examination follows two main parts: data acquisition and post-processing. The objectives of the research are fourfold: 1) gathering time references for the most important steps of orthophoto production (it turned out that literature is missing on this topic); these figures are used later for total production time estimation; 2) identifying levers for reducing orthophoto production time; 3) building a simplified production workflow for emergency response, less exigent with accuracy and faster, and comparing it to a classical workflow; 4) providing methodical elements for the estimation of production time for a custom project.

In the data acquisition part a comprehensive review lists and describes all the factors that may affect the acquisition efficiency. Using a simulation with different variables (average line length, time of the turns, flight speed), their effect on acquisition efficiency is quantitatively examined.

Regarding post-processing, the time reference figures were collected from the processing of a 1000-frame case study with 15 cm GSD covering a rectangular area of 447 km²; the time required to achieve each step during the production is written down. When several technical options are possible, each one is tested and its time documented so that all alternatives are available. Based on a technical choice within the workflow and using the compiled time references of the elementary steps, a total time is calculated for the post-processing of the 1000 frames. Two scenarios are compared as regards time and accuracy. The first one follows the "normal" practices, comprising triangulation, orthorectification and advanced mosaicking methods (feature detection, seam line editing and seam applicator); the second is simplified and makes compromises over positional accuracy (using direct geo-referencing) and seamline preparation in order to achieve orthophoto production faster. The shortened workflow reduces the production time by a factor of more than three, whereas the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation through the production process shows that it is worth sparing time in the post-processing phase.
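The acquisition-efficiency simulation mentioned in this abstract (average line length, turn time, flight speed) reduces to simple arithmetic. A toy model with made-up numbers, illustrating why longer flight lines amortise the overhead of turns:

```python
# Illustrative back-of-the-envelope model of photo-flight acquisition time,
# in the spirit of the simulation described above; all numbers are invented.
def acquisition_time_h(line_count: int, line_length_km: float,
                       speed_kmh: float, turn_time_min: float) -> float:
    """Total flight time: straight-line photography plus turns between lines."""
    flying_h = line_count * line_length_km / speed_kmh
    turning_h = (line_count - 1) * turn_time_min / 60.0
    return flying_h + turning_h

# Same 400 line-km of coverage, two layouts: turns dominate the difference.
print(acquisition_time_h(40, 10.0, 220.0, 2.5))  # 40 short lines  -> ~3.44 h
print(acquisition_time_h(20, 20.0, 220.0, 2.5))  # 20 long lines   -> ~2.61 h
```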
10. Liakos, Leonidas, and Panos Panagos. "Challenges in the Geo-Processing of Big Soil Spatial Data." Land 11, no. 12 (December 13, 2022): 2287. http://dx.doi.org/10.3390/land11122287.

Abstract:
This study addressed a critical resource—soil—through the prism of processing big data at the continental scale. Rapid progress in technology and remote sensing has majorly improved data processing on extensive spatial and temporal scales. Here, the manuscript presents the results of a systematic effort to geo-process and analyze soil-relevant data. In addition, the main highlights include the difficulties associated with using data infrastructures, managing big geospatial data, decentralizing operations through remote access, mass processing, and automating the data-processing workflow using advanced programming languages. Challenges to this study included the reproducibility of the results, their presentation in a communicative way, and the harmonization of complex heterogeneous data in space and time based on high standards of accuracy. Accuracy was especially important as the results needed to be identical at all spatial scales (from point counts to aggregated countrywide data). The geospatial modeling of soil requires analysis at multiple spatial scales, from the pixel level, through multiple territorial units (national or regional), and river catchments, to the global scale. Advanced mapping methods (e.g., zonal statistics, map algebra, choropleth maps, and proportional symbols) were used to convey comprehensive and substantial information that would be of use to policymakers. More specifically, a variety of cartographic practices were employed, including vector and raster visualization and hexagon grid maps at the global or European scale and in several cartographic projections. The information was rendered in both grid format and as aggregated statistics per polygon (zonal statistics), combined with diagrams and an advanced graphical interface. The uncertainty was estimated and the results were validated in order to present the outputs in the most robust way. The study was also interdisciplinary in nature, requiring large-scale datasets to be integrated from different scientific domains, such as soil science, geography, hydrology, chemistry, climate change, and agriculture.
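For readers who want to reproduce the kind of per-polygon aggregation ("zonal statistics") highlighted in this abstract, a minimal sketch using the rasterstats package follows; the layer and raster names are placeholders, not the study's data.

```python
from rasterstats import zonal_stats

# Mean value and valid-pixel count of a soil raster per polygon (e.g. country).
stats = zonal_stats("countries.gpkg", "soil_indicator.tif",
                    stats=["mean", "count"])

for i, s in enumerate(stats):
    print(i, s["mean"], s["count"])
```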
11. Stoter, Jantien, Vincent van Altena, Marc Post, Dirk Burghardt, and Cecile Duchêne. "AUTOMATED GENERALISATION WITHIN NMAs IN 2016." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B4 (June 14, 2016): 647–52. http://dx.doi.org/10.5194/isprs-archives-xli-b4-647-2016.

Abstract: Producing maps and geo-data at different scales is traditionally one of the main tasks of National (and regional) Mapping Agencies (NMAs). The derivation of low-scale maps (i.e. with less detail) from large-scale maps (with more detail), i.e. generalisation, used to be a manual task of cartographers. With the need for more up-to-date data as well as the development of automated generalisation solutions in both research and industry, NMAs are implementing automated generalisation production lines. To exchange experiences and identify remaining issues, a workshop was organised at the end of 2015 by the Commission on Generalisation and Multirepresentation of the International Cartographic Association and the Commission on Modelling and Processing of the European Spatial Data Research. This paper reports on the workshop outcomes. It shows that most NMAs have implemented some form of automation in their workflows, ranging from generalisation of certain features within an otherwise manual workflow, through semi-automated editing and generalisation, to fully automated procedures.
12. Tobak, Zalán, József Szatmári, and Boudewijn Van Leeuwen. "Small Format Aerial Photography." Journal of Environmental Geography 1, no. 3-4 (July 1, 2008): 21–26. http://dx.doi.org/10.14232/jengeo-2008-43861.

Abstract:
Since February 2008, an advanced system has been developed to acquire digital images in the visible to near infrared wavelengths. Using this system, it is possible to acquire data for a large variety of applications. The core of the system consists of a Duncantech MS3100 CIR (Color-InfraRed) multi-spectral camera. The main advantages of the system are its affordability and flexibility; within an hour the system can be deployed against very competitive costs. In several steps, using ArcGIS, Python and Avenue scripts, the raw data is semi-automatically processed into geo-referenced mosaics. This paper presents the parts of the system, the image processing workflow and several potential applications of the images.
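The final mosaicking step of such a workflow can be sketched in a few lines of modern Python; note this uses rasterio rather than the ArcGIS/Avenue scripts named in the abstract, and all paths are placeholders.

```python
import glob

import rasterio
from rasterio.merge import merge

# Merge previously georeferenced CIR frames (sharing one CRS) into a mosaic.
frames = [rasterio.open(p) for p in sorted(glob.glob("georef_frames/*.tif"))]
mosaic, transform = merge(frames)

profile = frames[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               transform=transform)

with rasterio.open("mosaic_cir.tif", "w", **profile) as dst:
    dst.write(mosaic)
```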
13. Peng, Li, Fei Fei Tang, Zhi Yue Zhou, Xing Liu, and Zhi Min Ruan. "Applying Lightweight UAV in Landslide Monitoring." Applied Mechanics and Materials 738-739 (March 2015): 738–45. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.738.

Abstract: With the advantages of small size, cost efficiency, low noise, energy saving, fine definition and high current, UAVs have been widely used in various fields such as military, agriculture, forestry, meteorology, environment, etc. Moreover, this technique can obtain large-area, large-angle and 3D surface information without the shadows resulting from cloud cover that are a common shortcoming of satellite images, so it is also widespread in geological hazard monitoring. In this article, applications of hazard monitoring using UAVs are reviewed first; then, according to the needs of landslide monitoring, and based on UAV data processing for a geological hazard in northeast Chongqing on 31 August 2014, a route planning method for lightweight, low-altitude UAVs in mountainous districts is introduced, together with the workflow and relevant experiences of UAV data processing using Agisoft PhotoScan software. After data processing, both a geo-referenced DSM and a DOM of the landslide are obtained by interpolation with the assistance of chosen GPS control points. The interpolated topographic map of the landslide can provide important information for geological hazard monitoring and emergency relief.
14. Rumpler, M., S. Daftry, A. Tscharf, R. Prettenthaler, C. Hoppe, G. Mayer, and H. Bischof. "Automated End-to-End Workflow for Precise and Geo-accurate Reconstructions using Fiducial Markers." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-3 (August 7, 2014): 135–42. http://dx.doi.org/10.5194/isprsannals-ii-3-135-2014.

Abstract: Photogrammetric computer vision systems have been well established in many scientific and commercial fields during the last decades. Recent developments in image-based 3D reconstruction systems in conjunction with the availability of affordable high quality digital consumer grade cameras have resulted in an easy way of creating visually appealing 3D models. However, many of these methods require manual steps in the processing chain and for many photogrammetric applications such as mapping, recurrent topographic surveys or architectural and archaeological 3D documentations, high accuracy in a geo-coordinate system is required which often cannot be guaranteed. Hence, in this paper we present and advocate a fully automated end-to-end workflow for precise and geo-accurate 3D reconstructions using fiducial markers. We integrate an automatic camera calibration and georeferencing method into our image-based reconstruction pipeline based on binary-coded fiducial markers as artificial, individually identifiable landmarks in the scene. Additionally, we facilitate the use of these markers in conjunction with known ground control points (GCP) in the bundle adjustment, and use an online feedback method that allows assessment of the final reconstruction quality in terms of image overlap, ground sampling distance (GSD) and completeness, and thus provides flexibility to adopt the image acquisition strategy already during image recording. An extensive set of experiments is presented which demonstrates the accuracy benefits of obtaining a highly accurate and geographically aligned reconstruction with an absolute point position uncertainty of about 1.5 times the ground sampling distance.
15. Wang, J., and R. Lindenbergh. "VALIDATING A WORKFLOW FOR TREE INVENTORY UPDATING WITH 3D POINT CLOUDS OBTAINED BY MOBILE LASER SCANNING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1163–68. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1163-2018.

Abstract: Urban trees are an important component of our environment and ecosystem. Trees are able to combat climate change, clean the air and cool the streets and city. Tree inventory and monitoring are of great interest for biomass estimation and change monitoring. Conventionally, parameters of trees are manually measured and documented in situ, which is not efficient regarding labour and costs. Light Detection And Ranging (LiDAR) has become a well-established surveying technique for the acquisition of geo-spatial information. Combined with automatic point cloud processing techniques, this in principle enables the efficient extraction of geometric tree parameters. In recent years, studies have investigated to what extent it is possible to perform tree inventories using laser scanning point clouds. Given the availability of the city of Delft open data tree repository, we are now able to present, validate and extend a workflow to automatically obtain tree data, from tree location to tree species. The results of a test over 47 trees show that the proposed methods in the workflow are able to extract individual urban trees. The tree species classification results based on the extracted tree parameters show that only one tree was wrongly classified using k-means clustering.
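The classification step described here (k-means clustering on extracted tree parameters) can be illustrated with scikit-learn. The feature set, file name and k below are assumptions for illustration, not necessarily the paper's exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows: trees; columns: e.g. height, crown diameter, trunk diameter,
# crown base height (hypothetical CSV of per-tree parameters)
X = np.loadtxt("tree_parameters.csv", delimiter=",", skiprows=1)

X_scaled = StandardScaler().fit_transform(X)   # parameters have unlike units
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))  # cluster sizes, to compare against the inventory
```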
16. Tavani, Stefano, Antonio Pignalosa, Amerigo Corradetti, Marco Mercuri, Luca Smeraglia, Umberto Riccardi, Thomas Seers, Terry Pavlis, and Andrea Billi. "Photogrammetric 3D Model via Smartphone GNSS Sensor: Workflow, Error Estimate, and Best Practices." Remote Sensing 12, no. 21 (November 4, 2020): 3616. http://dx.doi.org/10.3390/rs12213616.

Abstract:
Geotagged smartphone photos can be employed to build digital terrain models using structure from motion-multiview stereo (SfM-MVS) photogrammetry. Accelerometer, magnetometer, and gyroscope sensors integrated within consumer-grade smartphones can be used to record the orientation of images, which can be combined with location information provided by inbuilt global navigation satellite system (GNSS) sensors to geo-register the SfM-MVS model. The accuracy of these sensors is, however, highly variable. In this work, we use a 200 m-wide natural rocky cliff as a test case to evaluate the impact of consumer-grade smartphone GNSS sensor accuracy on the registration of SfM-MVS models. We built a high-resolution 3D model of the cliff, using an unmanned aerial vehicle (UAV) for image acquisition and ground control points (GCPs) located using a differential GNSS survey for georeferencing. This 3D model provides the benchmark against which terrestrial SfM-MVS photogrammetry models, built using smartphone images and registered using built-in accelerometer/gyroscope and GNSS sensors, are compared. Results show that satisfactory post-processing registrations of the smartphone models can be attained, requiring: (1) wide acquisition areas (scaling with GNSS error) and (2) the progressive removal of misaligned images, via an iterative process of model building and error estimation.
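The iterative registration loop the abstract describes (align the model to GNSS positions, estimate the error, drop the most misaligned image, re-fit) can be sketched with a standard Umeyama similarity fit. This is a generic reading of the procedure under our own assumptions, not the authors' implementation.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares scale/rotation/translation mapping src -> dst (Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    E = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # guard against reflections
        E[2, 2] = -1
    R = U @ E @ Vt
    scale = np.trace(np.diag(sig) @ E) / S.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def register(sfm_xyz, gnss_xyz, rms_target=2.0, min_images=8):
    """Align SfM camera centres to GNSS positions, pruning outlier images."""
    keep = np.arange(len(sfm_xyz))
    while True:
        s, R, t = similarity_fit(sfm_xyz[keep], gnss_xyz[keep])
        pred = s * (R @ sfm_xyz[keep].T).T + t
        resid = np.linalg.norm(pred - gnss_xyz[keep], axis=1)
        rms = np.sqrt((resid ** 2).mean())
        if rms <= rms_target or len(keep) <= min_images:
            return s, R, t, keep, rms
        keep = np.delete(keep, resid.argmax())  # drop worst-fitting image
```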
17. Tscharf, A., M. Rumpler, F. Fraundorfer, G. Mayer, and H. Bischof. "ON THE USE OF UAVS IN MINING AND ARCHAEOLOGY - GEO-ACCURATE 3D RECONSTRUCTIONS USING VARIOUS PLATFORMS AND TERRESTRIAL VIEWS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-1/W1 (August 27, 2015): 15–22. http://dx.doi.org/10.5194/isprsannals-ii-1-w1-15-2015.

Abstract: During the last decades photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has resulted in an easy way of acquiring spatial data and creating realistic and accurate 3D models. With multicopter UAVs, it is possible to record highly overlapping images from almost terrestrial camera positions to oblique and nadir aerial images, due to the ability to navigate slowly, hover and capture images at nearly any possible position. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to enable easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows for georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly in the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three different case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited for seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object by joint image processing to generate highly detailed, accurate and complete reconstructions.
18. Zia, Mohammed, Johannes Fürle, Christina Ludwig, Sven Lautenbach, Stefan Gumbrich, and Alexander Zipf. "SocialMedia2Traffic: Derivation of Traffic Information from Social Media Data." ISPRS International Journal of Geo-Information 11, no. 9 (September 13, 2022): 482. http://dx.doi.org/10.3390/ijgi11090482.

Abstract:
Traffic prediction is a topic of increasing importance for research and applications in the domain of routing and navigation. Unfortunately, open data are rarely available for this purpose. To overcome this, the authors explored the possibility of using geo-tagged social media data (Twitter), land-use and land-cover point of interest data (from OpenStreetMap) and an adapted betweenness centrality measure as feature spaces to predict the traffic congestion of eleven world cities. The presented framework and workflow are termed as SocialMedia2Traffic. Traffic congestion was predicted at four tile spatial resolutions and compared with Uber Movement data. The overall precision of the forecast for highly traffic-congested regions was approximately 81%. Different data processing steps including ways to aggregate data points, different proxies and machine learning approaches were compared. The lack of a universal definition on a global scale to classify road segments by speed bins into different traffic congestion classes has been identified to be a major limitation of the transferability of the framework. Overall, SocialMedia2Traffic further improves the usability of the tested feature space for traffic prediction. A further benefit is the agnostic nature of the social media platform’s approach.
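The missing "universal definition" this abstract points to is a mapping from segment speeds to congestion classes. One hypothetical binning, purely to make the idea concrete (the thresholds are invented, not the paper's):

```python
import numpy as np

SPEED_BINS_KMH = [20, 40, 60]           # class edges (assumed, city-specific)
CLASSES = ["severe", "heavy", "moderate", "free-flow"]

def congestion_class(speeds_kmh):
    """Map observed segment speeds to congestion classes via fixed bins."""
    idx = np.digitize(speeds_kmh, SPEED_BINS_KMH)
    return [CLASSES[i] for i in idx]

print(congestion_class([12.0, 35.5, 58.0, 90.0]))
# ['severe', 'heavy', 'moderate', 'free-flow']
```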
19. Tintrup gen. Suntrup, G., T. Jalke, L. Streib, N. Keck, S. Nieland, N. Moran, B. Kleinschmit, and M. Trapp. "New Methods in Acquisition, Update and Dissemination of Nature Conservation Geodata - Implementation of an Integrated Framework." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 707–12. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-707-2015.

Abstract: Within the framework of this project, methods are being tested and implemented (a) to introduce remote sensing based approaches into the existing process of biotope mapping and (b) to develop a framework serving the multiple requirements arising from different users' backgrounds and thus the need for comprehensive data interoperability. Therefore, state-wide high resolution land cover vector data have been generated in an automated, object oriented workflow based on aerial imagery and a normalised digital surface model. These data have been enriched by an extensive characterisation of the individual objects by e.g. site specific, contextual or spectral parameters utilising multitemporal satellite images, DEM derivatives and multiple relevant geo-data. Parameters are tested for relevance to the classification process using different data mining approaches and have been used to formalise categories of the European nature information system (EUNIS) in a semantic framework. The classification will be realised by ontology-based reasoning. Dissemination and storage of data are developed fully INSPIRE-compatible and facilitated via a web portal. The main objectives of the project are (a) maximum exploitation of existing "standard" data provided by state authorities, (b) combination of these data with satellite imagery (Copernicus), (c) creation of land cover objects and data interoperability through a low number of classes but comprehensive characterisation and (d) implementation of algorithms and methods suitable for automated processing on large scales.
20. Zhang, Y., Y. Wan, B. Wang, Y. Kang, and J. Xiong. "Automatic Processing of Chinese GF-1 Wide Field of View Images." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 729–34. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-729-2015.

Abstract: The wide field of view (WFV) imaging instrument carried on the Chinese GF-1 satellite includes four cameras. Each camera has a 200 km swath width; the cameras acquire earth imagery at the same time and observation can be repeated within only 4 days. This enables applications of remote sensing imagery to advance from non-scheduled land observation to periodic land monitoring in areas that use images at such resolutions. This paper introduces an automatic data analysing and processing technique for the wide-swath images acquired by the GF-1 satellite. Firstly, the images are validated by a self-adaptive Gaussian mixture model based cloud detection method to confirm whether they are qualified and suitable to be involved in the automatic processing workflow. Then ground control points (GCPs) are quickly and automatically matched from public geo-information products such as the rectified panchromatic images of Landsat-8. Before the geometric correction, the cloud detection results are also used to eliminate invalid GCPs distributed in cloud covered areas, which markedly reduces the ratio of GCP blunders. The geometric correction module not only rectifies the rational function models (RFMs), but also provides a self-calibration model and parameters for the non-linear distortion, and it is iteratively processed to detect blunders. The maximum geometric distortion in a WFV image decreases from about 10-15 pixels to 1-2 pixels when compensated by the self-calibration model. The processing experiments involve hundreds of WFV images of the GF-1 satellite acquired from June to September 2013, covering the whole mainland of China. All the processing work can be finished by one operator within 2 days on a desktop computer with a second-generation Intel Core i7 CPU and a four-solid-state-disk array. The digital ortho maps (DOMs) are automatically generated with 3-arc-second Shuttle Radar Topography Mission (SRTM) elevation data. The geometric accuracies of the generated DOMs are 20 m for cameras 2 and 3, and 30 m for cameras 1 and 4. These products are now widely used in the fields of land and resource investigation, environment protection, and agricultural research.
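A Gaussian-mixture cloud screen of the kind described here can be sketched with scikit-learn: fit a two-component mixture to pixel reflectances and take the brighter component as cloud. The fixed two-component setup is our simplifying assumption, not the paper's self-adaptive model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cloud_mask(toa):
    """toa: (rows, cols, bands) array of top-of-atmosphere reflectance."""
    pixels = toa.reshape(-1, toa.shape[-1])
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=0).fit(pixels[::100])  # subsample fit
    labels = gmm.predict(pixels)
    cloudy = gmm.means_.sum(axis=1).argmax()   # brighter component = cloud
    return (labels == cloudy).reshape(toa.shape[:2])
```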
21. Kriz, Karel. "ALBINA The White Goddess – Mapping and Communicating Avalanche Risk in the European Alps." Abstracts of the ICA 1 (July 15, 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-191-2019.

Abstract: With the increasing importance of communication in the context of risk management and disaster prevention in mountainous environments, the demand for adequate communication channels and cartographic representations is constantly rising. In particular, the presentation of a broad spectrum of geospatial topics such as avalanche awareness requires innovative cartographic methods and approaches that go beyond standard cartographic depiction procedures.

ALBINA is such a project that addresses this proposition. It embraces risk management with cartographic communication methods and stands for "The White Goddess", an allusion to snow avalanches. This cooperation project has the goal to publish a joint, multilingual avalanche bulletin in the entire European region of Tyrol, South Tyrol and Trentino. The aim is to inform the public daily about the avalanche situation as well as to communicate avalanche related information in an efficient and profound way. An online portal is currently being developed as part of an Interreg V-A Italy-Austria project in collaboration between the University of Vienna, the EVTZ Europaregion, the Austrian Avalanche Warning Service of Tyrol as well as the Italian Avalanche Warning Services of South Tyrol and Trentino. The developed communication structures promote and facilitate the exchange of spatial-temporal information between experts of neighboring regions as well as the public in a multi-lingual environment. The framework is supported by a software system that handles and visualizes meteorological data, observations, snow profiles and avalanche events of the entire region with a strong focus on cartographic communication. It furthermore offers the possibility to enter and manipulate the avalanche bulletin in a standardized way in order to optimize the exchange of information between the avalanche experts on duty.

In order to foster the efforts in avalanche awareness and communication, three conceptual cornerstones have been identified according to international avalanche warning standards: (1) avalanche danger assessment and forecasting production, (2) timing and validity of publication and (3) effective geo-communication. Based on this alignment the international ALBINA project was launched to showcase the ability and strength of such an approach. This presentation will primarily focus on effective geo-communication, clarifying the general framework as well as the communication structures and workflow within the overall system, thereby explaining the methods and interaction between the available real-time data, the technical infrastructure, the human resources and the geo-communicational aspects of the system. Thereafter the individual cornerstones of the system will be discussed. These consist of various services dealing with the input and administration of avalanche relevant information, geodata processing and provision, map production and dissemination, meteorological map and diagram manipulation and creation as well as the design and conception of the frontend web-interface. Finally, the current state of the system will be presented, exemplifying the geo-communicational procedures and methods.
22. Xiao, F., G. Y. K. Shea, M. S. Wong, and J. Campbell. "An automated and integrated framework for dust storm detection based on OGC web processing services." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-2 (November 11, 2014): 151–56. http://dx.doi.org/10.5194/isprsarchives-xl-2-151-2014.

Abstract:
Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climatic modelling as it is known to have a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is coupled with the scientific computation advancement, ongoing computing platform development, and the development of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines Geo-Processing frameworks, scientific models and EO data together to enable the dust storm detection and tracking processes in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Services (WPS) initiated by Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, dust storm detecting and tracking component, and service chain orchestration engine. The EO data processing component is implemented based on OPeNDAP standard. The dust storm detecting and tracking component combines three earth scientific models, which are SBDART model (for computing aerosol optical depth (AOT) of dust particles), WRF model (for simulating meteorological parameters) and HYSPLIT model (for simulating the dust storm transport processes). The service chain orchestration engine is implemented based on Business Process Execution Language for Web Service (BPEL4WS) using open-source software. The output results, including horizontal and vertical AOT distribution of dust particles as well as their transport paths, were represented using KML/XML and displayed in Google Earth. A serious dust storm, which occurred over East Asia from 26 to 28 Apr 2012, is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data and scientific model integration problem by using a framework and scientific workflow approach together. The experimental result shows that this newly automated and integrated framework can be used to give advance near real-time warning of dust storms, for both environmental authorities and public. The methods presented in this paper might be also generalized to other types of Earth system models, leading to improved ease of use and flexibility.
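Invoking a single process in such an OGC WPS chain can be sketched with OWSLib. The endpoint URL, process identifier and input names below are placeholders; the services built in the paper are not public.

```python
from owslib.wps import WebProcessingService, monitorExecution

# Connect to a (placeholder) WPS endpoint and run one hypothetical process.
wps = WebProcessingService("https://example.org/wps")

execution = wps.execute(
    "dust:aot_sbdart",                     # hypothetical process identifier
    inputs=[("date", "2012-04-26"), ("region", "east-asia")],
)
monitorExecution(execution)   # poll the status URL until the job finishes
print(execution.status, execution.processOutputs)
```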
23. Bathellier, Eric, Jon Downton, and Gabino Castillo. "Optimising CSG development: quantitative estimation of lithological and geomechanical reservoir quality parameters from seismic data." APPEA Journal 52, no. 2 (2012): 675. http://dx.doi.org/10.1071/aj11089.

Abstract:
Within the past decade, new developments in seismic azimuthal anisotropy have identified a link between fracture density and orientation observed in well logs and the intensity and orientation of the actual anisotropy. Recent studies have shown a correlation between these measurements that provide quantitative estimations of fracture density from 3D wide-azimuth seismic data in tight-gas sand reservoirs. Recent research shows the significance of advanced seismic processing in the successful recovery of reliable fracture estimations, which directly correlates to borehole observations. These quantitative estimations of fracture density provide valuable insight that helps optimise drilling and completion programs, particularly in tight reservoirs. Extending this analysis to CSG reservoirs needs to consider additional reservoir quality parameters while implementing a similar quantitative approach on the interpretation of seismic data and correlation with borehole logging observations. The characterisation of CSG plays involves the understanding of the reservoir matrix properties as well as the in-situ stresses and fracturing that will determine optimal production zones. Pre-stack seismic data can assist with identifying the sweet spots—productive areas—in CSG resource plays by detailed reservoir-oriented gather conditioning followed by pre-stack seismic inversion and multi-attribute analysis. This analysis provides rock property estimations such as Poisson's ratio and Young's modulus, among others, which in turn relate to quantitative reservoir properties such as porosity and brittleness. This study shows an integrated workflow based on pre-stack azimuthal seismic data analysis and well log information to identify sweet spots, estimate geo-mechanical properties, and quantify in-situ principal stresses.
24. Gausepohl, Florian, Anne Hennke, Timm Schoening, Kevin Köser, and Jens Greinert. "Scars in the abyss: reconstructing sequence, location and temporal change of the 78 plough tracks of the 1989 DISCOL deep-sea disturbance experiment in the Peru Basin." Biogeosciences 17, no. 6 (March 23, 2020): 1463–93. http://dx.doi.org/10.5194/bg-17-1463-2020.

Abstract: High-resolution optical and hydro-acoustic sea floor data acquired in 2015 enabled the reconstruction and exact localization of disturbance tracks of a past deep-sea recolonization experiment (DISCOL) that was conducted in 1989 in the Peru Basin during a German environmental impact study associated with manganese-nodule mining. Based on this information, the disturbance level of the experiment regarding the direct plough impact and distribution and redeposition of sediment from the evolving sediment plume was assessed qualitatively. The compilation of all available optical and acoustic data sets available from the DISCOL Experimental Area (DEA) and the derived accurate positions of the different plough marks facilitate the analysis of the sedimentary evolution over the last 26 years for a sub-set of the 78 disturbance tracks. The results highlight the remarkable difference between natural sedimentation in the deep sea and sedimentation of a resettled sediment plume; most of the blanketing of the plough tracks happened through the resettling of plume sediment from plough tracks created later. Generally sediment plumes are seen as one of the important impacts associated with potential Mn-nodule mining. For enabling a better evaluation and interpretation of particularly geochemical and microbiological data, a relative age sequence of single plough marks and groups of them was derived and is presented here. This is important as the thickness of resettled sediment differs distinctly between plough marks created earlier and later. Problems in data processing became eminent for data from the late 1980s, at a time when GPS was just invented and underwater navigation was in an infant stage. However, even today the uncertainties of underwater navigation need to be considered if a variety of acoustical and optical sensors with different resolution should be merged to correlate accurately with the absolute geographic position. In this study, the ship-based bathymetric map was used as the absolute geographic reference layer and a workflow was applied for geo-referencing all the other data sets of the DISCOL Experimental Area until the end of 2015. New high-resolution field data were mainly acquired with sensors attached to GEOMAR's AUV Abyss and the 0.5° × 1° EM122 multibeam system of RV Sonne during cruise SO242-1. Legacy data from the 1980s and 1990s first needed to be found and compiled before they could be digitized and properly geo-referenced for our joined analyses.
25. Wang, Nan, and Qiming Qin. "Natural Source Electromagnetic Component Exploration of Coalbed Methane Reservoirs." Minerals 12, no. 6 (May 28, 2022): 680. http://dx.doi.org/10.3390/min12060680.

Abstract:
As an environmentally friendly and high-calorific natural gas, coalbed methane (CBM) has become one of the world’s most crucial unconventional energy sources. Undoubtedly, it is necessary to conduct in-depth research on reservoir exploration methods to ensure high and stable CBM production in the development stage. However, current methods have disadvantages such as high cost, complex devices, and poor terrain adaptability, and therefore they are unsuitable for reasonable monitoring of CBM reservoirs. In contrast, electromagnetic prospecting methods are increasingly widely employed in the rapid delineation of conductive distributions, contributing a lot to in-situ reservoir interpretation. Furthermore, a natural source Super-Low Frequency electromagnetic component method (i.e., the SLF method for short) has been proposed and applied with high potential in a CBM enrichment area, Qinshui Basin, China. In this paper, this method is thoroughly discussed. The magnetic component responses of the SLF method can be used as the characteristic responses of subsurface layers, and the forward modeling algorithms using the finite element method have been successfully developed and verified. On this basis, the direct depth transformation and one-dimensional nonlinear regularization inversion algorithms of the magnetic component responses are proposed for geo-object interpretation. With the help of the empirical mode decomposition (EMD), an SLF data processing workflow is demonstrated theoretically and practically, which is integrated into a portable instrument. The instrument’s ability to identify the low-resistivity reservoirs and their surrounding rocks has been proved by field survey. The extraction of electromagnetic radiation (EMR) anomalies also helps to refine the reservoir interpretation with higher accuracy. A joint comparative inversion test between the SLF method and the audio-magnetotelluric method (AMT) is also addressed, demonstrating that the SLF method is reliably applicable in the field survey of CBM reservoirs. A preliminary statistical analysis shows that the depth resolution of CBM reservoirs can reach the order of tens of meters. Therefore, the SLF method is expected to become one of the most potential options for in-situ CBM exploration with a cost-effective interpretation capability.
26. Hassan, Huma Ahmed, Syed Amer Mahmood, Saira Batool, Areeba Amer, Mareena Khurshid, and Hina Yaqub. "Generation of Digital Surface Model (DSM) Using UAV/Quadcopter." International Journal of Innovations in Science and Technology 2, no. 3 (September 15, 2020): 89–107. http://dx.doi.org/10.33411/ijist/2020020304.

Abstract: Satellite imagery is used as a primary source of information due to its vast coverage and high temporal resolution. Unmanned Aerial Vehicles (UAVs) are used these days because of their accuracy, autonomous flight, cost effectiveness and rapid overview of data. A UAV provides a fully or partially autonomous image-acquisition platform that requires no human flight controller. In this research a Phantom 3 Advanced quadcopter was used with an image acquisition plan for generation of a Digital Surface Model (DSM). Two mission designs were drawn up in this workflow for the reconstruction of the Department of Space Science and Technology at the University of the Punjab. For the first design the quadcopter hovered at a height of 120 feet (37 m), covering an area of 83 × 130 m with 80% frontal and side overlap and the camera at an angle of 70° in a double-grid pattern. For the second design a circular flight was flown to obtain images at a height of 27 m over a coverage area of 107 × 106 m, with a 45° camera angle and a 10° circular angle. For reconstruction of the urban area, the quadcopter hovered at a greater height of 210 feet (64 m), following the double-grid pattern. In order to attain the desired GSD, the camera was flown at a constant height over the Area of Interest (AOI). The highly overlapping images obtained with the Phantom 3 Advanced quadcopter were then processed using Pix4D software. Initially, common points of adjacent images are matched automatically. After matching of similar points, additional geographic information of coordinates and the associated elevation z-value is generated in 3D space as a sparse point cloud. A detailed 3D model with precise geo-location is then obtained using a dense point cloud. The surface of the study area and its texture are generated as a 3D mesh. Finally, the desired 3D surface model of the AOI is accurately generated. The results were analysed using the UAV imagery to generate high resolution DSMs: the DSMs for the urban area and for the Department of Space Science were generated at very high resolutions of 3.55 cm and 1.8 cm respectively. The accuracy of geo-locations can be improved by using GPS loggers or by taking GCPs. Many authors suggest that such 3D surface models of building reconstructions are geographically and geometrically quite accurate, after comparison of bundle block adjustments, Ground Sampling Distance (GSD) values, 3D matching and average point cloud density of the DSM. The 3D surface models are thus used for feature extraction and estimation of parameters including depth and elevation values, in texturing, 3D data collection for 3D visualisations, 3D rooftops and building facades, contour maps and orthophotos.
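The flying heights and resolutions quoted in this abstract are tied together by the standard ground-sampling-distance relation, GSD = sensor width × flying height / (focal length × image width). A small helper, using nominal Phantom 3 camera values that we assume for illustration:

```python
def gsd_cm(height_m: float, focal_mm: float, sensor_width_mm: float,
           image_width_px: int) -> float:
    """Ground sampling distance in cm/pixel for a nadir-looking camera."""
    return (sensor_width_mm * height_m * 100.0) / (focal_mm * image_width_px)

# ~37 m and ~64 m flights; assumed 1/2.3" sensor (6.17 mm wide),
# 3.61 mm lens, 4000 px image width
print(round(gsd_cm(37.0, 3.61, 6.17, 4000), 2))   # ≈ 1.58 cm/px
print(round(gsd_cm(64.0, 3.61, 6.17, 4000), 2))   # ≈ 2.73 cm/px
```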
27. Chen, Huangke, Jinming Wen, Witold Pedrycz, and Guohua Wu. "Big Data Processing Workflows Oriented Real-Time Scheduling Algorithm using Task-Duplication in Geo-Distributed Clouds." IEEE Transactions on Big Data 6, no. 1 (March 1, 2020): 131–44. http://dx.doi.org/10.1109/tbdata.2018.2874469.

28. Qin, R., S. Song, and X. Huang. "3D DATA GENERATION USING LOW-COST CROSS-VIEW IMAGES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2020 (August 12, 2020): 157–62. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2020-157-2020.

Abstract: 3D data generation often requires expensive data collection such as aerial photogrammetric or LiDAR flights. In cases where such data are unavailable, for example for areas of interest inaccessible from aerial platforms, the alternative sources to be considered can be quite heterogeneous and come with different accuracies, resolutions and views, which challenges standard data processing workflows. Assuming only overview satellite images and ground-level GoPro images are available, which we call cross-view data due to the significant view differences, this paper introduces a framework from our project, consisting of a few novel algorithms that convert such a challenging dataset into 3D textured mesh models containing both top and façade features. The necessary methods include 3D point cloud generation from satellite overview images and ground-level images, geo-registration and meshing. We first introduce the problems, discuss the potential challenges and present our proposed methods to address them. Finally, we apply our proposed framework to a dataset consisting of twelve satellite images and 150k video frames acquired through a vehicle-mounted GoPro camera and demonstrate the reconstruction results. We have also compared our results with results generated from an intuitive processing pipeline that involves typical geo-registration and meshing methods.
29. Brandón, Álvaro, María S. Pérez, Jesus Montes, and Alberto Sanchez. "FMonE: A Flexible Monitoring Solution at the Edge." Wireless Communications and Mobile Computing 2018 (November 19, 2018): 1–15. http://dx.doi.org/10.1155/2018/2068278.

Abstract:
Monitoring has always been a key element on ensuring the performance of complex distributed systems, being a first step to control quality of service, detect anomalies, or make decisions about resource allocation and job scheduling, to name a few. Edge computing is a new type of distributed computing, where data processing is performed by a large number of heterogeneous devices close to the place where the data is generated. Some of the differences between this approach and more traditional architectures, like cloud or high performance computing, are that these devices have low computing power, have unstable connectivity, and are geo-distributed or even mobile. All of these aforementioned characteristics establish new requirements for monitoring tools, such as customized monitoring workflows or choosing different back-ends for the metrics, depending on the device hosting them. In this paper, we present a study of the requirements that an edge monitoring tool should meet, based on motivating scenarios drawn from literature. Additionally, we implement these requirements in a monitoring tool named FMonE. This framework allows deploying monitoring workflows that conform to the specific demands of edge computing systems. We evaluate FMonE by simulating a fog environment in the Grid’5000 testbed and we demonstrate that it fulfills the requirements we previously enumerated.
30. Nourian, Pirouz, Carlos Martinez-Ortiz, and Ken Arroyo Ohori. "Essential Means for Urban Computing: Specification of Web-Based Computing Platforms for Urban Planning, a Hitchhiker’s Guide." Urban Planning 3, no. 1 (March 29, 2018): 47–57. http://dx.doi.org/10.17645/up.v3i1.1299.

Abstract:
This article provides an overview of the specifications of web-based computing platforms for urban data analytics and computational urban planning practice. There are currently a variety of tools and platforms that can be used in urban computing practices, including scientific computing languages, interactive web languages, data sharing platforms and still many desktop computing environments, e.g., GIS software applications. We have reviewed a list of technologies considering their potential and applicability in urban planning and urban data analytics. This review is not only based on the technical factors such as capabilities of the programming languages but also the ease of developing and sharing complex data processing workflows. The arena of web-based computing platforms is currently under rapid development and is too volatile to be predictable; therefore, in this article we focus on the specification of the requirements and potentials from an urban planning point of view rather than speculating about the fate of computing platforms or programming languages. The article presents a list of promising computing technologies, a technical specification of the essential data models and operators for geo-spatial data processing, and mathematical models for an ideal urban computing platform.
31. Lari, Z., and N. El-Sheimy. "SYSTEM CONSIDERATIONS AND CHALLENGES IN 3D MAPPING AND MODELING USING LOW-COST UAV SYSTEMS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 19, 2015): 343–48. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-343-2015.

Abstract:
In the last few years, low-cost UAV systems have been acknowledged as an affordable technology for geospatial data acquisition that can meet the needs of a variety of traditional and non-traditional mapping applications. In spite of its proven potential, UAV-based mapping is still lacking in terms of what is needed for it to become an acceptable mapping tool. In other words, a well-designed system architecture that considers payload restrictions as well as the specifications of the utilized direct geo-referencing component and the imaging systems in light of the required mapping accuracy and intended application is still required. Moreover, efficient data processing workflows, which are capable of delivering the mapping products with the specified quality while considering the synergistic characteristics of the sensors onboard, the wide range of potential users who might lack deep knowledge in mapping activities, and time constraints of emerging applications, are still needed to be adopted. Therefore, the introduced challenges by having low-cost imaging and georeferencing sensors onboard UAVs with limited payload capability, the necessity of efficient data processing techniques for delivering required products for intended applications, and the diversity of potential users with insufficient mapping-related expertise needs to be fully investigated and addressed by UAV-based mapping research efforts. This paper addresses these challenges and reviews system considerations, adaptive processing techniques, and quality assurance/quality control procedures for achievement of accurate mapping products from these systems.
APA, Harvard, Vancouver, ISO, and other styles
33

Jende, P., Z. Hussnain, M. Peter, S. Oude Elberink, M. Gerke, and G. Vosselman. "LOW-LEVEL TIE FEATURE EXTRACTION OF MOBILE MAPPING DATA (MLS/IMAGES) AND AERIAL IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W4 (March 17, 2016): 19–26. http://dx.doi.org/10.5194/isprsarchives-xl-3-w4-19-2016.

Full text of the source
Abstract:
Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform’s position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose mitigation efforts mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy, and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform’s defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform’s three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed, and an outline of the project and its framework will be given.
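As an illustration of the kind of feature matching such a pipeline relies on, the following hedged sketch matches binary ORB features between a mobile-mapping frame and an aerial image patch with OpenCV. The paper does not prescribe this particular detector, and the file names are placeholders.

```python
# Illustrative only: generic ORB feature matching between a mobile-mapping
# image and an aerial image patch (file names are placeholders).
import cv2

mm_img = cv2.imread("mm_frame.png", cv2.IMREAD_GRAYSCALE)
aerial = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(mm_img, None)
kp2, des2 = orb.detectAndCompute(aerial, None)

# Cross-checked brute-force matching on binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The best correspondences would then serve as constraints for the
# orientation update described in the abstract.
print(f"{len(matches)} tentative correspondences")
```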
APA, Harvard, Vancouver, ISO, and other styles
34

Jende, P., Z. Hussnain, M. Peter, S. Oude Elberink, M. Gerke, and G. Vosselman. "LOW-LEVEL TIE FEATURE EXTRACTION OF MOBILE MAPPING DATA (MLS/IMAGES) AND AERIAL IMAGERY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W4 (March 17, 2016): 19–26. http://dx.doi.org/10.5194/isprs-archives-xl-3-w4-19-2016.

Full text of the source
Abstract:
Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform’s position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose mitigation efforts mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy, and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform’s defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform’s three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed, and an outline of the project and its framework will be given.
APA, Harvard, Vancouver, ISO, and other styles
35

Pezzica, C., A. Piemonte, C. Bleil de Souza, and V. Cutini. "PHOTOGRAMMETRY AS A PARTICIPATORY RECOVERY TOOL AFTER DISASTERS: A GROUNDED FRAMEWORK FOR FUTURE GUIDELINES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W15 (August 23, 2019): 921–28. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w15-921-2019.

Full text of the source
Abstract:
This paper identifies the application domain, context of use, processes and goals of low-cost street-level photogrammetry after urban disasters. The proposal seeks a synergy between top-down and bottom-up initiatives carried out by different actors during the humanitarian response phase in data-scarce contexts. By focusing on the self-organisation capacities of local people, this paper suggests using collaborative photogrammetry to empower communities hit by disasters and foster their active participation in recovery and reconstruction planning. It shows that this task may prove technically challenging depending on the specifics of the collected imagery and develops a grounded framework to produce user-centred image acquisition guidelines and fit-for-purpose photogrammetric reconstruction workflows, useful in future post-disaster scenarios. To this end, it presents an in-depth analysis of a collaborative photographic mapping initiative undertaken by a group of citizen-scientists after the 2016 Central Italy earthquake, followed by the explorative processing of some sample datasets. Specifically, the paper firstly presents a visual ethnographic study of the photographic material uploaded by participants from September 2016 to November 2018 in the two Italian municipalities of Arquata del Tronto and Norcia. Secondly, it illustrates from a technical point of view issues concerning the processing of crowdsourced data (e.g. image filtering, selection, quality, semantic content and 3D model scaling) and discusses the viability of using it to enrich the pool of geo-information available to stakeholders and decision-makers. Final considerations are discussed as part of a grounded framework for future guidelines tailored to multiple goals and data processing scenarios.
APA, Harvard, Vancouver, ISO, and other styles
36

Đuka, Andreja, Zoran Bumber, Tomislav Poršinsky, Ivica Papa, and Tibor Pentek. "The Influence of Increased Salvage Felling on Forwarding Distance and the Removal—A Case Study from Croatia." Forests 12, no. 1 (December 23, 2020): 7. http://dx.doi.org/10.3390/f12010007.

Full text of the source
Abstract:
During the seven-year research period, the average annual removal was 3274 m³ higher than the average annual removal prescribed by the existing management plan (MP). The main reason lies in the high amount of salvage felling volume, 55,238 m³ (38.3%), in both the main and the intermediate felling due to oak dieback. The analysis of forest accessibility took into account the spatial distribution of cutblocks (with ongoing felling operations) and the volume of felled timber for two proposed factors: (1) the position of the cutblock and (2) the position of the removal. The cutblock position factor took into account the spatial position of the felling areas/sites, while the removal position factor, besides the spatial reference, also took into account the amount of felled timber (i.e., volume), with respect to both the forest infrastructure network and forest operations. The analysis of relative forest openness using geo-processing workflows in a GIS environment revealed four types of opening areas in the studied management unit (MU): single-opened, multiple-opened, unopened, and opened areas outside of the management unit. The negative effects of the piece-volume law and low harvesting densities on forest operations are highlighted in this research due to the high amount of salvage felling, particularly in the intermediate felling, which replaced timber volume that should have come from thinnings.
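The relative-openness analysis lends itself to a simple geoprocessing sketch. The following Python/geopandas fragment (our illustration, not the study's exact GIS workflow; the file names and the 300 m reach are assumptions) buffers forest roads and classifies each stand by how many road buffers reach it.

```python
# A minimal sketch (our illustration, not the study's workflow) of
# relative forest openness: buffer roads and count, per stand, how many
# buffers reach it (0 = unopened, 1 = single-opened, >1 = multiple-opened).
import geopandas as gpd

roads = gpd.read_file("forest_roads.shp")    # placeholder file names
stands = gpd.read_file("stands.shp")

buffers = roads.geometry.buffer(300)         # assumed 300 m forwarding reach

def openness(stand_geom):
    # Crude count of overlapping road buffers reaching this stand.
    hits = sum(stand_geom.intersects(b) for b in buffers)
    if hits == 0:
        return "unopened"
    return "single-opened" if hits == 1 else "multiple-opened"

stands["openness"] = stands.geometry.apply(openness)
print(stands["openness"].value_counts())
```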
APA, Harvard, Vancouver, ISO, and other styles
37

Thompson, M. W., J. Hiestermann, and L. Moyo. "PROVING THE CAPABILITY FOR LARGE SCALE REGIONAL LAND-COVER DATA PRODUCTION BY SELF-FUNDED COMMERCIAL OPERATORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-3/W2 (November 16, 2017): 209–12. http://dx.doi.org/10.5194/isprs-archives-xlii-3-w2-209-2017.

Full text of the source
Abstract:
For service providers developing commercial value-added data content based on remote sensing technologies, the focus is typically to create commercially appropriate geospatial information which has downstream business value, the primary aim being to link locational intelligence with business intelligence in order to make better-informed decisions. From a geospatial perspective, this locational information must be relevant, informative, and most importantly current, with the ability to maintain the information timeously into the future for change detection purposes. Aligned with this, GeoTerraImage has successfully embarked on the production of land-cover/land-use content over southern Africa. The ability for a private company to successfully implement and complete such an exercise rests on the capability to leverage the combined advantages of cutting-edge data processing technologies and methodologies, with emphasis on processing repeatability and speed, and the use of a wide range of readily available imagery. These production workflows utilise a wide range of integrated procedures, including machine learning algorithms, innovative use of non-specialists for sourcing reference data, conventional pixel- and object-based image classification routines, and experienced/expert landscape interpretation. This multi-faceted approach to data product development demonstrates the capability of SMME-level commercial entities such as GeoTerraImage to generate industry-applicable large data content, in this case wide-area land-cover and land-use data across the sub-continent. Within this development, the emphasis has been placed on key land-use information, such as mining, human settlements, and agriculture, given the importance of this geo-spatial land-use information in business and socio-economic applications and decision making.
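As a hedged sketch of the machine-learning step in such production workflows, the fragment below trains a random forest on per-pixel spectral features with scikit-learn; the band count, class set and data are placeholders, not GeoTerraImage's actual procedure.

```python
# Hedged sketch: pixel-based land-cover classification with a random
# forest, standing in for the machine-learning step of such workflows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder training data: spectral bands per pixel and reference labels
# (in practice sourced from imagery and reference points).
X_train = rng.random((1000, 6))        # 6 spectral bands per pixel
y_train = rng.integers(0, 4, 1000)     # 4 classes, e.g. mining / settlement / agriculture / other

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

scene = rng.random((512 * 512, 6))     # flattened image pixels
land_cover = clf.predict(scene).reshape(512, 512)   # per-pixel class map
```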
APA, Harvard, Vancouver, ISO, and other styles
38

Banerjee, Bikram Pratap, and Simit Raval. "Mapping Sensitive Vegetation Communities in Mining Eco-space using UAV-LiDAR." International Journal of Coal Science & Technology 9, no. 1 (June 3, 2022). http://dx.doi.org/10.1007/s40789-022-00509-w.

Full text of the source
Abstract:
Near-earth sensing from uncrewed aerial vehicles (UAVs) has emerged as a potential approach for fine-scale environmental monitoring. These systems provide a cost-effective and repeatable means to acquire remotely sensed images in unprecedented spatial detail and with a high signal-to-noise ratio. It is increasingly possible to obtain both physiochemical and structural insights into the environment using state-of-the-art light detection and ranging (LiDAR) sensors integrated onto UAVs. Monitoring sensitive environments, such as swamp vegetation in longwall mining areas, is essential yet challenging due to their inherent complexities. Current practices for monitoring these remote and challenging environments are primarily ground-based, partly owing to the absence of a framework and the challenges of using UAV-based sensor systems in monitoring such sensitive environments. This research addresses the related challenges in developing a LiDAR system, including a workflow for mapping and potentially monitoring highly heterogeneous and complex environments. This involves amalgamating several design components, including hardware integration, calibration of sensors, mission planning, and developing a processing chain to generate usable datasets. It also includes the creation of new methodologies and processing routines to establish a pipeline for efficient data retrieval and generation of usable products. The designed systems and methods were applied to a peat swamp environment to obtain an accurate geo-spatialised LiDAR point cloud. Performance of the LiDAR data was tested against ground-based measurements on various aspects, including visual assessment of generated LiDAR metrics maps, the canopy height model, and fine-scale mapping.
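A canopy height model of the kind assessed here can be sketched in a few lines: grid the point cloud and subtract a ground surface from the highest returns. The fragment below is a simplification under stated assumptions (synthetic points, lowest return per cell as a crude ground proxy), not the paper's processing chain; real coordinates could come from e.g. a LAS reader.

```python
# Illustrative canopy height model (CHM) from a geo-referenced point cloud.
import numpy as np

# Placeholder points: x, y in a 100 m x 100 m tile, heights up to 25 m.
x, y, z = np.random.rand(3, 100_000) * [[100], [100], [25]]

res = 1.0                           # 1 m grid
ix = (x / res).astype(int)
iy = (y / res).astype(int)

dsm = np.full((100, 100), np.nan)   # highest return per cell (surface)
dtm = np.full((100, 100), np.nan)   # lowest return per cell (crude ground proxy)
for i, j, h in zip(ix, iy, z):
    dsm[i, j] = np.nanmax([dsm[i, j], h])
    dtm[i, j] = np.nanmin([dtm[i, j], h])

chm = dsm - dtm                     # canopy height per cell
```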
APA, Harvard, Vancouver, ISO, and other styles
39

Legleiter, Carl J., and Paul J. Kinzel. "Surface Flow Velocities From Space: Particle Image Velocimetry of Satellite Video of a Large, Sediment-Laden River." Frontiers in Water 3 (May 28, 2021). http://dx.doi.org/10.3389/frwa.2021.652213.

Full text of the source
Abstract:
Conventional, field-based streamflow monitoring in remote, inaccessible locations such as Alaska poses logistical challenges. Safety concerns, financial considerations, and a desire to expand water-observing networks make remote sensing an appealing alternative means of collecting hydrologic data. In an ongoing effort to develop non-contact methods for measuring river discharge, we evaluated the potential to estimate surface flow velocities from satellite video of a large, sediment-laden river in Alaska via particle image velocimetry (PIV). In this setting, naturally occurring sediment boil vortices produced distinct water surface features that could be tracked from frame to frame as they were advected by the flow, obviating the need to introduce artificial tracer particles. In this study, we refined an end-to-end workflow that involved stabilization and geo-referencing, image preprocessing, PIV analysis with an ensemble correlation algorithm, and post-processing of PIV output to filter outliers and scale and geo-reference velocity vectors. Applying these procedures to image sequences extracted from satellite video allowed us to produce high resolution surface velocity fields; field measurements of depth-averaged flow velocity were used to assess accuracy. Our results confirmed the importance of preprocessing images to enhance contrast and indicated that lower frame rates (e.g., 0.25 Hz) lead to more reliable velocity estimates because longer capture intervals allow more time for water surface features to translate several pixels between frames, given the relatively coarse spatial resolution of the satellite data. Although agreement between PIV-derived velocity estimates and field measurements was weak (R² = 0.39) on a point-by-point basis, correspondence improved when the PIV output was aggregated to the cross-sectional scale. For example, the correspondence between cross-sectional maximum velocities inferred via remote sensing and measured in the field was much stronger (R² = 0.76), suggesting that satellite video could play a role in measuring river discharge. Examining correlation matrices produced as an intermediate output of the PIV algorithm yielded insight on the interactions between image frame rate and sensor spatial resolution, which must be considered in tandem. Although further research and technological development are needed, measuring surface flow velocities from satellite video could become a viable tool for streamflow monitoring in certain fluvial environments.
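The core of correlation-based PIV is estimating the shift of an interrogation window between frames via cross-correlation. A minimal sketch of that single step (ours, not the authors' code) using NumPy's FFT:

```python
# Minimal PIV-style sketch: estimate the shift of one interrogation
# window between two frames via FFT cross-correlation.
import numpy as np

def window_shift(win_a, win_b):
    """Displacement (dy, dx) of win_b relative to win_a, in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so shifts can be negative.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

frame0 = np.random.rand(64, 64)
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))   # known shift for the demo
print(window_shift(frame0, frame1))               # expect (3, -2)
```

Multiplying the pixel shift by the ground sample distance and dividing by the capture interval yields the surface velocity, which is why the coarse satellite resolution favours the lower frame rates noted above.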
APA, Harvard, Vancouver, ISO, and other styles
40

Inamdar, Deep, Margaret Kalacska, J. Pablo Arroyo-Mora, and George Leblanc. "The Directly-Georeferenced Hyperspectral Point Cloud: Preserving the Integrity of Hyperspectral Imaging Data." Frontiers in Remote Sensing 2 (April 23, 2021). http://dx.doi.org/10.3389/frsen.2021.675323.

Full text of the source
Abstract:
The raster data model has been the standard format for hyperspectral imaging (HSI) over the last four decades. Unfortunately, it misrepresents HSI data because pixels are neither natively square nor uniformly distributed across imaged scenes. To generate end products as rasters with square pixels while preserving spectral data integrity, the nearest neighbor resampling methodology is typically applied. This process compromises spatial data integrity as the pixels from the original HSI data are shifted, duplicated and eliminated so that HSI data can conform to the raster data model structure. Our study presents a novel hyperspectral point cloud data representation that preserves the spatial-spectral integrity of HSI data more effectively than conventional square pixel rasters. This Directly-Georeferenced Hyperspectral Point Cloud (DHPC) is generated through a data fusion workflow that can be readily implemented into existing processing workflows used by HSI data providers. The effectiveness of the DHPC over conventional square pixel rasters is shown with four HSI datasets. These datasets were collected at three different sites with two different sensors that captured the spectral information from each site at various spatial resolutions (ranging from ∼1.5 cm to 2.6 m). The DHPC was assessed based on three data quality metrics (i.e., pixel loss, pixel duplication and pixel shifting), data storage requirements and various HSI applications. All of the studied raster data products were characterized by either substantial pixel loss (∼50–75%) or pixel duplication (∼35–75%), depending on the resolution of the resampling grid used in the nearest neighbor methodology. Pixel shifting in the raster end products ranged from 0.33 to 1.95 pixels. The DHPC was characterized by zero pixel loss, pixel duplication and pixel shifting. Despite containing additional surface elevation data, the DHPC was up to 13 times smaller in file size than the corresponding rasters. Furthermore, the DHPC consistently outperformed the rasters in all of the tested applications, which included classification, spectra geo-location and target detection. Based on the findings from this work, the developed DHPC data representation has the potential to push the limits of HSI data distribution, analysis and application.
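The underlying idea of the DHPC can be shown in toy form: keep one record per native pixel, pairing its directly-georeferenced coordinates with its full spectrum, so nothing is shifted, duplicated or dropped. The arrays below are placeholders, not the authors' fusion workflow.

```python
# Sketch of a directly-georeferenced hyperspectral point cloud: one flat
# table with x, y, z plus the full spectrum per native pixel.
import numpy as np

n_pixels, n_bands = 10_000, 288
xyz = np.random.rand(n_pixels, 3)              # per-pixel georeferenced coordinates
spectra = np.random.rand(n_pixels, n_bands)    # per-pixel radiance/reflectance

# No resampling grid: no pixel is shifted, duplicated, or eliminated.
dhpc = np.hstack([xyz, spectra])               # shape (n_pixels, 3 + n_bands)
np.save("dhpc.npy", dhpc)
```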
APA, Harvard, Vancouver, ISO, and other styles
41

Beilschmidt, Christian, Johannes Drönner, Néstor Fernández, Christian Langer, Michael Mattig, and Bernhard Seeger. "Analyzing Essential Biodiversity Variables with the VAT System." Biodiversity Information Science and Standards 3 (June 21, 2019). http://dx.doi.org/10.3897/biss.3.36319.

Full text of the source
Abstract:
The Essential Biodiversity Variables (EBVs) are important information sources for scientists and decision makers. They are developed and promoted by the Group on Earth Observations Biodiversity Observation Network (GEO BON) together with the community. EBVs provide an abstraction level between measurements and indicators. This enables access to biodiversity observations and allows different groups of users to detect temporal trends as well as regional deviations. In particular, the analysis of EBVs supports finding countermeasures for current important challenges like biodiversity loss and climate change. A visual assessment is an intuitive way to drive the analysis. As one example, researchers can recognize and interpret the changes of forest cover maps over time. The VAT System, in which VAT is an acronym for visualization, analysis and transformation, is an ideal candidate platform for the creation of such an analytical application. It is a geographical processing system that supports a variety of spatio-temporal data types and allows computations using heterogeneous data. For user interaction, it offers a web-based user interface that is built with state-of-the-art web technology. Users can perform interactive analysis of spatio-temporal data by visualizing data on maps and using various graphs and diagrams that are linked to the user’s area of interest. Furthermore, users can browse through the temporal dimension of the data using a time slider tool. This provides easy access to large spatio-temporal datasets. One exemplary use case is the creation of EBV statistics for selected countries or areas. This functionality is provided as an app that is built upon the VAT System. Here, users select EBVs, a time range and a metric, and create temporal charts that display developments over time. The charts are constructed internally by employing R scripts that were created by domain experts. The scripts are executed using VAT’s R connectivity module. Finally, users can export the results to their local computers. An export contains the result itself and, additionally, a list of citations of the included EBVs as well as a workflow description of all processing steps for reasons of reproducibility. Such a use case exemplifies the suitability of the VAT System for facilitating the creation of similar projects or applications without the need for programming, using VAT’s modular and flexible components.
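At its simplest, the kind of EBV statistic such an app produces reduces to averaging a spatio-temporal data cube over an area of interest per time step. The sketch below is ours (in the VAT System itself this is done via domain experts' R scripts); the data and mask are placeholders.

```python
# Hedged sketch of a per-time-step EBV statistic over an area of interest.
import numpy as np

ebv = np.random.rand(20, 100, 100)       # placeholder: 20 yearly forest-cover grids
aoi = np.zeros((100, 100), dtype=bool)   # area-of-interest mask, e.g. one country
aoi[20:60, 30:80] = True

trend = ebv[:, aoi].mean(axis=1)         # one mean value per year
print(trend.round(3))
```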
APA, Harvard, Vancouver, ISO, and other styles
42

"Urban remote sensing with lidar for the Smart City Concept implementation." Visnyk of V. N. Karazin Kharkiv National University, series "Geology. Geography. Ecology", no. 50 (2019). http://dx.doi.org/10.26565/2410-7360-2019-50-08.

Full text of the source
Abstract:
Introduction of the problem. The paper emphasizes that the key features of contemporary urban development have caused a number of challenges, which require innovative technological introductions in urban studies. The research goal of this paper is to present a multifunctional approach, which combines the author’s urbogeosystem (UGS) theory with the URS (Urban Remote Sensing) technique for LiDAR (Light Detection And Ranging) data processing. The key elements of the Smart City concept within a geospatial perspective. Three basic assumptions are implied by the affiliation “a geospatial perspective ↔ the Smart City concept” (SCC). The five key elements of the SCC have been outlined: Innovations; Scalability; Data gathering, measuring, and mining; Addressing environmental challenges; Interlink between the smart meter information and the geo-sensor information. The urbogeosystemic approach as a tool for simulating the “smart urban environment” – a core node of the Smart City hierarchy. The urbogeosystemic ontological model has been introduced as a trinity-tripod (urban citizens, municipal infrastructure, urbanistic processes and phenomena). The “smart urban environment” is a core node of an urbogeosystem. Processing results of the LiDAR surveying technique. With the increasing availability of LiDAR data, 3D city models of robust topology and correct geometry have become the most prominent features of the urban environment. Three key advantages of the LiDAR surveying technique have been introduced. The flowchart of the operational URS / LiDAR / GIS workflow for the Smart City implementation has been depicted. Urban Remote Sensing for data mining / city analytics and the EOS LiDAR Tool. ELiT (EOS LiDAR Tool) software is both a separate web-based (network) generator (an engine) – ELiT Server – and an integrated component of EOS Platform-as-a-Service software – ELiT Cloud. Allied to these two products is our desktop ElitCore software, which possesses even broader functionality. The paper outlines the whole framework of urban data mining / city analytics relevant to the mentioned applications. The ELiT software use cases for the Smart Cities. A number of use cases that can be completed with the ELiT software in the common urban planning domain have been described and illustrated. Each of the five scenarios presented suggests a unique solution within the framework of the SCC implementation. Conclusion, future research and developments. The completed research results have been summarized. An entity of the urban geoinformation space has been introduced. A geodatabase of ELiT 3D city models has been designated a mandatory key component of the urban decision support system.
APA, Harvard, Vancouver, ISO, and other styles
43

Royaux, Coline, Olivier Norvez, Marie Jossé, Elie Arnaud, Julien Sananikone, Sandrine Pavoine, Dominique Pelletier, Jean-Baptiste Mihoub, and Yvan Le Bras. "From Biodiversity Observation Networks to Datasets and Workflows Supporting Biodiversity Indicators, a French Biodiversity Observation Network (BON) Essential Biodiversity Variables (EBV) Operationalization Pilot using Galaxy and Ecological Metadata Language." Biodiversity Information Science and Standards 6 (September 16, 2022). http://dx.doi.org/10.3897/biss.6.94957.

Full text of the source
Abstract:
Integration of biological data across different ecological scales is complex! The biodiversity community (scientists, policy makers, managers, citizens, NGOs) needs to build a framework of harmonized and interoperable data from raw, heterogeneous and scattered datasets. Such a framework will help in observing, measuring and understanding the spatio-temporal dynamics of biodiversity from local to global scales. One of the most relevant approaches to reach that aim is the concept of Essential Biodiversity Variables (EBV). As a lot of information can potentially be extracted from raw datasets sampled at different ecological scales, the EBV concept represents useful leverage for identifying appropriate data to be collated as well as the associated analytical workflows for processing these data. Thanks to FAIR (Findable, Accessible, Interoperable, Reusable) data and source code implementation, it is possible to make a transparent assessment of biodiversity by generating operational biodiversity indicators (that can be reused and adapted) through the EBV framework, and to help design or improve biodiversity monitoring at various scales. Through the BiodiFAIRse GO FAIR implementation network, we established how ecological and environmental sciences can benefit from existing open standards, tools and platforms used by European, Australian and United States infrastructures, particularly regarding the Galaxy platform for source code accessibility, the DataOne network of data catalogs, and the Ecological Metadata Language standard for data management. We propose that these implementation choices can help fight the biodiversity crisis by supporting the important mission of GEO BON (Group on Earth Observation Biodiversity Observation Network): “Improve the acquisition, coordination and delivery of biodiversity observations and related services to users including decision makers and the scientific community” (GEO BON 2022).
APA, Harvard, Vancouver, ISO, and other styles
44

Adetunji, Modupeore O., and Brian J. Abraham. "SEAseq: a portable and cloud-based chromatin occupancy analysis suite." BMC Bioinformatics 23, no. 1 (February 23, 2022). http://dx.doi.org/10.1186/s12859-022-04588-z.

Full text of the source
Abstract:
Background: Genome-wide protein-DNA binding is popularly assessed using specific antibody pulldown in Chromatin Immunoprecipitation Sequencing (ChIP-Seq) or Cleavage Under Targets and Release Using Nuclease (CUT&RUN) sequencing experiments. These technologies generate high-throughput sequencing data that necessitate the use of multiple sophisticated, computationally intensive genomic tools to make discoveries, but these genomic tools often have a high barrier to use because of computational resource constraints. Results: We present a comprehensive, infrastructure-independent computational pipeline called SEAseq, which leverages field-standard, open-source tools for processing and analyzing ChIP-Seq/CUT&RUN data. SEAseq performs extensive analyses from the raw output of the experiment, including alignment, peak calling, motif analysis, promoter and metagene coverage profiling, peak annotation distribution, clustered/stitched peak (e.g., super-enhancer) identification, and multiple relevant quality assessment metrics, as well as automatic interfacing with data in GEO/SRA. SEAseq is a rapid and cost-effective resource for the analysis of both new and publicly available datasets, as demonstrated in our comparative case studies. Conclusions: The easy-to-use and versatile design of SEAseq makes it a reliable and efficient resource for ensuring high-quality analysis. Its cloud implementation enables a broad suite of analyses in environments with constrained computational resources. SEAseq is platform-independent and aims to be usable by everyone with or without programming skills. It is available on the cloud at https://platform.stjude.cloud/workflows/seaseq and can be locally installed from the repository at https://github.com/stjude/seaseq.
APA, Harvard, Vancouver, ISO, and other styles
45

Grieb, Jonas, Claus Weiland, Alex Hardisty, Wouter Addink, Sharif Islam, Sohaib Younis, and Marco Schmidt. "Machine Learning as a Service for DiSSCo’s Digital Specimen Architecture." Biodiversity Information Science and Standards 5 (September 23, 2021). http://dx.doi.org/10.3897/biss.5.75634.

Full text of the source
Abstract:
International mass digitization efforts through infrastructures like the European Distributed System of Scientific Collections (DiSSCo), the US resource for Digitization of Biodiversity Collections (iDigBio), the National Specimen Information Infrastructure (NSII) of China, and Australia’s digitization of National Research Collections (NRCA Digital) make geo- and biodiversity specimen data freely, fully and directly accessible. Complementary, overarching infrastructure initiatives like the European Open Science Cloud (EOSC) were established to enable the mutual integration, interoperability and reusability of multidisciplinary data streams, including biodiversity, Earth system and life sciences (De Smedt et al. 2020). Natural Science Collections (NSC) are of particular importance for such multidisciplinary and internationally linked infrastructures, since they provide hard scientific evidence by allowing direct traceability of derived data (e.g., images, sequences, measurements) to physical specimens and material samples in NSC. To open up the large amounts of trait and habitat data and to link these data to digital resources like sequence databases (e.g., ENA), taxonomic infrastructures (e.g., GBIF) or environmental repositories (e.g., PANGAEA), proper annotation of specimen data with rich (meta)data early in the digitization process is required, next to bridging technologies to facilitate the reuse of these data. This was addressed in recent studies (Younis et al. 2018, Younis et al. 2020), where we employed computational image processing and artificial intelligence technologies (Deep Learning) for the classification and extraction of features like organs and morphological traits from digitized collection data (with a focus on herbarium sheets). However, such applications of artificial intelligence, whether (sub-symbolic) machine learning or (symbolic) ontology-based annotation, are rarely integrated in the workflows of NSC management systems, which are the essential repositories for the aforementioned integration of data streams. This was the motivation for the development of a Deep Learning-based trait extraction and coherent Digital Specimen (DS) annotation service providing “Machine Learning as a Service” (MLaaS) with a special focus on interoperability with the core services of DiSSCo, notably the DS Repository (nsidr.org) and the Specimen Data Refinery (Walton et al. 2020), as well as reusability within the data fabric of EOSC. Taking up the use case of detecting and classifying regions of interest (ROI) on herbarium scans, we demonstrate an MLaaS prototype for DiSSCo involving the digital object framework Cordra for the management of DS, as well as instant annotation of digital objects with extracted trait features (and ROIs) based on the DS specification openDS (Islam et al. 2020). Source code available at: https://github.com/jgrieb/plant-detection-service
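A minimal sketch of the MLaaS pattern described here, with the detector stubbed out and the endpoint and payload names invented (the actual service is at the GitHub link above):

```python
# Hedged "machine learning as a service" sketch: a web service that
# accepts an image and returns detected regions of interest.
from flask import Flask, jsonify, request

app = Flask(__name__)

def detect_rois(image_bytes: bytes) -> list[dict]:
    # Stand-in for the Deep Learning detector; a real service would run
    # an object-detection model over the herbarium scan here.
    return [{"label": "leaf", "bbox": [120, 80, 340, 260], "score": 0.97}]

@app.route("/annotate", methods=["POST"])
def annotate():
    rois = detect_rois(request.data)
    # The response could then be attached to a Digital Specimen record
    # as an annotation (e.g., via openDS).
    return jsonify({"rois": rois})

if __name__ == "__main__":
    app.run(port=8080)
```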
APA, Harvard, Vancouver, ISO, and other styles