Academic literature on the topic 'Geo-processing workflow'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Geo-processing workflow.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Geo-processing workflow"

1

Schäffer, Bastian, and Theodor Foerster. "A client for distributed geo-processing and workflow design." Journal of Location Based Services 2, no. 3 (September 2008): 194–210. http://dx.doi.org/10.1080/17489720802558491.

2

Chen, Nengcheng, Liping Di, Genong Yu, and Jianya Gong. "Geo-processing workflow driven wildfire hot pixel detection under sensor web environment." Computers & Geosciences 36, no. 3 (March 2010): 362–72. http://dx.doi.org/10.1016/j.cageo.2009.06.013.

3

Lemmens, R., B. Toxopeus, L. Boerboom, M. Schouwenburg, B. Retsios, W. Nieuwenhuis, and C. Mannaerts. "IMPLEMENTATION OF A COMPREHENSIVE AND EFFECTIVE GEOPROCESSING WORKFLOW ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W8 (July 11, 2018): 123–27. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w8-123-2018.

Abstract:
Many projects and research efforts implement geo-information (GI) workflows, ranging from very basic ones to complicated software processing chains. The creation of these workflows normally requires considerable expertise, and sharing them is often hampered by undocumented and non-interoperable geoprocessing implementations. We believe that the visual representation of workflows can help in the creation, sharing and understanding of software processing of geodata. In our efforts we aim at bridging abstract and concrete workflow representations for the sake of easing the creation and sharing of simple geoprocessing logic within and across projects.

We have implemented a first version of our workflow approach in one of our current projects. MARIS, the Mara Rangeland Information System, is being developed in the Mau Mara Serengeti Sustainable Water Initiative (MaMaSe). It is a web client that uses the Integrated Land and Water Information System (ILWIS), our open-source remote sensing and GIS software. It aims to integrate historic, near-real-time and near-future forecasts of rainfall, biomass, carrying capacity and livestock market information for the sustainable management of rangelands by conservancies in the Maasai Mara in Kenya. More importantly, it aims to show the results of a carrying capacity model implemented in a comprehensive geoprocessing workflow.

In this paper we briefly describe our software, show the workflow implementation strategy, and discuss the innovative aspects of our approach as well as our project evaluation and the opportunities for further grounding of our software development.
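To make the idea of shareable geoprocessing logic concrete, here is a minimal Python sketch of a workflow described as data (an ordered list of named steps) and then executed step by step. The operator names and formulas are invented for illustration and do not correspond to the ILWIS/MARIS implementation described in the paper.

from typing import Callable, Dict, List

# Registry of available processing steps (toy stand-ins for real geo-operators).
OPERATORS: Dict[str, Callable[[dict], dict]] = {
    "load_rainfall": lambda ctx: {**ctx, "rainfall_mm": [10.0, 2.5, 0.0]},
    "estimate_biomass": lambda ctx: {**ctx, "biomass_kg_ha": sum(ctx["rainfall_mm"]) * 12.0},
    "carrying_capacity": lambda ctx: {**ctx, "capacity_lsu_ha": ctx["biomass_kg_ha"] / 8.0},
}

# Abstract workflow representation: just an ordered list of operator names,
# which is easy to store, share, and render as a diagram.
workflow: List[str] = ["load_rainfall", "estimate_biomass", "carrying_capacity"]

def run(steps: List[str], context: dict) -> dict:
    """Execute each named step in order, threading a shared context through."""
    for step in steps:
        context = OPERATORS[step](context)
        print(f"after {step}: {context}")
    return context

if __name__ == "__main__":
    run(workflow, {})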
4

Li, Chunlin, Jun Liu, Min Wang, and Youlong Luo. "Fault-tolerant scheduling and data placement for scientific workflow processing in geo-distributed clouds." Journal of Systems and Software 187 (May 2022): 111227. http://dx.doi.org/10.1016/j.jss.2022.111227.

5

Chen, Nengcheng, Liping Di, Genong Yu, and Jianya Gong. "Automatic On-Demand Data Feed Service for AutoChem Based on Reusable Geo-Processing Workflow." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 3, no. 4 (December 2010): 418–26. http://dx.doi.org/10.1109/jstars.2010.2049094.

6

Chen, Wuhui, Incheon Paik, and Patrick C. K. Hung. "Transformation-Based Streaming Workflow Allocation on Geo-Distributed Datacenters for Streaming Big Data Processing." IEEE Transactions on Services Computing 12, no. 4 (July 1, 2019): 654–68. http://dx.doi.org/10.1109/tsc.2016.2614297.

7

Toschi, I., E. Nocerino, F. Remondino, A. Revolti, G. Soria, and S. Piffer. "GEOSPATIAL DATA PROCESSING FOR 3D CITY MODEL GENERATION, MANAGEMENT AND VISUALIZATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1/W1 (May 31, 2017): 527–34. http://dx.doi.org/10.5194/isprs-archives-xlii-1-w1-527-2017.

Abstract:
Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in “smart city” applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above – http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.
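As a loose illustration of step (iii), linking building geometries with non-spatial attributes, the sketch below joins a footprint layer to a table of energy consumption values using GeoPandas. The file names and column names are hypothetical placeholders, not artefacts of the SENECA project.

import geopandas as gpd
import pandas as pd

# Building footprints exported from the 3D city model (hypothetical file and ID column).
buildings = gpd.read_file("buildings_lod2.gpkg")      # contains a "building_id" column
# Non-spatial attributes, e.g. annual energy consumption per building.
energy = pd.read_csv("energy_consumption.csv")        # "building_id", "kwh_per_year"

# Attribute join: geometries enriched with non-spatial information.
enriched = buildings.merge(energy, on="building_id", how="left")

# Store the enriched layer so a webGIS client can serve it.
enriched.to_file("buildings_enriched.gpkg", driver="GPKG")
print(enriched[["building_id", "kwh_per_year"]].head())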
8

Iacone, Brooke, Ginger R. H. Allington, and Ryan Engstrom. "A Methodology for Georeferencing and Mosaicking Corona Imagery in Semi-Arid Environments." Remote Sensing 14, no. 21 (October 27, 2022): 5395. http://dx.doi.org/10.3390/rs14215395.

Abstract:
High-resolution Corona imagery acquired by the United States through spy missions in the 1960s presents an opportunity to gain critical insight into historic land cover conditions and expand the timeline of available data for land cover change analyses, particularly in regions such as Northern China where data from that era are scarce. Corona imagery requires time-intensive pre-processing, and the existing literature lacks the necessary detail required to replicate these processes easily. This is particularly true in landscapes where dynamic physical processes, such as aeolian desertification, reshape topography over time or regions with few persistent features for use in geo-referencing. In this study, we present a workflow for georeferencing Corona imagery in a highly desertified landscape that contained mobile dunes, shifting vegetation cover, and a few reference points. We geo-referenced four Corona images from Inner Mongolia, China using uniquely derived ground control points and Landsat TM imagery with an overall accuracy of 11.77 m, and the workflow is documented in sufficient detail for replication in similar environments.
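For readers wanting a concrete starting point, here is a hedged sketch of one common way to georeference a scanned image against ground control points using GDAL's Python bindings. The file names, GCP coordinates, and EPSG code are placeholders rather than values from this study, and the paper's own workflow may differ in its choice of transformation and accuracy assessment.

from osgeo import gdal

# Ground control points: map x, map y, elevation, image column (pixel), image row (line).
gcps = [
    gdal.GCP(438217.5, 4787012.0, 0.0, 1024.0, 2048.0),
    gdal.GCP(441880.2, 4789455.6, 0.0, 5230.0, 1510.0),
    gdal.GCP(436905.9, 4783320.4, 0.0, 830.0, 6120.0),
    gdal.GCP(443512.7, 4784101.3, 0.0, 6005.0, 5470.0),
]

# Step 1: attach the GCPs and the target spatial reference to the raw scan.
gdal.Translate(
    "corona_gcps.tif",
    "corona_scan.tif",
    outputSRS="EPSG:32650",   # placeholder UTM zone
    GCPs=gcps,
)

# Step 2: warp (resample) the image into the target grid using the GCPs.
gdal.Warp(
    "corona_georef.tif",
    "corona_gcps.tif",
    dstSRS="EPSG:32650",
    resampleAlg="bilinear",
    tps=True,                 # thin-plate spline; a polynomial transform could be used instead
)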
9

Lucas, G. "CONSIDERING TIME IN ORTHOPHOTOGRAPHY PRODUCTION: FROM A GENERAL WORKFLOW TO A SHORTENED WORKFLOW FOR A FASTER DISASTER RESPONSE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3/W3 (August 19, 2015): 249–55. http://dx.doi.org/10.5194/isprsarchives-xl-3-w3-249-2015.

Abstract:
This article deals with the production time of orthophoto imagery acquired with a medium-size digital frame camera. The workflow examination follows two main parts: data acquisition and post-processing. The objectives of the research are fourfold: 1/ gathering time references for the most important steps of orthophoto production (it turned out that the literature is lacking on this topic); these figures are later used for total production time estimation; 2/ identifying levers for reducing orthophoto production time; 3/ building a simplified production workflow for emergency response, less demanding in terms of accuracy and faster, and comparing it to a classical workflow; 4/ providing methodical elements for the estimation of production time for a custom project.

In the data acquisition part, a comprehensive review lists and describes all the factors that may affect acquisition efficiency. Using a simulation with different variables (average line length, turn time, flight speed), their effect on acquisition efficiency is quantitatively examined.

Regarding post-processing, the time reference figures were collected from the processing of a 1000-frame case study with 15 cm GSD covering a rectangular area of 447 km²; the time required to complete each step of the production was recorded. When several technical options are possible, each one is tested and its time documented so that all alternatives are available. Based on a chosen technical workflow and using the compiled time references of the elementary steps, a total time is calculated for the post-processing of the 1000 frames. Two scenarios are compared with regard to time and accuracy. The first follows “normal” practice, comprising triangulation, orthorectification, and advanced mosaicking methods (feature detection, seam line editing, and seam applicator); the second is simplified and makes compromises on positional accuracy (using direct geo-referencing) and seam line preparation in order to produce the orthophoto faster. The shortened workflow reduces the production time by a factor of more than three, whereas the positional error increases from 1 GSD to 1.5 GSD. The examination of time allocation through the production process shows that it is in the post-processing phase that time savings are most worth pursuing.
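The acquisition-efficiency simulation mentioned in the abstract boils down to simple arithmetic over a few variables. Below is a back-of-the-envelope sketch with illustrative numbers; the formula and values are assumptions for demonstration, not figures from the paper.

def acquisition_time_hours(n_lines: int, line_length_km: float,
                           speed_kmh: float, turn_time_min: float) -> float:
    """Total flight time = time spent on the survey lines + time spent turning."""
    time_on_lines_h = n_lines * line_length_km / speed_kmh
    time_turning_h = (n_lines - 1) * turn_time_min / 60.0
    return time_on_lines_h + time_turning_h

if __name__ == "__main__":
    # e.g. 30 lines of 15 km flown at 220 km/h with 2.5-minute turns
    t = acquisition_time_hours(n_lines=30, line_length_km=15.0,
                               speed_kmh=220.0, turn_time_min=2.5)
    print(f"estimated acquisition time: {t:.1f} h")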
10

Liakos, Leonidas, and Panos Panagos. "Challenges in the Geo-Processing of Big Soil Spatial Data." Land 11, no. 12 (December 13, 2022): 2287. http://dx.doi.org/10.3390/land11122287.

Abstract:
This study addressed a critical resource—soil—through the prism of processing big data at the continental scale. Rapid progress in technology and remote sensing has greatly improved data processing on extensive spatial and temporal scales. Here, the manuscript presents the results of a systematic effort to geo-process and analyze soil-relevant data. In addition, the main highlights include the difficulties associated with using data infrastructures, managing big geospatial data, decentralizing operations through remote access, mass processing, and automating the data-processing workflow using advanced programming languages. Challenges to this study included the reproducibility of the results, their presentation in a communicative way, and the harmonization of complex heterogeneous data in space and time based on high standards of accuracy. Accuracy was especially important as the results needed to be identical at all spatial scales (from point counts to aggregated countrywide data). The geospatial modeling of soil requires analysis at multiple spatial scales, from the pixel level, through multiple territorial units (national or regional), and river catchments, to the global scale. Advanced mapping methods (e.g., zonal statistics, map algebra, choropleth maps, and proportional symbols) were used to convey comprehensive and substantial information that would be of use to policymakers. More specifically, a variety of cartographic practices were employed, including vector and raster visualization and hexagon grid maps at the global or European scale and in several cartographic projections. The information was rendered in both grid format and as aggregated statistics per polygon (zonal statistics), combined with diagrams and an advanced graphical interface. The uncertainty was estimated and the results were validated in order to present the outputs in the most robust way. The study was also interdisciplinary in nature, requiring large-scale datasets to be integrated from different scientific domains, such as soil science, geography, hydrology, chemistry, climate change, and agriculture.
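As a minimal illustration of the zonal-statistics aggregation pattern mentioned above, the NumPy sketch below computes the mean of a value raster per zone. The arrays are tiny synthetic examples; in the actual work the inputs would be continental-scale rasters read through a GIS library.

import numpy as np

# Value raster (e.g. a soil property) and a zone raster (e.g. country codes).
values = np.array([[1.0, 2.0, 4.0],
                   [3.0, 5.0, 8.0],
                   [2.0, 2.0, 6.0]])
zones = np.array([[1, 1, 2],
                  [1, 2, 2],
                  [3, 3, 2]])

def zonal_mean(values: np.ndarray, zones: np.ndarray) -> dict:
    """Return {zone_id: mean of the values falling inside that zone}."""
    return {int(z): float(values[zones == z].mean()) for z in np.unique(zones)}

print(zonal_mean(values, zones))   # {1: 2.0, 2: 5.75, 3: 2.0}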

Dissertations / Theses on the topic "Geo-processing workflow"

1

Migliorini, Sara. "Supporting Distributed Geo-Processing: A Framework for Managing Multi-Accuracy Spatial Data." Doctoral thesis, 2012. http://hdl.handle.net/11562/397936.

Abstract:
In recent years, many countries have developed a Spatial Data Infrastructure (SDI) to manage their geographical information. Large SDIs require new effective techniques to continuously integrate spatial data coming from different sources and characterized by different quality levels. This need is recognized in the scientific literature and is known as the data integration or information fusion problem. A specific aspect of spatial data integration concerns the matching and alignment of object geometries. Existing methods mainly perform the integration by simply aligning the less accurate database with the more accurate one, assuming that the latter always contains a better representation of the relevant geometries. Following this approach, spatial entities are merged together in a sub-optimal manner, causing distortions that potentially reduce the overall database quality. This thesis deals with the problem of spatial data integration in a highly coupled SDI where members have already adhered to a common global schema; hence it focuses on the geometric integration problem, assuming that some schema matching operations have already been performed. In particular, the thesis initially proposes a model for representing spatial data together with their quality characteristics, producing a multi-accuracy spatial database, and then defines a novel integration process that takes into account the different positional accuracies of the involved source databases. The main goal of this process is to preserve the coherence and consistency of the integrated data and, when possible, to enhance its accuracy. The proposed multi-accuracy spatial data model and the related integration technique represent the basis for a framework able to support distributed geo-processing in an SDI context. The problem of implementing such long-running distributed computations is also treated from a practical perspective by evaluating the applicability of existing workflow technologies. This evaluation leads to the definition of an ideal software solution, whose characteristics are discussed in the last chapters by considering the design of the proposed integration process as a motivating example.
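As a toy illustration of the general idea of accuracy-aware merging (not the thesis's actual integration process), the sketch below combines two observations of the same point from databases with different positional accuracies using inverse-variance weighting, so that the more accurate source dominates without discarding the other.

def merge_point(p_a, sigma_a, p_b, sigma_b):
    """Inverse-variance weighted combination of two 2D positions.

    p_a, p_b       : (x, y) coordinates of the same feature in database A and B
    sigma_a, sigma_b: positional standard deviation (accuracy) of each source, in metres
    """
    w_a = 1.0 / sigma_a ** 2
    w_b = 1.0 / sigma_b ** 2
    x = (w_a * p_a[0] + w_b * p_b[0]) / (w_a + w_b)
    y = (w_a * p_a[1] + w_b * p_b[1]) / (w_a + w_b)
    sigma = (1.0 / (w_a + w_b)) ** 0.5   # accuracy of the merged position
    return (x, y), sigma

merged, sigma = merge_point((1000.0, 2000.0), 0.5, (1001.2, 1999.4), 2.0)
print(merged, sigma)   # result is pulled towards the more accurate source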

Book chapters on the topic "Geo-processing workflow"

1

Ying, Yuan, Wu Qunyong, and Kang Linjun. "Integrate Geo-spatial Web Processing Services by Workflow Technology." In Lecture Notes in Electrical Engineering, 321–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27323-0_41.

2

Akhter, Shamim, and Kento Aida. "Cloud-Based Geo-Information Infrastructure to Support Agriculture Activity Monitoring." In Geospatial Intelligence, 1493–502. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-8054-6.ch066.

Abstract:
Agriculture activity monitoring needs to deal with large amounts of data originating from various organizations (weather stations, agriculture repositories, field management, farm management, universities, etc.) and from the general public. Therefore, a scalable environment with flexible information access, easy communication, and real-time collaboration from all types of computing devices, including mobile handheld devices such as smart phones, PDAs, and iPads, as well as geo-sensor devices, is essential. The system must be accessible, scalable, and transparent with respect to location, migration, and resources. In addition, the framework should support modern information retrieval and management systems, unstructured-to-structured information processing (IBM Info Stream, text analytics, Pig and Hive, etc.), task prioritization, task distribution (Hadoop), workflow and task scheduling systems, processing power, and data storage (Amazon S3 and Google BigTable). Thus, High Scalability Computing (HSC) or a cloud-based system can be a prominent and convincing solution in this circumstance.
3

Akhter, Shamim, and Kento Aida. "Cloud-Based Geo-Information Infrastructure to Support Agriculture Activity Monitoring." In Information Technology Integration for Socio-Economic Development, 125–34. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0539-6.ch005.

Abstract:
Agriculture activity monitoring needs to deal with large amounts of data originating from various organizations (weather stations, agriculture repositories, field management, farm management, universities, etc.) and from the general public. Therefore, a scalable environment with flexible information access, easy communication, and real-time collaboration from all types of computing devices, including mobile handheld devices such as smart phones, PDAs, and iPads, as well as geo-sensor devices, is essential. The system must be accessible, scalable, and transparent with respect to location, migration, and resources. In addition, the framework should support modern information retrieval and management systems, unstructured-to-structured information processing (IBM Info Stream, text analytics, Pig and Hive, etc.), task prioritization, task distribution (Hadoop), workflow and task scheduling systems, processing power, and data storage (Amazon S3 and Google BigTable). Thus, High Scalability Computing (HSC) or a cloud-based system can be a prominent and convincing solution in this circumstance.

Conference papers on the topic "Geo-processing workflow"

1

Migliorini, S., M. Gambini, A. Belussi, M. Negri, and G. Pelagatti. "Workflow technology for geo-processing." In the 2nd International Conference. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1999320.1999356.

2

Song, Xianfeng, and Junzhi Liu. "Scheduling Geo-processing Workflow Applications with QoS." In 2008 International Workshop on Geoscience and Remote Sensing (ETT and GRS). IEEE, 2008. http://dx.doi.org/10.1109/ettandgrs.2008.98.

3

Chen, Nengcheng, Zhong Zhen, Liping Di, Genong Yu, and Peisheng Zhao. "Resource Oriented Architecture for heterogeneous Geo-Processing Workflow Integration." In 2009 17th International Conference on Geoinformatics. IEEE, 2009. http://dx.doi.org/10.1109/geoinformatics.2009.5293169.

4

Chen, Nengcheng, Liping Di, Jianya Gong, Genong Yu, and Min Min. "Automatic Earth observation data service based on reusable geo-processing workflow." In International Conference on Earth Observation Data Processing and Analysis, edited by Deren Li, Jianya Gong, and Huayi Wu. SPIE, 2008. http://dx.doi.org/10.1117/12.815690.

5

Song, Xianfeng, Chuanrong Li, and Lingli Tang. "Global planning of geo-processing workflow in a distributed and collaborative environment." In 2009 17th International Conference on Geoinformatics. IEEE, 2009. http://dx.doi.org/10.1109/geoinformatics.2009.5293440.

6

Gurung, Gyanendra, and Kshama Roy. "Streamlining the GIS to CAD Workflow for Automated Pipeline Alignment Sheet Generation." In 2020 13th International Pipeline Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipc2020-9673.

Abstract:
The use of Geographic Information Systems (GIS) in managing pipeline databases and automating routine engineering processes has become a standard practice in the pipeline industry. While maintaining a central database provides security, integrity, and easy management of data throughout the pipeline's lifecycle, GIS enables spatial analysis of pipeline data in addition to streamlining access and visualization of results. One of the major benefits of GIS integration lies in the ease of automating alignment sheet generation for pipelines. This paper introduces a simplified pipeline alignment sheet generation workflow that uses GIS datasets to produce highly customizable alignment sheets in AutoCAD, a much-preferred format in the pipeline industry. By utilizing existing GIS and AutoCAD features to generate the alignment sheet, the need to write complicated geo-processing or plotting algorithms is minimized, which in turn reduces the risk of systematic errors. This robust and user-friendly workflow not only ensures safety but also leads to a cost-effective solution.
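Below is a hedged sketch of the GIS-to-CAD handover the abstract describes: read a pipeline centerline stored as GIS data (here a GeoJSON file with a placeholder name) and write it into a DXF drawing that AutoCAD can open, using the ezdxf library. The real workflow also handles stationing, annotation, and sheet layout, which are omitted here.

import json
import ezdxf   # pip install ezdxf

# Load a LineString feature exported from the GIS database (hypothetical file).
with open("pipeline_centerline.geojson") as f:
    feature = json.load(f)["features"][0]
points = feature["geometry"]["coordinates"]   # [[x, y], ...] in a projected CRS

# Create a DXF document and draw the centerline as a lightweight polyline.
doc = ezdxf.new(dxfversion="R2010")
doc.layers.new("PIPELINE")
msp = doc.modelspace()
msp.add_lwpolyline(points, dxfattribs={"layer": "PIPELINE"})
doc.saveas("alignment_sheet_base.dxf")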
7

Elkhamry, Ayman, Mohamed Fouda, Ahmed Taher, and Eduard Bikchandaev. "Integration of Deep Resistivity High Definition and Ultra-Deep Resistivity 3D Inversion Enables Geo-Steering in Thin Laminated Reservoirs." In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-23017-ea.

Abstract:
Integrating the inversions of simultaneously acquired deep and ultra-deep logging while drilling (LWD) azimuthal resistivity measurements can improve the resolution of the overlapping volume under investigation and reduce uncertainty in the far-field volume model reconstruction. Both are key tools for the precise placement of horizontal wells. Recent enhancements in the downhole tools include surface processing algorithms and advanced visualization techniques that allow higher confidence in well placement decisions through improved understanding of subsurface geology and the orientation of sand channels in real time. The high-definition multi-layer inversion capability of a new-generation deep resistivity tool has been utilized along with the 1D and 3D ultra-deep resistivity inversions of a separate, established tool, providing detailed visualization of formations both near the wellbore and in the far field. Both technologies were compared in reservoirs with varying resistivity profiles and thicknesses. In addition, the resistivity anisotropy analysis from the ultra-deep 3D inversion was utilized to confirm lithology around the wellbore, differentiating anisotropic shale zones from other lithologies of similar low resistivity. Ultra-deep 3D inversions were processed with fine-scale cell sizes and then used to validate the high-resolution deep resistivity inversion results. The integration of multiple inversions with varying capabilities enabled resolving thin reservoir layers in a low-resistivity, low-contrast environment, providing superior resolution within the overlapping volumes of investigation of the deep and ultra-deep resistivities. Customization of the ultra-deep 3D inversion successfully enabled geo-mapping of 1-2 ft thick layers and was used to validate the high-resolution deep resistivity 1D inversion. The increasingly challenging geo-steering decision-making process in a complex drilling environment was addressed by employing advancements in LWD technologies that provide higher signal-to-noise ratios, multiple frequencies, and multiple transmitter-receiver spacings, augmented with customized inversions that provide superior results. This paper demonstrates the added value of identifying, mapping, and navigating thin reservoir zones. A novel workflow has been developed to improve resolution in deep and ultra-deep resistivity mapping, enabling the identification of thin laminations around the wellbore and capitalizing on the latest advancements in LWD geo-steering technologies.
8

Elkhamry, Ayman, Mohamed Fouda, Ahmed Taher, and Eduard Bikchandaev. "Integration of Deep Resistivity High Definition and Ultra-Deep Resistivity 3D Inversion Enables Geo-Steering in Thin Laminated Reservoirs." In International Petroleum Technology Conference. IPTC, 2023. http://dx.doi.org/10.2523/iptc-23017-ms.

Abstract:
Integrating the inversions of simultaneously acquired deep and ultra-deep logging while drilling (LWD) azimuthal resistivity measurements can improve the resolution of the overlapping volume under investigation and reduce uncertainty in the far-field volume model reconstruction. Both are key tools for the precise placement of horizontal wells. Recent enhancements in the downhole tools include surface processing algorithms and advanced visualization techniques that allow higher confidence in well placement decisions through improved understanding of subsurface geology and the orientation of sand channels in real time. The high-definition multi-layer inversion capability of a new-generation deep resistivity tool has been utilized along with the 1D and 3D ultra-deep resistivity inversions of a separate, established tool, providing detailed visualization of formations both near the wellbore and in the far field. Both technologies were compared in reservoirs with varying resistivity profiles and thicknesses. In addition, the resistivity anisotropy analysis from the ultra-deep 3D inversion was utilized to confirm lithology around the wellbore, differentiating anisotropic shale zones from other lithologies of similar low resistivity. Ultra-deep 3D inversions were processed with fine-scale cell sizes and then used to validate the high-resolution deep resistivity inversion results. The integration of multiple inversions with varying capabilities enabled resolving thin reservoir layers in a low-resistivity, low-contrast environment, providing superior resolution within the overlapping volumes of investigation of the deep and ultra-deep resistivities. Customization of the ultra-deep 3D inversion successfully enabled geo-mapping of 1-2 ft thick layers and was used to validate the high-resolution deep resistivity 1D inversion. The increasingly challenging geo-steering decision-making process in a complex drilling environment was addressed by employing advancements in LWD technologies that provide higher signal-to-noise ratios, multiple frequencies, and multiple transmitter-receiver spacings, augmented with customized inversions that provide superior results. This paper demonstrates the added value of identifying, mapping, and navigating thin reservoir zones. A novel workflow has been developed to improve resolution in deep and ultra-deep resistivity mapping, enabling the identification of thin laminations around the wellbore and capitalizing on the latest advancements in LWD geo-steering technologies.
9

Yang, Chao, Yuanzheng Shao, Nengcheng Chen, and Liping Di. "Aggregating distributed geo-processing workflows and web services as processing model web." In 2012 First International Conference on Agro-Geoinformatics. IEEE, 2012. http://dx.doi.org/10.1109/agro-geoinformatics.2012.6311638.

10

Degenhardt, John J., Safdar Ali, Mansoor Ali, Brian Chin, W. D. Von Gonten, Jr., and Eric Peavey. "RESERVOIR SCALE CHEMOSTRATIGRAPHY AND FACIES MODELING USING HIGH SAMPLE RATE GEOPHYSICAL SCANS OF WHOLE CORE." In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0051.

Abstract:
Many unconventional reservoirs exhibit a high level of vertical heterogeneity in terms of petrophysical and geo-mechanical properties. These properties often change on the scale of centimeters across rock types or bedding, and thus cannot be accurately measured by low-resolution petrophysical logs. Nonetheless, the distribution of these properties within a flow unit can significantly impact targeting, stimulation and production. In unconventional resource plays such as the Austin Chalk and Eagle Ford shale in south Texas, ash layers are the primary source of vertical heterogeneity throughout the reservoir. The ash layers tend to vary considerably in distribution, thickness and composition, but generally have the potential to significantly impact the economic recovery of hydrocarbons by closure of hydraulic fracture conduits via viscous creep and pinch-off. The identification and characterization of ash layers can be a time-consuming process that leads to wide variations in the interpretations that are made with regard to their presence and potential impact. We seek to use machine learning (ML) techniques to facilitate rapid and more consistent identification of ash layers and other pertinent geologic lithofacies. This paper involves high-resolution laboratory measurements of geophysical properties over whole core and analysis of such data using machine-learning techniques to build novel high-resolution facies models that can be used to make statistically meaningful predictions of facies characteristics in proximally remote wells where core or other physical data are not available. Multiple core wells in the Austin Chalk/Eagle Ford shale play in Dimmitt County, Texas, USA were evaluated. Drill core was scanned at high sample rates (1 mm to 1 inch) using specialized equipment to acquire continuous high-resolution petrophysical logs, and the general modeling workflow involved pre-processing of the high-sample-rate data and classification training using feature selection and hyperparameter estimation. Evaluation of the resulting training classifiers using Receiver Operating Characteristics (ROC) determined that the blind-test ROC result for ash layers was lower than those of the better-constrained carbonate and high-organic mudstone/wackestone data sets. From this it can be concluded that additional consideration must be given to the set of variables that govern the petrophysical and mechanical properties of ash layers prior to developing them as a classifier. Variability among ash layers is controlled by geologic factors that essentially change their compositional makeup and, consequently, their fundamental rock properties. As such, some proportion of them are likely to be misidentified as high-clay mudstone/wackestone classifiers. Further refinement of such ash-layer compositional variables is expected to improve ROC results for ash layers significantly.
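For readers unfamiliar with the classification-and-ROC evaluation pattern the abstract refers to, here is a generic scikit-learn sketch using synthetic data in place of the high-resolution core-scan logs, which are not available here. The feature set, labeling rule, and model choice are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for high-sample-rate geophysical measurements
# (e.g. density, magnetic susceptibility, spectral gamma), one row per depth sample.
X = rng.normal(size=(2000, 3))
# Binary label: 1 = "ash layer", 0 = other lithofacies (synthetic rule plus noise).
y = ((X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000)) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# ROC AUC on the held-out split, analogous to the blind-test ROC in the paper.
scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))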
