Academic literature on the topic '360 photogrammetry'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '360 photogrammetry.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "360 photogrammetry"

1

Janiszewski, Mateusz, Masoud Torkan, Lauri Uotinen, and Mikael Rinne. "Rapid Photogrammetry with a 360-Degree Camera for Tunnel Mapping." Remote Sensing 14, no. 21 (October 31, 2022): 5494. http://dx.doi.org/10.3390/rs14215494.

Abstract:
Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry is a viable method to digitize underground spaces for inspection, documentation, or remote mapping. However, the conventional image acquisition process can be laborious and time-consuming. Previous studies confirmed that the acquisition time can be reduced when using a 360-degree camera to capture the images. This paper demonstrates a method for rapid photogrammetric reconstruction of tunnels using a 360-degree camera. The method is demonstrated in a field test executed in a tunnel section of the Underground Research Laboratory of Aalto University in Espoo, Finland. A 10 m-long tunnel section with exposed rock was photographed using the 360-degree camera from 27 locations and a 3D model was reconstructed using SfM-MVS photogrammetry. The resulting model was then compared with a reference laser scan and a more conventional digital single-lens reflex (DSLR) camera-based model. Image acquisition with a 360-degree camera was 3x faster than with a conventional DSLR camera and the workflow was easier and less prone to errors. The 360-degree camera-based model achieved a 0.0046 m distance accuracy error compared to the reference laser scan. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements of the sub-vertical fracture sets, with an average error of 2–5°.
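The 0.0046 m figure above comes from comparing the photogrammetric point cloud against a reference laser scan. As an illustrative sketch only (not the authors' implementation; such comparisons are typically run in dedicated tools with KD-tree acceleration), the underlying metric can be expressed as a mean nearest-neighbour, cloud-to-cloud distance:

```python
import numpy as np

def cloud_to_cloud_distance(test_cloud, reference_cloud):
    """Mean nearest-neighbour distance from each point of `test_cloud`
    to `reference_cloud` (brute force; fine for small toy clouds)."""
    diffs = test_cloud[:, None, :] - reference_cloud[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # (n_test, n_ref) pairwise distances
    return float(dists.min(axis=1).mean())  # mean of per-point nearest distances

# Toy check: a flat 10x10 "tunnel wall" grid versus a copy shifted 5 mm
# along the surface normal reports exactly 0.005 m.
reference = np.array([[x, y, 0.0] for x in range(10) for y in range(10)],
                     dtype=float)
shifted = reference + np.array([0.0, 0.0, 0.005])
error = cloud_to_cloud_distance(shifted, reference)
```

Production pipelines replace the brute-force search with a spatial index and often use more robust variants (cloud-to-mesh or M3C2 distances) rather than this plain nearest-neighbour mean.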
2

Barazzetti, L., M. Previtali, and F. Roncoroni. "3D MODELLING WITH THE SAMSUNG GEAR 360." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 85–90. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-85-2017.

Abstract:
The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has a relatively low metric accuracy. Because these results contrasted with the accuracy achieved by using the original fisheye images (front- and rear-facing) in photogrammetric reconstructions, an alternative solution for generating the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the camera's two lenses, as well as their relative orientation, made it possible to generate new equirectangular projections with significantly improved geometric accuracy.
3

Janiszewski, Mateusz, Markus Prittinen, Masoud Torkan, and Lauri Uotinen. "Rapid tunnel scanning using a 360-degree camera and SfM photogrammetry." IOP Conference Series: Earth and Environmental Science 1124, no. 1 (January 1, 2023): 012010. http://dx.doi.org/10.1088/1755-1315/1124/1/012010.

Abstract:
Photogrammetric scanning can be employed for the digitization of underground spaces, for example for remote mapping, visualization, or training purposes. However, such a technique requires capturing many photos, which can be laborious and time-consuming. Previous research has demonstrated that the acquisition time can be reduced by capturing the data with multiple lenses or devices at the same time. Therefore, this paper demonstrates a method for rapid scanning of hard-rock tunnels using Structure-from-Motion (SfM) photogrammetry and a 360-degree camera. The test was performed in the Underground Research Laboratory of Aalto University (URLA). The tunnel is located in granitic rocks at a depth of 20 m below the Otaniemi campus in Espoo, Finland. A 10 m long and 3.5 m high tunnel section with exposed rock was selected for this study. Photos were captured using the 360-degree camera from 27 locations and 3D models were reconstructed using SfM photogrammetry. The accuracy, speed, and resolution of the 3D models were measured and compared with models scanned with a digital single-lens reflex (DSLR) camera. The results show that the data capture process with a 360-degree camera is 6x faster compared to a conventional camera. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements. Even though the quality of the 360-degree camera-based 3D model was visually inferior to the DSLR model, the point cloud had sufficient accuracy and resolution for semi-automatic discontinuity measurements. The quality of the models can be improved by combining 360-degree and DSLR photos, which results in a point cloud with 3x higher resolution and 2x higher accuracy. The results demonstrated that 360-degree cameras can be used for the rapid digitization of underground tunnels.
4

Murtiyoso, A., H. Hristova, N. Rehush, and V. C. Griess. "LOW-COST MAPPING OF FOREST UNDER-STOREY VEGETATION USING SPHERICAL PHOTOGRAMMETRY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W1-2022 (December 8, 2022): 185–90. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w1-2022-185-2022.

Abstract:
This paper is an attempt to respond to the growing demand for 3D data in forestry, especially for 3D mapping. The use of terrestrial laser scanners (TLS) dominates contemporary literature on under-storey vegetation mapping, as this technique provides precise and easy-to-use solutions for users. However, TLS requires substantial investment in device acquisition and user training. The search for and development of low-cost alternatives is therefore an interesting field of inquiry. Here, we use low-cost 360° cameras combined with spherical photogrammetric principles for under-storey vegetation mapping. While we fully acknowledge that this low-cost approach will not generate results on par with either TLS or classical close-range photogrammetry, our main aim is to investigate whether the alternative is sufficient to meet the requirements of forest mapping. In this regard, geometric analyses were conducted using both TLS and close-range photogrammetry as comparison points. The diameter at breast height (DBH), a parameter commonly used in forestry, was then computed from the 360° point cloud using three different methods to determine whether a similar order of precision to the two reference datasets could be obtained. The results show that the 360° cameras were able to generate point clouds of similar geometric quality to the references despite their low density, albeit with a significantly higher amount of noise. The effect of the noise is also evident in the DBH computation, where it yielded an average error of 3.5 cm compared to both the TLS and close-range photogrammetry.
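The DBH estimation described above amounts to fitting a circle to a horizontal slice of the stem point cloud at breast height (1.3 m). The paper compares three methods that are not detailed in the abstract; as one plausible, clearly labelled example of the idea, this is an algebraic (Kåsa) least-squares circle fit applied to a synthetic noisy stem slice:

```python
import numpy as np

def fit_circle(points_xy):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, radius).
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) in least squares."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    radius = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    return float(cx), float(cy), float(radius)

# Synthetic stem slice: a 0.15 m radius ring centred at (1.0, 2.0) with
# 2 mm of measurement noise, standing in for a breast-height slice of a
# 360-degree point cloud.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
ring = np.column_stack([1.0 + 0.15 * np.cos(theta),
                        2.0 + 0.15 * np.sin(theta)])
ring += rng.normal(scale=0.002, size=ring.shape)
cx, cy, radius = fit_circle(ring)
dbh = 2.0 * radius  # estimated diameter at breast height, in metres
```

With the noise levels the paper reports for 360° point clouds, robust variants (RANSAC pre-filtering before the fit) would be the natural next step; the Kåsa fit above is the simplest baseline.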
5

Genovese, Katia, and Carmine Pappalettere. "Axial stereo-photogrammetry for 360° measurement on tubular samples." Optics and Lasers in Engineering 45, no. 5 (May 2007): 637–50. http://dx.doi.org/10.1016/j.optlaseng.2006.08.009.

6

Teppati Losè, Lorenzo, Filiberto Chiabrando, and Fabio Giulio Tonolo. "Documentation of Complex Environments Using 360° Cameras. The Santa Marta Belltower in Montanaro." Remote Sensing 13, no. 18 (September 11, 2021): 3633. http://dx.doi.org/10.3390/rs13183633.

Abstract:
Low-cost and fast surveying approaches are increasingly being deployed in several domains, including the field of built heritage documentation. In parallel with mobile mapping systems, uncrewed aerial systems, and simultaneous localization and mapping systems, 360° cameras and spherical photogrammetry are research topics attracting significant interest for this kind of application. Although several instruments and techniques can be considered consolidated approaches in the documentation process, the research presented in this manuscript focuses on a series of tests and analyses using 360° cameras for the 3D metric documentation of a complex environment, applied to the case study of an 18th-century belltower in the Piemonte region (north-west Italy). Both the data acquisition and data processing phases were thoroughly investigated, and several processing strategies were planned, carried out, and evaluated. Data derived from consolidated 3D mapping approaches were used as a ground reference to validate the results derived from the spherical photogrammetry approach. The outcomes of this research confirmed, under specific conditions and with a proper setup, the possibility of using 360° images in a Structure from Motion pipeline to meet the expected accuracies of typical architectural large-scale drawings.
7

Murtiyoso, A., P. Grussenmeyer, and D. Suwardhi. "TECHNICAL CONSIDERATIONS IN LOW-COST HERITAGE DOCUMENTATION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 225–32. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-225-2019.

Abstract:
The use of photogrammetry in 3D heritage documentation has matured over recent years. At the same time, many types of sensors have been developed in the field of imaging. While photogrammetry is considered a low-cost alternative to TLS, several options exist in terms of sensor type, with trade-offs between price, ease of use, and quality of resolution. Nevertheless, proper knowledge of acquisition and processing is still required to generate acceptable results. This paper aims to compare three photogrammetric sensors, namely a classical DSLR camera, a drone, and a spherical 360° camera, in documenting heritage sites. The main comparison points include the quality of the bundle adjustment and the quality of the dense point cloud. An important point of the paper, however, is also to determine whether a sensor at a given cost and effort is sufficient for documentation purposes. TLS point cloud data were used as a common reference, as well as control and check points obtained from geodetic surveying. Following the comparison, several technical suggestions and recommendations are proposed regarding the use of each sensor.
8

Kwiatek, K., and R. Tokarczyk. "Photogrammetric Applications of Immersive Video Cameras." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5 (May 28, 2014): 211–18. http://dx.doi.org/10.5194/isprsannals-ii-5-211-2014.

Abstract:
The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene presenting a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together the individual video frames separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on the Ladybug®3 and a GPS device is discussed. The number of panoramas is far more than photogrammetric purposes require, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.
9

Barazzetti, L., M. Previtali, and F. Roncoroni. "CAN WE USE LOW-COST 360 DEGREE CAMERAS TO CREATE ACCURATE 3D MODELS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 69–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-69-2018.

Abstract:
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
10

Barazzetti, L., M. Previtali, F. Roncoroni, and R. Valente. "CONNECTING INSIDE AND OUTSIDE THROUGH 360° IMAGERY FOR CLOSE-RANGE PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 87–92. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-87-2019.

Abstract:
Metric documentation of buildings requires the connection of different spaces, such as rooms, corridors, floors, and interior and exterior spaces. Images and laser scans have to be oriented and registered to obtain accurate metric data about different areas and the related metric information (e.g., wall thickness). A robust registration can be obtained with total station measurements, especially when a geodetic network with multiple intersections on different station points is available. In the case of a photogrammetric project with several images acquired with a central perspective camera, the lack of total station measurements (i.e., control and check points) could result in a weak orientation because of the limited overlap between images acquired through doors and windows. The procedure presented in this paper is based on 360° images acquired with an affordable digital camera (less than $350). The large field of view of 360° images allows one to simultaneously capture different rooms as well as indoor and outdoor spaces, all visible in a single picture. This can provide a more robust orientation of multiple images acquired through narrow spaces. A combined bundle block adjustment that integrates central perspective and spherical images is proposed and discussed, along with additional considerations on the integration of fisheye images.
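Integrating spherical images into a bundle block adjustment, as proposed above, requires a projection model for equirectangular imagery in place of the central perspective model. A minimal sketch of that observation equation, under an assumed axis convention (X right, Y down, Z forward) and an ideal, distortion-free spherical camera (not the authors' exact formulation):

```python
import math

def project_equirectangular(X, Y, Z, width, height):
    """Map a 3D point in the camera frame (X right, Y down, Z forward)
    to pixel coordinates (u, v) in an equirectangular image."""
    lon = math.atan2(X, Z)                                 # longitude, (-pi, pi]
    lat = math.asin(Y / math.sqrt(X * X + Y * Y + Z * Z))  # latitude, [-pi/2, pi/2]
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (lat / math.pi + 0.5) * height
    return u, v

# A point straight ahead of the camera maps to the image centre,
# and a point directly to the right maps three quarters of the way across.
u, v = project_equirectangular(0.0, 0.0, 5.0, 5376, 2688)
u2, v2 = project_equirectangular(1.0, 0.0, 0.0, 5376, 2688)
```

In a combined adjustment, residuals from this spherical model and from the standard collinearity equations of central perspective images are simply stacked in the same least-squares problem.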

Dissertations / Theses on the topic "360 photogrammetry"

1

Elmore, Clinton. "Comparing Structure from Motion Photogrammetry and Computer Vision for Low-Cost 3D Cave Mapping: Tipton-Haynes Cave, Tennessee." Digital Commons @ East Tennessee State University, 2019. https://dc.etsu.edu/etd/3608.

Abstract:
Natural caves represent one of the most difficult environments to map with modern 3D technologies. In this study I tested two relatively new methods for 3D mapping in Tipton-Haynes Cave near Johnson City, Tennessee: Structure from Motion Photogrammetry and Computer Vision using Tango, an RGB-D (Red Green Blue and Depth) technology. Many different aspects of these two methods were analyzed with respect to the needs of average cave explorers. Major considerations were cost, time, accuracy, durability, simplicity, lighting setup, and drift. The 3D maps were compared to a conventional cave map drafted with measurements from a modern digital survey instrument called the DistoX2, a clinometer, and a measuring tape. Both 3D mapping methods worked, but photogrammetry proved to be too time consuming and laborious for capturing more than a few meters of passage. RGB-D was faster, more accurate, and showed promise for the future of low-cost 3D cave mapping.
2

Günther, Ulrike [Verfasser]. "Erstellung eines multimedialen Tutorials zum Thema "Digitale Photogrammetrie" mit Hilfe eines internetbasierten LernInformationssystems / vorgelegt von Ulrike Günther." 2004. http://d-nb.info/975470108/34.

3

Spohner, Regine [Verfasser]. "Rezente Landschaftsveränderungen im Nanga-Parbat-Gebiet (Nordwest-Himalaya) : eine Untersuchung mit Hilfe einer integrativen Methode aus Photogrammetrie, Satellitenfernerkundung und Geographischen Informationssystemen (GIS) / vorgelegt von Regine Spohner." 2004. http://d-nb.info/973677376/34.


Book chapters on the topic "360 photogrammetry"

1

Stetson, Karl A. "Heterodyne Speckle Photogrammetry." In Optical Metrology, 520–30. Dordrecht: Springer Netherlands, 1987. http://dx.doi.org/10.1007/978-94-009-3609-6_34.


Conference papers on the topic "360 photogrammetry"

1

Tannus, Julia. "Optimizing and Automating Computerized Photogrammetry for 360° 3D Reconstruction." In 2020 22nd Symposium on Virtual and Augmented Reality (SVR). IEEE, 2020. http://dx.doi.org/10.1109/svr51698.2020.00048.

2

Morales, Roberto C., and Edgar Farias. "Accuracy and Validation of 360-Degree Camera Use in Photogrammetry." In WCX SAE World Congress Experience. 400 Commonwealth Drive, Warrendale, PA, United States: SAE International, 2022. http://dx.doi.org/10.4271/2022-01-0829.

3

Vazquez, I., and Steven Cutchin. "An Analysis on Pixel Redundancy Structure in Equirectangular Images." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3002.6.

Abstract:
360° photogrammetry captures the surrounding light from a central point. To process and transmit these types of images over the network to the end user, the most common approach is to project them onto a 2D image using the equirectangular projection to generate a 360° image. However, this projection introduces redundancy into the image, increasing storage and transmission requirements. To address this problem, the standard approach is to use compression algorithms, such as JPEG or PNG, but they do not take full advantage of the visual redundancy produced by the equirectangular projection. In this study of the 360SP dataset (a collection of Google Street View images), we analyze the redundancy in equirectangular images and show how it is structured across the image. Outcomes from our study will support the development of spherical compression algorithms, improving the immersive experience of Virtual Reality users by reducing loading times and increasing perceptual image quality.
4

Vazquez, I., and S. Cutchin. "An Analysis on Pixel Redundancy Structure in Equirectangular Images." In WSCG'2021 - 29. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision'2021. Západočeská univerzita, 2021. http://dx.doi.org/10.24132/csrn.2021.3101.6.

Abstract:
360° photogrammetry captures the surrounding light from a central point. To process and transmit these types of images over the network to the end user, the most common approach is to project them onto a 2D image using the equirectangular projection to generate a 360° image. However, this projection introduces redundancy into the image, increasing storage and transmission requirements. To address this problem, the standard approach is to use compression algorithms, such as JPEG or PNG, but they do not take full advantage of the visual redundancy produced by the equirectangular projection. In this study of the 360SP dataset (a collection of Google Street View images), we analyze the redundancy in equirectangular images and show how it is structured across the image. Outcomes from our study will support the development of spherical compression algorithms, improving the immersive experience of Virtual Reality users by reducing loading times and increasing perceptual image quality.
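The redundancy analyzed in this paper has a simple geometric core: every row of an equirectangular image stores the same number of pixels, but the circle of latitude those pixels sample shrinks with cos(latitude), so rows near the poles are heavily oversampled. A small sketch of that factor (illustrative only, not the authors' analysis code; the resolution is an assumed example):

```python
import math

def row_oversampling(row, height):
    """Horizontal oversampling of one equirectangular image row relative to
    the equator: every row stores the same number of pixels, but the circle
    of latitude it samples shrinks with cos(latitude)."""
    lat = (row + 0.5) / height * math.pi - math.pi / 2.0  # row-centre latitude
    return 1.0 / max(math.cos(lat), 1e-12)               # guard the exact poles

height = 2688  # e.g. the vertical resolution of a 5376x2688 panorama
equator = row_oversampling(height // 2, height)  # ~1.0 at the equator
near_pole = row_oversampling(0, height)          # very large near the pole
```

Summing this factor over all rows gives a rough upper bound on how much a sphere-aware representation could save over the raw equirectangular grid.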
5

Triantafyllou, Vasileios, Konstantinos I. Kotsopoulos, Dimitrios Tsolis, and Dimitrios Tsoukalos. "Practical Techniques for Aerial Photogrammetry, Polygon Reduction and Aerial 360 Photography for Cultural Heritage Preservation in AR and VR Applications." In 2022 13th International Conference on Information, Intelligence, Systems & Applications (IISA). IEEE, 2022. http://dx.doi.org/10.1109/iisa56318.2022.9904357.

6

Kaufman, John, Allan E. W. Rennie, and Morag Clement. "Reverse Engineering Using Close Range Photogrammetry for Additive Manufactured Reproduction of Egyptian Artefacts and Other Objets d’art." In ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/esda2014-20304.

Abstract:
Photogrammetry has been in use for over one hundred and fifty years. This research considers how digital image capture using a medium range Nikon Digital SLR camera, can be transformed into 3D virtual spatial images, and together with additive manufacturing (AM) technology, geometric representations of the original artefact can be fabricated. The research has focused on the use of photogrammetry as opposed to laser scanning (LS), investigating the shift from LS use to a Digital Single Lens Reflex (DSLR) camera exclusively. The basic photogrammetry equipment required is discussed, with the main objective being simplicity of execution for eventual realisation of physical products. As the processing power of computers has increased and become widely available, at affordable prices, software programs have improved, so it is now possible to digitally combine multi-view photographs, taken from 360°, into 3D virtual representational images. This has now led to the possibility of 3D images being created without LS intervention. Two methods of digital data capture are employed and discussed, in acquiring up to 130 digital data images, taken from different angles using the DSLR camera together with the specific operating conditions in which to photograph the objects. Three case studies are documented, the first, a modern clay sculpture, whilst the other two are 3000 year old Egyptian clay artefacts and the objects were recreated using AM technology. It has been shown that with the use of a standard DSLR camera and computer software, 2D images can be converted into 3D virtual video replicas as well as solid, geometric representation of the originals.
7

Pinchukov, V. V., A. Yu Poroykov, and E. V. Shmatko. "Exploring the Application of Convolutional Neural Networks for Photogrammetric Image Processing." In 32nd International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2022. http://dx.doi.org/10.20948/graphicon-2022-340-347.

Abstract:
Close-range photogrammetry is widely used to measure the surface shape of various objects and its deformations. Usually, a stereo pair of images of the object under study, obtained from different angles by several digital video cameras, is used for this purpose. The surface shape is measured by triangulating a set of corresponding two-dimensional points from these images using a predetermined relative orientation of the cameras. Various algorithms are used to find these points, and several photogrammetric methods use cross-correlation for this purpose. This paper discusses the possibility of replacing the correlation algorithm with neural networks for determining offsets in the images, which makes it possible to increase the calculation speed and the spatial resolution of the measurement results. To verify their applicability, a series of experimental images of surfaces with different deformations was obtained. Computational experiments were performed to process these images using the selected neural networks and a classical cross-correlation algorithm. The limitations of the compared algorithms were determined and their error in recovering the three-dimensional shape of the surface was estimated. Physical simulation verifying the selected neural networks on the photogrammetric image-processing task demonstrated their performance and efficiency.
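The classical baseline the authors compare against is cross-correlation patch matching. As a hedged sketch of that baseline (real photogrammetric and PIV codes use FFT-based correlation with subpixel peak fitting, not this brute-force search), zero-mean normalized cross-correlation (NCC) locates a template in an image like this:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_patch(template, image):
    """Slide `template` over `image` exhaustively; return the best-scoring
    top-left position and its NCC score."""
    th, tw = template.shape
    ih, iw = image.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = ncc(template, image[r:r + th, c:c + tw])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Toy check: cut an 8x8 patch out of a random image at (row=5, col=9)
# and verify the search finds it again with a near-perfect score.
rng = np.random.default_rng(1)
image = rng.random((32, 32))
template = image[5:13, 9:17].copy()
pos, score = match_patch(template, image)
```

The per-window offsets found this way are exactly what the paper proposes to predict with a convolutional network instead, trading this exhaustive search for a single forward pass.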
8

Matthews, Stephen John, Ross Sutherland, and Gareth McIntyre. "Enhanced Operational Integrity Management Via Digital Twin." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/210998-ms.

Abstract:
During the early stages of the COVID-19 pandemic, the oil and gas industry was faced with significant challenges in executing inspections of ageing offshore oil and gas infrastructure together with the management of various operational integrity threats. These challenges were addressed through the collaborative development of remote, digital techniques to minimise inspector time offshore and maximise efficiency. This initially involved a review and challenge of our existing operational integrity management lifecycle, comprising systems data, risk-based strategy development, inspection planning, inspection execution, data management, anomaly management (including fabric maintenance and engineering repairs), and close-out. Working with our partner, GDi, we sought to drive a step change into digital integrity management, combined with streamlined workflows, activities, and administrative tasks. The pilot development involved comprehensive laser scanning of a floating production storage and offloading (FPSO) facility to generate a dimensionally accurate digital twin, overlaid with 360° HD photogrammetry to provide a thorough baseline for subsequent general visual inspections of pressure equipment, piping systems, and structural elements. General visual inspections can now be executed remotely, without the requirement for an offshore inspector. The digital twin environment also supported the transformation of inspector-led activities, through optimisation of processes, digital inspection workflows via tablets, and seamless integration with the integrity management platform. The pilot development also involved enhancing the anomaly risk management process, including management of mitigations (such as temporary repairs) and actions required to resolve and close anomalies. For the anomaly actions, the digital twin environment enables the accurate estimation of fabric maintenance scopes and dimensionally accurate repairs for corrective work orders. The system also facilitates a unique overview of cumulative risk via the plotting of anomalies in the digital twin space. The digitally enhanced operational integrity management system has substantially reduced direct costs and personnel safety risks, enabled substantial improvements in productivity (up to 200% for inspections), and improved the quality of integrity management outcomes.
9

Dunphy, J. R., and W. H. Atkinson. "Development of Advanced Diagnostics for Turbine Disks." In ASME 1990 International Gas Turbine and Aeroengine Congress and Exposition. American Society of Mechanical Engineers, 1990. http://dx.doi.org/10.1115/90-gt-390.

Abstract:
Quantitative diagnostics are essential during design optimization studies of turbine engine components to ensure that performance goals and lifetime requirements are met. This paper addresses the development and testing of sensors for diagnostic application in turbine hot sections. Technologies tested during this investigation included optical fiber static strain sensors, thin metallic film static strain sensors, advanced wire static strain sensors, thermographic phosphor temperature sensors, and heat flux sensors. Reference measurements for the strain sensors were provided by speckle photogrammetry and conventional strain gages, while reference measurements for the temperature sensors were provided by optical pyrometry and conventional thermocouples. Simulated engine conditions typical of a high-pressure turbine disk were provided by operating a disk in a high-speed spin rig, which ran at up to 13,200 revolutions per minute and 950 K. Representative results and application issues are provided for each sensor type.
10

Grazioso, Stanislao, Mario Selvaggio, Giuseppe Di Gironimo, and Roberto Ruggiero. "INBODY: Instant Photogrammetric 3D Body Scanner." In 7th International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 30 Nov.-1 Dec. 2016. Ascona, Switzerland: Hometrica Consulting - Dr. Nicola D'Apuzzo, 2016. http://dx.doi.org/10.15221/16.296.


Reports on the topic "360 photogrammetry"

1

Salisbury, J. B., A. M. Herbst, and Katreen Wikstrom Jones. November 30, 2018, Mw 7.1 Anchorage earthquake photogrammetry. Alaska Division of Geological & Geophysical Surveys, 2019. http://dx.doi.org/10.14509/30270.

2

Buzard, R. M., J. E. Christian, and J. R. Overbeck. Photogrammetry-derived orthoimagery and elevation for Napakiak, Alaska, collected June 30, 2021. Alaska Division of Geological & Geophysical Surveys, December 2021. http://dx.doi.org/10.14509/30793.

3

Ruiz, Pablo, Craig Perry, Alejandro Garcia, Magali Guichardot, Michael Foguer, Joseph Ingram, Michelle Prats, Carlos Pulido, Robert Shamblin, and Kevin Whelan. The Everglades National Park and Big Cypress National Preserve vegetation mapping project: Interim report—Northwest Coastal Everglades (Region 4), Everglades National Park (revised with costs). National Park Service, November 2020. http://dx.doi.org/10.36967/nrr-2279586.

Abstract:
The Everglades National Park and Big Cypress National Preserve vegetation mapping project is part of the Comprehensive Everglades Restoration Plan (CERP). It is a cooperative effort between the South Florida Water Management District (SFWMD), the United States Army Corps of Engineers (USACE), and the National Park Service’s (NPS) Vegetation Mapping Inventory Program (VMI). The goal of this project is to produce a spatially and thematically accurate vegetation map of Everglades National Park and Big Cypress National Preserve prior to the completion of restoration efforts associated with CERP. This spatial product will serve as a record of baseline vegetation conditions for the purpose of: (1) documenting changes to the spatial extent, pattern, and proportion of plant communities within these two federally-managed units as they respond to hydrologic modifications resulting from the implementation of the CERP; and (2) providing vegetation and land-cover information to NPS park managers and scientists for use in park management, resource management, research, and monitoring. This mapping project covers an area of approximately 7,400 square kilometers (1.84 million acres [ac]) and consists of seven mapping regions: four regions in Everglades National Park, Regions 1–4, and three in Big Cypress National Preserve, Regions 5–7. The report focuses on the mapping effort associated with the Northwest Coastal Everglades (NWCE), Region 4, in Everglades National Park. The NWCE encompasses a total area of 1,278 square kilometers (493.7 square miles [sq mi], or 315,955 ac) and is geographically located to the south of Big Cypress National Preserve, west of Shark River Slough (Region 1), and north of the Southwest Coastal Everglades (Region 3).
Photo-interpretation was performed by superimposing a 50 × 50-meter (164 × 164-feet [ft] or 0.25 hectare [0.61 ac]) grid cell vector matrix over stereoscopic, 30 centimeters (11.8 inches) spatial resolution, color-infrared aerial imagery on a digital photogrammetric workstation. Photo-interpreters identified the dominant community in each cell by applying majority-rule algorithms, recognizing community-specific spectral signatures, and referencing an extensive ground-truth database. The dominant vegetation community within each grid cell was classified using a hierarchical classification system developed specifically for this project. Additionally, photo-interpreters categorized the absolute cover of cattail (Typha sp.) and any invasive species detected as either: Sparse (10–49%), Dominant (50–89%), or Monotypic (90–100%). A total of 178 thematic classes were used to map the NWCE. The most common vegetation classes are Mixed Mangrove Forest-Mixed and Transitional Bayhead Shrubland. These two communities accounted for about 10%, each, of the mapping area. Other notable classes include Short Sawgrass Marsh-Dense (8.1% of the map area), Mixed Graminoid Freshwater Marsh (4.7% of the map area), and Black Mangrove Forest (4.5% of the map area). The NWCE vegetation map has a thematic class accuracy of 88.4% with a lower 90th Percentile Confidence Interval of 84.5%.
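The majority-rule, grid-cell classification workflow described in this abstract can be sketched in a small illustrative example. All class names and cover thresholds below come from the abstract itself; the function names and the sample cell data are hypothetical, not part of the project's actual tooling:

```python
from collections import Counter

def dominant_class(pixel_labels):
    """Apply majority rule: return the most frequent vegetation class
    among the interpreted labels within one 50 x 50 m grid cell."""
    counts = Counter(pixel_labels)
    label, _ = counts.most_common(1)[0]
    return label

def cover_category(fraction):
    """Bin the absolute cover of a target species (e.g., cattail or an
    invasive) into the abstract's categories: Sparse (10-49%),
    Dominant (50-89%), or Monotypic (90-100%)."""
    if fraction >= 0.90:
        return "Monotypic"
    if fraction >= 0.50:
        return "Dominant"
    if fraction >= 0.10:
        return "Sparse"
    return None  # below the reporting threshold

# Hypothetical cell: six mangrove-labeled samples, four sawgrass.
cell = ["Mixed Mangrove Forest-Mixed"] * 6 + ["Short Sawgrass Marsh-Dense"] * 4
print(dominant_class(cell))   # majority rule picks the mangrove class
print(cover_category(0.62))   # 62% cover falls in the Dominant bin
```

In the actual project, photo-interpreters combined this majority rule with spectral-signature recognition and ground-truth data rather than a simple label count, so this sketch only captures the decision rule, not the interpretation step.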
