Journal articles on the topic '360-degree images'

To see the other types of publications on this topic, follow the link: 360-degree images.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic '360-degree images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Assens, Marc, Xavier Giro-i-Nieto, Kevin McGuinness, and Noel E. O’Connor. "Scanpath and saliency prediction on 360 degree images." Signal Processing: Image Communication 69 (November 2018): 8–14. http://dx.doi.org/10.1016/j.image.2018.06.006.

2

Alves, Ricardo Martins, Luís Sousa, Aldric Trindade Negrier, João M. F. Rodrigues, Jânio Monteiro, Pedro J. S. Cardoso, Paulo Felisberto, and Paulo Bica. "Interactive 360 Degree Holographic Installation." International Journal of Creative Interfaces and Computer Graphics 8, no. 1 (January 2017): 20–38. http://dx.doi.org/10.4018/ijcicg.2017010102.

Abstract:
With new marketing strategies and technologies come new demands: the standard public-relations approach or salesperson is no longer enough. Customers tend to have higher standards, and companies trying to capture their attention must use creative content and ideas. To this end, this article describes the development of an interactive holographic installation that uses holographic technology to attract the attention of potential clients, either by acting as a host or by showcasing a product that advertises the company. The installation consists of a 360-degree (8-view) holographic avatar or object and, optionally, a screen presenting a set of menus with videos, images, and textual content. It uses several Microsoft Kinect sensors to enable tracking of the user (and other persons) and natural interaction around the installation through gestures and speech, while building statistics about the visualized content. These statistics can be analyzed on the fly by the company to gauge the success of the event.
3

Banchi, Yoshihiro, Keisuke Yoshikawa, and Takashi Kawai. "Evaluating user experience of 180 and 360 degree images." Electronic Imaging 2020, no. 2 (January 26, 2020): 244–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-244.

Abstract:
This paper describes a comparison of the user experience of virtual reality (VR) image formats. The authors prepared the following four conditions and evaluated the user experience of viewing VR images with a headset by measuring subjective and objective indices: Condition 1, monoscopic 180-degree image; Condition 2, stereoscopic 180-degree image; Condition 3, monoscopic 360-degree image; Condition 4, stereoscopic 360-degree image. From the results of the subjective indices (reality, presence, and depth sensation), Condition 4 was evaluated highest, and Conditions 2 and 3 were evaluated to the same extent. In addition, the results of the objective indices (eye and head tracking) showed a tendency to suppress head movement in 180-degree images.
4

Jauhari, Jauhari. "SOLO-YOGYA INTO 360-DEGREE PHOTOGRAPHY." Capture : Jurnal Seni Media Rekam 13, no. 1 (December 13, 2021): 17–31. http://dx.doi.org/10.33153/capture.v13i1.3627.

Abstract:
Currently, technological developments have made it possible for photographic works not only to be present in the form of a flat 180-degree two-dimensional panorama, but even to present reality with a 360-degree perspective. This research on the creation of photographic works aims to optimize photographic equipment for photographing with a 360-degree perspective. Even though there are many 360-degree applications for smartphones, using a DSLR camera to create works with a 360-degree perspective has the advantage that the result can be printed in large sizes at high resolution without visible pixelation. The method of creating this work is based on an experimental process of developing DSLR camera equipment. This 360-degree photography technique uses a 'panning-sequence' technique with 'continuous exposure', which allows the images captured by the camera to be combined into one panoramic image. In addition to achieving an important and interesting visual appearance, the presence of a 360-degree perspective in this work can also give a new nuance to the world of the art of photography.
5

Ullah, Faiz, Oh-Jin Kwon, and Seungcheol Choi. "Generation of a Panorama Compatible with the JPEG 360 International Standard Using a Single PTZ Camera." Applied Sciences 11, no. 22 (November 21, 2021): 11019. http://dx.doi.org/10.3390/app112211019.

Abstract:
Recently, the JPEG working group (ISO/IEC JTC1 SC29 WG1) developed an international standard, JPEG 360, that specifies the metadata and functionalities for saving and sharing 360-degree images efficiently to create a more realistic environment in various virtual reality services. We surveyed the metadata formats of existing 360-degree images and compared them to the JPEG 360 metadata format. We found that existing omnidirectional cameras and stitching software packages embed metadata in JPEG image files using formats that are incompatible with the JPEG 360 standard. This paper proposes an easy-to-use tool for embedding JPEG 360 standard metadata for 360-degree images in JPEG image files using a JPEG-defined box format: the JPEG universal metadata box format. The proposed implementation will help 360-degree camera and software vendors provide immersive services to users in a standardized manner for various markets, such as entertainment, education, professional training, navigation, and virtual and augmented reality applications. We also propose and develop an economical, JPEG 360-compatible panoramic image acquisition system based on a single PTZ camera, with a special use case of capturing a wide field-of-view image of a conference or meeting. A remote attendee of the conference or meeting can see a realistic and immersive environment through our PTZ panorama in virtual reality.
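As a sketch of the geometry behind such a PTZ-based panorama: each pan/tilt pointing direction corresponds to one pixel in the equirectangular canvas, so tiles captured by the camera can be placed by angle. The function below is an illustrative assumption, not code from the paper; the canvas size and angle conventions are hypothetical.

```python
def ptz_to_equirect(pan_deg, tilt_deg, width, height):
    """Map a PTZ pointing direction to an equirectangular pixel.

    pan_deg in [-180, 180) measured from the canvas centre,
    tilt_deg in [-90, 90] with +90 at the zenith (top row).
    """
    x = int((pan_deg + 180.0) / 360.0 * width) % width
    y = min(int((90.0 - tilt_deg) / 180.0 * height), height - 1)
    return x, y
```

On a hypothetical 7680 x 3840 canvas, a camera looking straight ahead (pan 0, tilt 0) lands at the centre pixel, and sweeping the pan axis fills one row of the panorama.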
6

Hussain, Abuelainin. "Interactive 360-Degree Virtual Reality into eLearning Content Design." International Journal of Innovative Technology and Exploring Engineering 10, no. 2 (December 10, 2020): 1–4. http://dx.doi.org/10.35940/ijitee.b8219.1210220.

Abstract:
The main aim of the study is the techniques and methods essential to creating 2D and 3D virtual reality images that can be displayed on multimedia devices. Devices such as desktops, laptops, tablets, smartphones, and other multimedia devices that display such content are the primary concern of the study. These devices communicate the content through videos, images, or sound that are realistic and useful to the user. Such content can be captured from different locations in virtual imaginary sites through the above-named electronic devices. These are beneficial e-learning instructional techniques for students, especially in higher learning [1]. Considering architectural learners who rely on such images to develop the simple designs used in real constructions, 360-degree imaging has to be considered in e-learning for their benefit. The primary platforms through which the content can be transformed into virtual reality include YouTube and Facebook, both of which can display 360-degree virtual environment content. Through this, learners will interact with virtual reality in such setups, thus enhancing their studies.
7

Lee, Hyunchul, and Okkyung Choi. "An efficient parameter update method of 360-degree VR image model." International Journal of Engineering Business Management 11 (January 1, 2019): 184797901983599. http://dx.doi.org/10.1177/1847979019835993.

Abstract:
Recently, with rapid growth in manufacturing and user convenience, technologies utilizing virtual reality images have been increasing. To show image quality similar to the real world, it is very important to estimate the projected direction and position of the image, and this estimation is solved using the relationship that transforms the sphere into the expanded equirectangular image. The transformation can be divided into camera intrinsic parameters and camera extrinsic parameters, and every image has its own camera parameters. Moreover, if several images were taken with the same camera, their camera intrinsic parameters will have the same values. However, when matching images, it is not best to set the camera intrinsic parameters to the same values for all images. To solve these problems and display images without visual inconsistency, it is necessary to model the conversion relationship as a cost function and calculate the camera parameters that minimize the residual. In this article, we compare and analyze efficient camera parameter update methods. For the comparative analysis, we use Levenberg–Marquardt, a parameter optimization algorithm based on corresponding points, and propose an efficient camera parameter update method based on the analysis results.
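The update scheme the abstract describes — build a cost from the residuals and step the parameters toward its minimum — is a damped Gauss–Newton (Levenberg–Marquardt) iteration. A generic NumPy sketch, not the authors' implementation; the toy line-fitting problem at the end stands in for the real reprojection residuals:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton updates on p."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(iters):
        r, J = residual(p), jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p - step
        cost_new = np.sum(residual(p_new) ** 2)
        if cost_new < cost:          # accept: relax toward Gauss-Newton
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:                        # reject: increase damping
            lam *= 2.0
    return p

# Toy use: recover (a, b) of y = a*x + b from corresponding points.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.5
res = lambda p: p[0] * x + p[1] - y
jac = lambda p: np.stack([x, np.ones_like(x)], axis=1)
p = levenberg_marquardt(res, jac, [0.0, 0.0])
```

The damping factor `lam` interpolates between gradient descent (large `lam`, cautious steps) and Gauss–Newton (small `lam`, fast convergence near the minimum), which is why the loop shrinks it after every accepted step.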
8

Tsubaki, Ikuko, and Kazuo Sasaki. "An Interrupted Projection using Seam Carving for 360-degree Images." Electronic Imaging 2018, no. 2 (January 28, 2018): 414–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.2.vipc-414.

9

Hasler, O., B. Loesch, S. Blaser, and S. Nebiker. "CONFIGURATION AND SIMULATION TOOL FOR 360-DEGREE STEREO CAMERA RIG." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 793–98. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-793-2019.

Abstract:
The demand for capturing outdoor and indoor scenes is rising with the digitalization trend in the construction industry. An efficient solution for capturing these environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be made user-friendly and accessible when hosted in a cloud-based 3D geoinformation service. The design of such a 360° stereo camera system is challenging, since multiple parameters like focal length and stereo base length, and environmental restrictions such as narrow corridors, influence each other. Therefore, this paper presents a toolset which helps in configuring and evaluating such a panorama stereo camera rig. The first tool is used to determine from which distance on 360° stereo coverage is achieved, depending on the parametrization of the rig. The second tool can be used to capture images with the parametrized camera rig in different virtual indoor and outdoor scenes. The last tool supports stitching the captured images together with respect to the intrinsic and extrinsic parameters from the configuration tool. This toolset radically simplifies the evaluation process of a 360° stereo camera configuration and decreases the number of physical MMS prototypes.
10

Hadi Ali, Israa, and Sarmad Salman. "360-Degree Panoramic Image Stitching for Un-ordered Images Based on Harris Corner Detection." Indian Journal of Science and Technology 12, no. 4 (January 1, 2019): 1–9. http://dx.doi.org/10.17485/ijst/2019/v12i4/140988.

11

Gafar, Ilham Afan, Zaenul Arif, and Syefudin. "Systematic Literature Review: Virtual Tour 360 Degree Panorama." International Journal of Engineering Business and Social Science 1, no. 01 (October 1, 2022): 01–10. http://dx.doi.org/10.58451/ijebss.v1i01.1.

Abstract:
The application of VR as a medium to promote a place, be it a tourist attraction, an educational site, or a health facility, has become common in the current era. A 360-degree panoramic virtual tour is one method of creating virtual reality: a collection of 360-degree images processed so that they can be enjoyed virtually as if they were real. The goal of this paper is to analyze the 360-degree panoramic virtual tour as a medium for promoting a place, by conducting in-depth reviews, evaluating searches through literature selected on specific criteria, and processing the selected studies to answer the research questions. A Systematic Literature Review (SLR) is a research method that aims to identify and evaluate research results through a defined procedure for comparing studies. The results show that journals on tourism, education, and health can serve as the main references on the 360-degree panoramic virtual tour, and that a variety of methods can be used, including MDLC, Luther Sutopo, the Image Method, IMSDD, and qualitative approaches.
12

Jiang, Hao, Gangyi Jiang, Mei Yu, Yun Zhang, You Yang, Zongju Peng, Fen Chen, and Qingbo Zhang. "Cubemap-Based Perception-Driven Blind Quality Assessment for 360-degree Images." IEEE Transactions on Image Processing 30 (2021): 2364–77. http://dx.doi.org/10.1109/tip.2021.3052073.

13

Kim, Seeun, Tae Hyun Baek, and Sukki Yoon. "The effect of 360-degree rotatable product images on purchase intention." Journal of Retailing and Consumer Services 55 (July 2020): 102062. http://dx.doi.org/10.1016/j.jretconser.2020.102062.

14

Zhu, Yucheng, Guangtao Zhai, and Xiongkuo Min. "The prediction of head and eye movement for 360 degree images." Signal Processing: Image Communication 69 (November 2018): 15–25. http://dx.doi.org/10.1016/j.image.2018.05.010.

15

Fang, Yuming, Xiaoqiang Zhang, and Nevrez Imamoglu. "A novel superpixel-based saliency detection model for 360-degree images." Signal Processing: Image Communication 69 (November 2018): 1–7. http://dx.doi.org/10.1016/j.image.2018.07.009.

16

Momonoi, Yoshiharu, Koya Yamamoto, Yoshihiro Yokote, Atsushi Sato, and Yasuhiro Takaki. "Systematic Approach for Alignment of Light Field Mirage." Applied Sciences 12, no. 23 (December 4, 2022): 12413. http://dx.doi.org/10.3390/app122312413.

Abstract:
We previously proposed techniques to eliminate the repeated three-dimensional (3D) images produced by the light field Mirage, which consists of circularly aligned multiple slanted light field displays. However, we only constructed the lower half of the system to verify the proposed elimination techniques. In this study, we developed an alignment technique for a complete 360-degree display system. The alignment techniques for conventional 360-degree display systems, which use a large number of projectors, depend heavily on electronic calibration, which inevitably causes image quality degradation. We propose a systematic approach to the alignment of the light field Mirage that causes less image quality degradation by taking advantage of the small number of display devices required for the light field Mirage. The calibration technique for light field displays, the image stitching technique, and the brightness matching technique are performed consecutively, and the generation of 360-degree 3D images is verified.
17

Banchi, Yoshihiro, Keisuke Yoshikawa, and Takashi Kawai. "Effects of binocular parallax in 360-degree VR images on viewing behavior." Electronic Imaging 2019, no. 3 (January 13, 2019): 648–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.3.sda-648.

18

Zhu, Yucheng, Guangtao Zhai, Xiongkuo Min, and Jiantao Zhou. "Learning a Deep Agent to Predict Head Movement in 360-Degree Images." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 4 (December 16, 2020): 1–23. http://dx.doi.org/10.1145/3410455.

19

Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "A polar model for fast object tracking in 360-degree camera images." Multimedia Tools and Applications 78, no. 7 (August 20, 2018): 9275–97. http://dx.doi.org/10.1007/s11042-018-6525-0.

20

Barazzetti, L., M. Previtali, and F. Roncoroni. "CAN WE USE LOW-COST 360 DEGREE CAMERAS TO CREATE ACCURATE 3D MODELS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 69–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-69-2018.

Abstract:
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed in any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper summarizes some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
21

Ha, Van Kha Ly, Rifai Chai, and Hung T. Nguyen. "A Telepresence Wheelchair with 360-Degree Vision Using WebRTC." Applied Sciences 10, no. 1 (January 3, 2020): 369. http://dx.doi.org/10.3390/app10010369.

Abstract:
This paper presents an innovative approach to develop an advanced 360-degree vision telepresence wheelchair for healthcare applications. The study aims to provide a wide field of view surrounding the wheelchair for safe wheelchair navigation and efficient assistance for wheelchair users. A dual-fisheye camera is mounted in front of the wheelchair to capture images, which can then be streamed over the Internet. A web real-time communication (WebRTC) protocol was implemented to provide efficient video and data streaming. An estimation model based on artificial neural networks was developed to evaluate the quality of experience (QoE) of video streaming. Experimental results confirmed that the proposed telepresence wheelchair system was able to stream a 360-degree video surrounding the wheelchair smoothly in real time. The average streaming rate of the entire 360-degree video was 25.83 frames per second (fps), and the average peak signal-to-noise ratio (PSNR) was 29.06 dB. Simulation results of the proposed QoE estimation scheme provided a prediction accuracy of 94%. Furthermore, the results showed that the designed system could be controlled remotely via the wireless Internet to follow the desired path with high accuracy. The overall results demonstrate the effectiveness of our proposed approach to the 360-degree vision telepresence wheelchair for assistive technology applications.
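The 29.06 dB figure above is a peak signal-to-noise ratio, computed from the mean squared error between reference and received frames. A minimal pure-Python sketch; the flat pixel lists and 8-bit peak value are assumptions for illustration:

```python
import math

def psnr(reference, distorted, max_value=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    assert len(reference) == len(distorted) and reference
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no noise at all
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Values around 30 dB, as reported here, correspond to visible but moderate compression noise for 8-bit video.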
22

Madu, Chisom T., Taylor Phelps, Joel S. Schuman, Ronald Zambrano, Ting-Fang Lee, Joseph Panarelli, Lama Al-Aswad, and Gadi Wollstein. "Automated 360-degree goniophotography with the NIDEK Gonioscope GS-1 for glaucoma." PLOS ONE 18, no. 3 (March 7, 2023): e0270941. http://dx.doi.org/10.1371/journal.pone.0270941.

Abstract:
This study was registered with ClinicalTrials.gov (ID: NCT03715231). A total of 20 participants (37 eyes) who were 18 or older and had glaucoma or were glaucoma suspects were enrolled from the NYU Langone Eye Center and Bellevue Hospital. During their usual ophthalmology visit, they consented to the study and underwent 360-degree goniophotography using the NIDEK Gonioscope GS-1. Afterwards, three ophthalmologists separately examined the images obtained and determined the status of the iridocorneal angle in four quadrants using the Shaffer grading system. Physicians were masked to patient names and diagnoses. Inter-observer reproducibility was determined using Fleiss' kappa statistics and was shown to be significant between the three glaucoma specialists, with fair overall agreement (Fleiss' kappa: 0.266, p < .0001) in the interpretation of 360-degree goniophotos. Automated 360-degree goniophotography using the NIDEK Gonioscope GS-1 produces images of sufficient quality that they are interpreted similarly by independent expert observers. This indicates that angle investigation may be performed using this automated device and that interpretations by expert observers are likely to be similar, thus supporting the use of this technique to document and assess the anterior chamber angle in patients with, or suspected of, glaucoma and iridocorneal angle abnormalities.
23

Janiszewski, Mateusz, Masoud Torkan, Lauri Uotinen, and Mikael Rinne. "Rapid Photogrammetry with a 360-Degree Camera for Tunnel Mapping." Remote Sensing 14, no. 21 (October 31, 2022): 5494. http://dx.doi.org/10.3390/rs14215494.

Abstract:
Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry is a viable method to digitize underground spaces for inspection, documentation, or remote mapping. However, the conventional image acquisition process can be laborious and time-consuming. Previous studies confirmed that the acquisition time can be reduced when using a 360-degree camera to capture the images. This paper demonstrates a method for rapid photogrammetric reconstruction of tunnels using a 360-degree camera. The method is demonstrated in a field test executed in a tunnel section of the Underground Research Laboratory of Aalto University in Espoo, Finland. A 10 m-long tunnel section with exposed rock was photographed using the 360-degree camera from 27 locations and a 3D model was reconstructed using SfM-MVS photogrammetry. The resulting model was then compared with a reference laser scan and a more conventional digital single-lens reflex (DSLR) camera-based model. Image acquisition with a 360-degree camera was 3x faster than with a conventional DSLR camera and the workflow was easier and less prone to errors. The 360-degree camera-based model achieved a 0.0046 m distance accuracy error compared to the reference laser scan. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements of the sub-vertical fracture sets, with an average error of 2–5°.
24

Banchi, Yoshihiro, and Takashi Kawai. "Evaluating user experience of different angle VR images." Electronic Imaging 2021, no. 2 (January 18, 2021): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-098.

Abstract:
This paper describes a comparison of the user experience of virtual reality (VR) image angles. Seven angle conditions, every 30 degrees from 180 to 360 degrees, were prepared, and the user experience of viewing VR images with a headset was evaluated by measuring subjective and objective indices. From the results of the subjective indices (reality, presence, and depth sensation), the 360-degree image was evaluated highest, and evaluations differed between 240 and 270 degrees. In addition, the results of the objective indices (eye and head tracking) showed a tendency for eye and head movement to spread as the image angle increases.
25

Barmpoutis, Panagiotis, Tania Stathaki, Kosmas Dimitropoulos, and Nikos Grammalidis. "Early Fire Detection Based on Aerial 360-Degree Sensors, Deep Convolution Neural Networks and Exploitation of Fire Dynamic Textures." Remote Sensing 12, no. 19 (September 28, 2020): 3177. http://dx.doi.org/10.3390/rs12193177.

Abstract:
The environmental challenges the world faces have never been greater or more complex. Global areas that are covered by forests and urban woodlands are threatened by large-scale forest fires that have increased dramatically during the last decades in Europe and worldwide, in terms of both frequency and magnitude. To this end, rapid advances in remote sensing systems including ground-based, unmanned aerial vehicle-based and satellite-based systems have been adopted for effective forest fire surveillance. In this paper, the recently introduced 360-degree sensor cameras are proposed for early fire detection, making it possible to obtain unlimited field of view captures which reduce the number of required sensors and the computational cost and make the systems more efficient. More specifically, once optical 360-degree raw data are obtained using an RGB 360-degree camera mounted on an unmanned aerial vehicle, we convert the equirectangular projection format images to stereographic images. Then, two DeepLab V3+ networks are applied to perform flame and smoke segmentation, respectively. Subsequently, a novel post-validation adaptive method is proposed exploiting the environmental appearance of each test image and reducing the false-positive rates. For evaluating the performance of the proposed system, a dataset, namely the “Fire detection 360-degree dataset”, consisting of 150 unlimited field of view images that contain both synthetic and real fire, was created. Experimental results demonstrate the great potential of the proposed system, which has achieved an F-score fire detection rate equal to 94.6%, hence reducing the number of required sensors. This indicates that the proposed method could significantly contribute to early fire detection.
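The equirectangular-to-stereographic conversion the abstract mentions can be sketched as plain projection math: each source pixel maps to a longitude/latitude pair, and the stereographic ("little planet") image radius grows with angular distance from the zenith. A hedged sketch; the axis conventions and normalization are assumptions, not taken from the paper:

```python
import math

def equirect_to_stereographic(u, v, width, height):
    """Project an equirectangular pixel onto a plane tangent at the zenith.

    Projection is from the opposite pole, so the zenith maps to the
    origin and the nadir is pushed out toward infinity.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi      # [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi     # pi/2 (top row) .. -pi/2
    r = 2.0 * math.tan((math.pi / 2.0 - lat) / 2.0)  # 0 at zenith, 2 at equator
    return r * math.sin(lon), r * math.cos(lon)
```

Sampling the fisheye/equirectangular frame through this mapping (one call per output pixel, inverted) yields the stereographic views that are then fed to the segmentation networks.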
26

TSUKADA, Shota, Yusuke HASEGAWA, Yoshihiro BANCHI, Hiroyuki MORIKAWA, and Takashi KAWAI. "1F4-2 Analysis of user behavior during viewing 360-degree virtual reality images." Japanese journal of ergonomics 52, Supplement (2016): S240—S241. http://dx.doi.org/10.5100/jje.52.s240.

27

BANCHI, Yoshihiro, Keisuke YOSHIKAWA, and Takashi KAWAI. "1A4-2 Behavioral Analysis while viewing 360-degree images using a HMD (1)." Japanese Journal of Ergonomics 54, Supplement (June 2, 2018): 1A4–2–1A4–2. http://dx.doi.org/10.5100/jje.54.1a4-2.

28

YOSHIKAWA, Keisuke, Yoshihiro BANCHI, and Takashi KAWAI. "1A4-3 Behavioral analysis while viewing 360-degree images using a HMD (2)." Japanese Journal of Ergonomics 54, Supplement (June 2, 2018): 1A4–3–1A4–3. http://dx.doi.org/10.5100/jje.54.1a4-3.

29

Sendjasni, Abderrezzaq, Mohamed-Chaker Larabi, and Faouzi Alaya Cheikh. "On the Improvement of 2D Quality Assessment Metrics for Omnidirectional Images." Electronic Imaging 2020, no. 9 (January 26, 2020): 287–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-287.

Abstract:
Subjective quality assessment remains the most reliable way to evaluate image quality, but it is tedious and costly. Objective quality evaluation therefore offers a trade-off by providing a computational approach for predicting image quality. Even though a large literature exists for 2D image and video quality evaluation, the quality of 360-degree images is still under-explored, and one can question the efficiency of 2D quality metrics on such a new type of content. To this end, we propose to study the possible improvement of well-known 2D quality metrics using important features related to 360-degree content, i.e. equator bias and visual saliency. The performance evaluation is conducted on two databases containing various distortion types. The obtained results show a slight improvement in performance, highlighting some problems inherently related to both the database content and the subjective evaluation approach used to obtain the observers' quality scores.
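The equator-bias feature can be folded into a 2D metric by weighting each equirectangular row by the solid angle it covers, as the well-known WS-PSNR metric does with a cosine-of-latitude weight. A small sketch under that assumption; the rows-of-pixel-lists input format is purely illustrative:

```python
import math

def latitude_weights(height):
    """Per-row weights for an equirectangular image: rows near the poles
    cover less solid angle, so they contribute less (cf. WS-PSNR)."""
    return [math.cos((j + 0.5 - height / 2.0) * math.pi / height)
            for j in range(height)]

def weighted_mse(ref_rows, dist_rows):
    """Mean squared error over rows of pixels, weighted by latitude."""
    w = latitude_weights(len(ref_rows))
    num = sum(wj * sum((a - b) ** 2 for a, b in zip(r, d))
              for wj, r, d in zip(w, ref_rows, dist_rows))
    den = sum(wj * len(r) for wj, r in zip(w, ref_rows))
    return num / den
```

With this weighting, distortion concentrated at the poles (which is heavily over-sampled by the equirectangular format) is penalized far less than the same distortion near the equator, where viewers actually look.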
30

Zhu, Yucheng, Guangtao Zhai, Xiongkuo Min, and Jiantao Zhou. "The Prediction of Saliency Map for Head and Eye Movements in 360 Degree Images." IEEE Transactions on Multimedia 22, no. 9 (September 2020): 2331–44. http://dx.doi.org/10.1109/tmm.2019.2957986.

31

Ling, Jing, Kao Zhang, Yingxue Zhang, Daiqin Yang, and Zhenzhong Chen. "A saliency prediction model on 360 degree images using color dictionary based sparse representation." Signal Processing: Image Communication 69 (November 2018): 60–68. http://dx.doi.org/10.1016/j.image.2018.03.007.

32

Ozcinar, Cagri, and Aakanksha Rana. "Quality Assessment of Super-Resolved Omnidirectional Image Quality Using Tangential Views." Electronic Imaging 2021, no. 9 (January 18, 2021): 295–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.9.iqsp-295.

Abstract:
Omnidirectional images (ODIs), also known as 360-degree images, enable viewers to explore all directions of a given 360-degree scene from a fixed point. Designing an immersive imaging system with ODIs is challenging, as such systems require very large resolution coverage of the entire 360-degree viewing space to provide an enhanced quality of experience (QoE). Despite remarkable progress on single-image super-resolution (SISR) methods with deep-learning techniques, no study exists that assesses the quality of such SISR techniques on super-resolved ODIs. This paper proposes an objective, full-reference quality assessment framework which studies quality measurement for ODIs generated by GAN-based and CNN-based SISR methods. The framework uses tangential views to cope with the spherical nature of a given ODI. The generated tangential views are distortion-free and can be efficiently scaled to high-resolution spherical data for SISR quality measurement. We extensively evaluate two state-of-the-art SISR methods using widely used full-reference SISR quality metrics adapted to our designed framework. In addition, our study reveals that most objective metrics show high performance on CNN-based SISR, while subjective tests favor GAN-based architectures.
APA, Harvard, Vancouver, ISO, and other styles
33

Takagi, Yuki, Mitsunori Watanabe, Takashi Kojima, Yukihiro Sakai, Ryo Asano, and Kazuo Ichikawa. "Comparison of the efficacy and invasiveness of manual and automated gonioscopy." PLOS ONE 18, no. 4 (April 6, 2023): e0284098. http://dx.doi.org/10.1371/journal.pone.0284098.

Full text
Abstract:
Purpose To compare the efficacy and invasiveness of manual gonioscopy and automated 360-degree gonioscopy. Method Manual and automated gonioscopy were performed on 70 patients with glaucoma. Manual gonioscopy was performed by a glaucoma specialist and an ophthalmology resident, and automated gonioscopy (GS-1) was performed by orthoptists. We compared the examination time for acquiring gonioscopic images (GS-1: 16 directions; manual gonioscopy: 8 directions). Furthermore, we compared the pain and discomfort scores during the examination using the Individualized Numeric Rating Scale. Among the images acquired by automated gonioscopy, we also evaluated the percentages of acquired images that could be used to determine the angle opening condition. Results The examination time was not significantly different between manual (80.2±28.7) and automated gonioscopy (94.7±82.8) (p = 0.105). The pain score of automated gonioscopy (0.22±0.59) was significantly lower than that of manual gonioscopy (0.55±1.11) (p = 0.025). The discomfort score was not significantly different between manual (1.34±1.90) and automated gonioscopy (1.06±1.50) (p = 0.165). Automated gonioscopy successfully acquired clear gonioscopic images in 93.4% of the total images. Conclusion Automated gonioscopy is comparable in examination time and invasiveness to manual gonioscopy and may be useful for 360-degree iridocorneal angle evaluation.
APA, Harvard, Vancouver, ISO, and other styles
34

Tai, Kuan-Chen, and Chih-Wei Tang. "Siamese Networks-Based People Tracking Using Template Update for 360-Degree Videos Using EAC Format." Sensors 21, no. 5 (March 1, 2021): 1682. http://dx.doi.org/10.3390/s21051682.

Full text
Abstract:
Rich information is provided by 360-degree videos. However, non-uniform geometric deformation caused by sphere-to-plane projection significantly decreases the tracking accuracy of existing trackers, and the huge amount of data makes it difficult to achieve real-time tracking. Thus, this paper proposes a Siamese networks-based people tracker using template update for 360-degree equi-angular cubemap (EAC) format videos. Face stitching overcomes the problem of content discontinuity in the EAC format and avoids introducing new geometric deformation in stitched images. Fully convolutional Siamese networks enable tracking at high speed. Most importantly, to be robust against the combination of non-uniform geometric deformation of the EAC format and partial occlusions caused by zero padding in stitched images, this paper proposes a novel Bayes classifier-based timing detector of template update that refers to the linear discriminant feature and statistics of a score map generated by the Siamese networks. Experimental results show that the proposed scheme significantly improves the tracking accuracy of the fully convolutional Siamese network SiamFC on the EAC format while operating beyond the frame acquisition rate. Moreover, the proposed score map-based timing detector of template update outperforms state-of-the-art score map-based timing detectors.
APA, Harvard, Vancouver, ISO, and other styles
35

Hasche, Eberhard, Dominik Benning, Oliver Karaschewski, Florian Carstens, and Reiner Creutzburg. "Creating high-resolution 360-degree single-line 25K video content for modern conference rooms using film compositing techniques." Electronic Imaging 2020, no. 3 (January 26, 2020): 206–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.3.mobmu-206.

Full text
Abstract:
360-degree image and movie content has been gaining popularity over the last few years across the media industry. There are two main reasons for this development: on the one hand, the immersive character of this media form; on the other, the great progress that recording and presentation technology has made in terms of resolution and quality. 360-degree panoramas are particularly widespread in VR and AR technology. Despite the high immersive potential, these forms of presentation have the disadvantage that users are isolated and have no social contact during the presentation. Efforts have therefore been made to project 360-degree content in specially equipped rooms or planetariums to enable a shared experience for the audience. One area of application for 360-degree single-line cylindrical panoramas that include moving imagery is modern conference rooms in hotels, which create an immersive environment for their clients to encourage creativity. The aim of this work is to generate high-resolution 25K 360-degree videos for projection in a conference room. The creation of the panoramas uses the single-line cylinder technique and is based on compositing technologies used in the film industry. Video sequences are partially composited into a still-image panorama in order to achieve a high native resolution in the final film. A main part of this work is the comparison of different film, video, and DSLR cameras, in which different image parameters are examined with respect to image quality. Finally, the advantages, disadvantages, and limitations of the procedure are examined.
APA, Harvard, Vancouver, ISO, and other styles
36

Larabi, Mohamed-Chaker, Audrey Girard, Sami Jaballah, and Fan Yu. "Benchmark of 2D quality metrics for the assessment of 360-deg images." Color and Imaging Conference 2019, no. 1 (October 21, 2019): 262–67. http://dx.doi.org/10.2352/issn.2169-2629.2019.27.47.

Full text
Abstract:
Omnidirectional or 360-degree images are becoming very popular in many applications, and several challenges are raised by both the nature and the representation of the data. Quality assessment is one of them, from two different points of view: objective or subjective. In this paper, we propose to study the performance of different metrics belonging to various categories, including simple mathematical metrics, human-perception-based metrics, and spherically optimized metrics. The performance of these metrics is measured using different tools such as PLCC, SROCC, KROCC, and RMSE, based on the only publicly available database, from Nanjing University. The results show that the metrics considered optimized for 360-degree images do not provide the best correlation with human judgment of quality.
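The benchmarking tools named in this abstract (PLCC, SROCC, KROCC, RMSE) are standard correlation criteria for quality metrics. A minimal sketch of how they could be computed with NumPy and SciPy; the score arrays are hypothetical illustrations, not data from the cited database, and RMSE is shown after a simple linear fit where benchmarks often use a logistic regression:

```python
import numpy as np
from scipy import stats

# Hypothetical objective metric scores and subjective mean opinion scores (MOS)
objective = np.array([0.91, 0.85, 0.77, 0.66, 0.52, 0.43])
mos = np.array([4.6, 4.1, 3.8, 3.1, 2.4, 2.0])

plcc, _ = stats.pearsonr(objective, mos)     # Pearson linear correlation
srocc, _ = stats.spearmanr(objective, mos)   # Spearman rank-order correlation
krocc, _ = stats.kendalltau(objective, mos)  # Kendall rank correlation

# RMSE after mapping metric scores onto the MOS scale
# (a linear fit for brevity; benchmarks commonly fit a logistic function)
slope, intercept = np.polyfit(objective, mos, 1)
rmse = float(np.sqrt(np.mean((slope * objective + intercept - mos) ** 2)))
```

High PLCC/SROCC/KROCC and low RMSE indicate good agreement between a metric and human judgment.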
APA, Harvard, Vancouver, ISO, and other styles
37

Jin, Xun, and Jongweon Kim. "Artwork Identification for 360-Degree Panoramic Images Using Polyhedron-Based Rectilinear Projection and Keypoint Shapes." Applied Sciences 7, no. 5 (May 19, 2017): 528. http://dx.doi.org/10.3390/app7050528.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Gao, Jiahao, Zhiwen Hu, Kaigui Bian, Xinyu Mao, and Lingyang Song. "AQ360: UAV-Aided Air Quality Monitoring by 360-Degree Aerial Panoramic Images in Urban Areas." IEEE Internet of Things Journal 8, no. 1 (January 1, 2021): 428–42. http://dx.doi.org/10.1109/jiot.2020.3004582.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Hakdong, Heonyeong Lim, Minkyu Jee, Yurim Lee, MinSung Yoon, and Cheongwon Kim. "High-Precision Depth Map Estimation from Missing Viewpoints for 360-Degree Digital Holography." Applied Sciences 12, no. 19 (September 20, 2022): 9432. http://dx.doi.org/10.3390/app12199432.

Full text
Abstract:
In this paper, we propose a novel model to extract highly precise depth maps from missing viewpoints, especially for generating holographic 3D content. These depth maps are essential elements for phase extraction, which is required for the synthesis of computer-generated holograms (CGHs). The proposed model, called holographic dense depth, estimates depth maps through feature extraction combined with up-sampling. We designed and prepared a total of 9832 multi-view images with resolutions of 640 × 360. We evaluated our model by comparing the estimated depth maps with their ground truths using various metrics. We further compared the CGH patterns created from estimated depth maps with those from ground truths and reconstructed the holographic 3D image scenes from their CGHs. Both quantitative and qualitative results demonstrate the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
40

Eiris, Ricardo, Masoud Gheisari, and Behzad Esmaeili. "PARS: Using Augmented 360-Degree Panoramas of Reality for Construction Safety Training." International Journal of Environmental Research and Public Health 15, no. 11 (November 3, 2018): 2452. http://dx.doi.org/10.3390/ijerph15112452.

Full text
Abstract:
Improving the hazard-identification skills of construction workers is a vital step towards preventing accidents in the increasingly complex working conditions of construction jobsites. Training the construction workforce to recognize hazards therefore plays a central role in preparing workers to actively understand safety-related risks and make assertive safety decisions. Considering the inadequacies of traditional safety-training methods (e.g., passive lectures, videos, demonstrations), researchers have employed advanced visualization techniques such as virtual reality technologies to enable users to actively improve their hazard-identification skills in a safe and controlled environment. However, current virtual reality techniques sacrifice realism and demand high computational costs to reproduce real environments. Augmented 360-degree panoramas of reality offer an innovative alternative that creates low-cost, simple-to-capture, true-to-reality representations of the actual construction jobsite within which trainees may practice identifying hazards. This proof-of-concept study developed and evaluated a platform using augmented 360-degree panoramas of reality (PARS) for safety-training applications to enhance trainees' hazard-identification skills for four types of sample hazards. Thirty subjects participated in a usability test that evaluated the PARS training platform and its augmented 360-degree images captured from real construction jobsites. The usability reviews demonstrate that the trainees found the platform and augmentations advantageous to learning hazard identification. The results of this study will foreseeably help researchers develop engaging training platforms to improve the hazard-identification skills of workers.
APA, Harvard, Vancouver, ISO, and other styles
41

Wiguna, Gede Arya, Gede Bayu Suparta, and Andreas Christian Louk. "3D Micro-Radiography Imaging for Quick Assessment on Small Specimen." Advanced Materials Research 896 (February 2014): 681–86. http://dx.doi.org/10.4028/www.scientific.net/amr.896.681.

Full text
Abstract:
An imaging procedure for developing a 3D X-ray micro-radiography system has been developed. The idea arose from the necessity of performing internal inspection and testing nondestructively on a small specimen. The image is generated from multiple X-ray radiographs acquired from multiple angles of view. One radiograph was taken per degree, covering a total angle of 360 degrees, so that a set of 360 images was collected for each specimen. These images were then rendered into a 3D image. A set of image-processing procedures was applied, such as background subtraction and noise reduction. The experiment shows that this method is very useful for assessing a small specimen nondestructively prior to performing a tomography inspection.
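The per-degree acquisition and pre-processing described in this abstract can be sketched as a simple array pipeline. The frame size, noise model, and the circular 3-frame angular smoothing below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stack: one 64x64 radiograph per degree over a full 360-degree turn
stack = rng.normal(100.0, 5.0, size=(360, 64, 64))
background = rng.normal(100.0, 5.0, size=(64, 64))  # background (flat-field) frame

# Background subtraction applied to every projection
corrected = stack - background[None, :, :]

# Simple noise reduction: circular 3-frame moving average over adjacent angles
# (np.roll wraps around, matching the 360-degree periodicity of the scan)
denoised = (np.roll(corrected, 1, axis=0) + corrected
            + np.roll(corrected, -1, axis=0)) / 3.0
```

The cleaned projection stack would then feed a 3D rendering or tomographic reconstruction step.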
APA, Harvard, Vancouver, ISO, and other styles
42

Yamashita, Takuzo, Mahendra Kumar Pal, Kazutoshi Matsuzaki, and Hiromitsu Tomozawa. "Development of a Virtual Reality Experience System for Interior Damage Due to an Earthquake – Utilizing E-Defense Shake Table Test –." Journal of Disaster Research 12, no. 5 (September 27, 2017): 882–90. http://dx.doi.org/10.20965/jdr.2017.p0882.

Full text
Abstract:
To construct a virtual reality (VR) experience system for interior damage due to an earthquake, VR image content was created by obtaining images, sounds, and vibration data, with synchronization information, from multiple devices in a room on the 10th floor of a 10-story RC structure tested at the E-Defense shake table. An application for displaying 360-degree images of interior damage using a head-mounted display (HMD) was developed. The developed system was exhibited at public disaster-prevention events, and a questionnaire survey was then conducted to assess the usefulness of the VR experience in disaster-prevention education.
APA, Harvard, Vancouver, ISO, and other styles
43

Horn, E., F. Ashton, J. C. Haselgrove, and L. D. Peachey. "Tilt Unlimited: A 360 Degree Tilt Specimen Holder for the Jeol 4000-Ex 400 Kv TEM with Modified Objective Lens." Proceedings, annual meeting, Electron Microscopy Society of America 49 (August 1991): 996–97. http://dx.doi.org/10.1017/s0424820100089299.

Full text
Abstract:
Intermediate voltage electron microscopes are used in biological research to produce high quality images of relatively thick specimens, including whole cells prepared by critical point drying and sections up to several microns thick of embedded cells and tissues. Pairs of images with the specimen tilted by angles appropriate for the magnification and the specimen thickness can be viewed stereoscopically as a way to extract 3D information from these thick specimens. The tilts appropriate for stereo imaging of such thick specimens usually don't exceed +/-20 degrees. Quantitative 3D measurement and 3D reconstruction using tomography or computer graphic tracing methods give the highest accuracy results when the maximum range of tilt angles is available.
APA, Harvard, Vancouver, ISO, and other styles
44

Ping, Guo. "Real Three-Dimensional Image Projection System Based on the Volumetric 3D Display Principles and the WPF Framework." Applied Mechanics and Materials 427-429 (September 2013): 1436–39. http://dx.doi.org/10.4028/www.scientific.net/amm.427-429.1436.

Full text
Abstract:
This system projects real images of rotating 3D cones based on the volumetric 3D display principle and the WPF framework. Compared with conventional 3D displays, the system provides naked-eye 3D viewing, so viewers do not need to wear 3D glasses to perceive the 3D display. At the same time, the system offers a 360-degree holographic image display. The system's 3D images are designed using WPF, which makes them easy to produce.
APA, Harvard, Vancouver, ISO, and other styles
45

Borba, Eduardo Zilles. "Believability in virtual reality: a proposal to study brand communication in metaverses." Novos Olhares 11, no. 2 (February 11, 2023): 205283. http://dx.doi.org/10.11606/issn.2238-7714.no.2022.205283.

Full text
Abstract:
The article presents a theoretical and empirical exercise to the possibilities of brand communication in virtual reality (VR), proposing an analysis/creation tool for emerging metaverse platforms, applied to the field of brand communication (immersive and 360-degree images, multisensoriality, first person perspective). Methodologically, this study uses an empirical approach, qualitatively evaluating an advertising piece in VR and highlighting the dimensions of believability that, in some way, affect brand communication. Results show three relevant dimensions: realism, interactivity, and engagement with the plot.
APA, Harvard, Vancouver, ISO, and other styles
46

Gurses, Muhammet Enes, Abuzer Gungor, Sahin Hanalioglu, Cumhur Kaan Yaltirik, Hasan Cagri Postuk, Mustafa Berker, and Uğur Türe. "Qlone®: A Simple Method to Create 360-Degree Photogrammetry-Based 3-Dimensional Model of Cadaveric Specimens." Operative Neurosurgery 21, no. 6 (October 18, 2021): E488—E493. http://dx.doi.org/10.1093/ons/opab355.

Full text
Abstract:
BACKGROUND Human cadavers are an essential component of anatomy education. However, access to cadaveric specimens and laboratory facilities is limited in most parts of the world. Hence, new innovative approaches and accessible technologies are much needed to enhance anatomy training. OBJECTIVE To provide a practical method for 3-dimensional (3D) visualization of cadaveric specimens to maximize the utility of these precious educational materials. METHODS Embalmed cadaveric specimens (cerebrum, brain stem, and cerebellum) were used. The 3D models of cadaveric specimens were built by merging multiple 2-dimensional photographs. Pictures were taken with standard mobile devices (smartphone and tablet). A photogrammetry program (Qlone®, 2017-2020, EyeCue Vision Technologies Ltd, Yokneam, Israel), an all-in-one 3D scanning and augmented reality technology, was then used to convert the images into an integrated 3D model. RESULTS High-resolution 360-degree 3D models of the cadaveric specimens were obtained. These models could be rotated and moved freely on different planes, and viewed from different angles with varying magnifications. Advanced editing options and the possibility for export to virtual- or augmented-reality simulation allowed for better visualization. CONCLUSION This inexpensive, simple, and accessible method for creating 360-degree 3D cadaveric models can enhance training in neuroanatomy and allow for a highly realistic surgical simulation environment for neurosurgeons worldwide.
APA, Harvard, Vancouver, ISO, and other styles
47

Hara, Takayuki, Yusuke Mukuta, and Tatsuya Harada. "Spherical Image Generation from a Single Image by Considering Scene Symmetry." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1513–21. http://dx.doi.org/10.1609/aaai.v35i2.16242.

Full text
Abstract:
Spherical images taken in all directions (360 degrees by 180 degrees) allow the full surroundings of a subject to be represented, providing an immersive experience to viewers. Generating a spherical image from a single normal-field-of-view (NFOV) image is convenient and expands the usage scenarios considerably without relying on a specific panoramic camera or images taken from multiple directions; however, achieving such images remains a challenging and unresolved problem. The primary challenge is controlling the high degree of freedom involved in generating a wide area that includes all directions of the desired spherical image. We focus on scene symmetry, which is a basic property of the global structure of spherical images, such as rotational symmetry, plane symmetry, and asymmetry. We propose a method for generating a spherical image from a single NFOV image and controlling the degree of freedom of the generated regions using the scene symmetry. To estimate and control the scene symmetry using both a circular shift and flip of the latent image features, we incorporate the intensity of the symmetry as a latent variable into conditional variational autoencoders. Our experiments show that the proposed method can generate various plausible spherical images controlled from symmetric to asymmetric, and can reduce the reconstruction errors of the generated images based on the estimated symmetry.
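The symmetry operations this abstract mentions have a simple concrete form for equirectangular feature maps: a circular shift along the longitude axis corresponds to rotating the spherical scene about the vertical axis, and a horizontal flip corresponds to mirroring it. A minimal NumPy illustration; the array shape and values are hypothetical, not the paper's architecture:

```python
import numpy as np

# Hypothetical latent feature map for an equirectangular panorama: (channels, H, W)
feat = np.arange(2 * 4 * 8, dtype=float).reshape(2, 4, 8)

# Rotational symmetry: circular shift along the longitude (width) axis,
# i.e. a rotation of the spherical scene about the vertical axis
rotated = np.roll(feat, shift=2, axis=2)

# Plane symmetry: horizontal flip mirrors the scene left-right
flipped = feat[:, :, ::-1]
```

Both operations are invertible, which is what lets a model compare shifted or flipped latent features against the originals to estimate the intensity of each symmetry.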
APA, Harvard, Vancouver, ISO, and other styles
48

Matsuo, Masato, Shiro Mizoue, Koji Nitta, Yasuyuki Takai, Kazunobu Sugihara, and Masaki Tanito. "Intraobserver and interobserver agreement among anterior chamber angle evaluations using automated 360-degree gonio-photos." PLOS ONE 16, no. 5 (May 6, 2021): e0251249. http://dx.doi.org/10.1371/journal.pone.0251249.

Full text
Abstract:
Purpose To investigate the reproducibility of iridocorneal angle evaluations using pictures obtained with a gonioscopic camera, the Gonioscope GS-1 (Nidek Co., Gamagori, Japan). Methods Pragmatic within-patient comparative diagnostic evaluations of 140 GS-1 gonio-images, obtained from 35 eyes of 35 patients at four ocular sectors (superior, temporal, inferior, and nasal angles), were conducted twice, 1 week apart, in a masked fashion by five independent ophthalmologists, including three glaucoma specialists. We undertook observer agreement and correlation analyses of Scheie's angle width and pigmentation gradings and of the detection of peripheral anterior synechiae and the Sampaolesi line. Results The respective Fleiss' kappa values for the four elements between manual gonioscopy and automated gonioscopy by the glaucoma specialist were 0.22, 0.40, 0.32, and 0.58. Additionally, the respective intraobserver agreements for the four elements by each glaucoma specialist were 0.32 to 0.65, 0.24 to 0.71, 0.35 to 0.70, and 0.20 to 0.76; the Fleiss' kappa coefficients for the four elements among the three glaucoma specialists were, respectively, 0.31, 0.38, 0.31, and 0.17; and the Fleiss' kappa coefficients for the angle width and pigmentation gradings between each pair of glaucoma specialists were 0.30 to 0.35 and 0.29 to 0.43, respectively. Overall, the Kendall's tau coefficients for the angle gradings reflected positive correlations in the evaluations. Conclusion Our findings suggested slight-to-substantial intraobserver agreement and slight-to-fair (among the three) or fair-to-moderate (between each pair) interobserver agreement for angle assessments using GS-1 gonio-photos, even by glaucoma specialists. Sufficient training and a solid consensus should allow more reliable angle assessments using gonio-photos with high reproducibility.
APA, Harvard, Vancouver, ISO, and other styles
49

Riyadi, Slamet, and Ida Nurhaida. "Aplikasi Sistem Virtual Tour E-Panorama 360 Derajat Berbasis Android Untuk Pengenalan Kampus Mercu Buana." Jurnal Teknologi Informasi dan Ilmu Komputer 9, no. 1 (February 7, 2022): 17. http://dx.doi.org/10.25126/jtiik.2021864209.

Full text
Abstract:
The use of the internet to find information is now easily accessible anytime and anywhere for anyone, especially students. One of the main characteristics of an advanced campus is the availability of information presented in various media, such as images. This thesis, titled "Application of a Virtual Tour System Based on 360-Degree E-Panorama for Introduction to the Mercu Buana Campus," serves as a campus information medium displayed in the form of 360-degree panoramic images with an unrestricted viewing angle. The research method used in this study is the Waterfall methodology, chosen as the most suitable method, with an emphasis on its five development stages. Building this virtual tour required hardware in the form of a camera and a laptop, and software in the form of Photoshop, Panoweaver, XAMPP, and a code editor. The virtual tour website displays four scenes from various points and locations that can be accessed through the official website and map of the Mercu Buana Campus. The virtual tour system produces a website and an Android application as output.
APA, Harvard, Vancouver, ISO, and other styles
50

Barmpoutis, Panagiotis, Aristeidis Kastridis, Tania Stathaki, Jing Yuan, Mengjie Shi, and Nikos Grammalidis. "Suburban Forest Fire Risk Assessment and Forest Surveillance Using 360-Degree Cameras and a Multiscale Deformable Transformer." Remote Sensing 15, no. 8 (April 10, 2023): 1995. http://dx.doi.org/10.3390/rs15081995.

Full text
Abstract:
In the current context of climate change and demographic expansion, one of the phenomena that humanity faces is suburban wildfires. To prevent the occurrence of suburban forest fires, fire risk assessment and early fire detection approaches need to be applied. Forest fire risk mapping depends on various factors and contributes to the identification and monitoring of vulnerable zones where risk factors are most severe. Therefore, watchtowers, sensors, and base stations of autonomous unmanned aerial vehicles need to be placed carefully in order to ensure adequate visibility or battery autonomy. In this study, fire risk assessment of an urban forest was performed, and the recently introduced 360-degree data were used for early fire detection. Furthermore, a single-step approach that integrates a multiscale vision transformer was introduced for accurate fire detection. The study area includes the suburban pine forest of the city of Thessaloniki (Greece), named Seich Sou, which is prone to wildfires. Real and synthetic 360-degree images were used to evaluate the performance of the proposed workflow. Experimental results demonstrate the great potential of the proposed system, which achieved an F-score of 91.6% for real fire event detection. This indicates that the proposed method could significantly contribute to the monitoring, protection, and early fire detection of the suburban forest of Thessaloniki.
APA, Harvard, Vancouver, ISO, and other styles