Journal articles on the topic 'Caméra 360'

Consult the top 50 journal articles for your research on the topic 'Caméra 360.'

1

Blanc, P., E. Cassol, E. Ouhayoun, and P. Payoux. "Gamma caméra CZT 360° VERITON-CT : expérience toulousaine." Médecine Nucléaire 45, no. 4 (July 2021): 240–44. http://dx.doi.org/10.1016/j.mednuc.2021.06.001.

2

Piatkova, Y., P. Payoux, C. Boursier, P. Gantet, M. Bordonné, V. Roch, L. Imbert, and A. Verger. "Comparaison de l’imagerie TEMP au 123I-FP-CIT obtenue avec une caméra CZT 360° et une caméra conventionnelle : étude prospective." Médecine Nucléaire 45, no. 4 (July 2021): 182. http://dx.doi.org/10.1016/j.mednuc.2021.06.021.

3

Tran, Khanh Bao, Alexander Carballo, and Kazuya Takeda. "LiDAR-360 RGB Camera-360 Thermal Camera Targetless Calibration for Dynamic Situations." Sensors 24, no. 22 (November 10, 2024): 7199. http://dx.doi.org/10.3390/s24227199.

Abstract:
Integrating multiple types of sensors into autonomous systems, such as cars and robots, has become a widely adopted approach in modern technology. Among these sensors, RGB cameras, thermal cameras, and LiDAR are particularly valued for their ability to provide comprehensive environmental data. However, despite their advantages, current research primarily focuses on a single sensor or on a combination of two sensors at a time; the full potential of utilizing all three sensors is often neglected. One key challenge is the ego-motion compensation of data in dynamic situations, which results from the rotational nature of the LiDAR sensor, and the blind spots of standard cameras due to their limited field of view. To resolve this problem, this paper proposes a novel method for the simultaneous registration of LiDAR, panoramic RGB cameras, and panoramic thermal cameras in dynamic environments without the need for calibration targets. Initially, essential features from RGB images, thermal data, and LiDAR point clouds are extracted through a novel method designed to capture significant raw data characteristics. These extracted features then serve as a foundation for ego-motion compensation, optimizing the initial dataset. Subsequently, the raw features can be further refined to enhance calibration accuracy, achieving more precise alignment results. The results demonstrate the effectiveness of this approach in enhancing multi-sensor calibration compared with other methods. At high speeds of around 9 m/s, the accuracy of LiDAR-camera calibration improves by about 30 percent in some situations. The proposed method has the potential to significantly improve the reliability and accuracy of autonomous systems in real-world scenarios, particularly under challenging environmental conditions.
4

Barazzetti, L., M. Previtali, and F. Roncoroni. "CAN WE USE LOW-COST 360 DEGREE CAMERAS TO CREATE ACCURATE 3D MODELS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 69–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-69-2018.

Abstract:
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
5

М.В., Михайлюк, Омельченко Д.В., Кононов Д.А., and Логинов Д.М. "Параметры камеры просмотра видео 360 градусов." Труды НИИСИ РАН 10, no. 4 (October 22, 2020): 26–32. http://dx.doi.org/10.25682/niisi.2020.4.0004.

Abstract:
The paper considers the choice of virtual camera parameters for viewing 360-degree video produced with a cubemap projection. Estimates of image distortion under camera rotations are provided for different horizontal field-of-view angles. Algorithms for implementing camera rotations while viewing 360-degree video are proposed.
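To make the distortion issue raised in this abstract concrete, the following minimal sketch (an illustration, not the authors' method) computes how strongly a rectilinear (pinhole) view stretches content at the image edge as the horizontal field of view grows: with x = f·tan(α), the local scale is dx/dα = f/cos²(α), so the stretch at the edge relative to the image centre is 1/cos²(FOV/2).

```python
import math

def edge_stretch(h_fov_deg: float) -> float:
    """Stretch at the image edge of a rectilinear view relative to its centre.

    x = f * tan(alpha)  =>  dx/dalpha = f / cos^2(alpha),
    so the relative stretch at alpha = FOV/2 is 1 / cos^2(FOV/2).
    """
    half = math.radians(h_fov_deg) / 2.0
    return 1.0 / math.cos(half) ** 2

for fov in (60, 90, 110, 130):
    print(f"horizontal FOV {fov:3d} deg -> edge stretch x{edge_stretch(fov):.2f}")
```

For a 90° horizontal FOV (roughly one cubemap face) the edge is stretched about 2x relative to the centre, which is the kind of distortion the paper evaluates when selecting the viewing camera's field of view.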
6

Lian, Trisha, Joyce Farrell, and Brian Wandell. "Image Systems Simulation for 360° Camera Rigs." Electronic Imaging 2018, no. 5 (January 28, 2018): 353–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.05.pmii-353.

7

Sakai, Tetsu, Michifumi Yoshioka, and Katsufumi Inoue. "Camera Tracking Improvement for LSD-SLAM System with 360-degree Camera." IEEJ Transactions on Electronics, Information and Systems 140, no. 7 (July 1, 2020): 800–809. http://dx.doi.org/10.1541/ieejeiss.140.800.

8

Yan, Zhisheng, and Jun Yi. "Dissecting Latency in 360° Video Camera Sensing Systems." Sensors 22, no. 16 (August 11, 2022): 6001. http://dx.doi.org/10.3390/s22166001.

Abstract:
360° video camera sensing is an increasingly popular technology. Compared with traditional 2D video systems, it is challenging to ensure the viewing experience in 360° video camera sensing because the massive omnidirectional data introduce adverse effects on start-up delay, event-to-eye delay, and frame rate. Therefore, understanding the time consumption of computing tasks in 360° video camera sensing becomes the prerequisite to improving the system’s delay performance and viewing experience. Despite the prior measurement studies on 360° video systems, none of them delves into the system pipeline and dissects the latency at the task level. In this paper, we perform the first in-depth measurement study of task-level time consumption for 360° video camera sensing. We start with identifying the subtle relationship between the three delay metrics and the time consumption breakdown across the system computing task. Next, we develop an open research prototype Zeus to characterize this relationship in various realistic usage scenarios. Our measurement of task-level time consumption demonstrates the importance of the camera CPU-GPU transfer and the server initialization, as well as the negligible effect of 360° video stitching on the delay metrics. Finally, we compare Zeus with a commercial system to validate that our results are representative and can be used to improve today’s 360° video camera sensing systems.
9

Jauhari, Jauhari. "SOLO-YOGYA INTO 360-DEGREE PHOTOGRAPHY." Capture : Jurnal Seni Media Rekam 13, no. 1 (December 13, 2021): 17–31. http://dx.doi.org/10.33153/capture.v13i1.3627.

Abstract:
Currently, technological developments have made it possible for photographic works not only to be presented as a flat, two-dimensional 180-degree panorama, but also to present reality from a 360-degree perspective. This research on the creation of photographic works aims to optimize photographic equipment for photographing with a 360-degree perspective. Even though there are many 360-degree applications for smartphones, using a DSLR camera to create works with a 360-degree perspective has the advantage that the result can be printed in large sizes at high resolution without visible pixelation. The method of creating this work is based on the experimental process of developing DSLR camera equipment. This 360-degree photography technique uses a 'panning-sequence' approach with 'continuous exposure', which allows the images captured by the camera to be combined into one panoramic image. In addition to producing an important and interesting visual appearance, the presence of a 360-degree perspective in this work can also bring new nuances to the art of photography.
10

Sutrisno, Arif. "STUDI PERBANDINGAN ANIMASI 360 DERAJAT BERTEMA SEJARAH." JADECS (Jurnal of Art, Design, Art Education & Cultural Studies) 6, no. 1 (April 16, 2021): 22. http://dx.doi.org/10.17977/um037v6i12021p22-34.

Abstract:
To identify the general characteristics of historical-themed 360° animation, its visual characteristics, and the ways material is delivered in historical learning media, this study compares three historical-themed 360° animations, namely the game trailer Assassin's Creed Syndicate Jack the Ripper, Dinosaurs World 360 VR, and Dunkirk 'Save Every Breath'. The research method consists of determining the benchmark focus, planning and research, data collection, implementation, recommendations, and analysis. In general, historical 360° animation uses 3D animation techniques. The narrative flow tends to be linear with narrative storytelling. The point of view used is first person, and the camera movement used is a follow-subject movement. Keywords: 360-degree animation, learning media, history
11

Humpe, Andreas. "Bridge Inspection with an Off-the-Shelf 360° Camera Drone." Drones 4, no. 4 (October 11, 2020): 67. http://dx.doi.org/10.3390/drones4040067.

Abstract:
The author proposes a new approach for bridge crack detection by a 360° camera on top of a drone. Traditionally, bridge inspection is performed manually and although the use of drones has been implemented before, researchers used standard high definition cameras underneath the drone. To make the approach comparable to the conventional approach, two bridges were selected in Germany and inspected for cracks and defects by applying both methods. The author follows an engineering design process and after developing a prototype of the drone with a 360° camera above the body of the drone, the system is built, tested, and the bridges are inspected. First, the critical parts of the bridges are inspected with an off-the-shelf drone with a high definition camera underneath the drone. The results provide a benchmark for comparison. Next, the new approach to bridge inspection by using a 360° camera on top of the drone is tested. The images of the critical parts of the bridge that were taken with the 360° camera on top of the drone are analyzed and compared to the images of the conventional approach with the camera underneath the drone. The results show that a 360° camera can be used for crack and defect detection with similar results to a standard high definition camera. Furthermore, the 360° camera is more suitable for inspecting corners or the ceiling of, e.g., an arch bridge.
12

Lee, Hyunchul, and Okkyung Choi. "An efficient parameter update method of 360-degree VR image model." International Journal of Engineering Business Management 11 (January 1, 2019): 184797901983599. http://dx.doi.org/10.1177/1847979019835993.

Abstract:
Recently, with the rapid growth of manufacturing and ease of use, technologies utilizing virtual reality images have been increasing. It is very important to estimate the projected direction and position of the image in order to show image quality similar to the real world, and the estimation of direction and position is solved using the relation that transforms the sphere into the expanded equirectangular projection. This transformation relationship can be divided into camera intrinsic parameters and camera extrinsic parameters, and every image has its own camera parameters. If several images are taken with the same camera, their intrinsic parameters will have the same values; however, simply fixing the intrinsic parameters to the same value for all images is not the best way to match them. To solve these problems and display images without a sense of heterogeneity, it is necessary to build a cost function by modeling the conversion relation and to calculate the camera parameters that minimize the residual. In this article, we compare and analyze efficient camera parameter update methods. For the comparative analysis, we use Levenberg-Marquardt, a parameter optimization algorithm using corresponding points, and propose an efficient camera parameter update method based on the analysis results.
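As a rough illustration of this kind of residual-minimizing parameter update (a hedged sketch, not the authors' implementation), the snippet below fits a single rotation, standing in for the extrinsic parameters, to matched viewing directions with SciPy's Levenberg-Marquardt solver. The synthetic data, the rotation-vector parameterization, and the residual definition are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)

# Synthetic matched viewing directions (unit rays) between two 360-degree images.
rays_a = rng.normal(size=(50, 3))
rays_a /= np.linalg.norm(rays_a, axis=1, keepdims=True)
true_rot = Rotation.from_euler("xyz", [5.0, -12.0, 30.0], degrees=True)
rays_b = true_rot.apply(rays_a) + rng.normal(scale=1e-3, size=(50, 3))

def residuals(rotvec):
    # Difference between rotated source rays and the observed target rays.
    return (Rotation.from_rotvec(rotvec).apply(rays_a) - rays_b).ravel()

fit = least_squares(residuals, x0=np.zeros(3), method="lm")  # Levenberg-Marquardt
print("estimated rotation (deg):",
      Rotation.from_rotvec(fit.x).as_euler("xyz", degrees=True).round(2))
```

A full pipeline would extend the parameter vector with focal length and position terms, but the predict-residual-update loop has the same shape.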
13

Janiszewski, Mateusz, Masoud Torkan, Lauri Uotinen, and Mikael Rinne. "Rapid Photogrammetry with a 360-Degree Camera for Tunnel Mapping." Remote Sensing 14, no. 21 (October 31, 2022): 5494. http://dx.doi.org/10.3390/rs14215494.

Abstract:
Structure-from-Motion Multi-View Stereo (SfM-MVS) photogrammetry is a viable method to digitize underground spaces for inspection, documentation, or remote mapping. However, the conventional image acquisition process can be laborious and time-consuming. Previous studies confirmed that the acquisition time can be reduced when using a 360-degree camera to capture the images. This paper demonstrates a method for rapid photogrammetric reconstruction of tunnels using a 360-degree camera. The method is demonstrated in a field test executed in a tunnel section of the Underground Research Laboratory of Aalto University in Espoo, Finland. A 10 m-long tunnel section with exposed rock was photographed using the 360-degree camera from 27 locations and a 3D model was reconstructed using SfM-MVS photogrammetry. The resulting model was then compared with a reference laser scan and a more conventional digital single-lens reflex (DSLR) camera-based model. Image acquisition with a 360-degree camera was 3x faster than with a conventional DSLR camera and the workflow was easier and less prone to errors. The 360-degree camera-based model achieved a 0.0046 m distance accuracy error compared to the reference laser scan. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements of the sub-vertical fracture sets, with an average error of 2–5°.
14

H S, Mrs Anjana. "360 DEGREE DESTROYER FOR MILITARY." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 10, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem33561.

Abstract:
The "360 Degree Destroyer for Military" model operates by automatically detecting various threats such as missiles, tankers, guns, humans, and drones using a laptop camera. Detection is accomplished through the implementation of CNN algorithms. Instead of firing upon detection, the system utilizes a laser to pinpoint the detected object, an LED is used to indicate its threat level, and land mines are detected using a metal sensor. Messages indicating the presence of detected objects are promptly sent to Telegram for immediate action. Additionally, the movement of the robot can be controlled manually via Bluetooth by establishing a communication link between a Bluetooth-enabled device (such as a smartphone or a computer) and the robot's control system. This link allows commands to be sent wirelessly from the device to the robot, enabling control over its movement, including commands to move forward or backward, turn left or right, stop, or perform other predefined actions. This innovative approach to threat detection and communication provides a flexible and efficient solution for defense strategies, enhancing situational awareness and response capabilities. Keywords: threat detection, land mine, missiles, tankers, guns, humans, drones, laptop camera, CNN, laser, LED, Telegram, Bluetooth.
15

Syawaludin, Muhammad Firdaus, Myungho Lee, and Jae-In Hwang. "Foveation Pipeline for 360° Video-Based Telemedicine." Sensors 20, no. 8 (April 16, 2020): 2264. http://dx.doi.org/10.3390/s20082264.

Abstract:
Pan-tilt-zoom (PTZ) and omnidirectional cameras serve as a video-mediated communication interface for telemedicine. Most cases use either PTZ or omnidirectional cameras exclusively; even when used together, images from the two are shown separately on 2D displays. Conventional foveated imaging techniques may offer a solution for exploiting the benefits of both cameras, i.e., the high resolution of the PTZ camera and the wide field-of-view of the omnidirectional camera, but displaying the unified image on a 2D display would reduce the benefit of “omni-” directionality. In this paper, we introduce a foveated imaging pipeline designed to support virtual reality head-mounted displays (HMDs). The pipeline consists of two parallel processes: one for estimating parameters for the integration of the two images and another for rendering images in real time. A control mechanism for placing the foveal region (i.e., high-resolution area) in the scene and zooming is also proposed. Our evaluations showed that the proposed pipeline achieved, on average, 17 frames per second when rendering the foveated view on an HMD, and showed angular resolution improvement on the foveal region compared with the omnidirectional camera view. However, the improvement was less significant when the zoom level was 8× and more. We discuss possible improvement points and future research directions.
16

Ningtias, Trisni Wahyu, Koko Joni, and Riza Alfita. "Rancang Bangun Rekonstruksi 3D Dengan Kinect Xbox 360." Jurnal Teknik Elektro dan Komputasi (ELKOM) 2, no. 1 (March 31, 2020): 49–59. http://dx.doi.org/10.32528/elkom.v2i1.3136.

Abstract:
Technology is developing rapidly and affects many fields, one of which is scanning objects using a computer. Object scanning is a technology that combines hardware to see objects and software to process the data received by the hardware. The traditional manufacturing process without 3D scanning, which involves the design, analysis, and testing of prototypes, takes a very long time and is expensive, whereas 3D scanning is considered more efficient and practical. This research was conducted to facilitate the 3D scanning process by utilizing the sensors of the Kinect Xbox 360 camera. The object is scanned with the camera to obtain data on each side, and the resulting data are processed by the system into 3D objects. Object data are collected using Eclipse software, while object rotation is performed by a stepper motor controlled by an Arduino. Based on the test results, the infrared sensor of the Kinect camera performs sub-optimally on objects with uneven surfaces, which reflect its light poorly, but the sensor works well on objects with flat surfaces.
17

Barazzetti, L., M. Previtali, and F. Roncoroni. "3D MODELLING WITH THE SAMSUNG GEAR 360." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (February 23, 2017): 85–90. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-85-2017.

Abstract:
The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Direction (the desktop software for post-processing) has a relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the two-lens camera, as well as their relative orientation, allowed new equirectangular projections to be generated, from which a significant improvement in geometric accuracy was achieved.
18

Chambel Lopes, Diogo, and Isabel Nogueira. "Evaluation of Direct Sunlight Availability Using a 360° Camera." Solar 4, no. 4 (October 1, 2024): 555–71. http://dx.doi.org/10.3390/solar4040026.

Abstract:
One important aspect to consider when buying a house or apartment is adequate solar exposure. The same applies to the evaluation of the shadowing effects of existing buildings on prospective construction sites and vice versa. In different climates and seasons, it is not always easy to assess if there will be an excess or lack of sunlight, and both can lead to discomfort and excessive energy consumption. The aim of our project is to design a method to quantify the availability of direct sunlight to answer these questions. We developed a tool in Octave to calculate representative parameters, such as sunlight hours per day over a year and the times of day for which sunlight is present, considering the surrounding objects. The apparent sun position over time is obtained from an existing algorithm and the surrounding objects are surveyed using a picture taken with a 360° camera from a window or other sunlight entry area. The sky regions in the picture are detected and all other regions correspond to obstructions to direct sunlight. The sky detection is not fully automatic, but the sky swap tool in the camera software could be adapted by the manufacturer for this purpose. We present the results for six representative test cases.
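The core check described above can be summarised in a few lines. The sketch below is an illustration in Python rather than the authors' Octave tool, and the mask layout, resolution, and hard-coded sun track are hypothetical stand-ins for the surveyed 360° picture and for the external sun-position algorithm the paper relies on.

```python
import numpy as np

H, W = 512, 1024                      # equirectangular obstruction mask: rows = elevation, cols = azimuth
sky_mask = np.zeros((H, W), dtype=bool)
sky_mask[: H // 3, :] = True          # hypothetical: only the upper part of the view is open sky

def sun_is_visible(azimuth_deg: float, elevation_deg: float) -> bool:
    """Map a sun direction onto the mask (row 0 = zenith) and test for sky."""
    if elevation_deg <= 0.0:
        return False                  # sun below the horizon
    col = int((azimuth_deg % 360.0) / 360.0 * (W - 1))
    row = int((90.0 - elevation_deg) / 180.0 * (H - 1))
    return bool(sky_mask[row, col])

# Hypothetical sun track sampled every 10 minutes: (azimuth, elevation) in degrees.
sun_track = [(90.0 + 2.0 * i, 40.0 * np.sin(np.pi * i / 72)) for i in range(73)]
sunlit_minutes = sum(10 for az, el in sun_track if sun_is_visible(az, el))
print(f"direct sunlight on this day: {sunlit_minutes / 60:.1f} h")
```

Repeating the count over a year of sun positions gives the sunlight-hours-per-day curve the abstract refers to.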
19

Hasler, O., B. Loesch, S. Blaser, and S. Nebiker. "CONFIGURATION AND SIMULATION TOOL FOR 360-DEGREE STEREO CAMERA RIG." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W13 (June 5, 2019): 793–98. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w13-793-2019.

Abstract:
The demand for capturing outdoor and indoor scenes is rising with the digitalization trend in the construction industry. An efficient solution for capturing these environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be made user-friendly and accessible when hosted in a cloud-based 3D geoinformation service. The design of such a 360° stereo camera system is challenging, since multiple parameters such as focal length, stereo base length, and environmental restrictions such as narrow corridors influence each other. Therefore, this paper presents a toolset which helps to configure and evaluate such a panoramic stereo camera rig. The first tool is used to determine from which distance onward 360° stereo coverage is achieved, depending on the parametrization of the rig. The second tool can be used to capture images with the parametrized camera rig in different virtual indoor and outdoor scenes. The last tool supports stitching the captured images together with respect to the intrinsic and extrinsic parameters from the configuration tool. This toolset radically simplifies the evaluation process of a 360° stereo camera configuration and decreases the number of physical MMS prototypes.
20

Kim, Chul-Hyun. "Proposal for 6DoF similar experience at integral 360 camera image using Observatory Box." Journal of Digital Contents Society 21, no. 10 (October 31, 2020): 1885–93. http://dx.doi.org/10.9728/dcs.2020.21.10.1885.

21

Lee, Jaehyun, Sungjae Ha, Philippe Gentet, Leehwan Hwang, Soonchul Kwon, and Seunghyun Lee. "A Novel Real-Time Virtual 3D Object Composition Method for 360° Video." Applied Sciences 10, no. 23 (December 4, 2020): 8679. http://dx.doi.org/10.3390/app10238679.

Abstract:
As highly immersive virtual reality (VR) content, 360° video allows users to observe all viewpoints within the desired direction from the position where the video is recorded. In 360° video content, virtual objects are inserted into recorded real scenes to provide a higher sense of immersion. These techniques are called 3D composition. For a realistic 3D composition in a 360° video, it is important to obtain the internal (focal length) and external (position and rotation) parameters from a 360° camera. Traditional methods estimate the trajectory of a camera by extracting the feature point from the recorded video. However, incorrect results may occur owing to stitching errors from a 360° camera attached to several high-resolution cameras for the stitching process, and a large amount of time is spent on feature tracking owing to the high-resolution of the video. We propose a new method for pre-visualization and 3D composition that overcomes the limitations of existing methods. This system achieves real-time position tracking of the attached camera using a ZED camera and a stereo-vision sensor, and real-time stabilization using a Kalman filter. The proposed system shows high time efficiency and accurate 3D composition.
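As an aside on the stabilization step mentioned in this abstract, a constant-velocity Kalman filter over a single camera coordinate might look like the hedged sketch below. It is a simplified illustration, not the authors' implementation, and the time step, noise covariances, and synthetic measurements are assumptions.

```python
import numpy as np

dt = 1.0 / 30.0                           # assumed tracking rate of 30 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
H = np.array([[1.0, 0.0]])                # only the position is observed
Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
R = np.array([[5e-3]])                    # measurement noise (assumed)

x = np.zeros((2, 1))                      # state: [position, velocity]
P = np.eye(2)

def kalman_step(z: float) -> float:
    """One predict/update cycle for a noisy position measurement z (metres)."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    innovation = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ innovation
    P = (np.eye(2) - K @ H) @ P
    return float(x[0, 0])

noisy_positions = 0.5 + 0.02 * np.random.randn(100)   # jittery track around 0.5 m
smoothed = [kalman_step(z) for z in noisy_positions]
print(f"last smoothed position: {smoothed[-1]:.3f} m")
```

A production tracker would filter all six pose components and tune the covariances to the sensor, but the structure stays the same.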
22

Indriyani, Lisa Rahma. "PENGEMBANGAN VIDEO 360 DERAJAT UNTUK MENINGKATKAN KETERAMPILAN MENDIRIGEN LAGU NASIONAL." Joyful Learning Journal 12, no. 4 (December 28, 2023): 242–47. http://dx.doi.org/10.15294/jlj.v12i4.76890.

Abstract:
This research developed 360-degree camera-based video learning media for conducting national songs. The aim was to develop and test the feasibility and effectiveness of 360-degree camera-based video learning media in class III at SD Negeri 1 Gemblengan. The study used the Research and Development (R&D) approach with the development model of Borg & Gall. Data collection techniques included test techniques (pretest and posttest) and non-test techniques (observation, interviews, questionnaires, and documentation). The 360-degree camera-based video learning media design contained videos of conducting material, video tutorials on conducting movements based on a 360-degree camera, and videos of collections of national songs. The assessment of the 360-degree camera-based video learning media obtained a score of 88.3% from material experts and 98% from media experts, in the very feasible category. Data analysis used the normality test, t-test, and N-gain test. The normality test showed a sig. value > 0.05, so the data were normally distributed. The 360-degree camera-based video learning media was shown to be effective by the t-test results, with a significance value (Sig) of 0.0004 < 0.05 for the small scale and 0.000 < 0.05 for the large scale, so the alternative hypothesis was accepted and the null hypothesis rejected. For the N-gain test, there was an increase in learning outcomes from the average psychomotor pretest and posttest scores of 0.49 for the small scale and 0.46 for the large scale, in the medium category. This research concludes that 360-degree camera-based video learning media is feasible and effective for improving the national song conducting skills of class III students at SD Negeri 1 Gemblengan.
23

Janiszewski, Mateusz, Markus Prittinen, Masoud Torkan, and Lauri Uotinen. "Rapid tunnel scanning using a 360-degree camera and SfM photogrammetry." IOP Conference Series: Earth and Environmental Science 1124, no. 1 (January 1, 2023): 012010. http://dx.doi.org/10.1088/1755-1315/1124/1/012010.

Abstract:
Photogrammetric scanning can be employed for the digitization of underground spaces, for example for remote mapping, visualization, or training purposes. However, such a technique requires capturing many photos, which can be laborious and time-consuming. Previous research has demonstrated that the acquisition time can be reduced by capturing the data with multiple lenses or devices at the same time. Therefore, this paper demonstrates a method for rapid scanning of hard rock tunnels using Structure-from-Motion (SfM) photogrammetry and a 360-degree camera. The test was performed in the Underground Research Laboratory of Aalto University (URLA). The tunnel is located in granitic rocks at a depth of 20 m below the Otaniemi campus in Espoo, Finland. A 10 m long and 3.5 m high tunnel section with exposed rock was selected for this study. Photos were captured using the 360-degree camera from 27 locations and 3D models were reconstructed using SfM photogrammetry. The accuracy, speed, and resolution of the 3D models were measured and compared with models scanned with a digital single-lens reflex (DSLR) camera. The results show that the data capture process with a 360-degree camera is 6x faster compared to a conventional camera. In addition, the orientation of discontinuities was measured remotely from the 3D model and the digitally obtained values matched the manual compass measurements. Even though the quality of the 360-degree camera-based 3D model was visually inferior to the DSLR model, the point cloud had sufficient accuracy and resolution for semi-automatic discontinuity measurements. The quality of the models can be improved by combining 360-degree and DSLR photos, which results in a point cloud with 3x higher resolution and 2x higher accuracy. The results demonstrate that 360-degree cameras can be used for the rapid digitization of underground tunnels.
24

Satyawan, Arief Suryadi, Prio Adjie Utomo, Heni Puspita, and Ike Yuni Wulandari. "360-degree Image Processing on NVIDIA Jetson Nano." Internet of Things and Artificial Intelligence Journal 4, no. 2 (May 4, 2024): 172–86. http://dx.doi.org/10.31763/iota.v4i2.722.

Abstract:
A wide field of vision is required for the object-detection systems of autonomous electric vehicles. By identifying objects, the car can be given driver-like intelligence, so that it recognizes items and makes decisions to prevent collisions with them. A 360-degree camera is well suited to this task because it can record the events surrounding the car in a single shot. Nevertheless, 360° cameras naturally produce distorted images, so normalization is required to make the image appear normal while keeping the larger capture area. In this study, an NVIDIA Jetson Nano is used to build software for 360-degree image normalization in Python. To process an image in real time, an image-shape mapping is first chosen that preserves information about the entire scene captured by the camera, and this mapping is then applied. Using Python on the NVIDIA Jetson Nano, the authors successfully processed 360-degree images for local and real-time video as well as image geometry modifications.
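The normalization step described here is essentially a reprojection from the distorted 360° frame (for instance an equirectangular layout) to a virtual pinhole view. The sketch below shows one common way to do that with NumPy and OpenCV; it is an illustrative assumption about the mapping, not the paper's code, and the field of view, output size, and viewing direction are arbitrary example values.

```python
import cv2
import numpy as np

def equirect_to_perspective(equi, h_fov_deg=90.0, yaw_deg=0.0,
                            pitch_deg=0.0, out_size=(640, 480)):
    """Render a rectilinear (pinhole) view from an equirectangular image."""
    w_out, h_out = out_size
    h_eq, w_eq = equi.shape[:2]
    f = (w_out / 2.0) / np.tan(np.radians(h_fov_deg) / 2.0)

    # One ray per output pixel in the virtual camera frame (z forward, y down).
    u, v = np.meshgrid(np.arange(w_out), np.arange(h_out))
    rays = np.stack([(u - w_out / 2.0) / f, (v - h_out / 2.0) / f,
                     np.ones_like(u, dtype=float)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by the requested yaw (about y) and pitch (about x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (ry @ rx).T

    # Ray direction -> longitude/latitude -> source pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))
    map_x = ((lon / (2 * np.pi) + 0.5) * w_eq).astype(np.float32)
    map_y = ((lat / np.pi + 0.5) * h_eq).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

# Example use: view = equirect_to_perspective(cv2.imread("frame.jpg"), yaw_deg=90.0)
```

Precomputing the two lookup maps once and reusing them per frame keeps the per-frame cost to a single remap call, which matters on a device like the Jetson Nano.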
25

Scotti, G., L. Marcenaro, C. Coelho, F. Selvaggi, and C. S. Regazzoni. "Dual camera intelligent sensor for high definition 360 degrees surveillance." IEE Proceedings - Vision, Image, and Signal Processing 152, no. 2 (2005): 250. http://dx.doi.org/10.1049/ip-vis:20041302.

26

Su, Chen, Xinxin Zhou, Haifeng Li, Qing Yang, Zhechao Wang, and Xu Liu. "360 deg full-parallax light-field display using panoramic camera." Applied Optics 55, no. 17 (June 10, 2016): 4729. http://dx.doi.org/10.1364/ao.55.004729.

27

Svobodová, Hana. "Teacher-Student-Environment Interactions in Fieldwork Through 360-Degree Camera." New Educational Review 71, no. 1 (2023): 78–89. http://dx.doi.org/10.15804/tner.23.71.1.06.

Abstract:
The 360-degree camera is a relatively new technology that might become an effective tool for the development of students and teachers and overall educational improvement. As “normal” video provides a maximum 130-degree perspective, it could not offer a complex capture of education. This paper presents a way to use a 360-degree camera during two different modes of trainee teachers’ fieldwork and proposes how to evaluate this form of education in terms of teacher-student-environment interactions by identifying the occurrence and duration of each type of interaction. It is evident that the type and intensity of interactions affect fieldwork significantly and may lead to different depths of learning.
28

Cheng, Hoi Chuen. "Leveraging 360° Camera in 3D Reconstruction: A Vision-Based Approach." International Journal of Signal Processing Systems 12 (2024): 1–6. http://dx.doi.org/10.18178/ijsps.2024.12.1-6.

Abstract:
In this paper, we present a novel vision-based approach for 3D reconstruction using a single 360° camera, aiming to offer a simplified and accessible solution for various consumer-oriented applications. Consumer-grade 360° cameras have gained significant popularity due to their affordability and ease of use. However, traditional methods for 3D reconstruction often require complex setups with multiple cameras or expensive hardware such as Light Detection and Ranging (LiDAR). Our approach addresses the challenges associated with 360° cameras by converting the distorted Equirectangular Projection (ERP) into four perspective views, allowing compatibility with deep learning models trained on undistorted perspective images. We leverage Visual Simultaneous Localization and Mapping (VSLAM) techniques for camera pose estimation and employ a standard 3D reconstruction pipeline for generating detailed 3D mesh representations of the indoor environment. Through experimental evaluation, we compare the performance of 360° cameras with traditional perspective cameras in 3D reconstruction and analyze the accuracy and performance of our vision-based approach. Our findings demonstrate the potential of using 360° cameras for constructing high-quality models and facilitating efficient data collection for 3D reconstructions, opening up new possibilities for various consumer-oriented applications in multiple fields.
29

Wei, Cheng Li, Ang Cher Wee, Chan Wai Herng, and Ying Meng Fai. "Human Factors Evaluation of 360 Panoramic View Camera System for Ground Vehicle." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 51, no. 18 (October 2007): 1086–90. http://dx.doi.org/10.1177/154193120705101807.

Abstract:
The 360 Panoramic view camera System (360 PS) is a commercial-off-the-shelf camera technology that captures and presents the viewer with a 360-degree horizontal view around the camera. Due to its unique ability to monitor omni-directionally, there is potential for operator functions such as surveillance and monitoring, and for enhancing the situational awareness of crews operating vehicles with a restricted visual field of view. This man-in-the-loop evaluation studies the limitations and capabilities of the system and aims to enhance its performance for the proposed functions. Two sets of experiments were designed and carried out to study man-machine interface (MMI) issues. Subjective and objective data were collected, which allowed us to identify two preferred display modes out of the six factory modes. Graphical user interface (GUI) overlays were then designed for these two display modes.
30

Li, Yiran, Dong Zhao, Xueyi Ma, Jianzhong Zhang, and Jian Zhao. "Panoramic Digital Image Correlation for 360-Deg Full-Field Displacement Measurement." Applied Sciences 13, no. 3 (February 3, 2023): 2019. http://dx.doi.org/10.3390/app13032019.

Abstract:
In full-field 3D displacement measurement, stereo digital image correlation (Stereo-DIC) has strong capabilities. However, as a result of difficulties with stereo camera calibration and surface merging, 360-deg panoramic displacement measurements remain a challenge. This paper proposes a panoramic displacement field measurement method in order to accurately measure the shape and panoramic displacement field of complex shaped objects with natural textures. The proposed method is based on the robust subset-based DIC algorithm and the well-known Zhang’s calibration method to reconstruct the 3D shape and estimate the full-field displacements of a complex surface from multi-view stereo camera pairs. The method is used in the determination of the scale factor of the 3D reconstructed surface and the stitching of multiple 3D reconstructed surfaces with the aid of the laser point cloud data of the object under test. Based on a discussion of the challenges faced by panoramic DIC, this paper details the proposed solution and describes the specific algorithms implemented. The paper tests the performance of the proposed method using an experimental system with a 360-deg six camera setup. The system was evaluated by measuring the rigid body motion of a cylindrical log sample with known 3D point cloud data. The results confirm that the proposed method is able to accurately measure the panoramic shape and full-field displacement of objects with complex morphologies.
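One step mentioned in this abstract, recovering the metric scale of a reconstructed surface from reference laser point-cloud data, can be illustrated with the hedged sketch below: given a few corresponding points in both clouds, the scale is the ratio of their mean pairwise distances. This is a generic illustration, not the authors' algorithm, and the coordinates are made-up example values.

```python
import numpy as np
from itertools import combinations

# Corresponding points picked in the reconstructed model (arbitrary units)
# and in the reference laser point cloud (metres); values are illustrative only.
model_pts = np.array([[0.0, 0.0, 0.0], [1.2, 0.1, 0.0], [1.1, 1.3, 0.2], [0.1, 1.2, 0.1]])
laser_pts = np.array([[0.0, 0.0, 0.0], [0.60, 0.05, 0.0], [0.55, 0.65, 0.1], [0.05, 0.60, 0.05]])

def mean_pairwise_distance(pts: np.ndarray) -> float:
    pairs = combinations(range(len(pts)), 2)
    return float(np.mean([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs]))

scale = mean_pairwise_distance(laser_pts) / mean_pairwise_distance(model_pts)
scaled_model = model_pts * scale      # model brought to metric scale
print(f"scale factor (model units -> metres): {scale:.3f}")
```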
31

Su, Chen, Qing Zhong, Chao Yu, Haifeng Li, and Xu Liu. "13.2: 360° Multi-Faced Tracking and Interaction Using A Panoramic Camera." SID Symposium Digest of Technical Papers 46, no. 1 (June 2015): 151–54. http://dx.doi.org/10.1002/sdtp.10331.

32

Zahradník, David, and Jakub Vynikal. "Possible approaches for processing of spherical images using SfM." Stavební obzor - Civil Engineering Journal 32, no. 1 (April 30, 2023): 1–12. http://dx.doi.org/10.14311/cej.2023.01.0001.

Abstract:
Spherical cameras are being used more frequently in surveying because of their low cost and the possibility of processing spherical images with conventional SfM methods. Fish-eye lenses are now frequently found on modern 360° cameras, allowing the entire scene to be captured and the model to be subsequently reconstructed by a non-specialist in photogrammetry. Gyroscopes are a common camera feature that provides image stabilization to reduce blur when the camera is moving, which allows moving scenes to be captured in time-lapse mode. However, two main factors - hardware parameters and software algorithms - affect the quality of the captured images. The 360° camera design allows for a variety of data processing methods in SfM. The images are created using multiple image sensors and lenses in each 360° camera, and they can be processed individually or by applying rules that define relationships between the images. Alternatively, the images from the cameras can be stitched and processed with a spherical camera model. This paper proposes processing methods for data from 360° cameras and estimates the accuracy of each method.
33

Riyanto, Umbar, Nurdiana Handayani, and Mohammad Imam Shalahudin. "Implementasi Metode Perbandingan Eksponensial (MPE) Pada Sistem Pendukung Keputusuan Pemilihan Internet Protocol Camera." Jurnal Sistem Komputer dan Informatika (JSON) 4, no. 1 (September 30, 2022): 123. http://dx.doi.org/10.30865/json.v4i1.4875.

Abstract:
The development of video surveillance has given rise to various types of surveillance cameras, one of which is the Internet Protocol Camera (IP Camera). The number of IP Camera brands on the market means that people who want to buy an IP Camera have to gather information themselves about the specifications and capabilities of the camera to be purchased. Choosing an IP Camera therefore takes time and effort, because the candidates have to be studied one by one. This study aims to build a website-based decision support system for choosing an IP Camera using the Exponential Comparison Method (MPE) to make it easier to determine the right IP Camera. MPE can rank the priority of decision alternatives on the existing criteria and is able to clearly differentiate the values of the alternatives. Based on the case study, the best alternative is the Xiaomi Mi 360 with a value of 386, followed by the Yi Home Camera 3 with a value of 369, the Ezviz C6N with a value of 350, the Imilab EC4 with a value of 343, and the Cleverdog Egg Cam with a value of 110. The MPE calculation generated by the system shows the same values as the manual calculation, so the MPE calculation in the system is declared valid. In addition, the test results with black-box testing show that the system runs well.
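For readers unfamiliar with the Exponential Comparison Method (MPE) used here, the usual scoring rule sums, over all criteria, each alternative's rating raised to the power of the criterion weight. The sketch below applies that rule to hypothetical cameras, ratings, and weights; none of the numbers are taken from the study.

```python
# Exponential Comparison Method (MPE): score_i = sum_j rating_ij ** weight_j
criteria_weights = [3, 2, 2, 1]           # hypothetical importance weights per criterion
alternatives = {                          # hypothetical 1-5 ratings per criterion
    "Camera A": [4, 3, 5, 2],
    "Camera B": [5, 4, 3, 3],
    "Camera C": [3, 5, 4, 4],
}

def mpe_score(ratings, weights):
    return sum(r ** w for r, w in zip(ratings, weights))

ranking = sorted(alternatives.items(),
                 key=lambda item: mpe_score(item[1], criteria_weights),
                 reverse=True)
for name, ratings in ranking:
    print(f"{name}: {mpe_score(ratings, criteria_weights)}")
```

The exponentiation is what makes MPE separate alternatives sharply, which is the "contrast" property the abstract mentions.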
34

Barazzetti, Luigi, Mattia Previtali, and Marco Scaioni. "Procedures for Condition Mapping Using 360° Images." ISPRS International Journal of Geo-Information 9, no. 1 (January 7, 2020): 34. http://dx.doi.org/10.3390/ijgi9010034.

Abstract:
The identification of deterioration mechanisms and their monitoring over time is an essential phase for conservation. This work aimed at developing a novel approach for deterioration mapping and monitoring based on 360° images, which allows for simple and rapid data collection. The opportunity to capture the whole scene around a 360° camera reduces the number of images needed in a condition mapping project, resulting in a powerful solution to document small and narrow spaces. The paper will describe the implemented workflow for deterioration mapping based on 360° images, which highlights pathologies on surfaces and quantitatively measures their extension. Such a result will be available as standard outputs as well as an innovative virtual environment for immersive visualization. The case of multi-temporal data acquisition will be considered and discussed as well. Multiple 360° images acquired at different epochs from slightly different points are co-registered to obtain pixel-to-pixel correspondence, providing a solution to quantify and track deterioration effects.
35

Cha, Sangguk, Da-Yoon Nam, and Jong-Ki Han. "Hole Concealment Algorithm Using Camera Parameters in Stereo 360 Virtual Reality System." Applied Sciences 11, no. 5 (February 25, 2021): 2033. http://dx.doi.org/10.3390/app11052033.

36

Teppati Losè, L., F. Chiabrando, and A. Spanò. "PRELIMINARY EVALUATION OF A COMMERCIAL 360 MULTI-CAMERA RIG FOR PHOTOGRAMMETRIC PURPOSES." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 1113–20. http://dx.doi.org/10.5194/isprs-archives-xlii-2-1113-2018.

Abstract:
The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, both from an operative and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth, adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on the single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in this section).
37

Stouffer, Brooke A. "“More Fire Comin’ Up!”: The 360 Camera, Early Childhood, and Functioning Differently." Art Education 73, no. 6 (October 20, 2020): 12–17. http://dx.doi.org/10.1080/00043125.2020.1785797.

38

Sun, Hung, Tsun-Hung Tsai, and Ke Jiang. "Combining 360° Video and Camera Mapping for Virtual Reality: An Innovative Solution." Educational Innovations and Emerging Technologies 2, no. 2 (June 30, 2022): 39–45. http://dx.doi.org/10.35745/eiet2022v02.02.0004.

Abstract:
Changing the framework of traditional video with its limited viewing angles, 360° photos and videos provide an immersive viewing experience. 360° video is one of the main applications of virtual reality, whose most important feature is immersion and the feeling of being in another space. However, when viewing 360° videos with a head-mounted display, the viewer feels like a fixed rotatable camera: the viewer's movement does not change the viewing angle on the objects, which greatly reduces the spatial immersion required for virtual reality. Therefore, we propose a solution that maintains high-quality graphics and low hardware demands and supports 6DoF head-mounted displays. Through the camera-mapping function in 3D animation software, the 360° surround video is projected onto a 3D sphere to create a simple 3D object that corresponds to the shape of the object in the image. With pristine video quality and a realistic 3D spatial perspective, this provides better virtual reality immersion than plain 360° video and is not as complex as a full 3D virtual reality environment. In the future, 3D scanning and photogrammetry can be integrated to reconstruct a more easily applied 3D virtual reality environment.
39

Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "A polar model for fast object tracking in 360-degree camera images." Multimedia Tools and Applications 78, no. 7 (August 20, 2018): 9275–97. http://dx.doi.org/10.1007/s11042-018-6525-0.

40

Previtali, M., L. Barazzetti, F. Roncoroni, Y. Cao, and M. Scaioni. "360° IMAGE ORIENTATION AND RECONSTRUCTION WITH CAMERA POSITIONS CONSTRAINED BY GNSS MEASUREMENTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W1-2023 (May 25, 2023): 411–16. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-411-2023.

Abstract:
Photogrammetric applications using 360° images are becoming more and more popular in different fields, such as cultural heritage documentation of narrow spaces; civil, architectural, and environmental projects like tunnel surveying; mapping of urban city centres, etc. The popularity of 360° photogrammetry relates to the high productivity of the acquisition phase, giving the opportunity to capture the entire scene around the user in a relatively short time. On the other hand, the photogrammetric workflow needs ground control points (GCPs), well distributed over the survey area, to georeference the produced 3D data. Placing, measuring on-field, and identifying GCPs on images is time-consuming and sometimes not even feasible due to environmental conditions. While effective solutions exist for UAV-based projects, direct georeferencing and GNSS-assisted photogrammetry are still not fully exploited for ground-based acquisitions. This paper aims at presenting a solution coupling 360° images and high-precision GNSS systems for direct georeferencing of outdoor projects without the need for manually measuring GCPs. Three different acquisition modes for 360° images and GNSS data are presented, and orientation results are compared with manually measured check points.
41

Kreutzberg, Anette, and Emanuele Naboni. "360° VR for Qualifying Daylight Design." SHS Web of Conferences 64 (2019): 02015. http://dx.doi.org/10.1051/shsconf/20196402015.

Abstract:
This paper describes the initial findings in an ongoing project aimed at bridging the gap between quantitative daylight simulations and visually perceived daylight quality, using 360° rendered panoramas and animations displayed in virtual reality. A daylight studio equipped with a simple façade pattern for a simultaneous Thermal Delight study was used as case study and test room. The test room was recorded with a 360° camera in sequential image series on days with different weather conditions. The resulting 360° VR time-lapse recordings were proposed for visual diurnal daylight analysis as supplement to thermal measurements used for calibrating and varying the façade pattern on site and in a corresponding thermal simulation model. A comparative experiment was set up to calibrate the perceived visual qualities and ambiance of daylight in 360° photographic panoramas viewed in VR, compared to the perceived visual qualities and ambiance of the real world site. Subjective visual evaluations of the virtual as well as the real space were recorded based on 15 people answering to a questionnaire. Results from the comparative experiment indicate a variety in perception of daylight quality and ambiance but a rather uniform perception of daylight brightness in 360° photographs that can be transferred to 360° rendered panoramas.
42

Schäfer, Max B., Selina Eggstein, Kent W. Stewart, and Peter P. Pott. "360° Laparoscopic Imaging System to Facilitate Camera Control and Orientation in Minimally Invasive Surgery." Current Directions in Biomedical Engineering 8, no. 2 (August 1, 2022): 265–68. http://dx.doi.org/10.1515/cdbme-2022-1068.

Abstract:
During laparoscopic procedures, the surgeon's view of the situs, and thus her or his performance, is dependent on the skills of the camera assistant. The surgeon lacks control over his own field of view and there is a high potential for reducing mental load and workflow issues. In this paper, a research setup of a 360° laparoscopic imaging system is presented and evaluated. The system consists of a 360° camera and a head-mounted display, through which the surgeon can inspect the situs. In a user test, the system showed advantages over a conventional laparoscope regarding orientation in the situs, intuitiveness of operation, and faster task completion.
43

Barbosa, Amanda S., and Dayana B. Costa. "Use of BIM and visual data collected by UAS and 360° camera for construction progress monitoring." IOP Conference Series: Earth and Environmental Science 1101, no. 8 (November 1, 2022): 082007. http://dx.doi.org/10.1088/1755-1315/1101/8/082007.

Abstract:
Although progress monitoring is an essential practice for achieving success in construction, traditional monitoring methods based on manual information gathered through visual inspections are error-prone and depend on the experience of those who carry them out. Furthermore, most studies of progress monitoring using digital technologies focus on activities carried out outdoors, limiting the application of these methods on residential construction sites, which have several indoor activities. This study proposes a method for outdoor and indoor visual monitoring of construction progress using Building Information Modeling (BIM), a 360° camera, and photogrammetry aided by an Unmanned Aerial System (UAS). For this purpose, exploratory case studies were carried out. The first exploratory study aimed to understand the operationalization of data collection and processing using the proposed technologies. These technologies were then used and evaluated to monitor progress in a second exploratory case study, enabling the development of a proposed method for using visual data collected by UAS and a 360° camera integrated with BIM for progress monitoring. The status of the external area of the construction site was represented by point clouds generated from images collected by the UAS. For monitoring inside the buildings, a 360° camera attached to a safety helmet was used. The results include an evaluation of the use of a 360° camera to monitor the internal progress of works, presenting its strengths, limitations, and recommendations for use. In addition, the results include the proposal of a method for visual progress monitoring of indoor and outdoor activities using BIM, UAS, and 360° cameras.
44

Barazzetti, L., M. Previtali, F. Roncoroni, and R. Valente. "CONNECTING INSIDE AND OUTSIDE THROUGH 360° IMAGERY FOR CLOSE-RANGE PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W9 (January 31, 2019): 87–92. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w9-87-2019.

Abstract:
Metric documentation of buildings requires the connection of different spaces, such as rooms, corridors, floors, and interior and exterior spaces. Images and laser scans have to be oriented and registered to obtain accurate metric data about different areas and the related metric information (e.g., wall thickness). A robust registration can be obtained with total station measurements, especially when a geodetic network with multiple intersections on different station points is available. In the case of a photogrammetric project with several images acquired with a central perspective camera, the lack of total station measurements (i.e., control and check points) could result in a weak orientation for the limited overlap between images acquired through doors and windows. The procedure presented in this paper is based on 360° images acquired with an affordable digital camera (less than 350$). The large field of view of 360° images allows one to simultaneously capture different rooms as well as indoor and outdoor spaces, which will be visible in just a picture. This could provide a more robust orientation of multiple images acquired through narrow spaces. A combined bundle block adjustment that integrates central perspective and spherical images is here proposed and discussed. Additional considerations on the integration of fisheye images are discussed as well.
APA, Harvard, Vancouver, ISO, and other styles
45

Arkan, Fardhan, and Tri Hendrawan Budianto. "Rancang Bangun Sistem Informasi Wisata Kota Muntok Berbasis Android Dengan Teknologi Camera 360." Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering) 6, no. 2 (October 3, 2019): 90–96. http://dx.doi.org/10.33019/ecotipe.v6i2.1018.

Full text
Abstract:
Muntok City, the capital of West Bangka Regency (Kabupaten Bangka Barat), has leading tourist destinations thanks to its many attractions, ranging from natural sites such as beaches and hills to historical sites, including the guest house (pesanggrahan) where Ir. Soekarno and Moh. Hatta were exiled, as well as culinary, cultural, and other attractions. Promoting Muntok City's tourist destinations via the Internet is important for increasing tourism in the city and is expected to stimulate its economic growth. This research focuses on using the Internet and an Android application to build a tourism information system that introduces the tourist destinations of Muntok City and attracts tourists to visit. The tourism information system contains information on the city's tourist destinations so that people become interested in visiting and so that tourists are assisted during their visit. The system can be accessed via the Internet and opened on an Android smartphone, complemented by a Camera 360 view. Keywords: Tourism, Camera 360, Information System, Android
APA, Harvard, Vancouver, ISO, and other styles
46

Previtali, M., L. Barazzetti, and F. Roncoroni. "GNSS ASSISTED PHOTOGRAMMETRIC RECONSTRUCTION FROM COMBINED 360° VIDEOS AND UAV IMAGES." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W4-2024 (February 14, 2024): 365–72. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w4-2024-365-2024.

Full text
Abstract:
This paper introduces an integrated approach utilizing ground data, consisting of videos captured with a 360° (spherical) camera, and aerial data acquired with a UAV equipped with an RTK GNSS module to reconstruct a portion of a small-town city center and/or a cultural heritage site. Previous research has demonstrated that image blocks oriented with RTK data on camera positions can reach centimeter accuracies and can be efficiently used to reconstruct large areas and single monuments. However, some areas such as porches, narrow passages, and streets cannot be properly reconstructed from an aerial point of view. Conversely, ground-based 360° images offer detailed insights into the terrain and features that may be obscured from an aerial perspective. Integrating these two points of view can increase the spatial resolution and coverage of the 3D reconstruction: the UAV captures large-scale features and topography, while the ground-based 360° images focus on intricate details and ground-level characteristics. The GNSS data acquired by the UAV may also be exploited for GNSS-assisted image orientation, with the aim of reducing or, in specific situations, even avoiding the need for GCPs. The paper explores practical applications of such data integration in the cultural heritage domain, demonstrating the efficacy of the integrated approach in scenarios with complex architectures and inaccessible areas.
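One common way to exploit camera-position GNSS data without GCPs is to estimate a similarity (Helmert) transform from the arbitrarily scaled photogrammetric frame onto the RTK positions of the camera centers. The sketch below shows this generic step (Umeyama's closed-form solution) on synthetic data; it illustrates the principle only and is not the authors' pipeline.

```python
# Hedged illustration of GNSS-assisted georeferencing: a least-squares similarity
# (Helmert) transform maps camera centres from a local photogrammetric frame onto
# their RTK-GNSS positions, georeferencing the block without ground control points.
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale, rotation, translation mapping src points to dst (Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (src_c**2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Synthetic check: camera centres in a local frame vs. their GNSS coordinates
local = np.random.rand(20, 3)
gnss = 2.0 * local + np.array([100.0, 200.0, 50.0])  # known scale 2 and offset
s, R, t = similarity_transform(local, gnss)
print(s, t)  # should recover scale ~2.0 and translation ~[100, 200, 50]
```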
APA, Harvard, Vancouver, ISO, and other styles
47

Urminsky, Jan, Milan Marônek, Miroslav Jáňa, and Ladislav Morovič. "ANALYSIS OF WELD JOINT DEFORMATIONS BY OPTICAL 3D SCANNING." Acta Polytechnica 56, no. 1 (February 29, 2016): 76. http://dx.doi.org/10.14311/app.2016.56.0076.

Full text
Abstract:
This paper presents an analysis of weld joint deformation using optical 3D scanning. The weld joints of bimetals were made by explosion welding (EXW). GOM ATOS II TripleScan SO MV320 equipment (measuring volume 320 × 240 × 240 mm, 5.0 MPix camera resolution) and GOM ATOS I 350 equipment (measuring volume 250 × 200 × 200 mm, 0.8 MPix camera resolution) were used for the experimental deformation measurements of the weldments. The scanned samples were compared with reference specimens. The angular and transverse deformations were visualized by colour deviation maps. The maximum observed deformations of the weld joints ranged from −1.96 to +1.20 mm.
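The deviation maps described here boil down to point-to-reference distances. The following sketch approximates them with unsigned nearest-neighbour distances between a scanned point cloud and a reference cloud; production workflows such as the GOM software use signed, mesh-based deviations, and the synthetic "bent plate" data below is purely illustrative.

```python
# Simplified sketch of the scan-to-reference comparison behind a colour deviation
# map: each scanned point gets the distance to its nearest reference point.
import numpy as np
from scipy.spatial import cKDTree

def deviation_map(scanned, reference):
    """Nearest-neighbour distance from every scanned point to the reference cloud."""
    tree = cKDTree(reference)
    distances, _ = tree.query(scanned)
    return distances

# Synthetic example: a flat 100 x 100 mm reference plate vs. a slightly bent "weldment"
reference = np.c_[np.random.rand(5000, 2) * 100, np.zeros(5000)]
scanned = reference + np.c_[np.zeros((5000, 2)), 0.02 * reference[:, 0:1]]  # angular distortion
dev = deviation_map(scanned, reference)
print(f"max deviation: {dev.max():.2f} mm, mean: {dev.mean():.2f} mm")
```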
APA, Harvard, Vancouver, ISO, and other styles
48

Mi, Tzu-Wei, and Mau-Tsuen Yang. "Comparison of Tracking Techniques on 360-Degree Videos." Applied Sciences 9, no. 16 (August 14, 2019): 3336. http://dx.doi.org/10.3390/app9163336.

Full text
Abstract:
With the availability of 360-degree cameras, 360-degree videos have become popular recently. To attach a virtual tag to a physical object in 360-degree videos for augmented reality applications, automatic object tracking is required so the virtual tag can follow its corresponding physical object in 360-degree videos. Relative to ordinary videos, 360-degree videos in an equirectangular format have special characteristics such as viewpoint change, occlusion, deformation, lighting change, scale change, and camera shakiness. Tracking algorithms designed for ordinary videos may not work well on 360-degree videos. Therefore, we thoroughly evaluate the performance of eight modern trackers in terms of accuracy and speed on 360-degree videos. The pros and cons of these trackers on 360-degree videos are discussed. Possible improvements to adapt these trackers to 360-degree videos are also suggested. Finally, we provide a dataset containing nine 360-degree videos with ground truth of target positions as a benchmark for future research.
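One equirectangular-specific detail a benchmark like this has to handle is that a target can straddle the left/right image seam. The sketch below is an assumption of how such an evaluation could be written, not the paper's own code: it computes bounding-box IoU with horizontal wrap-around by rasterising possibly wrapping boxes.

```python
# Illustrative evaluation helper: bounding-box overlap on equirectangular frames
# must respect the horizontal wrap-around at the +/-180 degree seam. Boxes are
# (x, y, w, h) in pixels and may extend past the right border, wrapping to column 0.
import numpy as np

def box_mask(box, width, height):
    """Rasterise a possibly wrapping box into a boolean pixel mask."""
    x, y, w, h = box
    mask = np.zeros((height, width), dtype=bool)
    cols = (np.arange(x, x + w) % width).astype(int)
    rows = np.arange(max(y, 0), min(y + h, height)).astype(int)
    mask[np.ix_(rows, cols)] = True
    return mask

def wrapped_iou(box_a, box_b, width=3840, height=1920):
    """Intersection-over-union of two boxes with horizontal wrap-around."""
    a, b = box_mask(box_a, width, height), box_mask(box_b, width, height)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# A ground-truth box straddling the seam vs. a tracker prediction
print(wrapped_iou((3800, 900, 100, 80), (3820, 910, 100, 80)))
```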
APA, Harvard, Vancouver, ISO, and other styles
49

Lin, Bo-Hong, Vinay M. Shivanna, Jiun-Shiung Chen, and Jiun-In Guo. "360° Map Establishment and Real-Time Simultaneous Localization and Mapping Based on Equirectangular Projection for Autonomous Driving Vehicles." Sensors 23, no. 12 (June 14, 2023): 5560. http://dx.doi.org/10.3390/s23125560.

Full text
Abstract:
This paper proposes the design of a 360° map establishment and real-time simultaneous localization and mapping (SLAM) algorithm based on equirectangular projection. All equirectangular projection images with an aspect ratio of 2:1 are supported as input image types, allowing an unlimited number and arrangement of cameras. Firstly, the proposed system uses dual back-to-back fisheye cameras to capture 360° images and then applies a perspective transformation with any given yaw angle to shrink the feature-extraction area, reducing computational time while retaining the 360° field of view. Secondly, the oriented FAST and rotated BRIEF (ORB) feature points extracted from the perspective images with GPU acceleration are used for tracking, mapping, and camera pose estimation. The 360° binary map supports saving, loading, and online updating to enhance the flexibility, convenience, and stability of the system. The proposed system is implemented on an NVIDIA Jetson TX2 embedded platform with an accumulated RMS error of 1% over a 250 m trajectory. The average performance reaches 20 frames per second (FPS) with a single fisheye camera at a resolution of 1024 × 768, while the system simultaneously performs panoramic stitching and blending at a resolution of 1416 × 708 from a dual-fisheye camera.
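The perspective transformation referred to above can be sketched as sampling a pinhole view with a chosen yaw out of the 2:1 equirectangular frame. The snippet below assumes NumPy and OpenCV are available; the parameter names, axis conventions, and the "pano.jpg" file name are illustrative assumptions, not taken from the paper.

```python
# Sketch of perspective view extraction from an equirectangular image: a pinhole
# view with a given yaw is sampled out of the panorama, shrinking the area that
# feature extraction has to process while the full 360 degrees remain reachable
# by changing the yaw.
import numpy as np
import cv2

def equirect_to_perspective(equi, fov_deg=90.0, yaw_deg=0.0, out_size=(640, 480)):
    """Render a pinhole view with the given yaw from an equirectangular image."""
    W, H = out_size
    f = 0.5 * W / np.tan(np.radians(fov_deg) / 2)
    # Pixel grid of the output view, expressed as rays in camera coordinates
    u, v = np.meshgrid(np.arange(W) - W / 2, np.arange(H) - H / 2)
    dirs = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the rays by the requested yaw (around the vertical axis)
    yaw = np.radians(yaw_deg)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ R.T
    # Convert ray directions to equirectangular pixel coordinates
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1, 1))
    h, w = equi.shape[:2]
    map_x = (((lon / (2 * np.pi) + 0.5) * w) % w).astype(np.float32)  # wrap at the seam
    map_y = ((lat / np.pi + 0.5) * h).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical usage:
# view = equirect_to_perspective(cv2.imread("pano.jpg"), fov_deg=100, yaw_deg=45)
```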
APA, Harvard, Vancouver, ISO, and other styles
50

Sole, Aditya, Ivar Farup, and Shoji Tominaga. "Image based reflectance measurement based on camera spectral sensitivities." Electronic Imaging 2016, no. 9 (February 15, 2016): 1–8. http://dx.doi.org/10.2352/issn.2470-1173.2016.9.mmrma-360.

Full text
APA, Harvard, Vancouver, ISO, and other styles