Academic literature on the topic 'Images 360 degrés'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Images 360 degrés.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Images 360 degrés"
Hadi Ali, Israa, and Sarmad Salman. "360-Degree Panoramic Image Stitching for Un-ordered Images Based on Harris Corner Detection." Indian Journal of Science and Technology 12, no. 4 (January 1, 2019): 1–9. http://dx.doi.org/10.17485/ijst/2019/v12i4/140988.
Assens, Marc, Xavier Giro-i-Nieto, Kevin McGuinness, and Noel E. O’Connor. "Scanpath and saliency prediction on 360 degree images." Signal Processing: Image Communication 69 (November 2018): 8–14. http://dx.doi.org/10.1016/j.image.2018.06.006.
Barazzetti, L., M. Previtali, and F. Roncoroni. "CAN WE USE LOW-COST 360 DEGREE CAMERAS TO CREATE ACCURATE 3D MODELS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 69–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-69-2018.
Alves, Ricardo Martins, Luís Sousa, Aldric Trindade Negrier, João M. F. Rodrigues, Jânio Monteiro, Pedro J. S. Cardoso, Paulo Felisberto, and Paulo Bica. "Interactive 360 Degree Holographic Installation." International Journal of Creative Interfaces and Computer Graphics 8, no. 1 (January 2017): 20–38. http://dx.doi.org/10.4018/ijcicg.2017010102.
Banchi, Yoshihiro, Keisuke Yoshikawa, and Takashi Kawai. "Evaluating user experience of 180 and 360 degree images." Electronic Imaging 2020, no. 2 (January 26, 2020): 244–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-244.
Hussain, Abuelainin. "Interactive 360-Degree Virtual Reality into eLearning Content Design." International Journal of Innovative Technology and Exploring Engineering 10, no. 2 (December 10, 2020): 1–4. http://dx.doi.org/10.35940/ijitee.b8219.1210220.
Lee, Hyunchul, and Okkyung Choi. "An efficient parameter update method of 360-degree VR image model." International Journal of Engineering Business Management 11 (January 1, 2019): 184797901983599. http://dx.doi.org/10.1177/1847979019835993.
Jauhari, Jauhari. "SOLO-YOGYA INTO 360-DEGREE PHOTOGRAPHY." Capture: Jurnal Seni Media Rekam 13, no. 1 (December 13, 2021): 17–31. http://dx.doi.org/10.33153/capture.v13i1.3627.
Tsubaki, Ikuko, and Kazuo Sasaki. "An Interrupted Projection using Seam Carving for 360-degree Images." Electronic Imaging 2018, no. 2 (January 28, 2018): 414–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.2.vipc-414.
Banchi, Yoshihiro, and Takashi Kawai. "Evaluating user experience of different angle VR images." Electronic Imaging 2021, no. 2 (January 18, 2021): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-098.
Full textDissertations / Theses on the topic "Images 360 degrés"
Sendjasni, Abderrezzaq. "Objective and subjective quality assessment of 360-degree images." Electronic Thesis or Diss., Poitiers, 2023. http://www.theses.fr/2023POIT2251.
360-degree images, also known as omnidirectional images, are at the center of immersive media. As demand for such media grows, driven mainly by the interactive and immersive experience it offers, it is paramount to provide a good quality of experience (QoE). This QoE is significantly impacted by the quality of the content. Like any type of visual signal, 360-degree images go through a sequence of processes including encoding, transmission, decoding, and rendering. Each of these processes has the potential to introduce distortions into the content. To improve the QoE, image quality assessment (IQA) is one of the strategies to follow. This thesis addresses the quality evaluation of 360-degree images from both objective and subjective perspectives. Focusing on the influence of Head Mounted Displays (HMDs) on the perceived quality of 360-degree images, a psycho-visual study is designed and carried out using four different devices. For this purpose, a 360-degree image dataset is created and a panel of observers is involved. The impact of HMDs on the quality ratings is identified and highlighted as an important factor to consider when conducting subjective experiments for 360-degree images. From the objective perspective, we first comprehensively benchmark several convolutional neural network (CNN) models under various configurations. Then, the processing chain of CNN-based 360-IQA is improved at different scales, from input sampling and representation to aggregating quality scores. Based on the observations of the above studies as well as the benchmark, two CNN-based 360-IQA models are proposed to accurately predict the quality of 360-degree images. The observations and conclusions drawn from these contributions should provide insight for assessing the quality of 360-degree images.
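The viewport-based processing chain mentioned in this abstract can be illustrated with a short sketch: sample a few rectilinear viewports from the equirectangular input, score each with an image-quality CNN, and average the scores. This is only a minimal illustration of the general idea under stated assumptions, not the thesis's models; `score_viewport` is a hypothetical stand-in for any trained CNN quality predictor, and the sampling grid and field of view are arbitrary choices.

```python
import numpy as np

def sample_viewport(equirect, yaw_deg, pitch_deg, fov_deg=90.0, size=224):
    """Extract a rectilinear viewport from an equirectangular image (nearest-neighbour)."""
    H, W = equirect.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2)           # focal length in pixels
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    rays = np.stack([u - size / 2, v - size / 2, np.full_like(u, f, dtype=float)], -1)
    rays = rays / np.linalg.norm(rays, axis=-1, keepdims=True)
    p, y = np.radians(pitch_deg), np.radians(yaw_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(y), 0, np.sin(y)], [0, 1, 0], [-np.sin(y), 0, np.cos(y)]])
    rays = rays @ (Ry @ Rx).T                                   # look towards (yaw, pitch)
    lon = np.arctan2(rays[..., 0], rays[..., 2])                # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1, 1))               # [-pi/2, pi/2]
    px = np.round((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    py = np.round((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return equirect[py, px]

def predict_quality(equirect, score_viewport):
    """Aggregate per-viewport CNN scores into one quality value (plain average here)."""
    viewports = [sample_viewport(equirect, yaw, 0.0) for yaw in range(0, 360, 60)]
    return float(np.mean([score_viewport(vp) for vp in viewports]))
```

In practice the sampling pattern and the aggregation rule are exactly the design choices the thesis studies; the plain equatorial grid and mean used here are only placeholders.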
Mahmoudian Bidgoli, Navid. "Compression for interactive communications of visual contents." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S072.
Interactive images and videos have received increasing attention due to the interesting features they provide. With these contents, users can navigate within the content and explore the scene from the viewpoint they desire. The characteristics of these media make their compression very challenging. On the one hand, the data is captured at high resolution (very large) to deliver a real sense of immersion. On the other hand, the user requests only a small portion of the content during navigation. This requires two characteristics: efficient compression of the data by exploiting redundancies within the content (to lower the storage cost), and random-access ability to extract the part of the compressed stream requested by the user (to lower the transmission rate). Classical compression schemes cannot handle random accessibility because they use a fixed, pre-defined order of sources to capture redundancies. The purpose of this thesis is to provide new tools for interactive compression schemes of images. To that end, as the first contribution, we propose an evaluation framework by which different interactive image/video compression schemes can be compared. Moreover, earlier theoretical studies show that random accessibility can be achieved using incremental codes with the same transmission cost as non-interactive schemes and with reasonable storage overhead. Our second contribution is to build a generic coding scheme that can deal with various interactive media. Using this generic coder, we then propose compression tools for 360-degree images and 3D model texture maps with random-access ability to extract the requested part. We also propose new representations for these modalities. Finally, we study the effect of model selection on the compression rates of these interactive coders.
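The thesis builds dedicated incremental coders; as a much simpler illustration of the random-access requirement it addresses, the sketch below splits an equirectangular image into independently decodable JPEG tiles and decodes only the tile columns covering the requested viewing direction. The tile grid, the JPEG codec, and the function names are illustrative assumptions, not the thesis's scheme (OpenCV is assumed available).

```python
import cv2
import numpy as np

def encode_tiles(equirect, rows=4, cols=8, quality=90):
    """Encode an equirectangular image as a grid of independently decodable JPEG tiles."""
    H, W = equirect.shape[:2]
    th, tw = H // rows, W // cols
    tiles = {}
    for r in range(rows):
        for c in range(cols):
            patch = equirect[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            ok, buf = cv2.imencode(".jpg", patch, [cv2.IMWRITE_JPEG_QUALITY, quality])
            tiles[(r, c)] = buf                  # each tile is a self-contained bitstream
    return tiles

def decode_view(tiles, rows, cols, yaw_deg, fov_deg=90.0):
    """Decode only the tile columns whose longitude range overlaps the requested view."""
    deg_per_col = 360.0 / cols
    ang = lambda a, b: abs((a - b + 180.0) % 360.0 - 180.0)      # wrap-aware angular distance
    wanted = [c for c in range(cols)
              if ang((c + 0.5) * deg_per_col, yaw_deg) <= fov_deg / 2 + deg_per_col / 2]
    return {(r, c): cv2.imdecode(tiles[(r, c)], cv2.IMREAD_COLOR)
            for r in range(rows) for c in wanted}
```

Independent tiles pay a storage penalty for random access; the incremental codes studied in the thesis aim to keep the transmission cost of such partial requests close to that of a non-interactive coder.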
Dupont de Dinechin, Grégoire. "Towards comfortable virtual reality viewing of virtual environments created from photographs of the real world." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM049.
There are many applications for capturing and digitally recreating real-world people and places for virtual reality (VR), such as preserving and promoting cultural heritage sites, placing users face-to-face with faraway family and friends, and creating photorealistic replicas of specific locations for therapy and training. This is typically done by transforming sets of input images, i.e. photographs and videos, into immersive 360° scenes and interactive 3D objects. However, such image-based virtual environments are often flawed in ways that prevent them from providing users with a comfortable viewing experience. In particular, accurately recovering the scene's 3D geometry is a difficult task, causing many existing approaches to make approximations that are likely to cause discomfort, e.g. as the scene appears distorted or seems to move with the viewer during head motion. In the same way, existing solutions most often fail to render the scene's visual appearance accurately and comfortably. Standard 3D reconstruction pipelines thus commonly average out captured view-dependent effects such as specular reflections, whereas complex image-based rendering algorithms often fail to achieve VR-compatible framerates and are likely to cause distracting visual artifacts outside of a small range of head motion. Finally, further complications arise when the goal is to virtually recreate people, as inaccuracies in the appearance of the displayed 3D characters or unconvincing responsive behavior may be additional sources of unease. Therefore, in this thesis, we investigate the extent to which users can be made more comfortable when viewing digital replicas of the real world in VR, by enhancing, combining, and designing new solutions for creating virtual environments from input sets of photographs. We demonstrate and evaluate solutions for (1) providing motion parallax during the viewing of 360° images, using a VR interface for estimating depth information, (2) automatically generating responsive 3D virtual agents from 360° videos, by combining pre-trained deep learning networks, and (3) rendering captured view-dependent effects at high framerates in a game engine widely used for VR development, which we apply to digitally recreate a museum's mineralogy collection. We evaluate and discuss each approach by way of user studies, and make our codebase available as an open-source toolkit.
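As a small, hedged illustration of the first contribution above (adding motion parallax to 360° images once per-pixel depth is available), the sketch below back-projects an equirectangular depth map into a 3D point set that a renderer could then draw from slightly shifted head positions. The coordinate convention, sampling stride, and function name are assumptions for illustration, not the thesis's code.

```python
import numpy as np

def equirect_depth_to_points(depth, stride=4):
    """Back-project an equirectangular depth map (metres) to 3D points, y up, z forward.
    Sampling every `stride`-th pixel keeps the point set small for this sketch."""
    H, W = depth.shape
    v, u = np.mgrid[0:H:stride, 0:W:stride]
    lon = (u / (W - 1) - 0.5) * 2.0 * np.pi          # longitude in [-pi, pi]
    lat = (0.5 - v / (H - 1)) * np.pi                # latitude, +pi/2 at the top row
    d = depth[v, u]
    x = d * np.cos(lat) * np.sin(lon)
    y = d * np.sin(lat)
    z = d * np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

Rendering these points (or a mesh built from them) from a slightly translated camera is what produces the parallax that a flat 360° image cannot provide on its own.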
Yan, Ke-Sin (顏可欣). "Localization and Route Planning for the Pedestrian Using 360-degree Images." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/rwx9ud.
National Chung Cheng University, Graduate Institute of Electrical Engineering, ROC academic year 107 (2018–2019).
Over the last decades, with the development of mobile devices, smartphones and tablets have become an indispensable part of our lives. Navigation is one of the most frequently used functions on these devices. For example, Google Maps not only provides street-view services but also provides route information to guide people to their destinations. Accurate localization and navigation technologies are therefore essential. The global positioning system (GPS) is heavily used for outdoor localization; however, its performance is not always guaranteed, particularly in crowded urban areas and in bad weather, so accurate outdoor localization remains a challenge. Image-assisted GPS could be an effective solution, but it requires collecting large amounts of data for image matching, and the database usually contains tens of thousands of images, which significantly increases the computation cost. To tackle this problem, in this work, 360-degree images are used to match the query image taken by the user. Navigation systems are usually designed for vehicles; in addition to the localization technique, we therefore also develop a technique for pedestrian navigation. Considering the different properties of pedestrians and the external environment, the goal is to provide the most effective and suitable route planning for pedestrians. Moreover, conventional navigation systems do not present orientation information in a user-friendly way, so the proposed design gives pedestrians directions in a comprehensible manner and helps them reach their destination safely and comfortably.
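One building block implied by this abstract, turning a match in a 360-degree panorama into a direction for the pedestrian, reduces to simple arithmetic. The sketch below assumes the panorama covers exactly 360 degrees horizontally and that its first column points along a known heading; the function name is illustrative only.

```python
def bearing_from_column(x_pixel, panorama_width, heading_of_column0_deg=0.0):
    """Compass bearing of the scene content at a given pixel column of a 360-degree panorama."""
    return (heading_of_column0_deg + 360.0 * x_pixel / panorama_width) % 360.0

# Example: a landmark matched at column 2048 of an 8192-pixel-wide panorama whose first
# column faces north lies at a bearing of 90 degrees (due east) from the capture point.
assert bearing_from_column(2048, 8192) == 90.0
```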
Yu, Chao-Tseng (喻昭曾). "Miniature 360-Degree Viewable Image-Plane Disk-Type Multiplex Holographic System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/23957548767519247173.
Vanung University, Graduate Institute of Electrical and Information Engineering, ROC academic year 103 (2014–2015).
In this thesis, we set up a miniature holographic system, based on the 360-degree viewable image-plane disk-type multiplex holographic system, to study and produce 360-degree viewable floating 3D images. We replace some optical elements of the previous system, such as the laser source and lenses, and change the related parameters to shrink the system footprint. After rebuilding the system, we reduce it to roughly half the area of the original while still producing the desired hologram. Finally, the advantages and disadvantages of the imaging system are discussed.
Kuo, Shih-Fu (郭士輔). "Investigation of 2D image capture of 360-degree Viewable Image-plane Disk-type Multiplex Holography." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/29780617514599801424.
Vanung University, Graduate Institute of Engineering Technology, ROC academic year 98 (2009–2010).
The "360-degree viewable image-plane disk-type multiplex hologram" reconstructs a monochrome 3D dispersion image in the vertical direction under white-light LED illumination. Its image content comes from 3D software or from photographs taken with a small image-capture system. In this thesis, we design a larger image-capture stage to obtain images of human subjects and use them to make the 360-degree viewable image-plane disk-type multiplex hologram.
Wang, Jian-An (王健安). "Position Measurement Using Ultra-High Resolution 360-Degree Panoramic Images and Particle Swarm Optimization." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/t39n22.
National Taipei University of Technology, Graduate Institute of Civil and Disaster Prevention Engineering, ROC academic year 101 (2012–2013).
Recent digital cameras can capture images at resolutions of ten million pixels or more. Nevertheless, limited by the camera's angle of view and range, it is not always possible to capture everything in front of the eye in one picture. The GigaPan robotic arm, combined with any common digital camera, makes image capture at ultra-high resolution possible. Beyond capturing and recording images, this study applied the combination of GigaPan and the PSO algorithm to locating spatial coordinates. Three capturing points were randomly selected around a target, and their spatial coordinates were measured by GPS and used as control points. At each of these points, one set of 360-degree high-resolution panoramic images was captured. Because GigaPan captures and records full 360-degree panoramas, the angle between any two objects seen from a capturing point can be calculated from the pixel coordinates in the panoramic images. With these angles, virtual rays were simulated streaming from the three capturing points toward the target. Using the rapid search capability of the PSO algorithm and the principle of triangulation, ray directions between 0 and 360 degrees were examined at the three capturing points; as the virtual rays converged on the minimum intersection area, their intersection gave the spatial coordinates of the target. Besides locating the spatial coordinates of a target, this study also applied the same method to (1) locating the surroundings of a building and (2) observing the inclination angle of trees on a slope. The inclination angle was observed in images captured at different times to see whether it tended to increase, thus determining whether the slope was stable, and the angle was compared with results from LiDAR scanning. The comparison shows that reasonable results can be obtained using the combination of GigaPan and PSO, demonstrating the application potential of this method.
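The triangulation step described above can be sketched as a small particle-swarm search: given the capture-point coordinates and the bearing of the target measured from each panorama, find the planar point that minimizes its squared distance to the bearing rays. The local east/north coordinates, swarm parameters, and function names are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def dist_to_ray(p, origin, bearing_deg):
    """Perpendicular distance from point p to the ray leaving `origin` at a compass bearing."""
    d = np.array([np.sin(np.radians(bearing_deg)), np.cos(np.radians(bearing_deg))])  # east, north
    v = p - origin
    t = max(float(np.dot(v, d)), 0.0)            # the ray only extends forward
    return float(np.linalg.norm(v - t * d))

def pso_triangulate(origins, bearings_deg, n_particles=50, n_iter=200, seed=0):
    """Minimal particle swarm search for the point best explained by all bearing rays."""
    rng = np.random.default_rng(seed)
    lo, hi = origins.min(axis=0) - 200.0, origins.max(axis=0) + 200.0   # search box (metres)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    cost = lambda p: sum(dist_to_ray(p, o, b) ** 2 for o, b in zip(origins, bearings_deg))
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest   # estimated (east, north) coordinates of the target
```

A closed-form least-squares ray intersection would also work here; PSO is used only to mirror the search strategy named in the abstract.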
See, Zi Siang. "Creating high dynamic range spherical panorama images for high fidelity 360 degree virtual reality." Thesis, 2020. http://hdl.handle.net/1959.13/1411181.
This research explores the development of a novel method and apparatus for creating spherical panoramas enhanced with high dynamic range (HDR) for high-fidelity Virtual Reality 360-degree (VR360) user experiences. The original contribution to knowledge which this study seeks to make is a new application of human-computer interaction techniques, applied in order to gauge and understand how the user experience of interactive panorama images can be improved with the aim of increasing fidelity, that is, high-definition visual similarity and clarity close to the original scene depicted. In this context, the term 'high fidelity' refers to the aim of producing detailed and accurate HDR spherical panorama images which resemble the original scenes captured sufficiently to afford users a satisfactory and compelling VR360 user experience. A VR360 interactive panorama presentation using spherical panoramas can provide virtual interactivity and wider viewing coverage; with three degrees of freedom, users can look around in multiple directions within the VR360 experience, gaining the sense of being in control of their own engagement. This freedom is facilitated by the use of mobile displays or head-mounted devices. However, in terms of image reproduction, the exposure range can be a major difficulty in reproducing a high-contrast real-world scene. Imaging variables caused by difficulties and obstacles can occur during the production of HDR spherical panoramas, which may result in inaccurate image reproduction for location-based subjects and, in turn, a poor VR360 user experience. Such problems include, but are not limited to, parallax error, nadir angle difficulty, HDR ghosting, insufficient dynamic range, and poor luminance preservation. In contrast, this study presents an HDR spherical panorama reproduction approach which can shorten the production process, reduce imaging variables, and keep technical issues to a minimum, leading to improved photographic image reproduction with fewer visual abnormalities in VR360 experiences. A user study has been conducted; it shows that the novel approach creates images which viewers prefer, on the whole, to those created using more complicated HDR methods, or to those created without the use of HDR at all. In an ideal situation for VR360 reproduction, the proposed solution and imaging workflow allow multi-angle acquisition to be accomplished in less than a minute. The thesis comprises this critical exegesis of the use-case study and practice-based research project, with a creative component comprising a unique set of VR360s presented using the proposed method and apparatus. I hope that the thesis will be of use to future scholars and practitioners, and to the general viewer as well.
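The exposure-bracketing step at the core of such a workflow can be illustrated with standard OpenCV calls: align a bracketed stack, then fuse it with Mertens exposure fusion, which needs no camera-response calibration and is relatively robust to the ghosting problems mentioned above. This is a generic sketch of one possible HDR merge per shooting direction, not the thesis's proposed method or apparatus.

```python
import cv2
import numpy as np

def fuse_exposure_bracket(image_paths):
    """Fuse a bracketed exposure stack (same viewpoint, varying shutter speed) into a single
    tone-compressed frame using Mertens exposure fusion."""
    stack = [cv2.imread(p) for p in image_paths]
    cv2.createAlignMTB().process(stack, stack)          # median-threshold alignment reduces ghosting
    fused = cv2.createMergeMertens().process(stack)     # float output, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# One fused frame per shooting direction can then be stitched into the spherical panorama.
```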
Chan, Shih-Hao (詹世豪). "Investigation of 360-degree Viewable Image-plane Disk-type Multiplex Full-Color Holography." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95782559259034501076.
Vanung University, Graduate Institute of Engineering Technology, ROC academic year 98 (2009–2010).
The "360-degree viewable image-plane disk-type multiplex hologram" reconstructs a monochrome 3D dispersion image in the vertical direction under white-light LED illumination. In this thesis, we propose to find the relative factor of the reference light and the object light that gives a suitable fringe density on the holographic film, so that a full-color 3D image can be synthesized. Finally, we design a semi-automatic system for making the full-color 3D image once the synthesis factor is obtained.
Cheng, I-Chen (鄭亦真). "Effect of A 360 Degrees Panoramic Image System (360° PIS) on the Environment Recognition of Students with Moderate and Severe Mental Retardation in Special Education School." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/81859393247682166659.
National Taiwan Normal University, Department of Special Education, In-service Master's Program, ROC academic year 91 (2002–2003).
The study adopts a system designed by the researcher to teach students with moderate and severe mental retardation at the vocational senior high school level to recognize their environment. The purpose of the study is to explore: after instruction with the 360-degree panoramic image system (360° PIS), how well can the students operate the 360° PIS? Can students use the 360° PIS to recognize the environment? Can students use the 360° PIS to improve their ability to move about in the environment? The study adopts a multiple-probe-across-subjects experimental design. The participants are four students with moderate and severe mental retardation from the vocational senior high school department of special schools. The independent variable is the teaching system "The Environment Introduction of Yangming Park by 360° PIS." The dependent variable is the percentage of correct responses in each experimental test. Each student goes through three experimental phases: baseline, intervention, and generalization. The results are as follows: 1. After teaching, students with moderate and severe mental retardation could operate the 360° PIS and reach a proficient level of learning. 2. Students with moderate and severe mental retardation could recognize the environment on the 360° PIS: they could identify the panorama, name the scenes, and locate the related passages on the panoramic image. Furthermore, they could transfer these results from the virtual to the real environment, naming scenes and finding the locations of passages. 3. Students with moderate and severe mental retardation could walk through two trails independently on the 360° PIS and generalize these results from the virtual environment to a not-yet-experienced environment, walking through the two trails.
Books on the topic "Images 360 degrés"
360° imaging: The photographer's panoramic virtual reality manual. Crans-Près-Céligny: RotoVision, 2003.
360 Degree Spherical Video: The complete guide to 360-Degree video. Grey Goose Graphics LLC, 2016.
Degas: His Life and Works in 500 Images: An Illustrated Exploration of the Artist, His Life and Context, with a Gallery of 300 of His Finest Paintings and Sculptures. Lorenz Books, 2012.
Book chapters on the topic "Images 360 degrés"
Iorns, Thomas, and Taehyun Rhee. "Real-Time Image Based Lighting for 360-Degree Panoramic Video." In Image and Video Technology – PSIVT 2015 Workshops, 139–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30285-0_12.
Ahmmed, Ashek, and Manoranjan Paul. "Discrete Cosine Basis Oriented Homogeneous Motion Discovery for 360-Degree Video Coding." In Image and Video Technology, 106–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34879-3_9.
Raj, Surya, and Ansuman Mahapatra. "Optimizing Deep Neural Network for Viewpoint Detection in 360-Degree Images." In Lecture Notes in Networks and Systems, 491–500. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4863-3_49.
Saurav, Sumeet, T. N. D. Madhu Kiran, B. Sravan Kumar Reddy, K. Sanjay Srivastav, Sanjay Singh, and Ravi Saini. "Dynamic Image Networks for Human Fall Detection in 360-degree Videos." In Communications in Computer and Information Science, 65–78. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1387-9_6.
Han, Byeong-Ju, and Jae-Young Sim. "Zero-Shot Learning for Reflection Removal of Single 360-Degree Image." In Lecture Notes in Computer Science, 533–48. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19800-7_31.
Drisya, S. S., Ansuman Mahapatra, and S. Priyadharshini. "360-Degree Image Classification and Viewport Prediction Using Deep Neural Networks." In Lecture Notes in Networks and Systems, 483–92. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-4807-6_46.
Barmpoutis, Panagiotis, and Tania Stathaki. "A Novel Framework for Early Fire Detection Using Terrestrial and Aerial 360-Degree Images." In Advanced Concepts for Intelligent Vision Systems, 63–74. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40605-9_6.
Dhiraj, Raunak Manekar, Sumeet Saurav, Somsukla Maiti, Sanjay Singh, Santanu Chaudhury, Neeraj, Ravi Kumar, and Kamal Chaudhary. "Activity Recognition for Indoor Fall Detection in 360-Degree Videos Using Deep Learning Techniques." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, 417–29. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_33.
Gorin, Valérie. "From Empathy to Shame: The Use of Virtual Reality by Humanitarian Organisations." In Making Humanitarian Crises, 147–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00824-5_7.
Hogan, Ciarán, and Ganesh Sistu. "Automatic Vehicle Ego Body Extraction for Reducing False Detections in Automated Driving Applications." In Communications in Computer and Information Science, 264–75. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_21.
Full textConference papers on the topic "Images 360 degrés"
Fu, Jianglin, Saeed Ranjbar Alvar, Ivan Bajic, and Rodney Vaughan. "FDDB-360: Face Detection in 360-Degree Fisheye Images." In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2019. http://dx.doi.org/10.1109/mipr.2019.00011.
Patterson, Dale. "360 Degree photographic imagery for VR." In ACSW 2018: Australasian Computer Science Week 2018. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3167918.3167955.
Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "Unknown object tracking in 360-degree camera images." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899897.
Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "Polar Object Tracking in 360-Degree Camera Images." In 2016 IEEE International Symposium on Multimedia (ISM). IEEE, 2016. http://dx.doi.org/10.1109/ism.2016.0077.
Full text"360 degrees-viewable display of 3D solid images." In SIGGRAPH07: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2007. http://dx.doi.org/10.1145/1280720.1280839.
Molnar, Fanni, and Andras Kovacs. "360-Degree image stitching with GPU support." In 2016 IEEE 20th Jubilee International Conference on Intelligent Engineering Systems (INES). IEEE, 2016. http://dx.doi.org/10.1109/ines.2016.7555130.
Wegner, Krzysztof, Olgierd Stankiewicz, Tomasz Grajek, and Marek Domanski. "Depth Estimation from Stereoscopic 360-Degree Video." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451452.
Bidgoli, Navid Mahmoudian, Thomas Maugey, and Aline Roumy. "Intra-coding of 360-degree images on the sphere." In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954538.
Chen, Xinlang, Pan Gao, and Ran Wei. "Visual Saliency Prediction on 360 Degree Images With CNN." In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2021. http://dx.doi.org/10.1109/icmew53276.2021.9456008.
De Simone, Francesca, Roberto G. de A. Azevedo, Sohyeong Kim, and Pascal Frossard. "Graph-Based Detection of Seams In 360-Degree Images." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803578.