Academic literature on the topic 'Images 360 degrés'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Images 360 degrés.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Images 360 degrés"

1

Hadi Ali, Israa, and Sarmad Salman. "360-Degree Panoramic Image Stitching for Un-ordered Images Based on Harris Corner Detection." Indian Journal of Science and Technology 12, no. 4 (January 1, 2019): 1–9. http://dx.doi.org/10.17485/ijst/2019/v12i4/140988.
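The stitching pipeline named in this title builds on the Harris corner measure; as a pointer for readers, here is a minimal sketch of the response function that detector computes (variable names and the constant k are the textbook convention, not taken from the paper):

```python
def harris_response(sxx, syy, sxy, k=0.04):
    """Harris corner measure from the windowed structure tensor
    M = [[sxx, sxy], [sxy, syy]] built from image-gradient products:
    R = det(M) - k * trace(M)^2. R is large and positive at corners,
    negative along edges, and near zero in flat regions."""
    det_m = sxx * syy - sxy * sxy
    trace_m = sxx + syy
    return det_m - k * trace_m * trace_m

# Strong gradients in both directions (corner-like) vs. one direction (edge-like):
corner = harris_response(10.0, 10.0, 0.0)  # positive
edge = harris_response(10.0, 0.0, 0.0)     # negative
```

Matched corners found this way across overlapping shots are what drive the un-ordered image alignment.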

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Assens, Marc, Xavier Giro-i-Nieto, Kevin McGuinness, and Noel E. O’Connor. "Scanpath and saliency prediction on 360 degree images." Signal Processing: Image Communication 69 (November 2018): 8–14. http://dx.doi.org/10.1016/j.image.2018.06.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Barazzetti, L., M. Previtali, and F. Roncoroni. "CAN WE USE LOW-COST 360 DEGREE CAMERAS TO CREATE ACCURATE 3D MODELS?" ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (May 30, 2018): 69–75. http://dx.doi.org/10.5194/isprs-archives-xlii-2-69-2018.

Full text
Abstract:
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
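The image-orientation and orthophoto steps this abstract mentions rest on the standard mapping between equirectangular pixels and directions on the viewing sphere; a minimal sketch of that conversion (the function name and axis conventions are our own, not the paper's):

```python
import math

def equirect_to_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular 360-degree image to a
    unit direction on the viewing sphere: columns span longitude
    [-pi, pi), rows span latitude [pi/2, -pi/2] top to bottom."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

# The image centre looks straight down the +z axis:
x, y, z = equirect_to_ray(2048, 1024, 4096, 2048)
```

Bundle adjustment on 360° images optimizes camera poses against rays obtained exactly like this, rather than against a central-perspective projection.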
APA, Harvard, Vancouver, ISO, and other styles
4

Alves, Ricardo Martins, Luís Sousa, Aldric Trindade Negrier, João M. F. Rodrigues, Jânio Monteiro, Pedro J. S. Cardoso, Paulo Felisberto, and Paulo Bica. "Interactive 360 Degree Holographic Installation." International Journal of Creative Interfaces and Computer Graphics 8, no. 1 (January 2017): 20–38. http://dx.doi.org/10.4018/ijcicg.2017010102.

Full text
Abstract:
With new marketing strategies and technologies, new demands arise, and the standard public relations or sales approach is no longer enough: customers tend to have higher standards, while companies try to capture their attention with creative content and ideas. For this purpose, this article describes how an interactive holographic installation was developed, using holographic technology to attract the attention of potential clients by acting as a host or presenting a product that advertises the company. The installation consists of a 360 degree (8 view) holographic avatar or object and, optionally, a screen on which a set of menus with videos, images, and textual content is presented. It uses several Microsoft Kinect sensors to enable user (and bystander) tracking and natural interaction around the installation through gestures and speech, while building statistics on the visualized content. All those statistics can be analyzed on the fly by the company to gauge the success of the event.
APA, Harvard, Vancouver, ISO, and other styles
5

Banchi, Yoshihiro, Keisuke Yoshikawa, and Takashi Kawai. "Evaluating user experience of 180 and 360 degree images." Electronic Imaging 2020, no. 2 (January 26, 2020): 244–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-244.

Full text
Abstract:
This paper describes a comparison of user experience across virtual reality (VR) image formats. The authors prepared the following four conditions and evaluated the user experience while viewing VR images with a headset, measuring subjective and objective indices; Condition 1: monoscopic 180-degree image, Condition 2: stereoscopic 180-degree image, Condition 3: monoscopic 360-degree image, Condition 4: stereoscopic 360-degree image. From the results of the subjective indices (reality, presence, and depth sensation), condition 4 was rated highest, and conditions 2 and 3 were rated similarly. In addition, the results of the objective indices (eye and head tracking) showed a tendency toward suppressed head movement for 180-degree images.
APA, Harvard, Vancouver, ISO, and other styles
6

Hussain, Abuelainin. "Interactive 360-Degree Virtual Reality into eLearning Content Design." International Journal of Innovative Technology and Exploring Engineering 10, no. 2 (December 10, 2020): 1–4. http://dx.doi.org/10.35940/ijitee.b8219.1210220.

Full text
Abstract:
The techniques and methods essential to creating 2D and 3D virtual reality images that can be displayed on multimedia devices are the main focus of the study. Devices such as desktops, laptops, tablets, smartphones, and other multimedia hardware that display such content are the primary concern. Such devices communicate the content through videos, images, or sound that are realistic and useful to the user, and the content can be captured from different locations in virtual imaginary sites through the above-named electronic devices. These are beneficial e-learning instructional techniques for students, especially in higher education [1]. Considering architecture students, who rely on such images to develop the simple designs expected in real construction, 360-degree imaging has to be considered in e-learning for its benefits. The primary platforms through which the content can be delivered as virtual reality include YouTube and Facebook, both of which can display 360-degree virtual environment content. Through this, learners will interact with virtual reality in such setups, thus enhancing their studies.
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, Hyunchul, and Okkyung Choi. "An efficient parameter update method of 360-degree VR image model." International Journal of Engineering Business Management 11 (January 1, 2019): 184797901983599. http://dx.doi.org/10.1177/1847979019835993.

Full text
Abstract:
Recently, with rapid growth in manufacturing and improved user convenience, technologies utilizing virtual reality images have been increasing. It is very important to estimate the projected direction and position of the image so as to show image quality similar to the real world, and the estimation of direction and position is solved using the relation that transforms the sphere into the expanded equirectangular projection. This transformation can be divided into camera intrinsic parameters and camera extrinsic parameters, and every image has its own camera parameters. Also, if several images were taken with the same camera, their camera intrinsic parameters will share the same values. However, setting the camera intrinsic parameters to the same value for all images is not the best way to match images. To solve these problems and show images without a sense of heterogeneity, it is necessary to build a cost function by modeling the conversion relation and to calculate the camera parameters that minimize the residual. In this article, we compare and analyze efficient camera parameter update methods. For the comparative analysis, we use Levenberg–Marquardt, a parameter optimization algorithm using corresponding points, and propose an efficient camera parameter update method based on the analysis results.
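The parameter-update scheme the abstract refers to, Levenberg–Marquardt, is a damped least-squares iteration; a toy scalar-parameter sketch of the idea (a generic illustration of the algorithm, not the authors' implementation):

```python
def lm_fit_scalar(residual, jac, p, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop for a single scalar parameter.
    residual(p) returns the list of residuals r_i(p); jac(p) returns
    the list of derivatives d r_i / d p. The damping factor lam blends
    between Gauss-Newton (small lam) and gradient descent (large lam)."""
    cost = sum(r * r for r in residual(p))
    for _ in range(iters):
        r = residual(p)
        j = jac(p)
        jtj = sum(x * x for x in j)
        jtr = sum(x * y for x, y in zip(j, r))
        step = -jtr / (jtj * (1.0 + lam))   # damped normal-equation solve
        p_new = p + step
        cost_new = sum(x * x for x in residual(p_new))
        if cost_new < cost:                  # accept step: relax the damping
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:                                # reject step: damp more strongly
            lam *= 2.0
    return p

# Toy fit: recover the slope of y = 2x from noiseless samples.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * x for x in xs]
slope = lm_fit_scalar(
    lambda p: [p * x - y for x, y in zip(xs, ys)],
    lambda p: xs,
    p=0.0,
)
```

In the paper's setting, the parameter vector would hold the camera intrinsics/extrinsics and the residuals would come from the corresponding points under the sphere-to-equirectangular model.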
APA, Harvard, Vancouver, ISO, and other styles
8

Jauhari, Jauhari. "SOLO-YOGYA INTO 360-DEGREE PHOTOGRAPHY." Capture : Jurnal Seni Media Rekam 13, no. 1 (December 13, 2021): 17–31. http://dx.doi.org/10.33153/capture.v13i1.3627.

Full text
Abstract:
Currently, technological developments have made it possible for photographic works to be present not only in the form of a flat 180 degree two-dimensional panorama, but even to present reality from a 360 degree perspective. This research on the creation of photographic works aims to optimize photographic equipment for shooting with a 360 degree perspective. Even though there are many 360 degree applications for smartphones, using a DSLR camera to create works with a 360 degree perspective has the advantage that the result can be printed in large sizes at high resolution without visible pixelation. The method of creating this work is based on an experimental process of extending DSLR camera equipment. This 360 degree photography technique uses a 'panning-sequence' approach with 'continuous exposure', which allows the images captured by the camera to be combined into one panoramic image. In addition to producing an important and interesting visual appearance, the 360 degree perspective in this work can also bring new nuances to the art of photography.
APA, Harvard, Vancouver, ISO, and other styles
9

Tsubaki, Ikuko, and Kazuo Sasaki. "An Interrupted Projection using Seam Carving for 360-degree Images." Electronic Imaging 2018, no. 2 (January 28, 2018): 414–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.2.vipc-414.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Banchi, Yoshihiro, and Takashi Kawai. "Evaluating user experience of different angle VR images." Electronic Imaging 2021, no. 2 (January 18, 2021): 98–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-098.

Full text
Abstract:
This paper describes a comparison of user experience across virtual reality (VR) image angles. Seven angle conditions were prepared, every 30 degrees from 180 to 360 degrees, and the user experience while viewing VR images with a headset was evaluated by measuring subjective and objective indices. From the results of the subjective indices (reality, presence, and depth sensation), the 360-degree image was rated highest, and evaluations differed between 240 and 270 degrees. In addition, the results of the objective indices (eye and head tracking) showed a tendency for eye and head movement to spread as the image angle increases.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Images 360 degrés"

1

Sendjasni, Abderrezzaq. "Objective and subjective quality assessment of 360-degree images." Electronic Thesis or Diss., Poitiers, 2023. http://www.theses.fr/2023POIT2251.

Full text
Abstract:
Les images à 360 degrés, aussi appelées images omnidirectionnelles, sont au cœur des contenus immersifs. Avec l’augmentation de leur utilisation notamment grâce à l’expérience interactive et immersive qu’ils offrent, il est primordial de garantir une bonne qualité d’expérience (QoE). Cette dernière est considérablement impactée par la qualité du contenu lui-même. En l’occurrence, les images à 360 degrés, comme tout type de signal visuel, passent par une séquence de processus comprenant l’encodage, la transmission, le décodage et le rendu. Chacun de ces processus est susceptible d’introduire des distorsions dans le contenu. Pour améliorer la qualité d’expérience, toutes ces dégradations potentielles doivent être soigneusement prises en compte et réduites à un niveau imperceptible. Pour atteindre cet objectif, l’évaluation de la qualité de l’image est l’une des stratégies devant être utilisée. Cette thèse aborde l’évaluation de la qualité des images à 360 degrés des points de vue objectif et subjectif. Ainsi, en s’intéressant à l’effet des visiocasques sur la qualité perçue des images 360 degrés, une étude psycho-visuelle est conçue et réalisée en utilisant quatre dispositifs différents. À cette fin, une base de données a été créé et un panel d’observateurs a été impliqué. L’impact des visiocasques sur la qualité a été identifié et mis en évidence comme un facteur important à prendre en compte lors de la réalisation d’expériences subjectives pour des images à 360 degrés. D’un point de vue objectif, nous avons d’abord procédé à une étude comparative extensive de plusieurs modèles de réseaux de neurones convolutifs (CNN) sous diverses configurations. Ensuite, nous avons amélioré la chaîne de traitement de l’évaluation de la qualité basée sur les CNN à différentes échelles, de l’échantillonnage et de la représentation des entrées à l’agrégation des scores de qualité. 
En se basant sur les résultats de ces études, et de l’analyse comparative, deux modèles de qualité basés sur les CNN sont proposés pour prédire avec précision la qualité des images à 360 degrés. Les observations et les conclusions obtenues à partir des différentes contributions de cette thèse apporteront un éclairage sur l’évaluation de la qualité des images à 360 degrés
360-degree images, a.k.a. omnidirectional images, are at the center of immersive media. With the increase in demand for the latter, mainly thanks to the interactive and immersive experience they offer, it is paramount to provide good quality of experience (QoE). This QoE is significantly impacted by the quality of the content. Like any type of visual signal, 360-degree images go through a sequence of processes including encoding, transmission, decoding, and rendering. Each of these processes has the potential to introduce distortions to the content. To improve the QoE, image quality assessment (IQA) is one of the strategies to be followed. This thesis addresses the quality evaluation of 360-degree images from the objective and subjective perspectives. By focusing on the influence of Head Mounted Displays (HMDs) on the perceived quality of 360-degree images, a psycho-visual study is designed and carried out using four different devices. For this purpose, a 360-degree image dataset is created and a panel of observers is involved. The impact of HMDs on the quality ratings is identified and highlighted as an important factor to consider when conducting subjective experiments for 360-degree images. From the objective perspective, we first comprehensively benchmarked several convolutional neural network (CNN) models under various configurations. Then, the processing chain of CNN-based 360-IQA is improved at different scales, from input sampling and representation to aggregating quality scores. Based on the observations of the above studies as well as the benchmark, two CNN-based 360-IQA models are proposed to accurately predict the quality of 360-degree images. The observations and conclusions obtained from the various contributions shall bring insights for assessing the quality of 360-degree images.
APA, Harvard, Vancouver, ISO, and other styles
2

Mahmoudian Bigdoli, Navid. "Compression for interactive communications of visual contents." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S072.

Full text
Abstract:
Les images et vidéos interactives ont récemment vu croître leur popularité. En effet, avec ce type de contenu, les utilisateurs peuvent naviguer dans la scène et changer librement de point de vue. Les caractéristiques de ces supports posent de nouveaux défis pour la compression. D'une part, les données sont capturées en très haute résolution pour obtenir un réel sentiment d'immersion. D'autre part, seule une petite partie du contenu est visualisée par l'utilisateur lors de sa navigation. Cela induit deux caractéristiques : une compression efficace des données en exploitant les redondances au sein du contenu (pour réduire les coûts de stockage) et une compression avec accès aléatoire pour extraire la partie du flux compressé demandée par l'utilisateur (pour réduire le débit de transmission). Les schémas classiques de compression ne peuvent gérer de manière optimale l’accès aléatoire, car ils utilisent un ordre de traitement des données fixe et prédéfini qui ne peut s'adapter à la navigation de l'utilisateur. Le but de cette thèse est de fournir de nouveaux outils pour les schémas interactifs de compression d’images. Pour cela, comme première contribution, nous proposons un cadre d’évaluation permettant de comparer différents schémas interactifs de compression d'image / vidéo. En outre, des études théoriques antérieures ont montré que l’accès aléatoire peut être obtenu à l’aide de codes incrémentaux présentant le même coût de transmission que les schémas non interactifs au prix d'une faible augmentation du coût de stockage. Notre deuxième contribution consiste à créer un schéma de codage générique pouvant s'appliquer à divers supports interactifs. À l'aide de ce codeur générique, nous proposons ensuite des outils de compression pour deux modalités d'images interactives : les images omnidirectionnelles (360 degrés) et les cartes de texture de modèle 3D. Nous proposons également de nouvelles représentations de ces modalités. 
Enfin, nous étudions l’effet de la sélection du modèle sur les taux de compression de ces codeurs interactifs
Interactive images and videos have received increasing attention due to the interesting features they provide. With these contents, users can navigate within the content and explore the scene from whatever viewpoint they desire. The characteristics of these media make their compression very challenging. On the one hand, the data is captured in high resolution (very large) to provide a real sense of immersion. On the other hand, the user requests only a small portion of the content during navigation. This requires two characteristics: efficient compression of data by exploiting redundancies within the content (to lower the storage cost), and random access ability to extract the part of the compressed stream requested by the user (to lower the transmission rate). Classical compression schemes cannot handle random accessibility because they use a fixed, pre-defined order of sources to capture redundancies. The purpose of this thesis is to provide new tools for interactive compression schemes of images. For that, as the first contribution, we propose an evaluation framework by which we can compare different interactive image/video compression schemes. Moreover, prior theoretical studies show that random accessibility can be achieved using incremental codes with the same transmission cost as non-interactive schemes and with reasonable storage overhead. Our second contribution is to build a generic coding scheme that can deal with various interactive media. Using this generic coder, we then propose compression tools for 360-degree images and 3D model texture maps with the random access ability to extract the requested part. We also propose new representations for these modalities. Finally, we study the effect of model selection on the compression rates of these interactive coders.
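The random-access property discussed in this abstract is commonly realized by coding the 360-degree image as independently decodable tiles and extracting only those that cover the requested viewport; a toy sketch of the tile-selection step (the vertical-band tiling scheme is our illustration, not the thesis's coder):

```python
def tiles_for_viewport(start_deg, width_deg, n_tiles):
    """Indices of the vertical tile bands (each 360/n_tiles degrees
    wide) that a viewport [start_deg, start_deg + width_deg] overlaps,
    with wrap-around at 360 degrees."""
    span = 360.0 / n_tiles
    first = int((start_deg % 360.0) // span)
    # Number of tile boundaries the viewport crosses from its start.
    crossed = int((start_deg % span + width_deg) // span)
    return sorted({(first + i) % n_tiles for i in range(crossed + 1)})

# A 60-degree viewport starting at azimuth 10 needs only 2 of 8 tiles:
needed = tiles_for_viewport(10.0, 60.0, 8)
```

Only the returned tiles need to be fetched and decoded, which is exactly the storage-versus-transmission trade-off the thesis analyzes.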
APA, Harvard, Vancouver, ISO, and other styles
3

Dupont, de Dinechin Grégoire. "Towards comfortable virtual reality viewing of virtual environments created from photographs of the real world." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM049.

Full text
Abstract:
La reconstitution en réalité virtuelle de lieux, personnes, et objets réels ouvre la voie à de nombreux usages, tels que préserver et promouvoir des sites culturels, générer des avatars photoréalistes pour se retrouver virtuellement avec famille et amis à distance, ou encore recréer des lieux ou situations spécifiques à des fins thérapeutiques ou de formation. Tout cela s'appuie sur notre capacité à transformer des images du monde réel (photos et vidéos) en environnements 360° immersifs et objets 3D interactifs. Cependant, ces environnements virtuels à base d'images demeurent souvent imparfaits, et peuvent ainsi rendre le visionnage en réalité virtuelle inconfortable pour les utilisateurs. En particulier, il est difficile de reconstituer avec précision la géométrie d'une scène réelle, et souvent de nombreuses approximations sont ainsi faites qui peuvent être source d'inconfort lors de l'observation ou du déplacement. De même, il est difficile de restituer fidèlement l'aspect visuel de la scène : les méthodes classiques ne peuvent ainsi restituer certains effets visuels complexes tels que transparence et réflexions spéculaires, tandis que les algorithmes de rendu plus spécialisés ont tendance à générer des artefacts visuels et peuvent être source de latence. Par ailleurs, ces problèmes deviennent d'autant plus complexes lorsqu'il s'agit de reconstituer des personnes, l'oeil humain étant très sensible aux défauts dans l'apparence ou le comportement de personnages virtuels. Par conséquent, l'objectif de cette thèse est d'étudier les méthodes permettant de rendre les utilisateurs plus confortables lors du visionnage immersif de reconstitutions digitales du monde réel, par l'amélioration et le développement de nouvelles méthodes de création d'environnements virtuels à partir de photos. 
Nous démontrons et évaluons ainsi des solutions permettant (1) de fournir une meilleure parallaxe de mouvement lors du visionnage d'images 360°, par le biais d'une interface immersive pour l'estimation de cartes de profondeur, (2) de générer automatiquement des agents virtuels 3D capables d'interaction à partir de vidéos 360°, en combinant des modèles pré-entrainés d'apprentissage profond, et (3) de restituer des effets visuels de façon photoréaliste en réalité virtuelle, par le développement d'outils que nous appliquons ensuite pour recréer virtuellement la collection d'un musée de minéralogie. Nous évaluons chaque approche par le biais d'études utilisateur, et rendons notre code accessible sous forme d'outils open source
There are many applications to capturing and digitally recreating real-world people and places for virtual reality (VR), such as preserving and promoting cultural heritage sites, placing users face-to-face with faraway family and friends, and creating photorealistic replicas of specific locations for therapy and training. This is typically done by transforming sets of input images, i.e. photographs and videos, into immersive 360° scenes and interactive 3D objects. However, such image-based virtual environments are often flawed such that they fail to provide users with a comfortable viewing experience. In particular, accurately recovering the scene's 3D geometry is a difficult task, causing many existing approaches to make approximations that are likely to cause discomfort, e.g. as the scene appears distorted or seems to move with the viewer during head motion. In the same way, existing solutions most often fail to accurately render the scene's visual appearance in a comfortable fashion. Standard 3D reconstruction pipelines thus commonly average out captured view-dependent effects such as specular reflections, whereas complex image-based rendering algorithms often fail to achieve VR-compatible framerates, and are likely to cause distracting visual artifacts outside of a small range of head motion. Finally, further complications arise when the goal is to virtually recreate people, as inaccuracies in the appearance of the displayed 3D characters or unconvincing responsive behavior may be additional sources of unease. Therefore, in this thesis, we investigate the extent to which users can be made more comfortable when viewing digital replicas of the real world in VR, by enhancing, combining, and designing new solutions for creating virtual environments from input sets of photographs. 
We thus demonstrate and evaluate solutions for (1) providing motion parallax during the viewing of 360° images, using a VR interface for estimating depth information, (2) automatically generating responsive 3D virtual agents from 360° videos, by combining pre-trained deep learning networks, and (3) rendering captured view-dependent effects at high framerates in a game engine widely used for VR development, which we apply to digitally recreate a museum's mineralogy collection. We evaluate and discuss each approach by way of user studies, and make our codebase available as an open-source toolkit
APA, Harvard, Vancouver, ISO, and other styles
4

Yan, Ke-Sin (顏可欣). "Localization and Route Planning for the Pedestrian Using 360-degree Images." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/rwx9ud.

Full text
Abstract:
Master's thesis, National Chung Cheng University, Institute of Electrical Engineering, 2018 (ROC year 107).
Over the last decades, with the development of mobile devices, smartphones and tablets have become an indispensable part of our lives. The navigation system is one of the most-used functions on mobile devices. For example, Google Maps not only provides street view services, but also provides route information to guide people to destinations. Therefore, accurate localization and navigation technologies are essential. Global positioning system (GPS) technology is heavily used for outdoor localization. However, its performance is not always guaranteed, in particular in crowded urban areas and in bad weather, so accurate outdoor localization remains a challenge. Image-assisted GPS can be an effective solution, but it needs large amounts of data for image matching, and the database usually contains tens of thousands of images, which significantly increases the computation cost. To tackle this problem, in this work, 360-degree images are used to match the query image taken by the user. Usually, navigation systems are designed for vehicles; in addition to the localization technique, we also develop a technique for pedestrian navigation. Considering the different properties of pedestrians and the external environment, the target is to provide the most effective and suitable route planning for pedestrians. In addition, conventional navigation systems do not provide orientation information in a friendly way. Therefore, in the proposed navigation design, we provide directions for pedestrians in a comprehensible manner and help them arrive at the destination safely and comfortably.
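The image-assisted localization described above ultimately reduces to matching a query descriptor against a database of geotagged 360-degree reference descriptors; a toy nearest-neighbour sketch (the descriptors and location labels are invented for illustration):

```python
def localize(query, references):
    """Return the geotag whose reference descriptor is closest to the
    query descriptor (squared L2 distance). `references` maps a
    location label to its descriptor vector."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda loc: sqdist(references[loc], query))

# Hypothetical 4-D descriptors for three geotagged 360-degree views:
db = {
    "gate":  [0.9, 0.1, 0.0, 0.2],
    "plaza": [0.1, 0.8, 0.3, 0.0],
    "park":  [0.2, 0.2, 0.9, 0.5],
}
best = localize([0.85, 0.15, 0.05, 0.25], db)
```

The thesis's point is that 360-degree references shrink this database: one omnidirectional image covers what would otherwise take many perspective photos.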
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Chao-Tseng (喻昭曾). "Miniature 360-Degree Viewable Image-Plane Disk-Type Multiplex Holographic System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/23957548767519247173.

Full text
Abstract:
Master's thesis, Vanung University (萬能科技大學), Institute of Electrical Engineering and Computer Science (電資研究所), 2014 (ROC year 103).
In this paper, we set up a miniature holographic system, based on the 360-degree viewable image-plane disk-type multiplex holographic system, to study and obtain 360-degree viewable floating 3D images. We replace some optical elements of the previous system, such as the laser source and lenses, and change the related parameters to shrink the system's footprint. After rebuilding the system, we reduced its area by more than half compared with the original system while still producing the desired hologram. Finally, the advantages and disadvantages of the imaging system are discussed.
APA, Harvard, Vancouver, ISO, and other styles
6

Kuo, Shih-Fu (郭士輔). "Investigation of 2D image capture of 360-degree Viewable Image-plane Disk-type Multiplex Holography." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/29780617514599801424.

Full text
Abstract:
Master's thesis, Vanung University (萬能科技大學), Institute of Engineering Technology (工程科技研究所), 2009 (ROC year 98).
The 360 degree viewable image-plane disk-type multiplex hologram reconstructs a monochrome 3D image, dispersed in the vertical direction, under white-light LED illumination. The image content comes from 3D software or from photos taken with a small image-capture system. In this paper, we design a larger capture stage to obtain human images and use it to make the 360 degree viewable image-plane disk-type multiplex hologram.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Jian-An (王健安). "Position Measurement Using Ultra-High Resolution 360-Degree Panoramic Images and Particle Swarm Optimization." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/t39n22.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Institute of Civil and Disaster Prevention Engineering (土木與防災研究所), 2012 (ROC year 101).
Advances in digital cameras in recent years enable the capture of images at resolutions as high as ten million pixels. Nevertheless, limited by camera angle and range, it is not always possible to capture everything in front of the eye in one picture. Now, the GigaPan robotic arm, combined with any common digital camera, makes image capture at ultra-high resolution possible. In addition to capturing and recording images, this study applied the combination of GigaPan and the PSO algorithm to locating spatial coordinates. Three capture points were randomly selected around a target and their spatial coordinates were measured by GPS (and used as control points). At each of these points, one set of 360-degree high-resolution panoramic images was captured. Because GigaPan captures and records full 360-degree panoramas, the angle between any two objects and a capture point can be calculated from pixel positions in the panoramic images. With these angles, virtual rays were simulated streaming from the three capture points toward the target. Using the rapid search of the PSO algorithm and the principle of triangulation, directions from the three capture points were examined randomly over 0–360 degrees. As the virtual rays moved toward the minimum intersection area among the intersection unions, their intersection gave the spatial coordinates of the target. Besides locating the spatial coordinates of a target, this study also applied the same method to: (1) locating the surroundings of a building, and (2) observing the inclination angle of trees on a slope. The inclination angle was observed in images captured in different periods to see whether it tended to increase, thus determining whether the slope was stable. Additionally, the angle was compared with results from LiDAR scanning.
The comparison shows that reasonable results can be obtained with the combination of GigaPan and PSO, demonstrating the application potential of this method.
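The bearing-plus-PSO idea in this abstract can be sketched in a few lines. The following is a hypothetical numpy illustration, not the author's code: bearings toward the target, as would be read off pixel columns of equirectangular panoramas, are assumed known at three surveyed stations, and a minimal particle swarm searches for the 2-D point whose predicted bearings best match the observations. All names and values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three surveyed capture stations (control points) and the unknown target.
stations = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])
true_target = np.array([60.0, 30.0])

# Observed bearings (radians) from each station toward the target; in practice
# these come from pixel columns of the 360-degree panoramas.
d = true_target - stations
bearings = np.arctan2(d[:, 1], d[:, 0])

def cost(pts):
    """Sum of squared, wrapped angular residuals for candidate points (n, 2)."""
    diff = pts[:, None, :] - stations[None, :, :]     # (n, 3, 2)
    pred = np.arctan2(diff[..., 1], diff[..., 0])     # predicted bearings
    res = np.angle(np.exp(1j * (pred - bearings)))    # wrap to [-pi, pi]
    return (res ** 2).sum(axis=1)

# Minimal particle swarm: each particle is pulled toward its personal best
# and the swarm's global best position.
n = 40
pos = rng.uniform(-20.0, 120.0, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(300):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = cost(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(np.round(gbest, 2))  # estimated target position
```

With exact bearings the cost surface has its global minimum at the true target, so the swarm should settle near (60, 30); the thesis additionally constrains the search using intersection areas of the rays.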
APA, Harvard, Vancouver, ISO, and other styles
8

See, Zi Siang. "Creating high dynamic range spherical panorama images for high fidelity 360 degree virtual reality." Thesis, 2020. http://hdl.handle.net/1959.13/1411181.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
This research explores the development of a novel method and apparatus for creating spherical panoramas enhanced with high dynamic range (HDR) for high-fidelity Virtual Reality 360-degree (VR360) user experiences. The original contribution to knowledge this study seeks to make is a new application of human–computer interaction techniques, applied in order to gauge and understand how the user experience of interactive panorama images can be virtually operated with the aim of increasing fidelity, or high-definition visual similarity and clarity, closest to the original scene depicted. In this context, the term 'high fidelity' refers to the aim of producing detailed and accurate HDR spherical panorama images which resemble the original scenes captured sufficiently to afford users a satisfactory and compelling VR360 user experience. A VR360 interactive panorama presentation using spherical panoramas can provide virtual interactivity and wider viewing coverage; with three degrees of freedom, users can look around in multiple directions within the VR360 experience, gaining the sense of being in control of their own engagement. This freedom is facilitated by the use of mobile displays or head-mounted devices. However, in terms of image reproduction, the exposure range can be a major difficulty in reproducing a high-contrast real-world scene. Imaging variables can arise from difficulties and obstacles during the production of HDR spherical panoramas. This may result in inaccurate image reproduction for location-based subjects, which will in turn result in a poor VR360 user experience. Such problems may include, but are not limited to: parallax error, nadir-angle difficulty, HDR ghosting, insufficient dynamic range, and luminance preservation.
In contrast, this study presents an HDR spherical panorama reproduction approach which can shorten the production process, reduce imaging variables, and keep technical issues to a minimum, leading to improved photographic image reproduction with fewer visual abnormalities for VR360 experiences. A user study has been conducted; it shows that the novel approach creates images which viewers prefer, on the whole, to those created using more complicated HDR methods, or to those created without HDR at all. In an ideal situation for VR360 reproduction, the proposed solution and imaging workflow would allow multi-angle acquisition to be accomplished in less than a minute. The thesis comprises this critical exegesis of the use-case study and practice-based research project as an outline, with a creative component comprising a unique set of VR360s presented using the proposed method and apparatus. I hope that the thesis will be of use to future scholars and practitioners, and to the general viewer as well.
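The exposure-range problem this abstract describes is commonly addressed by merging bracketed exposures. As a rough illustration only, and not the thesis's actual workflow, the sketch below fuses a grayscale exposure stack with a Mertens-style "well-exposedness" weight in numpy; the function name and toy scene are invented for the example, and real pipelines add contrast and saturation terms plus multiscale blending.

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """stack: (k, h, w) grayscale exposures in [0, 1] -> fused (h, w) image.

    Pixels near mid-grey get high weight, so each region of the fused image
    is drawn mostly from the exposure that rendered it best.
    """
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
    w /= w.sum(axis=0, keepdims=True) + 1e-12             # normalize per pixel
    return (w * stack).sum(axis=0)

# Toy example: an under- and an over-exposed rendering of a gradient scene.
scene = np.linspace(0, 1, 256).reshape(1, -1).repeat(8, axis=0)
under = np.clip(scene * 0.4, 0, 1)
over = np.clip(scene * 1.8, 0, 1)
fused = fuse_exposures(np.stack([under, over]))
```

Because the fused value is a per-pixel weighted average, it always lies between the darkest and brightest exposure at that pixel, which is why fusion avoids the clipped shadows and blown highlights of any single shot.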
APA, Harvard, Vancouver, ISO, and other styles
9

Chan, Shih-Hao, and 詹世豪. "Investigation of 360-degree Viewable Image-plane Disk-type Multiplex Full-Color Holography." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/95782559259034501076.

Full text
Abstract:
Master's thesis
Vanung University
Graduate Institute of Engineering Technology
98
The 360-degree viewable image-plane disk-type multiplex hologram can reconstruct a monochrome 3D dispersion image in the vertical direction under white-light LED illumination. In this paper, we propose to find the relevant factors of the reference and object beams to obtain a suitable fringe density on the holographic film. We can then synthesize a full-color 3D image. Finally, we also design a semi-automatic system for producing full-color 3D images once the synthesis factors are obtained.
APA, Harvard, Vancouver, ISO, and other styles
10

Cheng, I.-Chen, and 鄭亦真. "Effect of A 360 Degrees Panoramic Image System(3600 PIS)on the Environment Recognition of Students with Moderate and Severe Metal Retardation in Special Education School." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/81859393247682166659.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
In-service Master's Program, Department of Special Education
91
The study adopts a system designed by the researcher to teach students with moderate and severe mental retardation at the vocational senior high school stage to recognize their environment. The purpose of the study is to explore: after instruction with the 360-degree panoramic image system (360° PIS), how well can students operate the 360° PIS? Can students use the 360° PIS to recognize an environment? Can students use the 360° PIS to enhance their mobility in that environment? The study adopts a multiple-probe-across-subjects experimental design. The participants are four students with moderate and severe mental retardation from the vocational senior high school department of special schools. The independent variable is the teaching system "The Environment Introduction of Yangming Park by 360° PIS." The dependent variable is the accuracy percentage on each experimental test. Each student went through three experimental stages: baseline, intervention, and generalization. The results are as follows: 1. After teaching, students with moderate and severe mental retardation could operate the 360° PIS and reach a proficient level of use. 2. Students with moderate and severe mental retardation could recognize the environment in the 360° PIS. They could identify the panorama, the names of scenes, and the locations of related passages in the panoramic image. Furthermore, they could generalize these results from the virtual to the real environment, naming scenes and finding the locations of passages. 3. Students with moderate and severe mental retardation could walk through two trails independently using the 360° PIS, generalize the results from cyberspace to a not-yet-experienced environment, and walk through the two trails.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Images 360 degrés"

1

360° imaging: The photographer's panoramic virtual reality manual. Crans-Près-Céligny: RotoVision, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

360 Degree Spherical Video: The complete guide to 360-Degree video. Grey Goose Graphics LLC, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Degas: His Life and Works in 500 Images: An Illustrated Exploration of the Artist, His Life and Context, with a Gallery of 300 of His Finest Paintings and Sculptures. Lorenz Books, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Images 360 degrés"

1

Iorns, Thomas, and Taehyun Rhee. "Real-Time Image Based Lighting for 360-Degree Panoramic Video." In Image and Video Technology – PSIVT 2015 Workshops, 139–51. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30285-0_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ahmmed, Ashek, and Manoranjan Paul. "Discrete Cosine Basis Oriented Homogeneous Motion Discovery for 360-Degree Video Coding." In Image and Video Technology, 106–15. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-34879-3_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Raj, Surya, and Ansuman Mahapatra. "Optimizing Deep Neural Network for Viewpoint Detection in 360-Degree Images." In Lecture Notes in Networks and Systems, 491–500. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4863-3_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Saurav, Sumeet, T. N. D. Madhu Kiran, B. Sravan Kumar Reddy, K. Sanjay Srivastav, Sanjay Singh, and Ravi Saini. "Dynamic Image Networks for Human Fall Detection in 360-degree Videos." In Communications in Computer and Information Science, 65–78. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1387-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Han, Byeong-Ju, and Jae-Young Sim. "Zero-Shot Learning for Reflection Removal of Single 360-Degree Image." In Lecture Notes in Computer Science, 533–48. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19800-7_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Drisya, S. S., Ansuman Mahapatra, and S. Priyadharshini. "360-Degree Image Classification and Viewport Prediction Using Deep Neural Networks." In Lecture Notes in Networks and Systems, 483–92. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-4807-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barmpoutis, Panagiotis, and Tania Stathaki. "A Novel Framework for Early Fire Detection Using Terrestrial and Aerial 360-Degree Images." In Advanced Concepts for Intelligent Vision Systems, 63–74. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-40605-9_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dhiraj, Raunak Manekar, Sumeet Saurav, Somsukla Maiti, Sanjay Singh, Santanu Chaudhury, Neeraj, Ravi Kumar, and Kamal Chaudhary. "Activity Recognition for Indoor Fall Detection in 360-Degree Videos Using Deep Learning Techniques." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, 417–29. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gorin, Valérie. "From Empathy to Shame: The Use of Virtual Reality by Humanitarian Organisations." In Making Humanitarian Crises, 147–70. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00824-5_7.

Full text
Abstract:
Since the UNHCR's movie Clouds over Sidra, filmed in 2015 in the Zaatari refugee camp, the use of Virtual Reality (VR) has increased among humanitarian organisations for raising funds or awareness. The International Committee of the Red Cross (ICRC) and Médecins Sans Frontières (MSF), among others, have relied on 3D and 360-degree images to enhance emotional resonance among audiences, including diplomats, military groups or decision-makers. Marketed as the 'ultimate empathy machine', VR operates through immersive reality. Yet the necessity to see and feel reality 'as it is' is not unprecedented in the visual history of humanitarianism. Therefore, this chapter first critically examines the immersive experience of humanitarian VR movies and their performative and affective potential. Indeed, VR claims to erase the distance and to elicit empathetic connections do not prevent this innovative technology from adopting a voyeuristic or aesthetic gaze, long associated in humanitarian imagery with self-centeredness rather than other-centeredness. Then, building on recent VR movies that draw on outrage and indignation, such as The Right Choice (ICRC, 2018) and Not A Target (MSF Switzerland, 2016), this chapter opens new lines of inquiry into the capacity of immersive technologies to mobilise shame rather than empathy.
APA, Harvard, Vancouver, ISO, and other styles
10

Hogan, Ciarán, and Ganesh Sistu. "Automatic Vehicle Ego Body Extraction for Reducing False Detections in Automated Driving Applications." In Communications in Computer and Information Science, 264–75. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-26438-2_21.

Full text
Abstract:
Fisheye cameras are extensively employed in autonomous vehicles due to their wider field of view, which produces a complete 360-degree image of the vehicle with a minimum number of sensors. The drawback of a broader field of view is that it may include undesirable portions of the vehicle's ego body in its perspective. Due to objects' reflections on the car body, this may produce false positives in perception systems; processing ego-vehicle pixels also uses up unnecessary computing power. Unexpectedly, there is no literature on this relevant practical problem. To our knowledge, this is the first attempt to discuss the significance of automatic ego-body extraction for safety-critical automobile applications. We also propose a simple deep learning model for identifying the vehicle's ego body. This model enables us to eliminate pointless processing of the car's bodywork, eliminate the potential for pedestrians or other objects to be mistakenly detected in the car's ego-body reflection, and finally check whether the camera is mounted incorrectly. The proposed network is a U-Net model with a ResNet-50 encoder pre-trained on ImageNet and trained for binary semantic segmentation on vehicle ego-body data. Our training data is an internal Valeo dataset with 10K samples collected by three separate car lines across Europe. This network can be integrated into the vehicle's existing perception system by extracting the ego-body contour data and supplying it to the other algorithms, which then ignore the area inside the ego-body contour. The proposed network can run at set intervals to save computing power and to check whether the camera is misaligned by comparing the new contour data to the previous data.
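The integration step described at the end of this abstract, ignoring detections that fall on the ego body, can be illustrated with a small sketch. The helper below is hypothetical and not Valeo's code: given a binary ego-body mask such as a segmentation network might produce, it drops detection boxes that mostly overlap the mask before they reach downstream perception.

```python
import numpy as np

def filter_detections(boxes, ego_mask, max_overlap=0.5):
    """Keep boxes whose overlap with the ego-body mask is below max_overlap.

    boxes: list of (x0, y0, x1, y1) pixel boxes; ego_mask: (h, w) bool array
    where True marks the vehicle's own bodywork.
    """
    kept = []
    for x0, y0, x1, y1 in boxes:
        region = ego_mask[y0:y1, x0:x1]
        if region.size and region.mean() >= max_overlap:
            continue  # box lies mostly on the ego body: likely a reflection
        kept.append((x0, y0, x1, y1))
    return kept

# Toy mask: the bottom third of the frame is the car's own bonnet.
mask = np.zeros((90, 120), dtype=bool)
mask[60:, :] = True
dets = [(10, 5, 40, 30),    # pedestrian in free space -> kept
        (20, 65, 50, 85)]   # "pedestrian" on the bonnet -> dropped
print(filter_detections(dets, mask))  # [(10, 5, 40, 30)]
```

Running the mask extraction only at set intervals, as the paper suggests, makes this per-frame filtering essentially free: the contour is computed rarely, while the cheap overlap test runs on every detection.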
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Images 360 degrés"

1

Fu, Jianglin, Saeed Ranjbar Alvar, Ivan Bajic, and Rodney Vaughan. "FDDB-360: Face Detection in 360-Degree Fisheye Images." In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2019. http://dx.doi.org/10.1109/mipr.2019.00011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Patterson, Dale. "360 Degree photographic imagery for VR." In ACSW 2018: Australasian Computer Science Week 2018. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3167918.3167955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "Unknown object tracking in 360-degree camera images." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899897.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Delforouzi, Ahmad, Seyed Amir Hossein Tabatabaei, Kimiaki Shirahama, and Marcin Grzegorzek. "Polar Object Tracking in 360-Degree Camera Images." In 2016 IEEE International Symposium on Multimedia (ISM). IEEE, 2016. http://dx.doi.org/10.1109/ism.2016.0077.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"360 degrees-viewable display of 3D solid images." In SIGGRAPH07: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2007. http://dx.doi.org/10.1145/1280720.1280839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Molnar, Fanni, and Andras Kovacs. "360-Degree image stitching with GPU support." In 2016 IEEE 20th Jubilee International Conference on Intelligent Engineering Systems (INES). IEEE, 2016. http://dx.doi.org/10.1109/ines.2016.7555130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wegner, Krzysztof, Olgierd Stankiewicz, Tomasz Grajek, and Marek Domanski. "Depth Estimation from Stereoscopic 360-Degree Video." In 2018 25th IEEE International Conference on Image Processing (ICIP). IEEE, 2018. http://dx.doi.org/10.1109/icip.2018.8451452.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mahmoudian Bidgoli, Navid, Thomas Maugey, and Aline Roumy. "Intra-coding of 360-degree images on the sphere." In 2019 Picture Coding Symposium (PCS). IEEE, 2019. http://dx.doi.org/10.1109/pcs48520.2019.8954538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Xinlang, Pan Gao, and Ran Wei. "Visual Saliency Prediction on 360 Degree Images With CNN." In 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2021. http://dx.doi.org/10.1109/icmew53276.2021.9456008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

De Simone, Francesca, Roberto G. de A. Azevedo, Sohyeong Kim, and Pascal Frossard. "Graph-Based Detection of Seams In 360-Degree Images." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803578.

Full text
APA, Harvard, Vancouver, ISO, and other styles