
Dissertations on the topic "Light field images"


Explore the top 50 dissertations for your research on the topic "Light field images".

Next to every entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the record's metadata.

Browse dissertations on a wide variety of disciplines and compile your bibliography correctly.

1

Zhang, Zhengyu. "Quality Assessment of Light Field Images." Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0002.

Full text of the source
Abstract:
Light Field Images (LFIs) have garnered remarkable interest and fascination due to their burgeoning significance in immersive applications. Since LFIs may be distorted at various stages from acquisition to visualization, Light Field Image Quality Assessment (LFIQA) is vitally important for monitoring potential impairments of LFI quality. The first contribution (Chapter 3) of this work focuses on developing two handcrafted-feature-based No-Reference (NR) LFIQA metrics, in which texture information and wavelet information are exploited for quality evaluation. In the second part (Chapter 4), we explore the potential of combining deep learning with the quality assessment of LFIs, and propose four deep-learning-based LFIQA metrics tailored to different LFI characteristics, including three NR metrics and one Full-Reference (FR) metric. In the last part (Chapter 5), we conduct subjective experiments and propose a novel standard LFIQA database. Moreover, a benchmark of numerous state-of-the-art objective LFIQA metrics on the proposed database is provided.
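For readers unfamiliar with handcrafted wavelet features, the sketch below illustrates the kind of subband-energy statistic an NR metric might extract from a sub-aperture image, using a one-level Haar transform. It is an illustrative assumption, not the metrics proposed in the thesis:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform; returns the LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def wavelet_features(sai):
    """Log-energy of the detail subbands: a crude no-reference feature vector
    (blur suppresses it; blocking and noise inflate it)."""
    _, LH, HL, HH = haar2d(sai.astype(np.float64))
    return np.array([np.log1p(np.mean(b * b)) for b in (LH, HL, HH)])

rng = np.random.default_rng(0)
sai = rng.random((64, 64))                 # stand-in sub-aperture image
print(wavelet_features(sai).shape)  # (3,)
```

A real metric would pool such features over all sub-aperture views and map them to a quality score with a learned regressor.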
APA, Harvard, Vancouver, ISO, and other styles
2

Chiesa, Valeria. "Revisiting face processing with light field images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS059.pdf.

Full text of the source
Abstract:
The main objective of this thesis is to present an unconventional acquisition technology, to study face analysis performance using images collected with a special camera, to compare the results with those obtained from data produced by similar devices, and to demonstrate the benefit of modern devices over the standard cameras used in biometrics. At the start of the thesis, the literature on face analysis using light field data was reviewed. The scarcity of biometric data, and in particular of images of human faces, collected with plenoptic cameras was addressed through the systematic acquisition of a light field face database, now publicly available. With the collected data it was possible to design and develop experiments in face analysis. In addition, an exhaustive baseline for a comparison between two RGB-D technologies was created to support prospective studies. Over the course of this thesis, interest in plenoptic technology applied to face analysis grew, and the need for a study of an algorithm dedicated to light field images became unavoidable. A complete overview of existing methods was therefore produced.
APA, Harvard, Vancouver, ISO, and other styles
3

Dricot, Antoine. "Light-field image and video compression for future immersive applications." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.

Full text of the source
Abstract:
Evolutions in video technology tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light field represents all the light rays (i.e. in all directions) in a scene. New devices for sampling/capturing the light field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist (e.g. 360° video, virtual reality, etc.). For several years now, this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer market. Secondly, improvements and new coding schemes are proposed to increase compression performance in order to enable efficient light-field content transmission on future networks.
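As background on the "specific structures" mentioned above: a light field is commonly handled as a 4D array of sub-aperture views. A minimal sketch, assuming an idealized lenslet mosaic in which each microlens covers an n_u x n_v pixel block (real raw plenoptic images need devignetting and resampling first):

```python
import numpy as np

def extract_sais(lenslet, n_u, n_v):
    """Split a lenslet mosaic into its n_v x n_u grid of sub-aperture images:
    view (v, u) gathers pixel (v, u) from under every microlens."""
    s, t = lenslet.shape[0] // n_v, lenslet.shape[1] // n_u
    return np.stack([[lenslet[v::n_v, u::n_u][:s, :t]
                      for u in range(n_u)] for v in range(n_v)])

mosaic = np.arange(36.0).reshape(6, 6)   # toy 6x6 mosaic with 2x2 views
views = extract_sais(mosaic, 2, 2)
print(views.shape)  # (2, 2, 3, 3)
```

Compression schemes exploit the strong redundancy between these views, much as video codecs exploit redundancy between frames.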
APA, Harvard, Vancouver, ISO, and other styles
5

McEwen, Bryce Adam. "Microscopic Light Field Particle Image Velocimetry." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3238.

Full text of the source
Abstract:
This work presents the development and analysis of a system that combines the concepts of light field microscopy and particle image velocimetry (PIV) to measure three-dimensional velocities within a microvolume. Rectangular microchannels with dimensions on the order of 350-950 micrometers were fabricated using a photolithographic process and polydimethylsiloxane (PDMS). The flow was seeded with fluorescent particles and pumped through the microchannels at Reynolds numbers ranging from 0.016 to 0.028. A light field microscope with a lateral resolution of 6.25 micrometers and an axial resolution of 15.5 micrometers was designed and built based on the concepts described by Levoy et al. Light field images were captured continuously at a frame rate of 3.9 frames per second using a Canon 5D Mark II DSLR camera. Each image was post-processed to render a stack of two-dimensional images. The focal stacks were further post-processed using various methods, including bandpass filtering, 3D deconvolution, and intensity-based thresholding, to remove the effects of diffraction and blurring. Subsequently, a multi-pass, three-dimensional PIV algorithm was used to measure channel velocities. Results from the PIV analysis were compared with an analytical solution for fully developed cases and with CFD simulations for developing flows. Relative errors for fully developed flow measurements, within the light field microscope refocusing range, were approximately 5% or less. The main limitations are the reduction in lateral resolution and the somewhat low axial resolution. Advantages include the relatively low cost, ease of incorporation into existing micro-PIV systems, simple self-calibration process, and potential for resolving instantaneous three-dimensional velocities in a microvolume.
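The core of each PIV pass is a cross-correlation between interrogation windows taken from consecutive frames. A minimal 2D sketch of that step (illustrative only; the thesis uses a multi-pass 3D algorithm on reconstructed focal stacks):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer pixel shift between two interrogation windows, estimated
    from the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the window wrap around to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
frame = rng.random((32, 32))
shifted = np.roll(frame, (3, 5), axis=(0, 1))    # known displacement
print(piv_displacement(frame, shifted))  # (3, 5)
```

Production PIV codes refine this with sub-pixel peak fitting and window deformation across passes.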
APA, Harvard, Vancouver, ISO, and other styles
6

Souza, Wallace Bruno Silva de. "Transmissão progressiva de imagens sintetizadas de light field." Repositório Institucional da UnB, 2018. http://repositorio.unb.br/handle/10482/34206.

Full text of the source
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2018.
This work proposes an optimized rate-distortion method to transmit synthesized light field images. Briefly, a light field image can be understood as four-dimensional (4D) data with both spatial and angular resolution, where each two-dimensional subimage of this 4D data is a certain perspective, that is, a Sub-Aperture Image (SAI). This work aims to modify and improve a previous proposal named PLFC (Progressive Light Field Communication), which addresses image synthesis for different focal-point images requested by a user. Like PLFC, this work tries to provide enough information to the user so that, as the transmission progresses, the user can synthesize their own focal-point images without the need to transmit new images. Thus, the first proposed modification concerns how the user's initial cache should be chosen, defining an ideal amount of SAIs to send at the beginning of the transmission. An improvement of the additional-image selection process is also proposed by means of a refinement algorithm, which is applied even in the cache initialization. This new selection process works with dynamic QPs (Quantization Parameters) during encoding and considers not only the immediate gains for the synthesized image but also the subsequent syntheses. This idea was already presented in PLFC but had not been satisfactorily implemented. Moreover, this work proposes an automatic way to calculate the Lagrange multiplier that controls the influence of the future benefit associated with the transmission of a SAI. Finally, a simplified manner of obtaining this future benefit is described, reducing the computational complexity involved. The uses of such a system are diverse; for example, it can be used to identify some element in a light field image, adjusting the focus accordingly.
Besides the proposal, the obtained results are shown, with a discussion of the significant gains achieved, of up to 32.8% in terms of BD-Rate compared to the previous PLFC. This gain reaches up to 85.8% in relation to trivial light field data transmissions.
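The Lagrangian trade-off described above (immediate gain, future benefit, and rate, balanced by a multiplier) can be sketched as a greedy selection rule. All SAI names and numbers below are hypothetical placeholders, not values from the dissertation:

```python
# Hypothetical per-SAI statistics:
# (rate in bits, immediate distortion reduction, estimated future benefit).
candidates = {
    "sai_0_0": (1200, 4.0, 1.5),
    "sai_0_1": (800, 2.5, 2.0),
    "sai_1_1": (1500, 5.0, 0.5),
}

def next_sai(stats, lam=0.002, mu=0.5):
    """Greedy Lagrangian choice: maximize immediate gain plus mu-weighted
    future benefit, minus lam times the rate cost."""
    def score(item):
        rate, gain_now, gain_future = item[1]
        return gain_now + mu * gain_future - lam * rate
    return max(stats.items(), key=score)[0]

print(next_sai(candidates))  # sai_0_0
```

Raising `lam` penalizes rate more heavily; raising `mu` favors views that help later syntheses, which is the effect the PLFC refinement aims for.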
APA, Harvard, Vancouver, ISO, and other styles
7

Nieto, Grégoire. "Light field remote vision." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM051/document.

Full text of the source
Abstract:
Light fields have gathered much interest during the past few years. Captured from a plenoptic camera or a camera array, they sample the plenoptic function, which provides rich information about the radiance of any ray passing through the observed scene. They offer a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting, or matting, for instance. Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we address the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate the light field reconstruction challenge as an image-based rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to make correspondences between the input views and the target view. A new image is generated by the joint use of both the input images and the geometric proxy, often by projecting the input images onto the target point of view and blending them in intensity. A naive color blending of the input images does not guarantee the coherence of the synthesized image. We therefore propose a direct multi-scale approach based on Laplacian rendering to blend the source images at all frequencies, thus preventing rendering artifacts.
However, the imperfection of the geometric proxy is also a main cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image, yielding a better-conditioned linear system to solve and removing the high-frequency noise due to the geometric proxy. Some scene reconstructions are very challenging because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies, and specularities question the rules of parallax. We propose an original method based on a local approximation of the sparse light field in the plenoptic space to generate a new viewpoint without the need for any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces. Lastly, we discuss the question of the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects that are hidden behind a camouflage. The proposed algorithms are illustrated by results on both structured (camera array) and unstructured plenoptic datasets.
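To make the multi-scale blending idea concrete, here is a minimal numpy sketch of Laplacian-pyramid blending of two source images under a per-pixel weight map. It is an illustrative stand-in (box filter instead of a Gaussian kernel, nearest-neighbor upsampling), not the rendering method of the thesis:

```python
import numpy as np

def _down(img):
    """Average 2x2 blocks: a box-filter stand-in for blur-and-decimate."""
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def _up(img, shape):
    """Nearest-neighbor upsample back to `shape`."""
    return np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]

def laplacian_blend(a, b, w, levels=3):
    """Blend images a and b with weight map w at every pyramid level."""
    bands = []
    for _ in range(levels):
        da, db, dw = _down(a), _down(b), _down(w)
        la = a - _up(da, a.shape)            # Laplacian band of a
        lb = b - _up(db, b.shape)            # Laplacian band of b
        bands.append(w * la + (1.0 - w) * lb)
        a, b, w = da, db, dw
    out = w * a + (1.0 - w) * b              # blend the coarsest residual
    for band in reversed(bands):
        out = _up(out, band.shape) + band
    return out

a = np.zeros((16, 16))
b = np.ones((16, 16))
w = np.tile(np.linspace(1.0, 0.0, 16), (16, 1))  # favor a on the left
res = laplacian_blend(a, b, w)
print(res.shape)  # (16, 16)
```

Blending each frequency band separately is what avoids the seams and ghosting that single-scale intensity blending produces.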
APA, Harvard, Vancouver, ISO, and other styles
8

Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.

Full text of the source
Abstract:
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that enables a variety of novel post-capture applications and immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. Since the compression performance of current coding schemes on light fields is still limited, approaches better adapted to light field structures need to be introduced. We propose a scalable coding scheme that encodes only a subset of light field views and reconstructs the remaining views via a sparsity-based method. A residual coding provides an enhancement to the final quality of the decoded light field. Acquiring very large-scale light fields is still not feasible with current capture and storage facilities; a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field, which exploits its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study is conducted for the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach, and the distortion introduced by the different processing steps is measured. The results show performance comparable to depth-based view synthesis methods.
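The principle of recovering a signal from few samples by exploiting Fourier-domain sparsity can be illustrated in 1D with a simple projection-and-threshold iteration. This is a Papoulis-Gerchberg-style sketch under idealized assumptions (known sparsity, noiseless samples), not the reconstruction algorithm of the thesis:

```python
import numpy as np

def recover_sparse_fourier(samples, mask, n, k, iters=200):
    """Alternate between (a) enforcing the measured samples and (b) keeping
    only the k largest Fourier coefficients (hard thresholding)."""
    x = np.zeros(n)
    for _ in range(iters):
        x[mask] = samples                    # (a) data consistency
        X = np.fft.fft(x)
        X[np.argsort(np.abs(X))[:-k]] = 0.0  # (b) keep k largest coefficients
        x = np.fft.ifft(X).real
    x[mask] = samples
    return x

n, k = 128, 2
t = np.arange(n)
signal = np.cos(2.0 * np.pi * 5.0 * t / n)   # exactly 2-sparse in Fourier
rng = np.random.default_rng(2)
mask = rng.random(n) < 0.4                   # keep ~40% of the samples
rec = recover_sparse_fourier(signal[mask], mask, n, k)
print(rec.shape)  # (128,)
```

For a 4D light field the same idea applies per angular spectrum, with far more elaborate sparsity models.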
APA, Harvard, Vancouver, ISO, and other styles
9

Löw, Joakim, Anders Ynnerman, Per Larsson, and Jonas Unger. "HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering." Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18052.

Full text of the source
Abstract:
This paper presents a method for resampling a sequence of high dynamic range light probe images into a representation of Incident Light Field (ILF) illumination which enables realtime rendering. The light probe sequences are captured at varying positions in a real-world environment using a high dynamic range video camera pointed at a mirror sphere. The sequences are then resampled into a set of radiance maps in a regular three-dimensional grid before projection onto spherical harmonics. The capture locations and number of samples in the original data make it inconvenient for direct use in rendering, so resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed has no internal occlusion, the radiance samples are projected through the volume along their corresponding directions in order to build a new set of radiance maps at selected locations, in this case a three-dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for realtime lighting of synthetic objects inside the incident light field.
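Projection onto spherical harmonics, the final step described above, amounts to quadrature of the radiance map against each basis function. A minimal sketch up to degree 1 for an equirectangular map (illustrative only; the paper's pipeline and band count may differ):

```python
import numpy as np

SH_NORM = (0.28209479, 0.48860251)  # sqrt(1/(4*pi)), sqrt(3/(4*pi))

def sh_basis(theta, phi):
    """Real spherical harmonics up to degree 1, evaluated on a grid."""
    return np.stack([
        SH_NORM[0] * np.ones_like(theta),          # Y_0^0
        SH_NORM[1] * np.sin(theta) * np.sin(phi),  # Y_1^-1
        SH_NORM[1] * np.cos(theta),                # Y_1^0
        SH_NORM[1] * np.sin(theta) * np.cos(phi),  # Y_1^1
    ])

def project_radiance(radiance, n_theta=64, n_phi=128):
    """Project an equirectangular radiance map onto the 4 SH coefficients
    by midpoint quadrature with the sin(theta) area weight."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dA = np.sin(T) * (np.pi / n_theta) * (2.0 * np.pi / n_phi)
    return np.array([np.sum(radiance * y * dA) for y in sh_basis(T, P)])

# A radiance map equal to Y_1^0 should project to coefficients ~(0, 0, 1, 0).
T = np.meshgrid((np.arange(64) + 0.5) * np.pi / 64,
                (np.arange(128) + 0.5) * 2.0 * np.pi / 128, indexing="ij")[0]
coeffs = project_radiance(SH_NORM[1] * np.cos(T))
print(coeffs.shape)  # (4,)
```

Storing a handful of such coefficients per grid cell is what makes realtime evaluation of the incident light field feasible.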
APA, Harvard, Vancouver, ISO, and other styles
10

Baravdish, Gabriel. "GPU Accelerated Light Field Compression." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150558.

Full text source
Abstract:
This thesis presents a GPU accelerated method to compress light fields or light field videos. The implementation is based on earlier work on a full light field compression framework. The large amount of data produced by capturing light fields makes compression a challenge, and we seek to accelerate the encoding part. We compress by projecting each data point onto a set of dictionaries, seeking the sparse representation with the least error. An optimized greedy algorithm suited to computations on the GPU is presented. We exploit the algorithm's structure by encoding the data segmentally in parallel for faster computation while maintaining quality. The results show a significantly faster encoding time compared to other results in the same research field. We conclude that further improvements could increase the speed, and thus it is not too far from an interactive compression speed.
APA, Harvard, Vancouver, ISO, and other styles
11

Yang, Jason C. (Jason Chieh-Sheng) 1977. "A light field camera for image based rendering." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86575.

Full text source
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 54-55).
by Jason C. Yang.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
12

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
13

Unger, Jonas. "Incident Light Fields." Doctoral thesis, Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16287.

Full text source
Abstract:
Image based lighting, (IBL), is a computer graphics technique for creating photorealistic renderings of synthetic objects such that they can be placed into real world scenes. IBL has been widely recognized and is today used in commercial production pipelines. However, the current techniques only use illumination captured at a single point in space. This means that traditional IBL cannot capture or recreate effects such as cast shadows, shafts of light or other important spatial variations in the illumination. Such lighting effects are, in many cases, artistically created or are there to emphasize certain features, and are therefore a very important part of the visual appearance of a scene. This thesis and the included papers present methods that extend IBL to allow for capture and rendering with spatially varying illumination. This is accomplished by measuring the light field incident onto a region in space, called an Incident Light Field, (ILF), and using it as illumination in renderings. This requires the illumination to be captured at a large number of points in space instead of just one. The complexity of the capture methods and rendering algorithms is then significantly increased. The technique for measuring spatially varying illumination in real scenes is based on capture of High Dynamic Range, (HDR), image sequences. For efficient measurement, the image capture is performed at video frame rates. The captured illumination information in the image sequences is processed such that it can be used in computer graphics rendering. By extracting high intensity regions from the captured data and representing them separately, this thesis also describes a technique for increasing rendering efficiency and methods for editing the captured illumination, for example artificially moving or turning on and off individual light sources.
APA, Harvard, Vancouver, ISO, and other styles
14

Tsai, Dorian Yu Peng. "Light-field features for robotic vision in the presence of refractive objects." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/192102/1/Dorian%20Yu%20Peng_Tsai_Thesis.pdf.

Full text source
Abstract:
Curved transparent objects are difficult for robots to perceive, and this makes it difficult for robots to work with them. This thesis shows that multi-aperture or light-field cameras overcome this problem, since they capture a dense, uniformly sampled set of views of the scene. The advances constitute a critical step towards enabling robots to work more safely and reliably with everyday refractive objects.
APA, Harvard, Vancouver, ISO, and other styles
15

Tung, Yan Foo. "Testing and performance characterization of the split field polarimeter in the 3-5μm waveband /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FTung.pdf.

Full text source
Abstract:
Thesis (M.S. in Combat Systems Technology)--Naval Postgraduate School, December 2003.
Thesis advisor(s): Alfred W. Cooper, Gamani Karunasiri. Includes bibliographical references (p. 83-84). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
16

Pendlebury, Jonathon Remy. "Light Field Imaging Applied to Reacting and Microscopic Flows." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5754.

Full text source
Abstract:
Light field imaging, specifically synthetic aperture (SA) refocusing, is a method used to combine images from an array of cameras to generate a single image with a narrow depth of field that can be positioned arbitrarily throughout the volume under investigation. Creating a stack of narrow depth of field images at varying locations generates a focal stack that can be used to find the location of objects in three dimensions. SA refocusing is particularly useful when reconstructing particle fields that are then used to determine the movement of the fluid they are entrained in, and it can also be used for shape reconstruction. This study applies SA refocusing to reacting flows and microscopic flows by performing shape reconstruction and 3D PIV on a flame, and 3D PIV on flow through a micro channel. The reacting flows in particular posed problems for the method. Reconstruction of the flame envelope was successful except for significant elongation in the optical axis, caused by the cameras viewing the flame from primarily one direction. 3D PIV on reacting flows suffered heavily from the refractive index variations generated by the flame. The refocusing algorithm used assumed the particles were viewed through a constant refractive index (RI) and did not compensate for variations in the RI. This variation caused apparent motion in the particles that obscured their true locations, making the 3D PIV prone to error. Microscopic PIV (µPIV) was performed on a channel containing a backward facing step. A microlens array was placed in the imaging section of the setup to capture a light field from the scene, which was then refocused using SA refocusing. PIV on these volumes was compared to a CFD simulation of the same channel. Comparisons showed that error was most significant near the boundaries and the step of the channel. The axial velocity in particular had significant error near the step, where the axial velocity was highest. Flow-wise velocity, though, appeared accurate, with average flow-wise error of approximately 20% throughout the channel volume.
APA, Harvard, Vancouver, ISO, and other styles
17

Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.

Full text source
Abstract:
Light field image quality assessment (LF-IQA) has attracted increasing research interests due to the fast-growing demands for immersive media experience. The majority of existing LF-IQA metrics, however, heavily rely on high-complexity statistics-based feature extraction for the quality assessment task, which will be hardly sustainable in real-time applications or power-constrained consumer electronic devices in future real-life applications. In this research, a low-complexity Deep learning-based Light Field Image Quality Evaluator (DeLFIQE) is proposed to automatically and efficiently extract features for LF-IQA. To the best of my knowledge, this is the first attempt in LF-IQA with a dedicatedly designed convolutional neural network (CNN) based deep learning model. First, to significantly accelerate the training process, discriminative Epipolar Plane Image (EPI) patches, instead of the full light field images (LFIs) or full EPIs, are obtained and used as input for training and testing in DeLFIQE. By utilizing the EPI patches as input, the quality evaluation of 4-D LFIs is converted to the evaluation of 2-D EPI patches, thus significantly reducing the computational complexity. Furthermore, discriminative EPI patches are selected in such a way that they contain most of the distortion information, thus further improving the training efficiency. Second, to improve the quality assessment accuracy and robustness, a multi-task learning mechanism is designed and employed in DeLFIQE. Specifically, alongside the main task that predicts the final quality score, an auxiliary classification task is designed to classify LFIs based on their distortion types and severity levels. That way, the features are extracted to reflect the distortion types and severity levels, which in turn helps the main task improve the accuracy and robustness of the prediction. 
The extensive experiments show that DeLFIQE outperforms state-of-the-art metrics from both accuracy and correlation perspectives, especially on benchmark LF datasets of high angular resolutions. When tested on LF datasets of low angular resolutions, however, the performance of DeLFIQE slightly declines, although it still remains competitive. This is believed to be due to the fact that the distortion feature information contained in the EPI patches is reduced as the LFIs’ angular resolutions decrease, thus reducing the training efficiency and the overall performance of DeLFIQE. Therefore, a General-purpose deep learning-based Light Field Image Quality Evaluator (GeLFIQE) is proposed to perform accurately and efficiently on LF datasets of both high and low angular resolutions. First, a deep CNN model is pre-trained on one of the most comprehensive benchmark LF datasets of high angular resolutions containing abundant distortion features. Next, the features learned from the pre-trained model are transferred to the target LF dataset-specific CNN model to help improve the generalisation and overall performance on low-resolution LFIs containing fewer distortion features. The experimental results show that GeLFIQE substantially improves the performance of DeLFIQE on low-resolution LF datasets, which makes it a truly general-purpose LF-IQA metric for LF datasets of various resolutions.
APA, Harvard, Vancouver, ISO, and other styles
18

Unger, Jonas, Stefan Gustavson, Larsson Per, and Anders Ynnerman. "Free Form Incident Light Fields." Linköpings universitet, Visuell informationsteknologi och applikationer, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16286.

Full text source
Abstract:
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range, HDR, video camera system with position tracking. Light samples are rearranged into 4-D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.
APA, Harvard, Vancouver, ISO, and other styles
19

Ang, Jason. "Offset Surface Light Fields." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1100.

Full text source
Abstract:
For producing realistic images, reflection is an important visual effect. Reflections of the environment are important not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous works in this area have made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that tries to handle reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
APA, Harvard, Vancouver, ISO, and other styles
20

Yeung, Henry Wing Fung. "Efficient Deep Neural Network Designs for High Dimensional and Large Volume Image Processing." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24336.

Full text source
Abstract:
Over time, more advanced methods of imaging are being developed for capturing richer information from the scene. Such advancement leads to an increase in the spatial resolution, i.e. the number of pixels in the width and height of the image, the angular resolution, i.e. light rays from multiple angles, or the spectral resolution, i.e. bands across the electromagnetic spectrum. As a result, the number of dimensions and the volume per image increase significantly. Examples of such images are light field images and satellite images. Light field images, which capture the ray of light at each point of the scene instead of the total amount of light measured at each point of the photosensor, contain 4 dimensions, i.e. spatial width and height and angular width and height, as opposed to the only 2 dimensions, i.e. spatial width and height, of images taken by traditional DSLR cameras. Satellite images, on the other hand, have the same number of dimensions as traditional images but are far larger in each dimension. The spatial width and height of a satellite image can exceed 3200 by 3200 pixels. Moreover, there can be more than 16 bands from the short-wave infrared (1195-2365nm) range, instead of the 3 RGB channels of traditional images. Both light field images and satellite images contain more information than traditional images because of their huge image size. However, they are problematic to analyse for exactly the same reason. This problem is particularly important for the recently popular deep learning based techniques. Deep learning based methods rely on feeding a large sample size dataset to a deep neural network which is trained through back-propagation. The training process for this method is extremely time-consuming. Given the same amount of compute, the training time depends on the complexity of the network and the size of the input data.
In the case of training a model for high dimensional and large volume images, we run into a trade-off between training time and data exploitation in the design of the neural networks. Specifically, building a neural network that utilises all 4 dimensions of the light field image can easily result in a structure that takes over a month to train. Owing to this, many researchers resort to methods that handle the image by separating it into parts, which reduces training time but also reduces the exploitation of correlations within the data, thus hampering model performance. This thesis aims to provide efficient designs for handling the high dimensional light field images and the large volume satellite images on problems such as spatial super-resolution, light field reconstruction, classification and segmentation. We design networks that utilise all available information and are efficiently connected for learning a good feature representation. Some of our solutions achieve state-of-the-art results at the time of publication.
APA, Harvard, Vancouver, ISO, and other styles
21

Stepanov, Milan. "Selected topics in learning-based coding for light field imaging." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG050.

Full text source
Abstract:
The current trend in imaging technology is to go beyond the 2D representation of the world captured by a conventional camera. Light field technology enables us to capture richer directional cues. With the recent availability of hand-held light field cameras, it is possible to capture a scene from various perspectives with ease at a single exposure time, enabling new applications such as a change of perspective, focusing at different depths in the scene, and editing depth-of-field. Whereas the new imaging model pushes the frontiers of immersiveness, quality of experience, and digital photography, it generates huge amounts of data demanding significant storage and bandwidth resources. To overcome these challenges, light fields require the development of efficient coding schemes. In this thesis, we explore deep-learning-based approaches for light field compression. Our hybrid coding scheme combines a learning-based compression approach with a traditional video coding scheme and offers a highly efficient tool for lossy compression of light field images. We employ an auto-encoder-based architecture and an entropy constrained bottleneck to achieve particular operability of the base codec. In addition, an enhancement layer based on a traditional video codec offers fine-grained quality scalability on top of the base layer. The proposed codec achieves better performance compared to state-of-the-art methods; quantitative experiments show, on average, more than 30% bitrate reduction compared to the JPEG Pleno and HEVC codecs. Moreover, we propose a learning-based lossless light field codec that leverages view synthesis methods to obtain high-quality estimates, and an auto-regressive model that builds a probability distribution for arithmetic coding. The proposed method outperforms state-of-the-art methods in terms of bitrate while maintaining low computational complexity. Last but not least, we investigate the distributed source coding paradigm for light field images.
We leverage the high modeling capabilities of deep learning methods at two critical functional blocks of the distributed source coding scheme: the estimation of Wyner-Ziv views and the modeling of correlation noise. Our initial study shows that incorporating a deep learning-based view synthesis method into a distributed coding scheme improves coding performance compared to HEVC Intra. We achieve further gains by integrating deep-learning-based modeling of the residual signal.
APA, Harvard, Vancouver, ISO, and other styles
22

Gullapalli, Sai Krishna. "Wave-Digital FPGA Architectures of 4-D Depth Enhancement Filters for Real-Time Light Field Image Processing." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1574443263497981.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
23

Milnes, Thomas Bradford. "Arbitrarily-controllable programmable aperture light field cameras : design theory, and applications to image deconvolution & 3-dimensional scanning." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85232.

Full text source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2013.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 125-128).
This thesis describes a new class of programmable-aperture light field cameras based on an all-digital, grayscale aperture. A number of prototypes utilizing this arbitrarily-controllable programmable aperture (ACPA) light field technology are presented. This new method of capturing light field data lends itself to an improved deconvolution technique dubbed "Programmable Deconvolution," as well as to 3D scanning and super-resolution imaging. The use & performance of ACPA cameras in these applications is explored both in theory and with experimental results. Additionally, a framework for ACPA camera design for optimal 3D scanning is described.
by Thomas Bradford Milnes.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
24

Johansson, Erik. "3D Reconstruction of Human Faces from Reflectance Fields." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2365.

Full text source
Abstract:

Human viewers are extremely sensitive to the appearance of people's faces, which makes the rendering of realistic human faces a challenging problem. Techniques for doing this have continuously been invented and evolved for more than thirty years.

This thesis makes use of recent methods within the area of image based rendering, namely the acquisition of reflectance fields from human faces. The reflectance fields are used to synthesize and realistically render models of human faces.

A shape from shading technique, assuming that human skin adheres to the Phong model, has been used to estimate surface normals. Belief propagation in graphs has then been used to enforce integrability before reconstructing the surfaces. Finally, the additivity of light has been used to realistically render the models.

The resulting models closely resemble the subjects from which they were created, and can realistically be rendered from novel directions in any illumination environment.

APA, Harvard, Vancouver, ISO, and other styles
25

Ziegler, Matthias [Verfasser], Günther [Akademischer Betreuer] Greiner, and Günther [Gutachter] Greiner. "Advanced image processing for immersive media applications using sparse light-fields / Matthias Ziegler ; Gutachter: Günther Greiner ; Betreuer: Günther Greiner." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1228627576/34.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
26

Magalhães, Filipe Bento. "Capacitor MOS aplicado em sensor de imagem química." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-06072014-230841/.

Full text source
Abstract:
The development of sensors and systems for environmental control has been shown to be an area of high scientific and technical interest. The main challenges in this area are related to the development of sensors capable of detecting many different substances. In this context, MOS devices present themselves as versatile devices for chemical imaging, with potential for detection and classification of different substances using only a single sensor. In the present work, a MOS sensor was proposed with a wing-vane geometric profile of its gate, constituted of Pd, Au and Pt metals. The sensor's response showed high sensitivity to molecules rich in H atoms, such as H2 and NH3 gases. Capacitance measurements showed that the sensor has a nonlinear response for H2 and NH3, obeying the Langmuir isotherm law. The MOS sensor proved to be efficient in chemical image generation through the scanned light pulse technique. The chemical images of the H2 and NH3 gases showed different patterns when N2 was used as carrier gas. The different patterns arose mainly due to the geometric profile of the metallic gate. The sensor sensitivity showed dependence on the bias potential. In the capacitance measurements, greater sensitivity was observed for potentials near the flat-band voltage. In the chemical images, greater sensitivity was observed for bias potentials within the depletion region. The sensor sensitivity was also dependent on the carrier gas: the sensor was more sensitive with N2 as carrier gas than with dry air. However, the desorption process of H+ ions was more efficient in dry air. The results obtained in the present work suggest the possibility of manufacturing an optoelectronic nose using only a single MOS sensor.
APA, Harvard, Vancouver, ISO, and other styles
27

Revi, Frank. "Measurement of two-dimensional concentration fields of a glycol-based tracer aerosol using laser light sheet illumination and microcomputer video image acquisition and processing." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/69291.

Full text source
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1992.
Includes bibliographical references (leaves 48-49).
The use of a tracer aerosol with a bulk density close to that of air is a convenient way to study the dispersal of pollutants in ambient room air flow. Conventional point measurement techniques do not permit the rapid and accurate determination of the concentration fields produced by the injection of such a tracer into a volume of air. An instantaneous two dimensional distribution would aid in the characterization of flow and diffusion processes in the volume studied, and permit verification of theoretical models. A method is developed to measure such two dimensional concentration fields using a laser light sheet to illuminate the plane of interest, which is captured and processed using current microcomputer-based video image acquisition and analysis technology. Point concentrations, determined optically using extinction of monochromatic illumination projected through the aerosol onto a photodetector, are used to calibrate the captured video images to determine actual concentration values. Accuracy, reproducibility, and maximum rate of data acquisition are evaluated by means of theoretical models of ambient air flow in a sealed box with point injection of the tracer, and in a duct of circular cross section with constant air velocity under both constant and pulsed injection scenarios.
by Frank Revi.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
28

Carstens, Jacobus Everhardus. "Fast generation of digitally reconstructed radiographs for use in 2D-3D image registration." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/1797.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
29

Reynoso, Pacheco Helen Carolina. "Construcción de la Imagen: El uso de la luz natural bajo la perspectiva de Emmanuel Lubezki en la película El Renacido." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/654486.

Abstract:
This research paper analyzes the communicational phenomenon of Emmanuel Lubezki's photographic style through the camera's narrative and expressive applications in the film "The Revenant" ("El Renacido"). Lubezki's work is analyzed with light as the fundamental element of expressiveness, showing how the panoramic beauty of the natural landscapes modifies the receptive and emotional attitudes of the spectator.
Research paper
30

Svoboda, Karel. "Fotografování s využitím světelného pole." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241965.

Abstract:
The aim of this thesis is to explain terms such as light field, plenoptic camera, and digital lens, and to explain the principle of rendering the resulting images with the option to select the plane of focus, the depth of field, changes in perspective, and a partial change in the viewing angle. The main outputs of the thesis are scripts for rendering images from a Lytro camera and an interactive application that clearly demonstrates the principles of plenoptic sensing.
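The refocusing principle this abstract refers to, synthesising photographs focused at different depths from a single light field capture, can be sketched as a shift-and-add over sub-aperture views (a simplified integer-shift illustration, not the thesis's Lytro rendering scripts):

```python
import numpy as np

def refocus(subviews, alpha):
    """Shift-and-add refocusing of a (U, V, H, W) stack of sub-aperture views.

    Each view is shifted in proportion to its angular offset from the central
    view; `alpha` selects the synthetic focal plane. Integer shifts via
    np.roll keep the sketch short (real renderers use sub-pixel interpolation).
    """
    U, V, H, W = subviews.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            out += np.roll(subviews[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` this degenerates to a plain average of the views; sweeping `alpha` moves the plane of focus through the scene.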
31

Pamplona, Vitor Fernando. "Interactive measurements and tailored displays for optical aberrations of the human eye." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/87586.

Abstract:
This thesis proposes light-field pre-warping methods for measuring and compensating for optical aberrations in focal imaging systems. Interactive methods estimate refractive conditions (NETRA) and model lens opacities (CATRA) of interaction-aware eyes and cameras using cost-efficient hardware apps for high-resolution screens. Tailored displays use stereo-viewing hardware to compensate for the measured visual aberrations and display in-focus information that avoids the need for corrective eyeglasses. A light-field display positioned very close to the eye creates virtual objects over a wide range of predefined depths through different sectors of the eye's aperture. This platform creates a new range of interactivity that is extremely sensitive to spatially distributed optical aberrations. The ability to focus on virtual objects, interactively align displayed patterns, and detect variations in shape and brightness allows the estimation of the eye's point spread function and its lens's accommodation range. While conventional systems require specialized training, costly devices, and strict security procedures, and are usually not mobile, this thesis simplifies the mechanism by putting the human subject in the loop. Captured data is transformed into refractive conditions in terms of spherical and cylindrical powers, axis of astigmatism, focal range, and aperture maps for opacity, attenuation, contrast, and sub-aperture point-spread functions. These optical widgets, carefully designed interactive interfaces, and computational analysis and reconstruction establish the field of computational ophthalmology. The overall goal is to allow a general audience to operate portable light-field displays and gain a meaningful understanding of their own visual conditions. Ubiquitous, up-to-date, and accurate diagnostic records can let private and public displays show information at a resolution that goes beyond the viewer's visual acuity.
The new display technology is able to compensate for refractive errors and avoid light-scattering paths. Tailored displays free the viewer from needing wearable optical corrections when looking at them, expanding the notion of glasses-free multi-focus displays to accommodate individual variabilities. This thesis includes proof-of-concept designs for ophthalmic devices and tailored displays. User evaluations and validations with modified camera optics are performed. Capturing the daily variabilities of an individual's sensory system is expected to unleash a new era of high-quality tailored consumer devices.
32

Verhack, Ruben [Verfasser], Peter [Akademischer Betreuer] Lambert, Thomas [Akademischer Betreuer] Sikora, Glenn van [Gutachter] Wallendael, Klaus [Gutachter] Obermeyer, Christine [Gutachter] Guillemot, Jean-François [Gutachter] Macq, and Tim [Gutachter] Wauters. "Steered mixture-of-experts for image and light field representation, processing, and coding : a universal approach for immersive experiences of camera-captured scenes / Ruben Verhack ; Gutachter: Glenn van Wallendael, Klaus Obermeyer, Christine Guillemot, Jean-François Macq, Tim Wauters ; Peter Lambert, Thomas Sikora." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1223981444/34.

33

Lu, Heqi. "Echantillonage d'importance des sources de lumières réalistes." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0001/document.

Abstract:
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, the ones based on environment maps and light fields are attractive due to their ability to capture the far-field and near-field effects faithfully, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation, and we introduce three novel methods. The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. The performance remains interactive as long as the visibility is computed using our shadow map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach.
Since light-based importance sampling is not very effective when the underlying material of the geometry is specular, we introduce a new balancing technique for Multiple Importance Sampling. This allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we are able to find a good balance between the different sampling techniques without any prior knowledge. Our method is effective, since it reduces the variance on average for all of our test scenes with different light sources, visibility complexity, and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
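The Multiple Importance Sampling that this thesis rebalances is usually built on Veach's balance heuristic; a minimal sketch of that classical baseline (not the thesis's second-order balancing, and all names are illustrative):

```python
def balance_heuristic(pdf_a, pdf_b, n_a=1, n_b=1):
    """Veach's balance heuristic: MIS weight for a sample drawn from
    technique A when A and B contribute n_a and n_b samples."""
    w = n_a * pdf_a
    return w / (w + n_b * pdf_b)

def mis_combine(f, xs_a, xs_b, pdf_a, pdf_b):
    """Combine samples from two strategies (e.g. light-based and BRDF-based)
    with balance-heuristic weights; xs_a/xs_b were drawn from pdf_a/pdf_b."""
    n_a, n_b = len(xs_a), len(xs_b)
    est = 0.0
    for x in xs_a:
        est += balance_heuristic(pdf_a(x), pdf_b(x), n_a, n_b) * f(x) / (n_a * pdf_a(x))
    for x in xs_b:
        est += balance_heuristic(pdf_b(x), pdf_a(x), n_b, n_a) * f(x) / (n_b * pdf_b(x))
    return est
```

The weights of the two techniques sum to one at every sample, which is what keeps the combined estimator unbiased; the thesis's contribution is choosing how much weight each technique gets without prior knowledge.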
34

Vanhoey, Kenneth. "Traitement conjoint de la géométrie et de la radiance d'objets 3D numérisés." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD005/document.

Abstract:
The vision and computer graphics communities have built methods for digitizing, processing and rendering 3D objects. There is an increasing demand for these technologies from cultural-heritage communities, especially for archiving, remotely studying and restoring cultural artefacts like statues, buildings or caves. Besides digitizing geometry, there can be a demand for recovering the photometry with more or less complexity: simple textures (2D), light fields (4D), SV-BRDF (6D), etc. In this thesis, we present practical solutions for constructing and processing surface light fields, represented by hemispherical radiance functions attached to the surface, in real-world on-site conditions. First, we tackle the reconstruction phase of defining these functions from photographic acquisitions taken from several viewpoints in on-site conditions, where the photographic sampling may be unstructured and very sparse or noisy. We propose a process for deducing functions in a manner that is robust and generates a surface light field that may vary from "expected" and artefact-less to high quality, depending on the uncontrolled conditions. Secondly, a mesh simplification algorithm is guided by a new metric that measures quality loss both in terms of geometry and radiance. Finally, we propose a GPU-compatible algorithm for coherent radiance interpolation over the mesh. This generates a smooth visualization of the surface light field, even for poorly tessellated meshes, which is particularly beneficial for heavily simplified models.
35

Nováček, Petr. "Moderní prostředky pro digitální snímání scény." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221292.

Abstract:
The thesis compares conventional and modern methods for digital scene capture. Its target is a comparison of a CMOS sensor with a Bayer mask and the Foveon X3 Merrill sensor, followed by the design of image fusion algorithms that can combine the advantages of both sensor types. The thesis starts with an introduction and a description of methods and processes leading to scene capture. The next part deals with capturing a gallery of test images and with a comparison of both sensors based on those images. Algorithms are then designed for image fusion that combine the advantages of the selected sensors. The last part of the thesis is devoted to an evaluation of the results and of the algorithms used.
36

Frugier, Pierre Antoine. "Quantification 3D d’une surface dynamique par lumière structurée en impulsion nanoseconde. Application à la physique des chocs, du millimètre au décimètre." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112129.

Abstract:
A Structured Light System (SLS) is an efficient means of measuring a surface topography, as it features both high accuracy and dense spatial sampling in a strictly non-invasive way. For these reasons, it has become a technique of reference in recent years. The aim of this PhD is to bring the technique to the field of shock physics. Experiments involving shocks are very specific: they only allow single-shot acquisition of extremely short phenomena occurring over a large range of spatial extensions (from a few millimetres to decimetres). To address these difficulties, we have envisioned the use of a well-known high-speed technique: pulsed laser illumination. The first part of the work evaluates the key parameters that must be taken into account to obtain sharp acquisitions. The study demonstrates that speckle and the depth-of-field limitation are of particular importance, and we provide an effective way to smooth speckle in the nanosecond regime, leaving 14% residual contrast. The second part introduces an original projective formulation for object-point reconstruction. This geometric approach is rigorous; it does not involve weak-perspective assumptions or geometric constraints (such as the camera and projector optical axes crossing in object space). From this formulation a calibration procedure is derived; we demonstrate that any structured-light system can be calibrated by extending the Direct Linear Transformation (DLT) photogrammetric approach to SLS. Finally, we show that reconstruction uncertainties can be derived a priori from the proposed model; the accuracy of the reconstruction depends both on the configuration of the instrument and on the object shape itself. We introduce a procedure for optimizing the configuration of the instrument in order to lower the uncertainties for a given object.
Since the depth of field limits the smallest measurable field extension, the third part focuses on extending it through pupil coding. We present an original way of designing phase components, based on criteria and metrics defined in Fourier space. The design of a binary annular phase mask is demonstrated theoretically and experimentally. This mask tolerates a defocus as high as Ψ ≥ ±40 radians without the need for image processing. We also demonstrate that masks designed with our method can restore extremely high defocus (Ψ ≈ ±100 radians) after processing, hence extending the depth of focus by amounts unseen yet. Finally, the fourth part presents experimental measurements obtained with the setup in different high-speed regimes and at different scales. It was fielded on the LULI2000 high-energy laser facility, where it allowed measurements of the deformation and dynamic fragmentation of a carbon-based sample. Sub-millimetric radial deformations measured in the ultra-high-speed regime on a copper cylinder of decimetric dimensions under pyrotechnic loading are also presented.
37

Wang, Neng-Chien, and 王能謙. "Image Deblurring Technologies for Large Images and Light Field Images." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/23708713883408629099.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
104
Image processing has been developed for a long time. This thesis can be separated into two parts: we first introduce the proposed image deblurring techniques, and then the proposed light field deblurring algorithm. The literature on image deblurring can be categorized into two classes: blind deconvolution and non-blind deconvolution. First, we try to improve the efficiency of non-blind deconvolution for ultra-high-resolution images, where the complexity of deblurring is much higher, so we modify the "Fast Image Deconvolution" algorithm proposed by Krishnan in 2009 to reduce the computation time. To reduce complexity, we process the image in blocks and find the division that minimizes the complexity. Merging the results of the blocks directly would cause blocking artifacts, so the sub-images are overlapped and blended with linear weights. The size of the overlap determines the computing time and the performance: less overlap is more efficient but leads to worse results. As a balance, we choose an overlap size that takes both efficiency and performance into consideration. The other topic is light field deblurring. A light field camera captures both location and angle information in a single shot, so the depth of the scene can be reconstructed and stereoscopic images obtained. A light field camera is built around a lens array, and every lens yields a sub-image. To render the image, we have to obtain the disparity of each microimage pair, from which the depth information can be estimated. We first obtain the relationship among the microlenses by regression analysis, then use the white image to compensate the luminance at the edge of every microimage and use quad-trees to compute disparity more precisely. Moreover, we use an image-based rendering technique to improve the quality of the reconstructed image.
After rendering the image, we apply image segmentation so that every object is cut apart. We estimate the depth of every object from its disparity, and hence reconstruct the depth map of the whole image.
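The block-wise processing with linearly weighted overlaps described in this abstract can be sketched as follows; the deconvolution itself is abstracted into a `process` callback, and all names are illustrative:

```python
import numpy as np

def process_in_blocks(img, block, overlap, process):
    """Apply `process` to overlapping horizontal strips of `img` and merge
    them with linear ramp weights to suppress blocking artifacts.
    Dividing by the accumulated weights normalizes the border tapering."""
    H, W = img.shape
    out = np.zeros((H, W))
    acc = np.zeros((H, W))
    step = block - overlap          # requires block > overlap
    y = 0
    while y < H:
        y1 = min(y + block, H)
        n = y1 - y
        w = np.ones(n)
        r = min(overlap, n)
        ramp = np.arange(1, r + 1) / (r + 1.0)   # strictly positive linear ramp
        w[:r] = ramp
        w[n - r:] = ramp[::-1]
        out[y:y1] += process(img[y:y1]) * w[:, None]
        acc[y:y1] += w[:, None]
        y += step
    return out / acc
```

With an identity `process` the merge reconstructs the input exactly, which is a quick sanity check that the blending weights are consistent.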
38

Chuang, Shih-Chung, and 莊士昌. "Image Rendering Techniques and Depth Recovery for Light field images." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/56855697872267415454.

Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
103
After the first commercial hand-held plenoptic camera was presented by Ng in 2012, the applications and research around plenoptic cameras have grown richer. The major difference from a traditional camera is that a plenoptic camera can capture the angular information of the scene and adjust the trade-off between spatial resolution and angular information, so the information obtained from a single shot is enriched. Using this information, we can reconstruct the depth of the scene and render images from different views. Nonetheless, the depth reconstruction and rendering problems are more complicated than for a traditional camera, since a plenoptic camera consists of a lens array and each lens yields a micro image. In addition, to render the image precisely, we first have to obtain the disparity of each microimage pair and hence the depth information. Because the rendering problem is closely related to the depth of the scene, the rendering and depth reconstruction problems must be handled at the same time or in sequence. In this thesis, we first obtain the relationship among the microlenses by regression analysis. Then we use stereo matching to get the depth of the scene and image-based rendering to improve the quality of the reconstructed image. Besides, we use quad-trees and the white image to improve the performance of the proposed method. In the end, we compare the proposed algorithm with previous work on rendering and depth reconstruction from the microimages acquired by a plenoptic camera, and show that it performs better.
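The stereo-matching step between microimage pairs can be illustrated with a minimal per-pixel matcher (window aggregation, quad-tree refinement and sub-pixel interpolation, used in practice, are omitted; all names are illustrative):

```python
import numpy as np

def match_disparity(left, right, max_disp):
    """Per-pixel disparity between two views by minimising the absolute
    intensity difference over horizontal shifts 0..max_disp."""
    H, W = left.shape
    costs = np.full((max_disp + 1, H, W), np.inf)
    for d in range(max_disp + 1):
        # left pixel x is compared against right pixel x - d
        costs[d, :, d:] = np.abs(left[:, d:] - right[:, :W - d])
    return costs.argmin(axis=0)

# Two synthetic views of the same 1-D texture, offset by 2 pixels:
row = np.sin(0.9 * np.arange(14.0))
left = np.tile(row[0:10], (4, 1))
right = np.tile(row[2:12], (4, 1))
disparity = match_disparity(left, right, 4)
```

On the synthetic pair above, every pixel far enough from the left border recovers the known 2-pixel shift.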
39

Huang, Wei-Hao, and 黃威豪. "Face Segmentation In Portrait Images by Light Field Camera." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/25433666517493577592.

Abstract:
Master's thesis
National Chung Hsing University
Graduate Institute of Communication Engineering
103
Digital cameras have become more popular in recent years, and portrait photography is one of their major applications. Due to the limitation of focusing on a single point and the capability of identifying only clearly focused faces, a digital camera can hardly identify all faces in an image. The light field camera, presented by the Lytro company in 2012, can focus on any point in an image. Since it has no face segmentation function yet, this research uses this special capability to enhance face segmentation. In our method, a number of images focused on different faces are first taken, using the light field camera's ability to refocus on any point. The images are then transformed into binary images by thresholding according to the skin tone of the faces, and the connected component labeling operation is used to find all possible faces. To get the relative depth information between the faces in the image, an experiment is conducted to measure the thresholds of shape-from-focus and a sharpness index, which are used to help evaluate the correct position of the faces. The experimental results show that our method can extract all possible faces in the images, which can be used to adjust the focus of the light field camera or for a subsequent face recognition task.
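The thresholding plus connected-component step can be sketched as a 4-connected flood fill, with the skin-tone thresholding reduced to a plain boolean mask (a didactic stand-in, not the thesis's implementation):

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labelling of a boolean mask by flood fill.
    Returns a label image (0 = background) and the component count."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    count = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Each labelled region is then a candidate face whose position and sharpness can be evaluated across the differently focused shots.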
40

Lourenço, Rui Miguel Leonel. "Structure tensor-based depth estimation from light field images." Master's thesis, 2019. http://hdl.handle.net/10400.8/3927.

Abstract:
This thesis presents a novel framework for depth estimation from light field images based on the use of the structure tensor. A study of prior knowledge introduces general concepts of depth estimation from light field images. This is followed by a study of the state of the art, including a discussion of several distinct depth estimation methods and an explanation of the structure tensor and how it has been used to acquire depth estimation from a light field image. The framework developed improves on two limitations of traditional structure tensor derived depth maps. In traditional approaches, foreground objects present enlarged boundaries in the estimated disparity map. This is known as silhouette enlargement. The proposed method for silhouette enhancement uses edge detection algorithms on both the epipolar plane images and their corresponding structure tensor-based disparity estimation and analyses the difference in the position of these different edges to establish a map of the erroneous regions. These regions can be inpainted with values from the correct region. Additionally, a method was developed to enhance edge information by linking edge segments. Structure tensor-based methods produce results with some noise. This increases the difficulty of using the resulting depth maps to estimate the orientation of scenic surfaces, since the difference between the disparity of adjacent pixels often does not correlate with the real orientation of the scenic structure. To address this limitation, a seed growing approach was adopted, detecting and fitting image planes in a least squares sense, and using the estimated planes to calculate the depth for the corresponding planar region. The full framework provides significant improvements on previous structure tensor-based methods. When compared with other state-of-the-art methods, it proves competitive in both mean square error and mean angle error, with no single method proving superior in every metric.
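The structure-tensor disparity estimation that this framework builds on can be sketched as follows: compute image gradients on an epipolar-plane image (EPI), smooth the 2×2 tensor over a local window, and read the disparity off the dominant orientation. This is a simplified illustration (box window, numpy only), not the thesis's implementation:

```python
import numpy as np

def box_smooth(a, radius=1):
    """Box-filter smoothing used as the structure-tensor window."""
    H, W = a.shape
    pad = np.pad(a, radius, mode='edge')
    out = np.zeros_like(a)
    k = 2 * radius + 1
    for i in range(H):
        for j in range(W):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def epi_disparity(epi):
    """Disparity from the dominant local orientation of an EPI.

    Rows index the view position s, columns the spatial axis x. A scene
    point traces the line x = x0 + d*s, so the line's slope encodes d."""
    Is, Ix = np.gradient(epi)                     # derivatives along s and x
    Jxx = box_smooth(Ix * Ix)
    Jss = box_smooth(Is * Is)
    Jxs = box_smooth(Ix * Is)
    theta = 0.5 * np.arctan2(2 * Jxs, Jxx - Jss)  # gradient orientation
    return -np.tan(theta)                         # slope of the EPI line
```

On a synthetic EPI whose pattern drifts by half a pixel per view, the interior estimates cluster around the true disparity of 0.5.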
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Chang, Ruei-Yu, and 張瑞宇. "Fully Convolutional Networks Based Reflection Separation for Light Field Images." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/k6m8qh.

Full text of the source
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
Academic year 107 (2018-19)
Existing reflection separation schemes designed for multi-view images cannot be applied to light field images due to the dense light fields with narrow baselines. In order to improve the accuracy of the reconstructed background (i.e., the transmitted layer), most light field based reflection separation schemes estimate a disparity map before reflection separation. Different from previous work, this thesis uses the existing EPINET, originally designed for disparity estimation of reflection-free light field images, to separate mixed light field images with weak reflections. At the training stage, the network takes stacks of multi-view images along the principal directions of the light field data as inputs, and significant convolutional features of the background layer are learned in an end-to-end manner. The FCN then learns to predict pixel-wise gray-scale values of the background layer of the central view. Our experimental results show that the background layer can be reconstructed effectively by using EPINET and the mixed light field image dataset proposed in this thesis.
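The input encoding mentioned in this abstract — stacking sub-aperture views along the principal directions of the light field, as in EPINET — can be sketched as below. This is a minimal NumPy illustration with hypothetical names, assuming a 4-D grayscale light field with an odd n x n angular grid.

```python
import numpy as np

def principal_view_stacks(lf):
    """Extract the four view stacks an EPINET-style network takes as input.

    `lf` is a 4-D light field indexed [u, v, height, width] with an odd
    n x n angular resolution; returns the horizontal, vertical, and two
    diagonal stacks of sub-aperture views passing through the centre view.
    """
    n = lf.shape[0]
    assert lf.shape[1] == n and n % 2 == 1, "expects an odd n x n angular grid"
    c = n // 2
    idx = np.arange(n)
    horizontal = lf[c, :]            # views along the central row
    vertical = lf[:, c]              # views along the central column
    diag_main = lf[idx, idx]         # top-left -> bottom-right diagonal
    diag_anti = lf[idx, idx[::-1]]   # top-right -> bottom-left diagonal
    return horizontal, vertical, diag_main, diag_anti
```

Each stack is then fed to its own convolutional branch before the features are merged, so the network sees every principal EPI direction of the light field at once.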
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Tsai, Yu-Ju, and 蔡侑儒. "Estimate Disparity of Light Field Images by Deep Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/k32675.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
Academic year 107 (2018-19)
In this paper, we introduce a light field depth estimation method based on a convolutional neural network. A light field camera can capture the spatial and angular properties of light in a scene. Using this property, we can compute depth information from light field images. However, the narrow baseline of light field cameras makes depth estimation difficult. Many approaches try to overcome this limitation, but they trade off speed against accuracy. We consider the repetitive structure of the light field and the redundant sub-aperture views in light field images. First, to utilize the repetitive structure of the light field, we integrate this property into our network design. Second, by applying attention-based sub-aperture view selection, we let the network learn to select the more useful views by itself. Finally, we compare our experimental results with other state-of-the-art methods to show our improvement in light field depth estimation.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Chang, Po-Yi, and 張博一. "Depth Estimation Using Adaptive Support-Weight Approach for Light Field Images." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5d6h37.

Full text of the source
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
Academic year 106 (2017-18)
Light field cameras acquire multi-view images using a microlens array. Due to the narrow baseline between views, sub-pixel accuracy of the estimated disparity is expected. The adaptive support-weight (ASW) approach is a local matching-based disparity estimation method. Although several ASW-based disparity estimation schemes for light field images have been proposed, they did not consider the problem of sub-pixel accuracy. Therefore, this thesis improves ASW-based depth estimation for light field images. Before disparity estimation, bicubic interpolation is applied to the light field images for sub-pixel accuracy. The adaptive support-weight approach then estimates disparities, where a cross window is adopted to reduce computational complexity. The intersection position of the vertical and horizontal arms is dynamically adjusted at the image border. Next, we increase the weights of pixels with higher edge response. Finally, the disparities estimated from multiple views sharing the same horizontal position are combined to generate the disparity map of the central view. Our experimental results show that the average error rate of the proposed method is 5.4% lower than that of the EPI-based adaptive window matching approach.
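The adaptive support-weight aggregation underlying this abstract can be sketched as follows. This is a minimal grayscale illustration of the classic Yoon-Kweon scheme with a square window and integer disparities, not the thesis implementation; it omits the bicubic sub-pixel step, the cross window, and the edge-response weighting described above, and all parameter names are our own.

```python
import numpy as np

def asw_disparity(left, right, max_disp, win=4, gamma_c=10.0, gamma_g=7.0):
    """Winner-take-all stereo with adaptive support weights.

    Support pixels that are similar in intensity and close in space to the
    window centre get higher weight when aggregating the absolute-difference
    cost, which preserves disparity edges better than a flat window.
    """
    h, w = left.shape
    # spatial proximity term, fixed per window offset
    ys, xs = np.mgrid[-win:win + 1, -win:win + 1]
    w_dist = np.exp(-np.sqrt(ys**2 + xs**2) / gamma_g)
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch_l = left[y - win:y + win + 1, x - win:x + win + 1]
            # similarity to the centre pixel drives the adaptive weight
            w_color = np.exp(-np.abs(patch_l - left[y, x]) / gamma_c)
            weight = w_color * w_dist
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - win) + 1):
                patch_r = right[y - win:y + win + 1,
                                x - d - win:x - d + win + 1]
                cost = np.sum(weight * np.abs(patch_l - patch_r)) / weight.sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The sub-pixel refinement the thesis targets would amount to running the same aggregation on a bicubically upsampled image pair, so the integer disparities above become fractions of the original pixel pitch.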
Styles: APA, Harvard, Vancouver, ISO, etc.
44

Wang, Yen-Chang, and 王嚴璋. "Generating Label Map and Extending Depth of Field from Different Focus Images Obtained by Means of Light Field Camera." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/94835126278171932887.

Full text of the source
Abstract:
Master's thesis
National Tsing Hua University
Department of Electrical Engineering
Academic year 100 (2011-12)
Recently, the invention of the hand-held light field camera has sparked a revolution in photography. With a light field camera we can record not only the intensity of light but also its direction. With this additional information, we can implement many applications such as digital refocusing, observer movement, and depth estimation, which can be utilized in computer vision, computer graphics, and machine vision. In this thesis, we mainly concentrate on digital refocusing, which can easily produce a series of images with different focal lengths. We design an energy function and minimize it to obtain a label map that represents, for each pixel, the index of the sharpest image. The primary core of the energy function is a pixel-based AFM method, with a region-based AdaBoost classification method as a secondary term. Through this energy function, we obtain a more robust result than the traditional depth-from-focus (DFF) method. We also generate a virtual all-focus image for further applications by utilizing the label map. We use the Lytro light field camera to capture real-world scenes and refocus each into a set containing several images with different focal lengths. For each pixel, we compute the cost of each label by applying the energy function described above. Finally, our algorithm generates a label map and a virtual all-focus image.
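The depth-from-focus baseline this thesis builds on — pick, per pixel, the focal slice with the strongest focus measure, then compose an all-focus image from the winners — can be sketched as below. This is an illustrative NumPy sketch using a summed modified Laplacian as the focus measure; the thesis's energy function combining a pixel-based AFM term with an AdaBoost term is not reproduced here, and all names are our own.

```python
import numpy as np

def label_map_and_all_focus(stack, win=3):
    """Depth-from-focus baseline: per-pixel sharpest-slice label map.

    `stack` holds refocused images, shape (n_slices, H, W). Sharpness is a
    modified Laplacian summed over a local window; the label map is the
    argmax over slices, and the all-in-focus image copies each pixel from
    its winning slice.
    """
    box = np.ones(2 * win + 1)
    sharp = np.empty_like(stack, dtype=float)
    for i, img in enumerate(stack):
        # modified Laplacian: |2I - I(x-1) - I(x+1)| + |2I - I(y-1) - I(y+1)|
        ml = (np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1)) +
              np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)))
        # separable box filter sums the measure over a (2*win+1)^2 window
        for axis in (0, 1):
            ml = np.apply_along_axis(
                lambda row: np.convolve(row, box, mode="same"), axis, ml)
        sharp[i] = ml
    labels = np.argmax(sharp, axis=0)
    all_focus = np.take_along_axis(stack, labels[None], axis=0)[0]
    return labels, all_focus
```

The per-pixel argmax is exactly what the thesis regularizes: its energy function adds smoothness and region terms on top of this raw focus measure to suppress the noisy labels plain DFF produces.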
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Yang, Hao-Hsueh, and 楊浩學. "Depth Estimation Based on Segmentation, Superpixel Auto Adjustment and Local Matching Algorithm for Stereo Matching and Light Field Images." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/12781767897456096386.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
Academic year 105 (2016-17)
After the release of the Plenoptic camera in November 2012, research on light field cameras has become popular in recent years. The main difference between a Plenoptic camera and a traditional camera is that the former can acquire the angular information of light rays. With only one shot, we can reconstruct the depth of the scene and render the micro images into one final image from different views. We can also change the focal distance to make near or far objects clear. These are the appealing advantages of the Plenoptic camera. Stereo matching is also a popular research topic, since depth information can be obtained from left- and right-view images. Many applications become possible with accurate depth information for an image. Besides, the concept of stereo matching can be used in light field image rendering to obtain better results. This thesis is divided into three parts. The first part enhances the original rendering technique used for light field images with a better local matching algorithm. The second part is stereo matching: we use segmentation to help stereo matching and propose an auto-adjustment method to decide the best number of superpixels for each image. We also introduce a new local matching algorithm that is efficient, especially for stereo matching after segmentation, and add techniques that further improve the results. The third part is a new depth estimation method for light field images, especially those whose depth is hard to estimate by stereo matching; it recovers depth information based on segmentation and images at different focal distances.
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Medeiros, João Diogo Gameiro. "Depth extraction in 3D holoscopic images." Master's thesis, 2018. http://hdl.handle.net/10071/17860.

Full text of the source
Abstract:
Holoscopy is a technology that offers an alternative to traditional methods of capturing images and viewing 3D content. A light field camera can be used for the capture process, which, unlike traditional cameras, allows storing information about the direction of all light rays. With the saved information it is possible to estimate a depth map that can be used in areas such as robotic navigation or medicine. This dissertation proposes to improve an existing depth estimation algorithm by developing new processing mechanisms that provide a dynamic balance between computational speed and precision. All proposed solutions were implemented using CPU parallelization in order to reduce the computing time. The proposed algorithms were evaluated with the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Structural Similarity Index Method (SSIM) metrics. A comparative analysis between the processing times of the proposed algorithms and the original solutions was also performed. The achieved results were quite satisfactory, since there was a significant decrease in processing times for all of the proposed solutions without substantially affecting estimation accuracy.
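The evaluation metrics named in this abstract can be computed as below. MAE and RMSE are the standard definitions; the SSIM shown is a simplified global variant (the usual definition aggregates local windows), so treat it as illustrative, with our own function names.

```python
import numpy as np

def mae(est, ref):
    """Mean absolute error between an estimated and a reference map."""
    return float(np.mean(np.abs(est - ref)))

def rmse(est, ref):
    """Root mean square error, which penalizes large outliers more than MAE."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def ssim_global(a, b, data_range=1.0):
    """Simplified whole-image SSIM (single global window, standard constants)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)))
```

Reporting all three together is the sensible choice the dissertation makes: MAE and RMSE measure raw depth error, while SSIM captures whether the structural layout of the depth map survives the speed/precision trade-off.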
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Tavares, Paulo José da Silva. "Three-dimensional geometry characterization using structured light fields." Doctoral thesis, 2008. http://hdl.handle.net/10216/59466.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Tavares, Paulo José da Silva. "Three-dimensional geometry characterization using structured light fields." Tese, 2008. http://hdl.handle.net/10216/59466.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Lin, Ren Jie, and 林仁傑. "Achromatic metalens array for light field image." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/36sm3w.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Applied Physics
Academic year 106 (2017-18)
Vision is the most important system through which living creatures perceive information about their surrounding environment. Compared with human eyes, the insect visual system is composed of an array of tiny eyes, known as compound eyes. Such a visual system has a large field of view and the advantage of estimating the depth of objects. These features have inspired efforts to develop similar optical imaging systems, such as light field cameras. A light field image records the position and direction of the light rays in the target scene, which can be captured using a microlens array. Compared with a conventional imaging system, a four-dimensional light field imaging system provides not only the two-dimensional intensity but also the two-dimensional momentum information of light, which enables the scene to be reconstructed with refocused images and object depths. However, it is difficult to obtain a precisely shaped, low-defect microlens array, or one combining different forms (convex or concave) or arbitrary numerical apertures (NA), with most fabrication processes such as electron beam lithography, UV lithography, photoresist melting, and nanoimprint lithography. Metasurfaces, two-dimensional metamaterials, have emerged as one of the most rapidly growing fields of nanophotonics. They have attracted extensive research interest because their exceptional optical properties and compact size can provide technical solutions for cutting-edge optical applications, such as imaging, polarization conversion, nonlinear components, and holograms. Recently, the chromatic aberration of metasurfaces, resulting from the resonance of nanoantennas and the intrinsic dispersion of the constituent materials, has been eliminated in the visible region by incorporating an integrated-resonant unit element. This has given rise to a surge of imaging applications using metalenses.
Here, we propose a light field imaging system with an ultra-compact, flat GaN achromatic metalens array free of spherical aberration to acquire four-dimensional light field information. Using this platform and a rendering algorithm, we can reconstruct the scene slice by slice as a series of images focused at arbitrary depths, together with object depths. Compared with a microlens array, the advantages of our metalens array are achromatism, freedom from spherical aberration, arbitrarily designable focal length and numerical aperture, and direct integration with CMOS/CCD sensors through semiconductor fabrication processes.
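The slice-by-slice rendering described in this abstract is, in its simplest form, shift-and-add refocusing over the sub-aperture views. The NumPy sketch below is a minimal illustration with integer-pixel shifts (real renderers interpolate fractional shifts), and all names are our own.

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4-D light field lf[u, v, y, x].

    Each sub-aperture view is shifted in proportion to its angular offset
    from the centre view and the shifted views are averaged; sweeping
    `alpha` moves the synthetic focal plane through the scene, producing
    the refocused image stack slice by slice.
    """
    nu, nv, h, w = lf.shape
    cu, cv = nu // 2, nv // 2
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            shift = (int(round(alpha * (u - cu))),
                     int(round(alpha * (v - cv))))
            out += np.roll(lf[u, v], shift, axis=(0, 1))
    return out / (nu * nv)
```

Objects at disparity `alpha` come into exact alignment and appear sharp, while everything else is averaged over misaligned copies and blurs — which is also why a per-pixel focus measure over such a stack recovers depth.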
Styles: APA, Harvard, Vancouver, ISO, etc.
50

CHIANG, TAI-I., and 姜太乙. "Face Recognition Based on Moment Light Field Image." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/c9dcr5.

Full text of the source
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Graduate Institute of Photonics and Communications
Academic year 104 (2015-16)
To model images of faces, it is important to have a framework for how a face can change from moment to moment. Faces vary widely, but the changes can be broken down into two parts: variations in lighting and variations in expression among individuals. In this thesis, a face recognition system based on moment images is proposed to model faces under various lighting conditions. The continuity equation is used to extract the first angular moments of the light field so as to construct views under different light sources for recognition. The method reduces the time cost by requiring only a few images from the training set. The experiments have been extensively assessed with the CMU-PIE face database, and our method shows noticeable performance.
Styles: APA, Harvard, Vancouver, ISO, etc.