Dissertations on the topic "Light field images"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Light field images."
Next to each entry in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations across a wide range of disciplines and compile your bibliography correctly.
Zhang, Zhengyu. "Quality Assessment of Light Field Images." Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0002.
Light Field Image (LFI) has garnered remarkable interest and fascination due to its burgeoning significance in immersive applications. Since LFIs may be distorted at various stages from acquisition to visualization, Light Field Image Quality Assessment (LFIQA) is vitally important for monitoring the potential impairments of LFI quality. The first contribution (Chapter 3) of this work focuses on developing two handcrafted feature-based No-Reference (NR) LFIQA metrics, in which texture information and wavelet information are exploited for quality evaluation. Then in the second part (Chapter 4), we explore the potential of combining deep learning technology with the quality assessment of LFIs, and propose four deep learning-based LFIQA metrics according to different LFI characteristics, including three NR metrics and one Full-Reference (FR) metric. In the last part (Chapter 5), we conduct subjective experiments and propose a novel standard LFIQA database. Moreover, a benchmark of numerous state-of-the-art objective LFIQA metrics on the proposed database is provided.
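The handcrafted NR metrics above pool image statistics into quality features. As a toy illustration of that general idea (assuming numpy; this is not the thesis's actual texture or wavelet descriptors), a blockwise variance can serve as a crude texture-energy cue that drops when a view loses detail:

```python
import numpy as np

def local_variance_feature(img, block=8):
    """Mean of per-block variances: a crude texture-energy cue that
    drops when an image loses detail (e.g. through blur)."""
    h, w = img.shape
    h -= h % block
    w -= w % block
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    return float(blocks.var(axis=(1, 3)).mean())

rng = np.random.default_rng(0)
textured = rng.random((64, 64))          # stand-in for a sharp view
flat = np.full((64, 64), 0.5)            # heavily smoothed stand-in
f_tex, f_flat = local_variance_feature(textured), local_variance_feature(flat)
```

Real NR metrics pool many such statistics across views and train a regressor against subjective scores; this sketch only shows the feature-extraction step.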
Chiesa, Valeria. "Revisiting face processing with light field images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS059.pdf.
Being able to predict the macroscopic response of a material from the knowledge of its constituents at a microscopic or mesoscopic scale has always been the Holy Grail pursued by material science, for it provides building bricks for the understanding of complex structures as well as for the development of tailor-made optimized materials. The homogenization theory constitutes nowadays a well-established theoretical framework to estimate the overall response of composite materials for a broad range of mechanical behaviors. Such a framework is still lacking for brittle fracture, which is (i) a dissipative evolution problem that (ii) localizes at the crack tip and (iii) is related to a structural one. In this work, we propose a theoretical framework based on a perturbative approach of Linear Elastic Fracture Mechanics to model (i) crack propagation in large-scale disordered materials as well as (ii) the dissipative processes involved at the crack tip during the interaction of a crack with material heterogeneities. Their ultimate contribution to the macroscopic toughness of the composite is (iii) estimated from the resolution of the structural problem using an approach inspired by statistical physics. The theoretical and numerical inputs presented in the thesis are finally compared to experimental measurements of crack propagation in 3D-printed heterogeneous polymers obtained through digital image correlation.
Dricot, Antoine. "Light-field image and video compression for future immersive applications." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light field represents all the light rays (i.e., in all directions) in a scene. New devices for sampling/capturing the light field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist. For several years now this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer markets. Secondly, improvements and new coding schemes are proposed to increase compression performance in order to enable efficient light-field content transmission on future networks.
McEwen, Bryce Adam. "Microscopic Light Field Particle Image Velocimetry." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3238.
Souza, Wallace Bruno Silva de. "Transmissão progressiva de imagens sintetizadas de light field." Repositório Institucional da UnB, 2018. http://repositorio.unb.br/handle/10482/34206.
This work proposes an optimized rate-distortion method to transmit light field synthesized images. Briefly, a light field image can be understood as four-dimensional (4D) data with both spatial and angular resolution, where each two-dimensional subimage of this 4D data is a certain perspective, that is, a Sub-Aperture Image (SAI). This work aims to modify and improve a previous proposal named PLFC (Progressive Light Field Communication), which addresses image synthesis for different focal point images requested by a user. Like the PLFC, this work tries to provide enough information to the user so that, as the transmission progresses, he can synthesize his own focal point images without the need to transmit new images. Thus, the first proposed modification refers to how the user's initial cache should be chosen, defining an ideal amount of SAIs to send at the beginning of the transmission. An improvement of the additional image selection process is also proposed by means of a refinement algorithm, which is applied even in the cache initialization. This new selection process works with dynamic QPs (Quantization Parameters) during encoding and involves not only the immediate gains for the synthesized image but also considers the subsequent syntheses. This idea was already presented by PLFC but had not been satisfactorily implemented. Moreover, this work proposes an automatic way to calculate the Lagrange multiplier which controls the influence of the future benefit associated with the transmission of a given SAI. Finally, a simplified manner of obtaining this future benefit is described, reducing the computational complexity involved. The utilities of such a system are diverse; for example, it can be used to identify some element in a light field image, adjusting the focus accordingly.
Besides the proposal, the obtained results are shown, and a discussion is made about the significant gains achieved, of up to 32.8% compared to the previous PLFC in terms of BD-Rate. This gain is up to 85.8% in relation to trivial light field data transmissions.
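The 4D structure described in this abstract can be sketched directly. The following toy example (assumed dimensions, numpy; not code from the thesis) indexes sub-aperture images and performs a naive shift-and-add refocus of the kind that focal-point synthesis builds on:

```python
import numpy as np

# A light field as a 4D array L[u, v, s, t]: (u, v) index the angular
# grid of sub-aperture images (SAIs), (s, t) the spatial pixels.
# Sizes are illustrative, not taken from the thesis.
U, V, S, T = 9, 9, 32, 48
rng = np.random.default_rng(0)
L = rng.random((U, V, S, T))

def sub_aperture_image(L, u, v):
    """One perspective view: fix the angular coordinates."""
    return L[u, v]

def refocus(L, alpha):
    """Naive shift-and-add refocus: shift each SAI in proportion to
    its angular offset from the center, then average."""
    U, V, S, T = L.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(L[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

center = sub_aperture_image(L, U // 2, V // 2)
flat = refocus(L, 0.0)                 # alpha = 0: plain average of SAIs
```

Varying `alpha` moves the synthetic focal plane; real pipelines use sub-pixel interpolation instead of the integer `np.roll` used here for brevity.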
Nieto, Grégoire. "Light field remote vision." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM051/document.
Light fields have gathered much interest during the past few years. Captured from a plenoptic camera or a camera array, they sample the plenoptic function that provides rich information about the radiance of any ray passing through the observed scene. They offer a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting or matting, for instance. Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we cope with the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate the light field reconstruction challenge as an image-based rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to make correspondences between the input views and the target view. A new image is generated by the joint use of both the input images and the geometric proxy, often projecting the input images on the target point of view and blending them in intensity. A naive color blending of the input images does not guarantee the coherence of the synthesized image. Therefore we propose a direct multi-scale approach based on Laplacian rendering to blend the source images at all frequencies, thus preventing rendering artifacts.
However, the imperfection of the geometric proxy is also a main cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image for a better-conditioned linear system to solve, removing the high-frequency noise due to the geometric proxy. Some scene reconstructions are very challenging because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities question the rules of parallax. We propose an original method based on the local approximation of the sparse light field in the plenoptic space to generate a new viewpoint without the need for any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces. Lastly we discuss the question of the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects that are hidden behind a camouflage. The proposed algorithms are illustrated by results on both structured (camera arrays) and unstructured plenoptic datasets.
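The multi-scale Laplacian blending mentioned above mixes source images band by band rather than in a single pass. A minimal sketch of that general idea, using simple box down/upsampling for the pyramid (an illustration only, not the thesis's renderer):

```python
import numpy as np

def down(img):                          # 2x2 box downsample
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):                            # nearest-neighbour upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    pyr = []
    for _ in range(levels):
        low = down(img)
        pyr.append(img - up(low))       # band-pass residual
        img = low
    pyr.append(img)                     # coarsest level
    return pyr

def blend(a, b, mask, levels=3):
    """Blend two images band by band, so that each frequency is mixed
    at its own scale (a toy stand-in for multi-scale Laplacian
    rendering, not the thesis implementation)."""
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    masks = [mask]
    for _ in range(levels):
        masks.append(down(masks[-1]))
    blended = [m * la + (1 - m) * lb for la, lb, m in zip(pa, pb, masks)]
    out = blended[-1]
    for lap in reversed(blended[:-1]):
        out = up(out) + lap             # collapse the pyramid
    return out

rng = np.random.default_rng(1)
a, b = rng.random((16, 16)), rng.random((16, 16))
mask = np.zeros((16, 16))
mask[:, :8] = 1.0                       # left half from a, right from b
result = blend(a, b, mask)
```

Blending per band hides seams because low frequencies transition smoothly while high frequencies switch sharply; production IBR uses Gaussian-filtered pyramids rather than the box filters chosen here for compactness.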
Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that brings a variety of novel post-capture applications and enables immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied through this work. Since the compression performance of current coding schemes on light fields is still limited, more adapted approaches are needed to better describe the light field structures. We propose a scalable coding scheme that encodes only a subset of light field views and reconstructs the remaining views via a sparsity-based method. Residual coding provides an enhancement to the final quality of the decoded light field. Since acquiring very large-scale light fields is still not feasible with current capture and storage facilities, a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field that exploits its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study is conducted for the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach. The distortion introduced by the different processing steps is measured. The results show performances comparable to depth-based view synthesis methods.
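Recovering a signal from few samples by exploiting Fourier-domain sparsity can be illustrated in 1D. The sketch below (a POCS-style toy with assumed parameters, not the thesis's reconstruction method) alternates spectral hard-thresholding with a data-consistency step:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
# A signal that is sparse in the Fourier domain (3 cosines -> 6 bins)
x = sum(np.cos(2 * np.pi * f * t / n) for f in (5, 17, 40))

mask = rng.random(n) < 0.5              # keep roughly half the samples
y = np.where(mask, x, 0.0)

# POCS-style recovery: keep the K largest Fourier coefficients,
# then re-impose the measured samples.
K = 6
est = y.copy()
for _ in range(200):
    X = np.fft.fft(est)
    thresh = np.sort(np.abs(X))[-K]
    X[np.abs(X) < thresh] = 0.0         # hard-threshold the spectrum
    est = np.real(np.fft.ifft(X))
    est[mask] = x[mask]                 # data-consistency step

rel_err = np.linalg.norm(est - x) / np.linalg.norm(x)
```

The same two projections (sparse spectrum, measured samples) underlie 4D light field variants, where the sparsity basis spans the angular dimensions as well.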
Löw, Joakim, Anders Ynnerman, Per Larsson, and Jonas Unger. "HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering." Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18052.
Baravdish, Gabriel. "GPU Accelerated Light Field Compression." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150558.
Yang, Jason C. (Jason Chieh-Sheng) 1977. "A light field camera for image based rendering." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86575.
Includes bibliographical references (leaves 54-55).
by Jason C. Yang.
M.Eng.
Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.
Unger, Jonas. "Incident Light Fields." Doctoral thesis, Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16287.
Tsai, Dorian Yu Peng. "Light-field features for robotic vision in the presence of refractive objects." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/192102/1/Dorian%20Yu%20Peng_Tsai_Thesis.pdf.
Tung, Yan Foo. "Testing and performance characterization of the split field polarimeter in the 3-5μm waveband." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FTung.pdf.
Thesis advisor(s): Alfred W. Cooper, Gamani Karunasiri. Includes bibliographical references (p. 83-84). Also available online.
Pendlebury, Jonathon Remy. "Light Field Imaging Applied to Reacting and Microscopic Flows." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5754.
Zhao, Ping. "Low-Complexity Deep Learning-Based Light Field Image Quality Assessment." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25977.
Unger, Jonas, Stefan Gustavson, Per Larsson, and Anders Ynnerman. "Free Form Incident Light Fields." Linköpings universitet, Visuell informationsteknologi och applikationer, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16286.
Ang, Jason. "Offset Surface Light Fields." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1100.
Yeung, Henry Wing Fung. "Efficient Deep Neural Network Designs for High Dimensional and Large Volume Image Processing." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24336.
Stepanov, Milan. "Selected topics in learning-based coding for light field imaging." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG050.
The current trend in imaging technology is to go beyond the 2D representation of the world captured by a conventional camera. Light field technology enables us to capture richer directional cues. With the recent availability of hand-held light field cameras, it is possible to capture a scene from various perspectives with ease at a single exposure time, enabling new applications such as a change of perspective, focusing at different depths in the scene, and editing depth-of-field. Whereas the new imaging model pushes the frontiers of immersiveness, quality of experience, and digital photography, it generates huge amounts of data demanding significant storage and bandwidth resources. To overcome these challenges, light fields require the development of efficient coding schemes. In this thesis, we explore deep-learning-based approaches for light field compression. Our hybrid coding scheme combines a learning-based compression approach with a traditional video coding scheme and offers a highly efficient tool for lossy compression of light field images. We employ an auto-encoder-based architecture and an entropy-constrained bottleneck to achieve particular operability of the base codec. In addition, an enhancement layer based on a traditional video codec offers fine-grained quality scalability on top of the base layer. The proposed codec achieves better performance compared to state-of-the-art methods; quantitative experiments show, on average, more than 30% bitrate reduction compared to the JPEG Pleno and HEVC codecs. Moreover, we propose a learning-based lossless light field codec that leverages view synthesis methods to obtain high-quality estimates and an auto-regressive model that builds a probability distribution for arithmetic coding. The proposed method outperforms state-of-the-art methods in terms of bitrate while maintaining low computational complexity. Last but not least, we investigate the distributed source coding paradigm for light field images.
We leverage the high modeling capabilities of deep learning methods at two critical functional blocks in the distributed source coding scheme: the estimation of Wyner-Ziv views and correlation noise modeling. Our initial study shows that incorporating a deep learning-based view synthesis method into a distributed coding scheme improves coding performance compared to HEVC Intra. We achieve further gains by integrating deep-learning-based modeling of the residual signal.
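Several of the codecs above rely on Lagrangian rate-distortion optimization, trading a rate estimate R against a distortion D through a cost J = D + λR. A toy illustration with uniform quantization (assumed values, unrelated to any of the thesis codecs):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)             # stand-in for transform coefficients

def rd_point(x, step):
    """Uniform quantization: returns (rate, distortion), with rate as a
    plug-in entropy estimate in bits/sample and distortion as MSE."""
    q = np.round(x / step)
    d = float(np.mean((x - q * step) ** 2))
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    r = float(-np.sum(p * np.log2(p)))
    return r, d

# Lagrangian mode decision J = D + lambda * R; lambda is an arbitrary
# illustrative choice controlling the rate/distortion trade-off.
lam = 0.1
steps = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
points = {s: rd_point(x, s) for s in steps}
best = min(steps, key=lambda s: points[s][1] + lam * points[s][0])
```

Increasing λ favors lower rate (coarser steps); learned codecs train with the same objective, replacing the entropy estimate with a differentiable entropy model.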
Gullapalli, Sai Krishna. "Wave-Digital FPGA Architectures of 4-D Depth Enhancement Filters for Real-Time Light Field Image Processing." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1574443263497981.
Milnes, Thomas Bradford. "Arbitrarily-controllable programmable aperture light field cameras : design theory, and applications to image deconvolution & 3-dimensional scanning." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85232.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 125-128).
This thesis describes a new class of programmable-aperture light field cameras based on an all-digital, grayscale aperture. A number of prototypes utilizing this arbitrarily-controllable programmable aperture (ACPA) light field technology are presented. This new method of capturing light field data lends itself to an improved deconvolution technique dubbed "Programmable Deconvolution," as well as to 3D scanning and super-resolution imaging. The use & performance of ACPA cameras in these applications is explored both in theory and with experimental results. Additionally, a framework for ACPA camera design for optimal 3D scanning is described.
by Thomas Bradford Milnes.
Ph. D.
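Deconvolution as used in coded-aperture imaging can be illustrated with the textbook Wiener filter. The sketch below (generic, not the thesis's "Programmable Deconvolution") blurs and restores a synthetic image in the Fourier domain:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e6):
    """Textbook Wiener deconvolution in the Fourier domain: invert the
    blur transfer function H while damping frequencies where H is weak."""
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(4)
img = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1.0 / 9.0              # 3x3 box blur (circular)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
restored = wiener_deconvolve(blurred, kernel)
```

The appeal of a well-chosen (e.g. coded) aperture is that its transfer function H has fewer near-zero frequencies than a plain disc, so this inversion is better conditioned.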
Johansson, Erik. "3D Reconstruction of Human Faces from Reflectance Fields." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2365.
Human viewers are extremely sensitive to the appearance of people's faces, which makes the rendering of realistic human faces a challenging problem. Techniques for doing this have continuously been invented and evolved for more than thirty years.
This thesis makes use of recent methods within the area of image based rendering, namely the acquisition of reflectance fields from human faces. The reflectance fields are used to synthesize and realistically render models of human faces.
A shape from shading technique, assuming that human skin adheres to the Phong model, has been used to estimate surface normals. Belief propagation in graphs has then been used to enforce integrability before reconstructing the surfaces. Finally, the additivity of light has been used to realistically render the models.
The resulting models closely resemble the subjects from which they were created, and can realistically be rendered from novel directions in any illumination environment.
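Reconstructing a surface from estimated normals requires integrating a gradient field. The thesis enforces integrability with belief propagation; as a classic alternative illustration (assuming a periodic synthetic surface and numpy), Frankot-Chellappa integration solves the least-squares problem in the Fourier domain:

```python
import numpy as np

def integrate_gradients(p, q):
    """Least-squares surface from gradient fields (Frankot-Chellappa,
    via the Fourier domain). This classic method is only a stand-in
    for the belief-propagation approach used in the thesis."""
    h, w = p.shape
    wx = 2 * np.pi * np.fft.fftfreq(w)   # radians per sample
    wy = 2 * np.pi * np.fft.fftfreq(h)
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                    # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                        # absolute height is unrecoverable
    return np.real(np.fft.ifft2(Z))

# Round trip on a synthetic periodic surface
n = 64
m = np.arange(n)
z = np.tile(np.cos(2 * np.pi * 3 * m / n), (n, 1))
p = np.tile(-2 * np.pi * 3 / n * np.sin(2 * np.pi * 3 * m / n), (n, 1))
q = np.zeros((n, n))
rec = integrate_gradients(p, q)
```

Because only derivatives are observed, the recovered surface is determined up to a constant offset, which the DC term above pins to zero.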
Ziegler, Matthias [Verfasser], Günther [Akademischer Betreuer] Greiner, and Günther [Gutachter] Greiner. "Advanced image processing for immersive media applications using sparse light-fields / Matthias Ziegler ; Gutachter: Günther Greiner ; Betreuer: Günther Greiner." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2021. http://d-nb.info/1228627576/34.
Magalhães, Filipe Bento. "Capacitor MOS aplicado em sensor de imagem química." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3140/tde-06072014-230841/.
The development of sensors and systems for environmental control has shown itself to be an area of high scientific and technical interest. The main challenges in this area are related to the development of sensors capable of detecting many different substances. In this context, MOS devices present themselves as versatile devices for chemical imaging, with potential for the detection and classification of different substances using only a single sensor. In the present work, a MOS sensor was proposed whose gate has a wing-vane geometric profile composed of the metals Pd, Au, and Pt. The sensor's response showed high sensitivity to molecules rich in H atoms, such as the gases H2 and NH3. Capacitance measurements showed that the sensor has a nonlinear response to H2 and NH3, obeying the Langmuir isotherm law. The MOS sensor proved to be efficient in generating chemical images through the scanned light pulse technique. The chemical images of the H2 and NH3 gases showed different patterns when N2 was used as the carrier gas. The different response patterns arose mainly from the geometric profile of the metallic gate. The sensor sensitivity showed a dependence on the bias potential. In the capacitance measurements, greater sensitivity was observed for potentials near the flat-band voltage. In the chemical images, greater sensitivity was observed for bias potentials within the depletion region. The sensor sensitivity also depended on the carrier gas: the sensor was more sensitive with N2 as the carrier gas than with dry air, although the desorption of H+ was more efficient in dry air. The results obtained in the present work suggest the possibility of manufacturing an optoelectronic nose using only a single MOS sensor.
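The Langmuir isotherm cited in the abstract has a simple closed form, θ = Kp / (1 + Kp). A small sketch with illustrative constants (not fitted sensor data):

```python
import numpy as np

def langmuir_coverage(p, K):
    """Fractional surface coverage: theta = K*p / (1 + K*p).

    The abstract reports that the sensor's capacitance response to H2
    and NH3 follows this law; K (equilibrium constant) and the
    pressures below are illustrative values, not measured data."""
    p = np.asarray(p, dtype=float)
    return K * p / (1.0 + K * p)

pressures = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
theta = langmuir_coverage(pressures, K=1.0)   # saturates toward 1
```

The saturating shape is exactly the nonlinearity the capacitance measurements revealed: response grows nearly linearly at low partial pressure and flattens as adsorption sites fill.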
Revi, Frank. "Measurement of two-dimensional concentration fields of a glycol-based tracer aerosol using laser light sheet illumination and microcomputer video image acquisition and processing." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/69291.
Includes bibliographical references (leaves 48-49).
The use of a tracer aerosol with a bulk density close to that of air is a convenient way to study the dispersal of pollutants in ambient room air flow. Conventional point measurement techniques do not permit the rapid and accurate determination of the concentration fields produced by the injection of such a tracer into a volume of air. An instantaneous two-dimensional distribution would aid in the characterization of flow and diffusion processes in the volume studied, and permit verification of theoretical models. A method is developed to measure such two-dimensional concentration fields using a laser light sheet to illuminate the plane of interest, which is captured and processed using current microcomputer-based video image acquisition and analysis technology. Point concentrations, determined optically using extinction of monochromatic illumination projected through the aerosol onto a photodetector, are used to calibrate the captured video images to determine actual concentration values. Accuracy, reproducibility, and maximum rate of data acquisition are evaluated by means of theoretical models of ambient air flow in a sealed box with point injection of the tracer, and in a duct of circular cross section with constant air velocity under both constant and pulsed injection scenarios.
by Frank Revi.
M.S.
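Extinction-based point measurements of the kind described above invert the Beer-Lambert law. A minimal sketch with assumed calibration constants:

```python
import numpy as np

def concentration_from_extinction(I, I0, epsilon, path_len):
    """Invert the Beer-Lambert law I = I0 * exp(-epsilon * c * L) for
    the concentration c; epsilon and path_len stand in for the
    calibration constants of the optical point measurement."""
    return -np.log(I / I0) / (epsilon * path_len)

# Round-trip sanity check with arbitrary illustrative values
eps, L, c_true = 0.8, 0.25, 3.0
I = 1.0 * np.exp(-eps * c_true * L)
c = concentration_from_extinction(I, 1.0, eps, L)
```

In the thesis setup, such point values calibrate the per-pixel brightness of the laser-sheet video frames, turning grayscale into concentration.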
Carstens, Jacobus Everhardus. "Fast generation of digitally reconstructed radiographs for use in 2D-3D image registration." Thesis, Link to the online version, 2008. http://hdl.handle.net/10019/1797.
Reynoso Pacheco, Helen Carolina. "Construcción de la Imagen: El uso de la luz natural bajo la perspectiva de Emmanuel Lubezki en la película El Renacido." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2020. http://hdl.handle.net/10757/654486.
This research paper analyzes the communicational phenomenon of Emmanuel Lubezki's photographic style through the camera's narrative and expressive applications in the film 'The Revenant'. It is worth mentioning that Lubezki's work is analyzed from light as a fundamental element of expressiveness, in such a way that the panoramic beauty of the natural landscapes becomes present and modifies the receptive and emotional attitudes of the spectator.
Research paper
Svoboda, Karel. "Fotografování s využitím světelného pole." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241965.
Pamplona, Vitor Fernando. "Interactive measurements and tailored displays for optical aberrations of the human eye." Biblioteca Digital de Teses e Dissertações da UFRGS, 2012. http://hdl.handle.net/10183/87586.
This thesis proposes light-field pre-warping methods for measuring and compensating for optical aberrations in focal imaging systems. Interactive methods estimate refractive conditions (NETRA) and model lens opacities (CATRA) of interaction-aware eyes and cameras using cost-efficient hardware apps for high-resolution screens. Tailored displays use stereo-viewing hardware to compensate for the measured visual aberrations and display in-focus information that avoids the need for corrective eyeglasses. A light-field display, positioned very close to the eye, creates virtual objects in a wide range of predefined depths through different sectors of the eye's aperture. This platform creates a new range of interactivity that is extremely sensitive to spatially-distributed optical aberrations. The ability to focus on virtual objects, interactively align displayed patterns, and detect variations in shape and brightness allows the estimation of the eye's point spread function and its lens' accommodation range. While conventional systems require specialized training, costly devices, and strict security procedures, and are usually not mobile, this thesis simplifies the mechanism by putting the human subject in the loop. Captured data are transformed into refractive conditions in terms of spherical and cylindrical powers, axis of astigmatism, focal range, and aperture maps for opacity, attenuation, contrast, and sub-aperture point-spread functions. These optical widgets, carefully designed for interactive interfaces, plus computational analysis and reconstruction, establish the field of computational ophthalmology. The overall goal is to allow a general audience to operate portable light-field displays to gain a meaningful understanding of their own visual conditions. Ubiquitous, updated, and accurate diagnostic records can make private and public displays show information at a resolution that goes beyond the viewer's visual acuity.
The new display technology is able to compensate for refractive errors and avoid light-scattering paths. Tailored displays free the viewer from needing wearable optical corrections when looking at them, expanding the notion of glasses-free multi-focus displays to add individual variabilities. This thesis includes proof-of-concept designs for ophthalmic devices and tailored displays. User evaluations and validations with modified camera optics are performed. Capturing the daily variabilities of an individual's sensory system is expected to unleash a new era of high-quality tailored consumer devices.
Verhack, Ruben [Verfasser], Peter [Akademischer Betreuer] Lambert, Thomas [Akademischer Betreuer] Sikora, Glenn van [Gutachter] Wallendael, Klaus [Gutachter] Obermeyer, Christine [Gutachter] Guillemot, Jean-François [Gutachter] Macq, and Tim [Gutachter] Wauters. "Steered mixture-of-experts for image and light field representation, processing, and coding : a universal approach for immersive experiences of camera-captured scenes / Ruben Verhack ; Gutachter: Glenn van Wallendael, Klaus Obermeyer, Christine Guillemot, Jean-François Macq, Tim Wauters ; Peter Lambert, Thomas Sikora." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1223981444/34.
Full text of the source
Lu, Heqi. "Echantillonage d'importance des sources de lumières réalistes." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0001/document.
Full text of the source
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive because they capture far-field and near-field effects faithfully and can be acquired directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation. We introduce three novel methods. The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution. The performance remains interactive as long as the visibility is computed using our shadow-map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach.
Since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for Multiple Importance Sampling. This allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we find a good balance between different sampling techniques without any prior knowledge. Our method is effective: it reduces the variance, on average, for all of our test scenes with different light sources, visibility complexity, and materials. It is also efficient, since the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
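As background to the balancing idea above, the standard way to combine a light-sampling strategy with a BRDF-sampling strategy is Veach's balance heuristic. The sketch below is a minimal 1D toy, not the thesis's method: the second-order variance-minimizing weights described in the abstract would replace the fixed balance-heuristic weights, and the integrand and PDFs here are purely illustrative.

```python
import math
import random

# Toy 1D integrand with two narrow peaks: strategy A (uniform) must
# find the first peak by luck, strategy B (ramp pdf) favours x near 1.
def f(x):
    return math.exp(-(x - 0.2) ** 2 / 0.002) + math.exp(-(x - 0.8) ** 2 / 0.002)

def pdf_a(x):
    return 1.0                          # uniform on [0, 1]

def pdf_b(x):
    return 2.0 * x                      # ramp favouring x near 1

def sample_b():
    return math.sqrt(random.random())   # inverse-CDF sampling of pdf_b

def mis_estimate(n, seed=0):
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        # one sample per strategy, weighted with the balance heuristic
        xa = random.random()
        wa = pdf_a(xa) / (pdf_a(xa) + pdf_b(xa))
        total += wa * f(xa) / pdf_a(xa)
        xb = sample_b()
        wb = pdf_b(xb) / (pdf_a(xb) + pdf_b(xb))
        total += wb * f(xb) / pdf_b(xb)
    return total / n

print(mis_estimate(20000))  # ≈ 0.159, close to the true integral of f
```

The estimator stays unbiased for any weighting that sums to one per sample; the thesis's contribution is choosing those weights to minimize variance rather than fixing them a priori.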
Vanhoey, Kenneth. "Traitement conjoint de la géométrie et de la radiance d'objets 3D numérisés." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD005/document.
Full text of the source
The vision and computer graphics communities have built methods for digitizing, processing, and rendering 3D objects. There is an increasing demand from cultural heritage communities for these technologies, especially for archiving, remotely studying, and restoring cultural artefacts such as statues, buildings, or caves. Besides digitizing geometry, there can be a demand for recovering the photometry with more or less complexity: simple textures (2D), light fields (4D), SV-BRDF (6D), etc. In this thesis, we present robust solutions for constructing and treating surface light fields, represented by hemispherical radiance functions attached to the surface, in real-world on-site conditions. First, we tackle the algorithmic reconstruction phase of defining these functions from photographic acquisitions taken from several viewpoints in real-world on-site conditions, where the photographic sampling may be unstructured, very sparse, or noisy. We propose a process for deducing the functions in a manner that is robust and generates a surface light field ranging from "expected" and artefact-free to high quality, depending on the uncontrolled conditions. Second, a mesh simplification algorithm is guided by a new metric that measures quality loss in terms of both geometry and radiance. Finally, we propose a GPU-compatible algorithm for coherent radiance interpolation over the mesh. This generates a smooth visualization of the surface light field, even for poorly tessellated meshes, and is particularly suited to heavily simplified models.
Nováček, Petr. "Moderní prostředky pro digitální snímání scény." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221292.
Full text of the source
Frugier, Pierre Antoine. "Quantification 3D d’une surface dynamique par lumière structurée en impulsion nanoseconde. Application à la physique des chocs, du millimètre au décimètre." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112129.
Full text of the source
A Structured Light System (SLS) is an efficient means of measuring a surface topography, as it features both high accuracy and dense spatial sampling in a strictly non-invasive way. For these reasons, it has become a technique of reference in recent years. The aim of this PhD is to bring the technique to the field of shock physics. Experiments involving shocks are very specific: they only allow single-shot acquisition of extremely short phenomena occurring over a large range of spatial extents (from a few millimetres to decimetres). To address these difficulties, we have adopted a well-known high-speed technique: pulsed laser illumination. The first part of the work evaluates the key parameters that must be taken into account to obtain sharp acquisitions. The study demonstrates that speckle and the depth-of-field limitation are of particular importance; we provide an effective way to smooth speckle in the nanosecond regime, leaving 14% residual contrast. The second part introduces an original projective formulation for object-point reconstruction. This geometric approach is rigorous; it involves no weak-perspective assumptions or geometric constraints (such as requiring the camera and projector optical axes to cross in object space). From this formulation a calibration procedure is derived; we demonstrate that any structured-light system can be calibrated by extending the Direct Linear Transformation (DLT) photogrammetric approach to SLS. We further show that reconstruction uncertainties can be derived a priori from the proposed model: the accuracy of the reconstruction depends both on the configuration of the instrument and on the object shape itself. We finally introduce a procedure for optimizing the configuration of the instrument to lower the uncertainties for a given object.
Since depth of field limits the smallest measurable field extent, the third part focuses on extending it through pupil coding. We present an original way of designing phase components, based on criteria and metrics defined in Fourier space. The design of a binary annular phase mask is demonstrated theoretically and experimentally; it tolerates a defocus as high as Ψ ≥ ±40 radians without any image processing. We also demonstrate that masks designed with our method can restore extremely high defoci (Ψ ≈ ±100 radians) after processing, extending the depth of focus by amounts unseen before. Finally, the fourth part presents experimental measurements obtained with the setup in different high-speed regimes and at different scales. Fielded on the LULI2000 high-energy laser facility, it measured the deformation and dynamic fragmentation of a carbon sample. Sub-millimetric deformations measured in the ultra-high-speed regime on a copper cylinder under pyrotechnic loading are also presented.
Wang, Neng-Chien, and 王能謙. "Image Deblurring Technologies for Large Images and Light Field Images." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/23708713883408629099.
Full text of the source
National Taiwan University
Graduate Institute of Communication Engineering
104
Image processing has been developed for a long time. This thesis consists of two parts: we first present the proposed image deblurring techniques, and then the proposed light field deblurring algorithm. The literature on image deblurring can be categorized into two classes: blind and non-blind deconvolution. First, we try to improve the efficiency of non-blind deconvolution for ultra-high-resolution images, where the complexity of deblurring grows considerably. We modify the "Fast Image Deconvolution" algorithm proposed by Krishnan in 2009. To reduce complexity, we process the image in blocks and find the division that minimizes computation. Since merging the block results directly would cause blocking artifacts, neighbouring sub-images are overlapped and blended with linear weights. The amount of overlap determines both computing time and quality: less overlap is more efficient but gives worse results, so we choose an overlap size that balances the two. The second topic is light field deblurring. A light field camera captures both position and angle information in a single shot, so we can reconstruct the depth of the scene and obtain stereoscopic images. A light field camera is built around a lens array, and each lens yields a sub-image. To render the image, we must obtain the disparity of each microimage pair, from which the depth information can be estimated. We first obtain the relationship among the microlenses by regression analysis, then use the white image to compensate the luminance at the edge of every microimage, and use quad-trees to compute disparity more precisely. Moreover, we use image-based rendering to improve the quality of the reconstructed image.
After rendering the image, we apply image segmentation to separate the objects, estimate the depth of every object from its disparity, and thereby reconstruct the depth map of the whole image.
Chuang, Shih-Chung, and 莊士昌. "Image Rendering Techniques and Depth Recovery for Light field images." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/56855697872267415454.
Full text of the source
National Taiwan University
Graduate Institute of Communication Engineering
103
Since the first commercial hand-held plenoptic camera was presented by Ng in 2012, the applications and research around plenoptic cameras have grown richer. The major difference from a traditional camera is that a plenoptic camera captures the angular information of the scene and can trade spatial resolution against angular information. With a plenoptic camera, the information obtained from a single shot is enriched: we can reconstruct the depth of the scene and render images from different views. Nonetheless, depth reconstruction and rendering are more complicated than for a traditional camera, since a plenoptic camera consists of a lens array and each lens produces a microimage. In addition, to render the image precisely, we must first obtain the disparity of each microimage pair and hence the depth information. Because the rendering problem is closely related to scene depth, rendering and depth reconstruction must be handled simultaneously or in sequence. In this thesis, we first obtain the relationship among the microlenses by regression analysis. Then we use stereo matching to recover the depth of the scene and image-based rendering to improve the quality of the reconstructed image. Besides, we use quad-trees and the white image to improve the performance of the proposed method. Finally, we compare the proposed algorithm with previous work on rendering and depth reconstruction from plenoptic-camera microimages and show that it performs better.
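The stereo-matching step described above, recovering a disparity per microimage pair, can be illustrated with a minimal sum-of-absolute-differences block matcher. This is a generic local matcher on a synthetic pair, not the thesis's algorithm; the window size, search range, and test data are illustrative.

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=3):
    """Per-pixel disparity by minimizing SAD cost over a small window."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    pad = win // 2
    L = np.pad(left, pad, mode='edge')
    R = np.pad(right, pad, mode='edge')
    for y in range(h):
        for x in range(w):
            best, best_d = None, 0
            for d in range(max_disp + 1):
                if x - d < 0:
                    break
                patch_l = L[y:y + win, x:x + win]
                patch_r = R[y:y + win, x - d:x - d + win]
                cost = np.abs(patch_l - patch_r).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the left view is the right view shifted by 2 pixels,
# so the ground-truth disparity is 2 everywhere away from the wrap-around.
rng = np.random.default_rng(0)
right = rng.random((20, 40))
left = np.roll(right, 2, axis=1)
d = sad_disparity(left, right, max_disp=4)
print(int(np.median(d[:, 4:])))  # → 2
```

Real light field pipelines aggregate such costs over many view pairs and refine to sub-pixel precision, but the cost-and-argmin structure is the same.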
Huang, Wei-Hao, and 黃威豪. "Face Segmentation In Portrait Images by Light Field Camera." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/25433666517493577592.
Full text of the source
National Chung Hsing University
Graduate Institute of Communication Engineering
103
Digital cameras have become increasingly popular in recent years, and portrait photography is one of their major applications. Because a conventional digital camera focuses on a single point and can identify only clearly focused faces, it can hardly identify all the faces in an image. The light field camera, presented by the Lytro company in 2012, can focus on any point of an image. Since it offers no face segmentation function yet, this research uses that capability to enhance face segmentation. In our method, a number of images focused on different faces are first taken, exploiting the ability of the light field camera to focus on any point. The images are then transformed into binary images by thresholding on the skin tone of the faces, and connected component labeling is used to find all the candidate faces. To obtain the relative depth between the faces, an experiment measures threshold values for shape-from-focus and a sharpness index, which help determine the correct positions of the faces. The experimental results show that our method can extract all the possible faces in the images, which can be used to adjust the focus of the light field camera or for a subsequent face recognition task.
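The two front-end steps named in this abstract, skin-tone thresholding followed by connected component labeling, can be sketched as follows. The RGB skin rule below is a common textbook heuristic, not the thesis's calibrated thresholds, and the blob test image is synthetic.

```python
import numpy as np
from collections import deque

def skin_mask(rgb):
    """Common heuristic RGB skin-tone rule (illustrative thresholds)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)

def label_components(mask):
    """4-connected component labeling by breadth-first flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        labels[y, x] = current
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels, current

# Two separate skin-coloured blobs → two face candidates.
img = np.zeros((10, 10, 3), dtype=np.uint8)
img[1:4, 1:4] = (200, 120, 90)   # blob 1
img[6:9, 6:9] = (190, 110, 80)   # blob 2
labels, n = label_components(skin_mask(img))
print(n)  # → 2
```

In the thesis's pipeline, each labeled component would then be ranked by the focus and sharpness measures to decide which refocused frame shows it best.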
Lourenço, Rui Miguel Leonel. "Structure tensor-based depth estimation from light field images." Master's thesis, 2019. http://hdl.handle.net/10400.8/3927.
Full text of the source
Chang, Ruei-Yu, and 張瑞宇. "Fully Convolutional Networks Based Reflection Separation for Light Field Images." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/k6m8qh.
Full text of the source
National Central University
Department of Communication Engineering
107
Existing reflection separation schemes designed for multi-view images cannot be applied to light field images because of the dense light fields with narrow baselines. To improve the accuracy of the reconstructed background (i.e., the transmitted layer), most light-field-based reflection separation schemes estimate a disparity map before separation. Unlike previous work, this thesis uses the existing EPINET, originally designed for disparity estimation of reflection-free light field images, to separate mixed light field images with weak reflections. At the training stage, the network takes multi-view image stacks along the principal directions of the light field data as inputs, and significant convolutional features of the background layer are learned end to end. The FCN then learns to predict the pixel-wise gray-scale values of the background layer of the central view. Our experimental results show that the background layer can be reconstructed effectively using EPINET and the mixed light field image dataset proposed in this thesis.
Tsai, Yu-Ju, and 蔡侑儒. "Estimate Disparity of Light Field Images by Deep Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/k32675.
Full text of the source
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
107
In this thesis, we introduce a light field depth estimation method based on a convolutional neural network. A light field camera captures the spatial and angular properties of light in a scene, from which depth information can be computed. However, the narrow baseline of light field cameras makes depth estimation difficult. Many approaches address this limitation, but they trade off speed against accuracy. We exploit the repetitive structure of the light field and the redundancy of its sub-aperture views. First, we integrate the repetitive structure of the light field into our network design. Second, by applying attention-based sub-aperture view selection, the network learns by itself which views are most useful. Finally, we compare our experimental results with other state-of-the-art methods to show our improvement in light field depth estimation.
Chang, Po-Yi, and 張博一. "Depth Estimation Using Adaptive Support-Weight Approach for Light Field Images." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5d6h37.
Full text of the source
National Central University
Department of Communication Engineering
106
Light field cameras acquire multi-view images using a microlens array. Because of the narrow baseline between views, sub-pixel accuracy of the estimated disparity is required. The adaptive support-weight (ASW) approach is a local disparity estimation method. Although several ASW-based disparity estimation schemes for light field images have been proposed, they did not consider sub-pixel accuracy. This thesis therefore improves ASW-based depth estimation for light field images. Before disparity estimation, bicubic interpolation is applied to the light field images to reach sub-pixel accuracy. The ASW approach then estimates disparities, with a cross window adopted to reduce computational complexity; the intersection of the vertical and horizontal arms is dynamically adjusted at image borders. We also increase the weights of pixels with a higher edge response. Finally, the disparities estimated from the views sharing the same horizontal position are combined to generate the disparity map of the central view. Our experimental results show that the average error rate of the proposed method is lower than that of the EPI-based adaptive window matching approach by 5.4%.
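The weighting at the core of the ASW approach can be sketched in its classic Yoon–Kweon form: each neighbour's matching cost is weighted by its colour similarity and spatial proximity to the window centre. This is a baseline square-window sketch, assuming illustrative gamma values; the thesis's cross-window aggregation, edge-response reweighting, and bicubic upsampling are not shown.

```python
import numpy as np

def asw_weights(patch, gamma_c=10.0, gamma_p=10.0):
    """Support weights from colour similarity and spatial proximity."""
    h, w = patch.shape[:2]
    cy, cx = h // 2, w // 2
    dc = np.linalg.norm(patch - patch[cy, cx], axis=-1)  # colour distance
    ys, xs = np.mgrid[0:h, 0:w]
    dp = np.hypot(ys - cy, xs - cx)                      # spatial distance
    return np.exp(-(dc / gamma_c + dp / gamma_p))

def asw_cost(left_patch, right_patch):
    """Weighted matching cost using symmetric weights from both views."""
    w = asw_weights(left_patch) * asw_weights(right_patch)
    e = np.abs(left_patch - right_patch).sum(axis=-1)    # per-pixel AD cost
    return (w * e).sum() / w.sum()

rng = np.random.default_rng(1)
p = rng.random((7, 7, 3))
print(asw_cost(p, p))                            # → 0.0 for identical patches
print(asw_cost(p, rng.random((7, 7, 3))) > 0)    # → True for mismatched ones
```

In a full matcher, `asw_cost` replaces the plain SAD cost inside the disparity search, so edges and depth discontinuities stop bleeding into neighbouring regions.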
Wang, Yen-Chang, and 王嚴璋. "Generating Label Map and Extending Depth of Field from Different Focus Images Obtained by Means of Light Field Camera." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/94835126278171932887.
Full text of the source
National Tsing Hua University
Department of Electrical Engineering
100
Recently, the invention of the hand-held light field camera sparked a revolution in photography: such a camera records not only the intensity of light but also its direction. With this additional information we can implement applications such as digital refocusing, viewpoint shifting, and depth estimation, which are useful in computer vision, computer graphics, and machine vision. In this thesis, we concentrate on digital refocusing, which easily produces a series of images with different focal lengths. We design and minimize an energy function to obtain a label map that gives, for each pixel, the index of the sharpest image. The energy function is driven primarily by a pixel-based AFM term, with a region-based AdaBoost classification term as a secondary cue. This energy function yields a more robust result than the traditional depth-from-focus (DFF) method. Using the label map, we also generate a virtual all-focus image for further applications. We use the Lytro light field camera to capture real-world scenes and refocus each into a set of images with different focal lengths. For each pixel, we compute the cost of each label with the energy function described above, and finally generate a label map and a virtual all-focus image with our algorithm.
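The label-map idea above can be sketched with the data term alone: pick, per pixel, the focal-stack frame with the strongest local sharpness response, then composite an all-in-focus image from the labels. A plain Laplacian magnitude stands in for the thesis's AFM measure, and the AdaBoost region term and energy minimization are omitted.

```python
import numpy as np

def laplacian(img):
    """Absolute 4-neighbour Laplacian response as a simple focus measure."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = np.abs(4 * img[1:-1, 1:-1] - img[:-2, 1:-1]
                             - img[2:, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
    return out

def label_map(stack):
    """Index of the sharpest frame for each pixel."""
    sharpness = np.stack([laplacian(f) for f in stack])
    return sharpness.argmax(axis=0)

def all_in_focus(stack, labels):
    """Composite each pixel from the frame its label points to."""
    stack = np.stack(stack)
    return np.take_along_axis(stack, labels[None], axis=0)[0]

# Synthetic 2-frame stack: frame 0 is "in focus" (textured) on the left
# half, frame 1 on the right half; elsewhere each frame is flat.
h, w = 16, 16
checker = (np.add.outer(np.arange(h), np.arange(w)) % 2) * 1.0
f0 = np.zeros((h, w)); f0[:, :8] = checker[:, :8]
f1 = np.zeros((h, w)); f1[:, 8:] = checker[:, 8:]
lab = label_map([f0, f1])
print(lab[8, 2], lab[8, 13])  # → 0 1
```

The thesis's energy minimization then smooths these raw per-pixel labels, which is what makes the final label map robust where a pure data term is noisy.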
Yang, Hao-Hsueh, and 楊浩學. "Depth Estimation Based on Segmentation, Superpixel Auto Adjustment and Local Matching Algorithm for Stereo Matching and Light Field Images." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/12781767897456096386.
Full text of the source
National Taiwan University
Graduate Institute of Communication Engineering
105
Since the release of the plenoptic camera in November 2012, research on light field cameras has become increasingly popular. The main difference between a plenoptic camera and a traditional camera is that the former acquires the angular information of the light rays. With only one shot, we can reconstruct the depth of the scene and render the micro images into one final image from different views; we can also change the focal distance to bring near or far objects into focus. These are the appealing advantages of the plenoptic camera. Stereo matching is also a popular research topic, since depth information can be obtained from a pair of left- and right-view images, and accurate depth information enables many applications. Moreover, the concept of stereo matching can be used in light field image rendering to obtain better results. This thesis has three parts. The first enhances the original light field rendering technique by adding a better local matching algorithm. The second concerns stereo matching: we use segmentation to assist matching, propose an automatic adjustment method that decides the best number of superpixels for each image, and introduce a new local matching algorithm that is especially efficient for stereo matching after segmentation, together with techniques that further improve the results. The third part is a new depth estimation method for light field images, especially those whose depth is hard to estimate by stereo matching; it recovers depth based on segmentation and images at different focal distances.
Medeiros, João Diogo Gameiro. "Depth extraction in 3D holoscopic images." Master's thesis, 2018. http://hdl.handle.net/10071/17860.
Full text of the source
Holoscopy is a technology that has emerged as an alternative to traditional methods of image capture and 3D content visualization. The capture process uses a light field camera, which, unlike traditional cameras, stores the direction of all light rays. With the stored information it is possible to generate a depth map of the image, which can be useful in areas such as robotic navigation or medicine. This dissertation proposes to improve an existing solution by developing new processing mechanisms that allow a dynamic balance between computational speed and accuracy. All proposed solutions were implemented using CPU parallelization so that computation time could be substantially reduced. The proposed algorithms were subjected to quality tests using the Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Structural Similarity Index Method (SSIM) metrics. A comparative analysis of the processing times of the proposed algorithms and the original solutions was also carried out. The results achieved were very satisfactory: a marked reduction in processing time was recorded for all the implemented solutions without substantially affecting the accuracy of the estimates.
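The three metrics named in this abstract are straightforward to compute on a predicted versus reference depth map. A minimal sketch: the SSIM here is the single-window global-statistics form rather than the usual windowed average, and the test data are synthetic.

```python
import numpy as np

def mae(a, b):
    """Mean Absolute Error."""
    return np.mean(np.abs(a - b))

def rmse(a, b):
    """Root Mean Square Error."""
    return np.sqrt(np.mean((a - b) ** 2))

def ssim_global(a, b, L=1.0):
    """SSIM over global image statistics (no sliding window)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

# Synthetic "reference" depth map and a noisy "prediction".
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
pred = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)

print(mae(ref, ref))                    # → 0.0
print(round(ssim_global(ref, ref), 6))  # → 1.0
print(mae(ref, pred) < 0.1)             # → True
```

By construction RMSE is never smaller than MAE, and SSIM reaches 1 only for identical inputs, which is why the three together give complementary views of depth-map quality.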
Tavares, Paulo José da Silva. "Three-dimensional geometry characterization using structured light fields." Doctoral thesis, 2008. http://hdl.handle.net/10216/59466.
Full text of the source
Lin, Ren Jie, and 林仁傑. "Achromatic metalens array for light field image." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/36sm3w.
Full text of the source
National Taiwan University
Graduate Institute of Applied Physics
106
Vision is the most important system through which living creatures perceive surrounding environmental information. Compared with the human eye, the insect's visual system is composed of an array of tiny eyes, known as compound eyes. Such a visual system has a large field of view and the advantage of estimating the depth of objects. These features have motivated attempts to develop similar optical imaging systems, such as light field cameras. A light field image records the position and direction of the light rays in the target scene, which can be captured using a microlens array. Compared with a conventional imaging system, a four-dimensional light field imaging system provides not only the two-dimensional intensity but also the two-dimensional momentum information of light, which enables the scene to be reconstructed as refocused images together with the depth of objects. However, most fabrication processes, such as electron-beam lithography, UV lithography, photoresist melting, and nanoimprint lithography, struggle to produce precisely shaped, low-defect microlens arrays, or arrays combining different forms (convex or concave) or arbitrary numerical apertures (NA). Metasurfaces, the two-dimensional metamaterials, have emerged as one of the most rapidly growing fields of nanophotonics. They have attracted extensive research interest because their exceptional optical properties and compact size provide technical solutions for cutting-edge optical applications such as imaging, polarization conversion, nonlinear components, and holography. Recently, the chromatic aberration of metasurfaces, which results from the resonance of the nanoantennas and the intrinsic dispersion of the constituent materials, has been eliminated in the visible region by incorporating an integrated-resonant unit element, giving rise to a surge of imaging applications using metalenses.
Here, we propose a light field imaging system with an ultra-compact, flat GaN achromatic metalens array, free of spherical aberration, to acquire four-dimensional light field information. Using this platform and a rendering algorithm, we can reconstruct the scene slice by slice as a series of images at arbitrary focal depths, together with the depth of objects. Compared with a microlens array, the advantages of our metalens array are achromatism, freedom from spherical aberration, arbitrarily designable focal length and numerical aperture, and direct integration with CMOS/CCD sensors through semiconductor fabrication processes.
CHIANG, TAI-I., and 姜太乙. "Face Recognition Based on Moment Light Field Image." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/c9dcr5.
Full text of the source
National Kaohsiung University of Applied Sciences
Graduate Institute of Electro-Optical and Communication Engineering
104
To model images of faces, it is important to have a framework for how a face can change from moment to moment. Faces vary widely, but the changes can be broken down into two parts: variations in lighting and variations in expression across individuals. In this thesis, a face recognition system based on moment images is proposed to model faces under various lighting conditions. The continuity equation is used to extract the first angular moments of the light field so as to construct views under different light sources for recognition. The method reduces the time cost by requiring only a few images from the training set. The experiments were extensively assessed with the CMU-PIE face database, and our method shows notable performance.