Academic literature on the topic 'Light field images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Light field images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Light field images"

1

Garces, Elena, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, and Diego Gutierrez. "Intrinsic Light Field Images." Computer Graphics Forum 36, no. 8 (May 5, 2017): 589–99. http://dx.doi.org/10.1111/cgf.13154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lee, Seung-Jae, and In Kyu Park. "Dictionary Learning based Superresolution on 4D Light Field Images." Journal of Broadcast Engineering 20, no. 5 (September 30, 2015): 676–86. http://dx.doi.org/10.5909/jbe.2015.20.5.676.

3

Yan, Tao, Yuyang Ding, Fan Zhang, Ningyu Xie, Wenxi Liu, Zhengtian Wu, and Yuan Liu. "Snow Removal From Light Field Images." IEEE Access 7 (2019): 164203–15. http://dx.doi.org/10.1109/access.2019.2951917.

4

Yu, Li, Yunpeng Ma, Song Hong, and Ke Chen. "Review of Light Field Image Super-Resolution." Electronics 11, no. 12 (June 17, 2022): 1904. http://dx.doi.org/10.3390/electronics11121904.

Abstract:
Currently, light fields play important roles in industry, including in 3D mapping, virtual reality and other fields. However, as a kind of high-dimensional data, light field images are difficult to acquire and store. Thus, the study of light field super-resolution is of great importance. Compared with traditional 2D planar images, 4D light field images contain information from different angles in the scene, and thus the super-resolution of light field images needs to be performed not only in the spatial domain but also in the angular domain. In the early days of light field super-resolution research, many solutions for 2D image super-resolution, such as Gaussian models and sparse representations, were also used in light field super-resolution. With the development of deep learning, light field image super-resolution solutions based on deep-learning techniques are becoming increasingly common and are gradually replacing traditional methods. In this paper, the current research on light field image super-resolution, including traditional methods and deep-learning-based methods, is outlined and discussed separately. This paper also lists publicly available datasets, compares the performance of various methods on these datasets, and analyses the importance of light field super-resolution research and its future development.
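The distinction the abstract draws between spatial and angular super-resolution can be made concrete with a toy sketch; the array sizes and the nearest-neighbour upsampling below are purely illustrative, not the methods surveyed in the paper:

```python
import numpy as np

# Hypothetical 4D light field: (U, V) angular views, each H x W spatial pixels.
U, V, H, W = 5, 5, 32, 48
lf = np.random.rand(U, V, H, W)

def spatial_sr_nearest(lf, f):
    """Upsample every sub-aperture view in the spatial domain (nearest-neighbour)."""
    return lf.repeat(f, axis=2).repeat(f, axis=3)

def angular_sr_nearest(lf, f):
    """Densify the view grid in the angular domain by repeating neighbour views."""
    return lf.repeat(f, axis=0).repeat(f, axis=1)

hi_spatial = spatial_sr_nearest(lf, 2)   # (5, 5, 64, 96): finer pixels per view
hi_angular = angular_sr_nearest(lf, 2)   # (10, 10, 32, 48): more views
print(hi_spatial.shape, hi_angular.shape)
```

Real methods replace the nearest-neighbour operator with learned or sparse-coding predictors, but the two axes being upsampled are exactly these.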
5

Kobayashi, Kenkichi, and Hideo Saito. "High-Resolution Image Synthesis from Video Sequence by Light Field." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 254–62. http://dx.doi.org/10.20965/jrm.2003.p0254.

Abstract:
We propose a novel method to synthesize high-resolution images from image sequences taken with a moving video camera. Each frame in the image sequence captures part of the photographed object. Our method integrates these frames to generate high-resolution images of the object by constructing a light field, which is quite different from general mosaicing methods. In light fields constructed straightforwardly, blur and discontinuity are introduced into synthesized images by depth variation of the object. In our method, the light field is optimized to remove blur and discontinuity so that clear images can be synthesized. We find the optimum light field for generating sharp, unblurred images by reparameterizing the light field and evaluating the sharpness of the images synthesized from each candidate light field. The optimized light field adapts to the depth variation of the object surface, but the exact shape of the object is not necessary. High-resolution images that are impractical to capture with a real camera system can be virtually synthesized from the light field. Results of an experiment applied to a book surface demonstrate the effectiveness of the proposed method.
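The idea of scoring candidate light fields by the sharpness of the images they synthesize can be illustrated with a common focus measure, the variance of a discrete Laplacian response; this is a generic stand-in, not the authors' actual sharpness criterion:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of the discrete Laplacian of the image.
    Blur suppresses high frequencies, lowering the score."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))                        # high-frequency "texture"
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, 1, (0, 1))) / 4.0       # crude 2x2 box blur
print(laplacian_variance(sharp) > laplacian_variance(blurred))
```

An optimizer over light field parameterizations would keep the candidate that maximizes such a score across the synthesized views.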
6

Sun, Junyang, Jun Sun, Chuanlong Xu, Biao Zhang, and Shimin Wang. "A Calibration Method of Focused Light Field Cameras Based on Light Field Images." Acta Optica Sinica 37, no. 5 (2017): 0515002. http://dx.doi.org/10.3788/aos201737.0515002.

7

KOMATSU, Koji, Kohei ISECHI, Keita TAKAHASHI, and Toshiaki FUJII. "Light Field Coding Using Weighted Binary Images." IEICE Transactions on Information and Systems E102.D, no. 11 (November 1, 2019): 2110–19. http://dx.doi.org/10.1587/transinf.2019pcp0001.

8

Yamauchi, Masaki, and Tomohiro Yendo. "Light field display using wavelength division multiplexing." Electronic Imaging 2020, no. 2 (January 26, 2020): 101–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-101.

Abstract:
We propose a large-screen 3D display that enables multiple viewers to watch simultaneously without special glasses. Prior research proposed methods using a projector array or a swinging screen. However, the former makes installing and adjusting a large number of projectors difficult, and the latter causes vibration and noise because of the mechanical motion of the screen. Our proposed display consists of a wavelength-modulation projector and a spectroscopic screen. The screen shows images whose color depends on the viewing point. The projector projects binary images onto the screen in time-division according to the wavelength of the projection light, which changes at high speed over time. Therefore, the system can show 3D images to multiple viewers simultaneously by projecting the proper image for each viewing point. Installation of the display is easy, and no vibration or noise occurs, because only one projector is used and the screen has no mechanical motion. We conducted simulations and confirmed that the proposed display can show 3D images to multiple viewers simultaneously.
9

Xiao, Bo, Xiujing Gao, and Hongwu Huang. "Optimizing Underwater Image Restoration and Depth Estimation with Light Field Images." Journal of Marine Science and Engineering 12, no. 6 (June 2, 2024): 935. http://dx.doi.org/10.3390/jmse12060935.

Abstract:
Methods based on light field information have shown promising results in depth estimation and underwater image restoration. However, improvements are still needed in terms of depth estimation accuracy and image restoration quality. Previous work on underwater image restoration employed an image formation model (IFM) that overlooked the effects of light attenuation and scattering coefficients in underwater environments, leading to unavoidable color deviation and distortion in the restored images. Additionally, the high blurriness and associated distortions in underwater images make depth information extraction and estimation very challenging. In this paper, we refine the light propagation model and propose a method to estimate the attenuation and backscattering coefficients of the underwater IFM. We simplify these coefficients into distance-related functions and design a relationship between distance and the darkest channel to estimate the water coefficients, effectively suppressing color deviation and distortion in the restoration results. Furthermore, to increase the accuracy of depth estimation, we propose using blur cues to construct a cost for refocusing in the depth direction, reducing the impact of high signal-to-noise ratio environments on depth information extraction, and effectively enhancing the accuracy and robustness of depth estimation. Finally, experimental comparisons show that our method achieves more accurate depth estimation and image restoration closer to real scenes compared to state-of-the-art methods.
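As a rough sketch of the image formation model (IFM) the abstract refines, the classic underwater model combines direct transmission with distance-dependent backscatter. The coefficients and per-pixel distances below are made-up values, and the paper's actual estimation of the water coefficients from the darkest channel is not shown:

```python
import numpy as np

def degrade(J, d, beta, B):
    """Simplified underwater IFM: observed = scene * transmission + backscatter.
    beta: per-channel attenuation coefficient; B: per-channel background light."""
    t = np.exp(-beta * d)                 # transmission falls off with distance
    return J * t + B * (1.0 - t)

def restore(I, d, beta, B):
    """Invert the IFM, assuming the coefficients and distances are known."""
    t = np.exp(-beta * d)
    return (I - B * (1.0 - t)) / np.maximum(t, 1e-6)

J = np.random.rand(4, 4, 3)               # hypothetical clean scene radiance
d = np.full((4, 4, 1), 2.0)               # per-pixel distance (assumed known)
beta = np.array([0.40, 0.20, 0.10])       # red attenuates fastest underwater
B = np.array([0.30, 0.35, 0.40])
I = degrade(J, d, beta, B)
print(np.allclose(restore(I, d, beta, B), J))  # -> True (exact round trip)
```

In practice the difficulty lies in estimating `d`, `beta`, and `B` from the degraded image itself, which is where the paper's distance-to-darkest-channel relationship comes in.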
10

Salem, Ahmed, Hatem Ibrahem, and Hyun-Soo Kang. "Light Field Reconstruction Using Residual Networks on Raw Images." Sensors 22, no. 5 (March 2, 2022): 1956. http://dx.doi.org/10.3390/s22051956.

Abstract:
Although Light-Field (LF) technology attracts attention due to its large number of applications, especially with the introduction of consumer LF cameras, reconstructing densely sampled LF images remains a great challenge for the use and development of LF technology. Our paper proposes a learning-based method to reconstruct densely sampled LF images from a sparse set of input images. We trained our model with raw LF images rather than multiple images of the same scene. A raw LF image represents the two-dimensional array of views captured in a single image. It therefore enables the network to understand and model the relationship between different views of the same scene well, and thus restore more texture details and provide better quality. Using raw images transforms the task from image reconstruction into image-to-image translation. Exploiting the small baseline of LF images, the views to be reconstructed were initialized with the nearest input view. Our network was trained end-to-end to minimize the sum of absolute errors between the reconstructed and ground-truth images. Experimental results on three challenging real-world datasets demonstrate the high performance of our proposed method and show that it outperforms state-of-the-art methods.
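The claim that a raw LF image represents the two-dimensional array of views captured in a single image amounts to an interleaving of sub-aperture samples under each microlens. A minimal round-trip sketch, with hypothetical dimensions and ignoring demosaicing and lens distortion:

```python
import numpy as np

# Hypothetical setup: a u x v angular grid of views, each H x W pixels.
u, v, H, W = 3, 3, 4, 5
views = np.arange(u * v * H * W, dtype=float).reshape(u, v, H, W)

def views_to_raw(lf):
    """Interleave sub-aperture views into a single raw 2D lenslet image:
    each u x v block corresponds to one microlens."""
    u, v, H, W = lf.shape
    return lf.transpose(2, 0, 3, 1).reshape(H * u, W * v)

def raw_to_views(raw, u, v):
    """Recover the u x v array of sub-aperture views from the raw image."""
    H, W = raw.shape[0] // u, raw.shape[1] // v
    return raw.reshape(H, u, W, v).transpose(1, 3, 0, 2)

raw = views_to_raw(views)             # shape (H*u, W*v) = (12, 15)
print(np.array_equal(raw_to_views(raw, u, v), views))  # -> True
```

Training on the raw mosaic, as the paper does, lets a 2D network see all views jointly in one image plane instead of as separate inputs.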

Dissertations / Theses on the topic "Light field images"

1

Zhang, Zhengyu. "Quality Assessment of Light Field Images." Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0002.

Abstract:
Light Field Images (LFIs) have garnered remarkable interest due to their burgeoning significance in immersive applications. Since LFIs may be distorted at various stages from acquisition to visualization, Light Field Image Quality Assessment (LFIQA) is vitally important for monitoring potential impairments of LFI quality. The first contribution (Chapter 3) of this work focuses on developing two handcrafted-feature-based No-Reference (NR) LFIQA metrics, in which texture information and wavelet information are exploited for quality evaluation. In the second part (Chapter 4), we explore the potential of combining deep learning with the quality assessment of LFIs, and propose four deep-learning-based LFIQA metrics according to different LFI characteristics, including three NR metrics and one Full-Reference (FR) metric. In the last part (Chapter 5), we conduct subjective experiments and propose a novel standard LFIQA database. Moreover, a benchmark of numerous state-of-the-art objective LFIQA metrics on the proposed database is provided.
2

Chiesa, Valeria. "Revisiting face processing with light field images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS059.pdf.

Abstract:
The main objective of this thesis is to present an unconventional acquisition technology, to study face-analysis performance using images collected with a special camera, to compare the results with those obtained by processing data from similar devices, and to demonstrate the benefit of using modern devices compared to the standard cameras used in biometrics. At the beginning of the thesis, the literature on face analysis using light field data was surveyed. The problem of the scarcity of biometric data (and in particular images of human faces) collected with plenoptic cameras was addressed through the systematic acquisition of a light field face database, now publicly available. Thanks to the collected data, it was possible to design and develop experiments in face analysis. In addition, an exhaustive baseline for a comparison between two RGB-D technologies was created to support prospective studies. Over the course of this thesis, interest in plenoptic technology applied to face analysis grew, and a study of algorithms dedicated to light field images became indispensable. A complete overview of existing methods was therefore produced.
3

Dricot, Antoine. "Light-field image and video compression for future immersive applications." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.

Abstract:
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies therefore appears as a major technical challenge, particularly with the promising light-field (LF) approach. The light field represents all the light rays (i.e. in all directions) in a scene. New devices for sampling/capturing the light field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist (e.g. 360° video, virtual reality, etc.). For several years now this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer market. Secondly, improvements and new coding schemes are proposed to increase compression performance in order to enable efficient light-field content transmission on future networks.
4

Dricot, Antoine. "Light-field image and video compression for future immersive applications." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0008.

5

McEwen, Bryce Adam. "Microscopic Light Field Particle Image Velocimetry." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3238.

Abstract:
This work presents the development and analysis of a system that combines the concepts of light field microscopy and particle image velocimetry (PIV) to measure three-dimensional velocities within a microvolume. Rectangular microchannels with dimensions on the order of 350-950 micrometers were fabricated using a photolithographic process and polydimethylsiloxane (PDMS). The flow was seeded with fluorescent particles and pumped through the microchannels at Reynolds numbers ranging from 0.016 to 0.028. A light field microscope with a lateral resolution of 6.25 micrometers and an axial resolution of 15.5 micrometers was designed and built based on the concepts described by Levoy et al. Light field images were captured continuously at 3.9 frames per second using a Canon 5D Mark II DSLR camera. Each image was post-processed to render a stack of two-dimensional images. The focal stacks were further post-processed using various methods, including bandpass filtering, 3D deconvolution, and intensity-based thresholding, to remove the effects of diffraction and blurring. Subsequently, a multi-pass, three-dimensional PIV algorithm was used to measure channel velocities. Results from the PIV analysis were compared with an analytical solution for fully developed cases, and with CFD simulations for developing flows. Relative errors for fully developed flow measurements, within the light field microscope refocusing range, were approximately 5% or less. Overall, the main limitations are the reduction in lateral resolution and the somewhat low axial resolution. Advantages include the relatively low cost, ease of incorporation into existing micro-PIV systems, simple self-calibration process, and potential for resolving instantaneous three-dimensional velocities in a microvolume.
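The core PIV step, estimating the displacement between two interrogation windows by locating a cross-correlation peak, can be sketched in 2D; this is a generic FFT-based correlator, not the multi-pass 3D algorithm used in the thesis:

```python
import numpy as np

def piv_displacement(a, b):
    """Integer-pixel displacement between two interrogation windows,
    from the peak of their circular cross-correlation (computed via FFT)."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices into the range [-N/2, N/2)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame1 = rng.random((32, 32))                          # synthetic particle image
frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))   # "particles" moved (3, -2)
print(piv_displacement(frame1, frame2))  # -> (3, -2)
```

A 3D light field PIV extends the same idea to focal stacks, correlating volumes instead of windows and refining with multiple passes.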
6

Souza, Wallace Bruno Silva de. "Transmissão progressiva de imagens sintetizadas de light field." reponame:Repositório Institucional da UnB, 2018. http://repositorio.unb.br/handle/10482/34206.

Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2018.
This work proposes an optimized rate-distortion method to transmit light field synthesized images. Briefly, a light field image can be understood as four-dimensional (4D) data with both spatial and angular resolution, where each two-dimensional subimage of this 4D data is a particular perspective, that is, a Sub-Aperture Image (SAI). This work aims to modify and improve a previous proposal named Progressive Light Field Communication (PLFC), which addresses image synthesis for different focal-point images requested by a user. Like PLFC, this work tries to provide enough information to the user so that, as the transmission progresses, he can synthesize his own focal-point images without the need to transmit new images. Thus, the first proposed modification concerns how the user's initial cache should be chosen, defining an ideal amount of SAIs to send at the beginning of the transmission. An improvement of the additional-image selection process is also proposed by means of a refinement algorithm, which is applied even in the cache initialization. This new selection process works with dynamic QPs (Quantization Parameters) during encoding and considers not only the immediate gains for the synthesized image but also the subsequent syntheses. This idea was already presented in PLFC but had not been satisfactorily implemented. Moreover, this work proposes an automatic way to calculate the Lagrange multiplier that controls the influence of the future benefit associated with the transmission of a SAI. Finally, a simplified manner of obtaining this future benefit is described, reducing the computational complexity involved. The utilities of such a system are diverse; for example, it can be used to identify some element in a light field image, adjusting the focus accordingly.
Besides the proposal, the obtained results are shown, and a discussion is made about the significant gains achieved, of up to 32.8% compared to the previous PLFC in terms of BD-Rate. This gain reaches up to 85.8% in relation to trivial light field data transmissions.
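The Lagrangian selection at the heart of this kind of rate-distortion-optimized transmission can be sketched as follows; the candidate SAIs, their distortion and rate numbers, and the lambda values are all illustrative, not taken from the dissertation:

```python
# Among candidate sub-aperture images (SAIs), send next the one minimizing
# J = D + lambda * R, where D is the synthesis distortion that would remain
# after the SAI is received and R its coding rate.
candidates = {
    # sai_id: (remaining_distortion_mse, rate_bits)  -- illustrative numbers
    "sai_00": (40.0, 12_000),
    "sai_21": (25.0, 30_000),
    "sai_34": (28.0, 14_000),
}

def next_sai(candidates, lam):
    """Pick the candidate with the lowest Lagrangian cost J = D + lam * R."""
    return min(candidates, key=lambda k: candidates[k][0] + lam * candidates[k][1])

print(next_sai(candidates, lam=1e-5))  # -> sai_21: tiny lambda favours low distortion
print(next_sai(candidates, lam=1e-2))  # -> sai_00: large lambda penalizes rate
```

The dissertation's contribution is, among other things, computing `lam` automatically and folding the *future* benefit of a SAI for later syntheses into `D`, which this sketch omits.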
7

Nieto, Grégoire. "Light field remote vision." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM051/document.

Abstract:
Les champs de lumière ont attisé la curiosité durant ces dernières décennies. Capturés par une caméra plénoptique ou un ensemble de caméras, ils échantillonnent la fonction plénoptique qui informe sur la radiance de n'importe quel rayon lumineux traversant la scène observée. Les champs lumineux offrent de nombreuses applications en vision par ordinateur comme en infographie, de la reconstruction 3D à la segmentation, en passant par la synthèse de vue, l'inpainting ou encore le matting par exemple.Dans ce travail nous nous attelons au problème de reconstruction du champ de lumière dans le but de synthétiser une image, comme si elle avait été prise par une caméra plus proche du sujet de la scène que l'appareil de capture plénoptique. Notre approche consiste à formuler la reconstruction du champ lumineux comme un problème de rendu basé image (IBR). La plupart des algorithmes de rendu basé image s'appuient dans un premier temps sur une reconstruction 3D approximative de la scène, appelée proxy géométrique, afin d'établir des correspondances entre les points image des vues sources et ceux de la vue cible. Une nouvelle vue est générée par l'utilisation conjointe des images sources et du proxy géométrique, bien souvent par la projection des images sources sur le point de vue cible et leur fusion en intensité.Un simple mélange des couleurs des images sources ne garantit pas la cohérence de l'image synthétisée. Nous proposons donc une méthode de rendu direct multi-échelles basée sur les pyramides de laplaciens afin de fusionner les images sources à toutes les fréquences, prévenant ainsi l'apparition d'artefacts de rendu.Mais l'imperfection du proxy géométrique est aussi la cause d'artefacts de rendu, qui se traduisent par du bruit en haute fréquence dans l'image synthétisée. 
Nous introduisons une nouvelle méthode de rendu variationnelle avec des contraintes sur les gradients de l'image cible dans le but de mieux conditionner le système d'équation linéaire à résoudre et supprimer les artefacts de rendu dus au proxy.Certaines scènes posent de grandes difficultés de reconstruction du fait du caractère non-lambertien éventuel de certaines surfaces~; d'autre part même un bon proxy ne suffit pas, lorsque des réflexions, transparences et spécularités remettent en cause les règles de la parallaxe. Nous proposons méthode originale basée sur l'approximation locale de l'espace plénoptique à partir d'un échantillonnage épars afin de synthétiser n'importe quel point de vue sans avoir recours à la reconstruction explicite d'un proxy géométrique. Nous évaluons notre méthode à la fois qualitativement et quantitativement sur des scènes non-triviales contenant des matériaux non-lambertiens.Enfin nous ouvrons une discussion sur le problème du placement optimal de caméras contraintes pour le rendu basé image, et sur l'utilisation de nos algorithmes pour la vision d'objets dissimulés derrière des camouflages.Les différents algorithmes proposés sont illustrés par des résultats sur des jeux de données plénoptiques structurés (de type grilles de caméras) ou non-structurés
Light fields have gathered much interest during the past few years. Captured with a plenoptic camera or a camera array, they sample the plenoptic function, which provides rich information about the radiance of any ray passing through the observed scene. They offer a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting or matting, for instance. Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we address the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate light field reconstruction as an image-based rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to establish correspondences between the input views and the target view. A new image is generated by the joint use of the input images and the geometric proxy, often by projecting the input images onto the target point of view and blending them in intensity. A naive color blending of the input images does not guarantee the coherence of the synthesized image. We therefore propose a direct multi-scale approach based on Laplacian pyramids to blend the source images at all frequencies, thus preventing rendering artifacts. However, the imperfection of the geometric proxy is also a main cause of rendering artifacts, which appear as high-frequency noise in the synthesized image.
We introduce a novel variational rendering method with gradient constraints on the target image, yielding a better-conditioned linear system to solve and removing the high-frequency noise due to the geometric proxy. Some scenes are very challenging to reconstruct because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities violate the rules of parallax. We propose an original method based on a local approximation of the plenoptic space from a sparse sampling, which generates a new viewpoint without any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces. Lastly, we discuss the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects that are hidden behind a camouflage. The proposed algorithms are illustrated with results on both structured (camera arrays) and unstructured plenoptic datasets.
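The multi-scale blending idea in this abstract can be illustrated with a minimal Laplacian-pyramid blend of two grayscale source images. This is a toy NumPy sketch under our own simplifying assumptions (box-filter downsampling, nearest-neighbour upsampling, a single scalar blend weight), not the thesis implementation:

```python
import numpy as np

def downsample(img):
    # 2x2 box-filter decimation (stands in for Gaussian blur + downsample).
    # Assumes even image dimensions.
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    # Nearest-neighbour upsampling back to the target shape.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    # Each level stores the detail lost by downsampling;
    # the final entry is the residual low-pass image.
    pyr = []
    for _ in range(levels):
        small = downsample(img)
        pyr.append(img - upsample(small, img.shape))
        img = small
    pyr.append(img)
    return pyr

def blend(img_a, img_b, w, levels=3):
    # Blend the two sources band by band, then collapse the pyramid,
    # so every frequency band is mixed consistently.
    pyr_a = laplacian_pyramid(img_a, levels)
    pyr_b = laplacian_pyramid(img_b, levels)
    bands = [w * a + (1.0 - w) * b for a, b in zip(pyr_a, pyr_b)]
    out = bands[-1]
    for detail in reversed(bands[:-1]):
        out = upsample(out, detail.shape) + detail
    return out
```

With these simple filters the pyramid reconstructs its input exactly, so blending is lossless apart from the intended mixing; real IBR pipelines would additionally use per-pixel weight maps built from the projected source views.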
APA, Harvard, Vancouver, ISO, and other styles
8

Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.

Full text
Abstract:
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that enables a variety of novel post-capture applications and immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. Since the compression performance of current coding schemes on light fields is still limited, there is a need for approaches better adapted to light field structures. We propose a scalable coding scheme that encodes only a subset of light field views and reconstructs the remaining views via a sparsity-based method. A residual coding layer then enhances the final quality of the decoded light field. Acquiring very large-scale light fields is still not feasible with current capture and storage facilities; a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field, exploiting its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study is conducted on the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach, and the distortion introduced by the different processing steps is measured. The results show performance comparable to depth-based view synthesis methods.
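Recovering a signal from a subset of samples by exploiting Fourier-domain sparsity, as described above, can be sketched in one dimension with iterative hard thresholding. This is a generic compressive-sensing illustration under our own assumptions (1-D signal, known sparsity level `k`), not the thesis's light field algorithm:

```python
import numpy as np

def recover_sparse_fourier(samples, mask, n, k, iters=300):
    # Alternate two projections: (1) force the known samples back in,
    # (2) keep only the k largest-magnitude Fourier coefficients.
    # `mask` holds the indices of the known samples.
    x = np.zeros(n)
    for _ in range(iters):
        x[mask] = samples                   # data-consistency step
        coeffs = np.fft.fft(x)
        keep = np.argsort(np.abs(coeffs))[-k:]
        sparse = np.zeros_like(coeffs)
        sparse[keep] = coeffs[keep]         # hard thresholding step
        x = np.fft.ifft(sparse).real
    x[mask] = samples
    return x
```

For a signal that is exactly sparse in the Fourier basis (e.g. a single cosine, which occupies one conjugate pair of DFT bins), this alternating projection fills in the missing samples accurately even when a quarter of them are unknown.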
APA, Harvard, Vancouver, ISO, and other styles
9

Löw, Joakim, Anders Ynnerman, Per Larsson, and Jonas Unger. "HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering." Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18052.

Full text
Abstract:
This paper presents a method for resampling a sequence of high dynamic range light probe images into a representation of Incident Light Field (ILF) illumination which enables realtime rendering. The light probe sequences are captured at varying positions in a real-world environment using a high dynamic range video camera pointed at a mirror sphere. The sequences are then resampled to a set of radiance maps in a regular three-dimensional grid before projection onto spherical harmonics. The capture locations and number of samples in the original data make it unsuitable for direct use in rendering, so resampling is necessary to produce an efficient data structure. Each light probe represents a large set of incident radiance samples from different directions around the capture location. Under the assumption that the spatial volume in which the capture was performed has no internal occlusion, the radiance samples are projected through the volume along their corresponding directions in order to build a new set of radiance maps at selected locations, in this case a three-dimensional grid. The resampled data is projected onto a spherical harmonic basis to allow for realtime lighting of synthetic objects inside the incident light field.
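The final step above, projecting radiance samples onto a spherical harmonic basis, can be sketched for the lowest two bands with a Monte Carlo estimate of the projection integral. All function names and the choice of bands 0 and 1 are our own; the paper's pipeline uses more bands and structured grids:

```python
import numpy as np

def sh_basis(dirs):
    # Real spherical harmonics for bands 0 and 1, evaluated at
    # unit direction vectors given as rows (x, y, z).
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = 0.5 * np.sqrt(3.0 / np.pi)
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def project_radiance(dirs, radiance):
    # Monte Carlo projection: coeff_i ~= (4*pi / N) * sum_j L(w_j) * Y_i(w_j),
    # assuming the sample directions are uniformly distributed on the sphere.
    basis = sh_basis(dirs)
    return (4.0 * np.pi / len(dirs)) * (basis.T @ radiance)

def eval_sh(coeffs, dirs):
    # Reconstruct the band-limited radiance at the given directions.
    return sh_basis(dirs) @ coeffs
```

A radiance function such as L(w) = 1 + z lies entirely within bands 0 and 1, so the projection followed by evaluation reproduces it up to Monte Carlo noise.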
APA, Harvard, Vancouver, ISO, and other styles
10

Baravdish, Gabriel. "GPU Accelerated Light Field Compression." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150558.

Full text
Abstract:
This thesis presents a GPU-accelerated method to compress light fields or light field videos. The implementation is based on earlier work on a full light field compression framework. The large amount of data produced by capturing light fields is challenging to compress, and we seek to accelerate the encoding part. We compress by projecting each data point onto a set of dictionaries, seeking the sparse representation with the least error. An optimized greedy algorithm suited to GPU computation is presented. The algorithm's structure lets us encode the data segments in parallel for faster computation while maintaining quality. The results show a significantly faster encoding time compared to previous results in the same research field. We conclude that further speed improvements are possible, bringing the method close to interactive compression speeds.
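The greedy sparse-coding step described above (project each data point onto a dictionary, keep the representation with least error) is typically realized with a matching pursuit variant. Below is a textbook orthogonal matching pursuit in NumPy, offered only as a reference sketch; the thesis's GPU-optimized algorithm differs in structure:

```python
import numpy as np

def omp(dictionary, signal, sparsity):
    # Orthogonal matching pursuit: greedily pick the atom (column) most
    # correlated with the residual, then re-fit all chosen atoms by least
    # squares. Assumes the dictionary columns are unit-norm.
    residual = signal.copy()
    chosen = []
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(sparsity):
        correlations = dictionary.T @ residual
        chosen.append(int(np.argmax(np.abs(correlations))))
        sub = dictionary[:, chosen]
        sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)
        residual = signal - sub @ sol
    coeffs[chosen] = sol
    return coeffs
```

Because each data point is encoded independently, the outer loop over points parallelizes naturally, which is the property a GPU implementation exploits.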
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Light field images"

1

Daly, Charles J. Scalar diffraction from a circular aperture. Boston: Kluwer Academic, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

The low light photography field guide: Go beyond daylight to capture stunning low light images. Lewes, East Sussex: ILEX, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

The low light photography field guide: Go beyond daylight to capture stunning low light images. Waltham, MA: Focal Press/Elsevier, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Freeman, Michael. Low Light Photography Field Guide: The Essential Guide to Getting Perfect Images in Challenging Light. Taylor & Francis Group, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wilson, Rita, and Brigid Maher, eds. Words, Images and Performances in Translation. Continuum International Publishing Group, 2012. http://dx.doi.org/10.5040/9781472541833.

Full text
Abstract:
This volume presents fresh approaches to the role that translation – in its many forms – plays in enabling and mediating global cultural exchange. As modes of communication and textual production continue to evolve, the field of translation studies has an increasingly important role in exploring the ways in which words, images and performances are translated and reinterpreted in new socio-cultural contexts. The book includes an innovative mix of literary, cultural and intersemiotic perspectives and represents a wide range of languages and cultures. The contributions are all linked by a shared focus on the place of translation in the contemporary world, and the ways in which translation, and the discipline of translation studies, can shed light on questions of inter- and hypertextuality, multimodality and globalization in contemporary cultural production.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Cha. Light Field Sampling (Synthesis Lectures on Image, Video, and Multimedia Processing). Morgan and Claypool Publishers, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Murray, Jonathan, and Nea Ehrlich, eds. Drawn from Life. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9780748694112.001.0001.

Full text
Abstract:
Documentary cinema has always drawn from real life. However, an increasing number of contemporary filmmakers go further still, drawing onscreen images of reality through a range of animated filmmaking techniques and aesthetics. This book is the first of its kind, exploring the field of animated documentary film from a diverse range of scholarly and practice-based perspectives. The book’s chapters explore and propose answers to a range of questions that preoccupy twenty-first-century film artists and audiences alike: What are the historical roots of animated documentary? What kinds of reasons inspire practitioners to employ animation within documentary contexts? How do animated documentary images reflect and influence our understanding and experience of multiple forms of reality – public and private, psychological and political? From early cinema to present-day scientific research, military uses, digital art and gaming, this book casts new light on the capacity of the moving image to act as a record of the world around us, challenging many orthodox definitions of both animated and documentary cinema.
APA, Harvard, Vancouver, ISO, and other styles
8

Leerdam, Andrea. Woodcuts as Reading Guides. Amsterdam University Press, 2023. http://dx.doi.org/10.5117/9789048560257.

Full text
Abstract:
In the first half of the sixteenth century, the Low Countries saw the rise of a lively market for practical and instructive books that targeted non-specialist readers. This study shows how woodcuts in vernacular books on medicine and astrology fulfilled important rhetorical functions in knowledge communication. These images guided readers’ perceptions of the organisation, visualisation, and reliability of knowledge. Andrea van Leerdam uncovers the assumptions and intentions of book producers to which images testify, and shows how actual readers engaged with these illustrated books. Drawing on insights from the field of information design studies, she scrutinises the books’ material characteristics, including their lay-outs and traces of use, to shed light on the habits and interests of early modern readers. She situates these works in a culture where medicine and astrology were closely interwoven in daily life and where both book producers and readers were exploring the potential of images.
APA, Harvard, Vancouver, ISO, and other styles
9

DeSnyder, Sarah M., Simona F. Shaitelman, and Mark V. Schaverien. Lymphedema and Body Image Disturbance. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190655617.003.0010.

Full text
Abstract:
Abstract: Lymphedema is a dreaded side effect of cancer treatments. Studies within the field of psychosocial oncology have shed light on the profound effect of lymphedema secondary to treatment of cancer on quality of life, body image, activities of daily living, and financial stress. Patients who develop lymphedema are at risk for body image disturbances. It is critical for healthcare providers to recognize and treat lymphedema at its earliest stages not only to control lymphedema but to mitigate the detrimental downstream effects of lymphedema including body image disturbance, social anxiety, and depression, all of which affect health-related quality of life. For those who experience diminished health-related quality of life due to lymphedema, healthcare providers must intervene with psychosocial support.
APA, Harvard, Vancouver, ISO, and other styles
10

Kitts, Margo, Mark Juergensmeyer, and Michael Jerryson. Introduction. Edited by Michael Jerryson, Mark Juergensmeyer, and Margo Kitts. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199759996.013.0041.

Full text
Abstract:
This Handbook describes four major dimensions: 1) overviews of major religious traditions; 2) patterns and themes relating to religious violence; 3) major analytic approaches; and 4) new directions in theory and analysis related to religion and violence. There is a much more nuanced interpretation of the presence of violence in so many different traditions. This chapter, which specifically presents overviews of traditions, patterns and themes, analytic approaches, and new directions in order to offer a roadmap to the academic field of studies in religion and violence, demonstrates both the range and diversity of areas of inquiry within this emerging field. The images and acts of destruction discussed in the chapters can be collectively described as “religious violence.” The study of religious violence and the religious dimensions of violent situations do much to shed light on the nature of religion itself.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Light field images"

1

Cho, Donghyeon, Sunyeong Kim, and Yu-Wing Tai. "Consistent Matting for Light Field Images." In Computer Vision – ECCV 2014, 90–104. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10593-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jung, Daniel, and Reinhard Koch. "Efficient Rendering of Light Field Images." In Video Processing and Computational Video, 184–211. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Koch, R., B. Heigl, M. Pollefeys, L. Van Gool, and H. Niemann. "A Geometric Approach to Light field Calibration." In Computer Analysis of Images and Patterns, 596–603. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48375-6_71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Anisimov, Yuriy, Oliver Wasenmüller, and Didier Stricker. "A Compact Light Field Camera for Real-Time Depth Estimation." In Computer Analysis of Images and Patterns, 52–63. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29888-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jin, Panqi, Gangyi Jiang, Yeyao Chen, Zhidi Jiang, and Mei Yu. "Perceptual Light Field Image Coding with CTU Level Bit Allocation." In Computer Analysis of Images and Patterns, 255–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shan, Liang, Ping An, Deyang Liu, and Ran Ma. "Subjective Evaluation of Light Field Images for Quality Assessment Database." In Communications in Computer and Information Science, 267–76. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Xuechun, Wentao Chao, and Fuqing Duan. "Depth Optimization for Accurate 3D Reconstruction from Light Field Images." In Pattern Recognition and Computer Vision, 79–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Prathap, Parvathy, and J. Jayakumari. "Analysis of Light Field Imaging and Segmentation on All-Focus Images." In Lecture Notes in Electrical Engineering, 331–42. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3992-3_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Feng, Mingtao, Syed Zulqarnain Gilani, Yaonan Wang, and Ajmal Mian. "3D Face Reconstruction from Light Field Images: A Model-Free Approach." In Computer Vision – ECCV 2018, 508–26. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_31.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kara, Peter A., Peter T. Kovacs, Suren Vagharshakyan, Maria G. Martini, Sandor Imre, Attila Barsi, Kristof Lackner, and Tibor Balogh. "Perceptual Quality of Reconstructed Medical Images on Projection-Based Light Field Displays." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 476–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49655-9_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Light field images"

1

Paniate, Alberto, Gianlorenzo Massaro, Alessio Avella, Alice Meda, Francesco V. Pepe, Marco Genovese, Milena D’Angelo, and Ivano Ruo Berchera. "Light-field ghost imaging." In Quantum 2.0, QTu3A.27. Washington, D.C.: Optica Publishing Group, 2024. http://dx.doi.org/10.1364/quantum.2024.qtu3a.27.

Full text
Abstract:
We propose a technique which exploits light correlations and light-field principles to recover the volumetric image of an object without acquiring axially resolved images and without knowing its position or longitudinal extent.
APA, Harvard, Vancouver, ISO, and other styles
2

Imtiaz, Shariar Md, F. M. Fahmid Hossain, Nyamsuren Darkhanbaatar, Erkhembaatar Dashdavaa, Ki-Chul Kwon, Seok-Hee Jeon, and Nam Kim. "Estimating Depth Map from Light Field Microscopic Images Using Attention UNET." In 2024 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/cleo-pr60912.2024.10676467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhou, Yan, Huiwen Guo, Guoyuan Liang, and Xinyu Wu. "Shadow removal for light field images." In 2014 IEEE International Conference on Information and Automation (ICIA). IEEE, 2014. http://dx.doi.org/10.1109/icinfa.2014.6932830.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhu, Dong, ChunHong WU, Yunluo Liu, and Dongmei Fu. "3D reconstruction based on light field images." In Ninth International Conference on Graphic and Image Processing, edited by Hui Yu and Junyu Dong. SPIE, 2018. http://dx.doi.org/10.1117/12.2304504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chantara, Wisarut, Ji-Hun Mun, and Yo-Sung Ho. "Efficient Depth Estimation for Light Field Images." In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2018. http://dx.doi.org/10.23919/apsipa.2018.8659647.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Seifi, Mozhdeh, Neus Sabater, Valter Drazic, and Patrick Perez. "Disparity-guided demosaicking of light field images." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7026109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Xianyu, Feng Dai, Yike Ma, and Yongdong Zhang. "Automatic foreground segmentation using light field images." In 2015 Visual Communications and Image Processing (VCIP). IEEE, 2015. http://dx.doi.org/10.1109/vcip.2015.7457895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

DuVall, Matthew, John Flynn, Michael Broxton, and Paul Debevec. "Compositing light field video using multiplane images." In SIGGRAPH '19: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3306214.3338614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Noury, Charles-Antoine, Celine Teuliere, and Michel Dhome. "Light-Field Camera Calibration from Raw Images." In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2017. http://dx.doi.org/10.1109/dicta.2017.8227459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jeong, Youngmo, Seokil Moon, Jaebum Cho, and Byoungho Lee. "One-shot 360-degree light field recording with light field camera and reflected images." In Imaging Systems and Applications. Washington, D.C.: OSA, 2017. http://dx.doi.org/10.1364/isa.2017.im4e.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Light field images"

1

Letcher, Theodore, Julie Parno, Zoe Courville, Lauren Farnsworth, and Jason Olivier. A generalized photon-tracking approach to simulate spectral snow albedo and transmittance using X-ray microtomography and geometric optics. Engineer Research and Development Center (U.S.), June 2023. http://dx.doi.org/10.21079/11681/47122.

Full text
Abstract:
A majority of snow radiative transfer models (RTMs) treat snow as a collection of idealized grains rather than an organized ice–air matrix. Here we present a generalized multi-layer photon-tracking RTM that simulates light reflectance and transmittance of snow based on X-ray microtomography images, treating snow as a coherent 3D structure rather than a collection of grains. The model uses a blended approach to expand ray-tracing techniques applied to sub-1 cm3 snow samples to snowpacks of arbitrary depths. While this framework has many potential applications, this study’s effort is focused on simulating reflectance and transmittance in the visible and near infrared (NIR) through thin snowpacks as this is relevant for surface energy balance and remote sensing applications. We demonstrate that this framework fits well within the context of previous work and capably reproduces many known optical properties of a snow surface, including the dependence of spectral reflectance on the snow specific surface area and incident zenith angle as well as the surface bidirectional reflectance distribution function (BRDF). To evaluate the model, we compare it against reflectance data collected with a spectroradiometer at a field site in east-central Vermont. In this experiment, painted panels were inserted at various depths beneath the snow to emulate thin snow. The model compares remarkably well against the reflectance measured with a spectroradiometer, with an average RMSE of 0.03 in the 400–1600 nm range. Sensitivity simulations using this model indicate that snow transmittance is greatest in the visible wavelengths, limiting light penetration to the top 6 cm of the snowpack for fine-grain snow but increasing to 12 cm for coarse-grain snow. These results suggest that the 5% transmission depth in snow can vary by over 6 cm according to the snow type.
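The core of a photon-tracking radiative transfer model can be illustrated with a 1-D Monte Carlo random walk through a scattering and absorbing slab. This toy sketch (our own parameter names; isotropic scattering, no 3-D microstructure) is far simpler than the report's microtomography-based ray tracer, but shows the sampling logic:

```python
import numpy as np

def photon_walk(depth_m, ext_coeff, ssa, n_photons=20000, rng=None):
    # Track photons through a 1-D slab: path lengths follow Beer-Lambert
    # (exponential with mean 1/ext_coeff), each interaction is absorption
    # with probability (1 - ssa), otherwise the photon scatters isotropically.
    # Returns the fractions reflected (exit top) and transmitted (exit bottom).
    if rng is None:
        rng = np.random.default_rng(0)
    reflected = transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0            # depth and direction cosine (down = +1)
        while True:
            z += mu * rng.exponential(1.0 / ext_coeff)
            if z < 0.0:
                reflected += 1       # escaped the top: contributes to albedo
                break
            if z > depth_m:
                transmitted += 1     # escaped the bottom: transmittance
                break
            if rng.random() > ssa:   # absorbed inside the slab
                break
            mu = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return reflected / n_photons, transmitted / n_photons
```

With a single-scattering albedo of 1 every photon eventually exits, so the reflected and transmitted fractions sum to one; adding absorption removes energy, mirroring the spectral behaviour the report simulates per wavelength.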
APA, Harvard, Vancouver, ISO, and other styles
2

King, E. L., A. Normandeau, T. Carson, P. Fraser, C. Staniforth, A. Limoges, B. MacDonald, F. J. Murrillo-Perez, and N. Van Nieuwenhove. Pockmarks, a paleo fluid efflux event, glacial meltwater channels, sponge colonies, and trawling impacts in Emerald Basin, Scotian Shelf: autonomous underwater vehicle surveys, William Kennedy 2022011 cruise report. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331174.

Full text
Abstract:
A short but productive cruise aboard RV William Kennedy tested various new field equipment near Halifax (port of departure and return), in areas that could also benefit scientific understanding. The GSC-A Gavia Autonomous Underwater Vehicle, equipped with bathymetric, sidescan and sub-bottom profilers, was successfully deployed for the first time on Scotian Shelf science targets. It surveyed three small areas: two across known benthic sponge, Vazella (Russian Hat), grounds within a DFO-directed trawling closure area on the SE flank of Sambro Bank, bordering Emerald Basin, and one across known pockmarks, eroded cone-shaped depressions in soft mud due to fluid efflux. The sponge study sites (~150 to 170 m water depth) were known to lie in an area of till (subglacial diamict) exposure at the seabed. The AUV data identified gravel- and cobble-rich seabed, registering individual clasts at 35 cm gridded resolution. A subtle variation in seabed texture is recognized in sidescan images, from cobble-rich on ridge crests and flanks to limited mud-rich sediment in intervening troughs. Correlation of seabed topography and texture with the (previously collected) Vazella distribution along two transects is not straightforward. However, the sponge may prefer the depressions, some of which have a thin but possibly ephemeral sediment cover. Both sponge study sites depict a hitherto unknown morphology, carved in glacial deposits, consisting of a series of discontinuous ridges interpreted as generated by erosion in multiple, continuous, meandering and cross-cutting channels. The morphology is identical to glacial Nye, or "N-", channels, cut by sub-glacial meltwater.
However, their scale (10 to 100 times that of "typical" N-channels) and the unusual eroded medium (till rather than bedrock) represent a rare or unknown size and setting, and suggest a continuum in sub-glacial meltwater channels between the much larger tunnel valleys, common to the eastward, and the bedrock forms. A comparison is made with coastal Nova Scotia forms in bedrock. The Emerald Basin AUV site, targeting pockmarks, lay in ~260 to 270 m water depth and imaged eight large pockmarks and one small one. The main aim was to investigate possible recent or continuous fluid flux activity in light of ocean acidification or greenhouse gas contribution; most accounts to date suggested inactivity. While a lack of common attributes marking activity is confirmed, creep or rotational flank failure is recognized, as is a depletion of buried diffuse methane immediately below the seabed features. Discovery of a second, buried pockmark horizon, with smaller but more numerous erosive cones and no spatial correlation to the buried diffuse gas or the seabed pockmarks, indicates a paleo-event of fluid or gas efflux; general timing and possible mechanisms are suggested. The basinal survey also registered numerous otter-board trawl marks cutting the surficial mud from past fishing activity. The AUV data present a unique dataset for follow-up quantification of this disturbance. The recent realization that such disturbance may play a significant role in ocean acidification on a global scale can benefit from its quantification. The new pole-mounted sub-bottom profiler collected high-quality data, enabling correlation of recently recognized till ridges exposed at the seabed as they become buried across the flank and base of the basin. These, along with the Nye channels, will help reconstruct glacial behavior and flow patterns, which to date are only vaguely documented.
Several cores provide the potential for stratigraphic dating of key horizons and will augment Holocene environmental-history investigations by a Dalhousie University student. In summary, several unique features have been identified, providing sufficient field data for further compilation, analysis and follow-up publications.
APA, Harvard, Vancouver, ISO, and other styles
3

Suir, Glenn, Christina Saltus, and Sam Jackson. Remote Assessment of Swamp and Bottomland Hardwood Habitat Condition in the Maurepas Diversion Project Area. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41563.

Full text
Abstract:
This study used high spatial resolution satellite imagery to identify and map Bottomland Hardwood (BLH) forest and swamp within the Maurepas Diversion Project area, and used Light Detection and Ranging (Lidar) elevation data, vegetation indices, and established stand-level thresholds to evaluate the condition of forested habitat. The Forest Condition methods and data developed as part of this study provide a remote-sensing-based supplement to the field-based methods used in previous studies. Furthermore, several advantages are realized over traditional methods, including higher-resolution products, repeatability, improved coverage, and reduced effort and cost. This study advances previous methods and provides products useful for informing ecosystem decision-making related to environmental assessments.
APA, Harvard, Vancouver, ISO, and other styles
4

Hart, Carl R., and Gregory W. Lyons. A Measurement System for the Study of Nonlinear Propagation Through Arrays of Scatterers. Engineer Research and Development Center (U.S.), November 2020. http://dx.doi.org/10.21079/11681/38621.

Full text
Abstract:
Various experimental challenges exist in measuring the spatial and temporal field of a nonlinear acoustic pulse propagating through an array of scatterers. Probe interference and undesirable high-frequency response plague typical approaches with acoustic microphones, which are also limited to resolving the pressure field at a single position. Measurements made with optical methods do not have such drawbacks, and schlieren measurements are particularly well suited to measuring both the spatial and temporal evolution of nonlinear pulse propagation in an array of scatterers. Herein, a measurement system is described based on a z-type schlieren setup, which is suitable for measuring axisymmetric phenomena and visualizing weak shock propagation. In order to reduce directivity and initiate nearly spherically-symmetric propagation, laser induced breakdown serves as the source for the nonlinear pulse. A key component of the schlieren system is a standard schliere, which allows quantitative schlieren measurements to be performed. Sizing of the standard schliere is aided by generating estimates of the expected light refraction from the nonlinear pulse, by way of the forward Abel transform. Finally, considerations for experimental sequencing, image capture, and a reconfigurable rod array designed to minimize spurious wave interactions are specified.
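The forward Abel transform mentioned above projects an axisymmetric radial profile f(r) onto a line of sight at offset y: F(y) = 2 ∫ f(r) r / sqrt(r² − y²) dr from r = y to the outer radius. A small numerical sketch (our own naming; the substitution r = sqrt(y² + s²) removes the integrable singularity at r = y):

```python
import numpy as np

def forward_abel(f, y, r_max, n=2000):
    # With r = sqrt(y^2 + s^2), the Abel integral becomes
    # F(y) = 2 * integral_0^{s_max} f(sqrt(y^2 + s^2)) ds,
    # which has no singularity and can be integrated with the trapezoid rule.
    s_max = np.sqrt(max(r_max ** 2 - y ** 2, 0.0))
    s = np.linspace(0.0, s_max, n)
    vals = f(np.sqrt(y ** 2 + s ** 2))
    return 2.0 * float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s)))
```

A Gaussian profile f(r) = exp(−r²) has the closed-form projection F(y) = sqrt(π) exp(−y²), which makes a convenient accuracy check for a sizing estimate like the one described in the abstract.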
APA, Harvard, Vancouver, ISO, and other styles
5

Burks, Thomas F., Victor Alchanatis, and Warren Dixon. Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems. United States Department of Agriculture, October 2009. http://dx.doi.org/10.32747/2009.7591739.bard.

Full text
Abstract:
The proposed project aims to enhance tree fruit identification and targeting for robotic harvesting through the selection of appropriate sensor technology, sensor fusion, and visual servo-control approaches. These technologies will be applicable for apple, orange, and grapefruit harvest, although specific sensor wavelengths may vary. The primary challenges are fruit occlusion, light variability, peel color variation with maturity, range to target, and computational requirements of image processing algorithms. There are four major development tasks in the original three-year proposed study. First, spectral characteristics in the VIS/NIR (0.4-1.0 micron) will be used in conjunction with thermal data to provide accurate and robust detection of fruit in the tree canopy. Hyper-spectral image pairs will be combined to provide automatic stereo matching for accurate 3D position. Secondly, VIS/NIR/FIR (0.4-15.0 micron) spectral sensor technology will be evaluated for potential in-field on-the-tree grading of surface defects, maturity, and size for selective fruit harvest. Thirdly, new adaptive Lyapunov-based HBVS (homography-based visual servo) methods to compensate for camera uncertainty and distortion effects, and to provide range to target from a single camera, will be developed, simulated, and implemented on a camera testbed to prove the concept. HBVS methods coupled with image-space navigation will be implemented to provide robust target tracking. Finally, harvesting tests will be conducted on the developed technologies using the University of Florida harvesting manipulator test bed. During the course of the project it was determined that the second objective was overly ambitious for the project period, and effort was directed toward the other objectives. The results reflect the synergistic efforts of the three principals. The USA team has focused on citrus-based approaches while the Israeli counterpart has focused on apples.
The USA team has improved visual servo control through the use of a statistical-based range estimate and homography. The results have been promising as long as the target is visible. In addition, the USA team has developed improved fruit detection algorithms that are robust under light variation and can localize fruit centers for partially occluded fruit. Additionally, algorithms have been developed to fuse thermal and visible spectrum image prior to segmentation in order to evaluate the potential improvements in fruit detection. Lastly, the USA team has developed a multispectral detection approach which demonstrated fruit detection levels above 90% of non-occluded fruit. The Israel team has focused on image registration and statistical based fruit detection with post-segmentation fusion. The results of all programs have shown significant progress with increased levels of fruit detection over prior art.
APA, Harvard, Vancouver, ISO, and other styles
6

Douglas, Thomas A., Christopher A. Hiemstra, Stephanie P. Saari, Kevin L. Bjella, Seth W. Campbell, M. Torre Jorgenson, Dana R. N. Brown, and Anna K. Liljedahl. Degrading Permafrost Mapped with Electrical Resistivity Tomography, Airborne Imagery and LiDAR, and Seasonal Thaw Measurements. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41185.

Full text
Abstract:
Accurate identification of the relationships between permafrost extent and landscape patterns helps develop airborne geophysical or remote sensing tools to map permafrost in remote locations or across large areas. These tools are particularly applicable in discontinuous permafrost where climate warming or disturbances such as human development or fire can lead to rapid permafrost degradation. We linked field-based geophysical, point-scale, and imagery surveying measurements to map permafrost at five fire scars on the Tanana Flats in central Alaska. Ground-based elevation surveys, seasonal thaw-depth profiles, and electrical resistivity tomography (ERT) measurements were combined with airborne imagery and light detection and ranging (LiDAR) to identify relationships between permafrost geomorphology and elapsed time since fire disturbance. ERT was a robust technique for mapping the presence or absence of permafrost because of the marked difference in resistivity values for frozen versus unfrozen material. There was no clear relationship between elapsed time since fire and permafrost extent at our sites. The transition zone boundaries between permafrost soils and unfrozen soils in the collapse-scar bogs at our sites had complex and unpredictable morphologies, suggesting attempts to quantify the presence or absence of permafrost using aerial measurements alone could lead to incomplete results. The results from our study indicated limitations in being able to apply airborne surveying measurements at the landscape scale toward accurately estimating permafrost extent.
APA, Harvard, Vancouver, ISO, and other styles
7

Ley, Matt, Tom Baldvins, Hannah Pilkington, David Jones, and Kelly Anderson. Vegetation classification and mapping project: Big Thicket National Preserve. National Park Service, 2024. http://dx.doi.org/10.36967/2299254.

Full text
Abstract:
The Big Thicket National Preserve (BITH) vegetation inventory project classified and mapped vegetation within the administrative boundary and estimated thematic map accuracy quantitatively. The National Park Service (NPS) Vegetation Mapping Inventory Program provided technical guidance. The overall process included initial planning and scoping, imagery procurement, vegetation classification field data collection, data analysis, imagery interpretation/classification, accuracy assessment (AA), and report writing and database development. Initial planning and scoping meetings took place during May 2016 in Kountze, Texas, where representatives gathered from BITH, the NPS Gulf Coast Inventory and Monitoring Network, and Colorado State University. The project acquired new 2014 orthoimagery (30-cm, 4-band (RGB and CIR)) from the Hexagon Imagery Program. Supplemental imagery for the interpretation phase included Texas Natural Resources Information System (TNRIS) 2015 50 cm leaf-off 4-band imagery from the Texas Orthoimagery Program (TOP), Farm Service Agency (FSA) 100-cm (2016) and 60 cm (2018) National Aerial Imagery Program (NAIP) imagery, and current and historical true-color Google Earth and Bing Maps imagery. In addition to aerial and satellite imagery, 2017 Neches River Basin Light Detection and Ranging (LiDAR) data was obtained from the United States Geological Survey (USGS) and TNRIS to analyze vegetation structure at BITH. The preliminary vegetation classification included 110 United States National Vegetation Classification (USNVC) associations. Existing vegetation and mapping data combined with vegetation plot data contributed to the final vegetation classification. Quantitative classification using hierarchical clustering and professional expertise was supported by vegetation data collected from 304 plots surveyed between 2016 and 2019 and 110 additional observation plots.
The final vegetation classification includes 75 USNVC associations and 27 park special types, including 80 forest and woodland, 7 shrubland, 12 herbaceous, and 3 sparse vegetation types. The final BITH map consists of 51 map classes. Land cover classes include five types: pasture / hay ground agricultural vegetation; non-vegetated / barren land, borrow pit, cut bank; developed, open space; developed, low to high intensity; and water. The 46 vegetation classes represent 102 associations or park specials. Of these, 75 represent natural vegetation associations within the USNVC, and 27 types represent unpublished park specials. Of the 46 vegetation map classes, 26 represent a single USNVC association/park special, 7 map classes contain two USNVC associations/park specials, 4 map classes contain three USNVC associations/park specials, and 9 map classes contain four or more USNVC associations/park specials. Forest and woodland types had an abundance of Pinus taeda, Liquidambar styraciflua, Ilex opaca, Ilex vomitoria, Quercus nigra, and Vitis rotundifolia. Shrubland types were dominated by Pinus taeda, Ilex vomitoria, Triadica sebifera, Liquidambar styraciflua, and/or Callicarpa americana. Herbaceous types had an abundance of Zizaniopsis miliacea, Juncus effusus, Panicum virgatum, and/or Saccharum giganteum. The final BITH vegetation map consists of 7,271 polygons totaling 45,771.8 ha (113,104.6 ac). Mean polygon size is 6.3 ha (15.6 ac). Of the total area, 43,314.4 ha (107,032.2 ac) or 94.6% represent natural or ruderal vegetation. Developed areas such as roads, parking lots, and campgrounds comprise 421.9 ha (1,042.5 ac) or 0.9% of the total. Open water accounts for approximately 2,034.9 ha (5,028.3 ac) or 4.4% of the total mapped area.
Within the natural or ruderal vegetation types, forest and woodland types were the most extensive at 43,022.19 ha (106,310.1 ac) or 94.0%, followed by herbaceous vegetation types at 129.7 ha (320.5 ac) or 0.3%, sparse vegetation types at 119.2 ha (294.5 ac) or 0.3%, and shrubland types at 43.4 ha (107.2 ac) or 0.1%. A total of 784 AA samples were collected to evaluate the map's thematic accuracy. When each AA sample was evaluated for a variety of potential errors, a number of the disagreements were overturned. It was determined that 182 plot records disagreed due to either an erroneous field call or a change in the vegetation since the imagery date, and 79 disagreed due to a true map classification error. Those records identified as incorrect due to an erroneous field call or changes in vegetation were considered correct for the purpose of the AA. As a simple plot count proportion, the reconciled overall accuracy was 89.9% (705/784). The spatially-weighted overall accuracy was 92.1% with a Kappa statistic of 89.6%. This method provides more weight to larger map classes in the park. Five map classes had accuracies below 80%. After discussing preliminary results with the park, we retained those map classes because the community was rare, the map classes provided desired detail for management, or the accuracy was reasonably close to the 80% target. When the 90% AA confidence intervals were included, an additional eight classes had thematic accuracies that extend below 80%. In addition to the vegetation polygon database and map, several products to support park resource management include the vegetation classification, field key to the associations, local association descriptions, photographic database, project geodatabase, ArcGIS .mxd files for map posters, and aerial imagery acquired for the project.
The project geodatabase links the spatial vegetation data layer to vegetation classification, plot photos, project boundary extent, AA points, and PLOTS database sampling data. The geodatabase includes USNVC hierarchy tables allowing for spatial queries of data associated with a vegetation polygon or sample point. All geospatial products are projected using North American Datum 1983 (NAD83) in Universal Transverse Mercator (UTM) Zone 15 N. The final report includes methods and results, contingency tables showing AA results, field forms, species list, and a guide to imagery interpretation. These products provide useful information to assist with management of park resources and inform future management decisions. Use of standard national vegetation classification and mapping protocols facilitates effective resource stewardship by ensuring the compatibility and widespread use throughout NPS as well as other federal and state agencies. Products support a wide variety of resource assessments, park management and planning needs. Associated information provides a structure for framing and answering critical scientific questions about vegetation communities and their relationship to environmental processes across the landscape.
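The accuracy figures quoted in this abstract (overall accuracy as a plot-count proportion and a Kappa statistic) follow standard confusion-matrix formulas. A minimal sketch using a hypothetical 3-class matrix, not the report's actual data:

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy: agreement on the diagonal over all samples."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Cohen's kappa: chance-corrected agreement between map and reference labels."""
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2      # agreement expected by chance
    return (po - pe) / (1.0 - pe)

# Hypothetical confusion matrix (rows = map class, cols = reference class)
cm = np.array([[50, 2, 3],
               [4, 40, 1],
               [2, 3, 45]])
```

The spatially-weighted variant mentioned in the abstract would additionally weight each sample by the mapped area of its class before forming the matrix.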
APA, Harvard, Vancouver, ISO, and other styles
8

Hodul, M., H. P. White, and A. Knudby. A report on water quality monitoring in Quesnel Lake, British Columbia, subsequent to the Mount Polley tailings dam spill, using optical satellite imagery. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330556.

Full text
Abstract:
In the early morning on the 4th of August 2014, a tailings dam near Quesnel, BC burst, spilling approximately 25 million m3 of runoff containing heavy metal elements into nearby Quesnel Lake (Byrne et al. 2018). The runoff slurry, which included lead, arsenic, selenium, and vanadium, spilled through Hazeltine Creek, scouring its banks and picking up till and forest cover on the way, and ultimately ended up in Quesnel Lake, whose water level rose by 1.5 m as a result. While the introduction of heavy metals into Quesnel Lake was of environmental concern, the additional till and forest cover scoured from the banks of Hazeltine Creek and added to the lake has also been of concern to salmon spawning grounds. Immediate repercussions of the spill involved the damage of sensitive environments along the banks and on the lake bed, the closing of the seasonal salmon fishery in the lake, and a change in the microbial composition of the lake bed (Hatam et al. 2019). In addition, there appears to be a seasonal resuspension of the tailings sediment due to thermal cycling of the water and surface winds (Hamilton et al. 2020). While the water quality of Quesnel Lake continues to be monitored for the tailings sediments, primarily by members at the Quesnel River Research Centre, the sample-and-test methods of water quality testing used, while highly accurate, are expensive to undertake and not spatially exhaustive. The use of remote sensing techniques, though not as accurate as lab testing, allows for the relatively fast creation of expansive water quality maps using sensors mounted on boats, planes, and satellites (Ritchie et al. 2003). The most common method for the remote sensing of surface water quality is through the use of a physics-based semianalytical model which simulates light passing through a water column with a given set of Inherent Optical Properties (IOPs), developed by Lee et al. (1998) and commonly referred to as a Radiative Transfer Model (RTM).
The RTM forward-models a wide range of water-leaving spectral signatures based on IOPs determined by a mix of water constituents, including natural materials and pollutants. Remote sensing imagery is then used to invert the model by finding the modelled water spectrum which most closely resembles that seen in the imagery (Brando et al. 2009). This project set out to develop an RTM water quality model to monitor the water quality in Quesnel Lake, allowing for the entire surface of the lake to be mapped at once, in an effort to easily determine the timing and extent of resuspension events, as well as potentially investigate greening events reported by locals. The project intended to use a combination of multispectral imagery (Landsat-8 and Sentinel-2) and hyperspectral imagery (DESIS), combined with field calibration/validation of the resulting models. The project began in the autumn before the COVID pandemic, with plans to undertake a comprehensive fieldwork campaign to gather model calibration data in the summer of 2020. Since a province-wide travel shutdown and social distancing procedures made it difficult to carry out water quality surveying in a small boat, an insufficient amount of fieldwork was conducted to suit the needs of the project. Thus, the project has been put on hold, and the primary researcher has moved to a different project. This document stands as a report on all of the work conducted up to April 2021, intended largely as an instructional document for researchers who may wish to continue the work once fieldwork may freely and safely resume. This research was undertaken at the University of Ottawa, with supporting funding provided by the Earth Observations for Cumulative Effects (EO4CE) Program Work Package 10b: Site Monitoring and Remediation, Canada Centre for Remote Sensing, through the Natural Resources Canada Research Affiliate Program (RAP).
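The model-inversion step described above, matching an observed spectrum against forward-modelled candidates, can be sketched as a lookup-table search; the toy forward model, sediment concentrations, and wavelength grid below are illustrative assumptions, not the Lee et al. RTM itself:

```python
import numpy as np

def invert_lut(observed, lut_spectra, lut_params):
    """Pick the forward-modelled spectrum closest (least squares) to the
    observed water-leaving reflectance and return its constituent parameters."""
    residuals = np.sum((lut_spectra - observed) ** 2, axis=1)
    return lut_params[np.argmin(residuals)]

# Hypothetical lookup table: one simulated spectrum per sediment concentration
lut_params = np.array([1.0, 5.0, 10.0, 20.0])                    # mg/L, illustrative
wavelengths = np.linspace(450.0, 750.0, 7)                       # nm
lut_spectra = np.outer(lut_params, 1e-3 * wavelengths / 550.0)   # toy forward model

rng = np.random.default_rng(0)
observed = lut_spectra[2] + rng.normal(0.0, 1e-5, wavelengths.size)
retrieved = invert_lut(observed, lut_spectra, lut_params)        # expect 10.0
```

A real retrieval would forward-model spectra over a grid of IOPs and may interpolate between lookup-table entries rather than returning the single nearest one.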
APA, Harvard, Vancouver, ISO, and other styles