To see the other types of publications on this topic, follow the link: Light field images.

Journal articles on the topic 'Light field images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Light field images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Garces, Elena, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, and Diego Gutierrez. "Intrinsic Light Field Images." Computer Graphics Forum 36, no. 8 (May 5, 2017): 589–99. http://dx.doi.org/10.1111/cgf.13154.

2

Lee, Seung-Jae, and In Kyu Park. "Dictionary Learning based Superresolution on 4D Light Field Images." Journal of Broadcast Engineering 20, no. 5 (September 30, 2015): 676–86. http://dx.doi.org/10.5909/jbe.2015.20.5.676.

3

Yan, Tao, Yuyang Ding, Fan Zhang, Ningyu Xie, Wenxi Liu, Zhengtian Wu, and Yuan Liu. "Snow Removal From Light Field Images." IEEE Access 7 (2019): 164203–15. http://dx.doi.org/10.1109/access.2019.2951917.

4

Yu, Li, Yunpeng Ma, Song Hong, and Ke Chen. "Review of Light Field Image Super-Resolution." Electronics 11, no. 12 (June 17, 2022): 1904. http://dx.doi.org/10.3390/electronics11121904.

Abstract:
Currently, light fields play important roles in industry, including in 3D mapping, virtual reality and other fields. However, as a kind of high-dimensional data, light field images are difficult to acquire and store. Thus, the study of light field super-resolution is of great importance. Compared with traditional 2D planar images, 4D light field images contain information from different angles in the scene, and thus the super-resolution of light field images needs to be performed not only in the spatial domain but also in the angular domain. In the early days of light field super-resolution research, many solutions for 2D image super-resolution, such as Gaussian models and sparse representations, were also used in light field super-resolution. With the development of deep learning, light field image super-resolution solutions based on deep-learning techniques are becoming increasingly common and are gradually replacing traditional methods. In this paper, the current research on light field image super-resolution, including traditional methods and deep-learning-based methods, is outlined and discussed separately. This paper also lists publicly available datasets, compares the performance of various methods on these datasets, and analyses the importance of light field super-resolution research and its future development.
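To make the 4D structure concrete, here is a minimal NumPy sketch (our illustration, not taken from the review): a light field L[u, v, s, t] carries angular coordinates (u, v) and spatial coordinates (s, t), and spatial versus angular super-resolution densify different pairs of axes.

```python
import numpy as np

# Illustrative 4D light field L[u, v, s, t, c]: (u, v) index the grid of
# views (angular domain), (s, t) the pixels (spatial domain), c the color.
U, V, S, T, C = 5, 5, 128, 128, 3
light_field = np.random.rand(U, V, S, T, C).astype(np.float32)

# A sub-aperture image fixes the angular coordinates; spatial
# super-resolution upsamples its (s, t) axes.
center_view = light_field[U // 2, V // 2]        # shape (S, T, C)

# A macro-pixel fixes the spatial coordinates; angular super-resolution
# densifies the (u, v) axes, i.e. synthesizes new views.
macro_pixel = light_field[:, :, S // 2, T // 2]  # shape (U, V, C)

print(center_view.shape, macro_pixel.shape)
```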
5

Kobayashi, Kenkichi, and Hideo Saito. "High-Resolution Image Synthesis from Video Sequence by Light Field." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 254–62. http://dx.doi.org/10.20965/jrm.2003.p0254.

Abstract:
We propose a novel method to synthesize high-resolution images from image sequences taken with a moving video camera. Each frame in the image sequence captures a part of the photographed object. Our method integrates these frames to generate high-resolution images of the object by constructing a light field, which is quite different from general mosaic methods. In light fields constructed straightforwardly, blur and discontinuity are introduced into synthesized images by depth variation of the object. In our method, the light field is optimized to remove blur and discontinuity so that clear images can be synthesized. We find the optimum light field for generating sharp, unblurred images by reparameterizing the light field and evaluating the sharpness of images synthesized from each light field. The optimized light field is adapted to the depth variation of the object surface, but the exact shape of the object is not necessary. High-resolution images that are impractical to capture in the real system can be virtually synthesized from the light field. Results of an experiment applied to a book surface demonstrate the effectiveness of the proposed method.
6

Sun, Junyang, Jun Sun, Chuanlong Xu, Biao Zhang, and Shimin Wang. "A Calibration Method of Focused Light Field Cameras Based on Light Field Images." Acta Optica Sinica 37, no. 5 (2017): 0515002. http://dx.doi.org/10.3788/aos201737.0515002.

7

KOMATSU, Koji, Kohei ISECHI, Keita TAKAHASHI, and Toshiaki FUJII. "Light Field Coding Using Weighted Binary Images." IEICE Transactions on Information and Systems E102.D, no. 11 (November 1, 2019): 2110–19. http://dx.doi.org/10.1587/transinf.2019pcp0001.

8

Yamauchi, Masaki, and Tomohiro Yendo. "Light field display using wavelength division multiplexing." Electronic Imaging 2020, no. 2 (January 26, 2020): 101–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-101.

Abstract:
We propose a large-screen 3D display that enables multiple viewers to see simultaneously without special glasses. In prior research, methods using a projector array or a swinging screen were proposed. However, the former has difficulty in installing and adjusting a large number of projectors, and the latter causes vibration and noise because of the mechanical motion of the screen. Our proposed display consists of a wavelength-modulation projector and a spectroscopic screen. The screen shows images whose color depends on the viewing point. The projector projects binary images onto the screen in time division according to the wavelength of the projection light. The wavelength of the light changes at high speed with time. Therefore, the system can show 3D images to multiple viewers simultaneously by projecting the proper images for each viewing point. Installation of the display is easy, and no vibration or noise occurs because only one projector is used and the screen has no mechanical motion. We conducted simulations and confirmed that the proposed display can show 3D images to multiple viewers simultaneously.
9

Xiao, Bo, Xiujing Gao, and Hongwu Huang. "Optimizing Underwater Image Restoration and Depth Estimation with Light Field Images." Journal of Marine Science and Engineering 12, no. 6 (June 2, 2024): 935. http://dx.doi.org/10.3390/jmse12060935.

Abstract:
Methods based on light field information have shown promising results in depth estimation and underwater image restoration. However, improvements are still needed in terms of depth estimation accuracy and image restoration quality. Previous work on underwater image restoration employed an image formation model (IFM) that overlooked the effects of light attenuation and scattering coefficients in underwater environments, leading to unavoidable color deviation and distortion in the restored images. Additionally, the high blurriness and associated distortions in underwater images make depth information extraction and estimation very challenging. In this paper, we refine the light propagation model and propose a method to estimate the attenuation and backscattering coefficients of the underwater IFM. We simplify these coefficients into distance-related functions and design a relationship between distance and the darkest channel to estimate the water coefficients, effectively suppressing color deviation and distortion in the restoration results. Furthermore, to increase the accuracy of depth estimation, we propose using blur cues to construct a cost for refocusing in the depth direction, reducing the impact of high signal-to-noise ratio environments on depth information extraction, and effectively enhancing the accuracy and robustness of depth estimation. Finally, experimental comparisons show that our method achieves more accurate depth estimation and image restoration closer to real scenes compared to state-of-the-art methods.
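For context, the simplified underwater image formation model the authors refine is commonly written as I = J·e^(−βd) + B·(1 − e^(−βd)), with J the unattenuated scene radiance, β the per-channel attenuation coefficient, and B the backscatter (veiling) light. The following NumPy sketch inverts that model; it is our illustration with made-up constant coefficients, not the paper's estimated, distance-dependent ones.

```python
import numpy as np

def restore_underwater(image, depth, beta, backscatter):
    """Invert the simplified IFM: I = J*exp(-beta*d) + B*(1 - exp(-beta*d)).
    beta and backscatter are per-channel constants here; the paper
    estimates them from the scene as functions of distance."""
    transmission = np.exp(-beta[None, None, :] * depth[:, :, None])
    transmission = np.clip(transmission, 0.05, 1.0)  # avoid amplifying noise
    restored = (image - backscatter * (1.0 - transmission)) / transmission
    return np.clip(restored, 0.0, 1.0)

# Illustrative values only: red attenuates fastest under water.
img = np.random.rand(64, 64, 3)                 # observed image, RGB in [0, 1]
depth = np.full((64, 64), 2.0)                  # scene distance in metres
beta = np.array([0.60, 0.25, 0.10])             # per-channel attenuation
B = np.array([0.05, 0.20, 0.30])                # veiling-light color
out = restore_underwater(img, depth, beta, B)
```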
10

Salem, Ahmed, Hatem Ibrahem, and Hyun-Soo Kang. "Light Field Reconstruction Using Residual Networks on Raw Images." Sensors 22, no. 5 (March 2, 2022): 1956. http://dx.doi.org/10.3390/s22051956.

Abstract:
Although Light-Field (LF) technology attracts attention due to its large number of applications, especially with the introduction of consumer LF cameras and its frequent use, reconstructing densely sampled LF images represents a great challenge to the use and development of LF technology. Our paper proposes a learning-based method to reconstruct densely sampled LF images from a sparse set of input images. We trained our model with raw LF images rather than using multiple images of the same scene. A raw LF image can represent the two-dimensional array of images captured in a single image. Therefore, it enables the network to understand and model the relationship between different images of the same scene well and thus restore more texture details and provide better quality. Using raw images has transformed the task from image reconstruction into image-to-image translation. The small-baseline property of LF was used to define the images to be reconstructed, using the nearest input view to initialize them. Our network was trained end-to-end to minimize the sum of absolute errors between the reconstructed and ground-truth images. Experimental results on three challenging real-world datasets demonstrate the high performance of our proposed method and its superiority over state-of-the-art methods.
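As a rough sketch of the general recipe described here (a residual CNN doing image-to-image translation on raw lenslet images, trained with a sum-of-absolute-errors loss), and emphatically not the authors' exact architecture, a PyTorch skeleton might look like this; layer counts and names are our assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block: two 3x3 convolutions plus a skip connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class RawLFReconstructor(nn.Module):
    """Maps a sparsely sampled raw (lenslet) LF image to a dense one as
    image-to-image translation. Illustrative depth/width, not the paper's."""
    def __init__(self, in_ch=3, feat=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(feat) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(feat, in_ch, 3, padding=1)

    def forward(self, x):
        return self.tail(self.blocks(self.head(x)))

net = RawLFReconstructor()
loss_fn = nn.L1Loss()  # "sum of absolute errors" per the abstract
```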
11

Sieberth, T., R. Wackrow, V. Hofer, and V. Barrera. "LIGHT FIELD CAMERA AS TOOL FOR FORENSIC PHOTOGRAMMETRY." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-1 (September 26, 2018): 393–99. http://dx.doi.org/10.5194/isprs-archives-xlii-1-393-2018.

Abstract:
Light field cameras record both the light intensity received by the sensor and the direction in which the light rays are travelling through space. Recording the additional information of the direction of light rays provides the opportunity to refocus an image after acquisition. Furthermore, a depth image can be created, providing 3D information for each image pixel. Both focused images and 3D information are relevant for forensic investigations. Basic overview images are often acquired by photographic novices and under difficult conditions, which makes refocusing of images a useful feature to enhance information for documentation purposes. Besides focused images, it can also be useful to have 3D data of an incident scene. Capital crime scenes such as homicide are usually documented in 3D using laser scanning. However, not every crime scene can be identified as a capital crime scene straight away but only in the course of the investigation, making 3D data acquisition of the discovery situation impossible. If this is the case, light field images taken during the discovery of the scene can provide substantial 3D data. We will present how light field images are refocused and used to perform photogrammetric reconstruction of a scene, and compare the generated 3D model to standard photogrammetry and laser scanning data. The results show that refocused light field images used for photogrammetry can improve the photogrammetry result and aid photogrammetric processing.
12

Fachada, Sarah, Daniele Bonatto, Mehrdad Teratani, and Gauthier Lafruit. "Light Field Rendering for non-Lambertian Objects." Electronic Imaging 2021, no. 2 (January 18, 2021): 54–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.2.sda-054.

Abstract:
In this paper we propose a solution for view synthesis of scenes presenting highly non-Lambertian objects. While Image-Based Rendering methods can easily render diffuse materials given only their depth, non-Lambertian objects present non-linear displacements of their features, characterized by curved lines in epipolar plane images. Hence, we propose to replace the depth maps used for rendering new viewpoints by a more complex "non-Lambertian map" describing the light field's behavior. In a 4D light field, diffuse features are linearly displaced following their disparity, but non-Lambertian features can follow any trajectory and need to be approximated by non-Lambertian maps. We compute those maps from nine input images using Bezier or polynomial interpolation. After the map computation, a classical Image-Based Rendering method is applied to warp the input images to novel viewpoints.
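The notion of a "non-Lambertian map" can be illustrated with a tiny example (ours, using numpy.polyfit in place of the paper's Bezier or polynomial interpolation over nine views): a diffuse feature moves linearly with the view index, so a single disparity value suffices, whereas a specular or refracted feature traces a curved path that needs a higher-order model.

```python
import numpy as np

# Horizontal position of one feature tracked across 9 input views.
view_idx = np.arange(9, dtype=float)
feature_x = np.array([10.0, 10.9, 12.1, 13.6, 15.4, 17.5, 19.9, 22.6, 25.6])

# Lambertian model: position is linear in the view index (one disparity).
# Non-Lambertian model: allow curvature in the epipolar trajectory.
lambertian = np.polyfit(view_idx, feature_x, 1)
non_lambertian = np.polyfit(view_idx, feature_x, 2)

# Predict where the feature lands in a novel in-between view.
novel_view = 4.5
x_linear = np.polyval(lambertian, novel_view)
x_curved = np.polyval(non_lambertian, novel_view)
print(f"linear: {x_linear:.2f}, curved: {x_curved:.2f}")
```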
13

Liu, Yun, Gangyi Jiang, Zhidi Jiang, Zhiyong Pan, Mei Yu, and Yo-Sung Ho. "Pseudoreference Subaperture Images and Microlens Image-Based Blind Light Field Image Quality Measurement." IEEE Transactions on Instrumentation and Measurement 70 (2021): 1–15. http://dx.doi.org/10.1109/tim.2021.3096865.

14

KONDO, Shu, Yuto KOBAYASHI, Keita TAKAHASHI, and Toshiaki FUJII. "Physically-Correct Light-Field Factorization for Perspective Images." IEICE Transactions on Information and Systems E100.D, no. 9 (2017): 2052–55. http://dx.doi.org/10.1587/transinf.2016pcl0006.

15

Paudyal, Pradip, Federica Battisti, and Marco Carli. "Reduced Reference Quality Assessment of Light Field Images." IEEE Transactions on Broadcasting 65, no. 1 (March 2019): 152–65. http://dx.doi.org/10.1109/tbc.2019.2892092.

16

Salem, Ahmed, Hatem Ibrahem, and Hyun-Soo Kang. "Light Field Image Super-Resolution Using Deep Residual Networks on Lenslet Images." Sensors 23, no. 4 (February 10, 2023): 2018. http://dx.doi.org/10.3390/s23042018.

Abstract:
Due to its widespread usage in many applications, numerous deep learning algorithms have been proposed to overcome the Light Field (LF) trade-off: the sensor's low resolution limits the achievable angular and spatial resolution. A proposed method should be able to fully model the non-local properties of the 4D LF data to mitigate this problem. Therefore, this paper proposes a different approach to increase spatial and angular information interaction for LF image super-resolution (SR). We achieved this by processing the LF Sub-Aperture Images (SAI) independently to extract the spatial information and the LF Macro-Pixel Image (MPI) to extract the angular information. The MPI, or lenslet LF image, is characterized by its ability to integrate more complementary information between different viewpoints (SAIs). In particular, we extract initial features and then process the MPI and SAIs alternately to incorporate angular and spatial information. Finally, the interacted features are added to the initial extracted features to reconstruct the final output. We trained the proposed network to minimize the sum of absolute errors between low-resolution (LR) input and high-resolution (HR) output images. Experimental results prove the high performance of our proposed method over the state-of-the-art methods on LFSR for small-baseline LF images.
17

Chantara, Wisarut, and Moongu Jeon. "All-in-Focused Image Combination in the Frequency Domain Using Light Field Images." Applied Sciences 9, no. 18 (September 8, 2019): 3752. http://dx.doi.org/10.3390/app9183752.

Abstract:
All-in-focused image combination is a fusion technique used to acquire related data from a set of focused images at different depth levels, which suggests that one can determine objects in the foreground and background regions. When attempting to reconstruct an all-in-focused image, we need to identify in-focused regions from multiple input images captured with different focal lengths. This paper presents a new method to find and fuse the in-focused regions of the different focal stack images. After we apply the two-dimensional discrete cosine transform (DCT) to transform the focal stack images into the frequency domain, we utilize the sum of the updated modified Laplacian (SUML), enhancement of the SUML, and harmonic mean (HM) for calculating in-focused regions of the stack images. After fusing all the in-focused information, we transform the result back by using the inverse DCT. Hence, the out-focused parts are removed. Finally, we combine all the in-focused image regions and reconstruct the all-in-focused image.
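A simplified spatial-domain sketch of this pipeline follows; it is our baseline illustration only, since the paper operates on DCT coefficients and augments the modified Laplacian with the enhanced SUML and harmonic-mean terms.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def modified_laplacian(img):
    """|d2I/dx2| + |d2I/dy2|: the core of the modified-Laplacian focus cue."""
    kx = np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=float)
    return np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))

def all_in_focus(stack, window=9):
    """Fuse a focal stack by picking, per pixel, the slice whose local sum
    of modified Laplacian is highest (a plain SML selection rule)."""
    focus = np.stack([uniform_filter(modified_laplacian(s), window) for s in stack])
    best = np.argmax(focus, axis=0)                 # index of sharpest slice
    return np.take_along_axis(stack, best[None], axis=0)[0]

stack = np.random.rand(5, 128, 128)  # 5 slices focused at different depths
fused = all_in_focus(stack)
```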
18

Du, Yifan, Wei Lang, Xinwen Hu, Li Yu, Hua Zhang, Lingjun Zhang, and Yifan Wu. "Quality Assessment of Light Field Images Based on Adaptive Attention in ViT." Electronics 13, no. 15 (July 29, 2024): 2985. http://dx.doi.org/10.3390/electronics13152985.

Abstract:
Light field images can record multiple information about the light rays in a scene and provide multiple views from a single image, offering a new data source for 3D reconstruction. However, ensuring the quality of light field images themselves is challenging, and distorted image inputs may lead to poor reconstruction results. Accurate light field image quality assessment can pre-judge the quality of light field images used as input for 3D reconstruction, providing a reference for the reconstruction results before the reconstruction work, significantly improving the efficiency of 3D reconstruction based on light field images. In this paper, we propose an Adaptive Vision Transformer-based light-field image-quality assessment model (AViT-LFIQA). The model adopts a multi-view sub-aperture image sequence input method, greatly reducing the number of input images while retaining as much information as possible from the original light field image, alleviating the training pressure on the neural network. Furthermore, we design an adaptive learnable attention layer based on ViT, which addresses the lack of inductive bias in ViT by using adaptive diagonal masking and a learnable temperature coefficient strategy, making the model more suitable for training on small datasets of light field images. Experimental results demonstrate that the proposed model is effective for various types of distortions and shows superior performance in light-field image-quality assessment.
19

Oucherif, Sabrine Djedjiga, Mohamad Motasem Nawaf, Jean-Marc Boï, Lionel Nicod, Elodie Mallor, Séverine Dubuisson, and Djamal Merad. "Enhancing Facial Expression Recognition through Light Field Cameras." Sensors 24, no. 17 (September 3, 2024): 5724. http://dx.doi.org/10.3390/s24175724.

Abstract:
In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancing FER accuracy. Our findings show that the model using SA images surpasses state-of-the-art performance, achieving 88.13% ± 7.42% accuracy under the subject-specific evaluation protocol and 91.88% ± 3.25% under the subject-independent evaluation protocol. These results highlight our model’s potential in enhancing FER accuracy and robustness, outperforming existing methods. Furthermore, our multimodal fusion approach, integrating SA, AiF, and depth images, demonstrates substantial improvements over unimodal models. The decision-level fusion strategy, particularly using average weights, proved most effective, achieving 90.13% ± 4.95% accuracy under the subject-specific evaluation protocol and 93.33% ± 4.92% under the subject-independent evaluation protocol. This approach leverages the complementary strengths of each modality, resulting in a more comprehensive and accurate FER system.
20

Li, Xiaowei, Zhiqing Ren, Tianhao Wang, and Huan Deng. "Ownership protection for light-field 3D images: HDCT watermarking." Optics Express 29, no. 26 (December 10, 2021): 43256. http://dx.doi.org/10.1364/oe.446397.

21

Navarro, Julia, and Antoni Buades. "Robust and Dense Depth Estimation for Light Field Images." IEEE Transactions on Image Processing 26, no. 4 (April 2017): 1873–86. http://dx.doi.org/10.1109/tip.2017.2666041.

22

Schiopu, I., and A. Munteanu. "Deep‐learning‐based depth estimation from light field images." Electronics Letters 55, no. 20 (October 2019): 1086–88. http://dx.doi.org/10.1049/el.2019.2073.

23

YANG, Shang-peng, Cheng-cai XU, Yi-fan JIE, Guang-quan ZHOU, and Ping ZHOU. "Distortionless condition for microlens images in light field imaging." Chinese Journal of Liquid Crystals and Displays 38, no. 6 (2023): 829–34. http://dx.doi.org/10.37188/cjlcd.2023-0021.

24

Jin, Jing, Junhui Hou, Hui Yuan, and Sam Kwong. "Learning Light Field Angular Super-Resolution via a Geometry-Aware Network." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11141–48. http://dx.doi.org/10.1609/aaai.v34i07.6771.

Abstract:
The acquisition of light field images with high angular resolution is costly. Although many methods have been proposed to improve the angular resolution of a sparsely-sampled light field, they always focus on light fields with a small baseline, as captured by a consumer light field camera. By making full use of the intrinsic geometry information of light fields, in this paper we propose an end-to-end learning-based approach aimed at angularly super-resolving a sparsely-sampled light field with a large baseline. Our model consists of two learnable modules and a physically-based module. Specifically, it includes a depth estimation module for explicitly modeling the scene geometry, a physically-based warping module for novel view synthesis, and a light field blending module specifically designed for light field reconstruction. Moreover, we introduce a novel loss function to promote the preservation of the light field parallax structure. Experimental results over various light field datasets, including large-baseline light field images, demonstrate the significant superiority of our method when compared with state-of-the-art ones: our method improves the PSNR over the second-best method by up to 2 dB on average, while reducing the execution time 48×. In addition, our method preserves the light field parallax structure better.
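Of the three modules, the physically-based warping is standard enough to sketch: each pixel of a source view is re-sampled at a position shifted by its disparity times the angular offset to the novel view. An illustrative NumPy/SciPy version (sign convention and names are our assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_view(src_view, disparity, du, dv):
    """Backward-warp a source sub-aperture image to a novel angular
    position offset by (du, dv) views, given per-pixel disparity in
    pixels per unit angular step."""
    h, w = src_view.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sample_y = ys + dv * disparity          # shift grows with angular offset
    sample_x = xs + du * disparity
    return map_coordinates(src_view, [sample_y, sample_x], order=1, mode='nearest')

view = np.random.rand(64, 64)
disp = np.full((64, 64), 1.5)               # large baseline: >1 pixel per view
novel = warp_view(view, disp, du=2.0, dv=0.0)
```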
25

Meng, Qingyu, Haiyang Yu, Xiaoyu Jiang, and Xinzhu Sang. "A Sparse Capture Light-Field Coding Algorithm Based on Target Pixel Matching for a Multi-Projector-Type Light-Field Display System." Photonics 10, no. 2 (February 19, 2023): 223. http://dx.doi.org/10.3390/photonics10020223.

Abstract:
The traditional light-field coding algorithm used in a multi-projector-type light-field display system requires sophisticated and complex three-dimensional modeling processes or parallax images obtained through dense capture. Here we propose an algorithm based on target pixel matching, which directly uses parallax images without a complex modeling process and can achieve a more accurate light-field reconstruction effect under sparse capture conditions. To compensate for the lack of capture information caused by sparse capture, this algorithm compares the pixel similarity of the captured images of the same object point on different cameras to accurately determine the real capture information of the object point at different depths, which is recorded as the target pixel; the target pixel is then encoded according to the lighting path to obtain the correct projector image array (PIA). By comparing the quality of PIAs generated by the traditional light-field coding algorithm and by ours, and the display effect after loading the PIAs into the actual display system, we proved the effectiveness of the algorithm.
26

Yan, Tao, Yiming Mao, Jianming Wang, Wenxi Liu, Xiaohua Qian, and Rynson W. H. Lau. "Generating Stereoscopic Images With Convergence Control Ability From a Light Field Image Pair." IEEE Transactions on Circuits and Systems for Video Technology 30, no. 5 (May 2020): 1435–50. http://dx.doi.org/10.1109/tcsvt.2019.2903556.

27

Yu, Xunbo, Hanyu Li, Xiwen Su, Xin Gao, Xinzhu Sang, and Binbin Yan. "Image edge smoothing method for light-field displays based on joint design of optical structure and elemental images." Optics Express 31, no. 11 (May 12, 2023): 18017. http://dx.doi.org/10.1364/oe.488781.

Abstract:
Image visual quality is of fundamental importance for three-dimensional (3D) light-field displays. The pixels of a light-field display are enlarged after the imaging of the light-field system, increasing the graininess of the image, which leads to a severe decline in the image edge smoothness as well as image quality. In this paper, a joint optimization method is proposed to minimize the “sawtooth edge” phenomenon of reconstructed images in light-field display systems. In the joint optimization scheme, neural networks are used to simultaneously optimize the point spread functions of the optical components and elemental images, and the optical components are designed based on the results. The simulations and experimental data show that a less grainy 3D image is achievable through the proposed joint edge smoothing method.
28

Liu, Yi-Jian, Xue-Rui Wen, Wei-Ze Li, Yan Xing, Han-Le Zhang, and Qiong-Hua Wang. "P‐123: Analysis of Light Shaping Diffuser in Integral Imaging Based Light Field Display." SID Symposium Digest of Technical Papers 55, no. 1 (June 2024): 1866–69. http://dx.doi.org/10.1002/sdtp.17948.

Abstract:
Light shaping diffuser (LSD) is a crucial component in integral imaging based light field display for achieving continuous 3D images. However, LSD also constrains the display resolution at wide viewing angles. To address this issue, this study conducts a thorough analysis. The light homogenization characteristics of LSD are proposed, complementing the understanding of LSD's mechanism for achieving continuous 3D images. Furthermore, the scattering function of LSD is established for the first time, explaining the mechanism behind LSD constraining resolution at wide viewing angles. The proposed scattering function can guide the optimization design of resolution-enhanced integral imaging based light field display.
29

Prof. Sathish. "Light Field Image Coding with Image Prediction in Redundancy." Journal of Soft Computing Paradigm 2, no. 3 (July 21, 2020): 160–67. http://dx.doi.org/10.36548/jscp.2020.3.003.

Abstract:
The proposed work involves a hybrid data representation for efficient light field coding. Existing light field coding solutions are implemented using sub-aperture or micro-images. However, the full capacity in terms of intrinsic redundancy in light field images is not completely explored. This paper presents a hybrid data representation which exploits four major redundancy types. Using coding blocks, the most predominant redundancy is exploited to find the optimum coding solution that provides maximum flexibility. To show how efficiently the hybrid representation works, we have proposed a combination of a pseudo-video-sequence coding approach with pixel prediction methods. The observed experimental results show a positive bit-rate saving when compared to other similar methods. The proposed method also outperforms other coding algorithms, such as WaSP and MuLE, on an HEVC-based benchmark.
30

Sharma, Rishabh, Stuart Perry, and Eva Cheng. "Noise-Resilient Depth Estimation for Light Field Images Using Focal Stack and FFT Analysis." Sensors 22, no. 5 (March 3, 2022): 1993. http://dx.doi.org/10.3390/s22051993.

Abstract:
Depth estimation for light field images is essential for applications such as light field image compression, reconstructing perspective views, and 3D reconstruction. Previous depth map estimation approaches do not capture sharp transitions around object boundaries due to occlusions, making many of the current approaches unreliable at depth discontinuities. This is especially the case for light field images, because the pixels do not exhibit photo-consistency in the presence of occlusions. In this paper, we propose an algorithm to estimate the depth map for light field images using depth from defocus. Our approach uses a small patch size of pixels in each focal stack image for comparing defocus cues, allowing the algorithm to generate sharper depth boundaries. Then, in contrast to existing approaches that use defocus cues for depth estimation, we use frequency-domain image similarity checking to generate the depth map. Processing in the frequency domain reduces the individual pixel errors that occur while directly comparing RGB images, making the algorithm more resilient to noise. The algorithm has been evaluated on both a synthetic image dataset and real-world images in the JPEG dataset. Experimental results demonstrate that our proposed algorithm outperforms state-of-the-art depth estimation techniques for light field images, particularly in the case of noisy images.
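The frequency-domain comparison idea can be shown in a few lines (our illustration, not the paper's exact cost function): comparing FFT magnitude spectra of small patches, rather than raw RGB values, damps isolated pixel errors caused by noise.

```python
import numpy as np

def spectral_distance(patch_a, patch_b):
    """Distance between two patches computed on FFT magnitude spectra,
    which is less sensitive to individual noisy pixels than raw RGB."""
    fa = np.abs(np.fft.fft2(patch_a))
    fb = np.abs(np.fft.fft2(patch_b))
    return float(np.mean((fa - fb) ** 2))

def best_focus_index(stack_patches, reference):
    """Depth from defocus: choose the focal-stack slice whose patch best
    matches the reference patch in the frequency domain."""
    costs = [spectral_distance(p, reference) for p in stack_patches]
    return int(np.argmin(costs))

ref = np.random.rand(7, 7)                        # small patch, per the abstract
stack_patches = [np.random.rand(7, 7) for _ in range(10)]
print(best_focus_index(stack_patches, ref))
```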
31

Palmieri, Luca, Gabriele Scrofani, Nicolò Incardona, Genaro Saavedra, Manuel Martínez-Corral, and Reinhard Koch. "Robust Depth Estimation for Light Field Microscopy." Sensors 19, no. 3 (January 25, 2019): 500. http://dx.doi.org/10.3390/s19030500.

Abstract:
Light field technologies have seen a rise in recent years, and microscopy is a field where such technology has had a deep impact. The possibility to provide spatial and angular information at the same time and in a single shot brings several advantages and allows for new applications. A common goal in these applications is the calculation of a depth map to reconstruct the three-dimensional geometry of the scene. Many approaches are applicable, but most of them cannot achieve high accuracy because of the nature of such images: biological samples are usually poor in features and do not exhibit sharp colors like natural scenes. Under such conditions, standard approaches result in noisy depth maps. In this work, a robust approach is proposed in which accurate depth maps can be produced by exploiting the information recorded in the light field, in particular in images produced with a Fourier integral microscope. The proposed approach can be divided into three main parts. Initially, it creates two cost volumes using different focal cues, namely correspondences and defocus. Secondly, it applies filtering methods that exploit multi-scale and super-pixel cost aggregation to reduce noise and enhance accuracy. Finally, it merges the two cost volumes and extracts a depth map through multi-label optimization.
32

Liu, Jinghuai, Qian Zhang, Ang Shen, Ying Gao, Jiaqi Hou, Bin Wang, and Tao Yan. "A Novel Light Field Image Compression Method Using EPI Restoration Neural Network." BioMed Research International 2022 (June 13, 2022): 1–8. http://dx.doi.org/10.1155/2022/8324438.

Abstract:
Different from traditional images, light field images record not only spatial information but also angular information. Because the large volume of light field data brings great difficulties to storage and compression, light field compression technology has attracted much attention. The epipolar plane image (EPI) contains a lot of low-rank information, which makes it possible to recover the complete EPI from only a part of it. In this paper, a light field image coding framework based on an EPI restoration neural network is proposed. Compared with previous algorithms, the proposed algorithm further takes advantage of the inherent similarity in light field images, and the proposed framework has higher performance and robustness. Experimental results show that the proposed method has superior performance compared to the state-of-the-art, both quantitatively and qualitatively.
33

Momonoi, Yoshiharu, Koya Yamamoto, Yoshihiro Yokote, Atsushi Sato, and Yasuhiro Takaki. "Systematic Approach for Alignment of Light Field Mirage." Applied Sciences 12, no. 23 (December 4, 2022): 12413. http://dx.doi.org/10.3390/app122312413.

Abstract:
We previously proposed techniques to eliminate repeated three-dimensional (3D) images produced by the light field Mirage, which consists of circularly aligned multiple slanted light field displays. However, we only constructed the lower half of the system to verify the proposed elimination techniques. In this study, we developed an alignment technique for a complete 360-degree display system. The alignment techniques for conventional 360-degree display systems, which use a large number of projectors, depend greatly on electronic calibration, which inevitably causes image quality degradation. We propose a systematic approach to the alignment of the light field Mirage, which causes less image quality degradation by taking advantage of the small number of display devices required. The calibration technique for light field displays, the image stitching technique, and the brightness matching technique are performed consecutively, and the generation of 360-degree 3D images is verified.
34

Bhullar, A., R. A. Ali, and D. L. Welch. "A package for the automated classification of images containing supernova light echoes." Astronomy & Astrophysics 655 (November 2021): A82. http://dx.doi.org/10.1051/0004-6361/202039755.

Abstract:
Context. The so-called light echoes of supernovae – the apparent motion of outburst-illuminated interstellar dust – can be detected in astronomical difference images; however, light echoes are extremely rare, which makes manual detection an arduous task. Surveys for centuries-old supernova light echoes can involve hundreds of pointings of wide-field imagers, wherein the subimages from each CCD amplifier require examination. Aims. We introduce ALED, a Python package that implements (i) a capsule network trained to automatically identify images with a high probability of containing at least one supernova light echo and (ii) routing path visualization to localize light echoes and/or light-echo-like features in the identified images. Methods. We compared the performance of the capsule network implemented in ALED (ALED-m) to several capsule and convolutional neural networks of different architectures. We also applied ALED to a large catalogue of astronomical difference images and manually inspected candidate light echo images for human verification. Results. ALED-m was found to achieve 90% classification accuracy on the test set and to precisely localize the identified light echoes via routing path visualization. From a set of 13 000+ astronomical difference images, ALED identified a set of light echoes that had been overlooked in manual classification.
35

Mun, Ji-Hun, and Yo-Sung Ho. "Depth from stacked light field images using generative adversarial network." Electronic Imaging 2019, no. 11 (January 13, 2019): 270–1. http://dx.doi.org/10.2352/issn.2470-1173.2019.11.ipas-270.

36

Cho, Donghyeon, Sunyeong Kim, Yu-Wing Tai, and In So Kweon. "Automatic Trimap Generation and Consistent Matting for Light-Field Images." IEEE Transactions on Pattern Analysis and Machine Intelligence 39, no. 8 (August 1, 2017): 1504–17. http://dx.doi.org/10.1109/tpami.2016.2606397.

37

Ai, Wei, Sen Xiang, and Li Yu. "Robust depth estimation for multi-occlusion in light-field images." Optics Express 27, no. 17 (August 16, 2019): 24793. http://dx.doi.org/10.1364/oe.27.024793.

38

Paudyal, Pradip, Federica Battisti, Marten Sjostrom, Roger Olsson, and Marco Carli. "Towards the Perceptual Quality Evaluation of Compressed Light Field Images." IEEE Transactions on Broadcasting 63, no. 3 (September 2017): 507–22. http://dx.doi.org/10.1109/tbc.2017.2704430.

39

Williem, Ki Won Shon, and In Kyu Park. "Spatio-angular consistent editing framework for 4D light field images." Multimedia Tools and Applications 75, no. 23 (July 15, 2016): 16615–31. http://dx.doi.org/10.1007/s11042-016-3754-y.

40

Salem, Ahmed, Hatem Ibrahem, Bilel Yagoub, and Hyun-Soo Kang. "End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks." Sensors 22, no. 9 (May 6, 2022): 3540. http://dx.doi.org/10.3390/s22093540.

Abstract:
Light field (LF) technology has become a focus of great interest due to its use in many applications, especially since the introduction of the consumer LF camera, which facilitated the acquisition of dense LF images. Obtaining densely sampled LF images is costly due to the trade-off between spatial and angular resolutions. Accordingly, in this research, we suggest a learning-based solution to this challenging problem: reconstructing dense, high-quality LF images. Instead of training our model with several images of the same scene, we used raw LF images (lenslet images). The raw LF format enables the encoding of several images of the same scene into one image. Consequently, it helps the network to understand and simulate the relationship between different images, resulting in higher-quality images. We divided our model into two successive modules: LF reconstruction (LFR) and LF augmentation (LFA). Each module is implemented as a residual network based on convolutional neural networks (CNNs). We trained our network to lessen the absolute error between the novel and reference views. Experimental findings on real-world datasets show that our suggested method has excellent performance and superiority over state-of-the-art approaches.
41

Kim, Hyun Myung, Young Jin Yoo, Jeong Min Lee, and Young Min Song. "A Wide Field-of-View Light-Field Camera with Adjustable Multiplicity for Practical Applications." Sensors 22, no. 9 (April 30, 2022): 3455. http://dx.doi.org/10.3390/s22093455.

Abstract:
The long-held idea of creating 3D images that depict depth information along with color and brightness has been realized with the advent of the light-field camera (LFC). Recently advanced LFCs mainly utilize micro-lens arrays (MLAs) as a key component to acquire rich 3D information, including depth, encoded color, reflectivity, refraction, occlusion, and transparency. The wide field-of-view (FOV) capability of LFCs, which is expected to be of great benefit for extended applications, is obstructed by the fundamental limitations of LFCs. Here, we present a practical strategy for a wide-FOV LFC based on adjusting the spacing factor. Multiplicity (M), the inverse magnification of the MLA located between the image plane and the sensor, was introduced as the overlap ratio between the micro-images. M was adopted as a design parameter in several factors of the LFC, and a commercial lens with adjustable FOV was used as the main lens for practicality. The light-field (LF) information was evaluated by considering the pixel resolution and overlapping area in narrow and wide FOV. M was optimized for narrow and wide FOV, respectively, according to the trade-off between pixel resolution and geometric resolution. Customized wide-FOV LFCs with different M were compared by spatial resolution and depth information tests, and the wide-FOV LFC with optimized M provides LF images with high accuracy.
42

Shi, Shengxian, Linlin Sun, Yinsen Luan, Rui Wang, and T. H. New. "Design and evaluation of a light-field multi-wavelength pyrometer." Review of Scientific Instruments 93, no. 11 (November 1, 2022): 114901. http://dx.doi.org/10.1063/5.0119009.

Abstract:
This letter describes the design and implementation of a multi-wavelength light-field pyrometer, in which six-channel radiation images were captured with one CMOS sensor. Such capability is achieved by placing a 2 × 3 filter array in front of the main lens of an unfocused light-field camera, such that discrete wavelength and radiation intensity can be simultaneously recorded. It demonstrates, through black-body furnace experiments, how multi-channel radiation images can be extracted from one raw light-field multispectral image, and how an accurate 2D temperature distribution can be recovered by optimization algorithms.
43

Shan, Liang, Ping An, Chunli Meng, Xinpeng Huang, Chao Yang, and Liquan Shen. "A No-Reference Image Quality Assessment Metric by Multiple Characteristics of Light Field Images." IEEE Access 7 (2019): 127217–29. http://dx.doi.org/10.1109/access.2019.2940093.

44

Viganò, Nicola, Felix Lucka, Ombeline de La Rochefoucauld, Sophia Bethany Coban, Robert van Liere, Marta Fajardo, Philippe Zeitoun, and Kees Joost Batenburg. "Emulation of X-ray Light-Field Cameras." Journal of Imaging 6, no. 12 (December 11, 2020): 138. http://dx.doi.org/10.3390/jimaging6120138.

Abstract:
X-ray plenoptic cameras acquire multi-view X-ray transmission images in a single exposure (light-field). Their development is challenging: designs have appeared only recently, and they are still affected by important limitations. Concurrently, the lack of available real X-ray light-field data hinders dedicated algorithmic development. Here, we present a physical emulation setup for rapidly exploring the parameter space of both existing and conceptual camera designs. This will assist and accelerate the design of X-ray plenoptic imaging solutions, and provide a tool for generating unlimited real X-ray plenoptic data. We also demonstrate that X-ray light-fields allow for reconstructing sharp spatial structures in three-dimensions (3D) from single-shot data.
45

Sijbrandij, S. J., K. F. Russell, R. C. Thomson, and M. K. Miller. "Digital Field Ion Microscopy." Microscopy and Microanalysis 4, S2 (July 1998): 88–89. http://dx.doi.org/10.1017/s1431927600020560.

Abstract:
Due to environmental concerns, there is a trend to avoid the use of chemicals needed to develop negatives and to process photographic paper, and to use digital technologies instead. Digital technology also offers the advantage of convenience, as it enables quick access to the end result, allows image storage and processing on a computer, allows rapid hard-copy output, and simplifies electronic publishing. Recently, significant improvements have been made to the performance and cost of camera sensors and printers. In this paper, field ion images recorded with two digital cameras of different resolution are compared to images recorded on standard 35 mm negative film. It should be noted that field ion images exhibit low light intensity and high contrast. Field ion images were recorded from a standard microchannel plate and a phosphor screen and had acceptance angles of ∼60°.
46

Hansen, C. J., M. A. Ravine, P. M. Schenk, G. C. Collins, E. J. Leonard, C. B. Phillips, M. A. Caplinger, F. Tosi, S. J. Bolton, and Björn Jónsson. "Juno’s JunoCam Images of Europa." Planetary Science Journal 5, no. 3 (March 1, 2024): 76. http://dx.doi.org/10.3847/psj/ad24f4.

Abstract:
On 2022 September 29, the Juno spacecraft passed Europa at 355 km, the first close pass since the Galileo flyby in 2000. Juno's visible-light imager, JunoCam, collected four images, enabling cartographic, topographic, and surface geology analysis. The topography along the terminator is consistent with previously reported features that may indicate true polar wander. A bright band was discovered, and it indicates global symmetry in the stress field that forms bright bands on Europa. The named feature Gwern is shown not to be an impact crater. Surface change detection shows no changes in 22 yr, although this is a difficult task considering differences between the JunoCam and Galileo imagers and very different viewing geometries. No active eruptions were detected.
47

Schambach, Maximilian, and Fernando Puente León. "Reconstruction of multispectral images from spectrally coded light fields of flat scenes." tm - Technisches Messen 86, no. 12 (November 18, 2019): 758–64. http://dx.doi.org/10.1515/teme-2019-0103.

Abstract:
We present a novel method to reconstruct multispectral images of flat objects from spectrally coded light fields as taken by an unfocused light field camera with a spectrally coded microlens array. In this sense, the spectrally coded light field camera is used as a multispectral snapshot imager, acquiring a multispectral datacube in a single exposure. The multispectral image, corresponding to the light field's central view, is reconstructed by shifting the spectrally coded subapertures onto the central view according to their respective disparity. We assume that the disparity of the scene is approximately constant and non-zero. Since the spectral mask is identical for all subapertures, the missing spectral data of the central view are filled in from the shifted spectrally coded subapertures. We investigate the reconstruction quality for different spectral masks and camera parameter sets optimized for real-life applications such as in-line production monitoring, for which the constant-disparity constraint naturally holds. For synthesized reference scenes, using 16 color channels, we achieve a reconstruction PSNR of up to 51 dB.
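A hedged sketch of the reconstruction rule just described, assuming a constant known disparity and one spectral band per subaperture (the function name and the simple per-band averaging are our assumptions):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def reconstruct_multispectral(subapertures, offsets, band_of, n_bands, disparity):
    """Shift each spectrally coded subaperture onto the central view by
    disparity * angular offset, then average the contributions per band."""
    h, w = subapertures[0].shape
    acc = np.zeros((n_bands, h, w))
    cnt = np.zeros(n_bands)
    for img, (du, dv), band in zip(subapertures, offsets, band_of):
        acc[band] += nd_shift(img, (dv * disparity, du * disparity), order=1)
        cnt[band] += 1
    return acc / cnt[:, None, None]

views = [np.random.rand(32, 32) for _ in range(9)]          # coded subapertures
offs = [(u, v) for v in (-1, 0, 1) for u in (-1, 0, 1)]     # angular offsets
bands = list(range(9))                                      # one band per view
cube = reconstruct_multispectral(views, offs, bands, 9, disparity=1.0)
```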
48

Shen, Yu, Yuhang Liu, Yonglin Tian, Zhongmin Liu, and Feiyue Wang. "A New Parallel Intelligence Based Light Field Dataset for Depth Refinement and Scene Flow Estimation." Sensors 22, no. 23 (December 4, 2022): 9483. http://dx.doi.org/10.3390/s22239483.

Abstract:
Computer vision tasks such as motion estimation, depth estimation, and object detection are better served by light field images, which carry more structural information than traditional 2D monocular images. However, since costly data acquisition instruments are difficult to calibrate, it is always hard to obtain light field images of real-world scenes. The majority of the datasets for static light field images now available are modest in size and cannot be used in methods such as Transformers to fully leverage local and global correlations. Additionally, studies on dynamic situations, such as object tracking and motion estimation based on 4D light field images, have been rare, and we anticipate superior performance there. In this paper, we first propose a new static light field dataset that contains up to 50 scenes and takes 8 to 10 perspectives for each scene, with the ground truth including disparities, depths, surface normals, segmentations, and object poses. This dataset is larger in scale than current mainstream datasets for depth estimation refinement, and we focus on indoor and some outdoor scenarios. Second, to provide optical flow ground truth that indicates the 3D motion of objects, in addition to the ground truth obtained in static scenes, and thus enable more precise pixel-level motion estimation, we released a light field scene flow dataset with dense 3D motion ground truth for pixels; each scene has 150 frames. Third, by utilizing DistgDisp and DistgASR, which decouple the angular and spatial domains of the light field, we perform disparity estimation and angular super-resolution to evaluate the performance of our light field dataset. The performance and potential of our dataset in disparity estimation and angular super-resolution have been demonstrated by the experimental results.
49

Wang, Qian, Li Fang, Long Ye, Wei Zhong, Fei Hu, and Qin Zhang. "Flexible Light Field Angular Superresolution via a Deep Coarse-to-Fine Framework." Wireless Communications and Mobile Computing 2022 (March 10, 2022): 1–10. http://dx.doi.org/10.1155/2022/4570755.

Abstract:
Acquisition of densely-sampled light fields (LFs) is challenging. In this paper, we develop a coarse-to-fine light field angular superresolution method that reconstructs densely-sampled LFs from sparsely-sampled ones. Unlike most other methods, which are limited by the regularity of sampling patterns, our method can flexibly deal with different scale factors with one model. Specifically, a coarse restoration is performed on epipolar plane images (EPIs) at arbitrary angular resolution, followed by a refinement with 3D convolutional neural networks (CNNs) on stacked EPIs. The subaperture images in LFs are synthesized first horizontally, then vertically, forming a pseudo-4DCNN. In addition, our method can handle large-baseline light fields without using geometry information, which means it is not constrained by the Lambertian assumption. Experimental results over various light field datasets, including large-baseline LFs, demonstrate the significant superiority of our method when compared with state-of-the-art ones.
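The coarse stage is easy to picture: an EPI is a 2D slice (angle × space), and upsampling it along the angular axis alone produces new in-between views at any scale factor. A toy sketch (ours; spline interpolation stands in for the coarse restoration, and the 3D-CNN refinement is omitted):

```python
import numpy as np
from scipy.ndimage import zoom

def upsample_epi(epi, factor):
    """Coarsely restore an EPI (angle x space) by interpolating along the
    angular axis only; the spatial axis is left untouched."""
    return zoom(epi, (factor, 1), order=3)

epi = np.random.rand(5, 256)       # 5 input views, one scanline each
dense = upsample_epi(epi, 4)       # 20 angular samples: a flexible factor
print(dense.shape)
```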
50

Wang, Shunzhou, Tianfei Zhou, Yao Lu, and Huijun Di. "Detail-Preserving Transformer for Light Field Image Super-resolution." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2522–30. http://dx.doi.org/10.1609/aaai.v36i3.20153.

Abstract:
Recently, numerous algorithms have been developed to tackle the problem of light field super-resolution (LFSR), i.e., super-resolving low-resolution light fields to gain high-resolution views. Despite delivering encouraging results, these approaches are all convolution-based, and are naturally weak in the global relation modeling of sub-aperture images that is necessary to characterize the inherent structure of light fields. In this paper, we put forth a novel formulation built upon Transformers, by treating LFSR as a sequence-to-sequence reconstruction task. In particular, our model regards the sub-aperture images of each vertical or horizontal angular view as a sequence, and establishes long-range geometric dependencies within each sequence via a spatial-angular locally-enhanced self-attention layer, which also maintains the locality of each sub-aperture image. Additionally, to better recover image details, we propose a detail-preserving Transformer (termed DPT), which leverages gradient maps of the light field to guide the sequence learning. DPT consists of two branches, each associated with a Transformer for learning from an original or gradient image sequence. The two branches are finally fused to obtain comprehensive feature representations for reconstruction. Evaluations are conducted on a number of light field datasets, including real-world scenes and synthetic data. The proposed method achieves superior performance compared with other state-of-the-art schemes. Our code is publicly available at: https://github.com/BITszwang/DPT.