A selection of scholarly literature on the topic "Light field images"
Format your source according to APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Light field images".
Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, if these details are available in the work's metadata.
Journal articles on the topic "Light field images"
Garces, Elena, Jose I. Echevarria, Wen Zhang, Hongzhi Wu, Kun Zhou, and Diego Gutierrez. "Intrinsic Light Field Images." Computer Graphics Forum 36, no. 8 (May 5, 2017): 589–99. http://dx.doi.org/10.1111/cgf.13154.
Lee, Seung-Jae, and In Kyu Park. "Dictionary Learning based Superresolution on 4D Light Field Images." Journal of Broadcast Engineering 20, no. 5 (September 30, 2015): 676–86. http://dx.doi.org/10.5909/jbe.2015.20.5.676.
Yan, Tao, Yuyang Ding, Fan Zhang, Ningyu Xie, Wenxi Liu, Zhengtian Wu, and Yuan Liu. "Snow Removal From Light Field Images." IEEE Access 7 (2019): 164203–15. http://dx.doi.org/10.1109/access.2019.2951917.
Yu, Li, Yunpeng Ma, Song Hong, and Ke Chen. "Review of Light Field Image Super-Resolution." Electronics 11, no. 12 (June 17, 2022): 1904. http://dx.doi.org/10.3390/electronics11121904.
Kobayashi, Kenkichi, and Hideo Saito. "High-Resolution Image Synthesis from Video Sequence by Light Field." Journal of Robotics and Mechatronics 15, no. 3 (June 20, 2003): 254–62. http://dx.doi.org/10.20965/jrm.2003.p0254.
Sun Junyang 孙俊阳, Sun Jun 孙俊, Xu Chuanlong 许传龙, Zhang Biao 张彪, and Wang Shimin 王式民. "A Calibration Method of Focused Light Field Cameras Based on Light Field Images." Acta Optica Sinica 37, no. 5 (2017): 0515002. http://dx.doi.org/10.3788/aos201737.0515002.
KOMATSU, Koji, Kohei ISECHI, Keita TAKAHASHI, and Toshiaki FUJII. "Light Field Coding Using Weighted Binary Images." IEICE Transactions on Information and Systems E102.D, no. 11 (November 1, 2019): 2110–19. http://dx.doi.org/10.1587/transinf.2019pcp0001.
Yamauchi, Masaki, and Tomohiro Yendo. "Light field display using wavelength division multiplexing." Electronic Imaging 2020, no. 2 (January 26, 2020): 101-1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-101.
Xiao, Bo, Xiujing Gao, and Hongwu Huang. "Optimizing Underwater Image Restoration and Depth Estimation with Light Field Images." Journal of Marine Science and Engineering 12, no. 6 (June 2, 2024): 935. http://dx.doi.org/10.3390/jmse12060935.
Salem, Ahmed, Hatem Ibrahem, and Hyun-Soo Kang. "Light Field Reconstruction Using Residual Networks on Raw Images." Sensors 22, no. 5 (March 2, 2022): 1956. http://dx.doi.org/10.3390/s22051956.
Dissertations on the topic "Light field images"
Zhang, Zhengyu. "Quality Assessment of Light Field Images." Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0002.
Light Field Image (LFI) has garnered remarkable interest and fascination due to its burgeoning significance in immersive applications. Since LFIs may be distorted at various stages from acquisition to visualization, Light Field Image Quality Assessment (LFIQA) is vitally important to monitor the potential impairments of LFI quality. The first contribution (Chapter 3) of this work focuses on developing two handcrafted feature-based No-Reference (NR) LFIQA metrics, in which texture information and wavelet information are exploited for quality evaluation. Then in the second part (Chapter 4), we explore the potential of combining deep learning technology with the quality assessment of LFIs, and propose four deep learning-based LFIQA metrics according to different LFI characteristics, including three NR metrics and one Full-Reference (FR) metric. In the last part (Chapter 5), we conduct subjective experiments and propose a novel standard LFIQA database. Moreover, a benchmark of numerous state-of-the-art objective LFIQA metrics on the proposed database is provided.
Chiesa, Valeria. "Revisiting face processing with light field images." Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS059.pdf.
Being able to predict the macroscopic response of a material from the knowledge of its constituents at a microscopic or mesoscopic scale has always been the Holy Grail pursued by material science, for it provides building bricks for the understanding of complex structures as well as for the development of tailor-made optimized materials. The homogenization theory constitutes nowadays a well-established theoretical framework to estimate the overall response of composite materials for a broad range of mechanical behaviors. Such a framework is still lacking for brittle fracture, which is (i) a dissipative evolution problem that (ii) localizes at the crack tip and (iii) is related to a structural one. In this work, we propose a theoretical framework based on a perturbative approach of Linear Elastic Fracture Mechanics to model (i) crack propagation in large-scale disordered materials as well as (ii) the dissipative processes involved at the crack tip during the interaction of a crack with material heterogeneities. Their ultimate contribution to the macroscopic toughness of the composite is (iii) estimated from the resolution of the structural problem using an approach inspired by statistical physics. The theoretical and numerical inputs presented in the thesis are finally compared to experimental measurements of crack propagation in 3D-printed heterogeneous polymers obtained through digital image correlation.
Dricot, Antoine. "Light-field image and video compression for future immersive applications." Thesis, Paris, ENST, 2017. http://www.theses.fr/2017ENST0008/document.
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and only provide uncomfortable and unnatural viewing situations to the users. The next generation of immersive video technologies appears therefore as a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e. in all directions) in a scene. New devices for sampling/capturing the light-field of a scene are emerging fast, such as camera arrays or plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, like head-mounted displays and projection-based light-field display systems, and promising target applications already exist. For several years now this light-field representation has been drawing a lot of interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge to set up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image/video compression, as compression efficiency is a key factor for enabling these services on the consumer markets. Secondly, improvements and new coding schemes are proposed to increase compression performance in order to enable efficient light-field content transmission on future networks.
McEwen, Bryce Adam. "Microscopic Light Field Particle Image Velocimetry." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3238.
Souza, Wallace Bruno Silva de. "Transmissão progressiva de imagens sintetizadas de light field." Repositório Institucional da UnB, 2018. http://repositorio.unb.br/handle/10482/34206.
This work proposes an optimized rate-distortion method to transmit light field synthesized images. Briefly, a light field image can be understood as four-dimensional (4D) data with both spatial and angular resolution, where each two-dimensional subimage of this 4D structure is a certain perspective, that is, a Sub-Aperture Image (SAI). This work aims to modify and improve a previous proposal named PLFC (Progressive Light Field Communication), which addresses image synthesis for different focal point images requested by a user. Like the PLFC, this work seeks to provide enough information to the user so that, as the transmission progresses, he can synthesize his own focal point images without the need to transmit new images. Thus, the first proposed modification concerns how the user's initial cache should be chosen, defining an ideal amount of SAIs to send at the beginning of the transmission. An improvement of the additional image selection process is also proposed by means of a refinement algorithm, which is applied even during cache initialization. This new selection process works with dynamic QPs (Quantization Parameters) during encoding and considers not only the immediate gains for the synthesized image but also the subsequent syntheses. This idea was already presented by PLFC but had not been satisfactorily implemented. Moreover, this work proposes an automatic way to calculate the Lagrange multiplier that controls the influence of the future benefit associated with the transmission of a given SAI. Finally, a simplified manner of obtaining this future benefit is described, reducing the computational complexity involved. The uses of such a system are diverse; for example, it can be used to identify some element in a light field image, adjusting the focus accordingly. Besides the proposal, the obtained results are presented, with a discussion of the significant gains of up to 32.8% over the previous PLFC in terms of BD-Rate. This gain reaches up to 85.8% relative to trivial light field data transmissions.
Nieto, Grégoire. "Light field remote vision." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM051/document.
Light fields have gathered much interest during the past few years. Captured from a plenoptic camera or a camera array, they sample the plenoptic function that provides rich information about the radiance of any ray passing through the observed scene. They offer a plethora of computer vision and graphics applications: 3D reconstruction, segmentation, novel view synthesis, inpainting or matting, for instance. Reconstructing the light field consists in recovering the missing rays given the captured samples. In this work we cope with the problem of reconstructing the light field in order to synthesize an image, as if it were taken by a camera closer to the scene than the input plenoptic device or set of cameras. Our approach is to formulate the light field reconstruction challenge as an image-based rendering (IBR) problem. Most IBR algorithms first estimate the geometry of the scene, known as a geometric proxy, to make correspondences between the input views and the target view. A new image is generated by the joint use of both the input images and the geometric proxy, often projecting the input images on the target point of view and blending them in intensity. A naive color blending of the input images does not guarantee the coherence of the synthesized image. Therefore we propose a direct multi-scale approach based on Laplacian rendering to blend the source images at all frequencies, thus preventing rendering artifacts. However, the imperfection of the geometric proxy is also a main cause of rendering artifacts, which appear as high-frequency noise in the synthesized image. We introduce a novel variational rendering method with gradient constraints on the target image for a better-conditioned linear system to solve, removing the high-frequency noise due to the geometric proxy. Some scene reconstructions are very challenging because of the presence of non-Lambertian materials; moreover, even a perfect geometric proxy is not sufficient when reflections, transparencies and specularities question the rules of parallax. We propose an original method based on the local approximation of the sparse light field in the plenoptic space to generate a new viewpoint without the need for any explicit geometric proxy reconstruction. We evaluate our method both quantitatively and qualitatively on non-trivial scenes that contain non-Lambertian surfaces. Lastly we discuss the question of the optimal placement of constrained cameras for IBR, and the use of our algorithms to recover objects that are hidden behind a camouflage. The proposed algorithms are illustrated by results on both structured (camera arrays) and unstructured plenoptic datasets.
Hawary, Fatma. "Light field image compression and compressive acquisition." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S082.
By capturing a scene from several points of view, a light field provides a rich representation of the scene geometry that brings a variety of novel post-capture applications and enables immersive experiences. The objective of this thesis is to study the compressibility of light field contents in order to propose novel solutions for higher-resolution light field imaging. Two main aspects were studied in this work. Since the compression performance of current coding schemes on light fields is still limited, more adapted approaches are needed to better describe the light field structures. We propose a scalable coding scheme that encodes only a subset of light field views and reconstructs the remaining views via a sparsity-based method. A residual coding provides an enhancement to the final quality of the decoded light field. Acquiring very large-scale light fields is still not feasible with current capture and storage facilities; a possible alternative is to reconstruct the densely sampled light field from a subset of acquired samples. We propose an automatic reconstruction method to recover a compressively sampled light field, which exploits its sparsity in the Fourier domain. No geometry estimation is needed, and an accurate reconstruction is achieved even with a very low number of captured samples. A further study is conducted for the full scheme, including compressive sensing of a light field and its transmission via the proposed coding approach. The distortion introduced by the different processing steps is measured. The results show performance comparable to depth-based view synthesis methods.
Löw, Joakim, Anders Ynnerman, Per Larsson, and Jonas Unger. "HDR Light Probe Sequence Resampling for Realtime Incident Light Field Rendering." Linköpings universitet, Visuell informationsteknologi och applikationer, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18052.
Baravdish, Gabriel. "GPU Accelerated Light Field Compression." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150558.
Books on the topic "Light field images"
Daly, Charles J. Scalar diffraction from a circular aperture. Boston: Kluwer Academic, 2000.
The low light photography field guide: Go beyond daylight to capture stunning low light images. Lewes, East Sussex: ILEX, 2011.
The low light photography field guide: Go beyond daylight to capture stunning low light images. Waltham, MA: Focal Press/Elsevier, 2011.
Freeman, Michael. Low Light Photography Field Guide: The Essential Guide to Getting Perfect Images in Challenging Light. Taylor & Francis Group, 2014.
Wilson, Rita, and Brigid Maher, eds. Words, Images and Performances in Translation. Continuum International Publishing Group, 2012. http://dx.doi.org/10.5040/9781472541833.
Zhang, Cha. Light Field Sampling (Synthesis Lectures on Image, Video, and Multimedia Processing). Morgan and Claypool Publishers, 2007.
Murray, Jonathan, and Nea Ehrlich, eds. Drawn from Life. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9780748694112.001.0001.
Leerdam, Andrea. Woodcuts as Reading Guides. Amsterdam University Press, 2023. http://dx.doi.org/10.5117/9789048560257.
DeSnyder, Sarah M., Simona F. Shaitelman, and Mark V. Schaverien. Lymphedema and Body Image Disturbance. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780190655617.003.0010.
Kitts, Margo, Mark Juergensmeyer, and Michael Jerryson. Introduction. Edited by Michael Jerryson, Mark Juergensmeyer, and Margo Kitts. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199759996.013.0041.
Book chapters on the topic "Light field images"
Cho, Donghyeon, Sunyeong Kim, and Yu-Wing Tai. "Consistent Matting for Light Field Images." In Computer Vision – ECCV 2014, 90–104. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-10593-2_7.
Jung, Daniel, and Reinhard Koch. "Efficient Rendering of Light Field Images." In Video Processing and Computational Video, 184–211. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24870-2_8.
Koch, R., B. Heigl, M. Pollefeys, L. Van Gool, and H. Niemann. "A Geometric Approach to Light field Calibration." In Computer Analysis of Images and Patterns, 596–603. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48375-6_71.
Anisimov, Yuriy, Oliver Wasenmüller, and Didier Stricker. "A Compact Light Field Camera for Real-Time Depth Estimation." In Computer Analysis of Images and Patterns, 52–63. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-29888-3_5.
Jin, Panqi, Gangyi Jiang, Yeyao Chen, Zhidi Jiang, and Mei Yu. "Perceptual Light Field Image Coding with CTU Level Bit Allocation." In Computer Analysis of Images and Patterns, 255–64. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-44240-7_25.
Shan, Liang, Ping An, Deyang Liu, and Ran Ma. "Subjective Evaluation of Light Field Images for Quality Assessment Database." In Communications in Computer and Information Science, 267–76. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8108-8_25.
Wang, Xuechun, Wentao Chao, and Fuqing Duan. "Depth Optimization for Accurate 3D Reconstruction from Light Field Images." In Pattern Recognition and Computer Vision, 79–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_7.
Prathap, Parvathy, and J. Jayakumari. "Analysis of Light Field Imaging and Segmentation on All-Focus Images." In Lecture Notes in Electrical Engineering, 331–42. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3992-3_27.
Feng, Mingtao, Syed Zulqarnain Gilani, Yaonan Wang, and Ajmal Mian. "3D Face Reconstruction from Light Field Images: A Model-Free Approach." In Computer Vision – ECCV 2018, 508–26. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01249-6_31.
Kara, Peter A., Peter T. Kovacs, Suren Vagharshakyan, Maria G. Martini, Sandor Imre, Attila Barsi, Kristof Lackner, and Tibor Balogh. "Perceptual Quality of Reconstructed Medical Images on Projection-Based Light Field Displays." In Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 476–83. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49655-9_58.
Conference papers on the topic "Light field images"
Paniate, Alberto, Gianlorenzo Massaro, Alessio Avella, Alice Meda, Francesco V. Pepe, Marco Genovese, Milena D'Angelo, and Ivano Ruo Berchera. "Light-field ghost imaging." In Quantum 2.0, QTu3A.27. Washington, D.C.: Optica Publishing Group, 2024. http://dx.doi.org/10.1364/quantum.2024.qtu3a.27.
Imtiaz, Shariar Md, F. M. Fahmid Hossain, Nyamsuren Darkhanbaatar, Erkhembaatar Dashdavaa, Ki-Chul Kwon, Seok-Hee Jeon, and Nam Kim. "Estimating Depth Map from Light Field Microscopic Images Using Attention UNET." In 2024 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR), 1–2. IEEE, 2024. http://dx.doi.org/10.1109/cleo-pr60912.2024.10676467.
Zhou, Yan, Huiwen Guo, Guoyuan Liang, and Xinyu Wu. "Shadow removal for light field images." In 2014 IEEE International Conference on Information and Automation (ICIA). IEEE, 2014. http://dx.doi.org/10.1109/icinfa.2014.6932830.
Zhu, Dong, ChunHong Wu, Yunluo Liu, and Dongmei Fu. "3D reconstruction based on light field images." In Ninth International Conference on Graphic and Image Processing, edited by Hui Yu and Junyu Dong. SPIE, 2018. http://dx.doi.org/10.1117/12.2304504.
Chantara, Wisarut, Ji-Hun Mun, and Yo-Sung Ho. "Efficient Depth Estimation for Light Field Images." In 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2018. http://dx.doi.org/10.23919/apsipa.2018.8659647.
Seifi, Mozhdeh, Neus Sabater, Valter Drazic, and Patrick Perez. "Disparity-guided demosaicking of light field images." In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7026109.
Chen, Xianyu, Feng Dai, Yike Ma, and Yongdong Zhang. "Automatic foreground segmentation using light field images." In 2015 Visual Communications and Image Processing (VCIP). IEEE, 2015. http://dx.doi.org/10.1109/vcip.2015.7457895.
DuVall, Matthew, John Flynn, Michael Broxton, and Paul Debevec. "Compositing light field video using multiplane images." In SIGGRAPH '19: Special Interest Group on Computer Graphics and Interactive Techniques Conference. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3306214.3338614.
Noury, Charles-Antoine, Celine Teuliere, and Michel Dhome. "Light-Field Camera Calibration from Raw Images." In 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE, 2017. http://dx.doi.org/10.1109/dicta.2017.8227459.
Jeong, Youngmo, Seokil Moon, Jaebum Cho, and Byoungho Lee. "One-shot 360-degree light field recording with light field camera and reflected images." In Imaging Systems and Applications. Washington, D.C.: OSA, 2017. http://dx.doi.org/10.1364/isa.2017.im4e.1.
Reports of organizations on the topic "Light field images"
Letcher, Theodore, Julie Parno, Zoe Courville, Lauren Farnsworth, and Jason Olivier. A generalized photon-tracking approach to simulate spectral snow albedo and transmittance using X-ray microtomography and geometric optics. Engineer Research and Development Center (U.S.), June 2023. http://dx.doi.org/10.21079/11681/47122.
King, E. L., A. Normandeau, T. Carson, P. Fraser, C. Staniforth, A. Limoges, B. MacDonald, F. J. Murrillo-Perez, and N. Van Nieuwenhove. Pockmarks, a paleo fluid efflux event, glacial meltwater channels, sponge colonies, and trawling impacts in Emerald Basin, Scotian Shelf: autonomous underwater vehicle surveys, William Kennedy 2022011 cruise report. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331174.
Suir, Glenn, Christina Saltus, and Sam Jackson. Remote Assessment of Swamp and Bottomland Hardwood Habitat Condition in the Maurepas Diversion Project Area. Engineer Research and Development Center (U.S.), August 2021. http://dx.doi.org/10.21079/11681/41563.
Hart, Carl R., and Gregory W. Lyons. A Measurement System for the Study of Nonlinear Propagation Through Arrays of Scatterers. Engineer Research and Development Center (U.S.), November 2020. http://dx.doi.org/10.21079/11681/38621.
Burks, Thomas F., Victor Alchanatis, and Warren Dixon. Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems. United States Department of Agriculture, October 2009. http://dx.doi.org/10.32747/2009.7591739.bard.
Douglas, Thomas A., Christopher A. Hiemstra, Stephanie P. Saari, Kevin L. Bjella, Seth W. Campbell, M. Torre Jorgenson, Dana R. N. Brown, and Anna K. Liljedahl. Degrading Permafrost Mapped with Electrical Resistivity Tomography, Airborne Imagery and LiDAR, and Seasonal Thaw Measurements. U.S. Army Engineer Research and Development Center, July 2021. http://dx.doi.org/10.21079/11681/41185.
Ley, Matt, Tom Baldvins, Hannah Pilkington, David Jones, and Kelly Anderson. Vegetation classification and mapping project: Big Thicket National Preserve. National Park Service, 2024. http://dx.doi.org/10.36967/2299254.
Hodul, M., H. P. White, and A. Knudby. A report on water quality monitoring in Quesnel Lake, British Columbia, subsequent to the Mount Polley tailings dam spill, using optical satellite imagery. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/330556.