Selected scientific literature on the topic "Depth of field fusion"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Depth of field fusion".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when it is available in the metadata.

Journal articles on the topic "Depth of field fusion"

1. Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging". ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.

Abstract:
To achieve high-quality imaging of all focal planes with large depth-of-field information, a fast all-focus image reconstruction technique based on light field imaging is proposed: light field imaging is used to collect field-of-view information, light field reconstruction produces a multi-focus source image set, and an improved NSML image fusion method fuses those images to quickly obtain an all-focus image with a large depth of field. Experiments show that, by simplifying the NSML calculation, this method greatly reduces the time consumed in the image fusion process and improves fusion efficiency. The method not only achieves excellent fusion image quality but also improves the real-time performance of the algorithm.
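
A minimal Python sketch of this family of focus-measure-based fusion methods is shown below. It uses the classic sum-modified-Laplacian as a stand-in focus measure; the paper's improved NSML formulation and its computational simplifications are not reproduced, and all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sum_modified_laplacian(img, window=5):
    """Focus measure: |2I - I_left - I_right| + |2I - I_up - I_down|,
    averaged over a local window."""
    img = np.asarray(img, dtype=np.float64)
    kx = np.array([[0., 0., 0.], [-1., 2., -1.], [0., 0., 0.]])
    ml = np.abs(convolve(img, kx)) + np.abs(convolve(img, kx.T))
    return uniform_filter(ml, size=window)

def fuse_focal_stack(stack):
    """Fuse grayscale images of the same scene focused at different depths
    by picking, per pixel, the image with the highest focus measure."""
    stack = np.asarray(stack, dtype=np.float64)        # (N, H, W)
    focus = np.stack([sum_modified_laplacian(im) for im in stack])
    best = np.argmax(focus, axis=0)                    # per-pixel winner
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                     # all-in-focus result
```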

2. Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.

Abstract:
Depth estimation from Light Field (LF) images is a crucial basis for LF-related applications. Since multiple views with abundant information are available, how to effectively fuse features of these views is a key point for accurate LF depth estimation. In this paper, we propose a novel attention-based multi-level fusion network. Combined with a four-branch structure, we design intra-branch and inter-branch fusion strategies to hierarchically fuse effective features from different views. By introducing an attention mechanism, features of views with fewer occlusions and richer textures are selected inside and between these branches to provide more effective information for depth estimation. The depth maps are finally estimated after further aggregation. Experimental results show that the proposed method achieves state-of-the-art performance in both quantitative and qualitative evaluation, and it ranks first in the commonly used HCI 4D Light Field Benchmark.
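
The core view-selection idea, attention weights that favour less-occluded, texture-rich views, can be illustrated schematically. The NumPy sketch below is only a conceptual stand-in for the paper's four-branch network; the feature and score tensors are assumed inputs.

```python
import numpy as np

def attention_fuse_views(features, scores):
    """features: (V, C, H, W) per-view feature maps.
    scores: (V, H, W) unnormalized attention logits per view.
    Returns the (C, H, W) attention-weighted fusion over views."""
    w = np.exp(scores - scores.max(axis=0, keepdims=True))  # stable softmax
    w /= w.sum(axis=0, keepdims=True)
    return (features * w[:, None, :, :]).sum(axis=0)
```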

3. Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion". Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.

4. De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology". Image and Vision Computing 24, no. 12 (December 2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.

5. Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks". Remote Sensing 11, no. 5 (February 27, 2019): 487. http://dx.doi.org/10.3390/rs11050487.

Abstract:
Refining raw disparity maps from different algorithms to exploit their complementary advantages is still challenging. Uncertainty estimation and complex disparity relationships among pixels limit the accuracy and robustness of existing methods, and there is no standard method for fusing different kinds of depth data. In this paper, we introduce a new method to fuse disparity maps from different sources, while incorporating supplementary information (intensity, gradient, etc.) into a refiner network to better refine raw disparity inputs. A discriminator network classifies disparities at different receptive fields and scales. Assuming a Markov Random Field for the refined disparity map produces better estimates of the true disparity distribution. Both fully supervised and semi-supervised versions of the algorithm are proposed. The approach includes a more robust loss function to inpaint invalid disparity values and requires much less labeled data to train in the semi-supervised mode. The algorithm can be generalized to fuse depths from different kinds of depth sources. The experiments explored different fusion opportunities, stereo-monocular fusion, stereo-ToF fusion, and stereo-stereo fusion, and show the superiority of the proposed algorithm over the most recent algorithms on public synthetic datasets (Scene Flow, SYNTH3, our synthetic garden dataset) and real datasets (the Kitti2015 and Trimbot2020 Garden datasets).

6. Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion". Journal of Imaging 8, no. 9 (September 5, 2022): 240. http://dx.doi.org/10.3390/jimaging8090240.

Abstract:
Multi-focus image fusion is of great importance for coping with the limited depth of field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion; however, they are likely to produce artifacts. To cope with these issues, we introduce the Conditional Random Field (CRF)-guided fusion method. A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis (ICA) transform is applied to the high-frequency components, and a CRF model is created from the low frequencies and the transform coefficients. The CRF model is solved efficiently with the α-expansion method. The estimated labels are used to guide the fusion of the low-frequency components and the transform coefficients. Inverse ICA is then applied to the fused transform coefficients. Finally, the fused image is the sum of the fused low frequency and the fused high frequency. CRF-guided fusion does not introduce artifacts and supports denoising during fusion by applying transform-domain coefficient shrinkage. Quantitative and qualitative evaluations demonstrate the superior performance of CRF-guided fusion compared with state-of-the-art multi-focus image fusion methods.
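
A highly simplified sketch of the decompose-label-recombine idea is given below. The Edge Aware Centering step, the ICA transform, and the CRF solved by α-expansion are all replaced here by a Gaussian two-scale split and a per-pixel argmax labeling, so this is a schematic approximation rather than the authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_label_fuse(images, sigma=2.0):
    """Split each image into base (low) and detail (high) layers, pick one
    source per pixel from a detail-activity labeling, then recombine."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    lows = [gaussian_filter(im, sigma) for im in images]
    highs = [im - lo for im, lo in zip(images, lows)]
    acts = [gaussian_filter(h * h, sigma) for h in highs]  # local detail energy
    labels = np.argmax(np.stack(acts), axis=0)             # stand-in for CRF labels
    rows, cols = np.indices(labels.shape)
    fused_low = np.stack(lows)[labels, rows, cols]
    fused_high = np.stack(highs)[labels, rows, cols]
    return fused_low + fused_high
```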

7. Pei, Xiangyu, Shujun Xing, Xunbo Yu, Gao Xin, Xudong Wen, Chenyu Ning, Xinhui Xie, et al. "Three-dimensional light field fusion display system and coding scheme for extending depth of field". Optics and Lasers in Engineering 169 (October 2023): 107716. http://dx.doi.org/10.1016/j.optlaseng.2023.107716.

8. Jie, Yuchan, Xiaosong Li, Mingyi Wang, and Haishu Tan. "Multi-Focus Image Fusion for Full-Field Optical Angiography". Entropy 25, no. 6 (June 16, 2023): 951. http://dx.doi.org/10.3390/e25060951.

Abstract:
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable with optical lenses, existing FFOA imaging techniques can only acquire blood-flow information in the plane within the depth of field, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform (NSCT) and contrast spatial frequency is proposed. First, an imaging system is constructed, and FFOA images are acquired via the intensity-fluctuation modulation effect. Second, the source images are decomposed into low-pass and bandpass images with the NSCT. A sparse-representation-based rule is introduced to fuse the low-pass images and effectively retain useful energy information, while a contrast spatial frequency rule, which considers the neighborhood correlation and gradient relationships of pixels, is proposed to fuse the bandpass images. Finally, the fully focused image is produced by reconstruction. The proposed method significantly expands the focal range of optical angiography and can be extended to public multi-focus datasets. Experimental results confirm that it outperforms several state-of-the-art methods in both qualitative and quantitative evaluations.
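
The bandpass fusion rule can be illustrated in heavily simplified form: a local spatial-frequency map decides, per pixel, which source's detail coefficients are kept. The NSCT decomposition and the sparse-representation low-pass rule are omitted; the window size and names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(img, window=7):
    """Local spatial frequency: RMS of horizontal and vertical first
    differences over a window."""
    img = np.asarray(img, dtype=np.float64)
    dx = np.zeros_like(img); dx[:, 1:] = np.diff(img, axis=1)
    dy = np.zeros_like(img); dy[1:, :] = np.diff(img, axis=0)
    return np.sqrt(uniform_filter(dx ** 2 + dy ** 2, size=window))

def fuse_bands_by_sf(band_a, band_b):
    """Keep, per pixel, the bandpass coefficient with the larger local SF."""
    mask = spatial_frequency(band_a) >= spatial_frequency(band_b)
    return np.where(mask, band_a, band_b)
```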

9. Wang, Hui-Feng, Gui-ping Wang, Xiao-Yan Wang, Chi Ruan, and Shi-qin Chen. "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving". Sensor Review 36, no. 1 (January 18, 2016): 7–13. http://dx.doi.org/10.1108/sr-04-2015-0055.

Abstract:
Purpose – This study aims to consider active vision in low-visibility environments, to reveal the optical properties that affect visibility, and to explore a method of obtaining different depths of field by multimode imaging. Bad weather greatly reduces the driver's visual range and thus has a serious impact on transport safety. Design/methodology/approach – A new mechanism and core algorithm for obtaining a large field-depth image to aid safe driving are designed and implemented. The mechanism builds on the atmospheric extinction principle and a field expansion system, followed by an image registration and fusion algorithm for the Infrared Extended Depth of Field (IR-EDOF) sensor. Findings – The experimental results show that the proposed idea works well as a new aided safety-driving sensor that expands the field depth in low-visibility road environments. Originality/value – The paper presents a new kind of active optical extension and enhanced driving aid, which is an effective solution to the problem of weakened visual ability; it is a practical engineering sensor scheme for safe driving in low-visibility road environments.

10. Xiao, Yuhao, Guijin Wang, Xiaowei Hu, Chenbo Shi, Long Meng, and Huazhong Yang. "Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack". Sensors 19, no. 22 (November 7, 2019): 4845. http://dx.doi.org/10.3390/s19224845.

Abstract:
Three-dimensional (3D) imaging technology has been widely used in many applications, such as human–computer interaction, industrial measurement, and the study of cultural relics. However, existing active methods often require large apertures for both projector and camera to maximize light throughput, resulting in a shallow working volume in which projector and camera are simultaneously in focus. In this paper, we propose a novel method to extend the working range of a structured-light 3D imaging system based on a focal stack. Specifically, for scenes with large depth variation, we first adopt the gray code method for local 3D shape measurement at multiple focal distance settings. We then extract the texture map of each focus position into a focal stack to generate a global coarse depth map. Under the guidance of this coarse depth map, a high-quality 3D shape measurement of the overall scene is obtained by fusing the local 3D shape measurements. To validate the method, we developed a prototype system that performs high-quality measurements over a depth range of 400 mm with a measurement error of 0.08%.
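
The global coarse depth map step amounts to per-pixel depth-from-focus over the focal stack. Below is a minimal sketch, with variance of the Laplacian as an assumed sharpness measure (the paper's exact choice may differ):

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def coarse_depth_from_stack(stack, focus_distances, window=9):
    """stack: (N, H, W) grayscale focal stack; focus_distances: N focal
    settings. Returns an (H, W) map of the best-focus distance per pixel."""
    stack = np.asarray(stack, dtype=np.float64)
    sharp = np.stack([uniform_filter(laplace(im) ** 2, size=window)
                      for im in stack])
    best = np.argmax(sharp, axis=0)             # index of sharpest setting
    return np.asarray(focus_distances)[best]    # index -> physical distance
```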

Theses / dissertations on the topic "Depth of field fusion"

1. Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.

2. Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images". Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.

Abstract:
In this thesis, we present a new way to enhance depth-map images, which we call the fusion of depth images. The goal is to enhance depth images through a fusion of different classification methods. We use three similar but distinct methodologies, the Graph-Cut, Super-Pixel, and Principal Component Analysis algorithms, to compute the enhancement and produce our result. We then compare the enhanced result with the original depth images; the comparison indicates the effectiveness of our methodology.

3. Ocampo Blandon, Cristian Felipe. "Patch-Based image fusion for computational photography". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.

Abstract:
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes; otherwise, ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed-exposure images of dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches, and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In the second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology is a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common datasets and on new ones created for the purpose of this work. The experiments show that our methods are consistently more robust than alternative methods to geometric distortions, illumination variations, and blur, at a lower computational cost. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
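
A core building block in such patch-based fusion is a patch distance that tolerates exposure changes. The sketch below shows one common choice, an affine-contrast-normalized L2 distance; it is a generic illustration, not the thesis's exact formulation.

```python
import numpy as np

def normalized_patch_distance(p, q, eps=1e-8):
    """L2 distance between mean/std-normalized patches; invariant to affine
    intensity changes, so patches can be compared across exposures."""
    pn = (p - p.mean()) / (p.std() + eps)
    qn = (q - q.mean()) / (q.std() + eps)
    return float(np.sum((pn - qn) ** 2))
```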

4. Ramirez Hernandez, Pavel. "Extended depth of field". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.

Abstract:
In this thesis, the extension of the depth of field of optical systems is investigated, together with the problem of achieving extended depth of field (EDF) while preserving transverse resolution. A new expression for the transport-of-intensity equation in the prolate spheroidal coordinate system is derived, with the aim of investigating the phase retrieval problem with applications to EDF. A framework for the optimisation of optical systems with EDF is also introduced, where the main motivation is to find a scenario that allows a convex optimisation solution leading to global optima. The relevance of such an approach is that it does not depend on the optimisation algorithm, since each local optimum is a global one. The multi-objective optimisation framework for optical systems is also discussed, with the main focus on the optimisation of pupil-plane masks; the solution to the multi-objective problem is presented not as a single mask but as a set of masks. Convex frameworks for this problem are further investigated, and it is shown that convex optimisation of pupil-plane masks is possible, providing global optima for the optimisation problems. Seven masks are provided as examples of convex optimisation solutions, in particular five pupil-plane masks that achieve EDF by factors of 2, 2.8, 2.9, 4 and 4.3, including two masks that, besides extending the depth of field, are super-resolving in the transverse planes. These are shown as examples of solutions to particular optimisation problems in which convexity properties have been given to the original problems, leading to optimised masks with a global character in the optimisation scenario.

5. Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.

6. Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field". Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.

7. Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

8. Botcherby, Edward J. "Aberration free extended depth of field microscopy". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.

Abstract:
In recent years, the confocal and two-photon microscopes have become ubiquitous tools in life-science laboratories, because both systems can acquire three-dimensional image data from biological specimens. Specifically, this is done by acquiring a series of two-dimensional images from a set of equally spaced planes within the specimen. The resulting image stack can be manipulated and displayed on a computer to reveal a wealth of information. These systems can also be used in time-lapse studies to monitor the dynamic behaviour of specimens by recording a number of image stacks at a sequence of time points. The time resolution in this situation is, however, limited by the maximum speed at which each constituent image stack can be acquired. Various techniques have emerged to speed up image acquisition, and in most practical implementations a single in-focus image can be acquired very quickly. The real bottleneck in three-dimensional imaging is the process of refocusing the system to image different planes, which is commonly done by physically changing the distance between the specimen and the imaging lens: a relatively slow process. With the ever-increasing need to image biologically relevant specimens quickly, the speed limitation imposed by refocusing must be overcome. This thesis concerns the acquisition of data from a range of specimen depths without requiring the specimen to be moved. A new technique is demonstrated for two-photon microscopy that enables data from a whole range of specimen depths to be acquired simultaneously, so that a single two-dimensional scan records extended-depth-of-field image data directly. This circumvents the need to acquire a full three-dimensional image stack and hence improves the temporal resolution for acquiring such data by more than an order of magnitude. In the remainder of the thesis, a new microscope architecture is presented that enables scanning in three dimensions at high speed without moving the objective lens or specimen. Aberrations introduced by the objective lens are compensated by introducing an equal and opposite aberration with a second lens within the system, enabling diffraction-limited performance over a large range of specimen depths. Focusing is achieved by moving a very small mirror, allowing axial scan rates of several kHz: an improvement of some two orders of magnitude. This approach is extremely general and can be applied to any form of optical microscope, with the very great advantage that the specimen is not disturbed. The technique is developed theoretically, and experimental results demonstrate its potential application to a broad range of sectioning methods in microscopy.

9. Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images". Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Abstract:
In this work we study a depth prediction problem where we provide a narrow field-of-view depth image and a wide field-of-view RGB image to a deep network tasked with predicting depth for the entire RGB image. We show that by providing a narrow field-of-view depth image we improve results for the area outside the provided depth, compared to an earlier approach that utilizes only a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones, and that the accuracy of the model decreases with distance from the provided depth. Further, we investigate several architectures and study the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models given low-resolution noisy data perform on par with models given unaltered depth.

10. Luraas, Knut. "Clinical aspects of Critical Flicker Fusion perimetry: an in-depth analysis". Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/39684/.

Abstract:
The thesis evaluated, in three studies, the clinical potential of Critical Flicker Fusion perimetry (CFFP) undertaken with the Octopus 311 perimeter. The influence of the learning effect on the outcome of CFFP was evaluated, in each eye at each of five visits separated by one week, for 28 normal individuals naïve to perimetry, 10 individuals with ocular hypertension (OHT), and 11 with open-angle glaucoma (OAG), all of whom were experienced in Standard Automated Perimetry (SAP). An improvement occurred in the height, rather than in the shape, of the visual field and was largest for those with OAG. The normal individuals reached optimum performance at the third visit, and those with OHT or OAG at the fourth or fifth visits. The influence of ocular media opacity was investigated in 22 individuals with age-related cataract who were naïve to both SAP and CFFP. All individuals underwent both CFFP and SAP in each eye at each of four visits separated by one week. At the third and fourth visits, glare disability (GD) was measured with 100% and 10% contrast ETDRS LogMAR visual acuity charts in the presence, and absence, of three levels of glare using the Brightness Acuity Tester. The visual field for CFF improved in height only. Little correlation was present between the various measures of GD and the visual field, largely due to the narrow range of cataract severity. The influence of optical defocus on both CFFP and SAP was investigated, in one designated eye at each of two visits, in 16 normal individuals, all of whom had taken part in the first study. Sensitivity for SAP declined with increasing defocus, whilst that for CFFP increased; the latter was attributed to the influence of the Granit-Harper law arising from the increased size of the defocused stimulus.

Books on the topic "Depth of field fusion"

1. Depth of field. Stockport, England: Dewi Lewis Pub., 2000.

2. Heyen, William. Depth of field: Poems. Pittsburgh: Carnegie Mellon University Press, 2005.

3. Applied depth of field. Boston: Focal Press, 1985.

4. Depth of field: Poems and photographs. Simsbury, CT: Antrim House, 2010.

5. Slattery, Dennis Patrick, and Lionel Corbett, eds. Depth psychology: Meditations from the field. Einsiedeln, Switzerland: Daimon, 2000.

6. Cooper, Donal, Marika Leino, Henry Moore Institute (Leeds, England), and Victoria and Albert Museum, eds. Depth of field: Relief sculpture in Renaissance Italy. Bern: Peter Lang, 2007.

7. Cocks, Geoffrey, James Diedrick, and Glenn Perusek, eds. Depth of field: Stanley Kubrick, film, and the uses of history. Madison: University of Wisconsin Press, 2006.

8. Buch, Neeraj. Precast concrete panel systems for full-depth pavement repairs: Field trials. Washington, DC: Office of Infrastructure, Office of Pavement Technology, Federal Highway Administration, U.S. Department of Transportation, 2007.

9. Depth of field: Essays on photography, mass media, and lens culture. Albuquerque, NM: University of New Mexico Press, 1998.

10. Ruotoistenmäki, Tapio. Estimation of depth to potential field sources using the Fourier amplitude spectrum. Espoo: Geologian tutkimuskeskus, 1987.


Book chapters on the topic "Depth of field fusion"

1. Zhang, Yukun, Yongri Piao, Xinxin Ji, and Miao Zhang. "Dynamic Fusion Network for Light Field Depth Estimation". In Pattern Recognition and Computer Vision, 3–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_1.

2. Liu, Xinshi, Dongmei Fu, Chunhong Wu, and Ze Si. "The Depth Estimation Method Based on Double-Cues Fusion for Light Field Images". In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019), 719–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0474-7_67.

3. Gooch, Jan W. "Depth of Field". In Encyclopedic Dictionary of Polymers, 201–2. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3432.

4. Kemp, Jonathan. "Depth of field". In Film on Video, 55–64. London; New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-6.

5. Ravitz, Jeff, and James L. Moody. "Depth of Field". In Lighting for Televised Live Events, 75–79. First edition. New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9780429288982-11.

6. Atchison, David A., and George Smith. "Depth-of-Field". In Optics of the Human Eye, 379–93. 2nd ed. New York: CRC Press, 2023. http://dx.doi.org/10.1201/9781003128601-24.

7. Cai, Ziyun, Yang Long, Xiao-Yuan Jing, and Ling Shao. "Adaptive Visual-Depth Fusion Transfer". In Computer Vision – ACCV 2018, 56–73. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_4.

8. Turaev, Vladimir, and Alexis Virelizier. "Fusion categories". In Monoidal Categories and Topological Field Theory, 65–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49834-8_4.

9. Sandström, Erik, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, and Luc Van Gool. "Learning Online Multi-sensor Depth Fusion". In Lecture Notes in Computer Science, 87–105. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19824-3_6.

10. Schedl, David C., Clemens Birklbauer, Johann Gschnaller, and Oliver Bimber. "Generalized Depth-of-Field Light-Field Rendering". In Computer Vision and Graphics, 95–105. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46418-3_9.


Conference papers on the topic "Depth of field fusion"

1. Ajdin, Boris, and Timo Ahonen. "Reduced depth of field using multi-image fusion". In IS&T/SPIE Electronic Imaging, edited by Cees G. M. Snoek, Lyndon S. Kennedy, Reiner Creutzburg, David Akopian, Dietmar Wüller, Kevin J. Matherson, Todor G. Georgiev, and Andrew Lumsdaine. SPIE, 2013. http://dx.doi.org/10.1117/12.2008501.

2. Hariharan, Harishwaran, Andreas Koschan, and Mongi Abidi. "Extending depth of field by intrinsic mode image fusion". In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761727.

3. Chantara, Wisarut, and Yo-Sung Ho. "Multi-focus image fusion for extended depth of field". In the 10th International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3240876.3240894.

4. Brizzi, Michele, Federica Battisti, and Alessandro Neri. "Light Field Depth-of-Field Expansion and Enhancement Based on Multifocus Fusion". In 2019 8th European Workshop on Visual Information Processing (EUVIP). IEEE, 2019. http://dx.doi.org/10.1109/euvip47703.2019.8946218.

5. Liu, Xiaomin, Pengbo Chen, Mengzhu Du, Huaping Zang, Huace Hu, Yunfei Zhu, Zhibang Ma, Qiancheng Wang, and Yuanye Niu. "Multi-information fusion depth estimation of compressed spectral light field images". In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/3d.2020.dw1a.2.

6. Wu, Nanshou, Mingyi Wang, Guojian Yang, and Yaguang Zeng. "Digital Depth-of-Field Expansion Using Contrast Pyramid Fusion Algorithm for Full-Field Optical Angiography". In Clinical and Translational Biophotonics. Washington, D.C.: OSA, 2018. http://dx.doi.org/10.1364/translational.2018.jtu3a.21.

7. Song, Xianlin. "Computed extended depth of field photoacoustic microscopy using ratio of low-pass pyramid fusion". In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2589659.

8. Cheng, Samuel, Hyohoon Choi, Qiang Wu, and Kenneth R. Castleman. "Extended Depth-of-Field Microscope Imaging: MPP Image Fusion vs. Wavefront Coding". In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.312957.

9. Aslantas, Veysel, and Rifat Kurban. "Extending depth-of-field by image fusion using multi-objective genetic algorithm". In 2009 7th IEEE International Conference on Industrial Informatics (INDIN). IEEE, 2009. http://dx.doi.org/10.1109/indin.2009.5195826.

10. Hurtado Pérez, Román, Carina Toxqui-Quitl, and Alfonso Padilla-Vivanco. "Image fusion of color microscopic images for extended the depth of field". In Frontiers in Optics. Washington, D.C.: OSA, 2015. http://dx.doi.org/10.1364/fio.2015.fth1f.7.


Reports by organizations on the topic "Depth of field fusion"

1. McLean, William E. ANVIS Objective Lens Depth of Field. Fort Belvoir, VA: Defense Technical Information Center, March 1996. http://dx.doi.org/10.21236/ada306571.

2. Al-Mutawaly, Nafia, Hubert de Bruin, and Raymond D. Findlay. Magnetic Nerve Stimulation: Field Focality and Depth of Penetration. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada411028.

3. Peng, Y. K. M. Spherical torus, compact fusion at low field. Office of Scientific and Technical Information (OSTI), February 1985. http://dx.doi.org/10.2172/6040602.

4. Paul, A. C., and V. K. Neil. Fixed Field Alternating Gradient recirculator for heavy ion fusion. Office of Scientific and Technical Information (OSTI), March 1991. http://dx.doi.org/10.2172/5828376.

5. Cathey, W. T., Benjamin Braker, and Sherif Sherif. Analysis and Design Tools for Passive Ranging and Reduced-Depth-of-Field Imaging. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada417814.

6. Kramer, G. J., R. Nazikian, and E. Valeo. Correlation Reflectometry for Turbulence and Magnetic Field Measurements in Fusion Plasmas. Office of Scientific and Technical Information (OSTI), July 2002. http://dx.doi.org/10.2172/808282.

7. Claycomb, William R., Roy Maxion, Jason Clark, Bronwyn Woods, Brian Lindauer, David Jensen, Joshua Neil, Alex Kent, Sadie Creese, and Phil Legg. "Deep Focus: Increasing User Depth of Field to Improve Threat Detection" (Oxford Workshop Poster). Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610980.

8. Grabowski, Theodore C. Directed Energy HPM, PP, & PPS Efforts: Magnetized Target Fusion - Field Reversed Configuration. Fort Belvoir, VA: Defense Technical Information Center, August 2006. http://dx.doi.org/10.21236/ada460910.

9. Hasegawa, Akira, and Liu Chen. A D-He3 fusion reactor based on a dipole magnetic field. Office of Scientific and Technical Information (OSTI), July 1989. http://dx.doi.org/10.2172/5819503.

10. Chu, Yuh-Yi. Fusion core start-up, ignition and burn simulations of reversed-field pinch (RFP) reactors. Office of Scientific and Technical Information (OSTI), January 1988. http://dx.doi.org/10.2172/5386865.
