Selected scientific literature on the topic "Depth of field fusion"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Depth of field fusion".
Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Journal articles on the topic "Depth of field fusion"
Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging". ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.
Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.
Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion". Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.
De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology". Image and Vision Computing 24, no. 12 (December 2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.
Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks". Remote Sensing 11, no. 5 (February 27, 2019): 487. http://dx.doi.org/10.3390/rs11050487.
Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion". Journal of Imaging 8, no. 9 (September 5, 2022): 240. http://dx.doi.org/10.3390/jimaging8090240.
Pei, Xiangyu, Shujun Xing, Xunbo Yu, Gao Xin, Xudong Wen, Chenyu Ning, Xinhui Xie et al. "Three-dimensional light field fusion display system and coding scheme for extending depth of field". Optics and Lasers in Engineering 169 (October 2023): 107716. http://dx.doi.org/10.1016/j.optlaseng.2023.107716.
Jie, Yuchan, Xiaosong Li, Mingyi Wang, and Haishu Tan. "Multi-Focus Image Fusion for Full-Field Optical Angiography". Entropy 25, no. 6 (June 16, 2023): 951. http://dx.doi.org/10.3390/e25060951.
Wang, Hui-Feng, Gui-ping Wang, Xiao-Yan Wang, Chi Ruan, and Shi-qin Chen. "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving". Sensor Review 36, no. 1 (January 18, 2016): 7–13. http://dx.doi.org/10.1108/sr-04-2015-0055.
Xiao, Yuhao, Guijin Wang, Xiaowei Hu, Chenbo Shi, Long Meng, and Huazhong Yang. "Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack". Sensors 19, no. 22 (November 7, 2019): 4845. http://dx.doi.org/10.3390/s19224845.
Theses / dissertations on the topic "Depth of field fusion"
Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.
Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images". Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.
Texto completo da fonteRoom 401, No.56, Lane 21, Yin Gao Road, Shanghai, China
Ocampo, Blandon Cristian Felipe. "Patch-Based image fusion for computational photography". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes; otherwise, ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches, and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
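The multifocus fusion that this abstract (and several entries above) builds on has a classic baseline: pick, per pixel, the focal-stack slice with the highest local focus measure. The following NumPy sketch illustrates only that general idea, not the patch-based method of the thesis; the function names are illustrative, and it assumes a perfectly registered grayscale focal stack.

```python
import numpy as np

def laplacian_energy(img, win=7):
    """Local focus measure: squared response of a 4-neighbour Laplacian,
    box-averaged over a win x win window (separable mean filter)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    energy = lap ** 2
    kernel = np.ones(win) / win
    # separable box filter: average along rows, then along columns
    energy = np.apply_along_axis(np.convolve, 0, energy, kernel, mode="same")
    energy = np.apply_along_axis(np.convolve, 1, energy, kernel, mode="same")
    return energy

def fuse_focal_stack(stack, win=7):
    """Fuse a registered focal stack by selecting, per pixel, the slice
    with the highest local focus measure."""
    stack = np.asarray(stack, dtype=float)          # shape (N, H, W)
    energies = np.stack([laplacian_energy(s, win) for s in stack])
    best = np.argmax(energies, axis=0)              # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]                  # all-in-focus composite
```

Real pipelines refine this with multiscale or learned focus measures and smooth the per-pixel decision map to avoid seams; the thesis additionally handles misalignment and moving objects, which this sketch ignores.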
Ramirez, Hernandez Pavel. "Extended depth of field". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.
Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.
Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field". Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.
Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Botcherby, Edward J. "Aberration free extended depth of field microscopy". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.
Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images". Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work we study a depth estimation problem in which a narrow field-of-view depth image and a wide field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that providing the depth image to the network improves the result for the region outside the provided depth, compared to an existing method that uses an RGB image alone to predict depth. We investigate several architectures and depth-image field-of-view sizes, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger depth-image field of view gives a greater advantage, and also that the model's accuracy decreases with distance from the provided depth. Our results further show that the models using the noisy, low-resolution depth performed on par with the models using the unmodified depth.
Luraas, Knut. "Clinical aspects of Critical Flicker Fusion perimetry: an in-depth analysis". Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/39684/.
Books on the topic "Depth of field fusion"
Depth of field. Stockport, England: Dewi Lewis Pub., 2000.
Heyen, William. Depth of field: Poems. Pittsburgh: Carnegie Mellon University Press, 2005.
Applied depth of field. Boston: Focal Press, 1985.
Depth of field: Poems and photographs. Simsbury, CT: Antrim House, 2010.
Slattery, Dennis Patrick, and Lionel Corbett, eds. Depth psychology: Meditations from the field. Einsiedeln, Switzerland: Daimon, 2000.
Cooper, Donal, Marika Leino, Henry Moore Institute (Leeds, England), and Victoria and Albert Museum, eds. Depth of field: Relief sculpture in Renaissance Italy. Bern: Peter Lang, 2007.
Cocks, Geoffrey, James Diedrick, and Glenn Perusek, eds. Depth of field: Stanley Kubrick, film, and the uses of history. Madison: University of Wisconsin Press, 2006.
Buch, Neeraj. Precast concrete panel systems for full-depth pavement repairs: Field trials. Washington, DC: Office of Infrastructure, Office of Pavement Technology, Federal Highway Administration, U.S. Department of Transportation, 2007.
Depth of field: Essays on photography, mass media, and lens culture. Albuquerque, NM: University of New Mexico Press, 1998.
Ruotoistenmäki, Tapio. Estimation of depth to potential field sources using the Fourier amplitude spectrum. Espoo: Geologian tutkimuskeskus, 1987.
Book chapters on the topic "Depth of field fusion"
Zhang, Yukun, Yongri Piao, Xinxin Ji, and Miao Zhang. "Dynamic Fusion Network for Light Field Depth Estimation". In Pattern Recognition and Computer Vision, 3–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_1.
Liu, Xinshi, Dongmei Fu, Chunhong Wu, and Ze Si. "The Depth Estimation Method Based on Double-Cues Fusion for Light Field Images". In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019), 719–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0474-7_67.
Gooch, Jan W. "Depth of Field". In Encyclopedic Dictionary of Polymers, 201–2. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3432.
Kemp, Jonathan. "Depth of field". In Film on Video, 55–64. London; New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-6.
Ravitz, Jeff, and James L. Moody. "Depth of Field". In Lighting for Televised Live Events, 75–79. First edition. New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9780429288982-11.
Atchison, David A., and George Smith. "Depth-of-Field". In Optics of the Human Eye, 379–93. 2nd ed. New York: CRC Press, 2023. http://dx.doi.org/10.1201/9781003128601-24.
Cai, Ziyun, Yang Long, Xiao-Yuan Jing, and Ling Shao. "Adaptive Visual-Depth Fusion Transfer". In Computer Vision – ACCV 2018, 56–73. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_4.
Turaev, Vladimir, and Alexis Virelizier. "Fusion categories". In Monoidal Categories and Topological Field Theory, 65–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49834-8_4.
Sandström, Erik, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, and Luc Van Gool. "Learning Online Multi-sensor Depth Fusion". In Lecture Notes in Computer Science, 87–105. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19824-3_6.
Schedl, David C., Clemens Birklbauer, Johann Gschnaller, and Oliver Bimber. "Generalized Depth-of-Field Light-Field Rendering". In Computer Vision and Graphics, 95–105. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46418-3_9.
Conference papers on the topic "Depth of field fusion"
Ajdin, Boris, and Timo Ahonen. "Reduced depth of field using multi-image fusion". In IS&T/SPIE Electronic Imaging, edited by Cees G. M. Snoek, Lyndon S. Kennedy, Reiner Creutzburg, David Akopian, Dietmar Wüller, Kevin J. Matherson, Todor G. Georgiev, and Andrew Lumsdaine. SPIE, 2013. http://dx.doi.org/10.1117/12.2008501.
Hariharan, Harishwaran, Andreas Koschan, and Mongi Abidi. "Extending depth of field by intrinsic mode image fusion". In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761727.
Chantara, Wisarut, and Yo-Sung Ho. "Multi-focus image fusion for extended depth of field". In Proceedings of the 10th International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3240876.3240894.
Brizzi, Michele, Federica Battisti, and Alessandro Neri. "Light Field Depth-of-Field Expansion and Enhancement Based on Multifocus Fusion". In 2019 8th European Workshop on Visual Information Processing (EUVIP). IEEE, 2019. http://dx.doi.org/10.1109/euvip47703.2019.8946218.
Liu, Xiaomin, Pengbo Chen, Mengzhu Du, Huaping Zang, Huace Hu, Yunfei Zhu, Zhibang Ma, Qiancheng Wang, and Yuanye Niu. "Multi-information fusion depth estimation of compressed spectral light field images". In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/3d.2020.dw1a.2.
Wu, Nanshou, Mingyi Wang, Guojian Yang, and Zeng Yaguang. "Digital Depth-of-field expansion Using Contrast Pyramid fusion Algorithm for Full-field Optical Angiography". In Clinical and Translational Biophotonics. Washington, D.C.: OSA, 2018. http://dx.doi.org/10.1364/translational.2018.jtu3a.21.
Song, Xianlin. "Computed extended depth of field photoacoustic microscopy using ratio of low-pass pyramid fusion". In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2589659.
Cheng, Samuel, Hyohoon Choi, Qiang Wu, and Kenneth R. Castleman. "Extended Depth-of-Field Microscope Imaging: MPP Image Fusion VS. WAVEFRONT CODING". In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.312957.
Aslantas, Veysel, and Rifat Kurban. "Extending depth-of-field by image fusion using multi-objective genetic algorithm". In 2009 7th IEEE International Conference on Industrial Informatics (INDIN). IEEE, 2009. http://dx.doi.org/10.1109/indin.2009.5195826.
Pérez, Román Hurtado, Carina Toxqui-Quitl, and Alfonso Padilla-Vivanco. "Image fusion of color microscopic images for extended the depth of field". In Frontiers in Optics. Washington, D.C.: OSA, 2015. http://dx.doi.org/10.1364/fio.2015.fth1f.7.
Reports by organizations on the topic "Depth of field fusion"
McLean, William E. ANVIS Objective Lens Depth of Field. Fort Belvoir, VA: Defense Technical Information Center, March 1996. http://dx.doi.org/10.21236/ada306571.
Al-Mutawaly, Nafia, Hubert de Bruin, and Raymond D. Findlay. Magnetic Nerve Stimulation: Field Focality and Depth of Penetration. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada411028.
Peng, Y. K. M. Spherical torus, compact fusion at low field. Office of Scientific and Technical Information (OSTI), February 1985. http://dx.doi.org/10.2172/6040602.
Paul, A. C., and V. K. Neil. Fixed Field Alternating Gradient recirculator for heavy ion fusion. Office of Scientific and Technical Information (OSTI), March 1991. http://dx.doi.org/10.2172/5828376.
Cathey, W. T., Benjamin Braker, and Sherif Sherif. Analysis and Design Tools for Passive Ranging and Reduced-Depth-of-Field Imaging. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada417814.
Kramer, G. J., R. Nazikian, and E. Valeo. Correlation Reflectometry for Turbulence and Magnetic Field Measurements in Fusion Plasmas. Office of Scientific and Technical Information (OSTI), July 2002. http://dx.doi.org/10.2172/808282.
Claycomb, William R., Roy Maxion, Jason Clark, Bronwyn Woods, Brian Lindauer, David Jensen, Joshua Neil, Alex Kent, Sadie Creese, and Phil Legg. Deep Focus: Increasing User "Depth of Field" to Improve Threat Detection (Oxford Workshop Poster). Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610980.
Grabowski, Theodore C. Directed Energy HPM, PP, & PPS Efforts: Magnetized Target Fusion - Field Reversed Configuration. Fort Belvoir, VA: Defense Technical Information Center, August 2006. http://dx.doi.org/10.21236/ada460910.
Hasegawa, Akira, and Liu Chen. A D-He/sup 3/ fusion reactor based on a dipole magnetic field. Office of Scientific and Technical Information (OSTI), July 1989. http://dx.doi.org/10.2172/5819503.
Chu, Yuh-Yi. Fusion core start-up, ignition and burn simulations of reversed-field pinch (RFP) reactors. Office of Scientific and Technical Information (OSTI), January 1988. http://dx.doi.org/10.2172/5386865.