A selection of scholarly literature on the topic "Depth of field fusion"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, theses, reports, and other scholarly sources on the topic "Depth of field fusion".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are included in the metadata.
Journal articles on the topic "Depth of field fusion"
Wang, Shuzhen, Haili Zhao, and Wenbo Jing. "Fast all-focus image reconstruction method based on light field imaging". ITM Web of Conferences 45 (2022): 01030. http://dx.doi.org/10.1051/itmconf/20224501030.
Chen, Jiaxin, Shuo Zhang, and Youfang Lin. "Attention-based Multi-Level Fusion Network for Light Field Depth Estimation". Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1009–17. http://dx.doi.org/10.1609/aaai.v35i2.16185.
Piao, Yongri, Miao Zhang, Xiaohui Wang, and Peihua Li. "Extended depth of field integral imaging using multi-focus fusion". Optics Communications 411 (March 2018): 8–14. http://dx.doi.org/10.1016/j.optcom.2017.10.081.
De, Ishita, Bhabatosh Chanda, and Buddhajyoti Chattopadhyay. "Enhancing effective depth-of-field by image fusion using mathematical morphology". Image and Vision Computing 24, no. 12 (December 2006): 1278–87. http://dx.doi.org/10.1016/j.imavis.2006.04.005.
Pu, Can, Runzi Song, Radim Tylecek, Nanbo Li, and Robert Fisher. "SDF-MAN: Semi-Supervised Disparity Fusion with Multi-Scale Adversarial Networks". Remote Sensing 11, no. 5 (February 27, 2019): 487. http://dx.doi.org/10.3390/rs11050487.
Bouzos, Odysseas, Ioannis Andreadis, and Nikolaos Mitianoudis. "Conditional Random Field-Guided Multi-Focus Image Fusion". Journal of Imaging 8, no. 9 (September 5, 2022): 240. http://dx.doi.org/10.3390/jimaging8090240.
Pei, Xiangyu, Shujun Xing, Xunbo Yu, Gao Xin, Xudong Wen, Chenyu Ning, Xinhui Xie, et al. "Three-dimensional light field fusion display system and coding scheme for extending depth of field". Optics and Lasers in Engineering 169 (October 2023): 107716. http://dx.doi.org/10.1016/j.optlaseng.2023.107716.
Jie, Yuchan, Xiaosong Li, Mingyi Wang, and Haishu Tan. "Multi-Focus Image Fusion for Full-Field Optical Angiography". Entropy 25, no. 6 (June 16, 2023): 951. http://dx.doi.org/10.3390/e25060951.
Wang, Hui-Feng, Gui-ping Wang, Xiao-Yan Wang, Chi Ruan, and Shi-qin Chen. "A kind of infrared expand depth of field vision sensor in low-visibility road condition for safety-driving". Sensor Review 36, no. 1 (January 18, 2016): 7–13. http://dx.doi.org/10.1108/sr-04-2015-0055.
Xiao, Yuhao, Guijin Wang, Xiaowei Hu, Chenbo Shi, Long Meng, and Huazhong Yang. "Guided, Fusion-Based, Large Depth-of-field 3D Imaging Using a Focal Stack". Sensors 19, no. 22 (November 7, 2019): 4845. http://dx.doi.org/10.3390/s19224845.
Theses on the topic "Depth of field fusion"
Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.
Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images". Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.
Der volle Inhalt der QuelleRoom 401, No.56, Lane 21, Yin Gao Road, Shanghai, China
Ocampo Blandon, Cristian Felipe. "Patch-Based image fusion for computational photography". Electronic thesis or dissertation, Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
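The multifocus fusion problem described in this abstract can be illustrated with a minimal sketch: for each pixel, keep the focal-stack slice with the strongest local focus response. This is a generic illustration under assumed inputs (a list of pre-aligned grayscale slices), not the patch-based algorithm of the thesis; the function name and the Laplacian-energy focus measure are choices made only for this example.

```python
# Minimal, illustrative focal-stack fusion sketch (not the thesis method).
# Assumes a list of pre-aligned grayscale slices of identical shape.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_focal_stack(slices, window=9):
    stack = np.stack([s.astype(np.float64) for s in slices], axis=0)   # (n, H, W)
    # Focus measure: local energy of the Laplacian (large where a slice is sharp).
    focus = np.stack(
        [uniform_filter(laplace(s) ** 2, size=window) for s in stack], axis=0
    )
    best = np.argmax(focus, axis=0)                                     # sharpest slice per pixel
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]        # fused (H, W) image
```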
Ramirez Hernandez, Pavel. "Extended depth of field". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.
Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.
Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field". Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.
Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Botcherby, Edward J. "Aberration free extended depth of field microscopy". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.
Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images". Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work we study a depth estimation problem in which a dense, narrow-field-of-view depth image and a wide-field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that supplying the depth image to the network improves the result for the region outside the provided depth compared with an existing method that uses only an RGB image to predict depth. We examine several architectures and depth-image field-of-view sizes, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger field of view for the depth image gives a larger advantage, and that the model's accuracy decreases with distance from the provided depth. Our results also show that the models using the noisy, low-resolution depth performed on a par with the models using the unmodified depth.
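As a rough illustration of the input setup described in this abstract, the sketch below embeds a dense narrow-field-of-view depth crop into a full-size extra channel alongside the wide-field-of-view RGB image; the network itself is not shown, and the function name, crop placement convention, and zero-filling of unmeasured depth are assumptions made only for this example.

```python
# Illustrative sketch only: combine a wide-FOV RGB image with a dense
# narrow-FOV depth crop into one (H, W, 4) array that a depth-prediction
# network could take as input. Zero marks pixels with no measured depth.
import numpy as np

def build_rgbd_input(rgb, narrow_depth, top_left):
    """rgb: (H, W, 3) float array; narrow_depth: (h, w) depth crop;
    top_left: (row, col) of the crop's upper-left corner in the full frame."""
    height, width, _ = rgb.shape
    depth_channel = np.zeros((height, width), dtype=rgb.dtype)
    r, c = top_left
    h, w = narrow_depth.shape
    depth_channel[r:r + h, c:c + w] = narrow_depth
    return np.concatenate([rgb, depth_channel[..., None]], axis=-1)
```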
Luraas, Knut. "Clinical aspects of Critical Flicker Fusion perimetry: an in-depth analysis". Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/39684/.
Books on the topic "Depth of field fusion"
Depth of field. Stockport, England: Dewi Lewis Pub., 2000.
Heyen, William. Depth of field: Poems. Pittsburgh: Carnegie Mellon University Press, 2005.
Applied depth of field. Boston: Focal Press, 1985.
Depth of field: Poems and photographs. Simsbury, CT: Antrim House, 2010.
Slattery, Dennis Patrick, and Lionel Corbett, eds. Depth psychology: Meditations from the field. Einsiedeln, Switzerland: Daimon, 2000.
Cooper, Donal, Marika Leino, Henry Moore Institute (Leeds, England), and Victoria and Albert Museum, eds. Depth of field: Relief sculpture in Renaissance Italy. Bern: Peter Lang, 2007.
Cocks, Geoffrey, James Diedrick, and Glenn Perusek, eds. Depth of field: Stanley Kubrick, film, and the uses of history. Madison: University of Wisconsin Press, 2006.
Buch, Neeraj. Precast concrete panel systems for full-depth pavement repairs: Field trials. Washington, DC: Office of Infrastructure, Office of Pavement Technology, Federal Highway Administration, U.S. Department of Transportation, 2007.
Depth of field: Essays on photography, mass media, and lens culture. Albuquerque, NM: University of New Mexico Press, 1998.
Ruotoistenmäki, Tapio. Estimation of depth to potential field sources using the Fourier amplitude spectrum. Espoo: Geologian tutkimuskeskus, 1987.
Book chapters on the topic "Depth of field fusion"
Zhang, Yukun, Yongri Piao, Xinxin Ji, and Miao Zhang. "Dynamic Fusion Network for Light Field Depth Estimation". In Pattern Recognition and Computer Vision, 3–15. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1_1.
Liu, Xinshi, Dongmei Fu, Chunhong Wu, and Ze Si. "The Depth Estimation Method Based on Double-Cues Fusion for Light Field Images". In Proceedings of the 11th International Conference on Modelling, Identification and Control (ICMIC2019), 719–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0474-7_67.
Gooch, Jan W. "Depth of Field". In Encyclopedic Dictionary of Polymers, 201–2. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3432.
Kemp, Jonathan. "Depth of field". In Film on Video, 55–64. London; New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-6.
Ravitz, Jeff, and James L. Moody. "Depth of Field". In Lighting for Televised Live Events, 1st ed., 75–79. New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9780429288982-11.
Atchison, David A., and George Smith. "Depth-of-Field". In Optics of the Human Eye, 2nd ed., 379–93. New York: CRC Press, 2023. http://dx.doi.org/10.1201/9781003128601-24.
Cai, Ziyun, Yang Long, Xiao-Yuan Jing, and Ling Shao. "Adaptive Visual-Depth Fusion Transfer". In Computer Vision – ACCV 2018, 56–73. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20870-7_4.
Turaev, Vladimir, and Alexis Virelizier. "Fusion categories". In Monoidal Categories and Topological Field Theory, 65–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-49834-8_4.
Sandström, Erik, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, and Luc Van Gool. "Learning Online Multi-sensor Depth Fusion". In Lecture Notes in Computer Science, 87–105. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19824-3_6.
Schedl, David C., Clemens Birklbauer, Johann Gschnaller, and Oliver Bimber. "Generalized Depth-of-Field Light-Field Rendering". In Computer Vision and Graphics, 95–105. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46418-3_9.
Conference papers on the topic "Depth of field fusion"
Ajdin, Boris, and Timo Ahonen. "Reduced depth of field using multi-image fusion". In IS&T/SPIE Electronic Imaging, edited by Cees G. M. Snoek, Lyndon S. Kennedy, Reiner Creutzburg, David Akopian, Dietmar Wüller, Kevin J. Matherson, Todor G. Georgiev, and Andrew Lumsdaine. SPIE, 2013. http://dx.doi.org/10.1117/12.2008501.
Hariharan, Harishwaran, Andreas Koschan, and Mongi Abidi. "Extending depth of field by intrinsic mode image fusion". In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761727.
Chantara, Wisarut, and Yo-Sung Ho. "Multi-focus image fusion for extended depth of field". In the 10th International Conference. New York, New York, USA: ACM Press, 2018. http://dx.doi.org/10.1145/3240876.3240894.
Brizzi, Michele, Federica Battisti, and Alessandro Neri. "Light Field Depth-of-Field Expansion and Enhancement Based on Multifocus Fusion". In 2019 8th European Workshop on Visual Information Processing (EUVIP). IEEE, 2019. http://dx.doi.org/10.1109/euvip47703.2019.8946218.
Liu, Xiaomin, Pengbo Chen, Mengzhu Du, Huaping Zang, Huace Hu, Yunfei Zhu, Zhibang Ma, Qiancheng Wang, and Yuanye Niu. "Multi-information fusion depth estimation of compressed spectral light field images". In 3D Image Acquisition and Display: Technology, Perception and Applications. Washington, D.C.: OSA, 2020. http://dx.doi.org/10.1364/3d.2020.dw1a.2.
Wu, Nanshou, Mingyi Wang, Guojian Yang, and Zeng Yaguang. "Digital Depth-of-field expansion Using Contrast Pyramid fusion Algorithm for Full-field Optical Angiography". In Clinical and Translational Biophotonics. Washington, D.C.: OSA, 2018. http://dx.doi.org/10.1364/translational.2018.jtu3a.21.
Song, Xianlin. "Computed extended depth of field photoacoustic microscopy using ratio of low-pass pyramid fusion". In Signal Processing, Sensor/Information Fusion, and Target Recognition XXX, edited by Lynne L. Grewe, Erik P. Blasch, and Ivan Kadar. SPIE, 2021. http://dx.doi.org/10.1117/12.2589659.
Cheng, Samuel, Hyohoon Choi, Qiang Wu, and Kenneth R. Castleman. "Extended Depth-of-Field Microscope Imaging: MPP Image Fusion vs. Wavefront Coding". In 2006 International Conference on Image Processing. IEEE, 2006. http://dx.doi.org/10.1109/icip.2006.312957.
Aslantas, Veysel, and Rifat Kurban. "Extending depth-of-field by image fusion using multi-objective genetic algorithm". In 2009 7th IEEE International Conference on Industrial Informatics (INDIN). IEEE, 2009. http://dx.doi.org/10.1109/indin.2009.5195826.
Pérez, Román Hurtado, Carina Toxqui-Quitl, and Alfonso Padilla-Vivanco. "Image fusion of color microscopic images for extended the depth of field". In Frontiers in Optics. Washington, D.C.: OSA, 2015. http://dx.doi.org/10.1364/fio.2015.fth1f.7.
Reports by organizations on the topic "Depth of field fusion"
McLean, William E. ANVIS Objective Lens Depth of Field. Fort Belvoir, VA: Defense Technical Information Center, March 1996. http://dx.doi.org/10.21236/ada306571.
Al-Mutawaly, Nafia, Hubert de Bruin, and Raymond D. Findlay. Magnetic Nerve Stimulation: Field Focality and Depth of Penetration. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada411028.
Peng, Y. K. M. Spherical torus, compact fusion at low field. Office of Scientific and Technical Information (OSTI), February 1985. http://dx.doi.org/10.2172/6040602.
Paul, A. C., and V. K. Neil. Fixed Field Alternating Gradient recirculator for heavy ion fusion. Office of Scientific and Technical Information (OSTI), March 1991. http://dx.doi.org/10.2172/5828376.
Cathey, W. T., Benjamin Braker, and Sherif Sherif. Analysis and Design Tools for Passive Ranging and Reduced-Depth-of-Field Imaging. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada417814.
Kramer, G. J., R. Nazikian, and E. Valeo. Correlation Reflectometry for Turbulence and Magnetic Field Measurements in Fusion Plasmas. Office of Scientific and Technical Information (OSTI), July 2002. http://dx.doi.org/10.2172/808282.
Claycomb, William R., Roy Maxion, Jason Clark, Bronwyn Woods, Brian Lindauer, David Jensen, Joshua Neil, Alex Kent, Sadie Creese, and Phil Legg. Deep Focus: Increasing User "Depth of Field" to Improve Threat Detection (Oxford Workshop Poster). Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610980.
Grabowski, Theodore C. Directed Energy HPM, PP, & PPS Efforts: Magnetized Target Fusion - Field Reversed Configuration. Fort Belvoir, VA: Defense Technical Information Center, August 2006. http://dx.doi.org/10.21236/ada460910.
Hasegawa, Akira, and Liu Chen. A D-He3 fusion reactor based on a dipole magnetic field. Office of Scientific and Technical Information (OSTI), July 1989. http://dx.doi.org/10.2172/5819503.
Chu, Yuh-Yi. Fusion core start-up, ignition and burn simulations of reversed-field pinch (RFP) reactors. Office of Scientific and Technical Information (OSTI), January 1988. http://dx.doi.org/10.2172/5386865.