Dissertations / Theses on the topic 'Depth of field'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Depth of field.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Ramirez, Hernandez Pavel. "Extended depth of field." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.
Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field /." Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.
Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.
Botcherby, Edward J. "Aberration free extended depth of field microscopy." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.
Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
In this work we study a depth estimation problem in which a narrow field-of-view depth image and a wide field-of-view RGB image are provided to a deep network tasked with predicting depth for the entire RGB image. We show that providing the depth image to the network improves the result for the area outside the provided depth compared to an existing method that uses only an RGB image to predict depth. We investigate several architectures and depth-image field-of-view sizes, and study the effect of adding noise and lowering the resolution of the depth image. We show that a larger field of view for the depth image gives a greater advantage, and that the model's accuracy decreases with distance from the provided depth. Our results also show that the models using the noisy, low-resolution depth performed on par with the models using the unmodified depth.
Ozkalayci, Burak Oguz. "Multi-view Video Coding Via Dense Depth Field." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607517/index.pdf.
Lindeberg, Tim. "Concealing rendering simplifications using gaze-contingent depth of field." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189601.
One way to increase rendering performance in 3D applications is to use foveated rendering. This thesis presents a new foveated rendering technique called gaze-contingent depth-of-field tessellation (GC DOF tessellation). Tessellation is the subdivision of geometry into smaller pieces to increase the level of detail. The technique works by applying tessellation to all objects in the focal plane and gradually lowering the tessellation level as blur increases. When the user moves their gaze, the focal plane moves with it: blurred objects become sharp and their level of detail increases, which can help conceal the 'pops' that occur when objects change shape. The technique was evaluated in a user study with 32 participants. In the evaluated scene it reduced the number of rendered primitives by about 70% and the rendering time by about 9% compared to full adaptive tessellation. The user study showed that as blur increased, fewer participants reported seeing 'pops', which suggests that the technique can be used to conceal tessellation-induced 'pops'. Further research is needed to confirm these findings.
Rangappa, Shreedhar. "Absolute depth using low-cost light field cameras." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36224.
Reinhart, William Frank. "Effects of depth cues on depth judgments using a field-sequential stereoscopic CRT display /." This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07132007-143145/.
Reinhart, William Frank. "Effects of depth cues on depth judgements using a field-sequential stereoscopic CRT display." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/38796.
Li, Yan. "Depth Estimation from Structured Light Fields." Doctoral thesis, Universite Libre de Bruxelles, 2020. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/309512.
Doctorate in Engineering Sciences and Technology
Rathjens, Richard G. "PLANTING DEPTH OF TREES - A SURVEY OF FIELD DEPTH, EFFECT OF DEEP PLANTING, AND REMEDIATION." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1243869972.
Zammit, Paul. "Extended depth-of-field imaging and ranging in microscopy." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8081/.
Axelsson, Natalie. "Depth of Field Rendering from Sparsely Sampled Pinhole Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281771.
Optical lenses cause objects in the foreground and background to appear blurred in images. Each point in the scene is projected onto the image plane as a semi-transparent circle of confusion (CoC) whose diameter depends on the distance between the point and the lens. In images rendered with a pinhole camera the entire scene is sharp, but depth of field can be added synthetically for photorealism, aesthetics, or to guide the viewer's attention. However, many depth-of-field rendering algorithms are either computationally expensive or prone to artifacts. This report evaluates two depth-of-field rendering algorithms, both of which can be used independently of the rendering technique. The first renders a single pinhole image and uses a light-field-based method for depth-of-field rendering. The second renders up to 12 pinhole images and uses CoC gathering to create the blur. Ideas from both algorithms are combined in a new method that uses sparse pinhole images to approximate the light field. Our method produces better physical approximations than the other algorithms and avoids common artifacts, but it can cause ghosting at short computation times. We evaluate the methods by comparing them against images generated with the accumulation buffer technique, which is assumed to approximate the physical ground truth. Physical accuracy is measured with structural similarity (SSIM), artifacts are evaluated visually, and computation times are measured in the Inviwo application.
Liu, Yang. "Simulating depth of field using per-pixel linked list buffer." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1598036.
In this thesis, I present a method for simulating three characteristics of depth-of-field images: partial occlusion, bokeh and blur. Retrieving color from occluded surfaces is achieved by constructing a per-pixel linked list buffer, which requires only two render passes. Additionally, the per-pixel linked list buffer eliminates the memory overhead of empty pixels in depth layers. The bokeh and blur effects are accomplished by image-space point splatting (Lee 2008). I demonstrate how point splatting can be used to account for the effect of aperture shape and intensity distribution on bokeh. Spherical aberration and chromatic aberration can be approximated using a custom pre-built sprite. Together as a package, this method is capable of matching the realism of multi-perspective methods and layered methods.
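The core idea of a per-pixel fragment list — keeping every fragment that lands on a pixel instead of only the front-most one, so occluded colors stay available — can be illustrated with a small CPU-side sketch. This is our illustrative reconstruction in Python, not the thesis's GPU shader implementation, and the function and variable names are ours:

```python
from collections import defaultdict

def build_fragment_lists(fragments):
    """Group rasterized fragments by pixel, keeping occluded ones.
    fragments: iterable of (pixel_xy, depth, color) tuples."""
    lists = defaultdict(list)
    for xy, depth, color in fragments:
        lists[xy].append((depth, color))
    for xy in lists:
        lists[xy].sort()  # sort front-to-back by depth
    return lists

def front_color(lists, xy):
    """Color of the nearest fragment; deeper entries remain stored,
    which is what makes partial-occlusion effects possible."""
    return lists[xy][0][1]

# Two fragments fall on the same pixel; the blue one is closer.
frags = [((0, 0), 0.7, "red"), ((0, 0), 0.2, "blue")]
lists = build_fragment_lists(frags)
# front_color(lists, (0, 0)) -> "blue"; the occluded "red" is still available.
```

On the GPU the same structure is built with atomic counters and a node buffer in a fragment shader, which is why only two render passes are needed.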
Henriksson, Ola. "A Depth of Field Algorithm for Realtime 3D Graphics in OpenGL." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1169.
The company where this thesis was formulated constructs VR applications for the medical environment. The hardware used is ordinary desktops with consumer-level graphics cards and haptic devices. In medicine, some operations require microscopes or cameras. In order to simulate these in a virtual reality environment for educational purposes, the effect of depth of field, or focus, has to be considered.
A working algorithm that generates this optical phenomenon in realtime, stereo rendered computer graphics is presented in this thesis. The algorithm is implemented in OpenGL and C++ to later be combined with a VR application simulating eye surgery which is built with OpenGL Optimizer.
Several different approaches are described in this report. The call for realtime stereo rendering (~60 fps) means taking advantage of the graphics hardware to a great extent. In OpenGL this means using the extensions of a specific graphics chip for better performance; in this case the algorithm is implemented for a GeForce3 card.
To increase the speed of the algorithm much of the workload is moved from the CPU to the GPU (Graphics Processing Unit). By re-defining parts of the ordinary OpenGL pipeline via vertex programs, a distance-from-focus map can be stored in the alpha channel of the final image with little time loss.
This can effectively be used to blend a previously blurred version of the scene with a normal render. Different techniques to quickly blur a rendered image are discussed; to keep the speed up, solutions that require moving data from the graphics card are not an option.
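The blend step this abstract describes — mixing a sharp render with a pre-blurred one, weighted per pixel by distance from focus — can be sketched in a few lines. This NumPy version is our illustrative reconstruction under a simple linear weight mapping, not the thesis's actual GeForce3 shader code:

```python
import numpy as np

def depth_of_field_blend(sharp, blurred, dist_from_focus, max_dist=1.0):
    """Blend a sharp render with a pre-blurred one, weighting each pixel
    by its distance from the focal plane (0 = in focus, max_dist = fully blurred)."""
    w = np.clip(dist_from_focus / max_dist, 0.0, 1.0)[..., None]  # per-pixel blur weight
    return (1.0 - w) * sharp + w * blurred

# Tiny 1x2-pixel example: left pixel in focus, right pixel fully defocused.
sharp = np.array([[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]])
blurred = np.array([[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])
dist = np.array([[0.0, 1.0]])
out = depth_of_field_blend(sharp, blurred, dist)
# out keeps the left pixel sharp (1.0) and replaces the right with the blur (0.0).
```

In the thesis's setup, the equivalent of `dist_from_focus` is the distance-from-focus map written to the alpha channel by the vertex programs, so the blend can run entirely on the graphics card.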
Vörös, Csaba, Norbert Zajzon, Endre Turai, and László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231251.
McDonnell, Ian. "Object segmentation from low depth of field images and video sequences." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58630/.
Rafiee, Gholamreza. "Automatic region-of-interest extraction in low depth-of-field images." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2194.
Vörös, Csaba, Norbert Zajzon, Endre Turai, and László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth." TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23184.
He, Ruojun. "Square Coded Aperture: A Large Aperture with Infinite Depth of Field." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1418078808.
Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.
Helbing, Katrin G. "Effects of display contrast and field of view on distance perception." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020220/.
Goss, Keith Michael. "Multi-dimensional polygon-based rendering for motion blur and depth of field." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294033.
Widjanarko, Taufiq. "Hyperspectral interferometry for single-shot profilometry and depth-resolved displacement field measurement." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8349.
Reddy, Serendra. "Automatic 2D-to-3D conversion of single low depth-of-field images." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.
Abbott, Joshua E. "Interactive Depth-Aware Effects for Stereo Image Editing." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3712.
Willingham, David George Winograd Nicholas. "Strong-field photoionization of sputtered neutral molecules for chemical imaging and depth profiling." [University Park, Pa.] : Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-4536/index.html.
Sanyal, Poulomi. "Depth of field enhancement of a particle analyzer microscope using wave-front coding." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83931.
Our choice of technique, therefore, was a novel optical wave-front manipulation mechanism involving subsequent image restoration via digital post-processing. This technique is known as wave-front coding. The coding is achieved with the help of an optical element known as a phase plate, and the coded image is then electronically restored with the help of a digital post-processing filter.
The three steps involved in achieving our desired goal were: modeling the imaging system to be studied and studying its characteristics before DOF enhancement, designing the phase plate, and finally choosing and designing the appropriate decoding filter. After an appropriate phase plate was modeled, it was incorporated into the pre-existing optics and subsequently optimized. The intermediate image produced by the resulting system was then studied for defocus performance. Finally, the intermediate image was restored using a digital filter and studied once again for DOF characteristics. Other factors, like optical aberrations, that might limit system performance were also taken into consideration.
In the end, a simpler and cost-effective method of fabricating the suggested phase plate for single-wavelength operation was proposed. The results of our simulations were promising, and sufficiently high-resolution imaging was achievable within the entire enhanced DOF region of ±200 µm from the point of best focus. The DOF without coding was around ±50 µm, but with coding the spot size remained fairly constant over the entire 400 µm deep region of interest. Thus a 4× increase in the overall system DOF was achieved due to wave-front coding.
Sorensen, Jordan (Jordan P. ). "Software simulation of depth of field effects in video from small aperture cameras." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61577.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-73).
This thesis proposes a technique for post-processing digital video to introduce a simulated depth-of-field effect. Because the technique is implemented in software, it affords the user greater control over the parameters of the effect (such as the amount of defocus, aperture shape, and defocus plane) and allows the effect to be used even on hardware which would not typically allow for depth of field. In addition, because it is a purely post-processing technique and requires no change in capture method or hardware, it can be used on any video and introduces no new costs. This thesis describes the technique, evaluates its performance on example videos, and proposes further work to improve the technique's reliability.
by Jordan Sorensen.
M.Eng.
Hu, Guang-hua. "Extending the depth of focus using digital image filtering." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45653.
Two types of image processing methods capable of forming a composite image from a set of image slices which have in-focus as well as out-of-focus segments are discussed. The first type is based on space-domain operations and has been discussed in the literature. The second type, to be introduced, is based on the intuitive concept that the spectral energy distribution of a focused object is biased towards lower frequencies after blurring. This approach requires digital image filtering in the spatial frequency domain. A comparison among methods of both types is made using a quantitative fidelity criterion.
Master of Science
Atif, Muhammad [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "Optimal Depth Estimation and Extended Depth of Field from Single Images by Computational Imaging using Chromatic Aberrations / Muhammad Atif ; Betreuer: Bernd Jähne." Heidelberg : Universitätsbibliothek Heidelberg, 2013. http://d-nb.info/1177382679/34.
Brooker, Julian P. "Eye movement controlled synthetic depth of field blurring in stereographic displays of virtual environments." Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413506.
Christoffersson, Anton. "Real-time Depth of Field with Realistic Bokeh : with a Focus on Computer Games." Thesis, Linköpings universitet, Informationskodning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163080.
Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.
Gullapalli, Sai Krishna. "Wave-Digital FPGA Architectures of 4-D Depth Enhancement Filters for Real-Time Light Field Image Processing." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1574443263497981.
Williams, Sarah. "Depth of field : aspects of photography and film in the selected work of Michael Ondaatje." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2401/.
Lucotte, Bertrand M. "Fourier optics approaches to enhanced depth-of-field applications in millimetre-wave imaging and microscopy." Thesis, Heriot-Watt University, 2010. http://hdl.handle.net/10399/2323.
Kubíček, Martin. "Creating a Depth Map of Eye Iris in Visible Spectrum." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403824.
Keogh, Teri M. "Changes in competition intensity, herbivory and stress along a soil depth gradient in an old field." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0021/MQ58467.pdf.
O'Donnell, David Patrick. "FIELD AND MODELING STUDY OF THE EFFECTS OF STREAM DEPTH AND GROUND WATER DISCHARGE ON HYDROGEOPHYSICAL." Master's thesis, Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/187887.
M.S.
Valley Creek, an urbanized stream in Southeastern Pennsylvania, has undergone changes typical of streams in urbanized areas, such as bank erosion, channel redirection, and habitat disruption. One area of disruption that has been little studied is the hyporheic zone, the top layer of the streambed where stream water exchanges with subsurface water and chemical transformations occur. The hyporheic zone of an 18 m reach of Valley Creek in Ecology Park was characterized using a tracer test coupled with a hydrogeophysical survey. Nested wells screened at depths of 20, 35, 50, and 65 cm were placed at four locations along the center of the stream to monitor the passage of the salt tracer through the hyporheic zone. Results from well sampling were compared with time-lapse Electrical Resistivity Tomography (ERT) monitoring of the stream tracer. The streambed was also characterized using temperature probes to calculate the stream water-groundwater flux and freeze core samples to characterize heterogeneities in streambed sediment. Models were created using MODFLOW, MATLAB, and EARTH IMAGER 2-D to understand differences between Ecology Park and Crabby Creek, a tributary within the Valley Creek watershed, where similar studies were performed in 2009 and 2010. Hyporheic exchange and ERT applicability differed between the two study sites. At Ecology Park, tracer was detected only in the 20 cm wells at nests 2 and 4 during the injection period. Noise in the falling limbs of the tracer test breakthrough curves made it difficult to determine whether tracer lingered in the hyporheic zone using well data. ERT surveys were unable to detect tracer lingering after the injection period. At Crabby Creek, tracer was present in all shallow wells, and lingering tracer was detected in the hyporheic zone using ERT during the post-injection period. 
ERT surveys at Ecology Park were less effective than at Crabby Creek for two reasons: the presence of groundwater discharge (which inhibited hyporheic exchange) and increased stream water depth at Ecology Park. Temperature modeling of heat flux data revealed groundwater discharge at three locations. MODFLOW models predicted that this discharge would diminish the length and residence time of subsurface flow paths. Groundwater discharge likely increased along the contact between the hydraulically conductive Elbrook Formation and the less conductive Ledger Formation. Models created with MATLAB and Earth-Imager 2-D showed ERT sensitivity to tracer in the hyporheic zone depended on stream thickness. With increased water depth, more current propagated through the stream, which reduced sensitivity to changes in the hyporheic zone. A sensitivity analysis showed that the resistivity change in the hyporheic zone at Ecology Park (average water depth 0.36 m) would have to exceed 30% to be detectable, which was greater than the induced change during the tracer test. Deeper water also amplified the confounding effect of changes in the background conductivity of the stream water, though time-lapse ERT detected no lingering tracer even after correcting for this drift. Studies performed at Crabby Creek were able to map lingering tracer in the hyporheic zone because the site had a thin water layer (0.1 m), a large percentage increase of conductivity during the tracer test, and no groundwater discharge. Conversely, at Ecology Park groundwater discharge inhibited hyporheic exchange, and imaging sensitivity was reduced by the thicker water layer, demonstrating the limitations of ERT for hyporheic zone characterization. The modified inversion routines used here demonstrated that, with accurate stream conductivity and depth measurements, ERT can be used in some streams as a method for hyporheic characterization by incorporating site-specific conditions.
Temple University--Theses
Kourakos, Alexander William. "Extended Depth-of-focus in a Laser Scanning System Employing a Synthesized Difference-of-Gaussians Pupil." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/33091.
Traditional laser scanning systems, such as those used for microscopy, typically image objects of finite thickness. If the depth-of-focus of such systems is low, as is the case when a simple clear pupil is used, the object must be very thin or the image will be distorted. Several methods have been developed to deal with this problem. A microscope with a thin annular pupil has a very high depth-of-focus and can image the entire thickness of a sample, but most of the laser light is blocked, and the image shows poor contrast and high noise. In confocal laser microscopy, the depth-of-focus problem is eliminated by using a small aperture to discard information from all but one thin plane of the sample. However, such a system requires scanning passes at many different depths to yield an image of the entire thickness of the sample, which is a time-consuming process and is highly sensitive to registration errors.
In this thesis, a novel type of scanning system is considered. The sample is simultaneously scanned with a combination of two Gaussian laser beams of different widths and slightly different temporal frequencies. Information from scanning with the two beams is recorded with a photodetector, separated electronically, and processed to form an image. This image is similar to one formed by a system using a difference-of-Gaussians pupil, except no light has been blocked or wasted. Also, the entire sample can be scanned in one pass. The depth-of-focus characteristics of this synthesized difference-of-Gaussians pupil are examined and compared with those of well-known functions such as the circular, annular, and conventional difference-of-Gaussians pupils.
Master of Science
Derafshi, Mohammadali H. "The effect of depth of placement of phosphorus fertiliser on the growth and development of field peas." Title page, contents and anstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phd427.pdf.
Rogers, Claire. "Depth conversion methods for the Torsk Oilfield : investigating the complex velocity field of the Seaspray Group, Gippsland Basin /." Title page, abstract and table of contents only, 2003. http://web4.library.adelaide.edu.au/theses/09SB/09sbr7421.pdf.
Lönroth, Per, and Mattias Unger. "Advanced Real-time Post-Processing using GPGPU techniques." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14962.
Post-processing techniques are used to change a rendered image as a last step before presentation and include, but are not limited to, operations such as changes of saturation or contrast, and also more advanced effects like depth-of-field and tone mapping.
Depth-of-field effects are created by changing the focus in an image; the parts close to the focus point are perfectly sharp while the rest of the image has a variable amount of blurriness. The effect is widely used in photography and movies as a depth cue but has in the latest years also been introduced into computer games.
Today’s graphics hardware gives new possibilities when it comes to computation capacity. Shaders and GPGPU languages can be used to do massive parallel operations on graphics hardware and are well suited for game developers.
This thesis presents the theoretical background of some of the recent and most valuable depth-of-field algorithms and describes the implementation of various solutions in the shader domain but also using GPGPU techniques. The main objective is to analyze various depth-of-field approaches and look at their visual quality and how the methods scale performance wise when using different techniques.
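The depth cue described above follows from thin-lens geometry: a point away from the focus distance images as a circle of confusion whose diameter grows with defocus. A small helper (the formula is standard thin-lens optics; the function and variable names are ours, not from the thesis) makes the relationship concrete:

```python
def coc_diameter(d, s, f, A):
    """Circle-of-confusion diameter (same units as A) for a point at
    distance d, with the lens of focal length f and aperture diameter A
    focused at distance s. Uses c = A * (f / (s - f)) * |d - s| / d."""
    m = f / (s - f)               # magnification at the focus distance
    return A * m * abs(d - s) / d

# 50 mm lens (f = 0.05 m), 25 mm aperture, focused at 2 m:
in_focus = coc_diameter(d=2.0, s=2.0, f=0.05, A=0.025)   # point on the focal plane
near = coc_diameter(d=1.0, s=2.0, f=0.05, A=0.025)       # 1 m in front of focus
far = coc_diameter(d=4.0, s=2.0, f=0.05, A=0.025)        # 2 m behind focus
```

A point on the focal plane has zero blur, and for equal distances the near side blurs more strongly than the far side — which is why post-process DOF methods need a signed or per-pixel depth term rather than a single blur radius.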
Chow, B. S. "Design a Phase Plate to Extend the Depth of Field for an Inexpensive Microscope System to Have the Muti-focus Ability." Thesis, Sumy State University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/42557.
Jøraandstad, Susann. "Use of stacking velocity for depth prediction and lithological indication in the Challum field of the Cooper/Eromanga basin, Queensland /." Title page, abstract and contents only, 1999. http://web4.library.adelaide.edu.au/theses/09SB/09sbj818.pdf.
Two folded enclosures in pocket inside back cover. Includes bibliographical references (2 leaves).
"Flexible imaging for capturing depth and controlling field of view and depth of field." COLUMBIA UNIVERSITY, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3348435.
Custodio, Joao Pedro Barata Lourenco. "Depth Estimation using Light-Field Cameras." Master's thesis, 2014. http://hdl.handle.net/10316/40446.
Plenoptic cameras or light field cameras are a recent type of imaging device that is starting to regain some popularity. These cameras are able to acquire the plenoptic function (4D light field) and, consequently, to output the depth of a scene, by making use of the redundancy created by the multi-view geometry, where a single 3D point is imaged several times. Despite the attention given in the literature to standard plenoptic cameras, like Lytro, due to their simplicity and lower price, we did our work based on results obtained from a multi-focus plenoptic camera (Raytrix, in our case), due to their quality and higher resolution images. In this master thesis, we present an automatic method to estimate the virtual depth of a scene. Since the capture is done using a multi-focus plenoptic camera, we are working with multi-view geometry and lenses with different focal lengths, and we can use that to back-trace the rays in order to obtain the depth. We start by finding salient points and their respective correspondences using a scaled SAD (sum of absolute differences) method. In order to perform the referred back-tracing and obtain robust results, we developed a RANSAC-like method, which we call COMSAC (Complete Sample Consensus). It is an iterative method that back-traces the light rays in order to estimate the depth, eliminating the outliers. Finally, since the depth map obtained is sparse, we developed a way to make it dense by random growing. Since we used a publicly available dataset from Raytrix, comparisons between our results and the manufacturer's are also presented. A paper was also submitted to 3DV 2014 (International Conference on 3D Vision), a conference on three-dimensional vision.
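The consensus idea behind RANSAC-like estimators such as the one this abstract describes can be shown with a deliberately tiny sketch: repeatedly pick a candidate, count how many samples agree with it, and refit on the largest agreeing set. This is a generic RANSAC-style loop of our own (it is not the thesis's COMSAC algorithm, and the names are ours), reduced to estimating a single depth value from samples that contain outliers:

```python
import random

def ransac_constant(samples, threshold=0.1, iters=200, seed=0):
    """Estimate one value from noisy samples with outliers:
    pick candidates, score them by inlier count, refit on the best set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        candidate = rng.choice(samples)  # minimal sample: a single value
        inliers = [s for s in samples if abs(s - candidate) <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return sum(best_inliers) / len(best_inliers)  # refit on the consensus set

depths = [1.00, 1.02, 0.98, 1.01, 5.0, 7.0]  # four consistent values, two gross outliers
est = ransac_constant(depths)                # close to 1.0; the outliers are ignored
```

In the thesis's setting the model being fitted is a back-traced ray intersection rather than a constant, but the outlier-rejection structure is the same.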
Tseng, Yi-Ning, and 曾翊寧. "Realtime Depth of Field Rendering with Bokeh Effect via Interactive Depth Labeling." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/26446918239640282284.
National Taiwan University
Graduate Institute of Networking and Multimedia
100
Shallow focus is a photographic technique in which a picture is taken with a small depth of field (DOF) so that objects outside the DOF are blurred to highlight the subject inside it. Common digital cameras, unlike the Digital Single Lens Reflex (DSLR) camera, cannot produce this special visual effect due to their limitations in aperture and focal length. Commercial software and even some common digital cameras include built-in shallow focus simulation programs. However, the rendering quality of the existing tools is not very good due to several problems: 1) boundaries of the focused objects are often over-smoothed because of poor automatically generated object segmentation results, 2) the blurring effect usually looks fake since a key element of the image formation principle, i.e. the depth of the object, is not properly utilized in the rendering process, and 3) even though some refocusing methods model the blurring effect with depth information, it is hard to extract an accurate depth map from a single image. Therefore, we develop a friendly user interface which can interactively achieve reliable depth labeling based on the techniques of semi-automatic object segmentation and 3D modeling. Moreover, to realize the shallow focus effect efficiently, we apply a real-time gather-based DOF rendering method and modify it to be capable of simulating the bokeh effect.