Dissertations / Theses on the topic 'Depth of field'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Depth of field.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ramirez, Hernandez Pavel. "Extended depth of field." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.

Full text
Abstract:
In this thesis the extension of the depth of field of optical systems is investigated. The problem of achieving extended depth of field (EDF) while preserving the transverse resolution is also addressed. A new expression for the transport of intensity equation in the prolate spheroidal coordinate system is derived, with the aim of investigating the phase retrieval problem with applications to EDF. A framework for the optimisation of optical systems with EDF is also introduced, where the main motivation is to find an appropriate scenario that allows a convex optimisation solution leading to global optima. The relevance of such an approach is that it does not depend on the optimisation algorithm, since each local optimum is a global one. The multi-objective optimisation framework for optical systems is also discussed, where the main focus is the optimisation of pupil plane masks. The solution to the multi-objective optimisation problem is presented not as a single mask but as a set of masks. Convex frameworks for this problem are further investigated, and it is shown that convex optimisation of pupil plane masks is possible, providing global optima for the optimisation of optical systems. Seven masks are provided as examples of convex optimisation solutions, in particular five pupil plane masks that achieve EDF by factors of 2, 2.8, 2.9, 4 and 4.3, including two pupil masks that, besides extending the depth of field, are super-resolving in the transverse planes. These are shown as examples of solutions to particular optimisation problems in optical systems, where convexity properties have been given to the original problems to allow a convex optimisation, leading to optimised masks that are global optima within the optimisation scenario.
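The property the abstract leans on (every local optimum of a convex problem is global, so the result does not depend on the optimisation algorithm) can be illustrated with a toy example. This is a generic Python sketch, not the thesis's pupil-mask formulation; the quadratic objective, step size, and starting points are invented for illustration:

```python
# Illustrative sketch: for a convex objective, gradient descent reaches the
# same global optimum from any starting point -- the property that motivates
# the convex optimisation framework described above.

def gradient_descent(df, x0, lr=0.1, steps=2000):
    """Minimise a differentiable objective via fixed-step gradient descent."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# A convex stand-in objective f(x) = (x - 3)^2 + 1, with derivative 2(x - 3).
df = lambda x: 2.0 * (x - 3.0)

# Every start converges to the unique global minimiser x* = 3.
minima = [gradient_descent(df, x0) for x0 in (-50.0, 0.0, 42.0)]
```

For a non-convex objective the three starts could land in three different local minima; convexity is what removes that dependence on initialisation.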
APA, Harvard, Vancouver, ISO, and other styles
2

Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field /." Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.

Full text
3

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
4

Botcherby, Edward J. "Aberration free extended depth of field microscopy." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.

Full text
Abstract:
In recent years, the confocal and two-photon microscopes have become ubiquitous tools in life science laboratories. The reason for this is that both these systems can acquire three-dimensional image data from biological specimens. Specifically, this is done by acquiring a series of two-dimensional images from a set of equally spaced planes within the specimen. The resulting image stack can be manipulated and displayed on a computer to reveal a wealth of information. These systems can also be used in time-lapse studies to monitor the dynamic behaviour of specimens by recording a number of image stacks at a sequence of time points. The time resolution in this situation is, however, limited by the maximum speed at which each constituent image stack can be acquired. Various techniques have emerged to speed up image acquisition, and in most practical implementations a single, in-focus image can be acquired very quickly. However, the real bottleneck in three-dimensional imaging is the process of refocusing the system to image different planes. This is commonly done by physically changing the distance between the specimen and the imaging lens, which is a relatively slow process. It is clear, with the ever-increasing need to image biologically relevant specimens quickly, that the speed limitation imposed by the refocusing process must be overcome. This thesis concerns the acquisition of data from a range of specimen depths without requiring the specimen to be moved. A new technique is demonstrated for two-photon microscopy that enables data from a whole range of specimen depths to be acquired simultaneously, so that a single two-dimensional scan records extended depth of field image data directly. This circumvents the need to acquire a full three-dimensional image stack and hence leads to an improvement in the temporal resolution for acquiring such data of more than an order of magnitude.
In the remainder of this thesis, a new microscope architecture is presented that enables scanning to be carried out in three dimensions at high speed without moving the objective lens or specimen. Aberrations introduced by the objective lens are compensated by the introduction of an equal and opposite aberration with a second lens within the system enabling diffraction limited performance over a large range of specimen depths. Focusing is achieved by moving a very small mirror, allowing axial scan rates of several kHz; an improvement of some two orders of magnitude. This approach is extremely general and can be applied to any form of optical microscope with the very great advantage that the specimen is not disturbed. This technique is developed theoretically and experimental results are shown that demonstrate its potential application to a broad range of sectioning methods in microscopy.
5

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Full text
Abstract:
In this work we study a depth prediction problem where we provide a narrow field-of-view depth image and a wide field-of-view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field-of-view depth image, we improve results for the area outside the provided depth compared to an earlier approach utilizing only a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones, and that the accuracy of the model decreases with the distance from the provided depth. Further, we investigate several architectures as well as the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided with low-resolution, noisy data perform on par with the models provided with unaltered depth.
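The input layout described above (a wide RGB canvas paired with a narrow, centred depth crop) can be sketched as follows. The zero-padding and the separate validity mask are assumptions made for illustration, not the thesis's actual pipeline:

```python
# Hypothetical sketch of the network input: a dense depth channel zero-filled
# outside the narrow field of view, plus a validity mask so a model can tell
# "no depth provided" apart from "depth equals zero".

def make_network_input(height, width, depth_patch, top, left):
    ph = len(depth_patch)
    pw = len(depth_patch[0])
    depth = [[0.0] * width for _ in range(height)]   # dense channel, zero-filled
    valid = [[0.0] * width for _ in range(height)]   # 1.0 where depth is provided
    for r in range(ph):
        for c in range(pw):
            depth[top + r][left + c] = depth_patch[r][c]
            valid[top + r][left + c] = 1.0
    return depth, valid

# A 4x4 canvas with a 2x2 narrow-FOV depth crop placed near the centre.
depth, valid = make_network_input(4, 4, [[1.5, 2.0], [2.5, 3.0]], top=1, left=1)
```

The two channels would be concatenated with the RGB image before being fed to the network.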
6

Ozkalayci, Burak Oguz. "Multi-view Video Coding Via Dense Depth Field." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607517/index.pdf.

Full text
Abstract:
Emerging 3-D applications and 3-D display technologies raise transmission problems for next-generation multimedia data. Multi-view Video Coding (MVC) is one of the challenging topics in this area and is on the road to standardization via ISO MPEG. In this thesis, a 3-D geometry-based MVC approach is proposed and analyzed in terms of its compression performance. For this purpose, the overall study is partitioned into three parts. The first is dense depth estimation of a view from a fully calibrated multi-view set. The calibration information and smoothness assumptions are utilized to determine dense correspondences via a Markov Random Field (MRF) model, which is solved by the Belief Propagation (BP) method. In the second part, the estimated dense depth maps are utilized to generate (predict) arbitrary (other camera) views of a scene, a process known as novel view generation. A 3-D warping algorithm, followed by an occlusion-compatible hole-filling process, is implemented for this aim. In order to suppress occlusion artifacts, an intermediate novel view generation method, which fuses two novel views generated from different source views, is developed. Finally, in the last part, the dense depth estimation and intermediate novel view generation tools are utilized in the proposed H.264-based MVC scheme to remove the spatial redundancies between different views. The performance of the proposed approach is compared against simulcast coding and a recent MVC proposal, which is expected to become the standard recommendation of MPEG in the near future. These results show that geometric approaches in MVC can still be utilized, especially in certain 3-D applications, in addition to conventional temporal motion compensation techniques, although the rate-distortion performance of geometry-free approaches is quite superior.
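The 3-D warping step described above can be sketched for a single pixel: back-project with the estimated depth, then re-project into the target camera. This is a generic pinhole-camera illustration with made-up intrinsics and baseline, not the thesis's implementation:

```python
# Minimal depth-based warping sketch: a pixel (u, v) with known depth in the
# source view is back-projected to 3-D and re-projected into a second camera
# translated horizontally by a baseline (a simple rectified stereo setup).

def warp_pixel(u, v, depth, fx, fy, cx, cy, baseline_x):
    # Back-project into the source camera's 3-D frame.
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    Z = depth
    # Target camera translated by baseline_x along the X axis.
    Xt = X - baseline_x
    # Re-project with the same intrinsics.
    return fx * Xt / Z + cx, fy * Y / Z + cy

# A pixel at the principal point, 2 m away, seen across a 0.1 m baseline:
u2, v2 = warp_pixel(320.0, 240.0, 2.0, 500.0, 500.0, 320.0, 240.0, 0.1)
```

Pixels for which no source pixel warps to the target view are the holes that the occlusion-compatible hole-filling process then has to fill.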
7

Lindeberg, Tim. "Concealing rendering simplifications using gaze contingent depth of field." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189601.

Full text
Abstract:
One way of increasing 3D rendering performance is the use of foveated rendering. In this thesis a novel foveated rendering technique called gaze contingent depth of field tessellation (GC DOF tessellation) is proposed. Tessellation is the process of subdividing geometry to increase detail. The technique works by applying tessellation to all objects within the focal plane, gradually decreasing tessellation levels as applied blur increases. As the user moves their gaze the focal plane shifts and objects go from blurry to sharp at the same time as the fidelity of the object increases. This can help hide the pops that occur as objects change shape. The technique was evaluated in a user study with 32 participants. For the evaluated scene the technique helped reduce the number of primitives rendered by around 70 % and frame time by around 9 % compared to using full adaptive tessellation. The user study showed that as the level of blur increased the detection rate for pops decreased, suggesting that the technique could be used to hide pops that occur due to tessellation. However, further research is needed to solidify these findings.
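The core mapping described above (full tessellation in the focal plane, gradually lower levels as blur grows) can be sketched as a simple function. The linear falloff and the level range are illustrative assumptions, not the exact scheme evaluated in the thesis:

```python
# Hedged sketch of gaze-contingent DOF tessellation: the tessellation level
# is maximal for objects in the focal plane (no blur) and falls off as the
# applied blur radius increases, hiding pops behind the defocus blur.

def tessellation_level(blur_radius, max_blur, max_level=6, min_level=1):
    """Map a blur radius in [0, max_blur] to a tessellation level."""
    t = min(max(blur_radius / max_blur, 0.0), 1.0)   # normalised blur
    return round(max_level - t * (max_level - min_level))

# In focus, half-blurred, and fully blurred geometry:
levels = [tessellation_level(b, max_blur=8.0) for b in (0.0, 4.0, 8.0)]
```

As the gaze shifts and an object's blur shrinks, its level rises again; the user-study result suggests the accompanying pop is least visible while the object is still blurred.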
8

Rangappa, Shreedhar. "Absolute depth using low-cost light field cameras." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36224.

Full text
Abstract:
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often being part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require depth (Z) data along with the X and Y data. New designs of camera systems have previously been developed by integrating multiple cameras to provide 3D data, ranging from two-camera photogrammetry to multiple-camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been made, and likewise many research groups around the world are currently working on camera technology, but from different perspectives: computer vision, algorithm development, metrology, etc. Plenoptic or lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but in the majority these have been aimed at domestic consumer photography, only ever recording scenes as relative-scale greyscale images. This research considers the potential for lightfield cameras to be used for world scene metrology applications, specifically to record absolute coordinate data.
Specific interest has been paid to a range of low-cost lightfield cameras to: understand the functional/behavioural characteristics of the optics; identify potential need for optical and/or algorithm development; define sensitivity, repeatability and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute-scale coordinate data to be extracted from the images. The novel output of this work is: an analysis of lightfield camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and Inactive Zones (non-linear data generation, poor data); development of bespoke calibration algorithms that remove radial/tangential distortion from the data captured using any MLA-based camera; and a lightfield-camera-independent algorithm that allows the delivery of 3D coordinate data in absolute units within a well-defined measurable range from a given camera.
9

Reinhart, William Frank. "Effects of depth cues on depth judgments using a field-sequential stereoscopic CRT display /." This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07132007-143145/.

Full text
10

Reinhart, William Frank. "Effects of depth cues on depth judgements using a field-sequential stereoscopic CRT display." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/38796.

Full text
11

Li, Yan. "Depth Estimation from Structured Light Fields." Doctoral thesis, Universite Libre de Bruxelles, 2020. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/309512.

Full text
Abstract:
Light fields have become popular as a new geometric representation of 3D scenes, composed of multiple views and offering great potential to improve depth perception in the scenes. Light fields can be captured by different camera sensors, where different acquisitions give rise to different representations: mainly a line of camera views (the 3D light field representation) or a grid of camera views (the 4D light field representation). When the capture positions are uniformly distributed, the outputs are structured light fields. This thesis focuses on depth estimation from structured light fields. The light field representations (or setups) differ not only in terms of 3D and 4D, but also in the density or baseline of camera views. Rather than aiming to reconstruct high-quality depths from dense (narrow-baseline) light fields, we pursue a more general objective, i.e. reconstructing depths from a wide range of light field setups. Hence a series of depth estimation methods for light fields, including traditional and deep learning-based methods, is presented in this thesis. Extra effort is made to achieve high performance in terms of depth accuracy and computational efficiency. Specifically, 1) a robust traditional framework is put forward for estimating depth in sparse (wide-baseline) light fields, combining cost calculation, window-based filtering and optimization; 2) the above-mentioned framework is extended with new or alternative components to the 4D light fields. This new framework is independent of the number of views and/or the baseline of 4D light fields when predicting depth; 3) two new deep learning-based methods are proposed for light fields with a narrow baseline, where features are learned from the Epipolar-Plane Image and the light field images.
One of the methods is designed as a lightweight model for more practical goals; 4) owing to the dataset deficiency, a large-scale and diverse synthetic wide-baseline dataset with labeled data is created. A new lightweight deep model is proposed for 4D light fields with a wide baseline. Besides, this model also works on 4D light fields with a narrow baseline if trained on narrow-baseline datasets. Evaluations are made on public light field datasets. Experimental results show that the proposed depth estimation methods are capable of achieving high-quality depths across a wide range of light field setups, and some even outperform state-of-the-art methods.
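The geometric relation underlying the Epipolar-Plane-Image methods mentioned above can be sketched briefly: in a structured light field, a scene point traces a line in the EPI whose slope is the per-view disparity, and depth follows from the usual triangulation relation. The conversion is standard; the numbers are invented for illustration:

```python
# Illustrative sketch (not the thesis's learned models): convert the slope of
# an EPI line -- the disparity between adjacent views -- into metric depth
# using depth = focal_length * baseline / disparity.

def depth_from_epi_slope(disparity_px, focal_px, baseline_m):
    """Depth of a point whose EPI line shifts disparity_px per view."""
    return focal_px * baseline_m / disparity_px

# A point shifting 5 px between neighbouring views, f = 500 px, B = 0.01 m:
z = depth_from_epi_slope(5.0, 500.0, 0.01)
```

Narrow-baseline setups yield small disparities (sub-pixel slopes), wide-baseline setups large ones, which is why the two regimes call for different estimators.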
Doctorate in Engineering Sciences and Technology
12

Rathjens, Richard G. "Planting depth of trees - a survey of field depth, effect of deep planting, and remediation." Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1243869972.

Full text
13

Zammit, Paul. "Extended depth-of-field imaging and ranging in microscopy." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8081/.

Full text
Abstract:
Conventional 3D imaging techniques such as laser scanning, focus-stacking and confocal microscopy either require scanning in all or a subset of the spatial dimensions, or else are limited by their depth of field (DOF). Scanning increases the acquisition time, therefore techniques which rely on it cannot be used to image moving scenes. In order to acquire both the intensity of the scene and its depth, extending the DOF without scanning is therefore necessary. This is traditionally achieved by stopping the system down (reducing the f/#). This, however, has the highly undesirable effect of lowering both the throughput and the lateral resolution of the system. In microscopy in particular, both these parameters are critical, therefore there is scope in breaking this trade-off. The objective of this work, therefore, is to develop a practical and simple 3D imaging technique which is capable of acquiring both the irradiance of the scene and its depth in a single snapshot over an extended DOF without incurring a reduction in optical throughput and lateral resolution. To this end, a new imaging technique, referred to as Complementary Kernel Matching (CKM), is proposed in this thesis. To extend the DOF, CKM uses a hybrid imaging technique known as wavefront coding (WC). WC permits the DOF to be extended, typically by an order of magnitude, without reducing the efficiency and the resolution of the system. Moreover, WC only requires the introduction of a phase mask in the aperture of the system, hence it also has the benefit of simplicity and practicality. Unfortunately, in practice, WC systems are found to suffer from post-recovery artefacts and distortion, which substantially degrade the quality of the acquired image. To date, this long-standing problem has found no solution and is probably the cause of the lack of exploitation of this imaging technique by industry.
In CKM, a largely ignored phenomenon associated with WC is used to measure the depth of the sample: the lateral translation of the scene in proportion to its depth. Furthermore, once the depth of the scene is known, the ensuing artefacts and distortion due to the introduction of the WC element can be compensated for. As a result, a high quality intensity image of the scene and its depth profile (referred to in stereo vision parlance as a depth map) are obtained over a DOF which is typically an order of magnitude larger than that of an equivalent clear-aperture system. This implies that, besides being a 3D imaging technique, CKM is also a solution to one of the longest-standing problems in WC itself. By means of WC, therefore, the DOF was extended without scanning and without reducing the throughput and the optical resolution, allowing both an intensity image of the scene and its depth map to be acquired. In addition, CKM is inherently monocular, therefore it does not suffer from occlusion, which is a major problem affecting triangulation-based 3D imaging techniques such as the popular stereo vision. One therefore concludes that CKM fulfils the objectives set for this project. In this work, various ways of implementing CKM were explored and compared, and the theory associated with them was developed. An experimental prototype was then built and the technique was demonstrated experimentally in microscopy. The results show that CKM eliminates WC artefacts and thus gives high quality images of the scene over an extended DOF. A DOF of ∼20 μm was achieved on a 40×, 0.5 NA system experimentally; however, this can be increased if required. The experimental depth reconstructions of real samples (such as pollen grains and a silicon die) imaged in various modalities (reflection, transmission and fluorescence) were comparable to those given by a focus-stack.
However, as with all other passive techniques, the performance of CKM depends on the texture and features in the scene itself. On a binary systematic scene consisting of regularly spaced dots with a linear depth gradient, an RMS error of ±0.15 μm was obtained from an image signal-to-noise ratio of 60 dB. Finally, owing to its simplicity and large DOF, there is scope for investigating the possibility of using the same CKM setup for 3D point localisation applications such as super-resolution. An initial investigation was therefore conducted by localising sub-resolution fluorescent beads. On a 40×, 0.5 NA system, a mean precision of 148 nm in depth and < 30 nm in the lateral dimensions was observed experimentally from 4,000 photons per localisation over a DOF of 26 μm. From these experimental values, a mean localisation precision of < 34 nm in depth and < 13 nm in the lateral dimensions from 2,000 photons per localisation over a DOF of 3 μm is expected on a more typical 100×, 1.4 NA system. This compares favourably with the competition; we therefore conclude that there is scope for investigating this technique for 3D point localisation applications further.
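The ranging principle described above (lateral translation of the scene in proportion to depth) can be caricatured in a few lines: estimate the shift between two views and scale it by a calibration factor. The correlation search and the microns-per-pixel constant below are assumptions for illustration, not CKM's actual estimator:

```python
# Heavily hedged sketch: recover an integer lateral shift between two 1-D
# intensity profiles by brute-force correlation, then map shift -> depth with
# a hypothetical linear calibration.

def estimate_shift(a, b, max_shift):
    """Integer shift of b relative to a that maximises overlap correlation."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(a[i] * b[i + s] for i in range(len(a)) if 0 <= i + s < len(b))
        if score > best_score:
            best, best_score = s, score
    return best

signal = [0, 0, 1, 3, 1, 0, 0, 0, 0]
shifted = [0, 0, 0, 0, 1, 3, 1, 0, 0]   # same profile moved 2 samples right
shift = estimate_shift(signal, shifted, max_shift=4)
depth_um = shift * 0.5                   # hypothetical microns-per-pixel calibration
```

In the real system the shift is measured between images formed with complementary phase-mask kernels, and the same depth estimate then drives the artefact compensation.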
14

Axelsson, Natalie. "Depth of Field Rendering from Sparsely Sampled Pinhole Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281771.

Full text
Abstract:
Optical lenses cause foreground and background objects to appear blurred, an effect called depth of field. Each point in the scene is projected onto the imaging plane as a semitransparent circle of confusion (CoC) whose diameter depends on the distance between the point and the lens. In images rendered with a pinhole camera, the entire scene is in focus, but depth of field may be added synthetically for photorealism, aesthetics, or attention-guiding purposes. However, most algorithms for depth of field rendering are either computationally expensive or produce noticeable artifacts. This report evaluates two different algorithms for depth of field rendering. Both algorithms are independent of the rendering technique. The first renders only a single pinhole image and uses a light-field based method for image synthesis. The second renders up to 12 pinhole images and uses CoC gathering to create defocus blur. Ideas from both methods are combined in a novel algorithm which uses sparse samples to approximate the light field. Our method produces a closer physical approximation than the other algorithms and avoids common artifacts. However, it may produce ghosting artifacts at low computation times. We evaluate the methods by comparing rendered images to an assumed ground truth generated with the accumulation buffer method. Physical accuracy is measured through structural similarity (SSIM) while artifacts are evaluated through visual inspection. Computation times are measured in the Inviwo software.
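The circle-of-confusion relation the abstract refers to follows directly from thin-lens geometry: a point at distance d, with the lens focused at distance s, images to a disc whose diameter grows with the focus error. A minimal sketch, with illustrative lens values:

```python
# Thin-lens circle-of-confusion sketch: for aperture diameter A, focal length
# f, focus distance s and point distance d, the CoC diameter is
#   c = A * |d - s| / d * f / (s - f)
# so points in the focal plane (d == s) image as points (c == 0).

def coc_diameter(d, s, focal_length, aperture):
    """Circle-of-confusion diameter for a point at distance d (same units)."""
    return aperture * abs(d - s) / d * focal_length / (s - focal_length)

# Lens: f = 50 mm, aperture 25 mm, focused at 2000 mm.
in_focus = coc_diameter(2000.0, 2000.0, 50.0, 25.0)   # point in the focal plane
behind = coc_diameter(4000.0, 2000.0, 50.0, 25.0)     # defocused background point
```

Both algorithms evaluated in the thesis ultimately distribute each point's energy over a disc of roughly this diameter, whether by gathering or by light-field synthesis.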
15

Liu, Yang. "Simulating depth of field using per-pixel linked list buffer." Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1598036.

Full text
Abstract:

In this thesis, I present a method for simulating three characteristics of depth-of-field images: partial occlusion, bokeh and blur. Retrieving color from occluded surfaces is achieved by constructing a per-pixel linked list buffer, which requires only two render passes. Additionally, the per-pixel linked list buffer eliminates the memory overhead of empty pixels in depth layers. The bokeh and blur effects are accomplished by image-space point splatting (Lee 2008). I demonstrate how point splatting can be used to account for the effect of aperture shape and intensity distribution on bokeh. Spherical aberration and chromatic aberration can be approximated using a custom pre-built sprite. Together as a package, this method is capable of matching the realism of multi-perspective methods and layered methods.
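The buffer structure described above can be sketched on the CPU: one head index per pixel plus a shared node pool, mirroring the two GPU buffers. Names and layout here are illustrative, not the thesis code:

```python
# Sketch of a per-pixel linked list (an A-buffer variant): every fragment is
# appended to a shared node pool and linked to its pixel's previous head, so
# empty pixels cost only a single head entry -- the memory saving over fixed
# depth layers that the abstract points out.

class PerPixelLinkedList:
    def __init__(self, width, height):
        self.heads = [-1] * (width * height)   # head node index per pixel (-1 = empty)
        self.nodes = []                        # shared pool: (color, depth, next_index)
        self.width = width

    def insert(self, x, y, color, depth):
        pixel = y * self.width + x
        self.nodes.append((color, depth, self.heads[pixel]))
        self.heads[pixel] = len(self.nodes) - 1

    def fragments(self, x, y):
        """All fragments stored at (x, y), sorted front-to-back by depth."""
        out, i = [], self.heads[y * self.width + x]
        while i != -1:
            color, depth, i = self.nodes[i]
            out.append((depth, color))
        return sorted(out)

buf = PerPixelLinkedList(4, 4)
buf.insert(1, 2, "red", 0.7)     # occluded surface
buf.insert(1, 2, "blue", 0.3)    # nearer surface at the same pixel
```

Keeping the occluded "red" fragment around is what makes partial occlusion possible: a blurred near surface can reveal color from surfaces behind it.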

16

Henriksson, Ola. "A Depth of Field Algorithm for Realtime 3D Graphics in OpenGL." Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1169.

Full text
Abstract:

The company where this thesis was formulated constructs VR applications for the medical environment. The hardware used is ordinary desktops with consumer-level graphics cards and haptic devices. In medicine some operations require microscopes or cameras. In order to simulate these in a virtual reality environment for educational purposes, the effect of depth of field, or focus, has to be considered.

A working algorithm that generates this optical occurrence in realtime, stereo-rendered computer graphics is presented in this thesis. The algorithm is implemented in OpenGL and C++ to later be combined with a VR application simulating eye surgery, which is built with OpenGL Optimizer.

Several different approaches are described in this report. The call for realtime stereo rendering (~60 fps) means taking advantage of the graphics hardware to a great extent. In OpenGL this means using the extensions of a specific graphics chip for better performance; in this case the algorithm is implemented for a GeForce3 card.

To increase the speed of the algorithm much of the workload is moved from the CPU to the GPU (Graphics Processing Unit). By re-defining parts of the ordinary OpenGL pipeline via vertex programs, a distance-from-focus map can be stored in the alpha channel of the final image with little time loss.

This can effectively be used to blend a previously blurred version of the scene with a normal render. Different techniques to quickly blur a rendered image are discussed; to keep the speed up, solutions that require moving data from the graphics card are not an option.
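The final compositing step described above amounts to an alpha-driven linear blend between the sharp render and its blurred copy. A minimal sketch, assuming the distance-from-focus value has already been normalised to [0, 1]:

```python
# Illustrative per-pixel blend: alpha holds the normalised distance from the
# focal plane, so alpha = 0 keeps the sharp pixel and alpha = 1 substitutes
# the pre-blurred one, with a smooth transition in between.

def blend(sharp, blurred, alpha):
    """Linear interpolation of two RGB pixels by the stored blur factor."""
    return tuple(s + (b - s) * alpha for s, b in zip(sharp, blurred))

pixel_in_focus = blend((200, 100, 50), (120, 120, 120), 0.0)
pixel_defocused = blend((200, 100, 50), (120, 120, 120), 1.0)
halfway = blend((200, 100, 50), (120, 120, 120), 0.5)
```

On the GPU this is a single textured pass; the point of storing the map in the alpha channel is that the blend needs no readback from the graphics card.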

17

Vörös, Csaba, Norbert Zajzon, Endre Turai, and László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth." Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231251.

Full text
Abstract:
The main target of the UNEXMIN project is to develop a fully autonomous submersible robot (UX-1) which can map flooded underground mines, and also deliver information about the potential raw materials of the mines. There are ca. 30 000 abandoned mines in Europe, from which many of them still could hold significant reserves of raw materials. Many of these mines are nowadays flooded and the latest information about them could be more than 100 years old. Although it is giving limited information, magnetic measurement methods, which detecting the local distortions of the Earth’s magnetic field can be very useful to identify raw materials in the mines. The source of the magnetic field which is independent of any human events comes from the Earths own magnetic field. The strength of this field depends by the magnetic materials in the near environment of the investigated point. The ferromagnetic materials have powerful effect to influence the magnetic field. In the nature, iron containing minerals, magnetite and hematite have the most powerful effect usually. The magnetic measurement methods are rapid and affordable techniques in geophysical engineering practice. For magnetic field strength and direction measurement FGM-1 sensors (manufactured by Speake & Co Llanfapley) were selected for the UX-1 robot. The sensor heads overall dimension are very small and their energy consumption is negligible. The FGM-1 sensor was placed and aligned in a plastic cylinder to ensure that the magnetic-axis aligned with the mechanical axis of the tube for more accurate measurement. There are 3 pairs of FGM-1 sensors needed for the proper determination of the current magnetic field (strength and direction). The position of sensor pairs need to be perpendicular compared to each other. The 3 pairs of FGM-1 sensors generate an arbitrary position Cartesian coordinate system. 
We further installed temperature sensors on all FGM-1 probes to compensate for their temperature dependency, even though its effect is small. The UX-1 robot also contains the electronics block, which controls the three FGM-1 magnetic field sensor pairs and stores the measured data. The block comprises the power module, the sensor interface modules with temperature compensation, the microcontroller module and the RS485 communication module. The output is a temperature-compensated frequency value for each sensor pair. During post-processing, the measured magnetic signal must be converted from the local XYZ coordinate system of the UX-1 into a universal coordinate system; the exact position, heading and inclination of the robot must therefore be known throughout the dive. The measured magnetic signal will be placed onto the mine map reconstructed from the delivered 3D point cloud, so the exact locations of magnetic anomalies can be identified. Few magnetic sources are expected in the operating environment of the robot, but its self-generated magnetic noise can be significant: the many cooling fans, microcontrollers and thrusters inside the pressure hull of the UX-1 all generate magnetic fields. The constant magnetic noise from the cooling fans can be compensated, but the varying fields caused by, e.g., changing thruster speeds are problematic. We designed a calibration method by which the effect of the main thrusters (even at changing speed) and the effect of the constant cooling fans can be compensated. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 690008.
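The post-processing step described above — rotating each three-axis magnetometer sample from the robot's local XYZ frame into a universal frame using the robot's known attitude — can be sketched as follows. This is an illustrative sketch, not code from the project; the yaw-pitch-roll convention and function names are assumptions:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-world rotation, Z-Y-X (yaw, pitch, roll) convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_world(b_body, roll, pitch, yaw):
    """Rotate one 3-axis magnetometer sample from the robot's local
    XYZ frame into the universal (world) frame."""
    return rotation_matrix(roll, pitch, yaw) @ np.asarray(b_body, dtype=float)
```

Applied per sample along the dive track, this lets the rotated field vectors be placed directly onto the reconstructed mine map.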
APA, Harvard, Vancouver, ISO, and other styles
18

McDonnell, Ian. "Object segmentation from low depth of field images and video sequences." Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58630/.

Full text
Abstract:
This thesis addresses the problem of autonomous object segmentation. To do so, the proposed segmentation method uses some prior information, namely that the image to be segmented will have a low depth of field and that the object of interest will be more in focus than the background. To differentiate the object from the background scene, a multiscale wavelet-based focus assessment is proposed. The focus assessment is used to generate a focus intensity map, and a sparse-fields level-set implementation of active contours is used to segment the object of interest. The initial contour is generated using a grid-based technique. The method is extended to segment low depth of field video sequences, with each successive initialisation for the active contours generated from the binary dilation of the previous frame's segmentation. Experimental results show good segmentations can be achieved with a variety of different images, video sequences and objects, with no user interaction or input. The method is applied to two different areas. In the first, the segmentations are used to automatically generate trimaps for use with matting algorithms. In the second, the method is used as part of a shape-from-silhouettes 3D object reconstruction system, replacing the need for a constrained background when generating silhouettes. In addition, not using thresholding to perform the silhouette segmentation allows objects with dark components or areas to be segmented accurately. Some examples of 3D models generated using silhouettes are shown.
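The focus-measure idea in this abstract — scoring each region by the energy of its wavelet detail coefficients across scales — can be sketched with a hand-rolled one-level Haar transform applied recursively. This is an illustrative stand-in for the thesis's multiscale wavelet assessment; the level count and energy definition are assumptions:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def focus_map(img, levels=2):
    """Sum detail-band energy across scales as a per-pixel focus score:
    in-focus (sharp) regions score high, defocused regions score low."""
    h, w = img.shape
    score = np.zeros((h, w))
    current = img.astype(float)
    for level in range(1, levels + 1):
        ll, lh, hl, hh = haar_dwt2(current)
        energy = lh ** 2 + hl ** 2 + hh ** 2
        f = 2 ** level
        # upsample coarse energy back to the original resolution
        up = np.repeat(np.repeat(energy, f, axis=0), f, axis=1)[:h, :w]
        score[: up.shape[0], : up.shape[1]] += up
        current = ll
    return score
```

A map like this would then seed the level-set contour rather than be thresholded directly.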
APA, Harvard, Vancouver, ISO, and other styles
19

Rafiee, Gholamreza. "Automatic region-of-interest extraction in low depth-of-field images." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2194.

Full text
Abstract:
Automatic extraction of focused regions from images with low depth-of-field (DOF) is a problem that still lacks an efficient solution. The capability of extracting focused regions can help to bridge the semantic gap by integrating image regions which are meaningfully relevant and generally do not exhibit uniform visual characteristics. There are two main difficulties in extracting focused regions from low-DOF images using high-frequency-based techniques: computational complexity and performance. A novel unsupervised segmentation approach based on ensemble clustering is proposed to extract the focused regions from low-DOF images in two stages. The first stage is to cluster image blocks in a joint contrast-energy feature space into three constituent groups. To achieve this, we make use of a normal mixture-based model along with the standard expectation-maximization (EM) algorithm at two consecutive levels of block size. To avoid the common problem of local optima experienced in many models, an ensemble EM clustering algorithm is proposed. As a result, relevant blocks, i.e., a block-based region-of-interest (ROI) closely conforming to image objects, are extracted. In stage two, two different approaches have been developed to extract the pixel-based ROI. In the first approach, a binary saliency map is constructed from the relevant blocks at the pixel level, based on difference-of-Gaussian (DOG) and binarization methods. Then, a set of morphological operations is employed to create the pixel-based ROI from the map. Experimental results demonstrate that the proposed approach achieves an average segmentation performance of 91.3% and is computationally 3 times faster than the best existing approach. In the second approach, a minimal graph cut is constructed using the max-flow method together with object/background seeds provided by the ensemble clustering algorithm.
Experimental results demonstrate an average segmentation performance of 91.7% and approximately 50% reduction of the average computational time by the proposed colour-based approach compared with existing unsupervised approaches.
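The first-stage clustering can be illustrated with per-block contrast/energy features and a minimal EM fit of a Gaussian mixture. Note this is a plain single-run EM with isotropic components, not the ensemble EM the thesis proposes, and the feature definitions are assumptions:

```python
import numpy as np

def block_features(img, bs=8):
    """Per-block (contrast, high-frequency energy) features:
    contrast = intensity std, energy = mean squared gradient."""
    gy, gx = np.gradient(img.astype(float))
    feats = []
    for i in range(0, img.shape[0] - bs + 1, bs):
        for j in range(0, img.shape[1] - bs + 1, bs):
            patch = img[i:i + bs, j:j + bs].astype(float)
            energy = np.mean(gx[i:i + bs, j:j + bs] ** 2
                             + gy[i:i + bs, j:j + bs] ** 2)
            feats.append((patch.std(), energy))
    return np.array(feats)

def em_isotropic_gmm(X, k=3, iters=50):
    """Minimal EM for an isotropic Gaussian mixture; means initialised
    deterministically by spreading them along the first feature."""
    n, d = X.shape
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]].copy()
    var = np.full(k, X.var() + 1e-9)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | sample i)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(pi) - 0.5 * d * np.log(2 * np.pi * var) - d2 / (2 * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, per-component variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(axis=0) / (d * nk) + 1e-9
    return r.argmax(axis=1)
```

The ensemble variant would run many such fits and combine their partitions to dodge local optima.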
APA, Harvard, Vancouver, ISO, and other styles
20

Vörös, Csaba, Norbert Zajzon, Endre Turai, and László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth." TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23184.

Full text
Abstract:
The main target of the UNEXMIN project is to develop a fully autonomous submersible robot (UX-1) which can map flooded underground mines, and also deliver information about the potential raw materials of the mines. There are ca. 30 000 abandoned mines in Europe, from which many of them still could hold significant reserves of raw materials. Many of these mines are nowadays flooded and the latest information about them could be more than 100 years old. Although it is giving limited information, magnetic measurement methods, which detecting the local distortions of the Earth’s magnetic field can be very useful to identify raw materials in the mines. The source of the magnetic field which is independent of any human events comes from the Earths own magnetic field. The strength of this field depends by the magnetic materials in the near environment of the investigated point. The ferromagnetic materials have powerful effect to influence the magnetic field. In the nature, iron containing minerals, magnetite and hematite have the most powerful effect usually. The magnetic measurement methods are rapid and affordable techniques in geophysical engineering practice. For magnetic field strength and direction measurement FGM-1 sensors (manufactured by Speake & Co Llanfapley) were selected for the UX-1 robot. The sensor heads overall dimension are very small and their energy consumption is negligible. The FGM-1 sensor was placed and aligned in a plastic cylinder to ensure that the magnetic-axis aligned with the mechanical axis of the tube for more accurate measurement. There are 3 pairs of FGM-1 sensors needed for the proper determination of the current magnetic field (strength and direction). The position of sensor pairs need to be perpendicular compared to each other. The 3 pairs of FGM-1 sensors generate an arbitrary position Cartesian coordinate system. 
We further developed / had installed temperature sensors to all FGM-1 probes, to compensate the temperature dependency even though it has small effect. The UX-1 robot also contains the electronic block, which controls the three FGM-1 magnetic field sensor pairs, and store the measured data. The block contains the power module, the sensor interface modules with temperature compensation, the microcontroller module and the RS485 communication module also. The output data is a temperature compensated frequency value for each sensor pair. The measured magnetic signal from the local XYZ coordinate system (local for the UX-1) should be converted to a universal coordinate system during post processing of the data. The exact position, facing and inclination of the robot must be known in the whole dive time to be able to do the above conversion. The measured magnetic signal will be placed into the measured mine map, reconstructed from the delivered 3D point cloud, thus the exact location of the magnetic anomalies can be identified. Not much magnetic source is estimated in the operating environment of the robot, but its own generated magnetic noise can be significant. There will be many cooling fans, micro-controllers and multiple thrusters inside the pressure-hull of the UX-1, which generate magnetic field. The constant magnetic noise coming from the cooling fans can be compensated, but the varying fields caused by eg. the different thrusters’s speed is problematic. We design a calibration method, where the effect of the main thrusters (even with changing speed) and the effect of the constant cooling fans could be compensated. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 690008.
APA, Harvard, Vancouver, ISO, and other styles
21

He, Ruojun. "Square Coded Aperture: A Large Aperture with Infinite Depth of Field." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1418078808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Helbing, Katrin G. "Effects of display contrast and field of view on distance perception." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020220/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Goss, Keith Michael. "Multi-dimensional polygon-based rendering for motion blur and depth of field." Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Widjanarko, Taufiq. "Hyperspectral interferometry for single-shot profilometry and depth-resolved displacement field measurement." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8349.

Full text
Abstract:
A new approach to the absolute measurement of two-dimensional optical path differences is presented in this thesis. The method, which incorporates a white light interferometer and a hyperspectral imaging system, is referred to as Hyperspectral Interferometry. A prototype of the Hyperspectral Interferometry (HSI) system has been designed, constructed and tested for two types of measurement: for surface profilometry and for depth-resolved displacement measurement, both of which have been implemented so as to achieve single-shot data acquisition. The prototype has been shown to be capable of performing a single-shot 3-D shape measurement of an optically-flat step-height sample, with less than 5% difference from the result obtained by a standard optical (microscope) based method. The HSI prototype has been demonstrated to be able to perform single-shot measurement with an unambiguous 352 µm depth range and an rms measurement error of around 80 nm. The prototype has also been tested to perform measurements on optically rough surfaces; the rms error of these measurements was found to increase to around 4× that of the smooth surface. For the depth-resolved displacement field measurements, an experimental setup was designed and constructed in which a weakly-scattering sample underwent simple compression with a PZT actuator. Depth-resolved displacement fields were reconstructed from pairs of hyperspectral interferograms. However, the experimental results did not show the expected result of linear phase variation with depth. Analysis of several possible causes has been carried out, with the most plausible reasons being excessive scattering-particle density inside the sample and insignificant deformation of the sample due to insufficient physical contact between the transducer and the sample.
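The core signal-processing idea behind such spectral interferometry — each pixel records intensity as a function of wavenumber, and the optical path difference appears as the frequency of the spectral fringes — can be sketched as a single FFT peak search. This is illustrative only and not the thesis's algorithm; the sampling values are invented:

```python
import numpy as np

def opd_from_spectrum(intensity, d_nu):
    """Estimate the optical path difference from a spectral interferogram
    sampled uniformly in wavenumber nu (spacing d_nu, units 1/m):
    I(nu) = a + b*cos(2*pi*nu*d)  ->  |FFT| peaks at path difference d."""
    I = np.asarray(intensity, dtype=float)
    spec = np.abs(np.fft.rfft(I - I.mean()))   # remove DC, keep fringes
    path = np.fft.rfftfreq(len(I), d=d_nu)     # conjugate variable: metres
    return path[np.argmax(spec)]
```

The unambiguous depth range of such a system is set by the wavenumber sampling interval (a Nyquist limit of 1/(2·d_nu)), which is why the prototype quotes a finite 352 µm range.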
APA, Harvard, Vancouver, ISO, and other styles
26

Reddy, Serendra. "Automatic 2D-to-3D conversion of single low depth-of-field images." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.

Full text
Abstract:
This research presents a novel approach to the automatic rendering of 3D stereoscopic disparity image pairs from single 2D low depth-of-field (LDOF) images. Initially a depth map is produced through the assignment of depth to every delineated object and region in the image. Subsequently the left and right disparity images are produced through depth image-based rendering (DIBR). The objects and regions in the image are initially assigned to one of six proposed groups or labels. Labelling is performed in two stages. The first involves the delineation of the dominant object-of-interest (OOI). The second involves the global object and region grouping of the non-OOI regions. The matting of the OOI is also performed in two stages. Initially the in-focus foreground or region-of-interest (ROI) is separated from the out-of-focus background. This is achieved through the correlation of edge, gradient and higher-order statistics (HOS) saliencies. Refinement of the ROI is performed using k-means segmentation and CIEDE2000 colour-difference matching. Subsequently the OOI is extracted from within the ROI through analysis of the dominant gradients and edge saliencies together with k-means segmentation. Depth is assigned to each of the six labels by correlating Gestalt-based principles with vanishing point estimation, gradient plane approximation and depth from defocus (DfD). To minimise some of the dis-occlusions that are generated through the 3D warping sub-process within the DIBR process, the depth map is pre-smoothed using an asymmetric bilateral filter. Hole-filling of the remaining dis-occlusions is performed through nearest-neighbour horizontal interpolation, which incorporates depth as well as direction of warp. To minimise the effects of the lateral striations, specific directional Gaussian and circular averaging smoothing is applied independently to each view, with additional average filtering applied to the border transitions.
Each stage of the proposed model is benchmarked against data from several significant publications. Novel contributions are made in the sub-speciality fields of ROI estimation, OOI matting, LDOF image classification, Gestalt-based region categorisation, vanishing point detection, relative depth assignment and hole-filling or inpainting. An important contribution is made towards the overall knowledge base of automatic 2D-to-3D conversion techniques, through the collation of existing information, expansion of existing methods and development of newer concepts.
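The DIBR warp plus nearest-neighbour horizontal hole-filling described in the abstract can be sketched as follows; this is a toy grayscale version with an invented disparity scaling, not the thesis's implementation:

```python
import numpy as np

def render_view(img, depth, max_disp=8, direction=1):
    """Forward-warp one image into a stereo view: nearer pixels (depth
    near 1) shift further; dis-occlusions are filled by nearest-neighbour
    horizontal propagation, i.e. from the background side of the hole."""
    h, w = img.shape
    disp = np.rint(np.asarray(depth, dtype=float) * max_disp).astype(int)
    view = np.full((h, w), np.nan)
    for y in range(h):
        # warp far-to-near so nearer pixels overwrite the background
        for x in np.argsort(disp[y]):
            nx = x + direction * disp[y, x]
            if 0 <= nx < w:
                view[y, nx] = img[y, x]
        # fill remaining holes by propagating horizontally
        xs = range(w) if direction > 0 else range(w - 1, -1, -1)
        for x in xs:
            if np.isnan(view[y, x]):
                src = x - direction
                view[y, x] = view[y, src] if 0 <= src < w else 0.0
    return view
```

Pre-smoothing the depth map (as the thesis does with an asymmetric bilateral filter) shrinks the disparity jumps that create these holes in the first place.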
APA, Harvard, Vancouver, ISO, and other styles
27

Abbott, Joshua E. "Interactive Depth-Aware Effects for Stereo Image Editing." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3712.

Full text
Abstract:
This thesis introduces methods for adding user-guided depth-aware effects to images captured with a consumer-grade stereo camera with minimal user interaction. In particular, we present methods for highlighted depth-of-field, haze, depth-of-field, and image relighting. Unlike many prior methods for adding such effects, we do not assume prior scene models or require extensive user guidance to create such models, nor do we assume multiple input images. We also do not require specialized camera rigs or other equipment such as light-field camera arrays, active lighting, etc. Instead, we use only an easily portable and affordable consumer-grade stereo camera. The depth is calculated from a stereo image pair using an extended version of PatchMatch Stereo designed to compute not only image disparities but also normals for visible surfaces. We also introduce a pipeline for rendering multiple effects in the order they would occur physically. Each can be added, removed, or adjusted in the pipeline without having to reapply subsequent effects. Individually or in combination, these effects can be used to enhance the sense of depth or structure in images and provide increased artistic control. Our interface also allows editing the stereo pair together in a fashion that preserves stereo consistency, or the effects can be applied to a single image only, thus leveraging the advantages of stereo acquisition even to produce a single photograph.
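Of the listed effects, depth-aware haze has a particularly compact form under the standard single-scattering model; a sketch follows (the model choice and parameter names are assumptions, not taken from the thesis):

```python
import numpy as np

def add_haze(img, depth, beta=1.0, airlight=1.0):
    """Depth-aware haze via the standard scattering model
    I = J * t + A * (1 - t), with transmission t = exp(-beta * depth):
    distant pixels fade toward the airlight colour A."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    return np.asarray(img, dtype=float) * t + airlight * (1.0 - t)
```

Because the effect is a pure per-pixel function of depth, it slots naturally into a pipeline where each effect can be added, removed, or re-tuned without recomputing the others.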
APA, Harvard, Vancouver, ISO, and other styles
28

Willingham, David George Winograd Nicholas. "Strong-field photoionization of sputtered neutral molecules for chemical imaging and depth profiling." [University Park, Pa.] : Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-4536/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sanyal, Poulomi. "Depth of field enhancement of a particle analyzer microscope using wave-front coding." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83931.

Full text
Abstract:
In this thesis we present an analytical solution to the problem of improving the depth of field (DOF) of a certain large magnification imaging system with a high numerical aperture (NA), illuminated with an incoherent source of light. As there is a definite trade-off between the focal depth and resolution achievable with such a system, our challenge was to find a system that would achieve both these objectives and at the same time be cost-effective and easy to implement.
Our choice of technique therefore was a novel optical wave front manipulation mechanism involving subsequent image restoration via digital post-processing. This technique is known as wave front coding. The coding is achieved with the help of an optical element known as a phase plate, and the coded image is then electronically restored with the help of a digital post-processing filter.
The three steps involved in achieving our desired goal were, modeling the imaging system to be studied and studying its characteristics before DOF enhancement, designing the phase plate and finally, choosing and designing the appropriate decoding filter. After an appropriate phase plate was modeled, it was incorporated into the pre-existing optics and subsequently optimized. The intermediate image produced by the resulting system was then studied for defocus performance. Finally, the intermediate image was restored using a digital filter and studied once again for DOF characteristics. Other factors, like optical aberrations that might limit system performance were also taken into consideration.
In the end a simpler and cost-effective method of fabricating the suggested phase plate for single-wavelength operation was suggested. The results of our simulations were promising, and sufficiently high resolution imaging was achievable within the entire enhanced DOF region of ±200 µm from the point of best focus. The DOF without coding was around ±50 µm, but with coding the spot size remained fairly constant over the entire 400 µm deep region of interest. Thus a 4-times increase in the overall system DOF was achieved due to wave front coding.
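The wave front coding idea — a cubic phase plate in the pupil making the point spread function nearly invariant to defocus — can be sketched numerically by Fourier-transforming a phase-modulated pupil. This is an illustrative model, not the thesis's optical design; `alpha` (cubic strength) and the defocus term `w20` are expressed in waves:

```python
import numpy as np

def pupil_psf(alpha, w20, n=128):
    """PSF of a circular pupil carrying a cubic phase mask (strength
    alpha, waves) plus defocus w20 (waves): |FFT(pupil field)|^2."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    inside = (X ** 2 + Y ** 2) <= 1.0
    phase = 2 * np.pi * (alpha * (X ** 3 + Y ** 3) + w20 * (X ** 2 + Y ** 2))
    field = inside * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()
```

Comparing `pupil_psf(0, w20)` against `pupil_psf(alpha, w20)` over a range of `w20` shows the coded PSF changing far less with defocus, which is exactly what makes a single deconvolution filter valid across the extended DOF.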
APA, Harvard, Vancouver, ISO, and other styles
30

Sorensen, Jordan (Jordan P. ). "Software simulation of depth of field effects in video from small aperture cameras." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61577.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-73).
This thesis proposes a technique for post-processing digital video to introduce a simulated depth of field effect. Because the technique is implemented in software, it affords the user greater control over the parameters of the effect (such as the amount of defocus, aperture shape, and defocus plane) and allows the effect to be used even on hardware which would not typically allow for depth of field. In addition, because it is purely a post-processing technique and requires no change in capture method or hardware, it can be used on any video and introduces no new costs. This thesis describes the technique, evaluates its performance on example videos, and proposes further work to improve the technique's reliability.
by Jordan Sorensen.
M.Eng.
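The kind of post-processing the thesis describes — per-pixel defocus driven by distance from a user-chosen focal plane — can be sketched by blending between precomputed blur levels. This is a toy grayscale sketch; the linear radius model and box kernel are assumptions:

```python
import numpy as np

def box_blur(img, r):
    """Naive box blur of radius r with edge clamping."""
    img = np.asarray(img, dtype=float)
    if r == 0:
        return img.copy()
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = pad[y:y + k, x:x + k].mean()
    return out

def simulated_dof(img, depth, focus_depth, max_radius=3):
    """Per-pixel blend between precomputed blur levels; blur radius
    grows linearly with distance from the chosen defocus plane."""
    img = np.asarray(img, dtype=float)
    radius = np.clip(np.abs(np.asarray(depth, dtype=float) - focus_depth)
                     * max_radius, 0.0, max_radius)
    levels = [box_blur(img, r) for r in range(max_radius + 1)]
    lo = np.floor(radius).astype(int)
    hi = np.minimum(lo + 1, max_radius)
    frac = radius - lo
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = ((1 - frac[y, x]) * levels[lo[y, x]][y, x]
                         + frac[y, x] * levels[hi[y, x]][y, x])
    return out
```

Exposing `focus_depth`, `max_radius`, and the kernel shape as user parameters is what gives a software approach its flexibility over in-camera defocus.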
APA, Harvard, Vancouver, ISO, and other styles
31

Hu, Guang-hua. "Extending the depth of focus using digital image filtering." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45653.

Full text
Abstract:

Two types of image processing methods capable of forming a composite image from a set of image slices which have in-focus as well as out-of-focus segments are discussed. The first type is based on space domain operations and has been discussed in the literature. The second type, to be introduced, is based on the intuitive concept that the spectral energy distribution of a focused object is biased towards lower frequencies after blurring. This approach requires digital image filtering in the spatial frequency domain. A comparison among methods of both types is made using a quantitative fidelity criterion.


Master of Science
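The second (frequency-domain) approach can be sketched as a blockwise competition over a focus stack, keeping the slice whose block carries the most spectral energy away from DC. This is an illustrative sketch; the block size and DC-suppression window are assumptions:

```python
import numpy as np

def highpass_energy(block):
    """Spectral energy of a block outside the lowest frequencies;
    blurring drains this energy, so in-focus blocks score highest."""
    F = np.fft.fftshift(np.fft.fft2(block - block.mean()))
    c = block.shape[0] // 2
    F[c - 1:c + 2, c - 1:c + 2] = 0.0     # drop the DC neighbourhood
    return float(np.sum(np.abs(F) ** 2))

def composite(stack, bs=8):
    """Blockwise: keep the slice whose block has the most high-frequency
    energy, i.e. the slice judged most in focus at that location."""
    h, w = stack[0].shape
    out = np.zeros((h, w))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            blocks = [s[i:i + bs, j:j + bs].astype(float) for s in stack]
            best = max(range(len(stack)),
                       key=lambda k: highpass_energy(blocks[k]))
            out[i:i + bs, j:j + bs] = blocks[best]
    return out
```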
APA, Harvard, Vancouver, ISO, and other styles
32

Atif, Muhammad [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "Optimal Depth Estimation and Extended Depth of Field from Single Images by Computational Imaging using Chromatic Aberrations / Muhammad Atif ; Betreuer: Bernd Jähne." Heidelberg : Universitätsbibliothek Heidelberg, 2013. http://d-nb.info/1177382679/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Brooker, Julian P. "Eye movement controlled synthetic depth of field blurring in stereographic displays of virtual environments." Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Christoffersson, Anton. "Real-time Depth of Field with Realistic Bokeh : with a Focus on Computer Games." Thesis, Linköpings universitet, Informationskodning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-163080.

Full text
Abstract:
Depth of field is a naturally occurring effect in lenses describing the distance between the closest and furthest object that appears in focus. The effect is commonly used in film and photography to direct a viewer's focus, give a scene more complexity, or to improve aesthetics. In computer graphics, the same effect is possible, but since there are no natural occurrences of lenses in the virtual world, other ways are needed to achieve it. There are many different approaches to simulate depth of field, but not all are suited for real-time use in computer games. In this thesis, multiple methods are explored and compared to achieve depth of field in real-time with a focus on computer games. The aspect of bokeh is also crucial when considering depth of field, so during the thesis a method to simulate a bokeh effect similar to reality is explored. Three different methods based on the same approach were implemented to research this subject, and their time and memory complexity were measured. A questionnaire was performed to measure the quality of the different methods. The result is three similar methods, but with noticeable differences in both quality and performance. The results give the reader an overview of different methods and directions for implementing them on their own, based on which requirements suit them.
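A building block common to most real-time depth-of-field methods, including those compared here, is the thin-lens circle-of-confusion diameter that maps scene depth to blur size; a sketch (the thin-lens formula is standard, the parameterisation is an assumption):

```python
def coc_diameter(depth, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter on the sensor (same
    distance units throughout) for an object at `depth`, with the
    camera focused at `focus_dist`. Aperture = focal_len / f_number."""
    aperture = focal_len / f_number
    return abs(aperture * focal_len * (depth - focus_dist)
               / (depth * (focus_dist - focal_len)))
```

In a game renderer this value, evaluated per pixel from the depth buffer, typically selects the blur kernel size, while the kernel's shape determines the bokeh.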
APA, Harvard, Vancouver, ISO, and other styles
35

Vorhies, John T. "Low-complexity Algorithms for Light Field Image Processing." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590771210097321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gullapalli, Sai Krishna. "Wave-Digital FPGA Architectures of 4-D Depth Enhancement Filters for Real-Time Light Field Image Processing." University of Akron / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=akron1574443263497981.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Williams, Sarah. "Depth of field : aspects of photography and film in the selected work of Michael Ondaatje." Thesis, University of Sussex, 2010. http://sro.sussex.ac.uk/id/eprint/2401/.

Full text
Abstract:
This thesis examines aspects of photography and film in the selected work of Michael Ondaatje, specifically analysing their implementation and function within The Collected Works of Billy the Kid, Running In The Family, In the Skin of a Lion and The English Patient. Ondaatje's two films, Sons of Captain Poetry and The Clinton Special, as well as Anthony Minghella's film adaptation of The English Patient, are also examined. My critical approach is eclectic and driven by the demands of individual texts, focusing on some of the ways in which photography and film affect and help define the formal and thematic components of the prose works. My approach addresses photographic perspective and reader response with specific reference to the ontological nature of photographic stillness, as well as various components of filmic writing and the challenges of prose to screen transfer in cinematic adaptation. This study reveals how an exploration of the photographic and filmic aspects of the texts provides new insights into the way Ondaatje's work promotes indeterminacy of meaning and a blurring of the boundaries between genres.
APA, Harvard, Vancouver, ISO, and other styles
38

Lucotte, Bertrand M. "Fourier optics approaches to enhanced depth-of-field applications in millimetre-wave imaging and microscopy." Thesis, Heriot-Watt University, 2010. http://hdl.handle.net/10399/2323.

Full text
Abstract:
In the first part of this thesis millimetre-wave interferometric imagers are considered for short-range applications such as concealed weapons detection. Compared to real aperture systems, synthetic aperture imagers at these wavelengths can provide improvements in terms of size, cost, depth-of-field (DoF) and imaging flexibility via digital refocusing. Mechanical scanning between the scene and the array is investigated to reduce the number of antennas and correlators which drive the cost of such imagers. The trade-offs associated with this hardware reduction are assessed before jointly optimising the array configuration and scanning motion. To that end, a novel metric is proposed to quantify the uniformity of the Fourier-domain coverage of the array; it is maximised with a genetic algorithm. The resulting array demonstrates clear improvements in imaging performance compared to a conventional power-law Y-shaped array. The DoF of antenna arrays, analysed via the Strehl ratio, is shown to be limited even for infinitely small antennas, with the exception of circular arrays. In the second part of this thesis increased DoF in optical systems with Wavefront Coding (WC) is studied. Images obtained with WC are shown to exhibit artifacts that limit the benefits of this technique. An image restoration procedure employing a metric of defocus is proposed to remove these artifacts and therefore extend the DoF beyond the limit of conventional WC systems. A transmission optical microscope was designed and implemented to operate with WC. After suppression of partial coherence effects, the proposed image restoration method was successfully applied and extended-DoF images are presented.
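The Strehl-ratio analysis of DoF mentioned above reduces, for the on-axis case, to averaging the pupil's complex wavefront; a sketch (an illustrative numerical model, not the thesis's derivation):

```python
import numpy as np

def strehl_ratio(wavefront_waves, n=256):
    """On-axis Strehl ratio of a circular pupil whose wavefront error
    is given in waves: S = |<exp(i*2*pi*W)>|^2 averaged over the pupil.
    S = 1 for a perfect wavefront; defocus drives it toward 0."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X ** 2 + Y ** 2) <= 1.0
    phi = 2 * np.pi * wavefront_waves(X, Y)
    return float(np.abs(np.exp(1j * phi)[pupil].mean()) ** 2)
```

Sweeping a defocus term `W(X, Y) = w20 * (X**2 + Y**2)` and recording where S drops below a tolerance gives one quantitative DoF definition of the kind used in the thesis.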
APA, Harvard, Vancouver, ISO, and other styles
39

Kubíček, Martin. "Creating a Depth Map of Eye Iris in Visible Spectrum." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403824.

Full text
Abstract:
This thesis aims to design and put into practice a methodology for imaging the iris of the eye in the visible spectrum. It emphasises image quality, faithful colour rendition with respect to the real subject and, above all, a continuous depth of field, which reveals previously unexamined aspects and details of the iris. Finally, it also aims to minimise the iris's exposure to physical stress. The methodology contains precise procedures for imaging the iris and thereby ensures the consistency of the images. This makes it possible to build iris databases that track their development over time or with respect to some other aspect, such as the psychological state of the imaged person. The thesis first introduces the anatomy of the human eye, and of the iris in particular, followed by known methods of iris imaging. The next part deals with proper illumination of the iris, which is necessary for the required level of image quality but exposes the eye to considerable physical stress; a compromise between these aspects must therefore be found. A key part is the description of the methodology itself, including a detailed imaging procedure. The thesis then covers the necessary post-production steps, such as merging images with different depths of field into a single continuously sharp image, or applying filters to remove defects in the images. The final part is divided into an evaluation of the results and a conclusion, which discusses possible extensions or modifications of the methodology so that it could be used outside laboratory conditions.
APA, Harvard, Vancouver, ISO, and other styles
40

Keogh, Teri M. "Changes in competition intensity, herbivory and stress along a soil depth gradient in an old field." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0021/MQ58467.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

O'Donnell, David Patrick. "FIELD AND MODELING STUDY OF THE EFFECTS OF STREAM DEPTH AND GROUND WATER DISCHARGE ON HYDROGEOPHYSICAL." Master's thesis, Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/187887.

Full text
Abstract:
Geology
M.S.
Valley Creek, an urbanized stream in Southeastern Pennsylvania, has undergone changes typical of streams in urbanized areas, such as bank erosion, channel redirection, and habitat disruption. One area of disruption that has been little studied is the hyporheic zone, the top layer of the streambed where stream water exchanges with subsurface water and chemical transformations occur. The hyporheic zone of an 18 m reach of Valley Creek in Ecology Park was characterized using a tracer test coupled with a hydrogeophysical survey. Nested wells screened at depths of 20, 35, 50, and 65 cm were placed at four locations along the center of the stream to monitor the passage of the salt tracer through the hyporheic zone. Results from well sampling were compared with time-lapse Electrical Resistivity Tomography (ERT) monitoring of the stream tracer. The streambed was also characterized using temperature probes to calculate the stream water-groundwater flux and freeze core samples to characterize heterogeneities in streambed sediment. Models were created using MODFLOW, MATLAB, and EARTH IMAGER 2-D to understand differences between Ecology Park and Crabby Creek, a tributary within the Valley Creek watershed, where similar studies were performed in 2009 and 2010. Hyporheic exchange and ERT applicability differed between the two study sites. At Ecology Park, tracer was detected only in the 20 cm wells at nests 2 and 4 during the injection period. Noise in the falling limbs of the tracer test breakthrough curves made it difficult to determine whether tracer lingered in the hyporheic zone using well data. ERT surveys were unable to detect tracer lingering after the injection period. At Crabby Creek, tracer was present in all shallow wells, and lingering tracer was detected in the hyporheic zone using ERT during the post-injection period. 
ERT surveys at Ecology Park were less effective than at Crabby Creek for two reasons: the presence of groundwater discharge (which inhibited hyporheic exchange) and increased stream water depth at Ecology Park. Temperature modeling of heat flux data revealed groundwater discharge at three locations. MODFLOW models predicted that this discharge would diminish the length and residence time of subsurface flow paths. Groundwater discharge likely increased along the contact between the hydraulically conductive Elbrook Formation and the less conductive Ledger Formation. Models created with MATLAB and Earth-Imager 2-D showed ERT sensitivity to tracer in the hyporheic zone depended on stream thickness. With increased water depth, more current propagated through the stream, which reduced sensitivity to changes in the hyporheic zone. A sensitivity analysis showed that the resistivity change in the hyporheic zone at Ecology Park (average water depth 0.36 m) would have to exceed 30% to be detectable, which was greater than the induced change during the tracer test. Deeper water also amplified the confounding effect of changes in the background conductivity of the stream water, though time-lapse ERT detected no lingering tracer even after correcting for this drift. Studies performed at Crabby Creek were able to map lingering tracer in the hyporheic zone because the site had a thin water layer (0.1 m), a large percentage increase of conductivity during the tracer test, and no groundwater discharge. Conversely, at Ecology Park groundwater discharge inhibited hyporheic exchange, and imaging sensitivity was reduced by the thicker water layer, demonstrating the limitations of ERT for hyporheic zone characterization. The modified inversion routines used here demonstrated that, with accurate stream conductivity and depth measurements, ERT can be used in some streams as a method for hyporheic characterization by incorporating site-specific conditions.
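The water-depth effect described above can be illustrated with a toy parallel-conductance model (this is not a model from the thesis; the layer thicknesses and conductivities below are illustrative assumptions, and real ERT sensitivity depends on full 2-D current flow):

```python
def streambed_current_fraction(water_depth_m, water_cond, bed_thickness_m, bed_cond):
    """Fraction of injected current flowing through the streambed layer,
    treating the water column and streambed as parallel conductors
    (conductance = thickness x conductivity for each layer)."""
    water_conductance = water_depth_m * water_cond
    bed_conductance = bed_thickness_m * bed_cond
    return bed_conductance / (water_conductance + bed_conductance)

# A thicker water column carries more of the current, leaving less
# sensitivity to resistivity changes in the hyporheic zone:
shallow = streambed_current_fraction(0.10, 0.05, 0.65, 0.02)  # Crabby Creek-like depth
deep = streambed_current_fraction(0.36, 0.05, 0.65, 0.02)     # Ecology Park-like depth
assert deep < shallow
```

Even this crude model reproduces the qualitative finding: the fraction of current sampling the streambed drops as stream depth grows, so a larger tracer-induced resistivity change is needed to be detectable.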
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
42

Kourakos, Alexander William. "Extended Depth-of-focus in a Laser Scanning System Employing a Synthesized Difference-of-Gaussians Pupil." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/33091.

Full text
Abstract:

Traditional laser scanning systems, such as those used for microscopy, typically image objects of finite thickness. If the depth-of-focus of such systems is low, as is the case when a simple clear pupil is used, the object must be very thin or the image will be distorted. Several methods have been developed to deal with this problem. A microscope with a thin annular pupil has a very high depth-of-focus and can image the entire thickness of a sample, but most of the laser light is blocked, and the image shows poor contrast and high noise. In confocal laser microscopy, the depth-of-focus problem is eliminated by using a small aperture to discard information from all but one thin plane of the sample. However, such a system requires scanning passes at many different depths to yield an image of the entire thickness of the sample, which is a time-consuming process and is highly sensitive to registration errors.

In this thesis, a novel type of scanning system is considered. The sample is simultaneously scanned with a combination of two Gaussian laser beams of different widths and slightly different temporal frequencies. Information from scanning with the two beams is recorded with a photodetector, separated electronically, and processed to form an image. This image is similar to one formed by a system using a difference-of-Gaussians pupil, except no light has been blocked or wasted. Also, the entire sample can be scanned in one pass. The depth-of-focus characteristics of this synthesized difference-of-Gaussians pupil are examined and compared with those of well-known functions such as the circular, annular, and conventional difference-of-Gaussians pupils.
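As a rough numerical illustration (not the thesis's own optical model), the effective pupil synthesized by subtracting the narrow-beam signal from the wide-beam signal can be sketched as a radial difference of Gaussians; the beam widths below are arbitrary assumptions:

```python
import numpy as np

def dog_pupil(r, sigma_wide, sigma_narrow):
    """Effective pupil amplitude obtained by subtracting the signal of a
    narrow Gaussian beam from that of a wide one (sigma_wide > sigma_narrow)."""
    return np.exp(-r**2 / (2 * sigma_wide**2)) - np.exp(-r**2 / (2 * sigma_narrow**2))

r = np.linspace(0.0, 1.0, 1001)
p = dog_pupil(r, sigma_wide=0.5, sigma_narrow=0.2)

# The synthesized pupil vanishes on axis and peaks at an annulus-like
# radius, which is why it behaves similarly to an annular pupil (high
# depth-of-focus) while no light is physically blocked.
assert abs(p[0]) < 1e-12
assert r[np.argmax(p)] > 0.0
```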


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
43

Derafshi, Mohammadali H. "The effect of depth of placement of phosphorus fertiliser on the growth and development of field peas." Title page, contents and anstract only, 1997. http://web4.library.adelaide.edu.au/theses/09PH/09phd427.pdf.

Full text
Abstract:
Bibliography: leaves 190-212. This thesis reports on the results of 3 glasshouse and 3 field experiments. The glasshouse experiments measure the effects of depth of placement and level of phosphorus (P) on the growth of field peas (Pisum sativum L. cv. Alma). The results of all the experiments suggest that placing P fertiliser 4-5 cm below the seed of field pea crops will be beneficial in terms of nodulation, P uptake, grain yield and grain P concentration.
APA, Harvard, Vancouver, ISO, and other styles
44

Rogers, Claire. "Depth conversion methods for the Torsk Oilfield : investigating the complex velocity field of the Seaspray Group, Gippsland Basin /." Title page, abstract and table of contents only, 2003. http://web4.library.adelaide.edu.au/theses/09SB/09sbr7421.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Lönroth, Per, and Mattias Unger. "Advanced Real-time Post-Processing using GPGPU techniques." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-14962.

Full text
Abstract:

 

Post-processing techniques are used to change a rendered image as a last step before presentation and include, but are not limited to, operations such as changes of saturation or contrast, as well as more advanced effects like depth-of-field and tone mapping.

Depth-of-field effects are created by changing the focus in an image: the parts close to the focus point are perfectly sharp, while the rest of the image has a variable amount of blur. The effect is widely used in photography and film as a depth cue and has in recent years also been introduced into computer games.

Today’s graphics hardware offers new possibilities in terms of computational capacity. Shaders and GPGPU languages can be used to perform massively parallel operations on graphics hardware and are well suited to game developers.

This thesis presents the theoretical background of some of the most recent and valuable depth-of-field algorithms and describes the implementation of various solutions in the shader domain as well as with GPGPU techniques. The main objective is to analyze various depth-of-field approaches, looking at their visual quality and at how the methods scale performance-wise across different techniques.
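The per-pixel blur radius in depth-of-field algorithms of this kind is typically derived from the thin-lens circle-of-confusion model. A minimal sketch (not code from the thesis; the lens parameters are illustrative):

```python
def circle_of_confusion(depth, focus_dist, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter on the sensor for a point
    at distance `depth`, with the lens focused at `focus_dist`
    (all quantities in metres)."""
    return (aperture_diam * (abs(depth - focus_dist) / depth)
            * focal_len / (focus_dist - focal_len))

# A point at the focus distance is perfectly sharp; blur grows with
# distance from the focal plane (50 mm lens, f/2 aperture, focused at 2 m).
assert circle_of_confusion(2.0, 2.0, 0.05, 0.025) == 0.0
assert circle_of_confusion(4.0, 2.0, 0.05, 0.025) > circle_of_confusion(3.0, 2.0, 0.05, 0.025)
```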

 

APA, Harvard, Vancouver, ISO, and other styles
46

Chow, B. S. "Design a Phase Plate to Extend the Depth of Field for an Inexpensive Microscope System to Have the Muti-focus Ability." Thesis, Sumy State University, 2015. http://essuir.sumdu.edu.ua/handle/123456789/42557.

Full text
Abstract:
We propose an optical technique, also called wavefront coding, that can extend the depth of field optically with a phase plate, without the need for post-capture digital image processing (PDIP). This technique can replace the expensive mechanical scanning currently used to record 3D information, and avoids repeatedly adjusting the structure of the objective. The phase plate can be fabricated by the emerging technology of laser direct-write photoresist patterning followed by reactive ion etching on a germanium substrate. The novelty of our approach is that it dispenses with PDIP. Dependence on PDIP leads conventional research in this field to design a deteriorating phase plate that degrades the images at different depths to the same (poor) quality, so that the degraded images can then be restored digitally with a single inverse optical-transfer-function method. In contrast, we replace the deteriorating plate with an improving plate, which improves the defocused image directly. This freedom from the restoration step allows a better imaging effect when designing the plate; for example, we need not worry about the null points that afflict the inverse method, since no inverse method is required at all.
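The conventional wavefront-coding baseline this work contrasts itself with can be sketched numerically. The code below uses the classic cubic phase mask of Dowski and Cathey, not the improving plate proposed here; the grid size, mask strength, and defocus values are arbitrary assumptions:

```python
import numpy as np

def psf(defocus, phase_plate, n=128):
    """Incoherent PSF of a circular pupil carrying the given phase plate
    plus a quadratic defocus aberration (radians at the pupil edge)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0
    phase = defocus * (X**2 + Y**2) + phase_plate(X, Y)
    pupil = aperture * np.exp(1j * phase)
    h = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return h / h.sum()

def similarity(p, q):
    """Cosine similarity between two normalized PSFs."""
    return float((p * q).sum() / np.sqrt((p * p).sum() * (q * q).sum()))

clear = lambda X, Y: 0.0 * X                 # plain clear pupil
cubic = lambda X, Y: 30.0 * (X**3 + Y**3)    # cubic mask, strength assumed

# With the cubic plate the PSF changes far less between focus and defocus
# than with a clear pupil; this defocus invariance is what makes a single
# restoration filter (or, in this thesis, direct imaging) workable.
s_clear = similarity(psf(0.0, clear), psf(6.0, clear))
s_cubic = similarity(psf(0.0, cubic), psf(6.0, cubic))
```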
APA, Harvard, Vancouver, ISO, and other styles
47

Jøraandstad, Susann. "Use of stacking velocity for depth prediction and lithological indication in the Challum field of the Cooper/Eromanga basin, Queensland /." Title page, abstract and contents only, 1999. http://web4.library.adelaide.edu.au/theses/09SB/09sbj818.pdf.

Full text
Abstract:
Thesis (B.Sc.(Hons.))--University of Adelaide, National Centre of Petroleum Geology and Geophysics, 1999.
Two folded enclosures in pocket inside back cover. Includes bibliographical references (2 leaves).
APA, Harvard, Vancouver, ISO, and other styles
48

"Flexible imaging for capturing depth and controlling field of view and depth of field." COLUMBIA UNIVERSITY, 2009. http://pqdtopen.proquest.com/#viewpdf?dispub=3348435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Custodio, Joao Pedro Barata Lourenco. "Depth Estimation using Light-Field Cameras." Master's thesis, 2014. http://hdl.handle.net/10316/40446.

Full text
Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering, presented to the Faculty of Sciences and Technology of the University of Coimbra
Plenoptic cameras, or light field cameras, are a recent type of imaging device that is starting to regain some popularity. These cameras can acquire the plenoptic function (the 4D light field) and, consequently, output the depth of a scene by exploiting the redundancy created by multi-view geometry, in which a single 3D point is imaged several times. Despite the attention given in the literature to standard plenoptic cameras such as the Lytro, due to their simplicity and lower price, we based our work on results obtained from a multi-focus plenoptic camera (a Raytrix, in our case), due to its quality and higher-resolution images. In this master's thesis, we present an automatic method to estimate the virtual depth of a scene. Since the capture is done with a multi-focus plenoptic camera, we are working with multi-view geometry and lenses with different focal lengths, and we can use that to back-trace the rays in order to obtain the depth. We start by finding salient points and their respective correspondences using a scaled SAD (sum of absolute differences) method. To perform this back-tracing with robust results, we developed a RANSAC-like method, which we call COMSAC (Complete Sample Consensus). It is an iterative method that back-traces the light rays to estimate the depth while eliminating outliers. Finally, since the depth map obtained is sparse, we developed a way to make it dense by random growing. Since we used a publicly available dataset from Raytrix, comparisons between our results and the manufacturer's are also presented. A paper was also submitted to 3DV 2014 (International Conference on 3D Vision), a conference on three-dimensional vision.
Plenoptic cameras, or light field cameras, are devices that are regaining some popularity. These cameras can acquire the plenoptic function (the 4D light field) and, consequently, estimate the depth of a scene using the redundancy created by multi-view geometry, in which a 3D point is projected onto the image several times. Although the literature has paid more attention to standard plenoptic cameras, such as the Lytro, since they are simpler and have a far more attractive price, our work was developed based on the results obtained with a multi-focus plenoptic camera (a Raytrix, in our case), given its superior quality and higher-resolution images. In this master's thesis we present an automatic method for estimating the depth of a scene. Since the scene is captured with a multi-focus plenoptic camera, we are working with multi-view geometry and lenses with different focal lengths, and this fact can be used to trace rays back in order to obtain the depth. We begin by finding salient points and their respective correspondences using a scaled SAD (sum of absolute differences) value. To perform the ray back-tracing with robust results, we developed a RANSAC-style method, which we call COMSAC (Complete Sample Consensus). It is an iterative method that back-traces rays to estimate the depth while eliminating unwanted results. Finally, since the depth map obtained is sparse, we developed a method to densify it using random growing. To test the developed algorithms, we used a publicly available Raytrix dataset, and we also present a comparison between our results and those obtained by Raytrix.
A paper was also submitted to 3DV (International Conference on 3D Vision), a conference on three-dimensional vision.
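The correspondence step described above can be sketched in miniature. The function below is a 1-D illustration of SAD matching (the thesis uses a scaled SAD over 2-D multi-view images; the window size and data here are illustrative assumptions):

```python
import numpy as np

def best_match_sad(patch, row, max_shift):
    """Return the horizontal offset in `row` whose window minimizes the
    sum of absolute differences (SAD) with `patch`."""
    w = patch.shape[0]
    scores = [np.abs(row[s:s + w] - patch).sum() for s in range(max_shift)]
    return int(np.argmin(scores))

rng = np.random.default_rng(0)
row = rng.random(64)
patch = row[10:18].copy()  # a salient window extracted at offset 10

# The minimizer recovers the offset the patch was taken from.
assert best_match_sad(patch, row, 40) == 10
```

In a plenoptic pipeline, such matches between micro-images or sub-aperture views feed the ray back-tracing stage, with an outlier-rejection loop (RANSAC-like, COMSAC in this thesis) on top.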
APA, Harvard, Vancouver, ISO, and other styles
50

Tseng, Yi-Ning, and 曾翊寧. "Realtime Depth of Field Rendering with Bokeh Effect via Interactive Depth Labeling." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/26446918239640282284.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
100 (ROC academic year)
Shallow focus is a photographic technique in which a picture is taken with a small depth of field (DOF), so that objects outside the DOF are blurred to highlight the subject inside it. Common digital cameras, unlike digital single-lens reflex (DSLR) cameras, cannot produce this visual effect because of their limited aperture and focal length. Commercial software packages, and even some common digital cameras, include built-in shallow-focus simulation programs. However, the rendering quality of the existing tools is often poor for several reasons: 1) boundaries of the focused objects are often over-smoothed because of poor automatically generated object segmentation; 2) the blurring usually looks fake, since a key element of the image-formation principle, namely the depth of the object, is not properly utilized in the rendering process; and 3) even when refocusing methods do model the blur with depth information, it is hard to extract an accurate depth map from a single image. We therefore develop a friendly user interface that interactively achieves reliable depth labeling based on semi-automatic object segmentation and 3D modeling. Moreover, to realize the shallow-focus effect efficiently, we apply a real-time gather-based DOF rendering method and modify it to be capable of simulating the bokeh effect.
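A gather-based DOF pass, in its simplest form, averages for each output pixel the input pixels inside that pixel's circle of confusion. The sketch below is a naive CPU version (not the thesis's real-time GPU implementation; it uses a square window rather than a shaped bokeh kernel, and the `strength` parameter is an illustrative stand-in for the lens model):

```python
import numpy as np

def gather_dof(image, depth, focus, strength):
    """Naive gather-based DOF: each output pixel averages input pixels
    within its blur radius |depth - focus| * strength (in pixels)."""
    h, w = image.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            r = int(round(abs(depth[y, x] - focus) * strength))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

img = np.zeros((9, 9)); img[4, 4] = 1.0
depth = np.full((9, 9), 2.0)          # whole scene at 2 m
sharp = gather_dof(img, depth, focus=2.0, strength=3.0)    # in focus: unchanged
blurred = gather_dof(img, depth, focus=4.0, strength=3.0)  # out of focus: spread
assert np.allclose(sharp, img)
assert blurred[4, 4] < 1.0
```

Replacing the square gather window with a disc- or polygon-shaped weighting kernel is what turns plain blur into a bokeh-like highlight shape.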
APA, Harvard, Vancouver, ISO, and other styles