Academic literature on the topic "Depth of field"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic "Depth of field."

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Depth of field"

1. Baylin, Eric. "Depth of Field/Depth of Understanding." Schools 7, no. 1 (May 2010): 86–100. http://dx.doi.org/10.1086/651297.

2. Harper, Graeme. "Depth of Field." Creative Industries Journal 11, no. 3 (September 2, 2018): 223–24. http://dx.doi.org/10.1080/17510694.2018.1534414.

3. Değirmenci, Koray. "Depth of field." Philosophy of Photography 5, no. 2 (December 1, 2014): 123–29. http://dx.doi.org/10.1386/pop.5.2.123_7.

4. Vasudevan, Krishnan. "Depth of Field." Journalism Practice 13, no. 2 (January 4, 2018): 229–46. http://dx.doi.org/10.1080/17512786.2017.1419826.

5. Brown, Deeadra. "Depth of field." Dialectical Anthropology 33, no. 2 (June 2009): 201–2. http://dx.doi.org/10.1007/s10624-009-9115-8.

6. Soler, Cyril, Kartic Subr, Frédo Durand, Nicolas Holzschuch, and François Sillion. "Fourier depth of field." ACM Transactions on Graphics 28, no. 2 (April 2009): 1–12. http://dx.doi.org/10.1145/1516522.1516529.

7. Zhang, Tingting, Louise O'Hare, Paul B. Hibbard, Harold T. Nefs, and Ingrid Heynderickx. "Depth of Field Affects Perceived Depth in Stereographs." ACM Transactions on Applied Perception 11, no. 4 (January 9, 2015): 1–18. http://dx.doi.org/10.1145/2667227.

8. Landers, Mark N., and David S. Mueller. "Evaluation of Selected Pier-Scour Equations Using Field Data." Transportation Research Record: Journal of the Transportation Research Board 1523, no. 1 (January 1996): 186–95. http://dx.doi.org/10.1177/0361198196152300123.
Abstract:
Field measurements of channel scour at bridges are needed to improve the understanding of scour processes and the ability to accurately predict scour depths. An extensive data base of pier-scour measurements has been developed over the last several years in cooperative studies between state highway departments, the Federal Highway Administration, and the U.S. Geological Survey. Selected scour processes and scour design equations are evaluated using 139 measurements of local scour in live-bed and clear-water conditions. Pier-scour measurements were made at 44 bridges around 90 bridge piers in 12 states. The influence of pier width on scour depth is linear in logarithmic space. The maximum observed ratio of pier width to scour depth is 2.1 for piers aligned to the flow. Flow depth and scour depth were found to have a relation that is linear in logarithmic space and that is not bounded by some critical ratio of flow depth to pier width. Comparisons of computed and observed scour depths indicate that none of the selected equations accurately estimate the depth of scour for all of the measured conditions. Some of the equations performed well as conservative design equations; however, they overpredict many observed scour depths by large amounts. Some equations fit the data well for observed scour depths less than about 3 m (9.8 ft), but significantly underpredict larger observed scour depths.

9. Moon, Won-Leep. "Depth of field and magnification." Journal of the Moving Image Technology Association of Korea 1, no. 12 (June 2010): 25–41. http://dx.doi.org/10.34269/mitak.2010.1.12.002.

10. Geuens, Jean-Pierre. "The Depth of the Field." Quarterly Review of Film and Video 31, no. 6 (June 2, 2014): 572–85. http://dx.doi.org/10.1080/10509208.2012.686812.

Dissertations / Theses on the topic "Depth of field"

1. Ramirez Hernandez, Pavel. "Extended depth of field." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.
Abstract:
In this thesis the extension of the depth of field of optical systems is investigated. The problem of achieving extended depth of field (EDF) while preserving the transverse resolution is also addressed. A new expression for the transport of intensity equation in the prolate spheroidal coordinate system is derived, with the aim of investigating the phase retrieval problem with applications to EDF. A framework for the optimisation of optical systems with EDF is also introduced, where the main motivation is to find an appropriate scenario that will allow a convex optimisation solution leading to global optima. The relevance of such an approach is that it does not depend on the optimisation algorithms, since each local optimum is a global one. The multi-objective optimisation framework for optical systems is also discussed, where the main focus is the optimisation of pupil plane masks. The solution for the multi-objective optimisation problem is presented not as a single mask but as a set of masks. Convex frameworks for this problem are further investigated and it is shown that the convex optimisation of pupil plane masks is possible, providing global optima to the optimisation problems for optical systems. Seven masks are provided as examples of the convex optimisation solutions for optical systems, in particular 5 pupil plane masks that achieve EDF by factors of 2, 2.8, 2.9, 4 and 4.3, including two pupil masks that, besides extending the depth of field, are super-resolving in the transverse planes. These are shown as examples of solutions to particular optimisation problems in optical systems, where convexity properties have been given to the original problems to allow a convex optimisation, leading to optimised masks with a global nature in the optimisation scenario.

2. Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field." Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.

3. Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

4. Botcherby, Edward J. "Aberration free extended depth of field microscopy." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.
Abstract:
In recent years, the confocal and two photon microscopes have become ubiquitous tools in life science laboratories. The reason for this is that both these systems can acquire three dimensional image data from biological specimens. Specifically, this is done by acquiring a series of two-dimensional images from a set of equally spaced planes within the specimen. The resulting image stack can be manipulated and displayed on a computer to reveal a wealth of information. These systems can also be used in time lapse studies to monitor the dynamical behaviour of specimens by recording a number of image stacks at a sequence of time points. The time resolution in this situation is, however, limited by the maximum speed at which each constituent image stack can be acquired. Various techniques have emerged to speed up image acquisition and in most practical implementations a single, in-focus, image can be acquired very quickly. However, the real bottleneck in three dimensional imaging is the process of refocusing the system to image different planes. This is commonly done by physically changing the distance between the specimen and imaging lens, which is a relatively slow process. It is clear with the ever-increasing need to image biologically relevant specimens quickly that the speed limitation imposed by the refocusing process must be overcome. This thesis concerns the acquisition of data from a range of specimen depths without requiring the specimen to be moved. A new technique is demonstrated for two photon microscopy that enables data from a whole range of specimen depths to be acquired simultaneously so that a single two dimensional scan records extended depth of field image data directly. This circumvents the need to acquire a full three dimensional image stack and hence leads to a significant improvement in the temporal resolution for acquiring such data by more than an order of magnitude. 
In the remainder of this thesis, a new microscope architecture is presented that enables scanning to be carried out in three dimensions at high speed without moving the objective lens or specimen. Aberrations introduced by the objective lens are compensated by the introduction of an equal and opposite aberration with a second lens within the system enabling diffraction limited performance over a large range of specimen depths. Focusing is achieved by moving a very small mirror, allowing axial scan rates of several kHz; an improvement of some two orders of magnitude. This approach is extremely general and can be applied to any form of optical microscope with the very great advantage that the specimen is not disturbed. This technique is developed theoretically and experimental results are shown that demonstrate its potential application to a broad range of sectioning methods in microscopy.

5. Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images." Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.
Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach only utilizing a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones and that the accuracy of the model decreases with the distance from the provided depth. Further, we investigate several architectures as well as study the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided low resolution noisy data perform on par with the models provided unaltered depth.

6. Ozkalayci, Burak Oguz. "Multi-view Video Coding Via Dense Depth Field." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607517/index.pdf.
Abstract:
Emerging 3-D applications and 3-D display technologies raise some transmission problems of the next-generation multimedia data. Multi-view Video Coding (MVC) is one of the challenging topics in this area, that is on its road for standardization via ISO MPEG. In this thesis, a 3-D geometry-based MVC approach is proposed and analyzed in terms of its compression performance. For this purpose, the overall study is partitioned into three preceding parts. The first step is dense depth estimation of a view from a fully calibrated multi-view set. The calibration information and smoothness assumptions are utilized for determining dense correspondences via a Markov Random Field (MRF) model, which is solved by Belief Propagation (BP) method. In the second part, the estimated dense depth maps are utilized for generating (predicting) arbitrary (other camera) views of a scene, that is known as novel view generation. A 3-D warping algorithm, which is followed by an occlusion-compatible hole-filling process, is implemented for this aim. In order to suppress the occlusion artifacts, an intermediate novel view generation method, which fuses two novel views generated from different source views, is developed. Finally, for the last part, dense depth estimation and intermediate novel view generation tools are utilized in the proposed H.264-based MVC scheme for the removal of the spatial redundancies between different views. The performance of the proposed approach is compared against the simulcast coding and a recent MVC proposal, which is expected to be the standard recommendation for MPEG in the near future. These results show that the geometric approaches in MVC can still be utilized, especially in certain 3-D applications, in addition to conventional temporal motion compensation techniques, although the rate-distortion performances of geometry-free approaches are quite superior.

7. Lindeberg, Tim. "Concealing rendering simplifications using gaze-contingent depth of field." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189601.
Abstract:
One way of increasing 3D rendering performance is the use of foveated rendering. In this thesis a novel foveated rendering technique called gaze contingent depth of field tessellation (GC DOF tessellation) is proposed. Tessellation is the process of subdividing geometry to increase detail. The technique works by applying tessellation to all objects within the focal plane, gradually decreasing tessellation levels as applied blur increases. As the user moves their gaze the focal plane shifts and objects go from blurry to sharp at the same time as the fidelity of the object increases. This can help hide the pops that occur as objects change shape. The technique was evaluated in a user study with 32 participants. For the evaluated scene the technique helped reduce the number of primitives rendered by around 70 % and frame time by around 9 % compared to using full adaptive tessellation. The user study showed that as the level of blur increased the detection rate for pops decreased, suggesting that the technique could be used to hide pops that occur due to tessellation. However, further research is needed to solidify these findings.

8. Rangappa, Shreedhar. "Absolute depth using low-cost light field cameras." Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36224.
Abstract:
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often being part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z dimensional data (depth data) along with the X and Y dimensional data. New designs of camera systems have previously been developed by integrating multiple cameras to provide 3D data, ranging from 2 camera photogrammetry to multiple camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been completed, and likewise many research groups around the world are currently working on camera technology but from different perspectives; computer vision, algorithm development, metrology, etc. Plenoptic or Lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor, to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but in the majority, these have been aimed at domestic consumer photography, only ever recording scenes as relative scale greyscale images. This research considers the potential for lightfield cameras to be used for world scene metrology applications, specifically to record absolute coordinate data. 
Specific interest has been paid to a range of low cost lightfield cameras to; understand the functional/behavioural characteristics of the optics, identify potential need for optical and/or algorithm development, define sensitivity, repeatability and accuracy characteristics and limiting thresholds of use, and allow quantified 3D absolute scale coordinate data to be extracted from the images. The novel output of this work is; an analysis of lightfield camera system sensitivity leading to the definition of Active Zones (linear data generation good data) and In-active Zones (non-linear data generation poor data), development of bespoke calibration algorithms that remove radial/tangential distortion from the data captured using any MLA based camera, and, a light field camera independent algorithm that allows the delivery of 3D coordinate data in absolute units within a well-defined measurable range from a given camera.

9. Reinhart, William Frank. "Effects of depth cues on depth judgments using a field-sequential stereoscopic CRT display." This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07132007-143145/.

10. Reinhart, William Frank. "Effects of depth cues on depth judgements using a field-sequential stereoscopic CRT display." Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/38796.

Books on the topic "Depth of field"

1. Depth of field. Stockport, England: Dewi Lewis Pub., 2000.

2. Applied depth of field. Boston: Focal Press, 1985.

3. Heyen, William. Depth of field: Poems. Pittsburgh: Carnegie Mellon University Press, 2005.

4. Depth of field: Poems and photographs. Simsbury, CT: Antrim House, 2010.

5. Cocks, Geoffrey, James Diedrick, and Glenn Perusek, eds. Depth of field: Stanley Kubrick, film, and the uses of history. Madison: University of Wisconsin Press, 2006.

6. Depth of field: Essays on photography, mass media, and lens culture. Albuquerque, NM: University of New Mexico Press, 1998.

7. Goss, Keith Michael. Multi-dimensional polygon-based rendering for motion blur and depth of field. Uxbridge: Brunel University, 1986.

8. Ruotoistenmäki, Tapio. Estimation of depth to potential field sources using the Fourier amplitude spectrum. Espoo: Geologian tutkimuskeskus, 1987.

9. Rajagopalan, A. N., ed. Depth from defocus: A real aperture imaging approach. New York: Springer, 1999.

10. Timmins 2003 Field Conference: Ore deposits at depth, challenges and opportunities. Technical sessions abstract volume and field trip guide. Timmins, ON: Canadian Institute of Mining and Metallurgy, 2003.

Book chapters on the topic "Depth of field"

1. Gooch, Jan W. "Depth of Field." In Encyclopedic Dictionary of Polymers, 201–2. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3432.

2. Kemp, Jonathan. "Depth of field." In Film on Video, 55–64. London; New York: Routledge, 2019. http://dx.doi.org/10.4324/9780429468872-6.

3. Ravitz, Jeff, and James L. Moody. "Depth of Field." In Lighting for Televised Live Events, 75–79. First edition. New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9780429288982-11.

4. Atchison, David A., and George Smith. "Depth-of-Field." In Optics of the Human Eye, 379–93. 2nd ed. New York: CRC Press, 2023. http://dx.doi.org/10.1201/9781003128601-24.

5. Schedl, David C., Clemens Birklbauer, Johann Gschnaller, and Oliver Bimber. "Generalized Depth-of-Field Light-Field Rendering." In Computer Vision and Graphics, 95–105. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46418-3_9.

6. Nagahara, Hajime, Sujit Kuthirummal, Changyin Zhou, and Shree K. Nayar. "Flexible Depth of Field Photography." In Lecture Notes in Computer Science, 60–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88693-8_5.

7. Huntley, Jonathan M., and Pablo D. Ruiz. "Depth-Resolved Displacement Field Measurement." In Advances in Speckle Metrology and Related Techniques, 37–103. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA, 2011. http://dx.doi.org/10.1002/9783527633852.ch2.

8. Stephenson, Ian. "Motion Blur and Depth of Field." In Essential Series, 133–44. London: Springer London, 2003. http://dx.doi.org/10.1007/978-1-4471-3800-6_15.

9. Dudkiewicz, Krzysztof. "Real-Time Depth of Field Algorithm." In Image Processing for Broadcast and Video Production, 257–68. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3035-2_22.

10. Alió, Jorge L., Andrzej Grzybowski, and Piotr Kanclerz. "Extended Depth-of-Field Intraocular Lenses." In Essentials in Ophthalmology, 335–44. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21282-7_26.

Conference papers on the topic "Depth of field"

1. Dangerfield, J. A., H. Raymondi, O. Fjeld, and P. Haskey. "Imaging at Emblar Field." In EAGE/SEG Workshop - Depth Imaging of Reservoir Attributes. European Association of Geoscientists & Engineers, 1998. http://dx.doi.org/10.3997/2214-4609.201406704.

2. Mauderer, Michael, Simone Conte, Miguel A. Nacenta, and Dhanraj Vishwanath. "Depth perception with gaze-contingent depth of field." In CHI '14: CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2556288.2557089.

3. Chauris, H., and M. Noble. "Differential Semblance Optimization for 2D Velocity Field Estimation." In EAGE/SEG Workshop - Depth Imaging of Reservoir Attributes. European Association of Geoscientists & Engineers, 1998. http://dx.doi.org/10.3997/2214-4609.201406681.

4. Choi, Whan, Kyung-Rae Kim, and Chang-Su Kim. "Reliable Depth-of-Field Rendering Using Estimated Depth Maps." In 2018 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2018. http://dx.doi.org/10.1109/iscas.2018.8351453.

5. Kao, Yi-Hao, Chia-Kai Liang, Li-Wen Chang, and Homer H. Chen. "Depth Detection of Light Field." In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366052.

6. Smith, Christopher, Edward Botcherby, Martin Booth, Rimas Juskaitis, and Tony Wilson. "Extended depth-of-field microscopy." In BiOS, edited by Jose-Angel Conchello, Carol J. Cogswell, Tony Wilson, and Thomas G. Brown. SPIE, 2010. http://dx.doi.org/10.1117/12.842160.

7. Zhang, Jiaqi, Jingsuo He, Lihua Geng, Zhichen Bai, Bo Su, and Cunlin Zhang. "Research on depth-of-field depth-of-light field microscopy in the terahertz band." In Infrared, Millimeter-Wave, and Terahertz Technologies VI, edited by Xi-Cheng Zhang, Masahiko Tani, and Cunlin Zhang. SPIE, 2019. http://dx.doi.org/10.1117/12.2537710.

8. Wu, Zijin, Xingyi Li, Juewen Peng, Hao Lu, Zhiguo Cao, and Weicai Zhong. "DoF-NeRF: Depth-of-Field Meets Neural Radiance Fields." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548088.

9. Grosset, A. V. Pascal, Mathias Schott, Georges-Pierre Bonneau, and Charles D. Hansen. "Evaluation of Depth of Field for depth perception in DVR." In 2013 IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2013. http://dx.doi.org/10.1109/pacificvis.2013.6596131.

10. Tosic, Ivana, and Kathrin Berkner. "Light Field Scale-Depth Space Transform for Dense Depth Estimation." In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2014. http://dx.doi.org/10.1109/cvprw.2014.71.

Reports on the topic "Depth of field"

1. McLean, William E. ANVIS Objective Lens Depth of Field. Fort Belvoir, VA: Defense Technical Information Center, March 1996. http://dx.doi.org/10.21236/ada306571.

2. Al-Mutawaly, Nafia, Hubert de Bruin, and Raymond D. Findlay. Magnetic Nerve Stimulation: Field Focality and Depth of Penetration. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada411028.

3. Cathey, W. T., Benjamin Braker, and Sherif Sherif. Analysis and Design Tools for Passive Ranging and Reduced-Depth-of-Field Imaging. Fort Belvoir, VA: Defense Technical Information Center, September 2003. http://dx.doi.org/10.21236/ada417814.

4. Claycomb, William R., Roy Maxion, Jason Clark, Bronwyn Woods, Brian Lindauer, David Jensen, Joshua Neil, Alex Kent, Sadie Creese, and Phil Legg. Deep Focus: Increasing User "Depth of Field" to Improve Threat Detection (Oxford Workshop Poster). Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada610980.

5. Nelson, Jonathan M. Laboratory and Field Application of River Depth Estimation Techniques Using Remotely Sensed Data: Annual Report Year 1. Fort Belvoir, VA: Defense Technical Information Center, September 2013. http://dx.doi.org/10.21236/ada597757.

6. Wei, X., J. Braverman, M. Miranda, M. E. Rosario, and C. J. Costantino. Depth-dependent Vertical-to-Horizontal (V/H) Ratios of Free-Field Ground Motion Response Spectra for Deeply Embedded Nuclear Structures. Office of Scientific and Technical Information (OSTI), February 2015. http://dx.doi.org/10.2172/1176998.

7. Niple, E. R., and H. E. Scott. Biogenic Aerosols – Effects on Climate and Clouds. Cloud Optical Depth (COD) Sensor Three-Waveband Spectrally-Agile Technique (TWST) Field Campaign Report. Office of Scientific and Technical Information (OSTI), April 2016. http://dx.doi.org/10.2172/1248494.

8. Ray, Laura, Madeleine Jordan, Steven Arcone, Lynn Kaluzienski, Benjamin Walker, Peter Ortquist Koons, James Lever, and Gordon Hamilton. Velocity field in the McMurdo shear zone from annual ground penetrating radar imaging and crevasse matching. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42623.
Abstract:
The McMurdo shear zone (MSZ) is a strip of heavily crevassed ice oriented in the south-north direction and moving northward. Previous airborne surveys revealed a chaotic crevasse structure superimposed on a set of expected crevasse orientations at 45 degrees to the south-north flow (due to shear stress mechanisms). The dynamics that produced this chaotic structure are poorly understood. Our purpose is to present our field methodology and provide field data that will enable validation of models of the MSZ evolution, and here, we present a method for deriving a local velocity field from ground penetrating radar (GPR) data towards that end. Maps of near-surface crevasses were derived from two annual GPR surveys of a 28 km² region of the MSZ using Eulerian sampling. Our robot-towed and GPS navigated GPR enabled a dense survey grid, with transects of the shear zone at 50 m spacing. Each survey comprised multiple crossings of long (> 1 km) crevasses that appear in echelon on the western and eastern boundaries of the shear zone, as well as two or more crossings of shorter crevasses in the more chaotic zone between the western and eastern boundaries. From these maps, we derived a local velocity field based on the year-to-year movement of the same crevasses. Our velocity field varies significantly from fields previously established using remote sensing and provides more detail than one concurrently derived from a 29-station GPS network. Rather than a simple velocity gradient expected for crevasses oriented approximately 45 degrees to flow direction, we find constant velocity contours oriented diagonally across the shear zone with a wavy fine structure. Although our survey is based on near-surface crevasses, similar crevassing found in marine ice at 160 m depth leads us to conclude that this surface velocity field may hold through the body of meteoric and marine ice. Our success with robot-towed GPR with GPS navigation suggests we may greatly increase our survey areas.

9. Kim, D. H., K. E. Gray, J. D. Hettinger, and J. H. Kang. Determination of the temperature dependence of the penetration depth of Nb in Nb/AlOx/Nb Josephson junctions from a resistive measurement in a magnetic field. Office of Scientific and Technical Information (OSTI), November 1993. http://dx.doi.org/10.2172/195746.

10. Berney, Ernest, Jami Lynn Daugherty, and Lulu Edwards. Validation of the automatic dynamic cone penetrometer. Engineer Research and Development Center (U.S.), July 2022. http://dx.doi.org/10.21079/11681/44704.
Abstract:
The U.S. military requires a rapid means of measuring subsurface soil strength for construction and repair of expeditionary pavement surfaces. Traditionally, a dynamic cone penetrometer (DCP) has served this purpose, providing strength with depth profiles in natural and prepared pavement surfaces. To improve upon this device, the Engineer Research and Development Center (ERDC) validated a new battery-powered automatic dynamic cone penetrometer (A-DCP) apparatus that automates the driving process by using a motor-driven hammering cap placed on top of a traditional DCP rod. The device improves upon a traditional DCP by applying three to four blows per second while digitally recording depth, blow count, and California Bearing Ratio (CBR). An integrated Global Positioning Sensor (GPS) and Bluetooth® connection allow for real-time data capture and stationing. Similarities were illustrated between the DCP and the A-DCP by generation of a new A-DCP calibration curve. This curve relates penetration rate to field CBR that nearly follows the DCP calibration with the exception of a slight offset. Field testing of the A-DCP showed less variability and more consistent strength measurement with depth at a speed five times greater than that of the DCP with minimal physical exertion by the operator.