Follow this link to see other types of publications on the topic: Depth of field fusion.

Theses / dissertations on the topic "Depth of field fusion"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the top 50 theses / dissertations for your research on the topic "Depth of field fusion".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Duan, Jun Wei. "New regional multifocus image fusion techniques for extending depth of field". Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3951602.

Full text of the source
2

Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images". Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.

Full text of the source
Abstract:
In this thesis, we present a new way to enhance the “depth map” image, namely the fusion of depth images. The goal of our thesis is to enhance “depth images” through a fusion of different classification methods. For that, we use three similar but distinct methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to compute the enhanced output. We then compare the enhanced result with the original depth images; the comparison indicates the effectiveness of our methodology.
3

Ocampo, Blandon Cristian Felipe. "Patch-Based image fusion for computational photography". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.

Full text of the source
Abstract:
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
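As context for the baseline this abstract improves on, classical multifocus fusion of perfectly aligned images reduces to keeping, per pixel, the input with the highest local sharpness. A minimal sketch (the function names and the Laplacian sharpness measure are illustrative assumptions, not the thesis's patch-based algorithm), assuming aligned grayscale float arrays:

```python
import numpy as np

def local_sharpness(img, radius=2):
    """Squared Laplacian response, box-summed over a (2*radius+1)^2 window."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    energy = lap ** 2
    for axis in (0, 1):  # separable box filter via shifted sums
        energy = sum(np.roll(energy, d, axis) for d in range(-radius, radius + 1))
    return energy

def naive_focus_stack(images):
    """Fuse aligned multifocus images: per pixel, keep the sharpest input."""
    stack = np.stack(images)
    sharp = np.stack([local_sharpness(im) for im in images])
    idx = np.argmax(sharp, axis=0)  # index of the sharpest image per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

Exactly as the abstract notes, this per-pixel selection assumes static scenes and perfect alignment; any motion produces ghosting, which is what the patch-based similarity measures above are designed to handle.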
4

Ramirez, Hernandez Pavel. "Extended depth of field". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9941.

Full text of the source
Abstract:
In this thesis the extension of the depth of field of optical systems is investigated. The problem of achieving extended depth of field (EDF) while preserving the transverse resolution is also addressed. A new expression for the transport of intensity equation in the prolate spheroidal coordinate system is derived, with the aim of investigating the phase retrieval problem with applications to EDF. A framework for the optimisation of optical systems with EDF is also introduced, where the main motivation is to find an appropriate scenario that will allow a convex optimisation solution leading to global optima. The relevance of such an approach is that it does not depend on the optimisation algorithms, since each local optimum is a global one. The multi-objective optimisation framework for optical systems is also discussed, where the main focus is the optimisation of pupil plane masks. The solution for the multi-objective optimisation problem is presented not as a single mask but as a set of masks. Convex frameworks for this problem are further investigated and it is shown that the convex optimisation of pupil plane masks is possible, providing global optima to the optimisation problems for optical systems. Seven masks are provided as examples of the convex optimisation solutions for optical systems, in particular five pupil plane masks that achieve EDF by factors of 2, 2.8, 2.9, 4 and 4.3, including two pupil masks that, besides extending the depth of field, are super-resolving in the transverse planes. These are shown as examples of solutions to particular optimisation problems in optical systems, where convexity properties have been given to the original problems to allow a convex optimisation, leading to optimised masks with a global nature in the optimisation scenario.
5

Sikdar, Ankita. "Depth based Sensor Fusion in Object Detection and Tracking". The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1515075130647622.

Full text of the source
6

Villarruel, Christina R. "Computer graphics and human depth perception with gaze-contingent depth of field". Connect to online version, 2006. http://ada.mtholyoke.edu/setr/websrc/pdfs/www/2006/175.pdf.

Full text of the source
7

Aldrovandi, Lorenzo. "Depth estimation algorithm for light field data". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find the full text of the source
8

Botcherby, Edward J. "Aberration free extended depth of field microscopy". Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:7ad8bc83-6740-459f-8c48-76b048c89978.

Full text of the source
Abstract:
In recent years, the confocal and two photon microscopes have become ubiquitous tools in life science laboratories. The reason for this is that both these systems can acquire three dimensional image data from biological specimens. Specifically, this is done by acquiring a series of two-dimensional images from a set of equally spaced planes within the specimen. The resulting image stack can be manipulated and displayed on a computer to reveal a wealth of information. These systems can also be used in time lapse studies to monitor the dynamical behaviour of specimens by recording a number of image stacks at a sequence of time points. The time resolution in this situation is, however, limited by the maximum speed at which each constituent image stack can be acquired. Various techniques have emerged to speed up image acquisition and in most practical implementations a single, in-focus, image can be acquired very quickly. However, the real bottleneck in three dimensional imaging is the process of refocusing the system to image different planes. This is commonly done by physically changing the distance between the specimen and imaging lens, which is a relatively slow process. It is clear with the ever-increasing need to image biologically relevant specimens quickly that the speed limitation imposed by the refocusing process must be overcome. This thesis concerns the acquisition of data from a range of specimen depths without requiring the specimen to be moved. A new technique is demonstrated for two photon microscopy that enables data from a whole range of specimen depths to be acquired simultaneously so that a single two dimensional scan records extended depth of field image data directly. This circumvents the need to acquire a full three dimensional image stack and hence leads to a significant improvement in the temporal resolution for acquiring such data by more than an order of magnitude. 
In the remainder of this thesis, a new microscope architecture is presented that enables scanning to be carried out in three dimensions at high speed without moving the objective lens or specimen. Aberrations introduced by the objective lens are compensated by the introduction of an equal and opposite aberration with a second lens within the system enabling diffraction limited performance over a large range of specimen depths. Focusing is achieved by moving a very small mirror, allowing axial scan rates of several kHz; an improvement of some two orders of magnitude. This approach is extremely general and can be applied to any form of optical microscope with the very great advantage that the specimen is not disturbed. This technique is developed theoretically and experimental results are shown that demonstrate its potential application to a broad range of sectioning methods in microscopy.
9

Möckelind, Christoffer. "Improving deep monocular depth predictions using dense narrow field of view depth images". Thesis, KTH, Robotik, perception och lärande, RPL, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235660.

Full text of the source
Abstract:
In this work we study a depth prediction problem where we provide a narrow field of view depth image and a wide field of view RGB image to a deep network tasked with predicting the depth for the entire RGB image. We show that by providing a narrow field of view depth image, we improve results for the area outside the provided depth compared to an earlier approach utilizing only a single RGB image for depth prediction. We also show that larger depth maps provide a greater advantage than smaller ones and that the accuracy of the model decreases with the distance from the provided depth. Further, we investigate several architectures as well as the effect of adding noise and lowering the resolution of the provided depth image. Our results show that models provided low-resolution, noisy data perform on par with the models provided unaltered depth.
10

Luraas, Knut. "Clinical aspects of Critical Flicker Fusion perimetry : an in-depth analysis". Thesis, Cardiff University, 2012. http://orca.cf.ac.uk/39684/.

Full text of the source
Abstract:
The thesis evaluated, in three studies, the clinical potential of Critical Flicker Fusion perimetry (CFFP) undertaken using the Octopus 311 perimeter. The influence of the learning effect on the outcome of CFFP was evaluated, in each eye at each of five visits each separated by one week, for 28 normal individuals naïve to perimetry, 10 individuals with ocular hypertension (OHT) and 11 with open angle glaucoma (OAG) all of whom were experienced in Standard Automated perimetry (SAP). An improvement occurred in the height, rather than in the shape, of the visual field and was largest for those with OAG. The normal individuals reached optimum performance at the third visit and those with OHT or with OAG at the fourth or fifth visits. The influence of ocular media opacity was investigated in 22 individuals with age-related cataract who were naïve to both SAP and CFFP. All individuals underwent both CFFP and SAP in each eye at each of four visits each separated by one week. At the third and fourth visit, glare disability (GD) was measured with 100% and 10% contrast EDTRS LogMAR visual acuity charts in the presence, and absence, of three levels of glare using the Brightness Acuity Tester. The visual field for CFF improved in height, only. Little correlation was present between the various measures of GD and the visual field, largely due to the narrow range of cataract severity. The influence of optical defocus for both CFFP and SAP was investigated, in one designated eye at each of two visits, in 16 normal individuals all of whom had taken part in the first study. Sensitivity for SAP declined with increase in defocus whilst that for CFFP increased. The latter was attributed to the influence of the Granit-Harper Law arising from the increased size of the defocused stimulus.
11

Zhang, Guanghua. "Edge labelling and depth reconstruction by fusion of range and intensity data". Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1502.

Full text of the source
12

Raine, Mark John. "High field superconductors for fusion energy applications". Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11153/.

Full text of the source
Abstract:
The fabrication and processing by solid-state heat-treatment, mechanical ball milling and hot isostatic pressing of microcrystalline and nanocrystalline niobium carbonitride is reported. This material is subjected to a number of characterisation measurements including x-ray diffraction, resistivity, ac-susceptibility, dc-extraction and heat capacity. The resultant measurement data are used to assess the adequacy of the material’s processing and quality with respect to the fundamental superconducting characteristics: transition temperature, T_c, upper critical magnetic field, B_c2, and critical current density, J_c. It is shown that a substantial increase in B_c2 from ~11 T (in the microcrystalline material) to ~21 T (in the nanocrystalline material) has been produced. A fortyfold increase in J_c from 1.8 x 10^7 Am^(-2) (in microcrystalline material measured at 3 T and 6 K) to 7.4 x 10^8 Am^(-2) (in nanocrystalline material measured at 3 T and 5.9 K) has also been produced. These substantial increases have been made with only a 32 % reduction in T_c from ~17.6 K to ~11.9 K, well above the temperature of liquid helium. The accurate large-quantity metrology of 10,000 Nb3Sn samples for the International Thermonuclear Experimental Reactor toroidal field coils is also reported and an overview analysis of the data provided. In particular, all seven measurement types (critical current, hysteresis loss, residual resistivity ratio, diameter, chromium plating thickness, twist pitch and copper to non-copper volume ratio) are discussed in relation to the accuracy with which they were performed. The methodology in performing the heat-treatments and measurements is discussed and the detail of the necessary equipment set-up is given. The results from some additional experiments that deal with the effect of heat-treatment cleanliness and sample geometry on various measurement types are provided.
13

Ozkalayci, Burak Oguz. "Multi-view Video Coding Via Dense Depth Field". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607517/index.pdf.

Full text of the source
Abstract:
Emerging 3-D applications and 3-D display technologies raise some transmission problems for next-generation multimedia data. Multi-view Video Coding (MVC) is one of the challenging topics in this area and is on its road to standardization via ISO MPEG. In this thesis, a 3-D geometry-based MVC approach is proposed and analyzed in terms of its compression performance. For this purpose, the overall study is partitioned into three parts. The first step is dense depth estimation of a view from a fully calibrated multi-view set. The calibration information and smoothness assumptions are utilized for determining dense correspondences via a Markov Random Field (MRF) model, which is solved by the Belief Propagation (BP) method. In the second part, the estimated dense depth maps are utilized for generating (predicting) arbitrary (other camera) views of a scene, which is known as novel view generation. A 3-D warping algorithm, followed by an occlusion-compatible hole-filling process, is implemented for this aim. In order to suppress the occlusion artifacts, an intermediate novel view generation method, which fuses two novel views generated from different source views, is developed. Finally, in the last part, the dense depth estimation and intermediate novel view generation tools are utilized in the proposed H.264-based MVC scheme for the removal of the spatial redundancies between different views. The performance of the proposed approach is compared against simulcast coding and a recent MVC proposal, which is expected to be the standard recommendation for MPEG in the near future. These results show that geometric approaches in MVC can still be utilized, especially in certain 3-D applications, in addition to conventional temporal motion compensation techniques, although the rate-distortion performances of geometry-free approaches are quite superior.
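The 3-D warping step this abstract describes, predicting another camera's view from a dense depth map, reduces per pixel to back-projection followed by re-projection. A minimal sketch under assumed pinhole-camera conventions (the names `K_src`, `K_dst`, `R`, `t` are illustrative, not taken from the thesis):

```python
import numpy as np

def warp_pixel(u, v, depth, K_src, K_dst, R, t):
    """Project pixel (u, v) with known depth from a source camera into a
    destination camera: back-project to 3-D, apply the relative pose
    (rotation R, translation t), then re-project with the destination
    intrinsics. Returns the destination pixel coordinates."""
    p = np.linalg.inv(K_src) @ np.array([u, v, 1.0]) * depth  # 3-D point, source frame
    q = K_dst @ (R @ p + t)                                   # homogeneous image coords
    return q[:2] / q[2]
```

Applying this to every pixel of a depth map leaves holes at disoccluded regions, which is why the abstract's pipeline follows warping with an occlusion-compatible hole-filling step.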
14

Lindeberg, Tim. "Concealing rendering simplifications using gaze contingent depth of field". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-189601.

Full text of the source
Abstract:
One way of increasing 3D rendering performance is the use of foveated rendering. In this thesis a novel foveated rendering technique called gaze contingent depth of field tessellation (GC DOF tessellation) is proposed. Tessellation is the process of subdividing geometry to increase detail. The technique works by applying tessellation to all objects within the focal plane, gradually decreasing tessellation levels as applied blur increases. As the user moves their gaze the focal plane shifts and objects go from blurry to sharp at the same time as the fidelity of the object increases. This can help hide the pops that occur as objects change shape. The technique was evaluated in a user study with 32 participants. For the evaluated scene the technique helped reduce the number of primitives rendered by around 70 % and frame time by around 9 % compared to using full adaptive tessellation. The user study showed that as the level of blur increased the detection rate for pops decreased, suggesting that the technique could be used to hide pops that occur due to tessellation. However, further research is needed to solidify these findings.
15

Rangappa, Shreedhar. "Absolute depth using low-cost light field cameras". Thesis, Loughborough University, 2018. https://dspace.lboro.ac.uk/2134/36224.

Full text of the source
Abstract:
Digital cameras are increasingly used for measurement tasks within engineering scenarios, often being part of metrology platforms. Existing cameras are well equipped to provide 2D information about the fields of view (FOV) they observe, the objects within the FOV, and the accompanying environments. But for some applications these 2D results are not sufficient, specifically applications that require Z dimensional data (depth data) along with the X and Y dimensional data. New designs of camera systems have previously been developed by integrating multiple cameras to provide 3D data, ranging from 2 camera photogrammetry to multiple camera stereo systems. Many earlier attempts to record 3D data on 2D sensors have been completed, and likewise many research groups around the world are currently working on camera technology but from different perspectives; computer vision, algorithm development, metrology, etc. Plenoptic or Lightfield camera technology was defined as a technique over 100 years ago but has remained dormant as a potential metrology instrument. Lightfield cameras utilize an additional Micro Lens Array (MLA) in front of the imaging sensor, to create multiple viewpoints of the same scene and allow encoding of depth information. A small number of companies have explored the potential of lightfield cameras, but in the majority, these have been aimed at domestic consumer photography, only ever recording scenes as relative scale greyscale images. This research considers the potential for lightfield cameras to be used for world scene metrology applications, specifically to record absolute coordinate data. 
Specific interest has been paid to a range of low-cost lightfield cameras to: understand the functional/behavioural characteristics of the optics; identify potential need for optical and/or algorithm development; define sensitivity, repeatability and accuracy characteristics and limiting thresholds of use; and allow quantified 3D absolute scale coordinate data to be extracted from the images. The novel output of this work is: an analysis of lightfield camera system sensitivity, leading to the definition of Active Zones (linear data generation, good data) and In-active Zones (non-linear data generation, poor data); development of bespoke calibration algorithms that remove radial/tangential distortion from the data captured using any MLA-based camera; and a lightfield camera independent algorithm that allows the delivery of 3D coordinate data in absolute units within a well-defined measurable range from a given camera.
16

Reinhart, William Frank. "Effects of depth cues on depth judgments using a field-sequential stereoscopic CRT display". This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07132007-143145/.

Full text of the source
17

Reinhart, William Frank. "Effects of depth cues on depth judgements using a field-sequential stereoscopic CRT display". Diss., Virginia Tech, 1990. http://hdl.handle.net/10919/38796.

Full text of the source
18

Rathjens, Richard G. "PLANTING DEPTH OF TREES - A SURVEY OF FIELD DEPTH, EFFECT OF DEEP PLANTING, AND REMEDIATION". Columbus, Ohio : Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view.cgi?acc%5Fnum=osu1243869972.

Full text of the source
19

Manolopoulos, Dimitris. "Fusion of perturbed defects in conformal field theory". Thesis, King's College London (University of London), 2012. https://kclpure.kcl.ac.uk/portal/en/theses/fusion-of-perturbed-defects-in-conformal-field-theory(23890e88-b5cf-4c59-910a-5ed4d8f8acfc).html.

Full text of the source
Abstract:
The infinite-dimensional symmetry algebra of a conformal field theory (CFT), the Virasoro algebra, is generated by the holomorphic and anti-holomorphic parts of the stress tensor. Besides such 'chiral symmetries' the CFT also has an integrable symmetry, that is, infinite families of commuting conserved charges. In this thesis a step towards combining these two symmetries into a single formalism is taken, by identifying integrable structures of a CFT through studying the representation category of the underlying chiral algebra. Then, by introducing defects in the system, conserved charges can be constructed by perturbing certain conformal defects. Starting from an abelian rigid braided monoidal category C one defines an abelian rigid monoidal category CF which captures some aspects of perturbed conformal defects in two-dimensional CFT. Namely, for V a rational vertex operator algebra one considers the charge-conjugation CFT constructed from V (the Cardy case). Then C = Rep(V) and an object in CF corresponds to a conformal defect condition together with a direction of perturbation. To each object in CF one assigns a perturbed defect operator on the space of states of the CFT and then shows that the assignment factors through the Grothendieck ring of CF. This allows one to find functional relations between perturbed defect operators. Such relations are interesting because they contain information about the integrable structure of the CFT.
20

Zammit, Paul. "Extended depth-of-field imaging and ranging in microscopy". Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8081/.

Full text of the source
Abstract:
Conventional 3D imaging techniques such as laser scanning, focus-stacking and confocal microscopy either require scanning in all or a subset of the spatial dimensions, or else are limited by their depth of field (DOF). Scanning increases the acquisition time, therefore techniques which rely on it cannot be used to image moving scenes. In order to acquire both the intensity of the scene and its depth, extending the DOF without scanning is therefore necessary. This is traditionally achieved by stopping the system down (reducing the f/#). This, however, has the highly undesirable effect of lowering both the throughput and the lateral resolution of the system. In microscopy in particular, both these parameters are critical, therefore there is scope in breaking this trade-off. The objective of this work, therefore, is to develop a practical and simple 3D imaging technique which is capable of acquiring both the irradiance of the scene and its depth in a single snapshot over an extended DOF without incurring a reduction in optical throughput and lateral resolution. To this end, a new imaging technique, referred to as complementary Kernel Matching (CKM), is proposed in this thesis. To extend the DOF, in CKM a hybrid imaging technique known as wavefront coding (WC) has been used. WC permits the DOF to be extended by an order of magnitude typically without reducing the efficiency and the resolution of the system. Moreover, WC only requires the introduction of a phase mask in the aperture of the system, hence it also has the benefit of simplicity and practicality. Unfortunately, in practice, WC systems are found to suffer from post-recovery artefacts and distortion, which substantially degrade the quality of the acquired image. To date, this long-standing problem has found no solution and is probably the cause for the lack of exploitation of this imaging technique by the industry. 
In CKM, use was made of a largely ignored phenomenon associated with WC to measure the depth of the sample: the lateral translation of the scene in proportion to its depth. Furthermore, once the depth of the scene is known, the ensuing artefacts and distortion due to the introduction of the WC element can be compensated for. As a result, a high quality intensity image of the scene and its depth profile (referred to in stereo vision parlance as a depth map) are obtained over a DOF which is typically an order of magnitude larger than that of an equivalent clear-aperture system. This implies that, besides being a 3D imaging technique, CKM is also a solution to one of the longest-standing problems in WC itself. By means of WC, therefore, the DOF was extended without scanning and without reducing the throughput and the optical resolution, allowing both an intensity image of the scene and its depth map to be acquired. In addition, CKM is inherently monocular, therefore it does not suffer from occlusion, which is a major problem affecting triangulation-based 3D imaging techniques such as the popular stereo vision. One therefore concludes that CKM fulfils the objectives set for this project. In this work, various ways of implementing CKM were explored and compared, and the theory associated with them was developed. An experimental prototype was then built and the technique was demonstrated experimentally in microscopy. The results show that CKM eliminates WC artefacts and thus gives high quality images of the scene over an extended DOF. A DOF of ∼20μm was achieved on a 40×, 0.5NA system experimentally; however, this can be increased if required. The experimental depth reconstructions of real samples (such as pollen grains and a silicon die) imaged in various modalities (reflection, transmission and fluorescence) were comparable to those given by a focus-stack.
However, as with all other passive techniques, the performance of CKM depends on the texture and features in the scene itself. On a binary systematic scene consisting of regularly spaced dots with a linear depth gradient, an RMS error of ±0.15 μm was obtained from an image signal-to-noise ratio of 60 dB. Finally, owing to its simplicity and large DOF, there is scope in investigating the possibility of using the same CKM setup for 3D point localisation applications such as super-resolution. An initial investigation was therefore conducted by localising sub-resolution fluorescent beads. On a 40×, 0.5 NA system, a mean precision of 148 nm in depth and < 30 nm in the lateral dimensions was observed experimentally from 4,000 photons per localisation over a DOF of 26 μm. From these experimental values, a mean localisation precision of < 34 nm in depth and < 13 nm in the lateral dimensions from 2,000 photons per localisation over a DOF of 3 μm is expected on a more typical 100×, 1.4 NA system. This compares favourably to the competition, therefore we conclude that there is scope in investigating this technique for 3D point localisation applications further.
Estilos ABNT, Harvard, Vancouver, APA, etc.
21

Axelsson, Natalie. "Depth of Field Rendering from Sparsely Sampled Pinhole Images". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281771.

Texto completo da fonte
Resumo:
Optical lenses cause foreground and background objects to appear blurred, an effect called depth of field. Each point in the scene is projected onto the imaging plane as a semitransparent circle of confusion (CoC) with diameter depending on the distance between the point and the lens. In images rendered with a pinhole camera, the entire scene is in focus, but depth of field may be added synthetically for photorealism, aesthetics, or attention-guiding purposes. However, most algorithms for depth of field rendering are either computationally expensive or produce noticeable artifacts. This report evaluates two different algorithms for depth of field rendering. Both algorithms are independent of the rendering technique. The first renders only a single pinhole image and uses a light-field based method for image synthesis. The second renders up to 12 pinhole images and uses CoC gathering to create defocus blur. Ideas from both methods are combined in a novel algorithm which uses sparse samples to approximate the light field. Our method produces a closer physical approximation than the other algorithms and avoids common artifacts. However, it may produce ghosting artifacts at low computation times. We evaluate the methods by comparing rendered images to an assumed ground truth generated with the accumulation buffer method. Physical accuracy is measured through structural similarity (SSIM) while artifacts are evaluated through visual inspection. Computation times are measured in the Inviwo software.
Optical lenses make objects in the foreground and background appear blurred in images. Each point in the scene is projected onto the image plane as a semitransparent circle of confusion (CoC) whose diameter depends on the distance between the point and the lens. In images rendered with a pinhole camera the entire scene is sharp, but depth of field can be added synthetically for photorealism, aesthetics, or attention-guiding purposes. However, many algorithms for depth of field rendering are either resource-intensive or marred by artifacts. This report evaluates two algorithms for depth of field rendering. Both methods can be used independently of the rendering technique. The first renders only a single pinhole image and uses a light-field based method for depth of field rendering. The second renders up to 12 pinhole images and uses CoC gathering to create defocus blur. Ideas from both algorithms are combined in a new method that uses sparse pinhole images to approximate the light field. Our method produces better physical approximations than the other algorithms and avoids common artifacts. However, it can cause ghosting artifacts at short computation times. We evaluate the methods by comparing them against images generated with the accumulation buffer technique, which are assumed to approximate the physical ground truth. Physical accuracy is measured with structural similarity (SSIM) and artifacts are evaluated visually. Computation times are measured in the Inviwo software.
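The CoC relation the abstract relies on (blur disc diameter growing with distance from the focus plane) follows from the thin-lens model. A minimal sketch, not taken from the thesis; the function name and parameters are illustrative, and all distances are assumed to be in the same units:

```python
def coc_diameter(s, s_focus, f, n_stop):
    """Thin-lens circle-of-confusion diameter for a point at distance s
    when the lens (focal length f, f-number n_stop) focuses at s_focus.
    Returns a diameter in the same units as the inputs."""
    aperture = f / n_stop                      # aperture diameter from the f-number
    return aperture * f * abs(s - s_focus) / (s * (s_focus - f))

# A point at the focus distance projects to a single point (CoC = 0);
# points nearer or farther blur into progressively larger discs.
```

For a 50 mm lens at f/2.8 focused at 2 m, a point at 1 m yields a CoC of roughly 0.46 mm in this model, which is why such foreground points render as visibly defocused.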
Estilos ABNT, Harvard, Vancouver, APA, etc.
22

Schwarz, Sebastian. "Gaining Depth : Time-of-Flight Sensor Fusion for Three-Dimensional Video Content Creation". Doctoral thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21938.

Texto completo da fonte
Resumo:
The successful revival of three-dimensional (3D) cinema has generated a great deal of interest in 3D video. However, contemporary eyewear-assisted displaying technologies are not well suited for the less restricted scenarios outside movie theaters. The next generation of 3D displays, autostereoscopic multiview displays, overcome the restrictions of traditional stereoscopic 3D and can provide an important boost for 3D television (3DTV). Then again, such displays require scene depth information in order to reduce the amount of necessary input data. Acquiring this information is quite complex and challenging, thus restricting content creators and limiting the amount of available 3D video content. Nonetheless, without broad and innovative 3D television programs, even next-generation 3DTV will lack customer appeal. Therefore simplified 3D video content generation is essential for the medium's success. This dissertation surveys the advantages and limitations of contemporary 3D video acquisition. Based on these findings, a combination of dedicated depth sensors, so-called Time-of-Flight (ToF) cameras, and video cameras, is investigated with the aim of simplifying 3D video content generation. The concept of Time-of-Flight sensor fusion is analyzed in order to identify suitable courses of action for high quality 3D video acquisition. In order to overcome the main drawback of current Time-of-Flight technology, namely the high sensor noise and low spatial resolution, a weighted optimization approach for Time-of-Flight super-resolution is proposed. This approach incorporates video texture, measurement noise and temporal information for high quality 3D video acquisition from a single video plus Time-of-Flight camera combination. Objective evaluations show benefits with respect to state-of-the-art depth upsampling solutions. Subjective visual quality assessment confirms the objective results, with a significant increase in viewer preference by a factor of four. 
Furthermore, the presented super-resolution approach can be applied to other applications, such as depth video compression, providing bit rate savings of approximately 10 percent compared to competing depth upsampling solutions. The work presented in this dissertation has been published in two scientific journals and five peer-reviewed conference proceedings.  In conclusion, Time-of-Flight sensor fusion can help to simplify 3D video content generation, consequently supporting a larger variety of available content. Thus, this dissertation provides important inputs towards broad and innovative 3D video content, hopefully contributing to the future success of next-generation 3DTV.
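This is not the weighted optimization method the dissertation proposes. As a hedged illustration of the depth-upsampling baselines such work is compared against, the following is a minimal joint bilateral upsampling sketch, in which a high-resolution intensity image guides the interpolation of a low-resolution ToF depth map (function name and parameter defaults are assumptions):

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution depth map using a high-resolution
    grayscale guide image. Each output pixel is a weighted average of
    nearby low-res depth samples; weights combine spatial distance in
    the low-res grid and photometric similarity in the guide image."""
    h, w = guide_hr.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale      # position in the low-res grid
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys = int(round(yl)) + dy
                    xs = int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0]
                            and 0 <= xs < depth_lr.shape[1]):
                        continue
                    # spatial weight, measured in low-res coordinates
                    ws = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2)
                                / (2 * sigma_s ** 2))
                    # range weight from the guide image, sampled at the
                    # high-res pixel corresponding to the low-res sample
                    gy = min(int(ys * scale), h - 1)
                    gx = min(int(xs * scale), w - 1)
                    wr = np.exp(-(guide_hr[y, x] - guide_hr[gy, gx]) ** 2
                                / (2 * sigma_r ** 2))
                    num += ws * wr * depth_lr[ys, xs]
                    den += ws * wr
            out[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return out
```

The design choice is the same one such fusion methods exploit: depth discontinuities usually coincide with intensity edges, so the range weight prevents depth values from bleeding across object boundaries. (Pure-Python loops are used here for clarity; a practical implementation would vectorise them.)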
Estilos ABNT, Harvard, Vancouver, APA, etc.
23

Liu, Yang. "Simulating depth of field using per-pixel linked list buffer". Thesis, Purdue University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1598036.

Texto completo da fonte
Resumo:

In this thesis, I present a method for simulating three characteristics of depth-of-field images: partial occlusion, bokeh and blur. Retrieving color from occluded surfaces is achieved by constructing a per-pixel linked list buffer, which only requires two render passes. Additionally, the per-pixel linked list buffer eliminates the memory overhead of empty pixels in depth layers. Bokeh and blur effects are accomplished by image-space point splatting (Lee 2008). I demonstrate how point splatting can be used to account for the effect of aperture shape and intensity distribution on bokeh. Spherical aberration and chromatic aberration can be approximated using a custom pre-built sprite. Together as a package, this method is capable of matching the realism of multi-perspective methods and layered methods.

Estilos ABNT, Harvard, Vancouver, APA, etc.
24

Henriksson, Ola. "A Depth of Field Algorithm for Realtime 3D Graphics in OpenGL". Thesis, Linköping University, Department of Science and Technology, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1169.

Texto completo da fonte
Resumo:

The company where this thesis was formulated constructs VR applications for the medical environment. The hardware used is ordinary desktops with consumer-level graphics cards and haptic devices. In medicine, some operations require microscopes or cameras. In order to simulate these in a virtual reality environment for educational purposes, the effect of depth of field, or focus, has to be considered.

A working algorithm that generates this optical occurrence in realtime, stereo-rendered computer graphics is presented in this thesis. The algorithm is implemented in OpenGL and C++ to later be combined with a VR application simulating eye-surgery which is built with OpenGL Optimizer.

Several different approaches are described in this report. The call for realtime stereo rendering (~60 fps) means taking advantage of the graphics hardware to a great extent. In OpenGL this means using the extensions of a specific graphics chip for better performance; in this case the algorithm is implemented for a GeForce3 card.

To increase the speed of the algorithm much of the workload is moved from the CPU to the GPU (Graphics Processing Unit). By re-defining parts of the ordinary OpenGL pipeline via vertex programs, a distance-from-focus map can be stored in the alpha channel of the final image with little time loss.

This can effectively be used to blend a previously blurred version of the scene with a normal render. Different techniques to quickly blur a rendered image are discussed; to keep the speed up, solutions that require moving data from the graphics card are not an option.

Estilos ABNT, Harvard, Vancouver, APA, etc.
25

Vörös, Csaba, Norbert Zajzon, Endre Turai e László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth". Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2018. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-231251.

Texto completo da fonte
Resumo:
The main target of the UNEXMIN project is to develop a fully autonomous submersible robot (UX-1) which can map flooded underground mines and also deliver information about the potential raw materials of the mines. There are ca. 30 000 abandoned mines in Europe, many of which could still hold significant reserves of raw materials. Many of these mines are nowadays flooded, and the latest information about them can be more than 100 years old. Although they give limited information, magnetic measurement methods, which detect the local distortions of the Earth's magnetic field, can be very useful for identifying raw materials in the mines. The magnetic field that is independent of any human activity comes from the Earth's own magnetic field. The strength of this field depends on the magnetic materials in the near environment of the investigated point. Ferromagnetic materials have a powerful influence on the magnetic field; in nature, iron-containing minerals such as magnetite and hematite usually have the strongest effect. Magnetic measurement methods are rapid and affordable techniques in geophysical engineering practice. For magnetic field strength and direction measurement, FGM-1 sensors (manufactured by Speake & Co Llanfapley) were selected for the UX-1 robot. The sensor heads' overall dimensions are very small and their energy consumption is negligible. Each FGM-1 sensor was placed and aligned in a plastic cylinder to ensure that the magnetic axis is aligned with the mechanical axis of the tube for more accurate measurement. Three pairs of FGM-1 sensors are needed for the proper determination of the current magnetic field (strength and direction). The sensor pairs need to be mutually perpendicular, so the three pairs of FGM-1 sensors define an arbitrarily oriented Cartesian coordinate system.
We further installed temperature sensors on all FGM-1 probes to compensate for the temperature dependency, even though it has a small effect. The UX-1 robot also contains the electronics block, which controls the three FGM-1 magnetic field sensor pairs and stores the measured data. The block contains the power module, the sensor interface modules with temperature compensation, the microcontroller module and the RS485 communication module. The output data is a temperature-compensated frequency value for each sensor pair. The magnetic signal measured in the local XYZ coordinate system (local to the UX-1) has to be converted to a universal coordinate system during post-processing of the data. The exact position, heading and inclination of the robot must be known throughout the dive to perform this conversion. The measured magnetic signal will be placed into the measured mine map, reconstructed from the delivered 3D point cloud, so that the exact location of the magnetic anomalies can be identified. Few magnetic sources are expected in the operating environment of the robot, but its own generated magnetic noise can be significant. There will be many cooling fans, microcontrollers and multiple thrusters inside the pressure hull of the UX-1, which generate magnetic fields. The constant magnetic noise coming from the cooling fans can be compensated, but the varying fields caused by, e.g., the changing thruster speeds are problematic. We designed a calibration method in which the effect of the main thrusters (even with changing speed) and the effect of the constant cooling fans can be compensated. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 690008.
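The local-to-universal coordinate conversion described above amounts to rotating the measured field vector by the robot's attitude. A minimal sketch, assuming a Z-Y-X (yaw-pitch-roll) Euler convention; the convention and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-world rotation matrix from the robot's attitude (radians),
    composed in the Z-Y-X (yaw-pitch-roll) convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw about Z
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about Y
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about X
    return rz @ ry @ rx

def to_world_frame(b_local, roll, pitch, yaw):
    """Express a field vector measured in the robot's local XYZ frame
    in the universal (world) frame."""
    return rotation_matrix(roll, pitch, yaw) @ np.asarray(b_local, dtype=float)
```

For example, a field measured along the robot's local X axis while the robot is yawed 90° maps onto the world Y axis, which is why the pose must be known for the whole dive before the anomaly map can be assembled.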
Estilos ABNT, Harvard, Vancouver, APA, etc.
26

McDonnell, Ian. "Object segmentation from low depth of field images and video sequences". Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/58630/.

Texto completo da fonte
Resumo:
This thesis addresses the problem of autonomous object segmentation. To do so the proposed segmentation method uses some prior information, namely that the image to be segmented will have a low depth of field and that the object of interest will be more in focus than the background. To differentiate the object from the background scene, a multiscale wavelet based assessment is proposed. The focus assessment is used to generate a focus intensity map, and a sparse fields level set implementation of active contours is used to segment the object of interest. The initial contour is generated using a grid based technique. The method is extended to segment low depth of field video sequences with each successive initialisation for the active contours generated from the binary dilation of the previous frame's segmentation. Experimental results show good segmentations can be achieved with a variety of different images, video sequences, and objects, with no user interaction or input. The method is applied to two different areas. In the first the segmentations are used to automatically generate trimaps for use with matting algorithms. In the second, the method is used as part of a shape from silhouettes 3D object reconstruction system, replacing the need for a constrained background when generating silhouettes. In addition, not using a thresholding to perform the silhouette segmentation allows for objects with dark components or areas to be segmented accurately. Some examples of 3D models generated using silhouettes are shown.
Estilos ABNT, Harvard, Vancouver, APA, etc.
27

Rafiee, Gholamreza. "Automatic region-of-interest extraction in low depth-of-field images". Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2194.

Texto completo da fonte
Resumo:
Automatic extraction of focused regions from images with low depth-of-field (DOF) is a problem without an efficient solution yet. The capability of extracting focused regions can help to bridge the semantic gap by integrating image regions which are meaningfully relevant and generally do not exhibit uniform visual characteristics. There exist two main difficulties for extracting focused regions from low DOF images using high-frequency based techniques: computational complexity and performance. A novel unsupervised segmentation approach based on ensemble clustering is proposed to extract the focused regions from low DOF images in two stages. The first stage is to cluster image blocks in a joint contrast-energy feature space into three constituent groups. To achieve this, we make use of a normal mixture-based model along with standard expectation-maximization (EM) algorithm at two consecutive levels of block size. To avoid the common problem of local optima experienced in many models, an ensemble EM clustering algorithm is proposed. As a result, relevant blocks, i.e., block-based region-of-interest (ROI), closely conforming to image objects are extracted. In stage two, two different approaches have been developed to extract pixel-based ROI. In the first approach, a binary saliency map is constructed from the relevant blocks at the pixel level, which is based on difference of Gaussian (DOG) and binarization methods. Then, a set of morphological operations is employed to create the pixel-based ROI from the map. Experimental results demonstrate that the proposed approach achieves an average segmentation performance of 91.3% and is computationally 3 times faster than the best existing approach. In the second approach, a minimal graph cut is constructed by using the max-flow method and also by using object/background seeds provided by the ensemble clustering algorithm. 
Experimental results demonstrate an average segmentation performance of 91.7% and approximately 50% reduction of the average computational time by the proposed colour based approach compared with existing unsupervised approaches.
Estilos ABNT, Harvard, Vancouver, APA, etc.
28

Vörös, Csaba, Norbert Zajzon, Endre Turai e László Vincze. "Magnetic field measurement possibilities in flooded mines at 500 m depth". TU Bergakademie Freiberg, 2017. https://tubaf.qucosa.de/id/qucosa%3A23184.

Texto completo da fonte
Resumo:
The main target of the UNEXMIN project is to develop a fully autonomous submersible robot (UX-1) which can map flooded underground mines and also deliver information about the potential raw materials of the mines. There are ca. 30 000 abandoned mines in Europe, many of which could still hold significant reserves of raw materials. Many of these mines are nowadays flooded, and the latest information about them can be more than 100 years old. Although they give limited information, magnetic measurement methods, which detect the local distortions of the Earth's magnetic field, can be very useful for identifying raw materials in the mines. The magnetic field that is independent of any human activity comes from the Earth's own magnetic field. The strength of this field depends on the magnetic materials in the near environment of the investigated point. Ferromagnetic materials have a powerful influence on the magnetic field; in nature, iron-containing minerals such as magnetite and hematite usually have the strongest effect. Magnetic measurement methods are rapid and affordable techniques in geophysical engineering practice. For magnetic field strength and direction measurement, FGM-1 sensors (manufactured by Speake & Co Llanfapley) were selected for the UX-1 robot. The sensor heads' overall dimensions are very small and their energy consumption is negligible. Each FGM-1 sensor was placed and aligned in a plastic cylinder to ensure that the magnetic axis is aligned with the mechanical axis of the tube for more accurate measurement. Three pairs of FGM-1 sensors are needed for the proper determination of the current magnetic field (strength and direction). The sensor pairs need to be mutually perpendicular, so the three pairs of FGM-1 sensors define an arbitrarily oriented Cartesian coordinate system.
We further installed temperature sensors on all FGM-1 probes to compensate for the temperature dependency, even though it has a small effect. The UX-1 robot also contains the electronics block, which controls the three FGM-1 magnetic field sensor pairs and stores the measured data. The block contains the power module, the sensor interface modules with temperature compensation, the microcontroller module and the RS485 communication module. The output data is a temperature-compensated frequency value for each sensor pair. The magnetic signal measured in the local XYZ coordinate system (local to the UX-1) has to be converted to a universal coordinate system during post-processing of the data. The exact position, heading and inclination of the robot must be known throughout the dive to perform this conversion. The measured magnetic signal will be placed into the measured mine map, reconstructed from the delivered 3D point cloud, so that the exact location of the magnetic anomalies can be identified. Few magnetic sources are expected in the operating environment of the robot, but its own generated magnetic noise can be significant. There will be many cooling fans, microcontrollers and multiple thrusters inside the pressure hull of the UX-1, which generate magnetic fields. The constant magnetic noise coming from the cooling fans can be compensated, but the varying fields caused by, e.g., the changing thruster speeds are problematic. We designed a calibration method in which the effect of the main thrusters (even with changing speed) and the effect of the constant cooling fans can be compensated. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 690008.
Estilos ABNT, Harvard, Vancouver, APA, etc.
29

He, Ruojun. "Square Coded Aperture: A Large Aperture with Infinite Depth of Field". University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1418078808.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Grossnickle, James A. "Deep fueling of large tokamaks by field-reversed configuration injection /". Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/9980.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Wasserman, Thomas A. "A reduced tensor product of braided fusion categories over a symmetric fusion category". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:58c6aae3-cb0e-4381-821f-f7291ff95657.

Texto completo da fonte
Resumo:
The main goal of this thesis is to construct a tensor product on the 2-category BFC-A of braided fusion categories containing a symmetric fusion category A. We achieve this by introducing the new notion of Z(A)-crossed braided categories. These are categories enriched over the Drinfeld centre Z(A) of the symmetric fusion category. We show that Z(A) admits an additional symmetric tensor structure, which makes it into a 2-fold monoidal category. By Tannaka duality, A = Rep(G) (or Rep(G, w)) for a finite group G (or finite super-group (G, w)). Under this identification, Z(A) = Vect_G[G], the category of G-equivariant vector bundles over G, and we show that the symmetric tensor product corresponds to (a super version of) the fibrewise tensor product. We use the additional symmetric tensor product on Z(A) to define the composition in Z(A)-crossed braided categories, whereas the usual tensor product is used for the monoidal structure. We further require this monoidal structure to be braided for the switch map that uses the braiding in Z(A). We show that the 2-category Z(A)-XBF is equivalent to both BFC-A and the 2-category of (super)-G-crossed braided categories. Using the former equivalence, the reduced tensor product on BFC-A is defined in terms of the enriched Cartesian product of Z(A)-enriched categories on Z(A)-XBF. The reduced tensor product obtained in this way has as unit Z(A). It induces a pairing between minimal modular extensions of categories having A as their Müger centre.
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Agresti, Gianluca. "Data Driven Approaches for Depth Data Denoising". Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422722.

Texto completo da fonte
Resumo:
The scene depth is an important piece of information that can be used to retrieve the scene geometry, a missing element in standard color images. For this reason, the depth information is usually employed in many applications such as 3D reconstruction, autonomous driving and robotics. The last decade has seen the spread of different commercial devices able to sense the scene depth. Among these, Time-of-Flight (ToF) cameras are becoming popular because they are relatively cheap and they can be miniaturized and implemented on portable devices. Stereo vision systems are the most widespread 3D sensors and they are simply composed of two standard color cameras. However, they are not free from flaws; in particular, they fail when the scene has no texture. Active stereo and structured light systems have been developed to overcome this issue by using external light projectors. This thesis collects the findings of my Ph.D. research, which are mainly devoted to the denoising of depth data. First, some of the most widespread commercial 3D sensors are introduced with their strengths and limitations. Then, some techniques for the quality enhancement of ToF depth acquisition are presented and compared with other state-of-the-art methods. A first proposed method is based on a hardware modification of the standard ToF projector. A second approach instead uses multi-frequency ToF recordings as input of a deep learning network to improve the depth estimation. A particular focus will be given to how the denoising performance degrades when the network is trained on synthetic data and tested on real data. Thus, a method to reduce the gap in performance will be proposed. Since ToF and stereo vision systems have complementary characteristics, the possibility to fuse the information coming from these sensors is analysed, and a method based on a locally consistent fusion, guided by a learning-based reliability measure for the two sensors, is proposed.
A part of this thesis is dedicated to the description of the data acquisition procedures and the related labeling required to collect the datasets we used for the training and evaluation of the proposed methods.
Scene depth is an important piece of information that can be used to recover the geometry of the scene itself, an element missing from plain colour images. For this reason, such data are often used in many applications such as 3D reconstruction, autonomous driving and robotics. The last decade has seen the spread of various devices capable of estimating the depth of a scene. Among these, Time-of-Flight (ToF) cameras are becoming increasingly popular since they are relatively inexpensive and can be miniaturised and implemented on portable devices. Stereo vision systems are the most widespread 3D sensors and consist of two simple colour cameras. These sensors are not free of flaws, however; in particular, they cannot correctly estimate the depth of textureless scenes. Active stereo systems and structured-light systems were developed to solve this problem by using an external projector. This thesis presents the results I obtained during my Ph.D. at the University of Padova. The main goal of my work was to present methods for improving 3D data acquired with commercial sensors. In the first part of the thesis, the most widespread 3D sensors are presented, introducing their strengths and weaknesses. Then, methods for improving the quality of depth data acquired with ToF cameras are described. A first method exploits a hardware modification of the ToF projector. The second uses a convolutional neural network (CNN) that exploits data acquired by a ToF camera to estimate an accurate depth map of the scene. In my work, attention was given to how the performance of this method degrades when the CNN is trained on synthetic data and tested on real data.
Consequently, a method for reducing this loss of performance is presented. Since depth maps acquired with ToF sensors and stereo systems have complementary properties, the possibility of fusing these two sources of information was investigated. In particular, a fusion method is presented that enforces the local consistency of the data and exploits an estimate of the accuracy of the two sensors, computed with a CNN, to guide the fusion process. Part of the thesis is devoted to describing the procedures for acquiring the data used for training and evaluating the presented methods.
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Goss, Keith Michael. "Multi-dimensional polygon-based rendering for motion blur and depth of field". Thesis, Brunel University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294033.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Widjanarko, Taufiq. "Hyperspectral interferometry for single-shot profilometry and depth-resolved displacement field measurement". Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8349.

Texto completo da fonte
Resumo:
A new approach to the absolute measurement of two-dimensional optical path differences is presented in this thesis. The method, which incorporates a white light interferometer and a hyperspectral imaging system, is referred to as Hyperspectral Interferometry. A prototype of the Hyperspectral Interferometry (HSI) system has been designed, constructed and tested for two types of measurement: for surface profilometry and for depth-resolved displacement measurement, both of which have been implemented so as to achieve single shot data acquisition. The prototype has been shown to be capable of performing a single-shot 3-D shape measurement of an optically-flat step-height sample, with less than 5% difference from the result obtained by a standard optical (microscope) based method. The HSI prototype has been demonstrated to be able to perform single-shot measurement with an unambiguous 352 μm depth range and a rms measurement error of around 80 nm. The prototype has also been tested to perform measurements on optically rough surfaces. The rms error of these measurements was found to increase to around 4× that of the smooth surface. For the depth-resolved displacement field measurements, an experimental setup was designed and constructed in which a weakly-scattering sample underwent simple compression with a PZT actuator. Depth-resolved displacement fields were reconstructed from pairs of hyperspectral interferograms. However, the experimental results did not show the expected result of linear phase variation with depth. Analysis of several possible causes has been carried out with the most plausible reasons being excessive scattering particle density inside the sample and the possibility of insignificant deformation of the sample due to insufficient physical contact between the transducer and the sample.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Reddy, Serendra. "Automatic 2D-to-3D conversion of single low depth-of-field images". Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/24475.

Texto completo da fonte
Resumo:
This research presents a novel approach to the automatic rendering of 3D stereoscopic disparity image pairs from single 2D low depth-of-field (LDOF) images. Initially a depth map is produced through the assignment of depth to every delineated object and region in the image. Subsequently the left and right disparity images are produced through depth image-based rendering (DIBR). The objects and regions in the image are initially assigned to one of six proposed groups or labels. Labelling is performed in two stages. The first involves the delineation of the dominant object-of-interest (OOI). The second involves the global object and region grouping of the non-OOI regions. The matting of the OOI is also performed in two stages. Initially the in-focus foreground or region-of-interest (ROI) is separated from the out-of-focus background. This is achieved through the correlation of edge, gradient and higher-order statistics (HOS) saliencies. Refinement of the ROI is performed using k-means segmentation and CIEDE2000 colour-difference matching. Subsequently the OOI is extracted from within the ROI through analysis of the dominant gradients and edge saliencies together with k-means segmentation. Depth is assigned to each of the six labels by correlating Gestalt-based principles with vanishing point estimation, gradient plane approximation and depth from defocus (DfD). To minimise some of the dis-occlusions that are generated through the 3D warping sub-process within the DIBR process, the depth map is pre-smoothed using an asymmetric bilateral filter. Hole-filling of the remaining dis-occlusions is performed through nearest-neighbour horizontal interpolation, which incorporates depth as well as direction of warp. To minimise the effects of the lateral striations, specific directional Gaussian and circular averaging smoothing is applied independently to each view, with additional average filtering applied to the border transitions.
Each stage of the proposed model is benchmarked against data from several significant publications. Novel contributions are made in the sub-speciality fields of ROI estimation, OOI matting, LDOF image classification, Gestalt-based region categorisation, vanishing point detection, relative depth assignment and hole-filling or inpainting. An important contribution is made towards the overall knowledge base of automatic 2D-to-3D conversion techniques, through the collation of existing information, expansion of existing methods and development of newer concepts.
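The DIBR warping and hole-filling steps summarised above can be sketched as follows. This is a hypothetical minimal NumPy version, not the thesis's implementation; it omits the asymmetric bilateral pre-smoothing and the directional post-filtering, and uses a plain z-buffer to resolve warp conflicts.

```python
import numpy as np

def dibr_view(image, depth, max_disp=3):
    """Render a virtual view from one image plus its depth map.
    Each pixel is shifted horizontally by a disparity proportional to
    its depth (1 = near, 0 = far); conflicts are resolved with a
    z-buffer so nearer pixels win. Remaining dis-occlusions are filled
    by nearest-neighbour horizontal interpolation."""
    h, w = image.shape
    out = np.full((h, w), np.nan)
    zbuf = np.full((h, w), -1.0)
    disp = np.round(depth * max_disp).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x - disp[y, x]                     # warp toward the virtual view
            if 0 <= xs < w and depth[y, x] > zbuf[y, xs]:
                out[y, xs] = image[y, x]
                zbuf[y, xs] = depth[y, x]
    for y in range(h):                              # hole-filling scan per row
        last = 0.0
        for x in range(w):
            if np.isnan(out[y, x]):
                out[y, x] = last
            else:
                last = out[y, x]
    return out
```

A quick usage example: a flat background (depth 0) is copied in place, while a near rectangle (depth 1) is shifted by the full disparity and overwrites the background behind it.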
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Helbing, Katrin G. "Effects of display contrast and field of view on distance perception". Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-10062009-020220/.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Lovell, Jack James. "Development of smart, compact fusion diagnostics using field-programmable gate arrays". Thesis, Durham University, 2017. http://etheses.dur.ac.uk/12401/.

Texto completo da fonte
Resumo:
Fusion research requires high quality diagnostics to understand the complex physical processes involved. Traditional analogue systems are complex, large and expensive, and expansion of diagnostic capabilities is often impossible without building a completely new system at considerable expense. Field-programmable gate array (FPGA) technology can provide a solution to this problem. By implementing complex functionality and digital signal processing on an FPGA chip, diagnostic hardware can be greatly simplified and compacted. In this thesis we describe the enhancements of two diagnostics for the MAST-Upgrade tokamak using FPGA technology. Firstly, the design of the back end electronics for the new divertor bolometer is described. Results of tests of the new electronics at a number of sites, including lab-based testing and tokamak installations, are also presented. We demonstrate the correct functionality of the electronics and illustrate a number of important effects which must be taken into account when interpreting bolometer data on MAST-U. Secondly, we describe the new control and acquisition electronics developed for the MAST-U divertor Langmuir probe diagnostic. Much of the analogue control circuitry of the previous system has been upgraded to a digital implementation on an FPGA, which results in a significantly more compact and cost effective design. Given that MAST-Upgrade will feature around 850 Langmuir probes, these improvements are extremely important to keep the diagnostic manageable. Again, results are presented from the testing of the system at several sites, which both demonstrate the correct functionality of the new system and provide information on the diagnostic behaviour which needs to be accounted for when interpreting the probe data during MAST-U experiments.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Li, Yan. "Depth Estimation from Structured Light Fields". Doctoral thesis, Universite Libre de Bruxelles, 2020. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/309512.

Texto completo da fonte
Resumo:
Light fields have become popular as a new geometric representation of 3D scenes, composed of multiple views and offering large potential to improve depth perception in the scene. Light fields can be captured by different camera sensors, where different acquisitions give rise to different representations: a line of camera views (the 3D light field representation) or a grid of camera views (the 4D light field representation). When the capture positions are uniformly distributed, the outputs are structured light fields. This thesis focuses on depth estimation from structured light fields. The light field representations (or setups) differ not only in terms of 3D and 4D, but also in the density or baseline of camera views. Rather than aiming only to reconstruct high-quality depths from dense (narrow-baseline) light fields, we pursue a more general objective, i.e. reconstructing depths from a wide range of light field setups. Hence a series of depth estimation methods for light fields, including traditional and deep learning-based methods, are presented in this thesis. Extra effort is devoted to achieving high performance in terms of depth accuracy and computational efficiency. Specifically, 1) a robust traditional framework is put forward for estimating depth in sparse (wide-baseline) light fields, combining cost calculation, window-based filtering and optimization; 2) the above-mentioned framework is extended with new or alternative components to 4D light fields. This new framework is independent of the number of views and/or the baseline of 4D light fields when predicting depth; 3) two new deep learning-based methods are proposed for light fields with narrow baselines, where features are learned from the Epipolar-Plane-Image and light field images. One of the methods is designed as a lightweight model for more practical goals; 4) owing to dataset deficiency, a large-scale and diverse synthetic wide-baseline dataset with labelled data is created. A new lightweight deep model is proposed for 4D light fields with wide baselines. Besides, this model also works on 4D light fields with narrow baselines if trained on narrow-baseline datasets. Evaluations are made on public light field datasets. Experimental results show that the proposed depth estimation methods achieve high-quality depths across a wide range of light field setups, and some even outperform state-of-the-art methods.
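The cost-calculation stage of such a traditional framework can be illustrated with a minimal winner-takes-all sketch for a horizontal line of views (a 3D light field). All names and parameters below are illustrative; the thesis's framework adds window-based filtering and optimisation on top of this bare matching step.

```python
import numpy as np

def lf_depth_wta(views, baselines, disps):
    """Winner-takes-all disparity from a horizontal line of light field
    views. For each disparity hypothesis d, every view is shifted back
    toward the centre view by d times its baseline and an
    absolute-difference cost is accumulated; the cheapest hypothesis
    wins per pixel."""
    centre = views[len(views) // 2]
    cost = np.zeros((len(disps),) + centre.shape)
    for i, d in enumerate(disps):
        for view, b in zip(views, baselines):
            # Circular shift keeps the sketch simple; a real
            # implementation would handle borders explicitly.
            shifted = np.roll(view, -int(round(d * b)), axis=1)
            cost[i] += np.abs(shifted - centre)
    return np.asarray(disps)[np.argmin(cost, axis=0)]
```

On a synthetic scene of constant disparity, the estimator recovers that disparity at every pixel, since the cost vanishes exactly at the true hypothesis.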
Doctorat en Sciences de l'ingénieur et technologie
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Olofsson, Erik. "Closed-loop control and identification of resistive shell magnetohydrodynamics for the reversed-field pinch". Licentiate thesis, KTH, School of Electrical Engineering (EES), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12794.

Texto completo da fonte
Resumo:

It is demonstrated that control software updates for the magnetic confinement fusion experiment EXTRAP T2R can enable novel studies of plasma physics. Specifically, it is shown that the boundary radial magnetic field in T2R can be maintained at finite levels by feedback. System identification methods to measure in situ magnetohydrodynamic stability are developed and applied with encouraging results. Subsequently, results from closed-loop identification are used for retooling the T2R regulator. This line of research may prove relevant for future thermonuclear fusion reactors.


Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Abbott, Joshua E. "Interactive Depth-Aware Effects for Stereo Image Editing". BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3712.

Texto completo da fonte
Resumo:
This thesis introduces methods for adding user-guided depth-aware effects to images captured with a consumer-grade stereo camera with minimal user interaction. In particular, we present methods for highlighted depth-of-field, haze, depth-of-field, and image relighting. Unlike many prior methods for adding such effects, we do not assume prior scene models or require extensive user guidance to create such models, nor do we assume multiple input images. We also do not require specialized camera rigs or other equipment such as light-field camera arrays, active lighting, etc. Instead, we use only an easily portable and affordable consumer-grade stereo camera. The depth is calculated from a stereo image pair using an extended version of PatchMatch Stereo designed to compute not only image disparities but also normals for visible surfaces. We also introduce a pipeline for rendering multiple effects in the order they would occur physically. Each can be added, removed, or adjusted in the pipeline without having to reapply subsequent effects. Individually or in combination, these effects can be used to enhance the sense of depth or structure in images and provide increased artistic control. Our interface also allows editing the stereo pair together in a fashion that preserves stereo consistency, or the effects can be applied to a single image only, thus leveraging the advantages of stereo acquisition even to produce a single photograph.
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Willingham, David George Winograd Nicholas. "Strong-field photoionization of sputtered neutral molecules for chemical imaging and depth profiling". [University Park, Pa.] : Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-4536/index.html.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Sanyal, Poulomi. "Depth of field enhancement of a particle analyzer microscope using wave-front coding". Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=83931.

Texto completo da fonte
Resumo:
In this thesis we present an analytical solution to the problem of improving the depth of field (DOF) of a certain large magnification imaging system with a high numerical aperture (NA), illuminated with an incoherent source of light. As there is a definite trade-off between the focal depth and resolution achievable with such a system, our challenge was to find a system that would achieve both these objectives and at the same time be cost-effective and easy to implement.
Our choice of technique therefore was a novel optical wave front manipulation mechanism involving subsequent image restoration via digital post-processing. This technique is known as wave front coding. The coding is achieved with the help of an optical element known as a phase plate, and the coded image is then electronically restored with the help of a digital post-processing filter.
The three steps involved in achieving our desired goal were, modeling the imaging system to be studied and studying its characteristics before DOF enhancement, designing the phase plate and finally, choosing and designing the appropriate decoding filter. After an appropriate phase plate was modeled, it was incorporated into the pre-existing optics and subsequently optimized. The intermediate image produced by the resulting system was then studied for defocus performance. Finally, the intermediate image was restored using a digital filter and studied once again for DOF characteristics. Other factors, like optical aberrations that might limit system performance were also taken into consideration.
Finally, a simple and cost-effective method of fabricating the suggested phase plate for single-wavelength operation was proposed. The results of our simulations were promising, and sufficiently high-resolution imaging was achievable within the entire enhanced DOF region of ±200 µm from the point of best focus. The DOF without coding was around ±50 µm, but with coding the spot size remained fairly constant over the entire 400 µm deep region of interest. Thus a 4-times increase in the overall system DOF was achieved due to wave front coding.
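The core wave front coding idea, that a cubic phase mask renders the point spread function largely insensitive to defocus, can be checked with a small simulation. This is a generic 1-D illustration with arbitrary pupil sampling, mask strength and defocus values; it is not the optimised phase plate designed in the thesis.

```python
import numpy as np

def psf_1d(alpha, psi, n=2048, pad=8):
    """|FT|^2 of a 1-D generalized pupil exp(j*(alpha*x^3 + psi*x^2)),
    with x normalized to [-1, 1]. alpha sets the cubic-mask strength,
    psi the defocus. Zero-padding gives a finely sampled PSF."""
    x = np.linspace(-1, 1, n)
    pupil = np.exp(1j * (alpha * x**3 + psi * x**2))
    pupil = np.pad(pupil, (n * pad // 2,))
    psf = np.abs(np.fft.fftshift(np.fft.fft(pupil)))**2
    return psf / psf.sum()

def defocus_sensitivity(alpha, psi=8.0):
    """L1 distance between the in-focus and defocused PSFs."""
    return np.abs(psf_1d(alpha, 0.0) - psf_1d(alpha, psi)).sum()
```

With no mask (alpha = 0) the defocused PSF departs strongly from the in-focus one; with a strong cubic mask the two PSFs are nearly identical, which is what makes a single digital restoration filter usable over the whole extended DOF.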
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Sorensen, Jordan (Jordan P. ). "Software simulation of depth of field effects in video from small aperture cameras". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61577.

Texto completo da fonte
Resumo:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 71-73).
This thesis proposes a technique for post processing digital video to introduce a simulated depth of field effect. Because the technique is implemented in software, it affords the user greater control over the parameters of the effect (such as the amount of defocus, aperture shape, and defocus plane) and allows the effect to be used even on hardware which would not typically allow for depth of field. In addition, because it is a completely post processing technique and requires no change in capture method or hardware, it can be used on any video and introduces no new costs. This thesis describes the technique, evaluates its performance on example videos, and proposes further work to improve the technique's reliability.
by Jordan Sorensen.
M.Eng.
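A toy version of such a post-processing depth-of-field effect can be written as a gather-style blur whose kernel radius grows with each pixel's distance from the chosen focal plane. This is a minimal sketch under invented parameters; it ignores the occlusion handling, aperture shapes and temporal consistency a full video implementation would need.

```python
import numpy as np

def simulate_dof(image, depth, focus, max_radius=4):
    """Gather-style synthetic depth of field: each output pixel is the
    mean of a square neighbourhood whose radius grows with the pixel's
    distance from the focal plane. `image` and `depth` are HxW arrays,
    depth and focus in [0, 1]."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    radius = np.round(np.abs(depth - focus) * max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Moving the `focus` parameter through the depth range reproduces the user control the thesis emphasises: in-focus regions pass through untouched, while out-of-focus regions are averaged over progressively larger neighbourhoods.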
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Alinder, Simon. "Effect of the convective electric field on the ion number density around a low activity comet". Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-337017.

Texto completo da fonte
Resumo:
Vigren et al. (2015) presents an integral expression for calculating the ion number density around a low-activity comet immersed in the solar wind's convective electric field. A parameter in the integrand takes the value 1 or 0 depending on whether the corresponding ion trajectory is feasible. The criterion used in that paper has been found not to be strict enough, yielding overestimated ion number densities in the cometary wake. The present project derives two new options for the criterion, one analytical and one numerical. The new numerical criterion is applied in the same computations as in the original paper, and the results of the old and new criteria are compared. The new criterion is found to correct the previous error.

Project carried out within: Advanced course in physics - project course, 5.0 credits. Course code: 1FA566.

Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Barber, Julien (Julien Victor). "Investigation of cryogenic cooling for a high-field toroidal field magnet used in the SPARC fusion reactor design". Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118738.

Texto completo da fonte
Resumo:
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 111-114).
Rare Earth Barium Copper Oxide (REBCO) High Temperature Superconducting (HTS) tapes are being considered for the Toroidal Field (TF) magnets of the highly compact, high-field SPARC Version 0 (V0) reactor design. The V0 design is set to operate at magnetic fields as high as 20 T, and operating temperatures ranging from 10-30 K. Due to the increase in range of operating conditions made available through the HTS-based magnets, a new set of cryogenic fluids are being considered for forced flow cooling. This thesis analyzes the thermophysical properties of helium, hydrogen, and neon, and constructs a numerical model to investigate the forced flow cooling for REBCO HTS tapes under the extreme heating conditions present in the SPARC V0 design. Four design criteria are used to assess each cryogen, including the current sharing temperature, fluid inlet temperature, cable pressure drop (ΔP), and operating pressure. From the results of the model, neon is removed from consideration due to its high required pressure drop and low temperature margins imposed by the superconductor current sharing limit. Hydrogen provides the highest effective heat transfer rate operating at inlet conditions of 1.5 MPa and 15 K, but is constrained by safety considerations. Helium is also able to meet the current sharing condition, but with higher initial pressure and lower initial temperature. Using the numerical model, an analysis using the four design criteria finds an optimal operating condition for helium of 2.5 MPa and 10 K based on minimizing cable pressure drop (ΔP) and inlet pressure, while maximizing the fluid's inlet temperature. With a target operating point defined, an experimental cryogenic flow loop is designed with the purpose of verifying the high heat transfer rates required for the high-pressure, supercritical helium flow in the SPARC reactor. The flow loop uses a pressure differential to drive flow at a target mass flow rate of 46 g/s.
To simulate a plasma pulse, the fluid flow is subjected to heat fluxes of up to 45 kW/m² for a minimum duration of ten seconds.
Supported by the U.S. Department of Energy, Office of Fusion Energy Science Grant: DE-FC02-93ER54186
by Julien Barber.
S.M.
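The kind of forced-flow sizing argument used to compare the cryogens can be reproduced with standard correlations (Dittus-Boelter for turbulent heat transfer, Blasius for friction). The property values and channel diameter below are illustrative placeholders, not numbers from the thesis; only the 46 g/s mass flow rate comes from the text.

```python
import math

mdot = 0.046   # kg/s, target mass flow rate from the text
d = 6e-3       # m, assumed hydraulic diameter of the cooling channel
rho = 120.0    # kg/m^3, assumed supercritical-helium density near 2.5 MPa, 10 K
mu = 3.5e-6    # Pa*s, assumed dynamic viscosity
k = 0.02       # W/(m*K), assumed thermal conductivity
cp = 5200.0    # J/(kg*K), assumed specific heat

area = math.pi * d**2 / 4
velocity = mdot / (rho * area)
re = rho * velocity * d / mu                  # Reynolds number
pr = cp * mu / k                              # Prandtl number
nu = 0.023 * re**0.8 * pr**0.4                # Dittus-Boelter (heating)
htc = nu * k / d                              # heat transfer coeff., W/(m^2*K)
f = 0.316 * re**-0.25                         # Blasius friction factor
dp_per_m = f * rho * velocity**2 / (2 * d)    # pressure drop, Pa per metre

print(f"Re = {re:.3g}, h = {htc:.3g} W/m^2/K, dP = {dp_per_m:.3g} Pa/m")
```

Even with rough property values, the flow comes out strongly turbulent, which is what makes correlation-based heat transfer and ΔP trade-offs like those in the thesis meaningful.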
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Atif, Muhammad [Verfasser], e Bernd [Akademischer Betreuer] Jähne. "Optimal Depth Estimation and Extended Depth of Field from Single Images by Computational Imaging using Chromatic Aberrations / Muhammad Atif ; Betreuer: Bernd Jähne". Heidelberg : Universitätsbibliothek Heidelberg, 2013. http://d-nb.info/1177382679/34.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Walter, Erwin. "Field-Aligned Currents and Flow Bursts in the Earth’s Magnetotail". Thesis, Umeå universitet, Institutionen för fysik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148525.

Texto completo da fonte
Resumo:
We use electric and magnetic field data from the MMS spacecraft between 2016 and 2017 to statistically investigate earthward-propagating plasma flow bursts and field-aligned currents (FACs) inside the plasma sheet of the geomagnetic tail. We observe that the occurrence rate of flow bursts peaks around the midnight region, with a decreasing trend towards Earth and the plasma sheet flanks. Further, we distinguish between long and short FACs. Long FACs last on average 6 s and have a magnitude of 5-20 nA/m². Short FACs last on average ten times shorter and have a magnitude of 10-50 nA/m². Both long and short FACs occur on average once per flow burst, with a minimum of 0 and a maximum of 4 occurrences per flow burst. In total, 43% of the observed FACs are located within a flow burst, 40% before and 17% right after a flow burst.
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

Hu, Guang-hua. "Extending the depth of focus using digital image filtering". Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45653.

Texto completo da fonte
Resumo:

Two types of image processing methods capable of forming a composite image from a set of image slices which have in-focus as well as out-of-focus segments are discussed. The first type is based on space-domain operations and has been discussed in the literature. The second type, to be introduced, is based on the intuitive concept that the spectral energy distribution of a focused object is biased towards lower frequencies after blurring. This approach requires digital image filtering in the spatial frequency domain. A comparison among methods of both types is made using a quantitative fidelity criterion.


Master of Science
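The frequency-based fusion idea rests on picking, per pixel, the slice with the most high-frequency energy. The sketch below uses a discrete Laplacian as a simple spatial proxy for that spectral criterion; it is an illustration under that substitution, not either of the two method types compared in the thesis.

```python
import numpy as np

def fuse_focus_stack(slices):
    """Composite an all-in-focus image from slices taken at different
    focus settings: for each pixel, pick the slice with the largest
    local high-frequency energy, measured here with a discrete
    Laplacian (blurring suppresses high frequencies, so the sharpest
    slice has the largest response)."""
    stack = np.stack(slices).astype(float)
    lap = np.zeros_like(stack)
    lap[:, 1:-1, 1:-1] = (stack[:, :-2, 1:-1] + stack[:, 2:, 1:-1] +
                          stack[:, 1:-1, :-2] + stack[:, 1:-1, 2:] -
                          4 * stack[:, 1:-1, 1:-1])
    best = np.argmax(np.abs(lap), axis=0)   # sharpest slice index per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Given two slices that are each sharp in a different half of the frame, the composite recovers the textured (in-focus) content from both halves.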
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

Marin, Giulio. "3D data fusion from multiple sensors and its applications". Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3425367.

Texto completo da fonte
Resumo:
The introduction of depth cameras in the mass market contributed to make computer vision applicable to many real world applications, such as human interaction in virtual environments, autonomous driving, robotics and 3D reconstruction. All these problems were originally tackled by means of standard cameras, but the intrinsic ambiguity in the bidimensional images led to the development of depth cameras technologies. Stereo vision was first introduced to provide an estimate of the 3D geometry of the scene. Structured light depth cameras were developed to use the same concepts of stereo vision but overcome some of the problems of passive technologies. Finally, Time-of-Flight (ToF) depth cameras solve the same depth estimation problem by using a different technology. This thesis focuses on the acquisition of depth data from multiple sensors and presents techniques to efficiently combine the information of different acquisition systems. The three main technologies developed to provide depth estimation are first reviewed, presenting operating principles and practical issues of each family of sensors. The use of multiple sensors then is investigated, providing practical solutions to the problem of 3D reconstruction and gesture recognition. Data from stereo vision systems and ToF depth cameras are combined together to provide a higher quality depth map. A confidence measure of depth data from the two systems is used to guide the depth data fusion. The lack of datasets with data from multiple sensors is addressed by proposing a system for the collection of data and ground truth depth, and a tool to generate synthetic data from standard cameras and ToF depth cameras. For gesture recognition, a depth camera is paired with a Leap Motion device to boost the performance of the recognition task. A set of features from the two devices is used in a classification framework based on Support Vector Machines and Random Forests.
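The confidence-guided fusion of stereo and ToF depth maps can be reduced, as a minimal illustration, to a per-pixel convex combination weighted by the two confidence maps. This is a hypothetical sketch of the guiding idea only; the fusion developed in the thesis is more elaborate.

```python
import numpy as np

def fuse_depth_maps(d_stereo, c_stereo, d_tof, c_tof):
    """Confidence-driven fusion of two depth maps: each pixel is a
    convex combination of the two measurements weighted by their
    confidence maps (values in [0, 1]); where both confidences vanish
    the pixel is marked invalid (NaN)."""
    wsum = c_stereo + c_tof
    fused = np.full(d_stereo.shape, np.nan)
    ok = wsum > 0
    fused[ok] = (c_stereo[ok] * d_stereo[ok] +
                 c_tof[ok] * d_tof[ok]) / wsum[ok]
    return fused
```

Pixels where the ToF measurement is trusted more are pulled toward the ToF depth, and vice versa, which is the behaviour the confidence measures are meant to induce.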
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Dahlin, Jon-Erik. "Numerical studies of current profile control in the reversed-field pinch". Doctoral thesis, Stockholm : Alfvén Laboratory, Royal Institute of Technology, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4167.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.