To see the other types of publications on this topic, follow the link: Scene depth.

Journal articles on the topic "Scene depth"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for research on the topic "Scene depth".

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from many different subject areas and compile your bibliography correctly.

1

Fernandes, Suzette, and Monica S. Castelhano. "The Foreground Bias: Initial Scene Representations Across the Depth Plane." Psychological Science 32, no. 6 (21.05.2021): 890–902. http://dx.doi.org/10.1177/0956797620984464.

Annotation:
When you walk into a large room, you perceive visual information that is both close to you in depth and farther in the background. Here, we investigated how initial scene representations are affected by information across depth. We examined the role of background and foreground information on scene gist by using chimera scenes (images with a foreground and background from different scene categories). Across three experiments, we found a foreground bias: Information in the foreground initially had a strong influence on the interpretation of the scene. This bias persisted when the initial fixation position was on the scene background and when the task was changed to emphasize scene information. We concluded that the foreground bias arises from initial processing of scenes for understanding and suggests that scene information closer to the observer is initially prioritized. We discuss the implications for theories of scene and depth perception.
2

Nefs, Harold T. "Depth of Field Affects Perceived Depth-width Ratios in Photographs of Natural Scenes." Seeing and Perceiving 25, no. 6 (2012): 577–95. http://dx.doi.org/10.1163/18784763-00002400.

Annotation:
The aim of the study was to find out how much influence depth of field has on the perceived ratio of depth and width in photographs of natural scenes. Depth of field is roughly defined as the distance range that is perceived as sharp in the photograph. Four different semi-natural scenes consisting of a central and two flanking figurines were used. For each scene, five series of photos were made, in which the distance in depth between the central figurine and the flanking figurines increased. These series of photographs had different amounts of depth of field. In the first experiment participants adjusted the position of the two flanking figurines relative to a central figurine, until the perceived distance in the depth dimension equaled the perceived lateral distance between the two flanking figurines. Viewing condition was either monocular or binocular (non-stereo). In the second experiment, the participants did the same task but this time we varied the viewing distance. We found that the participants’ depth/width settings increased with increasing depth of field. As depth of field increased, the perceived depth in the scene was reduced relative to the perceived width. Perceived depth was reduced relative to perceived width under binocular viewing conditions compared to monocular viewing conditions. There was a greater reduction when the viewing distance was increased. As photographs of natural scenes contain many highly redundant or conflicting depth cues, we conclude therefore that local image blur is an important cue to depth. Moreover, local image blur is not only taken into account in the perception of egocentric distances, but also affects the perception of depth within the scene relative to lateral distances within the scene.
3

Chlubna, T., T. Milet, and P. Zemčík. "Real-time per-pixel focusing method for light field rendering." Computational Visual Media 7, no. 3 (27.02.2021): 319–33. http://dx.doi.org/10.1007/s41095-021-0205-0.

Annotation:
Light field rendering is an image-based rendering method that does not use 3D models but only images of the scene as input to render new views. Light field approximation, represented as a set of images, suffers from so-called refocusing artifacts due to different depth values of the pixels in the scene. Without information about depths in the scene, proper focusing of the light field scene is limited to a single focusing distance. The correct focusing method is addressed in this work and a real-time solution is proposed for focusing of light field scenes, based on statistical analysis of the pixel values contributing to the final image. Unlike existing techniques, this method does not need precomputed or acquired depth information. Memory requirements and streaming bandwidth are reduced and real-time rendering is possible even for high resolution light field data, yielding visually satisfactory results. Experimental evaluation of the proposed method, implemented on a GPU, is presented in this paper.
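As a rough illustration of the idea in this abstract (not the authors' GPU implementation), per-pixel focusing can be posed as picking, for every output pixel, the focusing disparity whose contributing light-field samples agree best. The array layout, nearest-neighbour sampling, and the variance criterion in this sketch are illustrative assumptions.

```python
import numpy as np

def per_pixel_focus(views, grid, disparities):
    """Pick, per pixel, the focusing disparity with the most consistent samples.

    views       : (K, H, W) grayscale light-field views on a regular camera grid
    grid        : (K, 2) camera offsets (u, v) of each view relative to the centre
    disparities : 1-D array of candidate per-pixel disparities to test
    Returns the refocused image and the chosen per-pixel disparity map.
    """
    K, H, W = views.shape
    ys, xs = np.mgrid[0:H, 0:W]
    best_var = np.full((H, W), np.inf)
    best_img = np.zeros((H, W))
    best_disp = np.zeros((H, W))

    for d in disparities:
        samples = np.zeros((K, H, W))
        for k, (u, v) in enumerate(grid):
            # Shift each view by its baseline times the candidate disparity
            # (nearest-neighbour sampling keeps the sketch short).
            sx = np.clip(np.round(xs + d * u).astype(int), 0, W - 1)
            sy = np.clip(np.round(ys + d * v).astype(int), 0, H - 1)
            samples[k] = views[k, sy, sx]
        var = samples.var(axis=0)          # agreement of the contributing pixels
        better = var < best_var
        best_var = np.where(better, var, best_var)
        best_img = np.where(better, samples.mean(axis=0), best_img)
        best_disp = np.where(better, d, best_disp)
    return best_img, best_disp
```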
4

Lee, Jaeho, Seungwoo Yoo, Changick Kim, and Bhaskaran Vasudev. "Estimating Scene-Oriented Pseudo Depth With Pictorial Depth Cues." IEEE Transactions on Broadcasting 59, no. 2 (June 2013): 238–50. http://dx.doi.org/10.1109/tbc.2013.2240131.

5

Sauer, Craig W., Myron L. Braunstein, Asad Saidpour, and George J. Andersen. "Propagation of Depth Information from Local Regions in 3-D Scenes." Perception 31, no. 9 (September 2002): 1047–59. http://dx.doi.org/10.1068/p3261.

Annotation:
The effects of regions with local linear perspective on judgments of the depth separation between two objects in a scene were investigated for scenes consisting of a ground plane, a quadrilateral region, and two poles separated in depth. The poles were either inside or outside the region. Two types of displays were used: motion-parallax dot displays, and a still photograph of a real scene on which computer-generated regions and objects were superimposed. Judged depth separations were greater for regions with greater linear perspective, both for objects inside and outside the region. In most cases, the effect of the region's shape was reduced for objects outside the region. Some systematic differences were found between the two types of displays. For example, adding a region with any shape increased judged depth in motion-parallax displays, but only high-perspective regions increased judged depth in real-scene displays. We conclude that depth information present in local regions affects perceived depth within the region, and that these effects propagate, to a lesser degree, outside the region.
6

Torralba, A., and A. Oliva. "Depth perception from familiar scene structure." Journal of Vision 2, no. 7 (14.03.2010): 494. http://dx.doi.org/10.1167/2.7.494.

7

Groen, Iris I. A., Sennay Ghebreab, Victor A. F. Lamme, and H. Steven Scholte. "The time course of natural scene perception with reduced attention." Journal of Neurophysiology 115, no. 2 (01.02.2016): 931–46. http://dx.doi.org/10.1152/jn.00896.2015.

Annotation:
Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention.
8

Qiu, Yue, Yutaka Satoh, Ryota Suzuki, Kenji Iwata, and Hirokatsu Kataoka. "Indoor Scene Change Captioning Based on Multimodality Data." Sensors 20, no. 17 (23.08.2020): 4761. http://dx.doi.org/10.3390/s20174761.

Annotation:
This study proposes a framework for describing a scene change using natural language text based on indoor scene observations conducted before and after a scene change. The recognition of scene changes plays an essential role in a variety of real-world applications, such as scene anomaly detection. Most scene understanding research has focused on static scenes. Most existing scene change captioning methods detect scene changes from single-view RGB images, neglecting the underlying three-dimensional structures. Previous three-dimensional scene change captioning methods use simulated scenes consisting of geometry primitives, making it unsuitable for real-world applications. To solve these problems, we automatically generated large-scale indoor scene change caption datasets. We propose an end-to-end framework for describing scene changes from various input modalities, namely, RGB images, depth images, and point cloud data, which are available in most robot applications. We conducted experiments with various input modalities and models and evaluated model performance using datasets with various levels of complexity. Experimental results show that the models that combine RGB images and point cloud data as input achieve high performance in sentence generation and caption correctness and are robust for change type understanding for datasets with high complexity. The developed datasets and models contribute to the study of indoor scene change understanding.
9

Warrant, Eric. "The eyes of deep–sea fishes and the changing nature of visual scenes with depth." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, no. 1401 (29.09.2000): 1155–59. http://dx.doi.org/10.1098/rstb.2000.0658.

Annotation:
The visual scenes viewed by ocean animals change dramatically with depth. In the brighter epipelagic depths, daylight provides an extended field of illumination. In mesopelagic depths down to 1000 m the visual scene is semi–extended, with the downwelling daylight providing increasingly dim extended illumination with depth. In contrast, greater depths increase the prominence of point–source bioluminescent flashes. In bathypelagic depths (below 1000 m) daylight no longer penetrates, and the visual scene consists exclusively of point–source bioluminescent flashes. In this paper, I show that the eyes of fishes match this change from extended to point–source illumination, becoming increasingly foveate and spatially acute with increasing depth. A sharp fovea is optimal for localizing point sources. Quite contrary to their reputation as ‘degenerate’ and ‘regressed’, I show here that the remarkably prominent foveae and relatively large pupils of bathypelagic fishes give them excellent perception and localization of bioluminescent flashes up to a few tens of metres distant. In a world with almost no food, where fishes are weak and must swim very slowly, this range of detection (and interception) is energetically realistic, with distances greater than this physically beyond range. Larger and more sensitive eyes would give bathypelagic fishes little more than the useless ability to see flashes beyond reach.
10

Madhuanand, L., F. Nex, and M. Y. Yang. "DEEP LEARNING FOR MONOCULAR DEPTH ESTIMATION FROM UAV IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (03.08.2020): 451–58. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-451-2020.

Annotation:
Abstract. Depth is an essential component for various scene understanding tasks and for reconstructing the 3D geometry of the scene. Estimating depth from stereo images requires multiple views of the same scene to be captured which is often not possible when exploring new environments with a UAV. To overcome this monocular depth estimation has been a topic of interest with the recent advancements in computer vision and deep learning techniques. This research has been widely focused on indoor scenes or outdoor scenes captured at ground level. Single image depth estimation from aerial images has been limited due to additional complexities arising from increased camera distance, wider area coverage with lots of occlusions. A new aerial image dataset is prepared specifically for this purpose combining Unmanned Aerial Vehicles (UAV) images covering different regions, features and point of views. The single image depth estimation is based on image reconstruction techniques which uses stereo images for learning to estimate depth from single images. Among the various available models for ground-level single image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial model (GAN) are used to learn depth from aerial images from UAVs. These models generate pixel-wise disparity images which could be converted into depth information. The generated disparity maps from these models are evaluated for its internal quality using various error metrics. The results show higher disparity ranges with smoother images generated by CNN model and sharper images with lesser disparity range generated by GAN model. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. It is found that the CNN model performs better than GAN and produces depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
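The conversion mentioned at the end of the abstract, from a pixel-wise disparity map to depth, follows the usual stereo relation depth = focal length × baseline / disparity. A minimal sketch, where the function name and the clamping value used to avoid division by zero are assumptions:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, min_disp=1e-6):
    """Convert a disparity map (in pixels) to metric depth (in metres).

    depth = f * B / d; tiny disparities are clamped to keep the division finite.
    """
    d = np.maximum(disparity, min_disp)
    return focal_px * baseline_m / d
```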
11

Casser, Vincent, Soeren Pirk, Reza Mahjourian, and Anelia Angelova. "Depth Prediction without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos." Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 8001–8. http://dx.doi.org/10.1609/aaai.v33i01.33018001.

Annotation:
Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g. from outdoors to indoor scenes. The main idea is to introduce geometric structure in the learning process, by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion e.g. through learned flow. Our results are comparable in quality to the ones which used stereo as supervision and significantly improve depth prediction on scenes and datasets which contain a lot of object motion. The approach is of practical relevance, as it allows transfer across environments, by transferring models trained on data collected for robot navigation in urban scenes to indoor navigation settings. The code associated with this paper can be found at https://sites.google.com/view/struct2depth.
12

Chan, Chang Yuen, Li Hua Li, Wing Bun Lee, and Ya Hui Liu. "A Novel Format of Depth Map Image." Key Engineering Materials 679 (February 2016): 97–101. http://dx.doi.org/10.4028/www.scientific.net/kem.679.97.

Annotation:
When a micro lens array machined by an ultra-precision machine was used in 3D computer graphics, a novel depth map image format with adaptively variable data length was invented to meet the varied requirements of different 3D computer graphics applications. A depth map is an image or image channel that contains information relating to the distance of the surfaces of objects in a scene from a viewpoint. Depth maps can be applied to many functions: defocusing, rendering of 3D scenes, shadow mapping, and other distance-related applications.
13

Cheng, Fang-Hsuan, and Yun-Hui Liang. "Depth map generation based on scene categories." Journal of Electronic Imaging 18, no. 4 (2009): 043006. http://dx.doi.org/10.1117/1.3263920.

14

Maki, Atsuto, Peter Nordlund, and Jan-Olof Eklundh. "Attentional Scene Segmentation: Integrating Depth and Motion." Computer Vision and Image Understanding 78, no. 3 (June 2000): 351–73. http://dx.doi.org/10.1006/cviu.2000.0840.

15

Li, Lerenhan, Jinshan Pan, Wei-Sheng Lai, Changxin Gao, Nong Sang, and Ming-Hsuan Yang. "Dynamic Scene Deblurring by Depth Guided Model." IEEE Transactions on Image Processing 29 (2020): 5273–88. http://dx.doi.org/10.1109/tip.2020.2980173.

16

Han, J. H., M. Jung, C. Lee, and E. Y. Ha. "Panorama field rendering with scene depth estimation." Electronics Letters 38, no. 14 (2002): 704. http://dx.doi.org/10.1049/el:20020511.

17

Wang, Haixia, Yehao Sun, Zhiguo Zhang, Xiao Lu, and Chunyang Sheng. "Depth estimation for a road scene using a monocular image sequence based on fully convolutional neural network." International Journal of Advanced Robotic Systems 17, no. 3 (01.05.2020): 172988142092530. http://dx.doi.org/10.1177/1729881420925305.

Annotation:
Advanced driving assistant systems are one of the most popular topics nowadays, and depth estimation is an important cue for them. Depth prediction is a key problem in understanding the geometry of a road scene for an advanced driving assistant system. In comparison to depth estimation methods using stereo depth perception, determining depth relations with a monocular camera is considerably challenging. In this article, a fully convolutional neural network with skip connections based on a monocular video sequence is proposed. With an integration framework that combines skip connections, a fully convolutional network, and the consistency between consecutive frames of the input sequence, high-resolution depth maps are obtained with lightweight network training and fewer computations. The proposed method models depth estimation as a regression problem and trains the network using a scale-invariance optimization based on an L2 loss function, which measures the relationships between points in consecutive frames. The method can be used for depth estimation of a road scene without the need for any extra information or geometric priors. Experiments on road scene data sets demonstrate that the proposed approach outperforms previous methods for monocular depth estimation in dynamic scenes. Compared with currently proposed methods, our method achieves good results under the Eigen split evaluation; most notably, the linear root mean squared error is 3.462 and the δ < 1.25 accuracy is 0.892.
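The "scale-invariance optimization based on an L2 loss function" mentioned above is commonly written, following Eigen et al., as an L2 penalty on log-depth differences minus a term that forgives a global scale offset. Whether the paper uses exactly this variant, and the weight lam below, are assumptions; the sketch only illustrates the general form.

```python
import numpy as np

def scale_invariant_loss(pred_depth, gt_depth, lam=0.5, eps=1e-8):
    """Scale-invariant L2 loss in log-depth (Eigen-style formulation).

    L = mean(d^2) - lam * mean(d)^2, with d = log(pred) - log(gt).
    The subtracted term makes a constant scale factor between prediction
    and ground truth essentially free.
    """
    valid = gt_depth > 0                        # ignore pixels without ground truth
    d = np.log(pred_depth[valid] + eps) - np.log(gt_depth[valid] + eps)
    return np.mean(d ** 2) - lam * np.mean(d) ** 2
```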
18

Liu, Shuntao, Dedong Gao, Peng Wang, Xifeng Guo, Jing Xu, and Du-Xin Liu. "A Depth-Based Weighted Point Cloud Registration for Indoor Scene." Sensors 18, no. 11 (24.10.2018): 3608. http://dx.doi.org/10.3390/s18113608.

Annotation:
Point cloud registration plays a key role in three-dimensional scene reconstruction, and determines the effect of reconstruction. The iterative closest point algorithm is widely used for point cloud registration. To improve the accuracy of point cloud registration and the convergence speed of registration error, point pairs with smaller Euclidean distances are used as the points to be registered, and the depth measurement error model and weight function are analyzed. The measurement error is taken into account in the registration process. The experimental results of different indoor scenes demonstrate that the proposed method effectively improves the registration accuracy and the convergence speed of registration error.
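A compressed sketch of this idea is a point-to-point ICP step in which correspondences are weighted by the reliability of their depth measurement. The specific weight function below (noise growing roughly with the square of depth, as is typical for structured-light sensors) and the parameter values are assumptions for illustration; the paper derives its own error model and also selects close point pairs.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_icp(src, dst, iters=30, sigma0=0.0012):
    """Point-to-point ICP where closer (more reliable) points get larger weights.

    src, dst : (N, 3) and (M, 3) point clouds; depth is the z coordinate.
    Returns a 4x4 transform mapping src into dst's frame.
    """
    T = np.eye(4)
    cur = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(cur)                    # closest-point correspondences
        w = 1.0 / (sigma0 * cur[:, 2] ** 2 + 1e-9)  # down-weight far, noisy points
        w /= w.sum()
        mu_s = (w[:, None] * cur).sum(0)
        mu_d = (w[:, None] * dst[idx]).sum(0)
        # Weighted Kabsch: SVD of the weighted cross-covariance gives the rotation.
        H = (cur - mu_s).T @ (w[:, None] * (dst[idx] - mu_d))
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        cur = cur @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
    return T
```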
19

Tyler, Christopher W. "An Accelerated Cue Combination Principle Accounts for Multi-cue Depth Perception." Journal of Perceptual Imaging 3, no. 1 (01.01.2020): 10501–1. http://dx.doi.org/10.2352/j.percept.imaging.2020.3.1.010501.

Annotation:
For the visual world in which we operate, the core issue is to conceptualize how its three-dimensional structure is encoded through the neural computation of multiple depth cues and their integration to a unitary depth structure. One approach to this issue is the full Bayesian model of scene understanding, but this is shown to require selection from the implausibly large number of possible scenes. An alternative approach is to propagate the implied depth structure solution for the scene through the “belief propagation” algorithm on general probability distributions. However, a more efficient model of local slant propagation is developed as an alternative. The overall depth percept must be derived from the combination of all available depth cues, but a simple linear summation rule across, say, a dozen different depth cues, would massively overestimate the perceived depth in the scene in cases where each cue alone provides a close-to-veridical depth estimate. On the other hand, a Bayesian averaging or “modified weak fusion” model for depth cue combination does not provide for the observed enhancement of perceived depth from weak depth cues. Thus, the current models do not account for the empirical properties of perceived depth from multiple depth cues. The present analysis shows that these problems can be addressed by an asymptotic, or hyperbolic Minkowski, approach to cue combination. With appropriate parameters, this first-order rule gives strong summation for a few depth cues, but the effect of an increasing number of cues beyond that remains too weak to account for the available degree of perceived depth magnitude. Finally, an accelerated asymptotic rule is proposed to match the empirical strength of perceived depth as measured, with appropriate behavior for any number of depth cues.
20

Vidakovic, Vesna, and Suncica Zdravkovic. "Influence of depth cues on multiple objects tracking in 3D scene." Psihologija 43, no. 4 (2010): 389–409. http://dx.doi.org/10.2298/psi1004389v.

Annotation:
Multiple-object-tracking tasks require an observer to track a group of identical objects moving in 2D space. The current study was conducted in an attempt to examine object tracking in 3D space. We were interested in testing influence of classical depth cues (texture gradients, relative size and contrast) on tracking. In Experiment 1 we varied the presence of these depth cues while subjects were tracking four (out of eight) identical, moving objects. Texture gradient, a cue related to scene layout, did not influence object tracking. Experiment 2 was designed to clarify the differences between contrast and relative size effects. Results revealed that contrast was a more effective cue for multiple object tracking in 3D scenes. The effect of occlusion was also examined. Several occluders, presented in the scene, were occasionally masking the targets. Tracking was more successful when occluders were arranged in different depth planes, mimicking more natural conditions. Increasing the number of occlusions led to poorer performance.
21

Zhu, Zhiqin, Yaqin Luo, Hongyan Wei, Yong Li, Guanqiu Qi, Neal Mazur, Yuanyuan Li, and Penglong Li. "Atmospheric Light Estimation Based Remote Sensing Image Dehazing." Remote Sensing 13, no. 13 (22.06.2021): 2432. http://dx.doi.org/10.3390/rs13132432.

Annotation:
Remote sensing images are widely used in object detection and tracking, military security, and other computer vision tasks. However, remote sensing images are often degraded by suspended aerosol in the air, especially under poor weather conditions, such as fog, haze, and mist. The quality of remote sensing images directly affect the normal operations of computer vision systems. As such, haze removal is a crucial and indispensable pre-processing step in remote sensing image processing. Additionally, most of the existing image dehazing methods are not applicable to all scenes, so the corresponding dehazed images may have varying degrees of color distortion. This paper proposes a novel atmospheric light estimation based dehazing algorithm to obtain high visual-quality remote sensing images. First, a differentiable function is used to train the parameters of a linear scene depth model for the scene depth map generation of remote sensing images. Second, the atmospheric light of each hazy remote sensing image is estimated by the corresponding scene depth map. Then, the corresponding transmission map is estimated on the basis of the estimated atmospheric light by a haze-lines model. Finally, according to the estimated atmospheric light and transmission map, an atmospheric scattering model is applied to remove haze from remote sensing images. The colors of the images dehazed by the proposed method are in line with the perception of human eyes in different scenes. A dataset with 100 remote sensing images from hazy scenes was built for testing. The performance of the proposed image dehazing method is confirmed by theoretical analysis and comparative experiments.
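The last step the abstract describes relies on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)); once the atmospheric light A and the transmission t are estimated, the haze-free radiance J is recovered by inverting it. A minimal sketch of that inversion follows; the lower bound on t is a common heuristic, not a value taken from the paper.

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy              : (H, W, 3) observed image with values in [0, 1]
    transmission      : (H, W) estimated transmission map t(x)
    atmospheric_light : (3,) estimated atmospheric light A
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]   # avoid division blow-up
    J = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(J, 0.0, 1.0)
```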
22

Stathopoulou, E. K., and F. Remondino. "MULTI-VIEW STEREO WITH SEMANTIC PRIORS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W15 (26.08.2019): 1135–40. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w15-1135-2019.

Annotation:
Patch-based stereo is nowadays a commonly used image-based technique for dense 3D reconstruction in large scale multi-view applications. The typical steps of such a pipeline can be summarized in stereo pair selection, depth map computation, depth map refinement and, finally, fusion in order to generate a complete and accurate representation of the scene in 3D. In this study, we aim to support the standard dense 3D reconstruction of scenes as implemented in the open source library OpenMVS by using semantic priors. To this end, during the depth map fusion step, along with the depth consistency check between depth maps of neighbouring views referring to the same part of the 3D scene, we impose extra semantic constraints in order to remove possible errors and selectively obtain segmented point clouds per label, boosting automation towards this direction. In order to ensure semantic coherence between neighbouring views, additional semantic criteria can be considered, aiming to eliminate mismatches of pixels belonging to different classes.
23

Matthews, Harold, Harold Hill, and Stephen Palmisano. "Independent Effects of Local and Global Binocular Disparity on the Perceived Convexity of Stereoscopically Presented Faces in Scenes." Perception 41, no. 2 (01.01.2012): 168–74. http://dx.doi.org/10.1068/p7187.

Annotation:
Evidence suggests that experiencing the hollow-face illusion involves perceptual reversal of the binocular disparities associated with the face even though the rest of the scene appears unchanged. This suggests stereoscopic processing of object shape may be independent of scene-based processing of the layout of objects in depth. We investigated the effects of global scene-based and local object-based disparity on the compellingness of the perceived convexity of the face. We took stereoscopic photographs of people in scenes, and independently reversed the binocular disparities associated with the head and scene. Participants rated perceived convexity of a natural disparity (“convex”) or reversed disparity (“concave”) face shown either in its original context with reversed or natural disparities or against a black background. Faces with natural disparity were rated as more convincingly convex independent of the background, showing that the local disparities can affect perceived convexity independent of disparities across the rest of the image. However, the apparent convexity of the faces was also greater in natural disparity scenes compared to either a reversed disparity scene or a zero disparity black background. This independent effect of natural scene disparity suggests that the ‘solidity’ associated with natural scene disparities spread to enhance the perceived convexity of the face itself. Together, these findings suggest that global and local disparity exert independent and additive effects upon the perceived convexity of the face.
24

Liu, Hongmin, Xincheng Tang, and Shuhan Shen. "Depth-map completion for large indoor scene reconstruction." Pattern Recognition 99 (March 2020): 107112. http://dx.doi.org/10.1016/j.patcog.2019.107112.

25

Jia, Tong, BingNan Wang, ZhongXuan Zhou, and Haixiu Meng. "Scene Depth Perception Based on Omnidirectional Structured Light." IEEE Transactions on Image Processing 25, no. 9 (September 2016): 4369–78. http://dx.doi.org/10.1109/tip.2016.2590304.

26

Seijdel, Noor, Nikos Tsakmakidis, Edward H. F. de Haan, Sander M. Bohte, and H. Steven Scholte. "Depth in convolutional neural networks solves scene segmentation." PLOS Computational Biology 16, no. 7 (24.07.2020): e1008022. http://dx.doi.org/10.1371/journal.pcbi.1008022.

27

孙, 丹. "Research for Scene Depth Structure of Monocular Image." Computer Science and Application 08, no. 04 (2018): 522–31. http://dx.doi.org/10.12677/csa.2018.84058.

28

Albert, Marc K. "Surface Formation and Depth in Monocular Scene Perception." Perception 28, no. 11 (November 1999): 1347–60. http://dx.doi.org/10.1068/p2987.

29

Wu, Xingyu, Xia Mao, Lijiang Chen, Yuli Xue, and Alberto Rovetta. "Depth image-based hand tracking in complex scene." Optik 126, no. 20 (October 2015): 2757–63. http://dx.doi.org/10.1016/j.ijleo.2015.07.027.

30

Li, Jian-Wei, Wei Gao, and Yi-Hong Wu. "Elaborate Scene Reconstruction with a Consumer Depth Camera." International Journal of Automation and Computing 15, no. 4 (17.04.2018): 443–53. http://dx.doi.org/10.1007/s11633-018-1114-2.

31

Son, Jung-Young, Kyung-Tae Kim, and Vladimir Ivanovich Bobrinev. "Depth resolution and displayable depth of a scene in three-dimensional images." Journal of the Optical Society of America A 22, no. 9 (01.09.2005): 1739. http://dx.doi.org/10.1364/josaa.22.001739.

32

Gapper, Justin, Hesham El-Askary, Erik Linstead, and Thomas Piechota. "Evaluation of Spatial Generalization Characteristics of a Robust Classifier as Applied to Coral Reef Habitats in Remote Islands of the Pacific Ocean." Remote Sensing 10, no. 11 (09.11.2018): 1774. http://dx.doi.org/10.3390/rs10111774.

Annotation:
This study was an evaluation of the spectral signature generalization properties of coral across four remote Pacific Ocean reefs. The sites under consideration have not been the subject of previous studies for coral classification using remote sensing data. Previous research regarding using remote sensing to identify reefs has been limited to in-situ assessment, with some researchers also performing temporal analysis of a selected area of interest. This study expanded the previous in-situ analyses by evaluating the ability of a basic predictor, Linear Discriminant Analysis (LDA), trained on Depth Invariant Indices calculated from the spectral signature of coral in one location to generalize to other locations, both within the same scene and in other scenes. Three Landsat 8 scenes were selected and masked for null, land, and obstructed pixels, and corrections for sun glint and atmospheric interference were applied. Depth Invariant Indices (DII) were then calculated according to the method of Lyzenga and an LDA classifier trained on ground truth data from a single scene. The resulting LDA classifier was then applied to other locations and the coral classification accuracy evaluated. When applied to ground truth data from the Palmyra Atoll location in scene path/row 065/056, the initial model achieved an accuracy of 80.3%. However, when applied to ground truth observations from another location within the scene, namely, Kingman Reef, it achieved an accuracy of 78.6%. The model was then applied to two additional scenes (Howland Island and Baker Island Atoll), which yielded an accuracy of 69.2% and 71.4%, respectively. Finally, the algorithm was retrained using data gathered from all four sites, which produced an overall accuracy of 74.1%.
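Conceptually, the classifier works on Lyzenga-style depth-invariant indices computed from pairs of water-penetrating bands and is then transferred to other scenes. In the sketch below, the band pairing, the attenuation-coefficient ratio, and the dummy training data are purely illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def depth_invariant_index(band_i, band_j, k_ratio):
    """Lyzenga depth-invariant index for one band pair.

    band_i, band_j : water-leaving radiance (deep-water signal already subtracted)
    k_ratio        : ratio of the bands' effective attenuation coefficients k_i / k_j
    """
    return (np.log(np.clip(band_i, 1e-6, None))
            - k_ratio * np.log(np.clip(band_j, 1e-6, None)))

# Hypothetical usage: train on labelled pixels of one reef, predict another scene.
# In practice X_train / X_test would stack one DII column per band pair.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))
y_train = rng.integers(0, 2, 200)          # coral / non-coral ground truth
X_test = rng.normal(size=(50, 2))
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
labels = clf.predict(X_test)               # predicted classes for the new scene
```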
33

Mather, George, and David R. R. Smith. "Blur Discrimination and its Relation to Blur-Mediated Depth Perception." Perception 31, no. 10 (October 2002): 1211–19. http://dx.doi.org/10.1068/p3254.

Annotation:
Retinal images of three-dimensional scenes often contain regions that are spatially blurred by different amounts, owing to depth variation in the scene and depth-of-focus limitations in the eye. Variations in blur between regions in the retinal image therefore offer a cue to their relative physical depths. In the first experiment we investigated apparent depth ordering in images containing two regions of random texture separated by a vertical sinusoidal border. The texture was sharp on one side of the border, and blurred on the other side. In some presentations the border itself was also blurred. Results showed that blur variation alone is sufficient to determine the apparent depth ordering. A subsequent series of experiments measured blur-discrimination thresholds with stimuli similar to those used in the depth-ordering experiment. Weber fractions for blur discrimination ranged from 0.28 to 0.56. It is concluded that the utility of blur variation as a depth cue is constrained by the relatively mediocre ability of observers to discriminate different levels of blur. Blur is best viewed as a relatively coarse, qualitative depth cue.
34

Frenz, Harald, and Markus Lappe. "Visual Distance Estimation in Static Compared to Moving Virtual Scenes." Spanish Journal of Psychology 9, no. 2 (November 2006): 321–31. http://dx.doi.org/10.1017/s1138741600006223.

Annotation:
Visual motion is used to control direction and speed of self-motion and time-to-contact with an obstacle. In earlier work, we found that human subjects can discriminate between the distances of different visually simulated self-motions in a virtual scene. Distance indication in terms of an exocentric interval adjustment task, however, revealed linear correlation between perceived and indicated distances but with a profound distance underestimation. One possible explanation for this underestimation is the perception of visual space in virtual environments. Humans perceive visual space in natural scenes as curved, and distances are increasingly underestimated with increasing distance from the observer. Such spatial compression may also exist in our virtual environment. We therefore surveyed perceived visual space in a static virtual scene. We asked observers to compare two horizontal depth intervals, similar to experiments performed in natural space. Subjects had to indicate the size of one depth interval relative to a second interval. Our observers perceived visual space in the virtual environment as compressed, similar to the perception found in natural scenes. However, the nonlinear depth function we found can not explain the observed distance underestimation of visual simulated self-motions in the same environment.
35

Deris, A., I. Trigonis, A. Aravanis, and E. K. Stathopoulou. "DEPTH CAMERAS ON UAVs: A FIRST APPROACH." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W3 (23.02.2017): 231–36. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w3-231-2017.

Annotation:
Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active, as well as passive, are used to serve this purpose such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolab's ZED depth camera based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup, specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated based on qualitative and quantitative criteria with respect to the ones derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.
36

Morrison, H. Boyd. "Depth and Image Quality of Three-Dimensional, Lenticular-Sheet Images." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1338–42. http://dx.doi.org/10.1177/1071181397041002135.

Annotation:
This study investigated the inherent tradeoff between depth and image quality in lenticular-sheet (LS) imaging. Four different scenes were generated as experimental stimuli to represent a range of typical LS images. The overall amount of depth in each image, as well as the degree of foreground and background disparity, were varied, and the images were rated by subjects using the free-modulus magnitude estimation procedure. Generally, subjects preferred images which had smaller amounts of overall depth and tended to dislike excessive amounts of foreground or background disparity. The most preferred image was also determined for each scene by selecting the image with the highest mean rating. In a second experiment, these most preferred LS images for each scene were shown to subjects along with the analogous two-dimensional (2D) photographic versions. Results indicate that observers from the general population looked at the LS images longer than they did at the 2D versions and rated them higher on the attributes of quality of depth and attention-getting ability, although the LS images were rated lower on sharpness. No difference was found in overall quality or likeability.
37

Zou, Nan, Zhiyu Xiang, Yiman Chen, Shuya Chen, and Chengyu Qiao. "Simultaneous Semantic Segmentation and Depth Completion with Constraint of Boundary." Sensors 20, no. 3 (23.01.2020): 635. http://dx.doi.org/10.3390/s20030635.

Annotation:
As the core task of scene understanding, semantic segmentation and depth completion play a vital role in lots of applications such as robot navigation, AR/VR and autonomous driving. They are responsible for parsing scenes from the angle of semantics and geometry, respectively. While great progress has been made in both tasks through deep learning technologies, few works have been done on building a joint model by deeply exploring the inner relationship of the above tasks. In this paper, semantic segmentation and depth completion are jointly considered under a multi-task learning framework. By sharing a common encoder part and introducing boundary features as inner constraints in the decoder part, the two tasks can properly share the required information from each other. An extra boundary detection sub-task is responsible for providing the boundary features and constructing cross-task joint loss functions for network training. The entire network is implemented end-to-end and evaluated with both RGB and sparse depth input. Experiments conducted on synthesized and real scene datasets show that our proposed multi-task CNN model can effectively improve the performance of every single task.
38

Fry, Edward W. S., Sophie Triantaphillidou, Robin B. Jenkin, Ralph E. Jacobson, and John R. Jarvis. "Noise Power Spectrum Scene-Dependency in Simulated Image Capture Systems." Electronic Imaging 2020, no. 9 (26.01.2020): 345–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.9.iqsp-345.

Annotation:
The Noise Power Spectrum (NPS) is a standard measure for image capture system noise. It is derived traditionally from captured uniform luminance patches that are unrepresentative of pictorial scene signals. Many contemporary capture systems apply nonlinear content-aware signal processing, which renders their noise scene-dependent. For scene-dependent systems, measuring the NPS with respect to uniform patch signals fails to characterize with accuracy: i) system noise concerning a given input scene, ii) the average system noise power in real-world applications. The scene-and-process-dependent NPS (SPD-NPS) framework addresses these limitations by measuring temporally varying system noise with respect to any given input signal. In this paper, we examine the scene-dependency of simulated camera pipelines in-depth by deriving SPD-NPSs from fifty test scenes. The pipelines apply either linear or non-linear denoising and sharpening, tuned to optimize output image quality at various opacity levels and exposures. Further, we present the integrated area under the mean of SPD-NPS curves over a representative scene set as an objective system noise metric, and their relative standard deviation area (RSDA) as a metric for system noise scene-dependency. We close by discussing how these metrics can also be computed using scene-and-process-dependent Modulation Transfer Functions (SPD-MTF).
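The two summary metrics at the end of the abstract can be read as simple operations on a set of per-scene noise power curves: the area under the mean SPD-NPS curve as an overall noise measure, and the spread of per-scene areas relative to it as a scene-dependency measure. The sketch below follows that reading; the exact integration limits and normalisation used in the paper are assumptions.

```python
import numpy as np

def spd_nps_metrics(nps_curves, freqs):
    """Summarise scene-and-process-dependent NPS curves.

    nps_curves : (n_scenes, n_freqs) one noise power spectrum per test scene
    freqs      : (n_freqs,) spatial-frequency axis
    Returns (mean_area, relative_std_area): area under the mean curve and the
    standard deviation of per-scene areas expressed relative to that area.
    """
    per_scene_area = np.trapz(nps_curves, freqs, axis=1)
    mean_area = np.trapz(nps_curves.mean(axis=0), freqs)
    return mean_area, per_scene_area.std() / mean_area
```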
39

Su, Junli. "Scene Matching Method for Children’s Psychological Distress Based on Deep Learning Algorithm." Complexity 2021 (03.02.2021): 1–11. http://dx.doi.org/10.1155/2021/6638522.

Annotation:
In the process of children’s psychological development, various levels of psychological distress often occur, such as attention problems, emotional problems, adaptation problems, language problems, and motor coordination problems; these problems have seriously affected children’s healthy growth. Scene matching in the treatment of psychological distress can prompt children to change from a third-person perspective to a first-person perspective and shorten the distance between scene contents and the child’s perceptual experience. As a part of machine learning, deep learning can perform mapping transformations in huge data, process huge data with the help of complex models, and extract multilayer features of scene information. Based on the summary and analysis of previous research works, this paper expounded the research status and significance of the scene matching method for children’s psychological distress, elaborated the development background, current status, and future challenges of deep learning algorithms, introduced the methods and principles of the depth spatiotemporal feature extraction algorithm and the dynamic scene understanding algorithm, constructed a scene matching model for children’s psychological distress based on a deep learning algorithm, analyzed the scene feature extraction and matching function construction of children’s psychological distress, proposed a scene matching method for children’s psychological distress based on a deep learning algorithm, performed scene feature matching and information processing of children’s psychological distress, and finally conducted a simulation experiment and analyzed its results. The results show that the deep learning algorithm can have a deep and abstract mining on the characteristics of children’s psychological distress scenes and obtain a large amount of more representative characteristic information through training on large-scale data, thereby improving the accuracy of classification and matching of children’s psychological distress scenes. The study results of this paper provide a reference for further research on the scene matching method for children’s psychological distress based on deep learning algorithms.
40

Saleh, Shadi, Shanmugapriyan Manoharan, and Wolfram Hardt. "Real-time 3D Perception of Scene with Monocular Camera." Embedded Selforganising Systems 7, no. 2 (24.09.2020): 4–7. http://dx.doi.org/10.14464/ess.v7i2.436.

Annotation:
Depth is a vital prerequisite for tasks such as perception, navigation, and planning. Estimating depth from only a single image is challenging, since no analytic mapping is available between the intensity image and its depth, and the contextual feature cues are usually absent in a single image. Furthermore, most current research relies on supervised learning to handle depth estimation, so recorded ground-truth depth is required at training time, which is tricky and costly to obtain. This study presents two approaches (unsupervised learning and semi-supervised learning) to learn depth information using only a single RGB image. The main objective of depth estimation is to extract a representation of the spatial structure of the environment and to restore the 3D shape and visual appearance of objects in imagery.
41

Eby, David W., and Myron L. Braunstein. "The Perceptual Flattening of Three-Dimensional Scenes Enclosed by a Frame." Perception 24, no. 9 (September 1995): 981–93. http://dx.doi.org/10.1068/p240981.

Annotation:
The effects of a visible frame around a three-dimensional scene on perceived depth within the scene was investigated in three experiments. In experiment 1 subjects judged the slant of an object that had been rotated about a vertical axis. Judged slant was reduced when the frame was illuminated. In experiments 2 and 3 subjects judged the shape (width-to-height ratio) of the object. The object was judged to be narrower when the frame was illuminated (experiment 2) or when a frame was added to the scene in an illuminated room (experiment 3). These results demonstrate that the presence of a frame around a three-dimensional scene serves as a flatness cue, reducing perceived depth within the scene.
42

Calderon, Francisco C., Carlos A. Parra, and Cesar L. Niño. "DEPTH MAP ESTIMATION IN LIGHT FIELDS USING AN STEREO-LIKE TAXONOMY." Revista de Investigaciones Universidad del Quindío 28, no. 1 (31.03.2016): 92–100. http://dx.doi.org/10.33975/riuq.vol28n1.37.

Annotation:
The light field, or LF, is a function that describes the amount of light traveling in every direction (angular) through every point (spatial) in a scene. This LF can be captured in several ways: using arrays of cameras or, more recently, using a single camera with a special lens that allows the capture of the angular and spatial information of the light rays of a scene. This recent camera implementation gives a different approach to finding the depth of a scene using only a single camera. In order to estimate the depth, we describe a taxonomy similar to the one used in stereo depth-map algorithms. It consists of creating a cost tensor to represent the matching cost between different disparities, then aggregating the cost tensor using a support weight window, and finally searching for the best disparities using a winner-takes-all optimization algorithm. This paper explains in detail the several changes made to a stereo-like taxonomy so that it can be applied to a light field, and evaluates the algorithm using a recent database that, for the first time, provides several ground-truth light fields with respective ground-truth depth maps.
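Reduced to its essentials, the pipeline in this abstract is: build a cost tensor over candidate disparities, aggregate it over a local support window, and take the winner-takes-all disparity per pixel. The toy two-view version below uses absolute differences and a plain box window as stand-ins; a light field would aggregate over many views and use adaptive support weights.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(left, right, max_disp, win=5):
    """Winner-takes-all disparity from an absolute-difference cost tensor.

    left, right : (H, W) grayscale views; a left pixel x matches right pixel x - d.
    Returns an integer disparity map of shape (H, W).
    """
    H, W = left.shape
    cost = np.full((max_disp + 1, H, W), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :W - d])
        cost[d, :, d:] = uniform_filter(diff, size=win)   # box-window aggregation
    return cost.argmin(axis=0)                            # winner-takes-all
```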
43

Li, Xiuxiu, Yanjuan Liu, Haiyan Jin, Lei Cai, and Jiangbin Zheng. "RGBD Scene Flow Estimation with Global Nonrigid and Local Rigid Assumption." Discrete Dynamics in Nature and Society 2020 (29.06.2020): 1–9. http://dx.doi.org/10.1155/2020/8215389.

Annotation:
RGBD scene flow has attracted increasing attention in computer vision with the popularity of depth sensors. To estimate the 3D motion of objects accurately, an RGBD scene flow estimation method with a global nonrigid and local rigid motion assumption is proposed in this paper. First, preprocessing is performed, including colour-depth registration and depth image inpainting, to handle holes and noise in the depth image; second, the depth image is segmented to obtain different motion regions with different depth values; third, scene flow is estimated based on the global nonrigid and local rigid assumption and the spatial-temporal correlation of RGBD information. Under the global nonrigid and local rigid assumption, each segmented region is divided into several blocks, and each block has a rigid motion. With this assumption, the interaction of motion between different parts of the same segmented region is avoided, especially for nonrigid objects such as a human body. Experiments are carried out on an RGBD tracking dataset and a deformable 3D reconstruction dataset. The visual comparison shows that the proposed method distinguishes the moving parts from the static parts in the same region better, and the quantitative comparisons show that more accurate scene flow can be obtained.
44

Du, Ting Wei, and Bo Liu. "Kinect Depth Data Segmentation Based on Gauss Mixture Model Clustering." Advanced Materials Research 760-762 (September 2013): 1556–61. http://dx.doi.org/10.4028/www.scientific.net/amr.760-762.1556.

Annotation:
Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the abundance of planar features in them, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud, i.e., discrete three-dimensional point data, and the point cloud is denoised and down-sampled; second, the normal of every point in the cloud is calculated and the normals are clustered using a Gaussian Mixture Model; finally, the whole point cloud is segmented with a RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
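A condensed sketch of the pipeline the abstract outlines, estimating per-point normals, clustering them with a Gaussian mixture, and leaving the per-cluster plane extraction to a RANSAC step, is given below using NumPy, SciPy, and scikit-learn. The neighbourhood size and number of mixture components are assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.mixture import GaussianMixture

def estimate_normals(points, k=20):
    """Per-point normals from a PCA of the k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # The normal is the direction of smallest variance of the local patch.
        _, _, Vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = Vt[-1]
    return normals

def segment_by_normals(points, n_components=6):
    """Cluster normals with a Gaussian mixture to group like-oriented surfaces."""
    normals = estimate_normals(points)
    labels = GaussianMixture(n_components=n_components,
                             random_state=0).fit_predict(normals)
    return labels   # a per-cluster RANSAC plane fit would follow in the full pipeline
```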
45

Myszkowski, Karol, Okan Tarhan Tursun, Petr Kellnhofer, Krzysztof Templin, Elena Arabadzhiyska, Piotr Didyk, and Hans-Peter Seidel. "Perceptual Display: Apparent Enhancement of Scene Detail and Depth." Electronic Imaging 2018, no. 14 (28.01.2018): 1–10. http://dx.doi.org/10.2352/issn.2470-1173.2018.14.hvei-501.

46

Wu, Kewei, Yang Gao, Hailong Ma, Yongxuan Sun, Tingting Yao, and Zhao Xie. "A deep generative directed network for scene depth ordering." Journal of Visual Communication and Image Representation 58 (January 2019): 554–64. http://dx.doi.org/10.1016/j.jvcir.2018.12.034.

47

Adán, Antonio, Pilar Merchán, and Santiago Salamanca. "3D scene retrieval and recognition with Depth Gradient Images." Pattern Recognition Letters 32, no. 9 (July 2011): 1337–53. http://dx.doi.org/10.1016/j.patrec.2011.03.016.

48

Quiroga, Julian, Frédéric Devernay, and James Crowley. "Local scene flow by tracking in intensity and depth." Journal of Visual Communication and Image Representation 25, no. 1 (January 2014): 98–107. http://dx.doi.org/10.1016/j.jvcir.2013.03.018.

49

Arikan, Murat, Reinhold Preiner, and Michael Wimmer. "Multi-Depth-Map Raytracing for Efficient Large-Scene Reconstruction." IEEE Transactions on Visualization and Computer Graphics 22, no. 2 (01.02.2016): 1127–37. http://dx.doi.org/10.1109/tvcg.2015.2430333.

50

Jiang, Bo, Wanxu Zhang, Jian Zhao, Yi Ru, Min Liu, Xiaolei Ma, Xiaoxuan Chen, and Hongqi Meng. "Gray-Scale Image Dehazing Guided by Scene Depth Information." Mathematical Problems in Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/7809214.

Annotation:
Combining two different types of image dehazing strategies, based on image enhancement and on an atmospheric physical model, respectively, a novel method for gray-scale image dehazing is proposed in this paper. From the image-enhancement-based strategy, the characteristics of simplicity, effectiveness, and absence of color distortion are preserved, and the common guided image filter is modified to suit image enhancement. Through wavelet decomposition, the high-frequency boundaries of the original image are preserved in advance. Moreover, the dehazing process can be guided by an image of scene depth proportion estimated directly from the original gray-scale image. Our method has the advantages of brightness consistency and no distortion over the state-of-the-art methods based on an atmospheric physical model. In particular, it overcomes the essential shortcoming of those methods, which mainly work for color images. Meanwhile, an image of scene depth proportion is acquired as a byproduct of image dehazing.