Journal articles on the topic "Illumination map"

To see other types of publications on this topic, follow this link: Illumination map.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.


Consult the top 50 journal articles for your research on the topic "Illumination map".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Xu, Jun-Peng, Chenyu Zuo, Fang-Lue Zhang, and Miao Wang. "Rendering-Aware HDR Environment Map Prediction from a Single Image." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2857–65. http://dx.doi.org/10.1609/aaai.v36i3.20190.

Abstract:
High dynamic range (HDR) illumination estimation from a single low dynamic range (LDR) image is a significant task in computer vision, graphics, and augmented reality. We present a two-stage deep learning-based method to predict an HDR environment map from a single narrow field-of-view LDR image. We first learn a hybrid parametric representation that sufficiently covers high- and low-frequency illumination components in the environment. Taking the estimated illuminations as guidance, we build a generative adversarial network to synthesize an HDR environment map that enables realistic rendering effects. We specifically consider the rendering effect by supervising the networks using rendering losses in both stages, on the predicted environment map as well as the hybrid illumination representation. Quantitative and qualitative experiments demonstrate that our approach achieves lower relighting errors for virtual object insertion and is preferred by users compared to state-of-the-art methods.
2

Christensen, Per H. "Faster Photon Map Global Illumination." Journal of Graphics Tools 4, no. 3 (January 1999): 1–10. http://dx.doi.org/10.1080/10867651.1999.10487505.

3

Borisagar, Viral H., and Mukesh A. Zaveri. "Disparity Map Generation from Illumination Variant Stereo Images Using Efficient Hierarchical Dynamic Programming." Scientific World Journal 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/513417.

Abstract:
A novel hierarchical stereo matching algorithm is presented which produces a disparity map from an illumination-variant stereo pair. Illumination differences between the two stereo images can lead to undesirable output. Stereo image pairs often experience illumination variations due to many factors, such as practical acquisition conditions, spatially and temporally separated camera positions, environmental illumination fluctuation, and changes in the strength or position of the light sources. Window matching and dynamic programming techniques are employed for disparity map estimation, and a good-quality disparity map is obtained from the optimized path. Homomorphic filtering is used as a preprocessing step to lessen illumination variation between the stereo images, and anisotropic diffusion is used to refine the disparity map into a high-quality final output. The robust performance of the proposed approach suits real-life circumstances where there is always illumination variation between the images. The matching is carried out in a sequence of images representing the same scene at different resolutions; this hierarchical approach decreases the computation time of the stereo matching problem. The algorithm can be helpful in applications such as robot navigation, extraction of information from aerial surveys, 3D scene reconstruction, and military and security applications. The SAD similarity measure is often sensitive to illumination variation and produces unacceptable disparity maps for illumination-variant left and right images. Experimental results show that the proposed algorithm produces quality disparity maps for a wide range of both illumination-variant and illumination-invariant stereo image pairs.
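For orientation, the homomorphic-filtering preprocessing mentioned above can be illustrated with a short, generic sketch; it is not the authors' implementation, and the file names, Gaussian scale, and gain values are placeholders.

```python
import cv2
import numpy as np

def homomorphic_filter(gray, sigma=31, gain_low=0.5, gain_high=1.5):
    """Suppress slowly varying illumination and mildly boost reflectance detail."""
    log_img = np.log1p(gray.astype(np.float64))
    low = cv2.GaussianBlur(log_img, (0, 0), sigma)   # low-pass estimate of the illumination component
    high = log_img - low                             # reflectance detail
    return cv2.normalize(np.expm1(gain_low * low + gain_high * high),
                         None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Hypothetical stereo pair; both images are filtered before window matching.
left = homomorphic_filter(cv2.imread("left.png", cv2.IMREAD_GRAYSCALE))
right = homomorphic_filter(cv2.imread("right.png", cv2.IMREAD_GRAYSCALE))
```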
4

Middya, Asif Iqbal, Sarbani Roy, and Debjani Chattopadhyay. "CityLightSense: A Participatory Sensing-based System for Monitoring and Mapping of Illumination levels." ACM Transactions on Spatial Algorithms and Systems 8, no. 1 (March 31, 2022): 1–22. http://dx.doi.org/10.1145/3487364.

Abstract:
Adequate nighttime lighting of city streets is necessary for safe vehicle and pedestrian movement, deterrent of crime, improvement of the citizens’ perceptions of safety, and so on. However, monitoring and mapping of illumination levels in city streets during the nighttime is a tedious activity that is usually based on manual inspection reports. The advancement in smartphone technology comes up with a better way to monitor city illumination using a rich set of smartphone-equipped inexpensive but powerful sensors (e.g., light sensor, GPS, etc). In this context, the main objective of this work is to use the power of smartphone sensors and IoT-cloud-based framework to collect, store, and analyze nighttime illumination data from citizens to generate high granular city illumination map. The development of high granular illumination map is an effective way of visualizing and assessing the illumination of city streets during nighttime. In this article, an illumination mapping algorithm called Street Illumination Mapping is proposed that works on participatory sensing-based illumination data collected using smartphones as IoT devices to generate city illumination map. The proposed method is evaluated on a real-world illumination dataset collected by participants in two different urban areas of city Kolkata. The results are also compared with the baseline mapping techniques, namely, Spatial k-Nearest Neighbors, Inverse Distance Weighting, Random Forest Regressor, Support Vector Regressor, and Artificial Neural Network.
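As context for the interpolation baselines named at the end of the abstract, the snippet below is a minimal inverse-distance-weighting (IDW) sketch that turns scattered smartphone lux readings into a gridded illumination map. It is illustrative only; the paper's Street Illumination Mapping algorithm is not reproduced, and the array shapes and power parameter are assumptions.

```python
import numpy as np

def idw_illumination_map(sample_xy, lux, grid_x, grid_y, power=2.0, eps=1e-9):
    """sample_xy: (N, 2) sensed positions, lux: (N,) readings; returns a (len(grid_y), len(grid_x)) map."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(cells[:, None, :] - sample_xy[None, :, :], axis=2)  # cell-to-sample distances
    w = 1.0 / (d ** power + eps)                                           # inverse-distance weights
    return ((w @ lux) / w.sum(axis=1)).reshape(gy.shape)
```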
5

Bonin, Keith, Amanda Smelser, Naike Salvador Moreno, George Holzwarth, Kevin Wang, Preston Levy, and Pierre-Alexandre Vidi. "Structured illumination to spatially map chromatin motions." Journal of Biomedical Optics 23, no. 05 (May 15, 2018): 1. http://dx.doi.org/10.1117/1.jbo.23.5.056007.

6

Liu, Weidong, Jiyu Li, Wenbo Zhang, and Le Li. "Underwater image enhancement method with non-uniform illumination based on Retinex and ADMM." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 4 (August 2021): 824–30. http://dx.doi.org/10.1051/jnwpu/20213940824.

Abstract:
In order to solve the image blurring and distortion problem caused by underwater non-uniform and low illumination, this paper proposes an underwater image enhancement algorithm based on the Retinex theory and the Alternating Direction Method of Multipliers (ADMM). Firstly, the L component of the original image in the Lab space is extracted as the initial illumination map, and an Augmented Lagrange Multiplier (ALM) framework is constructed based on the ADMM to optimize the initial illumination map in order to obtain an accurate illumination image. In addition, the illumination map is further corrected in the luminance region with the Gamma Correction. Secondly, combined with the color constancy characteristics in the Retinex theory, the reflected image of the object is obtained. Finally, the bilateral filter is picked to suppress the underwater noise and obtain a more detailed enhanced image. The experimental results show that the underwater image enhancement algorithm can effectively solve the non-uniform illumination problem caused by natural light or artificial light source and improve the underwater image quality, thus having a better performance than other algorithms.
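A rough illustration of the Retinex-style pipeline summarized above is sketched below, with the paper's ALM/ADMM illumination refinement replaced by a simple Gaussian smoothing stand-in; the gamma value and filter parameters are assumptions, not the published settings.

```python
import cv2
import numpy as np

def enhance_underwater(bgr, gamma=0.6):
    """L channel as the initial illumination map, gamma-corrected, then the reflectance is re-lit and denoised."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    illum = cv2.GaussianBlur(lab[:, :, 0].astype(np.float32) / 255.0, (0, 0), 15)  # stand-in for ADMM refinement
    illum = np.clip(illum, 1e-3, 1.0)
    illum_corr = np.power(illum, gamma)                                            # gamma correction of the illumination
    reflectance = bgr.astype(np.float32) / 255.0 / illum[..., None]
    out = (np.clip(reflectance * illum_corr[..., None], 0, 1) * 255).astype(np.uint8)
    return cv2.bilateralFilter(out, d=9, sigmaColor=50, sigmaSpace=50)             # suppress underwater noise
```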
7

Dong, Jun, Xue Yuan, and Fanlun Xiong. "Lighting Equilibrium Distribution Maps and Their Application to Face Recognition Under Difficult Lighting Conditions." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 03 (February 2017): 1756003. http://dx.doi.org/10.1142/s0218001417560031.

Abstract:
In this paper, a novel facial-patch based recognition framework is proposed to deal with the problem of face recognition (FR) under severe illumination conditions. First, novel lighting equilibrium distribution maps (LEDM) for illumination normalization are proposed. In LEDM, an image is analyzed in the logarithm domain with a wavelet transform, and the approximation coefficients of the image are mapped according to a reference-illumination map in order to normalize the distribution of illumination energy due to different lighting effects. Meanwhile, the detail coefficients are enhanced to emphasize detail information. The LEDM is obtained by blurring the distances between the test image and the reference illumination map in the logarithm domain, which may express the entire distribution of illumination variations. Then, a facial-patch based framework and a credit-degree based facial-patch synthesizing algorithm are proposed. Each normalized face image is divided into several stacked patches. All patches are individually classified, and each patch from the test image casts a vote toward the parent image classification. A novel credit degree map is established based on the LEDM, which decides a credit degree for each facial patch. The main idea of credit degree map construction is that over- and under-illuminated regions should be assigned a lower credit degree than well-illuminated regions. Finally, results are obtained by credit-degree based facial-patch synthesizing. The proposed method provides state-of-the-art performance on three data sets that are widely used for testing FR under different illumination conditions: Extended Yale-B, CAS-PEAL-R1, and CMU-PIE. Experimental results show that our FR framework outperforms several existing illumination compensation methods.
8

Hey, Heinrich, and Werner Purgathofer. "Advanced Radiance Estimation For Photon Map Global Illumination." Computer Graphics Forum 21, no. 3 (September 2002): 541–45. http://dx.doi.org/10.1111/1467-8659.00704.

9

Li, Zhen, Lin Zhou, and Zeqin Lin. "Robust Visual Place Recognition Method for Robot Facing Drastic Illumination Changes." Journal of Physics: Conference Series 2209, no. 1 (February 1, 2022): 012001. http://dx.doi.org/10.1088/1742-6596/2209/1/012001.

Abstract:
The robustness of visual place recognition determines how accurately SLAM can construct the environmental map. However, when the robot moves in an outdoor environment for a long time, it must face the challenge of drastic illumination changes (time of day, season, or rain and fog weather factors), which greatly restricts the robot's ability to identify places. This paper proposes a method for visual place recognition that is more robust to severe illumination changes. First, a generative adversarial network is introduced into visual SLAM to enhance the quality of candidate keyframes, and the consistency of the geographic locations corresponding to images before and after quality enhancement is evaluated; an illumination-invariant image descriptor is then established for the robot's new observation image and the keyframes of the map. Finally, the performance of the method is tested on a public dataset. The experimental results show that the method improves the quality of environmental map nodes and enables the robot to perform highly robust visual place recognition in the face of severe illumination changes, providing a powerful tool for the robot to build an accurate environmental map.
10

Schleicher, Jörg, Jessé C. Costa, and Amélia Novais. "A comparison of imaging conditions for wave-equation shot-profile migration." GEOPHYSICS 73, no. 6 (November 2008): S219–S227. http://dx.doi.org/10.1190/1.2976776.

Abstract:
The application of a deconvolution imaging condition in wave-equation shot-profile migration is important to provide illumination compensation and amplitude recovery. Particularly if the aim is to successfully recover a measure of the medium reflectivity, an imaging condition that destroys amplitudes is unacceptable. We study a set of imaging conditions with illumination compensation. The imaging conditions are evaluated by the quality of the output amplitudes and artifacts produced. In numerical experiments using a vertically inhomogeneous velocity model, the best of all imaging conditions we tested is the one that divides the crosscorrelation of upgoing and downgoing wavefields by the autocorrelation of the downgoing wavefield, also known as the illumination map. In an application to Marmousi data, unconditional division by autocorrelation turned out to be unstable. Effective stabilization was achieved by smoothing the illumination map.
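Written out, the imaging condition singled out above takes a form along these lines for shot-profile migration with downgoing wavefield D and upgoing wavefield U; this is a generic textbook statement, with ε a small stabilization constant and the angle brackets denoting the spatial smoothing used for stabilization (the authors' exact operator may differ):

\[
R(\mathbf{x}) \;=\; \frac{\displaystyle\sum_{\omega} U(\mathbf{x},\omega)\,\overline{D(\mathbf{x},\omega)}}{\Big\langle \displaystyle\sum_{\omega} \lvert D(\mathbf{x},\omega)\rvert^{2} \Big\rangle_{\mathrm{smooth}} + \varepsilon},
\]

where the denominator, the autocorrelation of the downgoing wavefield, is the illumination map.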
11

Fischer, Brian, Gary Dester, Timothy Conn, and Richard W. Hanberg. "Practical Illumination Uncertainty Assessment in Ground-Based Radar Measurements." IEEE Antennas and Propagation Magazine 52, no. 1 (February 2010): 250–56. http://dx.doi.org/10.1109/map.2010.5466464.

12

Pan, Xinxin, Changli Li, Zhigeng Pan, Jingwen Yan, Shiqiang Tang, and Xinghui Yin. "Low-Light Image Enhancement Method Based on Retinex Theory by Improving Illumination Map." Applied Sciences 12, no. 10 (May 23, 2022): 5257. http://dx.doi.org/10.3390/app12105257.

Abstract:
Recently, low-light image enhancement has attracted much attention. However, some problems still exist. For instance, sometimes dark regions are not fully improved, but bright regions near the light source or auxiliary light source are overexposed. To address these problems, a retinex based method that strengthens the illumination map is proposed, which utilizes a brightness enhancement function (BEF) that is a weighted sum of the Sigmoid function cascading by Gamma correction (GC) and Sine function, and an improved adaptive contrast enhancement (IACE) to enhance the estimated illumination map through multi-scale fusion. Specifically, firstly, the illumination map is obtained according to retinex theory via the weighted sum method, which considers neighborhood information. Then, the Gaussian Laplacian pyramid is used to fuse two input images that are derived by BEF and IACE, so that it can improve brightness and contrast of the illuminance component acquired above. Finally, the adjusted illuminance map is multiplied by the reflection map to obtain the enhanced image according to the retinex theory. Extensive experiments show that our method has better results in subjective vision and quantitative index evaluation compared with other state-of-the-art methods.
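To make the multi-scale fusion step concrete, here is a generic two-input Laplacian-pyramid blend of the kind the abstract alludes to. The constant per-level weight is a simplification of the paper's BEF/IACE-derived weighting, the level count is arbitrary, and inputs are assumed to be single-channel arrays in the 0–255 range.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1]) for i in range(levels)]
    return lp + [gp[-1]]                       # detail levels plus the coarsest Gaussian level

def fuse_illumination(a, b, w_a=0.5, levels=4):
    """Blend two brightness-adjusted versions of an illumination map level by level."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [w_a * x + (1 - w_a) * y for x, y in zip(la, lb)]
    out = fused[-1]
    for lap in reversed(fused[:-1]):           # rebuild from coarse to fine
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```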
13

Tong, Ying, and Jin Chen. "Infrared and Visible Image Fusion Under Different Illumination Conditions Based on Illumination Effective Region Map." IEEE Access 7 (2019): 151661–68. http://dx.doi.org/10.1109/access.2019.2944963.

14

Chalmers, Andrew, Todd Zickler, and Taehyun Rhee. "Illumination Browser: An intuitive representation for radiance map databases." Computers & Graphics 103 (April 2022): 101–8. http://dx.doi.org/10.1016/j.cag.2022.01.006.

15

Guo, Xiaojie, Yu Li, and Haibin Ling. "LIME: Low-Light Image Enhancement via Illumination Map Estimation." IEEE Transactions on Image Processing 26, no. 2 (February 2017): 982–93. http://dx.doi.org/10.1109/tip.2016.2639450.

16

Choi, Jaewon, and Sungkil Lee. "Dual Paraboloid Map-Based Real-Time Indirect Illumination Rendering." Journal of KIISE 46, no. 11 (November 30, 2019): 1099–105. http://dx.doi.org/10.5626/jok.2019.46.11.1099.

17

Feng, Zexin, Brittany D. Froese, and Rongguang Liang. "Freeform illumination optics construction following an optimal transport map." Applied Optics 55, no. 16 (May 24, 2016): 4301. http://dx.doi.org/10.1364/ao.55.004301.

18

Hao, Shijie, Zhuang Feng, and Yanrong Guo. "Low-light image enhancement with a refined illumination map." Multimedia Tools and Applications 77, no. 22 (November 30, 2017): 29639–50. http://dx.doi.org/10.1007/s11042-017-5448-5.

19

Gaier, Adam, Alexander Asteroth, and Jean-Baptiste Mouret. "Data-Efficient Design Exploration through Surrogate-Assisted Illumination." Evolutionary Computation 26, no. 3 (September 2018): 381–410. http://dx.doi.org/10.1162/evco_a_00231.

Abstract:
Design optimization techniques are often used at the beginning of the design process to explore the space of possible designs. In these domains illumination algorithms, such as MAP-Elites, are promising alternatives to classic optimization algorithms because they produce diverse, high-quality solutions in a single run, instead of only a single near-optimal solution. Unfortunately, these algorithms currently require a large number of function evaluations, limiting their applicability. In this article, we introduce a new illumination algorithm, Surrogate-Assisted Illumination (SAIL), that leverages surrogate modeling techniques to create a map of the design space according to user-defined features while minimizing the number of fitness evaluations. On a two-dimensional airfoil optimization problem, SAIL produces hundreds of diverse but high-performing designs with several orders of magnitude fewer evaluations than MAP-Elites or CMA-ES. We demonstrate that SAIL is also capable of producing maps of high-performing designs in realistic three-dimensional aerodynamic tasks with an accurate flow simulation. Data-efficient design exploration with SAIL can help designers understand what is possible, beyond what is optimal, by considering more than pure objective-based optimization.
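For context on what an "illumination algorithm" means in this evolutionary-computation sense, below is a bare-bones MAP-Elites loop, the unassisted baseline that SAIL accelerates; the surrogate model and acquisition step that make SAIL data-efficient are omitted, and fitness, features (assumed to return values in [0, 1) per dimension), sample, and mutate are user-supplied callables.

```python
import numpy as np

def map_elites(fitness, features, sample, mutate, bins=(10, 10), iters=5000, bootstrap=100):
    """Keep the highest-fitness solution found in each cell of the user-defined feature space."""
    archive = {}                                           # cell index -> (fitness, solution)
    rng = np.random.default_rng(0)
    for i in range(iters):
        if i < bootstrap or not archive:
            x = sample()                                   # random bootstrap
        else:
            parent = archive[list(archive)[rng.integers(len(archive))]][1]
            x = mutate(parent)                             # perturb a randomly chosen elite
        f, b = fitness(x), features(x)
        cell = tuple(min(int(b[d] * bins[d]), bins[d] - 1) for d in range(len(bins)))
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive
```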
20

Ya-Nan Li, Zhen-Feng Zhang, Yi-Fan Chen, and Chu-Hua Huang. "Feature Fusion Method for Low-Illumination Images." 電腦學刊 33, no. 6 (December 2022): 167–80. http://dx.doi.org/10.53106/199115992022123306014.

Abstract:
Aiming at the problem of inaccurate feature extraction from low-illumination images, a method is proposed that fuses the Scale Invariant Feature Transform (SIFT) into SuperPoint. Firstly, the low-illumination image is light-enhanced. Secondly, SuperPoint and SIFT features are fused at the feature-map level, changing the deep neural network weights by labeling the prob output of the network with the SIFT of the input image as the maximum of the current prob at pixel level. Finally, the loss function is constructed based on the homography transformation, whose relationship between image pairs is used to constrain the network parameters. Training and evaluation are conducted on the ExDark dataset, and tests and comparisons against SOTA methods are conducted on multiple indicators on the common HPatches dataset. The experimental results show that our method improves precision and recall over SuperPoint and performs well on multiple evaluation indicators.
21

Wille, V., M. O. Al-Nuaimi, and C. J. Haslett. "Obstacle diffraction and site shielding under varying incident illumination." IEE Proceedings - Microwaves, Antennas and Propagation 143, no. 1 (1996): 87. http://dx.doi.org/10.1049/ip-map:19960035.

22

Tu, Huan, Gesang Duoji, Qijun Zhao, and Shuang Wu. "Improved Single Sample Per Person Face Recognition via Enriching Intra-Variation and Invariant Features." Applied Sciences 10, no. 2 (January 14, 2020): 601. http://dx.doi.org/10.3390/app10020601.

Abstract:
Face recognition using a single sample per person is a challenging problem in computer vision. In this scenario, due to the lack of training samples, it is difficult to distinguish between inter-class variations caused by identity and intra-class variations caused by external factors such as illumination, pose, etc. To address this problem, we propose a scheme to improve the recognition rate by both generating additional samples to enrich the intra-variation and eliminating external factors to extract invariant features. Firstly, a 3D face modeling module is proposed to recover the intrinsic properties of the input image, i.e., 3D face shape and albedo. To obtain the complete albedo, we come up with an end-to-end network to estimate the full albedo UV map from incomplete textures. The obtained albedo UV map not only eliminates the influence of the illumination, pose, and expression, but also retains the identity information. With the help of the recovered intrinsic properties, we then generate images under various illuminations, expressions, and poses. Finally, the albedo and the generated images are used to assist single sample per person face recognition. The experimental results on Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP) and other face databases demonstrate the effectiveness of the proposed method.
23

Cao, Jun, and Joel D. Brewer. "Critical reflection illumination analysis." Interpretation 1, no. 1 (August 1, 2013): T57–T61. http://dx.doi.org/10.1190/int-2013-0031.1.

Abstract:
Poor imaging is frequently observed in many subsalt regions, making the subsalt stratigraphy interpretation and prospect evaluation challenging. We propose a critical reflection illumination analysis to evaluate subsalt illumination in areas where high-velocity contrasts create illumination and imaging shadows. Critical reflection often occurs at the base or flank of salt bodies. If critical reflection occurred, continued iterations of processing and imaging would generate little, if any, improvement in imaging results. Similarly, increasing the offset/azimuth of the acquisition would offer limited or no advantage. We introduce the critical reflection illumination map and illumination rose diagram to efficiently and effectively evaluate the probability of critical reflection for the target. This analysis can help avoid expensive processing, imaging, and acquisition efforts for areas that are in the critical/postcritical reflection regime. Critical reflection illumination analysis can also be applied to other high-velocity contrast scenarios.
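The critical-reflection condition the analysis hinges on follows directly from Snell's law, θc = arcsin(v1/v2) for a slower medium over a faster one; a two-line check with illustrative (not paper-specific) velocities:

```python
import numpy as np

def critical_angle_deg(v_sediment, v_salt):
    """Incidence angle beyond which reflection at the sediment/salt interface becomes postcritical."""
    return float(np.degrees(np.arcsin(v_sediment / v_salt)))

print(critical_angle_deg(2200.0, 4500.0))   # roughly 29 degrees for these assumed velocities
```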
24

Shieh, Ming-Yuan, and Tsung-Min Hsieh. "Fast Facial Detection by Depth Map Analysis." Mathematical Problems in Engineering 2013 (2013): 1–10. http://dx.doi.org/10.1155/2013/694321.

Abstract:
In order to obtain correct facial recognition results, one needs to adopt appropriate facial detection techniques. Moreover, the effects of facial detection are usually affected by the environmental conditions such as background, illumination, and complexity of objectives. In this paper, the proposed facial detection scheme, which is based on depth map analysis, aims to improve the effectiveness of facial detection and recognition under different environmental illumination conditions. The proposed procedures consist of scene depth determination, outline analysis, Haar-like classification, and related image processing operations. Since infrared light sources can be used to increase dark visibility, the active infrared visual images captured by a structured light sensory device such as Kinect will be less influenced by environmental lights. It benefits the accuracy of the facial detection. Therefore, the proposed system will detect the objective human and face firstly and obtain the relative position by structured light analysis. Next, the face can be determined by image processing operations. From the experimental results, it demonstrates that the proposed scheme not only improves facial detection under varying light conditions but also benefits facial recognition.
25

Wang, Zhaoyang, Dan Zhao, and Yunfeng Cao. "Image Quality Enhancement with Applications to Unmanned Aerial Vehicle Obstacle Detection." Aerospace 9, no. 12 (December 15, 2022): 829. http://dx.doi.org/10.3390/aerospace9120829.

Abstract:
Aiming at the problem that obstacle avoidance of unmanned aerial vehicles (UAVs) cannot effectively detect obstacles under low illumination, this research proposes an enhancement algorithm for low-light airborne images, which is based on the camera response model and Retinex theory. Firstly, the mathematical model of low-illumination image enhancement is established, and the relationship between the camera response function (CRF) and brightness transfer function (BTF) is constructed by a common parameter equation. Secondly, to solve the problem that the enhancement algorithm using the camera response model will lead to blurred image details, Retinex theory is introduced into the camera response model to design an enhancement algorithm framework suitable for UAV obstacle avoidance. Thirdly, to shorten the time consumption of the algorithm, an acceleration solver is adopted to calculate the illumination map, and the exposure matrix is further calculated via the illumination map. Additionally, the maximum exposure value is set for low signal-to-noise ratio (SNR) pixels to suppress noise. Finally, a camera response model and exposure matrix are used to adjust the low-light image to obtain an enhanced image. The enhancement experiment for the constructed dataset shows that the proposed algorithm can significantly enhance the brightness of low-illumination images, and is superior to other similar available algorithms in quantitative evaluation metrics. Compared with the illumination enhancement algorithm based on infrared and visible image fusion, the proposed algorithm can achieve illumination enhancement without introducing additional airborne sensors. The obstacle object detection experiment shows that the proposed algorithm can increase the AP (average precision) value by 0.556.
26

Zhichao, Lian, and Er Meng Joo. "Local Relation Map: A Novel Illumination Invariant Face Recognition Approach." International Journal of Advanced Robotic Systems 9, no. 4 (October 2012): 128. http://dx.doi.org/10.5772/51667.

27

Peng Cai, Dehui Kong, Baocai Yin, and Yong Zhang. "A Geometry-Bias-Based Photon Map Reconstruction And Illumination Estimation." International Journal of Advancements in Computing Technology 4, no. 17 (September 30, 2012): 560–70. http://dx.doi.org/10.4156/ijact.vol4.issue17.65.

28

Quintero, Jesús M., Antoni Sudrià, Charles E. Hunt, and Josep Carreras. "Color rendering map: a graphical metric for assessment of illumination." Optics Express 20, no. 5 (February 13, 2012): 4939. http://dx.doi.org/10.1364/oe.20.004939.

29

Seo, Yeong-Hyeon, Hyunwoo Kim, Sung-Pyo Yang, Kyungmin Hwang, and Ki-Hun Jeong. "Lissajous scanned variable structured illumination for dynamic stereo depth map." Optics Express 28, no. 10 (May 4, 2020): 15173. http://dx.doi.org/10.1364/oe.392953.

30

Aly, Saleh, Naoyuki Tsuruta, and Rin-Ichiro Taniguchi. "Face recognition under varying illumination using Mahalanobis self-organizing map." Artificial Life and Robotics 13, no. 1 (December 2008): 298–301. http://dx.doi.org/10.1007/s10015-008-0555-z.

31

Zhan, Fangneng, Changgong Zhang, Yingchen Yu, Yuan Chang, Shijian Lu, Feiying Ma, and Xuansong Xie. "EMLight: Lighting Estimation via Spherical Distribution Approximation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 3287–95. http://dx.doi.org/10.1609/aaai.v35i4.16440.

Abstract:
Illumination estimation from a single image is critical in 3D rendering and it has been investigated extensively in the computer vision and computer graphic research community. On the other hand, existing works estimate illumination by either regressing light parameters or generating illumination maps that are often hard to optimize or tend to produce inaccurate predictions. We propose Earth Mover’s Light (EMLight), an illumination estimation framework that leverages a regression network and a neural projector for accurate illumination estimation. We decompose the illumination map into spherical light distribution, light intensity and the ambient term, and define the illumination estimation as a parameter regression task for the three illumination components. Motivated by the Earth Mover's distance, we design a novel spherical mover's loss that guides to regress light distribution parameters accurately by taking advantage of the subtleties of spherical distribution. Under the guidance of the predicted spherical distribution, light intensity and ambient term, the neural projector synthesizes panoramic illumination maps with realistic light frequency. Extensive experiments show that EMLight achieves accurate illumination estimation and the generated relighting in 3D object embedding exhibits superior plausibility and fidelity as compared with state-of-the-art methods.
32

Wan, X., J. Liu, and H. Yan. "Phase Correlation based Local Illumination-invariant Method for Multi-Tempro Remote Sensing Image Matching." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 365–72. http://dx.doi.org/10.5194/isprsarchives-xl-3-365-2014.

Abstract:
This paper aims at image matching under significantly different illumination conditions, especially illumination angle changes, without prior knowledge of lighting conditions. We investigated the illumination impact on Phase Correlation (PC) matrix by mathematical derivation and from which, we decomposed PC matrix as the multiplication product of the illumination impact matrix and the translation matrix. Thus the robustness to illumination variation of the widely used Absolute Dirichlet Curve-fitting (AD-CF) algorithm for pixel-wise disparity estimation is proved. Further, an improved PC matching algorithm is proposed: Absolute Dirichlet SVD (AD-SVD), to achieve illumination invariant image alignment. Experiments of matching DEM simulated terrain shading images under very different illumination angles demonstrated that AD-SVD achieved 1/20 pixels accuracy for image alignment and it is nearly entirely invariant to daily and seasonal solar position variation. The AD-CF algorithm was tested for generating disparity map from multi-illumination angle stereo pairs and the results demonstrated high fidelity to the original DEM and the Normalised Correlation Coefficient (NCC) between the two is 0.96.
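As background for the PC matrix discussed above, a minimal integer-pixel phase-correlation shift estimator is sketched below; the paper's AD-CF and AD-SVD variants, which handle the illumination impact term and reach sub-pixel accuracy, go well beyond this.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer translation t such that b(x) is approximately a(x - t), from the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = B * np.conj(A)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))       # phase-only correlation surface
    peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, r.shape))   # wrap to signed offsets
```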
33

Xie, Bin, Fan Guo, and Zi Xing Cai. "Image Defogging Based on Fog Veil Subtraction." Applied Mechanics and Materials 121-126 (October 2011): 887–91. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.887.

Abstract:
In this paper, we propose a new defog algorithm based on fog veil subtraction to remove fog from a single image. The proposed algorithm first estimates the illumination component of the image by applying smoothing to the degraded image, and then obtains the uniform distributed fog veil through a mean calculation of the illumination component. Next, we multiply the uniform veil by the original image to obtain a depth-like map and extract its intensity component to produce a fog veil whose distribution is according with real fog density of the scene. Once the fog veil is calculated, the reflectance map can be obtained by subtracting the veil from the degraded image. Finally, we apply an adaptive contrast stretching to the reflectance map to obtain an enhanced result. This algorithm can be easily extended to video domains and is verified by both real-scene photographs and videos.
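A loose reading of the described pipeline might look like the sketch below; the Gaussian blur used for the illumination estimate, the global stretch standing in for the paper's adaptive contrast stretching, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def defog_veil_subtraction(bgr, sigma=40, stretch=1.4):
    """Smooth -> mean fog veil -> depth-like map -> scene-shaped veil -> subtract -> contrast stretch."""
    f = bgr.astype(np.float32) / 255.0
    illumination = cv2.GaussianBlur(f, (0, 0), sigma)                 # illumination component of the degraded image
    uniform_veil = np.full_like(f, illumination.mean())               # uniformly distributed fog veil
    depth_like = f * uniform_veil                                     # multiply the uniform veil by the original image
    veil = cv2.cvtColor(depth_like, cv2.COLOR_BGR2GRAY)[..., None]    # intensity component as the final fog veil
    reflectance = np.clip(f - veil, 0.0, 1.0)                         # subtract the veil from the degraded image
    mean = reflectance.mean()
    return (np.clip((reflectance - mean) * stretch + mean, 0, 1) * 255).astype(np.uint8)
```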
34

Hong, Sungchul, Pranjay Shyam, Antyanta Bangunharcana, and Hyuseoung Shin. "Robotic Mapping Approach under Illumination-Variant Environments at Planetary Construction Sites." Remote Sensing 14, no. 4 (February 20, 2022): 1027. http://dx.doi.org/10.3390/rs14041027.

Abstract:
In planetary construction, the semiautonomous teleoperation of robots is expected to perform complex tasks for site preparation and infrastructure emplacement. A highly detailed 3D map is essential for construction planning and management. However, the planetary surface imposes mapping restrictions due to rugged and homogeneous terrains. Additionally, changes in illumination conditions cause the mapping result (or 3D point-cloud map) to have inconsistent color properties that hamper the understanding of the topographic properties of a worksite. Therefore, this paper proposes a robotic construction mapping approach robust to illumination-variant environments. The proposed approach leverages a deep learning-based low-light image enhancement (LLIE) method to improve the mapping capabilities of the visual simultaneous localization and mapping (SLAM)-based robotic mapping method. In the experiment, the robotic mapping system in the emulated planetary worksite collected terrain images during the daytime from noon to late afternoon. Two sets of point-cloud maps, which were created from original and enhanced terrain images, were examined for comparison purposes. The experiment results showed that the LLIE method in the robotic mapping method significantly enhanced the brightness, preserving the inherent colors of the original terrain images. The visibility and the overall accuracy of the point-cloud map were consequently increased.
35

Kumar, Vineet, Abhijit Asati, and Anu Gupta. "A Novel Edge-Map Creation Approach for Highly Accurate Pupil Localization in Unconstrained Infrared Iris Images." Journal of Electrical and Computer Engineering 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/4709876.

Abstract:
Iris segmentation in the iris recognition systems is a challenging task under noncooperative environments. The iris segmentation is a process of detecting the pupil, iris’s outer boundary, and eyelids in the iris image. In this paper, we propose a pupil localization method for locating the pupils in the non-close-up and frontal-view iris images that are captured under near-infrared (NIR) illuminations and contain the noise, such as specular and lighting reflection spots, eyeglasses, nonuniform illumination, low contrast, and occlusions by the eyelids, eyelashes, and eyebrow hair. In the proposed method, first, a novel edge-map is created from the iris image, which is based on combining the conventional thresholding and edge detection based segmentation techniques, and then, the general circular Hough transform (CHT) is used to find the pupil circle parameters in the edge-map. Our main contribution in this research is a novel edge-map creation technique, which reduces the false edges drastically in the edge-map of the iris image and makes the pupil localization in the noisy NIR images more accurate, fast, robust, and simple. The proposed method was tested with three iris databases: CASIA-Iris-Thousand (version 4.0), CASIA-Iris-Lamp (version 3.0), and MMU (version 2.0). The average accuracy of the proposed method is 99.72% and average time cost per image is 0.727 sec.
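As a rough stand-in for the described pipeline, the sketch below uses plain OpenCV thresholding, morphology, and HoughCircles rather than the paper's custom edge-map construction; every threshold and radius value is illustrative.

```python
import cv2
import numpy as np

def locate_pupil(nir_gray):
    """Suppress reflection noise, isolate the dark pupil region, then fit a circle with a circular Hough transform."""
    smoothed = cv2.medianBlur(nir_gray, 5)                            # remove small specular spots
    _, dark = cv2.threshold(smoothed, 70, 255, cv2.THRESH_BINARY_INV) # the pupil is the darkest region under NIR
    dark = cv2.morphologyEx(dark, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    masked = cv2.bitwise_and(smoothed, smoothed, mask=dark)           # keep gray values only inside the dark mask
    circles = cv2.HoughCircles(masked, cv2.HOUGH_GRADIENT, dp=2, minDist=200,
                               param1=80, param2=20, minRadius=15, maxRadius=80)
    return None if circles is None else tuple(np.round(circles[0, 0]).astype(int))   # (cx, cy, r)
```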
36

Zhu, Wenqi, Zili Zhang, Xing Zhao, and Yinghua Fu. "Changed Detection Based on Patch Robust Principal Component Analysis." Applied Sciences 12, no. 15 (July 31, 2022): 7713. http://dx.doi.org/10.3390/app12157713.

Abstract:
Change detection on retinal fundus image pairs mainly seeks to compare the important differences between a pair of images obtained at two different time points such as in anatomical structures or lesions. Illumination variation usually challenges the change detection methods in many cases. Robust principal component analysis (RPCA) takes intensity normalization and linear interpolation to greatly reduce the illumination variation between the continuous frames and then decomposes the image matrix to obtain the robust background model. The matrix-RPCA can obtain clear change regions, but when there are local bright spots on the image, the background model is vulnerable to illumination, and the change detection results are inaccurate. In this paper, a patch-based RPCA (P-RPCA) is proposed to detect the change of fundus image pairs, where a pair of fundus images is normalized and linearly interpolated to expand a low-rank image sequence; then, images are divided into many patches to obtain an image-patch matrix, and finally, the change regions are obtained by the low-rank decomposition. The proposed method is validated on a set of large lesion image pairs in clinical data. The area under curve (AUC) and mean average precision (mAP) of the method proposed in this paper are 0.9832 and 0.8641, respectively. For a group of small lesion image pairs with obvious local illumination changes in clinical data, the AUC and mAP obtained by the P-RPCA method are 0.9893 and 0.9401, respectively. The results show that the P-RPCA method is more robust to local illumination changes than the RPCA method, and has stronger performance in change detection than the RPCA method.
37

Kai, H., J. Hirokawa, and M. Ando. "Analysis of inner fields and aperture illumination of an oversize rectangular slotted waveguide." IEE Proceedings - Microwaves, Antennas and Propagation 150, no. 6 (2003): 415. http://dx.doi.org/10.1049/ip-map:20030769.

38

Tseng, C. H., and T. H. Chu. "Principle and results of microwave diversity imaging of conducting objects using multisource illumination." IEE Proceedings - Microwaves, Antennas and Propagation 151, no. 2 (2004): 149. http://dx.doi.org/10.1049/ip-map:20040179.

39

Ye, Xiufen, Haibo Yang, Chuanlong Li, Yunpeng Jia, and Peng Li. "A Gray Scale Correction Method for Side-Scan Sonar Images Based on Retinex." Remote Sensing 11, no. 11 (May 29, 2019): 1281. http://dx.doi.org/10.3390/rs11111281.

Abstract:
When side-scan sonars collect data, sonar energy attenuation, the residual of time varying gain, beam patterns, angular responses, and sonar altitude variations occur, which lead to an uneven gray level in side-scan sonar images. Therefore, gray scale correction is needed before further processing of side-scan sonar images. In this paper, we introduce the causes of gray distortion in side-scan sonar images and the commonly used optical and side-scan sonar gray scale correction methods. As existing methods cannot effectively correct distortion, we propose a simple, yet effective gray scale correction method for side-scan sonar images based on Retinex given the characteristics of side-scan sonar images. Firstly, we smooth the original image and add a constant as an illumination map. Then, we divide the original image by the illumination map to produce the reflection map. Finally, we perform element-wise multiplication between the reflection map and a constant coefficient to produce the final enhanced image. Two different schemes are used to implement our algorithm. For gray scale correction of side-scan sonar images, the proposed method is more effective than the latest similar methods based on the Retinex theory, and the proposed method is faster. Experiments prove the validity of the proposed method.
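The correction recipe in this abstract is simple enough to sketch directly; the Gaussian scale, additive constant, and output gain below are illustrative placeholders rather than the paper's tuned values.

```python
import cv2
import numpy as np

def correct_sss_gray(img, sigma=50, offset=10.0, gain=0.5):
    """Retinex-style correction: smoothed image plus a constant as illumination, reflection = image / illumination."""
    f = img.astype(np.float32)
    illumination = cv2.GaussianBlur(f, (0, 0), sigma) + offset         # smoothed original plus a constant
    reflection = f / illumination                                       # divide the original by the illumination map
    return np.clip(reflection * gain * 255.0, 0, 255).astype(np.uint8)  # element-wise scaling by a constant coefficient
```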
40

Smithson, H. E., and T. Morimoto. "Surface color under environmental illumination." London Imaging Meeting 2020, no. 1 (September 29, 2020): 33–38. http://dx.doi.org/10.2352/issn.2694-118x.2020.lim-44.

Abstract:
Objects in real three-dimensional environments receive illumination from all directions, characterized in computer graphics by an environmental illumination map. The spectral content of this illumination can vary widely with direction [1], which means that the computational task of recovering surface color under environmental illumination cannot be reduced to correction for a single illuminant. We report the performance of human observers in selecting a target surface color from three distractors, one rendered under the same environmental illumination as the target, and two rendered under a different environmental illumination. Surface colors were selected such that, in the vast majority of trials, observers could identify the environment that contained non-identical surface colors, and color constancy performance was analyzed as the percentage of correct choices between the remaining two surfaces. The target and distractor objects were either matte or glossy and presented either with surrounding context or in a dark void. Mean performance ranged from 70% to 80%. There was a significant improvement in the presence of context, but no difference for matte and glossy stimuli, and no interaction between gloss and context. Analysis of trial-by-trial responses showed a dependence on the statistical properties of previously viewed images. Such analyses provide a means of investigating mechanisms that depend on environmental features, and not only on the properties of the instantaneous proximal image.
41

Smith, William A. P., and Edwin R. Hancock. "Estimating Facial Albedo from a Single Image." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 06 (September 2006): 955–70. http://dx.doi.org/10.1142/s0218001406005071.

Abstract:
This paper describes how a facial albedo map can be recovered from a single image using a statistical model that captures variations in surface normal direction. We fit the model to intensity data using constraints on the surface normal direction provided by Lambert's law and then use the differences between observed and reconstructed image brightness to estimate the albedo. We show that this process is stable under varying illumination. We then show how eigenfaces trained on albedo maps may provide a better representation for illumination insensitive recognition than those trained on raw image intensity.
42

An, Ziheng, Chao Xu, Kai Qian, Jubao Han, Wei Tan, Dou Wang, and Qianqian Fang. "EIEN: Endoscopic Image Enhancement Network Based on Retinex Theory." Sensors 22, no. 14 (July 21, 2022): 5464. http://dx.doi.org/10.3390/s22145464.

Abstract:
In recent years, deep convolutional neural network (CNN)-based image enhancement has shown outstanding performance. However, due to the problems of uneven illumination and low contrast existing in endoscopic images, the implementation of medical endoscopic image enhancement using CNN is still an exploratory and challenging task. An endoscopic image enhancement network (EIEN) based on the Retinex theory is proposed in this paper to solve these problems. The structure consists of three parts: decomposition network, illumination correction network, and reflection component enhancement algorithm. First, the decomposition network model of pre-trained Retinex-Net is retrained on the endoscopic image dataset, and then the images are decomposed into illumination and reflection components by this decomposition network. Second, the illumination components are corrected by the proposed self-attention guided multi-scale pyramid structure. The pyramid structure is used to capture the multi-scale information of the image. The self-attention mechanism is based on the imaging nature of the endoscopic image, and the inverse image of the illumination component is fused with the features of the green and blue channels of the image to be enhanced to generate a weight map that reassigns weights to the spatial dimension of the feature map, to avoid the loss of details in the process of multi-scale feature fusion and image reconstruction by the network. The reflection component enhancement is achieved by sub-channel stretching and weighted fusion, which is used to enhance the vascular information and image contrast. Finally, the enhanced illumination and reflection components are multiplied to obtain the reconstructed image. We compare the results of the proposed method with six other methods on a test set. The experimental results show that EIEN enhances the brightness and contrast of endoscopic images and highlights vascular and tissue information. At the same time, the method in this paper obtained the best results in terms of visual perception and objective evaluation.
43

Zhao Xinyu (赵馨宇) and Huang Fuzhen (黄福珍). "Image Enhancement Based on Dual-Channel Prior and Illumination Map Guided Filtering." Laser & Optoelectronics Progress 58, no. 8 (2021): 0810001. http://dx.doi.org/10.3788/lop202158.0810001.

44

Wang, Li-Li, Zheng Yang, Zhi-Qiang Ma, and Qin-Ping Zhao. "Real-Time Global Illumination Rendering for Mesostructure Surface Based on Gradient Map." Journal of Software 22, no. 10 (October 25, 2011): 2454–66. http://dx.doi.org/10.3724/sp.j.1001.2011.03881.

45

Liu, Ting, Takaya Yuizono, Zhisheng Wang, and Haiwen Gao. "The Influence of Classroom Illumination Environment on the Efficiency of Foreign Language Learning." Applied Sciences 10, no. 6 (March 11, 2020): 1901. http://dx.doi.org/10.3390/app10061901.

Abstract:
This paper investigated foreign language learning efficiency in four different illumination environments (in different illuminance and color temperatures), focusing on the influence of the illumination environment on foreign language learners’ sentimental status, by means of foreign language skills testing in mind-map, objective evaluation of physiological reaction, and subjective evaluation of psychological reaction. It was shown that in different illumination environments, the language skills of foreign language learners were different, and their psychological and physiological reactions varied, which influenced the efficiency of foreign language learning. The results indicated that the ideal learning space was in high illuminance and low color temperature, which increased the stimulation in foreign language learners; promoted the formation of optimistic sentiment; and enhanced their interest in, and the quality and efficiency of, foreign language learning.
46

Yu Mu-xin (余慕欣), Zhou Wen-chao (周文超), and Wu Yi-hui (吴一辉). "Improving the Imaging Performance of Plasmonic Structured Illumination Microscopy Using MAP Estimation Method." Acta Photonica Sinica 47, no. 4 (2018): 422003. http://dx.doi.org/10.3788/gzxb20184704.0422003.

47

Han, Sung-Ju, Jun-Sup Shin, Kyungnyun Kim, Sang-Yoon Lee, and Hyunki Hong. "Using Human Objects for Illumination Estimation and Shadow Generation in Outdoor Environments." Symmetry 11, no. 10 (October 10, 2019): 1266. http://dx.doi.org/10.3390/sym11101266.

Abstract:
In computer graphics and augmented reality applications, the illumination information in an outdoor environment enables us to generate a realistic shadow for a virtual object. This paper presents a method by which to estimate the illumination information using a human object in a scene. A Gaussian mixture model, in which the mixtures of Gaussian distributions are symmetrical, is employed to learn the background. The human object is then segmented from the input images and the disparity map obtained by a stereo camera. The ground plane in the scene, which is important for estimating the location of the human object on the ground, is then detected using the v-disparity map. The altitude and the azimuth value of the sun are computed from the geometric relationship of three scene elements: the ground, human object, and human-shadow region. The experimental results showed that the proposed method can estimate the sun information accurately and generate a shadow in the scene for a virtual object.
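The geometric core of the estimate can be illustrated in a few lines, assuming a flat ground plane, a measured person height and shadow length in metres, and a shadow direction expressed in ground-plane coordinates; the stereo segmentation and v-disparity ground detection described in the paper are not shown.

```python
import numpy as np

def sun_from_shadow(person_height_m, shadow_len_m, shadow_dir_xy):
    """Altitude from the height/shadow-length ratio; azimuth opposite to the shadow direction on the ground plane."""
    altitude = np.degrees(np.arctan2(person_height_m, shadow_len_m))
    azimuth = (np.degrees(np.arctan2(shadow_dir_xy[1], shadow_dir_xy[0])) + 180.0) % 360.0
    return altitude, azimuth

print(sun_from_shadow(1.75, 2.4, (0.0, 1.0)))   # about 36 degrees altitude; sun opposite the +y shadow direction
```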
48

Xiang, Hu Yan, and Xi Rong Ma. "An Improved Multi-Exposure Image Fusion Algorithm." Advanced Materials Research 403-408 (November 2011): 2200–2205. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.2200.

Abstract:
An improved Multi-Exposure image fusion scheme is proposed to fuse visual images for wide range illumination applications. While previous image fusion approaches perform the fusion only concern with local details such as regional contrast and gradient, the proposed algorithm takes global illumination contrast into consideration at the same time; this can extend the dynamic range evidently. Wavelet is used as Multi-Scale analysis tool in intensity fusion. For color fusion, HSI color model and weight map based method is used. The experimental results showed that the proposed fusion scheme has significant advantages in dynamic range, regional contrast and color saturation.
49

Zhou, Wenzhang, Yong Chen, and Siyuan Liang. "Sparse Haar-Like Feature and Image Similarity-Based Detection Algorithm for Circular Hole of Engine Cylinder Head." Applied Sciences 8, no. 10 (October 22, 2018): 2006. http://dx.doi.org/10.3390/app8102006.

Abstract:
If the circular holes of an engine cylinder head are distorted, cracked, defective, etc., the normal running of the equipment will be affected. For detecting these faults with high accuracy, this paper proposes a detection method based on feature point matching, which can reduce the detection error caused by distortion and light interference. First, the effective and robust feature vectors of pixels are extracted based on improved sparse Haar-like features. Then we calculate the similarity and find the most similar matching point from the image. In order to improve the robustness to the illumination, this paper uses the method based on image similarity to map the original image, so that the same region under different illumination conditions has similar spatial distribution. The experiments show that the algorithm not only has high matching accuracy, but also has good robustness to the illumination.
50

Faria, Tcharles V. B., and Fernando J. S. Moreira. "New technique for shaping axisymmetric dual-reflector antennas using conic sections to control aperture illumination." IET Microwaves, Antennas & Propagation 14, no. 12 (October 7, 2020): 1310–15. http://dx.doi.org/10.1049/iet-map.2020.0156.
