
Journal articles on the topic 'Image based shading'



Consult the top 50 journal articles for your research on the topic 'Image based shading.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

BARNES, NICK, and ZHI-QIANG LIU. "KNOWLEDGE-BASED SHAPE-FROM-SHADING." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 01 (February 1999): 1–23. http://dx.doi.org/10.1142/s0218001499000021.

Full text
Abstract:
In this paper, we study the problem of recovering approximate shape from the shading of a three-dimensional object in a single image when knowledge about the object is available. The application of knowledge-based methods to low-level image processing tasks will help overcome problems that arise from processing images using a pixel-based approach. Shape-from-shading has generally been approached by precognitive vision methods where a standard operator is applied to the image based on assumptions about the imaging process and generic properties of what appears. This paper explores some advantages of applying knowledge and hypotheses about what appears in the image. The knowledge and hypotheses used here come from domain knowledge and edge-matching. Specifically, we are able to find solutions to some problems that cannot be solved by other methods and gain advantages in terms of computation speed over similar approaches. Further, we can fully automate the derivation of the approximate shape of an object. This paper demonstrates the efficacy of using knowledge in the basic operation of an early vision operator, and so introduces a new paradigm for computer vision that may be applied to other early vision operators.
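For orientation, the image-formation model that shape-from-shading methods such as this invert is, in its simplest form, the Lambertian irradiance equation. The sketch below (Python/NumPy) is an illustrative forward model under the usual single-distant-light, constant-albedo assumptions; it is not the authors' knowledge-based operator, and the function name and defaults are ours.

```python
import numpy as np

def lambertian_shading(normals, light_dir, albedo=1.0):
    """Forward Lambertian model: I = albedo * max(0, n . l).

    normals:   (H, W, 3) array of unit surface normals
    light_dir: (3,) vector pointing towards the distant light
    Shape-from-shading inverts this relation: given the observed
    intensities I, recover the normals (and hence the depth map).
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return albedo * np.clip(normals @ l, 0.0, None)
```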
APA, Harvard, Vancouver, ISO, and other styles
2

LI, PING, HANQIU SUN, JIANBING SHEN, and CHEN HUANG. "HDR IMAGE RERENDERING USING GPU-BASED PROCESSING." International Journal of Image and Graphics 12, no. 01 (January 2012): 1250007. http://dx.doi.org/10.1142/s0219467812500076.

Full text
Abstract:
One essential process in image rerendering is to replace the existing texture in the region of interest with other user-preferred textures, while preserving the shading and similar texture distortion. In this paper, we propose graphics processing unit (GPU)-accelerated high dynamic range (HDR) image rerendering using revisited non-local means (NLM) processing in parallel on the GPU-CUDA platform, to reproduce realistic rendering of HDR images with retexturing and transparent/translucent effects. Our image-based approach, using a GPU-based pipeline in the gradient domain, provides efficient processing with easy-to-control image retexturing and special shading effects. The experimental results show the efficiency and high-quality performance of our approach.
APA, Harvard, Vancouver, ISO, and other styles
3

Vergne, Romain, Pascal Barla, Roland W. Fleming, and Xavier Granier. "Surface flows for image-based shading design." ACM Transactions on Graphics 31, no. 4 (August 5, 2012): 1–9. http://dx.doi.org/10.1145/2185520.2185590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gądek-Moszczak, Aneta, Leszek Wojnar, and Adam Piwowarczyk. "Comparison of Selected Shading Correction Methods." System Safety: Human - Technical Facility - Environment 1, no. 1 (March 1, 2019): 819–26. http://dx.doi.org/10.2478/czoto-2019-0105.

Full text
Abstract:
The shading effect is an image defect that is often invisible to human visual perception but may cause difficulties in proper image processing and object detection, especially when the aim of the task is the detection and quantitative analysis of objects. Several correction methods exist in image processing systems and in the literature; however, some of them introduce unexpected changes in the images, which may interfere with the final quantitative analysis. To solve this problem, the authors propose a new method for shading correction based on simulating the image background with analytical functions that return pixel values representing smooth grey-level changes. A comparison of the correction effects of standard methods and the proposed method is presented.
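The background-simulation idea can be illustrated with a generic shading-correction sketch: fit a smooth low-order polynomial surface to the image by least squares and subtract it, so that only the residual detail remains. This is a minimal sketch under that polynomial-background assumption, not the authors' exact analytical model; the function name and degree are ours.

```python
import numpy as np

def correct_shading_poly(img, degree=2):
    """Estimate a smooth polynomial background and subtract it.

    img: 2D grayscale image (float). Returns the flattened image
    plus the mean background level, so overall brightness is kept.
    """
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel() / w
    y = y.ravel() / h
    # Design matrix of monomials x^i * y^j with i + j <= degree.
    cols = [(x ** i) * (y ** j)
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    background = (A @ coef).reshape(h, w)
    return img - background + background.mean()
```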
APA, Harvard, Vancouver, ISO, and other styles
5

Tak, Yoon-Oh, Anjin Park, Janghoon Choi, Jonghyun Eom, Hyuk-Sang Kwon, and Joo Beom Eom. "Simple Shading Correction Method for Brightfield Whole Slide Imaging." Sensors 20, no. 11 (May 29, 2020): 3084. http://dx.doi.org/10.3390/s20113084.

Full text
Abstract:
Whole slide imaging (WSI) refers to the process of creating a high-resolution digital image of a whole slide. Since digital images are typically produced by stitching image sequences acquired from different fields of view, the visual quality of the images can be degraded owing to shading distortion, which produces black plaid patterns on the images. A shading correction method for brightfield WSI is presented, which is simple but robust not only against typical image artifacts caused by specks of dust and bubbles, but also against fixed-pattern noise, or spatial variations in pixel values under uniform illumination. The proposed method comprises primarily of two steps. The first step constructs candidates of a shading distortion model from a stack of input image sequences. The second step selects the optimal model from the candidates. The proposed method was compared experimentally with two previous state-of-the-art methods, regularized energy minimization (CIDRE) and background and shading correction (BaSiC) and showed better correction scores, as smooth operations and constraints were not imposed when estimating the shading distortion. The correction scores, averaged over 40 image collections, were as follows: proposed method, 0.39 ± 0.099; CIDRE method, 0.67 ± 0.047; BaSiC method, 0.55 ± 0.038. Based on the quantitative evaluations, we can confirm that the proposed method can correct not only shading distortion, but also fixed-pattern noise, compared with the two previous state-of-the-art methods.
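A common baseline for building a shading-distortion model from a stack of tiles is retrospective flat-field correction: take a robust per-pixel statistic (e.g. the median) across the stack as the multiplicative shading field and divide it out. The sketch below assumes that simple model; the paper's candidate construction and selection steps are not reproduced here.

```python
import numpy as np

def flat_field_correct(stack):
    """stack: (N, H, W) array of tiles from the same acquisition.

    The per-pixel median across tiles approximates the multiplicative
    shading field S; each tile is corrected as I / S (normalised so
    the mean brightness is unchanged).
    """
    shading = np.median(stack, axis=0)
    shading = shading / shading.mean()
    eps = 1e-6
    return stack / (shading + eps)
```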
APA, Harvard, Vancouver, ISO, and other styles
6

Baslamisli, Anil S., Yang Liu, Sezer Karaoglu, and Theo Gevers. "Physics-based shading reconstruction for intrinsic image decomposition." Computer Vision and Image Understanding 205 (April 2021): 103183. http://dx.doi.org/10.1016/j.cviu.2021.103183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tian, Lei, Aiguo Song, Dapeng Chen, and Dejing Ni. "Haptic Display of Image Based on Multi-Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 08 (July 17, 2016): 1655023. http://dx.doi.org/10.1142/s0218001416550235.

Full text
Abstract:
Image feature extraction is one of the key technologies of image haptic display. In this paper, a multi-feature extraction method for objects in an image is proposed to improve image-based haptic perception. The multi-feature extraction includes contour shape extraction, pattern extraction and detail texture extraction. First, we use an intrinsic decomposition method to decompose an image into a shading image and a reflectance image. The reflectance image describes the non-illumination-affected color patterns spread on the surface. The shading image is then used for contour shape and detail texture extraction. Contour shape extraction is based on a partial differential equation (PDE), to reconstruct a three-dimensional (3D) surface model in virtual environments, while detail texture extraction is based on a fractional differential method. Finally, the various features extracted above are haptically rendered by different methods. The experimental results show the effectiveness and potential of the proposed method for improving human haptic perception and recognition in virtual environments.
APA, Harvard, Vancouver, ISO, and other styles
8

Abada, Lyes, and Saliha Aouat. "Tabu Search to Solve the Shape from Shading Ambiguity." International Journal on Artificial Intelligence Tools 24, no. 05 (October 2015): 1550035. http://dx.doi.org/10.1142/s0218213015500359.

Full text
Abstract:
The three-dimensional reconstruction of an object from a single grey-scale image is a basic problem in computer vision, known as shape from shading. It is considered ill-posed due to the ambiguity between the convexity and the concavity of the surface to be reconstructed. A method for resolving the ambiguity based on the singular points of the image is proposed in this paper. It uses tabu search to determine the optimal solution within the solution space, and it determines all 3D objects for an image in a short time. The proposed method has been tested on synthetic and real images and compared with another resolution method.
APA, Harvard, Vancouver, ISO, and other styles
9

Xu, Jin, and Zhi Jie Shen. "Research of Lighting Technology Based on the OpenGL." Advanced Materials Research 588-589 (November 2012): 2113–16. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.2113.

Full text
Abstract:
The manipulation of lighting is a very important component of realistic image rendering, covering both the lighting model and shading. This paper first introduces the basic concepts, principles and general programming approach of lighting with OpenGL; it then describes how to compute the light intensity and shading interpolation by investigating the reflection factors of object surfaces; finally, it provides a set of cases that show the effects achievable with different rendering techniques.
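The lighting computation described here corresponds to the classic Phong model that OpenGL's fixed-function pipeline evaluates per vertex (with Gouraud interpolation of the resulting colours). A minimal single-point sketch in Python, with illustrative coefficient values:

```python
import numpy as np

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """Phong lighting for one surface point.

    n, l, v: unit normal, light direction and view direction.
    Returns ambient + diffuse + specular intensity, the model that
    fixed-function OpenGL lighting (and shading interpolation of the
    resulting vertex colours) is built on.
    """
    n, l, v = (np.asarray(a, float) for a in (n, l, v))
    diff = max(np.dot(n, l), 0.0)
    r = 2.0 * diff * n - l                 # reflection of l about n
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    return ka + kd * diff + ks * spec
```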
APA, Harvard, Vancouver, ISO, and other styles
10

OKATANI, Takayuki, and Koichiro DEGUCHI. "3D Shape Reconstruction from an Endoscope Image Based on Shading." Transactions of the Society of Instrument and Control Engineers 33, no. 10 (1997): 1035–42. http://dx.doi.org/10.9746/sicetr1965.33.1035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Ma, Ji Quan, Fan Hui Kong, Peng Zhao, and Bing Gong. "A Specular Shape-from-Shading Method with Constrained Inpaiting." Applied Mechanics and Materials 530-531 (February 2014): 377–81. http://dx.doi.org/10.4028/www.scientific.net/amm.530-531.377.

Full text
Abstract:
The existence of specular highlights is a great obstacle for shape from shading. For a single grey-scale image with only intensity information, existing highlight-detection methods based on chroma or polarization analysis cannot be applied directly. A method using surface shape is therefore provided that makes full use of the imaging process. Through surface normal estimation, the maximum a posteriori probability of each reflection component is computed under simulated annealing and the highlight areas are detected. Finally, the highlights are removed based on the assumption of curvature continuity. Experiments reveal that this method is effective on synthetic and real-world images and improves the accuracy of surface recovery for images containing specular highlights.
APA, Harvard, Vancouver, ISO, and other styles
12

Wilson, Antony Deol. "Detection and Classification of Plant Diseases by Image Processing." International Journal for Research in Applied Science and Engineering Technology 9, no. VIII (August 15, 2021): 789–93. http://dx.doi.org/10.22214/ijraset.2021.37482.

Full text
Abstract:
The aim of this project is to design, implement and evaluate an image-processing software solution for the automatic detection and classification of plant leaf diseases. Studies show that relying on pure naked-eye observation by experts to detect and classify diseases can be time-consuming and expensive, especially in rural areas and developing countries. We therefore present a fast, automatic, inexpensive and accurate image-processing-based solution. The solution is composed of four main phases: in the first phase, we create a color transformation structure for the RGB leaf image and then apply a color-space transformation to it; in the second phase, the images are segmented using the K-means clustering technique; in the third phase, we compute the texture features of the segmented infected objects; finally, in the fourth phase, the extracted features are passed through a pre-trained neural network.
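The K-means segmentation stage of such a pipeline can be sketched as follows; for brevity the clustering is shown directly on RGB pixel values (the paper applies a colour-space transformation first), and the function name and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_leaf(rgb, n_clusters=3):
    """Cluster pixel colours with K-means and return a label image.

    rgb: (H, W, 3) image. Each pixel is assigned to one of
    n_clusters colour clusters; the cluster corresponding to the
    diseased tissue would then be selected for feature extraction.
    """
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)
```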
APA, Harvard, Vancouver, ISO, and other styles
13

Tanabe, Yoshinori, and Takayuki Ishida. "Development of a Quantitative Method Based on the Hill-Shading Technique for Assessing Morphological Changes in the Bone During Image-Guided Radiotherapy for Bone Metastasis." Journal of Medical Imaging and Health Informatics 11, no. 8 (August 1, 2021): 2173–77. http://dx.doi.org/10.1166/jmihi.2021.3818.

Full text
Abstract:
We hypothesized that positioning corrections matching the image in image-guided radiotherapy (IGRT) for bone metastasis contain information on temporal structural changes in irradiated bone metastatic lesions during the treatment period. To extract and quantify these changes, a hill-shading technique was used to emphasize the characteristics of the structure. Spatial frequency components of the bone lesions were described based on hillshading transformations of the images, and a quantification method was suggested. The matching images of 11 patients who received IGRT for lytic bone metastases were evaluated retrospectively. The hill-shading technique was applied to images of both metastatic lesions and normal bone outside the irradiation field. Outlined bone microstructures were analyzed by a two-dimensional power spectrum using fast Fourier transformation, and the frequency components were quantified. Correlations between the frequency components and cumulative radiation doses were analyzed between the irradiated metastatic bone and normal bone outside the irradiation field. The high-frequency components of the metastatic bone lesion images decreased by a mean of 7% (minimum:−0.2%, maximum: −13.2%) following cumulative irradiation doses of 20–30 Gy versus 0–10 Gy. In the normal bone outside the irradiation field, high-frequency components increased by a mean of 0.07% (minimum:−2.0%, maximum: +4.4%) following irradiation doses of 20–30 Gy versus 0–10 Gy. High-frequency components were significantly different between the normal bone and bone metastases following radiotherapy (p <0.0001). Hill-shading transformation of matching images obtained during the treatment period enabled adaptation and evaluation of irradiation-induced changes in the microstructure of bone metastatic lesions.
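The two image-processing steps, hill-shading and spectral quantification, can be sketched generically: shade the image as if its pixel values were terrain heights, then measure the fraction of 2D FFT power above a frequency cutoff. The parameters (azimuth, altitude, cutoff) below are illustrative, not the study's protocol.

```python
import numpy as np

def hillshade(z, azimuth_deg=315.0, altitude_deg=45.0):
    """Standard hill-shading of a height map z (here: pixel values)."""
    az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
    gy, gx = np.gradient(z.astype(float))
    slope = np.arctan(np.hypot(gx, gy))
    aspect = np.arctan2(-gx, gy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

def high_freq_fraction(img, cutoff=0.25):
    """Fraction of 2D power spectrum beyond `cutoff` (cycles/pixel)."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.hypot(fx, fy)
    return power[radius > cutoff].sum() / power.sum()
```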
APA, Harvard, Vancouver, ISO, and other styles
14

Sajko, Robert, and Zeljka Mihajlovic. "Fast Image-Based Ambient Occlusion IBAO." International Journal of Virtual Reality 10, no. 4 (January 1, 2011): 61–65. http://dx.doi.org/10.20870/ijvr.2011.10.4.2830.

Full text
Abstract:
The quality of computer rendering and the perception of realism greatly depend on the shading method used to implement the interaction of light with the surfaces of objects in a scene. Ambient occlusion (AO) enhances the realistic impression of rendered objects and scenes. The properties that make Screen Space Ambient Occlusion (SSAO) interesting for real-time graphics are its independence from scene complexity and its support for fully dynamic scenes. However, there are also important issues with current approaches: poor texture cache use, introduction of noise, and performance swings. In this paper, a straightforward solution is presented. Instead of a traditional, geometry-based sampling method, a novel, image-based sampling method is developed, coupled with a revised heuristic function for computing occlusion. The proposed algorithm, harnessing GPU power, improves texture cache use and reduces aliasing artifacts. Two implementations, traditional and novel, were developed, and their comparison reveals the improved performance and quality of the proposed algorithm.
APA, Harvard, Vancouver, ISO, and other styles
15

Seo Song, Eun, Gi Tae Kim, and Sung Dae Hong. "Real-Time Shading Image Implementation Technology for Physical Viod Display." International Journal of Engineering & Technology 7, no. 3.34 (September 1, 2018): 86. http://dx.doi.org/10.14419/ijet.v7i3.34.18779.

Full text
Abstract:
Background/Objectives: The purpose of this study is control technology that reflects the user's appearance and movement on the void display in real time. Methods/Statistical analysis: In this paper, we developed real-time shading-image data acquisition based on an RGB-D sensor and a real-time interactive image control structure for realizing 0-255 depth images on a physical void display. We also study an integrated interlocking control solution for the integrated interlocking of hardware and software. Findings: Conventional flip displays show data in a 0/1 image representation. In contrast, the void display we are studying acquires real-time data based on RGB-D and shows the data in a 0-255 depth image representation. Improvements/Applications: In the void display, the 0/1 image representation was extended to a 0-255 depth representation.
APA, Harvard, Vancouver, ISO, and other styles
16

Yang, Yong Chen, and Xiang Dong Gao. "Study of 3D Reconstruction of Laser Welding Pool Surface Based on Shape from Shading." Applied Mechanics and Materials 201-202 (October 2012): 344–47. http://dx.doi.org/10.4028/www.scientific.net/amm.201-202.344.

Full text
Abstract:
Shape from shading (SFS) is one of the critical techniques for 3D shape recovery in computer vision. The task of SFS is to reconstruct the 3D shape of the visible surface of an object from a single picture using its grey-level variation. Analyzing the 3D shape of the welding pool is important for evaluating welding quality. During the disk laser welding experiments, a high-power laser was used as the auxiliary illuminant, and a high-speed image acquisition system with an infrared filter was applied to capture the welding pools in real time. The slant and tilt of the illuminant source were estimated statistically to reconstruct the 3D shape of the visible welding pool surface using the localization method of SFS. Median filtering and cubic spline interpolation were used to denoise the images and smooth the image shapes. Experimental results showed that the proposed technique could reconstruct parts of the welding pool effectively.
APA, Harvard, Vancouver, ISO, and other styles
17

Chen, Wei. "Metal Label Pressed Protuberant Characters Segmentation Based on Shape Index." Advanced Materials Research 187 (February 2011): 672–76. http://dx.doi.org/10.4028/www.scientific.net/amr.187.672.

Full text
Abstract:
According to the characteristics of metal label images, a segmentation method for pressed protuberant characters based on the shape index is proposed. The pressed protuberant characters and the printed character image are extracted with morphological processing. The surface gradient is then obtained through the shape-from-shading method, the shape index operator is computed from it, and classification is realized by using the shape index as the feature vector in a support vector machine. Compared with classic image segmentation algorithms, this algorithm is much more serviceable when the grey-level distribution of the image is non-uniform. The method is simple, effective and not constrained by the label layout.
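The shape index used as the feature here reduces the two principal curvatures at each pixel to a single number in [-1, 1]. A minimal sketch, assuming the principal curvatures k1 >= k2 have already been estimated from the recovered surface and that positive curvature denotes convexity:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures.

    k1, k2: arrays of principal curvatures with k1 >= k2.
    Convex (protruding character) regions map towards +1,
    flat background towards 0, and indentations towards -1.
    arctan2 handles the degenerate case k1 == k2 cleanly.
    """
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```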
APA, Harvard, Vancouver, ISO, and other styles
18

QING, LAIYUN, SHIGUANG SHAN, WEN GAO, and BO DU. "FACE RECOGNITION UNDER GENERIC ILLUMINATION BASED ON HARMONIC RELIGHTING." International Journal of Pattern Recognition and Artificial Intelligence 19, no. 04 (June 2005): 513–31. http://dx.doi.org/10.1142/s0218001405004186.

Full text
Abstract:
The performances of the current face recognition systems suffer heavily from the variations in lighting. To deal with this problem, this paper presents an illumination normalization approach by relighting face images to a canonical illumination based on the harmonic images model. Benefiting from the observations that human faces share similar shape, and the albedos of the face surfaces are quasi-constant, we first estimate the nine low-frequency components of the illumination from the input facial image. The facial image is then normalized to the canonical illumination by re-rendering it using the illumination ratio image technique. For the purpose of face recognition, two kinds of canonical illuminations, the uniform illumination and a frontal flash with the ambient lights, are considered, among which the former encodes merely the texture information, while the latter encodes both the texture and shading information. Our experiments on the CMU-PIE face database and the Yale B face database have shown that the proposed relighting normalization can significantly improve the performance of a face recognition system when the probes are collected under varying lighting conditions.
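The nine-component harmonic lighting model and the illumination-ratio relighting step can be sketched as follows, assuming the nine coefficients already describe the harmonic irradiance at a surface normal (standard real spherical-harmonic basis constants are used; this is an illustrative sketch, not the authors' estimation procedure):

```python
import numpy as np

def sh_basis(normals):
    """First nine real spherical-harmonic basis functions at unit normals.

    normals: (..., 3). Constants follow the usual real-SH convention.
    """
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z, 0.546274 * (x ** 2 - y ** 2),
    ], axis=-1)

def relight_ratio(image, normals, coeffs_src, coeffs_dst, eps=1e-6):
    """Illumination-ratio relighting: scale each pixel by the ratio of
    the target to the estimated source harmonic irradiance."""
    basis = sh_basis(normals)
    e_src = basis @ coeffs_src
    e_dst = basis @ coeffs_dst
    return image * (e_dst / (e_src + eps))
```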
APA, Harvard, Vancouver, ISO, and other styles
19

Petikam, Lohit, Ken Anjyo, and Taehyun Rhee. "Shading Rig: Dynamic Art-directable Stylised Shading for 3D Characters." ACM Transactions on Graphics 40, no. 5 (October 31, 2021): 1–14. http://dx.doi.org/10.1145/3461696.

Full text
Abstract:
Despite the popularity of three-dimensional (3D) animation techniques, the style of 2D cel animation is seeing increased use in games and interactive applications. However, conventional 3D toon shading frequently requires manual editing to clean up undesired shadows or add stylistic details based on art direction. This editing is impractical for the frame-by-frame editing in cartoon feature film post-production. For interactive stylised media and games, post-production is unavailable due to real-time constraints, so art-direction must be preserved automatically. For these reasons, artists often resort to mesh and texture edits to mitigate undesired shadows typical of toon shaders. Such edits allow real-time rendering but are limited in resolution, animation quality and lack detail control for stylised shadow design. In our framework, artists build a “shading rig,” a collection of these edits, that allows artists to animate toon shading. Artists pre-animate the shading rig under changing lighting, to dynamically preserve artistic intent in a live application, without manual intervention. We show our method preserves continuous motion and shape interpolation, with fewer keyframes than previous work. Our shading shape interpolation is computationally cheaper than state-of-the-art image interpolation techniques. We achieve these improvements while preserving vector quality rendering, without resorting either to high texture resolution or mesh density.
APA, Harvard, Vancouver, ISO, and other styles
20

Ragragui, Anouar, Adnane Ouazzani Chahdi, Akram Halli, and Khalid Satori. "Image-based extrusion with realistic surface wrinkles." Journal of Computational Design and Engineering 7, no. 1 (February 1, 2020): 30–43. http://dx.doi.org/10.1093/jcde/qwaa004.

Full text
Abstract:
Per-pixel extrusion mapping is an image-based modeling and rendering technique that can be used in a large variety of computer graphics applications. This technique makes it possible to create geometries without polygonal meshes by extrusion of a basic form stored in a 2D binary image. It is available in four versions: basic extrusion, outward extrusion, beveled extrusion, and chamfered extrusion, and all allow the production of satisfactory results in terms of interactivity or visual quality. The problem is that the extruded surfaces do not exhibit microrelief effects and lack realistic superficial wrinkles. To solve this problem, we propose in this paper a combination of per-pixel extrusion mapping and bump mapping in order to add a microrelief effect to extruded surfaces without changing the basic geometry. For real-time synchronization between the depth scaling of the microreliefs and the resulting shading, we propose to use a scale factor to further perturb the normal as a function of depth.
APA, Harvard, Vancouver, ISO, and other styles
21

Yi, Renjiao, Ping Tan, and Stephen Lin. "Leveraging Multi-View Image Sets for Unsupervised Intrinsic Image Decomposition and Highlight Separation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12685–92. http://dx.doi.org/10.1609/aaai.v34i07.6961.

Full text
Abstract:
We present an unsupervised approach for factorizing object appearance into highlight, shading, and albedo layers, trained by multi-view real images. To do so, we construct a multi-view dataset by collecting numerous customer product photos online, which exhibit large illumination variations that make them suitable for training of reflectance separation and can facilitate object-level decomposition. The main contribution of our approach is a proposed image representation based on local color distributions that allows training to be insensitive to the local misalignments of multi-view images. In addition, we present a new guidance cue for unsupervised training that exploits synergy between highlight separation and intrinsic image decomposition. Over a broad range of objects, our technique is shown to yield state-of-the-art results for both of these tasks.
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Guo Hui, Jin Cheng, and Quan Rui Wei. "A New Perspective Shape from Shading Approach for Hybrid Surfaces." Applied Mechanics and Materials 519-520 (February 2014): 602–5. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.602.

Full text
Abstract:
Shape from shading (SFS) is a classical and important problem in the domain of computer vision. In a previous paper, the authors gave a new image irradiance equation for the perspective SFS method to reconstruct hybrid surfaces that exhibit both diffuse and specular reflection. This paper presents a new approach to computing the viscosity solution of the resulting Hamilton-Jacobi (H-J) equation under the given boundary conditions. The approach is based on a 2D central Hamiltonian and an iterative sweeping method. Experiments are performed on three images and the results show the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
23

Chung, Kuo-Liang, and Shou-Yi Tseng. "New progressive image transmission based on quadtree and shading approach with resolution control." Pattern Recognition Letters 22, no. 14 (December 2001): 1545–55. http://dx.doi.org/10.1016/s0167-8655(01)00106-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Dihl, Leandro, and Soraia Raupp Musse. "Recovering 3D human pose based on biomechanical constraints, postures comfort and image shading." Expert Systems with Applications 41, no. 14 (October 2014): 6305–14. http://dx.doi.org/10.1016/j.eswa.2014.03.049.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Henderson, Paul, and Vittorio Ferrari. "Learning Single-Image 3D Reconstruction by Generative Modelling of Shape, Pose and Shading." International Journal of Computer Vision 128, no. 4 (October 16, 2019): 835–54. http://dx.doi.org/10.1007/s11263-019-01219-8.

Full text
Abstract:
We present a unified framework tackling two problems: class-specific 3D reconstruction from a single image, and generation of new 3D shape samples. These tasks have received considerable attention recently; however, most existing approaches rely on 3D supervision, annotation of 2D images with keypoints or poses, and/or training with multiple views of each object instance. Our framework is very general: it can be trained in similar settings to existing approaches, while also supporting weaker supervision. Importantly, it can be trained purely from 2D images, without pose annotations, and with only a single view per instance. We employ meshes as an output representation, instead of voxels used in most prior work. This allows us to reason over lighting parameters and exploit shading information during training, which previous 2D-supervised methods cannot. Thus, our method can learn to generate and reconstruct concave object classes. We evaluate our approach in various settings, showing that: (i) it learns to disentangle shape from pose and lighting; (ii) using shading in the loss improves performance compared to just silhouettes; (iii) when using a standard single white light, our model outperforms state-of-the-art 2D-supervised methods, both with and without pose supervision, thanks to exploiting shading cues; (iv) performance improves further when using multiple coloured lights, even approaching that of state-of-the-art 3D-supervised methods; (v) shapes produced by our model capture smooth surfaces and fine details better than voxel-based approaches; and (vi) our approach supports concave classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.
APA, Harvard, Vancouver, ISO, and other styles
26

Governi, Lapo, Rocco Furferi, Luca Puggelli, and Yary Volpe. "A Practical Approach Based on Shape from Shading and Fast Marching for 3D Geometry Recovery under Oblique Illumination." Applied Mechanics and Materials 472 (January 2014): 503–9. http://dx.doi.org/10.4028/www.scientific.net/amm.472.503.

Full text
Abstract:
The design of new industrial objects characterized by high stylistic content often starts from sketches or images of the future product, which are subsequently represented in a 3D digital form using CAD software. To speed up this phase, a number of methods for the automatic or semi-automatic translation of sketches or images into a 3D model have been devised, also for reverse engineering purposes. When the image shading is crucial information for recovering the final 3D shape, Fast Marching is recognized as being among the best methods to date, especially for frontally illuminated scenes. Unfortunately, such a method cannot be directly applied when the object illumination in the considered image is oblique. The present work proposes a simple but effective approach for recovering the 3D shape of objects from side-illuminated scenes, i.e. for solving non-eikonal SFS problems. Tested against a set of case studies, the method proved its effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
27

MATSUSHITA, YASUYUKI, STEPHEN LIN, HEUNG-YEUNG SHUM, XIN TONG, and SING BING KANG. "LIGHTING AND SHADOW INTERPOLATION USING INTRINSIC LUMIGRAPHS." International Journal of Image and Graphics 04, no. 04 (October 2004): 585–604. http://dx.doi.org/10.1142/s0219467804001555.

Full text
Abstract:
Densely-sampled image representations such as the light field or Lumigraph have been effective in enabling photorealistic image synthesis. Unfortunately, lighting interpolation with such representations has not been shown to be possible without the use of accurate 3D geometry and surface reflectance properties. In this paper, we propose an approach to image-based lighting interpolation that is based on estimates of geometry and shading from relatively few images. We decompose light fields captured at different lighting conditions into intrinsic images (reflectance and illumination images), and estimate view-dependent scene geometries using multi-view stereo. We call the resulting representation an Intrinsic Lumigraph. In the same way that the Lumigraph uses geometry to permit more accurate view interpolation, the Intrinsic Lumigraph uses both geometry and intrinsic images to allow high-quality interpolation at different views and lighting conditions. The joint use of geometry and intrinsic images is effective in computing shadow masks for shadow prediction at new lighting conditions. We illustrate our approach with images of real scenes.
APA, Harvard, Vancouver, ISO, and other styles
28

Kubecka, Libor, Jiri Jan, and Radim Kolar. "Retrospective Illumination Correction of Retinal Images." International Journal of Biomedical Imaging 2010 (2010): 1–10. http://dx.doi.org/10.1155/2010/780262.

Full text
Abstract:
A method for the correction of non-homogeneous illumination based on optimization of the parameters of a B-spline shading model with respect to Shannon's entropy is presented. The evaluation of Shannon's entropy is based on the Parzen windowing method (Mangin, 2000) with the spline-based shading model. This allows us to express the derivatives of the entropy criterion analytically, which enables efficient use of gradient-based optimization algorithms. Seven different gradient- and non-gradient-based optimization algorithms were initially tested on a set of 40 simulated retinal images, generated by a model of the respective image acquisition system. Among the tested optimizers, the gradient-based optimizer with varying step showed the fastest convergence while providing the best precision. The final algorithm proved able to suppress approximately 70% of the artificially introduced non-homogeneous illumination. To assess the practical utility of the method, it was qualitatively tested on a set of 336 real retinal images; it proved able to substantially eliminate the illumination inhomogeneity in most cases. The application field of this method is especially the preprocessing of retinal images, as preparation for reliable segmentation or registration.
APA, Harvard, Vancouver, ISO, and other styles
29

Dang, Hong She, and Chu Jia Guo. "Volume Measurement Method for Irregular Objects Based on Shape from Shading." Applied Mechanics and Materials 494-495 (February 2014): 789–92. http://dx.doi.org/10.4028/www.scientific.net/amm.494-495.789.

Full text
Abstract:
In this paper, we propose a volume measurement method for irregular objects, with three cameras used in the image acquisition system. In order to reduce the number of intensity levels and be more consistent with the 3D reconstruction method, a method called histogram acceleration is used. Instead of using the regular shape-from-shading method, a relation between intensity and the missing 3D information is established. This method is valid within a certain error range. Experiments show that the method performs well when dealing with objects with a smooth and convex surface.
APA, Harvard, Vancouver, ISO, and other styles
30

Wang, Guohui, Xuan Zhang, and Jin Cheng. "A Unified Shape-From-Shading Approach for 3D Surface Reconstruction Using Fast Eikonal Solvers." International Journal of Optics 2020 (May 29, 2020): 1–12. http://dx.doi.org/10.1155/2020/6156058.

Full text
Abstract:
Object shape reconstruction from images has been an active topic in computer vision. Shape-from-shading (SFS) is an important approach for inferring 3D surface from a single shading image. In this paper, we present a unified SFS approach for surfaces of various reflectance properties using fast eikonal solvers. The whole approach consists of three main components: a unified SFS model, a unified eikonal-type partial differential image irradiance (PDII) equation, and fast eikonal solvers for the PDII equation. The first component is designed to address different reflectance properties including diffuse, specular, and hybrid reflections in the imaging process of the camera. The second component is meant to derive the PDII equation under an orthographic camera projection and a single distant point light source whose direction is the same as the camera. Finally, the last component is targeted at solving the resultant PDII equation by using fast eikonal solvers. It comprises two Godunov-based schemes with fast sweeping method that can handle the eikonal-type PDII equation. Experiments on several synthetic and real images demonstrate that each type of the surfaces can be effectively reconstructed with more accurate results and less CPU running time.
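A textbook Godunov fast-sweeping solver for an eikonal equation |grad u| = f, of the kind referred to here, can be sketched as follows. Under a frontal distant Lambertian light the SFS right-hand side would be f = sqrt(1/I^2 - 1), with the brightest (singular) points typically serving as the u = 0 sources; this is a generic illustration, not the paper's unified PDII solver.

```python
import numpy as np

def fast_sweep_eikonal(f, src_mask, h=1.0, n_iter=8):
    """Godunov fast-sweeping solver for |grad u| = f on a regular grid.

    f:        (H, W) positive right-hand side.
    src_mask: boolean (H, W) mask of points where u = 0.
    Returns an approximation of the viscosity solution u.
    """
    H, W = f.shape
    big = 1e10
    u = np.where(src_mask, 0.0, big)
    sweeps = [(range(H), range(W)),
              (range(H), range(W - 1, -1, -1)),
              (range(H - 1, -1, -1), range(W)),
              (range(H - 1, -1, -1), range(W - 1, -1, -1))]
    for _ in range(n_iter):
        for rows, cols in sweeps:
            for i in rows:
                for j in cols:
                    if src_mask[i, j]:
                        continue
                    a = min(u[i - 1, j] if i > 0 else big,
                            u[i + 1, j] if i < H - 1 else big)
                    b = min(u[i, j - 1] if j > 0 else big,
                            u[i, j + 1] if j < W - 1 else big)
                    fh = f[i, j] * h
                    if abs(a - b) >= fh:
                        cand = min(a, b) + fh
                    else:
                        cand = 0.5 * (a + b + np.sqrt(2 * fh * fh - (a - b) ** 2))
                    u[i, j] = min(u[i, j], cand)
    return u
```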
APA, Harvard, Vancouver, ISO, and other styles
31

Wan, X., J. Liu, and H. Yan. "Phase Correlation based Local Illumination-invariant Method for Multi-Tempro Remote Sensing Image Matching." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-3 (August 11, 2014): 365–72. http://dx.doi.org/10.5194/isprsarchives-xl-3-365-2014.

Full text
Abstract:
This paper aims at image matching under significantly different illumination conditions, especially illumination angle changes, without prior knowledge of lighting conditions. We investigated the illumination impact on Phase Correlation (PC) matrix by mathematical derivation and from which, we decomposed PC matrix as the multiplication product of the illumination impact matrix and the translation matrix. Thus the robustness to illumination variation of the widely used Absolute Dirichlet Curve-fitting (AD-CF) algorithm for pixel-wise disparity estimation is proved. Further, an improved PC matching algorithm is proposed: Absolute Dirichlet SVD (AD-SVD), to achieve illumination invariant image alignment. Experiments of matching DEM simulated terrain shading images under very different illumination angles demonstrated that AD-SVD achieved 1/20 pixels accuracy for image alignment and it is nearly entirely invariant to daily and seasonal solar position variation. The AD-CF algorithm was tested for generating disparity map from multi-illumination angle stereo pairs and the results demonstrated high fidelity to the original DEM and the Normalised Correlation Coefficient (NCC) between the two is 0.96.
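The core phase-correlation step that the derivation builds on can be sketched in a few lines: normalise the cross-power spectrum to unit magnitude and locate the resulting correlation peak. This illustrates why the estimate is largely insensitive to global brightness changes; it is a generic sketch, not the AD-CF or AD-SVD algorithms themselves, and the sign convention depends on which image is taken as the reference.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two images via phase correlation."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12      # unit-magnitude cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx
```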
APA, Harvard, Vancouver, ISO, and other styles
32

Wan, Xue, Chenhui Wang, and Shengyang Li. "The Extension of Phase Correlation to Image Perspective Distortions Based on Particle Swarm Optimization." Sensors 19, no. 14 (July 15, 2019): 3117. http://dx.doi.org/10.3390/s19143117.

Full text
Abstract:
Phase correlation is one of the widely used image registration methods in medical image processing and remote sensing. One of the main limitations of phase correlation-based registration is that it can only cope with Euclidean transformations, such as translation, rotation and scale, which constrains its application in wider fields, such as multi-view image matching, image-based navigation, etc. In this paper, we extend phase correlation to perspective transformations through a combination with particle swarm optimization. Inspired by optical lens alignment based on interference, we propose to use the quality of the PC fringes as the similarity measure; the aim of registration is then to search for the optimized geometric transformation operator that maximizes the PC-based similarity function through a particle swarm optimization approach. The proposed method is validated by image registration experiments using simulated terrain shading, texture and natural landscape images containing different challenges, including illumination variation, lack of texture, motion blur, occlusion and geometric distortions. Further, image-based navigation experiments are carried out to demonstrate that the proposed method is able to correctly recover the trajectory of the camera using multimodal target and reference images. Even under great radiometric and geometric distortions, the proposed method achieves 0.1 sub-pixel matching accuracy on average while other methods fail to find the correspondence.
APA, Harvard, Vancouver, ISO, and other styles
33

Shen, Li-Sheng, Ed F. Deprettere, and P. Dewilde. "A parallel image-rendering algorithm and architecture based on ray tracing and radiosity shading." Computers & Graphics 19, no. 2 (March 1995): 281–96. http://dx.doi.org/10.1016/0097-8493(94)00154-q.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Zwolinski, A., and M. Jarzemski. "Computing and monitoring potential of public spaces by shading analysis using 3d lidar data and advanced image analysis." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-7/W3 (April 29, 2015): 743–50. http://dx.doi.org/10.5194/isprsarchives-xl-7-w3-743-2015.

Full text
Abstract:
The paper addresses the specific context of public spaces in the "shadow" of tall buildings located in European cities. The majority of tall buildings in European cities were built in the last 15 years. Tall buildings appear mainly in city centres, directly at important public spaces that form a viable environment for inhabitants with a variety of public functions (open spaces, green areas, recreation places, shops, services, etc.). All these amenities and services are under the direct impact of extensive shading coming from the tall buildings. The paper focuses on the analysis and representation of the impact of shading from tall buildings on various public spaces in cities using 3D city models. 3D city models in the CityGML standard use 3D LiDAR data as one of the data types for the definition of 3D cities. The structure of CityGML allows analytic applications using existing computer tools, as well as the development of new techniques to estimate the extent of shading coming from high-rises, which affects life in public spaces. These measurable shading parameters at specific times are crucial for the proper functioning, viability and attractiveness of public spaces, and are therefore extremely important for the location of tall buildings at main public spaces in cities. The paper explores the impact of shading from tall buildings in different spatial contexts, against the background of using CityGML models based on core LiDAR data to support controlled urban development in the sense of viable public spaces. The article was prepared within the research project 2TaLL: Application of 3D Virtual City Models in Urban Analyses of Tall Buildings, realized as a part of the Polish-Norway Grants.
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Ruihu, and Bin Fang. "A Combined Approach on RBC Image Segmentation through Shape Feature Extraction." Mathematical Problems in Engineering 2012 (2012): 1–20. http://dx.doi.org/10.1155/2012/194953.

Full text
Abstract:
The classification of erythrocytes plays an important role in clinical diagnosis. Because the shape deformability of red blood cells makes automatic detection and recognition more difficult, we believe that recovered 3D surface shape features give more information than traditional 2D intensity image processing methods. This paper proposes a combined approach for the complex surface segmentation of red blood cells based on the shape-from-shading technique and multiscale surface fitting. By means of the image irradiance equation under SEM imaging conditions, the 3D height field can be recovered from the varying shading. The depth map at each point of the surface is then used to calculate Gaussian curvature and mean curvature, which are used to produce a surface-type label image. Accordingly, the surface is segmented into different parts through multiscale bivariate polynomial function fitting. The experimental results showed that this approach is easily implemented and promising.
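The curvature computation at the heart of the surface-type labelling can be sketched directly from a recovered depth map; the sign pattern of Gaussian curvature K and mean curvature H then gives the peak/pit/ridge/valley/saddle/flat labels. A minimal sketch, assuming unit pixel spacing:

```python
import numpy as np

def gaussian_mean_curvature(z):
    """Gaussian (K) and mean (H) curvature of a depth map z(x, y)."""
    zy, zx = np.gradient(z)        # first derivatives
    zyy, zyx = np.gradient(zy)     # second derivatives
    zxy, zxx = np.gradient(zx)
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
         + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)
    return K, H
```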
APA, Harvard, Vancouver, ISO, and other styles
36

Zhu, Ge, Li Na Lou, Xian Quan Wang, and Jing Luo. "Design for Pellet Measurement System Based on TCD1206." Applied Mechanics and Materials 40-41 (November 2010): 21–26. http://dx.doi.org/10.4028/www.scientific.net/amm.40-41.21.

Full text
Abstract:
The article proposes a new measuring technique suitable for pellet outer diameters: shading-based measurement. A high-quality laser is used as the light source, and a linear CCD image sensor (TCD1206) acts as the sensing device. The TCD1206 is driven without delay by a single-chip microcomputer, the pellet outer-diameter signal coming from the TCD1206 is binarized using a floating-threshold method, and the processed binary signal is finally sent back to the single-chip microcomputer and transferred to a PC through MAX232 serial communication for display. The whole process realizes real-time detection of the pellet outer diameter and gives the measurement system high precision.
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Rui Hu. "Automatic Red Blood Cell Classification for MICAD Based on PSO-CSVM." Applied Mechanics and Materials 121-126 (October 2011): 1952–56. http://dx.doi.org/10.4028/www.scientific.net/amm.121-126.1952.

Full text
Abstract:
The automatic classification of erythrocytes is critical to the clinical treatment of blood-related diseases in Medical Image Computer-Aided Diagnosis (MICAD). After the 3D height field is recovered from the varying shading, the depth map at each point of the surface is used to calculate Gaussian curvature and mean curvature, which are used to produce a surface-type label image. Accordingly, the surface is segmented into different parts through multi-scale bivariate polynomial function fitting. The counts of the different surface types are used to build a classifier for training and classifying the red blood cells by means of a support vector machine and particle swarm optimization. The experimental results show that this approach is easy to implement and promising.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Wei Dong, Xi Qing Zhao, and Wei Zheng. "The Application of Heuristic Method in Images Pseudo Coloring." Applied Mechanics and Materials 155-156 (February 2012): 805–9. http://dx.doi.org/10.4028/www.scientific.net/amm.155-156.805.

Full text
Abstract:
In this paper, sample images are analyzed through color-histogram recognition and adaptive modeling, and black-and-white images are given pseudo-color shading. The key step is the choice of different values in the R, G and B layers: a heuristic method, based on the analysis of the image histogram, assigns different colors to the characteristic grey-level ranges of the image. The implementation results show an improvement compared with other similar methods.
APA, Harvard, Vancouver, ISO, and other styles
39

Qiao, Zhi, and Takashi Kanai. "A GAN-based temporally stable shading model for fast animation of photorealistic hair." Computational Visual Media 7, no. 1 (January 18, 2021): 127–38. http://dx.doi.org/10.1007/s41095-020-0201-9.

Full text
Abstract:
We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image translation methods. The main idea is to extend the Cycle-GAN structure to avoid semitransparent hair appearance and to exactly reproduce the interaction of the lights with the scene. We use two constraints to ensure temporal coherence and highlight stability. Our approach outperforms and is computationally more efficient than previous methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Wu, Feng He, Jin Fen Wang, Jun Wang, and Yu Bo Ren. "3D Surface Reconstruction from Single 2D Image." Key Engineering Materials 522 (August 2012): 886–90. http://dx.doi.org/10.4028/www.scientific.net/kem.522.886.

Full text
Abstract:
Image-based 3D reconstruction is an important method of passively acquiring 3D data in computer vision. Aimed at the limitations of the traditional SFS-based 3D surface reconstruction method, a combined image-based 3D surface reconstruction method is given in this paper. First, a greyscale pre-processing step is applied to convert the input color image to a 256-level grey image; second, the shape-from-shading method is applied to recover the 3D profile of the object; finally, the B-spline technique is adopted to reconstruct the surface model and improve the resolution of the surface. The key techniques, such as the principle and algorithm of B-spline interpolation reconstruction, are analyzed, and experimental results are given. The single-image-based 3D reconstruction method given in this paper provides a helpful basis for the wider use of the SFS technique.
APA, Harvard, Vancouver, ISO, and other styles
41

Tatabhatla, Venkata Madhava Ram, Anshul Agarwal, and Tirupathiraju Kanumuri. "Chaotic baker map-based array reconfiguration in solar photo-voltaic systems under shading conditions." Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy 233, no. 5 (August 2019): 559–75. http://dx.doi.org/10.1177/0957650919857436.

Full text
Abstract:
The locations of shaded and unshaded panels affect the power generation of a solar photo-voltaic array, which not only disturbs the characteristic curves but also produces multiple maximum power peaks. In order to overcome these issues, panel reconfiguration based on the chaotic baker map technique, which employs a concept from image processing, is proposed. In this approach, the static position of a panel is changed without altering the electrical circuit, to minimize the power loss and enhance the maximum power. The proposed method is tested with different sizes of solar photo-voltaic arrays over a wide range of shading spectra on both software and hardware platforms. In both simulation and prototype results, the proposed approach augments the output power and minimizes the mismatch power loss by reducing the mismatch in row currents. The results of the proposed approach are compared with hitherto known configurations under various shading conditions, which supports the effectiveness of the proposed approach.
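The panel-relocation idea can be illustrated with one common discretisation of the generalised baker map, applied to the grid of panel indices; repeated application scatters neighbouring (and hence similarly shaded) panels across the array so that row currents even out. The exact discretisation varies between papers, so this is only a hedged sketch of the permutation, not the paper's reconfiguration scheme.

```python
import numpy as np

def baker_map_scramble(grid, partition):
    """One iteration of a discretised baker map on a square array.

    grid:      (N, N) array, e.g. the indices of PV panels.
    partition: integers n_1..n_k summing to N, each dividing N.
    Each vertical strip of width n_i is stretched horizontally and
    compressed vertically, giving a bijective shuffle of positions.
    """
    N = grid.shape[0]
    assert grid.shape == (N, N) and sum(partition) == N
    out = np.empty_like(grid)
    Ni = 0
    for ni in partition:
        q = N // ni                      # each n_i must divide N
        for r in range(Ni, Ni + ni):
            for s in range(N):
                new_c = q * (r - Ni) + s % q
                new_r = s // q + Ni
                out[new_r, new_c] = grid[r, s]
        Ni += ni
    return out
```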
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Joo-Rim, and Yang-Hee Nam. "Image based Shading Techniques for Surfaces with Irregular and Complex Textures Formed by Heterogeneous Materials." Journal of the Korea Contents Association 10, no. 1 (January 28, 2010): 1–9. http://dx.doi.org/10.5392/jkca.2010.10.1.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Senthilkumar, M., and PS Periasamy. "Contourlet transform and adaptive neuro-fuzzy strategy–based color image watermarking." Measurement and Control 53, no. 3-4 (January 6, 2020): 287–95. http://dx.doi.org/10.1177/0020294019878873.

Full text
Abstract:
In today's Internet era, the protection of digital content during transmission is an urgent need. Digital watermarking is a solution for copyright protection based on embedding authentication information into the content. In existing watermarking algorithms, the security of the watermarking process is relatively low. To increase robustness, an approach is presented that combines the contourlet transform with a neuro-fuzzy-based watermark embedding process. The loss during data recovery of conventional approaches is resolved by the proposed watermarking scheme, which also increases the security of the watermarked image. In the proposed color image watermarking scheme, a binary image is embedded into the color host image, using the contourlet and inverse contourlet transforms for preprocessing of the image and a neuro-fuzzy algorithm to embed the bits in the green plane of the image. After the watermarking process, the results are analyzed using quality assessment metrics such as Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE). The scheme is implemented using MATLAB R2013 software; the developed MATLAB code is converted into a Hardware Description Language (HDL) and then implemented on a Virtex-5 L110T Field Programmable Gate Array (FPGA) kit.
APA, Harvard, Vancouver, ISO, and other styles
44

Zheng, Jian Ming, Bian Guo, Ming Shun Yang, Yan Li, and X. N. Wang. "Roughness Detection Based on 3D Shape Reconstruction of Workpiece Surface Micro-Vision Image." Key Engineering Materials 455 (December 2010): 373–78. http://dx.doi.org/10.4028/www.scientific.net/kem.455.373.

Full text
Abstract:
In this paper, the shape from shading (SFS) method is adopted to detect the micro-topography and roughness parameters of a machined surface based on computer micro-vision. According to the reflection features of micro metal surfaces, the illumination model is improved by a weighted superposition of the diffuse component of the simplified Oren-Nayar model and the specular component of the Torrance-Sparrow model. The minimization-based calculation method of SFS is given based on the improved illumination model. The 3D topography reconstruction and roughness parameter detection of a turned surface are accomplished. The experimental results show that this method can rapidly realize the accurate detection of surface roughness parameters, and it provides new ideas and methods for on-line roughness detection in the machining process.
APA, Harvard, Vancouver, ISO, and other styles
45

Naidu, V. P. S. "Image Fusion Technique using Multi-resolution Singular Value Decomposition." Defence Science Journal 61, no. 5 (September 2, 2011): 479. http://dx.doi.org/10.14429/dsj.61.705.

Full text
Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 6" /> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 6" /> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 6" /> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6" /> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6" /> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 6" /> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 6" /> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6" /> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6" /> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6" /> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 6" /> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 6" /> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 6" /> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 6" /> <w:LsdException Locked="false" Priority="19" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis" /> <w:LsdException Locked="false" Priority="21" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis" /> <w:LsdException Locked="false" Priority="31" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference" /> <w:LsdException Locked="false" Priority="32" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Reference" /> <w:LsdException Locked="false" Priority="33" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Book Title" /> <w:LsdException Locked="false" Priority="37" Name="Bibliography" /> <w:LsdException Locked="false" Priority="39" QFormat="true" Name="TOC Heading" /> </w:LatentStyles> </xml><![endif]--><!--[if gte mso 10]> <mce:style><! /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} --> <!--[endif] --> <p class="MsoNormal" style="text-align: justify;"><span style="font-size: 10pt; font-family: &quot;Arial&quot;,&quot;sans-serif&quot;;">A novel image fusion technique based on multi-resolution singular value decomposition (MSVD) has been presented and evaluated. 
The performance of this algorithm is compared with that of well known image fusion technique using wavelets. It is observed that image fusion by MSVD perform almost similar to that of wavelets. It is computationally very simple and it could be well suited for real time applications. Moreover, MSVD does not have a fixed set of basis vectors like FFT, DCT and wavelet etc. and its basis vectors depend on the data set.</span></p><p class="MsoNormal" style="text-align: justify;"><span style="font-size: 10pt; font-family: &quot;Arial&quot;,&quot;sans-serif&quot;;"><strong>Defence Science Journal, 2011, 61(5), pp.479-484</strong><strong><strong>, DOI:http://dx.doi.org/10.14429/dsj.61.705</strong></strong><br /></span></p>
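For readers who want a feel for how such a fusion rule can operate, the following is a minimal, single-level sketch in Python/NumPy. It is not the authors' exact MSVD algorithm: the 2x2-block decomposition, the shared eigenvector basis built from the pooled scatter matrix, and the average/max-abs fusion rules are simplifying assumptions used purely for illustration.

```python
# Toy single-level SVD-based image fusion sketch (not the cited paper's exact
# MSVD algorithm). Assumes two registered grayscale images of equal, even size.
import numpy as np

def blocks_to_matrix(img):
    """Stack non-overlapping 2x2 blocks of img as 4x1 columns."""
    h, w = img.shape
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    return blocks.reshape(-1, 4).T                      # shape (4, h*w/4)

def matrix_to_blocks(X, shape):
    """Inverse of blocks_to_matrix."""
    h, w = shape
    blocks = X.T.reshape(h // 2, w // 2, 2, 2).transpose(0, 2, 1, 3)
    return blocks.reshape(h, w)

def fuse(im_a, im_b):
    Xa, Xb = blocks_to_matrix(im_a), blocks_to_matrix(im_b)
    # Common orthonormal basis from the pooled scatter matrix (a simplification).
    _, U = np.linalg.eigh(Xa @ Xa.T + Xb @ Xb.T)
    U = U[:, ::-1]                                       # largest eigenvalue first
    Ta, Tb = U.T @ Xa, U.T @ Xb
    Tf = np.empty_like(Ta)
    Tf[0] = 0.5 * (Ta[0] + Tb[0])                        # average the approximation row
    pick_a = np.abs(Ta[1:]) >= np.abs(Tb[1:])
    Tf[1:] = np.where(pick_a, Ta[1:], Tb[1:])            # max-abs rule for details
    return matrix_to_blocks(U @ Tf, im_a.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(fuse(rng.random((8, 8)), rng.random((8, 8))).shape)   # (8, 8)
```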
APA, Harvard, Vancouver, ISO, and other styles
46

Mallett, Ian, Cem Yuksel, and Larry Seiler. "Efficient Adaptive Deferred Shading with Hardware Scatter Tiles." Proceedings of the ACM on Computer Graphics and Interactive Techniques 3, no. 2 (August 26, 2020): 1–17. http://dx.doi.org/10.1145/3406184.

Full text
Abstract:
Adaptive shading is an effective mechanism for reducing the number of shaded pixels to a subset of the image resolution with minimal impact on final rendering quality. We present a new scheduling method based on on-chip tiles that, along with relatively minor modifications to the GPU architecture, provides efficient hardware support. As compared to software implementations on current hardware using compute shaders, our approach dramatically reduces memory bandwidth requirements, thereby significantly improving performance and energy use. We also introduce the concept of a fragment pre-shader for programmatically controlling when a fragment shader is invoked, and describe advanced techniques for utilizing our approach to further reduce the number of shaded pixels via temporal filtering, or to adjust rendering quality to maintain stable framerates.
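As a rough illustration of the general idea of shading only a subset of pixels and reconstructing the rest, here is a toy Python/NumPy sketch. The fixed checkerboard pattern and neighbour averaging are placeholder choices; the paper's contribution is a content-adaptive, tile-based hardware scheduler, which this sketch does not attempt to model.

```python
# Toy illustration of adaptive shading: shade ~50% of pixels and interpolate
# the rest. "shade()" is a hypothetical stand-in for an expensive shader.
import numpy as np

def shade(y, x):
    """Stand-in for an expensive per-fragment shader evaluation."""
    return 0.5 + 0.5 * np.sin(0.1 * x) * np.cos(0.1 * y)

def render_adaptive(height, width):
    img = np.zeros((height, width))
    mask = np.indices((height, width)).sum(axis=0) % 2 == 0   # checkerboard
    ys, xs = np.nonzero(mask)
    img[ys, xs] = shade(ys, xs)                               # shaded subset
    # Fill unshaded pixels from the average of their shaded 4-neighbours.
    padded = np.pad(img, 1, mode="edge")
    sums = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:])
    m = np.pad(mask.astype(float), 1, mode="edge")
    counts = m[:-2, 1:-1] + m[2:, 1:-1] + m[1:-1, :-2] + m[1:-1, 2:]
    img[~mask] = (sums / np.maximum(counts, 1))[~mask]
    return img

if __name__ == "__main__":
    frame = render_adaptive(64, 64)
    print(frame.shape, float(frame.min()), float(frame.max()))
```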
APA, Harvard, Vancouver, ISO, and other styles
47

Kips, Robin, Loïc Tran, Emmanuel Malherbe, and Matthieu Perrot. "Beyond Color Correction: Skin Color Estimation in the Wild through Deep Learning." Electronic Imaging 2020, no. 5 (January 26, 2020): 82–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.5.maap-060.

Full text
Abstract:
Estimating skin color from an uncontrolled facial image is a challenging task. Many factors such as illumination, camera, and shading variations directly affect the appearance of skin color in the image. Furthermore, using a color calibration target to correct the image pixels leads to a complex user experience. We propose a skin color estimation method for images in the wild, taken with an unknown camera, under unknown lighting, and without a calibration target. While prior methods relied on explicit intermediate steps of color correction of image pixels and skin region segmentation, we propose an end-to-end color regression model named LabNet, in which color correction and skin region segmentation are implicitly learnt by the model. Our method is based on a convolutional neural network trained on a dataset of smartphone images labeled with L*a*b* measurements of skin color. We compared our method with standard skin color estimation approaches and found that it outperforms these models while removing the need for a color calibration target.
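A minimal sketch of what an end-to-end L*a*b* regression network can look like in PyTorch is given below. The layer sizes, input resolution, and the name TinyLabNet are illustrative assumptions; they are not the authors' LabNet architecture or training setup.

```python
# Minimal CNN regressing (L*, a*, b*) from a face crop; an illustrative sketch,
# not the LabNet model from the cited paper.
import torch
import torch.nn as nn

class TinyLabNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 3)      # predicts (L*, a*, b*)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

if __name__ == "__main__":
    model = TinyLabNet()
    faces = torch.rand(4, 3, 128, 128)          # batch of uncalibrated face crops
    target_lab = torch.rand(4, 3)               # placeholder L*a*b* labels
    loss = nn.functional.mse_loss(model(faces), target_lab)
    loss.backward()
    print(float(loss))
```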
APA, Harvard, Vancouver, ISO, and other styles
48

Shi, Wei-Chao, Jian-Ming Zheng, Yan Li, and Xu-Bo Li. "Three-Dimensional Reconstruction Method for Machined Surface Topography Based on Gray Gradient Constraints." Applied Sciences 9, no. 3 (February 11, 2019): 591. http://dx.doi.org/10.3390/app9030591.

Full text
Abstract:
In the modern engineering field, recovering machined surface topography is important for studying mechanical product function and surface characteristics using shape from shading (SFS)-based reconstruction methods. However, because of their many constraints and tendency to oversmooth, existing SFS-based reconstruction methods are not well suited to machined surface topography. This paper presents a new three-dimensional (3D) reconstruction method for machined surface topography. By combining the basic principle of SFS with an analytic method, an analytic model of the surface gradient is established using the gray gradient as a constraint condition. By efficiently handling the effects of quantization errors and gray-scale ambiguity on reconstruction accuracy with a wavelet denoising algorithm and image processing techniques, the reconstruction algorithm is implemented for machined surface topography. Experimental results on synthetic images and machined surface topography images show that the proposed algorithm can accurately and efficiently recover the 3D shape of machined surface topography.
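The sketch below illustrates only two of the ingredients the abstract mentions, wavelet denoising and gray-gradient estimation, using NumPy and PyWavelets; the analytic surface-gradient model itself is not reproduced, and the wavelet family, decomposition level, and threshold are assumptions.

```python
# Wavelet denoising followed by gray-gradient estimation; a preprocessing sketch
# only, not the cited paper's full SFS reconstruction method.
import numpy as np
import pywt

def denoise_wavelet(img, wavelet="db2", level=2, thresh=0.04):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]
    for detail_level in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thresh, mode="soft")
                         for c in detail_level))
    return pywt.waverec2(out, wavelet)

def gray_gradients(img):
    """Central-difference gray gradients, usable as constraints on the surface gradient."""
    gy, gx = np.gradient(img)
    return gx, gy

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = np.clip(rng.normal(0.5, 0.05, (64, 64)), 0, 1)   # stand-in surface image
    gx, gy = gray_gradients(denoise_wavelet(img))
    print(gx.shape, gy.shape)
```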
APA, Harvard, Vancouver, ISO, and other styles
49

Zhao, Hui Huang, Yao Nan Wang, Ya Qi Sun, and Jian Zhen Chen. "Human Face Fast-3D-Reconstruction from a Single Image." Applied Mechanics and Materials 157-158 (February 2012): 1008–11. http://dx.doi.org/10.4028/www.scientific.net/amm.157-158.1008.

Full text
Abstract:
Human face three-dimensional (3D) reconstruction is a challenging problem. In this paper, we propose a fast 3D reconstruction method for human faces based on image processing of a single image. Shape from shading (SFS) is chosen to reconstruct the face. First, SFS theory is introduced; it has the advantage of fast 3D reconstruction and needs only a single image. Secondly, because noise greatly affects the 3D reconstruction result, the wavelet transform and wavelet packet transform are introduced and applied to image denoising. The experiments show that the wavelet-transform-based method produces better denoising results than the wavelet packet transform. Finally, a human face 3D reconstruction algorithm based on a single image is proposed. The experimental results show that a 3D face model can be reconstructed quickly by the proposed algorithm.
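A small PyWavelets sketch of the denoising comparison mentioned in the abstract is shown below, scoring a discrete wavelet transform against a wavelet packet transform by PSNR on a synthetic image; the wavelet family, threshold, and test image are placeholder assumptions, and the SFS reconstruction step is not included.

```python
# Comparing DWT-based and wavelet-packet-based soft-threshold denoising by PSNR;
# an illustrative sketch, not the cited paper's experimental setup.
import numpy as np
import pywt

def psnr(ref, test):
    return 10 * np.log10(1.0 / np.mean((ref - test) ** 2))

def denoise_dwt(img, wavelet="db2", level=2, thresh=0.08):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]] + [tuple(pywt.threshold(c, thresh, mode="soft") for c in d)
                         for d in coeffs[1:]]
    return pywt.waverec2(out, wavelet)

def denoise_wpt(img, wavelet="db2", level=2, thresh=0.08):
    wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet, maxlevel=level)
    for node in wp.get_level(level):
        if node.path != "a" * level:                    # keep the approximation band
            wp[node.path] = pywt.threshold(node.data, thresh, mode="soft")
    return wp.reconstruct(update=False)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    clean = np.fromfunction(lambda y, x: 0.5 + 0.4 * np.sin(x / 8), (64, 64))
    noisy = clean + rng.normal(0, 0.05, clean.shape)
    for name, fn in [("DWT", denoise_dwt), ("WPT", denoise_wpt)]:
        rec = fn(noisy)[: clean.shape[0], : clean.shape[1]]
        print(name, round(psnr(clean, rec), 2), "dB")
```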
APA, Harvard, Vancouver, ISO, and other styles
50

Łuszczuk, Krystyn, Mieczysław Goc, and Andrzej Łuszczuk. "Use of scangraphy for computer visualization of handwritten text shading." Issues of Forensic Science 300 (2018): 55–66. http://dx.doi.org/10.34836/pk.2018.300.4.

Full text
Abstract:
The article indicates the importance of computer programs as tools that support handwriting expertise and increase its objectivity. Some of these programs are based on computer scangraphy. Scangraphy is a handwriting examination method in which a text sample is treated as a raster image (a bitmap), i.e. a set of single pixels (dots). The bitmap pixel pattern can be examined in a variety of ways, as required. The article describes the first computer program to use scangraphy in handwriting examination.
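As a toy illustration of treating a writing sample as a bitmap and measuring shading from pixel values, the Python/NumPy sketch below computes a per-column ink-darkness profile on a synthetic stroke; it is not the program described in the article, and the ink threshold and test data are assumptions.

```python
# Toy pixel-level "shading" measurement on a scanned writing bitmap; an
# illustration of the idea only, not the scangraphy software from the article.
import numpy as np

def shading_profile(gray, ink_threshold=0.5):
    """Mean darkness (1 - gray value) of ink pixels in each pixel column."""
    ink = gray < ink_threshold                      # dark pixels count as ink
    darkness = np.where(ink, 1.0 - gray, 0.0)
    counts = np.maximum(ink.sum(axis=0), 1)
    return darkness.sum(axis=0) / counts

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    page = np.ones((40, 200))                                   # white background
    page[18:22, 10:190] = rng.uniform(0.1, 0.45, (4, 180))      # synthetic stroke
    profile = shading_profile(page)
    print(profile.shape, float(profile.max()))
```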
APA, Harvard, Vancouver, ISO, and other styles