Academic literature on the topic 'Image based shading'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Image based shading.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Image based shading"

1

Barnes, Nick, and Zhi-Qiang Liu. "Knowledge-Based Shape-from-Shading." International Journal of Pattern Recognition and Artificial Intelligence 13, no. 01 (February 1999): 1–23. http://dx.doi.org/10.1142/s0218001499000021.

Abstract:
In this paper, we study the problem of recovering approximate shape from the shading of a three-dimensional object in a single image when knowledge about the object is available. The application of knowledge-based methods to low-level image processing tasks will help overcome problems that arise from processing images using a pixel-based approach. Shape-from-shading has generally been approached by precognitive vision methods where a standard operator is applied to the image based on assumptions about the imaging process and generic properties of what appears. This paper explores some advantages of applying knowledge and hypotheses about what appears in the image. The knowledge and hypotheses used here come from domain knowledge and edge-matching. Specifically, we are able to find solutions to some problems that cannot be solved by other methods and gain advantages in terms of computation speed over similar approaches. Further, we can fully automate the derivation of the approximate shape of an object. This paper demonstrates the efficacy of using knowledge in the basic operation of an early vision operator, and so introduces a new paradigm for computer vision that may be applied to other early vision operators.
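For context, shape-from-shading methods build on the classical image-irradiance equation; the Lambertian form below, with albedo ρ and the light-source direction encoded by the gradients (p_s, q_s), is standard background and not the paper's specific knowledge-based formulation.

```latex
% Image-irradiance equation: observed brightness equals the reflectance map
% evaluated at the surface gradients p = dz/dx, q = dz/dy.
E(x,y) \;=\; R(p,q) \;=\; \rho\,
\frac{1 + p\,p_s + q\,q_s}
     {\sqrt{1 + p^2 + q^2}\,\sqrt{1 + p_s^2 + q_s^2}}
```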
2

Li, Ping, Hanqiu Sun, Jianbing Shen, and Chen Huang. "HDR Image Rerendering Using GPU-Based Processing." International Journal of Image and Graphics 12, no. 01 (January 2012): 1250007. http://dx.doi.org/10.1142/s0219467812500076.

Abstract:
One essential process in image rerendering is to replace the existing texture in the region of interest with other user-preferred textures, while preserving the shading and similar texture distortion. In this paper, we propose graphics processing unit (GPU)-accelerated high dynamic range (HDR) image rerendering using revisited non-local means (NLM) processing in parallel on the GPU-CUDA platform, to reproduce the realistic rendering of HDR images with retexturing and transparent/translucent effects. Our image-based approach, using a GPU-based pipeline in the gradient domain, provides efficient processing with easy-to-control image retexturing and special shading effects. The experimental results showed the efficiency and high-quality performance of our approach.
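As background for the 'revisited NLM processing' mentioned above, the textbook non-local means filter is summarised below; the patch neighbourhoods N_i and the smoothing parameter h follow the usual notation, and this is not the paper's GPU-CUDA variant.

```latex
% Non-local means: each pixel value v(i) is replaced by a weighted average of
% all pixels, with weights driven by patch similarity and normalised by Z(i).
NL[v](i) \;=\; \sum_{j} w(i,j)\,v(j),
\qquad
w(i,j) \;=\; \frac{1}{Z(i)}
\exp\!\left(-\frac{\lVert v(\mathcal{N}_i)-v(\mathcal{N}_j)\rVert_{2,a}^{2}}{h^{2}}\right)
```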
3

Vergne, Romain, Pascal Barla, Roland W. Fleming, and Xavier Granier. "Surface flows for image-based shading design." ACM Transactions on Graphics 31, no. 4 (August 5, 2012): 1–9. http://dx.doi.org/10.1145/2185520.2185590.

4

Gądek-Moszczak, Aneta, Leszek Wojnar, and Adam Piwowarczyk. "Comparison of Selected Shading Correction Methods." System Safety: Human - Technical Facility - Environment 1, no. 1 (March 1, 2019): 819–26. http://dx.doi.org/10.2478/czoto-2019-0105.

Abstract:
The shade effect is an image defect that is often invisible to human visual perception but may cause difficulties in proper image processing and object detection, especially when the aim is to detect and quantitatively analyse objects. Several correction methods exist in image processing systems or in the literature; however, some of them introduce unexpected changes in the images, which may interfere with the final quantitative analysis. To solve this problem, the authors propose a new method for shade correction based on simulating the image background with analytical methods that return pixel values representing smooth grey-level changes. A comparison of the correction results obtained with standard methods and with the proposed method is presented.
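A minimal sketch of the general idea, estimating a smooth analytical background and removing it, is given below in Python; the polynomial least-squares fit and the additive correction are illustrative assumptions, not the authors' exact analytical model.

```python
import numpy as np

def estimate_background(img, order=2):
    """Fit a smooth 2-D polynomial background to a grey-level image by
    least squares; removing it flattens slowly varying shading."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y = (xx / (w - 1)).ravel(), (yy / (h - 1)).ravel()
    # design matrix of monomials x^i * y^j with total degree <= order
    A = np.stack([x**i * y**j
                  for i in range(order + 1)
                  for j in range(order + 1 - i)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, img.astype(float).ravel(), rcond=None)
    return (A @ coeffs).reshape(h, w)

# corrected = img - estimate_background(img) + img.mean()   # additive shading model
```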
5

Tak, Yoon-Oh, Anjin Park, Janghoon Choi, Jonghyun Eom, Hyuk-Sang Kwon, and Joo Beom Eom. "Simple Shading Correction Method for Brightfield Whole Slide Imaging." Sensors 20, no. 11 (May 29, 2020): 3084. http://dx.doi.org/10.3390/s20113084.

Abstract:
Whole slide imaging (WSI) refers to the process of creating a high-resolution digital image of a whole slide. Since digital images are typically produced by stitching image sequences acquired from different fields of view, the visual quality of the images can be degraded owing to shading distortion, which produces black plaid patterns on the images. A shading correction method for brightfield WSI is presented, which is simple but robust not only against typical image artifacts caused by specks of dust and bubbles, but also against fixed-pattern noise, or spatial variations in pixel values under uniform illumination. The proposed method consists primarily of two steps. The first step constructs candidates of a shading distortion model from a stack of input image sequences. The second step selects the optimal model from the candidates. The proposed method was compared experimentally with two previous state-of-the-art methods, regularized energy minimization (CIDRE) and background and shading correction (BaSiC), and showed better correction scores, as smoothing operations and constraints were not imposed when estimating the shading distortion. The correction scores, averaged over 40 image collections, were as follows: proposed method, 0.39 ± 0.099; CIDRE method, 0.67 ± 0.047; BaSiC method, 0.55 ± 0.038. Based on the quantitative evaluations, we can confirm that the proposed method can correct not only shading distortion, but also fixed-pattern noise, compared with the two previous state-of-the-art methods.
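For reference, the flat-field model that shading-correction methods such as CIDRE and BaSiC build on is summarised below; this is the generic formulation, and the candidate-construction and model-selection steps of the proposed method are not reproduced here.

```latex
% Flat-field model: a measured tile I is the true signal T modulated by a
% multiplicative shading field S plus an additive dark-field term D.
I(x) \;=\; S(x)\,T(x) + D(x)
\;\;\Longrightarrow\;\;
\hat{T}(x) \;=\; \frac{I(x) - D(x)}{S(x)}
```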
6

Baslamisli, Anil S., Yang Liu, Sezer Karaoglu, and Theo Gevers. "Physics-based shading reconstruction for intrinsic image decomposition." Computer Vision and Image Understanding 205 (April 2021): 103183. http://dx.doi.org/10.1016/j.cviu.2021.103183.

7

Tian, Lei, Aiguo Song, Dapeng Chen, and Dejing Ni. "Haptic Display of Image Based on Multi-Feature Extraction." International Journal of Pattern Recognition and Artificial Intelligence 30, no. 08 (July 17, 2016): 1655023. http://dx.doi.org/10.1142/s0218001416550235.

Abstract:
Image feature extraction is one of the key technologies of image haptic display. In this paper, a multi-feature extraction method for the object in an image is proposed to improve image-based haptic perception. The multi-feature extraction includes contour shape extraction, pattern extraction, and detail texture extraction. First, we use an intrinsic decomposition method to decompose an image into a shading image and a reflectance image. The reflectance image describes the non-illumination-affected color patterns spread over the surface. Then, the shading image is used for contour shape and detail texture extraction. Contour shape extraction is based on a partial differential equation (PDE) to reconstruct a three-dimensional (3D) surface model in virtual environments, while detail texture extraction is based on a fractional differential method. Finally, the various features extracted above are haptically rendered by different methods. The experimental results show the effectiveness and potential of the proposed method for improving the human ability of haptic perception and recognition in virtual environments.
8

Abada, Lyes, and Saliha Aouat. "Tabu Search to Solve the Shape from Shading Ambiguity." International Journal on Artificial Intelligence Tools 24, no. 05 (October 2015): 1550035. http://dx.doi.org/10.1142/s0218213015500359.

Abstract:
The three-dimensional reconstruction of an object from a single gray-scale image is a basic problem in computer vision, known as Shape From Shading. It is considered ill-posed due to the ambiguity between the convexity and the concavity of the surface to be reconstructed. In this paper, a method for resolving the ambiguity based on the singular points of the image is proposed. It uses tabu search to determine the optimal solution in the solution space. The proposed method determines all 3D objects for an image in a short time. It has been tested on synthetic and real images and compared with another resolution method.
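For readers unfamiliar with the optimisation scheme, a generic tabu-search skeleton is sketched below in Python. The `initial`, `neighbours`, and `cost` arguments are hypothetical placeholders for a configuration of convex/concave labels at singular points and its evaluation; this is a textbook skeleton, not the paper's actual formulation.

```python
def tabu_search(initial, neighbours, cost, n_iter=200, tabu_size=20):
    """Generic tabu search: greedily move to the best non-tabu neighbour;
    a short-term memory of recent solutions lets the search escape local
    minima such as the convex/concave ambiguity described above."""
    current = best = initial
    tabu = [initial]
    for _ in range(n_iter):
        candidates = [s for s in neighbours(current) if s not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)
        if len(tabu) > tabu_size:             # bounded tabu list
            tabu.pop(0)
        if cost(current) < cost(best):
            best = current
    return best
```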
9

Xu, Jin, and Zhi Jie Shen. "Research of Lighting Technology Based on the OpenGL." Advanced Materials Research 588-589 (November 2012): 2113–16. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.2113.

Abstract:
The manipulation of lighting is a very important component of realistic image rendering, covering both the lighting model and shading. This paper first introduces the basic concepts, principles, and general programming approach of lighting with OpenGL; it then describes how to compute light intensity and shading interpolation by examining the reflection factors on object surfaces; finally, it provides a set of cases showing the effects achievable with different rendering techniques.
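The lighting referred to here is, in essence, the classical Phong model behind OpenGL's fixed-function lighting; the per-light form below is standard background (ambient, diffuse, and specular terms with shininess exponent α), not a derivation from the paper itself.

```latex
% Phong lighting per light source: ambient + diffuse + specular contributions.
I \;=\; k_a\,i_a \;+\; k_d\,(\mathbf{L}\cdot\mathbf{N})\,i_d
      \;+\; k_s\,(\mathbf{R}\cdot\mathbf{V})^{\alpha}\,i_s
```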
10

Okatani, Takayuki, and Koichiro Deguchi. "3D Shape Reconstruction from an Endoscope Image Based on Shading." Transactions of the Society of Instrument and Control Engineers 33, no. 10 (1997): 1035–42. http://dx.doi.org/10.9746/sicetr1965.33.1035.


Dissertations / Theses on the topic "Image based shading"

1

Lind, Fredrik, and Escalante Andrés Diaz. "Maximizing performance gain of Variable Rate Shading tier 2 while maintaining image quality : Using post processing effects to mask image degradation." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21868.

Abstract:
Background. Performance optimization is of great importance for games as it constrains the possibilities of content or complexity of systems. Modern games support high-resolution rendering, but higher resolutions require more pixels to be computed, and solutions are needed to reduce this workload. Currently used methods include uniformly lowering the shading rates across the whole screen to reduce the number of pixels needing computation. Variable Rate Shading is a new hardware-supported technique with several functionality tiers. Tier 1 is similar to previous methods in that it lowers the shading rate for the whole screen. Tier 2 supports screen space image shading. With tier 2 screen space image shading, various shading rates can be set across the screen, which gives developers the choice of where and when to set specific shading rates. Objectives. The aim of this thesis is to examine how close Variable Rate Shading tier 2 screen space shading can come to the performance gains of Variable Rate Shading tier 1 while trying to maintain an acceptable image quality with the help of commonly used post-processing effects. Methods. A lightweight scene is set up and Variable Rate Shading tier 2 methods are set to an acceptable image quality as a baseline. Evaluation of performance is done by measuring the times of specific passes required by and affected by Variable Rate Shading. Image quality is measured by capturing sequences of images with no Variable Rate Shading on as reference, then with Variable Rate Shading tier 1 and several methods with tier 2, to be compared with the Structural Similarity Index. Results. The highest measured performance gain from tier 2 was 28.0%. The result came from using edge detection to create the shading rate image at 3840x2160 resolution. This translates to 36.7% of the performance gains of tier 1 but with better image quality: an SSIM value of 0.960 against tier 1's 0.802, which correspond to good and poor image quality respectively. Conclusions. Variable Rate Shading tier 2 shows great potential in increasing performance while maintaining image quality, especially with edge detection. Post-processing effects are effective at maintaining a good image quality. Performance gains also scale well as they increase with higher resolutions.
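The image-quality scores quoted above are Structural Similarity Index values; the standard SSIM definition (local means μ, variances and covariance σ, and small stabilising constants C1, C2) is reproduced below for reference.

```latex
\mathrm{SSIM}(x,y) \;=\;
\frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}
     {(\mu_x^{2} + \mu_y^{2} + C_1)(\sigma_x^{2} + \sigma_y^{2} + C_2)}
```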
2

Johansson, Erik. "3D Reconstruction of Human Faces from Reflectance Fields." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2365.

Abstract:

Human viewers are extremely sensitive to the appearance of people's faces, which makes the rendering of realistic human faces a challenging problem. Techniques for doing this have continuously been invented and evolved for more than thirty years.

This thesis makes use of recent methods within the area of image based rendering, namely the acquisition of reflectance fields from human faces. The reflectance fields are used to synthesize and realistically render models of human faces.

A shape from shading technique, assuming that human skin adheres to the Phong model, has been used to estimate surface normals. Belief propagation in graphs has then been used to enforce integrability before reconstructing the surfaces. Finally, the additivity of light has been used to realistically render the models.

The resulting models closely resemble the subjects from which they were created, and can realistically be rendered from novel directions in any illumination environment.
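A minimal sketch of the reconstruction step, turning an estimated gradient field into a surface once integrability has been enforced, is shown below in Python; it uses the classical Frankot-Chellappa Fourier projection as an illustrative stand-in and is not the belief-propagation formulation used in the thesis.

```python
import numpy as np

def integrate_gradients(p, q):
    """Frankot-Chellappa integration: recover a height map z whose gradients
    best match the (possibly noisy) fields p = dz/dx and q = dz/dy by
    projecting them onto integrable Fourier basis functions."""
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                  # avoid division by zero at the DC term
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                      # absolute height offset is arbitrary
    return np.real(np.fft.ifft2(Z))
```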

3

Ang, Jason. "Offset Surface Light Fields." Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1100.

Abstract:
For producing realistic images, reflection is an important visual effect. Reflections of the environment are important not only for highly reflective objects, such as mirrors, but also for more common objects such as brushed metals and glossy plastics. Generating these reflections accurately at real-time rates for interactive applications, however, is a difficult problem. Previous works in this area have made assumptions that sacrifice accuracy in order to preserve interactivity. I will present an algorithm that tries to handle reflection accurately in the general case for real-time rendering. The algorithm uses a database of prerendered environment maps to render both the original object itself and an additional bidirectional reflection distribution function (BRDF). The algorithm performs image-based rendering in reflection space in order to achieve accurate results. It also uses graphics processing unit (GPU) features to accelerate rendering.
4

Huang, Tz-kuei, and 黃子魁. "Multi-Resolution-Based Shading/ Reflectance and Specularity/Diffusion Separation Using a Single Image." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/68824075818513182971.

Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
Academic year 96 (2007–2008)
Both shading and specular highlights are caused by lighting and produce troublesome effects for computer vision and image processing algorithms, such as segmentation, object tracking, recognition, and detection. Therefore, a system that separates them is required. In this paper, we use multi-resolution-based methods to solve the shading/specular problem using a single image. First, we discuss a method to separate shading and reflectance with a graphical model. Unlike previous methods of estimating shading images, our training data are generated from image sequences shot under different scenes and lightings. Training sets are generated by pairing original image patches with their illumination image patches at different scales, and their relationships are then modeled by a Markov network. During testing, belief propagation is used to search for the illumination patches that best describe the input image patches, both by considering shading consistency and by minimizing the total amount of edge derivatives. In the second part, we detect specularities by a method based solely on colors. We then down-sample until no specular components remain. When up-sampling, we update the specular pixels from the lower resolution and keep the diffuse pixels. Finally, a least-squares solution is used to correct our results.
5

Binyahib, Roba S. "Image-based Exploration of Iso-surfaces for Large Multi-Variable Datasets using Parameter Space." Thesis, 2013. http://hdl.handle.net/10754/292460.

Abstract:
With an increase in processing power, more complex simulations have resulted in larger data size, with higher resolution and more variables. Many techniques have been developed to help the user to visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are then pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces. We demonstrate the effectiveness of our approach over a multi-terabyte combustion dataset. We also illustrate the efficiency and accuracy of our technique by comparing our results with those from a more traditional visualization pipeline.
6

Chu, Chien-Hung, and 朱健宏. "Weighted-Map or Adaboost-Based Separation of Reflectance and Shading Using a Single Image." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/13709367240759393846.

Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
Academic year 96 (2007–2008)
Intrinsic image extraction has long been an important problem for computer vision applications. However, this problem is nontrivial because it is ill-posed: an input image is the product of its shading image and its reflectance image. Two methods for extracting intrinsic images from a single image are presented. The methods first convolve the input image with derivative filters. The pixels of the filtered images are then classified as shading-related or reflectance-related, under the assumption that each derivative is caused either by shading or by reflectance, but not both. We use a weighted-map method to separate a single color image and an AdaBoost-based method to separate a gray-scale image. Finally, the intrinsic images of the input image are reintegrated from the classification results of the filtered images. We use a synthetic image to demonstrate the results, and our approach also works on real image data in the experiments.
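The decomposition these derivative-classification methods rely on can be stated compactly; the log-domain form below is the common intrinsic-image model assumed by such approaches and is given as background, not as the thesis's specific weighted-map or AdaBoost classifiers.

```latex
% Intrinsic-image model: in the log domain the product becomes a sum, so each
% image derivative can be attributed to either shading or reflectance.
I(x,y) \;=\; S(x,y)\,R(x,y)
\;\;\Longrightarrow\;\;
\nabla\log I \;=\; \nabla\log S \;+\; \nabla\log R
```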
7

Nagoor, Omniah H. "Image-based Exploration of Large-Scale Pathline Fields." Thesis, 2014. http://hdl.handle.net/10754/321000.

Abstract:
While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.

Book chapters on the topic "Image based shading"

1

Lew, Michael S., Michel Chaudron, Nies Huijsmans, Alfred She, and Thomas S. Huang. "Convergence of model based shape from shading." In Image Analysis and Processing, 582–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63507-6_248.

2

Pommert, A., U. Tiede, G. Wiebecke, and K. H. Höhne. "Image Quality in Voxel-Based Surface Shading." In CAR’89 Computer Assisted Radiology / Computergestützte Radiologie, 737–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-52311-3_128.

3

Okazaki, Kozo, and Shinichi Tamura. "Spherical Shading Correction of Eye Fundus Image by Parabola Function." In Computer-Based Automation, 399–409. Boston, MA: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4684-7559-3_18.

4

Knops, Z. F., J. B. Antoine Maintz, M. A. Viergever, and J. P. W. Pluim. "Normalized Mutual Information Based PET-MR Registration Using K-Means Clustering and Shading Correction." In Biomedical Image Registration, 31–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39701-4_4.

5

Deprettere, E. F., and L. S. Shen. "A Parallel Image Rendering Algorithm and Architecture Based on Ray Tracing and Radiosity Shading." In Linear Algebra for Large Scale and Real-Time Applications, 91–108. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-015-8196-7_6.

6

Srivastava, Rajeev. "PDE-Based Image Processing." In Geographic Information Systems, 569–607. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch035.

Abstract:
This chapter describes the basic concepts of partial differential equations (PDEs) based image modelling and their applications to image restoration. The general basic concepts of partial differential equation (PDE)-based image modelling and processing techniques are discussed for image restoration problems. These techniques can also be used in the design and development of efficient tools for various image processing and vision related tasks such as restoration, enhancement, segmentation, registration, inpainting, shape from shading, 3D reconstruction of objects from multiple views, and many more. As a case study, the topic in consideration is oriented towards image restoration using PDEs formalism since image restoration is considered to be an important pre-processing task for 3D surface geometry, reconstruction, and many other applications. An image may be subjected to various types of noises during its acquisition leading to degraded quality of the image, and hence, the noise must be reduced. The noise may be additive or multiplicative in nature. Here, the PDE-based models for removal of both types of noises are discussed. As examples, some PDE-based schemes have been implemented and their comparative study with other existing techniques has also been presented.
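As one concrete, commonly cited instance of the PDE-based restoration this chapter surveys, a minimal Perona-Malik anisotropic-diffusion sketch is given below in Python; it is an illustrative example under the additive-noise setting, not the chapter's specific models.

```python
import numpy as np

def perona_malik(img, n_iter=50, kappa=20.0, lam=0.2):
    """Minimal Perona-Malik anisotropic diffusion: noise is smoothed while the
    edge-stopping function g(|grad u|) = exp(-(|grad u|/kappa)^2) slows
    diffusion across strong edges, preserving them."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u = u + lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```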
7

Srivastava, Rajeev. "PDE-Based Image Processing." In 3-D Surface Geometry and Reconstruction, 49–89. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0113-0.ch003.

Abstract:
This chapter describes the basic concepts of partial differential equations (PDEs) based image modelling and their applications to image restoration. The general basic concepts of partial differential equation (PDE)-based image modelling and processing techniques are discussed for image restoration problems. These techniques can also be used in the design and development of efficient tools for various image processing and vision related tasks such as restoration, enhancement, segmentation, registration, inpainting, shape from shading, 3D reconstruction of objects from multiple views, and many more. As a case study, the topic in consideration is oriented towards image restoration using PDEs formalism since image restoration is considered to be an important pre-processing task for 3D surface geometry, reconstruction, and many other applications. An image may be subjected to various types of noises during its acquisition leading to degraded quality of the image, and hence, the noise must be reduced. The noise may be additive or multiplicative in nature. Here, the PDE-based models for removal of both types of noises are discussed. As examples, some PDE-based schemes have been implemented and their comparative study with other existing techniques has also been presented.
8

Kashyap, Ramgopal, and Surendra Rahamatkar. "Healthcare Informatics Using Modern Image Processing Approaches." In Medical Data Security for Bioengineers, 254–77. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7952-6.ch013.

Abstract:
Medical image segmentation is the first step toward high-level image analysis, significantly reducing the complexity of content analysis of images. Local region-based active contours have several drawbacks: the segmentation results rely heavily on the initial shape selection, which is a very demanding task, and in some circumstances manual interaction is infeasible. To overcome these shortcomings, the proposed method performs unsupervised segmentation of the viewer's attention object in medical images with the help of a shading-boosting Harris detector and a center saliency map. Different techniques for taking the image data into account are investigated, and a previously used energy-based active contour method is presented that relies on the selection of high-confidence predictions to assign pseudo-labels automatically, with the aim of reducing manual annotations.
9

Cuevas, Erik, Daniel Zaldivar, Marco Perez-Cisneros, and Marco Block. "LVQ Neural Networks in Color Segmentation." In Soft Computing Methods for Practical Environment Solutions, 45–63. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-893-7.ch004.

Abstract:
Segmentation in color images is a complex and challenging task, in particular because of changes in light intensity caused by noise and shadowing. Most segmentation algorithms do not tolerate variations in the color hue corresponding to the same object. By means of Learning Vector Quantization (LVQ) networks, neighboring neurons are able to learn how to recognize close sections of the input space; neighboring neurons thus correspond to color regions illuminated in different ways. This chapter presents an image segmentator approach based on LVQ networks which treats the segmentation process as a color-based pixel classification. The segmentator operates directly upon the image pixels using the classification properties of the LVQ networks. The algorithm is effectively applied to process sampled images, showing its capacity to satisfactorily segment color despite remarkable illumination differences.
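A minimal LVQ1 training loop is sketched below in Python to make the classification idea concrete; the data layout (pixels as rows of a color-space matrix X with class labels y), the prototype initialisation, and all parameter values are illustrative assumptions rather than the chapter's implementation.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Minimal LVQ1: each training pixel pulls its nearest prototype toward it
    when their labels agree and pushes it away when they disagree."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(np.sum((W - x) ** 2, axis=1))  # best-matching unit
            step = lr * (x - W[k])
            W[k] += step if proto_labels[k] == label else -step
    return W

# Segmentation then assigns each pixel the label of its nearest prototype.
```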

Conference papers on the topic "Image based shading"

1

El-Melegy, Moumen. "Shading-Based Image Intrinsics: Derivation and Potential Applications." In 2006 International Conference on Computer Engineering and Systems. IEEE, 2006. http://dx.doi.org/10.1109/icces.2006.320457.

2

Bezerra, H., B. Feijo, and L. Velho. "An Image-Based Shading Pipeline for 2D Animation." In XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05). IEEE, 2005. http://dx.doi.org/10.1109/sibgrapi.2005.9.

3

Kong, Fan-Hui. "A New Method of Inspection Based on Shape from Shading." In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.292.

4

Wang, Yu, Charlie C. L. Wang, and Matthew M. F. Yuen. "3D “Micro’’-Geometry Modeling From Image Cues." In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/cie-34479.

Abstract:
This paper presents an approach to constructing the 3D microstructure of a surface from a single image. By integrating our technique with traditional CAD tools, the surface modeling results exhibit clear 'micro' surface structures. First, a multi-pyramid image segmentation algorithm is applied to the single image to extract the objective region. Our segmentation algorithm is an energy-based model, accelerated by the multi-pyramid scheme. We then use a modified shape from shading (SFS) model to construct the 3D surface described in the extracted image region. Our method is a global SFS approach, based on a new energy function that represents the shading difference between the reconstructed image (obtained from the constructed 3D object) and the input image.
5

Wang, Tingting, and Xiaoying Sun. "Electrostatic tactile rendering of image based on shape from shading." In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009900.

6

Tang, Lixin, Bin Xu, and Hanmin Shi. "Self-organizing shape from shading method based on hybrid reflection model." In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.751078.

7

Ge, Huayong, and Shujuan Fang. "Detecting image forgery using linear constraints based on shading and shadows." In 2015 International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS). IEEE, 2015. http://dx.doi.org/10.1109/iccss.2015.7281160.

8

Katayama, Akihiro, Yukio Sakagawa, Hiroyuki Yamamoto, and Hideyuki Tamura. "Shading and shadow casting in image-based rendering without geometric models." In ACM SIGGRAPH 99 Conference abstracts and applications. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/311625.312293.

9

Deng, Teng, Jianmin Zheng, Jianfei Cai, and Tat-Jen Cham. "SubdSH: Subdivision-based Spherical Harmonics Field for Real-time Shading-based Refinement under Challenging Unknown Illumination." In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698629.

10

Zhang, Li, and Chew-lim Tan. "Warped Document Image Restoration Using Shape-from-Shading and Physically-Based Modeling." In 2007 IEEE Workshop on Applications of Computer Vision (WAC V '07). IEEE, 2007. http://dx.doi.org/10.1109/wacv.2007.65.

