Contents
Selection of scholarly literature on the topic "Image based shading"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of recent articles, books, dissertations, reports, and other scholarly sources on the topic "Image based shading".
Next to every work in the list, the "Add to bibliography" option is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the publication as a PDF and read its online abstract whenever the relevant parameters are provided in the work's metadata.
Journal articles on the topic "Image based shading"
Barnes, Nick, and Zhi-Qiang Liu. "Knowledge-Based Shape-from-Shading". International Journal of Pattern Recognition and Artificial Intelligence 13, No. 01 (February 1999): 1–23. http://dx.doi.org/10.1142/s0218001499000021.
Li, Ping, Hanqiu Sun, Jianbing Shen, and Chen Huang. "HDR Image Rerendering Using GPU-Based Processing". International Journal of Image and Graphics 12, No. 01 (January 2012): 1250007. http://dx.doi.org/10.1142/s0219467812500076.
Vergne, Romain, Pascal Barla, Roland W. Fleming, and Xavier Granier. "Surface flows for image-based shading design". ACM Transactions on Graphics 31, No. 4 (August 5, 2012): 1–9. http://dx.doi.org/10.1145/2185520.2185590.
Gądek-Moszczak, Aneta, Leszek Wojnar, and Adam Piwowarczyk. "Comparison of Selected Shading Correction Methods". System Safety: Human - Technical Facility - Environment 1, No. 1 (March 1, 2019): 819–26. http://dx.doi.org/10.2478/czoto-2019-0105.
Tak, Yoon-Oh, Anjin Park, Janghoon Choi, Jonghyun Eom, Hyuk-Sang Kwon, and Joo Beom Eom. "Simple Shading Correction Method for Brightfield Whole Slide Imaging". Sensors 20, No. 11 (May 29, 2020): 3084. http://dx.doi.org/10.3390/s20113084.
Baslamisli, Anil S., Yang Liu, Sezer Karaoglu, and Theo Gevers. "Physics-based shading reconstruction for intrinsic image decomposition". Computer Vision and Image Understanding 205 (April 2021): 103183. http://dx.doi.org/10.1016/j.cviu.2021.103183.
Tian, Lei, Aiguo Song, Dapeng Chen, and Dejing Ni. "Haptic Display of Image Based on Multi-Feature Extraction". International Journal of Pattern Recognition and Artificial Intelligence 30, No. 08 (July 17, 2016): 1655023. http://dx.doi.org/10.1142/s0218001416550235.
Abada, Lyes, and Saliha Aouat. "Tabu Search to Solve the Shape from Shading Ambiguity". International Journal on Artificial Intelligence Tools 24, No. 05 (October 2015): 1550035. http://dx.doi.org/10.1142/s0218213015500359.
Xu, Jin, and Zhi Jie Shen. "Research of Lighting Technology Based on the OpenGL". Advanced Materials Research 588-589 (November 2012): 2113–16. http://dx.doi.org/10.4028/www.scientific.net/amr.588-589.2113.
Okatani, Takayuki, and Koichiro Deguchi. "3D Shape Reconstruction from an Endoscope Image Based on Shading". Transactions of the Society of Instrument and Control Engineers 33, No. 10 (1997): 1035–42. http://dx.doi.org/10.9746/sicetr1965.33.1035.
Dissertations on the topic "Image based shading"
Lind, Fredrik, and Escalante Andrés Diaz. "Maximizing performance gain of Variable Rate Shading tier 2 while maintaining image quality: Using post processing effects to mask image degradation". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21868.
Background. Performance optimization is very important for games because it can limit the possibilities for content or the complexity of systems. Modern games support rendering at high resolutions, but high resolutions require computations for more pixels, and solutions are needed to reduce the workload. Methods currently in use include, among others, uniformly lowering the shading rate across the entire screen to reduce the number of pixels that require computation. Variable Rate Shading is a new hardware-supported technique with several tiers of functionality. Tier 1 resembles earlier methods in that the shading rate is lowered uniformly across the whole screen. Tier 2 supports screen-space image-based shading, where shading rates can be varied across different parts of the screen, giving developers the freedom to decide where and when specific shading rates should be applied. Objectives. The aim of the thesis is to investigate how close Variable Rate Shading tier 2 screen-space image-based shading can come to the performance gains of Variable Rate Shading tier 1 while keeping image quality acceptable with the help of commonly used post-processing effects. Method. A simple scene was created, and the Variable Rate Shading tier 2 methods were tuned to an acceptable image quality as a baseline. Performance was evaluated by measuring the times of the specific passes that were required for and affected by Variable Rate Shading. Image quality was measured by saving image sequences with Variable Rate Shading disabled as reference images, then with Variable Rate Shading tier 1 and several tier 2 methods, and comparing them using the Structural Similarity Index. Results. The highest measured performance gain from tier 2 was 28.0%. This result came from using edge detection to create the shading-rate image, at a resolution of 3840x2160. It corresponds to 36.7% of the performance gain of tier 1, but with much better image quality: an SSIM value of 0.960 compared to 0.802 for tier 1, corresponding to good and poor image quality, respectively. Conclusions. Variable Rate Shading tier 2 shows great potential for performance gains with preserved image quality, especially with edge detection. Post-processing effects are effective at maintaining good image quality. The performance gains also scale well, increasing at higher resolutions.
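The abstract above measures image quality by comparing frames rendered without Variable Rate Shading against frames rendered with tier 1 or tier 2 enabled, using the Structural Similarity Index. As a rough illustration of that comparison (not code from the thesis), the following Python sketch computes SSIM between two captured frames; it assumes scikit-image is available, the frames are RGB screenshots, and the file names are hypothetical placeholders.

from skimage import io
from skimage.color import rgb2gray
from skimage.metrics import structural_similarity

# Hypothetical frame captures: one with VRS disabled (reference), one with VRS tier 2.
# The [..., :3] slice drops a possible alpha channel from PNG screenshots.
reference = rgb2gray(io.imread("frame_reference.png")[..., :3])
candidate = rgb2gray(io.imread("frame_vrs_tier2.png")[..., :3])

# rgb2gray returns floats in [0, 1], so the data range is 1.0.
score = structural_similarity(reference, candidate, data_range=1.0)
print(f"SSIM: {score:.3f}")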
Johansson, Erik. "3D Reconstruction of Human Faces from Reflectance Fields". Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2365.
Human viewers are extremely sensitive to the appearance of people's faces, which makes the rendering of realistic human faces a challenging problem. Techniques for doing this have continuously been invented and refined for more than thirty years.
This thesis makes use of recent methods within the area of image based rendering, namely the acquisition of reflectance fields from human faces. The reflectance fields are used to synthesize and realistically render models of human faces.
A shape from shading technique, assuming that human skin adheres to the Phong model, has been used to estimate surface normals. Belief propagation in graphs has then been used to enforce integrability before reconstructing the surfaces. Finally, the additivity of light has been used to realistically render the models.
The resulting models closely resemble the subjects from which they were created, and can realistically be rendered from novel directions in any illumination environment.
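The pipeline in this abstract estimates surface normals via shape from shading and then reconstructs a surface from them. As a simplified stand-in for the belief-propagation integrability step the thesis uses, the following Python/NumPy sketch recovers a height map from a normal field by least-squares integration in the Fourier domain (Frankot-Chellappa); it is only illustrative and is not the thesis code.

import numpy as np

def integrate_normals(normals):
    """normals: (H, W, 3) array of unit surface normals (nx, ny, nz) with nz > 0.
    Returns a height map obtained by least-squares integration of the gradient field."""
    nz = np.clip(normals[..., 2], 1e-3, None)      # guard against division by zero
    p = -normals[..., 0] / nz                      # dz/dx implied by the normals
    q = -normals[..., 1] / nz                      # dz/dy implied by the normals

    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi             # angular frequencies along x
    wy = np.fft.fftfreq(h) * 2 * np.pi             # angular frequencies along y
    u, v = np.meshgrid(wx, wy)

    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                              # avoid dividing the DC term by zero
    Z = (-1j * u * P - 1j * v * Q) / denom         # least-squares integrable surface
    Z[0, 0] = 0.0                                  # height is recovered up to a constant
    return np.real(np.fft.ifft2(Z))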
Ang, Jason. "Offset Surface Light Fields". Thesis, University of Waterloo, 2003. http://hdl.handle.net/10012/1100.
Der volle Inhalt der QuelleHuang, Tz-kuei, und 黃子魁. „Multi-Resolution-Based Shading/ Reflectance and Specularity/Diffusion Separation Using a Single Image“. Thesis, 2008. http://ndltd.ncl.edu.tw/handle/68824075818513182971.
National Cheng Kung University
Department of Computer Science and Information Engineering (master's and doctoral program)
96
Both shading and specular reflections are caused by lighting, and they produce troublesome effects for computer vision and image processing algorithms such as segmentation, object tracking, recognition, and detection. Developing a system to separate them is therefore desirable. In this work, we use multi-resolution-based methods to address the shading/specular separation problem using a single image. First, we discuss a method for separating shading and reflectance with a graphical model. Unlike previous methods for estimating shading images, our training data are generated from image sequences shot under different scenes and lightings. Training sets are generated by pairing original image patches with their illumination image patches at different scales, and their relationships are then modeled by a Markov network. During testing, belief propagation is used to search for the illumination patches that best describe the input image patches, both by enforcing shading consistency and by minimizing the total amount of edge derivatives. In the second part, we detect specular highlights with a method based solely on colors. We then down-sample the image until no specular components remain. During up-sampling, we update the specular pixels from the lower resolution while keeping the diffuse pixels. Finally, a least-squares solution is used to correct the results.
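As a rough illustration of the color-only specular detection idea mentioned above (not the algorithm of this thesis), the following Python sketch uses a common dichromatic-model heuristic: under roughly white illumination, the per-pixel minimum over the RGB channels is taken as a crude estimate of the specular contribution, and subtracting it yields a pseudo specular-free image. The threshold value is an illustrative assumption.

import numpy as np

def rough_specular_estimate(image, threshold=0.3):
    """image: (H, W, 3) float RGB in [0, 1].
    Returns (highlight mask, pseudo specular-free image)."""
    min_channel = image.min(axis=2)                    # crude per-pixel specular estimate
    specular_free = image - min_channel[..., None]     # pseudo specular-free image
    highlight_mask = min_channel > threshold           # candidate highlight pixels
    return highlight_mask, specular_free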
Binyahib, Roba S. "Image-based Exploration of Iso-surfaces for Large Multi-Variable Datasets using Parameter Space". Thesis, 2013. http://hdl.handle.net/10754/292460.
Chu, Chien-Hung, and 朱健宏. "Weighted-Map or Adaboost-Based Separation of Reflectance and Shading Using a Single Image". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/13709367240759393846.
National Cheng Kung University
Department of Computer Science and Information Engineering (master's and doctoral program)
96
Intrinsic image extraction has long been an important problem for computer vision applications. However, the problem is far from trivial because it is ill-posed: an input image is the product of its shading image and its reflectance image. Two methods for extracting intrinsic images from a single image are presented. The approach first convolves the input image with derivative filters. The pixels of the filtered images are then classified as shading-related or reflectance-related, under the assumption that each derivative is caused either by shading or by reflectance, but not by both. We use a weighted-map method to separate a single color image and an AdaBoost-based method to separate a gray-scale image. Finally, the intrinsic images of the input image are reintegrated from the classification results of the filtered images. We demonstrate the results on a synthetic image, and experiments show that our approach also works on real image data.
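The classification step described in this abstract can be illustrated with a simple Retinex-style heuristic (not the weighted-map or AdaBoost classifiers the thesis uses): gradients of the log intensity that coincide with a chromaticity change are attributed to reflectance, and the rest to shading. The threshold is an illustrative assumption, and the reintegration step is omitted; zeroing the gradients labelled as reflectance and reintegrating them (for example with a Poisson solver) would give an approximate shading image.

import numpy as np

def classify_gradients(rgb, chroma_thresh=0.02):
    """rgb: (H, W, 3) float image in (0, 1].
    Returns (gx, gy, refl_x, refl_y): log-intensity gradients and boolean maps
    marking which gradients are attributed to reflectance."""
    log_i = np.log(rgb.mean(axis=2) + 1e-6)                  # log intensity
    chroma = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-6)   # normalized rgb chromaticity

    gx = np.diff(log_i, axis=1, append=log_i[:, -1:])
    gy = np.diff(log_i, axis=0, append=log_i[-1:, :])
    cx = np.abs(np.diff(chroma, axis=1, append=chroma[:, -1:, :])).sum(axis=2)
    cy = np.abs(np.diff(chroma, axis=0, append=chroma[-1:, :, :])).sum(axis=2)

    # A gradient accompanied by a chromaticity change is labelled reflectance;
    # gradients with stable chromaticity are attributed to shading.
    refl_x = cx > chroma_thresh
    refl_y = cy > chroma_thresh
    return gx, gy, refl_x, refl_y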
Nagoor, Omniah H. "Image-based Exploration of Large-Scale Pathline Fields". Thesis, 2014. http://hdl.handle.net/10754/321000.
Book chapters on the topic "Image based shading"
Lew, Michael S., Michel Chaudron, Nies Huijsmans, Alfred She, and Thomas S. Huang. "Convergence of model based shape from shading". In Image Analysis and Processing, 582–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63507-6_248.
Pommert, A., U. Tiede, G. Wiebecke, and K. H. Höhne. "Image Quality in Voxel-Based Surface Shading". In CAR'89 Computer Assisted Radiology / Computergestützte Radiologie, 737–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989. http://dx.doi.org/10.1007/978-3-642-52311-3_128.
Okazaki, Kozo, and Shinichi Tamura. "Spherical Shading Correction of Eye Fundus Image by Parabola Function". In Computer-Based Automation, 399–409. Boston, MA: Springer US, 1985. http://dx.doi.org/10.1007/978-1-4684-7559-3_18.
Knops, Z. F., J. B. Antoine Maintz, M. A. Viergever, and J. P. W. Pluim. "Normalized Mutual Information Based PET-MR Registration Using K-Means Clustering and Shading Correction". In Biomedical Image Registration, 31–39. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39701-4_4.
Deprettere, E. F., and L. S. Shen. "A Parallel Image Rendering Algorithm and Architecture Based on Ray Tracing and Radiosity Shading". In Linear Algebra for Large Scale and Real-Time Applications, 91–108. Dordrecht: Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-015-8196-7_6.
Srivastava, Rajeev. "PDE-Based Image Processing". In Geographic Information Systems, 569–607. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch035.
Srivastava, Rajeev. "PDE-Based Image Processing". In 3-D Surface Geometry and Reconstruction, 49–89. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0113-0.ch003.
Kashyap, Ramgopal, and Surendra Rahamatkar. "Healthcare Informatics Using Modern Image Processing Approaches". In Medical Data Security for Bioengineers, 254–77. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7952-6.ch013.
Cuevas, Erik, Daniel Zaldivar, Marco Perez-Cisneros, and Marco Block. "LVQ Neural Networks in Color Segmentation". In Soft Computing Methods for Practical Environment Solutions, 45–63. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-61520-893-7.ch004.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Image based shading"
El-Melegy, Moumen. "Shading-Based Image Intrinsics: Derivation and Potential Applications". In 2006 International Conference on Computer Engineering and Systems. IEEE, 2006. http://dx.doi.org/10.1109/icces.2006.320457.
Bezerra, H., B. Feijo, and L. Velho. "An Image-Based Shading Pipeline for 2D Animation". In XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05). IEEE, 2005. http://dx.doi.org/10.1109/sibgrapi.2005.9.
Kong, Fan-Hui. "A New Method of Inspection Based on Shape from Shading". In 2008 Congress on Image and Signal Processing. IEEE, 2008. http://dx.doi.org/10.1109/cisp.2008.292.
Wang, Yu, Charlie C. L. Wang, and Matthew M. F. Yuen. "3D 'Micro'-Geometry Modeling From Image Cues". In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/cie-34479.
Wang, Tingting, and Xiaoying Sun. "Electrostatic tactile rendering of image based on shape from shading". In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009900.
Tang, Lixin, Bin Xu, and Hanmin Shi. "Self-organizing shape from shading method based on hybrid reflection model". In International Symposium on Multispectral Image Processing and Pattern Recognition, edited by S. J. Maybank, Mingyue Ding, F. Wahl, and Yaoting Zhu. SPIE, 2007. http://dx.doi.org/10.1117/12.751078.
Ge, Huayong, and Shujuan Fang. "Detecting image forgery using linear constraints based on shading and shadows". In 2015 International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS). IEEE, 2015. http://dx.doi.org/10.1109/iccss.2015.7281160.
Katayama, Akihiro, Yukio Sakagawa, Hiroyuki Yamamoto, and Hideyuki Tamura. "Shading and shadow casting in image-based rendering without geometric models". In ACM SIGGRAPH 99 Conference abstracts and applications. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/311625.312293.
Deng, Teng, Jianmin Zheng, Jianfei Cai, and Tat-Jen Cham. "SubdSH: Subdivision-based Spherical Harmonics Field for Real-time Shading-based Refinement under Challenging Unknown Illumination". In 2018 IEEE Visual Communications and Image Processing (VCIP). IEEE, 2018. http://dx.doi.org/10.1109/vcip.2018.8698629.
Zhang, Li, and Chew-lim Tan. "Warped Document Image Restoration Using Shape-from-Shading and Physically-Based Modeling". In 2007 IEEE Workshop on Applications of Computer Vision (WACV '07). IEEE, 2007. http://dx.doi.org/10.1109/wacv.2007.65.