Selection of scholarly literature on the topic "Multi-pass rendering"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Multi-pass rendering".

Next to each work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work if the relevant parameters are available in the metadata.

Journal articles on the topic "Multi-pass rendering"

1

Xie, Guo-Fu, and Wen-Cheng Wang. "Single-Pass Data Access for Multi-Fragment Effects Rendering on GPUs". Chinese Journal of Computers 34, no. 3 (May 19, 2011): 473–81. http://dx.doi.org/10.3724/sp.j.1016.2011.00473.

2

Luo, Dening, Yi Lin, and Jianwei Zhang. "GPU-based multi-slice per pass algorithm in interactive volume illumination rendering". Frontiers of Information Technology & Electronic Engineering 22, no. 8 (August 2021): 1092–103. http://dx.doi.org/10.1631/fitee.2000214.

3

Zhou, Guo, Dengming Zhu, Yi Wei, and Zhaoqi Wang. "Hybrid Transmittance Fitting for Rendering Transparency on the GPU". International Journal of Pattern Recognition and Artificial Intelligence 30, no. 05 (April 21, 2016): 1654004. http://dx.doi.org/10.1142/s0218001416540045.

Annotation:
In real-time rendering, transparency is an important multi-fragment effect for visualizing the structure of three-dimensional models. The per-pixel transmittance implicitly describes how light is attenuated as it travels through several transparent fragments. We present a hybrid approach that fits the transmittance using Heaviside step functions and trigonometric functions. The k fragments with the largest contribution are composited exactly, and the remaining ones are accurately compressed in a unified formulation. With a single geometry pass, fragments are sorted into a fixed-size array and overflowing ones are expanded into a truncated Fourier series. The transmittance is then reconstructed on the fly to modulate the surface color in another geometry pass. Our approach favors high scene complexity but operates in bounded memory without losing noticeable high-frequency detail. We demonstrate that, compared with a real-time A-buffer implementation and other approximate transparency techniques, it closely matches the image quality at a competitive frame rate.
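To make the hybrid idea concrete, here is a minimal Python sketch that composites the k largest-contribution fragments of a single pixel exactly and folds the remaining fragments into one order-independent term. It is only an illustration under stated assumptions: the fragment layout, the choice of k, and the crude tail approximation are mine, whereas the paper compresses the tail with Heaviside steps and a truncated Fourier series on the GPU.

```python
# Sketch of "k exact fragments + compressed tail" transparency in bounded memory.
import numpy as np

def composite_pixel(fragments, k=4, background=np.array([1.0, 1.0, 1.0])):
    """fragments: array of (depth, r, g, b, alpha) rows for one pixel."""
    frags = np.asarray(fragments, dtype=float)
    # Keep the k fragments with the largest contribution (here approximated by alpha)...
    order = np.argsort(-frags[:, 4])
    exact, tail = frags[order[:k]], frags[order[k:]]
    # ...and composite them exactly in front-to-back depth order.
    exact = exact[np.argsort(exact[:, 0])]
    color, transmittance = np.zeros(3), 1.0
    for depth, r, g, b, a in exact:
        color += transmittance * a * np.array([r, g, b])
        transmittance *= 1.0 - a
    # Compress the remaining fragments into a single order-independent term:
    # combined opacity and alpha-weighted average color (a crude stand-in for
    # the paper's Fourier-based transmittance compression).
    if len(tail):
        tail_alpha = 1.0 - np.prod(1.0 - tail[:, 4])
        tail_color = np.average(tail[:, 1:4], axis=0, weights=tail[:, 4])
        color += transmittance * tail_alpha * tail_color
        transmittance *= 1.0 - tail_alpha
    return color + transmittance * background

# Eight random transparent fragments covering one pixel.
rng = np.random.default_rng(0)
frags = np.column_stack([rng.random(8), rng.random((8, 3)), rng.uniform(0.1, 0.6, 8)])
print(composite_pixel(frags))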
4

Kubo, Hiroyuki, Yoshinori Dobashi, and Shigeo Morishima. "Curvature-Dependent Reflectance Function for Interactive Rendering of Subsurface Scattering". International Journal of Virtual Reality 10, no. 1 (January 1, 2011): 45–51. http://dx.doi.org/10.20870/ijvr.2011.10.1.2801.

Annotation:
Simulating subsurface scattering is one of the most effective ways to realistically synthesize translucent materials such as marble, milk, or human skin. We have developed a curvature-dependent reflectance function (CDRF) that mimics the presence of subsurface scattering. In this approach, we provide only a single parameter that represents the intensity of the incident light scattered in a translucent material. This parameter is not only obtained by curve-fitting to a simulated data set but can also be manipulated by an artist. Furthermore, the approach is easily implementable on a GPU and does not require any complicated pre-processing or multi-pass rendering, as is often the case in this area of research.
5

Thiel, F., S. Discher, R. Richter, and J. Döllner. "Interaction and Locomotion Techniques for the Exploration of Massive 3D Point Clouds in VR Environments". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4 (September 19, 2018): 623–30. http://dx.doi.org/10.5194/isprs-archives-xlii-4-623-2018.

Annotation:
Emerging virtual reality (VR) technology allows users to immersively explore digital 3D content on standard consumer hardware. Using in-situ or remote sensing technology, such content can be automatically derived from real-world sites. External-memory algorithms allow for the non-immersive exploration of the resulting 3D point clouds on a diverse set of devices with vastly different rendering capabilities. Applications for VR environments raise additional challenges for those algorithms, as they are highly sensitive to the visual artifacts that are typical for point cloud depictions (i.e., overdraw and underdraw) while simultaneously requiring higher frame rates (around 90 fps instead of 30–60 fps). We present a rendering system for the immersive exploration and inspection of massive 3D point clouds on state-of-the-art VR devices. Based on a multi-pass rendering pipeline, we combine point-based and image-based rendering techniques to simultaneously improve rendering performance and visual quality. A set of interaction and locomotion techniques allows users to inspect a 3D point cloud in detail, for example by measuring distances and areas or by scaling and rotating visualized data sets. All rendering, interaction, and locomotion techniques can be selected and configured dynamically, allowing the rendering system to be adapted to different use cases. Tests on data sets with up to 2.6 billion points show the feasibility and scalability of our approach.
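The following small Python/NumPy sketch illustrates the general two-pass structure mentioned in the abstract, with a point-based splatting pass followed by an image-based hole-filling pass. The scene, resolution, and fill rule are illustrative assumptions; this is not the authors' VR pipeline.

```python
# Two-pass point rendering sketch: splat points, then fill underdraw holes.
import numpy as np

W = H = 64
rng = np.random.default_rng(1)

# Pass 1 (point-based): project random points and keep the nearest splat per pixel.
points = rng.random((4000, 3))                 # x, y in [0, 1), z = depth
px = (points[:, 0] * W).astype(int)
py = (points[:, 1] * H).astype(int)
depth = np.full((H, W), np.inf)
color = np.zeros((H, W))
for x, y, z in zip(px, py, points[:, 2]):
    if z < depth[y, x]:                        # per-pixel depth test
        depth[y, x] = z
        color[y, x] = 1.0 - z                  # shade by depth

# Pass 2 (image-based): fill underdraw holes from the average of hit neighbors.
hit = np.isfinite(depth)
neighbor_sum = np.zeros((H, W))
neighbor_cnt = np.zeros((H, W))
for shift, axis in [(-1, 0), (1, 0), (-1, 1), (1, 1)]:
    neighbor_sum += np.roll(color, shift, axis=axis)
    neighbor_cnt += np.roll(hit.astype(float), shift, axis=axis)
holes = ~hit & (neighbor_cnt > 0)
filled = color.copy()
filled[holes] = neighbor_sum[holes] / neighbor_cnt[holes]
print(f"hole pixels before: {(~hit).sum()}, unfilled after one pass: {(~hit & ~holes).sum()}")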
6

Xu, Tianchen, Enhua Wu, Mo Chen, and Ming Xie. "Real-time Character Motion Effect Enhancement Based on Fluid Simulation". International Journal of Virtual Reality 11, no. 1 (January 1, 2012): 59–68. http://dx.doi.org/10.20870/ijvr.2012.11.1.2838.

Annotation:
In fast figure animation, motion blur is of crucial importance, especially when an artist wants to generate an exaggerated effect through figure motion. For a long time, animators have sought the answer in some form of image blending, whether implemented in hardware or software. In recent years, methods based on the 3D geometry of the moving figure with global illumination have gradually come into demand, as they can deliver relatively high-quality motion blur. However, the computational cost of those methods is very high, so real-time rendering is difficult to achieve. In this paper, a real-time motion effect based on a 3D geometric approach is proposed, in which a special effect along the motion trajectory, based on fluid simulation, is combined with volumetric motion blur. Furthermore, the motion trajectory is decomposed and multi-pass geometry rendering is employed to achieve geometry instancing for reuse. In this manner, redundant calculation in each frame is avoided and the limitations of trajectory generation are overcome. In the pipeline, we separate motion tracking from the fluid solution to flexibly support various fluid effects. The presented scheme makes use of parallel GPU geometry shading, aiming to guarantee high computational efficiency while delivering high-quality rendering. As a result, real-time rendering including the motion blur effect is achieved.
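As a rough illustration of the multi-pass accumulation idea, the Python toy below re-renders the same small geometry at samples along an assumed trajectory and blends the passes with decaying weights. It shows only the geometry-reuse aspect, not the paper's fluid-simulation effect or GPU geometry shading; the geometry, path, and weights are made-up for the example.

```python
# Toy multi-pass motion trail: re-render one shape along a decomposed trajectory.
import numpy as np

W = H = 96
geometry = np.array([[x, y] for x in range(-3, 4) for y in range(-3, 4)])   # small square "character"
trajectory = lambda t: np.array([20 + 60 * t, 48 + 20 * np.sin(6.28 * t)])  # assumed path

accum = np.zeros((H, W))
samples = np.linspace(0.0, 1.0, 12)            # decomposition of the trajectory
for i, t in enumerate(samples):
    weight = (i + 1) / len(samples)            # older passes fade out
    pts = (geometry + trajectory(t)).astype(int)
    ok = (pts[:, 0] >= 0) & (pts[:, 0] < W) & (pts[:, 1] >= 0) & (pts[:, 1] < H)
    accum[pts[ok, 1], pts[ok, 0]] += weight    # one "pass" per trajectory sample

accum = np.clip(accum / accum.max(), 0.0, 1.0)  # normalized trail image
print(accum.shape, float(accum.max()))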
7

Schuster, Kersten, Philip Trettner, and Leif Kobbelt. "High-Performance Image Filters via Sparse Approximations". Proceedings of the ACM on Computer Graphics and Interactive Techniques 3, no. 2 (August 26, 2020): 1–19. http://dx.doi.org/10.1145/3406182.

Annotation:
We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
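The Python sketch below shows the basic shape of such a multi-pass sparse filter: each pass is a weighted sum of a handful of shifted image samples, and the passes are chained. The tap positions and weights here are hand-picked assumptions that roughly emulate a blur; the paper instead optimizes them (with bilinearly interpolated sample positions on the GPU) under a performance budget.

```python
# Chained sparse passes: each pass = weighted sum of a few shifted samples.
import numpy as np

def sparse_pass(img, taps):
    """taps: list of (dy, dx, weight); one pass = one weighted sum of samples."""
    out = np.zeros_like(img)
    for dy, dx, w in taps:
        out += w * np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

# Five sparse samples per pass instead of a dense convolution kernel.
taps = [(0, 0, 0.4), (-1, 0, 0.15), (1, 0, 0.15), (0, -1, 0.15), (0, 1, 0.15)]

rng = np.random.default_rng(2)
image = rng.random((128, 128))
blurred = sparse_pass(sparse_pass(image, taps), taps)   # two chained passes
print(image.std(), blurred.std())                        # variance drops after blurring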
8

Karmiris-Obratański, Panagiotis, Nikolaos E. Karkalos, Rafał Kudelski, Emmanouil L. Papazoglou, and Angelos P. Markopoulos. "On the Effect of Multiple Passes on Kerf Characteristics and Efficiency of Abrasive Waterjet Cutting". Metals 11, no. 1 (January 1, 2021): 74. http://dx.doi.org/10.3390/met11010074.

Annotation:
Abrasive waterjet cutting is a well-established non-conventional technique for processing difficult-to-cut materials and rendering various complex geometries with high accuracy. However, as in every machining process, high efficiency and productivity are also required. For that reason, the present study investigates the effect of performing the machining process in multiple passes, and this approach is evaluated in terms of total depth of penetration, kerf width, kerf taper angle, mean material removal rate, and cutting efficiency. In the multi-pass cases, the passes are performed in the same direction with the traverse speed adjusted accordingly in order to keep the total machining time constant in each case. The experimental results show that the effect of multiple passes on the kerf characteristics, mean material removal rate, and cutting efficiency depends on the process conditions, especially regarding the depth of penetration, and that significantly higher efficiency can be achieved with the multi-pass cutting technique when appropriate process conditions are selected.
9

Sriganesh, Pranav, Rick Dehner, and Ahmet Selamet. "Silencer for high-frequency turbocharger compressor noise via an acoustic straightener". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 5 (August 1, 2021): 1744–55. http://dx.doi.org/10.3397/in-2021-1917.

Annotation:
Decades of successful research and development on automotive silencers for engine breathing systems have brought about significant reductions in emitted engine noise. A majority of this research has addressed airborne noise at relatively low frequencies, which typically involves plane wave propagation. However, with the increasing demand for downsized turbocharged engines in passenger cars, high-frequency compressor noise has become a challenge in engine induction systems. Elevated frequencies promote multi-dimensional wave propagation, at times rendering conventional silencer treatments ineffective because of the underlying assumption of one-dimensional wave propagation in their design. The present work focuses on developing a high-frequency silencer that targets tonal noise at the blade-pass frequency within the compressor inlet duct over a wide range of rotational speeds. The approach features a novel "acoustic straightener" that creates exclusively plane wave propagation near the silencing elements. An analytical treatment is combined with a three-dimensional acoustic finite element method to guide the early design process. The effects of mean flow and nonlinearities on the acoustics are then captured by three-dimensional computational fluid dynamics simulations. The configuration developed by the current computational effort will set the stage for further refinement through future experiments.
10

He, Peisong, Haoliang Li, Hongxia Wang, and Ruimei Zhang. "Detection of Computer Graphics Using Attention-Based Dual-Branch Convolutional Neural Network from Fused Color Components". Sensors 20, no. 17 (August 22, 2020): 4743. http://dx.doi.org/10.3390/s20174743.

Annotation:
With the development of 3D rendering techniques, people can easily create photorealistic computer graphics (CG) with advanced software, which is of great benefit to the video game and film industries. On the other hand, the abuse of CG images has threatened the integrity and authenticity of digital images. In the last decade, several CG detection methods have been proposed. However, existing methods cannot provide reliable detection results for CG images with small patch sizes and post-processing operations. To overcome this limitation, we propose an attention-based dual-branch convolutional neural network (AD-CNN) to extract robust representations from fused color components. In pre-processing, the raw RGB components and their blurred version, obtained with a Gaussian low-pass filter, are stacked together channel-wise as the input of the AD-CNN, which helps the network learn more generalized patterns. The proposed AD-CNN starts with a dual-branch structure in which two branches work in parallel and share an identical shallow CNN architecture, except that the first convolutional layer in each branch uses a different kernel size to exploit low-level forensic traces at multiple scales. The output features from the two branches are jointly optimized by an attention-based fusion module that automatically assigns asymmetric weights to the branches. Finally, the fused feature is fed into fully connected layers to obtain the final detection result. Comparative and self-analysis experiments demonstrate the better detection capability and robustness of the proposed method compared with other state-of-the-art methods under various experimental settings, especially for image patches of small size and with post-processing operations.
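For orientation, the following PyTorch sketch mirrors the dual-branch structure described in the abstract: two shallow branches with different first-layer kernel sizes, an attention module that weights the branches, and a channel-wise stacked RGB plus blurred-RGB input. Layer sizes, the blur stand-in, and the fusion details are illustrative assumptions, not the authors' AD-CNN.

```python
# Minimal dual-branch CNN with attention-based fusion (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, in_ch: int, first_kernel: int):
        super().__init__()
        pad = first_kernel // 2
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, first_kernel, padding=pad), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )

    def forward(self, x):
        return self.net(x).flatten(1)           # (N, 64) feature vector

class DualBranchDetector(nn.Module):
    def __init__(self, in_ch: int = 6):
        super().__init__()
        self.branch_a = Branch(in_ch, first_kernel=3)   # finer-scale traces
        self.branch_b = Branch(in_ch, first_kernel=7)   # coarser-scale traces
        self.attn = nn.Linear(128, 2)                   # weights for the two branches
        self.head = nn.Linear(64, 2)                    # CG vs. photographic logits

    def forward(self, x):
        fa, fb = self.branch_a(x), self.branch_b(x)
        w = F.softmax(self.attn(torch.cat([fa, fb], dim=1)), dim=1)  # (N, 2)
        fused = w[:, :1] * fa + w[:, 1:] * fb            # asymmetric branch weighting
        return self.head(fused)

# Input stacks the raw RGB patch with a blurred copy channel-wise (the average
# pooling below is a crude stand-in for the Gaussian low-pass filter).
patch = torch.rand(1, 3, 96, 96)
blurred = F.avg_pool2d(patch, 3, stride=1, padding=1)
logits = DualBranchDetector()(torch.cat([patch, blurred], dim=1))
print(logits.shape)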

Dissertations on the topic "Multi-pass rendering"

1

Bobuľa, Matej. "Neeuklidovské vykreslování ve VR". Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445563.

Annotation:
The main goal of this master's thesis is to research different approaches to rendering geometries and spaces in virtual reality and to learn more about the terms non-Euclidean geometry and non-Euclidean space, their origin, and the different principles used in the video game industry to simulate such geometries or spaces. Based on this research, an optimal API is selected for the implementation of such an application. The application is designed to run on desktop computers with the Microsoft Windows operating system. At its core, the application is a video game, and the main goal of the player is to successfully complete every level of the game. Each level is designed to represent some form of non-Euclidean geometry or space.

Book chapters on the topic "Multi-pass rendering"

1

"Multi-Pass Rendering with CINEWARE". In After Effects and Cinema 4D Lite, 241–56. Routledge, 2014. http://dx.doi.org/10.4324/9781315772325-8.


Conference papers on the topic "Multi-pass rendering"

1

Diefenbach, Paul J., and Norman I. Badler. "Multi-pass pipeline rendering". In Proceedings of the 1997 Symposium on Interactive 3D Graphics. New York, New York, USA: ACM Press, 1997. http://dx.doi.org/10.1145/253284.253308.

2

Schild, Jonas, Sven Seele, Jonas Fischer, and Maic Masuch. "Multi-pass rendering of stereoscopic video on consumer graphics cards". In Symposium on Interactive 3D Graphics and Games. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/1944745.1944789.

3

Fukushima, Norishige, Toshiaki Fujii, Yutaka Ishibashi, Tomohiro Yendo, and Masayuki Tanimoto. "Real-time free viewpoint image rendering by using fast multi-pass dynamic programming". In 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON 2010). IEEE, 2010. http://dx.doi.org/10.1109/3dtv.2010.5506337.

4

Yoon, Hee Chang, Ji Hye Oh, and Young Rag Do. "High color rendering index of remote-type white LEDs with multi-layered quantum dot-phosphor films and short-wavelength pass dichroic filters". In SPIE Optical Engineering + Applications, edited by Matthew H. Kane, Jianzhong Jiao, Nikolaus Dietz, and Jian-Jang Huang. SPIE, 2014. http://dx.doi.org/10.1117/12.2061511.
