To view other types of publications on this topic, follow the link: Performance rendering.

Journal articles on the topic "Performance rendering"

Browse the top 50 journal articles for your research on the topic "Performance rendering".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Akeley, Kurt, and Tom Jermoluk. "High-performance polygon rendering." ACM SIGGRAPH Computer Graphics 22, no. 4 (August 1988): 239–46. http://dx.doi.org/10.1145/378456.378516.

2

Silva, C. T., A. E. Kaufman, and C. Pavlakos. "PVR: high-performance volume rendering." IEEE Computational Science and Engineering 3, no. 4 (1996): 18–28. http://dx.doi.org/10.1109/99.556509.

3

Yin, Sai. "Explore the Use of Computer-Aided Design in the Landscape Renderings." Applied Mechanics and Materials 687-691 (November 2014): 1166–69. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1166.

Abstract:
Landscape renderings present the visual effect of a landscape project and are an important form of program scrutiny. Producing them by means of computer-aided design has become the mainstream of the industry, as it offers expressive visualization, a reversible process, and diversified visual effects. Computer rendering comprises three stages, namely modeling, rendering, and image processing, each supported by the corresponding professional graphics software; "3DSMAX + VRAY + PHOTOSHOP" is a typical workflow for producing landscape renderings.
4

Mazuryk, Tomasz, Dieter Schmalstieg, and Michael Gervautz. "Zoom Rendering: Improving 3-D Rendering Performance with 2-D Operations." International Journal of Virtual Reality 2, no. 3 (January 1, 1996): 15–35. http://dx.doi.org/10.20870/ijvr.1996.2.3.2611.

Abstract:
We propose a new method to accelerate rendering during the interactive visualization of complex scenes. The method is applicable if the cost of per-pixel processing is high compared to simple frame buffer transfer operations, as supported by low-end graphics systems with high-performance CPUs or 2-D graphics accelerators. In this case rendering an appropriately down-scaled image and then enlarging it allows a trade-off of rendering speed and image quality. Using this method, more uniform frame rates can be achieved and the dynamic viewing error can be reduced.
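The trade-off described in the abstract above (render at a reduced resolution, then enlarge the result) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy renderer, the scale factor, and nearest-neighbour enlargement by pixel replication are all illustrative assumptions.

```python
import numpy as np

def zoom_render(render_fn, full_res, scale):
    """Render at a reduced resolution, then enlarge to full resolution.

    render_fn(h, w) stands in for an expensive per-pixel renderer;
    nearest-neighbour enlargement is used for simplicity.
    """
    h, w = full_res
    small = render_fn(h // scale, w // scale)   # cheaper: fewer pixels shaded
    # Enlarge by pixel replication (a simple frame-buffer transfer operation).
    return np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)

# A toy "renderer": gradient image whose cost is proportional to pixel count.
def toy_renderer(h, w):
    y, x = np.mgrid[0:h, 0:w]
    return (x + y).astype(float)

full = toy_renderer(480, 640)                       # reference: shade every pixel
approx = zoom_render(toy_renderer, (480, 640), 4)   # shade only 1/16 of the pixels
print(approx.shape)  # (480, 640)
```

With a scale factor of 4 only one sixteenth of the pixels are shaded, which is where the speed-versus-quality trade-off comes from.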
5

Huang, Kejia, Chenliang Wang, Shaohua Wang, Runying Liu, Guoxiong Chen, and Xianglong Li. "An Efficient, Platform-Independent Map Rendering Framework for Mobile Augmented Reality." ISPRS International Journal of Geo-Information 10, no. 9 (September 8, 2021): 593. http://dx.doi.org/10.3390/ijgi10090593.

Abstract:
With the extensive application of big spatial data and the emergence of spatial computing, augmented reality (AR) map rendering has attracted significant attention. A common issue in existing solutions is that AR-GIS systems rely on different platform-specific graphics libraries on different operating systems, and rendering implementations can vary across various platforms. This causes performance degradation and rendering styles that are not consistent across environments. However, high-performance rendering consistency across devices is critical in AR-GIS, especially for edge collaborative computing. In this paper, we present a high-performance, platform-independent AR-GIS rendering engine: the augmented reality universal graphics library (AUGL) engine. A unified cross-platform interface is proposed to preserve AR-GIS rendering style consistency across platforms. High-performance AR-GIS map symbol drawing models are defined and implemented based on a unified algorithm interface. We also develop a pre-caching strategy, optimized spatial-index querying, and a GPU-accelerated vector drawing algorithm that minimizes IO latency throughout the rendering process. Comparisons to existing AR-GIS visualization engines indicate that the performance of the AUGL engine is two times higher than that of the AR-GIS rendering engine on the Android, iOS, and Vuforia platforms. The drawing efficiency for vector polygons is improved significantly. The rendering performance is more than three times better than the average performances of existing Android and iOS systems.
6

Li, Hongle, and Seongki Kim. "Mesh Simplification Algorithms for Rendering Performance." International Journal of Engineering Research and Technology 13, no. 6 (June 30, 2020): 1110. http://dx.doi.org/10.37624/ijert/13.6.2020.1110-1119.

7

Li, Wang, Guan, Xie, Huang, Wen, and Zhou. "A High-performance Cross-platform Map Rendering Engine for Mobile Geographic Information System (GIS)." ISPRS International Journal of Geo-Information 8, no. 10 (September 20, 2019): 427. http://dx.doi.org/10.3390/ijgi8100427.

Abstract:
With the diversification of terminal equipment and operating systems, higher requirements are placed on the rendering performance of maps. The traditional map rendering engine relies on the graphics library of the corresponding operating system, which leads to problems such as the inability to work across operating systems, low rendering performance, and inconsistent rendering styles. With the development of hardware, graphics processing units (GPUs) appear on various platforms. How to use GPU hardware to improve map rendering performance has become a critical challenge. In order to address the above problems, this study proposes a cross-platform, high-performance map rendering engine (Graphics Library engine, GL engine), which uses mask drawing technology and texture-dictionary text rendering technology. It can be used on different hardware platforms and different operating systems based on the OpenGL graphics library. The high-performance map rendering engine maintains a consistent map rendering style on different platforms. The results of the benchmark experiments show that the performance of GL engine is 1.75 times and 1.54 times better than that of the general map rendering engine in the iOS system and the Android system, respectively, and its rendering performance for vector tiles is 11.89 times and 9.52 times better than Mapbox rendering in the iOS system and the Android system, respectively.
8

Hong, Zhi Guo, Yong Bin Wang, and Min Yong Shi. "A Performance-Based Policy for Job Assignment under Distributed Rendering Environments." Applied Mechanics and Materials 543-547 (March 2014): 2949–52. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2949.

Abstract:
Based on the characteristics of the rendering phase, a performance-based policy for job assignment under Distributed Rendering Environments (DREs) is proposed. Firstly, a method of evaluating rendering nodes is designed, taking CPU ratio, RAM size, CPU occupation, and RAM usage into account. Furthermore, the corresponding job-assignment algorithm for dividing rendering tasks into sub-tasks across the available rendering nodes is presented. What is more, the algorithm's correctness is validated by three groups of experiments, which offers a quantitative guide for optimal job assignment in rendering.
9

Zhang, Qiongxian. "Advanced techniques and high-performance computing optimization for real-time rendering." Applied and Computational Engineering 90, no. 1 (August 27, 2024): 14–19. http://dx.doi.org/10.54254/2755-2721/90/2024melb0061.

Abstract:
Real-time rendering is a cornerstone of modern interactive media, enabling the creation of immersive and dynamic visual experiences. This paper explores advanced techniques and high-performance computing (HPC) optimization in real-time rendering, focusing on the use of game engines like Unity and Unreal Engine. It delves into mathematical models and algorithms that enhance rendering performance and visual quality, including Level of Detail (LOD) management, occlusion culling, and shader optimization. The study also examines the impact of GPU acceleration, parallel processing, and compute shaders on rendering efficiency. Furthermore, the paper discusses the integration of ray tracing, global illumination, and temporal rendering techniques, and addresses the challenges of balancing quality and performance, particularly in virtual and augmented reality applications. The future role of artificial intelligence and machine learning in optimizing real-time rendering pipelines is also considered. By providing a comprehensive overview of current methodologies and identifying key areas for future research, this paper aims to contribute to the ongoing advancement of real-time rendering technologies.
10

Wang, Peng, and Zhibin Yu. "RenderBench: The CPU Rendering Benchmark Suite Based on Microarchitecture-Independent Characteristics." Electronics 12, no. 19 (October 6, 2023): 4153. http://dx.doi.org/10.3390/electronics12194153.

Abstract:
This research addresses the issue of evaluating CPU rendering performance by introducing the innovative benchmark test suite construction method RenderBench. This method combines CPU microarchitecture features with rendering task characteristics to comprehensively assess CPU performance across various rendering tasks. Adhering to principles of representativeness and comprehensiveness, the constructed benchmark test suite encompasses diverse rendering tasks and scenarios, ensuring accurate capture of CPU performance features. Through data sampling and in-depth analysis, this study focuses on the role of microarchitecture-independent features in rendering programs, including instruction-level parallelism, instruction mix, branch prediction capability, register dependency distance, data flow stride, and memory reuse distance. The research findings reveal significant variations in rendering programs across these features. For instance, in terms of instruction-level parallelism, rendering programs demonstrate a high level of ILP (instruction-level parallelism), with an average value of 5.70 for ILP256, surpassing benchmarks such as Mibench and NAS Parallel Benchmark. Furthermore, in aspects such as instruction mix, branch prediction capability, register dependency distance, data flow stride, and memory reuse distance, rendering programs exhibit distinct characteristics. Through the application of the RenderBench method, a scalable and highly representative benchmark test suite was constructed, facilitating an in-depth exploration of CPU performance bottlenecks in rendering tasks. By delving into microarchitecture-independent features, this study provides profound insights into rendering program performance, offering valuable guidance for optimizing CPU rendering performance. 
The application of ensemble learning models, such as random forest, XGBoost, and ExtraTrees, reveals the significant influence of features like floating-point computation, memory access patterns, and register usage on CPU rendering program performance. These insights not only offer robust guidance for performance optimization but also underscore the importance of feature selection and algorithm choice. In summary, the results of feature importance ranking in this study provide beneficial directions and deep insights for the optimization and enhancement of CPU rendering program performance. These findings are poised to exert a positive impact on future research and development endeavors.
11

Taguti, Tomoyasu. "Rendering piano music via a performance model." Journal of the Acoustical Society of America 84, S1 (November 1988): S105. http://dx.doi.org/10.1121/1.2025652.

12

Abdellah, Marwan, Ayman Eldeib, and Amr Sharawi. "High Performance GPU-Based Fourier Volume Rendering." International Journal of Biomedical Imaging 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/590727.

Abstract:
Fourier volume rendering (FVR) is a significant visualization technique that has been used widely in digital radiography. As a result of its O(N² log N) time complexity, it provides a faster alternative to spatial-domain volume rendering algorithms, which are O(N³) computationally complex. Relying on the Fourier projection-slice theorem, this technique operates on the spectral representation of a 3D volume instead of processing its spatial representation to generate attenuation-only projections that look like X-ray radiographs. Due to the rapid evolution of its underlying architecture, the graphics processing unit (GPU) became an attractive, competent platform that can deliver giant computational raw power compared to the central processing unit (CPU) on a per-dollar basis. The introduction of the compute unified device architecture (CUDA) technology enables embarrassingly parallel algorithms to run efficiently on CUDA-capable GPU architectures. In this work, a high-performance GPU-accelerated implementation of the FVR pipeline on CUDA-enabled GPUs is presented. This proposed implementation can achieve a speed-up of 117x compared to a single-threaded hybrid implementation that uses the CPU and GPU together, by taking advantage of executing the rendering pipeline entirely on recent GPU architectures.
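The Fourier projection-slice theorem on which FVR rests can be checked numerically in 2D: the 1D Fourier transform of a sum-projection of an image equals the corresponding central slice of the image's 2D Fourier transform. A small NumPy sketch (a toy random image, not the paper's GPU pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # toy 2D "volume" slice

# Spatial-domain projection: integrate along one axis (an attenuation-only
# projection, as in an X-ray radiograph).
projection = img.sum(axis=1)

# Frequency domain: the central slice (v = 0) of the 2D FFT.
central_slice = np.fft.fft2(img)[:, 0]

# Projection-slice theorem: the FFT of the projection equals the central slice.
assert np.allclose(np.fft.fft(projection), central_slice)
```

This identity is why FVR can produce a projection via one 1D/2D inverse transform of a frequency-domain slice instead of integrating through the whole volume.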
13

Laine, Samuli, Janne Hellsten, Tero Karras, Yeongho Seol, Jaakko Lehtinen, and Timo Aila. "Modular primitives for high-performance differentiable rendering." ACM Transactions on Graphics 39, no. 6 (November 26, 2020): 1–14. http://dx.doi.org/10.1145/3414685.3417861.

14

Mansuan, Melki Sadekh, and Benfano Soewito. "Distributed Rendering Based on Fine-Grained and Coarse-Grained Strategy to Speed up Time and Increase Efficiency of Rendering Process." ComTech: Computer, Mathematics and Engineering Applications 10, no. 1 (June 30, 2019): 15. http://dx.doi.org/10.21512/comtech.v10i1.5067.

Abstract:
The purpose of this research was to solve several problems in the rendering process, such as slow rendering time and complex calculations, which cause inefficient rendering. This research analyzed efficiency in the rendering process. It was an experimental study implementing a distributed rendering system with fine-grained and coarse-grained parallel decomposition in a computer laboratory. The primary data used were the rendering times obtained from the rendering of a three-scene animation. A descriptive analysis method was used to compare performance using the speedup and efficiency metrics of parallel performance. The results show that the distributed rendering method succeeds in increasing rendering speed, with a speedup value of 9.43. Moreover, the efficiency of processor use is 94% when the method is applied to solve the problem of slow rendering time in the rendering process.
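The figures reported above are consistent with the standard parallel-performance metrics, speedup S = T_serial / T_parallel and efficiency E = S / p. A minimal check, assuming ten rendering nodes (the node count is our assumption; the abstract states only S and E):

```python
def speedup(t_serial, t_parallel):
    """Ratio of single-node time to distributed time."""
    return t_serial / t_parallel

def efficiency(s, workers):
    """Fraction of ideal linear speedup actually achieved."""
    return s / workers

s = 9.43   # speedup reported in the abstract
p = 10     # assumed number of rendering nodes
print(round(efficiency(s, p), 2))  # 0.94 -> the reported 94%
```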
15

Wang, Peng, and Zhibin Yu. "RayBench: An Advanced NVIDIA-Centric GPU Rendering Benchmark Suite for Optimal Performance Analysis." Electronics 12, no. 19 (October 2, 2023): 4124. http://dx.doi.org/10.3390/electronics12194124.

Abstract:
This study aims to collect GPU rendering programs and analyze their characteristics to construct a benchmark dataset that reflects the characteristics of GPU rendering programs, providing a reference basis for designing the next generation of graphics processors. The research framework includes four parts: GPU rendering program integration, data collection, program analysis, and similarity analysis. In the program integration and data collection phase, 1000 GPU rendering programs were collected from open-source repositories, and 100 representative programs were selected as the initial benchmark dataset. The program analysis phase involves instruction-level, thread-level, and memory-level analysis, as well as five machine learning algorithms for importance ranking. Finally, through Pearson similarity analysis, rendering programs with high similarity were eliminated, and the final GPU rendering program benchmark dataset was selected based on the benchmark’s comprehensiveness and representativeness. The experimental results of this study show that, due to the need to load and process texture and geometry data in rendering programs, the average global memory access efficiency is generally lower compared to the averages of the Rodinia and Parboil benchmarks. The GPU occupancy rate is related to the computationally intensive tasks of rendering programs. The efficiency of stream processor execution and thread bundle execution is influenced by branch statements and conditional judgments. Common operations such as lighting calculations and texture sampling in rendering programs require branch judgments, which reduce the execution efficiency. Bandwidth utilization is improved because rendering programs reduce frequent memory access and data transfer to the main memory through data caching and reuse. Furthermore, this study used multiple machine learning methods to rank the importance of 160 characteristics of 100 rendering programs on four different NVIDIA GPUs. 
Different methods demonstrate robustness and stability when facing different data distributions and characteristic relationships. By comparing the results of multiple methods, biases inherent to individual methods can be reduced, thus enhancing the reliability of the results. The contribution of this study lies in the analysis of workload characteristics of rendering programs, enabling targeted performance optimization to improve the efficiency and quality of rendering programs. By comprehensively collecting GPU rendering program data and performing characteristic analysis and importance ranking using machine learning methods, reliable reference guidelines are provided for GPU design. This is of significant importance in driving the development of rendering technology.
16

Chandran, Prashanth, Sebastian Winberg, Gaspard Zoss, Jérémy Riviere, Markus Gross, Paulo Gotardo, and Derek Bradley. "Rendering with style." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–14. http://dx.doi.org/10.1145/3478513.3480509.

Abstract:
For several decades, researchers have been advancing techniques for creating and rendering 3D digital faces, where a lot of the effort has gone into geometry and appearance capture, modeling and rendering techniques. This body of research work has largely focused on facial skin, with much less attention devoted to peripheral components like hair, eyes and the interior of the mouth. As a result, even with the best technology for facial capture and rendering, in most high-end productions a lot of artist time is still spent modeling the missing components and fine-tuning the rendering parameters to combine everything into photo-real digital renders. In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but is completed naturally with synthesized hair, eyes, inner mouth and surroundings. Notably, we present the first method for multi-frame consistent projection into this latent space, allowing photo-realistic rendering and preservation of the identity of the digital human over an animated performance sequence, which can depict different expressions, lighting conditions and viewpoints. 
Our method can be used in new face rendering pipelines and, importantly, in other deep learning applications that require large amounts of realistic training data with ground-truth 3D geometry, appearance maps, lighting, and viewpoint.
17

Shin, Donghee, Kyungwoon Cho, and Hyokyung Bahn. "File Type and Access Pattern Aware Buffer Cache Management for Rendering Systems." Electronics 9, no. 1 (January 15, 2020): 164. http://dx.doi.org/10.3390/electronics9010164.

Abstract:
Rendering is the process of generating high-resolution images by software, which is widely used in animation, video games and visual effects in movies. Although rendering is a computation-intensive job, we observe that storage accesses may become another performance bottleneck in desktop-rendering systems. In this article, we present a new buffer cache management scheme specialized for rendering systems. Unlike general-purpose computing systems, rendering systems exhibit specific file access patterns, and we show that this results in significant performance degradation in the buffer cache system. To cope with this situation, we collect various file input/output (I/O) traces of rendering workloads and analyze their access patterns. The results of this analysis show that file I/Os in rendering processes consist of long loops for configuration, short loops for texture input, random reads for input, and single-writes for output. Based on this observation, we propose a new buffer cache management scheme for improving the storage performance of rendering systems. Experimental results show that the proposed scheme improves the storage I/O performance by an average of 19% and a maximum of 55% compared to the conventional buffer cache system.
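One reason a general-purpose buffer cache degrades on the loop-heavy file access patterns described above: a loop over a file even one block larger than the cache makes LRU miss on every access, because LRU evicts exactly the block that will be needed next. A minimal sketch (the cache and trace sizes are illustrative, not taken from the paper):

```python
from collections import OrderedDict

def lru_misses(trace, capacity):
    """Count misses of an LRU cache over a block-access trace."""
    cache, misses = OrderedDict(), 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)        # refresh recency on a hit
        else:
            misses += 1
            cache[block] = True
            if len(cache) > capacity:
                cache.popitem(last=False)   # evict the least recently used
    return misses

# A looping access pattern (e.g. texture input read repeatedly), one block
# larger than the cache: LRU evicts exactly the block needed next.
loop = list(range(9)) * 100   # 9 distinct blocks, cache holds 8
print(lru_misses(loop, 8))    # 900 -> every access misses
```

This is the pathology that motivates pattern-aware policies such as the one the paper proposes.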
18

Zhao, Fuqiang, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang, Haizhao Dai, Yuhui Zhong, et al. "Human Performance Modeling and Rendering via Neural Animated Mesh." ACM Transactions on Graphics 41, no. 6 (November 30, 2022): 1–17. http://dx.doi.org/10.1145/3550454.3555451.

Abstract:
We have recently seen tremendous progress in the neural advances for photo-real human modeling and rendering. However, it's still challenging to integrate them into an existing mesh-based pipeline for downstream applications. In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos. Our core intuition is to bridge the traditional animated mesh workflow with a new class of highly efficient neural techniques. We first introduce a neural surface reconstructor for high-quality surface generation in minutes. It marries the implicit volumetric rendering of the truncated signed distance field (TSDF) with multi-resolution hash encoding. We further propose a hybrid neural tracker to generate animated meshes, which combines explicit non-rigid tracking with implicit dynamic deformation in a self-supervised framework. The former provides the coarse warping back into the canonical space, while the latter implicit one further predicts the displacements using the 4D hash encoding as in our reconstructor. Then, we discuss the rendering schemes using the obtained animated meshes, ranging from dynamic texturing to lumigraph rendering under various bandwidth settings. To strike an intricate balance between quality and bandwidth, we propose a hierarchical solution by first rendering 6 virtual views covering the performer and then conducting occlusion-aware neural texture blending. We demonstrate the efficacy of our approach in a variety of mesh-based applications and photo-realistic free-view experiences on various platforms, i.e., inserting virtual human performances into real environments through mobile AR or immersively watching talent shows with VR headsets.
19

Jain, Saloni, Sunita GP, and Sampath Kumar S. "Tire Texture Monitoring (VGG 19 VS Efficient Net b7)." International Journal of Scientific Research in Engineering and Management 08, no. 07 (July 24, 2024): 1–12. http://dx.doi.org/10.55041/ijsrem36751.

Abstract:
Tires are crucial components of vehicles, continuously in contact with the road. Monitoring tire conditions is vital for safety and performance, as degradation in tire treads and sidewalls can affect traction, fuel efficiency, longevity, and road noise. This research leverages both the VGG19 and EfficientNet B7 algorithms to enhance tire image rendering, addressing the limitations of traditional techniques. Using a binary classification algorithm, we classify tire images as healthy or cracked. By fine-tuning VGG19 and EfficientNet B7 on a specialized tire dataset, we achieve high-quality, photorealistic renderings. Our results demonstrate remarkable improvements in texture quality and visual realism compared to traditional methods. The rendered images exhibit finer details and more accurate representations of the tire's tread patterns and material properties. This research contributes to the field of computer graphics by presenting a novel application of deep learning techniques to a specific industrial need, paving the way for future advancements in high-quality rendering of complex tire textures. Keywords: VGG19, photorealistic rendering, deep learning.
20

Peng, Haoyu, Hua Xiong, Zhen Liu, and Jiaoying Shi. "Research of Nested Parallel Pipelines on Parallel Graphics Rendering System." International Journal of Image and Graphics 08, no. 02 (April 2008): 209–22. http://dx.doi.org/10.1142/s0219467808003052.

Abstract:
Existing parallel graphics rendering systems support only a single-level parallel rendering pipeline. This paper presents a novel high-performance parallel graphics rendering architecture on a PC cluster supporting a tiled display wall. It employs a hybrid sort-first and sort-last architecture based on a new rendering and scheduling structure, called a dynamic rendering team (DRT for short), which is composed of multiple PCs instead of a single PC acting as a rendering node. Each DRT is responsible for a certain projector area in the tiled display wall, and all DRTs natively form an outer-level parallel rendering pipeline. Inside each DRT there is an optimized parallel Rendering-Composing-Display (R-C-D) pipeline that restructures the serial workflow into a parallel one. The optimized parallel R-C-D pipeline, together with the outer parallel rendering pipelines among DRTs, forms a special nested parallel pipeline architecture, which greatly improves the overall rendering performance of the system. Experiments show that a parallel rendering system with the proposed architecture and nested parallel rendering pipelines, called Parallel-SG, can render over 13M triangles at an average speed of 12 frames per second without any acceleration techniques on a tiled display wall with a 5×3 projector array.
21

Zhang, Hongyi. "The role of linear algebra in real-time graphics rendering: A comparison and analysis of mathematical library performance." Applied and Computational Engineering 73, no. 1 (July 5, 2024): 42–49. http://dx.doi.org/10.54254/2755-2721/73/20240358.

Abstract:
With the rise of concepts like the metaverse, virtual reality, and augmented reality, real-time graphics rendering technology has garnered significant attention. Among its key performance indicators, frame rate and graphical quality stand out. Particularly in real-time rendering, linear algebra, especially matrix and vector operations, play a crucial role in determining the position and transformation of models in multidimensional space. This study aims to explore methods for enhancing matrix operation performance in graphics rendering. We compare the performance of two popular mathematical libraries in practical rendering scenarios and discuss the potential of leveraging their strengths to achieve more efficient performance. The research results demonstrate that optimized matrix operations can significantly improve frame rates, providing users with smoother visual experiences. This holds great importance for real-time graphics rendering applications such as games, 3D simulations, and the metaverse. The paper also reviews relevant literature, presents specific comparative data, analyzes the reasons behind performance differences, and discusses the limitations and future directions of the research.
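The matrix-and-vector work that such benchmarks measure is of the following kind: applying a 4×4 homogeneous transform to a batch of vertices in one multiply. This is a generic NumPy sketch; the paper compares dedicated math libraries, which are not reproduced here.

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix (column-vector convention)."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def transform(vertices, matrix):
    """Apply a 4x4 transform to an (N, 3) array of vertices."""
    n = len(vertices)
    homo = np.hstack([vertices, np.ones((n, 1))])   # append w = 1 -> (N, 4)
    out = homo @ matrix.T                           # one batched multiply
    return out[:, :3]

verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 2.0, 3.0]])
moved = transform(verts, translation(10, 0, 0))
print(moved[0])   # [10.  0.  0.]
```

Batching all vertices into a single matrix product, rather than transforming them one by one, is the kind of optimization that shows up directly in the frame-rate comparisons the paper discusses.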
22

Canazza, Sergio, Giovanni De Poli, and Antonio Rodà. "CaRo 2.0: An Interactive System for Expressive Music Rendering." Advances in Human-Computer Interaction 2015 (2015): 1–13. http://dx.doi.org/10.1155/2015/850474.

Abstract:
In several application contexts in the multimedia field (education, extreme gaming), interaction with the user requires that the system be able to render music expressively. Expressiveness is the added value of a performance and is part of the reason that music is interesting to listen to. Understanding and modeling the communication of expressive content is important for many engineering applications in information technology (e.g., Music Information Retrieval, as well as several applications in the affective computing field). In this paper, we present an original approach to modifying the expressive content of a performance in a gradual way, applying a smooth morphing among performances with different expressive content in order to adapt the audio's expressive character to the user's desires. The system won the final stage of Rencon 2011. This performance RENdering CONtest is a research project that organizes contests for computer systems generating expressive musical performances.
23

Fjodorovs, Iļja, and Sergejs Kodors. "Jetpack Compose and XML Layout Rendering Performance Comparison." HUMAN. ENVIRONMENT. TECHNOLOGIES. Proceedings of the Students International Scientific and Practical Conference, no. 25 (April 23, 2021): 49–54. http://dx.doi.org/10.17770/het2021.25.6779.

Abstract:
The aim of this work is to measure the rendering performance of Google's new Android user interface framework, Jetpack Compose. The authors built two Android applications with identical user interfaces: one uses the classic approach with Kotlin and an XML layout file, while the other is developed using Jetpack Compose. In the results, a performance comparison of the two approaches is provided.
24

Blissett, Sarah. "Algae Sympoiesis in Performance: Rendering-with Nonhuman Ecologies." Performance Philosophy 6, no. 2 (November 1, 2021): 117–36. http://dx.doi.org/10.21476/pp.2021.62326.

Повний текст джерела
Анотація:
This article explores an ecodramaturgical approach to performance-making and research with algae. The first part considers the notion of ‘algae rendering’ as a methodological tool for theorising algae ecological relations which highlights links between representations of algae and their material effects. The second part considers how my embodied encounters with cyanobacteria algae, in the form of lichen, inspire new modes of working with algae in creative practice that explore how algae agencies ‘render’ bodies and environments. I also draw on an artistic case study by The Harrissons (1971) to illustrate principles of what I consider examples of ‘algae rendering’ in artistic practice. The third part considers my approach to making-with algae in a series performance experiments that develop the concept of ‘rendering-with algae’ in practice. This work attempts to depart from anthropocentric binaries that mark different algae species according to their use-value for humans as either ‘healthy’ or ‘harmful’ and investigates embodied ways of working with algae as co-creators, inspired by material ecological relations. The fourth part considers how these performance encounters, experiments and analysis together compose an ecodramaturgical framework that generates new thinking about algae-human relationships in performance and in wider ecologies. Drawing on Donna Haraway’s (2016) concept of ‘sympoiesis’, I develop the term ‘algae sympoiesis’ to describe my embodied ecodramaturgical approach to rendering-with algae in this research. The concept of algae sympoiesis explores how humans and algae shape matter and meaning together in performance and seeks to invite new ways of thinking about how broader algae-human material ecologies are performative of environmental change.
25

Reisin, Gail. "Experiencing "Macbeth": From Text Rendering to Multicultural Performance." English Journal 82, no. 4 (April 1993): 52. http://dx.doi.org/10.2307/820850.

26

Muennoi, Atitayaporn, and Daranee Hormdee. "3D Web-based HMI with WebGL Rendering Performance." MATEC Web of Conferences 77 (2016): 09003. http://dx.doi.org/10.1051/matecconf/20167709003.

27

Wu, Jasmine, Chia-Chen Kuo, Shu-Hsin Liu, Chuan-Lin Lai, Chiang-Hsiang Lien, Ming-Jen Wang, and Chih-Wei Wang. "High-Performance Computing for Visual Simulations and Rendering." Proceedings of International Conference on Artificial Life and Robotics 24 (January 10, 2019): 600–602. http://dx.doi.org/10.5954/icarob.2019.os25-1.

28

Dunnett, G. J., M. White, P. F. Lister, R. L. Grimsdale, and F. Gelmot. "The Image chip for high performance 3D rendering." IEEE Computer Graphics and Applications 12, no. 6 (November 1992): 41–52. http://dx.doi.org/10.1109/38.163624.

29

Reisin, Gail. "Experiencing Macbeth: From Text Rendering to Multicultural Performance." English Journal 82, no. 4 (April 1, 1993): 52–53. http://dx.doi.org/10.58680/ej19937858.

30

Yan, Yiming, Weikun Zhou, Nan Su, and Chi Zhang. "UniRender: Reconstructing 3D Surfaces from Aerial Images with a Unified Rendering Scheme." Remote Sensing 15, no. 18 (September 21, 2023): 4634. http://dx.doi.org/10.3390/rs15184634.

Abstract:
While recent advances in the field of neural rendering have shown impressive 3D reconstruction performance, it is still a challenge to accurately capture the appearance and geometry of a scene by using neural rendering, especially for remote sensing scenes. This is because both rendering methods, i.e., surface rendering and volume rendering, have their own limitations. Furthermore, when neural rendering is applied to remote sensing scenes, the view sparsity and content complexity that characterize these scenes will severely hinder its performance. In this work, we aim to address these challenges and to make neural rendering techniques available for 3D reconstruction in remote sensing environments. To achieve this, we propose a novel 3D surface reconstruction method called UniRender. UniRender offers three improvements in locating an accurate 3D surface by using neural rendering: (1) unifying surface and volume rendering by employing their strengths while discarding their weaknesses, which enables accurate 3D surface position localization in a coarse-to-fine manner; (2) incorporating photometric consistency constraints during rendering, and utilizing the points reconstructed by structure from motion (SFM) or multi-view stereo (MVS), to constrain reconstructed surfaces, which significantly improves the accuracy of 3D reconstruction; (3) improving the sampling strategy by locating sampling points in the foreground regions where the surface needs to be reconstructed, thus obtaining better detail in the reconstruction results. Extensive experiments demonstrate that UniRender can reconstruct high-quality 3D surfaces in various remote sensing scenes.
31

Clifton, Shirley, and Kathryn Grushka. "Rendering Artful and Empathic Arts-Based Performance as Action." LEARNing Landscapes 15, no. 1 (June 23, 2022): 89–107. http://dx.doi.org/10.36510/learnland.v15i1.1066.

Abstract:
There is a critical need to consider ways to enrich the educational experiences and well-being of adolescents when the lack of empathy in the world is high. This paper presents the concepts of Artful Empathy and Artful and Empathic Learning Ecology. The concepts are exemplified from a multi-site case study within Australian secondary visual art studio classrooms. The article demonstrates how learning and making art in an artfully empathic ecology can support the legitimacy of diverse and marginalized voices. Arts-based performative approaches may facilitate empathic knowing across disciplines with global traction.
32

Lee, Jae-Won, and Youngsik Kim. "Rendering Performance Evaluation of 3D Games with Interior Mapping." Journal of Korea Game Society 19, no. 6 (December 31, 2019): 49–59. http://dx.doi.org/10.7583/jkgs.2019.19.6.49.

33

Kim, Younguk, and Sungkil Lee. "High-Performance Multi-GPU Rendering Based on Implicit Synchronization." Journal of KIISE 42, no. 11 (November 15, 2015): 1332–38. http://dx.doi.org/10.5626/jok.2015.42.11.1332.

34

Su, Chen, Qing Zhong, Yifan Peng, Liang Xu, Rui Wang, Haifeng Li, and Xu Liu. "Grayscale performance enhancement for time-multiplexing light field rendering." Optics Express 23, no. 25 (December 10, 2015): 32622. http://dx.doi.org/10.1364/oe.23.032622.

35

Aste, Niccolò, Fabrizio Leonforte, and Antonio Piccolo. "Color rendering performance of smart glazings for building applications." Solar Energy 176 (December 2018): 51–61. http://dx.doi.org/10.1016/j.solener.2018.10.026.

36

Liu, Ko-Yun, Wan-Chun Ma, Chun-Fa Chang, Chuan-Chang Wang, and Paul Debevec. "A framework for locally retargeting and rendering facial performance." Computer Animation and Virtual Worlds 22, no. 2-3 (April 2011): 159–67. http://dx.doi.org/10.1002/cav.404.

37

Boutsi, Argyro-Maria, Charalabos Ioannidis, and Styliani Verykokou. "Multi-Resolution 3D Rendering for High-Performance Web AR." Sensors 23, no. 15 (August 3, 2023): 6885. http://dx.doi.org/10.3390/s23156885.

Abstract:
In the context of web augmented reality (AR), 3D rendering that maintains visual quality and frame rate requirements remains a challenge. The lack of a dedicated and efficient 3D format often results in the degraded visual quality of the original data and compromises the user experience. This paper examines the integration of web-streamable view-dependent representations of large-sized and high-resolution 3D models in web AR applications. The developed cross-platform prototype exploits the batched multi-resolution structures of the Nexus.js library as a dedicated lightweight web AR format and tests it against common formats and compression techniques. Built with AR.js and Three.js open-source libraries, it allows the overlay of the multi-resolution models by interactively adjusting the position, rotation and scale parameters. The proposed method includes real-time view-dependent rendering, geometric instancing and 3D pose regression for two types of AR: natural feature tracking (NFT) and location-based positioning for large and textured 3D overlays. The prototype achieves up to a 46% speedup in rendering time compared to optimized glTF models, while a 34 M vertices 3D model is visible in less than 4 s without degraded visual quality in slow 3D networks. The evaluation under various scenes and devices offers insights into how a multi-resolution scheme can be adopted in web AR for high-quality visualization and real-time performance.
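The Boutsi et al. entry above hinges on view-dependent multi-resolution rendering: pick the coarsest level of detail whose projected geometric error stays under a screen-space tolerance. The sketch below illustrates only that general idea; the level data, `fov_scale` factor, and error threshold are invented for illustration and do not reflect the actual Nexus.js traversal described in the paper.

```python
def select_lod(levels, distance, fov_scale=1000.0, max_screen_error=1.0):
    """Pick the coarsest LOD whose projected error is acceptable.

    `levels` is a list of (vertex_count, geometric_error) pairs ordered
    from coarsest to finest.  The projected (screen-space) error of a
    level shrinks with viewer distance, so far-away models can use
    coarser meshes without visible quality loss.
    """
    for vertex_count, geometric_error in levels:
        screen_error = geometric_error * fov_scale / max(distance, 1e-6)
        if screen_error <= max_screen_error:
            return vertex_count  # coarsest acceptable level
    # nothing coarse enough is acceptable: fall back to the finest level
    return levels[-1][0]

# hypothetical multi-resolution pyramid, coarsest first
levels = [(1_000, 0.08), (10_000, 0.01), (100_000, 0.001)]
```

A distant overlay would then resolve to the 1,000-vertex mesh, while a close-up viewer falls through to the full-resolution geometry.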
38

Chandel, Aditya. "Real-time Graph Visualization with JavaFX: Exploring Large-scale Network Structures." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 20, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31221.

Abstract:
The ability to visualize and comprehend complex network structures is crucial in various domains, including social media analysis, computer networks, and transportation systems. However, visualizing large-scale graphs poses significant challenges due to computational limitations and rendering performance constraints. This research paper presents a novel approach to real-time graph visualization using JavaFX, a powerful Java-based framework for developing rich client applications. By leveraging efficient data structures, rendering optimizations, and multithreading techniques, our proposed system achieves real-time visualization of large-scale graphs, enabling users to explore and interact with dynamic network structures seamlessly. The system incorporates advanced layout algorithms and visual encodings to enhance the clarity and interpretability of the visualizations. Extensive experiments were conducted using real-world and synthetic datasets to evaluate the system's performance, scalability, and usability. The results demonstrate the effectiveness of our approach in rendering large graphs in real-time, outperforming existing techniques. Furthermore, a user study was conducted to assess the system's usability and gather feedback on interaction and exploration features, highlighting potential applications in various domains. Keywords— Graph Visualization, Real-time Rendering, JavaFX, Large-scale Networks, Layout Algorithms, Rendering Optimizations, User Interaction, Performance Evaluation.
39

Zhang, Huan, Shreyan Chowdhury, Carlos Eduardo Cancino-Chacón, Jinhua Liang, Simon Dixon, and Gerhard Widmer. "DExter: Learning and Controlling Performance Expression with Diffusion Models." Applied Sciences 14, no. 15 (July 26, 2024): 6543. http://dx.doi.org/10.3390/app14156543.

Abstract:
In the pursuit of developing expressive music performance models using artificial intelligence, this paper introduces DExter, a new approach leveraging diffusion probabilistic models to render Western classical piano performances. The main challenge faced in performance rendering tasks is the continuous and sequential modeling of expressive timing and dynamics over time, which is critical for capturing the evolving nuances that characterize live musical performances. In this approach, performance parameters are represented in a continuous expression space, and a diffusion model is trained to predict these continuous parameters while being conditioned on a musical score. Furthermore, DExter also enables the generation of interpretations (expressive variations of a performance) guided by perceptually meaningful features by being jointly conditioned on score and perceptual-feature representations. Consequently, we find that our model is useful for learning expressive performance, generating perceptually steered performances, and transferring performance styles. We assess the model through quantitative and qualitative analyses, focusing on specific performance metrics regarding dimensions like asynchrony and articulation, as well as through listening tests that compare generated performances with different human interpretations. The results show that DExter is able to capture the time-varying correlation of the expressive parameters, and it compares well to existing rendering models in subjectively evaluated ratings. The perceptual-feature-conditioned generation and transferring capabilities of DExter are verified via a proxy model predicting perceptual characteristics of differently steered performances.
40

Liu, Xiao, Tingting Wu, Wenjian Xu, and Qing Liu. "Design and Implementation of WebGL Model Rendering Engine." Scientific Journal of Intelligent Systems Research 6, no. 7 (July 26, 2024): 18–23. http://dx.doi.org/10.54691/778tvh90.

Abstract:
This thesis conducts an in-depth study and implements a model rendering engine based on WebGL. It elaborates on the basic concepts, characteristics of WebGL, and its application prospects in the field of real-time graphics rendering. The implementation methods are discussed, and a design scheme of the model rendering engine based on WebGL is proposed. When introducing the background and basic principles of WebGL, it is pointed out that it is a technology for high-performance graphics rendering in the browser using JavaScript. Based on the OpenGL ES 2.0 standard, it is achieved by embedding its context in the HTML5 Canvas element, and has advantages such as cross-platform, high performance, easy learning and use. In addition, the design scheme of this engine is proposed, including key technologies such as model loading and lighting calculation. Common rendering engine frameworks and tools are also introduced to help developers efficiently build WebGL applications.
41

Cai, Xing Quan, Bao Xin Qian, and Jin Hong Li. "Modeling and Rendering Realistic Trees." Applied Mechanics and Materials 50-51 (February 2011): 849–53. http://dx.doi.org/10.4028/www.scientific.net/amm.50-51.849.

Abstract:
In this paper, we present an efficient method for modeling and rendering realistic trees based on recursive fractals. First, according to the growth characteristics of poplar trees, we divide the tree model into three parts: trunks, branches and leaves. We provide a multi-branch tree topological structure to organize all the trunks and branches, and add leaves at the leaf nodes of this structure. We then define a recursive fractal iterator to generate the trunk model and side-branch models, and use billboards to simulate the tree leaves. After that, we render the trees. Finally, we implement our method and use it to model and render trees of different shapes. The experiments show that our method is feasible and performs well.
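The recursive-fractal branching that the Cai et al. abstract describes can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the branching factor, 0.7 length decay, and angular spread are all assumed values.

```python
import math

def grow(depth, length, angle, spread=0.5, n_children=2):
    """Recursively generate branch segments of a fractal tree.

    Each branch spawns `n_children` shorter child branches rotated
    around the parent direction by up to `spread` radians; recursion
    stops at depth 0, which plays the role of the leaf nodes in a
    multi-branch tree topology.  Returns a flat list of
    (depth, length, angle) tuples.
    """
    branch = (depth, length, angle)
    if depth == 0:
        return [branch]  # leaf node: a billboard would be attached here
    segments = [branch]
    for i in range(n_children):
        # distribute children symmetrically around the parent direction
        child_angle = angle + spread * (i - (n_children - 1) / 2.0)
        segments += grow(depth - 1, length * 0.7, child_angle)
    return segments

# a depth-3 binary tree: 1 + 2 + 4 + 8 = 15 segments in total
tree = grow(depth=3, length=1.0, angle=math.pi / 2)
```

Varying `depth`, `spread`, and the decay factor yields differently shaped trees from the same iterator, which is the appeal of the recursive-fractal approach.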
42

Curado, António, Ricardo Figueiras, Hélder Gonçalves, Filipe Sambento, and Leonel J. R. Nunes. "Novel High-Performance ETICS Coatings with Cool Pigments Incorporation." Sustainability 15, no. 12 (June 15, 2023): 9644. http://dx.doi.org/10.3390/su15129644.

Abstract:
External Thermal Insulation Composite Systems (ETICS) enhance building aesthetics and optimize thermal performance while offering protection against weather, fire, and harmful agents. Key to these capabilities are properties of ETICS rendering. We have applied specialized organic renderings, including modified acrylic resins, additives, and reflective pigments, to mitigate color bleaching and stress cracking induced by high surface temperatures, resulting in improved color stability and water protection. In a practical application at a shopping center in Portugal, we observed reduced coating layer failures, better thermal resistance, and lower maintenance costs over one year. Subsequent research reveals the benefits of Near Infrared Reflective (NIR) pigments and nanocomposites such as titanium dioxide, which increase solar reflectance, enhance resistance to dirt, and promote self-cleaning. Synthetic colored inorganic pigments improve heat stability, thermal inertia, and mechanical resistance. The application of cool pigments also reduces surface temperature by up to 10 °C. These advancements in ETICS technology mark a significant step towards sustainable building practices.
43

Mileff, Péter, and Judit Dudra. "Effective Pixel Rendering in Practice." Production Systems and Information Engineering 10, no. 1 (2022): 1–15. http://dx.doi.org/10.32968/psaie.2022.1.1.

Abstract:
The graphics processing unit (GPU) has become an integral part of our lives through both desktop and portable devices. Thanks to dedicated hardware, visualization has been significantly accelerated, and software today uses the GPU only for rasterization. As a result of this development, rendering is now exclusively triangle-based, and pixel-based image manipulations can only be performed using shaders. It can be stated that today's GPU pipeline cannot provide the same flexibility as earlier software implementations. This paper discusses an efficient software implementation of pixel-based rasterization. After reviewing the current GPU-based drawing process, we show how to access pixel-level drawing in this environment. Finally, a storage and display format more efficient than the classic solution is presented, whose performance far exceeds the previous one.
44

Dong, Zheng, Ke Xu, Yaoan Gao, Qilin Sun, Hujun Bao, Weiwei Xu, and Rynson W. H. Lau. "SAILOR: Synergizing Radiance and Occupancy Fields for Live Human Performance Capture." ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–15. http://dx.doi.org/10.1145/3618370.

Abstract:
Immersive user experiences in live VR/AR performances require a fast and accurate free-view rendering of the performers. Existing methods are mainly based on Pixel-aligned Implicit Functions (PIFu) or Neural Radiance Fields (NeRF). However, while PIFu-based methods usually fail to produce photorealistic view-dependent textures, NeRF-based methods typically lack local geometry accuracy and are computationally heavy ( e.g. , dense sampling of 3D points, additional fine-tuning, or pose estimation). In this work, we propose a novel generalizable method, named SAILOR, to create high-quality human free-view videos from very sparse RGBD live streams. To produce view-dependent textures while preserving locally accurate geometry, we integrate PIFu and NeRF such that they work synergistically by conditioning the PIFu on depth and then rendering view-dependent textures through NeRF. Specifically, we propose a novel network, named SRONet, for this hybrid representation. SRONet can handle unseen performers without fine-tuning. Besides, a neural blending-based ray interpolation approach, a tree-based voxel-denoising scheme, and a parallel computing pipeline are incorporated to reconstruct and render live free-view videos at 10 fps on average. To evaluate the rendering performance, we construct a real-captured RGBD benchmark from 40 performers. Experimental results show that SAILOR outperforms existing human reconstruction and performance capture methods.
45

Tariq, Taimoor, Cara Tursun, and Piotr Didyk. "Noise-based enhancement for foveated rendering." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–14. http://dx.doi.org/10.1145/3528223.3530101.

Abstract:
Human visual sensitivity to spatial details declines towards the periphery. Novel image synthesis techniques, so-called foveated rendering, exploit this observation and reduce the spatial resolution of synthesized images for the periphery, avoiding the synthesis of high-spatial-frequency details that are costly to generate but not perceived by a viewer. However, contemporary techniques do not make a clear distinction between the range of spatial frequencies that must be reproduced and those that can be omitted. For a given eccentricity, there is a range of frequencies that are detectable but not resolvable. While the accurate reproduction of these frequencies is not required, an observer can detect their absence if completely omitted. We use this observation to improve the performance of existing foveated rendering techniques. We demonstrate that this specific range of frequencies can be efficiently replaced with procedural noise whose parameters are carefully tuned to image content and human perception. Consequently, these frequencies do not have to be synthesized during rendering, allowing more aggressive foveation, and they can be replaced by noise generated in a less expensive post-processing step, leading to improved performance of the rendering system. Our main contribution is a perceptually-inspired technique for deriving the parameters of the noise required for the enhancement and its calibration. The method operates on rendering output and runs at rates exceeding 200 FPS at 4K resolution, making it suitable for integration with real-time foveated rendering systems for VR and AR devices. We validate our results and compare them to the existing contrast enhancement technique in user experiments.
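The core idea in the Tariq et al. entry — discard spatial frequencies the periphery cannot resolve, then fill that band with noise of matched energy so its absence is not detectable — can be caricatured in 1-D. This sketch is only a frequency-domain illustration of the principle; the cutoff choice and per-bin magnitude matching stand in for the paper's perceptually calibrated noise model.

```python
import numpy as np

def foveate_with_noise(signal, cutoff_bin, rng=None):
    """Low-pass a 1-D signal at `cutoff_bin`, then replace the removed
    band with random-phase noise whose per-bin magnitude matches the
    discarded content (so overall band energy is preserved)."""
    rng = rng or np.random.default_rng(0)
    spectrum = np.fft.rfft(signal)
    kept = spectrum.copy()
    kept[cutoff_bin:] = 0.0  # aggressive "foveation": drop high frequencies
    # synthesize the replacement band: same magnitudes, random phases
    mags = np.abs(spectrum[cutoff_bin:])
    phases = rng.uniform(0.0, 2.0 * np.pi, mags.shape)
    noise_band = np.zeros_like(spectrum)
    noise_band[cutoff_bin:] = mags * np.exp(1j * phases)
    return np.fft.irfft(kept + noise_band, n=len(signal))

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0.0, 16.0 * np.pi, 256)) + 0.1 * rng.standard_normal(256)
y = foveate_with_noise(x, cutoff_bin=32)
```

In the paper's setting the expensive step (synthesizing the high band) is skipped during rendering and the cheap noise substitution happens in post-processing; here both live in one function purely for clarity.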
46

López de Mántaras, Ramon. "Playing with Cases: Rendering Expressive Music with Case-Based Reasoning." AI Magazine 33, no. 4 (December 21, 2012): 22. http://dx.doi.org/10.1609/aimag.v33i4.2405.

Abstract:
This paper surveys significant research on the problem of rendering expressive music by means of AI techniques with an emphasis on Case-Based Reasoning. Following a brief overview discussing why we prefer listening to expressive music instead of lifeless synthesized music, we examine a representative selection of well-known approaches to expressive computer music performance with an emphasis on AI-related approaches. In the main part of the paper we focus on the existing CBR approaches to the problem of synthesizing expressive music, and particularly on TempoExpress, a case-based reasoning system developed at our Institute, for applying musically acceptable tempo transformations to monophonic audio recordings of musical performances. Finally we briefly describe an ongoing extension of our previous work consisting of complementing audio information with information about the gestures of the musician. Music is played through our bodies, therefore capturing the gesture of the performer is a fundamental aspect that has to be taken into account in future expressive music renderings. This paper is based on the “2011 Robert S. Engelmore Memorial Lecture” given by the first author at AAAI/IAAI 2011.
47

Maia Pederneiras, Cinthia, Rosário Veiga, and Jorge de Brito. "Physical and Mechanical Performance of Coir Fiber-Reinforced Rendering Mortars." Materials 14, no. 4 (February 9, 2021): 823. http://dx.doi.org/10.3390/ma14040823.

Abstract:
Coir fiber is a by-product waste generated on a large scale. Considering that most of this waste does not have a proper disposal route, several engineering applications of coir fibers have been investigated in order to provide a suitable use, since coir fibers have interesting properties, namely high tensile strength, high elongation at break, low modulus of elasticity, and high abrasion resistance. Currently, coir fiber is widely used in concrete, roofing, boards and panels. Nonetheless, only a few studies are focused on the incorporation of coir fibers in rendering mortars. This work investigates the feasibility of incorporating coir fibers in rendering mortars with two different binders. A cement CEM II/B-L 32.5 N was used at a 1:4 volumetric cement-to-aggregate ratio. Cement and air-lime CL80-S were used at a volumetric ratio of 1:1:6. Mortars were produced with 1.5 cm and 3.0 cm long coir fibers, added at 10% and 20% of total mortar volume. Physical and mechanical properties of the coir fiber-reinforced mortars were discussed. The addition of coir fibers reduced the workability of the mortars, requiring more water, which affected the hardened properties of the mortars. The modulus of elasticity and the compressive strength of the mortars with coir fibers decreased with increasing fiber volume fraction and length. Coir fiber incorporation improved the flexural strength and the fracture toughness of the mortars. The results emphasize that the cement-air-lime based mortars presented better post-peak behavior than the cementitious mortars. These results indicate that the use of coir fibers in rendering mortars presents potential technical and sustainable feasibility for the reinforcement of cement and cement-air-lime mortars.
48

Joo, Yejong, Woonam Chung, Woochan Park, and Dukki Hong. "Cache simulation for measuring cache performance suitable for sound rendering." Journal of the Korea Computer Graphics Society 23, no. 3 (July 2017): 123–33. http://dx.doi.org/10.15701/kcgs.2017.23.3.123.

49

Tominaga, J., S. Tsutsumi, and N. Saito. "Improvement of life-performance on high color rendering HPS lamps." JOURNAL OF THE ILLUMINATING ENGINEERING INSTITUTE OF JAPAN 73, Appendix (1989): 31. http://dx.doi.org/10.2150/jieij1980.73.appendix_31.

50

Kim, Daewoong, Kilhyung Cha, and Soo-Ik Chae. "A High-Performance OpenVG Accelerator with Dual-Scanline Filling Rendering." IEEE Transactions on Consumer Electronics 54, no. 3 (August 2008): 1303–11. http://dx.doi.org/10.1109/tce.2008.4637621.
