Journal articles on the topic 'Rendering'

Consult the top 50 journal articles for your research on the topic 'Rendering.'


1

Blanco-Fernández, Vítor. "Rendering Volumetrically, Rendering Queerly." A Peer-Reviewed Journal About 11, no. 1 (October 18, 2022): 104–15. http://dx.doi.org/10.7146/aprja.v11i1.134308.

Abstract:
The main aim of this article is to describe the conceptual basis and challenges of the project Volumetric Frictions. Volumetric Frictions is a queer virtual reality resulting from my will to render my ongoing PhD research (“Digital Speculation, Volumetric Fictions. Volumetric/3D CGI within the queer contemporary debate”) differently. To contextualize the project, I start by addressing contemporary debates about the role of queer, and the practice of queering, in academic institutions. Then, I move on to describe my PhD research and its provisional results. I name these results “volumetric frictions”, as they define crossing paths between queer theories and 3D/volumetric aesthetics. Finally, I summarize some of the challenges currently being faced in the design of the project. Throughout the article, I make use of contemporary 3D/volumetric art to illustrate ideas, concepts, and possible solutions.
2

Nimeroff, Jeffry S., Eero Simoncelli, Norman I. Badler, and Julie Dorsey. "Rendering Spaces for Architectural Environments." Presence: Teleoperators and Virtual Environments 4, no. 3 (January 1995): 286–96. http://dx.doi.org/10.1162/pres.1995.4.3.286.

Abstract:
We present a new framework for rendering virtual environments. This framework is proposed as a complete scene description, which embodies the space of all possible renderings, under all possible lighting scenarios of the given scene. In effect, this hypothetical rendering space includes all possible light sources as part of the geometric model. While it would be impractical to implement the general framework, this approach does allow us to look at the rendering problem in a new way. Thus, we propose new representations that are subspaces of the entire rendering space. Some of these subspaces are computationally tractable and may be carefully chosen to serve a particular application. The approach is useful both for real and virtual scenes. The framework includes methods for rendering environments which are illuminated by artificial light, natural light, or a combination of the two models.
3

Chandran, Prashanth, Sebastian Winberg, Gaspard Zoss, Jérémy Riviere, Markus Gross, Paulo Gotardo, and Derek Bradley. "Rendering with style." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–14. http://dx.doi.org/10.1145/3478513.3480509.

Abstract:
For several decades, researchers have been advancing techniques for creating and rendering 3D digital faces, where a lot of the effort has gone into geometry and appearance capture, modeling and rendering techniques. This body of research work has largely focused on facial skin, with much less attention devoted to peripheral components like hair, eyes and the interior of the mouth. As a result, even with the best technology for facial capture and rendering, in most high-end productions a lot of artist time is still spent modeling the missing components and fine-tuning the rendering parameters to combine everything into photo-real digital renders. In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but is completed naturally with synthesized hair, eyes, inner mouth and surroundings. Notably, we present the first method for multi-frame consistent projection into this latent space, allowing photo-realistic rendering and preservation of the identity of the digital human over an animated performance sequence, which can depict different expressions, lighting conditions and viewpoints.
Our method can be used in new face rendering pipelines and, importantly, in other deep learning applications that require large amounts of realistic training data with ground-truth 3D geometry, appearance maps, lighting, and viewpoint.
4

Kim, Ayoung, and Atanas Gotchev. "Spectral Rendering: From Input to Rendering Process." Electronic Imaging 36, no. 10 (January 21, 2024): 251–1. http://dx.doi.org/10.2352/ei.2024.36.10.ipas-251.

5

Camarini, Gladis, Lia Lorena Pimentel, and Natalia Haluska Rodrigues De Sá. "Assessment of the Material Loss in Walls Renderings with β-Hemihydrate Paste." Applied Mechanics and Materials 71-78 (July 2011): 1242–45. http://dx.doi.org/10.4028/www.scientific.net/amm.71-78.1242.

Abstract:
In civil construction, β-hemihydrate pastes have been used in decorative ornaments, plasterboards, dry-wall and renderings. The application of β-hemihydrate pastes for rendering generates large amounts of waste due to the material's hydration kinetics; waste is also produced by the technique of preparing and applying the material, so minimizing gypsum waste is essential. The aim of this work was to quantify the amount of gypsum waste produced when the material is used as a rendering. The influence of the workers applying the rendering pastes on gypsum waste production was observed. The application process of plaster renderings was followed, measuring the waste generated, and the time to finish the process was also recorded. Results pointed to waste production values between 16% and 47%. The application technique influences the amount of waste produced.
6

Miner, Valerie, and Pat Barker. "Rendering Truth." Women's Review of Books 21, no. 10/11 (July 2004): 14. http://dx.doi.org/10.2307/3880372.

7

Frenkel, Karen A. "Volume rendering." Communications of the ACM 32, no. 4 (April 1989): 426–35. http://dx.doi.org/10.1145/63334.63335.

8

Watt, Alan. "Rendering techniques." ACM Computing Surveys 28, no. 1 (March 1996): 157–59. http://dx.doi.org/10.1145/234313.234380.

9

Ihrke, Ivo, Gernot Ziegler, Art Tevs, Christian Theobalt, Marcus Magnor, and Hans-Peter Seidel. "Eikonal rendering." ACM Transactions on Graphics 26, no. 3 (July 29, 2007): 59. http://dx.doi.org/10.1145/1276377.1276451.

10

Takala, Tapio, and James Hahn. "Sound rendering." ACM SIGGRAPH Computer Graphics 26, no. 2 (July 1992): 211–20. http://dx.doi.org/10.1145/142920.134063.

11

Christensen, Per H. "Contour rendering." ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 58–61. http://dx.doi.org/10.1145/563666.563688.

12

Drebin, Robert A., Loren Carpenter, and Pat Hanrahan. "Volume rendering." ACM SIGGRAPH Computer Graphics 22, no. 4 (August 1988): 65–74. http://dx.doi.org/10.1145/378456.378484.

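Drebin, Carpenter, and Hanrahan's approach accumulates per-sample colors and opacities along each viewing ray with the "over" operator. A minimal front-to-back compositing sketch in Python (the function name and data layout are illustrative, not taken from the paper):

```python
import numpy as np

def composite_ray(colors, alphas):
    """Front-to-back 'over' compositing of samples along one viewing ray.

    colors: (n, 3) RGB per sample, ordered front to back; alphas: (n,) opacities.
    """
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alphas):
        # Each new sample is attenuated by the transparency accumulated so far
        acc_color += (1.0 - acc_alpha) * a * c
        acc_alpha += (1.0 - acc_alpha) * a
        if acc_alpha > 0.999:  # early ray termination once nearly opaque
            break
    return acc_color, acc_alpha
```

Because the accumulated alpha only grows, rays through opaque material can stop early, which is one of the classic optimizations of this family of algorithms.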
13

Tamas, Sophie. "Sketchy Rendering." Qualitative Inquiry 15, no. 3 (October 15, 2008): 607–17. http://dx.doi.org/10.1177/1077800408318421.

14

Finkelstein, A., and L. Markosian. "Nonphotorealistic rendering." IEEE Computer Graphics and Applications 23, no. 4 (July 2003): 26–27. http://dx.doi.org/10.1109/mcg.2003.1210861.

15

Udupa, J. K., and D. Odhner. "Shell rendering." IEEE Computer Graphics and Applications 13, no. 6 (November 1993): 58–67. http://dx.doi.org/10.1109/38.252558.

16

Pang, A. "Spray rendering." IEEE Computer Graphics and Applications 14, no. 5 (September 1994): 57–63. http://dx.doi.org/10.1109/38.310727.

17

Cox, Geoff, and Christian Ulrik Andersen. "Rendering Research." A Peer-Reviewed Journal About 11, no. 1 (October 18, 2022): 4–9. http://dx.doi.org/10.7146/aprja.v11i1.134302.

Abstract:
To render is to give something “cause to be” or “hand over” (from the Latin reddere, “give back”) and to enter into an obligation to do or make something, like a decision. More familiar perhaps in computing, to render is to take an image or file and convert it into another format or apply a modification of some kind; or, in the case of 3D animation or scanning, to render is to animate something or give it volume. In this issue, we ask what it means to render research. How does the rendering of research typically reinforce certain limitations of thought and action? We ask these questions in the context of more and more demands on researchers to produce academic outputs in standardised forms, in peer-reviewed journals and the like, that are legitimised by normative values. So, then, how to render research otherwise?
18

Sen, P., and S. Darabi. "Compressive Rendering: A Rendering Application of Compressed Sensing." IEEE Transactions on Visualization and Computer Graphics 17, no. 4 (April 2011): 487–99. http://dx.doi.org/10.1109/tvcg.2010.46.

19

Fiume, Eugene. "A mathematical semantics of rendering I: Ideal Rendering." Computer Vision, Graphics, and Image Processing 48, no. 1 (October 1989): 145. http://dx.doi.org/10.1016/0734-189x(89)90109-6.

20

Fiume, Eugene. "A mathematical semantics of rendering I: ideal rendering." Computer Vision, Graphics, and Image Processing 48, no. 3 (December 1989): 281–303. http://dx.doi.org/10.1016/0734-189x(89)90145-x.

21

Beeson, Brett, David G. Barnes, and Paul D. Bourke. "A Distributed Data Implementation of the Perspective Shear-Warp Volume Rendering Algorithm for Visualisation of Large Astronomical Cubes." Publications of the Astronomical Society of Australia 20, no. 3 (2003): 300–313. http://dx.doi.org/10.1071/as03039.

Abstract:
We describe the first distributed data implementation of the perspective shear-warp volume rendering algorithm and explore its applications to large astronomical data cubes and simulation realisations. Our system distributes sub-volumes of 3-dimensional images to leaf nodes of a Beowulf-class cluster, where the rendering takes place. Junction nodes composite the sub-volume renderings together and pass the combined images upwards for further compositing or display. We demonstrate that our system out-performs other software solutions and can render a 'worst-case' 512 × 512 × 512 data volume in less than four seconds using 16 rendering and 15 compositing nodes. Our system also performs very well compared with much more expensive hardware systems. With appropriate commodity hardware, such as Swinburne's Virtual Reality Theatre or a 3Dlabs Wildcat graphics card, stereoscopic display is possible.
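The junction-node compositing described above works because the "over" operator on premultiplied (color, alpha) pairs is associative, so depth-sorted partial renders can be combined in any tree order across the cluster. A small sketch of that idea (scalar color for brevity; the names are illustrative, not from the paper):

```python
def over(front, back):
    """Composite two (premultiplied_color, alpha) partial renders, front over back."""
    cf, af = front
    cb, ab = back
    return (cf + (1.0 - af) * cb, af + (1.0 - af) * ab)

def composite_tree(renders):
    """Pairwise (tree) reduction of depth-sorted partial renders, as a
    hierarchy of junction nodes might perform it."""
    while len(renders) > 1:
        renders = [over(renders[i], renders[i + 1]) if i + 1 < len(renders)
                   else renders[i]
                   for i in range(0, len(renders), 2)]
    return renders[0]
```

Associativity is what lets the compositing tree have any shape (and depth) without changing the final image, which is the property a distributed implementation depends on.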
22

Abdelkarim, Majda Babiker Ahmed, and Ali Albashir Mohammed Alhaj. "Cultural and Lexical Challenges Faced in Translating Some Selected Verses of Surat Maryam into English: A Thematic Comparative Review." International Journal of Linguistics, Literature and Translation 6, no. 2 (February 28, 2023): 178–84. http://dx.doi.org/10.32996/ijllt.2023.6.2.23.

Abstract:
Translating Arabic Qur’anic cultural and lexical expressions into English has always been a strenuous and complicated task, even more problematic than the translation of any other genre. The present study is a caveat-lector endeavor to scrutinize the cultural and lexical challenges faced in translating some selected verses of Surat Maryam into English, and the rendering losses involved. The foremost significance of this study is how the three selected Quranic translators attempted to achieve adequate cultural and lexical equivalence when rendering the implicative and profound meanings of the cultural and lexical expressions in Surat Maryam. The study demonstrates that the three targeted Quranic translators' renderings encountered cultural and lexical challenges while translating some selected verses of Surat Maryam into English. It is also found that proper linguistic and explicative analyses are priorities for accurate translation, which avert discrepancies in implicative meaning and rendering loss. The study concludes that the three notable Quranic translators employed literal, verbatim, semantic, and communicative translation methods in rendering some selected ayahs [verses] of Surat Maryam into English comprising lexical and cultural challenges.
23

Roessle, Barbara, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, and Matthias Niessner. "GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields." ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–14. http://dx.doi.org/10.1145/3618402.

Abstract:
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes. Our goal is to mitigate these imperfections from various sources with a joint solution: we take advantage of the ability of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs. To this end, we learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction, thus improving realism in a 3D-consistent fashion. Thereby, rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints. In addition, we condition a generator with multi-resolution NeRF renderings which is adversarially trained to further improve rendering quality. We demonstrate that our approach significantly improves rendering quality, e.g., nearly halving LPIPS scores compared to Nerfacto while at the same time improving PSNR by 1.4dB on the advanced indoor scenes of Tanks and Temples.
24

Yin, Sai. "Explore the Use of Computer-Aided Design in the Landscape Renderings." Applied Mechanics and Materials 687-691 (November 2014): 1166–69. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1166.

Abstract:
Landscape renderings express the design of a landscape project and are an important medium for design review. Producing them with computer-aided design has become the industry mainstream, as it offers visualized performance, a reversible process, and diversified visual effects. Computer production of a rendering proceeds in three stages, namely modeling, rendering, and image processing, each supported by corresponding professional graphics software; "3DSMAX + VRAY + PHOTOSHOP" is a typical production pipeline for landscape renderings.
25

Kennelly, Patrick J., and A. Jon Kimerling. "Non-Photorealistic Rendering and Terrain Representation." Cartographic Perspectives, no. 54 (June 1, 2006): 35–54. http://dx.doi.org/10.14714/cp54.345.

Abstract:
In recent years, a branch of computer graphics termed non-photorealistic rendering (NPR) has defined its own niche in the computer graphics community. While photorealistic rendering attempts to render virtual objects into images that cannot be distinguished from a photograph, NPR looks at techniques designed to achieve other ends. Its goals can be as diverse as imitating an artistic style, mimicking a look comparable to images created with specific reproduction techniques, or adding highlights and details to images. In doing so, NPR has overlapped the study of cartography concerned with representing terrain in two ways. First, NPR has formulated several techniques that are similar or identical to antecedent terrain rendering techniques including inclined contours and hachures. Second, NPR efforts to highlight or add information in renderings often focus on the use of innovative and meaningful combinations of visual variables such as orientation and color. Such efforts are similar to recent terrain rendering research focused on methods to symbolize disparate areas of slope and aspect on shaded terrain representations. We compare these fields of study in an effort to increase awareness and foster collaboration between researchers with similar interests.
26

Mileto, Camilla, Fernando Vegas, and Vincenzina La Spina. "Is Gypsum External Rendering Possible? The Use of Gypsum Mortar for Rendering Historic Façades of Valencia's City Centre." Advanced Materials Research 250-253 (May 2011): 1301–4. http://dx.doi.org/10.4028/www.scientific.net/amr.250-253.1301.

Abstract:
Valencia is a city located in the east of Spain on the Mediterranean Sea. It has a large historic centre with ancient winding streets containing buildings of singular architectural heritage. The buildings' façades are protected by a traditional external rendering, sometimes in a bad state of conservation, or modified or replaced in restoration works. A study carried out on historic renderings in Valencia points to the widespread use of gypsum mortar or gypsum-lime mortar, among other peculiarities. External gypsum rendering is one of the many uses of gypsum mortars in Valencia's traditional architecture. This fact contradicts the general belief in the exclusive use of lime mortars for rendering a façade. Knowledge of the characteristics of historic mortars will allow proper restoration of architectural heritage with suitable mortars, which is essential to guarantee the adherence and compatibility of any repair.
27

Dashti Shafii, Ali, Babak Monir Abbasi, and Shiva Jabari. "Manual Rendering Techniques in Architecture." International Journal of Engineering and Technology 8, no. 2 (February 2016): 141–45. http://dx.doi.org/10.7763/ijet.2016.v6.874.

28

Dashti Shafii, Ali, Babak Monir Abbasi, and Shiva Jabari. "Manual Rendering Techniques in Architecture." International Journal of Engineering and Technology 8, no. 2 (February 2016): 141–45. http://dx.doi.org/10.7763/ijet.2016.v8.874.

29

Wu, Xiuchao, Jiamin Xu, Zihan Zhu, Hujun Bao, Qixing Huang, James Tompkin, and Weiwei Xu. "Scalable neural indoor scene rendering." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–16. http://dx.doi.org/10.1145/3528223.3530153.

Abstract:
We propose a scalable neural scene reconstruction and rendering method to support distributed training and interactive rendering of large indoor scenes. Our representation is based on tiles. Tile appearances are trained in parallel through a background sampling strategy that augments each tile with distant scene information via a proxy global mesh. Each tile has two low-capacity MLPs: one for view-independent appearance (diffuse color and shading) and one for view-dependent appearance (specular highlights, reflections). We leverage the phenomenon that complex view-dependent scene reflections can be attributed to virtual lights underneath surfaces at the total ray distance to the source. This lets us handle sparse samplings of the input scene where reflection highlights do not always appear consistently in input images. We show interactive free-viewpoint rendering results from five scenes, one of which covers an area of more than 100 m². Experimental results show that our method produces higher-quality renderings than a single large-capacity MLP and five recent neural proxy-geometry and voxel-based baseline methods. Our code and data are available at the project webpage: https://xchaowu.github.io/papers/scalable-nisr.
30

Kobimbo, Mary Mercy. "The Translation of יהוה: Part 2, The Case of Dholuo." Bible Translator 73, no. 2 (August 2022): 213–26. http://dx.doi.org/10.1177/20516770221105876.

Abstract:
The first part of this study (TBT 72[1]: 50–60) reviewed the history of the rendering of the key term יהוה YHWH in Bible translations into Dholuo (spoken in southwestern Kenya and northwestern Tanzania). This second part considers the translation of this key term within the context of modern Dholuo language and culture. The different renderings in two existing translations are analyzed and put in the broader perspective of Bible translation in Africa. Finally, the paper proposes a rendering for יהוה that does justice to the Dholuo culture and tradition, while maintaining the specific characteristics that are present in the source text.
31

Tavares, Martha, Maria do Rosário Veiga, and Ana Fragata. "Conservation of old renderings - the consolidation of rendering with loss of cohesion." Conservar Património 8 (2008): 13–19. http://dx.doi.org/10.14568/cp8_3.

32

M.Jawad, Najat. "Rendering of ‘La'ala’ & ‘Asaa’ in the Holy Quran." Al-Adab Journal, no. 146 (September 15, 2023): 1–14. http://dx.doi.org/10.31973/aj.v2i146.3985.

Abstract:
The object of the study is to look at the English (TT) translation of لعل la’ala and عسى ‘asaa in the Arabic text (ST) of the Holy Quran. The two have the same meaning with slight differences. Their renderings are investigated and analyzed semantically according to the contextual and cohesive meaning of certain texts, i.e. the aayas (verses). The question is: do the translators (al-Hilali & Khan and Irving) succeed in rendering the meanings of the two items from Arabic (source text) into English (target text)? Are there differences between the two translations, or between the two items? The study finds that the translators have relatively succeeded in rendering la’ala and ‘asaa. However, it is shown that Arabic (SL) is semantically more precise than English, and the latter has limited expressions compared to the many meanings of the Quranic expressions. Moreover, the two translations are not different in rendering them, though Irving’s is more adequate than al-Hilali & Khan’s.
33

Arend, Johannes M., Melissa Ramírez, Heinrich R. Liesefeld, and Christoph Pörschmann. "Do near-field cues enhance the plausibility of non-individual binaural rendering in a dynamic multimodal virtual acoustic scene?" Acta Acustica 5 (2021): 55. http://dx.doi.org/10.1051/aacus/2021048.

Abstract:
It is commonly believed that near-field head-related transfer functions (HRTFs) provide perceptual benefits over far-field HRTFs that enhance the plausibility of binaural rendering of nearby sound sources. However, to the best of our knowledge, no study has systematically investigated whether using near-field HRTFs actually provides a perceptually more plausible virtual acoustic environment. To assess this question, we conducted two experiments in a six-degrees-of-freedom multimodal augmented reality experience where participants had to compare non-individual anechoic binaural renderings based on either synthesized near-field HRTFs or intensity-scaled far-field HRTFs and judge which of the two rendering methods led to a more plausible representation. Participants controlled the virtual sound source position by moving a small handheld loudspeaker along a prescribed trajectory laterally and frontally near the head, which provided visual and proprioceptive cues in addition to the auditory cues. The results of both experiments show no evidence that near-field cues enhance the plausibility of non-individual binaural rendering of nearby anechoic sound sources in a dynamic multimodal virtual acoustic scene as examined in this study. These findings suggest that, at least in terms of plausibility, the additional effort of including near-field cues in binaural rendering may not always be worthwhile for virtual or augmented reality applications.
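As a rough illustration of the intensity-scaled far-field baseline the abstract refers to: the source signal is convolved with a far-field HRIR pair and scaled by the inverse-distance law, with no distance-dependent change in the binaural cues themselves. Everything here (function names, the 1.5 m reference distance) is an assumption for the sketch, not taken from the study:

```python
import numpy as np

def render_intensity_scaled(signal, hrir_left, hrir_right, r, r_ref=1.5):
    """Binaural rendering with intensity-scaled far-field HRIRs (sketch).

    The HRIR pair, assumed measured at distance r_ref, is convolved with the
    source signal and scaled by a 1/r gain; genuine near-field effects
    (e.g. growing interaural level differences close to the head) are absent.
    """
    gain = r_ref / max(r, 1e-3)  # inverse-distance amplitude law
    left = np.convolve(signal, hrir_left) * gain
    right = np.convolve(signal, hrir_right) * gain
    return left, right
```

A near-field renderer would instead swap in distance-dependent HRTFs, which is exactly the extra effort whose perceptual payoff the study questions.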
34

Wetherill, P. M., and Lawrence R. Schehr. "Rendering French Realism." Modern Language Review 95, no. 2 (April 2000): 514. http://dx.doi.org/10.2307/3736191.

35

Lessig. "Rendering Sensible Salient." Good Society 27, no. 1-2 (2019): 171. http://dx.doi.org/10.5325/goodsociety.27.1-2.0171.

36

Bednarik, Robert G. "Rendering Humanities Sustainable." Humanities 1, no. 1 (October 19, 2011): 64–71. http://dx.doi.org/10.3390/h1010064.

37

Cardoso, Florentino. "Rendering of accounts." Revista da Associação Médica Brasileira 60, no. 4 (July 2014): 283. http://dx.doi.org/10.1590/1806-9282.60.04.001.

38

Kang, Sing Bing, Yin Li, Xin Tong, and Heung-Yeung Shum. "Image-Based Rendering." Foundations and Trends® in Computer Graphics and Vision 2, no. 3 (2006): 173–258. http://dx.doi.org/10.1561/0600000012.

39

Ramamoorthi, Ravi. "Precomputation-Based Rendering." Foundations and Trends® in Computer Graphics and Vision 3, no. 4 (2007): 281–369. http://dx.doi.org/10.1561/0600000021.

40

Yi, Shinyoung, Donggun Kim, Kiseok Choi, Adrian Jarabo, Diego Gutierrez, and Min H. Kim. "Differentiable transient rendering." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–11. http://dx.doi.org/10.1145/3478513.3480498.

Abstract:
Recent differentiable rendering techniques have become key tools to tackle many inverse problems in graphics and vision. Existing models, however, assume steady-state light transport, i.e., infinite speed of light. While this is a safe assumption for many applications, recent advances in ultrafast imaging leverage the wealth of information that can be extracted from the exact time of flight of light. In this context, physically-based transient rendering allows us to efficiently simulate and analyze light transport considering that the speed of light is indeed finite. In this paper, we introduce a novel differentiable transient rendering framework to help bring the potential of differentiable approaches into the transient regime. To differentiate the transient path integral, we need to take into account that scattering events at path vertices are no longer independent; instead, tracking the time of flight of light requires treating such scattering events at path vertices jointly as a multidimensional, evolving manifold. We thus turn to the generalized transport theorem and introduce a novel correlated importance term, which links the time-integrated contribution of a path to its light throughput and allows us to handle discontinuities in the light and sensor functions. Last, we present results in several challenging scenarios where the time of flight of light plays an important role, such as optimizing indices of refraction, non-line-of-sight tracking with nonplanar relay walls, and non-line-of-sight tracking around two corners.
41

Meng, Xiaoxu, Ruofei Du, Matthias Zwicker, and Amitabh Varshney. "Kernel Foveated Rendering." Proceedings of the ACM on Computer Graphics and Interactive Techniques 1, no. 1 (July 25, 2018): 1–20. http://dx.doi.org/10.1145/3203199.

42

Fout, Nathaniel, and Kwan-Liu Ma. "Fuzzy Volume Rendering." IEEE Transactions on Visualization and Computer Graphics 18, no. 12 (December 2012): 2335–44. http://dx.doi.org/10.1109/tvcg.2012.227.

43

McMillan, Leonard, and Steven Gortler. "Image-based rendering." ACM SIGGRAPH Computer Graphics 33, no. 4 (November 4, 1999): 61–64. http://dx.doi.org/10.1145/345370.345415.

44

Malzbender, Tom. "Fourier volume rendering." ACM Transactions on Graphics 12, no. 3 (July 2, 1993): 233–50. http://dx.doi.org/10.1145/169711.169705.

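Malzbender's method rests on the Fourier projection-slice theorem: a line-integral projection of a volume equals the inverse 2D transform of a central slice through the volume's 3D spectrum, so each projection costs O(n² log n) after one precomputed 3D FFT. The axis-aligned case can be checked directly with NumPy (a sketch of the underlying theorem only, not the paper's filtered-resampling implementation for arbitrary view directions):

```python
import numpy as np

# Projection-slice theorem, axis-aligned case: summing a volume along z
# equals the inverse 2-D FFT of the kz = 0 slice of its 3-D FFT.
rng = np.random.default_rng(7)
volume = rng.random((16, 16, 16))

projection = volume.sum(axis=2)                # direct line-integral projection
spectrum_slice = np.fft.fftn(volume)[:, :, 0]  # central slice of the 3-D spectrum
reconstructed = np.fft.ifft2(spectrum_slice).real

assert np.allclose(projection, reconstructed)
```

For oblique viewing directions the central slice must be resampled from the 3D spectrum with a good interpolation filter, which is where most of the method's practical difficulty lies.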
45

Overbeck, Ryan S., Craig Donner, and Ravi Ramamoorthi. "Adaptive wavelet rendering." ACM Transactions on Graphics 28, no. 5 (December 2009): 1–12. http://dx.doi.org/10.1145/1618452.1618486.

46

Bowen, David. "Growth Rendering Device." Leonardo 42, no. 4 (August 2009): 362–63. http://dx.doi.org/10.1162/leon.2009.42.4.362.

47

Kajiya, James T. "The rendering equation." ACM SIGGRAPH Computer Graphics 20, no. 4 (August 31, 1986): 143–50. http://dx.doi.org/10.1145/15886.15902.

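Kajiya's equation, in the hemisphere form it is usually written in today (notation modernized; the paper itself states it as a two-point surface transport equation):

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o)
\;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
(\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here L_o is outgoing radiance at point x in direction ω_o, L_e is emitted radiance, f_r is the BRDF, L_i is incident radiance, and the integral runs over the hemisphere Ω about the surface normal n.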
48

Séquin, Carlo H., and Raymond Shiau. "Rendering Pacioli's rhombicuboctahedron." Journal of Mathematics and the Arts 9, no. 3-4 (July 24, 2015): 103–10. http://dx.doi.org/10.1080/17513472.2015.1068639.

49

Gerrard, Alan, and Nam Do. "Dynamic acoustic rendering." Journal of the Acoustical Society of America 123, no. 1 (2008): 20. http://dx.doi.org/10.1121/1.2832823.

50

Cholewiak, Steven, Gordon Love, and Martin Banks. "Rendering correct blur." Journal of Vision 17, no. 10 (August 31, 2017): 403. http://dx.doi.org/10.1167/17.10.403.
