Journal articles on the topic "Rendering"

Click this link to see other types of publications on this topic: Rendering.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Check out the top 50 scholarly journal articles on the topic "Rendering".

An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the relevant parameters are provided in the metadata.

Browse journal articles from a wide range of disciplines and compile appropriate bibliographies.

1

Blanco-Fernández, Vítor. "Rendering Volumetrically, Rendering Queerly". A Peer-Reviewed Journal About 11, no. 1 (October 18, 2022): 104–15. http://dx.doi.org/10.7146/aprja.v11i1.134308.

Abstract:
The main aim of this article is to describe the conceptual basis and challenges of the project Volumetric Frictions. Volumetric Frictions is a queer virtual reality resulting from my will to render my ongoing PhD research (“Digital Speculation, Volumetric Fictions. Volumetric/3D CGI within the queer contemporary debate”) differently. To contextualize the project, I start by addressing contemporary debates about the role of queer, and the practice of queering, in academic institutions. Then, I move forward to describe my PhD research and its provisional results. I name these results “volumetric frictions”, as they define crossing paths between queer theories and 3D/volumetric aesthetics. Finally, I summarize some of the challenges currently being faced in the design of the project. Throughout the article, I make use of contemporary 3D/volumetric art to illustrate ideas, concepts, and possible solutions.
2

Nimeroff, Jeffry S., Eero Simoncelli, Norman I. Badler, and Julie Dorsey. "Rendering Spaces for Architectural Environments". Presence: Teleoperators and Virtual Environments 4, no. 3 (January 1995): 286–96. http://dx.doi.org/10.1162/pres.1995.4.3.286.

Abstract:
We present a new framework for rendering virtual environments. This framework is proposed as a complete scene description, which embodies the space of all possible renderings, under all possible lighting scenarios of the given scene. In effect, this hypothetical rendering space includes all possible light sources as part of the geometric model. While it would be impractical to implement the general framework, this approach does allow us to look at the rendering problem in a new way. Thus, we propose new representations that are subspaces of the entire rendering space. Some of these subspaces are computationally tractable and may be carefully chosen to serve a particular application. The approach is useful both for real and virtual scenes. The framework includes methods for rendering environments which are illuminated by artificial light, natural light, or a combination of the two models.
3

Chandran, Prashanth, Sebastian Winberg, Gaspard Zoss, Jérémy Riviere, Markus Gross, Paulo Gotardo, and Derek Bradley. "Rendering with style". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–14. http://dx.doi.org/10.1145/3478513.3480509.

Abstract:
For several decades, researchers have been advancing techniques for creating and rendering 3D digital faces, where a lot of the effort has gone into geometry and appearance capture, modeling and rendering techniques. This body of research work has largely focused on facial skin, with much less attention devoted to peripheral components like hair, eyes and the interior of the mouth. As a result, even with the best technology for facial capture and rendering, in most high-end productions a lot of artist time is still spent modeling the missing components and fine-tuning the rendering parameters to combine everything into photo-real digital renders. In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). The result is a sequence of realistic face images that match the identity and appearance of the 3D character at the skin level, but is completed naturally with synthesized hair, eyes, inner mouth and surroundings. Notably, we present the first method for multi-frame consistent projection into this latent space, allowing photo-realistic rendering and preservation of the identity of the digital human over an animated performance sequence, which can depict different expressions, lighting conditions and viewpoints. Our method can be used in new face rendering pipelines and, importantly, in other deep learning applications that require large amounts of realistic training data with ground-truth 3D geometry, appearance maps, lighting, and viewpoint.
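
As a rough, hypothetical illustration of the projection step this abstract describes (a single-frame sketch only; the paper's multi-frame consistent projection is not captured here, and G, perceptual_loss, skin_render, and w_init are assumed placeholders rather than the authors' API):

```python
import torch

def project_to_latent(G, skin_render, w_init, perceptual_loss, steps=500, lr=0.01):
    """Optimize a latent code w so that the pre-trained generator G(w) reproduces a
    given skin-only face render; the generator then completes hair, eyes, and mouth."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = G(w)                                # full-head image synthesized from the latent
        loss = perceptual_loss(image, skin_render)  # match identity/appearance of the skin render
        loss.backward()
        opt.step()
    return w.detach()
```

Projecting each frame independently like this would flicker over an animation, which is exactly the multi-frame consistency problem the paper addresses.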
4

Kim, Ayoung, and Atanas Gotchev. "Spectral Rendering: From Input to Rendering Process". Electronic Imaging 36, no. 10 (January 21, 2024): 251–1. http://dx.doi.org/10.2352/ei.2024.36.10.ipas-251.
5

Camarini, Gladis, Lia Lorena Pimentel, and Natalia Haluska Rodrigues De Sá. "Assessment of the Material Loss in Walls Renderings with β-Hemihydrate Paste". Applied Mechanics and Materials 71-78 (July 2011): 1242–45. http://dx.doi.org/10.4028/www.scientific.net/amm.71-78.1242.

Abstract:
In civil construction, β-hemihydrate pastes have been used for decorative ornaments, plasterboards, dry-wall and renderings. The application procedure of β-hemihydrate pastes for rendering generates large amounts of waste due to their hydration kinetics; waste is also produced by the technique used to prepare and apply the material, so it is essential to minimize gypsum waste. The aim of this work was to quantify the amount of gypsum waste produced when the material is used for renderings, and to observe the influence of the workers applying the pastes on waste production. The application process of plaster renderings was followed, the waste generated was measured, and the time needed to finish the process was recorded. The results showed waste production values in the range of 16% to 47%, and the application technique influences the amount of waste produced.
6

Miner, Valerie, and Pat Barker. "Rendering Truth". Women's Review of Books 21, no. 10/11 (July 2004): 14. http://dx.doi.org/10.2307/3880372.
7

Frenkel, Karen A. "Volume rendering". Communications of the ACM 32, no. 4 (April 1989): 426–35. http://dx.doi.org/10.1145/63334.63335.
8

Watt, Alan. "Rendering techniques". ACM Computing Surveys 28, no. 1 (March 1996): 157–59. http://dx.doi.org/10.1145/234313.234380.
9

Ihrke, Ivo, Gernot Ziegler, Art Tevs, Christian Theobalt, Marcus Magnor, and Hans-Peter Seidel. "Eikonal rendering". ACM Transactions on Graphics 26, no. 3 (July 29, 2007): 59. http://dx.doi.org/10.1145/1276377.1276451.
10

Takala, Tapio, and James Hahn. "Sound rendering". ACM SIGGRAPH Computer Graphics 26, no. 2 (July 1992): 211–20. http://dx.doi.org/10.1145/142920.134063.
11

Christensen, Per H. "Contour rendering". ACM SIGGRAPH Computer Graphics 33, no. 1 (February 1999): 58–61. http://dx.doi.org/10.1145/563666.563688.
12

Drebin, Robert A., Loren Carpenter, and Pat Hanrahan. "Volume rendering". ACM SIGGRAPH Computer Graphics 22, no. 4 (August 1988): 65–74. http://dx.doi.org/10.1145/378456.378484.
13

Tamas, Sophie. "Sketchy Rendering". Qualitative Inquiry 15, no. 3 (October 15, 2008): 607–17. http://dx.doi.org/10.1177/1077800408318421.
14

Finkelstein, A., and L. Markosian. "Nonphotorealistic rendering". IEEE Computer Graphics and Applications 23, no. 4 (July 2003): 26–27. http://dx.doi.org/10.1109/mcg.2003.1210861.
15

Udupa, J. K., and D. Odhner. "Shell rendering". IEEE Computer Graphics and Applications 13, no. 6 (November 1993): 58–67. http://dx.doi.org/10.1109/38.252558.
16

Pang, A. "Spray rendering". IEEE Computer Graphics and Applications 14, no. 5 (September 1994): 57–63. http://dx.doi.org/10.1109/38.310727.
17

Cox, Geoff, and Christian Ulrik Andersen. "Rendering Research". A Peer-Reviewed Journal About 11, no. 1 (October 18, 2022): 4–9. http://dx.doi.org/10.7146/aprja.v11i1.134302.

Abstract:
To render is to give something “cause to be” or “hand over” (from the Latin reddere, “give back”) and to enter into an obligation to do or make something, like a decision. More familiar perhaps in computing, to render is to take an image or file and convert it into another format or apply a modification of some kind; in the case of 3D animation or scanning, to render is to animate an object or give it volume. In this issue we ask what it means to render research. How does the rendering of research typically reinforce certain limitations of thought and action? We ask these questions in the context of ever greater demands on researchers to produce academic outputs in standardised forms, in peer-reviewed journals and the like, legitimised by normative values. So, then, how to render research otherwise?
18

Sen, P., and S. Darabi. "Compressive Rendering: A Rendering Application of Compressed Sensing". IEEE Transactions on Visualization and Computer Graphics 17, no. 4 (April 2011): 487–99. http://dx.doi.org/10.1109/tvcg.2010.46.
19

Fiume, Eugene. "A mathematical semantics of rendering I: Ideal Rendering". Computer Vision, Graphics, and Image Processing 48, no. 1 (October 1989): 145. http://dx.doi.org/10.1016/0734-189x(89)90109-6.
20

Fiume, Eugene. "A mathematical semantics of rendering I: ideal rendering". Computer Vision, Graphics, and Image Processing 48, no. 3 (December 1989): 281–303. http://dx.doi.org/10.1016/0734-189x(89)90145-x.
21

Beeson, Brett, David G. Barnes, and Paul D. Bourke. "A Distributed Data Implementation of the Perspective Shear-Warp Volume Rendering Algorithm for Visualisation of Large Astronomical Cubes". Publications of the Astronomical Society of Australia 20, no. 3 (2003): 300–313. http://dx.doi.org/10.1071/as03039.

Abstract:
We describe the first distributed data implementation of the perspective shear-warp volume rendering algorithm and explore its applications to large astronomical data cubes and simulation realisations. Our system distributes sub-volumes of 3-dimensional images to leaf nodes of a Beowulf-class cluster, where the rendering takes place. Junction nodes composite the sub-volume renderings together and pass the combined images upwards for further compositing or display. We demonstrate that our system outperforms other software solutions and can render a 'worst-case' 512 × 512 × 512 data volume in less than four seconds using 16 rendering and 15 compositing nodes. Our system also performs very well compared with much more expensive hardware systems. With appropriate commodity hardware, such as Swinburne's Virtual Reality Theatre or a 3Dlabs Wildcat graphics card, stereoscopic display is possible.
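
The hierarchical compositing described above can be illustrated with a minimal sketch (not the authors' distributed implementation): sub-volume renderings, assumed here to be premultiplied RGBA images already ordered front to back, are merged pairwise with the "over" operator, as junction nodes would before passing results up the tree.

```python
import numpy as np

def over(front, back):
    """Front-to-back 'over' operator for premultiplied RGBA images of shape (H, W, 4)."""
    alpha_front = front[..., 3:4]
    return front + (1.0 - alpha_front) * back

def composite_tree(renderings):
    """Merge an ordered (front-to-back) list of sub-volume renderings pairwise,
    level by level, mimicking junction nodes in a compositing tree."""
    layer = list(renderings)
    while len(layer) > 1:
        merged = [over(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:  # an unpaired rendering is passed up unchanged
            merged.append(layer[-1])
        layer = merged
    return layer[0]

# Example: composite four 64x64 sub-volume renderings into one image.
subs = [np.random.rand(64, 64, 4).astype(np.float32) for _ in range(4)]
final_image = composite_tree(subs)
```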
22

Abdelkarim, Majda Babiker Ahmed, and Ali Albashir Mohammed Alhaj. "Cultural and Lexical Challenges Faced in Translating Some Selected Verses of Surat Maryam into English: A Thematic Comparative Review". International Journal of Linguistics, Literature and Translation 6, no. 2 (February 28, 2023): 178–84. http://dx.doi.org/10.32996/ijllt.2023.6.2.23.

Abstract:
Translating Arabic Qur’anic cultural and lexical expressions into English has always been a strenuous and complicated task; it is more problematic than the translation of any other genre. The present study scrutinizes the cultural and lexical challenges faced in translating some selected verses of Surat Maryam into English and the losses their renderings incur. Its foremost significance lies in how the three selected Quranic translators attempted to achieve adequate cultural and lexical equivalence when rendering the implicative and profound meanings of the cultural and lexical expressions in Surat Maryam. The study demonstrates that the three targeted Quranic translators' renderings encountered cultural and lexical challenges while translating the selected verses of Surat Maryam into English. It also finds that proper linguistic and explicative analyses are priorities for accurate translation, as they avert discrepancies in implicative meaning and rendering loss. The study concludes that the three notable Quranic translators employed literal, verbatim, semantic, and communicative translation methods in rendering the selected ayahs [verses] of Surat Maryam into English that comprise lexical and cultural challenges.
23

Roessle, Barbara, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, and Matthias Niessner. "GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields". ACM Transactions on Graphics 42, no. 6 (December 5, 2023): 1–14. http://dx.doi.org/10.1145/3618402.

Abstract:
Neural Radiance Fields (NeRF) have shown impressive novel view synthesis results; nonetheless, even thorough recordings yield imperfections in reconstructions, for instance due to poorly observed areas or minor lighting changes. Our goal is to mitigate these imperfections from various sources with a joint solution: we take advantage of the ability of generative adversarial networks (GANs) to produce realistic images and use them to enhance realism in 3D scene reconstruction with NeRFs. To this end, we learn the patch distribution of a scene using an adversarial discriminator, which provides feedback to the radiance field reconstruction, thus improving realism in a 3D-consistent fashion. Thereby, rendering artifacts are repaired directly in the underlying 3D representation by imposing multi-view path rendering constraints. In addition, we condition a generator with multi-resolution NeRF renderings which is adversarially trained to further improve rendering quality. We demonstrate that our approach significantly improves rendering quality, e.g., nearly halving LPIPS scores compared to Nerfacto while at the same time improving PSNR by 1.4dB on the advanced indoor scenes of Tanks and Temples.
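
A minimal sketch of the kind of adversarial patch feedback described above, assuming a PyTorch-style setting; nerf.render_patches, the discriminator, the optimizers, and the loss weight are illustrative placeholders, not the GANeRF implementation:

```python
import torch.nn.functional as F

def gan_nerf_step(nerf, disc, batch, opt_nerf, opt_disc, adv_weight=0.01):
    """One hypothetical training step: photometric reconstruction plus an adversarial
    term that pushes rendered patches toward the distribution of real image patches."""
    real = batch["gt_patches"]         # patches cropped from the captured images
    fake = nerf.render_patches(batch)  # patches rendered from the radiance field

    # Discriminator update: real patches vs. detached renderings (non-saturating GAN loss).
    opt_disc.zero_grad()
    loss_d = F.softplus(-disc(real)).mean() + F.softplus(disc(fake.detach())).mean()
    loss_d.backward()
    opt_disc.step()

    # Radiance-field update: reconstruction loss plus adversarial feedback on the patches.
    opt_nerf.zero_grad()
    loss = F.mse_loss(fake, real) + adv_weight * F.softplus(-disc(fake)).mean()
    loss.backward()
    opt_nerf.step()
```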
24

Yin, Sai. "Explore the Use of Computer-Aided Design in the Landscape Renderings". Applied Mechanics and Materials 687-691 (November 2014): 1166–69. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1166.

Abstract:
Landscape renderings are a key medium for presenting a landscape design project and an important form for scrutinising its programme. Producing them by means of computer-aided design has become the industry mainstream, as it offers visualised performance, a reversible process, and diversified visual effects. Computer rendering involves three stages, namely modeling, rendering, and image processing, each supported by corresponding professional graphics software; "3DSMAX + VRAY + PHOTOSHOP" is a typical workflow for producing landscape renderings.
25

Kennelly, Patrick J., and A. Jon Kimerling. "Non-Photorealistic Rendering and Terrain Representation". Cartographic Perspectives, no. 54 (June 1, 2006): 35–54. http://dx.doi.org/10.14714/cp54.345.

Abstract:
In recent years, a branch of computer graphics termed non-photorealistic rendering (NPR) has defined its own niche in the computer graphics community. While photorealistic rendering attempts to render virtual objects into images that cannot be distinguished from a photograph, NPR looks at techniques designed to achieve other ends. Its goals can be as diverse as imitating an artistic style, mimicking a look comparable to images created with specific reproduction techniques, or adding highlights and details to images. In doing so, NPR has overlapped the study of cartography concerned with representing terrain in two ways. First, NPR has formulated several techniques that are similar or identical to antecedent terrain rendering techniques including inclined contours and hachures. Second, NPR efforts to highlight or add information in renderings often focus on the use of innovative and meaningful combinations of visual variables such as orientation and color. Such efforts are similar to recent terrain rendering research focused on methods to symbolize disparate areas of slope and aspect on shaded terrain representations. We compare these fields of study in an effort to increase awareness and foster collaboration between researchers with similar interests.
26

Mileto, Camilla, Fernando Vegas, and Vincenzina La Spina. "Is Gypsum External Rendering Possible? The Use of Gypsum Mortar for Rendering Historic Façades of Valencia's City Centre". Advanced Materials Research 250-253 (May 2011): 1301–4. http://dx.doi.org/10.4028/www.scientific.net/amr.250-253.1301.

Abstract:
Valencia is a city in the east of Spain on the Mediterranean Sea. It has a large historic centre with ancient winding streets containing buildings of singular architectural heritage. The buildings' façades are protected by a traditional external rendering, sometimes in a poor state of conservation or modified or replaced in restoration works. The study carried out on historic renderings in Valencia points to the widespread use of gypsum mortar or gypsum-lime mortar, among other peculiarities. Gypsum external rendering is one of the many uses for gypsum mortars in Valencia's traditional architecture. This fact contradicts the general belief that lime mortars were used exclusively for rendering façades. Knowledge of the characteristics of historic mortars will allow proper restoration of architectural heritage with suitable mortars, as it is essential to guarantee the adherence and compatibility of any repair.
27

Dashti Shafii, Ali, Babak Monir Abbasi, and Shiva Jabari. "Manual Rendering Techniques in Architecture". International Journal of Engineering and Technology 8, no. 2 (February 2016): 141–45. http://dx.doi.org/10.7763/ijet.2016.v6.874.
28

Dashti Shafii, Ali, Babak Monir Abbasi, and Shiva Jabari. "Manual Rendering Techniques in Architecture". International Journal of Engineering and Technology 8, no. 2 (February 2016): 141–45. http://dx.doi.org/10.7763/ijet.2016.v8.874.
29

Wu, Xiuchao, Jiamin Xu, Zihan Zhu, Hujun Bao, Qixing Huang, James Tompkin, and Weiwei Xu. "Scalable neural indoor scene rendering". ACM Transactions on Graphics 41, no. 4 (July 2022): 1–16. http://dx.doi.org/10.1145/3528223.3530153.

Abstract:
We propose a scalable neural scene reconstruction and rendering method to support distributed training and interactive rendering of large indoor scenes. Our representation is based on tiles. Tile appearances are trained in parallel through a background sampling strategy that augments each tile with distant scene information via a proxy global mesh. Each tile has two low-capacity MLPs: one for view-independent appearance (diffuse color and shading) and one for view-dependent appearance (specular highlights, reflections). We leverage the phenomenon that complex view-dependent scene reflections can be attributed to virtual lights underneath surfaces at the total ray distance to the source. This lets us handle sparse samplings of the input scene where reflection highlights do not always appear consistently in input images. We show interactive free-viewpoint rendering results from five scenes, one of which covers an area of more than 100 m². Experimental results show that our method produces higher-quality renderings than a single large-capacity MLP and five recent neural proxy-geometry and voxel-based baseline methods. Our code and data are available at the project webpage https://xchaowu.github.io/papers/scalable-nisr.
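
The per-tile split into a view-independent and a view-dependent branch can be illustrated with a toy sketch (made-up layer sizes and inputs, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TileAppearance(nn.Module):
    """Toy version of one tile's two low-capacity MLPs: a diffuse branch driven by
    position only, and a specular branch driven by position plus view direction."""
    def __init__(self, hidden=64):
        super().__init__()
        self.diffuse = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 3), nn.Sigmoid())
        self.specular = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, position, view_dir):
        base = self.diffuse(position)                                       # diffuse color and shading
        highlight = self.specular(torch.cat([position, view_dir], dim=-1))  # highlights, reflections
        return torch.clamp(base + highlight, 0.0, 1.0)

# Example query: colors for 1024 surface points of one tile under given view directions.
tile = TileAppearance()
xyz = torch.rand(1024, 3)
views = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb = tile(xyz, views)  # (1024, 3)
```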
30

Kobimbo, Mary Mercy. "The Translation of יהוה: Part 2, The Case of Dholuo". Bible Translator 73, no. 2 (August 2022): 213–26. http://dx.doi.org/10.1177/20516770221105876.

Abstract:
The first part of this study (TBT 72[1]: 50–60) reviewed the history of the rendering of the key term יהוה YHWH in Bible translations into Dholuo (spoken in southwestern Kenya and northwestern Tanzania). This second part considers the translation of this key term within the context of modern Dholuo language and culture. The different renderings in two existing translations are analyzed and put in the broader perspective of Bible translation in Africa. Finally, the paper proposes a rendering for יהוה that does justice to the Dholuo culture and tradition, while maintaining the specific characteristics that are present in the source text.
31

Tavares, Martha, Maria do Rosário Veiga, and Ana Fragata. "Conservation of old renderings - the consolidation of rendering with loss of cohesion". Conservar Património 8 (2008): 13–19. http://dx.doi.org/10.14568/cp8_3.
32

M.Jawad, Najat. "Rendering of ‘La'ala’ & ‘Asaa’ in the Holy Quran". Al-Adab Journal, no. 146 (September 15, 2023): 1–14. http://dx.doi.org/10.31973/aj.v2i146.3985.

Abstract:
The object of the study is to examine the English (TT) translation of لعل la’ala and عسى ‘asaa in the Arabic text (ST) of the Holy Quran. The two items have the same meaning with small differences. Their renderings are investigated and analyzed semantically according to the contextual and cohesive meaning of certain texts, i.e. the aayas (verses). The question is: do the translators (al-Hilali & Khan, and Irving) succeed in rendering the meanings of the two items from Arabic (source text) into English (target text), and are there differences between the two translations, or between the two items? The study finds that the translators have relatively succeeded in rendering la’ala and ‘asaa. However, it is shown that Arabic (SL) is semantically more precise than English, and the latter has limited expressions compared to the many meanings of the Quranic expressions. Moreover, the two translations do not differ in rendering them, though Irving's is more adequate than al-Hilali & Khan's.
33

Arend, Johannes M., Melissa Ramírez, Heinrich R. Liesefeld, and Christoph Pörschmann. "Do near-field cues enhance the plausibility of non-individual binaural rendering in a dynamic multimodal virtual acoustic scene?" Acta Acustica 5 (2021): 55. http://dx.doi.org/10.1051/aacus/2021048.

Abstract:
It is commonly believed that near-field head-related transfer functions (HRTFs) provide perceptual benefits over far-field HRTFs that enhance the plausibility of binaural rendering of nearby sound sources. However, to the best of our knowledge, no study has systematically investigated whether using near-field HRTFs actually provides a perceptually more plausible virtual acoustic environment. To assess this question, we conducted two experiments in a six-degrees-of-freedom multimodal augmented reality experience where participants had to compare non-individual anechoic binaural renderings based on either synthesized near-field HRTFs or intensity-scaled far-field HRTFs and judge which of the two rendering methods led to a more plausible representation. Participants controlled the virtual sound source position by moving a small handheld loudspeaker along a prescribed trajectory laterally and frontally near the head, which provided visual and proprioceptive cues in addition to the auditory cues. The results of both experiments show no evidence that near-field cues enhance the plausibility of non-individual binaural rendering of nearby anechoic sound sources in a dynamic multimodal virtual acoustic scene as examined in this study. These findings suggest that, at least in terms of plausibility, the additional effort of including near-field cues in binaural rendering may not always be worthwhile for virtual or augmented reality applications.
34

Wetherill, P. M., and Lawrence R. Schehr. "Rendering French Realism". Modern Language Review 95, no. 2 (April 2000): 514. http://dx.doi.org/10.2307/3736191.
35

Lessig. "Rendering Sensible Salient". Good Society 27, no. 1-2 (2019): 171. http://dx.doi.org/10.5325/goodsociety.27.1-2.0171.
36

Bednarik, Robert G. "Rendering Humanities Sustainable". Humanities 1, no. 1 (October 19, 2011): 64–71. http://dx.doi.org/10.3390/h1010064.
37

Cardoso, Florentino. "Rendering of accounts". Revista da Associação Médica Brasileira 60, no. 4 (July 2014): 283. http://dx.doi.org/10.1590/1806-9282.60.04.001.
38

Kang, Sing Bing, Yin Li, Xin Tong, and Heung-Yeung Shum. "Image-Based Rendering". Foundations and Trends® in Computer Graphics and Vision 2, no. 3 (2006): 173–258. http://dx.doi.org/10.1561/0600000012.
39

Ramamoorthi, Ravi. "Precomputation-Based Rendering". Foundations and Trends® in Computer Graphics and Vision 3, no. 4 (2007): 281–369. http://dx.doi.org/10.1561/0600000021.
40

Yi, Shinyoung, Donggun Kim, Kiseok Choi, Adrian Jarabo, Diego Gutierrez, and Min H. Kim. "Differentiable transient rendering". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–11. http://dx.doi.org/10.1145/3478513.3480498.

Abstract:
Recent differentiable rendering techniques have become key tools to tackle many inverse problems in graphics and vision. Existing models, however, assume steady-state light transport, i.e., infinite speed of light. While this is a safe assumption for many applications, recent advances in ultrafast imaging leverage the wealth of information that can be extracted from the exact time of flight of light. In this context, physically-based transient rendering allows to efficiently simulate and analyze light transport considering that the speed of light is indeed finite. In this paper, we introduce a novel differentiable transient rendering framework, to help bring the potential of differentiable approaches into the transient regime. To differentiate the transient path integral we need to take into account that scattering events at path vertices are no longer independent; instead, tracking the time of flight of light requires treating such scattering events at path vertices jointly as a multidimensional, evolving manifold. We thus turn to the generalized transport theorem, and introduce a novel correlated importance term, which links the time-integrated contribution of a path to its light throughput, and allows us to handle discontinuities in the light and sensor functions. Last, we present results in several challenging scenarios where the time of flight of light plays an important role such as optimizing indices of refraction, non-line-of-sight tracking with nonplanar relay walls, and non-line-of-sight tracking around two corners.
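
For orientation only, a commonly used simplified form of the transient path integral that such work builds on can be written as below; this is a generic statement of the idea (path contributions gated by a delta on the total time of flight), not the cited paper's exact formulation.

```latex
% Sensor response at time t: contributions f(\bar{x}) over path space \mathcal{P},
% gated by the time of flight \tau(\bar{x}) of the path \bar{x} = (x_0, \dots, x_n).
I(t) = \int_{\mathcal{P}} f(\bar{x})\, \delta\!\big(\tau(\bar{x}) - t\big)\, \mathrm{d}\mu(\bar{x}),
\qquad
\tau(\bar{x}) = \frac{1}{c} \sum_{k=1}^{n} \lVert x_k - x_{k-1} \rVert
```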
41

Meng, Xiaoxu, Ruofei Du, Matthias Zwicker, and Amitabh Varshney. "Kernel Foveated Rendering". Proceedings of the ACM on Computer Graphics and Interactive Techniques 1, no. 1 (July 25, 2018): 1–20. http://dx.doi.org/10.1145/3203199.
42

Fout, Nathaniel, and Kwan-Liu Ma. "Fuzzy Volume Rendering". IEEE Transactions on Visualization and Computer Graphics 18, no. 12 (December 2012): 2335–44. http://dx.doi.org/10.1109/tvcg.2012.227.
43

McMillan, Leonard, and Steven Gortler. "Image-based rendering". ACM SIGGRAPH Computer Graphics 33, no. 4 (November 4, 1999): 61–64. http://dx.doi.org/10.1145/345370.345415.
44

Malzbender, Tom. "Fourier volume rendering". ACM Transactions on Graphics 12, no. 3 (July 2, 1993): 233–50. http://dx.doi.org/10.1145/169711.169705.
45

Overbeck, Ryan S., Craig Donner, and Ravi Ramamoorthi. "Adaptive wavelet rendering". ACM Transactions on Graphics 28, no. 5 (December 2009): 1–12. http://dx.doi.org/10.1145/1618452.1618486.
46

Bowen, David. "Growth Rendering Device". Leonardo 42, no. 4 (August 2009): 362–63. http://dx.doi.org/10.1162/leon.2009.42.4.362.
47

Kajiya, James T. "The rendering equation". ACM SIGGRAPH Computer Graphics 20, no. 4 (August 31, 1986): 143–50. http://dx.doi.org/10.1145/15886.15902.
48

Séquin, Carlo H., and Raymond Shiau. "Rendering Pacioli's rhombicuboctahedron". Journal of Mathematics and the Arts 9, no. 3-4 (July 24, 2015): 103–10. http://dx.doi.org/10.1080/17513472.2015.1068639.
49

Gerrard, Alan, and Nam Do. "Dynamic acoustic rendering". Journal of the Acoustical Society of America 123, no. 1 (2008): 20. http://dx.doi.org/10.1121/1.2832823.
50

Cholewiak, Steven, Gordon Love, and Martin Banks. "Rendering correct blur". Journal of Vision 17, no. 10 (August 31, 2017): 403. http://dx.doi.org/10.1167/17.10.403.