A ready-made bibliography on the topic "Rendu 3D"
Create accurate references in APA, MLA, Chicago, Harvard, and many other styles
Consult lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Rendu 3D".
An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.
Journal articles on the topic "Rendu 3D"
Bailly, Sean. "Un œil entier rendu transparent pour l’imagerie 3D". Pour la Science N° 554 – décembre, no. 12 (5.12.2023): 10. http://dx.doi.org/10.3917/pls.554.0010.
Nakkabi, Ismail, Mohammed Ridal, Najib Benmansour, Karim Nadour, Ali El Boukhari and Mohammed Noureddine El Amine El Alami. "ANATOMIE DE L'OREILLE INTERNE : LES CANAUX SEMI-CIRCULAIRES D'APRES UNE MODELISATION TRIDIMENSIONNELLE". International Journal of Advanced Research 10, no. 06 (30.06.2022): 288–94. http://dx.doi.org/10.21474/ijar01/14886.
Legeai, Audrey, and Gwenola Thomas. "Sélection de traits caractéristiques d'objets 3D lisses pour le rendu non photoréaliste". Techniques et sciences informatiques 26, no. 8 (30.11.2007): 945–74. http://dx.doi.org/10.3166/tsi.26.945-974.
Lharti, Habiba, Colette Siriex, Joëlle Riss, Cécile Verdet and Delphine Lacanette. "Choix d’une méthode de classification pour la partition d’un massif rocheux à partir de TRE". E3S Web of Conferences 342 (2022): 02004. http://dx.doi.org/10.1051/e3sconf/202234202004.
Schenkel, Arnaud, Rudy Ercek and Olivier Debeir. "Numérisation 3D de l’Hôtel de Ville de Bruxelles et de la statue du saint Michel : exploitation architecturale et archéologique". Studia Bruxellae N° 12, no. 1 (2.10.2018): 29–36. http://dx.doi.org/10.3917/stud.012.0029.
Monezi, Vinicius Giovani, and Roberto Hirochi Okada. "MÉTODO DE PRODUÇÃO DE CALÇADO IMPRESSO EM 3D". Revista Interface Tecnológica 18, no. 1 (3.11.2021): 513–24. http://dx.doi.org/10.31510/infa.v18i1.1120.
Ribeiro, Felipe Garcia, Marcos Vinicio Wink Junior, Thais Waideman Niquito and Ândrea Leite Bergamann. "Diplomados, mas desinteressados pelo mercado de trabalho ou desempregados : a geração 3D". Pesquisa e Planejamento Econômico (PPE), v. 51, n. 01 (15.04.2021): 51–71. http://dx.doi.org/10.38116/ppe51n1art2.
Kertész, Tamás. "Egy sportszer élete: szerből rendszer : Bemutatkozik a variálható sport létra 3d sport/rend/szer". Acta Universitatis de Carolo Eszterházy Nominatae. Sectio Sport, no. 48 (2020): 65–74. http://dx.doi.org/10.33040/actauniveszterhazysport.2020.1.65.
Heinkelé, Christophe, Pierre Charbonnier, Philippe Foucher and Emmanuel Moisan. "Pré-localisation des données pour la modélisation 3D de tunnels : développements et évaluations". Revue Française de Photogrammétrie et de Télédétection 1, no. 221 (2.03.2020): 49–63. http://dx.doi.org/10.52638/rfpt.2019.440.
Romanowicz, Barbara. "Imagerie globale de la Terre par les ondes sismiques". Reflets de la physique, no. 56 (January 2018): 4–9. http://dx.doi.org/10.1051/refdp/201856004.
Pełny tekst źródłaRozprawy doktorskie na temat "Rendu 3D"
Baele, Xavier. "Génération et rendu 3D temps réel d'arbres botaniques". Doctoral thesis, Universite Libre de Bruxelles, 2003. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211314.
Bleron, Alexandre. "Rendu stylisé de scènes 3D animées temps-réel". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM060/document.
Pełny tekst źródłaThe goal of stylized rendering is to render 3D scenes in the visual style intended by an artist.This often entails reproducing, with some degree of automation,the visual features typically found in 2D illustrationsthat constitute the "style" of an artist.Examples of these features include the depiction of light and shade,the representation of the contours of objects,or the strokes on a canvas that make a painting.This field is relevant today in domains such as computer-generated animation orvideo games, where studios seek to differentiate themselveswith styles that deviate from photorealism.In this thesis, we explore stylization techniques that can be easilyinserted into existing real-time rendering pipelines, and propose two novel techniques in this domain.Our first contribution is a workflow that aims to facilitatethe design of complex stylized shading models for 3D objects.Designing a stylized shading model that follows artistic constraintsand stays consistent under a variety of lightingconditions and viewpoints is a difficult and time-consuming process.Specialized shading models intended for stylization existbut are still limited in the range of appearances and behaviors they can reproduce.We propose a way to build and experiment with complex shading modelsby combining several simple shading behaviors using a layered approach,which allows a more intuitive and efficient exploration of the design space of shading models.In our second contribution, we present a pipeline to render 3D scenes in painterly styles,simulating the appearance of brush strokes,using a combination of procedural noise andlocal image filtering in screen-space.Image filtering techniques can achieve a wide range of stylized effects on 2D pictures and video:our goal is to use those existing filtering techniques to stylize 3D scenes,in a way that is coherent with the underlying animation or camera movement.This is not a trivial process, as naive approaches to filtering in 
screen-spacecan introduce visual inconsistencies around the silhouette of objects.The proposed method ensures motion coherence by guiding filters with informationfrom G-buffers, and ensures a coherent stylization of silhouettes in a generic way
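The layered idea summarized in this abstract can be illustrated with a minimal toy sketch: a Lambert diffuse term quantized into flat bands, layered with a rim-light behavior. All function names, band counts, and weights below are illustrative assumptions, not code from the thesis.

```python
def toon_diffuse(n_dot_l, bands=3):
    """Quantize a Lambert diffuse term into flat bands (cel shading)."""
    n_dot_l = max(0.0, min(1.0, n_dot_l))
    # Snap the continuous intensity to one of `bands` discrete levels.
    return min(int(n_dot_l * bands), bands - 1) / (bands - 1)

def rim_light(n_dot_v, width=0.3):
    """Binary rim term: lit where the surface grazes the view direction."""
    return 1.0 if (1.0 - abs(n_dot_v)) > (1.0 - width) else 0.0

def layered_shade(n_dot_l, n_dot_v, rim_weight=0.25):
    """Combine two simple shading behaviors into one layered model."""
    return min(1.0, toon_diffuse(n_dot_l) + rim_weight * rim_light(n_dot_v))
```

Each layer stays simple and predictable on its own; the expressive range comes from how the layers are weighted and combined.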
Tobor, Ireneusz. "Utilisation des surfels dans le rendu des surfaces 3D". Bordeaux 1, 2002. http://www.theses.fr/2002BOR12640.
Boehm, Mathilde. "Contribution à l'amélioration du rendu volumique de données médicales 3D". Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1271.
Duguet, Florent. "Rendu et reconstruction de très gros nuages de points 3D". Nice, 2005. http://www.theses.fr/2005NICE4031.
Cunat, Christophe. "Accélération matérielle pour le rendu de scènes multimédia vidéo et 3D". Phd thesis, Télécom ParisTech, 2004. http://tel.archives-ouvertes.fr/tel-00077593.
Pełny tekst źródłaCette thèse s'inscrit dans le cadre de la composition d'objets visuels qui peuvent être de natures différentes (séquences vidéo, images fixes, objets synthétiques 3D, etc.). Néanmoins, les puissances de calcul nécessaires afin d'effectuer cette composition demeurent prohibitives sans mise en place d'accélérateurs matériels spécialisés et deviennent critiques dans un contexte de terminal portable.
Une revue tant algorithmique qu'architecturale des différents domaines est effectuée afin de souligner à la fois les points de convergence et de différence. Ensuite, trois axes (interdépendants) de réflexions concernant les problématiques de représentation des données, d'accès aux données et d'organisation des traitements sont principalement discutés.
Ces réflexions sont alors appliquées au cas concret d'un terminal portable pour la labiophonie : application de téléphonie où le visage de l'interlocuteur est reconstruit à partir d'un maillage de triangles et d'un placage de texture. Une architecture unique d'un compositeur d'image capable de traiter indifféremment ces objets visuels est ensuite définie. Enfin, une synthèse sur une plateforme de prototypage de cet opérateur autorise une comparaison avec des solutions existantes, apparues pour la plupart au cours de cette thèse.
Chakib, Reda. "Acquisition et rendu 3D réaliste à partir de périphériques "grand public"". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0101/document.
Digital imaging, from image synthesis to computer vision, is undergoing a strong evolution, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is experiencing a rapid rise, contributes to the strong demand for this type of camera for 3D scanning needs. The objective of this thesis is to acquire and master know-how in the field of the capture and acquisition of 3D models, in particular regarding the rendered aspect. The realization of a 3D scanner from an RGB-D camera is part of this goal. During the acquisition phase, especially with a hand-held device, two main problems arise: the problem of the reference frame of each capture, and the final rendering of the reconstructed object.
Cunat, Christophe. "Accélération matérielle pour le rendu de scènes multimédia vidéo et 3D /". Paris : École nationale supérieure des télécommunications, 2004. http://catalogue.bnf.fr/ark:/12148/cb399010770.
Moulin, Samuel. "Quel son spatialisé pour la vidéo 3D ? : influence d'un rendu Wave Field Synthesis sur l'expérience audio-visuelle 3D". Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05H102/document.
The digital entertainment industry is undergoing a major evolution due to the recent spread of stereoscopic 3D video. It is now possible to experience 3D by watching movies, playing video games, and so on. In this context, video catches most of the attention, but what about the accompanying audio rendering? Today, the most commonly used sound reproduction technologies are based on lateralization effects (stereophony, 5.1 surround systems). Nevertheless, it is natural to wonder about the need for a new audio technology adapted to this new visual dimension: depth. Many alternative technologies seem able to render 3D sound environments (binaural technologies, ambisonics, Wave Field Synthesis). Using these technologies could potentially improve users' quality of experience: it could strengthen the feeling of realism by adding audio-visual spatial congruence, as well as the sensation of immersion. In order to validate this hypothesis, a 3D audio-visual rendering system was set up. The visual rendering provides stereoscopic 3D images and is coupled with a Wave Field Synthesis sound rendering. Three research axes are then studied: 1/ Depth perception using unimodal or bimodal presentations. How well does the audio-visual system render the depth of visual, sound, and audio-visual objects? The conducted experiments show that Wave Field Synthesis can render virtual sound sources perceived at different distances. Moreover, visual and audio-visual objects can be localized with higher accuracy than sound objects. 2/ Crossmodal integration in the depth dimension. How can the perception of congruence be guaranteed when audio-visual stimuli are spatially misaligned? The extent of the integration window was studied at different visual object distances; in other words, according to the visual stimulus position, we studied where sound objects should be placed to produce the perception of a single, unified audio-visual stimulus.
3/ 3D audio-visual quality of experience. What does sound depth rendering contribute to the 3D audio-visual quality of experience? We first assessed today's quality of experience using sound systems dedicated to the playback of 5.1 soundtracks (a 5.1 surround system, headphones, a soundbar) in combination with 3D videos. We then studied the impact of sound depth rendering using the audio-visual system built for this work (3D videos and Wave Field Synthesis).
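As a rough illustration of how Wave Field Synthesis places a virtual sound source at a given distance, the sketch below computes per-loudspeaker delays from source-to-speaker distances. The array geometry, source position, and helper name are example assumptions, not figures or code from the thesis.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 °C

def wfs_delays(source, speakers, c=SPEED_OF_SOUND):
    """Per-loudspeaker delays (seconds) so that the emitted wavefronts
    align as if radiated from the virtual source position."""
    return [math.dist(source, spk) / c for spk in speakers]

# Virtual source 2 m behind a 3-speaker line array placed along y = 0.
delays = wfs_delays((0.0, -2.0), [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)])
```

The center speaker, closest to the virtual source, fires last; outer speakers fire earlier, which is what shapes the curved wavefront of a source behind the array.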
Decaudin, Philippe. "Modélisation par fusion de formes 3D pour la synthèse d'images : rendu de scènes 3D imitant le style "dessin animé"". Compiègne, 1996. http://www.theses.fr/1996COMPD938.
In the main section, we introduce new tools for modeling three-dimensional objects for computer graphics. They allow interactive modeling of smooth shapes such as organic-looking forms (animals, human bodies) and help with animating and texturing them. A complex object is created by applying a succession of fusion and twist deformations to a simple object. The fusion tool deforms the shape of the object by merging it with a simple 3D shape (sphere, ellipsoid, etc.); the object is deformed so that it embeds the simple shape. The twist tool creates articulations which can be used to animate the deformable object. In a second section, we introduce a non-photorealistic rendering algorithm. It produces images with the appearance of a traditional cartoon from a 3D description of the scene (static or animated). The 3D scene is rendered with techniques that outline the profiles and edges of objects, color the patches uniformly, and render the shadows (self-shadows and projected shadows) due to the light sources.
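A common, generic way to find the profiles mentioned in this abstract (a textbook silhouette test, not necessarily the exact method of the thesis) is to flag surface points whose normal is nearly perpendicular to the view direction:

```python
def is_silhouette(normal, view_dir, eps=0.2):
    """True where the surface normal is nearly perpendicular to the
    view direction, i.e. on the visual contour of the object."""
    n_dot_v = sum(n * v for n, v in zip(normal, view_dir))
    return abs(n_dot_v) < eps  # assumes unit-length input vectors

# A face seen edge-on lies on the silhouette; a face seen head-on does not.
edge_on = is_silhouette((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
head_on = is_silhouette((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Points passing this test are then drawn as dark outlines over the uniformly colored patches, producing the cartoon look.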
Books on the topic "Rendu 3D"
Couwenbergh, Jean-Pierre. AutoCAD 3D: Modélisation et rendu. Paris: Eyrolles, 2007.
Cardoso, Jamie. 3D Photorealistic Rendering: Product Designs. CRC Press, 2021.
Cardoso, Jamie. 3D Photorealistic Rendering: Interiors and Exteriors with V-Ray and 3ds Max. CRC Press LLC, 2017.
3D Photorealistic Rendering: Interiors and Exteriors with V-Ray and 3ds Max. Taylor & Francis Group, 2016.
3D Photorealistic Rendering: Interiors and Exteriors with V-Ray and 3ds Max. CRC Press LLC, 2017.
3D Photorealistic Rendering: Interiors and Exteriors with V-Ray and 3ds Max. Taylor & Francis Group, 2015.
L'impression 3D Me Rend Heureux Vous Pas Assez: Un Carnet de Notes Ligné Pour les Passionnés et Fan de L'impression 3D. Independently Published, 2019.
Marques, Marcia Alessandra Arantes, ed. Saúde Única: Uma Abordagem Multidisciplinar. Bookerfield Editora, 2022. http://dx.doi.org/10.53268/bkf22080300.
Pełny tekst źródłaStreszczenia konferencji na temat "Rendu 3D"
Saltos García, Cynthia Estefanía, Vanessa Dayana Díaz Quishpe, Bryan Alejandro Reyes Analuisa and Christian Iván Mejía Escobar. "Generation of digital maps in 2D and 3D as essential tools for geological and mining works (Case study: India)". In VIII Congreso Internacional de Investigación REDU. Medwave, 2022. http://dx.doi.org/10.5867/medwave.2022.s1.ci32.
Sapountzaki, Galini, Eleni Efthimiou, Stavroula-Evita Fotinea, Katerina Papadimitriou and Gerasimos Potamianos. "3D GREEK SIGN LANGUAGE CLASSIFIERS AS A LEARNING OBJECT IN THE SL-REDU ONLINE EDUCATION PLATFORM". In 14th International Conference on Education and New Learning Technologies. IATED, 2022. http://dx.doi.org/10.21125/edulearn.2022.1449.