Doctoral dissertations on the topic "Rendu 3D"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles
Consult the 45 best doctoral dissertations for your research on the topic "Rendu 3D".
Next to every source in the list of references, there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in ".pdf" format and read the abstract of the work online, if the corresponding details are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.
Baele, Xavier. "Génération et rendu 3D temps réel d'arbres botaniques". Doctoral thesis, Universite Libre de Bruxelles, 2003. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211314.
Bleron, Alexandre. "Rendu stylisé de scènes 3D animées temps-réel". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM060/document.
The goal of stylized rendering is to render 3D scenes in the visual style intended by an artist. This often entails reproducing, with some degree of automation, the visual features typically found in 2D illustrations that constitute the "style" of an artist. Examples of these features include the depiction of light and shade, the representation of the contours of objects, or the strokes on a canvas that make a painting. This field is relevant today in domains such as computer-generated animation or video games, where studios seek to differentiate themselves with styles that deviate from photorealism. In this thesis, we explore stylization techniques that can be easily inserted into existing real-time rendering pipelines, and propose two novel techniques in this domain. Our first contribution is a workflow that aims to facilitate the design of complex stylized shading models for 3D objects. Designing a stylized shading model that follows artistic constraints and stays consistent under a variety of lighting conditions and viewpoints is a difficult and time-consuming process. Specialized shading models intended for stylization exist, but are still limited in the range of appearances and behaviors they can reproduce. We propose a way to build and experiment with complex shading models by combining several simple shading behaviors in a layered approach, which allows a more intuitive and efficient exploration of the design space of shading models. In our second contribution, we present a pipeline to render 3D scenes in painterly styles, simulating the appearance of brush strokes, using a combination of procedural noise and local image filtering in screen space. Image filtering techniques can achieve a wide range of stylized effects on 2D pictures and video: our goal is to use these existing filtering techniques to stylize 3D scenes in a way that is coherent with the underlying animation or camera movement. This is not a trivial process, as naive approaches to filtering in screen space can introduce visual inconsistencies around the silhouettes of objects. The proposed method ensures motion coherence by guiding filters with information from G-buffers, and ensures a coherent stylization of silhouettes in a generic way.
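The G-buffer-guided filtering idea above can be illustrated with a minimal sketch (not the thesis's actual pipeline): a cross-bilateral blur whose weights collapse across depth discontinuities read from a depth buffer, so the stylization does not bleed across silhouettes. The function name and parameters are hypothetical.

```python
import math

def depth_guided_blur(color, depth, radius=2, sigma_d=0.1):
    """Cross-bilateral blur: pixels are averaged only with neighbors of
    similar depth, so stylization does not bleed across silhouettes."""
    h, w = len(color), len(color[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # weight drops sharply across depth discontinuities
                        dd = depth[ny][nx] - depth[y][x]
                        wgt = math.exp(-(dd * dd) / (2 * sigma_d ** 2))
                        acc += wgt * color[ny][nx]
                        wsum += wgt
            out[y][x] = acc / wsum
    return out
```

On a frame split by a depth edge, the left and right sides blur independently, which is the motion-coherence property the abstract alludes to.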
Tobor, Ireneusz. "Utilisation des surfels dans le rendu des surfaces 3D". Bordeaux 1, 2002. http://www.theses.fr/2002BOR12640.
Boehm, Mathilde. "Contribution à l'amélioration du rendu volumique de données médicales 3D". Paris, ENMP, 2004. http://www.theses.fr/2004ENMP1271.
Duguet, Florent. "Rendu et reconstruction de très gros nuages de points 3D". Nice, 2005. http://www.theses.fr/2005NICE4031.
Cunat, Christophe. "Accélération matérielle pour le rendu de scènes multimédia vidéo et 3D". PhD thesis, Télécom ParisTech, 2004. http://tel.archives-ouvertes.fr/tel-00077593.
This thesis addresses the composition of visual objects of different natures (video sequences, still images, synthetic 3D objects, etc.). The computing power required to perform this composition remains prohibitive without dedicated hardware accelerators, and becomes critical in the context of a portable terminal.
A review of the relevant fields, both algorithmic and architectural, highlights their points of convergence and divergence. Three interdependent lines of inquiry are then discussed, concerning data representation, data access, and the organization of processing.
These considerations are applied to the concrete case of a portable terminal for labiophony: a telephony application in which the caller's face is reconstructed from a triangle mesh and texture mapping. A single image-compositor architecture capable of handling all these visual objects uniformly is then defined. Finally, a synthesis of this operator on a prototyping platform allows a comparison with existing solutions, most of which appeared during the course of this thesis.
Chakib, Reda. "Acquisition et rendu 3D réaliste à partir de périphériques "grand public"". Thesis, Limoges, 2018. http://www.theses.fr/2018LIMO0101/document.
Digital imaging, from image synthesis to computer vision, is experiencing a strong evolution, due among other factors to the democratization and commercial success of 3D cameras. In the same context, consumer 3D printing, which is experiencing a rapid rise, contributes to the strong demand for this type of camera for 3D scanning needs. The objective of this thesis is to acquire and master know-how in the field of capture and acquisition of 3D models, in particular regarding the rendered result. The realization of a 3D scanner from an RGB-D camera is part of this goal. During the acquisition phase, especially for a portable device, there are two main problems: registering each capture into a common reference frame, and the final rendering of the reconstructed object.
Cunat, Christophe. "Accélération matérielle pour le rendu de scènes multimédia vidéo et 3D /". Paris : École nationale supérieure des télécommunications, 2004. http://catalogue.bnf.fr/ark:/12148/cb399010770.
Moulin, Samuel. "Quel son spatialisé pour la vidéo 3D ? : influence d'un rendu Wave Field Synthesis sur l'expérience audio-visuelle 3D". Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05H102/document.
The digital entertainment industry is undergoing a major evolution due to the recent spread of stereoscopic 3D videos. It is now possible to experience 3D by watching movies, playing video games, and so on. In this context, video catches most of the attention, but what about the accompanying audio rendering? Today, the most commonly used sound reproduction technologies are based on lateralization effects (stereophony, 5.1 surround systems). Nevertheless, it is quite natural to wonder about the need to introduce a new audio technology adapted to this new visual dimension: depth. Several alternative technologies seem able to render 3D sound environments (binaural technologies, ambisonics, Wave Field Synthesis). Using these technologies could potentially improve the user's quality of experience: it could impact the feeling of realism by adding audio-visual spatial congruence, but also the sensation of immersion. In order to validate this hypothesis, a 3D audio-visual rendering system was set up. The visual rendering provides stereoscopic 3D images and is coupled with Wave Field Synthesis sound rendering. Three research axes are then studied. 1/ Depth perception using unimodal or bimodal presentations: how well can the audio-visual system render the depth of visual, sound, and audio-visual objects? The conducted experiments show that Wave Field Synthesis can render virtual sound sources perceived at different distances; moreover, visual and audio-visual objects can be localized with higher accuracy than sound objects. 2/ Crossmodal integration in the depth dimension: how can the perception of congruence be guaranteed when audio-visual stimuli are spatially misaligned? The extent of the integration window was studied at different visual-object distances; in other words, according to the visual stimulus position, we studied where sound objects should be placed to produce the perception of a single, unified audio-visual stimulus. 3/ 3D audio-visual quality of experience: what is the contribution of sound depth rendering to the 3D audio-visual quality of experience? We first assessed today's quality of experience using sound systems dedicated to the playback of 5.1 soundtracks (a 5.1 surround system, headphones, a soundbar) in combination with 3D videos. Then, we studied the impact of sound depth rendering using the set-up audio-visual system (3D videos and Wave Field Synthesis).
Decaudin, Philippe. "Modélisation par fusion de formes 3D pour la synthèse d'images : rendu de scènes 3D imitant le style "dessin animé"". Compiègne, 1996. http://www.theses.fr/1996COMPD938.
In the main section, we introduce new tools for modeling three-dimensional objects for computer graphics. They allow interactive modeling of smooth shapes such as organic-looking shapes (animals, human bodies) and help with animating and texturing them. A complex object is created by applying a succession of fusion and twist deformations to a simple object. The fusion tool deforms the shape of the object by merging it with a simple 3D shape (sphere, ellipsoid, ...); the object is deformed so that it embeds the simple shape. The twist tool allows the creation of articulations which can be used to animate the deformable object. In a second section, we introduce a non-photorealistic rendering algorithm. It produces images with the appearance of a traditional cartoon from a 3D description of the scene (static or animated). The 3D scene is rendered with techniques that outline the profiles and edges of objects, color the patches uniformly, and render shadows (self-shadows and cast shadows) due to light sources.
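The cartoon-style rendering described above rests on two classic ingredients, flat color bands and outlined silhouettes. A minimal sketch of both (illustrative only, not Decaudin's actual algorithm; the names and thresholds are assumptions):

```python
def toon_shade(n_dot_l, bands=3):
    """Quantize the Lambertian diffuse term into flat bands (cel shading)."""
    t = max(0.0, min(1.0, n_dot_l))
    # snap to the lower edge of each band to get uniformly colored patches
    return min(int(t * bands), bands - 1) / (bands - 1)

def is_silhouette(n_dot_v, threshold=0.2):
    """A surface point is drawn as an outline when its normal is
    nearly perpendicular to the view direction."""
    return abs(n_dot_v) < threshold
```

Here `n_dot_l` and `n_dot_v` stand for the dot products of the surface normal with the light and view directions, respectively.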
Bénard, Pierre. "Stylisation temporellement cohérente d'animations 3D basée sur des textures". PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00630112.
Michel, Élie. "Interactive authoring of 3D shapes represented as programs". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT027.
Although hardware and techniques have considerably improved over the years at handling heavy content, digital 3D creation remains fairly complex, partly because the bottleneck also lies in the cognitive load imposed on designers. A recent shift to a higher-order representation of shapes, encoding them as computer programs that generate their geometry, enables creation pipelines that better manage this cognitive load, but it also comes with its own sources of friction. In this thesis we study the new challenges and opportunities introduced by program-based representations of 3D shapes in the context of digital content authoring. We investigate ways for the interaction with the shapes to remain as much as possible in 3D space, rather than operating on abstract symbols in program space. This includes both assisting the creation of the program, by allowing manipulation in 3D space while still ensuring good generalization upon changes of the free variables of the program, and helping users tune these variables by enabling direct manipulation of the program's output. We explore the diversity of program-based representations, focusing on various paradigms of visual programming interfaces, from imperative directed acyclic graphs (DAGs) to declarative Wang tiles, through more hybrid approaches. In all cases we study shape programs that evaluate at interactive rates, so that they fit into a creation process, and we push this further by studying synergies between program-based representations and real-time rendering pipelines. We enable the use of direct manipulation methods on DAG output thanks to automated rewriting rules and a non-linear filtering of differential data. We help the creation of imperative shape programs by turning geometric selections into semantic queries, and of declarative programs by proposing an interface-first editing scheme for authoring 3D content with Wang tiles. We extend tiling engines to handle continuous tile parameters and arbitrary slot graphs, and to suggest new tiles to add to the set. We blend shape programs into the visual feedback loop by delegating tile-content evaluation to the real-time rendering pipeline or by exploiting the program's semantics to drive an impostor-based level-of-detail system. Overall, our series of contributions aims at leveraging program-based representations of shapes to make the authoring of 3D digital scenes more of an artistic act and less of a technical task.
Fülöp-Balogh, Beatrix-Emőke. "Acquisition multi-vues et rendu de scènes animées". Thesis, Lyon, 2021. http://www.theses.fr/2021LYSE1308.
Recent technological breakthroughs have led to an abundance of consumer-friendly video recording devices. Nowadays new smartphone models, for instance, are equipped not only with multiple cameras but also with depth sensors. This means that any event can easily be captured by several different devices and technologies at the same time, and it raises the question of how to process the data in order to render a meaningful 3D scene. Most current solutions focus on static scenes only: LiDAR scanners produce extremely accurate depth maps, and multi-view stereo algorithms can reconstruct a scene in 3D from a handful of images. However, these ideas are not directly applicable to dynamic scenes. Depth sensors trade accuracy for speed, or vice versa, and color-image-based methods suffer from temporal inconsistencies or are too computationally demanding. In this thesis we aim to provide consumer-friendly solutions that fuse multiple, possibly heterogeneous, technologies to reconstruct and render dynamic 3D scenes. First, we introduce an algorithm that corrects distortions produced by small motions in time-of-flight acquisitions and outputs a corrected animated sequence. We do so by combining a slow but high-resolution time-of-flight LiDAR system with a fast but low-resolution consumer depth sensor. We cast the problem as a curve-to-volume registration, seeing the LiDAR point cloud as a curve in 4-dimensional spacetime and the captured low-resolution depth video as a 4-dimensional spacetime volume. We then advect the details of the high-resolution point cloud to the depth video using its optical flow. Second, we tackle the reconstruction and rendering of dynamic scenes captured by multiple RGB cameras. In casual settings, the two problems are hard to merge: structure from motion (SfM) produces spatio-temporally unstable and sparse point clouds, while the rendering algorithms that rely on the reconstruction need to produce temporally consistent videos. To ease the challenge, we consider the two steps together. First, for SfM, we recover stable camera poses, then we defer the requirement for temporally consistent points across the scene and reconstruct only a sparse point cloud per timestep that is noisy in space-time. Second, for rendering, we present a variational diffusion formulation on depths and colors that lets us robustly cope with the noise by enforcing spatio-temporal consistency via per-pixel reprojection weights derived from the input views. Overall, our work contributes to the understanding of the acquisition and rendering of casually captured dynamic scenes.
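The idea of a diffusion on depths anchored by per-pixel confidence weights can be sketched in one dimension (a toy illustration, not the thesis's variational formulation; function and parameter names are assumptions): unknown pixels relax toward the average of their neighbors, while pixels with high reprojection confidence stay pinned to their data.

```python
def diffuse_depths(sparse, weights, iters=200):
    """Fill holes in a 1D depth scanline by weighted diffusion.
    `weights[i]` in [0, 1] is the confidence in `sparse[i]`."""
    n = len(sparse)
    d = [s if w > 0 else 0.0 for s, w in zip(sparse, weights)]
    for _ in range(iters):
        nxt = d[:]
        for i in range(n):
            neigh = []
            if i > 0:
                neigh.append(d[i - 1])
            if i < n - 1:
                neigh.append(d[i + 1])
            smooth = sum(neigh) / len(neigh)
            # blend the data term and the smoothness term by confidence
            nxt[i] = weights[i] * sparse[i] + (1 - weights[i]) * smooth
        d = nxt
    return d
```

With only the two endpoints observed, the diffusion converges to a linear interpolation between them, which is the expected behavior of a harmonic fill.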
Lerbour, Raphaël. "Chargement progressif et rendu adaptatif de vastes terrains". PhD thesis, Rennes 1, 2009. https://theses.hal.science/docs/00/46/16/67/PDF/thesis_lerbour_final.pdf.
In this thesis, we propose solutions for adaptive streaming and rendering of arbitrarily large terrains. One interesting application is interactive 3D visualization of the Earth on a computer while the required data is loaded from a huge database over a network. In the first part of this thesis, we introduce a generic solution to handle sample maps of any size, from a server hard disk all the way to a client rendering system. Our methods adapt to the speed of the network and of the rendering, and avoid handling redundant data. In the second part, we build upon this generic solution to propose real-time 3D rendering of large textured terrains on graphics hardware. In addition, we support planetary terrains and use techniques that prevent handling redundant data and limit the typical rendering inconsistencies due to map projection and rendering precision. Finally, we propose preprocessing algorithms that allow server databases to be built from huge sample maps.
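A common way to realize adaptive terrain rendering of this kind is a screen-space-error test: pick the coarsest level of detail whose projected geometric error stays below a tolerance. The sketch below is an assumption-laden toy, not Lerbour's actual refinement criterion.

```python
def lod_level(distance, base_error=1.0, screen_tolerance=2.0, max_level=8):
    """Pick the coarsest terrain level whose projected geometric error
    stays under a screen-space tolerance: far blocks need less detail."""
    level = 0
    error = base_error
    # each coarser level doubles the geometric error, and the projected
    # error shrinks linearly with viewer distance
    while level < max_level and (error * 2) / max(distance, 1e-6) <= screen_tolerance:
        error *= 2
        level += 1
    return level
```

Nearby terrain blocks thus resolve to fine levels and distant blocks to coarse ones, which is the adaptivity property the abstract describes.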
Lerbour, Raphaël. "Chargement progressif et rendu adaptatif de vastes terrains". PhD thesis, Université Rennes 1, 2009. http://tel.archives-ouvertes.fr/tel-00461667.
Graglia, Florian. "Amélioration du photon mapping pour un scénario walkthrough dans un objectif de rendu physiquement réaliste en temps réel". Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4072.
One of the goals when developing a product is to obtain a realistic and valid prototype as early as possible. This thesis provides new rendering methods to increase the quality of the simulations used during the upstream work of the production pipeline, which usually requires walkthrough rendering. We therefore focus on physically based rendering methods for complex scenes in walkthrough scenarios. During rendering, end users must be able to measure illumination levels and to interactively modify the power of the light sources to test different lighting ambiances. Based on the original photon mapping method, our work shows how some modifications can decrease the computation time and improve the quality of the resulting images in this specific context.
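The photon mapping method this work builds on estimates radiance from the local density of stored photons: gather the k nearest photons and divide their total power by the area of the disc they cover. A minimal 2D sketch of that classic estimate (illustrative only, not Graglia's modified version):

```python
import math

def radiance_estimate(photons, point, k=3):
    """Classic photon-map density estimate: total power of the k nearest
    photons divided by the area of the gather disc that contains them.
    `photons` is a list of (position, power) pairs with 2D positions."""
    by_dist = sorted(photons, key=lambda p: math.dist(p[0], point))
    nearest = by_dist[:k]
    r = math.dist(nearest[-1][0], point)  # radius of the gather disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r * r)
```

A real implementation would use a kd-tree instead of a full sort; the brute-force search here just keeps the estimate readable.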
Grabli, Stéphane. "Le style dans le rendu non-photoréaliste de dessins au trait à partir de scènes 3D : une approche programmable". PhD thesis, Université Joseph Fourier (Grenoble), 2005. http://tel.archives-ouvertes.fr/tel-00009401.
Vanhoey, Kenneth. "Traitement conjoint de la géométrie et de la radiance d'objets 3D numérisés". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD005/document.
The vision and computer graphics communities have built methods for digitizing, processing and rendering 3D objects. There is an increasing demand from cultural communities for these technologies, especially for archiving, remotely studying and restoring cultural artefacts like statues, buildings or caves. Besides digitizing geometry, there can be a demand for recovering the photometry with more or less complexity: simple textures (2D), light fields (4D), SV-BRDF (6D), etc. In this thesis, we present robust solutions for constructing and processing surface light fields represented by hemispherical radiance functions attached to the surface, acquired in real-world, on-site conditions. First, we tackle the algorithmic reconstruction phase of defining these functions from photographic acquisitions taken from several viewpoints in real-world, on-site conditions; that is, the photographic sampling may be unstructured and very sparse or noisy. We propose a process for deducing the functions in a manner that is robust and generates a surface light field that may vary from "expected" and artefact-free to high quality, depending on the uncontrolled conditions. Secondly, a mesh simplification algorithm is guided by a new metric that measures quality loss in terms of both geometry and radiance. Finally, we propose a GPU-compatible algorithm that allows for coherent radiance interpolation over the mesh. This generates a smooth visualization of the surface light field, even for poorly tessellated meshes, and is particularly suited to heavily simplified models.
Pujades, Rocamora Sergi. "Modèles de caméras et algorithmes pour la création de contenu video 3D". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM039/document.
Optics with long focal lengths have been extensively used for shooting 2D cinema and television, either to virtually get closer to the scene or to produce an aesthetic effect through the deformation of perspective. However, in 3D cinema or television, the use of a long focal length either creates a "cardboard effect" or causes visual divergence. To overcome this problem, state-of-the-art methods use disparity mapping techniques, a generalization of view interpolation, to generate new stereoscopic pairs from the two image sequences. We propose to use more than two cameras to solve the issues remaining with disparity mapping methods. In the first part of the thesis, we review the causes of visual fatigue and visual discomfort when viewing a stereoscopic film. We then model the depth perception from stereopsis of a 3D scene shot with two cameras and projected in a movie theater or on a 3DTV. We mathematically characterize this 3D distortion and derive the mathematical constraints associated with the causes of visual fatigue and discomfort. We illustrate these 3D distortions with a new interactive software tool, the "Virtual Projection Room". In order to generate the desired stereoscopic images, we propose to use image-based rendering. These techniques usually proceed in two stages: first, the input images are warped into the target view, then the warped images are blended together. The warps are usually computed with the help of a geometric proxy (either implicit or explicit). Image blending has been extensively addressed in the literature, and a few heuristics have proven to achieve very good performance; yet the combination of these heuristics is not straightforward and requires the manual adjustment of many parameters. In this thesis, we propose a new Bayesian approach to the problem of novel view synthesis, based on a generative model taking into account the uncertainty of the image warps in the image formation model. The Bayesian formalism allows us to deduce the energy of the generative model and to compute the desired images as the maximum a posteriori estimate. The method outperforms state-of-the-art image-based rendering techniques on challenging datasets. Moreover, the energy equations provide a formalization of the heuristics widely used in image-based rendering techniques. The proposed generative model also addresses the problem of super-resolution, allowing images to be rendered at a higher resolution than the initial ones. In the last part of this thesis, we apply the new rendering technique to the case of stereoscopic zoom and show its performance.
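Under a Gaussian image-formation model, the maximum a posteriori estimate of a blended pixel reduces to a precision-weighted average of the warped views, each weighted by the inverse of its warp uncertainty. The sketch below illustrates only this textbook special case, not the thesis's full energy.

```python
def blend_views(warped, variances):
    """Precision-weighted per-pixel blend of warped views.
    `warped[v][i]` is the intensity of view v at pixel i;
    `variances[v][i]` is the warp uncertainty of that sample."""
    n = len(warped[0])
    out = []
    for i in range(n):
        # weight each view by the inverse of its variance (its precision)
        wsum = sum(1.0 / var[i] for var in variances)
        val = sum(img[i] / var[i] for img, var in zip(warped, variances))
        out.append(val / wsum)
    return out
```

With equal variances this degenerates to a plain average; an uncertain view is smoothly down-weighted, which is the behavior the blending heuristics in the literature approximate.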
Leray, Pascal. "Modélisation et architecture de machines de synthèse d'images pour la représentation d'images et le rendu d'objets tri-dimensionnels (3D) sur écrans graphiques à balayage". Paris 2, 1990. http://www.theses.fr/1990PA020117.
Solid modelling and image rendering are the two fundamental tasks of 3D image synthesis. (A) Solid modelling is used to represent an internal 3D database, strongly correlated with the mental representation of the world in the human brain. (B) Image rendering is used to display realistic images of our 3D world; it can be coupled with human drawing abilities. At present, however, these two tasks are not associated in our graphics workstations, unlike in the human being, who analyses in real time what he draws and compares it with his mental representations of the environment. The first goal of this report is to analyse the existing tools currently used for tasks A and B. We then describe two new solutions: a direct digitizing system for 3D objects, and a new architecture for rendering. Finally, we introduce the concept of neural networks and explain how it could be used to perform tasks A and B, as it is associated in the human brain with image analysis by shape-from-shading techniques. The conclusion is a projection onto future image synthesis systems, in which 3D object analysis based on shape from shading and neural networks could be tightly coupled with image rendering. These two processes could then become reversible, opening large new possibilities for computer-aided design and picture databases.
Li, Rui. "Multi-scale simplification and visualization of large 3D models". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS051.
This thesis focuses on the creation and interactive visualization of surface meshes derived from massive data, such as those obtained from laser remote-sensing scanners or multiple-view image captures. Recent advancements in the development, miniaturization, and widespread accessibility of these capture devices aim to provide a detailed representation of reality. The raw data resulting from these captures, whether in geographic information, archaeology, or medicine, often pose challenges due to their voluminous and disorganized nature. To address this challenge, the first step in a processing pipeline typically involves the creation of a high-resolution mesh. The chosen solution must effectively mitigate measurement noise and accurately reproduce the topology of the target object. As capture devices have advanced in resolution, visualization hardware has improved in memory capacity and processing speed. However, it has become impractical to interactively visualize high-resolution meshes containing tens of millions or even billions of elements. Handling such a data flow in hardware and transferring it between pipeline components present significant challenges. The subsequent step involves constructing a hierarchy of simplified meshes derived from the high-resolution mesh, with each simplification introducing a controlled loss of detail. While it might appear that processing capacity comes at the expense of detail, for visualization purposes only the apparent level of detail matters: portions distant from the viewpoint require a lower absolute level of detail than those nearby. This enables the subdivision of the different mesh levels into subparts, dynamically selected and recombined to adapt to changes in viewpoint while ensuring an optimal apparent level of detail.
In the first part of the thesis, we introduce an approach aimed at addressing the two main challenges inherent in the simultaneous management of multiple pieces of mesh at different resolutions: mesh junction issues (known as "mesh cracks") and temporal discontinuities caused by resolution changes (referred to as "LOD popping"). This approach, situated at the intersection of techniques based on mesh morphing and those rooted in multi-triangulations, continues a long lineage of research in these areas. We accompany it with both a reference implementation and a lightweight visualization prototype serving as a proof of concept. The preprocessing phase, responsible for constructing the hierarchy from the high-resolution mesh, achieves a processing capacity on the order of one million triangles per second per processing core, an improvement of a substantial order of magnitude over the existing solutions we have been able to evaluate.
In the second part of the thesis, we delve into the construction of high-resolution meshes from a point cloud, specifically in the case of airborne LiDAR. We demonstrate that the reliability of conventional methods for normal estimation can be significantly enhanced by taking into account data specific to such surveys, including point timestamps and laser beam angles. This improvement has a crucial impact on the quality of the subsequent surface mesh reconstruction using the Poisson method, which derives the mesh as a level surface of a scalar function, itself obtained by solving a Poisson problem in which the oriented point cloud defines the source term. We apply these methods to recent open data from the Swiss (SwissSurface3D) and French (Lidar HD) LiDAR programs, demonstrating that they enable a more faithful reproduction than classical methods based on regular altitude grids (digital elevation models), particularly in mountainous areas.
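Two steps in the normal-estimation argument above can be sketched simply (illustrative toys, not the thesis's estimator): fitting a local plane normal from neighboring points, and resolving the sign ambiguity of that normal using the laser beam direction toward the sensor, which is exactly the kind of survey-specific information airborne LiDAR provides.

```python
def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three neighboring 3D points."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)

def orient_normal(normal, point, sensor_pos):
    """Flip the normal so it faces the LiDAR sensor: the laser beam
    direction removes the sign ambiguity of locally fitted normals."""
    view = tuple(s - p for s, p in zip(sensor_pos, point))
    dot = sum(n * v for n, v in zip(normal, view))
    return tuple(-n for n in normal) if dot < 0 else tuple(normal)
```

Consistently oriented normals are what make the source term of the subsequent Poisson reconstruction well defined.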
Holländer, Matthias. "Synthèse géométrique temps réel". Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0009/document.
Real-time geometry synthesis is an emerging topic in computer graphics. Today's interactive 3D applications have to face a variety of challenges to fulfill the consumer's request for more realism and high-quality images. Often, the visual effects and quality known from offline-rendered feature films or special effects in movie productions are the ultimate goal, but hard to achieve in real time. This thesis offers real-time solutions by exploiting the Graphics Processing Unit (GPU) and efficient geometry processing. In particular, a variety of topics related to classical fields in computer graphics, such as subdivision surfaces, global illumination and anti-aliasing, are discussed, and new approaches and techniques are presented.
Sarton, Jonathan. "Visualisations interactives haute-performance de données volumiques massives : une approche out-of-core multi-résolution basée GPUs". Thesis, Reims, 2018. http://www.theses.fr/2018REIMS022/document.
This thesis work is part of the PIA2 project 3DNeuroSecure, which aims to provide a collaborative system for interactive multi-scale navigation within visual big data (VDB), with ultra-high-definition (teravoxel), potentially multimodal 3D biomedical imaging as the application framework. In addition, this system will be able to integrate a variety of processing steps and/or annotations (tags) through remote HPC resources. All of these treatments must be possible in an out-of-core context. Because of the size of the visual big data, the location of acquisition has to be decoupled from that of storage and high-performance computation, and from that of data manipulation (on various connected devices, mobile or not: smartphone, PC, large display wall, virtual-reality room, ...). The streaming visualization will be adapted to the user's device in terms of both resolution (Full HD to gigapixel) and 3D rendering (classic rendering on 2D screens, stereoscopic with glasses, or autostereoscopic without glasses). All these developments, supported by CReSTIC with the help of MaSCA (Maison de la Simulation de Champagne-Ardenne), can be summarized as: the definition and implementation of data structures adapted to the out-of-core visualization of the targeted visual big data; the adaptation of the partners' specific treatments, such as interactive 3D rendering, to these new data structures; and the technical architecture choices for HPC and the virtualization of the navigation software, to take advantage of "ROMEO", the local datacenter. The auto-/stereoscopic rendering, with or without glasses, will be operated within the MINT software of the Université de Reims Champagne-Ardenne.
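Out-of-core multi-resolution volume systems of the kind described typically page data in fixed-size bricks and key the cache by resolution level and brick coordinates. The sketch below is a minimal illustration under that assumption, not the project's actual data structure.

```python
def brick_key(x, y, z, level, brick_size=32):
    """Cache key of the brick covering voxel (x, y, z) at a given
    resolution level: each coarser level halves the voxel grid, so one
    brick covers a region twice as large along each axis."""
    s = brick_size << level  # extent (in finest-level voxels) of one brick
    return (level, x // s, y // s, z // s)
```

An out-of-core cache can then fetch, evict and reuse bricks purely through such keys, without ever holding the full teravoxel volume in memory.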
Rocher, Pierre-Olivier. "Transmodalité de flux d'images de synthèse". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET2026/document.
Pełny tekst źródłaThe use of video as an information dissemination support has become preponderant during the last few years. According to some analysts, by 2017 approximately 90% of the world's bandwidth will be consumed by video streaming services. These services have encouraged cloud gaming solutions to become more democratic. Such solutions have been devised in the context of strong development of the cloud-computing paradigm, and they were driven by the proliferation of mobile devices as well as growing network quality. The technologies used in this kind of solutions refer to as remote rendering. They allow the execution of multiple applications, while maximizing the number of clients per server. Thus, it is essential to control the necessary bandwidth to allow the required functionality of various services. The existing cloud gaming solutions in the literature use various methods of video compression to transmit images between sever and clients (pixels reigns supreme). However, there are various other ways of encoding digital images, including parametric map storage and a number of studies encourage this approach (for both image and video). In this thesis, we propose a hybrid representation of space in order to reduce the bit rate. Our approach utilizes both pixel and parametric approaches for the compression of video stream. The use of two compression techniques requires defining the area to be covered by different encoders. This is accomplished by including user to the life cycle of rendering, and attending to the area mostly concerned to the user. In order to identify the area an eye-tracker device was used on several games and several testers. We also establish a correlation between the characteristics of images and the type of game. This helps to identify areas that the player looks directly or indirectly (“maps of selective attention"), and thus, encoders are manager accordingly. 
In this thesis, we detail and implement the architecture and algorithms of such a multi-modal encoder (which we call a "transmodeur") as a proof of concept. We also provide an analytical study of our model and of the influence of various parameters on the transmodeur, and demonstrate its effectiveness through an objective study. Our transmodeur (rendering system) has been successfully integrated into the XLcloud project for rendering purposes. A number of improvements (especially in performance) will be required for production use, but it can already be used smoothly at spatial resolutions slightly below 720p at 30 frames per second.
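The abstract stops at the idea of gaze-driven encoder assignment; as a rough illustration (entirely ours, not code from the thesis; block size and threshold are invented), routing frame blocks between a pixel codec and a parametric codec from a selective-attention map might look like this:

```python
import numpy as np

def assign_codecs(attention, block=16, threshold=0.5):
    """Route each block of the frame to an encoder: blocks the player is
    likely to look at keep the faithful pixel codec, the rest fall back
    to the cheaper parametric codec."""
    rows, cols = attention.shape[0] // block, attention.shape[1] // block
    routing = np.empty((rows, cols), dtype=object)
    for i in range(rows):
        for j in range(cols):
            patch = attention[i * block:(i + 1) * block,
                              j * block:(j + 1) * block]
            routing[i, j] = "pixel" if patch.mean() >= threshold else "parametric"
    return routing

# Toy selective-attention map: gaze concentrated in the frame centre.
att = np.zeros((64, 64))
att[16:48, 16:48] = 1.0
routing = assign_codecs(att)
print(routing[1, 1], routing[0, 0])  # pixel parametric
```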
Schertzer, Jérémie. "Exploiting modern GPUs architecture for real-time rendering of massive line sets". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT037.
In this thesis, we consider massive line sets generated from brain tractograms. They describe neural connections represented with millions of poly-line fibers, summing up to billions of segments. Thanks to the two-stage mesh shader pipeline, we build a tractogram renderer surpassing state-of-the-art performance by two orders of magnitude. Our performance comes from fiblets: a compressed representation of segment blocks. By combining temporal coherence and morphological dilation on the z-buffer, we define a fast occlusion culling test for fiblets. Thanks to our heavily optimized parallel decompression algorithm, surviving fiblets are swiftly synthesized into poly-lines. We also showcase how our fiblet pipeline speeds up advanced tractogram interaction features. For the general case of line rendering, we propose morphological marching: a screen-space technique rendering custom-width tubes from the thin rasterized lines of the G-buffer. By approximating a tube as the union of spheres densely distributed along its axis, the sphere shading each pixel is retrieved using a multi-pass neighborhood propagation filter. Accelerated by the compute pipeline, we reach real-time performance for the rendering of depth-dependent wide lines. To conclude our work, we implement a virtual reality prototype combining fiblets and morphological marching. It makes possible, for the first time, the immersive visualization of huge tractograms with fast shading of thick fibers, thus paving the way for diverse perspectives.
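The dilation-based occlusion test can be sketched on the CPU (our own approximation; the thesis operates on GPU z-buffers, and a min filter on depth values plays the role of morphologically dilating the rasterized occluders so that gaps between thin fibers still occlude):

```python
import numpy as np
from scipy.ndimage import grey_erosion  # min filter: spreads *near* depths

def dilate_occluders(zbuffer, size=3):
    """Close the gaps between thin rasterized lines by spreading the
    nearest (smallest) depth values into neighbouring pixels, an
    approximation of the morphological dilation step described above."""
    return grey_erosion(zbuffer, size=(size, size))

def is_occluded(depth_map, xs, ys, z_near):
    """Cull a block when its nearest depth lies behind every depth
    sample inside its projected screen footprint."""
    return bool(np.all(z_near > depth_map[ys, xs]))

# Two thin "fibers" (depth 0.2) with a one-pixel background gap between them:
z = np.full((5, 5), 1.0)
z[:, 1] = z[:, 3] = 0.2
dilated = dilate_occluders(z)
# A block behind the fibers, seen only through the gap, is culled after dilation:
print(is_occluded(dilated, np.array([2]), np.array([2]), 0.5))  # True
print(is_occluded(z, np.array([2]), np.array([2]), 0.5))        # False
```

Note that the test becomes approximate rather than conservative, which is why the thesis pairs it with temporal coherence.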
Bouville, Rozenn. "Interopérabilité des environnements virtuels 3D : modèle de réconciliation des contenus et des composants logiciels". Phd thesis, INSA de Rennes, 2012. http://tel.archives-ouvertes.fr/tel-00909107.
Zanuttini, Antoine. "Du photoréalisme au rendu expressif en image 3D temps réel dans le jeu vidéo : programmation graphique pour la profondeur de champ, la matière, la réflexion, les fluides et les contours". Paris 8, 2012. http://octaviana.fr/document/171326563#?c=0&m=0&s=0&cv=0.
This study seeks to go beyond standardized video game aesthetics by adding new depiction techniques for real-time digital imagery. Photorealistic rendering is often limited in control and flexibility for an artist who seeks to go beyond fidelity to the real. Achieving credibility and immersion then often requires stylization, so that the image is more convincing and aesthetic. Expressive rendering goes further by embracing the artist's personal vision, basing itself on real-life attributes and phenomena while altering them. We will show that photorealism and expressive rendering join and complete each other in numerous respects. Three themes related to photorealism are presented, followed by original techniques of our own. The theme of depth of field leads us to consider the shape of the virtual camera's lens through the Hexagonal Summed Area Table algorithm. We then look at material, light, and especially ambient and specular reflections, and the importance of parallax correction for them. Our third theme is the rendering of fluid motion and the advection of textures along the flow for easy and efficient addition of detail. These three subjects are then integrated into expressive rendering and used as expression tools for the artist, through the creation of a dream effect, of screen-space rendered fluids, and of hatched material shading. Finally, we present our creations specifically dedicated to expressive rendering and stroke stylization.
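The Hexagonal Summed Area Table named in the abstract generalizes the classic rectangular SAT, whose key property is that a box average of any radius (e.g. a depth-of-field blur kernel) costs four table lookups. The standard primitive can be sketched as follows (textbook technique, not code from the thesis):

```python
import numpy as np

def summed_area_table(img):
    """2D prefix sums: sat[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, x0, y0, x1, y1):
    """Average over the inclusive rectangle [x0,x1] x [y0,y1] using
    four SAT lookups, independent of the blur radius."""
    total = sat[y1, x1]
    if x0 > 0:
        total -= sat[y1, x0 - 1]
    if y0 > 0:
        total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        total += sat[y0 - 1, x0 - 1]
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))

img = np.arange(16, dtype=float).reshape(4, 4)
sat = summed_area_table(img)
print(box_mean(sat, 1, 1, 2, 2))  # mean of img[1:3, 1:3] -> 7.5
```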
Chaurasia, Gaurav. "Algorithmes et analyses perceptuelles pour la navigation interactive basée image". Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00976621.
Hadim, Julien. "Etude en vue de la multirésolution de l’apparence". Thesis, Bordeaux 1, 2009. http://www.theses.fr/2009BOR13794/document.
In recent years, the Bidirectional Texture Function (BTF) has emerged as a flexible solution for realistic, real-time rendering of materials with complex appearance at low computing cost. However, one drawback of this approach is the resulting huge amount of data: several methods have been proposed to compress and manage it. In this document, we propose a new BTF representation that improves data coherency and thus allows better compression. In a first part, we study acquisition and digital generation methods for BTFs and, more particularly, compression methods suitable for GPU rendering. We then conduct a study with our software BTFInspect to determine which of the visual phenomena present in BTFs mainly drive per-texel data coherence. In a second part, we propose a new BTF representation, named Flat Bidirectional Texture Function (Flat-BTF), which improves data coherency and thus compression. The analysis of the results shows, statistically and visually, the gain in coherency as well as the absence of a noticeable loss of quality compared to the original representation. In a third and last part, we demonstrate how our new representation may be used in real-time rendering applications on GPUs. We then introduce a compression of appearance based on a GPU quantification method, presented in the context of 3D data streaming between a server of 3D data and a client that wants to visualize it.
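The abstract does not name a specific codec; a common baseline for the GPU-friendly BTF compression it alludes to, shown here only as an illustration, is a truncated SVD of the texel-by-direction data matrix, whose factors can be stored as textures and recombined in a shader:

```python
import numpy as np

def compress_btf(btf_matrix, k):
    """Truncated SVD of a BTF data matrix (rows: texels, columns:
    sampled view/light directions). Keeps k components: per-texel
    weights and a per-direction basis."""
    u, s, vt = np.linalg.svd(btf_matrix, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k]   # weights, basis

def decompress_btf(weights, basis):
    """Reconstruct the (approximate) BTF matrix."""
    return weights @ basis

rng = np.random.default_rng(0)
# Synthetic rank-2 BTF: 64 texels x 32 sampled directions.
btf = rng.standard_normal((64, 2)) @ rng.standard_normal((2, 32))
w, b = compress_btf(btf, k=2)
print(np.allclose(decompress_btf(w, b), btf))  # True: rank-2 data is exact
```

The thesis's point is that reordering the data (Flat-BTF) raises the coherence such factorizations can exploit.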
Gauthier, Alban. "Morphing and level-of-detail operators for interactive digital material design and rendering". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT036.
The Physically Based Rendering workflow has become a standard for rendering digital materials in the creative industries, such as video games, visual effects, product design, and architecture. It enables developers and artists to create and share ready-to-use photorealistic materials among a wide variety of applications. In this workflow, 3D surfaces are mapped to a 2D texture space where their Spatially Varying Bidirectional Reflectance Distribution Functions are encoded as a set of bitmap images called PBR maps, queried efficiently at runtime. These maps represent interpretable, physically based quantities while allowing the reproduction of a wide range of material appearances. They can be reconstructed from real-world photographs or generated procedurally. Unfortunately, both approaches to PBR material authoring require advanced skills and a significant amount of time to model convincing materials for photorealistic renderers. In addition, while all channels are encoded in the same pixel grid, they describe heterogeneous quantities of very different natures at different scales that are partly correlated. The information described in the maps can be either geometrical, for the height, normal, and roughness, or colorimetric, for the albedo. The roughness relates to the distribution of microfacet normals, embedded atop the normal's tangent plane, whose location is given by the height map. This description allows for efficient rendering but prevents the use of simple image-processing operators jointly across maps for interpolation or filtering. In this thesis, we explore efficient morphing and level-of-detail operators to tackle these difficulties. We propose a novel morphing operator that allows creating new materials by simply blending two existing ones while preserving their dominant structures and features all along the interpolation.
This operator allows exploring large regions of the space of possible materials, using exemplars as anchors and our interpolation scheme as a means of navigation. We also propose a novel approach to SVBRDF mipmapping which preserves material appearance under varying view distances and lighting conditions. As a result, we obtain a drop-in replacement for standard material mipmapping, offering a significant improvement in appearance preservation while still boiling down to a single per-pixel mipmap texture fetch. These operators have been experimentally validated on a large dataset of examples. Overall, our proposed methods allow interpolating materials in the canonical space of textures as well as along the downscaling pyramid, preserving and exploring appearance.
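As a point of reference for the "single per-pixel mipmap texture fetch", here is plain box-filter mipmapping with such a fetch (the standard baseline, not the appearance-preserving downscaling the thesis contributes):

```python
import numpy as np

def build_mip_pyramid(tex):
    """Box-filter mipmap pyramid of a square power-of-two map.
    (The thesis replaces this naive per-channel averaging with an
    appearance-preserving downscaling; the fetch stays the same.)"""
    levels = [tex]
    while levels[-1].shape[0] > 1:
        t = levels[-1]
        levels.append(0.25 * (t[0::2, 0::2] + t[1::2, 0::2]
                              + t[0::2, 1::2] + t[1::2, 1::2]))
    return levels

def mip_fetch(levels, u, v, lod):
    """Single nearest-neighbour fetch at an integer level of detail."""
    t = levels[lod]
    n = t.shape[0]
    return t[min(int(v * n), n - 1), min(int(u * n), n - 1)]

tex = np.ones((8, 8))
tex[:4] = 0.0                              # top half dark, bottom half bright
mips = build_mip_pyramid(tex)
print(mip_fetch(mips, 0.5, 0.5, lod=3))    # coarsest 1x1 level: mean 0.5
```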
Hadim, Julien. "Etude en vue de la multirésolution de l'apparence". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00434058.
Holländer, Matthias. "Synthèse géométrique temps réel". Electronic Thesis or Diss., Paris, ENST, 2013. http://www.theses.fr/2013ENST0009.
Real-time geometry synthesis is an emerging topic in computer graphics. Today's interactive 3D applications have to face a variety of challenges to fulfill the consumer's request for more realism and high-quality images. Often, the visual effects and quality known from offline-rendered feature films or special effects in movie productions are the ultimate goal, but are hard to achieve in real time. This thesis offers real-time solutions by exploiting the Graphics Processing Unit (GPU) and efficient geometry processing. In particular, a variety of topics related to classical fields in computer graphics, such as subdivision surfaces, global illumination, and anti-aliasing, are discussed, and new approaches and techniques are presented.
Devillers, Olivier. "Méthodes d'optimisation du tracé de rayons". Phd thesis, Université Paris Sud - Paris XI, 1988. http://tel.archives-ouvertes.fr/tel-00772857.
Cirio, Gabriel. "Retour Multimodal et Techniques d'Interaction pour des Environnements Virtuels Basés Physique et Larges". Phd thesis, INSA de Rennes, 2011. http://tel.archives-ouvertes.fr/tel-00652077.
Pełny tekst źródłaDupont, de Dinechin Grégoire. "Towards comfortable virtual reality viewing of virtual environments created from photographs of the real world". Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLM049.
There are many applications of capturing and digitally recreating real-world people and places for virtual reality (VR), such as preserving and promoting cultural heritage sites, placing users face-to-face with faraway family and friends, and creating photorealistic replicas of specific locations for therapy and training. This is typically done by transforming sets of input images, i.e. photographs and videos, into immersive 360° scenes and interactive 3D objects. However, such image-based virtual environments are often flawed in ways that fail to provide users with a comfortable viewing experience. In particular, accurately recovering the scene's 3D geometry is a difficult task, causing many existing approaches to make approximations that are likely to cause discomfort, e.g. as the scene appears distorted or seems to move with the viewer during head motion. In the same way, existing solutions most often fail to render the scene's visual appearance accurately and comfortably. Standard 3D reconstruction pipelines thus commonly average out captured view-dependent effects such as specular reflections, whereas complex image-based rendering algorithms often fail to achieve VR-compatible framerates and are likely to cause distracting visual artifacts outside of a small range of head motion. Finally, further complications arise when the goal is to virtually recreate people, as inaccuracies in the appearance of the displayed 3D characters or unconvincing responsive behavior may be additional sources of unease. Therefore, in this thesis, we investigate the extent to which users can be made more comfortable when viewing digital replicas of the real world in VR, by enhancing, combining, and designing new solutions for creating virtual environments from input sets of photographs.
We thus demonstrate and evaluate solutions for (1) providing motion parallax during the viewing of 360° images, using a VR interface for estimating depth information, (2) automatically generating responsive 3D virtual agents from 360° videos, by combining pre-trained deep learning networks, and (3) rendering captured view-dependent effects at high framerates in a game engine widely used for VR development, which we apply to digitally recreate a museum's mineralogy collection. We evaluate and discuss each approach by way of user studies, and make our codebase available as an open-source toolkit.
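The depth-based motion parallax of contribution (1) rests on a simple reprojection: lift a 360° image sample to 3D using its estimated depth, then view it from the translated head position. A minimal sketch for equirectangular images (our own illustration; the thesis pipeline is more involved):

```python
import numpy as np

def equirect_to_dir(u, v):
    """Pixel coordinates in [0,1]^2 -> unit direction (longitude/latitude)."""
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def reproject(u, v, depth, head_offset):
    """Lift a 360-degree sample to 3D with its depth, then express it from
    the translated viewpoint: the basis of motion parallax."""
    point = depth * equirect_to_dir(u, v)
    d = point - head_offset
    d = d / np.linalg.norm(d)
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(d[1])
    return 0.5 + lon / (2.0 * np.pi), 0.5 - lat / np.pi

# A point straight ahead, 2 m away; the viewer steps 0.2 m to the right,
# so the point should shift left in the image (u2 < 0.5):
u2, v2 = reproject(0.5, 0.5, 2.0, np.array([0.2, 0.0, 0.0]))
```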
Chaurasia, Gaurav. "Algorithmes et analyses perceptuelles pour la navigation interactive basé image". Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00979913.
Pełny tekst źródłaDorn, M?rcio. "Uma proposta para a predi??o computacional da estrutura 3D aproximada de polipept?deos com redu??o do espa?o conformacional utilizando an?lise de intervalos". Pontif?cia Universidade Cat?lica do Rio Grande do Sul, 2008. http://tede2.pucrs.br/tede2/handle/tede/5008.
Proteins are polypeptides formed by a long covalent chain of amino acid residues which, under physiological conditions (the native environment), adopts a unique 3D topology. These macromolecules are involved in most of the molecular transformations in living cells. The native structure dictates the protein's specific biochemical function; knowing the 3D structure of a protein therefore also means knowing its function. With the structure in hand, it becomes possible to interfere by activating or inhibiting that function, as in diseases where the drug targets are proteins. Experimentally, the 3D structure of a protein can be obtained by X-ray diffraction crystallography or by nuclear magnetic resonance. However, owing to several difficulties, including the high cost and long time these techniques demand, determining the 3D structure of proteins remains a problem that challenges scientists. Many in silico prediction methods have been created in recent years in search of a solution. These methods fall into two large groups: the first comprises comparative homology modeling and knowledge-based methods such as threading; the second comprises ab initio and de novo methods. These prediction methods nevertheless have limitations: methods based on comparative homology modeling and threading can only predict structures whose sequences are identical or similar to those of other proteins stored in the Protein Data Bank (PDB). De novo and ab initio methods, in turn, make it possible to obtain new folds. However, the complexity and the huge dimension of the conformational search space, even for a small protein molecule, make the prediction problem computationally intractable (Levinthal's paradox).
Despite the relative success achieved by these methods for small proteins, much effort is still needed to develop strategies for extracting and manipulating experimental data, as well as methodologies that use this information to correctly predict the 3D structure of a protein from its amino acid sequence alone. This dissertation presents a new proposal for the in silico prediction of the approximate 3D structure of polypeptides and proteins. A new algorithm was developed based on the analysis of information obtained from PDB templates; it uses data mining, interval representation, and structural-information manipulation techniques. The interval of angular variation of each amino acid residue in the polypeptide chain is reduced with the goal of finding a closed interval that contains the conformation with the lowest potential energy. Six case studies demonstrate the application of the method.
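A toy sketch of the interval-reduction idea (our own; the real method scores conformations with a molecular force field, stubbed here by a quadratic with a known minimum):

```python
import numpy as np

def reduce_interval(energy, lo, hi, iters=20, samples=64):
    """Halve a torsion-angle interval repeatedly, keeping the half that
    contains the best-scoring sampled conformation, until a tight closed
    interval around the lowest-energy angle remains."""
    for _ in range(iters):
        xs = np.linspace(lo, hi, samples)
        best = xs[np.argmin([energy(x) for x in xs])]
        mid = 0.5 * (lo + hi)
        if best <= mid:
            hi = mid
        else:
            lo = mid
    return lo, hi

# Stand-in potential with a minimum at phi = -60 degrees (alpha-helical):
energy = lambda phi: (phi + 60.0) ** 2
lo, hi = reduce_interval(energy, -180.0, 180.0)
print(round((lo + hi) / 2))  # -60
```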
Houde, Jean-Christophe. "Reconfiguration stéréoscopique". Mémoire, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/5751.
Conze, Pierre-Henri. "Estimation de mouvement dense long-terme et évaluation de qualité de la synthèse de vues. Application à la coopération stéréo-mouvement". Phd thesis, INSA de Rennes, 2014. http://tel.archives-ouvertes.fr/tel-00992940.
Dufort, Jean-François. "Rendu interactif de détails de surface par textures 3D semi-transparentes". Thèse, 2005. http://hdl.handle.net/1866/16678.
St-Amour, Jean-François. "Génération d'ombres floues provenant de sources de lumière surfaciques à l'aide de tampons d'ombre étendus". Thèse, 2004. http://hdl.handle.net/1866/14568.
Rozon, Frédérik. "Peinture de lumière incidente dans des scènes 3D". Thèse, 2009. http://hdl.handle.net/1866/3160.
Lighting design is usually done manually: artists must manipulate the parameters of several light sources to obtain the desired result. This task is difficult because it is not intuitive. Some systems already exist that enable a user to paint light directly on the objects in a scene to position or alter light sources. Unfortunately, these systems have limitations, such as considering only local lighting or requiring a fixed camera, which restricts their accuracy or versatility. Global illumination is important because it adds a lot of realism to a scene by capturing all the light interreflections between surfaces; light sources can influence surfaces even when those are not directly exposed. In this M.Sc. thesis, we study a subset of the lighting design problem: the selection and alteration of the intensity of light sources. We present two different systems for designing lighting on objects in 3D scenes. The user paints light intentions directly on the objects to alter the surface illumination. From these paint strokes, the systems find the light sources and alter their intensities to match the user's intent as closely as possible. The novelty of our technique is that global illumination, transparent surfaces, and subsurface scattering are all considered, and that the camera is free to take any position. We also present strategies for selecting and altering the light sources. The first system uses an environment map as an intermediate representation of the environment surrounding the objects. The second system stores all the information about the environment at each vertex of each object.
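The abstract does not state how the intensities are solved for; one standard formulation of this inverse problem, shown here purely as an assumption, treats each light's contribution (direct plus indirect) at the painted pixels as a column of a matrix and solves a non-negative least-squares problem, since physical lights cannot have negative intensity (all numbers below are invented):

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A: one light's contribution at each painted pixel.
# b: the illumination the user painted at those pixels.
A = np.array([[0.8, 0.1],
              [0.2, 0.7],
              [0.1, 0.1]])
b = np.array([1.7, 1.1, 0.3])    # consistent with intensities (2, 1)

intensities, residual = nnls(A, b)
print(np.round(intensities, 3))  # [2. 1.]
```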
Parent-Lévesque, Jérôme. "Towards deep unsupervised inverse graphics". Thesis, 2020. http://hdl.handle.net/1866/25467.
A long-standing goal of computer vision is to infer the underlying 3D content of a scene from a single photograph, a task known as inverse graphics. In recent years, machine learning has enabled many approaches to make great progress towards solving this problem. However, most approaches rely on 3D supervision data, which is expensive and sometimes impossible to obtain, and which therefore limits what such work can learn. In this work, we explore the deep unsupervised inverse graphics training pipeline and propose two methods based on distinct 3D representations and associated differentiable rendering algorithms: namely surfels and a novel Voronoi-based representation. In the first method, based on surfels, we show that, while effective at maintaining view consistency, producing view-dependent surfels from a learned depth map results in ambiguities, as the mapping between depth map and rendering is non-bijective. In our second method, we introduce a novel 3D representation based on Voronoi diagrams which models objects and scenes both explicitly and implicitly at the same time, thereby combining the benefits of both. We show how this representation can be used in both supervised and unsupervised contexts, and discuss its advantages compared to traditional 3D representations.
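The hard (non-differentiable) limit of a Voronoi-based representation is easy to sketch: each query point takes the value attached to its nearest site (the thesis necessarily uses a soft, differentiable variant; this illustration is ours):

```python
import numpy as np

def voronoi_occupancy(sites, values, xs, ys):
    """Evaluate a Voronoi-cell representation on a pixel grid: each pixel
    takes the value of its nearest site (hard nearest-site assignment)."""
    gx, gy = np.meshgrid(xs, ys)
    pix = np.stack([gx.ravel(), gy.ravel()], axis=1)           # (P, 2)
    d2 = ((pix[:, None, :] - sites[None, :, :]) ** 2).sum(-1)  # (P, S)
    return values[np.argmin(d2, axis=1)].reshape(gy.shape)

sites = np.array([[0.25, 0.5], [0.75, 0.5]])
values = np.array([0.0, 1.0])       # e.g. outside / inside occupancy
xs = ys = np.linspace(0.0, 1.0, 8)
img = voronoi_occupancy(sites, values, xs, ys)
```

Replacing the argmin with a softmax over negative distances yields gradients with respect to site positions, which is what makes such a representation trainable.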
Yuan, Lin Tzu, and 林子淵. "Presentation of the Classic Video Images by 3D Dot Matrix Holograms─ Rendy Lu Kissing His Finger and Raising It to the Sky as an Example". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/37568003261894599561.
Pełny tekst źródła國立臺灣師範大學
圖文傳播學系
100
To protect copyright, the use of an OVD (optically variable device) can effectively protect an original document from illegal copying. A hologram is one kind of OVD; it can be used not only for artistic creation but also for security purposes. The rapid development of digital archives opens up value-added applications for many older assets. In the past, however, research on holography has focused on anti-counterfeiting, and little effort has been devoted to combining digital archives with video images. This research therefore proposes a method to combine dot-matrix holograms with classic video images, since dot-matrix holograms can greatly enhance their value-added application. The main purpose of this paper is to transform the classic video images of Rendy Lu, who reached the quarter-finals at Wimbledon in 2010, into a series of images showing him kissing his fingers and raising his hand to the sky. We then use a 3D model of Rendy Lu as the design element for an optically variable effect realized as a dot-matrix hologram. Finally, we turn this dot-matrix hologram into a value-added product and give it a story to promote it in the market. The results show that the dot-matrix hologram we designed can directly display 3D video images without any power source. Consequently, this dot-matrix hologram design, which combines technology and digital archives, has many possibilities for value-added applications.
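The optics behind each dot of a dot-matrix hologram follow the diffraction grating equation d·sin(θ) = m·λ: the pitch and orientation written into a dot determine which wavelength leaves at which angle. A one-liner (standard physics, not code from the thesis) gives the pitch needed to steer a wavelength to a chosen viewing angle:

```python
import math

def grating_pitch(wavelength_nm, diffraction_angle_deg, order=1):
    """Grating equation d * sin(theta) = m * lambda, solved for the
    pitch d (in nm) of a hologram dot's micro-grating."""
    return order * wavelength_nm / math.sin(math.radians(diffraction_angle_deg))

# Pitch steering green light (532 nm) to a 30-degree viewing angle:
print(round(grating_pitch(532.0, 30.0)))  # 1064
```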
PETRINI, Maria Celeste. "IL MARKETING INTERNAZIONALE DI UN ACCESSORIO-MODA IN MATERIALE PLASTICO ECO-COMPATIBILE: ASPETTI ECONOMICI E PROFILI GIURIDICI. UN PROGETTO PER LUCIANI LAB". Doctoral thesis, 2018. http://hdl.handle.net/11393/251084.