Selected scholarly literature on the topic "Infographie procédurale"
Below is a list of current articles, books, theses, conference proceedings and other scholarly sources on the topic "Infographie procédurale".
Theses on the topic "Infographie procédurale"
Cavalier, Arthur. "Génération procédurale de textures pour enrichir les détails surfaciques". Thesis, Limoges, 2019. http://www.theses.fr/2019LIMO0108.
With the increasing power of consumer machines, Computer Graphics offers us the opportunity to immerse ourselves in ever more detailed virtual worlds. Artists are thus tasked with modeling and animating these complex virtual scenes. This leads to prohibitive authoring times, a larger memory cost and difficulties in rendering this abundance of detail correctly and efficiently. Many tools for procedural content generation have been proposed to resolve these issues. In this thesis, we focus on on-the-fly generation of mesoscopic details in order to easily add tiny details to 3D mesh surfaces. Focusing on procedural texture synthesis, we propose improvements for correctly rendering textures that not only modify the surface color but also fake the surface meso-geometry in real time. We present a methodology for rendering high-quality textures without aliasing issues for controllable structured pattern synthesis. We also propose on-the-fly normal map generation to perturb the shading calculation and to add irregularities and relief to the textured surface.
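The on-the-fly normal map generation described in this abstract amounts, in its simplest form, to deriving a perturbed normal from a procedural height field by finite differences. A minimal Python sketch of that general idea only; the `height` function below is an illustrative stand-in, not the thesis's actual detail function:

```python
import math

def height(x, y):
    # Hypothetical procedural height field: a simple sinusoidal bump
    # pattern standing in for the thesis's mesoscopic detail function.
    return 0.1 * math.sin(8.0 * x) * math.cos(8.0 * y)

def normal_from_height(x, y, eps=1e-4):
    # Central finite differences give the height gradient; the
    # perturbed normal is (-dh/dx, -dh/dy, 1), normalized.
    dhdx = (height(x + eps, y) - height(x - eps, y)) / (2.0 * eps)
    dhdy = (height(x, y + eps) - height(x, y - eps)) / (2.0 * eps)
    n = (-dhdx, -dhdy, 1.0)
    length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
    return tuple(c / length for c in n)

n = normal_from_height(0.3, 0.7)
```

In a real-time setting this computation would live in a fragment shader, evaluated per pixel rather than per vertex.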
Grenier, Charline. "Génération procédurale et rendu en temps réel de motifs structurés". Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD007.
In a virtual world, the objects represented are described as much by their geometry as by the luminous behaviour of their surface. The latter is an important element in the realism of the scene created. To simulate the luminous behaviour of surfaces, a commonly used method consists of covering the geometry of objects with textures. In this thesis, we focus on structured stochastic textures. More specifically, we are interested in their generation and rendering at different levels of detail, in real time. These textures are characterised by a stochastic nature, which gives them a more organic and natural appearance, and by abrupt variations in colour or contrast giving rise to distinct patterns which we call structures. We propose a new method for procedural generation of structured patterns. It is based on the composition of a procedural vector noise with a multivariate transfer function. This method takes advantage of the separation between the structural information contained in the transfer function and the stochastic information provided by the noise.
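The noise-plus-transfer-function composition described in this abstract can be illustrated with a toy sketch: a scalar lattice noise fed through a thresholding transfer function whose abrupt colour changes produce distinct structures. This only illustrates the principle; the thesis uses a vector noise and a multivariate transfer function, neither of which is reproduced here:

```python
import math

def value_noise(x, y):
    # Minimal hash-based lattice value noise, a stand-in for the
    # procedural noise; returns a value in [0, 1].
    def hash2(ix, iy):
        h = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep weights for bilinear interpolation of lattice values.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = hash2(ix, iy) * (1 - sx) + hash2(ix + 1, iy) * sx
    bot = hash2(ix, iy + 1) * (1 - sx) + hash2(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bot * sy

def transfer(t):
    # Thresholding transfer function: the abrupt colour changes are
    # what turn the smooth stochastic noise into visible "structures".
    if t < 0.45:
        return (0.1, 0.1, 0.4)  # background colour
    elif t < 0.55:
        return (0.9, 0.8, 0.2)  # thin contour
    else:
        return (0.7, 0.2, 0.2)  # cell interior

color = transfer(value_noise(3.7, 1.2))
```

Separating the two stages means the structure (the thresholds and colours) can be edited without touching the stochastic component, and vice versa.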
Baldi, Guillaume. "Contributions à la modélisation procédurale de structures cellulaires stochastiques 2D et à leur génération par l'exemple". Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD001.
The creation of procedural materials and textures requires considerable expertise, and is time-consuming, tedious and costly. We therefore look to develop tools for the automatic generation of procedural textures and materials from input exemplars provided in the form of images: this is known as inverse procedural modeling. In this thesis, we propose a procedural model called the Cellular Point Process Texture Basis Function (C-PPTBF) for representing 2D stochastic cellular structures, involving functions that are differentiable with respect to most of their parameters, which makes it possible to estimate these parameters from examples without resorting entirely to deep neural networks. We set up a processing pipeline to estimate the parameters of our model from structural examples provided in the form of binary images, combining an estimation performed by a convolutional neural network trained on images produced with our C-PPTBF model with an estimation phase using gradient descent directly on the parameters of the procedural model.
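The key enabler in this abstract is that the procedural model is differentiable in its parameters, so they can be fitted to an exemplar by gradient descent. A toy illustration of that idea on a deliberately simple one-bump "procedural model" (purely illustrative and far simpler than C-PPTBF):

```python
import math

def blob(x, mu, sigma):
    # Toy differentiable procedural "feature": a Gaussian bump with
    # position mu and width sigma as its two fittable parameters.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Target samples playing the role of the input exemplar.
true_mu, true_sigma = 0.6, 0.15
xs = [i / 50 for i in range(51)]
targets = [blob(x, true_mu, true_sigma) for x in xs]

# Gradient descent on the squared error, using analytic derivatives.
mu, sigma, lr = 0.4, 0.25, 0.02
for _ in range(5000):
    g_mu = g_sigma = 0.0
    for x, t in zip(xs, targets):
        y = blob(x, mu, sigma)
        err = y - t
        g_mu += 2 * err * y * (x - mu) / sigma ** 2       # d(err^2)/d(mu)
        g_sigma += 2 * err * y * (x - mu) ** 2 / sigma ** 3  # d(err^2)/d(sigma)
    mu -= lr * g_mu / len(xs)
    sigma -= lr * g_sigma / len(xs)
```

After the loop, `mu` and `sigma` have recovered the exemplar's parameters; in the thesis a network prediction provides the initialization before this descent phase refines it.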
Emilien, Arnaud. "Création interactive de mondes virtuels : combiner génération procédurale et contrôle utilisateur intuitif". Thèse, Grenoble, 2014. http://hdl.handle.net/1866/11661.
The complexity required for virtual worlds is always increasing, and conventional modeling techniques struggle to meet the constraints and efficiency required for the production of such scenes. Procedural generation techniques use algorithms for the automated creation of virtual worlds, but are often non-intuitive and therefore reserved to experienced programmers. Indeed, these methods offer few controls to users and are rarely interactive. Moreover, the user often needs to find values for numerous parameters, and only gets indirect control through a series of trials and errors, which makes modeling tasks long and tedious. The objective of this thesis is to combine the power of procedural modeling techniques with intuitive user control towards interactive methods for designing virtual worlds. First, we present a technique for procedural modeling of villages over arbitrary terrains, where elements are subjected to strong environmental constraints. Second, we propose an interactive technique for the procedural modeling of waterfall sceneries, combining intuitive user control with the automated generation of content consistent with hydrology and terrain constraints. Then, we describe an interactive sketch-based technique for editing terrains, where terrain features such as ridge lines are extracted and deformed to fit the user's sketches. Finally, we present a painting metaphor for virtual world creation and editing, where methods for example-based synthesis of vectorial elements are used to automate deformation and editing of the scene while maintaining its consistency.
Tricaud, Martin. "Designing interactions, interacting with designs : Towards instruments and substrates in procedural computer graphics and beyond". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG067.
Procedural Computer Graphics (PCG) is an umbrella term for a variety of techniques that entail building and amending algorithmic procedures to generate graphical content. These procedural models reify the chain of operations leading to a design, turning the design process itself into an interactive object. Through manipulating such abstractions, artists and designers tap into the capacity of computers to produce outputs that require the repetitive and/or parallel application of rules, or the storage of multiple objects in working memory. Yet, the expressiveness of PCG techniques remains constrained by how procedural models are represented and how users interact with these representations. A common frustration in PCG is the reliance on sliders to explore design spaces, what Alan Perlis would call a Turing tar-pit: everything is possible, but nothing is easy. This problem echoes a central question in Human-Computer Interaction: which software artifacts are best suited to mediate our actions on information substrates? Direct manipulation interfaces, once considered more ergonomic from a cognitive standpoint, seem to be losing ground to conversational interactions: generative AI models promise easy access to sophisticated results through verbal or textual commands. However, the metaphor of the interface as a "world" rather than as an interlocutor is not obsolete: some things will always be more easily done than said. Recent research in cognitive science on tool-based action and technical reasoning has provided ample evidence that these faculties are distinct from symbolic reasoning and precede it in human evolution. Can this tacit technical reasoning, fundamental in artistic practices, extend to environments that aren't governed by rules analogous to those of the physical world? And if so, how? Answering these questions requires redefining materiality not as a quality of the environment but of an agent's relationship to it.
Yet, the redefinitions proposed by design research often struggle to yield actionable principles for interface design. Three questions emerge from these observations: 1. If materiality is a relationship between agent and environment, how does it develop between computational artists and software, if at all? 2. What obstacles hinder this process, and what specific software artifacts can support it? 3. How can interactions and interfaces in general be architected to foster materiality with software environments? The first question is addressed through an ethnographic study of 12 artists and designers, proposing that materiality develops through epistemic processes. Artists build non-declarative knowledge through epistemic actions, externalizing this knowledge into artifacts that foster further exploration. I contextualize these findings with works that reflect similar intuitions. To tackle the second question, I develop a software prototype featuring novel interaction tools to facilitate navigation in large procedural model parameter spaces. Reflecting on the design process and participant feedback, I critique traditional usability and creativity evaluation methods, proposing alternative approaches inspired by instrumental interaction and information theory. In answering the third question, I argue that the difficulties HCI faces in bringing innovative interaction techniques and frameworks (particularly instrumental interaction) into the mainstream stem not from the absence of adequate evaluation methods, but from the lack of adequate architecture. I speculate that if the building blocks of a software's interaction model have well-behaved mathematical semantics, we can extend the model-world metaphor beyond physicality and bring materiality to various information substrates.
Rodriguez, Simon. "Méthodes basées image pour le rendu d’effets dépendants du point de vue dans les scènes réelles et synthétiques". Thesis, Université Côte d'Azur, 2020. http://theses.univ-cotedazur.fr/2020COAZ4038.
The creation and interactive rendering of realistic environments is a time-consuming problem requiring human interaction and tweaking at all steps. Image-based approaches use viewpoints of a real-world or high-quality synthetic scene to simplify these tasks. These viewpoints can be captured in the real world or generated with accurate offline techniques. When rendering a novel viewpoint, geometry and lighting information are inferred from the data in existing views, allowing for interactive exploration of the environment. But sparse coverage of the scene in the spatial or angular domain introduces artifacts; a small number of input images hampers real-world scene reconstruction, as does the presence of complex materials with view-dependent effects when rendering both real and synthetic scenes. We aim to lift these constraints by refining the way information is stored and aggregated from the viewpoints and by exploiting redundancy. We also design geometric tools to properly reproject view-dependent effects into the novel view. For real-world scenes, we rely on semantic information to infer material and object placement. We first introduce a method to reconstruct architectural elements from a set of three to five photographs of a scene. We exploit the repetitive nature of such elements to extract them and aggregate their color and geometry information, generating a common ideal representation that can then be reinserted in the initial scene. Aggregation helps improve the accuracy of the viewpoint pose estimation and of the element geometry reconstruction, and is used to detect locations exhibiting view-dependent specular effects. We describe a second method designed to similarly rely on semantic scene information to improve rendering of street-level scenery. In these scenes, cars exhibit many view-dependent effects that make them hard to reconstruct and render accurately.
We detect and extract them, relying on a per-object view parameterization to refine their geometry, including a simplified representation of reflective surfaces such as windows. To render view-dependent effects, we perform a specular layer separation and use our analytical reflector representation to reproject the information into the novel view following the flow of reflections. Finally, while synthetic scenes provide completely accurate geometric and material information, rendering high-quality view-dependent effects interactively is still difficult. Inspired by our second method, we propose a novel approach to render such effects using precomputed global illumination. We precompute specular information at a set of predefined locations in the scene, stored in panoramic probes. Using a simplified geometric representation, we robustly estimate the specular flow, gathering information from the probes at a novel viewpoint. Combined with an adaptive parameterization of the probes and a material-aware reconstruction filter, we render specular and glossy effects interactively. The results obtained with our methods show an improvement in the quality of recreated view-dependent effects, using both synthetic and real-world data, and pave the way for further research in image-based techniques for realistic rendering.
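The probe-based specular reprojection sketched in this abstract rests on two elementary operations: mirroring the view direction about the surface normal, and gathering from a nearby precomputed probe. A minimal sketch of those two steps only, with probe selection reduced to nearest-neighbour; the thesis's analytic reflector proxy, adaptive probe parameterization and reconstruction filter are omitted:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(d, n):
    # Mirror direction d about unit normal n: r = d - 2 (d . n) n.
    k = 2.0 * dot(d, n)
    return tuple(di - k * ni for di, ni in zip(d, n))

def nearest_probe(p, probes):
    # Pick the precomputed panoramic probe closest to shading point p.
    return min(probes, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

view = (0.0, -1.0, 0.0)    # ray hitting a horizontal reflector from above
normal = (0.0, 1.0, 0.0)
r = reflect(view, normal)  # reflected direction used to sample the probe
probe = nearest_probe((0.2, 0.0, 0.1), [(0.0, 1.0, 0.0), (5.0, 1.0, 0.0)])
```

In a renderer, `r` would index into the panoramic image stored at `probe`, and the simplified scene geometry would correct the parallax between probe position and shading point.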
Slimani, Yahya. "Structures de données et langages non-procéduraux en informatique graphique". Lille 1, 1986. http://www.theses.fr/1986LIL10002.
Hnaidi, Houssam. "Contrôle dans la génération de formes naturelles". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10139.
The generation of natural shapes has been the subject of much research for many years. Several methods have been proposed to generate realistic natural objects such as terrain, plants and trees, clouds, etc. Iterative models are well known in this field of research due to their ability to generate complex and rough shapes that are well adapted to the representation of natural objects. The major drawback of such models is the lack of control over the final result. This can come from a stochastic construction method, which by definition prevents any control. For models whose construction is deterministic, the generation parameters are often non-intuitive and thus limit control. For these reasons, many studies have focused on the problem of controlling these models, as well as on the possibility of using non-iterative models (sketching, example-based approaches, etc.). Often, the control introduced by these models is global, acting on the whole final object, and therefore does not cover local details of this object. In our work, we focus on the problem of control over natural shapes, taking local control into account. To this end, we introduce two different models. The first is based on an iterative formalism with a notion of detail, and is divided into two subfamilies, one based on IFS and the other on subdivision surfaces. The second model allows the editing of terrain features as vectorial primitives, which are used to generate the terrain by a guided diffusion method. The latter is the subject of a parallel implementation on the graphics card (GPU).
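Guided diffusion from vectorial primitives, as in the second model above, can be sketched as a Laplace relaxation in which rasterised feature cells keep fixed elevations and all other cells are smoothly interpolated between them. A toy CPU grid version of that idea only, not the thesis's GPU implementation:

```python
# Constraint cells hold fixed elevations (feature primitives rasterised
# onto the grid); every other cell is filled in by diffusion.
N = 17
constraints = {(8, 8): 1.0}            # a single peak in the middle
for i in range(N):                     # zero elevation on the border
    for j in (0, N - 1):
        constraints[(i, j)] = 0.0
        constraints[(j, i)] = 0.0

h = [[0.0] * N for _ in range(N)]
for (i, j), v in constraints.items():
    h[i][j] = v

for _ in range(500):                   # Jacobi relaxation sweeps
    nh = [row[:] for row in h]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            if (i, j) in constraints:
                continue               # constrained cells stay fixed
            nh[i][j] = 0.25 * (h[i - 1][j] + h[i + 1][j]
                               + h[i][j - 1] + h[i][j + 1])
    h = nh
```

Each sweep is embarrassingly parallel over cells, which is why this kind of relaxation maps so naturally to the GPU implementation mentioned in the abstract.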
Causeret, Maxime. "Peindre avec des matières dynamiques : les systèmes procéduraux pour la création et l'expérimentation artistique". Paris 8, 2013. http://www.theses.fr/2013PA083960.
For over twenty years, special effects in film have changed drastically with the ever-increasing number of tools dealing with digital images (image synthesis and image processing). The latter accurately simulate materials such as fire, smoke or even water. This study investigates the innovative potential of procedural techniques to paint with dynamic materials. Through creations and experiments, I created my own materials, which I used and adapted to propose new graphic and narrative alternatives. Through the study of many film productions, I analyze how state-of-the-art techniques work. Beyond physical and realistic simulations, I provide artists with adequate ways to play with these dynamic materials for any creative purpose. I propose new creations and experimentations following various research topics. Firstly, I investigate portraits using dynamic materials to picture the face in motion. The point is to analyze the 3D scene and then to control the materials through interactions that follow dedicated workflows. Secondly, I study how these dynamic systems can be linked to music in order to produce novel creative choreographies. Following these studies, I attempt to paint motion recorded from data acquisition using new analysis processes I propose. Finally, I play freely with this procedural material, setting it at the very heart of the creative process.
Michel, Élie. "Interactive authoring of 3D shapes represented as programs". Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAT027.
Although hardware and techniques have improved considerably over the years at handling heavy content, digital 3D creation remains fairly complex, partly because the bottleneck also lies in the cognitive load imposed on designers. A recent shift to higher-order representations of shapes, encoding them as computer programs that generate their geometry, enables creation pipelines that better manage the cognitive load, but this also comes with its own sources of friction. We study in this thesis the new challenges and opportunities introduced by program-based representations of 3D shapes in the context of digital content authoring. We investigate ways for the interaction with the shapes to remain as much as possible in 3D space, rather than operating on abstract symbols in program space. This includes both assisting the creation of the program, by allowing manipulation in 3D space while still ensuring good generalization upon changes of the free variables of the program, and helping one to tune these variables by enabling direct manipulation of the output of the program. We explore the diversity of program-based representations, focusing on various paradigms of visual programming interfaces, from imperative directed acyclic graphs (DAGs) to declarative Wang tiles, through more hybrid approaches. In all cases we study shape programs that evaluate at interactive rates, so that they fit in a creation process, and we push this further by studying synergies of program-based representations with real-time rendering pipelines. We enable the use of direct manipulation methods on DAG output thanks to automated rewriting rules and a non-linear filtering of differential data. We help the creation of imperative shape programs by turning geometric selections into semantic queries, and of declarative programs by proposing an interface-first editing scheme for authoring 3D content with Wang tiles.
We extend tiling engines to handle continuous tile parameters and arbitrary slot graphs, and to suggest new tiles to add to the set. We blend shape programs into the visual feedback loop by delegating tile content evaluation to the real-time rendering pipeline or by exploiting the program's semantics to drive an impostor-based level-of-detail system. Overall, our series of contributions aims at leveraging program-based representations of shapes to make the process of authoring 3D digital scenes more of an artistic act and less of a technical task.
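The Wang-tile side of this work relies on the basic tiling invariant that adjacent tiles must share edge colours. A minimal sketch of stochastic row tiling with a hypothetical four-tile set (illustrative only; the thesis's engines handle continuous tile parameters and arbitrary slot graphs, far beyond this):

```python
import random

# A tiny Wang tile set: each tile lists its (north, east, south, west)
# edge colours. Horizontal neighbours must agree on the shared
# east/west edge; vertical neighbours on the shared south/north edge.
TILES = [
    ("a", "x", "a", "x"),
    ("a", "y", "b", "x"),
    ("b", "x", "a", "y"),
    ("b", "y", "b", "y"),
]

def tile_row(width, rng):
    # Lay tiles left to right, each time choosing uniformly among the
    # tiles whose west edge matches the previous tile's east edge.
    row = [rng.choice(TILES)]
    for _ in range(width - 1):
        west = row[-1][1]  # east edge colour of the previous tile
        candidates = [t for t in TILES if t[3] == west]
        row.append(rng.choice(candidates))
    return row

rng = random.Random(7)
row = tile_row(8, rng)
```

This set is complete (every edge colour has matching candidates), so the greedy stochastic placement never gets stuck; richer slot graphs require constraint propagation or backtracking instead.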