Dissertations on the topic "Render time"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 40 dissertations for research on the topic "Render time".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, if the relevant parameters are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.
Danliden, Alexander, und Steven Cederrand. „Multi Sub-Pass & Multi Render-Target Shading In Vulkan : Performance Based Comparison In Real-time“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20199.
Linné, Andreas. „Evaluating the Impact of V-Ray Rendering Engine Settings on Perceived Visual Quality and Render Time : A Perceptual Study“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19161.
Persson, Jonna. „SCALABILITY OF JAVASCRIPT LIBRARIES FOR DATA VISUALIZATION“. Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19994.
Koblik, Katerina. „Simulation of rain on a windshield : Creating a real-time effect using GPGPU computing“. Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185027.
Grier, Sarah K. „Time rendered form“. Thesis, Virginia Polytechnic Institute and State University, 1993. http://hdl.handle.net/10919/52129.
Master of Architecture
Björketun, Aron. „Compositing alternatives to full 3D : Reduce render times of static objects in a moving sequence using comp“. Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70954.
Rendering takes time, especially without access to powerful hardware. Is there any way to cut down the time a render takes? With the help of compositing it is certainly possible to reduce render times through a number of alternative methods. In total, three different methods are tested, both to save time and to come as close as possible to the quality of an image rendered directly from a 3D application. Projections onto planes and geometry, together with the use of deep data, are employed in an attempt to find the most efficient and useful way of saving time.
Pradel, Benjamin. „Rendez-vous en ville ! Urbanisme temporaire et urbanité évènementielle : les nouveaux rythmes collectifs“. Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00546513.
Schneider, David Valentin Maria [Verfasser], Matthias [Gutachter] Girndt, Lutz [Gutachter] Renders und Timm [Gutachter] Westhoff. „Auswirkung einer Spendernierenproteinurie auf die Transplantatfunktion und das Patientenüberleben / David Valentin Maria Schneider ; Gutachter: Matthias Girndt, Lutz Renders, Timm Westhoff“. Halle (Saale) : Universitäts- und Landesbibliothek Sachsen-Anhalt, 2020. http://d-nb.info/1214241042/34.
Yousif, Bashir H. „Effects of Heat Treatment of Ultrafiltered Milk on its Rennet Coagulation Time and on Whey Protein Denaturation“. DigitalCommons@USU, 1991. https://digitalcommons.usu.edu/etd/5379.
Der volle Inhalt der QuelleRossetti, Nara. „Análise das volatilidades dos mercados brasileiros de renda fixa e renda variável no período 1986 - 2006“. Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/96/96133/tde-29042008-115430/.
This work studies the volatility of the fixed-income market and the stock market in Brazil from March 1986 to February 2006, using the CDI (Interbank Interest Rate) and the IRF-M (Fixed Income Index) as fixed-income market indicators and the IBOVESPA (BOVESPA index) as a stock market indicator. Comparing the volatilities of these assets makes it possible to observe whether the volatility peaks of the two markets coincide in time, mainly under the influence of macroeconomic variables. Such analysis is important so that portfolio managers, responsible for decisions such as investment allocation, know the history and the actual relationship between the markets' volatilities. The volatilities were calculated from the annual standard deviation of monthly returns and from a GARCH(1,1) model. The results show that, in Brazil, during the studied period, both markets present coincident volatility peaks, a marked change in the behavioral pattern of volatility after the implementation of the Plano Real, and little stability in the relationship between the volatilities.
Raymond, Boris. „Contrôle de l'apparence des matériaux anisotropes“. Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0142/document.
In computer graphics, material appearance is a fundamental component of final image quality. Many models have contributed to improving material appearance, yet some materials remain hard to represent because of their complexity. Among them, anisotropic materials are especially complex and little studied. In this thesis, we propose a better comprehension of anisotropic materials, providing a representation model and an editing tool to control their appearance. Our scratched-material model is based on a light-transport simulation in the micro-geometry of a scratch, preserves all the details, and keeps an interactive rendering time. Our anisotropic-reflection editing tool uses BRDF orientation fields to give the user the impression of drawing or deforming reflections directly on the surface.
Murray, David. „Legible Visualization of Semi-Transparent Objects using Light Transport“. Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0326/document.
Exploring and understanding volumetric or surface data is one of the challenges of computer graphics. The appearance of these data can be modeled and visualized using light-transport theory, and transparent materials are widely used to make such visualizations understandable. While solutions exist to correctly simulate light propagation and display semi-transparent objects, offering an understandable visualization remains an open research topic. The goal of this thesis is twofold. First, an in-depth analysis of the optical model for light transport and its implications for computer-generated images is performed. Second, this knowledge is used to tackle the problem of providing efficient and reliable solutions for visualizing transparent and semi-transparent media. In this manuscript, we first introduce the general optical model for light transport in participating media, its simplification to surfaces, and how it is used in computer graphics to generate images. Second, we present a solution to improve shape depiction in the special case of surfaces. The proposed technique uses light transport as a basis to change the lighting process and modify material appearance and opacity. Third, we focus on the problem of using full volumetric data instead of the simplified case of surfaces. In this case, changing only the material properties has a limited impact, so we study how light transport can be used to provide useful information for participating media. Last, we present our light-transport model for participating media, which aims at exploring parts of interest of a volume.
Bleron, Alexandre. „Rendu stylisé de scènes 3D animées temps-réel“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM060/document.
The goal of stylized rendering is to render 3D scenes in the visual style intended by an artist. This often entails reproducing, with some degree of automation, the visual features typically found in 2D illustrations that constitute the "style" of an artist. Examples of these features include the depiction of light and shade, the representation of the contours of objects, or the strokes on a canvas that make a painting. This field is relevant today in domains such as computer-generated animation or video games, where studios seek to differentiate themselves with styles that deviate from photorealism. In this thesis, we explore stylization techniques that can be easily inserted into existing real-time rendering pipelines, and propose two novel techniques in this domain. Our first contribution is a workflow that aims to facilitate the design of complex stylized shading models for 3D objects. Designing a stylized shading model that follows artistic constraints and stays consistent under a variety of lighting conditions and viewpoints is a difficult and time-consuming process. Specialized shading models intended for stylization exist but are still limited in the range of appearances and behaviors they can reproduce. We propose a way to build and experiment with complex shading models by combining several simple shading behaviors using a layered approach, which allows a more intuitive and efficient exploration of the design space of shading models. In our second contribution, we present a pipeline to render 3D scenes in painterly styles, simulating the appearance of brush strokes, using a combination of procedural noise and local image filtering in screen-space. Image filtering techniques can achieve a wide range of stylized effects on 2D pictures and video: our goal is to use those existing filtering techniques to stylize 3D scenes, in a way that is coherent with the underlying animation or camera movement. This is not a trivial process, as naive approaches to filtering in screen-space can introduce visual inconsistencies around the silhouette of objects. The proposed method ensures motion coherence by guiding filters with information from G-buffers, and ensures a coherent stylization of silhouettes in a generic way.
Weber, Yoann. „Rendu multi-échelle de pluie et interaction avec l’environnement naturel en temps réel“. Thesis, Limoges, 2016. http://www.theses.fr/2016LIMO0109/document.
This dissertation presents a coherent multiscale model for real-time rain rendering which takes into account local and global properties of rainy scenes. Our goal is to simulate visible rain streaks close to the camera as well as the progressive loss of visibility induced by atmospheric phenomena. Our model correlates the attenuation of visibility, due in part to the extinction phenomenon, with the distribution of raindrops, in terms of rainfall intensity and camera parameters. Furthermore, the method proposes an original rain-streak generation based on spectral analysis and sparse convolution theory, which allows accurate control of rainfall intensity and streak appearance, improving the global realism of rainy scenes. We also aim at rendering, in real time, the interactive visual effects inherent to the complex interactions between trees and rain, in order to increase the realism of natural rainy scenes. Such a complex phenomenon involves a great number of physical processes influenced by various interlinked factors, and rendering it represents a significant challenge in computer graphics. We approach this problem by introducing an original method to render drops dripping from leaves after the interception of raindrops by foliage. Our method introduces a new hydrological model representing interactions between rain and foliage through a phenomenological approach. The model reduces the complexity of the phenomenon by representing multiple dripping drops with a new fully functional form, evaluated per pixel on the fly, providing improved control over density and physical properties. Furthermore, an efficient real-time rendering scheme, taking full advantage of the latest GPU hardware capabilities, allows the rendering of a large number of dripping drops even for complex scenes.
Silva, Janaina Cabral da. „Essays on Poverty, Income inequality and Economic Growth in Brazil“. Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=14162.
This dissertation comprises three papers, each forming a chapter. The first chapter, entitled "Relationship between income inequality and economic growth in Brazil", analyzes this relationship through Kuznets's inverted-U hypothesis over the period 1995 to 2012. The inverted-U supposition (the Kuznets (1955) hypothesis) holds that in the short run there is a positive connection between income inequality and the level of per capita income, while in the long run this relationship reverses, yielding the inverted U. We use a dynamic panel model and the system Generalized Method of Moments estimator developed by Arellano and Bond (1991), Arellano and Bover (1995), and Blundell and Bond (1998). The results confirm the Kuznets hypothesis for the Brazilian states. Based on theories that relate poverty, inequality, economic growth, and welfare, chapter two decomposes the variation in poverty into trend, growth, inequality, and residual effects for the Brazilian states between 2001 and 2012. To reach this objective, we estimated a statistical model on panel data, using poverty, per capita family income, and the Gini coefficient, extracted from the PNAD survey. The estimates indicate that in most Brazilian states the growth effect stood out relative to the other effects in explaining the reduction of poverty in the period analyzed; the distribution effect was also important, followed by the trend effect, while the residual effect had little explanatory power. Closing the dissertation, chapter three analyzes time poverty in Brazil, using as indicators the proportion of the time poor ( ), the time-poverty gap, which measures its intensity ( ), and the squared time-poverty gap, which measures its severity ( ).
From the estimation of a statistical model on panel data, the variables "income from all jobs", "age", and "average years of schooling" are used to explain time poverty in the Brazilian states. The results indicate that an increase in income reduces time poverty; the older the individual, the greater the chance of being time poor; and the higher a person's level of education, the greater their time deprivation compared with those of lower education.
Rockenbach, Felipe Luis. „VIABILIDADE ECONÔMICA DA PRODUÇÃO DE BIOGÁS EM GRANJAS DE SUÍNOS, POR MEIO DA ANÁLISE DE SÉRIES TEMPORAIS“. Universidade Federal de Santa Maria, 2014. http://repositorio.ufsm.br/handle/1/8320.
This study gathers, analyzes, and interprets the potential and the economic viability of carbon-dioxide sequestration and/or renewable-energy generation from the treatment of swine waste, by means of time-series analysis using the Box and Jenkins (1970) forecasting models. Since swine production systems generate large quantities of highly polluting waste with a strong environmental impact, above all greenhouse gases, treating this waste produces methane-rich biogas which, when properly captured, is converted into electricity and carbon credits, leaving a high-quality biofertilizer on the property. This carbon-sequestration scenario became possible only after the approval of the Kyoto Protocol, which established the Clean Development Mechanism (CDM) and the trading of Certified Emission Reductions (CER), and, for electricity, after Regulatory Resolution 482/2012 of the Brazilian National Electric Energy Agency (ANEEL). The data were collected with the support of the Associação Municipal dos Suinocultores de Toledo (AMST) and the company BRF. The research shows that using biogas is viable and that the payback period depends on a production scale with equipment running 10 hours a day or more; under these conditions, the investment is recovered in 70 to 80 months through electricity production and/or carbon credits. It also shows that the production volume is viable, provided a minimum number of pigs is housed.
Moreover, forecasting models are an important tool, since they anticipate the behavior of the series under study, giving the producer the information needed to invest with reduced risk: financially, because the farm gains an additional source of income, and in production costs, since the farm gains autonomy through an uninterrupted supply of electricity, which increasingly technology-intensive production systems require.
Lasram, Anass. „Exploration et rendu de textures synthétisées“. Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0183/document.
Texture synthesis is a technique that algorithmically generates textures at rendering time. Automatic synthesis reduces authoring time and memory requirements, since only the algorithm and its parameters need to be stored or transferred. However, two difficulties often arise when using texture synthesis. First, visualizing synthesized textures and selecting their parameters is difficult. Second, most synthesizers generate textures in a bitmap format, leading to high memory usage. To address these difficulties we propose the following approaches. First, to improve the visualization of synthesized textures, we propose the idea of a procedural texture preview: a single static image summarizing, in a limited pixel space, the appearances produced by a given synthesizer. The main challenge is to ensure that most appearances are visible, are allotted a similar pixel area, and are ordered in a smooth manner throughout the preview. Furthermore, to improve parameter selection, we augment the sliders controlling parameters with visual previews revealing the changes that will be introduced upon manipulation. Second, to allow user interaction with these visual previews, we rely on a fast patch-based synthesizer that achieves a high degree of parallelism and is implemented entirely on the GPU. Finally, rather than generating the output of the synthesizer as a bitmap texture, we encode the result in a compact representation and decode texels from this representation during rendering.
Lambert, Thibaud. „Level-Of-Details Rendering with Hardware Tessellation“. Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0948/document.
In the last two decades, real-time applications have exhibited colossal improvements in the generation of photo-realistic images, mainly due to the availability of 3D models with an increasing amount of detail. Currently, the traditional approach to represent and visualize highly detailed 3D objects is to decompose them into a low-frequency mesh and a displacement map encoding the details. Hardware tessellation is the ideal support for an efficient rendering of this representation. In this context, we propose a general framework for the generation and rendering of multi-resolution feature-aware meshes compatible with hardware tessellation. First, we introduce a view-dependent metric capturing both geometric and parametric distortions, allowing the appropriate resolution to be selected at render time. Second, we present a novel hierarchical representation enabling, on the one hand, smooth temporal and spatial transitions between levels and, on the other hand, non-uniform hardware tessellation. Last, we devise a simplification process to generate our hierarchical representation while minimizing our error metric. Our framework leads to huge improvements both in terms of triangle count and rendering time in comparison to alternative methods.
M'halla, Anis. „Contribution à la gestion des perturbations dans les systèmes manufacturiers à contraintes de temps“. Thesis, Ecole centrale de Lille, 2010. http://www.theses.fr/2010ECLI0008/document.
The work proposed in this thesis concerns the control and monitoring of a particular class of production systems: manufacturing job-shops with time constraints. We assume that resources are allocated and that the order of operations is fixed by the planning/scheduling module. Repetitive functioning modes, with and without assembly tasks, are considered. For this type of problem, the formalism of P-time Petri nets is used to study the time constraints on operations. A study of the robustness of manufacturing workshops under time constraints has been developed. Robustness toward time disturbances is approached with and without a control reaction, qualified respectively as active and passive robustness. An algorithm for computing the upper bound of the passive robustness is presented, and three robust control strategies against time disturbances are developed. Furthermore, uncertainty in manufacturing systems is studied. Our contribution in this context integrates analytical knowledge of the robustness into the filtering mechanism of the sensor signals associated with operations, using fuzzy logic. Starting from a controlled system, we present in detail a method for implementing a monitoring model based on chronicles and fuzzy fault-tree analysis. The approach is applied to a milk production unit.
Giroud, Anthony. „Modélisation et rendu temps-réel de milieux participants à l'aide du GPU“. Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00802395.
Baril, Jérôme. „Modèles de représentation multi-résolution pour le rendu photo-réaliste de matériaux complexes“. Phd thesis, Université Sciences et Technologies - Bordeaux I, 2010. http://tel.archives-ouvertes.fr/tel-00525125.
Graglia, Florian. „Amélioration du photon mapping pour un scénario walkthrough dans un objectif de rendu physiquement réaliste en temps réel“. Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4072.
One of the goals when developing a product is to obtain a realistic and valid prototype immediately. This thesis provides new rendering methods to increase the quality of simulations during the upstream work of the production pipeline, which usually requires walkthrough rendering. Thus, we focus on physically based rendering methods for complex scenes in walkthrough. During rendering, end users must be able to measure illumination levels and to interactively modify the power of the light sources to test different lighting ambiances. Based on the original photon-mapping method, our work shows how some modifications can decrease the computation time and improve the quality of the resulting images in this specific context.
Schlindwein, Madalena Maria. „Influência do custo de oportunidade do tempo da mulher sobre o padrão de consumo alimentar das famílias brasileiras“. Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/11/11132/tde-19062006-164635/.
The main objective of this thesis was to test the hypothesis that the woman's opportunity cost of time affects positively the consumption of foods that are easy and quick to prepare and negatively the consumption of foods that are time-intensive. The study uses microdata from the Pesquisa de Orçamentos Familiares (POF, Family Budget Survey) 2002-2003, carried out by the Instituto Brasileiro de Geografia e Estatística (IBGE), the Brazilian Institute of Geography and Statistics. Based on the theory of household production and using an econometric model (Heckman's two-stage procedure), it evaluates the influence of the woman's opportunity cost of time and other factors, such as the level of household income, family composition, and urbanization, on the household consumption of a distinct group of foods (beans, rice, potatoes, cassava, meat, wheat flour, ready-made foods, bread, yogurt, soft drinks, and juices) and on the consumption of foods eaten outside the home. The main results show that there has been a significant change in the eating habits of Brazilian families since the 1970s: for example, a reduction of 46% in household rice consumption and 37% in bean consumption, and an increase of 490% in soft-drink consumption and 216% in ready-made foods. The main socioeconomic indicators show an intensification of the urbanization process in Brazil: 83% of the Brazilian population now lives in urban areas, against only 56% in 1970. Besides that, 54% of Brazilian women who are heads of family or spouses are working, and 26% of heads of family in Brazil today are women.
As for the factors that affect consumption patterns, the woman's opportunity cost of time is directly related to an increase in the probability of consumption, and in household expenditure, on foods that demand a shorter preparation time (for example, ready-made foods, bread, soft drinks and juices, yogurt, and foods eaten outside the home) and to a reduction in both for traditional foods such as beans, rice, cassava, meat, and wheat flour, which in general demand a longer preparation time. All the variables (the woman's opportunity cost of time, income level, urbanization, and family composition) were highly significant and important in determining food consumption patterns in Brazil.
Henninger, Helen Clare. „Étude des solutions du transfert orbital avec une poussée faible dans le problème des deux et trois corps“. Thesis, Nice, 2015. http://www.theses.fr/2015NICE4074/document.
Averaging is an effective way to simplify optimal low-thrust satellite transfers in a controlled two-body Kepler problem. This study combines an analytical and a numerical investigation of low-thrust time-optimal transfers, extending the application of averaging from the two-body problem to transfers in the perturbed low-thrust two-body problem and to a low-thrust transfer from Earth orbit to the L1 Lagrange point in the bicircular four-body setting. For the low-thrust two-body transfer, we compare the time-minimal case with the energy-minimal case and determine that the elliptic domain under time-minimal orbital transfers (reduced in some sense) is geodesically convex. We then consider the lunar perturbation of an energy-minimal low-thrust satellite transfer, finding a representation of the optimal Hamiltonian that relates the problem to a Zermelo navigation problem, and making a numerical study of the conjugate points. Finally, we construct and implement numerically a transfer from an Earth orbit to the L1 Lagrange point, using averaging on one (near-Earth) arc to simplify analytic and numerical computations. This last result shows that such a 'time-optimal' transfer is indeed comparable to a true time-optimal transfer (without averaging) in these coordinates.
Guideli, Douglas Albuquerque. „As mudanças de cidades de times da NFL geram impacto econômico?: uma investigação usando a metodologia ArCo - Artificial Counterfactual“. Repositório Institucional do FGV, 2018. http://hdl.handle.net/10438/24518.
Der volle Inhalt der QuelleApproved for entry into archive by Thais Oliveira (thais.oliveira@fgv.br) on 2018-08-02T22:34:31Z (GMT) No. of bitstreams: 1 Dissertacao_08022018.pdf: 717184 bytes, checksum: b4ea6d8225322985348430a05aa0e914 (MD5)
Approved for entry into archive by Suzane Guimarães (suzane.guimaraes@fgv.br) on 2018-08-03T12:29:09Z (GMT) No. of bitstreams: 1 Dissertacao_08022018.pdf: 717184 bytes, checksum: b4ea6d8225322985348430a05aa0e914 (MD5)
Made available in DSpace on 2018-08-03T12:29:09Z (GMT). No. of bitstreams: 1 Dissertacao_08022018.pdf: 717184 bytes, checksum: b4ea6d8225322985348430a05aa0e914 (MD5) Previous issue date: 2018-07-11
This study analyzes possible impacts of NFL team relocations during the 1990s and early 2000s. Income and employment data for the affected regions were analyzed, and the presence or absence of an economic impact is inferred from the behavior of these variables after the team's move to its new city. The ArCo methodology developed in Carvalho, Masini and Medeiros (2018) is used. A large literature was found on the economic impact of professional sports on a region, typically weighing the benefits granted by governments against the economic benefit generated by the arrival or relocation of a professional team. This work differs from the others because none of them applies a counterfactual model; it is thus the first attempt to detect economic impacts following the arrival of professional sports teams using this type of statistical model. No effects of professional sports activity were found in the variables and regions studied here.
The present project aims at analyzing possible impacts of changes in the base city of NFL teams during the period from 1990 to the early 21st century. Employment and income data for those regions are the main basis of the diagnosis: a detailed analysis of the behavior of these variables after a team's move to a new town reveals whether there was an economic impact. The model applied is the ArCo, developed in Carvalho, Masini and Medeiros (2018). There is an extensive bibliography of studies on the economic impacts generated by professional sports in specific regions, always associating the benefits granted by the government with the economic benefit generated by the arrival or departure of a professional team. However, this work differs significantly from the others because none of them applies a counterfactual model; the present work is therefore the first attempt to demonstrate economic impacts after the arrival of professional sports teams using this type of statistical model. The author did not find relevant economic effects in the variables analyzed in this study.
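The core ArCo idea, projecting a counterfactual for the treated region from untreated peers fitted on the pre-intervention period, can be sketched on synthetic data. This is a simplified stand-in, not the estimator of Carvalho, Masini and Medeiros; all series and the injected effect of 2.0 are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, T0 = 120, 80                      # total months, intervention at month 80
peers = rng.normal(size=(T, 3)).cumsum(axis=0)               # untreated donor series
treated = 0.5 * peers[:, 0] - 0.3 * peers[:, 1] + rng.normal(scale=0.1, size=T)
treated[T0:] += 2.0                  # inject a known post-intervention effect

X = np.column_stack([np.ones(T), peers])
beta, *_ = np.linalg.lstsq(X[:T0], treated[:T0], rcond=None)  # fit pre-period only
counterfactual = X @ beta                                     # project over all months
effect = (treated[T0:] - counterfactual[T0:]).mean()          # ArCo-style estimate
```

Because the fit uses only pre-intervention months, the post-period gap between the observed series and its projection estimates the intervention effect.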
Lobarinhas, Roberto Beier. „Modelos black-litterman e GARCH ortogonal para uma carteira de títulos do tesouro nacional“. Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-17092012-145914/.
One major challenge in financial management lies in combining traditional management with quantitative methods. Traditional managers tend to be skeptical about the contribution of quantitative methods, whereas quantitative analysts tend to disregard the importance of the traditional view, creating clear disharmony and inefficiency in the risk-management process. A model that seeks to narrow the distance between these two views is the Black-Litterman model (BLM). More specifically, it arose as a solution to difficulties faced when using modern portfolio theory in practice, particularly those derived from the use of the Markowitz model. Although the Markowitz model has constituted the basis of portfolio theory for over half a century, since the publication of the article Portfolio Selection [Mar52], its impact on the investment world has been quite limited. The Markowitz model addresses the most central objective of an investment: maximizing the expected return for a given level of risk. Even though it has had a standout role in the mean-variance approach among academics, several difficulties arise when one attempts to use it in practice. Despite the disadvantages of its practical usage, the idea of maximizing the return for a given level of risk is so appealing to investors that the search for better-behaved models continued, and it is in this context that the Black-Litterman model came out. In 1992, Fischer Black and Robert Litterman wrote an article on the Black-Litterman model. One intrinsic difference between the BLM and a traditional mean-variance model is that, while the latter produces the asset weights of a portfolio from an optimization routine, the BLM takes as its starting point the long-run equilibrium market portfolio (CAPM).
Another highlight of the BLM is its ability to provide a clear structure that combines long-term equilibrium information with the investor's views, producing a set of expected returns which, together, are the input used to generate the asset weights. As far as the estimation process is concerned, and for the purpose of choosing the most appropriate model, we took into account the fact that the risk of a portfolio is determined by the covariance matrix of its assets; matrices of large dimension therefore play an important role in investment analysis. Given the application under study, it is desirable to have a model that can handle a considerable number of assets. For these reasons the Orthogonal GARCH was selected, since it can generate the covariance matrix of the original system from just a few univariate volatilities, which makes it a computationally simple method. The orthogonal factors are obtained with principal component analysis. Decomposing the variance of the system into risk factors is highly important, since it allows the risk manager to focus separately on each relevant source of risk. The main idea behind the orthogonalization is to work with a reduced number of components: enough risk factors are retained, and the variability not captured by the model is treated as insignificant noise. Nevertheless, the precision obtained when not using all the components depends on the retained components being sufficient to explain the major part of the variability. Moreover, the model will provide reasonable results only if the principal component analysis itself performs properly, which is more likely to happen in highly correlated systems.
It is worth noting that the Orthogonal GARCH is equally useful and feasible when one intends to analyze a portfolio of assets spanning various types of risk, that is, a system which is not highly correlated. Such portfolios are common, containing, for instance, currency rates, stocks, fixed income and commodities. To make the method perform properly, it is necessary to separate the assets into groups with the same kind of risk, carry out the principal component analysis per group, and then merge the covariance matrices, producing the covariance matrix of the original system. To work together with the orthogonalization, the GARCH model was chosen because it captures the main stylized facts that characterize financial time series. Stylized facts are empirically observed statistical patterns believed to be present in a number of time series; financial time series of sufficiently high frequency (intraday, daily and even weekly) usually exhibit them. For estimating returns, an ARMA model was used; together with the covariance matrix estimate, this provides all the parameters needed to perform the BLM study, yielding the optimal portfolio at a given initial time. In addition, we make forecasts with the GARCH model, obtaining optimal portfolios for the following weeks. We show that combining the BLM with the Orthogonal GARCH can generate satisfactory results that are coherent with intuition, while keeping the model simple. Our application is to fixed-income returns, more specifically returns of bonds issued in the domestic market by the Brazilian National Treasury. The motivation for this work was to bring together statistical tools and their uses in finance, more specifically those related to the bonds issued by the National Treasury, which have become more and more popular due to the "Tesouro Direto" program.
In conclusion, this work aims to provide useful information both for investors and for debt managers, since the mean-variance model can serve those who want to maximize return at a given level of risk as well as those who issue bonds and thus seek to reduce their issuance costs at prudent levels of risk.
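The Black-Litterman blending of equilibrium returns with investor views can be written in a few lines of linear algebra. The posterior is the precision-weighted combination of the implied equilibrium returns and the views; the covariance matrix, market weights, view and all scalar parameters below are illustrative numbers, not values from the thesis.

```python
import numpy as np

def black_litterman(Sigma, w_mkt, delta, tau, P, Q, Omega):
    """BL posterior expected returns: blend CAPM equilibrium returns with views."""
    pi = delta * Sigma @ w_mkt                     # implied equilibrium returns
    A = np.linalg.inv(tau * Sigma)                 # precision of the equilibrium prior
    B = P.T @ np.linalg.inv(Omega) @ P             # precision contributed by the views
    return np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)

Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])     # asset covariance (illustrative)
w_mkt = np.array([0.6, 0.4])                       # market-cap weights
P = np.array([[1.0, -1.0]])                        # view: asset 1 outperforms asset 2
Q = np.array([0.02])                               # ... by 2%
Omega = np.array([[0.001]])                        # confidence in that view
mu = black_litterman(Sigma, w_mkt, delta=2.5, tau=0.05, P=P, Q=Q, Omega=Omega)
```

With a confident view, the posterior return spread moves from the equilibrium spread toward the stated 2% view without fully reaching it.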
Celeste, Roger Keller. „Desigualdades socioeconômicas e saúde bucal“. Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=7153.
This thesis focuses on the effects of income inequality on oral health and on trends in socioeconomic inequalities in oral health. Any social injustice, given its moral dimension, is worth studying, but not every income inequality is socially unjust. It becomes unjust when economic inequalities are allowed to affect human rights, such as the right to a standard of living that ensures individuals and their families a healthy life. Income inequalities were studied from two angles: a) contextual effects of income inequality on oral health; b) trends in the oral-health gap between people with higher and lower income. The first part contains four original articles that studied the association and the contextual mechanisms by which income inequality affects oral health, using data from the 2002 SBBrasil oral-health survey. The results showed that: a) the association between income inequality and oral health is stronger for dental caries than for other oral diseases (e.g. periodontal diseases and malocclusions); b) its effects are more strongly associated with oral diseases of shorter latency; c) the effects associated with dental caries affect the poor and the rich equally; and d) the absence of public policies appears to be the best explanation for the effects of Brazil's excessive income inequality. Still regarding public policies, it was found that the rich benefit more from municipal public policies than the poor. The second part of this thesis contains two original articles describing trends in oral health and in the use of dental services among higher- and lower-income groups in Brazil and Sweden.
These analyses used data from the Brazilian oral-health surveys of 1986 and 2002 and, for Sweden, data from the "Swedish Level of Living Survey" for 1968, 1974, 1981, 1991 and 2000. Trends in the prevalence of edentulism showed a reduction of inequality in absolute percentage terms in both countries; in Brazil, however, the gap widened when the outcome was the prevalence of no missing teeth. The reductions in edentulism disparities were associated with a large initial difference, whereas the growing inequality in the prevalence of no missing teeth was related to a small inequality at the start of data collection. As for inequalities in the use of services, the poorer group continues to use dental services less in both countries, and the differences remain significant over time. In both Brazil and Sweden, however, these differences narrowed slightly in the younger cohorts owing to a decline in the percentage of richer people visiting the dentist. Our data allow us to conclude that inequalities in oral health persist even in highly egalitarian countries such as Sweden.
This thesis focuses on the effect of income distribution on oral health and on trends in socioeconomic disparities in oral health. Any social injustice, because of its moral dimension, is worth studying, though not all income inequality is unfair. Income inequality is unfair when people with fewer economic resources are penalized with poor health because of their condition of poverty. Unjust societies are those that allow economic inequalities to affect human rights, such as the right to a standard of living that ensures individuals and their families a healthy life. Income inequalities were studied in two respects: a) the contextual effects of income inequality on oral health, and b) trends in the oral-health gap between people with higher and lower income. The first part contains four original articles that studied the association and the contextual mechanisms by which income inequality affects oral health. For this we used data from the 2002 SBBrasil oral-health survey. The results showed that: a) the association between income inequality and oral health is stronger for dental caries than for other oral diseases (e.g. periodontal diseases and malocclusions); b) the effects of income inequality are more strongly associated with oral diseases of shorter latency; and c) the effects associated with dental caries affect the rich and the poor equally. The second part of this thesis contains two original articles describing trends in oral health and in the use of dental services among groups of higher and lower income in Brazil and Sweden. For this analysis, data were obtained from the Brazilian oral-health surveys of 1986 and 2002, while for Sweden data from the "Swedish Level of Living Survey" for the years 1968, 1974, 1981, 1991 and 2000 were used. Trends in the prevalence of edentulism showed a reduction in absolute disparities in both countries, but in Brazil disparities in the prevalence of "no missing teeth" increased.
Reductions in disparities in edentulism were associated with the presence of a significant initial difference, while the increase in inequality for the outcome "no missing teeth" was related to small inequalities at the beginning of data collection. Trends in the use of dental services show that the poorer have been using dental services less in both countries and that the differences remain significant over time. However, in both Brazil and Sweden, these differences decreased slightly in the younger cohorts because of a decline in the percentage of rich people who visit the dentist. Our data show that income inequalities in oral health and in the use of dental services have historically favored the more affluent population, even in highly egalitarian countries such as Sweden.
Trommenschlager, Marion. „Évolution du commerce et des formes urbaines à travers la transformation numérique“. Thesis, Rennes 2, 2019. http://www.theses.fr/2019REN20008/document.
Owing to the social acceleration of "late modernity", new political and economic issues are taking more space in territorial compositions, confronting them with a strong recomposition of the temporalities that shape the lived world. The digital transformation is not the cause of this dynamic of the "ephemeral present", but it is part of it and is likely to reinforce it concretely at various scales. The aim of this work is to understand how the relations between commercial forms and spatial forms are recomposed and redrawn by the digital shift, by studying, within the framework of a CIFRE agreement, the respective evolutions of the shops and territory of Rennes's city center. The research program links city practices, commercial practices, places and spaces, as well as temporalities. It takes part in the "Between Form and Standards" program of the PREFIcs team and is based on an extended conception of the logics of information and communication, which holds that information, in order to make sense symbolically, must also be a process of formatting, taking into account the articulation of organizational forms. This research therefore questions reconfigurations in space and time, those of public spheres and material assignments, as a framework for commercial logics and consumption imaginaries.
Pinto, Jeronymo Marcondes. „Benefícios do governo federal: uma análise com base na teoria dos ciclos eleitorais“. Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/11/11132/tde-13022012-145926/.
This research analyzes the dynamics of the welfare benefits of the Brazilian federal government, seeking to understand whether they are consistent with the theory of electoral cycles. Accordingly, we assessed whether the number of grants of these benefits tends to increase as elections approach. To do so, we used monthly data on the number of grants of three major Brazilian welfare benefits: the Benefício de Prestação Continuada, the Bolsa Família and the Auxílio Doença. Based on time-series analysis and on panel-data methods, we detected that the proximity of elections tends to affect the number of grants of welfare benefits in most of the cases analyzed. However, the discussion of the results indicates that the effects of elections are not solely the result of manipulation by politicians seeking reelection, but the fruit of a broader dynamic that is characteristic of elections.
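The kind of test described, whether grants rise as elections approach, can be sketched as a regression with an election-proximity dummy. The series below is synthetic with a known pre-election bump of 8.0 grants per month; the election calendar and all coefficients are illustrative, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(96)                         # eight years of monthly data
election = (months % 48) >= 42                 # six months before each election
grants = 100 + 0.2 * months + 8.0 * election + rng.normal(scale=2.0, size=96)

# OLS with an intercept, a linear trend, and the election-proximity dummy.
X = np.column_stack([np.ones(96), months, election.astype(float)])
beta, *_ = np.linalg.lstsq(X, grants, rcond=None)
cycle_effect = beta[2]                         # estimated pre-election bump
```

A significantly positive dummy coefficient is the signature of an electoral cycle; in practice one would also control for seasonality and use panel methods across regions, as the thesis does.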
Haumont, Dominique. „Calcul et représentation de l'information de visibilité pour l'exploration interactive de scènes tridimensionnelles“. Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210880.
Interactive display methods allow the user to explore virtual environments by rendering images at a rate high enough to give an impression of continuity and immersion. Despite hardware progress, new needs keep outstripping processing capacity, and acceleration techniques are necessary to maintain a sufficient frame rate. This work falls squarely within that context: it is devoted to the efficient culling of occluded objects, with the goal of accelerating the rendering of complex scenes. We are particularly interested in precomputation methods, which perform the expensive visibility computations during a preprocessing phase and reuse them during interactive navigation. Methods allowing a complete and exact precomputation are still out of reach today, which is why approximate techniques are preferred in practice. We propose three methods of this kind.
The first, presented in Chapter 4, is an algorithm that determines exactly whether two convex polygons are mutually visible when occluders are placed between them. Our main contributions are to simplify this query, both theoretically and in implementation, and to accelerate its average execution time with a set of optimization techniques. The result is an algorithm considerably simpler to implement than the exact algorithms in the literature; we show that it is also much more efficient in terms of computation time.
The second method, presented in Chapter 5, is an original approach to encoding visibility information: it stores the shadow that each object in the scene would generate if it were replaced by a light source. We analyze the advantages and drawbacks of this new representation.
Finally, in Chapter 6 we propose a visibility computation method suited to indoor scenes. In such environments, cell-and-portal graphs are widely used for occlusion culling because of their low memory cost and high efficiency. We reformulate the generation of these graphs as an image-segmentation problem and adapt a classical algorithm, the watershed, to obtain them automatically. We show that the resulting decomposition is close to the classical one and can be used for occlusion culling.
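As a rough illustration of the watershed idea applied to cell decomposition (in 2D, on a synthetic height field; this is not the thesis's algorithm), a priority-flood watershed grows each seed label in order of increasing height, so that a high "wall" naturally separates two cells:

```python
import heapq
import numpy as np

def watershed(height, seeds):
    """Priority-flood watershed: grow each seed's label in order of increasing height."""
    labels = -np.ones(height.shape, dtype=int)
    heap = []
    for label, (r, c) in enumerate(seeds):
        labels[r, c] = label
        heapq.heappush(heap, (height[r, c], r, c))
    while heap:
        h, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < height.shape[0] and 0 <= nc < height.shape[1] \
                    and labels[nr, nc] == -1:
                labels[nr, nc] = labels[r, c]           # claim the cell
                heapq.heappush(heap, (height[nr, nc], nr, nc))
    return labels

# Two rooms separated by a ridge (a wall): each seed floods its own room first.
height = np.zeros((5, 7))
height[:, 3] = 10.0                    # the "wall" column
labels = watershed(height, seeds=[(2, 0), (2, 6)])
```

The boundary where the two labels meet (along the wall) is where a portal would be extracted in a cell-and-portal decomposition.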
Lu, Heqi. „Echantillonage d'importance des sources de lumières réalistes“. Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0001/document.
Realistic images can be rendered by simulating light transport with Monte Carlo techniques. The possibility of using realistic light sources for synthesizing images greatly contributes to their physical realism. Among existing models, those based on environment maps and light fields are attractive due to their ability to capture faithfully the far-field and near-field effects, as well as the possibility of acquiring them directly. Since acquired light sources have arbitrary frequencies and possibly high dimension (4D), using such light sources for realistic rendering leads to performance problems. In this thesis, we focus on how to balance the accuracy of the representation and the efficiency of the simulation. Our work relies on generating high-quality samples from the input light sources for unbiased Monte Carlo estimation, and we introduce three novel methods. The first generates high-quality samples efficiently from dynamic environment maps that change over time. We achieve this with a GPU approach that generates light samples according to an approximation of the form factor and combines them with samples from BRDF sampling for each pixel of a frame. Our method is accurate and efficient: with only 256 samples per pixel, we achieve high-quality results in real time at 1024 × 768 resolution. The second is an adaptive sampling strategy for light-field light sources (4D): we generate high-quality samples efficiently by conservatively restricting the sampling area without reducing accuracy. With a GPU implementation and without any visibility computations, we achieve high-quality results with 200 samples per pixel in real time at 1024 × 768 resolution; performance remains interactive when visibility is computed with our shadow-map technique. We also provide a fully unbiased approach by replacing the visibility test with an offline CPU approach.
Since light-based importance sampling is not very effective when the underlying material is specular, we introduce a new balancing technique for multiple importance sampling (MIS). This allows us to combine other sampling techniques with our light-based importance sampling. By minimizing the variance based on a second-order approximation, we are able to find a good balance between different sampling techniques without any prior knowledge. Our method is effective, since it reduces the average variance for all of our test scenes, across different light sources, visibility complexity and materials. It is also efficient: the overhead of our "black-box" approach is constant and represents 1% of the whole rendering process.
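For context, the classic balance heuristic that such balancing strategies generalize can be sketched on a 1D toy integral. The integrand and both sampling densities below are illustrative choices, not quantities from the thesis; the estimator remains unbiased for any valid weights.

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: x * x                     # integrand; exact integral on [0,1] is 1/3

def mis_estimate(n):
    """One-sample-per-technique MIS with the balance heuristic."""
    x1 = rng.random(n)                  # technique 1: uniform samples, p1(x) = 1
    x2 = np.sqrt(rng.random(n))         # technique 2: p2(x) = 2x via inverse CDF
    p1 = lambda x: np.ones_like(x)
    p2 = lambda x: 2.0 * x
    w1 = p1(x1) / (p1(x1) + p2(x1))     # balance heuristic weights
    w2 = p2(x2) / (p1(x2) + p2(x2))
    return np.mean(w1 * f(x1) / p1(x1) + w2 * f(x2) / p2(x2))

estimate = mis_estimate(20000)          # should be close to 1/3
```

The thesis's contribution replaces these fixed balance weights with weights chosen to minimize a second-order variance approximation; the sketch only shows the baseline being improved upon.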
Bouchiba, Hassan. „Contributions en traitements basés points pour le rendu et la simulation en mécanique des fluides“. Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEM076/document.
Most surface 3D scanning techniques produce 3D point clouds. This thesis tackles the problem of using points as the only explicit surface representation, and presents two contributions in point-based processing. The first contribution is a new screen-space rendering algorithm for raw and massive point clouds. This new method can be applied to a wide variety of data, from small objects to complex scenes. A sequence of screen-space pyramidal operators is used to reconstruct a surface in real time and estimate its normals, which are later used to perform deferred shading. In addition, the use of pyramidal operators allows a framerate one order of magnitude higher than state-of-the-art methods. The second contribution is a new immersed-boundary computational fluid dynamics method based on extended implicit surface reconstruction. The proposed method relies on a new implicit surface definition from a point cloud by extended moving least squares. This surface is then used to define the boundary conditions of a finite-element immersed-boundary transient Navier-Stokes solver, which computes flows around the object sampled by the point cloud. The solver is interfaced with an anisotropic and adaptive meshing algorithm that refines the computational grid around both the geometry defined by the point cloud and the flow at each timestep of the simulation.
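The implicit-surface-from-points idea can be illustrated with a toy 2D moving-least-squares signed distance: fit a weighted local plane through nearby samples and measure the signed offset to it. This is a minimal sketch of the basic MLS operator, not the thesis's extended version; the kernel width h and the circle data are arbitrary choices.

```python
import numpy as np

def mls_implicit(points, normals, x, h=0.5):
    """Signed MLS distance at x: offset to a Gaussian-weighted local plane fit."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    w /= w.sum()
    c = w @ points                       # weighted centroid of nearby samples
    n = w @ normals
    n /= np.linalg.norm(n)               # weighted average normal
    return float(n @ (x - c))            # signed distance to the local plane

# Sample the unit circle in 2D; normals point outward.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.column_stack([np.cos(theta), np.sin(theta)])
nrm = pts.copy()
inside = mls_implicit(pts, nrm, np.array([0.5, 0.0]))   # negative: inside
outside = mls_implicit(pts, nrm, np.array([1.5, 0.0]))  # positive: outside
```

The zero level set of this function approximates the sampled surface, which is exactly the property a solver needs in order to impose boundary conditions on an immersed geometry.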
Bertails, Florence. „Simulation de chevelures virtuelles“. Phd thesis, Grenoble INPG, 2006. http://tel.archives-ouvertes.fr/tel-00105799.
Hair simulation has become, in recent years, a very active research topic in computer graphics. Moreover, the physical simulation of hair is attracting growing attention from cosmetologists, who see virtual prototyping as an effective way to develop hair-care products.
This thesis tackles two major, antagonistic difficulties of hair simulation: on the one hand, simulating a complete head of hair at interactive rates; on the other, the physical realism of the shape and motion of the hair.
We first develop new algorithms that reduce the computational cost inherent in classical hair-animation methods. Our approaches are the first to exploit multi-resolution animation and volumetric rendering of long hair, yielding interactive simulations.
We then propose a physically realistic hair model, developed in collaboration with specialists in mechanical modeling and cosmetology. We first present the precise mechanical model of a single hair fiber, derived from Kirchhoff's theory of elastic rods, whose development we contributed to during this partnership. Extended to the scale of a full head of hair, this model is then applied to the realistic generation of static natural hairstyles and to the dynamic simulation of hair of various ethnic origins, before finally being validated through a set of comparisons with reality.
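Kirchhoff rod models are well beyond a short sketch, but the general flavor of strand dynamics can be conveyed by a heavily damped mass-spring chain integrated with Verlet. All constants here are arbitrary and this is not the thesis's model; it only illustrates the pin-root-and-integrate pattern of strand simulators.

```python
import numpy as np

def simulate_strand(n=5, steps=2000, dt=0.005, rest=0.2, stiffness=1200.0):
    """Damped Verlet chain of point masses: root pinned at the origin, gravity pulls."""
    pos = np.column_stack([np.arange(n) * rest, np.zeros(n)])  # start horizontal
    prev = pos.copy()
    gravity = np.array([0.0, -9.81])
    for _ in range(steps):
        acc = np.tile(gravity, (n, 1))
        for i in range(n - 1):                    # linear spring between i and i+1
            d = pos[i + 1] - pos[i]
            length = np.linalg.norm(d)
            force = stiffness * (length - rest) * d / length
            acc[i] += force
            acc[i + 1] -= force
        # Verlet step with strong velocity damping so the strand settles.
        pos, prev = pos + 0.9 * (pos - prev) + acc * dt * dt, pos
        pos[0] = prev[0] = np.zeros(2)            # pin the root
    return pos

strand = simulate_strand()   # the strand ends up hanging below the pinned root
```

A real Kirchhoff-based model would additionally carry bending and twisting stiffness along the fiber's material frame, which is what produces curls and ethnic hair-type variation.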
Loyet, Raphaël. „Dynamic sound rendering of complex environments“. Phd thesis, Université Claude Bernard - Lyon I, 2012. http://tel.archives-ouvertes.fr/tel-00995328.
Der volle Inhalt der QuelleChou, Ching-Tang, und 周靖棠. „VR-based Motion Simulator for Ships on Real-time Rendered Dynamic Ocean“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/98740417913969125463.
國立臺灣大學 (National Taiwan University), 電機工程學研究所 (Graduate Institute of Electrical Engineering), academic year 95 (2006/07).
This thesis focuses on the construction of a physical dynamic system for ship modeling and motion simulation in virtual reality. We render a deep-ocean surface as our virtual environment in real time on a personal computer equipped with an NVIDIA GeForce Go 7400 graphics card and a 2.00 GHz CPU. We adopt an ocean-spectrum formulation, so the ocean wave field is created by specifying gravity and wind. Furthermore, in order to obtain real-time, realistic ocean scenery, we perform the shading on the Graphics Processing Unit (GPU); reflection and the Fresnel effect are taken into account on our dynamic ocean. For ship modeling, the forces and torques are calculated from the generated dynamic waves based on hydrodynamics and transferred to our ship model. We present a new algorithm that assigns sample points on the ship hull and applies 3D mathematics to locate each point. By calculating the volume of the ship below the ocean surface, we can approximate the dynamics of the ship. The simulation can also be integrated with a 6-degree-of-freedom motion platform to generate realistic motion sensation.
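A common way to realize such a spectrum-based ocean, in the spirit of Tessendorf's FFT method (the abstract does not specify this exact formulation), is to shape complex white noise with a Phillips-like spectrum parameterized by wind and gravity, then take an inverse FFT to get a heightfield:

```python
import numpy as np

def ocean_heightfield(n=64, wind=8.0, g=9.81, seed=0):
    """Phillips-spectrum heightfield via a 2D inverse FFT (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0)        # wavenumbers per axis
    kx, ky = np.meshgrid(k, k)
    k2 = kx * kx + ky * ky
    L = wind * wind / g                                # largest wave for this wind
    with np.errstate(divide="ignore", invalid="ignore"):
        phillips = np.exp(-1.0 / (k2 * L * L)) / (k2 * k2)
        phillips *= (kx / np.sqrt(k2)) ** 2            # favor waves along +x wind
    phillips[k2 == 0.0] = 0.0                          # no DC component
    amp = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(amp * np.sqrt(phillips / 2.0)).real

h = ocean_heightfield()
```

Animating the spectrum with the deep-water dispersion relation omega(k) = sqrt(g * |k|) before each inverse FFT turns this static heightfield into a time-evolving ocean.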
Chou, Ching-Tang. „VR-based Motion Simulator for Ships on Real-time Rendered Dynamic Ocean“. 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1707200717335300.
Conte, Melino. „Real-time rendering of cities at night“. Thèse, 2018. http://hdl.handle.net/1866/22119.
Der volle Inhalt der QuelleBlanchard, Eric. „Rendu de matériaux semi-transparents hétérogènes en temps réel“. Thèse, 2012. http://hdl.handle.net/1866/9021.
We find in nature several semi-transparent materials such as marble, jade or skin, as well as liquids such as milk or juices. Whether for digital movies or video games, an efficient method to render these materials is an important goal. Although a large body of previous academic work exists in this area, few of these works provide an interactive solution. This thesis presents a new method for simulating light scattering inside heterogeneous semi-transparent materials in real time. The core of our technique relies on voxelizing the geometric mesh to simplify the diffusion domain. Light diffusion is then computed by solving the diffusion equation, yielding a fast and efficient simulation. Our method differs from previous approaches mainly by its completely dynamic execution: it requires no precomputation and hence allows complete deformations of the geometric mesh.
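As a loose illustration of solving the diffusion equation on a voxel grid (a toy sketch, not the thesis's solver), an explicit finite-difference step spreads an injected "light" value to its six neighbors; grid size, step count and the coefficient alpha are arbitrary choices, with alpha <= 1/6 for stability:

```python
import numpy as np

def diffuse(grid, steps=50, alpha=0.1):
    """Explicit finite-difference diffusion on a 3D voxel grid (periodic bounds)."""
    g = grid.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
             + np.roll(g, 1, 1) + np.roll(g, -1, 1)
             + np.roll(g, 1, 2) + np.roll(g, -1, 2) - 6.0 * g)
        g += alpha * lap                 # discrete Laplacian times the coefficient
    return g

vox = np.zeros((16, 16, 16))
vox[8, 8, 8] = 1.0                       # inject light at one voxel
out = diffuse(vox)                       # energy spreads, total is conserved
```

Because each step is a convex combination of a voxel and its neighbors, the result stays non-negative and the total injected energy is conserved, which is what makes this scheme a plausible stand-in for volumetric light diffusion.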
Guertin-Renaud, Jean-Philippe. „Extended distribution effects for realistic appearance and light transport“. Thesis, 2021. http://hdl.handle.net/1866/25591.
Modern computer-generated imagery strives to be ever more faithful to the physical reality around us, and one such key physical phenomenon is the notion of distribution effects. Distribution effects are a category of light transport behaviors characterized by their distributed nature across some given dimension(s). For instance, motion blur is a distribution effect across time, while depth of field introduces a physical aperture for the camera, thus adding two more dimensions. These effects are commonplace in film and real life, thus making them desirable to reproduce. In this manuscript-based thesis, we present four papers which leverage, extend or inspire themselves from distribution effects. First, we propose a novel technique to render non-linear motion blur for real-time applications while conserving important scalability and efficiency characteristics. We leverage Bézier curves to approximate non-linear motion from just a few keyframes and rasterize synthesized geometry to replicate motion. Second, we present an algorithm to render glinty high-frequency materials illuminated by large environment maps. Using a combination of a compact half-vector histogram scheme and multiscale spherical harmonics, we can efficiently represent dense surface normals and render their interaction with large, filtered light sources. Third, we introduce a new method for rendering subsurface scattering by taking advantage of frequency analysis and dual-tree traversal. Computing screen-space subsurface light transport, we can quickly analyze signal frequency and determine efficient bandwidths which we then use to limit our traversal through a shading/illumination dual-tree. Finally, we show a novel real-time diffuse global illumination scheme using dynamically updated irradiance probes.
Thanks to efficient spherical radiance distribution updates, we can update irradiance probes at runtime, taking into consideration dynamic objects and changing lighting, and combine them with a more robust filtered irradiance query, making dense irradiance probe grids tractable in real time with minimal artifacts.
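The first paper's use of Bézier curves to approximate non-linear motion rests on standard De Casteljau evaluation. A minimal sketch follows, using the well-known quarter-circle approximation (handle length about 0.5523) as the test path; the path and constants are illustrative, not data from the paper:

```python
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Cubic Bezier by De Casteljau: repeated linear interpolation."""
    a = (1 - t) * p0 + t * p1
    b = (1 - t) * p1 + t * p2
    c = (1 - t) * p2 + t * p3
    d = (1 - t) * a + t * b
    e = (1 - t) * b + t * c
    return (1 - t) * d + t * e

# Approximate a quarter-circle motion path with a single cubic segment.
k = 0.5523                          # standard circle-approximation handle length
p0, p3 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, p2 = np.array([1.0, k]), np.array([k, 1.0])
mid = bezier(p0, p1, p2, p3, 0.5)   # should sit very close to the unit circle
```

Fitting such a segment through a few keyframes gives a smooth per-vertex motion path that can be sampled at any shutter time, which is the property motion-blur rendering needs.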
Martel, Geneviève. „L’accès adapté au sein du réseau de cliniques universitaires de l’Université de Montréal : une étude observationnelle“. Thèse, 2017. http://hdl.handle.net/1866/20559.