Academic literature on the topic 'Interactive volume visualization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Interactive volume visualization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Interactive volume visualization"

1

Dauitbayeva, A. O., A. A. Myrzamuratova, and A. B. Bexeitova. "INTERACTIVE VISUALIZATION TECHNOLOGY IN AUGMENTED REALITY." Bulletin of the Korkyt Ata Kyzylorda University 58, no. 3 (2021): 137–42. http://dx.doi.org/10.52081/bkaku.2021.v58.i3.080.

Abstract:
This article is devoted to the issues of visualization and information processing, in particular improving the visualization of three-dimensional objects using augmented reality and virtual reality technologies. The spread of virtual reality has led to the introduction of the term "augmented reality" into scientific circulation. Whereas current user-interface technologies focus mainly on the interaction between a person and a computer, augmented reality uses computer technologies to improve the interface between a person and the real world around them. In the synthesized image, volume is conveyed through monocular depth cues: the spatial arrangement of objects in linear perspective, the occlusion of one object by another, and changes in the shadows and tones across the image field. The experience of observation is of great importance for the perception of volume and space, so that the user mentally "completes" the volumetric structure of the observed representation. Thus, the visualization offered by augmented reality in a real environment familiar to the user contributes to a better perception of three-dimensional objects.
2

Stoppel, Sergej, and Stefan Bruckner. "Vol2velle: Printable Interactive Volume Visualization." IEEE Transactions on Visualization and Computer Graphics 23, no. 1 (January 2017): 861–70. http://dx.doi.org/10.1109/tvcg.2016.2599211.

3

Parker, S., M. Parker, Y. Livnat, P. P. Sloan, C. Hansen, and P. Shirley. "Interactive ray tracing for volume visualization." IEEE Transactions on Visualization and Computer Graphics 5, no. 3 (1999): 238–50. http://dx.doi.org/10.1109/2945.795215.

4

Muigg, P., M. Hadwiger, H. Doleisch, and E. Groller. "Interactive Volume Visualization of General Polyhedral Grids." IEEE Transactions on Visualization and Computer Graphics 17, no. 12 (December 2011): 2115–24. http://dx.doi.org/10.1109/tvcg.2011.216.

5

van der Voort, H. T. M., J. M. Messerli, H. J. Noordmans, and A. W. M. Smeulders. "Volume visualization for interactive microscopic image analysis." Bioimaging 1, no. 1 (March 1993): 20–29. http://dx.doi.org/10.1002/1361-6374(199303)1:1<20::aid-bio5>3.3.co;2-u.

6

Boyles, Michael, and Shiaofen Fang. "3Dive: An Immersive Environment for Interactive Volume Data Exploration." International Journal of Virtual Reality 5, no. 1 (January 1, 2001): 38–51. http://dx.doi.org/10.20870/ijvr.2001.5.1.2667.

Abstract:
This paper describes an immersive system, called 3DIVE, for interactive volume data visualization and exploration inside the CAVE virtual environment. Combining interactive volume rendering and virtual reality provides a natural immersive environment for volumetric data visualization. More advanced data exploration operations, such as object level data manipulation, simulation and analysis, are supported in 3DIVE by several new techniques: volume primitives and texture regions are used for the rendering, manipulation, and collision detection of volumetric objects; the region based rendering pipeline is integrated with 3D image filters to provide an image-based mechanism for interactive transfer function design; a collaborative visualization module allows remote sites to collaborate over common datasets with passive or active view sharing. The system has been recently released as public domain software for CAVE/ImmersaDesk users, and is currently being actively used by a 3D microscopy visualization project.
7

Kirmizibayrak, Can, Nadezhda Radeva, Mike Wakid, John Philbeck, John Sibert, and James Hahn. "Evaluation of Gesture Based Interfaces for Medical Volume Visualization Tasks." International Journal of Virtual Reality 11, no. 2 (January 1, 2012): 1–13. http://dx.doi.org/10.20870/ijvr.2012.11.2.2839.

Abstract:
Interactive systems are increasingly used in medical applications with the widespread availability of various imaging modalities. Gesture-based interfaces can be beneficial for interacting with these kinds of systems in a variety of settings, as they can be easier to learn and can eliminate several shortcomings of traditional tactile systems, especially for surgical applications. We conducted two user studies that explore different gesture-based interfaces for interaction with volume visualizations. The first experiment focused on rotation tasks, where the performance of the gesture-based interface (using Microsoft Kinect) was compared to using the mouse. The second experiment studied localization of internal structures, comparing slice-based visualizations via gestures and the mouse, in addition to a 3D Magic Lens visualization. The results of the user studies showed that the gesture-based interface outperformed the traditional mouse in both time and accuracy in the orientation matching task. The traditional mouse was the superior interface for the second experiment in terms of accuracy. However, the gesture-based Magic Lens interface was found to have the fastest target localization time. We discuss these findings and their implications for the use of gesture-based interfaces in medical volume visualization, and consider the possible underlying psychological mechanisms that explain why these methods can outperform traditional interaction methods.
8

Cruz, António, Joel P. Arrais, and Penousal Machado. "Interactive and coordinated visualization approaches for biological data analysis." Briefings in Bioinformatics 20, no. 4 (March 26, 2018): 1513–23. http://dx.doi.org/10.1093/bib/bby019.

Abstract:
The field of computational biology has become largely dependent on data visualization tools to analyze the increasing quantities of data gathered through the use of new and growing technologies. Aside from the volume, which often results in large amounts of noise and complex relationships with no clear structure, the visualization of biological data sets is hindered by their heterogeneity, as data are obtained from different sources and contain a wide variety of attributes, including spatial and temporal information. This requires visualization approaches that are able to not only represent various data structures simultaneously but also provide exploratory methods that allow the identification of meaningful relationships that would not be perceptible through data analysis algorithms alone. In this article, we present a survey of visualization approaches applied to the analysis of biological data. We focus on graph-based visualizations and tools that use coordinated multiple views to represent high-dimensional multivariate data, in particular time series gene expression, protein–protein interaction networks and biological pathways. We then discuss how these methods can be used to help solve the current challenges surrounding the visualization of complex biological data sets.
9

Huang, Jin Ming, and Kun Liang Liu. "Design and Implement on Volume Data Visualization System." Advanced Materials Research 433-440 (January 2012): 5680–85. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.5680.

Abstract:
Visualization of volume data is an important branch of scientific computing visualization and has very wide application. This paper presents an interactive visualization system for regular volume data. Building on the volume visualization, the system also provides real-time volume slicing and iso-surface extraction for 3D models. It offers strong support for researchers seeking the regularities hidden in the volume data.
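
The slicing and iso-surface ("equal-surface") operations this abstract mentions can be illustrated with a minimal Python sketch. This is not the authors' system: the synthetic volume, the slice axis, and the iso-value are assumptions, and scikit-image's marching cubes stands in for whatever extraction method the paper uses.

    # Illustrative sketch only: axis-aligned slicing and iso-surface extraction
    # from a regular scalar volume.
    import numpy as np
    from skimage import measure  # pip install scikit-image

    # Synthetic 64^3 volume (assumption for demo): distance from the volume center.
    coords = np.mgrid[0:64, 0:64, 0:64].astype(np.float32)
    volume = np.sqrt(((coords - 32.0) ** 2).sum(axis=0))

    # Volume slice: one axis-aligned plane of scalar values, ready for 2D display.
    z_slice = volume[:, :, 32]

    # Iso-surface ("equal-surface"): triangulate the level set volume == 20.
    verts, faces, normals, values = measure.marching_cubes(volume, level=20.0)
    print(z_slice.shape, verts.shape, faces.shape)
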
10

Suter, S. K., Jose A. Iglesias Guitian, F. Marton, M. Agus, A. Elsener, C. P. E. Zollikofer, M. Gopi, E. Gobbetti, and R. Pajarola. "Interactive Multiscale Tensor Reconstruction for Multiresolution Volume Visualization." IEEE Transactions on Visualization and Computer Graphics 17, no. 12 (December 2011): 2135–43. http://dx.doi.org/10.1109/tvcg.2011.214.


Dissertations / Theses on the topic "Interactive volume visualization"

1

Yang, Jun. "Interactive volume queries in a 3D visualization system." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ38421.pdf.

2

ESPINHA, RODRIGO DE SOUZA LIMA. "INTERACTIVE VOLUME VISUALIZATION OF UNSTRUCTURED MESHES USING PROGRAMMABLE GRAPHICS CARDS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6586@1.

Abstract:
Volume visualization is an important technique for the exploration of complex three-dimensional data sets, such as the results of numerical analysis using the finite element method. The efficient application of this technique to unstructured meshes has been an important area of research in the past few years. There are two basic methods to visualize volumetric data: surface extraction and direct volume rendering. In the first, the iso-surfaces of the scalar field are explicitly extracted. In the second, which is the one used in this work, scalar data are classified by a transfer function, which maps the scalar values to color and opacity, to be visualized. With the evolution of personal computer graphics cards (GPUs), new techniques for volume visualization have been developed. The new algorithms take advantage of modern programmable graphics cards, whose processing power increases at a faster rate than that of conventional processors (CPUs). This work evaluates and compares two GPU-based algorithms for volume visualization of unstructured meshes: view-independent cell projection (VICP) and ray-tracing. In addition, two adaptations of the studied algorithms are proposed. For the cell projection algorithm, we propose a GPU data structure in order to eliminate the high cost of CPU-to-GPU data transfer. For the ray-tracing algorithm, we propose to integrate the transfer function in the GPU, which increases the quality of the generated image and allows the transfer function to be changed interactively.
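
The classification step described above, a transfer function mapping scalar values to color and opacity that is then composited along each ray, can be sketched on the CPU in a few lines. This is only an illustration of the general direct volume rendering idea, not the thesis' GPU cell-projection or ray-tracing implementation; the orthographic ray direction, the 256-entry RGBA table, and the ramp transfer function are assumptions.

    # Minimal CPU sketch of transfer-function-based direct volume rendering
    # with front-to-back compositing and early ray termination.
    import numpy as np

    def render_orthographic(volume, tf_rgba):
        """volume: (H, W, D) scalars in [0, 1]; tf_rgba: (256, 4) color/opacity table."""
        h, w, depth = volume.shape
        image = np.zeros((h, w, 3), dtype=np.float32)
        alpha = np.zeros((h, w), dtype=np.float32)
        for z in range(depth):                        # march all rays along +z in lockstep
            idx = np.clip((volume[:, :, z] * 255).astype(int), 0, 255)
            sample = tf_rgba[idx]                     # classify each sample via the TF
            a = sample[..., 3] * (1.0 - alpha)        # front-to-back alpha blending
            image += sample[..., :3] * a[..., None]
            alpha += a
            if alpha.min() > 0.99:                    # early ray termination
                break
        return image

    # Example: a simple ramp transfer function (assumption for demo purposes).
    tf = np.zeros((256, 4), dtype=np.float32)
    tf[:, 0] = np.linspace(0.0, 1.0, 256)             # red increases with density
    tf[:, 3] = np.linspace(0.0, 0.05, 256)            # low, density-dependent opacity
    img = render_orthographic(np.random.rand(64, 64, 64).astype(np.float32), tf)
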
3

Sondershaus, Ralf. "Multi resolution representations and interactive visualization of huge unstructured volume meshes." [S.l. : s.n.], 2007.

4

Frishert, Willem Jan. "Interactive Visualization Of Large Scale Time-Varying Datasets." Thesis, Linköping University, Department of Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12283.

Abstract:

Visualization of large scale time-varying volumetric datasets is an active topic of research. Technical limitations in terms of bandwidth and memory usage become a problem when visualizing these datasets on commodity computers at interactive frame rates. The overall objective is to overcome these limitations by adapting the methods of an existing Direct Volume Rendering pipeline. The objective is considered to be a proof of concept to assess the feasibility of visualizing large scale time-varying datasets using this pipeline. The pipeline consists of components from previous research, which make extensive use of graphics hardware to visualize large scale static data on commodity computers.

This report presents a diploma work, which adapts the pipeline to visualize flow features concealed inside the large scale Computational Fluid Dynamics dataset. The work provides a foundation to address the technical limitations of the commodity computer to visualize time-varying datasets. The report describes the components making up the Direct Volume Rendering pipeline together with the adaptations. It also briefly describes the Computational Fluid Dynamics simulation, the flow features and an earlier visualization approach to show the system’s limitations when exploring the dataset.
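
A common way to attack the bandwidth limitation described here is to prefetch the next time step in the background while the current one is being rendered. The producer/consumer sketch below is only a toy illustration, not the pipeline adapted in the thesis; load_timestep is a stand-in for real disk or network I/O.

    # Sketch: prefetch time steps of a time-varying volume on a worker thread
    # while the "renderer" consumes the current one.
    import numpy as np, queue, threading, time

    def load_timestep(t):
        time.sleep(0.05)                              # stand-in for disk/network I/O
        return np.random.rand(64, 64, 64).astype(np.float32)

    def prefetcher(n_steps, out_q):
        for t in range(n_steps):
            out_q.put((t, load_timestep(t)))          # blocks while the queue is full
        out_q.put(None)                               # sentinel: no more time steps

    q = queue.Queue(maxsize=2)                        # small, double-buffer-style backlog
    threading.Thread(target=prefetcher, args=(10, q), daemon=True).start()
    while (item := q.get()) is not None:
        t, vol = item                                 # render `vol` here at frame t
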

5

Campoalegre, Vera Lázaro. "Contributions to the interactive visualization of medical volume models in mobile devices." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/285166.

Abstract:
With current improvements in medical imaging, specialists are able to obtain accurate information about anatomical structures of the human organism. By using different image visualization techniques, experts can obtain suitable images for bones, soft tissues, and the bloodstream, among others. Current algorithms generate images with ever better resolution and information accuracy. Medical doctors are becoming more familiar with three-dimensional structures reconstructed from two-dimensional images. As a result, hospitals are becoming interested in tele-medicine and tele-diagnostic solutions. Client-server applications allow these functionalities. Sometimes the use of mobile devices is necessary due to their portability and easy maintenance. However, the transmission time of the volumetric information and the low-performance hardware make the design of efficient visualization systems on these devices quite complex. The main objective of this thesis is to enrich the user experience during the interactive visualization of volumetric medical models on low-performance devices. To achieve this, a new transfer-function-aware compression/decompression mechanism adapted to transmission, reconstruction and visualization has been studied. This work proposes several schemes that exploit the use of transfer functions (TFs) to enhance volume compression during data transmission to mobile devices. As far as we know, this possibility has not been considered by any of the approaches described in previous work. The Wavelet-Based Volume Compression for Remote Visualization approach is a TF-aware compression scheme. It supports inspection of complex volume models with maximum level of detail in selected regions of interest (ROIs). It uses a GPU-based, ROI-aware ray-casting rendering algorithm in the client, with a limited amount of information being sent over the network, decreasing storage size on the client side. Regarding the Remote Exploration of Volume Models using Gradient Octrees scheme, we have shown that this technique can efficiently encode volume datasets. It supports high-quality visualizations with transfer functions from a predefined TF set. In the present implementation, transfer function sets can encode up to ten different volume materials. Gradient Octrees are multi-resolution, supporting progressive transmission and avoiding gradient computations on the client device. That is, Gradient Octrees encode precomputed gradients to save costly computations in the client, and support illumination-based ray casting without extra computations on the client GPU. The proposed scheme presents a minimal loss of visual quality compared to state-of-the-art ray-casting renderings. The octree structure is compacted into a small volume array and a set of texture-coded arrays, with only one bit per octree node. The proposed scheme supports planar volume sections, which are visualized with high-resolution volume information, as well as interactive extrusion of specific structures. As a final contribution, a Hybrid ROI-based Visualization Algorithm has been proposed. It inherits the advantages of the previously described contributions while keeping good performance in terms of bandwidth requirements and storage needs on client devices. The scheme is flexible enough to represent several materials and volume structures in the ROI area at high resolution with a very limited information transmission cost. The Hybrid approach has proved to be especially well suited to large models.
Experimental results show that this Hybrid approach is a scalable scheme, with compression rates that decrease when the size of the volume model increases.
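
One building block mentioned above, precomputing per-voxel gradients on the server so the client can do illuminated ray casting without extra computation, can be sketched in a few lines of Python. The normalization and the 8-bit quantization below are assumptions for illustration, not the thesis' exact Gradient Octree encoding.

    # Sketch: precompute and quantize per-voxel gradients for client-side shading.
    import numpy as np

    def precompute_gradients(volume):
        """Return unit gradient vectors quantized to uint8, shape (X, Y, Z, 3)."""
        gx, gy, gz = np.gradient(volume.astype(np.float32))
        g = np.stack([gx, gy, gz], axis=-1)
        norm = np.linalg.norm(g, axis=-1, keepdims=True)
        g = g / np.maximum(norm, 1e-6)                    # unit normals for shading
        return ((g * 0.5 + 0.5) * 255).astype(np.uint8)   # pack [-1, 1] into 8 bits

    grads = precompute_gradients(np.random.rand(32, 32, 32))
    print(grads.shape, grads.dtype)                       # (32, 32, 32, 3) uint8
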
6

Vidholm, Erik. "Visualization and Haptics for Interactive Medical Image Analysis." Doctoral thesis, Uppsala: Acta Universitatis Upsaliensis, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-8409.

7

Berg, Matthias, and Jonathan Grangien. "Implementing an Interactive Simulation Data Pipeline for Space Weather Visualization." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-162477.

Abstract:
This thesis details work carried out by two students working as contractors at the Community Coordinated Modelling Center at the Goddard Space Flight Center of the National Aeronautics and Space Administration. The thesis is made possible by, and aims to contribute to, the OpenSpace project. The first track of the work is the handling and assembly of new data for a visualization of coronal mass ejections in OpenSpace. The new data allows coronal mass ejections to be observed at their origin near the surface of the Sun, whereas previous data visualized them only from 30 solar radii outwards. Previously implemented visualization techniques are combined to visualize different volume data and field lines, which, together with a synoptic magnetogram of the Sun, gives a multi-layered visualization. The second track is an experimental implementation of a generalized, less user-involved process for getting new data into OpenSpace, with a priority on volume data as that was a subject of prior experience. The results show a space weather model visualization and how one such model can be adapted to fit within the parameters of the OpenSpace project. Additionally, the results show how a GUI connected to a series of background events can form a data pipeline that makes complicated space weather models more easily available.
8

Huff, Rafael. "Recorte volumétrico usando técnicas de interação 2D e 3D." Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/7385.

Abstract:
Visualization of volumetric datasets is common in many fields and has been an active area of research in the past two decades. In spite of developments in volume visualization techniques, interacting with large datasets still demands research efforts due to perceptual and performance issues. The support of graphics hardware for texture-based visualization allows efficient implementation of rendering techniques that can be combined with interactive sculpting tools to enable interactive inspection of 3D datasets. Many studies regarding performance optimization of sculpting tools have been reported, but very few are concerned with the interaction techniques employed. The purpose of this work is the development of interactive, intuitive, and easy-to-use sculpting tools. Initially, a review of the main techniques for direct volume visualization and sculpting is presented. The best solution that guarantees the required interaction is highlighted. Afterwards, in order to identify the most user-friendly interaction technique for volume sculpting, several interaction techniques, metaphors and taxonomies are presented. Based on that, this work presents the development of three generic sculpting tools implemented using two different interaction metaphors, which are often used by users of 3D applications: virtual pointer and virtual hand. Interactive rates for these sculpting tools are obtained by running special fragment programs on the graphics hardware which specify regions within the volume to be discarded from rendering based on geometric predicates. After development, the performance, precision and user preference of the sculpting tools were evaluated to compare the interaction metaphors. Afterward, the tools were evaluated by comparing the use of a 3D mouse against a conventional wheel mouse for guiding volume and tools manipulation. Two-handed input was also tested with both types of mouse. The results from the evaluation experiments are presented and discussed.
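
The core mechanism described here, discarding samples that satisfy a geometric predicate so that part of the volume is sculpted away, can be mimicked on the CPU with a boolean mask. The spherical eraser below is an assumption for illustration; the dissertation evaluates such predicates in fragment programs on the GPU during rendering.

    # Sketch: sculpt a volume by discarding samples inside a spherical region,
    # a CPU stand-in for the per-sample geometric predicates described above.
    import numpy as np

    def carve_sphere(volume, center, radius):
        """Return a copy of `volume` with voxels inside the sphere zeroed out."""
        ii, jj, kk = np.indices(volume.shape)
        inside = ((ii - center[0]) ** 2 +
                  (jj - center[1]) ** 2 +
                  (kk - center[2]) ** 2) <= radius ** 2
        carved = volume.copy()
        carved[inside] = 0.0      # these samples are skipped ("discarded") when rendering
        return carved

    vol = np.random.rand(64, 64, 64).astype(np.float32)
    vol = carve_sphere(vol, center=(32, 32, 48), radius=12)   # one sculpting stroke
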
9

Prauchner, João Luis. "Especificação de funções de transferência para visualização volumétrica." Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/164626.

Abstract:
Direct volume rendering techniques are used to visualize and explore large scalar volumes. Volume data can be acquired from many sources, including medical diagnostic scanners, remote sensing radars or even computer-aided scientific simulations. A key issue in volume rendering is the specification of transfer functions (TFs), which assign color and opacity to the scalar values that comprise the volume. These functions are important for displaying features and objects of interest in the volume, but their specification is not trivial or intuitive. Traditional approaches allow the manual editing of a graphic plot with control points representing the TF being applied to the volume. However, these techniques lead the user into an unintuitive and time-consuming trial-and-error task. It is also considered that automatic methods that exclude the user from the process should be avoided, since the user must have some control over the visualization process. This work presents a semi-automatic and interactive tool to assist the user in the specification of color and opacity TFs. The proposed tool has two levels of user interaction. The first level presents to the user several candidate TFs rendered as 3D thumbnails, following the method known as Design Galleries (MARKS et al., 1997). Techniques are applied to reduce the set of candidate functions to a more reasonable size, and these functions can be further refined at this level. At the second level, the user can define and edit colors for the chosen TF and refine it further if desired. One of the objectives of this work is to allow users to deal with different aspects of TF specification, which generally depend on the application or the dataset being visualized. To render the volume, the programmability of the current generation of graphics hardware is exploited, as well as texture mapping, in order to achieve real-time interaction. The tool is applied to medical and synthetic datasets, but the main objective is to propose a general-purpose tool for specifying TFs without the need for an explicit mapping from the user.
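
The first interaction level described above, a gallery of candidate opacity transfer functions for the user to browse, can be approximated by generating simple parameterized candidates from the data histogram. The Gaussian-bump parameterization and the peak-picking heuristic below are assumptions for illustration, not the tool's actual Design Galleries generator.

    # Sketch: generate candidate 1D opacity transfer functions around histogram peaks,
    # each of which could be rendered as one gallery thumbnail.
    import numpy as np

    def candidate_opacity_tfs(volume, n_candidates=8, width=12.0):
        """Return (n_candidates, 256) opacity curves, one Gaussian bump per candidate."""
        hist, _ = np.histogram(volume, bins=256, range=(0.0, 1.0))
        centers = np.argsort(hist)[::-1][:n_candidates]   # most populated scalar bins
        x = np.arange(256, dtype=np.float32)
        return np.stack([np.exp(-((x - c) ** 2) / (2.0 * width ** 2)) for c in centers])

    tfs = candidate_opacity_tfs(np.random.rand(32, 32, 32))
    print(tfs.shape)   # (8, 256)
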
10

Armstrong, Christopher J. "Live Surface." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1029.

Abstract:
Live Surface allows users to segment and render complex surfaces from 3D image volumes at interactive (sub-second) rates using a novel Cascading Graph Cut (CGC). Live Surface consists of two phases: (1) preprocessing for generation of a complete 3D watershed hierarchy, followed by tracking of all catchment basin surfaces; (2) user interaction in which, with each mouse movement, the 3D object is selected and rendered in real time. Real-time segmentation is accomplished by cascading through the 3D watershed hierarchy from the top, applying graph cut successively at each level only to catchment basins bordering the segmented surface from the previous level. CGC allows the entire image volume to be segmented an order of magnitude faster than existing techniques that make use of graph cut. OpenGL rendering provides for display and update of the segmented surface at interactive rates. The user selects objects by tagging voxels with either (object) foreground or background seeds. Seeds can be placed on image cross-sections or directly on the 3D rendered surface. Interaction with the rendered surface improves the user's ability to steer the segmentation, augmenting or subtracting from the current selection. Segmentation and rendering, combined, are accomplished in about 0.5 seconds, allowing 3D surfaces to be displayed and updated dynamically as each additional seed is deposited. The immediate feedback of Live Surface allows for the segmentation of 3D image volumes with an interaction paradigm similar to the Live Wire (Intelligent Scissors) tool used in 2D images.

Books on the topic "Interactive volume visualization"

1

Cai, Wenli. Interactive Volume Visualization in the Context of Virtual Radiotherapy Treatment Planning (European University Studies: Series, Informatic, 41). Peter Lang Publishing, 2001.


Book chapters on the topic "Interactive volume visualization"

1

Wilson, Brett, Eric B. Lum, and Kwan-Liu Ma. "Interactive Multi-volume Visualization." In Lecture Notes in Computer Science, 102–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46080-2_11.

2

Wittenbrink, Craig M., Hans J. Wolters, and Mike Goss. "CellFast: Interactive Unstructured Volume Rendering and Classification." In Data Visualization, 141–56. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4615-1177-9_10.

3

Hlawitschka, Mario, Gunther H. Weber, Alfred Anwander, Owen T. Carmichael, Bernd Hamann, and Gerik Scheuermann. "Interactive Volume Rendering of Diffusion Tensor Data." In Mathematics and Visualization, 161–76. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-88378-4_8.

4

Ropinski, Timo, and Klaus Hinrichs. "Interactive Volume Visualization Techniques for Subsurface Data." In Visual Information and Information Systems, 121–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11590064_11.

5

Spalt, Alfred. "A pipeline algorithm for interactive volume visualization." In Parallel Computation, 105–13. Berlin, Heidelberg: Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-57314-3_9.

6

McPhail, Travis, Powei Feng, and Joe Warren. "Fast Cube Cutting for Interactive Volume Visualization." In Advances in Visual Computing, 620–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10331-5_58.

7

Frühauf, Martin, and Kennet Karlsson. "The Rotating Cube: Interactive Specification of Viewing for Volume Visualization." In Visualization in Scientific Computing, 181–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 1994. http://dx.doi.org/10.1007/978-3-642-77902-2_17.

8

Tawara, Takehiro. "Interactive Volume Segmentation and Visualization in Augmented Reality." In Handbook of Augmented Reality, 199–210. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-0064-6_8.

9

Heinzlreiter, P., A. Wasserbauer, H. Baumgartner, D. Kranzlmüller, G. Kurka, and J. Volkert. "Interactive Virtual Reality Volume Visualization on the Grid." In Distributed and Parallel Systems, 90–97. Boston, MA: Springer US, 2002. http://dx.doi.org/10.1007/978-1-4615-1167-0_11.

10

Miyawaki, Miwa, Kyoko Hasegawa, Liang Li, and Satoshi Tanaka. "Transparent Fused Visualization of Surface and Volume Based on Iso-Surface Highlighting." In Intelligent Interactive Multimedia Systems and Services, 260–66. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-92231-7_27.


Conference papers on the topic "Interactive volume visualization"

1

Parker, Steven, Michael Parker, Yarden Livnat, Peter-Pike Sloan, Charles Hansen, and Peter Shirley. "Interactive ray tracing for volume visualization." In ACM SIGGRAPH 2005 Courses. New York, New York, USA: ACM Press, 2005. http://dx.doi.org/10.1145/1198555.1198754.

2

Wang, Qiqi, Yinlong Sun, Bartek Rajwa, and J. P. Robinson. "Interactive volume visualization of cellular structures." In Electronic Imaging 2006, edited by Charles A. Bouman, Eric L. Miller, and Ilya Pollak. SPIE, 2006. http://dx.doi.org/10.1117/12.643616.

3

Root, Gary, C. Sims, R. Pillutla, and Samuel M. Goldwasser. "Interactive 3D dose volume visualization in radiation therapy." In Visualization in Biomedical Computing 1994, edited by Richard A. Robb. SPIE, 1994. http://dx.doi.org/10.1117/12.185213.

4

Zhang, Yubo, and Kwan-Liu Ma. "Fast global illumination for interactive volume visualization." In the ACM SIGGRAPH Symposium. New York, New York, USA: ACM Press, 2013. http://dx.doi.org/10.1145/2448196.2448205.

5

LaMar, Eric, Bernd Hamann, and Kenneth I. Joy. "Multiresolution techniques for interactive texture-based volume visualization." In Electronic Imaging, edited by Robert F. Erbacher, Philip C. Chen, Jonathan C. Roberts, and Craig M. Wittenbrink. SPIE, 2000. http://dx.doi.org/10.1117/12.378913.

6

Moran, Patrick J. "Nicer-slicer-dicer: an interactive volume visualization tool." In IS&T/SPIE 1994 International Symposium on Electronic Imaging: Science and Technology, edited by Carol J. Cogswell and Kjell Carlsson. SPIE, 1994. http://dx.doi.org/10.1117/12.172095.

7

"INTERACTIVE DEFORMATION AND VISUALIZATION OF LARGE VOLUME DATASETS." In International Conference on Computer Graphics Theory and Applications. SciTePress - Science and and Technology Publications, 2007. http://dx.doi.org/10.5220/0002082200390046.

8

Knoll, Aaron, Sebastian Thelen, Ingo Wald, Charles D. Hansen, Hans Hagen, and Michael E. Papka. "Full-resolution interactive CPU volume rendering with coherent BVH traversal." In 2011 IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2011. http://dx.doi.org/10.1109/pacificvis.2011.5742355.

9

Hong, Fan, Can Liu, and Xiaoru Yuan. "DNN-VolVis: Interactive Volume Visualization Supported by Deep Neural Network." In 2019 IEEE Pacific Visualization Symposium (PacificVis). IEEE, 2019. http://dx.doi.org/10.1109/pacificvis.2019.00041.

10

State, Andrei, Jonathan McAllister, Ulrich Neumann, Hong Chen, Tim J. Cullip, David T. Chen, and Henry Fuchs. "Interactive volume visualization on a heterogeneous message-passing multicomputer." In the 1995 symposium. New York, New York, USA: ACM Press, 1995. http://dx.doi.org/10.1145/199404.199416.
