Selection of scientific literature on the topic "Real-time 3D visual simulation"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose the type of source:

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Real-time 3D visual simulation".

Next to every work in the list of references you will find the "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Real-time 3D visual simulation"

1

Isaacs, John P., David J. Blackwood, Daniel Gilmour, and Ruth E. Falconer. "Real-Time Visual Simulation of Urban Sustainability". International Journal of E-Planning Research 2, no. 1 (January 2013): 20–42. http://dx.doi.org/10.4018/ijepr.2013010102.

Full text of the source
Abstract:
Sustainable decision making for strategic planning is a challenging process, requiring an understanding of the complex interactions among environmental, economic, and social factors. Commonly, such decisions are dominated by economic factors; hence there is a need for a framework that supports inclusive decision making throughout all stages of urban and rural planning projects. Towards this end, the authors have developed the Sustainable City Visualization Tool (S-CITY VT), which comprises 1) indicators (these provide the basis for assessment and monitoring of sustainability) selected according to scale and development, 2) modelling techniques that provide indicator values, as not all of the indicators can be measured, and allow spatio-temporal prediction of indicators, and 3) interactive 3D visualisation techniques to facilitate effective communication with a wide range of stakeholders. The sustainability modelling and 3D visualisations are shown to have the potential to enhance community engagement within the planning process, thus enhancing public acceptance of and participation in the urban or rural development project.
APA, Harvard, Vancouver, ISO, and other citation styles
2

Agus, Marco, Andrea Giachetti, Enrico Gobbetti, Gianluigi Zanetti, and Antonio Zorcolo. "Real-Time Haptic and Visual Simulation of Bone Dissection". Presence: Teleoperators and Virtual Environments 12, no. 1 (February 2003): 110–22. http://dx.doi.org/10.1162/105474603763835378.

Full text of the source
Abstract:
Bone dissection is an important component of many surgical procedures. In this paper, we discuss a haptic and visual simulation of a bone-cutting burr that is being developed as a component of a training system for temporal bone surgery. We use a physically motivated model to describe the burr-bone interaction, which includes haptic forces evaluation, the bone erosion process, and the resulting debris. The current implementation, directly operating on a voxel discretization of patient-specific 3D CT and MR imaging data, is efficient enough to provide real-time feedback on a low-end multiprocessing PC platform.
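
The burr-bone interaction described above operates directly on a voxel discretization of the patient data. As a purely illustrative sketch of that general idea, and not the authors' implementation, the C++ fragment below erodes the voxels that fall inside a spherical burr and accumulates the removed material; the grid layout, the erosion rate, and names such as VoxelGrid are assumptions made for this example.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Minimal dense voxel grid storing a "bone density" per cell (1.0 = solid bone).
    struct VoxelGrid {
        int nx, ny, nz;
        float spacing;                 // edge length of one voxel in mm
        std::vector<float> density;    // nx*ny*nz values in [0,1]

        VoxelGrid(int x, int y, int z, float s)
            : nx(x), ny(y), nz(z), spacing(s), density(size_t(x) * y * z, 1.0f) {}

        float& at(int i, int j, int k) { return density[(size_t(k) * ny + j) * nx + i]; }
    };

    // Erode every voxel whose center lies inside the spherical burr.
    // Returns the amount of material removed this step, which a haptic loop
    // could map to a reaction force.
    float erodeWithBurr(VoxelGrid& g, float cx, float cy, float cz,
                        float radius, float rate, float dt) {
        float removed = 0.0f;
        int r = int(radius / g.spacing) + 1;
        int ci = int(cx / g.spacing), cj = int(cy / g.spacing), ck = int(cz / g.spacing);
        for (int k = std::max(0, ck - r); k < std::min(g.nz, ck + r + 1); ++k)
            for (int j = std::max(0, cj - r); j < std::min(g.ny, cj + r + 1); ++j)
                for (int i = std::max(0, ci - r); i < std::min(g.nx, ci + r + 1); ++i) {
                    float dx = (i + 0.5f) * g.spacing - cx;
                    float dy = (j + 0.5f) * g.spacing - cy;
                    float dz = (k + 0.5f) * g.spacing - cz;
                    if (dx * dx + dy * dy + dz * dz <= radius * radius) {
                        float& d = g.at(i, j, k);
                        float loss = std::min(d, rate * dt);
                        d -= loss;
                        removed += loss;
                    }
                }
        return removed;
    }

    int main() {
        VoxelGrid bone(64, 64, 64, 0.5f);   // a 32 mm cube of solid bone
        float removed = erodeWithBurr(bone, 16.0f, 16.0f, 16.0f, 3.0f, 4.0f, 0.016f);
        std::printf("material removed in one 16 ms step: %.3f\n", removed);
        return 0;
    }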
APA, Harvard, Vancouver, ISO, and other citation styles
3

Sennersten, Charlotte C., and Craig A. Lindley. "Real Time Eye Gaze Logging in a 3D Game/Simulation World". Key Engineering Materials 437 (May 2010): 555–59. http://dx.doi.org/10.4028/www.scientific.net/kem.437.555.

Full text of the source
Abstract:
When evaluating the effectiveness of virtual environments as training and analysis systems, one must take into account both strongly and weakly defined measures of visual behaviour and associated experience. The investigation of cross-correlations between strongly defined measures of logged gaze behaviours and weakly defined measures of subjective perceptions of visual behaviour reveals significant discrepancies. The existence of these discrepancies casts doubt upon the effectiveness of using self-reporting questionnaires to assess training effectiveness. However, making participants aware of these discrepancies can be a potentially powerful method for increasing the effectiveness of training using virtual worlds.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Zhang, Zhi Chun, Song Wei Li, Song Yan Lu, Wen Xu, and Yun He. "3D Cloud Simulation Technology in Flight Visual System". Advanced Materials Research 909 (March 2014): 418–22. http://dx.doi.org/10.4028/www.scientific.net/amr.909.418.

Full text of the source
Abstract:
3D cloud simulation is an ideal way both to make visual scenes realistic and to generate weather radar images in flight simulation. This paper describes a 3D cloud simulation method that focuses on three aspects: cloud modeling, lighting, and rendering. First, the cloud is modeled with a particle system to capture the atmospheric characteristics of clouds in the natural world; textures are then mapped onto the particles to improve realism, and a lighting model is established to make the cloud environment convincing. Finally, impostor techniques are used to accelerate rendering. An implementation on a PC platform shows that the method generates realistic 3D clouds while meeting real-time requirements.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Shen, Helong, Yong Yin, Yongjin Li, and Pengcheng Wang. "Real-time Dynamic Simulation of 3D Cloud for Marine Search and Rescue Simulator". International Journal of Virtual Reality 8, no. 2 (January 1, 2009): 59–63. http://dx.doi.org/10.20870/ijvr.2009.8.2.2725.

Full text of the source
Abstract:
As the main scenery of the sky, 3D clouds strongly influence the fidelity of a visual system and the sense of immersion in a simulator. In this paper, building on the work of Y. Dobashi and T. Nishita, a small-region cellular automaton is generated and a more realistic cloud simulation is achieved. Experimental results show that the simulator's visual system still runs in real time, with a relatively high refresh rate, after the dynamic 3D clouds are applied.
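
For readers unfamiliar with the cellular-automaton approach of Dobashi and Nishita referenced above, the sketch below shows the general flavor of such an automaton: every cell carries humidity, phase-transition, and cloud bits, and activation spreads to neighboring cells. The rules and grid layout here are simplified assumptions for illustration, not the implementation used in this paper.

    #include <cstdio>
    #include <vector>

    // One cell of a cloud cellular automaton: humidity (hum), phase transition (act)
    // and cloud presence (cld), in the spirit of Dobashi & Nishita's rules.
    struct Cell { bool hum = false, act = false, cld = false; };

    struct CloudCA {
        int nx, ny, nz;
        std::vector<Cell> cells;
        CloudCA(int x, int y, int z) : nx(x), ny(y), nz(z), cells(size_t(x) * y * z) {}
        Cell& at(int i, int j, int k) { return cells[(size_t(k) * ny + j) * nx + i]; }
        bool actAt(int i, int j, int k) {   // out-of-range neighbors count as inactive
            if (i < 0 || j < 0 || k < 0 || i >= nx || j >= ny || k >= nz) return false;
            return at(i, j, k).act;
        }

        // One simulation step over the whole (small) region.
        void step() {
            std::vector<Cell> next = cells;
            for (int k = 0; k < nz; ++k)
                for (int j = 0; j < ny; ++j)
                    for (int i = 0; i < nx; ++i) {
                        Cell& c = at(i, j, k);
                        // Activation spreads from the six face neighbors.
                        bool neighborAct = actAt(i - 1, j, k) || actAt(i + 1, j, k) ||
                                           actAt(i, j - 1, k) || actAt(i, j + 1, k) ||
                                           actAt(i, j, k - 1) || actAt(i, j, k + 1);
                        Cell& n = next[(size_t(k) * ny + j) * nx + i];
                        n.hum = c.hum && !c.act;              // humidity is consumed by activation
                        n.act = !c.act && c.hum && neighborAct;
                        n.cld = c.cld || c.act;               // once activated, a cloud cell appears
                    }
            cells.swap(next);
        }
    };

    int main() {
        CloudCA ca(16, 16, 4);
        ca.at(8, 8, 2).hum = true;        // seed humidity
        ca.at(7, 8, 2).act = true;        // and a neighboring activation site
        for (int t = 0; t < 8; ++t) ca.step();
        int count = 0;
        for (const Cell& c : ca.cells) count += c.cld ? 1 : 0;
        std::printf("cloud cells after 8 steps: %d\n", count);
        return 0;
    }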
APA, Harvard, Vancouver, ISO, and other citation styles
6

Lee, Jyh-Fa, Ming-Shium Hsieh, Chih-Wei Kuo, Ming-Dar Tsai, and Ming Ma. "REAL-TIME THREE-DIMENSIONAL RECONSTRUCTION FOR VOLUME-BASED SURGERY SIMULATIONS". Biomedical Engineering: Applications, Basis and Communications 20, no. 04 (August 2008): 205–18. http://dx.doi.org/10.4015/s1016237208000799.

Full text of the source
Abstract:
This paper describes a three-dimensional reconstruction method to provide real-time visual responses for volume (constituted by tomographic slices) based surgery simulations. The proposed system uses dynamical data structures to record tissue triangles obtained from 3D reconstruction computation. Each tissue triangle in the structures can be modified or every structure can be deleted or allocated independently. Moreover, triangle reconstruction is optimized by only deleting or adding vertices from manipulated voxels that are classified as erosion (in which the voxels are changed from tissue to null) or generation (the voxels are changed from null to tissue). Therefore, by manipulating these structures, 3D reconstruction can be locally implemented for only manipulated voxels to achieve the highest efficiency without reconstructing tissue surfaces in the whole volume as general methods do. Three surgery simulation examples demonstrate that the proposed method can provide time-critical visual responses even under other time-consuming computations such as volume manipulations and haptic interactions.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Lei, Hua Feng, Ning Shan, and Hao Wang. "A Study on Optimization Technology of Visual Simulation Model Based on Creator". Applied Mechanics and Materials 556-562 (May 2014): 3633–36. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3633.

Full text of the source
Abstract:
Building the 3D model is the basis of visual simulation and strongly affects both real-time performance and realism when the scene is rendered on the Vega visual simulation platform. Sensible model optimization is therefore essential for resolving the trade-off between real-time performance and realism and for producing models that are both convincing and fast to render. This paper analyzes how modeling optimization techniques in Creator can be used to optimize an armored vehicle model so that it better meets the immersive and interactive requirements of a human, vehicle, and driving-environment visual simulation while preserving real-time performance and visual fidelity for the armored vehicle.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Lynen, Simon, Bernhard Zeisl, Dror Aiger, Michael Bosse, Joel Hesch, Marc Pollefeys, Roland Siegwart, and Torsten Sattler. "Large-scale, real-time visual–inertial localization revisited". International Journal of Robotics Research 39, no. 9 (July 7, 2020): 1061–84. http://dx.doi.org/10.1177/0278364920931151.

Full text of the source
Abstract:
The overarching goals in image-based localization are scale, robustness, and speed. In recent years, approaches based on local features and sparse 3D point-cloud models have both dominated the benchmarks and seen successful real-world deployment. They enable applications ranging from robot navigation, autonomous driving, virtual and augmented reality to device geo-localization. Recently, end-to-end learned localization approaches have been proposed which show promising results on small-scale datasets. However, the positioning accuracy, scalability, latency, and compute and storage requirements of these approaches remain open challenges. We aim to deploy localization at a global scale where one thus relies on methods using local features and sparse 3D models. Our approach spans from offline model building to real-time client-side pose fusion. The system compresses the appearance and geometry of the scene for efficient model storage and lookup leading to scalability beyond what has been demonstrated previously. It allows for low-latency localization queries and efficient fusion to be run in real-time on mobile platforms by combining server-side localization with real-time visual–inertial-based camera pose tracking. In order to further improve efficiency, we leverage a combination of priors, nearest-neighbor search, geometric match culling, and a cascaded pose candidate refinement step. This combination outperforms previous approaches when working with large-scale models and allows deployment at unprecedented scale. We demonstrate the effectiveness of our approach on a proof-of-concept system localizing 2.5 million images against models from four cities in different regions of the world achieving query latencies in the 200 ms range.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Zheng, Xing, Gu Chang Wang, He Yang, and Hong Xiang Liu. "Design and Implementation of UAV 3D Visual Simulation Training System". Applied Mechanics and Materials 336-338 (July 2013): 1361–65. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.1361.

Full text of the source
Abstract:
The UAV visual simulation training system is a vital part of a UAV simulation training system. It comprises the mathematical simulation models of the UAV and its airborne platform, the UAV visual system, and the ground-station control system. It provides a realistic virtual environment for the UAV operator by simulating the UAV flight control laws and the actual flight environment, so that operator skills can be trained, weapons tested, and tactical concepts validated economically and efficiently. This paper presents the functions, architecture, hardware deployment, operating conditions, and realization of the simulation modules.
APA, Harvard, Vancouver, ISO, and other citation styles
10

DE CARVALHO, PAULO ROBERTO, MAIKON CISMOSKI DOS SANTOS, WILLIAM ROBSON SCHWARTZ, and HELIO PEDRINI. "AN IMPROVED VIEW FRUSTUM CULLING METHOD USING OCTREES FOR 3D REAL-TIME RENDERING". International Journal of Image and Graphics 13, no. 03 (July 2013): 1350009. http://dx.doi.org/10.1142/s0219467813500095.

Full text of the source
Abstract:
The generation of real-time 3D graphics scenes normally demands high computational requirements. Several applications can benefit from efficient algorithms for rendering complex virtual environments, such as computer games, terrain visualization, virtual reality and visual simulation. This paper describes an improved view frustum culling method using spatial partitioning based on octrees for 3D real-time rendering. The proposed method is compared against two other approaches. Experiments using four different scenes are conducted to evaluate the performance of each tested method. Results demonstrate that the proposed method presents superior frame rate for all scenes.
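
To make the combination of octrees and view frustum culling concrete, the sketch below rejects an axis-aligned octree node as soon as it lies entirely outside one of the six frustum planes and otherwise recurses into its children. The plane and node representations are assumptions for illustration and are not taken from the paper.

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Plane { Vec3 n; float d; };                 // points p with n.p + d >= 0 are inside
    struct AABB  { Vec3 min, max; };

    struct OctreeNode {
        AABB box;
        std::vector<int> objects;                      // indices of objects stored in this node
        OctreeNode* child[8] = {nullptr};
    };

    // Conservative AABB-vs-plane test: is the box entirely on the negative side?
    bool outsidePlane(const AABB& b, const Plane& p) {
        // Pick the corner of the box that is farthest along the plane normal.
        Vec3 v{ p.n.x >= 0 ? b.max.x : b.min.x,
                p.n.y >= 0 ? b.max.y : b.min.y,
                p.n.z >= 0 ? b.max.z : b.min.z };
        return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d < 0.0f;
    }

    // Collect the objects of every octree node that intersects the frustum.
    void cullOctree(const OctreeNode* node, const Plane frustum[6], std::vector<int>& visible) {
        if (!node) return;
        for (int i = 0; i < 6; ++i)
            if (outsidePlane(node->box, frustum[i])) return;   // node (and children) rejected
        visible.insert(visible.end(), node->objects.begin(), node->objects.end());
        for (const OctreeNode* c : node->child) cullOctree(c, frustum, visible);
    }

    int main() {
        OctreeNode root;
        root.box = { {-10.f, -10.f, -10.f}, {10.f, 10.f, 10.f} };
        root.objects = {1, 2, 3};

        // A single "x >= 0" half-space plus far-away planes stand in for a real frustum.
        Plane half{ {1.f, 0.f, 0.f}, 0.f };
        Plane loose{ {1.f, 0.f, 0.f}, 1000.f };
        Plane frustum[6] = { half, loose, loose, loose, loose, loose };

        std::vector<int> visible;
        cullOctree(&root, frustum, visible);
        std::printf("visible objects: %zu\n", visible.size());
        return 0;
    }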
APA, Harvard, Vancouver, ISO, and other citation styles

Dissertations on the topic "Real-time 3D visual simulation"

1

Christoforidis, Constantin. "Optimizing your data structure for real-time 3D rendering in the web : A comparison between object-oriented programming and data-oriented design". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20048.

Full text of the source
Abstract:
Performance is always a concern when developing real-time 3D graphics applications. The way programs are written today with object-oriented programming has certain flaws that are rooted in the methodology itself. By exploring different programming paradigms we can eliminate some of these issues and find what is best for programming in different areas. Because real-time 3D applications need high performance, the data-oriented design paradigm, which makes the data the center of the application, is experimented with. By using data-oriented design we can eliminate certain issues with object-oriented programming and deliver applications with improved performance, flexibility, and architecture. In this study, an experiment creating the same type of program with the help of different programming paradigms is made to compare the performance of the two. Some additional up- and downsides of the paradigms are also mentioned.

There is additional digital material (e.g. video, image, or audio files) or models/artefacts belonging to this thesis that will be sent to the archive.
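
As a toy illustration of the object-oriented versus data-oriented contrast investigated in this thesis (not the author's benchmark code), the sketch below updates particle positions once with an array-of-structs layout and once with a struct-of-arrays layout; keeping each field contiguous in memory is the usual argument for data-oriented design.

    #include <cstdio>
    #include <vector>

    // Object-oriented / array-of-structs flavor: each particle is one object.
    struct Particle {
        float px, py, pz;
        float vx, vy, vz;
    };

    void updateAoS(std::vector<Particle>& ps, float dt) {
        for (Particle& p : ps) { p.px += p.vx * dt; p.py += p.vy * dt; p.pz += p.vz * dt; }
    }

    // Data-oriented / struct-of-arrays flavor: each field is stored contiguously,
    // so a pass that only touches positions and velocities streams through memory.
    struct Particles {
        std::vector<float> px, py, pz, vx, vy, vz;
        explicit Particles(size_t n)
            : px(n), py(n), pz(n), vx(n, 1.0f), vy(n, 1.0f), vz(n, 1.0f) {}
    };

    void updateSoA(Particles& ps, float dt) {
        for (size_t i = 0; i < ps.px.size(); ++i) {
            ps.px[i] += ps.vx[i] * dt;
            ps.py[i] += ps.vy[i] * dt;
            ps.pz[i] += ps.vz[i] * dt;
        }
    }

    int main() {
        std::vector<Particle> aos(100000, Particle{0.f, 0.f, 0.f, 1.f, 1.f, 1.f});
        Particles soa(100000);
        for (int frame = 0; frame < 60; ++frame) {   // simulate one second at 60 Hz
            updateAoS(aos, 1.0f / 60.0f);
            updateSoA(soa, 1.0f / 60.0f);
        }
        std::printf("AoS x: %.2f  SoA x: %.2f\n", aos[0].px, soa.px[0]);
        return 0;
    }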

APA, Harvard, Vancouver, ISO, and other citation styles
2

Rojas, Vanessa. "Real time wind simulation in a 3D game". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176704.

Full text of the source
Abstract:
While many games incorporate physics to simulate different aspects of gameplay, this is uncommon when it comes to fluid flows like wind, due to the complexity of the associated equations. The challenge increases in 3-dimensional worlds with large world maps and a real-time simulation. It is however possible to simplify a simulation by prioritizing visual and gameplay effects rather than physical accuracy, while still using a physically-sound system as a base. What this means for each game will differ depending on the architecture of the game, the desired outcome and acceptable performance costs. This paper addresses the implementation of a real-time, grid-based wind simulation in Rust for the game Veloren. A preliminary implementation with a simple graphical output was used before the simulation was integrated with the game. In Veloren, the resulting implementation is primarily server-based with a windsim system that runs the simulation itself, while the client side receives updates for the player's position, allowing the player to fly with a handglider using the wind currents created by the simulation. The performance cost of the implementation was measured for both the server and the client, using frames per second according to the grid size (space resolution) and how often the simulation is run (time resolution). When compared to the baseline before the implementation, it showed a performance cost for the server that increased with the time and space resolution. For the client side, no detectable performance cost was observed, but a lower simulation frequency resulted in sharp changes in wind direction from the player's perspective. Given that many options for optimization exist which were not systematically explored, the results show promise for the feasibility of this type of simulation in Veloren by expanding the current implementation.
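
For intuition about what a grid-based wind simulation involves (this is not Veloren's windsim code), the sketch below performs one explicit diffusion step on a 2D wind grid, the kind of building block a full solver would combine with advection and a projection step; the grid size and diffusion rate are arbitrary assumptions.

    #include <cstdio>
    #include <vector>

    // A toy 2D wind grid holding one velocity component per cell.
    struct Grid {
        int n;
        std::vector<float> v;
        explicit Grid(int size) : n(size), v(size_t(size) * size, 0.0f) {}
        float& at(int x, int y) { return v[size_t(y) * n + x]; }
    };

    // One explicit diffusion step: each interior cell relaxes toward the average
    // of its four neighbors. Real solvers add advection and a projection step to
    // keep the velocity field divergence-free.
    void diffuse(Grid& g, float rate, float dt) {
        Grid next = g;
        for (int y = 1; y < g.n - 1; ++y)
            for (int x = 1; x < g.n - 1; ++x) {
                float avg = 0.25f * (g.at(x - 1, y) + g.at(x + 1, y) +
                                     g.at(x, y - 1) + g.at(x, y + 1));
                next.at(x, y) = g.at(x, y) + rate * dt * (avg - g.at(x, y));
            }
        g = next;
    }

    int main() {
        Grid windX(32);            // x-component of the wind over a 32x32 region
        windX.at(16, 16) = 10.0f;  // a gust injected at the center
        for (int step = 0; step < 30; ++step) diffuse(windX, 4.0f, 1.0f / 30.0f);
        std::printf("wind at center: %.2f, one cell away: %.2f\n",
                    windX.at(16, 16), windX.at(17, 16));
        return 0;
    }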
APA, Harvard, Vancouver, ISO, and other citation styles
3

Cristino, Filipe. "Investigation into a real time 3D visual inspection system for industrial use". Thesis, Liverpool John Moores University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.402879.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Mackey, Randall Lee. "NPSNET : hierarchical data structures for real-time three-dimensional visual simulation". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28402.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Forsberg, Sean Michael. "NETWORK CHANNEL VISUALIZING SIMULATOR: A REAL-TIME, 3D, INTERACTIVE NETWORK SIMULATION PLATFORM". DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/784.

Full text of the source
Abstract:
With a focus on always being connected, it has become typical for laptops and mobile devices to include multiple wireless network devices. Though the additional network devices have created mobility and versatility in how a user is connected, it is common for only one to be active at any given time. While it is likely that new mesh protocols will help maximize connectivity and reduce power consumption by utilizing lower-power multi-hop techniques, it is still difficult to visualize these protocols due to the complexity created by each node's simple choices. Further challenges are presented by the variety of network devices which share frequency ranges with different output power, sensitivities, and antenna radiation patterns. Due to the complexity of these configurations and environments, it becomes clear that reproducible simulations are required. While several network simulators have been thoroughly tested over their many years of use, they often lack realistic handling of key factors that affect wireless networks. A few examples include cross-channel interference, propagation delays, interference caused by nodes beyond communication range, channel switching delays, and non-uniform radiation patterns. Another key limitation of these past tools is their limited methods for clearly displaying characteristics of multi-channel communication. Furthermore, these past utilities lack the graphical and interactive functions which promote the discovery of edge cases through the use of human intuition and pattern recognition. Even with their other limitations, many of these simulators are also extendable with new components and simulation abilities. As a result, a large set of protocols and other useful discoveries have been developed. While the concepts are well tested and verified, a new challenge is found when moving code from prototype to production due to code portability problems. Due to the sophistication of these creations, even small changes in code during a protocol's release can have dramatic effects on its functionality. Both to encourage quicker development cycles and to maintain code validation, it would be advantageous to provide simulation interfaces which directly match those of production systems. To overcome the various challenges presented and encourage the use of innate human abilities, this paper presents a novel simulation framework, Network Channel Visualizing Simulator (NCVS), with a real-time, interactive, 3D environment with clear representation and simulation of multi-channel RF communication through multiple network device types.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Sundaraj, Kenneth. "Real-time dynamic simulation and 3D interaction of biological tissue : application to medical simulators". Grenoble INPG, 2004. http://www.theses.fr/2004INPG0012.

Full text of the source
Abstract:
The advent of medical imaging and of new operating techniques has profoundly changed the way physicians work, and this change calls for strengthened training of practitioners and surgeons. Hence the growing need for appropriate tools such as medical-surgical simulators. In this context, we address the problem of modeling the deformation of biological tissue and of detecting collisions in a virtual environment. We first present the existing physical models and the numerical solution methods associated with deformable objects. We then propose a model developed for the simulation of biological tissue, successively presenting the aspects related to the formulation of the model, its numerical solution, and the treatment of physical interactions. This model, based on Pascal's principle, makes it possible to model biological bodies in a relatively satisfactory way while allowing interactive simulation. We then present the existing collision detection algorithms and the difficulty of adapting them to medical simulators, where complex deformable objects form the core of the model. We propose algorithms developed to address this problem in the context of medical simulators. These algorithms offer better numerical robustness and efficiency than existing approaches and can handle deformable bodies. We apply these results to two medical simulators.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Bain, Matthew N. "Real Time Music Visualization: A Study in the Visual Extension of Music". Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1213207395.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Srisamang, Richard, Richard Todd, Sudarshan Bhat, and Terry Moore. "UAV INTEGRATED VISUAL CONTROL AND SIMULATION SYSTEM ARCHITECTURE AND CAPABILITIES IN ACTION". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606815.

Full text of the source
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Unmanned Aerial Vehicles (UAV) are becoming a significant asset to the military. This has given rise to the development of the Vehicle Control and Simulation System (VCSS), a low-cost ground support and control system deployable to any UAV testing site, with the capability to support ground crew and pilot training, real-time telemetry simulation, distribution, transmission and reception, mission planning, and Global Positioning System (GPS) reception. This paper describes the development of the VCSS detailing its capabilities, demonstrating its use in the field, and showing its novel use of internet technology for vehicle control telemetry distribution.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Martínez, Ana Laura, and Natali Arvidsson. "Balance Between Performance and Visual Quality in 3D Game Assets : Appropriateness of Assets for Games and Real-Time Rendering". Thesis, Uppsala universitet, Institutionen för speldesign, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413871.

Full text of the source
Abstract:
This thesis explores the balance between visual quality and the performance of a 3D object for computer games. Additionally, it aims to help new 3D artists create assets that are both visually adequate and optimized for real-time rendering. It further investigates the differences in the judgement of visual quality between those who know computer graphics and those who are not familiar with it. Many explanations of 3D art optimization are highly technical and challenging for graphic artists to grasp, and they regularly neglect the effects of optimization on the visual quality of the assets. By testing several 3D assets to measure their render time while using a survey to gather visual assessments, it was discovered that 3D game art is very contextual. No definite or straightforward way was identified to find the balance between art quality and performance universally, neither when it comes to performance nor visuals. However, some interesting findings regarding the judgment of visual quality were observed and presented.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Merrell, Thomas Yates. "Escape Simulation Suite". Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31754.

Full text of the source
Abstract:
Ever since we were children the phrase "In case of an emergency, walk, DON'T run, to the nearest exit" has been drilled into our heads. How to evacuate a large number of people from a given area as quickly and safely as possible has been a question of great importance since the first congregation of man; a question that has yet to be optimally answered. There have been many attempts at finding an answer and many more yet to be made. In light of recent world events, 9/11 for instance, the need for a better answer is apparent. While finding a solution to this problem is the end objective, the goal of this thesis is to develop an application or tool that will aid in the search for an answer to this problem. There are several aspects of traditional evacuation plans that make them inherently suboptimal. First among these is that they are static by nature. When a building is designed, there is some care taken in analyzing its floor plan and finding an optimal evacuation route for everyone. These plans are made under several assumptions and with the obvious constraint that they cannot be modified during the actual emergency. Yes, it is possible for such a plan to actually end up being the optimal plan during any given evacuation, but the likelihood of this being the case is most definitely less than 100%. There are many reasons for this. The most obvious is this: the situation that the plan is trying to solve is a very dynamic one. People will not be where they should be or in the quantities that the static plan was prepared for. Many of them will probably not know what they should do in an emergency and so most likely will follow any large group of people, like lemmings. Finally, most situations that require the evacuation of a building or area occur because all or part of the building has become, or is becoming, unsafe. It is impossible for a static evacuation plan to take into account the way a fire or poisonous gas is spreading, or the state of the structural stability of the building. What is needed during a crisis is an artificially intelligent and dynamic evacuation system that is capable of (1) analyzing the state of the building and its occupants, (2) coming up with a plan to get everyone out as fast as possible, and (3) directing all occupants along the best exit routes. Furthermore, the system should be able to modify its plan as the evacuation progresses. This application is intended to provide researchers in this area the means to quickly and accurately simulate different evacuation theories and ideas. That being the case, it will have powerful graphical capabilities, thus allowing the researchers to easily see the real-time results of their work. It will be able to use diverse modeling techniques in order to handle the many different ways of approaching this problem. It will provide a simple way for equations and mathematical models to be entered which can affect the behavior of most aspects of the world being simulated. This work is in conjunction with, and closely tied to, Dr. Pushkin Kachroo's research on this same topic. The application is designed so that future developers can quickly add to and modify its design to meet their specifications. It is not the goal of this work to provide an application that directly solves the optimal evacuation problem, or one that inherently simulates everything perfectly. It is the job of the researchers using this application to define the specific physics equations and models for each component of the simulation.
This application provides an easy way to add these definitions into the simulation calculations. In brief, this Escape Simulator is a client-server application. All of the graphics and human interaction are handled client side using Win32 and Direct3D. The actual simulation world calculations are handled server side, and both the client and server communicate via DirectPlay. The algorithm being used to model the objects and world by the server will be completely configurable. In fact, everything in the world, including the world physics, will be completely modifiable. Though the researchers will need to write the necessary plugins to define the actual models and algorithms used by the agents, objects, and world, ultimately this will give them much more power and flexibility. It will also allow for third parties to develop libraries of commonly used algorithms and resources that the researchers can use. This research was supported in part by the National Science Foundation through grant no. CMS-0428196 with Dr. S. C. Liu as the Program Director. This support is gratefully acknowledged. Any opinion, findings, and conclusions or recommendations expressed in this study are those of the writer and do not necessarily reflect the views of the National Science Foundation.
Master of Science
APA, Harvard, Vancouver, ISO, and other citation styles

Books on the topic "Real-time 3D visual simulation"

1

Mackey, Randall Lee. NPSNET: Hierarchical data structures for real-time three-dimensional visual simulation. Monterey, Calif: Naval Postgraduate School, 1991.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Real-time 3D Character Animation with Visual C++. Focal Press, 2002.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Lever, Nik. Real-time 3D Character Animation with Visual C++. Routledge, 2001. http://dx.doi.org/10.4324/9780080497983.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Facility, Dryden Flight Research, ed. Real-time application of advanced three-dimensional graphic techniques for research aircraft simulation. Edwards, Calif: NASA Ames Research Center, Dryden Flight Research Facility, 1990.

Find the full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Sanders, Donald H. Virtual Reconstruction of Maritime Sites and Artifacts. Edited by Ben Ford, Donny L. Hamilton, and Alexis Catsambis. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780199336005.013.0014.

Full text of the source
Abstract:
The integration of virtual reality into archaeological research began in the early 1990s. The use of computer-based methods in maritime archaeology is recent. Before a real-time virtual world can be explored, a 3D computer model is created from drawings, general sketches, raw dimensions, 3D scanned data, or photographs, or by using simple primitives and "drawing" on the computer. Virtual reality is a simulation of physical reality offering the viewer real-time movement through a true 3D space and interactivity with the objects, which can be further enhanced with 3D sound, lighting, and touch. This article presents case studies to show how virtual reality becomes valuable for the four components of archaeology: documentation, research/analysis/hypothesis testing, teaching, and publication. As digital technologies advance, so too will the opportunities to explore underwater sites in ways that will continue to enhance our abilities to understand and teach maritime history.
APA, Harvard, Vancouver, ISO, and other citation styles

Book chapters on the topic "Real-time 3D visual simulation"

1

Lipşa, Dan R., Robert S. Laramee, Simon Cox, and I. Tudur Davies. "Visualizing 3D Time-Dependent Foam Simulation Data". In Advances in Visual Computing, 255–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41914-0_26.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Chi, Cheng. "Simulation Technique". In Underwater Real-Time 3D Acoustical Imaging, 101–5. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-3744-4_5.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Petring, Ralf, Benjamin Eikel, Claudius Jähn, Matthias Fischer, and Friedhelm Meyer auf der Heide. "Real-Time 3D Rendering of Heterogeneous Scenes". In Advances in Visual Computing, 448–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41914-0_44.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
4

Gutmann, Greg, Daisuke Inoue, Akira Kakugo, and Akihiko Konagaya. "Real-Time 3D Microtubule Gliding Simulation". In Communications in Computer and Information Science, 13–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-662-45283-7_2.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
5

Brandstetter, Andrea, Najoua Bolakhrif, Christian Schiffer, Timo Dickscheid, Hartmut Mohlberg, and Katrin Amunts. "Deep Learning-Supported Cytoarchitectonic Mapping of the Human Lateral Geniculate Body in the BigBrain". In Lecture Notes in Computer Science, 22–32. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82427-3_2.

Full text of the source
Abstract:
The human lateral geniculate body (LGB) with its six sickle-shaped layers represents the principal thalamic relay nucleus for the visual system. Cytoarchitectonic analysis serves as the ground truth for multimodal approaches and studies exploring its function. This technique, however, requires experienced knowledge of human neuroanatomy and is costly in terms of time. Here we mapped the six layers of the LGB manually in serial histological sections of the BigBrain, a high-resolution model of the human brain, whereby their extent was manually labeled in every 30th section in both hemispheres. These maps were then used to train a deep learning algorithm in order to predict the borders on the sections in between. These delineations needed to be performed in 1 µm scans of the tissue sections, for which no exact cross-section alignment is available. Due to the size and number of analyzed sections, this requires the use of high-performance computing. Based on the serial section delineations, high-resolution 3D reconstruction of the BigBrain model was performed at 20 µm isotropic resolution. The 3D reconstruction shows the shape of the human LGB and its sublayers for the first time at cellular precision. It represents a use case for studying other complex structures and for visualizing their shape and relationship to neighboring structures. Finally, our results could provide reference data of the LGB for modeling and simulation to investigate the dynamics of signal transduction in the visual system.
APA, Harvard, Vancouver, ISO, and other citation styles
6

Caccavale, Fabrizio, Vincenzo Lippiello, Bruno Siciliano, and Luigi Villani. "Real-Time Visual Tracking of 3D-Objects". In Springer Tracts in Advanced Robotics, 125–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-44410-7_6.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Yılmaz, Erdal, Yasemin Yardımcı Çetin, Çiğdem Eroğlu Erdem, Tanju Erdem, and Mehmet Özkan. "Music Driven Real-Time 3D Concert Simulation". In Multimedia Content Representation, Classification and Security, 379–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11848035_51.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Chen, Xiao, Guangming Wang, Ying Zhu, and G. Scott Owen. "Real-Time Simulation of Ship Motions in Waves". In Advances in Visual Computing, 71–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33179-4_8.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Aquilio, Anthony S., Jeremy C. Brooks, Ying Zhu, and G. Scott Owen. "Real-Time GPU-Based Simulation of Dynamic Terrain". In Advances in Visual Computing, 891–900. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11919476_89.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Chen, Xiao, and Ying Zhu. "Real-Time Simulation of Vehicle Tracks on Soft Terrain". In Advances in Visual Computing, 437–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41914-0_43.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles

Conference papers on the topic "Real-time 3D visual simulation"

1

Green, D., J. Cosmas, T. Itagaki, M. Waelkens, R. Degeest, and E. Grabczewski. "A real time 3D stratigraphic visual simulation system for archaeological analysis and hypothesis testing". In the 2001 conference. New York, New York, USA: ACM Press, 2001. http://dx.doi.org/10.1145/584993.585035.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Doleisch, Helmut. "SimVis: Interactive visual analysis of large and time-dependent 3D simulation data". In 2007 Winter Simulation Conference. IEEE, 2007. http://dx.doi.org/10.1109/wsc.2007.4419665.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
3

Robert, Ornprapa P., Chamnan Kumsap, and Sibsan Suksuchano. "Modeling realistic 3D trees using materials from field survey for terrain analysis of tactical training center". In THE 9TH INTERNATIONAL DEFENCE AND HOMELAND SECURITY SIMULATION WORKSHOP. CAL-TEK srl, 2019. http://dx.doi.org/10.46354/i3m.2019.dhss.006.

Full text of the source
Abstract:
This paper elaborates the processes of modeling 3D trees for the simulation of the Army's Tactical Training Center. The ultimate objective is to develop a 3D model database for inclusion in a game engine library. The adopted methodology includes collecting a forestry inventory for later 3D tree modeling in Unity's 3D Tree Modeler. Leaves and trunks were closely modeled in the SpeedTree modeling package using data collected at the real site. Three tree types were sampled to demonstrate how closely and realistically the adopted process could produce 3D models for inclusion in the simulation of the tactical center. A visual comparison was made to show the final models, and 3D scenes generated from the inclusion of the models were illustrated in comparison to photos taken at the site. Further studies adopting surface modeling data from UAV terrain mapping for tree canopies were recommended to verify the photorealism of the processed 3D models.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Iacob, Robert, Peter Mitrouchev, and Jean-Claude Léon. "A Simulation Framework for Assembly/Disassembly Process Modeling". In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-34804.

Full text of the source
Abstract:
Simulations of Assembly/Disassembly (A/D) processes cover a large range of objectives, i.e. A/D sequencing, path finding, ergonomic analysis …, where the 3D shape description of the component plays a key role. In addition, the A/D simulations can be performed either from an automated or interactive point of view using standard computer equipment or through immersive and real-time simulation schemes. In order to address this diversity of configurations, this paper presents a simulation framework for A/D analysis based on a new simulation preparation process which allows a simulation process to address up to two types of shape representations, i.e. B-Rep NURBS and polyhedral ones, at the same time, thus handling efficiently the configurations where 3D shape representations of assemblies play a key role. In order to illustrate the simulation preparation process, some specific steps are addressed. To this end, the automatic identification of contacts in a 3D product model and their corresponding list is described. After this first stage of identification, an interpretation of the results is needed in order to obtain the complete list of the mechanical contacts for a product. During the preparation process, three major stages of the framework are detailed: model tessellation, surface merging, and contact identification. Our framework is based on the STEP exchange format. The contacts are related to basic geometrical surfaces such as planes, cylinders, cones, and spheres. Some examples are provided in order to illustrate the contributions of the proposed framework. This software environment can assist designers in achieving a satisfactory assembly analysis rapidly and can reduce the lead-time of product development. A further consequence of the present work is its ability to produce models and treatments that improve the integration of assembly models in immersive environments, taking into account the haptic and visual models needed.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Batayneh, Wafa, Ahmad Bataineh, Samer Abandeh, Mohammad Al-Jarrah, Mohammad Banisaeed, and Bara’ah alzo’ubei. "Using EMG Signals to Remotely Control a 3D Industrial Robotic Arm". In ASME 2019 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/imece2019-10234.

Full text of the source
Abstract:
In this paper, a muscle gesture computer interface (MGCI) system for robot navigation control employing a commercial wearable MYO gesture control armband, the motion and gesture control device from Thalmic Labs, is proposed. The software interface is developed using LabVIEW and Visual Studio C++. The hardware interface between the MYO armband and the robotic arm has been implemented using a National Instruments myRIO, which provides the real-time EMG data needed. The system allows the user to control a three-degrees-of-freedom robotic arm remotely through intuitive motion by combining the real-time electromyography (EMG) and inertial measurement unit (IMU) signals. Computer simulations and experiments are developed to evaluate the feasibility of the proposed system. A person wears the armband and moves his or her hand, and the robotic arm imitates the motion of that hand. The armband picks up the EMG signals of the person's hand muscles, which are time-varying noisy signals; these MYO EMG signals are then processed and classified using LabVIEW in order to evaluate the angles that are fed back to the servo motors needed to move the robotic arm. A simulation study of the system showed very good results. Tests show that the robotic arm can imitate the arm motion at an acceptable rate and with very good accuracy.
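
The abstract above describes turning noisy EMG into servo angles. As a purely illustrative sketch, and not the paper's LabVIEW/myRIO processing chain, the code below computes a root-mean-square feature over a window of EMG samples and maps it linearly onto a joint angle; the window length, calibration values, and angle range are hypothetical.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Root-mean-square of one window of EMG samples: a standard amplitude feature.
    float emgRms(const std::vector<float>& window) {
        if (window.empty()) return 0.0f;
        float sumSq = 0.0f;
        for (float s : window) sumSq += s * s;
        return std::sqrt(sumSq / window.size());
    }

    // Map the RMS activation level onto a servo angle between angleMin and angleMax.
    // rmsRest and rmsMax would normally come from a per-user calibration step.
    float rmsToAngle(float rms, float rmsRest, float rmsMax, float angleMin, float angleMax) {
        float t = (rms - rmsRest) / (rmsMax - rmsRest);
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
        return angleMin + t * (angleMax - angleMin);
    }

    int main() {
        // A fake 200-sample window: weak noise plus a burst of muscle activity.
        std::vector<float> window(200);
        for (int i = 0; i < 200; ++i)
            window[i] = 0.05f * std::sin(0.3f * i) + (i > 100 ? 0.6f : 0.0f);

        float rms = emgRms(window);
        float angle = rmsToAngle(rms, 0.05f, 0.8f, 0.0f, 90.0f);   // degrees for one joint
        std::printf("RMS = %.3f  ->  commanded joint angle = %.1f deg\n", rms, angle);
        return 0;
    }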
APA, Harvard, Vancouver, ISO, and other citation styles
6

Asgary, Ali. "Holovulcano: Augmented Reality simulation of volcanic eruptions". In The 8th International Defence and Homeland Security Simulation Workshop. CAL-TEK srl, 2018. http://dx.doi.org/10.46354/i3m.2018.dhss.007.

Full text of the source
Abstract:
"This paper describes an interactive holographic simulation of volcanic eruption. The aim of the project is to use Augmented Reality (AR) technology to visualize different volcanic eruptions for public education, emergency training, and preparedness planning purposes. To achieve this goal, a 3D model of the entire Vulcano Island in Italy has been created using real elevation data. Unity game engine and Microsoft Visual Studio have been used to develop HoloVulcano augmented/virtual reality simulation application. The current version of HoloVulcano simulates normal and unrest situations, single and long lasting Vulcanian, Plinian, and Strombolian eruptions. HoloVulcano has been developed for Microsoft HoloLens AR device. Wearing the HoloLens, users can interact with the volcano through voice, gazing, and gestures and view different eruptions from different points in the island. HoloVulcano will be used for training emergency exercises and public education."
APA, Harvard, Vancouver, ISO, and other citation styles
7

Xiao, Hui, and Xu Chen. "Following Fast-Dynamic Targets With Only Slow and Delayed Visual Feedback: A Kalman Filter and Model-Based Prediction Approach". In ASME 2019 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/dscc2019-9022.

Full text of the source
Abstract:
Abstract Although visual feedback has enabled a wide range of robotic capabilities such as autonomous navigation and robotic surgery, low sampling rate and time delays of visual outputs continue to hinder real-time applications. When partial knowledge of the target dynamics is available, however, we show the potential of significant performance gain in vision-based target following. Specifically, we propose a new framework with Kalman filters and multirate model-based prediction (1) to reconstruct fast-sampled 3D target position and velocity data, and (2) to compensate the time delay for general robotic motion profiles. Along the path, we study the impact of modeling choices and the delay duration, build simulation tools, and experimentally verify different algorithms with a robot manipulator equipped with an eye-in-hand camera. The results show that the robot can track a moving target with fast dynamics even if the visual measurements are slow and incapable of providing timely information.
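
As a simplified, one-dimensional stand-in for the multirate 3D formulation described above, the sketch below fuses slow, delayed position measurements with a constant-velocity Kalman filter and then propagates the estimate forward by the known delay to obtain an up-to-date target position; the noise parameters and delay values are assumptions for illustration.

    #include <cstdio>

    // 1D constant-velocity Kalman filter: state is (position, velocity).
    struct Kalman1D {
        float x = 0.0f, v = 0.0f;                           // state estimate
        float P[2][2] = {{1.0f, 0.0f}, {0.0f, 1.0f}};       // estimate covariance
        float q = 0.5f;                                     // process noise intensity
        float r = 0.05f;                                    // measurement noise variance

        void predict(float dt) {
            x += v * dt;                                    // x' = x + v*dt, v' = v
            float P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt;
            float P01 = P[0][1] + dt * P[1][1];
            float P10 = P[1][0] + dt * P[1][1];
            float P11 = P[1][1] + q * dt;
            P[0][0] = P00; P[0][1] = P01; P[1][0] = P10; P[1][1] = P11;
        }

        void update(float z) {                              // measurement of position only
            float S = P[0][0] + r;
            float k0 = P[0][0] / S, k1 = P[1][0] / S;       // Kalman gain
            float innovation = z - x;
            x += k0 * innovation;
            v += k1 * innovation;
            float P00 = (1 - k0) * P[0][0], P01 = (1 - k0) * P[0][1];
            float P10 = P[1][0] - k1 * P[0][0], P11 = P[1][1] - k1 * P[0][1];
            P[0][0] = P00; P[0][1] = P01; P[1][0] = P10; P[1][1] = P11;
        }
    };

    int main() {
        const float camDt = 0.1f;     // the camera delivers a measurement every 100 ms
        const float delay = 0.2f;     // and each measurement is 200 ms old
        Kalman1D kf;
        // Target moves at 1 m/s; the camera reports where it was `delay` seconds ago.
        for (int k = 1; k <= 20; ++k) {
            float t = k * camDt;
            float delayedMeasurement = 1.0f * (t - delay);
            kf.predict(camDt);
            kf.update(delayedMeasurement);
        }
        // The filter estimate refers to the delayed time; roll it forward by the delay.
        Kalman1D predicted = kf;
        predicted.predict(delay);
        std::printf("delayed estimate: %.2f m, compensated estimate: %.2f m (true: 2.00 m)\n",
                    kf.x, predicted.x);
        return 0;
    }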
APA, Harvard, Vancouver, ISO, and other citation styles
8

Onbasıoğlu, Esin, Başar Atalay, Dionysis Goularas, Ahu H. Soydan, Koray K. Şafak, and Fethi Okyar. "Visualisation of Burring Operation in Virtual Surgery Simulation". In ASME 2010 10th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2010. http://dx.doi.org/10.1115/esda2010-25233.

Full text of the source
Abstract:
Virtual reality based surgical training has great potential as an alternative to traditional training methods. In neurosurgery, state-of-the-art training devices are limited, and surgical experience accumulates only after many surgical procedures. Incorrect surgical movements can be destructive, leaving patients paralyzed, comatose, or dead. Traditional techniques for training in surgery use animals, phantoms, cadavers, and real patients. Most of the training is based either on these or on observation behind windows. The aim of this research is the development of a novel virtual reality training system for neurosurgical interventions based on a real surgical microscope for better visual and tactile realism. The simulation relies on accurate tissue modeling, a force feedback device, and a representation of the virtual scene on the screen or directly in the oculars of the operating microscope. An intra-operative presentation of the preoperative three-dimensional data will be prepared in our laboratory, and by using this existing platform virtual organs will be reconstructed from real patients' images. VISPLAT is a platform for virtual surgery simulation. It is designed as a patient-specific system that provides a database where patient information and CT images are stored. It acts as a framework for modeling 3D objects from CT images, visualization of the surgical operations, haptic interaction, and mechanistic material-removal models for surgical operations. It tries to solve the challenging problems in surgical simulation, such as real-time interaction with complex 3D datasets, photorealistic visualization, and haptic (force-feedback) modeling. Surgical training on this system for educational and preoperative planning purposes will increase surgical success and provide a better quality of life for the patients. Surgical residents trained to perform surgery using virtual reality simulators will be more proficient and have fewer errors in their first operations than those who received no virtual reality simulated education. VISPLAT will help to accelerate the learning curve. In the future, VISPLAT will offer more sophisticated task training programs for minimally invasive surgery; the system will record errors and supply a way of measuring operative efficiency and performance, working both as an educational tool and a surgical planning platform.
APA, Harvard, Vancouver, ISO, and other citation styles
9

Mebarki, Rafik, Vincenzo Lippiello, and Bruno Siciliano. "Exploiting Image Moments for Aerial Manipulation Control". In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3760.

Full text of the source
Abstract:
This paper proposes a new visual servo control scheme that endows flying manipulators with the capability of positioning with respect to visual targets. A camera attached to the UAV provides real-time images of the scene. We consider the approaching part of an aerial assembling task, where the manipulator carries a structure to be plugged into the visual target. In order to augment the system capabilities regarding the 3D interaction with the target, we propose to use image moments. The developed controller generates desired velocities for both the UAV and the manipulator simultaneously. While taking into account the under-actuation specific to rotary-wing vehicles, it makes use of the system redundancy to realize potential sub-tasks. Joint-limit avoidance is also guaranteed. The presented developments are validated by means of computer simulations.
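
As background, the sketch below computes the raw image moments m00, m10, and m01 of a binary image and derives the target centroid, the kind of low-order feature from which moment-based visual servoing builds its error signal; the image contents are an arbitrary example, not data from the paper.

    #include <cstdio>
    #include <vector>

    // Raw moments of a binary (0/1) image stored row-major.
    struct Moments { double m00 = 0.0, m10 = 0.0, m01 = 0.0; };

    Moments computeMoments(const std::vector<int>& img, int width, int height) {
        Moments m;
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                if (img[size_t(y) * width + x] != 0) {
                    m.m00 += 1.0;           // area
                    m.m10 += x;             // first-order moments
                    m.m01 += y;
                }
        return m;
    }

    int main() {
        // A 64x64 image with a filled 10x10 square whose corner is at (20, 30).
        const int w = 64, h = 64;
        std::vector<int> img(size_t(w) * h, 0);
        for (int y = 30; y < 40; ++y)
            for (int x = 20; x < 30; ++x)
                img[size_t(y) * w + x] = 1;

        Moments m = computeMoments(img, w, h);
        double cx = m.m10 / m.m00, cy = m.m01 / m.m00;   // centroid of the target
        // A visual servo law would drive (cx, cy) toward a desired image location.
        std::printf("area = %.0f px, centroid = (%.1f, %.1f)\n", m.m00, cx, cy);
        return 0;
    }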
APA, Harvard, Vancouver, ISO, and other citation styles
10

Cserkaszky, Aron, Attila Barsi, Zsolt Nagy, Gabor Puhr, Tibor Balogh, and Peter A. Kara. "Real-time light-field 3D telepresence". In 2018 7th European Workshop on Visual Information Processing (EUVIP). IEEE, 2018. http://dx.doi.org/10.1109/euvip.2018.8611663.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles