A ready-made bibliography on the topic "Data Animation"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Data Animation".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the work's metadata.

Journal articles on the topic "Data Animation"

1

Waspada, Heri Priya, Ismanto Ismanto, and Firman Hidayah. "Penggunaan Hasil Motion Capture (Data Bvh) Untuk Menganimasikan Model Karakter 3d Agar Menghasilkan Animasi Yang Humanoid". JAMI: Jurnal Ahli Muda Indonesia 1, no. 2 (31.12.2020): 94–102. http://dx.doi.org/10.46510/jami.v1i2.34.

Abstract:
Objective. The process of modeling 3D characters plays an important role in producing good 3D character models. This process is the initial step that a designer must go through when creating a 3D character model. After the modeling process is done well, a rigging process is needed so that the character can be moved. With the modeling and rigging processes in place, the 3D character model can be used to produce animations in accordance with the animator's wishes. Of course, an animator will need to work hard to create a motion scene if the animation is still made manually; for this reason, by utilizing BVH data, animators have a much lighter workload when making their animated scenes. The resulting character animation was shown to 40 respondents for rating and produced an average humanoid rating of 65%. Materials and Methods. A 3D character model is animated using motion capture output (.bvh files). Results. 3D character animation using motion capture output produces humanoid animation. Conclusion. Motion capture output is a bone hierarchy already paired with recorded movements, so producing animated 3D character models is easier because the animator does not need to draw every desired movement.
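To make the BVH-driven workflow described in the abstract above concrete, here is a minimal Python sketch (not code from the cited paper) that reads the HIERARCHY block of a BVH motion-capture file and recovers joint names, parent indices, and rest offsets; a tiny embedded skeleton stands in for a real capture file:

```python
# Minimal sketch (not from the cited paper): reading the HIERARCHY block of a
# BVH motion-capture file to recover joint names, parent indices, and offsets.
from io import StringIO

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Chest
    {
        OFFSET 0.0 5.2 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 4.8 0.0
        }
    }
}
MOTION
"""

def read_bvh_skeleton(lines):
    joints, stack = [], []
    for line in lines:
        tokens = line.split()
        if not tokens:
            continue
        key = tokens[0].upper()
        if key in ("ROOT", "JOINT"):
            parent = stack[-1] if stack else -1
            joints.append({"name": tokens[1], "parent": parent, "offset": None})
            stack.append(len(joints) - 1)
        elif key == "END":                      # "End Site": anonymous leaf joint
            parent = stack[-1]
            joints.append({"name": joints[parent]["name"] + "_end",
                           "parent": parent, "offset": None})
            stack.append(len(joints) - 1)
        elif key == "OFFSET":
            joints[stack[-1]]["offset"] = tuple(float(v) for v in tokens[1:4])
        elif key == "}":
            stack.pop()
        elif key == "MOTION":                   # per-frame channel data follows
            break
    return joints

for joint in read_bvh_skeleton(StringIO(SAMPLE_BVH)):
    print(joint["name"], joint["parent"], joint["offset"])
```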
2

Esponda-Argüero, Margarita. "Techniques for Visualizing Data Structures in Algorithmic Animations". Information Visualization 9, no. 1 (29.01.2009): 31–46. http://dx.doi.org/10.1057/ivs.2008.26.

Abstract:
This paper deals with techniques for the design and production of appealing algorithmic animations and their use in computer science education. A good visual animation is both a technical artifact and a work of art that can greatly enhance the understanding of an algorithm's workings. In the first part of the paper, I show that awareness of the composition principles used by other animators and visual artists can help programmers to create better algorithmic animations. The second part shows how to incorporate those ideas in novel animation systems, which represent data structures in a visually intuitive manner. The animations described in this paper have been implemented and used in the classroom for courses at university level.
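The article above concerns animating algorithms and data structures for teaching. As a generic illustration of that idea, and not the author's animation system, the following sketch uses matplotlib's FuncAnimation to animate bubble sort as a bar chart, one frame per comparison:

```python
# Generic algorithmic-animation sketch: bubble sort shown as an animated bar chart.
import random
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def bubble_sort_states(values):
    """Yield a snapshot of the list after every comparison (swap or not)."""
    a = list(values)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
            yield list(a)

data = random.sample(range(1, 31), 30)
states = list(bubble_sort_states(data))

fig, ax = plt.subplots()
bars = ax.bar(range(len(data)), data)

def update(frame):
    for rect, height in zip(bars, states[frame]):
        rect.set_height(height)
    return bars

anim = FuncAnimation(fig, update, frames=len(states), interval=30, blit=False)
plt.show()
```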
3

Halfon, Efraim, and Morley Howell. "Visualization of Limnological Data as Two- and Three-Dimensional Computer Generated Animations". Journal of Biological Systems 02, no. 04 (December 1994): 443–52. http://dx.doi.org/10.1142/s0218339094000271.

Abstract:
DATA ANIMATOR is a software program for developing and displaying limnological data as computer-generated animations. The purpose of the program is to visualize, in a dynamic fashion, a variety of data collected in lakes. The examples originate from Hamilton Harbour, Lake Ontario. Data collected at different stations and different times are interpolated in space and in time. Lake topography and bathymetry files are used to relate data collected in the lake(s) to topographical features. A graphical user interface allows the user to choose two- or three-dimensional views, a viewpoint, fonts, a colour palette, data, and keyframes. A typical 1800-frame animation can be displayed in a minute at 30 frames per second. Rendering time is about 12 hours. Animations can be displayed on a monitor or transferred to video tape.
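The abstract above describes interpolating station measurements in space and time before rendering roughly 1800 frames. The sketch below illustrates only the temporal part of that idea with invented station values; spatial interpolation onto a lake grid would be a separate step:

```python
# Minimal sketch of the time-interpolation idea described above: sparse station
# measurements are resampled onto a dense 30 fps timeline before rendering.
# The station values below are made up for illustration.
import numpy as np

sample_days = np.array([0.0, 7.0, 14.0, 21.0, 28.0])        # survey dates
temps = np.array([                                            # degrees C per station
    [4.1, 6.0, 9.2, 12.5, 14.8],   # station A
    [3.8, 5.5, 8.9, 12.0, 14.1],   # station B
])

fps, seconds = 30, 60                                         # a 1800-frame animation
frame_days = np.linspace(sample_days[0], sample_days[-1], fps * seconds)

# Linear interpolation in time, one curve per station.
frames = np.vstack([np.interp(frame_days, sample_days, t) for t in temps])
print(frames.shape)   # (2 stations, 1800 frames)
```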
4

Wang, Hua, Xiaoyu He, and Mingge Pan. "An Interactive and Personalized Erasure Animation System for a Large Group of Participants". Applied Sciences 9, no. 20 (18.10.2019): 4426. http://dx.doi.org/10.3390/app9204426.

Abstract:
This paper introduces a system to realize interactive and personalized erasure animations by using mobile terminals, a shared display terminal, and a database server for a large group of participants. In the system, participants shake their mobile terminals with their hands. Their shaking data are captured by the database server. Then there are immersive and somatosensory erasure animations on the shared display terminal according to the participants’ shaking data in the database server. The system is implemented by a data preprocessing module and an interactive erasure animation module. The former is mainly responsible for the cleaning and semantic standardization of the personalized erasure shape data. The latter realizes the interactive erasure animation, which involves shaking the mobile terminal, visualizations of the erasure animation on the shared display terminal, and dynamic and personalized data editing. The experimental results show that the system can realize various styles of personalized erasure animation and can respond to more than 2000 shaking actions simultaneously and present the corresponding erasure animations on the shared display terminal in real time.
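The system described above aggregates thousands of shake events and renders them as erasure animations on a shared display. The following sketch illustrates only the core erasure idea on a cover layer; the event format, resolution, and parameters are assumptions, not the paper's actual data model:

```python
# Illustrative-only sketch: each shake event erases (reveals) a small circular
# region of a cover layer over the shared display.
import numpy as np

H, W = 480, 854
cover = np.ones((H, W), dtype=float)          # 1.0 = fully covered, 0.0 = erased

def apply_shake(cover, x, y, radius=40, strength=0.3):
    ys, xs = np.ogrid[:cover.shape[0], :cover.shape[1]]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    cover[mask] = np.clip(cover[mask] - strength, 0.0, 1.0)
    return cover

# Simulate 2000 shake events arriving from participants' mobile terminals.
rng = np.random.default_rng(1)
for x, y in zip(rng.integers(0, W, 2000), rng.integers(0, H, 2000)):
    apply_shake(cover, x, y)
print(f"{100 * (1 - cover.mean()):.1f}% of the cover layer erased")
```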
5

Kim, Yejin, and Myunggyu Kim. "Data-Driven Approach for Human Locomotion Generation". International Journal of Image and Graphics 15, no. 02 (April 2015): 1540001. http://dx.doi.org/10.1142/s021946781540001x.

Abstract:
This paper introduces a data-driven approach for human locomotion generation that takes as input a set of example locomotion clips and a motion path specified by an animator. Significantly, the approach only requires a single example of straight-path locomotion for each style expressed and can produce a continuous output sequence on an arbitrary path. Our approach considers quantitative and qualitative aspects of motion and suggests several techniques to synthesize a convincing output animation: motion path generation, interactive editing, and physical enhancement for the output animation. Initiated with an example clip, this process produces motion that differs stylistically from any in the example set, yet preserves the high quality of the example motion. As shown in the experimental results, our approach provides efficient locomotion generation by editing motion capture clips, especially for a novice animator, at interactive speed.
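The approach above synthesizes locomotion along an animator-specified path from straight-path example clips. The sketch below illustrates just the path side of that pipeline, resampling a polyline at constant speed to obtain per-frame positions and headings; the waypoints and speed are invented for illustration:

```python
# Minimal motion-path sketch: resample a user-drawn path at constant speed so a
# locomotion clip can be played back along it frame by frame.
import numpy as np

waypoints = np.array([[0.0, 0.0], [2.0, 0.5], [3.5, 2.0], [5.0, 2.2]])
seg = np.diff(waypoints, axis=0)
seg_len = np.linalg.norm(seg, axis=1)
cum_len = np.concatenate([[0.0], np.cumsum(seg_len)])   # arc length at each waypoint

speed, fps = 1.2, 30                                     # metres per second, frames/s
times = np.arange(0.0, cum_len[-1] / speed, 1.0 / fps)
distances = times * speed

# Constant-speed positions along the polyline (x and y interpolated separately).
x = np.interp(distances, cum_len, waypoints[:, 0])
y = np.interp(distances, cum_len, waypoints[:, 1])
headings = np.degrees(np.arctan2(np.gradient(y), np.gradient(x)))
print(len(times), "frames; first heading:", round(headings[0], 1), "degrees")
```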
6

Perlin, Mark W. "Visualizing Dynamic Artificial Intelligence Algorithms and Applications". International Journal on Artificial Intelligence Tools 03, no. 02 (June 1994): 289–307. http://dx.doi.org/10.1142/s0218213094000145.

Abstract:
Visualization is an important component of modern computing. By animating the course of an algorithm's temporal execution, many key features can be elucidated. We have developed a general framework, termed Call-Graph Caching (CGC), for automating the construction of many complex AI algorithms. By incorporating visualization into CGC interpreters, principled animations can be automatically displayed as AI computations unfold. Systems that support the automatic animation of AI algorithms must address three design issues: (1) how to represent AI data structures in a general, uniform way that leads to perspicuous animation and efficient redisplay; (2) how to coordinate the succession of graphical events; and (3) how to partition AI graphs to provide for separate, uncluttered displays. CGC provides a natural and effective solution to all these concerns. We describe the CGC method, including detailed examples, and motivate why CGC works well for animation. We discuss the CACHE system, our CGC environment for AI algorithm animation. We demonstrate the animation of several AI algorithms – RETE match, linear unification, arc consistency, chart parsing, and truth maintenance – all of which have been implemented in CACHE. Finally, we discuss the application of these methods to interactive interfaces for intelligent systems, using molecular genetics as an example domain.
7

Tjandra, Agatha Maisie, and Maya Yonesho. "Collaborative Storytelling Animation". IMOVICCON Conference Proceeding 1, no. 1 (3.07.2019): 35–44. http://dx.doi.org/10.37312/imoviccon.v1i1.4.

Abstract:
Today's high-mobility era brings new trends for everyone, including animators who travel for work and still want to be able to create animated films without their studio. Gadgets such as a mobile device camera and small pieces of paper can be used by the animator to create photo sequences for a stop-motion animated film. Moreover, new ideas for storytelling emerge from the collaborations that happen along the animator's trip. The process of turning collaborative ideas into storytelling increases the empathy of animators and audiences with the film. This paper uses qualitative data from interviews, existing studies, and experience studies to explain the collaborative process of the Daumenreise Animation project from the animators' perspective, and analyzes the film using storytelling elements in order to suggest a method that can be applied to any small-device storytelling medium for a travelling animator creating a film project.
8

Choi, Woong, Naoki Hashimoto, Ross Walker, Kozaburo Hachimura, and Makoto Sato. "Generation of Character Motion by Using Reactive Motion Capture System with Force Feedback". Journal of Advanced Computational Intelligence and Intelligent Informatics 12, no. 2 (20.03.2008): 116–24. http://dx.doi.org/10.20965/jaciii.2008.p0116.

Abstract:
Creating reactive motions with conventional motion capture systems is difficult because of the different task environment required. To overcome this drawback, we developed a reactive motion capture system that combines conventional motion capture system with force feedback and a human-scale virtual environment. Our objective is to make animation with reactive motion data generated from the interaction with force feedback and the virtual environment, using the fact that a person’s motions in the real world can be represented by the reactions of the person to real objects. In this paper we present the results of some animations made under various scenarios using animating reactive motion generation with our reactive motion capture system. Our results demonstrate that the reactive motion generated by this system was useful for producing the animation including scenes of reactive motion.
9

Ehrlich, Nea. "The Animated Document: Animation’s Dual Indexicality in Mixed Realities". Animation 15, no. 3 (November 2020): 260–75. http://dx.doi.org/10.1177/1746847720974971.

Abstract:
Animation has become ubiquitous within digital visual culture and fundamental to knowledge production. As such, its status as potentially reliable imagery should be clarified. This article examines how animation’s indexicality (both as trace and deixis) changes in mixed realities where the physical and the virtual converge, and how this contributes to the research of animation as documentary and/or non-fiction imagery. In digital culture, animation is used widely to depict both physical and virtual events, and actions. As a result, animation is no longer an interpretive visual language. Instead, animation in virtual culture acts as real-time visualization of computer-mediated actions, their capture and documentation. Now that animation includes both captured and generated imagery, not only do its definitions change but its link to the realities depicted and the documentary value of animated representations requires rethinking. This article begins with definitions of animation and their relation to the perception of animation’s validity as documentary imagery; thereafter it examines indexicality and the strength of indexical visualizations, introducing a continuum of strong and weak indices to theorize the hybrid and complex forms of indexicality in animation, ranging from graphic user interfaces (GUI) to data visualization. The article concludes by examining four indexical connections in relation to physical and virtual reality, offering a theoretical framework with which to conceptualize animation’s indexing abilities in today’s mixed realities.
10

Stuckey, Owen. "A Comparison of ArcGIS and QGIS for Animation". Cartographic Perspectives, no. 85 (22.06.2017): 23–32. http://dx.doi.org/10.14714/cp85.1405.

Abstract:
I compare two GIS programs which can be used to create cartographic animations—the commercial Esri ArcGIS and the free and open-source QGIS. ArcGIS implements animation through the “Time Slider” while QGIS uses a plugin called “TimeManager.” There are some key similarities and differences as well as functions unique to each plugin. This analysis examines each program’s capabilities in mapping time series data. Criteria for evaluation include the number of steps, the number of output formats, input of data, processing, output of a finished animation, and cost. The comparison indicates that ArcGIS has more control in input, processing, and output of animations than QGIS, but has a baseline cost of $100 per year for a personal license. In contrast, QGIS is free, uses fewer steps, and enables more output formats. The QGIS interface can make data input, processing, and output of an animation slower.
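Both programs compared above animate time-series geodata by filtering features with a moving time window. The following generic sketch, independent of ArcGIS and QGIS and using synthetic point records in place of a real attribute table, reproduces that idea by writing one map frame per time step with pandas and matplotlib:

```python
# Generic time-slider sketch: filter point records by a trailing time window and
# save one map frame per step; the frames can then be assembled into a video.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic records standing in for a real attribute table (lon, lat, timestamp).
rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "lon": rng.uniform(19.0, 20.5, n),
    "lat": rng.uniform(45.0, 46.0, n),
    "timestamp": pd.to_datetime("2017-01-01")
                 + pd.to_timedelta(rng.uniform(0, 180, n), unit="D"),
})

steps = pd.date_range(df["timestamp"].min(), df["timestamp"].max(), periods=60)
window = pd.Timedelta(days=14)                    # trailing time window per frame

for i, t in enumerate(steps):
    visible = df[(df["timestamp"] > t - window) & (df["timestamp"] <= t)]
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.scatter(visible["lon"], visible["lat"], s=10)
    ax.set_title(t.strftime("%Y-%m-%d"))
    fig.savefig(f"frame_{i:03d}.png", dpi=100)
    plt.close(fig)
```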

Doctoral dissertations on the topic "Data Animation"

1

Smith, Raymond James. "Animation of captured surface data". Thesis, University of Surrey, 2003. http://epubs.surrey.ac.uk/843070/.

Abstract:
Creation of 3D graphical content becomes ever harder, as both display capabilities and the demand for complex 3D content increase. In this thesis, we present a method of using densely scanned surface data from physical objects in interactive animation systems. By using a layered approach, incorporating skeletal animation and displacement mapping, we can realistically animate complex datasets with a minimum of manual intervention. We propose a method using three layers; firstly, an articulated skeleton layer provides simple motion control of the object. Secondly, a low-polygon control layer, based on the scanned surface, is mapped to this skeleton, and animated using a novel geometric skeletal animation method. Finally, the densely sampled surface mesh is mapped to this control layer using a normal volume mapping, forming the detail layer of the system. This mapping allows animation of the dense mesh data based on deformation of the control layer beneath. The complete layered animation chain allows an animator to perform interactive animation using the control layer, the results of which can then be used to automatically animate a highly detailed surface for final rendering. We also propose an extension to this method, in which the detail layer is replaced by a displacement map defined over the control layer. This enables dynamic level of detail rendering, allowing realtime rendering of the dense data, or an approximation thereof. This representation also supports such applications as simple surface editing and compression of surface data. We describe a novel displacement map creation technique based on normal volume mapping, and analyse the performance and accuracy of this method.
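The thesis above drives a densely scanned surface from a skeleton through a control layer. For readers unfamiliar with skeleton-driven deformation, here is a minimal linear blend skinning sketch; it shows the textbook technique only, not the thesis's normal-volume mapping, and the two-bone example data are made up:

```python
# Minimal linear-blend-skinning sketch: vertices are deformed by a weighted blend
# of per-bone transforms. All data here are invented for illustration.
import numpy as np

def skin(vertices, weights, bone_transforms):
    """vertices: (V, 3); weights: (V, B), rows summing to 1; bone_transforms: (B, 4, 4)."""
    v_h = np.hstack([vertices, np.ones((len(vertices), 1))])      # homogeneous coords
    # Per-bone transformed copies of every vertex: (B, V, 4)
    per_bone = np.einsum("bij,vj->bvi", bone_transforms, v_h)
    # Blend by the per-vertex bone weights: (V, 4), then drop the w component
    blended = np.einsum("vb,bvi->vi", weights, per_bone)
    return blended[:, :3]

# Two-bone toy example: a 3-vertex "arm" bent 90 degrees at the second bone.
verts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
identity = np.eye(4)
bend = np.array([[0, -1, 0, 1],    # rotate 90 degrees about z around the point (1, 0, 0)
                 [1,  0, 0, -1],
                 [0,  0, 1, 0],
                 [0,  0, 0, 1]], dtype=float)
print(skin(verts, w, np.stack([identity, bend])))
```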
2

Stainback, Pamela Barth. "Computer animation: the animation capabilities of the Genigraphics 100C". Online version of thesis, 1990. http://hdl.handle.net/1850/11460.

3

Yun, Hee Cheol. "Compression of computer animation frames". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13070.

4

Nar, Selim. "A Virtual Human Animation Tool Using Motion Capture Data". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609683/index.pdf.

Abstract:
In this study, we developed an animation tool to animate 3D virtual characters. The tool offers facilities to integrate motion capture data with a 3D character mesh and animate the mesh by using Skeleton Subsurface Deformation and Dual Quaternion Skinning Methods. It is a compact tool, so it is possible to distribute, install and use the tool with ease. This tool can be used to illustrate medical kinematic gait data for educational purposes. For validation, we obtained medical motion capture data from two separate sources and animated a 3D mesh model by using this data. The animations are presented to physicians for evaluation. The results show that the tool is sufficient in displaying obvious gait patterns of the patients. The tool provides interactivity for inspecting the movements of patient from different angles and distances. We animate anonymous virtual characters which provide anonymity of the patient.
5

Scheidt, November. "A facial animation driven by X-ray microbeam data". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0021/MQ54745.pdf.

6

Yin, KangKang. "Data-driven kinematic and dynamic models for character animation". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31759.

Abstract:
Human motion plays a key role in the production of films, video games, virtual reality applications, and the control of humanoid robots. Unfortunately, it is hard to generate high quality human motion for character animation either manually or algorithmically. As a result, approaches based on motion capture data have become a central focus of character animation research in recent years. We observe three principal weaknesses in previous work using data-driven approaches for modelling human motion. First, basic balance behaviours and locomotion tasks are currently not well modelled. Second, the ability to produce high quality motion that is responsive to its environment is limited. Third, knowledge about human motor control is not well utilized. This thesis develops several techniques to generalize motion capture character animations to balance and respond. We focus on balance and locomotion tasks, with an emphasis on responding to disturbances, user interaction, and motor control integration. For this purpose, we investigate both kinematic and dynamic models. Kinematic models are intuitive and fast to construct, but have narrow generality, and thus require more data. A novel performance-driven animation interface to a motion database is developed, which allows a user to use foot pressure to control an avatar to balance in place, punch, kick, and step. We also present a virtual avatar that can respond to pushes, with the aid of a motion database of push responses. Consideration is given to dynamics using motion selection and adaption. Dynamic modelling using forward dynamics simulations requires solving difficult problems related to motor control, but permits wider generalization from given motion data. We first present a simple neuromuscular model that decomposes joint torques into feedforward and low-gain feedback components, and can deal with small perturbations that are assumed not to affect balance. To cope with large perturbations we develop explicit balance recovery strategies for a standing character that is pushed in any direction. Lastly, we present a simple continuous balance feedback mechanism that enables the control of a large variety of locomotion gaits for bipeds. Different locomotion tasks, including walking, running, and skipping, are constructed either manually or from motion capture examples. Feedforward torques can be learned from the feedback components, emulating a biological motor learning process that leads to more stable and natural motions with low gains. The results of this thesis demonstrate the potential of a new generation of more sophisticated kinematic and dynamic models of human motion.
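The neuromuscular model described above decomposes joint torques into a feedforward term and a low-gain feedback term. The sketch below illustrates that decomposition with a PD-style feedback correction on a single toy joint; the gains, reference trajectory, and unit-inertia dynamics are assumptions for illustration, not the thesis's controller:

```python
# Illustrative sketch of feedforward-plus-low-gain-feedback torque control on a
# single joint with toy dynamics (unit inertia, no gravity).
import numpy as np

def joint_torque(q, qd, q_ref, qd_ref, tau_ff, kp=30.0, kd=3.0):
    """q, qd: current joint angle/velocity; q_ref, qd_ref: reference values;
    tau_ff: feedforward torque recorded or learned for this instant."""
    feedback = kp * (q_ref - q) + kd * (qd_ref - qd)   # low-gain PD correction
    return tau_ff + feedback

# Track a slow sine reference with simple Euler integration.
dt, q, qd = 0.001, 0.0, 0.0
for step in range(2000):
    t = step * dt
    q_ref, qd_ref = 0.5 * np.sin(t), 0.5 * np.cos(t)
    tau = joint_torque(q, qd, q_ref, qd_ref, tau_ff=0.0)
    qdd = tau                          # unit inertia: acceleration equals torque
    qd += qdd * dt
    q += qd * dt
print(round(q, 3), round(0.5 * np.sin(2.0), 3))   # tracked angle vs reference angle
```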
7

Allander, Karl, Jim Svanberg, and Mattias Klittmark. "Intresseväckande Animation: Utställningsmaterial för Mälsåkerprojektet". Thesis, Mälardalen University, Department of Innovation, Design and Product Development, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-703.

Abstract:
In the summer of 2006, Mälsåker Castle opens to the public, and one of the planned projects is a multimedia exhibition about the Norwegians who were secretly trained there during the Second World War.

The authors of this report are all students of information design, specialising in illustration, at Mälardalen University. Since we have a strong interest in animation, this project seemed very interesting.

We were part of a project group that included computer scientists, text designers, and illustrators. The illustrators' part of the project was to create the visual material in an engaging way. The report examines possible solutions for animation, style, visual dramaturgy, and the technical conditions required to produce a successful end result. It describes the approach taken to reach these goals, including methods such as literature studies, discussions, and user testing. The exhibition format is experimental, and the work has therefore produced a result that can be seen as a case study in itself.

Tests of the finished material show that, given the circumstances, we succeeded in achieving a good result. In this context, an animated slideshow works well as informative exhibition material.
8

Karlsson, Tobias. "Kroppsspråk i 3d-animation : Lögner hos 3d-karaktärer". Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5022.

Abstract:
This work examines whether we can perceive lying and nervousness in a stripped-down virtual character. Based on research in behavioural science, body language, and non-verbal signals, deception cues and signs of nervousness were animated on a stripped-down virtual character. These animations were shown to sample groups without auditory cues, such as speech, and a quantitative study was carried out to answer the research question. The results of the study, however, gave no definite answer. For the respondents to perceive deception cues in a stripped-down virtual character, several aspects had to be taken into account; for example, the cues needed to be clear and not be overshadowed by stronger emotions or signals. Signs of nervousness, on the other hand, were easily read by the respondents, which may mean that a stripped-down virtual character can portray a nervous state without auditory cues. The results have been weighed against the collected data and previous research in order to discuss the study's shortcomings and suitable future adjustments to address them. Finally, possibilities for future work are discussed, along with speculation about how a similar study could be used from an industry perspective.
9

Lindström, Erik. "Animation of humanoid characters using reinforcement learning". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-260338.

Abstract:
Procedural animation is still in its infancy, and one of the techniques used to create it is reinforcement learning. In this project, swimming animations are created using UnityML version 0.6 and its reinforcement learning training agents, with the PPO policy created by OpenAI. A humanoid character is placed in a simulated water environment and propels itself forward by rotating its joints. The force created depends on the joint's mass and the scale of the rotation. The animation is then compared to a swimming animation created using motion capture data. It is concluded that the motion capture animation is significantly more realistic than the one created in this project. The procedurally created animations display many of the typical issues of reinforcement learning, such as jittering and non-smooth motions. While the model is relatively simple, it is not possible to avoid these issues completely with more computational power in the form of a more complex model with more degrees of freedom. It is, however, possible to fine-tune the animations with the improvements listed at the end of the discussion.

Procedurally created animations are a relatively new and constantly evolving area, and one of the techniques for creating them is reinforcement learning. In this project, animations of swimming behaviour are created with Unity's UnityML plugin, version 0.6, using its reinforcement learning agents and the PPO policy created by OpenAI. A humanoid character is instantiated in a simulated water environment and gains its ability to move by rotating its limbs, such as arms and legs, which are connected with Unity's joint components. The force created is based on the limb's mass and the size of the rotation from one observation to the next: greater mass and rotation give greater force. An animation behaviour is built up from a large number of observations and applied forces. The animation is then compared with a swimming animation created from human reference data. The procedurally created animation turns out to be considerably less realistic than the one created from reference data. It shows several of the common weaknesses of animations created with reinforcement learning, such as exaggerated and abrupt movements. Even though the model is relatively simple, the quality of the animation would not improve significantly with a more complex model with a higher degree of freedom. It is, however, possible to improve the animation and eliminate many of the errors afterwards, as described in the discussion.
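The thesis above trains a swimming controller with PPO in UnityML. The sketch below shows the kind of reward such setups often use, forward progress minus an effort penalty; this is an assumption for illustration, not the reward actually used in the thesis:

```python
# Illustrative sketch only: a forward-progress-minus-effort reward of the kind
# often used when training swimming controllers with reinforcement learning.
import numpy as np

def swim_reward(prev_x, curr_x, joint_torques, dt,
                progress_weight=1.0, effort_weight=0.005):
    forward_velocity = (curr_x - prev_x) / dt           # progress along the swim axis
    effort = float(np.sum(np.square(joint_torques)))     # penalize large joint forces
    return progress_weight * forward_velocity - effort_weight * effort

# One simulated step: the character moved 2 cm in 1/50 s while actuating 8 joints.
torques = np.random.uniform(-1.0, 1.0, size=8)
print(swim_reward(prev_x=0.00, curr_x=0.02, joint_torques=torques, dt=0.02))
```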
10

Jokela, Juha. "Naturvetenskaplig animation inom QUASI-projektet". Thesis, Mälardalen University, Department of Innovation, Design and Product Development, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-238.

Abstract:
This report is part of my degree project on animating what happens in a yeast cell when it is exposed to salt. Mälardalen University takes part in an EU-funded research effort called the QUASI project, a collaboration between several universities across Europe. The project investigates biochemical processes at the cellular level. The animation is intended to help teachers and students understand the subject better.

Books on the topic "Data Animation"

1

Deng, Zhigang, and Ulrich Neumann, eds. Data-Driven 3D Facial Animation. London: Springer London, 2007. http://dx.doi.org/10.1007/978-1-84628-907-1.

2

Ogao, Patrick Job. Exploratory visualization of temporal geospatial data using animation = Exploratieve visualisatie van temporele ruimtelijke gegevens met behulp van animaties. [Enschede, The Netherlands: International Institute for Aerospace Survey and Earth Sciences], 2002.

3

Dransch, Doris. Temporale und nontemporale Computer-Animation in der Kartographie. Berlin: Selbstverlag Fachbereich Geowissenschaften, Freie Universität Berlin, 1995.

4

Bradford, Rex E. Real-time animation tool-kit in C++. New York: Wiley, 1995.

5

B, Yates Natalie, ed. Environmental modeling, illustration, and animation: Theory and techniques for the representation of dynamic landscape. Hoboken, NJ: Wiley, 2012.

6

Lee, Victor Yuencheng. The construction and animation of functional facial models from cylindrical range/reflectance data. Ottawa: National Library of Canada, 1993.

7

Sassmannshausen, Volker. Architektur und Simulation: Animation als manipulierbares Darstellungswerkzeug in der Architektur. Berlin: Wissenschaft und Technik, 1998.

8

Real-time animation tool-kit in C++. New York: Wiley, 1995.

9

Meier, Timothy W. Investigation into the use of texturing for real-time computer animation. Monterey, California: Naval Postgraduate School, 1988.

10

Dacheng, Tao, red. Modern machine learning techniques and their applications in cartoon animation research. Piscataway, N.J: IEEE Press/Wiley, 2013.


Book chapters on the topic "Data Animation"

1

Turk, Irfan. "Data Visualization and Animation". In Practical MATLAB, 133–45. Berkeley, CA: Apress, 2019. http://dx.doi.org/10.1007/978-1-4842-5281-9_7.

2

Leray, Pascal. "A 3D animation system". In Data Structures for Raster Graphics, 165–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-642-71071-1_9.

3

Courty, Nicolas, and Thomas Corpetti. "Data-Driven Animation of Crowds". In Computer Vision/Computer Graphics Collaboration Techniques, 377–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71457-6_34.

4

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis". In Handbook of Human Motion, 1–29. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_10-1.

5

Jörg, Sophie. "Data-Driven Hand Animation Synthesis". In Handbook of Human Motion, 1–13. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-30808-1_13-1.

6

Komura, Taku, Ikhsanul Habibie, Jonathan Schwarz, and Daniel Holden. "Data-Driven Character Animation Synthesis". In Handbook of Human Motion, 2003–31. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_10.

7

Jörg, Sophie. "Data-Driven Hand Animation Synthesis". In Handbook of Human Motion, 2079–91. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-14418-4_13.

8

Lee, Myeong Won, and Tosiyasu L. Kunii. "Animation Platform: A Data Management System for Modeling Moving Objects". In Computer Animation ’91, 169–85. Tokyo: Springer Japan, 1991. http://dx.doi.org/10.1007/978-4-431-66890-9_12.

9

Chen, Philip C. "Applications of Scientific Visualization to Meteorological Data Analysis and Animation". In Computer Animation ’90, 31–38. Tokyo: Springer Japan, 1990. http://dx.doi.org/10.1007/978-4-431-68296-7_3.

10

Jern, M., S. Palmberg, and M. Ranlöf. "Visual Data Navigators “Collaboratories”". In Advances in Modelling, Animation and Rendering, 65–77. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-0103-1_4.


Conference papers on the topic "Data Animation"

1

Ohki, Hidehiro, Moriyuki Shirazawa, Keiji Gyohten, Naomichi Sueda, and Seiki Inoue. "Sport Data Animating - An Automatic Animation Generator from Real Soccer Data". In 2009 International Conference on Complex, Intelligent and Software Intensive Systems (CISIS). IEEE, 2009. http://dx.doi.org/10.1109/cisis.2009.185.

2

Yüksel, Sedat, and Mestan Boyaci. "Examining Effect of Animation Applications on Student Achievement in Science and Technology Course". In 1st International Baltic Symposium on Science and Technology Education. Scientia Socialis Ltd., 2015. http://dx.doi.org/10.33225/balticste/2015.51.

Abstract:
The aim of this study was to determine whether or not animation applications affect student achievement in science and technology courses. For this purpose, the effect of a constructivist approach supported by animations in teaching the unit "Living Organisms and Energy" to 8th grade students on their academic achievement was investigated. The unit was taught to the experimental group using a constructivist approach supported by animations and to the control group using a constructivist approach without animations. For data collection, an achievement test was developed and administered to the experimental and control groups as a pre-test and post-test. The collected data were analyzed using a t-test and MANOVA. As a result of the research, it was revealed that supporting the constructivist approach with animations was more effective in increasing academic achievement. Key words: animation, constructivist science education, teaching supported by computer.
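The study above compares achievement scores between an animation-supported group and a control group using a t-test and MANOVA. A minimal sketch of that kind of analysis with SciPy, on invented scores rather than the study's data:

```python
# Minimal sketch: independent-samples t-test on post-test achievement scores.
# The scores below are invented; they are not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
animation_group = rng.normal(loc=78, scale=8, size=30)   # constructivist + animations
control_group = rng.normal(loc=70, scale=8, size=30)     # constructivist only

result = stats.ttest_ind(animation_group, control_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A MANOVA across several dependent measures could be run in a similar spirit
# with statsmodels.multivariate.manova.MANOVA.
```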
3

White, Ryan, Keenan Crane, and D. A. Forsyth. "Data driven cloth animation". In ACM SIGGRAPH 2007 sketches. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1278780.1278825.

4

Vladić, Gojko, Selena Mijatović, Gordana Bošnjaković, Ivana Jurič, and Vladimir Dimovski. "Analysis of the loading animation performance and viewer perception". In 10th International Symposium on Graphic Engineering and Design. University of Novi Sad, Faculty of Technical Sciences, Department of Graphic Engineering and Design, 2020. http://dx.doi.org/10.24867/grid-2020-p76.

Abstract:
Digital content presented to the viewer usually has to be processed by the device on which it is displayed; in the case of internet content, processing is done by the hosting server and the user's device, with additional download time. The time elapsed for these tasks differs depending on the quantity of data and the complexity of the processing needed. The waiting time before content is displayed can have a significant influence on the user experience. Loading animations are often used to divert viewers' attention or to provide the viewer with information about the process: progress, estimated time, etc. The performance of these animations can differ depending on their type, elements, or even their story. This paper presents an analysis of the performance and viewer perception of different loading animations. A survey and eye tracking were used to gain insight into the viewer's perception of the loading animations. The results show noticeable differences caused by the loading animation type.
5

Lee, Jehee. "Introduction to data-driven animation". In ACM SIGGRAPH ASIA 2010 Courses. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1900520.1900524.

6

Grover, Divyanshu, and Parag Chaudhuri. "Data-driven 2D effects animation". In the Tenth Indian Conference. New York, New York, USA: ACM Press, 2016. http://dx.doi.org/10.1145/3009977.3010000.

7

Yu, Hongchuan, Taku Komura, and Jian J. Zhang. "Data-driven animation technology (D2AT)". In SA '17: SIGGRAPH Asia 2017. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3154457.3154458.

8

Schell, Jodie. "IBM Data Baby". In ACM SIGGRAPH 2010 Computer Animation Festival. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1836623.1836651.

9

Schell, Jodie. "IBM Data Energy". In ACM SIGGRAPH 2010 Computer Animation Festival. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1836623.1836652.

10

Bargar, Robin, Alex Betts, and Insook Choi. "Computing procedural soundtracks from animation data". In ACM SIGGRAPH 98 Conference abstracts and applications. New York, New York, USA: ACM Press, 1998. http://dx.doi.org/10.1145/280953.282465.


Reports on the topic "Data Animation"

1

Kimbler, Nate. Data Visualization: Conversion of Data to Animation Files. Fort Belvoir, VA: Defense Technical Information Center, August 2004. http://dx.doi.org/10.21236/ada426972.

2

Vines, John M. Leveraging Open Source Software to Create Technical Animations of Scientific Data. Fort Belvoir, VA: Defense Technical Information Center, September 2006. http://dx.doi.org/10.21236/ada455820.
