To view the other types of publications on this topic, follow this link: Interactive audio.

Journal articles on the topic "Interactive audio"

Check out the top 50 journal articles for research on the topic "Interactive audio".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Waters, Richard C. "Audio interactive tutor." Journal of the Acoustical Society of America 101, no. 5 (1997): 2428. http://dx.doi.org/10.1121/1.419492.

2

Raman, T. V., and David Gries. "Interactive Audio Documents." Journal of Visual Languages & Computing 7, no. 1 (March 1996): 97–108. http://dx.doi.org/10.1006/jvlc.1996.0006.

3

Waters, Richard C. "THE AUDIO INTERACTIVE TUTOR." Computer Assisted Language Learning 8, no. 4 (December 1995): 325–54. http://dx.doi.org/10.1080/0958822950080403.

4

Jaspers, Fons, and Zhang Ji-Ping. "Interactive Audio for Computer Assisted Learning." Journal of Educational Technology Systems 19, no. 1 (September 1990): 59–74. http://dx.doi.org/10.2190/xjgl-xp52-3teg-yg2m.

5

CHURCH, D. M. "Interactive Audio for Foreign-Language Learning." Literary and Linguistic Computing 5, no. 2 (01.04.1990): 191–94. http://dx.doi.org/10.1093/llc/5.2.191.

6

McPhee, Scot. "Audio-visual Poetics in Interactive Multimedia." Convergence: The International Journal of Research into New Media Technologies 3, no. 4 (December 1997): 72–91. http://dx.doi.org/10.1177/135485659700300407.

7

Pfost, Maximilian, and Jana G. Freund. "Interactive Audio Pens, Home Literacy Activities and Emergent Literacy Skills." Jugendweihe & Co. – Übergangsrituale im Jugendalter 13, no. 3-2018 (10.09.2018): 337–49. http://dx.doi.org/10.3224/diskurs.v13i3.06.

Annotation:
Interactive audio pens – pens that contain a built-in speaker and that can be used in combination with books that are made for this purpose – are new, commercially available technological developments that have found widespread dissemination. In the current paper, we studied the availability and use of these interactive audio pens and their associations with home literacy activities and children’s emergent literacy skills in a sample of 103 German preschool children. We found that the availability of interactive audio pens at home showed small positive relations to children’s verbal short-term memory. Home literacy activities were not correlated to the availability of interactive audio pens. Results are discussed against the background of current research in multimedia storybook reading.
8

Feasley, Charles E., and Ron Payne. "Media review: Interactive audio: Available training resources." American Journal of Distance Education 2, no. 3 (January 1988): 97–100. http://dx.doi.org/10.1080/08923648809526642.

9

Samani, Hooman Aghaebrahimi, Adrian David Cheok, and Owen Noel Newton Fernando. "An affective interactive audio interface for Lovotics." Computers in Entertainment 9, no. 2 (July 2011): 1–14. http://dx.doi.org/10.1145/1998376.1998377.

10

Oldfield, Robert, Ben Shirley, and Jens Spille. "Object-based audio for interactive football broadcast." Multimedia Tools and Applications 74, no. 8 (01.05.2013): 2717–41. http://dx.doi.org/10.1007/s11042-013-1472-2.

11

Huo, Ruicai, Wei Huang, and Chao Liu. "Design and implementation of intelligent audio-video interaction and control platform based on distributed architecture." MATEC Web of Conferences 336 (2021): 05002. http://dx.doi.org/10.1051/matecconf/202133605002.

Annotation:
The traditional audio-video interaction mainly adopts matrix centralized processing technology with limited transmission distance and expansion. In application mode, the function of remote signal sharing and multi-screen linkage cannot be realized. Aiming at the above problems, an audio-video interactive platform based on distributed architecture is designed in the background of multi-type audio-video network information interconnection. This platform takes the distributed architecture as the core and combines the audio-video codec technology to build the distributed display control system, which can realize the arbitrary display and control of signals from many places and scenes, and satisfy the rapid sharing and forwarding and integrated call of audio-video information through the permission setting.
12

Kim, Kwangki, Jeongil Seo, Seungkwon Beack, Kyeongok Kang, and Minsoo Hahn. "Spatial Audio Object Coding With Two-Step Coding Structure for Interactive Audio Service." IEEE Transactions on Multimedia 13, no. 6 (December 2011): 1208–16. http://dx.doi.org/10.1109/tmm.2011.2168197.

13

Torres, Julio, and Íñigo Yanguas. "Levels of Engagement in Task-based Synchronous Computer Mediated Interaction." Canadian Journal of Applied Linguistics 24, no. 2 (31.05.2021): 203–28. http://dx.doi.org/10.37213/cjal.2021.31319.

Annotation:
Investigating task-based synchronous computer-mediated communication (SCMC) interaction has increasingly received scholarly attention. However, studies have focused on negotiation of meaning and the quantity, focus and resolution of language related episodes (LREs). This study aims to broaden our understanding of the role of audio, video, and text SCMC conditions by additionally examining second language (L2) learners’ levels of engagement during the production of LREs as a result of interactive real-world tasks. We tested 52 dyads of L2 Spanish intermediate learners who completed a decision-making/writing task. Our main analysis revealed that dyads in the audio SCMC condition engaged in more limited LREs vis-à-vis the text SCMC group, and audio SCMC dyads also showed a trend of engaging more in elaborate LREs. The findings imply that interactive SCMC conditions can place differential demands on L2 learners, which has an effect on the ways in which L2 learners address LREs during task-based interaction.
14

Geronazzo, Michele, Amalie Rosenkvist, David Sebastian Eriksen, Camilla Kirstine Markmann-Hansen, Jeppe Køhlert, Miicha Valimaa, Mikkel Brogaard Vittrup, and Stefania Serafin. "Creating an Audio Story with Interactive Binaural Rendering in Virtual Reality." Wireless Communications and Mobile Computing 2019 (14.11.2019): 1–14. http://dx.doi.org/10.1155/2019/1463204.

Annotation:
The process of listening to an audiobook is usually a rather passive act that does not require an active interaction. If spatial interaction is incorporated into a storytelling scenario, possibilities of a novel experience that allows an active participation can open up and might affect the user experience. The aim of this paper is to create a portable prototype system based on an embedded hardware platform, allowing listeners to get immersed in an interactive audio storytelling experience enhanced by dynamic binaural audio rendering. For the evaluation of the experience, a short story based on the horror narrative of Stephen King’s Strawberry Springs is adapted and designed in virtual environments. A comparison among three different listening experiences, namely, (i) monophonic (traditional audio story), (ii) static binaural rendering (state-of-the-art audio story), and (iii) our prototype, is conducted. We discuss the quality of the experience based on usability testing, physiological data, emotional assessments, and questionnaires for immersion and spatial presence. Results identify a clear trend for an increase in immersion with our prototype compared to traditional audiobooks, showing also an emphasis on story-specific emotions, i.e., terror and fear.
15

Lupsa-Tataru, Lucian. "Audio fade-out profile shaping for interactive multimedia." Technium: Romanian Journal of Applied Sciences and Technology 2, no. 7 (27.10.2020): 179–89. http://dx.doi.org/10.47577/technium.v2i7.1913.

Annotation:
The present investigation refers to an effective technique of shaping the audio fade-out profile in the current context of high demand for interactive multimedia. A rational function is employed in order to bring forth a highly versatile fade-out transition. To emphasize that the audio volume, i.e. the output of the selected rational function, is returned without noticeable delays during the fading process, a straightforward implementation in pure JavaScript is put forward in the paper, taking into account the most unfavorable case from the computational point of view.
16

Burge, Elizabeth J., and Judith E. Snow. "Interactive audio classrooms: key principles for effective practice." Education for Information 8, no. 4 (01.10.1990): 299–312. http://dx.doi.org/10.3233/efi-1990-8402.

17

Goudarzi, Visda. "Designing an Interactive Audio Interface for Climate Science." IEEE MultiMedia 22, no. 1 (January 2015): 41–47. http://dx.doi.org/10.1109/mmul.2015.4.

18

Raghuvanshi, Nikunj, Andrew Allen, and John Snyder. "Numerical wave simulation for interactive audio-visual applications." Journal of the Acoustical Society of America 139, no. 4 (April 2016): 2008–9. http://dx.doi.org/10.1121/1.4949905.

19

Correia, Nuno, and Raul Masu. "Affordances and Constraints in Interactive Audio / Visual Systems." EAI Endorsed Transactions on Creative Technologies 7, no. 23 (23.04.2020): 164000. http://dx.doi.org/10.4108/eai.23-4-2020.164000.

20

Song, Myung-Suk, Cha Zhang, Dinei Florencio, and Hong-Goo Kang. "An Interactive 3-D Audio System With Loudspeakers." IEEE Transactions on Multimedia 13, no. 5 (October 2011): 844–55. http://dx.doi.org/10.1109/tmm.2011.2162581.

21

Brock, Anke, and Christophe Jouffrais. "Interactive audio-tactile maps for visually impaired people." ACM SIGACCESS Accessibility and Computing, no. 113 (24.11.2015): 3–12. http://dx.doi.org/10.1145/2850440.2850441.

22

Chamberlain, Alan, Mads Bødker, Adrian Hazzard, David McGookin, David De Roure, Pip Willcox, and Konstantinos Papangelis. "Audio Technology and Mobile Human Computer Interaction." International Journal of Mobile Human Computer Interaction 9, no. 4 (October 2017): 25–40. http://dx.doi.org/10.4018/ijmhci.2017100103.

Annotation:
Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design and support such systems to be applied in areas, such as the humanities.
23

AU, TERRY KIT-FONG, WINNIE WAILAN CHAN, LIAO CHENG, LINDA S. SIEGEL, and RICKY VAN YIP TSO. "Can non-interactive language input benefit young second-language learners?" Journal of Child Language 42, no. 2 (07.04.2014): 323–50. http://dx.doi.org/10.1017/s0305000913000627.

Annotation:
To fully acquire a language, especially its phonology, children need linguistic input from native speakers early on. When interaction with native speakers is not always possible – e.g. for children learning a second language that is not the societal language – audios are commonly used as an affordable substitute. But does such non-interactive input work? Two experiments evaluated the usefulness of audio storybooks in acquiring a more native-like second-language accent. Young children, first- and second-graders in Hong Kong whose native language was Cantonese Chinese, were given take-home listening assignments in a second language, either English or Putonghua Chinese. Accent ratings of the children's story reading revealed measurable benefits of non-interactive input from native speakers. The benefits were far more robust for Putonghua than English. Implications for second-language accent acquisition are discussed.
24

Simon, Christian, Yannik Grewe, Nicolas Faecks, and Ulli Scuda. "Field Tests for Immersive and Interactive Broadcast Audio Production using MPEG-H 3D Audio." SET INTERNATIONAL JOURNAL OF BROADCAST ENGINEERING 2018, no. 1 (01.12.2018): 40–46. http://dx.doi.org/10.18580/setijbe.2018.5.

25

Johnston, Daniel, Hauke Egermann, and Gavin Kearney. "Measuring the Behavioral Response to Spatial Audio within a Multi-Modal Virtual Reality Environment in Children with Autism Spectrum Disorder." Applied Sciences 9, no. 15 (02.08.2019): 3152. http://dx.doi.org/10.3390/app9153152.

Annotation:
Virtual Reality (VR) has been an active area of research in the development of interactive interventions for individuals with autism spectrum disorder (ASD) for over two decades. These immersive environments create a safe platform in which therapy can address the core symptoms associated with this condition. Recent advancements in spatial audio rendering techniques for VR now allow for the creation of realistic audio environments that accurately match their visual counterparts. However, reported auditory processing impairments associated with autism may affect how an individual interacts with their virtual therapy application. This study aims to investigate if these difficulties in processing audio information would directly impact how individuals with autism interact with a presented virtual spatial audio environment. Two experiments were conducted with participants diagnosed with ASD (n = 29) that compared: (1) behavioral reaction between spatialized and non-spatialized audio; and (2) the effect of background noise on participant interaction. Participants listening to binaural-based spatial audio showed higher spatial attention towards target auditory events. In addition, the amount of competing background audio was reported to influence spatial attention and interaction. These findings suggest that despite associated sensory processing difficulties, those with ASD can correctly decode the auditory cues simulated in current spatial audio rendering techniques.
26

Mu, Jie, and Ge Wang. "The Research on 3G Networks Streaming Media in M-Learning." Advanced Materials Research 219-220 (March 2011): 1186–90. http://dx.doi.org/10.4028/www.scientific.net/amr.219-220.1186.

Annotation:
The popularity of 3G networks has brought revolutionary changes to M-learning, enabling real-time as well as interactive video and audio learning. The article puts emphasis on how to protect video and audio data transmission and the quality of M-learning.
27

Cabrera, Andrés, JoAnn Kuchera-Morin, and Curtis Roads. "The Evolution of Spatial Audio in the AlloSphere." Computer Music Journal 40, no. 4 (December 2016): 47–61. http://dx.doi.org/10.1162/comj_a_00382.

Annotation:
Spatial audio has been at the core of the multimodal experience at the AlloSphere, a unique instrument for data discovery and exploration through interactive immersive display, since its conception. The AlloSphere multichannel spatial audio design has direct roots in the history of electroacoustic spatial audio and is the result of previous activities in spatial audio at the University of California at Santa Barbara. A concise technical description of the AlloSphere, its architectural and acoustic features, its unique 3-D visual projection system, and the current 54.1 Meyer Sound audio infrastructure is presented, with details of the audio software architecture and the immersive sound capabilities it supports. As part of the process of realizing scientific and artistic projects for the AlloSphere, spatial audio research has been conducted, including the use of decorrelation of audio signals to supplement spatialization and tackling the thorny problem of interactive up-mixing through the Sound Element Spatializer and the Zirkonium Chords project. The latter uses the metaphor of geometric spatial chords as a high-level means of spatial up-mixing in performance. Other developments relating to spatial audio are presented, such as Ryan McGee's Spatial Modulation Synthesis, which simultaneously explores the synthesis of space and timbre.
28

Reddy, Christopher. "A New Species of Science Education." American Biology Teacher 76, no. 1 (01.01.2014): 28–33. http://dx.doi.org/10.1525/abt.2014.76.1.7.

Annotation:
Interactive television is a type of distance education that uses streaming audio and video technology for real-time student–teacher interaction. Here, I discuss the design and logistics for developing a high school laboratory-based science course taught to students at a distance using interactive technologies. The goal is to share a successful model of science implementation that has the potential to alleviate financially induced stress on science programs.
29

Gaudina, Marco, V. Zappi, A. Brogni, and D. G. Caldwell. "Haptic, Audio, and Visual: Multimodal Distribution for Interactive Games." IEEE Transactions on Instrumentation and Measurement 61, no. 11 (November 2012): 3103–11. http://dx.doi.org/10.1109/tim.2012.2202071.

30

Piacsek, Andrew A. "An interactive audio synthesizer for investigating formants and timbre." Journal of the Acoustical Society of America 134, no. 5 (November 2013): 4019. http://dx.doi.org/10.1121/1.4830672.

31

Zeng, Limin, Mei Miao, and Gerhard Weber. "Interactive Audio-haptic Map Explorer on a Tactile Display." Interacting with Computers 27, no. 4 (26.02.2014): 413–29. http://dx.doi.org/10.1093/iwc/iwu006.

32

Dawei Jia, A. Bhatti, S. Nahavandi, and B. Horan. "Human Performance Measures for Interactive Haptic-Audio-Visual Interfaces." IEEE Transactions on Haptics 6, no. 1 (2013): 46–57. http://dx.doi.org/10.1109/toh.2012.41.

33

Hong, Euy Shick. "DrWebusic: Real-time Interactive Audio System on the Web." Journal of Digital Contents Society 20, no. 8 (30.09.2019): 1755–61. http://dx.doi.org/10.9728/dcs.2019.20.9.1755.

34

Carvalho, Luiz Roberto, and Alice T. Cybis Pereira. "The Dynamic Audio Direct-Interactive and Indirect-Adaptive Sounds." International Journal of Recent Contributions from Engineering, Science & IT (iJES) 2, no. 3 (07.08.2014): 38. http://dx.doi.org/10.3991/ijes.v2i3.3822.

35

Corbett, Richard, Kees van den Doel, John E. Lloyd, and Wolfgang Heidrich. "TimbreFields: 3D Interactive Sound Models for Real-Time Audio." Presence: Teleoperators and Virtual Environments 16, no. 6 (01.12.2007): 643–54. http://dx.doi.org/10.1162/pres.16.6.643.

Annotation:
We describe a methodology for virtual reality designers to capture and resynthesize the variations in sound made by objects when we interact with them through contact such as touch. The timbre of contact sounds can vary greatly, depending on both the listener’s location relative to the object, and the interaction point on the object itself. We believe that an accurate rendering of this variation greatly enhances the feeling of immersion in a simulation. To do this, we model the variation with an efficient algorithm based on modal synthesis. This model contains a vector field that is defined on the product space of contact locations and listening positions around the object. The modal data are sampled on this high dimensional space using an automated measuring platform. A parameter-fitting algorithm is presented that recovers the parameters from a large set of sound recordings around objects and creates a continuous timbre field by interpolation. The model is subsequently rendered in a real-time simulation with integrated haptic, graphic, and audio display. We describe our experience with an implementation of this system and an informal evaluation of the results.
36

McKenney, Cynthia B., and Ellen B. Peffley. "(141) Blending Interactive Video Conferencing Classes with WebCT." HortScience 40, no. 4 (July 2005): 1043D—1043. http://dx.doi.org/10.21273/hortsci.40.4.1043d.

Annotation:
Teaching at a distance has many rewards and challenges inherent in its delivery. Interactive video conferencing has the advantages of having audio and visual contact with students during a set class period while having the disadvantages of scheduling multiple locations and keeping the equipment functioning at peak performance. Likewise, using a web platform such as WebCT provides a framework with excellent options to develop a course that is both audio and visually rich. This solution also presents its own difficulties as required textbooks change and the platform version may be upgraded. In this presentation, the advantages and disadvantages of both formats will be reviewed. In addition, helpful hints for blending these two teaching methods together to create a custom course will be discussed.
37

Santo, Santo. "STRATEGI VISUAL ANIMASI INTERAKTIF “KELILING JAWA”." Jurnal Dimensi DKV Seni Rupa dan Desain 4, no. 2 (01.10.2019): 109. http://dx.doi.org/10.25105/jdd.v4i2.5883.

Annotation:
The Visual Strategy of the Interactive Animation “Around Java”. The interactive animation “Around Java” is packed with digital technology, in which a visual language in the form of animation and audio introduces the culture of the island of Java. The animation is special because it pursues a social goal: conveying educational information and messages about the richness of Indonesian culture. The aesthetic approach analyzes the object of research using the criteria, principles, and general phenomena of scientific theories, namely graphic design, animation, communication and UI/UX. Based on the formalistic aspect, the results indicate that the interactive animation “Around Java” is effective in terms of visual appearance, and in terms of communication the cultural message can be conveyed well and clearly because it is supported by audio-visual material and engaging interaction experiences.
38

Wei, Yu Qing, Xing Gao, and Jing Hua Gao. "Virtools Based Development of Computer Experiment." Advanced Materials Research 756-759 (September 2013): 2887–91. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.2887.

Annotation:
Virtools is interactive authoring software that integrates 2D graphic images, 3D models, audio, video, and a variety of other file formats; it comes with a wealth of interactive behavior modules and can be used to create 3D games, virtual experiment simulations, and other highly interactive three-dimensional products. This paper presents a virtual experiment based on 3D and Virtools technology, in which the 3D experiment scenes are built and 3D animations are made, while Virtools handles data processing and interactive control. The experiments show that this method is authentic, interactive, simple, and easy to implement.
39

Parker, J. R., and John Heerema. "Audio Interaction in Computer Mediated Games." International Journal of Computer Games Technology 2008 (2008): 1–8. http://dx.doi.org/10.1155/2008/178923.

Annotation:
The use of sound in an interactive media environment has not been advanced, as a technology, as far as graphics or artificial intelligence. This discussion will explore the use of sound as a way to influence the player of a computer game, will show ways that a game can use sound as input, and will describe ways that the player can influence sound in a game. The role of sound in computer games will be explored, and some practical design ideas that can be used to improve the current state of the art will be given.
40

Lee, Wei-Tsong, Tin-Yu Wu, Yu-Chieh Cheng, Yue-Ru Chuang, and Shiann-Tsong Sheu. "Using the Kalman Filter for Auto Bit-rate H.264 Streaming Based on Human Interaction." International Journal of Technology and Human Interaction 9, no. 4 (October 2013): 58–74. http://dx.doi.org/10.4018/ijthi.2013100104.

Annotation:
Human Machine Interface (HMI) and interactive systems applications are complex and diversified, but human machine interaction in networks is even more complex. To design an interactive system, the designers must consider factors including design, technologies, people, activities and contexts. As the intermediary for humans and machines, the interactive system takes charge of not only data transmission, display, storage and conversion, but also reactions to human behaviors. Based on human interaction, this paper focuses on the popular video streaming. As one of the most commonly used compression video formats, H.264 provides better quality at lower bit-rates than its previous standards in transmitting video/audio data. However, the quality of networked multimedia streaming easily fluctuates with the bit-rate variation. In order to maintain good video quality, this paper proposes to use the Kalman filter to implement the Auto bit-rate technique, which can regulate the bit-rate of the video/audio data automatically when the bit-rate is insufficient, and simultaneously guarantee the video quality. The experimental result proves that our proposed Auto bit-rate scheme can regulate the bit-rate to achieve the optimal visual quality and offer the best quality of service at the same time.
41

Piitulainen, Roosa, and Ilyena Hirskyj-Douglas. "Music for Monkeys: Building Methods to Design with White-Faced Sakis for Animal-Driven Audio Enrichment Devices." Animals 10, no. 10 (30.09.2020): 1768. http://dx.doi.org/10.3390/ani10101768.

Annotation:
Computer systems for primates to listen to audio have been researched for a long time. However, there is a lack of investigations into what kind of sounds primates would prefer to listen to, how to quantify their preference, and how audio systems and methods can be designed in an animal-focused manner. One pressing question is, if given the choice to control an audio system, would or could primates use such a system. In this study, we design an audio enrichment prototype and method for white-faced sakis that allows them to listen to different sounds in their regular zoo habitat while automatically logging their interactions. Focusing on animal-centred design, this prototype was built from low fidelity testing of different forms within the sakis’ enclosure and gathering requirements from those who care for and view the animal. This process of designing in a participatory manner with the sakis resulted in an interactive system that was shown to be viable, non-invasive, highly interactive, and easy to use in a zoo habitat. Recordings of the sakis’ interactions demonstrated that the sakis triggered traffic audio more than silence, rain sounds, zen, and electronic music. The data and method also highlight the benefit of a longitudinal study within the animals’ own environment to mitigate against the novelty effect and the day-to-day varying rhythm of the animals and the zoo environment. This study builds on animal-centred methods and design paradigms to allow the monitoring of the animals’ behaviours in zoo environments, demonstrating that useful data can be yielded from primate-controlled devices. For the Animal-Computer Interaction community, this is the first audio enrichment system used in zoo contexts within the animals own environment over a long period of time that gives the primate control over their interactions and records this automatically.
42

Peng, Yun Feng, Chang Shu, and Xiang Lin Tan. "Study of Audio Circuit Board Testing System of some Station Based on DSP." Applied Mechanics and Materials 513-517 (February 2014): 3203–6. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.3203.

Annotation:
In this paper, the developed DSP-based audio circuit board testing system of a station is used to realize the automatic acquisition and testing of the parameters of the audio signal, and a touch screen is used to provide man-machine interaction and real-time display of the testing results. The detecting process is accurate and quick, so it can be widely used in the testing, checking, maintenance and fault location of the station's audio circuit boards.
43

Hulusić, Vedad, Kurt Debattista, Vibhor Aggarwal, and Alan Chalmers. "Maintaining frame rate perception in interactive environments by exploiting audio-visual cross-modal interaction." Visual Computer 27, no. 1 (10.07.2010): 57–66. http://dx.doi.org/10.1007/s00371-010-0514-2.

44

AKATSU, Hiroko, and Akinori KOMATSUBARA. "2F2-4 Audio Rhythms as Operational Assistance on Interactive Systems." Japanese journal of ergonomics 51, Supplement (2015): S316—S317. http://dx.doi.org/10.5100/jje.51.s316.

45

Linu, CK, D. Phansalkar, P. Stalin, Z. Singh, and P. Mohnish. "Interactive ‘Audio-Visual’ Symposium: An Effective Mode of Health Education." Journal of Research in Medical Education & Ethics 5, no. 3 (2015): 217. http://dx.doi.org/10.5958/2231-6728.2015.00044.x.

46

Thompson, John, JoAnn Kuchera-Morin, Marcos Novak, Dan Overholt, Lance Putnam, Graham Wakefield, and Wesley Smith. "The Allobrain: An interactive, stereographic, 3D audio, immersive virtual world." International Journal of Human-Computer Studies 67, no. 11 (November 2009): 934–46. http://dx.doi.org/10.1016/j.ijhcs.2009.05.005.

47

Xiahou, Jian Bing, and Zhen Xiong Wang. "The Apply of DirectShow in Scenario Interactive Teaching System." Advanced Materials Research 926-930 (May 2014): 4641–44. http://dx.doi.org/10.4028/www.scientific.net/amr.926-930.4641.

Annotation:
This article describes the basic knowledge and principles of DirectShow technology, including the DirectShow system architecture and COM technologies, and explains how to use DirectShow for real-time video acquisition and storage as well as audio and video synchronization in the scenario interactive teaching system.
48

Rudi, Jøran. "Organising Sound with Audio Clothes: An interview with Benoit Maubrey." Organised Sound 23, no. 3 (December 2018): 292–95. http://dx.doi.org/10.1017/s1355771818000201.

Annotation:
Benoit Maubrey’s work with audio art started in Berlin in 1982 with public sound sculptures, and he eventually turned to performative practices with portable audio embedded in clothes and costumes. His artistic practice currently spans site-specific and non-site-specific sound installations, locational and non-locational performances, as well as performed, interactive and non-interactive sound installations, and a comprehensive description of his artistic trajectory is planned for release in 2019: Benoit Maubrey – Sound Sculptures. His most well-known ensemble is The Audio Ballerinas, wearing tutus with a combination of solar cells, light sensors, samplers, radios, amplifiers and loudspeakers. The ensemble has been performing since its debut in Lille in 1990. Maubrey has developed a huge portfolio of audio ensemble performances on several continents, and an interesting thread of autonomy and critical reflection is running through his oeuvre. The costumes and their technical affordances have changed with new technological developments, and in this interview Maubrey explains these developments, and how he has maintained and extended his artistic focus.
49

Bowman, Doug A., and Larry F. Hodges. "Toolsets for the Development of Highly Interactive and Information-Rich Environments." International Journal of Virtual Reality 3, no. 2 (01.01.1997): 1–19. http://dx.doi.org/10.20870/ijvr.1997.3.2.2622.

Annotation:
This paper describes a unique set of programming toolsets which are designed specifically to aid in the authoring of immersive virtual environment (VE) applications. The first toolset provides a wide range of user interface options through a pen based metaphor, and benefits applications which have a high degree of interactive complexity. The second toolset allows high level control of audio annotations, and benefits applications which provide information to the user via the audio modality. The toolsets are designed to provide abstraction from the basic VE software system, to allow rapid prototyping and experimentation, and to be reusable across multiple applications. Two applications which have made extensive use of both toolsets are also presented.
50

Han, Yoon Chung, and Byeong-jun Han. "Skin Pattern Sonification as a New Timbral Expression." Leonardo Music Journal 24 (December 2014): 41–43. http://dx.doi.org/10.1162/lmj_a_00199.

Annotation:
The authors discuss two sonification projects that transform fingerprint and skin patterns into audio: (1) Digiti Sonus, an interactive installation performing fingerprint sonification and visualization and (2) skin pattern sonification, which converts pore networks into sound. The projects include novel techniques for representing user-intended fingerprint expression and skin pattern selection as audio parameters.