Selection of scientific literature on the topic "Interactive audio"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Choose a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Interactive audio."

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are available in its metadata.

Journal articles on the topic "Interactive audio"

1

Waters, Richard C. "Audio interactive tutor." Journal of the Acoustical Society of America 101, no. 5 (1997): 2428. http://dx.doi.org/10.1121/1.419492.

2

Raman, T. V., and David Gries. "Interactive Audio Documents." Journal of Visual Languages & Computing 7, no. 1 (March 1996): 97–108. http://dx.doi.org/10.1006/jvlc.1996.0006.

3

Waters, Richard C. "The Audio Interactive Tutor." Computer Assisted Language Learning 8, no. 4 (December 1995): 325–54. http://dx.doi.org/10.1080/0958822950080403.

4

Jaspers, Fons, and Zhang Ji-Ping. "Interactive Audio for Computer Assisted Learning." Journal of Educational Technology Systems 19, no. 1 (September 1990): 59–74. http://dx.doi.org/10.2190/xjgl-xp52-3teg-yg2m.

5

Church, D. M. "Interactive Audio for Foreign-Language Learning." Literary and Linguistic Computing 5, no. 2 (April 1, 1990): 191–94. http://dx.doi.org/10.1093/llc/5.2.191.

6

McPhee, Scot. "Audio-visual Poetics in Interactive Multimedia." Convergence: The International Journal of Research into New Media Technologies 3, no. 4 (December 1997): 72–91. http://dx.doi.org/10.1177/135485659700300407.

7

Pfost, Maximilian, and Jana G. Freund. "Interactive Audio Pens, Home Literacy Activities and Emergent Literacy Skills." Diskurs Kindheits- und Jugendforschung 13, no. 3-2018 (September 10, 2018): 337–49. http://dx.doi.org/10.3224/diskurs.v13i3.06.

Abstract:
Interactive audio pens – pens that contain a built-in speaker and that can be used in combination with books that are made for this purpose – are new, commercially available technological developments that have found widespread dissemination. In the current paper, we studied the availability and use of these interactive audio pens and their associations with home literacy activities and children’s emergent literacy skills in a sample of 103 German preschool children. We found that the availability of interactive audio pens at home showed small positive relations to children’s verbal short-term memory. Home literacy activities were not correlated to the availability of interactive audio pens. Results are discussed against the background of current research in multimedia storybook reading.
8

Feasley, Charles E., and Ron Payne. "Media review: Interactive audio: Available training resources." American Journal of Distance Education 2, no. 3 (January 1988): 97–100. http://dx.doi.org/10.1080/08923648809526642.

9

Samani, Hooman Aghaebrahimi, Adrian David Cheok, and Owen Noel Newton Fernando. "An affective interactive audio interface for Lovotics." Computers in Entertainment 9, no. 2 (July 2011): 1–14. http://dx.doi.org/10.1145/1998376.1998377.

10

Oldfield, Robert, Ben Shirley, and Jens Spille. "Object-based audio for interactive football broadcast." Multimedia Tools and Applications 74, no. 8 (May 1, 2013): 2717–41. http://dx.doi.org/10.1007/s11042-013-1472-2.


Dissertations on the topic "Interactive audio"

1

Smith, Adam Douglas. "WAI-KNOT (Wireless Audio Interactive Knot)." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/62360.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2001.
Includes bibliographical references (leaves 44-45).
The Sound Transformer is a new type of musical instrument. It looks a little like a saxophone, but when you sing or "kazoo" into it, astonishing transforms and mutations come out. What actually happens is that the input sound is sent via 802.11 wireless link to a net server that transforms the sound and sends it back to the instrument's speaker. In other words, instead of a resonant acoustic body, or a local computer synthesizer, this architecture allows sound to be sourced or transformed by an infinite array of online services, and channeled through a gesturally expressive handheld. Emerging infrastructures (802.11, Bluetooth, 3G and 4G, etc) seem to aim at this new class of instrument. But can such an architecture really work? In particular, given the delays incurred by decoupling the sound transformation from the instrument over a wireless network, are interactive music applications feasible? My thesis is that they are. To prove this, I built a platform called WAI-KNOT (for Wireless Audio Interactive Knot) in order to examine the latency issues as well as other design elements, and test their viability and impact on real music making. The Sound Transformer is a WAI-KNOT application.
Adam Douglas Smith.
S.M.
2

Olaleye, Olufunke I. "Symbiotic Audio Communication on Interactive Transport." Kent State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=kent1176438067.

3

Tsingos, Nicolas. "Models and Algorithms for Interactive Audio Rendering." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00629574.

Abstract:
Interactive virtual-reality systems combine visual, auditory, and haptic representations to simulate, immersively, the exploration of a three-dimensional world rendered from the viewpoint of an observer controlled in real time by the user. Historically, most work in this field has addressed the visual aspects (for example, methods for interactive display of complex 3D models or for realistic, efficient lighting simulation), and relatively little work has been devoted to the simulation of virtual sound sources, also called auralization. Yet sound simulation is certainly a key factor in producing synthetic environments, since auditory perception complements visual perception to produce a more natural interaction. In particular, spatialized sound effects, whose direction of arrival is faithfully reproduced at the listener's ears, are especially important for localizing objects, separating multiple simultaneous sound signals, and providing cues about the spatial characteristics of the environment (size, materials, etc.). Most immersive virtual-reality systems, from the most complex simulators to consumer video games, today implement sound synthesis and spatialization algorithms that improve navigation and increase the user's realism and sense of presence in the synthetic environment. Like image synthesis, of which it is the auditory counterpart, auralization, also called sound rendering, is a vast subject at the crossroads of multiple disciplines: computer science, acoustics and electroacoustics, signal processing, music, geometric computing, but also psychoacoustics and audiovisual perception.
It encompasses three main problems: synthesis and interactive control of sounds; simulation of sound-propagation effects in the environment; and, finally, perception and spatial reproduction at the listener's ears. Historically, these three problems emerged from work in architectural acoustics, musical acoustics, and psychoacoustics. However, a fundamental difference between sound rendering for virtual reality and classical acoustics lies in the multimodal interaction and in the efficiency of the algorithms required for interactive applications. These important aspects help make it a field in its own right, one of growing importance both in acoustics and in image synthesis/virtual reality.
4

Brossier, Paul M. "Automatic annotation of musical audio for interactive applications." Thesis, Queen Mary, University of London, 2006. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3809.

Abstract:
As machines become more and more portable, and part of our everyday life, it becomes apparent that developing interactive and ubiquitous systems is an important aspect of new music applications created by the research community. We are interested in developing a robust layer for the automatic annotation of audio signals, to be used in various applications, from music search engines to interactive installations, and in various contexts, from embedded devices to audio content servers. We propose adaptations of existing signal processing techniques to a real time context. Amongst these annotation techniques, we concentrate on low and mid-level tasks such as onset detection, pitch tracking, tempo extraction and note modelling. We present a framework to extract these annotations and evaluate the performances of different algorithms. The first task is to detect onsets and offsets in audio streams within short latencies. The segmentation of audio streams into temporal objects enables various manipulation and analysis of metrical structure. Evaluation of different algorithms and their adaptation to real time are described. We then tackle the problem of fundamental frequency estimation, again trying to reduce both the delay and the computational cost. Different algorithms are implemented for real time and experimented on monophonic recordings and complex signals. Spectral analysis can be used to label the temporal segments; the estimation of higher level descriptions is approached. Techniques for modelling of note objects and localisation of beats are implemented and discussed. Applications of our framework include live and interactive music installations, and more generally tools for the composers and sound engineers. Speed optimisations may bring a significant improvement to various automated tasks, such as automatic classification and recommendation systems. 
We describe the design of our software solution, for our research purposes and in view of its integration within other systems.
5

Sheets, Gregory S. "Audio coding and identification for an interactive television application." Thesis, Virginia Tech, 1996. http://scholar.lib.vt.edu/theses/available/etd-02132009-172049/.

6

Jordan, Eric Michael. "Programming models for the development of interactive audio applications." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/37764.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 59-63).
by Eric Michael Jordan.
M.S.
7

Malone, Caitlin A. "A Visit to the Priory: An Interactive Audio Tour." Digital WPI, 2016. https://digitalcommons.wpi.edu/etd-theses/389.

Abstract:
The chapter house of the Benedictine priory of Saint John Le Bas-Nueil, currently located in the Worcester Art Museum, is an impressive piece of architecture. However, visitors are currently restricted to admiring the structure and its restoration only, as there is limited information presented in the museum about the room’s original use. The purpose of this project was to produce a low-impact, narrative-driven audio experience designed to increase visitor interest in the museum in general and Benedictine life during the twelfth century in particular. The prototype produced combines elements of traditional audio tours, radio drama, and question-and-answer interaction sequences to provide a self-driven immersive experience.
8

Byers, Kenneth Charles. "Full-body interaction: perception and consciousness in interactive digital 3-dimension audio visual installations." Thesis, University of the West of Scotland, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.740180.

9

Lucas, Stephen. "Virtual Stage: Merging Virtual Reality Technologies and Interactive Audio/Video." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984124/.

Abstract:
Virtual Stage is a project to use Virtual Reality (VR) technology as an audiovisual performance interface. The depth of control, modularity of design, and user immersion aim to solve some of the representational problems in interactive audiovisual art and the control problems in digital musical instruments. Creating feedback between interaction and perception, the VR environment references the viewer's behavioral intuition developed in the real world, facilitating clarity in the understanding of artistic representation. The critical essay discusses of interactive behavior, game mechanics, interface implementations, and technical developments to express the structures and performance possibilities. This discussion uses Virtual Stage as an example with specific aesthetic and technical solutions, but addresses archetypal concerns in interactive audiovisual art. The creative documentation lists the interactive functions present in Virtual Stage as well as code reproductions of selected technical solutions. The included code excerpts document novel approaches to virtual reality implementation and acoustic physical modeling of musical instruments.
10

Roy, Deb Kumar. "NewsComm--a hand-held device for interactive access to structured audio." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/60444.


Books on the topic "Interactive audio"

1

Bhasha Research and Publication Centre. Gujaratanum adivasi sangita: An audio-visual interactive. Vadodara: Bhasha Research and Publication Centre, 2011.

2

Matthews, John. Interactive whiteboards. Ann Arbor, MI: Cherry Lake Pub., 2009.

3

Behringer, Uli. Denoiser: The audio interactive noise reduction system, Model SNR 2000. 2nd ed. Willich-Münchheide: Behringer Spezielle Studiotechnik, 1995.

4

Harrison, Mark. The use of interactive audio and speech recognition techniques in training. [U.K.]: [s.n.], 1993.

5

Gayeski, Diane M., ed. Using video: Interactive and linear designs. Englewood Cliffs, N.J.: Educational Technology Publications, 1989.

6

Raybould, Dave, ed. The game audio tutorial: A practical guide to sound and music for interactive games. Amsterdam; Boston, 2011.

7

Romiszowski, A. J. The selection and use of instructional media: For improved classroom teaching and for interactive, individualized instruction. 2nd ed. London: K. Page, 1988.

8

Interactive optical technologies in education and training: Markets and trends. Westport: Meckler, 1990.

9

La cattedra multimediale: Tecnologie didattiche per la scuola. Milano: Associazione italiana editori, Ufficio studi, 2002.

10

Wilson, Kathleen S. The Palenque design: Children's discovery learning experiences in an interactive multimedia environment. [Cambridge, Mass.]: K.S. Wilson, 1988.


Book chapters on the topic "Interactive audio"

1

Goodwin, Simon N. "Interactive Audio Codecs." In Beep to Boom, 173–81. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351005548-17.

2

van den Boogaart, C. G., and R. Lienhart. "Visual Audio: An Interactive Tool for Analyzing and Editing of Audio in the Spectrogram." In Interactive Video, 107–30. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/978-3-540-33215-2_6.

3

Gampe, Johanna. "Interactive Narration within Audio Augmented Realities." In Interactive Storytelling, 298–303. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-10643-9_34.

4

Goodwin, Simon N. "Interactive Audio Development Roles." In Beep to Boom, 35–42. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351005548-4.

5

Robinson, Ciarán. "Interactive Game Objects." In Game Audio with FMOD and Unity, 93–100. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9780429455971-11.

6

Goodwin, Simon N. "The Essence of Interactive Audio." In Beep to Boom, 1–10. New York, NY: Routledge, 2019. http://dx.doi.org/10.4324/9781351005548-1.

7

Wu, Wenjie, and Stefan Rank. "Story Immersion in a Gesture-Based Audio-Only Game." In Interactive Storytelling, 223–34. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-27036-4_21.

8

Parry, Nye, Helen Bendon, Stephen Boyd Davis, and Magnus Moar. "Locating Drama: A Demonstration of Location-Aware Audio Drama." In Interactive Storytelling, 41–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89454-4_6.

9

Gonzalez, Ruben. "Better Than MFCC Audio Classification Features." In The Era of Interactive Media, 291–301. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3501-3_24.

10

Bakker, Saskia, Elise van den Hoven, and Berry Eggen. "Exploring Interactive Systems Using Peripheral Sounds." In Haptic and Audio Interaction Design, 55–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15841-4_7.


Conference papers on the topic "Interactive audio"

1

Raman, T. V., and D. Gries. "Interactive audio documents." In the first annual ACM conference. New York, New York, USA: ACM Press, 1994. http://dx.doi.org/10.1145/191028.191045.

2

Turchet, Luca. "Interactive sonification and the IoT." In AM'19: Audio Mostly. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3356590.3356631.

3

Rajan, Rahul, Cliff Chen, and Ted Selker. "Considerate Audio MEdiating Oracle (CAMEO)." In the Designing Interactive Systems Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2317956.2317972.

4

Rowland, Jess, and Adrian Freed. "Flexible surfaces for interactive audio." In the 2012 ACM international conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2396636.2396688.

5

Wyse, Lonce. "Interactive Audio Web Development Workflow." In MM '14: 2014 ACM Multimedia Conference. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2647868.2655064.

6

Hürst, Wolfgang. "Interactive audio-visual video browsing." In the 14th annual ACM international conference. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1180639.1180781.

7

Masu, Raul, Nuno N. Correia, Stephan Jurgens, Jochen Feitsch, and Teresa Romão. "Designing interactive sonic artefacts for dance performance." In AM'20: Audio Mostly 2020. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3411109.3412297.

8

Hadjakos, Aristotelis, Heizo Schulze, André Düchting, Christian Metzger, Marc Ottensmann, Friederike Riechmann, Anna-Maria Schneider, and Michael Trappmann. "Learning Visual Programming by Creating a Walkable Interactive Installation." In the Audio Mostly 2015. New York, New York, USA: ACM Press, 2015. http://dx.doi.org/10.1145/2814895.2814914.

9

Urbanek, Michael, Florian Güldenpfennig, and Manuel T. Schrempf. "Building a Community of Audio Game Designers - Towards an Online Audio Game Editor." In DIS '18: Designing Interactive Systems Conference 2018. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3197391.3205431.

10

Thompson, Andrew, and Gyorgy Fazekas. "A Model-View-Update Framework for Interactive Web Audio Applications." In AM'19: Audio Mostly. New York, NY, USA: ACM, 2019. http://dx.doi.org/10.1145/3356590.3356623.


Reports of organizations on the topic "Interactive audio"

1

Baluk, Nadia, Natalia Basij, Larysa Buk, and Olha Vovchanska. VR/AR-Technologies – New Content of the New Media. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11074.

Abstract:
The article analyzes the peculiarities of the media content shaping and transformation in the convergent dimension of cross-media, taking into account the possibilities of augmented reality. With the help of the principles of objectivity, complexity and reliability in scientific research, a number of general scientific and special methods are used: method of analysis, synthesis, generalization, method of monitoring, observation, problem-thematic, typological and discursive methods. According to the form of information presentation, such types of media content as visual, audio, verbal and combined are defined and characterized. The most important in journalism is verbal content, it is the one that carries the main information load. The dynamic development of converged media leads to the dominance of image and video content; the likelihood of increasing the secondary content of the text increases. Given the market situation, the effective information product is a combined content that combines text with images, spreadsheets with video, animation with infographics, etc. Increasing number of new media are using applications and website platforms to interact with recipients. To proceed, the peculiarities of the new content of new media with the involvement of augmented reality are determined. Examples of successful interactive communication between recipients, the leading news agencies and commercial structures are provided. The conditions for effective use of VR / AR-technologies in the media content of new media, the involvement of viewers in changing stories with augmented reality are determined. The so-called immersive effect with the use of VR / AR-technologies involves complete immersion, immersion of the interested audience in the essence of the event being relayed. This interaction can be achieved through different types of VR video interactivity. 
One of the most important results of using VR content is the spatio-temporal and emotional immersion of viewers in the plot. The recipient turns from an external observer into an internal one; but his constant participation requires that the user preferences are taken into account. Factors such as satisfaction, positive reinforcement, empathy, and value influence the choice of VR / AR content by viewers.
2

Syvash, Kateryna. Audience Feedback as an Element of Parasocial Communication with Screen Media-Persons. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11062.

Abstract:
Parasocial communication is defined as an illusory and one-sided interaction between the viewer and the media person, which is analogous to interpersonal communication. Among the classic media, television has the greatest potential for such interaction through a combination of audio and visual series and a wide range of television content – from newscasts to talent shows. Viewers’ reaction to this product can be seen as a defining element of parasociality and directly affect the popularity of a media person and the ratings of the TV channel. In this article we will consider feedback as part of parasocial communication and describe ways to express it in times of media transformations. The psychological interaction «media person – viewer» had been the focus of research by both psychologists and media experts for over 60 years. During the study, scientists described the predictors, functions, manifestations and possible consequences of paracommunication. One of the key elements of the formed parasocial connections is the real audience reaction. Our goal is to conceptualize the concept of feedback in the paradigm of parasocial communication and describe the main types of reactions to the media person in long-term parasocial relationships. The research focuses on the ways in which the viewer’s feedback on the television media person is expressed, bypassing the issue of classifying the audience’s feedback as «positive» and «negative». For this purpose, more than 20 interdisciplinary scientific works on the issue of parasocial interaction were analyzed and their generalization was carried out. Based on pre­vious research, the types and methods of feedback in the television context are separated. With successful parasocial interaction, the viewer can react in different ways to the media person. The type of feedback will directly depend on the strength of the already established communication with the media person. 
We distinguish seven types of feedback and divide them into those that occur during or after a television show; those that are spontaneous or planned; aimed directly at the media person or third parties. We offer the following types of feedback from TV viewers: «talking to the TV»; telling about the experience of parasocial communication to others; following on social networks; likes and comments; imitation of behavior and appearance; purchase of recommended brands; fanart.