
Journal articles on the topic "Audio content analysis"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Audio content analysis".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your reference to the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Raj, Bhiksha, Paris Smaragdis, Malcolm Slaney, Chung-Hsien Wu, Liming Chen, and Hyoung-Gook Kim. "Scalable Audio-Content Analysis". EURASIP Journal on Audio, Speech, and Music Processing 2010 (2010): 1–2. http://dx.doi.org/10.1155/2010/467278.
2

Lu, Lie, and Alan Hanjalic. "Audio Keywords Discovery for Text-Like Audio Content Analysis and Retrieval". IEEE Transactions on Multimedia 10, no. 1 (January 2008): 74–85. http://dx.doi.org/10.1109/tmm.2007.911304.
3

Lie Lu, Hong-Jiang Zhang, and Hao Jiang. "Content analysis for audio classification and segmentation". IEEE Transactions on Speech and Audio Processing 10, no. 7 (October 2002): 504–16. http://dx.doi.org/10.1109/tsa.2002.804546.
4

Li, Y., and C. Dorai. "Instructional Video Content Analysis Using Audio Information". IEEE Transactions on Audio, Speech and Language Processing 14, no. 6 (November 2006): 2264–74. http://dx.doi.org/10.1109/tasl.2006.872602.
5

JBARI, ATMAN, ABDELLAH ADIB, and DRISS ABOUTAJDINE. "BLIND AUDIO SEPARATION AND CONTENT ANALYSIS IN THE TIME-SCALE DOMAIN". International Journal of Semantic Computing 01, no. 03 (September 2007): 307–18. http://dx.doi.org/10.1142/s1793351x07000184.

Annotation:
In this paper, we address the problem of Blind Audio Separation (BAS) by content evaluation of audio signals in the time-scale domain. Most of the proposed techniques rely on an independence, or at least uncorrelatedness, assumption about the source signals, exploiting mutual information or second/higher-order statistics. Here, we present a new algorithm, for instantaneous mixtures, that relies only on the sources having different time-scale signatures. Our approach builds on the advantages of the wavelet transform and proposes a new representation, Spatial Time-Scale Distributions (STSD), to characterize the energy and interference of the observed data. The BAS is achieved by joint diagonalization, without a prior orthogonality constraint, of a set of selected diagonal STSD matrices. Several criteria are proposed, in the transformed time-scale space, to assess the separated audio signal contents. We describe the logistics of the separation and the content rating, and an exemplary implementation on synthetic signals and real audio recordings shows the high efficiency of the proposed technique in restoring the audio signal contents.
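For readers who want a feel for this family of second-order separation methods, here is a minimal NumPy sketch of the classic AMUSE algorithm for a 2×2 instantaneous mixture. It diagonalizes a single lagged covariance matrix after whitening, rather than the paper's wavelet-domain STSD matrices, and all signals and parameters below are illustrative:

```python
import numpy as np

def amuse(X, lag=1):
    """Separate an instantaneous mixture X (channels x samples) by
    diagonalizing the lagged covariance after whitening (AMUSE).
    Assumes the sources have distinct autocorrelations at `lag`."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: make the zero-lag covariance the identity.
    R0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(R0)
    W = E @ np.diag(d ** -0.5) @ E.T
    Z = W @ X
    # Symmetrized lagged covariance of the whitened data.
    R1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    R1 = (R1 + R1.T) / 2
    # Its eigenvectors supply the remaining rotation.
    _, V = np.linalg.eigh(R1)
    return V.T @ Z

# Toy demo: a sine and a square wave, instantaneously mixed.
t = np.arange(8000) / 8000.0
s = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 13 * t))])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # illustrative mixing matrix
Y = amuse(A @ s, lag=50)
# Each recovered component should correlate strongly with one source
# (up to permutation, sign, and scale).
corr = np.abs(np.corrcoef(np.vstack([s, Y]))[:2, 2:])
```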
6

Magalhaes, Tairone Nunes, Felippe Brandão Barros, and Mauricio Alves Loureiro. "Iracema: a Python library for audio content analysis". Revista de Informática Teórica e Aplicada 27, no. 4 (December 23, 2020): 127–38. http://dx.doi.org/10.22456/2175-2745.107202.

Annotation:
Iracema is a Python library that aims to provide models for the extraction of meaningful information from recordings of monophonic pieces of music, for purposes of research in music performance. With this objective in mind, we propose an architecture that will provide to users an abstraction level that simplifies the manipulation of different kinds of time series, as well as the extraction of segments from them. In this paper we: (1) introduce some key concepts at the core of the proposed architecture; (2) describe the current functionalities of the package; (3) give some examples of the application programming interface; and (4) give some brief examples of audio analysis using the system.
7

Tzanetakis, George, and Perry Cook. "MARSYAS: a framework for audio analysis". Organised Sound 4, no. 3 (November 16, 2000): 169–75. http://dx.doi.org/10.1017/s1355771800003071.

Annotation:
Existing audio tools handle the increasing amount of computer audio data inadequately. The typical tape-recorder paradigm for audio interfaces is inflexible and time consuming, especially for large data sets. On the other hand, completely automatic audio analysis and annotation is impossible using current techniques. Alternative solutions are semi-automatic user interfaces that let users interact with sound in flexible ways based on content. This approach offers significant advantages over manual browsing, annotation and retrieval. Furthermore, it can be implemented using existing techniques for audio content analysis in restricted domains. This paper describes MARSYAS, a framework for experimenting, evaluating and integrating such techniques. As a test for the architecture, some recently proposed techniques have been implemented and tested. In addition, a new method for temporal segmentation based on audio texture is described. This method is combined with audio analysis techniques and used for hierarchical browsing, classification and annotation of audio files.
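The temporal-segmentation idea can be illustrated, far more crudely than in MARSYAS itself, by flagging jumps in frame-level energy. This sketch is not MARSYAS code; the frame size and threshold are arbitrary choices for the demo:

```python
import numpy as np

def segment_by_energy(x, frame=512, threshold=0.1):
    """Crude temporal segmentation: flag a boundary wherever the RMS
    energy of consecutive frames jumps by more than `threshold`.
    (A toy stand-in for MARSYAS's texture-based segmentation.)"""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.flatnonzero(np.abs(np.diff(rms)) > threshold) + 1

# Toy demo: half a second of near-silence followed by a 440 Hz tone.
sr = 8000
rng = np.random.default_rng(0)
t = np.arange(sr // 2) / sr
x = np.concatenate([0.01 * rng.standard_normal(sr // 2),
                    0.5 * np.sin(2 * np.pi * 440 * t)])
boundaries = segment_by_energy(x)
```

The detected boundary frames cluster around the silence-to-tone transition; a real system would use richer texture features and smoothing rather than a single threshold.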
8

Barker, Alexander B., Kathy Whittamore, John Britton, Rachael L. Murray, and Jo Cranwell. "A content analysis of alcohol content in UK television". Journal of Public Health 41, no. 3 (October 14, 2018): 462–69. http://dx.doi.org/10.1093/pubmed/fdy142.

Annotation:
Background: Exposure to audio-visual alcohol content in media is associated with subsequent alcohol use in young people, but the extent of exposure contained in UK free-to-air prime-time television has not been explored since 2010. We report an analysis of alcohol content in a sample of UK free-to-air prime-time television broadcasts in 2015 and compare this with a similar analysis from 2010. Methods: Content analysis of all programmes and advertisement/trailer breaks broadcast on the five national UK free-to-air channels in the UK between 6 and 10 pm during three separate weeks in September, October and November 2015. Results: Alcohol content occurred in over 50% of all programmes broadcast and almost 50% of all advert/trailer periods between programmes. The majority of alcohol content occurred before the 9 pm watershed. Branding occurred in 3% of coded intervals and involved 122 brands, though three brands (Heineken, Corona and Fosters) accounted for almost half of all brand appearances. Conclusion: Audio-visual alcohol content, including branding, is prevalent in UK television, and is therefore a potential driver of alcohol use in young people. These findings are virtually unchanged from our earlier analysis of programme content from 2010.
9

Martyniuk, Tetiana, Maksym Mykytiuk, and Mykola Zaitsev. "FEATURES OF ANALYSIS OF MULTICHANNEL AUDIO SIGNALS". ГРААЛЬ НАУКИ, no. 2-3 (April 9, 2021): 302–5. http://dx.doi.org/10.36074/grail-of-science.02.04.2021.061.

Annotation:
The rapid growth of audio content has led to the need for tools for the analysis and quality control of audio signals using software and hardware modules. The fastest-growing area is software and programming languages. The Python programming language today has the most operational and visual capabilities for working with sound. When developing programs for computational signal analysis, it provides an optimal balance of high- and low-level programming functions. Compared to Matlab or other similar solutions, Python is free and allows you to create standalone applications without the need for large, permanently installed files and a virtual environment.
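As a minimal illustration of the kind of Python-based signal analysis the authors describe, here are three common frame-level descriptors written with plain NumPy; the feature set and test signals are illustrative, not taken from the paper:

```python
import numpy as np

def frame_features(x, sr):
    """Three basic audio-content descriptors for one frame:
    RMS energy, zero-crossing rate, and spectral centroid (Hz)."""
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    centroid = (freqs * spectrum).sum() / max(spectrum.sum(), 1e-12)
    return rms, zcr, centroid

# Two test tones: the higher one should show a higher zero-crossing
# rate and a higher spectral centroid.
sr = 8000
t = np.arange(sr) / sr
low = np.sin(2 * np.pi * 100 * t)    # 100 Hz tone
high = np.sin(2 * np.pi * 2000 * t)  # 2 kHz tone
```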
10

Yao Wang, Zhu Liu, and Jin-Cheng Huang. "Multimedia content analysis-using both audio and visual clues". IEEE Signal Processing Magazine 17, no. 6 (2000): 12–36. http://dx.doi.org/10.1109/79.888862.
11

Dong, Yuan, Le Zi Wang, Ji Wei Zhang, Hong Liang Bai, and Jian Zhao. "Macro Segmentation and Content Analysis of TV Broadcast Stream". Applied Mechanics and Materials 284-287 (January 2013): 3194–98. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3194.

Annotation:
This study addresses a non-supervised approach to extract TV programs via repetition-based detection of the Inter-Programs (IPs), and an audio-based segmentation and classification algorithm to analyze the massive raw TV stream. Acoustic and visual information are both adopted for IP detection so as to avoid missing true positives. A novel audio fingerprint scheme and a shot-based indexing algorithm are introduced to guarantee efficient and superior detection performance. After the TV programs are further segmented into clips, Gaussian Mixture Models (GMMs) are used to classify the clips into three types, namely, pure speech, non-pure speech, and non-speech. Experiments on a test dataset composed of more than 500 hours of content-unknown TV streams show that the F-measures of program extraction and content analysis reach 0.986 and 0.887 respectively. The experiments also demonstrate that the proposed algorithm for detecting repeated IPs outperforms the state-of-the-art approach.
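The GMM modelling step can be sketched with a minimal one-dimensional, two-component EM fit. This is a generic textbook illustration, not the paper's implementation, and the toy data stands in for frame-level audio features:

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns (weights, means, stds). A minimal illustration of the
    GMM modelling used for clip classification."""
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)
    sd = np.array([x.std(), x.std()]) + 1e-6
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) \
               / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sd

# Toy demo: two well-separated clusters of feature values.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 0.5, 400), rng.normal(3.0, 0.5, 600)])
w, mu, sd = fit_gmm_1d(x)
```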
12

Kostryukova, Elena A. "Genre Specification of Literary-Dramatic Audio Content". Vestnik NSU. Series: History and Philology 19, no. 6 (2020): 141–58. http://dx.doi.org/10.25205/1818-7919-2020-19-6-141-158.

Annotation:
This article sets out to systemize and modernize the study of theoretical knowledge regarding the analysis of Russian literary and drama shows. The author examines the conceptual basis drawn from a segment of broadcasting. The conceptual basis is formed from a number of proposed theories dating from the origin of radio broadcasting to contemporary times, while taking into consideration the reality of a modern media market. The foundation of the study is an analysis of 49 literary and drama cycles broadcast on "Detskoe radio" ("Children's radio"), as it relates to their genre orientation. This radio station is the only broadcaster in Russia whose sole content is literary and drama shows. Despite the regularity of such shows not only on the air, but also on the Internet, questions regarding genre stratification remain open. The author concludes that the most prominent researchers in the field only rarely touch on concepts that have come into wide use during the development of literary and drama broadcasting, such as "radio drama". The author thus sets out to present a more definite and explicit genre system of literary theatrical shows, at least in comparison with previous attempts.
13

Vafeiadis, Anastasios, Konstantinos Votis, Dimitrios Giakoumis, Dimitrios Tzovaras, Liming Chen, and Raouf Hamzaoui. "Audio content analysis for unobtrusive event detection in smart homes". Engineering Applications of Artificial Intelligence 89 (March 2020): 103226. http://dx.doi.org/10.1016/j.engappai.2019.08.020.
14

Voitko, V. V., S. V. Bevz, S. M. Burbelo, and P. V. Stavytskyi. "ANALYSIS OF MODERN SYSTEM OF CREATING AND PROCESSING AUDIO CONTENT". Scientific notes of Taurida National V.I. Vernadsky University. Series: Technical Sciences 1, no. 1 (2020): 55–59. http://dx.doi.org/10.32838/2663-5941/2020.1-1/10.
15

Zhang, T., and C. C. Jay Kuo. "Audio content analysis for online audiovisual data segmentation and classification". IEEE Transactions on Speech and Audio Processing 9, no. 4 (May 2001): 441–57. http://dx.doi.org/10.1109/89.917689.
16

Becerra Martinez, Helard, Andrew Hines, and Mylène C. Q. Farias. "Perceptual Quality of Audio-Visual Content with Common Video and Audio Degradations". Applied Sciences 11, no. 13 (June 23, 2021): 5813. http://dx.doi.org/10.3390/app11135813.

Annotation:
Audio-visual quality assessment remains a complex research field. A great effort is being made to understand how the visual and auditory domains are integrated and processed by humans. In this work, we analyzed and compared the results of three psychophysical experiments that collected quality and content scores given by a pool of subjects. The experiments include diverse audio-visual material, e.g., Sports, TV Commercials, Interviews, Music, Documentaries and Cartoons, impaired with several visual (bitrate compression, packet-loss, and frame-freezing) and auditory (background noise, echo, clip, chop) distortions. Each experiment explores a particular domain. In Experiment 1, the video component was degraded with visual artifacts, while the audio component did not suffer any type of degradation. In Experiment 2, the audio component was degraded while the video component remained untouched. Finally, in Experiment 3 both audio and video components were degraded. As expected, the results confirmed a dominance of the visual component in the overall audio-visual quality. However, a detailed analysis showed that, for certain types of audio distortions, the audio component played a more important role in the construction of the overall perceived quality.
17

Lu, Lie, and Hong-Jiang Zhang. "Unsupervised speaker segmentation and tracking in real-time audio content analysis". Multimedia Systems 10, no. 4 (April 2005): 332–43. http://dx.doi.org/10.1007/s00530-004-0160-5.
18

Raats, Tim, and Catalina Iordache. "From Nordic Noir to Belgian Bright?" Canned TV Going Global 9, no. 17 (August 31, 2020): 79. http://dx.doi.org/10.18146/view.243.

Annotation:
Shifts in audio-visual production, distribution and consumption have increased pressure on broadcasters as main financiers of domestic content in Europe. However, within the context of internationalisation and digitalisation, there are also new opportunities for the export of European content. By taking a close look at the evolution and increasing popularity of Flemish TV drama, this article identifies key explanatory factors for the export of content produced in a small media market. The analysis also discusses the extent to which the rise in exports may contribute to the increased sustainability of a small and fragile, yet vibrant audio-visual industry.
19

Lesnov, Roman Olegovich. "Content-Rich Versus Content-Deficient Video-Based Visuals in L2 Academic Listening Tests". International Journal of Computer-Assisted Language Learning and Teaching 8, no. 1 (January 2018): 15–30. http://dx.doi.org/10.4018/ijcallt.2018010102.

Annotation:
This article compares second language test-takers' performance on an academic listening test in an audio-only mode versus an audio-video mode. A new method of classifying video-based visuals was developed and piloted, which used L2 expert opinions to place the video on a continuum from being content-deficient (not helpful for answering comprehension items) to content-rich (very helpful for answering comprehension items). The video for one testlet contained only the speaker's non-verbal cues and was found to be content-deficient. The other video contained non-verbal cues overlapping with PowerPoint text and was deemed content-rich. Seventy-three ESL learners participated in the study. The video type classification method was shown to be reliable and practical. The results of the Rasch analysis showed no significant impact of condition, either the content-deficient or the content-rich, either at the testlet level or at the item level. Possible reasons and implications of these findings are discussed.
20

Poria, Soujanya, Erik Cambria, Newton Howard, Guang-Bin Huang, and Amir Hussain. "Fusing audio, visual and textual clues for sentiment analysis from multimodal content". Neurocomputing 174 (January 2016): 50–59. http://dx.doi.org/10.1016/j.neucom.2015.01.095.
21

Jun-qing, Yu, Zhou Dong-ru, Jin Ye, and Liu Hua-yong. "Content-based hierarchical analysis of news video using audio and visual information". Wuhan University Journal of Natural Sciences 6, no. 4 (December 2001): 779–83. http://dx.doi.org/10.1007/bf02850898.
22

Bhagawath, Rohith, Muralidhar M. Kulkarni, John Britton, Jo Cranwell, Monika Arora, Gaurang P. Nazar, Somya Mullapudi, and Veena G. Kamath. "Quantifying audio visual alcohol imagery in popular Indian films: a content analysis". BMJ Open 11, no. 5 (May 2021): e040630. http://dx.doi.org/10.1136/bmjopen-2020-040630.

Annotation:
Objectives: Though exposure to alcohol imagery in films is a significant determinant of uptake and severity of alcohol consumption among young people, there is poor evidence regarding the content of alcohol imagery in films in low-income and middle-income countries. We have measured alcohol imagery content and branding in popular Indian films, in total and in relation to language and age rating. Design: In this observational study we measured alcohol imagery semiquantitatively using 5-minute interval coding. We coded each interval according to whether it contained alcohol imagery or brand appearances. Setting: India. Participants: None. Content analysis of a total of 30 national box office hit films over a period of 3 years from 2015 to 2017. Primary and secondary outcome measures: Alcohol imagery in Indian films and its distribution in relation to age rating and language. Results: The 30 films included 22 (73%) Hindi films and 8 (27%) in regional languages. Seven (23%) were rated suitable for viewing by all ages (U), and 23 (77%) rated as suitable for viewing by children subject to parental guidance for those aged under 12 (UA). Any alcohol imagery was seen in 97% of the films, in 195 of a total of 923 5-minute intervals, and actual alcohol use in 25 (83%) films, in 90 (10%) intervals. The occurrence of these and other categories of alcohol imagery was similar in U-rated and UA-rated films, and in Hindi and local language films. Episodes of alcohol branding occurred in 10 intervals in five films. Conclusion: Almost all films popular in India contain alcohol imagery, irrespective of age rating and language. Measures need to be undertaken to limit alcohol imagery in Indian films to protect the health of young people, and to monitor alcohol imagery on other social media platforms in future.
23

Maharso, Reno Dalu, and Irwansyah Irwansyah. "DARI MATA TURUN KE TELINGA: PERAN CUES DALAM NAVIGASI KONTEN APLIKASI DIGITAL AUDIO STREAMING" [From the eyes down to the ears: the role of cues in navigating digital audio streaming app content]. ANDHARUPA: Jurnal Desain Komunikasi Visual & Multimedia 5, no. 02 (September 12, 2019): 169–86. http://dx.doi.org/10.33633/andharupa.v5i2.2484.

Annotation:
Smartphone users have become accustomed to using mobile applications – apps – on their phones, either pre-installed or obtained through the app market. Users look for any content they desire in an app by relying on the visual aspects presented by the app itself. These visual aspects are also relied on when users need to navigate themselves through audio content made available in digital audio streaming apps. In the framework of Social Information Processing Theory (SIPT), the verbal cues are artists' names, song titles and playlists' names, and the non-verbal cues are the artworks that accompany audio content. Taking Spotify as a sample case, the writers want to show that visuals are the main aspect used to navigate through content in a digital audio streaming app. The writers use qualitative content analysis to study elements of SIPT cues in the navigability dimension of the MAIN model, supported by a focus group discussion with Spotify users to obtain data about their navigating experience through the cues available in the app. The analysis was conducted qualitative-descriptively to show which aspects of cues are dominant. Results show 'search' as the most used feature when navigating through Spotify, and the song title is the most popular way to search for songs. Keywords: cues, digital audio streaming, mobile app, navigation, visual
24

Renza, Diego, Jaisson Vargas, and Dora M. Ballesteros. "Robust Speech Hashing for Digital Audio Forensics". Applied Sciences 10, no. 1 (December 28, 2019): 249. http://dx.doi.org/10.3390/app10010249.

Annotation:
The verification of the integrity and authenticity of multimedia content is an essential task in the forensic field, in order to make digital evidence admissible. The main objective is to establish whether the multimedia content has been manipulated with significant changes to its content, such as the removal of noise (e.g., a gunshot) that could clarify the facts of a crime. In this project we propose a method to generate a summary value for audio recordings, known as a hash. Our method is robust, which means that if the audio has been modified slightly (without changing its significant content) with perceptual manipulations such as MPEG-4 AAC, the hash value of the new audio is very similar to that of the original audio; on the contrary, if the audio is altered and its content changes, for example with a low-pass filter, the new hash value moves away from the original value. The method starts with the application of MFCC (Mel-frequency cepstrum coefficients) and dimensionality reduction through principal component analysis (PCA). The reduced data is encrypted using as inputs two values from a particular binarization system based on the Collatz conjecture. Finally, a robust 96-bit code is obtained, which varies little when perceptual modifications are made to the signal, such as compression or amplitude modification. According to experimental tests, the BER (bit error rate) between the hash value of the original audio recording and the manipulated audio recording is low for perceptual manipulations, i.e., 0% for FLAC and re-quantization, 1% on average for volume (−6 dB gain), less than 5% on average for MPEG-4 and resampling (using the FIR anti-aliasing filter); but more than 25% for non-perceptual manipulations such as low-pass filtering (3 kHz, fifth order), additive noise, cutting and copy-move.
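The general idea of a robust binary audio hash can be illustrated with a simpler classic scheme that keeps only the signs of adjacent band-energy differences (in the spirit of the Haitsma-Kalker fingerprint, not the MFCC/PCA/Collatz method of this paper; frame size and band count below are arbitrary):

```python
import numpy as np

def energy_sign_hash(x, frame=1024, bands=17):
    """Binary audio hash: split each frame's power spectrum into bands
    and keep the sign of the energy difference between adjacent bands.
    A pure gain change scales all band energies equally, so the bits
    survive; content changes flip many of them."""
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame) * np.hanning(frame)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    edges = np.linspace(0, spec.shape[1], bands + 1).astype(int)
    e = np.array([spec[:, a:b].sum(axis=1)
                  for a, b in zip(edges[:-1], edges[1:])]).T
    return (np.diff(e, axis=1) > 0).astype(np.uint8)  # (frames, bands-1)

def ber(h1, h2):
    """Bit error rate between two hashes of equal shape."""
    return np.mean(h1 != h2)

rng = np.random.default_rng(1)
x = rng.standard_normal(16384)
h = energy_sign_hash(x)
# A pure gain change (perceptually minor) leaves the hash untouched.
assert ber(h, energy_sign_hash(0.5 * x)) == 0.0
```

Heavier manipulations, such as a strong low-pass filter, reshape the spectrum and drive the BER up, which is exactly the perceptual/non-perceptual split the paper measures.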
25

Juyal, Chirag, and R. H. Goudar. "Architecture for Playing Songs using Audio Content Analysis according to First Chosen Song". International Journal of Computer Applications 53, no. 16 (September 25, 2012): 18–22. http://dx.doi.org/10.5120/8506-2313.
26

Ji, Xiang, Junwei Han, Xi Jiang, Xintao Hu, Lei Guo, Jungong Han, Ling Shao, and Tianming Liu. "Analysis of music/speech via integration of audio content and functional brain response". Information Sciences 297 (March 2015): 271–82. http://dx.doi.org/10.1016/j.ins.2014.11.020.
27

Zahid, Saadia, Fawad Hussain, Muhammad Rashid, Muhammad Haroon Yousaf, and Hafiz Adnan Habib. "Optimized Audio Classification and Segmentation Algorithm by Using Ensemble Methods". Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/209814.

Annotation:
Audio segmentation is a basis for multimedia content analysis, which is one of the most important and widely used applications today. An optimized audio classification and segmentation algorithm is presented in this paper that segments a superimposed audio stream on the basis of its content into four main audio types: pure speech, music, environment sound, and silence. The proposed algorithm preserves important audio content and reduces the misclassification rate without using a large amount of training data; it handles noise and is suitable for real-time applications. Noise in an audio stream is segmented out as environment sound. A hybrid classification approach is used: bagged support vector machines (SVMs) with artificial neural networks (ANNs). The audio stream is classified first into speech and non-speech segments by using bagged support vector machines; the non-speech segment is further classified into music and environment sound by using artificial neural networks; and lastly, the speech segment is classified into silence and pure-speech segments on the basis of a rule-based classifier. Minimal data is used for training the classifiers; ensemble methods are used for minimizing the misclassification rate, and approximately 98% accurate segments are obtained. The result is a fast and efficient algorithm that can be used with real-time multimedia applications.
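The hierarchical cascade described in the abstract can be sketched as a chain of binary decisions. In this toy version, hand-set feature thresholds stand in for the bagged SVMs and ANNs, so the labels and thresholds are illustrative only:

```python
import numpy as np

def classify(x, silence_rms=0.02, tonal_zcr=0.12):
    """Toy decision cascade in the spirit of the paper's hierarchy:
    first split off silence by energy, then separate tonal
    (music-like) from noisy content by zero-crossing rate.
    Both thresholds are illustrative, not trained."""
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    if rms < silence_rms:
        return "silence"
    if zcr < tonal_zcr:          # steady tones cross zero rarely
        return "music-like"
    return "speech-or-noise-like"

# Three one-second test signals at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)                   # steady tone
hiss = 0.5 * np.random.default_rng(2).standard_normal(sr)  # broadband noise
quiet = 0.001 * np.ones(sr)                                # near-silence
```

A real system would, as the paper does, replace each threshold rule with a trained classifier and add ensemble voting to reduce misclassification.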
28

Kotsakis, Rigas, Maria Matsiola, George Kalliris, and Charalampos Dimoulas. "Investigation of Spoken-Language Detection and Classification in Broadcasted Audio Content". Information 11, no. 4 (April 15, 2020): 211. http://dx.doi.org/10.3390/info11040211.

Annotation:
The current paper focuses on the investigation of spoken-language classification in audio broadcasting content. The approach reflects a real-world scenario, encountered in modern media/monitoring organizations, where semi-automated indexing/documentation is deployed, which could be facilitated by the proposed language detection preprocessing. Multilingual audio recordings of specific radio streams are formed into a small dataset, which is used for the adaptive classification experiments, without seeking, at this step, a generic language recognition model. Specifically, hierarchical discrimination schemes are followed to separate voice signals before classifying the spoken languages. Supervised and unsupervised machine learning is utilized at various windowing configurations to test the validity of our hypothesis. Besides the analysis of the achieved recognition scores (partial and overall), late integration models are proposed for semi-automatic annotation of new audio recordings. Hence, data augmentation mechanisms are offered, aiming at gradually formulating a Generic Audio Language Classification Repository. This database constitutes a program-adaptive collection that, besides the self-indexing metadata mechanisms, could facilitate generic language classification models in the future, through state-of-the-art techniques like deep learning. This approach matches the investigatory inception of the project, which seeks indicators that could be applied in a second step with a larger dataset and/or an already pre-trained model, with the purpose of delivering overall results.
29

SOLEYMANI, MOHAMMAD, GUILLAUME CHANEL, JOEP J. M. KIERKELS, and THIERRY PUN. "AFFECTIVE CHARACTERIZATION OF MOVIE SCENES BASED ON CONTENT ANALYSIS AND PHYSIOLOGICAL CHANGES". International Journal of Semantic Computing 03, no. 02 (June 2009): 235–54. http://dx.doi.org/10.1142/s1793351x09000744.

Annotation:
In this paper, we propose an approach for affective characterization of movie scenes based on the emotions that are actually felt by spectators. Such a representation can be used to characterize the emotional content of video clips in application areas such as affective video indexing and retrieval, and neuromarketing studies. A dataset of 64 different scenes from eight movies was shown to eight participants. While watching these scenes, their physiological responses were recorded. The participants were asked to self-assess their felt emotional arousal and valence for each scene. In addition, content-based audio and video features were extracted from the movie scenes in order to characterize each scene. Degrees of arousal and valence were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based features. We showed that a significant correlation exists between the valence-arousal provided by the spectators' self-assessments and the affective grades obtained automatically from either physiological responses or audio-video features. By means of an analysis of variance (ANOVA), the variation of different participants' self-assessments, and of different gender groups' self-assessments, for both valence and arousal was shown to be significant (p-values lower than 0.005). These affective characterization results demonstrate the ability of using multimedia features and physiological responses to predict the expected affect of the user in response to emotional video content.
30

Kusumasondjaja, Sony. „Exploring the role of visual aesthetics and presentation modality in luxury fashion brand communication on Instagram“. Journal of Fashion Marketing and Management: An International Journal 24, Nr. 1 (09.09.2019): 15–31. http://dx.doi.org/10.1108/jfmm-02-2019-0019.

Annotation:
Purpose: The purpose of this paper is to examine the strategic importance of visual aesthetics and presentation modality for consumer responses to luxury fashion brand content posted on Instagram. Design/methodology/approach: A content analysis of 40,679 posts on the official Instagram accounts of 15 global luxury brands was conducted. Findings: Brand posts using expressive aesthetics received more likes and comments on Instagram than those using classical aesthetics. Brand video content received more likes and comments than static content. There was also a significant interaction between visual aesthetics and presentation modality in generating likes and comments: content adopting expressive aesthetics generated more responses in the audio-visual modality, while content using classical aesthetics produced more responses in a visual-only format. Practical implications: As visual aesthetics and modality produced different responses to Instagram posts, luxury marketers should consider using the appropriate combination when creating brand posts on Instagram. Originality/value: This is one of the few studies examining the effectiveness of visual aesthetics and presentation modality in Instagram advertising, especially in a luxury fashion brand context.
31

Barker, Alexander B., John Britton, Emily Thomson, Abby Hunter, Magdalena Opazo Breton and Rachael L. Murray. "A content analysis of tobacco and alcohol audio-visual content in a sample of UK reality TV programmes". Journal of Public Health 42, No. 3 (17.06.2019): 561–69. http://dx.doi.org/10.1093/pubmed/fdz043.

Annotation:
Background: Exposure to tobacco and alcohol content in audio-visual media is a risk factor for smoking and alcohol use in young people. We report an analysis of tobacco and alcohol content, and estimates of population exposure to this content, in a sample of reality television programmes broadcast in the UK. Methods: We used 1-minute interval coding to quantify tobacco and alcohol content in all episodes of five reality TV programmes aired between January and August 2018 (Celebrity Big Brother; Made in Chelsea; The Only Way is Essex; Geordie Shore and Love Island), and estimated population exposure using viewing data and UK population estimates. Results: We coded 5219 intervals from 112 episodes. Tobacco content appeared in 110 (2%) intervals in 20 (18%) episodes, and alcohol in 2212 (42%) intervals and in all episodes. The programmes delivered approximately 214 million tobacco gross impressions to the UK population, including 47.37 million to children; for alcohol, the figures were 4.9 billion and 580 million respectively. Conclusion: Tobacco, and especially alcohol, content is common in reality TV. The popularity of these programmes with young people, and the consequent exposure to tobacco and alcohol imagery, represents a potentially major driver of smoking and alcohol consumption.
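The gross-impressions arithmetic behind such exposure estimates is simple: sum the audience of every episode that contained the content. This sketch uses invented audience figures, not the paper's viewing data:

```python
# Invented audience figures (not the paper's data): gross impressions
# sum the audience of every episode containing the content of interest.
episodes = [
    # (episode, total_viewers, child_viewers, contains_tobacco)
    ("ep1", 2_100_000, 400_000, True),
    ("ep2", 1_800_000, 350_000, False),
    ("ep3", 2_400_000, 500_000, True),
]

tobacco_impressions = sum(total for _, total, _, tob in episodes if tob)
child_impressions = sum(child for _, _, child, tob in episodes if tob)
print(tobacco_impressions, child_impressions)
```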
32

Kischinhevsky, Marcelo, Itala Maduell Vieira, João Guilherme Bastos dos Santos, Viktor Chagas, Miguel de Andrade Freitas and Alessandra Aldé. "WhatsApp audios and the remediation of radio: Disinformation in Brazilian 2018 presidential election". Radio Journal: International Studies in Broadcast & Audio Media 18, No. 2 (01.10.2020): 139–58. http://dx.doi.org/10.1386/rjao_00021_1.

Annotation:
This article presents the results of an investigation into the role of WhatsApp audio messages in the 2018 Brazilian presidential elections, proposing that instant voice messaging borrows elements from radio language. We started from a broader research project, conducted by the Brazilian National Institute of Science and Technology in Digital Democracy (INCT.DD, in its Portuguese acronym), which identified a network composed of 220 WhatsApp groups – all of them with open-entry links – supporting six different candidates. Those groups brought together thousands of anonymized profiles linked through connections to similar groups, configuring an extensive network. More than 1 million messages, including 98,000 audio messages, were gathered and downloaded during the 2018 Brazilian electoral period (from June to October). We focused on the eighteen most widely circulated audio messages (totalling 3622 appearances) among those shared at least 100 times, which were extracted and analysed. The use of radio content analysis techniques produced strong evidence that audio messages remediate radiophonic elements such as intimacy and colloquial language to accelerate disinformation campaigns.
33

Gorbatkova, Olga, Marina Puylova and Susanna Nalesnaya. "Phenomenon of impact of audio-visual media texts with violent content: socio-pedagogical discourse". E3S Web of Conferences 273 (2021): 10026. http://dx.doi.org/10.1051/e3sconf/202127310026.

Annotation:
In this article, the authors carry out a hermeneutic analysis of the problem of violence among adolescents in audio-visual media texts and of the peculiarities of the impact of this violent content. The material of the study comprises movies, TV programs and electronic versions of media texts reflecting the violent segment of the adolescent environment. The main method is hermeneutic analysis of discourse, based on the methodology created by A. Silverblatt and U. Eco. A hermeneutic analysis of audio-visual media texts with violent content showed that:
- violence in different media texts differs in how it is expressed, but is similar in its social issues, its methods and means of expressiveness, and the tragic component of its narrative and imagery;
- the worldview of the authors of audio-visual media texts is reduced to recording different types of violence;
- a specific feature of the media products is their connection to real-life situations;
- the value dominants of the main characters, the adolescent aggressors, are aggression, cruelty, etc.;
- the reasons for committing violent actions include conflict situations within a group or with people of different ages and statuses, unrequited love, the desire to gain authority, etc.
34

Mertens, Robert, Po-Sen Huang, Luke Gottlieb, Gerald Friedland, Ajay Divakaran and Mark Hasegawa-Johnson. "On the Applicability of Speaker Diarization to Audio Indexing of Non-Speech and Mixed Non-Speech/Speech Video Soundtracks". International Journal of Multimedia Data Engineering and Management 3, No. 3 (July 2012): 1–19. http://dx.doi.org/10.4018/jmdem.2012070101.

Annotation:
A video's soundtrack is usually highly correlated with its content. Hence, audio-based techniques have recently emerged as a means for video concept detection complementary to visual analysis. Most state-of-the-art approaches rely on manual definition of predefined sound concepts such as "engine sounds" or "outdoor/indoor sounds". These approaches come with three major drawbacks: manual definitions do not scale, as they are highly domain-dependent; manual definitions are highly subjective with respect to annotators; and a large part of the audio content is omitted, since the predefined concepts are usually found only in a fraction of the soundtrack. This paper explores how unsupervised audio segmentation systems like speaker diarization can be adapted to automatically identify low-level sound concepts similar to annotator-defined concepts, and how these concepts can be used for audio indexing. Speaker diarization systems are designed to answer the question "who spoke when?" by finding segments in an audio stream that exhibit similar properties in feature space, i.e., sound similar. Using a diarization system, all the content of an audio file is analyzed and similar sounds are clustered. This article provides an in-depth analysis of the statistical properties of similar acoustic segments identified by the diarization system in a predefined document set, and of the theoretical fitness of this approach to discern one document class from another. It also discusses how diarization can be tuned in order to better reflect the acoustic properties of general sounds as opposed to speech, and introduces a proof-of-concept system for multimedia event classification working with diarization-based indexing.
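The diarization-style idea of grouping acoustically similar segments without predefined concepts can be illustrated with a toy clustering example. The features below are synthetic stand-ins for per-segment acoustic features (e.g. averaged MFCCs), and the minimal k-means here is a simplification, not a full diarization system:

```python
import numpy as np

# Two synthetic "sound types": well-separated clouds of 4-D feature
# vectors standing in for per-segment acoustic features.
rng = np.random.default_rng(1)
seg_a = rng.normal(loc=0.0, scale=0.3, size=(20, 4))
seg_b = rng.normal(loc=3.0, scale=0.3, size=(20, 4))
segments = np.vstack([seg_a, seg_b])

def kmeans(x, k, iters=20):
    # Minimal k-means: initialize centers from data points, then
    # alternate assignment and center updates.
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = ((x[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels

labels = kmeans(segments, k=2)
# Segments of the same sound type should share one cluster label.
print(labels)
```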
35

Mikidenko, Natalia, and Svetlana Storozheva. "Audiobooks: Reading Practices and Educational Technologies". SHS Web of Conferences 97 (2021): 01016. http://dx.doi.org/10.1051/shsconf/20219701016.

Annotation:
Digital technologies have made it possible to create and replicate educational content in various forms: text, video and audio. The post-PC generation actively uses audio content, and there is a growing demand for audiobooks as a format for educational resources. The purpose of the article is to identify and describe educational technologies and reading practices related to audiobooks. The novelty of the research lies in its analysis of students' audio reading practices. Methods: The first stage comprises a desk study of audio reading as an educational technology, its capabilities and limitations, the representation of audiobooks in universities' electronic libraries, and the use of audiobooks, related educational technologies and practices of working with audiobooks in the educational process. The second stage comprises an empirical study of students' practices of accessing and using electronic libraries in educational (1) and leisure (2) activities, and of readers' preferences regarding reading format (traditional or audio reading) (3). Research results: With the development of digital technologies, electronic libraries have become a popular educational resource and are extending their audiobook holdings. The possibilities and limitations of audio reading remain debatable. Audio reading as a resource-saving technology (easier on the eyes, time optimization) has a number of advantages, and at the same time requires the development of listening skills and critical perception of audio texts.
36

Paneda, Xabiel G., David Melendi, Roberto Garcia, Manuel Vilas and Victor Garcia. "A Methodology for Performance, Content Analysis, and Configuration of Audio/Video-on-Demand Services". International Journal of Business Data Communications and Networking 3, No. 4 (October 2007): 17–46. http://dx.doi.org/10.4018/jbdcn.2007100102.

37

Hanjalic, Alan, Reginald L. Lagendijk and Jan Biemond. "Recent Advances in Video Content Analysis: From Visual Features to Semantic Video Segments". International Journal of Image and Graphics 01, No. 01 (January 2001): 63–81. http://dx.doi.org/10.1142/s0219467801000062.

Annotation:
This paper addresses the problem of automatically partitioning a video into semantic segments using visual low-level features only. Semantic segments may be understood as the content building blocks of a video with a clear sequential content structure. Examples are reports in a news program, episodes in a movie, scenes of a situation comedy or topic segments of a documentary. In some video genres, such as news programs or documentaries, the use of different media (visual, audio, speech, text) may be beneficial, or even unavoidable, for reliably detecting the boundaries between semantic segments. In many other genres, however, the pay-off from using different media for high-level segmentation is not great. On the one hand, relating the audio, speech or text to the semantic temporal structure of video content is generally very difficult, especially in "acting" video genres like movies and situation comedies. On the other hand, the information contained in the visual stream of these video genres often seems to provide the major clue about the position of semantic segment boundaries. Partitioning a video into semantic segments can be performed by measuring the coherence of the content along neighboring video shots of a sequence. The segment boundaries are then found at places (e.g., shot boundaries) where the values of content coherence are sufficiently low. On the basis of two state-of-the-art techniques for content coherence modeling, we illustrate in this paper the current possibilities for detecting the boundaries of semantic segments using visual low-level features only.
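The coherence-thresholding idea described above reduces to a simple rule: declare a segment boundary wherever the coherence value falls below a threshold. A minimal sketch with invented coherence values:

```python
import numpy as np

# Invented coherence values along a sequence of shot boundaries; low
# coherence marks the start of a new semantic segment.
coherence = np.array([0.9, 0.85, 0.8, 0.2, 0.88, 0.9, 0.15, 0.92])
threshold = 0.5

boundaries = [i for i, c in enumerate(coherence) if c < threshold]
print(boundaries)  # indices where coherence drops below the threshold
```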
38

Barker, Alexander B., John Britton, Emily Thomson, Abby Hunter, Magdalena Opazo Breton and Rachael L. Murray. "Corrigendum: A content analysis of tobacco and alcohol audio-visual content in a sample of UK reality TV programmes". Journal of Public Health 41, No. 4 (23.07.2019): 871. http://dx.doi.org/10.1093/pubmed/fdz088.

39

Cañas-Bajo, Jose, and Johanna M. Silvennoinen. "Cross-Cultural Factors in Experiencing Online Video Contents in Product Marketing". International Journal of Art, Culture and Design Technologies 6, No. 1 (January 2017): 40–56. http://dx.doi.org/10.4018/ijacdt.2017010103.

Annotation:
Although online shops are convenient tools for buying and selling products, they do not offer multisensory experiences as rich as those of physical retailing. Audio-visual content could provide dynamic multisensory information and offer more engaging experiences. To be successful, however, audio-visual content needs to be adjusted to the cultural characteristics of its users. This manuscript presents a study in which Spanish and Finnish participants interacted with audio-visual product content in the form of brand design videos. Through content analysis of participants' verbalizations, the authors identified categories and subcategories that define the representation of the video elements and their relative weight depending on the cultural background of the viewer. Although the results indicate elements that affect viewers of the two countries in common, the groups differ in the relative weight given to global aesthetic features. The results of this study can be used in designing audio-visual representations of products for online shops that take into account the cultural factors affecting design practice.
40

Mohammed, Duraid, Khamis A. Al-Karawi, Philip Duncan and Francis F. Li. "Overlapped Music segmentation using a new Effective Feature and Random Forests". IAES International Journal of Artificial Intelligence (IJ-AI) 8, No. 2 (01.06.2019): 181. http://dx.doi.org/10.11591/ijai.v8.i2.pp181-189.

Annotation:
In the field of audio classification, audio signals may be broadly divided into three classes: speech, music and events. Most studies, however, neglect that real audio soundtracks can contain any combination of these classes simultaneously. In this study, a novel feature, "Entrocy", is proposed for the detection of music both in pure form and overlapping with the other audio classes. Entrocy is defined as the variation of the information (or entropy) in an audio segment over time. Segments which contain music were found to have lower Entrocy, since there are fewer abrupt changes over time. We also compared Entrocy with existing music detection features, with Entrocy showing promising performance. Keywords: music detection, audio content analysis, audio indexing, entropy, real-world audio classification.
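A rough sketch of an Entrocy-style measure, based on our reading of the definition above (framewise spectral entropy, then its variance over the segment); this is an illustration, not the paper's reference implementation:

```python
import numpy as np

def spectral_entropy(frame):
    # Shannon entropy of the normalized magnitude spectrum of one frame.
    power = np.abs(np.fft.rfft(frame)) ** 2
    p = power / (power.sum() + 1e-12)
    return -(p * np.log2(p + 1e-12)).sum()

def entrocy(signal, frame_len=256):
    # Our reading of Entrocy: variance of framewise entropy over time.
    n = len(signal) // frame_len * frame_len
    frames = signal[:n].reshape(-1, frame_len)
    return np.array([spectral_entropy(f) for f in frames]).var()

sr = 8000
t = np.arange(sr) / sr
steady_tone = np.sin(2 * np.pi * 440 * t)   # stable spectrum, music-like

rng = np.random.default_rng(0)
n_frames = sr // 256
env = np.repeat((np.arange(n_frames) % 2).astype(float), 256)
bursty = rng.normal(size=len(env)) * env    # abrupt on/off noise bursts

# Fewer abrupt spectral changes -> lower Entrocy, matching the claim above.
print(entrocy(steady_tone), entrocy(bursty))
```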
41

Kruglova, Lyudmila A. "Russian-Speaking Conversational Radio Stations on the YouTube Platform: Audio Content Visualization". Vestnik NSU. Series: History and Philology 19, No. 6 (2020): 159–70. http://dx.doi.org/10.25205/1818-7919-2020-19-6-159-170.

Annotation:
Purpose. The article presents the results of an analysis of how Russian-language conversational radio stations work with the YouTube platform. The accounts of 13 news and talk radio stations broadcasting in Russia, and the materials they published on YouTube, were analyzed. Results. The analysis was carried out in the summer and autumn of 2019 against criteria such as the number of released materials, views, comments, "likes", the nature of the content (on-air or special), news pegs, format and timing. Conclusion. The author concludes that most of the analyzed radio stations, with the exception of those that cooperate with video content producers or deliberately develop this resource, do not yet know how to use YouTube's tools, do not understand the specifics of working with the audience on this platform, and cannot compete with top bloggers.
42

Wu, Wen-Chi Vivian, I-Ting Doris Lin, Michael W. Marek and Fang-Chuan Ou Yang. "Analysis of English Idiomatic Learning Behaviors of an Audio-Visual Mobile Application". SAGE Open 11, No. 2 (April 2021): 215824402110168. http://dx.doi.org/10.1177/21582440211016899.

Annotation:
The use of idioms is essential for reaching higher levels of English expression, especially for English as a Foreign Language (EFL) learners. However, English idioms are challenging for both instructors and learners, because understanding an idiom depends on understanding its cultural context, and most mobile language applications target vocabulary acquisition instead. The purpose of this study was therefore to develop an animation/video-based application, "My English Idiom Learning Assistant" (MEILA), and to explore students' idiom learning behaviors and how those behaviors related to their learning outcomes in MEILA. To explore this relationship, the researchers used logs from the MEILA database. The participants consisted of 59 freshmen from two English conversation classes at a private university in central Taiwan, who experienced the learning activities over 3 weeks. The study adopted pre- and post-tests of idiomatic understanding as well as in-depth interviews. The results revealed that MEILA significantly enhanced idiomatic learning outcomes. The sequential analysis used provides language instructors with an example of monitoring learning behaviors to improve teaching materials and methods. The findings may stimulate more mobile-assisted language learning (MALL) researchers, English instructors and app designers to create innovative mobile environments for learning English idioms.
43

Khasanov, N. B., Z. A. Asymova and N. A. Rakhmanova. "Development of the Skill of Professionally Communicative Annotations with the Use of Audiovisual Technologies". Herald of KSUCTA n.a. N. Isanov, No. 3 (23.09.2019): 473–78. http://dx.doi.org/10.35803/1694-5298.2019.3.473-478.

Annotation:
The article deals with the problem of forming professional communicative annotation skills using audio-visual technologies, and formulates a definition of professionally directed annotation. The relevance of the article lies in the fact that it presents a scientifically grounded method of forming the skills of professional communicative annotation of audio-visual texts in Russian. The purpose of the article is to substantiate the general methodology, laws and principles for integrating the content of vocational education in the context of a competence-based approach. Materials and methods: integration, cognitive and competence approaches and the method of comparative analysis were used. Results: in the course of the research, the concept of the term "annotation", the skills of annotating an audio-visual text, and the method of forming the skills of professional communicative annotation of audio-visual texts (scientific or popular science films in Russian) are defined.
44

Simurra, Ivan, and Rodrigo Borges. "Analysis of Ligeti's Atmosphères by Means of Computational and Symbolic Resources". Revista Música 21, No. 1 (27.07.2021): 369–94. http://dx.doi.org/10.11606/rm.v21i1.188846.

Annotation:
We report a music analysis study of Atmosphères (1961) by György Ligeti, combining symbolic information retrieved from the musical score with audio descriptors extracted from the audio recording. The piece was selected according to the following criteria: (a) it is a music composition based on sound transformations associated with motions of the global timbre; (b) its creative concept makes direct reference to electronic music and to sound/timbre techniques from early Renaissance music; and (c) its sonorities are explored by means of variations in timbre contrast. From the symbolic analysis perspective, the timbre content of Atmosphères can be discussed in terms of the entanglement of the individual characteristics of musical instruments. The computational method approaches the musical structure from an empirical perspective and is based on clustering techniques. We depart from previous studies by focusing this time on the novelty curve calculated from the spectral content extracted from the recording of the piece. Our findings indicate that the novelty curve can be associated with five specific clusters and, regarding the symbolic music analysis, that three leading musical features can be argued for: (a) instrumentation changes; (b) distinct locations of chromatic pitch sets; and (c) fluctuations in intensity dynamics.
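The novelty-curve idea can be illustrated with a simplified sketch (not the authors' exact descriptor pipeline): compare the average feature vector before and after each frame, so that peaks in the resulting curve mark changes of section. The features below are synthetic:

```python
import numpy as np

def novelty_curve(features, w=4):
    # Distance between the mean feature vector of the w frames before
    # and the w frames after each position; large values signal change.
    n = len(features)
    nov = np.zeros(n)
    for i in range(w, n - w):
        nov[i] = np.linalg.norm(features[i:i + w].mean(0)
                                - features[i - w:i].mean(0))
    return nov

# Synthetic "piece": a timbre change at frame 20 (two feature regimes).
rng = np.random.default_rng(0)
section_a = rng.normal(0.0, 0.1, size=(20, 6))
section_b = rng.normal(1.0, 0.1, size=(20, 6))
nov = novelty_curve(np.vstack([section_a, section_b]))

print(int(np.argmax(nov)))  # the peak should sit near the section change
```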
45

Sturm, Bob L. "Alexander Lerch: An Introduction to Audio Content Analysis: Applications in Signal Processing and Music Informatics". Computer Music Journal 37, No. 4 (December 2013): 90–91. http://dx.doi.org/10.1162/comj_r_00208.

46

Haque, Mohammad A., and Jong-Myon Kim. "An analysis of content-based classification of audio signals using a fuzzy c-means algorithm". Multimedia Tools and Applications 63, No. 1 (17.02.2012): 77–92. http://dx.doi.org/10.1007/s11042-012-1019-y.

47

Barker, Alexander, Magdalena Opazo-Breton, Emily Thomson, John Britton, Bruce Grant-Braham and Rachael L. Murray. "Quantifying alcohol audio-visual content in UK broadcasts of the 2018 Formula 1 Championship: a content analysis and population exposure". BMJ Open 10, No. 8 (August 2020): e037035. http://dx.doi.org/10.1136/bmjopen-2020-037035.

Annotation:
Objectives: Exposure to alcohol imagery is associated with subsequent alcohol use among young people, and UK broadcasting regulations protect young people from alcohol advertising content on UK television. However, alcohol promotion during sporting events, a significant potential medium of advertising to children, is exempt. We report an analysis of, and estimate UK population exposure to, alcohol content, including branding, in UK broadcasts of the 2018 Formula 1 (F1) Championship. Setting: UK. Participants: None. Design: Content analysis of broadcast footage of 21 2018 F1 Championship races on Channel 4, using 1-minute interval coding of any alcohol content, actual or implied use, other related content or branding. Census data and viewing figures were used to estimate gross and per capita alcohol impressions. Results: Alcohol content occurred in all races, in 1613 (56%) 1-minute intervals of race footage and in 44 (9%) of intervals across 28% of advertisement breaks. The most prominent content was branding, occurring in 51% of race intervals and 7% of advertisement break intervals, appearing predominantly on billboard advertisements around the track, with the Heineken and Johnnie Walker brands being particularly prominent. The 21 races delivered an estimated 3.9 billion alcohol gross impressions (95% CI 3.6 to 4.3) to the UK population, including 154 million (95% CI 124 to 184) to children, and 3.6 billion gross impressions of alcohol branding, including 141 million to children. Branding was also shown in race footage from countries where alcohol promotion is prohibited. Conclusions: Alcohol content was highly prevalent in the 2018 F1 Championship broadcasts, delivering millions of alcohol impressions to young viewers. This exposure is likely to represent a significant driver of alcohol consumption among young people.
48

Al-Tameem, Ayman bin Abdulaziz, and Abdul Khader Jilani Saudagar. "Machine learning approach for identification of threat content in audio messages shared on social media". Journal of Discrete Mathematical Sciences and Cryptography 23, No. 1 (02.01.2020): 83–93. http://dx.doi.org/10.1080/09720529.2020.1721876.

49

Alateeq, Hanan, Dalal Alzaid, Nadia Selim, Afnan Abouelwafa, Shiroq Al-Megren and Heba Kurdi. "Design and Implantation of a Voluntary Reading Mobile Application for People who are Visually Impaired". Academic Perspective Procedia 1, No. 1 (09.11.2018): 645–53. http://dx.doi.org/10.33793/acperpro.01.01.120.

Annotation:
This paper proposes an application that supports visually impaired users in the community by providing voluntary audio support for readable content. The application, Basirah, allows visually impaired users to post audio requests for text to be read aloud. Volunteers can view these requests and respond with links to recorded audio. The application is developed for iOS devices, supports requests in English, and allows volunteers to respond in multiple languages. This paper presents Basirah and walks through the analysis, design and testing phases of its development. Preliminary testing of Basirah has shown promising results.
50

Xu, Hong. „The Application of Multi-modal Discourse in English Translation of Tujia Folk Song Long Chuan Diao in Western Hubei Province“. MATEC Web of Conferences 232 (2018): 02010. http://dx.doi.org/10.1051/matecconf/201823202010.

Annotation:
Nowadays, multi-modal discourse, including language, pictures, audio and video, plays a very important role in the translation of folk songs. Within the comprehensive framework of Delu Zhang, this paper analyses the application of multi-modal discourse in the translation of the Tujia folk song Long Chuan Diao on four levels: the cultural, contextual, content and expressive levels. The study supports a better understanding of Tujia folk songs and the transmission of Tujia culture. Multi-modal discourse analysis plays an important role in translating and spreading the Tujia folk songs of western Hubei province.