A selection of scientific literature on the topic "Flux audio"
Format your source according to APA, MLA, Chicago, Harvard, and other styles
Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Flux audio".
Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of a publication as a .pdf file and read its abstract online, if the relevant details are present in the metadata.
Journal articles on the topic "Flux audio"
Bryant, M. D. "Bond Graph Models for Linear Motion Magnetostrictive Actuators." Journal of Dynamic Systems, Measurement, and Control 118, no. 1 (March 1, 1996): 161–67. http://dx.doi.org/10.1115/1.2801139.
Stupacher, Jan, Michael J. Hove, and Petr Janata. "Audio Features Underlying Perceived Groove and Sensorimotor Synchronization in Music." Music Perception 33, no. 5 (June 1, 2016): 571–89. http://dx.doi.org/10.1525/mp.2016.33.5.571.
Rossetti, Danilo, and Jônatas Manzolli. "Analysis of Granular Acousmatic Music: Representation of sound flux and emergence." Organised Sound 24, no. 02 (August 2019): 205–16. http://dx.doi.org/10.1017/s1355771819000244.
Valiveti, Hima Bindu, Anil Kumar B., Lakshmi Chaitanya Duggineni, Swetha Namburu, and Swaraja Kuraparthi. "Soft computing based audio signal analysis for accident prediction." International Journal of Pervasive Computing and Communications 17, no. 3 (March 26, 2021): 329–48. http://dx.doi.org/10.1108/ijpcc-08-2020-0120.
Stanton, Polly. "Sound, listening and the moving image." Qualitative Research Journal 19, no. 1 (February 4, 2019): 65–71. http://dx.doi.org/10.1108/qrj-12-2018-0019.
Istvanek, Matej, Zdenek Smekal, Lubomir Spurny, and Jiri Mekyska. "Enhancement of Conventional Beat Tracking System Using Teager–Kaiser Energy Operator." Applied Sciences 10, no. 1 (January 4, 2020): 379. http://dx.doi.org/10.3390/app10010379.
Hao, Yiya, Yaobin Chen, Weiwei Zhang, Gong Chen, and Liang Ruan. "A real-time music detection method based on convolutional neural network using Mel-spectrogram and spectral flux." INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, no. 1 (August 1, 2021): 5910–18. http://dx.doi.org/10.3397/in-2021-11599.
Luck, Geoff, and Petri Toiviainen. "Ensemble Musicians’ Synchronization With Conductors’ Gestures: An Automated Feature-Extraction Analysis." Music Perception 24, no. 2 (December 1, 2006): 189–200. http://dx.doi.org/10.1525/mp.2006.24.2.189.
Purnomo, Endra Dwi, Ubaidillah Ubaidillah, Fitrian Imaduddin, Iwan Yahya, and Saiful Amri Mazlan. "Preliminary experimental evaluation of a novel loudspeaker featuring magnetorheological fluid surround absorber." Indonesian Journal of Electrical Engineering and Computer Science 17, no. 2 (February 1, 2020): 922. http://dx.doi.org/10.11591/ijeecs.v17.i2.pp922-928.
Mauch, Matthias, Robert M. MacCallum, Mark Levy, and Armand M. Leroi. "The evolution of popular music: USA 1960–2010." Royal Society Open Science 2, no. 5 (May 2015): 150081. http://dx.doi.org/10.1098/rsos.150081.
Повний текст джерелаДисертації з теми "Flux audio"
Nesvadba, Jan. "Segmentation sémantique des contenus audio-visuels." Bordeaux 1, 2007. http://www.theses.fr/2007BOR13456.
Ramona, Mathieu. "Classification automatique de flux radiophoniques par Machines à Vecteurs de Support." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00529331.
Soldi, Giovanni. "Diarisation du locuteur en temps réel pour les objets intelligents." Electronic Thesis or Diss., Paris, ENST, 2016. http://www.theses.fr/2016ENST0061.
Повний текст джерелаOn-line speaker diarization aims to detect “who is speaking now" in a given audio stream. The majority of proposed on-line speaker diarization systems has focused on less challenging domains, such as broadcast news and plenary speeches, characterised by long speaker turns and low spontaneity. The first contribution of this thesis is the development of a completely unsupervised adaptive on-line diarization system for challenging and highly spontaneous meeting data. Due to the obtained high diarization error rates, a semi-supervised approach to on-line diarization, whereby speaker models are seeded with a modest amount of manually labelled data and adapted by an efficient incremental maximum a-posteriori adaptation (MAP) procedure, is proposed. Obtained error rates may be low enough to support practical applications. The second part of the thesis addresses instead the problem of phone normalisation when dealing with short-duration speaker modelling. First, Phone Adaptive Training (PAT), a recently proposed technique, is assessed and optimised at the speaker modelling level and in the context of automatic speaker verification (ASV) and then is further developed towards a completely unsupervised system using automatically generated acoustic class transcriptions, whose number is controlled by regression tree analysis. PAT delivers significant improvements in the performance of a state-of-the-art iVector ASV system even when accurate phonetic transcriptions are not available
Poignant, Johann. "Identification non-supervisée de personnes dans les flux télévisés." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00958774.
Повний текст джерелаTrad, Abdelbasset. "Déploiement à grande échelle de la voix sur IP dans des environnements hétérogènes." Phd thesis, Nice, 2006. http://tel.archives-ouvertes.fr/tel-00406513.
Повний текст джерелаMichaud, Jérôme. "Re-conceptualiser notre expérience de l’environnement audio-visuel qui nous entoure : l’individuation, entre attention et mémoire." Thèse, 2016. http://hdl.handle.net/1866/16151.
Повний текст джерелаThis thesis re-conceptualizes our new audio-visual environment and analyses the experience we make of it. In the digital age marked by the dissemination of moving images, we circumscribe a category of images which we see as the most likely to have an impact on human development. We call it synchrono-photo-temporalized images-sounds. Specifically, we seek to highlight their power of affection and control by showing that they have some influence on the process of individuation, an influence which is greatly facilitated by the structural isotopy between the stream of consciousness and the flow of motion images. By examining the research of Bernard Stiegler, we also note the important roles attention and memory play in the process of individuation. This thinking makes us realize how the current education system in Quebec fails in its mission to give a good civic education by not providing an adequate teaching of moving images.
Books on the topic "Flux audio"
Colbert, Don. Bible Cure for Colds, Flu & Sinus Infections (Bible Cure (Oasis Audio)). Oasis Audio, 2004.
Wheatley, Greg (Narrator), ed. The Bible Cure for Colds, Flu and Sinus Infections (Bible Cure (Oasis Audio)). Oasis Audio, 2004.
Shatzkin, Mike, and Robert Paris Riger. The Book Business. Oxford University Press, 2019. http://dx.doi.org/10.1093/wentk/9780190628031.001.0001.
Book chapters on the topic "Flux audio"
Elrom, Elad. "Facilitating Audio and Video." In AdvancED Flex 4, 461–503. Berkeley, CA: Apress, 2010. http://dx.doi.org/10.1007/978-1-4302-2484-6_14.
Richardson, Darren, Paul Milbourne, Steve Webster, Todd Yard, and Sean McSharry. "Using Audio." In Foundation ActionScript 3.0 for Flash and Flex, 301–53. Berkeley, CA: Apress, 2009. http://dx.doi.org/10.1007/978-1-4302-1919-4_8.
McSharry, Sean. "Using Audio." In Foundation ActionScript 3.0 with Flash CS3 and Flex, 293–343. Berkeley, CA: Apress, 2008. http://dx.doi.org/10.1007/978-1-4302-0196-0_8.
Al-Shoshan, Abdullah I. "Classification and Separation of Audio and Music Signals." In Multimedia Information Retrieval [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94940.
Conference papers on the topic "Flux audio"
Wang, Wengen, Xiaoqing Yu, Yun Hui Wang, and Ram Swaminathan. "Audio fingerprint based on Spectral Flux for audio retrieval." In 2012 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2012. http://dx.doi.org/10.1109/icalip.2012.6376781.
Lee, Sangkil, Jieun Kim, and Insung Lee. "Speech/Audio Signal Classification Using Spectral Flux Pattern Recognition." In 2012 IEEE Workshop on Signal Processing Systems (SiPS). IEEE, 2012. http://dx.doi.org/10.1109/sips.2012.36.
Xu, Y., P. Smagacz, J. Lapinskas, J. Webster, P. Shaw, and R. P. Taleyarkhan. "Neutron Detection with Centrifugally-Tensioned Metastable Fluid Detectors (CTMFD)." In 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/icone14-89199.
Zheng, Haiming, and Tieqiao Guo. "Relative Accuracy Test Audit Evaluation for Flue Gas Continuous Emission Monitoring Systems in Power Plant." In 2008 Pacific-Asia Workshop on Computational Intelligence and Industrial Application (PACIIA). IEEE, 2008. http://dx.doi.org/10.1109/paciia.2008.123.
Meng, Liu, Chen Yang, Zhong Zhuhai, Zhang Xiaodan, Deng Guoliang, Mingyan Yin, Jun Li, and Qi Sun. "Numerical Tests on the Effect Factors of the Last Stage Blade for Low Pressure Exhaust Hood Simulation." In ASME Turbo Expo 2017: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/gt2017-63964.
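Several of the conference papers above (e.g., Wang et al. 2012 and Lee, Kim, and Lee 2012) build on the spectral-flux feature that gives this topic its name. As a purely illustrative aside, below is a minimal sketch of one common, textbook-style way to compute spectral flux from a mono signal with NumPy; it is a generic definition under assumed frame and hop sizes, not the specific variant used in any of the cited works.

```python
import numpy as np

def spectral_flux(signal, frame_size=1024, hop_size=512):
    """Frame-to-frame spectral flux of a mono audio signal.

    Generic definition: the L2 norm of the half-wave-rectified
    difference between consecutive magnitude spectra.
    """
    window = np.hanning(frame_size)
    n_frames = 1 + max(0, (len(signal) - frame_size) // hop_size)
    prev_mag = None
    flux = []
    for i in range(n_frames):
        frame = signal[i * hop_size : i * hop_size + frame_size] * window
        mag = np.abs(np.fft.rfft(frame))
        if prev_mag is not None:
            diff = mag - prev_mag
            diff[diff < 0] = 0.0  # keep only energy increases (onset-oriented variant)
            flux.append(np.sqrt(np.sum(diff ** 2)))
        prev_mag = mag
    return np.array(flux)

if __name__ == "__main__":
    # Hypothetical input: one second of white noise at 16 kHz.
    x = np.random.randn(16000)
    print(spectral_flux(x).shape)
```

Published systems differ in normalisation, rectification, and distance measure, so this sketch should be read only as an orientation to the feature, not as a reproduction of any paper's method.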