Journal articles on the topic "Speech and audio signals"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Browse the 50 best scholarly journal articles on the topic "Speech and audio signals".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the relevant details are available in the work's metadata.
Browse journal articles from a wide variety of disciplines and compile an accurate bibliography.
Rao, G. Manmadha, Raidu Babu D.N., Krishna Kanth P.S.L., Vinay B., and Nikhil V. "Reduction of Impulsive Noise from Speech and Audio Signals by using Sd-Rom Algorithm". International Journal of Recent Technology and Engineering 10, no. 1 (May 30, 2021): 265–68. http://dx.doi.org/10.35940/ijrte.a5943.0510121.
S. Ashwin, J., and N. Manoharan. "Audio Denoising Based on Short Time Fourier Transform". Indonesian Journal of Electrical Engineering and Computer Science 9, no. 1 (January 1, 2018): 89. http://dx.doi.org/10.11591/ijeecs.v9.i1.pp89-92.
Kacur, Juraj, Boris Puterka, Jarmila Pavlovicova, and Milos Oravec. "Frequency, Time, Representation and Modeling Aspects for Major Speech and Audio Processing Applications". Sensors 22, no. 16 (August 22, 2022): 6304. http://dx.doi.org/10.3390/s22166304.
Nittrouer, Susan, and Joanna H. Lowenstein. "Beyond Recognition: Visual Contributions to Verbal Working Memory". Journal of Speech, Language, and Hearing Research 65, no. 1 (January 12, 2022): 253–73. http://dx.doi.org/10.1044/2021_jslhr-21-00177.
B, Nagesh, and Dr M. Uttara Kumari. "A Review on Machine Learning for Audio Applications". Journal of University of Shanghai for Science and Technology 23, no. 07 (June 30, 2021): 62–70. http://dx.doi.org/10.51201/jusst/21/06508.
Kubanek, M., J. Bobulski, and L. Adrjanowicz. "Characteristics of the use of coupled hidden Markov models for audio-visual Polish speech recognition". Bulletin of the Polish Academy of Sciences: Technical Sciences 60, no. 2 (October 1, 2012): 307–16. http://dx.doi.org/10.2478/v10175-012-0041-6.
Timmermann, Johannes, Florian Ernst, and Delf Sachau. "Speech enhancement for helicopter headsets with an integrated ANC-system for FPGA-platforms". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 5 (February 1, 2023): 2720–30. http://dx.doi.org/10.3397/in_2022_0382.
Abdallah, Hanaa A., and Souham Meshoul. "A Multilayered Audio Signal Encryption Approach for Secure Voice Communication". Electronics 12, no. 1 (December 20, 2022): 2. http://dx.doi.org/10.3390/electronics12010002.
Yin, Shu Hua. "Design of the Auxiliary Speech Recognition System of Super-Short-Range Reconnaissance Radar". Applied Mechanics and Materials 556-562 (May 2014): 4830–34. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.4830.
Moore, Brian C. J. "Binaural sharing of audio signals". Hearing Journal 60, no. 11 (November 2007): 46–48. http://dx.doi.org/10.1097/01.hj.0000299172.13153.6f.
Gnanamanickam, Jenifa, Yuvaraj Natarajan, and Sri Preethaa K. R. "A Hybrid Speech Enhancement Algorithm for Voice Assistance Application". Sensors 21, no. 21 (October 23, 2021): 7025. http://dx.doi.org/10.3390/s21217025.
Haas, Ellen C. "Can 3-D Auditory Warnings Enhance Helicopter Cockpit Safety?" Proceedings of the Human Factors and Ergonomics Society Annual Meeting 42, no. 15 (October 1998): 1117–21. http://dx.doi.org/10.1177/154193129804201513.
Rashid, Rakan Saadallah, and Jafar Ramadhan Mohammed. "Securing speech signals by watermarking binary images in the wavelet domain". Indonesian Journal of Electrical Engineering and Computer Science 18, no. 2 (May 1, 2020): 1096. http://dx.doi.org/10.11591/ijeecs.v18.i2.pp1096-1103.
Sinha, Ria. "Digital Assistant for Sound Classification Using Spectral Fingerprinting". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2045–52. http://dx.doi.org/10.22214/ijraset.2021.37714.
Thanki, Rohit, and Komal Borisagar. "Watermarking Scheme with CS Encryption for Security and Piracy of Digital Audio Signals". International Journal of Information System Modeling and Design 8, no. 4 (October 2017): 38–60. http://dx.doi.org/10.4018/ijismd.2017100103.
Maryn, Youri, and Andrzej Zarowski. "Calibration of Clinical Audio Recording and Analysis Systems for Sound Intensity Measurement". American Journal of Speech-Language Pathology 24, no. 4 (November 2015): 608–18. http://dx.doi.org/10.1044/2015_ajslp-14-0082.
Menezes, João Vítor Possamai de, Maria Mendes Cantoni, Denis Burnham, and Adriano Vilela Barbosa. "Method for lexical tone classification in audio-visual speech". Journal of Speech Sciences 9 (September 9, 2020): 93–104. http://dx.doi.org/10.20396/joss.v9i00.14960.
Gunawan, T. S., O. O. Khalifa, and E. Ambikairajah. "FORWARD MASKING THRESHOLD ESTIMATION USING NEURAL NETWORKS AND ITS APPLICATION TO PARALLEL SPEECH ENHANCEMENT". IIUM Engineering Journal 11, no. 1 (May 26, 2010): 15–26. http://dx.doi.org/10.31436/iiumej.v11i1.41.
Mehrotra, Tushar, Neha Shukla, Tarunika Chaudhary, Gaurav Kumar Rajput, Majid Altuwairiqi, and Mohd Asif Shah. "Improved Frame-Wise Segmentation of Audio Signals for Smart Hearing Aid Using Particle Swarm Optimization-Based Clustering". Mathematical Problems in Engineering 2022 (May 5, 2022): 1–9. http://dx.doi.org/10.1155/2022/1182608.
Usina, E. E., A. R. Shabanova, and I. V. Lebedev. "Models and a Technique for Determining the Speech Activity of a User of a Socio-Cyberphysical System". Proceedings of the Southwest State University 23, no. 6 (February 23, 2020): 225–40. http://dx.doi.org/10.21869/2223-1560-2019-23-6-225-240.
Mowlaee, Pejman, Abolghasem Sayadiyan, and Hamid Sheikhzadeh. "FDMSM robust signal representation for speech mixtures and noise corrupted audio signals". IEICE Electronics Express 6, no. 15 (2009): 1077–83. http://dx.doi.org/10.1587/elex.6.1077.
Naithani, Deeksha. "Development of a Real-Time Audio Signal Processing System for Speech Enhancement". Mathematical Statistician and Engineering Applications 70, no. 2 (February 26, 2021): 1041–52. http://dx.doi.org/10.17762/msea.v70i2.2157.
Sreelekha, Pallepati, Aedabaina Devi, Kurva Pooja, and S. T. Ramya. "Audio to Sign Language Translator". International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (April 30, 2023): 3382–84. http://dx.doi.org/10.22214/ijraset.2023.50873.
MOWLAEE, PEJMAN, and ABOLGHASEM SAYADIYAN. "AUDIO CLASSIFICATION OF MUSIC/SPEECH MIXED SIGNALS USING SINUSOIDAL MODELING WITH SVM AND NEURAL NETWORK APPROACH". Journal of Circuits, Systems and Computers 22, no. 02 (February 2013): 1250083. http://dx.doi.org/10.1142/s0218126612500831.
Hanani, Abualsoud, Yanal Abusara, Bisan Maher, and Inas Musleh. "English speaking proficiency assessment using speech and electroencephalography signals". International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 2501. http://dx.doi.org/10.11591/ijece.v12i3.pp2501-2508.
Lu, Yuanxun, Jinxiang Chai, and Xun Cao. "Live speech portraits". ACM Transactions on Graphics 40, no. 6 (December 2021): 1–17. http://dx.doi.org/10.1145/3478513.3480484.
Rani, Shalu. "Review: Audio Noise Reduction Using Filters and Discrete Wavelet Transformation". Journal of Advance Research in Electrical & Electronics Engineering (ISSN: 2208-2395) 2, no. 6 (June 30, 2015): 17–21. http://dx.doi.org/10.53555/nneee.v2i6.192.
Kropotov, Y. A., A. A. Belov, and A. Y. Prockuryakov. "Increasing signal/acoustic interference ratio in telecommunications audio exchange by adaptive filtering methods". Information Technology and Nanotechnology, no. 2416 (2019): 271–76. http://dx.doi.org/10.18287/1613-0073-2019-2416-271-276.
Kholkina, Natalya. "INFORMATION TRANSFER EFFECTIVENESS OF WARNING AND TELECOMMUNICATION SYSTEMS OF AUDIO-EXCHANGE UNDER NOISE CONDITIONS". Bulletin of Bryansk state technical university 2020, no. 5 (May 13, 2020): 49–55. http://dx.doi.org/10.30987/1999-8775-2020-5-49-55.
Putta, Venkata Subbaiah, A. Selwin Mich Priyadharson, and Venkatesa Prabhu Sundramurthy. "Regional Language Speech Recognition from Bone-Conducted Speech Signals through Different Deep Learning Architectures". Computational Intelligence and Neuroscience 2022 (August 25, 2022): 1–10. http://dx.doi.org/10.1155/2022/4473952.
Rashkevych, Yu., D. Peleshko, I. Pelekh, and I. Izonin. "Speech signal marking on the base of local magnitude and invariant segmentation". Mathematical Modeling and Computing 1, no. 2 (2014): 234–44. http://dx.doi.org/10.23939/mmc2014.02.234.
Cox, Trevor, Michael Akeroyd, Jon Barker, John Culling, Jennifer Firth, Simone Graetzer, Holly Griffiths et al. "Predicting Speech Intelligibility for People with a Hearing Loss: The Clarity Challenges". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 3 (February 1, 2023): 4599–606. http://dx.doi.org/10.3397/in_2022_0662.
Yashwanth, A. "Audio Enhancement and Denoising using Online Non-Negative Matrix Factorization and Deep Learning". International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (June 30, 2022): 1703–9. http://dx.doi.org/10.22214/ijraset.2022.44061.
Saitoh, Takeshi. "Research on multi-modal silent speech recognition technology". Impact 2018, no. 3 (June 15, 2018): 47–49. http://dx.doi.org/10.21820/23987073.2018.3.47.
Hao, Cailing. "Application of Neural Network Algorithm Based on Principal Component Image Analysis in Band Expansion of College English Listening". Computational Intelligence and Neuroscience 2021 (November 12, 2021): 1–12. http://dx.doi.org/10.1155/2021/9732156.
LIN, RUEI-SHIANG, and LING-HWEI CHEN. "A NEW APPROACH FOR CLASSIFICATION OF GENERIC AUDIO DATA". International Journal of Pattern Recognition and Artificial Intelligence 19, no. 01 (February 2005): 63–78. http://dx.doi.org/10.1142/s0218001405003958.
Alexandrou, Anna Maria, Timo Saarinen, Jan Kujala, and Riitta Salmelin. "Cortical Tracking of Global and Local Variations of Speech Rhythm during Connected Natural Speech Perception". Journal of Cognitive Neuroscience 30, no. 11 (November 2018): 1704–19. http://dx.doi.org/10.1162/jocn_a_01295.
Faghani, Maral, Hamidreza Rezaee-Dehsorkh, Nassim Ravanshad, and Hamed Aminzadeh. "Ultra-Low-Power Voice Activity Detection System Using Level-Crossing Sampling". Electronics 12, no. 4 (February 5, 2023): 795. http://dx.doi.org/10.3390/electronics12040795.
Hajarolasvadi, Noushin, and Hasan Demirel. "3D CNN-Based Speech Emotion Recognition Using K-Means Clustering and Spectrograms". Entropy 21, no. 5 (May 8, 2019): 479. http://dx.doi.org/10.3390/e21050479.
Kane, Joji, and Akira Nohara. "Speech processing apparatus for separating voice and non-voice audio signals contained in a same mixed audio signal". Journal of the Acoustical Society of America 95, no. 3 (March 1994): 1704. http://dx.doi.org/10.1121/1.408490.
Mohd Hanifa, Rafizah, Khalid Isa, Shamsul Mohamad, Shaharil Mohd Shah, Shelena Soosay Nathan, Rosni Ramle, and Mazniha Berahim. "Voiced and unvoiced separation in Malay speech using zero crossing rate and energy". Indonesian Journal of Electrical Engineering and Computer Science 16, no. 2 (November 1, 2019): 775. http://dx.doi.org/10.11591/ijeecs.v16.i2.pp775-780.
Wolfe, Jace, and Erin C. Schafer. "Optimizing The Benefit of Sound Processors Coupled to Personal FM Systems". Journal of the American Academy of Audiology 19, no. 08 (September 2008): 585–94. http://dx.doi.org/10.3766/jaaa.19.8.2.
Zhao, Huan, Shaofang He, Zuo Chen, and Xixiang Zhang. "Dual Key Speech Encryption Algorithm Based Underdetermined BSS". Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/974735.
Jahan, Ayesha, Sanobar Shadan, Yasmeen Fatima, and Naheed Sultana. "Image Orator - Image to Speech Using CNN, LSTM and GTTS". International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (June 30, 2023): 4473–81. http://dx.doi.org/10.22214/ijraset.2023.54470.
Wang, Li, Weiguang Zheng, Xiaojun Ma, and Shiming Lin. "Denoising Speech Based on Deep Learning and Wavelet Decomposition". Scientific Programming 2021 (July 16, 2021): 1–10. http://dx.doi.org/10.1155/2021/8677043.
V, Sethuram, Ande Prasad, and R. Rajeswara Rao. "Metaheuristic adapted convolutional neural network for Telugu speaker diarization". Intelligent Decision Technologies 15, no. 4 (January 10, 2022): 561–77. http://dx.doi.org/10.3233/idt-211005.
Lee, Dongheon, and Jung-Woo Choi. "Inter-channel Conv-TasNet for source-agnostic multichannel audio enhancement". INTER-NOISE and NOISE-CON Congress and Conference Proceedings 265, no. 5 (February 1, 2023): 2068–75. http://dx.doi.org/10.3397/in_2022_0297.
Ong, Kah Liang, Chin Poo Lee, Heng Siong Lim, and Kian Ming Lim. "Speech emotion recognition with light gradient boosting decision trees machine". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 4 (August 1, 2023): 4020. http://dx.doi.org/10.11591/ijece.v13i4.pp4020-4028.
Alluhaidan, Ala Saleh, Oumaima Saidani, Rashid Jahangir, Muhammad Asif Nauman, and Omnia Saidani Neffati. "Speech Emotion Recognition through Hybrid Features and Convolutional Neural Network". Applied Sciences 13, no. 8 (April 10, 2023): 4750. http://dx.doi.org/10.3390/app13084750.
CAO, JIANGTAO, NAOYUKI KUBOTA, PING LI, and HONGHAI LIU. "THE VISUAL-AUDIO INTEGRATED RECOGNITION METHOD FOR USER AUTHENTICATION SYSTEM OF PARTNER ROBOTS". International Journal of Humanoid Robotics 08, no. 04 (December 2011): 691–705. http://dx.doi.org/10.1142/s0219843611002678.