Academic literature on the topic 'Speaker verification system'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Speaker verification system.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Speaker verification system":

1

Watari, Masao. "Speaker verification system." Journal of the Acoustical Society of America 91, no. 1 (January 1992): 546. http://dx.doi.org/10.1121/1.402663.

2

Sakoe, Hiroaki. "Speaker verification system." Journal of the Acoustical Society of America 85, no. 5 (May 1989): 2246. http://dx.doi.org/10.1121/1.397806.

3

Uchiyama, Hiroki. "Speaker verification system." Journal of the Acoustical Society of America 95, no. 1 (January 1994): 593. http://dx.doi.org/10.1121/1.408274.

4

Shanmugapriya, P., and Y. Venkataramani. "Analysis of Speaker Verification System Using Support Vector Machine." Journal of Advances in Chemistry 13, no. 10 (February 25, 2017): 6531–42. http://dx.doi.org/10.24297/jac.v13i10.5839.

Abstract:
The integration of the GMM supervector and the Support Vector Machine (SVM) has become one of the most popular strategies in text-independent speaker verification. This paper describes the application of the Fuzzy Support Vector Machine (FSVM) to the classification of speakers using GMM supervectors. Supervectors are formed by stacking the mean vectors of GMMs adapted from the UBM by maximum a posteriori (MAP) estimation. GMM supervectors characterize a speaker's acoustic characteristics and are used to develop a speaker-dependent fuzzy SVM model. Introducing fuzzy theory into the support vector machine yields better classification accuracy and requires fewer support vectors. Experiments were conducted on the 2001 NIST speaker recognition evaluation corpus. The performance of the GMM-FSVM based speaker verification system is compared with conventional GMM-UBM and GMM-SVM based systems. Experimental results indicate that the fuzzy SVM based speaker verification system with GMM supervectors achieves better performance than the GMM-UBM system.
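As a concrete illustration of the GMM-supervector front-end described above, here is a minimal sketch assuming a scikit-learn UBM, mean-only MAP adaptation with an illustrative relevance factor, and a plain (non-fuzzy) SVM back-end; all names, dimensions and data are placeholders rather than the authors' implementation.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def map_adapted_supervector(ubm: GaussianMixture, feats: np.ndarray, r: float = 16.0) -> np.ndarray:
    """MAP-adapt the UBM means to one utterance and stack them into a supervector."""
    post = ubm.predict_proba(feats)                   # (frames, components) posteriors
    n_c = post.sum(axis=0)                            # soft frame counts per component
    f_c = post.T @ feats                              # first-order statistics, (C, D)
    alpha = (n_c / (n_c + r))[:, None]                # data-dependent adaptation weights
    means = alpha * (f_c / np.maximum(n_c[:, None], 1e-8)) + (1.0 - alpha) * ubm.means_
    return means.reshape(-1)

rng = np.random.default_rng(0)
ubm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
ubm.fit(rng.normal(size=(2000, 13)))                  # background data standing in for real MFCCs
X = np.stack([map_adapted_supervector(ubm, rng.normal(loc=m, size=(200, 13)))
              for m in (0.0, 0.0, 0.5, 0.5)])         # two "speakers", two utterances each
y = np.array([0, 0, 1, 1])
clf = SVC(kernel="linear").fit(X, y)                  # the paper's FSVM would add fuzzy memberships here
print(clf.decision_function(X))

A fuzzy SVM differs mainly in weighting each training sample by a membership value, which can be approximated in scikit-learn through the sample_weight argument of fit.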
5

Gada, Amay, Neel Kothari, Ruhina Karani, Chetashri Badane, Dhruv Gada, and Tanish Patwa. "DR-SASV: A deep and reliable spoof aware speech verification system." International Journal on Information Technologies and Security 15, no. 4 (December 1, 2023): 93–106. http://dx.doi.org/10.59035/ffmb8272.

Abstract:
A spoof-aware speaker verification system is an integrated system that is capable of jointly identifying impostor speakers as well as spoofing attacks from target speakers. This type of system largely helps in protecting sensitive data, mitigating fraud, and reducing theft. Research has recently enhanced the effectiveness of countermeasure systems and automatic speaker verification systems separately to produce low Equal Error Rates (EER) for each system. However, work exploring a combination of both is still scarce. This paper proposes an end-to-end solution to address spoof-aware automatic speaker verification (ASV) by introducing a Deep Reliable Spoof-Aware-Speaker-Verification (DR-SASV) system. The proposed system allows the target audio to pass through a “spoof aware” speaker verification model sequentially after applying a convolutional neural network (CNN)-based spoof detection model. The suggested system produces encouraging results after being trained on the ASVSpoof 2019 LA dataset. The spoof detection model gives a validation accuracy of 96%, while the transformer-based speech verification model authenticates users with an error rate of 13.74%. The system surpasses other state-of-the-art models and produces an EER score of 10.32%.
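Since this entry (like several others in the list) reports performance as an Equal Error Rate, a generic sketch of how EER is computed from target and impostor trial scores may be useful; it is illustrative only, not the authors' evaluation code.

import numpy as np

def equal_error_rate(target_scores: np.ndarray, impostor_scores: np.ndarray) -> float:
    """EER: the operating point where false acceptance and false rejection rates are equal."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])   # false acceptance rate
    frr = np.array([(target_scores < t).mean() for t in thresholds])      # false rejection rate
    i = np.argmin(np.abs(far - frr))                                      # closest crossing point
    return float((far[i] + frr[i]) / 2.0)

rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(2.0, 1.0, 1000),    # scores from genuine (target) trials
                       rng.normal(0.0, 1.0, 1000)))   # scores from impostor trials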
6

Mammone, Richard J. "Speaker identification and verification system." Journal of the Acoustical Society of America 101, no. 2 (February 1997): 665. http://dx.doi.org/10.1121/1.419408.

7

Rabin, Michael D. "Speaker verification system and process." Journal of the Acoustical Society of America 103, no. 6 (June 1998): 3138. http://dx.doi.org/10.1121/1.423030.

8

Milewski, Krzysztof, Szymon Zaporowski, and Andrzej Czyżewski. "Comparison of the Ability of Neural Network Model and Humans to Detect a Cloned Voice." Electronics 12, no. 21 (October 30, 2023): 4458. http://dx.doi.org/10.3390/electronics12214458.

Abstract:
The vulnerability of the speaker identity verification system to attacks using voice cloning was examined. The research project assumed creating a model for verifying the speaker’s identity based on voice biometrics and then testing its resistance to potential attacks using voice cloning. The Deep Speaker Neural Speaker Embedding System was trained, and the Real-Time Voice Cloning system was employed based on the SV2TTS, Tacotron, WaveRNN, and GE2E neural networks. The results of attacks using voice cloning were analyzed and discussed in the context of a subjective assessment of cloned voice fidelity. Subjective test results and attempts to authenticate speakers proved that the tested biometric identity verification system might resist voice cloning attacks even if humans cannot distinguish cloned samples from original ones.
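For readers unfamiliar with embedding-based verifiers such as the Deep Speaker system examined above, the usual decision rule is cosine similarity between enrollment and test embeddings compared against a threshold; the 512-dimensional vectors and the 0.7 threshold below are assumptions for illustration only.

import numpy as np

def cosine_score(enroll: np.ndarray, test: np.ndarray) -> float:
    return float(enroll @ test / (np.linalg.norm(enroll) * np.linalg.norm(test)))

def accept(enroll: np.ndarray, test: np.ndarray, threshold: float = 0.7) -> bool:
    return cosine_score(enroll, test) >= threshold    # accept the identity claim if similar enough

rng = np.random.default_rng(1)
enrolled = rng.normal(size=512)                        # enrollment embedding (placeholder)
genuine = enrolled + 0.1 * rng.normal(size=512)        # genuine attempt: close to enrollment
unrelated = rng.normal(size=512)                       # unrelated voice, or a poor clone
print(accept(enrolled, genuine), accept(enrolled, unrelated))   # True False

A successful cloning attack, in these terms, is one that pushes the cloned sample's embedding above the acceptance threshold.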
9

Bouziane, Ayoub, Jamal Kharroubi, and Arsalane Zarghili. "Towards an Optimal Speaker Modeling in Speaker Verification Systems using Personalized Background Models." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 6 (December 1, 2017): 3655. http://dx.doi.org/10.11591/ijece.v7i6.pp3655-3663.

Abstract:
This paper presents a novel speaker modeling approach for speaker recognition systems. The basic idea of this approach consists of deriving the target speaker model from a personalized background model, composed only of the UBM Gaussian components that are really present in the speech of the target speaker. The motivation behind the derivation of speakers' models from personalized background models is to exploit the observed difference in some acoustic classes between speakers, in order to improve the performance of speaker recognition systems. The proposed approach was evaluated for the speaker verification task using various amounts of training and testing speech data. The experimental results showed that the proposed approach is efficient in terms of both verification performance and computational cost during the testing phase of the system, compared to traditional UBM-based speaker recognition systems.
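A rough sketch of the component-selection idea described above, assuming a scikit-learn UBM and an illustrative occupancy threshold; it shows only how the Gaussian components "really present" in the target speaker's speech might be identified, not the full recognition system.

import numpy as np
from sklearn.mixture import GaussianMixture

def personalized_components(ubm: GaussianMixture, speaker_feats: np.ndarray,
                            min_occupancy: float = 1.0) -> np.ndarray:
    """Indices of UBM Gaussians whose soft frame count in the speaker's data exceeds a threshold."""
    occupancy = ubm.predict_proba(speaker_feats).sum(axis=0)
    return np.where(occupancy >= min_occupancy)[0]

rng = np.random.default_rng(0)
ubm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
ubm.fit(rng.normal(size=(4000, 13)))                       # background data (placeholder features)
kept = personalized_components(ubm, rng.normal(loc=0.3, size=(300, 13)))
print(len(kept), "of", ubm.n_components, "components retained for this speaker")

The target speaker model would then be adapted from this reduced, personalized background model rather than from the full UBM.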
10

Pham, Tuan, and Michael Wagner. "Speaker Verification with Fuzzy Fusion and Genetic Optimization." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 6 (December 20, 1999): 451–56. http://dx.doi.org/10.20965/jaciii.1999.p0451.

Abstract:
Most speaker verification systems are based on similarity or likelihood normalization techniques, as these help to better cope with speaker variability. In conventional normalization, the a priori probabilities of the cohort speakers are assumed to be equal. From this standpoint, we apply the fuzzy integral and genetic algorithms to combine the likelihood values of the cohort speakers, relaxing the assumption of equal a priori probabilities. This approach replaces the conventional normalization term with the fuzzy integral, which acts as a non-linear fusion of the similarity measures of an utterance assigned to the cohort speakers. Furthermore, genetic algorithms are applied to find optimal fuzzy densities, which are very important for the fuzzy fusion. We illustrate the performance of the proposed approach by testing the speaker verification system with both the conventional and the proposed algorithms using the commercial speech corpus TI46. The results in terms of equal error rates show that the speaker verification system using the fuzzy integral outperforms the conventional normalization method.
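For context, this is the conventional equal-weight cohort normalization that the paper takes as its baseline, sketched in the log domain; the proposed method would replace the plain averaging of cohort likelihoods with a fuzzy-integral fusion whose densities are tuned by a genetic algorithm.

import numpy as np

def normalized_score(target_loglik: float, cohort_logliks: np.ndarray) -> float:
    """Log-likelihood ratio of the claimed speaker against the mean cohort likelihood."""
    log_mean_cohort = np.logaddexp.reduce(cohort_logliks) - np.log(len(cohort_logliks))
    return target_loglik - log_mean_cohort

print(normalized_score(-120.0, np.array([-130.0, -128.5, -135.2])))   # positive scores favour the claimant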

Dissertations / Theses on the topic "Speaker verification system":

1

Nosratighods, Mohaddeseh (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Robust speaker verification system." University of New South Wales, Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/42796.

Abstract:
Identity verification or biometric recognition systems play an important role in our daily lives. Applications include Automatic Teller Machines (ATM), banking and share information retrieval, and personal verification for credit cards. Among the biometric techniques, authentication of speakers by their voice is of great importance, since it employs a non-invasive approach and is the only available modality in many applications. However, the performance of Automatic Speaker Verification (ASV) systems degrades significantly under adverse conditions, which cause recordings from the same speaker to differ. The objective of this research is to investigate and develop robust techniques for performing automatic speaker recognition over various channel conditions, such as telephony and recorded microphone speech. This research is shown to improve the robustness of ASV systems in three main areas: feature extraction, speaker modelling and score normalization. At the feature level, a new set of dynamic features, termed Delta Cepstral Energy (DCE), is proposed instead of traditional delta cepstra; it not only greatly reduces the dimensionality of the feature vector compared with delta and delta-delta cepstra, but is also shown to provide the same performance for matched testing and training conditions on TIMIT and a subset of the NIST 2002 dataset. The concept of speaker entropy, which conveys the information contained in a speaker's speech based on the extracted features, facilitates comparative evaluation of the proposed methods. In addition, Frequency Modulation features are combined in a complementary manner with the Mel Frequency Cepstral Coefficients (MFCCs) to improve the performance of the ASV system under channel variability of various types. The proposed fused system shows a relative reduction of up to 23% in Equal Error Rate (EER) over the MFCC-based system when evaluated on the NIST 2008 dataset. Currently, the main challenge in speaker modelling is channel variability across different sessions. A recent approach to channel compensation, based on Support Vector Machines (SVM), is Nuisance Attribute Projection (NAP). The proposed multi-component approach to NAP attempts to compensate for the main sources of inter-session variation through an additional optimization criterion, to allow more accurate estimates of the most dominant channel artefacts and to improve the system performance under mismatched training and test conditions. Another major issue in speaker recognition is that the variability of score distributions due to incompletely modelled regions of the feature space can produce segments of the test speech that are poorly matched to the claimed speaker model. A segment selection technique in score normalization is proposed that relies only on discriminative and reliable segments of the test utterance to verify the speaker. This approach is particularly useful in noisy conditions, where speech activity detection is not reliable at the feature level. Another source of score variability comes from the fact that not all phonemes are equally discriminative. To address this, a new score re-weighting technique is applied to likelihood values based on the discriminative level of each Gaussian component, i.e. each particular region of the feature space. It is found that a limited number of Gaussian mixtures, herein termed discriminative components, are responsible for the overall performance, and that inclusion of the other non-discriminative components may only degrade the system performance.
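As a rough illustration of the Nuisance Attribute Projection step mentioned above: given an orthonormal basis for the estimated channel (nuisance) subspace, NAP projects that subspace out of every supervector before SVM training and scoring. The basis here is random rather than learned, so this is a sketch of the operation only.

import numpy as np

def nap_project(supervectors: np.ndarray, nuisance_basis: np.ndarray) -> np.ndarray:
    """Remove the span of the orthonormal nuisance_basis columns from each row."""
    U = nuisance_basis
    return supervectors - (supervectors @ U) @ U.T

rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(1024, 10)))    # 10 orthonormal "channel" directions
X = rng.normal(size=(5, 1024))                       # five example supervectors
print(np.allclose(nap_project(X, U) @ U, 0.0))       # True: nuisance components are gone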
2

Sarma, Sridevi Vedula. "A segment-based speaker verification system using SUMMIT." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43406.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 75-79).
by Sridevi Vedula Sarma.
M.S.
3

Zhou, Yichao. "Lip password-based speaker verification system with unknown language alphabet." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/562.

Abstract:
Traditional security systems that verify the identity of users based on passwords face the risk of leaking the password contents. To solve this problem, biometrics such as the face, iris, and fingerprint have begun to be widely used to verify people's identity. However, these biometrics cannot be changed if the database is hacked. Moreover, verification systems based on traditional biometrics can be fooled by a fake fingerprint or a photo. Liu and Cheung (2014) recently introduced the concept of the lip password, which is composed of a password embedded in the lip movement and the underlying characteristics of lip motion [26]. Subsequently, a lip password-based system for visual speaker verification has been developed. Such a system is able to detect a target speaker saying the wrong password or an impostor who knows the correct password; that is, only a target user speaking the correct password is accepted by the system. Nevertheless, it recognizes the lip password with a lip-reading algorithm, which needs to know the language alphabet of the password in advance, limiting its applications. To tackle this problem, this thesis studies the lip password-based visual speaker verification system with an unknown language alphabet. First, we propose a method to verify the lip password based on the key frames of lip movement instead of recognizing the individual password elements, so that lip password verification can be performed without knowing the password alphabet beforehand. To detect these key frames, we extract the lip contours and detect the intervals of interest where the lip contours vary significantly. Moreover, to avoid accurate alignment of feature sequences or detection of mouth status, which is computationally expensive, we design a novel overlapping subsequence matching approach to encode the information in lip passwords. This technique works by sampling the feature sequences extracted from lip videos into overlapping subsequences and matching them individually. The log-likelihoods of all subsequences form the final feature of the sequence and are verified by the Euclidean distance to positive sample centers. We evaluate the two proposed methods on a database that contains eight kinds of lip passwords, including English digits and Chinese phrases. Experimental results show the superiority of the proposed methods for visual speaker verification. Next, we propose a novel visual speaker verification approach based on diagonal-like pooling and a pyramid structure of lips. We take advantage of the diagonal structure of sparse representation to preserve the temporal order of lip sequences by employing a diagonal-like mask in the pooling stage, and we build pyramid spatiotemporal features that contain the structural characteristics of the lip password. This approach eliminates the requirement of segmenting the lip password into words or visemes; consequently, a lip password in any language can be used for visual speaker verification. Experiments show the efficacy of the proposed approach compared with state-of-the-art ones. Additionally, to further evaluate the system, we develop a prototype of the lip password-based visual speaker verification system with a Graphical User Interface (GUI) that makes it easy for users to access.
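A hedged sketch of the overlapping-subsequence matching idea summarized above: sample a feature sequence into overlapping windows, score each window against the nearest positive-class centre, and fuse the per-window distances. The window length, hop, feature dimension and centres are all assumptions for illustration.

import numpy as np

def subsequences(seq: np.ndarray, win: int = 10, hop: int = 5) -> list:
    return [seq[s:s + win] for s in range(0, len(seq) - win + 1, hop)]

def sequence_distance(seq: np.ndarray, centers: np.ndarray, win: int = 10, hop: int = 5) -> float:
    """Mean Euclidean distance from each flattened subsequence to its nearest positive centre."""
    dists = [np.min(np.linalg.norm(centers - sub.reshape(-1), axis=1))
             for sub in subsequences(seq, win, hop)]
    return float(np.mean(dists))

rng = np.random.default_rng(0)
centers = rng.normal(size=(4, 10 * 6))         # centres of positive (target) subsequences
probe = rng.normal(size=(40, 6))               # a lip-feature sequence: 40 frames x 6 dims
print(sequence_distance(probe, centers))       # accept the claim if below a tuned threshold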
4

Mtibaa, Aymen. "Towards robust and privacy-preserving speaker verification systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS002.

Abstract:
Speaker verification systems are a key technology in many devices and services such as smartphones, intelligent digital assistants, healthcare, and banking applications. Additionally, with the COVID pandemic, access control systems based on fingerprint scanners or keypads increase the risk of virus propagation, so companies are rethinking their employee access control systems and considering touchless authorization technologies such as speaker verification. However, speaker verification requires users to transmit recordings, features, or models derived from their voice samples, without any obfuscation, over untrusted public networks, to be stored and processed on a cloud-based infrastructure. If the system is compromised, an adversary can use this biometric information to impersonate the genuine user and extract personal information. The voice samples may also reveal the user's gender, accent, ethnicity, and health status, which raises several privacy issues. In this context, this PhD thesis addresses the privacy and security issues of speaker verification systems based on Gaussian mixture models (GMM), i-vectors, and x-vectors as speaker modeling. The objective is the development of speaker verification systems that perform biometric verification while preserving the privacy and security of the user. To that end, we propose biometric protection schemes for speaker verification systems that achieve the privacy requirements (revocability, unlinkability, irreversibility) described in the standard ISO/IEC IS 24745 on biometric information protection and improve the robustness of the systems against different attack scenarios.
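As one generic illustration of the revocability requirement cited above (not the scheme proposed in the thesis): a speaker embedding such as an x-vector can be stored only after a user-specific, replaceable random projection, so that a leaked template can be cancelled by issuing a new key. All names and dimensions below are assumptions.

import numpy as np

def protected_template(embedding: np.ndarray, user_key: int, out_dim: int = 128) -> np.ndarray:
    """Key-dependent orthonormal projection of the embedding; revoke by changing the key."""
    rng = np.random.default_rng(user_key)                # the key/token is stored separately from the template
    P, _ = np.linalg.qr(rng.normal(size=(embedding.size, out_dim)))
    return embedding @ P

xvec = np.random.default_rng(0).normal(size=512)          # placeholder x-vector
old = protected_template(xvec, user_key=1234)
new = protected_template(xvec, user_key=5678)              # re-enrolment after a compromise
print(old.shape, np.allclose(old, new))                    # (128,) False: the two templates are unlinkable

Because the projection approximately preserves distances, matching can still be performed between protected templates produced with the same key.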
5

Li, Yi. "Speaker Diarization System for Call-center data." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286677.

Abstract:
To answer the question of who spoke when, speaker diarization (SD) is a critical step for many speech applications in practice. The task of our project is to build an MFCC-based speaker diarization system on top of a speaker verification system (SV), an existing call-center application that checks a customer's identity from a phone call. Our speaker diarization system uses 13-dimensional MFCCs as features and performs Voice Activity Detection (VAD), segmentation, linear clustering, and hierarchical clustering based on GMMs and the BIC score. By applying it, we decrease the Equal Error Rate (EER) of the SV from 18.1% in the baseline experiment to 3.26% on general call-center conversations. To better analyze and evaluate the system, we also simulated a set of call-center data based on the public ICSI corpus.
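A sketch of the ΔBIC criterion commonly used, as in the system above, to decide during hierarchical clustering whether two speech segments come from the same speaker; the penalty weight and single-Gaussian assumption follow the standard formulation and are not taken from the thesis itself.

import numpy as np

def delta_bic(x: np.ndarray, y: np.ndarray, lam: float = 1.0) -> float:
    """Positive values favour modelling the segments separately (two speakers)."""
    n1, n2, d = len(x), len(y), x.shape[1]
    n = n1 + n2
    def logdet(a):
        return np.linalg.slogdet(np.cov(a, rowvar=False) + 1e-6 * np.eye(d))[1]
    gain = 0.5 * (n * logdet(np.vstack([x, y])) - n1 * logdet(x) - n2 * logdet(y))
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(n)
    return float(gain - penalty)

rng = np.random.default_rng(0)
same = delta_bic(rng.normal(0, 1, (200, 13)), rng.normal(0, 1, (200, 13)))
diff = delta_bic(rng.normal(0, 1, (200, 13)), rng.normal(3, 1, (200, 13)))
print(same < 0, diff > 0)    # True True: merge the first pair, keep the second separate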
6

Guo, Yunfei. "Personalized Voice Activated Grasping System for a Robotic Exoskeleton Glove." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/101751.

Abstract:
Controlling an exoskeleton glove with a highly efficient human-machine interface (HMI) while accurately applying force to each joint remains a hot topic. This paper proposes a fast, secure, accurate, and portable solution to control an exoskeleton glove. This state-of-the-art solution includes both hardware and software components. The exoskeleton glove uses a modified serial elastic actuator (SEA) to achieve accurate force sensing. A portable electronic system is designed based on the SEA to allow force measurement, force application, slip detection, cloud computing, and a power supply that provides over 2 hours of continuous usage. A voice-control-based HMI, referred to as the integrated trigger-word configurable voice activation and speaker verification system (CVASV), is integrated into the robotic exoskeleton glove to perform high-level control. The CVASV HMI is designed for embedded systems with limited computing power to perform voice activation and voice verification simultaneously. The system uses MobileNet as the feature extractor to reduce computational cost. The HMI is tuned to allow better performance in grasping daily objects. This study focuses on applying the CVASV HMI to the exoskeleton glove to perform a stable grasp with force control and slip detection using the SEA-based exoskeleton glove. This research found that using MobileNet as the speaker verification neural network can increase processing speed while maintaining similar verification accuracy.
Master of Science
The robotic exoskeleton glove used in this research is designed to help patients with hand disabilities. This thesis proposes a voice-activated grasping system to control the exoskeleton glove. Here, the user can use a self-defined keyword to activate the exoskeleton and use voice to control it. The voice command system can distinguish between different users' voices, thereby improving the safety of the glove control. A smartphone is used to process the voice commands and send them to an onboard computer on the exoskeleton glove. The exoskeleton glove then accurately applies force to each fingertip using a force-feedback actuator. This study focused on designing a state-of-the-art human-machine interface to control an exoskeleton glove and perform an accurate and stable grasp.
7

Bekli, Zeid, and William Ouda. "A performance measurement of a Speaker Verification system based on a variance in data collection for Gaussian Mixture Model and Universal Background Model." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20122.

Abstract:
Voice recognition has become a more focused and researched field in the last century, and new techniques to identify speech have been introduced. A part of voice recognition is speaker verification, which is divided into a front-end and a back-end. The first component is the front-end, or feature extraction, where techniques such as Mel-Frequency Cepstrum Coefficients (MFCC) are used to extract the speaker-specific features of a speech signal; MFCC is mostly used because it is based on the known variations of the human ear's critical frequency bandwidth. The second component is the back-end, which handles the speaker modeling. The back-end is based on the Gaussian Mixture Model (GMM) and Gaussian Mixture Model-Universal Background Model (GMM-UBM) methods for enrollment and verification of the specific speaker. In addition, normalization techniques such as Cepstral Mean Subtraction (CMS) and feature warping are also used for robustness against noise and distortion. In this paper, we build a speaker verification system, experiment with varying amounts of training data for the true speaker model, and evaluate the system performance. To further investigate the area of security in a speaker verification system, two methods are compared (GMM and GMM-UBM) to determine which is more secure depending on the amount of training data available. This research therefore contributes to understanding how much data is really necessary for a secure system where the False Positive rate is as close to zero as possible, how the amount of training data affects the False Negative (FN) rate, and how this differs between GMM and GMM-UBM. The results show that an increase in speaker-specific training data increases the performance of the system. However, too much training data has been shown to be unnecessary, because the performance of the system eventually reaches its highest point; in this case that was around 48 minutes of data. The results also show that the GMM-UBM models trained on 48 to 60 minutes of data outperformed the GMM models.
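A hedged sketch of the front-end this abstract describes, 13 MFCCs followed by cepstral mean subtraction (CMS); librosa is used here purely as a convenient MFCC implementation, and the synthetic tone stands in for real enrollment speech.

import numpy as np
import librosa

def mfcc_cms(signal: np.ndarray, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """13-dimensional MFCCs with per-coefficient mean subtraction over the utterance."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)     # shape (n_mfcc, frames)
    return mfcc - mfcc.mean(axis=1, keepdims=True)                   # cepstral mean subtraction

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
fake_speech = (0.1 * np.sin(2 * np.pi * 220 * t)).astype(np.float32)
feats = mfcc_cms(fake_speech, sr)
print(feats.shape, float(np.abs(feats.mean(axis=1)).max()))          # residual mean is ~0 after CMS

These CMS-normalized features would then be used to train the GMM or GMM-UBM speaker models compared in the thesis.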
8

Yin, Shou-Chun, 1980-. "Speaker adaptation in joint factor analysis based text independent speaker verification." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100735.

Abstract:
This thesis presents methods for supervised and unsupervised speaker adaptation of Gaussian mixture speaker models in text-independent speaker verification. The proposed methods are based on an approach which is able to separate speaker and channel variability so that progressive updating of speaker models can be performed while minimizing the influence of the channel variability associated with the adaptation recordings. This approach relies on a joint factor analysis model of intrinsic speaker variability and session variability where inter-session variation is assumed to result primarily from the effects of the transmission channel. These adaptation methods have been evaluated under the adaptation paradigm defined under the NIST 2005 speaker recognition evaluation plan which is based on conversational telephone speech.
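For orientation, the joint factor analysis model underlying this abstract decomposes a speaker- and session-dependent mean supervector as M = m + Vy + Ux + Dz, with speaker factors y, channel factors x, and a residual z; the toy dimensions below are arbitrary and the matrices random, purely to show the structure.

import numpy as np

rng = np.random.default_rng(0)
sv_dim, n_spk, n_chan = 1024, 300, 100       # supervector size and factor dimensions (illustrative)
m = rng.normal(size=sv_dim)                  # UBM mean supervector
V = rng.normal(size=(sv_dim, n_spk))         # eigenvoice matrix (speaker variability)
U = rng.normal(size=(sv_dim, n_chan))        # eigenchannel matrix (session variability)
D = rng.normal(size=sv_dim)                  # diagonal residual scaling

y = rng.normal(size=n_spk)                   # speaker factors: updated by speaker adaptation
x = rng.normal(size=n_chan)                  # channel factors: re-estimated for each recording
z = rng.normal(size=sv_dim)                  # speaker-specific residual

M = m + V @ y + U @ x + D * z                # session-dependent speaker supervector
print(M.shape)

Progressive speaker adaptation, as studied in the thesis, amounts to refining y and z across sessions while letting x absorb the channel effects of each new recording.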
9

Chan, Siu Man. "Improved speaker verification with discrimination power weighting /." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202004%20CHANS.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 86-93). Also available in electronic version. Access restricted to campus users.
10

Cilliers, Francois Dirk. "Tree-based Gaussian mixture models for speaker verification." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019.1/1639.


Books on the topic "Speaker verification system":

1

Meisel, William S. The telephony voice user interface: Applications of speech recognition, text-to-speech, and speaker verification over the telephone. Tarzana, CA: TMA Associates, 1998.


Book chapters on the topic "Speaker verification system":

1

Lee, Tae-Seung, and Byong-Won Hwang. "Continuants Based Neural Speaker Verification System." In MICAI 2004: Advances in Artificial Intelligence, 89–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-24694-7_10.

2

Hong, Qingyang, Sheng Wang, and Zhijian Liu. "A Robust Speaker-Adaptive and Text-Prompted Speaker Verification System." In Biometric Recognition, 385–93. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-12484-1_43.

3

Naika, Ravika. "An Overview of Automatic Speaker Verification System." In Intelligent Computing and Information and Communication, 603–10. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7245-1_59.

4

Padrta, Aleš, and Jan Vaněk. "Introduction of Improved UWB Speaker Verification System." In Text, Speech and Dialogue, 364–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11551874_47.

5

Rakhmanenko, Ivan, Evgeny Kostyuchenko, Evgeny Choynzonov, Lidiya Balatskaya, and Alexander Shelupanov. "Score Normalization of X-Vector Speaker Verification System for Short-Duration Speaker Verification Challenge." In Speech and Computer, 457–66. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60276-5_44.

6

Padrta, Aleš, and Jan Vaněk. "A Structure of Expert System for Speaker Verification." In Text, Speech and Dialogue, 493–500. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11846406_62.

7

Sakaguchi, Yuki, Rin Hirakawa, Hideki Kawano, Kenichi Nakashi, and Yoshihisa Nakatoh. "Speaker Verification Method Using HTM for Security System." In Human Interaction, Emerging Technologies and Future Applications III, 160–65. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-55307-4_25.

8

Lee, Younjeong, Changwoo Seo, Joohun Lee, and Ki Yong Lee. "Speaker Verification System for PDA in Mobile-Commerce." In Web and Communication Technologies and Internet-Related Social Issues — HSI 2003, 668–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-45036-x_73.

9

Ganchev, Todor, Nikos Fakotakis, and George Kokkinakis. "Text-Independent Speaker Verification: The WCL-1 System." In Text, Speech and Dialogue, 263–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39398-6_37.

10

Lavrentyeva, Galina, Sergey Novoselov, and Konstantin Simonchik. "Anti-spoofing Methods for Automatic Speaker Verification System." In Communications in Computer and Information Science, 172–84. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52920-2_17.


Conference papers on the topic "Speaker verification system":

1

Zheng, Yu, Jinghan Peng, Yihao Chen, Yajun Zhang, Jialong Wang, Min Liu, and Minqiang Xu. "The SpeakIn Speaker Verification System for Far-Field Speaker Verification Challenge 2022." In The 2022 Far-field Speaker Verification Challenge (FFSVC2022). ISCA: ISCA, 2022. http://dx.doi.org/10.21437/ffsvc.2022-4.

2

Kahn, Juliette, Nicolas Audibert, Solange Rossato, and Jean-Francois Bonastre. "Speaker verification by inexperienced and experienced listeners vs. speaker verification system." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947707.

3

Zhang, Li, Jian Wu, and Lei Xie. "NPU Speaker Verification System for INTERSPEECH 2020 Far-Field Speaker Verification Challenge." In Interspeech 2020. ISCA: ISCA, 2020. http://dx.doi.org/10.21437/interspeech.2020-2688.

4

Lei, Yuan, Zhou Cao, Dehui Kong, and Ke Xu. "ZXIC Speaker Verification System for FFSVC 2022 Challenge." In The 2022 Far-field Speaker Verification Challenge (FFSVC2022). ISCA: ISCA, 2022. http://dx.doi.org/10.21437/ffsvc.2022-1.

5

You, Changhuai, Kong Aik Lee, Bin Ma, and Haizhou Li. "Text-Dependent Speaker Verification System in VHF Communication Channel." In The Speaker and Language Recognition Workshop (Odyssey 2014). ISCA: ISCA, 2014. http://dx.doi.org/10.21437/odyssey.2014-33.

6

Lee, Kong Aik, Hitoshi Yamamoto, Koji Okabe, Qiongqiong Wang, Ling Guo, Takafumi Koshinaka, Jiacen Zhang, and Koichi Shinoda. "The NEC-TT 2018 Speaker Verification System." In Interspeech 2019. ISCA: ISCA, 2019. http://dx.doi.org/10.21437/interspeech.2019-1517.

7

Faisal, Muhammad Yusuf, and Suyanto Suyanto. "SpecAugment Impact on Automatic Speaker Verification System." In 2019 International Seminar on Research of Information Technology and Intelligent Systems (ISRITI). IEEE, 2019. http://dx.doi.org/10.1109/isriti48646.2019.9034603.

8

Dehak, Najim, Zahi N. Karam, Douglas A. Reynolds, Reda Dehak, William M. Campbell, and James R. Glass. "A channel-blind system for speaker verification." In ICASSP 2011 - 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011. http://dx.doi.org/10.1109/icassp.2011.5947363.

9

Zajic, Zbynek, and Marek Hruz. "Fisher vectors in PLDA speaker verification system." In 2016 IEEE 13th International Conference on Signal Processing (ICSP). IEEE, 2016. http://dx.doi.org/10.1109/icsp.2016.7878044.

10

Debnath, Saswati, B. Soni, U. Baruah, and D. K. Sah. "Text-dependent speaker verification system: A review." In 2015 IEEE 9th International Conference on Intelligent Systems and Control (ISCO). IEEE, 2015. http://dx.doi.org/10.1109/isco.2015.7282386.


Reports on the topic "Speaker verification system":

1

Heide, David A. Evaluation and Improvement of a Speaker Verification System in Military Environments. Fort Belvoir, VA: Defense Technical Information Center, December 2002. http://dx.doi.org/10.21236/ada409777.

2

Mizrach, Amos, Michal Mazor, Amots Hetzroni, Joseph Grinshpun, Richard Mankin, Dennis Shuman, Nancy Epsky, and Robert Heath. Male Song as a Tool for Trapping Female Medflies. United States Department of Agriculture, December 2002. http://dx.doi.org/10.32747/2002.7586535.bard.

Abstract:
This interdisciplinary work combines expertise in engineering and entomology in Israel and the US to develop an acoustic trap for mate-seeking female medflies. Medflies are among the world's most economically harmful pests, and monitoring and control efforts cost about $800 million each year in Israel and the US. Efficient traps are vitally important tools for medfly quarantine and pest management activities; they are needed for early detection, for predicting dispersal patterns and for estimating medfly abundance within infested regions. Early detection facilitates rapid response to invasions, in order to contain them. Prediction of dispersal patterns facilitates preemptive action, and estimates of the pests' abundance lead to quantification of medfly infestations and control efforts. Although olfactory attractants and traps exist for capturing male and mated female medflies, there are still no satisfactorily efficient means to attract and trap virgin and remating females (a significant and dangerous segment of the population). We proposed to explore the largely ignored mechanism of female attraction to male song that the flies use in courtship. The potential of such an approach is indicated by studies under this project. Our research involved the identification, isolation, and augmentation of the most attractive components of male medfly songs and the use of these components in the design and testing of traps incorporating acoustic lures. The project combined expertise in acoustic engineering and instrumentation, fruit fly behavior, and integrated pest management. The BARD support was provided for 1 year to enable proof-of-concept studies, aimed to determine: 1) whether mate-seeking female medflies are attracted to male songs; and 2) over what distance such attraction works. Male medfly calling song was recorded during courtship. Multiple acoustic components of male song were examined and tested for synergism with substrate vibrations produced by various surfaces, plates and loudspeakers, with natural and artificial sound playbacks. A speaker-funnel system was developed that focused the playback signal to reproduce as closely as possible the near-field spatial characteristics of the sounds produced by individual males. In initial studies, the system was tested by observing the behavior of females while the speaker system played songs at various intensities. Through morning and early afternoon periods of peak sexual activity, virgin female medflies landed on a sheet of filter paper at the funnel outlet and stayed longer during broadcasting than during the silent part of the cycle. In later studies, females were captured on sticky paper at the funnel outlet. The mean capture rates were 67 and 44%, respectively, during sound emission and silent control periods. The findings confirmed that female trapping was improved if a male calling song was played. The second stage of the research focused on estimating the trapping range. Initial results indicated that the range possibly extended to 70 cm, but additional verification tests remain to be conducted. Further studies are also planned to consider the effects of combining acoustic and pheromonal cues.
