Academic literature on the topic 'Voice identification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Voice identification.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Voice identification"

1

Hammarstrom, C. "Voice Identification." Australian Journal of Forensic Sciences 19, no. 3 (March 1987): 95–99. http://dx.doi.org/10.1080/00450618709410271.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Plante-Hébert, Julien, Victor J. Boucher, and Boutheina Jemel. "The processing of intimately familiar and unfamiliar voices: Specific neural responses of speaker recognition and identification." PLOS ONE 16, no. 4 (April 16, 2021): e0250214. http://dx.doi.org/10.1371/journal.pone.0250214.

Full text
Abstract:
Research has repeatedly shown that familiar and unfamiliar voices elicit different neural responses. But it has also been suggested that different neural correlates associate with the feeling of having heard a voice and knowing who the voice represents. The terminology used to designate these varying responses remains vague, creating a degree of confusion in the literature. Additionally, terms serving to designate tasks of voice discrimination, voice recognition, and speaker identification are often inconsistent, creating further ambiguities. The present study used event-related potentials (ERPs) to clarify the difference between responses to 1) unknown voices, 2) trained-to-familiar voices as speech stimuli are repeatedly presented, and 3) intimately familiar voices. In an experiment, 13 participants listened to repeated utterances recorded from 12 speakers. Only one of the 12 voices was intimately familiar to a participant, whereas the remaining 11 voices were unfamiliar. The frequency of presentation of these 11 unfamiliar voices varied, with only one being frequently presented (the trained-to-familiar voice). ERP analyses revealed different responses for intimately familiar and unfamiliar voices in two distinct time windows (P2 between 200–250 ms and a late positive component, LPC, between 450–850 ms post-onset), with late responses occurring only for intimately familiar voices. The LPC presents sustained shifts, and short-time ERP components appear to reflect an early recognition stage. The trained voice equally elicited distinct responses, compared to rarely heard voices, but these occurred in a third time window (N250 between 300–350 ms post-onset). Overall, the timing of responses suggests that the processing of intimately familiar voices operates in two distinct steps: voice recognition, marked by a P2 on right centro-frontal sites, and speaker identification, marked by an LPC component. The recognition of frequently heard voices entails an independent recognition process marked by a differential N250. Based on the present results and previous observations, it is proposed that there is a need to distinguish between processes of voice “recognition” and “identification”. The present study also specifies test conditions serving to reveal this distinction in neural responses, one of which bears on the length of speech stimuli given the late responses associated with voice identification.
APA, Harvard, Vancouver, ISO, and other styles
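The component windows reported in the abstract above (P2 at 200–250 ms, N250 at 300–350 ms, LPC at 450–850 ms post-onset) lend themselves to simple mean-amplitude measures. A minimal NumPy sketch of that step only, assuming epoched EEG stored as an array of shape (trials, channels, samples); the variable names and window layout are illustrative and are not taken from the study's analysis code:

    import numpy as np

    def window_mean_amplitude(epochs, sfreq, tmin, windows):
        # epochs: (n_trials, n_channels, n_samples); sfreq in Hz;
        # tmin: time of the first sample relative to stimulus onset (s);
        # windows: {label: (start_s, end_s)} relative to onset.
        erp = epochs.mean(axis=0)                     # average over trials -> (channels, samples)
        out = {}
        for label, (start, end) in windows.items():
            i0 = int(round((start - tmin) * sfreq))
            i1 = int(round((end - tmin) * sfreq))
            out[label] = erp[:, i0:i1].mean(axis=1)   # mean amplitude per channel
        return out

    # Toy usage with the windows named in the abstract (seconds)
    windows = {"P2": (0.200, 0.250), "N250": (0.300, 0.350), "LPC": (0.450, 0.850)}
    epochs = np.random.default_rng(0).normal(size=(40, 32, 600))   # 40 trials, 32 channels
    print(window_mean_amplitude(epochs, sfreq=500.0, tmin=-0.2, windows=windows))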
3

Adhyke, Yuzy Prila, Anis Eliyana, Ahmad Rizki Sridadi, Dina Fitriasia Septiarini, and Aisha Anwar. "Hear Me Out! This Is My Idea: Transformational Leadership, Proactive Personality and Relational Identification." SAGE Open 13, no. 1 (January 2023): 215824402211458. http://dx.doi.org/10.1177/21582440221145869.

Full text
Abstract:
This study proposes a relationship between transformational leadership and employee voice, with relational identification as a mediator and proactive personality as a moderator. Structural equation modeling was used to analyze data gathered through questionnaires from employees at the Ministry of Law and Human Rights. The findings revealed that transformational leadership has a significant effect on employee voice and relational identification; relational identification mediates the relationship between transformational leadership and employee voice behavior; and proactive personality weakens the effect of transformational leadership on employee voice behavior. This study adds to the empirical evidence that employee voice can represent the opinions and ideas of employees in the presence of relational identification, proactive personality, and transformational leadership in the organization. Furthermore, transformational leadership can build relational identification that is strengthened by a proactive personality, so that employees are more willing to convey their voices.
APA, Harvard, Vancouver, ISO, and other styles
4

McGorrery, Paul Gordon, and Marilyn McMahon. "A fair ‘hearing’." International Journal of Evidence & Proof 21, no. 3 (February 17, 2017): 262–86. http://dx.doi.org/10.1177/1365712717690753.

Full text
Abstract:
Voice identification evidence, identifying an offender by the sound of their voice, is sometimes the only means of identifying someone who has committed a crime. Auditory memory is, however, associated with poorer performance than visual memory, and is subject to distinctive sources of unreliability. Consequently, it is important for investigating authorities to adopt appropriate strategies when dealing with voice identification, particularly when the identification involves a voice previously unknown to the witness. Appropriate voice identification parades conducted by police can offer an otherwise unavailable means of identifying the offender. This article suggests some ‘best practice’ techniques for voice identification parades and then, using reported Australian criminal cases as case studies, evaluates voice identification parade procedures used by police. Overall, we argue that the case studies reveal practices that are inconsistent with current scientific understandings about auditory memory and voice identifications, and that courts are insufficiently attending to the problems associated with this evidence.
APA, Harvard, Vancouver, ISO, and other styles
5

Sabir, Brahim, Fatima Rouda, Yassine Khazri, Bouzekri Touri, and Mohamed Moussetad. "Improved Algorithm for Pathological and Normal Voices Identification." International Journal of Electrical and Computer Engineering (IJECE) 7, no. 1 (February 1, 2017): 238. http://dx.doi.org/10.11591/ijece.v7i1.pp238-243.

Full text
Abstract:
Many papers address automatic classification of normal versus pathological voices, but they lack an estimate of the degree of severity of the identified voice disorders. This work builds a model for identifying pathological and normal voices that can also evaluate the degree of severity of the identified voice disorders among students. We present an automatic classifier that uses acoustic measurements of recorded sustained /a/ vowels and pattern-recognition tools based on neural networks. The training set was built by classifying students’ recorded voices against thresholds from the literature. We retrieve the pitch, jitter, shimmer, and harmonic-to-noise ratio values of the speech utterance /a/, which constitute the input vector of the neural network. The degree of severity is estimated by evaluating how far the parameters are from the standard values, based on the percentage of normal and pathological values. The data used for testing the proposed neural-network algorithm consist of healthy and pathological voices from a German database of voice disorders. The performance of the proposed algorithm is evaluated in terms of accuracy (97.9%), sensitivity (1.6%), and specificity (95.1%). The classification rate is 90% for the normal class and 95% for the pathological class.
APA, Harvard, Vancouver, ISO, and other styles
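The feature vector described in the entry above (pitch, jitter, shimmer, and harmonic-to-noise ratio on sustained /a/) relies on standard cycle-to-cycle perturbation measures. A rough sketch of two of them, assuming the period and peak-amplitude sequences have already been extracted (in practice usually with a tool such as Praat); the neural-network classifier and the threshold-based labelling are not reproduced here:

    import numpy as np

    def local_jitter(periods):
        # Mean absolute difference of consecutive glottal periods, relative to the mean period.
        periods = np.asarray(periods, dtype=float)
        return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

    def local_shimmer(amplitudes):
        # Same idea applied to cycle peak amplitudes.
        amplitudes = np.asarray(amplitudes, dtype=float)
        return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

    # Toy example: a slightly irregular ~100 Hz voice
    rng = np.random.default_rng(1)
    periods = 0.010 + rng.normal(0, 1e-4, size=200)      # seconds per cycle
    amps = 1.0 + rng.normal(0, 0.02, size=200)
    features = [1.0 / periods.mean(), local_jitter(periods), local_shimmer(amps)]
    print(features)   # [mean F0, jitter, shimmer]: candidates for a classifier input vector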
6

R Hanji, Bhagyashri, Sanjay T. J, Shivam Upadhyay, Tarun M, and Yashwanthgowda H. R. "Voice Grounded Gender Identification." Journal of Web Development and Web Designing 05, no. 02 (July 1, 2020): 20–25. http://dx.doi.org/10.46610/jowdwd.2020.v05i02.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Fujimoto, Junichiroh. "Identification of voice pattern." Journal of the Acoustical Society of America 94, no. 6 (December 1993): 3539. http://dx.doi.org/10.1121/1.407114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ladefoged, Peter. "Validity of voice identification." Journal of the Acoustical Society of America 114, no. 4 (October 2003): 2403. http://dx.doi.org/10.1121/1.4778312.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kilgore, Ryan, and Mark Chignell. "Simple Visualizations Enhance Speaker Identification when Listening to Spatialized Voices." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 49, no. 4 (September 2005): 615–18. http://dx.doi.org/10.1177/154193120504900403.

Full text
Abstract:
Spatial audio has been demonstrated to enhance performance in a variety of listening tasks. The utility of visually reinforcing spatialized audio with depictions of voice locations in collaborative applications, however, has been questioned. In this experiment, we compared the accuracy, response time, confidence in task performance, and subjective mental workload of 18 participants in a voice-identification task under three different display conditions: 1) traditional mono audio; 2) spatial audio; 3) spatial audio with a visual representation of voice locations. Each format was investigated using four and eight unique stimulus voices. Results showed greater voice-identification accuracy for the spatial-plus-visual format than for the spatial- and mono-only formats, and that visualization benefits increased with the number of voices. Spatialization was also found to increase confidence in task performance. Response time and mental workload remained unchanged across display conditions. These results indicate visualizations may benefit users of large, unfamiliar audio spaces.
APA, Harvard, Vancouver, ISO, and other styles
10

Liang, Tsang-Lang, Hsueh-Feng Chang, Ming-Hsiang Ko, and Chih-Wei Lin. "Transformational leadership and employee voices in the hospitality industry." International Journal of Contemporary Hospitality Management 29, no. 1 (January 9, 2017): 374–92. http://dx.doi.org/10.1108/ijchm-07-2015-0364.

Full text
Abstract:
Purpose: This study aims to explore the relationship between transformational leadership and employee voice behavior and the role of relational identification and work engagement as mediators in the same. Design/methodology/approach: This study uses structural equation modeling to analyze the data from a questionnaire survey of 251 Taiwanese hospitality industry employees. Findings: The findings demonstrate that transformational leadership has significant relationships with relational identification, work engagement and employee voice behavior and that relational identification and work engagement sequentially mediate between transformational leadership and employee voice behavior. Practical implications: The results of this study provide insights into the intervening mechanisms linking leaders’ behavior with employees’ voices, while also highlighting the potential importance of relational identification in organizations, especially concerning the enhancement of employees’ work engagement and voice. Originality/value: The findings reveal the mechanisms by which supervisors’ transformational leadership encourages employees to voice their suggestions, providing empirical evidence of the sequential mediation of relational identification and work engagement. The results help clarify the psychological process by which leaders influence their followers.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Voice identification"

1

Kisel, Andrej. "Person Identification by Fingerprints and Voice." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20101230_093643-05320.

Full text
Abstract:
This dissertation focuses on person identification problems and proposes solutions to overcome them. The first part concerns performance evaluation of fingerprint feature extraction algorithms; modifications to a known synthesis algorithm are proposed to make it fast and suitable for performance evaluation. Matching of deformed fingerprints is discussed in the second part of the work, and a new fingerprint matching algorithm that uses local structures and does not perform fingerprint alignment is proposed for matching deformed fingerprints. The use of group delay features of the linear prediction model for speaker recognition is proposed in the third part of the work, together with a new similarity metric that uses these features; it is demonstrated that an automatic speaker recognition system with the proposed features and similarity metric outperforms traditional speaker identification systems. Multibiometrics using fingerprints and voice is addressed in the last part of the dissertation.
The five parts of this dissertation examine problems of person identification by fingerprints and voice and propose solutions to them. The problem of evaluating the quality of fingerprint feature extraction algorithms is addressed using synthesized fingerprints: modifications to a known fingerprint synthesis algorithm are proposed that allow a fingerprint image to be created with predetermined characteristics and features, and that speed up the synthesis process. Problems of fingerprint feature matching are discussed, and a new matching algorithm is proposed for comparing deformed fingerprints; its quality is evaluated on publicly available and in-house databases. A new method of voice-based person identification, based on group delay features of the linear prediction model and a similarity metric for those features, outperforms traditional voice-based identification methods. The independence of fingerprint and voice records is demonstrated, and combined person recognition by voice and fingerprints is proposed to address common problems of biometric systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Gudnason, Jon. "Voice source cepstrum processing for speaker identification." Thesis, Imperial College London, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.439448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Iliadi, Konstantina. "Bio-inspired voice recognition for speaker identification." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/413949/.

Full text
Abstract:
Speaker identification (SID) aims to identify the underlying speaker(s) given a speech utterance. In a speaker identification system, the first component is the front-end or feature extractor. Feature extraction transforms the raw speech signal into a compact but effective representation that is more stable and discriminative than the original signal. Since the front-end is the first component in the chain, the quality of the later components is strongly determined by its quality. Existing approaches have used several feature extraction methods that have been adopted directly from the speech recognition task. However, the nature of these two tasks is contradictory given that speaker variability is one of the major error sources in speech recognition whereas in speaker recognition, it is the information that we wish to extract. In this thesis, the possible benefits of adapting a biologically-inspired model of human auditory processing as part of the front-end of a SID system are examined. This auditory model named Auditory Image Model (AIM) generates the stabilized auditory image (SAI). Features are extracted by the SAI through breaking it into boxes of different scales. Vector quantization (VQ) is used to create the speaker database with the speakers’ reference templates that will be used for pattern matching with the features of the target speakers that need to be identified. Also, these features are compared to the Mel-frequency cepstral coefficients (MFCCs), which is the most evident example of a feature set that is extensively used in speaker recognition but originally developed for speech recognition purposes. Additionally, another important parameter in SID systems is the dimensionality of the features. This study addresses this issue by specifying the most speaker-specific features and trying to further improve the system configuration for obtaining a representation of the auditory features with lower dimensionality. Furthermore, after evaluating the system performance in quiet conditions, another primary topic of speaker recognition is investigated. SID systems can perform well under matched training and test conditions but their performance degrades significantly because of the mismatch caused by background noise in real-world environments. Achieving robustness to SID systems becomes an important research problem. In the second experimental part of this thesis, the developed version of the system is assessed for speaker data sets of different size. Clean speech is used for the training phase while speech in the presence of babble noise is used for speaker testing. The results suggest that the extracted auditory feature vectors lead to much better performance, i.e. higher SID accuracy, compared to the MFCC-based recognition system especially for low SNRs. Lastly, the system performance is inspected with regard to parameters related to the training and test speech data such as the duration of the spoken material. From these experiments, the system is found to produce satisfying identification scores for relatively short training and test speech segments.
APA, Harvard, Vancouver, ISO, and other styles
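The MFCC-plus-vector-quantization baseline that this thesis compares its auditory features against can be prototyped in a few lines. A minimal sketch assuming librosa and scikit-learn are available, with KMeans codebooks standing in for the VQ stage; the 13 coefficients and 64-entry codebooks are illustrative defaults rather than the thesis configuration, and the AIM/SAI front-end itself is not reproduced:

    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def mfcc_frames(path, n_mfcc=13):
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T       # (frames, n_mfcc)

    def train_codebooks(train_files, codebook_size=64):
        # train_files: dict mapping speaker_id -> list of wav paths
        books = {}
        for spk, files in train_files.items():
            feats = np.vstack([mfcc_frames(f) for f in files])
            books[spk] = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(feats)
        return books

    def identify(path, books):
        # Pick the speaker whose codebook gives the lowest average quantization distortion.
        feats = mfcc_frames(path)
        def distortion(km):
            return km.transform(feats).min(axis=1).mean()
        return min(books, key=lambda spk: distortion(books[spk]))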
4

Haider, Zargham. "Robust speaker identification against computer aided voice impersonation." Thesis, University of Surrey, 2011. http://epubs.surrey.ac.uk/770387/.

Full text
Abstract:
Speaker Identification (SID) systems offer good performance in the case of noise-free speech, and most of the ongoing research aims at improving their reliability in noisy environments. In ideal operating conditions very low identification error rates can be achieved. The low error rates suggest that SID systems can be used in real-life applications as an extra layer of security along with existing secure layers. They can, for instance, be used alongside a Personal Identification Number (PIN) or passwords. SID systems can also be used by law enforcement agencies as a detection system to track wanted people over voice communications networks. In this thesis, the performance of the existing SID systems against impersonation attacks is analysed and strategies to counteract them are discussed. A voice impersonation system is developed using Gaussian Mixture Modelling (GMM), utilizing Line Spectral Frequencies (LSF) as the features representing the spectral parameters of the source-target pair. Voice conversion systems based on probabilistic approaches suffer from the problem of over-smoothing of the converted spectrum. A hybrid scheme using Linear Multivariate Regression and GMM, together with posterior probability smoothing, is proposed to reduce over-smoothing and alleviate the discontinuities in the converted speech. The converted voices are used to intrude on a closed-set SID system in the scenarios of identity disguise and targeted speaker impersonation. The results of the intrusion suggest that in their present form the SID systems are vulnerable to deliberate voice conversion attacks. For impostors to transform their voices, a large volume of speech data is required, which may not be easily accessible. In the context of improving the performance of SID against deliberate impersonation attacks, the use of multiple classifiers is explored. The Linear Prediction (LP) residual of the speech signal is also analysed for speaker-specific excitation information. A speaker identification system based on a multiple-classifier system, using features to describe the vocal tract and the LP residual, is targeted by the impersonation system. The identification results provide an improvement in rejecting impostor claims when presented with converted voices. It is hoped that the findings in this thesis can lead to the development of speaker identification systems which are better equipped to deal with the problem of deliberate voice impersonation.
APA, Harvard, Vancouver, ISO, and other styles
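The conversion system described in this entry represents the spectral envelope with line spectral frequencies (LSFs). As background, one common way to obtain LSFs from LP coefficients is via the roots of the symmetric and antisymmetric polynomials built from A(z). A sketch under the assumption of an even LP order, with librosa used only for the LP analysis; the GMM mapping and the multiple-classifier SID system are not reproduced:

    import numpy as np
    import librosa

    def lpc_to_lsf(a):
        # a = [1, a1, ..., ap] from LP analysis; returns p line spectral frequencies in radians.
        a = np.asarray(a, dtype=float)
        a_ext = np.concatenate([a, [0.0]])
        P = a_ext + a_ext[::-1]                      # symmetric polynomial
        Q = a_ext - a_ext[::-1]                      # antisymmetric polynomial
        # For even LP order, P(z) has a fixed root at z = -1 and Q(z) at z = +1; divide them out.
        P = np.polydiv(P, [1.0, 1.0])[0]
        Q = np.polydiv(Q, [1.0, -1.0])[0]
        angles = np.angle(np.concatenate([np.roots(P), np.roots(Q)]))
        return np.sort(angles[angles > 0])           # keep one angle from each conjugate pair

    # Toy frame: a vowel-like mix of two tones at 16 kHz, order-16 LP model
    sr = 16000
    t = np.arange(400) / sr
    frame = (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)) * np.hamming(400)
    a = librosa.lpc(frame, order=16)
    print(lpc_to_lsf(a) * sr / (2 * np.pi))          # LSFs in Hz, ascending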
5

Atkinson, Nathan. "Variable factors affecting voice identification in forensic contexts." Thesis, University of York, 2015. http://etheses.whiterose.ac.uk/13013/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fredrickson, Steven Eric. "Neural networks for speaker identification." Thesis, University of Oxford, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gillivan-Murphy, Patricia. "Voice tremor in Parkinson's disease (PD) : identification, characterisation and relationship with speech, voice and disease variables." Thesis, University of Newcastle upon Tyne, 2013. http://hdl.handle.net/10443/2170.

Full text
Abstract:
Voice tremor is associated with Parkinson’s disease (PD); however, little is known about the precise characteristics of PD voice tremor, optimum methods of evaluation, or possible relationships with other speech, voice, and disease variables. The question of possible differences between voice tremor in people with PD (pwPD) and neurologically healthy ageing people has not been addressed. Thirty pwPD ‘off-medication’ and twenty-eight age- and sex-matched neurologically healthy controls were evaluated for voice tremor features using acoustic measurement, auditory-perceptual voice rating, and nasendoscopic vocal tract examination. Speech intelligibility, severity of voice impairment, voice disability, and disease variables (duration, disability, motor symptom severity, phenotype) were measured and examined for relationships with acoustic voice tremor measures. Results showed that pwPD were more likely to show greater auditory-perceived voice instability and a greater magnitude of frequency and amplitude tremor in comparison to controls, although without statistical significance. PwPD had a higher rate of amplitude tremor than controls (p<0.05). Judged from ‘silent’ video recordings of nasendoscopic examination, pwPD had a greater amount of tremor in the palate, tongue, and global larynx (vertical dimension) than controls during rest breathing and sustained /s/, /a/ and /i/ (p<0.05). Acoustic voice tremor did not relate significantly to other speech and voice variables. PwPD had a significantly higher voice disability than controls (p<0.05), though this was independent of voice tremor. The magnitude of frequency tremor was positively associated with disease duration (p<0.05). A lower rate of amplitude tremor was associated with an increase in motor symptom severity (p<0.05). Acoustic voice tremor did not relate in any significant way to PD disability or phenotype. PD voice tremor is characterised by auditory-perceived instability and tremor, a mean amplitude tremor of 4.94 Hz, and tremor in vocal tract structures. Acoustic analysis and nasendoscopy proved valuable adjunctive tools for characterising voice tremor. Voice tremor is not present in all people with PD, but does appear to increase with disease duration. However, the pwPD examined here represent a relatively mild group with relatively short disease duration. Further work will look at people with more severe disease symptomatology and longer duration.
APA, Harvard, Vancouver, ISO, and other styles
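The acoustic tremor measures mentioned in the entry above (rate and magnitude of amplitude and frequency tremor) can be approximated from the modulation of the short-time intensity contour of a sustained vowel. A rough sketch of one such measure, the amplitude-tremor rate, assuming a simple RMS envelope and a 3–15 Hz tremor search band; this illustrates the idea only and is not the measurement protocol used in the thesis:

    import numpy as np

    def amplitude_tremor_rate(y, sr, frame_ms=20, band=(3.0, 15.0)):
        # Dominant modulation frequency (Hz) of the short-time RMS envelope within `band`.
        hop = int(sr * frame_ms / 1000)
        env = np.array([np.sqrt(np.mean(y[i:i + hop] ** 2))
                        for i in range(0, len(y) - hop, hop)])
        env = env - env.mean()
        fs_env = 1000.0 / frame_ms                       # envelope sampling rate
        spec = np.abs(np.fft.rfft(env))
        freqs = np.fft.rfftfreq(len(env), d=1.0 / fs_env)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(freqs[mask][np.argmax(spec[mask])])

    # Synthetic sustained vowel with a 5 Hz amplitude tremor
    sr = 16000
    t = np.arange(3 * sr) / sr
    y = (1 + 0.3 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 150 * t)
    print(amplitude_tremor_rate(y, sr))                  # close to 5 Hz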
8

Wildermoth, Brett Richard. "Text-Independent Speaker Recognition Using Source Based Features." Griffith University. School of Microelectronic Engineering, 2001. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20040831.115646.

Full text
Abstract:
The speech signal is basically meant to carry information about the linguistic message, but it also contains speaker-specific information. It is generated by acoustically exciting the cavities of the mouth and nose, and can be used to recognize (identify/verify) a person. This thesis deals with the speaker identification task; i.e., to find the identity of a person using his/her speech from a group of persons already enrolled during the training phase. Listeners use many audible cues in identifying speakers. These cues range from high-level cues such as the semantics and linguistics of the speech, to low-level cues relating to the speaker's vocal tract and voice source characteristics. Generally, the vocal tract characteristics are modeled in modern-day speaker identification systems by cepstral coefficients. Although these coefficients are good at representing vocal tract information, they can be supplemented by using both pitch and voicing information. Pitch provides very important and useful information for identifying speakers. In current speaker recognition systems it is very rarely used, as it cannot be reliably extracted and is not always present in the speech signal. In this thesis, an attempt is made to utilize this pitch and voicing information for speaker identification. This thesis illustrates, through the use of a text-independent speaker identification system, the reasonable performance of the cepstral coefficients, achieving an identification error of 6%. Using pitch as a feature in a straightforward manner results in identification errors in the range of 86% to 94%, and this is not very helpful. The two main reasons why the direct use of pitch as a feature does not work for speaker recognition are listed below. First, the speech is not always periodic; only about half of the frames are voiced. Thus, pitch cannot be estimated for half of the frames (i.e. for unvoiced frames). The problem is how to account for pitch information for the unvoiced frames during the recognition phase. Second, the pitch estimation methods are not very reliable. They classify some of the frames as unvoiced when they are really voiced. Also, they make pitch estimation errors (such as doubling or halving of the pitch value, depending on the method). In order to use pitch information for speaker recognition, we have to overcome these problems. We need a method which does not use the pitch value directly as a feature and which works for voiced as well as unvoiced frames in a reliable manner. We propose here a method which uses the autocorrelation function of the given frame to derive pitch-related features. We call these features the maximum autocorrelation value (MACV) features. These features can be extracted for voiced as well as unvoiced frames and do not suffer from the pitch doubling or halving type of pitch estimation errors. Using these MACV features along with the cepstral features, the speaker identification performance is improved by 45%.
APA, Harvard, Vancouver, ISO, and other styles
9

Wildermoth, Brett Richard. "Text-Independent Speaker Recognition Using Source Based Features." Thesis, Griffith University, 2001. http://hdl.handle.net/10072/366289.

Full text
Abstract:
The speech signal is basically meant to carry information about the linguistic message, but it also contains speaker-specific information. It is generated by acoustically exciting the cavities of the mouth and nose, and can be used to recognize (identify/verify) a person. This thesis deals with the speaker identification task; i.e., to find the identity of a person using his/her speech from a group of persons already enrolled during the training phase. Listeners use many audible cues in identifying speakers. These cues range from high-level cues such as the semantics and linguistics of the speech, to low-level cues relating to the speaker's vocal tract and voice source characteristics. Generally, the vocal tract characteristics are modeled in modern-day speaker identification systems by cepstral coefficients. Although these coefficients are good at representing vocal tract information, they can be supplemented by using both pitch and voicing information. Pitch provides very important and useful information for identifying speakers. In current speaker recognition systems it is very rarely used, as it cannot be reliably extracted and is not always present in the speech signal. In this thesis, an attempt is made to utilize this pitch and voicing information for speaker identification. This thesis illustrates, through the use of a text-independent speaker identification system, the reasonable performance of the cepstral coefficients, achieving an identification error of 6%. Using pitch as a feature in a straightforward manner results in identification errors in the range of 86% to 94%, and this is not very helpful. The two main reasons why the direct use of pitch as a feature does not work for speaker recognition are listed below. First, the speech is not always periodic; only about half of the frames are voiced. Thus, pitch cannot be estimated for half of the frames (i.e. for unvoiced frames). The problem is how to account for pitch information for the unvoiced frames during the recognition phase. Second, the pitch estimation methods are not very reliable. They classify some of the frames as unvoiced when they are really voiced. Also, they make pitch estimation errors (such as doubling or halving of the pitch value, depending on the method). In order to use pitch information for speaker recognition, we have to overcome these problems. We need a method which does not use the pitch value directly as a feature and which works for voiced as well as unvoiced frames in a reliable manner. We propose here a method which uses the autocorrelation function of the given frame to derive pitch-related features. We call these features the maximum autocorrelation value (MACV) features. These features can be extracted for voiced as well as unvoiced frames and do not suffer from the pitch doubling or halving type of pitch estimation errors. Using these MACV features along with the cepstral features, the speaker identification performance is improved by 45%.
Thesis (Masters), Master of Philosophy (MPhil), School of Microelectronic Engineering, Faculty of Engineering and Information Technology.
Full Text
APA, Harvard, Vancouver, ISO, and other styles
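Entries 8 and 9 are two library records for the same thesis; its maximum autocorrelation value (MACV) idea can be illustrated in a few lines. A simplified single-value-per-frame reading is sketched below, assuming 16 kHz audio and a 40–400 Hz pitch-lag search range; the thesis's actual MACV definition may differ in detail (for example, several values per frame), so treat this only as an illustration of deriving a pitch-related, voicing-robust feature from the frame autocorrelation:

    import numpy as np

    def macv(frame, sr=16000, fmin=40.0, fmax=400.0):
        # Maximum of the normalized autocorrelation within the plausible pitch-lag range.
        # Defined for unvoiced frames too, which is the point of not using raw pitch.
        frame = np.asarray(frame, dtype=float)
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        if ac[0] <= 0:
            return 0.0                                   # silent frame
        ac = ac / ac[0]                                  # normalize by zero-lag energy
        lo = int(sr / fmax)                              # shortest lag (highest pitch)
        hi = min(int(sr / fmin), len(ac) - 1)            # longest lag (lowest pitch)
        return float(ac[lo:hi + 1].max())

    # A periodic (voiced-like) frame scores near 1.0; a noise frame scores much lower.
    sr = 16000
    t = np.arange(int(0.030 * sr)) / sr
    voiced = np.sign(np.sin(2 * np.pi * 120 * t))
    noise = np.random.default_rng(2).normal(size=t.size)
    print(macv(voiced, sr), macv(noise, sr))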
10

Rouse, Kenneth Arthur Gilbert Juan E. "Classifying speakers using voice biometrics In a multimodal world." Auburn, Ala, 2009. http://hdl.handle.net/10415/1824.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Voice identification"

1

Hollien, Harry. Forensic voice identification. San Diego, Calif.: Academic Press, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

McIntosh, Kenneth. A stranger's voice: Forensic speech identification. Philadelphia: Mason Crest Publishers, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Juang, Jer-Nan. Signal prediction with input identification. Hampton, Va: National Aeronautics and Space Administration, Langley Research Center, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Đoàn, Văn Thông. Tìm hiểu con người qua tiếng nói, chữ viết và chữ ký [Understanding a person through their voice, handwriting, and signature]. Glendale, CA: Đại Nam, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Markowitz, Judith A. Voice ID source profiles. [Evanston, IL]: J. Markowitz, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Müller, Christian, ed. Speaker classification. Berlin: Springer, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Public voices: Political discourse in the writings of Caroline de la Motte Fouqué. Oxford: P. Lang, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hollien, Harry. Forensic Voice Identification. Elsevier Science & Technology Books, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Keane, Adrian, and Paul McKeown. 9. Visual and voice identification. Oxford University Press, 2018. http://dx.doi.org/10.1093/he/9780198811855.003.0009.

Full text
Abstract:
This chapter considers the risk of mistaken identification, and the law and procedure relating to evidence of visual and voice identification. In respect of evidence of visual identification, the chapter addresses: the Turnbull guidelines, including when a judge should stop a case and the direction to be given to the jury; visual recognition, including recognition by the jury themselves from a film, photograph or other image; evidence of analysis of films, photographs or other images; pre-trial procedure, including procedure relating to recognition by a witness from viewing films or photographs, either formally or informally; and admissibility where there have been breaches of pre-trial procedure. In respect of evidence of voice identification, the chapter addresses: pre-trial procedure; voice comparison by the jury with the assistance of experts or lay listeners; and the warning to be given to the jury (essentially an adaptation of the Turnbull warning, but with particular focus on the factors which might affect the reliability of voice identification).
APA, Harvard, Vancouver, ISO, and other styles
10

The Speaker Identification Ability of Blind and Sighted Listeners: An Empirical Investigation. Springer VS, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Voice identification"

1

Chowdhury, Foezur, Sid-Ahmed Selouani, and Douglas O'Shaughnessy. "Voice Biometrics: Speaker Verification and Identification." In Signal and Image Processing for Biometrics, 131–48. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118561911.ch7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Higgins, A., L. Bahler, and J. Porter. "Voice Identification Using Nonparametric Density Matching." In The Kluwer International Series in Engineering and Computer Science, 211–32. Boston, MA: Springer US, 1996. http://dx.doi.org/10.1007/978-1-4613-1367-0_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Schweinberger, Stefan R. "Audiovisual Integration in Speaker Identification." In Integrating Face and Voice in Person Perception, 119–34. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3585-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rakshit, Soubhik. "User Identification and Authentication Through Voice Samples." In Computational Intelligence in Pattern Recognition, 247–54. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9042-5_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jarman-Ivens, Freya. "Identification: We Go to the Opera to Eat Voice." In Queer Voices, 25–57. New York: Palgrave Macmillan US, 2011. http://dx.doi.org/10.1057/9780230119550_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Androulidakis, Iosif I. "Voice, SMS, and Identification Data Interception in GSM." In Mobile Phone Security and Forensics, 29–46. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-29742-2_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cordeiro, Hugo, Carlos Meneses, and José Fonseca. "Continuous Speech Classification Systems for Voice Pathologies Identification." In IFIP Advances in Information and Communication Technology, 217–24. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-16766-4_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Paul, Bachchu, Somnath Bera, Tanushree Dey, and Santanu Phadikar. "Voice-Based Railway Station Identification Using LSTM Approach." In Advances in Intelligent Systems and Computing, 319–28. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-7834-2_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anusha, B., and P. Geetha. "Biomedical Voice Based Parkinson Disorder Identification for Homosapiens." In Computational Vision and Bio Inspired Computing, 641–51. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-71767-8_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jha, Ruchi, Anvita Saxena, Jodh Singh, Ashish Khanna, Deepak Gupta, Prerna Jain, and Viswanatha Reddy Allugunti. "Voice-Based Gender Identification Using qPSO Neural Network." In Data Analytics and Management, 879–89. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8335-3_66.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Voice identification"

1

Witkowski, Marcin, Magdalena Igras, Joanna Grzybowska, Pawel Jaciow, Jakub Galka, and Mariusz Ziolko. "Caller identification by voice." In 2014 XXII Annual Pacific Voice Conference (PVC). IEEE, 2014. http://dx.doi.org/10.1109/pvc.2014.6845420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sharipova, Elvira R., Anton A. Horoshiy, and Nikita A. Kotlyarov. "Student Voice Identification Method." In 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus). IEEE, 2021. http://dx.doi.org/10.1109/elconrus51938.2021.9396443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Giannini, Antonella, Massimo Pettorino, and Umberto Cinque. "Speaker's identification by voice." In First European Conference on Speech Communication and Technology (Eurospeech 1989). ISCA: ISCA, 1989. http://dx.doi.org/10.21437/eurospeech.1989-72.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Jin, Qin, Arthur R. Toth, Tanja Schultz, and Alan W. Black. "Voice convergin: Speaker de-identification by voice transformation." In ICASSP 2009 - 2009 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2009. http://dx.doi.org/10.1109/icassp.2009.4960482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Harnsberger, James, and Harry Hollien. "Selection of speech/voice vectors in forensic voice identification." In 162nd Meeting Acoustical Society of America. Acoustical Society of America, 2013. http://dx.doi.org/10.1121/1.4812442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, J., A. Ariyaeeinia, and R. Sotudeh. "User voice identification on FPGA." In Perspectives in Pervasive Computing. IET, 2005. http://dx.doi.org/10.1049/ic.2005.0789.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Didla, Grace S., and Harry Hollien. "Voice disguise and speaker identification." In 171st Meeting of the Acoustical Society of America. Acoustical Society of America, 2015. http://dx.doi.org/10.1121/2.0000239.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Mubarak al Balushi, Maryam Mohammed, R. Vidhya Lavanya, Sreedevi Koottala, and Ajay Vikram Singh. "Wavelet based human voice identification system." In 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). IEEE, 2017. http://dx.doi.org/10.1109/ictus.2017.8286002.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gomes, Jolae, Hayden Fernandes, Stefan Abraham, and Satishkumar Chavan. "Person identification based on voice recognition." In 2021 4th Biennial International Conference on Nascent Technologies in Engineering (ICNTE). IEEE, 2021. http://dx.doi.org/10.1109/icnte51185.2021.9487756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Wang Zhongwei and Wang Hongbo. "Voice identification system based on server." In 2010 International Conference on Computer Application and System Modeling (ICCASM 2010). IEEE, 2010. http://dx.doi.org/10.1109/iccasm.2010.5623009.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Voice identification"

1

Parsons, G., and J. Maruszak. Calling Line Identification for Voice Mail Messages. RFC Editor, December 2004. http://dx.doi.org/10.17487/rfc3939.

Full text
APA, Harvard, Vancouver, ISO, and other styles