Please use this identifier to cite or link to this item: http://localhost:8080/xmlui/handle/20.500.12421/2741
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Sarria Paja, Milton
dc.contributor.author: Falk, Tiago H.
dc.date.accessioned: 2020-02-10T07:15:08Z
dc.date.available: 2020-02-10T07:15:08Z
dc.date.issued: 2017-10-26
dc.identifier.isbn: 978-099286267-1
dc.identifier.uri: https://repository.usc.edu.co/handle/20.500.12421/2741
dc.description.abstract: In this paper, automatic speaker verification using normal and whispered speech is explored. Typically, for speaker verification systems, varying vocal effort inputs during the testing stage significantly degrade system performance. Solutions such as feature mapping or the addition of multi-style data during the training and enrollment stages have been proposed, but do not show similar advantages for the involved speaking styles. Herein, we focus attention on the extraction of invariant speaker-dependent information from normal and whispered speech, thus allowing for improved multi-vocal-effort speaker verification. We base our search on previously reported perceptual and acoustic insights and propose variants of the mel-frequency cepstral coefficients (MFCC). We show the complementarity of the proposed features via three fusion schemes. Gains as high as 39% and 43% can be achieved for normal and whispered speech, respectively, relative to existing systems based on conventional MFCC features. [es]
dc.language.iso: en [es]
dc.publisher: Institute of Electrical and Electronics Engineers Inc. [es]
dc.subject: Whispered speech [es]
dc.subject: Speaker verification [es]
dc.subject: Fusion [es]
dc.subject: I-vector extraction [es]
dc.subject: MFCC [es]
dc.title: Variants of mel-frequency cepstral coefficients for improved whispered speech speaker verification in mismatched conditions [es]
dc.type: Article [es]
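The abstract reports that the complementarity of the proposed MFCC variants is exploited via three fusion schemes. A minimal sketch of one common approach, score-level fusion by weighted sum, is shown below; the subsystem scores, weight, and threshold are hypothetical and the record does not specify which fusion schemes the paper actually uses.

```python
# Minimal sketch of score-level fusion for speaker verification.
# Assumes two subsystems (e.g., conventional MFCC and a proposed
# MFCC variant) that each output one similarity score per trial;
# all scores, weights, and the threshold here are illustrative.

def fuse_scores(scores_a, scores_b, weight=0.5):
    """Weighted-sum fusion of two per-trial score lists."""
    if len(scores_a) != len(scores_b):
        raise ValueError("score lists must be aligned per trial")
    return [weight * a + (1.0 - weight) * b
            for a, b in zip(scores_a, scores_b)]

def accept(scores, threshold):
    """Accept a trial when its fused score reaches the threshold."""
    return [s >= threshold for s in scores]

# Hypothetical trial scores: higher means more likely a target speaker.
mfcc_scores = [0.9, 0.2, 0.6, 0.1]      # conventional MFCC subsystem
variant_scores = [0.8, 0.3, 0.7, 0.4]   # MFCC-variant subsystem

fused = fuse_scores(mfcc_scores, variant_scores, weight=0.6)
decisions = accept(fused, threshold=0.5)
```

The fusion weight would normally be tuned on a held-out development set rather than fixed by hand.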
Appears in Collections:Artículos Científicos

Files in This Item:
File: Variants of mel-frequency cepstral coefficients for improved.jpg (254.09 kB, JPEG)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.