Please use this identifier to cite or link to this item:
https://repository.usc.edu.co/handle/20.500.12421/2734
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Bouserhal, Rachel E. | - |
dc.contributor.author | Chabot, Philippe | - |
dc.contributor.author | Sarria Paja, Milton | - |
dc.contributor.author | Cardinal, Patrick | - |
dc.contributor.author | Voix, Jérémie | - |
dc.date.accessioned | 2020-02-10T07:13:43Z | - |
dc.date.available | 2020-02-10T07:13:43Z | - |
dc.date.issued | 2018-09-06 | - |
dc.identifier.issn | 2308457X | - |
dc.identifier.uri | https://repository.usc.edu.co/handle/20.500.12421/2734 | - |
dc.description.abstract | The accurate classification of nonverbal human produced audio events opens the door to numerous applications beyond health monitoring. Voluntary events, such as tongue clicking and teeth chattering, may lead to a novel way of silent interface command. Involuntary events, such as coughing and clearing the throat, may advance the current state-of-the-art in hearing health research. The challenge of such applications is the balance between the processing capabilities of a small intra-aural device and the accuracy of classification. In this pilot study, 10 nonverbal audio events are captured inside the ear canal blocked by an intra-aural device. The performance of three classifiers is investigated: Gaussian Mixture Model (GMM), Support Vector Machine and Multi-Layer Perceptron. Each classifier is trained using three different feature vector structures constructed using the mel-frequency cepstral coefficients (MFCC) and their derivatives. Fusion of the MFCCs with the auditory-inspired amplitude modulation features (AAMF) is also investigated. Classification is compared between binaural and monaural training sets as well as for noisy and clean conditions. The highest accuracy is achieved at 75.45% using the GMM classifier with the binaural MFCC+AAMF clean training set. Accuracy of 73.47% is achieved by training and testing the classifier with the binaural clean and noisy dataset. | es |
dc.language.iso | en | es |
dc.publisher | International Speech Communication Association | es |
dc.subject | Nonverbal | es |
dc.subject | Classification | es |
dc.subject | Hearing protection | es |
dc.subject | Biosignals | es |
dc.title | Classification of nonverbal human produced audio events: A pilot study | es |
dc.type | Article | es |
Appears in Collections: | Artículos Científicos |
Files in This Item:
File | Description | Size | Format
---|---|---|---
Classification of nonverbal human produced audio events A pilot study.jpg | | 198.87 kB | JPEG
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.