dc.contributor.author | Tong, H. | |
dc.contributor.author | Sharifzadeh, Hamid | |
dc.contributor.author | McLoughlin, I. | |
dc.date.accessioned | 2020-11-23T23:11:16Z | |
dc.date.available | 2020-11-23T23:11:16Z | |
dc.date.issued | 2020-10 | |
dc.identifier.uri | https://hdl.handle.net/10652/5017 | |
dc.description.abstract | Dysarthria is a speech disorder that can significantly impact a person’s daily life, yet may be amenable to therapy. To automatically detect and classify dysarthria, researchers have proposed various computational approaches, ranging from traditional speech processing methods that focus on speech rate, intelligibility, and intonation, to more advanced machine learning techniques. Recently developed machine learning systems rely on audio features for classification; however, research in other fields has shown that audio-video cross-modal frameworks can improve classification accuracy while simultaneously reducing the amount of training data required, compared to uni-modal systems (i.e. audio- or video-only). In this paper, we propose an audio-video cross-modal deep learning framework that takes both audio and video data as input to classify dysarthria severity levels. Our novel cross-modal framework achieves over 99% test accuracy on the UASPEECH dataset, significantly outperforming current uni-modal systems that utilise audio data alone. More importantly, it reduces training time while improving accuracy, and does so with reduced training data requirements. | en_NZ |
dc.language.iso | en | en_NZ |
dc.publisher | ISCA (International Speech Communication Association) | en_NZ |
dc.rights | Copyright © 2020 ISCA | en_NZ |
dc.subject | dysarthria | en_NZ |
dc.subject | motor speech disorders | en_NZ |
dc.subject | dysarthric patients | en_NZ |
dc.subject | assessment | en_NZ |
dc.subject | audio data processing systems | en_NZ |
dc.subject | video data processing systems | en_NZ |
dc.subject | deep-learning algorithms | en_NZ |
dc.subject | algorithms | en_NZ |
dc.subject | UASPEECH (dataset of dysarthric speech) | en_NZ |
dc.subject | Convolutional Neural Network (CNN) | en_NZ |
dc.title | Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning | en_NZ |
dc.type | Conference Contribution - Paper in Published Proceedings | en_NZ |
dc.date.updated | 2020-11-10T13:30:08Z | |
dc.subject.marsden | 080108 Neural, Evolutionary and Fuzzy Computation | en_NZ |
dc.subject.marsden | 119999 Medical and Health Sciences not elsewhere classified | en_NZ |
dc.identifier.bibliographicCitation | Tong, H., Sharifzadeh, H., & McLoughlin, I. (2020). Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning. INTERSPEECH 2020 (pp. 4786–4790). https://doi.org/10.21437/Interspeech.2020-1997. Retrieved from http://www.interspeech2020.org/Program/Technical_Program/# | en_NZ |
unitec.publication.spage | 4786 | en_NZ |
unitec.publication.lpage | 4790 | en_NZ |
unitec.conference.title | INTERSPEECH 2020 “Cognitive Intelligence for Speech Processing” | en_NZ |
unitec.conference.org | ISCA (International Speech Communication Association) | en_NZ |
unitec.conference.location | Shanghai, China | en_NZ |
unitec.conference.sdate | 2020-10-25 | |
unitec.conference.edate | 2020-10-29 | |
unitec.peerreviewed | yes | en_NZ |
dc.contributor.affiliation | Unitec Institute of Technology | en_NZ |
dc.contributor.affiliation | Singapore Institute of Technology | en_NZ |
unitec.identifier.roms | 64998 | en_NZ |
unitec.institution.studyarea | Computing | |
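Note: the abstract above describes the audio-video cross-modal framework only at a high level. To illustrate the general idea of a two-branch cross-modal classifier, a minimal PyTorch sketch follows. The branch designs, the late fusion by concatenation, the input shapes, and the four severity classes are all assumptions made for illustration; this is not the architecture published in the paper.

# Hypothetical sketch of an audio-video cross-modal severity classifier.
# All layer sizes, shapes, and the 4-class output are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Audio branch: 2-D convolutions over a spectrogram (1 x freq x time).
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Video branch: 3-D convolutions over grayscale mouth-region frames
        # (1 x frames x height x width).
        self.video_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Late fusion: concatenate the two embeddings, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.audio_branch(audio), self.video_branch(video)], dim=1
        )
        return self.classifier(fused)

# Example forward pass with dummy tensors (shapes are illustrative only).
model = CrossModalClassifier()
audio = torch.randn(2, 1, 64, 128)     # (batch, channel, mel bins, frames)
video = torch.randn(2, 1, 16, 48, 48)  # (batch, channel, frames, H, W)
logits = model(audio, video)           # -> (2, 4) severity logits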