Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning

Authors

Tong, H.
Sharifzadeh, H.
McLoughlin, I.

Date

2020-10

Type

Conference Contribution - Paper in Published Proceedings

Keyword

dysarthria
motor speech disorders
dysarthric patients
assessment
audio data processing systems
video data processing systems
deep-learning algorithms
algorithms
UASPEECH (dataset of dysarthric speech)
Convolutional Neural Network (CNN)

Citation

Tong, H., Sharifzadeh, H., & McLoughlin, I. (2020). Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning. In Proc. INTERSPEECH 2020 (pp. 4786–4790). https://doi.org/10.21437/Interspeech.2020-1997

Abstract

Dysarthria is a speech disorder that can significantly impact a person’s daily life, yet may be amenable to therapy. To automatically detect and classify dysarthria, researchers have proposed various computational approaches, ranging from traditional speech processing methods that focus on speech rate, intelligibility, and intonation, to more advanced machine learning techniques. Recently developed machine learning systems rely on audio features for classification; however, research in other fields has shown that audio-video cross-modal frameworks can improve classification accuracy while simultaneously reducing the amount of training data required, compared to uni-modal systems (i.e. audio-only or video-only). In this paper, we propose an audio-video cross-modal deep learning framework that takes both audio and video data as input to classify dysarthria severity levels. Our novel cross-modal framework achieves over 99% test accuracy on the UASPEECH dataset, significantly outperforming current uni-modal systems that use audio data alone. More importantly, it accelerates training while improving accuracy, and does so with reduced training data requirements.
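
The record does not include implementation details beyond the abstract, but a minimal PyTorch sketch of the two-branch, cross-modal idea it describes might look as follows. Everything here is an illustrative assumption rather than the paper's actual architecture: the layer shapes, the concatenation-based fusion, and the choice of four severity classes (UASPEECH speakers are often grouped into four intelligibility bands).

import torch
import torch.nn as nn

class CrossModalSeverityClassifier(nn.Module):
    """Hypothetical two-branch model: a 2-D CNN over audio spectrograms
    and a 3-D CNN over face/lip video clips, fused by concatenation."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Audio branch: 1-channel log-mel spectrogram -> 32-dim embedding.
        self.audio_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Video branch: 1-channel grayscale clip -> 32-dim embedding.
        self.video_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Late fusion: concatenate the two embeddings, then classify severity.
        self.classifier = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, mel_bins, frames); video: (batch, 1, clip_len, H, W)
        fused = torch.cat([self.audio_branch(audio), self.video_branch(video)], dim=1)
        return self.classifier(fused)

# Dummy forward pass with spectrogram- and clip-shaped tensors.
model = CrossModalSeverityClassifier(num_classes=4)
logits = model(torch.randn(2, 1, 64, 128), torch.randn(2, 1, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 4])

Concatenating modality embeddings (late fusion) is only one of several plausible cross-modal designs; early fusion or attention-based fusion would slot into the same two-branch skeleton.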

Publisher

ISCA (International Speech Communication Association)

DOI

10.21437/Interspeech.2020-1997

Copyright notice

Copyright © 2020 ISCA
