
    Automatic assessment of dysarthric severity level using audio-video cross-modal approach in deep learning

    Tong, H.; Sharifzadeh, H.; McLoughlin, I.

    View fulltext online
    Tong, H. (2020).pdf (622.1Kb)
    Date
    2020-10
    Citation:
    Tong, H., Sharifzadeh, H., & McLoughlin, I. (2020). Automatic Assessment of Dysarthric Severity Level Using Audio-Video Cross-Modal Approach in Deep Learning. In INTERSPEECH 2020 (pp. 4786-4790). doi:http://dx.doi.org/10.21437/Interspeech.2020-1997. Retrieved from http://www.interspeech2020.org/Program/Technical_Program/#
    Permanent link to Research Bank record:
    https://hdl.handle.net/10652/5017
    Abstract
    Dysarthria is a speech disorder that can significantly impact a person’s daily life, and yet may be amenable to therapy. To automatically detect and classify dysarthria, researchers have proposed various computational approaches, ranging from traditional speech processing methods focusing on speech rate, intelligibility, intonation, etc., to more advanced machine learning techniques. Recently developed machine learning systems rely on audio features for classification; however, research in other fields has shown that audio-video cross-modal frameworks can improve classification accuracy while simultaneously reducing the amount of training data required compared to uni-modal systems (i.e. audio- or video-only). In this paper, we propose an audio-video cross-modal deep learning framework that takes both audio and video data as input to classify dysarthria severity levels. Our novel cross-modal framework achieves over 99% test accuracy on the UASPEECH dataset, significantly outperforming current uni-modal systems that utilise audio data alone. More importantly, it is able to accelerate training time while improving accuracy, and to do so with reduced training data requirements.
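    Illustrative sketch (not part of the published record): the abstract describes a two-branch framework that fuses audio and video inputs to classify dysarthria severity. The minimal PyTorch sketch below shows one way such a cross-modal fusion classifier could be structured; the layer sizes, input shapes, and the assumed five severity classes are placeholders for illustration and do not reproduce the authors' architecture.

    # Illustrative sketch only, not the authors' implementation: a minimal
    # audio-video cross-modal classifier. All shapes and sizes are assumptions.
    import torch
    import torch.nn as nn

    class CrossModalSeverityClassifier(nn.Module):
        def __init__(self, num_classes=5):  # number of severity levels is assumed
            super().__init__()
            # Audio branch: 2-D CNN over a spectrogram (1 x freq x time).
            self.audio_cnn = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32-dim embedding
            )
            # Video branch: 3-D CNN over a short clip (3 x frames x H x W).
            self.video_cnn = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> 32-dim embedding
            )
            # Fusion: concatenate the two embeddings and predict a severity level.
            self.classifier = nn.Sequential(
                nn.Linear(32 + 32, 64), nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, audio, video):
            a = self.audio_cnn(audio)   # (batch, 32)
            v = self.video_cnn(video)   # (batch, 32)
            return self.classifier(torch.cat([a, v], dim=1))

    # Example forward pass with dummy tensors standing in for a spectrogram
    # and a 16-frame 64x64 video clip.
    model = CrossModalSeverityClassifier()
    logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 3, 16, 64, 64))
    print(logits.shape)  # torch.Size([2, 5])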
    Keywords:
    dysarthria, motor speech disorders, dysarthric patients, assessment, audio data processing systems, video data processing systems, deep-learning algorithms, algorithms, UASPEECH (dataset of dysarthric speech), Convolutional Neural Network (CNN)
    ANZSRC Field of Research:
    080108 Neural, Evolutionary and Fuzzy Computation; 119999 Medical and Health Sciences not elsewhere classified

    Copyright Notice:
    Copyright © 2020 ISCA
    Rights:
    This digital work is protected by copyright. It may be consulted by you, provided you comply with the provisions of the Act and the following conditions of use. These documents or images may be used for research or private study purposes. Whether they can be used for any other purpose depends upon the Copyright Notice above. You will recognise the author's and publisher's rights and give due acknowledgement where appropriate.
    This item appears in
    • Computing Conference Papers [150]

    Te Pūkenga

    Research Bank is part of Te Pūkenga - New Zealand Institute of Skills and Technology


    Copyright ©2022 Te Pūkenga
