Section 2.1 Speech and Language Laboratory

The speech and language research group in SCSE was founded in 2007 by Chng Eng Siong and Prof Li Haizhou (now at CUHK-Shenzhen, China). The group is now situated in the HESL Lab (N4-B2b-05) in SCSE. We also founded the AISG Speech Lab, funded by NRF, which has been running since 2018.

Subsection 2.1.1 Research Focus

Our research interests are primarily in speech and language processing and classification using machine learning (ML):

  1. ASR and LLM
    1. Using LLMs to improve ASR: see Hyporadise (a minimal rescoring sketch follows this list)
    2. Code-switch multi-lingual speech recognition: see Audio to Byte
    3. Robust large-vocabulary continuous speech recognition: joint end-to-end ASR with a speech enhancement module, wav2vec 2.0, speaker extraction
    4. Speech enhancement: speaker extraction, denoising, feature enhancement, overlapping speech extraction
  2. Classification
    1. Deep fake detection (and generation): Link
    2. Speaker identification and speaker diarization: diarization, VAD, and speaker extraction issues, see Microsoft diarization approach
  3. Towards speech understanding: aspects of NLP such as depression classification, summarization, named entity recognition, and text normalization. See a demo of our ASR for ATC speech with NER highlighting: ATC with NER
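
To make the first item above concrete, here is a minimal sketch of N-best rescoring with a pretrained language model, a simpler cousin of the LLM-based generative error correction studied in Hyporadise. It uses GPT-2 through Hugging Face transformers; the hypothesis texts, ASR scores, and interpolation weight are hypothetical and purely for illustration, not taken from our systems.

# Minimal N-best rescoring sketch: combine ASR scores with a pretrained LM.
# The hypotheses, ASR scores, and lm_weight below are illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def lm_log_likelihood(text: str) -> float:
    """Total log-likelihood of `text` under GPT-2 (higher = more fluent)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # out.loss = mean NLL per predicted token
    return -out.loss.item() * (ids.size(1) - 1)


def rescore(nbest, lm_weight=0.3):
    """Pick the hypothesis with the best interpolated ASR + LM score."""
    return max(
        nbest,
        key=lambda item: item[1] + lm_weight * lm_log_likelihood(item[0]),
    )


if __name__ == "__main__":
    # Illustrative N-best list: (hypothesis text, ASR log-score).
    nbest = [
        ("the whether is fine today", -12.7),
        ("the weather is fine today", -13.1),
        ("the weather is find to day", -14.0),
    ]
    best_hyp, best_score = rescore(nbest)
    print("selected:", best_hyp)

Hyporadise itself goes further: instead of merely re-ranking the hypotheses, an LLM is prompted with the whole N-best list to generate a corrected transcript.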

Subsection 2.1.2 Demos

Some of our previous work:

  1. YouTube recordings: our code-switch speech recognition in action:

    1. Recognizing English/Mandarin code-switch speech using our LVCSR system (June 2018).
    2. Comparing our system against Google and Siri (Sep 2018).
  2. Source separation: separating Hillary Clinton's and Donald Trump's voices from a YouTube recording, from Chenglin's demo slides (Oct 2018)

  3. Speech indexing using our MAGOR system (Code-switch English/Mandarin and Malay system)

  4. A demo of our ASR for ATC speech with NER highlighting: ATC with NER

Subsection 2.1.3 Our past demos using our speech engine

2020 FYP demos:

  1. Deploying a speech recognition system on a highly available and scalable Kubernetes cluster: YouTube

  2. Chatbot framework using Dialogflow and various Q&A modules (2020 demo): YouTube, plus a live demo: Demo

Subsection 2.1.4 Some of our past work on GitHub

  1. PhD student Hou Nana's work at NTU (2018~2021): single-channel speech enhancement, GitHub

  2. PhD student Xu Chenglin's work at NTU (2015~2020): single-channel speech separation/extraction, GitHub

  3. Intern Ge Meng's work (intern from Tianjin, 2020~2021): a tutorial on speech separation, GitHub

  4. Intern Shangeth's work (intern from BITS, Aug 2020 - June 2021): accent, age, and height classification, PDF link

  5. MSAI student Samuel Samsudin's work (2020~2021): emotion detection, GitHub repository, Kaggle IEMOCAP

  6. Language identification by EEE PhD student Liu Hexin (2021): GitHub link

  7. Intern Shashank Shirol's work (Jan-June 2020): using a GAN to create noisy speech, GitHub repository