Armenian Speech Recognition System: Acoustic and Language Models

Varuzhan H. Baghdasaryan

Abstract

Automatic speech recognition (ASR) is nowadays an important task for machines. Applications such as speech translation, virtual assistants, and voice bot systems use ASR to understand human speech. Most research and available models target widely used languages such as English, German, French, Chinese, and Spanish. This paper presents an Armenian speech recognition system. As a result of this research, acoustic and language models were developed for the Armenian language (modern ASR systems combine acoustic and language models to achieve higher accuracy). Baidu's RNN-based Deep Speech neural network was used to train the acoustic model, and the KenLM toolkit was used to train the probabilistic language model. The acoustic model was trained and validated on the ArmSpeech Armenian native speech corpus using transfer learning and data augmentation techniques, and tested on the Common Voice Armenian database. The language model was built from texts scraped from Armenian news websites. The final models are small in size and can perform real-time speech-to-text on IoT devices. On the Common Voice Armenian database, the model achieved a WER of 0.902565 and a CER of 0.305321 without the language model, and a WER of 0.552975 and a CER of 0.285904 with the language model. The paper describes the environment setup, data collection, the acoustic and language model training processes, and the final results and benchmarks.
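
The reported WER and CER figures are the word- and character-level Levenshtein (edit) distances between the recognizer output and the reference transcript, normalized by the reference length. The following minimal Python sketch illustrates how these metrics are computed; it is for illustration only and is not the evaluation code used in the paper.

    def edit_distance(ref, hyp):
        # Levenshtein distance between two sequences via dynamic programming.
        dp = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, 1):
            prev, dp[0] = dp[0], i
            for j, h in enumerate(hyp, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                         dp[j - 1] + 1,    # insertion
                                         prev + (r != h))  # substitution or match
        return dp[-1]

    def wer(reference, hypothesis):
        # Word error rate: word-level edit distance over reference word count.
        ref_words, hyp_words = reference.split(), hypothesis.split()
        return edit_distance(ref_words, hyp_words) / len(ref_words)

    def cer(reference, hypothesis):
        # Character error rate: character-level edit distance over reference length.
        return edit_distance(list(reference), list(hypothesis)) / len(reference)

For example, a WER of 0.5 means that, on average, half of the reference words require an insertion, deletion, or substitution to match the recognizer output.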

Keywords

Armenian ASR; speech recognition system; speech-to-text; acoustic model; language model

Cite This Article

Baghdasaryan, V. H. (2022). Armenian Speech Recognition System: Acoustic and Language Models. International Journal of Scientific Advances (IJSCIA), Volume 3 | Issue 5: Sep-Oct 2022, Pages 719-724, URL: https://www.ijscia.com/wp-content/uploads/2024/04/Volume3-Issue5-Sep-Oct-No.339-719-724.pdf


This work is licensed under a Creative Commons Attribution 4.0 International Licence (CC BY-NC 4.0).
