
Classification Benchmarks for Under-resourced Bengali Language based on Multichannel Convolutional-LSTM Network

Year 2020

The exponential growth of social media and micro-blogging sites not only provides platforms for empowering freedom of expression and individual voices, but also enables people to express anti-social behaviour such as online harassment, cyberbullying, and hate speech. Numerous works have been proposed to utilize these data for social and anti-social behaviour analysis, document characterization, and sentiment analysis by predicting contexts, mostly for highly resourced languages such as English. However, some languages are under-resourced, e.g., South Asian languages like Bengali, Tamil, Assamese, and Telugu, which lack computational resources for NLP tasks. In this paper, we provide several classification benchmarks for Bengali, an under-resourced language. We prepared three datasets of texts expressing hate, commonly used topics, and opinions for hate speech detection, document classification, and sentiment analysis, respectively. We built the largest Bengali word embedding models to date, based on 250 million articles, which we call BengFastText. We perform three different experiments covering document classification, sentiment analysis, and hate speech detection, incorporating the word embeddings into a Multichannel Convolutional-LSTM (MConv-LSTM) network for predicting different types of hate speech, document categories, and sentiment. Experiments demonstrate that BengFastText can correctly capture the semantics of words from their respective contexts. Evaluations against several baseline embedding models, e.g., Word2Vec and GloVe, yield F1-scores of up to 92.30%, 82.25%, and 90.45% for document classification, sentiment analysis, and hate speech detection, respectively, in 5-fold cross-validation tests.
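The "multichannel" part of such an architecture typically runs parallel convolution branches with different kernel widths over the same embedded token sequence, then pools and concatenates the branch outputs before a recurrent or dense layer. A minimal NumPy sketch of that convolutional front end is shown below; the embedding dimension, filter counts, and kernel widths are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

def conv1d_relu(x, kernel):
    """Valid 1-D convolution over a token sequence, followed by ReLU.

    x: (seq_len, emb_dim) embedded sequence
    kernel: (width, emb_dim, n_filters) filter bank
    returns: (seq_len - width + 1, n_filters) feature map
    """
    width, _, _ = kernel.shape
    out = np.stack([
        np.tensordot(x[i:i + width], kernel, axes=([0, 1], [0, 1]))
        for i in range(x.shape[0] - width + 1)
    ])
    return np.maximum(out, 0.0)  # ReLU

def multichannel_features(x, kernels):
    """Parallel conv channels with different widths, global max-pool, concat."""
    return np.concatenate([conv1d_relu(x, k).max(axis=0) for k in kernels])

rng = np.random.default_rng(0)
seq = rng.normal(size=(20, 8))                          # 20 tokens, 8-dim embeddings
kernels = [rng.normal(size=(w, 8, 4)) for w in (2, 3, 4)]  # three channels, 4 filters each
feats = multichannel_features(seq, kernels)
print(feats.shape)  # (12,) = 3 channels x 4 filters
```

In the full MConv-LSTM model this pooled feature vector (or the unpooled per-position feature maps) would feed an LSTM and a classification head; the sketch only illustrates how different kernel widths capture n-gram patterns of different lengths in parallel.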

Details


Authors

Presented at

IEEE International Conference on Data Science and Advanced Analytics (DSAA'2020), 2020, Sydney, AU.

Published in

IEEE International Conference on Data Science and Advanced Analytics (DSAA'2020).
