Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock


Detecting subjectively biased statements is a vital task: when this kind of bias appears in news, social media, scientific texts, encyclopedias, and other information dissemination media, it can erode trust in the information and stir conflict among consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks, such as sentiment analysis, opinion identification, and bias neutralization, and a system that can reliably detect subjectivity in text would significantly advance research in these areas. It is likewise useful for platforms such as Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level, a problem to which machine learning is well suited. A key step in our approach is to train a classifier that uses BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. Although BERT can serve as a classifier on its own, in this study we use it as a data preprocessor and embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network augmented with an attention mechanism, which yields a deeper and stronger classifier. We evaluate the effectiveness of our model on the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various instances of bias from sentences, and compare it against existing approaches. Experimental analysis indicates improved performance: our model achieves state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
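The architecture described above can be sketched as a small PyTorch module: a Bi-LSTM with additive attention that consumes BERT token embeddings and emits a binary (biased/neutral) prediction. This is a minimal illustration, not the authors' implementation; the layer sizes are assumptions, and a random tensor stands in for the last hidden state of a pretrained BERT (which in practice would come from, e.g., the HuggingFace `transformers` library).

```python
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    """Sketch of a Bi-LSTM + attention head over BERT token embeddings."""

    def __init__(self, embed_dim=768, hidden_dim=128, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)  # per-token attention score
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, embeddings):
        # embeddings: (batch, seq_len, embed_dim), e.g. BERT's last hidden state
        outputs, _ = self.lstm(embeddings)        # (batch, seq_len, 2*hidden)
        scores = self.attn(torch.tanh(outputs))   # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)    # normalize over tokens
        context = (weights * outputs).sum(dim=1)  # weighted sum: (batch, 2*hidden)
        return self.fc(context)                   # class logits

# Stand-in for BERT output: a batch of 4 sentences, 32 tokens each.
dummy_bert_output = torch.randn(4, 32, 768)
model = BiLSTMAttentionClassifier()
logits = model(dummy_bert_output)
print(logits.shape)  # torch.Size([4, 2])
```

The attention layer lets the classifier weight tokens by their contribution to the bias decision, rather than relying only on the Bi-LSTM's final hidden states.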

Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.


