Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking learners directly is potentially disruptive and is often ignored by them. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach that uses linguistically motivated deep learning architectures for learner profiling, specifically targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only – which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, treating sentences as plain sequences. However, human language also has structure. In this research, rather than treating sentences as plain sequences, we hypothesise that higher-level semantic and syntactic sentence processing grounded in linguistics will render a richer representation. We thus evaluate the traditional LSTM against state-of-the-art models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore the use of different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they achieve their best performance, compared with a public sentiment analysis dataset that is further used to cross-examine the models’ results.
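For illustration, the following is a minimal sketch (not the authors’ implementation) of the sequential LSTM baseline that the abstract contrasts with the structure-aware models: each comment is mapped to word embeddings, encoded by an LSTM, and the final hidden state feeds a binary gender classifier. The class name, vocabulary size and dimensions below are illustrative assumptions; the tree-structured variants (Tree-LSTM, SPINN, SATA) are not shown.

import torch
import torch.nn as nn

class CommentGenderClassifier(nn.Module):
    """Sketch of a sequential-LSTM baseline for gender prediction from comments."""
    def __init__(self, vocab_size=20000, embed_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # two gender classes

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)        # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n.squeeze(0))   # logits: (batch, 2)

if __name__ == "__main__":
    model = CommentGenderClassifier()
    dummy_batch = torch.randint(1, 20000, (4, 30))  # 4 comments, 30 tokens each
    print(model(dummy_batch).shape)                  # torch.Size([4, 2])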

Keywords: Deep learning, data mining, gender prediction, MOOCs.

Digital Object Identifier (DOI): https://doi.org/10.5281/zenodo.3669220

