User Intention Generation with Large Language Models Using Chain-of-Thought Prompting

Authors: Gangmin Li, Fan Yang

Abstract:

Personalization is crucial for any recommendation system, and one technique for achieving it is identifying the user's intention. Traditional user intention identification infers intention from the user's selections when facing multiple items. This modeling relies primarily on historical behavior data, which leads to challenges such as cold start, unintended choices, and a failure to capture intention when items are new. Motivated by recent advances in Large Language Models (LLMs) such as ChatGPT, we present an approach to user intention identification that applies LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's responses across various recommendation tasks, including rating prediction, search and browse history, and user clarification. Our tests on real-world datasets demonstrate that recommendation improves when user intention is explicitly identified and merged into the user model.

Keywords: Personalized recommendation, generative user modeling, user intention identification, large language models, chain-of-thought prompting.
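The page itself includes no code; as a rough illustration of the pipeline described in the abstract, the Python sketch below builds a chain-of-thought prompt from a toy user profile and viewing history and hands it to an LLM. The `call_llm` callable, the profile fields, and the prompt wording are all illustrative assumptions, not the authors' actual prompts.

```python
# Minimal sketch of CoT prompting for user intention identification.
# `call_llm` is a hypothetical stand-in for any chat-completion API;
# the profile fields and prompt wording are illustrative assumptions,
# not the prompts used in the paper.
from typing import Callable


def build_intention_prompt(profile: dict, history: list[str]) -> str:
    """Compose a chain-of-thought prompt from a user profile and history."""
    lines = [
        "You are a recommendation assistant.",
        f"User profile: age={profile['age']}, "
        f"interests={', '.join(profile['interests'])}.",
        "Recently viewed items:",
    ]
    lines += [f"- {item}" for item in history]
    lines += [
        "Let's reason step by step:",
        "1. Summarize what these items have in common.",
        "2. Infer the user's current intention from that pattern.",
        "3. State the intention in one sentence, prefixed with 'Intention:'.",
    ]
    return "\n".join(lines)


def identify_intention(profile: dict, history: list[str],
                       call_llm: Callable[[str], str]) -> str:
    """Send the CoT prompt to an LLM and return the inferred intention."""
    return call_llm(build_intention_prompt(profile, history))


if __name__ == "__main__":
    profile = {"age": 30, "interests": ["sci-fi", "documentaries"]}
    history = ["Interstellar", "Arrival", "Cosmos (series)"]
    # Stub LLM so the sketch runs without an API key.
    stub = lambda prompt: "Intention: explore cerebral science-fiction films."
    print(build_intention_prompt(profile, history))
    print(identify_intention(profile, history, stub))
```

In the approach the abstract describes, `call_llm` would wrap a model such as ChatGPT, and the inferred intention would then be merged into the user model consumed by the downstream recommender.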

