Algorithm for Information Retrieval Optimization

Authors: Kehinde K. Agbele, Kehinde Daniel Aruleba, Eniafe F. Ayetiran


When using Information Retrieval Systems (IRS), users often express their search queries as ad hoc keywords. It is then up to the IRS to obtain a precise representation of the user's information need and the context of the information. This paper investigates how an IRS can be optimized for individual information needs, ranking results in order of relevance, and describes the development of algorithms that optimize the ranking of documents retrieved from an IRS. Specifically, it presents a Document Ranking Optimization (DROPT) algorithm for information retrieval (IR) in an Internet-based or designated-database environment. As the volume of information available online and in designated databases grows continuously, ranking algorithms play a major role in determining the usefulness of search results. The DROPT technique developed here ranks documents retrieved from a corpus by calculating the weight of document index keywords with respect to the query vectors.
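The abstract does not specify the DROPT weighting formula itself, so the sketch below illustrates only the general scheme it builds on: assigning each index keyword a weight in each document (here, standard tf-idf, a common choice but an assumption on our part) and ranking documents against the query vector. Function names and the scoring details are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a tf-idf weight for every index keyword in every document.

    tf is the term's share of tokens in the document; idf is log(N / df),
    where df counts the documents containing the term.
    """
    n = len(docs)
    tfs = [Counter(doc.lower().split()) for doc in docs]
    df = Counter()
    for tf in tfs:
        df.update(tf.keys())
    return [
        {term: (count / sum(tf.values())) * math.log(n / df[term])
         for term, count in tf.items()}
        for tf in tfs
    ]

def rank(query, docs):
    """Return document indices ordered by relevance to the query.

    Score = dot product of the query terms with the document's tf-idf
    vector, normalized by the document vector's length (cosine-style).
    """
    vecs = tfidf_vectors(docs)
    q_terms = query.lower().split()
    scores = []
    for i, vec in enumerate(vecs):
        dot = sum(vec.get(t, 0.0) for t in q_terms)
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        scores.append((dot / norm, i))
    return [i for _, i in sorted(scores, reverse=True)]
```

For example, `rank("information retrieval", docs)` places documents sharing rarer query keywords ahead of those with none; a learned or context-aware weighting, as an optimization such as DROPT would pursue, can then reorder this baseline ranking.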

Keywords: Internet ranking


