Strategic Information in the Game of Go

Authors: Michael Harre, Terry Bossomaier, Ranqing Chu, Allan Snyder

Abstract:

We introduce a novel approach to measuring how humans learn, based on techniques from information theory, and apply it to the oriental game of Go. We show that the total amount of information observable in human strategies, called the strategic information, remains constant across populations of players of differing skill levels for well-studied patterns of play. This holds despite the very large amount of knowledge required to progress from the recreational players at one end of our spectrum to the very best and most experienced players in the world at the other, and it runs counter to the idea that having more knowledge might imply more 'certainty' about which move to play next. We show this is true from very local up to medium-sized board patterns, across a variety of different moves, using 80,000 game records. Consequences for theoretical and practical AI are outlined.
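
One way to picture the measurement described in the abstract is as the Shannon entropy of the empirical next-move distribution observed at a given board pattern, compared across groups of players of different strength. The sketch below is only illustrative: the function name, the equating of strategic information with plain entropy, and the toy move counts are assumptions for this page, not the authors' actual pipeline, which is built from roughly 80,000 game records.

    from collections import Counter
    from math import log2

    def strategic_information(move_counts):
        # Shannon entropy (in bits) of the empirical next-move distribution
        # for one board pattern; equating this with "strategic information"
        # is an illustrative assumption, not the paper's exact estimator.
        total = sum(move_counts.values())
        return -sum((n / total) * log2(n / total)
                    for n in move_counts.values() if n > 0)

    # Toy response counts for one hypothetical local pattern, split by strength.
    # In the study such counts would come from the game records rather than
    # hand-picked numbers.
    amateur_moves = Counter({"C3": 40, "D4": 35, "C4": 15, "B2": 10})
    expert_moves = Counter({"D4": 55, "C3": 25, "C4": 12, "B2": 8})

    print(f"amateur entropy: {strategic_information(amateur_moves):.3f} bits")
    print(f"expert entropy:  {strategic_information(expert_moves):.3f} bits")

A real estimate would also need a finite-sample bias correction before entropies from groups with different numbers of observed games can be compared fairly.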

Keywords: Board Games, Cognitive Capacity, Decision Theory, Information Theory.

Digital Object Identifier (DOI): doi.org/10.5281/zenodo.1077199

