Thursday, August 30, 2012
The UT^2 game bot, created by two University of Texas at Austin Department of Computer Science graduate students and computer science professor Risto Miikkulainen, won the Humanlike Bot Competition at the IEEE World Congress on Computational Intelligence (WCCI 2012).
The UT^2 bot is the first winning bot in the history of the Humanlike Bot Competition to be judged as human more often than half the human players participating in the evaluation.
“That is a significant milestone in the competition,” Miikkulainen said. “The idea of the competition is to evaluate how we can make game bots, non-player characters (NPCs) controlled by AI algorithms, appear as human as possible,” he said. “It is generally recognized that NPCs are relatively weak in most video games: their behavior is predictable and mechanical, and they often make mistakes that human players would be unlikely to make. Players often enjoy playing against other humans, because it provides a more interesting game experience. The goal of the competition is to promote more research in human-like bots, as well as evaluate how well we are currently doing in this area.”
This story was published on the Department of Computer Science website and was posted by Staci Norman.
The winning entry consisted of a prioritized list of behaviors such as getting unstuck, shooting at the enemy, picking up an object and running around the environment. The simplest of these behaviors were designed by hand, as scripts, but the most complex behaviors were learned using human traces and neuroevolution.
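The prioritized-list arbitration described above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual code: each behavior reports whether it applies to the current situation, and the first applicable behavior in priority order is executed each decision cycle. The behavior names and the dictionary-based game state are assumptions for the example.

```python
# Hypothetical sketch of a priority-ordered behavior list: the first
# applicable behavior (highest priority first) is chosen each tick.

class Behavior:
    def applicable(self, state):
        raise NotImplementedError
    def act(self, state):
        raise NotImplementedError

class GetUnstuck(Behavior):
    def applicable(self, state):
        return state.get("stuck", False)
    def act(self, state):
        return "get_unstuck"

class Battle(Behavior):
    def applicable(self, state):
        return state.get("enemy_visible", False)
    def act(self, state):
        return "battle"

class PickUpItem(Behavior):
    def applicable(self, state):
        return state.get("item_nearby", False)
    def act(self, state):
        return "pick_up_item"

class RunAround(Behavior):
    def applicable(self, state):
        return True  # default fallback: explore the environment
    def act(self, state):
        return "run_around"

# Ordered from highest to lowest priority.
BEHAVIORS = [GetUnstuck(), Battle(), PickUpItem(), RunAround()]

def select_action(state):
    for behavior in BEHAVIORS:
        if behavior.applicable(state):
            return behavior.act(state)
```

Because getting unstuck outranks everything else, a bot that is both stuck and facing an enemy frees itself first, which is the kind of commonsense ordering the prioritized list encodes.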
The getting-unstuck and running-around behaviors were based on traces of actual human play. The team collected transcripts of many games, then indexed and stored them. During a game, when the getting-unstuck or running-around behavior was called for, the bot matched the current situation against this database and executed the actions that humans had taken in similar situations. The general idea was to learn, for these two behaviors, a mapping from situations to human actions that would apply to novel situations as well.
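One simple way to realize this kind of trace matching is nearest-neighbor retrieval: encode each recorded situation as a feature vector and replay the human action stored with the closest match. The sketch below is an assumption about the general technique, not a description of the team's indexing scheme.

```python
# Hypothetical sketch of human-trace retrieval: return the action a human
# took in the stored situation nearest (Euclidean distance) to the current one.

import math

def nearest_human_action(situation, trace_db):
    """situation: feature vector for the current game state.
    trace_db: list of (situation_vector, action) pairs from recorded games."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, action = min(trace_db, key=lambda entry: dist(entry[0], situation))
    return action
```

For example, with a database of two recorded situations, a query near the first one retrieves that entry's action; in practice the index would be far larger and the matching approximate for speed.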
The battle behavior was learned using neuroevolution. Neural networks were used to control the movement, weapon selection and shooting during close combat with an opponent. Instead of training the network using human behavior as targets, the team used evolutionary computation (genetic algorithms) to search for a network that would perform well.
“In this case we found that a network that was evolved to be good also acted much like humans do, so a secondary objective of being similar to recorded human traces was not necessary,” Miikkulainen said.
In the first several competitions there was a persistent gap between all humans and all bots; that gap has now closed. “There is still much we can improve, and the competition will continue, with the next one scheduled for later this year,” he said.