UC Berkeley’s Pac-Man Projects are a great resource for learning about introductory artificial intelligence.
The AI course I’m enrolled in at Northeastern, CS4100/5100, is currently using the pac-man projects, and I think it’s a great idea. As a student, I think it’s often too easy to get caught up in treating your classes and assignments as adversaries, working for hours just to get something done, or just to get a grade, rather than working for your own personal and professional development.
I think the pac-man projects have the potential to make the process of working on homework a personal challenge.
For me, this means that I continued to work on the assignments even after I had passed the threshold for getting what I considered a good grade.
The implementation of minimax and expectimax search is beyond the scope of this post, but it’s important to know that for those searches to work properly, they need to be able to accurately measure the value of game states.
I spent much more time on tweaking my evaluation function than I did on actually implementing the search algorithms.
The evaluation function I wrote scored a proposed state that pacman could be in. A pacman state contains information like the positions of all the ghosts in the maze, the list of remaining food (dots), and so on.
I chose to compute the utility of some state as a linear combination of a series of features:
score = 1 * currentScore + \
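Since only currentScore is named here, a generic sketch of what such a linear combination looks like (the feature names beyond currentScore, and all the values, are placeholders, not my actual features):

```python
# Sketch of a linear-combination evaluation function. Feature names
# other than currentScore are illustrative placeholders.

def evaluate(features, coefficients):
    """Score a state as the dot product of its features and the coefficients."""
    return sum(c * f for c, f in zip(coefficients, features))

# Example: six features, with currentScore first and weighted by 1.
features = [540.0, 3.0, 2.0, 5.0, 1.0, 0.0]      # currentScore, then placeholders
coefficients = [1.0, -2.0, 1.5, 2.5, -1.0, 0.5]  # hand-picked weights
score = evaluate(features, coefficients)          # → 548.5
```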
You can see the full function, along with an explanation of the coefficients I used, here.
My manual tweaks worked really well. Pacman hovered around an 80-90% win rate, and consistently averaged over 1100 points, including the points he got when he died.
For reference, my assignment instructions explained:
With depth 2 search, your evaluation function should clear the smallClassic layout with two random ghosts more than half the time and still run at a reasonable rate (to get full credit, Pacman should be averaging around 1000 points when he’s winning).
While I was tweaking the coefficients, I thought back to Andrew Ng’s machine learning course on Coursera. I only stuck with it for a few weeks, but that was enough to give me the intuition that this problem would lend itself to linear regression with batch gradient descent.
(If you want to follow along, see the code here)
Before I could do anything useful, I had to generate training samples, but before I could generate training samples, I had to figure out a way to send the coefficients to my evaluation function.
This actually took me a pretty long time - it was hard to wade through the pacman code and find how the system sent commands to the agent files. (In this case, the agent file was MultiAgent.py)
It turns out that the pacman system allows the user to send an arbitrary number of arguments to a pac-man agent. They’re passed as comma-separated arguments to pacman.py itself, via the -a flag, after specifying the agent.
After modifying the MultiAgentSearchAgent constructor to take in the coefficients as parameters, I was able to store them as global variables, and then access them in my evaluation function.
I then wrote up a short script to generate the training samples, by calling the pacman game with generated coefficients.
I settled on generating random values in the range [-4, 4] for the coefficients. I moved this script and the pacman files to my cloud server, let it run for a day or so, and came away with 2,234 training samples.
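A sketch of that generation loop (the agent name, layout, and the exact -q/-n flags are assumptions about the Berkeley command line; c1..c6 are my hypothetical parameter names):

```python
# Sketch of the sample-generation script. The pacman.py invocation below
# (agent, layout, -q quiet mode, -n game count) is an assumption about
# the framework's command line, and c1..c6 are placeholder names.
import random
import re
import subprocess

def random_coefficients(n=6, low=-4.0, high=4.0):
    """Sample one candidate coefficient set uniformly from [low, high]."""
    return [random.uniform(low, high) for _ in range(n)]

def agent_args(coeffs, depth=2):
    """Format coefficients as the comma-separated string passed after -a."""
    pairs = ['c%d=%r' % (i + 1, c) for i, c in enumerate(coeffs)]
    return ','.join(['depth=%d' % depth] + pairs)

def run_trial(coeffs, games=10):
    """Run pacman with one coefficient set; return its average score."""
    out = subprocess.check_output(
        ['python', 'pacman.py', '-p', 'ExpectimaxAgent',
         '-a', agent_args(coeffs), '-l', 'smallClassic',
         '-q', '-n', str(games)])
    return float(re.search(r'Average Score:\s*(-?[\d.]+)', out.decode()).group(1))

def collect_samples(n_trials, path='samples.csv'):
    """Append one CSV line per trial: six coefficients, then average score."""
    with open(path, 'a') as f:
        for _ in range(n_trials):
            coeffs = random_coefficients()
            f.write(','.join(map(str, coeffs)) + ',%s\n' % run_trial(coeffs))
```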
This produced a comma-separated data file, where the first six entries of each line are the generated coefficients, and the last entry is pacman’s average score over the course of 10 games with that set of coefficients.
1.8600012677476485, -0.03552331692745003, 1.841124773175646, 2.654340114506205, -0.8045050830568572, 0.2193506754716683, 766.3
This was the script I used to apply gradient descent. My theta was simply the vector consisting of all the coefficients.
I chose an alpha value of 0.01, and my algorithm had no trouble finding a minimum.
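A minimal batch-gradient-descent sketch consistent with that setup, assuming each training row holds the six sampled coefficients and the target is the average score (this is a generic pure-Python version, not my exact script):

```python
# Batch gradient descent for a linear fit y ≈ X · theta.
# theta plays the role of the coefficient vector described above.

def gradient_descent(X, y, alpha=0.01, iterations=100):
    """Fit theta by repeatedly stepping down the mean-squared-error gradient."""
    m = len(y)        # number of training samples
    n = len(X[0])     # number of features (coefficients)
    theta = [0.0] * n
    for _ in range(iterations):
        # predictions and errors for the whole batch
        predictions = [sum(t * x for t, x in zip(theta, row)) for row in X]
        errors = [p - yi for p, yi in zip(predictions, y)]
        # simultaneous update of every component of theta
        theta = [theta[j] - alpha / m * sum(e * row[j] for e, row in zip(errors, X))
                 for j in range(n)]
    return theta
```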
Running it for 100 iterations:
makes it seem like we’ve found the lower limit for the cost. Running it for 500 iterations convinces me even further:
Yeah, we’re there.
After running for 500 iterations, my script comes up with these coefficients:
158.55581188, -80.87408792, -1.69430904, 3.274168, -10.32809479, -13.93307463
Seen conveniently next to the features they are applied to:
158.55581188 * currentScore +
…and next to the coefficients I made:
score = 1 * currentScore +
They are, unsurprisingly, wildly different.
But how do they stack up?
Gradient descent coefficients: Average Score: 847.48
Hand-tuned coefficients: Average Score: 1159.78
So, the win rate is actually pretty close (~5% difference).
But the difference in the average score between the two functions is 312.3 points. That’s pretty big.
I expected gradient descent to win out over my hand-made coefficients. I think it’s likely that the way I generated my training samples limited gradient descent’s potential; [-4, 4] is a pretty small window for selecting sample coefficient values.
I think better training data could be generated by using my hand-made coefficients as base values for the generated coefficients, or simply generating more training data with a larger range of possible coefficient values.
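One way to sketch the first of those ideas - perturbing the hand-made coefficients instead of sampling blindly (the base values and spread here are placeholders):

```python
# Hypothetical sampler: generate candidate coefficient sets in a window
# centered on hand-tuned base values, rather than uniformly in [-4, 4].
import random

def perturbed_coefficients(base, spread=2.0):
    """Sample each coefficient from [b - spread, b + spread] around its base."""
    return [b + random.uniform(-spread, spread) for b in base]
```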
I’d be interested to see how an unsupervised algorithm performs on this problem!