The University of Waterloo Computer Science Club held a Google-sponsored game AI contest. The goal was to develop an AI for Planet Wars, a variation of Galcon, which distills the feel of an RTS down to the most basic rules possible.
I challenged a couple of co-workers to join the contest, and at some point we had over 12 people in our internal message group. To up the ante I also ordered a small cup with the intriguing engraving “Most admirable genius – Yager AI Contest 2010”. Alas, I didn’t win it, since I got a little lazy after leading for a couple of weeks, but the little project was nonetheless pretty interesting.
Early in the contest I had written a pretty cool build script in Python (the bot itself was written in Java, mostly out of curiosity about working with the latest Eclipse version). It let me pit any combination of bots against each other on any combination of maps and see the summarized outcome of all games.
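The original script is not shown here, but its core idea can be sketched in a few lines of Python. This is a hypothetical reconstruction: `run_tournament` and the injected `play` callback are assumptions standing in for whatever actually invoked the Planet Wars engine, which in reality would shell out to the game binary and parse its result.

```python
import itertools
from collections import Counter

def run_tournament(bots, maps, play):
    """Run every bot pairing on every map and tally the outcomes.

    play(bot_a, bot_b, game_map) is expected to return the winner's
    name, or None for a draw. In the real setup it would launch the
    Planet Wars engine as a subprocess and parse the game result.
    """
    results = Counter()
    for (a, b), game_map in itertools.product(
            itertools.combinations(bots, 2), maps):
        winner = play(a, b, game_map)
        results[winner if winner else "draw"] += 1
    return results

# Toy resolver standing in for the real engine invocation:
# the alphabetically first bot always wins.
demo = run_tournament(["MyBot", "RefBot"], ["map1", "map2"],
                      lambda a, b, m: min(a, b))
print(dict(demo))  # {'MyBot': 2}
```

Keeping the match runner injectable like this is also what makes such a script pleasant to iterate on: swapping bots, maps, or even the engine itself is a one-line change.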
My in-game debugging capabilities were severely lacking, though. It was simply no fun debugging the games and finding out what was going wrong. Although I had a cool build environment, I failed to make the fine-tuning fun. And like any human being I like fun, and its absence in this spare-time project meant I spent no more time than necessary testing different properties, resulting in fewer iterations, resulting in worse rankings, resulting in not winning our internal competition. That matters especially in this contest, where I feel the better bots most probably won not by using some super-advanced AI algorithms, but by really good tuning of fairly basic rules.