Paintball Game Simulation

A paintball game simulation against enemy AI controlled by finite state machines

About Paintball Game Simulation

Paintball Game Simulation was a project developed for the IGB383 - AI in Games Unit at QUT. In this project, we were tasked with developing four different enemy AI agents using finite state machines. Each enemy was to make use of 3-4 behaviour states, and when the player entered a certain zone of the map, the enemies would switch states according to their finite state machine setup. We also had to develop these enemies to use the Greedy and A* algorithms when pathfinding their way through the game's map. The original project brief was much simpler, with no real game in mind. To help contextualise and inform the design of the enemy AI, I developed extra systems so that the player and the enemy AI compete in a paintball game and are scored based on how many times they hit agents on the opposing side.
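
To illustrate the approach described above, here is a minimal Python sketch of a table-driven finite state machine. This is not the project's actual code; the state names, event names, and the example "hunter" transition table are assumptions based on the behaviours described on this page, and the real project used its own game-engine implementation.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    COLLECT = auto()
    ATTACK = auto()
    FLEE = auto()

class Event(Enum):
    PLAYER_ENTERED_ZONE = auto()
    PLAYER_LEFT_ZONE = auto()
    OUT_OF_AMMO = auto()
    AMMO_COLLECTED = auto()

class EnemyFSM:
    """A finite state machine driven by a DFA-style transition table.

    Different enemy types reuse the same set of states but supply
    different tables, which is what gives each one unique behaviour.
    """
    def __init__(self, table, initial_state):
        self.table = table          # {(state, event): next_state}
        self.state = initial_state

    def handle(self, event):
        # Look up the (state, event) pair; stay in the current state
        # if the table defines no transition for it.
        self.state = self.table.get((self.state, event), self.state)
        return self.state

# Hypothetical table for a "hunter"-style enemy: it attacks as soon as
# the player enters its zone and only collects ammo once it runs out.
HUNTER_TABLE = {
    (State.PATROL, Event.PLAYER_ENTERED_ZONE): State.ATTACK,
    (State.ATTACK, Event.PLAYER_LEFT_ZONE): State.PATROL,
    (State.ATTACK, Event.OUT_OF_AMMO): State.COLLECT,
    (State.COLLECT, Event.AMMO_COLLECTED): State.PATROL,
}

hunter = EnemyFSM(HUNTER_TABLE, State.PATROL)
print(hunter.handle(Event.PLAYER_ENTERED_ZONE))  # State.ATTACK
```

Keeping the transitions in a plain table like this makes it easy to give each enemy type its own DFA table while sharing the same state and update code.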

What I Learned From This Project

Shown above is the result of a 5-and-a-half-minute session within the simulation (watch here). The results are quite typical of a session of this length: the trooper enemy type did the worst and the hunter did the best. The trooper often did worse than the other enemy types, by and large because it lacks the collect state that would allow it to gather ammo to shoot at the player, instead relying on stumbling upon ammo while in its other three states. The hunter did the best due to its aggressive behaviour, but because of this it also ran out of ammo quickly and tended to need time to restock. The assassin's behaviour meant that it almost always had ammo ready when it entered its attack state, so it scored points more consistently while attacking, although its more elusive behaviour gave it fewer opportunities to score than the other agents. The unexpected result, however, was the strategist enemy type, which performed almost as well as the hunter. As the only difference between it and the trooper is that it makes use of the collect state, it was expected to perform only slightly better than the trooper. These results suggest, however, that the added collect state, by letting it actively seek out ammo, significantly improved its performance.


These results could be flawed, of course. The results of a session are largely dependent on the player and the actions they perform in-game. They could potentially move towards areas that are away from specific enemies, or manipulate the trigger zones so that certain enemies never enter their attack state, skewing the scoreboard in favour of certain enemy types. It is, of course, expected that different enemy types will perform better in different scenarios. However, multiple test sessions were run in which different strategies for navigating the map and shooting enemies were employed, and all of them produced results similar to those shown above. Therefore, this simulation project provides some interesting insight into how different play styles affect how well an agent performs.

Aside from insights into the outcomes of the project, the development process provided some valuable lessons as well. I now have a good understanding of what finite state machines are and how to use DFA tables to produce unique behaviours from the same set of states. I was also already aware of the Greedy and A* algorithms, but I now have a deeper understanding of how they work and am better able to analyse pseudocode. Combining all of this, I feel confident that I can develop enemies and other AI-controlled entities in my projects that exhibit a greater number of more complex behaviours.
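
To show the difference between the two pathfinding approaches mentioned above, here is a small, self-contained Python sketch of grid pathfinding; this is not the project's implementation, and the grid representation and function name are illustrative. Flipping the greedy flag switches between greedy best-first search (heuristic only) and A* (cost so far plus heuristic).

```python
import heapq

def find_path(grid, start, goal, greedy=False):
    """Pathfind on a 2D list of 0 (walkable) / 1 (blocked) cells.

    With greedy=True the frontier is ordered by the heuristic alone
    (greedy best-first search); otherwise by g + h (A*), which also
    accounts for the cost already travelled.
    """
    def h(cell):
        # Manhattan distance heuristic to the goal.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost_so_far = {start: 0}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            new_cost = cost_so_far[current] + 1
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                priority = h(nxt) if greedy else new_cost + h(nxt)
                heapq.heappush(frontier, (priority, nxt))
                came_from[nxt] = current

    if goal not in came_from:
        return []  # no path exists

    # Walk back from the goal to reconstruct the path.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return list(reversed(path))
```

The only difference between the two modes is the priority used for the frontier queue, which is why A* finds shortest paths while greedy search can be led astray by the heuristic around obstacles.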


 

Get in touch at markryanauman@gmail.com