How to Design a Worthy Opponent: AI in Game Development

I don’t play games a lot, but when I do, I choose Age of Empires. In this real-time strategy game, I’m offered the thrill of building an entire civilization from survival to dominance in just a couple of hours.

My peer group has long since surpassed me in skill, so I often play against the game’s artificial intelligence (AI). This got me thinking one day: how does the game AI work? How does it plan long-term, manage multiple units, and defend against our tactics, like we humans do?

With 10 years of computer science research under my belt, I dug a little deeper. Read on to learn about two fundamental algorithms, and why game designers limit AI’s potential for the sake of our enjoyment.

The baseline for artificial intelligence

First, let’s pretend we are building AI for a guard archer. We can program three states for this AI: attack mode (🏹), retreat mode (🏃), and dead (💀).

Looks peaceful, but something is amiss. | Photo by jcoope12 on Pixabay

As expected, our archer attacks any threat and retreats if it’s low on health (no suicide mission here). In retreat mode, our archer stays low, returning to attack mode once fully recovered. If our archer receives a fatal blow, it simply dies.

A finite state machine diagram describing an archer’s behavior.

This framework of states and conditions is called a finite state machine (FSM). FSMs are well-suited for bounded games like Pac-Man, where each ghost moves according to its own FSM. Chase, roam, flee, die.
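
In code, the whole machine fits in a lookup table. Here’s a minimal Python sketch of the archer above; the state and event names are my own labels for illustration:

```python
# A minimal finite state machine for our guard archer.
# States and events are illustrative labels, not engine code.
TRANSITIONS = {
    # (current state, event) -> next state
    ("attack",  "low_health"):   "retreat",
    ("attack",  "fatal_blow"):   "dead",
    ("retreat", "fully_healed"): "attack",
    ("retreat", "fatal_blow"):   "dead",
}

class ArcherFSM:
    def __init__(self):
        self.state = "attack"  # archers start ready to fight

    def handle(self, event):
        # Look up the transition; stay put if no rule matches.
        self.state = TRANSITIONS.get((self.state, event), self.state)

archer = ArcherFSM()
archer.handle("low_health")    # attack -> retreat
archer.handle("fully_healed")  # retreat -> attack
archer.handle("fatal_blow")    # attack -> dead
print(archer.state)            # "dead"
```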

Given how simple FSMs appear, you may be surprised to learn they’re behind Half-Life, an award-winning first-person shooter that has sold over nine million copies. Released in 1998 and still a franchise today (see Half-Life: Alyx, 2020), Half-Life built its legacy on immersive storytelling, puzzle solving, and non-player character design.

Finite state machines are used in Pac-Man (1980) and Half-Life: Alyx (2020). | Source: Wikipedia

With a few tricks, the characters in Half-Life convincingly imitate human behavior. AI researcher Dr. Tommy Thompson explains the FSMs behind these characters on his YouTube channel. Each character runs a custom version of a base FSM with over 80 actions (i.e., states). Sets of related tasks are abstracted into higher-level states, which I think of as FSMs of FSMs. Variety and complexity make characters unique and believable.
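
To picture an FSM of FSMs: a high-level state can itself contain a small machine of sub-states. Here’s a toy Python sketch of the idea, with invented state names; it’s nothing like Half-Life’s actual code:

```python
# A toy hierarchical FSM: the high-level "patrol" state wraps its own
# sub-machine of low-level tasks.
class PatrolState:
    SUB_ORDER = ["walk_route", "look_around", "idle_chatter"]

    def __init__(self):
        self.sub_state = "walk_route"

    def tick(self):
        # Cycle through the patrol sub-tasks.
        i = self.SUB_ORDER.index(self.sub_state)
        self.sub_state = self.SUB_ORDER[(i + 1) % len(self.SUB_ORDER)]

class Character:
    def __init__(self):
        self.state = PatrolState()  # a high-level state holding a nested FSM

    def tick(self, enemy_visible):
        if enemy_visible:
            self.state = "combat"   # high-level transition overrides sub-states
        elif isinstance(self.state, PatrolState):
            self.state.tick()       # otherwise, run the nested machine
```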

Finite state machine diagram of a creature from Half-Life: Alyx. | Source: Joe van den Heuvel

On top of that, Half-Life FSMs are dynamic, meaning the underlying conditions can change. Based on how the game is going, a character may rewire its FSM to achieve a new goal. Just as we change our strategies when we receive new information, so does the AI. This flexibility gives the effect of opponents savvy enough to change course, keeping players on their toes.
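
In code, rewiring can be as simple as editing the transition table at runtime. Continuing the toy archer from earlier (the new state is, again, an invention):

```python
# Mid-game, the archer's goal shifts from self-preservation to holding
# the gate: at low health it now calls for reinforcements instead of fleeing.
TRANSITIONS[("attack", "low_health")] = "call_for_help"
```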

A fundamental framework, FSMs can be found in games of yesterday and games of today. The Half-Life franchise has engaged players for over 20 years. Not bad for a simple state machine.

Creating a smarter machine

At some point, FSMs reach their limit. For open-ended games like Age of Empires, each possible move leads to an exponential number of new states. FSMs cannot be programmed with this many states and conditions; the time and space required are too high, even with modern computing.

How high is too high, exactly? Take the board game Go, where stones are placed on the intersections of a 19×19 grid. The number of legal board positions in Go is roughly 2.1 × 10¹⁷⁰, more than the number of atoms in the observable universe (around 10⁸⁰). If a bounded board game has this many states, it’s practically impossible to program all the states for an open-ended one.

The board game Go: deceptively simple. | Photo by OrcaTec on Pixabay

Instead of an FSM, we can decide our next move with the following steps, where k is the number of simulations we choose to run per move (anywhere from one to as many as we can afford):

  1. Start at the current state.
  2. List all possible next moves.
  3. For each next move:
    a) Simulate the rest of the game k times.
    b) Calculate the average payback across k simulations.
  4. Compare the average payback of all next moves.
  5. Make the move with the highest average payback.

Simplified diagram of a Monte Carlo tree search.

The algorithm outlined above is called a Monte Carlo tree search (MCTS). We can pull several levers to customize it. For example, what’s the best number for k? How do we calculate payback if there are many ways to win? What’s the balance between exploring 30 steps ahead along one move vs. exploring the paths of many next moves but only 3 steps ahead?
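
To make this concrete, here’s a minimal Python sketch of steps 1–5, using a toy stick-taking game as a stand-in for a real engine. The NimState class and its methods are my inventions for illustration; production MCTS implementations are far richer:

```python
import random

class NimState:
    # Toy game: players alternately take 1-3 sticks; whoever takes the
    # last stick wins. Any state offering these four methods would work.
    def __init__(self, sticks=10, our_turn=True):
        self.sticks, self.our_turn = sticks, our_turn

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.sticks]

    def play(self, move):
        return NimState(self.sticks - move, not self.our_turn)

    def is_over(self):
        return self.sticks == 0

    def winner_is_us(self):
        # Whoever made the last move took the last stick and won.
        return not self.our_turn

def random_playout(state):
    # Step 3a: simulate the rest of the game with random moves.
    # Payback is 1 for a win, 0 for a loss (one simple way to define it).
    while not state.is_over():
        state = state.play(random.choice(state.legal_moves()))
    return 1.0 if state.winner_is_us() else 0.0

def best_move(state, k=2000):
    scores = {}
    # Steps 1-2: start at the current state and list all possible next moves.
    for move in state.legal_moves():
        next_state = state.play(move)
        # Step 3b: average the payback across k simulations of this move.
        scores[move] = sum(random_playout(next_state) for _ in range(k)) / k
    # Steps 4-5: compare the averages and make the highest-payback move.
    return max(scores, key=scores.get)

print(best_move(NimState(sticks=10)))  # usually 2: leaving a multiple of 4 wins
```

Full MCTS goes a step further: rather than treating each next move independently, it grows a tree and uses the statistics from earlier simulations to decide which branches deserve more playouts. That’s the depth-vs.-breadth lever in action.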

As players level up, AI does too. | Photo by Florian Olivo on Unsplash

Machine learning researchers are constantly optimizing MCTS by pulling on these levers. In this paper from MIT, the authors sped up MCTS by estimating payback where possible instead of running full simulations for every move; estimates come from comparing a potential move to moves previously calculated. They also trained the AI to read and learn from the game manual (the Matrix in real life!). Their enhanced Monte Carlo search beat the built-in AI of the strategy game Civilization II in more than three out of four games.

…But not too smart (Why we need to rein in AI)

Do we want to play against an AI that’s unbeatable, where the odds are always stacked against us? It makes for fascinating research, but probably not for fun. Most players want reasonable opponents and challenges at their skill level.

Playing games should be an (overall) enjoyable experience. | Source: Canva

AI that plays too well can leave players feeling defeated, believing the game is unfair or rigged. Sid Meier, the creator of Civilization, had to remove AI features even after writing the code for them:

“[I] couldn’t balance the game with [multi-player] alliances because the computer could exploit alliances almost as good as the player. This would leave gamers with a sense that they couldn’t win because the computer was cheating”

Sid Meier, Creator of Civilization. Computer Gaming World, 1993

AI also has the advantage of speed over humans. The AMD Ryzen 7 3700X processor, for instance, has 8 cores and 16 threads. That’s like having 8 brains and 16 (extremely fast) hands. Developers limit this capability, because “Fight a losing battle!” is not a great selling point.

The AMD Ryzen 7 3700X stores 8 cores in less than 2×2 inches. | Photo by Olivier Collet on Unsplash

Other issues arise when AI developers use deep learning, a popular machine learning approach that mimics our brains via artificial neural networks. When developers use deep learning in games, it’s typically to enhance MCTS. Why not use deep learning on its own?

When we use deep learning on its own, we lose control over what the AI learns, and what it learns may not match what we consider right. Some things just can’t be summarized by equations. For example, in the higher-stakes domain of autonomous driving, what’s “rational” or “ethical” or “the right thing to do” are questions with no clear answers. Game AI built on deep learning alone can be unpredictably irrational, overstepping the bounds of the game’s original intent.

Take a deeper dive

For many open-ended games, MCTS is the algorithm of choice. It’s a good middle ground between FSMs, where we pre-program every action, and deep learning, where we throw AI over the fence and let it figure out life on its own. With MCTS, we can still encode our human definition of “rational” into the payback calculation that determines the next best move.
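
For example, the payback calculation is just a function we write, so our values go straight into it. A hypothetical sketch (the attribute names are inventions for illustration):

```python
# Payback is whatever we say it is: reward winning, but penalize
# play we'd consider reckless or unsporting.
def payback(final_state):
    score = 1.0 if final_state.winner_is_us() else 0.0
    score -= 0.2 * final_state.villagers_lost   # discourage careless sacrifices
    score -= 0.5 * final_state.allies_betrayed  # "rational" by our definition
    return max(score, 0.0)
```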

The game awaits you. | Photo by Alexandru Acea on Unsplash

Beyond gaming, I can see MCTS in my own life: from the mundane (“What should I eat for dinner?”) to the life-altering (“What career should I choose?”). Many of these algorithms are born from real life, striving to achieve what our intelligent minds already do.

Happy gaming,
Vina


If you like what you read, let’s get in touch. I’d love to hear your thoughts. If you want to play Age of Empires, I’d be up for that too 😉.
