Game AI Pro 1


Published: March 13, 2024
Tags: AI · Game · Programming



The book in...
One sentence:
A whirlwind tour of tons of video game AI techniques that is a great starting point for determining which AI path you want to travel further down.

Five sentences:
The first few chapters of the book talk about the human body and how neurology and the signals within your body, for things like reflexes, are analogous to AI processes. The rest of the book jumps around and covers a lot of ground but never goes into the nitty-gritty details of any one topic. That is fine: the book is a great primer on what kinds of AI are available and on the strengths, weaknesses, and use cases of the different varieties you might use in games. Some of the information is a little dated - the authors treat GPUs with hundreds of pipelines as cutting edge - but the core concepts are perfectly applicable today (10 years after publication). If you are interested in general game AI and want to get a feel for the options available, this is probably a pretty good place to start; if you are interested in one particular type of AI, there are probably better options.



Thoughts

Chapter 2 talks about neurology. It is used as an analogy for how signals propagate within the body and how they can propagate within your game's AI. Interesting to note that signals within the body - your reflexes - are actually WAY slower than you might think (compared to, for example, network speeds).

The book touches on many topics but doesn't go deeply into any of them. That isn't a bad thing. It is actually a good introduction to many AI concepts and acts as a sort of idea catalyst, giving you a jumping-off point for deciding which AI branch you want to learn more about (or which might work in your game).

Some of the info is a little dated - hundreds of pipelines in a GPU is treated as a big deal - but nothing that makes the concepts any less relevant.


Table of Contents


Part 1 - General Wisdom

· Chapter 1 - What is Game AI?

page 007:

· Chapter 2 - Informing Game AI through the Study of Neurology

page 018:
page 019:
page 022:

$$y = \begin{cases} 1 & \text{if } v_i \cdot v_w > 0 \\ 0 & \text{otherwise} \end{cases}$$

$$\omega_{i,t+1} = \omega_{i,t} + \mu(y_{ideal} - y)\,i_i$$
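
To make the update rule concrete, here is a minimal sketch (mine, not the book's) of a perceptron with the threshold output and weight update above; the vector representation and names are my own assumptions.

#include <cstddef>
#include <vector>

// Threshold output: y = 1 if the dot product of inputs and weights
// is positive, else 0.
int perceptronOutput(const std::vector<double>& inputs,
                     const std::vector<double>& weights)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return sum > 0.0 ? 1 : 0;
}

// One training step: w_i += mu * (yIdeal - y) * input_i.
void perceptronTrainStep(std::vector<double>& weights,
                         const std::vector<double>& inputs,
                         int yIdeal, double mu)
{
    const int y = perceptronOutput(inputs, weights);
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] += mu * (yIdeal - y) * inputs[i];
}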

page 023:

For example, here is a utility equation from a recent talk [Mark 10]:

$$coverChance = 0.2 + reloadNeed \times 1.0 + healNeed \times 1.5 + threatRating \times 1.3$$

$$coverChance = 1 \times w_0 + reloadNeed \times w_1 + healNeed \times w_2 + threatRating \times w_3$$
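
As a sketch of how the weighted form might be evaluated in code (the struct and weight values are illustrative, mirroring the example equation, not taken from the talk):

// Weighted utility: each consideration is scaled by a tunable weight.
struct CoverInputs { double reloadNeed, healNeed, threatRating; };

double coverChance(const CoverInputs& in)
{
    const double w0 = 0.2, w1 = 1.0, w2 = 1.5, w3 = 1.3;
    return 1.0 * w0
         + in.reloadNeed   * w1
         + in.healNeed     * w2
         + in.threatRating * w3;
}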

page 026:

· Chapter 3 - Advanced Randomness Techniques for Game AI - Gaussian Randomness, Filtered Randomness, and Perlin Noise

page 030:
page 031:
page 032:
page 034:
page 036:
page 037:
page 040:
page 042:

Part 2 - Architecture

· Chapter 4 - Behavior Selection Algorithms - An Overview

page 048:
page 049:
class FSMState
{
    virtual void onEnter();   // called once when this state becomes active
    virtual void onUpdate();  // called every tick while active
    virtual void onExit();    // called once when leaving this state
    list<FSMTransition> transitions;
};
class FSMTransition
{
    virtual bool isValid();            // should this transition fire now?
    virtual FSMState* getNextState();  // destination if it fires
    virtual void onTransition();       // side effects of the transition
};
class FiniteStateMachine
{
    void update();  // test activeState's transitions, then onUpdate()
    list<FSMState> states;
    FSMState* initialState;
    FSMState* activeState;
};
page 050:
page 052:
page 053:
page 054:
page 055:
page 056:
page 057:
page 058:

· Chapter 5 - Structural Architecture - Common Tricks of the Trade

page 064:
page 065:
page 066:
page 067:

· Chapter 6 - The Behavior Tree Starter Kit

· Chapter 7 - Real-World Behavior Trees in Script

process_behavior_node(node)
    if (node.precondition returns true) {
        node.action()
        if (node.child exists)
            process_behavior_node(node.child)
    } else {
        if (node.sibling exists)
            process_behavior_node(node.sibling)
    }
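
For concreteness, the same traversal in C++ (a sketch; the node layout with function pointers and child/sibling links is my assumption):

struct BehaviorNode
{
    bool (*precondition)();  // may this node run?
    void (*action)();        // the node's behavior
    BehaviorNode* child;     // first child, or nullptr
    BehaviorNode* sibling;   // next sibling, or nullptr
};

void processBehaviorNode(BehaviorNode* node)
{
    if (node == nullptr)
        return;
    if (node->precondition()) {
        node->action();
        processBehaviorNode(node->child);    // descend on success
    } else {
        processBehaviorNode(node->sibling);  // otherwise try the next sibling
    }
}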

· Chapter 8 - Simulating Behavior Trees - A Behavior Tree/Planner Hybrid Approach

page 100:
page 101:
page 102:
page 110:

Rather than add these special cases to each Action, we added a special Action called a FakeSim. The FakeSim Action is a special type of Composite Action called a decorator, which wraps another Action to add extra functionality to it. The FakeSim was responsible for adding incorrect information to the wrapped Action’s simulation step by modifying the world state directly. For example, there are some enemies that have a shield which makes them invulnerable to lightsaber attacks. If we want a Jedi to attack the enemy to demonstrate that the enemy is invulnerable while the shield is up, we can wrap the SwingSaber Action with a FakeSim Decorator which lowers the victim’s shield during the simulation step. Then, the SwingSaber simulation will think that the Jedi can damage the enemy and give it a good simulation result. This would allow SwingSaber to be chosen, even though it won’t actually be beneficial.
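
A sketch of how such a decorator could be structured; the Action interface, the simulate() signature, and the WorldState field are my assumptions, not the chapter's actual code:

struct WorldState { bool victimShielded = true; /* ... */ };

class Action
{
public:
    virtual ~Action() {}
    virtual float simulate(WorldState& ws) = 0;  // score this Action against ws
    virtual void execute() = 0;
};

// Decorator that feeds incorrect information to the wrapped Action's
// simulation by editing the world state first (e.g., lowering a shield).
class FakeSim : public Action
{
public:
    explicit FakeSim(Action* wrapped) : m_wrapped(wrapped) {}
    float simulate(WorldState& ws) override
    {
        WorldState faked = ws;         // copy so the real state is untouched
        faked.victimShielded = false;  // the lie: pretend the shield is down
        return m_wrapped->simulate(faked);
    }
    void execute() override { m_wrapped->execute(); }
private:
    Action* m_wrapped;
};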

· Chapter 9 - An Introduction to Utility Theory

page 114:
page 115:

$$EU = \sum_{i=1}^n D_i P_i$$
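
In code, the expected utility of a decision is just each outcome's desirability weighted by its probability (a minimal sketch, assuming parallel arrays of D and P):

#include <cstddef>
#include <vector>

// EU = sum over i of D_i * P_i.
double expectedUtility(const std::vector<double>& D,
                       const std::vector<double>& P)
{
    double eu = 0.0;
    for (std::size_t i = 0; i < D.size(); ++i)
        eu += D[i] * P[i];
    return eu;
}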

page 116:
page 117:

$$U =\frac{x}{m}$$

$$U = \left(\frac{x}{m}\right)^k$$

page 118:
page 119:

$$U = \frac{1}{1+e^{-x}}$$

page 120:
page 121:
page 122:
page 123:

$$U = \max\left(\min\left(\left(1-\frac{hp-minDmg}{maxDmg-minDmg}\right) \times (1-a) + a,\ 1\right),\ 0\right)$$

$$U = \min\left(\frac{maxDmg}{hp},\ 1\right)$$

$$U = 1 - \frac{1}{1+(e \times 0.68)^{-\left(\frac{hp}{maxHp}\times 12\right)+6}}$$
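
A sketch of these three curves as functions (my own translation; parameter names follow the equations):

#include <algorithm>
#include <cmath>

// Linear utility offset by a, clamped to [0, 1].
double utilityLinear(double hp, double minDmg, double maxDmg, double a)
{
    double u = (1.0 - (hp - minDmg) / (maxDmg - minDmg)) * (1.0 - a) + a;
    return std::max(std::min(u, 1.0), 0.0);
}

// Ratio of maximum damage to current health, capped at 1.
double utilityRatio(double hp, double maxDmg)
{
    return std::min(maxDmg / hp, 1.0);
}

// Logistic-style utility over the health fraction.
double utilityLogistic(double hp, double maxHp)
{
    const double e = 2.718281828459045;
    const double x = (hp / maxHp) * 12.0 - 6.0;
    return 1.0 - 1.0 / (1.0 + std::pow(e * 0.68, -x));
}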

page 124:

$$U = 1-\left(\frac{hp}{maxHp}\right)^{\frac{1}{(p+1)^4} \times 0.25}$$

page 126:

· Chapter 10 - Building Utility Decisions into Your Existing Behavior Tree

page 129:
page 131:

$$RawUtility = HealthGained - HealthLost$$

$$Value = HealthGained - (HealthLost \times 2.0)$$

$$Value = HealthGained - HealthLost^{1.2}$$

$$Utility = \frac{Heal\times HealPower - Delay \times DelayPower}{HealPower + DelayPower}$$

page 132:
page 134:

· Chapter 11 - Reactivity and Deliberation in Decision-Making Systems

page 137:
page 138:
page 140:
page 141:
page 142:
page 143:
page 146:

· Chapter 12 - Exploring HTN Planners through Example

page 149:
page 153:
Compound Task [AttackEnemy]
    Method 0 [WsHasTreeTrunk == true]
        Subtasks [NavigateTo(EnemyLoc), DoTrunkSlam()]
    Method 1 [WsHasTreeTrunk == false]
        Subtasks [LiftBoulderFromGround(), ThrowBoulderAt(EnemyLoc)]
page 154:
Compound Task [BeTrunkThumper]
    Method [WsCanSeeEnemy == true]
        Subtasks [NavigateToEnemy(), DoTrunkSlam()]
    Method [true]
        Subtasks [ChooseBridgeToCheck(), NavigateToBridge(), CheckBridge()]
Primitive Task [DoTrunkSlam]
    Operator [AnimatedAttackOperator(TrunkSlamAnimName)]
Primitive Task [NavigateToEnemy]
    Operator [NavigateToOperator(EnemyLocRef)]
        Effects [WsLocation = EnemyLocRef]
Primitive Task [ChooseBridgeToCheck]
    Operator [ChooseBridgeToCheckOperator]
Primitive Task [NavigateToBridge]
    Operator [NavigateToOperator(NextBridgeLocRef)]
        Effects [WsLocation = NextBridgeLocRef]
Primitive Task [CheckBridge]
    Operator [CheckBridgeOperator(SearchAnimName)]
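
To show how a domain like this might be consumed, here is a compressed sketch of a forward-decomposition loop: compound tasks expand through their first method whose condition holds, and primitive tasks apply their effects to a simulated world state. This is my own simplification (no backtracking or rollback, which a real HTN planner needs):

#include <deque>
#include <functional>
#include <string>
#include <vector>

struct WorldState { bool canSeeEnemy = false; /* ... */ };

struct Task;
struct Method
{
    std::function<bool(const WorldState&)> condition;  // e.g., WsCanSeeEnemy == true
    std::vector<const Task*> subtasks;
};
struct Task
{
    std::string name;
    bool isPrimitive = false;
    std::function<void(WorldState&)> effects;  // primitive tasks only
    std::vector<Method> methods;               // compound tasks only
};

std::vector<const Task*> plan(const Task& root, WorldState ws)
{
    std::vector<const Task*> result;
    std::deque<const Task*> open{ &root };
    while (!open.empty()) {
        const Task* t = open.front();
        open.pop_front();
        if (t->isPrimitive) {
            if (t->effects) t->effects(ws);  // simulate effects forward
            result.push_back(t);
        } else {
            for (const Method& m : t->methods) {
                if (m.condition(ws)) {       // take the first valid method
                    open.insert(open.begin(), m.subtasks.begin(), m.subtasks.end());
                    break;
                }
            }
        }
    }
    return result;
}
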
page 155:
page 156:
page 157:
Compound Task [BeTrunkThumper]
    Method [WsCanSeeEnemy == true]
        Subtasks [AttackEnemy()]  // using the new compound task
    Method [true]
        Subtasks [ChooseBridgeToCheck(), NavigateToBridge(), CheckBridge()]
Compound Task [AttackEnemy]  // new compound task
    Method [WsTrunkHealth > 0]
        Subtasks [NavigateToEnemy(), DoTrunkSlam()]
    Method [true]
        Subtasks [FindTrunk(), NavigateToTrunk(), UprootTrunk(), AttackEnemy()]
Primitive Task [DoTrunkSlam]
    Operator [DoTrunkSlamOperator]
        Effects [WsTrunkHealth += -1]
Primitive Task [UprootTrunk]
    Operator [UprootTrunkOperator]
        Effects [WsTrunkHealth = 3]
Primitive Task [NavigateToTrunk]
    Operator [NavigateToOperator(FoundTrunk)]
        Effects [WsLocation = FoundTrunk]
page 159:
Compound Task [BeTrunkThumper]
    Method [WsCanSeeEnemy == true]
        Subtasks [AttackEnemy()]
    Method [WsHasSeenEnemyRecently == true]  // new method
        Subtasks [NavToLastEnemyLoc(), RegainLOSRoar()]
    Method [true]
        Subtasks [ChooseBridgeToCheck(), NavigateToBridge(), CheckBridge()]
Primitive Task [NavToLastEnemyLoc]
    Operator [NavigateToOperator(LastEnemyLocation)]
        Effects [WsLocation = LastEnemyLocation]
Primitive Task [RegainLOSRoar]
    Preconditions [WsCanSeeEnemy == true]
    Operator [RegainLOSRoar()]
Primitive Task [NavToLastEnemyLoc]
    Operator [NavigateToOperator(LastEnemyLocation)]
        Effects [WsLocation = LastEnemyLocation]
        ExpectedEffects [WsCanSeeEnemy = true]
page 160:
Compound Task [AttackEnemy]
    Method [WsTrunkHealth > 0, AttackedRecently == false, CanNavigateToEnemy == true]
        Subtasks [NavigateToEnemy(), DoTrunkSlam(), RecoveryRoar()]
    Method [WsTrunkHealth == 0]
        Subtasks [FindTrunk(), NavigateToTrunk(), UprootTrunk(), AttackEnemy()]
    Method [true]
        Subtasks [PickupBoulder(), ThrowBoulder()]
Primitive Task [DoTrunkSlam]
    Operator [DoTrunkSlamOperator]
        Effects [WsTrunkHealth += -1, AttackedRecently = true]
Primitive Task [RecoveryRoar]
    Operator [PlayAnimation(TrunkSlamRecoverAnim)]
Primitive Task [PickupBoulder]
    Operator [PickupBoulder()]
Primitive Task [ThrowBoulder]
    Operator [ThrowBoulder()]
page 165:

· Chapter 13 - Hierarchical Plan-Space Planning for Multi-unit Combat Maneuvers

page 181:

· Chapter 14 - Phenomenal AI Level-of-Detail Control with the LOD Trader

page 188:
page 189:

· Chapter 15 - Runtime Compiled C++ for Rapid AI Development

· Chapter 16 - Plumbing the Forbidden Depths - Scripting and AI

page 220:
page 232:
page 233:
page 236:

Part 3 - Movement and Pathfinding

· Chapter 17 - Pathfinding Architecture Optimizations

page 242:
page 243:
page 244:
page 245:
page 246:
page 247:

$$ f(x) = g(x) + h(x)$$

$$ f(x) = g(x) + (h(x) \times weight)$$
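
The weighted variant only changes how a node's priority is computed (a one-line sketch):

// Plain A* uses weight = 1.0. Inflating the heuristic (weight > 1)
// makes the search greedier: it usually expands fewer nodes, but the
// resulting path may no longer be optimal.
double fCost(double g, double h, double weight)
{
    return g + h * weight;
}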

page 249:

· Chapter 18 - Choosing a Search Space Representation

page 253:
page 255:
page 256:
page 257:

· Chapter 19 - Creating High-Order Navigation Meshes through Iterative Wavefront Edge Expansions

· Chapter 20 - Precomputed Pathfinding for Large and Detailed Worlds on MMO Servers

page 269:
page 275:

· Chapter 21 - Techniques for Formation Movement Using Steering Circles

· Chapter 22 - Collision Avoidance for Preplanned Locomotion

· Chapter 23 - Crowd Pathfinding and Steering Using Flow Field Tiles

page 308:
page 309:

· Chapter 24 - Efficient Crowd Simulation for Mobile Games

page 318:
page 320:
page 321:

· Chapter 25 - Animation-Driven Locomotion with Locomotion Planning

Part 4 - Strategy and Tactics

· Chapter 26 - Tactical Position Selection - An Architecture and Query Language

page 340:
page 341:
page 348:
Conditions = {canReachBefore_the_target = true}
page 352:
page 353:
page 358:

· Chapter 27 - Tactical Pathfinding on a NavMesh

page 366:

· Chapter 28 - Beyond the Kung-Fu Circle - A Flexible System for Managing NPC Attacks

page 370:
page 371:
page 372:
page 373:

· Chapter 29 - Hierarchical AI for Multiplayer Bots in Killzone 3

· Chapter 30 - Using Neural Networks to Control Agent Threat Response

page 391:

Part 5 - Agent Awareness and Knowledge Representation

· Chapter 31 - Crytek’s Target Tracks Perception System

page 406:
page 407:

· Chapter 32 - How to Catch a Ninja - NPC Awareness in a 2D Stealth Platformer

page 413:
page 415:
page 416:
page 417:
page 418:
INTEREST_PRIORITY_LOWEST = 0
INTEREST_PRIORITY_BROKEN = 1
INTEREST_PRIORITY_MISSING = 2
INTEREST_PRIORITY_SUSPECT = 4
INTEREST_PRIORITY_SMOKE = 4
INTEREST_PRIORITY_CORPSE = 4
INTEREST_PRIORITY_NOISE_QUIET = 4
INTEREST_PRIORITY_NOISE_LOUD = 4
INTEREST_PRIORITY_BOX = 5
INTEREST_PRIORITY_SPIKEMINE = 5
INTEREST_PRIORITY_DISTRACTIONFLARE = 10
INTEREST_PRIORITY_TERROR = 20
page 419:

· Chapter 33 - Asking the Environment Smart Questions

· Chapter 34 - A Simple and Robust Knowledge Representation System

· Chapter 35 - A Simple and Practical Social Dynamics System

page 444:

· Chapter 36 - Breathing Life into Your Background Characters

page 452:
page 453:
page 454:
page 456:
page 457:

· Chapter 37 - Alibi Generation - Fooling All the Players All the Time

Part 6 - Racing

· Chapter 38 - An Architecture Overview for AI in Racing Games

page 473:

· Chapter 39 - Representing and Driving a Race Track for AI Controlled Vehicles

· Chapter 40 - Racing Vehicle Control Systems using PID Controllers

page 493:
page 494:

$$e(t) = Y(t) - R(t)$$

$$u(t) = K_p \cdot e(t) + K_i \cdot \int e(t)dt + K_d \cdot \frac{d}{dt}e(t) $$

page 497:
  1. Set all gains (K) to 0 and increase Kp until the system behaves in a desirable manner with no overshoot or oscillation. Tuning this value first is advisable, as it will generally have the biggest effect on the output.

  2. Increase Ki to eliminate the steady-state error.

  3. Adjust Kd to reduce any overshoot or reduce the settling time as required.
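
A minimal discrete-time PID sketch following the equations above (the dt handling and derivative-on-error form are my assumptions):

// u = Kp*e + Ki*integral(e) + Kd*de/dt, with e(t) = Y(t) - R(t)
// as defined on page 494.
struct PID
{
    double kp = 0.0, ki = 0.0, kd = 0.0;
    double integral = 0.0, prevError = 0.0;

    double update(double current, double target, double dt)
    {
        const double error = current - target;
        integral += error * dt;                        // I term accumulates
        const double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};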

· Chapter 41 - The Heat Vision System for Racing AI - A Novel Way to Determine Optimal Track Positioning

page 502:
page 503:

· Chapter 42 - A Rubber-Banding System for Gameplay and Race Management

Part 7 - Odds and Ends

· Chapter 43 - An Architecture for Character-Rich Social Simulation

page 515:

· Chapter 44 - A Control-Based Architecture for Animal Behavior

· Chapter 45 - Introduction to GPGPU for AI

page 541:
for each agent
    for each neighbor within radius
        calculate agent separation force away from neighbor
        calculate center of mass for agent cohesion force
        calculate average heading for agent alignment force
    calculate wander force
    sum prioritized weighted forces and apply to agent velocity
for each agent
    apply velocity to position

· Chapter 46 - Creating Dynamic Soundscapes Using an Artificial Sound Designer

page 551:
page 552:

· Chapter 47 - Tips and Tricks for a Robust Third-Person Camera System

page 559:

· Chapter 48 - Implementing N-Grams for Player Prediction, Procedural Generation, and Stylized AI

page 568:

$$P(\text{event } e) = \frac{\text{number of ways } e \text{ can happen}}{\text{number of all possible outcomes}}$$

$$P(\text{event } e) = \frac{\text{number of times } e \text{ occurred}}{\text{number of times any event occurred}}$$

page 569:
page 570:

The probability that an event $e$ will occur next is equal to the probability of the matching pattern containing $e$ as its $N$th event, as shown in Equation 48.3.

$$P(\text{event } e \text{ is next}) = P(\text{matching pattern ending in } e \text{ is next})$$

$$P(\text{matching pattern } mp) = \frac{\text{number of } mp \text{ occurrences}}{\text{sum of all matching pattern occurrences}}$$

page 571:

$$P(\text{event pattern } ep) = \frac{\text{number of } ep \text{ occurrences}}{\text{sum of all pattern occurrences so far}}$$

$$\text{Total pattern occurrences: } T = L - (N - 1)$$
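
A sketch of the counting and prediction described here, assuming events are single characters and a plain map of pattern counts (my structure, not the chapter's):

#include <cstddef>
#include <map>
#include <string>

// N-gram predictor: count every window of N consecutive events, then
// predict the event whose (N-1)-event prefix matches recent history.
struct NGramPredictor
{
    int n;
    std::map<std::string, int> counts;  // length-n pattern -> occurrences

    explicit NGramPredictor(int N) : n(N) {}

    void observe(const std::string& history)
    {
        // A sequence of length L contains T = L - (N - 1) patterns.
        for (std::size_t i = 0; i + static_cast<std::size_t>(n) <= history.size(); ++i)
            counts[history.substr(i, n)]++;
    }

    char predict(const std::string& history) const
    {
        if (history.size() < static_cast<std::size_t>(n - 1))
            return 0;
        const std::string prefix = history.substr(history.size() - (n - 1));
        char best = 0;
        int bestCount = 0;
        for (const auto& kv : counts) {
            if (kv.first.compare(0, n - 1, prefix) == 0 && kv.second > bestCount) {
                best = kv.first.back();
                bestCount = kv.second;
            }
        }
        return best;  // 0 if no pattern matching the prefix has been seen
    }
};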

page 577: