Devblog 20: Three Dimensions of Pain

Month Nine

The next step in the process was to create a third dimension for the pathfinding system. This may sound trivial, but it isn’t. Which raises an obvious question: but Richard, why didn’t you just create three dimensions in the first place? Incompetence may be too strong a word. So I’m going to say (with some legitimacy) that it was simpler and faster to get the basic system working in two dimensions, because there’s less that can go wrong. Although I’ve been making progress towards this end, it has been one of those things that takes an unfortunate amount of time.

In other news, the new hire, Jamie McCully, has begun work. He will have the opportunity to write his own devblog soon enough. At any rate, he has learned quickly and is already making an invaluable contribution. I suspect he may have been expecting more specific instruction on his first task than “do what you want”… but delegation is an important skill. Of course, this may give the wrong impression; that task took hours to explain.

Devblog 19: Octrees and Octants

Month Eight

For strategy games, with potentially hundreds of units running about at once, nearest neighbour searching is a non-trivial problem. This is when a unit must check who its neighbours are as efficiently as possible to make decisions about movement and combat.

If there are only ever going to be a handful of units in the game at any one time, then a linear search is probably fine. A linear search is when code examines a list of items, one by one, until it finds what it’s looking for. This is fine for a shopping list, but not necessarily for checking hundreds of units multiple times a second. If 100 units each check all 100 units every time, that’s 10,000 checks per pass!
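For context, the naive approach looks something like the sketch below (illustrative names, not the actual codebase); the important part is the loop over every unit in the game, which every unit then runs for itself.

    import math
    from dataclasses import dataclass

    @dataclass
    class Unit:
        x: float
        y: float
        team: int

    def find_neighbours_linear(unit, all_units, radius):
        """Naive linear search: examine every unit in the game, one by one."""
        neighbours = []
        for other in all_units:
            if other is unit:
                continue
            if math.hypot(other.x - unit.x, other.y - unit.y) <= radius:
                neighbours.append(other)
        return neighbours

    # 100 units each searching all 100 units is roughly 10,000 distance checks,
    # and this happens several times a second.
    units = [Unit(x=float(i), y=0.0, team=i % 2) for i in range(100)]
    for u in units:
        find_neighbours_linear(u, units, radius=5.0)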

To investigate, I created 300 units, noting frame rate as a heuristic for code performance. When collision detection was switched off the simulation fluctuated around 90 Frames Per Second (FPS). Not bad. However, when collision detection was activated, using a linear search, this reduced the frame rate to 10. Clearly something had to be done.

Octree illustration, Wikipedia.

The ideal solution is spatial partitioning, using an octree. This means that the game world is encompassed by a huge box (octant), and that box contains eight smaller boxes, and each of those boxes is divided recursively into eight more, etc. Each octant logs which units are contained within, adding units when they enter the box, and removing them when they exit.

This solution allows recursive searching up and down the tree, to find the boxes relevant to the space being searched. This is especially powerful with a dynamic octree, meaning octants only exist where units exist, significantly reducing the number of octants that need to be searched. Instead of checking a three hundred item list, we just check the contents of a dozen or so boxes.
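Roughly speaking, the idea looks like the sketch below: a simplified toy with made-up names and a fixed maximum depth, not the engine code. Each octant logs its units, child octants only come into existence when a unit needs to be stored inside them, and a query only descends into boxes that overlap the search area.

    from dataclasses import dataclass, field

    MAX_DEPTH = 4  # a small fixed depth keeps the sketch simple

    @dataclass
    class Unit:
        x: float
        y: float
        z: float
        team: int = 0

    @dataclass
    class Octant:
        cx: float                 # centre of this box
        cy: float
        cz: float
        half: float               # half of the box's side length
        depth: int = 0
        units: list = field(default_factory=list)
        children: dict = field(default_factory=dict)  # child index -> Octant, made on demand

        def insert(self, unit):
            if self.depth == MAX_DEPTH:
                self.units.append(unit)       # deepest level: log the unit here
                return
            index = (unit.x > self.cx) * 4 + (unit.y > self.cy) * 2 + (unit.z > self.cz)
            if index not in self.children:
                # Dynamic octree: a child only exists once a unit needs to live inside it.
                q = self.half / 2
                self.children[index] = Octant(
                    self.cx + (q if unit.x > self.cx else -q),
                    self.cy + (q if unit.y > self.cy else -q),
                    self.cz + (q if unit.z > self.cz else -q),
                    q, self.depth + 1)
            self.children[index].insert(unit)

        def query(self, x, y, z, radius, found):
            # Skip any octant whose box cannot overlap the search sphere.
            if (abs(x - self.cx) > self.half + radius or
                    abs(y - self.cy) > self.half + radius or
                    abs(z - self.cz) > self.half + radius):
                return
            found.extend(self.units)
            for child in self.children.values():
                child.query(x, y, z, radius, found)

    # Build the tree, then each unit only gathers candidates from nearby boxes;
    # a final distance check narrows those candidates down to true neighbours.
    world = Octant(0.0, 0.0, 0.0, half=512.0)
    for u in [Unit(float(i), 0.0, float(i % 7)) for i in range(300)]:
        world.insert(u)
    nearby = []
    world.query(10.0, 0.0, 3.0, radius=8.0, found=nearby)

The real tree also has to move units between octants as they travel, which the sketch above leaves out.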

In the below image octants are visualised using cyan and magenta. The dynamic octree must prune itself, and so when an octant has no units inside it is placed on a list of vacant octants. Vacant octants have a timer, and after a few seconds of disuse are deleted. This ensures the octree never gets too big, and allows octants to be reused where units are most likely to move again.

Occupied octants are cyan and vacant octants are magenta. Units are a mess.
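The pruning policy, roughly sketched (the two-second threshold and the function names are illustrative, not the real values), amounts to stamping an octant with the time it became empty and sweeping the vacant list every so often:

    import time

    VACANCY_TIMEOUT = 2.0   # seconds an octant may sit empty before deletion (illustrative)
    vacant_octants = []     # octants waiting either to be reused or to be pruned

    def on_octant_emptied(octant):
        """Called when the last unit leaves an octant: start its disuse timer."""
        octant.vacant_since = time.monotonic()
        vacant_octants.append(octant)

    def on_octant_reused(octant):
        """Called when a unit re-enters a vacant octant: cancel the timer and keep it."""
        octant.vacant_since = None
        vacant_octants.remove(octant)

    def prune(delete_octant):
        """Run periodically: delete octants that have sat empty for too long."""
        now = time.monotonic()
        still_vacant = []
        for octant in vacant_octants:
            if now - octant.vacant_since >= VACANCY_TIMEOUT:
                delete_octant(octant)        # detach from its parent and free it
            else:
                still_vacant.append(octant)
        vacant_octants[:] = still_vacant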

The proof is in the pudding. But in this case, the pudding is code, and the metaphor is broken. Regardless, it shouldn’t be hard to improve upon a measly 10 FPS. The octree was applied to all unit movement and combat neighbour checks. The latest results are as follows:

1 unit: 270-280 FPS
10 units: 250-260 FPS
100 units: 130-160 FPS
200 units: 90-110 FPS
300 units: 70-80 FPS
400 units: 50-60 FPS
500 units: 30-40 FPS

These figures can undoubtedly still be improved, but they already demonstrate the efficacy of spatial partitioning compared to a naive solution like linear search. The octree also provides the foundations upon which a deterministic raycaster can be built.

One optimisation has been to sort units inside octants by team, and to push those lists up the tree recursively. What this means is that units searching for nearby enemies may only have to check up the tree, until they find the octant which encloses their maximum range. If there are no units belonging to a hostile team inside that octant, then the search is resolved. The alternative is suboptimal: to move up and then down the octree, searching every unit in every octant for whether they belong to a hostile team.
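Sketched roughly (the helpers and names below are illustrative, not the engine’s), each octant keeps per-team lists that cover its whole subtree, units are registered up through every ancestor, and the enemy search walks upwards only until the current box covers the unit’s maximum range:

    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Unit:
        x: float
        y: float
        z: float
        team: int

    @dataclass
    class Octant:
        cx: float
        cy: float
        cz: float
        half: float
        parent: "Octant" = None
        # team id -> every unit of that team anywhere inside this octant's subtree
        units_by_team: dict = field(default_factory=lambda: defaultdict(list))

    def register(octant, unit):
        """Log a unit in its octant and, recursively, in every ancestor, sorted by team."""
        node = octant
        while node is not None:
            node.units_by_team[unit.team].append(unit)
            node = node.parent

    def encloses_range(octant, unit, max_range):
        """Does this octant's box fully contain the sphere of max_range around the unit?"""
        return (abs(unit.x - octant.cx) + max_range <= octant.half and
                abs(unit.y - octant.cy) + max_range <= octant.half and
                abs(unit.z - octant.cz) + max_range <= octant.half)

    def any_hostiles(octant, my_team):
        return any(team != my_team and members
                   for team, members in octant.units_by_team.items())

    def search_for_enemies(unit, start_octant, max_range):
        # Walk upwards only until the current octant encloses the unit's maximum range.
        node = start_octant
        while node.parent is not None and not encloses_range(node, unit, max_range):
            node = node.parent
        if not any_hostiles(node, unit.team):
            return []   # resolved early: no hostile team anywhere within range
        # Otherwise pay for the narrower search inside this subtree; a final
        # distance check would trim these candidates to max_range.
        return [u for team, members in node.units_by_team.items() if team != unit.team
                for u in members]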

Devblog 18: Flocking and Hiring

Month Seven

Last month a pathfinding solution was created. This month a Flocking solution was required. Flow Fields generate a route from A to B, but they don’t tell us how units should move along that path in relation to each other.

Flocking was first described by Craig Reynolds in 1986. Simply put, flocking algorithms enable moving agents to adjust their heading relative to nearby agents. The agents are referred to as “boids”. Legend has it that flock agents are called boids because this is how “birds” is pronounced with a thick New York accent.
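For reference, Reynolds’ three classic rules are separation (don’t crowd your neighbours), alignment (match their average heading) and cohesion (drift towards their centre). A textbook version, with placeholder weights rather than anything from the bespoke algorithm described below, looks roughly like this:

    import math
    from dataclasses import dataclass

    @dataclass
    class Boid:
        x: float
        y: float
        vx: float
        vy: float

    # Illustrative weights; a real implementation tunes these and layers obstacle
    # avoidance and path following (the flow field) on top.
    SEPARATION_W, ALIGNMENT_W, COHESION_W = 1.5, 1.0, 1.0
    NEIGHBOUR_RADIUS = 10.0

    def steer(boid, flock):
        """Return a steering adjustment based on the boid's nearby flockmates."""
        neighbours = [b for b in flock if b is not boid and
                      math.hypot(b.x - boid.x, b.y - boid.y) < NEIGHBOUR_RADIUS]
        if not neighbours:
            return 0.0, 0.0
        n = len(neighbours)
        # Separation: move away from boids that are too close.
        sep_x = sum(boid.x - b.x for b in neighbours) / n
        sep_y = sum(boid.y - b.y for b in neighbours) / n
        # Alignment: match the average heading of the neighbours.
        ali_x = sum(b.vx for b in neighbours) / n - boid.vx
        ali_y = sum(b.vy for b in neighbours) / n - boid.vy
        # Cohesion: drift towards the neighbours' centre of mass.
        coh_x = sum(b.x for b in neighbours) / n - boid.x
        coh_y = sum(b.y for b in neighbours) / n - boid.y
        return (SEPARATION_W * sep_x + ALIGNMENT_W * ali_x + COHESION_W * coh_x,
                SEPARATION_W * sep_y + ALIGNMENT_W * ali_y + COHESION_W * coh_y)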

The bespoke modified flocking algorithm works to an extent, though it still needs refinement. In the below image we can see both a group of boids moving past obstacles to get to their destination, and what this looks like from a pathfinding perspective.

Fifty boids move past obstacles with neither grace nor beauty.

In other news, I am delighted to announce that Norn Industries has hired its first employee. I have never recruited anyone before, so this is a bold adventure for both of us. I am excited and confident that together we are in a much stronger position to deliver the best possible product. Two programmers are better than one.

Devblog 17: Automated Tests and Flow Fields

Month Six

This month has been consumed by automated testing and pathfinding.

It is difficult to create units which are unique and balanced. This is important for a multiplayer game, and as a solo developer I have to build tools to test the design assumptions I have made.

In generic software development, automated testing allows programmers to quickly check that their code is returning the right data. In an environment as complex as a Real Time Strategy game, where soldiers navigate a 3D environment in real time, there are so many variables to consider that the only way to investigate is to run gladiatorial test matches.

I created a user interface to easily customise tests. Each test places a number of units at either side of a small map. The number is determined by their cost (value). The units immediately rush to attack the other team, until one side has none left. The code then calculates who has won, and by how much, and saves that to a text file. This means I can run hundreds of tests overnight and examine the results in the morning, providing a rough measure of whether a unit type is too strong or too weak.

In any game it is unlikely that battles will just be between individual units, so each test creates multiple units, and places them at random locations. Each pairing is then repeated a number of times to provide an average result, which should be more accurate.
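In rough outline, that loop might look like the sketch below. The unit names, costs, budget and file name are placeholders, and simulate_battle stands in for the real game simulation; the structure is the point: pick a pairing, repeat it with random placement, average the margin of victory, and write it out for the morning.

    import json
    import random

    UNIT_COSTS = {"spearman": 50, "archer": 60, "knight": 120}   # illustrative values
    BUDGET = 600       # each side fields units up to the same total cost
    REPEATS = 5        # repeat each pairing to average out random placement

    def spawn_army(unit_type, side):
        """Fill one side of a small map with as many units as the budget allows."""
        count = BUDGET // UNIT_COSTS[unit_type]
        x_offset = 0.0 if side == "left" else 80.0
        return [{"type": unit_type,
                 "x": x_offset + random.uniform(0.0, 20.0),
                 "y": random.uniform(0.0, 100.0)} for _ in range(count)]

    def run_tests(matchups, simulate_battle, path="balance_results.txt"):
        """simulate_battle(army_a, army_b) -> (surviving value of A, surviving value of B)."""
        results = []
        for type_a, type_b in matchups:
            margins = []
            for _ in range(REPEATS):
                value_a, value_b = simulate_battle(spawn_army(type_a, "left"),
                                                   spawn_army(type_b, "right"))
                margins.append(value_a - value_b)    # positive: A won, by this much value
            results.append({"a": type_a, "b": type_b,
                            "average_margin": sum(margins) / REPEATS})
        with open(path, "w") as f:
            json.dump(results, f, indent=2)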

Pathfinding has been something else. The old system generated one path per unit, which clearly will not scale if hundreds are involved. Instead, I followed the Flow Field design described by Elijah Emerson, published in ‘Game AI Pro’.

Each map is divided into sectors, which contain tiles. Sectors connect to other sectors via portals (green dots, white lines show connections). Red lines show the flow fields themselves. Each tile has a direction, and when units enter that tile they read the direction they should be facing.

The path visualised above shows the directions five units from the top right must travel to arrive at their goal in the bottom left. Each unit starts in its own sector. But there could be five or ten units in each sector, and yet the cost of generating the path remains the same. The advantage is a considerably more efficient means of moving large groups from A to B.
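To make the idea concrete, here is a toy single-sector version (the real system stitches many sectors together through portals, which this sketch skips): a breadth-first flood from the goal gives every tile a cost, each tile then points at its cheapest neighbour, and a unit simply reads the direction of the tile it is standing on.

    from collections import deque

    # One sector's tile grid: 0 = open ground, 1 = wall.
    GRID = [
        [0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0],
    ]
    NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def integration_field(goal):
        """Breadth-first flood from the goal: each reachable tile gets its distance to it."""
        cost = {goal: 0}
        queue = deque([goal])
        while queue:
            cx, cy = queue.popleft()
            for dx, dy in NEIGHBOURS:
                nx, ny = cx + dx, cy + dy
                if (0 <= ny < len(GRID) and 0 <= nx < len(GRID[0])
                        and GRID[ny][nx] == 0 and (nx, ny) not in cost):
                    cost[(nx, ny)] = cost[(cx, cy)] + 1
                    queue.append((nx, ny))
        return cost

    def flow_field(goal):
        """Each open tile stores a single direction: towards its cheapest neighbour."""
        cost = integration_field(goal)
        directions = {}
        for (x, y), c in cost.items():
            best_cost, best_dir = min(
                (cost.get((x + dx, y + dy), c), (dx, dy)) for dx, dy in NEIGHBOURS)
            directions[(x, y)] = best_dir if best_cost < c else (0, 0)
        return directions

    # The field is built once per goal, no matter how many units follow it; a unit
    # never asks for its own path, it just reads the tile it is standing on.
    field = flow_field(goal=(4, 3))
    dx, dy = field[(0, 0)]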

Initially, it took as long as 100 milliseconds to generate a path for one unit moving along that distance. After some optimisation, the worst case was reduced to 1.5 milliseconds.

Devblog 16: Iterative Design

Month Five

This month, good progress. Environment art is ongoing, marketing research is complete, and new features have been implemented.

The terrain system has been reworked, and the player can now edit individual walls. There is a new query button, which allows the player to mouse over a unit and see basic information. The healthbar has been implemented, with new and old styles to choose from. There’s also a new charge button, which allows the player to order a group of soldiers to launch into melee against the nearest enemy. There have been many other fixes and much refactoring besides.

I have also been consulting with friends who play strategy games, to try and get a feel for how to implement unique factions.  How many will there be?  How will they be different?  What narratives will guide them?  These questions have been answered for now, and there will be four playable factions, each with a unique identity and distinct mechanics.

The next determinism problem to resolve will be replacing Unity’s non-deterministic Raycast system. That will be non-trivial and expensive, but once it is complete there won’t be much left to do before multiplayer is viable.

A raycaster is an object which creates an imaginary line from A to B, and then checks if this line intersects any triangles (geometry in the game world). This is used to determine if units can see each other, and if projectiles have impacted walls. In principle, not too complex. But the problem is efficiency; the geometry in the game world must be organised so that only the closest objects need to be checked. One such design pattern, built around a tree data structure, is binary space partitioning. Doing all that well is non-trivial.
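The line-versus-triangle check at the heart of it is well-trodden ground. A sketch of the standard Möller-Trumbore test, in plain code and ignoring the determinism problem a real engine version must solve, looks like this:

    EPSILON = 1e-9

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    def ray_hits_triangle(origin, direction, v0, v1, v2):
        """Moller-Trumbore test: distance along the ray to the triangle, or None for a miss."""
        edge1, edge2 = sub(v1, v0), sub(v2, v0)
        h = cross(direction, edge2)
        a = dot(edge1, h)
        if abs(a) < EPSILON:            # the ray is parallel to the triangle's plane
            return None
        f = 1.0 / a
        s = sub(origin, v0)
        u = f * dot(s, h)
        if u < 0.0 or u > 1.0:          # the hit point falls outside the triangle
            return None
        q = cross(s, edge1)
        v = f * dot(direction, q)
        if v < 0.0 or u + v > 1.0:
            return None
        t = f * dot(edge2, q)
        return t if t > EPSILON else None    # t is the distance from A towards B

    # A ray fired from A = (0, 0, -5) along +z, against a triangle facing it;
    # this returns 5.0, the distance to the triangle's plane.
    hit = ray_hits_triangle((0.0, 0.0, -5.0), (0.0, 0.0, 1.0),
                            (-1.0, -1.0, 0.0), (1.0, -1.0, 0.0), (0.0, 1.0, 0.0))

The hard part, as noted above, is not this test but storing the world’s triangles so that only the closest ones ever reach it.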