Jamie and I are back to work after a Christmas break. This means there isn’t much to report, but we have been making progress on lockstep, networking, pathfinding, and flocking. These systems are complex, and have proved challenging. It’s the most difficult work either of us has done. Nevertheless, we are fast approaching a basic completeness, upon which other functionality can be safely added.
It has been a year since work on the project began. Reflecting upon that, much has been done and much is still left to do. After we have finished the aforementioned concerns, the next tasks will involve the implementation of base building, economy, and art integration.
Other non-trivial tasks on the to-do list involve deterministic raycasting and computer player intelligence. The latter will be modelled on human attention and emotional states, so that the computer player behaves more like a human, providing a more interesting experience.
This month Jamie and I worked together to make the terrain creation logic network-ready. We encountered various problems which are typical of game development; the player intuitively expects everything to behave sensibly, but each sensible behaviour is actually a collection of many little pieces of logic which have to be strung together.
When it came to the question of how exactly units create new terrain, the answer had to involve code telling units to flee the construction site before any work can begin. And that fleeing behaviour must itself be intelligible: units must flee the shortest distance to the nearest available safe space.
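That "shortest distance to the nearest safe space" behaviour maps naturally onto a breadth-first search outward from the construction site. Here is a minimal sketch; `is_safe` and `neighbours` are hypothetical interfaces standing in for the game's actual terrain queries:

```python
from collections import deque

def nearest_safe_tile(start, is_safe, neighbours):
    """Breadth-first search outward from `start`, returning the first
    safe tile found. Because BFS expands in rings of increasing
    distance, the first safe tile is also the closest one, giving
    each fleeing unit the shortest possible flee distance."""
    seen = {start}
    queue = deque([start])
    while queue:
        tile = queue.popleft()
        if is_safe(tile):
            return tile
        for n in neighbours(tile):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return None  # no safe tile reachable at all
```

Each unit standing on the construction site would run this (or share one search) and walk to the returned tile.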
With Jamie left to mud wrestle network code, I returned to flocking. The most obvious issue was that groups of units suffered from jittery movement when they were close to an obstacle, be this another unit or a wall. The solution was twofold.
First, the order of execution changed: groups are sorted by distance from the destination, then moved in that order. This increases the odds of units moving in the right direction, and thus moving with less jitter. If a group is told to move and a unit in the middle moves first, this causes problems because there is no space for it to move into. Second, after units have been moved by the flocking algorithm, their positions are adjusted. This adjustment ensures that any movement into an obstacle is corrected immediately. The result is less jitter and smoother group movement.
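The two fixes can be sketched roughly as follows. This is an illustrative simplification, not the game's actual code: units are bare (x, y) tuples and obstacles are hypothetical (x, y, radius) circles:

```python
import math

def order_group_for_movement(units, destination):
    """Sort units so those nearest the destination move first.
    Front units vacate space before the units behind them try to
    advance, which is what reduces the jitter."""
    def dist(u):
        return math.hypot(u[0] - destination[0], u[1] - destination[1])
    return sorted(units, key=dist)

def clamp_out_of_obstacle(pos, obstacles, radius=0.5):
    """Post-flocking correction: if the flocking step pushed a unit
    into an obstacle, push it back out along the collision normal
    immediately, rather than letting it oscillate next frame."""
    x, y = pos
    for ox, oy, orad in obstacles:
        d = math.hypot(x - ox, y - oy)
        min_d = orad + radius
        if 1e-9 < d < min_d:
            scale = min_d / d
            x = ox + (x - ox) * scale
            y = oy + (y - oy) * scale
    return (x, y)
```

Each frame the group would be re-sorted, moved in order, and then each unit's position clamped.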
Another concern is ordering the impossible: when a player tells units to move to somewhere inaccessible. This could be because the player has told their units to move inside of or on top of a space they cannot reach, because there is no doorway or ramp to access that space. As it was, such an order would break the code.
The solution was to implement another part of Elijah Emerson’s flow field design. The map is composed of square sectors, but not all sectors connect. For example, if a player creates new terrain, the sectors on top and inside are inaccessible from the surrounding terrain. Each inaccessible group of sectors is stored as an ‘island’. Islands provide a useful metaphor, and enable much quicker searching to find the next best location units should move to.
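Grouping sectors into islands amounts to finding connected components, which a flood fill handles neatly. A minimal sketch, assuming hypothetical `passable` and `neighbours` interfaces over the game's sector graph:

```python
from collections import deque

def label_islands(sectors, neighbours):
    """Group connected sectors into 'islands' via flood fill.
    Returns a dict mapping each sector to an island id: two sectors
    share an id if and only if units can walk between them, so an
    unreachable destination is detected with one dictionary lookup."""
    island = {}
    next_id = 0
    for start in sectors:
        if start in island:
            continue
        # New island: flood outward from this sector.
        island[start] = next_id
        queue = deque([start])
        while queue:
            s = queue.popleft()
            for n in neighbours(s):
                if n not in island:
                    island[n] = next_id
                    queue.append(n)
        next_id += 1
    return island
```

Comparing island ids is what makes "find the next best reachable location" fast: sectors on a different island can be skipped without any pathfinding.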
Traffic jams also existed. The naïve solution is to find the fastest path to the destination, which works fine for individual units. But for groups this falls apart, because units begin to snag on corners, limiting the number who can move around a corner simultaneously. If the terrain didn’t have sharp edges this might not be a problem. The solution is to force groups to path down the middle of sectors, for the most part avoiding edges and traffic jams.
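Routing down the middle of sectors can be as simple as converting a sector-level path into waypoints at each sector's centre. A sketch, assuming square sectors addressed by (column, row):

```python
def centre_waypoints(sector_path, sector_size):
    """Turn a path of (col, row) sector coordinates into world-space
    waypoints at each sector's centre, keeping groups away from the
    sharp corners where they would otherwise snag."""
    half = sector_size / 2
    return [(c * sector_size + half, r * sector_size + half)
            for c, r in sector_path]
```

The flow field then only needs to handle local steering between these centred waypoints.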
One of the last obvious additions will be a line-of-sight (LOS) pass. This ensures that if the goal can be seen, the unit will ignore the flow field completely and aim directly for the destination, which is the optimal outcome. To calculate LOS efficiently, Bresenham’s line-drawing algorithm is required.
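Bresenham walks the cells along the straight line between two grid points using only integer arithmetic, so an LOS check is just "is any cell on that line blocked?". A sketch over a hypothetical `grid[y][x]` blocked-cell representation:

```python
def line_of_sight(grid, x0, y0, x1, y1):
    """Return True if no blocked cell lies on the Bresenham line
    from (x0, y0) to (x1, y1). grid[y][x] is True where blocked."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if grid[y0][x0]:
            return False          # an obstacle blocks the line
        if x0 == x1 and y0 == y1:
            return True           # reached the goal unobstructed
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```

Because everything is integer addition and comparison, the same walk produces identical results on every machine, which matters for a lockstep game.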
If all goes well, next week pathfinding will be relatively complete, and Jamie will have done the same with networking. That should put us in a good position before the two week Christmas break, prepared to begin the new year with refreshed enthusiasm.
The next step in the process was to create a third dimension for the pathfinding system. This may sound trivial, but isn’t. Which creates an obvious question: but Richard, why didn’t you just create three dimensions in the first place? Incompetence may be too strong a word. So I’m going to say (with some legitimacy) that it was simpler and faster to get the basic system working in two dimensions, because there’s less that can go wrong. So although I’ve been making progress towards this end, it has been one of those things which has taken an unfortunate amount of time.
In other news, the new hire, Jamie McCully, has begun work. He will have the opportunity to write his own devblog soon enough. At any rate, he has learned quickly and is already proving an invaluable contribution. I suspect he may have been expecting more specific instruction on his first task than “do what you want”… but delegation is an important skill. Of course, this may give the wrong impression; that task took hours to explain.
For strategy games, with potentially hundreds of units running about at once, nearest neighbour searching is a non-trivial problem. This is when a unit must check who its neighbours are as efficiently as possible to make decisions about movement and combat.
If there are only ever going to be a handful of units in the game at any one time, then a linear search is probably fine. A linear search is when code examines a list of items, one by one, until it finds what it’s looking for. This is fine for a shopping list, but not necessarily for checking hundreds of units multiple times a second. If 100 units are each checking 100 units every time, that’s 10,000 checks!
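For concreteness, here is what the naive approach looks like; `neighbours_linear` is a hypothetical name, with units as bare (x, y) tuples:

```python
def neighbours_linear(units, target, radius):
    """Naive nearest-neighbour query: scan every unit and keep those
    within `radius` of `target`. With n units each querying per
    frame this costs O(n^2) distance checks per frame."""
    tx, ty = target
    r2 = radius * radius
    return [u for u in units
            if u != target
            and (u[0] - tx) ** 2 + (u[1] - ty) ** 2 <= r2]
```

Note the squared-distance comparison: even the naive version avoids a square root per check.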
To investigate, I created 300 units, noting frame rate as a heuristic for code performance. When collision detection was switched off the simulation fluctuated around 90 Frames Per Second (FPS). Not bad. However, when collision detection was activated using a linear search, the frame rate dropped to 10. Clearly something had to be done.
The ideal solution is spatial partitioning, using an octree. This means that the game world is encompassed by a huge box (octant), and that box contains eight smaller boxes, and each of those boxes is divided recursively into eight more, etc. Each octant logs which units are contained within, adding units when they enter the box, and removing them when they exit.
This solution allows recursive searching up and down the tree, to find the boxes relevant to the space being searched. This is especially powerful with a dynamic octree, meaning octants only exist where units exist, significantly reducing the number of octants that need to be searched. Instead of checking a three hundred item list, we just check the contents of a dozen or so boxes.
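A minimal dynamic point octree can be sketched as below. This is an illustrative simplification, not the game's actual structure: children are created lazily (so octants only exist where units exist), and a sphere query prunes every octant whose cube cannot intersect the search sphere:

```python
class Octree:
    """Minimal dynamic point octree sketch. Each node covers a cube
    (centre, half-width) and stores points directly until it holds
    more than `capacity`, at which point it splits into up to eight
    lazily created children."""

    def __init__(self, centre, half, capacity=4):
        self.centre = centre      # (x, y, z) centre of this octant
        self.half = half          # half-width of the octant's cube
        self.capacity = capacity
        self.units = []           # (x, y, z) points held at this node
        self.children = None      # lazily built list of 8 child octants

    def _child_index(self, p):
        cx, cy, cz = self.centre
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def _child(self, i):
        if self.children[i] is None:
            cx, cy, cz = self.centre
            q = self.half / 2
            off = lambda bit: q if bit else -q
            centre = (cx + off(i & 1), cy + off(i & 2), cz + off(i & 4))
            self.children[i] = Octree(centre, q, self.capacity)
        return self.children[i]

    def insert(self, p):
        if self.children is not None:
            self._child(self._child_index(p)).insert(p)
            return
        self.units.append(p)
        if len(self.units) > self.capacity and self.half > 1.0:
            # Split: redistribute points into lazily created children.
            self.children = [None] * 8
            points, self.units = self.units, []
            for q in points:
                self._child(self._child_index(q)).insert(q)

    def query_sphere(self, centre, radius, out=None):
        """Collect all points within `radius` of `centre`, pruning
        any octant whose cube cannot intersect the search sphere."""
        out = [] if out is None else out
        dist2 = 0.0
        for c, s in zip(self.centre, centre):
            d = max(abs(s - c) - self.half, 0.0)
            dist2 += d * d
        if dist2 > radius * radius:
            return out            # octant entirely outside the sphere
        for p in self.units:
            if sum((a - b) ** 2 for a, b in zip(p, centre)) <= radius * radius:
                out.append(p)
        if self.children is not None:
            for child in self.children:
                if child is not None:
                    child.query_sphere(centre, radius, out)
        return out
```

A real implementation would also remove units as they move and prune empty octants, but the pruning test in `query_sphere` is where the win over a linear scan comes from.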
In the below image octants are visualised using cyan and magenta. The dynamic octree must prune itself, and so when an octant has no units inside it is placed on a list of vacant octants. Vacant octants have a timer, and after a few seconds of disuse are deleted. This ensures the octree never gets too big, and allows octants to be reused where units are most likely to move again.
The proof is in the pudding. But in this case, the pudding is code, and the metaphor is broken. Regardless, it shouldn’t be hard to improve upon a measly 10 FPS. The octree was applied to all unit movement and combat neighbour checks. The latest results are as follows:
1 unit: 270-280 FPS
10 units: 250-260 FPS
100 units: 130-160 FPS
200 units: 90-110 FPS
300 units: 70-80 FPS
400 units: 50-60 FPS
500 units: 30-40 FPS
These figures can undoubtedly still be improved, but they already demonstrate the efficacy of spatial partitioning compared to a naïve solution like linear search. The octree also provides the foundations upon which a deterministic raycaster can be built.
One optimisation has been to sort units inside octants by team, and to push those lists up the tree recursively. What this means is that units searching for nearby enemies may only have to check up the tree, until they find the octant which encloses their maximum range. If there are no units belonging to a hostile team inside that octant, then the search is resolved. The alternative is suboptimal: to move up and then down the octree, searching every unit in every octant for whether they belong to a hostile team.
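The idea of pushing team counts up the tree can be sketched like this. All names here are hypothetical, and the tree is reduced to parent links plus a half-width so only the counting mechanism is shown:

```python
from collections import Counter

class CountedOctant:
    """Octant that keeps per-team unit counts. When a unit is
    inserted at a leaf, the count is pushed up through every
    ancestor, so any octant can report what it encloses without
    visiting its children."""

    def __init__(self, half, parent=None):
        self.half = half              # half-width of this octant's cube
        self.parent = parent
        self.team_counts = Counter()

    def insert(self, team):
        node = self
        while node is not None:       # push the count up the tree
            node.team_counts[team] += 1
            node = node.parent

def enemies_possible(leaf, my_team, max_range):
    """Walk up from a unit's octant to the first ancestor wide
    enough to enclose `max_range`. If that octant holds no hostile
    units the search resolves immediately; only otherwise is a
    descent into child octants needed."""
    node = leaf
    while node.parent is not None and node.half < max_range:
        node = node.parent
    return any(t != my_team and c > 0
               for t, c in node.team_counts.items())
```

When `enemies_possible` returns False, the expensive downward search over every unit in every octant is skipped entirely.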
Last month a pathfinding solution was created. This month a flocking solution was required. Flow fields generate a route from A to B, but this doesn’t tell us how units should move along that path in relation to each other.
The flocking algorithm was first described by Craig Reynolds in 1986. Simply put, flocking algorithms enable moving agents to adjust their heading relative to nearby agents. The agents are referred to as “boids”. Legend has it that flock agents are called boids because this is how “birds” is pronounced in a thick New York accent.
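Reynolds' model combines three steering rules: separation (move away from crowded neighbours), alignment (match the neighbours' average heading), and cohesion (drift toward the neighbours' centre of mass). A generic sketch of those classic rules, not the game's bespoke version, with each boid as a (position, velocity) pair of 2-D tuples:

```python
def boid_steering(boid, neighbours, sep_w=1.5, ali_w=1.0, coh_w=1.0):
    """Return a weighted combination of the three classic flocking
    forces for one boid, given its nearby neighbours."""
    (px, py), (vx, vy) = boid
    if not neighbours:
        return (0.0, 0.0)
    n = len(neighbours)
    sep = [0.0, 0.0]              # separation: away from neighbours
    avg_v = [0.0, 0.0]            # alignment: average neighbour velocity
    centre = [0.0, 0.0]           # cohesion: neighbour centre of mass
    for (nx, ny), (nvx, nvy) in neighbours:
        sep[0] += px - nx
        sep[1] += py - ny
        avg_v[0] += nvx
        avg_v[1] += nvy
        centre[0] += nx
        centre[1] += ny
    ali = (avg_v[0] / n - vx, avg_v[1] / n - vy)
    coh = (centre[0] / n - px, centre[1] / n - py)
    return (sep_w * sep[0] + ali_w * ali[0] + coh_w * coh[0],
            sep_w * sep[1] + ali_w * ali[1] + coh_w * coh[1])
```

The neighbour list is exactly what the octree's sphere query provides, which is why nearest-neighbour searching and flocking go hand in hand.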
The bespoke modified flocking algorithm works to an extent, though it still needs refinement. In the below image we can see both a group of boids moving past obstacles to get to their destination, and what this looks like from a pathfinding perspective.
In other news, I am delighted to announce that Norn Industries has hired its first employee. I have never recruited anyone before, so this is a bold adventure for both of us. I am excited and confident that together we are in a much stronger position to deliver the best possible product. Two programmers are better than one.