Jamie and I are back to work after a Christmas break. This means there isn’t much to report, but we have been making progress on lockstep, networking, pathfinding, and flocking. These systems are complex, and have proved challenging. It’s the most difficult work either of us has done. Nevertheless, we are fast approaching a basic completeness, upon which other functionality can be safely added.
It has been a year since work on the project began. Reflecting upon that, much has been done and much is still left to do. After we have finished the aforementioned concerns, the next tasks will involve the implementation of base building, economy, and art integration.
Other non-trivial tasks on the to-do list involve deterministic raycasting and computer player intelligence. The latter will be modelled on human attention and emotional states, so that the computer player behaves more like a human, providing a more interesting experience.
This month Jamie and I were working together to make the terrain creation logic network ready. We encountered various problems which are typical of game development: the player intuitively expects everything to behave sensibly, but each sensible behaviour is actually a collection of many little pieces of logic which have to be strung together.
When it came to the question of how exactly units create new terrain, the answer had to involve code to tell units to flee from the construction site before any work can begin. And that fleeing behaviour must itself be intelligible: units must flee the shortest distance to the nearest available safe space.
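The shortest-flee-distance behaviour falls naturally out of a breadth-first search that expands outward from the unit's current cell. The game is written in C#, but the idea can be sketched in Python; `is_blocked` and `neighbours` are hypothetical stand-ins for the real grid API.

```python
from collections import deque

def nearest_safe_cell(start, is_blocked, neighbours):
    """Flood outward from a unit standing on a construction site and
    return the closest cell that is safe. Because BFS visits cells in
    order of distance, the first unblocked cell found is the nearest."""
    seen = {start}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if not is_blocked(cell):
            return cell  # nearest available safe space
        for n in neighbours(cell):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return None  # no safe cell reachable at all
```

If the unit is already standing on a safe cell, the search returns immediately, so the same routine doubles as a cheap "do I need to flee?" check.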
With Jamie left to mud wrestle network code, I returned to flocking. The most obvious issue was that groups of units suffered from jittery movement when they were close to an obstacle, be this another unit or a wall. The solution was twofold.
First, the order of execution was changed, so that units in a group are ordered by distance from the destination, and then moved in that order. This increases the odds of units moving in the right direction, and thus moving with less jitter. If a group is told to move and a unit in the middle moves first, this causes problems because there is no space for it to move into. Second, after units have been moved by the flocking algorithm, their positions are adjusted. This adjustment ensures that any movement into an obstacle is corrected immediately. The result is less jitter and smoother group movement.
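Both fixes can be condensed into a single movement tick. This is a Python sketch of the idea rather than the game's C# code; `flock_step` and `clamp_to_free` are hypothetical stand-ins for the real flocking and collision routines, and units are reduced to positions on a line for brevity.

```python
def step_group(units, destination, flock_step, clamp_to_free):
    """One movement tick for a group, illustrating the two fixes:
    1. sort so units nearest the destination move first, which opens
       up space for the units behind them to move into;
    2. immediately correct any movement that ended up inside an
       obstacle, before the next unit moves."""
    for unit in sorted(units, key=lambda u: abs(destination - u["pos"])):
        unit["pos"] += flock_step(unit, destination)  # flocking move
        unit["pos"] = clamp_to_free(unit["pos"])      # obstacle correction
```

Because the correction happens inside the loop, a trailing unit never plans its move against a neighbour that is still intersecting a wall.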
Another concern is ordering the impossible: when a player tells units to move somewhere inaccessible. For example, the player may order units inside or on top of a space that cannot be reached because there is no doorway or ramp. As it was, such an order would break the code.
The solution was to implement another part of Elijah Emerson’s flow field design. The map is composed of square sectors, but not all sectors connect. For example, if a player creates new terrain, the sectors on top and inside are inaccessible from the surrounding terrain. Each inaccessible group of sectors is stored as an ‘island’. Islands provide a useful metaphor, and enable much quicker searching to find the next best location units should move to.
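Grouping sectors into islands is a classic flood fill over the sector graph. Sketched in Python rather than the game's C#; `connected` is a hypothetical stand-in for the real "do these two sectors share a walkable border?" check.

```python
def label_islands(sectors, connected):
    """Assign every sector an island number via flood fill. Units can
    only ever path between sectors that share an island number, so an
    order into a different island can be rejected (or redirected to
    the nearest reachable spot) without running the full pathfinder."""
    island_of = {}
    next_island = 0
    for sector in sectors:
        if sector in island_of:
            continue  # already swallowed by an earlier flood fill
        island_of[sector] = next_island
        stack = [sector]
        while stack:
            current = stack.pop()
            for other in sectors:
                if other not in island_of and connected(current, other):
                    island_of[other] = next_island
                    stack.append(other)
        next_island += 1
    return island_of
```

Comparing two island numbers is a constant-time lookup, which is what makes the "find the next best location" search so much quicker than probing the flow field.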
Traffic jams also existed. The naïve solution is to find the fastest path to the destination, which works fine for individual units. But for groups this falls apart, because units begin to snag on corners, limiting the number that can move around a corner simultaneously. If the terrain didn't have sharp edges this might not be a problem. The solution is to force groups to path down the middle of sectors, thus for the most part avoiding edges and traffic jams.
One of the last obvious additions will be to add a line-of-sight (LOS) pass to the process. This ensures that if the goal can be seen, then the unit will ignore the flow field completely and aim directly for the destination, which is the optimal outcome. To calculate LOS efficiently, Bresenham's line drawing algorithm is required.
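Bresenham walks exactly the grid cells a straight line passes through, using only integer arithmetic, which also makes it a good fit for a deterministic lockstep simulation. A Python sketch of the LOS check (`is_blocked` again being a hypothetical stand-in for the real grid):

```python
def line_of_sight(x0, y0, x1, y1, is_blocked):
    """Walk the cells between a unit and its goal with Bresenham's
    line algorithm; the goal is visible only if no cell on the way is
    blocked. Integer-only maths, so it is deterministic across machines."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if is_blocked(x0, y0):
            return False  # something stands between unit and goal
        if (x0, y0) == (x1, y1):
            return True   # reached the goal cell unobstructed
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
```

When this returns true, the flow field can be skipped entirely and the unit steered straight at the destination.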
If all goes well, next week pathfinding will be relatively complete, and Jamie will have done the same with networking. That should put us in a good position before the two week Christmas break, prepared to begin the new year with refreshed enthusiasm.
When Richard told me that he needed “a Riker to my Picard to bring this project over the line”… well, how could I say no? Ever since I first played Metal Gear Solid 2’s demo I have been obsessed with gaming, and felt like I should attempt to make something. Since graduation in 2017 I have worked in software development. Experience as a programmer and later consultant allowed me to hone my skills, but recently, as everything certain has become uncertain, I felt it was time for a change.
With online multiplayer games it is important to make sure each player remains 'in sync' with the others at all times. This is where the lockstep protocol comes in: it ensures that all players see the same actions taking place at the same time (within milliseconds, depending on latency). Richard had already started implementation using John Pan's lockstep framework, but it needed work.
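The core of lockstep is small enough to sketch. This is an illustrative Python model of the general idea, not John Pan's framework or our C# code: commands issued on turn N are scheduled for turn N + delay on every machine, and a turn only executes once every player's command list for it has arrived.

```python
class LockstepQueue:
    """Minimal lockstep turn scheduler. In a real implementation every
    player broadcasts a (possibly empty) command list for every turn;
    here receive() plays the role of the network delivering those lists."""

    def __init__(self, players, delay=2):
        self.players = players
        self.delay = delay      # turns of latency budget
        self.turn = 0
        self.pending = {}       # (turn, player) -> list of commands

    def issue(self, player, command):
        # Local commands are stamped for a future turn, giving the
        # network time to deliver them to everyone else.
        key = (self.turn + self.delay, player)
        self.pending.setdefault(key, []).append(command)

    def receive(self, turn, player, commands):
        self.pending[(turn, player)] = commands

    def try_advance(self, execute):
        # Stall unless every player's commands for this turn are here;
        # stalling is what keeps all simulations identical.
        if self.turn >= self.delay and not all(
                (self.turn, p) in self.pending for p in self.players):
            return False
        for p in self.players:
            for cmd in self.pending.pop((self.turn, p), []):
                execute(p, cmd)
        self.turn += 1
        return True
```

Everything downstream of `execute` must be deterministic, which is why later posts worry about deterministic raycasting.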
During my first month this is where I focused, after spending a few days getting to grips with the codebase and Unity itself. My technical background was C# for web applications, so I knew the language, but the architecture was mysterious; much like the title of this post (a reference to one of the greatest movies of all time).
After preliminary analysis the code looked abstract, all too abstract. So I began to strip out ('rip out' wouldn't be far from the truth) lots of files I felt were unnecessary. Once I got the project building again I re-organised the classes into a legible folder structure. One thing to note here: if you're going to move classes about, do it in the Unity editor, so that game objects don't lose their script references.
Once I was satisfied with a first pass it was nearly time to test. However, I had a few issues with the lobby and network code. For a start, players were not able to click the ready button to start the game. This proved to be a simple fix as the system was not treating joining players as humans.
The next task was to play a game, and to begin the cube wars. I discovered that the system was treating every spawn command as if it came from the host; this was easy enough to fix. But then it seemed that actions, whether to spawn a unit or move, were not reflected on the other players' screens. This was somewhat problematic; had I missed something in the refactor? An investigation had to be carried out.
I began by taking it back to basics. I stepped through every line of code, from the spawn command until the unit appeared on-screen. I then compared the path the code took to other classes that I thought were relevant, in case there was a call to a secretive method. This was not the case.
My main mistake was assuming that the commands were actually being sent between players, because I was examining code which looked very much like it received data packets. After a few days, and the odd existential crisis, I had a cup of tea and prepared to continue my war with the network code.
The entire British Empire was built on cups of tea. And if you think I’m going to war without one, you’re mistaken.
Eddy (Lock, Stock and Two Smoking Barrels)
Fuelled on tea and biscuits I had another crack at the code. I created a ‘SendCommandOverNetwork’ method which utilised Unity’s NetworkTransport class. Suddenly I was able to send spawn commands. “It’s all completely chicken soup” as Tom would say.
Now I could finally get to testing. I had to be sure that commands were being executed by all players at the same time. Move commands had to take a similar amount of time to process and act upon, as any difference here would be an easy way for players to become 'out of sync'. I believe simple solutions are the best, so I decided to log data to a .txt file, revealing critical information like who sent the command and how long it took to execute.
I compared the logs, finding that one game was at least four seconds faster than the other. Yes, seconds, not milliseconds. Even after logging more data I was still no closer to a solution. I then wondered: what if the Unity editor is just faster than the game builds? In hindsight, it did seem like an unfair test to have one game build and the other within the Unity editor. So I changed the logging slightly, and had two game builds logging to separate files. You can already guess what I discovered: not only was the time of execution exactly the same, but the calculation times were pretty much equal, give or take a few milliseconds.
My first month was good, but I struggled with a few of Unity's nuances, learning harsh lessons in the process. The transition from consultancy to working with a start-up on a game has been interesting. Work no longer feels like 'work'; bugs are welcomed as a challenge to overcome rather than with disdain or fear of missed deadlines. I am looking forward to the next challenge, which is of course more lockstep features regarding building, stances and overclocking.
The next step in the process was to create a third dimension for the pathfinding system. This may sound trivial, but isn’t. Which creates an obvious question: but Richard, why didn’t you just create three dimensions in the first place? Incompetence may be too strong a word. So I’m going to say (with some legitimacy) that it was simpler and faster to get the basic system working in two dimensions, because there’s less that can go wrong. So although I’ve been making progress towards this end, it has been one of those things which has taken an unfortunate amount of time.
In other news, the new hire, Jamie McCully, has begun work. He will have the opportunity to write his own devblog soon enough. At any rate, he has learned quickly and is already proving an invaluable contribution. I suspect he may have been expecting more specific instruction on his first task than “do what you want”… but delegation is an important skill. Of course, this may give the wrong impression; that task took hours to explain.
For strategy games, with potentially hundreds of units running about at once, nearest neighbour searching is a non-trivial problem. This is when a unit must check who its neighbours are as efficiently as possible to make decisions about movement and combat.
If there are only ever going to be a handful of units in the game at any one time, then a linear search is probably fine. A linear search is when code examines a list of items, one by one, until it finds what it's looking for. This is fine for a shopping list, but not necessarily for checking hundreds of units multiple times a second. If 100 units are each checking 100 units every time, that's 10,000 checks per pass!
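For concreteness, the naive approach looks something like this (a Python sketch rather than the game's C#; positions are plain coordinate tuples):

```python
import math

def neighbours_linear(unit, units, radius):
    """Naive nearest-neighbour search: scan every unit in the game and
    keep those within radius. Run once per unit per frame, this costs
    N * N distance checks - the quadratic blow-up described above."""
    return [other for other in units
            if other is not unit and math.dist(unit, other) <= radius]
```

Every call is O(N), and every unit makes one, which is exactly the behaviour the octree below is designed to replace.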
To investigate, I created 300 units, noting frame rate as a heuristic for code performance. When collision detection was switched off the simulation fluctuated around 90 Frames Per Second (FPS). Not bad. However, when collision detection was activated, using a linear search, this reduced the frame rate to 10. Clearly something had to be done.
The ideal solution is spatial partitioning, using an octree. This means that the game world is encompassed by a huge box (octant), and that box contains eight smaller boxes, and each of those boxes is divided recursively into eight more, etc. Each octant logs which units are contained within, adding units when they enter the box, and removing them when they exit.
This solution allows recursive searching up and down the tree, to find the boxes relevant to the space being searched. This is especially powerful with a dynamic octree, meaning octants only exist where units exist, significantly reducing the number of octants that need to be searched. Instead of checking a three hundred item list, we just check the contents of a dozen or so boxes.
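A minimal sketch of the idea, in Python for brevity (the real implementation is C# and tracks actual unit objects rather than bare positions; `MIN_SIZE` is an assumed tuning constant):

```python
class Octant:
    """One node of a dynamic octree. Positions are (x, y, z) tuples;
    child octants are created lazily, only where units actually are."""
    MIN_SIZE = 1.0  # stop subdividing below this half-size

    def __init__(self, centre, half):
        self.centre, self.half = centre, half
        self.units = []     # units stored at leaf octants
        self.children = {}  # child index -> Octant, created on demand

    def _index(self, pos):
        # Which of the eight children would contain pos?
        return tuple(int(p >= c) for p, c in zip(pos, self.centre))

    def insert(self, pos):
        if self.half <= self.MIN_SIZE:
            self.units.append(pos)
            return
        idx = self._index(pos)
        if idx not in self.children:
            offset = self.half / 2
            child_centre = tuple(c + (offset if i else -offset)
                                 for c, i in zip(self.centre, idx))
            self.children[idx] = Octant(child_centre, offset)
        self.children[idx].insert(pos)

    def query(self, pos, radius):
        # Prune whole subtrees that cannot overlap the search sphere.
        if any(abs(p - c) > self.half + radius
               for p, c in zip(pos, self.centre)):
            return []
        found = [u for u in self.units
                 if sum((a - b) ** 2 for a, b in zip(u, pos)) <= radius ** 2]
        for child in self.children.values():
            found += child.query(pos, radius)
        return found
```

The pruning test in `query` is what delivers the win: entire branches of the map that are empty, or simply too far away, are rejected with a single comparison instead of a per-unit distance check.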
In the below image octants are visualised using cyan and magenta. The dynamic octree must prune itself, and so when an octant has no units inside it is placed on a list of vacant octants. Vacant octants have a timer, and after a few seconds of disuse are deleted. This ensures the octree never gets too big, and allows octants to be reused where units are most likely to move again.
The proof is in the pudding. But in this case, the pudding is code, and the metaphor is broken. Regardless, it shouldn’t be hard to improve upon a measly 10 FPS. The octree was applied to all unit movement and combat neighbour checks. The latest results are as follows:
1 unit: 270-280 FPS
10 units: 250-260 FPS
100 units: 130-160 FPS
200 units: 90-110 FPS
300 units: 70-80 FPS
400 units: 50-60 FPS
500 units: 30-40 FPS
These figures can undoubtedly still be improved, but they already demonstrate the efficacy of spatial partitioning compared to a naive solution like linear search. The octree also provides the foundations upon which a deterministic raycaster can be built.
One optimisation has been to sort units inside octants by team, and to push those lists up the tree recursively. What this means is that units searching for nearby enemies may only have to check up the tree, until they find the octant which encloses their maximum range. If there are no units belonging to a hostile team inside that octant, then the search is resolved. The alternative is suboptimal: to move up and then down the octree, searching every unit in every octant for whether they belong to a hostile team.
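The team-count aggregation can be sketched in a few lines. This is an illustrative Python model, not the game's C#; `TeamOctant`, `add_unit` and `encloses_range` are hypothetical names for the real structures.

```python
class TeamOctant:
    """Octree node that keeps a per-team unit count, pushed up through
    every ancestor as units are added, so any octant knows at a glance
    which teams are present anywhere beneath it."""

    def __init__(self, parent=None):
        self.parent = parent
        self.team_counts = {}

    def add_unit(self, team):
        node = self
        while node is not None:  # propagate the count up the tree
            node.team_counts[team] = node.team_counts.get(team, 0) + 1
            node = node.parent

def has_hostiles(octant, my_team, encloses_range):
    """Walk up from the unit's octant to the first octant enclosing its
    maximum range; if the aggregated counts there show no hostile team,
    the search resolves without descending into a single child."""
    node = octant
    while node is not None:
        if encloses_range(node):
            return any(t != my_team and n > 0
                       for t, n in node.team_counts.items())
        node = node.parent
    return False
```

Only when this upward check reports hostiles does the more expensive downward search for the actual nearest enemy need to run.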