Devblog 29: Manifest Good Vibes (With Good UI)

Month Eighteen

The User Interface (UI) is a critical component of any digital product, whether a website, mobile app, or game. It must be inviting and intuitive, and sometimes what is needed goes beyond pure functionality. As an immersive experience, a game needs to create an ambience for the player. Disco Elysium does a wonderful job of this, combining pastel backdrops with soft atmospheric music. But we are here to talk about our UI… and not the games I’m currently playing.

The old UI was basic, but served its purpose. Buttons were generated by bespoke code at runtime and rendered on screen with absolute values for position, width, and height. The problem is that this doesn’t scale across different monitor resolutions. Now that we are hurtling towards the end of the core development phase, we need something better. We also have no art assets for the menus at this time, as the team are focused on other jobs. So my task was to prepare the UI to be ‘skinned’ later with some colourful images and animations.

This task was divided into three phases: research (as always), the menu system (pre-game), and the GUI (in-game). Usually in software development the design and build of a UI would be delegated to a dedicated technical artist with specialist knowledge. Richard has me, without specialist knowledge.

Before creating any designs, I browsed the UIs of games I’ve played, as well as other games in the RTS genre. I researched a lot of start screens and tried to spot the differences between the good and the bad. After some digging (and over fifty open Chrome tabs) it started to become obvious which ones simply worked and which ones didn’t.

Classic and modern RTS games have significant differences: modern games like Northgard adopt minimalism, showing information only when necessary, while older games make panels of information permanent fixtures. We wanted to make the menus as minimalist as possible, much like Northgard or Disco Elysium. Just simple and pretty menus which allow the player effortless interaction.

Before and after. Next step is to jazz up the buttons, add some animation and a logo.

We wanted to make sure that everything would be accessible exactly where the player expects. Eventually we’ll enable the player to move these elements, such as the minimap, to wherever their heart so desires, affording a personalised UI experience. But for now we must establish a generic layout, such as an actions interface positioned at the bottom of the screen. This space will dynamically populate with actions, based upon the player’s selection. I also added unit counters along the top, so that at a glance players know how many of each unit type they possess.

All of the objects shown were created using Unity’s UI tools. It took a few tutorials before I understood the different components. For example, my first attempt at the dynamic action panel involved programmatically creating and moving buttons. After some irritation with that not working, I came across Unity’s “GridLayoutGroup” component, which unsurprisingly did what I wanted. One caveat though: it did not change the size of child elements (i.e. action buttons) to fit the space provided. So a small custom script was needed to supplement this component.
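The supplementary script amounted to very little code. A minimal sketch of the idea (component and field names here are illustrative, not our actual class) divides the panel’s rect evenly among a fixed grid of cells:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch only: resizes GridLayoutGroup cells to fill the parent panel,
// since the component lays children out but does not scale them.
[RequireComponent(typeof(GridLayoutGroup))]
public class FitGridCells : MonoBehaviour
{
    public int columns = 5;
    public int rows = 2;

    // Unity calls this whenever the RectTransform's dimensions change,
    // e.g. on a resolution switch.
    void OnRectTransformDimensionsChange()
    {
        var grid = GetComponent<GridLayoutGroup>();
        var rect = ((RectTransform)transform).rect;

        // Divide the available space evenly, accounting for spacing.
        float width  = (rect.width  - grid.spacing.x * (columns - 1)) / columns;
        float height = (rect.height - grid.spacing.y * (rows - 1))    / rows;
        grid.cellSize = new Vector2(width, height);
    }
}
```

Because the cell size is derived from the panel’s rect rather than hard-coded, the action buttons scale with resolution, which was the whole point of the rework.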

This task did not require a perfect, final, or even pretty UI; we have begun an iterative process that will refine the design until it performs without issue. For now, the foundations have been laid, making future work much easier.

Devblog 28: The Fog

Month Seventeen

Unlike John Carpenter’s movie, this blog post will not be about Ghost Pirates.

Chess players have total knowledge of the game state. They know where their opponent’s pieces are and can plan accordingly. In RTS games it is important that this is not the case. Players must be allowed time to build their armies and defences in secret; a match could be over in minutes if everyone knew exactly where each other’s bases were and what they were doing (looking at you, grenadiers). Not knowing where the enemy is creates suspense and rewards players who bother to scout.

The way we achieve this is with something called ‘fog of war’. This fog covers the entire game world and is dispersed by troops as they traverse the field. Think of it as the Mist from the movie of the same name: the characters are our units, and we want them to be able to see only a certain distance ahead.

There are two ways to create fog of war: one is computed on the CPU and the other on the GPU. I chose the former, as it seemed easier to understand and implement. Programmers love to follow the KISS principle. No, that does not mean putting on makeup and listening to ‘Rock And Roll All Nite’.

Sometimes we end up having to redo our work. I began by creating the fog itself: a black plane sitting above the game world, corresponding to an array of fog tiles. Next I created a script to be added to each unit. This script registers the unit as a ‘fog effector’ and, while the unit is moving, finds the intersection on the fog plane between our camera and the unit. We then use this point as the centre of our circle of dispersal. The circle is used to gather coordinates, which are compared against the fog tiles; any tiles still ‘fogged’ are added to a list to be dispersed.
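To make the idea concrete, here is a minimal sketch of the tile-dispersal step (class and field names are assumptions for illustration, not our actual code): a flat grid of fog flags, with every tile whose centre falls inside the effector’s radius being cleared.

```csharp
using UnityEngine;

// Sketch of the CPU fog-of-war approach: a 2D grid of fog flags,
// cleared in a circle around each effector's position on the fog plane.
public class FogGrid
{
    readonly bool[,] fogged;
    readonly float tileSize;

    public FogGrid(int width, int height, float tileSize)
    {
        fogged = new bool[width, height];
        this.tileSize = tileSize;
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                fogged[x, y] = true; // everything starts under fog
    }

    // Clear every tile whose centre lies within 'radius' of the point
    // where the camera-to-unit ray intersects the fog plane.
    public void Disperse(Vector2 centre, float radius)
    {
        // Only examine the bounding box of the circle, not the whole grid.
        int minX = Mathf.Max(0, (int)((centre.x - radius) / tileSize));
        int maxX = Mathf.Min(fogged.GetLength(0) - 1, (int)((centre.x + radius) / tileSize));
        int minY = Mathf.Max(0, (int)((centre.y - radius) / tileSize));
        int maxY = Mathf.Min(fogged.GetLength(1) - 1, (int)((centre.y + radius) / tileSize));

        for (int x = minX; x <= maxX; x++)
            for (int y = minY; y <= maxY; y++)
            {
                var tileCentre = new Vector2((x + 0.5f) * tileSize, (y + 0.5f) * tileSize);
                if (Vector2.Distance(tileCentre, centre) <= radius)
                    fogged[x, y] = false;
            }
    }
}
```

Even with the bounding-box optimisation, doing this per effector per frame is exactly the kind of work that blows up with unit count, which is where the trouble described below began.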

As you can see, it’s a good thing we have actual artists drawing the characters for the game.

At first this process worked quite well. The system cycled through each effector and ‘dispersed’ the appropriate fog tiles. There was one problem though. In RTS games it’s quite common to have hundreds of units on screen at once, and this test proved fatal: the frame rate dropped to around 1.7 frames per second. Which is not playable. At all. We had to do better!

After some research, and a fair few forum posts, I found that a change of plan would be in our best interest. The new approach was implemented in a similar way, only the computation was delegated to the hardware through the use of shaders. A map object was created containing the size and position of the game world. This was then passed into the ‘drawer’, which set the properties the shaders need to do their job of covering the map in a glorious fog. Luckily I was able to get quite a bit of assistance with the shaders, as they are dark boxes of mysticism that only a wizard can make sense of.
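The CPU side of this approach shrinks to little more than feeding data to the material each frame. A rough sketch of the ‘drawer’ idea (property names like `_Effectors` and the buffer size are assumptions, not our real shader interface):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: push effector positions and sight radii to the fog material
// each frame, so the GPU does the per-pixel dispersal maths.
public class FogDrawer : MonoBehaviour
{
    public Material fogMaterial;       // fog shader on a full-map quad
    public List<Transform> effectors;  // units that disperse fog
    public float sightRadius = 10f;

    // Shader arrays have a fixed maximum size, hence the cap.
    static readonly Vector4[] buffer = new Vector4[128];

    void Update()
    {
        int count = Mathf.Min(effectors.Count, buffer.Length);
        for (int i = 0; i < count; i++)
        {
            Vector3 p = effectors[i].position;
            // Pack world XZ position and radius into one vector.
            buffer[i] = new Vector4(p.x, p.z, sightRadius, 0f);
        }
        fogMaterial.SetVectorArray("_Effectors", buffer);
        fogMaterial.SetInt("_EffectorCount", count);
    }
}
```

The key win is that the per-tile loop disappears from the CPU entirely; the shader evaluates the distance test per pixel, which GPUs are built for.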

Each team in a match is assigned a fog of war object, with only the local player’s fog rendered on screen. This lets our determinism system incorporate fog checks for other players’ units, and includes a shared fog for team-mates.

Raycasting is magical.

This solution also uses the unit’s position as it moves to determine what to disperse. This time we incorporated unit ‘line of sight’ (LOS), allowing us to create a flashlight effect and stop dispersal when the unit is obstructed by a wall, for example.

We also use shaders to create the shape around the unit for fog dispersal. This shape is also cached so it does not need to be recreated every tick.
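The obstruction check itself boils down to a raycast. A minimal sketch, assuming obstacles live on a dedicated physics layer (the layer name here is hypothetical):

```csharp
using UnityEngine;

// Sketch of a line-of-sight test: an area only clears if nothing
// solid sits between the unit's eye point and the target position.
public static class LineOfSight
{
    public static bool CanSee(Vector3 eye, Vector3 target)
    {
        // Layer name is an assumption for this sketch.
        int obstacleMask = LayerMask.GetMask("Obstacles");

        Vector3 direction = target - eye;
        // If the ray hits an obstacle before reaching the target,
        // the fog there stays in place.
        return !Physics.Raycast(eye, direction.normalized,
                                direction.magnitude, obstacleMask);
    }
}
```

Running this test along the edge of the dispersal shape is what produces the flashlight effect: fog beyond a wall simply never passes the raycast.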

With this completed, and the great purge [refactor] (which Richard alluded to last month) drawing to a close, we are now in a position to create the UI. I have told Richard this will take hours of playing games research before I can get cracking.

LOS with obstructions.

Devblog 26: From Block to Reptile – A Unit’s Tale

Month Fifteen

Currently Richard and I are working on buildings and ‘fog of war’ respectively, and we will share these things with you. But in the meantime, thanks to the hard work of Michal, Arek, and Ayse, our art production line is up and running.

So if you’re wondering how we go from angry geometric shapes to android animals you’re in luck. Game objects all follow the same design flow:

Once Michal has finished with the sketch, Arek takes over and creates the 3D model, and the team discuss improvements, such as whether the character is correctly proportioned, or if anyone can see potential animation issues due to the model’s dimensions.

Once everyone is happy the model is finalised and coloured. This is when the character starts to come to life.

Fully coloured reptile in all its T-posing glory.

Next up is Ayse, who adds life to the character through animation. Each character requires several animations to make the game immersive. No one wants T-posing characters floating around, that’s the stuff of nightmares!

Generally each character needs: idle, walk, run, shoot, melee, and death animations. So while Ayse got to work creating those, I got to work on the last part of the process: importing and integrating art into Unity. This was a welcome break from the fog of war system, which is proving… interesting.

My first task was to figure out how to replace our geometric shapes with these new models. Adding the .FBX files to the project was easy; these files include the model, rig, and materials. But I soon realised that the colour wasn’t right on the final in-engine product. After discussion with the art team I had to change the project to use URP (Universal Render Pipeline) instead of the default Legacy Renderer that Unity uses.

There are lots of different rendering options in Unity, but we settled on URP, as it is meant to be quick to learn and easy to use. More powerful options, like HDRP, weren’t required as our models are low poly. Also, URP has ‘shader graphs’. These are fantastic once you get past the initial learning curve.

After a few tutorials we were able to fully integrate the character model in the game, including team colours.

Animation State Machine

Once the animations were complete it was time to add them to the character. This is done in Unity by creating an animation state machine, which essentially exposes a list of Boolean values that you set in code through your animation controller. Create the state machine, add the animations, and link it to the character, and most of the time that’s all that’s required. But sometimes there are bizarre outcomes, such as the death animation perpetually rotating the character. Although if you add the Dead or Alive classic “You Spin Me Round” it becomes hilarious.
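The code side of driving the state machine is pleasantly small. A sketch of the idea (the parameter names like “IsMoving” are illustrative, matching whatever transitions you set up in the Animator window):

```csharp
using UnityEngine;

// Sketch: drive the animation state machine from gameplay code by
// flipping the parameters the Animator's transitions listen for.
[RequireComponent(typeof(Animator))]
public class UnitAnimationController : MonoBehaviour
{
    Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    public void SetMoving(bool moving)   => animator.SetBool("IsMoving", moving);
    public void SetShooting(bool shooting) => animator.SetBool("IsShooting", shooting);

    public void Die()
    {
        // A trigger suits one-shot states like death better than a bool,
        // since it resets itself after the transition fires.
        animator.SetTrigger("Die");
    }
}
```

Keeping all the `SetBool`/`SetTrigger` calls behind one controller class like this also means gameplay code never needs to know the parameter strings.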

Devblog 21: Lockstep and Two Smoking Barrels

Month Ten

When Richard told me that he needed “a Riker to my Picard to bring this project over the line”… well, how could I say no? Ever since I first played Metal Gear Solid 2’s demo I have been obsessed with gaming, and felt like I should attempt to make something. Since graduation in 2017 I have worked in software development. Experience as a programmer and later consultant allowed me to hone my skills, but recently, as everything certain has become uncertain, I felt it was time for a change.

With online multiplayer games it is important to make sure each player remains ‘in sync’ with the others at all times. This is where the lockstep protocol comes in: it ensures that all players see the same actions taking place at the same time (within milliseconds, depending on latency). Richard had already started implementation using John Pan’s lockstep framework, but it needed work.
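For readers unfamiliar with lockstep, the core trick is that clients exchange commands, not game state: each input is stamped with a future tick, and no one simulates a tick until every player’s commands for it have arrived. This is not the framework’s actual API, just a minimal sketch of the scheduling idea:

```csharp
using System.Collections.Generic;

// Sketch of lockstep command scheduling: commands are queued against a
// future tick, and the simulation only advances a tick once every
// player has confirmed it, so all clients execute identical inputs.
public class LockstepScheduler
{
    public const int InputDelayTicks = 3; // hides network latency

    readonly int playerCount;
    readonly Dictionary<int, List<string>> commandsByTick =
        new Dictionary<int, List<string>>();
    readonly Dictionary<int, int> playersReadyByTick =
        new Dictionary<int, int>();

    public LockstepScheduler(int playerCount)
    {
        this.playerCount = playerCount;
    }

    // Called when a command arrives, locally or over the network.
    public void Receive(int tick, string command)
    {
        if (!commandsByTick.TryGetValue(tick, out var list))
            commandsByTick[tick] = list = new List<string>();
        list.Add(command);
    }

    // Each player confirms a tick even if they issued no commands.
    public void MarkPlayerReady(int tick)
    {
        playersReadyByTick.TryGetValue(tick, out var n);
        playersReadyByTick[tick] = n + 1;
    }

    public bool CanAdvance(int tick)
    {
        return playersReadyByTick.TryGetValue(tick, out var n)
            && n == playerCount;
    }
}
```

Because only commands travel over the wire, each client’s simulation must be fully deterministic; any divergence, however small, snowballs into a desync.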

During my first month this is where I focused, after spending a few days getting to grips with the codebase and Unity itself. My technical background was C# for web applications, so I knew the language, but the architecture was mysterious; much like the title of this post (a reference to one of the greatest movies of all time).

After preliminary analysis the code looked abstract, all too abstract. So I began to remove (‘rip out’ wouldn’t be far from the truth) lots of files I felt were unnecessary. Once I got the project building again I re-organised the classes into a legible folder structure. One thing to note here: if you’re going to move classes about, do it in the Unity editor, so that the script references held by game objects are not lost.

Joining player unable to ready up.

Once I was satisfied with a first pass it was nearly time to test. However, I had a few issues with the lobby and network code. For a start, players were not able to click the ready button to start the game. This proved to be a simple fix as the system was not treating joining players as humans.

The next task was to play a game and begin the cube wars. I discovered that the system was treating every spawn command as if it came from the host; this was easy enough to fix. But then it seemed that actions, whether to spawn a unit or to move, were not reflected on the other players’ screens. This was somewhat problematic. Had I missed something in the refactor? An investigation had to be carried out.

I began by taking it back to basics. I stepped through every line of code, from the spawn command until the unit appeared on screen. I then compared the path the code took to other classes that I thought were relevant, in case there was a call to a secretive method. This was not the case.

My main mistake was assuming that the commands were being sent between players at all; I had been examining code which looked very much like it received data packets, but nothing was actually sending them. After a few days, and the odd existential crisis, I had a cup of tea and prepared to continue my war with the network code.

The entire British Empire was built on cups of tea. And if you think I’m going to war without one, you’re mistaken.

Eddy (Lock, Stock and Two Smoking Barrels)

Fuelled by tea and biscuits I had another crack at the code. I created a ‘SendCommandOverNetwork’ method which utilised Unity’s NetworkTransport class. Suddenly I was able to send spawn commands. “It’s all completely chicken soup” as Tom would say.

Synchronised violence.
Between angry geometric shapes.

Now I could finally get to testing. I had to be sure that commands were being executed by all players at the same time. Move commands had to take a similar amount of time to process and act upon, as differences here would be an easy way for players to become ‘out of sync’. I believe simple solutions are the best, so I decided to log data to a .txt file, revealing critical information like who sent the command and how long it took to execute.
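The logging itself was nothing fancy. A sketch of the sort of thing involved (file name and fields are illustrative): one comma-separated line per command, so logs from two machines can be diffed side by side.

```csharp
using System.Diagnostics;
using System.IO;

// Sketch: append one line per executed command to a per-machine text
// file, so runs on different machines can be compared afterwards.
public static class SyncLog
{
    static readonly string path =
        $"lockstep_{System.Environment.MachineName}.txt";

    public static void LogCommand(int tick, int playerId,
                                  string command, Stopwatch timer)
    {
        // tick, sender, command, execution time: enough to line up
        // two logs and spot where they diverge.
        File.AppendAllText(path,
            $"{tick},{playerId},{command},{timer.ElapsedMilliseconds}ms\n");
    }
}
```

Low-tech, but comparing two such files is exactly how the four-second discrepancy below was tracked down.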

I compared the logs, finding that one game was at least four seconds faster than the other. Yes, seconds, not milliseconds. Even after logging more data I was still not any closer to a solution. I then wondered: what if the Unity editor is just faster than the game builds? In hindsight, it did seem like an unfair test to have one game build and the other running within the Unity editor. So I changed the logging slightly and had two game builds logging to separate files. You can already guess what I discovered: not only was the time of execution exactly the same, but the calculation times were pretty much equal, give or take a few milliseconds.

My first month was good, but I struggled with a few of Unity’s nuances, learning harsh lessons in the process. The transition from consultancy to working with a start-up on a game has been interesting. Work no longer feels like ‘work’; bugs are welcomed as a challenge to overcome rather than with disdain or fear of missed deadlines. I am looking forward to the next challenge, which is of course more lockstep features regarding building, stances, and overclocking.