Devlog: Upheaval

I’ve lost count of the number of times I’ve moved house. Enough that I should be a bit of an expert at it, or at least not underestimate how long it takes to sort everything out once I’ve “unpacked”. As it was, the first three weeks of October were a write-off, but bar buying a new coffee machine I’m basically good to go, I’m back on the ’net, and I’ve just come back from a much more chilled few days in the UK to celebrate my birthday.

Go go go!

Unfortunately, a few more things changed than just my location. It’s been a weird month.

Next Game

I’ve not written about this in detail for a while, mainly because progress has been so stop-start that it’s been hard to wrap up the work into an interesting post.

Where am I? I think from the code point of view I’ve got pretty much everything in place for a networked, multiplayer game. Steam integration is working, you can join/leave hosted games through friend invites, browsing for public hosted games works, the co-op and TDM modes work, environmental stuff like trigger-operated doors and moving platforms is in, pick-ups, weapon types, ammo, projectiles, area explosions, AI spawning, death: everything you’d expect from a game of Quake is in and working. But it’s hit a wall.

I’ve lost some collaborators (totally understandably, and I’m amazingly grateful for their help to date) and I’ve not got the money to sub-contract out in order to hit the quality bar that I want. So I’m left with a choice: roll back on the art/audio/VFX side and maybe go for something like Strafe or Gibhard, or come back to this project if and when I have the cash to bring on a couple of people for a few months.

I could probably hit a nice retro feel with some low-poly characters and pixelly textures, but I think Strafe’s got that covered now and I don’t want to re-tread that ground. Going even more coarse-grained, almost Doom 2-like, is an option, but again, it feels like a step back from what I have in my head. So I’m going to park this.

If I’m honest, every step in this project has been just a smidge harder than it should have been, not least because of some seismic shifts in my personal life that have royally fucked my ability to focus, but I’m inclined to listen to the universe and accept that this “isn’t the one”. Meh.

I still want to make a really fucking fast, twitchy FPS, so I’m not ready to let this go completely. Which means maintaining it with each UE4 upgrade and biding my time…


Man, I’m so fucking glad that I decided to roll with a little side-project, post Lumo. Being able to work on something when surrounded by boxes, on a train, bus, or stuck in an airport, has been a life-saver. I’d be much more stressed if I was looking back on October as being entirely unproductive.

The fruits of this half-arsed labour?


Behold, my nearly finished Gun Editor.

Yup, dead simple, probably only a day’s work, but I’ve also gone through and cleaned up a few hanging TODO items, and added viewport scaling to the mouse wheel for the editor modes.

I need to add the bullet definitions to this, then do a particle emitter editor, at which point I literally have no more excuses. I need to be making a game with this. But I’m not going to go back to it until Xmas.

I have a cunning plan…

Devlog: Curves

I’m in the process of packing up my life for the move back to Tampere, so this will be the last catch-up blog until I’m set up again. Meh.


I’ve done a fair bit more work on this, due to travel, downtime waiting for estate agents, and the start of the new semester at university. The “big” new item is the curve editor.

I want the enemies to have pretty varied attack patterns, and although a lot of this could be done with the judicious use of Sine, you can’t beat having a decent spline to hand.

There’re a few ways you can do this, but I’ve opted for chaining cubic Bézier curves together, as the maths is about my level.

Maths.... Ugh

Or, in code:

glm::vec2 CalculateBezierPoint(const float t, const glm::vec2* p0, const glm::vec2* p1, const glm::vec2* p2, const glm::vec2* p3)
{
    float fT = glm::clamp(t, 0.0f, 1.0f);
    float u = 1.0f - fT;
    float tt = fT * fT;
    float uu = u * u;
    float uuu = uu * u;
    float ttt = tt * fT;
    glm::vec2 p = uuu * *p0;        // (1-t)^3 * P0
    p += 3.0f * uu * fT * *p1;      // 3(1-t)^2 * t * P1
    p += 3.0f * u * tt * *p2;       // 3(1-t) * t^2 * P2
    p += ttt * *p3;                 // t^3 * P3
    return p;
}
ImGui has been a lifesaver, again, as it took very little time to put together enough UI to load, save and edit curves. There’s no real limit to the number of curves I can chain together, but in practice, two or three is more than enough. It’s simple, but it works.

Fancy Gif

The other nice addition to Neutrino, as you can see above, is embedded .GIF output. Totally stole this idea from Philip Bak, but it makes perfect sense; in a world where you need to fire out screenshots on social media as often as possible, what could be better than pressing a button and getting an animated .GIF, resized for Twitter?

We both opted to use this single header lib from Charlie Tangora. 3 function calls and Bob’s your mother’s brother. Perfect.

Other stuff that’s been done:

  • Simple timer-based callbacks, so single entities don’t need to constantly poll their state; they can just ask for something to happen in the future.
  • Start of a level sequencer. This is based on the camera position, and will be responsible for doing all the spawning and in-game events. I’ll probably do an ImGui editor for this and talk about it in a future post.
  • Roughly sketched out how enemy entities will work. I’m trying to do existence-based processing using a table layout (I’m high on the Data-Oriented Design kool-aid), so I’ll talk about this in a future post, once I’ve worked out how I’m doing it.
  • Ran the codebase through Valgrind and removed a bunch of silly little memory leaks. One problem with the stop/start nature of the way I work on this is that, sometimes, if I have to stop in the middle of something, I forget to go back and clean up after myself. I’ve got into the habit of checking periodically in case I’ve done something stupid. Normally I have ;D

Anyway, I better get back to packing boxes ;D

Devlog: Grab bag catch-up post...

I’m waaaay overdue a post.

A lot has been going on, on the dev side and in my personal life. I’ve had a bit of a holiday, I’ve been to the UK and back, and I’m at the start of trying to move away from Helsinki and back to Tampere. My plans of a heads-down summer of work didn’t really happen. Oh well :D


Due to the travel, hanging about in airports and nights in hotels, Neutrino has had a fair bit of attention. Effectively the engine side of it is complete.

Box2D has been integrated. I’m not a massive fan of Box2D, but it’s simple enough to use and reasonably fast for a small number of objects. I’ve gone the whole hog this time and written the debug renderer for the physics world! Seeing what’s happening under the hood really does make a difference.

Neutrino is still single threaded, and still, effectively, vsync locked. I’m going to do a bit of work at some point to move Box2D over to another thread and fix the fucking timestep, at which point it’d be interesting to see it run on a GSync screen. But anyway…

Debug Rendering

Still not much to look at, but it’s getting there. Next step is to make a particle emitter editor, bullet trajectory “pattern” editor and a spline editor for the attack waves. Then I think I can make some levels…

Next Game

A month or so ago I decided that I had a lot of bits, but I didn’t have a game. I really should start putting this all together.

Steam’s a good forcing function for this: if I’m going to make an online game then I really need to have it running through Steam and get some other people to play it! To do that, I’ll need working UI and a proper flow in and out of the various game modes.

UI in UE4 is pretty lovely. UMG has its quirks, but it works, looks good, and compared to the complete abomination in Unity, is a joy to put together. And — shock — I really like Blueprints for chaining UI together. Delay node ftw!

That being said, there are still a lot of UI screens to put together once you throw in game modes with options, lobby screens, error handling etc., but I have a working screen for each part of this (with no real thought applied to the styling atm) and I’m able to enter and exit a game.

Online session handling (like pretty much everything else in UE4) is piss-poorly documented. There’s a video showing you how to do it in Blueprints, and a working example in Shooter Game that does it in C++. Because it’s so intrinsically tied to the UI, for the first pass I opted to start a simple Blueprint session and see if I could connect to it via Steam. This took the better part of a day, but it does actually work; I can see hosted games over Steam, pull up some small bits of information about them, and I can join and leave a session. Wow!

The problem is, the Blueprint stuff doesn’t really have any idea about any of the custom parameters that define a hosted session, so I’m going to have to go back under the hood and do this properly in C++. Then I can filter against game mode, see the name of the server, handle friend invites and the like. But for now, just for the purposes of development, I have a “game” that I can upload to Steam, send to friends, and we can run about and shoot each other.

Time to grey box some levels to run around in.

Work In Progress

Quick little update, especially for those of you that actually follow the blog, rather than catch me on Tumblr or Medium. Here’s the progress that I’ve been making on the AI in the test dungeon over the last 3 weeks:

I need to add some animations to emphasise when he’s searching for you and when he’s fleeing from you, along with some audio barks, so it’s clear that the modes have changed. But he’s running about, shooting, and trying his best not to be a completely static target. Good enough for now, but I’ll be coming back to this a lot over the course of development.

Devlog: Oh Behave...

Behaviour Tree

Finally. I’ve got around to some game-play programming…

My natural instinct when approaching this is to get busy building a finite state machine, the core of which can be shared across the AI in my game, and then start building some instances to see what I like and what I don’t.

UE4, unsurprisingly, is reasonably opinionated about how you should approach this and has its own system: Behaviour Trees. I’ve seen the BT system mentioned by quite a few devs over the last couple of months, and read bits and pieces about how to use it prior to rolling up my sleeves. I was quite excited to jump-in, but over the course of the last ten days I think I’ve gone through at least half of the seven stages of grief just trying to find a way of working with it that I can live with…

One reason for this — and the thing that consistently annoys me about learning modern tools — is the absolute piss-poor state of documentation. I will never understand why people think scrubbing through hours of video is better than concise, written explanations of something, but there you go. Good technical documentation is a dying breed.

The BT system does do slightly better than expected in this regard, as there’s a relatively skinny HOWTO that walks you through the basics, but A) It’s blueprint orientated and B) AI also has to drive animation, and audio, and oh-my-god-stuff-needs-to-be-replicated-and-why-the-fuck-isn’t-this-working-what-the-fuck-simple-thing-have-I-missed-now sob. Etc.

OK, I’m slightly exaggerating, but after a day of use my initial impression of the whole thing was that it was a teensy bit over-engineered. Not designed for me. And I didn’t like it.

I’ve slightly changed my mind since…

My simple starter AI character has a few states:

  • Idle
  • Idle at attention
  • Patrolling
  • Trying to get close to the Player
  • Leeroy Jenkins
  • Looking for a player that it’s just lost sight of

Some of this information needs to be passed to the animation blueprint (being at attention, for example, or aiming at something) so the correct set of animations get played. Some of this information needs to be replicated, so clients see the correct thing.

A Behaviour Tree’s Blackboard is basically designed to support this, being a slightly decoupled place to store data that a BT uses to make decisions, and that the rest of your code can then modify & grab, as required. But that means touch-points in multiple places; the character blueprint, custom events to populate the animation blueprint, the AI Controller… in addition to the things that make the BT tick: the functions that make decisions and the services that perform checks.

I really don’t like this. Debugging this stuff is a fucking nightmare. You end up with windows and breakpoints everywhere, and the relevant data is spread too far. I like my parameters in one place and I like to be able to quickly read state at runtime, preferably in one place, so my first foray into this wonderful world (using blueprints only) gave me the heebie jeebies, and worse, didn’t end up working correctly. I have no idea why.


By this point I’d gone through the first three stages of grief, although mostly “Anger and Frustration”, so I decided in the “Depression” stage to have a go at a pure C++ AI and check out what else the engine had to offer. This led me to the AI Perception system, which on paper looks great: Sight, Sound, Damage and Touch events for your AI, just by adding a simple component. Woo! And at least half of that system works! The rest, largely undocumented, doesn’t appear to, but it’s labelled WIP so this is either my fault, or there’s some arcane magic that I’m missing.

After an hour I really couldn’t be arsed stepping through the code to work out which, so I reverted back to the old Pawn Sensing stuff. This clearly isn’t as good, and it doesn’t provide anywhere near as fancy debugging output (which I’m a sucker for) but it works, and I could move on.

After a day I had my FSM, a little AI dude, a derivation of the player weapons that the AI could use to kill me, and everything was working in co-op with a connected second player. Hurrah! Except that’s only the tip of the iceberg. This stuff only looks good, or becomes convincing, when the transitions between the states have some range of probability, a bit of variation, and reactions can be deferred a little. This means adding transitional states, which means FSMs in code quickly become unwieldy. Adding time delays to state changes also makes things harder to read…

I wasn’t excited about carrying this forward and then having to debug it at some point in the future, and I do want something a tiny bit more advanced than Doom’s AI, so on reflection, straight C++ didn’t seem like the best bet either.

The upward turn (grief stage 5, apparently) came when I worked out how to use BTs with C++. Even moving the tasks (operations in a BT that do something to the character or its data) to C++ is a massive win. I can debug my character, my controller and individual AI tasks within Visual Studio, with a decent call-stack and inspector, and use the BT to add in all the little random waits, variations, or sub-routes, without clogging up the code. Things immediately started looking better.

Behaviour Trees also make the Environment Query System a tad easier to use, and it seems potentially cool, but I’ll be honest, I’m still on the uphill climb with this. Have a look for yourself.


Spot the system written by a coder, for a coder.

So far I’ve been able to use the EQS to generate random places to look for a player when the AI loses them, and random locations around the player, so the AI isn’t a stationary target when engaging. But I need to spend more time to actually understand how to use this system properly. Having the AI run for cover, or flank the player, would be cool and eminently doable.

In Game

So where am I now?

Well, the header image shows the BT I ended up with after all of this experimentation. One thing that’s abundantly clear is that using a BT to sense and make state decisions dynamically, each frame, isn’t the way to go. The stack of conditionals you end up with prior to running sequences and progressing down the tree is messy, and still not fun to debug. I’m going to re-do this next week with a stored “current state” that pulls from an enumerated list in the Blackboard. I’ll combine the pawn sensing, via the AI Controller, with the simple tests in the BT to change state under given circumstances, and write a small set of methods in the AI Controller to set the animation params, replicate, and/or call multicast stuff for clients.

I think this will reduce the surface area for debugging, make the BT itself a bit cleaner, and leave me with a small collection of C++ BT Tasks that I can re-use.

But those could be the famous last words of stage 7: acceptance and hope.