Working in Unity as a Web Dev


One of the things I'm enjoying as a challenge with Rootless is learning how to properly code a Unity 6 game.  My software development career has mostly focused on web development, with a few forays into WPF, Android and iOS apps, Electron, and numerous web application frameworks over the years, but at times that background has been a detriment when it comes to working in Unity.

Event-First Architecture

The biggest thing I've had to wrap my head around has been the focus on event-based architecture.  While I've used events in the past, most modern application frameworks rely on "data binding", where changes to underlying data models are bound to and reflected in the UI as soon as they happen.  Unity does not support that natively.

Instead, Unity relies mostly on event publishing and subscription between layers to trigger changes.  When a Unit in my code moves to a new position on the battle map, I fire an event that anything can subscribe to if it wants to be informed of the movement.  In my case, a "UnitView" on the other side subscribes to that event and handles kicking off the animation routine that moves the unit in the scene.
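A minimal sketch of that pattern might look like the following.  The class and member names here are hypothetical, not the actual Rootless code, and the wiring between model and view is simplified:

```csharp
using System;
using UnityEngine;

// Plain C# "game core" class: knows nothing about rendering.
public class Unit
{
    // Anything interested in movement can subscribe to this.
    public event Action<Vector2Int> Moved;

    public Vector2Int Position { get; private set; }

    public void MoveTo(Vector2Int destination)
    {
        Position = destination;
        Moved?.Invoke(destination); // notify subscribers
    }
}

// Scene-side view: subscribes to the model's event.
public class UnitView : MonoBehaviour
{
    private Unit unit;

    public void Bind(Unit model)
    {
        unit = model;
        unit.Moved += HandleMoved;
    }

    void OnDestroy()
    {
        if (unit != null) unit.Moved -= HandleMoved;
    }

    void HandleMoved(Vector2Int destination)
    {
        // In the real project this would start a movement animation;
        // here we just snap the model into place.
        transform.position = new Vector3(destination.x, 0f, destination.y);
    }
}
```

The key property is the direction of the dependency: the Unit never references the UnitView, so the game core compiles and runs without any view attached.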

Separation of Concerns

That said, there are a few software development patterns that are worth retaining across platforms.  One of the changes I've made recently is a small rewrite to separate the "game logic" from the "view logic".  The main reason for this is to enable simulation: because all the parts of the game that don't require visual representation live in a separate library, it's fairly trivial to "emulate" a battle by creating in-memory instances of units, tiles, the battle map, and the actions taken.

Think of it this way: all the constraints on how a unit can move, who they can attack, and how damage and odds are calculated are just a bunch of instructions.  We can write computer code to say "build me a map that's 10 tiles wide by 10 tiles high, and place an enemy and player on the map in this position" without actually needing to draw it to a screen.  We can then say "move closer to the enemy until you can hit them, and then attack", as if we were executing the game in our head.
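As a rough illustration of what "headless" game logic can look like, here's a toy sketch in plain C# with no UnityEngine reference at all.  Everything here (class names, stats, the naive movement rule) is made up for the example:

```csharp
using System;

// An in-memory battle map: just data, nothing drawn.
public class BattleMap
{
    public int Width { get; }
    public int Height { get; }

    public BattleMap(int width, int height)
    {
        Width = width;
        Height = height;
    }
}

public class SimUnit
{
    public int X, Y;
    public int Hp = 10, Attack = 3, MoveRange = 2;

    // Adjacent (Manhattan distance 1) counts as "in range" here.
    public bool CanHit(SimUnit other) =>
        Math.Abs(X - other.X) + Math.Abs(Y - other.Y) <= 1;

    // "Move closer to the enemy until you can hit them, then attack."
    public void Act(SimUnit enemy)
    {
        for (int steps = 0; steps < MoveRange && !CanHit(enemy); steps++)
        {
            if (X != enemy.X) X += Math.Sign(enemy.X - X);
            else              Y += Math.Sign(enemy.Y - Y);
        }
        if (CanHit(enemy)) enemy.Hp -= Attack;
    }
}
```

With that in place, "executing the game in our head" becomes literal:

```csharp
// "Build me a 10x10 map and place a player and an enemy on it."
var map = new BattleMap(10, 10);
var player = new SimUnit { X = 0, Y = 0 };
var enemy  = new SimUnit { X = 4, Y = 0 };

player.Act(enemy); // closes 2 tiles; not yet in range, so no damage dealt
player.Act(enemy); // now adjacent, so the attack lands and Hp drops to 7
```

No screen, no frames, no engine: just rules being applied to data.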

Consequently, we can plug a "view" into this in-memory representation to show the player what is happening in the "core" of the game.  Since I'm using Unity 6, I can take that same 10x10 map grid, draw planes at the exact tile positions, and draw a 3D model where each unit stands in the in-memory example.  Along the same lines, if for some reason I want to switch the "view" to another engine like Godot or Unreal or basically anything else, I don't need to rewrite the game rules; I just need to change the part that draws the game state onto the screen.

My original project had view and game logic intertwined as I learned Unity and got things to work with minimal effort, but I thought it was important to spend time separating things out while the game was still in its early stages.

Coroutines

An interesting concept that I'm still learning about is coroutines.  A coroutine in Unity feels like a separate thread, but it actually runs on the main thread, interleaved with the render loop.  A coroutine method returns an IEnumerator, which lets you use "yield return" statements to execute up to a certain point and then pause.  A good example is moving a unit from one position in the scene to another: each frame you want to change its position a certain amount (likely depending on the distance and speed), so you calculate how much further to move it based on elapsed time, set the transform position, and "yield return" to stop execution until the next frame is rendered.
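That movement example can be sketched roughly like this.  The component name and speed value are placeholders, not code from the actual project:

```csharp
using System.Collections;
using UnityEngine;

public class UnitMover : MonoBehaviour
{
    [SerializeField] private float speed = 3f; // world units per second

    public IEnumerator MoveTo(Vector3 target)
    {
        while (transform.position != target)
        {
            // Advance one frame's worth of distance, scaled by elapsed time.
            transform.position = Vector3.MoveTowards(
                transform.position, target, speed * Time.deltaTime);

            // Pause here; execution resumes on the next rendered frame.
            yield return null;
        }
    }
}
```

It would be kicked off from a MonoBehaviour with something like StartCoroutine(mover.MoveTo(destination)), and Unity resumes the iterator once per frame until it finishes.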

Inputs

Each MonoBehaviour in Unity has an Update method which is invoked every rendered frame.  Within it you can check for input state like "Mouse.current.leftButton.wasPressedThisFrame" or "Keyboard.current.qKey.wasPressedThisFrame".  This is different from other input systems I've dealt with in the past, where inputs were treated more like, well, events, as opposed to state you poll, but it effectively works the same.

Since these singleton instances are available to every MonoBehaviour, you could technically have any object in the scene check for input in its Update method, but I'm trying to centralize my input logic into a PlayerInputController.
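A bare-bones sketch of that centralization, using the polling calls mentioned above (the handler methods are hypothetical stand-ins):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerInputController : MonoBehaviour
{
    void Update()
    {
        // Poll input state once per rendered frame, in one place.
        if (Mouse.current.leftButton.wasPressedThisFrame)
        {
            HandleClick(Mouse.current.position.ReadValue());
        }

        if (Keyboard.current.qKey.wasPressedThisFrame)
        {
            // e.g. rotate the camera, cycle units, open a menu...
        }
    }

    void HandleClick(Vector2 screenPosition)
    {
        // Raycasting and event publishing would live here,
        // so the rest of the scene never touches the input APIs.
    }
}
```

The benefit is that only one object knows about the input singletons; everything else just reacts to the events this controller publishes.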

With mouse events you often want to know what the player clicked on, which means doing some raycasting.  This translates the mouse's x & y position into a "ray", which is then tested for intersection with game objects.  If it hits one, I use eventing to communicate that a Unit / Tile / other component was clicked on, which (like the move events for units from the "game core") anyone and anything can subscribe to.
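The click-to-ray flow can be sketched like so.  The TileView component and the TileClicked event are invented names for illustration, and this assumes the clicked objects have colliders:

```csharp
using System;
using UnityEngine;
using UnityEngine.InputSystem;

// Placeholder for a component sitting on each tile object in the scene.
public class TileView : MonoBehaviour { }

public class ClickRaycaster : MonoBehaviour
{
    // Published when a tile is clicked; anything can subscribe.
    public static event Action<TileView> TileClicked;

    void Update()
    {
        if (!Mouse.current.leftButton.wasPressedThisFrame) return;

        // Translate the mouse's screen x & y into a ray into the scene.
        Ray ray = Camera.main.ScreenPointToRay(
            Mouse.current.position.ReadValue());

        // Does the ray intersect a collider, and is it on a tile?
        if (Physics.Raycast(ray, out RaycastHit hit) &&
            hit.collider.TryGetComponent(out TileView tile))
        {
            TileClicked?.Invoke(tile);
        }
    }
}
```

From there, subscribers (unit selection, movement previews, and so on) can react without knowing anything about the mouse or the raycast.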
