Tower Defense Update: Blocked Enemies

I think I once said I had no technical challenges left. That’s not 100% accurate. While there’s nothing I have to look up in the engine anymore, there are still a few technical design decisions that I’ve been avoiding since I started. Here are those three decisions.

  1. What happens if you build towers and the enemies can no longer reach the exit?
  2. What to do when you try to build a tower and enemies appear where you want to build? Note that the game does not pause while you build.
  3. What to do when an enemy reaches the exit.

The third one is a critical part of the game and I’m still trying out different things. The second one could be handled with a delay that disables building until the enemies move away.

The first one is the one that I really didn’t want to tackle. At first, I tried to build an internal map, but that got complicated real fast. Then I realized a few things. First, you can only build towers on one kind of Actor. These are called PathActors in my game. You can disable building towers on them and still use them in the level for the enemies to walk on. These are 8×8 tilesets, you can shrink them with a simple size property, and you just place them side by side. All tiles in my game are 100cm (or 1m). So it’s easy to build an internal graph for this part: connecting tiles together is just checking which ones are about 100cm apart. Easy.

Next are the irregular shapes. There are custom shapes and single-tile components that can be grouped together into a single actor. I tried a few different things and again it just got complicated. Then I had my second realization. Since you cannot build here, only the connecting tiles need to be specified. So I manually added a spline component to these with an endpoint at each end of the path. You can even add more splines if you’d like, as long as spline points overlap at the connecting points. Again, easy. I just scan for these splines and join them up. It’s a bit of manual work to add the splines, but it makes things so much easier.

I still had a few issues, so I drew the internal map. Here is an example (after I fixed my issues).

What is the point of this map? How is it used? I hooked it up to an A* navmesh. When you want to build a tower, I set a node as blocked (the blue spheres) in the graph and check whether the enemies can still reach the exit from each entrance. If not, you can’t build there. In fact, the menu and the selection square won’t even show up.

This took three days to implement, and that was quick considering I had to generate the map from the actors in the level. I didn’t want to build the maps manually. I also reused the A* code from my first game.

I also learned a few things about navigation meshes. First, you don’t need to spawn one yourself. Instead, go into the project settings under Engine/Navigation System, then go to the Agents section and add an agent for your navigation mesh. You can select your navigation mesh class there. Do not remove the main mesh; that’s the one you will use inside your game.

One important thing is that registering a navmesh is the same as adding it to the list of supported agents. So if you’ve already added it in the project settings, don’t register your navmesh yourself. It’s not necessary.

To do an internal query, it’s rather simple. You grab the navigation system and do your query there. Like this:

// Build the query between the two tiles; reject partial paths outright.
FPathFindingQuery Query;
Query.StartLocation = StartTile->Position;
Query.EndLocation = EndTile->Position;
Query.bAllowPartialPaths = false;
FSharedNavQueryFilter QueryFilter = MakeShareable(new FNavigationQueryFilter());
Query.QueryFilter = QueryFilter;

// Grab the navigation system and the agent config for the placement navmesh
// (index 2 = the third agent in the project settings).
UNavigationSystemV1* NavSys = FNavigationSystem::GetCurrent<UNavigationSystemV1>(GetWorld());
check(NavSys);
FNavDataConfig Config = NavSys->GetSupportedAgents()[2];

// Run the query synchronously; only a full path counts as success.
FPathFindingResult Result = NavSys->FindPathSync(Config, Query);

if (!Result.IsSuccessful() || Result.IsPartial())
  return false;

Note that this is for when you don’t use a path following component or an in-level navmesh and just want to do a simple path query.

You select the navmesh you want to use with NavSys->GetSupportedAgents()[n], where n is the index of the agent listed in your project settings. Mine was the third navmesh. The first is used in-game by the enemies. The second is used by the UFOs for the Carrier tower. And the third is to verify that you can place a tower.

That’s one issue resolved and it works great. I added some flags that are cached so I don’t need to recheck the same tiles repeatedly. When you build or remove a tower, those flags are erased.

Next I have to finally decide what happens when an enemy reaches the exit and start putting some levels together.

Converting Blueprint to C++ in Unreal Engine

There may come a time when your blueprints just get way too confusing or too large to maintain. In my current game, I tend to use blueprints for UI because this is where there are a lot of binding events on button clicks, hovers, etc. So it’s just easier in a blueprint. But after a while, it could turn out like this. Here is a real screenshot from my game. Click on the image to see a larger version.

Note that this is a very technical discussion. If you just want an update on the game: I’m working on the UI, and you can just scroll down and look at the screenshots 🙂 They’re still a work in progress.

And the worst part is I’m not even done. Luckily, it’s not too difficult to convert this to C++. For this, we need to understand a few things.

First is how inheritance works. All Blueprints are derived from C++ classes. In recent versions of Unreal Engine, you can have blueprints derive from other blueprints with the use of slots, but that’s another discussion. For our purposes, we just need to add our own base class to our widget blueprints. But to which blueprints do we add a C++ base class? It will likely be a lot of them, which can create a lot of C++ files. However, there’s a way around this as well that we’ll get into. To start, take the highest-level blueprint that is becoming too large and add a C++ base class.

To do this, select the Tools menu and click “New C++ Class”. Then choose the exact same base class as your widget. It will usually be UserWidget, but could also be CommonActivatableWidget if you use CommonUI. Click Next. Type in a name for your class and click “Create Class”. Unreal Engine will try to build this new class using live coding. This will likely crash the editor, so make sure to save before doing any of this. You just need to recompile and relaunch the editor.

Now open up your widget and select the File menu and click on “Reparent Blueprint”. Select the C++ class you just created. Voila. You just added a C++ base class.

The nice part is that everything in C++ can be made accessible to your blueprint by adding “BlueprintReadWrite” (and “EditAnywhere”) to any UPROPERTY or “BlueprintCallable” to any UFUNCTION. If you’re not familiar with how to create properties and UE callable functions, then you’ll need to look that up.

So now you need to start moving variables to your C++ file. For basic types, this is simply adding a UPROPERTY for each variable. I tend to rename all my blueprint variables before doing this. So if I have a variable called “NumTabs”, I’ll rename it to “NumTabX” or “NumTabOld”. The reason for this is that it’s easier to avoid conflicts when you relaunch the editor. It’s a bit more work to refactor, but it’s worth it.

Now close the editor and rebuild. One thing you’ll note is that the C++ properties do not show up in the Variables panel. This is somewhat annoying, but you can still grab the property by right clicking in the blueprint and typing “Get NumTabs” for example and the property will show up. You just can’t drag it in anymore.

To replace the old value, right click on it in the variables panel and click “Find References”. In the new dialog, there is a binocular icon to the right. Click on it. This will search all your blueprints. Now you must replace all the references with your new C++ variable. It’s a bit tedious, but it’s cleaner and you avoid a lot of headaches down the road. Do this for each variable you converted to C++. When done, you can delete the Blueprint version of the variable. It will tell you if there are still references to it.

Replacing variables is fine, but there’s not much point if you can’t use them anywhere. These variables are used with other widgets, so now we are going to replace widget variables. Note that CREATING the widget (for those created at runtime) must remain in the blueprint, unless you store the widget CLASS as a default variable in C++ and use that to create it. This is because you cannot create the C++ version of the widget directly; that’s the base class, not the full class. The full class is the Blueprint itself.

So how do we get access to blueprint widget references in C++ if we don’t create them? Well, there’s a way to bind widgets to properties so that you can use them in C++. You’ll only have access to methods defined in C++, but that’s usually good enough. If you need more access (for example if you created custom widgets), you can use the same technique here and/or add C++ methods that can be implemented in blueprints. We’ll get into that later.

 

For binding widgets as properties, we do NOT change the names. The editor will automatically update everything. So all you have to do is add this property to your C++ header.

UPROPERTY(EditAnywhere, BlueprintReadWrite, meta = (BindWidget))
TObjectPtr<UTextBlock> TextBlock_Points;

 

That’s an example of a TextBlock widget binding. One thing to note is that the property name must match EXACTLY what it is named in the Blueprint. Other widget types follow the same pattern, with a U prefixed to the C++ class name (UImage for an Image widget, for example).

For any pre-existing widget, it’s that easy.

Now you can start converting your events to C++ functions (UFUNCTION) that are callable from blueprints.

 

To bind custom widgets, you will have to either bind to the UserWidget class (where you will not be able to call custom events you added) or add a custom C++ base class as done previously. Then you can bind to that class and move your events there so that Class 1 can call functions in Class 2.

If this starts to chain out of control, there is a way out.

Instead of converting every event, you can instead just declare a function that can be implemented in a blueprint. So this allows you to call blueprint code from C++.

 

In my game, there is a place where I want to show the Pause menu from C++ code. But the pause menu is completely implemented in blueprint. How did I call this from C++?

UFUNCTION(BlueprintImplementableEvent, Category = "UI")
void ShowPauseMenu();

By adding the above function in my C++ header, I then implemented it in the blueprint that has this class as its base class. You can override it in the blueprint by using the little dropdown in the FUNCTIONS header panel.

 

This is a way to avoid converting everything, and it’s useful for smaller custom widgets. You still need to add a custom C++ base class, but you don’t need to move all the variables or code down to it. You just add the functions you want access to, and the C++ base class lets you bind to it in the higher-level widgets you’ve given a C++ base class.

 

Here are a few examples of converting blueprint to C++. I have a screen for unlocking towers. It’s not complete yet. I still need to update the preview image and add a longer description, for example, but here’s a screenshot.

Here’s the blueprint graph just to fill in the data for each tower.

Just one graph like that isn’t too bad, but this is the bottom graph of the first image at the top. So it was getting way too complicated. After converting this to C++, here’s my code:

It is a 1-to-1 conversion. I find the C++ code much easier to read and maintain.

In the TurretStats widget, I had an event called SetTurret that sets up all the stats bars, cost, title, etc. You can see this function called in both the blueprint and C++ code above. Here is the blueprint version (the top graph).

Here is the C++ version.

In the C++ code, it’s easier to see what’s going on. It’s just grabbing 3 stats and setting them on the current widget. It’s also setting the display name. The reason there’s a loop is so you can compare the new stats with the previous level’s stats. The stats are displayed overlapping each other, as seen in the blue unlock screenshot above (the green, yellow and red bars).

I didn’t go into delegates or event bindings. Those are a bit more complicated. I tend to leave those in the blueprints, though I have used them in C++. I once tried to create a UE interface in C++ and have blueprints implement its methods so that I could call those methods from C++ instead of creating C++ base classes for each widget. Well, there’s a little undocumented snag with that. You can’t call UE interfaces from C++ unless the interface is directly implemented in C++ as well. It will literally do nothing if you call a UE interface defined in C++ that is implemented in a Blueprint. Now, there is a way to call it: you need a pointer to the widget and you call the static Execute_MyFunction(MyWidget) instead of just calling MyFunction. Also, references to the interfaces are whack when dealing with blueprints, especially if you’re passing them around. They’re not just pointers. Short story is that UE interfaces were not worth the pain.

 

What we have here is one scenario where converting to C++ is not done for speed at all. In fact, I’d be more than happy to keep everything as blueprints. But sometimes it’s just easier to write small snippets of C++ code. Blueprint is great for prototyping and if it doesn’t need much more work, leave it as a blueprint. But for things that need a little more setup, I find it’s just easier to code in C++. And yes, you can use the exact same techniques with actors and regular blueprints.

Finally, my levels screen.

Once I’m done with the tower unlock screen, I need to do a bit more work on the gameplay. I finally need to resolve what happens when a robot reaches the end and what happens if you block off the path to the exit. Those are the last real issues left. I’ve written down a few ideas for more levels. So if I can make one or two half-decent levels, I’ll likely make a demo. I’m starting to create my store page on Steam and looking into making a game page. Stay tuned for that!

 

Tower Defense Game November Update

For this update, we have a couple of new turrets. We have 9/10 done. Only one left to do.

The Carrier turret/tower is finally done. I went way overboard on that one and there was no need. I took the A* navigation system from my first game and moved it over to this game. It is only used by the UFOs. It is extremely long range and is the costliest tower. You start with 4 UFOs and get an extra one for each level (maximum 3 levels: green, yellow and red). Here is the level 3 Carrier tower.

The UFOs fly around and attack groups of enemies. There must be at least 4 enemies close to each other, so you cannot use it against single enemies.

Anyhow, the A* navigation system that I brought over was converted to 3D (not just 2D) and I had to add a bunch of restrictions so it didn’t do weird zigzagging. It still has a somewhat erratic pattern, but it looks cool and they’re UFOs. I like that they fly a little bit weird. I added new sound effects for flying and shooting. I thought finding a sound effect for a UFO flying around would be easy. It’s a UFO. You could use literally anything, as no one knows what a “real” UFO would sound like. But I didn’t want it to be too annoying. I went through ALL the sounds I have a license for (or that are free) and only found one sound I kind of liked. I think it came out really good. But we’ll have to see once people play test it.

Here’s a shot of the UFOs attacking.

Sorry for the blurry screenshot, but they move somewhat fast. As you can see, they rotate around a center point as they fly around. Once they’re near their targets, they will tilt toward the enemy and shoot. The fire and smoke on the right is where the UFOs were located when they took their last shot.

Next is the Fire Turret. This one I had a lot of trouble deciding what I wanted to do with it. This tower can hit many targets at once. But anything with a fire effect (fire and laser towers) doesn’t do much damage to shielded units. Still, I wanted an AOE (area of effect) tower that was cheaper than the main AOE tower (still remaining to be done). Each tower has 3 levels. So I wanted them to all be slightly different. Not just do more damage. So I thought if I make level 1 have 4 slots, they would shoot out of opposing slots, shut off, then the other two slots could fire. For level 2, four perpendicular slots could fire at once and then the diagonals could fire. For level 3, all 8 directions could fire at once. This is exactly what I did and level 3 looked especially cool.

But I didn’t like it.

One problem was that I didn’t want it to shoot through adjacent towers. That’s fixable, but what if there are towers on both sides? The shooting would no longer alternate. What do I do then? A similar problem happens if there is a road on only one side of the turret: if the shooting alternates, half the time it’s doing absolutely nothing and looks rather silly.

Then I realized I already had invisible collision areas all around the tower to detect when to shoot. So I could have the slots shoot independently of each other. But I could limit how many could fire at a time. So level 1 is 2 out of 4. Level 2 is 4 out of 8. And Level 3 is all 8 can fire at once. Problem solved. The best part is that I don’t need to check that a flame will hit a wall or neighbouring tower because it only shoots in a direction where an enemy was detected. The sides where there are no roads or have a tower will never fire.

The nice thing about this approach is that the flames ignite in order. And then the first one will stop and the next one where the enemy is located can ignite. It basically follows the enemies around.

Here is a screenshot just before the first set of flames will turn off (at the top left of the fire tower). Once that one shuts off, the one in the direction of the enemy will ignite and destroy it. This is a level 1 tower so 2 out of 4 slots are firing.

And that brings us to the last topic, setting the enemies on fire. Towers that set enemies on fire are the fire tower (above) and the laser tower. You can see in the screenshot above the enemy is on fire. What this means is that the enemy takes damage for a little while even after the turret has stopped shooting or is out of range. Here is another example.

The laser tower on the right has just shot some enemies and you can see some of them are on fire indicating they are still taking damage.

This was fairly difficult to do at first. But once you know how to do it, it is quite easy. Here I used a free fire effect package from the Epic Marketplace. But if you use it out of the box, the flames produce a long winding firewall wherever the enemies travel. It was kind of funny looking. Here’s what I mean.

You can see the trail of fire and distortion lingers behind the enemies. The fix is easy. In Niagara, you can set each effect to use local space. Those will stay on the enemy. So the fire and distortion use local space. And the smoke and embers can use global space since I wanted those to stay behind.

I picked and altered three of these fire effects to try and avoid all the enemies looking exactly the same when they are on fire.

I said in an earlier blog entry that I was done everything on the technical side. That’s not entirely true. I still haven’t fixed the issue where you try to build a tower where an enemy is located, or what happens if you block the enemies’ path to the exit. I’ll have to deal with that very soon.

One more tower left to do. Then it’s menus and config screens. I really need a setting for the volume 🙂 After that is making levels. I have a few ideas already.

 

Tower Defense Preview

Below is a link to a video showing a preview of a test level for my Unnamed Tower Defense game.

I have 7 towers done. One is in the works (can be seen at the very end of the video). That will leave two more to go.

The tower I’m working on now is a carrier with little spaceships that can fly anywhere on the map and attack enemies. It’s a very long range tower.

Other towers that are complete are:

  1. Gun (best all around turret)
  2. Sniper (long range, delay between shots)
  3. Laser (weak against shields, but does lingering damage even after turret stops shooting)
  4. Short Circuit (Electrocutes enemies, very strong against shields)
  5. EMP (Slows down enemies. Not shown in video, but looks very cool)
  6. Machine Gun (Rapid shooting, but overheats a lot and must wait to cool down)
  7. Bank (Gains extra interest on points and scores extra points on enemies killed within its radius)

The last two towers beyond the Carrier are Fire and an undetermined AOE turret that damages all enemies around it. Thinking of going with the robot/computer theme and calling it MIPS (Multiple Independent Projectile System).

I managed to get enemies to stay on their side of the path (enemies can travel side by side) instead of all clumping around the corner.

I have created two enemies so far. I have also bought licenses for two more. I’m hoping to have at least 5 enemies total. There will likely be a few variations of each for additional “bosses” and difficulty.

I have lots of floor tiles I’m playing around with. These will likely not be the ones I use, but some of them might end up in some levels.

One thing to note in the preview are the shields. I licensed those as well, but I had to program them to show where they were hit. You can see little ripples where the shield is shot.

Turrets require a lot of work before they are complete.

  1. 3D mesh
  2. Animations (guns moving for example)
  3. Special Effects (I made the electric field myself as well as the laser and EMP effects. There are also gun shot effects.)
  4. Sound effects (this is actually a little annoying to get right since a lot of it must repeat or fade in/out).
  5. Program how the turret actually does damage (overheating delay, chain lightning, direct hit, slow, etc.).
  6. Set all the usual options (name, description, cost, damage per shot, etc.)

At one item a day, it takes a week to do a single turret. And that would be a very good week.

After the turrets and enemies are done, I need to create levels. I also need to finish the start screen, unlock screen, level selection screen and configuration screen. I have a lot done on the controller configuration screen. So yes, controller support is already implemented.

Anyways, here is the preview. It is playable. I just have to finish it. There are no more obstacles or technical details to solve. It’s just about finishing the content/assets and making levels.

Tower Defense (Tile System w/ Virtual Textures)

Note: Skip to the Tile section below if you’re only interested in seeing how virtual textures work.

UPDATE: This technique does not work. I’m leaving the article up because it provides information on how to use UDIM. But it does NOT fix the tile overlap issue.

I really wanted to continue on the resource management game. And it will get completed eventually, but the animations required were making it impossible to make any real progress. I’ve found that these items take most of my time.

  • UI (I’ve always been slow at UI)
  • Animations
  • Asset creation and fixing

That last part… about fixing… is tweaking details, textures, render settings, etc. You can spend days, weeks and months just messing around with things before you realize all your time is gone.

So I have every intention of finishing the robot resource management game. But I need to get something out much faster. I decided on a Tower Defense game. Not everyone likes them, but there doesn’t seem to be that many on Steam that I’d play.

I gave myself two weeks to come up with a prototype and see how far I’d get. After the first week, I’d only spent time on a tile editor. I had an 8×8 grid and an Unreal Editor panel where I could specify what tiles I wanted in the grid. I also needed blueprints to create the tileset. It was a lot of work. I then got sick with a cold. When the second week resumed, I made a Sniper turret, had a basic UI, one enemy and a weird way to make a hole for the turret to come out. In other words, in two weeks, I had a working concept. I was already way ahead of where I was with the other game.

Basic tower selection menu. The stats panel renders the turrets in real time. They are not static images. I may animate them later. And I even have a health bar over my placeholder enemy. Don’t worry, the health bar is a normal size when you’re not zoomed in that close. It also supports shielded enemies. Some turrets are better against shields. The turret bases are placeholders as well, but we’ll see.

Already, this is kind of playable for a prototype (everything can and will change in the final release). I just need more enemies. Sniper and Gun work. You can hover over a turret and see its range with a transparent overlay circle. Machine gun has an extra overheating stat that I need to hook up and get some special effects for. Oh yeah, the turrets track an enemy, and when it is in line, they shoot and keep tracking for a bit afterwards in case they shoot again. There is recoil, and there are special effects for a bit of fire and smoke after each shot. But that’s for another day.

Tile Section

The large slab in the image above is an 8×8 tiled area. I wanted to be able to select from a list of tiles. I wanted to start with allowing at least 16 tiles. But with 6 attributes each (color, specular, metallic, roughness, normal and bump), that makes for 96 texture samples. That’s ridiculous. So I tried to put them all together into a single 4K x 4K texture. I wrote a blueprint to do this for me. And what you get is not very good. Here is an example of the first 4 tiles and the rest being black.

First, never mind that the texture is using the wrong resolution here. I had to force it to display lower resolutions since I had to undo many of the fixes I had implemented. It shows up the same even with the correct resolution.

How does this work? There is a source (licensed) tileset with all the source tile textures. Here it is. Only the first four tiles are populated for now. This is the roughness texture.

There is another tiny 8×8 lookup texture that indicates which tile to use from the tileset above. So we can populate the 8×8 grid in any way we want.

I made the first four tiles in the 8×8 grid use the same tiles just so the reader can see the adjacent layout of the tiles.

Finally, what are those repeated dots in the screenshot above? Well, there’s a bump map on white lines to make them appear as if they stand out. The way bump map is implemented in UE is that it samples neighbouring texture pixels. So for those pixels on the edge of a tile, it will sample pixels from the neighbouring tile and you get this effect. The section inside the red square is what is getting duplicated. Actually, it’s from the tileset texture above. Keeping them in the same order makes it easier to visualize how the engine would sample from a neighbouring tile.

This effect of using neighbouring texture pixels happens even without bump mapping (bilinear filtering will sample 4 adjacent pixels, for example). It’s just that bump mapping makes it worse.

Possible Fixes

My first attempt at fixing this was to use a technique I had used before with megatexturing. That is to add an 8 pixel border around each tile in the source tileset. This way, it will never sample the adjacent tile. Well, this works fine for the first four LODs, then the artifacts come back.

What is a LOD?

A LOD is a Level Of Detail. For textures, it’s usually called a MIP or mipmap. It is a lower-resolution texture created from the original. The first mip level below the original is half the width and height, the second is a quarter the width and height, and so on. As you zoom out, the renderer uses coarser mips to get better results. If it always used the highest resolution, it would skip many pixels in the texture and look grainy.

So using the above technique, Unreal Engine does its own MIP creation and you will have an extra border for 4 levels. After that, you’ll be sampling adjacent tiles again. You can clamp the LODs from 0 to 3, and this will work if you tell UE to load all LODs. If you don’t, it won’t clamp until the correct LOD gets loaded into memory, so you’ll see artifacts until you zoom in, at which point it will start to render correctly. It’ll still be grainy if you have more colourful tiles than I do here.

To completely fix it, you need to keep at least a 4-pixel border on each LOD. That would mean a 128×128 tile contains only 120×120 pixels of actual data. Never mind the blueprints to scale to this area and produce these textures; it’s a pain. The next level would be 64×64, with only 56×56 of actual data. The data region no longer halves cleanly from one LOD to the next (120 halves to 60, not 56), so you can get shimmering effects just going from one LOD to another.

Virtual Textures

What are virtual textures? To explain this, we need to explain what happens with a normal texture. Normally, when you texture a cube for example, the entire surface samples each pixel according to its distance from the camera. So if it samples from LOD 0, ALL of LOD 0 must be read into GPU memory. If an object is slanted, it could use several texture LODs, all of which need to be loaded into memory at the same time even if most of the data is unused. What virtual texturing does is split your textures into tiles and only load the tiles of each LOD that are needed. This way, if the player never zooms in close to an object, the fine detail is never loaded. And even if the player does zoom in, only the tiles the player sees get loaded.

Internally, Unreal Engine will do much the same things that were explained in the previous section. You need to specify the tile size and you need to specify a border size. That border is used for a similar, but opposite reason as above. The border is actually needed to avoid colour bleeding during compression. Compression like DXT5 compresses 4×4 blocks. This is why a border of 4 is recommended. That is also something needed in the previous technique, but wasn’t mentioned. With Virtual Textures, the border COPIES the adjacent tile. So the border is actually used to get the adjacent tile. Exactly the opposite of what we want. Remember, Virtual Textures is supposed to be used with regular textures. So you can’t have breaks right in the middle of it. You want it to render exactly like before except use less memory. The fact that it has borders implies this is the intended use.

This would seem pointless for a tiling system then since we don’t want to sample from the adjacent tiles.

Well, there is one other feature of Virtual Textures we haven’t discussed.

UDIM

UDIM can be simple or complicated depending on who you ask.

UDIM is a system where you use UV coordinates higher than 1. Usually, UV coordinates go from 0 to 1 and span the whole texture.

To better understand UDIM, consider a human character. If you put everything in a single texture, you quickly run out of space and your textures need to grow to compensate. Also, some areas may need more resolution than others, which isn’t ideal. So what was done is to split different areas of a character into different materials. The head, the torso, and the legs & arms are usually split into their own materials. For this example, we have 3 sets of UVs and 3 sets of textures. In the past, all the UVs would overlap. Since they’re in different materials and assigned different textures, that’s no problem.

UDIM comes in and says no. We have one set of UVs and one set of textures. We still start off with three sets of UVs, but we place them next to each other. So our UVs now go from 0 to 3 on the U axis. They still go from 0 to 1 on the V axis. You can tile on the V axis as well if you have lots of materials. Unreal Engine limits UDIM to 10 tiles per row. It also seems to have a limit of 8K pixels across for all UDIM textures combined. It should be noted this somewhat goes against the original intent of UDIM, but you can keep the textures separate if need be.

There is a file naming convention as well.

You can see this link for more info on UDIM.

https://docs.unrealengine.com/5.0/en-US/streaming-virtual-texturing-in-unreal-engine/

Scroll down to the UDIM section. You can see an image of how each tile can be placed in the grid.

You now name your textures with the addition of a UDIM coordinate system indicating where it should go in the UDIM grid.

tile-colour.1001.png (UV (0,0) – (1,1))

tile-colour.1002.png (UV (1,0) – (2,1))

tile-colour.1003.png (UV (2,0) – (3,1))

tile-colour.1011.png (UV (0,1) – (1,2))

 

The horizontal coordinates are 1-based and the vertical coordinates are 0-based. Don’t ask me why. The last digit is the horizontal tile index (1-based) and the second-last digit is the vertical tile index (0-based).
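The naming rule above can be sketched as a pair of tiny helpers. This is my own sketch of the standard 1001-based UDIM convention, not Unreal code, and the function names are hypothetical:

```python
def udim_to_tile(udim: int) -> tuple[int, int]:
    """Convert a UDIM number (e.g. 1001) to 0-based (u, v) tile indices."""
    index = udim - 1001
    return index % 10, index // 10  # 10 tiles per row

def tile_to_udim(u: int, v: int) -> int:
    """Convert 0-based (u, v) tile indices back to a UDIM number."""
    return 1001 + u + 10 * v
```

So `tile-colour.1011.png` lands at tile (0, 1), matching the list above.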

If you import any ONE of these images, Unreal Engine will load the other images in the same folder that use the same prefix and UDIM convention and put them all into a UDIM virtual texture.

By using this naming convention, you are indicating what UVs you want to use. This means your mesh needs to use UVs higher than 1. But for our tiling system, we don’t need to do that. More on that later.

If you want to draw the second tile, you just use UVs (1,0) to (2,1). That’s it.

But what about sampling from adjacent tiles? It seems that UDIM virtual textures don’t sample across UDIM boundaries. I’ve tested it and I get no artifacts. But the Bump Map node changes the UV coordinates themselves, so those need to be clamped to the same UDIM tile.

So the short story is to make your tiles into UDIM tiles. And adjust your UVs accordingly. That’s it.

As for the tile size specified in the settings, keeping it at 128×128 should be fine. Just don’t use UDIM tiles smaller than that. Bigger UDIM tiles are fine.

 

UDIM UV

Ok, so if we have an 8×8 gridded mesh, how do we set up our UVs? You could set them up to go from 0 to 10 (or however many UDIM tiles you’re using) and adjust the integer part of the UVs after doing a lookup. But there’s a simpler way. Just set the whole thing up as you would normally, from 0 to 1. This is the simplest UV you’ll ever get: a square with normalized UVs. The “Plane” component in Unreal Engine is already set up this way and you can use that.

Next is the material. In our example, we have an 8×8 grid, so we’ll multiply our UV coordinates by 8. The fractional part will be our UV within the UDIM tile. The integer part is used as UV coordinates into our 8×8 lookup texture. Once we have the lookup value (in red and green after multiplying by 255) we replace the integer part of our UVs and add on the fractional part. That’s our new UV coordinates. And we then sample from our virtual textures. It’s really that simple. There is one more thing to look into, but let’s take a closer look at what was just explained.

 

Here, we multiply our UV by 8 and clamp it, because we don’t want 8 as an actual value. We remove the fractional part, add 0.5 to sample the center of our lookup texel, and then divide by 8 to get back to the 0-1 range, since we’re doing a normal texture sample here into our tiny 8×8 lookup texture. We must sample from the highest resolution texture LOD, so make sure to set the level to 0 and the MipValueMode of the texture lookup node to MipLevel (Absolute).

Frac (after multiplying our UV coordinates by 8) gives us the UV coordinates inside our UDIM tile. But we still need the UDIM coordinates that we read from our lookup texture. Those coordinates are in the Red and Green channels, so we make a float2 out of them and multiply by 255, because all values from a texture sample come back in the 0 to 1 range (these are pixel values, not UVs). Since pixel values range from 0 to 255, multiplying by 255 recovers the integer value. We then add the fractional part back in, and that’s our UV for this tile. You can then use this as the UV for your texture samples. In the above image, using a bump map adjusts the UV a bit, so I use that adjusted UV in all the other texture samples.
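The whole UV computation can be sketched in plain code. This is only a sketch of the material math described above, not actual shader code; the `lookup` function is a stand-in for the 8×8 lookup texture sample and is hypothetical:

```python
def udim_uv(uv, lookup, grid=8):
    """Compute the final UDIM UV for a 0-1 input UV over a grid x grid mesh.

    uv:     (u, v) in 0..1 over the whole gridded mesh
    lookup: stand-in for the lookup texture -- a function
            (cell_x, cell_y) -> (r, g) with channel values in 0..1
    """
    su, sv = uv[0] * grid, uv[1] * grid
    # Integer part, clamped so 'grid' itself never appears as a cell index.
    cx, cy = min(int(su), grid - 1), min(int(sv), grid - 1)
    # Fractional part = UV inside the UDIM tile.
    fu, fv = su - cx, sv - cy
    r, g = lookup(cx, cy)
    # Texture channels come back in 0..1, so multiply by 255 to recover
    # the integer UDIM tile coordinates stored in the red/green pixels.
    tile_u, tile_v = round(r * 255), round(g * 255)
    return tile_u + fu, tile_v + fv
```

With an identity lookup (each cell stores its own coordinates), `udim_uv((0.3125, 0.6875), ...)` lands at UV (2.5, 5.5): cell (2, 5), halfway into that tile.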

 

So that’s rather easy. But there’s still one problem that remains.

Our virtual texture could be any size, and it won’t necessarily match the number of tiles in our mesh. So the LOD the Engine uses by default will be based on the size of the full UDIM texture. That size is off by a factor of 8 in our example, so it would be 3 mip levels off. That’s not good.

What we need to do is compute the texture LOD level we want.

We need to calculate what our texture size would be on our mesh if we were to render it at the highest resolution. Well, there are 8 tiles across. What size is each tile? In my case, they are 1024×1024. Rather large, but hey, that’s what virtual textures are for. So my “virtual” texture size is actually 8×1024 = 8K. This number can go beyond Unreal Engine’s 8K max texture size, but for every doubling, you lose one bit of precision. Luckily, most hardware uses several bits of sub-pixel precision. It’s usually not important, but something to be aware of if you start to use a huge grid.

Virtual Texture Size = Grid Size * Tile Size

There is a node in UE to compute the texture LOD to use.
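I don't know the exact internals of that node, but standard mip selection from screen-space UV derivatives looks roughly like this. This is assumed textbook math, not engine code, and the function name is hypothetical:

```python
import math

def compute_mip_level(duv_dx, duv_dy, virtual_size):
    """Standard mip selection sketch from per-pixel UV derivatives."""
    # Scale the UV derivatives into texel units of the full virtual
    # texture (grid size * tile size, e.g. 8 * 1024 = 8192).
    dx = math.hypot(duv_dx[0], duv_dx[1]) * virtual_size
    dy = math.hypot(duv_dy[0], duv_dy[1]) * virtual_size
    rho = max(dx, dy)  # texels stepped per screen pixel
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0
```

Note how stepping 8 texels per pixel gives mip 3, which matches the "3 mip levels off" error described above when the size is off by a factor of 8.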

Above, we can see a constant with our computed virtual texture size and a ComputeMipLevel node. You shouldn’t need to clamp it. I just like to be safe. Then you use this level in all your texture samples except the lookup texture.

Above, we can see how to do texture samples. If you use a bump map, you use the UV you computed and feed the sample into the BumpOffset node. Make sure to set the appropriate range for the bump map. Is it 0 to 1, or -1 to +1? This will adjust the UV to create the impression of extra detail. I didn’t show the clamping of the output of the BumpOffset node. The integer part needs to remain the same. So one solution might be to bump the fractional part of the UV by itself, run it into a saturate node and then add the integer UDIM coordinates.
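The clamping suggestion at the end of that paragraph can be sketched like this (hypothetical helper name; the saturate-then-add construction is the part described above):

```python
def clamp_bumped_uv(bumped_frac_uv, tile):
    """Keep a bump-offset UV inside its UDIM tile: saturate the bumped
    fractional part, then add the integer tile coordinates back so the
    sample can never cross into a neighbouring tile."""
    saturate = lambda x: min(1.0, max(0.0, x))
    return (tile[0] + saturate(bumped_frac_uv[0]),
            tile[1] + saturate(bumped_frac_uv[1]))
```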

You then use this new UV in all other texture samples. An example is shown with TilesColor. The output Color node then hooks up to the Base Color pin of the output node.

Ensure the Sampler Type is set to a Virtual type such as “Virtual Color” and “Virtual Linear Color”. Note the VT at the bottom right of the preview images. I use Virtual Color for the actual color texture, Virtual Normal for the normal texture and Virtual Linear Color for everything else. The lookup texture should use a regular Linear Color sampler.

 

If you don’t use a bump map, then just use the computed UV (just rename “Bump UV” to “UV” in the example above) as your UV.

 

Conclusion

I’ve seen way too many threads that ask how to do this with no responses. It’s a somewhat challenging problem, but Unreal Engine is up to the task. And once you understand the setup, it’s remarkably easy to do. UDIM file naming will automatically merge your tiles together. No need for cumbersome blueprint or C++ code to do that. No need to deal with borders and resampling your texture to odd sizes. Just do a lookup, replace the integer value of your UV and you’re done. Oh, and Mip Level computation, but there’s a node for that.

One last thing, remember to use a different lookup texture for each gridded mesh if you want a different pattern of tiles.

Here is my tile editor to let me choose what pattern of tiles I want on my grid. Unfortunately, I haven’t implemented rotating tiles yet 🙁 I’ll put rotation info in the blue channel. That part is left as an exercise to the reader 🙂

The left area in the panel is what you see in the game. The right side is the set of tiles I can choose from. I hit Apply to apply the selected tile (on the right) to the selected grid cell (on the left). I’ll need another button to rotate the grid tile, and I'll have to update the material to handle that.

Updates (Youtube)

I’ll try to start posting small updates to YouTube. I uploaded one such clip already; see below. I finished the animations for the treecutter, but there are still some issues to work out, as can be seen in the clip. This is still really early in development, so what you see here is just a sample landscape with some trees to test the treecutter (grass and other things are turned off).

My game will likely be released on Steam and/or the Epic Store. Almost certain to be on Steam. I was hoping for an August release, but that may be way too optimistic. I’m confident a lot of progress will be made soon. Most of the underlying code is done for the core mechanics. The biggest obstacle right now is resolving path collisions, not just when robots collide, but also when grabbing and dropping resources at the same time at the same tile. When the game is released, I’ll post here, and you can subscribe to my YouTube channel. I’ll likely create a Twitter account soon as well.

Quick Update and a bit of lore

Don’t have much to report (no images or anything), but I wanted to indicate that there is still progress being made. Mostly backend code and fixing collisions as items fall down when chopping trees. I also wanted to go over 6 questions to ask when making a story. This apparently works with video games as well. Let’s go over that a bit. The video I got this from is linked below. The idea is that if you can answer these 6 questions, you can write any story.

Here is the link to the video:

https://www.youtube.com/watch?v=uL0atQFZzL8

The 6 questions are:

  1. Who is it about?
  2. What did they want?
  3. Why can’t they get it?
  4. What did they do about it?
  5. Why doesn’t that work?
  6. How does it end?

The premise is that there was a conflict or war between humans. Some AI were involved and reprogrammed. But most robots did not get involved directly and will not harm anyone unless they have a really good reason. So your task as the player is to go from island to island to clean up and organize the robots that were left behind. Some will help you. Others are leftovers from the conflict and will want to harm you. You must build up and defeat those that would fight against you until you reach victory. And you must build up your buildings, technology and monuments to prove your worth so that the robots will fight for you. Once you’ve claimed the entire island, the robots will vote on whether they want to stay or return home. The more robots that stay, the more appealing your settlement is, and the better your score.


To answer the questions more directly:

  1. Stranded robots on an island after a conflict
  2. To go back to normal helping build a community
  3. They’ve lost their leaders or anyone to help and are stuck on an island. (This is where the player comes in)
  4. Nothing. This is where the player takes over and gives them purpose. The robots will actually do most everything without you telling them.
  5. They’re not working together at first and if they do, the bad robots will attack. You need to find resources, build up and build defenses, etc.
  6. When you take over the island.

I haven’t fully decided on the end. Much of this may change. But that’s the premise right now.

As for updates, things were a bit slow for a while. I still don’t have the treecutter done. I have one thing remaining: placing the carried item on the ground at the correct location in the correct tile. Items will stack and I’ve set a current limit of 6. So 3 on the bottom, then 2 above that and 1 at the top. But I need a way to automatically tell the size of the item and whether it’s a circular item (like a log) or not. Logs stack differently. So I’ll likely do that today. I also slowed down the animations. At one point he was removing the center of the tree before the top of the tree fell down.

Once placing items is done, the tree cutter is done. The carrier will need a bit of code to handle priority items, and then there are some design decisions on what a carrier should pick up next. Should it be random? Nearby? I think the item should be somewhat nearby, otherwise it would waste a lot of time. I will also have to handle collision between carriers. Waiting is one option, but that could lead to deadlocks. Also, if someone plonks a building right in the path of a carrier, I need to update that path. The next big resource is the rock cutter. I intend to reuse a lot of the same code for this. I can now move robots around where I want and do fine adjustments with no hassle.

Then I need a UI to at least place where a building should be built. Then the user can actually play a game or the start of one. In any case, the core mechanics are coming along. I may start posting youtube videos or shorts and linking them here to show some progress. The tree falling animation would have been funny to post.

That’s it for now.

I may as well add one image. Here’s a building I made a while back that is actually hundreds of parts (so the robots can build it one piece at a time). There is a lot done that I haven’t shown yet. The purple is a custom shader to indicate that my code is seeing the building where it actually is. This means you can place pre-built buildings at level start and the code will automatically recognize where they are and update the pathfinding code to go around it as well as set the input and output tiles for resources for the building.

Inverse Kinematics

We’re going to go over IK chains mostly so I’ll have a reference for what I did. This may get a little bit into some technical details, but hopefully not too bad.

First, what is Inverse Kinematics? It’s simply trying to move a limb and have other limbs up to the torso move with it. So if you move someone’s foot, their leg (calves and thighs) will move as well. For now, we’ll only look at two bone IK. What I was struggling with was trying to move the robot’s hand and have the arm follow it.

Before getting to all that, there’s this thing in Unreal Engine called a control rig. It’s a set of widgets you can place around your character to move different parts and easily create character animations. Some can be for the eyes. Others can be for the head, where the head will always follow a widget that you can move around. That makes it easy for your character to look at a specific target. One aspect of this rig is moving the legs and arms, and that’s where IK comes in.

So if you move a control rig widget for the hand, you’d like the arm to follow. That’s it. You should be able to plug in a two bone IK component that comes with Unreal Engine and be done with it. Instead, this is what I got.

The image is a little busy, but the red circles are control rig widgets (as are the arrows). If you move the red circle on the hand, the arms should bend in to make room. There’s a widget for the elbow, but it’s not on the elbow itself. It’s behind it to tell the IK component in what direction to move the elbow.

Ok, so what’s going on here? The arm on the left of the image is not hooked up yet. It’s in the default T pose. The right side (left arm) is hooked up, and notice anything strange? The forearm is rotated in place. The elbow is too far up and the hand is too low. Basically, an offset has been applied and I have no idea why. I have used IK chains for many years. I know how to hook them up. I’ve used them before in Unreal Engine. This is very old hat for me. And it just gets worse if you try to move the arm. Sometimes the hand completely detaches from the body. Sometimes the arms go backwards. It’s ugly. I’ve tried all sorts of things and nothing. I did fix it, but not by using an IK component that directly modifies the bones’ transforms.

Let’s take a little break and change the topic a bit. I needed this because I wanted to implement the backwards transform for the control rig so I could make animations to chop trees. Forward transform is what I described above with the control rig. Reverse transform is the opposite: an existing animation gets applied to the control rig so that you can edit and modify that pre-existing animation. I didn’t have the reverse transform set up, but when I did try to set it up, I noticed something was off with the arm, and here we are.

As for the game, I’ve finished the asset management for foliage (trees, grass, flowers). I can swap out the foliage instance for an individual actor so that I can chop it down. There’s still plenty to do under the hood, but buildings automatically get assigned a worker. They can find their way to said building. They will enter it and they will perform the task for that building. In this case, I’m trying to get the treecutter to chop down some trees. He now exits the building and goes to the tree and is ready to chop it down. But I ended up down this rabbit hole trying to make some simple animations. He already has a blade. I just need to extend it, make a sweeping motion and let the tree fall. That’s it. I could have done it manually by tweaking individual bones, but no, I wanted a working control rig.

So how did I fix this? Since I have no idea why this is happening, I needed a different way to accomplish it. I know IK tools work. Then I noticed Unreal actually has a really simple component that only takes 3 positions for IK. Normally, you need 5 (including the elbow and hand control positions indicating where you want to move the arm). So how does this work?

The trick is to notice the last two input pins. The Effector is where you want the hand to go. So this is the hand rig control position. The root is the first of the two bones. In this case, it is the location of the top of the arm (shoulder area). The pole vector is the direction that the elbow should go. So this is the elbow rig control position. Then you calculate the bone lengths in their original T pose. There are components to do this, so you just plug it in.

Notice the “Initial” pin. This is their starting position in the T pose. So you can calculate the original bone lengths avoiding any distortions. And they plug directly into the IK components. Easy!

Notice also that we DON’T use these transforms for the input of the IK component. We want the current position for those.

For debugging, I displayed the elbow and hand locations of where the IK component computed them. The positions are exactly where they should be. Now to move the arms.
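For readers who want the math, the positions the component computes can be reproduced with the standard analytic two-bone solve. This is my own sketch of that textbook math, not Unreal's implementation; the function name and vector helpers are hypothetical:

```python
import math

def two_bone_ik(root, effector, pole, l1, l2):
    """Return the elbow position given the root, the effector target,
    a pole-vector position for the bend direction, and bone lengths."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    scale = lambda a, s: [a[i] * s for i in range(3)]

    to_target = sub(effector, root)
    dist = math.sqrt(dot(to_target, to_target))
    d = min(dist, l1 + l2)  # clamp: unreachable target -> fully extended
    axis = scale(to_target, 1.0 / dist)
    # Law of cosines: distance along the axis to the elbow's projection.
    a = (l1 * l1 - l2 * l2 + d * d) / (2.0 * d)
    h = math.sqrt(max(0.0, l1 * l1 - a * a))  # elbow height off the axis
    # Bend toward the pole vector: its component perpendicular to the axis.
    raw = sub(pole, root)
    perp = sub(raw, scale(axis, dot(raw, axis)))
    side = scale(perp, 1.0 / math.sqrt(dot(perp, perp)))
    return [root[i] + axis[i] * a + side[i] * h for i in range(3)]
```

The returned elbow is always exactly `l1` from the root and `l2` from the (clamped) effector, which matches the debug positions landing exactly where they should.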

I chose this Basic IK Positions component specifically because it did not alter the skeleton or bones of my character. Problem is we have to do that ourselves. This is surprisingly easy. I just used a couple of “Aim” components that come with Unreal Engine.

Aiming takes a bone that should be pointing at a certain target. It takes said target as input as well. And it has a secondary target because it needs to know what the “up” direction is for rotating laterally. So the input bones and targets are easy. It’s the upper arm and forearm bones respectively. And for the targets, they are the outputs from the IK component. That only leaves the secondary or “up” targets. For the upper arm, I just used a perpendicular vector from the triangle formed with the upper arm and the elbow control rig position. This is done by taking two vectors and taking the cross product. For the lower arm, I used the triangle of the upper arm and lower arm.
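As a sketch, the "up" target from the triangle could be computed like this. The helper names are hypothetical; the cross-product construction is the part described above:

```python
def cross(a, b):
    """Cross product of two 3D vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def up_target(root, mid, tip):
    """Perpendicular of the triangle formed by two limb segments:
    the cross product of its two edge vectors."""
    edge1 = [mid[i] - root[i] for i in range(3)]
    edge2 = [tip[i] - root[i] for i in range(3)]
    return cross(edge1, edge2)
```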

I put all this in a blueprint function so I can reuse it and this is what I get.

The arm looks like it should. I can move it around by moving the red circle on the hand and the elbow goes where it should. I also used the other controls to open the eyelids and set the pupil size just to show how this works. It’s a little busy, but it’s very flexible.

The next problem I have is the reverse transform. Taking the bone positions of the arm and placing the controls of the control rig in the correct location so that the arm remains in the same location. I’m not sure how to do that. That’s the next challenge. I was just happy to get this really annoying problem solved and thought I’d share the results. And again, this is why I chose a robot with no legs. I just didn’t want to deal with the animations and robots’ movements don’t need to look human. Even so, still not without its problems 🙂

Oh, forgot to mention a really nice bonus to my fix. In all existing implementations that rotate the bones for you, the hand would not rotate with the forearm and would look really weird on the robot when his hand is not aligned with the forearm. I think this is done so that when you move the foot for example, the foot stays aligned with the floor. But in my case, I don’t want this and just rotating the forearm rotates all the child bones automatically.

Chopping Trees

This will be a very short post. I’m currently working on being able to chop trees. This is probably the most difficult part of the game to develop. I have to cut the tree, remove the branches, cut the logs, bring them back to the cabin. And I have to make sure the tree falls down in the right direction. I’ve already cut up two trees (in 3d software). I may add a third tree, but it’s an insane amount of work. So for now, there are only two variations of trees (I have a total of 5). Oh, and the tree planter needs to be able to plant and grow trees. Those are available in the Megascan trees made available by Epic.

But first, we need a saw to cut down the trees. And yes, the trees he’s going to chop down are the ones behind him in the image below.

No axe or hand saws for this guy. Built-in portable circular saw. Instantly cuts trees down. A few challenges here. I’m going to have to disable collisions with trees so the robot can get into the forest. Otherwise, this isn’t going to work. And I’m going to have to ignore a lot of stuff that may be around the tree if it falls in an unusual spot. I don’t really have a choice. So the next step is doing the animations, and I may do them all in C++ instead of in the editor because I need to ensure everything lines up properly. I mean, I don’t have to. But it’d look a lot nicer.

This is one of the main resources in the game (planks from logs), so this is a big milestone. All the assets are ready to go for cutting down trees. I just finished the saw and added skeleton sockets to attach it. The other big milestones are carrying resources from one place to another and stacking them (and taking the top one), figuring out conflicts when more than one robot picks stuff up from the same tile, and lastly, being able to construct buildings. If I can get those three things done, the core mechanic of the game will be in place and the rest is details.

Level of Detail

This is jumping ahead a bit, but we’ll talk about LOD’s (level of detail). When the camera is up close to a 3D model, you want the highest resolution model available that can be supported by the game. But as you zoom out, you can’t see much of the small details anymore, so you want a simpler mesh. There are automatic tools that will do this. This process is called decimation. In Unreal Engine, you can set how small the character is on screen before switching to a different level of detail (a coarser mesh). You can then set the percentage of polygons you want from the original when creating each LOD. Usually each LOD is 50% of the polygons of the previous. So if you start with 24K triangles, LOD 1 will have 12K, LOD 2 will have 6K and so on. Usually, 3 to 5 LODs are created. But Unreal Engine supports up to 8. You want to use more LOD’s the more polygons your mesh has.
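The halving scheme above is simple enough to sketch (hypothetical helper; the 24K starting count is the text's own example):

```python
def lod_triangle_counts(base_triangles, num_lods, ratio=0.5):
    """Triangle budget per LOD when each level keeps `ratio` of the
    previous level's polygons (the usual 50% halving scheme)."""
    return [round(base_triangles * ratio ** i) for i in range(num_lods)]
```

So a 24K-triangle mesh with 4 LODs gives 24K, 12K, 6K and 3K triangles.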

The reasons for building LOD’s are numerous. First is that you want to have your video card do less work if possible. In this game, I’m going to have many characters (robots) running around. If they each have 50K polygons, the game isn’t going to run very smoothly. And to fit them all on screen, you’re going to have to be zoomed out anyhow. So drawing 8x small meshes at LOD 3 is the same work as drawing one highly detailed mesh (LOD 0).

So let’s start with the little guy in the game.

This robot is a licensed asset found at DAZ 3D. I did not create it. The trees and grass are free on the Epic store for Unreal Engine. The little robot will push the grass away from him a little as he moves around. The red hex lines are just an indicator of where the tiles are located. I may remove them or make them an option. There are actually 6 pie slices in each hex tile for roads, buildings, resources, etc. So it’s not fixed strictly on hex tiles; sub-tiles can be used in the game.

The reasons to use a robot for this game are:

  1. Save time creating an asset
  2. Don’t need to worry about leg animations
  3. Skinning weight maps are really simple compared to live figures (these tell what polygons to move when a bone moves).
  4. Starting over isn’t a big deal and since this is my first project, I expect to do that several times.

Let’s take a look at what auto-generated LODs look like.

This is LOD 3, so you would never see it up close like this. The arms and the rest of the body are actually fine. But the distinct feature of this robot is its eyes. And as we can see here, the eyes are all messed up. The whites of the eyes are poking through the pupils, and this is very noticeable, especially considering the contrast of black and white. And this isn’t a criticism of Unreal Engine’s decimator. It’s actually quite good. I find it’s one of the best out there. It’s just that the eyes are where there are a lot of overlapping polygons. The robot also has eyelids that cover the entire eye area, and these have pokethrough as well. In other LODs, it’s actually the whites of the eyes that have blacks from the mesh behind poking through. Anywhere you have overlapping polygons, you’re going to have these kinds of issues.

So one fix is to just export the model, fix up the eyes and re-import it. It’s not that simple though. You need software that can retain the rigging, weight maps and UV maps (texturing). There are options in Unreal to only import the geometry, but I have no idea what happens to the UVs and weight maps in such a case. And in my experience, it will still complain about the rigging if the weight maps or skeleton isn’t imported back.

A nice feature about LOD’s in Unreal Engine is that each LOD is a distinct model. You can remove bones if they are too small or not used anymore. You can have different UVs and different weight maps. Sometimes, there are parts of a model that become so small, you just delete it. So those UVs are gone and so are its materials. This is one less draw call.

I use Lightwave 3D for modelling. It doesn’t support rigging though. So what I do is import the FBX file into Lightwave’s Layout software (for animations), and from there I can send the model to Modeler to edit the figure, then send it back and re-export it. The bone structure is somewhat different from Unreal’s. You have to rename all the end bones and configure it a specific way. But it lets you export back to FBX with full rigging, weight maps, UV maps and everything. I do keep Blender in my back pocket as a fallback if all else fails.

So… with all that said, did I just fix the eyes and call it a day? 🙂

NO!

I made every single LOD by hand.

I wanted to get more practice out of the tools I was using and I love doing this stuff, so no big deal. When making LOD’s yourself, I learned one very important lesson. Keep everything you can as quads. Do NOT triangulate your mesh. Unreal Engine will convert everything to triangles, but if you want to make your life simpler, keep everything as quads where possible on the assets you work with outside of Unreal (before importing to Unreal). Why? Because if you have a mesh of quads, you’ll most often have them in a grid arrangement. If you have a section where you have a 2×2 region of quads, remove the middle horizontal edges and you’re left with two quads. Instantly reduced your mesh by 2. Do it again vertically and your mesh has been reduced by 4 from the original. You only need to fix up the areas that join up to different shapes. You can repeat this again and again until you run out of polygons. You can’t do any of this with triangles.
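The merge passes described above are easy to count (hypothetical helper; each pass dissolves the middle edges of the grid's 2×2 regions in one direction):

```python
def quad_counts_per_pass(start_quads, passes):
    """Quad count after each merge pass on a regular quad grid:
    every pass roughly halves the number of quads."""
    return [start_quads // 2 ** i for i in range(passes + 1)]
```

A horizontal pass then a vertical pass takes 1200 quads down to 300, a 4x reduction, as in the text.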

Above are LODs 0 to 3. I was somewhat surprised at how good LOD 1 looked. I may end up using it as the main mesh. Haven’t decided yet. By LOD 2 (3.4K) you can start to see degradation, and this was manually and intentionally done. When zoomed out, the eyes appear perfectly circular. You really can’t tell the difference. In Unreal Engine, you can zoom out and select which LOD you want to look at, so you can pick the exact screen size that the LOD will be used at.

One thing I noticed is that assets that aren’t made for games often don’t care about how many polygons they use. In this case, I simplified the ears. Originally, they had 4340 quads. I reduced it to 1200 (600 each ear). Even that was a bit much I think. Again, this is no criticism of the original artist. These assets are made for detail.

Another issue with third party assets is that the mesh is often in separate parts. In other words, sections are copied and pasted (often scaled) but not joined together. Why does this matter? Again, it’s no problem if you’re just rendering these assets. But in a game where you need to make LODs, you can’t decimate these meshes properly. In this example, the black rubber triangular section above the wheel was 4 different sections. When decimating, it would distort or just plain disappear. Even doing it manually, it limits what edges and polygons you can remove. In LOD 3 for example, I made it into a single section with only 24 polygons. Even before using the original mesh, some adjustments are needed. There was an antenna on the bottom with some ring structures, I’m guessing for a levitating robot. I’m never going to use that, so I just removed it entirely. The door on the belly was just decoration. I’m actually going to put a battery in there that will have a function in the game. So I cut out the door and added a cavity section in the torso to store the battery. I also modelled a battery that will go in there. I added UVs for the internal section, added weight maps and added a new bone to the skeleton so that the door can open.

I may end up adding LOD 4 and 5. I’m at the limit of what I can manually remove, so I may let Unreal’s decimator take care of it and manually fix the eyes as mentioned earlier. One reason for doing at least the first two LODs is that if I decide to use LOD 1 or LOD 2 as the main mesh (it will become LOD 0) or even doing it dynamically in game for older machines, I know those LOD’s will look good. The other LOD’s aren’t as important since they will never be seen up close.

Is this the correct way to do it? I have no idea. But it was FUN!!! I find I can do at least two LODs per day. And the door enclosure and battery took another day.

As for the next entry, it might be about Megascan trees and assets and what I did to be able to chop down trees. This is a resource management game, so a tree cutter hut will have a robot in it. He will chop down trees. Another robot will bring the logs to the saw mill where another robot will saw them into planks. With planks and stones, you can build basic buildings (like said tree cutter hut and sawmill). There’s a list of resources and buildings that I’ve yet to finalize. Chopping down trees was by far the most difficult challenge. Another topic would be A* pathfinding in Unreal. That’s where the hex tiles come in. That was fun.