The Making of A Year Of Rain: Destroying People’s Work
How I make buildings collapse in A Year Of Rain.
Since I started working at Daedalic in 2014, I’ve been taking care of the visual effects for our in-house produced 3D games, such as Silence or State of Mind. For some time now, we’ve been working on a new game called A Year Of Rain. The new thing here is that it’s not a story-driven adventure game like we used to create, but a real RTS game with competitive gameplay mechanics, co-op and multiplayer, and a full-grown campaign. The challenge of such a game, besides the huge amount of work for a small team like ours, is that everything we produce follows one main approach: gameplay first. For the visual effects, this means readability is the most important thing. It doesn’t help anybody if you see the most awesome explosion you’ve ever seen in a video game but can’t see your own units or even understand what’s going on.
In addition to the numerous ability, environment, and UI effects, destruction effects are an important part of the game. It is kind of satisfying to see enemy buildings collapse under my siege. The destruction effects serve not only visual satisfaction but also the readability of the game. Anyone looking at the battlefield needs to know what’s going on. Of course, we have life bars and UI, which show you how much health a unit or building has left. But when I see a bunch of flames and smoke coming out of a main building, I instantly know something is going on here.
The four states of destruction. The flames spawn randomly at predefined sockets.
We Didn’t Start The Fire In order to represent the health status of a building, we use fire particles in addition to the UI and life bars. The less health the building has, the more fires are spawned on predefined sockets. For example, if a building burns in one place only, it still has about 75% health, but if it burns in three places, it’s pretty badly damaged. We chose this approach because it is simple to implement, and we have to keep in mind that you can repair your buildings. If we started to break buildings apart as they lose health, we would also have to make them build up again when they get repaired. It is much easier to deactivate a particle emitter than to animate a build-up or repair sequence for a building. Easy fix.
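As a sketch, the health-to-fire mapping could look like the following Python snippet. The socket count and the function name are illustrative, not the game’s actual values or code; the thresholds follow the 75%-one-fire example above.

```python
import math

def active_fire_sockets(health: float, max_health: float, num_sockets: int = 4) -> int:
    """Return how many fire sockets should burn for a given health value.

    Hypothetical helper: full health means no fire; each lost quarter
    of health lights one more of the predefined sockets.
    """
    ratio = max(0.0, min(1.0, health / max_health))
    if ratio >= 1.0:
        return 0
    return min(num_sockets, math.ceil((1.0 - ratio) * num_sockets))
```

Because repairs only change the health value, turning emitters back off as the ratio rises again comes for free with this scheme.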
Nevertheless, if the health value of a building falls to or below 0, it should collapse. It looks cool and feels nice. From this moment, it is no longer gameplay relevant. It loses its collision and can’t produce anything anymore. It can’t even be selected. So, we got some time for some effect magic on this one. The collapse animation is just visual, serving as feedback only and has no effect on the gameplay.
Although the Unreal Engine possesses quite powerful destruction mechanics for static meshes, which can be switched on dynamically and make objects fall apart impressively, these dynamics also bring unpredictability with them. Parts of destroyed objects can fly quite strangely through the maps, which can lead to funny, but not always wanted, results. In addition, the collision calculation can quickly become expensive and drag the frame rate down. So Unreal’s dynamic destruction was not an option.
So, animation? It’s fairly hard to animate destruction by hand. It’s time-consuming and needs a lot of polishing to look and feel right. Not to mention that our animation department is fully occupied with animation sets for all the playable units in the game. And there are a lot of buildings to destroy: just for one faction, there are approximately eight buildings, plus environment props like rubble or gates, and the trees you have to gather wood from. We’ve got a lot of trees. So, I decided to go with simulations.
A material instance for the destruction of the barracks. All the parameters coming from Houdini are set here.
A Kind Of Magic For mesh destruction and simulation, I use SideFX Houdini. Houdini has its roots in the film industry and is basically a complete package for CGI. Particularly interesting is the non-destructive workflow: it is based on a waterfall-like node system in which you can undo, change, and adjust settings and modifications at any time. It also takes a procedural approach to asset creation, making it easier and faster for a small team to produce a higher output by creating a multitude of variations through randomization. In addition, SideFX develops and updates a toolset for game development, specializing in the creation of game assets and applications in game engines.
Part of this toolset is the Vertex Animation Textures export, VAT for short. This method stores the positions and rotations of the vertices or pivots of a previously fractured and/or animated mesh in a texture. Each channel of the texture corresponds to an axis in 3D space, i.e., RGB = XYZ. The width of the texture results from the number of animated and/or simulated vertices, and the height corresponds to the frame count of the sequence. For example, if you have an animated mesh with 40 vertices in a sequence of 96 frames, the texture will be 40 x 96 pixels.
The texture is then interpreted in a separate UV channel by the material/shader in the engine, which sets the simulated vertices to their respective positions each frame. The sequence can then be played back according to a defined value, like a curve over time. The advantage is that the sequence is completely controlled by the material. No rigging or weighting is needed, and the shader complexity stays manageably low. Only the mesh has to be prepared for the destruction and simulation.
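To make the playback mechanics concrete, here is a rough Python emulation of what the material does. The real version is shader code generated by the VAT exporter; this sketch only assumes the layout described above, one texture row per frame and one column per animated point, with RGB holding the XYZ offset.

```python
def sample_vat(texture, point_index, playback):
    """Look up a point's offset for a normalized playback time.

    texture: list of frames, each a list of (x, y, z) tuples,
             emulating the rows/columns of the VAT position texture.
    playback: sequence time in [0, 1], e.g. driven by a curve over time.
    """
    num_frames = len(texture)
    # Pick the texture row for the current frame, clamping at the end.
    frame = min(int(playback * num_frames), num_frames - 1)
    return texture[frame][point_index]
```

Because playback is just a scalar fed into the material, the engine can pause, scrub, or curve-drive the destruction without any skeletal animation system involved.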
A grouped geometry with Unshared Edges flagged and a PolyCap node that connects the marked edges.
Another Brick In The Wall How do I destroy the meshes? First, I get a final mesh from a 3D artist. In general, this mesh is optimized, meaning unnecessary faces have been removed; the mesh is almost hollow. That’s good for the game, because fewer triangles mean better performance. But to make the mesh fall apart nicely, I need closed geometry. When a mesh is fractured, its surface gets cut, and new geometry is created at the cut edges, the so-called inside geometry. This geo is necessary to give the mesh volume. Without it, the mesh looks like an empty shell when it breaks apart.
So, let’s dive into Houdini. To close the mesh in the required places, I group individual larger parts, search for Unshared Edges, and use PolyCap to close them. This step often takes the most time, because I have to make sure that the mesh is clean and ready for fracturing. In the best case, my 3D artist colleague hands me a nicely closed and unwrapped mesh. In the worst case, I get an open, pieced-together mesh that needs to be reworked for destruction purposes. Once the mesh is ready, I unwrap the new surfaces. Then I randomly scatter points over the mesh, and at these locations the mesh is fractured using the Voronoi Fracture node. The number of scatter points determines how many parts the mesh is fractured into. To get more control over the scattering, I use a Paint node to paint vertex colors onto the mesh and use these values as the density base for the scattering. This way, I can let some parts of the mesh crumble more than others.
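The density-painted scattering can be illustrated in plain Python. This is not Houdini’s Scatter node, just the underlying idea: fracture sites are drawn with probability proportional to a painted weight, so heavily painted regions crumble into more pieces. All names here are made up for the example.

```python
import random

def weighted_scatter(region_weights, num_points, seed=0):
    """Pick fracture sites, biased by painted per-region weights.

    region_weights: mapping of region name -> painted density value.
    Returns num_points region picks; higher-weighted regions are
    chosen more often, like denser scatter points on painted areas.
    """
    rng = random.Random(seed)  # seeded for reproducible fractures
    regions = list(region_weights)
    weights = [region_weights[r] for r in regions]
    return [rng.choices(regions, weights=weights)[0] for _ in range(num_points)]
```

A region painted to zero simply receives no fracture sites, which matches the idea of letting some parts of the mesh stay in larger chunks.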
After chopping the mesh into small pieces, the newly created inside surfaces have to be unwrapped too. Fortunately, Houdini creates name attributes for the new geometry in the fracture node, making it easy to unwrap them. In order to be able to assign your own material to the inside geometry later in Unreal, this must already be done in Houdini. For this, a separate material is assigned to the inside geo. I usually use a simple red material as a placeholder, which makes it easier to spot errors caused by the fracture.
Houdini in action. The “bowl” under the mesh catches the falling pieces. The node tree on the right defines the destruction.
Shake It Off Now the mesh is ready for the simulation. Using the RBD Fractured Object shelf tool, which is basically a preset for a simulation node tree, I create a rigid body object from the fractured geo, to which collision and gravity attributes are assigned. You can build this node tree yourself, but Houdini does a good job with the shelf tools, and it’s the fastest way. Also, you can always dive into the nodes and edit them.
In the newly created DOP network, which stands for Dynamic Operation Network and includes the simulation, the mesh is now simulated. There are still parameters that need to be set up for the simulation to behave as I would like. If I just let the simulation run, the mesh will simply fall to infinity.
To control the simulation, I set the Object Type of the individual parts that make up the mesh to Create Static Object. As a result, the individual parts are now static. They still have a collision, but they are not affected by gravity or other forces. Then I create a box group that contains parts of the mesh. I set these grouped parts to active through an AttributeWrangle, which is basically a code node. Since I can move the box group over time, I can activate more and more individual parts of the fractured mesh. If I move the box group slowly from above the mesh to the bottom, the individual parts of the mesh are activated one after another and fall slowly downwards, colliding with the still static mesh parts and resulting in a nice destruction effect.
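Here is a conceptual sketch of that moving activation box in plain Python, not the actual DOP network or AttributeWrangle code: a fractured piece becomes an active rigid body once the box, sweeping down from above the building, reaches the piece’s height.

```python
def active_pieces(piece_heights, box_bottom):
    """Indices of pieces whose height is at or above the box's lower edge.

    Emulates the box group: pieces inside the box are switched from
    static to active and start responding to gravity and collisions.
    """
    return [i for i, h in enumerate(piece_heights) if h >= box_bottom]

# As the box sweeps downward over time, more pieces activate, top-first:
heights = [0.5, 2.0, 3.5, 1.2]          # illustrative piece heights
early = active_pieces(heights, 3.0)     # box near the roof
late = active_pieces(heights, 1.0)      # box almost at the ground
```

Moving the box slowly therefore produces the cascading, top-down collapse described above, with each newly activated piece falling onto the still-static pieces below it.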
To give the parts more apparent mass, I reduce the gravity force, causing them to fall more slowly, which makes them appear bigger. Since the buildings in the game are relatively small compared to the units, the slower falling apart gives them back size and weight, so they don’t look like miniatures. I don’t let the falling parts collide with a ground plane; I want them to fall right through the ground. Because you can place the buildings almost anywhere on the maps, you never know how or where the individual parts would collide with the environment or other units. If I let the single parts collide only with the rest of the building and nothing else, I have no parts lying around or clipping through other meshes. It simply avoids possible errors. However, the fact that the individual parts fall through the ground toward infinity also has a disadvantage. As the position of the falling parts constantly changes and the distance increases, the position values stored in the texture become far too big, which can lead to errors when playing back the sequence in the engine. For this reason, I always build a big “bowl” mesh under the simulation that catches the individual parts. This looks kind of funny when played back in the engine, but usually the player doesn’t see it.
Finally, I export the simulation via the VAT Game Shelf. In the exporter, you specify the path to the simulated geo, as well as the length of the sequence in frames.
I try to keep all sequences between 96 and 120 frames in order to get a fairly short and compact destruction sequence. If the destruction becomes too long or too slow, the player may not realize that the building or prop is already destroyed and keeps trying to destroy it. As always: gameplay first!
The export creates a new mesh with all the fracturing and new surfaces, as well as the mentioned position/rotation textures.
Top: Detail view of the simulated mesh with the point count of the packed geo. Bottom: Position texture in Unreal. A width of 37 pixels for the points and 95 for the sequence.
Go With The Flow Now I import the generated files into the engine. In the static mesh settings of the destruction mesh, I flag the Use Full Precision UVs option to prevent errors in the UV layout. The textures must not be compressed: each pixel contains position/rotation values, and these can be lost through compression. Since most of the meshes in the simulation consist of packed geometry (geometry that is connected into single pieces), each piece gets one pivot point at its center.
The advantage: not all the vertices have to be saved in the texture, only the pivots of the individual fractured parts. This in turn means that if I have a destroyed mesh with 24,000 vertices, fractured and packed into 40 pieces, I only need the pivots of these 40 pieces, which makes the texture just 40 pixels wide. And because the sequence is mostly between 96 and 120 frames, the texture is just 96 to 120 pixels high. Even uncompressed, this keeps it down to a few dozen kilobytes at most. I only have to be careful not to go crazy when I fracture the mesh, because even if the geometry is packed in Houdini, once I import it into the engine I still have all the vertices, and this can blow up the mesh.
I have to keep it simple: fewer small pieces, more big chunks, and not too many new inside surfaces. The last step is to put all the parts together. The textures need to be interpreted in a material. The VAT export in Houdini provides all the necessary code for this; I only have to copy it from Houdini and paste it into an empty material. All needed configurations are commented, so it’s pretty easy to build the material. To make it even easier to assign the VAT functionality to already existing materials, I made a material function out of it. Now I can create instances from the master materials for each destructible mesh, with all the needed parameters exposed and editable, to create a whole lot of destructible mesh materials. Only some values like bounds and frame count have to be transferred from Houdini to the material instance, and the material has to be assigned to the destruction mesh.
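The packed-pivot texture sizing described earlier can be sanity-checked with a quick back-of-the-envelope calculation. The 8 bytes per pixel below assumes an uncompressed RGBA texture with 16-bit float channels, which is my assumption for the example, not a confirmed detail of the exporter.

```python
def vat_texture_bytes(num_pieces: int, num_frames: int, bytes_per_pixel: int = 8) -> int:
    """Estimate the raw size of a packed-pivot VAT position texture.

    One pixel per piece per frame: width = pieces, height = frames.
    bytes_per_pixel=8 assumes uncompressed RGBA16F (an assumption).
    """
    return num_pieces * num_frames * bytes_per_pixel

size = vat_texture_bytes(40, 120)  # 40 packed pieces, 120-frame sequence
```

Even at the upper end of the frame budget, the texture stays in the tens of kilobytes, vastly cheaper than storing all 24,000 vertices of the unpacked mesh for every frame.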
To make it work in the game, we created Blueprint components, which are added to the individual Blueprints. These components control all the settings for the destruction sequence. I can define which destruction mesh and material I want to use, change transform settings to adapt to the optimized “original” mesh, add additional destruction meshes for special props or set up many other parameters such as delays, decals or particle effects.
In the End For me, the VAT pipeline works very well. Because of the “waterfall” node-based, non-destructive workflow in Houdini and the easy import into the engine, I can make a lot of game props destructible and game-ready in a short amount of time and integrate them into our game.
Martin Pitzing VFX Artist at Daedalic Entertainment
While he was studying Game Design at the University of Applied Sciences in Berlin, his professor asked him what exactly he wanted to do in game development. Since he wasn’t quite sure, his professor advised him to specialize in visual effects. Very good advice, as it turned out. Since then, Martin has loved making effects and learning new things about them every day.
The post The Making of A Year Of Rain: Destroying People’s Work appeared first on Making Games.