
Category Archives: Game Engineering II

For our game engineering final, we had to build a game using everything we had learned so far in the semester. I decided to remake a simpler version of my thesis game, 404Sight. It is a first/third person runner inspired by Mirror's Edge, with the eagle vision from Assassin's Creed. The player uses their ping ability to reveal parts of the environment, and has to use what is revealed to get to the end as quickly as possible. I chose this game because I knew how all the mechanics worked, and wanted to see how simply I could replicate them in an engine I made from scratch.

For my final, I decided to remake our first level, called the hallway. It's a straight path to the end, but the way is littered with fast and slow tiles. To reveal the tiles, use left click. Movement is through the WASD keys. You can rotate the player character around by using Q and E, but the movement axes are fixed for now; one of my goals for next semester is to get a third person camera implemented. The game exits when you make it to the end. The game name in the top left is a 2D sprite, and another thing I want to implement is a timer, using a spritesheet.

I learned so much in this class about how to set up pipelines, which made it easier for me to add assets to my game. I had negligible graphics programming knowledge before taking this class; now I feel I know at least the basics, and can go deeper into graphics programming and render something that looks much better. I also delved a bit into making plugins for Maya, as I modified the Maya plugin given to us by the professor, changing the way it exported multiple meshes. I also added sound: the music that plays in-game is the music from 404Sight (the real one). Here are a couple of screenshots of my game:

game1 game2

In the debug build (both release and debug are linked below), you can press F1 to toggle debug lines. The debug line I have right now changes size based on the player’s speed. This helped me debug whether I had the intended interaction with each tile. You can find the debug build here. The release build is here.

For this assignment we had to implement sprites in our game. Sprites can be used to display messages to the player, as well as UI elements. To do so, I implemented a sprite class. Since sprites are always drawn on top of everything, we draw them last, and with the depth buffer off. The sprites are also alpha blended with whatever is behind them, unlike meshes. To accomplish both of these, we change the render state of our device to disable the depth buffer and enable alpha blending, then draw the sprites, and then undo the render state changes (see the sketch after the screenshots below). The hardest part of this assignment was making sure that the sprites don't appear stretched, no matter the resolution. It took some thinking and trying a bunch of different things before I could get that part working. Here are a couple of screenshots showing the sprites at two different resolutions:

sprite1

800×600

 

sprite2

1280×720
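As promised above, here is a minimal sketch of the render-state changes around sprite drawing, assuming a valid IDirect3DDevice9 pointer (g_device and DrawAllSprites are hypothetical names, not the engine's actual ones):

// Draw sprites last, with depth testing off and alpha blending on
g_device->SetRenderState( D3DRS_ZENABLE, D3DZB_FALSE );
g_device->SetRenderState( D3DRS_ALPHABLENDENABLE, TRUE );
g_device->SetRenderState( D3DRS_SRCBLEND, D3DBLEND_SRCALPHA );
g_device->SetRenderState( D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA );

DrawAllSprites();  // hypothetical helper that draws every sprite

// Undo the render state changes so meshes draw normally next frame
g_device->SetRenderState( D3DRS_ALPHABLENDENABLE, FALSE );
g_device->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );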

 

The blue numbers on screen are drawn using a spritesheet. Press any number key from 0-9 to see the corresponding number show up. This currently works only with the number keys above the letters on a keyboard, and not with the numpad. Here are a couple of PIX screenshots to show the render state of the device when drawing a sprite:

pixsprite1 pixsprite2
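Picking a digit out of the spritesheet comes down to offsetting the UVs into the sheet. Here's a sketch, assuming a horizontal strip of ten equally sized digits (the actual sheet layout in my game may differ):

struct UVRect { float left, right, top, bottom; };

UVRect GetDigitUVs( unsigned int i_digit )  // i_digit in [0, 9]
{
    const float width = 1.0f / 10.0f;       // each digit covers a tenth of the sheet
    UVRect uvs;
    uvs.left = i_digit * width;
    uvs.right = uvs.left + width;
    uvs.top = 0.0f;                         // the strip spans the full height
    uvs.bottom = 1.0f;
    return uvs;
}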

 

Here is another screenshot showing the PIX instrumentation:

pixsprite3

 

Time Taken:
Reading/Understanding what to do: ~0.5 hrs
Getting the Sprite Class working: ~1.5 hrs
Adding new shaders: ~0.5 hr
Getting spritesheet working: ~0.5 hr
Making the sprites not stretch on resolution changes: ~1.0 hr

Total Time: ~4 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

This assignment was all about lighting. We had to add a directional light and an ambient light to our game. The specific requirements were:

  1. Add normals to meshes, so they could interact with lights
  2. Have a directional light that is moveable
  3. Be able to change the color of both ambient and directional light

 

For the first part, we had to add the normal information to the MayaExporter, and then make sure that it made it all the way from the Maya-exported mesh file to our shaders. This meant that the MeshBuilder had to rebuild all meshes with normal information in them, and then we had to read the new data from the mesh binary files and pass it into the vertex buffer. Once I got that working, adding the lighting was a matter of updating both the vertex and fragment shaders to handle normals and add lighting information to the final screen output. Making the light moveable involved more shader constants, and adding keyboard controls to actually move it around. The controls I implemented are:

  1. I and K to move the light in and out of the screen
  2. J and L to move it left and right
  3. U and O to move it up and down

 

Since it is a directional light, there is no single source that can be seen; it's just the direction that is being changed.
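Under the hood, the lighting is a Lambertian diffuse term plus the ambient color. Here is the math the fragment shader performs, sketched in C++ for clarity (the real work happens in HLSL; all names here are mine):

struct float3 { float x, y, z; };

// Diffuse + ambient, assuming i_normal and i_toLight are unit length and
// i_toLight points from the surface toward the light
float3 CalculateLitColor( const float3& i_normal, const float3& i_toLight,
                          const float3& i_lightColor, const float3& i_ambient )
{
    float diffuse = i_normal.x * i_toLight.x
                  + i_normal.y * i_toLight.y
                  + i_normal.z * i_toLight.z;  // dot( normal, toLight )
    if ( diffuse < 0.0f )
        diffuse = 0.0f;                        // facing away: no direct light
    float3 lit;
    lit.x = i_lightColor.x * diffuse + i_ambient.x;
    lit.y = i_lightColor.y * diffuse + i_ambient.y;
    lit.z = i_lightColor.z * diffuse + i_ambient.z;
    return lit;                                // later multiplied by the texture color
}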

To change the color values of the lights, I currently have a place in code to do it, but I will soon make it read those values from an external file, so that we won't need to rebuild the game just to change the light information. Currently the colors can be changed in GameScope.h, where there are three constants, shown here:

gamescope

This is what my game looks like now:

lighting

 

Here is a PIX screenshot of me debugging a single pixel on screen, stepping through the fragment shader to see what values are being used:

pixfragment

Time Taken:
Reading/Understanding what to do: ~0.5 hrs
Getting the MayaExporter working: ~0.5 hrs
Getting normals working: ~0.5 hr
Getting lighting working: ~0.5 hr
Making the light moveable and colors changeable: ~1.0 hr

Total Time: ~3 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

This assignment had four parts to it:

  1. Build a plugin for Maya 2013, which would export a mesh into our mesh format that we made earlier
  2. Use the exported meshes in our game, with certain conditions:
    1. Draw two different meshes with the same material
    2. Draw the same mesh twice with different materials
  3. Add PIX instrumentation
  4. Make a user settings file, read it in and use it in the game

 

For the first part, we had most of the code for the plugin, and had to add functionality to make it export the Maya mesh into our specific format. It was simple enough, and after playing around with a few test meshes, I decided to export the mesh of the main character from my thesis game, 404Sight. I was able to import it into my game and render it easily enough, with the proper texture too! (I got the assets from one of the artists on the thesis team.) I then exported a basic cube and pyramid to add to my game. This really allows us to add any sort of mesh from Maya, and saves us from having to hard-code data such as vertex positions and UVs. We can simply create a mesh in Maya and use it in the game quickly.

For the next part, I decided to render a total of six meshes at the same time: the ground plane, the character mesh, and two each of a cube and a pyramid. I drew them just around the character mesh, using a red brick and a green brick texture. In order to replace any mesh, I can simply go to my assets folder, replace cube.mesh, pyramid.mesh, plane.mesh or Character.mesh, rebuild them, and use them in the game. Again, adding calls to draw all the new meshes was a simple enough task. Here is what my game looks like now:

meshes

As for PIX instrumentation, we were getting to the point where there were a lot of DirectX calls being made, which made it hard to scroll through all the calls in PIX to find the one I was looking for. The current PIX instrumentation lets me group function calls into collapsible nodes. This makes the PIX output much easier to read, and I can now easily look for a specific mesh or material call.
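This kind of grouping can be done with the D3DPERF event functions from d3d9.h. A sketch of how the calls might be wrapped (the event names and nesting here are assumptions, not my exact instrumentation):

D3DPERF_BeginEvent( D3DCOLOR_XRGB( 0, 255, 0 ), L"Draw Character.mesh" );
{
    D3DPERF_BeginEvent( D3DCOLOR_XRGB( 0, 255, 0 ), L"Set Material" );
    // ... SetVertexShader / SetPixelShader / SetTexture calls ...
    D3DPERF_EndEvent();

    // ... SetStreamSource / SetIndices / DrawIndexedPrimitive ...
}
D3DPERF_EndEvent();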

Here is the new PIX output for the above screenshot:

Pixinstrumentation

The final part of the assignment was adding a UserSettings.ini, which allows users to change certain settings of the game. Currently these settings are the width, height, and fullscreen options. This is a human-readable file, since it has to be easily changeable by end users. These are the most basic of all game settings, and now, instead of being hard coded into the game, they can be changed from outside the game.
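For illustration, the file might look something like this (a hypothetical example; the exact key names in my file may differ):

; UserSettings.ini -- hypothetical example
width = 800
height = 600
fullscreen = false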

 

Time Taken:
Reading/Understanding what to do: ~1.5 hrs
Getting the MayaExporter working: ~0.5 hrs
Getting multiple meshes working: ~1 hr
PIX Instrumentation: ~1 hr
User Settings: ~1.5 hrs
GIT downtime issues: 5 hrs

Total Time: ~5.5 hrs + 5 hrs (due to GIT)

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

For this assignment, we had to add textures to our game, with the two meshes rendered with different textures. The textures were also to be a part of our pipeline, which meant adding another tool, TextureBuilder, which reads in images in common formats (jpg, png, etc.) and converts them into .dds files, which are then used in our game. Another part of this assignment was understanding how UVs work, and incorporating that information into our mesh data.
The TextureBuilder was simple enough (thanks to our professor). However, incorporating the texture into the game meant LOTS of small changes in a lot of files. First off, I had to ensure that the textures were actually being built (changing AssetsToBuild.lua). Then, I had to add the UV info to the mesh, which meant changing the structure of the mesh file I made in the last assignment to add UV information, and changing the MeshBuilder to read and parse that information as well. Next, I had to change the structure of the vertex buffer to add UV information into it, which meant the vertex shader was also updated to process UVs. Finally, we had a new fragment shader, which uses the UV information to get the correct part of the texture and actually display it on the screen.

The materials were also updated: a material now also specifies which texture it uses, which meant the material parsing code had to be updated to read in the texture name. The material class also needed a way to set the texture the fragment shader would use, and finally we had to tell the fragment shader which texture to use before drawing each mesh. As you can imagine, this assignment involved changes all over the existing codebase, and any mistake would have caused everything to stop working. Here is the result of all the above changes:
twotextures
The cube has a soccer ball texture on it, while the floor has a brick wall texture on it. Here is a PIX capture of the above:
pixtex
I have highlighted the command which tells the shader which texture to use for the next draw.
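Setting the texture boils down to one call on the device before the mesh's draw call. A minimal sketch, assuming the material has already loaded an IDirect3DTexture9 (the variable names here are mine):

// Bind the material's texture to sampler stage 0, which the fragment
// shader's sampler reads from, then draw the mesh as usual
g_device->SetTexture( 0, m_texture );
// ... SetStreamSource / SetIndices / DrawIndexedPrimitive for the mesh ...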

Time Taken:
Reading/Understanding what to do: ~0.5 hrs
Getting the TextureBuilder working: ~0.5 hrs
Getting Everything else working: ~2 hr

Total Time: ~3 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

The goal for this assignment was to streamline our build pipeline further. Now, instead of having hard coded mesh data in the game, we have to read it from an external file. To do this, we have to create our own mesh file format, read it in through a MeshBuilder, write it out in binary, and then finally read the binary file in our game code. The condition in this assignment was that we could only do a maximum of four read operations when reading the binary file in the game. Here is the Mesh format I made:
plane.mesh
I used a similar mesh file for the cube as well. I chose this format because it is very human readable, as well as relatively easy to parse in code. It is also easy to add new properties to the file, which lets me extend its functionality when needed. Next, I made a MeshBuilder tool, which reads in this mesh file and writes it out in binary. This was easily the longest part of the assignment. After I got the MeshBuilder working, reading in the binary was simple enough. Although it was possible to read the entire file with a single read operation, I decided to do it with three, as that let me easily read in the different types of data I had: the counts (vertex and index), all the vertex data, and the index data. With this approach, I had to change very little in my existing Mesh class, and could easily incorporate the binary file.
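Here is a sketch of that three-read approach, assuming a binary layout of [counts][vertex data][index data] and a vertex struct that matches the vertex buffer (all names here are assumptions, not the engine's actual code):

#include <cstdio>
#include <cstdint>
#include <vector>

// Simplified: the real vertex layout also carries more data than position
struct sVertex { float x, y, z; };

bool LoadMeshBinary( const char* i_path,
                     std::vector<sVertex>& o_vertices,
                     std::vector<uint32_t>& o_indices )
{
    std::FILE* file = std::fopen( i_path, "rb" );
    if ( !file )
        return false;

    // Read 1: the two counts
    uint32_t counts[2] = { 0, 0 };  // [0] = vertex count, [1] = index count
    std::fread( counts, sizeof( counts ), 1, file );

    // Read 2: all of the vertex data in a single call
    o_vertices.resize( counts[0] );
    std::fread( o_vertices.data(), sizeof( sVertex ), counts[0], file );

    // Read 3: all of the index data in a single call
    o_indices.resize( counts[1] );
    std::fread( o_indices.data(), sizeof( uint32_t ), counts[1], file );

    std::fclose( file );
    return true;
}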

Time Taken:
Reading/Understanding what to do: ~0.5 hrs
Getting the MeshBuilder working: ~5 hrs
Getting the Binary file reading working: ~1 hr

Total Time: ~6.5 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

For this assignment, we had the following two tasks:
1) Use a Lua file to specify which assets to build, rather than sending them in through the command line.
2) Create a ShaderBuilder project, which reads, compiles, and writes out compiled shaders before the game starts. The game then loads in these compiled shaders.

The AssetsToBuild.lua that I used is as follows:

AssetsToBuild

This file has sections which specify what type of file is being built (shader or generic, for now). Then, I specify which builder program to use to build them. This is followed by a list of files, what their built (or target) names should be, as well as any arguments needed during the build process. This lets me specify what names the built files should have, rather than having them keep their original names.

The reason for the second part of the assignment, creating a ShaderBuilder, was to save game loading time at start up: the game no longer has to compile shaders whenever it starts. It can simply read pre-compiled shaders, reducing the time it takes to get into the game. This also allows us to debug the shaders during the build, rather than having to load up the game every single time we make a change in them.

This was a relatively easy assignment, and didn’t take me very long to do. My time estimates are:

Reading/Understanding what to do: ~1 hrs
Getting the AssetsToBuild.lua working: ~0.5 hrs
Getting the ShaderBuilder working: ~1 hr
Getting the different file extensions working: ~0.5 hr

Total Time: ~3 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

For this assignment, we had to do the following:
1) Draw a cube and a plane on screen
2) Be able to move the cube left and right, and in and out
3) Be able to move the camera left and right, and up and down

To accomplish these tasks, we had to do a lot of work. Since we were drawing two different shapes (meshes), I needed a way to tell the renderer what shape to draw, and where to draw it. For this, I made a Mesh class, whose header is shown below:
Mesh
Each object of the Mesh class now handles its own vertex and index buffers. The class also holds a reference to the Game.exe window and the Direct3D device being used. Currently the mesh data (vertex info and drawing order) is hard coded into the class, but in the future it will be made to read that data from a file. Now, I just need to create the mesh objects, initialize them, and draw them to show them on the screen.
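Based on that description, the class looks roughly like this — a sketch from the description above, not the actual header shown in the screenshot:

class Mesh
{
public:
    bool Initialize( HWND i_window, IDirect3DDevice9* i_device );
    void Draw();      // sets the buffers and issues the draw call
    void ShutDown();  // releases the buffers

private:
    HWND m_window;                           // the Game.exe window
    IDirect3DDevice9* m_device;              // the Direct3D device in use
    IDirect3DVertexBuffer9* m_vertexBuffer;  // this mesh's vertex data
    IDirect3DIndexBuffer9* m_indexBuffer;    // this mesh's drawing order
};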

To achieve a movable camera, I made a Camera class as well. The header for that is:
Camera
The camera class holds information about the camera, like the Field of View, Near and Far planes, and the camera’s position.

To draw any object in a 3D environment, we first need to take it from its own space (model space) and find out where it is in the world. This is done through a Model-to-World transformation. Then, once it is in the world, we need to figure out where it is relative to the camera. This is done through a World-to-View transformation. Finally, since all screens are in reality 2D, we need to make a projection of the object onto a 2D plane, called the screen. This is done by using a View-to-Screen transformation. For this assignment, we used perspective projection, as it is what the human eye uses. These three transformations are represented with 4×4 matrices. The first one, Model-to-World, is based on the object's position, rotation, and scale. The other two are based on the camera being used. The three matrices are calculated and passed into the vertex shader, which then uses them to decide where on the screen the object is to be drawn, if it's visible. Here is a screenshot of the cube and plane:
Game
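The three matrices above can be built with D3DX's helper functions. A sketch, assuming the variables shown (the engine's actual code may differ):

D3DXMATRIX modelToWorld, worldToView, viewToScreen;

// Model-to-World: from the object's position (rotation/scale omitted here)
D3DXMatrixTranslation( &modelToWorld, position.x, position.y, position.z );

// World-to-View: from the camera's position and where it looks
D3DXMatrixLookAtLH( &worldToView, &cameraPosition, &lookAtPoint, &upDirection );

// View-to-Screen: perspective projection from FOV, aspect ratio, near/far planes
D3DXMatrixPerspectiveFovLH( &viewToScreen, fieldOfView, aspectRatio,
                            nearPlane, farPlane );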
While deciding the drawing order for each face of the cube, we have to keep in mind that the indices follow a left-handed winding: curl the fingers of your left hand along the vertex order, and the triangle is visible from the side the thumb points toward. Hence, all faces of the cube are visible, as the index buffer specifies that all faces are drawn, whereas for the plane, only the top "face" is drawn, and nothing else. If we wanted to see the bottom of the plane, the index buffer would have to specify the same vertices again, but in a different order.

Here is a PIX capture of the program, with the Draw call of the cube selected, and with the shader being debugged:
PIX
The shader handles the calculation of the final position on screen by using the matrices that are sent to it. We can see the value of position_world that was calculated in that frame, using the input position and the Model-to-World matrix.

You can download the exe below, and the controls are:
Arrow keys to move the cube (left and right keys move the cube left and right, and the up and down keys move it in and out of the screen)
WASD keys move the camera (A and D move it left and right, W and S move it up and down). When you move the camera left, the objects appear to move right, and vice-versa. A similar behavior is seen when the camera is moved up and down.

Time breakup:
Reading/Understanding what to do: ~2 hrs
Getting the Mesh class working: ~2 hrs
Getting the Camera class working: ~1 hr
Getting the two objects drawn properly and controls working: ~1 hr

Total Time: ~6 hrs
This took me surprisingly little time, though I do know that I didn't spend time developing a robust architecture.

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

The two parts for this assignment were:
1) Use shader constants, and change them through either the material file (color) or user input (position)
2) Add a new GenericBuilder project, as an example of the different builders we can have.

For the first part, the material file was changed to the following:
return
{
    constants =
    {
        g_colorModifier = { 0.8, 0.5, 0.8 },
    },
    shaders =
    {
        vertexShader = "data/vertexShader.hlsl",
        fragmentShader = "data/fragmentShader.hlsl",
    },
}

Now, when the program runs, it reads the g_colorModifier value from above, and sets it in the fragmentShader, shown here:
Fragment Shader

This is a per-material constant, meaning it stays the same throughout the use of the material. What we could do is make different copies of the same material, but with a different per-material constant value, resulting in objects that look the same, but with a different color (in this case).
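On the C++ side, setting such a constant can go through the shader's constant table. A sketch, assuming the compiled fragment shader's ID3DXConstantTable was kept around (the variable names are mine; the constant name matches the material file above):

float colorModifier[3] = { 0.8f, 0.5f, 0.8f };  // read from the material's Lua
D3DXHANDLE handle =
    m_fragmentShaderConstants->GetConstantByName( NULL, "g_colorModifier" );
if ( handle )
{
    m_fragmentShaderConstants->SetFloatArray( g_device, handle,
                                              colorModifier, 3 );
}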
Here is the rectangle using two different color values from the material:
Game - 4 - 1

Game - 4 - 2

Another type of constant is a per-instance constant, which is different for each mesh using a material. This is why it's not read from a material. The vertex shader has the following constant in it:
Vertex Shader

We used this constant to change the position of the rectangle on screen using the user input. In the sample exe below, use the arrow keys to move the rectangle around.

For the second part, we added a BuilderHelper project, which can now be used as a base to develop specific builders in the future. As a start, we made a GenericBuilder, which builds all assets. However, in the future we can extend BuilderHelper to make asset-type-specific builders, such as a shader builder, a texture builder, and so on. Building an asset converts it from a human-friendly form to an efficient machine-readable format. Hence, it is better to have different builders for different types of assets, to build each of them efficiently.

Time breakup:
Reading/Understanding what to do: ~4 hrs
Getting the graphics part working: ~3 hrs
Setting up the BuilderHelper and GenericBuilder: ~1 hr

Total Time: ~8 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.

In this assignment, we progressed from drawing one triangle to a rectangle. We also made our own material file, which currently only holds shader information for our rectangle. So the two parts to this assignment were:
1) Use an index buffer to draw two triangles (to make a rectangle)
2) Create a new file format for material files, which we will be using in the rest of the assignments.

Here is the rectangle we drew:
Game - 3

For the first part, since a rectangle needs two triangles, and each triangle needs 3 vertices, the first thought is to just draw two triangles with three separate vertices each. But since two vertices are shared between the triangles, we decided to use an index buffer, which stores the indices of the vertices that form each triangle. This lets us reuse vertices already present in the vertex buffer to draw separate triangles, minimizing the size of the vertex buffer. Here is a PIX screenshot showing the Draw call used for the above rectangle:
PIX - 3

The IDX column in the PreVS tab of the details section shows the index buffer values used for each triangle. The same section also shows the position of each vertex. When finally drawing on screen, DirectX uses normalized screen space coordinates, which means that the rightmost edge of the screen has an X value of 1.0, the leftmost has -1.0, and similarly for the top and bottom. This is why the coordinates are decimals smaller than 1; otherwise they won't be rendered on screen.
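For concreteness, here is what the rectangle's buffers might look like in normalized screen space — a sketch with a simplified vertex struct, not the exact values used:

struct sVertex { float x, y, z; };

const sVertex vertices[4] =
{
    { -0.5f,  0.5f, 0.0f },  // 0: top left
    {  0.5f,  0.5f, 0.0f },  // 1: top right
    { -0.5f, -0.5f, 0.0f },  // 2: bottom left
    {  0.5f, -0.5f, 0.0f },  // 3: bottom right
};

// Two triangles reuse the shared corners instead of duplicating them;
// each triple is wound clockwise so the triangles face the viewer:
const unsigned int indices[6] =
{
    0, 1, 2,  // first triangle: top left, top right, bottom left
    2, 1, 3,  // second triangle: bottom left, top right, bottom right
};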

For the second part, I created the following material file:

return
{
    shaders =
    {
        vertexShader = "data/vertexShader.hlsl",
        fragmentShader = "data/fragmentShader.hlsl",
    },
}

It's a very small material file for now, but it will grow as the class goes on. It is based in Lua, which is a powerful scripting language and also a very good data description language. I chose the above format as it is easy to parse in code, easy to extend with new material information, and very human readable, making it easy to find any problems that might be present in the material file itself.

Time breakup:
Reading/Understanding what to do: ~4 hrs
Getting the graphics part working: ~0.5 hrs
Setting up a new class to use the material file: ~1 hr
Reading the material file and using it to render everything: ~2 hrs

Total Time: ~7.5 hrs

You can download the working exe here.

Your browser may say that the file is malicious. This is because some browsers do not allow sharing executables from unknown sources. However, the file is perfectly safe to run.