GameEngineering_2: Final Game – “Lost Rabbits in Silence”

Game Description

“Lost Rabbits in Silence” is a third-person horror game in which the player must find his five lost friends in a mystic small town filled with fog. The game was made as a tribute to the classic horror game “Silent Hill”.

Download Link

Debug Build: https://drive.google.com/file/d/0B6aorYphcJ5INW0tMmllZHJRU3M/view?usp=sharing

Release Build: https://drive.google.com/file/d/0B6aorYphcJ5Ib0hvNHltQWlqQms/view?usp=sharing

If you encounter the error “MSVCP120d.dll is missing” when running the demo, please download the Visual Studio 2013 Visual C++ Redistributable Package x86 (32-bit version); here is the link: http://www.microsoft.com/en-us/download/details.aspx?id=40784

Game Screenshots

Game_1 Game_2 Game_3

Game Controls

Arrow Keys for Player Movement

WASD Keys for Camera Movement

Esc for exit

When the game begins, you control the character and move around this small town to find his five lost friends. Some are abandoned outside, and some are hidden inside houses. You have to move forward and push through the fog beyond your vision to find your friends.

In the debug build, press P to enable the debug lines, which point to the positions of the five rabbits, and press O to disable them.

Game_4

 

Game Technical Details

This game is built on the Hui engine, which I created myself. The engine is developed in C++, using DirectX 9 and Lua.

What I learned from the class helped me a lot in creating this game. The knowledge and Lua programming skills around graphics (meshes, materials and textures) helped me organize the game's art assets. The art assets are now well organized and easy to change and edit. Here are my game's art asset folders.

Game_5

 

Meanwhile, what I learned about shaders in class contributed a lot to my game's final look. Since I now fully understand the roles of the vertex shader and the pixel shader, it is easy for me to add more image effects to my game. Here I added a fog effect. In general, adding fog is pretty simple: all you need to do is calculate the fog factor and then use the fog formulas to blend the fog color into each pixel.

Here are the fog formulas I use in the game:

Linear Fog = (FogEnd – ViewpointDistance) / (FogEnd – FogStart)

Fog Color = FogFactor * TextureColor + (1.0 – FogFactor) * FogColor
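To make these formulas concrete, here is a minimal C++ sketch of the same math (hedged: the names are only illustrative, and in the actual game this computation lives in the pixel shader):

#include <algorithm>

struct Color { float r, g, b; };

// fogStart and fogEnd are hypothetical tuning values; viewDistance is the
// distance from the camera to the pixel being shaded.
float ComputeLinearFogFactor( float viewDistance, float fogStart, float fogEnd )
{
    float factor = ( fogEnd - viewDistance ) / ( fogEnd - fogStart );
    return std::min( 1.0f, std::max( 0.0f, factor ) ); // clamp to [0, 1]
}

Color ApplyFog( const Color& textureColor, const Color& fogColor, float fogFactor )
{
    // fogFactor == 1 means no fog (close to the camera), 0 means fully fogged.
    Color out;
    out.r = fogFactor * textureColor.r + ( 1.0f - fogFactor ) * fogColor.r;
    out.g = fogFactor * textureColor.g + ( 1.0f - fogFactor ) * fogColor.g;
    out.b = fogFactor * textureColor.b + ( 1.0f - fogFactor ) * fogColor.b;
    return out;
}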

And here is my final pixel shader of my game:

Game_6

The fog effect part is here:

Game_7

 

I also took advantage of the DirectX sound API to add background music to my game.

At Last

I want to extend my gratitude to our class teacher John Paul and class TA Jamie. This class is great! I really learned a lot about game graphics and also got experience with game-industry style project development. These sleepless nights of working on this project will be the sweetest and most unforgettable memory of my EAE MGS time.

GameEngineering_2: Sprite rendering

Description

  • Draw a 2D “sprite” overlay
  • Draw a sprite that uses a texture “atlas”

Reading Time: 1.0 hours

Coding Time: 5.5 hours

Write-up Time: 1 hour

Total Time: 7.5 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/12/game.zip

Achievement

1. Create the new Sprite class to draw a 2D overlay

In general, a sprite is a 2D object similar to an actor game object, but they differ in the following ways:

A sprite only has a vertex buffer, while an actor has both a vertex buffer and an index buffer. The reason is that a sprite only needs four vertices.

A sprite vertex only needs position and UV coordinates, although it can still share the vertex format with the actor class.

A sprite is drawn with DrawPrimitive rather than DrawIndexedPrimitive.

Based on those differences, here is my sprite class:

11.01

In my World and Graphics components, each component previously had a Vector<Actor> list to store all the actors in the game. Now I add a Vector<Sprite> list to store the sprite game objects.

11.02

For DrawPrimitive, I use the draw type D3DPT_TRIANGLESTRIP. With D3DPT_TRIANGLELIST we would need an index buffer (or duplicated vertices), but here we only need to draw two triangles, so D3DPT_TRIANGLESTRIP is enough. To use D3DPT_TRIANGLESTRIP, the vertices must be defined in a different order, like the following:

11.03

Now the vertex order traces the shape of a “Z”.
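Here is a small sketch of that Z-shaped order (with a hypothetical vertex struct; my real sprite shares the actor vertex format) and the resulting DrawPrimitive call:

struct sSpriteVertex { float x, y, z; float u, v; };

sSpriteVertex quad[4] =
{
    { -0.5f,  0.5f, 0.0f,  0.0f, 0.0f }, // 0: top-left
    {  0.5f,  0.5f, 0.0f,  1.0f, 0.0f }, // 1: top-right
    { -0.5f, -0.5f, 0.0f,  0.0f, 1.0f }, // 2: bottom-left
    {  0.5f, -0.5f, 0.0f,  1.0f, 1.0f }, // 3: bottom-right
};

// With D3DPT_TRIANGLESTRIP these 4 vertices produce 2 triangles, {0,1,2} and {1,2,3}
// (Direct3D flips the winding of every other strip triangle so both face the same way),
// so no index buffer is needed:
// s_direct3dDevice->DrawPrimitive( D3DPT_TRIANGLESTRIP, 0, 2 );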

Final Effects:

11.04

11.05

2. Create the new Sprite “atlas”

By using an atlas, we can make several sprites share one texture, and we can even achieve a kind of simple animation by taking advantage of the atlas.

In my project, you can change the atlas cell by pressing the keys 0 – 9.

11.06 11.07

To achieve this, the UVs of the atlas sprite are changed according to the keyboard input, as shown in the sketch below.
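As a hedged sketch (the cell count and names are assumptions, not my exact code), picking the cell works like this:

// The atlas is assumed to be a horizontal strip of 10 equally sized cells;
// the pressed number key (0 - 9) selects which cell the sprite samples.
void SetAtlasCell( unsigned int i_cellIndex, float& o_uLeft, float& o_uRight )
{
    const unsigned int cellCount = 10;         // assumed atlas layout
    const float cellWidth = 1.0f / cellCount;  // width of one cell in UV space
    o_uLeft = i_cellIndex * cellWidth;
    o_uRight = o_uLeft + cellWidth;
    // The sprite's four vertices then use ( o_uLeft, 0 ) .. ( o_uRight, 1 ),
    // and the vertex buffer is re-filled whenever the selection changes.
}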

Pixel Debugging:

11.08 11.09

 

11.10

Problems I met

1. Use a new vertex format

At first, I thought the sprite vertex only needed two pieces of information, position and UV, so I created a new vertex format called sVertex. Then I learned from the class e-mail that there is no need to create a new vertex format for a simple sprite, so in the end my sprite vertex shares the same vertex format as the actor's.

2. Mysterious bug fixing

Remember that I previously mentioned my camera movement was not smooth. I asked JP and Jamie and tried several solutions, but nothing worked. Tonight I found that my program only updated the camera's position when there was new input. Once I added the camera position update call to the branch where there is no message from the window, the problem was fixed. I still don't fully know why this fixes it. My guess is that it takes a while to check whether there is a new message from the window, so when there is new input the camera update function is not called immediately.

11.11

GameEngineering_2: Diffuse Light

Description

  • Add ambient and directional lighting to your fragment shader
  • Add keyboard controls to change the direction of the light in real time

Reading Time: 1 hour

Coding Time: 2.5 hours

Write-up Time: 1 hour

Total Time: 4.5 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/11/game4.zip

Achievement

1. Add normal map into the mesh file

As my Maya exporter did not yet include normal information, this time I first need to add the normals to my exporter code. Here is the updated exporter code:

10.1

Then I exported the model's mesh file one more time to bring it into my game. Next I need to change the structure of the vertex class and also the s_vertexElements structure by adding a normal variable, which is a float3 array. Right now it looks like this:

10.2

2. Change the shader file

First, the vertex shader needs new arguments for its input and output:

in const float3 i_position_model : POSITION, in const float3 i_normal_model:NORMAL, in const float3 i_color : COLOR0, in const float2 i_uv : TEXCOORD0,
out float4 o_position_screen : POSITION, out float3 o_normal_world: NORMAL, out float3 o_color : COLOR0, out float2 o_uv : TEXCOORD0

Then we need to transform the normal from model space into world space:

10.3

Second, the fragment shader needs new input arguments, just like the vertex shader. Moreover, it needs new light_diffuse and light_ambient variables.

  • diffuse light: To compute the diffuse light value we need two more variables, the diffuse light direction and the diffuse light color. Both are new constants for the fragment shader and they need to be set from my game's code.
  • ambient light: This is also a new constant for the fragment shader and it also needs to be set from the game's code.

Therefore, now I need to add three more constants into the shader:

10.4

To get the diffuse value, I used Lambert's cosine law. The simplest way to use it is like the following:

10.5
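Expressed as plain C++ for clarity (a sketch only; the real version is the HLSL in the screenshot above), the Lambert term is just a clamped dot product:

#include <algorithm>

struct Vec3 { float x, y, z; };

float Dot( const Vec3& a, const Vec3& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Both vectors are assumed to be normalized.
float LambertDiffuseAmount( const Vec3& i_normal_world, const Vec3& i_directionToLight )
{
    // Lambert's cosine law: brightness is the cosine of the angle between
    // the surface normal and the light direction, clamped at zero.
    return std::max( 0.0f, Dot( i_normal_world, i_directionToLight ) );
}

// In the shader, lighting_diffuse is this amount multiplied by the diffuse light color,
// and the result is combined as: color_lit = color_albedo * ( lighting_diffuse + g_light_ambient );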

 

Finally, I combine the diffuse light and the ambient light in the shader with: float3 color_lit = color_albedo * ( lighting_diffuse + g_light_ambient );

OK, now everything is ready except setting the constants from the game's code. I created another builder in my game called lightBuilder; it reads the light Lua file and writes it as a binary file into the target data folder.

10.10

 

Then these constant values (diffuse light color, diffuse light direction and ambient color) are set according to the light binary file. Here is the code:

10.6

 

3. Final result

10.7

You can change the directional light's direction by moving the camera, which is controlled by WASD, like this:

Move down

10.8

 

Move to the right

10.9

If you change the ambient light color, for example ambient = {0.0, 0.2, 0.2 }, the image looks like this:

10.11

Finally this is the PIXEL debug result:

10.12 10.13

Problems I met

This assignment went very smoothly, so does the write-up end here? Of course not!

I want to share the Half Lambert diffuse shader with you. I mentioned this shader before in my Unity3D shader learning posts, and this time I could test it in my game.

Originally, the value of diffuseAmount ranges from -1 to 1. When the value is smaller than zero, the output image can go dark. To avoid losing the object's edges and getting a flat look, I remap diffuseAmount into the range 0 to 1 by adding one line: float hLambert = diffuseAmount * 0.5 + 0.5;.

Half Lambert lighting can most often be seen on character face materials; it was used in Half-Life 2.

Here is the result:

1. Lambert Diffuse Shader

10.11

2.Half Lambert Diffuse Shader

10.14

I have to say, for this SH3 rabbit, the Lambert Diffuse Shader is better… 🙂

———————————————————————————————–

Tonight I tried to add specular light to the game; here is a basic introduction to specular reflection:

Specular Reflection

  • reflection off of shiny surfaces – you see a highlight
  • shiny metal or plastic has high specular component
  • chalk or carpet has very low specular component
  • position of the viewer IS important in specular reflection

I = Ip cos^n(a) W(theta)
I: intensity
Ip: intensity of point light
n: specular-reflection exponent (higher is sharper falloff)
W: gives specular component of non-specular materials
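As a C++ illustration of the idea (a hedged sketch using the reflection-vector form, not a copy of my HLSL), the specular term looks like this:

#include <algorithm>
#include <cmath>

struct Vec3f { float x, y, z; };
static float Dot3( const Vec3f& a, const Vec3f& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// All direction vectors are assumed to be normalized:
//   normal   - surface normal in world space
//   toLight  - from the surface point toward the light
//   toViewer - from the surface point toward the camera
float SpecularAmount( const Vec3f& normal, const Vec3f& toLight,
                      const Vec3f& toViewer, float shininess /* the exponent n */ )
{
    // Reflect the light direction about the normal: r = 2 * (n . l) * n - l
    const float nDotL = Dot3( normal, toLight );
    const Vec3f r = { 2.0f * nDotL * normal.x - toLight.x,
                      2.0f * nDotL * normal.y - toLight.y,
                      2.0f * nDotL * normal.z - toLight.z };
    const float rDotV = std::max( 0.0f, Dot3( r, toViewer ) );
    return std::pow( rDotV, shininess ); // a higher exponent gives a sharper highlight
}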

So I changed the fragment shader again like this:

10.16

Now the game comes out like the following:

10.15

If you want the specular light, you can uncomment this line in my fragment shader: //float3 color_lit = color_albedo * ( lighting_diffuse + g_light_ambient ) + g_light_direction_color * finalSpec; and comment out this line: float3 color_lit = color_albedo * ( lighting_diffuse + g_light_ambient ); and the highlight will appear.

GameEngineering_2: Maya Exporter; more meshes/materials; PIX instrumentation; user settings

Description

  • Create a plug-in for Maya that will export geometry data in your human-readable mesh format
  • Render more objects
  • Add PIX instrumentation
  • Add a way for the player to specify settings for your game

Reading Time: 2 hours

Coding Time: 10 hours

Write-up Time: 1 hour

Total Work Time: 13 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/11/game5.zip

Achievement

1. Maya Exporter

In order to get the correct mesh information from Maya, I need to produce a mesh format that fits my game's MeshBuilder. Here is the code that produces the correct format:

9.3

Then I could export the model from Maya. Here is one of the mesh files:

9.4

Here is the final result:

Capture2

In my game, I use an actor to represent each game object. Each actor gets its corresponding material, texture and mesh file by its name ( Actor.getType() ). Right now there are three game objects: Cube (Knife), Floor (Floor) and Sphere (Rabbit). The three mesh files are: Cube.mesh.lua, Floor.mesh.lua and Sphere.mesh.lua.

2. Pix Instrumentation 

  • D3DPERF_BeginEvent()
  • D3DPERF_EndEvent()

I added these two functions to my Draw() process. Here is the result:

9.1

9.2

You can see that inside the “Draw the mesh” event, it performs “Set the material”, “Set Stream Source”, “Set Indices” and “Draw Indexed Primitive”. This gives a much clearer picture of the game's rendering process and also makes it easier to debug.
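As a hedged sketch, wrapping a draw with these events looks roughly like this (the event name matches the screenshot; the calls inside are placeholders for my actual code):

#include <d3d9.h>

void DrawMeshInstrumented()
{
    // Everything between Begin/End shows up grouped under "Draw the mesh" in PIX.
    D3DPERF_BeginEvent( D3DCOLOR_XRGB( 255, 255, 255 ), L"Draw the mesh" );
    {
        // set the material, SetStreamSource, SetIndices, DrawIndexedPrimitive ...
    }
    D3DPERF_EndEvent();
}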

3. User Setting

In general, to achieve the user settings feature, you need to do Lua work again. What we do here is read the user setting values from a Lua table and then pass them into the graphics class or the window class. But we also need to think more about the player's behavior: what if they do something wrong, like deleting all the values in the file? So besides reading the values from the Lua table, I need to make sure the program does not crash easily (preferably never).

What I did is: when the Lua table has a problem, use the default value for the setting instead of returning a failure. Here is what the code looks like:

9.5

Of course this is only part of the code; this part only checks the width value. In my game, if the user settings contain a wrong value (like a string for the width, or a negative width), the game uses the default setting. Even if the player deletes all the values in the user settings file, or deletes the file itself, the game will not crash.
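Here is a minimal sketch of that fallback idea for a single setting (hypothetical function and key names; my real code handles the other settings the same way):

extern "C"
{
    #include <lua.h>
}

// The settings table is assumed to already be on top of the Lua stack.
unsigned int ReadWidthOrDefault( lua_State* io_luaState, unsigned int i_default )
{
    unsigned int width = i_default;
    lua_getfield( io_luaState, -1, "width" );
    if ( lua_isnumber( io_luaState, -1 ) )
    {
        const lua_Number value = lua_tonumber( io_luaState, -1 );
        if ( value > 0 ) // reject zero or negative widths
        {
            width = static_cast<unsigned int>( value );
        }
    }
    // A missing key, a string, or a bad number silently keeps the default,
    // so a broken or deleted settings file cannot crash the game.
    lua_pop( io_luaState, 1 );
    return width;
}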

Problems I met

1. Converting the vertex format from Maya to DirectX

Here is what I did to convert the vertex format from Maya to DirectX:

  • Change the vertex structure of the Mesh class. In the shader, the color datatype is float with a range of 0.0 – 1.0, while DirectX uses a DWORD (RGBA) to represent color with a range of 0 – 255. Because Maya's color values are also in the range 0.0 – 1.0, there is no need for me to use D3DCOLOR in the Vertex class. For this assignment I changed the structure of the Vertex class to: Vertex( float position[3]; float color[4]; float uv[2] ).
  • Change the UV convention. Maya's coordinate system differs from the default Direct3D behavior; for example, UVs have (0,0) at the lower-left corner. So I convert the UV values from Maya like this: TEXCOORD -> ( u, 1 – v ).

After I did this, the objects rendered, but incorrectly. Then I realized that Maya is a right-handed system and I needed to change the indices, too. To convert the index order correctly, I swap the second and third value of each triangle, i.e. index ( value 0, value 1, value 2 ) -> index ( value 0, value 2, value 1 ). That fixed the problem.
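A small sketch of that fix-up (illustrative names; the real swap happens during export/MeshBuilder processing):

#include <algorithm>
#include <cstddef>

// Maya is right-handed and Direct3D expects the opposite winding, so the
// 2nd and 3rd index of every triangle are swapped.
void FlipWindingOrder( unsigned int* io_indices, size_t i_indexCount )
{
    // i_indexCount is assumed to be a multiple of 3 (three indices per triangle).
    for ( size_t i = 0; i + 2 < i_indexCount; i += 3 )
    {
        std::swap( io_indices[i + 1], io_indices[i + 2] );
    }
}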

2. PrimitiveCountToRender Problem

After I finished the Maya exporter and imported some nice models I found online into the game, I found that only a small part of each model was rendered. At first I thought it was a texture problem, since those models usually have more than one texture. But then I realized that could not be the cause, because a model should render completely even with no texture attached. After using the D3D debugger to look for the problem, I found it came from:

HRESULT result = s_direct3dDevice->DrawIndexedPrimitive(primitiveType, indexOfFirstVertexToRender, 0, vertexCountToRender, indexOfFirstIndexToUse, primitiveCountToRender);

The primitiveCountToRender value was still 12, which is the right number for a cube. But the current object is more complicated than a cube, so that hard-coded value no longer works. To fix this, I take the number of indices and divide it by 3.

3. Maya model size would not change

When I tried to import the Silent Hill nurse model into my game, I found that no matter how I changed the size of the model, the output file still showed the same vertex information. After asking other students, I learned this was because I had only changed the scale of the object, which does not affect the model's vertex positions. Scale is a factor applied when the object is rendered, but my Maya export does not include scale information, so each vertex position stays at its original value. To fix this, I changed the vertex positions of the model and then exported it to my game. Now it works well.

I am also thinking about scale in Unity. The scale affects the size of the object in the game, and this is why scale is one of the three pieces of information in the transform (the others are position and rotation). I assume the original vertex information is still stored in the game; every time the scale is changed, the new vertex positions are calculated by multiplying the new scale by the original vertex positions.

4. Full Screen Black Problem

When I set the game's screen to full size, the screen was all black. Even when I ran the first assignment, the full screen was still black. It took me a while to figure out, until Kehan found that the problem came from the back buffer size. After several tests, we found that the buffer size has to be exactly the same as the current screen's resolution.

GameEngineering_2: Textures

Description

  • Create a TextureBuilder tool to create the texture for your project
  • Your cube and your floor plane must render with different textures

Reading Time: 2 hours

Coding Time: 5 hours

Write-Up Time: 1 hour

Total Time: 8 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/11/game3.zip

Achievement

1. DDS picture produce 

The format of the texture is “.dds”, the DirectDraw Surface format used by DirectX. We could use the DDS tool provided by DirectX, which you can find in the DirectX Utilities folder; it is very easy to produce a DDS picture with it. But here we will do it by coding the TextureBuilder project.

Similar to the builder projects from previous assignments, the TextureBuilder project takes the source path of the picture ( which here is the texture ) file and the target path, which is the data folder. I used JP's TextureBuilder file in my assignment.

For the function D3DXCreateTextureFromFileEx(), one interesting thing is the DWORD Filter argument. There are multiple filters, and they are used for resizing: they apply if you specify a size other than what's on disk, or if you use the default size, don't specify that non-power-of-2 is okay, and the texture is not a power-of-2 size.

For example, if you load a 48×48 texture, the load function will automatically make it a 64×64 texture, since that is the next acceptable power-of-2 size. If you specify a filter, it will use that filter to resize the image. The texture is filtered when loaded, and filtered more when rendered, creating quite a blurry image. If you specify a filter of NONE, it will load the image as-is into the upper-left part of the texture and fill the remainder with black (possibly, and most likely, transparent black if the format has alpha). You'll need to do more work to get accurate texture coordinates, but image quality will be better. Of course, you could just save the image at a power-of-2 size to begin with.
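For reference, here is a hedged example of the call (the pool, format and filter choices below are illustrative, and the path handling assumes a non-Unicode build):

#include <d3dx9.h>

IDirect3DTexture9* LoadTexture( IDirect3DDevice9* i_device, const char* i_path )
{
    IDirect3DTexture9* texture = NULL;
    const HRESULT result = D3DXCreateTextureFromFileEx(
        i_device, i_path,
        D3DX_DEFAULT, D3DX_DEFAULT, // width / height: take them from the file
        D3DX_DEFAULT,               // generate a full mipmap chain
        0, D3DFMT_UNKNOWN, D3DPOOL_MANAGED,
        D3DX_FILTER_TRIANGLE,       // the resize filter discussed above
        D3DX_DEFAULT,               // mip filter
        0,                          // no color key
        NULL, NULL, &texture );
    return SUCCEEDED( result ) ? texture : NULL;
}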

Here is my updated AssetsToBuild file:

8.1

2. Texture Coordinates 

8.2

 

In order to map the texture onto the object, we need to understand UV coordinates. This system maps the texture's pixels to the floating-point range 0.0f to 1.0f. If a texture's width is 256 pixels, the first pixel maps to 0.0f and the 256th pixel maps to 1.0f.

The vertical direction is “V” and the horizontal direction is “U”.

In order to add the new vertex information, I changed the vertex shader arguments and also the vertex format of the mesh file. Here are the new shader and the new mesh file:

8.4

I added a new input argument i_uv and a new output argument o_uv, and in the body of the function I added “o_uv = i_uv;”.

8.3

3. Attach the texture to the object

There are several parts I changed. First is the structure of s_vertexElements[]; the new format looks like this:

8.5

Secondly, when the program reads the binary file, it now reads one extra attribute, the UV value.

Third, set the texture on the Direct3D device.

8.6

Here is my game right now:

8.7

And here is the screenshot of the Pixel Tool:

8.8

What I want to talk about a bit more is texture filtering.

On MSDN, it is described as:

When Direct3D renders a primitive, it maps the 3D primitive onto a 2D screen. If the primitive has a texture, Direct3D must use that texture to produce a color for each pixel in the primitive’s 2D rendered image. For every pixel in the primitive’s on-screen image, it must obtain a color value from the texture. This process is called texture filtering.

When a texture filter operation is performed, the texture being used is typically also being magnified or minified. In other words, it is being mapped into a primitive image that is larger or smaller than itself. Magnification of a texture can result in many pixels being mapped to one texel. The result can be a chunky appearance. Minification of a texture often means that a single pixel is mapped to many texels. The resulting image can be blurry or aliased. To resolve these problems, some blending of the texel colors must be performed to arrive at a color for the pixel.

Direct3D simplifies the complex process of texture filtering. It provides you with three types of texture filtering – linear filtering, anisotropic filtering, and mipmap filtering. If you select no texture filtering, Direct3D uses a technique called nearest-point sampling.

In order to set the texture filter, we use the following function:

HRESULT SetSamplerState(
  [in]  DWORD Sampler,
  [in]  D3DSAMPLERSTATETYPE Type,
  [in]  DWORD Value
);

1. Nearest-Point Sampling ( cost less, poor texture effect )

g_device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_POINT);
g_device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_POINT);

2. Linear Texture Filtering ( between the 1 and 3 )

g_device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
g_device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);

3. Anisotropic Texture Filtering ( cost most, great texture effect )

g_device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_ANISOTROPIC);
g_device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
g_device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 4);

The following show Nearest-Point Sampling, Linear Texture Filtering and Anisotropic Texture Filtering, and you can see the difference.

Nearest-Point Sampling
7.9
Linear Texture Filtering
8.0
Anisotropic Texture Filtering

7.10

Problems I met

1. uv mess problem

After I implemented the texture, the texture on the object was a mess. The reason was that the structure of s_vertexElements had not been changed. After I updated its format, the texture displayed normally.

 

 

GameEngineering_2: Binary mesh file

Description

  • Create a human-readable mesh file format using Lua
  • Create a new MeshBuilder tool
  • Change your game/engine code to load the binary cube and floor meshes from the built files rather than having the data hard-coded

Reading Time: 4 hours

Coding Time: 11 hours

Write-Up Time: 1.5 hours

Total Time: 16.5 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/11/game.zip

Achievement

1. Mesh Format

This is the mesh format I used in my project:

7.1

The first two numbers represent the vertex count and the index count. Each vertex stores its position and color information in a table with no key names. The indices are stored in an integer array, where every three numbers represent one triangle.


2. Lua Function

In the previous project, I used the Lua functions created by JP. Those functions are mostly like “LoadTableValue” and “LoadTable”, meaning they either load a table or load a value from a table. However, that approach does not scale well, especially when there are tables inside tables inside tables ( endlessly… ). If I kept reading Lua tables that way, I would need to write countless read functions, which would be silly. So I created my own Lua table helper class to help the mesh builder read values from the Lua file.

In general, I need three kinds of Lua functions: 1) LoadTable ( load the Lua table from the provided path ); 2) GetToStack ( check whether the variable has been successfully pushed onto the top of the stack ); 3) Get ( get the variable and return it ).

So I created my LuaTableHelper class and implemented those three functions. In particular, I made the Get function a template function.

7.2

The GetToStack function handles both the case where the table you want has a string key and the case where it does not. To get the required table, it needs two different functions:

lua_getfield and lua_rawgeti. If I want to look up a table by key, I can use lua_getfield; if I want to look up a table by position (no key), I can use lua_rawgeti, like this: lua_rawgeti(lua_state, -1, index). The index is the position of the Lua table in order; since Lua arrays are 1-indexed, lua_rawgeti(lua_state, -1, 1) loads the first table inside the Vertices table.
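A minimal sketch of those two lookups together (hypothetical helper name and key; not my exact GetToStack code):

extern "C"
{
    #include <lua.h>
}

// Pushes vertices[ i_vertexIndex ] onto the stack, assuming the mesh table
// is currently at the top of the stack; the caller pops what was pushed.
bool PushVertexTable( lua_State* io_luaState, int i_vertexIndex )
{
    lua_getfield( io_luaState, -1, "vertices" );   // lookup by key
    if ( !lua_istable( io_luaState, -1 ) )
    {
        lua_pop( io_luaState, 1 );
        return false;
    }
    lua_rawgeti( io_luaState, -1, i_vertexIndex ); // lookup by position (1-based)
    return lua_istable( io_luaState, -1 );
}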


3. Write and Read Binary File

Once I have the vertex and index counts and both buffers, I can write them into the binary file, just like this:

7.3

Remember to cast each variable to (char*) when writing.
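As a hedged sketch, the write step amounts to the following (illustrative names; the layout is counts first, then the two raw buffers):

#include <cstddef>
#include <fstream>

void WriteBinaryMesh( const char* i_path,
    const void* i_vertices, unsigned int i_vertexCount, size_t i_vertexSize,
    const void* i_indices, unsigned int i_indexCount, size_t i_indexSize )
{
    std::ofstream file( i_path, std::ofstream::binary );
    // The counts come first so the reader knows how much memory to allocate.
    file.write( reinterpret_cast<const char*>( &i_vertexCount ), sizeof( i_vertexCount ) );
    file.write( reinterpret_cast<const char*>( &i_indexCount ), sizeof( i_indexCount ) );
    // Then the two buffers, cast to char* exactly as noted above.
    file.write( reinterpret_cast<const char*>( i_vertices ), i_vertexCount * i_vertexSize );
    file.write( reinterpret_cast<const char*>( i_indices ), i_indexCount * i_indexSize );
}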

This is my binary file:

7.4

Then in my Mesh class, I can read these binary files and assign the values to my mesh's vertex buffer and index buffer.

7.5

7.6


Problems I met

This time, I would like to describe the whole process of my programming.

Before I started, I listed three problems I expected to meet in this assignment:

1. How to read the lua table from the mesh file?

2. How to put the variables I read from lua file to binary file?

3. Is there any change in mesh class as now it would read the variable from binary file?

In general, most of the problems I met in this assignment came from question 1.

First, I found the biggest problem was how to traverse the Lua table. Here is the solution:

7.7

Stack index -2 holds the key and -1 holds the value. Before you pop the value off the stack, you can process it, for example by storing it in a variable.
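For reference, the standard lua_next traversal pattern looks like this (a generic sketch, not my exact loop; the table to walk is assumed to be on top of the stack):

extern "C"
{
    #include <lua.h>
}

void TraverseTable( lua_State* io_luaState )
{
    lua_pushnil( io_luaState );                // the first "previous key" is nil
    while ( lua_next( io_luaState, -2 ) != 0 ) // pops a key, pushes the next key + value
    {
        // Now index -2 is the key and -1 is the value;
        // read the value here (e.g. lua_tonumber) and store it somewhere.
        lua_pop( io_luaState, 1 );             // pop the value, keep the key for the next lua_next
    }
    // When the table is exhausted, lua_next pops the final key by itself.
}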

After I finished this, an error came out:

1. command … exited with code -1073741819

To solve this problem: 1) first I checked whether the file extension was wrong. Nothing was wrong. 2) Was anything wrong with the file's path? No. After about another hour of checking, I found a strange result: each time I built the project for the first time, this error appeared, but if I built the project again, it disappeared. So there had to be a problem with the build dependencies. And yes, that was it! After I fixed the project dependency order, the problem was solved.

Then another problem showed up, an old friend: LNK2019. This one came from my LuaTableHelper class, because the definition of a template function must be put into the header file rather than the cpp file.

And the last big problem I met, which forced me to stay up all night, was the Lua table reading process. Even though the Lua state was created successfully, the values my program read from the Lua file were still incorrect. Debugging Lua reading code is hard. I finally solved it by using std::cout a lot to print debug information, and found that my program failed to read the vertex tables that have no key. That is when I realized I needed to use the lua_rawgeti function.

This is the debug information which helps me to solve this problem:

7.8

It shows the values read from the Lua table and how many values have been read from the file, making it easy to check whether the values are right and whether any have been missed. In this picture, you can see the position and color information have been read successfully from the Lua file.

GameEngineering_2: Asset list from a file; pre-compiled binary shaders

Description

  • Get the asset list from a single file instead of putting all the file information in the command line information
  • Compile the shaders programs and load the compiled binary shader at run-time rather than compiling the source code.

Reading Time: 2 hours

Coding Time: 4 hours

  • Get the asset list from a single lua file: 1 hour
  • Compile shader programs and load the binary shaders at run-time: 3 hours

Write-Up Time: 1.0 hours

Total Time: 7 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/10/game2.zip

Achievement

1. Get the asset list from a single lua file

I remember that in the first assignment I asked a question about the asset list: what if in the future there are hundreds of assets in my game engine, do I still type all those file names into the Visual Studio command line? What I am required to do in this assignment solves exactly that problem: I use one file to store all the asset information and then read it from a script.

Here is what my program does to achieve this: 1) BuildAssets executes the command line “$(BinDir)AssetBuilder.exe” “$(ScriptDir)AssetsToBuild.lua”, processing the AssetsToBuild.lua file, which collects the information for all assets. 2) Each builder handles its own kind of asset; for example, the shader builder handles the vertex shader and the fragment shader.

My lua file format is like this:

6.0

The reasons I chose this format are: first, I want to separate shaders and materials ( maybe meshes in the future ). Secondly, I think the extensions should also be kept separate, just in case. Thirdly, I keep each shader as one table and let the corresponding builder decide what to do with it. In the Lua function, all I need to do is pass the shader table as the argument:

6.2

As the keys of the table in my Lua file are contiguous, I used pairs in the first loop. Then I changed the code of BuildAsset(); the part I changed is the following:

6.1

The BuildAsset() function will extract the extension information from the shader table.


2. Compile shader programs and load the binary shaders at run-time

I replaced HRESULT result = D3DXCompileShaderFromFile(sourceCodeFileName, noMacros, noIncludes, entryPoint, profile, flags, &compiledShader, &errorMessages, noConstants); with LoadAndAllocateShaderProgram(sourceCodeFileName, reinterpret_cast<void*&>(compiledShader), errorMessages). After doing this, you should also change compiledShader->Release(); into free(compiledShader);, since the buffer now comes from malloc rather than from D3DX.
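In rough terms, loading the pre-compiled shader amounts to the following (a hedged sketch; the class-provided LoadAndAllocateShaderProgram does essentially this with proper error reporting):

#include <cstdio>
#include <cstdlib>

bool LoadCompiledShader( const char* i_path, void*& o_compiledShader )
{
    FILE* file = std::fopen( i_path, "rb" );
    if ( !file ) { return false; }
    std::fseek( file, 0, SEEK_END );
    const long size = std::ftell( file );
    std::fseek( file, 0, SEEK_SET );
    o_compiledShader = std::malloc( size ); // which is why it is later released with free()
    const bool ok = std::fread( o_compiledShader, 1, size, file ) == static_cast<size_t>( size );
    std::fclose( file );
    return ok;
}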


Problems I met

1. Attempt to concatenate local ‘relativePath’ (a table value)

At first, I tried to pass the shader table information into BuildAsset() directly, but then I found that the argument I passed was a table rather than an asset name. This is why I changed the code of BuildAsset() for this assignment.

2.Linker Tools Warning LNK4221

Last time I found a problem with my Render class, so I simply disabled the Render header and cpp file. Then this warning showed up, because “This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library”. So I checked my Render.cpp and found:

6.3

The way to fix this warning is to comment out the second line: #include “Render.h”.

GameEngineering_2: 3D rendering (colorful box and ground plane)

Description

  • Change the vertex from the 2D dimension to the 3D dimension
  • Attach the mesh to the actor class, render two different objects with different meshes
  • Render the object in 3D view rather than the 2D view

Reading Time: 1 hour

Coding Time: 11 hours

  • Make the mesh independent from the graphics class: 2 hours
  • Make the first object (box) rendered in 3D: 4 hours
  • Render two objects: 1 hour
  • Change the project structure: 4 hours

Write-Up Time: 1.0 hours

Total Time: 13 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/10/game1.zip

Achievement

1. Attach the mesh to the actor class

In a game engine, the mesh should be attached to the actor, not to the graphics system itself. In the last assignment, I added the mesh as a data member of the graphics class, which was totally wrong ( even though it still met the requirements of that assignment ). So before I started to render objects in 3D, I first separated the mesh from the graphics class and moved it into the actor class. Here is my Mesh class:

5.4

 

The job of the mesh is to handle the index buffer and vertex buffer; the actor passes the offset to the mesh, which is used by the box controller. Now the structure of my rendering code is like this:

Graphics:

  • Actor List
  • Material (now the material is shared )

Actor:

  • Mesh

When the Graphics Draw() function is called, it will (1) first load each actor's mesh, (2) then clear the scene, (3) begin the scene, (4) render each actor in the actor list, and (5) finally end the scene.
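As a simplified sketch of that order (a free function with illustrative names; the real version lives in my Graphics class and walks its actor list):

#include <d3d9.h>

void DrawFrame( IDirect3DDevice9* i_device )
{
    // (2) clear the back buffer
    i_device->Clear( 0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB( 0, 0, 0 ), 1.0f, 0 );
    // (3) begin the scene
    if ( SUCCEEDED( i_device->BeginScene() ) )
    {
        // (1)/(4) for each actor: bind its mesh's vertex and index buffers, then draw:
        //   i_device->SetStreamSource( ... ); i_device->SetIndices( ... );
        //   i_device->DrawIndexedPrimitive( ... );
        // (5) end the scene
        i_device->EndScene();
    }
    i_device->Present( NULL, NULL, NULL, NULL );
}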


2. Render the box in 3D dimension

Let me show the final effect to you first:

5.1

Really beautiful, right? Here are the things I did to achieve this effect:

  1. Use my own vertex class and change its data from float2 (x, y) to float3 (x, y, z). Then, when I initialize the vertex buffer, I use this vertex class as the data type.
  2. Change the structure of s_vertexElements: (1) change D3DDECLTYPE_FLOAT2 to D3DDECLTYPE_FLOAT3; (2) change the offset from 8 to 12, because there are now 3 float values.
  3. Create the Camera class. The WorldToView matrix and the ViewToScreen matrix are generated by it (see the sketch after this list). Add the camera as a data member of the Graphics class.
  4. Add a GetTransform() function to the Mesh class. The ModelToWorld matrix is generated by it.
  5. Set the constant table values of the vertex shader each time the shader is compiled.
  6. Change the vertex buffer and index buffer of the mesh. The vertexCount of the mesh is now 8 rather than 4, because a box has 8 vertices compared to 4 for a rectangle. Likewise, vertexCountToRender = 8 rather than 4 and primitiveCountToRender = 12 rather than 2.
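Here is a hedged sketch of the two matrices the Camera class produces, using the standard D3DX helpers (the look-at point, field of view, aspect ratio and clip planes below are illustrative values, not necessarily the ones in my code):

#include <d3dx9.h>

void GetCameraMatrices( const D3DXVECTOR3& i_cameraPosition,
    D3DXMATRIX& o_worldToView, D3DXMATRIX& o_viewToScreen )
{
    const D3DXVECTOR3 lookAt( 0.0f, 0.0f, 0.0f );
    const D3DXVECTOR3 up( 0.0f, 1.0f, 0.0f );
    D3DXMatrixLookAtLH( &o_worldToView, &i_cameraPosition, &lookAt, &up );

    const float fieldOfView = D3DX_PI / 4.0f; // 45 degrees
    const float aspectRatio = 800.0f / 600.0f;
    D3DXMatrixPerspectiveFovLH( &o_viewToScreen, fieldOfView, aspectRatio, 0.1f, 100.0f );
}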

Here is the PIX screen shot:

5.2

5.9

And here is the screen shot showing the bottom of my box:

5.65.7


3. Render two different objects

To render two objects, I need the Graphics class to render each actor in the actor list every time Draw() is called. Here is what I did to achieve this:

  1. Create two actors, one for the box and another for the floor. ( Right now I just distinguish them with an int variable named type; I will improve this in the future. )
  2. Add them to the Graphics instance and the World instance.
  3. Every frame, after the World instance updates, call the Draw() function of the Graphics instance.
  4. Graphics gets the index buffer from each actor, sets it on the device, and then uses DrawIndexedPrimitive to draw the objects we want.

Right now, the vertex buffer of floor is like this:

  • vertexData[0].m_x = -5.0f;
    vertexData[0].m_y = -1.0f;
    vertexData[0].m_z = -2.0f;
    vertexData[0].m_color = D3DCOLOR_XRGB(0, 255, 255);
  • vertexData[1].m_x = -5.0f;
    vertexData[1].m_y = -1.0f;
    vertexData[1].m_z = 2.0f;
    vertexData[1].m_color = D3DCOLOR_XRGB(0, 255, 255);
  • vertexData[2].m_x = 5.0f;
    vertexData[2].m_y = -1.0f;
    vertexData[2].m_z = 2.0f;
    vertexData[2].m_color = D3DCOLOR_XRGB(0, 255, 255);
  • vertexData[3].m_x = 5.0f;
    vertexData[3].m_y = -1.0f;
    vertexData[3].m_z = -2.0f;
    vertexData[3].m_color = D3DCOLOR_XRGB(0, 255, 255);


4. Move the camera and box

Control Method:

  • Arrow Up: box moves back; Arrow Down: box moves forward; Arrow Left: box moves left;  Arrow Right: box moves right
  • W: camera moves up; S: camera moves down; A: camera moves left; D: camera moves right

There are two offsets I update every frame: one for the camera, and another for the actor's mesh. This is how I do it in my current code:

5.8


Problems I met

1. Failed to make the vertex shader work

5.9

 

In the end I found the problem was that I had misspelled the constant name when setting the constant table of the vertex shader, but I still have a question about it. While tracking down the problem, I tried to put the matrix values into the vertex shader manually, to test whether the problem came from the matrices or from the shader. After recording the three matrix values by breaking into the program, and entering them into the shader manually, the screen was still black. So why can't I assign the matrix values in the shader manually to make the program run? ( I just wanted to make a test. ) e.g.: uniform float4x4 g_transform_modelToWorld = { 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0 };

2. Flickering objects on the screen

When I first added two actors to the Graphics instance and then called the Draw() function, the screen showed the two objects I wanted, but both of them were flickering. The problem was that I had wrongly attached the graphics to the actor class as a data member, so the render order per frame was like this:

ClearScene -> BeginScene -> Render Box -> EndScene -> ClearScene -> BeginScene -> Render Floor -> EndScene.

You see, ClearScene ran twice each frame; no wonder the objects flickered. Then I realized the structure of my project was totally wrong, so I changed the structure again and attached the actor list to the Graphics class, which I think is much better than the previous structure.

3. Null Index Buffer and Vertex Buffer

After I changed the structure of my code, I found that each actor's index buffer and vertex buffer was NULL. The problem was that the actor's mesh data member was not a pointer. In the line m_renderActorArray.at(i)->Get_Mesh()->CreateVertexAndIndexBuffers(), the mesh I get is not the original one; it is just a copy of the actor's mesh, so the vertex and index buffers were never assigned to the actor's own mesh. After I changed the member to a Mesh* pointer, the problem was fixed.

 

GameEngineering_2: Add a per-material and a per-instance constant; add a generic builder program

Description

  • Add a constant value to the material that modifies the rectangle’s color
  • Change the position of the rendering and make the rectangle be able to be moved by the keyboard input
  • Change the AssetBuilder project by adding GenericBuilder, which build the asset one by one.

Reading Time: 2 hours

Coding Time: 11 hours

Changing Architecture time: 4.5 hours ( included in Coding time )

Write-up Time: 1 hour

Total Time: 14 hours

Download Link: http://blogs.eae.utah.edu/jdong/wp-content/uploads/sites/13/2014/09/game3.zip

Achievement

1. Add the constant to the material

First, some background: the constant table contains the variables used by high-level language shaders and effects. Shader constants are stored in the constant table, which can be accessed through the ID3DXConstantTable interface. Global shader variables can be initialized in shader code; these are initialized at run time by calling SetDefaults.

I can use a handle to set the value of the constant table. Here is the new format of my material file:

Capture4.1

 

In order to store the g_colorModifier value in my material class, I added a new data member called modifierColor. At first I gave it the data type D3DXVECTOR4, then I realized I only need the RGB values, so I changed it from D3DXVECTOR4 to a float array, which is easier to assign. In my code, I did it this way:

Capture4.2

 

So the logic of the whole process is: (1) read the value from the material Lua file, (2) assign the modifier value to the constant of the fragment shader.
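As a hedged sketch of step (2) (the constant name matches my shader; the surrounding names are illustrative):

#include <d3dx9.h>

void SetColorModifier( IDirect3DDevice9* i_device,
    ID3DXConstantTable* i_fragmentShaderConstants, const float i_modifierColor[3] )
{
    // Look the constant up by the name it has in the shader source.
    D3DXHANDLE handle = i_fragmentShaderConstants->GetConstantByName( NULL, "g_colorModifier" );
    if ( handle != NULL )
    {
        i_fragmentShaderConstants->SetFloatArray( i_device, handle, i_modifierColor, 3 );
    }
}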

As the new fragment shader is like this:

Capture4.3

 

Here, g_colorModifier is the constant for the material. I can adjust the color of my game object by changing the modifier color of each material. With the modifier value g_colorModifier = {0.3,0.4,1.0}, the final effect is:

Capture4.3

 

If the modifier value is g_colorModifier = {0.3,0.4,0.0}, the final effect is:

Capture4.4

 

Further Reading:

This time, in order to read the modifier color value for the constant table, I had to read a Lua table nested inside another table. I created two functions to help:

bool LoadTableValues_modifierColor(lua_State& io_luaState);
bool LoadTableValues_modifierColor_values(lua_State& io_luaState);

The mechanism is similar, but how do I make sure the Lua stack stays balanced? I call int top = lua_gettop(&io_luaState); before any Lua stack operation and again at the end of the function. If the two top values are the same, the Lua stack has been handled correctly. This shortened my coding time and made it easy to find errors.


2. Add the per-instance constant to the game object

This is similar to the first part, with only one thing to pay attention to. What the program changes is the position of the mesh, not the position of the game object itself. The game-object position and the render position are now separate concepts! What we move is not the game object; it is its image ( or, you could say, its shadow… ).

For this part, most of what I did was rebuild my game project's architecture. Here is what I added to my project:

Actor class ( it has a position and a render component ).

Vector class ( this is the one I made in Joe's class ).

World class ( it uses a vector list to store the game objects, and it can create and add new game objects to the game ).

Render class ( it uses a vector list to store the game objects and it can render each one by calling its render function ).

When the window gets an input message, my program does this:

Capture4.5

 

Now the rectangle can be moved with: W (Up), S (Down), A (Left) and D (Right).

Capture4.6


3. Add a generic builder program

I added the GenericBuilder project and the BuilderHelper project to my game. GenericBuilder processes its arguments with the function Build( char** i_arguments, const unsigned int i_argumentCount ). This function calls ParseCommandArgumentsAndBuild( i_arguments, i_argumentCount ), and after that function gets the source path and target path we want, it calls Build(const std::vector<const std::string>& optionalArguments).

The whole process is: BuildAssets calls (1) AssetBuilder.exe -> (2) AssetBuilder.exe calls the Lua function BuildAssets -> (3) the source path and target path are passed to GenericBuilder.exe, and “local result, terminationType, exitCode = os.execute( commandLine )” executes the command we build -> (4) the source assets are copied one by one to the target folder.

In order to understand this process more clearly, I changed the code of BuildAssets.lua in two places:

  1. if result then
       print( "Built " .. commandLine )
     end
  2. return true, exitCode

Then the compiler shows the information:

Capture4.7

This is what we used to do manually with the AssetBuilder command line, but now we don't have to any more. Also, GenericBuilder makes it possible to do something else, like extra processing, each time before we copy an asset.


Problems I met

1. Wrongly added the argument in the command line

Even though I knew the order of the GenericBuilder process, I still tried to add the arguments, including the exe file name, the source path and the target path, to the command options of the AssetBuilder project. When I did that, the compiler always showed errors, and I even saw weird things like the source path being cut into two parts after it was assigned. So I changed the BuildAssets Lua file to test whether GenericBuilder could be executed without those command-line arguments. After that, things went well.

2. Misunderstanding the offset

Originally, I reset the offset when there was no input, and then found the rectangle only moved once after I pressed a key. Later I realized this position is not the position of the game object; it is just the position used for rendering, and this render object has no real transform of its own. I think this way of moving the rectangle will be replaced in a future class, because having the object position and the rendering position be different things is not right for a game engine. However, for this assignment, this setup really helped me understand the difference between the two.

3. DoLuaFile error

This error was caused by a Lua syntax error: I wrongly wrote the modifier color value as { 0.0f, 1.0f, 0.4f }, but Lua number literals do not take the C-style “f” suffix. The weird thing was that the compiler did not report the error at first; it only showed up after a while, which confused me for about half an hour before I fixed it.

GameEngineering_2: Vertex & Fragment Shader; Use Lua Script in Project

Description

For assignment 2, I added new functionality to the vertex and fragment shaders, which this time makes each vertex colorful. I also replaced the HelperFunctions in the Tools project with a Lua script whose job is to copy the two shaders into the target folder.

Achievement

1.The function of vertex shader and fragment shader

In short, Vertex shaders are fed Vertex Attribute data, as specified from a vertex array object.  A vertex shader receives a single vertex from the vertex stream and generates a single vertex to the output vertex stream. There must be a 1:1 mapping from input vertices to output vertices.

As for fragment shaders: a fragment shader is a user-supplied program that, when executed, processes a fragment from the rasterization stage into a set of colors and a single depth value.

In assignment 1.1, all the vertex shader does is pass the input vertex information through as output, without any processing and without even the color information. Here is the code of the previous vertex shader:

// Calculate position
{
// Set the “out” position directly from the “in” position:
o_position = float4( i_position.x, i_position.y, 0.0, 1.0 );
// Or, equivalently:
o_position = float4( i_position.xy, 0.0, 1.0 );
o_position = float4( i_position, 0.0, 1.0 );
}

It just passes the input vertex coordinates through as the output, which is stored in o_position. In assignment 1.2, we added a few lines:

// Calculate color
{
// Set the “out” color directly from the “in” color:
o_color = i_color;
}

This stores the color information of the vertex, which is important for this assignment. In the fragment shader, the input color is now passed straight through to the output. Although no further processing is done on the color, this is the first step toward a real shader; in the further-reading section below I will discuss this more. Here is the code:

o_color = float4( i_color.rgb, 1.0 );
// “RGB” = “Red/Green/Blue” and “A” = “Alpha”.
// For now the A value should _always_ be 1.0.

In the Graphics.cpp file, the first thing we added is the sVertex struct, which gains one more field: D3DCOLOR color;

struct sVertex
{
float x, y;
D3DCOLOR color; // D3DCOLOR = 4 bytes, or 8 bits [0,255] per RGBA channel
};

Remember what I mentioned at the beginning: the vertex shader processes each vertex from the vertex stream, which is fed from the vertex buffer. A vertex buffer is a memory buffer that contains vertex data. So in order to store an RGB value (the color information) in each vertex, we need to do the following:

vertexData[0].x = 0.0f;
vertexData[0].y = 0.0f;
vertexData[0].color = D3DCOLOR_XRGB( 255, 0, 0 );

vertexData[1].x = 1.0f;
vertexData[1].y = 1.0f;
vertexData[1].color = D3DCOLOR_XRGB( 0, 255, 0 );

vertexData[2].x = 1.0f;
vertexData[2].y = 0.0f;
vertexData[2].color = D3DCOLOR_XRGB( 0, 0, 255 );

Because we only want to draw one triangle, we only need three vertices; this is why we store 3 RGB values in the vertex buffer. Here is the final result:

ScreenShot_Jinghui_Dong_2

You can see that the colors of the three vertices are, respectively, red ( 0,0 ), blue ( 1,0 ) and green ( 1,1 ), and that the colors blend together toward the center of the triangle. Next is the render information from the DirectX tool PIX.

ScreenShot_Jinghui_Dong

From this picture, you can see the whole process of rendering from the Events Tab:

Clear -> BeginScene -> SetVertexShader -> SetPixelShader -> DrawPrimitive -> EndScene -> Present

In general, each shader goes through the process of declaration, compilation and creation. For example, the vertex shader will first SetVertexDeclaration and set the vertex buffer, then D3DXCompileShaderFromFile, then CreateVertexShader. After all of this, we can use the vertex shader.

In the Mesh tab, you can see the mesh at each stage as it is rendered on the screen. In the PreVS tab, you can see the information for each vertex, including position and color.

What I would like to highlight is the DrawPrimitive() function. This function renders a sequence of nonindexed geometric primitives of the specified type from the current set of data input streams.

  • HRESULT DrawPrimitive(
    [in] D3DPRIMITIVETYPE PrimitiveType,
    [in] UINT StartVertex,
    [in] UINT PrimitiveCount
    );

I experimented with its first argument. There are six primitive types in total: D3DPT_POINTLIST = 1, D3DPT_LINELIST = 2, D3DPT_LINESTRIP = 3, D3DPT_TRIANGLELIST = 4, D3DPT_TRIANGLESTRIP = 5, D3DPT_TRIANGLEFAN = 6. Here we are using D3DPT_TRIANGLELIST, which renders the specified vertices as a sequence of isolated triangles; each group of three vertices defines a separate triangle, and back-face culling is affected by the current winding-order render state. If I change it to D3DPT_LINELIST, it looks like this:

ScreenShot_Jinghui_Dong_3

You see, it is just a line now.


Further reading

1. Vertex Shader

1). Vertex Shader Input: Attributes and Uniforms

2). Vertex Shader Program: it processes each vertex

3). Vertex Shader Output: The position and any data that fragment shader needs to color.

2. Fragment Shader

1). Fragment Shader Input: The user-defined outputs from the last vertex processing stage will be interpolated according to their assigned interpolation qualifiers. ( Further reading about interpolation qualifiers: http://www.geeks3d.com/20130514/opengl-interpolation-qualifiers-glsl-tutorial/ )

2). Fragment Shader Output: The color of the pixel.

In short, the vertex shader decides where on the screen to shade, and the fragment shader decides what color to shade it.

One more thing: if we want to get to know shaders more quickly, I think Unity3D shaders could be a perfect entry point. They contain all the concepts we need to learn, just in a simpler form. Here is a link to one of my articles about Unity3D shaders, where you can see how the fragment shader and vertex shader work together and what happens if we do some processing in the fragment shader: http://blogs.eae.utah.edu/jdong/category/unity3d-shader/. I really want to learn more about shaders.


2.Add Lua Script into the project

In the assignment 1.2 project, we use a Lua script to copy the two shaders into the target folder. I got the Lua library from the example code, and it is used for the Lua script in AssetBuilder.

There are two directions of interaction between a Lua script and a cpp file: the first is C++ calling a Lua function, and the second is Lua calling a C++ function. Before we start, I would like to talk about the Lua stack.

When Lua interacts with C/C++, Lua maintains a stack, and all data exchange happens through this Lua stack. When a C++ function returns values to Lua, it uses the lua_push* functions to push the return values onto the stack, and it returns an int telling Lua how many return values it has pushed.

1). LuaFunctionFromC

Each time C++ calls a Lua function, it needs to push the function's arguments onto the stack. First it gets the Lua function with lua_getglobal( L, <function name> ). Then it pushes the input arguments onto the stack with lua_push*(). Last, it calls lua_call( luaState, argumentCount, returnValueCount );

For example, in EntryPoint.cpp, it calls a lua function called ExamplePrint():

// ExamplePrint()
{
    lua_getglobal( luaState, "ExamplePrint" );
    // This function has a single argument
    const int argumentCount = 1;
    {
        // Function arguments must be pushed onto the stack
        // after the function itself, and in order
        const char* stringToPrint = "Example string to print";
        lua_pushstring( luaState, stringToPrint );
    }
    const int returnValueCount = 0;
    lua_call( luaState, argumentCount, returnValueCount );
}

2). CFunctionFromLua

This time, Lua calls a function in a C++ file. The C++ function has the form int Function( lua_State* io_luaState ). It still pushes its return values onto the Lua stack and returns how many it pushed. When there is an error, we use lua_error( lua_State* L ) or luaL_error( lua_State* L, const char* fmt, ... ).
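A minimal example of that shape (the function name here is made up purely for illustration; my real ones are functions like CopyFile in AssetBuilder.cpp):

extern "C"
{
    #include <lua.h>
}

static int luaExampleAdd( lua_State* io_luaState )
{
    // Arguments arrive on the Lua stack (index 1 is the first argument).
    const lua_Number a = lua_tonumber( io_luaState, 1 );
    const lua_Number b = lua_tonumber( io_luaState, 2 );
    // Push the return value(s) and return how many were pushed.
    lua_pushnumber( io_luaState, a + b );
    return 1;
}

// Registered with: lua_register( luaState, "ExampleAdd", luaExampleAdd );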

So the whole Lua flow in my project is: first EntryPoint.cpp calls the BuildAsset function in AssetBuilder.cpp (C++), then AssetBuilder.cpp calls the AssetBuilder function in AssetBuilder.lua (Lua), and then AssetBuilder.lua calls several functions, for example the CopyFile function, back in AssetBuilder.cpp (C++). This is just like what JP mentioned in the “Use Lua” article: Lua can call a C++ function, which can call Lua again, which can call C++, and so on…

Further Reading

https://github.com/andycai/luaprimer/blob/master/08.md


Problems I met

1. Compare two nil values

This problem came from the GetLastWriteTime() function. The cause was that I returned lastWriteTime to Lua incorrectly. As I said above, when a C++ function returns a value to Lua, it needs to push the value onto the Lua stack and return an int giving the number of return values. So what I needed to do was this:

  • const lua_Number lastWriteTime = static_cast<lua_Number>( lastWriteTime_int.QuadPart );
    lua_pushinteger(io_luaState, (lua_Integer)lastWriteTime);
    return 1;

2. ScriptDir could not be found

Even though I added the new user macro ScriptDir, I did not set it as an environment variable, which is why the compiler said it could not find “ScriptDir”.

3. Could not find “Lua.lib”

This problem came from the dependency order between lua, luac and LuaLib. Of course lua and luac need to depend on LuaLib.

4. Huge problem with VS 2013 installer

When I tried to launch the VS 2013 installer, it always crashed, saying “it encountered a user-defined breakpoint.” No matter which installer I downloaded, the problem persisted. Then I checked the Windows Event Log and found the error might be related to KernelBase.dll, which is very important for the normal operation of the OS. So I used an online tool to replace this DLL, and then something more horrible happened: none of the programs on my computer could find their entry points when I tried to run them. Luckily, I fixed this extra problem with the command line sfc /scannow, which repairs incorrect, corrupted and changed system files. Now my computer works well, except that I still could not launch the VS 2013 installer. At least I learned a useful command to save my OS and got to know an important Windows DLL.

Update: JP told me the solution to this problem; I needed to change an option in the DirectX Control Panel. The reason I could not launch the VS 2013 installer is that I had set DirectX to debug mode. Anyway, this problem is fixed. 🙂