Parallax Mapping

Parallax Mapping is an enhanced version of Normal Mapping that significantly boosts a textured surface's detail and gives it a sense of depth. In Normal Mapping, we calculate a per-fragment normal instead of using the interpolated surface normal and feed it to the lighting algorithm. The lighting is then what gives the surface its detail. It does a good enough job most of the time without increasing the vertex count.

Another approach is Displacement Mapping, which uses a height map to actually offset the vertices. This gives very good results and keeps the depth visible even when the viewing angle is close to the surface, but that detail comes at the cost of a high vertex count. Parallax Mapping does not offset the initial geometry; instead it offsets the texture coordinates, which are then used to access the textures containing the diffuse colors and normals.
[Image: parallax mapping with scaled height]

Considering the diagram above, instead of directly sampling at texture coordinate 'A', we calculate an offset using the fragment-to-eye vector and the height map. Parallax mapping aims to offset the texture coordinates at fragment position 'A' in such a way that we end up with the texture coordinates at point 'B'. We then use the texture coordinates at point 'B' for all subsequent texture samples, making it look like the viewer is actually looking at point 'B'.
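
As a rough illustration of that lookup, here is a minimal CPU-side sketch of the offset in C++ using glm for the vector math; the function name, the height sample and the heightScale parameter are placeholders of mine, not the engine's actual shader code.

```cpp
#include <glm/glm.hpp>

// Sketch of the basic parallax offset: shift the texture coordinates at 'A'
// along the tangent-space view direction, scaled by the sampled height,
// to approximate the coordinates at 'B'.
glm::vec2 ParallaxTexCoords(const glm::vec2& texCoordsA,
                            const glm::vec3& viewDirTangent, // fragment-to-eye, tangent space, normalized
                            float sampledHeight,             // height map value at 'A', in [0, 1]
                            float heightScale)               // artist-tunable scale, e.g. 0.05f
{
    // Dividing by z makes the shift larger at grazing angles ("scaled height").
    glm::vec2 offset = glm::vec2(viewDirTangent) / viewDirTangent.z
                     * (sampledHeight * heightScale);
    return texCoordsA - offset;
}
```
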
[Image: vertex shader]

[Image: fragment shader]
When I implemented Normal Mapping, I created a TBN (Tangent, Bitangent, Normal) matrix that transforms any vector from tangent space to world space. I then passed the TBN matrix to my fragment shader so that I could transform the interpolated normals to world space for each fragment. This way I did not have to touch my lighting algorithm, and the matrix multiplication happens in the fragment shader.

In the case of Parallax Mapping, we instead want to transform the scaled fragment-to-eye vector to tangent space and do all the calculations in tangent space. By transforming the fragment-to-view direction vector to tangent space, the transformed 'P' vector has its x and y components aligned with the surface's tangent and bitangent vectors. As the tangent and bitangent vectors point in the same direction as the surface's texture coordinates, we can take the x and y components of 'P' as the texture coordinate offset, regardless of the surface's orientation. One more benefit of doing this is that we can now transform the light-direction and view-direction vectors to tangent space by multiplying them with the transpose of the TBN matrix in the vertex shader, and let the results be interpolated for each fragment. This saves us the matrix multiplications in the fragment shader, which is a nice optimization since the vertex shader runs considerably less often than the fragment shader.
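
Here is a minimal sketch of that idea in C++ with glm; the function and variable names are mine, and it assumes the TBN basis is orthonormal (otherwise the inverse would be needed instead of the transpose). It is not the engine's actual vertex shader, just the math it performs per vertex.

```cpp
#include <glm/glm.hpp>

// Build the TBN matrix from the per-vertex tangent, bitangent and normal and
// use its transpose (== inverse for an orthonormal basis) to move the view and
// light directions into tangent space once per vertex, so the fragment shader
// receives already-transformed, interpolated vectors and needs no matrix math.
struct TangentSpaceDirs { glm::vec3 toEye; glm::vec3 toLight; };

TangentSpaceDirs ToTangentSpace(const glm::vec3& tangentWS,
                                const glm::vec3& bitangentWS,
                                const glm::vec3& normalWS,
                                const glm::vec3& fragPosWS,
                                const glm::vec3& eyePosWS,
                                const glm::vec3& lightPosWS)
{
    const glm::mat3 tbn(tangentWS, bitangentWS, normalWS); // columns: T, B, N (world space)
    const glm::mat3 worldToTangent = glm::transpose(tbn);  // transpose == inverse when orthonormal

    TangentSpaceDirs result;
    result.toEye   = worldToTangent * glm::normalize(eyePosWS   - fragPosWS);
    result.toLight = worldToTangent * glm::normalize(lightPosWS - fragPosWS);
    return result;
}
```
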
[Image: Normal TBN]

[Image: parallax]

The Parallax Mapping implemented this way is called Normal Parallax Mapping, and it gives more or less valid results only when the height map is smooth and doesn't contain a lot of small details. Also, at large angles between the vector to the camera 'V' and the normal 'N', the parallax effect won't be valid. We can also choose to leave the x and y components as they are; such an implementation is called Parallax Mapping with Offset Limiting. Parallax Mapping with Offset Limiting reduces the number of weird results when the angle between the vector to the camera 'V' and the normal 'N' is high. So if you offset the original texture coordinates by the x and y components of vector 'V', you get new texture coordinates that are shifted along vector 'V'.
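
For comparison, here is a sketch of the two variants side by side, using the same assumed glm-based helpers as above; only the divide by z differs.

```cpp
#include <glm/glm.hpp>

// Normal Parallax Mapping: divide by z, so the shift grows at grazing angles.
glm::vec2 OffsetNormalParallax(const glm::vec3& viewDirTS, float height, float scale)
{
    return glm::vec2(viewDirTS) / viewDirTS.z * (height * scale);
}

// Parallax Mapping with Offset Limiting: use x and y as they are, so the shift
// can never exceed height * scale, which tames the artifacts when the angle
// between 'V' and 'N' is large.
glm::vec2 OffsetLimited(const glm::vec3& viewDirTS, float height, float scale)
{
    return glm::vec2(viewDirTS) * (height * scale);
}
```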

Once I am done implementing this, I will try to analyze these two scenarios and find out when exactly Normal Parallax Mapping gives the best result and when we should use Parallax Mapping with Offset Limiting. We can also take multiple samples along the view vector instead of only one; that technique is called Steep Parallax Mapping and is another of my stretch goals.

Final Project

As part of our final assignment, we were asked to make any game of our choice that could be built in three weeks. We used free assets that we found online and used our Texture Builder, Mesh Builder, Material Builder, etc. to bring them into our project. I made a racing game. Well, let's not call it racing, since there are no opponents, you cannot control your speed or acceleration, and there are no turns, just a straight road with no end. All you have to do is dodge the incoming traffic, but even that can be difficult at times, so the game has power-ups you can use to help you: you can go invisible and pass through all the traffic without worrying about colliding with other cars.

Other than the trouble of finding good-quality models with proper textures in a format we could use, I also struggled to make my game feel like a living environment. In my game, the player's car is always moving, but building a level of infinite length wasn't the way I wanted to go. So I implemented a vertex shader in which I can change the UV offset dynamically. I am using three planes (quads) with this texture-scrolling shader: one for the road and two more for the landscape on the left and right sides of the road. Earlier I had speed implemented in my game and was changing the texture scroll rate based on the player's speed to make it look like the car is accelerating, but this became a problem when I started adding other cars to the scene. The other cars move in the direction opposite to ours, so if our car accelerates, the other cars' velocity relative to ours keeps changing too. Adjusting the other cars' velocity and the texture scrolling speed based on the player's speed is not that difficult to implement, but since this was a stretch goal, I made the texture scroll at a constant speed for now.
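
The idea behind the scrolling, in a hedged C++ sketch: accumulate an offset each frame and feed it to the shader as a uniform. The struct, the scroll speed values and the commented-out SetUniform call are placeholders, not the engine's actual API.

```cpp
// Sketch of the texture-scrolling update: accumulate a UV offset every frame
// and hand it to the vertex shader, which adds it to the mesh's UVs.
struct ScrollingPlane
{
    float scrollSpeed; // UV units per second (constant for now, not tied to the car's speed)
    float uvOffsetV;   // accumulated offset along V
};

void UpdateScroll(ScrollingPlane& plane, float deltaSeconds)
{
    plane.uvOffsetV += plane.scrollSpeed * deltaSeconds;
    // Keep the value in [0, 1) so precision doesn't degrade over a long session;
    // with a Repeat/Wrap sampler the visual result is unchanged.
    if (plane.uvOffsetV >= 1.0f)
        plane.uvOffsetV -= 1.0f;
    // e.g. material.SetUniform("g_uvOffset", 0.0f, plane.uvOffsetV); // hypothetical setter
}
```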

I also added a third plane at the end of the road plane to give the illusion that we are driving toward some location. Ideally the size of this plane should change over time, since we are constantly moving toward our destination, but I kept it as it is; after all, we are driving on an infinitely long road in a never-ending game 🙂

There is one more quad for the sky, which is set up to work like the default clear color DirectX uses when it doesn't have anything to render. We have two car models that I am using as NPCs; they spawn far from the player and come toward the player at a random speed. If they collide with you, your car turns red as feedback. There is no life bar as of now, hence there is no win or lose state. If you use the invisibility power, I actually change the alpha float value that gets multiplied into the final alpha channel of the output color, so for the duration of the power-up the car is rendered translucent. By default we had depth-buffer writes set to false in our transparent shader, which caused problems rendering my partially transparent car model. Some parts of the car were visible through other parts, as if the model was ignoring the depth buffer completely. Actually, it wasn't ignoring it; it just wasn't writing to it, which caused that see-through artifact. Setting depth-buffer writes to true did the trick, at least for the camera perspective I was using.
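
In raw API terms the fix boils down to turning depth writes back on for that effect; a small sketch, assuming GL headers and an active context or a valid IDirect3DDevice9* (in the engine this is really driven through the effect's render states):

```cpp
// Assumes GL headers are included and a current rendering context exists.
void EnableDepthWrites()
{
    // Let the translucent car write its depth values so its front-facing parts
    // correctly occlude the parts behind them.
    glDepthMask(GL_TRUE);

    // Direct3D 9 equivalent:
    // pDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
}
```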

We were warned when we started the class that all the design decisions we make during the semester, including how we choose to implement any given feature, would determine how difficult it is to make a game using the engine. Fortunately, my project was flexible enough, which made it easy to scale and reuse. I did implement a smart pointer, which made it easy to manage object lifetimes. The objects in my game never rotate and are always axis-aligned, so I don't have to use any fancy OBB collision-detection algorithm. As part of our assignments throughout the semester, we paid special attention to our architecture and to how each project communicates with the other projects in our solution. We discussed and implemented less obvious but optimized techniques to import and build assets and use them in our project. We learned how a 3D world can be projected onto a 2D screen, and we discussed a few things we can do to make the scene more realistic: how meshes, materials, textures, and shaders collectively help us make the scene look close to reality, and how we can streamline our asset-building pipeline so that it stays easy and flexible to add assets while still using an optimized format in the game. We also discussed how vertex, diffuse, and specular lighting work.

Good coding practice means understanding how to implement any given feature so that the code is extendable, not redundant, well encapsulated, well organized, and optimized. All in all, programmers should have at least some insight into how to architect the project: which design pattern to use, how the module is going to be used, and in what ways the current feature might be extended in the future. We should consider all of this while making design decisions. I think all of this combined defines what good programming practice is.

[Image: final game]

Download the DirectX version

Download the OpenGL version

Introducing Textures

In this assignment, we added a TextureBuilder, which gave us the ability to use textures in our project. Before that we only had a color property in our shader that we multiplied with every vertex, plus the vertex colors we assigned while building the mesh. To make textures work, we now have UVs in our vertex structure, which are used to figure out what part of the texture is mapped onto the mesh. A UV is just a struct of two floats with values ranging from 0 to 1. Our vertex shader simply fetches the UV values from each vertex and passes them to the fragment shader as they are. The fragment shader then fetches the data from the texture with the help of the passed UVs and a sampler. The sampler is another uniform we added in our fragment shader, and it is required in order to render any texture. Texture sampling is the act of retrieving data from the texture.
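
For reference, a sketch of what such a vertex layout can look like; the field names and types here are illustrative, not the project's exact vertex struct.

```cpp
#include <cstdint>

// Illustrative vertex layout after adding texture support: position, a
// per-vertex color, and the new UV pair. The UVs pass through the vertex
// shader untouched and are used by the fragment shader, together with the
// sampler, to fetch texels.
struct sVertex
{
    float   x, y, z;     // position
    uint8_t r, g, b, a;  // per-vertex color, 0-255
    float   u, v;        // texture coordinates, typically in [0, 1]
};
```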

Typically, when associating texture coordinates with the vertices of a mesh, the values are constrained to [0, 1] along both axes. However, this is not always the case: negative texture coordinates, or coordinates greater than 1, can also be used. When coordinates outside the [0, 1] range are used, the addressing mode of the sampler comes into effect. There are a variety of behaviors that can be selected, like Clamp-to-Edge, Clamp-to-Zero, Repeat, Mirror, etc.
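
For example, selecting the Repeat (wrap) addressing mode looks roughly like this on each API; this is a sketch that assumes a bound texture and a valid device/context, which the engine's sampler setup normally takes care of.

```cpp
// Assumes GL headers are included and a texture is currently bound to GL_TEXTURE_2D.
void UseRepeatAddressing()
{
    // Coordinates outside [0, 1] wrap around (GL_REPEAT); GL_CLAMP_TO_EDGE or
    // GL_MIRRORED_REPEAT select the other behaviors.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    // Direct3D 9 equivalent on the sampler stage:
    // pDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
    // pDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
}
```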

In order to make it all work in our project, we also had to change our MaterialBuilder to accept a texture path and a sampler uniform name. I decided to add a Texture inner class inside my Material class itself, since a Texture doesn't hold much meaning on its own without the material. My material file and binary data look like this now:

[Images: material file and material binary]

I opted to have another table of Texture data, just like the Uniform data. It contains two entries, "TexturePath" and "Sampler". I am storing the two strings just after the effect path string. As you can see, in our binary file we have 01 (in red), which is the number of textures in this material, followed by two strings separated by 00 (null terminators). The two strings are the texture path (in yellow) and the sampler name (in green). My runtime loading code just reads the number of entries in the Texture data and runs a loop to read the two strings. We don't need to keep these two strings around, as we only need them to load the texture and to get the uniform handle (or uniform ID).
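
A hedged sketch of that runtime read as plain pointer arithmetic over the loaded file buffer; the variable names and the commented-out LoadTexture/GetSamplerHandle calls are placeholders for the engine's real functions.

```cpp
#include <cstdint>
#include <cstring>

// After the effect path, the material binary stores a 1-byte texture count
// followed by pairs of null-terminated strings: texture path, sampler name.
void ReadTextureData(const char*& cursor) // points just past the effect path's 00 byte
{
    const uint8_t textureCount = *reinterpret_cast<const uint8_t*>(cursor);
    cursor += sizeof(uint8_t);

    for (uint8_t i = 0; i < textureCount; ++i)
    {
        const char* texturePath = cursor;
        cursor += std::strlen(texturePath) + 1; // skip the string and its null terminator

        const char* samplerName = cursor;
        cursor += std::strlen(samplerName) + 1;

        // The strings are only needed here, at load time:
        // LoadTexture(texturePath);      // hypothetical
        // GetSamplerHandle(samplerName); // hypothetical
    }
}
```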

For the transparent fragment shader, the RGB and alpha values sampled from the texture now get multiplied by the color and alpha uniform values that we provide in our material. Here's how the final result looks.

[Image: DirectX]

[Image: OpenGL]

Download the DirectX version

Download the OpenGL version

EAE day

At last, we got to showcase our game on a big screen in front of everybody, including industry professionals. There was only one problem: all the things we had worked on suddenly stopped working. There were some synchronization issues between the blind and deaf players' systems, and the networking problems were still plaguing us. Then we found out that the game ran fine when we played it from the Unity editor, but not when we started it by directly running the executable. So we installed Unity on our systems and ran the game from the editor.

Overall, the whole event went fine. We ended up explaining all the rules to every player once again, but other than that it was good. We got plenty of positive responses, and people seemed genuinely excited about our game, which is a very good sign.


New Tutorial

This week we started working on our new tutorial based on the feedback we got from Rockwell Collins. The only problem was that we had way too much to implement and not enough time. We tried our best, but midway we realized it wasn't going to make it in time. So we came up with a fallback plan to implement just the important parts. But then, while we were working on it, we realized that it doesn't make sense to implement only part of it and leave the rest; if we did that, we would end up with the same problem we had earlier and would have to explain all the rules to the players. So we thought, what the heck, let's just finish it once and be done with it for good. We stayed until 5 am and implemented all the tutorial states we had planned for both the blind and the deaf character.


Introducing materials

In this assignment, we created a new class called Material and a new MaterialBuilder so that we can use the same effect files with different color or alpha values (via uniforms) without hard-coding the values in the shader files. Now we can have two game objects in our game using the same mesh and the same effect files (shaders) but different materials, say one BlueTransparentMaterial and another GreenTransparentMaterial.

Just like all other assets, we created a human-readable material file and used the MaterialBuilder tool to build a binary material file which we can load and use at run time. Here is an example of my transparent material Lua file.

[Image: Transparent material Lua file]

My human-readable material file contains the effect path and the UniformData in table format. UniformData consists of 'name', which is the name of the uniform variable; 'value', which is the uniform variable's value; and 'shaderType', which specifies the type of shader (vertex or fragment) the uniform variable is associated with. There can be more than one uniform variable; for example, in Transparent.mat we have two, color and alpha, while Standard.mat has only color. That's the reason we use a table to store UniformData. At run time, while loading the material, the program reads the uniform name from this material structure and gets the appropriate handle, which we can then use to control the associated shader property. As of now, we just assign the value taken from the material structure (or Lua file) to the uniform shader variable.

[Image: OpenGL Debug material binary]

Our MaterialBuilder is responsible for generating the binary file from the above-mentioned material Lua file. Here is how my OpenGL Debug and DirectX Debug binary files look.

[Image: DirectX Debug material binary]

The first byte, in red, is the uniform set count. For Transparent.mat it is 2 in both OpenGL and DirectX. It is followed by the effect path char array in green; note the 00 byte at the end terminating the string. After the effect path, I am storing my material struct array (blue and orange). In the OpenGL version the GLint handle is 4 bytes, but in the DirectX version the uniform handle is of type const char pointer (D3DXHANDLE), which takes 8 bytes since our DirectX build targets a 64-bit architecture. That's why our material structure is 28 bytes in total in OpenGL, while in DirectX it is 32 bytes.

[Image: material parameters struct]

Other than the uniformHandle, we have a ShaderType enumerator of 4 bytes, a float array of size 4 (4 x 4 = 16 bytes), and a 1-byte uint8_t for the number of values. In OpenGL that takes 25 bytes in total; the rest is padding, making it 28 bytes. In DirectX it is 29 bytes (4 more than OpenGL) plus 3 bytes of padding, making it 32 bytes in total. We can see the padded and uninitialized bytes as CD CD CD... in our binary file. In the first material structure (blue), after 3 floats the fourth float is CD CD CD, since it is uninitialized (the color takes only 3 floats). In our second structure (orange), only the first index of the float array is initialized and the remaining three are CD CD..., meaning uninitialized, since we need only one float for storing the alpha value.
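
A sketch of that per-uniform structure in C++; the type and macro names are illustrative, but the key point is the platform-dependent handle size plus compiler padding, which is what produces the 28-byte vs. 32-byte layouts.

```cpp
#include <cstdint>

#if defined(PLATFORM_D3D)                // illustrative platform macro
    typedef const char* tUniformHandle;  // D3DXHANDLE: 8 bytes on a 64-bit build
#else
    typedef int tUniformHandle;          // GLint: 4 bytes
#endif

enum eShaderType { VertexShader, FragmentShader }; // stored as 4 bytes

struct sMaterialUniform
{
    tUniformHandle uniformHandle; // 4 bytes (OpenGL) or 8 bytes (Direct3D)
    eShaderType    shaderType;    // 4 bytes
    float          values[4];     // 16 bytes; unused entries stay uninitialized (CD CD ... in Debug)
    uint8_t        valueCount;    // 1 byte; the remainder of the struct is padding
};
// sizeof(sMaterialUniform) == 28 on the OpenGL build and 32 on the 64-bit Direct3D build.
```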

Also, I used the 4 bytes (OpenGL) or 8 bytes (DirectX) that we allocated for the uniformHandle to store the offset of the uniform name char array. We can do that because we don't need the uniform handle while building the project; at run time, the offset values stored in uniformHandle are replaced by the actual handles. As you can see, the first byte of my first material struct (blue) is 40, which is the offset of the uniform name stored after the struct array, in this case g_color. For the second struct (orange) the value is 48, which points to the string g_alpha. By doing this we reuse some of the space that we have to allocate for the uniform handles anyway, and we also don't have to call strlen() while loading the binary file, since we already know the offsets.
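
A hedged sketch of that load-time fix-up, reusing the sMaterialUniform sketch above; GetUniformHandleByName is a hypothetical stand-in for the real glGetUniformLocation / D3DX constant-table lookup.

```cpp
#include <cstdint>

tUniformHandle GetUniformHandleByName(const char* uniformName); // hypothetical lookup

void ResolveUniformHandles(char* fileBuffer, sMaterialUniform* uniforms, uint8_t uniformCount)
{
    for (uint8_t i = 0; i < uniformCount; ++i)
    {
        // MaterialBuilder wrote the byte offset of the name into the handle field.
        const uintptr_t nameOffset = (uintptr_t)uniforms[i].uniformHandle;
        const char* uniformName = fileBuffer + nameOffset; // e.g. "g_color" at offset 0x40

        // Overwrite the offset with the real handle; no strlen() walk is needed.
        uniforms[i].uniformHandle = GetUniformHandleByName(uniformName);
    }
}
```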

[Image: DirectX Release material binary]

There is no difference between the Debug and Release versions of the binary file on any platform, except that the CD CD uninitialized bytes are replaced with zeros.

Controls for my project are: WASD to move the Camera. Arrow keys to move the helix object.

[Image: game]

Click here to download DirectX version

Click here to download OpenGL version

Maya MeshExporter and Alpha-Blending

In this assignment, we added a MayaMeshExporter project to our solution so that we can convert a Maya mesh to the human-readable format of our choice and then use our MeshBuilder to create a binary mesh file from it. Other than that, we also changed our Effect.lua file to include render states, created one more effect (TransparentEffect), and implemented alpha blending.

The MayaMeshExporter project does include windows.h, but it is still an independent project: it does not need any of our libraries to build, and none of the other projects need MayaMeshExporter to be built before them. The sole purpose of MayaMeshExporter is to provide Maya with a plug-in capable of exporting a Maya mesh into the format we want.

[Image: Maya Plug-in Manager]

Here is a screenshot showing the Maya Plug-in Manager with our plug-in eae6320_rathore_ankur_DEBUG.mll loaded. We had to set up an environment variable so that Maya can find the plug-in location. We can also use Visual Studio's MayaMeshExporter project to debug our plug-in by attaching the maya.exe process to the Visual Studio debugger. Here is a screenshot showing the breakpoint being hit when we try to export a mesh from Maya using our plug-in.

[Image: debugging the Maya plug-in in Visual Studio]

We changed our Effect.lua file to include render states in addition to the paths of the vertex and fragment shaders. This makes sense because each effect uses different shaders and thus depends on different render states to work properly. Although it's not practical to change render states with every effect, due to the performance hit we take for changing render states, we are trying to keep things simple, so it's fine for our project. In reality, we would want to group objects with similar effects (materials) so that we can render all of them in one go without changing render states. This is called batching: one batch renders all the triangles that share the same render state.

[Images: render states in Effect.lua and the render-state bits]

The render states are string-to-bool key-value pairs in our Effect.lua file, but in our binary file we don't have to store them as bools. We can store the true/false information for each state in a single bit, so we created a uint8_t renderstates variable in our Effect class and EffectBuilder project. Each bit of the renderstates variable stores either 1 or 0, indicating true or false for a given state. I created an enum RenderStates that binds each state to a bit, which I use to query the renderstates variable using bitwise OR and AND operations.
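
A sketch of that bit layout and the queries; the exact names are mine, but the bit order follows from the values discussed below (StandardEffect with transparency=false, depthTest=true, depthWrite=true packs to 06, TransparentEffect to 03).

```cpp
#include <cstdint>

// Each render state maps to one bit of the uint8_t stored in the effect binary.
enum RenderStates : uint8_t
{
    AlphaTransparency = 1 << 0,
    DepthTesting      = 1 << 1,
    DepthWriting      = 1 << 2,
};

// AND to query a single bit at run time.
inline bool IsRenderStateEnabled(uint8_t renderStates, RenderStates state)
{
    return (renderStates & state) != 0;
}

// OR to set a bit while building the effect binary.
inline uint8_t EnableRenderState(uint8_t renderStates, RenderStates state)
{
    return static_cast<uint8_t>(renderStates | state);
}
```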

 

[Image: StandardEffect binary; the first byte, inside the square, is the 8-bit render-state value]

[Image: TransparentEffect binary; the first byte, inside the square, is the 8-bit render-state value]

Here I have shown two binary files, one for the standard effect and another for the transparent effect. I am storing the renderstates variable as a single byte at the beginning of the file. Since the other two values I am storing are strings, I have to calculate the string length to find the offset and then jump my pointer by that offset to get to the next value. If I stored the renderstates byte at the end of the file, I would have to calculate the string length two times, so I decided to store it at the beginning. If you look at my human-readable StandardEffect.lua file above, the render states are false, true, true, and in my StandardEffect binary file the value of the first byte is 06, which means 0000 0110 = FFFF FTTF.

In the standard effect, transparency is false but depthTest and depthWrite are true. Since the standard effect represents a typical opaque object, its alpha-blend state should be false, but it is required to do the depth test to decide whether it is hidden by another opaque object, and it is also required to write to the depth buffer since it has to update the depth values. On the other hand, the transparent effect contains the render-state value 03, which means 0000 0011: transparency is true since it is a transparent object, and depth testing is true too because it has to check the depth buffer, but depth write is false because a transparent object is not supposed to write to the depth buffer.

Alpha blending is the process of mixing the pixel color of an object (the source) with the color of the same pixel from the object behind it (the destination), based on the source's alpha value. We render the opaque objects first, since opaque objects always hide whatever is behind them; rendering them first makes the process more efficient, and then we render the transparent objects in back-to-front order. Back-to-front order matters because we want the object behind to already be in the buffer as the destination while the object in front is the source.
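
In API terms, the standard over blend looks roughly like this; a sketch assuming GL headers and an active context (or a valid device), since in the engine it is driven by the alpha-transparency render-state bit.

```cpp
// Assumes GL headers are included and a current rendering context exists.
void EnableAlphaBlending()
{
    // result = srcColor * srcAlpha + dstColor * (1 - srcAlpha)
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    // Direct3D 9 equivalent:
    // pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    // pDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
    // pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
}
```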

[Image: game]

Click here to download DirectX version

Click here to download OpenGL version

Showcase at Rockwell Collins

I was doing my internship at Rockwell Collins when we got the opportunity to showcase our game there. We got a lot of useful feedback, and this was the first time we actually got to evaluate what we had done and what the people who play-tested our game thought about it. Among the plethora of suggestions we received, one was that our tutorial level is not intuitive enough and does not do a good job of either explaining the game rules or demonstrating what the player needs to do to make any progress in our game. There were also a lot of questions about our game level not being clear about the direction the players are supposed to go, or how exactly they complete the game objectives.


Matrix Transformations and Depth Test

 

[Image: the game rendered in 3D]

This is how the game screen looks now. We replaced our square.mesh with a 3D cube.mesh, but that's not all. As you can see, the box is now rendered in perspective space, as opposed to a 2D space without depth. By perspective I mean the sense of the object being in a 3D space, where the sides of the box farther away appear a little smaller than the sides closer to the camera. The same can be said about the plane.

Before, we were rendering our meshes directly in screen space, where the minimum was -1 and the maximum was +1, covering the complete screen at any aspect ratio. Now we give our meshes (and the camera) positions in the world coordinate system and use matrix transformations to transform each mesh or GameObject to the view coordinate system and then to the screen coordinate system. The combination of these three matrices is often called the model-view-projection (MVP) matrices. It is easiest to think of the model-view transformation as looking at the scene or object from the perspective of the camera, with properties like how near and far you can see, the field of view, the aspect ratio, etc. Then, in order to view this 3D information on a 2D screen, we need to project the result of the model-view transformation onto the screen. We use the view-to-screen (projection) transformation for this, which gives us a 2D image with perspective (depth) on our screen. One more benefit of using the MVP matrices is that we can calculate the view-projection product once, outside the per-object loop, and then reuse it for each object, which is a little more optimized.
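
A sketch of building those matrices with glm; the field of view, near/far clip values and names are placeholders of mine (the engine uses its own math code), this just shows the idea.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model: the object's placement in world space.
// View: the world as seen from the camera.
// Projection: perspective mapping of the view volume onto the screen.
glm::mat4 BuildMVP(const glm::vec3& objectPositionWorld,
                   const glm::vec3& cameraPositionWorld,
                   float fieldOfViewDegrees, float aspectRatio)
{
    const glm::mat4 model = glm::translate(glm::mat4(1.0f), objectPositionWorld);
    const glm::mat4 view  = glm::lookAt(cameraPositionWorld,
                                        glm::vec3(0.0f),               // look-at target
                                        glm::vec3(0.0f, 1.0f, 0.0f));  // up axis
    const glm::mat4 projection = glm::perspective(glm::radians(fieldOfViewDegrees),
                                                  aspectRatio,
                                                  0.1f, 100.0f);       // near / far clip planes

    // projection * view can be computed once per frame and reused for every
    // object; only the model matrix changes per object.
    return projection * view * model;
}
```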

[Image: game]

When we render two or more objects on the screen, there is always a question of the order in which they get drawn. The object drawn last will overlap whatever was drawn before it, right? That's where the depth buffer and the Z-test come into the picture. When rendering two objects on top of each other, the depth test is performed for each overlapping pixel based on which object's Z-value is smaller (0 means close to the camera's near clip plane, 1 means far from the camera, close to the far clip plane). The depth buffer is used to store per-pixel floating-point data for the Z-depth of each pixel rendered. We use a less-than-or-equal comparison to check which object's pixel is closer and therefore needs to be rendered. We always clear the depth buffer with the value 1 because, when there is only one object between the near and far clip planes, or when doing the depth test for the first object, we need to compare its Z-value with what is stored in the depth buffer, and that should be 1, since there is no other object in front of it and 1 means the farthest.
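
The corresponding setup, sketched against the raw APIs (again assuming GL headers and an active context, or a valid device; in the engine this sits behind the render states):

```cpp
// Assumes GL headers are included and a current rendering context exists.
void SetUpDepthTesting()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);   // draw when the incoming depth is <= the stored depth
    glClearDepth(1.0);        // 1.0 == farthest; cleared at the start of every frame
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Direct3D 9 equivalent:
    // pDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    // pDevice->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESSEQUAL);
    // pDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, clearColor, 1.0f, 0);
}
```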

[Images: floor.mesh and its binary file]

The floor mesh now contains three values for position: x, y, and z. The y value is 0, since it is a floor and perpendicular to the up axis. The binary file of floor.mesh reflects the added z component in the position data: 04 is the vertex count, followed by three 4-byte floats for position and four 1-byte uint8_t values for color per vertex.

In order to do all that, we made a lot of changes in our code. Instead of having one position-offset uniform, we now have three uniforms, one for each matrix: model, view, and projection. We don't need the offset uniform anymore, since we now use the model matrix uniform to update an object's change in position at run time. The same goes for the camera: we have the view and projection matrices to update its position and any change in aspect ratio or field of view. We made a new Camera class with its own position, orientation, and other camera-related properties. My platform-independent Camera class resides in the Graphics project because it performs transformations that are not directly graphics work but are still kind of part of it. It should, however, be accessed just like a GameObject, from game code outside the Graphics project.

Click here to download DirectX version

Click here to download OpenGL version

After IGF

This was when we started working on our plan of making that level and completing all the things we could not put into our game before the IGF submission. Problems related to networking were still haunting us. All the assignments from our other classes were pending this week, since we did not get to work on them during the IGF submission. So we got a little sloppy, but we did involve ourselves in meetings and discussions about our game and what we can implement next, once we are done with our pending work.

[Image: level screenshot]