Steam Greenlight games!

My thesis team has finally put our game up on Steam Greenlight!  Here’s a full list of EAE games currently on Greenlight:

EAE 6320 Final Project

For my final project, I wanted to make a simplified version of one of the prototype games that I worked on last year called Entropy.  You can play the original game on my website jonkenkel.com.  The original game is based on Missile Command, where the player shoots down missiles to defend several cities from destruction.  Entropy adapted that idea to defending a planet with weapons mounted on it.  For this version I just used silly models, pistachios and cashews, to represent the enemies.  Either way, the objective is to stop the enemies from reaching the planet in the center of the screen.

You can download the game to play here

Controls

  • A/D – Rotate planet counter-clockwise/clockwise
  • Space – Fire Weapon

Creating the game was relatively painless, as I had tried to design around a potential game as I created each system.  The GameObject/Component system made this process quite easy.  For every new type of player/enemy/object behavior that I needed, I was able to create a component and attach it to the same GameObject as the other components, such as renderables.  GameObjects are kept alive with std::shared_ptr in a World class that contains all objects in that world.  The RenderableComponents are all held in a Graphics class that draws them each frame.  The remaining problems with this system are at a higher level than the usability of any individual piece.  As I was creating the game and controlling objects’ life cycles, I wished that I had a system for creating objects in groups, such as an object with certain renderables, custom components, and other variables.  I realized that such a system would actually amount to a wrapper around all of the engine systems that I had so far.
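To illustrate the shape of this setup, here is a minimal sketch (the class and member names are illustrative, not the engine’s actual code):

```cpp
#include <memory>
#include <vector>

//hypothetical component base; the real engine's interface may differ
struct Component
{
    virtual ~Component() {}
    virtual void Update(float dt) = 0;
};

//a GameObject is little more than a bag of components
struct GameObject
{
    std::vector<std::shared_ptr<Component>> components;

    void AddComponent(std::shared_ptr<Component> c) { components.push_back(c); }

    void Update(float dt)
    {
        for (auto & c : components)
            c->Update(dt);
    }
};

//the World owns every object via shared_ptr, as described above
struct World
{
    std::vector<std::shared_ptr<GameObject>> objects;

    void Update(float dt)
    {
        for (auto & o : objects)
            o->Update(dt);
    }
};

//example behavior component: spins 90 degrees per second
struct Spinner : Component
{
    float angle = 0.0f;
    void Update(float dt) override { angle += 90.0f * dt; }
};
```

Adding a new behavior is then just a matter of writing another Component subclass and attaching it to an existing object.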

Since working with the game engine was pretty simple, I decided to spend time solving other problems that I had with my engine’s workflow.  If you have been following my blog, you saw my post about C++ Bitmasks (which I won’t elaborate on, because that post is lengthy already), and my search for a reusable solution to creating them.

I also created a C++ wrapper object for working with Lua/LuaState.  This LuaStack object allows easy retrieval of data at various points in the stack, and lets me much more easily parse out the data that I need.  I was able to use this new class to rewrite all the old Lua reading code (which had a lot of copy-pasted functions and an awkward layout) into a fraction of its previous length.


I think this class really helped me realize how much I enjoy tools development.  I already enjoyed making scripts in Unity that designers could use and customize in various ways.  So, the focus on asset workflow was very interesting to me.  While creating builders did become a bit samey toward the end of the class, the idea of managing working assets and creating optimized versions that are loaded by the game at runtime was exciting and fascinating.

I also feel like I learned a lot about the graphics programming workflow, despite that not being considered the focus of the class.  Creating meshes, contexts, and effects that were functionally different on each platform, and yet had platform-independent interfaces was really fun and rewarding.

My coding habits haven’t really changed because of this course, but I feel like I have a couple of extra tools in my tool belt now.  One of the skills I really pride myself on is considering the other people who will use a system as I design and implement it.  I am constantly asking myself if there is a simpler, more elegant way to solve the complex problems I am working on: “what would my ideal interface for this system look like?”.  I often find myself splitting code into segments and finding new uses for those smaller components that I have created.

This is what I believe separates good code and design from bad: the ability to take something and reuse it, without customization, for a variety of tasks.  Can you take a function, put it in a shared location, and have others find it useful for solving their problems, thus eliminating duplication and complexity?  This becomes more difficult as a system grows larger and more unwieldy.  However, if broken down correctly, the resulting pieces can be useful in lots of places for lots of problems.

I will leave you with two quotes from Albert Einstein (where I believe he was talking about math problems) that accurately express my opinions on what I am describing here, for how to create well designed code:

  • “Any intelligent fool can make things bigger and more complex…it takes a touch of genius – and a lot of courage – to move in the opposite direction.”
  • “Make everything as simple as possible, but not simpler.”

My code from this course/project is now available on GitHub, if you would like to peruse some of my systems.

EAE 6320 Assignment 13 – Textures

This week we added Textures to our game, essentially replacing the need to color meshes through vertex coloring.  In order to do this, we needed a way to reference textures and assign them to fragment shaders.  This fit well with last week’s work of adding materials, as we can reference the texture files as uniform parameters in the material.  Here is an example material file referencing a texture:

return
{
    effect = "transparent.effect.bin",
    uniforms = 
    {
        { name = "base_texture", texture = "alpha.DDS", shader = "fragment" },
        { name = "color_value", value = { 1, 1, 1 }, shader = "fragment" },
        { name = "alpha_value", value = { 1 }, shader = "fragment" },
    },
}

OpenGL binary material with texture parameter name/texture location highlighted

There are only two changes to this binary format since the last post.  The parameter name length is now stored in the Handle slot of the parameter array, and the texture location is now stored after the name at the end of the file:

  • Effect location string length (uint8_t)
  • Effect location string (char*)
  • Parameter count (uint8_t)
  • Uniform Parameter array (Parameter[])
    • Texture pointer – contains either nullptr if it is not a texture, or the length of the texture location string
    • Constant handle (const char* on DirectX, and GLint on OpenGL) – Contains length of this uniform’s parameter name
    • 4 floats for value
    • Shader (enum for Vertex or Fragment)
    • uint8_t value count
  • uniform parameter names and texture locations repeating until end of file (series of c-strings)
    • e.g. param1 name, param1 texture location, param2 name, param3 name, param4 name, param4 texture location

I was able to save a small amount of space by storing the name length inside the unused constant handles.  Unfortunately, I then proceeded to waste even more binary space by including a texture pointer in each uniform parameter.  This setup allowed me to use a single structure for all parameters, texture and float[] alike.  It isn’t sustainable if we start adding more data types, though, and is not an ideal setup.  In future versions, I believe this could be better accomplished by using either templates or inheritance to define a base parameter type, and enums to identify each parameter type’s array when writing to binary.
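To make the layout concrete, here is a sketch of what that single parameter structure might look like (field names are illustrative guesses, not the engine’s actual code):

```cpp
#include <cstdint>

//illustrative only: one structure serves both texture and float[] parameters,
//matching the binary layout described above
struct Parameter
{
    //nullptr for non-texture parameters; for textures, the builder reuses
    //this slot to hold the texture-location string length before writing
    void * texture;
    //the constant handle (const char* on DirectX, GLint on OpenGL) is
    //meaningless at build time, so the builder stores the parameter-name
    //string length here instead
    void * handle;
    float value[4];     //up to 4 floats of uniform data
    uint8_t shader;     //enum: vertex or fragment
    uint8_t valueCount; //how many of the 4 floats are used
};

//helpers showing the "reuse an unused pointer as a length" trick
inline void StoreLength(void *& slot, uint8_t length)
{
    slot = reinterpret_cast<void *>(static_cast<uintptr_t>(length));
}

inline uint8_t LoadLength(void * slot)
{
    return static_cast<uint8_t>(reinterpret_cast<uintptr_t>(slot));
}
```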


OpenGL Game


DirectX Game


I ran into a couple of problems working on this assignment.  The first was a small error in my old Graphics code.  I was creating a separate list of transparent objects each frame (to allow objects to change transparency dynamically).  After all the opaque objects are drawn, the transparent objects in that list are then drawn.  However, I accidentally referenced the wrong array, and would only sometimes draw the correct objects.  This was challenging to track down, as all the opaque objects were drawn correctly, and transparent objects that were within the first few items to be drawn would also appear correctly.

The other issue was with the OpenGL vertex definition.  A bit of pointer math that I had copy-pasted from previous assignments to calculate the offset of each attribute contained an error.  This was a minor fix that was obvious once I looked at the mesh file.  The bug manifested as an odd black corner on every mesh in the game.
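One way to avoid that kind of hand-written pointer math entirely is offsetof.  Here is a sketch with a hypothetical vertex layout (not the engine’s actual one):

```cpp
#include <cstddef>
#include <cstdint>

//hypothetical vertex layout; the real engine's fields may differ
struct Vertex
{
    float position[3];
    float normal[3];
    float texcoord[2];
    uint8_t color[4];
};

//offsetof computes each attribute's byte offset from the struct itself,
//which removes the arithmetic the copy-paste bug was hiding in
const size_t positionOffset = offsetof(Vertex, position);
const size_t normalOffset   = offsetof(Vertex, normal);
const size_t texcoordOffset = offsetof(Vertex, texcoord);
const size_t colorOffset    = offsetof(Vertex, color);
```

If the layout changes, the offsets update automatically instead of needing to be recalculated by hand.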


Download the latest version here

Controls:

  • WASD – Move camera forward/backward/left/right
  • IJKL – Move triangular prism up/down/left/right (hidden behind the objects)

EAE 6320 Assignment 12 – Materials

This week we added Materials to our game.  These are essentially a wrapper for an effect that defines parameters to be set in the effect’s vertex or fragment shaders.  This allows an artist to adjust parameters that go into their custom shaders without having to write custom engine code.  It also allows us to define materials that have a different set of parameters but use the same effect/shader combo.

Here is an example of our human-readable Material format (green_transparent.material):

return
{
    effect = "transparent.effect.bin",
    uniforms = 
    {
        { name = "color_value", value = { 0.0, 1.0, 0.0 }, shader = "fragment" },
        { name = "alpha_value", value = { 0.2 }, shader = "fragment" },
    },
}

The uniforms are the parameters that are set in the shaders.  Each uniform parameter defines the name it appears as in the shader, a float array with 1 to 4 values for what it should be set to, and which shader the parameter is set in (fragment or vertex).

The binary format is laid out as follows:

  • Effect location string length (uint8_t)
  • Effect location string (char*)
  • Parameter count (uint8_t)
  • Uniform Parameter array (Parameter[])
    • 4 floats for value
    • Constant handle (const char* on DirectX, and GLint on OpenGL) – unused currently, contains garbage
    • uint8_t value count
    • Shader (enum for Vertex or Fragment)
  • uniform parameter names repeating until end of file
    • Each parameter is a uint8_t of its length, followed by a char* of the string value

I chose this format because it allows me to load the effect first, then iterate over the uniforms and names at the same time, grabbing each uniform handle from the already-built effect by its uniform name along the way.  The binary files are completely identical between Debug/Release builds, and are NEARLY identical between OpenGL and DirectX (the ConstantHandle size is the only difference, because each platform stores uniform handles differently).

  • Triangle prism in the center and box at the top right are using a green opaque material with the opaque effect.
  • Transparent green box is using a green transparent material with the transparent effect.
  • Blue box at left is using a blue transparent material with the transparent effect.
  • Floor and red box in the back are using a red opaque material with the opaque effect.

I did have to update all my meshes to have white vertices to get these colors to appear correctly (as they are multiplied onto the original vertex color).


I didn’t actually run into any issues this week. Smooooooth sailin’.


Download the latest version of the game here.

Controls:

  • WASD – Move camera forward/backward/left/right
  • IJKL – Move triangular prism up/down/left/right

C++ Bitmasks – The Mess

Those who have gotten into extended conversations with me about coding have undoubtedly heard me quote Larry Wall’s Three Virtues of a great programmer.  I find the three rules both hilarious and littered with truth.  I’m not quite sure where frustration with a task not being simple and clear enough falls among the three, probably somewhere between laziness and impatience.  Either way, I was reminded of these virtues when trying to find the perfect solution to the bitmask problem.  I want to talk about the different methods I have seen for handling named bitmasks in C++, and whether I was able to find that perfect solution.

First, I want to mention some of the goals we have when working with bitmasks.  Some of these are required (*), and others are simply things that would be nice and make our lives easier.  (The numbering is not for priority, but so that I can reference them later in the post.)

  1. *Store a series of boolean values as flags in a defined type
  2. *Values come from a limited set of possible values known at compile time.
    • There are bitmasks that allow new flag values to be added to the set at runtime; those are not part of this post.
  3. *Very small memory footprint to store
  4. *Very Fast to set/clear/check any or all flags
  5. Be able to reference individual flags by name (preferably type-checked or with some assurance that the name is valid before runtime)
  6. Simple, and clear interface (Do we know FOR SURE that this is a mask?  Can we guarantee how it is being used?)
  7. Minimal work to define a mask type (Are we required to do any special things to make this method work?)
  8. Able to be scoped (within a namespace/class/struct)

A Couple Immediate Bad Ideas

I often find it helpful to think about the worst ways to solve a problem, or at least talk about them when trying to find solutions.  So here are a few methods that might be some people’s first thoughts on achieving the goals above, and why they don’t really meet our requirements.  These might be blatantly obvious to most of you, but I enjoy listing them anyways.

std::map or std::unordered_map

These solutions are (per the name of the section) immediately a bad idea, because they violate #3.  The smallest value type that could be stored in the key-value pair to achieve our goal is a boolean.  A boolean is 1 byte on most platforms, meaning that we leave 7 of its 8 bits unused for each flag, greatly increasing our memory footprint (to say nothing of the overhead of the maps themselves).

string type keys

Talking about maps brings up the fact that whatever method we choose will undoubtedly involve mapping some key (our flag name) to some value (whether the flag is set).  We would like to reference these keys by name (#5), so a string might be the first thing some people choose.  This is a poor choice when thinking about #4: c-string or std::string comparisons are expensive operations, typically O(n) in the length of the strings.

We can optimize away this comparison time by using some form of hashed string.  However, that does not hide the fact that strings also violate another rule, #2.  Strings, regardless of how they are stored, can (theoretically) hold an infinite number of possible values, and we only want a small set of possible values for our bitmasks, so strings are off the table.

#define BitMasks

Alright, now let’s get to real solutions.  The first possible solution is using #defines to store the possible values and a typedef to store the mask itself:

#include <cstdint>

typedef uint8_t ImportantMask;
//parenthesized so that expansions like ~IMPORTANTMASK_TWO evaluate correctly
#define IMPORTANTMASK_ONE (1u << 0)
#define IMPORTANTMASK_TWO (1u << 1)
#define IMPORTANTMASK_THREE (1u << 2)

void TestImportantMask()
{
    //initialize to the one and three flags
    ImportantMask mask = IMPORTANTMASK_ONE | IMPORTANTMASK_THREE;

    //set the TWO flag
    mask |= IMPORTANTMASK_TWO;

    //clear the ONE flag
    mask = mask & ~IMPORTANTMASK_ONE;

    //check the THREE flag
    bool threeIsSet = (mask & IMPORTANTMASK_THREE) != 0;
}

This solution is actually pretty good.

Positives:

  • 2 We #define only the values we consider to be valid.
  • 3 We store our mask in a single byte (or larger if need be, simply change the typedef).
  • 4 Bit operators to modify flags are fast.
  • 5 We also have direct references to the set of valid values by name.
  • 6 We can clearly see when someone is using this as a bitmask, as they should be using our typedef.
  • 7 A bit unclear for the first definition, but straightforward to create after that

Negatives:

  • 8 #defines cannot be scoped
  • No type checking for mask keys.  Any integer could be dropped into this type and screw it up.
  • With #defines, it is not immediately apparent what the possible keys are or where to find them without good documentation.

All that said, I don’t like this method, mostly because of that last point.  If someone were to pick up my API and try to use this mask, they would be FORCED to read either documentation or source code to understand what possible values could be put in.  The scoping is also a serious issue, as polluting the global namespace is always very ugly.

SIDE NOTE: I could write an entire other section here about const member keys or const namespace member keys; however, they have very similar issues to the #defines, with the exception of the scoping issue.  I still don’t like them, because it is really unclear whether values created in this manner are supposed to be used together, or for what purpose.  They also do not define a type, as our next candidate does, which puts them in a lower bracket.

Enum Bitmasks

C (and thus naturally C++) includes enumerations as a built-in type.  Under the hood, these are treated as integers, but they have the FANTASTIC qualities of being type-checked, scopable, and able to reference their valid values by name in code.  This makes them great for our purposes, as they essentially guarantee 2, 5, and 8 right out of the gate.  Unfortunately, they are not without issues, the most noteworthy being that any integer can be cast to the enum type and stored in it, regardless of value (even if not within the set of possible elements).  However, their naming clarity makes them fantastic for our desire to conveniently name the set of valid elements.

Enums as Bitmasks themselves

Since enums are stored as integers under the hood, we can simply treat one as a bitmask, as long as the value of each entry is a unique bit.  We accomplish this by shifting each entry to a different bit.  I also want to preface this section with the fact that many do not consider this method appropriate for C++, as it is not always safe across compilers.

enum Test
{
    One = 1 << 0,
    Two = 1 << 1,
    Three = 1 << 2,
};

void TestClassicMask()
{
    //initialize to the one and three flags
    Test mask = static_cast<Test>(Test::One | Test::Three);

    //set the TWO flag
    mask = static_cast<Test>(mask | Test::Two);

    //clear the ONE flag
    mask = static_cast<Test>(mask & ~Test::One);

    //check the THREE flag
    bool threeIsSet = mask & Test::Three;
}

You will notice that we constantly have to do conversions here, because the bitwise operators (|, &, ^) return integer types, not our enum type (even though we know that our enum is technically an integer).  Luckily, if you are on Windows, there is a macro called DEFINE_ENUM_FLAG_OPERATORS that defines all these operators for a given enum type.  If you aren’t on Windows, I found a forum post with a version of the macro in full, or you can look it up directly in the Windows.h header file.

#include <Windows.h>

enum Test
{
    One = 1 << 0,
    Two = 1 << 1,
    Three = 1 << 2,
};
DEFINE_ENUM_FLAG_OPERATORS(Test);

void TestClassicMask()
{
    //initialize to the one and three flags
    Test mask = Test::One | Test::Three;

    //set the TWO flag
    mask |= Test::Two;

    //clear the ONE flag
    mask &= ~Test::One;

    //check the THREE flag
    bool threeIsSet = mask & Test::Three;
}

Positives:

  • 2 Enums have a limited set of possible values
  • 3 Enums are stored as unsigned integers
  • 4 Bit operators are fast
  • 5 enum entries are named, and type checked
  • 6 operations are very clear what is happening
  • 7 Each entry must be a unique bit, and the #define must be called to generate operators (both pretty manageable)
  • 8 yay scoping!

Negatives:

  • 6 There is no way to tell if we are storing a single value or a mask of values, unless the name of the variable says so.
  • Not guaranteed to be stored as the same integer type under-the-hood across compilers
  • Offsetting each element into a new bit is awkward (but not complicated)

Some would consider my first negative actually a positive, as the one type covers both potential uses of the enum: a single value or a mask of values.  However, it would be nice to know when a variable is a mask and when it is a single enum value.  That is just good type-safety, so it is a negative in this instance.

We can actually work around the first two negatives pretty easily by using a typedef for our mask type.  Doing this also REMOVES the need for DEFINE_ENUM_FLAG_OPERATORS, as we are no longer doing the operations on the enum values themselves, but on their underlying integer values.  This does bring back the previous type-checking issue, however.

#include <cstdint>

enum Test
{
    One = 1 << 0,
    Two = 1 << 1,
    Three = 1 << 2, 
}; 

typedef uint8_t TestMask; 

void TestClassicMaskWithTypedef() 
{
    //initialize to the one and three flags 
    TestMask mask = Test::One | Test::Three; 

    //set the TWO flag 
    mask |= Test::Two; 

    //clear the ONE flag 
    mask &= ~Test::One; 

    //check the THREE flag 
    bool threeIsSet = (mask & Test::Three) > 0;
}

New Negatives:

  • 7 Offsetting each element into a new bit is awkward (but not complicated)
  • 7 typedef of mask must have enough bits to contain all possible elements of the enum (very easy to change when altering the elements).
  • No type-checking of values put into mask.

Overall, these methods are some of the best and most commonly used.  They have negatives, but those are simple enough to work around with a minimal understanding of how the masks are defined.

std::bitset with enums

The STL includes a class called std::bitset that is used to create bitmasks whose size is defined at compile time.  I am already a big fan of this class, so let’s see how well it works with enums:

#include <bitset>

enum Test
{
    One,    //0
    Two,    //1
    Three,  //2
};

void TestBitSet()
{
    //initialize to the one and three flags
    std::bitset<3> mask;
    mask.set(Test::One);
    mask.set(Test::Three);

    //set the TWO flag
    mask.set(Test::Two);

    //clear the ONE flag
    mask.reset(Test::One);

    //check the THREE flag
    bool threeIsSet = mask.test(Test::Three);
}

Positives:

  • 2 Enums contain valid possible values
  • 3 bitsets are just integers as bitmasks underneath
  • 4 operations are bitwise operators underneath
  • 5 Enums are the names of values
  • 6 Dedicated functions modify the state of the mask, rather than bitwise operators, making the interface VERY CLEAR.
  • 8 No scoping issues
  • Works on any standard enum without defined values (because they always start enumerations at 0)

Negatives:

  • 5 Any size_t is accepted to modify the state of the bitset (set, reset, flip), meaning the reference to the enum is NOT type-checked, allowing invalid values.
  • 6 No way to know if the bitset is a mask (minus variable naming conventions) of the enum or another mask of that size.
  • 7 Must be standard enum with NO defined values internally.  Must ALWAYS know the size of the enum when creating the mask.
  • All masks will need their definitions updated if the number of items in the enum ever changes.
  • Unlike all other methods, writing a large bitset to binary can be a pain

Well, we managed to get a really nice and straightforward interface with this method; however, there are some obvious large tradeoffs.  There is a workaround for hardcoding the size of the bitset: defining the count of elements within the enum itself.  Unfortunately, it introduces other issues too:

enum Test
{
    One,    //0
    Two,    //1
    Three,  //2
    Count,  //3 - ALWAYS after the last valid value
};

std::bitset<Test::Count> mask;

This eliminates the hardcoding of the mask size, however it also creates a new element in the enum.  This means that Count is considered a valid value of Test, meaning we can pass it to anything that accepts a Test, which is less than ideal.  However depending on the usage, this might be okay or even desirable.  For example, with this, we can now iterate over all values in the enum with a for loop without knowing the number of items.

for (size_t t = Test::One; t < Test::Count; t++)
    std::cout << static_cast<Test>(t);

I also realized that if we KNOW this count exists, we can create a template wrapper that hides away several of the other negatives (also, I learned about SFINAE while making this, which kind of blew my mind).  My version hasn’t been thoroughly tested yet, though it is mostly a straight wrapper of std::bitset, so it should work fine.  The new usage is now:

enum Test
{
    One,    //0
    Two,    //1
    Three,  //2
    Count,  //3
};

void TestTemplateMask()
{
    //initialize to the one and three flags
    EnumMask<Test> mask;
    mask.set(Test::One);
    mask.set(Test::Three);

    //set the TWO flag
    mask.set(Test::Two);

    //clear the ONE flag
    mask.reset(Test::One);

    //check the THREE flag
    bool threeIsSet = mask.test(Test::Three);
}
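For reference, a minimal sketch of what such a wrapper could look like (this omits the SFINAE check and is illustrative only, not my full version):

```cpp
#include <bitset>
#include <cstddef>

//thin wrapper over std::bitset, sized by the enum's Count element;
//a complete version would also SFINAE-reject non-enum types
template <typename E>
class EnumMask
{
public:
    void set(E flag)        { bits.set(static_cast<size_t>(flag)); }
    void reset(E flag)      { bits.reset(static_cast<size_t>(flag)); }
    void flip(E flag)       { bits.flip(static_cast<size_t>(flag)); }
    bool test(E flag) const { return bits.test(static_cast<size_t>(flag)); }

private:
    //Count gives the number of valid flags, and thus the bitset size
    std::bitset<static_cast<size_t>(E::Count)> bits;
};
```

Because every member takes the enum type E, invalid integers can no longer be passed in directly, unlike with a raw std::bitset.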

Positives:

  • 2 Enums contain valid possible values (values are type-checked on input to all functions)
  • 3 bitsets are just integers as bitmasks underneath
  • 4 operations are bitwise operators underneath
  • 5 Enums are the names of values
  • 6 Dedicated functions modify the state of the mask, rather than bitwise operators, making the interface VERY CLEAR.
  • 6 Template also makes it obvious what this is a mask of.
  • 7 Any standard enum can now be masked with minimal work (Add the Count element)
  • 8 No scoping issues

Negatives:

  • Enum MUST define a Count element as the last element in the enumeration.  (Very awkward if not needed)
  • Enum MUST either not use defined values, or ensure that all values [0, Count) are used.  (if defined element values skip certain numbers, those skipped numbers still add toward the Count element, as enumerating continues from the last element by the compiler)
  • Templates mean that invalid enums are not recognized until compile time. (No intellisense help here)
  • Unlike all other methods, writing a large bitset to binary can be a pain. Edit: writing this to binary is actually no more difficult than writing any of the other masks (because the underlying data is already in the format we want it to be in for storage).

I am generally happy with how this method turned out.  I am sure that we could work around things a bit more and make it a bit more dynamic, but this is about the end of the road in terms of direct simplicity.

Conclusion

So, I guess I have to come to some sort of consensus on which method I prefer here.  Honestly, I think that either of the methods I talked about the most, the classic bitmasked enum or a standard enum with a Count and the templated mask, would be a valid solution.  They both accomplish nearly all our goals, have simple interfaces, and are quite optimized (in both speed and memory).  They both have a bit of awkwardness in their definition, but nothing too unwieldy.  So, I guess we didn’t find a perfect solution, but we found several pretty good ones.  At least, good enough to satiate a bit of my laziness and impatience with this problem.

EAE 6320 Assignment 11 – Maya Mesh Exporter and Effect Render States.

This week’s assignment was primarily focused on building a mesh exporter for Maya that exports into our human-readable format.  Using Maya, we will then be able to create much more advanced mesh types and load them into our game.  Unfortunately, we didn’t do much in the way of actually creating the interface with Maya, as JP already had that code set up for us; instead, we focused on converting the Maya format into our own.

My format has changed, since Maya exports more information than the old format could comfortably handle.  Here is an example of the new format with the floor object:

return
{
    vertex =
    {
        { -- Vertex 0
            pos = { -2.5, -5.55112e-16, 2.5 },
            color = { 1, 0.9931, 0, 1 },
            normal = { 0, 1, 2.22045e-16 },
            tangent = { 1, -0, 0 },
            bitangent = { 0, 2.22045e-16, -1 },
            texcoord = { 0, 0 },
            shading_group = 0,
        },
        { -- Vertex 1
            pos = { 2.5, -5.55112e-16, 2.5 },
            color = { 0, 0.0334, 1, 1 },
            normal = { 0, 1, 2.22045e-16 },
            tangent = { 1, -0, 0 },
            bitangent = { 0, 2.22045e-16, -1 },
            texcoord = { 1, 0 },
            shading_group = 0,
        },
        { -- Vertex 2
            pos = { -2.5, 5.55112e-16, -2.5 },
            color = { 0.0218, 1, 0, 1 },
            normal = { 0, 1, 2.22045e-16 },
            tangent = { 1, -0, 0 },
            bitangent = { 0, 2.22045e-16, -1 },
            texcoord = { 0, 1 },
            shading_group = 0,
        },
        { -- Vertex 3
            pos = { 2.5, 5.55112e-16, -2.5 },
            color = { 1, 0, 0.1421, 1 },
            normal = { 0, 1, 2.22045e-16 },
            tangent = { 1, -0, 0 },
            bitangent = { 0, 2.22045e-16, -1 },
            texcoord = { 1, 1 },
            shading_group = 0,
        },
    },
    index =
    {
        { 0, 1, 2 }, 
        { 2, 1, 3 }, 
    },
}

 

My Maya exporter loaded in the Maya Plug-in Manager

After attaching Visual Studio to the Maya process, we can debug our plugin during an export.

The MayaMeshExporter project, which is part of the Tools project directory, has no dependency connections to the rest of the solution.  It does not need any other projects to be built before it, and no other projects need it to be built in order to work properly.


The other part of this assignment started with the idea of adding transparency to the game, but ended up being about the many possible render states of objects.

The current game, with transparent objects and render states!

The human-readable effect format has been updated to store the render state flags.  These are all optional booleans for whether each state is enabled.  This makes the values easy to read and modify, and is also very clean looking.  Here is the new format:

return
{
    vertex = "vertex.shader.bin",
    fragment = "opaque_fragment.shader.bin",
    transparency = false,
    depth_test = true,
    depth_write = true,
    face_cull = true,
}

For the binary formats, we store the render state as a bitmask (allowing us to store as many render states as there are bits in the mask type).  I have actually been meaning to write an entire post about the different ways to handle enums as bitmasks, and might do that this next week.  Here is the definition of our RenderState enum and the containing RenderMask:

typedef uint8_t RenderMask;
enum RenderState
{
    Transparency    = 1 << 0,
    DepthTest   = 1 << 1,
    DepthWrite  = 1 << 2,
    FaceCull    = 1 << 3,
};

In truth, I could have just used RenderState as the mask type, but then its size would not be defined (which is important for the binary format) and could differ between platforms.  By using the RenderMask typedef, I guarantee that I know the size of the mask type.
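As a self-contained sketch (repeating the definitions above), the size guarantee and a typical flag check look something like this:

```cpp
#include <cstdint>

typedef uint8_t RenderMask;
enum RenderState
{
    Transparency = 1 << 0,
    DepthTest    = 1 << 1,
    DepthWrite   = 1 << 2,
    FaceCull     = 1 << 3,
};

//the typedef pins the serialized size, unlike the raw enum type
static_assert(sizeof(RenderMask) == 1, "render mask must serialize to one byte");

//typical usage when applying render states at draw time
inline bool IsSet(RenderMask mask, RenderState state)
{
    return (mask & state) != 0;
}
```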

Since our RenderMask is a uint8_t, it only takes up 1 byte in the binary format.  In the effect binary examples below, I highlighted the first byte of the file that stores this mask, and I included the conversions between its hex, decimal, and binary values on the right (the new Windows programmer calculator is a beautiful thing).  As the bits appear in the images below, the mask is organized as Face Culling, Depth Writing, Depth Testing, Transparency.

Opaque effect in binary format. Note the human-readable effect above is the same effect as this one.

The opaque effect has no transparency, but has depth testing, writing and face culling enabled.

Transparent effect binary format

After the first byte of the RenderMask, there is another uint8_t that stores the length of the vertex shader string (for fast jumping through the format), then the vertex shader c-string and the fragment shader c-string.  Storing the RenderMask at the beginning of the file was just a convenience for me.  Since it is 1 byte, whether it sat at the beginning or end of the file did not matter (since we always know how big the file is, reading the last byte would be trivial).  Because of that, I chose the beginning for the easy pointer math:

RenderMask renderMask = *reinterpret_cast<RenderMask*>(fileData);
uint8_t vertexStringLength = *reinterpret_cast<uint8_t*>(fileData + sizeof(renderMask));
const char* vertex = reinterpret_cast<const char*>(fileData + sizeof(renderMask) + sizeof(vertexStringLength));
const char* fragment = vertex + vertexStringLength + 1;

Alpha transparency (the type we implemented) is a normalized value between 0 and 1 that determines how opaque (not see-through) an object is: 0 is fully invisible, and 1 cannot be seen through at all.  Alpha blending is the process of blending the color of the pixel behind the transparent object with the transparent object’s color, based on the transparent pixel’s alpha value (using something like bg.color * (1 - transparent.alpha) + transparent.color * transparent.alpha to determine the new color of the pixel).  Because of how alpha blending works, we want to render all opaque objects first, then render all transparent objects in back-to-front order.  This gives a higher apparent priority to the objects in front, which matches how our eyes see transparency.


 

No interesting issues this week. I did run into my old friend, the copy-during-Custom-Build-Step error; however, this was simply a permissions error with the directory it was trying to copy to.


 

Download the latest version here!

Controls:

  • WASD – Move camera forward/backward/left/right
  • IJKL – Move triangular prism up/down/left/right

Why I “overuse” the Singleton pattern in Unity

Singletons.  Some people love them, some people hate them.  I have been thinking about my love/hate relationship with them recently as I worked on more projects in Unity.  I found that I have a tendency to use this pattern quite frequently in the engine.  This isn’t something I do as quickly in other languages, such as C++, so I thought it best to talk about why this has happened to my development.  There are a number of reasons and thoughts that go into this decision every time that I make it, so I thought this post was important, if for no other reason than to validate my decisions to myself.

When I talk about a singleton here, I am talking about my personal implementation that I use on about a half dozen projects with great success.  You are free to take and modify that implementation, if you so desire.

Object lifecycle

In Unity, most objects die at the end of a scene (or when a new level is loaded).  The exception to this rule is objects that are flagged by the DontDestroyOnLoad function.  These objects will survive through scene loads.  Objects flagged in this manner are already candidates for a singleton.  This is because they must survive the scene load, which destroys any local objects’ references to them.  And odds are that we need an easily accessible reference to this object.  Some examples include scripts that will never be modified throughout the life of the game, but instead are used to grab data or global state (InputManager, LevelManager).  Why would we even bother destroying these objects?  They are never modified, and will be needed in every level of the game.  This is also useful for scripts that move state between levels, such as a HighScore script that stores the high score between levels.

The singleton pattern also allows lazy initialization, something very handy, as many objects don’t matter until we call for them.  As an example, we don’t really need to know about the LevelManager (and any metadata that it stores about levels) until we are about to load a new level, so there is no sense in creating it until it is asked for.  This is opposed to something like an InputManager, which needs to know the current state of objects by the time we poll it for data.  A script like that needs to be touched and brought into existence at the beginning of our game.  Either way, we not only put off creating these objects until absolutely necessary, but we also ensure that they are created the same way every time, guaranteeing that we find any bugs in that creation step.

Data Models

In most custom engines, you must store data in formats that need to be interpreted and applied to objects (Lua, JSON, XML).  This is still true in Unity; however, using prefabs, this happens automatically.  A prefab not only provides a nice GUI for editing any variables (the inspector), but is also loaded immediately onto that object in the exact configuration of the prefab.  You will notice that my implementation has LoadedFromResources as one of the options, and honestly this is the most common method that I use.  I can create a prefab that represents that object, and the singleton will load that prefab exactly into the scene and begin running the scripts with that configuration.  I use this for virtually all my singletons in Unity.  A LevelManager prefab might store the levels, a bitmask about what types of levels they are, and the number of enemies on each level.  An InputManager might store the sensitivity of gamepad sticks.  A NetworkManager might store all the basic networking configuration, such as ports, networked prefabs, and other data.

Secret Singletons

I have seen a lot of Unity code, from good and bad engineers (including my own).  One thing that happens constantly, and continuously amazes me, is the misunderstanding that commonly occurs when grabbing references.  For example:

MyImportantScript myImportantObject;

void Awake()
{
    myImportantObject = GameObject.FindObjectOfType<MyImportantScript>();
}

AND

MyImportantScript myImportantObject;

void Awake()
{
    myImportantObject = GameObject.Find("My Important Object").GetComponent<MyImportantScript>();
}

Both are what I’ve grown to call secret singletons, and they are actually poor singleton implementations.  The coder expects there to be only 1 instance of this object at a time; they just have a poor search method that must be run to get that instance.  This is actually much worse than using the singleton pattern, in my book.  The coder here has all the negatives of a singleton (global object, tight coupling) and none of the benefits (easy referencing, easy lifecycle management).  This also makes it very hard to ensure that other objects all grab a reference in the same way.  What if two different scripts each use one of the methods above, then someone renames that object?  One of them will break, but not the other.  This is really awkward and undesirable behaviour.

Some samples of Singletons I use

HUDController – Management for an in-game HUD (not for on menus).

[UnitySingleton(UnitySingletonAttribute.Type.LoadedFromResources, true, "UI/HUD Canvas")]
public class HUDController : UnitySingleton<HUDController>

This singleton is something that is constantly referenced from the player in my scenes, ensuring that the HUD actually exists in that level whenever the player tries to modify its state (their health, etc).  However, you can also see that this object is destroyed when a new scene loads, meaning that it does not live on.  If it did live on, it could end up in a menu, which is NOT what we want from a HUD.  However, any level that has a player in it will, by necessity, create a HUD configured as we have it in the HUD Canvas prefab.

StatTracker – FPS and other statistics tracking

[UnitySingleton(UnitySingletonAttribute.Type.CreateOnNewGameObject, false)]
public class StatTracker : UnitySingleton<StatTracker>

This singleton is used for tracking the game’s stats throughout the lifecycle of the game.  It is touched into existence on the first level by the menu, and is not destroyed until the game closes.  When we look at stats, we want a full history of the highest, lowest, and average fps throughout the course of the entire game.  This script doesn’t have any special configuration; it just grabs data at runtime, so there is no need for a prefab.  Using this as a singleton (as opposed to a simple static class) is necessary because we need to grab data EVERY FRAME (thus we need the Update function).

LevelManager – Storing metadata about scenes

[UnitySingleton(UnitySingletonAttribute.Type.LoadedFromResources, false, "Managers/Level Manager")]
public class LevelManager : UnitySingleton<LevelManager>

This singleton stores a list of structs containing data about each level.  Unfortunately Unity doesn’t let us store data about scenes in one cohesive location, so we must manage it manually like this.  But you can see that this object is loaded from a resources prefab, with any designer-configured level variables, and survives the length of the game (as that data will never change).

Conclusion

So I guess each decision to use a singleton or not comes down to a series of questions.  These questions are only asked after I KNOW that we only ever need a single instance of the script/GameObject at a time.

  • Do we need access to MonoBehaviour functions?  (Update, Start, OnLevelWasLoaded)
  • Do we have special data to store that can easily be modified from a prefab inspector? (let the designers have a nice UI without us writing custom scripts for this)
  • Do we have a configuration that is dependent on this object being laid out in a certain way? (must have parent/child, or must have objects attached to it)
  • Does this object need to survive through multiple scenes?
  • Am I creating secret singletons if I don’t create a singleton?
  • Are many objects trying to grab references to this object? (and do I want them to all grab them the same way?)

If any of those are true, there is a very high probability that I will choose a singleton over another pattern.  Hopefully this has been informative; feel free to comment if you disagree with anything I’ve said here.

EAE 6320 Assignment 10 – The Third Dimension

This week we transferred our engine from handling simple 2D (in a poor fashion) to proper 3D.  A simple task, but one that adds a few items to the checklist.

First are the meshes, which need a z coordinate added to each vertex’s position.  That is very easy to do in the human-readable format:

return
{
    vertex = 
    {
        { pos = { -5, 0, 5 }, color = { 0, 1, 0, 1 }, },
        { pos = { 5, 0, 5 }, color = { 0, 0, 1, 1 }, },
        { pos = { -5, 0, -5 }, color = { 1, 0, 0, 1 }, },
        { pos = { 5, 0, -5 }, color = { 0, 1, 1, 1 }, },
    },
    index = 
    {
        { 0, 1, 2, },
        { 1, 3, 2, },
    },
}

In binary, our format is actually exactly the same (we simply increased the size of each vertex).

  1. Vertex Count (uint32_t)
  2. Index Count (uint32_t)
  3. Vertices (Vertex[]) – stored as (3 floats for position, 4 uint8_t’s for color)
  4. Indices (uint32_t[])

Floor Mesh Binary Format

We also need the concept of a camera: something to define the view frustum for what objects we are going to draw.  In my architecture, I am using another GameObject with a CameraComponent that stores all the variables we need.  This object is stored within my Graphics object, and can be modified and moved from a reference there.  I could have defined it externally; however, the Graphics object needs to know about the existence of a camera, so it made the most sense to keep it inside.  This creates an interesting and appropriate connection in the Graphics object between the RenderableComponents (mesh and effect combination) that are going to be drawn, and the camera that is going to draw them.

Finally, we need to be able to draw each object to the screen.  In a similar fashion to how we used the position offset in a previous assignment, we are going to define 3 matrices for the vertex transformation of the object to the camera/screen.

  1. Local-to-World : Transforms the vertex from the object’s local coordinates to the game’s world coordinates
  2. World-to-View : Transforms from game world coordinates to the camera’s coordinates
  3. View-to-Screen : Transforms from camera coordinates to the screen’s coordinates

We have 3D! (and the box is able to intersect the floor properly, thanks to depth buffering)

The only other task was to enable the depth buffer on each platform.  The depth buffer allows the box above to properly intersect with the floor.  Without the depth buffer, a painter’s algorithm is used, meaning that objects are drawn on screen in the order their draw calls are made (with the last-drawn objects overlapping the first-drawn objects).  Such an algorithm could never draw the scene above correctly, as it needs to draw the floor over the box in some areas, and the box over the floor in others.  The depth buffer accomplishes this by storing the depth of each pixel in an array.  When a new object is drawn, each pixel is checked against this buffer, and if its value is lower than the value in the buffer, that pixel is drawn.  In our case, we use a less-than comparison, and clear the depth buffer to 1 by default.  1 is the highest value that can be in the buffer, as this z depth corresponds to the object’s depth in the view frustum talked about above (0 being at the near plane of the frustum, and 1 at the far plane).  We use less-than because we want objects that are closer to the camera to be drawn on top of objects behind them in 3D space.


The only issue I ran into was when setting the different depth-buffer clear flags.  For whatever reason, neither platform liked it when I used the or-equals (|=) operator to set the values in an if statement.  I wound up using ternary operators with the or operator like so:

const DWORD buffersToClear = (screen ? D3DCLEAR_TARGET : 0x0) | (depth ? D3DCLEAR_ZBUFFER : 0x0) | (stencil ? D3DCLEAR_STENCIL : 0x0);

I also decided to just add the option to clear the stencil buffer, since we added the depth buffer already.  It seems foreseeable that we will add that too, eventually.


Download the game here!

Controls:

  • WASD – Move camera forward/backward/left/right
  • IJKL – Move box up/down/left/right

Game Design – My problems with Halo 5

I have been a longtime Halo fanboy, spending thousands of hours across the games, so many of the points I bring up here come with my bias toward the classic Halo games.  I am going to talk exclusively about the campaign.  I played through the majority of the game on Heroic difficulty (the preferred standard difficulty for returning Halo players in the previous games), and had to switch off of Heroic toward the end, which I will talk about later.  I also (thanks to 343’s decision to remove splitscreen) played through the game solo, with a friend watching and talking about the game with me.

This post will primarily focus on the issues that I feel ruined the Halo 5 campaign.  From destroying the player’s fantasy during gameplay, to enemy design where enemies are just bullet sponges.  I will be spoiling a few details about story and gameplay, if that is a concern for you.

The Fantasy

Halo has always been about the player living out a fantasy.  This is obvious from the first moments of the first game, when the Master Chief is awakened from cryosleep: how the other characters talk about him, and how he is treated by Captain Keyes, with respect and with VERY high expectations.  The design of the older games’ missions reinforced this by letting the player be the one to take all the awesome actions.  During sniper missions, any allies would wait for the player to engage and take the first shot before backing them up.  During epic boss battles, such as the Scarab fights in Halo 3 (and somewhat in Halo 2), the player had to be the one to destroy the Scarab and save everyone.  The player always had a leg up and was always the coolest part of the games.  YOU were the hero, and you felt that.  You destroyed enemies, you rescued your allies, and there was no one else, no other character, who could have done the things you did.

Unfortunately, Halo 5 destroys this fantasy right out of the gate.  If you watch the release trailer, which also serves as the opening cinematic to the game, you instantly feel the power of the Spartans.  It is masterfully crafted, empowering, awe-inspiring, and the initial step of doom for Halo 5.

The first mission, which you are immediately thrust into after this cinematic, is a rather standard, and bland, fight through a fissure in the ice, and then a few enemies in a building.  There are no dropships for the player to jump on and destroy, as we saw in the trailer.  There are no vehicles for the player to board and destroy, like we saw in the cutscene.  There isn’t even a hill for the player to go running down, wreaking havoc as they sprint to the finish.  By the end of this mission, you realize that you do NOT get to do all the awesome things they show you in that cutscene, and instead only get to have some small skirmishes with several dozen enemies.

This mission culminates as you approach Dr. Halsey (original creator of the Spartan IIs) and her captor, Covenant Supreme Leader Jul M’adama.  This is it: what this whole mission has been working toward, what all the Spartan Ops missions from Halo 4 had been setting up.  The final battle with the last leader of the Covenant, a force you have been fighting for more than half a dozen games now.  The player is primed, the stage is set, and we are ready to pummel this guy, not only beating the mission, but basically taking final victory from the Covenant.  After you fight the few enemies in the last room of the mission, you are told that M’adama and Halsey are in the next room, and as you approach the door, you realize that this isn’t what you had hoped for.  The game cuts to a cutscene here, where you get to watch Locke’s fireteam Osiris fight and kill M’adama.  You don’t get to do anything.  You don’t get a boss battle.  You don’t get to earn your closure.  WHY?!?!  It was the perfect setup, something we have wanted, and the game takes it from us.

This game also sets a new (and completely unnecessary) precedent by adding 3 “missions” where there is no action, and the character has to talk to NPCs.  This is something completely foreign to Halo.  There have never been entire missions about talking to people, let alone the silly “talk to people to gather information” busywork that happens in the first of them.  Even more asinine is the fact that these missions are literally seconds long.  I was able to go to the waypoint-marked people, talk to them, and proceed to the next mission in under a minute.  Why are these even here?  We can do all the worldbuilding they provide during gameplay, and all the talking could have just as easily been a cutscene.  They only assist in the destruction of the hero fantasy, as the player is made to do the busywork of walking around and talking with people.

I feel that these 3 examples are more than sufficient to show why this game destroys the Halo fantasy, one where the player is the one saving the world.  Instead it replaces that fantasy with a movie, where the player watches Spartans do all these awesome, amazing feats and participates in none of them.  There are some missions which repair some of this fantasy later in the game (the Kraken had some of the fun the Scarab had; unfortunately it only interacted with flying forces :( ), but it is too little, too late by that point.  It almost appears as though the writers wanted to make a movie and piecemeal it into a game, and that is not the appropriate way to handle a game, especially one with as interesting and rich a universe as Halo.

The only other thing I want to talk about here is how weird it was playing the Chief with Blue Team.  Seeing them together was cool (I have read the first several books, and know of their awesome adventures).  However, the first time that I took too much damage as Chief and was downed, it was the most peculiar moment in my entire Halo history.  The Chief just asked for help (I think the line was: “Blue Team, need an assist”).  This immediately blew me out of character.  I realized that in all the games and hundreds of hours I’ve been the Master Chief, I’ve never heard him ask for help.  Previously, anytime I was bested in battle, I had to restart the level (or wait for a friend to find a safe location, so I could respawn).  I don’t know if I liked it or hated it, but it really threw me off.

Strategy and Sponges

Believe it or not, I always viewed Halo as a game about strategy and tactics (more tactics, technically).  The original game had a small set of groups of enemies that you fought over and over throughout the game (1 elite with 4-6 grunts; 4-5 elites, where a couple had better weapons; mixings of jackals with these as snipers or shield support).  Whether you realized it or not, you were developing strategies to handle these different groups of enemies, and you constantly assessed your surroundings.  How can I use cover from this location versus that other one?  There are several mechanics in Halo 5 that I feel negatively affect the player’s ability to develop these strategies naturally, and several mechanics that simply have poor strategic design.

One of the new mechanics is the fire team that runs with the player.  I really enjoyed this, it was reminiscent of Star Wars Republic Commando and a bit of the first Army of Two.  One of the features of this squad is the fact that they can revive you when you take too much damage.  This differs from the original games, where you had to restart at a previous checkpoint whenever you died.  This was a great convenience, and did a fantastic job of stopping me from feeling like I was “stuck” in areas (with the exception of one boss battle, which I will talk about later).

Unfortunately, squad revive is not all positives.  When talking about Halo strategy, I need to also mention that no one takes the strategic action on their first try.  You always try to just run in, guns blazing, and take out all the enemies before you revert to being strategic.  That is why this mechanic is bad for developing tactics.  I would constantly run in, get knocked down, get revived, and continue running in.  Only once in a blue moon would I not have an ally able to revive me, even in the silliest of situations (this probably happened only a couple dozen times during the entire campaign).  There were almost no times where I would be knocked back and say to myself: “Alright, that didn’t work.  What can I do differently to get through this?”.  The game never challenged me and my shenanigans; this design let me simply keep pressing forward with the worst and least interesting strategy available.


This strategy for handling different groups of enemies is also ruined when we start to look at the Prometheans.  In previous games, enemies were encountered in 2 styles: enemies that were already on the map when you arrived, and enemies that arrived via dropships.  This was a fantastic and simple design.  The player knew that more enemies were coming, and there was a moment of building tension as a dropship approached with a new set of troops to fight.  Prometheans ruin this thoroughly by simply appearing out of thin air.  There is usually a warning from an ally before this, something like: “More Prometheans incoming!”.  However, this is completely out of character, and confusing.  How do they know that?  The player has no way to predict when another wave of enemies will simply appear out of nowhere.  They also don’t know how many enemies to expect when this happens.  You have a pretty good idea how many enemies can fit into a dropship.  But how about teleporting?  How many enemies can the player predict they will have to work against and plan for when their opponents just pop into existence?

Unpredictability destroys strategy.  Why are so many competitive e-sports games moving further and further from random generation and unpredictable results?  Because watching players understand the game, predict what can happen, and react and play to that is a fun, enthralling experience.  When things happen “randomly” or in a manner that cannot be predicted, the player will simply stop trying to predict and stop caring about that portion of the game.  Why should they?  There is nothing for them to do there, nothing to plan or prepare.  They simply have to accept their fate, and this is one of the greatest missteps of the game.

One of the worst enemy designs is unfortunately the most common enemy in the game, the Soldier.  Never mind that they chose to simply create a humanoid biped rather than something more unique like an Elite or Knight.  This enemy has 2 traits that make him poor for strategy.

The Soldier exposes his core after taking a certain amount of damage, after which he is very vulnerable and able to be destroyed.  This is the same mechanic as a shield, only without recharging (the only thing that made shields strategically interesting).  Now there is no reason to focus on one Soldier over another; just keep putting bullets into them until they are dead, as they will not recharge their core cover.  I could be wrong here, but it also appeared to me that they gain a moment of invincibility during their transition to having an exposed core.  This is just unacceptable.  We fight hundreds of this enemy throughout the game, and they are just big bullet sponges.  You cannot kill them in a single shot with the sniper rifle; it takes 2 headshots to kill them.  That was NEVER true of any common-level enemy in any other Halo game.  But that is not the most frustrating part of this enemy’s design.  They can also teleport out of the way of damage.  They will literally become a wave of energy and zip around to a new location.  They do this to jump to high ground, dodge grenades, and flank the player when he gets too close.  This would be commendable, if they were not completely immune to damage while teleporting!  When they teleport like this, you basically have to stop the fight, find them again, and re-engage.  There is nothing you can do to affect or stop them.  If you spend your last grenade on a group of them and they all teleport away, you’re just out of luck.

The worst part is that this enemy is SO CLOSE to being fun, they just didn’t go the extra little bit to make it interesting to fight against.  I would suggest the following 2 changes to make the Soldier more fun:

First, alter core exposure to simply be regenerating health.  The more health they lose, the more exposed their core is, finally exploding when they die.  This would also allow headshots to be 1-shot kills again, as the transition to exposed would no longer be necessary.  It would also let the player visibly see how much damage they’ve done to the Soldier, as the core becomes exposed and covered based on how much health it has (solving the other issue: that you can never really tell how much damage they’ve taken until the transition).  They also would no longer become invincible for a period of time, allowing players that know where the core is, and are skilled at hitting it, to kill them quickly without having to grind through the same enemy twice.

Second, don’t make them invincible while teleporting.  Give them a shape during the move, and make them VERY vulnerable if shot while in it.  This allows the AI to make interesting decisions, such as repositioning, dodging, and flanking.  But it also allows the player to figure out when they will do this.  Do they always jump away from grenades?  Can I throw a grenade in, get them to transition, and one-shot them with a battle rifle?  These are interesting interactions that give the player the knowledge and power to overcome the enemy, rather than forcing them to spend the time to multi-kill every enemy.


I’m putting a break in, because this section is becoming longer than I expected it to be.

The other enemy that was negatively affected in Halo 5 is the Hunter.  This one really seems obvious to me, but they missed it, so here is my explanation: Hunter Pairs are DIFFERENT from normal enemies.

Players should absolutely have to fight Hunter pairs in unique ways compared to normal opponents, and in Halo 5 that is not necessary.  The only strategic difference from other enemies is that they appear not to take melee damage, and deal a lot more melee damage of their own.

In previous games, when a Hunter fired his fuel rod cannon, the player knew it, and the Hunter made a commitment.  Not only was the shot well telegraphed, but when they did fire, the beam would come out and they would be forced to aim at that location, unable to move their gun for a second or two.  This did a number of things, but primarily it allowed the player to notice they were about to fire, dodge the shot, and reposition before the Hunter had time to fully recover.  The player felt like they were outsmarting the Hunters, moving faster and predicting their moves.  This created some very fun gameplay, especially when the player had to predict and react to both Hunters in the pair coordinating their shots.  If the player got too close, the Hunters did a fair amount of melee damage, but it might be worth it to get an easy sticky grenade in, or a few melee punches of your own.

Now, being close is NEVER worth it, and the player is punished and never rewarded for taking that risk.  The Hunters’ new fuel rod cannons also have 2 modes: a single-shot explosive blast, and a rapid-fire machine gun mode.  This is exactly like all the guns the player has already faced.  Rapid-fire weapons are the most common in the game, and horribly uninteresting for something meant to get the player to think about enemies in a new way.  The explosive blast is another very common weapon type, with many enemies using turrets or cannons that fire these exact same style of shots.  The player doesn’t have to think about Hunters in a new way at all.  The Halo 5 Hunters are simply bullet-spongy, slower versions of the normal enemies you fight everywhere already.


We also need to take a moment and talk about “boss battles” in Halo 5.  The “Warden Eternal” that you encounter in mission 3 as Locke was not a fun fight.  This could have been written off as an uninteresting element of the game; however, they have to jam it down your throat.  You fight the Warden at least 7 times (I lost count: 2 single fights, a pair, then a trifecta).  It honestly became a joke to us; we began calling him Ultron, as he had seemingly endless bodies and awkward, boring dialogue.

During the battle, the Warden simply runs at you and melee attacks you most of the time.  If you get in a position he cannot reach, he will occasionally fire a black-hole-style shot that kills you in a single hit.  He also has an instant-hit orange blast that immediately drops shields (or kills you if they’re down).  He one-shots with almost everything he can hit you with, a truly frustrating experience.  Beyond that, you can only kill him by shooting him in the back.  This is where his design really falls apart.  The Warden was clearly designed as a team fight: one person stays in front and pulls aggro, another stays behind and hits his only weak spot.  This, however, is COMPLETELY IMPOSSIBLE in single player on Heroic.  The Warden stays focused on the player character over any AI characters, and I spent way, way, way too long at each fight just trying to get him to turn around so I could damage him.  At the double boss battle this became so frustrating that my friend had to take over and play for a bit.  When the group of 3 Wardens appeared toward the end of the game, I literally dropped my controller.  Why were they forcing me to replay this horrendous boss battle over and over?  I could not handle it any more.  I switched the game to easy mode, finally hitting the limit of my patience with the game.  From there the game became trivial to beat, removing all challenge.  I wasn’t about to turn it back up to Heroic though, as if I had to fight another Warden I would have just stopped playing the game.

The Warden suffers from the same plague as the rest of the Halo 5 enemies: he feels like a bullet sponge.  He is not interesting or dynamic to fight.  Most of his attacks are instant kills, which means that rather than playing around him, I just hid from him.

Finally, for this section, I have to ask: what happened to all the other enemy types?  Removing the Brutes could be forgiven, as they were very boring in Halo 2 and largely served as a direct replacement for the Elites that were your allies in Halo 3 (despite the fact that their armor/rage mechanics were very interesting and fun to fight).  But the Buggers?  These guys were (while a tad annoying due to their speed) very interesting to fight against.  They were the only infantry unit that could fly, and they only carried the weakest of the Covenant weapons, the plasma pistol.  This didn’t matter, though, because their movements and numbers were such a change of pace that fights with them felt out of control, but in an exciting and unusual way.

The Appearance

Halo 5 is a gorgeous game; no one can argue that.  Models and textures are detailed.  Particles and lighting are dynamic and look fantastic.  I really enjoy the graphical work that has been done to advance this game, truly commendable, especially the consistent 60 fps with independent dynamic resolution.

All that said, someone needed to think about the design of enemies and characters during gameplay.  Enemies in Halo 5 commonly blend in with the environment in ways that never happened in previous games.  Just take a look at the Elites in Halo 1 and Halo 2, and the Brutes in Halo 3, versus the Knights in Halo 4 and the Soldiers in Halo 5.  In particular, look at how the enemies are colored against backgrounds.  Notice how in the first three games, enemies are almost always a contrasting color with their environments.  Bright blue and yellow contrast with darker backgrounds, and in well-lit environments, the enemies’ darker segments really help them stick out.  You will also note that Elites light up like a Christmas tree when they take damage, as their bright blue shields stand out regardless of the environment’s coloring.

The Soldiers light up bright orange when teleporting or exposing their cores, but this does not help during normal combat.  343 obviously knew how difficult it was to tell that you were doing damage to enemies.  So much so, that they felt the need to add a UI element that appears around your reticle when you damage an enemy, something brand new to the franchise.

Marketing and Character Growth

Someone in marketing over at 343 (or whoever was in charge of that) really knows what they’re doing.  I was ABSOLUTELY HYPED when they released the Master Chief vs. Spartan Locke trailers.  These trailers hint at a fantastic story: two great heroes finally meeting in battle, and the destruction it wreaks on their surroundings.  One, the old ally we’ve grown up with and love.  The other, the new guy, trying to hold the world together against the uncontrollable actions of his predecessor.  Who will win?  What will happen to the universe after this amazing battle?

Well, nothing, actually.  That cutscene is the only time these characters are actively fighting each other (again, why can’t the player do this?  It wrecks the fantasy).  Let’s ignore the fact that the Chief, or any Spartan II, would wipe the floor with a Spartan IV on combat knowledge alone.  Instead, imagine a world where those trailers were the story of this game.  There is no Cortana; she died in Halo 4, as we remember.  The Covenant is in a civil war that is destroying their civilization.  The Prometheans are largely a memory, and still a military secret.

How would the Chief have grown as a character?  Remember the conversation between the Chief and Commander Lasky at the end of Halo 4?  He was finally showing some real chinks in his armor, questioning what it meant to be a soldier, to be a person.  It was a beautiful moment, something that did not take away any of his awesomeness, but reminded you that under all that armor is a real human being (with a lot of augmentations).  Imagine a game where the Chief finally had to grow up and move beyond those who had died and those he had left behind.  What kind of man would he be?  What kinds of decisions would he make in this new world, so strongly shaped by his decisions and actions in the old?  What would he do in a world without a constantly looming threat of humanity’s annihilation?

THAT is a perfect setup for a game about the Chief going AWOL, trying to find himself, to find a purpose again.  About Spartan Locke being called in to return him to the UNSC, either as a new man or in a body bag.  Where the Covenant is fighting a massive civil war right in the middle of all of this.  We could even find a Forerunner artifact in the midst of it, if you want to fight Prometheans too.

We don’t need universe-ending events, or characters brought back to life by cheap deus ex machina.  We can explore the characters that are already there: how they have been shaped by their pasts and forced into their futures, how the actions of previous games have come back to haunt us, and how maybe even new heroes will rise to help us save the day.

Conclusion

Well, as I finish writing this freaking book of a post about Halo 5, I realize that 343 had really big shoes to fill.  This isn’t just another shooter; this is a franchise that has been around for 14 years.  It shaped not only my shooter experience as a kid, but also my entire gaming experience, my friendships, and what I really wanted to do with my life (as I am going into game development/engineering now :P).  I can easily say that I would not be the person I am today without this series.  It is this deep love, this history of amazing experiences, that makes me so critical of its future.  Perhaps the bar has been set too high; perhaps I am too biased toward the classic games.  I personally don’t think so.  I think this game has been a major letdown for the series, and I will argue with you on that.  But who am I, just a fanboy?

EAE 6320 ASSIGNMENT 9 – Graphics Platform Independence and Shader Includes

Graphics Platform Independence

For this assignment we needed a platform-independent version of the original Graphics files (previously .d3d.cpp and .gl.cpp, now just .cpp).  Luckily, I did this several assignments ago.  I have a Context object that stores the platform-dependent data and has implementations of BeginFrame, EndFrame, and (the newly added) ClearScreen.  My Graphics file has actually changed quite a bit since the original; it is now simply used to store RenderableComponents and render them each frame.  You can see my Render function here:

bool Graphics::Render()
{
    bool success = context()->ClearScreen() && context()->BeginFrame();
    if (!success)
        return false;
    for (size_t x = 0; x < renderables_.size(); x++)
    {
        success = renderables_[x]->Render() && success;
    }
    success = context()->EndFrame() && success;
    return success;
}

 

Shader Includes

We also added a shaders.inc file that is now included at the top of our shader files.  This file redefines types from one platform to the other; in my case, the HLSL type names I write are mapped to their GLSL equivalents.  I chose HLSL because I have more familiarity with it, as Unity’s shader files are very similar.  With this change, we also had to make our shader files as platform independent as possible.  You can see my vertex shader here:

/*
    This is an example of a vertex shader
*/

#include "shaders.inc"

uniform float2 position_offset;

////////////////////////////////////////////////////////////////////////////////////////
#if defined( EAE6320_PLATFORM_D3D )
////////////////////////////////////////////////////////////////////////////////////////

// Entry Point
//============

void main(

    // Input
    //======

    // The "semantics" (the keywords in all caps after the colon) are arbitrary,
    // but must match the C call to CreateVertexDeclaration()

    // These values come from one of the sVertex that we filled the vertex buffer with in C code
    in const float2 i_position : POSITION,
    in const float4 i_color : COLOR,

    // Output
    //=======

    // A POSITION value must always be output from every vertex shader
    // so that the GPU can figure out which fragments need to be shaded
    out float4 o_position : POSITION,

    // Any other data is optional; the GPU doesn't know or care what it is,
    // and will merely interpolate it across the triangle
    // and give us the resulting interpolated value in a fragment shader.
    // It is then up to us to use it however we want to.
    // The "semantics" are used to match the vertex shader outputs
    // with the fragment shader inputs
    // (note that OpenGL uses arbitrarily assignable number IDs to do the same thing).
    out float4 o_color : COLOR

    )
    
////////////////////////////////////////////////////////////////////////////////////////
#elif defined( EAE6320_PLATFORM_GL )
////////////////////////////////////////////////////////////////////////////////////////

// The locations assigned are arbitrary
// but must match the C calls to glVertexAttribPointer()

// These values come from one of the sVertex that we filled the vertex buffer with in C code
layout( location = 0 ) in vec2 i_position;
layout( location = 1 ) in vec4 i_color;

// Output
//=======

// The vertex shader must always output a position value,
// but unlike HLSL where the value is explicit
// GLSL has an implicit required variable called "gl_Position"

// Any other data is optional; the GPU doesn't know or care what it is,
// and will merely interpolate it across the triangle
// and give us the resulting interpolated value in a fragment shader.
// It is then up to us to use it however we want to.
// The locations are used to match the vertex shader outputs
// with the fragment shader inputs
// (note that Direct3D uses arbitrarily assignable "semantics").
layout( location = 0 ) out vec4 o_color;

// Entry Point
//============

void main()

////////////////////////////////////////////////////////////////////////////////////////
#endif
////////////////////////////////////////////////////////////////////////////////////////

{
    // Calculate position
    {
        // When we move to 3D graphics the screen position that the vertex shader outputs
        // will be different than the position that is input to it from C code,
        // but for now the "out" position is set directly from the "in" position:
#if defined( EAE6320_PLATFORM_GL )
        gl_Position
#elif defined( EAE6320_PLATFORM_D3D )
        o_position
#endif
            = float4( i_position + position_offset, 0.0, 1.0 );
    }
    // Pass the input color to the fragment shader unchanged:
    {
        o_color = i_color;
    }
}
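
For reference, the kind of type mapping shaders.inc performs could look like the following.  This is a hypothetical excerpt to illustrate the idea, not my actual file; the real include does more than this.

```hlsl
// Hypothetical excerpt of shaders.inc: when building for OpenGL,
// map the HLSL type names used in the shader source to GLSL equivalents
#if defined( EAE6320_PLATFORM_GL )
    #define float2 vec2
    #define float3 vec3
    #define float4 vec4
#endif
```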

I had to add this file as a dependency in our AssetsToBuild.lua file, so that any time it changes, all of those assets are rebuilt.  I did this like so:

{
    tool = "ShaderBuilder.exe",
    dependencies = { "shaders.inc" },
    files = 
    {
        { source = "vertex.shader", target = "vertex.shader.bin", arguments = "vertex" },
        { source = "fragment.shader", target = "fragment.shader.bin", arguments = "fragment" },
    }
},

Then, for each asset that is built, we simply iterate through all the dependencies (if any) for that builder.  If any dependency is newer than the built target, we rebuild it.


I didn’t run into anything worth noting on this assignment (as it was mostly already done).


You can get the (again not appearing any different) version of the game here.

EAE 6320 ASSIGNMENT 8 – Effect files, and Effect/Shader Builders

Effect File

For this assignment we had to create an Effect file for managing the relationship between a vertex and a fragment shader, and create builders for both this new file format and our shader files.  The effect file simply stores the paths to the 2 shader files.  There was no need for subtables here, as we know exactly which 2 pieces of data we need.  Here is what my human-readable .effect file looks like:

return
{
    vertex = "vertex.shader.bin",
    fragment = "fragment.shader.bin",
}

And in Hex:

[Image: hex dump of the binary effect file]

This is stored as:

  • Vertex string length (uint8_t)
  • Vertex path c-string
  • Fragment path c-string

We extract the paths as follows: first load the length of the vertex path; the vertex path itself starts immediately after that; then jump ahead past it to the fragment path.

uint8_t vertexStringLength = *reinterpret_cast<uint8_t*>(fileData);
const char* vertex = reinterpret_cast<const char*>(fileData + sizeof(vertexStringLength));
const char* fragment = vertex + vertexStringLength + 1;

if (fragment >= fileData + fileLength)
{
    std::stringstream error;
    error << "Loaded data for " << i_effect_path << " is invalid";
    System::UserOutput::Display(error.str(), "Effect loading error");
    delete[] fileData;
    return nullptr;
}

Shader Builders

I use a single ShaderBuilder for my implementation of the builder.  This means each shader needs metadata about whether it is a vertex or fragment shader.  I provide that as an argument to the builder, like so:

{
    tool = "ShaderBuilder.exe",
    files = 
    {
        { source = "vertex.shader", target = "vertex.shader.bin", arguments = "vertex" },
        { source = "fragment.shader", target = "fragment.shader.bin", arguments = "fragment" },
    }
},

We use a separate #define for building debug/release shaders, as opposed to using the _DEBUG define, so that we can build shaders in a different configuration than the rest of the game.  The primary use of this is debugging shader bugs in a release build.  Naturally, the debug version of a shader is larger.  For the DirectX version, this is because it retains symbols and data used when stepping through it with a graphics debugger.  For the OpenGL version, it is because it retains all comments in the file for reference when looking at it (all of these are removed for release, as we want the smallest and fastest version for release builds).

[Image: DirectX Release Vertex Shader]

[Image: DirectX Debug Vertex Shader]

[Image: OpenGL Vertex Shader (Left: Debug, Right: Release)]


The fun problem that I ran into on this assignment was handling the relative data folder for effects.  The effect file stores the locations of the shader files relative to the game.  There are essentially 2 ways of storing that relative folder path:

1. Scan the path the game passes in for a new Effect, and load the shaders relative to that path.
   • Requires scanning the string
   • Requires concatenating the shader file names with relative paths
2. Add the relative data path to the shader paths when building the binary file.
   • Requires the data path to not change
   • Increases the size of the Effect file
   • Paths are loaded correctly without any concatenation

The solution I chose was a modified version of the second option.  I modified our property sheets to include a GameDataDir (“data\”) that defines the path of the BuiltAssetsDir relative to the GameDir.  I can now simply get this environment variable and prepend it to each shader file location to store the correct path.  This does still increase the size of the Effect file, but only by a handful of bytes per effect.

I finally chose to wrap the binary file loading into one interface.  When I realized we were loading 4+ different types of binary files from different locations in exactly the same way, this became a necessity.  The interface is straightforward:

//Loads a binary file into a temporary buffer, and returns a pointer to that buffer
char* LoadBinary(std::string i_file_name, size_t* o_fileLength = nullptr);

You can download the latest (basically unchanged from your perspective) version of the game here.

EAE 6320 Assignment 7 – Uniforms, Positions, Input (and an experiment in overkill)

This assignment was primarily about getting a mesh to move around on the screen.  We accomplish this by using a uniform vector position-offset variable in the vertex shader.  The same thing could be accomplished by adding the position to every vertex every frame, but that would be a very clunky solution.  We want to move the mesh around, and altering the mesh would require generating a new mesh every frame (which is unnecessarily costly) and would alter the base mesh state, which isn’t the goal here.  We simply want to give it a new position, so setting a uniform that is applied when drawing removes the mesh-generation cost, achieves our goal far more directly, and is much cleaner.  Vertex data is good for holding data specific to individual vertices; it is not the appropriate place for a draw position that is updated constantly.

I haven’t really used a uniform before, but I have played around with Unity’s shaders, which support setting variables inside the shader by name (unfortunately using string comparisons).  In our Effect class (where we store the representation of the shader data), we were required to provide a platform-independent way of setting the position.  However, I wanted the ability to set arbitrary variables within the shaders.  We can’t just reference everything by strings all the time, as that is really inefficient, so we must cache a handle to the uniform and modify it through that handle later.

To avoid the string comparisons, I have a cache function that accepts a string and generates a stored handle to the uniform.  The handles are stored in a hashmap with a HashedString as the key.  A HashedString is a string that has been hashed to an integer value, so comparisons are super cheap, and the original string is NOT stored with the HashedString, so we aren’t wasting any unnecessary space.  After that, we have a pretty simple interface:

bool SetPosition(eae6320::Math::cVector i_position);

//Cache a constant for dynamic setting
bool CacheConstant(const std::string &i_constant, Engine::HashedString* o_constantId = nullptr);

//sets the value of a cache'd constant
bool SetConstant(const Engine::HashedString &i_constant, const eae6320::Math::cVector &i_val);

It is also trivial to add new SetConstant overloads that accept the variety of parameter types accepted by the shaders.
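
For context, a HashedString along these lines can be sketched as follows.  This is an illustrative version using FNV-1a as the hash function; the engine's actual class and hash may differ.

```cpp
#include <cstdint>
#include <string>

// Illustrative HashedString: hash once at construction, compare by
// integer afterwards; the string itself is never stored.
class HashedString
{
public:
    explicit HashedString(const std::string& i_string)
        : hash_(Hash(i_string)) {}

    bool operator==(const HashedString& i_other) const { return hash_ == i_other.hash_; }
    uint32_t Get() const { return hash_; }

    // FNV-1a, chosen here purely for illustration
    static uint32_t Hash(const std::string& i_string)
    {
        uint32_t hash = 2166136261u;   // FNV offset basis
        for (unsigned char c : i_string)
        {
            hash ^= c;
            hash *= 16777619u;         // FNV prime
        }
        return hash;
    }

private:
    uint32_t hash_;  // only the integer hash is kept
};
```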

I did a pretty large refactoring of Graphics for this assignment.  I wanted a RenderableComponent to wrap the Mesh/Shader/Position data, but I didn’t simply want a system that could only handle that.  I ended up reusing a modified version of my GameObject/Component system from last semester.  I was able to tidy it up a bit using shared pointers (which actually fixes one of my previous concerns, where a Graphics Context could potentially be deleted before the Meshes or Effects).  I also converted Graphics into a class that contains a context and RenderableComponents (which extend the IComponent interface).  All the game’s logic (which previously resided in Graphics) has been moved to a namespace in the Game project called Gameplay.  It looks relatively similar, but instead holds a reference to the Graphics object (which draws any Renderables added to it).  The gameplay also tracks the GameObjects that it creates, and uses the user input to move their positions accordingly (just the square mesh right now).
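
Stripped to its essentials, the shape of that GameObject/Component arrangement is roughly the following.  These are illustrative names and a deliberately minimal sketch, not the engine's exact classes.

```cpp
#include <memory>
#include <vector>

// Components implement a common interface; GameObjects own components;
// a World owns the GameObjects via shared_ptr so lifetimes are managed
// automatically.
struct IComponent
{
    virtual ~IComponent() = default;
    virtual void Update(float i_dt) = 0;
};

struct GameObject
{
    std::vector<std::shared_ptr<IComponent>> components;

    void Update(float i_dt)
    {
        for (auto& component : components)
            component->Update(i_dt);
    }
};

struct World
{
    std::vector<std::shared_ptr<GameObject>> objects;

    void Update(float i_dt)
    {
        for (auto& object : objects)
            object->Update(i_dt);
    }
};
```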

[Image: Multiple copies of the same triangle mesh, and the box mesh.]


I actually ran into all sorts of issues working on this assignment, though none of them were related to the requirements.  Instead, most of my issues were self-created when adding in old code from last semester.  There were several instances of a namespace being changed in one file while forgetting to update the .cpp.  I also found out that several pieces of my old code are probably not adequate for what we are going to need (in particular, my old time class did not work properly).

Here’s the link to download and “play” the game!

Controls: WASD (Move the square Up/Left/Down/Right), Esc (exit)