Sprinting Toward Industry Panel Pitches

On Thursday, we pitch to a panel of industry professionals who will help determine which of the 10 games being prototyped will be made into full thesis games. So until then, we're sprinting toward that finish line, getting everything in place and bringing the game and the pitch together to make us look as awesome as possible. Roger, our professor, pointed out that getting the green light here is basically equivalent to getting a $250,000 investment in our idea. That's pretty intimidating.

So what are we doing to get there? We've been working hard to add two more levels to the one we showed on Thursday: one that shows off the object-tracking abilities of the camera tech, and one that shows off color matching. Sean, our lead engineer, and I will be pitching it together. Thursday we'll be making videos of our game and finalizing the pitch. Then the pitches will be done, and the real work will begin.

Practice Pitch for Greenlight

We continue to iterate on our ideas super rapidly. Today we came up with a lot of really cool ideas after practicing our pitch for the industry panel coming next week, which will help pick which of the 10 prototypes will be made into full thesis games.

While testing things last night to make our prototype video, we discovered that it's actually easier for the camera to recognize simple figures drawn on paper than the actual shapes of physical objects. This morning, we found that even slightly different handwriting samples work, though the matching is still strict enough that not just anything passes. For instance, check out this image:


So in this image, the “EAE” on the circle is what we used to calibrate the camera, and then I took a piece of paper around to other people and had them write EAE to see if it worked when it wasn’t an exact match. Both the “EAE”s in black on that page actually worked, but the cursive, curly one did not.
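That "close but not too close" behavior is what you'd expect from similarity-based matching against a calibration image. Here's a minimal sketch of the idea using normalized cross-correlation in plain numpy (a simplification; our actual pipeline goes through OpenCV, and the threshold value here is made up for illustration):

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equal-size grayscale patches.
    Returns a score in [-1, 1]; 1.0 is a perfect match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    if denom == 0:
        return 0.0
    return float((a * b).sum() / denom)

def matches_calibration(sample: np.ndarray, calibration: np.ndarray,
                        threshold: float = 0.6) -> bool:
    """Accept a held-up drawing only if it correlates strongly enough with
    the calibration image: tolerant of small handwriting variation, but a
    very different drawing (say, curly cursive) falls below the threshold."""
    return ncc(sample, calibration) >= threshold
```

The threshold is the knob: high enough to reject arbitrary scribbles, low enough that another person's block-letter "EAE" still passes.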

This experiment got us thinking about the possibility of using different glyphs that the player simply draws on paper and holds up to the camera to solve puzzles. Here's one puzzle idea:


In this level, the idea is that the door is locked, and the player has to hold up glyphs to spell "key," figuring out how to spell it in our made-up language from the images above the door. That may be too complicated; a simpler version might show a bunch of images above the door, one of which is a key, so the player just has to draw that same key and hold it up to the camera.
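The glyph-spelling version of the puzzle boils down to a tiny lookup-and-compare check. A hypothetical sketch, assuming the camera layer reports IDs for the glyphs it recognizes (the glyph names and mapping below are invented for illustration):

```python
# Made-up language: each drawable glyph maps to one letter.
GLYPH_TO_LETTER = {
    "glyph_spiral": "k",
    "glyph_dot": "e",
    "glyph_wave": "y",
}

def spells_word(glyph_sequence, target: str = "key") -> bool:
    """True if the recognized glyphs, held up in order, spell the target word."""
    letters = "".join(GLYPH_TO_LETTER.get(g, "?") for g in glyph_sequence)
    return letters == target
```

The door-unlock logic just feeds in the glyphs in the order the player presented them.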

We're also playing with other ideas: changing the light in the level by holding up different-colored objects to reveal new things on the walls; shadow puppets (still haven't given up on that); a completely dark room where the player has to hold up a light; creating a tune by holding up glyphs that trigger sounds in the proper order; and more. It's a pretty exciting time; we just have to nail things down and bring them together.

Finally, check out the video below. This is what we used as a tech demo/prototype for our pitch today.

To the future!

Major Pivot

We made a pretty major pivot this week, after wrestling with getting OpenCV and Unity to play nice and trying to find a way to meaningfully integrate the camera into the game design while keeping realistic expectations about what we could get the camera to do.

We spent most of the week staring at a fork in the road: two similar but sufficiently different games that we had to choose between them. There was the original pitch, which was proving difficult because the camera tech wouldn't work the way I had pitched it. And Sean, our engineer, had another cool idea that played more with the weird relationship between 2D shadows and 3D objects, and the gameplay possibilities of changing perspectives and surprise reveals.

Yesterday, we got together again at the beginning of class and came up with a door number 3 that we’re all pretty excited about.

Now the game starts with a calibration screen that tells you to hold up a square object and then a circle object. After that, you're presented with a level that looks like this:


Basically, Plato is now running to escape the shadow world while an evil shadow monster chases him, trying to drag him back. Plato runs on rails (the player doesn't control him directly), and along the way there are obstacles the player has to help him overcome by casting the proper "shadows," holding the right object up to the camera at the right time. In the level above, the first obstacle is a hole in the platform that you fill in by holding up your square object; the second is a rotating platform that you stop at the right moment by holding up your circle object; and the final one is a platform moving up and down that you stop, again at the right time, by holding up your square object.

This gives the game a timing mechanic, adding more fun and action than the simple puzzle game we had before. It also eases the technical burden: the camera only has to match the live frame against a previously captured image, rather than read shapes and orientation changes in real time.
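The new loop can be sketched in a few lines. This is a hypothetical simplification (the `Obstacle` type, field names, and timing values are invented here, not our actual Unity code): each obstacle stores which calibration shape it needs and a window of time along Plato's rail, and it clears only if the matching shape is held up inside that window.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    shape: str                      # "square" or "circle", captured at calibration
    window: tuple[float, float]     # (start, end) seconds along Plato's rail

def resolve(obstacle: Obstacle, held_shape: str, t: float) -> bool:
    """The obstacle clears only if the right shape is held inside its window."""
    start, end = obstacle.window
    return held_shape == obstacle.shape and start <= t <= end
```

Holding the square up too early, too late, or holding the wrong shape all fail, which is where the action comes from.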

Now we just have to finish a prototype before practice pitches tomorrow.

Wish us luck!