EAE Day was an incredible showing. There would be too much to describe in a single sitting, so I’ll focus on our game.
We were fixing errors and tweaking sensitivity right up until the last minute, but it all worked.
Kids loved it and were very adept at getting the cards to work with the camera, better than most of the adults. We received several important points of feedback:
- The camera was still a bit finicky; reducing sensitivity even further and shortening cursor lock-on times might help, as might an anti-shake routine.
- Holding the card in front of the camera, which was at eye level, made it difficult to see the screen.
- The game should have multiple cards performing different functions, like a small inventory of adventuring items. (This was great to hear, since we were already planning on testing this concept.)
- We should take advantage of as much data from the camera SDK as possible, including rotation data in each axis. (We hadn't considered this before, but we already have access to that data!)
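The anti-shake routine suggested in the feedback could be as simple as an exponential smoothing filter on the tracked position. A minimal Unity C# sketch (the class and parameter names are mine, not actual project code):

```csharp
using UnityEngine;

// Hypothetical anti-shake filter: exponentially smooth the raw tracked
// position so frame-to-frame jitter doesn't wiggle the cursor.
public class CursorSmoother
{
    private readonly float smoothing;  // 0 = no smoothing; ~0.85 = steady but laggy
    private Vector3 smoothed;
    private bool initialized;

    public CursorSmoother(float smoothing) { this.smoothing = smoothing; }

    // Feed in the raw position from the camera tracker once per frame.
    public Vector3 Filter(Vector3 raw)
    {
        if (!initialized) { smoothed = raw; initialized = true; }
        else smoothed = Vector3.Lerp(raw, smoothed, smoothing);
        return smoothed;
    }
}
```

The smoothing factor trades stability for responsiveness, so it would need tuning against the same lock-on times mentioned above.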
The following Thursday, we had a team lunch and a brief retrospective: not much that hasn’t been covered here, but we all certainly agreed that a more structured task management system was needed, as well as a more decisive management duo. We also planned out some light work for the coming months, enough to finalize game mechanics and explore additional stories to tell in this new interactive medium. It would also be nice to incorporate an actual native language…
EAE Day is a few days away! All our levels are fully integrated with each other, and I think we’re ready for showtime. I worked closely with Erica and Jack to import all remaining art assets, script the animations, arrange objects, and set shader properties to their liking.
I also ended up spending a lot of time helping with Git problems this week, but according to Sean, splitting everyone’s tasks by scene file has helped a lot in preventing even more. I’m starting to strongly consider switching to a VCS that allows exclusive file locks, to avoid this problem altogether. If my teammates won’t communicate which files they’re working on, then I’ll just have to get the system to do it for them.
Nonetheless, I’m very impressed with our accomplishments. Teammates have even stopped complaining about camera instability; it seems to work well enough for our purposes, i.e. showing off something cool that few game devs have attempted.
Much progress. Jack made a new “eye” cursor and animated it, for when the player focuses on an interactive object on screen. I incorporated the animations, and devised a script that will orient the cursor’s collider with the camera at all times using raycasts, in order to ensure that whatever is directly behind the cursor is what will be activated. Simple but effective hacks are the best.
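The cursor script works roughly like this (a reconstruction in Unity C#, not our exact code; it assumes the cursor sits on its own layer, excluded from the raycast mask):

```csharp
using UnityEngine;

// Reconstruction (not the actual script): keep the cursor's collider facing
// the camera, and raycast through the cursor each frame so whatever sits
// directly behind it is the object that gets activated.
public class EyeCursor : MonoBehaviour
{
    public Camera cam;          // scene camera, assigned in the editor
    public LayerMask hittable;  // everything except the cursor's own layer

    void Update()
    {
        // Billboard the cursor (and its collider) toward the camera.
        transform.rotation = Quaternion.LookRotation(transform.position - cam.transform.position);

        // Cast a ray from the camera through the cursor's screen position.
        Ray ray = cam.ScreenPointToRay(cam.WorldToScreenPoint(transform.position));
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, Mathf.Infinity, hittable))
        {
            // The first thing behind the cursor is the activation target.
            hit.collider.SendMessage("OnCursorFocus", SendMessageOptions.DontRequireReceiver);
        }
    }
}
```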
I also helped create a prefab (template) for all new levels, which includes the cursor, in effect standardizing both our core mechanic and our visuals across levels. Some people were confused about how prefabs work, but once I explained that changes made to a prefab instance (as opposed to the prefab itself) are not replicated to other instances, they were all over it.
I’m starting to see a lot of Git-related problems, now that we’re all working at full steam. Part of these are due to user inexperience, but some are simply due to a proliferation of merge conflicts. It’s possible that we’ll need to formalize a workflow to avoid this.
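One workflow option (my own sketch, not something we've adopted): mark Unity's YAML assets as binary in `.gitattributes`, so Git refuses to auto-merge them and every conflict forces us to pick one side, effectively giving each scene a single owner:

```
# .gitattributes (hypothetical): stop Git from auto-merging Unity YAML files
*.unity  binary
*.prefab binary
*.asset  binary
```

Unity also ships a Smart Merge tool (UnityYAMLMerge) that can be wired in as a Git mergetool, which might be gentler than outright binary treatment.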
Karthik once again worked overtime for us, making a build of our game to be shown at a small indie game festival happening while I'm at work. I'm excited to hear the results!
Big build review. You can almost guarantee that the producers are going to feel gutted after each one. Unfortunately, when their self-confidence drops, so does the coherence of our team. Again come the suggestions for removing the camera. I didn’t even think our showing was that bad…
We ended up meeting yesterday (a Saturday) to talk design. Aqeel and Karthik came in with a whole sketchbook of level design ideas, none of which use camera controls. They’ve lost faith in it. It took a long time just to get to a point where the team was comfortable with keeping it, at least until our display at EAE Day. We’ve invested too much and there’s not enough time to change.
Sean has brought up the point that we still haven’t really given the camera a fair shake. I understand the team’s fatigue: we’ve been dragging this broken game around for two months already, and still haven’t “found the fun”, according to them, but we’ve only just started to create functional levels. Perhaps because all our camera techs were on the original Plato’s Cave team, this fatigue is even more disheartening to them…
Nonetheless, we’re committed. We penned down a multi-scene level design that will demonstrate all the camera features we’ve gotten working thus far, as well as tell a complete narrative, and give the player a safe space in the first few scenes to get familiar with the controls. This design also gives each person on the team a distinct task for the next two weeks, which is impressive, given our track record in sprint planning. My work is on refining the cursor system, making it into something like a point-and-click adventure game.
The “Lens of Truth” works perfectly, and it’s brilliant. I had no idea it would be possible to implement using shaders, let alone that shaders would turn out to be the ideal solution. The concept is relatively simple, using two shaders: one for the “lens” and one for the hidden object. The “lens” shader writes a 1 into the stencil buffer (which, as the name suggests, exists for precisely this kind of application) wherever the lens object appears on camera; the “hidden” shader then reads the stencil buffer and renders a pixel only where it finds a 1. It’s like masking sprites in 2D, but with arbitrary 3D geometry. Simple and effective, as long as you mind the draw order.
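In Unity's ShaderLab, the stencil half of each shader looks roughly like this (condensed sketch only; in practice each shader lives in its own file, and the hidden-object shader still needs its normal lighting and texturing passes):

```shaderlab
// 1) The "lens" writes 1 into the stencil buffer wherever it draws.
Shader "Custom/LensMask" {
    SubShader {
        Tags { "Queue" = "Geometry-1" }  // draw before the hidden object
        ColorMask 0                       // write no color, only stencil
        Pass {
            Stencil {
                Ref 1
                Comp Always
                Pass Replace              // stamp 1 everywhere the lens covers
            }
        }
    }
}

// 2) The hidden object renders only where the stencil already reads 1.
Shader "Custom/HiddenObject" {
    SubShader {
        Pass {
            Stencil {
                Ref 1
                Comp Equal                // pass only inside the lens's stamp
            }
            // ...normal lighting/texturing goes here...
        }
    }
}
```

The queue tag is what "minding the draw order" comes down to: the lens must stamp the stencil before the hidden object tests it.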
The level we planned last week is practically finished. I think we’re going to talk about new levels soon.
Not much to speak of yet, but Metaio works, and we now have a level design revolving around object tracking, using a custom printed card. It’s based on a traditional Ute folktale featuring a brave young hero and a series of gruesome battles. Next week, we implement it.
My own part of this week’s project is to make hidden objects that only reveal themselves when you pass a “window” over them, similar to the Lens of Truth. I particularly enjoy engineering new mechanics…
I’m starting to recognize a pattern at our team meetings.
Some teammates think we should change our theme again; others still think we should ditch the camera and cut our losses, doing something simpler like touch gesture recognition. The only thing that doesn’t change in these meetings is that Paul entertains each idea with the same earnest interest, unwilling to put his foot down on any one of them. I think, however, that we may finally have some semblance of a new direction.
After looking into Metaio, it’s possible that it will solve most of our camera tech problems. Karthik showed us a Metaio demo app on his phone, and it performs object tracking very well. He’s now attempting to integrate it with our existing “level” (our GDC demo), which should be much simpler than the other frameworks we’ve seen, since it’s built for Unity.
Once that’s finished, we can experiment with its capabilities and try to design some puzzle elements around them. That’s right, some old fashioned rapid prototyping! It’s vastly better to prove a design point with a demo than a whiteboard. Here’s hoping.
Things aren’t moving quite as smoothly anymore. What I didn’t mention before was that we’ve had a lot of heated debate over our game’s target hardware. Tablets, consoles, PCs, museum kiosks? Each one poses a challenge, given our usage of a camera. Many teammates are even suggesting that we ditch the camera entirely, that it’s never going to work to the point that it’ll be fun to play, or that it’s holding us back in terms of design and distribution options. However, it’s the thing that made the industry panel say “yes, do this!” It’s the most unique feature our game has, even given our new storytelling goals. How much does market penetration really matter for a student project, especially this early?
The fact that I’m trying to debate this point in a blog post is a sign that we’re directionless. A lot of the team’s worry is over the stability of the camera tech, and the fact that we’ve only been able to get one function working reliably: static image recognition. Paul was really excited about the idea of recognizing arbitrary shapes, i.e. using household objects as game controllers, not just pre-printed cards. However, I think our camera techs have oversold the feasibility of this prospect.
We’re also considering ditching OpenCV. It’s been giving us a lot of pain, especially in terms of mobile integration. We’ve been avoiding commercial software as much as possible, but it turns out that Metaio, the SDK used by the makers of Ice-Bound, would be free for student projects.
A much bigger problem is that, while our camera engineers flounder, the rest of us have nothing to do…
GDC was amazing. However, Jack and I almost missed the flight home because a Chinese New Year parade (2 weeks late!!?) cut off half of the city. We somehow found a way under it through the subway tunnels. It was practically miraculous.
GDC was amazing for the amount of support I saw for diversity in games. It was a welcome breath of fresh air in contrast to what’s been making the news in the past few months. Aside from that, I also collected a lot of great advice from other indie devs, on our game, on tech, and on the industry. I gave out more business cards than I was expecting, although I doubt I’ll hear from anyone.
GDC was also very tiring. We spent hours running around the city trying to get into industry parties, most of which ran out of tickets within the first day. I think I’ll be more prepared for next year’s.
Things seem to be moving quickly. After a few grueling meetings, we decided we like the idea of moving from “shadows on a cave wall” to “cave paintings that come to life”, and can use this concept as a means to retell Native American legends. This has a lot of implications, mostly in terms of the research and expert advice we’ll have to get, but for now, since there’s not much time before GDC, we’re focusing on making a small playable demo, hopefully with Android support.
I even came up with a new name for the project, after the pigment used in cave paintings! Everyone seems to like it.
For the time being, we’ve split the engineering team into camera tech and level design. OpenCV is not playing nice with Unity, so after I helped Sean with importing sprite sheets and animating them in Unity, he mocked up a demo that fakes image recognition with the space bar, for testing purposes. Paul made a video (don’t tell anyone it’s fake…)
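Sean's stand-in probably amounts to something like this (hypothetical Unity C#; the real tracker would raise the same event from OpenCV instead of the keyboard):

```csharp
using UnityEngine;

// Hypothetical stand-in for image recognition: the space bar fires the same
// event a real camera tracker would, so levels can be tested without OpenCV.
public class FakeCardTracker : MonoBehaviour
{
    public event System.Action CardDetected;  // gameplay code subscribes here

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space) && CardDetected != null)
            CardDetected();                   // pretend the card was recognized
    }
}
```

Keeping the event shape identical means the fake tracker can later be swapped for the real one without touching gameplay code.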