Massive Pivot

We are reorganizing the team and the game. We are pivoting away from FPS multiplayer. As long as we keep the camera we will still be a first-person game (although we could try something interesting with third person), so we need to figure out what type of first-person game we will be making.

This will give people motion sickness!

A panoramic camera in an FPS is definitely strange. One of the comments we hear most frequently when people look at the game is that it will give people motion sickness. And looking at all the distortion in the game it seems like that would be true.

But playing it is different from watching it. Most playtesters say they pick it up in a few minutes and that while it's different, it's doable. To be fair, I've played this game enough that I don't even notice the camera any more.

Microsoft even suggests taking medication for motion sickness so you can play their games! There are other suggestions like taking breaks, playing with the lights on, adjusting your distance from the screen, making sure your video card delivers high FPS, etc. But all of these solutions are on the player's side of things. What could be adjusted in the game itself to help people who struggle with motion sickness?

Increase your FOV. This suggestion appears in the comments of every article I could find on motion sickness, but it's not always included in the articles themselves. Why is this? Many console games don't allow you to change their FOVs even when they are really narrow.

Why narrow FOVs? One of the major benefits of a narrow field of view is a performance increase: you are rendering much less of the scene every frame. This is especially important on consoles, where performance is capped. PC games are often patched to allow wider FOVs, but console games don't always get that treatment.

FOV defaults for FPS games range somewhere between 55 and 80 degrees. Half-Life 2 had a default FOV of 75. Destiny is at 72 degrees. Modern Warfare clocks in at 65. Crysis 2 had a default of 55.
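One wrinkle when comparing these numbers: games don't agree on whether "FOV" means the horizontal or the vertical angle, and the same setting implies very different horizontal views at different aspect ratios. A small sketch of the standard rectilinear conversion (the numbers here are illustrative, not pulled from any of the games above):

```python
import math

def hfov_from_vfov(vfov_deg, aspect=16 / 9):
    # Convert a vertical FOV to the horizontal FOV it implies at a
    # given aspect ratio, assuming a standard rectilinear projection.
    vfov = math.radians(vfov_deg)
    return math.degrees(2 * math.atan(math.tan(vfov / 2) * aspect))

# A 65-degree vertical FOV at 16:9 works out to roughly 97 degrees horizontal.
print(round(hfov_from_vfov(65), 1))
```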

But increasing the FOV to 90 seems to help most cases of motion sickness. So why not go higher? If 90 is good, wouldn't 120 be better? This is where camera projection comes into play. Almost all games use a flat projection method, which means at 180 degrees you have infinite stretching (seen in Shaun LeBron's video below at 9:38).
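The divergence is easy to see numerically: with a flat (rectilinear) projection the image-plane half-width grows as tan(FOV/2), which blows up as you push toward 180 degrees. A quick sketch, not engine code:

```python
import math

def half_width(fov_deg):
    # Image-plane half-width for a flat (rectilinear) projection;
    # this diverges as the field of view approaches 180 degrees.
    return math.tan(math.radians(fov_deg) / 2)

for fov in (90, 120, 160, 179):
    print(f"{fov:>4} deg -> half-width {half_width(fov):8.2f}")
```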

So with a default video game camera you can’t go beyond 180 degrees. But is that a problem?

Wikipedia has this to say about FOV: “Different animals have different visual fields, depending, among others, on the placement of the eyes. Humans have an almost 180-degree forward-facing horizontal diameter of their visual field, while some birds have a complete or nearly complete 360-degree visual field. The vertical range of the visual field in humans is typically around 135 degrees.”

So while the human eye has distortion similar to a 50mm lens (on a full-frame 35mm sensor), our field of view is actually closer to 180 degrees on the horizontal and 135 on the vertical. In film, changing cameras isn't a problem for viewers; we have learned to accept rapid cuts from an 18mm fisheye to a 250mm closeup. But what is best for video games? I enjoy a wider FOV in games because it feels more immersive, but I also like zooming in to snipe in an FPS. I think, as in film, camera FOVs should be flexible to accommodate the needs of the game. Wide FOVs are great for navigating, but we have noticed aiming is more difficult. More things to think about.


FOV and Animation in FPS

I read through this article and found a lot of interesting topics related to FPS game design. Animation is a big element of gameplay that can contribute to motion sickness, but you want some animation to give the player a sense of weight and combat that 'floating' feeling.

While this is true for games with normal FOVs, we run into some issues with our camera. Wide-angle lenses minimize subtle camera motion. This is why GoPro videos are watchable even when the camera is flipping all over the place.

Our problem with a 360-degree camera is that it is the widest camera angle you can possibly have without duplicating information (you could do a 720- or 1080-degree camera, for instance, where the world is repeated multiple times). So subtle motion like head bobs has to be dramatically exaggerated to be visible. This is something we need to experiment with.

Another big topic in the article is character animation for the camera. Because you can see 360 degrees in our game, you also see ALL of your character at ALL times. This is a huge problem because you have an ugly, distorted mess right in front of your camera that actually blocks your view. To combat this we have removed the mesh renderer on the player model, so you can see your shadow but not your character mesh. Unfortunately this means we have no visible character animation at all.

Can we bring back this animation feedback through 2d UI elements? Or switching shaders? Would a 3rd person camera toggle benefit the weight of the character? More things to look at.

Sensory Overload Style Guide

Download with pretty pictures HERE.

Our camera distorts EVERYTHING. We will require extra in-game testing for every asset because objects will not look like you expect them to. Here is a cube (thick white lines) inside of a sphere (grey lines) around the player. You will notice that the cube looks rounded and the sphere looks square. The top and bottom of the sphere are stretched dramatically.

Distortion:  Objects closer to your eye level will be magnified. Objects below you will be stretched vertically. Straight lines become curved lines. The center of an object looks larger than its edges. Things that are close to you look significantly larger than you would usually expect.

Here are a series of rectangular shapes rendered with a 120 degree camera (left) and ours (right).

These two objects are the same distance away, but with our camera the object appears smaller yet expanded in depth. As you approach an object, the distortion becomes much more apparent, as seen below.
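The shrinkage follows from angle-proportional projection: an object's on-screen footprint is roughly its angular size divided by the total FOV, so the same object covers a third as much screen at 360 degrees as at 120. A back-of-the-envelope sketch (treating both cameras as mapping angle linearly to screen, which is only an approximation for the 120-degree case):

```python
import math

def angular_size_deg(width_m, distance_m):
    # Full angle subtended by a flat object of the given width, seen face-on.
    return math.degrees(2 * math.atan((width_m / 2) / distance_m))

def screen_fraction(width_m, distance_m, fov_deg):
    # Fraction of the screen covered when the projection maps angle
    # linearly to screen position.
    return angular_size_deg(width_m, distance_m) / fov_deg

for d in (10, 5, 2, 1):
    f120 = screen_fraction(1, d, 120)
    f360 = screen_fraction(1, d, 360)
    print(f"{d:>2} m: {f120:6.1%} of a 120-deg view, {f360:6.1%} of a 360-deg view")
```

Note how the angular size grows much faster than linearly once you get close, which matches how the distortion ramps up on approach.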

With our camera an object never leaves your field of view because you can see everything.

Principles of 360 degree camera success

Negative Space:  Our camera expands depth, and the magnified depth makes shapes look very different. Objects that felt very flat suddenly have dimensionality to them. Negative space becomes very important in identifying an object, as 3-dimensional shapes can appear very warped.


Cylinders: Withstand distortion very well, especially if their vertical segments are fairly short. Multiple short cylinders stacked together seem to work very well.

Squares and cubes: While it seems counterintuitive, squares and cubes can look fantastic with our camera, especially for levels and static objects. Depth is magnified and negative space is made more prominent. Square tiling patterns give a consistent understanding of depth.

Art Themes

Shapes: Simple geometric shapes with clear differentiation between them. Textures are relatively simple, with complex and accurate PBR settings for roughness, metalness, etc.

Colors: Colors will be used to differentiate objects in the game. Characters can use primary colors. Levels will be low saturation secondary colors (orange is a good choice).  The world is clean and bright.


Our characters are hybrid animals; American West Mountain Buffalion, Russian Wolfbears, Chinese Red Panda Pandas, Jackelopes, etc. They each hail from a separate culture and preserve some of that heritage in their props and costumes. Each character also has specialized boots that enable them to walk on walls and alter gravity.

We looked through a lot of different characters to get a feel for what worked in our game. We found 2 options that looked great with our distortion.

While they seem fairly different at first, they share some basic principles: cylinders and rounded shapes, complex negative space, and a small center. Your character will be invisible to you, so characters need to look their best to other players. Despite some early efforts to combine the two, it makes more sense to use the mechanized character shapes for weapons, and the Sonic/Mega Man-like attributes for characters.

Animations: Characters are bipedal and we will be using motion capture data in Mecanim. We are using an inverse kinematics plugin that will allow for live retargeting of the characters, which will smooth their transitions. Character tails and hair will be animated dynamically. All characters share the same skeletal hierarchy and will share animations.


Unlike other FPS games, you will not see your own gun on screen. With our detached aiming system it would be very difficult to use 3D models, so we will be using HUD elements to convey relevant information. The gun's identifying characteristics will need to be readily understandable from the reticle/crosshairs.

Reticles must be instantly recognizable, simple, and varied in color. Each gun gets its own crosshairs.


Because the gun is not visible on your own screen, guns will only be seen at a distance in other players' hands. Guns will have tracer effects, particles, and colored lights attached to them when fired.

Particles for Tracer Rounds and gun effects.

The gun’s silhouette must be instantly recognizable at a distance of 50 meters. Because the Guns are for enemies to identify, not for you to see, the guns will be BIG and chunky. A gun the same size as the character is acceptable. Guns should be composed primarily of cylindrical shapes.


Because we can potentially interact with every surface, every surface in the game will potentially be part of level design.

Repeating Textures: A consistent scale of features is important for users to gauge distance. Patterns should be multiples of 1 m per texture. Not everything in the level has to have tiling textures, but they need to appear often enough to help players understand scale, distance, and speed. Even a normal map detail on the material will be enough to convey an accurate sense of scale.

Here are some references of simple diffuse maps with accurate PBR settings for material definitions. Warm colors like orange and yellow work well. Emissive materials should be set to white.

Decals: We will be using a decal system to spice up our levels. Reusing decals of the same scale will also help give players an understanding of distance.

Racing games like Trackmania use tiling textures and decals to relay direction and speed. An atlas of sci-fi panels. Checkerboards, panels, and cross sections also help identify shape through distortion.

Lighting: Unity 5 has introduced high quality lighting features. While we may not be able to do screen-space effects, we can take advantage of Unity 5's PBR workflow to get very high quality materials and textures. We will use extra bounce light settings to keep shadows subtle.

Post Process Effects: Our camera is actually 5 cameras stitched together, so screen-space effects can look very strange. Camera blur behaves differently in the most distorted areas.
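The seam problem can be made concrete. Assuming the five cameras split the panorama into equal 72-degree slices (a hypothetical sketch of the idea, not our actual rig code), mapping a horizontal screen coordinate back to its source camera looks roughly like this, and any screen-space filter that samples across a slice boundary ends up blending pixels rendered by two different cameras:

```python
def camera_for_u(u, num_cameras=5):
    # Map a horizontal screen coordinate u in [0, 1) back to the yaw
    # angle it shows and the camera slice that rendered it, assuming
    # an angle-linear panorama split evenly across the cameras.
    yaw = u * 360.0
    index = int(yaw // (360.0 / num_cameras))
    return index, yaw

print(camera_for_u(0.0))   # left edge: first camera, yaw 0
print(camera_for_u(0.5))   # screen center: third camera, yaw 180
```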

Light Probes and Reflection Probes: Will be necessary to get lighting interaction with Guns, particles, character effects, etc.

Render Style: Deferred Lighting: With Deferred lighting we can have multiple moving lights. We have been testing outlines/toon shaders and they seem to help in differentiating objects.

Heads up display

The top and bottom of our screen are drastically distorted, so we will use those areas for health, ammunition, selected guns, etc.

Our current favorite font for titles is Piranha.

From Camera to Character to Level Design Basics

I've been working on the character controller this last week. We did a bunch of playtests of FPS games like Quake Live, UE4, CS:GO, Tribes, and Dirty Bomb to figure out how long it takes to cross a room, how long to get to another exit, how long engagements typically take, and other information so we can make informed decisions on balancing our character, levels, and weapons.

Because camera distortion is so extreme in this game, I made a shooting range to see when enemy characters appear too far away and polled my team on which ones they would shoot and which ones they would ignore. The answer was unanimous: 50 meters was the limit. I then adjusted the character movement speed so that the character would reach the max-range enemy in just over 4 seconds.
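The tuning above is just range over time; a tiny sketch of the arithmetic (the 50 m / 4 s numbers come from the playtest, the function name is mine):

```python
def run_speed(max_range_m, close_time_s):
    # Derive the run speed needed to close the maximum engagement
    # range in the desired time.
    return max_range_m / close_time_s

# 50 m in 4 s puts the cap at 12.5 m/s; "just over 4 seconds" means
# the actual tuned speed sits slightly below that.
print(run_speed(50, 4.0))
```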

Shooting Range Tests.

shooting range

I then tweaked and measured the player controller settings to get us more accurate level design details and have distilled them below. More testing to come.

Sensory Overload Level Design: What we know so far

Use Prototype to keep meters consistent. Only scale Prototype objects in Element mode or you will lose the 1:1 scale of the textures. Make levels out of multiple smaller pieces. Color objects with Prototype's vertex color system. Use colored point lights to call out features.

Our game is very different, so PLAYTEST EVERYTHING. We have a lot to learn. Share what you learn with us so we can add it to this document.

Character controller:
Our character can move 50 meters in 4 seconds.
The crosshair is 1.5 meters high.
Vertical Jump: 3.5 meters. A character can transition to overhead surfaces 3 meters above them and back smoothly. The character will automatically walk onto overhead surfaces 2 meters above them (feels strange) but can pass underneath them by disabling wall walking.
When jumping up and down the character will start rotating to walls 4 meters in front of it.
Horizontal Jump: At full speed the character can comfortably jump 7 meters horizontally, or 8 meters with a skilled jump. Longer distances are covered by catapulting or turning off gravity.
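As a sanity check on those jump numbers, plain ballistic motion relates jump height, launch speed, and air time. The gravity value here is an assumption (Unity defaults to 9.81 m/s², but FPS controllers are usually tuned much higher):

```python
import math

def jump_params(height_m, gravity=9.81):
    # Launch speed needed to reach the given apex, and the resulting
    # air time back down to launch height, under constant gravity.
    v0 = math.sqrt(2 * gravity * height_m)
    air_time = 2 * v0 / gravity
    return v0, air_time

v0, t = jump_params(3.5)
print(f"launch {v0:.1f} m/s, airborne {t:.2f} s")
```

At default gravity a 3.5 m jump keeps you airborne for about 1.7 seconds, which at full run speed would carry far more than 8 meters horizontally, a hint that our controller's gravity is tuned well above the default.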

WASD controls. Space to jump.
Shift detaches the reticle (screen-space aiming).
Q and E rotate your character (helpful with a detached reticle).
Ctrl turns off wall walking and you will fall from the last surface you were standing on. (We will experiment with changing this.)
Z re-centers your character to look straight ahead (helps you know where you will fall without wall walking, or when you get gimbal lock looking straight up or down).
Tab is sprint, but its use will be limited.

General Design of Levels:
3 or 4 main areas/rooms for a small level, 5–7 in a large arena. Enclosed maps with angular walls are easier. Open maps and curved walls will require more testing.
Where possible keep wall/room segments at multiples of 5 meters so we can make modular environmental pieces later.
If a smooth transition is required, angle walls 45 to 60 degrees to minimize the transition hiccup. This is not a hard rule, as jumping makes all floor-to-wall transitions smooth (even overhangs).
Negative space and parallax depth look great with our camera. Cylinders show the least amount of distortion.

Room size:
It should take 3-4 seconds to get to at least 2 separate exits from wherever you are standing in a room.
The longest distance you can see before you are obstructed will be 50 meters (4 seconds of walking).
Create loops (where you can choose to go another direction and get back to where you started) not linear paths or dead ends wherever possible.
Tunnels connecting rooms must take only 1-2 seconds to get through and should be at least 5 meters wide.
Every room needs at least 3 elevations of combat.
If you create windows, the character must be able to fit through them. Smaller windows will require turning off wall walking to shoot through them, which will be something to explore later.

Every object we create needs to be at least 3 meters thick. 2 meters of thickness can work (if you test it), but 1-meter-thick objects play havoc with our wall walking and will launch players in unexpected ways.
Rounded surfaces now work but require additional testing.
Pipes are smoother to move over with more resolution; 64 or 128 vertices are better. An 8 meter radius or larger feels comfortable to move inside of, but for passing through tunnels smaller sizes might be necessary.
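The resolution guideline has simple geometry behind it: the worst-case gap between a true circle and its polygonal approximation (the sagitta) shrinks roughly quadratically with vertex count. A quick sketch for an 8 m radius pipe (the formula is standard; the cm framing is mine):

```python
import math

def sagitta(radius_m, vertices):
    # Maximum deviation between a circle and the flat segments of a
    # regular polygon with the given vertex count approximating it.
    return radius_m * (1 - math.cos(math.pi / vertices))

for n in (8, 32, 64, 128):
    print(f"{n:>3} verts: {sagitta(8, n) * 100:6.2f} cm deviation")
```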

Unity has a limit on how many overlapping dynamic point lights it can handle before it breaks our camera. The max is 4; we have it set to 2.
We want repeating patterns of uniform scale over the map to help the players register depth and distance. 1 meter tiles register very well.

We are planning to use a combination of tiling textures for the ground planes, a single texture with multiple materials on it for prop variation (like this), and decals.


This week we had EAE day where we showed our thesis game along with the cohort ahead of us and some of the other games made by the program. We got some really good feedback from industry pros, students, and faculty members who hadn’t seen the game much. I was really impressed how well people were able to use the detach camera controls and definitely think we can make it a main feature of the game.

From the other games shown, my favorite was a top-down Temple Run-style game played with 'Twitch Plays Pokemon'-style controls over the web on your mobile phone. It was really fun to see everyone trying to steer the character together as he slowly dodged obstacles in his way.

Animation system dropped in

We are using Mixamo to rig our characters, which is pretty fun because we can drag and drop animation systems into Unity and they already work. We still don't have our character movement speed locked down, but once the controller is locked I will go through and tweak the animations to fit.

John also dropped in FinalIK (an amazing inverse kinematics unity plugin) and it just seems to work! We played with it for a little bit and the character now procedurally blends to aim the gun at the on screen target and automatically places its feet on the surface. It even works with the detached camera though it gets a little funny when crossing over directly behind the character.   We got it working on the local player last night and after a bit of fiddling it turns out the animations will pass over our network with just a little tweaking. This gives us quite a bit of juice for free. We will definitely be looking to exploit more of the FinalIK system in the future.


Color Scheme test


Testing out some textures and colors for the hippo. The bunny is on hold until we have some other characters figured out. Zum modeled this one off the base mesh I created. I added the hard surface details, carved out some shapes in nDo, and then textured it using dDo. Quixel makes this process a lot of fun, especially with the ability to reload and rebake the base textures. I do wish there was a way to draw on the model in 3DO, but I can work around that a bit by using ZBrush or Blender's built-in bake tools to get an idea of where nDo features should be drawn. I think this hippo needs a fun paint job (maybe flames, racing stripes, or stickers) in the future.