Staffs Uni visits Develop 2014 – Day 3

Welcome to the last post in the Staffs Uni visits Develop 2014 series. Day 3 was just as packed as the rest, with some great speakers.

INDIE OPENING KEYNOTE: Growing Together: Being an Independent Studio in 2014

Presented by Nathan Vella
Attended by Paul Roberts

Unfortunately we don’t have many notes from the day’s opening keynote. To summarise, Nathan discussed the opportunities that having a team around you creates and how important it is to grow a good team. Here is the description from the Develop website:

‘Over almost 9 years Capy has built and maintained an independent 20+ person studio. Throughout this time, the studio has learned ultra-valuable lessons in everything from publisher work to funding your own titles, from doing their own PR to learning how to best produce their own titles without outside push, from cashflow to HR. In 2014 the possibilities for independent devs to move from just a few people to a full studio working on multiple projects has grown greatly, but for many this opportunity may seem far from reach. Leaning on the trials and tribulations of his studio, Nathan will try to share why he feels so strongly about growing a team and the big opportunities a team enables.’

And here is a short video of part of it:

Unity 5 Audio

Presented by Wayne Johnson
Attended by Paul Boocock

This was a live demonstration that introduced Unity 5’s new audio tools. Rather than me trying to recall all the fancy audio-related amazingness of Unity 5, you can watch the same talk, albeit from a different event, here:
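
To give a flavour of what the new tools look like from script, here is a minimal sketch using the Unity 5 AudioMixer API (the mixer asset, snapshot names and exposed parameter name below are made up for illustration, not taken from the talk):

    using UnityEngine;
    using UnityEngine.Audio;

    public class MixerExample : MonoBehaviour
    {
        // Assigned in the inspector: an AudioMixer asset with an exposed
        // "MusicVolume" parameter and "Combat" / "Explore" snapshots.
        public AudioMixer mixer;
        public AudioMixerSnapshot combat;
        public AudioMixerSnapshot explore;

        public void EnterCombat()
        {
            // Cross-fade the whole mix to the combat snapshot over two seconds.
            combat.TransitionTo(2.0f);
        }

        public void DuckMusic()
        {
            // Drive an exposed mixer parameter directly (value in dB).
            mixer.SetFloat("MusicVolume", -12.0f);
        }
    }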

Failing and Learning to be an Indie

Presented by Mike Bithell
Attended by Paul Boocock

Mike Bithell took to the stage to talk about his trials and tribulations as an indie developer. He discussed what it actually means to be an ‘indie’, commenting that the word has started to mean almost anything and is now pretty meaningless. Also, indies have become popstars! We end up believing that all indies are equal; ultimately this is dumb, but we still play along with it. Compare that one guy making a game in his bedroom to Notch: both are indies…

Popstars

We often make assumptions about how well a game is doing based on the news we see, but this isn’t always right: no one succeeds the first time and everyone fails. We had been to other talks at various stages of Develop and had consistently been told of ways to make a sure-fire hit, but Mike says this is bullshit. The best way to make a hit is to fail a lot, but to survive whilst doing it.

Redefining Alpha

Presented by David Braben
Attended by Paul Roberts

Frontier’s founder, David Braben, took to the stage to discuss Elite: Dangerous and how they used Kickstarter to enable them to create a game without a traditional publisher. David discussed how they work closely with the public and how this has changed the way they approach development compared with the traditional route. Being under public scrutiny is a very different process: many games don’t release footage or gameplay until the team is truly happy with it, but being so open with your development leaves your audience wanting to see more of your game earlier. This is both a good and a bad thing, but David certainly seemed to be of the belief that the process they are going through has helped the project.

TEARDOWN: Xbox One Kinect Sports Rivals: Champion Characterisation Technology

Presented by Nick Burton and Andy Bastable
Attended by Paul Boocock

This was part of the Teardown series of talks that RARE were doing at Develop and this one focussed on the development of the Champion scanning technology in Kinect Sports Rivals.

Champion

They had a few problems to overcome when developing this system:

  • R&D Work in a AAA game schedule
  • Perception that machine vision problems are ‘easy’
  • Make the experience playful yet accurate

So how did they go about getting it to work?

  1. Move into position
  2. Scan body & face with Kinect 2
  3. Create classifiers to determine facial features
  4. Use results to assemble final character

They came up with many different classifiers in the following categories: Face Shape, Body Size, Glasses, Facial Hair, Skin Tone, and Hair Style. The big issue with all this is knowing that it is going to work for everyone, which meant they had to do LOTS of testing! They used an automated system which gave a percentage accuracy by comparing the created face against the expected result across a large set of examples. By the end they had a very high level of accuracy and confidence in the system.
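
To give a feel for the overall shape of such a pipeline (purely illustrative pseudocode of mine, not Rare’s implementation; all type and category names are assumptions), each category can be treated as an independent classifier whose result feeds the character assembler:

    using System.Collections.Generic;

    // Hypothetical scan output: depth, colour and active-IR frames from Kinect 2.
    public class ScanData { /* depth frame, colour frame, active IR frame, ... */ }

    // Each category (face shape, glasses, facial hair, ...) gets its own classifier
    // that maps the scan to one label from a fixed set of character assets.
    public interface IChampionClassifier
    {
        string Category { get; }
        string Classify(ScanData scan);
    }

    public static class ChampionBuilder
    {
        public static Dictionary<string, string> BuildFeatures(
            ScanData scan, IEnumerable<IChampionClassifier> classifiers)
        {
            // Run every classifier independently and collect the chosen labels;
            // the results decide which pre-built assets the final champion uses.
            var features = new Dictionary<string, string>();
            foreach (var classifier in classifiers)
                features[classifier.Category] = classifier.Classify(scan);
            return features;
        }
    }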

There was a lot of talk around each of these classifiers, how each was implemented and what they learnt from implementing each stage. If you would like to see my additional notes on this, please just drop me an e-mail. One interesting thing I would like to point out is that they actually found skin tone by using the Active IR feed to create a greyscale image, which allowed them to correct the lighting in the RGB feed, something that would otherwise have been too unreliable given variable lighting conditions.
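
A naive sketch of that idea (my own simplification, not Rare’s code): treat the active-IR frame as an estimate of how much light is falling on each pixel, and divide it out of the colour frame before sampling skin tone.

    // Normalise an RGB frame using an aligned active-IR frame as a rough
    // per-pixel illumination estimate. rgb has 3 floats per pixel, ir has 1,
    // and meanIr is the average IR intensity across the frame.
    public static float[] CorrectLighting(float[] rgb, float[] ir, float meanIr)
    {
        var corrected = new float[rgb.Length];
        for (int px = 0; px < ir.Length; px++)
        {
            // Scale each pixel so brightly or dimly lit areas are pulled
            // back towards the average illumination.
            float scale = meanIr / System.Math.Max(ir[px], 1e-3f);
            corrected[3 * px + 0] = rgb[3 * px + 0] * scale;
            corrected[3 * px + 1] = rgb[3 * px + 1] * scale;
            corrected[3 * px + 2] = rgb[3 * px + 2] * scale;
        }
        return corrected;
    }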

Creating Waves

Presented by Ricardo Flores Santos and Joao Magalhaes
Attended by Paul Boocock

This talk looked at an implementation of wave simulation in a mobile game, Billabong Surf Trip. Whilst this game is now quite old, it was interesting to see how it still had a following, and that was driving them to create their next surfing game. This time, however, it is for the PC, with up-to-date graphics and more realistic wave simulation.

We looked at many different components of waves: shape, whitecap, soup, fog, spit, foam, and surfer tail, to name a few! Accurate fluid simulation of a whole wave is unfeasible, mainly for performance reasons, but they also mentioned how it would be hard to tame content-wise – to create specific waves for gameplay reasons. However, particle-based simulations such as Smoothed Particle Hydrodynamics are not out of the question, especially if used on narrow regions, and they would also allow them to create specific content.
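
For flavour, here is the core of an SPH step (a standard textbook sketch, not something from the talk): each particle’s density is estimated by summing a smoothing kernel over its neighbours, and pressure forces are then derived from that density.

    using System;

    public struct Particle { public float X, Y, Z, Mass, Density; }

    public static class Sph
    {
        // Poly6 smoothing kernel: W(r, h) = 315 / (64 π h^9) * (h² - r²)³ for r ≤ h.
        static float Poly6(float r2, float h)
        {
            float h2 = h * h;
            if (r2 >= h2) return 0f;
            float diff = h2 - r2;
            return (float)(315.0 / (64.0 * Math.PI * Math.Pow(h, 9))) * diff * diff * diff;
        }

        // Estimate density at every particle by summing contributions from all others
        // within the smoothing radius h (a real implementation would use a spatial grid).
        public static void ComputeDensities(Particle[] particles, float h)
        {
            for (int i = 0; i < particles.Length; i++)
            {
                float density = 0f;
                for (int j = 0; j < particles.Length; j++)
                {
                    float dx = particles[i].X - particles[j].X;
                    float dy = particles[i].Y - particles[j].Y;
                    float dz = particles[i].Z - particles[j].Z;
                    density += particles[j].Mass * Poly6(dx * dx + dy * dy + dz * dz, h);
                }
                particles[i].Density = density;
            }
        }
    }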

There were also many visual elements to how they will be generating the waves in the next game. Basically, the wave is generated as a 2D animation and then 3D effects are added to make it more realistic, such as tapering at the extremities and introducing a time lag to the animation to give the wave a peeling effect.
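
A rough sketch of the time-lag trick (my own illustration, not their implementation): if the wave animation is driven by a phase value, offsetting that phase along the length of the wave makes the break travel down the line rather than happening everywhere at once, and tapering the amplitude softens the extremities.

    using UnityEngine;

    public static class WaveShaping
    {
        // Height of the animated wave at a position along its length,
        // with a per-metre time lag to create the peeling effect.
        public static float Height(float alongWave, float waveLength,
                                   float time, float lagPerMetre, float amplitude)
        {
            // Delay the animation further along the wave so the break travels.
            float phase = time - alongWave * lagPerMetre;

            // Taper the wave towards both ends of its length.
            float t = Mathf.Clamp01(alongWave / waveLength);
            float taper = Mathf.Sin(t * Mathf.PI);

            // The underlying 2D animation would normally be sampled here;
            // a sine stands in for it in this sketch.
            return amplitude * taper * Mathf.Max(0f, Mathf.Sin(phase));
        }
    }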

The Power of the Cloud, Enhancing Your Game Experience with Cloud Services

Presented by Will Frost, Kavitha Mullapudi and Louis Deane
Attended by Paul Boocock

This talk mainly focused on the cloud implementation of Zoo Tycoon Friends. As I mentioned in an earlier blog post, Zoo Tycoon was also developed in Unity using an Azure backend. This allowed Microsoft to create a common core of C# code shared between the client, the server and the tests.

Moving the backend into a cloud architecture gave them many benefits. Will discussed the following:

  • Game logic validation
  • Social gameplay features (visit zoos and cloud saves)
  • Player notifications (4 hours to build an enclosure becomes a push notification)
  • Admin portal for customer support
  • Scalable, performant backend / auto scaling
  • Monitoring and logging
  • Testing

They made use of Cloud Services in Azure, using a mixture of Web Roles (IIS) and Worker Roles. Running two instances also gives them a 99.95% SLA. The architecture on the server side also made use of validation to ensure no tampering is possible from the client: they take requests and verify that they are possible in game. If there is any disagreement, the client is reverted.
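
A minimal sketch of that pattern (the type names and rule here are mine, not from Zoo Tycoon Friends): the same C# rules class is compiled into the client and the server, the server replays each request against the authoritative state, and if the two disagree the authoritative state is sent back so the client can revert.

    // Shared between client, server and tests – the point of a common C# core.
    public class PlayerState
    {
        public int Coins;
        public int Enclosures;
    }

    public class BuildEnclosureRequest
    {
        public int Cost;
    }

    public static class GameRules
    {
        // Pure game-logic validation that both client and server can run.
        public static bool CanBuildEnclosure(PlayerState state, BuildEnclosureRequest request)
        {
            return state.Coins >= request.Cost;
        }

        public static void ApplyBuildEnclosure(PlayerState state, BuildEnclosureRequest request)
        {
            state.Coins -= request.Cost;
            state.Enclosures += 1;
        }
    }

    // Server-side handler (running in an Azure web/worker role in their setup).
    public static class BuildEnclosureHandler
    {
        public static PlayerState Handle(PlayerState authoritative, BuildEnclosureRequest request)
        {
            if (!GameRules.CanBuildEnclosure(authoritative, request))
            {
                // Disagreement: return the authoritative state unchanged
                // so the client reverts its optimistic change.
                return authoritative;
            }
            GameRules.ApplyBuildEnclosure(authoritative, request);
            return authoritative;
        }
    }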

The discussion then moved on to testing, where efficiency is very important. Using Fiddler, Visual Studio unit tests and the shared C# code base all helped here (a small example follows the list below). Performance testing was also very important and the advice of the day was to start it as early as possible! Always consider:

  • Performance Goals
  • Load testing framework
  • Where to start
  • Early dry runs
  • Capacity Planning
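
As a small example of how the shared core feeds into testing (an illustrative MSTest sketch reusing the hypothetical GameRules and PlayerState types from the validation sketch above, not actual Zoo Tycoon code):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class GameRulesTests
    {
        [TestMethod]
        public void BuildEnclosure_IsRejected_WhenPlayerCannotAffordIt()
        {
            var state = new PlayerState { Coins = 50, Enclosures = 0 };
            var request = new BuildEnclosureRequest { Cost = 100 };

            // The same rule runs on the client, on the server and in this test.
            Assert.IsFalse(GameRules.CanBuildEnclosure(state, request));
        }
    }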

Testing early helps to identify problems sooner. Also, a small plug for VisualStudio.com and Application Insights: this tool helped them identify that Zoo Tycoon had issues with roles going to sleep in their cloud implementation, leading to poor usage of resources.

The talk then wrapped up with Louis Deane discussing his experiences using Azure and their upcoming product.

Developing for Project Morpheus

Presented by Ian Bickerstaff and Patrick Connor
Attended by Paul Boocock and Paul Roberts

This talk centred on how they developed Project Morpheus and the technologies behind making VR work. It was a very interesting talk and I picked up quite a few things I’d never even thought of before, especially with regards to the optics and the use of a magnifying lens (meaning the image remains the same no matter how far away you are from the screen).

The technologies that are now available and that make Project Morpheus work are:

  • High quality screen – latest in mobile developments
  • Wide angle optics
  • 360 degree head tracking
  • Binaural 3D audio
  • Tracked Peripherals

The talk then went on to discuss what we should be thinking about as designers when we make games for VR headsets. Consider fusion zones: the brain can only fuse over a limited depth range, so be careful not to make the player constantly change focus. Fuse the accelerometer and gyroscope data with the data from the camera to get accurately tracked movement with no drifting over time. Developers must take care with first-person action (an option is perhaps to make the camera third person and watch from a safe distance?). Don’t use cinematography such as lens flare, film grain and vignette.
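
A crude illustration of that sensor-fusion idea (a simple complementary filter of my own; Sony’s actual tracking will be more sophisticated): integrate the gyroscope for smooth, low-latency orientation, then pull the result gently towards the camera-derived pose so gyro drift never accumulates.

    using UnityEngine;

    public static class HeadTracking
    {
        // Blend gyro-integrated orientation with the camera's absolute pose.
        // angularVelocity is in radians/second, dt in seconds, and blend is a
        // small factor such as 0.02f per update.
        public static Quaternion Fuse(Quaternion current, Vector3 angularVelocity,
                                      float dt, Quaternion cameraPose, float blend)
        {
            // Predict the new orientation from the gyroscope (fast, but drifts).
            Vector3 deltaDegrees = angularVelocity * dt * Mathf.Rad2Deg;
            Quaternion predicted = current * Quaternion.Euler(deltaDegrees);

            // Nudge the prediction towards the camera pose (slow, but drift-free).
            return Quaternion.Slerp(predicted, cameraPose, blend);
        }
    }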

The talk then moved into a more technical discussion around software development considerations:

  • Stereo rendering is fairly heavy
  • Wider FOV means culling is more expensive
  • Good AA can be more important than native resolution
    • Humans are distracted by high frequency noise
    • Combination of several effects may give better results
    • Specular AA can improve the image a lot
  • High frame rate, V-sync
  • Maintain a high frame rate throughout development – testing is difficult with it
  • Lack of V-Sync is much more noticeable
  • Low latency pipeline
    • Latency between reading the tracking data and rendering the frame needs to be as low as possible
    • Don’t use more than double buffering
    • Re-projecting the image with the latest tracking data can improve the apparent latency
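
Picking up that last bullet, here is a simplified picture of what re-projection does (my own sketch, not Sony’s pipeline): the frame is rendered using the head pose sampled at the start of the frame, and just before display the image is warped by however much the head has rotated since, hiding a chunk of the pipeline latency.

    using UnityEngine;

    public static class LateReprojection
    {
        // Rotation-only correction applied as a full-screen warp just before scan-out.
        public static Quaternion CorrectionRotation(Quaternion poseAtRenderTime,
                                                    Quaternion latestPose)
        {
            // How far the head has turned since the frame was rendered.
            return latestPose * Quaternion.Inverse(poseAtRenderTime);
        }
    }

Because this correction is rotation-only, translation and anything that moved in the scene after the frame was rendered will still lag slightly; it only improves the apparent latency of head turns.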
