
GDC16: Arbitrary amount of 3D data running on Gear VR by Vinh Truong

Transcript

Arbitrary Amount of 3D Data, Running On Gear VR

Vinh Truong

Hello, I'm Vinh, and I work for a company called Umbra.

In this session I will ->

So we are a relatively small company based in Helsinki in Finland.

Umbra is mostly known for 3D rendering performance optimization software.

Over a hundred games have shipped with our technology.

Big triple-A games like Destiny, Fallout 4, and The Witcher 3.

We also ship with every single installation of the Unity engine; occlusion culling in Unity is our tech.

Why Umbra?
- Better frame rates
- Larger and more detailed levels
- Automates manual work
- All platforms

So, what are the benefits? Why do developers use Umbra?

[1] The main benefit is, of course, that we improve the performance of your game.

[2] A benefit of increased performance is bigger, more detailed worlds.

[3] We achieve that because our technology automates all the work associated with these benefits.

You can focus on content creation and let Umbra automatically handle the challenge of making it run well.

[4] And finally, our tech runs on every platform: PCs, consoles, and mobile devices.

Demo Overview
- Flying in a 3D model of Boston (~10 km area) from Google Earth, after Umbra optimizations
- Running on Samsung Galaxy S7 Edge: very powerful, but clearly not enough for running the input
- Gear VR raises the performance requirements

Now I am going to explain a little bit about our demo.

[Content] Our demo involves flying in a huge 3D model of Boston from Google Earth. In the demo we have around a 5-kilometer area of Boston loaded.

The original scene before Umbra optimizations consists of over 30 gigabytes of meshes and textures: hundreds of thousands of individual objects and sixty thousand individual textures.

[Platform] The demo runs on the newest Samsung S7 with Gear VR. We used to run on the S6, but the S7 happened to come out just recently, so we switched to that.

Even though the S7 is a very powerful mobile device, the original scene could never run as-is on pretty much any consumer hardware. Even if you had enough memory, you would die trying to do that many draw calls every frame.

The problems are multiplied when you try to do all this in VR, so all VR requirements and constraints apply here.

[Extra] The Samsung S7 and Gear VR happened to be convenient platforms for developing the demo and the technology behind it, so that's the main reason we chose them.

[Results] So the demo runs at a smooth 60 fps in VR.

There is pretty much no aliasing; we use 2x MSAA and mipmaps.

Runtime memory is bounded no matter what the camera is looking at: around 100 MB of VRAM at maximum.

[Transitioning] So that's neat. What is even cooler is that we got all these performance wins simply by feeding Umbra 4 all that 30 GB of data and letting it, completely automatically, optimize the scene for us.

Demo Overview
- Runs at a smooth 60 fps in VR
- Very little aliasing: 2x MSAA + mipmaps
- Bounded memory no matter the camera position: 100 MB at maximum
- Streams assets over the network; can always fall back to lower detail, even if the network is slow
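The "always fall back to lower detail" behavior can be sketched roughly like this, a minimal illustration in C. The names (`Asset`, `pick_renderable_lod`) and the fixed four-level LOD layout are assumptions for the sketch, not Umbra's actual API:

```c
#include <stdbool.h>

/* Illustrative sketch, not Umbra's code: each asset has several levels
 * of detail, LOD 0 = finest, NUM_LODS-1 = coarsest. While fine LODs are
 * still streaming in over the network, we render the best level that is
 * already resident in memory, so something is always drawable. */

#define NUM_LODS 4

typedef struct {
    bool resident[NUM_LODS]; /* which LODs are currently in memory */
} Asset;

/* Return the finest resident LOD that is no finer than `wanted`,
 * or -1 if nothing usable is loaded yet (skip the asset this frame). */
int pick_renderable_lod(const Asset *a, int wanted) {
    for (int lod = wanted; lod < NUM_LODS; ++lod)
        if (a->resident[lod])
            return lod;      /* fall back toward coarser detail */
    return -1;
}
```

Because the renderer never blocks on the network and coarse LODs are small, this kind of fallback is one way the runtime can keep its memory use bounded regardless of what the camera sees.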


How It Works

It all starts with the user (e.g. an architect or game builder) creating 3D worlds.


With their tool of choice (Unity, Unreal, etc.), they can umbrafy these worlds. This means the worlds are sent to the Umbra Cloud... or to the Umbra Optimizer on a local computer.


Umbra then automatically restructures the data... and creates an optimized database that is stored locally or in the Umbra Cloud.


When the application (e.g. a game, BIM or a map) is running, Umbra will...

...tell it what to render next with 3D content streaming
...report what is visible with occlusion culling
...define which version of assets to use (level of detail)


ARM NEON Occlusion Culling
- Most expensive runtime operation
- Rasterizes occluder models on the CPU
- Operates in low resolution, generates conservative results
- Rasterization is embarrassingly parallel in nature: process multiple pixels/elements in SIMD

Discoveries, tips & tricks:http://www.slideshare.net/UmbraSoftware/gdc2014-boosting-your-arm-mobile-3d-rendering-performance-with-umbra


So all that work had to be done in advance so we can have tremendous wins during runtime. All our optimizations aim to give an upper bound on time and space for overall rendering performance.

We start by determining all objects visible to the camera. This operation is always conservative, so false negatives are impossible.

This query has to finish in less than 2 milliseconds, although you can, if you want, give us a little more time for even more accurate results.

This query is easily parallelizable across CPU cores, and each core utilizes SIMD to the maximum extent. On ARM platforms we use NEON extensively.

If you want more hardcore details about how we use NEON, we gave a talk about how we applied NEON in Umbra (linked above).
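The inner loop of a conservative visibility test can be sketched as plain C like this. This is an illustration of the general technique, not Umbra's code; a NEON version would process four or eight of these pixels per iteration with intrinsics, which is exactly why the per-pixel independence matters:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of a conservative depth test against a low-resolution occluder
 * buffer. Occluders are rasterized into depth[] (larger = farther); an
 * object may be visible if ANY pixel of its projected bounds is at
 * least as close as the occluders there. Conservative means false
 * positives are acceptable, false negatives are not: we test the
 * object's NEAREST depth, so we never cull something actually visible.
 * Each pixel is tested independently, which maps directly onto SIMD
 * lanes (e.g. ARM NEON). */
bool row_may_be_visible(const uint16_t *depth, int x0, int x1,
                        uint16_t obj_depth) {
    for (int x = x0; x < x1; ++x)
        if (obj_depth <= depth[x]) /* object not strictly behind occluders */
            return true;
    return false;
}
```

Returning early on the first passing pixel is also why the query fits a tight time budget: most visible objects pass almost immediately, and fully occluded ones scan only a small low-resolution rectangle.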

Conclusion
- Currently in closed alpha testing
- Built on top of Unity; available as a single .unitypackage file
- Preprocessing on Umbra Cloud

So we already have a working implementation of this, available as a plugin for Unity.

It's currently in closed alpha testing. If you're interested, you can apply to our alpha test program at umbra3d.com.

The first version runs on Unity only, but we will target other platforms and engines in the near future.

Q&A
Sign up for the Umbra for Unity alpha here: umbra3d.com/any-3d-content-any-device/

UP NEXT: SHOW LIVE DEMO FROM FAKE / SIDE BY SIDE COMPARISON

