BENCHMARK
THE INTERNATIONAL MAGAZINE FOR ENGINEERING DESIGNERS & ANALYSTS FROM NAFEMS

January 2019 issue . . .

• Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators

• A Guide to the Internet of Things

• Simulation of Complex Brain Surgery with a Personalized Brain Model

• Learn How to See

• Prediction of Clothing Pressure Distribution on the Human Body for Wearable Health Monitoring

• What is Uncertainty Quantification (UQ)?

• Efficient Preparation of Quality Simulation Models - An Event Summary

• Excel for Engineers and other STEM Professionals


Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators

Zoltán Hortsin, AImotive

Simulating the images of unique cameras used for self-driving was a driving force behind the development of a new engine for a simulator designed for autonomous vehicle development.

Over the last 18 months, the importance of simulation in the development of autonomous vehicle technologies has become widely accepted. Industry experts and analysts alike are claiming that enormous distances must be covered by self-driving systems to achieve safe operation in varied road conditions and environments. Meanwhile, certain weather conditions and traffic scenarios are extremely rare, resulting in limited test opportunities. The only way to cover the distances and reach the diversity required for the safe operation of highly automated vehicles is through testing these systems and their modules in virtual environments. The demands of simulation for self-driving cars are extremely high, and not all simulators are created equal.

Simulation for Self-driving

Simulators for autonomous technologies must be comprehensive and robust tools, offering at the least:

1. A diversity of maps, environments, conditions and driving cultures.

2. Repeatability of tests and scenarios for the continuous development of safety-critical systems.

3. Ready access for self-driving developers and engineers to accelerate iteration times and development loops.

However, these requirements only scratch the surface. The demands of self-driving are unique; as a result, only a purpose-built virtualization environment can become a full solution. Several industry stakeholders have built proprietary solutions based on game engines. While game engine rendered simulators can provide the above-listed characteristics, they do not solve all the challenges of self-driving simulation. Beyond the basic demands listed above, a true self-driving simulator has to offer more:

1. The utmost level of physical realism in vehicle and road models, weather conditions, and sensor simulation.

2. Pixel-precise deterministic rendering to ensure that minor differences in simulated environments do not affect the results of repeated tests.

3. The efficient use of hardware resources, including the ability to run on any system from laptops to servers, and to utilize the performance of multiple CPUs and GPUs.

The Limitations of Game Engines

These more specific demands cannot be answered efficiently by game engine based simulators. Their focus is inherently different. A game engine is designed with a commercial end-user in mind and is built to utilize average hardware setups while being optimized to offer the best game performance. Furthermore, it is often 3D and graphic artists that create the final visual effect to ensure a spectacular world for the consumer, rather than a physically correct environment.


AImotive encountered several problems while using a game engine-based simulator. One of the most notable was how game engines are not prepared to simulate the images of unique sensors such as ultra-wide field of view fisheye lenses or narrow view long-range cameras. This is fundamentally because such views are almost never used in games. As a result, the previous iteration of our simulator contained custom modifications to the engine. However, as these were not organic elements of the code, they were also a bottleneck. On the one hand, they affected the performance of the simulator. On the other hand, certain effects built into the game engine could not be used on intermediary images, only on the final render, which resulted in less realism. Of these more unique sensors, ultra-wide field of view fisheye cameras were the most exciting challenge. The following sections will outline the approach we took to overcome these difficulties.

Synthesizing Camera Images

A camera's operation can be divided into two parts: the optics and the sensor. Rays of light move through the lens and arrive at a given pixel of the sensor. When simulating camera images, it is this process that has to be recreated. There are several mathematical models for this projection, the commonality between them being that they all result in a 2D pixel position from 3D data.
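The simplest such model is the rectilinear pinhole projection; a minimal sketch for orientation (the function name and all parameter values here are illustrative, not taken from the article):

```python
def pinhole_project(point_cam, fx, fy, cx, cy):
    """Map a 3D point in camera coordinates to a 2D sensor pixel using
    the pinhole model: perspective divide by depth, then scale by the
    focal length and shift to the principal point (cx, cy)."""
    x, y, z = point_cam
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 0.5 m to the right and 2 m ahead, with a 1000 px focal length:
print(pinhole_project((0.5, 0.0, 2.0), fx=1000, fy=1000, cx=640, cy=360))
# → (890.0, 360.0)
```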

However, when discussing images taken with ultra-wide-angle lenses, further difficulties arise. We use fisheye lenses to cover ultra-wide fields of view; these lenses do not create rectilinear images, but exhibit barrel distortion. As a result, projecting them onto a two-dimensional plane is a slightly more complex process, as the distortion must be accounted for. Figure 1 illustrates how this distortion is mapped to a 2D plane.
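One common way to model such distortion is the equidistant fisheye projection, in which the radial pixel distance grows linearly with the angle from the optical axis (r = f·θ) rather than with its tangent. A minimal sketch under that assumption (the choice of model and the parameter names are illustrative; the article does not specify which fisheye model AImotive uses):

```python
import math

def fisheye_project(point_cam, f, cx, cy):
    """Equidistant fisheye projection: r = f * theta, where theta is the
    angle between the incoming ray and the optical axis. Unlike the
    rectilinear model, even a ray arriving at 90 degrees off-axis lands
    at a finite radius, which is what produces the barrel-distorted look."""
    x, y, z = point_cam
    theta = math.atan2(math.hypot(x, y), z)  # angle from the optical axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    r = f * theta                            # linear in angle, not tan(angle)
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A ray coming in at 90 degrees, impossible for a pinhole camera,
# still lands on the sensor at radius f * pi / 2 from the centre:
print(fisheye_project((1.0, 0.0, 0.0), f=300, cx=640, cy=360))
```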

Simulating a Fisheye Image

The most obvious solution to simulate such an image would be to simulate the exact paths of the rays of light as they move through a simulated lens and onto the sensor. However, this approach relies on ray tracing, a technology currently only available in a few select GPUs. Most GPUs employ rasterization for image synthesis (Figure 2), a solution that does not allow for a robust fisheye projection. While there are certain workarounds that make the projection possible, the nature of these solutions means that the projection will not be robust; thus, rendering may be incorrect, or the performance of the engine may be affected. To find a solution, the problem has to be reexamined.

Reexamining the Problem

To achieve a robust projection, the task at hand has to be divided into two parts. In the first step, data for rays of light reaching the lens is generated. In the second step, the corresponding data for the rays that reach the sensor must be found. Naturally, the high distortion of fisheye lenses causes difficulty in the projection.

Step one – This is the most demanding part of the process, which relies most heavily on the rasterization performance of the GPU. Based on the calculated location of the sensor, images must be generated that cover the area of space from which light can enter the lens. This can be a single image or a total of six, depending on the characteristics of the simulated camera, as shown in Figure 3. The hardware demands of the process are heavily dependent on the quality of these images, and to achieve the highest degree of realism, high-quality images and physically based High Dynamic Range (HDR) rendering must be employed. As several cameras and other sensors are being simulated concurrently by the simulator, this can lead to huge demands on memory and compute capacity. Hence the ability to efficiently utilize multiple CPUs and GPUs if needed is vital. However, to ensure flexibility, the system should also be able to run on more everyday systems such as desktop PCs. This allows developers and self-driving testers to have proper access to the simulator and use it as a development tool. Naturally, high-quality tests have to be run on high-performance setups, but not all tests require such resolutions.
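The reason up to six renders can be needed is that the six images form a cube around the camera (Figure 3): together the faces cover every direction from which light could enter an ultra-wide lens. A small sketch of the face-selection logic, where the dominant axis of a ray direction decides which face it falls on (a hypothetical helper, not AImotive's actual code):

```python
def cube_face(direction):
    """Return which of the six cube faces a ray direction hits:
    the axis with the largest absolute component decides the face,
    and its sign picks the positive or negative half."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

# A ray pointing mostly forward samples the front (+z) face:
print(cube_face((0.2, -0.1, 0.9)))   # → +z
```

A narrow-view long-range camera would only ever hit one face, which is why a single image can suffice for some simulated sensors.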

Figure 1: Fisheye projection illustrating rays and final pixels

Figure 2: GPU rasterizing



Step two – In essence, the task at hand is to simulate the lens distortion of the camera and create the image that appears on the sensor itself. The resolution of this image matches the resolution of the sensor. The data used to create the image is obtained from the 1–6 images generated in step one. The GPU is used to calculate the characteristics of the 3D rays that correspond to the 2D pixels on these images (Figure 4). Further characteristics of the simulated camera (exposure time, tonemapping, etc.) are also added in this step. These are needed to create a simulated sensor image that is the closest it can be to the image the real sensor would provide to the self-driving system in the real world. Only through a high correlation can simulation truly be a viable tool for the development and validation of self-driving systems.
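Under the same equidistant assumption sketched earlier, recovering the 3D ray for a sensor pixel is just the inverse mapping; the resulting unit vector is what would be used to look up the matching sample in the step-one images. A sketch (hypothetical parameters, not the simulator's actual interface):

```python
import math

def pixel_to_ray(u, v, f, cx, cy):
    """Invert the equidistant fisheye model: a pixel at radial distance r
    from the principal point corresponds to a ray at angle theta = r / f
    from the optical axis. Returns the ray as a unit direction vector."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0, 1.0)        # the centre pixel looks straight ahead
    theta = r / f                      # equidistant model: r = f * theta
    s = math.sin(theta)
    return (s * dx / r, s * dy / r, math.cos(theta))

# The principal point maps back to the optical axis:
print(pixel_to_ray(640, 360, f=300, cx=640, cy=360))   # → (0.0, 0.0, 1.0)
```

Per-pixel effects such as exposure and tonemapping would then be applied to the sampled values before the image is handed to the self-driving stack.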

The Simulated Image

Following these steps, we can create a virtual representation of an image that a fisheye camera would provide to the self-driving system. The robustness of this solution is based on recreating the problem in a way that allows GPUs to compute the simulation effectively without relying on niche technologies. The method is also extremely precise, allowing for pixel-perfect deterministic rendering: each scene will be calculated and rendered in entirely the same way, every time it is loaded.

If simulators are to be used as a platform for not only testing but also validation, they must be the closest possible representation of the real world in every regard. The example given above clearly presents how the inherent limitations of game engines prevent them from serving as a reliable platform for this. Fisheye cameras are an important element of a vision-first self-driving setup, as they can be used to easily create a wide (or ultra-wide) field of view around the vehicle. Without the ability to properly simulate these, there will be limits to the correlation between the real world and the simulated test environment. Beyond this highly technical difficulty and the required level of physical fidelity, the limitations discussed above regarding the flexibility and scalability of the simulator and the efficient use of hardware are also important factors. Building on our experience with self-driving technology and working with a game-engine-based simulator in the past, we strongly believe that future simulators for autonomous technology testing and verification must be purpose built.

Figure 3: Six images forming a cube with a camera placed in the center

Figure 4: Pixels from images are projected onto a sphere

Zoltán Hortsin is a self-trained game engine development and OpenGL ES 2, 3 and Vulkan 3D API specialist who originally studied Dentistry at Semmelweis University, Budapest. After discontinuing his university studies during his final semester, he joined AImotive's predecessor, Kishonti Informatics Ltd, where he served as a team leader working on the company's mobile GPU benchmarks. Zoltán is the technological mentor of AImotive's internal simulation engine development team. His favorite area of research is real-time global illumination. Zoltán is a chocolate connoisseur, and spends his free time reading up on paleontology and collecting fossils.

To read more about AImotive visit aimotive.com


Join us

+44 (0) 1355 225 688

[email protected]

nafems.org/joinus

