Page 1: Benchmarking of a LiDAR Sensor for use as Pseudo Ground ...kth.diva-portal.org/smash/get/diva2:1367406/FULLTEXT01.pdf · Firstly, our academic supervisor Naveen Mohan, whose dedication

DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2019

Benchmarking of a LiDAR Sensor for use as Pseudo Ground Truth in Automotive Perception

SEBASTIAN LJUNGBERG

FREDRIK SCHALLING

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT


Benchmarking of a LiDAR Sensor for use as Pseudo Ground Truth in Automotive Perception

SEBASTIAN LJUNGBERG
FREDRIK SCHALLING

Master’s Thesis at ITM
Supervisor: Naveen Mohan
Examiner: Martin Törngren

TRITA-ITM-EX 2019:473


Master of Science Thesis TRITA-ITM-EX 2019:473

Benchmarking of a LiDAR Sensor for use as Pseudo Ground Truth in Automotive Perception

Sebastian Ljungberg

Fredrik Schalling

Approved

2019-06-24

Examiner

Martin Törngren

Supervisor

Naveen Mohan

Commissioner

Scania CV AB

Contact person

Hjalmar Lundin

Abstract

Environmental perception and representation are among the most critical tasks in automated driving. To meet the high demands on reliability from clients and the needs of safety standards such as ISO 26262, there is a need for automated quantitative evaluation of perceived information. However, the typical evaluation methods currently in use, such as comparison with Ground Truth (GT), are not feasible in the real world. Creating a substitute for GT with annotated data does not scale efficiently, owing to the manual effort involved in evaluating the sheer number of scenarios, environmental conditions, etc. Hence, there is a need to automate the generation of data used as GT. This thesis focuses on a methodology to generate a substitute for GT data, named Pseudo Ground Truth (PGT), with a LiDAR sensor, and to identify the precautions needed if this PGT is to be used in the development of perception systems. The thesis aims to assess the proposed methodology in a common scenario. The limitations of the LiDAR sensor are analyzed by performing a Systematic Literature Review (SLR) of available academic texts, conducting semi-structured interviews with experts from one of the largest heavy-vehicle manufacturers in Europe, Scania CV AB, and lastly implementing an experimental algorithm to create a PGT. The main contributions are 1) a list of limitations of current LiDAR sensors found through the SLR and the semi-structured interviews, and 2) a proposed methodology that assesses the use of a LiDAR sensor as GT in a scenario, coupled with a set of precautions that have to be taken if the method is used in the development of new perception systems.


Master of Science Thesis TRITA-ITM-EX 2019:473

Benchmarking of a LiDAR-based pseudo ground truth for environmental perception in a vehicle application

Sebastian Ljungberg

Fredrik Schalling

Approved

2019-06-24

Examiner

Martin Törngren

Supervisor

Naveen Mohan

Commissioner

Scania CV AB

Contact person

Hjalmar Lundin

Abstract

Perception and representation of the surroundings are among the most important sub-functions in self-driving vehicles. To meet the high demands on accuracy and stability from customers, and at the same time the requirements of functional safety standards such as ISO 26262, there is a need to automate quantitative evaluation of collected information. Using typical evaluation methods, such as comparison with a Ground Truth (GT), is not possible in reality. A GT based on annotated data does not scale efficiently, owing to the manual work involved in evaluating the large number of scenarios, weather conditions, etc. There is therefore a need to automate the generation of a GT created from collected measurement data. This study focuses on a methodology for generating a Pseudo Ground Truth (PGT) with a LiDAR sensor, and on identifying the precautions that need to be taken if the PGT is to be used in the development of a perception system. The study aims to evaluate the proposed methodology in a common scenario. The limitations of the LiDAR sensor are analyzed through a Systematic Literature Review (SLR) of available academic texts, semi-structured interviews with experts at one of the largest heavy-vehicle manufacturers in Europe, Scania CV AB, and finally the implementation of an experimental algorithm to create a PGT. The main contributions are 1) a list of limitations of current LiDAR sensors found through an SLR and semi-structured interviews, and 2) a proposed method that evaluates the use of a LiDAR sensor as GT in a scenario, together with the precautions that need to be taken if the method is used in the development of perception systems.


Acknowledgements

We would like to start this report by expressing our deep gratitude to the people who helped us in the completion of our Master’s degree studies at the School of Industrial Engineering and Management at KTH Royal Institute of Technology.

Firstly, our academic supervisor Naveen Mohan, whose dedication and guidance have been indispensable.

Secondly, our industrial supervisor Hjalmar Lundin, for giving us the opportunity to write our thesis in collaboration with Scania CV AB, for the initial subject of this thesis, and for the valuable feedback.

Lastly, Sara Wood and Louise Tornsten, for their valuable input and feedback on this report.

To you and everybody else who has contributed to the writing of this thesis: thank you!

Sebastian and Fredrik
Stockholm, June 2019


List of Abbreviations

APC Accumulated Point Cloud

BDS BeiDou Navigation Satellite System

DGPS Differential Global Positioning System

DS Data Synchronization

GNSS Global Navigation Satellite System

GPS Global Positioning System

GT Ground Truth

KPI Key Performance Indicator

ODD Operational Design Domain

OEM Original Equipment Manufacturer

OGM Occupancy Grid Map

OODA Observe Orient Decide Act

PGT Pseudo Ground Truth

R&D Research and Development

ROS Robot Operating System

RTK Real-Time Kinematic

SLR Systematic Literature Review


Contents

List of Abbreviations ix
List of Figures xii
List of Tables xiii
Reader's Guide xv

1 Introduction 1
1.1 Purpose and Objectives 1
1.2 Delimitations 3
1.3 Related Work 3
1.3.1 Automatic Generation of High Definition World Representations 3
1.3.2 Current Evaluation Methods for Occupancy Grid Maps 4
1.3.3 Current Methods to Increase the Accuracy of World Representations 4

2 Methodology 5
2.1 Identifying the State of the Art and Practice 7
2.2 Experimental Study 7

3 Frame-of-Reference 9
3.1 Definition of Terminology 9
3.1.1 Description of the Environment 9
3.2 Sub-fields of Automated Driving 10
3.2.1 Light Detection and Ranging 10
3.2.2 Perception 11
3.2.3 Localization 12
3.3 Key Performance Indicators 12

4 Identifying Limitations from the State of the Art and Practice 15
4.1 Literature Review 15
4.1.1 Literature Review Protocol 15
4.1.2 Findings 18
4.2 Interviews with Experts 18
4.2.1 Bounding the Study 19
4.2.2 Findings 21
4.2.3 Expert Opinion 28

5 Experimental Study 31
5.1 Experiment Design 31
5.1.1 Use-Case 31
5.1.2 Pseudo Ground Truth Generating Algorithm 33
5.2 Prototype Architecture 33
5.2.1 Simulation Environment 33
5.2.2 Transforms and Data Restructuring 35
5.2.3 Analytics Module 37
5.3 Test Scenarios 38
5.3.1 Scenery Variation 39
5.3.2 Use of Data 39
5.3.3 Scenarios 39

6 Results 41
6.1 Qualitative Results for Q1, Q2 and Q3 41
6.1.1 Identified Challenges with Evaluating Perception 41
6.1.2 Limitations with the LiDAR Technology 42
6.2 Quantitative Results 43
6.2.1 The Influence of Rain 43
6.2.2 The Influence of Reflectivity 48

7 Discussion and Future Work 49
7.1 Discussion 49
7.1.1 State of the Art and Practice Review 49
7.1.2 Experimental Study 50
7.1.3 Threats to Validity 51
7.2 Contributions 51
7.3 Future Work 52

Bibliography 53

Appendices 59

A Data Collection Strategies 59
B The LiDAR model in PreScan 61
C ROS transforming network 63
D Extensive Declaration of Input Parameters in Test Cases I-III 65

List of Figures

1.1 Schematic Venn diagram of the relationship between a PGT and GT. 2
2.1 A schematic representation of the sequences in the methodology. 6
3.1 The relationship of terms in a Venn diagram adapted from Ulbrich et al. [18]. 10
3.2 An example of an architecture of an automated vehicle [20]. 10
3.3 Distance, r, to first occupied cell compared to GT (blue) in PGT generated with 18 mm/h rain. The red squares represent the first, Apgt,j, and last, Agt,j, cells marked wrongfully occupied in row j = 5. 14
5.1 Schematic overview of a use-case. 32
5.2 Implementation architecture. 33
5.3 The PreScan GUI. 34
5.4 Sensor settings in PreScan. 34
5.5 Flowchart of data processing including the rain model. 35
5.6 Setup and workflow for the analytics environment. 38
5.7 Third-person view of the ego-vehicle with the parked vehicle further down the road. 38
5.8 Generic overview of the scenarios. Ego-vehicle drives along the arrow, with the sensor readings from the non-dashed subdistance used in the generation of the PGT. 39
6.1 Map Score and Pearson's Coefficient for Test Case I. 44
6.2 Pearson's Coefficient with a 99.9 % confidence interval for Test Case I. 45
6.3 Variance in X for Test Case I. 46
6.4 PGT representations (white) of a target vehicle and GT (blue) with different levels of rain intensity, R. 46
6.5 Test Case III: Performance of PGT depending on a distance threshold. 47
6.6 Test Case II: Scores depending on the reflectivity of the target, ρ. 48
B.1 Laser beam incident on target object [57]. 62


List of Tables

3.1 A collection of LiDAR sensor devices for automotive usage on the market. 11
4.1 Search strings. 16
4.2 SLR - Limitations to the LiDAR sensor technology. 17
4.3 Interviewees with specific profession and their years of experience in a role directly or indirectly addressing automotive perception. 20
4.4 Codes used in interview analysis. 22
4.5 Interviews with the experts - Summary of limiting factors of the current LiDAR sensor technology, or scenarios where LiDAR sensors have degraded performance. The factors are presented in no particular order. 25
4.6 Interviews with the experts - Proposed performance metrics of world representation. 26
4.7 Expert Opinion - Score description. 28
4.8 Expert Opinion - Questionnaire result. 29
5.1 Interpretation of rain intensities [58]. 32
5.2 Typical reflectance of common targets for light, λ = 905 nm. 32
5.3 LiDAR parameters used in the simulation. 35
5.4 Logfile extracted from simulation. 36
5.5 Input to evaluation. 36
5.6 Declaration of Test Cases I-III; an extensive declaration is found in Appendix D. 40
6.1 Found LiDAR-specific limitations from the SLR and the interviews with the experts. 43
6.2 Distance at first detection at different rain intensities. 47
A.1 Interpretation of rank. 59
A.2 Questionnaire of LiDAR-specific limitations. 60
D.1 Declaration of input parameters used for Test Case I. 65
D.2 Declaration of input parameters used for Test Case II. 66
D.3 Declaration of input parameters used for Test Case III. 66


Reader’s Guide

This section presents information about the structure of the report. It aims to inform the reader of the layout and guide the reader through the disposition of information in the report.

Chapter 1 - Introduction: introduces the reader to the background of the research area and describes the objective and scope of the thesis.

Chapter 2 - Methodology: defines which studies are performed in the scope of this thesis.

Chapter 3 - Frame-of-Reference: contains relevant information and terminology that is used in the report.

Chapter 4 - Identifying Limitations from the State of the Art and Practice: the first phase of data collection, which aims to summarize the state of the art and practice as well as the current industry status.

Chapter 5 - Experimental Study: the second phase of data collection, which aims to experimentally assess the results from the state of the art and present an algorithm that implements a proof-of-concept through which the robustness of automatic GT generation can be assessed.

Chapter 6 - Results: presents the joint results of the state-of-the-art study and the experimental study to facilitate a combined discussion of these.

Chapter 7 - Discussion and Future Work: discusses how automatic GT generation can be used and to what extent the results are validated. Furthermore, interesting topics and extensions to the study are presented as potential future work.


Chapter 1

Introduction

The development of automated driving is advancing at a rapid pace, with several Original Equipment Manufacturers (OEMs) currently testing prototypes at a high level of automation (as defined by SAE J3016 [1]) on public roads. The common approach used to enable the vehicles to perceive the world is to use a multitude of sensors to create a world representation. This is then used as a basis for the decision making for longitudinal and lateral control of the vehicle. The decisions are made using a perception system with perception algorithms based on the data collected by the sensors.

A common validation technique to ensure that the decisions are correct is to automatically compare the perception algorithms against pre-recorded, annotated data used as Ground Truth (GT) [2]. In this thesis, GT is defined as an absolute truth of what the environment contains. Another common validation technique is qualitative evaluation by manual intervention, meaning manual comparison of the perception system's interpretation of the environment with a human understanding as the GT. Using qualitative evaluation methods, or comparing algorithms to a GT, provides a way to demonstrate the absence of, or to find, obvious errors in the perception system, thus serving as a base for verification and validation processes during development.

However, the method must be analyzed from the perspective of functional safety standards such as ISO 26262 [3] and SOTIF [4]. ISO 26262 was designed to provide a framework for addressing functional safety in safety-critical automotive functions. Following the standard entails demonstrating evidence and reasoning in a cogent safety case, i.e., that the safety requirements are complete and satisfied by evidence. While ISO 26262 was not created particularly for automated vehicles, some studies have shown that several billion kilometers of driving [5] could be required to meet its stringent requirements.

1.1 Purpose and Objectives

At this time, no general framework exists for evaluating the performance of perception systems [6], and today's methods require either access to GT by the assumption that a high-resolution sensor is perfect, or that the manual annotation of data is perfect. For the first purpose, a high-precision LiDAR sensor is commonly used; for the latter, the manually annotated KITTI Vision Benchmark Suite [7] is a widely used dataset to benchmark perception systems and algorithms.

Facilitating testing at the scale of the billions of kilometers driven that could be required in the scope of ISO 26262 would demand an enormous amount of resources in the form of time for human intervention. Reducing the human intervention would therefore be favourable for economic reasons, but also to ensure consistency.

This thesis intends to explore the concept of a valid substitute for GT based on sensor data. More specifically, it proposes a Pseudo Ground Truth (PGT): a concept of creating a world representation containing enough information to act as GT in testing. The PGT is deemed valid for the scenarios in which it contains enough information to be used as a substitute for GT. The boundaries of the PGT are the distinction of which scenarios or environments the PGT is valid in. A schematic representation of the relationship between PGT and GT is shown in Figure 1.1. Since the LiDAR sensor has demonstrated high precision and angular accuracy, this thesis aims to create a PGT based on LiDAR sensor data and to find the scenarios where the proposed PGT is valid.

Figure 1.1. Schematic Venn diagram of the relationship between a PGT and GT.

This thesis also intends to propose a methodology to find which precautions are needed in the development of perception systems if a proposed algorithm is used to counteract the limitations of the LiDAR technology. Using an automatically generated PGT could enable the use of data gathered from numerous vehicles, which implies higher generalization compared to being restricted by the cost associated with manually annotating data to be used as GT. This will help bring the OEMs a step closer to meeting the functional safety standards by providing greater data variation from real-world scenarios.

To understand where a PGT is a valid substitute for GT, this thesis addressesthe following research question:

RQ: Which precautions need to be taken in the development of perception systems for automated vehicles when only using a high-precision LiDAR sensor as a substitute for GT information?


In order to answer the research question, the following sub-questions will be answered:

Q1 Identify the limitations of the LiDAR sensor as PGT.

Q2 Identify the current challenges in evaluating of automotive perception systems.

Q3 Identify methods to evaluate automotive perception systems.

1.2 Delimitations

Since this thesis aims to find the limitations of using the LiDAR technology as an exclusive data source for evaluation, it will assess neither Data Synchronization (DS) nor localization. Only the LiDAR sensor as the data source, and Occupancy Grid Maps (OGMs) for world representation, are considered for the generation of the PGT. The OGM is used to simplify quantitative comparisons between world representations. Only static targets will be assessed.
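Since the OGM is the format in which all quantitative comparisons are made, a minimal sketch of how 2D LiDAR returns could be rasterized into a binary occupancy grid may help fix ideas. The grid dimensions and cell size below are illustrative assumptions, not parameters from this thesis:

```python
import math

def points_to_ogm(points, cell_size=0.25, width=40, height=40):
    """Rasterize 2D LiDAR returns (x, y), in metres, into a binary
    occupancy grid centred on the ego-vehicle.

    cell_size, width and height are illustrative defaults only,
    not values taken from this thesis."""
    grid = [[0] * width for _ in range(height)]    # 0 = free/unknown, 1 = occupied
    x0 = -width * cell_size / 2                    # x of the grid's corner origin
    y0 = -height * cell_size / 2                   # y of the grid's corner origin
    for x, y in points:
        col = int(math.floor((x - x0) / cell_size))
        row = int(math.floor((y - y0) / cell_size))
        if 0 <= row < height and 0 <= col < width:  # ignore returns outside the grid
            grid[row][col] = 1
    return grid

# Two returns occupy two distinct cells of the 40x40 grid.
grid = points_to_ogm([(0.0, 0.0), (1.0, 1.0)])
```

A grid of this form, built once from GT object outlines and once from PGT sensor returns, is what the cell-wise comparison metrics operate on.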

1.3 Related Work

Several areas are touched upon in this thesis, such as verification, validation, evaluation, and sensor technologies. As the research topic is wide, it would be an enormous task to summarize the current state of the art in order to position this work; this thesis therefore does not claim coverage of the state-of-the-art research within these areas. This section instead attempts to find topics that are specifically related to sub-question Q3. To this end, Section 1.3.1 focuses on solutions for generating a reference system similar to a PGT. Section 1.3.2 gives a summary of current methods to evaluate OGMs in automated driving. Lastly, Section 1.3.3 describes current methods that increase the accuracy of the sensor data. The papers found interesting either presented quantitative methods for evaluating a world representation or proposed a method to generate a GT substitute, and were deemed to be state of the art in this field.

1.3.1 Automatic Generation of High Definition World Representations

The principle behind many evaluation methods includes a procedure that distinguishes between the correct and incorrect behaviors of the system under test [8]. There exists a multitude of studies attempting to automate and improve the generation of high definition world representations, and to validate their respective performance. The related studies were found through searches on terms related to the questions stated in Section 1.1.

Hajri et al. [9] implemented automatic generation of GT using multiple vehicles to accurately position the ego-vehicle. The vehicles used are all equipped with a high-precision positioning system, with the ego-vehicle additionally equipped with LiDAR sensors, radars and cameras. By simultaneously recording the position and dynamics of all vehicles, the kinematics of all equipped vehicles present in the scene are expressed in the ego-vehicle's frame of reference, enabling generation of a more precise position of the ego-vehicle.
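The frame-of-reference change described above, expressing a globally positioned target in the ego-vehicle's frame, can be sketched in 2D as follows. The function name and pose conventions are illustrative assumptions, not taken from Hajri et al. [9]:

```python
import math

def global_to_ego(target_xy, ego_xy, ego_yaw):
    """Express a target's global 2D position in the ego-vehicle frame.

    ego_yaw is the ego heading in radians, measured from the global
    x-axis; names and conventions are illustrative only."""
    dx = target_xy[0] - ego_xy[0]
    dy = target_xy[1] - ego_xy[1]
    # Rotating the displacement by -yaw undoes the ego orientation,
    # so +x points straight ahead of the ego-vehicle.
    c, s = math.cos(-ego_yaw), math.sin(-ego_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

# A target 10 m due "north" of an ego-vehicle heading north ends up
# straight ahead (positive x) in the ego frame.
x, y = global_to_ego((0.0, 10.0), (0.0, 0.0), math.pi / 2)
```

With every vehicle's pose recorded at the same instant, applying this transform to each one places the whole scene in the ego frame.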

Grewe et al. [2] create a GT equivalent by using a Differential Global Positioning System (DGPS) to localize targets relative to the ego-vehicle. In their study, the accuracy of the DGPS as GT is not measured. In summary, previous studies aim to create a GT equivalent using setups with several vehicles to increase the resolution, with the use of a DGPS. This thesis, however, creates a PGT that is independent of a test track and requires only one vehicle.

1.3.2 Current Evaluation Methods for Occupancy Grid Maps

Collins et al. [10] propose a method that combines several metrics to compare OGMs, such as Baron's cross-correlation coefficient [11], Map Score [12] and path-based analysis [10], to increase the final reliability of the analysis. A survey of comparison metrics can be found in Grewe et al. [2]. In that study, they find a lack of consistency in some quality measures, such as Map Score, and hence argue that these measures are inadequate for verification processes of perception systems in automotive applications. The study concluded that descriptive measures such as position errors and position variance are needed in a verification process, and that quality measures indicate the similarity between two maps. This thesis uses both quality measures and position variance to evaluate the PGT.
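As a rough illustration of such grid-comparison metrics, the sketch below computes a cell-agreement ratio (a simplified stand-in for the Map Score measure, not its exact formulation in [12]) and Pearson's correlation coefficient between two small binary grids:

```python
import math

def map_score(g1, g2):
    """Fraction of cells on which two binary grids agree; a simplified
    stand-in for the Map Score quality measure."""
    pairs = [(a, b) for r1, r2 in zip(g1, g2) for a, b in zip(r1, r2)]
    return sum(a == b for a, b in pairs) / len(pairs)

def pearson(g1, g2):
    """Pearson's correlation coefficient between two flattened grids."""
    xs = [c for row in g1 for c in row]
    ys = [c for row in g2 for c in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(varx * vary)

gt = [[0, 1],
      [0, 1]]
pgt = [[0, 1],
       [1, 1]]                # one cell wrongly marked occupied
score = map_score(gt, pgt)    # 3 of 4 cells agree
corr = pearson(gt, pgt)
```

Combining several such scores, as Collins et al. do, guards against any single measure's blind spots.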

1.3.3 Current Methods to Increase the Accuracy of World Representations

The issue of surveying an environment containing dynamic objects is addressed by Suganuma and Matsui [13]. Their study proposes a method to robustly detect static objects around the ego-vehicle, unaffected by temporal occlusions. Their method also includes extraction and mapping of dynamic objects.

A strategy to create a GT equivalent based on LiDAR sensor readings is proposed by Aldibaja et al. [14], as an accumulation of data points recorded from different sensor locations. They propose mapping the data points to each other through global coordinates and then merging all sensor readings into a global point cloud. The study shows that access to a larger number of points increases the resolution and accuracy of the world representation. The accumulation process is performed offline to increase the available computational power.

In summary, several studies show that accumulation of data over time increases the resolution of the representation [13] [14]. This thesis uses the accumulation strategy to increase the resolution of the PGT.
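The accumulation strategy can be sketched as transforming each scan's points into global coordinates and concatenating them. The pose format and names below are illustrative assumptions, not the representation used by Aldibaja et al. [14]:

```python
import math

def accumulate(scans):
    """Merge per-scan 2D point clouds into one global point cloud.

    Each scan is a (sensor_pose, points) pair, where sensor_pose is
    (x, y, yaw) of the sensor in the global frame and points are
    (x, y) in the sensor frame. Structure and names are illustrative."""
    cloud = []
    for (sx, sy, yaw), points in scans:
        c, s = math.cos(yaw), math.sin(yaw)
        for px, py in points:
            # Rotate into the global orientation, then translate by
            # the sensor position.
            cloud.append((sx + c * px - s * py, sy + s * px + c * py))
    return cloud

# The same landmark seen from two different sensor poses maps to
# (nearly) the same global coordinate, densifying that region.
cloud = accumulate([
    ((0.0, 0.0, 0.0),        [(5.0, 0.0)]),   # 5 m ahead, heading east
    ((5.0, 5.0, -math.pi/2), [(5.0, 0.0)]),   # 5 m ahead, heading south
])
```

Because every scan contributes points to the same global frame, regions seen repeatedly accumulate density, which is what raises the effective resolution of the representation.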


Chapter 2

Methodology

This chapter discusses the methods used in the scope of this thesis to reach results from which a conclusion regarding the objective is drawn.

Figure 2.1 shows the discrete steps that were undertaken in this thesis, with references to the respective chapters of this report.

A mixed-methods approach was chosen as a framework for the thesis, divided into two phases: collection of data on the current state of the art research and practice, as presented in Chapter 4, and an experimental assessment, as presented in Chapter 5. The first phase aimed to find answers to Q1-Q3 through a literature review of state-of-the-art research articles and interviews with industry experts. Qualitative surveying, as used in the first phase, offers an approach to generalize and form theories based on experiences and literature [15].

The second phase aimed to assess the findings for Q1-Q3 by addressing what was deemed most important and proposing a PGT generating algorithm. An experimental system implementing the algorithm was built based on information deduced from the first phase. The system was presented and tested on a set of test cases based on a simple scenario. Validation is done by comparing GT and PGT in an OGM format using well-known Key Performance Indicators (KPIs).


Figure 2.1. A schematic representation of the sequences in the methodology.


2.1 Identifying the State of the Art and Practice

To the best of our knowledge, the use of sensors to perceive an environment spans a broad range of applications. To identify the limitations of the LiDAR sensor, as stated in Q1, existing literature was examined in a systematic manner, which made the review transparent, using the guidelines for a Systematic Literature Review (SLR) by Kitchenham et al. [16]. A review protocol containing the steps followed is presented in Section 4.1.1, enabling repeatability and thus making the bias in the literature review easier to evaluate [16] [17].

To validate the findings of the SLR, a discussion was held with experts in the field. The findings were also used as a basis for further discussion to find answers to Q2-Q3. To allow the discussion to be tailored to the expertise of the respective interviewee, the interviews were carried out in a semi-structured manner using guidelines by Creswell [15].

2.2 Experimental Study

An experimental study intends to test the impact "of a treatment (or an intervention) on an outcome, controlling for all other factors that might influence that outcome" [15]. The experiments conducted used a set of aggregated findings for Q1-Q3 to design an experimental setup. The experimental setup was implemented as a prototype of a PGT generating algorithm. To be able to test the limitations of the LiDAR sensor addressed in Chapter 4, the prototype was implemented in a simulated environment.

The prototype was then validated on a set of test cases representing the most critical limitations of the LiDAR sensor, based on the SLR findings and data gathered from the interviews with industry experts.

The test cases were created to quantitatively evaluate the influence of the limitations on the generated PGT. The quantitative approach facilitated gathering numerical data on where PGT is a valid substitute for GT. An analysis of the data enabled a conclusion on which precautions are needed if the proposed PGT generating algorithm is used in the development of perception systems.


Chapter 3

Frame-of-Reference

This chapter presents the theoretical background of the concepts, technologies and terminology used in this thesis. Firstly, background information on the relevant technical areas is presented, followed by evaluation metrics.

3.1 Definition of Terminology

To facilitate a consistent report, some of the terminology used in this thesis is defined in this section. This includes terminology that is commonly known but used in a specific way in this thesis.

Level of Automation

The level of automation is used as defined by SAE standard J3016 [1].

Ground Truth

GT is an abstract concept used in this thesis as an element representing access to the correct information about an environment.

Pseudo Ground Truth

A PGT is a world representation that, when deemed valid, can be used as a substitute for GT. The boundary of a PGT is the distinction between a valid and an invalid PGT.

3.1.1 Description of the Environment

To describe the elements of an environment, the terminology defined by Ulbrich et al. [18] will be used. The terms used are, in particular: scenario, scene, scenery, stationary elements, environment conditions, self-representation and actor. The relationship between the terms is shown in Figure 3.1.


Figure 3.1. The relationship of terms in a Venn diagram adapted from Ulbrich et al. [18].

3.2 Sub-fields of Automated Driving

Systems built in attempts to perform automated driving commonly follow the architecture of an Observe Orient Decide Act (OODA) loop by Boyd [19], as originally proposed in the field of military strategy. An example of an implementation of the loop in an automated driving application is the open source project Autoware, as proposed by Kato et al. [20], schematically shown in Figure 3.2.

Figure 3.2. An example of an architecture of an automated vehicle [20].

3.2.1 Light Detection and Ranging

LiDAR is an optical time-of-flight based technology where a target is actively illuminated by laser light pulses and the reflection is recorded. (Compare with a camera, which passively uses an external source of illumination.) The LiDAR technology typically operates with electromagnetic waves in the infrared spectrum. A set of laser light pulses is sent out at a given sample rate. By receiving information about the angles and range to the target, the position of the points can be calculated, and the collection of points per discrete time sample is commonly saved in a data structure called a point cloud. Additionally, the reflected intensity can be recorded, which enables a prediction of the reflectivity of the target and can make targets at the same distance distinguishable. Surveying data sheets of common LiDAR sensors for automotive usage on the market (seen in Table 3.1), the following summary is made:

Long range LiDAR sensors have a maximum range of 100 to 300 m but work best at distances below 50 m. They can have a field of view of up to 360°. Current state of the art LiDAR sensors have a cycle time between 20 and 50 ms and a wavelength in the range of 903-905 nm. Information regarding the expected lifetime of LiDAR sensors is generally not disclosed by manufacturers, but in some instances this number has been revealed to be above 12 000 hours.

Table 3.1. A collection of LiDAR sensor devices for automotive usage on the market.

Model               | Range | FoV horizontal | Accuracy (distance, degrees) | Cycle time (FoV) | Wavelength | Lifetime
Quanergy M8-1       | 150 m | 360°           | 5 cm, 0.03°                  | 33 ms            | 905 nm     | -
Ibeo LUX            | 200 m | 110°           | 10 cm, 0.125°                | 20 ms            | 905 nm     | -
Continental SRL1    | 10 m  | 27°            | 10 cm, -                     | 10 ms            | 905 nm     | 12 000 hours
Velodyne HDL-64E S2 | 120 m | 360°           | 2 cm, 0.09°                  | 50 ms            | 905 nm     | -
Velodyne Alpha Puck | 300 m | 360°           | 3 cm, 0.11°                  | 50 ms            | 905 nm     | -
Ouster OS-2         | 250 m | 45°            | -, 0.175°                    | 50 ms            | 903 nm     | -
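The position calculation described in Section 3.2.1 (range plus beam angles converted to Cartesian coordinates) can be sketched as follows; the function name and the spherical convention used are illustrative assumptions:

```python
import numpy as np

def beam_to_point(r, azimuth, elevation):
    """Convert one LiDAR return, given as range r [m] and the beam's azimuth
    and elevation angles [rad], to a Cartesian point in the sensor frame."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)
    return np.array([x, y, z])

# A 10 m return straight to the left of the sensor, at sensor height.
point = beam_to_point(10.0, np.pi / 2, 0.0)
```

Applying this conversion to every return in one sample period yields the point cloud for that time step.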

3.2.2 Perception

The term perception is often used to describe the process of visual cognition in automated driving. Perception systems are responsible for perceiving the surroundings by capturing sensor data and translating it into a usable world representation.


World Representation

To facilitate efficient decision making, the perceived information, including the obstacles that the vehicle might face as well as the free space of the environment, must be presented in a usable format: a world representation. An example of a world representation used in automated driving applications is the OGM.

OGMs represent the environment by dividing it into cells that are disjoint and completely cover the environment, with each cell assigned a probability of being occupied. An OGM is commonly used to describe the local environment based on proximity sensors. In this thesis the OGM format is used to present the PGT, since the representation facilitates quantitative comparisons based on well-known correlation metrics.
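A minimal illustration of an OGM as a matrix of cell probabilities; the grid size, cell values and the 0.5 "unknown" convention are illustrative assumptions:

```python
import numpy as np

# A 4 x 4 occupancy grid: each cell holds the probability of being occupied.
# 0.5 encodes "unknown" (no sensor evidence yet).
ogm = np.full((4, 4), 0.5)
ogm[0, :] = 1.0      # a wall along the top edge of the grid
ogm[1:, 1:3] = 0.0   # free space observed in front of the sensor

occupied = ogm > 0.5          # binarize for counting/comparison
print(int(occupied.sum()))    # 4 cells classified as occupied
```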

3.2.3 Localization

By assuming perfect localization, neither localization nor positioning is considered in the scope of this thesis, although it is a deeply related sub-field with a great impact on the accuracy of a world representation [2]. In this thesis the field of localization is only touched upon when discussing real world applications of a PGT generating algorithm. In general, the aim of a localization system is to use available information to determine where the object is positioned in relation to its environment. Typical automotive positioning systems include different variations of Global Navigation Satellite Systems (GNSSs), such as the Global Positioning System (GPS) [21], Galileo [22] and the BeiDou Navigation Satellite System (BDS) [23]. Paired with precision enhancement algorithms such as the Real-time Kinematic (RTK) technique, centimeter precision can be achieved [24].

3.3 Key Performance Indicators

A KPI is, in this thesis, a measure that demonstrates the quality of the proposed PGT in OGM format compared to a generated GT in OGM format. The KPIs were selected by finding KPIs in the literature on OGM evaluation as well as correlational measures. In the equations below, A and B are matrix representations of OGMs whose elements represent the probability of the respective cell being occupied.

KPI 1: Map Score is a quantitative cell-wise comparison metric proposed in [12] which compares occupancy grids. A logarithmic implementation is shown in Equation 3.1. In this report the metric is normalized by relating the score to the worst case score, as shown in Equation 3.2, where GT is the ground truth grid representation and $\overline{GT}$ is its inverse. The normalization makes the Map Score independent of grid size.

\[
\text{Map Score}(A, B) = \sum_i \left[ 1 + \log_2\left( A_i B_i + \bar{A}_i \bar{B}_i \right) \right] \tag{3.1}
\]

\[
\text{Map Score}_{\mathrm{norm}} = \frac{\text{Map Score}(GT, B)}{\text{Map Score}(\overline{GT}, B)} \tag{3.2}
\]
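A sketch of how Equations 3.1 and 3.2 could be implemented. The small eps guard against taking the logarithm of zero is our addition for numerical robustness, not part of the original metric:

```python
import numpy as np

def map_score(a, b, eps=1e-9):
    """Equation 3.1: cell-wise Map Score between two grids with values in [0, 1].
    eps guards log2 when the grids fully disagree in a cell."""
    agreement = a * b + (1.0 - a) * (1.0 - b)
    return np.sum(1.0 + np.log2(agreement + eps))

def map_score_norm(gt, b):
    """Equation 3.2: normalize against the worst case, the inverted ground truth."""
    return map_score(gt, b) / map_score(1.0 - gt, b)

gt = np.array([[1.0, 0.0],
               [0.0, 1.0]])
pgt = np.array([[0.9, 0.1],
                [0.2, 0.8]])
score = map_score_norm(gt, pgt)
```

For a perfect binary match the agreement term is 1 in every cell, so the un-normalized score equals the number of cells.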

KPI 2: Occupied/free cells ratio is a descriptive metric proposed by Cohen et al. [25], where the ratios of free respectively occupied cells are used as indicators of the correlation between two grids. The metric for occupied cells is shown in Equation 3.3, where A and B are binary occupancy grids.

\[
\text{Free Cells Ratio}(A, B) = \frac{\sum_i A_i}{\sum_i B_i} \tag{3.3}
\]
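A sketch of Equation 3.3 for binary grids (the function name is ours):

```python
import numpy as np

def cells_ratio(a, b):
    """Equation 3.3: ratio of marked cells between two binary occupancy grids.
    Values near 1 indicate the grids mark a similar number of cells."""
    return a.sum() / b.sum()

a = np.array([[1, 1, 0], [1, 0, 0]])  # 3 occupied cells
b = np.array([[1, 1, 0], [1, 1, 0]])  # 4 occupied cells
print(cells_ratio(a, b))  # 0.75
```

The same function applied to the inverted grids yields the free-cell variant of the ratio.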

KPI 3: Jaccard's Coefficient, also known as intersection over union, shows the similarity and diversity of sample sets. It is defined as the size of the intersection divided by the size of the union of the sample sets, as described in Equation 3.4.

\[
J(A, B) = \frac{|A \cap B|}{|A \cup B|} \tag{3.4}
\]

In a scenario where grid maps are to be evaluated, Jaccard's coefficient tells the similarity between two maps. Since every cell in a map holds a probability value between 0 and 1, a threshold is needed to binarize the cells before the coefficient can tell how well the maps correlate.
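A sketch of Equation 3.4 with the thresholding step mentioned above; the 0.5 threshold is an illustrative choice:

```python
import numpy as np

def jaccard(a, b, threshold=0.5):
    """Equation 3.4 on occupancy grids: binarize the cell probabilities at
    `threshold`, then take intersection over union of the occupied cells."""
    a_occ, b_occ = a > threshold, b > threshold
    union = np.logical_or(a_occ, b_occ).sum()
    if union == 0:
        return 1.0  # both grids empty: treat as identical
    return np.logical_and(a_occ, b_occ).sum() / union

a = np.array([[0.9, 0.8], [0.1, 0.2]])
b = np.array([[0.9, 0.3], [0.1, 0.2]])
print(jaccard(a, b))  # 0.5 -- one shared occupied cell out of two in the union
```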

KPI 4: Pearson correlation coefficient measures the linear correlation between two data series.

\[
\rho(A, B) = \frac{E[(A - \mu_a)(B - \mu_b)]}{\sigma_a \sigma_b} \tag{3.5}
\]

where $\sigma_a$ and $\sigma_b$ are the standard deviations of A and B, $\mu_a$ and $\mu_b$ are the means of A and B, and E is the expectation.

If the output of this equation is greater than 0, the series have a positive linear correlation; if it is below 0, they have a negative linear correlation; and if the value is 0, there is no linear correlation. The coefficient is often presented together with the upper and lower limits of a confidence interval.
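Equation 3.5 can be sketched as follows, flattening the grids into data series:

```python
import numpy as np

def pearson(a, b):
    """Equation 3.5 over the flattened cell values of two grids."""
    a, b = a.ravel(), b.ravel()
    return np.mean((a - a.mean()) * (b - b.mean())) / (a.std() * b.std())

gt = np.array([[1.0, 0.0], [0.0, 1.0]])
pgt = np.array([[0.9, 0.1], [0.2, 0.8]])
print(round(pearson(gt, pgt), 2))  # 0.99
```

A PGT that tracks the GT cell values almost linearly, as above, yields a coefficient close to 1.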

KPI 5: Variance shows the variance of the distance r, along the X-axis, between the GT and the first and last cells wrongfully marked as occupied. The variance is calculated as

\[
v = \frac{1}{2N} \sum_j |r_j - \mu|^2 \tag{3.6}
\]


where
\[
r_j = |A_{pgt,j} - A_{gt,j}| \tag{3.7}
\]

and $\mu$ is the mean of r, $A_{pgt,j}$ denotes the first and last wrongfully occupied cells of the PGT grid map in row j, $A_{gt,j}$ is the corresponding GT occupied cell, and N is the number of rows in the Y-direction. The error distance r is shown in Figure 3.3.

Figure 3.3. Distance r to the first occupied cell compared to GT (blue) in a PGT generated with 18 mm/h rain. The red squares represent the first, $A_{pgt,j}$, and last, $A_{gt,j}$, cells wrongfully marked as occupied in row j = 5.
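Equations 3.6 and 3.7 can be sketched as follows; the input format (per-row boundary column indices) is an illustrative assumption about how the occupied-cell boundaries are extracted from the grids:

```python
import numpy as np

def boundary_variance(pgt_cols, gt_cols):
    """Equations 3.6-3.7: variance of the error distance r_j between the GT
    boundary column and the first/last wrongfully occupied PGT cell per row.
    pgt_cols: (N, 2) array of (first, last) column indices per row.
    gt_cols:  (N,) array of GT boundary column indices per row.
    With two distances per row the mean runs over 2N values, matching 1/(2N)."""
    pgt_cols = np.asarray(pgt_cols, dtype=float)
    gt_cols = np.asarray(gt_cols, dtype=float)
    r = np.abs(pgt_cols - gt_cols[:, None]).ravel()  # 2N error distances
    return np.mean((r - r.mean()) ** 2)

# Wrongfully occupied cells always one column on either side of the GT wall:
# all error distances equal 1, so the variance is 0.
v = boundary_variance([(3, 5), (3, 5), (3, 5)], [4, 4, 4])
print(v)  # 0.0
```

A larger variance indicates that the rain-induced false detections are scattered at irregular distances from the GT boundary.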


Chapter 4

Identifying Limitations from the State of the Art and Practice

This chapter presents methodologies to find the most common limitations of a LiDAR sensor, through a literature review and semi-structured interviews with industry experts within perception-related fields.

4.1 Literature Review

An SLR is performed in two areas, namely the environmental and technical limitations of using LiDAR technology in an automotive application. The literature review is carried out in accordance with Kitchenham et al. [16]. The review planning has been iterated to find a set of keywords and inclusion and exclusion criteria that generate papers relevant to this work. The final version is presented in Section 4.1.1.

4.1.1 Literature Review Protocol

This section presents the review protocol, structured according to [16].

RQ focus: To find answers to Q1, i.e. to identify limitations of the LiDAR technology in the context of automotive applications.

RQ: Under what conditions does a LiDAR sensor risk performing significantly worse than expected?

Context: The scenarios have to either (1) directly relate to an automotive application or (2) be reasonably likely to be found within an automotive application.

Keywords: “lidar“, “laser radar“, “performance“, “automotive“, “environment“,“test“, “weather“, “problem“

Source List: The Google Scholar database has been used as the sole source, asit indexes all databases of interest to us such as IEEE, Springer, etc.

Inclusion and Exclusion criteria: As LiDAR is a relatively new technology for the automotive industry, older publications will not be included. Furthermore, only papers that address perception related to an automotive application will be included, since the use case can differ between industries. This reasoning results in the inclusion and exclusion criteria below. Since Google Scholar does not offer the possibility to sort findings by citations, the first search string was queried through a third party program which fetches the top 1000 results according to Google's algorithm together with meta information, such as the number of citations, which enables sorting by top-cited publications. The reason for having two different sorting criteria is to address both the bias in the definition of "relevance" according to Google and the bias in the choice of keywords according to the authors.

Inclusion criteria for search string 1:

I1 The publication was published in 2015 or later.

I2 The publication is among the top 50 most cited in the search results.

Inclusion criteria for search strings 2 and 3:

I1 The publication was published in 2015 or later.

I2 The publication is among the top 50 search results.

Exclusion criteria:

E1 The publication is not directly related to automotive perception, or does not include a limitation of using the LiDAR technology in perception.

E2 The publication is not available in English and in full format online.

The material in the literature review was collected in March 2019 using the search strings shown in Table 4.1, which are based on the keywords stated in the review planning.

Table 4.1. Search strings.

1. lidar AND performance AND environment AND (automotive OR "autonomous driving")
2. lidar AND performance AND automotive AND limitations AND problem AND weather
3. (lidar OR "laser radar") AND performance AND automotive AND environment AND test

The SLR resulted in 91 unique publications, of which exclusion criteria E1 and E2 led to 58 and 9 publications, respectively, being excluded from further analysis. The final 24 publications were read thoroughly and the limitations found are specified in Table 4.2, categorized into three distinct categories: Obstructing (Ob), Attenuating/Noise (AN) and Other (Ot). Furthermore, the citations are classified depending on whether the limitation found is mentioned in a context where it is not the primary topic (Mention), or is a primary source and the main content of its paper (Experimental). In Table 4.2 the findings are ranked in descending order according to the number of citations.

Table 4.2. SLR - Limitations to the LiDAR sensor technology.

Limitation                              | Mention                                 | Experimental                  | Cat.
Rain                                    | [26] [27] [28] [29] [30] [31] [32]      | [33] [34] [35] [36] [37] [38] | AN
Fog/Mist/Haze                           | [26] [27] [28] [29] [37] [30] [31] [32] | [39] [40] [41] [35] [38]      | AN
Snow                                    | [26] [27] [28] [29] [30] [32]           | [39] [35] [38]                | AN
Material/surfaces                       | [42] [43]                               | [34] [44] [45]                | Ob
Sunlight                                | [46] [28]                               | [47] [48]                     | AN
Dust                                    | [27]                                    | [35]                          | AN
Road dirt on sensor cover               | [32]                                    | [49]                          | Ob
Object too close to sensor              |                                         | [45]                          | Ob
Wet roadway causes road spray           | [47]                                    |                               | AN
Wavelength related                      | [29]                                    |                               | AN
Interference                            | [50]                                    |                               | Ot
Remote attacks (imitating signal)       | [51]                                    |                               | Ot
Sparse measurements for distant objects | [52]                                    |                               | Ot
Temperature                             | [53]                                    |                               | Ot
Vibrations                              | [53]                                    |                               | Ot


4.1.2 Findings

In this section the three categories of limitations to the LiDAR sensor found in the SLR are presented.

Obstructing Conditions

Among the 24 identified publications seen in Table 4.2, 8 limitations are either mentioned or shown in an experiment to have an obstructing impact. The effects of targets consisting of different materials or having different surfaces are also included in this category. Generally, this category depicts limitations with a predictable or controllable impact. For example, road dirt accumulated over a long time on the sensor cover, as in Rivero et al. [49], can be controlled through simple maintenance of the vehicle, and, as shown in Hasirlioglu and Kamann [34], the impact of a wet surface is predictable, with around 10 % reduced reflectivity.

Attenuating/Noise Conditions

Among the 24 identified publications seen in Table 4.2, 17 papers are found to address attenuation or noise induction, and 11 of the found limitations are categorized as experimental. Most common are scenarios of precipitation, which include both refraction, when a particle reflects enough power back to the sensor to be detected as a hit, and an unordered scattering of the beams. These scenarios generally induce disturbances with irregular behavior, including both an attenuating effect and the generation of noise. Rain, for example, is shown to be dependent, as expected, on the wavelength used in the LiDAR. As shown in Kutila [40], the wavelength is 905 nm in 95 % of the LiDAR sensors on the market, but a more expensive technology based on a 1550 nm laser shows more robust performance in rainy conditions.

Other Conditions

This category includes limitations such as vibrations induced by unevenness of the road, or by uneven mixing of fuel and oxygen in combustion, which over time cause deteriorated sensor performance [53]. Another limitation is that as the number of active LiDAR sensors in traffic rises, the risk of interference between them increases [50].

4.2 Interviews with ExpertsData related to industry trends and state of the art research was gathered via theconduction of interviews with experts from the industry. The interviews were carriedout according to what is proposed as semi-structured interviews (by Creswell [15])with the boundaries defined in the following section.


4.2.1 Bounding the Study

This section presents the interview protocol, structured according to Creswell [15].

Verification: To ensure internal validity, the following strategies were employed:

1. Peer examination - Two master students conducting master thesis studies at KTH served as peer examiners.

2. Clarification of researcher bias - At the outset of this study researcher biaswas articulated in writing under the heading, ”The Researcher’s Role.”

3. Clarification of the industrial bias - At the outset of this study the industrial bias was articulated in writing under the heading, "The Informant's Role."

The Researcher's Role: In qualitative research, the researcher's role is to collect data from primary sources, analyze the data and declare a result. It is possible that this process induces a bias from the researchers, depending on what they gain from a specific result. It is beneficial for the researchers if the interviews with experts (Section 4.2) and the literature review (Section 4.1) show a uniform result. It is also beneficial if the interviews with the experts strengthen the hypothesis and the background of the thesis. Although a number of activities were undertaken to ensure objectivity, we understand that these biases shape the way we understand situations, interpret the data and draw conclusions.

The Informant's Role: This study includes several interviews with engineers from Scania CV AB in Södertälje, Sweden (hereafter referred to as Scania). This could lead to a bias regarding what is important in the industry as a whole, depending on how the problem is interpreted at Scania.

Interviewees: The interviewees in this study are engineers in various groups related to the R&D of autonomous transport systems. The interviewees were selected with the aim of covering the perspectives of: (1) sensor systems generating input to the perception systems, (2) development of perception systems, and (3) users of the output from the perception system. The interviewees are kept anonymous but are presented by experience and the area in which they are active: sensor system (hardware), perception system (software) and trajectory planning. The selection of candidates, carried out with the aim of capturing multiple sides of the topic, was done together with the academic supervisor and the industry supervisor of this study, and is presented in Table 4.3.


Table 4.3. Interviewees with specific profession and their years of experience in arole directly or indirectly addressing automotive perception.

Interviewee | Profession                   | Experience
Person 1    | Sensor system (hardware)     | 8 years
Person 2    | Sensor system (hardware)     | 2 years
Person 3    | Sensor system (hardware)     | 1 year
Person 4    | Sensor system (hardware)     | 1 year
Person 5    | Perception system (software) | 7 years
Person 6    | Perception system (software) | 2 years
Person 7    | Perception system (software) | 1 year
Person 8    | Perception system (software) | 1 year
Person 9    | Trajectory planning          | 9 years
Person 10   | Trajectory planning          | 3 years

Events: Using semi-structured interviews, the focus of this study was to discuss the interviewees' experiences and thoughts about the greatest challenges in validating a perception system for the automotive industry and the complications of creating a PGT exclusively dependent on a LiDAR sensor.

Processes: Particular attention was paid to problems and solutions in validating a perception system, sensor-specific answers, and inputs to KPIs.

Ethical Considerations: Following an ethical regulatory system such as that proposed by the Swedish Research Council (Swedish: Vetenskapsrådet) [54], four requirements need to be fulfilled. (1) The information requirement means that the researchers need to inform the participants about their project and the conditions for participating. (2) The approval requirement means that the researchers need to establish that the participant approves of the circumstances in which the participant is involved. (3) The participant should be able to decide if, for how long and under which requirements they participate, and the researchers are not allowed to affect the participant's decision regarding participation. Lastly, (4) the usage requirement says that information gathered for research cannot be used or lent for commercial or other non-scientific purposes. Personal information gathered for research cannot be used for decisions or actions that directly affect the participant.

Data Collection Strategies: The data collection was conducted in April 2019, see Appendix A, and included 15-30 minutes of recorded interviews with the initial questions shown in Appendix A, as well as the discussions that followed the questions. Every interview was transcribed, excluding company critical information.

Data Analysis Procedures: The collected data in the transcripts was read through to gather an outline of its content, from which information codes were derived. The codes created consist of a high-level code with several possible sub-codes and are presented in the report before the interpretation of the data. After the coding, the connection to the respective interviewee was dropped and the data is presented anonymously and at group level. A note was, however, kept to state whether an opinion was stated by four or more interviewees.

4.2.2 Findings

This section contains the result of three cycles of analysis of the interviews. First, the zeroth cycle aims to get an overview of the boundaries of the data, which is used to define codes to annotate the data. The codes are then used in the first cycle to structure the data so that it can be presented in an efficient manner. Lastly, in the second cycle, the data is iterated over once more and presented in a concise format.

Coding

The declaration of codes was carried out through the extraction of relevant areas of discussion, which were subsequently divided into subtopics by the researchers. To keep interesting and relevant information mentioned by only one or a few interviewees, each area is given an "other" code, which is used when no other code fits and the material does not motivate the creation of a new separate code. The codes are presented in Table 4.4.


Table 4.4. Codes used in interview analysis.

High-level code | Sub-level code | Description
[Non-det:       |                | The perception system is a non-deterministic system
                | ...:oracle]    | Lack of test oracle
                | ...:data]      | Lack of large amounts of logged data
                | ...:sim]       | Lack of high quality simulation environment
                | ...:quant]     | Lack of quantitative measure of performance
[Arch:          |                | Lack of agreement on the architecture
                | ...:who]       | Lack of definition of which issues should be solved where
[Valid:         |                | Validation of perception
                | ...:arch]      | Lack of validation architecture and a definition of sufficient performance
                | ...:level]     | Mention of high versus low level validation
[Lid:           |                | LiDAR sensor specific limitation
                | ...:place]     | Sensor position on a truck is not rigid
                | ...:res]       | Sensor resolution is low
                | ...:time]      | Time synchronization is low
                | ...:compu]     | Computational need is high
                | ...:dust]      | Sensor is sensitive to dust
                | ...:weather]   | Sensor is sensitive to different weather conditions
                | ...:oth]       | Other relevant mention
[Kpi]           |                | Mention of an applicable KPI
[Gen]           |                | General mention of interest

1st Cycle

In this section, the data is extensively presented, sorted per high-level code. As stated in Section 4.2.1, the information is presented anonymously. The high-level codes are: the perception system being non-deterministic (Non-det), architectural issues (Arch), issues related to the definition of validity in an automotive perception context (Valid), issues related directly to the LiDAR sensor (Lid), proposed KPIs that would give valuable information (Kpi) and general mentions of interest (Gen).


The perception system is non-deterministic

Several interviewees described how the process of validating perception systems is fundamentally challenged by the oracle problem. (The oracle problem is defined as the lack of a mechanism for determining whether a test has passed or failed [8].) The underlying issue, as stated by some interviewees, is the lack of a GT of the environment. A typical method to address the lack of a GT, according to most of the interviewees, is to manually annotate data or manually perform the comparisons of outputs through human visual inspection. Both methods include the assumption that all errors that need to be addressed would be found through human visual inspection.

To tackle the lack of a GT, a few interviewees proposed creating a simulation environment which could facilitate the extraction of GT in different formats. Some interviewees also concluded that there is a need for an extensive simulation environment to enable efficient test-driven development. To address the potential biases of manually created test cases, the simulation environment could be extended with methods to artificially generate new test cases based on corner cases in logged data.

Performing the testing retrospectively on logged data was also backed by several interviewees, as it facilitates the use of offline algorithms. Some interviewees raised the importance of validating the simulation environment itself, to be able to state the results as valid for a real-world application.

Validation and Architecture

Questions such as what should be included in the perception model, and what should be solved in other models, were amongst the most discussed topics in the interviews; specifically, the lack of conventions and architecture defining which disturbances should be addressed in a perception system and which could be handled in later planning stages. A specific dilemma that was mentioned was the possibility of increasing robustness against noise, from the surrounding environment or the system itself, by requiring several sensors to report the detection of each object before reporting it as an obstacle. The system will then tend to report a more consistent world representation, but by design it also induces a latency in detecting new obstacles. The latency in perceiving new obstacles could force planning algorithms to react more quickly, potentially reducing the consistency of the planned vehicle trajectory.

A method proposed by several interviewees to address the trade-off between sensitivity to new detections and giving a consistent world representation is to apply modularization to the perception system. If the perception system consists of several sub-modules, the modules could be tuned for different use-cases to enable customization of the final vehicle for different applications. For example, in an application where it is guaranteed that no humans are present, the detection module could include coarse filtering and relatively high latencies to ensure smooth driving and minimal wear of equipment. If the application is in an urban environment, the detection algorithm could be switched to something more suitable and less extensive.

Some interviewees also raised the lack of scientific methodology to objectively assess a perception system as an important current challenge. One reason for this


CHAPTER 4. IDENTIFYING LIMITATIONS FROM THE STATE OF THE ART ANDPRACTICE

was said to be the lack of conventions on how to architect a perception system. One interviewee proposed searching in different fields of research for similar applications to the challenges of automated driving. In new research areas, however, it is uncommon to conduct comparisons with other research topics, despite the fact that it would be beneficial for the researchers.

A common opinion among the interviewees was that most of the LiDAR-specific limitations could be addressed through architecture, e.g. by adding other sensors or inter-vehicle communication.

Testing on Different Architectural Levels

The uncertainty regarding how to determine when perception systems operate sufficiently well was one of the most discussed areas. Due to this, perception systems are developed to handle all types of situations rather than specific use-cases. Testing on a high functional level generally involves several independent variables, which can remove the possibility to track the influence of a parameter on the output of a test, or to trace an unwanted behavior back to where it originated. Several interviewees believed that to enable large-scale development, the perception system has to be validated standalone from the decision- and planning systems, to avoid the infeasible task of validating the entire automated driving system at once. The reason the validation task grows infeasible was said to be the complex dependencies between subsystems such as perception and planning. Furthermore, in some of the interviews it was concluded that a perception system should be built modular, with subsystems created for specific environments or missions, to enable customization to the intended application of the final vehicle.

Sensor Specific Limitations

In the interviews, several limitations of the current LiDAR sensors on the market were mentioned. The limitations and other issues related to LiDAR sensor technology are presented below and summarized in Table 4.5.

Non-rigid displacements A LiDAR sensor mounted on a truck is often placed on the cabin. The cabin of a truck is typically air-suspended to enable a comfortable ride for the driver. The suspension system induces large movements between the mounted sensor and the chassis or ground. This creates the complicated task of aligning LiDAR sensor readings with data from other sensors. The task is especially challenging for long-distance detections, since the usage of the LiDAR sensor depends on having a known pose.

High amount of output data A LiDAR sensor streams a large number of data points, which requires a computing platform with matching performance to enable a real-time system.

Short lifetime The currently most used LiDAR sensor devices include a mechanical moving mechanism with generally lower life expectancy than comparable sensor technologies.

Dust A LiDAR sensor is sensitive and is likely to return false detections or miss objects when encountering dust or other clouds of small particles, which is common in


4.2. INTERVIEWS WITH EXPERTS

many applications for an automated vehicle. Signal processing could, through the recording of multiple echoes rather than only the first reflection from each beam, increase the robustness against dust and other semi-transparent targets.

Sensitivity to weather Similar to dust, a LiDAR sensor is likely to have degraded performance in several common weather conditions, such as rain, snow and fog.

Blooming/self dazzling There exist situations, for example when faced with objects of high reflectivity, where beams are mixed up and the precision is severely degraded through a blooming or self-dazzling effect.

Capture of dynamic objects LiDAR sensor technology does not natively support capturing the dynamic properties of objects, though there exist methods to derive this information through post-processing of the sensor readings.

Varying data quality The quality of LiDAR sensor data depends on many factors in the environment, such as the amount of sunlight (since sunlight includes electromagnetic radiation at the same wavelength as the one used by the LiDAR sensor). This gives a varying precision of the LiDAR sensor data in different environments.

Data synchronization To benefit from the high precision and high update frequency of a LiDAR sensor, sufficiently precise time stamping is required to synchronize with other sensor readings.

Insufficient resolution to capture thin objects Since the LiDAR sensor surveys the environment with thin beams, objects with a smaller width than the distance between two beams risk being missed. Examples of such objects are bars or lamp posts at a far distance.

Table 4.5. Interviews with the experts - Summary of limiting factors of the current LiDAR sensor technology or scenarios where LiDAR sensors have degraded performance. The factors are presented in no particular order.

Ind.  Limitation
1     Non-rigid displacements
2     Output of a high amount of data
3     Short lifetime
4     Dust
5     Sensitivity to weather
6     Blooming/self dazzling
7     Capturing of dynamic objects
8     Varying data quality
9     Data synchronization
10    Insufficient resolution to capture thin objects
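To illustrate limitation 10 with a rough calculation: two adjacent beams separated by an angular resolution Δθ are about z·tan(Δθ) apart laterally at range z. A small sketch (the 0.18° figure is taken from the simulated sensor in Table 5.3; the function name is ours):

```python
import math

def beam_spacing(range_m: float, angular_resolution_deg: float) -> float:
    """Approximate lateral gap between two adjacent LiDAR beams at a
    given range, assuming evenly spaced beams."""
    return range_m * math.tan(math.radians(angular_resolution_deg))

# At 100 m with a 0.18 deg horizontal resolution, the gap is roughly
# 0.31 m, so a lamp post or bar thinner than that can fall entirely
# between two beams and be missed.
gap = beam_spacing(100.0, 0.18)
```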


Proposed Performance Metrics

The interviewees proposed the performance metrics of world representations, shown in Table 4.6, as usable in evaluation during the development of perception or adjacent systems. The performance metrics represent the information the interviewees stated as valuable for making a design or parameter choice.

Table 4.6. Interviews with the experts - Proposed performance metrics of world representation.

Ind.  Performance metric
1     Ratio of detected objects over total amount of frames
2     Uncertainty of longitudinal and latitudinal distances
3     Uncertainty of the pose of detected objects
4     Amount of false positive detected objects
5     Variance over time delays between sensor reading and world representation

To measure uncertainty, metrics such as Probably Approximately Correct bounds (as defined by Valiant [55]) or confidence bounds based on standard deviation were proposed.

2nd Cycle

The data gathered in the interviews is here condensed into a bullet point list.

• The perception system is non-deterministic without conventions of how tovalidate sufficient performance.

– Targets that are assumed easy to detect, but seldom encountered, could induce unexpected behaviour.

– There is a need for an extensive simulation environment to enable efficient test-driven development. To address the potential biases of manually created test cases, the simulation environment could be extended with methods to artificially generate new test cases based on corner cases in logged data.

– Validation of a simulated environment is crucial to define what could be verified through simulation and what needs to be proven in a real-world setting.

– The oracle problem - What is GT in perception and how do you verify it?

– Testing will naturally be biased towards common scenarios and scenes.



– Retrospective testing on logged data facilitates algorithms that include computationally expensive methods to generate higher-resolution representations of the environment to be used in testing.

• There is a lack of conventions and requirements for what an automotive perception system should manage. More specifically, there is a lack of cost-efficient scientific methodology to objectively evaluate perception.

– The lack of requirements could result in an ad hoc trade-off between uptime and robustness in a safety context.

– The requirements will vary between ODDs. For example, in some demarcated settings, a coarse and slow adaption to new obstacles could be feasible, which enables a more consistent trajectory. This is not suitable in, for example, an urban environment.

– New research areas do occasionally ”reinvent the wheel”. It is important to make sure that benefits are drawn from other fields with similar applications.

• An automated system is very complex. The aim should not be for a perfect system that can handle all situations. Defined scenarios should be solved and limited to specific use-cases to enable a feasible verification process.

• To enable large scale development, the perception must be validated standalone from the decision- and planning systems, to avoid an infeasible task of validating the entire automated vehicle.

– There is a contradiction between the information gain in high-level testing versus low-level testing. High-level testing can verify sufficient performance in use cases directly but could lead to an arduous troubleshooting process; low-level testing has the opposite properties.

• Most of the limitations with the LiDAR sensor can be addressed through architecture, e.g. by adding other sensors or inter-vehicle communication.

• Current LiDAR sensors on the market have the following limitations:

– The sensor is commonly placed on the cabin (when mounted on a truck), which does not have a rigid displacement from the chassis. This creates the complicated task of aligning the LiDAR sensor readings with data from sensors on other parts of the truck. The task is especially challenging for long-distance measurements since the usage of the LiDAR sensor is dependent on having a known pose.

– Current LiDAR sensors on the market do not robustly give a high definition output.

– Current LiDAR sensors have a low life expectancy.


– The high update frequency of LiDAR sensors makes time synchronization difficult.

– LiDAR sensors generate a high amount of data which requires high computational power.

– LiDAR sensors do not natively capture the dynamic properties of objects.
– LiDAR sensors could suffer from ”blooming” or reflection glare if an obstacle is placed too close or has high reflectivity.
– LiDAR sensors are sensitive towards aerosols and will have a significantly lower performance if exposed to scenarios including e.g. rain or dust. With signal processing, recording multiple echoes rather than only the first reflection from each beam, an increase in the robustness towards aerosols could be achieved.
– LiDAR sensors risk missing objects with a smaller width than the distance between two beams.

4.2.3 Expert Opinion

The semi-structured interviews were followed up with a questionnaire to the interviewees, where they could rank the results of the SLR found in Section 4.1. The ranking consisted of assigning a Score (as defined in Table 4.7) to each entry in Table 4.2.

Table 4.7. Expert Opinion - Score description.

Score  Description
1      Negligible
2      Something to have in mind
3      Needs to be dealt with
4      High importance
5      Crucial to handle for a functioning perception system

The questionnaire had in total 9 respondents (1 declined). The average score and the number of respondents for each respective question are shown in Table 4.8.


Table 4.8. Expert Opinion - Questionnaire result.

Limitation                     Average score  Number of respondents
Material/Surface               3.2            6
Rain                           3.8            7
Fog/Mist/Haze                  3.4            7
Snow                           3.8            7
Road dirt on sensor cover      4.2            8
First detection close object   3.2            7
Wet roadway causes road spray  4.0            8
Dust                           4.0            7
Wavelength related             4.0            8
Sunlight                       3.3            7
Temperature                    3.0            7
Vibrations                     3.6            6
Interference                   3.3            7
Remote attacks                 2.8            7
Objects from long range        4.0            7


Chapter 5

Experimental Study

This chapter presents the use case, test scenario, evaluated limitations and the prototype architecture.

5.1 Experiment Design

To further investigate the concept of a LiDAR-based PGT, a PGT-generating algorithm was proposed, implemented and validated on a set of test cases. The proposal addressed a subset of the findings from the state of the art review in Chapter 4. The addressed findings were pragmatically chosen with respect to the feasibility of implementation and test case design. The addressed findings are:

1. A clearly scoped use-case.

2. An offline algorithm.

3. Implications of different rain intensities.

4. Implications of different reflectivity of obstacles.

The results were derived by running a test scenario multiple times with different scenes to determine the robustness of the PGT against the respective limitation's effect on quality. To enable an efficient assessment, a physics-based [56] [57] simulation platform was used to simulate a world from which both an actual GT and sensor readings were extracted. The input to the PGT algorithm is sensor readings and the output is the PGT represented in an OGM format.

5.1.1 Use-Case

Following the conclusion from Section 4.2, ”solve defined scenarios and limit to specific use-cases”, a use-case of transportation between A and B is chosen. The transportation takes place in an open landscape where the ego-vehicle is the


only object in motion. There are several different static objects in this area. The use-case is illustrated in Figure 5.1.

Figure 5.1. Schematic overview of a use-case

Investigating the limitations found in Table 4.2, different material/surface and rain appear to be two limitations that may occur in this use-case.

Rain

To evaluate how rain affects the PGT, simulation of different rain intensities and their effect on the quality of LiDAR sensor readings is proposed. In Table 5.1, a mapping of rain descriptions to rain amounts is shown [58].

Table 5.1. Interpretation of rain intensities [58].

Description      Rain intensity [mm/h]
Light rain       0.5
Moderate rain    0.5-4
Heavy rain       4
Light shower     2
Moderate shower  2-10
Heavy shower     10-50

Reflectivity

Issues with varying responses from different targets are evaluated by simulating different levels of reflectivity of the target vehicle in the simulation model. Typical values of reflectivity are seen in Table 5.2.

Table 5.2. Typical reflectance of common targets for light, λ = 905 nm.

Target            Reflectivity coefficient  Source
White glossy car  0.62                      [59]
Black glossy car  0.04                      [59]
Human skin        0.55                      [60]
Soil              0.35                      [61]


5.1.2 Pseudo Ground Truth Generating Algorithm

This thesis, as mentioned in Section 1.3.3, includes an accumulation strategy over multiple time frames to increase the resolution and accuracy of the world representation. The accumulation strategy is deployed to reduce the impact of noise induced by the models of rain and reflectivity. The accumulation is performed as an offline process where all point clouds are transformed into global coordinates and merged.
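The transform-and-merge step can be sketched as follows (a minimal pure-Python illustration; the frame layout and function names are ours, while the actual implementation runs as ROS nodes as described in Section 5.2.2):

```python
from typing import List, Tuple

Point = Tuple[float, float, float]

def quat_rotate(q: Tuple[float, float, float, float], p: Point) -> Point:
    """Rotate point p by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    px, py, pz = p
    # Expansion of q * p * conj(q) as a rotation matrix applied to p.
    rx = (1 - 2*(y*y + z*z))*px + 2*(x*y - w*z)*py + 2*(x*z + w*y)*pz
    ry = 2*(x*y + w*z)*px + (1 - 2*(x*x + z*z))*py + 2*(y*z - w*x)*pz
    rz = 2*(x*z - w*y)*px + 2*(y*z + w*x)*py + (1 - 2*(x*x + y*y))*pz
    return (rx, ry, rz)

def accumulate(frames) -> List[Point]:
    """Merge per-frame local point clouds into one Accumulated Point
    Cloud (APC), given the recorded ego pose of each frame. Each frame
    is (ego_position, ego_orientation_quaternion, local_point_cloud)."""
    apc: List[Point] = []
    for ego_position, ego_orientation, cloud in frames:
        for p in cloud:
            r = quat_rotate(ego_orientation, p)
            apc.append((r[0] + ego_position[0],
                        r[1] + ego_position[1],
                        r[2] + ego_position[2]))
    return apc
```

For example, a single frame recorded with the ego-vehicle at (10, 0, 0) and identity orientation maps the local point (1, 0, 0) to the global point (11, 0, 0).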

5.2 Prototype Architecture

The prototype of the PGT generating algorithm is implemented using multiple environments. Firstly, the data gathering and creation of scenarios are supported by an appropriate simulator or a real-world system outputting a sensor representation of the scene. The ego-position is recorded to enable a transform into common coordinate systems in an offline algorithm. After the data is preprocessed, an accumulation module creates Accumulated Point Clouds (APCs), followed by projection onto an OGM. Lastly, the OGMs are analyzed through a comparison between GT and PGT. The entire system setup overview is shown in Figure 5.2.

Figure 5.2. Implementation architecture.

5.2.1 Simulation Environment

To ensure comparable results of the effect of the chosen parameters, the world is simulated through the AD-EYE [56] platform, which uses the physics-based tool PreScan to enable uncomplicated creation of scenarios and a sensor-based representation of them.

The platform has Robot Operating System (ROS) as underlying middleware, which comes with a large community and a considerable number of large open source projects in the field of automated driving, including topics such as perception. In the PreScan GUI (a screenshot is seen in Figure 5.3), components like actors, obstacles and trajectories are easily deployed to the scene through a drag-and-drop principle. The settings are also easily confirmed through graphical feedback, for example sensor settings, as shown in Figure 5.4.


Figure 5.3. The PreScan GUI.

Figure 5.4. Sensor settings in PreScan.


The simulation is captured using ”rosbag”, the logging package of ROS. With rosbag, the communication in the ROS network is recorded in a file which can be played back, mimicking the signals from the time of the recording.

LiDAR Model

The parameters of the LiDAR sensor model in the simulation are seen in Table 5.3, together with values set based on the LiDAR sensors presented in Table 3.1. In Appendix B the sensor model is described in depth.

Table 5.3. LiDAR Parameters used in the simulation.

LiDAR model parameter           Value
Beam range                      250 m
Azimuth FoV                     45°
Number of beams                 512
Wavelength                      900 nm
Divergence angle                0.13°
Reflections recorded            One (the first)
Angular resolution, horizontal  0.18°
Angular resolution, vertical    0.35°
Distance resolution             3 cm

5.2.2 Transforms and Data Restructuring

Since the output from the simulation is recorded in ROS, the PGT generating algorithm is also built by launching each computing module as a node in a ROS network. This enables simple playback of the logged data. The nodes transform and structure data to enable an accumulation of point clouds, followed by the accumulation itself, as shown in Figure 5.5. The rain model takes simulated LiDAR sensor data as input and is therefore run in the ROS network. The transform of the LiDAR data is derived from the recorded ego-pose in the simulation software. A detailed representation of the PGT generating algorithm is found in Appendix C.

Figure 5.5. Flowchart of data processing including the rain model.


Before being prepared for accumulation, the LiDAR sensor data is fed to the rain model described in Section 5.2.2. The output of the rain model is a point cloud containing all of the data points received by the sensor and modeled by the rain model in one time step. The point cloud is then transformed to global coordinates.

To minimize the influence of unsynchronized data, the ego-position is recorded in the simulation and matched to the sensor reading point clouds, to be used as the basis for the transform from local to global coordinates. The data of the exported logfile from the simulation is shown in Table 5.4.

Table 5.4. Logfile extracted from simulation.

Data                       Frame   Format
Sensor reading from LiDAR  Local   Point cloud
Ego-position               Global  Quaternion
Sensor reading from GT     Global  Point cloud
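The matching of recorded ego-positions to sensor readings can be sketched as a nearest-timestamp lookup (a simplified illustration; the function names are ours and the real pipeline works on ROS message timestamps):

```python
def nearest_pose(stamp, pose_log):
    """Return the recorded ego pose whose timestamp is closest to the
    timestamp of a point-cloud frame. pose_log is a list of
    (timestamp, pose) tuples."""
    return min(pose_log, key=lambda entry: abs(entry[0] - stamp))[1]

def match_poses(cloud_stamps, pose_log):
    """Pair every point-cloud timestamp with its nearest ego pose, to
    serve as the basis of the local-to-global transform."""
    return [nearest_pose(t, pose_log) for t in cloud_stamps]
```

With a pose log recorded at a higher or comparable rate to the LiDAR frames, the residual timing error of this matching is bounded by half the pose-logging period.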

After the APCs are created, they are fed into an adaption of the grid mapping algorithm Points2costmap from the open source project Autoware [20]. The OGMs are recorded into a logfile in the format shown in Table 5.5, containing both GT and PGT representations. A logfile is generated for each scenario.

Table 5.5. Input to evaluation.

Data  Frame   Format
PGT   Global  OGM
GT    Global  OGM
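The projection of an APC onto an OGM can be sketched as a binning step (a simplified binary-occupancy version; the actual implementation adapts Points2costmap from Autoware [20], which includes additional filtering, and the names here are ours):

```python
def points_to_ogm(points, origin, resolution, width, height):
    """Project a 3-D point cloud onto a 2-D occupancy grid map (OGM):
    a cell is marked occupied (1) if at least one point falls inside
    it, otherwise it stays free (0). origin is the (x, y) of cell
    (0, 0) and resolution is the cell side length in metres."""
    grid = [[0] * width for _ in range(height)]
    ox, oy = origin
    for x, y, _z in points:
        col = int((x - ox) / resolution)
        row = int((y - oy) / resolution)
        if 0 <= col < width and 0 <= row < height:
            grid[row][col] = 1
    return grid
```

For example, with a 1 m resolution and origin (0, 0), a point at (3.2, 1.7, 0.0) occupies the cell in row 1, column 3.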

Model of Rain

The model of rain is created as a ROS node and deployed before the data preprocessing in the ROS network. The model is adapted from the mathematical model proposed by Goodin et al. [62], which expresses an algorithm for the rain's influence on LiDAR point clouds. The model gives a quantitative prediction of the LiDAR sensor's performance in rain, given the rain intensity, R, and the maximum surveying range of the sensor. The algorithm consists of three steps performed per point in a point cloud: (1) modify the coordinate of the point, (2) modify the intensity of the point and (3) exclude points with a new intensity under the threshold. The steps are shown in Algorithm 1.

The new distance to the point depends on the actual position, z, and the rain intensity, R. As indicated by Filgueira [63], almost all position errors are less than 2 % when using a LiDAR sensor in rain. Therefore, range errors are modeled


by a normal distribution, ν, around the actual position with a standard deviation of σ = 0.02z(1 − e^(−R))². The modified range, z′, is expressed as

z′ = z + ν(0, 0.02z(1 − e^(−R))²).    (5.1)

The new intensity value depends on a change from the old intensity value, which gives

p′ = e^(−2αR^β z) p.    (5.2)

The threshold for each point is computed as shown in

P_min = 0.9/(π z_max²),    (5.3)

where z_max is the maximum range of the LiDAR sensor. The point change can be expressed by combining Equations 5.1, 5.2 and 5.3, as shown in Algorithm 1.

Algorithm 1 Rain Model
Input: p, z
Output: p′, z′
1: z′ ← z + ν(0, 0.02z(1 − e^(−R))²)
2: p′ ← e^(−2αR^β z) p
3: P_min ← 0.9/(π z_max²)
4: if p′ ≤ P_min then
5:     return None
6: else
7:     return p′, z′

5.2.3 Analytics Module

The analytics module consists of four parts:

1. Read OGMs from logfiles to Matlab workspace.

2. Compare the grid representations of GT and PGTs and calculate the respective KPIs.

3. Combine all the KPI values for each limitation setting.

4. Visualize all KPI values relative to the parameter setting.

The respective steps are shown in Figure 5.6.


Figure 5.6. Setup and workflow for the analytics environment.

Key Performance Indicator Modules

The implemented KPIs are Map Score, Pearson's Coefficient and the Variance, described in Section 3.3. Cells where both the PGT and GT show a zero were neglected when calculating Map Score and Pearson's Coefficient, due to the large number of unoccupied cells in the data.
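The comparison step can be sketched as below (a minimal Python illustration rather than the Matlab used in the thesis; only Pearson's Coefficient is shown, with the zero-zero cells excluded as described above):

```python
import math

def pearson_excluding_empty(pgt, gt):
    """Pearson's Coefficient between two flattened OGMs, ignoring
    cells that are unoccupied (zero) in both grids."""
    pairs = [(a, b) for a, b in zip(pgt, gt) if not (a == 0 and b == 0)]
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Without the exclusion, the overwhelming number of cells that are free in both grids would dominate the statistic and inflate the apparent agreement between PGT and GT.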

5.3 Test Scenarios

To test the chosen use-case from Section 5.1.1, simple scenarios of a vehicle driving from one point to another are used. The scenarios include the ego-vehicle as the only actor, passing one static element represented by a parked vehicle on the opposite side of the road, as seen in Figure 5.7. The scenarios differ in their sceneries.

Figure 5.7. Third person view of the ego-vehicle with the parked vehicle further down the road.


5.3.1 Scenery Variation

The LiDAR sensor specific limitations regarding precipitation and target surface are evaluated by varying the environmental conditions and the static element. The environmental condition is varied through the rain intensity, R ∈ [0, 50] mm/h, in the rain model. The static element is varied through the reflectivity of the parked vehicle, ρ ∈ (0, 1], in PreScan.

5.3.2 Use of Data

The Distance, D, is the distance from the ego-car to the parked car at which the accumulation of data starts. In all scenarios, the accumulation starts at Distance D and ends after the ego-car has passed the parked car.

5.3.3 Scenarios

Different scenarios were created by varying the reflectivity, ρ, of the parked vehicle, the threshold distance to the parked vehicle, D, and the rain intensity, R. A generic overview of a scenario is shown in Figure 5.8.

Figure 5.8. Generic overview of the scenarios. The ego-vehicle drives along the arrow, with the sensor readings from the non-dashed subdistance used in the generation of the PGT.

Different sets of scenarios have been combined into three test cases, as shown in Table 5.6.


Table 5.6. Declaration of Test Cases I-III; an extensive declaration is found in Appendix D.

               Use of Data            Scenery
Test Cases     D [m]      R [mm/h]    ρ
Test Case I    150        [0, 50]     0.5
Test Case II   150        2           [0.02, 1]
Test Case III  [15, 150]  [0, 30]     0.5


Chapter 6

Results

In this chapter the data gathered is collectively presented.

6.1 Qualitative Results for Q1, Q2 and Q3

This section contains a summary of the most important results; for the extensive presentation of the qualitatively obtained data, see Sections 4.1.2 and 4.2.2.

6.1.1 Identified Challenges with Evaluating Perception

This section presents the findings of the state of the art and practice review (Section 4.2.2) on current issues with evaluating perception systems, together with methods to address these issues. The current issues are:

• The environmental perception is a non-deterministic problem without conventions of how to validate sufficient performance.

• There is a lack of conventions and requirements of what an automotive perception system should manage. More specifically, there is a lack of cost-efficient scientific methodology to objectively evaluate the perception.

• Verification of systems downstream of perception, such as planning algorithms, depends on data provided by the perception system; hence, perception needs to be verified as a standalone system.

The methods found to address the current challenges, obtained through expert opinion, are:

• An automated system is a very complex system; do not aim for a perfect system that handles all situations. Solve defined scenarios and limit to specific use cases to enable a feasible verification process.


• To enable large scale development, the perception system has to be validated standalone from the decision- and planning systems, to avoid an infeasible task of validating the entire automated vehicle.

• Most of the limitations with the LiDAR sensor can be addressed through architecture, e.g. by adding other sensors or inter-vehicle communication.

• An extensive simulation environment could enable efficient test-driven development. To address the potential biases of manually created test cases, the simulation environment could be extended with methods to artificially generate new test cases, based on corner cases in logged data.

6.1.2 Limitations with the LiDAR Technology

The findings of the SLR (Section 4.1.2) and the interviews with the experts (Section 4.2.3) are shown in descending order, where the rank depends on the number of citations in the SLR and the scores given in the interviews with the experts. Table 6.1 shows the found limitations.


Table 6.1. Found LiDAR specific limitations from the SLR and the interviews with the experts.

Ind.  Limitation

Most critical limitations
1     Rain
2     Fog/Mist/Haze
3     Snow

Critical limitations
4     Material/surfaces

Important limitations
5     Road dirt on sensor cover
6     First detection close object
7     Wet roadway causes road spray
8     Dust
9     Wavelength related
10    Sunlight
11    Temperature
12    Vibrations
13    Interference
14    Objects from long range
15    Placement of the sensor
16    Computational power
17    Lifetime
18    Blooming/self dazzling
19    Dynamic environment
20    Data quality
21    Time synchronization
22    Resolution

6.2 Quantitative Results

This section shows the results of the findings from the experimental study defined in Chapter 5.

6.2.1 The Influence of Rain

Figure 6.1 shows Pearson's Coefficient and Map Score when comparing the PGT with the GT, based on data obtained from Test Case I, which starts the accumulation 150 m before the parked car, with a reflectivity value ρ of 0.5 and the rain intensity, R, varied between 0 and 50 mm/h.


Figure 6.1. Map Score and Pearson’s Coefficient for Test Case I.

Both Pearson's Coefficient and Map Score show an inverse relationship between rain intensity, R, and the respective score. Hence, the PGT is a better representation of the GT at lower rain intensities than at higher ones. Furthermore, Figure 6.2 shows the upper and lower limits of a 99.9 % confidence interval of Pearson's Coefficient.


Figure 6.2. Pearson’s Coefficient with a 99.9 % confidence interval for Test Case I.

The lower limit is always above 0, which indicates that there is always a positive association between GT and PGT, with 99.9 % confidence, according to Pearson's Coefficient.
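The thesis does not spell out how such an interval is obtained; one standard way, sketched here as an assumption, is the Fisher z-transformation of Pearson's Coefficient:

```python
import math

def pearson_confidence_interval(r, n, z_crit=3.2905):
    """Two-sided confidence interval for a Pearson's Coefficient r
    computed from n cell pairs, via the Fisher z-transformation.
    The default critical value 3.2905 is the standard-normal quantile
    for a 99.9 % two-sided interval."""
    z = math.atanh(r)             # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)   # standard error in z-space
    return (math.tanh(z - z_crit * se),
            math.tanh(z + z_crit * se))
```

The interval tightens as the number of compared cells grows, which is consistent with the narrow bands around the curve in Figure 6.2.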

Test Case I was further investigated by determining the variance between the PGT and GT. Figure 6.3 shows that the variance is high at rain intensities R between 4 and 14 mm/h, followed by a steady decrease. At high rain intensities, the sensor range decreases and fewer data points are collected, which leads to lower variance, as seen in Table 6.2.

Figure 6.3. Variance in X for Test Case I.

Figure 6.4 shows a subset of the PGT and GT OGMs from Test Case I with different rain intensities, R. The number of occupied cells decreases with increasing rain intensity, which in turn lowers the variance, as shown in Figure 6.3.

Figure 6.4. PGT representations (white) of a target vehicle and GT (blue) with different levels of rain intensity, R.

Figure 6.5 shows the mapping of Pearson's Coefficient based on data obtained from Test Case III. The distance D at which the accumulation starts varies along the x-axis. The different lines represent different rain intensities.

Figure 6.5. Test Case III: Performance of PGT depending on a distance threshold.

Figure 6.5 shows a decreasing correlation between PGT and GT, according to Pearson's Coefficient, as the rain intensity R increases. It is also seen that the value of Pearson's Coefficient depends on the distance limit D; the best value is not found at the point with the most accumulated data, i.e., at the maximum distance limit. The distance to the target at which the first detection is made is shown in Table 6.2.

Table 6.2. Distance at first detection at different rain intensities.

Rain, R [mm/h]    Distance at first detection, D [m]
0                 250*
0.5               120
1                 105
10                45
30                30

* 250 m according to the LiDAR sensor device Ouster OS-2 specification, but the scenario started at a distance of 150 m.

6.2.2 The Influence of Reflectivity

Test Case II evaluates how Pearson's Coefficient and Map Score change depending on the reflectivity of the parked car, as shown in Figure 6.6. Test Case II starts the accumulation from the distance limit D at 150 m. The blue lines show a rain intensity R of 0 mm/h and the red lines a rain intensity R of 2 mm/h. The dotted lines measure Map Score and the solid lines Pearson's Coefficient. The correlation between GT and PGT is shown to be significant with a 99.9 % confidence interval through Pearson's coefficient. In this test case, neither Map Score nor Pearson's coefficient indicates that the PGT is strongly affected by the change in reflectivity, although Map Score indicates that the quality of the PGT decreases at reflectivity values below that of a typical white car, ρ = 0.5.

Figure 6.6. Test Case II: Scores depending on the reflectivity of the target, ρ.

Chapter 7

Discussion and Future Work

In this chapter, an analysis of the results is presented as a discussion, and conclusions on the contribution are drawn. Lastly, potential extensions of related topics, not evaluated in the scope of this thesis, are presented.

7.1 Discussion

The discussion is divided into three sections relating to different topics: the state of the art and practice review, the experimental study, and threats to validity.

7.1.1 State of the Art and Practice Review

Throughout the performed studies, several limitations of the initial idea of creating a PGT emerged. Since the performance of a LiDAR sensor depends on the environment it operates in, as shown in Section 4, the requirements on a perception system depend on external factors, such as the uncertainty of the environment. For example, driving in a mine will most likely induce noise in a LiDAR sensor through dust clouds, but since one could also assume that no humans will be present in the driving path, coarse filtering could be applied. In another ODD, such as urban driving, the assumption of the absence of humans does not apply; hence, the same coarse filtering cannot be used. It has been shown that the settings of a perception system should depend on the scene where it is used, supporting the strategy of building a non-generalized perception system. We propose that a PGT-generating algorithm should be created for use in the specific environments of different applications.

In the search for the requirements for PGT generation, we found that the industry lacks conventions for determining what constitutes sufficient performance of a perception system as a standalone system. Hence, it is not possible to quantitatively determine the boundary between a valid and an invalid PGT. But if a boundary is set subjectively, the valid PGT could potentially be used in the development of perception systems to evaluate the algorithms used and to set parameters.

In Section 6.1.2, a sizeable list of specific limitations of current LiDAR sensors was compiled based on the findings from the SLR and the interviews with the experts. The ranking of the limitations by criticality was derived from the number of publications found and how often the respective limitations were mentioned in the interviews. How often a specific limitation is mentioned depends on several factors, such as the likelihood of it being present in the environment, it being difficult to address, or it having a great impact on the quality of the sensor data. The classification by criticality therefore does not disclose the main reason why a specific limitation is critical.

7.1.2 Experimental Study

As seen in Figure 6.1, the current setup shows an inverse relationship between rain intensity and the quality of the PGT, meaning that the PGT performs better at lower rain intensities. In Figure 6.2, the most dramatic decrease in the quality of the PGT, as measured by Pearson's coefficient, is seen from no rain up to a rain intensity of around 15 mm/h, after which increased rain intensity has little effect on the score, though the confidence interval widens. Since a significant correlation is shown at each point, all points warrant further investigation. As seen in Section 1.3, presentation of the results in real-world applicable units is easier to interpret than a qualitative score such as Map Score. In Figure 6.3, the variance of the position of a parked vehicle in the longitudinal direction is determined to be at most 1.6 m. At the same time, Table 6.2 indicates that the first-hit distance decreases heavily with increasing rain intensity.

This means that the range from the ego vehicle within which a valid PGT can be found decreases as rain intensity increases. Figure 6.5 shows that varying the accumulation distance D can reduce the variance, which could satisfy more safety-critical applications depending on the chosen use case.

Reducing the reflectivity coefficient reduces the reflected intensity but does not, in these experiments, induce noise. The lowered reflected intensity then only affects the distance at which the first detection of a target is made, which limits the radius within which a valid PGT can be found.

The generation of PGT in this thesis assumes highly accurate data synchronization and access to accurate positioning. To facilitate an accurate position in a real-world setting, technology such as RTK-GPS, which combines a GPS signal with information on the position of a known reference station, could be used. However, the quality of the RTK information degrades in bad weather, which gives a superposition of errors since the quality of the PGT behaves similarly.

The use of occupancy grids to filter point cloud data enables analysis with several statistical methods. It could, however, be argued that important information is lost in the filtering, and the results also depend on the parameters of the grid mapping algorithm, such as cell size and the assumed reliability of sensor readings, i.e., how many hits, and how close in time, are needed to define a cell as occupied. These parameters are not evaluated in the scope of this thesis, but since no absolute results

are analyzed, the results are assumed valid in that context.

To find the scenarios in which the proposed algorithm can generate a valid PGT, requirements need to be specified, for example the reflectivity of targets. If specified, through for example expert opinion, a simulation environment such as the one shown in this thesis can be used to find which objects or scenes will be included in the valid PGT and for which objects or scenes precautions need to be taken.
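The grid-mapping parameters discussed above (cell size, hits required for occupancy) can be made concrete with a small sketch; the values are illustrative assumptions, not the parameters used in the thesis:

```python
import numpy as np

def build_occupancy_grid(points_xy, cell_size=0.2, extent=50.0, min_hits=2):
    """Binary occupancy grid from accumulated 2-D LiDAR hits.

    cell_size, extent and min_hits are illustrative values of the tunable
    parameters discussed above: a cell is marked occupied only if it
    collects at least min_hits returns, which filters isolated noise hits.
    """
    n = int(2 * extent / cell_size)
    counts = np.zeros((n, n), dtype=int)
    # Map metric coordinates in [-extent, extent) to integer cell indices.
    idx = np.floor((np.asarray(points_xy) + extent) / cell_size).astype(int)
    inside = ((idx >= 0) & (idx < n)).all(axis=1)
    for i, j in idx[inside]:
        counts[i, j] += 1
    return (counts >= min_hits).astype(np.uint8)

# Three accumulated hits on a target, plus one stray noise hit.
pts = [[1.02, 2.0], [1.05, 2.03], [1.08, 2.07], [-10.0, 4.0]]
grid = build_occupancy_grid(pts)
```

Only the repeatedly hit cell survives the min_hits filter here; a larger cell size or lower threshold trades noise robustness against the information loss discussed above.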

7.1.3 Threats to Validity

In the planning of the SLR, we decided to exclude material published before 2015 to ensure relevance given the rapid evolution of LiDAR technology. This exclusion criterion could possibly have excluded valuable findings. Another threat to completeness is the choice of search strings, since they are all queries for findings that explicitly evaluate limitations of the LiDAR technology, possibly excluding publications that give information implicitly.

The choice of a semi-structured methodology for conducting interviews enables information gain outside of the interviewer's knowledge, since it includes a discussion led by the interviewee. The interpretation of the open-ended discussion carries the possibility of adding researcher bias. The researchers' biases are thus clarified together, and the results are peer-examined by independent actors. The choice of interviewees was made in cooperation with both academic and industrial supervisors, utilizing their knowledge of the field to find an even and relevant spread of input.

The experimental results are subject to sampling imperfections and a noise model dependent on pseudo-random data. There is a lack of multiple trials to evaluate the influence of these errors. But, as previously mentioned, no absolute conclusions are drawn from the data, and since no contradictory results were found, the data is deemed coherent and valid. The results also depend on the sensor model in the simulation environment (shown in Appendix B), implemented with parameters based on a common LiDAR sensor device on the market, with any noise neglected apart from the noise evaluated directly in this thesis.

7.2 Contributions

The limitations of building a perception system relying exclusively on LiDAR technology were investigated through an SLR and semi-structured interviews with industry experts. The limitations were ranked according to the number of appearances, weighted by the significance of the source, where the significance was determined by whether the limitation was quantitatively assessed in the literature or highlighted in the interviews.

Methods to generate GT representations based on sensor data were found through other related studies and by surveying industry experts through semi-structured interviews. The methods are presented together with the context in which they were used.

We proposed an algorithm to generate a PGT based on the findings of the SLR and the semi-structured interviews. The algorithm was implemented and tested in a simulated environment on a few LiDAR-specific limitations, namely rain and varied reflectivity of targets. The impact of the limitations was reduced through a data accumulation process in the generation of the PGT. Quantitative data is presented based on a sensor model of an off-the-shelf LiDAR sensor device. The architecture of the algorithm is modular and enables simple changes of settings, such as simulation parameters, and the implementation of additional functions (filtering algorithms etc.).

The precautions needed, if the proposed PGT-generating algorithm is used as GT when evaluating perception systems in development, depend on specifying numerical requirements, such as the maximum variance allowed for the PGT. If the requirements are specified, through for example expert opinion, the boundary of the valid PGT can be found in a simulation environment. Precautions, such as alternative testing methods, need to be employed to evaluate the scenes, or the detection of objects present in the test case, that are not included in the valid PGT.

7.3 Future Work

In the scope of this thesis, one scenario was assessed. To increase the application domain of the PGT, more scenarios should be examined, including for example dynamic objects, different environmental conditions, or multiple objects.

Future research could assess the design choices of the PGT-generating algorithm made in this thesis. An extension of this thesis could include alternatives such as different sensors, a combination of sensors, or the evaluation of different world representations to present a PGT.

Bibliography

[1] "Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles", SAE International Surface Vehicle Recommended Practice, SAE Standard J3016, 2018-06.

[2] R. Grewe, M. Komar, A. Hohm, S. Lueke, and H. Winner, "Evaluation method and results for the accuracy of an automotive occupancy grid", in 2012 IEEE International Conference on Vehicular Electronics and Safety (ICVES 2012), 2012-07, pp. 19–24. doi: 10.1109/ICVES.2012.6294297.

[3] International Organization for Standardization, "ISO 26262: Road vehicles – Functional safety", no. 1, 2011.

[4] ——, "ISO/WD PAS 21448: Road vehicles – Safety of the intended functionality", Unpublished standard currently in working draft status, no. 1.

[5] N. Kalra and S. M. Paddock, "Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?", Transportation Research Part A: Policy and Practice, vol. 94, pp. 182–193, 2016, issn: 0965-8564.

[6] A. Dokhanchi, H. B. Amor, J. V. Deshmukh, and G. Fainekos, "Evaluating perception systems for autonomous vehicles using quality temporal logic", in Runtime Verification, C. Colombo and M. Leucker, Eds., Cham: Springer International Publishing, 2018, pp. 409–416, isbn: 978-3-030-03769-7.

[7] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite", in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

[8] E. T. Barr, M. Harman, P. McMinn, M. Shahbaz, and S. Yoo, "The oracle problem in software testing: A survey", IEEE Transactions on Software Engineering, vol. 41, no. 5, pp. 507–525, 2015-05, issn: 0098-5589. doi: 10.1109/TSE.2014.2372785.

[9] H. Hajri, E. Doucet, M. Revilloud, L. Halit, B. Lusetti, and M. Rahal, "Automatic generation of ground truth for the evaluation of obstacle detection and tracking techniques", CoRR, vol. abs/1807.05722, 2018. arXiv: 1807.05722.

[10] T. Collins, J. Collins, M. Mansfield, and S. O'Sullivan, "Evaluating techniques for resolving redundant information and specularity in occupancy grids", in AI 2005: Advances in Artificial Intelligence: 18th Australian Joint Conference on Artificial Intelligence, Sydney, Australia, December 5-9, 2005. Proceedings, ser. Lecture Notes in Computer Science, vol. 3809, Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 235–244, isbn: 9783540304623.

[11] R. M. Baron and D. A. Kenny, "The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations", Journal of Personality and Social Psychology, vol. 51, no. 6, p. 1173, 1986.

[12] M. C. Martin and H. P. Moravec, "Robot evidence grids", Carnegie Mellon University, Robotics Institute, Tech. Rep., 1996.

[13] N. Suganuma and T. Matsui, "Robust environment perception based on occupancy grid maps for autonomous vehicle", in Proceedings of SICE Annual Conference 2010, 2010-08, pp. 2354–2357.

[14] M. Aldibaja, N. Suganuma, and K. Yoneda, "Lidar-data accumulation strategy to generate high definition maps for autonomous vehicles", in 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2017-11, pp. 422–428. doi: 10.1109/MFI.2017.8170357.

[15] J. W. Creswell and J. D. Creswell, Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications, 2017.

[16] B. Kitchenham, O. P. Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, "Systematic literature reviews in software engineering – a systematic literature review", Information and Software Technology, vol. 51, no. 1, pp. 7–15, 2009, issn: 0950-5849.

[17] R. Barcelos and G. Travassos, "Evaluation approaches for software architectural documents: A systematic review", in Memorias de la IX Conferencia Iberoamericana de Software Engineering (CIbSE 2006), 2006-01, pp. 433–446.

[18] S. Ulbrich, T. Menzel, A. Reschka, F. Schuldt, and M. Maurer, "Defining and substantiating the terms scene, situation, and scenario for automated driving", in 2015 IEEE 18th International Conference on Intelligent Transportation Systems, 2015-09, pp. 982–988. doi: 10.1109/ITSC.2015.164.

[19] J. R. Boyd, "The essence of winning and losing", Unpublished lecture notes, 1996.

[20] S. Kato, E. Takeuchi, Y. Ishiguro, Y. Ninomiya, K. Takeda, and T. Hamada, "An open approach to autonomous vehicles", IEEE Micro, vol. 35, no. 6, pp. 60–68, 2015-11, issn: 0272-1732. doi: 10.1109/MM.2015.133.

[21] National Coordination Office for Space-Based Positioning, Navigation, and Timing, GPS standard positioning service (SPS) performance standard, https://www.gps.gov/technical/ps/2008-SPS-performance-standard.pdf, [Online; accessed 2019-05-06].

[22] European GNSS Agency, What is Galileo?, https://www.gsc-europa.eu/galileo-overview/what-is-galileo, [Online; accessed 2019-05-06].

[23] BeiDou Navigation Satellite System, http://en.beidou.gov.cn/SYSTEMS/System/, [Online; accessed 2019-05-06].

[24] L. Wanninger, "Introduction to network RTK", IAG Working Group, vol. 4, no. 1, pp. 2003–2007, 2004.

[25] O. Cohen, Y. Edan, and E. Schechtman, "Statistical evaluation method for comparing grid map based sensor fusion algorithms", The International Journal of Robotics Research, vol. 25, no. 2, pp. 117–133, 2006.

[26] M. Jernigan, S. Alsweiss, J. Cathcart, and R. Razdan, "Conceptual sensors testing framework for autonomous vehicles", in 2018 IEEE Vehicular Networking Conference (VNC), 2018-12, pp. 1–4. doi: 10.1109/VNC.2018.8628370.

[27] M. U. de Haag, C. G. Bartone, and M. S. Braasch, "Flight-test evaluation of small form-factor lidar and radar sensors for sUAS detect-and-avoid applications", in 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), 2016-09, pp. 1–11. doi: 10.1109/DASC.2016.7778108.

[28] F. Castano, G. Beruvides, R. E. Haber, and A. Artunedo, "Obstacle recognition based on machine learning for on-chip lidar sensors in a cyber-physical system", Sensors, vol. 17, no. 9, 2017, issn: 1424-8220. doi: 10.3390/s17092109.

[29] Y. Li, Y. Wang, W. Deng, X. Li, Z. Liu, and L. Jiang, "Lidar sensor modeling for ADAS applications under a virtual driving environment", in SAE-TONGJI 2016 Driving Technology of Intelligent Vehicle Symposium, SAE International, 2016-09. doi: 10.4271/2016-01-1907.

[30] V. D. Silva, J. Roche, and A. M. Kondoz, "Fusion of lidar and camera sensor data for environment sensing in driverless vehicles", CoRR, vol. abs/1710.06230, 2017. arXiv: 1710.06230.

[31] N. Pinchon, O. Cassignol, A. Nicolas, F. Bernardin, P. Leduc, J.-P. Tarel, R. Bremond, E. Bercier, and J. Brunet, "All-weather vision for automotive safety: Which spectral band?", in International Forum on Advanced Microsystems for Automotive Applications, Springer, 2018, pp. 3–15.

[32] D. Gruyer, V. Magnier, K. Hamdi, L. Claussmann, O. Orfila, and A. Rakotonirainy, "Perception, information processing and modeling: Critical stages for autonomous driving applications", Annual Reviews in Control, vol. 44, pp. 323–341, 2017.

[33] S. Hasirlioglu, A. Kamann, I. Doric, and T. Brandmeier, "Test methodology for rain influence on automotive surround sensors", in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016-11, pp. 2242–2247. doi: 10.1109/ITSC.2016.7795918.

[34] S. Hasirlioglu and A. Riener, "A model-based approach to simulate rain effects on automotive surround sensor data", in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018-11, pp. 2609–2615. doi: 10.1109/ITSC.2018.8569907.

[35] R. H. Rasshofer, M. Spies, and H. Spies, "Influences of weather phenomena on automotive laser radar systems", Advances in Radio Science, vol. 9, 2011-07. doi: 10.5194/ars-9-49-2011.

[36] T. Fersch, A. Buhmann, A. Koelpin, and R. Weigel, "The influence of rain on small aperture lidar sensors", in 2016 German Microwave Conference (GeMiC), 2016-03, pp. 84–87. doi: 10.1109/GEMIC.2016.7461562.

[37] S. Hasirlioglu, I. Doric, C. Lauerer, and T. Brandmeier, "Modeling and simulation of rain for the test of automotive sensor systems", in 2016 IEEE Intelligent Vehicles Symposium (IV), 2016-06, pp. 286–291. doi: 10.1109/IVS.2016.7535399.

[38] H. Zhu, K. Yuen, L. Mihaylova, and H. Leung, "Overview of environment perception for intelligent vehicles", IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 10, pp. 2584–2601, 2017-10, issn: 1524-9050. doi: 10.1109/TITS.2017.2658662.

[39] M. Kutila, P. Pyykonen, W. Ritter, O. Sawade, and B. Schaufele, "Automotive lidar sensor development scenarios for harsh weather conditions", in 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), 2016-11, pp. 265–270.

[40] M. Kutila, P. Pyykonen, H. Holzhuter, M. Colomb, and P. Duthon, "Automotive lidar performance verification in fog and rain", in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018-11, pp. 1695–1701. doi: 10.1109/ITSC.2018.8569624.

[41] S. Hasirlioglu, I. Doric, A. Kamann, and A. Riener, "Reproducible fog simulation for testing automotive surround sensors", in 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), 2017-06, pp. 1–7. doi: 10.1109/VTCSpring.2017.8108566.

[42] M. F. Holder, P. Rosenberger, F. Bert, and H. Winner, "Data-driven derivation of requirements for a lidar sensor model", Graz Symposium Virtual Vehicle 2018, 2018.

[43] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, "Remote attacks on automated vehicles sensors: Experiments on camera and lidar", Black Hat Europe, 2015, Query date: 2019-04-03.

[44] P. Radecki, M. Campbell, and K. Matzen, "All weather perception: Joint data association, tracking, and classification for autonomous ground vehicles", arXiv preprint arXiv:1605.02196, 2016.

[45] T. S. Combs, L. S. Sandt, M. P. Clamann, and N. C. McDonald, "Automated vehicles and pedestrian safety: Exploring the promise and limits of pedestrian detection", American Journal of Preventive Medicine, vol. 56, no. 1, pp. 1–7, 2019.

[46] J. Steinbaeck, C. Steger, G. Holweg, and N. Druml, "Next generation radar sensors in automotive sensor fusion systems", in 2017 Sensor Data Fusion: Trends, Solutions, Applications (SDF), IEEE, 2017, pp. 1–6.

[47] T. Fersch, R. Weigel, and A. Koelpin, "A CDMA modulation technique for automotive time-of-flight lidar systems", IEEE Sensors Journal, vol. 17, no. 11, pp. 3507–3516, 2017-06, issn: 1530-437X. doi: 10.1109/JSEN.2017.2688126.

[48] S. Gnecchi and C. Jackson, "A 1×16 SiPM array for automotive 3D imaging lidar systems", in Proceedings of the 2017 International Image Sensor Workshop (IISW), Hiroshima, Japan, 2017, pp. 133–136.

[49] J. R. V. Rivero, I. Tahiraj, O. Schubert, C. Glassl, B. Buschardt, M. Berk, and J. Chen, "Characterization and simulation of the effect of road dirt on the performance of a laser scanner", in 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017-10, pp. 1–6. doi: 10.1109/ITSC.2017.8317784.

[50] G. Kim, J. Eom, and Y. Park, "An experiment of mutual interference between automotive lidar scanners", in 2015 12th International Conference on Information Technology - New Generations, 2015-04, pp. 680–685. doi: 10.1109/ITNG.2015.113.

[51] J. Petit, B. Stottelaar, M. Feiri, and F. Kargl, "Remote attacks on automated vehicles sensors: Experiments on camera and lidar", Black Hat Europe, vol. 11, p. 2015, 2015.

[52] F. Ma and S. Karaman, "Sparse-to-dense: Depth prediction from sparse depth samples and a single image", in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018-05, pp. 1–8. doi: 10.1109/ICRA.2018.8460184.

[53] Y. Wang and J. Yang, "Study on environmental test technology of lidar used for vehicle", in Young Scientists Forum 2017, vol. 10710, 2018. doi: 10.1117/12.2315056.

[54] Swedish Research Council (Swedish: Vetenskapsrådet), "Forskningsetiska principer" [Research ethical principles], 2019. [Online]. Available: http://www.codex.vr.se/texts/HSFR.pdf.

[55] L. G. Valiant, "A theory of the learnable", in Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing, ACM, 1984, pp. 436–445.

[56] N. Mohan and M. Torngren, "AD-EYE: A Co-Simulation Platform for Early Verification of Functional Safety Concepts", SAE Technical Paper 2019-01-0126, 2019, issn: 0148-7191.

[57] Siemens, PreScan Manual, 2018-12.

[58] "Nederbördsintensitet" [Precipitation intensity], 2011. [Online]. Available: https://www.smhi.se/kunskapsbanken/meteorologi/nederbordsintensitet-1.19163.

[59] C. M. Seubert, IR reflectivity of paint: Autonomy and CO2 emissions, https://detroitcc.org/wp-content/uploads/2018/07/IR-Reflectivity-of-Paint-Autonomy-and-CO2-Seubert.pdf, [Online; accessed 2019-05-06].

[60] C. C. Cooksey, B. K. Tsai, and D. W. Allen, "A collection and statistical analysis of skin reflectance signatures for inherent variability over the 250 nm to 2500 nm spectral range", in Active and Passive Signatures V, International Society for Optics and Photonics, vol. 9082, 2014, p. 908206.

[61] A. Siegmund and G. Menz, "Fernes nah gebracht – Satelliten- und Luftbildeinsatz zur Analyse von Umweltveränderungen im Geographieunterricht" [Bringing the distant near – satellite and aerial imagery for analyzing environmental change in geography teaching], Geographie und Schule, vol. 154, no. 4, pp. 2–10, 2005.

[62] C. Goodin, D. Carruth, M. Doude, and C. Hudson, "Predicting the influence of rain on lidar in ADAS", Electronics, vol. 8, no. 1, p. 89, 2019.

[63] A. Filgueira, H. González-Jorge, S. Lagüela, L. Díaz-Vilariño, and P. Arias, "Quantifying the influence of rain in lidar performance", Measurement, vol. 95, pp. 143–148, 2017.

Appendix A

Data Collection Strategies

Semi-structured interview questions:

• What are the biggest challenges today with validating/verifying/testing a perception system for the automotive industry?

• What are the biggest challenges with verifying a perception system using only GT information based on LiDAR sensors within the automotive industry for level 3 and higher automated vehicles?

– What is needed to address these challenges?

Expert opinion on LiDAR-specific limitations

Table A.1. Interpretation of rank.

1    Negligible
2    Something to have in mind
3    Needs to be dealt with
4    High importance
5    Crucial to a functioning system

Table A.2. Questionnaire of LiDAR specific limitations.

Name:        Role:        Group:

Index    Limitation                           Rank (1-5)
1        Road dirt on sensor cover
2        First detection close object
3        Material/surface
4        Wet roadway causes road spray
5        Rain
6        Fog/Mist/Haze
7        Snow
8        Dust
9        Wavelength related
10       Sunlight
11       Temperature
12       Vibrations
13       Interference
14       Remote attacks (imitating signal)
15       Objects from long range

Appendix B

The LiDAR model in PreScan

This is derived from the PreScan Manual [57]; for complete information, read the manual.

The distance R is given by

2R = t_r c    (B.1)

where t_r is the time-of-flight and c is the speed of light. The time-of-flight is calculated directly from the phase shift between the transmitted and received signal of frequency f_mod. The time-of-flight is assumed to result in a phase shift shorter than one period of the transmitted signal and is derived as

t_r = \phi_r / (2 \pi f_{mod}).    (B.2)

Substituting B.2 into B.1 gives the range R as

R = \frac{\phi_r c}{4 \pi f_{mod}}.    (B.3)

In PreScan [57], the LiDAR is implemented with the following three assumptions and the mathematical expressions below.

1. The divergence apex θ and the aperture diameter d_ta are small.

2. The power density is uniform over the beam cross section.

3. The target object has uniform diffuse hemispherical reflection.

The area of the beam is expressed as

A = W_{beam} R^2.    (B.4)

The laser beam radiates uniformly in the solid angle W_beam with power P_t. The incident power P_tar at a distance R, as shown in Figure B.1, is computed as follows.

Figure B.1. Laser beam incident on target object [57].

The incident power P_tar is the fraction W_tar/W_beam of the transmitted power P_t, multiplied by the path loss, which gives

P_{tar} = P_t T^R \frac{W_{tar} R^2}{W_{beam} R^2} = P_t T^R \frac{W_{tar}}{W_{beam}}    (B.5)

where T is the atmospheric transmissivity, i.e. the fraction of incident light that will pass through one meter of air.

Combining the incident power P_tar, the target reflectivity ρ_tar at wavelength λ and the effective receiver area A_ra, the incident power on the receiver is

    P_r = (A_ra / (2π R²)) T^R ρ_tar P_tar.    (B.6)

Combining Equation B.6 and Equation B.5 gives

    P_r = (A_ra / (2π R²)) T^{2R} ρ_tar (W_tar / W_beam) P_t.    (B.7)

The specific power ratio SPR is defined as the power ratio per square meter. Using Equation B.7, the SPR can be expressed as

    SPR = P_r / (A_ra P_t) = (ρ_tar T^{2R} / (2π R²)) (W_tar / W_beam).    (B.8)

The energy loss is defined as the SPR expressed in decibels, as shown in Equation B.9:

    EnergyLoss = 10 log₁₀(SPR)    (B.9)

In PreScan the range R, the solid angle W_beam and the target solid angle W_tar are derived numerically.
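The full chain from Equation B.8 to Equation B.9 can be sketched as follows. The parameter names follow Appendix B; the numeric values in the example are illustrative, not taken from PreScan:

```python
import math

def energy_loss_db(R, T, rho_tar, W_tar, W_beam):
    """Energy loss (dB) via the specific power ratio of Equation B.8,
    SPR = rho_tar * T**(2R) / (2*pi*R**2) * (W_tar / W_beam),
    expressed in decibels per Equation B.9: 10 * log10(SPR)."""
    spr = rho_tar * T ** (2 * R) / (2 * math.pi * R ** 2) * (W_tar / W_beam)
    return 10.0 * math.log10(spr)

# Illustrative values: 50 m range, near-clear air, 50 % reflective target,
# receiver capturing the whole illuminated solid angle (W_tar = W_beam).
loss = energy_loss_db(R=50.0, T=0.9999, rho_tar=0.5, W_tar=1.0, W_beam=1.0)
```

Because SPR falls off with 1/R² and T^{2R}, the energy loss grows rapidly with range, which is why detection of low-reflectivity targets at long range is the binding constraint in the test cases of Appendix D.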


Appendix C

ROS transforming network


Appendix D

Extensive Declaration of Input Parameters in Test Cases I-III

Table D.1. Declaration of input parameters used for Test Case I.

Test case    | Use of data | D [m] | R [mm/h]                                                         | ρ
Test Case I  | Scenery     | 150   | 0, 0.5, 1, 2, 3, 4, 5, 6, 8, 10, 12, 14, 16, 18, 20, 30, 40, 50 | 0.5


Table D.2. Declaration of input parameters used for Test Case II.

Test case     | Use of data | D [m] | R [mm/h] | ρ
Test Case II  | Scenery     | 150   | 2        | 0.02, 0.05, 0.1, 0.25, 0.5, 0.75, 1

Table D.3. Declaration of input parameters used for Test Case III.

Test case      | Use of data | D [m]                | R [mm/h]     | ρ
Test Case III  | Scenery     | 15, 30, 45, ..., 150 | 0, 1, 10, 30 | 0.5
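A parameter grid like the one in Table D.3 can be generated programmatically. This is a hypothetical sketch of how such a test matrix might be built, not the scripts actually used in the thesis; the variable names are illustrative:

```python
from itertools import product

# Test Case III grid per Table D.3: distances D from 15 m to 150 m in
# 15 m steps, rain intensities R in mm/h, target reflectivity rho fixed.
distances = list(range(15, 151, 15))   # 15, 30, 45, ..., 150 m
rain_rates = [0, 1, 10, 30]            # mm/h
rho = 0.5

# One (D, R, rho) tuple per simulation run: 10 distances x 4 rain rates.
test_cases = [(D, R, rho) for D, R in product(distances, rain_rates)]
```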


TRITA ITM-EX 2019:473

www.kth.se

