Introduction to Point Cloud Processing
Xinlian Liang, Finnish Geospatial Research Institute (FGI)
Overview
─ Point Cloud and Sources
─ Pre-Processing
─ Object Recognition
─ Driving Forces
Part I: Point Cloud and Sources
Point cloud: Definition
─ Definition
• A set of points in 3-dimensional (3D) space
• possibly having attributes per point

$P = \{\, p_i \mid i = 1, 2, \dots, n,\ n \in \mathbb{N} \,\}$
$p_i = (x_i, y_i, z_i, a_{ij}), \quad (x_i, y_i, z_i) \in \mathbb{R}^3, \quad j = 1, 2, \dots, m,\ m \in \mathbb{N}$
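In code, this definition maps onto a simple n × (3 + m) array; a minimal NumPy sketch (all values are made-up examples):

```python
import numpy as np

# A point cloud P as an n x (3 + m) array: each row is one point
# p_i = (x_i, y_i, z_i, a_i1, ..., a_im). Here n = 5 and m = 2
# (intensity and return number), with made-up values.
rng = np.random.default_rng(0)
n = 5
xyz = rng.uniform(0.0, 10.0, size=(n, 3))         # (x_i, y_i, z_i) in R^3
intensity = rng.uniform(0.0, 255.0, size=(n, 1))  # attribute a_i1
return_no = np.ones((n, 1))                       # attribute a_i2
P = np.hstack([xyz, intensity, return_no])

print(P.shape)  # (5, 5): n = 5 points, 3 coordinates + m = 2 attributes
```

Real formats (e.g. LAS/LAZ) store the same structure with many more per-point attributes.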
Point cloud: Definition─ example
Point cloud: Definition─ attributes: e.g., intensity, spectral
the world's first commercial multispectral ALS system, Titan,
capable of recording return intensities at wavelengths of 532 nm, 1064 nm and 1550 nm
Image sources: Leena Matikainen, FGI, Terratec Oy/Optech Teledyne
Sources: point cloud
Depth camera (cell phone)
Image-based, e.g., photogrammetry or structure from motion
Laser Scanning, LS
(Tomaštík et al., 2017)
Sources: Structure from Motion
Sources: Structured light
https://www.youtube.com/watch?v=nvvQJxgykcU&feature=player_embedded
Kinect: depth images at 30 frames per second, range up to 4 m
Google Tango: sensing, learning. Point clouds are going everywhere
Sources: Structured light
(Hyyppä et al., 2017) (Tomaštík et al., 2017)
Part II: Point cloud pre-processing
• Background noise
  • Aerosol noise: e.g., particles in the air
  • Thermal noise: e.g., thermal agitation of the charge carriers
  • Detector noise: e.g., free photoelectrons in the air
• Random noise
  • GPS: e.g., signal strength, clock, path, etc.
  • Angular: e.g., beam angle shift by yaw, roll, pitch of the platform
  • Range: e.g., backscatter reflectance from mirrors, glass, etc.
Pre-processing: noise
Pre-processing: noise sources
Hardware related: e.g., continuous-wave (CW) laser ranging
• Extremely fast measurement
• Prone to erroneous noise in cases of no return or a bad signal due to multiple hits
• Filtering is mostly done by the sensor
Pre-processing: noise sources
Application related: noise points causing problems for further processing steps
Image source: Antero Kukko, FGI
Initial point logic
• No building covers an area of 80 m × 80 m
• The lowest point in any such rectangle is ground
Pre-processing: noise filtering
courtesy of Arttu Soininen, Terrasolid Oy
• Isolated point or point cluster
Pre-processing: noise filtering
‒ In a voxel data structure: if a voxel includes too few points, such points are removed
‒ In the point cloud: find points that are isolated from the others according to certain criteria
• Universal solutions are rare
E.g., in a traffic sign detection application using MLS, to be able to preserve points from poles and other narrow objects, considering the speed used in the survey, n=10 points was taken to be the minimum required amount of points within the search radius r.
• Application dependent: object, hardware, data collection method, …
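One common isolated-point filter of the kind described above can be sketched as a neighbour count within a search radius (the parameter values here are illustrative, not from any specific survey):

```python
import numpy as np

def radius_filter(points, r, n_min):
    """Keep points with at least n_min neighbours (the point itself
    included) within search radius r; isolated points are dropped."""
    # Brute-force pairwise distances; fine for small clouds.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d <= r).sum(axis=1)
    return points[counts >= n_min]

# A dense cluster plus two isolated noise points.
rng = np.random.default_rng(1)
cluster = rng.normal(0.0, 0.1, size=(50, 3))
noise = np.array([[5.0, 5.0, 5.0], [-6.0, 2.0, 9.0]])
cloud = np.vstack([cluster, noise])

filtered = radius_filter(cloud, r=0.5, n_min=10)
print(cloud.shape[0], "->", filtered.shape[0])  # the two isolated points are removed
```

In practice a k-d tree replaces the brute-force distance matrix, and r and n_min depend on the object, hardware and data collection method, as noted above.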
• Multiple flightlines with overlap
• Time-stamped trajectory information
  • time x y z [h r p]
• Laser points linked to trajectory position
  • flightline number matching trajectory number
  • time stamp
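The time-stamp link between laser points and the trajectory can be sketched as a linear interpolation of the trajectory at each laser time stamp (made-up trajectory values; the attitude [h r p] is omitted for brevity):

```python
import numpy as np

# Trajectory rows: time, x, y, z (made-up values).
traj = np.array([
    [0.0,  0.0, 0.0, 100.0],
    [1.0, 10.0, 0.0, 100.5],
    [2.0, 20.0, 0.0, 101.0],
])

# Sensor position at each laser time stamp, by linear interpolation.
laser_times = np.array([0.5, 1.5])
sensor_xyz = np.column_stack([
    np.interp(laser_times, traj[:, 0], traj[:, col]) for col in (1, 2, 3)
])
print(sensor_xyz)  # rows: (5.0, 0.0, 100.25) and (15.0, 0.0, 100.75)
```

With the sensor position (and attitude) known per time stamp, each range measurement can be georeferenced.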
Pre-processing: strip adjustment
• Dz: easiest
• Roll: easy
  • flat surfaces are sufficient
Pre-processing: strip adjustment
Parameter difficulty
• Pitch: more difficult
  • requires slopes in the flight direction
• Heading: most difficult
  • requires slopes on both sides of the flightline
Ground observations: more challenging than ALS, GNSS signal loss
Pre-processing: strip adjustment
differential GNSS (real-time or post-processed) + IMU => 0.6–0.8 m accuracy
image source: Antero Kukko, FGI
(Liang et al., 2014)
Ground observations: more challenging, GNSS signal loss
Pre-processing: strip adjustment
(Figures: strip differences in Z, heading and roll)
Image sources: Anttoni Jaakkola, Antero Kukko, FGI
• Solution: SLAM when poor GNSS signal exists under forest canopy
Before
After
Pre-processing: strip adjustment
image sources: Risto Kaijaluoto, Antero Kukko, FGI
Pre-processing: strip adjustment
• Automation is at an early stage
• Full automation is not even possible
• Interactive work is needed.
(Hyyppä et al., 2017)
• What is Intensity?
Pre-processing: intensity calibration
• Received power at the trigger point
• Analogous to radar cross-section, i.e., laser cross-section
• From the waveform: the sum of received power as a function of range
• Maximum intensity level near the trigger point
• Received voltage level (at the trigger point)
• The terms brightness, irradiance, radiance, BRDF (radiance scattered by a surface into a given direction), reflectance factor and relative reflectance are also used.
• EuroSDR project on “Radiometric Calibration of ALS Intensity”, 2007
• High point density + intensity calibration + strip merge + rasterize → ortho-image (0.5 m GSD)
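The rasterisation step above can be sketched as binning point intensities into grid cells and averaging per cell (the GSD and intensity values are made-up examples):

```python
import numpy as np

# Points: x, y, calibrated intensity (made-up values); 0.5 m GSD.
gsd = 0.5
pts = np.array([
    [0.1, 0.1, 100.0],
    [0.3, 0.2, 110.0],
    [0.8, 0.1,  50.0],
])

cols = (pts[:, 0] // gsd).astype(int)
rows = (pts[:, 1] // gsd).astype(int)
grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)  # NaN = no data
for r in range(grid.shape[0]):
    for c in range(grid.shape[1]):
        in_cell = (rows == r) & (cols == c)
        if in_cell.any():
            grid[r, c] = pts[in_cell, 2].mean()  # mean intensity per cell
print(grid)  # [[105.  50.]]
```

A production pipeline would also handle empty cells and choose the aggregate (mean, max, nearest) per application.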
Google Imagery
Image source: Riadh Munjy
Pre-processing: intensity calibration
Pre-processing: intensity calibration
image source: Leena Matikainen, Eero Ahokas, FGI
Part III: Automatic Object Recognition
1. Object-based segmentation using height and intensity data
Data driven: classification
image source: Leena Matikainen, FGI
2. Large number of features
Object-based features calculated from the ALS data:
• Brightness
• Mean intensities
• Intensity quantiles
• Intensity ratios
• Pseudo-NDVI
• maxDSM – minDSM
• Standard deviations of DSMs
• GLCM homogeneity of DSMs
Feature number | Feature name
1 (Int.) | Brightness (mean value of the mean intensity values in different channels)
2, 3, 4 (Int.) | Mean intensity in Ch1, Ch2, Ch3
5, 6, 7 (Int.) | Intensity quantile 25% in Ch1, Ch2, Ch3
8, 9, 10 (Int.) | Intensity quantile 50% in Ch1, Ch2, Ch3
11, 12, 13 (Int.) | Intensity quantile 75% in Ch1, Ch2, Ch3
14, 15, 16 (Int.) | Intensity ratio in Ch1, Ch2, Ch3 (the ratio is calculated by dividing the mean intensity in one channel by the sum of the mean intensity values in all channels)
17 (Int.) | Pseudo-NDVI (normalized difference vegetation index) = (Mean Ch2 – Mean Ch3) / (Mean Ch2 + Mean Ch3) (Wichmann et al., 2015)
18 (DSM) | maxDSM – minDSM (difference between mean values)
19 (DSM) | Standard deviation of the maxDSM
20 (DSM) | Standard deviation of the minDSM
21 (DSM) | GLCM homogeneity of the maxDSM (texture feature)
22 (DSM) | GLCM homogeneity of the minDSM
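Feature 17 in the table, for example, is a one-line computation once per-segment channel means are available (the intensity values below are made-up):

```python
import numpy as np

# Per-segment mean intensities in Ch2 and Ch3 (made-up values for,
# say, a vegetation segment and an asphalt segment).
mean_ch2 = np.array([120.0, 40.0])
mean_ch3 = np.array([30.0, 35.0])

# Feature 17: pseudo-NDVI = (Mean Ch2 - Mean Ch3) / (Mean Ch2 + Mean Ch3)
pseudo_ndvi = (mean_ch2 - mean_ch3) / (mean_ch2 + mean_ch3)
print(pseudo_ndvi)  # roughly [0.6, 0.067]: vegetation scores much higher
```

As with optical NDVI, vegetation yields clearly higher values than sealed surfaces, which is why the feature helps the classification.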
Object recognition: classification
courtesy of Leena Matikainen, FGI
3. Automated classification
Object recognition: classification
image source: Leena Matikainen, FGI
3. Automated classification
Random forests method
… 1000 trees created automatically for each classification test
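A minimal sketch of such a random forests classification, assuming scikit-learn is available; the object features and labels below are made-up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Made-up object features: [mean intensity, maxDSM - minDSM].
rng = np.random.default_rng(42)
features = np.vstack([
    np.column_stack([rng.uniform(0, 50, 20), rng.uniform(0.0, 0.3, 20)]),
    np.column_stack([rng.uniform(0, 50, 20), rng.uniform(3.0, 10.0, 20)]),
])
labels = np.array(["ground"] * 20 + ["building"] * 20)

# 1000 trees, as on the slide; each tree is trained on a random
# bootstrap sample and random feature subset, and the forest votes.
clf = RandomForestClassifier(n_estimators=1000, random_state=0)
clf.fit(features, labels)
print(clf.predict([[25.0, 0.1], [25.0, 5.0]]))  # ['ground' 'building']
```

The real workflow uses the 22 object-based features from the table and many more training segments.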
Object recognition: classification
image source: Leena Matikainen, FGI
Land cover classification
Object recognition: classification ─ Road mapping
image source: Kirsi Karila, Leena Matikainen, FGI
― Feature extraction
• determined model
  - Least squares fitting
  - Hough transform
  - RANSAC
• arbitrary model
  - Principal component analysis (PCA)
Data driven: recognition
Random sample consensus (RANSAC) -Concepts
Step 1: Randomly select a small subset of the original data, i.e., the hypothetical inliers
Step 2: Fit a model to the hypothetical inliers
Step 3: Find all remaining points that “support” the model; these points are considered part of the consensus set
Step 4: The estimated model is reasonably good if sufficiently many points are classified as part of the consensus set; if not satisfactory, repeat from Step 1
Step 5: Re-estimate the model using all data in the consensus set
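The five steps above can be sketched as follows, here for plane fitting; all parameter values (distance threshold, iteration count, minimum consensus size) are illustrative:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.05, min_inliers=50, seed=0):
    """RANSAC for a plane z = a*x + b*y + c, following Steps 1-5."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        # Step 1: random minimal subset (3 points define a plane).
        sample = points[rng.choice(len(points), 3, replace=False)]
        A = np.column_stack([sample[:, 0], sample[:, 1], np.ones(3)])
        try:
            a, b, c = np.linalg.solve(A, sample[:, 2])  # Step 2: fit model
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        # Step 3: consensus set = points close enough to the plane.
        residual = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
        inliers = residual < threshold
        # Step 4: keep the largest sufficiently large consensus set.
        if inliers.sum() >= min_inliers and (
                best_inliers is None or inliers.sum() > best_inliers.sum()):
            best_inliers = inliers
    if best_inliers is None:
        raise RuntimeError("no consensus set found; adjust the parameters")
    # Step 5: re-estimate the model using all data in the consensus set.
    P = points[best_inliers]
    A = np.column_stack([P[:, 0], P[:, 1], np.ones(len(P))])
    return np.linalg.lstsq(A, P[:, 2], rcond=None)[0]

# 100 points near the plane z = 0.2x + 0.1y + 1, plus 20 gross outliers.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(100, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1.0 + rng.normal(0, 0.01, 100)
outliers = rng.uniform(0, 10, size=(20, 3))
cloud = np.vstack([np.column_stack([xy, z]), outliers])

a, b, c = ransac_plane(cloud)
print(a, b, c)  # close to 0.2, 0.1, 1.0 despite the outliers
```

The least-squares fit in Step 5 would break down on the raw data; running it only on the consensus set is what makes RANSAC robust.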
• Very general framework for model fitting in the presence of outliers
• Pros
  • Simple, explicit, general
  • Very wide range of applications
  • Usually works well
• Cons
  • Lots of parameters to tune
  • Too many iterations, or failure, when the outlier ratio e is high
  • Can't always get a good initialization of the model using the minimum number of hypothetical inliers
  • May produce a different model on each run
RANSAC - Pros & Cons
(Figure: data plotted on the original axes X1 and X2)
• objective of PCA is to rigidly rotate the axes of this p-dimensional space to new positions (principal axes) that have the following properties:
• ordered such that principal axis 1 has the highest variance, axis 2 has the next highest variance, .... , and axis p has the lowest variance
• covariance among each pair of the principal axes is zero (the principal axes are uncorrelated).
PCA - Geometric Rationale
• Direction (eigenvector) and length (eigenvalue) support the following classification
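A minimal sketch of this use of eigenvalues: an eigen decomposition of a neighbourhood's covariance matrix, with a simple linearity feature for detecting pole-like objects (the data and the threshold interpretation are made-up):

```python
import numpy as np

def pca_eigen(points):
    """Principal axes of a neighbourhood: eigenvectors give the
    directions, eigenvalues the variance along each axis, ordered so
    that axis 1 has the highest variance (as described above)."""
    centred = points - points.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centred.T))  # ascending order
    return vals[::-1], vecs[:, ::-1]                # descending order

# A pole-like (linear) neighbourhood: variance concentrates on axis 1.
rng = np.random.default_rng(0)
t = rng.uniform(0, 5, 200)
pole = np.column_stack([0.01 * rng.normal(size=200),
                        0.01 * rng.normal(size=200),
                        t])
vals, vecs = pca_eigen(pole)
linearity = (vals[0] - vals[1]) / vals[0]  # near 1 for linear objects
print(round(linearity, 2))  # close to 1 -> pole-like object
```

Analogous ratios of the three eigenvalues separate linear, planar and scattered (volumetric) neighbourhoods, which is the basis of many point-wise features.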
PCA applications
• Deep Learning vs. Machine Learning
− Machine Learning: define the features and develop the recognition method
− Deep Learning: define the training data and train the network
Deep Learning
• Deep Learning emphasizes the design of the network
• The deeper the network, the harder it is to understand the features
• The results are good in certain applications
Image source: Xinhuai Zou
Part IV: Driving forces
• Data: the impact of the data acquisition methods on the results. New sensor? New data?
• Algorithm: to what extent the algorithms can interpret the data. New method?
Data processing: three driving forces
• Application: how to meet the requirements. New data / method?
New Data : Bi-temporal
After storm (image DSM)
Before storm (laser DSM)
Difference
Image source: Eija Honkavaara, FGI
New Data : Bi-temporal
Blue represents data in 2008 and red data in 2009.
Image sources: Antero Kukko, Harri Kaartinen, FGI; Matti Vaaja, Aalto; Petteri Alho, Turku
New Data: First-Last Pulse for tree species
First pulses in green; last pulses in red. Deciduous trees vs. coniferous trees
International TLS benchmarking for forest modelling:
• Single- vs. multi-scan
• Different forest conditions: easy, medium, difficult
Algorithms make the difference
• Add natural elements like sky, vegetation, water, etc.
• Create roads
Application: 3D city modelling
Image source: Tuomas Turppa, FGI