HELI-DEM portal for geo-processing services: overview and load testing
Massimiliano Cannata, Milan Antonovic, Monia Molinari
DACD / IST / HELI-DEM geo-portal
HELI-DEM project’s goals
Creation of a unified digital terrain model for the Alpine and sub-Alpine area on the border between Italy and Switzerland, correctly geo-referenced in all three dimensions
Administrative areas: Piemonte, Lombardia, Canton Ticino, Canton Grigioni
DTM elaboration by coordination and fusion of available information
+ Correction of the DTM with HR LiDAR DTM
+ Cross-border GNSS network experiments
+ Cross-border geoid calculation (IT-CH)
+ Experimentation, diffusion and valorization of the DTM
HELI-DEM geo-portal architecture (OWS / SOA)
FOSS4G stacks
[Figure: server-side stack and client-side stack]
Operation sequence diagram
[Sequence diagram: the WPS process retrieves the DTM area from the WCS, publishes its results via WMS, WFS and FS, and returns links to the results]
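A hedged sketch of the first step in this sequence: building the KVP-encoded WPS 1.0.0 Execute request that a client would send to trigger a process (the endpoint URL, process identifier and inputs below are hypothetical, for illustration only):

```python
# Sketch of a WPS 1.0.0 Execute GET request in KVP encoding.
# Endpoint, process name and inputs are hypothetical placeholders.
from urllib.parse import urlencode

def build_wps_execute_url(base_url, process_id, inputs):
    """Build a KVP-encoded WPS Execute request URL."""
    # DataInputs are semicolon-separated key=value pairs per the WPS spec
    data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
    params = {
        "service": "WPS",
        "version": "1.0.0",
        "request": "Execute",
        "identifier": process_id,
        "datainputs": data_inputs,
    }
    return f"{base_url}?{urlencode(params)}"

url = build_wps_execute_url(
    "http://example.org/wps",              # hypothetical endpoint
    "contour_lines",                       # hypothetical process name
    {"interval": "50", "area": "Maggia"},  # hypothetical inputs
)
print(url)
```

The portal's real client goes through the same shape of request; only the identifiers differ.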
17/07/2014
PDF report or CSV parameters
Load testing
Test DTM sizes: 2.1 MB, 7.5 MB, 17.3 MB, 29.2 MB
Load testing procedure
7 test sessions with 1, 2, 4, 8, 16, 32, 64 concurrent users
Processes:
- Contour lines
- Altimetric profiles
- Watershed analysis
- Elevation derivatives

Areas:
- Maggia (large)
- Varzasca (medium)
- Breggia (small)
- Cama (very small)

Requests equally balanced across processes and areas, randomly selected.
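The procedure above can be sketched as a minimal concurrent load test: a pool of N "users" issues a balanced, randomly selected mix of process/area requests and per-request response times are recorded. The request function here is a stand-in (a short sleep), not the portal's actual client code:

```python
# Minimal load-test sketch: N concurrent users, random process/area pairs.
# fake_request() is a stand-in for a real WPS HTTP call.
import random
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

PROCESSES = ["contour_lines", "altimetric_profile", "watershed", "derivatives"]
AREAS = ["Maggia", "Varzasca", "Breggia", "Cama"]

def fake_request(process, area):
    """Stand-in for an HTTP request to the WPS; returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side processing
    return time.perf_counter() - start

def run_session(concurrent_users, requests_per_user=3):
    """One test session: run the request mix with a fixed concurrency level."""
    jobs = [(random.choice(PROCESSES), random.choice(AREAS))
            for _ in range(concurrent_users * requests_per_user)]
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda j: fake_request(*j), jobs))
    return mean(times)

for users in (1, 2, 4, 8):
    print(users, round(run_session(users), 4))
```

In the real setup each session ran with 1, 2, 4, 8, 16, 32 and 64 users against the live services.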
System configuration
| Parameter        | WPS / WMS / WFS server                | WCS server                            |
|------------------|---------------------------------------|---------------------------------------|
| OS               | Ubuntu Server 12.04, 32 bit (VM)      | Ubuntu Server 12.04, 32 bit (VM)      |
| RAM              | 4 GB                                  | 4 GB                                  |
| CPU              | Intel(R) Xeon(R) E5-2650 0 @ 2.00 GHz | Intel(R) Xeon(R) E5-2650 0 @ 2.00 GHz |
| N° of processors | 6                                     | 4                                     |
| Disk size *      | 100 GB                                | 100 GB                                |

* Increased from 50 GB after some initial test configuration
Load testing results: the system
• No exceptions recorded
• Response time grows exponentially beyond 16 concurrent users
| Concurrent users           | 1     | 2     | 4     | 8     | 16    | 32     | 64     |
|----------------------------|-------|-------|-------|-------|-------|--------|--------|
| Average response time [ms] | 28326 | 31705 | 29732 | 42239 | 52193 | 128932 | 303406 |
| Exceptions                 | 0     | 0     | 0     | 0     | 0     | 0      | 0      |

[Chart: general statistics, average response time (milliseconds) per number of concurrent users]
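A quick check of the growth claim, using the measured averages: the step-to-step ratio stays near 1 up to 16 concurrent users, then jumps to roughly 2.4x per doubling of users, i.e. super-linear degradation:

```python
# Step-to-step growth ratios of the measured average response times
# (values taken from the load-testing results table).
users = [1, 2, 4, 8, 16, 32, 64]
avg_ms = [28326, 31705, 29732, 42239, 52193, 128932, 303406]

# Each entry is avg(next user count) / avg(current user count)
ratios = [round(b / a, 2) for a, b in zip(avg_ms, avg_ms[1:])]
print(ratios)  # near 1.0 up to 16 users, then ~2.4x per doubling
```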
Load testing results: the processes
Per-process response times (min / max / avg):
- 2.5 sec / 2 min / 18 sec
- 8.4 sec / 17.4 min / 2.2 min
- 3.6 sec / 3 min / 26.9 sec
- 24 sec / 23.5 min / 3.15 min

CPU usage observed between 80 % and 100 % of the CPUs.
Where is the bottleneck?
[Sequence diagram: the WPS process retrieves the DTM area from the WCS, publishes its results via WMS, WFS and FS, and returns links to the results]
Load testing results: data retrieval
• The smoother behavior of data retrieval suggests it is not the real weak point, at least for this amount of data (ranging from 3 MB to 30 MB)
| Concurrent users | 1    | 2    | 4    | 8    | 16   | 32   | 64   |
|------------------|------|------|------|------|------|------|------|
| Very small [ms]  | 808  | 854  | 861  | 890  | 774  | 839  | 1444 |
| Small [ms]       | 1624 | 1663 | 1668 | 1862 | 1546 | 1677 | 2424 |
| Medium [ms]      | 2831 | 2957 | 3070 | 3222 | 2865 | 3019 | 3800 |
| Big [ms]         | 5538 | 5662 | 6004 | 6198 | 5717 | 6252 | 9093 |

[Chart: average WCS response time (milliseconds) per number of concurrent users and area size]
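Comparing degradation factors from 1 to 64 users (values from the measurement tables): WCS retrieval slows down by a factor of roughly 1.3 to 1.8, while the overall system slows down by more than 10x over the same range, which supports the claim that data retrieval is not the bottleneck:

```python
# Degradation factor (64 users vs 1 user) for WCS data retrieval per area size;
# values taken from the WCS response-time table.
wcs_ms = {  # area: (avg ms at 1 user, avg ms at 64 users)
    "very small": (808, 1444),
    "small": (1624, 2424),
    "medium": (2831, 3800),
    "big": (5538, 9093),
}
factors = {area: round(t64 / t1, 2) for area, (t1, t64) in wcs_ms.items()}
print(factors)

# The overall system average grew from 28326 ms to 303406 ms over the same range:
system_factor = round(303406 / 28326, 2)
print(system_factor)
```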
WCS impact on the offered processes
1. Data retrieval has a larger relative impact on the fast processes
2. Increasing the number of concurrent users leads to longer response times and a smaller relative cost of data retrieval
Conclusions: the good
Proven quality of the HELI-DEM portal, and thus of the FOSS4G/OWS stack it relies on:
• good robustness (no system failure, even under the heaviest load-testing settings)
• good application quality (no exception responses registered)
• on average, good performance (compared with desktop processing)
Serve dynamic analyses to non-GIS experts for better planning!
Conclusions: the bad
Scalability issues:
• Response time degrades exponentially as the number of concurrent users grows
– Increase the hardware infrastructure (more CPUs, load balancing, scalable cloud computing services, etc.)
– Optimize the processing (asynchronous programming, parallelization, etc.)
Patience is a rare virtue among Web users…
Conclusions: the ugly
Open questions to be investigated:
• Impact of the WPS service itself (overhead added to the GRASS processing)
• Behaviour under different hardware settings (what if you increase the number of CPUs?)
• Behaviour under larger concurrency?
• Any other hints or suggestions?
Thanks to:
- FOSS4G developers
- project partners
- the audience