Page 1

hyperx: scalable hypergraph processing

Jin Huang
November 15, 2015

The University of Melbourne

Page 2

overview

Research Outline

Scalable Hypergraph Processing

Problem and Challenge

Idea

Solution Implementation

Empirical Results

Conclusion


Page 3

research outline

Page 4

scalable hypergraph processing

Page 5

problem context

Any (high-order) relationship involving more than two participants.

Figure 1: A few high-order relationships

Page 6

representative existing hypergraph studies

Table 1: Various hypergraph learning studies in the literature

Application       Study                      Vertex           Hyperedge
Recommendation    [TMCCA'13]                 Songs and users  Listening histories
Text retrieval    [SIGIR'08]                 Documents        Semantic similarities
Image retrieval   [Pattern Recognition'13]   Images           Descriptor similarities
Multimedia        [Multimedia'08]            Videos           Hyperlinks
Bioinformatics    [ICDM'13]                  Proteins         Interactions
Social mining     [AAAI'14]                  Users            Communities
Machine learning  [Signal Processing'14]     Data records     Labels

Page 7

existing solution

Converting to a graph!

Option I: a bipartite graph
Option II: a clique

Figure 2: Graph conversion inflates the problem size

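To see where the inflation comes from, here is a minimal Scala sketch of the two conversions for a single hyperedge (the function names are illustrative, not from HyperX): the bipartite option adds one node and a edges per hyperedge of arity a, while the clique option adds a(a − 1)/2 edges.

```scala
// Converting one hyperedge {v1, ..., va} into plain graph edges.
// Illustrative sketch; the hyperedge id doubles as the new bipartite node id.
def bipartite(hid: Long, members: Seq[Long]): Seq[(Long, Long)] =
  members.map(v => (hid, v)) // one new node, a edges

def clique(members: Seq[Long]): Seq[(Long, Long)] =
  for {
    (u, i) <- members.zipWithIndex
    v      <- members.drop(i + 1)
  } yield (u, v)             // a * (a - 1) / 2 edges, quadratic in arity
```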

Page 9

challenges i

Scalable graph frameworks: GraphLab, Giraph, GraphX, etc.

• synchronous BSP (Pregel)
• vertex-centric style
• vertex replication and aggregation

Figure 3: Vertex replicas to reduce network communication

Inflated Size: 2M vertices and 15M hyperedges become 17M vertices and 1B edges after conversion
Excessive Replication: both vertices and hyperedges are replicated


Page 11

challenges ii

Difficulty in Load Balance, two causes:
1. V and H are not active simultaneously
2. double overhead in each iteration

Figure 3: Two issues in balancing the loads

Page 12

idea

To Support (API): random walks, label propagation, spectral learning
Inflated Size (Representation): a distributed hypergraph
Excessive Replication (Representation): replicate only V
Difficulty in Load Balance (Partitioning): an optimization that

• minimizes the communication cost
• minimizes the replication cost
• balances both V and H loads


Page 13

proposed solution: hyperx

Figure 4: An overview of HyperX implemented over Spark

Page 14

details: apis

• Algorithms are expressed as two programs
• vProg updates vertex values given incident hyperedges
• hProg updates hyperedge values given incident vertices

Table 2: HyperX Main APIs

Name         Usage
joinV        vProg as distributed joins
mrTuples     hProg on hyperedges, reducing messages to vertices
mapV         update vertices independently (locally)
mapH         update hyperedges independently (locally)
subH         restrict computation to a sub-hypergraph
HyperPregel  iteratively execute mrTuples and joinV
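To make the table concrete, the sketch below renders these primitives as Scala signatures. This is a reading aid rather than HyperX's actual interface: the type aliases, the HyperedgeTuple shape, and the parameter order are assumptions.

```scala
import org.apache.spark.rdd.RDD

// Illustrative type aliases; HyperX's real definitions may differ.
object HyperXTypes {
  type VertexId = Long
  type HyperedgeId = Long
}
import HyperXTypes._

// One hyperedge plus the attributes of its incident vertices -- an assumed
// shape, used here only to make the signatures below concrete.
case class HyperedgeTuple[VD, HD](
    id: HyperedgeId,
    srcAttrs: Map[VertexId, VD], // source-side vertices and their values
    dstAttrs: Map[VertexId, VD], // destination-side vertices and their values
    attr: HD)                    // the hyperedge's own value

trait Hypergraph[VD, HD] {
  def vertices: RDD[(VertexId, VD)]

  // joinV: vProg as a distributed join of vertices with their messages
  def joinV[M](msgs: RDD[(VertexId, M)])(
      vProg: (VertexId, VD, M) => VD): Hypergraph[VD, HD]

  // mrTuples: map hProg over hyperedge tuples, reduce messages per vertex
  def mrTuples[M](hProg: HyperedgeTuple[VD, HD] => Iterator[(VertexId, M)],
                  combine: (M, M) => M): RDD[(VertexId, M)]

  // mapV / mapH: purely local updates
  def mapV(f: (VertexId, VD) => VD): Hypergraph[VD, HD]
  def mapH(f: HyperedgeTuple[VD, HD] => HD): Hypergraph[VD, HD]

  // subH: restrict subsequent computation to a sub-hypergraph
  def subH(vPred: (VertexId, VD) => Boolean,
           hPred: HyperedgeTuple[VD, HD] => Boolean): Hypergraph[VD, HD]
}
```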

Page 15

details: hyperpregel implementation

Algorithm 1: HyperPregel

input:  G: Hypergraph[V, H], vProg: (Id, V) ⇒ V, hProg: Tuple ⇒ M,
        combine: (M, M) ⇒ M, initial: M
output: RDD[(Id, V)]

G ← G.mapV((id, v) ⇒ vProg(id, v, initial))
msg ← G.mrTuples(hProg, combine)
while |msg| > 0 do
    G ← G.joinV(msg)(vProg).subH(v′, t′)
    msg ← G.mrTuples(hProg, combine)
return G.vertices
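Rendered against the hypothetical Hypergraph trait sketched earlier, the loop could look as follows in Scala; the activity predicates standing in for v′ and t′ are taken as parameters since the slide does not spell them out.

```scala
// Sketch of the HyperPregel driver loop (illustrative, not HyperX's code).
def hyperPregel[VD, HD, M](
    g: Hypergraph[VD, HD], initial: M,
    activeV: (VertexId, VD) => Boolean,              // stands in for v'
    activeH: HyperedgeTuple[VD, HD] => Boolean)(     // stands in for t'
    vProg: (VertexId, VD, M) => VD,
    hProg: HyperedgeTuple[VD, HD] => Iterator[(VertexId, M)],
    combine: (M, M) => M): RDD[(VertexId, VD)] = {

  var graph = g.mapV((id, v) => vProg(id, v, initial)) // seed every vertex
  var msgs  = graph.mrTuples(hProg, combine)           // first round of messages
  while (msgs.count() > 0) {                           // iterate until quiescence
    graph = graph.joinV(msgs)(vProg)                   // apply vProg as a join
                 .subH(activeV, activeH)               // keep only active parts
    msgs  = graph.mrTuples(hProg, combine)
  }
  graph.vertices
}
```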

Page 16

details: random walks with apis

Algorithm 2: Random Walks (RW) with restart

input:  G, label vertex set L, restart probability rp
output: RDD[(Id, Double)]

vProg(id, (v, d), msg) = ((1 − rp) × msg + rp × v, d)
hProg(S, D, Sd, Dd, h) = Σ_{i ≤ |S|} S_i / (Sd_i × |D|)
combine(a, b) = a + b
G ← G.joinV(G.outDeg, (id, v, d) ⇒ d)
G ← G.mapV((id, v) ⇒ if id ∈ L then (1.0, v) else (0.0, v))
G.HyperPregel(G, vProg, hProg, combine, 0)
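As a usage sketch, the random-walk program is just three small functions handed to the hyperPregel sketch above. The vertex value pairs the walk mass with the out-degree; the HyperedgeTuple accessors are the assumed ones from the earlier sketch.

```scala
// Random walk with restart as HyperPregel programs (illustrative).
val rp = 0.15 // restart probability; an illustrative value

// Blend the aggregated incoming mass with the restart mass; keep the degree.
def vProg(id: VertexId, vd: (Double, Int), msg: Double): (Double, Int) =
  ((1 - rp) * msg + rp * vd._1, vd._2)

// Each source vertex spreads value/degree evenly over the destinations.
def hProg(t: HyperedgeTuple[(Double, Int), Unit])
    : Iterator[(VertexId, Double)] = {
  val mass = t.srcAttrs.values.map { case (v, d) => v / d }.sum / t.dstAttrs.size
  t.dstAttrs.keysIterator.map(dst => (dst, mass))
}

// Messages to the same vertex simply add up.
def combine(a: Double, b: Double): Double = a + b
```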

Page 17

details: representation

Built on Spark’s RDD, how to represent a hypergraph?

• Vertices: vRDD
• Hyperedges: hRDD
  • multiple vertices per hyperedge
  • ✗ list or set
  • ✓ flattened (vid, hid, isSrc) entries in columnar arrays
  • saves 41% to 88% memory consumption
• To run mrTuples locally, replicate vertices
  • one replica is adequate
  • cost in distributed vProg
  • cost in updating replicas
  • cost in storing replicas
• How to partition vRDD and hRDD to minimize the cost?
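A minimal sketch of the flattened columnar layout described above: one (vid, hid, isSrc) entry per membership, held in parallel primitive arrays instead of a list or set object per hyperedge. The class and method names are illustrative.

```scala
// Columnar hyperedge storage: three parallel arrays, one slot per
// (vertex, hyperedge) membership. No per-hyperedge collection objects and
// no boxing, which is where the memory saving comes from.
final class HyperedgePartition(
    val vertexIds:    Array[Long],      // vid column
    val hyperedgeIds: Array[Long],      // hid column
    val isSource:     Array[Boolean]) { // isSrc column

  require(vertexIds.length == hyperedgeIds.length &&
          vertexIds.length == isSource.length)

  // Iterate one hyperedge's members without materializing a Set.
  // (A real layout would sort by hid so this becomes a contiguous scan.)
  def members(hid: Long): Iterator[(Long, Boolean)] =
    hyperedgeIds.indices.iterator
      .filter(i => hyperedgeIds(i) == hid)
      .map(i => (vertexIds(i), isSource(i)))
}
```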

Page 19

details: partitioning introduction

Different from vertex-cut or edge-cut in graph literature

• Cut both vertices and hyperedges simultaneously
• Minimize the vertex replicas (with local aggregation)
• With separate load constraints on vProg and hProg


Page 20

details: partitioning objective formulation

n vertices, m hyperedges, k workers, a_h the arity of h; x_{u,i}, y_{h,i} ∈ {0, 1} indicate whether vertex u and hyperedge h are assigned to worker i, and N(u) is the set of hyperedges incident to u.

• number of replicas for vertex u:

  R(x_u, y) = \sum_{i=1}^{k} \max\left( 1 - x_{u,i} - \prod_{h \in N(u)} (1 - y_{h,i}),\; 0 \right)

• to optimize:

  \begin{aligned}
  \text{minimize}   \quad & \sum_{u \in V} R(x_u, y) \\
  \text{subject to} \quad & \sum_{h \in H} y_{h,i}\, a_h \le (1 + \alpha) \frac{\sum_{h \in H} a_h}{k}, & i \in \{1, 2, \ldots, k\} \\
                          & \sum_{u \in V} x_{u,i}\, R(x_u, y) \le (1 + \beta) \frac{\sum_{u \in V} R(x_u, y)}{k}, & i \in \{1, 2, \ldots, k\}
  \end{aligned}
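To make the objective concrete, here is how R(x_u, y) could be evaluated for one vertex under a candidate assignment; a naive single-machine sketch with illustrative names.

```scala
// R(x_u, y): the number of workers on which vertex u needs a replica.
// A replica is needed on worker i exactly when u's master copy is elsewhere
// but some hyperedge incident to u is assigned to i -- the
// max(1 - x - prod, 0) term of the formula.
def replicas(
    u: Long, k: Int,
    vertexWorker: Long => Int,     // the x_{u,i} assignment, as a function
    hyperedgeWorker: Long => Int,  // the y_{h,i} assignment, as a function
    incident: Long => Seq[Long]    // N(u): hyperedges containing u
): Int =
  (0 until k).count { i =>
    vertexWorker(u) != i && incident(u).exists(h => hyperedgeWorker(h) == i)
  }
```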

Page 22

details: partitioning theoretic analysis

How hard?

• a special case where α = 0 and β = +∞:

  \begin{aligned}
  \text{minimize}   \quad & \sum_{u \in V} \sum_{i=1}^{k} \left( 1 - \prod_{h \in N(u)} (1 - y_{h,i}) \right) \\
  \text{subject to} \quad & \sum_{h \in H} y_{h,i}\, a_h \le \frac{\sum_{h \in H} a_h}{k}, & i \in \{1, 2, \ldots, k\}
  \end{aligned}

• reduction from the strongly NP-complete 3-Partition problem
• no polynomial-time solution with a finite approximation factor
• in plain words, it is extremely hard!
• how about α > 0?

Page 23

details: partitioning practical solutions

Label propagation partitioning (LPP)

• labels are partitions
• label both vertices and hyperedges
• iteratively update labels
• specifically,

  L(h) = \operatorname{argmax}_{i \in K} \left| \{ v \mid v \in N(h) \land L(v) = i \} \right|

  L(v) = \operatorname{argmax}_{i \in K} \left( \left| \{ h \mid h \in N(v) \land L(h) = i \} \right| \times e^{(A^2 - A_i^2)/A^2} \right),

  where A_i = \sum_{L(h) = i} a_h.
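A compact single-machine sketch of one LPP round under these update rules, assuming the exponent reconstruction above; all data structures and names are illustrative, and a distributed version would express the same steps over vRDD and hRDD.

```scala
// One synchronous LPP round: hyperedges take the majority label of their
// vertices, then vertices take the incident-hyperedge label with the best
// frequency-times-balance score.
def lppRound(
    vLabel: Map[Long, Int],        // current vertex labels
    vertsOf: Map[Long, Seq[Long]], // N(h): vertices of each hyperedge
    edgesOf: Map[Long, Seq[Long]], // N(v): hyperedges of each vertex
    arity: Map[Long, Int]          // a_h
): (Map[Long, Int], Map[Long, Int]) = {

  // L(h) = argmax_i |{v in N(h) : L(v) = i}|
  val hLabel = vertsOf.map { case (h, vs) =>
    h -> vs.groupBy(vLabel).maxBy(_._2.size)._1
  }

  // A_i = total arity of hyperedges labeled i; A = total arity overall
  val a = arity.values.sum.toDouble
  val ai = hLabel.groupBy(_._2).map { case (i, grp) =>
    i -> grp.keys.map(arity).sum.toDouble
  }

  // L(v) = argmax_i |{h in N(v) : L(h) = i}| * exp((A^2 - A_i^2) / A^2)
  val newVLabel = edgesOf.map { case (v, hs) =>
    v -> hs.groupBy(hLabel).maxBy { case (i, grp) =>
      val aii = ai.getOrElse(i, 0.0)
      grp.size * math.exp((a * a - aii * aii) / (a * a))
    }._1
  }
  (newVLabel, hLabel)
}
```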

Page 25

experimental settings

• Metrics
  • data RDD size
  • data shuffled
  • elapsed time

• Comparisons
  • HyperX (hx), Bipartite (star), Clique (clique)
  • random, greedy, aweto, hMetis, LPP
  • random walks (RW), label propagation (LP), spectral learning (SP)

• Environment
  • 8 nodes, 28 workers, 600 Mbps network
  • Hadoop 2.4.0 with YARN enabled, Spark 1.1.0
  • HyperX implemented in Scala

Page 26

datasets

Table 3: Datasets presented in the empirical study

(n: vertices; m: hyperedges; d: vertex degree; a: hyperedge arity; min/max/mean, σ: standard deviation, cv: coefficient of variation)

Dataset                       n     m     dmin  dmax   d̄   σd      cvd   amin  amax    ā   σa      cva
Medline Coauthor (Med)        3.2m  8m    1     5,913  10  36.91   3.69  2     744     4   2.15    0.54
Orkut Communities (Ork)       2.3m  15m   1     2,958  46  80.23   1.74  2     9,120   71  70.81   1.00
Friendster Communities (Fri)  7.9m  1.6m  1     1,700  5   5.14    1.03  2     9,299   81  81.39   1.00
Synthetic (Zipfian s = 2)     2m    8m    2     803    32  33.70   1.05  2     48,744  8   178.59  22.32
                              2m    12m   5     1,173  48  50.27   1.05  2     49,526  8   174.07  21.76
                              2m    16m   10    1,527  63  66.56   1.06  2     49,006  8   171.36  21.42
                              2m    20m   15    1,893  79  83.40   1.06  2     49,963  8   175.52  21.94
                              2m    24m   21    2,305  95  100.00  1.05  2     49,326  8   173.12  21.64
                              4m    16m   1     1,102  32  36.04   1.13  2     49,843  8   173.12  21.64
                              6m    16m   1     940    21  25.04   1.19  2     49,728  8   179.55  22.44
                              8m    16m   1     799    16  19.42   1.21  2     49,526  8   173.84  21.73
                              10m   16m   1     716    13  15.79   1.21  2     49,932  8   173.84  21.73

Page 27

evaluating hypergraph representation: space

[Figure 5: Memory consumption of data RDDs (GB); hx, clique, and star on MedRW, MedLP, OrkRW, OrkLP, FriRW, FriLP; bars split into hyperedges and vertices.]

HyperX consumes 44% to 77% less memory than Bipartite.

Page 28

evaluating hypergraph representation: communication

[Figure 6: Data shuffled on the network (GB) at the 5th iteration; read and write volumes for hx and star on MedRW, MedLP, OrkRW, OrkLP, FriRW, FriLP.]

HyperX shuffles 19% to 98% less data than Bipartite.

Page 29

evaluating hypergraph representation: time

[Figure 7: Elapsed time (s) per 10 iterations, log scale; hx and star on MedRW, MedLP, OrkRW, OrkLP, FriRW, FriLP.]

HyperX is up to 49.1 times faster than Bipartite.

Page 30

evaluating partitioning effectiveness: replica factor

[Figure 8: Replica factor of different partitioning algorithms (random, aweto, greedy, hmetis5, hmetis1, lpp) on Med, Ork, and Fri.]

HyperX produces 1.1 to 1.9 times more replicas than hMetis.

Page 31

evaluating partitioning effectiveness: load balance

[Figure 9: Load balance (workload CoV, log scale) of different partitioning algorithms on the replica and arity loads of Med, Ork, and Fri.]

LPP produces 1.1 to 37.7 times more balanced loads than hMetis.

Page 32

evaluating partitioning effectiveness: space

[Figure 10: Data RDD size (GB) under different partitioning algorithms on Orkut for SP, LP, and RW; bars split into hyperedges and vertices.]

LPP and hMetis both outperform simplistic methods.

Page 33

evaluating partitioning effectiveness: communication

[Figure 11: Data shuffled (MB) at the 5th iteration under different partitioning algorithms on Orkut for SP, LP, and RW; read and write volumes.]

LPP and hMetis both significantly outperform simplistic methods.

Page 34

evaluating partitioning effectiveness: time

[Figure 12: Elapsed time (s) per 10 iterations under different partitioning algorithms on MedRW, MedLP, MedSP, OrkRW, OrkLP, and OrkSP.]

LPP yields up to a 2.6 times speedup over hMetis.

Page 35

evaluating partitioning efficiency

LPP in Scala, run on JVM; hMetis in C

Table 4: Partitioning time of different algorithms

Dataset  Algorithm  Time t (s)  Workers w  t × w w.r.t. LPP
Med      LPP        356         28         1.0
         hMetis5    14,796      1          1.5
Ork      LPP        753         28         1.0
         hMetis5    88,936      1          4.2
Fri      LPP        248         28         1.0
         hMetis5    6,766       1          1.0

The last column normalizes total work t × w by LPP's; for example, on Ork: 88,936 × 1 / (753 × 28) ≈ 4.2.

Page 36

evaluating learning algorithms: dataset cardinality

[Figure 13: Elapsed time (s) per 5 iterations of RW, LP, and SP on synthetic data as the number of hyperedges grows from 8M to 24M.]

Page 37

evaluating learning algorithms: number of workers

[Figure 14: Elapsed time (s, log scale) per 10 iterations of RW, LP, and SP on Orkut as the number of workers grows from 4 to 28.]

Page 38

optional evaluating lpp: time and replicas

[Figure 15: Elapsed time (s) and replica factor of LPP over iterations 5 to 50 on Med and Ork.]

It takes LPP only a few iterations to achieve a reasonable replication ratio.

Page 39

optional evaluating lpp: load balance

[Figure 16: Workload CoV (log scale) of LPP over iterations 5 to 50 on the replica and arity loads of Med and Ork.]

It takes LPP only a few iterations to achieve a reasonable load balance.

Page 40

conclusion

Problem       Scalable hypergraph learning
Challenges    1. Inflated problem size
              2. Excessive replication
              3. Great difficulty in balancing the loads
Solutions     1. Operate on a distributed hypergraph
              2. Replicate only vertices
              3. Partitioning optimization
Contributions • An efficient and scalable hypergraph framework
              • An effective and efficient partitioning algorithm

Page 41

Thanks!

Any Questions or Comments?


