SDGen: Mimicking Datasets for Content Generation in Storage Benchmarks

Raúl Gracia-Tinedo (Universitat Rovira i Virgili, Spain), Danny Harnik, Dalit Naor, Dmitry Sotnikov (IBM Research-Haifa, Israel), Sivan Toledo, Aviad Zuck (Tel-Aviv University, Israel)
Transcript
Page 1:

SDGen: Mimicking Datasets for Content Generation in Storage Benchmarks

Raúl Gracia-Tinedo (Universitat Rovira i Virgili, Spain)

Danny Harnik, Dalit Naor, Dmitry Sotnikov (IBM Research-Haifa, Israel)

Sivan Toledo, Aviad Zuck (Tel-Aviv University, Israel)

Page 2:

Pre-Introduction

[Figure: random data is like carrying stones in the backpack, while zero data is just thin air.]

Page 3:

Introduction

Benchmarking is essential to evaluate storage systems:

File systems, databases, micro-benchmarks…

FileBench, LinkBench, Bonnie++, YCSB,…

Many storage benchmarks try to recreate real workloads:

Operations per unit of time, R/W behavior,…

But what about the data generated during a benchmark?

Real dataset: representative, proprietary, potentially large

Simple synthetic data (zeros, random data): not representative, easy to create, reproducible

Page 4:

The Problem

Does the benchmarking data actually matter?

ZFS Example: A file system with built-in compression

ZFS is significantly content-sensitive when compression is enabled

The throughput also varies depending on the compressor

Conclusion: Yes, it matters if data reduction is involved!


Page 5:

Current Solutions

Some benchmarks (LinkBench, fio, VDBench) try to emulate the compressibility of data by mixing compressible and incompressible data at the right proportion (a sketch of this approach follows below).

Problems (LinkBench data vs. real data):

Compression ratios are accurate, but insensitive to the compressor used

Compression times are unrealistic

Heterogeneity within a dataset is not captured

[Figure: per-chunk compressibility with zlib on text data (Calgary corpus); LinkBench chunks alternate random and zero regions ("Rand"/"Zeros", 50% compressible), unlike the real data.]
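A minimal Python sketch of the mixing approach described above, under the assumption that a target compressible fraction is met by concatenating zero bytes with random bytes; the function name and parameters are illustrative, not the actual LinkBench/fio/VDBench code:

```python
import os
import zlib

def mixed_chunk(size: int, compressible_fraction: float) -> bytes:
    """Build a chunk from a trivially compressible part (zeros) and an
    incompressible part (random bytes); this hits a target compression
    ratio but ignores the structure that real compressors exploit."""
    n_zero = int(size * compressible_fraction)
    return bytes(n_zero) + os.urandom(size - n_zero)

chunk = mixed_chunk(64 * 1024, 0.5)                 # aim for ~50% compressible
ratio = len(chunk) / len(zlib.compress(chunk))
print(f"zlib compression ratio: {ratio:.2f}")
```

Because the zero and random regions look the same to every compressor, the resulting ratio is roughly the same whether zlib or lz4 is used, which is exactly the insensitivity noted above.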

Page 6:

Our Mission

Complex situation:

Most storage benchmarks generate unrealistic content

Representative data is normally not shared due to privacy issues

This is not good for the performance evaluation of storage systems with built-in data reduction techniques.

We need a common approach to generate realistic and reproducible benchmarking data.

In this work, we focus on compression benchmarking.

Page 7:

Summary of our Work

Synthetic Data GENerator (SDGen): an open and extensible framework to generate realistic data for storage benchmarks.

Goal: "mimic" real datasets.

Compact, reusable and anonymized dataset representation.

Mimicking compression: identify the properties of data that are key to the performance of popular lossless compressors (e.g. zlib, lz4).

Usability and integration: SDGen is available for download and has been integrated into popular benchmarks (LinkBench, Impressions).

Page 8:

SDGen: Concept & Overview

[Figure: SDGen lifecycle. Scan Phase: scan dataset, build characterization, share it. Generation Phase: load characterization, generate data.]

Mimicking method: capture the characteristics of data that affect data reduction techniques in order to generate similar synthetic data.

SDGen works in two main phases: a scan phase and a generation phase.

SDGen can do full scans or use sampling.

SDGen requires knowing "what to scan for" and "how to generate data".
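A minimal Python sketch of this two-phase lifecycle, assuming a toy characterization that records only a byte-frequency histogram; the function names, the JSON format and the input file are illustrative, not SDGen's actual API:

```python
import collections
import json
import random

def scan(dataset: bytes, sample_every: int = 1) -> dict:
    """Scan phase: build a compact, anonymized, shareable characterization
    (here just a byte-frequency histogram, optionally from a sample)."""
    counts = collections.Counter(dataset[::sample_every])
    return {"byte_freq": {str(b): c for b, c in counts.items()}}

def generate(characterization: dict, size: int, seed: int = 0) -> bytes:
    """Generation phase: load a characterization and emit synthetic data
    with the same byte frequencies (no repetition structure in this toy)."""
    rng = random.Random(seed)
    freq = characterization["byte_freq"]
    values = [int(b) for b in freq]
    weights = list(freq.values())
    return bytes(rng.choices(values, weights=weights, k=size))

# Scan once, share the characterization (not the data), generate anywhere.
real = open("real_dataset.bin", "rb").read()        # hypothetical input file
shared = json.dumps(scan(real, sample_every=4))     # this is what gets shared
synthetic = generate(json.loads(shared), size=len(real))
```

Sharing only the characterization is what makes the representation compact and anonymized: the original bytes never leave the data owner.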

Page 9:

Mimicking data for compression

We empirically found two properties that affect the behavior of compression engines (see the measurement sketch below):

Repetition length distribution: key for compression time and ratio; typically follows a power law.

Byte frequency: impacts entropy coding; varies significantly depending on the data.
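A minimal Python sketch of how these two properties could be measured on a data chunk; the naive, quadratic backward search for repetitions is purely illustrative and not SDGen's actual scanner:

```python
import collections
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy (bits per byte) of the byte-frequency histogram."""
    counts = collections.Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def repetition_lengths(data: bytes, window: int = 4096, min_len: int = 4) -> collections.Counter:
    """Histogram of repeated-sequence lengths found with an LZ77-style
    backward search inside a sliding window (slow, for illustration only)."""
    lengths = collections.Counter()
    i = 0
    while i < len(data):
        best = 0
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            best = max(best, k)
        if best >= min_len:
            lengths[best] += 1
            i += best
        else:
            i += 1
    return lengths
```

On text-like data the resulting length histogram is heavily skewed toward short matches, which is consistent with the power-law shape mentioned above.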

Page 10:

Generating synthetic data

Goals:

Generate data with similar properties (repetition lengths, byte frequencies)

Fast generation throughput

At a high level, we generate a data chunk as follows (see the sketch below):

1) Initialize a source of repetitions; for each step, make a random decision: repetition or not?

2) Repetition: pick a repeated-sequence length from the repetition-length histogram and insert repeated data.

3) No repetition: pick a sequence length from the repetition-length histogram and insert newly randomized data drawn according to the byte-frequency histogram.

[Figure: the data generator fills the synthetic chunk using the repetition-length and byte-frequency histograms.]
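A minimal Python sketch of this generation loop, with hypothetical histograms passed in as plain dictionaries; the structure follows the steps above but is not SDGen's actual generator:

```python
import random

def generate_chunk(size, rep_len_hist, byte_freq_hist, p_repeat, seed=0):
    """Fill a chunk by alternating repeated sequences and fresh random
    sequences whose lengths follow the repetition-length histogram and
    whose bytes follow the byte-frequency histogram."""
    rng = random.Random(seed)
    lengths, l_weights = zip(*rep_len_hist.items())
    symbols, s_weights = zip(*byte_freq_hist.items())
    # Source of repetitions: a small pool of sequences that get reused verbatim.
    pool = [bytes(rng.choices(symbols, s_weights, k=max(lengths))) for _ in range(16)]
    out = bytearray()
    while len(out) < size:
        n = rng.choices(lengths, l_weights)[0]       # pick a sequence length
        if rng.random() < p_repeat:                  # random decision: repetition or not?
            out += rng.choice(pool)[:n]              # repetition: insert repeated data
        else:
            out += bytes(rng.choices(symbols, s_weights, k=n))  # insert random data
    return bytes(out[:size])

chunk = generate_chunk(
    64 * 1024,
    rep_len_hist={4: 50, 8: 30, 16: 15, 64: 5},           # hypothetical histograms
    byte_freq_hist={0x20: 10, 0x61: 5, 0x65: 5, 0x00: 1},
    p_repeat=0.5,
)
```

Tuning p_repeat and the two histograms to the values measured in the scan phase is what would let a generator of this kind approximate the compression ratio and time of the original data.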

Page 11:

Evaluation

Objective metrics: compression ratio, compression time.

Additional mimicked properties: repetition length, entropy (byte frequencies).

Datasets: Calgary/Canterbury corpus, Silesia corpus, PDFs (FAST conferences), media (IBM engineers), sensor network data, Enwiki9, private mix (VMs, .xml, .html, ...).

Compressors: the target is lossless compression based on byte-level repetition finding and/or entropy encoding (zlib, lz4); we also tested other families of compressors (bzip2, lzma).
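A short Python sketch of how the two objective metrics could be collected per chunk, assuming the third-party lz4 package is installed alongside the standard-library zlib:

```python
import time
import zlib
import lz4.frame  # third-party: pip install lz4

def measure(chunk: bytes) -> dict:
    """Per-chunk compression ratio and time for zlib and lz4."""
    results = {}
    for name, compress in (("zlib", zlib.compress), ("lz4", lz4.frame.compress)):
        t0 = time.perf_counter()
        compressed = compress(chunk)
        results[name] = {
            "ratio": len(chunk) / len(compressed),
            "time_ms": (time.perf_counter() - t0) * 1e3,
        }
    return results
```

Running this over every chunk of a real dataset and of its SDGen counterpart yields the per-chunk distributions compared in the next slides.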

Page 12:

Evaluation: Mimicked Properties

Experiment: compare repetition length distributions and byte entropy in real and SDGen data.

SDGen generates data that closely mimics these metrics.

Page 13:

Evaluation: Compression Ratio & Time

Experiment: capture per-chunk compression ratios and times for both synthetic and real datasets.

Per-chunk compression ratio: compression ratios are closely mimicked, and heterogeneity within a dataset is also well captured.

Per-chunk compression time: compression times are harder to mimic (especially for lz4); still, for most data types the compressors behave similarly on real and synthetic data.

Page 14:

Evaluation: Performance of ZFS

Experiment: write 1 GB files to ZFS, built by augmenting the previous datasets.

ZFS exhibits similar behavior for both real and our synthetic data.

ZFS digests LinkBench data faster (+12% to +44%).

DNA sequencing files in the Calgary corpus are especially hard to compress.

Page 15:

Evaluation: Integration with LinkBench

Experiment: LinkBench write workload using distinct data types (ZFS + SSD storage).

SDGen serves as the data generation layer for LinkBench.

Write latency is similar for the synthetic and the real text dataset.

Page 16:

Conclusions & Future Directions

Data is an important aspect of storage benchmarking when data reduction is involved (compression, dedup).

We presented SDGen: a framework for generating realistic and sharable benchmarking data.

Idea: scan data, build a characterization, share it, generate data.

We designed a method to mimic data compression ratios and times for popular lossless compressors.

We plan to extend SDGen to mimic data deduplication.

Page 17:

Q&A


Thanks for your attention!

SDGen code: https://github.com/iostackproject/SDGen

Funding projects:

http://cloudspaces.eu (Towards the next generation of open Personal Clouds)

http://iostack.eu (Software-Defined Storage for Big Data)

Page 18:

Backup: Generation Performance

Characterizations of chunks can be used in parallel for generation.

Generating incompressible data is more expensive.

We plan optimizations that wisely reuse random data to boost generation throughput.
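A minimal Python sketch of parallel per-chunk generation; the per-chunk generator is a stand-in (it just emits random bytes of the recorded size), and the list of characterizations is hypothetical:

```python
from concurrent.futures import ProcessPoolExecutor
import os

def generate_one_chunk(args):
    """Stand-in for the per-chunk generator: emits random bytes of the
    recorded chunk size (a real generator would use the chunk's
    repetition-length and byte-frequency histograms)."""
    chunk_id, chunk_size = args
    return chunk_id, os.urandom(chunk_size)

def generate_dataset(chunk_characterizations, workers=None):
    """Chunk characterizations are independent, so chunks can be
    generated in parallel and concatenated in order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(generate_one_chunk, enumerate(chunk_characterizations))
        return b"".join(data for _, data in results)

if __name__ == "__main__":
    sizes = [64 * 1024] * 256              # hypothetical per-chunk characterizations
    synthetic = generate_dataset(sizes)
```

Reusing previously generated random buffers instead of producing fresh random bytes for every incompressible region is the kind of optimization mentioned above for boosting throughput.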

