
Meetup tensorframes

Date post: 21-Feb-2017
Upload: paolo-platter
TensorFlow & TensorFrames w/ Apache Spark Presents... Marco Saviano
Transcript
Page 1: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

Presents...

Marco Saviano

Page 2: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

1. Numerical Computing
2. Apache Spark
3. Google TensorFlow
4. TensorFrames
5. Future developments

Page 3: Meetup tensorframes

Numerical computing

• Queries and algorithms are computation-heavy

• Numerical algorithms, such as those in ML, use very simple data types: integer/floating-point operations, vectors, matrices

• Not necessarily a lot of data movement

• Numerical bottlenecks are good targets for optimization
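To make the point concrete, a kernel like a dot product is exactly this kind of target: a tight loop over simple types with no data movement (a plain-Python sketch, not an optimized implementation):

```python
def dot(xs, ys):
    """Dot product of two equal-length float vectors: the kind of tight,
    simply-typed loop that numerical workloads spend most of their time in,
    and that SIMD/GPU backends optimize well."""
    if len(xs) != len(ys):
        raise ValueError("vectors must have the same length")
    total = 0.0
    for a, b in zip(xs, ys):
        total += a * b
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```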

Page 4: Meetup tensorframes

Evolution of computing power

(Chart: scale up vs. scale out)

Page 5: Meetup tensorframes

HPC Frameworks

(Chart: HPC frameworks positioned along the scale-up vs. scale-out axes)

Today’s talk: Spark + TensorFlow = TensorFrames

Page 6: Meetup tensorframes

Open source successes

Commits on the master branch on GitHub:

Apache Spark – 1015 contributors

Google TensorFlow – 582 contributors

Page 7: Meetup tensorframes

Spark enterprise users

Page 8: Meetup tensorframes

Tensorflow enterprise users

Page 9: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

1. Numerical Computing
2. Apache Spark
3. Google TensorFlow
4. TensorFrames
5. Future developments

Page 10: Meetup tensorframes

Apache Spark

Apache Spark™ is a fast and general engine for large-scale data processing, with built-in

modules for streaming, SQL, machine learning and graph processing

Page 11: Meetup tensorframes

Spark Unified Stack

Page 12: Meetup tensorframes

How does it work? (1/3)

Spark is written in Scala and runs on the Java Virtual Machine.

Every Spark application consists of a driver program: it contains the main function, defines distributed datasets on the cluster, and then applies operations to them.

Driver programs access Spark through a SparkContext object.

Page 13: Meetup tensorframes

How does it work? (2/3)

To run the operations defined in the application, the driver typically manages a number of nodes called executors. These operations result in tasks that the executors have to perform.
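As a toy illustration of this division of labor (plain Python, not the actual Spark scheduler), a "driver" can split a dataset into tasks, hand each one to an "executor", and collect the results:

```python
# Toy model of the driver/executor split: the driver partitions the data
# into one task per executor, each executor computes its task, and the
# driver combines the partial results.

def driver(data, num_executors, task_fn):
    # Split the dataset into contiguous chunks, one per executor.
    chunk = (len(data) + num_executors - 1) // num_executors
    tasks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # "Executors" run their tasks; the driver collects the partial results.
    partials = [task_fn(t) for t in tasks]
    return sum(partials)

total = driver(list(range(10)), num_executors=3, task_fn=sum)
print(total)  # 45
```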

Page 14: Meetup tensorframes

How does it work? (3/3)

Managing and manipulating datasets distributed over a cluster by writing just a driver program, without worrying about the distributed system, is possible because of:

• Cluster managers (resource management, networking, …)
• SparkContext (task definition from more abstract operations)
• RDD (Spark's main programming abstraction for representing distributed datasets)

Page 15: Meetup tensorframes

RDD vs DataFrame

• RDD: Immutable distributed collection of elements of your data, partitioned across the nodes in your cluster, which can be operated on in parallel with a low-level API, i.e. transformations and actions.

• DataFrame: Immutable distributed collection of data, organized into named columns. It is like a table in a relational database.

Page 16: Meetup tensorframes

DataFrame: pros and cons

• Higher-level API, which makes Spark available to a wider audience

• Performance gains thanks to the Catalyst query optimizer

• Space efficiency by leveraging the Tungsten subsystem:
• Off-heap memory management, with memory managed explicitly
• Cache-aware computation, which speeds up data processing through more effective use of the L1/L2/L3 CPU caches

• Higher-level API may limit expressiveness

• Complex transformations are better expressed using the RDD API

Page 17: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

1. Numerical Computing
2. Apache Spark
3. Google TensorFlow
4. TensorFrames
5. Future developments

Page 18: Meetup tensorframes

Google TensorFlow

• Programming system in which you represent computations as graphs

• Google Brain Team (https://research.google.com/teams/brain/)

• Very popular for deep learning and neural networks
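The graph idea can be sketched in a few lines of plain Python (a deliberately naive stand-in for the TensorFlow API, not the real thing): nodes are built first, and nothing is computed until the graph is walked with concrete inputs:

```python
class Node:
    """A node in a tiny dataflow graph: an op name plus input nodes."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def const(v): return Node("const", value=v)
def placeholder(): return Node("placeholder")
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def run(node, feed):
    """Evaluate a node by recursively evaluating its inputs."""
    if node.op == "const":
        return node.value
    if node.op == "placeholder":
        return feed[node]
    a, b = (run(i, feed) for i in node.inputs)
    return a + b if node.op == "add" else a * b

# Build the graph first (no computation happens here)...
x, y = placeholder(), placeholder()
z = add(x, mul(const(3), y))   # z = x + 3*y, mirroring the example later in the deck
# ...then run it with concrete inputs.
print(run(z, {x: 3, y: 5}))    # 18
```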

Page 19: Meetup tensorframes

Google TensorFlow

• Core written in C++

• Interfaces in C++ and Python

(Diagram: C++ and Python front ends on top of the Core TensorFlow Execution System, which runs on CPU, GPU, Android, iOS, …)

Page 20: Meetup tensorframes

Google TensorFlow adoption

Page 21: Meetup tensorframes

Tensors

• Big idea: express a numeric computation as a graph
• Graph nodes are operations, which have any number of inputs and outputs
• Graph edges are tensors, which flow between nodes

• Tensors can be viewed as multidimensional arrays of numbers:
• A scalar is a tensor
• A vector is a tensor
• A matrix is a tensor
• And so on…
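As a quick illustration with nested Python lists (an informal stand-in for real tensors), the rank is just the nesting depth:

```python
def rank(t):
    """Nesting depth of a nested-list 'tensor': 0 for a scalar,
    1 for a vector, 2 for a matrix, and so on."""
    r = 0
    while isinstance(t, list):
        r += 1
        t = t[0] if t else None
    return r

print(rank(5.0))                        # 0: scalar
print(rank([1.0, 2.0]))                 # 1: vector
print(rank([[1.0, 2.0], [3.0, 4.0]]))   # 2: matrix
```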

Page 22: Meetup tensorframes

Programming model

import tensorflow as tf
x = tf.placeholder(tf.int32, name="x")
y = tf.placeholder(tf.int32, name="y")
output = tf.add(x, 3 * y, name="z")

session = tf.Session()
output_value = session.run(output, {x: 3, y: 5})

(Graph: placeholders x:int32 and y:int32; the constant 3 and y feed a mul node, whose result and x feed the add node z)

Page 23: Meetup tensorframes

Tensorflow Demo

Page 24: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

1. Numerical Computing
2. Apache Spark
3. Google TensorFlow
4. TensorFrames
5. Future developments

Page 25: Meetup tensorframes

TensorFrames

• TensorFrames (TensorFlow on Spark DataFrames) lets you manipulate Spark's DataFrames with TensorFlow programs.

• Code can be written in Python or Scala, or by directly passing a protocol buffer description of the operations graph

• Built on the javacpp project

• Officially supported Spark versions: 1.6+

Page 26: Meetup tensorframes

Spark with TensorFlow

(Diagram: data path in a Spark worker process — rows in Tungsten binary format are converted to Java objects, serialized with Python pickle, sent to a separate worker Python process, pickled back, and finally copied into a C++ buffer for TensorFlow)

Page 27: Meetup tensorframes

TensorFrames: native embedding of TensorFlow

(Diagram: with native embedding, the worker Python process disappears — inside the Spark worker process, rows in Tungsten binary format are converted to Java objects and copied directly into a C++ buffer)

Page 28: Meetup tensorframes

Programming model

• Integrates the TensorFlow API with Spark DataFrames

df = sqlContext.createDataFrame(zip(range(0, 10),
                                    range(1, 11))).toDF("x", "y")

import tensorflow as tf
import tensorframes as tfs
x = tfs.row(df, "x")
y = tfs.row(df, "y")
output = tf.add(x, 3 * y, name="z")
output_df = tfs.map_rows(output, df)

output_df.collect()

(Graph: x:int32 and y:int32 feed the computation — the constant 3 and y feed a mul node, whose result and x feed the add node z)

df: DataFrame[x: int, y: int]

output_df: DataFrame[x: int, y: int, z: int]

Page 29: Meetup tensorframes

Demo

Page 30: Meetup tensorframes

Tensors

• TensorFlow expresses operations on tensors: homogeneous data structures that consist of an array and a shape
• In TensorFrames, tensors are stored in a Spark DataFrame

(Diagram: a DataFrame with columns x and y — rows (1, [1.1 1.2]), (2, [2.1 2.2]), (3, [3.1 3.2]) — is chunked and distributed across the cluster, e.g. row 1 on node 1 and rows 2–3 on node 2)
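The chunking step can be sketched in plain Python (a toy model, not how Spark actually partitions data):

```python
# A "DataFrame" as a list of (x, y) rows, split into contiguous chunks
# that would live on different nodes of the cluster.

table = [(1, [1.1, 1.2]), (2, [2.1, 2.2]), (3, [3.1, 3.2])]

def partition(rows, num_nodes):
    # Contiguous split: one chunk per node.
    size = (len(rows) + num_nodes - 1) // num_nodes
    return [rows[i:i + size] for i in range(0, len(rows), size)]

chunks = partition(table, 2)
print(chunks[0])  # node 1: [(1, [1.1, 1.2]), (2, [2.1, 2.2])]
print(chunks[1])  # node 2: [(3, [3.1, 3.2])]
```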

Page 31: Meetup tensorframes

Map operations

• TensorFrames provides most operations in two forms:
• a row-based version
• a block-based version

• The block transforms are usually more efficient: there is less overhead in calling TensorFlow, and they can manipulate more data at once.
• In some cases it is not possible to treat a sequence of rows as a single tensor, because the data must be homogeneous.

process_row: x = 1, y = [1.1 1.2]

process_row: x = 2, y = [2.1 2.2]

process_row: x = 3, y = [3.1 3.2]

row-based

process_block: x = [1], y = [1.1 1.2]
process_block: x = [2 3], y = [[2.1 2.2] [3.1 3.2]]

block-based

x  y
1  [1]
2  [1 2]
3  [1 2 3]

(the rows of y have different shapes, so they cannot be stacked into a single tensor)
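The difference in call overhead can be emulated in plain Python (not the TensorFrames API): the row-based form invokes the kernel once per row, the block-based form once per block:

```python
# Count how many times the "kernel" is invoked in each form.
calls = {"row": 0, "block": 0}

def process_row(x):
    calls["row"] += 1
    return x + 3

def process_block(xs):
    calls["block"] += 1
    return [x + 3 for x in xs]

data = [1.0, 2.0, 3.0]
by_row = [process_row(x) for x in data]   # 3 kernel calls, one per row
by_block = process_block(data)            # 1 kernel call on the whole block
print(by_row == by_block, calls)          # True {'row': 3, 'block': 1}
```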

Page 32: Meetup tensorframes

Row-based vs Block-based

import tensorflow as tf
import tensorframes as tfs
from pyspark.sql import Row
from pyspark.sql.functions import *

data = [Row(x=float(x)) for x in range(5)]
df = sqlContext.createDataFrame(data)

with tf.Graph().as_default() as g:
    x = tfs.row(df, "x")
    z = tf.add(x, 3, name='z')
    df2 = tfs.map_rows(z, df)

df2.show()

import tensorflow as tf
import tensorframes as tfs
from pyspark.sql import Row
from pyspark.sql.functions import *

data = [Row(x=float(x)) for x in range(5)]
df = sqlContext.createDataFrame(data)

with tf.Graph().as_default() as g:
    x = tfs.block(df, "x")
    z = tf.add(x, 3, name='z')
    df2 = tfs.map_blocks(z, df)

df2.show()

Page 33: Meetup tensorframes

Reduction operations

• Reduction operations coalesce a pair or a collection of rows and transform them into a single row, until there is one row left.
• The transforms must be algebraic (associative): the order in which they are applied must not matter

f(f(a, b), c) == f(a, f(b, c))

import tensorflow as tf
import tensorframes as tfs
from pyspark.sql import Row
from pyspark.sql.functions import *

data = [Row(x=float(x)) for x in range(5)]
df = sqlContext.createDataFrame(data)

with tf.Graph().as_default() as g:
    x_input = tfs.block(df, "x", tf_name="x_input")
    x = tf.reduce_sum(x_input, name='x')
    res = tfs.reduce_blocks(x, df)

print(res)
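The associativity requirement can be checked in plain Python: a tree-shaped reduce (one grouping a cluster might use) agrees with a left-to-right fold for an associative function like addition, but not for subtraction:

```python
from functools import reduce

def tree_reduce(f, xs):
    """Combine elements pairwise, level by level, as a cluster might."""
    while len(xs) > 1:
        xs = [f(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

data = [1.0, 2.0, 3.0, 4.0, 5.0]
add = lambda a, b: a + b
sub = lambda a, b: a - b

# Addition is associative: f(f(a, b), c) == f(a, f(b, c)),
# so every grouping yields the same answer.
print(reduce(add, data) == tree_reduce(add, data))  # True
# Subtraction is not, so the grouping changes the result.
print(reduce(sub, data) == tree_reduce(sub, data))  # False
```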

Page 34: Meetup tensorframes

Demo

Page 35: Meetup tensorframes

TensorFlow & TensorFrames w/ Apache Spark

1. Numerical Computing
2. Apache Spark
3. Google TensorFlow
4. TensorFrames
5. Future developments

Page 36: Meetup tensorframes

Improving communication

(Diagram: inside the Spark worker process, a direct memory copy moves data from the Tungsten binary format / Java objects straight into the C++ buffer, skipping serialization)

Page 37: Meetup tensorframes

Improving communication

(Diagram: inside the Spark worker process, columnar storage feeds the C++ buffer via a direct memory copy)

Page 38: Meetup tensorframes

Future

• Integration with Tungsten:
• Direct memory copy
• Columnar storage

• Better integration with MLlib data types

• Improving GPU support

Page 39: Meetup tensorframes

Questions?

