HIP and CAFFE Porting and Profiling with AMD's ROCm


1 | NOVEMBER 2016

HIP and CAFFE

Porting and Profiling

Presented by Ben Sander, Aditya Atluri, Peng Sun,

Jack Chung, Adrian Edwards

2 | NOVEMBER 2016

PORTING CAFFE TO HIP

3 | NOVEMBER 2016

Easy Porting from CUDA to HIP for Caffe

• The Challenge: CAFFE
- Popular open-source machine-learning framework
- 55,000+ lines of code
- GPU-accelerated with CUDA

• The Tool: HIP ("Heterogeneous-computing Interface for Portability")
- Converts CUDA code into portable C++
- HIPified code runs on the AMD ROCm or NVIDIA CUDA platform

• Results:
- 99.6% of code unmodified or automatically converted
- Port required less than one week of developer time
- Supports all CAFFE features (multi-GPU, P2P, FFT filters, etc.)
- HIP on CUDA: same performance as native CUDA, including cuDNN support

4 | NOVEMBER 2016

Complexity of Application Porting: CAFFE

[Bar chart (AMD internal data), lines of code changed: OpenCL port required 32,227 manual changes; HIP port required 688 automatic conversions plus 219 manual changes]

5 | NOVEMBER 2016

HIP-ification of CUDA Kernel (CAFFE)

CUDA (original):

namespace caffe {

template <typename Dtype>
__global__ void BNLLForward(const int n, const Dtype* in, Dtype* out) {
  for (int index = blockIdx.x * blockDim.x + threadIdx.x; index < (n);
       index += blockDim.x * gridDim.x) {
    out[index] = in[index] > 0 ?
        in[index] + log(1. + exp(-in[index])) :
        log(1. + exp(in[index]));
  }
}

HIP (produced by the automated HIPIFY tool):

namespace caffe {

template <typename Dtype>
__global__ void BNLLForward(hipLaunchParm lp, const int n,
                            const Dtype* in, Dtype* out) {
  for (int index = hipBlockIdx_x * hipBlockDim_x + hipThreadIdx_x; index < (n);
       index += hipBlockDim_x * hipGridDim_x) {
    out[index] = in[index] > 0 ?
        in[index] + log(1. + exp(-in[index])) :
        log(1. + exp(in[index]));
  }
}

Slide callouts: the C++ features (template, namespace) and the math functions (log, exp) are UNCHANGED; only the thread/block index built-ins and the kernel signature change.
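The launch site in CAFFE changes in step with the kernel signature. The sketch below is illustrative rather than taken from the deck: it uses CAFFE's grid/block helpers (CAFFE_GET_BLOCKS, CAFFE_CUDA_NUM_THREADS) and the HIP launch macro from this deck's era (hipLaunchKernel, with HIP_KERNEL_NAME wrapping templated kernels); later HIP releases rename the macro to hipLaunchKernelGGL and drop the hipLaunchParm argument. The variable names (count, bottom_data, top_data) follow CAFFE's BNLL layer.

// CUDA: triple-chevron launch as it appears in CAFFE's BNLL layer (illustrative)
BNLLForward<Dtype><<<CAFFE_GET_BLOCKS(count), CAFFE_CUDA_NUM_THREADS>>>(
    count, bottom_data, top_data);

// HIP: launch macro of this era; grid/block become dim3 arguments, followed by
// dynamic shared-memory bytes and the stream, then the kernel arguments
hipLaunchKernel(HIP_KERNEL_NAME(BNLLForward<Dtype>),
                dim3(CAFFE_GET_BLOCKS(count)), dim3(CAFFE_CUDA_NUM_THREADS),
                0, 0,
                count, bottom_data, top_data);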

6 | NOVEMBER 2016

Hipification of CUDA Runtime APIs (CAFFE)

CUDA (original):

void SyncedMemory::async_gpu_push(const cudaStream_t& stream) {
  CHECK(head_ == HEAD_AT_CPU);
  if (gpu_ptr_ == NULL) {
    cudaGetDevice(&gpu_device_);
    cudaMalloc(&gpu_ptr_, size_);
    own_gpu_data_ = true;
  }
  const cudaMemcpyKind put = cudaMemcpyHostToDevice;
  cudaMemcpyAsync(gpu_ptr_, cpu_ptr_, size_, put, stream);
  // Assume caller will synchronize on the stream
  head_ = SYNCED;
}

HIP (produced by the automated HIPIFY tool):

void SyncedMemory::async_gpu_push(const hipStream_t& stream) {
  CHECK(head_ == HEAD_AT_CPU);
  if (gpu_ptr_ == NULL) {
    hipGetDevice(&gpu_device_);
    hipMalloc(&gpu_ptr_, size_);
    own_gpu_data_ = true;
  }
  const hipMemcpyKind put = hipMemcpyHostToDevice;
  hipMemcpyAsync(gpu_ptr_, cpu_ptr_, size_, put, stream);
  // Assume caller will synchronize on the stream
  head_ = SYNCED;
}
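The renames above extend mechanically across the rest of CAFFE's runtime-API usage. As a hedged illustration (not shown in the deck), error checking follows the same pattern; the HIP_CHECK macro here is hypothetical and simply mirrors CAFFE's existing CUDA_CHECK wrapper around glog's CHECK_EQ:

// Hypothetical HIP_CHECK, mirroring CAFFE's CUDA_CHECK macro
#define HIP_CHECK(condition)                                          \
  do {                                                                \
    hipError_t error = condition;                                     \
    CHECK_EQ(error, hipSuccess) << " " << hipGetErrorString(error);   \
  } while (0)

// Usage is a one-for-one substitution for the CUDA original
HIP_CHECK(hipMalloc(&gpu_ptr_, size_));
HIP_CHECK(hipMemcpyAsync(gpu_ptr_, cpu_ptr_, size_, hipMemcpyHostToDevice, stream));
HIP_CHECK(hipStreamSynchronize(stream));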

7 | NOVEMBER 2016

Porting with HIPIFY

Flow: CUDA source → hipify → Portable HIP C++ → developer cleanup and tuning

• ~99%+ automatic conversion
• Developer maintains the HIP port
• Resulting C++ code runs on NVIDIA or AMD GPUs
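To make "portable HIP C++" concrete, here is a minimal self-contained sketch (illustrative, not from the deck). It is written against the HIP API of this deck's era, so the kernel takes a hipLaunchParm argument and is launched with hipLaunchKernel; current HIP spells the macro hipLaunchKernelGGL and drops that argument. The same source builds for AMD or NVIDIA GPUs.

#include <hip/hip_runtime.h>
#include <cstdio>

// Scale a vector in place on the GPU
__global__ void scale(hipLaunchParm lp, int n, float a, float* x) {
  int i = hipBlockIdx_x * hipBlockDim_x + hipThreadIdx_x;
  if (i < n) x[i] *= a;
}

int main() {
  const int n = 1 << 20;
  float* x = nullptr;
  hipMalloc(&x, n * sizeof(float));
  hipMemset(x, 0, n * sizeof(float));

  // Launch: grid, block, shared-memory bytes, stream, then kernel arguments
  hipLaunchKernel(scale, dim3((n + 255) / 256), dim3(256), 0, 0, n, 2.0f, x);
  hipDeviceSynchronize();

  hipFree(x);
  printf("done\n");
  return 0;
}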

8 | NOVEMBER 2016

HIP Compilation Process

Portable HIP C++ (kernels + HIP API) compiles down one of two paths:

AMD: HIP-to-HC header → HCC → HCC C++ (kernels + HC) → HCC executable
• HIP API implemented with a lightweight HIP runtime
• Uses HCC's hc::accelerator, hc::accelerator_view, hc::completion_future
• Some calls go directly into ROCR
• Compute kernels mostly unchanged
• Code compiled with HCC
• Can use CodeXL and ROCm tools

NVIDIA: HIP-to-CUDA header → NVCC → CUDA (kernels + CUDA API) → CUDA executable
• HIP API implemented as inlined calls to the CUDA runtime
• Compute kernels mostly unchanged
• Code compiled with NVCC (same as CUDA)
• Can use nvprof, the CUDA debugger, and other tools

• Source portable
• Not binary portable
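The two header paths above are also visible from inside the source. As a hedged illustration (not from the deck), HIP defines a platform macro so the same file knows which backend it was compiled for; the names below are the ones HIP used around this timeframe (newer releases spell them __HIP_PLATFORM_AMD__ / __HIP_PLATFORM_NVIDIA__):

#include <hip/hip_runtime.h>

#if defined(__HIP_PLATFORM_HCC__)
  // Compiled down the AMD path: HIP calls go through the lightweight HIP
  // runtime built on HCC's hc:: types, with some calls directly into ROCR.
#elif defined(__HIP_PLATFORM_NVCC__)
  // Compiled down the NVIDIA path: HIP calls are thin inline wrappers over
  // the CUDA runtime, so nvprof and the CUDA debugger see normal CUDA activity.
#endif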

9 | NOVEMBER 2016

HIP : Key Features

• Strong support for the most commonly used parts of the CUDA API
‒ Streams, events, memory allocation/deallocation, profiling
‒ HIP includes driver API support (modules and contexts)

• C++ support including templates, namespaces, classes, lambdas
‒ AMD's open-source GPU compiler based on near-tip CLANG/LLVM
‒ Supports C++11, C++14, and some C++17 features

• Hipified code is portable to AMD/ROCm and NVIDIA/CUDA
‒ On CUDA, developers can use native CUDA tools (nvcc, nvprof, etc.)
‒ On ROCm, developers can use native ROCm tools (hcc, rocm-prof, CodeXL)
‒ HIP ecosystem includes hipBLAS, hipFFT, hipRNG

• Hipify tools automate the translation from CUDA to HIP
‒ Developers should expect some final cleanup and performance tuning
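As a hedged illustration of the "streams, events, profiling" support listed above (not from the deck), the snippet below times an asynchronous host-to-device copy with HIP events, mirroring the familiar cudaStream_t / cudaEvent_t pattern; the function name and arguments are illustrative:

#include <hip/hip_runtime.h>

// Time an async H2D copy on its own stream and return the elapsed milliseconds
float timed_h2d_copy(void* dst, const void* src, size_t bytes) {
  hipStream_t stream;
  hipEvent_t start, stop;
  hipStreamCreate(&stream);
  hipEventCreate(&start);
  hipEventCreate(&stop);

  hipEventRecord(start, stream);
  hipMemcpyAsync(dst, src, bytes, hipMemcpyHostToDevice, stream);
  hipEventRecord(stop, stream);
  hipEventSynchronize(stop);

  float ms = 0.0f;
  hipEventElapsedTime(&ms, start, stop);

  hipEventDestroy(start);
  hipEventDestroy(stop);
  hipStreamDestroy(stream);
  return ms;
}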

10 | NOVEMBER 2016

HIP : PROFILING & TOOLS

11 | NOVEMBER 2016

HIP + CodeXL

[CodeXL timeline screenshot annotated with: CAFFE layers/APIs, HIP APIs, GPU kernels, data transfers; built on HSA]

12 | NOVEMBER 2016

