
Estimating GPU Memory Consumption of Deep Learning Models

Yanjie Gao, Microsoft Research

Yu Liu, Microsoft Research

Hongyu Zhang, The University of Newcastle

Zhengxian Li, Microsoft Research

Yonghao Zhu, Microsoft Research

Haoxiang Lin*, Microsoft Research

Mao Yang, Microsoft Research

*Corresponding author.

ABSTRACT

Deep learning (DL) has been increasingly adopted by a variety of software-intensive systems. Developers mainly use GPUs to accelerate the training, testing, and deployment of DL models. However, the GPU memory consumed by a DL model is often unknown to them before the DL job executes. Therefore, an improper choice of neural architecture or hyperparameters can cause such a job to run out of the limited GPU memory and fail. Our recent empirical study has found that many DL job failures are due to the exhaustion of GPU memory. This leads to a horrendous waste of computing resources and a significant reduction in development productivity. In this paper, we propose DNNMem, an accurate estimation tool for GPU memory consumption of DL models. DNNMem employs an analytic estimation approach to systematically calculate the memory consumption of both the computation graph and the DL framework runtime. We have evaluated DNNMem on 5 real-world representative models with different hyperparameters under 3 mainstream frameworks (TensorFlow, PyTorch, and MXNet). Our extensive experiments show that DNNMem is effective in estimating GPU memory consumption.

KEYWORDS

Memory consumption, deep learning, estimation model, program analysis

1 INTRODUCTION

In recent years, deep learning (DL) has rapidly become one of the most successful machine learning techniques and is widely integrated into a variety of software-intensive systems (such as computer vision systems, natural language processing systems, games, etc.). To accelerate the training, testing, and deployment of DL models (aka deep neural networks or DNNs), GPUs (Graphics Processing Units) are widely adopted by developers. Enterprises also build dedicated DL platforms such as Amazon SageMaker [4] and Microsoft Azure Machine Learning [5] with a large number of GPUs, providing support for DL frameworks like TensorFlow (TF) [1], PyTorch [34], and MXNet [9].

However, since the GPU memory consumed by a DL model is often unknown to developers before the training or inferencing job starts running, an improper model configuration of neural architecture or hyperparameters can cause such a job to run out of the limited GPU memory and fail. For example, as shown in Figure 1, if a PyTorch ResNet50 [16] training job with a batch size of 256 is scheduled on an NVIDIA Tesla P100 GPU, it will trigger an OOM (out-of-memory) exception because the DL model requires 22 GB of GPU memory while the P100 has only 16 GB in total.

According to our recent empirical study on 4960 failed DL jobs in Microsoft (Section 2.1), 8.8% of the job failures were caused by the exhaustion of GPU memory, which accounts for the largest category among all deep learning specific failures. Therefore, knowing the accurate GPU memory consumption (aka memory footprint) in advance is very important to reduce OOM failures and save precious platform resources including GPU/CPU/storage, by helping developers choose an optimal model configuration or facilitating DL frameworks to better utilize the mechanisms of dynamic memory management [22] (e.g., GPU memory swapping). This ability can also benefit AutoML tools in enhancing the search efficiency (e.g., excluding those model configurations that do not satisfy the memory requirement) and DL platforms in optimizing job planning and scheduling (e.g., scheduling a group of DL jobs that maximizes the GPU memory usage).

Figure 1: GPU memory consumption (GB) of training PyTorch VGG16 [41] and ResNet50 models with different batch sizes (64, 128, and 256). The red lines indicate the memory capacities of three NVIDIA GPUs (K80, V100/P100, and P40).

There are already many program analysis based techniques [2, 6, 7, 12, 20, 45, 46] for estimating the memory consumption of C, C++, and Java programs. For example, Albert et al. [2] presented a parametric inference on the notion of object lifetime to infer the memory requirements of Java-like programs. Heo et al. [17] proposed a resource-aware (e.g., memory size) flow-sensitive analysis that can adjust its behavior by coarsening the program abstraction. However, existing work cannot be directly applied to DL models for the following three main reasons:

(1) The hybrid programming paradigm adopted by DL frameworks hides the internal execution of a DL model from the high-level programs written by developers, therefore making it difficult to track the precise GPU memory usage.


(2) It is hard to analyze the GPU memory usage of low-level framework operators (e.g., Conv2d), since they are usually implemented with proprietary NVIDIA cuDNN, cuBLAS, or CUDA APIs and nested loops.

(3) There are many hidden factors within the framework runtime which could observably affect the final GPU memory consumption, including the allocation policy (e.g., tensor alignment, fragmentation, reservation, and garbage collection), internal usage (e.g., the CUDA context), implementation choices (e.g., multiple convolution algorithms in cuDNN [32]), operator scheduling, etc.

Simple workarounds cannot precisely estimate the GPU memory consumption. First, the Shape Inference capability of DL frameworks [23, 33] could be adapted for the estimation, by statically adding up all the GPU memory allocated to the initial input, operator weights, intermediate outputs, and the final output. However, it does not take into account the above-mentioned hidden framework factors, leading to large estimation errors (Section 5.2). Second, running a DL job for a while and profiling it dynamically may help estimate how much GPU memory is required. Nevertheless, such a resource-consuming workaround cannot avoid GPU OOM either, and is unaffordable in the scenario of hyperparameter tuning, where a large number of possible neural architectures and hyperparameter combinations exist.

This paper presents DNNMem, an accurate estimation tool for GPU memory consumption of DL models. Our key observation is that the algorithmic execution of a DL model can be represented as iterative forward and backward propagation on its computation graph. Such a graph is a directed acyclic graph (DAG), where each node is an invocation of a mathematical function called an operator (e.g., matrix addition) and each edge specifies the execution dependency. GPU memory is allocated to tensors (e.g., operator inputs/outputs and learnable parameters) and temporary buffers (e.g., cuDNN workspace), and is later released by the framework's built-in memory allocator [26] along with the execution of operators. Hence, estimating GPU memory consumption can be reduced to the calculation of the memory required by each operator on the computation graph in accordance with a graph traversal ordering. For an operator, DNNMem defines an analytic and framework-independent memory cost function, since the operator is well defined with similar implementations across different frameworks. DNNMem also extracts many of the above-mentioned runtime factors from each supported framework to refine the estimation. For example, it analyzes the liveness of tensors to handle GPU memory deallocation. DNNMem is general and applies to not only single-device training but also data-parallel training and model inference.

We have implemented DNNMem and evaluated it on 5 real-world representative models (VGG16 [41], ResNet50 [16], InceptionV3 [42], LSTM [18], and BERT [14]) with different hyperparameters under 3 mainstream DL frameworks (TensorFlow, PyTorch, and MXNet). The average estimation errors are below 16.3%, confirming the effectiveness of our proposed approach. The results also show that DNNMem is robust to the choices of neural architectures, hyperparameters, and DL frameworks.

In summary, this paper makes the following contributions:

(1) We systematically explore how GPU memory is consumed by DL models.

(2) We propose and implement DNNMem, which can accurately estimate the GPU memory consumption of a DL model.

(3) We perform comprehensive evaluations of DNNMem on a variety of DL models and frameworks. The results show the effectiveness and robustness of DNNMem.

2 BACKGROUND AND MOTIVATION

2.1 The Out-of-Memory Problem in DL Practice

We recently conducted an empirical study on 4960 failed DL jobs collected from the Philly platform in Microsoft within a three-week period [49]. Every day, thousands of jobs from both research and product teams are executed on Philly, including machine translation, reading comprehension, object detection, gaming, advertisement, etc. For each failed job, we collected all related information, including input data, source code, job scripts, execution logs, and runtime statistics, for analysis. Failures in our study manifested as unexpected runtime errors that led to job termination.

In our empirical study, we analyzed the categories and the root causes of DL job failures. Our study shows that 8.8% of the total failures were caused by the exhaustion of GPU memory, which accounts for the largest category among all deep learning specific failures. DL models with sophisticated network structures and large batch sizes may improve the model learning performance but also significantly increase memory consumption. Since GPU memory is relatively limited, developers need to size the model very carefully.

In fact, the OOM problem is not specific to the DL jobs in Microsoft. Another empirical study on 2716 Stack Overflow posts also listed OOM as one of the six major effects of deep learning bugs [19]. Therefore, knowing the accurate GPU memory consumption in advance is very important to reduce out-of-memory failures and save precious platform resources. A memory usage estimation tool is very useful in this regard.

2.2 A Motivating Example of Our Approach

We motivate the design of DNNMem by describing how GPU memory is used and calculated for a simplified PyTorch training program. Developers use deep learning frameworks such as TensorFlow (TF) [1], PyTorch [34], and MXNet [9] to design layered data representations called deep neural networks (DNNs) or deep learning models. These frameworks provide both high-level programming interfaces and basic building blocks for model construction. DL models are essentially mathematical functions, which can be formalized as tensor-oriented computation graphs. Inputs and outputs of the graph nodes are tensor (multi-dimensional array of numerical values) variables. The shape of a tensor is the number of elements in each dimension plus the element data type. Each node represents the invocation of a mathematical operation called an operator (e.g., matrix addition). Since a node is completely decided by its invoked operator, we may use "node" and "operator" interchangeably in the rest of the paper. Each operator may additionally contain some numerical learnable parameters (i.e., weights¹). A graph edge pointing from one output of operator $A$ to one input of $B$ delivers a tensor and specifies the execution dependency.

¹Weight biases are included.

1  import torch; import torch.nn as nn
2  class Net(nn.Module):
3      def __init__(self):
4          super(Net, self).__init__()
5          self.conv = nn.Conv2d(3, 8, 3)
6          self.pool = nn.AvgPool2d(2, 2)
7          self.fc = nn.Linear(1800, 10)
8      def forward(self, x):
9          x = self.conv(x)
10         x = self.pool(x)
11         x = x.view(x.size(0), -1)
12         x = self.fc(x)
13         return x
14
15 model = Net().cuda()
16 for epoch in range(500):
17     ...  # data loading, loss criterion, and optimizer setup omitted
18     outputs = model(inputs)
19     loss = criterion(outputs, labels)
20     loss.backward()
21     optimizer.step()

Figure 2: A sample PyTorch training program which constructs a sequential DL model using Conv2d (2D convolution), AvgPool2d (2D average pooling), and Linear (fully-connected layer) operators.

Figure 3: Computation graph for training the DL model in Figure 2. Ovals represent tensors, in which $W$ stands for weight tensor, $O$ for In/Out tensor, and $E$ for ephemeral tensor. Rectangles are operators.² Dashed lines denote weight updates by SGD.

Figure 2 shows the sample PyTorch training program, which sets up a sequential model using the framework built-in Conv2d (2D convolution), AvgPool2d (2D average pooling), and Linear (fully-connected layer) operators (lines 5-12). The original code does not give enough clues on how the training is processed. Under the hood, PyTorch constructs the computation graph depicted in Figure 3 and applies iterative forward and backward propagation on it to learn the optimal weights. Such a graph is augmented with some system-crafted operators for backward propagation (e.g., AvgPool2d_BP in the middle of Figure 3).² Under forward propagation (the left of Figure 3), input data (Data_X) is fed through the neural network and manipulated by the above developer-specified operators. Produced output activations and input labels (Data_Y) are then propagated backward to compute weight gradients. Finally, an optimizer is responsible for the weight update that minimizes the loss (e.g., the difference between actual and expected outputs), marking the end of one iteration. Popular optimization algorithms include Adam [21], RMSProp [44], and SGD (stochastic gradient descent) [3].

²We split the backward propagation of Linear into two logical operators Linear_BP1 and Linear_BP2 to clearly demonstrate the computation of weight and output gradients. The same applies to Conv2d_BP1 and Conv2d_BP2.

Figure 4: GPU memory allocation during the operator execution.

During the training, operators apply for necessary GPU memory on demand to store the following dimensions of tensors, denoted by the ovals in Figure 3:

(1) Weight Tensor. This dimension includes operator weights (e.g., $W^1_m$) and weight gradients (e.g., $W^6_g$) computed under backward propagation for updating weights.

(2) In/Out Tensor. This dimension includes the initial input (Data_X for features and Data_Y for labels) and operator inputs/outputs. Outputs are further distinguished into forward outputs (e.g., $O^1_f$) and output gradients (e.g., $O^6_g$) for calculating weight gradients. We do not draw operator inputs because they are identical to the corresponding predecessors' outputs. Note that they may occupy separate GPU memory buffers under certain circumstances (e.g., in model-parallel training).

(3) Ephemeral Tensor. This dimension includes variables used by cuDNN/cuBLAS/CUDA APIs, such as cuDNN workspace (e.g., part of $E^1$) and declared CUDA random numbers.

In addition, a DL model also requires some resident buffers. For example, extra GPU memory is allocated for tensors to meet the alignment requirements (i.e., internal tensor fragmentation). Others include the CUDA context,³ runtime reservation, etc.

Figure 4 illustrates how GPU memory is possibly consumed in training the above motivating DL model. The vertical axis represents the operator execution ordering such that Conv2d executes first, AvgPool2d is the second, Linear then follows, etc. The horizontal axis shows the consumed GPU memory when a certain operator is executing. The GPU memory consumption of a DL model is the total GPU memory applied for by the framework from the device, which can be logically viewed as a continuous area divided into memory blocks (rectangles in Figure 4). Green parts are the allocated memory for in-use tensors. Yellow parts are the internal tensor fragmentation when the original tensor sizes do not align to a power of two. Gray parts are the memory reserved by the framework allocator. For example, after a tensor is out of use, its memory block could be cached instead of being returned to the GPU immediately. Since the CUDA context is allocated when DL frameworks initialize, we do not draw it in the figure.

³The CUDA context contains managing information to control and use GPU devices.

Initially, before the operator execution (the first line in Figure 4), GPU memory is applied for the two initial input tensors Data_X and Data_Y, and extra memory (the rightmost gray rectangle) is pre-allocated to make future allocation more efficient. When Conv2d executes, the framework allocator pads a little more GPU memory, as internal tensor fragmentation, to the ephemeral tensor $E^1$ since its size is not aligned. After Conv2d has finished, $E^1$ reaches the end of its life and is then released. However, the corresponding memory block is cached and will be re-allocated to $W^3_m$ when Linear starts. The remaining space (the gray rectangle next to $W^3_m$) is too small for any later tensors; therefore it becomes external tensor fragmentation and waits to be garbage-collected.

Table 1: Selected settings in the model (upper part) and execution specifications. The mark "*" represents the default value.

Specification Setting | Example Values | Affected Symbols
Framework | TF* / PyTorch / MXNet | $M_{ctx}$, $MR(u)$
Input Shape | (H:224, W:224, C:3)⁴ | $O(u)$
Batch Size | 128 | $O(u)$
Optimization Algorithm | SGD* / Adam | $W(u)$
Precision Format | Float32* / Double | $M_{CG}$
Execution Mode | single-* / multi-device | $W(u)$
GPU SKU | P40 | $M_{ctx}$
cuDNN Workspace Limit | 4 GB | $E(u)$

3 PROPOSED APPROACH

To fully understand how GPU memory is used by a DL model, we classify the allocated GPU memory into 4 dimensions and present them in Table 2. Our key observation is that the algorithmic execution of a DL model is represented by frameworks as iterative forward and backward propagation on its computation graph. Propagation follows the execution dependency between operators, which is specified by the graph edges. The operator scheduling (i.e., the ordering in which the framework executes operators) influences the GPU memory consumption since it could affect memory deallocation, preservation, garbage collection, etc. DNNMem reduces the operator scheduling to the computation graph traversal. Currently, since DL frameworks schedule one operator after another,⁵ we assume that operators are traversed sequentially. Therefore, DNNMem adopts an analytic approach which formalizes the estimation of GPU memory consumption as the calculation of the memory required by each operator on the computation graph in accordance with a topological (linear) graph traversal ordering (Section 4.1). Such an ordering is pre-generated by referring to the framework implementations [22, 29, 36].

⁴H, W, and C represent the height, width, and channel dimensions of an image input tensor, respectively.

Figure 5 illustrates the architecture of DNNMem. It accepts the on-disk serialized model file(s), a model specification, and an execution specification as the input, and then reports the estimated GPU memory consumption. The model specification includes the input tensor shape and hyperparameter values (e.g., the kernel size of some convolutional operator). The execution specification contains runtime information such as the execution mode (e.g., single-device) and the GPU SKU (Stock Keeping Unit) (e.g., GPU type and memory capacity). Some specification settings are shown in Table 1.

Figure 5: Architecture of DNNMem.

DNNMem implements a front-end parser for each supported DL model format, using the framework built-in model deserialization APIs. Such a parser is responsible for reading the input DL model from the disk file(s) and reconstructing it into the corresponding computation graph.

DL frameworks may allocate GPU memory in advance before the operator execution (e.g., the CUDA context, initial input tensors, and weight tensors of TensorFlow models). DNNMem defines two allocation policies: ALLOC_ON_START (at the initializing phase) and ALLOC_ON_DEMAND (at the first use). Before the graph traversal, DNNMem counts tensors and resident buffers with the ALLOC_ON_START policy to calculate an initial GPU memory consumption.
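To make the two allocation policies and this initial pass concrete, here is a minimal Python sketch (illustrative only; the class and function names are our own, not DNNMem's actual code):

from enum import Enum, auto

class AllocPolicy(Enum):
    ALLOC_ON_START = auto()    # allocated at the framework's initializing phase
    ALLOC_ON_DEMAND = auto()   # allocated at the tensor's first use

class ReleasePolicy(Enum):
    RELEASE_ON_EXIT = auto()   # released only at the finalizing phase (e.g., weights)
    RELEASE_ON_DEATH = auto()  # released right after the tensor is out of use

class TensorInfo:
    def __init__(self, name, nbytes, alloc_policy, release_policy):
        self.name = name
        self.nbytes = nbytes
        self.alloc_policy = alloc_policy
        self.release_policy = release_policy

def initial_consumption(all_tensors, cuda_context_bytes):
    # Initial GPU memory before any operator runs: the CUDA context plus
    # every tensor or resident buffer tagged ALLOC_ON_START.
    return cuda_context_bytes + sum(
        t.nbytes for t in all_tensors
        if t.alloc_policy is AllocPolicy.ALLOC_ON_START)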

During the graph traversal, DNNMem calculates the current GPU memory consumption for the operator being visited. As tensors have their own lifecycles, DNNMem first computes the set of unreleased tensors which are still in GPU memory. This set consists of the live tensors depended on by the visiting and subsequent operators. The framework may also hold certain dead tensors for a while, so they should be counted as well. DNNMem defines two release policies: RELEASE_ON_EXIT (at the finalizing phase) and RELEASE_ON_DEATH (right after being out of use). At present, according to the framework implementations, only operator weights are set to RELEASE_ON_EXIT since they will be released only after the training finishes. Thus, unreleased tensors can be captured with the liveness analyzer as well as the release policy information (Section 4.3). Next, DNNMem analyzes the tensors allocated by the visiting operator. Instead of performing program analysis on the source code of operators, DNNMem defines an analytic and framework-independent memory cost function for each operator (Section 4.2), which returns a list of allocated tensors with type and memory size. In this paper, we only consider single-device or data-parallel training, in which an operator and its predecessors are placed on the same GPU. Hence, input tensors of the operator are excluded because they are identical to the predecessors' outputs. DNNMem handles internal tensor fragmentation by padding extra memory according to the alignment requirements. It is possible that several operators share weights (i.e., aliasing) [35]. DNNMem identifies them from their operator names and counts the shared weight tensors only once.

⁵MXNet can be configured to execute several operators simultaneously (i.e., bulk execution).

Table 2: Classification of allocated GPU memory.

Dimension | Category | Description
Weight Tensor | Weight | Learnable parameters of operators
Weight Tensor | Weight Gradient | Gradients computed under backward propagation for updating weights
In/Out Tensor | Initial Input | Input data items in mini batches
In/Out Tensor | Operator Input | Inputs of an operator (identical to the corresponding predecessors' outputs if the predecessors reside on the same GPU)
In/Out Tensor | Forward Output | Outputs of an operator computed under forward propagation (including the model's final output such as $O^3_f$ in Figure 3)
In/Out Tensor | Output Gradient | Gradients under backward propagation for calculating weight gradients
Ephemeral Tensor | cuDNN Workspace | Additional GPU memory used by cuDNN APIs
Ephemeral Tensor | Temporary Tensor | Temporary variables used in operator implementation
Resident Buffer | CUDA Context | Managing information to control and use GPU devices
Resident Buffer | Internal Tensor Fragmentation | Extra memory allocated for alignment
Resident Buffer | Allocator Reservation | (1) Released yet unreclaimed tensors; (2) Pre-allocated memory; (3) External tensor fragmentation; (4) Miscellaneous reservation (e.g., the fusion buffer used by Horovod)

The GPU memory occupied by the CUDA context is assumed to be constant, pre-computed from the GPU SKU, framework type and version, etc. DNNMem finally identifies how GPU memory is managed and reserved by the framework runtime, which serves to increase the performance of memory allocation (Section 4.5).

When the graph traversal completes, the maximum consumption among all operators is reported as the GPU memory consumption of the DL model. Note that our methodology requires that the GPU memory consumption across training iterations is identical. Therefore, the computation graph should be deterministic, without control-flow operators (e.g., loops and conditional branches) or dynamic graph changes (e.g., PyTorch employs the define-by-run approach). Otherwise, users may unroll loops (as well as RNNs [?]) statically with a user-specified or framework-default count, or supply multiple deterministic computation graphs (e.g., several model files) to tackle this problem.

4 IMPLEMENTATION

4.1 Estimation on Computation Graph

Formally, the computation graph of a DL model is represented as a directed acyclic graph (DAG):

$CG = \langle \{u_i\}_{i=1}^{n},\ \{(u_i, u_j)\},\ \{p_k\}_{k=1}^{m} \rangle$

Each node $u_i$ is an operator, while a directed edge $(u_i, u_j)$ delivers an output tensor of $u_i$ to $u_j$ as an input and specifies the execution dependency between the two operators. Each $p_k$ is a hyperparameter such as input tensor shape, batch size, learning rate, etc. As mentioned before, we suppose that $CG$ is deterministic without control-flow operators.

Let $S = \langle u_{i_1}, u_{i_2}, \cdots, u_{i_n} \rangle$ be a topological (linear) ordering extended from the above graph edge ordering such that $u_{i_j} \prec_S u_{i_k} \implies (u_{i_k}, u_{i_j}) \notin CG$. We call $S$ the operator schedule, which represents the actual runtime execution of operators. $S$ is pre-generated by reference to the framework implementations [22, 29, 36]. DNNMem then follows $S$ to traverse the computation graph $CG$ sequentially. Suppose that $u$ is the operator being visited; the current GPU memory consumption for $u$ consists of 3 parts: previously allocated but still in-use tensors, newly allocated tensors for $u$, and resident buffers of the CUDA context and allocator reservation. The first two kinds of tensors are called the unreleased tensors.

Let $MF_{init}$ and $MF$ be the functions that return the initial and current GPU memory consumption. Let $MU$, $MR$, and $M_{ctx}$ be the functions that return the memory size of unreleased tensors, the memory size of allocator reservation, and the GPU memory occupied by the CUDA context, respectively. Function $UT$ computes the set of all unreleased tensors, and $MT$ returns the allocated memory size of a tensor $t$. Note that $MT$ counts in the internal tensor fragmentation. We use $M_{CG}$ to denote the GPU memory consumption of the computation graph $CG$, and calculate it as follows:

$MF_{init} = M_{ctx} + \sum_{t \ \text{has}\ \texttt{ALLOC\_ON\_START}} MT(t)$

$MU(u) = \sum_{t \in UT(u)} MT(t)$

$MF(u) = MU(u) + MR(u) + M_{ctx}$

$M_{CG} = \max\{\, MF_{init},\ MF(u_i) \mid u_i \in CG \,\}$

The above abstraction and formalization are general to different frameworks and DL models in estimating GPU memory consumption. Users can also adapt the estimation model to new devices and frameworks by using another operator schedule, associating different allocation/release policies to tensors, or modifying the above functions such as $MR$ and $M_{ctx}$.
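The calculation above can be summarized in a short Python sketch of the traversal (a simplified illustration; MT, UT, and MR stand for the functions just defined, and their implementations are omitted):

def estimate_memory_consumption(schedule, start_tensors, M_ctx, MT, UT, MR):
    # MF_init: CUDA context plus all tensors allocated at the initializing phase.
    mf_init = M_ctx + sum(MT(t) for t in start_tensors)
    peak = mf_init
    for u in schedule:                      # topological (linear) ordering S
        mu = sum(MT(t) for t in UT(u))      # MU(u): unreleased tensors at u
        mf_u = mu + MR(u) + M_ctx           # MF(u) = MU(u) + MR(u) + M_ctx
        peak = max(peak, mf_u)              # running maximum over all operators
    return peak                             # M_CG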

4.2 Memory Cost Functions of Operators


Tensor cudnn_convolution_forward(
    CheckedFrom c,
    const TensorArg& input, const TensorArg& weight,
    IntArrayRef padding, IntArrayRef stride, IntArrayRef dilation,
    int64_t groups, bool benchmark, bool deterministic)
{
  checkAllSameType(c, {input, weight});
  checkAllSameGPU(c, {input, weight});

  auto output_t = at::empty(
      conv_output_size(input->sizes(), weight->sizes(),
                       padding, stride, dilation, groups),
      input->options());

  // Avoid ambiguity of "output" when this is being used as backwards
  TensorArg output{ output_t, "result", 0 };
  convolution_shape_check(c, input, weight, output, padding,
                          stride, dilation, groups);

  // See #4500
  Tensor weight_contig = weight->contiguous();
  raw_cudnn_convolution_forward_out(
      *output, *input, weight_contig, padding,
      stride, dilation, groups, benchmark, deterministic);

  return *output;
}

Figure 6: Implementation of the PyTorch convolutional operator's forward propagation using NVIDIA cuDNN.⁶

⁶https://github.com/pytorch/pytorch/blob/v1.2.0/aten/src/ATen/native/cudnn/Conv.cpp#L893

Knowing how GPU memory is allocated and used by an operator from its source code is challenging using traditional program analysis techniques. This is because operators are usually implemented by DL frameworks with NVIDIA cuDNN, cuBLAS, or CUDA API invocations (black box) and nested loops.

Instead, we define an analytic and framework-independent memory cost function for each operator by reference to the framework implementations. Our solution is technically feasible for two reasons. First, frequently-used operators are well-defined with clear syntax and semantics. Second, DL frameworks implement them similarly by calling NVIDIA APIs. The memory cost function returns a set of allocated tensors with category and shape (in terms of parameters such as batch size, input tensor shape, the filter number, and so on). Most of the concrete parameter values are fetched from the previously mentioned user specifications, while the input tensor shape can be inferred by Shape Inference.

We suppose that $u$ is the operator being visited and $MC$ is its memory cost function. Let $W$, $O$, and $E$ be the functions that return the sets of $u$'s weight/output/ephemeral tensors. As mentioned in Section 3, we exclude input tensors because only single-device and data-parallel training are considered. Thus,

$MC(u) = W(u) \cup O(u) \cup E(u)$

Weight tensors include operator weights ($W_m$) under forward propagation and weight gradients ($W_g$) under backward propagation:

$W(u) = W_m(u) \cup W_g(u)$

Output tensors consist of forward outputs ($O_f$) and output gradients ($O_g$):

$O(u) = O_f(u) \cup O_g(u)$

Ephemeral tensors contain three parts:

(1) cuDNN workspace ($E_w$), which is the additional GPU memory buffer used by cuDNN APIs such as cudnnConvolutionForward() in the implementation of framework operators. A larger workspace brings better performance. DNNMem invokes standard interfaces such as cudnnGetConvolutionForwardWorkspaceSize() to obtain the amount of cuDNN workspace required. In addition, frameworks may limit the workspace size in case of GPU memory shortage. For example, TensorFlow exports an environment variable TF_CUDNN_WORKSPACE_LIMIT_IN_MB to set the upper bound of the cuDNN workspace. Thus, DNNMem returns the smaller of the two values.

(2) CUDA data structures ($E_v$), which are miscellaneous data structures used by CUDA APIs, like CUDA random numbers.

(3) Temporary tensors ($E_p$), which are temporary variables used in the implementation of framework operators. For example, we observe through runtime logs that TensorFlow's convolution operator uses two temporary tensors with the same sizes as the weight and output tensors, respectively.

Thus,

$E(u) = E_w(u) \cup E_v(u) \cup E_p(u)$

Note that not all types of tensors are allocated for the operator $u$. Let us use the motivating example in Figures 2 and 3 to illustrate what such memory cost functions look like. The following symbols are used to denote each operator's hyperparameters and tensor shapes. $S_f$ denotes the precision format of the data type. $N$ represents the batch size. $H_o$, $W_o$, and $C_o$ are the output height, width, and channels. $H_i$, $W_i$, and $C_i$ are the input height, width, and channels. $H_f$ and $W_f$ are the filter height and width. $F_o$ represents the size of each output sample. Since the cuDNN workspace depends on a specific cuDNN convolution algorithm (denoted by $\mathcal{A}$, e.g., GEMM, FFT, and Winograd), the symbol of the workspace is represented as $E^{\mathcal{A}}_w$.

Table 3 lists all allocated tensors of the operators used in our motivating example and their sizes. Although the developer may specify only three operators in code, DL frameworks automatically insert auxiliary ones into the computation graph for backward propagation. For example, Conv2d_BP1 and Conv2d_BP2 are framework-crafted operators for calculating the output and weight gradients used to update the weights of the developer-specified Conv2d operator. The Linear (FullyConnected) operator can be implemented by matrix multiplication and addition. The RNN [?] operator needs to consider the weight sharing of stacked cells.

Currently, DNNMem provides memory cost functions for 70+ frequently-used operators. Although operators represent different mathematical operations, they may share the same or similar memory cost functions according to how they manipulate the input data. For example, operators such as ReLU and Sigmoid (i.e., activation functions) perform in-place updates by default. They do not require any additional GPU memory and thus share the same zero memory cost function. Another example is that the listed memory cost function of operator Conv2d is adapted for Conv1d and Conv3d with little change needed since their principles are similar. In this way, more operators could be supported in DNNMem. Table 4 shows some operators that share the memory cost functions.

Table 3: Allocated tensors and their sizes in the motivating example.

Operator Category | Operator | Tensor Category | Tensor Size
Convolution | Conv2d | Weight | $W^1_m = S_f \times (C_i(u) \times H_f(u) \times W_f(u) \times C_o(u) + C_o(u))$
Convolution | Conv2d | Forward Output | $O^1_f = S_f \times N \times C_o(u) \times H_o(u) \times W_o(u)$
Convolution | Conv2d | cuDNN Workspace | $E^1 = E^{\mathcal{A}(FP)}_w(u)$
Convolution | Conv2d_BP1 | Output Gradient | $O^7_g = Data\_X$
Convolution | Conv2d_BP1 | cuDNN Workspace | $E^6_1 = E^{\mathcal{A}(BP1)}_w(u)$
Convolution | Conv2d_BP2 | Weight Gradient | $W^6_g = W^1_m$
Convolution | Conv2d_BP2 | cuDNN Workspace | $E^6_2 = E^{\mathcal{A}(BP2)}_w(u)$
AveragePooling | AvgPool2d | Forward Output | $O^2_f = S_f \times N \times C_o(u) \times H_o(u) \times W_o(u)$
AveragePooling | AvgPool2d_BP | Output Gradient | $O^6_g = O^1_f$
FullyConnected | Linear | Weight | $W^3_m = S_f \times (C_i(u) \times H_i(u) \times W_i(u) \times F_o(u) + F_o(u))$
FullyConnected | Linear | Forward Output | $O^3_f = S_f \times N \times F_o(u)$
FullyConnected | Linear_BP1 | Output Gradient | $O^5_g = O^2_f$
FullyConnected | Linear_BP2 | Weight Gradient | $W^4_g = W^3_m$

Table 4: Operators that share the same memory cost functions.

Category | Example Operators
Activation | ReLU, LeakyReLU, Sigmoid, Tanh, ELU
Convolution | Conv1d, Conv2d, Conv3d
Pooling | MaxPooling, AvgPooling
Elementwise | Add, Mul, Mod, And
RNN | VanillaRNN, LSTM, GRU
Constant | DataInput, Constant
Misc | Assert, Ignore
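To illustrate what such a cost function can look like in code, the sketch below computes the weight and forward-output sizes of a Conv2d operator from the Table 3 formulas (our own illustrative helper, not DNNMem's API; the cuDNN workspace term is left out because it depends on the convolution algorithm chosen at runtime):

def conv2d_cost(S_f, N, C_i, C_o, H_i, W_i, H_f, W_f, stride=1, padding=0):
    # Output spatial size of a standard 2D convolution.
    H_o = (H_i + 2 * padding - H_f) // stride + 1
    W_o = (W_i + 2 * padding - W_f) // stride + 1
    weight = S_f * (C_i * H_f * W_f * C_o + C_o)      # W_m: filters plus biases
    forward_output = S_f * N * C_o * H_o * W_o        # O_f
    return {"Weight": weight, "Forward Output": forward_output}

# Example: the Conv2d from Figure 2 (3 -> 8 channels, 3x3 kernel),
# float32 elements (S_f = 4), batch size 128, 224x224 input.
sizes = conv2d_cost(S_f=4, N=128, C_i=3, C_o=8, H_i=224, W_i=224, H_f=3, W_f=3)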

4.3 Unreleased Tensors

Algorithm 1 demonstrates how to compute the unreleased tensors during graph traversal. We suppose that the computation graph, the traversal ordering, and the operator $u$ being visited are given. First, DNNMem identifies the visited operators on the computation graph and obtains their tensors from the memory cost functions. Next, DNNMem enumerates each such tensor to check whether it has the RELEASE_ON_EXIT policy set or is still live. If so, the tensor is added to the set of unreleased tensors. Finally, DNNMem adds all the tensors of $u$ too.

The liveness of a tensor is computed by verifying whether it will be used by any of the current and later operators (i.e., there exists an edge on the computation graph). Figure 7 highlights the dependencies between certain tensors and operators. Suppose that the operator being visited is Linear_BP1 and the immediate successor is AvgPool2d_BP. $W^3_m$ and $O^4_g$ are used by Linear_BP1, so they are live. When we proceed to visit AvgPool2d_BP, $O^4_g$ is then dead, assuming that Linear_BP1 and Linear_BP2 have been visited before. Although the weight tensor $W^3_m$ also looks dead, it is set to RELEASE_ON_EXIT and thus cannot be released, since DL frameworks will keep it in GPU memory for later weight updates.

Figure 7: Tensor liveness example.

Since DNNMem is extensible, users can add memory optimization strategies (e.g., SWAP [37] and gradient checkpointing [11]) as extensions to Algorithm 1 to simulate more application scenarios.

Algorithm 1: Compute the set of unreleased tensors.
Input: The computation graph cg, the traversal ordering tp_order, and the operator u being visited.
Output: A set of unreleased tensors ut.
 1  ut ← ∅
 2  prev_tensors ← ∅                        // Already allocated tensors.
 3  unvisited_ops ← ∅                       // u will also be included.
 4  foreach op ∈ cg do
 5      if IsVisited(op) then
            // MC() is the memory cost function.
 6          prev_tensors ← prev_tensors ∪ MC(op)
 7      else
 8          foreach t ∈ MC(op) do
 9              if t.alloc_policy == ALLOC_ON_START then
10                  prev_tensors ← prev_tensors ∪ { t }
11              end
12          end
13          unvisited_ops ← unvisited_ops ∪ { op }
14      end
15  end
16  foreach t ∈ prev_tensors do
17      if t.release_policy == RELEASE_ON_EXIT then
18          ut ← ut ∪ { t }                  // t cannot be released.
19          continue
20      end
21      foreach op ∈ unvisited_ops do
22          if IsDependent(op, t) then
23              ut ← ut ∪ { t }              // t is alive.
24              break
25          end
26      end
27  end
28  ut ← ut ∪ MC(u)                          // Add the tensors of u.
29  return ut

4.4 Memory Block Management

As mentioned in Section 3, tracking the memory blocks is indispensable to handle the impact factors of the DL framework runtime (e.g., the policies of memory pre-allocation and reallocation). DNNMem implements a linked-list based memory block manager and the best-fit with coalescing (BFC) algorithm.

When visiting an operator during the computation graph traversal, memory allocation is simulated for each of the operator's tensors. DNNMem searches the list for the first free block fitting the tensor size (with alignment). If such a block is larger than the requested size and the residual space exceeds a threshold, the block is split and the remainder is inserted into the list right after it; otherwise, the full block is returned. However, there may be no suitable free blocks at all. DNNMem then simulates applying for fresh memory from the GPU device by creating a new block data structure and appending it to the list tail. Memory pre-allocation is handled by correctly setting the size of such a new block. For TensorFlow, the size equals the total size of all existing memory blocks (exponential backoff).
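The following Python sketch captures the gist of this simulation (a simplified illustration rather than DNNMem's actual allocator; coalescing of adjacent free blocks and TensorFlow-style exponential pre-allocation are omitted):

class Block:
    def __init__(self, size, free=True):
        self.size = size
        self.free = free

class BlockManager:
    def __init__(self, split_threshold):
        self.blocks = []                   # the linked list, modeled as a Python list
        self.split_threshold = split_threshold

    def allocate(self, rounded_size):
        # Take the first free block that fits the (aligned) tensor size.
        for i, block in enumerate(self.blocks):
            if block.free and block.size >= rounded_size:
                residual = block.size - rounded_size
                if residual >= self.split_threshold:
                    # Split: shrink this block and insert the remainder right after it.
                    block.size = rounded_size
                    self.blocks.insert(i + 1, Block(residual))
                # Otherwise the full block is handed out as-is.
                block.free = False
                return block
        # No suitable free block: simulate applying for fresh GPU memory.
        block = Block(rounded_size, free=False)
        self.blocks.append(block)
        return block

    def release(self, block):
        block.free = True                  # cached for reuse, not returned to the GPU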

4.5 Resident Buffers

Resident buffers are essential GPU memory for the training and inference of DL models and are managed by the framework runtime. As shown in Table 2, DNNMem currently handles three categories: CUDA context, internal tensor fragmentation, and allocator reservation.

4.5.1 CUDA Context. The CUDA context $M_{ctx}$ is mainly determined by three factors: the GPU SKU, the framework type, and the framework version. When such factors are fixed, it is constant across different DL models. DNNMem profiles values of the CUDA context under various combinations in advance for later queries. The profiling first obtains the total GPU memory consumption using NVIDIA NVML (NVIDIA Management Library) [31], then calculates the memory consumed by the DL framework from runtime logs, framework APIs, or CUDA hooks, and finally computes the difference. Table 5 shows some sample values.

Table 5: Profiled values of the CUDA context (GB).

Framework | GPU | Model | CUDA Context
TensorFlow 1.13 | P40 | VGG16/ResNet50 | 0.38
TensorFlow 1.12 | P40 | VGG16/ResNet50 | 0.37
PyTorch 0.4.0 | K80 | VGG16/ResNet50 | 0.30
PyTorch 1.2.0 | P40 | VGG16/ResNet50 | 0.63
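A rough version of this profiling step, for the PyTorch case, could be written with the pynvml bindings and the framework's allocator statistics (a sketch only, assuming the job runs alone on GPU 0; as stated above, DNNMem may instead rely on runtime logs or CUDA hooks):

import pynvml
import torch

def profile_cuda_context_bytes(gpu_index=0):
    # Total memory in use on the device, as reported by NVML
    # (includes the CUDA context plus everything the framework allocated).
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    device_used = pynvml.nvmlDeviceGetMemoryInfo(handle).used
    pynvml.nvmlShutdown()
    # Memory held by PyTorch's caching allocator (memory_cached() on older versions).
    framework_reserved = torch.cuda.memory_reserved(gpu_index)
    # The difference approximates the CUDA context (plus any non-framework usage).
    return device_used - framework_reserved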

4.5.2 Internal Tensor Fragmentation. To take maximum advantage of the GPU hardware, the actual size of the GPU memory allocated for a tensor should meet some alignment requirements. For example, TensorFlow aligns with multiples of 256 bytes while PyTorch aligns with multiples of 512 bytes. Let AlignSize be a constant denoting the alignment boundary in bytes. The rounded size of a tensor $t$ is calculated as follows:

$B_{round}(t) = \text{AlignSize} \times \left\lceil \dfrac{\text{sizeof}(t)}{\text{AlignSize}} \right\rceil$

Let $B(t)$ be the size of the found memory block, and let the constant SplitThreshold be the threshold value that controls block splitting. Thus, the final allocated memory size of $t$ is:

$MT(t) = \begin{cases} B_{round}(t), & \text{if } B(t) - B_{round}(t) \geq \text{SplitThreshold} \\ B(t), & \text{otherwise} \end{cases}$

The difference between $MT(t)$ and $\text{sizeof}(t)$ is the size of $t$'s internal tensor fragmentation.
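In code, the rounding and the split decision amount to the following (an illustrative sketch; AlignSize and SplitThreshold are the constants defined above):

import math

def rounded_size(raw_bytes, align_size):
    # B_round(t): round the raw tensor size up to the alignment boundary.
    return align_size * math.ceil(raw_bytes / align_size)

def allocated_size(raw_bytes, block_size, align_size, split_threshold):
    # MT(t): the found block is split only when the residual space is large enough;
    # otherwise the whole block is charged to the tensor.
    b_round = rounded_size(raw_bytes, align_size)
    if block_size - b_round >= split_threshold:
        return b_round
    return block_size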

4.5.3 Allocator Reservation. Within the category of allocator reservation, released yet unreclaimed tensors, pre-allocated memory, and external tensor fragmentation can be calculated by querying the memory block manager. For the miscellaneous reservation, DNNMem currently handles one case from data-parallel training using Horovod. To improve the performance of ring-allreduce, Horovod implements a feature called Tensor Fusion [39] to batch small allreduce operations. A fusion buffer of size HOROVOD_FUSION_THRESHOLD (64 MB by default) is allocated for caching the data of selected tensors ready to be reduced. DNNMem treats this buffer size as a constant and provides a user configuration in the execution specification.

5 EVALUATION

5.1 Experimental Setup

We evaluate DNNMem under three popular DL frameworks: TensorFlow 1.12.0, PyTorch 1.2.0, and MXNet 1.5.0, with CUDA 9.0 and cuDNN 7.0.3. For each framework, we experiment with the 5 representative DL models shown in Table 6.

Table 6: The experimented DL models.

DL Model | Field | Dataset | # of Layers
VGG16 | CV | ImageNet [13] | 22
ResNet50 | CV | ImageNet | 50
InceptionV3 | CV | ImageNet | 48
LSTM | NLP | Synthetic | 2
BERT (base) | NLP | GLUE [47] | 12


Table 7: GPU memory consumption (GB) of different models with batch size 128. "Est." is the DNNMem estimation; "SI" is Shape Inference. The % values denote relative errors.

Model | Framework | Real | Est. | SI
VGG16 | TensorFlow | 17.4 | 16.9 (2.8%) | 7.2 (58.6%)
VGG16 | PyTorch | 16.2 | 14.6 (9.8%) | 7.2 (55.5%)
VGG16 | MXNet | 14.3 | 14.2 (0.6%) | 7.2 (49.6%)
ResNet50 | TensorFlow | 8.4 | 8.2 (2.3%) | 5.1 (39.2%)
ResNet50 | PyTorch | 13.2 | 10.9 (17.4%) | 10.1 (23.4%)
ResNet50 | MXNet | 11.8 | 11.5 (2.5%) | 10.1 (14.4%)
InceptionV3 | TensorFlow | 10.1 | 8.7 (13.8%) | 7.0 (30.6%)
InceptionV3 | PyTorch | 15.6 | 12.0 (23.0%) | 11.2 (28.2%)
InceptionV3 | MXNet | 12.9 | 11.6 (10.0%) | 11.2 (13.1%)
LSTM | TensorFlow | 4.1 | 4.3 (4.8%) | 2.3 (43.9%)
LSTM | PyTorch | 7.9 | 8.5 (7.5%) | 2.3 (70.8%)
LSTM | MXNet | 11.9 | 12.2 (2.5%) | 2.3 (80.6%)

To obtain the real GPU memory consumed by a DL model, we profiled the job using NVIDIA NVML [31]. CUDA Unified Memory [38] was disabled to avoid tensors being migrated to the main memory. We did not limit the memory usage of the cuDNN workspace.

To evaluate the effectiveness of DNNMem, we use the relative error between the real and estimated GPU memory consumption:

$\% \text{ error} = \dfrac{|\text{Est.} - \text{Real}|}{\text{Real}} \times 100$

Smaller errors indicate better estimation accuracy.

5.2 RQ1: How effective is DNNMem in estimating GPU memory consumption of DL models?

This RQ evaluates the overall effectiveness of DNNMem in estimating GPU memory consumption. Table 7 lists the experimental results for the VGG16, ResNet50, and InceptionV3 models (with the input image data shape [Channel:3, Height:224, Width:224] and batch size 128) and the LSTM model (with hidden and input sizes of 5120, 2 layers, and batch size 128). The results show that DNNMem is able to make satisfactory estimations. For TensorFlow, the relative errors range from 2.3% to 13.8%, with an average of 5.9%. For PyTorch, the relative errors range from 7.5% to 23.0%, with an average of 14.4%. For MXNet, the relative errors range from 0.6% to 10.0%, with an average of 3.9%.

We also compare DNNMem with Shape Inference [23, 33], a static analysis technique that infers the tensor shapes of operator inputs, outputs, and weights. Currently, the three DL frameworks do not provide stand-alone shape inference tools; we have therefore implemented our own within DNNMem using the framework APIs, for establishing the operator memory cost functions. We query these cost functions for the tensors of the initial input, weights, intermediate outputs, and final output under forward propagation, and then add them up as the GPU memory consumption estimated by Shape Inference. On average, the relative errors of Shape Inference reach 43.0% (TensorFlow), 44.4% (PyTorch), and 39.4% (MXNet), which are much higher than those of DNNMem. The reason is that DNNMem considers hidden factors such as the tensor allocation policy and the cuDNN workspace.
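In other words, the Shape Inference baseline reduces to a single forward-pass sum over the graph, roughly as sketched below (hypothetical helper names; it deliberately ignores gradients, workspaces, fragmentation, and allocator reservation, which explains its larger errors):

def shape_inference_estimate(graph, MT):
    # Add up the initial inputs, weights, and forward outputs of every operator.
    total = sum(MT(t) for t in graph.initial_inputs)
    for op in graph.operators:
        total += sum(MT(t) for t in op.weights)
        total += sum(MT(t) for t in op.forward_outputs)
    return total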

To further evaluate DNNMem, for each framework, we experiment with the three DL CV models in Table 6 under 100 different input shapes (from [Channel: 3, Height: 224, Width: 224] to [Channel: 3, Height: 300, Width: 300]) and batch sizes (from 2 to 256). We then compute the mean relative errors (MRE) over all 100 experiments for each framework. Figure 8 summarizes the results. The mean relative errors achieved by DNNMem are 16.0% for TensorFlow, 15.4% for PyTorch, and 16.3% for MXNet, while the mean relative errors achieved by Shape Inference (SI) range from 35.9% to 49.1%. The results show the robustness and effectiveness of DNNMem.

Table 8: GPU memory consumption (GB) of the BERT (base, uncased) model with different batch sizes (BS) and sequence lengths (SL). "SI" is Shape Inference. The % values denote relative errors.

Config | Framework | Real | Est. | SI
BS32, SL32 | TensorFlow | 4.2 | 3.4 (19.0%) | 1.8 (57.1%)
BS32, SL32 | PyTorch | 3.5 | 2.4 (31.4%) | 1.8 (48.5%)
BS32, SL32 | MXNet | 3.7 | 2.9 (21.6%) | 1.8 (51.3%)
BS32, SL64 | TensorFlow | 8.2 | 5.4 (34.1%) | 3.1 (62.1%)
BS32, SL64 | PyTorch | 4.7 | 3.8 (19.1%) | 3.1 (34.0%)
BS32, SL64 | MXNet | 4.9 | 4.3 (12.2%) | 3.1 (36.7%)
BS128, SL64 | TensorFlow | 16.2 | 15.4 (4.9%) | 11.2 (30.8%)
BS128, SL64 | PyTorch | 12.7 | 12.3 (3.1%) | 11.2 (11.8%)
BS128, SL64 | MXNet | 11.9 | 13.1 (10.0%) | 11.2 (5.8%)
BS64, SL128 | TensorFlow | 16.2 | 15.4 (4.9%) | 11.2 (30.8%)
BS64, SL128 | PyTorch | 12.6 | 12.3 (2.3%) | 11.2 (11.1%)
BS64, SL128 | MXNet | 13.1 | 13.1 (0.0%) | 11.2 (14.5%)
BS100, SL128 | TensorFlow | 21.2 | 22.7 (7.0%) | 17.3 (18.3%)
BS100, SL128 | PyTorch | 18.4 | 18.6 (1.0%) | 17.3 (5.9%)
BS100, SL128 | MXNet | 20.5 | 19.6 (4.3%) | 17.3 (15.6%)

Figure 8: The effectiveness of DNNMem under different input shapes and batch sizes. The Y-axis shows the mean relative errors (%) of Shape Inference (SI: 49.1, 35.9, and 39.3) and DNNMem (Est.: 16.0, 15.4, and 16.3) for TensorFlow, PyTorch, and MXNet, respectively.

To evaluate the effectiveness of DNNMem in predicting GPU OOM (out-of-memory) cases, we also increase the batch size to 512 and measure the memory consumption of the three CV models under the three frameworks (9 experiments in total). Among these 9 experiments, 8 failed due to OOM. That is, the memory consumption is larger than the available memory of the NVIDIA Tesla P40 (22.38 GB), which is the GPU used in this experiment. For all the OOM experiments, the memory consumption estimates made by DNNMem range from 28.7 to 46.0 GB, which are all above the available GPU memory (22.38 GB). For the remaining experiment (TensorFlow ResNet50) that did not have the OOM failure, the estimation error achieved by DNNMem is only 3.9%. The results show that DNNMem can successfully predict OOM cases, confirming the effectiveness of DNNMem.

Table 8 shows the experimental results of the BERT [14] (base) model over the GLUE (General Language Understanding Evaluation) benchmark [47], with various batch sizes and sequence lengths. DNNMem achieves average errors of 13.9% (TensorFlow), 11.3% (PyTorch), and 9.6% (MXNet), and Shape Inference achieves average errors of 39.8% (TensorFlow), 22.2% (PyTorch), and 24.7% (MXNet). The results show that DNNMem is still effective under different hyperparameters.

An advantage of the analytic approach is its interpretability: DNNMem can present memory usage details, which will greatly help developers tune model configurations and framework runtime parameters. Table 9 demonstrates how GPU memory is consumed by different categories of tensors and the TensorFlow runtime when training the VGG16 model. The real memory consumption of each part was obtained from TensorFlow runtime logs. "Live Tensors" refer to the Weight/In/Out/Ephemeral tensors in Table 2. DNNMem achieves low average errors of 5.88% (Total Consumption), 17.33% (Live Tensors), 42.42% (Allocator Reservation), and 0.0% (CUDA Context). Because the internal fragmentation has a relatively small value, its estimation can show a much higher average error (80.04%). Nevertheless, it contributes only a very small portion of the total GPU memory consumption.

Table 9: Categories of GPU memory consumption (GB) of the TensorFlow VGG16 model. The % values are relative errors.

Category | BS64 Real | BS64 Est. | BS128 Real | BS128 Est. | BS256 Real | BS256 Est.
Live Tensors | 4.90 | 3.52 | 8.55 | 7.24 | 15.84 | 14.49
Internal Fragmentation | 0.14 | 0.02 | 0.27 | 0.04 | 0.13 | 0.04
Allocator Reservation | 3.08 | 5.08 | 8.30 | 9.34 | 5.16 | 2.59
CUDA Context | 0.37 | 0.37 | 0.37 | 0.37 | 0.37 | 0.37
Total | 8.49 | 8.99 (5.88%) | 17.49 | 16.99 (2.85%) | 21.50 | 17.49 (18.65%)

As for time performance, the estimation time of DNNMem ranges from 0.6 to 0.7 seconds for the above experiments, an order-of-magnitude speedup compared with estimation by real execution.

5.3 RQ2: How accurate are the operator memory cost functions of DNNMem?

Operators' memory cost functions play a critical role in DNNMem. This RQ evaluates their accuracy. Four representative operators, Conv2d, AvgPool2d, Dropout, and BatchNorm (batch normalization), were chosen for this experiment. We crafted a minimal DL model for each of them to reduce distractions. For example, the Conv2d model only adds an additional Linear operator. To obtain the real memory usage, we analyzed the runtime logs of TensorFlow and MXNet. For PyTorch, we added profiling code right before and after operator construction/execution inside the framework. The shape of the input data is [BatchSize:128, Channel:3, Height:224, Width:224]. Conv2d has a filter_count of 2 and a kernel_size of 3. For AvgPool2d, its kernel_size and stride are both 2.

Figure 9 shows that the estimation errors of the four operators are all less than 8%, indicating that our memory cost functions are accurate. Note that the values of TensorFlow BatchNorm are marked as 0 because in-place execution is enabled by default.
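As a back-of-the-envelope check on the Conv2d case, the forward output under this configuration can be worked out directly from the Table 3 formula (the measured value in Figure 9 additionally contains the tiny weight tensor and any alignment padding):

# Conv2d micro-benchmark: input [128, 3, 224, 224], filter_count 2, kernel_size 3,
# stride 1, no padding, float32 (4 bytes per element).
N, C_o = 128, 2
H_o = W_o = 224 - 3 + 1                          # 222 x 222 output
forward_output_bytes = 4 * N * C_o * H_o * W_o   # O_f from Table 3
weight_bytes = 4 * (3 * 3 * 3 * 2 + 2)           # W_m: 2 filters of 3x3x3, plus biases
print(forward_output_bytes / 1e6, weight_bytes)  # roughly 50.5 MB and 896 bytes

This rough figure is consistent with the Conv2d measurements shown in Figure 9.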

Figure 9: GPU memory consumption (MB) of DL operators (Conv2d, AvgPool2d, Dropout, and BatchNorm): real vs. estimated values under TensorFlow, PyTorch, and MXNet.

5.4 RQ3: How effective is DNNMem in data-parallel training?

Nowadays, in industrial practice, many DL training jobs adopt data parallelism, which employs multiple GPU devices (in a single machine or across distributed nodes) to increase the number of input data items processed simultaneously. This RQ evaluates the effectiveness of DNNMem in such common scenarios. We experiment with the ResNet50 model with a batch size of 64 using Horovod (a popular data-parallel training framework supporting automatic parallelization [39]). The fusion buffer has a default size of 64 MB. Note that here the TensorFlow model is provided by Horovod using Keras APIs, which is different from that in Section 5.2. We ran the multi-device experiments on a single node and the distributed experiments on a 3-node cluster. Each node is equipped with 4 NVIDIA K80 GPUs with 12 GB of memory each. The reported real GPU memory consumption is the arithmetic mean over all training instances.

Figure 10 shows that DNNMem achieves average errors of 11.8% (TensorFlow), 13.85% (PyTorch), and 8.9% (MXNet), indicating the effectiveness of DNNMem in data-parallel training.

Figure 10: The effectiveness of DNNMem in data-parallel training (ResNet50). Bars show the real and estimated GPU memory consumption (GB) for multi-device and distributed training under TensorFlow, PyTorch, and MXNet.

6 RELATED WORK

Quality attributes (e.g., reliability, cost, performance, and memory consumption) are non-functional properties of software, which are vital for the success of a real-world software-intensive system. Over the years, many estimation models have been proposed to predict these attributes. Examples include defect prediction [25? ], effort and cost estimation [27, 43], and performance prediction [15, 40].

There are also many program analysis techniques [2, 6, 7, 12, 20, 45, 46] for memory footprint analysis and estimation. For example, Verbauwhede et al. [46] propose to estimate the memory of DSP applications by modeling array dependencies and execution sequences as an integer linear programming problem solved by an ILP solver. Albert et al. [2] present parametric inference based on the notion of object lifetime to infer the memory requirements of Java-like programs. Heo et al. [17] propose a resource-aware flow-sensitive analysis via online abstraction coarsening. However, as discussed in this paper, these techniques cannot be directly applied to deep learning programs.

Frameworks’ built-in Shape Inference [22, 23, 30, 33] and some DL performance analysis work [8] estimate GPU memory usage by summing the weight, input, and output tensors on the computation graph under forward propagation. However, these account for only a subset of the whole memory consumption. Shape Inference is incapable of analyzing the remaining yet complex memory usage by tensors under backward propagation and by the framework runtime (e.g., memory fragmentation/reallocation/reservation, cuDNN workspace), which can noticeably affect the final GPU memory consumption. DNNMem adopts a novel, comprehensive, and unified analytic approach that systematically addresses these challenges. We have compared DNNMem with Shape Inference in Section 5, and the results indicate that DNNMem is more effective and robust.
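To make the contrast concrete, the following minimal sketch performs the forward-only tensor summation that Shape Inference-style approaches rely on. It assumes an ONNX graph, defaults unknown dtypes to float32, and uses an illustrative function name and model path; it deliberately omits the backward-propagation tensors and framework runtime overheads discussed above:

```python
import onnx
from onnx import shape_inference, TensorProto

BYTES = {TensorProto.FLOAT: 4, TensorProto.FLOAT16: 2, TensorProto.INT64: 8}

def forward_tensor_bytes(path, assumed_batch=1):
    model = shape_inference.infer_shapes(onnx.load(path))
    total = 0
    # Weight tensors stored as graph initializers.
    for init in model.graph.initializer:
        size = BYTES.get(init.data_type, 4)
        for d in init.dims:
            size *= d
        total += size
    # Graph inputs/outputs plus every intermediate tensor with an inferred shape.
    for vi in (list(model.graph.input) + list(model.graph.output)
               + list(model.graph.value_info)):
        size = BYTES.get(vi.type.tensor_type.elem_type, 4)
        for dim in vi.type.tensor_type.shape.dim:
            # Symbolic dimensions (e.g., a dynamic batch axis) fall back to an assumption.
            size *= dim.dim_value if dim.dim_value > 0 else assumed_batch
        total += size
    return total

# e.g., forward_tensor_bytes("resnet50.onnx", assumed_batch=64) / 2**20  # MB
```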

Real execution estimation [28] suffers from being limited by the memory capacity of the testing GPUs, high execution cost, and environmental dependency, which makes it especially unsuitable for enterprise platforms. DL compilers such as TVM [10] focus on the inference phase, cross-platform deployment, and loop-level cost models. However, these techniques are beyond the scope of this paper. Researchers have also observed the need for memory cost modeling for DNN memory optimization and planning by analyzing the computation graph [24, 37, 48]. Unlike these works, DNNMem focuses on memory estimation for DL models.

7 CONCLUSION

In this paper, we have presented DNNMem, an accurate estimation tool for GPU memory consumption of deep learning models. This work is motivated by the many out-of-memory failures of DL jobs in Microsoft. DNNMem adopts an analytic approach that systematically explores many memory consumption-related factors. Our extensive experiments show that DNNMem can make satisfactory estimations of GPU memory consumption. DNNMem is also effective and robust to the choices of neural architectures, hyperparameters, and frameworks.

While we use models developed under three popular deep learning frameworks to evaluate the proposed approach, DNNMem is generalizable. We can define more memory cost functions for standard/custom operators and adapt the analytic approach to different devices and frameworks. In the future, we will experiment with extensions of DNNMem to demonstrate its generalizability.

REFERENCES

[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (Savannah, GA, USA) (OSDI '16). USENIX Association, USA, 265–283.

[2] Elvira Albert, Samir Genaim, and Miguel Gómez-Zamalloa. 2010. Parametric Inference of Memory Requirements for Garbage Collected Languages. In Proceedings of the 2010 International Symposium on Memory Management (Toronto, Ontario, Canada) (ISMM '10). ACM, New York, NY, USA, 121–130. http://doi.acm.org/10.1145/1806651.1806671

[3] S. Amari. 1993. Backpropagation and stochastic gradient descent method. Neurocomputing 5, 4 (1993), 185–196.

[4] Amazon. 2019. Amazon SageMaker. https://aws.amazon.com/sagemaker.

[5] Microsoft Azure. 2019. Microsoft Azure Machine Learning. https://azure.microsoft.com/en-us/services/machine-learning-service.

[6] Antoine Blin, Cédric Courtaud, Julien Sopena, Julia Lawall, and Gilles Muller. 2016. Understanding the Memory Consumption of the MiBench Embedded Benchmark. In Networked Systems, Parosh Aziz Abdulla and Carole Delporte-Gallet (Eds.). Springer International Publishing, Cham, 71–86.

[7] Víctor Braberman, Federico Fernández, Diego Garbervetsky, and Sergio Yovine. 2008. Parametric Prediction of Heap Memory Requirements. In Proceedings of the 7th International Symposium on Memory Management (Tucson, AZ, USA) (ISMM '08). ACM, New York, NY, USA, 141–150.

[8] Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. 2017. An Analysis of Deep Neural Network Models for Practical Applications. ArXiv abs/1605.07678 (2017).

[9] Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. 2015. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. CoRR abs/1512.01274 (2015). arXiv:1512.01274 http://arxiv.org/abs/1512.01274

[10] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Haichen Shen, Eddie Q. Yan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: End-to-End Optimization Stack for Deep Learning. ArXiv abs/1802.04799 (2018).

[11] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training Deep Nets with Sublinear Memory Cost. CoRR abs/1604.06174 (2016). arXiv:1604.06174 http://arxiv.org/abs/1604.06174

[12] Duc-Hiep Chu, Joxan Jaffar, and Rasool Maghareh. 2016. Symbolic Execution for Memory Consumption Analysis. In Proceedings of the 17th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, Tools, and Theory for Embedded Systems (Santa Barbara, CA, USA) (LCTES 2016). ACM, New York, NY, USA, 62–71.

[13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR 2009.

[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.

[15] Huong Ha and Hongyu Zhang. 2019. DeepPerf: Performance Prediction for Configurable Software with Deep Sparse Neural Network. In Proceedings of the 41st International Conference on Software Engineering (Montreal, Quebec, Canada) (ICSE '19). IEEE Press, Piscataway, NJ, USA, 1095–1106.

[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recognition. CoRR abs/1512.03385 (2015). arXiv:1512.03385 http://arxiv.org/abs/1512.03385

[17] Kihong Heo, Hakjoo Oh, and Hongseok Yang. 2019. Resource-Aware Program Analysis Via Online Abstraction Coarsening. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE 2019). 94–104.

[18] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9 (12 1997), 1735–1780.

[19] Md Johirul Islam, Giang Nguyen, Rangeet Pan, and Hridesh Rajan. 2019. A Comprehensive Study on Deep Learning Bug Characteristics. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Tallinn, Estonia) (ESEC/FSE 2019). Association for Computing Machinery, New York, NY, USA, 510–520.

[20] Timotej Kapus and Cristian Cadar. 2019. A Segmented Memory Model for Symbolic Execution. In European Software Engineering Conference / ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE 2019) (Tallinn, Estonia). 774–784.

[21] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. (2014). http://arxiv.org/abs/1412.6980 Published as a conference paper at the 3rd International Conference for Learning Representations, San Diego, 2015.

[22] Rasmus Munk Larsen and Tatiana Shpeisman. 2019. TensorFlow Graph Optimizations.

[23] Malmaud. 2020. TensorFlow Shape Infer. https://malmaud.github.io/tfdocs/shape_inference.

[24] Chen Meng, Minmin Sun, Jun Yang, Minghui Qiu, and Yang Gu. 2017. Training Deeper Models by GPU Memory Optimization on TensorFlow.

[25] Tim Menzies, Zach Milton, Burak Turhan, Bojan Cukic, Yue Jiang, and Ayşe Bener. 2010. Defect prediction from static code features: Current results, limitations, new approaches. Automated Software Engineering 17, 4 (2010), 375–407.

[26] miglopst. 2018. Memory management for tensorflow. https://github.com/miglopst/cs263_spring2018/wiki/Memory-management-for-tensorflow

[27] K. Molokken and M. Jorgensen. 2003. A review of software surveys on software effort estimation. In 2003 International Symposium on Empirical Software Engineering (ISESE 2003), Proceedings. 223–230.

[28] MXNet. 2020. MXNet symbol simple bind. https://beta.mxnet.io/api/symbol/_autogen/mxnet.symbol.Symbol.simple_bind.html.

[29] Apache MXNet. 2019. The topological sorting algorithm for computation graphs in Apache MXNet. https://github.com/apache/incubator-mxnet/blob/4149f8b8752989fce5d80cc13f92d99774988b4f/src/executor/simple_partition_pass.h#L67

[30] mxnet memonger. 2020. TensorFlow Shape Infer. https://github.com/dmlc/mxnet-memonger.

[31] NVIDIA. 2019. NVML API Reference Guide. https://docs.nvidia.com/deploy/nvml-api/index.html. (2019).

[32] Nvidia. 2020. cudnnConvolutionFwdAlgo. https://docs.nvidia.com/deeplearning/sdk/cudnn-api/index.html#cudnnConvolutionFwdAlgo_t.

[33] ONNX. 2020. ONNX Shape Inference. https://github.com/onnx/onnx/blob/master/docs/ShapeInference.md.

[34] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32. Curran Associates, Inc., 8024–8035.

[35] PyTorch. 2019. PyTorch: Control Flow + Weight Sharing. https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html.

[36] PyTorch. 2019. The topological sorting algorithm for computation graphs in PyTorch. https://github.com/pytorch/pytorch/blob/v1.2.0/caffe2/core/nomnigraph/include/nomnigraph/Graph/TopoSort.h#L26

[37] Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W. Keckler. 2016. vDNN: Virtualized deep neural networks for scalable, memory-efficient neural network design. 1–13.

[38] Nikolay Sakharnykh. 2018. Everything you need to know about unified memory. NVIDIA GTC (2018).

[39] Alexander Sergeev and Mike Del Balso. 2018. Horovod: fast and easy distributed deep learning in TensorFlow. CoRR abs/1802.05799 (2018). arXiv:1802.05799 http://arxiv.org/abs/1802.05799

[40] Norbert Siegmund, Alexander Grebhahn, Sven Apel, and Christian Kästner. 2015. Performance-influence Models for Highly Configurable Systems. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering (Bergamo, Italy) (ESEC/FSE 2015). ACM, New York, NY, USA, 284–294.

[41] Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. http://arxiv.org/abs/1409.1556

[42] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Rethinking the Inception Architecture for Computer Vision. CoRR abs/1512.00567 (2015). arXiv:1512.00567 http://arxiv.org/abs/1512.00567

[43] Hee Beng Kuan Tan, Yuan Zhao, and Hongyu Zhang. 2009. Conceptual Data Model-Based Software Size Estimation for Information Systems. ACM Trans. Softw. Eng. Methodol. 19, 2, Article 4 (Oct. 2009), 37 pages.

[44] T. Tieleman and G. Hinton. 2012. rmsprop: Divide the Gradient by a Running Average of Its Recent Magnitude. COURSERA: Neural Networks for Machine Learning 4, 26–31. (2012).

[45] Leena Unnikrishnan, Scott D. Stoller, and Yanhong A. Liu. 2000. Automatic Accurate Stack Space and Heap Space Analysis for High-Level Languages. Technical Report. Indiana University.

[46] Ingrid M. Verbauwhede, Chris J. Scheers, and Jan M. Rabaey. 1994. Memory Estimation for High Level Synthesis. In Proceedings of the 31st Annual Design Automation Conference (San Diego, California, USA) (DAC '94). ACM, New York, NY, USA, 143–148. http://doi.acm.org/10.1145/196244.196313

[47] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of ICLR.

[48] Linnan Wang, Jinmian Ye, Yiyang Zhao, Wei Wu, Ang Li, Shuaiwen Leon Song, Zenglin Xu, and Tim Kraska. 2018. Superneurons: Dynamic GPU Memory Management for Training Deep Neural Networks. In Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '18). 41–53.

[49] Ru Zhang, Wencong Xiao, Hongyu Zhang, Yu Liu, Haoxiang Lin, and Mao Yang. 2020. An Empirical Study on Program Failures of Deep Learning Jobs. In Proceedings of the 42nd International Conference on Software Engineering (ICSE '20), to appear.

