NVIDIA Kepler Architecture
Paul Bissonnette
Rizwan Mohiuddin
Ajith Herga
Compute Unified Device Architecture (CUDA)
• Hybrid CPU/GPU code
• Low-latency code runs on the CPU
– Result is immediately available
• High-latency, high-throughput code runs on the GPU (see the sketch below)
– Result comes back over the bus
– GPU has many more cores than the CPU
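A minimal sketch of that split, assuming a simple element-wise kernel (all names and sizes here are illustrative, not from the slides):

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// High-throughput part: one GPU thread per array element.
__global__ void scaleKernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) host[i] = 1.0f;   // low-latency setup stays on the CPU

    float *dev;
    cudaMalloc((void **)&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);   // input over the bus
    scaleKernel<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                // bulk work on the GPU
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);   // result back over the bus

    printf("host[0] = %f\n", host[0]);
    cudaFree(dev);
    free(host);
    return 0;
}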
CPU/GPU Code
[Diagram: a CUDA program is split into GPU routines and CPU routines; NVCC compiles the GPU routines into a GPU object, GCC compiles the CPU routines into a CPU object, and the linker combines both objects into a single CUDA binary.]
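A sketch of how that pipeline might be driven on the command line (file and target names are hypothetical; in practice nvcc can also invoke the host compiler itself):

// kernels.cu -- GPU routines, compiled by NVCC into a GPU object:
//     nvcc -c kernels.cu -o kernels.o
// main.cpp  -- CPU routines, compiled by the host compiler (e.g. GCC):
//     g++ -c main.cpp -o main.o
// Linker step, producing the single CUDA binary:
//     nvcc kernels.o main.o -o app
__global__ void increment(float *x) { x[threadIdx.x] += 1.0f; }   // example GPU routine in kernels.cu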
Execution Model (Overview)
[Diagram: execution alternates between CPU and GPU. The CPU issues RPC-style kernel launches to the GPU; the GPU returns intermediate results and, eventually, the final result to the CPU.]
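In code, one round of that exchange is an asynchronous kernel launch followed by synchronization (a sketch; step() is an illustrative kernel):

#include <cuda_runtime.h>

__global__ void step(float *data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] += 1.0f;
}

// Assumes devData holds at least 128 * 256 floats.
void oneRound(float *devData) {
    step<<<128, 256>>>(devData);   // the "RPC": the launch returns to the CPU immediately
    // ... the CPU can run other code here while the GPU works ...
    cudaDeviceSynchronize();       // block until the GPU's result is ready
}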
Execution Model (GPU)
[Diagram: threads are grouped into thread blocks; each thread block runs on a streaming multiprocessor; all the thread blocks together form a thread grid, which runs across the graphics card.]
Execution Model (GPU)
• Each procedure runs as a “kernel”
• An instance of a kernel runs on a thread block
– A thread block executes on a single streaming multiprocessor
• All instances of a particular kernel form a thread grid
– A thread grid executes on a single graphics card, across several streaming multiprocessors
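In CUDA C, a kernel recovers its position in the grid from built-in indices, and the launch configuration spells out the block and grid sizes (sizes below are illustrative):

__global__ void addOne(int *a, int n) {
    // Global index of this thread within the whole thread grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1;
}

// Launch a thread grid of 4096 thread blocks, 256 threads each; each block is
// scheduled onto one streaming multiprocessor:
//     addOne<<<4096, 256>>>(devA, n);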
Thread Cooperation
• Multiple levels of data sharing (see the sketch below)
• Thread blocks are similar to MPI groups
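A sketch of block-level cooperation, assuming 256 threads per block (a power of two): the threads share on-chip memory and synchronize at barriers, much like ranks in an MPI group:

__global__ void blockSum(const float *in, float *out) {
    __shared__ float buf[256];                 // visible to every thread in this block
    int tid = threadIdx.x;
    buf[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();                           // block-wide barrier

    // Tree reduction inside the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0];    // one result per thread block
}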
GPU Execution of Kernels
• In Kepler, threads can spawn new thread blocks/grids (sketch below)
• Less time spent on the CPU
• More natural recursion
• Parent completion depends on its child grids
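A minimal sketch of a parent kernel spawning a child grid. This requires compute capability 3.5+ and compilation with -rdc=true; the device-side cudaDeviceSynchronize shown here is the Kepler-era way to wait on child grids and is deprecated in recent CUDA releases:

__global__ void childKernel(int *data) {
    data[threadIdx.x] *= 2;
}

__global__ void parentKernel(int *data) {
    if (threadIdx.x == 0) {
        childKernel<<<1, 32>>>(data);   // a GPU thread spawns a new grid, no CPU round trip
        cudaDeviceSynchronize();        // parent's completion depends on its child grid
    }
}
// Host side: parentKernel<<<1, 32>>>(devData);
// Build sketch: nvcc -arch=sm_35 -rdc=true file.cu -lcudadevrt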
CUDA Languages
• CUDA C/C++ and CUDA Fortran
• Scientific computing
• Highly parallel applications
• NVIDIA-specific (unlike OpenCL)
• Specialized for specific tasks
– Highly optimized single-precision floating point
– Specialized data-sharing instructions within thread blocks (sketched below)
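One example of those data-sharing instructions is Kepler's warp shuffle, which moves values between threads of a warp without touching shared memory (shown with the modern __shfl_down_sync spelling; Kepler-era code used __shfl_down):

__device__ float warpSum(float v) {
    // Each step adds in the value held by the lane 'offset' positions higher.
    for (int offset = 16; offset > 0; offset /= 2)
        v += __shfl_down_sync(0xffffffff, v, offset);
    return v;   // lane 0 ends up with the sum over all 32 lanes of the warp
}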
Hyper-Q
• Without Hyper-Q:
– Only one hardware work queue is available, so the GPU can receive work from only one source at a time.
– Difficult for a single CPU core to keep the GPU busy.
• With Hyper-Q:
– Allows connections from multiple CUDA streams, Message Passing Interface (MPI) processes, or multiple threads of the same process.
– 32 concurrent work queues; the GPU can receive work from 32 processes/cores at the same time (see the stream sketch below).
– Up to 3x performance increase over Fermi.
• Removes the problem of false dependencies between streams (on Fermi, all streams shared a single hardware queue).
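A sketch of the pattern Hyper-Q accelerates: independent work pushed onto separate CUDA streams, which Kepler can dispatch through separate hardware queues (sizes and names are illustrative):

#include <cuda_runtime.h>

__global__ void work(float *x) { x[blockIdx.x * blockDim.x + threadIdx.x] += 1.0f; }

void launchConcurrent(float *bufs[8]) {
    cudaStream_t streams[8];
    for (int i = 0; i < 8; ++i) {
        cudaStreamCreate(&streams[i]);
        work<<<64, 256, 0, streams[i]>>>(bufs[i]);   // each stream feeds its own work queue
    }
    cudaDeviceSynchronize();                         // wait for all streams to finish
    for (int i = 0; i < 8; ++i) cudaStreamDestroy(streams[i]);
}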
Dynamic Parallelism
• Without Dynamic Parallelism:
– Data travels back and forth between the CPU and GPU many times.
– This is because the GPU is unable to create more work for itself based on the data.
• With Dynamic Parallelism:
– The GPU can generate work for itself based on intermediate results, without CPU involvement.
– Permits dynamic run-time decisions.
– Leaves the CPU free to do other work, conserving power.
• Application example: adaptive grid simulation
• Application example: quicksort computation (sketched below)
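A heavily simplified sketch of the quicksort pattern under dynamic parallelism; this is illustrative only, not the code from the GPU Science example cited at the end:

// Simple Lomuto partition (illustrative).
__device__ int partitionStep(int *d, int lo, int hi) {
    int pivot = d[hi], i = lo;
    for (int j = lo; j < hi; ++j)
        if (d[j] < pivot) { int t = d[i]; d[i] = d[j]; d[j] = t; ++i; }
    int t = d[i]; d[i] = d[hi]; d[hi] = t;
    return i;
}

__global__ void quicksort(int *d, int lo, int hi) {
    if (lo >= hi) return;
    int p = partitionStep(d, lo, hi);          // intermediate result computed on the GPU
    cudaStream_t s1, s2;                       // launch both halves as child grids,
    cudaStreamCreateWithFlags(&s1, cudaStreamNonBlocking);
    cudaStreamCreateWithFlags(&s2, cudaStreamNonBlocking);
    quicksort<<<1, 1, 0, s1>>>(d, lo, p - 1);  // ...without ever returning to the CPU
    quicksort<<<1, 1, 0, s2>>>(d, p + 1, hi);
}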
Streams Spawning Streams
[Diagram: quicksort without dynamic parallelism. The CPU launches quicksort, then a CPU-GPU stack exchange begins: a loop runs on the CPU based on intermediate results, checks whether the GPU has returned any more intermediate results, and for each new partition the CPU spawns a stream to be computed on the GPU.]
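For contrast, a sketch of the CPU-driven structure the diagram shows; partitionKernel is an assumed kernel (declared but not defined here) that writes its split point to devSplit:

#include <vector>
#include <utility>
#include <cuda_runtime.h>

__global__ void partitionKernel(int *data, int lo, int hi, int *split);   // assumed, not shown

void cpuDrivenQuicksort(int *devData, int n, int *devSplit) {
    std::vector<std::pair<int, int> > stack;       // the partition stack lives on the CPU
    stack.push_back(std::make_pair(0, n - 1));
    int split;
    while (!stack.empty()) {                       // looping based on intermediate results
        std::pair<int, int> range = stack.back(); stack.pop_back();
        if (range.first >= range.second) continue;
        partitionKernel<<<1, 256>>>(devData, range.first, range.second, devSplit);
        cudaMemcpy(&split, devSplit, sizeof(int), cudaMemcpyDeviceToHost);  // result over the bus
        stack.push_back(std::make_pair(range.first, split - 1));  // new work discovered on the CPU
        stack.push_back(std::make_pair(split + 1, range.second));
    }
}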
Memory Organization
[Diagram: memory hierarchy; labels in the original include “Core” and “Stream Processor”.]
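The diagram itself is lost, but CUDA's standard memory spaces can be sketched directly in code (the qualifiers below are standard CUDA, not specific to this slide):

__constant__ float coeff[16];           // constant memory: cached, read-only inside kernels
__device__   float table[1024];         // global (device) memory: visible to all threads

__global__ void memorySpaces(const float *in, float *out) {
    __shared__ float tile[256];         // shared memory: on-chip, private to one thread block
    float r = in[threadIdx.x];          // 'r' lives in a register, private to one thread
    tile[threadIdx.x] = r * coeff[0];
    __syncthreads();
    out[threadIdx.x] = tile[threadIdx.x] + table[threadIdx.x];
}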
Kepler Architecture
Scheduling
• Warp scheduler
• Thread-block-level / grid-level scheduling
References
• NVIDIA Whitepapers
– http://www.geforce.com/Active/en_US/en_US/pdf/GeForce-GTX-680-Whitepaper-FINAL.pdf
– http://developer.download.nvidia.com/assets/cuda/files/CUDADownloads/TechBrief_Dynamic_Parallelism_in_CUDA.pdf
• NVIDIA Keynote Presentation
– http://www.youtube.com/watch?v=TxtZwW2Lf-w
• Georgia Tech Presentation
– http://www.cc.gatech.edu/~vetter/keeneland/tutorial-2011-04-14/02-cuda-overview.pdf
• AnandTech
– http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last/4
• GPU Science
– http://gpuscience.com/code-examples/tesla-k20-gpu-quicksort-with-dynamic-parallelism