Adding PDC within a Six-Course Subset of the CS Major
Apan Qasem, Texas State University
Transcript
Page 1

Adding PDC within a Six-Course Subset of the CS Major

Apan Qasem, Texas State University

Page 2


Context

• Texas State University
  • Current enrollment ~35,000: 30,000 undergraduate, 5,000 graduate
  • Student body reflects area demographics
    • 28% of students are of Hispanic origin
    • HSI designation since 2011
• CS Department
  • Faculty: 21 TTF + 6 lecturers, plus 7-8 “rotating” adjuncts every semester
  • Students: ~650 declared CS majors, ~90 graduate students
  • Class sizes relatively small (< 40)

Page 3


The Early-and-Often Approach

• Breaks away from the traditional approach of teaching parallel programming as an upper-level elective• Follows the CSinParallel model

(Brown, Shoop, Adams et al. )

• Introduce parallelism early in the curriculum in required courses

• Repeat key concepts in different courses throughout the curriculum

• Tie concepts together in an upper-level capstone course

[Figure: the major path, with early introduction, reinforcement, and collation of PDC concepts in a capstone, contrasted with the traditional upper-level elective]

Page 4


[Figure: Texas State CS curriculum; arrows indicate prerequisites, and a highlighted box indicates the new course]

PDC concept areas (arranged in the figure along a low-to-high level-of-abstraction axis):
A. Elementary Notions: concurrency and parallelism, decomposition, power and performance
B. Parallelization Techniques and Parallel Algorithms: data and task parallelism, divide-and-conquer, dynamic programming
C. Parallel Architectures: SMP, clusters, NUMA, UMA, cache sharing, cache coherence
D. Task Orchestration: communication, synchronization, scheduling for power and performance, data dependence
E. Performance: speedup, efficiency, scalability, cache locality, load balancing, complexity analysis

Concept coverage by course, freshman through senior year:
CS I: A
CS II: A, B, E
Data Structures: A, B, E
Computer Architecture: C, D, E
Computer Networks: B, D
Operating Systems: D, E
Compilers: D, E
Graphics: B, C
Programming for Multicore and Manycore Platforms (the new course): A, B, C, D, E

Page 5


Module Implementations to Date

No | Module
B1 | Parallelization techniques
C1 | Intra-processor parallel architecture
C2 | Inter-processor parallel architecture
D1 | Task orchestration - synchronization and communication
D2 | Task orchestration - scheduling and mapping
E1 | Performance - basic concepts

CS 2013 coverage:
• 7 of 11 KUs from Tier 1
• 8 of 16 KUs from Tier 2
• 8 of 21 KUs from Elective
• 19 of 41 KUs total

Adding two more in Fall 14.

Page 6


Parallelization Techniques

• Description
  • Data and task parallelization techniques
  • Data parallel programs with OpenMP (see the sketch after the table below)
• CS 2013 Mapping (see table below)
• Recommended Courses
  • CS II, Data Structures, Algorithms

KA | Knowledge Unit | Level
SF | Computational Paradigms | Tier 1
SF | Parallelism | Tier 1
PD | Parallel Decomposition | Tier 1, Tier 2
PD | Parallel Performance | Tier 2
PD | Processing | Elective
PL | Concurrency and Parallelism
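As a concrete illustration of the data-parallel OpenMP material this module covers, here is a minimal sketch; the dot-product kernel is an illustrative choice, not taken from the module handouts.

/* Data-parallel loops with OpenMP: each iteration is independent, so a
 * single pragma divides the work among threads. Build with: gcc -fopenmp */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];
    double dot = 0.0;

    /* Independent iterations: pure data parallelism. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        b[i] = 2.0;
    }

    /* The reduction clause combines per-thread partial sums safely. */
    #pragma omp parallel for reduction(+:dot)
    for (int i = 0; i < N; i++)
        dot += a[i] * b[i];

    printf("dot = %f (max threads: %d)\n", dot, omp_get_max_threads());
    return 0;
}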

Page 7


Intra-processor Parallel Architecture

• Description
  • Instruction-level parallelism, superscalar, out-of-order execution
  • Vector instructions, SMT (see the vectorization sketch after the table below)
  • Multicore memory hierarchy
• CS 2013 Mapping (see table below)
• Recommended Courses
  • Computer Organization, Computer Architecture

KA | Knowledge Unit | Level
AR | Assembly level machine organization | Tier 2
AR | Functional organization | Elective
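For the vector-instruction topic above, a minimal sketch of a loop the compiler can map to SSE/AVX lanes; the saxpy kernel is an illustrative choice, not from the module materials.

/* Unit-stride loop with no loop-carried dependence; the omp simd hint
 * makes the vectorization intent explicit. Build with: gcc -O2 -fopenmp
 * (or -fopenmp-simd). */
#include <stddef.h>

void saxpy(size_t n, float alpha, const float *x, float *y) {
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];   /* independent iterations map to SIMD lanes */
}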

Page 8


Inter-processor Parallel Architecture

• Description
  • Shared-memory multiprocessors, distributed memory systems
  • GPUs, SMTs, SSEs
• CS 2013 Mapping (see table below)
• Recommended Courses
  • Computer Organization, Computer Architecture

KA | Knowledge Unit | Level
PD | Parallel Architecture | Tier 1, Tier 2, Elective
AR | Multiprocessing and alternative architectures | Elective

Page 9


Task Orchestration: Synchronization and Communication

• Description
  • Point-to-point and collective communication
  • Semaphores, locks, critical sections
  • MPI and OpenMP (see the MPI sketch after the table below)
• CS 2013 Mapping (see table below)
• Recommended Courses
  • Operating Systems, Computer Networks

KA | Knowledge Unit | Level
PD | Parallelism Fundamentals | Tier 1
PD | Communication and Coordination
OS | Operating System Principles | Tier 2
OS | Concurrency
AR | Interfacing and communication
HC | Collaboration and communication | Elective
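To ground the point-to-point and collective communication topics above, a minimal MPI sketch; the program is illustrative, not drawn from the course materials.

/* One point-to-point message plus one collective reduction.
 * Build with mpicc, run with: mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, token = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point: rank 1 sends its rank number to rank 0. */
    if (size > 1) {
        if (rank == 1)
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        else if (rank == 0)
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    /* Collective: all ranks contribute to a sum gathered at rank 0. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("received token %d, sum of ranks = %d\n", token, sum);

    MPI_Finalize();
    return 0;
}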

Page 10


Task Orchestration: Scheduling and Mapping

• Description
  • Pipelined parallelism, producer-consumer applications
  • Load balancing, energy efficiency, thread affinity (see the scheduling sketch after the table below)
  • SMT and hardware threads
• CS 2013 Mapping (see table below)
• Recommended Courses
  • Operating Systems

KA | Knowledge Unit | Level
OS | Operating System Principles | Tier 2
OS | Concurrency
PD | Parallel Algorithms, Analysis, and Programming | Tier 2, Elective
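For the load-balancing topic above, a minimal OpenMP sketch; the uneven_work kernel is hypothetical, chosen only to create imbalance.

/* Irregular iterations: a dynamic schedule hands small chunks to whichever
 * thread finishes first, whereas a static split would leave threads idle.
 * Build with: gcc -fopenmp */
#include <stdio.h>
#include <omp.h>

/* Hypothetical kernel whose cost grows with i, creating load imbalance. */
static double uneven_work(int i) {
    double s = 0.0;
    for (int k = 0; k < i; k++)
        s += 1.0 / (k + 1.0);
    return s;
}

int main(void) {
    const int n = 20000;
    double total = 0.0, t0 = omp_get_wtime();

    #pragma omp parallel for schedule(dynamic, 64) reduction(+:total)
    for (int i = 0; i < n; i++)
        total += uneven_work(i);

    printf("total = %f, elapsed = %f s\n", total, omp_get_wtime() - t0);
    return 0;
}

Changing schedule(dynamic, 64) to schedule(static) makes the imbalance visible in the elapsed time.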

Page 11


Performance: Basic Concepts

• Description
  • Amdahl’s law, linear and superlinear speedup, throughput, data locality, communication cost (a worked Amdahl’s law example follows the table below)
  • Study of parallel merge sort and matrix multiplication
• CS 2013 Mapping (see table below)
• Recommended Courses
  • CS I, Computer Architecture, Algorithms, Compilers

KA | Knowledge Unit | Level
SF | Parallelism | Tier 1
SF | Evaluation
AR | Digital logic and digital systems | Tier 2
PD | Parallel Performance | Elective
PL | Concurrency and Parallelism
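A worked instance of Amdahl’s law for this module; the parallel fraction p = 0.9 and N = 8 cores are illustrative values, not taken from the slides.

speedup S(N) = 1 / ((1 - p) + p/N)
with p = 0.9, N = 8: S(8) = 1 / (0.1 + 0.1125) ≈ 4.7
limiting speedup as N grows: 1 / (1 - p) = 10

The corresponding efficiency is S(8)/8 ≈ 0.59, well short of ideal linear speedup.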

Page 12

Questions

