Supercomputers Architecture


Credits
6
Types
Complementary specialization (High Performance Computing)
Requirements
This subject has no formal requirements, but it builds on previous capacities
Department
AC
Supercomputers represent the leading edge of high-performance computing technology. This course describes every element in the system architecture of a supercomputer, from the shared-memory multiprocessor in the compute node to the interconnection network and the distributed-memory cluster, including the infrastructures that host them. We will also discuss their building blocks and the system software stack, including the parallel programming models, since exploiting parallelism is central to achieving greater computational power. We will introduce the continuous development of supercomputing systems that enables their convergence with the advanced analytics algorithms required in today's world, paying special attention to Deep Learning algorithms and their execution on GPU platforms. The practical component is the most important part of this subject: the course follows a “learn by doing” method, with a set of hands-on sessions based on problems that the students must carry out throughout the course, and is marked by continuous assessment, which ensures constant and steady work. The method also relies on teamwork and a ‘learn to learn' approach based on reading and presenting papers, so that students can adapt to and anticipate the new technologies that will arise in the coming years. For the labs we will use supercomputing facilities of the Barcelona Supercomputing Center (BSC-CNS).

Teachers

Person in charge

  • Jordi Torres Viñals

Weekly hours

Theory
2
Problems
0
Laboratory
2
Guided learning
0.15
Autonomous learning
4

Competences

Technical Competences of each Specialization

High performance computing

  • CEE4.1 - Capability to analyze, evaluate and design computers and to propose new techniques for improvement in their architecture.
  • CEE4.2 - Capability to analyze, evaluate, design and optimize software considering the architecture and to propose new optimization techniques.
  • CEE4.3 - Capability to analyze, evaluate, design and manage system software in supercomputing environments.

Generic Technical Competences

Generic

  • CG1 - Capability to apply the scientific method to the study and analysis of phenomena and systems in any area of Computer Science, and to the conception, design and implementation of innovative and original solutions.

Transversal Competences

Teamwork

  • CTR3 - Capacity of being able to work as a team member, either as a regular member or performing directive activities, in order to help the development of projects in a pragmatic manner and with a sense of responsibility; capability to take into account the available resources.

Basic

  • CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
  • CB8 - Capability to communicate their conclusions, and the knowledge and rationale underpinning these, to both skilled and unskilled public in a clear and unambiguous way.
  • CB9 - Possession of the learning skills that enable the students to continue studying in a way that will be mainly self-directed or autonomous.

Objectives

  1. To train students to follow, by themselves, the continuous development of supercomputing systems that enables their convergence with advanced analytic algorithms such as artificial intelligence.
    Related competences: CG1, CEE4.1, CEE4.2, CEE4.3, CB6, CB8, CB9, CTR3,

Contents

  1. Course content and motivation
  2. Supercomputing Basics
  3. Supercomputer Building Blocks
  4. Supercomputer Software Stack
  5. Parallel Programming Models: OpenMP
  6. Parallel Programming Models: MPI
  7. Parallel Performance Metrics and Measurements
  8. Supercomputer Building Blocks for AI servers
  9. Coprocessors and Programming Models
  10. Powering Artificial Intelligence, Machine Learning and Deep Learning with Supercomputing
  11. Parallel platforms for AI and their software stacks
  12. Distributed AI platforms and their software stacks
  13. Towards Exascale Computing

Activities



Course content and motivation


Objectives: 1
Contents:
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Supercomputing Basics



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

HPC Building Blocks (general purpose blocks)



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

HPC Software Stack (general purpose blocks)



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Parallel Programming Models: OpenMP



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Parallel Programming Models: MPI



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Parallel Performance Metrics and Measurements



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

HPC Building Blocks for AI servers



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Coprocessors and Programming Models



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Powering Artificial Intelligence, Machine Learning and Deep Learning with Supercomputing



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Parallel AI platforms and their software stacks



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Distributed AI platforms and their software stacks



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

Conclusions and remarks: Towards Exascale Computing



Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
2h

1- Supercomputing Building Blocks: MareNostrum visit



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

2- Getting Started with Supercomputing



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

3- Getting Started with Parallel Programming Models



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

4- Getting Started with Parallel Performance Metrics



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

5- Getting Started with Parallel Performance Model – I



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

6- Getting Started with Parallel Performance Model – II



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

7- Getting Started with GPU based Supercomputing



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

8- Getting Started with CUDA programming model



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

9- Getting Started with Deep Learning Frameworks in a Supercomputer



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

10- Getting Started with Deep Learning basic model



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.2h
Autonomous learning
2h

11- Getting Started with real Deep Learning problems and their solutions



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

12- Getting Started with the parallelization of Deep Learning problems



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

13- Getting Started with distributed Deep Learning problems



Theory
0h
Problems
0h
Laboratory
2h
Guided learning
0.1h
Autonomous learning
2h

Teaching methodology

The theoretical part of the course follows slides designed by the teacher, presented during theory classes. The practical component is the most important part of this subject: the course uses a “learn by doing” method, with a set of hands-on sessions based on problems that the students must carry out throughout the course. The course is marked by continuous assessment, which ensures constant and steady work. The method also relies on teamwork and a ‘learn to learn' approach based on reading and presenting papers, so that students can adapt to and anticipate the new technologies that will arise in the coming years.

Course Activities:
Class attendance and participation: Regular and consistent attendance is expected, and students should be able to discuss the concepts covered during class.

Lab activities: Hands-on sessions will be conducted during lab sessions using supercomputing facilities. Each hands-on involves writing a lab report with all the results, to be delivered one week later.

Homework Assignments: Homework will be assigned weekly. It includes reading documentation that expands on the concepts introduced during lectures and, periodically, reading research papers related to the week's lecture and preparing presentations (with slides).

Assessment: There will be two short midterm exams (and possibly some pop quizzes) during the course.

Student presentations: Randomly chosen students/groups will present their homework (presentations/projects).

Evaluation methodology

The evaluation of this course will take into account the following items (tentative):

- Attendance (minimum 80% required) and participation in class will account for 15% of the grade.
- Homework, paper readings and paper presentations will account for 25% of the grade.
- Exams will account for 15% of the grade.
- Lab sessions (and lab reports) will account for 45% of the grade.

Bibliography

Basic:

  • Class handouts and materials associated with this class - Jordi Torres (available on the Racó-FIB web server), 2019.
  • Understanding Supercomputing, to speed up machine learning algorithms - Jordi Torres, Course notes, 2018.
  • MareNostrum User's Guide - BSC documentation, Operations department, 2019.
  • High Performance Computing: Modern Systems and Practices - Thomas Sterling, Matthew Anderson, Maciej Brodowicz, Morgan Kaufmann, 2018 (available at the library of UPC Barcelona Tech; preview at https://books.google.es/books?id=qOHIBAAAQBAJ).


Previous capacities

Programming in C and Linux basics are expected in this course. Prior exposure to parallel programming constructs, and experience with linear algebra/matrices or machine learning, will be very helpful.