Parallelism

Credits
6
Types
Compulsory
Requirements
  • Prerequisite: AC
Department
AC
The subject Parallelism covers the fundamental aspects of parallel programming, an essential tool today for taking advantage of the multi-core architectures found in current computers. The course includes a description of the main strategies for task and data decomposition, as well as the mechanisms to ensure their correctness (synchronization, mutual exclusion, etc.).

Teachers

Person in charge

  • Eduard Ayguadé Parra

Others

  • Chenlen Yu
  • Daniel Jimenez Gonzalez
  • Gladys Miriam Utrera Iglesias
  • Jordi Tubella Murgadas
  • Josep Ramon Herrero Zaragoza
  • Julian David Morillo Pozo
  • Lluc Álvarez Martí

Weekly hours

Theory
2
Problems
0
Laboratory
2
Guided learning
0.4
Autonomous learning
5.6

Competences

Technical Competences

Common technical competencies

  • CT1 - To demonstrate knowledge and comprehension of essential facts, concepts, principles and theories related to informatics and their disciplines of reference.
    • CT1.1B - To demonstrate knowledge and comprehension about the fundamentals of computer usage and programming. Knowledge about the structure, operation and interconnection of computer systems, and about the fundamentals of their programming.
  • CT5 - To analyse, design, build and maintain applications in a robust, secure and efficient way, choosing the most adequate paradigm and programming languages.
    • CT5.1 - To choose, combine and exploit different programming paradigms when building software, taking into account criteria like ease of development, efficiency, portability and maintainability.
    • CT5.3 - To design, write, test, refine, document and maintain code in a high-level programming language to solve programming problems, applying algorithmic schemas and using data structures.
    • CT5.6 - To demonstrate knowledge and capacity to apply the fundamental principles and basic techniques of parallel, concurrent, distributed and real-time programming.
  • CT6 - To demonstrate knowledge and comprehension about the internal operation of a computer and about the operation of communications between computers.
    • CT6.2 - To demonstrate knowledge, comprehension and the capacity to evaluate the structure and architecture of computers, and the basic components that compose them.
  • CT7 - To evaluate and select hardware and software production platforms for executing applications and computer services.
    • CT7.2 - To evaluate hardware/software systems according to given quality criteria.
  • CT8 - To plan, conceive, deploy and manage computer projects, services and systems in every field, to lead the start-up, the continuous improvement and to value the economical and social impact.
    • CT8.1 - To identify current and emerging technologies and evaluate whether they are applicable to satisfy the users' needs.

Transversal Competences

Third language

  • G3 - To know the English language at a correct oral and written level, in accordance with the needs of graduates in Informatics Engineering. Capacity to work in a multidisciplinary group and in a multi-language environment, and to communicate, orally and in writing, knowledge, procedures, results and ideas related to the profession of technical informatics engineer.
    • G3.2 - To study using resources written in English. To write a report or a technical document in English. To participate in a technical meeting in English.

Objectives

  1. The student should be able to formulate simple performance models, given a parallelization strategy for an application, that allow them to estimate the influence of the main architectural aspects: number of processing elements, data access cost, cost of interaction between processing elements, among others.
    Related competences: CT7.2
  2. The student should be able to measure, using instrumentation, visualization and analysis tools, the performance achieved by the implementation of a parallel application, and to detect the factors that limit this performance: task granularity, load balance, interaction between tasks, among others.
    Related competences: CT7.2
  3. The student should be able to compile and execute a parallel program, using the basic command line tools to measure the execution time.
    Related competences: CT7.2, CT5.3
  4. The student should be able to apply simple optimizations to parallel kernels to improve their performance on parallel architectures, addressing the factors that limit performance.
    Related competences: CT7.2, CT6.2
  5. The student should be able to choose the most appropriate decomposition strategy to express parallelism in an application (tasks, data).
    Related competences: CT5.1
  6. The student should be able to apply the basic techniques to synchronize parallel execution, avoiding race conditions and deadlock, and enabling the overlap between computation and interaction, among others.
    Related competences: CT5.1
  7. Students must be able to use OpenMP to program the parallel version of a sequential application.
    Related competences: CT5.3, CT5.6
  8. The student should be able to identify the different types of parallelism that can be exploited in a computer architecture (ILP, TLP and DLP within a processor, multiprocessors and multicomputers) and to describe their principles of operation.
    Related competences: CT8.1, CT6.2, CT1.1B
  9. Students must be able to understand the basics of coherence and data sharing in shared-memory parallel architectures, both with uniform and non-uniform access to memory.
    Related competences: CT8.1, CT6.2, CT1.1B
  10. The student should be able to follow the course using the materials provided in English (slides, laboratory and practical sessions), as well as to take the mid-term and final exams with the statements written in English.
    Related competences: G3.2
  11. If the foreign-language competence is chosen, the student should be able to write the deliverables associated with the laboratory assignments (partially or fully) in English.
    Related competences: G3.2

Contents

  1. Introduction and motivation
    Need for parallelism; parallelism vs. concurrency; potential problems in concurrent execution: deadlock, livelock, starvation, fairness, data races.
  2. Analysis of parallel applications
    Basic metrics: parallelism, execution time, speedup and scalability. Analysis of the impact of the overheads associated with task creation, task synchronization and data sharing. Tools for the prediction and analysis of parallelism and for the visualization of behaviour: Paraver and Tareador.
  3. Parallel programming principles: task decomposition
    Task decomposition vs. data decomposition. Decomposition into tasks, granularity and dependence analysis. Identification of parallelism patterns: iterative vs. divide-and-conquer task decompositions. Mechanisms to implement task decomposition: creation of parallel regions and tasks; mechanisms to guarantee task ordering and data sharing.
  4. Introduction to parallel architectures
    Parallelism within a processor (ILP, DLP and TLP) and across the processors that form SMP and ccNUMA shared-memory multiprocessors (cache coherence, memory consistency, synchronization).
  5. Parallel programming principles: data decomposition
    Data decomposition (geometric decomposition vs. recursive structures) for shared-memory architectures. Locality of data access in shared-memory parallel architectures. Code generation as a function of the data decomposition. Brief introduction to distributed-memory architectures and their programming (specific case: MPI).
  6. Shared-memory programming: OpenMP
    Parallel regions, threads and tasks. Task/thread barriers. Mutual exclusion and locks. Worksharing constructs: loops.
  7. Midterm problems review
    In these sessions, doubts that students may have about the problems in the mid-term exams will be resolved.

Activities


Assimilation of fundamental concepts and tools for modeling and analyzing the behavior of parallel applications

Actively participate in the theory/problems sessions. Study the contents of topics 1 and 2 and do the proposed exercises. Solve the exercises in the laboratory sessions and understand the results.
Objectives: 1 3 2 10
Contents:
Theory
6h
Problems
0h
Laboratory
8h
Guided learning
0h
Autonomous learning
6h

Using OpenMP to express parallelism in shared memory

Actively participate in the theory/problems sessions. Study the contents of topic 6 and prepare the implementation of the exercises for the laboratory sessions. Solve the exercises in the laboratory sessions and draw conclusions.
Objectives: 4 7 10 11
Contents:
Theory
0h
Problems
0h
Laboratory
22h
Guided learning
0h
Autonomous learning
22h

Assimilation of the fundamentals for task decomposition

Actively participate in the theory/problems sessions. Study the contents of topic 3 and do the proposed exercises. Apply the new knowledge when solving the laboratory exercises for topic 6.
Objectives: 5 6 10
Contents:
Theory
8h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
10h

Control for topics 1, 2 and 3


Objectives: 9 1 5 6 7 10
Week: 7
Type: theory exam
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Assimilation of the fundamental aspects in parallel architectures

Actively participate in the theory/problems sessions. Study the contents of topic 4 and do the proposed exercises.
Objectives: 8 10
Contents:
Theory
6h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Assimilation of the fundamentals for data decomposition

Actively participate in the theory/problems sessions. Study the contents of topic 5 and do the proposed exercises. Use OpenMP to express data decompositions for shared-memory architectures.
Objectives: 5 6 10
Contents:
Theory
6h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Control for topics 4 and 5


Objectives: 8 4 5 6 7 10
Week: 14
Type: theory exam
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Midterm problems review

Actively participate in sessions of problems.
  • Guided learning: carrying out practical activities, in small groups, on other programming models
Objectives: 9 10
Contents:
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
3h
Autonomous learning
4h

Final exam


Objectives: 8 9 4 5 6 7 10
Week: 15 (Outside class hours)
Type: final exam
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
3h
Autonomous learning
12h

Teaching methodology

The theory classes introduce the knowledge, techniques and concepts that are then put into practice through problems solved in class and in the laboratory, as well as through personal work using a collection of problems.

Two hours of theory/problems are taught per week. The two hours of laboratory classes also take place every week.

The course uses the C programming language and mainly the OpenMP parallel programming model.

Evaluation methodology

The grade for the course is computed from two marks:
- Theory contents (weight 70%).
- Laboratory evaluation (weight 30%).

The laboratory grade (Lab) is mainly obtained from the marks of the deliverables at the end of each assignment, adjusted according to the performance during the laboratory sessions and, possibly, an interview with the laboratory professor at the end of the course.

During the course, two mid-term exams are taken (C1 and C2). The continuous assessment mark (AC) is computed as the mean of the marks obtained in the two mid-term exams:

AC = 0.5*C1 + 0.5*C2

If AC>=5 then the student's final grade (NF) will be:

NF = 0.3*Lab + 0.7*AC.

Students with AC<5 must take the final exam (EF), which determines their grade for the theory part. In this case, the final grade will be:

NF = 0.3*Lab + 0.7*max(EF, 0.25*AC + 0.75*EF)

Students with AC>=5 who want to take the final exam in order to improve their mark will have to send an e-mail to the coordinator at least one week before the exam date. In this case, the new final grade will be calculated as follows:

NF = 0.3*Lab + 0.7*max(EF, AC)

The foreign language competence will be evaluated from the reports delivered for the laboratory assignments. These reports should be written (partially or fully) in English, and they will require reading the laboratory assignment description (also in English) as well as the OpenMP specification. Both the structure of the written document and the ability to transmit the results and conclusions of the work will be used to evaluate the competence (following a rubric). The grade for the competence will be A (excellent), B (good), C (satisfactory), D (fail) or NA (not evaluated).

Previous capacities

The required previous capacities are those provided by the course's prerequisite, AC.