The subject Parallelism covers the fundamental aspects of parallel programming, an essential tool today for taking advantage of the multi-core architectures found in current computers. The course includes a description of the main strategies for task and data decomposition, as well as the mechanisms that ensure their correctness (synchronization, mutual exclusion, etc.).
Person in charge
Eduard Ayguadé Parra
Chenlen Yu
Daniel Jimenez Gonzalez
Gladys Miriam Utrera Iglesias
Jordi Tubella Murgadas
Josep Ramon Herrero Zaragoza
Julian David Morillo Pozo
Lluc Álvarez Martí
Common technical competencies
CT1 - To demonstrate knowledge and comprehension of essential facts, concepts, principles and theories related to informatics and their disciplines of reference.
- To demonstrate knowledge and comprehension of the fundamentals of computer usage and programming. Knowledge about the structure, operation and interconnection of computer systems, and about the fundamentals of their programming.
CT5 - To analyse, design, build and maintain applications in a robust, secure and efficient way, choosing the most adequate paradigm and programming languages.
- To choose, combine and exploit different programming paradigms, at the moment of building software, taking into account criteria like ease of development, efficiency, portability and maintainability.
- To design, write, test, refine, document and maintain code in a high-level programming language to solve programming problems, applying algorithmic schemas and using data structures.
- To demonstrate knowledge and capacity to apply the fundamental principles and basic techniques of parallel, concurrent, distributed and real-time programming.
CT6 - To demonstrate knowledge and comprehension about the internal operation of a computer and about the operation of communications between computers.
- To demonstrate knowledge, comprehension and capacity to evaluate the structure and architecture of computers, and the basic components that compose them.
CT7 - To evaluate and select hardware and software production platforms for executing applications and computer services.
- To evaluate hardware/software systems according to given quality criteria.
CT8 - To plan, conceive, deploy and manage computer projects, services and systems in every field, leading their start-up and continuous improvement, and assessing their economic and social impact.
- To identify current and emerging technologies and evaluate whether they are applicable to satisfy the users' needs.
G3 - To know the English language at a proficient oral and written level, in accordance with the needs of graduates in Informatics Engineering. Capacity to work in a multidisciplinary group and in a multi-language environment, and to communicate, orally and in writing, knowledge, procedures, results and ideas related to the technical informatics engineering profession.
- To study using resources written in English. To write a report or a technical document in English. To participate in a technical meeting in English.
The student should be able to formulate simple performance models for a given parallelization strategy for an application, allowing them to estimate the influence of major architectural aspects: number of processing elements, data access cost, cost of interaction between processing elements, among others.
The student should be able to measure, using instrumentation, visualization and analysis tools, the performance achieved by the implementation of a parallel application, and to detect the factors that limit this performance: task granularity, load balance, interaction between tasks, among others.
The student should be able to compile and execute a parallel program, using the basic command line tools to measure the execution time.
The student should be able to apply simple optimizations in parallel kernels to improve their performance on parallel architectures, attacking the factors that limit performance.
The student should be able to choose the most appropriate decomposition strategy to express parallelism in an application (tasks, data).
The student should be able to apply the basic techniques to synchronize parallel execution, avoiding race conditions and deadlock, and enabling the overlap between computation and interaction, among others.
Students must be able to program in OpenMP the parallel version of a sequential application.
The student should be able to identify the different types of parallelism that can be exploited in a computer architecture (ILP, TLP and DLP within a processor, multiprocessor and multicomputer) and describe their principles of operation.
Students must be able to understand the basics of coherence and data sharing in shared-memory parallel architectures, both with uniform and non-uniform access to memory.
The student should be able to follow the course using the materials provided in English (slides, laboratory and practical sessions), as well as to take the mid-term and final exams with the statements written in English.
If the foreign language competence is chosen, the student should be able to write the deliverables associated with the laboratory assignments (partially or fully) in English.
Introduction and motivation
Need for parallelism; parallelism vs. concurrency; possible problems when using concurrency: deadlock, livelock, starvation, fairness, data races.
Analysis of parallel applications
Basic metrics: parallelism, execution time, speedup and scalability. Analysis of the impact of the overheads associated with task creation, task synchronization and data sharing. Tools for the prediction and analysis of parallelism and for the visualization of behavior: Paraver and Tareador.
Parallel programming principles: task decomposition
Task decomposition vs. data decomposition. Decomposition into tasks, granularity and dependence analysis. Identification of parallelism patterns: iterative vs. divide-and-conquer task decompositions. Mechanisms to implement task decomposition: creation of parallel regions and tasks; mechanisms to guarantee task ordering and data sharing.
Introduction to parallel architectures
Parallelism within a processor (ILP, DLP and TLP) and across the processors that form SMP and ccNUMA shared-memory multiprocessors (cache coherence, memory consistency, synchronization).
Parallel programming principles: data decomposition
Data decomposition (geometric decomposition vs. recursive structures) for shared-memory architectures. Locality of data access in shared-memory parallel architectures. Code generation as a function of the data decomposition. Brief introduction to distributed-memory architectures and their programming (specifically, MPI).
Shared-memory programming: OpenMP
Parallel regions, threads and tasks. Task/thread barriers. Mutual exclusion and locks. Work-sharing constructs: loops.
Midterm problems review
These sessions will resolve any doubts the students may have about the problems in the mid-term exams.
Assimilation of fundamental concepts and tools for modeling and analyzing the behavior of parallel applications
Actively participate in the theory/problem sessions. Study the contents of topics 1 and 2 and do the proposed exercises. Solve the exercises in the laboratory sessions and understand the results.
Using OpenMP to express parallelism in shared memory
Actively participate in the theory/problem sessions. Study the contents of topic 6 and prepare the implementation of the exercises for the laboratory sessions. Solve the exercises in the laboratory sessions and draw conclusions.
Assimilation of the fundamentals for task decomposition
Actively participate in the theory/problem sessions. Study the contents of topic 4 and do the proposed exercises. Apply the new knowledge when solving the laboratory exercises for topic 6.
Assimilation of the fundamentals for data decomposition
Actively participate in the theory/problem sessions. Study the contents of topic 5 and do the proposed exercises. Use OpenMP to express data decompositions for shared-memory architectures.
The theory classes introduce all the knowledge, techniques and concepts needed, which are then put into practice in the in-class problem sessions and in the laboratory, as well as through personal work on a collection of problems.
Two hours of theory/problems are held per week. The two hours of laboratory classes are also held every week.
The course uses the C programming language and mainly the OpenMP parallel programming model.
The grade for the course is computed from two marks:
- Theory contents (weight 70%).
- Laboratory evaluation (weight 30%).
The laboratory grade (Lab) is mainly obtained from the marks for the deliverables at the end of each assignment, modulated by the performance observed during the laboratory sessions and by a possible interview with the laboratory professor at the end of the course.
During the course, two mid-term exams are held (C1 and C2). The continuous assessment mark (AC) is computed as the mean of the marks obtained in the two mid-term exams:
AC = 0.5*C1 + 0.5*C2
If AC>=5 then the student's final grade (NF) will be:
NF = 0.3*Lab + 0.7*AC.
Students with AC<5 will have to take the final exam (EF), which determines their grade for the theory part. In this case, the new final grade will be:
NF = 0.3*Lab + 0.7*max(EF, 0.25*AC + 0.75*EF)
Students with AC>=5 who want to take the final exam in order to improve their mark will have to send an e-mail to the coordinator at least one week before the exam date. In this case, the new final grade will be calculated as follows:
NF = 0.3*Lab + 0.7*max(EF, AC)
The foreign language competence will be evaluated from the reports delivered for the laboratory assignments. These reports should be written (partially or fully) in English, and they will require reading the laboratory assignment description (also in English) as well as the OpenMP specification. Both the structure of the written document and the ability to communicate the results and conclusions of the work will be used to evaluate the competence (following a rubric). The grade for the competence will be A (excellent), B (good), C (satisfactory), D (fail) or NA (not evaluated).