Credits
6
Types
- GIA: Elective
- GRAU: Specialization complementary (Computing)
Requirements
- Prerequisite: IA
Department
CS
Web
https://sites.google.com/upc.edu/grau-sid
Mail
salvarez@cs.upc.edu
Teachers
Person in charge
- Sergio Álvarez Napagao ( salvarez@cs.upc.edu )
Others
- Ander Barrio Campos ( ander.barrio@upc.edu )
- Javier Vazquez Salceda ( jvazquez@cs.upc.edu )
- Ramon Sangüesa Sole ( ramon.sanguesa.i@upc.edu )
- Víctor Giménez Ábalos ( victor.gimenez.abalos@upc.edu )
Weekly hours
Theory
2
Problems
0
Laboratory
2
Guided learning
0
Autonomous learning
6
Competences
Teamwork
- G5.3 - To identify the roles, skills and weaknesses of the members of a group. To propose improvements to the group's structure. To interact with efficacy and professionalism. To negotiate and manage conflicts within the group. To recognize and give support to, or assume, the leader role in a working group. To evaluate and present the results of the group's work. To represent the group in negotiations involving other people. Capacity to collaborate in a multidisciplinary environment. To know and apply techniques for promoting creativity.
Computer science specialization
- CCO2.1 - To demonstrate knowledge of the fundamentals, paradigms and techniques proper to intelligent systems, and to analyse, design and build computer systems, services and applications that use these techniques in any field of application.
- CCO2.2 - Capacity to acquire, obtain, formalize and represent human knowledge in a computable form to solve problems through a computer system in any field of application, in particular in fields related to computation, perception and action in intelligent environments.
Objectives
- To master the basic concepts of Distributed Artificial Intelligence
Related competences: G9.1, CCO2.1, CCO2.2
- To become familiar with the intelligent agent paradigm as a key piece in the construction of multi-agent systems
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To know the logical and computational models that allow the construction of goal-driven agents
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To know the logical and computational models that allow the construction of utility-driven agents
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To know the different methodologies, algorithms and technologies to train agents through reinforcement learning
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To learn the basic concepts of game theory and its relationship with multi-agent systems
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To learn several cooperation methodologies and algorithms for agents in a multi-agent system
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To know various methodologies and algorithms for competition between agents in a multi-agent system
Related competences: G7.1, G9.1, G5.3, CCO2.1, CCO2.2
- To understand the most relevant aspects of the field of mechanism design
Related competences: G9.1, CCO2.1
- To know and understand the social and ethical implications of Artificial Intelligence applied to systems capable of making decisions autonomously
Related competences: G9.1, CCO2.1
Contents
- Introduction: intelligent distributed systems
Perspectives on Artificial Intelligence.
Introduction to distributed computing systems.
The cognitive architecture paradigm and a historical overview.
Introduction to multi-agent systems.
- Intelligent agents
Definition of intelligent agent.
Rationality.
Bounded rationality.
Definition of environment.
Properties of an environment.
Intelligent agent architectures: reactive, goal-driven deliberative, utility-driven deliberative, adaptive.
- Goal-driven agents
What a logic-symbolic agent is.
Modal logic.
Possible-worlds semantics.
Alethic, doxastic and epistemic modal logics.
Goal-guided practical reasoning: the agent as an intentional system.
Implementation of a goal-driven agent: the agent control loop.
Commitment management with respect to a goal.
BDI (Belief-Desire-Intention) logic.
- Ontologies
Representing the world: ontology and epistemology.
The semiotic triangle.
Elements of an ontology.
Representation languages: OWL and RDF.
Knowledge graphs.
Ontological reasoning.
Description logic: ABox, TBox.
- Utility-driven agents
Goals vs utility.
Definition of utility.
The reward hypothesis and the reward signal.
Definition of a sequential decision problem.
Markov Decision Processes (MDPs).
Trajectories and policies: the discount factor.
Algorithms for solving MDPs: policy evaluation and value iteration.
Brief introduction to Partially Observable Markov Decision Processes (POMDPs).
- Reinforcement learning
Multi-armed bandits: exploration vs exploitation.
How to learn to decide: reinforcement learning, categorization and taxonomy.
Model-based Monte Carlo.
Temporal-difference learning algorithms: SARSA and Q-Learning.
Policy gradient algorithms: REINFORCE.
- Multi-agent systems: game theory
Why formalize multi-agent systems: Braess's paradox.
Definition of multi-agent environment and multi-agent system.
Brief introduction to computational models for multi-agent systems: MDPs, DCOPs, planning, distributed systems, socio-technical systems, game theory.
Introduction to normal-form game theory: the prisoner's dilemma.
Solution concepts: dominant strategy, minimax and maximin strategies, Nash equilibrium.
How to compute expected reward.
Equilibrium efficiency: price of anarchy, Pareto optimality.
Introduction to multi-agent coordination: competition vs cooperation.
- Cooperation
What is cooperation?
Challenges, structures and modes of cooperation.
Brief introduction to theories and models of cooperation.
Coalition theory.
Definition of superadditive, simple and convex games.
Fair coalitional games: the Shapley value.
Stable coalitional games: the Core.
Social choice theory: Condorcet's paradox and desirable properties.
Social choice functions: majority, plurality, Condorcet, Borda, Hare, fixed agenda, dictatorial.
Introduction to consensus algorithms: Paxos.
- Competition
What is competition?
Competition theories and models.
Definition of a game in extensive form.
Reduction of extensive form to normal form.
How to compute a Nash equilibrium: the backward induction algorithm.
Negotiation as a mechanism of competition.
Definition of the bargaining problem and how to solve it using backward induction (subgame perfect equilibria).
The Nash bargaining solution.
Competition resolution as an adversarial game: Minimax, Expectiminimax, Monte Carlo tree search.
- Mechanism design
Definition of mechanism.
Implementation theory.
Incentive compatibility.
The revelation principle.
Mechanism design seen as an optimization problem.
Example of a type of mechanism: auctions.
Market mechanisms.
Naive, first-price and second-price (Vickrey) auctions; the Vickrey-Clarke-Groves mechanism.
Example of combining auctions and consensus.
- Multi-agent reinforcement learning
From game theory to reinforcement learning: stochastic games and partially observable stochastic games.
How to add communication to a stochastic game.
Definition of the multi-agent reinforcement learning problem.
Computing expected utility: individual policy vs joint policy.
Solution concepts: equilibria, Pareto optimality, social welfare, minimum entanglement.
Training process, guarantees and type of convergence to a solution: what happens when a policy is not stationary.
Training methodologies based on agent reduction: centralized learning, independent learning, self-play (AlphaZero).
Multi-agent training algorithms: Joint Action Learning, Agent Modeling.
- Symbolic models for social AI
Introduction to socio-technical systems: the impact of intelligent distributed systems on society.
Social coordination and organizational models: social abstractions, norms, roles.
Electronic organizations: OperA.
Normative models: electronic institutions, HarmonIA.
Holistic models: OMNI.
- Agents and ethics
Review of the concepts of intelligent agent and rational agent.
The relationship between agency and intelligence.
Social and ethical issues of Artificial Intelligence: privacy, responsible AI.
Activities
Activity / Evaluation act
Introduction: intelligent distributed systems
- Theory: Perspectives on Artificial Intelligence. Introduction to distributed computing systems. Cognitive architecture paradigm and historical vision. Introduction to multi-agent systems.
Contents:
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Intelligent agents
- Theory: Definition of intelligent agent. Rationality. Bounded rationality. Definition of environment. Properties of an environment. Intelligent agent architectures: reactive, goal-driven deliberative, utility-driven deliberative, adaptive.
Contents:
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Goal-driven agents
- Theory: What is a logic-symbolic agent. Modal logic. Possible-worlds semantics. Alethic, doxastic, epistemic modal logics. Goal-guided practical reasoning: the agent as an intentional system. Implementation of a goal-driven agent: the agent control loop. Commitment management with respect to a goal. BDI logic (Belief-Desire-Intention).
- Laboratory: Introduction to Python. Setting up the Python environment. Installation of the multi-agent environment. Practical work with a logic-symbolic language for goal-driven agents. Development of goal-driven agents.
Contents:
Theory
2h
Problems
0h
Laboratory
6h
Guided learning
0h
Autonomous learning
0h
State of the art analysis: agent architectures
In this activity, the students, organized in groups, will analyze a recent academic article in which a novel agent architecture is presented. Objectives: 1, 2
Week: 3 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Ontologies
- Theory: Representing the world: ontology and epistemology. The semiotic triangle. Elements of an ontology. Representation languages: OWL and RDF. Knowledge graphs. Ontological reasoning. Description logic: ABox, TBox.
- Laboratory: Learn how to use Protégé to define concepts using description logic: definition by inclusion and by equivalence. Implementation of other description logic axioms. How to do ontological reasoning: theory and practice.
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
0h
Utility-driven agents
- Theory: Goals vs utility. Definition of utility. Reward hypothesis and reward signal. Definition of a sequential decision problem. Markov Decision Processes (MDPs). Trajectories and policies: discount factor. Algorithms for solving MDPs: policy evaluation and value iteration. Brief introduction to Partially Observable Markov Decision Processes (POMDPs).
- Laboratory: Practical exercises in solving Markov decision processes (MDPs). How to formalize a problem as an MDP. Solving an MDP with policy evaluation and value iteration.
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
0h
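The lab work on solving MDPs with value iteration can be sketched in a few lines of Python. The three-state chain below is a hypothetical stand-in for the actual lab environment, with made-up transition probabilities and rewards:

```python
# Minimal value iteration on a toy 3-state chain MDP (hypothetical example;
# the real lab environment is provided in class).
GAMMA = 0.9          # discount factor
STATES = [0, 1, 2]   # state 2 is terminal
ACTIONS = ["left", "right"]

# P[s][a] = list of (probability, next_state, reward) transitions
P = {
    0: {"left": [(1.0, 0, 0.0)], "right": [(0.8, 1, 0.0), (0.2, 0, 0.0)]},
    1: {"left": [(1.0, 0, 0.0)], "right": [(0.8, 2, 1.0), (0.2, 1, 0.0)]},
    2: {"left": [(1.0, 2, 0.0)], "right": [(1.0, 2, 0.0)]},
}

def q_value(V, s, a):
    """Expected discounted return of taking action a in state s."""
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])

def value_iteration(theta=1e-8):
    """Iterate the Bellman optimality update until the residual is tiny."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            v = max(q_value(V, s, a) for a in ACTIONS)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < theta:
            return V

V = value_iteration()
# Greedy policy extraction from the converged value function
policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}
```

Policy evaluation uses the same update with the action fixed by the current policy instead of the max over actions.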
Implementation of axioms with descriptive logic
In this activity, groups of students will modify an existing ontology to apply a set of description logic axioms, both on paper and in an ontology design tool (e.g. Protégé). Objectives: 3
Week: 5 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Reinforcement learning
- Theory: Multi-armed bandits: exploration vs exploitation. How to learn to decide: reinforcement learning, categorization and taxonomy. Model-based Monte Carlo. Temporal-difference learning algorithms: SARSA and Q-Learning. Policy gradient algorithms: REINFORCE.
- Laboratory: Introduction to the Gymnasium library for agent simulation and training. Reinforcement learning practices with a functional environment: value iteration, direct estimation, Q-Learning, REINFORCE.
Contents:
Theory
2h
Problems
0h
Laboratory
4h
Guided learning
0h
Autonomous learning
0h
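The Q-Learning practiced in this lab can be sketched on a toy corridor environment. This is a hypothetical stand-in for the Gymnasium environments used in the sessions; the corridor length, rewards and hyperparameters are illustrative assumptions:

```python
import random

# Tabular Q-Learning on a toy deterministic corridor (hypothetical
# stand-in for the Gymnasium environments used in the lab).
N, GOAL = 5, 4            # states 0..4; reward 1.0 on reaching state 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ACTIONS = [+1, -1]        # move right / move left

def step(s, a):
    """One environment transition: next state, reward, episode-done flag."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

random.seed(0)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # off-policy update: bootstrap from the greedy value of s2
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, x)] for x in ACTIONS) - Q[(s, a)])
        s = s2

greedy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N)}
```

SARSA differs only in the update target: it bootstraps from the action actually taken in the next state rather than the greedy one.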
Lab assignment: goal-driven agents
In this laboratory assignment, the teams of students will design and develop intelligent agents in a complex environment, using techniques and logic seen in the theory and laboratory sessions. Objectives: 1, 2, 3
Week: 6 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Multi-agent systems: Game Theory
- Theory: Why formalize multi-agent systems: Braess's paradox. Definition of multi-agent environment and multi-agent system. Brief introduction to computational models for multi-agent systems: MDPs, DCOPs, planning, distributed systems, socio-technical systems, game theory. Introduction to normal-form game theory: the prisoner's dilemma. Solution concepts: dominant strategy, minimax and maximin strategies, Nash equilibrium. How to compute expected reward. Equilibrium efficiency: price of anarchy, Pareto optimality. Introduction to multi-agent coordination: competition vs cooperation.
- Laboratory: Solving exercises on games in normal form: problem modeling, calculation of strategies and equilibria, price of anarchy and Pareto optimality. The best-response algorithm for finding dominant strategies and equilibria: theory and practice. An algorithm for computing mixed equilibria: theory and practice.
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
0h
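The best-response exercises above can be sketched for the prisoner's dilemma in normal form. The payoff numbers below are the classic illustrative ones, not course material:

```python
from itertools import product

# Prisoner's dilemma in normal form (classic illustrative payoffs;
# higher utility is better). payoff[(a0, a1)] = (utility to player 0,
# utility to player 1).
C, D = "cooperate", "defect"
payoff = {
    (C, C): (-1, -1), (C, D): (-3,  0),
    (D, C): ( 0, -3), (D, D): (-2, -2),
}
strategies = [C, D]

def best_responses(player, other_action):
    """All strategies maximizing the player's payoff vs a fixed opponent action."""
    def u(mine):
        pair = (mine, other_action) if player == 0 else (other_action, mine)
        return payoff[pair][player]
    best = max(u(s) for s in strategies)
    return {s for s in strategies if u(s) == best}

# Pure Nash equilibria: profiles where both actions are mutual best responses
nash = [(a0, a1) for a0, a1 in product(strategies, strategies)
        if a0 in best_responses(0, a1) and a1 in best_responses(1, a0)]

# Dominant strategies: a best response to every opponent action
dominant = {p: set.intersection(*(best_responses(p, s) for s in strategies))
            for p in (0, 1)}
```

Here defection dominates for both players, so (defect, defect) is the unique pure Nash equilibrium even though (cooperate, cooperate) Pareto-dominates it.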
Cooperation
- Theory: What is cooperation? Challenges, structures and modes of cooperation. Brief introduction to theories and models of cooperation. Coalition theory. Definition of superadditive, simple and convex games. Fair coalitional games: the Shapley value. Stable coalitional games: the Core. Social choice theory: Condorcet's paradox and desirable properties. Social choice functions: majority, plurality, Condorcet, Borda, Hare, fixed agenda, dictatorial. Introduction to consensus algorithms: Paxos.
- Laboratory: Resolution of coalitional games. Practical calculation of the Shapley value and the Core. Resolution of social choice exercises.
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
0h
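The Shapley value calculation practiced in this lab can be sketched by enumerating player orderings and averaging marginal contributions. The 3-player characteristic function below is a hypothetical example, not a course exercise:

```python
from itertools import permutations

# Shapley value by enumerating all orderings (exponential in the number
# of players, fine for 3). Hypothetical characteristic function v:
players = ("A", "B", "C")
v = {frozenset(): 0, frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
     frozenset("AB"): 90, frozenset("AC"): 80, frozenset("BC"): 70,
     frozenset("ABC"): 120}

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]   # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

phi = shapley(players, v)
# Efficiency property: the Shapley value splits exactly v(grand coalition)
assert abs(sum(phi.values()) - v[frozenset(players)]) < 1e-9
```

For this game the split is 45, 40 and 35 for A, B and C respectively, rewarding A's larger marginal contributions.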
Competition
- Theory: What is competition? Competition theories and models. Definition of a game in extensive form. Reduction of extensive form to normal form. How to compute a Nash equilibrium: the backward induction algorithm. Negotiation as a mechanism of competition. Definition of the bargaining problem and how to solve it using backward induction (subgame perfect equilibria). The Nash bargaining solution. Competition resolution as an adversarial game: Minimax, Expectiminimax, Monte Carlo tree search.
- Laboratory: Solving competition problems. Formalization of problems as games in extensive form. Reduction of extensive form to normal form. Formalization and resolution of bargaining problems. Application of backward induction to find Nash equilibria and SPE (subgame perfect equilibria).
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
0h
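The backward induction algorithm applied in this block's exercises can be sketched over a small game tree. The entry-game structure and payoffs below are illustrative assumptions, not taken from the course:

```python
# Backward induction (subgame perfect equilibrium) on a small game tree.
# A node is ("leaf", (u0, u1)) or (moving_player_index, {action: subtree}).
# Hypothetical entry game: player 0 decides whether to enter a market,
# then player 1 decides whether to fight or accommodate.
game = (0, {
    "out": ("leaf", (1, 3)),
    "in":  (1, {
        "fight":       ("leaf", (0, 0)),
        "accommodate": ("leaf", (2, 2)),
    }),
})

def backward_induction(node, name="root", plan=None):
    """Return the SPE payoff vector; record each node's choice in `plan`."""
    if plan is None:
        plan = {}
    kind, data = node
    if kind == "leaf":
        return data, plan
    player, branches = kind, data
    # Solve every subgame first, then let the mover pick the action
    # maximizing their own payoff
    payoffs = {a: backward_induction(t, f"{name}/{a}", plan)[0]
               for a, t in branches.items()}
    best = max(payoffs, key=lambda a: payoffs[a][player])
    plan[name] = best
    return payoffs[best], plan

payoff, plan = backward_induction(game)
```

Since player 1 would accommodate after entry, player 0 enters: the subgame perfect equilibrium yields (2, 2) even though player 1 would prefer the (1, 3) outcome.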
Lab assignment: reinforcement learning
Student teams will be required to write a report with a comparative study of the performance of various reinforcement learning techniques in a proposed environment. Objectives: 4, 5
Week: 10 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Mechanism design
- Theory: Definition of mechanism. Implementation theory. Incentive compatibility. The revelation principle. Mechanism design seen as an optimization problem. Example of a type of mechanism: auctions. Market mechanisms. Naive, first-price and second-price (Vickrey) auctions; the Vickrey-Clarke-Groves mechanism. Example of combining auctions and consensus.
Contents:
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
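The first-price vs second-price (Vickrey) auction rules mentioned in this block can be sketched as follows; the bidder names and bid values are hypothetical:

```python
# Sealed-bid auction payment rules on the same hypothetical bids.
# In a second-price (Vickrey) auction the winner pays the highest
# losing bid, which makes truthful bidding a dominant strategy.
bids = {"alice": 120, "bob": 95, "carol": 110}

def first_price(bids):
    """Winner is the highest bidder and pays their own bid."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def second_price(bids):
    """Winner is the highest bidder but pays the second-highest bid."""
    winner = max(bids, key=bids.get)
    losing = max(b for name, b in bids.items() if name != winner)
    return winner, losing
```

On these bids alice wins in both formats, paying 120 under the first-price rule but only carol's 110 under the second-price rule.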
Solution to game theory exercises
Students will have to deliver the solution to game theory exercises proposed in Racó, potentially including: games in normal form, coalition games, games in extensive form and/or bargaining problems. Objectives: 6, 7, 8
Week: 11 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Multi-agent reinforcement learning
- Theory: From game theory to reinforcement learning: stochastic games and partially observable stochastic games. How to add communication to a stochastic game. Definition of the multi-agent reinforcement learning problem. Computing expected utility: individual policy vs joint policy. Solution concepts: equilibria, Pareto optimality, social welfare, minimum entanglement. Training process, guarantees and type of convergence to a solution: what happens when a policy is not stationary. Training methodologies based on agent reduction: centralized learning, independent learning, self-play (AlphaZero). Multi-agent training algorithms: Joint Action Learning, Agent Modeling.
- Laboratory: Introduction to multi-agent reinforcement learning environments. Reinforcement learning in adversarial games: self-play MCTS and AlphaZero. Practical work with various methodologies to train agents in mixed-interest environments: joint-action learning, agent modeling, policy gradient.
Contents:
Theory
2h
Problems
0h
Laboratory
8h
Guided learning
0h
Autonomous learning
0h
Symbolic models for social AI
- Theory: Introduction to socio-technical systems: impact on society of intelligent distributed systems. Social coordination and organizational models: social abstractions, norms, roles. Electronic organizations: OperA. Normative models: electronic institutions, HarmonIA. Holistic models: OMNI.
Contents:
Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Agents and ethics
- Theory: Review of the concepts of intelligent agent and rational agent. Relationship between agency and intelligence. Social and ethical issues of Artificial Intelligence: privacy, responsible AI.
Contents:
Theory
1h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Lab assignment: multi-agent reinforcement learning
Student teams will have to write a report with a comparative study of the performance of various multi-agent reinforcement learning techniques in a proposed environment that is cooperative, competitive, or a mixture of the two. Objectives: 5, 6, 7, 8
Week: 15 (Outside class hours)
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
Teaching methodology
The teaching methodology consists of lectures in the theory classes and application of the concepts in the problem and laboratory classes. The examination will be the same for all groups.
Evaluation methodology
Evaluation is based on a final exam and a midterm exam, the grading of course assignments, and a grade for lab work. The final and midterm exams test the theoretical knowledge and the methodology acquired by students during the course. The course assignments grade is based on submissions of small problem sets during the course. The lab grade is based on students' reports and the practical lab work carried out throughout the course. Around the middle of the four-month term there is a midterm exam covering the first half of the course; a grade of 5 or more exempts the student from that part of the final exam. The final exam covers both the first and the second part of the course; its first part is compulsory for students who did not pass the midterm exam and optional for the rest. The maximum of the two grades (or the midterm grade alone) stands as the grade for the first part.
The final grade is calculated as follows:
GPar = midterm exam grade
GEx1 = grade of the 1st half of the final exam
GEx2 = grade of the 2nd half of the final exam
Exams grade = [max(GPar, GEx1) + GEx2] / 2
Final grade = Exams grade * 0.5 + Exercises grade * 0.2 + Lab grade * 0.3 (code + report)
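Worked through with hypothetical marks (all out of 10), the formula behaves as follows:

```python
# Worked example of the grading formula with hypothetical marks (out of 10).
g_par, g_ex1, g_ex2 = 6.0, 4.0, 7.0    # midterm, final exam parts 1 and 2
exercises, lab = 8.0, 7.5              # course assignments and lab grades

# The better of the midterm and the final's first half counts for part 1
exams = (max(g_par, g_ex1) + g_ex2) / 2
final = exams * 0.5 + exercises * 0.2 + lab * 0.3
# exams = (6.0 + 7.0) / 2 = 6.5; final = 3.25 + 1.6 + 2.25 = 7.1
```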
Competences' Assessment
The assessment of the teamwork competence is based on the work done during the laboratory assignments.
Bibliography
Basic
- Artificial intelligence: a modern approach. Russell, S.J.; Norvig, P. Pearson, 2022. ISBN: 9781292401133.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991005066379806711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Multiagent systems: algorithmic, game-theoretic, and logical foundations. Shoham, Yoav; Leyton-Brown, Kevin. Cambridge University Press, 2009. ISBN: 9780521899437.
https://www-cambridge-org.recursos.biblioteca.upc.edu/core/books/multiagent-systems/B11B69E0CB9032D6EC0A254F59922360
- Programming multi-agent systems in AgentSpeak using Jason. Bordini, Rafael H.; Hübner, Jomi Fred; Wooldridge, Michael J. John Wiley, 2007. ISBN: 9780470029008.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991003490179706711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Reinforcement learning: an introduction. Sutton, Richard S.; Barto, Andrew G. MIT Press, 2020. ISBN: 978-0262193986.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991004166329706711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. Albrecht, Stefano V.; Christianos, Filippos; Schäfer, Lukas. MIT Press, 2024. ISBN: 9780262049375.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991005317955806711&context=L&vid=34CSUC_UPC:VU1&lang=ca
Complementary
- An introduction to multiagent systems. Wooldridge, Michael J. John Wiley & Sons, 2009. ISBN: 9780470519462.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma91003779579706711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Algorithmic game theory. Nisan, Noam; Papadimitriou, Christos H. Cambridge University Press, 2007. ISBN: 9780521872829.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991003321009706711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Game Theory, Alive. Karlin, Anna R.; Peres, Yuval. American Mathematical Society, 2017. ISBN: 1-4704-3667-1.
https://ebookcentral-proquest-com.recursos.biblioteca.upc.edu/lib/upcatalunya-ebooks/detail.action?pq-origsite=primo&docID=4908296
- The emotion machine: commonsense thinking, artificial intelligence, and the future of the human mind. Minsky, M.L. Simon and Schuster, 2006. ISBN: 0743276639.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991003734189706711&context=L&vid=34CSUC_UPC:VU1&lang=ca
- Concurrent programming: algorithms, principles, and foundations. Raynal, M. Springer, 2013. ISBN: 9783642320262.
https://discovery.upc.edu/discovery/fulldisplay?docid=alma991004000289706711&context=L&vid=34CSUC_UPC:VU1&lang=ca