Unsupervised and Reinforcement Learning


Credits
4.5
Types
Elective
Requirements
This subject has no requirements

Department
CS
This course introduces different advanced techniques in unsupervised machine learning and reinforcement learning. The unsupervised machine learning part is oriented to learning algorithms for structured (sequences, streams, graphs) and unstructured data. The reinforcement learning part covers how an agent learns a behavior by interacting with its environment, from basic tabular algorithms to function approximation and policy gradient methods.

Teachers

Person in charge

  • Javier Béjar Alonso

Others

  • Mario Martín Muñoz

Weekly hours

Theory
3
Problems
0
Laboratory
0
Guided learning
0.38
Autonomous learning
5.7

Competences

Generic Technical Competences

Generic

  • CG1 - Capability to plan, design and implement products, processes, services and facilities in all areas of Artificial Intelligence.
  • CG3 - Capacity for modeling, calculation, simulation, development and implementation in technology and company engineering centers, particularly in research, development and innovation in all areas related to Artificial Intelligence.

Technical Competences of each Specialization

Academic

  • CEA12 - Capability to understand the advanced techniques of Knowledge Engineering, Machine Learning and Decision Support Systems, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.
  • CEA13 - Capability to understand advanced techniques of Modeling, Reasoning and Problem Solving, and to know how to design, implement and apply these techniques in the development of intelligent applications, services or systems.

Professional

  • CEP1 - Capability to solve the analysis of information needs from different organizations, identifying the uncertainty and variability sources.

Transversal Competences

Sustainability and social compromise

  • CT2 - Capability to know and understand the complexity of economic and social typical phenomena of the welfare society; capability to relate welfare with globalization and sustainability; capability to use technique, technology, economics and sustainability in a balanced and compatible way.

Solvent use of the information resources

  • CT4 - Capacity for managing the acquisition, the structuring, analysis and visualization of data and information in the field of specialisation, and for critically assessing the results of this management.

Basic

  • CB7 - Ability to integrate knowledge and handle the complexity of making judgments based on information which, being incomplete or limited, includes considerations on social and ethical responsibilities linked to the application of their knowledge and judgments.

Objectives

  1. To know and use advanced unsupervised machine learning and reinforcement learning techniques for application in all domains of engineering and science
    Related competences: CB7, CT2, CT4, CEA12, CEA13, CEP1, CG1, CG3

Contents

  1. Data Mining, a global perspective
    Brief introduction to what Data Mining and Knowledge Discovery are, the areas they are related to and the different techniques involved
  2. Pre-processing and unsupervised data transformation
    This topic will include different algorithms for unsupervised data preprocessing such as data normalization, discretization, outlier detection, dimensionality reduction and feature extraction (PCA, ICA, SVD, linear and non-linear multidimensional scaling and non-negative matrix factorization); an illustrative PCA sketch appears after this list
  3. Unsupervised Machine Learning
    This topic will include classical and current algorithms for unsupervised learning from machine learning and statistics, including hierarchical and partitional algorithms (K-means, Fuzzy C-means, Gaussian EM, graph partitioning, density-based algorithms, grid-based algorithms, unsupervised ANN, affinity propagation, ...); a minimal K-means sketch appears after this list
  4. Unsupervised methodologies in Knowledge Discovery and Data Mining
    This topic will include current trends in knowledge discovery for data mining and big data (scalability, anytime clustering, one-pass algorithms, approximation algorithms, distributed clustering, ...)
  5. Advanced topics in unsupervised learning
    This topic will include an introduction to different advanced topics in unsupervised learning such as consensus clustering, subspace clustering, biclustering and semi-supervised clustering
  6. Unsupervised learning for sequential and structured data
    This topic will include algorithms for unsupervised learning with sequential data and structured data, such as sequences, strings, time series and data streams, graphs and social networks
  7. Basic concepts of Reinforcement Learning
    This topic describes the framework of reinforcement learning as an agent learning a behavior by interacting with the environment. This framework will be mathematically formalized. Finally, the concepts of reward, long-term reward, value functions and policy functions will be introduced. Concepts will be illustrated with several examples.
  8. Basic reinforcement learning algorithms: Model-based methods
    This topic introduces the model-based algorithms of RL. We will see the Dynamic Programming methods of Policy Iteration (PI) and Value Iteration (VI); a minimal Value Iteration sketch appears after this list. Asynchronous versions of the algorithms will also be described. Finally, we will stress the importance of the convergence of the algorithms and the optimality of the policy they learn.
  9. Basic reinforcement learning algorithms: Model-free methods
    We will see algorithms able to learn without a model of the world. We will present the Monte Carlo, Q-learning and Sarsa algorithms (a minimal Q-learning sketch appears after this list). We will extend these methods to TD(lambda) and n-step backups. The role of exploration in learning will be discussed.
  10. Function approximation
    This topic explains what to do when the state space is too large to be represented with a table. We will discuss the advantages and problems of the two main approaches to this problem: parametric and non-parametric methods. We will show how to apply known supervised methods such as RBFs, trees, SVMs and Deep Learning methods to RL.
  11. Policy gradient methods
    In some cases, value function approaches are not appropriate, for instance when the action space is continuous or when long-term reward is not the best guide for learning. This topic shows approaches developed to solve these cases. We will describe the actor-critic approach as well as the vanilla policy gradient method and the REINFORCE and TRPO algorithms; a minimal REINFORCE sketch appears after this list.
  12. State of the art applications of RL
    In this topic, we will describe the latest practical applications of RL: Atari, Go, robotic applications and NLP
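
For topic 2, the following is a minimal illustrative sketch of dimensionality reduction with PCA computed through the SVD. The toy data, its size and the choice of two components are assumptions made for the example, not course material.

    # Hedged sketch: PCA via SVD on invented toy data
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))          # toy data: 100 samples, 5 features

    Xc = X - X.mean(axis=0)                # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = 2                                  # keep the 2 leading components (arbitrary choice)
    Z = Xc @ Vt[:k].T                      # project onto the principal axes

    explained = (S[:k] ** 2).sum() / (S ** 2).sum()
    print(Z.shape, f"explained variance ratio: {explained:.2f}")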
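For topic 3, a minimal sketch of Lloyd's K-means written with plain NumPy. The synthetic two-cluster data, the random initialization and the stopping criterion are assumptions for illustration, not the course's reference implementation.

    # Hedged sketch: Lloyd's K-means with NumPy only
    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids
        for _ in range(n_iter):
            # assignment step: nearest centroid for every point
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # update step: each centroid moves to the mean of its cluster
            new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return labels, centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])  # toy two-cluster data
    labels, centers = kmeans(X, k=2)
    print(centers)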
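For topic 8, a minimal sketch of Value Iteration on a hypothetical two-state, two-action MDP. The transition and reward tables and the discount factor are invented for the example.

    # Hedged sketch: Value Iteration on an invented tabular MDP
    import numpy as np

    n_states, n_actions, gamma = 2, 2, 0.9
    # P[s, a, s'] = transition probability, R[s, a] = expected immediate reward (made up)
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])

    V = np.zeros(n_states)
    for _ in range(1000):
        # Bellman optimality backup: V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = Q.argmax(axis=1)              # greedy policy with respect to the converged values
    print(V, policy)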
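For topic 9, a minimal sketch of tabular Q-learning with epsilon-greedy exploration. The toy chain environment and the learning-rate, discount and exploration values are assumptions made for illustration.

    # Hedged sketch: tabular Q-learning on an invented chain environment
    import numpy as np

    n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
    alpha, gamma, eps = 0.1, 0.95, 0.1
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))

    def step(s, a):
        """Move along the chain; reaching the last state gives reward 1 and ends the episode."""
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = s2 == n_states - 1
        return s2, (1.0 if done else 0.0), done

    for _ in range(500):                   # episodes
        s, done = 0, False
        while not done:
            a = rng.integers(n_actions) if rng.random() < eps else Q[s].argmax()
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the greedy value of the next state
            Q[s, a] += alpha * (r + gamma * (0.0 if done else Q[s2].max()) - Q[s, a])
            s = s2

    print(Q.argmax(axis=1))                # learned greedy policy (should prefer "right")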
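For topic 11, a minimal sketch of the REINFORCE policy-gradient update with a softmax policy on a two-armed bandit. The reward distribution, the running baseline and the step size are illustrative assumptions; the baseline is only there to reduce the variance of the gradient estimate.

    # Hedged sketch: REINFORCE with a softmax policy on an invented two-armed bandit
    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.zeros(2)                    # one preference parameter per action
    true_means = np.array([0.2, 0.8])      # hypothetical expected rewards of the two arms
    alpha, baseline = 0.1, 0.0

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    for t in range(2000):
        pi = softmax(theta)
        a = rng.choice(2, p=pi)                        # sample an action from the policy
        r = rng.normal(true_means[a], 0.1)             # sampled reward
        baseline += 0.01 * (r - baseline)              # running baseline reduces variance
        grad_log_pi = -pi                              # grad log pi(a) for softmax: one_hot(a) - pi
        grad_log_pi[a] += 1.0
        theta += alpha * (r - baseline) * grad_log_pi  # REINFORCE update

    print(softmax(theta))                  # probability mass should concentrate on arm 1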

Activities

Unsupervised learning

This activity develops the topics of the unsupervised learning part of the course
Theory
18
Problems
0
Laboratory
0
Guided learning
2.3
Autonomous learning
34.2
  • Theory: Unsupervised Learning
  • Autonomous learning: Unsupervised Learning
Objectives: 1
Contents:

Reinforcement learning

This activity develops the syllabus of the reinforcement learning part of the course
Theory
18
Problems
0
Laboratory
0
Guided learning
2.3
Autonomous learning
34.2
Objectives: 1
Contents:

Teaching methodology

Presentation classes and group project classes

Evaluation methodology

The evaluation will be based on small questionnaires about each topic of the course (20%) and a coursework, to be chosen between writing a report on the state of the art of a particular course topic or implementing machine learning algorithms (80%).