Machine Learning

Credits
6
Types
Specialization complementary (Computing)
Requirements
  • Prerequisite: PE
  • Corequisite: PROP
Department
CS
The aim of machine learning is the development of theories, techniques and algorithms that allow a system to modify its behavior through inductive inference. This inference is based on observed data that represent incomplete information about a statistical phenomenon or process. Machine learning is a meeting point of different disciplines: statistics, artificial intelligence, programming and optimization, among others.

The course is divided into three conceptual parts, corresponding to three kinds of fundamental problems: supervised learning (classification and regression), unsupervised learning (clustering) and reinforcement learning. The modelling techniques studied include probabilistic models, regression trees, artificial neural networks and support vector machines. An additional goal is getting acquainted with Python, a powerful free-software computing environment, as well as learning how to develop practical solutions to difficult problems for which a direct approach is not feasible.
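
As a flavour of the kind of work done in the laboratory sessions, the sketch below shows a minimal supervised classification experiment in Python. The use of scikit-learn and of the iris dataset is an assumption made for this illustration only; the course does not prescribe specific libraries or datasets.

    # Minimal supervised-learning sketch (illustrative; scikit-learn and the iris dataset are assumptions)
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)                    # observed data about a phenomenon
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000)            # a simple probabilistic classifier
    model.fit(X_tr, y_tr)                                # inductive inference from the training data
    print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))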

Teachers

Person in charge

  • Javier Béjar Alonso

Others

  • Daniel Hinjos García
  • David Garcia Soriano

Weekly hours

Theory
2
Problems
1
Laboratory
1
Guided learning
0
Autonomous learning
6

Competences

Transversal Competences

Effective oral and written communication

  • G4 [Assessable] - To communicate knowledge, procedures, results and ideas to other people, both orally and in writing, and to participate in discussions about topics related to the activity of a technical informatics engineer.
    • G4.3 - To communicate clearly and efficiently, in oral and written presentations about complex topics, adapting to the situation, the type of audience and the communication goals, and using appropriate strategies and means. To analyse, assess and respond appropriately to the questions of the audience.

Technical Competences of each Specialization

Computer science specialization

  • CCO2 - To develop, in an effective and efficient way, the appropriate algorithms and software to solve complex computational problems.
    • CCO2.1 - To demonstrate knowledge of the fundamentals, paradigms and techniques specific to intelligent systems, and to analyse, design and build computer systems, services and applications that use these techniques in any applicable field.
    • CCO2.2 - Capacity to acquire, obtain, formalize and represent human knowledge in a computable way to solve problems through a computer system in any applicable field, in particular in fields related to computation, perception and acting in intelligent environments.
    • CCO2.4 - To demonstrate knowledge of, and develop techniques for, computational learning; to design and implement applications and systems that use them, including those dedicated to the automatic extraction of information and knowledge from large volumes of data.

Objectives

  1. Formulate the problem of machine learning from data, and know the different machine learning tasks
    Related competences: CCO2.1, CCO2.2
  2. Organize the workflow for solving a machine learning problem, analyzing the possible options and choosing the most appropriate one for the problem
    Related competences: CCO2.1, CCO2.4
  3. Decide, defend and criticize a solution to a machine learning problem, arguing the strengths and weaknesses of the approach
    Related competences: G4.3, CCO2.1, CCO2.4
  4. Compare, judge and interpret a set of results after making a hypothesis about a machine learning problem
    Related competences: CCO2.1, CCO2.4
  5. Understand and know how to apply least squares techniques for solving supervised learning problems
    Related competences: CCO2.4
  6. Understand and know how to apply single-layer and multilayer neural networks for solving supervised learning problems
    Related competences: CCO2.2, CCO2.4
  7. Understand and know how to apply support vector machines for solving supervised learning problems
    Related competences: CCO2.4
  8. Understand and formulate different theoretical tools for the analysis, study and description of machine learning systems
    Related competences: CCO2.4
  9. Understand and know how to apply the basic techniques for solving unsupervised learning problems
    Related competences: CCO2.1, CCO2.2, CCO2.4
  10. Understand and know how to apply basic techniques for solving reinforcement learning problems
    Related competences: CCO2.1, CCO2.2, CCO2.4
  11. Understand the most important modern machine learning and computational learning techniques
    Related competences: CCO2.1

Contents

  1. Introduction to Machine Learning
    General information and basic concepts. Overview and approach to the problems tackled by machine learning techniques. Supervised (classification and regression), unsupervised (clustering) and reinforcement learning. Examples of modern applications.
  2. Supervised Machine Learning Theory
    The supervised machine learning problem setup. Classification and regression problems. Bias-variance tradeoff. Overfitting and underfitting. Generalization bounds. Empirical risk minimization / log-likelihood. Model selection and feature selection.
  3. Visualization and dimensionality reduction
    The curse of dimensionality. Methods for variable selection and data transformation. Linear methods for dimensionality reduction (Principal Component Analysis); a short PCA sketch in Python follows this list. Nonlinear dimensionality reduction methods (LLE, t-SNE).
  4. Supervised machine learning (I): Linear methods
    Linear learning methods for regression and classification problems. Linear regression by least squares and regularization. Probabilistic and discriminative linear models for classification: discriminant analysis (LDA/QDA), Bayesian models, logistic regression. Interpretation of linear models.
  5. Supervised machine learning (II): Nonlinear methods
    Nonlinear models for regression and classification. Nonparametric models (k-nearest neighbours). Neural networks and the multilayer perceptron, deep learning. Support vector machines and kernels. Decision trees and combinations of classifiers. Interpretation of nonlinear models.
  6. Unsupervised machine learning
    Definition and approaches for unsupervised machine learning. Clustering algorithms: EM algorithm and k-means algorithm.
  7. Reinforcement learning
    Description of reinforcement learning. Markov processes. Bellman equations. Value functions and temporal-difference methods. Q-learning and the SARSA algorithm. Applications.
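
As noted in content item 3, the following is a short illustrative sketch of linear dimensionality reduction with Principal Component Analysis. NumPy, scikit-learn and the synthetic data are assumptions of the example, not materials prescribed by the course.

    # Illustrative PCA sketch (assumes NumPy and scikit-learn)
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                            # 200 samples in 10 dimensions
    X[:, 0] = 3 * X[:, 1] + rng.normal(scale=0.1, size=200)   # make two variables strongly correlated

    pca = PCA(n_components=2)                                 # project onto the two principal components
    X2 = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_)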

Activities

Development of item 1 of the course

The student gets an overview of the basic concepts of machine learning, as well as modern examples of application.
Objectives: 11 1
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
6h

Development of item 2 of the course

The teacher explains the theory of unsupervised machine learning, focusing on clustering algorithms. A minimal k-means sketch in Python follows this activity.
Objectives: 2 1 9
Contents:
Theory
2h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
4h
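
As noted above, a minimal k-means sketch for this clustering session. NumPy, scikit-learn and the toy data are assumptions of the example, not materials prescribed by the course.

    # Illustrative k-means clustering sketch (assumes NumPy and scikit-learn)
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two well-separated Gaussian blobs as toy unsupervised data
    X = np.vstack([rng.normal(0, 1, size=(100, 2)), rng.normal(5, 1, size=(100, 2))])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster centres:\n", km.cluster_centers_)
    print("within-cluster sum of squares:", km.inertia_)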

Resolution of the problems of item 2 of the course

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 1 8
Week: 2
Theory
0h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Development of item 3 of the course

The teacher presents the problem of supervised machine learning, explains the differences between regression and classification problems, and introduces the notions of bias-variance tradeoff, overfitting and underfitting, as well as other theoretical tools for model selection. A model-selection sketch in Python follows this activity.
Objectives: 1 8
Contents:
Theory
6h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
4h
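
As noted above, a sketch of model selection by cross-validation, contrasting underfitting and overfitting through the polynomial degree. scikit-learn and the synthetic data are assumptions of this illustration.

    # Illustrative model-selection sketch: cross-validated error for models of increasing complexity
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=(60, 1))
    y = np.sin(3 * x[:, 0]) + rng.normal(scale=0.2, size=60)   # noisy nonlinear target

    for degree in (1, 3, 10):                                  # under-, well- and over-fitted candidates
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        mse = -cross_val_score(model, x, y, cv=5, scoring="neg_mean_squared_error").mean()
        print(f"degree {degree:2d}: cross-validated mean squared error = {mse:.3f}")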

Resolution of the problems of item 3 of the course

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 4 5 6
Week: 5
Theory
0h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
10h

Development of item 4 of the course

The teacher explains the basics of algorithms for separating hyperplanes: the perceptron algorithm and maximum margin separation. Kernel functions and support vector machines for classification are introduced. Finally, two neural network models for classification are covered: the multilayer perceptron and the radial basis function network. A perceptron sketch in Python follows this activity.
Objectives: 11 1 6 7
Contents:
Theory
5h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
4h
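
As noted above, a NumPy-only sketch of the perceptron algorithm on linearly separable toy data; the data and the stopping rule are assumptions of the example.

    # Illustrative perceptron sketch (NumPy only)
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, size=(50, 2)), rng.normal(2, 1, size=(50, 2))])
    y = np.hstack([-np.ones(50), np.ones(50)])        # labels in {-1, +1}

    w, b = np.zeros(2), 0.0
    for epoch in range(100):                          # a fixed maximum number of passes over the data
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:                # misclassified point
                w += yi * xi                          # perceptron update rule
                b += yi
                errors += 1
        if errors == 0:                               # all points correctly classified: stop
            break
    print("weights:", w, "bias:", b, "epochs used:", epoch + 1)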

Resolution of the problems of item 4 of the course

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 4 2 6
Week: 8
Theory
0h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
10h

Development of item 5 of the course

The teacher introduces methods for regression problems, basically least squares (analytical and iterative methods). Error functions for the regression case are introduced. Multilayer neural networks are adapted to the regression case, and the support vector machine for regression is introduced. A least-squares sketch in Python follows this activity.
Objectives: 1 5 6 7
Contents:
Theory
5h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
4h
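
As noted above, a NumPy-only sketch contrasting the analytical (normal equations) and iterative (gradient descent) solutions of a least-squares regression problem; the synthetic data, learning rate and iteration count are assumptions of the example.

    # Illustrative least-squares sketch: normal equations versus batch gradient descent (NumPy only)
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.c_[np.ones(100), rng.uniform(-1, 1, size=(100, 2))]   # design matrix with intercept column
    w_true = np.array([1.0, 2.0, -3.0])
    y = X @ w_true + rng.normal(scale=0.1, size=100)

    # Analytical solution of the normal equations (X^T X) w = X^T y
    w_exact = np.linalg.solve(X.T @ X, X.T @ y)

    # Iterative solution by batch gradient descent on the mean squared error
    w, lr = np.zeros(3), 0.1
    for _ in range(2000):
        w -= lr * X.T @ (X @ w - y) / len(y)

    print("normal equations :", np.round(w_exact, 3))
    print("gradient descent :", np.round(w, 3))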

Resolution of the problems of item 5 of the course

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 4 7
Week: 10
Theory
0h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
10h

Development of item 6 of the course

The teacher introduces the basic techniques for ensemble learning: bagging, boosting and ECOC, and explains them in light of the bias-variance tradeoff. A bagging sketch in Python follows this activity.
Objectives: 2 11 8
Contents:
Theory
4h
Problems
0h
Laboratory
2h
Guided learning
0h
Autonomous learning
4h
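
As noted above, a sketch of bagging a high-variance base learner; scikit-learn and the breast cancer dataset are assumptions of the example (boosting and ECOC are not shown).

    # Illustrative bagging sketch (assumes scikit-learn)
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import BaggingClassifier

    X, y = load_breast_cancer(return_X_y=True)

    tree = DecisionTreeClassifier(random_state=0)                      # high-variance base learner
    bagged = BaggingClassifier(tree, n_estimators=50, random_state=0)  # bootstrap aggregation

    print("single tree  accuracy:", cross_val_score(tree, X, y, cv=5).mean())
    print("bagged trees accuracy:", cross_val_score(bagged, X, y, cv=5).mean())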

Resolution of the problems of item 6 of the course

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 4 2 9 10
Week: 12
Theory
0h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Development of item 7 of the subject

The teacher explains the basics of reinforcement learning and its applications, and briefly introduces transductive learning. A tabular Q-learning sketch in Python follows this activity.
Objectives: 11 10
Contents:
Theory
3h
Problems
0h
Laboratory
3h
Guided learning
0h
Autonomous learning
4h
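
As noted above, a NumPy-only sketch of tabular Q-learning on a tiny deterministic chain environment; the environment, the epsilon-greedy behaviour policy and all hyperparameter values are assumptions of the example.

    # Illustrative tabular Q-learning sketch (NumPy only)
    import numpy as np

    n_states, n_actions = 5, 2          # states 0..4; action 0 = left, action 1 = right
    goal = n_states - 1                 # reaching the last state yields reward 1 and ends the episode

    def step(s, a):
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        return s2, (1.0 if s2 == goal else 0.0), s2 == goal

    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.5, 0.9, 0.1
    rng = np.random.default_rng(0)

    for episode in range(300):
        s = 0
        for _ in range(100):                                          # cap the episode length
            greedy = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))   # greedy action, random tie-break
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(greedy)
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the greedy value of the successor state
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
            if done:
                break

    print(np.round(Q, 2))               # the greedy policy should choose "right" in every state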

Resolution of the problems of item 7 of the subject

The teacher poses up to 3 problems within the current topic and the students prepare them at home. In class, the teacher resolves difficulties, gives guidance towards the full resolution of the problems and, where needed, answers general questions about the topic. The students then work again on these problems and deliver them.
Objectives: 3 4 11
Week: 14
Theory
0h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
8h

Final exam


Objectives: 5 6 7 8 9 10
Week: 15 (Outside class hours)
Theory
3h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Delivery of the practical work


Objectives: 3 4 2 5 6 7 8 9 10
Week: 14
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Teaching methodology

The topics presented in the lectures are carefully motivated (why they are important to know) and supplemented with many examples. The theory lessons introduce all the knowledge, techniques, concepts and results necessary to build a well-grounded body of knowledge. These concepts are put into practice in the problem sessions and in the laboratory.

Prior to each problem-solving session, the teacher proposes problems related to the current topic and the students have time to prepare them during their autonomous learning hours. In class, the students are divided into several small groups, depending on their number. The teacher provides guidance, resolves questions where needed, gives feedback and helps the students progress in the resolution. The use of collaborative learning strategies is envisaged, in which one or more students take responsibility for leading the process. The students work again on these problems in their autonomous learning time and then deliver them. These deliverables are spread over the course, have a uniform workload and are assessed. This strategy is also used to evaluate their effective communication skills.

The theory lessons are weekly (two hours). The laboratory classes (two hours) take place twice a month. The problem sessions are weekly.

There is a deliverable practical work, solving a real problem chosen by the student, that integrates all the knowledge and skills acquired in the course. This practical work is also used to evaluate effective communication skills.

Evaluation methodology

The course can be passed with continuous assessment, as follows:

NProbs = Average mark of problems completed during the course
NPract = Mark for the practical work
NPart = Mark used to evaluate skills for effective communication

NF1 = 50% NProbs + 40% NPract + 10% NPart

The course can also be passed with a final exam, as follows:

NExF = Mark obtained in a final exam (during the exams period)

NF2 = 40% NExF + 20% NProbs + 30% NPract + 10% NPart

In any case, the final mark is the higher of the two:

FINAL MARK = max (NF1, NF2)
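
For instance, with purely illustrative figures, a student with NProbs = 7, NPract = 8, NPart = 9 and NExF = 6 would obtain NF1 = 0.5·7 + 0.4·8 + 0.1·9 = 7.6 and NF2 = 0.4·6 + 0.2·7 + 0.3·8 + 0.1·9 = 7.1, so the final mark would be max(7.6, 7.1) = 7.6.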

Bibliography

Basic:

Complementary:

Web links

Previous capacities

Elementary notions of probability and statistics.
Elementary linear algebra and real analysis.
Good programming skills in a high-level language.