The course covers basic techniques used in the statistical analysis of networks and systems. It first reviews and extends basic concepts of probability, information theory, and linear algebra. It then presents basic estimation techniques and covers the main Machine Learning approaches for regression and classification. Finally, it presents linear methods for dimensionality reduction.
Teachers
Person in charge
Jorge García Vidal
Others
Jose Maria Barceló Ordinas
Weekly hours
Theory
2.4
Problems
1.6
Laboratory
0
Guided learning
0
Autonomous learning
7.1
Competences
Technical Competences of each Specialization
Computer networks and distributed systems
CEE2.2 - Capability to understand models, problems and algorithms related to computer networks and to design and evaluate algorithms, protocols and systems that process the complexity of computer communications networks.
Generic Technical Competences
Generic
CG4 - Capacity for general and technical management of research, development and innovation projects, in companies and technology centers in the field of Informatics Engineering.
Transversal Competences
Appropriate attitude towards work
CTR5 - Capability to be motivated by professional achievement and to face new challenges, to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act rigorously in professional development. Capability to adapt to technological or organizational changes. Capacity for working in the absence of information and/or with time and/or resource constraints.
Basic
CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
Objectives
The main goal of the course is to develop in the students quantitative modeling skills, based on probabilistic techniques.
Related competences: CB6, CTR5, CEE2.2, CG4
Contents
Review of probability models
Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem.
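As an illustrative sketch of one of the results above (not taken from the course materials), the weak law of large numbers says that the sample mean of i.i.d. random variables converges in probability to the expected value. The die-rolling setup below is a hypothetical example:

```python
import random
import statistics

# Illustrative sketch: average rolls of a fair six-sided die, whose
# expected value is 3.5; the sample mean concentrates around it as n grows.
random.seed(0)

def sample_mean(n):
    """Average of n fair die rolls."""
    return statistics.fmean(random.randint(1, 6) for _ in range(n))

means = {n: sample_mean(n) for n in (10, 1_000, 100_000)}
for n, m in means.items():
    print(n, m)  # the sample mean drifts towards 3.5 as n grows
```

Chebyshev's inequality quantifies this convergence: the variance of the sample mean shrinks as 1/n, so the probability of a fixed deviation from 3.5 vanishes.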
Estimation. Basic Machine Learning techniques for classification and regression
Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation errors. Bias-variance tradeoff. Classification. Linear regression. Basic neural network architectures.
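A minimal sketch connecting two of the topics above (assumed, not course code): for a simple linear model with i.i.d. Gaussian noise, the least-squares fit coincides with the maximum-likelihood estimate, and it has a closed form:

```python
# Closed-form ordinary least squares for y = a*x + b.
# Under i.i.d. Gaussian noise this is also the maximum-likelihood estimate.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n                       # mean of x
    my = sum(ys) / n                       # mean of y
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var                          # slope
    b = my - a * mx                        # intercept
    return a, b

# Noise-free data on the line y = 2x + 1 recovers the parameters exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x + 1 for x in xs]
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 1.0
```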
Linear models and dimensionality reduction
Review of basic linear algebra results: spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number, dimensionality reduction, PCA, eigenfaces, sparse representation, Fourier matrices, graph signal processing.
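As an illustrative sketch (an assumption, not course material) of the Eckart-Young theorem: the best rank-k approximation of a matrix in Frobenius norm is obtained by keeping the k largest singular values of its SVD, and the approximation error is the norm of the discarded singular values:

```python
import numpy as np

# Hypothetical example: best rank-2 approximation of a random 6x4 matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]   # keep the k largest singular values

# Eckart-Young: the Frobenius error equals the root of the sum of the
# squared discarded singular values.
err = np.linalg.norm(A - A_k)
print(err, np.sqrt(np.sum(s[k:] ** 2)))
```

The same truncated SVD underlies PCA: centering the data first makes the right singular vectors the principal directions.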
Activities
Review of probability models
Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem. Objectives: 1
Some materials will be posted online. The main results will be explained on the blackboard. Classes include problem solving and application examples.
Evaluation methodology
The evaluation is based on the development of several projects. Each project receives a mark, and the final mark FM is computed as:
FM = Sum_i (Wi*Mi)
where:
Wi is the weight of project i, i = 1, ..., N
Mi is the mark of project i, i = 1, ..., N
The number of projects may vary over time, but in general, the following projects are foreseen:
* P1 (25%): Basic probability, information theory, and linear algebra
* P2 (25%): Estimation, ML and Bayesian approaches
* P3 (25%): Understanding Bias-Variance tradeoff
* P4 (25%): Basic regression and classification
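The weighted-sum formula above can be sketched in a few lines; the weights are the stated 25% per project, while the marks below are purely hypothetical examples:

```python
# FM = Sum_i (Wi * Mi) with the stated 25% weights for P1..P4.
weights = [0.25, 0.25, 0.25, 0.25]   # W1..W4
marks   = [8.0, 6.0, 9.0, 7.0]       # hypothetical project marks M1..M4

fm = sum(w * m for w, m in zip(weights, marks))
print(fm)  # 7.5
```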