The course covers basic techniques used in the statistical analysis of networks and systems. It first reviews and extends basic concepts of probability, information theory, and linear algebra. It then presents basic estimation techniques and covers the main Machine Learning approaches for regression and classification. Finally, it presents linear methods for dimensionality reduction.
Teachers
Person in charge
Jorge García Vidal
Others
Jose Maria Barceló Ordinas
Weekly hours
Theory
3.4
Problems
0.6
Laboratory
0
Guided learning
0
Autonomous learning
7.1
Competences
Technical Competences of each Specialization
Computer networks and distributed systems
CEE2.2 - Capability to understand models, problems and algorithms related to computer networks and to design and evaluate algorithms, protocols and systems that process the complexity of computer communications networks.
Generic Technical Competences
Generic
CG4 - Capacity for general and technical management of research, development and innovation projects, in companies and technology centers in the field of Informatics Engineering.
Transversal Competences
Appropriate attitude towards work
CTR5 - Capability to be motivated by professional achievement and to face new challenges, to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act strictly on professional development. Capability to adapt to technological or organizational changes. Capacity for working in absence of information and/or with time and/or resources constraints.
Basic
CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
Objectives
The main goal of the course is to develop in the students quantitative modeling skills, based on probabilistic techniques.
Related competences:
CB6,
CTR5,
CEE2.2,
CG4
Contents
Review of probability models
Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, central limit theorem. Properties of multivariate Gaussian distributions. Entropy, mutual information, KL-divergence, and cross entropy.
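The information-theoretic quantities listed above are tightly related; as a small illustration (with made-up distributions, not course material), the sketch below checks the identity "cross entropy = entropy + KL-divergence" in NumPy:

```python
import numpy as np

# Hypothetical discrete distributions p and q over three outcomes
p = np.array([0.5, 0.25, 0.25])
q = np.array([0.25, 0.25, 0.5])

entropy = -np.sum(p * np.log2(p))        # H(p)
cross_entropy = -np.sum(p * np.log2(q))  # H(p, q)
kl = np.sum(p * np.log2(p / q))          # D_KL(p || q)

# Identity: H(p, q) = H(p) + D_KL(p || q)
assert np.isclose(cross_entropy, entropy + kl)
```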
Estimation. Basic Machine Learning techniques for classification and regression
Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation errors. Bias-variance tradeoff. Classification. Linear regression, logistic regression. k-means clustering. Random forests. Introduction to neural networks: FFNN, backpropagation, SGD, CNN, RNN, autoencoders.
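As a minimal illustration of empirical risk minimization, the sketch below (synthetic data and coefficients chosen for illustration) fits a linear regression by minimizing the average squared error over the sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 2x + 1 + noise (illustrative values)
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + 0.1 * rng.standard_normal(100)

# Design matrix with an intercept column; least squares minimizes
# the empirical risk (1/n) * ||A w - y||^2
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

slope, intercept = w  # should recover values close to 2 and 1
```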
Linear models and dimensionality reduction
Review of basic linear algebra results, spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number, dimensionality reduction, PCA, eigenfaces, Sparse representation, Fourier matrices.
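The SVD, PCA, and the Eckart-Young theorem listed above fit together naturally; the sketch below (random illustrative data) computes a rank-1 approximation via the SVD and checks that its Frobenius error equals the discarded singular value, as Eckart-Young predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data stretched mostly along one direction
X = rng.standard_normal((200, 2)) @ np.diag([3.0, 0.3])
Xc = X - X.mean(axis=0)  # center the data before PCA

# PCA via the SVD: rows of Vt are the principal directions
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Best rank-1 approximation in Frobenius norm (Eckart-Young)
X1 = s[0] * np.outer(U[:, 0], Vt[0])

# The approximation error equals the discarded singular value
err = np.linalg.norm(Xc - X1)
assert np.isclose(err, s[1])
```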
Activities
Review of probability models
Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem.
Some materials will be posted online. The main results will be explained on the blackboard, and classes include problem solving and application examples.
Evaluation methodology
The assessment is based on the development of three projects (P1, P2, P3) and two controls (C1, C2), one in the middle of the course and one at the end. Each control and each project is marked between 0 and 10. The final mark (FM) is computed as:
FM = 0.2*(C1+C2+P1+P2+P3)
In this course the proposed projects will be:
* P1: Basic Probability, Information Theory and Linear Algebra.
* P2: Basic estimation methods and Machine Learning.
* P3: Neural Networks.
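As a quick check of the final mark formula, the snippet below evaluates FM for a set of hypothetical marks (the values are invented for illustration):

```python
# Hypothetical control and project marks, each between 0 and 10
C1, C2, P1, P2, P3 = 7.0, 8.0, 6.0, 9.0, 10.0

# FM = 0.2*(C1+C2+P1+P2+P3), i.e. the average of the five marks
FM = 0.2 * (C1 + C2 + P1 + P2 + P3)
```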