Statistical Analysis of Networks and Systems

Credits
6
Types
Specialization compulsory (Computer Networks and Distributed Systems)
Requirements
This subject has no prerequisites, but it does assume previous capacities
Department
AC
The course covers basic techniques used in the statistical analysis of networks and systems. It first reviews and extends basic concepts of probability, information theory, and linear algebra. It then presents basic estimation techniques and the basic Machine Learning approaches for regression and classification. Finally, it presents linear methods for dimensionality reduction.

Teachers

Person in charge

  • Jorge García Vidal

Others

  • Jose Maria Barceló Ordinas

Weekly hours

Theory
2.4
Problems
1.6
Laboratory
0
Guided learning
0
Autonomous learning
7.1

Competences

Technical Competences of each Specialization

Computer networks and distributed systems

  • CEE2.2 - Capability to understand models, problems and algorithms related to computer networks and to design and evaluate algorithms, protocols and systems that process the complexity of computer communications networks.

Generic Technical Competences

Generic

  • CG4 - Capacity for general and technical management of research, development and innovation projects, in companies and technology centers in the field of Informatics Engineering.

Transversal Competences

Appropriate attitude towards work

  • CTR5 - Capability to be motivated by professional achievement and to face new challenges, and to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act rigorously in professional development. Capability to adapt to technological or organizational changes. Capacity for working in the absence of information and/or under time and/or resource constraints.

Basic

  • CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.

Objectives

  1. The main goal of the course is to develop students' quantitative modeling skills, based on probabilistic techniques.
    Related competences: CB6, CTR5, CEE2.2, CG4

Contents

  1. Review of probability models
    Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes' formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), the (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem.
  2. Estimation. Basic Machine Learning techniques for classification and regression
    Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation error. Bias-variance tradeoff. Classification. Linear regression. Basic neural network architectures.
  3. Linear models and dimensionality reduction
    Review of basic linear algebra results: spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number. Dimensionality reduction: PCA, eigenfaces, sparse representation, Fourier matrices, graph signal processing.

Activities



Review of probability models

Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes' formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), the (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem.
Objectives: 1
Contents:
Theory
6h
Problems
6h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
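The cross-entropy topic in this block lends itself to a small worked sketch. The following is purely illustrative (it is not part of the course materials, and the distributions p and q are made up): cross entropy H(p, q) = H(p) + KL(p || q), so it is minimized over q exactly when q = p, which is why it works as a cost function for probabilistic classifiers.

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i * log(q_i); equals H(p) + KL(p || q)."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]  # "true" distribution (hypothetical)
q = [0.5, 0.3, 0.2]  # a candidate model distribution (hypothetical)

# By Gibbs' inequality, H(p, q) >= H(p, p) = H(p), with equality iff q == p.
assert cross_entropy(p, q) > cross_entropy(p, p)
```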

Linear models and dimensionality reduction

Review of basic linear algebra results: spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number. Dimensionality reduction: PCA, eigenfaces, sparse representation, Fourier matrices, graph signal processing.
Objectives: 1
Contents:
Theory
12.4h
Problems
7.6h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
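As an illustration of the PCA topic in this block (a sketch added here, not course material; the 2-D data are synthetic), the first principal component can be obtained by power iteration on the sample covariance matrix, which converges to its leading eigenvector, a consequence of the spectral theorem for symmetric matrices:

```python
import random

# Synthetic 2-D points lying near the line y = 2x, with small noise.
random.seed(1)
pts = [(t + random.gauss(0, 0.1), 2 * t + random.gauss(0, 0.1))
       for t in [random.uniform(-1, 1) for _ in range(500)]]

# Sample means and 2x2 covariance matrix entries.
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
cxx = sum((p[0] - mx) ** 2 for p in pts) / n
cyy = sum((p[1] - my) ** 2 for p in pts) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n

# Power iteration: v <- C v / ||C v|| converges to the top eigenvector.
v = (1.0, 0.0)
for _ in range(100):
    w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    v = (w[0] / norm, w[1] / norm)

# The data spread along (1, 2), so the component ratio should be near 2.
assert abs(abs(v[1] / v[0]) - 2.0) < 0.2
```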

Estimation. Basic Machine Learning techniques for regression and classification

Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation error. Bias-variance tradeoff. Classification. Linear regression. Basic neural network architectures.

Theory
8h
Problems
8h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
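A minimal sketch of the linear regression topic in this block (illustrative only; the data below are synthetic): under Gaussian noise, the maximum-likelihood fit of y = a*x + b coincides with ordinary least squares, whose closed-form estimates are computed here.

```python
import random

# Synthetic data from y = 2x + 1 plus Gaussian noise.
random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in xs]

# Closed-form OLS estimates: a = cov(x, y) / var(x), b = mean(y) - a*mean(x).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

# The estimates should recover the true slope and intercept closely.
assert abs(a - 2.0) < 0.1 and abs(b - 1.0) < 0.1
```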

Self-assessment test T1


Objectives: 1
Week: 4
Type: theory exam
Theory
1.5h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
17h

Self-assessment test T2


Objectives: 1
Week: 8
Type: theory exam
Theory
1.5h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
17h

Self-assessment test T3


Objectives: 1
Week: 12
Type: theory exam
Theory
1.5h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
19h

Self-assessment test T4


Objectives: 1
Week: 16
Type: theory exam
Theory
1.5h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
19h

Homework 1



Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Homework 2



Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Homework 3



Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Homework 4



Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Teaching methodology

Some materials will be posted online. The main results will be explained on the blackboard. Classes include problem solving and application examples.

Evaluation methodology

The evaluation is based on the development of several projects. Each project i receives a mark Mi, and the final mark FM is the weighted sum

FM = Sum_i (Wi * Mi)

where:

Wi is the weight of project i, i = 1, ..., N
Mi is the mark of project i, i = 1, ..., N

The number of projects may vary over time, but in general, the following projects are foreseen:
* P1 (25%): Basic probability, information theory, and linear algebra
* P2 (25%): Estimation, ML and Bayesian approaches
* P3 (25%): Understanding Bias-Variance tradeoff
* P4 (25%): Basic regression and classification
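As a small sketch of the grading formula, using the four foreseen projects with the weights stated above (the marks are hypothetical):

```python
# FM = sum_i Wi * Mi with equal weights for the four foreseen projects.
weights = [0.25, 0.25, 0.25, 0.25]  # W1..W4 for P1..P4
marks = [8.0, 7.0, 9.0, 6.0]        # M1..M4, hypothetical project marks

fm = sum(w * m for w, m in zip(weights, marks))
assert fm == 7.5  # 0.25 * (8 + 7 + 9 + 6)
```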


Bibliography

Basic:

Complementary:

Previous capacities

Basic knowledge of probability theory, linear algebra and calculus