# Statistical Analysis of Networks and Systems

• Credits: 6
• Type: Specialization compulsory (Computer Networks and Distributed Systems)
• Requirements: This subject has no prerequisites, but it assumes previous background knowledge.
• Department: AC
The course covers some basic techniques used in the statistical analysis of networks and systems. It first reviews and extends basic concepts of probability, information theory, and linear algebra. Then it presents basic estimation techniques. Finally, it covers the basic approaches of Machine Learning for regression and classification.

## Teachers

### Person in charge

• Jorge García Vidal

### Others

• Jose Maria Barceló Ordinas

## Weekly hours

• Theory: 2.4
• Problems: 1.6
• Laboratory: 0
• Guided learning: 0
• Autonomous learning: 7

## Competences

### Technical Competences of each Specialization

#### Computer networks and distributed systems

• CEE2.2 - Capability to understand models, problems and algorithms related to computer networks and to design and evaluate algorithms, protocols and systems that process the complexity of computer communications networks.

### Generic Technical Competences

#### Generic

• CG4 - Capacity for general and technical management of research, development and innovation projects, in companies and technology centers in the field of Informatics Engineering.

### Transversal Competences

#### Appropriate attitude towards work

• CTR5 - Capability to be motivated by professional achievement and to face new challenges, to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act rigorously in professional development. Capability to adapt to technological or organizational changes. Capacity for working in the absence of information and/or with time and/or resource constraints.

#### Basic

• CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.

## Objectives

1. The main goal of the course is to develop students' quantitative modeling skills, based on probabilistic techniques.
Related competences: CB6, CTR5, CEE2.2, CG4

## Contents

1. Probability models
Probability axioms, basic combinatorics, random variables, independence and conditional probability, expected values (review; only problems and online material), inequalities (Markov, Chebyshev, Jensen), the (weak) law of large numbers, entropy and mutual information. Properties of Gaussian distributions, central limit theorem.
2. Linear models
Spectral theorem for symmetric matrices. Positive-definite matrices, quadratic forms. SVD. Curse of dimensionality, high-dimensional spaces. Dimensionality reduction. PCA. Moore-Penrose pseudo-inverse.
3. Estimation. Basic Machine Learning techniques for classification and regression
Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation errors. Bias-variance tradeoff. Classification. Linear regression.
4. Graphical models and dynamic systems
Graphical models. Belief propagation. Hidden Markov Models. Kalman filters. Time series.
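As a small taste of the first topic, the Chebyshev inequality and the weak law of large numbers can be checked by simulation. This is an illustrative sketch (not part of the course materials) for the sample mean of i.i.d. Exponential(1) draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weak law of large numbers: the sample mean of n i.i.d. Exponential(1)
# draws (mean mu = 1, variance var = 1) concentrates around mu as n grows.
mu, var = 1.0, 1.0

# Chebyshev bound for the sample mean:
#   P(|Xbar - mu| >= eps) <= var / (n * eps^2)
n, eps, trials = 100, 0.3, 10_000
means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
empirical = float(np.mean(np.abs(means - mu) >= eps))
bound = var / (n * eps**2)
assert empirical <= bound  # the bound holds, and is quite loose here
```

The simulated exceedance probability sits well below the Chebyshev bound, which is the usual experience: Chebyshev is universal but conservative.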

## Activities


### Probability models

Probability axioms, basic combinatorics, random variables, independence and conditional probability, expected values (review; only problems and online material), inclusion/exclusion, conditional independence, inequalities (Markov, Chebyshev, Jensen), examples: Bernoulli, binomial, multinomial, Poisson, the (weak) law of large numbers, entropy and mutual information. Density functions, examples: uniform, exponential, Gaussian (review; problems and online material), beta, Dirichlet (eigenvalues/eigenvectors, symmetric and positive-definite matrices video), multivariate Gaussian, memorylessness of the exponential distribution. Properties of Gaussian distributions, central limit theorem.
Objectives: 1
Theory: 6h · Problems: 6h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 0h
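The entropy and mutual information part of this block can be illustrated with a tiny discrete example (an illustrative sketch, not course material), using the identity I(X;Y) = H(X) + H(Y) - H(X,Y):

```python
import numpy as np

# Joint distribution of two binary variables (rows: X, columns: Y).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal of X
p_y = p_xy.sum(axis=0)   # marginal of Y

def H(p):
    """Shannon entropy in bits, ignoring zero-probability cells."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Mutual information via I(X;Y) = H(X) + H(Y) - H(X,Y).
I_xy = H(p_x) + H(p_y) - H(p_xy.ravel())
print(round(I_xy, 4))  # 0.2781 bits
```

Both marginals are uniform (entropy 1 bit each), but the variables are correlated, so knowing one reveals about 0.28 bits of the other.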

### Linear models

Spectral theorem for symmetric matrices. Positive-definite matrices, quadratic forms. SVD. Dimensionality reduction. PCA. Moore-Penrose pseudo-inverse. Infinite-dimensional vector spaces. Continuity of linear operators. Hilbert spaces. Riesz representation theorem.
Objectives: 1
Theory: 6h · Problems: 6h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 0h
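Two of the tools in this block, PCA via the SVD and the Moore-Penrose pseudo-inverse, can be sketched in a few lines of NumPy (illustrative only; the synthetic data and matrices are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data that varies mostly along one direction in R^3.
X = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0, 0.5]])
X += 0.05 * rng.normal(size=X.shape)
Xc = X - X.mean(axis=0)                  # centre before PCA

# PCA via SVD: rows of Vt are the principal directions, and the
# squared singular values give the variance captured by each.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
assert explained[0] > 0.99               # one component dominates

# Moore-Penrose pseudo-inverse: least-squares solution of Ax = b
# for a tall (overdetermined) matrix A.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.pinv(A) @ b                # argmin ||Ax - b||_2
assert np.allclose(A.T @ (A @ x - b), 0, atol=1e-10)  # normal equations
```

The final assertion checks the defining property of the least-squares solution: the residual is orthogonal to the column space of A.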

### Estimation. Basic Machine Learning techniques for regression and classification

Maximum likelihood and Bayesian estimation. Linear regression. Bias-variance tradeoff. Classification.

Theory: 6h · Problems: 6h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 0h
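A minimal sketch of the link between maximum likelihood and linear regression (illustrative, with made-up data): under i.i.d. Gaussian noise, the maximum-likelihood line fit is exactly ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from y = 3x + 1 + Gaussian noise.
n = 500
x = rng.uniform(-1, 1, size=n)
y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=n)

# Design matrix with an intercept column; solve min ||Xw - y||^2,
# which is the ML estimate when the noise is Gaussian.
X = np.column_stack([x, np.ones(n)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
slope, intercept = w
assert abs(slope - 3.0) < 0.05 and abs(intercept - 1.0) < 0.05
```

With more noise or fewer samples the estimates spread out, which is the variance side of the bias-variance tradeoff studied in this block.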

### Graphical models & dynamic systems

Graphical models. Belief propagation. Hidden Markov Models. Kalman filters. Time series.
Objectives: 1
Theory: 6h · Problems: 6h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 0h
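The Kalman filter part of this block reduces, in one dimension, to a short predict/update recursion. The sketch below (illustrative; the model and numbers are made up) tracks a constant level observed through noise:

```python
import numpy as np

rng = np.random.default_rng(3)

# State model: x_t = x_{t-1} + process noise (variance q);
# observation:  y_t = x_t + measurement noise (variance r).
true_x, q, r = 5.0, 1e-5, 0.5**2
ys = true_x + 0.5 * rng.normal(size=200)

mean, var = 0.0, 10.0                    # diffuse Gaussian prior
for y in ys:
    var += q                             # predict step
    k = var / (var + r)                  # Kalman gain
    mean += k * (y - mean)               # update with the innovation
    var *= (1 - k)                       # posterior variance shrinks

assert abs(mean - true_x) < 0.3          # close to the true level
```

After 200 observations the posterior mean is close to the true level and the posterior variance is tiny: the filter has effectively averaged the whole observation history.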

### Self-assessment test T1

Objectives: 1
Week: 4
Type: theory exam
Theory: 1.5h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 17h

### Self-assessment test T2

Objectives: 1
Week: 8
Type: theory exam
Theory: 1.5h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 17h

### Self-assessment test T3

Objectives: 1
Week: 12
Type: theory exam
Theory: 1.5h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 19h

### Self-assessment test T4

Objectives: 1
Week: 16
Type: theory exam
Theory: 1.5h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 19h

### Homework 1

Theory: 0h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 4.5h

### Homework 2

Theory: 0h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 6h

### Homework 3

Theory: 0h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 6h

### Homework 4

Theory: 0h · Problems: 0h · Laboratory: 0h · Guided learning: 0h · Autonomous learning: 6h

## Teaching methodology

Some materials will be posted online. The main results will be explained on the blackboard. Classes combine problem solving with application examples.

## Evaluation methodology

The evaluation is based on the development of several projects. Each project i receives a mark Mi, and the final mark is computed as

FM = Sum_i (Wi * Mi)

where:

• Wi is the weight of project i, i = 1, ..., N
• Mi is the mark of project i, i = 1, ..., N

The number of projects may vary over time, but in general, the following projects are foreseen:
* P1 (25%): Basic probability, information theory, and linear algebra
* P2 (25%): Estimation, ML and Bayesian approaches
* P3 (25%): Understanding Bias-Variance tradeoff
* P4 (25%): Basic regression and classification
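As a quick arithmetic check of the weighting scheme with the four foreseen 25% projects (the marks below are made up, purely for illustration):

```python
# Final mark FM = Sum_i Wi * Mi with the four foreseen project weights.
weights = [0.25, 0.25, 0.25, 0.25]   # P1..P4, each 25%
marks = [8.0, 6.5, 9.0, 7.5]         # illustrative marks, not real data
fm = sum(w * m for w, m in zip(weights, marks))
print(fm)  # 7.75
```

With equal weights the final mark is simply the average of the four project marks.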