
Statistical Analysis of Networks and Systems

Credits
6
Types
Specialization compulsory (Computer Networks and Distributed Systems)
Requirements
This subject has no formal prerequisites, but it relies on the previous capacities listed at the end of this page.
Department
AC
The course covers basic techniques used in the statistical analysis of networks and systems. It first reviews and extends basic concepts of probability, information theory, and linear algebra. It then presents basic estimation techniques and covers the main Machine Learning approaches for regression and classification. Finally, it presents linear methods for dimensionality reduction.

Teachers

Person in charge

Others

Weekly hours

Theory
3.4
Problems
0.6
Laboratory
0
Guided learning
0
Autonomous learning
7.1

Competences

Computer networks and distributed systems

  • CEE2.2 - Capability to understand models, problems and algorithms related to computer networks, and to design and evaluate algorithms, protocols and systems that handle the complexity of computer communication networks.

Generic

  • CG4 - Capacity for general and technical management of research, development and innovation projects in companies and technology centers in the field of Informatics Engineering.

Appropriate attitude towards work

  • CTR5 - Capability to be motivated by professional achievement and to face new challenges, and to have a broad vision of the possibilities of a career in the field of informatics engineering. Capability to be motivated by quality and continuous improvement, and to act rigorously in professional development. Capability to adapt to technological or organizational changes. Capacity for working in the absence of information and/or with time and/or resource constraints.

Basic

  • CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.

    Objectives

    1. The main goal of the course is to develop the students' quantitative modeling skills, based on probabilistic techniques.
      Related competences: CB6, CTR5, CEE2.2, CG4

    Contents

    1. Review of probability models
      Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, central limit theorem. Properties of multivariate Gaussian distributions. Entropy, mutual information, KL-divergence, and cross entropy.
    2. Estimation. Basic Machine Learning techniques for classification and regression
      Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation. Bias-variance tradeoff. Classification. Linear regression, logistic regression. k-means clustering. Random forests. Introduction to neural networks: FFNN, backpropagation, SGD, CNN, RNN, autoencoders.
    3. Linear models and dimensionality reduction
      Review of basic linear algebra results, spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number, dimensionality reduction, PCA, eigenfaces, Sparse representation, Fourier matrices.

    Activities



    Review of probability models

    Probability axioms, basic combinatorics, random variables, independence and conditional probability, Bayes formula, total probability. Expected values, variance-covariance matrix, inequalities (Markov, Chebyshev, Chernoff, Jensen), (weak) law of large numbers, entropy and mutual information. Cross entropy as a cost function. Properties of Gaussian distributions, central limit theorem.
    Objectives: 1
    Contents:
    Theory
    8h
    Problems
    2h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    0h
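The information-theoretic quantities reviewed in this activity (entropy, KL-divergence, cross entropy) are tied together by the identity H(p, q) = H(p) + D(p‖q). A minimal sketch in plain Python, with illustrative helper names and distributions, checking the identity numerically:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl(p, q):
    """KL-divergence D(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """Cross entropy H(p, q) in bits."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

# Two example distributions on a three-element alphabet
p = [0.5, 0.25, 0.25]
q = [0.25, 0.25, 0.5]

# The identity H(p, q) = H(p) + D(p || q) holds term by term
gap = cross_entropy(p, q) - (entropy(p) + kl(p, q))
```

This is why minimizing cross entropy against a fixed empirical distribution p is equivalent to minimizing the KL-divergence to p, which motivates its use as a cost function.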

    Linear models and dimensionality reduction

    Review of basic linear algebra results, spectral theorem for symmetric matrices, positive definite matrices, matrix calculus, trace operator, SVD, matrix norms, Eckart-Young theorem, condition number, dimensionality reduction, PCA, eigenfaces, Sparse representation, Fourier matrices, Graph signal processing
    Objectives: 1
    Contents:
    Theory
    8h
    Problems
    2h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    0h
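The SVD, PCA, and the Eckart-Young theorem listed in this activity can be illustrated numerically. The sketch below, which assumes NumPy is available and uses synthetic data (all names are illustrative), builds the best rank-1 approximation of a centered data matrix and checks that its Frobenius error equals the norm of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
# 100 points lying near a 1-D subspace of R^3, plus small noise
t = rng.normal(size=(100, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)          # center the data before PCA

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Eckart-Young: the best rank-1 approximation in Frobenius norm keeps
# the top singular triplet; the error is sqrt(s_2^2 + s_3^2)
X1 = s[0] * np.outer(U[:, 0], Vt[0])
err = np.linalg.norm(Xc - X1)

# Fraction of total variance captured by the first principal component
explained = s[0] ** 2 / (s ** 2).sum()
```

Here the rows of Vt are the principal directions, so projecting Xc onto Vt[0] gives the one-dimensional PCA representation of the data.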

    Estimation. Basic Machine Learning techniques for regression and classification

    Maximum likelihood and Bayesian estimation. Decision functions, loss, risk, empirical risk minimization. Approximation and estimation. Bias-variance tradeoff. Classification. Linear regression. Basic neural network architectures.

    Theory
    26.9h
    Problems
    4.1h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    0h
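As a small illustration of empirical risk minimization with the logistic loss (one of the techniques covered in this activity), the sketch below fits a one-dimensional logistic regression by gradient descent on toy data; all names and data are illustrative:

```python
import math

# Toy 1-D binary classification data, linearly separable at x = 0
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Minimize the empirical logistic (log) loss by gradient descent
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # gradient of the log loss per point
        gw += err * x
        gb += err
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)
```

The same loop structure (forward pass, per-sample loss gradient, parameter update) is what stochastic gradient descent applies, mini-batch by mini-batch, when training the neural network architectures mentioned above.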

    Self-assessment test


    Objectives: 1
    Week: 8
    Theory
    0h
    Problems
    0h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    0h

    Self-assessment test


    Objectives: 1
    Week: 16
    Theory
    0h
    Problems
    0h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    0h

    Homework 1



    Theory
    0h
    Problems
    0h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    19.9h

    Homework 2



    Theory
    0h
    Problems
    0h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    20h

    Homework 3



    Theory
    0h
    Problems
    0h
    Laboratory
    0h
    Guided learning
    0h
    Autonomous learning
    20h

    Teaching methodology

    Some materials will be posted online. The main results will be explained on the blackboard. Classes combine lectures with problem solving and application examples.

    Evaluation methodology

    The assessment is based on three projects (P1, P2, P3) and two controls (C1, C2), one in the middle of the course and one at the end. Each control and each project is marked between 0 and 10 (0 ≤ mark ≤ 10). The final mark for the course (FM) is calculated as:

    FM = 0.2*(C1+C2+P1+P2+P3)

    In this course the proposed projects will be:
    * P1: Basic Probability, Information Theory and Linear Algebra.
    * P2: Basic estimation methods and Machine Learning.
    * P3: Neural Networks.
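
    For illustration, with hypothetical marks C1 = 7, C2 = 8, P1 = 6.5, P2 = 9, P3 = 7.5 (chosen here only as an example), the formula gives:

```python
# Hypothetical marks, each between 0 and 10
marks = {"C1": 7.0, "C2": 8.0, "P1": 6.5, "P2": 9.0, "P3": 7.5}

# FM = 0.2*(C1 + C2 + P1 + P2 + P3): the two controls and three
# projects each carry an equal 20% weight
FM = 0.2 * sum(marks.values())
```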

    Bibliography

    Basic

    Previous capacities

Basic knowledge of probability theory, linear algebra and calculus.