Advanced Human Languages Technologies


Credits
5
Types
  • MIRI: Specialization complementary (Data Science)
  • MAI: Elective
Requirements
This subject has no formal requirements, but it does assume the previous capacities listed below
Department
CS;TSC
This course offers in-depth coverage of the main basic tasks of Natural Language Processing. We will present fundamental models and tools to approach a variety of Natural Language Processing tasks, ranging from named entity recognition to syntactic processing and document classification. The course is organised along two main axes: (1) computational formalisms to describe natural language processes, and (2) statistical and machine learning methods to acquire linguistic models from large data collections and solve specific linguistic tasks.

Teachers

Person in charge

  • Salvador Medina Herrera

Others

  • Bardia Rafieian
  • Lluis Padro Cirera

Weekly hours

Theory
2
Problems
0
Laboratory
1
Guided learning
0
Autonomous learning
5.3

Competences

Generic Technical Competences

Generic

  • CG3 - Capacity for modeling, calculation, simulation, development and implementation in technology and company engineering centers, particularly in research, development and innovation in all areas related to Artificial Intelligence.

Technical Competences of each Specialization

Academic

  • CEA3 - Capability to understand the basic operating principles of the main Machine Learning techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA5 - Capability to understand the basic operating principles of the main Natural Language Processing techniques, and to know how to use them in the environment of an intelligent system or service.

Transversal Competences

Teamwork

  • CT3 - Ability to work as a member of an interdisciplinary team, either as a regular member or carrying out management tasks, in order to develop projects with pragmatism and a sense of responsibility, making commitments that take the available resources into account.

Reasoning

  • CT6 - Capability to evaluate and analyse situations, projects, proposals, reports and scientific-technical surveys in a reasoned and critical way, and to argue the reasons that explain or justify them.

Analysis and synthesis

  • CT7 - Capability to analyze and solve complex technical problems.

Basic

  • CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
  • CB8 - Capability to communicate their conclusions, and the knowledge and rationale underpinning these, to both specialist and non-specialist audiences in a clear and unambiguous way.
  • CB9 - Possession of the learning skills that enable the students to continue studying in a way that will be mainly self-directed or autonomous.

Objectives

  1. Learn to apply statistical methods for NLP in a practical application
    Related competences: CEA3, CEA5, CT3, CB6, CB8
  2. Understand statistical and machine learning techniques applied to NLP
    Related competences: CEA3, CG3, CT6, CT7, CB6
  3. Develop the ability to solve statistical and algorithmic technical problems in NLP
    Related competences: CEA3, CEA5, CG3, CT7, CB6, CB8, CB9
  4. Understand fundamental methods of Natural Language Processing from a computational perspective
    Related competences: CEA5, CT7, CB6

Contents

  1. Statistical Models for NLP
    Introduction to statistical modelling for language. Maximum Likelihood models and smoothing. Maximum Entropy estimation. Log-linear models.
  2. Distances and Similarities
    Distances (and similarities) between linguistic units. Textual, semantic, and distributional distances. Semantic spaces (WordNet, Wikipedia, Freebase, DBpedia).
  3. Sequence Prediction
    Prediction in word sequences: PoS tagging, NERC. Local classifiers, HMM, global predictors, Log-linear models.
  4. Syntactic Parsing
    Parsing constituent trees: PCFGs, CKY vs. inside-outside.
    Parsing dependency trees: CRFs for parsing; Earley algorithm.
  5. Document-level modelling
    Document representation: from BoW to NLU.
    Document similarities.
    Document classification.
  6. Deep Learning approaches - Introduction
    Introduction to ANN for NLP
    Lexical semantics. Word Embeddings
  7. Deep Learning approaches - Word Sequences
    PoS tagging, NERC
  8. Deep Learning Approaches - Sentences
    Sentence similarity, sentence classification. LSTM. BERT. Sentence embeddings
  9. Deep Learning approaches - Document Level
    Document similarity, document classification, document embeddings - doc2vec
  10. Deep Learning Approaches - Machine Translation
    Neural Machine Translation

Activities



Course Introduction

Introduction to statistical modelling for language. Maximum Likelihood models and smoothing. Maximum Entropy estimation. Log-linear models. A minimal worked sketch of these estimates is included after this activity.
Objectives: 4 2
Theory
2h
Problems
1h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
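
As a minimal sketch of the estimation techniques named above (not course material), the following Python fragment computes maximum-likelihood and add-one (Laplace) smoothed bigram probabilities; the tiny corpus and the function names are invented for illustration.

```python
from collections import Counter

# Toy corpus; in practice the counts would come from a large text collection.
corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"], ["a", "dog", "sleeps"]]

unigram_counts = Counter(w for sent in corpus for w in sent)
bigram_counts = Counter((u, v) for sent in corpus for u, v in zip(sent, sent[1:]))
vocab_size = len(unigram_counts)

def p_mle(v, u):
    """Maximum-likelihood estimate P(v | u) = count(u, v) / count(u)."""
    return bigram_counts[(u, v)] / unigram_counts[u] if unigram_counts[u] else 0.0

def p_laplace(v, u):
    """Add-one (Laplace) smoothing: unseen bigrams keep a small non-zero probability."""
    return (bigram_counts[(u, v)] + 1) / (unigram_counts[u] + vocab_size)

print(p_mle("dog", "the"), p_laplace("dog", "the"))  # seen bigram: 0.5 vs. 0.25
print(p_mle("cat", "a"), p_laplace("cat", "a"))      # unseen bigram: 0.0 vs. ~0.14
```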

Distances and Similarities

Distances (and similarities) between linguistic units. Textual, semantic, and distributional distances. Semantic spaces (WordNet, Wikipedia, Freebase, DBpedia). A small similarity sketch is included after this activity.
Objectives: 4 2
Theory
5h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
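
A minimal sketch of a distributional similarity computation, assuming NumPy is available; the toy context-count vectors below are invented for illustration.

```python
import numpy as np

# Toy distributional vectors (e.g. rows of a word-context co-occurrence matrix).
# Real vectors would be derived from corpora or resources such as Wikipedia.
vectors = {
    "dog": np.array([5.0, 1.0, 0.0, 2.0]),
    "cat": np.array([4.0, 2.0, 0.0, 1.0]),
    "car": np.array([0.0, 1.0, 6.0, 0.0]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine_similarity(vectors["dog"], vectors["cat"]))  # high: similar contexts
print(cosine_similarity(vectors["dog"], vectors["car"]))  # low: different contexts
```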

Sequence Models in NLP

These lectures will present sequence models, an important set of tools used for sequential tasks. We will present them within the framework of structured prediction (later in the course we will see that the same framework is used for parsing and translation). We will focus on machine learning aspects as well as algorithmic aspects, with special emphasis on Conditional Random Fields. Deep learning models will also be presented. A minimal Viterbi decoding sketch is included after this activity.
Objectives: 4 2
Theory
6h
Problems
4h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
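
A minimal sketch of Viterbi decoding for an HMM PoS tagger; the tag set and all probabilities below are hand-set toy values, not parameters estimated from data.

```python
import math

# Toy HMM for PoS tagging with hand-set (not learned) parameters, for illustration only.
tags = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {
    "DET":  {"DET": 0.05, "NOUN": 0.9, "VERB": 0.05},
    "NOUN": {"DET": 0.1,  "NOUN": 0.3, "VERB": 0.6},
    "VERB": {"DET": 0.5,  "NOUN": 0.4, "VERB": 0.1},
}
emit = {
    "DET":  {"the": 0.7, "a": 0.3},
    "NOUN": {"dog": 0.5, "cat": 0.4, "barks": 0.1},
    "VERB": {"barks": 0.8, "dog": 0.1, "cat": 0.1},
}

def viterbi(words):
    """Return the most probable tag sequence for `words` under the toy HMM (log-space)."""
    chart = [{t: (math.log(start[t]) + math.log(emit[t].get(words[0], 1e-12)), [t])
              for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            prev = max(tags, key=lambda p: chart[-1][p][0] + math.log(trans[p][t]))
            score = (chart[-1][prev][0] + math.log(trans[prev][t])
                     + math.log(emit[t].get(w, 1e-12)))
            row[t] = (score, chart[-1][prev][1] + [t])
        chart.append(row)
    return max(chart[-1].values(), key=lambda s: s[0])[1]

print(viterbi(["the", "dog", "barks"]))  # expected output: ['DET', 'NOUN', 'VERB']
```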

Syntax and Parsing

We will present statistical models for syntactic structure and, more generally, for tree structures. The focus will be on probabilistic context-free grammars and dependency grammars, two standard formalisms. We will cover the relevant algorithms, as well as methods to learn grammars from data based on the structured prediction framework. A minimal CKY parsing sketch is included after this activity.
Objectives: 4 2
Theory
6h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
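
A minimal sketch of probabilistic CKY parsing over a toy PCFG in Chomsky Normal Form; the grammar and its probabilities are invented for illustration.

```python
# Probabilistic CKY over a toy PCFG in Chomsky Normal Form (illustrative grammar only).
lexical = {                       # A -> word rules with probabilities
    ("DET", "the"): 1.0,
    ("N", "dog"): 0.6, ("N", "cat"): 0.4,
    ("V", "chases"): 1.0,
}
binary = {                        # A -> B C rules with probabilities
    ("S", "NP", "VP"): 1.0,
    ("NP", "DET", "N"): 1.0,
    ("VP", "V", "NP"): 1.0,
}

def cky(words):
    """Return the best probability of each non-terminal over the whole sentence."""
    n = len(words)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]  # chart[i][j] covers words[i:j]
    for i, w in enumerate(words):
        for (a, word), p in lexical.items():
            if word == w:
                chart[i][i + 1][a] = max(chart[i][i + 1].get(a, 0.0), p)
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for (a, b, c), p in binary.items():
                    if b in chart[i][k] and c in chart[k][j]:
                        score = p * chart[i][k][b] * chart[k][j][c]
                        chart[i][j][a] = max(chart[i][j].get(a, 0.0), score)
    return chart[0][n]

print(cky(["the", "dog", "chases", "the", "cat"]))  # contains 'S' with probability 0.24
```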

Document-level modelling

Document representation: from BoW to NLU. Document similarities. Document classification. Document embeddings (doc2vec). A minimal bag-of-words classification sketch is included after this activity.
Objectives: 4 2
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
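
A minimal sketch of bag-of-words document classification, assuming scikit-learn is available; the documents and labels are toy examples.

```python
# Bag-of-words document classification with scikit-learn (assumed installed); toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "the match ended with a late goal",
    "the striker scored twice in the final",
    "the parliament passed the new budget",
    "the minister announced a tax reform",
]
labels = ["sports", "sports", "politics", "politics"]

# CountVectorizer builds the BoW representation; LogisticRegression classifies it.
classifier = make_pipeline(CountVectorizer(), LogisticRegression())
classifier.fit(docs, labels)

print(classifier.predict(["the coach praised the goal"]))       # likely 'sports'
print(classifier.predict(["the budget reform was announced"]))  # likely 'politics'
```

Swapping the BoW vectorizer for sentence or document embeddings would move the same pipeline toward the NLU-style representations mentioned above.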

Neural Machine Translation

Neural Machine Translation
Objectives: 4 2
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Final Exam


Objectives: 4 2 3
Week: 15
Type: theory exam
Theory
3h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
10.5h

Project


Objectives: 4 2 1
Week: 16
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
45h

Teaching methodology

The course will be structured around four different linguistic analysis levels: word level, phrase level, sentence level, and document level. Typical NLP tasks and solutions corresponding to each level will be presented.
The first half of the course is devoted to "classical" statistical and ML approaches. The second half of the course revisits the same levels from a deep learning perspective.

Theoretical background and practical exercises will be developed in class.

Finally, students will develop a practical project in teams of two. The goal of the project is to put into practice the methods learned in class and to learn the experimental methodology used in the NLP field. Students have to identify existing components (i.e. data and tools) that can be used to build a system, and carry out experiments in order to perform an empirical analysis of some statistical NLP method.

Evaluation methodology

Final grade = 0.5*FE + 0.5*LP

where

FE is the grade of the final exam

LP is the grade of the lab project
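
For example, a final exam grade of 7.0 and a lab project grade of 8.0 would give a final grade of 0.5·7.0 + 0.5·8.0 = 7.5.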


Previous capacities

- Although not mandatory, familiarity with basic concepts and methods of Natural Language Processing is strongly recommended.

- Good understanding of basic concepts and methods of Machine Learning.

- Advanced programming skills.