Advanced Human Languages Technologies

Credits
5
Types
  • MIRI: Complementary specialization course (Data Science)
  • MAI: Elective
Requirements
This subject has no formal prerequisites, but it does assume certain previous capacities (listed at the end of this page).
Department
CS
Can a machine learn to correct grammatical errors in text? Can a machine learn to answer questions we ask in plain English? Can a machine learn to translate between languages, using Wikipedia as a training set?

This course offers in-depth coverage of methods for Natural Language Processing. We will present fundamental models and tools to approach a variety of Natural Language Processing tasks, ranging from syntactic and semantic processing to final applications such as information extraction, human-machine dialogue systems, and machine translation. The course develops along two main axes: (1) computational formalisms to describe natural language processes, and (2) statistical and machine learning methods to acquire linguistic models from large data collections.

Teachers

Person in charge

  • Lluis Padro Cirera

Others

  • Horacio Rodríguez Hontoria

Weekly hours

Theory
2
Problems
0
Laboratory
1
Guided learning
0
Autonomous learning
5.3

Competences

Generic Technical Competences

Generic

  • CG3 - Capacity for modeling, calculation, simulation, development and implementation in technology centers and company engineering departments, particularly in research, development and innovation, in all areas related to Artificial Intelligence.

Technical Competences of each Specialization

Academic

  • CEA3 - Capability to understand the basic operating principles of the main Machine Learning techniques, and to know how to use them in the environment of an intelligent system or service.
  • CEA5 - Capability to understand the basic operating principles of the main Natural Language Processing techniques, and to know how to use them in the environment of an intelligent system or service.

Transversal Competences

Teamwork

  • CT3 - Ability to work as a member of an interdisciplinary team, either as a regular member or performing management tasks, in order to develop projects with pragmatism and a sense of responsibility, making commitments that take the available resources into account.

Reasoning

  • CT6 - Capability to evaluate and analyze, in a reasoned and critical way, situations, projects, proposals, reports and scientific-technical surveys, and to argue the reasons that explain or justify them.

Analysis and synthesis

  • CT7 - Capability to analyze and solve complex technical problems.

Basic

  • CB6 - Ability to apply the acquired knowledge and problem-solving skills in new or unfamiliar environments within broader (or multidisciplinary) contexts related to their area of study.
  • CB8 - Capability to communicate their conclusions, and the knowledge and rationale underpinning these, to both specialist and non-specialist audiences in a clear and unambiguous way.
  • CB9 - Possession of the learning skills that enable the students to continue studying in a way that will be mainly self-directed or autonomous.

Objectives

  1. Learn to apply statistical methods for NLP in a practical application
    Related competences: CT3, CEA3, CEA5, CB6, CB8
  2. Understand statistical and machine learning techniques applied to NLP
    Related competences: CT6, CT7, CEA3, CG3, CB6
  3. Develop the ability to solve technical problems involving the statistical and algorithmic aspects of NLP
    Related competences: CT7, CEA3, CEA5, CG3, CB6, CB8, CB9
  4. Understand fundamental methods of Natural Language Processing from a computational perspective
    Related competences: CT7, CEA5, CB6

Contents

  1. Syntactic Parsing
    Three lectures of the course will be devoted to syntactic parsing:

    1.- Statistical parsing. The core formalism is the stochastic context-free grammar (SCFG). Learning (supervised, from treebanks, or unsupervised, via the inside-outside algorithm) and parsing (Viterbi). Pros and cons of SCFGs. Other probabilistic approaches.

    2.- Dependency parsing. Projective and non-projective dependency trees. The Eisner and Chu-Liu/Edmonds algorithms. Transition-based parsing.

    3.- Robust parsing. Chunking. HMM-based chunkers. Cascaded FSM chunkers, grammars for chunking.
  2. Distances and Similarities
    Distances (and similarities) between linguistic units. Textual, semantic, and distributional distances. Semantic spaces (WordNet, Wikipedia, Freebase, DBpedia).
  3. Semantic Role Labelling
    The concept of semantic role. Mapping syntactic dependencies into semantic roles. Semantic arguments of a predicate. Semantic Role Labelers. Resources for learning SRL: VerbNet, PropBank.
  4. Semantic Parsing
    Semantic Representation. Semantic parsing. Building semantic grammars. Learning semantic parsers.
  5. Distributional models
    Distributional models of semantics. Vector Space Model (VSM). Dimensionality reduction. Latent Semantic Indexing (LSI). Using topic models: Latent Dirichlet Allocation (LDA). (A minimal cosine-similarity sketch follows this list.)
  6. Linguistic Inference
    Detecting inference between linguistic units. Recognizing Textual Entailment. The case of paraphrasing.
  7. Deep Learning for NLP
    Three lectures will be devoted to Deep Learning for NLP:

    1.- Using standard Python modules for ML approaches to NLP tasks: SciPy and scikit-learn for basic ML models. Neural networks. Linear models. Feed-forward NNs. The simple perceptron. The multilayer perceptron (MLP). NLP applications.

    2.- Libraries and languages for NN: Theano, TensorFlow, Keras. More advanced NN. Convolutional NN, Embeddings of words and more complex units. Word2Vec and other embeddings. NLP applications.

    3.- Recurrent NNs (RNNs), combinations of RNNs, and NNs with memory: GRU, LSTM. NLP applications. (A minimal Keras sketch combining embeddings and an LSTM follows this list.)
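
As a minimal illustration of the distributional distances in items 2 and 5, the sketch below builds toy co-occurrence vectors and compares two words by cosine similarity. It assumes nothing beyond the Python standard library; the corpus, window size, and word pair are invented for illustration only.

```python
# Minimal sketch: cosine similarity in a toy distributional vector space.
# The corpus, window size, and target words are invented for illustration.
from collections import Counter
import math

corpus = ["the cat sat on the mat".split(),
          "the dog sat on the rug".split(),
          "the cat chased the dog".split()]

def cooc_vector(target, window=2):
    """Count context words within `window` positions of `target`."""
    vec = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == target:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[sent[j]] += 1
    return vec

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# 'cat' and 'dog' share contexts ('the', 'sat'), so similarity is high.
print(cosine(cooc_vector("cat"), cooc_vector("dog")))
```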
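For the Deep Learning lectures, here is a minimal sketch, assuming a Keras/TensorFlow setup, of an embedding-plus-LSTM binary text classifier of the kind touched on in lectures 2 and 3 above. The vocabulary size, sequence length, and randomly generated data are placeholder assumptions, not course material.

```python
# Minimal sketch (assumed Keras/TensorFlow setup): an Embedding + LSTM
# binary text classifier. Vocabulary size, sequence length, and the
# randomly generated data are placeholders for illustration only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len = 5000, 40             # assumed values

model = keras.Sequential([
    layers.Embedding(vocab_size, 64),      # learn word embeddings
    layers.LSTM(32),                       # recurrent layer with memory gates
    layers.Dense(1, activation="sigmoid"), # binary decision (e.g. sentiment)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Toy integer-encoded "sentences" and labels, just to show the training call.
X = np.random.randint(1, vocab_size, size=(100, seq_len))
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=1, batch_size=16)
```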

Activities

Course Introduction

Review of the field of Natural Language Processing and its main challenges. Review of the statistical paradigm. Review of language models. Students should come away understanding the basic questions that the techniques presented throughout the course are designed to answer.
Theory
2h
Problems
1h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Classification in NLP

These lectures present machine learning algorithms used in the field of NLP. Special attention is given to the difference between generative and discriminative methods for parameter estimation. We will also present the types of features typically used in discriminative methods for NLP. We expect students to already have some background in machine learning; the goal of these lectures is to see how machine learning is applied to NLP. (A minimal sketch contrasting a generative and a discriminative classifier follows below.)
Theory
5h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
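
To make the generative/discriminative contrast concrete, here is a minimal sketch, assuming scikit-learn is available: multinomial Naive Bayes (generative) and logistic regression (discriminative) trained on the same bag-of-words features. The toy sentences and labels are invented.

```python
# Minimal sketch: generative (Naive Bayes) vs. discriminative (logistic
# regression) classification on the same bag-of-words features.
# Toy sentences and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

texts = ["great movie", "awful film", "loved it", "hated it"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

X = CountVectorizer().fit_transform(texts)

nb = MultinomialNB().fit(X, labels)       # generative: models P(x, y)
lr = LogisticRegression().fit(X, labels)  # discriminative: models P(y | x)
print(nb.predict(X), lr.predict(X))
```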

Problem Set 1


Week: 4
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Sequence Models in NLP

These lectures will present sequence models, an important set of tools for sequential prediction tasks. We will present them within the framework of structured prediction (later in the course we will see that the same framework is used for parsing and translation). We will focus on machine learning aspects as well as algorithmic aspects, with special emphasis on Conditional Random Fields. (A minimal Viterbi decoding sketch follows below.)
Theory
6h
Problems
4h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
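
A minimal sketch of Viterbi decoding for a toy HMM tagger is shown below; the tagset, probabilities, and sentence are invented. A CRF would replace the transition and emission probabilities with feature-based potentials, but the decoding algorithm is the same dynamic program.

```python
# Minimal sketch: Viterbi decoding for a toy HMM part-of-speech tagger.
# States, probabilities, and the example sentence are invented.
import math

states = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {"DET":  {"DET": 0.05, "NOUN": 0.9,  "VERB": 0.05},
         "NOUN": {"DET": 0.1,  "NOUN": 0.3,  "VERB": 0.6},
         "VERB": {"DET": 0.5,  "NOUN": 0.4,  "VERB": 0.1}}
emit = {"DET":  {"the": 0.9,  "dog": 0.05, "barks": 0.05},
        "NOUN": {"the": 0.05, "dog": 0.85, "barks": 0.1},
        "VERB": {"the": 0.05, "dog": 0.05, "barks": 0.9}}

def viterbi(words):
    # delta[s] = best log-probability of any tag path ending in state s
    delta = {s: math.log(start[s] * emit[s][words[0]]) for s in states}
    backptr = []
    for w in words[1:]:
        prev, delta, ptr = delta, {}, {}
        for s in states:
            best = max(states, key=lambda p: prev[p] + math.log(trans[p][s]))
            delta[s] = prev[best] + math.log(trans[best][s] * emit[s][w])
            ptr[s] = best
        backptr.append(ptr)
    # Recover the best path by following back-pointers from the best end state.
    path = [max(states, key=lambda s: delta[s])]
    for ptr in reversed(backptr):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi("the dog barks".split()))  # expected: ['DET', 'NOUN', 'VERB']
```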

Problem Set 2


Week: 7
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Syntax and Parsing

We will present statistical models for syntactic structure, and tree structures in general. The focus will be on probabilistic context-free grammars and dependency grammars, two standard formalisms. We will see the relevant algorithms, as well as methods to learn grammars from data based on the structured prediction framework. (A toy probabilistic CKY parser is sketched below.)
Theory
6h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
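
As a companion to the PCFG material, the sketch below runs probabilistic CKY (a Viterbi search over parse trees) on a tiny grammar in Chomsky normal form; the grammar, probabilities, and sentence are invented for illustration.

```python
# Minimal sketch: probabilistic CKY (Viterbi parsing) for a toy PCFG in
# Chomsky normal form. Grammar, probabilities, and sentence are invented.
import math

binary = {("S", "NP", "VP"): 1.0,    # rule (parent, left, right) -> prob
          ("VP", "V", "NP"): 1.0,
          ("NP", "DET", "N"): 1.0}
lexical = {("DET", "the"): 1.0,      # rule (parent, word) -> prob
           ("N", "dog"): 0.5, ("N", "cat"): 0.5,
           ("V", "sees"): 1.0}

def cky(words):
    n = len(words)
    # chart[i][j]: nonterminal -> best log-probability over span words[i:j]
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for (a, word), p in lexical.items():
            if word == w:
                chart[i][i + 1][a] = math.log(p)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):                    # split point
                for (a, b, c), p in binary.items():
                    if b in chart[i][k] and c in chart[k][j]:
                        score = math.log(p) + chart[i][k][b] + chart[k][j][c]
                        if score > chart[i][j].get(a, float("-inf")):
                            chart[i][j][a] = score
    return chart[0][n]

print(cky("the dog sees the cat".split()))  # contains 'S' if the sentence parses
```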

Problem Set 3


Week: 10
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Statistical Machine Translation

We will present the basic elements of statistical machine translation systems, including representation aspects, algorithmic aspects, and methods for parameter estimation. (A minimal EM sketch for estimating word-translation probabilities follows below.)
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
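
A classic instance of parameter estimation in statistical MT is IBM Model 1. The sketch below runs a few EM iterations on an invented two-sentence parallel corpus to estimate word-translation probabilities; the NULL word is omitted for brevity.

```python
# Minimal sketch: EM estimation of IBM Model 1 word-translation probabilities
# t(f | e) on a tiny invented parallel corpus (NULL word omitted for brevity).
from collections import defaultdict

corpus = [("la casa".split(), "the house".split()),
          ("la puerta".split(), "the door".split())]

src_vocab = {f for fs, _ in corpus for f in fs}
t = {(f, e): 1.0 / len(src_vocab)          # uniform initialization
     for fs, es in corpus for f in fs for e in es}

for _ in range(20):                        # EM iterations
    count = defaultdict(float)             # expected counts c(f, e)
    total = defaultdict(float)             # normalizer per target word e
    for fs, es in corpus:
        for f in fs:                       # E-step: fractional alignment counts
            z = sum(t[(f, e)] for e in es)
            for e in es:
                count[(f, e)] += t[(f, e)] / z
                total[e] += t[(f, e)] / z
    for (f, e), c in count.items():        # M-step: renormalize
        t[(f, e)] = c / total[e]

print(round(t[("casa", "house")], 3))      # should approach 1.0
```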

Unsupervised Methods in NLP

We will review several methods for unsupervised learning in NLP, in the context of lexical models, sequence models, and grammatical models. We will focus on bootstrapping and co-training methods, the EM algorithm, and distributional methods. (A minimal self-training sketch follows below.)
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
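
As an illustration of the bootstrapping idea, here is a minimal self-training sketch, assuming scikit-learn: a classifier trained on a few labeled examples repeatedly labels its most confident unlabeled example and retrains on the enlarged set. All data is invented.

```python
# Minimal sketch: self-training (bootstrapping) for text classification.
# A classifier trained on a tiny labeled set repeatedly labels its most
# confident unlabeled example and retrains. All data is invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["great film", "terrible film"]
y_l = np.array([1, 0])                     # 1 = positive, 0 = negative
unlabeled = ["great plot", "terrible acting", "great cast", "terrible script"]

vec = CountVectorizer().fit(labeled + unlabeled)
X_l = vec.transform(labeled).toarray()
X_u = vec.transform(unlabeled).toarray()
pool = list(range(len(unlabeled)))         # indices still unlabeled

for _ in range(3):                         # a few bootstrapping rounds
    clf = LogisticRegression().fit(X_l, y_l)
    probs = clf.predict_proba(X_u[pool])
    best = int(probs.max(axis=1).argmax()) # most confident unlabeled example
    idx = pool.pop(best)
    X_l = np.vstack([X_l, X_u[idx:idx + 1]])
    y_l = np.append(y_l, clf.predict(X_u[idx:idx + 1]))

print(y_l)  # labels now include the self-assigned ones
```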

Problem Set 4


Week: 14
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Final Exam


Week: 15
Type: theory exam
Theory
3h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
10h

Project


Week: 16
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
45h

Teaching methodology

The course will be structured around five main blocks of lectures. In each theory lecture, we will present fundamental algorithmic and statistical techniques for NLP. This will be followed by problem lectures, where we will look in detail at the derivations of algorithms and the mathematical proofs needed to understand statistical methods in NLP.

Furthermore, there will be four problem sets that students need to solve at home. Each problem set will consist of three or four problems that will require the student to understand the elements behind statistical NLP methods. In some cases these problems will involve writing small programs to analyze data and perform some computation.

Finally, students will develop a practical project in teams of two or three. The goal of the project is to put into practice the methods learned in class and to learn the experimental methodology used in the NLP field. Students have to identify existing components (i.e. data and tools) that can be used to build a system, and run experiments in order to carry out an empirical analysis of some statistical NLP method.

Evaluation methodology

Final grade = 0.5*FE + 0.5*LP

where

FE is the grade of the final exam

LP is the grade of the lab project

Previous capacities

- Introductory concepts and methods of Natural Language Processing.

- Introductory concepts and methods of Machine Learning.

- Programming.