Advanced Natural Language Processing

Credits
5
Type
Elective
Requirements
This subject has no formal requirements, but it assumes previous capacities (listed below)
Department
CS
Can a machine learn to correct the grammaticality of text? Can a machine learn to answer questions we ask in plain English? Can a machine learn to translate languages, using Wikipedia as a training set?

This course offers in-depth coverage of methods for Natural Language Processing. We will present fundamental models and tools to approach a variety of Natural Language Processing tasks, ranging from syntactic processing to semantic processing to final applications such as information extraction, human-machine dialogue systems, and machine translation. The flow of the course is along two main axes: (1) computational formalisms to describe natural language processes, and (2) statistical and machine learning methods to acquire linguistic models from large data collections.

Weekly hours

Theory
2
Problems
1
Laboratory
0
Guided learning
0
Autonomous learning
5.3

Objectives

  1. Learn to apply statistical methods for NLP in a practical application
    Related competences: CEA3, CEA5, CT3, CB6, CB8, CB9
  2. Understand statistical and machine learning techniques applied to NLP
    Related competences: CEA3, CG3, CT6, CT7, CB6
  3. Develop the ability to solve technical problems related to statistics and algorithms in NLP
    Related competences: CEA3, CEA5, CG3, CT7, CB6, CB8, CB9
  4. Understand fundamental methods of Natural Language Processing from a computational perspective
    Related competences: CEA5, CT7, CB6

Contents

  1. Course Introduction
    Fundamental tasks in NLP. Main challenges in NLP. Review of statistical paradigms. Review of language modeling techniques.
  2. Classification in NLP
    Review of supervised machine learning methods. Linear classifiers. Generative and discriminative learning. Feature representations in NLP. The EM algorithm.
  3. Sequence Models
    Hidden Markov Models. Log-linear models and Conditional Random Fields. Applications to part-of-speech tagging and named-entity extraction.
  4. Syntax and Parsing
    Probabilistic Context Free Grammars. Dependency Grammars. Parsing Algorithms. Discriminative Learning for Parsing.
  5. Machine Translation
    Introduction to Statistical Machine Translation. The IBM models. Phrase-based methods. Syntax-based approaches to translation.
  6. Unsupervised and Semisupervised methods in NLP
    Bootstrapping. Cotraining. Distributional methods.

Activities

Course Introduction

Review of the field of Natural Language Processing and its main challenges. Review of the statistical paradigm. Review of language models. Students should understand the basic questions for which we will see a variety of techniques during the course; see the sketch below for a first taste.
Objectives: 4 2
Contents:
Theory
2h
Problems
1h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
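
To make the review of language models concrete, here is a minimal Python sketch of a bigram language model with add-one (Laplace) smoothing. The toy corpus and function names are illustrative assumptions, not course material.

    from collections import Counter

    def train_bigram_lm(sentences):
        # Count unigrams and bigrams over a tokenized corpus,
        # padding each sentence with start/end markers.
        unigrams, bigrams = Counter(), Counter()
        for sent in sentences:
            tokens = ["<s>"] + sent + ["</s>"]
            unigrams.update(tokens)
            bigrams.update(zip(tokens, tokens[1:]))
        return unigrams, bigrams

    def bigram_prob(unigrams, bigrams, prev, word):
        # P(word | prev) with add-one (Laplace) smoothing.
        vocab_size = len(unigrams)
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    # Toy corpus: P("language" | "natural") = (2 + 1) / (2 + 6) = 0.375
    corpus = [["natural", "language", "processing"],
              ["natural", "language", "models"]]
    uni, bi = train_bigram_lm(corpus)
    print(bigram_prob(uni, bi, "natural", "language"))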

Classification in NLP

These lectures present machine learning algorithms used in the field of NLP. Special attention is given to the difference between generative and discriminative methods for parameter estimation. We will also present the types of features typically used in discriminative NLP methods. We expect that students already have some background in machine learning; the goal of these lectures is to see how machine learning is applied to NLP. A small sketch of a linear classifier with sparse features is given below.
Objectives: 4 2
Contents:
Theory
5h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
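
As a taste of the discriminative methods and feature representations covered here, the following is a minimal sketch of a binary perceptron over sparse indicator features. The feature template and toy sentiment data are hypothetical.

    from collections import defaultdict

    def features(words):
        # Sparse indicator features (hypothetical bag-of-words template).
        feats = defaultdict(float)
        for w in words:
            feats["word=" + w] += 1.0
        return feats

    def perceptron_train(data, epochs=10):
        # Binary perceptron: data is a list of (word list, label in {-1, +1}).
        w = defaultdict(float)
        for _ in range(epochs):
            for words, y in data:
                f = features(words)
                score = sum(w[k] * v for k, v in f.items())
                if y * score <= 0:  # mistake-driven update
                    for k, v in f.items():
                        w[k] += y * v
        return w

    # Toy sentiment data (hypothetical)
    data = [(["great", "movie"], +1), (["boring", "plot"], -1)]
    w = perceptron_train(data)
    print(sum(w[k] * v for k, v in features(["great", "movie"]).items()))  # positive score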

Problem Set 1


Objectives: 4 2 3
Week: 4
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Sequence Models in NLP

These lectures will present sequence models, an important set of tools for sequential tasks. We will present this in the framework of structured prediction (later in the course we will see that the same framework is used for parsing and translation). We will focus on machine learning aspects as well as algorithmic aspects, with special emphasis on Conditional Random Fields. A minimal Viterbi decoding sketch is given below.
Objectives: 4 2
Contents:
Theory
6h
Problems
4h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
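
To illustrate the algorithmic side of sequence models, here is a minimal sketch of Viterbi decoding for an HMM in log space, applied to a toy part-of-speech example; the probabilities are made up for illustration.

    import math

    def viterbi(obs, states, log_start, log_trans, log_emit):
        # Most likely state sequence under an HMM, computed in log space.
        V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
        back = []
        for o in obs[1:]:
            col, ptr = {}, {}
            for s in states:
                prev = max(states, key=lambda p: V[-1][p] + log_trans[p][s])
                col[s] = V[-1][prev] + log_trans[prev][s] + log_emit[s][o]
                ptr[s] = prev
            V.append(col)
            back.append(ptr)
        last = max(states, key=lambda s: V[-1][s])  # trace back from best final state
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))

    # Toy POS-tagging HMM (hypothetical probabilities)
    lg = math.log
    states = ["N", "V"]
    start = {"N": lg(0.7), "V": lg(0.3)}
    trans = {"N": {"N": lg(0.3), "V": lg(0.7)}, "V": {"N": lg(0.8), "V": lg(0.2)}}
    emit = {"N": {"dogs": lg(0.6), "bark": lg(0.4)},
            "V": {"dogs": lg(0.1), "bark": lg(0.9)}}
    print(viterbi(["dogs", "bark"], states, start, trans, emit))  # ['N', 'V']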

Problem Set 2


Objectives: 4 2 3
Week: 7
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Syntax and Parsing

We will present statistical models for syntactic structure, and tree structures in general. The focus will be on probabilistic context-free grammars and dependency grammars, two standard formalisms. We will see relevant algorithms, as well as methods to learn grammars from data based on the structured prediction framework. A minimal CKY parsing sketch is given below.
Objectives: 4 2
Contents:
Theory
6h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
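
As a sketch of the parsing algorithms covered in this block, below is a minimal probabilistic CKY recognizer for a PCFG in Chomsky normal form; the grammar and lexicon are toy assumptions.

    def cky(words, lexicon, rules):
        # Probabilistic CKY for a PCFG in Chomsky normal form.
        # lexicon: {(A, word): prob} for A -> word
        # rules:   {(A, B, C): prob} for A -> B C
        n = len(words)
        chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
        for i, w in enumerate(words):
            for (A, word), p in lexicon.items():
                if word == w:
                    chart[i][i + 1][A] = p
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                for k in range(i + 1, j):  # split point
                    for (A, B, C), p in rules.items():
                        if B in chart[i][k] and C in chart[k][j]:
                            cand = p * chart[i][k][B] * chart[k][j][C]
                            if cand > chart[i][j].get(A, 0.0):
                                chart[i][j][A] = cand
        return chart[0][n].get("S", 0.0)  # probability of the best parse

    # Toy grammar (hypothetical): S -> NP VP, with a two-word lexicon
    rules = {("S", "NP", "VP"): 1.0}
    lexicon = {("NP", "dogs"): 1.0, ("VP", "bark"): 1.0}
    print(cky(["dogs", "bark"], lexicon, rules))  # 1.0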

Problem Set 3


Objectives: 4 2 3
Week: 10
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Statistical Machine Translation

We will present the basic elements of statistical machine translation systems, including representation aspects, algorithmic aspects, and methods for parameter estimation. A minimal sketch of IBM Model 1 estimation is given below.
Objectives: 4 2
Contents:
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
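
To give a flavor of parameter estimation in translation models, here is a minimal sketch of EM for IBM Model 1 lexical translation probabilities t(f | e); the tiny parallel corpus is hypothetical.

    from collections import defaultdict

    def ibm1(pairs, iterations=10):
        # EM for IBM Model 1 lexical probabilities t(f | e).
        # pairs: list of (source tokens, target tokens).
        f_vocab = {f for fs, _ in pairs for f in fs}
        t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform initialization
        for _ in range(iterations):
            count, total = defaultdict(float), defaultdict(float)
            for fs, es in pairs:
                for f in fs:  # E-step: expected alignment counts
                    norm = sum(t[(f, e)] for e in es)
                    for e in es:
                        c = t[(f, e)] / norm
                        count[(f, e)] += c
                        total[e] += c
            for (f, e), c in count.items():  # M-step: re-estimate t
                t[(f, e)] = c / total[e]
        return t

    # Toy parallel corpus (hypothetical): "la" pairs with "the",
    # which pushes "casa" toward "house" over EM iterations.
    pairs = [(["la", "casa"], ["the", "house"]), (["la"], ["the"])]
    t = ibm1(pairs)
    print(round(t[("casa", "house")], 3))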

Unsupervised Methods in NLP

We will review several methods for unsupervised learning in NLP, in the context of lexical models, sequence models, and grammatical models. We will focus on bootstrapping and cotraining methods, the EM algorithm, and distributional methods. A small sketch of a distributional similarity computation is given below.
Objectives: 4 2
Contents:
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
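
As an illustration of the distributional methods mentioned here, below is a minimal sketch that builds word-context count vectors and compares them with cosine similarity; the toy corpus is hypothetical.

    import math
    from collections import Counter, defaultdict

    def context_vectors(sentences, window=1):
        # Count the words co-occurring within +/- window of each token.
        vecs = defaultdict(Counter)
        for sent in sentences:
            for i, w in enumerate(sent):
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vecs[w][sent[j]] += 1
        return vecs

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Toy corpus (hypothetical): "cat" and "dog" occur in similar contexts
    corpus = [["the", "cat", "sleeps"], ["the", "dog", "sleeps"],
              ["the", "cat", "eats"]]
    vecs = context_vectors(corpus)
    print(round(cosine(vecs["cat"], vecs["dog"]), 3))  # about 0.866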

Problem Set 4


Objectives: 4 2 3
Week: 14
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
6h

Final Exam


Objectives: 4 2 3
Week: 15
Type: theory exam
Theory
3h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
10.5h

Project


Objectives: 4 2 1
Week: 16
Type: assignment
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
45h

Teaching methodology

The course will be structured around five main blocks of lectures. In each theory lecture, we will present fundamental algorithmic and statistical techniques for NLP. This will be followed by problem lectures, where we will look in detail at the derivations of algorithms and the mathematical proofs that are necessary to understand statistical methods in NLP.

Furthermore, there will be four problem sets that students need to solve at home. Each problem set will consist of three or four problems that require the student to understand the elements behind statistical NLP methods. In some cases, these problems will involve writing small programs to analyze data and perform computations.

Finally, students will develop a practical project in teams of two or three. The goal of the project is to put into practice the methods learned in class and to learn the experimental methodology used in the NLP field. Students have to identify existing components (i.e., data and tools) that can be used to build a system, and run experiments to carry out an empirical analysis of some statistical NLP method.

Evaluation methodology

Final grade = 0.6 × final exam + 0.4 × project

where

final exam is the grade of the final exam

project is the grade of the project
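
For example, hypothetical grades of 7.0 on the final exam and 8.5 on the project would give a final grade of 0.6 × 7.0 + 0.4 × 8.5 = 7.6.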

Previous capacities

- Introductory concepts and methods of Natural Language processing.

- Introductory concepts and methods of Machine Learning.

- Programming.