Advanced Natural Language Processing


Credits
6
Type
Elective
Prerequisites
This course has no prerequisites
Department
CS
Can a machine learn to correct the grammar of a text? Can a machine learn to answer questions asked in plain English? Can a machine learn to translate between languages, using Wikipedia as a training set?

This course offers in-depth coverage of methods for Natural Language Processing. We will present fundamental models and tools for a variety of Natural Language Processing tasks, ranging from syntactic and semantic processing to final applications such as information extraction, human-machine dialogue systems, and machine translation. The course is organized along two main axes: (1) computational formalisms to describe natural language processes, and (2) statistical and machine learning methods to acquire linguistic models from large data collections.

Weekly hours

Theory
2
Problems
1
Laboratory
0
Guided learning
0.6
Autonomous learning
6.5

Objectives

  1. Understand fundamental methods of Natural Language Processing from a computational perspective
    Related competences: CG3, CB6, CB9, CEC1, CEC2, CTR6
  2. Understand statistical and machine learning techniques applied to NLP
    Related competences: CG3, CB6, CB9, CEC1, CEC2, CTR6
  3. Develop the ability to solve technical problems of a statistical and algorithmic nature in NLP
    Related competences: CG3, CB6, CB8, CB9, CEC1, CEC2, CTR6
  4. Learn to apply statistical methods for NLP in a practical application
    Related competences: CG3, CB6, CB8, CB9, CEC1, CEC2, CTR3, CTR6

Contents

  1. Course Introduction
    Fundamental tasks in NLP. Main challenges in NLP. Review of statistical paradigms. Review of language modeling techniques.
  2. Classification in NLP
    Review of supervised machine learning methods. Linear classifiers. Generative and discriminative learning. Feature representations in NLP. The EM algorithm.
  3. Sequence Models
    Hidden Markov Models. Log-linear models and Conditional Random Fields. Applications to part-of-speech tagging and named-entity extraction. (A minimal Viterbi decoding sketch follows this list.)
  4. Syntax and Parsing
    Probabilistic Context Free Grammars. Dependency Grammars. Parsing Algorithms. Discriminative Learning for Parsing.
  5. Machine Translation
    Introduction to Statistical Machine Translation. The IBM models. Phrase-based methods. Syntax-based approaches to translation.
  6. Unsupervised and Semi-Supervised Methods in NLP
    Bootstrapping. Cotraining. Distributional methods.
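
As an informal illustration of the sequence models listed in topic 3, the following is a minimal sketch of Viterbi decoding for a toy hidden Markov model part-of-speech tagger. The tag set, words, and all probabilities are invented for illustration and are not part of the syllabus.

    # Minimal sketch: Viterbi decoding for a toy HMM tagger.
    # All probabilities below are made-up illustrative values.
    import math

    def viterbi(words, tags, start_p, trans_p, emit_p):
        """Return the most likely tag sequence for `words` under the HMM."""
        # delta[i][t]: best log-probability of a tag sequence for words[:i+1]
        # ending in tag t; back[i][t]: previous tag on that best path.
        delta = [{t: math.log(start_p[t]) + math.log(emit_p[t].get(words[0], 1e-6))
                  for t in tags}]
        back = [{}]
        for i, w in enumerate(words[1:], start=1):
            delta.append({})
            back.append({})
            for t in tags:
                prev = max(tags, key=lambda p: delta[i - 1][p] + math.log(trans_p[p][t]))
                delta[i][t] = (delta[i - 1][prev] + math.log(trans_p[prev][t])
                               + math.log(emit_p[t].get(w, 1e-6)))
                back[i][t] = prev
        # Follow the back-pointers from the best final tag.
        best = max(tags, key=lambda t: delta[-1][t])
        path = [best]
        for i in range(len(words) - 1, 0, -1):
            path.append(back[i][path[-1]])
        return list(reversed(path))

    # Toy tagger with two tags and made-up parameters.
    tags = ["N", "V"]
    start_p = {"N": 0.7, "V": 0.3}
    trans_p = {"N": {"N": 0.4, "V": 0.6}, "V": {"N": 0.8, "V": 0.2}}
    emit_p = {"N": {"dogs": 0.5, "bark": 0.1}, "V": {"dogs": 0.05, "bark": 0.6}}
    print(viterbi(["dogs", "bark"], tags, start_p, trans_p, emit_p))  # ['N', 'V']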

Activities

Activity / Assessment activity


Course Introduction


Objectives: 1 2
Contents:
Theory
2h
Problems
1h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Classification in NLP


Objectives: 1 2
Contents:
Theory
5h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Problem Set 1


Objectives: 1 2 3
Week: 4
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
1.7h
Autonomous learning
10h

Sequence Models in NLP


Objectives: 1 2
Contents:
Theory
6h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Problem Set 2


Objectives: 1 2 3
Week: 7
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
1.7h
Autonomous learning
10h

Syntax and Parsing


Objectives: 1 2
Contents:
Theory
6h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h

Problem Set 3


Objectives: 1 2 3
Week: 10
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
1.7h
Autonomous learning
10h

Statistical Machine Translation

We will present the basic elements of statistical machine translation systems, including representation aspects, algorithmic aspects, and methods for parameter estimation. (A toy sketch of IBM Model 1 estimation follows this activity.)
Objectives: 1 2
Contents:
Theory
4h
Problems
2h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
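
As an informal illustration of the parameter-estimation topic described above, here is a minimal sketch of a few EM iterations of IBM Model 1 on a toy two-sentence parallel corpus. The corpus and all values are invented for illustration and are not part of the syllabus.

    # Minimal sketch: EM estimation of IBM Model 1 translation probabilities
    # on a toy parallel corpus.
    from collections import defaultdict
    from itertools import product

    corpus = [(["la", "casa"], ["the", "house"]),
              (["la", "flor"], ["the", "flower"])]

    f_vocab = {f for fs, _ in corpus for f in fs}
    e_vocab = {e for _, es in corpus for e in es}

    # Start from uniform translation probabilities t(f|e).
    t = {(f, e): 1.0 / len(f_vocab) for f, e in product(f_vocab, e_vocab)}

    for _ in range(20):                        # EM iterations
        count = defaultdict(float)             # expected counts c(f, e)
        total = defaultdict(float)             # expected counts c(e)
        for fs, es in corpus:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    delta = t[(f, e)] / norm   # posterior that e generated f
                    count[(f, e)] += delta
                    total[e] += delta
        # M-step: renormalise the expected counts into probabilities.
        t = {fe: count[fe] / total[fe[1]] for fe in count}

    print(t[("casa", "house")])                # grows towards 1.0 as EM iterates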

Unsupervised Methods in NLP

We will review several methods for unsupervised learning in NLP, in the context of lexical models, sequence models, and grammatical models. We will focus on bootstrapping and cotraining methods, the EM algorithm, and distributional methods. (A toy distributional-similarity example follows this activity.)

Theory
4h
Problems
3h
Laboratory
0h
Guided learning
0h
Autonomous learning
0h
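
As an informal illustration of the distributional methods mentioned above, here is a minimal sketch in which words are represented by co-occurrence counts with their neighbours and compared with cosine similarity. The toy sentences are invented for illustration and are not part of the syllabus.

    # Minimal sketch: distributional similarity from co-occurrence counts.
    from collections import Counter
    from math import sqrt

    sentences = [["the", "cat", "drinks", "milk"],
                 ["the", "dog", "drinks", "water"],
                 ["the", "cat", "chases", "the", "dog"]]

    def context_vector(word, window=1):
        """Count the words appearing within `window` positions of `word`."""
        counts = Counter()
        for sent in sentences:
            for i, w in enumerate(sent):
                if w == word:
                    for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                        if j != i:
                            counts[sent[j]] += 1
        return counts

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u if k in v)
        norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    # 'cat' and 'dog' share the contexts 'the' and 'drinks', so their
    # similarity is higher than that of 'cat' and 'milk'.
    print(cosine(context_vector("cat"), context_vector("dog")))
    print(cosine(context_vector("cat"), context_vector("milk")))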

Problem Set 4


Objectives: 1 2 3
Week: 13
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
1.7h
Autonomous learning
10h

Final Exam


Objectives: 1 2 3
Week: 15
Theory
3h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
12.5h

Practical Project


Objectives: 1 2 4
Week: 16
Theory
0h
Problems
0h
Laboratory
0h
Guided learning
2.2h
Autonomous learning
45h

Teaching methodology

The course will be structured around five main blocks of lectures. In each theory lecture, we will present fundamental algorithmic and statistical techniques for NLP. This will be followed by problem lectures, where we will look in detail at the derivations of algorithms and the mathematical proofs needed to understand statistical methods in NLP.

Furthermore, there will be four problem sets that students need to solve at home. Each problem set will consist of three or four problems that will require the student to understand the elements behind statistical NLP methods. In some cases these problems will involve writing small programs to analyze data and perform some computation.

Finally, students will develop a practical project in teams of two or three. The goal of the project is to put into practice the methods learned in class and to learn the experimental methodology used in the NLP field. Students have to identify existing components (i.e., data and tools) that can be used to build a system, and run experiments in order to carry out an empirical analysis of some statistical NLP method.

Evaluation method

Final grade = 0.6 × final exam + 0.4 × project

where

final exam is the grade of the final exam

project is the grade of the project
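
For example, a student with a hypothetical 8.0 on the final exam and 7.0 on the project would obtain 0.6 × 8.0 + 0.4 × 7.0 = 4.8 + 2.8 = 7.6 as the final grade.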
