Big Data Management

Credits
6
Types
  • MIRI: Complementary to the specialization (Data Science)
  • BDMA21: Compulsory
Requirements
This subject has no requirements, but it assumes previous capacities (see Previous capacities below).
Department
ESSI
The main goal of this course is to analyze the technological and engineering needs of Big Data management. The enabling technology for such a challenge is cloud services, which provide the elasticity needed to properly scale the infrastructure as the needs of the company grow. Thus, students will learn advanced data management techniques (i.e., NoSQL solutions) that also scale with the infrastructure. Since Big Data Management is the evolution of Data Warehousing, such knowledge is assumed in this course (see the corresponding subject in the Data Science specialization for more details on its contents); this course specifically focuses on the management of data Volume and Velocity.

On the one hand, to deal with high volumes of data, we will see how a distributed file system can scale to as many machines as necessary. Then, we will study the different physical structures we can use to store our data in it. Such structures can take the form of a file format at the operating system level, or sit at a higher level of abstraction: sets of key-value pairs, collections of semi-structured documents, or column-wise stored tables. We will see that, independently of the kind of storage we choose, data can be processed by current highly parallelizable processing systems following functional programming principles (typically based on Map and Reduce functions), whose frameworks rely either on temporary files (like Hadoop MapReduce) or mainly on in-memory structures (like Spark).
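As a minimal sketch of the Map and Reduce functional principles mentioned above, here is a toy word count in plain Python (not tied to Hadoop or Spark; the shuffle step is simulated by sorting and grouping):

```python
from itertools import groupby

def map_phase(document):
    # Map: emit one (key, value) pair per word in the document
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle: bring equal keys together by sorting, then
    # Reduce: aggregate the values of each key
    pairs = sorted(pairs)
    return {key: sum(v for _, v in group)
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

documents = ["big data", "big volume big velocity"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(mapped)
print(counts)  # {'big': 3, 'data': 1, 'velocity': 1, 'volume': 1}
```

In a real framework the map calls run in parallel on the machines holding each file block, and the shuffle moves pairs across the network so that each reducer receives all values of its keys.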

On the other hand, to deal with high velocity of data, we need a low-latency system that processes either streams or micro-batches. However, nowadays, data production already exceeds the capacity of processing technologies: more data is being generated than we can store or even process on the fly. Thus, we will recognize the need for (a) techniques to select subsets of the data (i.e., filter out or sample), (b) techniques to summarize the data while maximizing the valuable information retained, and (c) simplified algorithms that reduce computational complexity (i.e., make a single pass over the data) and provide an approximate answer.
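One classic example combining points (a) and (c) is reservoir sampling, which keeps a uniform sample of fixed size k in a single pass over a stream of unknown length, using only O(k) memory. A minimal sketch in plain Python:

```python
import random

def reservoir_sample(stream, k, rng=None):
    # One pass over the stream; memory is O(k) regardless of stream length
    rng = rng or random.Random(42)  # fixed seed only for reproducibility
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)     # fill the reservoir with the first k items
        else:
            j = rng.randint(0, i)      # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item    # evict a random resident
    return reservoir

sample = reservoir_sample(range(1_000_000), k=10)
print(sample)  # 10 items, each retained with equal probability
```

Every item ends up in the sample with the same probability k/n, even though n is not known in advance, which is exactly what makes the technique suitable for unbounded streams.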

Finally, the complexity of a Big Data project (combining all the necessary tools in a collaborative ecosystem), which typically involves several people with different backgrounds, requires the definition of a high-level architecture that abstracts away technological difficulties and focuses on the functionalities provided and the interactions between modules. Therefore, we will also analyse different software architectures for Big Data.

Teachers

Person in charge

  • Alberto Abello Gamazo

Others

  • Sergi Nadal Francesch

Weekly hours

Theory
1.9
Problems
0
Laboratory
1.9
Guided learning
0
Autonomous learning
6.85

Competences

Generic Technical Competences

Generic

  • CG5 - Capability to apply innovative solutions and make progress in the knowledge to exploit the new paradigms of computing, particularly in distributed environments.

Transversal Competences

Teamwork

  • CTR3 - Capacity of being able to work as a team member, either as a regular member or performing directive activities, in order to help the development of projects in a pragmatic manner and with sense of responsibility; capability to take into account the available resources.

Basic

  • CB7 - Ability to integrate knowledges and handle the complexity of making judgments based on information which, being incomplete or limited, includes considerations on social and ethical responsibilities linked to the application of their knowledge and judgments.

Technical Competences of each Specialization

Specific

  • CEC1 - Ability to apply scientific methodologies in the study and analysis of phenomena and systems in any field of Information Technology as well as in the conception, design and implementation of innovative and original computing solutions.
  • CEC2 - Capacity for mathematical modelling, calculation and experimental design in engineering technology centres and business, particularly in research and innovation in all areas of Computer Science.
  • CEC3 - Ability to apply innovative solutions and make progress in the knowledge that exploit the new paradigms of Informatics, particularly in distributed environments.

Objectives

  1. Understand the main advanced methods of data management and design and implement non-relational database managers, with special emphasis on distributed systems.
    Related competences: CB7, CEC1, CEC2, CEC3, CTR3, CG5,
  2. Understand, design, explain and carry out parallel information processing in massively distributed systems.
    Related competences: CB7, CEC1, CEC2, CEC3, CTR3, CG5,
  3. Manage and process a continuous flow of data.
    Related competences: CB7, CEC1, CEC2, CEC3, CTR3, CG5,
  4. Design, implement and maintain system architectures that manage the data life cycle in analytical environments.
    Related competences: CB7, CEC1, CEC2, CEC3, CTR3, CG5,

Contents

  1. Introduction
    Big Data, Cloud Computing, Scalability
  2. Big Data Design
    Polyglot systems; Schemaless databases; Key-value stores; Wide-column stores; Document-stores
  3. Distributed Data Management
    Transparency layers; Distributed file systems; File formats; Fragmentation; Replication and synchronization; Sharding; Distributed hash; LSM-Trees
  4. In-memory Data Management
    NUMA architectures; Columnar storage; Late reconstruction; Light-weight compression
  5. Distributed Data Processing
    Distributed Query Processing; Sequential access; Pipelining; Parallelism; Synchronization barriers; Multitenancy; MapReduce; Resilient Distributed Datasets; Spark
  6. Stream management and processing
    One-pass algorithms; Sliding window; Stream to relation operations; Micro-batching; Sampling; Filtering; Sketching
  7. Big Data Architectures
    Centralized and Distributed functional architectures of relational systems; Lambda architecture
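To make the "Distributed hash" and "Sharding" items of content 3 concrete, here is a toy consistent-hashing ring in plain Python (the node names are hypothetical, and real stores add replication on the next nodes of the ring):

```python
import hashlib
from bisect import bisect

def h(key):
    # Hash a string to a point on a ring of 2**32 positions
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Place several virtual points per node so load spreads evenly
        self.ring = sorted((h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self.points = [p for p, _ in self.ring]

    def node_for(self, key):
        # A key lives on the first node clockwise from its hash position
        idx = bisect(self.points, h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-A", "node-B", "node-C"])
print(ring.node_for("user:42"))  # always the same node for the same key
```

The point of the ring (as opposed to plain `hash(key) % n_nodes`) is that adding or removing one node only remaps the keys adjacent to its virtual points, instead of reshuffling almost every key.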

Activities



Theoretical lectures

In these activities, the lecturer will introduce the main theoretical concepts of the subject. Besides lecturing, cooperative learning techniques will be used. These demand the active participation of the students, and consequently will be evaluated.
Objectives: 2 1 3 4
Contents:
Theory
25h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
25h

Theory
2h
Problems
0h
Laboratory
0h
Guided learning
0h
Autonomous learning
17h

Theory
0h
Problems
0h
Laboratory
27h
Guided learning
0h
Autonomous learning
54h

Teaching methodology

The course comprises theory and lab sessions.

Theory: Inverted class techniques will be used, which require that the student works on the provided multimedia materials before the class. Then, theory lectures comprise the teacher's complementary explanations and problem solving.

Lab: The course contents are applied to a realistic problem in the course project, done in teams, where students put into practice the kinds of tools studied during the course. Since this course is part of the BDMA Erasmus Mundus master syllabus, the project is conducted jointly with the Viability of Business Projects (VBP), Semantic Data Management (SDM) and Debates on Ethics of Big Data (DEBD) courses.

Evaluation methodology

Final Mark = min(10 ; 60%E + 40%L + 10%P)

L = Weighted average of the marks of the lab deliverables and tests
E = Final exam
P = Participation in the class
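Note that the weights sum to 110%, so participation effectively acts as a bonus capped by the min. A small sketch of the computation (the example marks are hypothetical):

```python
def final_mark(E, L, P):
    # 60% exam + 40% lab + 10% participation, capped at 10
    return min(10, 0.6 * E + 0.4 * L + 0.1 * P)

print(round(final_mark(E=8, L=9, P=10), 2))  # 4.8 + 3.6 + 1.0 = 9.4
print(final_mark(E=10, L=10, P=10))          # the cap applies: min(10, 11) = 10
```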

Bibliography

Basic:

Complementary:

Web links

Previous capacities

Since Big Data Management is the evolution of Data Warehousing, such knowledge is assumed in this course. Thus, general knowledge is expected of: relational database design; database management system architecture; ETL and OLAP.

Specifically, knowledge is expected on:
- Multidimensional modeling (e.g., star schemas)
- Querying relational databases
- Physical design of relational tables (e.g., partitioning)
- Hash and B-tree indexing
- External sorting algorithms (e.g., merge-sort)
- ACID transactions