The main goal of this course is to analyse the technological and engineering needs of Big Data Management. The enabling technology for such a challenge is cloud services, which provide the elasticity needed to properly scale the infrastructure as the needs of the company grow. Thus, students will learn advanced data management techniques (i.e., NoSQL solutions) that also scale with the infrastructure. Since Big Data Management is the evolution of Data Warehousing, that knowledge is assumed in this course (see the corresponding subject in the Data Science speciality for more details on its contents), which will specifically focus on the management of data Volume and Velocity.
On the one hand, to deal with high volumes of data, we will see how a distributed file system can scale to as many machines as necessary. Then, we will study different physical structures we can use to store our data in it. Such structures can take the form of a file format at the operating system level or of a higher level of abstraction: sets of key-value pairs, collections of semi-structured documents, or column-wise stored tables. We will see that, independently of the kind of storage we choose, we can process the data with highly parallelizable systems based on functional programming principles (typically the Map and Reduce functions), whose frameworks rely either on temporary files (like Hadoop MapReduce) or mainly on in-memory structures (like Spark).
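As a minimal sketch of this functional Map/Reduce style (assuming a local PySpark installation; the application name and input path below are purely illustrative), a word count in Spark could look like:

    # Minimal word count in PySpark, illustrating the Map/Reduce functional style.
    # Assumes a local Spark installation; "input.txt" is an illustrative path.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("WordCount").getOrCreate()
    lines = spark.sparkContext.textFile("input.txt")        # RDD of text lines
    counts = (lines.flatMap(lambda line: line.split())      # map each line to its words
                   .map(lambda word: (word, 1))             # emit (word, 1) key-value pairs
                   .reduceByKey(lambda a, b: a + b))        # reduce: sum the counts per word
    print(counts.collect())
    spark.stop()

The same pipeline of map and reduce steps can be expressed in Hadoop MapReduce; the difference lies mainly in whether intermediate results are written to temporary files or kept in memory.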
On the other hand, to deal with high velocity of data, we need a low-latency system that processes either streams or micro-batches. However, nowadays, data production already exceeds the capacity of processing technologies: more data is being generated than we can store or even process on the fly. Thus, we will recognize the need for (a) techniques to select subsets of data (i.e., filtering or sampling), (b) techniques to summarize the data while maximizing the valuable information retained, and (c) simplified algorithms that reduce computational complexity (e.g., performing a single pass over the data) and provide an approximate answer.
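As a minimal sketch of such a one-pass technique, the snippet below shows reservoir sampling, which keeps a fixed-size uniform sample of a stream without ever storing the whole stream (the stream and sample size are illustrative):

    import random

    def reservoir_sample(stream, k):
        """Keep a uniform random sample of k items from a stream in a single pass."""
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)        # fill the reservoir with the first k items
            else:
                j = random.randint(0, i)      # position chosen uniformly in [0, i]
                if j < k:
                    reservoir[j] = item       # replace with decreasing probability k/(i+1)
        return reservoir

    # Illustrative usage: sample 5 elements from a stream of 1,000 numbers
    print(reservoir_sample(range(1000), 5))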
Finally, the complexity of a Big Data project (combining all the necessary tools in a collaborative ecosystem), which typically involves several people with different backgrounds, requires the definition of a high-level architecture that abstracts away technological difficulties and focuses on the functionalities provided and the interactions between modules. Therefore, we will also analyse different software architectures for Big Data.
Teachers
Person in charge
Besim Bilalli
Others
Marc Maynou Yelamos
Sergi Nadal Francesch
Uchechukwu Fortune Njoku
Weekly hours
Theory: 1.9
Problems: 0
Laboratory: 1.9
Guided learning: 0
Autonomous learning: 6.85
Competences
Transversal Competences
Teamwork
CT3 - Ability to work as a member of an interdisciplinary team, either as a regular member or performing management tasks, in order to develop projects with pragmatism and a sense of responsibility, making commitments that take the available resources into account.
Third language
CT5 - Achieving a level of spoken and written proficiency in a foreign language, preferably English, that meets the needs of the profession and the labour market.
Entrepreneurship and innovation
CT1 - Know and understand the organization of a company and the sciences that govern its activity; have the ability to understand labour standards and the relationships between planning, industrial and commercial strategies, quality and profit. Be aware of and understand the mechanisms on which scientific research is based, as well as the mechanisms and instruments for transferring results among the socio-economic agents involved in research, development and innovation processes.
Basic
CB6 - Ability to apply the acquired knowledge and capacity for solving problems in new or unknown environments within broader (or multidisciplinary) contexts related to their area of study.
CB7 - Ability to integrate knowledge and handle the complexity of making judgments based on information which, being incomplete or limited, includes considerations on social and ethical responsibilities linked to the application of their knowledge and judgments.
CB8 - Capability to communicate their conclusions, and the knowledge and rationale underpinning these, to both specialist and non-specialist audiences in a clear and unambiguous way.
CB9 - Possession of the learning skills that enable the students to continue studying in a way that will be mainly self-directed or autonomous.
CB10 - Possess and understand knowledge that provides a basis or opportunity to be original in the development and/or application of ideas, often in a research context.
Generic Technical Competences
Generic
CG1 - Identify and apply the most appropriate data management methods and processes to manage the data life cycle, considering both structured and unstructured data
CG3 - Define, design and implement complex systems that cover all phases in data science projects
Technical Competences
Specific
CE2 - Apply the fundamentals of data management and processing to a data science problem
CE4 - Apply scalable storage and parallel data processing methods, including data streams, once the most appropriate methods for a data science problem have been identified
CE5 - Model, design, and implement complex data systems, including data visualization
CE12 - Apply data science in multidisciplinary projects to solve problems in new or poorly explored domains from a data science perspective that are economically viable, socially acceptable, and in accordance with current legislation
CE13 - Identify the main threats related to ethics and data privacy in a data science project (both in terms of data management and analysis) and develop and implement appropriate measures to mitigate these threats
Objectives
Understand the main advanced methods of data management, and design and implement non-relational database managers, with special emphasis on distributed systems.
Related competences: CB10, CB6, CB7, CB8, CB9, CT3, CT5, CE2, CE4, CE5, CG1, CG3
Understand, design, explain and carry out parallel information processing in massively distributed systems.
Related competences: CB10, CB6, CB7, CB8, CB9, CT3, CT5, CE2, CE4, CE5, CG1, CG3
Design, implement and maintain system architectures that manage the data life cycle in analytical environments.
Related competences: CB10, CB6, CB7, CB8, CB9, CT1, CT3, CT5, CE12, CE13, CE2, CE4, CE5, CG1, CG3
Contents
Introduction
Big Data, Cloud Computing, Scalability
Big Data Design
Polyglot systems; Schemaless databases; Key-value stores; Wide-column stores; Document stores
Distributed Data Management
Transparency layers; Distributed file systems; File formats; Fragmentation; Replication and synchronization; Sharding; Distributed hash; LSM-Trees
In-memory Data Management
NUMA architectures; Columnar storage; Late reconstruction; Light-weight compression
Stream Management and Processing
One-pass algorithms; Sliding window; Stream to relation operations; Micro-batching; Sampling; Filtering; Sketching
Big Data Architectures
Centralized and Distributed functional architectures of relational systems; Lambda architecture
Activities
Theoretical lectures
In these activities, the lecturer will introduce the main theoretical concepts of the subject. The active participation of the students will be required.
Theory: Classical theory lectures in conjunction with complementary explanations and problem solving.
Lab: There will be a project, carried out in teams, where students put into practice the kinds of tools studied during the course. It will be evaluated through two deliverables and individual tests.
Evaluation methodology
Final Mark = 60% E + 40% L
L = Weighted average of the marks of the lab deliverables and tests
E = Final exam
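For illustration only (the marks below are hypothetical): a student with E = 7.0 and L = 8.0 would obtain Final Mark = 0.6 · 7.0 + 0.4 · 8.0 = 7.4.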
Prerequisites
Since Big Data Management is the evolution of Data Warehousing, such knowledge is assumed in this course. Thus, general knowledge is expected on: relational database design; database management system architecture; ETL and OLAP.
Specifically, knowledge is expected on:
- Multidimensional modeling (e.g., star schemas)
- Querying relational databases
- Physical design of relational tables (e.g., partitioning)
- Hash and B-tree indexing
- External sorting algorithms (e.g., merge sort)
- ACID transactions