Big Data is traditionally defined by the three V's: Volume, Velocity and Variety. Historically, Big Data has been associated with Volume (e.g., the Hadoop ecosystem), and more recently Velocity has gained momentum (especially with the arrival of stream processors such as Apache Flink). However, associating Big Data only with Volume or Velocity is nowadays a mistake. The biggest challenge in Big Data management today is Variety, and how to tackle Variety in real-world projects is not yet clear: there are no standardized solutions.
In this course the student will be introduced to advanced database technologies, modeling techniques and methods for tackling Variety for decision making. The fundamental underlying theory is that of graph data management and processing. We will also explore the difficulties that arise when combining Variety with Volume and/or Velocity. The focus of this course is on the need to enrich the data available to an organization with external repositories (with special attention paid to Open Data), in order to gain further insight into the organization's business domain. There are plenty of examples of external data relevant to the decision-making processes of any company: data coming from social networks such as Facebook or Twitter; data released by governmental bodies (such as town councils or governments); data coming from sensor networks (such as those behind city services in the Smart Cities paradigm); data provided by third parties, etc.
This is a new, hot topic without a clear, established (mature enough) methodology. For this reason, mastering the inclusion of external data in an organization's decision-making processes requires rigorous thinking, innovation and a strong technical background. Accordingly, this course focuses on two main aspects:
1. Technical aspects. These represent the core discussion of the course and include:
- dealing with semi-structured and unstructured data (as found on the Web),
- using metadata effectively to understand external data,
- mastering the main formalisms (mostly coming from the Semantic Web) for enriching data with metadata (ontology languages, RDF, XML, etc.),
- determining relevant sources, and applying semantic mechanisms to automate the addition (and potentially integration), linkage and/or crossing of data between heterogeneous data sources, and
- learning the main approaches to performing data analysis natively on graph-based formalisms (i.e., reasoning, graph algorithms and machine learning).
2. Entrepreneurship and innovation, which includes:
- working on the visionary aspect to open new analytical perspectives on a business domain by considering external sources, and
- developing added value for current systems by means of such external data.
Weekly hours
Autonomous learning: 7.11
Objectives
1. Determine how to apply graph formalisms to solve the Variety challenge (data integration).
2. Master the main semantic-aware formalisms to enable semantic modeling.
3. Reinforce teamwork capabilities in order to develop innovative solutions by complementing the organization's data with external data.
4. Perform graph data processing in both centralized and distributed environments.
Contents
- Introduction and formalization of Variety in Big Data and its management
Definition of data management tasks, from both a database and a knowledge representation perspective.
Definition of Variety and Big Data. Syntactic and semantic heterogeneities. Impact of heterogeneities on the identified data management tasks.
Data integration. Theoretical framework for the management and integration of heterogeneous data sources.
Main components of an integration system: data sources, global schema and mappings (see the sketch below).
The concept of a canonical model for data integration. Definition of data model. Main characteristics of a canonical data model.
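To make the mappings concrete, the following minimal Python sketch shows a global-as-view (GAV) setup in which a global relation is defined as a view over two heterogeneous sources. All relation and attribute names here are illustrative assumptions, not material prescribed by the course.

# Hypothetical source 1: a relational-style customer table.
source1_customers = [
    {"cust_id": 1, "name": "Alice"},
    {"cust_id": 2, "name": "Bob"},
]

# Hypothetical source 2: a JSON-like client feed with different attribute names.
source2_clients = [
    {"id": "c-7", "full_name": "Carol"},
]

def global_person():
    # GAV mapping: the global relation Person(id, name) is defined as a
    # view (here, a union of projections) over the two sources.
    for row in source1_customers:
        yield {"id": f"s1:{row['cust_id']}", "name": row["name"]}
    for row in source2_clients:
        yield {"id": f"s2:{row['id']}", "name": row["full_name"]}

# A query posed against the global schema is answered by unfolding the view.
print([p for p in global_person() if p["name"].startswith("A")])

In the complementary LAV (local-as-view) approach, the direction is inverted: each source is described as a view over the global schema.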
- Graphs as a solution to the Variety challenge
Graphs as the best canonical model for data integration.
Main features of graph data models. Differences with other data models (especially the relational model).
Data and metadata concepts and their formalization in graph models.
Use cases (highlighting topological benefits): fraud detection, bioinformatics, traffic and logistics, social networks, etc.
Introduction to the two main graph models: property graphs and knowledge graphs (a minimal sketch of the former follows below).
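As a taste of the property graph model, here is a minimal Python sketch (all data invented for illustration) in which nodes and edges carry labels and free-form property maps, and a query mixes topology with content:

# Nodes: identifier -> label plus a property map.
nodes = {
    "n1": {"label": "Person", "props": {"name": "Alice", "age": 34}},
    "n2": {"label": "Person", "props": {"name": "Bob"}},
    "n3": {"label": "City", "props": {"name": "Barcelona"}},
}
# Edges: (source, label, target, properties); edges carry properties too.
edges = [
    ("n1", "KNOWS", "n2", {"since": 2019}),
    ("n1", "LIVES_IN", "n3", {}),
]

# A hybrid (topology + content) query: names of people living in Barcelona.
for src, label, dst, props in edges:
    if label == "LIVES_IN" and nodes[dst]["props"]["name"] == "Barcelona":
        print(nodes[src]["props"]["name"])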
- Property graph management
Data structures. Integrity constraints.
Basic operations: topology-based, content-based and hybrid.
Graph query languages (e.g., Cypher).
The graph database concept: tool heterogeneity when implementing the graph structures, and the impact of such decisions on the main operations.
Distributed graph databases: need and difficulties. The "think like a vertex" paradigm as the de facto standard in distributed graph processing (see the sketch below).
Main distributed graph algorithms.
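To illustrate the "think like a vertex" paradigm, below is a minimal single-machine simulation in Python (a sketch, not a real Pregel/Giraph implementation): in each superstep every vertex merges its incoming messages, updates its state, and sends messages to its neighbours. The example computes connected components by propagating the smallest reachable vertex identifier; the toy graph is an assumption for illustration.

# Undirected toy graph as adjacency lists (both directions listed).
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
state = {v: v for v in graph}  # each vertex starts labelled with its own id

# Superstep 0: every vertex announces its label to its neighbours.
inbox = {v: [] for v in graph}
for v, nbrs in graph.items():
    for u in nbrs:
        inbox[u].append(state[v])

changed = True
while changed:  # each iteration simulates one superstep
    changed = False
    outbox = {v: [] for v in graph}
    for v in graph:
        # Vertex program: keep the smallest label seen so far.
        best = min(inbox[v] + [state[v]])
        if best != state[v]:
            state[v] = best
            changed = True
            for u in graph[v]:  # send the new label to the neighbours
                outbox[u].append(best)
    inbox = outbox

print(state)  # {'a': 'a', 'b': 'a', 'c': 'a', 'd': 'd'}

In a real distributed engine, supersteps are separated by synchronization barriers and messages cross machine boundaries, which is precisely where the difficulties mentioned above arise.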
- Knowledge graph management
Data structure: RDF. Origin and relationship with Linked Open Data. Integrity constraints.
Schema layer: RDFS and OWL. Relationship with first-order logic. Foundations in Description Logics. Integrity constraints. Reasoning.
Basic operations and query language: SPARQL and its underlying algebra. Entailment regimes (reasoning). (See the sketch below.)
Triplestores. Differences with graph databases. Native implementations versus implementations based on the relational data model. Impact of such decisions on the basic operations.
Distributed triplestores: need and difficulties. Graph Engine 1.0 as an exemplar of a distributed triplestore.
Main distributed algorithms.
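As a minimal illustration of RDF and SPARQL, the following sketch uses the rdflib Python library; the choice of rdflib and the ex: vocabulary are assumptions for illustration, not tools mandated by the course.

from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Schema-level (RDFS) statement.
g.add((EX.Student, RDFS.subClassOf, EX.Person))
# Instance-level statements.
g.add((EX.alice, RDF.type, EX.Student))
g.add((EX.alice, EX.enrolledIn, EX.SDM))
g.add((EX.SDM, RDFS.label, Literal("Semantic Data Management")))

# SPARQL over the stored triples. Note: plain rdflib evaluates under
# simple entailment, so ex:alice is not returned as an ex:Person unless
# an RDFS reasoner materializes that inference first.
q = """
SELECT ?s ?course WHERE {
  ?s rdf:type ex:Student ;
     ex:enrolledIn ?course .
}
"""
for row in g.query(q, initNs={"ex": EX, "rdf": RDF}):
    print(row.s, row.course)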
- Comparison of property graphs and knowledge graphs. Use cases
Recap of both models: commonalities and differences. Concepts to borrow between the two paradigms.
Main use cases in metadata management: Data Lake semantification and data governance.
Main use cases exploiting topological features: recommenders on graphs and graph data mining.
Visualization: by means of a GUI (Gephi) or programmatically (D3.js or GraphLab). (See the export sketch below.)
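For the visualization part, one common route into Gephi is to export a graph in GEXF, the format Gephi reads natively. The following sketch does so with the networkx Python library; the use of networkx and the toy data are illustrative assumptions.

import networkx as nx

G = nx.Graph()
G.add_node("alice", kind="Person")   # node attributes become visual
G.add_node("sdm", kind="Course")     # channels (size, colour) in Gephi
G.add_edge("alice", "sdm", relation="enrolledIn")

nx.write_gexf(G, "toy_graph.gexf")   # open this file from Gephi's GUI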
Activities
Lectures
During the lectures, the main concepts are discussed. Lectures combine master classes with active/cooperative learning activities. Students are expected to take a proactive attitude during the active/cooperative activities; during master classes, they are expected to listen, take notes and ask questions.
Objectives: 1, 2, 3, 4
Hands-on Sessions
Students practice the different concepts introduced in the lectures, solving problems either on the computer or on paper.
Objectives: 1, 3
Final Exam
Written exam covering the theoretical concepts introduced throughout the course.
Objectives: 1, 2, 4
Teaching methodology
The course comprises theory lectures, lab sessions and a project.
Theory: these lectures consist of the teacher's explanations and constitute the main part of the course. Students will also have some content to read and prepare outside the classroom, and will be asked to participate in cooperative learning activities.
Laboratory: the lab sessions are mainly devoted to practicing, with and without a computer, the concepts introduced in the theory lectures. Specific, relevant tools are introduced in these sessions, and small-sized projects are conducted using them.
Project: the course contents are applied to a realistic problem in the course project. Since this course is part of the BDMA Erasmus Mundus master, the project is conducted jointly with the Viability of Business Projects (VBP), Big Data Management (BDM) and Debates on Ethics of Big Data (DEBD) courses.
Evaluation method
Final mark = 40% EX + 50% LAB + 10% P, where:
EX = final exam mark
LAB = weighted mark of the lab sessions
P = project mark
Bibliography
Basic:
- Lenzerini, Maurizio. Data Integration: A Theoretical Perspective. ACM, 2002. ISBN: 1-58113-507-6. https://doi.org/10.1145/543613.543644
- Aggarwal, Charu C.; Wang, Haixun. Managing and Mining Graph Data. Springer, 2010. ISBN: 9781441960443. http://cataleg.upc.edu/record=b1384488~S1*cat
- Baader, Franz (ed.). The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press, 2003. ISBN: 0521781760. http://cataleg.upc.edu/record=b1230856~S1*cat
- Abiteboul, Serge. Web Data Management. Cambridge University Press, 2011. ISBN: 9781107012431. http://cataleg.upc.edu/record=b1410074~S1*cat
- Pan, Jeff Z. Ontology-Driven Software Development. Springer, 2013. ISBN: 9783642312250. http://cataleg.upc.edu/record=b1427265~S1*cat
- Groppe, Sven. Data Management and Query Processing in Semantic Web Databases. Springer, 2011. ISBN: 9783642193569. http://cataleg.upc.edu/record=b1394290~S1*cat
- Garcia-Molina, Hector; Ullman, Jeffrey D.; Widom, Jennifer. Database Systems: The Complete Book. Pearson Education, 2009. ISBN: 978-0131873254. http://cataleg.upc.edu/record=b1346544~S1*cat
- Özsu, M. Tamer. A Survey of RDF Data Management Systems. arXiv, 2016. https://arxiv.org/abs/1601.00707
- Sahu, Siddhartha; Mhedhbi, Amine; Salihoglu, Semih; Lin, Jimmy; Özsu, M. Tamer. The Ubiquity of Large Graphs and Surprising Challenges of Graph Processing. arXiv, 2017. https://arxiv.org/abs/1709.03188
Prerequisites
Students must be familiar with the basics of databases and data modeling. Programming skills are also mandatory.