The goal of the project is to add new metaphors, both for manipulation and for interaction, to the platform for visualizing volumetric models in a virtual reality environment.
Interaction in Virtual Reality environments is complex because the devices one must operate with are different, while at the same time they provide information about the user's position, the orientation of their head, etc.
The goal of the project is to implement and evaluate different manipulation and interaction methods that are intuitive and easy to use in virtual reality environments for medical models. The viability and usability of the application for use in diagnosis will also be studied.
Companies and scientists working in areas such as finance or genomics are generating enormously large datasets (in the order of petabytes), commonly referred to as Big Data. How to efficiently and effectively process such large amounts of data is an open research problem. Since communication is involved in Big Data processing at many levels, at the NaNoNetworking Center in Catalunya (N3Cat) we are currently investigating the potential role of wireless communications in the Big Data scenario. The main focus of the project is to evaluate the impact of applying wireless communications and networking methods to processors and data centers oriented to the management of Big Data.

OBJECTIVES
===========

N3Cat is looking for students wanting to work in the area of wireless communications for Big Data. To this end, the candidate will work on one of the following areas:
- Traffic analysis of Big Data frameworks and applications, as well as of smaller manycore systems.
- Channel characterization in Big Data environments: indoor, within the racks of a data center, within the package of a CPU, within a chip.
- Design of wireless communication protocols for computing systems, from the processor level to the data center level.
Machine Learning (ML) has taken the world by storm and has become a fundamental pillar of engineering. As a result, the last decade has witnessed an explosive growth in the use of deep neural networks (DNNs) in pursuit of exploiting the advantages of ML in virtually every aspect of our lives: computer vision, natural language processing, medicine or economics are just a few examples. However, NOT all DNNs fit all problems: convolutional NNs are good for computer vision, recurrent NNs are good for temporal analysis, and so on. In this context, the main focus of N3Cat and BNN-UPC is to explore the possibilities of the newer and less explored variant called Graph Neural Networks (GNNs), whose aim is to learn from and model graph-structured data. This has huge implications in fields such as quantum chemistry, computer networks, or social networks, among others.

OBJECTIVES
===========

N3Cat and BNN-UPC are looking for students wanting to work in the area of Graph Neural Networks, studying their uses, processing architectures, and algorithms. To this end, the candidate will work on ONE of the following areas:
- Investigating the state of the art in this area, surveying the different works in terms of applications, processing frameworks, algorithms, benchmarks and datasets. This can be approached from a hardware or a software perspective.
- Helping to build a testbed formed by a cluster of GPUs running PyTorch or TensorFlow. We will instrument the testbed to measure the computation workload and the communication flows between GPUs.
- Analyzing the communication workload of running a GNN, either on the testbed or by means of architectural simulations.
- Developing means of accelerating GNN processing in software (e.g., improving the scheduling of the message passing) or hardware (e.g., designing a domain-specific architecture).
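As a hedged illustration of the message-passing idea behind GNNs (not the API of any specific framework), the sketch below implements a single mean-aggregation graph-convolution layer in plain NumPy; the toy graph, features and weights are invented for the example:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: aggregate neighbour features
    (mean over neighbours plus a self-loop), then a linear map + ReLU."""
    n = adj.shape[0]
    a_hat = adj + np.eye(n)                 # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)  # per-node degree
    agg = (a_hat @ feats) / deg             # mean aggregation (message passing)
    return np.maximum(agg @ weight, 0.0)    # linear transform + ReLU

# Toy graph: 3 nodes in a path 0-1-2, 2-dim features, 2-dim output.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
rng = np.random.default_rng(0)
w = rng.normal(size=(2, 2))
h = gcn_layer(adj, feats, w)
print(h.shape)  # (3, 2): one updated embedding per node
```

Stacking such layers lets information propagate further than one hop, which is exactly the communication pattern (messages between neighbouring nodes / GPUs) that the testbed measurements aim to characterize.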
Robotic Process Automation is receiving significant attention, due to the promise of improving the performance of the main processes of an organization by incorporating robots that partially perform repetitive tasks. In this project, we will consider how Process Mining can help in finding opportunities to apply Robotic Process Automation for a real case study.
Recently, one of the leaders in Robotic Process Automation acquired one of the main process mining tools (https://www.uipath.com/newsroom/uipath-acquires-process-gold-unparalleled-process-understanding). This is a confirmation of the potential link between the field of process mining and the field of robotic process automation.
In this project we will try to find out how strong this link is. Using real data from a company that is trying to automate its processes, the student will dig into the field of process mining to propose a methodology to unleash the application of RPA.
In this project, there is the possibility of a grant that covers the time invested.
Design and develop a software application to facilitate contact between the members of cultural or leisure associations, with the goal of improving their relationship and increasing attendance at events and participation in the activities these associations organize. As a pilot, a version of the application will be put into production for the Confederació Sardanista de Catalunya.
The application will have a web component for gathering information, linked to the application and keeping its data up to date with the necessary information. The application itself must target mobile devices and must incorporate functionality such as searching for users with shared interests, an event agenda, event search, resource sharing, a waiting list for reservations, etc.
The application must have a good interface that is easy to use and intuitive for the user.
The scope of this TFG project is to address the second part of Hilbert's 16th problem numerically and develop a parallel code that will run on a dedicated NVidia Titan V graphics card (15 TFLOPS float performance). The parallel code is to be written either directly in the CUDA programming language or by using Python or Julia frontends. There is a dedicated computer with two NVidia graphics cards that can be used for running the code.
The scope of the project is to develop a highly efficient parallel code for simulation of Hilbert's 16th problem. This code should be capable of (i) performing the integration of a system of two differential equations with polynomials of second order and (ii) analyzing the solutions for the possible presence of limit cycles. While each individual integration is rather simple, the phase space of the free parameters is large. In a reduced form, there are 5 free parameters. If each parameter is sampled at 100 different values, a complete study requires the analysis of ten billion different choices of parameters. Such a brute-force approach requires high-performance computations and will be performed on graphical processing units (GPUs). The efficient code will be written using the CUDA programming language.
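The per-parameter-choice workload (integrate one quadratic planar system, then inspect the trajectory) can be prototyped serially before porting it to CUDA. A minimal NumPy sketch, with an illustrative damped linear system (a degenerate quadratic field) standing in for a general degree-2 vector field:

```python
import numpy as np

def quadratic_field(coeffs_p, coeffs_q):
    """Planar field dx/dt = P(x,y), dy/dt = Q(x,y) with degree-2 polynomials.
    Coefficient order: (1, x, y, x^2, x*y, y^2)."""
    def f(v):
        x, y = v
        basis = np.array([1.0, x, y, x * x, x * y, y * y])
        return np.array([coeffs_p @ basis, coeffs_q @ basis])
    return f

def rk4(f, v0, dt, steps):
    """Classic 4th-order Runge-Kutta integration; returns the trajectory."""
    v = np.array(v0, dtype=float)
    traj = [v.copy()]
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2)
        k4 = f(v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(v.copy())
    return np.array(traj)

# Illustrative system: dx/dt = -0.1x - y, dy/dt = x - 0.1y (spirals inward,
# so no limit cycle; a real sweep would scan the 5 free parameters).
p = np.array([0.0, -0.1, -1.0, 0.0, 0.0, 0.0])
q = np.array([0.0,  1.0, -0.1, 0.0, 0.0, 0.0])
traj = rk4(quadratic_field(p, q), [1.0, 0.0], dt=0.01, steps=5000)
r_final = float(np.linalg.norm(traj[-1]))
print(r_final < 0.5)  # → True: trajectory collapses toward the origin
```

In the CUDA port, each GPU thread would run this integration for one point of the 5-dimensional parameter grid and report a limit-cycle indicator (e.g., whether late-time trajectory radii stabilize away from a fixed point).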
The main goal of this project is to develop a network monitoring system that can be used by network operators to detect bitcoin miners (or miners from other blockchain technologies) in their network. The system will rely only on network measurements obtained with standard network measurement tools, and will estimate interesting characteristics of detected miners, such as power consumption. How to apply: Please send an email to with your CV and academic file (pdf can be generated from the Raco).
The goal of the project is to apply Reinforcement Learning techniques so that a car does not leave the road in a simulator. Transfer Learning techniques will be applied for gradual learning of the agent's behaviour.
A simulator such as CARLA will be used together with the TensorFlow/Keras libraries, or the Unity environment with its ML-Agents learning asset.
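Before moving to CARLA or Unity, the reinforcement-learning loop can be prototyped on a toy problem. A minimal tabular Q-learning sketch (the one-dimensional "road", rewards and hyperparameters are all invented for illustration, and deep RL would replace the table in the real project):

```python
import numpy as np

# Toy "stay on the road" task: lane positions 0..4; 0 and 4 are off-road
# (episode ends with a penalty), 1..3 are on the road.
ACTIONS = [-1, 0, 1]            # steer left / keep straight / steer right
rng = np.random.default_rng(42)
q = np.zeros((5, 3))            # Q-table: state x action

alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(3000):           # training episodes
    s = 2                       # start in the middle of the road
    for _ in range(20):
        # epsilon-greedy action selection
        a = int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q[s]))
        s2 = s + ACTIONS[a]
        done = s2 in (0, 4)     # drove off the road
        r = -10.0 if done else 1.0
        target = r if done else r + gamma * np.max(q[s2])
        q[s, a] += alpha * (target - q[s, a])
        if done:
            break
        s = s2

# After training, the greedy policy at the road edges avoids driving off.
print(int(np.argmax(q[1])) != 0, int(np.argmax(q[3])) != 2)
```

The same agent/environment/reward loop carries over to the simulator, where the state becomes camera or sensor input and the Q-table is replaced by a neural network (the natural place to apply transfer learning between progressively harder tracks).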
Web tracking technologies are extensively used to collect large amounts of personal information (PI), including the things we search, the sites we visit, the people we contact, or the products we buy. Although it is commonly believed that this data is mainly used for targeted advertising, some recent works revealed that it is exploited for many other purposes, such as price discrimination, financial credibility, insurance coverage, government surveillance, background scanning or identity theft. The main objective of this project is to apply network traffic monitoring and analysis technologies to uncover the particular methods used to track Internet users and collect PI. This project will be useful for both Internet users and the research community, and will produce open source tools, real data sets, and publications revealing the most privacy-infringing practices. Some preliminary results of our work in this area were recently published in Proceedings of the IEEE (IF: 9.237) and featured in a Wall Street Journal article.
More info at:
Most EU citizens are concerned about online privacy. EPRIVO aims at building a European data-driven observatory that automatically looks for online services that do not respect our privacy rights.
Internet services are known to collect large amounts of personal information (PI). As a result, more than half of EU citizens are concerned about their online privacy. In this context, data brokers are companies devoted to collecting PI and selling it to other companies. Data brokers are often implemented as third-party trackers, which allows them to gain visibility across the Internet. Recent works show that collected PI is not only used for targeted advertising, but also for more obscure practices, such as price discrimination, background scanning, phishing or identity theft.
In this project, you will collaborate in the development of EPRIVO, the first European-wide online privacy observatory. EPRIVO will continuously scan the Internet, from multiple locations across Europe, in search of third-party trackers that do not respect basic privacy rights and current EU regulations (GDPR 2016/679). Quantitative results will be published through an online service developed within the project. Such results will be useful to Internet users, policy makers, website owners and researchers.
The identification of the applications behind network traffic (i.e. traffic classification) is crucial for ISPs and network operators to better manage and control their networks. However, the increasing use of encryption and web-based applications makes this identification very challenging. This problem is exacerbated by the widespread deployment of content distribution networks (e.g. Akamai) and cloud-based services (e.g. Amazon AWS). The goal of this project is to develop a traffic monitoring tool to accurately identify web services from HTTPS traffic, including Google, YouTube, Facebook and Twitter, among others. The tool will combine information from IP addresses and DNS with novel classification methods inspired by the Google PageRank algorithm to identify encrypted traffic, even if served from Akamai, AWS or Google infrastructures. This project will be carried out in collaboration with the tech-based company Talaia Networks (https://www.talaia.io), which develops cloud-based network monitoring solutions.
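The PageRank-inspired idea can be pictured on a toy graph: build a graph whose nodes are flows/domains observed in the traffic and whose edges are reference or co-occurrence relations, then rank nodes by power iteration. The sketch below is generic textbook PageRank, not the project's actual classifier, and the toy adjacency matrix is invented:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank; adj[i, j] = 1 means node i links to j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                       # dangling nodes: avoid /0
    transition = adj / out                    # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * (transition.T @ rank)
    return rank

# Toy graph of 4 "flows/domains": node 0 is referenced by all the others,
# like a popular backend domain behind a CDN.
adj = np.array([[0, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 0]], dtype=float)
r = pagerank(adj)
print(int(np.argmax(r)))  # → 0: the most-referenced node ranks highest
```

In the traffic-classification setting, a high rank for a node that many flows of a service point to can help attribute encrypted flows to that service even when the hosting infrastructure (Akamai, AWS, Google) is shared.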
How to apply: Please send an email to email@example.com with your CV and academic file (pdf can be generated from the Raco).
Reading charts in desktop and virtual reality environments can be difficult depending on the configuration of parameters such as the width and height of the visual marks. The project aims to explore the perceptual limits of a set of well-known visualization techniques in desktop and VR-based environments.
It is known that humans cannot interpret visual depictions equally if some parameters are changed, such as color palettes, size of marks, etc. However, the valid value ranges of certain parameters are still unknown for a set of commonly used visualization techniques, such as heatmaps. The goal of this project is to explore the ability to carry out visualization tasks using different visualization techniques in desktop and Virtual Reality environments, according to the visual variables chosen.
UPC and Nestlé are offering a new position to develop the TFG in the field of Machine Learning and Cybersecurity. This TFG will be fully funded (internship) and carried out in collaboration with the Global Security Operations Center of Nestlé and UPC.
Cybersecurity is becoming an increasingly important challenge for all companies and individuals alike. While big names used to be the main targets in the past, as people's lives move online, anyone is nowadays a potential target for any kind of cyber-attack, ranging from phishing to ransomware or serious privacy issues. In order to fight against those ever-evolving threats, Machine Learning is increasingly being used behind the scenes to design better systems that are capable of self-learning, boosting detection rates and overall resilience to unknown attacks. As AI-based solutions penetrate products across the industry, a new kind of threat that is often overlooked is becoming more and more prominent and dangerous: adversarial machine learning (AML).
AML focuses on designing specific inputs to deceive a previously trained Machine Learning model into misclassifying them for a specific purpose. One of the main flaws of state-of-the-art Machine Learning and Deep Learning algorithms is that they assume that the nature of the data they receive is systematically benign, which is generally the case but does not hold true when an adversarial input is received. The motivation behind fooling an ML model into thinking that, for example, a new sample is benign when it is in fact malicious can range from pure research to more serious real-life issues, such as an autonomous car wrongly classifying a stop sign (and thus provoking a fatal accident) or a wrongly diagnosed disease because of a slightly manipulated magnetic resonance image.
This problem is no exception in Cybersecurity, where companies wrongly assume that once the latest AI-based product is deployed in their network, their employees are safe...
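The kind of manipulation AML studies can be shown with a tiny, self-contained sketch: a fast-gradient-sign-style perturbation against a toy logistic classifier. The weights and input are invented for the example; real attacks target trained DNNs, but the mechanics are the same:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (weights assumed already "trained").
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability that x belongs to the positive (e.g. benign) class."""
    return sigmoid(w @ x + b)

# FGSM-style evasion: nudge each feature in the direction that most
# increases the loss, i.e. along the sign of the input gradient.
x = np.array([0.5, -0.4, 0.2])   # originally classified as positive
y = 1.0                          # its true label
grad_x = (predict(x) - y) * w    # gradient of the log-loss w.r.t. x
eps = 0.6                        # perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(predict(x) > 0.5, predict(x_adv) > 0.5)  # → True False
```

A small, targeted change per feature is enough to flip the decision, which is exactly the blind spot the project aims to study in AI-based security products.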
The goal of the project is to create an application that facilitates the segmentation of medical models using immersive techniques.
Segmenting 3D medical models is an important task that can only be automated with learning algorithms in specific cases. In general, costly user intervention is needed. Most systems work in a 2D fashion, letting the user handle 2D images one at a time. However, using only 2D slices prevents the user from taking advantage of 3D immersion in the model. The goal of the project is to create an interface for the easy segmentation of medical models using 3D interaction.
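One classical semi-automatic baseline that an immersive interface could drive (e.g., the user places seed points with a VR controller) is seeded region growing. A minimal 2D NumPy sketch, with the 3D volumetric case fully analogous; the toy image and tolerance are invented for illustration:

```python
from collections import deque

import numpy as np

def region_grow(volume, seed, tol):
    """Grow a segmentation from a seed point, adding axis-neighbours whose
    intensity is within tol of the seed intensity (4/6-connectivity)."""
    mask = np.zeros(volume.shape, dtype=bool)
    base = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        p = queue.popleft()
        for axis in range(volume.ndim):       # works for 2D and 3D alike
            for d in (-1, 1):
                q = list(p)
                q[axis] += d
                q = tuple(q)
                in_bounds = all(0 <= q[i] < volume.shape[i]
                                for i in range(volume.ndim))
                if in_bounds and not mask[q] and abs(volume[q] - base) <= tol:
                    mask[q] = True
                    queue.append(q)
    return mask

# Toy 2D "slice": a bright square region on a dark background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 100.0
mask = region_grow(img, seed=(4, 4), tol=10.0)
print(int(mask.sum()))  # → 16: the 4x4 bright region
```

In the envisioned application, the interaction design (how seeds and tolerances are specified in VR) matters as much as the growing algorithm itself.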
While digital imaging devices have almost entirely replaced analog film in professional and amateur photography, digital X-ray imaging devices are only now starting to be used in dental clinics. Such scanners allow much faster and more convenient imaging of individual teeth, typically as black-and-white images. One important practical purpose is the detection of caries growth or of other alterations in the teeth. So far this is done by doctors in an old-fashioned way. The goal of this project is to develop software capable of detecting structural changes.
In more detail, the goal is to develop and test software able to detect structural changes happening in teeth from a sequence of two or more images taken of the same tooth. Examples of such images will be provided by a dental clinic. Some complications might arise from the different angles at which the X-ray images have been taken, as well as from a possible change of image type (digital vs analog).
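As a hedged baseline for the change-detection step (it deliberately ignores the image registration that different acquisition angles would require), one can compensate for global exposure differences and threshold the pixel-wise difference. The synthetic "lesion" below is invented for the example:

```python
import numpy as np

def change_map(img_a, img_b, thresh):
    """Flag pixels that differ between two grayscale images after
    removing each image's mean (compensates global exposure shifts)."""
    a = img_a - img_a.mean()
    b = img_b - img_b.mean()
    return np.abs(a - b) > thresh

# Synthetic example: the same "tooth" image, with a new dark region
# (simulated caries) added to the second scan.
rng = np.random.default_rng(1)
before = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))
after = before.copy()
after[20:28, 20:28] -= 40.0        # local darkening in an 8x8 patch

mask = change_map(before, after, thresh=10.0)
print(bool(mask[22, 22]), int(mask.sum()))  # → True 64: only the lesion
```

Real clinic images would first need alignment (e.g., feature- or intensity-based registration) before any per-pixel comparison is meaningful, which is likely where most of the project's effort will go.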
Ideally, the software should be platform independent (Windows, Linux, Android), and it should at least work on Windows, which is commonly used in dental clinics.
This project requires the ability to design and write functional code for the whole application from scratch and to test it on real images.
In a recent experiment (https://arxiv.org/abs/2002.10475) with ultracold dysprosium atoms, it was possible to realize a dipolar gas in a one-dimensional geometry at low temperature. The goal of the project is to provide a realistic simulation of such a system. To do so, a quantum Monte Carlo code has to be developed.
Variational and Diffusion Monte Carlo codes will be implemented. The one-dimensional geometry makes it easier to write the code. In case of a successful simulation of the experiment, a joint article with some of the authors of the experiment is possible.
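The structure of a Variational Monte Carlo code can be shown on a toy system: the 1D harmonic oscillator with a Gaussian trial wave function (Metropolis sampling of |psi|^2, then averaging the local energy). The dipolar interaction of the actual project is not included; this is only a skeleton of the method:

```python
import numpy as np

def vmc_energy(alpha, steps=20000, step_size=1.0, seed=0):
    """Variational Monte Carlo for the 1D harmonic oscillator
    (units hbar = m = omega = 1) with trial psi(x) = exp(-alpha x^2).
    Local energy: E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for i in range(steps):
        x_new = x + step_size * rng.uniform(-1.0, 1.0)
        # Metropolis acceptance on |psi|^2 = exp(-2 alpha x^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i > steps // 10:                    # discard equilibration
            energies.append(alpha + x * x * (0.5 - 2.0 * alpha**2))
    return float(np.mean(energies))

# With the optimal alpha = 1/2 the trial function is exact, so the local
# energy is constant and the estimate equals the ground-state energy 1/2.
print(round(vmc_energy(0.5), 6))  # → 0.5
```

For non-optimal alpha the estimate lies above 1/2, as the variational principle requires; the project's code would replace the trial function and local energy with their dipolar-gas counterparts and add a Diffusion Monte Carlo stage on top of the same sampling machinery.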