Boltzmann Machines are probabilistic models developed in 1985 by
D.H. Ackley, G.E. Hinton and T.J. Sejnowski. In 2006, Restricted
Boltzmann Machines (RBMs) were used in the pre-training step of
several successful deep learning models, leading to a new renaissance
of neural networks and artificial intelligence.
In spite of their elegant mathematical formulation, several of the quantities involved are hard to compute exactly, most notably the partition function and, consequently, the probabilities assigned by the model and the gradients needed for learning.
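As a reminder of where the intractability comes from, the standard binary RBM (a generic textbook formulation, not necessarily the exact variant studied in this project) defines an energy over visible units v and hidden units h, and its probabilities require a partition function Z that sums over exponentially many joint configurations:

```latex
% Standard binary RBM: visible units v, hidden units h,
% weight matrix W, biases b (visible) and c (hidden).
E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^\top \mathbf{v} - \mathbf{c}^\top \mathbf{h} - \mathbf{v}^\top W \mathbf{h}

% The joint probability needs the partition function Z, a sum over
% all 2^{n_v + n_h} configurations, which is intractable in general.
P(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z},
\qquad
Z = \sum_{\mathbf{v}, \mathbf{h}} e^{-E(\mathbf{v}, \mathbf{h})}
```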
Therefore, in practice we have to approximate both the computation of
the probabilities and several components of the learning process
itself. These drawbacks have prevented RBMs from showing their real
potential as truly probabilistic models.
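To illustrate the kind of approximation that is used in practice, below is a minimal sketch of one Contrastive Divergence (CD-1) update for a binary RBM in NumPy. Variable names and hyper-parameters are illustrative assumptions, not the exact procedure investigated in this project.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.01):
    """One Contrastive Divergence (CD-1) update for a binary RBM.

    v0   : (batch, n_visible) batch of binary training vectors
    W    : (n_visible, n_hidden) weight matrix
    b, c : visible and hidden biases
    """
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # Negative phase: one Gibbs step back to the visible layer and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # CD-1 estimate of the log-likelihood gradient (replaces the exact gradient).
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

A training loop would simply call `cd1_step` repeatedly over mini-batches; the point is that the exact gradient (which requires Z) is replaced by a short-run sampling estimate.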
We are currently working on several of these open issues related to RBMs.
This work has opened new lines of research, some of which can become the
topic of a Master's Thesis. The scope and depth of the work can be
adapted to the time available to complete the Thesis. For further
details, contact Enrique Romero (eromero@cs.upc.edu).
To date, traditional Deep Learning (DL) solutions (e.g. Feed-forward Neural Networks, Convolutional Neural Networks) have had a major impact in numerous fields, such as Speech Recognition (e.g., Siri, Alexa), autonomous driving, Computer Vision, etc. More recently, however, a new DL technique called the Graph Neural Network (GNN) has been introduced, proving to be unprecedentedly accurate at solving problems that are formalized as graphs.
This is precisely why companies such as Google have invested substantial resources in exploring applications of GNNs, some of them as high-profile as AlphaFold (which recently made a breakthrough on the protein-folding problem) or the predictions of expected road traffic used by Google Maps. In this regard, the goal of this thesis is to continue investigating the potential of GNNs when applied to the field of Computer Networks. Our goal is to provide GNN-based solutions that make computer networks run more autonomously and efficiently.
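To make the core mechanism concrete, here is a minimal sketch of one round of GNN message passing over a network topology graph, written in plain NumPy. The aggregation and update functions are illustrative assumptions only, not the specific architecture used in our work.

```python
import numpy as np

def message_passing_step(node_feats, adjacency, W_msg, W_upd):
    """One simplified message-passing round of a GNN.

    node_feats : (n_nodes, d) current node embeddings (e.g. routers/links)
    adjacency  : (n_nodes, n_nodes) 0/1 adjacency matrix of the topology
    W_msg, W_upd : (d, d) weight matrices (learned in a real model)
    """
    # Each node builds a message from its current embedding.
    messages = np.tanh(node_feats @ W_msg)
    # Each node aggregates the messages of its neighbours (sum aggregation).
    aggregated = adjacency @ messages
    # Each node updates its embedding from the aggregated messages.
    return np.tanh(aggregated @ W_upd + node_feats)

# Toy 3-node topology: links 0-1 and 1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8))
Wm, Wu = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
x = message_passing_step(x, A, Wm, Wu)  # repeat T rounds, then read out per-node/per-link predictions
```

Because the same weights are applied over every node and edge, the model can be trained on small topologies and applied to larger or different ones, which is what makes GNNs attractive for computer networks.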
Recent advances in the field of Deep Reinforcement Learning (DRL) are attracting a lot of attention due to their potential for automatic control and automation. Breakthroughs from academia and industry (e.g., Stanford, DeepMind and OpenAI) are demonstrating that DRL is an effective technique for tackling complex optimization problems with many dimensions and non-linearities. However, training a DRL agent in large optimization scenarios still remains a challenge due to the computationally intensive operations performed during backpropagation.
The goal of this thesis is to enable the application of Deep Reinforcement Learning techniques to optimization problems in very large and complex scenarios. Inspired by recent related work from OpenAI (https://openai.com/blog/evolution-strategies/), we will explore time-efficient alternatives for training a Graph Neural Network-based DRL agent on large optimization scenarios in a reasonable amount of time.
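The OpenAI post referenced above proposes Evolution Strategies (ES): instead of backpropagating gradients, the policy parameters are perturbed with Gaussian noise and updated in the direction of the perturbations that obtained higher reward. A minimal sketch of that idea, independent of the GNN agent itself and with an illustrative reward function, could look as follows:

```python
import numpy as np

def evolution_strategies(reward_fn, theta, n_iters=200, pop_size=50,
                         sigma=0.1, lr=0.02, seed=0):
    """Black-box Evolution Strategies in the spirit of the OpenAI blog post.

    reward_fn : maps a flat parameter vector to a scalar reward
                (e.g. the return of the DRL agent in the network environment)
    theta     : initial flat parameter vector of the policy
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        # Sample a population of Gaussian perturbations of the parameters.
        eps = rng.normal(size=(pop_size, theta.size))
        rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
        # Standardize rewards and move theta towards the better perturbations.
        adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + lr / (pop_size * sigma) * eps.T @ adv
    return theta

# Toy example: maximize -||theta - 3||^2 (stands in for the agent's return).
theta = evolution_strategies(lambda t: -np.sum((t - 3.0) ** 2), np.zeros(5))
```

Since no gradients are backpropagated, the population evaluations are independent and can be distributed across many workers, which is precisely what makes this family of methods appealing for very large scenarios.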
This project aims to analyze the prediction capability of Optical Coherence Tomography Angiography (OCTA) images for Diabetes Mellitus (DM) and Diabetic Retinopathy (DR) in a large high-quality image dataset from previous research projects carried out in the field of Ophthalmology (Fundació La Marató TV3, Fondo Investigaciones Sanitarias, FIS). OCTA is a newly developed, non-invasive, retinal imaging technique that permits adequate delineation of the perifoveal vascular network. It allows the detection of paramacular areas of capillary non-perfusion and/or enlargement of the foveal avascular zone (FAZ), making it an excellent tool for the assessment of DR.
A more detailed description of the project can be found in
https://www.cs.upc.edu/~eromero/Downloads/Retina-TFM-Project-01.pdf
The project is proposed in collaboration with Javier Zarranz Ventura
(Institut Clínic d'Oftalmologia, ICOF, Hospital Clínic de Barcelona, and
Institut d'Investigacions Biomèdiques August Pi I Sunyer, IDIBAPS),
who would provide a large annotated database to develop the project. For further information, please contact Alfredo Vellido (avellido@cs.upc.edu) or Enrique Romero (eromero@cs.upc.edu).
Recent advances in the field of Deep Reinforcement Learning (DRL) are attracting a lot of attention due to their potential for automatic control and automation. Breakthroughs from academia and industry (e.g., Stanford, DeepMind and OpenAI) are demonstrating that DRL is an effective technique for tackling complex optimization problems with many dimensions and non-linearities. However, training a DRL agent in large optimization scenarios still remains a challenge due to the computationally intensive operations performed during backpropagation.
The goal of this thesis is to enable the application of Deep Reinforcement Learning techniques to optimization problems in very large and complex scenarios, in this case Data Center Networks. Inspired by recent related work from OpenAI (https://openai.com/blog/evolution-strategies/), we will explore time-efficient alternatives for training a Graph Neural Network-based DRL agent on large optimization scenarios in a reasonable amount of time.
Recent advances in the field of Deep Reinforcement Learning (DRL) are attracting a lot of attention due to their potential for automatic control and automation. Breakthroughs from academia and industry (e.g., Stanford, DeepMind and OpenAI) are demonstrating that DRL is an effective technique for tackling complex optimization problems with many dimensions and non-linearities. However, training a DRL agent in large optimization scenarios still remains a challenge due to the computationally intensive operations performed during backpropagation.
The goal of this thesis is to enable the application of Deep Reinforcement Learning techniques to optimization problems in very large and complex scenarios, in this case LEO satellite communication networks. Inspired by recent related work from OpenAI (https://openai.com/blog/evolution-strategies/), we will explore time-efficient alternatives for training a Graph Neural Network-based DRL agent on large optimization scenarios in a reasonable amount of time.
Web tracking technologies are extensively used to collect large amounts of personal information (PI), including the things we search for, the sites we visit, the people we contact, and the products we buy. Although it is commonly believed that this data is mainly used for targeted advertising, some recent works have revealed that it is exploited for many other purposes, such as price discrimination, financial credibility, insurance coverage, government surveillance, background scanning or identity theft.
The main objective of this project is to apply the most recent advances in Deep Learning, including Deep Reinforcement Learning and Graph Neural Networks, to uncover the particular methods used to track Internet users and collect PI.
This project will be useful for both Internet users and the research community, and will produce open source tools, real data sets, and publications revealing the most privacy-invasive practices.
Some preliminary results of our work in this area were recently published in Proceedings of the IEEE (IF: 9.237) and featured in a Wall Street Journal article.
More info at:
http://personals.ac.upc.edu/pbarlet/papers/web-tracking.survey2015.pdf
http://blogs.wsj.com/digits/2015/08/04/7-ways-youre-being-tracked-online-and-how-to-stop-it/