Lung cancer remains the leading cause of cancer-related deaths worldwide. AI has recently emerged as a transformative tool for enhancing medical decision-making. However, its widespread adoption faces several challenges, including data quality, model transparency, and interpretability. This thesis seeks to explore how innovative AI techniques can revolutionize lung cancer research and treatment, offering new opportunities to address these challenges. It aims to contribute to the broader application of AI in healthcare.
This project offers the candidate a unique opportunity to apply artificial intelligence techniques to real-world challenges in lung cancer research and treatment. As part of this thesis, the candidate will work with datasets, including patient records, genetic data, molecular alterations, treatment outcomes, and exposome data. These datasets will serve as the foundation for developing AI models that address critical challenges in lung cancer treatment, such as predicting patient outcomes and identifying optimal treatment strategies.
The candidate will focus on the following core tasks:
Data Exploration and Preprocessing: The candidate will gain experience in handling complex medical datasets by cleaning, preparing, and structuring the data to ensure it is suitable for advanced AI analysis.
Building AI Models: Using machine learning and deep learning techniques, the candidate will develop models aimed at predicting lung cancer progression, evaluating treatment efficacy, and understanding the impact of various environmental and genetic factors.
Interpretability and Explainability: A significant emphasis will be placed on making AI models interpretable and transparent. The candidate will explore techniques to ensure that the models produced are not only accurate but also explainable, providing healthcare professionals with clear insights into the models' predictions and decisions (a minimal sketch follows this task list).
Exploring Interaction Networks: The candidate will analyze interaction networks, studying relationships between patient genetics, environmental factors, and treatment responses to identify key drivers of lung cancer outcomes.
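To give a flavour of the model-building and explainability tasks, here is a minimal sketch in Python using scikit-learn on synthetic stand-in data. The feature names (pack_years, egfr_mutation, pm25_exposure, etc.) are hypothetical placeholders for the clinical, molecular, and exposome variables mentioned above, and permutation importance is only one of several interpretability techniques the candidate could explore.

```python
# Minimal sketch (hypothetical feature names, synthetic stand-in data):
# train a baseline outcome classifier and inspect which features drive it.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(40, 85, n),
    "pack_years": rng.gamma(2.0, 15.0, n),       # smoking exposure
    "egfr_mutation": rng.integers(0, 2, n),      # molecular alteration flag
    "tumor_size_mm": rng.normal(30, 10, n),
    "pm25_exposure": rng.normal(12, 4, n),       # exposome proxy
})
y = (0.03 * X["pack_years"] + 0.5 * X["egfr_mutation"]
     + 0.02 * X["tumor_size_mm"] + rng.normal(0, 1, n) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Model-agnostic explanation: how much does shuffling each feature hurt performance?
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, mean in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:15s} {mean:+.3f}")
```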
Throughout the project, the candidate will not only gain hands-on experience with cutting-edge AI tools and methodologies but also develop a deeper understanding of AI's role in healthcare. This project provides an impactful opportunity to contribute to a field where AI innovation can directly improve patient outcomes.
In this thesis, the focus is on understanding emergence in Large Language Models (LLMs). Emergence refers to complex behaviors that arise from interactions among individual components, even when those components lack those behaviors individually. LLMs exhibit surprising linguistic abilities beyond their constituent words or tokens. Assembly Theory (AT) provides a framework for quantifying complexity without altering fundamental physical laws. By applying AT to LLMs, this research aims to uncover how emergent properties arise from the interplay of simple components.
What is Emergence? Emergence refers to the phenomenon where a complex system exhibits properties or behaviors that its individual components do not possess in isolation. These emergent features arise only when the components interact within a broader context. In philosophy, science, and art, emergence plays a pivotal role in theories related to integrative levels and complex systems. For example, life, as studied in biology, emerges from the underlying chemistry and physics of biological processes.
Emergence in Large Language Models Recent research has highlighted emergent behavior in Large Language Models (LLMs). These models, such as GPT-3, exhibit surprising capabilities beyond their individual components (words or tokens). The interactions between countless parameters give rise to emergent linguistic abilities, including natural language understanding, generation, and context-based reasoning. For further details, refer to https://arxiv.org/pdf/2206.07682
What is Assembly Theory (AT)? Assembly Theory (AT) provides a novel framework for quantifying complexity without altering fundamental physical laws. Unlike traditional point-particle models, AT defines objects based on their potential formation histories. These "objects" can exhibit evidence of selection within well-defined boundaries. AT allows us to explore emergent properties by considering how components assemble into coherent entities, shedding light on the intricate dynamics of complex systems. See this paper for further information: https://www.nature.com/articles/s41586-023-06600-9
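To make the connection concrete, the sketch below illustrates the core AT intuition that an object's complexity relates to the number of joining operations on a shortest assembly pathway, where previously built parts can be reused. It computes only a greedy, BPE-style upper bound on such a count for a token sequence; it is not the exact assembly index (which is computationally hard), and the example sentence is invented.

```python
# Minimal sketch: a greedy upper bound on an "assembly index" for a token
# sequence. Illustration only; NOT the exact assembly index from AT.
from collections import Counter

def greedy_assembly_upper_bound(tokens):
    seq = list(tokens)
    joins = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:          # no reusable sub-assembly left to exploit
            break
        joins += 1             # one join creates the reusable composite
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                merged.append(pair)   # reuse the composite at no extra creation cost
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    joins += max(len(seq) - 1, 0)     # concatenate the remaining parts left to right
    return joins

sentence = "the cat sat on the mat because the cat was tired".split()
print(greedy_assembly_upper_bound(sentence))             # repeated structure: fewer joins
print(greedy_assembly_upper_bound(list("abcdefghijk")))  # no repetition: length - 1 joins
```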
Research Goals: This thesis aims to apply Assembly Theory to understand emergence in Large Language Models. We will test AT on this particular instance of the emergence problem.
UPC and Nestlé are offering a new position to develop a TFM (master's thesis) in the field of Machine Learning and Cybersecurity. The TFM will be fully funded (internship) and carried out in collaboration with Nestlé's Global Security Operations Center and UPC.
Cybersecurity is becoming an increasingly important challenge for companies and individuals alike. While big names used to be the main targets in the past, as people's lives move online, anyone is nowadays a potential target for any kind of cyber-attack, ranging from phishing to ransomware or serious privacy breaches. To fight these ever-evolving threats, Machine Learning is increasingly being used behind the scenes to design systems capable of self-learning, boosting detection rates and improving overall resilience to unknown attacks. As AI-based solutions penetrate products across the industry, a new and often overlooked kind of threat is becoming more prominent and dangerous: adversarial machine learning (AML).
AML focuses on designing specific inputs that deceive a previously trained Machine Learning model into misclassifying them for a specific purpose. One of the main flaws of state-of-the-art Machine Learning and Deep Learning algorithms is the assumption that the data they receive is benign, which is generally the case but does not hold when an adversarial input is received. The motivation behind fooling an ML model into thinking that, for example, a new sample is benign when it is in fact malicious ranges from pure research to serious real-life consequences, such as an autonomous car misclassifying a stop sign (and thus causing a fatal accident) or a disease being wrongly diagnosed because of a slightly manipulated magnetic resonance image.
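As a concrete example, the sketch below implements the classic Fast Gradient Sign Method (FGSM) against a toy PyTorch classifier: an input is nudged one step in the direction that increases the model's loss, which is often enough to flip the prediction. The model and data are random placeholders, not any real detector or dataset from this project.

```python
# Minimal FGSM sketch (PyTorch): perturb an input in the direction that
# increases the classifier's loss. Model and data below are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "detector": 20 input features -> benign (0) / malicious (1)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 20)                      # a sample the model currently handles
y_true = model(x).argmax(dim=1)             # use the model's own label for illustration

def fgsm(model, x, y, eps=0.3):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One step in the sign of the gradient: small, structured noise.
    return (x_adv + eps * x_adv.grad.sign()).detach()

x_adv = fgsm(model, x, y_true)
print("original prediction:", y_true.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
print("max perturbation per feature:", (x_adv - x).abs().max().item())
```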
Cybersecurity is no exception: companies wrongly assume that once the latest AI-based product is deployed in their network, their employees are safe.
We want to demonstrate experimentally that fNIRS data carries neural activity features that complement the information captured by the model, and that augmenting the model with these features improves its performance. To this end, we will collect data from participants and test how different Transformer models benefit from different types of fNIRS attention masks.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging technique that measures changes in oxygenated (HbO2) and deoxygenated (HbR) hemoglobin in the cerebral cortex. Due to its portability and low cost, fNIRS has been used in Brain-Computer Interface (BCI) applications, for characterizing hemodynamic responses to varying stimuli, and for investigating auditory and visual-spatial attention during Complex Scene Analysis (CSA). In this project, we want to design and implement an fNIRS study with the goal of studying how neural and BCI outcomes can improve the training of LAI models' attention mechanisms (e.g., Transformer attention) during reading comprehension tasks (e.g., participants will judge the quality of generated text). We will collect data from participants and test how different Transformer models benefit from different types of fNIRS attention masks.
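One possible way such masks could enter the model, shown below as a minimal sketch, is as an additive bias on the scaled dot-product attention logits. The saliency values, shapes, and weighting factor are illustrative assumptions, not the project's final design.

```python
# Minimal sketch: inject an external (e.g., fNIRS-derived) token-level saliency
# as an additive bias on scaled dot-product attention logits. All shapes,
# values and the weighting factor `alpha` are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d_model = 6, 16

q = torch.randn(1, seq_len, d_model)   # queries (batch, tokens, dim)
k = torch.randn(1, seq_len, d_model)   # keys
v = torch.randn(1, seq_len, d_model)   # values

# Hypothetical fNIRS saliency per token (e.g., an HbO2 response mapped onto
# the tokens read in that window), normalized to [0, 1].
fnirs_saliency = torch.tensor([[0.1, 0.8, 0.9, 0.2, 0.6, 0.1]])

def biased_attention(q, k, v, saliency, alpha=2.0):
    logits = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)   # (1, tokens, tokens)
    bias = alpha * torch.log(saliency + 1e-6)                # favour salient key tokens
    logits = logits + bias.unsqueeze(1)                      # broadcast over query positions
    weights = F.softmax(logits, dim=-1)
    return weights @ v, weights

out, attn = biased_attention(q, k, v, fnirs_saliency)
print(attn[0].sum(dim=-1))   # each row still sums to 1
print(attn[0, 0])            # attention of token 0, pulled toward salient tokens
```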
The candidate will:
We want to demonstrate experimentally that eye tracking (ET) data carries linguistic features that complement the information captured by the model, and that augmenting the model with these features improves its performance. To this end, we will collect data from participants and test how different Transformer models benefit from different types of ET attention masks.
Eye movement features are direct signals of human attention distribution and are inexpensive to obtain, which has inspired researchers to augment language models with eye-tracking (ET) data. In this project, we want to investigate how to operationalise ET features, such as first fixation duration (FFD) and total reading time (TRT), as cognitive signals to augment LAI models' attention mechanisms (e.g., Transformer attention) during training. We will collect data from participants and test how different Transformer models benefit from different types of ET attention masks.
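One possible way to operationalise these features, sketched below, is to turn per-token TRT into a target attention distribution and add a KL-divergence regularizer that nudges the model's attention toward it during training. The TRT values, the choice of attention row, and the loss weight are illustrative assumptions.

```python
# Minimal sketch: turn token-level total reading time (TRT) into a target
# attention distribution and penalize the divergence between the model's
# attention and that target. The TRT values and the weight `lam` are invented.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len = 6

# Hypothetical per-token TRT in milliseconds, aligned to the model's tokens.
trt_ms = torch.tensor([120.0, 310.0, 450.0, 90.0, 260.0, 80.0])
et_target = trt_ms / trt_ms.sum()                 # human "attention" distribution

# Stand-in for one attention head's row (e.g., from the first token),
# produced here from random logits instead of a real Transformer.
model_logits = torch.randn(seq_len, requires_grad=True)
model_attn = F.softmax(model_logits, dim=-1)

task_loss = torch.tensor(0.0)                     # placeholder for the real task loss
lam = 0.1                                         # regularization strength (assumed)
# KL(ET target || model attention): small when the model attends like readers do.
et_loss = F.kl_div(model_attn.log(), et_target, reduction="sum")
loss = task_loss + lam * et_loss
loss.backward()
print("ET regularizer:", et_loss.item())
print("gradient on attention logits:", model_logits.grad)
```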
The candidate will:
Recent advancements in nanotechnology have enabled the concept of the "Human Intranet", where devices inside and on our body can sense and communicate, opening the door to multiple exciting applications in the healthcare domain. This thesis aims to delve into the computing, communication, and localization aspects of the "Human Intranet" and how to practically realize them in the next decade.
Recent advancements in nanotechnology have enabled the development of means for sensing and wireless communications with unprecedented miniaturization and capabilities, to the point that they can be introduced into the gastrointestinal tract inside a pill or into the bloodstream in the form of passively flowing nanomachines.
This opens the door to the idea of intra-body communication networks, that is, a swarm of nanosensors inside the human body that use communications to coordinate their actions to sense and localize specific events (lack of oxygen, biomarkers, etc.). This can lead to applications such as continuous monitoring of diabetes, detection and localization of cancer micro-tumors, or early detection of blood clots. These possibilities are currently investigated by our team at N3Cat (www.n3cat.upc.edu).
In this context, we are looking for excellent and self-motivated individuals who are eager to work on developing AI schemes (based on graph neural networks or multi-agent RL) for the detection and localization of events inside the human body. Data will be gathered with an in-house simulator that integrates mobility models (BloodVoyagerS) and communication models (TeraSim).
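As a minimal, framework-free illustration of the GNN direction, the sketch below runs one round of message passing over a toy "body region" graph and scores each node for the presence of an event. The graph, sensor features, and labels are synthetic stand-ins, not output of BloodVoyagerS or TeraSim.

```python
# Minimal sketch: GNN-style message passing over a toy "body region" graph,
# scoring each node for the presence of an event. Synthetic stand-in data.
import torch
import torch.nn as nn

torch.manual_seed(0)
num_nodes, feat_dim, hidden = 8, 4, 16

# Undirected toy graph as an edge list (region i <-> region j).
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7], [7, 0]])
edge_index = torch.cat([edges, edges.flip(1)], dim=0).t()   # (2, num_edges), both directions

x = torch.randn(num_nodes, feat_dim)          # per-region aggregated nanosensor readings
y = torch.zeros(num_nodes); y[3] = 1.0        # ground truth: the event sits in region 3

class TinyGNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden)
        self.upd = nn.Linear(feat_dim + hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, edge_index):
        src, dst = edge_index
        messages = torch.relu(self.msg(x[src]))                  # one message per edge
        agg = torch.zeros(x.size(0), messages.size(1))
        agg = agg.index_add(0, dst, messages)                    # sum messages per target node
        h = torch.relu(self.upd(torch.cat([x, agg], dim=1)))     # update with own features
        return self.out(h).squeeze(-1)                           # per-node event score (logit)

model = TinyGNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x, edge_index), y)
    loss.backward()
    opt.step()
print("predicted event region:", model(x, edge_index).argmax().item())
```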
Quantum computers promise exponential improvements over conventional ones due to the extraordinary properties of qubits. However, quantum computing faces many challenges relative to the scaling of the algorithms and of the computers that run them. This thesis delves into these challenges and proposes solutions to create scalable quantum computing systems.
Quantum computers promise exponential improvements over conventional ones due to the extraordinary properties of qubits. However, a quantum computer faces many challenges relative to the movement of qubits, which is completely different from the movement of classical data. This thesis delves into these challenges and proposes solutions to create scalable quantum computing systems and the algorithms that run within them, following the current European projects at N3Cat (www.n3cat.upc.edu) on scalable quantum computing.
The interested candidate will join a group of several PhD students, in collaboration with Universitat Politècnica de València, and work on ONE of the following areas:
This thesis aims to explore the possibilities of a newer and less studied class of neural networks called Graph Neural Networks (GNNs). While convolutional networks excel at computer vision and recurrent networks at temporal analysis, GNNs are able to learn and model graph-structured relational data, with huge implications in fields such as quantum chemistry, computer networks, and social networks, among others.
Since not all neural networks fit all problems, and relational data is present in a wide variety of aspects of our daily life, the main focus of this thesis at N3Cat (www.n3cat.upc.edu) and BNN-UPC (www.bnn.upc.edu) is to explore the possibilities of Graph Neural Networks (GNNs), which learn and model graph-structured relational data. We are looking for students willing to study the uses, architectures, and algorithms of GNNs. To this end, the candidate will work on ONE of the following areas: