Gender Competency

The TFM regulation states that the TFM must address the gender competency. The regulation distinguishes two aspects:

  • inclusive language, references, and examples; and
  • technical aspects related to data management and analysis: issues related to equity, where possible biases are identified and assessed both in the data and in the processes carried out for data management and analysis, and actions taken to eliminate or mitigate such biases.

This document complements the regulation to support students in developing the gender competency. Importantly, talk to your supervisor / tutor about how to develop this competency properly, since the approach may vary substantially depending on the topic you tackle. Treat this document as a baseline to start working from.

Inclusion and Diversity in Writing

ACM explains these goals as follows. Diversity is achieved when the individuals around the table are drawn from various backgrounds and experiences. Diversity leads to a breadth of viewpoints, reasoning, and approaches (also referred to as "the who"). Inclusion is achieved when the environment is characterized by welcoming and embracing diversity ("the how"). Both are important in our writing and other forms of communication, such as posters and talks. Thus, bear in mind the following considerations when writing your master's thesis final report.


In computing, be mindful of not using language or examples that further the marginalization, stereotyping, or erasure of any group of people, especially historically marginalized and/or under-represented groups (URGs). Exclusionary or indifferent treatment can, of course, arise unintentionally. Be vigilant and actively guard against such issues in your writing.

Examples of exclusionary and other non-inclusive writing to consider avoiding:

  • Implicit assumption: An example of an implicit assumption in constraints: "Every person has a mother and a father." This example is exclusionary and potentially hurtful to single-parent households and people with same-sex parents.
  • Oppressive terminology: Using the term "Master-Slave" to describe a distributed data system architecture can be hurtful to people whose families have suffered the inhumanity of enslavement. The article "Terminology, Power, and Inclusive Language in Internet-Drafts and RFCs" proposes alternatives to oppressive terminology often used in computer science.
  • Marginalization of URGs: An example of attribute domains: "The Gender attribute is either Male or Female." This example is exclusionary and potentially hurtful to people who are intersex, transgender, third gender, two-spirit, or have other non-binary gender identities.
  • Lack of accessibility: Using colour alone to convey information in a plot when good alternative data visualization schemes exist. This design strategy can be exclusionary to people who are colour-blind. Please consider additionally using patterns, symbols, and textures to emphasize and contrast visual elements in graphs and figures, rather than relying on colours alone. Use a colour-blind-friendly palette designed with accessibility in mind, and avoid problematic colour combinations such as green/red or blue/purple.
  • Stereotyping: Reinforcing gender stereotypes in names or examples of roles, e.g., using only feminine names or presentations for personal secretary or assistant roles.
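To make the accessibility point above concrete, here is a minimal sketch of one common approach: pairing the Okabe–Ito colour-blind-safe palette with distinct markers and hatch patterns, so that colour is never the only visual cue. The helper function and the pairing are illustrative choices, not prescribed by any particular plotting library.

```python
# Okabe-Ito palette: eight colours chosen to remain distinguishable
# under the most common forms of colour-vision deficiency.
OKABE_ITO = [
    "#E69F00",  # orange
    "#56B4E9",  # sky blue
    "#009E73",  # bluish green
    "#F0E442",  # yellow
    "#0072B2",  # blue
    "#D55E00",  # vermillion
    "#CC79A7",  # reddish purple
    "#000000",  # black
]

# Redundant encodings: each series also gets a marker (for line/scatter
# plots) and a hatch pattern (for bar charts), so no series is
# distinguished by colour alone.
MARKERS = ["o", "s", "^", "D", "v", "P", "X", "*"]
HATCHES = ["/", "\\", "x", ".", "o", "+", "-", "|"]

def series_style(index):
    """Return a (colour, marker, hatch) triple for the index-th data series."""
    i = index % len(OKABE_ITO)
    return OKABE_ITO[i], MARKERS[i], HATCHES[i]
```

These triples can then be passed to a plotting library of your choice, e.g., as the `color`, `marker`, and `hatch` arguments in matplotlib.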


Going further, please also consider actively raising the representation of URGs in your writing. Diversity of representation helps create an environment and community culture that could ultimately make our field more welcoming and attractive to people from URGs. This is a small but crucial step you can take towards celebrating and improving our community’s diversity.

Examples of infusing diversity into writing to consider adopting: 

  • Embracing different cultures: Names of people are a visible way to enhance the diversity of representation in writing. Instead of reusing overused names in computing such as Alice and Bob, consider using names from various languages, cultures, and nationalities, e.g., Alvarez and Bano. Draw on the many online resources on this front for ideas, e.g., the Wikipedia page Names by culture, which lists names across different cultures.
  • Embracing differences in figures: Depictions of people or people-like icons in illustrations are also an excellent way to enhance representation diversity. Consider depicting people of different gender presentations, skin colours, ability status, and other visible attributes of people.
  • Embracing gender diversity in pronouns: Consider using a variety of gender pronouns across your named examples consciously, including "he/him/his," "she/her/hers," and "they/them/theirs". Likewise, consider using gender-neutral nouns when referring to generic roles, e.g., "chairperson" or just "chair" instead of "chairman," and gender-neutral pronouns for such roles.


Finally, you should assess your data-driven techniques from a technical perspective, especially when they make decisions about people. Please consider explicitly discussing whether a technique may lead to disparate impact on different groups, especially URGs, and examine its ethical and societal implications. For example, the article "What are important ethical implications of using facial recognition technology in health care?" discusses the potential for disparate impact of facial recognition in healthcare and strategies to avoid or reduce harm. The SIGMOD Blog article "Data, responsibly" gives a comprehensive overview of the dimensions of and approaches to responsible data management. We hope our community can help spread this culture of responsibility and awareness of the unintended negative consequences our work may have within the larger computing landscape.

From a technical perspective, this means that in your TFM you should strive for data equity. Data equity implies analysing implicit biases (both in the data and in the algorithms) to improve fairness. Incorporating these aspects as first-class citizens in a data science project is known as responsible data science. Responsible data science, however, goes beyond fairness: it also involves transparency of data and algorithms (also known as interpretability / explainability), legal compliance (e.g., with the GDPR), and privacy and data protection.
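As a minimal illustration of one common (though by itself insufficient) bias check, the sketch below computes the disparate impact ratio — the lowest positive-outcome rate across groups divided by the highest. The "four-fifths rule" treats values below 0.8 as a warning sign. The group labels and sample decisions are hypothetical.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Ratio of the lowest to the highest positive-outcome rate
    across groups. Values below 0.8 commonly trigger further scrutiny.

    `decisions` is a list of (group, outcome) pairs, with outcome in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical hiring decisions: (group, hired?)
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 75% hired
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]   # group B: 25% hired
print(disparate_impact(sample))  # 0.25 / 0.75 = 0.333..., well below 0.8
```

Note that a single aggregate metric like this can mask other forms of unfairness; it is a starting point for the bias analysis described above, not a substitute for it.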

The following seminal articles further elaborate on these concepts. You are strongly advised to read them before writing your TFM final report. Also discuss these terms with your supervisor / tutor, since there are likely relevant articles closely related to your TFM topic that can help you better address the technical aspects of responsible data science.

Acknowledgments and Further Reading

These instructions are based on the hints and tips provided by the EDBT conference, which are in turn drawn from the following sources: