
Women in Artificial Intelligence: mitigating the gender bias

Last Friday, 8 March, I was invited to speak at a lunch event of the European Commission intended to provide a scientific perspective on the challenges of gender equality. Several researchers from the Joint Research Centre of the European Commission presented their studies on the monitoring of gender equality in European regions, the mechanisms and challenges behind girls' interest in STEM during adolescence, and the importance of making sure digital competences are widespread among both men and women. I gave a talk titled “Women in Artificial Intelligence: mitigating the gender bias”. This text is a summary of my talk.

Motivation

I am left-handed. Since I was a child I have experienced the challenges of using tools designed by and for right-handed people. Simple activities such as cutting paper with scissors, opening a food can or writing can become more difficult without the right tool.

I am also a woman in a male-dominated research field. I was one of the 10% of women in my engineering class, the first female PhD student of the Music Technology Group (together with Beesuan Ong, 2 out of 56), the first female president of the International Society for Music Information Retrieval, and one of very few women in each of these settings. When I started the MIR lab in 2015, I was coordinating a group of 11 male researchers, and since then I have tried to build a more diverse team, not only in gender but also in discipline and culture. I have also been very active in the Women in MIR initiative.

Artificial Intelligence systems can be seen as machines or agents capable of observing their environment and taking actions towards a certain goal (Craglia et al., 2018). In particular, my research has mainly dealt with machine learning (ML) methods: I have contributed to systems that analyze large-scale music data (and human annotations of this data) to find patterns or perform classification.

Data-driven machine learning methods are currently exploited in many applications that we use on a daily basis, such as internet search engines, music recommendation systems or navigation apps. They are also used in professional contexts, such as medical diagnosis or judicial decision-making.

At the HUMAINT project, we research the impact that AI systems have and will have on human behaviour, mostly on our cognitive capabilities and socio-emotional development. One of our first conclusions has been the need for diverse teams to develop AI technologies, so that these technologies are meaningful for everyone.

One of the major sources of diversity is gender, of course, as the AI field is also a male-dominated one. According to Reuters (2017), the percentage of female employees in technical roles in major ML companies is only around 20%. The main problem is that, when those male developers create their systems, they incorporate their own biases, often unconsciously, into the different stages of system creation, such as data sampling, annotation, algorithm selection, evaluation metrics and the human-algorithm user interface (Tolan, 2018). As a result, AI systems tend to reflect the tastes of their male developers.
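To make the data sampling stage concrete, here is a minimal sketch in Python, using purely synthetic, hypothetical data and scikit-learn (it does not model any real system): a classifier trained on a sample that over-represents one group performs noticeably worse on the under-represented group, even though nothing else in the pipeline changed.

```python
# Hedged sketch: how a skewed training sample alone can produce
# group-dependent error rates. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; `shift` moves the class boundary per group,
    # mimicking groups whose signal is distributed differently.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Biased training sample: 90% group A, 10% group B.
Xa, ya = make_group(900, shift=0.0)  # over-represented group
Xb, yb = make_group(100, shift=1.5)  # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate each group separately on fresh, equally sized test sets.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    Xt, yt = make_group(1000, shift)
    print(f"accuracy on group {name}: {model.score(Xt, yt):.2f}")
```

Running the sketch prints a clearly higher accuracy for the over-represented group; reporting such per-group scores, rather than one aggregate number, is the basic check behind the examples that follow.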

Five examples of gender bias in AI systems

  1. Several authors have found that voice and speech recognition systems perform worse for women than for men (Tatman, 2016; Times, 2011; Roger & Pendharkar, 2003; Nicol et al., 2002).
  2. Face recognition systems have also been found to produce more errors on female faces (Buolamwini and Gebru, 2018).
  3. Recruiting tools based on text mining can also inherit gender bias from the data they are trained on.
  4. Search engines, widely used by all citizens, can also reflect gender biases: e.g. if we look for “work” or “go shopping” in image search engines, we may find more photos of men for the former and more photos of women for the latter, reflecting the stereotypes of our societies that are present in the data and annotations.
  5. Gender bias is also very relevant in sensitive applications such as health care or criminal justice (Tolan et al., 2018b), where traditionally there are manuals and very strong mechanisms that help humans take these important and sensitive decisions in a fair way. We then need to establish equivalent mechanisms for AI systems, following best engineering practices and smart evaluation strategies (see the sketch after this list).
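As a sketch of what such an evaluation strategy can look like, the snippet below (Python; the arrays are hypothetical placeholders rather than real data) reports a decision system's false positive rate separately for each gender instead of a single aggregate score, the kind of disaggregated check used in the studies cited above.

```python
# Hedged sketch: disaggregated evaluation of a binary decision system.
# y_true / y_pred / group are hypothetical placeholders, not real data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that the system wrongly flags as positive."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 1, 0])   # ground-truth outcomes
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])   # system decisions
group  = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])  # gender per record

# A single aggregate score can hide large gaps between groups.
for g in np.unique(group):
    mask = group == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")
```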

 

Three steps to improve gender diversity

These examples have made the machine learning community aware of such biases, and several initiatives are currently implementing measures to make the AI research and development field more diverse.

First, monitoring is important to follow developments and assess the impact of policies and initiatives. In the case of AI, AI WATCH has been established by the European Commission (inside the JRC) to monitor the advancement, uptake and impact of AI in Europe. In this context, the JRC and the DTIC of the UPF are collaborating on the divinAI initiative to measure and monitor how diverse major AI conferences are. We are organizing our first Hackfest event on June 1st 2019; you are all welcome!

Second, we need to give more visibility to the women already in the field so that their work has more impact. One example of such visibility efforts is the DTIC Wisibilízalas contest, also carried out at UPF in Barcelona, which aims to make all stakeholders (boys, girls, teachers and parents) aware of the relevance of women in technological fields and to provide good role models for the next generations.

Third, we need mentoring programs, such as the one developed by the Women in Music Information Retrieval group, to make sure that the women working in the field do not leave it and can reach senior and impactful roles.

 

Conclusions

We all have conscious and unconscious biases when talking to and about women in technology. Artificial Intelligence has the potential to overcome these biases, but also to inherit and perpetuate them.

We need more women in AI to make sure AI systems are developed WITH women and FOR women's welfare.

 

What's next? 

If you are interested in this topic, please check out and contribute to our divinAI initiative for monitoring, or to any initiative with similar goals!

References

  • Buolamwini, J., Gebru, T. (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81:1–15.
  • Craglia, M. et al. (2018) Artificial Intelligence: A European Perspective. Joint Research Centre. https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-research-reports/artificial-intelligence-european-perspective
  • Tolan, S. (2018) Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges. JRC Technical Report.
  • Tolan, S., Miron, M., Castillo, C., Gómez, E. (2018b) Performance, fairness and bias of expert assessment and machine learning algorithms: the case of juvenile criminal recidivism in Catalonia. Algorithms and Society Workshop.

 

Friday, 8 March, 2019 - 12:00