On the occasion of International Women’s Day 2024, our network JuWinHPC organised a series of events with the friendly support of the FZJ’s Equal Opportunities Bureau (BfC) and the Equal Opportunities Officer. The kick-off was a thought-provoking, engaging presentation by our colleague Carolin Penke titled “What could possibly go wrong – Dangers of recent AI advancements from a diversity perspective”. Afterwards, we discussed the “Gender Dimension in AI” with about 30 participants in the room, while roughly ten more colleagues followed the entire session online.

Caro started her talk by reminding us that AI tools are essentially black boxes creating output based on user-defined inputs and (more or less well-known) training data. Numerous examples demonstrated how easily AI can manipulate our thoughts and our perception of the world, and how it ultimately reinforces existing trends from the training data in its output. Recruiting tools designed to support hiring processes turned out to be biased against women, probably because that is exactly what the artificial intelligence observed in the data it had access to: women are discriminated against. And a study has shown that AI-based job ad suggestion mechanisms on LinkedIn presented, on average, better paid jobs to “male” profiles than to “female” ones.

Looking at the new technology of generative AI, i.e., tools generating text or images similar to the data they were trained on, a lot is going on that is likely not what was actually intended. Microsoft’s AI chatbot Tay constitutes a cautionary tale: it became a racist a$$hole within a day, picking up the language it encountered on Twitter. In 2021, Twitter user Dora Vargha pointed out the sexist nature of Google Translate. The algorithm chose for the user which gender to assign to sentences translated from Hungarian, a gender-neutral language, into English – essentially replicating the traditional role model of the hard-working, smart, STEM-affine man and the beautiful, caring housewife who, at best, holds an assistant position. Caro presented multiple examples of AIs generating toxic outputs or replicating prejudices and stereotypes.

But how could such issues be addressed? In a nutshell, Caro presented three mitigation strategies that can be employed by providers of generative AI services: the training data can be filtered and augmented (start with unbiased data in the first place), toxicity and bias can be reduced in the fine-tuning step, and prompt engineering may be employed to set guidelines for a chatbot. Each of these strategies comes with its own pitfalls and might lead to “over-correction” or unwanted results. One might actually want a non-balanced group of students in a generated “photo” of a physics class (e.g., to depict such imbalances for an article). A historical context might be highly relevant for the AI’s output as well – or would you expect an Asian-looking woman or a black man to be “1943 German soldier[s]”? Thus, other important mitigation strategies are transparency, education and – ultimately – awareness.
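As a purely illustrative sketch of the first strategy, the following Python snippet filters a toy corpus with a placeholder block list and then adds gender-swapped counterfactual copies of the remaining sentences. All word lists and sentences here are made up for this example; real pipelines rely on learned toxicity classifiers and far richer augmentation.

```python
# Illustrative sketch of "filter and augment the training data".
# The block list, the swap table and the toy corpus are hypothetical placeholders.

TOXIC_TERMS = {"idiot", "stupid"}   # placeholder block list
GENDER_SWAPS = {                    # counterfactual word pairs
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "man": "woman", "woman": "man",
}

def is_clean(sentence: str) -> bool:
    """Keep only sentences that contain no term from the block list."""
    return not any(token in TOXIC_TERMS for token in sentence.lower().split())

def gender_swap(sentence: str) -> str:
    """Create a counterfactual copy of a sentence by swapping gendered words."""
    return " ".join(GENDER_SWAPS.get(token, token) for token in sentence.split())

def prepare_corpus(raw_sentences: list[str]) -> list[str]:
    """Filter out toxic sentences, then add a gender-swapped copy of each survivor."""
    clean = [s for s in raw_sentences if is_clean(s)]
    return clean + [gender_swap(s) for s in clean]

if __name__ == "__main__":
    corpus = ["she works as a nurse", "he works as an engineer", "what an idiot"]
    print(prepare_corpus(corpus))
```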

Class room with a diverse group of students, generated with ChatGPT in December 2023
Class room with mostly white-male students, generated with ChatGPT in March 2024
Image by svstudioart on Freepik

To counterbalance the previously discussed problems, Caro also briefly touched on the potential benefits of AI. It can be a powerful tool for accessibility and inclusion and hold up a mirror to our society. It can “make implicit bias visible” and encourage an open discourse around this topic. Our discussion round opened with the general question “How can we encourage the responsible use of AI?”

Carolin giving her introductory talk at the JSC Rotunda

The first question raised by the audience was how much we need to teach ethics to an artificial intelligence. Caro explained that there is no “world model” into which concepts such as ethics could be integrated, and that creating an ethics framework that the majority agrees with is already difficult for humans. Also, there is no universal understanding of “right” and “wrong”.

Regarding the aforementioned example of the generated images of German World War II soldiers, the statement was made that a model asked for a historical scene should reflect only its training data, without the built-in tendency to aim for diversity in all regards (as is nowadays important, e.g., in US media). Only when generating a modern situation should the “ideal” world be generated. But is there enough such training data out there? Caro mentioned that reacting properly to a given context (historical vs. contemporary) is a challenging subject of current AI research and development.

This discussion led us to the question of whether an a-posteriori filter might be a solution that allows the generation of “bad examples” (like mostly white-male physics students in a lecture room). Unfortunately, this adds another level of complexity, as it requires training the checking model itself. It might thus actually be easier to educate AI users to be critical about the outputs.
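To make the idea more concrete, here is a minimal sketch of such an a-posteriori filter. It assumes two hypothetical components supplied by the caller: a text generator and a separately trained checking model that scores toxicity. Neither is a real library call; the point is only to show where the extra model would sit.

```python
# Sketch of an a-posteriori output filter. generate() and toxicity_score() are
# hypothetical stand-ins: the former wraps some text generator, the latter a
# separately trained checking model returning a score in [0, 1].

from typing import Callable

def filtered_generate(prompt: str,
                      generate: Callable[[str], str],
                      toxicity_score: Callable[[str], float],
                      threshold: float = 0.5,
                      max_attempts: int = 3) -> str:
    """Sample from the generator and re-sample whenever the checking model flags the output."""
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if toxicity_score(candidate) < threshold:
            return candidate
    # Fall back to a refusal if every attempt was flagged.
    return "Sorry, I cannot generate a suitable answer for this request."
```

Note that even in this toy version the threshold encodes a value judgement, which brings back the question of who decides what counts as acceptable output.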

Probably everyone in the room agreed that it is hard to decide when to apply a “diversity tag” (i.e., generate ideal results) and when to explicitly avoid it. Using a tagging mechanism to select training data in the first place might vastly reduce the amount of available, “valid” data. And, yet again, it is hard to decide what is actually “right” or “wrong”, or rather what the desired kind of data is. Simple rules could in turn lead to new problems: anatomical terms for body parts are desirable in generated medical or biological texts, while they might be perceived as swear words in other contexts.

In the end, any model will only be as smart as we (can) teach it to be. But how could issues and biases in the training data be fixed, such as the huge gap in medical research with regard to women (studies are often conducted on men only), or the fact that women currently earn lower salaries than men on average (which should probably not be reproduced by an AI used in hiring processes)? Caro pointed out that language models should only be used if no better alternatives are available. When considering, e.g., a model to diagnose endometriosis, a smaller, dedicated model is easier to understand and control than one built on large language models. Such models need to be designed carefully with suitable data, which makes it possible to avoid the inherent biases of large language models.

At this point, we arrived at the almost philosophical question of who defines “right” and “wrong”. In countries with freedom of choice and diverse opinions, models may reflect diversity as a guiding principle – and so an individual might not like an answer generated by such an AI model. Personalised models, in turn, can reinforce the user’s opinions and lead to even bigger and more segregated digital bubbles. A starting point might be to prevent answers that are illegal, i.e., to solve such clear-cut cases first.

It might be a good idea to train an AI system to ask the user more clarifying questions. For example, a text generation AI could prompt the user to choose the preferred “gender version”, e.g., when translating gender-neutral English words (students, teachers, professors, scientists) into a language like German where a choice needs to be made. Options are the male form, the female form, participle forms like “Studierende”, or by-design neutral yet grammatically challenging versions like “Professor:in”. Implementing such prompts would also raise the users’ awareness that an actual choice has to be made. Moreover, other choices like US vs. British English could be left to the user as well.
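As a toy illustration of such a clarification step, the following sketch asks the user which German form to use instead of silently choosing one. The option table covers only a single example word and is entirely made up; it is not a real translation API.

```python
# Toy example of a translation step that asks for clarification instead of
# silently picking a gender. The option table is a hypothetical placeholder.

GERMAN_FORMS = {
    "students": {
        "male plural": "Studenten",
        "female plural": "Studentinnen",
        "participle form": "Studierende",
        "gender colon": "Student:innen",
    },
}

def translate_with_clarification(word: str) -> str:
    """Ask the user which German form to use before returning a translation."""
    options = GERMAN_FORMS.get(word.lower())
    if options is None:
        raise ValueError(f"No German forms stored for '{word}' in this toy example")
    print(f"Which German form of '{word}' do you want?")
    labels = list(options)
    for number, label in enumerate(labels, start=1):
        print(f"  {number}) {label}: {options[label]}")
    choice = int(input("Your choice: "))
    return options[labels[choice - 1]]

if __name__ == "__main__":
    print(translate_with_clarification("students"))
```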

We concluded our discussion with the statement that, in the end, it is about generating a “realistic” vs. an “ideal” world. There needs to be a trade-off between the bias in the data and the user’s wishes. What the user wants depends on many factors such as context and use case.

Group photo with all participants

Carolin Penke is part of the Accelerating Devices Lab at the Jülich Supercomputing Centre. In the OpenGPT-X project, she is working on training large language models on supercomputers. Before focusing on HPC and AI, she conducted mathematical research in numerical linear algebra, developing novel and efficient algorithms for structured eigenvalue problems arising in electronic structure theory.

About Anna Lührs

Anna Lührs started to work at the JSC in 2008 as an apprentice (Mathematisch-Technische Softwareentwicklerin, MaTSE) and stayed on part-time during her master’s programme. For her master’s thesis she developed an image segmentation algorithm for Polarized Light Imaging brain data in collaboration with the INM-1. Afterwards she joined the division HPC in Neuroscience, first as a research associate and later also as its deputy lead. In 2014 she shifted her focus towards project management, research coordination and science communication for the Human Brain Project, an EU-funded project with more than 100 project partners and a total duration of 10 years. She recently joined the Office for (Inter-)national Coordination and Networking at the JSC.
