Recent controversy and criticism have led Google to temporarily suspend its AI program, “Gemini,” from generating images of people because of the blatant historical inaccuracies and racially biased patterns the generator produced. It has come to light that when users asked Gemini to create historical images, the illustrations presented were dramatically inaccurate. Examples of these mistakes include: George Washington as a Black man dressed in a white powdered wig and a Continental Army uniform; a Southeast Asian woman wearing papal robes and attire; a medieval King of England as an Indigenous man seated on the British throne; and Native Americans signing the Declaration of Independence and the Constitution. Gemini even depicted Nazi-era German soldiers as Black men and Asian women. The programmed bias was also evident when Gemini could only produce an image of chocolate pudding after being prompted to create an image of vanilla pudding.
News outlets exposed the bias, as these NBC (2/22/2024) and Washington Post (2/27/2024) articles reflect, leading Google’s CEO to issue an apology, as reported in Forbes (2/29/2024). Daily Wire commentator Brett Cooper’s coverage of the issue on The Comment Section with Brett Cooper included her own investigation of Gemini. She asked Gemini to generate a photo of a strong Black man, and the technology fulfilled the request without difficulty. But when she asked for an image of a strong white man, Gemini responded that it was “unable to do so because it could potentially reinforce harmful stereotypes about race and body image” and that the program did not want to perpetuate any information that was “inaccurate.” This experiment reinforces the claim that Gemini was programmed to output racially biased information.
This raises a question: Why did Google’s coders create the biased imagery and insert such programmed responses? This seems too large and consequential to be a simple mistake. The controversy has sparked such an outcry that prominent educators and journalists have been quick to cover the story. Ben Shapiro of the Daily Wire recently pointed out that, because artificial intelligence is a human creation and “some sort of imitator of human intelligence, that means it is going to carry our biases.” Furthermore, the “algorithm is designed by [programmers who are] deciding exactly what biases should be implanted.”
William A. Jacobson, a law professor at Cornell University and the founder of the Equal Protection Project, told the New York Post in a 2/22/2024 interview: “In the name of anti-bias, actual bias is being built into the system…This is a concern not just for search results, but real-world applications where ‘bias free’ algorithm testing actually is building bias into the system by targeting end results that amount to quotas.”
Fabio Motoki, a lecturer at the United Kingdom’s University of East Anglia, is the co-author of a recently published paper that identified left-leaning bias within ChatGPT. He explained that the problems with Google’s Gemini software result from the people the company hires and the training those employees undergo: “reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its ‘reward’ function – technically, its loss function…So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem.”
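To make Motoki’s point concrete, here is a minimal sketch of how human preference labels can shape a reward model’s loss in RLHF-style training. It is illustrative only: the feature vectors, weights, and learning rate are hypothetical, not anything taken from Google’s actual system.

# Illustrative sketch only: all data, weights, and names are hypothetical,
# meant to show how human preference labels shape a reward model's loss.
import numpy as np

def reward(features, weights):
    # Toy linear reward model: score = weights . features
    return float(features @ weights)

def preference_loss(chosen, rejected, weights):
    # Pairwise loss: penalized whenever the human-preferred output
    # does not score higher than the rejected one (-log sigmoid of the margin).
    margin = reward(chosen, weights) - reward(rejected, weights)
    return float(np.log1p(np.exp(-margin)))

# Hypothetical feature vectors for two candidate images; raters labeled
# the first as "better." Whatever the raters (or their instructions) prefer
# becomes the gradient signal the model is optimized toward.
chosen_feats = np.array([0.9, 0.2, 0.4])
rejected_feats = np.array([0.1, 0.8, 0.3])
weights = np.zeros(3)

learning_rate = 0.1
margin = reward(chosen_feats, weights) - reward(rejected_feats, weights)
grad = -(1.0 - 1.0 / (1.0 + np.exp(-margin))) * (chosen_feats - rejected_feats)
weights -= learning_rate * grad  # the model shifts toward what raters rewarded
print(preference_loss(chosen_feats, rejected_feats, weights))

The point is not the arithmetic but the dependency: change who the raters are, or the instructions they follow, and the direction the model learns as “better” changes with them.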
But CNN journalists Catherine Thorbecke and Clare Duffy have pointed out (2/22/2024) that this malfunction of Gemini will be a setback for Google, which is among the top companies competing to build the best AI software. They argue that Google would not intentionally do anything that would harm the company’s success. Furthermore, the journalists explain that the situation with Gemini is not the result of nefarious intentions on Google’s part, but a mistake that the company should be allowed to correct and learn from. They make an interesting point and challenge Google’s critics to view the situation as an opportunity for correction rather than evidence of bad intent.
The problem with Gemini’s coding is that, in attempting to solve racism, it actually creates and perpetuates racism within society. Writing code that changes the depiction of historical figures will not magically change the actual past. The attempt to redress racism by creating biased images of historical figures distorts historical truth and undermines the opportunity to learn from the past and, in turn, create a better future.
If we use AI software to rewire the past, we are left with no accurate guidance for the future. As with the Wizard in the Wizard of Oz, a curtain (in this case, the software) masks a false reality that can mislead far too many people if the truth is not uncovered.
By Sarah Grace Lange ‘25, Opinions and Politics Editor, Rising Co-Assistant Editor-in-Chief
25slange@montroseschool.org