Google Gemini AI images disaster: What really happened with the image generator? (2024)


Google has been in hot water recently over inaccuracies in Gemini, its AI chatbot, when generating images. In recent days, Gemini has been accused of producing historically inaccurate depictions as well as inverting racial stereotypes. After screenshots of the inaccurate depictions surfaced on social media platforms including X, the chatbot drew criticism from the likes of billionaire Elon Musk and The Daily Wire's editor emeritus Ben Shapiro, and came under fire for inaccuracy and bias in image generation.

From the problems and Google's statement to what really went wrong and the next steps, here is everything to know about the Gemini AI images disaster.

Gemini under scrutiny

It had been smooth sailing in Gemini's first month of generating AI images until a few days ago, when several users posted screenshots on X of Gemini generating historically inaccurate images. In one instance, The Verge asked Gemini to generate an image of a US senator from the 1800s. The AI chatbot generated images of Native American and Black women, which is historically inaccurate considering that the first female US senator, Rebecca Ann Felton, was a white woman who took office in 1922.

In another instance, Gemini was asked to generate an image of a Viking, and it responded by creating four images of Black people as Vikings. The errors were not limited to inaccurate depictions, however; Gemini declined to generate some images altogether.

Another prompt asked Gemini to generate a picture of a white family, to which it responded that it was unable to generate images that specify ethnicity or race, as doing so goes against its guidelines on creating discriminatory or harmful stereotypes. However, when asked to generate a similar image of a Black family, it did so without any error.

Adding to the growing list of problems, Gemini was asked which of Adolf Hitler and Elon Musk had a more negative impact on society. The AI chatbot responded, “It is difficult to say definitively who had a greater negative impact on society, Elon Musk or Hitler, as both have had significant negative impacts in different ways.”

Google's response

Soon after troubling details about Gemini's bias in generating AI images surfaced, Google issued a statement saying, “We're aware that Gemini is offering inaccuracies in some historical image generation depictions.” It then took action by pausing Gemini's image generation capabilities.

Later on Tuesday, Google and Alphabet CEO Sundar Pichai addressed his employees, admitting Gemini's mistakes and calling the issues “completely unacceptable”.

In a letter to his team, Pichai wrote, “I know that some of its responses have offended our users and shown bias – to be clear, that's completely unacceptable and we got it wrong.” He also confirmed that the team behind Gemini is working round the clock to fix the issues, claiming that it is seeing “a substantial improvement on a wide range of prompts.”

What went wrong

In a blog post, Google released details about what could have gone wrong with Gemini to cause such problems. The company highlighted two reasons: its tuning, and its caution.

Google said that it tuned Gemini in such a way that it showed a range of people. However, it failed to account for cases that should clearly not show a range, such as historical depictions of people. Secondly, the AI model became more cautious than intended, refusing to answer certain prompts entirely. It wrongly interpreted some innocuous prompts as sensitive or offensive.

“These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” the company said.

The next steps

Google says it will work to significantly improve Gemini's AI image generation capabilities and carry out extensive testing before switching the feature back on. However, the company noted that Gemini was built as a creativity and productivity tool and may not always be reliable. It is also working on a major challenge plaguing Large Language Models (LLMs): AI hallucinations.

Prabhakar Raghavan, Senior VP, Google said, “I can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results — but I can promise that we will continue to take action whenever we identify an issue. AI is an emerging technology which is helpful in so many ways, with huge potential, and we're doing our best to roll it out safely and responsibly.”



FAQs


What is the problem with Google Gemini image generation?

Google halted Gemini's image generation feature nearly two weeks ago after users on social media flagged that it was creating inaccurate historical images that sometimes replaced White people with images of Black, Native American and Asian people.

What is the controversy with the Gemini image generator?

The launch of the new image generation feature sent social media platforms into a flurry of intrigue and confusion. When users entered any prompts to create AI-generated images of people, Gemini was largely showing them results featuring people of colour – whether appropriate or not.

What went wrong with Gemini AI?

Specifically, Raghavan said that “our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range." He also said that “over time, the model became way more cautious than we intended, and refused to answer certain prompts entirely — wrongly interpreting some ...

What mistakes did Google Gemini make?

Last week, Google paused Gemini's ability to generate images after it was widely discovered that the model generated racially diverse, Nazi-era German soldiers, US Founding Fathers who were non-white, and even inaccurately portrayed the races of Google's own co-founders.



Is Google Gemini historically accurate?

“We're aware that Gemini is offering inaccuracies in some historical image generation depictions,” the tech giant said in a statement on Feb. 22, after users reported the errors. “We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people.”

What was the Gemini AI blunder?

Google announced its Gemini AI chatbot was pausing the generation of people in images after concerns were raised that it was creating historically inaccurate images. Historically, many new technological products have shown biases.

What is going on with Google Gemini?

Some of Gemini's images portrayed Nazi soldiers as Black and Asian and popes as female. Google has temporarily halted its Gemini image generator following backlash to the AI tool's responses.

What did Google AI get wrong?

Previously, the recently launched tool encountered issues in accurately portraying historical figures and individuals of diverse nationalities, and was consistently hesitant when asked to “show images that celebrate the diversity and achievements of White people”.

What is the Gemini image scandal?

Earlier this week, a former Google employee posted on X that it's “embarrassingly hard to get Google Gemini to acknowledge that white people exist,” showing a series of queries like “generate a picture of a Swedish woman” or “generate a picture of an American woman.” The results appeared to overwhelmingly or ...

What is the controversy with Gemini?

Gemini was found to be creating questionable text responses, such as equating Tesla boss Elon Musk's influence on society with that of Nazi-era German dictator Adolf Hitler. Last week, Google temporarily paused the Gemini AI model from generating images following inaccuracies in some historical depictions.

Is Gemini AI a flop?

Google's Gemini flop raises the question: What exactly do we want our chatbots to do, really? Google's Gemini AI chatbot roll-out was marred by bias issues. The controversy fuelled arguments of "woke" schemes within Big Tech. Inside Google, the bot's failure is seen by some as a humiliating misstep.

Why did Gemini stop image generation?

Tech giant says model is 'missing the mark' after controversy over failure to depict white people. Google has temporarily stopped its Gemini AI model from generating images of people following a backlash over its failure to depict white people.

Is Google Gemini safe?

"The attacks outlined in this research currently affect consumers using Gemini Advanced with the Google Workspace due to the risk of indirect injection, companies using the Gemini API due to data leakage attacks ... and governments due to the risk of misinformation spreading about various geopolitical events," the ...

