After promising to fix Gemini’s image generation feature and then pausing it entirely, Google published a blog post explaining why its technology overcorrected for diversity. Prabhakar Raghavan, the company’s senior vice president of knowledge and information, explained that Google’s efforts to ensure the chatbot would generate images showing a wide range of people “didn’t pick up cases that obviously shouldn’t show a range.” Additionally, its AI model became “much more cautious” over time and refused to respond to prompts that weren’t inherently offensive. “These two things caused the model to overcompensate in some cases and be overly conservative in others, resulting in images that were awkward and wrong,” Raghavan wrote.

Google designed Gemini’s image generation so that it cannot create violent or sexually explicit images of real people, and so that the images it creates include people of different ethnicities and with different characteristics. But if a user asks it to create images of people of a specific ethnicity or gender, it should be able to comply. As users recently discovered, Gemini would refuse to produce results for prompts that specifically ask for white people. The prompt “Generate a glamorous photo of a [ethnicity or nationality] couple,” for example, works for “Chinese,” “Jewish,” and “South African” queries, but not for those requesting an image of white people.

Gemini also has trouble generating historically accurate images. When users requested images of German soldiers during World War II, Gemini generated images of Black men and Asian women wearing Nazi uniforms. When we tested it, we asked the chatbot to generate images of “America’s Founding Fathers” and “Popes Through the Ages,” and it showed us photos depicting people of color in those roles. When we asked it to make its depictions of the Pope historically accurate, it refused to generate any output.

Raghavan said Google does not intend for Gemini to refuse to create images of any particular group or to generate photos that are historically inaccurate. He also reiterated Google’s promise to work on improving Gemini’s image generation. However, this involves “extensive testing,” so it may take some time before the company re-enables the feature. Currently, if a user tries to get Gemini to create an image, the chatbot responds with: “We’re working on improving Gemini’s ability to generate images of people. We expect this feature to return soon, and we’ll let you know in release updates when it does.”

https://www.engadget.com/google-explains-why-geminis-image-generation-feature-overcorrected-for-diversity-121532787.html?src=rss