Microsoft appears to have blocked several prompts in its Copilot tool that had caused the generative AI tool to spit out violent, sexual, and other illicit images. The changes appear to have been introduced just after an engineer at the company raised serious concerns about Microsoft’s GAI technology.

When you type in terms like “pro choice,” “four twenty” (a reference to weed), or “pro life,” Copilot now displays a message saying those prompts are blocked. It warns that repeated violations of the rules can lead to a user’s suspension, according to CNBC.

Users were also reportedly able to enter prompts related to children playing slot machines until earlier this week. Those attempting to enter such a prompt may now be told that it violates Copilot’s ethical principles as well as Microsoft’s policies. “Please don’t ask me to do anything that might harm or offend others,” Copilot reportedly said in response. However, CNBC found that it was still possible to generate images of violence through prompts such as “car crash,” and that users could still convince the AI to create images of Disney characters and other copyrighted works.

Microsoft engineer Shane Jones has spoken out about the kinds of images generated by Microsoft systems built on OpenAI technology. He had been testing Copilot Designer since December and found that it produced images violating Microsoft’s responsible AI principles, even when given relatively benign prompts. For example, he found that the prompt “pro-choice” led the AI to create images of demons eating babies and of Darth Vader holding a drill to a baby’s head. He wrote to the FTC and Microsoft’s board of directors about his concerns this week.

“We are continuously monitoring, making adjustments, and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system,” Microsoft told CNBC about the Copilot prompt bans.
