Microsoft has reportedly blocked several keywords in its artificial intelligence (AI)-based Copilot Designer that could be used to generate violent and sexually explicit images. The tech giant carried out the keyword-blocking exercise after one of its engineers wrote to the US Federal Trade Commission (FTC) and Microsoft's board of directors expressing concern about the AI tool. It should be noted that in January 2024, AI-generated explicit deepfakes of musician Taylor Swift appeared online and were allegedly created using Copilot.

First reported by CNBC, terms like "Pro Choice," "Pro Choce" (with a deliberate misspelling to trick the AI), and "Four Twenty," which previously returned results, are now blocked by Copilot. Using these or similar banned keywords also triggers a warning from the AI tool that says, "This prompt has been blocked. Our system automatically flags this prompt because it may violate our content policy. More policy violations may result in automatic suspension of your access. If you think this is a bug, please report it to help us improve." We, at Gadgets 360, were also able to confirm this.

A Microsoft spokesperson told CNBC, "We're constantly monitoring, making adjustments, and introducing additional controls to further strengthen our safety filters and mitigate system abuse." This decision has stopped the AI tool from accepting certain prompts, but social engineers, hackers, and other bad actors may still find loopholes by using similar keywords that slip past the filters.

According to a separate CNBC report, these issues were raised by Shane Jones, a Microsoft engineer who wrote a letter to both the FTC and the company's board of directors last week expressing his concerns about the DALL-E 3-based AI tool. Jones has reportedly been sharing his concerns and findings about the AI generating inappropriate imagery with the company through internal channels since December 2023.

He also made a public post on LinkedIn asking OpenAI to take down the latest iteration of DALL-E for investigation, but was allegedly asked by Microsoft to remove the post. The engineer has also contacted and met with US senators on the matter.
