The development comes after Copilot was used to create deepfakes of Taylor Swift.
First spotted by CNBC, prompts such as “Pro Choice”, “Pro Choce” (with an intentional typo to trick the AI), and “Four Twenty”, which previously showed results, are now blocked by Copilot. Using these or similar banned keywords also triggers a warning from the AI tool which says, “This prompt has been blocked. Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.” We, at Gadgets 360, were also able to confirm this.
A Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.” This solution has stopped the AI tool from accepting certain prompts; however, social engineers, hackers, and bad actors might still find loopholes to generate images using other such keywords.
According to a separate CNBC report, Microsoft engineer Shane Jones raised concerns about the DALL-E 3-powered AI tool last week. Jones has reportedly been actively sharing his concerns and findings about the AI generating inappropriate images with the company through internal channels since December 2023.
Later, he even made a public post on LinkedIn asking OpenAI to take down the latest iteration of DALL-E for investigation. However, he was allegedly asked by Microsoft to remove the post. The engineer had also reached out to US senators and met with them regarding the issue.