What responsibility do social media platforms have in filtering AI-generated content?
Posted April 6, 2023 12:25 pm.
Last Updated April 6, 2023 2:48 pm.
With the rise of artificial intelligence (AI)-generated images and videos going viral on social media, a discussion is opening up about how much responsibility the companies that run these platforms should carry when their users get duped.
Recent AI-generated images of the Pope wearing a puffy white coat and an AI-generated video of an interview between popular podcaster Joe Rogan and Prime Minister Justin Trudeau are just some examples of content that users have believed to be real.
AI expert Ritish Kotak said there is a role these platforms must play to ensure that type of information or content is not used for nefarious purposes.
“Clearly there’s parody accounts and then stuff that’s used for fun. But, on the flip side, this opens up a whole discussion around ‘Did that person actually consent to it?’ If they’re in the public eye, what is the extent of even parody accounts that can be used, because can it be used to pull or trick somebody into thinking that, oh, ‘this is legitimate,’” said Kotak.
He points out that even when something is labeled as AI content, the label can be hard to find.
“In the interview with Joe Rogan and Prime Minister Justin Trudeau that was posted on YouTube … if you look at the title, it says, ‘Interview Joe Rogan and Prime Minister Justin Trudeau.’ You actually have to click on the description to see the tags that said AI,” explained Kotak. “So, at first glance, it might even seem legitimate and might actually trick somebody.”
YouTube relies on a combination of people and technology to enforce its policies. The platform’s misinformation policy prohibits content that has been technically manipulated or doctored in a way that misleads users.
In its latest reported quarter, Q3 of 2022, 94.5 per cent of the videos that violated this policy were detected by its automated flagging system. YouTube removed over 121,000 videos for violating misinformation policies.
YouTube does allow content that provides sufficient educational, documentary, scientific or artistic (EDSA) context, such as basic facts about what’s happening in the content.
As for the doctored video of Joe Rogan and Trudeau, YouTube said it does not violate its policies because the video description notes that the voices in the video are fake.
A TikTok spokesperson told CityNews they also use a combination of technology and moderation teams, including 40,000 safety professionals, to review and remove content that violates their community guidelines.
TikTok will be releasing updated community guidelines on April 21 that include rules on how they treat synthetic media, which they define as content created or modified by AI technology.
Their synthetic media policy prohibits manipulated and synthetic media that “distorts the truth of events in a way that can cause significant harm to the community or society.”
Also, content that impersonates an individual will be removed, regardless of whether it is synthetic.
“Like many technologies, the advancement of synthetic media opens up both exciting creative opportunities as well as unique safety considerations, and we’re committed to responsible innovation,” said the TikTok spokesperson.
As for Meta, the parent company of Facebook and Instagram, they have a specific manipulated media policy that allows for the removal of manipulated content that:
- has been edited or synthesized beyond adjustments for clarity or quality in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say; and
- is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.
The policy does not extend to any parody or satirical content.
They add that any audio, photos or videos will be removed from their apps if they violate any of their other community standards, which include prohibitions on nudity, graphic violence, voter suppression and hate speech.
Meta said they also believe AI can be a tool for detecting harmful content. They launched the “Deepfake Detection Challenge” to accelerate the development of new ways to detect deepfake videos.
Kotak said that deepfakes can be devastating for some.
“I’ve dealt with individuals that have had deepfakes made of them, and it’s absolutely devastating … even though they reported it to these platforms, it wasn’t taken down in time and the damage was already done.”
Deepfake technology can even be used to create pornographic material.
“This has a very devastating human impact on individuals that are being victimized and then re-victimized by the process [of getting them removed.]”
To protect yourself from getting duped by AI-generated content, Kotak said to look at credible sources.
“To verify if what you’re seeing is actually correct, when something claims to be a fact, look at multiple sources; just don’t take it at surface value because it’s on a particular platform,” said Kotak.
“Unfortunately, in the world that we live in right now, we’re inundated with this type of messaging; this type of content that is fake does exist. We talk about misinformation and disinformation campaigns as well. So, it is important just to be cognizant that this information may not be accurate. Do your homework, go to credible sources, and then make your own informed decision.”
As for new AI-powered chatbots like ChatGPT, the Privacy Commissioner of Canada announced this week they would be launching an investigation into OpenAI, the company that operates the chatbot.
“AI technology and its effects on privacy is a priority for my Office,” Privacy Commissioner Philippe Dufresne said. “We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as Commissioner.”
The investigation was launched in response to a complaint alleging the collection, use and disclosure of personal information without consent.