The Brief: AI-generated photos and videos are becoming more sophisticated and difficult to identify. The motivations behind these deepfakes vary, but AI expert Joe Tavares says they are often used to distract people from real issues and discussions. Education about what’s real and what’s not, Tavares said, is essential and can help slow the spread of misinformation.
DETROIT (FOX 2) – Donald Trump wearing a lace bra and underwear, Barack Obama feeding an ice cream cone to a blindfolded Joe Biden, Kamala Harris joining Trump on a shift at McDonald’s. All these images have something in common: they are not real, and all of them can be found on social media.
The rise of artificial intelligence has led to an increase in fake images and videos circulating on social media. They range from the bizarre and obviously fake to the realistic and potentially misleading.
In an effort to get ahead of AI advances before they can influence elections, Michigan state Rep. Penelope Tsernoglou proposed a bill that would require disclosure when AI is used to create political ads. She got help from President Biden when debating the bill on the House floor. Or did she?
“Hello, Representative Tsernoglou. This is your friend Joe. I really like your bill that would require disclaimers on political ads that use artificial intelligence,” Biden appeared to say.
But it wasn’t the president. The representative had a friend create a deepfake as an example of how easy it is to create surprisingly authentic content.
“I thought it would be great to share that during the hearing to demonstrate how easy it is and how quickly and accurately it can be put together,” she said.
Real-world political deepfakes
Earlier this year, Anthony Hudson, a Republican running for Congress in Michigan’s 8th District, came under fire after posting a video featuring a deepfake endorsement from Martin Luther King Jr. Hudson shared the fake endorsement on TikTok and X, and later deleted the posts.
The endorsement is obviously fake, since Dr. King is no longer alive, but it illustrates what is possible with this technology.
“My concern is that people will see and hear AI-generated content that is difficult to distinguish from real content, and that they may vote for or against particular candidates and issues based on what they see,” Tsernoglou said. “But if what they see and hear is not reality, then they are being misled.”
Although the new bill focuses on computer-generated or manipulated ads, this kind of content is not limited to TV ads.
AI poses a danger to democracy, experts say
The 2024 election could be the next big test of how quickly misinformation and disinformation spreads and how voters can counter its spread.
Whether it’s pictures of politicians wearing strange things or photos of elaborate creations purportedly made by children, it seems increasingly likely that we’ll encounter AI-generated images while scrolling through social media.
Misinformation is nothing new, but when combined with computer-generated photos and videos, it becomes much more difficult to separate fact from fiction.
“Misinformation has existed since time immemorial. People lie to others for personal gain or other benefits,” said AI expert Joe Tavares. “Now you can do that very easily.”
Tavares works in the technology field and has been involved in artificial intelligence since the 1990s, when he worked on the speech recognition software Dragon NaturallySpeaking. Since then, he has watched the technology evolve into what it is today.
“You don’t need a wide range of skills to do this,” he said, referring to content generation through AI.
Social media AI misinformation policies
Meta, X, and TikTok have all implemented policies that aim to limit the spread of misinformation on their apps and keep users informed about potentially misleading content, such as AI-generated videos and photos.
These policies are long, often overlap with other usage policies, and, like the technology itself, are constantly evolving. For example, Meta’s updated misinformation policy ends with a note that some of the information may be slightly outdated, while TikTok’s policy notes its continued efforts to improve the platform.
All three companies label AI-generated content to some degree.
On X, media can be removed or labeled if the company has “reason to believe that the media has been materially and deceptively altered, manipulated, or fabricated.” That includes media created with AI.
X uses technology to review media, and the site also provides an option to report posts for review.
Meta platforms like Facebook and Instagram may add labels warning users about manipulated content if it is “digitally created or modified content that poses a particularly high risk of misleading people about matters of public importance.”
TikTok’s AI transparency policy states that AI-generated content uploaded from certain other platforms will be labeled automatically. Users also have the option to add AI labels to digitally generated or modified content themselves.
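None of the platforms publish the internals of how this detection and labeling works. Purely as an illustration, and not any platform’s actual pipeline, the short Python sketch below shows one weak heuristic an automated system could layer in: checking an image’s metadata for traces left behind by common AI generators. The tool names, file name, and tags used here are assumptions for the sake of the example.

```python
from PIL import Image  # pip install pillow

# Illustrative heuristic only, not any platform's real detector. Some AI
# tools write generator details into PNG text chunks or the EXIF
# "Software" tag. Metadata is trivially stripped, so a miss proves
# nothing; at best this is a weak supporting signal.
AI_HINTS = ("stable diffusion", "midjourney", "dall-e", "firefly")

def metadata_hints(path: str) -> list[str]:
    img = Image.open(path)
    found = []
    # PNG text chunks (some generators record their settings here)
    for key, value in getattr(img, "text", {}).items():
        blob = f"{key}: {value}".lower()
        found += [h for h in AI_HINTS if h in blob]
    # EXIF tag 0x0131 is "Software" in the TIFF/EXIF spec
    software = str(img.getexif().get(0x0131, "")).lower()
    found += [h for h in AI_HINTS if h in software]
    return found

if __name__ == "__main__":
    # "example.png" is a placeholder; point this at any local image
    print(metadata_hints("example.png") or "No AI-related metadata found.")
```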
However, as these computer-generated images and videos grow more sophisticated, they can evade scanning technology.
“Basically, the technique used to generate these images is to start with a large amount of noise and loop over it again and again, slowly sifting the noise until the image described by the prompt emerges,” Tavares said. “So it would be very difficult to tell whether something was generated by a cell phone camera or by a computer running the model.”
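For readers curious what that looping process looks like, here is a deliberately oversimplified toy sketch in Python. It is not a real diffusion model: actual systems use a trained neural network, guided by the text prompt, to predict the noise to remove at each step. A fixed target pattern stands in for that guidance here, just to make the loop structure visible.

```python
import numpy as np

# Toy sketch of the iterative denoising loop described above. NOT a real
# diffusion model: real systems use a trained network, conditioned on a
# text prompt, to predict the noise at each step. A fixed target pattern
# stands in for that prediction here.
rng = np.random.default_rng(0)

target = np.zeros((8, 8))        # stand-in for what the "prompt" asks for
target[2:6, 2:6] = 1.0           # a simple bright square

image = rng.normal(size=(8, 8))  # step 0: pure random noise

for step in range(50):
    predicted_noise = image - target       # a real model would predict this
    image = image - 0.1 * predicted_noise  # strip away a fraction of the noise

print(np.round(image, 1))  # after enough loops, the square emerges from noise
```

Nothing camera-like happens at any point in that loop, which is Tavares’s point: the finished pixels carry no obvious technical fingerprint of how they were made.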
Social media challenges
Social media companies are working to moderate the content posted and shared, but both fake news and AI pose unique challenges to these websites and apps.
Meta’s misinformation policy succinctly summarizes the difficulty of policing content: “Policies that simply prohibit ‘misinformation’ do not provide useful notice to those who use our services and are unenforceable, because we do not have perfect access to information.”
The fight against misinformation becomes even more complex when AI-generated content is involved.
Resources are also an issue when it comes to stopping AI misinformation online.
“Facebook has a lot of resources, but they’re not infinite,” Tavares said. “Their adversaries are not necessarily under the same constraints. Nation-states like Russia and China are pouring unlimited money into this problem and bringing all of their smartest people to it.”
This growing problem comes at a critical time. With elections just around the corner, misinformation can be especially harmful. For example, after recent hurricanes devastated the South, fake images and information began circulating on social media, showing unrealistic damage and claiming that the federal government was not helping those affected.
The Institute for Strategic Dialogue (ISD) said the misinformation was primarily aimed at the Federal Emergency Management Agency (FEMA) and President Joe Biden’s administration, including Vice President Harris.
ISD said its investigation found that Russian state media, social media accounts, and websites were spreading misinformation about hurricane cleanup efforts, aimed at making U.S. leadership appear corrupt. The misinformation campaign included AI-generated photos of damage that never happened, such as flooding at Disney World.
ISD Research Director Melanie Smith said these foreign organizations are using misinformation and AI to exploit problems that already exist in the United States as the presidential election approaches.
“These situations are not created by foreign actors,” Smith told The Associated Press. “They’re just pouring gasoline on a fire that’s already there.”
And when people believe what they see and share it, the metaphorical gasoline and fire can have dire consequences.
“That’s my biggest concern, because voting is a way to express your voice, and you don’t want your vote to be influenced by misinformation,” Tsernoglou said.
Then there is the issue of AI-generated content that is not meant to misinform or mislead.
Tavares noted that AI-generated photos are so new that people may simply be experimenting with the technology to see what it produces, as with the images of Trump in a bikini. But that comes with its own problems: these fake images can flood timelines and distract from real news and issues.
“A lot of people think propaganda is about trying to get people to change their minds or think a certain way, but that’s not necessarily the goal. Sometimes the goal is just to create so much noise that no one can discuss anything,” Tavares said. “I think it’s probably part of a strategy to taint the discussion and dialogue around ideas.”
Curbing AI misinformation
Unfortunately, stopping the spread of AI-generated content is not as simple as social media companies implementing policies to regulate content.
“No for-profit company, by its nature, can tackle something like this head-on,” Tavares said. “I don’t know if there is a solution to that problem.”
Tavares says the best defense against AI misinformation is education.
“I think educating the public and really fundamentally understanding the motivations behind it is the only way we can really combat this problem,” he said. “This is more of a social issue than, ‘Why aren’t these companies doing anything?’”
Tavares said it’s not that hard to spot AI right now if you take a little time and study the photos.
“Right now, it’s pretty easy,” he said. “Everything looks like it’s lit in a studio and slightly animated. The hair looks like it’s been brushed on. There might be a few extra fingers. The teeth don’t look the way they should.”
But as technology advances, Tavares said, these tell-tale signs likely won’t be so easy to spot.
“It’s hard to say how easy it will be to spot in the future,” he said. “Again, I think people who are used to scrutinizing stories, or who work in news organizations, would be able to tell right away. But for the average person, I’m quite worried.”
Tavares said slowing down and checking what you’re looking at will be important to prevent the spread of fake images and videos.
“Society as a whole needs to recognize that this is real and start to pull back a little bit from the reactionary tendency to respond to things that haven’t been scrutinized,” he said. “It’s up to each person to recognize that what they’re seeing may not be real, and to hold off on sharing it until they can somehow confirm it.”
Tavares pointed out that AI images often appeal to people’s emotions, and that playing on emotions helps these fake photos spread.
“People don’t necessarily know the motives behind it. They just see the girl who was supposedly left behind by her family during Hurricane Helene and don’t question it,” he said. “They say, ‘This kid looks like she’s in trouble. We should do something.’ Then they share that post, and other people see it.”
Tavares’ overall advice? Take a breath and step back before immediately reacting to something you see online, especially if the image is designed to prey on your emotions.
“If you’re feeling angry or upset because of something you’ve seen on the internet, take a moment to step back from those feelings, think about why you’re feeling that way, and then dig a little deeper to corroborate whether what you saw reflects reality on the ground,” he said.