- A group of 20 leading tech companies on Friday announced a joint commitment to combat AI misinformation ahead of the 2024 elections.
- The accord was signed by companies including Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm.
- The rise of AI-generated content has led to serious election-related misinformation concerns, with major elections worldwide just around the corner.
A group of 20 leading tech companies on Friday announced a joint commitment to combat AI misinformation in this year's elections.
The industry is specifically targeting deepfakes, which can use deceptive audio, video and images to mimic key stakeholders in democratic elections or to provide false voting information.
Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm all signed the accord. Artificial intelligence startups OpenAI, Anthropic and Stability AI also joined the group, alongside social media companies such as Snap, TikTok and X.
Tech platforms are preparing for a huge year of elections around the world that affect upward of four billion people in more than 40 countries. The rise of AI-generated content has led to serious election-related misinformation concerns, with the number of deepfakes that have been created increasing 900% year over year, according to data from Clarity, a machine learning firm.
Misinformation in elections has been a major problem dating back to the 2016 presidential campaign, when Russian actors found cheap and easy ways to spread inaccurate content across social platforms. Lawmakers are even more concerned today with the rapid rise of AI.
"There is reason for serious concern about how AI could be used to mislead voters in campaigns," said Josh Becker, a Democratic state senator in California, in an interview. "It's encouraging to see some companies coming to the table but right now I don't see enough specifics, so we will likely need legislation that sets clear standards."
Meanwhile, the detection and watermarking technologies used for identifying deepfakes haven't advanced quickly enough to keep up. For now, the companies are just agreeing on what amounts to a set of technical standards and detection mechanisms.
They have a long way to go to effectively combat the problem, which has many layers. Services that claim to identify AI-generated text, such as essays, have been shown to exhibit bias against non-native English speakers, for instance. Detection is not much easier for images and videos.
Even if platforms behind AI-generated images and videos agree to bake in things like invisible watermarks and certain types of metadata, there are ways around those protective measures. Screenshotting can even sometimes dupe a detector.
Additionally, the invisible signals that some companies include in AI-generated images haven't yet made it to many audio and video generators.
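Why screenshots defeat these measures can be illustrated with a toy sketch (this is a simplified illustration, not any signatory's actual watermarking scheme): a watermark hidden in the least significant bits of an image's pixel values survives a direct file copy, but is erased by the kind of lossy re-encoding that screenshotting and re-compression involve.

```python
def embed_watermark(pixels, bits):
    # Hide a repeating bit pattern in each pixel's least significant bit.
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def read_watermark(pixels, n):
    # Recover the first n hidden bits from the least significant bits.
    return [p & 1 for p in pixels[:n]]

def reencode(pixels, step=4):
    # Crude stand-in for a lossy re-encode (screenshot + compression):
    # quantize pixel values, which wipes out low-order bits.
    return [(p // step) * step for p in pixels]

pixels = list(range(50, 66))        # toy 16-"pixel" grayscale image
bits = [1, 0, 1, 1]                 # the invisible watermark pattern

tagged = embed_watermark(pixels, bits)
print(read_watermark(tagged, 4))    # → [1, 0, 1, 1]: survives a direct copy

lossy = reencode(tagged)
print(read_watermark(lossy, 4))     # → [0, 0, 0, 0]: watermark destroyed
```

Real provenance schemes are far more robust than this single-bit sketch, but the underlying tension is the same: any signal subtle enough to be invisible is also fragile under transformations an ordinary user can perform.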
News of the accord comes a day after ChatGPT creator OpenAI announced Sora, its new model for AI-generated video. Sora works similarly to OpenAI's image-generation AI tool, DALL-E. A user types out a desired scene and Sora will return a high-definition video clip. Sora can also generate video clips inspired by still images, and extend existing videos or fill in missing frames.
Participating companies in the accord agreed to eight high-level commitments, including assessing model risks, "seeking to detect" and address the distribution of deceptive AI election content on their platforms, and providing transparency on those processes to the public. As with most voluntary commitments in the tech industry and beyond, the release specified that the commitments apply only "where they are relevant for services each company provides."
"Democracy rests on safe and secure elections," Kent Walker, Google's president of global affairs, said in a release. The accord reflects the industry's effort to take on "AI-generated election misinformation that erodes trust," he said.
Christina Montgomery, IBM's chief privacy and trust officer, said in the release that in this key election year, "concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content."