Rise of the (AI) Political Machine


How voters will navigate an artificial intelligence-laden election cycle.

Anyone who believes the intelligence of politicians is artificial will have more reason than ever to think so during the 2024 election cycle. Artificial intelligence (AI), in what might be called the rise of the machines, has thundered into the political arena with great promise to enlighten, inform, confuse, anger, and influence voters.

The problem is that AI images and voices may result in voters not knowing if the messages they’re seeing and hearing are truth, or lies. “So, what’s new?” one might ask. Twisting words, presenting falsehood-as-fact, and stretching the truth until it squawks in pain are as common in elections as politicians’ promises. The difference is AI’s revolutionary ability to make what isn’t real seem real.

Social media provides a glimpse of AI’s potential power, both positive and negative. Every day, political memes are posted on social media. Some are honest, but many contain only a shade of truth, or are outright lies. People who find a meme outrageous tend to react emotionally, without a nanosecond’s effort to check whether it’s accurate: It exists, therefore it’s true.

AI can take such misinformation or disinformation to never-before-seen levels. The image is seen, the voice is heard. It looks and sounds exactly like the political candidate. The offended or outraged viewers or listeners post it, send it, e-mail it, and distribute it in every other way. As a saying goes, “A lie gets halfway around the world before the truth can pull on its boots.” In the 21st century, a lie is around the whole world in the blink of an eye.

AI has already made an early, though not necessarily positive, mark on the presidential primaries. For example, in July in Iowa, a super PAC supporting Florida Gov. Ron DeSantis released an ad against former President Donald Trump that used an AI-generated version of Trump’s voice. The super PAC also released an AI-generated ad appearing to show Trump and Dr. Anthony Fauci, the controversial former medical advisor to President Joe Biden; neither figure in the ad was real.

AI is putting previously little-known words into common usage. One is “deepfake,” which, per the Merriam-Webster Dictionary, is “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” Another is “prebunking,” the practice of warning people about misleading claims or manipulation techniques before they encounter them, so they can better recognize misinformation and disinformation when they see it.

These developments are making a lot of people very nervous. “There are currently no federal rules for campaigns when it comes to using AI generated content in political materials such as ads,” ABC News reported on November 8, 2023. “’All campaigns can use this. So, in that sense, who is setting the rules of the road as the campaigns themselves, as they go?’ asked Russell Wald, the policy director at Stanford University’s Institute for Human-Centered AI, [on] ABC News Live.”

Several U.S. states are acting to try to contain AI legally. Early in 2023, the National Conference of State Legislatures (NCSL) published information regarding states’ AI regulation efforts and plans. An NCSL website posting, last updated Dec. 7, 2023, said, “… legislation may use different terms such as ‘deepfake’, ‘synthetic media’ or ‘deceptive media’ when referring to AI. These terms all refer to what people commonly think of as AI but may have different implications depending on what term is used and how statute defines it.”

Introducing legislation doesn’t ensure passage. The NCSL’s page noted that in Illinois, a bill failed that would have prohibited the distribution of deepfake videos intended to influence an election within 30 days of Election Day. In Indiana, a bill that would have required a disclosure on “doctored” election-influencing media also failed. Even if laws are passed, there’s no guarantee they’ll survive court challenges, or even be followed: there are laws against identity theft, yet identity theft has hardly been stopped. Restricting AI use may also run headlong into free-speech considerations.

AI, like the fictional Terminator (in both its good and evil iterations), is coming to an election near you. This puts new pressure on voters to know if what they’re seeing is true. Whether they do or not, AI doesn’t care. Like the Terminator, AI is here. It doesn’t feel pity, or remorse, or fear, and it absolutely will not stop, ever, as it attempts to influence people’s votes. Artificially.
