Rachel Leven, UC Berkeley
Recent polls show public trust in the integrity of U.S. elections is decreasing, largely among Republicans. But this doesn’t signal that our elections are getting less reliable, UC Berkeley scholars said.
In fact, elections in the U.S. are more secure and the results are more accurate than 20 years ago, said David Wagner, a UC Berkeley professor of computer science. While experts agree there are significant ways to improve our technical voting systems, Wagner said the largest vulnerabilities to election results have nothing to do with how the public’s votes are counted.
“Today, I think the biggest risk is the human element — disinformation, propaganda, the manipulation through the media, targeted efforts to try to get particular populations to vote or not vote and attempts by other countries to breed chaos or interfere in our election,” said Wagner, who serves on the U.S. Election Assistance Commission’s Technical Guidelines Development Committee that helps set certification requirements for voting machines.
Public trust in elections is an important pillar of a functioning democracy. That trust can affect everything from voter engagement to the likelihood of insurrection.
Through a technical lens, U.S. elections are very secure, Wagner said. In the early 2000s, the technical risk of voting equipment being hacked was high. Now, many of those risks have been partly or wholly resolved, he said.
Voting machines must meet protective specifications, and in many states a subset of ballots is audited by hand to ensure these systems tallied votes accurately, Wagner said.
Still, election processes also have problems, many of which vary by the state or local government that runs them, said Philip Stark, a professor of statistics at Berkeley.
People disagree about whether ballot-marking devices can be trusted to record an individual's votes reliably and accurately, Stark said. There are also challenges in confirming that all votes have been counted, auditing votes after they're cast and grappling with insider threats, he said. These kinds of challenges were on display in the 2020 presidential election in Georgia.
“We currently have what I would call ‘faith-based’ elections, where people are saying, ‘Trust me that I did it right,’” said Stark, who invented risk-limiting audits, a method several states have codified into law for assessing whether the reported electoral outcomes are correct. “What I think we should have instead is evidence-based elections, where instead of just saying, ‘George Washington won,’ the election official is required to provide convincing evidence that indeed George Washington won.”
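The idea behind a risk-limiting audit can be illustrated with a simple ballot-polling sketch in the style of the BRAVO method: ballots are drawn at random and a likelihood ratio compares the reported margin against a tied election, stopping once the evidence for the reported winner is strong enough. The function name, risk limit and vote counts below are illustrative, not taken from any state's actual procedure.

```python
import random

def ballot_polling_audit(reported_winner_share, ballots, risk_limit=0.05, seed=1):
    """Sketch of a sequential ballot-polling risk-limiting audit.

    reported_winner_share: reported fraction of votes for the winner (> 0.5)
    ballots: list of "winner" / "loser" strings, one per cast ballot
    risk_limit: chance the audit confirms a wrong outcome (e.g. 5%)
    Returns the number of ballots sampled before confirming the outcome,
    or None if the audit should escalate to a full hand count.
    """
    p = reported_winner_share
    threshold = 1.0 / risk_limit  # Wald sequential test boundary
    likelihood_ratio = 1.0
    rng = random.Random(seed)
    # Draw ballots in random order without replacement.
    for n, ballot in enumerate(rng.sample(ballots, len(ballots)), start=1):
        if ballot == "winner":
            likelihood_ratio *= p / 0.5      # evidence toward reported outcome
        else:
            likelihood_ratio *= (1 - p) / 0.5  # evidence toward a tie
        if likelihood_ratio >= threshold:
            return n  # outcome confirmed at the stated risk limit
    return None  # never confirmed: a full hand count is required

# Hypothetical contest: 10,000 ballots, reported 60% for the winner.
ballots = ["winner"] * 6000 + ["loser"] * 4000
sample_size = ballot_polling_audit(0.6, ballots)
```

The point of the sketch is Stark's "evidence-based" framing: the audit does not trust the machine count, it samples until the paper ballots themselves provide statistical evidence that the reported winner really won.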
Deepfake images and video are also a big problem, said James O’Brien, a professor of computer science at Berkeley. The technology is improving rapidly, he said. It’s becoming harder to differentiate maliciously manipulated images from real ones — even for experts like himself. These tools are also becoming widely available, meaning more fakes are being shared, he said.
“We're just going to have to accept the fact that if I show some image of whatever it is, that doesn't prove that it was real,” said O’Brien.
This can have real consequences for trust in elections, O’Brien said. It means the public must figure out what to believe, which may illustrate more about their existing biases than their ability to discern the truth, he said.
The most likely predictor for whether someone will trust an image is whether their own views align with that image, O’Brien found in a 2021 study. Additionally, some people are prone to believe conspiracy theories and fake videos can offer the “proof” they need to validate those beliefs, he said.
“Now someone trying to manipulate others can take that underlying motivation to find the truth, which is actually a good one, and the manipulator can control people by showing them things that lead them off into some other direction that the manipulator wants,” said O’Brien. “We saw how this was done with text during the 2016 elections with Cambridge Analytica and how they were very effective at being able to manipulate people. This is like that, but even worse.”
Computer science and statistics experts have items they’ll be watching to assess the strength and reliability of the 2024 election cycle. For Wagner, that includes seeing whether candidates concede the election if the results show they lost. He’ll also look for the use and impacts of deepfakes, he said. And he will look for potential fraud in absentee ballots and for technical flaws, but he emphasized that these aren’t major risks.
Finally, Wagner will be on high alert for the political weaponization of science. He has watched “a fairly arcane debate” play out over the last two decades among technologists and election officials about the viability of the nation’s voting machines that spurred technical analyses from advocates arguing both sides. He noted that science has “been twisted” into conspiracy theories to argue the election was stolen.
“It’s a positive thing that we have increased attention to elections and people want to make sure that they’re trustworthy,” said Wagner. “But also the risk is that science gets taken out of context and used to support a narrative that was never what the scientists were trying to say or that goes beyond what you can support with existing science.”