The rise of artificial intelligence could pose a potentially disastrous threat to online polling.
That’s according to a new research published in the Proceedings of the National Academy of Sciences that lays out just how easily online survey research data can be manipulated by AI agents. The paper by Sean Westwood, an associate professor of government at Dartmouth University and the director of the Polarization Research Lab, underscores the emerging threat that AI poses to a pillar of modern data collection and public opinion research.
“We can no longer trust that survey responses are coming from real people,” Westwood said in a press release. “With survey data tainted by bots, AI can poison the entire knowledge ecosystem.”
As part of the research, Westwood designed and tested “an autonomous synthetic respondent” – essentially an AI bot – “capable of producing survey data that possesses the coherence and plausibility of human responses.” That agent then completed surveys, evading detection by standard quality checks 99.8 percent of time.
According to Westwood’s paper, his synthetic respondent was able to avoid detection by the quality checks by simulating “realistic reading times calibrated to a persona’s education level,” generating “human-like mouse movements” and even answered questions by adding in “plausible typos and corrections.”
The problem, the paper argues: Existing fraud-detection measures in online survey research simply aren’t good enough anymore.