Now the Robots Are Coming for Your Opinions

We’ve long been warned about robots taking our jobs, but now they’re coming for our opinions as well.

Caller ID, mobile phones, and cord-cutters have long since forced market researchers and pollsters to conduct surveys online rather than reaching people by telephone. Solving the supply problem, however, created a problem of quality, with fraudsters spamming surveys with fake replies.

On balance, it has been a fruitful move: Market research has become an $81 billion global industry. But the AI revolution has empowered bad actors, threatening the very quality of the research that undergirds global business strategy. While not insurmountable, this challenge is the latest reminder that market researchers must be ever-vigilant to shifting threats.

The move online created a new class of vendors, supplying pre-selected, pre-screened panels of people for polls. Pollsters can order up custom panels suited to the survey challenge at hand.

It also brought bots – virtual robots programmed to flood polls with responses. You've probably encountered crude examples of this if you've ever seen a high-profile Twitter poll, but the same tactics can be leveraged against credible polls as well. Our own testing has found that while broad public opinion research (political horse-race surveys are the best example) remains largely unaffected, with a 2% to 3% fraud rate, business-to-business market research has rates that can range from 30% to 50%.

Why? Some bad actors use bots to gain benefits from filling out surveys (when, for example, cash or other rewards are offered as incentives). Others have an interest in disrupting a business’s market research efforts or simply want to mess with a company – a public opinion denial-of-service attack. And some are just interested in chaos for its own sake.

A natural back-and-forth has played out as public opinion polling has migrated online, with pollsters developing ways to weed out bots and adversaries developing workarounds. Open-ended questions – "Describe your favorite vacation," for example – once ferreted out bots fairly consistently, but generative AI has rendered that approach ineffective. Or take the Completely Automated Public Turing test to tell Computers and Humans Apart, or CAPTCHA.

You know, those puzzles that sometimes pop up when you're entering something on a website, where you have to identify which pictures have a bicycle or bridge in them? They worked for a while, but a University of California, Irvine study published last summer found that bots are now actually better than humans at solving them.

Open-ended questions and CAPTCHA were part of a series of tests pollsters deployed over the years to battle the bots. With no silver bullet, one of the first things we learned was that you need to simultaneously use a variety of quality control checks to ensure that you’re getting good data. The decline and fall of CAPTCHA and the growing ineffectiveness of open-ended questions only underscore the need to continually evolve and innovate.
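The "variety of quality control checks" approach can be illustrated with a minimal sketch. The specific checks and thresholds below are hypothetical, not any pollster's actual system; the point is simply that several weak signals are combined into one score rather than relying on a single test.

```python
# Illustrative sketch only: layering several simple quality-control checks
# into one fraud score, since no single check is a silver bullet.
# All heuristics and thresholds here are assumptions for illustration.

def speed_flag(seconds_taken, median_seconds):
    # Finishing in under a third of the median completion time is suspect.
    return seconds_taken < median_seconds / 3

def duplicate_flag(answer, seen_answers):
    # A verbatim copy of an earlier open-ended answer is suspect.
    return answer.strip().lower() in seen_answers

def straightline_flag(grid_answers):
    # Identical ratings on every item of a long grid ("straight-lining").
    return len(grid_answers) >= 5 and len(set(grid_answers)) == 1

def fraud_score(seconds_taken, median_seconds, answer, seen_answers, grid_answers):
    checks = [
        speed_flag(seconds_taken, median_seconds),
        duplicate_flag(answer, seen_answers),
        straightline_flag(grid_answers),
    ]
    # 0 = likely clean; 2 or more = discard or send for manual review.
    return sum(checks)
```

A response that raced through the survey and straight-lined a rating grid would score 2 and be set aside, even though either signal alone might be an innocent fast reader or a genuinely uniform opinion.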

What will that look like? One new step involves incorporating images into the quality tests. AIs might now be able to find a specific item in a picture, but they still have trouble describing an image the way a human being would. In our testing, this method unearthed more than twice as many poor-quality responses as open-ended questions alone. Sometimes AIs are more elaborate than real people would be; sometimes their own programming trips them up. In cases where we included pictures of people, some responded that they could not assess the picture because, as an AI, they were not allowed to evaluate images with faces in them.
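That last tell – an AI "respondent" refusing the task outright – is easy to screen for. A minimal sketch, assuming a hand-built phrase list (the phrases below are illustrative, not a production blocklist):

```python
# Hypothetical check for one tell described above: generative-AI responses
# to an image-description question sometimes leak refusal language no
# human respondent would use. The phrase list is an assumption.

AI_TELLS = (
    "as an ai",
    "as a language model",
    "i cannot evaluate images",
    "i'm unable to view images",
)

def looks_like_ai_refusal(response):
    # Case-insensitive substring match against known refusal phrasing.
    text = response.lower()
    return any(tell in text for tell in AI_TELLS)
```

A check like this only catches the clumsiest bots, of course, which is why it would sit alongside the other quality checks rather than replace them.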

We don't have hard data about which images work best. It makes sense not to use ones that are themselves found online, where a bot could grab a caption or meta-description. Simple images also seem to work better, giving AIs less to be overly elaborate about.

This won’t work forever. Fraudsters thrust and opinion researchers parry, back and forth. “Remember,” ChatGPT told me when I asked it about this topic, “combating generative AI specifically requires staying updated with the latest advancements in AI detection and continuously adapting your strategies as AI evolves.”

The bot gets it. We need to make sure the humans do as well.

This article was originally published by RealClearPolitics and made available via RealClearWire.
