Polling by asking people about their neighbors
When does this work? Should people be doing more of it?
Several people pointed me to this news report on a successful bettor in an election prediction market:
Not only did he see Donald Trump winning the presidency, he wagered that Trump would win the popular vote—an outcome that many political observers saw as unlikely. . . . He made his wagers on Polymarket, a crypto-based betting platform, using four anonymous accounts. . . . In messages sent privately to a reporter before Election Day, Théo predicted that Trump would take 49% or 50% of all votes cast in the U.S., beating Harris. He also predicted that Trump would win six of the seven battleground states. . . .
In his emails and a Zoom conversation with a reporter, Théo repeatedly criticized U.S. opinion polls. . . . Trump had overperformed his swing-state polling numbers in 2020. . . .
So what did this bettor do?
To solve this problem, Théo argued that pollsters should use what are known as neighbor polls that ask respondents which candidates they expect their neighbors to support. The idea is that people might not want to reveal their own preferences, but will indirectly reveal them when asked to guess who their neighbors plan to vote for.
Théo cited a handful of publicly released polls conducted in September using the neighbor method alongside the traditional method. These polls showed Harris’s support was several percentage points lower when respondents were asked who their neighbors would vote for, compared with the result that came from directly asking which candidate they supported. . . .
The data helped convince him to put on his long-shot bet that Trump would win the popular vote. . . . he had commissioned his own surveys to measure the neighbor effect, using a major pollster whom he declined to name. The results, he wrote, “were mind blowing to the favor of Trump!” . . . he argued that U.S. pollsters should use the neighbor method in future surveys to avoid another embarrassing miss.
Steve Shulman-Laniel sent this to me and wrote:
If it’s such an obvious method that some random rich French guy would use it and get better results than ordinary RDD [random-digit-dialing] polling, then we’d already be using it. If that’s not true, then that also seems interesting. Either we’re just leaving money on the table (in the sense that we’re ignoring a method that would improve poll results, in expectation) or there’s some good reason why pollsters don’t habitually use this method. (I guess a third option is that we’re already using this method, that the WSJ is overselling how novel it is, and that it hasn’t borne the fruit that the WSJ claims it would.)
I would not say that the above-quoted Wall Street Journal article is overselling the novelty of neighbor polling—nowhere does it say that the method is new—but I see where Shulman-Laniel is coming from. The other thing is that a lot of polls don’t use random digit dialing; they use internet panels. So, yes, there are methods that give better results than RDD, or, at least, no worse than RDD, and people are using these methods.
But, what about these neighbor polls? I don’t know how long they’ve been going on, but Julia Azari and I did suggest the idea in our 2017 paper, 19 Things We Learned from the 2016 Election, where we wrote:
We recognize the value of research into social networks and voting, especially in a fractured news media environment and declining trust in civilian institutions. In future studies, we recommend studying information about networks more directly: instead of asking voters who they think will win the election, ask them about the political attitudes of their family, friends, and neighbors.
I’ve put such questions on polls from time to time but can’t right now put my hands on any of the results.
Back in 2006, when discussing “How many people do you know?” surveys, Tian Zheng, Matt Salganik, and I pointed out that one advantage of asking people about their social networks is that it’s a way to use a survey to learn about people other than the respondents.
Given that differential nonresponse has been such a problem with recent surveys, there’s an appeal to asking people about their social networks as a way to indirectly reach those hard-to-reach people.
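The canonical version of this idea is the network scale-up method of Killworth and colleagues, which the “How many people do you know?” work built on: ask each respondent how many people they know overall and how many they know in some hard-to-count group, then scale up. Here’s a minimal sketch, with invented numbers:

```python
# Basic network scale-up estimator: estimate the size of a hard-to-count
# group from respondents' reports about their contacts.
# All numbers here are invented for illustration.
N = 250_000_000            # total population size (assumed known)
degrees = [300, 150, 600]  # d_i: each respondent's estimated network size
known = [2, 0, 5]          # y_i: group members each respondent says they know

# Estimated group size: N * (sum of y_i) / (sum of d_i)
estimate = N * sum(known) / sum(degrees)
print(f"estimated group size: {estimate:,.0f} people")
```

Each respondent’s answer carries information about hundreds of non-respondents, which is exactly the appeal here; the catch, as we’ll see below, is that you then have to worry about how those networks are formed.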
I don’t know how many survey organizations followed our advice from 2017 to ask about the political attitudes of family, friends, and neighbors, but we did discuss one such attempt in 2022. At that time, the neighbor poll didn’t do so well. Jay Livingston shared the story:
They went to the Marist College poll and got the directors to insert two questions into their polling on local House of Representatives races. The questions were:
– Who do you think will win?
– Think of all the people in your life, your friends, your family, your coworkers. Who are they going to vote for?
At the time, on the direct question “Who will you vote for?”, the split between Republicans and Democrats was roughly even. But these two new questions showed Republicans way ahead. On “Who will win?” the Republicans were up 10 points among registered voters and 14 points among the “definitely will vote” respondents. On the friends-and-family question, the corresponding numbers were Republicans +12 and +16.
On the plus side, that result was so far off that nobody took it seriously. Yes, it was featured on NPR, but more as an amusing feature story than anything else.
Here’s what I wrote at the time:
So what happened? One possibility is that Republican family/friends/coworkers were more public about their political views, compared to Democratic family/friends/coworkers. So survey respondents might have had the impression that most of their contacts were Republicans, even if they weren’t. Another way things could’ve gone wrong is through averaging. If Republicans in the population on average have larger family and friend groups, and Democrats are more likely to be solo in their lives, then when you’re asked about family/friends/coworkers, you might be more likely to think of Republicans who you know, so they’d be overrepresented in this target group, even if the population as a whole is split 50/50. . . .
Also it would be good to see exactly how that question was worded and what the possible responses were. When writing the above-quoted bit a few years ago, I was imagining a question such as, “Among your family and friends who are planning to vote in the election, how do you think they will vote?” and then a 6-point scale on the response: “all or almost all will vote for Republicans,” “most will vote for Republicans,” “slightly more will vote for Republicans,” “slightly more will vote for Democrats,” “most will vote for Democrats,” “all or almost all will vote for Democrats.”
I hope they look into what went wrong. It still seems to me that there could be useful information in the family/friends/coworkers question, if we could better understand what’s driving the survey response and how best to adjust the sample.
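To make that second mechanism concrete, here’s a quick simulation with made-up numbers: a population split exactly 50/50, where one party’s supporters happen to have twice as many social contacts on average. Because a person with many contacts shows up in many respondents’ reports, the reported neighbors skew toward the high-degree party even though nobody misreports anything.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented setup: a 50/50 electorate where Republicans happen to have
# larger social networks on average (8 contacts vs. 4).
n = 100_000
party = rng.choice(["R", "D"], size=n)
degree = np.where(party == "R",
                  rng.poisson(8, n),   # R: mean 8 contacts
                  rng.poisson(4, n))   # D: mean 4 contacts

# In a neighbor poll, each respondent reports on their own contacts, so a
# person with k contacts appears in roughly k reports: reported neighbors
# are effectively sampled with probability proportional to degree.
p = degree / degree.sum()
reported = rng.choice(party, size=500_000, p=p)

print("true R share:    ", round(float(np.mean(party == "R")), 3))    # ~0.50
print("reported R share:", round(float(np.mean(reported == "R")), 3)) # ~0.67
```

Under these made-up numbers the neighbor poll lands near two-thirds for one side in a dead-even electorate, which gives a sense of how large the distortion from differential network size could be.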
So that’s where things stood in 2022.
So what about 2024? Apparently this year the neighbor polls did better. I’d like to see the data, as all we have now is third-hand reporting—a journalist telling us what a bettor told him about some polls we haven’t seen. I’m not saying anyone’s trying to mislead us here; it would just be good to see the information on which the claims were based.
Summary
1. Polling people by asking about their neighbors is an interesting idea.
2. We have an example in 2022 where it didn’t work.
3. We have an example in 2024 where someone claims it worked well.
4. To the extent that existing survey approaches continue to have problems, there will always be interest in clever ways of getting around the problem of nonresponse.
5. If you’re doing a neighbor poll, I think you’ll want to adjust it using the usual poststratification methods and maybe more. I’m not quite sure how—it’s an applied research problem—but I imagine something can be done here; see the sketch after this list.
6. These principles apply to surveys and nonresponse more generally, not just election polling!
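On point 5, I don’t have a worked-out method, but here is a minimal sketch of what the poststratification step might look like, with hypothetical demographic cells and invented numbers. It adjusts for who the respondents are; adjusting for who the reported neighbors are is the harder, “maybe more” part, since we don’t observe their demographics.

```python
import pandas as pd

# Hypothetical neighbor-poll responses: each respondent reports the share of
# their neighbors supporting candidate R, tagged with the respondent's own
# demographic cell. All numbers here are invented for illustration.
poll = pd.DataFrame({
    "cell": ["18-44", "18-44", "45+", "45+", "45+"],
    "neighbor_R_share": [0.40, 0.55, 0.60, 0.65, 0.50],
})

# Known population share of each cell (e.g., from census data).
pop_share = pd.Series({"18-44": 0.45, "45+": 0.55})

# Poststratify: average within cells, then reweight cells to the population.
cell_means = poll.groupby("cell")["neighbor_R_share"].mean()
estimate = (cell_means * pop_share).sum()
print(f"poststratified estimate of R support: {estimate:.3f}")
```

This fixes up the sample of respondents in the usual way; it does nothing about the network biases illustrated in the simulation above, which is one reason I say “and maybe more.”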
Unlike the French dude in that interview, I’m offering all this advice for free. I’m fortunate to be supported by public funds, and the least I can do is share as much of my understanding as is feasible.
P.S. More here.