It can be difficult to tell how many elections a group is responsible for winning. Individual campaigns sink or swim due to half a hundred factors, including candidate quality, the quality of the opposing candidate, coattail effects from the top of the ticket, the national and local political environments, which issues are important to voters that cycle, the media, and even whether it rains on election day. When it comes to the handful of hotly contested races each cycle, dozens of groups pour resources into the battleground district or state—when a favored candidate wins, all of those groups will claim credit for the victory; in the case of a loss, no one is willing to blame themselves for a faulty strategy or wasted money.
Disentangling what effect a single organization has on a single election can be outright impossible. What’s an efficiency-minded donor to do?
One answer is to demand better metrics and more transparency from the groups that solicit donations. Backing the winning candidate is nice, but that doesn’t really say anything about a group’s philosophy or strategy. The more important questions are: How do you allocate resources? How do you judge after the fact whether what you did worked? What’s your definition of success? A group may make a major spend on behalf of a winning campaign, but how do you know that this spending had any effect?
How to be transparent
Sister District Project, an organization focused on winning state legislative seats across the country, is one group notable for its attempts to answer these questions in public. Its detailed 2020 impact report not only lists the number of phone calls and postcards SDP volunteers made during the elections, but also breaks down what percentage of a campaign’s total phone calls or postcards were made or sent by SDP. For example, in Laurie Pohutsky’s 2020 campaign for the Michigan House of Representatives, SDP made 40,291 calls, 34 percent of her campaign’s total, and sent 8,579 postcards, 71 percent of her total. (She won by 237 votes, demonstrating how thin the margins in these state-level races often are.)
That level of detail is one reason Blue Tent highly recommended SDP in our report on the group. It’s clear what SDP is trying to do—target a limited number of state legislative races and invest heavily in each one—and it is also clear whether it succeeds in doing that. In its impact reports and elsewhere, SDP tracks not just wins and losses but also how much of its fundraising and volunteer hours went to races decided by margins of less than 5 and 10 points, a metric that shows how many races were close enough that SDP’s interventions could plausibly have made a difference. (If a group is putting resources into races that end in 10-plus-point wins or losses, something is wrong.) In the 2021 Virginia elections, less than 10 percent of SDP’s field touches and 13 percent of its fundraising went to races that weren’t close, a pretty good ratio considering the sometimes unpredictable nature of these small contests. SDP also uses surveys of candidates and campaign professionals to track which kinds of interventions are seen as most useful. Some of the top answers, like mobilizing volunteers and direct fundraising, are exactly the kind of assistance SDP offers.
SDP is also more self-critical than many organizations. In a Virginia post-mortem, the group noted that Democratic field operations were slower than they should have been to engage in person-to-person contact, and singled out “Democrats’ failure to articulate a forward-looking vision.” That kind of realism is a welcome contrast to the only-share-good-news approach of some organizations, including the major national Democratic Party committees.
The Democratic Party itself doesn't tell donors much
These committees are a useful foil for SDP. Unlike SDP (and many other non-party organizations), committees like the Democratic Congressional Campaign Committee don’t issue impact reports and are generally tight-lipped about what they are doing and why. (DCCC Chair Sean Patrick Maloney shared some of the committee’s 2020 “post-mortem” with the Washington Post this year, but without the detailed numbers SDP provides the public.) They don’t criticize approaches that might have led to failure, they don’t break down how much was spent on which campaigns, and they engage in little public soul-searching.
Part of this might be chalked up to the committee leadership model, in which leadership teams are replaced by new appointees every cycle. An outgoing chair has little reason to tell everyone how they screwed up, and an incoming chair probably wants to do things somewhat differently anyway.
To be clear, the committees may do this kind of analysis and keep it private—but that creates a problem for donors. Did the DCCC target the correct House races in 2020? Does it think it targeted the right races? What does it blame for the surprising number of losses that cycle? What was the win-loss record of the incumbents it identified as vulnerable, or the challengers it touted as potential seat-flippers? You can’t find that information anywhere on the DCCC website, but you can find the big donate button.
Given that the DCCC is raking in record amounts of money in 2021, it's not likely to change anything. (The other major committees are similarly opaque and have had healthy fundraising hauls.) That’s a shame, because donors deserve to know how their money is being spent, and why. Sister District Project represents the gold standard for transparency in its metrics. Other new-wave organizations at least provide some numbers in their own annual reports. The Democratic Party should take a page from these playbooks and start thinking of itself as accountable to the small donors who make up an increasing proportion of its fundraising. When someone asks you for your money, you should be able to ask them some questions.