Computer modeling in elections

Submitted: Oct 08, 2017
Badlands Journal editorial board

One can view the computer invasion of another familiar field either with enthusiastic faith in new technology...or with dark laughter. I find the latter the better protection against the new religio-technodogmatics of our times. But we live close enough to Silicon Valley that sometimes its hot breath reaches across the Dumbarton Bridge, over the Altamont, and through the nut orchards to our door, generating slogans like the one coined to explain the land-developer boondoggle known as UC Merced: "this high-tech, bio-tech engine of growth!" -- wmh



A Misguided Faith in Computer Models of U.S. Voters

Andrew Cockburn

It’s a good time for computer modelers, the kind who believe that with enough information, or “data analytics” (as they prefer), an artificial reality can be constructed to match the real thing and change people’s behavior in a predictable fashion. If this idea seems obscure, think about recent “Russia-gate” headlines announcing that Vladimir Putin targeted specific Facebook users—albeit a mere $100,000 worth—during the 2016 election with ingeniously crafted ads that affected their votes.

To get so much bang for the buck on such a minuscule budget, Putin and his team would have had to have an accurate model of the United States electorate, to know whom to target and with what message. Thus, for example, the Russian manipulators reportedly targeted dog lovers, presumably on the assumption that pooch-fanciers have an innate affinity for Donald Trump and/or dislike of Hillary Clinton.

One might think that the rather more expensive fiasco of the 2016 Clinton campaign would leave Democrats disenchanted with political models. Famously, campaign manager Robby Mook placed a near-religious faith in Ada, a computer algorithm that simulated the electorate 400,000 times a day, apparently assuring the techno-crazed Mook that he knew exactly how changing events were affecting voters. Such was his belief in “analytics” that he didn’t bother to take polls in the closing weeks of the race, with fatal results.

Nevertheless, the Democratic establishment professes to believe that where Mook failed, Putin succeeded. If this were so, then Putin should set up shop as a campaign consultant in this country in time for the next election. Surely one of those Democratic hopefuls currently traversing the country raising millions would pay well for his U.S. voter model, which he must be holding as a valuable asset. As Molly Schweickert, an executive with Cambridge Analytica (“We find your voters and drive them to action”), recently told The Verge, an online tech trade publication, “What’s proprietary is the research and model” used to formulate and target such ads.

Prior to Putin’s unmasking as the master manipulator, Schweickert’s firm occupied a central place in the Clintonian pantheon of evil, after the Mercer family of eccentric billionaires (part-owners of Cambridge Analytica) bankrolled the deployment of its mysterious data skills to the alleged benefit of Donald Trump. But this picture of omniscience, which Cambridge Analytica also promotes, presents a problem.

As Marina Bart writes in Naked Capitalism: “There is evidence that [Cambridge Analytica’s] program cannot even do the simplest first step towards understanding human beings by processing their Facebook data.” Citing the old rule of “garbage in, garbage out,” she explains that Facebook itself, possessor of infinitely more data than Analytica, can’t even get its own advertising and traffic metrics right. Ada, Mook’s object of worship, turns out to have been a victim of garbage in: accurate reports on Clinton’s dimming prospects from human observers out in the real world were not included in the mountains of inaccurate or irrelevant data fed to its churning electrons.

One fundamental issue with models is that they do not cope well with change, such as the kind that happens in an election race, or, for that matter, a war. During the Vietnam War, for example, a group of eminent physicists sold Defense Secretary Robert McNamara on the idea that an “electronic fence” consisting of thousands of sensors scattered across the Ho Chi Minh Trail and relaying sounds, smells, and other data denoting the passage of enemy supply columns could, when processed by the largest computer then in existence, yield an infallible model of the enemy’s whereabouts. It took the Vietnamese a week to figure out that if they introduced simple, unanticipated changes—such as hanging buckets of urine on trees far off the trail to fool the smell-sensors—the billion-dollar fence would be rendered ineffective.

The other, and perhaps more serious, problem with models is that their creators and custodians come to believe in them, sometimes to an obsessive degree. Mook’s devotion to Ada serves as an obvious case. Commenting on this phenomenon, former Pentagon analyst Chuck Spinney suggested to me that “the laborious act of devoting so much mental and emotional energy to the construction of a model tends to displace the modeler from the world being modeled—i.e., his interactions with the model (the intense desire to make it work, shaping its mathematical logic, programming, debugging, etc.) take on more importance than the matchup of the model to reality. In effect, the model becomes the ‘reality’ to the modeler’s mind and model/reality mismatches become ‘anomalies,’ which are psychologically easy to dismiss as outliers.” Spinney repeatedly encountered this “self-delusion” in the Pentagon among military officials and weapons contractors during his 30-year career.

Their belief in the delusion was, for the most part, sincere, thereby increasing the model’s psychological power.

Among the earliest computer models conceived in the Pentagon and related offices are those for blowing up the world by means of thermonuclear war. Many minds, some of them brilliant, not to mention decades of computer time, have been devoted to charting the course of nuclear conflict, complete with intricate calculations of first strikes, second strikes, limited strikes, and so forth. Yet almost all of this constitutes what Spinney’s former Pentagon colleague Pierre Sprey dubbed “data-free analysis.” There are precisely two data points for the real-world effects of nuclear weapons: Hiroshima and Nagasaki. And as it so happens, they gave rather different results (the Nagasaki bomb killed far fewer people than its creators anticipated). The nuclear war models that dictate war plans (and weapons budgets) calculate target effects based on the theoretically projected explosive “yield” of various weapons, but I am reliably informed that actual bomb tests regularly produce totally unanticipated yields. Similarly, models assume a theoretical pinpoint accuracy for intercontinental ballistic missiles that has not been replicated in the limited number of actual tests of such missiles (real-world missile tests are expensive, after all, and they tend to generate unwelcome results).

In short, the models are worthless, and no one really has the faintest idea of what would happen in a nuclear war, a point those whipping up New Cold War hysteria with Russia-gate might bear in mind. Hopefully, Vladimir Putin—if he’s not too busy manipulating the New Jersey governor’s race or negotiating terms to handle Biden 2020—understands that, too.
