Falken’s Maze: Game Theory, Computer Science, and the Cold War Inspirations for ‘WarGames’

By Michael Grasso / December 15, 2016

“I Want to Play Those Games!”

The 1983 film WarGames, directed by John Badham and written by Walter Parkes and Lawrence Lasker, was a box office smash and eventually became a generational touchstone, offering mainstream audiences their first glimpse at the computer technology that would ineluctably become part of their lives in the next decade and beyond. It also gave voice to the entrenched American anxieties surrounding the escalation of the Cold War at the hands of President Ronald Reagan and the old-school apparatchiks who’d taken over the Kremlin in the aftermath of the death of General Secretary Brezhnev. (It’s even said that watching WarGames influenced Reagan’s opinion on networked computers in the Defense Department.) Surely, given the computer antagonist of the film, there’s also a deeper cultural parable about the fears of labor automation, which was slowly but surely coming for the middle American workforce in the 1980s. In the end, of course, good old American human know-how prevents total nuclear annihilation by the faceless artificial intelligence and sinister digital burblings of the WOPR (War Operation Plan Response) computer. For the era’s audience, the WOPR represents both the machine-like hive-mind evil of the Soviets and the robots coming from Japan to level the manufacturing backbone of the nation.

WarGames kicks off with high school hacker David Lightman (Matthew Broderick) breaking into the WOPR, although he thinks he’s tapped into a computer game company’s development servers. While trying to test out the company’s hot new video games, which are actually computer-simulated war games, Lightman consults with a pair of computer science students at a local university (memorably played by Maury Chaykin and Eddie Deezen) and learns about “backdoor” passwords that he believes may have been left by the system’s developer, Dr. Stephen Falken. Lightman learns that Falken is supposedly dead, but, as the film’s third act reveals, he’s not. After losing his young son (for whom the computer he designed is named), Falken became a recluse, horrified by, yet resigned to, what his work had been used to justify. In an interesting side note, the initial version of the WarGames screenplay (begun as early as 1979) featured a young genius whose mentor figure was an astrophysicist inspired by Stephen Hawking. The writers wanted John Lennon to play the role, an inspired idea that ended with Lennon’s assassination in 1980. One wonders if the revised role—a former war game theorist and computer scientist, a wizard who buries his staff and drowns his books to go live on an island—would have been an even better fit for Lennon, whose association with peace activism and desire for solitude are of course well known.

“Go Right Through Falken’s Maze!”

Lightman researches Falken’s career, life, and writings in true 1983 style: by hitting several libraries and poring through books, magazines, films, and microfiche. In the research montage, we see tantalizing glimpses of Falken’s CV in the form of fake journal articles, a Ph.D. thesis, and archival footage showing his obsession with games, computer simulations, artificial intelligence, and how all three can be brought to bear on the defense sector’s need to understand the conditions around nuclear war. All of these seemingly disparate sectors of our modern economy and the defense establishment—game theory, artificial intelligence, and computer games—at one time overlapped in bleeding-edge research projects at the very front lines of the Cold War balance of power. World War II had proven to the world powers and to U.S. military strategists that the world’s next conflict would be fought very differently from all those that had gone before.

The Cold War would also be fought on multiple levels. On the conventional front, there were armed conflicts in the form of proxy wars, nuclear research and escalation, and non-conventional weapons research in the form of chemical and biological weapons. And on the diplomatic, “soft-warfare” front, there were propaganda victories and defeats, incidents of espionage, and cultural warfare between the superpowers. But it was nuclear weaponry and the possibility of nuclear war that obsessed strategists on both sides of the Iron Curtain even before the first atomic bomb was dropped on Japan. It was clear from the beginning of the nuclear arms race that new ways of thinking and strategizing about warfare would be necessary in this post-Hiroshima age.

Poker and Armageddon

As he’s researching Falken and Falken’s milieu, Lightman comes upon an article from The Atlantic co-written by Falken and John McKittrick (Dabney Coleman). McKittrick, who remained with the military as the WOPR’s caretaker after Falken’s faked death, advocates for the WOPR and automation of the nuclear launch protocol against brass like General Beringer (Barry Corbin), who wants “our boys” to remain at the helm of the decision loop. Titled “Poker and Armageddon: The Role of Bluffing in a Nuclear Standoff,” the article’s peculiar envisioning of the nuclear arms race as a poker game has its origins in the immediate postwar period, when the newly christened Defense Department delved deeply into the relatively new field of game theory. Game theory’s developer, John von Neumann, had worked on the math behind his ideas in the 1920s and ’30s, and in 1944 published Theory of Games and Economic Behavior with Oskar Morgenstern, which added elements of psychology and philosophy to the math. And so the real-world applicability of game theory to the seemingly insoluble problem of mutual nuclear annihilation in the 1950s seemed evident to the West’s top military strategists. Most of game theory’s foundational thought experiments, like the Prisoner’s Dilemma, are familiar to economics and philosophy undergrads today, but back in 1950, these simple games and simulations were a brand new way of thinking about the seemingly “rational” and “self-interested” decisions that would occur in the new landscape of global nuclear war.
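The “rational self-interest” logic of the Prisoner’s Dilemma is simple enough to sketch in a few lines of code. The payoff values below are the standard illustrative ones from textbook presentations, not anything from the article or film:

```python
# Classic Prisoner's Dilemma payoffs (years in prison, so lower is better).
# These are standard illustrative values, not canonical ones.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),   # both stay silent
    ("cooperate", "defect"):    (3, 0),   # the defector walks free
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),   # mutual betrayal
}

def best_response(opponent_move):
    """Return the move minimizing our own sentence against a fixed opponent."""
    return min(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defection is a dominant strategy: it is the best response to either
# opponent move, even though mutual cooperation (1, 1) would leave both
# players better off than mutual defection (2, 2).
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

This is the unsettling result that fascinated the postwar strategists: two perfectly “rational” players reason their way into an outcome worse for both.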

Concurrent with these 1950s developments was the rise in the West of “think tanks,” independent working groups dedicated to solving problems in science, technology, politics, and economics. In the postwar U.S., the preeminent think tank was the RAND Corporation, a hotbed for game theorists (the Prisoner’s Dilemma was first posited there), mathematicians, psychologists, sociologists, and economists, and trailblazers in the new field of systems analysis, which grew out of a combination of business studies and WWII-era logistics work. Systems analysis sought to systematize warfare, to bring the light of rationality, science, statistics, and mathematics to the bloody confusion of the battlefield. At RAND, analysts soon became enamored of the potential of digital computers (a field in which von Neumann himself was a standout pioneer) to help them decipher the tedious math and decision-making required in advising the Defense Department on how to wage both conventional and nuclear war against the Soviets. RAND and von Neumann used these simulations and game theory applications to conceive the now-familiar strategy of “mutually assured destruction” (MAD), a chess-like stalemate in which neither “player” in the nuclear “game” could win due to the overwhelming destructive force of the superpowers’ arsenals. It was an old-fashioned Mexican standoff; self-interest would prevent either superpower from pulling the trigger—or, in the case of WarGames, turning the key.
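The MAD stalemate can itself be expressed as a toy two-player game. The payoff numbers below are purely illustrative assumptions (0 for the status quo, -100 for annihilation); the key structural feature is that an assured second strike makes any first strike suicidal:

```python
# A toy model of mutually assured destruction as a two-player game.
# Payoffs are illustrative assumptions: 0 = uneasy peace, -100 = annihilation.
MOVES = ("restrain", "strike")

def payoff(a_move, b_move):
    """Return (player A's payoff, player B's payoff) for one round."""
    if a_move == "strike" or b_move == "strike":
        # Guaranteed retaliation means any strike destroys both players.
        return (-100, -100)
    return (0, 0)

# No player can improve their outcome by unilaterally switching to "strike",
# whatever the other side does -- the stalemate is a Nash equilibrium.
for other in MOVES:
    assert payoff("restrain", other)[0] >= payoff("strike", other)[0]
```

The loop at the end is the whole argument in miniature: against either opponent move, restraint never does worse than striking, so the “game” locks into stalemate, exactly the lesson the WOPR grinds toward at the film’s climax.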

The game board of nuclear conflict evolved radically in a very short time, however. Bigger and more profoundly destructive nuclear weapons were developed. A-bombs became H-bombs. Advances in rocketry ushered in the age of the intercontinental ballistic missile (ICBM), and new ways of examining the psychology of the Cold War were urgently needed. Herman Kahn, who’d been with RAND throughout much of the 1950s, watched the developments in missile technology and the buildup in Soviet conventional warfare capability with much concern. In an effort to both think outside the box and give the Defense Department a new strategy for the ICBM age, Kahn theorized that a nuclear war, while catastrophic, could also be survivable by the West. His 1960 book, On Thermonuclear War, shocked many who’d grown accustomed to and even comfortable with the carefully engineered knife-edge insanity of MAD. His visions of imaginary, simulated worlds in the aftermath of nuclear war are practically science fiction themselves: how many fallout shelters would be needed, how many “megadeaths” would there be, how many cases of cancer caused by fallout, what percentage of agriculture and manufacturing would survive, and so on. Kahn’s book was partly a piece of psychological warfare against the Soviets, and partly an authentic thought experiment on the aftermath of a nuclear exchange. Kahn’s text is postmodern in its irony, often intentionally provocative in a near-literary sense. In the popular consciousness, his theories were even nuttier than MAD, and it’s commonly accepted that the title character in Dr. Strangelove (1964) was based on Kahn. If Mutually Assured Destruction was a chess-like stalemate, On Thermonuclear War was raising big at a game of Texas Hold ’Em right before the flop, showing your opponent you weren’t afraid to lose 80% of your stake on one hand.

“The System Actually Learned How to Learn!”

As mentioned above, one of the ways that RAND analysts tried to conceive of wartime scenarios was through the use of games, simulations, and role-playing. Traditionally, these kinds of simulations were performed around a table using maps, charts, and pre-written scenarios in which “players” would act out a world power or army. The human element here is obvious; it echoes game theory’s fundamental precepts about “rational actors” and psycho-economics. But what about computers? Considering the power that digital computers had when it came to simulation math, it seemed logical to look to computers for help in simulating war scenarios.

In the West, yet another wartime mathematician, Norbert Wiener, had published a pair of books—Cybernetics: Or Control and Communication in the Animal and the Machine (1948) and The Human Use of Human Beings (1950)—positing the great strides that could be made through further and deeper automation of manufacturing, economics, and other elements of postwar life, as well as the enormous potential of a system that could both manage these affairs and immediately react to stimuli and unexpected changes. Wiener’s theories became the basis of the field of cybernetics—from the Greek for “helmsman.” In order to respond and react to thousands, even millions of pieces of data, some sort of super-fast calculating machine would be needed. That’s where digital computing comes in. (As if to prove his absolute indispensability in all of these areas, yes, our old friend von Neumann also theorized about automated processes taking on a life of their own in a thought experiment detailed in 1966’s The Theory of Self-reproducing Automata.) Given the enormous promise of cybernetics to manage the increasingly complex economic systems of the postwar Western world, the top thinkers in the field met in 1956 at Dartmouth College for a summer-long conference that covered issues related not only to cybernetics but to what Dartmouth professor John McCarthy had dubbed “artificial intelligence,” or AI, the previous year. The Dartmouth Conference might mark the birth of the computer age; it is certainly the birth of AI as a field of study.

So, could computers be used to run simulations with something approaching human intelligence? Given that researchers in 2016 are still struggling to develop convincing artificial intelligence, the answer is obviously no. But it’s no surprise that the first tentative steps towards a computerized artificial intelligence, one that could respond to changes in conditions and learn from its mistakes, happened in the field of games. These fields converged at the Massachusetts Institute of Technology, where hobbyists and computer science students had begun working on interactive games like Spacewar!, and where Advanced Research Projects Agency (ARPA) money would soon start flooding in to fund AI research in the latter half of the 1960s.

“I Know You Weren’t Always Like This. What Was the Last Thing You Cared About?”

Of course, the defense money that enriched university campuses in the lead-up to the Vietnam War was not limited to MIT. And anywhere computer wonks got a chance for time on a university computer system, games would inevitably follow. Most of the pioneers in video gaming and computer gaming emerged from this milieu, and, inevitably, these students and younger academics, despite their military-establishment funding, were influenced by the counterculture, which sprang from and spread among college campuses across the U.S. Falken’s crisis of conscience in WarGames came with the death of his young son, Joshua; all over academia in the late ’60s, though, computer programmers and systems analysts were dropping out of private think tanks and universities that owed their flush coffers to America’s escalation of war.

The axis of the computer field had shifted in the 1960s from the Eastern U.S. to California; John McCarthy (coiner of “artificial intelligence”) himself moved to Stanford in 1962, accompanied by one of Spacewar!’s developers (and a Dartmouth undergrad in the mid-’50s), Stephen Russell. The Bay Area engineering firms did pioneering work in making computers smaller and faster, and cutting-edge users inevitably followed companies like Fairchild Semiconductor and Intel to the West Coast. What else was percolating in the Bay Area in the mid-’60s? The counterculture. Even the think tanks were affected, and they began to look at the utility of computing beyond purely cybernetic economic planning or warfare. The dreamers at the Stanford Research Institute, or SRI, were funded by generous Defense Department grants, but they also worked on projects in personal computing and network technology—like the legendary Mother of All Demos in 1968—that would have massive economic and social implications. In 1969, SRI became one of the first four nodes of the ARPANET, which would one day become the Internet. Proto-online communities began to pop up. The networking technology that David Lightman uses to break into what he believes is a computer game company had its roots in these areas where defense and pure research met.

The later generation of systems analysts and programmers who came of age in the early 1970s, those who were undergraduates in the heat of the Vietnam War, came to demand more out of their degrees than a blind adherence to the defense establishment. But money, jobs, and computer time at this point were still in think tanks and universities. While the personal computing revolution was beginning to emerge from the garages and hobby stores of Silicon Valley, futurists working at places like SRI, people like Stewart Brand and Peter Schwartz, were exposed to a rush of ideas from the traditionalist defense establishment, the technophilic elements of the counterculture, and the nascent consumer computer sector. Schwartz, a scenario planner at SRI throughout most of the ’70s, knew what the next generation of computers would allow simulations to accomplish, and when a pair of screenwriters with a script about a tech whiz kid were looking for a consultant in 1979, they came to Schwartz.

And so we’ve come full circle, at least from the vantage point of 1983. The military-industrial-computer complex joins forces with the entertainment industry. But what do WarGames and its origins in the defense establishment tell us about both computers and warfare today? Well, as more and more of the world’s battlefields become asymmetrical, as hackers wreak economic punishment on super-capitalism’s clunky, pseudo-cybernetic control systems, computer science is once again at the front lines. Governments surveil their own citizens at will, while self-appointed cyber-vigilantes fight proxy wars online by stealing and exposing state secrets, acting as puppets of the 21st century’s superpowers. Today, computer warfare is no simulation. The computers themselves are both the weapons and the battlefield.

How about a nice game of chess?


Michael Grasso is a Senior Editor at We Are the Mutants. He is a Bostonian, a museum professional, and a podcaster. Follow him on Twitter at @MutantsMichael.
