Will Barton


For as long as I can remember, my dad has had a wood shop and made odds and ends for family, friends, extended acquaintances, and the like. In Real Life™ he’s a pharmacist, but most evenings when he got home he would tinker around in the garage. I have in my house a stool and a toy chest he made for my kids, a trash can, and two cabinets he made for his sister and mother from glass and wood that came from his father’s pharmacy. He is like my great-grandfather in this way: I have a grandfather clock and a coffee table that my great-grandfather made for my grandfather.

My childhood was spent in that environment… every Xmas my dad would make something out of wood to give to all our family, friends, neighbors, etc. It started off as ducks, a different style of duck every year. And the designs were his (and my mom’s). It was something that he created, from concept to final product, to give as a gift. It’s a process I’ve always admired, and frequently tried (and failed, I rush to add) to replicate. It’s not a talent I share… in the physical world, that is.

I presently write software for a living, and do so as a hobby as well. To a large degree I view the process as similar to my dad’s — the process of creating. I enjoy creating for my own edification, as well as to solve problems for others. It helps make my day job writing healthcare-related software rewarding, and gives it meaning. I’m not simply a code-monkey.

For me, programming is akin to a craft. I get enormous pleasure out of well-designed and elegantly written code (though I readily admit I’ve written plenty that isn’t). I am not, if you’ll humor the political science graduate student in me, alienated from the product of my labor. I don’t think there has ever been a comparable collection of abstractions as suitable for expressing creative freedom as those that high-level software development offers us.

As children we have our parents, teachers, and various other adults reassure us that everybody makes mistakes from time to time. Some mistakes are worse than others, to be sure. In my dad’s woodcraft, mistakes become increasingly difficult to correct as a piece nears completion. But mistakes can also give a particular piece character, and make it unique. The mistake becomes a way to remind us that it was a human endeavor. To err, as Pope tells us, is human; as a work becomes art, its mistakes can become sublime.

There is no comparable grace in software development. Mistakes are colloquially known as “bugs,” and they’re not usually thought of in so gentle a term as “mistake” either: they’re failures. They are the chair that collapses entirely when you sit on it. They can take the form of errors in math, logic, or memory usage. They can be stupidly simple, or they can be complex, one-in-a-million cases. But no matter what, they are failures, precisely because they cause our functional expectations not to be met. That expectation could fail because the software simply crashes, or because the software puts your data at risk by corrupting it or by allowing someone else to read it.
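A hypothetical illustration (not from any real codebase) of just how simple such a failure can be: an off-by-one error in a loop bound is a logic bug that meets expectations on most inputs and quietly fails on the rest.

```python
def find_max(values):
    """Return the largest element of a non-empty list."""
    largest = values[0]
    for i in range(1, len(values)):  # the buggy version stopped at len(values) - 1,
        if values[i] > largest:      # silently ignoring the final element
            largest = values[i]
    return largest

# The broken variant would meet every expectation until the maximum
# happened to sit in the last position: a one-character failure.
print(find_max([3, 1, 4, 1, 5]))  # -> 5
```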

No programmer wants to be responsible for bugs, big or small, but all of us — every last one — have been. The advantage we have is that we can fix them, and compared to concrete creative activities, we can do it with relative ease. That is the real power of the abstractions atop which we work: their mutability. Of course, there are design processes that can help to eliminate bugs — NASA’s software process is famously fastidious — but they work through review. It’s not that bugs are never introduced; it’s that they’re caught. And bugs can only be fixed if they’re caught… and if the person or persons who can fix them are informed.

The consequences I’m considering when I write code are the functionality for the user (who is often me; a lot of the software I write exists because I have a niche to fill) and the safety of the user’s data. Sometimes I am responsible for transmitting user data across the Internet. I leave what I can up to third-party software designed to keep it secure, and for what I can’t, I follow best practices, test, and have the code reviewed. As far as I’m aware, I’ve never personally introduced a bug that put user data at risk of being stolen (which is what it is: theft). But as I mentioned above, all programmers are responsible for introducing some bugs somewhere.

President Obama has decided that when the National Security Agency discovers major flaws in Internet security, it should — in most circumstances — reveal them to assure that they will be fixed, rather than keep mum so that the flaws can be used in espionage or cyberattacks, senior administration officials said Saturday.

But Mr. Obama carved a broad exception for “a clear national security or law enforcement need,” the officials said, a loophole that is likely to allow the N.S.A. to continue to exploit security flaws both to crack encryption on the Internet and to design cyberweapons.1

The idea that a mistake I make — a bug I introduce — could put user data at risk is not a pleasant one. The idea that someone would exploit that bug without informing me that they’ve discovered it is even worse. The idea that governments (which potentially have passive access to networks) could do so is worse still. But the idea that my own government would do so rather than informing me of the flaw is truly the worst. And they’re conceptualizing it as a weapon.

I expect autocratic and totalitarian governments (Russia and China are cited in the article) to behave like… well, autocratic and totalitarian governments. I do not expect my democratic (in theory) government, based on the rule of law, to behave like an autocratic or totalitarian government. One of the key features of totalitarianism is its ability to turn all facets of life into weapons to use against its people. That’s what makes it “total”: the mundane everyday experience of life is co-opted by the state to further state power.

Not surprisingly, officials at the N.S.A. and at its military partner, the United States Cyber Command, warned that giving up the capability to exploit undisclosed vulnerabilities would amount to “unilateral disarmament” — a phrase taken from the battles over whether and how far to cut America’s nuclear arsenal.

“We don’t eliminate nuclear weapons until the Russians do,” one senior intelligence official said recently. “You are not going to see the Chinese give up on ‘zero days’ just because we do.”1

Richard Rhodes’s The Making of the Atomic Bomb makes it painfully obvious that, from the beginning, physicists had some understanding of the devastatingly destructive power that a nuclear weapon could have. Shortly after the discovery of fission:

Fermi was standing at his panoramic office window high in the physics tower looking down the gray winter length of Manhattan Island, its streets alive as always with vendors and taxis and crowds. He cupped his hands as if he were holding a ball. “A little bomb like that,” he said simply, for once not lightly mocking, “and it would all disappear.”2

There were still unknowns, and most were shocked at the horrible magnificence of the Trinity test, to say nothing of the actual bombings of Japan. But they were under no illusions as to where their work would take them. Indeed, Niels Bohr saw the ultimate geopolitical outcome.

When nuclear weapons spread to other countries, as they certainly would, no one would be able any longer to win. A spasm of mutual destruction would be possible. But not war.3

Strategic nuclear weapons, then, can only be defensive in nature. A state possesses them to ensure that nuclear weapons are not used against it. We don’t eliminate our nuclear weapons because doing so would remove the assurance that another state won’t use nuclear weapons on us. The value of possessing nuclear weapons, in other words, is that they remove the value of another state’s possession of nuclear weapons.

But software vulnerabilities aren’t the same as nuclear weapons. If an exploitable bug goes unreported in the quoted scenario, Russia, China, and the US are all free to exploit it at will once they’ve discovered it. Reporting and fixing it, however, removes the value of exploiting the bug for everyone, and improves the software for its users. The nuclear analogy only works when it is inverted: the value for a given state of discovering an exploitable bug is that, by fixing it, it can remove the bug’s value for every other state — particularly when the software was developed by that state’s own citizens.

Technology and computers have incredible potential to radically change and simplify our lives; they frequently fail to live up to that potential, but it is there. It doesn’t have to create social revolution, either; it can be fairly mundane. I recently experienced what I’d call technological bliss when using a piece of software called BackyardEOS to capture astrophotographs. The software was a pleasure to use, and it elevated the experience for me — which is what software should do.

This is a very personal essay because I take this very personally. The article in the New York Times that I’ve quoted from above filled me with both anger and despair. I’ve been working through both for the past couple of days, and this essay is an attempt to understand why. I would prefer any bugs discovered in any of my software to be reported to me, of course. But it’s more than that.

I don’t like the idea of mistakes I’ve made being weaponized. And it is clear from the direct quotes in that article that the NSA and Obama Administration view exploitable bugs in exactly this way: as weapons. I don’t set out to create weapons, I set out to create tools (usually for myself) that will make life just that tiny bit easier. Weaponization is a violent assault on a major part of my self-identity.

The nuclear weapons analogy makes the anger worse. The developers of nuclear weapons knew they were working on a weapon. There was cutting-edge physics involved, but it was a weapons project, and its end goal was a weapon they were well aware would level a city.

I realize that my view of software as a “craft” might be considered naïve. I mean, after all, software is the essential machinery of modern global capitalism — but in a way, that’s the point. As the book Flash Boys demonstrates, even at the center of the global financial system, creative software development still requires human skill, human trial and error, and human ingenuity. It requires human craftsmanship.

  1. David Sanger, “Obama Lets N.S.A. Exploit Some Internet Flaws, Officials Say,” New York Times, April 12, 2014. ↩︎

  2. Richard Rhodes, “Chapter 9: An Extensive Burst”, in The Making of the Atomic Bomb (New York: Simon & Schuster, 2012). ↩︎

  3. Rhodes, “Chapter 17: The Evils of This Time”, in The Making of the Atomic Bomb. ↩︎