Debugging terminology

The term “bug” has long been part of computing jargon. We have, in fact, been referring to software and computer problems as bugs for practically the entire time that computers have been around. Nowadays, the term has found its way well into the mainstream, and it is easy to find it mentioned in regular news coverage.

How did this happen?

The origin of the word in the context of computing is not all that clear. No discussion is, of course, complete without the oft-repeated story of Grace Hopper’s engineering team finding an actual moth in the machine, with said bug still on display in a museum to this day. That was in 1947.

That is not the term’s origin story, though it may make a nice one, as emphatically explained in Etymology of the Computer Bug: History and Folklore. Apparently, Thomas Edison was already using the term “bug” to refer to technical difficulties and faults in his inventions as early as 1878:

The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that “Bugs” — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success — or failure — is certainly reached.

Stalking the Elusive Computer Bug by Peggy Kidwell examines this in more historical detail. It makes for a fascinating read! More startlingly, Moth in the machine: Debugging the origins of ‘bug’ points out the term’s relationship to old words for monsters and gremlins. I cannot help but imagine (probably unfairly) people at the very dawn of the information age (superstitiously?) discussing “bugs” happening in (or to?) their programs or machines.

And now here we are. The most valuable and innovative companies in the world would not exist without modern computing technology — and we are without irony still talking about bugs in our machines.

Is that a problem?

When I think of bugs, I mostly think of, well, insects. On the spectrum of life’s problems, they would largely register as annoyances. The term thus seems to trivialize the issue, considering that some of these bugs (even if they started as mere typos) may well cause very costly damage. In Bugs or Defects?, Watts Humphrey made this argument:

By calling them “bugs,” you tend to minimize a potentially serious problem. When you have finished testing a program, for example, and say that it only has a few bugs left, people tend to think that is probably okay. Suppose, however, you used the term “land mines” to refer to these latent defects. Then, when you have finished testing the program, and say it only has a few land mines left, people might not be so relaxed. The point is that program defects are more like land mines than insects. While they may not all explode, and some of them might be duds, when you come across a few of them, they could be fatal, or at least very damaging.

In comparison to more abstract terms like errors, defects, or faults, bugs seem more tangible. In fact, they seem rather like external entities. The inconvenient truth, however, is that those bugs are largely created by the same people who create the product. In On the cruelty of really teaching computing science, Edsger Dijkstra put it pointedly:

We could, for instance, begin with cleaning up our language by no longer calling a bug a bug but by calling it an error. It is much more honest because it squarely puts the blame where it belongs, viz. with the programmer who made the error. The animistic metaphor of the bug that maliciously sneaked in while the programmer was not looking is intellectually dishonest as it disguises that the error is the programmer’s own creation. The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be “almost correct”, afterwards a program with an error is just “wrong” (because in error).

Finally, usage of the term is burdened with ambiguity. Bugs can refer to bad code, incorrect execution or wrong results. In the excellent Why Programs Fail – A Guide to Systematic Debugging, Andreas Zeller elaborates:

This ambiguity of the term bug is unfortunate, as it confuses causes with symptoms: The bug in the code caused a bug in the state, which caused a bug in the execution — and when we saw the bug we tracked the bug, and finally found and fixed the bug.

The language we use shapes how we think about problems and how we go about solving them. There are at least a few issues with the term “bug”, and we would benefit from finally moving away from it.

System resiliency at Facebook

In Fail at Scale, Ben Maurer (tech lead on Facebook’s Web Foundation team and formerly of reCAPTCHA fame) relates lessons about dealing with software failures at Facebook. If you are building distributed systems, particularly very large ones, you also have to think about failure, and about resiliency in the face of it.

Apart from that, Facebook’s engineering environment requires that many people be able to commit and release changes often, and to do so fearlessly.

From what I can tell, this really comes down to two major ingredients:

  • Smart engineering that builds in high levels of resiliency, so that potentially disruptive situations can be handled proactively (a minimal sketch of one such pattern follows this list).
  • A culture that prioritizes and embraces continuous learning and productive problem solving.
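To make the first ingredient a bit more concrete, here is a minimal sketch of one common resiliency pattern, a circuit breaker: once a dependency keeps failing, callers stop sending it requests for a while so it gets a chance to recover instead of being buried under retries. This is only an illustration of the general idea, written in Python; the class name, thresholds, and defaults are my own assumptions and are not taken from the paper.

    import time

    class CircuitBreaker:
        """Illustrative circuit breaker; names and defaults are hypothetical."""

        def __init__(self, failure_threshold=5, reset_timeout=30.0):
            self.failure_threshold = failure_threshold  # consecutive failures before opening
            self.reset_timeout = reset_timeout          # cool-down period in seconds
            self.failure_count = 0
            self.opened_at = None                       # None means the circuit is closed

        def call(self, func, *args, **kwargs):
            # While the circuit is open, fail fast instead of calling the dependency.
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # cool-down elapsed: allow a trial call

            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failure_count += 1
                if self.failure_count >= self.failure_threshold:
                    self.opened_at = time.monotonic()  # too many failures: open the circuit
                raise
            else:
                self.failure_count = 0  # a success resets the failure count
                return result

A caller would wrap a remote request in something like breaker.call(fetch_profile, user_id), where fetch_profile is a hypothetical stand-in for the real dependency, and treat the fail-fast error as a signal to degrade gracefully rather than to retry immediately.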

The presentation below covers the paper as well as additional relevant bits.


Considering emotions

How do you feel after spending 30 minutes idly browsing social media or mainstream news websites? Frankly, I would venture to say that my happiest and most productive days tend to be those when I avoid that activity altogether. Mostly, that is down to the cognitive noise. This article, however, is not about that.

In a fascinating (and controversial) study, researchers at Facebook and Cornell University showed that the content of users’ newsfeeds may affect their expressed emotions – and possibly indirectly their felt emotions as well.

Discussing this in When technologies manipulate our emotions, the authors pose a thought-provoking question:

Can design ever be emotionally neutral and if not, on what criteria should technologists base design decisions?

It seems inevitable: to the extent that we let our mental world be affected by interactions and observations in the outer world, we will be hard pressed not to let the products we use, or the processes we go through, affect our emotions.

When creating a product, we are rightfully concerned with making it effective and usable. People using our solution should not only be able to carry out their tasks as they intend; they should be able to do so without confusion or, worse yet, fear of somehow getting it wrong. Beyond just ensuring that something works, and ideally works very effectively, I think it is also generally instructive to pay attention to how a user’s emotions are affected by the product and the process of using it. Does the experience seem positive, enriching, or does it have a detrimental effect? How does it feel?

My two reasons to look at Comcast’s website

There are only two reasons why I visit Comcast’s website:

  1. to pay my monthly Internet bill, and
  2. to check whether there is a service outage in my area.

So, if things are going really well, then I go there once a month to perform a single task. Here is a recent snapshot of their homepage, which illustrates an interesting disconnect.


Given hundreds of links, guessing is fair game. Hint: their documentation reveals information about outages, though you may be just as successful simply searching Twitter.

Atari: Game Over

Atari: Game Over is an interesting exploration of both the rise and fall of Atari and the search for buried copies of the game E.T. Watch it for the stories around the pioneering work Atari did at the time, for a fix of 1980s gaming nostalgia — or perhaps because you have always been wondering what really happened to those games back then. There is a strong cast of contributors here (including Nolan Bushnell, Howard Scott Warshaw, Ernest Cline and, briefly, George R. R. Martin) and the narrative is both entertaining and instructive.

In the end, what happened to Atari?

A simple answer that is clear and precise will always have more power in this world than a complex one that is true.

Nolan Bushnell (co-founder of Atari) on the idea that the video game E.T. led to the demise of the 1980s video game industry.

The movie is freely available online.