It’s just ones and zeroes: the representational power of binary notation

This recent Saturday Morning Breakfast Cereal strip illustrates a ridiculous, but ultimately profound, issue around how we think about numbers and computers:

Most of us who use computers, regardless of age, do not actually think that there are little physical tokens that look like ‘1’ and ‘0’ physically bouncing around inside the CPU or residing on the hard drive. We know that can’t be true. In some sense, we (hopefully) understand that ‘1’ and ‘0’ are symbols of ‘on-ness’ and ‘off-ness’, conventional representations in binary (a two-state numerical system) of the states underlying modern electronic circuitry. And yet, when we talk about how computers ‘think’, we inevitably end up talking about 1s and 0s. Which is why we chuckle when the same idea is used in the Onion article ‘Microsoft Patents Ones, Zeroes’ or in the Futurama movie Bender’s Big Score, which relies on the conceit of a series of ones and zeroes tattooed on someone’s butt that, when read aloud, opens a time-travelling portal.

We laugh because, at some level, we know that computers are not really reading ones and zeroes off a page. But if not, what do we think they’re doing? I think it would be fascinating to figure out the cultural model that underlies this – that it would be a nice ethnographic question to ask, “What does it mean when people say that computers use 1s and 0s?” You would surely get a lot of responses from computer scientists who talk about switches and logic gates, and some blank stares, but it would be very interesting to see how ordinary, computer-literate users talk about binary as a language that computers understand.

Like any good geek dad, I spend a lot of time trying to stop my son from spending all day watching YouTube videos of video games. Fortunately, the solution seems to be that he also likes watching a lot of YouTube videos about science and technology, so he introduces me to some cool ones and we watch them together. So take a few minutes to check out this recent video from the fantastic Computerphile channel, in which James Clewett talks about the importance of abstraction as a means of allowing us to talk about what’s going on in everyday computing in an understandable way:

Let’s focus on the segment starting at around 0:59: “Look, a transistor is just a switch, and that switch can be open or closed, and the electrons travelling down the wire, they’re either there or they’re not there, which is a 1 or a 0,  and in Numberphile we talk about 1s and 0s a lot, so we won’t go back into that, but it’s just numbers travelling down a wire.”

Clewett, who obviously does understand exactly what is going on, starts with a discussion of switches (real objects) that can be in one of two states, on or off, then moves to electrons (real objects) either being present or absent, and then makes an abstracting discursive move to talking about 1s and 0s, which are not physical objects at all but an abstract representation of the states of switches or the presence/absence of electrons. And then, within twenty seconds, he’s moved to ‘just numbers travelling down a wire’, which is a highly concrete image indeed, but clearly not a literal one. Even though we and he know that numbers are abstractions of the properties of the world – that numbers are not actually little objects moving down a wire – this seems to be a very central way of thinking about how computers think. We can’t seem to do without it for very long.

I wonder whether this is tied in to the metalinguistic idea that entities need language to communicate or to think – that we need a metaphorical, language-like understanding of how computers process information, and so we build up this understanding that is close to how we imagine a thinking entity must process information, even though we understand at some other level that it cannot actually work this way.    It may be the most apt metaphor for understanding off/on switches (or digital information generally) but it is still a metaphorical understanding constrained by how we think entities that process information analogously to humans must work.



  1. “We need a metaphorical, language-like understanding of how computers process information”

    What I find interesting is that this holds largely true at all levels of abstraction. It’s not just the fundamental “what is the computer doing with the numbers”, but every level from bytecode/assembly through C-like languages and high-level languages, right through to interface design – and that leads me to a concept I’ve been struggling with over the last few years.

    As a C++ programmer in a performance-critical industry, I’m part of a movement promoting “Data Oriented [software] Design”, the fundamental concept of which is that software is just a series of data transformations, and if you understand what the hardware is doing to the data and design it to be efficient, you end up with software that is both more efficient and easier to debug. If you go down that road far enough, it becomes pretty obvious that the endgame is to ditch object-oriented programming completely and go back to a more functional approach. OO introduces a lot of framework gumpf that allows programmers to imagine that they understand and can control what’s happening in the system, but largely serves to obfuscate what’s really happening.

    However, I’ve come to accept that ditching OO and making more efficient software is basically never going to happen on a wide scale. Object-oriented programming is a way for us to look at code as if it represents the concepts of our design – a way to think about what we want the system to be, rather than what we want the computer to do – and without it we lose a lot of talented programmers who would never be able to get their heads around a non-object-oriented system. OO reduces the barrier to entry, not just to programming in general, but to understanding any given piece of code. Sadly, it also increases the time it takes to take in a complex system.
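    The contrast the commenter describes can be sketched in a few lines of C++ (a hypothetical particle example of my own, not taken from the comment): the object-oriented layout stores an array of self-contained objects, while the data-oriented layout stores each field contiguously so a transformation touches only the data it actually uses.

```cpp
#include <cstddef>
#include <vector>

// Object-oriented style: an array of self-contained objects (array-of-structs).
struct Particle {
    float x, y, z;      // position
    float vx, vy, vz;   // velocity
    float mass, charge; // fields the update below never reads
};

void update_oo(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) { p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt; }
    // Every cache line fetched also drags in mass/charge we never touch.
}

// Data-oriented style: one contiguous array per field (struct-of-arrays).
struct Particles {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<float> mass, charge;
};

void update_dod(Particles& ps, float dt) {
    for (std::size_t i = 0; i < ps.x.size(); ++i) {
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
        ps.z[i] += ps.vz[i] * dt;
    }
    // The loop streams only the six arrays the transformation actually needs.
}
```

    The two functions compute the same thing; the difference is purely in how the data is laid out for the hardware, which is exactly the tradeoff between code that mirrors the design’s concepts and code that mirrors the machine’s work.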

    • Ian, thanks for this. I generally agree – sometimes it is possible to think non-metaphorically but very, very often it is highly advantageous to employ these shortcuts, even when, as you say, there are tradeoffs. I can’t comment on the specifics of the situation you are describing, obviously, since it is well outside my area of expertise, but it rings true given what I know about other expert domains where people engage with technology. Edwin Hutchins’ ‘Cognition in the Wild’ is a fascinating book about how expert Navy sailors engage with navigation technology and other individuals to solve problems like ‘how to figure out where we are at sea without dying’, and comes to many of the same conclusions.
