Barbara Liskov is an Institute Professor at the Massachusetts Institute of Technology (MIT), where she is a member of the Department of Electrical Engineering and Computer Science. She has spent the past forty or so years finding and describing new ways to make computer programs work better. In addition to many other projects, she currently works on writing programs that make data safer. Dr. Liskov is a member of the National Academy of Engineering. She was recognized for her work with the A.M. Turing Award (given by the Association for Computing Machinery) in 2008. She is also a recipient of the John von Neumann Medal (2004) from the Institute of Electrical and Electronics Engineers.
What is a computer?
“It’s a machine that computes things. It knows how to do very simple things like add numbers and check whether something is zero or one. Out of something so simple, there are amazing things you can do if you string together the right instructions.
“Artificial intelligence is concerned with trying to make a computer act intelligently. But you can see that there is a huge difference between being intelligent and being able to add two numbers and check for zero or one. So there is going to be a lot of programming involved to get from the simple thing to artificial intelligence.”
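Her point about building complex behavior out of simple primitives can be sketched in a few lines of Python. This is an illustrative example, not from the interview: it pretends the machine only knows how to add and how to check for zero, and builds multiplication out of nothing else.

```python
def add(a, b):          # one of the machine's primitive operations
    return a + b

def is_zero(n):         # the other primitive: check whether n is zero
    return n == 0

def multiply(a, b):
    """Multiplication built only from add and is_zero, for b >= 0:
    add a to the running result b times, counting b down by adding -1."""
    result = 0
    while not is_zero(b):
        result = add(result, a)
        b = add(b, -1)
    return result
```

Stringing together enough layers like this one, each built on the layer below, is how programs climb from "add two numbers" toward anything more ambitious.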
Could you describe your intellectual relationship with Alan Turing?
“Turing defined this thing called the Turing machine. It’s basically the way that computers work. It’s what I described a minute ago when I said that computers are simple things. Turing had a simple model of what a computer can do. But it really is what computers do. At a very basic level, it’s a fundamental way to think about how programs work.”
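The model she describes is small enough to write down. The following is a hypothetical sketch (the names and rule format are this illustration's, not a standard): a machine that reads one tape cell at a time and, based only on its current state and that symbol, writes a symbol, moves left or right, and changes state.

```python
def run_turing_machine(tape, rules, state="start", pos=0):
    """Minimal one-tape Turing machine interpreter.
    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is "R" or "L" and "_" is the blank symbol."""
    cells = dict(enumerate(tape))
    while state != "halt":
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A three-rule machine that flips every bit, then halts at the blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

Despite its simplicity, this model captures what any conventional computer can compute; the differences are speed and convenience, not power.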
Was he just right and that’s the way that it is?
“I think it is the way that the kinds of computers we use today work. Of course, today we have parallel computing and distributed computing. This doesn’t change the model in any really important way. Someday, if quantum computing becomes practical, it will be very different.”
How do you think about trying to understand computers?
“I don’t try to understand computers. I try to understand the programs. Whenever you design a program, you have a particular thing in mind you’d like it to do. Then you implement a lot of software to accomplish this thing. As part of doing this, you have to have some way of thinking about what the program is doing. That’s what I mean by understanding: understanding what the program is supposed to do and how it accomplishes this task. You can turn this understanding into a mathematical problem, because you can describe what the computer is supposed to be doing, and then you can reason, formally or informally, about whether the program does what it’s supposed to do. So it’s a very special kind of understanding, sort of like a mathematical proof.”
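The separation she draws between what a program is supposed to do and how it does it can be made concrete. As a small illustrative sketch (the function names here are hypothetical), a specification for sorting says nothing about the algorithm; it only states the properties any correct output must have, and any candidate implementation can be checked against it.

```python
def is_sorted_permutation(inputs, output):
    """Specification for sorting: the output must be in nondecreasing
    order and must contain exactly the same elements as the input."""
    in_order = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    same_elements = sorted(inputs) == sorted(output)
    return in_order and same_elements
```

Reasoning, formally or informally, that an implementation always satisfies such a specification is the kind of understanding she describes: closer to a mathematical proof than to inspecting the hardware.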
So it’s similar to a physicist or engineer trying to understand a combustion engine?
“Yes, except with a computer, all of the mechanical stuff is hidden way down inside. So really, when you think about computers, you think about them from the point of view of a Turing machine.”
Is there creativity in the work of a computer programmer?
“People have a tendency to think about creativity quite narrowly. They associate creativity with being, for example, what an artist does, or what a novelist does. In fact, there is tremendous creativity in science and creativity in mathematics. You have to use creativity every time you invent a structure for a program. I think the narrow view of creativity is just a lack of understanding of how creativity shows up in so many different things.”
Can you describe an archetypal computer scientist?
“I have no idea what that would be! Computer scientists come in a wide variety of forms. Some of them are very close to mathematicians, and others are very close to electrical engineers. We’re all doing different kinds of things.”
So is there a personality trait that is applicable to doing something interesting with computers?
“I think there is a kind of computational thinking that people who work in computer science have to be able to do. You have to think about things as a sequence of steps taken in order to accomplish something. So I think they all have that in common. Oddly enough (well, maybe this is not odd), when I was in school — high school and elementary school — this kind of computational thinking was not taught. When I think back, I remember I was taught how to take square roots by following an algorithm. But computational thinking was never explained as a general paradigm for thinking about things. And yet, it’s an extremely powerful general paradigm. Once you understand it, you see how to apply it to lots and lots of problems.”
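The schoolroom procedure she recalls was presumably the longhand digit-by-digit method; as a simpler sketch of the same idea, here is Newton's method for square roots. Each line is a mechanical step, and the sequence of steps, followed in order, accomplishes the goal, which is exactly the computational thinking she describes.

```python
def approx_sqrt(n, tolerance=1e-10):
    """Newton's method for the square root of a nonnegative n:
    repeatedly replace the guess with the average of the guess
    and n / guess until the guess squared is close enough to n."""
    guess = max(n, 1.0)
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2
    return guess
```

The same paradigm, decomposing a goal into a repeatable sequence of simple steps, applies far beyond arithmetic, which is her point about its power.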
Is that something that should be taught?
“It should be taught below the college level. I’m afraid that what they call computer science in high schools is mostly little programming problems with Java or C++. But computational thinking is a very powerful intellectual technique.”
The past forty years have seen an explosion in computing. Will that continue?
“I think of what has happened with computers as similar to the Industrial Revolution. I think we’ve had a Computer Revolution. It happened in the ’90s. All of a sudden, computers became integrated into the lives of normal people. Before that, it was just computer scientists playing around with their machines and figuring out how to make them work better. This has been a huge change, and changes like this don’t happen very often. But computing will continue to progress. Maybe there will be robots that are actually useful, devices that we can’t imagine now. But it’s not going to be really different from what we’ve got. We went from a world where you weren’t online to a world where you are online. That’s a huge change.”
Interview conducted May 8th, 2012 by Stephen Hinshaw, a PhD student in Biological and Biomedical Sciences at Harvard Medical School.