I had an epiphany last night.
So proud was I, that I chatted with our lead tester this morning, and found he'd already had the same thought a couple of years ago. Oh well, I've never claimed to be original. Here's Jon's thought that I, also, had two years later.
So I was thinking about the nature of Science (and, more particularly, the scientific method), and about the nature of Computer Science, and it struck me - Computer Science isn't Science at all.
Science is about extending our knowledge of existing systems. Astrophysics, for example, gives us an ever-increasing insight into the mechanics of stars and galaxies, of black holes and dark matter. It's amazing stuff, it really is. The key to science is that it allows us to answer questions - how does a star form, what holds the galaxies together, and so on.
Computer Science is about devising new algorithms, or analysis of existing ones. The questions it answers aren't about understanding the unknown, they're about learning how to do something. Computer Science focuses on algorithms, proofs, and so on.
So the relationship that Computer Science has with development is rather like the relationship that Mathematics has with Physics. Mathematics provides the tools with which to understand the universe, and not the understanding itself. So one could argue that Computer Science, though clearly a very valuable discipline, isn't actually Science at all. It's Computer Mathematics.
So where is the Science in software? Is there any? Well - yes. And I don't think it's something we study enough.
Given any piece of software, there are two important questions we should ask about it:
- Does it work?
- Why not?
If you're amazingly lucky, you don't have to answer the second - but when you need to answer it, you really need to answer it. These two questions form the disciplines of Testing and Debugging.
In the Scientific Method, popularised by Sir Francis Bacon a few centuries ago, there's a standard way of answering such questions.
- You guess - "Hypothesis"
- And then you figure what would prove, or disprove, your guess - "Prediction"
- And then you do that - "Experiment"
- Rinse, repeat, until your guess is right
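The loop above can be sketched in code. Here's a toy illustration (the buggy function and the hypotheses are invented purely for the example):

```python
# Toy sketch of the hypothesis -> prediction -> experiment loop.
# The "system" is a function with a deliberate bug; each hypothesis is
# paired with an experiment that would support or refute it.

def buggy_add(a, b):
    return a - b  # the bug: subtraction instead of addition

hypotheses = [
    # (guess, experiment that should pass if the guess is right)
    ("returns a constant", lambda: buggy_add(1, 2) == buggy_add(3, 0)),
    ("swaps its arguments", lambda: buggy_add(1, 2) == 2 + 1),
    ("subtracts instead of adding", lambda: buggy_add(5, 3) == 5 - 3),
]

def investigate():
    for guess, experiment in hypotheses:   # Hypothesis (+ Prediction)
        if experiment():                   # Experiment
            return guess                   # the guess survived the test
    return "no hypothesis survived"        # rinse, repeat with new guesses

print(investigate())  # -> subtracts instead of adding
```

The important part isn't the code, it's that each guess comes bundled with the experiment that would settle it.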
So with Software Testing, we start with the hypothesis that the software works. Probably, more usefully, we break that down into a set of ways that it would work - some of these from a specification or requirements set or something, and many many others from common sense.
With debugging, it's more useful to be a little subtle. So the bug affects user login. We've checked the password is correct, so it's not that. Maybe the expiry check is wrong? If so, that would mean... And so we try...
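To make that reasoning concrete: suppose login looks roughly like the sketch below (these functions are hypothetical, not from any real codebase). The hypothesis "the expiry check is wrong" predicts something specific we can try: an account expiring today should still work, so if it's rejected, the hypothesis gains support.

```python
from datetime import date

# Hypothetical login logic, invented for illustration.
def is_expired(expiry, today):
    return today >= expiry   # suspected bug: should this be a strict '>'?

def can_log_in(password_ok, expiry, today):
    return password_ok and not is_expired(expiry, today)

# Hypothesis:  the expiry check is wrong.
# Prediction:  an account expiring *today* will be rejected, even though
#              it should remain valid until end of day.
# Experiment:
today = date(2024, 6, 1)
print(can_log_in(True, expiry=date(2024, 6, 1), today=today))  # False
```

The rejection matches the prediction, so the hypothesis survives; the next experiment would be flipping `>=` to `>` and predicting that login succeeds.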
Debugging is the most poorly taught and discussed skill in software development. I don't think Testing is well taught either. In both cases, I think we focus too much on the mechanics of what tools to use (the debugger, the unit test) and too little on the method for deciding what to test and how to debug. To put it another way, we concentrate on the experiment, and not the hypothesis or prediction phases.
And this is batshit crazy - debugging is the single most important skill a developer can have. We should be working to create a solid, reusable approach to debugging instead of relying on gut feeling and ad-hoc approaches.
Without solid testing, whatever we ship is an unknown, volatile thing (well, unless you formally test the entire thing, which in some cases is sufficient). Developers tend to bias toward the types of experiment they understand, and condense testing into a single procedure and set of metrics. Code coverage, unit testing, and so on become the favoured approach. At the other end of the scale, Exploratory Testing, once the primary approach, is looked down on for being too soft and lacking rigour - it's considered to be based at best on the gut feeling we prize so highly when debugging. Yet what's important is that the correct hypotheses have been formed, and tested appropriately.
This way of thinking also raises some more troubling thoughts. Test Driven Development is a popular approach to development that brings some clear benefits. But if I described this as conducting your experiment to fit your hypothesis, it suddenly doesn't sound as good. I don't, actually, think it's quite like that, although I now have niggling concerns with the approach that I didn't before. Could it be that the "tests" we write for TDD are not, quite, tests? Should we reconsider them later in the process?
I think there's a good case for applying more scientific rigour to both debugging and testing. I think making these two areas more structured in their overall approach would make them both easier to learn, and more beneficial to the overall project. And also, I can finally get to describe myself as a Scientist.
Top comments (5)
Have you read about Karl Popper's notion of falsifiability? This is really the underpinning of modern science, and is the test of whether you're dealing with a science or a pseudo-science. A proper science is one that produces falsifiable predictions.
The key thing is that you should be able to falsify your hypothesis; in software, this would be your diagnosis of what is wrong. The power here is that if you can falsify it, then you have specific predictions that you can test to tell you whether it holds or not.
That is, you don't try things to verify your hypothesis; rather, you think, "What's the quickest way of showing my hypothesis is wrong?" I find this a much sharper knife than going down the verification path.
Of course, if you can't think of anything that you can do that would show that your hypothesis is wrong, then all you have is a nice story, but it doesn't help you to actually understand and fix the underlying fault.
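In code terms, falsification-first debugging means running the one input most likely to break your diagnosis, rather than inputs that agree with it. A minimal sketch (the parser and the diagnosis are invented for illustration):

```python
# Falsification-first debugging: attack the diagnosis, don't confirm it.
# The parser below is a stand-in, invented for this example.

def parse_int(text):
    return int(text.strip())

# Diagnosis: "parse_int fails only on empty input."
# Quickest falsification attempt: non-empty input that still isn't a
# number. If this raises, the diagnosis is wrong.
try:
    parse_int("abc")
    diagnosis_survives = True
except ValueError:
    diagnosis_survives = False   # falsified: failure isn't only on empty input

print(diagnosis_survives)  # False
```

One failing counter-example kills the diagnosis outright, whereas any number of confirming runs could never have established it.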
This makes a vast amount of sense. I'm almost tempted to re-edit the article to note this, but that seems almost cheating.
(Also, awesome to see you here! Can't wait for you to write some of your OO thoughts here)
"So the relationship that Computer Science has with development is rather like the relationship that Mathematics has with Physics. Mathematics provides the tools with which to understand the universe, and not the understanding itself. So one could argue that Computer Science, though clearly a very valuable discipline, isn't actually Science at all. It's Computer Mathematics."
So, does that mean that Mathematics is not a Science? It's the only field of study in which you can actually prove something. In physics (and in any other field of study) you can only provide evidence that your model of the world has a degree of similarity with the real world, based on how well your model predicts the events of the real world.
No, Mathematics isn't a science, because it relies on formal proofs. It doesn't use the Scientific Method. Or as someone tweeted in response to this article:
"Ultimately, we can say that mathematics is the discipline of proving provable statements true. Science, in contrast, is the discipline of proving provable statements false." - @unclebobmartin, Clean Architecture
That doesn't lessen the importance of Mathematics, it's simply a different discipline - and one that underpins much of Science.
I don't agree, but this is a long (but interesting) conversation. My disagreement has nothing to do with the importance of the various fields of study. Kirit mentioned Karl Popper above and the notion of falsifiability. Karl Popper held that in mathematics, a simple true statement (like 2+2=4) can't be proven false, thus mathematics doesn't comply with the notion of falsifiability and therefore mathematics is not a science. Bertrand Russell (alongside Alfred Whitehead) took 362 pages in Principia Mathematica to prove that 1+1=2, a project that took him 10 years, and until his death he had a lot of doubts about his work. Computer science is the result of that doubt in mathematical truth and provability. Take as an example Kurt Goedel's incompleteness theorems.