A colleague from outside my team asked me and a few others to review a merge request. Although we try to hone our reviewing skills, we don't often invite colleagues from outside our own teams to review. This was one of those cases, and I appreciated the opportunity to give my two cents.
I was in the middle of my review when the colleague came by and asked for a first impression. I had only looked briefly into the sources, so my answer was brief as well: "I don't like your unit tests". When he asked why, my answer was "Because they are shit." I know it sounds harsh - I am not a mean person, but I'm blunt and tend to exaggerate. Thankfully my colleague knows that, and we agreed to talk again once I'd had enough time to figure out why I didn't like the tests.
I sat down and took some time to figure out what bothered me and I want to share it with you.
More Than One Assert Per Test
One thing that was clear to me when I looked at the code was the use of more than one assert per unit test. I'm not a fan of rules without reason, so I had to have a closer look to make sure it wasn't just gut feeling. This is a broadly discussed topic, but I want to add my two cents.
One reason that seems to come up every time the internet discusses this topic is:
You won't see if there is something wrong with the second assert if the first one fails.
Although the argument is true, I find it weak and tend not to use it; I don't consider it a real problem in practice.
I have a different gripe with it, and I've drawn two examples that I want to share with you to illustrate.
For privacy and brevity I simplified the tests, but left the essence intact.
@Test
public void testIfRegistryGetsValuesCorrectly() {
    Registry a = new Registry();
    // Imagine some setup code here;
    // originally there was a before-each method.
    assertThat(a.getEntryByKey("RandOMstRIng"), equalTo(a.getEntryByKey("randomstring")));
    assertThat(a.getEntryByKey(" rand oms.tri ng"), equalTo(a.getEntryByKey("randomstring")));
    assertThat(a.getEntryByKey("randomst37ring"), equalTo(a.getEntryByKey("randomstring")));
}
The test works fine and meets the requirement: strip all non-letter characters from the key and lowercase it before looking up the value. Everything seems fine.
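As a sketch of what that requirement might look like in code (the `Registry` internals, `put`, and `normalizeKey` are illustrative names and assumptions of mine, not the actual implementation from the merge request):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a registry that normalizes its keys.
// The class and method names mirror the test example; the real code may differ.
public class Registry {

    private final Map<String, String> entries = new HashMap<>();

    public void put(String key, String value) {
        entries.put(normalizeKey(key), value);
    }

    public String getEntryByKey(String key) {
        return entries.get(normalizeKey(key));
    }

    // Cut out all non-letter characters and lowercase the rest,
    // so "RandOMstRIng" and " rand oms.tri ng" both map to "randomstring".
    static String normalizeKey(String key) {
        return key.replaceAll("[^A-Za-z]", "").toLowerCase();
    }
}
```

With a registry like that, all three assertions in the test above compare two lookups that normalize to the same key.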
The problem becomes clear when you touch the tested method and break the 'testIfRegistryGetsValuesCorrectly' test.
As the developer working on the implementation, you can't tell which behavior no longer works. At least not at first sight.
You will understand that the test broke - that's all the name tells you.
I suggested refactoring the test into the code below.
@Test
public void testIfRegistryIgnoresCases() {
    Registry a = new Registry();
    // Imagine some setup code here;
    // originally there was a before-each method.
    assertThat(a.getEntryByKey("RandOMstRIng"), equalTo(a.getEntryByKey("randomstring")));
}

@Test
public void testIfRegistryIgnoresNonCharacters() {
    Registry a = new Registry();
    // Imagine some setup code here;
    // originally there was a before-each method.
    assertThat(a.getEntryByKey(" 3 ra8nd oms.tri ng"), equalTo(a.getEntryByKey("randomstring")));
}
With these changes in place, you will understand the problematic behavior with a quick glance.
Dots are now allowed? You know what to do:
- You broke one well-defined test
- You fix the test according to the new requirements
- You rename the test
Explicit Naming
Before continuing, I'd like to present a second example to illustrate. Mind you: I simplified the tests, but left the essence intact.
@Test
public void testConstructorOneWorksCorrectly() {
    SpecialObjectClass a = new SpecialObjectClass("someID", "attribute1", "attribute2");
    assertThat(a.getID(), equalTo("someID"));
    assertThat(a.getFirstAttribute(), equalTo("attribute1"));
    assertThat(a.getSecondAttribute(), equalTo("attribute2"));
}

@Test
public void testConstructorTwoWorksCorrectly() {
    SpecialObjectClass a = new SpecialObjectClass("someID", "attribute1:attribute2");
    assertThat(a.getID(), equalTo("someID"));
    assertThat(a.getFirstAttribute(), equalTo("attribute1"));
    assertThat(a.getSecondAttribute(), equalTo("attribute2"));
}

@Test
public void testConstructorTwoWorksCorrectlyWithoutSecondParameter() {
    SpecialObjectClass b = new SpecialObjectClass("someID", "attribute1");
    assertThat(b.getID(), equalTo("someID"));
    assertThat(b.getFirstAttribute(), equalTo("attribute1"));
    assertThat(b.getSecondAttribute(), equalTo("attribute1"));
}
The problem with the above example is similar, but not as obvious. The complex logic lies in the second and optional third parameter. The ID parameter ("someID") is trivial and distracts more than it helps in each of the test cases. If something breaks in the handling of the ID parameter, all three unit tests break for the same reason.
I have two problems with that:
- It violates the idea behind a unit test, which is to test a single isolated unit
- It confuses the poor developer who changes the tested code in the future (who might well be the author, a couple of months later)
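For context, the constructor behavior those tests exercise could be sketched like this (a hypothetical implementation of mine that merely satisfies the tests; the real class may differ):

```java
// Hypothetical sketch of the class under test.
// The names mirror the example; the actual implementation may differ.
public class SpecialObjectClass {

    private final String id;
    private final String firstAttribute;
    private final String secondAttribute;

    // Constructor one: both attributes given explicitly.
    public SpecialObjectClass(String id, String first, String second) {
        this.id = id;
        this.firstAttribute = first;
        this.secondAttribute = second;
    }

    // Constructor two: split the combined argument at the colon;
    // if there is no colon, both attributes get the same value.
    public SpecialObjectClass(String id, String combined) {
        this.id = id;
        int colon = combined.indexOf(':');
        if (colon >= 0) {
            this.firstAttribute = combined.substring(0, colon);
            this.secondAttribute = combined.substring(colon + 1);
        } else {
            this.firstAttribute = combined;
            this.secondAttribute = combined;
        }
    }

    public String getID() { return id; }
    public String getFirstAttribute() { return firstAttribute; }
    public String getSecondAttribute() { return secondAttribute; }
}
```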
I suggested refactoring to the following form:
@Test
public void testConstructorSetsID() {
    SpecialObjectClass a = new SpecialObjectClass("someID", "attribute1", "attribute2");
    assertThat(a.getID(), equalTo("someID"));
}

@Test
public void testConstructorSetsBothAttributes() {
    SpecialObjectClass a = new SpecialObjectClass("someID", "attribute1", "attribute2");
    assertThat(a.getFirstAttribute(), equalTo("attribute1"));
    assertThat(a.getSecondAttribute(), equalTo("attribute2"));
}

@Test
public void testConstructorTwoSplitsAttributesAtTheColon() {
    SpecialObjectClass a = new SpecialObjectClass("someID", "attribute1:attribute2");
    assertThat(a.getFirstAttribute(), equalTo("attribute1"));
    assertThat(a.getSecondAttribute(), equalTo("attribute2"));
}

@Test
public void testConstructorTwoSetsSecondAttributeSameValueAsTheFirstIfNotGiven() {
    SpecialObjectClass b = new SpecialObjectClass("someID", "attribute1");
    assertThat(b.getFirstAttribute(), equalTo("attribute1"));
    assertThat(b.getSecondAttribute(), equalTo("attribute1"));
}
My naming here may not be perfect, but it gets the point across. And as you may have noticed, the one word that was present in all the original test names is now gone: "Correctly".
This was my small epiphany.
As a word, "Correctly" should never be part of a test case's name. It does not provide any information at all.
Having found out my two gripes with the tests, I came up with a pattern for myself:
- A test case with multiple assertions -> check its name for the dangerous word ("Correctly") and split the test case accordingly
- The dangerous word in the name -> look at the assertions to figure out what the test actually verifies and rename it
That was my learning of the day: there are words that should not be part of any test case's name, and the first on my list is "Correctly".
There might be more. Look out for them!
P.S.: The colleague I spoke of in this article was asked to proofread it. Besides helping a lot with proper English, he also added a thought of his own: "As developers we often think about naming, and sometimes we seem to miss the most obvious points - or we tend to become lazy."
Comments
I generally try to keep a (unit) test focused on ‘one concern’. Sometimes though, that concern is ‘this input flow should result in that bundle of outputs’.
Couldn’t agree more about the importance of naming. Semantics matter.
Thanks for your comment. Sometimes this might be the right choice; as I tried to express, I am not an extremist when it comes to this topic. That 'bundle of outputs' might just get in the way of having clear tests with clear names. So look out for your bundle size ;)
Yes indeed :).
For context, I'm mostly talking about (fairly extensive) API tests, which generally involve setting up some things (via API calls), making a single specific API call with parameters that reflect the test's concern, and then verifying that the call has had the effect it needed to.
The latter can relate to, for example:
Both of those involve multiple asserts, but really only a single concern.