Hey Readers,
I have been working in software since 2007. After a very frustrating project in 2010, I discovered test-driven development as a way t...
This is how I build now: Build it -> Refactor it -> test it (to be honest not all the time)
My question is how to incorporate TDD into that? I usually get consumed with the problem and just focus on solving it first
Hey Maher,
Sadly using TDD would require you to change how you work. TDD is a cycle of Red-Green-Refactor. It consists of writing the specification for some behavior—red, due to a failing test. Followed by writing the code that satisfies the specification—green, the tests pass. Lastly, refactoring, changing the structure of the code without adding or removing behavior.
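To make the cycle concrete, here is a minimal sketch of one turn of Red-Green-Refactor (the `add` function and test name are invented for the example):

```python
# Red: write a failing test first; it is the specification.
# At this point `add` does not exist yet, so the test fails.
def test_add_sums_two_numbers():
    assert add(2, 3) == 5

# Green: write the simplest code that satisfies the specification.
def add(a, b):
    return a + b

# Refactor: with the test passing, restructure freely; rerunning
# the test guards against accidentally changing behavior.
test_add_sums_two_numbers()
```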
This cycle Red -> Green -> Refactor (Repeat), is simply a different flow of working. One downside to this approach is that I don't always know what I am supposed to build. When I don't know how to solve a problem, I will write a Spike.
A Spike is an implementation that is an experiment. Before I start a Spike, I write out what I want to learn, and I commit to deleting all of the Spike code once I have learned that information. With the information in hand, I would use TDD to create the real implementation.
Since TDD is so different from your current workflow, I am not sure how you could mix the two styles together. At the end of the day, TDD is my preference and it is up to us individually to discover how we work best.
Thanks for the question!
Steve
Thank you for the detailed answer!
I will try out working in TDD and see how I like it.
My fear is always just time, but if TDD means I have fewer bugs, then I am OK with the time sacrifice
I've been trying to get into TDD, and find myself able to do it only in specific isolated cases. I still feel like I'm missing the bigger picture.
Thanks 😄
Hey Brad,
In my opinion there are two categories of code that are difficult to test: code I don't understand, and code that is tightly coupled (I include testing across the wire and asynchronous code in the tightly coupled group).
Code I don't understand
If I don't really understand the details of what I am trying to implement, attempting to write it with TDD will be very difficult. For instance, if I have never implemented an instance of the observer pattern before, I would first create a Spike in order to learn how I would structure this code. Once I have learned that, I throw away the untested code in the Spike, and use TDD to write the production code.
Tight coupling
Tightly coupled code prevents you from using bits of code in isolation; you end up bringing everything and the kitchen sink into the test just to run the function or class. There are two strategies I could use depending on who owns the code.
If code is part of the system I am working on, then I will make an attempt to decouple it. I would use the patterns laid out in Working Effectively With Legacy Code by Michael Feathers. I also balance delivering working software and adding coverage. I do this by only decoupling enough code to allow me to test my new feature. The rest might happen at a later time. When learning TDD, I think it is important not to "boil the ocean", and just take the wins when you can get them. There are times for bigger refactorings, but it is up to you to determine when is the right moment.
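As a hypothetical sketch of "decoupling just enough," here is one common seam from the Feathers playbook: injecting a dependency instead of constructing it inside the class, so a test can substitute a fake (all the names here are invented for the example):

```python
# Before this change, OrderReport built its own database client
# internally, so any test had to stand up a real database.

class FakeOrders:
    """Test double standing in for the real data source."""
    def totals_for(self, customer_id):
        return [10.0, 15.5]

class OrderReport:
    # The seam: the data source is injected rather than constructed
    # inside the class, so tests can pass a fake.
    def __init__(self, orders):
        self.orders = orders

    def grand_total(self, customer_id):
        return sum(self.orders.totals_for(customer_id))

def test_grand_total_sums_customer_orders():
    report = OrderReport(FakeOrders())
    assert report.grand_total("c-42") == 25.5

test_grand_total_sums_customer_orders()
```

The rest of the class can stay coupled for now; only the path the new feature needs gets a seam.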
If I don't own the code, I would test at a higher level of abstraction. For instance, if two classes are using framework code that requires them to be tightly coupled, then I don't try to break them apart. The framework has already made a decision about coupling; it is going to be near impossible for me to change it—without making the code unreadable. Instead of fighting a losing battle, I create a class or a function that wraps that interaction and I test from there. The tests call out the high-level behavior the framework code provides.
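A hypothetical illustration of that wrapping strategy, with two stand-in classes playing the role of framework code I don't own:

```python
class FrameworkSession:
    """Stands in for framework code we don't own."""
    def __init__(self):
        self.user = None

class FrameworkAuth:
    """Tightly coupled to FrameworkSession by the framework's design."""
    def login(self, session, name):
        session.user = name

def sign_in(name):
    """The wrapper: the one place our code touches the framework.
    Tests target this function, not the framework internals."""
    session = FrameworkSession()
    FrameworkAuth().login(session, name)
    return session

def test_sign_in_attaches_the_user_to_a_session():
    assert sign_in("ada").user == "ada"

test_sign_in_attaches_the_user_to_a_session()
```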
How much?
It is my personal opinion that a team should have enough tests in place to be confident that things will work when the system is deployed to production. When a team pushes to production and feels so scared that they spend an hour verifying the behavior of the system by hand, that tells me they didn't write enough automated tests.
CI vs CD
Every team I have worked on has had a CI pipeline in place, as making sure the master branch is always in a working state is necessary. However, due to the industries I have worked in, I have never seen automated hourly deploys to production. It was often the case that my team was one of a handful that deployed twice a week, and that seemed pretty good.
It is more a function of your current company. Asking, "Is that much automation valuable to our users?" would tease out if that work is needed.
Thanks for the questions!
Steve
Hey Neil,
I know that it is a typical consulting answer, but it depends. If there is logic in the methods on a value object, I would definitely write tests. However if functions simply return a static value, I would leave those untested. I usually try to test how groups of objects and functions interact together. With that style I am free to extract out data structures or value objects that don't have their own tests, but are tested through a higher order interaction.
The factors at play are 1) how confident am I that the value object is a stable abstraction 2) is the behavior something that could actually break.
If I am confident that it can't break, I don't write a test.
If it is an unstable abstraction, I test it through an abstraction that is stable.
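A hypothetical sketch of what that looks like in practice: a `Money` value object with no tests of its own, covered indirectly through the `Invoice` behavior that uses it (both names are invented for the example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Money:
    """Value object: no direct tests, since its methods hold
    trivial logic and it may still change shape."""
    amount: int  # cents

    def plus(self, other):
        return Money(self.amount + other.amount)

class Invoice:
    """The stable abstraction the tests target."""
    def __init__(self, line_items):
        self.line_items = line_items

    def total(self):
        total = Money(0)
        for item in self.line_items:
            total = total.plus(item)
        return total

def test_invoice_totals_its_line_items():
    # Money.plus is exercised by this higher-level interaction.
    assert Invoice([Money(500), Money(250)]).total() == Money(750)

test_invoice_totals_its_line_items()
```

If `Money` later gains real logic (currency conversion, rounding rules), that would be the moment to give it tests of its own.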
Thanks for your question!
Have you ever had any issues getting stakeholders and managers at any of the companies you've worked at to "buy in" to TDD, or is it something that is part of your workflow that you have never had to ask permission for?
Hey Dwayne,
I have had the benefit of working with companies who are beginning their path of figuring out what it means to work in an agile fashion. As part of that working-agreement, I am expected to show their developers how to use the tools of TDD, pair programming, and retrospection to incrementally deliver a product sooner. In that context I really don't have any push back from above. Most organizations already require tests to be written, so they don't have much to say about how the tests get there.
I'm not sure how I would behave if someone told me I explicitly can't do TDD.
Thanks for the question!
Steve
How do you deal with not-very-clear requirements in order to use TDD? I've tried a couple of times to enforce TDD in my daily routine, but 7 times out of 10 I failed because the requirements for the task weren't clear or were missing something, and I found myself going back to the good old "I'll add the tests later on." Of course I did, but I'd still like to fully embrace TDD.
Hey Higor,
TDD is a great tool for declaring that a piece of code should do X, Y, and Z. If you don't know what the code is supposed to do, you can't write it, and therefore can't write tests.
When I have been in situations where requirements are not defined or are missing key information, I have worked with the Product Manager to correct stories before they get into a team's backlog. I would sit with the PM to review the story. If the definition was fuzzy, I would ask for more detail. Once the story was mostly clear, it could be pointed and picked up by the team.
Another way to tackle this problem would be to set up a way to get questions answered by your stakeholder at a more frequent cadence.
I hope this helps.
Thanks for the question!
Steve
1 - I would like to use it in my job; can you recommend any books or online courses about it?
2 - Is it possible to keep using TDD even with a short deadline?
3 - Do you use any specific tools for TDD?
Hey Leandro,
1) I would recommend Kent Beck's book on TDD, entitled "Test Driven Development: By Example" —I don't want to include a link to it, as I fear people will think it is an affiliate link. The book is black and purple. That is the one I read when I was just starting out. It is structured like a giant tutorial with code that you should actually type in.
If you want to start using it at your work, then you should start. However, I would recommend a timebox—setting a limit—on how long you fight to test something. If you can't immediately figure out how to test something, don't spend hours on it—unless your employer is okay with that—just say, "If I can't figure it out in 20 mins, I am gonna move on." As you get more experience, you will be able to tackle those harder to test classes or functions.
2) I personally never act differently near a deadline. I want to remain calm, and not reactive. If I believe a process is worthwhile, I will do it as much as possible.
3) Could you rephrase this question? I am not sure what you mean.
Thank you for the questions!
Thank you for the opportunity!
what do you think of augur?
Hey Areahints,
I looked at this project for a bit. It seems that the author of this project and myself have fundamentally different views on testing. I get the impression that he views it as an annoying chore that happens after the real work is over. However I view testing as part of the job.
When I write a test I am automating the work of testing the software by hand. Testing by hand is tedious work that is difficult for humans to do well—we are bad at doing repetitive tasks without variation. I would rather make the computer do the boring work of running the same checks over and over.
I believe tests are software. Since software spends most of its life being read, it needs to communicate intent. I personally would never recommend someone use a tool that automates the writing of tests, because I believe it is impossible to automate communicating the intent of the code.
Thanks for the question!
Do you feel that there are industry/software-types that would gain more from TDD?
And once you get the swing of things, how much is the impact on output vs using manual testing?
Hey Sebastian,
I have mainly worked on web applications, and in that context I have found TDD very helpful. I have also used TDD to verify that certain behaviors are working inside of a Windows container, when I contributed to an open-source cloud platform.
The biggest difference for me in those two contexts is what test granularity is the easiest to work with. The containers had to be rebuilt, started, and then code had to be executed in them. This caused us to write larger grained tests, since the feedback loop was so long. Whereas in a web app, I am free to test individual functions or classes.
I have spent the last few years training engineers to use TDD and pair programming. After about 3 months of doing it full time, they tell me that they feel more productive.
Manual testing still has a place when using TDD, however the goal is no longer to verify correctness—but to find bugs. Once a bug, edge-case, or even a security vulnerability is found, a new test can be written to make sure that issue is gone forever.
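A hypothetical sketch of pinning down a bug found by hand: say manual testing revealed that an `average` function crashed on empty input. The fix ships together with a regression test (the function and scenario are invented for the example):

```python
def average(values):
    """Mean of a list of numbers."""
    if not values:  # the fix: empty input used to raise ZeroDivisionError
        return 0.0
    return sum(values) / len(values)

def test_average_of_empty_list_is_zero():
    # Regression test: this input crashed before the fix; the test
    # makes sure the bug can never silently return.
    assert average([]) == 0.0

def test_average_of_two_numbers():
    assert average([2, 4]) == 3.0

test_average_of_empty_list_is_zero()
test_average_of_two_numbers()
```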
Thank you for the questions!
Besides fewer bugs, what do you consider the biggest advantage of TDD?
Opposite question: aside from the potential slowdown of writing tests, what do you consider the biggest drawback of TDD?
Hey Ryan,
I wouldn't try to make the case that there are fewer bugs when using TDD. However, I would say that there are fewer regressions. It is possible to write tests first, and not think about all of the edge cases that could exist when the system is running.
I agree that there is definitely a slowdown when learning TDD. It really took me years to start doing it well. When I was just starting out, it was easy to create tests that were very coupled to the implementation—rather than the behavior—of a piece of code. I would say that the learning curve is a huge drawback. I'm not sure what I would say is the biggest; I might have to revisit your question after thinking about it for a bit. After doing it for years, I feel that it is actually slower for me to work in a non-TDD way.
Thank you for the questions!