I was reflecting recently on habits and practices I have that go against the common wisdom of the software development industry. Do you have any?
Here are some of mine:
I don't ask a lot of questions
I often hear that one should ask as many questions as possible, especially when onboarding onto a new project. That has never been my style though. Instead, I like to answer my own questions by digging through code, stepping through it in a debugger, experimenting, reading old commit messages, things like that. I definitely do ask questions, but it's somewhat of a last resort for me when I either can't find the answer myself, or really need to know the answer ASAP.
When I answer my own question like this, I don't just get the answer to the question I'm asking, I also stumble upon the answers to a lot of other questions I haven't even asked yet. That doesn't really happen when I just ask someone my question and they tell me the answer.
I credit this habit for the reputation I often seem to get for knowing a lot about code I didn't even write. It also helps me operate when there's nobody I can ask about something (which happens often in freelancing).
I write lots of integration tests, few unit tests
Usually I hear that one should write a lot of unit tests, some component tests, and a few integration tests. I generally do the opposite, and write many integration tests and few unit tests. Here are some reasons why I think this works:
- Integration tests are the best at telling you if your application actually works. The goal of software engineering is never really "write a function that does X", it's always "write an interface via which some external actor can do X", and that's what integration tests test.
- Integration tests can often survive refactoring and other overhauls intact. Let's consider an HTTP API server. You're going to want a stable API with request and response schemas that stay the same regardless of any internal refactoring you do. If a test targets that stable API, it can remain unchanged and valid even if you completely rewrite all the code under the hood (see the sketch after this list). In contrast, big refactors often necessitate big changes to unit tests and especially component tests, which makes the refactor costlier and also introduces the possibility of rewriting a test incorrectly so that it passes when it shouldn't.
- Other developers and QA engineers can write integration tests without whitebox insight into the internal code. If you can set up a good set of utilities for writing API-level tests and shoot some example tests and API documentation over to QA, they can start adding more tests while knowing little to nothing about the internals. And since integration tests are less likely to need changes, you're less likely to need to update tests written by someone else (which is often pretty difficult) when you make changes.
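To make that concrete, here's a minimal sketch of the kind of API-level integration test I mean. The endpoint, payload fields, and base URL are made up for illustration; the point is that the test only talks to the public HTTP interface, so it can survive internal rewrites untouched.

```python
# Hypothetical API-level integration test (pytest + requests).
# The /orders endpoint and payload fields are invented for illustration;
# the test only exercises the public HTTP contract, not internals.
import os

import requests

BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")


def test_create_and_fetch_order():
    # Create an order through the public API, exactly as a client would.
    created = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "quantity": 2},
        timeout=5,
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Read it back through the same stable interface; nothing here depends
    # on how orders are stored or which internals get rewritten.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["quantity"] == 2
```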
I definitely write unit and component tests, but I reserve them for certain conditions like:
- Pure functions that implement complex and well-defined algorithms. We often need such functions to be rock solid and have many tests that run quickly. And if the expected behavior is well-defined, we're less likely to change it in a way that requires changing/rewriting tests.
- If it's difficult to test certain behavior from higher-level tests (sometimes this is just due to the type of application you're building - for instance, UI tests can be very cumbersome to write).
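For contrast, here's the kind of unit test I do write: a pure, well-specified function exercised against many fast cases. The Luhn checksum function below is just an example I picked, not something from a real project.

```python
# Unit tests for a pure, well-defined function (Luhn checksum), shown
# purely as an example of where unit tests shine.
import pytest


def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn check."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0


@pytest.mark.parametrize(
    "number, expected",
    [
        ("79927398713", True),   # classic valid Luhn example
        ("79927398710", False),  # check digit changed -> invalid
        ("0", True),             # trivial single-digit case
    ],
)
def test_luhn_valid(number, expected):
    assert luhn_valid(number) is expected
```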
I don't like frameworks
While I respect the structure that frameworks can impart to a project (especially a big one) and the magic they provide, I find that the magic is sometimes more of a curse than a blessing.
The more magical a framework is, the more often I experience the magic of "I don't understand why this doesn't work..." and spend a day waving my trial-and-error wand at it until I reach the nirvana of "now it works but I still don't understand why..." and move on to the next thing.
When I keep things simple and work with lower-level tools, I run into puzzles much less often and can work through them much more quickly. I also have more flexibility to build exactly what I need, without bumping into the guardrails that opinionated frameworks often put up.
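As an example of what I mean by lower-level tools, here's a tiny JSON endpoint built on Python's standard library instead of a web framework. The route and response are invented; a real service needs more, but every step is visible and there's nothing magical to debug.

```python
# Minimal JSON endpoint using only the Python standard library.
# The /health route and response body are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```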
That said, it's a balancing act to be sure. I don't build applications in assembly just because "it's the least magical". Sometimes opinionated frameworks can do things that would be really hard or time-consuming to do otherwise, and for bigger teams they can help everyone stay on the same page.
What about you?
How do you rebel against the common wisdom?
By the way, I don't claim to be objectively correct on any of these practices; it's just how I work, for better or for worse!
Top comments (2)
Abstracting code too quickly was a curse in my past. Just because two similar functions do the same thing doesn't mean you should extract a shared function that complicates things at the last mile.
Writing tests just to hit high coverage is also questionable - as if the tests really reflect how things work in the real world. Testing always feels like covering the most painful paths and the happy paths. Try running the UI user test with 10 browser extensions installed, which a lot of people have (maybe add an ad blocker too, if the user works in IT), and then never question why people think the application/website is so slow and/or not working as expected.
The "good" thing about using all the "best practice" stuff is: building trust. Not only can you trust in yourself but also others trust you doing a good job.
And if it fails? No software is bug-free and people make mistakes. This is life. Just learn from it, and cover the case with a new test so it doesn't happen again.
I often repeat myself, violating the golden rule of thumb, DRY. IMHO, there's nothing wrong with repetition in code if it's easier to understand or maintain that way.