DEV Community

jadjare

Unit Testing. Logic in Domain vs Logic in SQL


We have a table of 10 million records that include details of a location, a product, order date and units ordered.

We need to query this data to find the sum of units ordered by product, for a given set of locations.
We also have to apply a business rule: if the order date is N days in the past, the ordered units are classified as late units; otherwise they are classified as pending units.

This information will be displayed in a read-only table (i.e. an onscreen report):

| Product | Late Units | Pending Units |
| ------- | ---------- | ------------- |

Our current design is a SQL query that is passed the date on which ordered units should be classified as late. The logic that aggregates the data and classifies the units as either "Late" or "Pending" is applied in a SQL stored procedure.
The query results are then placed into a Data Transfer Object (e.g. {"product": "a", "lateUnits": 0, "pendingUnits": 100}) and supplied to the client application via a Web API.
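To make the discussion concrete, the aggregation inside the stored procedure might look something like the sketch below. The table name, column names and the table-valued parameter for locations are all assumptions for illustration, not the actual schema:

```sql
-- Sketch only: assumes an Orders(LocationId, Product, OrderDate, UnitsOrdered)
-- table, a @LateCutoffDate parameter supplied by the caller, and a
-- table-valued parameter @RequestedLocations holding the location filter.
SELECT
    Product,
    SUM(CASE WHEN OrderDate <  @LateCutoffDate THEN UnitsOrdered ELSE 0 END) AS LateUnits,
    SUM(CASE WHEN OrderDate >= @LateCutoffDate THEN UnitsOrdered ELSE 0 END) AS PendingUnits
FROM Orders
WHERE LocationId IN (SELECT LocationId FROM @RequestedLocations)
GROUP BY Product;
```

The CASE expressions are where the business rule lives, which is exactly the part that is awkward to cover with conventional unit tests.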

The challenge with this design is that it becomes much harder to write a suite of unit tests to demonstrate the business rules are correctly applied. At present we have created canned integration test data; we spin up a special integration test instance of the database and run the tests against that sample data.

This allows us to take advantage of the SQL query engine for performance, whilst having a reasonable degree of test coverage.

I'm still not 100% convinced by the approach we're taking and wonder what other developers might do to solve this problem, e.g. would you load all the data into the domain and apply the logic there, apply the logic in SQL, or take another approach?

Top comments (4)

Benito Serna • Edited

Hi @jadjare ,

I also think that your approach is good =)

My approach would probably be to move to the place you are now in several iterations... Maybe my first implementation would be, as you said, to "load all the data into the domain and apply the logic there", then maybe "all grouped by product", and then something like what you have now... moving responsibility to the database as needed.

In every case, though, I would put the database behind an object so I could call descriptive methods like fetchAll, fetchAllGroupedByProduct, fetchCountOfLateAndPendingUnitsByProduct... or something like that =) ... and in the "unit tests" of the unit that calls this storage object, just make sure it can handle the format of the returned info, using a "double" of the storage object.

I am not sure if my answer is clear enough... but I hope it is =)

jadjare

Yes, that makes sense. Sounds like we're on the same wavelength.
We have a read-only repository object with specific named methods along the lines you suggest.

We're writing the code in .NET Core and using a library called Dapper to manage our requests to the database and map the responses to our strongly typed objects.

rhymes • Edited

If the logic is in the stored procedure I'm not sure you can effectively write "unit tests".
What you can do is set up the database, run the stored procedure and then verify that the data is how you expect it after the procedure has run, which you're already doing if I'm not mistaken.

This is probably more of a functional test than a unit test, but if it's reproducible (given the same base data setup) and without side effects (it doesn't send millions of emails :D) it should be testable.

Does it really matter if it's not strictly a "unit test"? I'd say "who cares" :-) Your business logic is tested. Battle-test your stored procedure with various sets of inputs: for example, what happens with null values, whether there is any order precedence, and so on.
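If you want those battle tests to live in the database itself, SQL Server frameworks such as tSQLt let you fake a table, load canned rows, and assert on the procedure's result set. A rough sketch follows; the procedure name, schema and column names are assumptions, not the poster's actual code:

```sql
-- Hypothetical tSQLt-style test: isolate the Orders table, insert two
-- canned rows straddling the cutoff date, run the procedure under test,
-- and compare its output against an expected result set.
EXEC tSQLt.FakeTable 'dbo.Orders';

INSERT INTO dbo.Orders (LocationId, Product, OrderDate, UnitsOrdered)
VALUES (1, 'a', '2019-01-01', 60),   -- older than the cutoff: late
       (1, 'a', '2019-03-01', 100);  -- newer than the cutoff: pending

CREATE TABLE #Expected (Product NVARCHAR(50), LateUnits INT, PendingUnits INT);
INSERT INTO #Expected VALUES ('a', 60, 100);

CREATE TABLE #Actual (Product NVARCHAR(50), LateUnits INT, PendingUnits INT);
INSERT INTO #Actual
    EXEC dbo.GetLateAndPendingUnitsByProduct @LateCutoffDate = '2019-02-01';

EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
```

Whether you call that a unit test or a functional test, it pins the classification rule down against known data.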

I assume you put the logic in the stored procedure because the domain logic was too slow for your use case :-)

jadjare

@rhymes, thanks for responding with your thoughts.
In terms of the database, we have a Visual Studio Database Project; this produces a "Dacpac" including specific pre-canned data for our tests. The test suite then uses the Dacpac to drop and recreate the database when the tests run, which means the data is consistent on every run.

The original reason for not placing the business logic in the domain was actually that we're implementing a CQRS style of design. It's not full-on CQRS, but it does mean we've got a query-only path in the application, and this particular query sits on the query side. It was only after we knocked up the stored procedure that I realised we were actually implementing business rules in the query and that they should really be tested.

It sounds like, from your point of view, the approach we've taken makes sense. The more I explain it, the more comfortable I become with it - certainly from a pragmatic point of view.