
Costin Manda

Originally published at siderite.dev

Data access in code, using repositories, even with ORMs

Intro

In the .NET world, one of the most used methods of accessing databases is Entity Framework (EF), an Object Relational Mapper (ORM) that is tightly integrated with the language syntax. Using the Language Integrated Queries (LINQ) native to .NET languages, it makes data access feel like working with normal .NET collections, without requiring much knowledge of SQL. This has its benefits and drawbacks, which I will try not to rant about here. But one of the issues it consistently creates is confusion regarding the structure of the software project, the levels of abstraction and, ultimately, unit testing.

This post will try to explain why the repository abstraction is ALWAYS useful. Note that many people use "repository" as a general term for abstracted data access, while there is also a Repository software design pattern which covers similar ground but is not the same thing. Here, I will call a repository a series of interfaces abstracting the implementation details of data access, and I will ignore the design pattern completely.
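To make that concrete, here is a minimal sketch of what I mean. All names (PersonRecord, PersonFilter, IPeopleRepository) are made up for illustration and will be reused in the other sketches in this post:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical data model and filter, used only for illustration.
public record PersonRecord(int Id, string Name, decimal BaseSalary);
public record PersonFilter(string? NameContains = null);

// What this post calls a "repository": an interface the rest of the code
// depends on, hiding how and where the data is actually persisted.
public interface IPeopleRepository
{
    Task<IReadOnlyList<PersonRecord>> GetFilteredListOfPeople(PersonFilter filter);
    Task Save(PersonRecord person);
}
```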

History

Feel free to skip this if you are aware of it, but I have to first address how we got to the idea of repositories to begin with.

In prehistory, code was just written as is, with no structure, everything in, doing what you wanted it to do or at least hoping to. There was no automated testing, just manual hacking and testing until it worked. Each application was written in whatever was on hand, with concerns about hardware requirements more important than code structure, reuse or readability. That's what killed the dinosaurs! True fact.

Slowly, patterns started to emerge. For business applications in particular, there was an obvious separation between the business code, the persistence of data and the user interface. These were called layers and were soon separated into different projects, not only because they covered different concerns, but also because the skills necessary to build them were quite different. UI design is very different from writing business logic, which in turn is very different from SQL or whatever language or system was used to persist data.

Therefore, the interaction between the business layer and the data layer was abstracted into interfaces and models. As a business class, you wouldn't ask for the list of entries in a table, you would require a filtered list of complex objects. It would be the responsibility of the data layer to access whatever was persisted and map it to something understandable to the business code. These abstractions started to be called repositories.

On the lower layers of data access, patterns like CRUD quickly took over: you defined structured persistence containers like tables, and you would create, read, update or delete records. In code, this kind of logic gets abstracted into collections, like List, Dictionary or Array. So there was also a current of opinion that repositories should behave like collections, maybe even be generic enough to have no methods other than the actual create, read, update and delete.
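For illustration only, the collection-like school of thought tends to end up with a generic interface along these lines (a sketch, not a recommendation):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// The "repository as a generic collection" idea: nothing but CRUD,
// shaped after the persistence mechanism rather than after the business.
public interface IRepository<TEntity, TKey>
{
    Task Create(TEntity entity);
    Task<TEntity?> Read(TKey id);
    Task<IEnumerable<TEntity>> ReadAll();
    Task Update(TEntity entity);
    Task Delete(TKey id);
}
```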

However, I strongly disagree. As abstractions separating data access from business, repositories should be as far removed from the patterns of data access as possible and instead be modelled on the business requirements. This is where the mindset of Entity Framework in particular, but also of many other ORMs, started clashing with the original idea of the repository, culminating in calls to never use repositories with EF and to consider them an antipattern.

More layers

A lot of confusion is generated by parent-child relationships between models, like a Department entity with People in it. Should a department repository return a model containing people? Maybe not. So how about we separate repositories into departments (without people) and people, then have a separate abstraction to map them to business models?
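One possible shape for that separation, with all names hypothetical, might be:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical data-level records returned by two narrow repositories.
public record DepartmentRecord(int Id, string Name);
public record DepartmentPersonRecord(int Id, int DepartmentId, string Name);

// Hypothetical business model that the rest of the code actually wants.
public record DepartmentModel(int Id, string Name, IReadOnlyList<string> PeopleNames);

public interface IDepartmentRepository
{
    Task<DepartmentRecord?> GetDepartment(int id);
}

public interface IDepartmentPeopleRepository
{
    Task<IReadOnlyList<DepartmentPersonRecord>> GetPeopleInDepartment(int departmentId);
}

// A separate mapping abstraction composes the business model, so neither
// repository needs to know about the parent-child relationship.
public class DepartmentModelMapper
{
    private readonly IDepartmentRepository _departments;
    private readonly IDepartmentPeopleRepository _people;

    public DepartmentModelMapper(IDepartmentRepository departments, IDepartmentPeopleRepository people)
    {
        _departments = departments;
        _people = people;
    }

    public async Task<DepartmentModel?> GetDepartmentModel(int id)
    {
        var department = await _departments.GetDepartment(id);
        if (department is null) return null;

        var people = await _people.GetPeopleInDepartment(id);
        var names = people.Select(p => p.Name).ToList();

        return new DepartmentModel(department.Id, department.Name, names);
    }
}
```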

The confusion actually increases when we take the business layer and separate it into sublayers. For example, what most people call a business service is an abstraction that applies specific business logic to a specific type of business model. Let's say your app works with people, so you have a model called Person. The class to handle people will be a PeopleService, which will get its data from the persistence layer via a PeopleRepository, but also do other things, including mapping between data models and business models, or specific work that relates only to people, like calculating their salaries. However, most business logic uses multiple types of models, so services end up being mapping wrappers over repositories, with little extra responsibility.
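A minimal sketch of such a thin service, reusing the PersonRecord, PersonFilter and IPeopleRepository names from the earlier sketch (the salary rule is made up):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical business model, distinct from the data-level PersonRecord.
public record Person(int Id, string Name, decimal Salary);

// The service ends up as a thin wrapper: map data models to business
// models and apply the people-specific logic.
public class PeopleService
{
    private readonly IPeopleRepository _repository;

    public PeopleService(IPeopleRepository repository) => _repository = repository;

    public async Task<IReadOnlyList<Person>> GetFilteredListOfPeople(PersonFilter filter)
    {
        var records = await _repository.GetFilteredListOfPeople(filter);
        return records
            .Select(r => new Person(r.Id, r.Name, CalculateSalary(r)))
            .ToList();
    }

    // Made-up business rule, standing in for "calculating their salaries".
    private static decimal CalculateSalary(PersonRecord record) => record.BaseSalary * 1.1m;
}
```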

Now imagine that you are using EF to access the data. You already have to declare a DbContext class that contains collections of entities mapped to SQL tables. You have LINQ to iterate, filter and map them, which is efficiently converted into SQL commands in the background and gives you what you need, complete with hierarchical parent-child structures. That conversion also takes care of mapping internal data types, like specific enums or weird data structures. So why would you even need repositories, maybe even services?
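For reference, the EF Core picture looks roughly like this (a trimmed-down sketch; the entity and context names are made up):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Entities mapped to SQL tables, including a parent-child relationship.
public class DepartmentEntity
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public List<PersonEntity> People { get; set; } = new();
}

public class PersonEntity
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    public int DepartmentId { get; set; }
}

// The DbContext exposes one collection per table.
public class CompanyContext : DbContext
{
    public CompanyContext(DbContextOptions<CompanyContext> options) : base(options) { }

    public DbSet<DepartmentEntity> Departments => Set<DepartmentEntity>();
    public DbSet<PersonEntity> People => Set<PersonEntity>();
}

public static class Queries
{
    public static List<DepartmentEntity> GetDepartmentsWithPeople(CompanyContext context, string namePrefix)
    {
        // This LINQ query is translated to SQL behind the scenes,
        // including the parent-child Include.
        return context.Departments
            .Include(d => d.People)
            .Where(d => d.Name.StartsWith(namePrefix))
            .ToList();
    }
}
```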

I believe that while more layers of abstraction may seem like pointless overhead, they increase the human understanding of the project and improve the speed and quality of change. There is a balance, obviously; I've seen systems architected with the apparent requirement that all software design patterns be used everywhere. Abstraction is only useful if it improves code readability and separation of concerns.

Reason

One of the contexts where EF becomes cumbersome is unit testing. DbContext is a complicated system, with a lot of dependencies that one would have to mock manually with great effort. Therefore Microsoft came up with an idea: in-memory database providers. In order to test anything, you just use an in-memory database and be done with it.

Note that on Microsoft's own pages this method of testing is now marked as "not recommended". Also note that even in those examples, EF is abstracted behind repositories.
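For context, the typical in-memory setup looks something like this. It assumes the Microsoft.EntityFrameworkCore.InMemory package and reuses the CompanyContext sketch from the previous example:

```csharp
using Microsoft.EntityFrameworkCore;

public static class InMemorySetup
{
    public static CompanyContext CreateTestContext()
    {
        // Every entity type and relationship declared on the context has to be
        // valid and populated before the code under test can produce anything useful.
        var options = new DbContextOptionsBuilder<CompanyContext>()
            .UseInMemoryDatabase(databaseName: "TestDb")
            .Options;

        var context = new CompanyContext(options);
        context.Departments.Add(new DepartmentEntity
        {
            Name = "IT",
            People = { new PersonEntity { Name = "Ana" } }
        });
        context.SaveChanges();
        return context;
    }
}
```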

While in-memory database tests work, they introduce several issues that are not easy to address:

  • setting up an in-memory DbContext requires all the dependencies between existing entities to be in place
  • setting up and starting the in-memory database for each test is slow
  • in order to get valid database output you need to set up a lot more than what you want to atomically test

Therefore, what ends up happening is that people set up everything in the database within a "helper" method, then write tests that start with this inscrutable and complex setup just to exercise even the smallest functionality. Any code that touches EF is untestable without it.

So one reason to use repositories is to move the testing boundary above DbContext. Unit tests then don't need a database at all, just a repository mock; the repository itself gets tested in integration tests against a real database. The in-memory database is very close to a real one, but it is still slightly different.
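A sketch of what such a unit test might look like, using Moq and xUnit purely as examples and reusing the PeopleService and IPeopleRepository sketches from above:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Moq;
using Xunit;

public class PeopleServiceTests
{
    [Fact]
    public async Task GetFilteredListOfPeople_MapsRecordsToBusinessModels()
    {
        // No database, no DbContext: just a mock of the repository interface.
        var repository = new Mock<IPeopleRepository>();
        repository
            .Setup(r => r.GetFilteredListOfPeople(It.IsAny<PersonFilter>()))
            .ReturnsAsync(new List<PersonRecord> { new(1, "Ana", 1000m) });

        var service = new PeopleService(repository.Object);

        var people = await service.GetFilteredListOfPeople(new PersonFilter());

        Assert.Single(people);
        Assert.Equal("Ana", people[0].Name);
    }
}
```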

Another reason, which I admit I've rarely seen be of actual value in real life, is that you might want to change the way you access the data. Maybe you want to switch to NoSQL, or to some distributed memory cache system. Or, much more likely, you started with a database structure, perhaps a monolithic database, and now you want to refactor it into multiple databases with different table structures. Let me tell you right off the bat that this will be IMPOSSIBLE without repositories.

And, specific to Entity Framework, the entities you get back are active records, mapped to the database. You make a change in one, save the changes for another, and suddenly the first entity is updated in the database too. Or maybe it isn't, because you didn't include something or the context has changed.
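A small illustration of that surprise, reusing the CompanyContext sketch from earlier:

```csharp
using System.Linq;

public static class TrackingSurprise
{
    public static void Demonstrate(CompanyContext context)
    {
        var person = context.People.First();
        person.Name = "Renamed person";          // only changed in memory so far

        var department = context.Departments.First();
        department.Name = "Renamed department";

        // The intention was to save only the department change, but SaveChanges
        // flushes every tracked modification, including the person rename.
        context.SaveChanges();
    }
}
```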

The proponents of EF always hype the tracking of entities as a very positive thing. Let's say you get an entity from the database, you then do some business, then you update the entity and save it. With a repo you would get the data, then do business, then get the data again in order to perform a little update. EF would keep it in memory, know it wasn't updated before your change, so it would never read it twice. That's true. They are describing a memory cache for the database that is somehow aware of database changes and keeps track of everything you handle from the database, unless instructed otherwise, bidirectionally maps database entries to complex C# entities and tracks changes back and forth, while being deeply embedded in the business code. Personally, I believe this plethora of responsibilities and lack of separation of concerns is a lot more damaging than any performance gained by using it. Besides, with some initial effort, all that functionality can still be abstracted in a repository, or maybe even another layer of memory caching for a repository, while keeping clear borders between business, caching and data access.

In fact, the actual difficulty in all of this is determining the borders between systems that should have separate concerns. For example, one can gain a lot of performance by moving filtering logic to stored procedures in the database, but that loses testability and readability of the algorithm used. The opposite, moving all logic to code, using EF or some other mechanism, is less performant and sometimes unfeasible. Or where is the point where data entities become business entities (see the example above with Department and Person)?

Perhaps the best strategy is to first define these borders, then decide which technologies and designs fit within them.

My conclusion

I believe that service and repository abstractions should always be used, even if the repository uses Entity Framework or another ORM underneath. It all boils down to separation of concerns. I would never consider Entity Framework a useful software abstraction, since it comes with so much baggage, therefore a repository must be used to abstract it in code. EF is a useful abstraction, but of database access, not of the software design.

My philosophy of software writing is that you start with application requirements, you create components for those requirements and abstract any lower level functionality with interfaces. You then repeat the process at the next level, always making sure the code is readable and doesn't require understanding of the components that use it or of the ones it uses. If that's not the case, you've separated concerns badly. Therefore, since no business application ever had a requirement to use a specific database or ORM, the data layer abstraction should hide all knowledge of those.

What does business want? A filtered list of people? Then `var people = service.GetFilteredListOfPeople(filter);`, nothing less, nothing more. And the service method would just do `return mapPeople(repo.GetFilteredListOfPeople(mappedFilter));`, again nothing less or more. How the repo gets the people, saves the people or does anything else is not the concern of the service. You want caching? Then implement a caching mechanism that implements IPeopleRepository and has a dependency on IPeopleRepository. You want mapping? Implement the correct IMapper interfaces. And so on.
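As a sketch of that caching idea, again reusing the IPeopleRepository interface from earlier (the invalidation is deliberately naive):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// A caching layer that is itself an IPeopleRepository and wraps another
// IPeopleRepository. The service never knows or cares whether it talks
// to the cache or to the real data access.
public class CachedPeopleRepository : IPeopleRepository
{
    private readonly IPeopleRepository _inner;
    private readonly Dictionary<PersonFilter, IReadOnlyList<PersonRecord>> _cache = new();

    public CachedPeopleRepository(IPeopleRepository inner) => _inner = inner;

    public async Task<IReadOnlyList<PersonRecord>> GetFilteredListOfPeople(PersonFilter filter)
    {
        if (_cache.TryGetValue(filter, out var cached)) return cached;

        var people = await _inner.GetFilteredListOfPeople(filter);
        _cache[filter] = people;
        return people;
    }

    public async Task Save(PersonRecord person)
    {
        await _inner.Save(person);
        _cache.Clear(); // crude invalidation, enough for a sketch
    }
}
```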

I hope I wasn't overly verbose in this article. I kept the code examples to minimal sketches, since this is more of a conceptual issue than an implementation one. Entity Framework may be the target of most of my complaints here, but this applies to any system that magically helps you with the small things while breaking the important ones.

Hope that helps!
