Introduction
In my last article here, we discussed the implementation and the steps taken to integrate our application with an external API, highlighting the design patterns and strategies used to carry out the task. They were divided into the following sections:
Design patterns
Performance
Testing
In this article, I will build on what we did in the last one and show how to implement the following requirements:
Exception handling
Logging
Exception Handling
Exception handling can be seen as a way of providing resilience to our application. It allows the application to continue to behave normally when things go wrong, while still reporting what went wrong. Imagine we get a 404 HttpException because the external API is unavailable at some point, or a database connection exception, which will most likely lead to a 500 HttpException. We wouldn't want our application to become unusable or behave unexpectedly in these scenarios. It should still function properly by returning a predefined response that adheres to the application's expected response format or structure. This is good practice because one system being down should not mean that every other system or feature in our deployment infrastructure is down or inaccessible. Rather, our application should remain accessible and respond normally, while reporting in a presentable manner that something went wrong. We can think of exception handling as a sort of prescriptive analytics in the Big Data world.
In our Spring implementation, we create a class and annotate it with @ControllerAdvice, as seen in the CardDetailControllerAdvise class. We then define methods to handle the exception scenarios. Ensure that your controllers throw the appropriate anticipated exceptions, as seen in the CardDetailController class. I have also introduced two new classes, RecordNotFoundException and GeneralException, that are used in my business use cases, as well as the HttpStatusCodeException class in the CardDetailControllerAdvise class to cater for HTTP-level exceptions (in this case our 404 and 500). Of course, you can extend or decorate the getHttpExceptionDetails method in the advice class to handle other HTTP exceptions.
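To make the shape of such an advice class concrete, here is a minimal sketch. Only the class and exception names come from this article's repo; the handler method names, the ApiResponse wrapper and the status codes chosen are illustrative assumptions, not the repo's actual code.

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class CardDetailControllerAdvise {

    // Hypothetical wrapper representing the application's predefined response format.
    record ApiResponse(String code, String message) {}

    @ExceptionHandler(RecordNotFoundException.class)
    public ResponseEntity<ApiResponse> handleRecordNotFound(RecordNotFoundException ex) {
        // Missing business records map naturally to a 404.
        return new ResponseEntity<>(new ApiResponse("404", ex.getMessage()), HttpStatus.NOT_FOUND);
    }

    @ExceptionHandler(GeneralException.class)
    public ResponseEntity<ApiResponse> handleGeneral(GeneralException ex) {
        // Anything else surfaces as a controlled 500 with a presentable message.
        return new ResponseEntity<>(new ApiResponse("500", ex.getMessage()), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
```

The key point is that every handler returns the same response structure the rest of the application uses, so clients never see a raw stack trace.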
On a side note, when implementing exception handling, a clean approach is to avoid using exceptions for decision making. For example, rather than doing this:
```java
public int execute() {
    try {
        // Code to execute
        return 1;
    } catch (Exception ex) {
        return -1;
    }
}
```
either do this:
```java
public int execute() {
    int status = -1;
    try {
        // Code to execute
        status = 1;
    } catch (FileNotFoundException ex) {
        // Log exception details
    }
    return status;
}
```
or simply throw the exception like this, so it propagates to the caller of the method, who then decides how to proceed (as in our case):
```java
public int execute() throws CustomException {
    int status = -1;
    try {
        // Code to execute
        status = 1;
    } catch (FileNotFoundException ex) {
        throw new CustomException("Could not access file resource.", ex);
    }
    return status;
}
```
Here we wrap our exception in a CustomException class that is thrown when this exception scenario occurs. We simply define a constructor that delegates to the java.lang.Exception constructor taking a message (the message we would like to present to the client) and a Throwable (in this case the exception that was caught).
```java
public class CustomException extends Exception {
    public CustomException(String message, Throwable cause) {
        super(message, cause);
    }
}
```
The above implementation can be seen in the getCardRequestLogsCountGroupedByCard method of the CardDetailService class. This gives a response format consistent with the application's predefined response format and also allows you to control the exception message content using a more centralized approach with the help of the CardDetailControllerAdvise class.
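To see the wrap-and-propagate pattern end to end, here is a small self-contained sketch (the demo class and file name are illustrative, not from the repo). The caller receives the friendly message while the original exception survives as the cause chain:

```java
import java.io.FileNotFoundException;

public class CustomExceptionDemo {

    // Mirrors the CustomException class shown above.
    static class CustomException extends Exception {
        CustomException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    static int execute() throws CustomException {
        try {
            // Simulate the failing I/O operation.
            throw new FileNotFoundException("cards.csv");
        } catch (FileNotFoundException ex) {
            // Wrap, don't swallow: the root cause stays attached.
            throw new CustomException("Could not access file resource.", ex);
        }
    }

    public static void main(String[] args) {
        try {
            execute();
        } catch (CustomException ex) {
            System.out.println(ex.getMessage());                       // client-facing message
            System.out.println(ex.getCause().getClass().getSimpleName()); // original cause preserved
        }
    }
}
```

Because the cause is preserved, a centralized handler (or your logs) can still localize exactly where the failure originated.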
Now, guess what! Our CacheIntegrationTest class is now failing because an exception is thrown based on the above implementation. In this case, because our CardDetailRequestLogRepository class is actually mocked, its getCardRequestLogsCountGroupedByCardNumber method returns null, which results in the GeneralException being thrown. So we need to create an ExpectedException rule in the CacheIntegrationTest class and assert that the GeneralException is indeed thrown.
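The adjusted test could look like the sketch below, assuming JUnit 4 (which is where the ExpectedException rule lives). The test method name and the sample card number are illustrative, not the repo's actual test:

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class CacheIntegrationTest {

    // The rule lets a test declare the exception it expects to be thrown.
    @Rule
    public ExpectedException expectedException = ExpectedException.none();

    @Test
    public void shouldThrowGeneralExceptionWhenRepositoryReturnsNull() throws Exception {
        expectedException.expect(GeneralException.class);
        // The mocked repository returns null, so the service wraps it in GeneralException.
        cardDetailService.getCardRequestLogsCountGroupedByCard("5399830000000000");
    }
}
```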
It is good practice to wrap our API exceptions so that the source and cause of an exception can be localized at a high level during analysis and investigation.
Also, because throwing exceptions consumes memory and writing to logs adds I/O, we should either log the exception or throw it, but not both, as shown earlier. Doing both can lead to what is commonly referred to as the Hot Potato anti-pattern: a lot of CPU-intensive work is done with relatively little valuable output, leading to spikes in CPU usage and memory consumption. Throwing and logging the same exception will also produce a pile of log messages that aren't really needed, and the extra text reduces the visibility of the logs.
Apart from the above approach to handling exceptions, a common strategy is to cache data responses from an external API into a data store. These cached responses are then served when the external API is unavailable or something goes wrong. This strategy is part of a design pattern known as the Circuit Breaker. The fault tolerance library Hystrix is a common implementation of this pattern, as I will show you in a bit.
First of all, we include the spring-cloud-starter-netflix-hystrix dependency in our project:
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
    <version>2.1.1.RELEASE</version>
</dependency>
```
Then we create a method verifyCardDetailHystrix in the CardDetailService class, annotate it with @HystrixCommand, and set the necessary @HystrixProperty values. We then create a fallback method, defaultCardDetailDto, that will be called when the API has been unreachable for a period, so as to return our cached or default data. Finally, we enable the circuit breaker by annotating our Application class with @EnableCircuitBreaker. And that's it: the defaultCardDetailDto method will be called whenever the API is unavailable. Apart from the ones I used, there are a number of other Hystrix properties you can tweak to suit your application's specification.
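To make the circuit-breaker idea itself concrete, here is a tiny conceptual sketch in plain Java. This is not Hystrix (Hystrix adds timeouts, half-open retries, metrics and thread isolation on top), just the core state machine: after a threshold of consecutive failures, the breaker "opens" and serves the fallback without even attempting the remote call.

```java
import java.util.function.Supplier;

// Minimal, illustrative circuit breaker: NOT production code.
public class SimpleCircuitBreaker<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get();      // circuit open: fail fast, serve cached/default data
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;    // a success closes the circuit again
            return result;
        } catch (RuntimeException ex) {
            consecutiveFailures++;      // count the failure and degrade gracefully
            return fallback.get();
        }
    }
}
```

This mirrors what the @HystrixCommand/fallback pair does for us declaratively: callers always get a response, and a flapping external API stops being hammered once the breaker opens.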
Logging
We have talked about how to keep track of application activities, present default or cached data, and notify users of the application state when something goes wrong. What if we also want to keep track of application states when all goes well, without any exceptions (i.e. the happy path)? The advantage of this is that we will then have a holistic view of all the activities that have taken place in the application. It can also help us replay all the activities of an application between any two points in time during its lifecycle.
Even though we introduced the concept of logging in our pseudocode earlier, we will now talk in depth about ways and strategies of implementing application logging for auditing.
Logging is a means of auditing or keeping track of every activity that has taken place in the application or system. This is crucial because it can tell us what took place at any particular point in time during the lifecycle of the application. There are quite a number of approaches to implementing this, but I will highlight two common ones.
In the first approach, we sprinkle our code with logging statements using standard libraries such as Log4j or SLF4J implementations. Here we add these libraries as dependencies in our projects and then call the methods that print out whatever we want recorded. To avoid unnecessary I/O operations, we can restrict these calls to the beginning and end of a method body.
```java
import java.io.FileNotFoundException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CardDetailService {

    Logger logger = LoggerFactory.getLogger(CardDetailService.class);

    public int execute() throws CustomException {
        logger.debug("execute method started");
        int status = -1;
        try {
            // Code to execute
            status = 1;
        } catch (FileNotFoundException ex) {
            throw new CustomException("Could not access file resource.", ex);
        }
        logger.debug("execute method ended");
        return status;
    }
}
```
(For those in the PHP Laravel world, there is a built-in Log facade with the same call signature as above, i.e. Log::debug('An informational message.');)
This approach tells us the activities/operations that took place in the application at a given point in time. The downside is that it doesn't tell us the states of the objects being acted upon during that period. Note, however, that with the above implementation, if an exception is thrown it will be logged as well.
We can also control the logging level to reduce the amount of logging information, and hence I/O operations. This can be done either using the application.yml and logback-spring.xml files in our project or at the container/application server level. Notice how, in the application.yml file, I set the logging levels per Java package to ERROR, based on our earlier recommendation. We also set the output pattern of every log event written to our log files so that it prints the log time and level, the class that triggered the log, and the message. In the logback-spring.xml file, we set the RollingFileAppender properties to control the name and size of each generated log file, and the maximum number and total size of log files kept on the server.
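For orientation, fragments of such a configuration could look like the sketches below. The package name, file paths and size limits are illustrative placeholders, not the repo's actual values:

```yaml
# application.yml (illustrative fragment)
logging:
  level:
    com.example.carddetail: ERROR   # per-package level, as recommended above
  pattern:
    file: "%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n"
```

```xml
<!-- logback-spring.xml (illustrative appender) -->
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/app.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
    <maxFileSize>10MB</maxFileSize>
    <maxHistory>7</maxHistory>
    <totalSizeCap>100MB</totalSizeCap>
  </rollingPolicy>
  <encoder>
    <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```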
The other approach to logging uses a very common design pattern known as Event Sourcing. Here we store all the events/activities and the corresponding changes made to the objects or models in our application. Then, to retrieve an object's or model's state, we read the different events/activities related to it and apply them one by one. This ensures that all changes to application state are stored as a sequence of events. It is quite similar to the JPA Hibernate Envers implementation (where we annotate the JPA entities with @Audited, leading to mirror images of model states being created in the database), except that in Event Sourcing these changes are tied to the actual end-user activity that led to the model state changes.
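The core mechanic ("state is derived by replaying events") can be sketched in a few lines of plain Java. The account/deposit domain here is purely illustrative, chosen because it makes replay obvious:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual event-sourcing sketch: we never store the current balance,
// only the events, and rebuild state by replaying them in order.
public class EventSourcedAccount {
    interface Event {}
    record Deposited(int amount) implements Event {}
    record Withdrawn(int amount) implements Event {}

    private final List<Event> eventStore = new ArrayList<>();

    public void deposit(int amount)  { eventStore.add(new Deposited(amount)); }
    public void withdraw(int amount) { eventStore.add(new Withdrawn(amount)); }

    // Current state is always derived: apply every event one by one.
    public int balance() {
        int balance = 0;
        for (Event e : eventStore) {
            if (e instanceof Deposited d)      balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }

    // The full history is what lets us audit or replay any time window.
    public List<Event> history() { return List.copyOf(eventStore); }
}
```

Frameworks like Axon industrialize exactly this loop: a durable event store instead of the in-memory list, and handler methods instead of the instanceof chain.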
In the Laravel world, this is implemented by adding either the EventSauce or the prooph dependency. You create your models, then define the Events, create the Projector, and handle side effects such as notifications using Reactors. For the sake of the scope of this article and the framework we are using, I will not go into details, but the concept is the same and this pattern can be applied to other technology stacks as well.
In our case here, using the Spring framework, I will show a high level and basic implementation.
First we bring the dependencies below into our pom.xml file:
```xml
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-spring-boot-starter</artifactId>
    <version>4.1</version>
</dependency>
<dependency>
    <groupId>org.axonframework</groupId>
    <artifactId>axon-test</artifactId>
    <version>4.0.3</version>
    <scope>test</scope>
</dependency>
```
Then download the Axon Server and extract it to a folder of your choice. This is what we will use as our EventStore.
We go ahead and create the Aggregate (CardDetailRequestAggregate), Event (CardDetailRequestCreatedEvent) and Command (CreateCardDetailRequestCommand) classes. Then we create the command and event handler methods in the CardDetailRequestAggregate for the CreateCardDetailRequestCommand and CardDetailRequestCreatedEvent classes. In very simple terms, the command class and the method annotated with @CommandHandler are responsible for determining which action or event was triggered based on a user input via the CommandGateway interface; it's similar to the Command design pattern in implementation. The event class and the corresponding methods annotated with @EventSourcingHandler are responsible, via the EventStore interface, for the side effect that occurred as a result of the triggered event or action. The event components are also responsible for retrieving and persisting the Aggregate models, again via the EventStore interface. The usage of the CommandGateway and EventStore interfaces can be seen in the CardDetailCommandComponent class. The logCardDetailRequest method, which was our existing function in the CardDetailService class from the last article, is overloaded to accept the CardDetailRequestCreatedEvent object and annotated with @EventHandler. The @EventHandler annotation will trigger the logCardDetailRequest method when we send our command object via the CommandGateway, as can be seen in the createCardDetalRequestLog method of the CardDetailCommandComponent class.
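A skeletal version of such an aggregate, assuming Axon 4, could look like this. The class names come from the article; the field, getter and constructor details are illustrative assumptions about the repo's code:

```java
import org.axonframework.commandhandling.CommandHandler;
import org.axonframework.eventsourcing.EventSourcingHandler;
import org.axonframework.modelling.command.AggregateIdentifier;
import org.axonframework.modelling.command.AggregateLifecycle;
import org.axonframework.spring.stereotype.Aggregate;

@Aggregate
public class CardDetailRequestAggregate {

    @AggregateIdentifier
    private String requestId;

    protected CardDetailRequestAggregate() {
        // Required by Axon so it can rebuild the aggregate by replaying events.
    }

    @CommandHandler
    public CardDetailRequestAggregate(CreateCardDetailRequestCommand command) {
        // Decide what happened, then publish it as an event to the event store.
        AggregateLifecycle.apply(
                new CardDetailRequestCreatedEvent(command.getRequestId(), command.getCardNumber()));
    }

    @EventSourcingHandler
    public void on(CardDetailRequestCreatedEvent event) {
        // Apply the event's effect to the aggregate's state (used on replay too).
        this.requestId = event.getRequestId();
    }
}
```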
We can now start up our Axon Server using the java -jar axonserver.jar command, and then start the Spring Boot application. When we hit the endpoint, our function is called as expected. If you go to the Axon Server dashboard, you will see an entry for the CardDetailRequestCreatedEvent event that was triggered, with the cardNumber and date you made the request with. The beauty of this approach is that we no longer burden the verifyCardDetailHystrix method in the CardDetailService class with the responsibility of logging, as in the previous implementation where we called the logCardDetailRequest method directly. Hence we have, in a way, applied the Single Responsibility Principle to that method, one of the tenets of clean coding (S.O.L.I.D).
Because logging is more or less a non-functional requirement, one crucial factor to consider is ensuring that the logging process does not impact application performance, and hence user experience. For this reason, it is advisable to make the logging process asynchronous, which is why I am using CompletableFuture as the return type in the service class; this is also the default behaviour of the Axon framework. Another consideration, if you have the resources, is to point your Event Sourcing implementation to a datastore separate from the primary application database as its event store. In our case, this is handled by the Axon Server we are running locally. The Axon implementation can also be configured to use other data stores, such as RDBMS or NoSQL databases, as its event store. This way you have a sort of backup with which to replay your application lifecycle and restore previously generated data should something go wrong with the primary application database.
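The asynchronous idea itself is easy to see in isolation. In this sketch (class and method names are mine, not the repo's), the log write is handed off to another thread via CompletableFuture, so the request path never blocks on log I/O; an in-memory list stands in for the real file or event store:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative async logger: the caller gets a future back immediately
// and the (potentially slow) write happens on the common fork-join pool.
public class AsyncLogger {
    private final List<String> sink = new CopyOnWriteArrayList<>();

    public CompletableFuture<Void> log(String message) {
        return CompletableFuture.runAsync(() -> sink.add(message));
    }

    public List<String> entries() { return sink; }
}
```

In a request handler you would simply fire and forget (`logger.log("card request received")`); calling `.join()` is only needed when you must wait for the write, as in tests.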
So there you have it: our application now has resilience and fault tolerance, as well as best practices for exception handling and logging, which are requirements for a well-architected system. And not to worry, I have updated our repo here to include all the implementations discussed in this article. In my next article, I will be highlighting Security, a very crucial requirement for building applications, and how to achieve Scalability and Availability with our application using Docker as the virtualization technology. Happy Coding!