Background
Continuing from Streaming Large JSON Response in Spring and Consume Large JSON Response in Spring, it is now time to apply a real-world scenario: generating a report from the received response data. We will generate a csv (comma separated value) report based on that data.
Goal
In Consume Large JSON Response in Spring we received a list of Employee pojos, which look like the following:
Employee.java
public class Employee {
private Long empNo;
private Date birthDate;
private String firstName;
private String lastName;
private Gender gender;
private Date hireDate;
}
From here we will generate a csv file. The goal is to generate the whole csv file in the file system, then serve it over the network directly from the file system without putting pressure on the heap memory.
First, a GET request will show a web page containing a prompt to download the report. Clicking the button will generate the csv file and download it in the browser.
Implementation
The first task is easy: designing a landing page with the "Download Report" button.
Inside the templates folder, the employee-report template (CSS details are skipped for focus):
<body>
<a href="/download-employee-report" download>Download Report</a>
</body>
EmployeeController.java
@GetMapping("/employee-report")
public String getReportDownloadPage() {
return "employee-report";
}
Now, running the consumer application and hitting http://localhost:8081/employee-report will show the page with the Download Report button.
Now it is time to implement the /download-employee-report route.
For CSV generation we need the jackson-dataformat-csv dependency, added using Maven:
pom.xml
<dependencies>
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-csv</artifactId>
<version>2.13.0</version>
</dependency>
</dependencies>
With the dependency in place, we declare a CsvMapper bean:
WebConfig.java
@Configuration
public class WebConfig {
... Previous Code
@Bean
@Primary
public ObjectMapper restTemplateObjectMapper(RestTemplate restTemplate) ... Previous Code
@Bean(name = "csv-mapper")
public CsvMapper csvMapper() {
CsvMapper csvMapper = new CsvMapper();
return csvMapper;
}
}
CsvMapper is a subclass of ObjectMapper, so to avoid ambiguity when injecting ObjectMapper we declared the @Primary annotation on the ObjectMapper bean. To inject the CsvMapper bean we add @Qualifier("csv-mapper") at the injection point.
Now injecting the CsvMapper:
EmployeeReportService.java
@Service
public class EmployeeReportService {
private final RestTemplate restTemplate;
private final ObjectMapper objectMapper;
private final CsvMapper csvMapper;
public EmployeeReportService(RestTemplate restTemplate, ObjectMapper objectMapper,
@Qualifier("csv-mapper") CsvMapper csvMapper) {
this.restTemplate = restTemplate;
this.objectMapper = objectMapper;
this.csvMapper = csvMapper;
}
... Previous Code
}
From Consume Large JSON Response in Spring we already have
EmployeeReportService.java
public Long fetchAllEmployee(Consumer<Employee> employeeConsumer) {
String url = "http://localhost:8080/employees?stream=true";
return restTemplate.execute(url, HttpMethod.GET, null, (response) -> {
Long employeeCount = 0L;
JsonParser jsonParser = objectMapper.getFactory().createParser(response.getBody());
if(jsonParser.nextToken() == JsonToken.START_OBJECT) {
if("employees".equals(jsonParser.nextFieldName())) {
if(jsonParser.nextToken() == JsonToken.START_ARRAY) {
while(jsonParser.nextToken() != JsonToken.END_ARRAY) {
Employee employee = jsonParser.readValueAs(Employee.class);
employeeConsumer.accept(employee);
employeeCount++;
}
}
}
}
jsonParser.close();
return employeeCount;
});
}
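As a quick illustration of the callback style (a hypothetical one-off snippet, not part of the service), the method can be driven with any Consumer, for example simply logging each record as it arrives:
// Hypothetical usage: each Employee is handed to the Consumer as soon as it is parsed.
Long count = employeeReportService.fetchAllEmployee(employee ->
        System.out.println(employee.getEmpNo() + " " + employee.getFirstName()));
System.out.println("Fetched " + count + " employees");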
It hands us Employee pojos one by one from the response. Now, using the above method:
EmployeeReportService.java
@Service
public class EmployeeReportService {
... Previous Code
public File prepareEmployeeReportFile() throws IOException {
    CsvSchema schema = csvMapper.schemaFor(Employee.class);
    File tempFile = null;
    FileOutputStream fos = null;
    try {
        tempFile = File.createTempFile("employee", ".csv");
        fos = new FileOutputStream(tempFile);
        CsvGenerator csvGenerator = csvMapper.getFactory().createGenerator(fos);
        csvGenerator.setSchema(schema);
        fetchAllEmployee((employee) -> {
            try {
                csvGenerator.writeObject(employee);
            } catch(IOException ex) {
                throw new RuntimeException(ex);
            }
        });
        csvGenerator.flush(); // push any buffered rows to the file before returning
    } catch(IOException | RuntimeException ex) {
        if(tempFile != null) {
            tempFile.delete(); // clean up the partial file
        }
        throw ex; // propagate instead of silently returning a deleted file
    } finally {
        if(fos != null) {
            fos.close();
        }
    }
    return tempFile;
}
}
At a high level, this method returns a temporary csv File generated from the response.
Now explaining step by step:
- First, declare the schema for the csv file.
- Create a temporary file in the file system's temp directory and open a FileOutputStream on it.
- From the CsvMapper bean and the FileOutputStream of the previous step, a CsvGenerator is prepared to write csv into the FileOutputStream. The schema generated in the first step is also set on the CsvGenerator.
- Using fetchAllEmployee, write the Employee objects one by one into the temporary file, then flush the generator.
- The whole task is wrapped inside try...catch...finally so that any failure deletes the partial temporary file and the FileOutputStream is always closed; see the try-with-resources sketch after this list for a more compact variant.
- Finally, return the temporary file.
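For reference, the same flow can be written more compactly with try-with-resources; closing the CsvGenerator flushes it and closes the underlying stream. This is only a sketch of an equivalent alternative, not the code used in this post:
public File prepareEmployeeReportFile() throws IOException {
    CsvSchema schema = csvMapper.schemaFor(Employee.class);
    File tempFile = File.createTempFile("employee", ".csv");
    try (FileOutputStream fos = new FileOutputStream(tempFile);
         CsvGenerator csvGenerator = csvMapper.getFactory().createGenerator(fos)) {
        csvGenerator.setSchema(schema);
        fetchAllEmployee(employee -> {
            try {
                csvGenerator.writeObject(employee);
            } catch (IOException ex) {
                throw new RuntimeException(ex);
            }
        });
    } catch (IOException | RuntimeException ex) {
        tempFile.delete(); // remove the partial file before propagating the failure
        throw ex;
    }
    return tempFile;
}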
Now the /download-employee-report
route implementation
EmployeeController.java
@GetMapping("/download-employee-report")
public void downloadEmployeeReport(HttpServletResponse response) throws IOException {
response.setContentType("application/octet-stream");
response.setHeader("Content-disposition", "attachment; filename=employees.csv");
File csvTempFile = employeeReportService.prepareEmployeeReportFile();
FileInputStream fis = new FileInputStream(csvTempFile);
IOUtils.copy(fis, response.getOutputStream());
fis.close();
//TODO: Need to be more careful about deleting this file.
csvTempFile.delete();
}
Here we set the proper headers for the content type and the attachment file name, then copy the file into the HttpServletResponse's output stream. Since the copy works chunk by chunk, the whole file is never loaded into memory. Finally we close the stream and remove the temporary file.
The implementation should be more careful though, because an IOException can occur during IOUtils.copy(...). A safer version looks as follows:
FileInputStream fis = null;
try {
fis = new FileInputStream(csvTempFile);
IOUtils.copy(fis, response.getOutputStream());
} catch(IOException ex) {
// Handle the exception
} finally {
if(fis != null) {
fis.close();
}
if(csvTempFile != null) {
csvTempFile.delete();
}
}
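If Apache Commons IO is not on the classpath, Spring's own StreamUtils (org.springframework.util.StreamUtils) can do the same chunked copy. A sketch under that assumption, which also sets the Content-Length since the file size is known up front:
try (FileInputStream fis = new FileInputStream(csvTempFile)) {
    response.setContentLengthLong(csvTempFile.length()); // lets the browser show download progress
    StreamUtils.copy(fis, response.getOutputStream());
} finally {
    csvTempFile.delete();
}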
Now run both the server and consumer applications with the -Xmx32m JVM argument, visit the http://localhost:8081/employee-report route and click the Download Report button. It will initiate an employees.csv file download.
The content of employees.csv looks like the following:
-536824800000,10001,Georgi,M,504640800000,Facello
-189669600000,10002,Bezalel,F,473191200000,Simmel
-347522400000,10003,Parto,M,504640800000,Bamford
-505375200000,10004,Chirstian,M,504640800000,Koblick
-473925600000,10005,Kyoichi,M,599594400000,Maliniak
-536824800000,10006,Anneke,F,599594400000,Preusig
-410421600000,10007,Tzvetan,F,599594400000,Zielinski
-378972000000,10008,Saniya,M,756842400000,Kalloufi
... ~300K Entries
So we successfully downloaded the csv file, but without a header line the columns are hard to interpret. To make the header appear:
EmployeeReportService.java
public File prepareEmployeeReportFile() throws IOException {
CsvSchema schema = csvMapper.schemaFor(Employee.class).withHeader(); // Instruction for header to appear
... Previous Code
}
Now the output becomes
birthDate,empNo,firstName,gender,hireDate,lastName
-536824800000,10001,Georgi,M,504640800000,Facello
-189669600000,10002,Bezalel,F,473191200000,Simmel
-347522400000,10003,Parto,M,504640800000,Bamford
-505375200000,10004,Chirstian,M,504640800000,Koblick
-473925600000,10005,Kyoichi,M,599594400000,Maliniak
-536824800000,10006,Anneke,F,599594400000,Preusig
-410421600000,10007,Tzvetan,F,599594400000,Zielinski
-378972000000,10008,Saniya,M,756842400000,Kalloufi
... ~300K Entries
Now the headers appear. But there are still a few problems:
- The ordering of the columns is not as expected.
- The dates are shown in a non-human-readable format.
By default the columns are ordered alphabetically. To order them according to our needs, we add the following on top of Employee.java:
Employee.java
@Getter
@Setter
@NoArgsConstructor
@JsonPropertyOrder({"empNo", "firstName", "lastName", "gender", "birthDate", "hireDate"})
public class Employee {
... Previous Code
}
Doing so, the downloaded employees.csv looks like the following:
empNo,firstName,lastName,gender,birthDate,hireDate
10001,Georgi,Facello,M,-536824800000,504640800000
10002,Bezalel,Simmel,F,-189669600000,473191200000
10003,Parto,Bamford,M,-347522400000,504640800000
10004,Chirstian,Koblick,M,-505375200000,504640800000
10005,Kyoichi,Maliniak,M,-473925600000,599594400000
10006,Anneke,Preusig,F,-536824800000,599594400000
10007,Tzvetan,Zielinski,F,-410421600000,599594400000
10008,Saniya,Kalloufi,M,-378972000000,756842400000
... ~300K Entries
So the columns are now in the order we need.
Next, the date format. To customise it, update the CsvMapper bean creation:
WebConfig.java
@Bean(name = "csv-mapper")
public CsvMapper csvMapper() {
CsvMapper csvMapper = new CsvMapper();
csvMapper.setDateFormat(new SimpleDateFormat("dd MMMM yyyy")); // Setting the date format ('yyyy' = calendar year, not 'YYYY' week-based year).
return csvMapper;
}
Now the downloaded csv looks like the following:
empNo,firstName,lastName,gender,birthDate,hireDate
10001,Georgi,Facello,M,"28 December 1953","29 December 1986"
10002,Bezalel,Simmel,F,"29 December 1964","30 December 1985"
10003,Parto,Bamford,M,"28 December 1959","29 December 1986"
10004,Chirstian,Koblick,M,"27 December 1954","29 December 1986"
10005,Kyoichi,Maliniak,M,"26 December 1955","01 January 1989"
10006,Anneke,Preusig,F,"28 December 1953","01 January 1989"
10007,Tzvetan,Zielinski,F,"30 December 1957","01 January 1989"
... ~300K Entries
So our expected date format is in action.
Still, the header names are not human friendly; e.g. instead of empNo, "Employee No" is preferred. To add this customisation we need @JsonGetter on the getter methods. Since we are using Lombok, adding @JsonGetter looks like the following:
Employee.java
@Setter
@NoArgsConstructor
@JsonPropertyOrder({"empNo", "firstName", "lastName", "gender", "birthDate", "hireDate"})
public class Employee {
@Getter(onMethod = @__(@JsonGetter("Employee No")))
private Long empNo;
@Getter(onMethod = @__(@JsonGetter("Birth Date")))
private Date birthDate;
@Getter(onMethod = @__(@JsonGetter("First Name")))
private String firstName;
@Getter(onMethod = @__(@JsonGetter("Last Name")))
private String lastName;
@Getter(onMethod = @__(@JsonGetter("Gender")))
private Gender gender;
@Getter(onMethod = @__(@JsonGetter("Hire Date")))
private Date hireDate;
}
Now downloading the csv gives us:
"Employee No","First Name","Last Name",Gender,"Birth Date","Hire Date"
,,,,,
,,,,,
,,,,,
,,,,,
,,,,,
,,,,,
,,,,,
... ~300K Entries
The header line is fine, but what happened to our data?
Remember, the same pojo is used during deserialisation of the API response. Renaming the properties via @JsonGetter changes the logical property names, so the incoming JSON fields like empNo no longer match and the objects arrive with empty fields. We therefore also need to update the @Setter side with @JsonSetter, mapping the original JSON names back. Doing so, the final pojo will be:
Employee.java
@NoArgsConstructor
@JsonPropertyOrder({"empNo", "firstName", "lastName", "gender", "birthDate", "hireDate"})
public class Employee {
@Getter(onMethod = @__(@JsonGetter("Employee No")))
@Setter(onMethod = @__(@JsonSetter("empNo")))
private Long empNo;
@Getter(onMethod = @__(@JsonGetter("Birth Date")))
@Setter(onMethod = @__(@JsonSetter("birthDate")))
private Date birthDate;
@Getter(onMethod = @__(@JsonGetter("First Name")))
@Setter(onMethod = @__(@JsonSetter("firstName")))
private String firstName;
@Getter(onMethod = @__(@JsonGetter("Last Name")))
@Setter(onMethod = @__(@JsonSetter("lastName")))
private String lastName;
@Getter(onMethod = @__(@JsonGetter("Gender")))
@Setter(onMethod = @__(@JsonSetter("gender")))
private Gender gender;
@Getter(onMethod = @__(@JsonGetter("Hire Date")))
@Setter(onMethod = @__(@JsonSetter("hireDate")))
private Date hireDate;
}
Downloading the csv file now looks like:
"Birth Date","Employee No","First Name",Gender,"Hire Date","Last Name"
"28 December 1953",10001,Georgi,M,"29 December 1986",Facello
"29 December 1964",10002,Bezalel,F,"30 December 1985",Simmel
"28 December 1959",10003,Parto,M,"29 December 1986",Bamford
"27 December 1954",10004,Chirstian,M,"29 December 1986",Koblick
"26 December 1955",10005,Kyoichi,M,"01 January 1989",Maliniak
"28 December 1953",10006,Anneke,F,"01 January 1989",Preusig
... ~300K Entries
Now the column order is scrambled again: @JsonPropertyOrder still lists the old property names, which no longer match the renamed columns, so the order falls back to alphabetical. Updating the values of @JsonPropertyOrder fixes it:
@NoArgsConstructor
@JsonPropertyOrder({"Employee No", "First Name", "Last Name", "Gender", "Birth Date", "Hire Date"})
public class Employee {
... Previous Code
}
Doing so the generated csv will be as follows
"Employee No","First Name","Last Name",Gender,"Birth Date","Hire Date"
10001,Georgi,Facello,M,"28 December 1953","29 December 1986"
10002,Bezalel,Simmel,F,"29 December 1964","30 December 1985"
10003,Parto,Bamford,M,"28 December 1959","29 December 1986"
10004,Chirstian,Koblick,M,"27 December 1954","29 December 1986"
10005,Kyoichi,Maliniak,M,"26 December 1955","01 January 1989"
10006,Anneke,Preusig,F,"28 December 1953","01 January 1989"
... ~300K Entries
Perfect !!!
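As a side note, a similar result can be achieved without the Lombok onMethod plumbing: @JsonProperty renames the column for writing, while @JsonAlias keeps accepting the original field name when reading the API response. A sketch of that alternative (not what this post uses):
@Getter
@Setter
@NoArgsConstructor
@JsonPropertyOrder({"Employee No", "First Name", "Last Name", "Gender", "Birth Date", "Hire Date"})
public class Employee {
    @JsonProperty("Employee No")
    @JsonAlias("empNo") // still matches "empNo" during deserialisation
    private Long empNo;
    // ... the remaining fields follow the same pattern
}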
Still, a few things are not quite right: downloading the csv takes a long time, but there is no indication that the "Download Report" click registered and that csv generation is in progress. This issue will be covered in a separate post.
All the code above can be found in my GitHub repository.