Jodhi

Big chunk vs smaller chunks on transferring data

While working on an online BLAKE3 hasher, we wanted to support fairly large files. Since the default gRPC message size limit is around 4 MB (in the Go library, at least), we ran into the question: to chunk or not to chunk?

So we ran a simple test to see the impact of transferring one very large message in a single call.

First things first, we need to increase the message size limit:

const (
    maxMsgSize = 900000000 // allow messages up to ~900 MB
)
...
// raise both the receive and send limits on the client connection
conn, err := grpc.Dial(
    "127.0.0.1:7777",
    grpc.WithInsecure(),
    grpc.WithDefaultCallOptions(
        grpc.MaxCallRecvMsgSize(maxMsgSize),
        grpc.MaxCallSendMsgSize(maxMsgSize),
    ),
)

...
// build one 200,000,000-byte request and send it in a single unary call
GRPCArgs := &pb.Request{
    Input: []byte(strings.Repeat("A", 200000000)),
}

r, err := pb.NewCallClient(conn).Handler(ctx, GRPCArgs)
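
Note that this snippet only raises the limits on the client side; a gRPC-Go server also caps incoming messages at 4 MB by default, so the server presumably needs the same treatment. A minimal sketch, assuming the same maxMsgSize constant and that the generated package exposes RegisterCallServer for this service (not shown in the original post):

// server side (sketch): raise the receive/send limits as well
lis, err := net.Listen("tcp", ":7777")
if err != nil {
    log.Fatal(err)
}
s := grpc.NewServer(
    grpc.MaxRecvMsgSize(maxMsgSize), // accept requests up to ~900 MB
    grpc.MaxSendMsgSize(maxMsgSize), // allow equally large responses
)
pb.RegisterCallServer(s, &server{})
log.Fatal(s.Serve(lis))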

Basically, the code sends 200 million bytes and receives the base64-encoded result, which is 266,666,668 bytes.
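
The response size follows directly from how base64 expands data: every 3 input bytes become 4 output characters, so a 200,000,000-byte payload encodes to ceil(200,000,000 / 3) * 4 = 266,666,668 bytes. You can verify this in Go:

package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    // EncodedLen returns the padded base64 length for n input bytes: (n+2)/3*4
    fmt.Println(base64.StdEncoding.EncodedLen(200000000)) // 266666668
}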

Result: memory usage graph during the transfer.

You can see the first rise in memory (to around 320 MB) and the fall that follows, caused by the transfer itself; the last drop on the graph is when I shut down the RPC server and client (GC kicking in, maybe?).

So, to chunk or not to chunk? Subjectively, if you have the time and are short on resources, go for chunking.
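
For reference, chunking in gRPC usually means a client-streaming RPC instead of one huge unary request. Here is a minimal sketch, assuming a hypothetical HandlerStream client-streaming method, a Chunk message, and a Response type that are not part of the original service definition:

// sketch: stream the payload in 1 MB chunks over a hypothetical
// client-streaming RPC so each message stays well under the 4 MB default
func sendChunked(ctx context.Context, client pb.CallClient, data []byte) (*pb.Response, error) {
    const chunkSize = 1 << 20 // 1 MB per message

    stream, err := client.HandlerStream(ctx) // hypothetical streaming method
    if err != nil {
        return nil, err
    }
    for off := 0; off < len(data); off += chunkSize {
        end := off + chunkSize
        if end > len(data) {
            end = len(data)
        }
        if err := stream.Send(&pb.Chunk{Input: data[off:end]}); err != nil {
            return nil, err
        }
    }
    return stream.CloseAndRecv() // server replies once after the last chunk
}

With chunks this small, the default message size limits never need to be raised, at the cost of more round trips and a bit more code.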

Also check out the CSV linter with large-file support.
