HPFS vs CephFS performance

Recently, I ran a performance test of HPFS and, in the same environment, deployed CephFS for comparison. The test case was: open a file, write 4096 bytes, close it, then open it again, read 4096 bytes, and close it. This sequence was repeated by multiple concurrent threads until a total of 100 million files of 4096 bytes (4 KB) each had been created, and IOPS was measured for the workload.
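For reference, a minimal sketch of that per-file benchmark loop (open, write 4 KB, close, reopen, read 4 KB, close, repeated across threads) might look like the following. The mount path, thread count, and per-thread file count here are illustrative assumptions, not the actual test harness; in the real test the total file count was 100 million.

```c
/* Sketch of the benchmark loop described above. Each thread repeatedly
 * creates a file, writes 4096 bytes, closes it, reopens it, reads the
 * 4096 bytes back, and closes it. Mount path, thread count, and file
 * count are illustrative assumptions. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 4096
#define NUM_THREADS 16            /* assumed concurrency */
#define FILES_PER_THREAD 100000   /* scale up toward 100 million total */

static const char *mount_point = "/mnt/hpfs";   /* assumed mount path */

static void *worker(void *arg)
{
    long tid = (long)arg;
    char path[256];
    char wbuf[BLOCK_SIZE], rbuf[BLOCK_SIZE];
    memset(wbuf, 'x', sizeof(wbuf));

    for (long i = 0; i < FILES_PER_THREAD; i++) {
        snprintf(path, sizeof(path), "%s/t%ld_f%ld", mount_point, tid, i);

        /* open, write 4096 bytes, close */
        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open for write"); exit(1); }
        if (write(fd, wbuf, BLOCK_SIZE) != BLOCK_SIZE) { perror("write"); exit(1); }
        close(fd);

        /* reopen, read 4096 bytes, close */
        fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open for read"); exit(1); }
        if (read(fd, rbuf, BLOCK_SIZE) != BLOCK_SIZE) { perror("read"); exit(1); }
        close(fd);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    /* each file costs 2 opens + 1 write + 1 read + 2 closes; report ops/sec */
    double total_ops = (double)NUM_THREADS * FILES_PER_THREAD * 6.0;
    printf("elapsed %.2fs, ~%.0f ops/sec\n", secs, total_ops / secs);
    return 0;
}
```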

From this test data, we can see that HPFS performance increases linearly as clients are added. The reason is that each individual client is bottlenecked by FUSE rather than by the servers, so adding clients keeps adding throughput. If HPFS's API interface is used instead of going through FUSE, this per-client limitation is removed entirely, and the performance of all the NVMe disks can be fully utilized.

CephFS, on the other hand, is limited by its MDS bottleneck: its IOPS does not increase even as more clients are added. HPFS, by contrast, can continue to scale the metadata file system's IOPS capacity by increasing the number of HPFS-SRV instances.
[Figure: IOPS of HPFS and CephFS as the number of clients increases]
GitHub URL:
https://github.com/ptozys2/hpfs

For an introduction to HPFS, please refer to this article:
https://dev.to/sy_z_5d0937c795107dd92526/multi-meta-server-ceph-4pe1
