Sergey Platonov

How to Build High-Performance NFS Storage with xiRAID Backend and RDMA Access

This paper outlines the process of configuring a high-performance Network File System (NFS) storage solution using the xiRAID RAID engine, Remote Direct Memory Access (RDMA), and the XFS file system. Modern data-intensive workloads (such as those in AI & machine learning, high-performance computing, scientific research, media & entertainment (e.g., 4K/8K video rendering, real-time asset streaming), virtualized environments requiring rapid storage access, etc.) demand storage subsystems capable of delivering extreme throughput with minimal latency.

By leveraging xiRAID’s optimized RAID engine alongside RDMA’s low-latency data transfers and XFS’s scalability, this approach achieves unprecedented sequential access performance, critical for large-scale datasets, while offering actionable insights for improving random read/write efficiency.

The document focuses on maximizing throughput and reducing latency, particularly for sequential access patterns common in scenarios like AI/ML model training, large-scale HPC simulations, real-time media rendering pipelines, virtualized infrastructure requiring consistent I/O performance, etc. Though it does not cover full production configurations (e.g., security settings), the procedures outlined enable organizations to deploy a high-performance NFS storage foundation that balances simplicity, scalability, and raw speed.

InfiniBand Network Setup

For RDMA access, both the NFS server and the clients must have the NVIDIA MLNX_OFED driver installed with NFS-over-RDMA support. Download the appropriate driver for your operating system from NVIDIA’s website and install it using the following command:

./mlnxofedinstall --with-nfsrdma

Apply a similar configuration on the NFS clients. Configure the InfiniBand adapters and verify the network settings. The following utilities from the perftest package can be used to check the performance of the InfiniBand network:

  • ib_send_bw for bandwidth testing.
  • ib_send_lat for latency testing.
  • ib_read_bw and ib_read_lat for RDMA read bandwidth/latency.
  • ib_write_bw and ib_write_lat for RDMA write bandwidth/latency.
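As a sketch, a point-to-point bandwidth test with these utilities looks like the following. The device name (mlx5_0) and the server address are assumptions; substitute the values for your fabric (see ibstat):

```shell
# Hedged example: point-to-point InfiniBand bandwidth test using the
# perftest utilities. The device name (mlx5_0) and server address are
# assumptions -- replace them with values from your own fabric.
SERVER=10.239.239.100   # IB/IPoIB address of the listening side

if command -v ib_send_bw >/dev/null 2>&1; then
    # On the server, start the listener first:
    #   ib_send_bw -d mlx5_0 --report_gbits
    # Then, on the client, connect to it:
    ib_send_bw -d mlx5_0 --report_gbits "$SERVER"
else
    echo "perftest utilities not installed; skipping"
fi
```

The latency tests (ib_send_lat, ib_read_lat, ib_write_lat) follow the same server/client invocation pattern.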

Disk Subsystem Performance Check

Before setting up the RAID, the file system, and the NFS server, decide on the RAID levels to use for data and for the file system journal.

Test raw drive performance to ensure no bottlenecks exist at the server or disk subsystem level. Refer to Xinnor's performance guide for detailed recommendations:

https://xinnor.io/blog/performance-guide-pt-1-performance-characteristics-and-how-it-can-be-measured/

https://xinnor.io/blog/performance-guide-pt-2-hardware-and-software-configuration/

After confirming that the drives’ performance meets the requirements, proceed to RAID setup and file system configuration.

xiRAID Setup

This document uses RAID 6 with 10 drives and a strip size of 128k for data, and RAID 0 with a strip size of 16k for the file system log. (In a production environment, RAID 1 or RAID 10 is a better choice for the log device.)

Install the latest xiRAID version using the documentation and create the RAID arrays as follows:

xicli raid create -n media6 -l 6 -d /dev/nvme16n2 /dev/nvme9n2 /dev/nvme20n2 /dev/nvme18n2 /dev/nvme8n2 /dev/nvme12n2 /dev/nvme13n2 /dev/nvme19n2 /dev/nvme23n2 /dev/nvme24n2 -ss 128
xicli raid create -n media0 -l 0 -d /dev/nvme7n1 /dev/nvme6n1

Check the RAID status after the initialization process is complete using xicli raid show.


After creating the RAID array, verify that the performance at the RAID layer meets expectations before proceeding to file system creation. For RAID performance checks and additional tuning, refer to the materials in Xinnor's performance guide.

Based on our testing experience, sequential write speed to the RAID should be approximately 90-95% of the aggregate write speed of the raw drives.
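As an illustration, a sequential-write job for the RAID block device might look like the following fio job file. This is a sketch only: run it before creating the file system, since writing to the raw device is destructive. The device name matches the array created above:

```ini
[global]
rw=write
bs=1024k
iodepth=32
direct=1
ioengine=libaio
runtime=60
time_based=1
numjobs=4
offset_increment=10%
group_reporting

[raid-seq]
filename=/dev/xi_media6
```

With several jobs writing to the same device, offset_increment keeps each job in its own region so the workload stays sequential.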

XFS Setup and Mount

Create the XFS file system with the following command:

mkfs.xfs -f -d su=128k,sw=8 -l logdev=/dev/xi_media0,size=1G -s size=4k /dev/xi_media6

Depending on the geometry of your RAID, the file system creation options may vary. Pay special attention to su=128k and sw=8: su must match the RAID strip size, and sw must match the number of data drives (here, 10 drives minus 2 parity drives for RAID 6), so that the file system geometry is aligned with the RAID configuration.
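The alignment values can be derived directly from the array geometry; a minimal sketch:

```shell
# Derive the XFS alignment values from the RAID geometry used above.
# In RAID 6, two drives' worth of capacity holds parity, so the
# stripe width (sw) equals the number of data drives.
TOTAL_DRIVES=10
PARITY_DRIVES=2      # RAID 6
STRIP_SIZE_K=128     # per-drive chunk size in KiB

SW=$((TOTAL_DRIVES - PARITY_DRIVES))
echo "su=${STRIP_SIZE_K}k sw=${SW}"   # prints: su=128k sw=8
```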


Mount the file system using:

mount -t xfs /dev/xi_media6 /mnt/data -o logdev=/dev/xi_media0,noatime,nodiratime,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k


Similar to the RAID setup, you should also test the performance of the file system by writing several large files to a directory. The performance should be approximately 70-80% of the RAID performance for writes and 90-100% for reads.
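A quick way to spot-check sequential writes is dd with a large block size. The sketch below writes to a temporary directory so it is self-contained; in practice, point DIR at the mounted volume (e.g. /mnt/data) and add oflag=direct so the page cache does not inflate the numbers:

```shell
# Minimal sequential-write spot check (self-contained sketch).
# For a real test, set DIR to a directory on the mounted XFS volume
# and add oflag=direct to the dd invocation.
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/bigfile" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
ls -lh "$DIR/bigfile"
```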

For permanent mounting of the file system, follow the recommendations in the xiRAID documentation.

NFS Server Setup

With the disk subsystem, RAID, and file system properly configured, the next step is to install and configure the NFS server. The installation is straightforward, but several optimizations are necessary to achieve better performance.

1. Install the nfs-utils package

yum install nfs-utils

2. Firewall setup. Below is an example of a simple firewall configuration for a test environment:

firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --reload

3. NFS share directory creation. The following commands create a directory for the NFS file share. These settings are suitable for testing purposes only, as they do not restrict access:

mkdir -p /mnt/data
chown nfsnobody:nfsnobody /mnt/data
chmod 777 /mnt/data

4. NFS export configuration. Export the /mnt/data directory by adding the following line to /etc/exports (create the file manually if it does not exist):

/mnt/data *(rw,sync,insecure,no_root_squash,no_subtree_check,no_wdelay)

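After editing /etc/exports, the export table can be reloaded and inspected without restarting the server; a sketch (requires nfs-utils):

```shell
# Reload and verify the NFS export table (guarded so the snippet is
# harmless on hosts without nfs-utils installed).
EXPORTFS=$(command -v exportfs || true)
if [ -n "$EXPORTFS" ]; then
    "$EXPORTFS" -ra    # re-read /etc/exports and re-export everything
    "$EXPORTFS" -v     # list active exports with their effective options
else
    echo "nfs-utils not installed; run this on the NFS server"
fi
```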

5. NFS server configuration tuning. Edit the NFS server configuration file /etc/nfs.conf to increase the number of request-handling threads and to enable RDMA connections. The configuration should resemble the following:

[exportd]
# debug="all|auth|call|general|parse"
# manage-gids=n
# state-directory-path=/var/lib/nfs
threads=64

[nfsd]
# debug=0
threads=64
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=n
# tcp=y
vers3=y
vers4=y
vers4.0=y
vers4.1=y
vers4.2=y
rdma=y
rdma-port=20049


6. Enable, restart, and check the NFS server
After applying all settings, enable and restart the NFS server:

systemctl enable nfs-server
systemctl restart nfs-server

Check the status of the NFS server to ensure there are no errors, particularly those related to the RDMA module:

systemctl status nfs-server
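When the server is up, the kernel's nfsd port list should include the RDMA listener; a quick check (the expected entry assumes the rdma-port=20049 setting above):

```shell
# Confirm the kernel nfsd registered the RDMA listener.
PORTLIST=/proc/fs/nfsd/portlist
if [ -r "$PORTLIST" ]; then
    grep rdma "$PORTLIST"    # expect a line like: rdma 20049
else
    echo "nfsd is not running on this host"
fi
```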

NFS Client Setup

You can now proceed to configure the NFS client.

7. Install the nfs-utils package

yum install nfs-utils

8. NFS client kernel module options.

Add the following line to /etc/modprobe.d/nfsclient.conf:

options nfs max_session_slots=180

Increasing the max_session_slots value allows more simultaneous in-flight requests, improving performance for workloads with many small or parallel I/O operations.
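After the reboot in the next step, you can confirm that the module picked up the option by reading the parameter back from sysfs:

```shell
# Check the live value of the nfs module parameter (after a reboot or
# module reload). The value 180 should match the modprobe option above.
PARAM=/sys/module/nfs/parameters/max_session_slots
if [ -r "$PARAM" ]; then
    cat "$PARAM"
else
    echo "nfs module not loaded"
fi
```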

9. Reboot the system

reboot

10. Mount the NFS share

Create a directory for the NFS file share (in this example, /mnt/nfs) and mount it using the following command:

mount -o rdma,port=20049,nconnect=16,vers=4.2 10.239.239.100:/mnt/data /mnt/nfs

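To confirm that the share is actually using RDMA, inspect the effective mount options; a sketch (the mount point matches the example above):

```shell
# Verify the NFS mount and its transport.
MOUNTPOINT=/mnt/nfs
if command -v findmnt >/dev/null 2>&1 && findmnt "$MOUNTPOINT" >/dev/null 2>&1; then
    findmnt -o TARGET,SOURCE,OPTIONS "$MOUNTPOINT"
    nfsstat -m     # look for proto=rdma,port=20049 in the options line
else
    echo "$MOUNTPOINT is not mounted on this host"
fi
```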

After completing the setup, you can run performance tests from the NFS client. If everything is configured correctly, the performance should meet expectations. Use fio with the following configuration for testing:

[global]
rw=read
#rw=write
bs=1024K
iodepth=32
direct=1
ioengine=libaio
runtime=4000
size=32G
numjobs=4
group_reporting
exitall

[job3]
directory=/mnt/nfs

Depending on the configuration, you can expect NFS performance to be 50-70% of the XFS file system's performance.

Conclusion

By following the outlined steps, users can set up a high-performance NFS storage system leveraging xiRAID and RDMA. The configuration ensures optimal performance for sequential data access patterns and provides flexibility for tuning based on specific workload requirements. For production environments, additional configurations such as security settings should be applied as needed.

