Installing NFS Server

Modern application Proof of Concepts (PoCs)—especially those involving microservices or Kubernetes—often require a shared storage layer. While complex distributed storage solutions (such as Ceph or GlusterFS) are available, they may be overkill for a lab. Sometimes, the fastest and most reliable path to a working environment is a standard NFS server.

This guide covers setting up a high-performance NFS server on Ubuntu 22.04, with access strictly restricted to a specific internal subnet (10.0.1.0/24) and tuned for 10GbE networking.

Why Host-Based NFS?

We could install the NFS server on a dedicated Virtual Machine (VM). However, this introduces a “chicken and egg” problem: ensuring the NFS VM starts and becomes available before other VMs that depend on it try to mount the storage.

To eliminate this race condition and dependency complexity, we install the NFS server directly on the host.


Part 1: Preparing a Dedicated Storage Partition

We reserve a dedicated disk partition exclusively for shared application data. This ensures the data is:

  • Isolated from the host root filesystem.
  • Protected from OS-level disk pressure.
  • Optimized for frequent file reads and writes.

We will mount this partition persistently using its stable partition identifier (PARTUUID) to ensure consistent behavior across reboots.

1. Identify the Partition UUID

Avoid using device names like /dev/sdc9 in configuration files, as they can change. Use the PARTUUID instead.

udevadm info --query=all --name=/dev/sdc9 | grep "ID_PART_ENTRY_UUID"
E: ID_PART_ENTRY_UUID=099bb01d-b1a3-364f-aef8-c1c72ef246f0
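
If you prefer, blkid or lsblk reports the same identifier. The device /dev/sdc9 is just the partition used in this example, so substitute your own:

sudo blkid /dev/sdc9
lsblk -o NAME,SIZE,FSTYPE,PARTUUID /dev/sdc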

2. Create the Filesystem

Format the partition with ext4 and a label.

sudo mkfs.ext4 -L nfs_data /dev/disk/by-partuuid/099bb01d-b1a3-364f-aef8-c1c72ef246f0

3. Create a Mount Point

sudo mkdir -p /srv/nfs

4. Update /etc/fstab

Add the following line to /etc/fstab to ensure the drive mounts automatically with the performance flags (noatime, nodiratime):

PARTUUID=099bb01d-b1a3-364f-aef8-c1c72ef246f0  /srv/nfs  ext4  defaults,noatime,nodiratime  0  2
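
Optionally, findmnt can lint the updated fstab before you rely on it (its --verify mode flags obvious mistakes such as a mistyped PARTUUID or filesystem type):

sudo findmnt --verify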

5. Mount and Verify

Mount the drive and perform a sanity check.

sudo mount -a
df -h /srv/nfs

Output should show:

Filesystem      Size   Used  Avail Use% Mounted on
/dev/sdc9       196G   28K   186G   1%  /srv/nfs

Quick Write Test:

sudo touch /srv/nfs/.test && sudo rm /srv/nfs/.test
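
If you also want a rough local throughput baseline before putting NFS in front of the disk, a short dd run works; the 1 GiB test size below is only an example:

# Write 1 GiB directly to the partition, bypassing the page cache
sudo dd if=/dev/zero of=/srv/nfs/testfile bs=1M count=1024 oflag=direct status=progress
sudo rm /srv/nfs/testfile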

Part 2: Installing and Configuring NFS

Now that the storage layer is ready, we will install the kernel-level NFS server.

1. Install NFS Server

sudo apt update
sudo apt install nfs-kernel-server -y

2. Configure the Shared Directory

We will create a specific subdirectory for the share. For a PoC environment, we configure permissive ownership to avoid “Permission Denied” errors when multiple client VMs with different user IDs write data.

# Create the directory
sudo mkdir -p /srv/nfs/share

# Set specific ownership for NFS anonymity
sudo chown nobody:nogroup /srv/nfs/share

# Set permissions to read/write/execute for all
sudo chmod 777 /srv/nfs/share
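
A quick listing confirms the ownership and mode took effect:

ls -ld /srv/nfs/share
# Expected: drwxrwxrwx ... nobody nogroup ... /srv/nfs/share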

3. Configure Access Control

We will restrict access strictly to the 10.0.1.0/24 subnet. Open the exports file:

sudo nano /etc/exports

Add the following line to the bottom:

/srv/nfs/share 10.0.1.0/24(rw,sync,no_subtree_check,no_root_squash)

Explanation of flags:

  • rw: Read/Write access.
  • sync: Confirms writes only after they are committed to disk (critical for data integrity).
  • no_subtree_check: Disables subtree checking, which avoids subtle problems when files are renamed and is the recommended setting for whole-directory exports.
  • no_root_squash: Allows the root user on the client to operate as root on the server (convenient in a lab to avoid permission hassles).
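
The export is applied when the service restarts in Part 4, but you can also push the new table immediately and inspect the result with exportfs:

# Re-export everything in /etc/exports and show the active table
sudo exportfs -ra
sudo exportfs -v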

Part 3: Performance Tuning (Critical for 10GbE)

Since a 10GbE network can deliver data faster than the disk subsystem can absorb it, Linux's default settings let incoming writes pile up in RAM (a form of "buffer bloat" in the page cache). When too much dirty data accumulates, the server stalls while the disk catches up. We need to tune the kernel so writes are flushed earlier and more smoothly.

Edit sysctl.conf

Open /etc/sysctl.conf:

sudo nano /etc/sysctl.conf

Add these lines to the bottom of the file:

# --- Network Tuning for 10GbE (High Bandwidth) ---
# Allow larger window sizes for high-speed transfers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# Increase the backlog for incoming packets (prevents drops on burst)
net.core.netdev_max_backlog = 30000
net.ipv4.tcp_max_syn_backlog = 4096

# --- Disk Write Cache Tuning ---
# Start writing to disk early (when 5% of RAM is dirty) to prevent spikes
vm.dirty_background_ratio = 5
# Throttle processes only when 20% of RAM is dirty (avoids system lockups)
vm.dirty_ratio = 20

Apply the changes:

sudo sysctl -p
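
You can spot-check that the new values are active:

sysctl net.core.rmem_max net.core.wmem_max vm.dirty_background_ratio vm.dirty_ratio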

Part 4: Start and Verify

1. Start the Service

Enable the NFS service to start on boot and apply the new export configuration.

sudo systemctl enable nfs-kernel-server
sudo systemctl restart nfs-kernel-server

2. Verify the Setup

Ensure the service is listening and the export is visible to the network.

Check the Port Listener:

ss -tlnp | grep 2049

Success Indicator: You should see *:2049 or 0.0.0.0:2049.

Check the Export Visibility:

showmount -e localhost

Output:

Export list for localhost:
/srv/nfs/share 10.0.1.0/24
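
As a final end-to-end check, mount the share from any VM inside 10.0.1.0/24. The server address 10.0.1.10 and the client mount point /mnt/share below are placeholders, so adjust them to your environment:

# On a client VM
sudo apt install nfs-common -y
sudo mkdir -p /mnt/share
sudo mount -t nfs 10.0.1.10:/srv/nfs/share /mnt/share
touch /mnt/share/.test && rm /mnt/share/.test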

Part 5: Enforcing Startup Order (Critical for Host-Based Setups)

Since the NFS server runs on the same host as the hypervisor (libvirtd), we must ensure a strict boot order:

1. Network Online
2. NFS Server
3. Libvirt

If NFS is not ready when libvirtd starts, VMs set to auto-start may fail to mount their storage. We use systemd drop-in overrides to enforce this dependency chain.

1. Ensure NFS Waits for Networking

By default, a service ordered after the basic network stack may start before an IP address is assigned. We need NFS to wait until the network is fully online (network-online.target).

Create an override for the NFS server:

sudo systemctl edit nfs-kernel-server

Paste the following into the editor that opens:

[Unit]
# Ensure the network stack is fully operational (IP assigned)
After=network-online.target
Wants=network-online.target

Save and exit.
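
systemctl edit stores the snippet as a drop-in file, which you can inspect afterwards:

cat /etc/systemd/system/nfs-kernel-server.service.d/override.conf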

2. Ensure Libvirt Waits for NFS

Now, we explicitly tell the Libvirt daemon to wait until the NFS server is active before launching.

Create an override for Libvirt:

sudo systemctl edit libvirtd

Paste the following:

[Unit]
# Start Libvirt only after NFS is up to prevent VM mount failures
After=nfs-kernel-server.service
Wants=nfs-kernel-server.service

Save and exit.

3. Apply the Changes

Reload the systemd daemon to recognize these new dependencies.

sudo systemctl daemon-reload

Verification

You can verify the dependency chain has been applied by checking the service properties:

# Check if Libvirt now waits for NFS
systemctl show libvirtd --property=After

You should see nfs-kernel-server.service listed in the output.
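
The same style of check confirms the NFS unit now waits for the network:

# network-online.target should appear in the ordering list
systemctl show nfs-kernel-server --property=After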


The NFS server is now live, tuned for performance, and configured to start reliably before your VMs boot.