Installing RabbitMQ

Modern Proof of Concepts (PoCs)—especially those involving microservices or Kubernetes—rely heavily on a messaging backbone. While cloud-managed queues are popular, local PoC environments often require a self-hosted RabbitMQ instance to keep costs low and latency minimal.

A common architectural challenge in these environments is the “Boot Race.” If you run RabbitMQ on a separate VM, you must ensure that the VM is fully operational before your application VMs start. To eliminate this complexity, a practical decision is to install RabbitMQ directly on the virtualization host, ensuring it is available before any guest VMs launch.

This guide details how to set up a dedicated, high-performance RabbitMQ instance on an Ubuntu host with 4 vCPU and 8 GB of RAM, including storage optimization and strict boot ordering.


1. Sizing Strategy: The “Sweet Spot”

For a robust PoC that handles reasonable throughput, we selected a 4 vCPU / 8 GB RAM configuration. Here is the technical rationale behind this choice:

vCPU: 4 Cores

RabbitMQ runs on the Erlang runtime, which is built for high concurrency. Erlang automatically detects the four available cores and spawns four scheduler threads (one per core), so parallel message processing works out of the box with no manual tuning.
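Once the broker is installed (Section 3), you can confirm the scheduler count directly; a quick sanity check using rabbitmqctl’s built-in Erlang evaluator:

# Ask the Erlang VM how many scheduler threads are online
sudo rabbitmqctl eval 'erlang:system_info(schedulers_online).'

# Expected output on this 4 vCPU host: 4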

Memory: 8 GB (The “Leave-it-Alone” Strategy)

We rely on RabbitMQ’s default Memory High Watermark of 0.4 (40%).

  • The Math: 40% of 8 GB = 3.2 GB available to RabbitMQ before its memory alarm triggers.
  • The Benefit: This leaves ~4.8 GB of “unused” RAM for the rest of the host. In Linux, free RAM is effectively used as Page Cache.
  • Why it matters: RabbitMQ relies on the OS to cache its on-disk message store. By leaving ample RAM to the OS, we ensure that disk writes (persistence) are absorbed by the cache, making the system significantly faster and more stable under load. The config sketch below shows the knob explicitly.
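For reference, this is a minimal /etc/rabbitmq/rabbitmq.conf sketch; since 0.4 is already the default, creating this file is optional:

# /etc/rabbitmq/rabbitmq.conf
# Memory high watermark as a fraction of total RAM (0.4 is the default).
# Crossing it triggers the memory alarm and blocks publishing connections.
vm_memory_high_watermark.relative = 0.4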

2. Storage Optimization: Dedicated SSD Partition

To prevent I/O contention, we reserve a dedicated SSD partition exclusively for RabbitMQ data (a quick device inventory follows the list). This ensures the data is:

  1. Isolated from the host root filesystem (a runaway queue cannot fill the root disk and destabilize the OS).
  2. Protected from OS-level logging pressure.
  3. Optimized for frequent random read/write operations.
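Before touching any partition, it helps to map out the block devices on the host. A quick inventory (the exact devices and names will differ per machine):

# List block devices with size, type, filesystem, and current mount point
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT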

Step 1: Identify the Partition UUID

Avoid using unstable device names (like /dev/sdc9). Always use the PARTUUID, which persists across reboots.

# Find the PARTUUID for your partition (e.g., /dev/mq)
udevadm info --query=all --name=/dev/mq | grep "ID_PART_ENTRY_UUID"

# Example Output:
# E: ID_PART_ENTRY_UUID=57f2b8d3-0624-fb4e-a761-d42e20e2f138
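If udevadm feels verbose, blkid reports the same identifier in a single line. A hedged alternative (blkid ships with util-linux on stock Ubuntu; on a GPT disk it prints the PARTUUID even before a filesystem exists):

# Same identifier via blkid
sudo blkid /dev/mq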

Step 2: Create the Filesystem (XFS)

For RabbitMQ data stores, XFS is generally preferred over ext4 because it handles parallel I/O operations (simultaneous reading/writing to the message store) more efficiently.

# Install XFS tools if missing
sudo apt-get install xfsprogs

# Format the SSD partition
sudo mkfs.xfs -f /dev/mq

Step 3: Mount and Persist

We mount this partition to /var/lib/rabbitmq using performance flags (noatime) to reduce write overhead.

  1. Create the mount point:
    • sudo mkdir -p /var/lib/rabbitmq
  2. Update /etc/fstab: Add the following line to ensure automatic mounting on boot:
    • PARTUUID=57f2b8d3-0624-fb4e-a761-d42e20e2f138 /var/lib/rabbitmq xfs defaults,noatime 0 0
  3. Verify:
    • sudo mount -a
    • df -h /var/lib/rabbitmq
    • Output should confirm the partition (~100G in our case) is mounted on /var/lib/rabbitmq. A stricter fstab check follows below.
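Because a malformed fstab line can drop the host into emergency mode on the next reboot, it is worth validating the file without rebooting. A sketch using findmnt (requires util-linux 2.30 or newer, which any recent Ubuntu ships):

# Dry-run validation of /etc/fstab
sudo findmnt --verify

# Confirm the live mount and its options (noatime should be listed)
findmnt /var/lib/rabbitmq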

3. Installation & Access Configuration

To minimize maintenance, we use the tested RabbitMQ version available in the default Ubuntu repositories.

sudo apt update
sudo apt install rabbitmq-server

User Configuration

By default, the guest user is locked to localhost. For remote access, we must create an admin user:

# 1. Create a new user
sudo rabbitmqctl add_user admin <strong_password>

# 2. Tag this user as an administrator
sudo rabbitmqctl set_user_tags admin administrator

# 3. Grant full permissions
sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"
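Since the broker now accepts remote logins through the admin account, an optional hardening step is to remove the default guest account entirely rather than rely on its localhost lock:

# Optional hardening: delete the default guest account
sudo rabbitmqctl delete_user guest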

Enable Management UI

For PoCs, visibility is key. Enable the management plugin to see queues and message rates via the web dashboard:

sudo rabbitmq-plugins enable rabbitmq_management
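The dashboard then listens on port 15672. Before opening a browser, you can smoke-test it from the shell through the management HTTP API (reusing the <strong_password> placeholder from above):

# A JSON overview response confirms the management UI is up
curl -s -u admin:<strong_password> http://localhost:15672/api/overview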

Critical Check: File Descriptors

RabbitMQ requires a “file descriptor” (file handle) for every single incoming connection and queue index. This is the single most common failure point for new RabbitMQ setups.

  • The Trap: Standard Linux distributions often default the limit (nofile) to 1024. If you hit this limit, RabbitMQ will stop accepting new connections.
  • The Custom Image Advantage: In our specific environment, we use a custom cloud image where nproc and nofile are pre-tuned to 65536. This eliminates the need for manual OS configuration in our case.

For Standard Installations: If you are not using a pre-tuned image, you must verify and fix this limit manually.

1. Check the Limit:

sudo rabbitmqctl status | grep "file_descriptors" -A 4

If you see total_limit near 1024, you must apply the fix below.
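You can also read the limit straight from the running Erlang process, which removes any ambiguity about which limit systemd actually applied. A sketch assuming a single RabbitMQ node (one beam.smp process) on the host:

# Inspect the open-files limit of the running Erlang VM
cat /proc/$(pgrep -f beam.smp | head -n1)/limits | grep "open files"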

2. The Fix (Systemd Override): Do not edit complex config files. Use a systemd override:

sudo systemctl edit rabbitmq-server

Paste this configuration:

[Service]
LimitNOFILE=65536

Save, exit, and restart: sudo systemctl restart rabbitmq-server
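To confirm systemd picked up the override after the restart:

# Should print LimitNOFILE=65536
systemctl show rabbitmq-server --property=LimitNOFILE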


4. Verification

Since the firewall (ufw) is disabled in this trusted environment, we verify the listeners directly.

Command:

sudo ss -tnlp | grep -e "beam\|epmd"

Success Criteria:

Port  | Protocol | Usage                             | Status Required
------|----------|-----------------------------------|----------------------------------
5672  | AMQP     | Main engine. Where apps connect.  | MUST be 0.0.0.0:5672 or *:5672
15672 | HTTP     | Web UI. Dashboard access.         | MUST be 0.0.0.0:15672 or *:15672
25672 | Erlang   | Internal. CLI tools & clustering. | Required for rabbitmqctl
4369  | EPMD     | Erlang Port Mapper Daemon.        | Essential system service
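For a final end-to-end check from a guest VM or workstation, a plain TCP probe against the AMQP port proves network reachability (assumes the netcat package is installed; <host-ip> is a placeholder for the host’s address):

# Verify the AMQP listener is reachable across the network
nc -zv <host-ip> 5672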



5. Enforcing Startup Order (Crucial)

Since RabbitMQ runs on the host, we must strictly enforce that RabbitMQ starts before Libvirt (the virtualization daemon). If Libvirt starts first, guest VMs might launch and fail to connect to the message broker.

The Goal: Network Online → RabbitMQ & NFS → Libvirt.

Step 1: Ensure RabbitMQ Waits for Network

Services sometimes start before the IP is assigned. We force RabbitMQ to wait until the network is fully online.

sudo systemctl edit rabbitmq-server

Paste into editor:

[Unit]
# Ensure the network stack is fully operational (IP assigned)
After=network-online.target
Wants=network-online.target

Step 2: Ensure Libvirt Waits for RabbitMQ

Now, explicitly tell Libvirt to wait until RabbitMQ (and NFS, if you use it) are active.

sudo systemctl edit libvirtd

Paste into editor:

[Unit]
# Defines the start order: Libvirt starts strictly AFTER these two
After=nfs-server.service rabbitmq-server.service

# (Optional but recommended)
# Ensures that if you start Libvirt manually, it tries to start these two as well
Wants=nfs-server.service rabbitmq-server.service

Step 3: Apply and Verify

Reload systemd to register the new dependency chain:

sudo systemctl daemon-reload

Verify the chain:

systemctl show libvirtd --property=After

You should see rabbitmq-server.service listed in the output, confirming that your host will now bring up the messaging backbone before attempting to launch any guest VMs.
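For a visual confirmation of the actual boot-time ordering, systemd-analyze can render the dependency chain. Note that it reports the timings of the last boot, so run it after a reboot:

# Shows which units gated libvirtd at the last boot, with timings
systemd-analyze critical-chain libvirtd.service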