Increasing DRBD Performance by Placing Metadata on NVDIMM Memory

When DRBD®’s performance is less than what you want, especially with small block I/O requests, DRBD’s activity log is often the bottleneck.

You can speed up the activity log and increase DRBD performance by placing the activity log on non-volatile DIMM (NVDIMM) memory. Before sourcing and buying NVDIMMs, though, you should know how much performance improvement you can expect from these exotic and costly components.

When you place DRBD’s metadata on an NVDIMM, DRBD uses the bitmap in place and a different on-storage data structure for the activity log, one that is optimized for persistent memory (PMEM).

Fortunately, there is a Linux kernel feature that you can enable that treats a part of regular DRAM as PMEM. By using this kernel feature, you can get an idea of the performance improvement you might see before committing to an NVDIMM solution.

📝 NOTE: Before moving on to the next steps in this article, if you have not already done so, collect some performance benchmarks as a baseline or “before” picture of your cluster. This knowledge base article has some guidance on how to do that by using the Fio utility, if this is not something that you have done before. When using Fio, a typical I/O engine (plugin) to get DRBD performance benchmarks is the libaio engine.

IMPORTANT: Depending on the benchmark test that you decide to run, testing might be destructive to existing data on a storage backing device. Do not run benchmark tests in a production cluster. If you take this risk, know where your backups live.
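
For example, here is a minimal sketch of a small-block random write benchmark that uses Fio with the libaio engine. The device path /dev/drbd10 is an assumption that matches the device minor 10 configuration shown later in this article; adjust it to your environment. Because this test writes directly to the DRBD device, it will destroy any data on that device.

# Destructive 4K random write test against a DRBD device (assumed: /dev/drbd10)
fio --name=drbd-baseline \
    --filename=/dev/drbd10 \
    --ioengine=libaio \
    --direct=1 \
    --rw=randwrite \
    --bs=4k \
    --iodepth=16 \
    --runtime=60 \
    --time_based \
    --group_reporting

Save the resulting IOPS and latency numbers so that you can compare them against the same test run after enabling PMEM-backed metadata.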

Configuring Linux to Emulate PMEM

To configure your Linux kernel to emulate PMEM, you need to add the memmap= kernel parameter when you load your kernel. This parameter should take the form memmap=<size>G!<offset>G, where size is the size of the PMEM namespace you want to create and offset is the starting point of the PMEM namespace. For example, the kernel parameter memmap=1G!4G would instruct the kernel to create a 1G PMEM namespace in the 4G-5G memory address range.
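
For example, on a system that boots through GRUB 2, you could add the parameter to the kernel command line and regenerate the GRUB configuration. This is only a sketch: the configuration file location and the command that regenerates it (grub-mkconfig, grub2-mkconfig, or update-grub) vary by distribution.

# Append the parameter to the GRUB_CMDLINE_LINUX line in /etc/default/grub,
# for example: GRUB_CMDLINE_LINUX="... memmap=1G!4G"
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot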

This knowledge base article uses the memmap=nn[KMG]!ss[KMG] form of the parameter to mark a region of memory as NVDIMM memory. You can find the full documentation of the memmap parameter in the Linux kernel’s kernel-parameters documentation.

Based on experience, the ss value (the offset specified after the exclamation point in the kernel parameter) needs to fall within the range of your physically installed DRAM.
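
To choose a workable offset, you can check how much DRAM is installed, for example:

grep MemTotal /proc/meminfo

On a machine with 32G of RAM, for instance, an offset such as the 4G in memmap=1G!4G falls safely within installed memory, while an offset beyond 32G would not.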

Verifying That the Kernel Emulates PMEM

After adding the memmap= kernel parameter and rebooting, you can verify that PMEM emulation is enabled by entering the command:

grep pmem /proc/partitions

Output from the command should show an entry for each emulated PMEM device. You can also enter ls /dev/pmem* and verify that a device node exists for each PMEM namespace that you specified with a memmap= kernel parameter.
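
With a single 1G namespace configured, for example, the output will look similar to this (major and minor numbers vary by system):

grep pmem /proc/partitions
 259        0    1048576 pmem0

ls /dev/pmem*
/dev/pmem0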

Placing DRBD Metadata on the Emulated PMEM

To place DRBD metadata for a resource named r0 on emulated PMEM, edit the resource’s DRBD configuration file and use the meta-disk option in the volume stanza to specify the emulated PMEM device (/dev/pmem0 in the example below).

resource r0 {
    ...
    volume 0 {
        device minor 10;
        disk ...;
        meta-disk /dev/pmem0;
    }
}

📝 NOTE: Depending on which meta-disk device you have configured for the DRBD resource, DRBD will use a different internal I/O method. The default DRBD method is called blk-bio.[1] If you have configured a meta-disk option in your DRBD resource configuration to use a PMEM device, the DRBD internal I/O method is called dax-pmem.[2]

After editing the resource configuration file on all of your DRBD cluster nodes, enter the following command to create the metadata on the emulated PMEM device:

drbdadm create-md r0

Verifying That DRBD Resource Metadata Uses PMEM

You can verify that the emulated PMEM works by entering the following command:

drbdadm dump-md r0

If output from the command shows a full metadata dump, then the emulated PMEM works. If it does not work, command output will show a one-line error message complaining about a missing metadata signature.

After preparing all of your DRBD cluster test nodes in this way, you can activate the DRBD resource.

drbdadm up r0
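
You can then check the state of the resource, for example by entering:

drbdadm status r0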

Next, you can further verify that your DRBD resource metadata is using emulated PMEM by entering the following command:

dmesg | grep meta-data

Output from the command should show that the resource is using PMEM.

drbd r0/0 drbd10: meta-data IO uses: dax-pmem

Running Performance Benchmark Tests

After setting all this up, you are ready to move on to running DRBD performance benchmark tests again. By doing this, you can get your “after” picture to compare to your “before” picture. This can help you set realistic expectations for the impact that adding NVDIMM memory will have on your setup, before you make a hardware purchase.
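
For example, you can rerun the hypothetical Fio test sketched earlier, changing only the job name, so that the before and after numbers are directly comparable. To write to /dev/drbd10, the resource must be promoted to primary on the test node first (drbdadm primary r0). As before, this test destroys data on the device.

# Same destructive 4K random write test, run after enabling PMEM-backed metadata
fio --name=drbd-pmem \
    --filename=/dev/drbd10 \
    --ioengine=libaio \
    --direct=1 \
    --rw=randwrite \
    --bs=4k \
    --iodepth=16 \
    --runtime=60 \
    --time_based \
    --group_reporting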


Created by PR, 2024/01/24

Reviewed and edited by MAT, 2024/01/26


  1. The DRBD internal I/O method name blk-bio is composed of blk, short for block or block device, and bio, which comes from the kernel’s internals, where struct bio is the “basic container for block I/O”. That means DRBD creates an I/O request and calls a kernel function to submit that I/O request to change the metadata. The smallest amount of data one can change through this interface is 512 bytes. For compatibility with 4K disk drives, DRBD issues I/O requests containing 4096 bytes when updating the metadata.

  2. The dax-pmem method name is composed of dax, which stands for direct access, and pmem, which stands for persistent memory. In that mode of I/O operation, DRBD updates the metadata by using CPU instructions. The smallest amount of data one can change is 64 bytes. The dax-pmem method is much faster due to the smaller amount of written data. Because it provides smaller update granularity, a different on-stable-storage data structure is helpful.