Preventing LINSTOR Resource Placement on a Node

It might be the case that you have reason to exclude a LINSTOR® node from new resource placement. This scenario might come up, for example, during system maintenance, troubleshooting, or a cluster rescue or recovery situation. This article discusses a few different ways that you can do this.

Preventing Resource Placement on a Node By Using Auto-Placement

You can exclude a node from LINSTOR’s auto-placement strategy by entering the following command:

linstor node set-property <node-name> AutoplaceTarget false

After you set this property on a node, LINSTOR will not place new resources on that node.

An advantage of this approach is that the LINSTOR satellite service continues to run on the node and maintains communication with the LINSTOR controller. This is useful if you change your mind and want to revert the setting, or if you want to make some other LINSTOR modification that might affect the node or its resources.
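For example, you can verify the current node properties, and later revert the exclusion, with commands such as the following. This is a minimal sketch; node-1 is a placeholder node name:

    # show the properties that are set on the node
    linstor node list-properties node-1

    # allow LINSTOR to place new resources on the node again
    linstor node set-property node-1 AutoplaceTarget true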

Configuring LINSTOR to Deploy No New Resources to a Node By Using Auto-Evict

The following method is rather harsh. It accomplishes the stated goal, but because you disable and stop the LINSTOR satellite service on the node, it isolates the node from the LINSTOR cluster and requires user intervention to bring the node back into the cluster.

  1. Disable auto-eviction for the node:

    linstor node set-property <node-name> DrbdOptions/AutoEvictAllowEviction false

  2. Stop and disable the LINSTOR satellite service on the node:

    systemctl disable --now linstor-satellite

After you complete these steps, existing LINSTOR resources will continue to “run” on the node, but you will not be able to modify them by using LINSTOR, nor will you be able to deploy new resources to the node.

IMPORTANT: If you are using a cluster resource manager (CRM), some extra configuration might be necessary to prevent the CRM from restarting the LINSTOR satellite service.
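If you later want to bring the node back into the LINSTOR cluster, reversing these steps should be sufficient in a basic setup. A minimal sketch, again using node-1 as a placeholder node name:

    # start the LINSTOR satellite service and enable it at boot
    systemctl enable --now linstor-satellite

    # allow LINSTOR auto-eviction for the node again
    linstor node set-property node-1 DrbdOptions/AutoEvictAllowEviction true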

Setting LINSTOR Auxiliary Properties to Constrain Resource Placement

As described in the LINSTOR User’s Guide: “You can constrain automatic resource placement to place (or avoid placing) a resource with nodes having a specified auxiliary node property.”

This method is more involved than the previous methods, but it can be useful, particularly for LINSTOR-in-Kubernetes deployments.

This method differs from the first method in this article (setting the AutoplaceTarget node property to false). The first method excludes the node from placement for all resource groups, while the method in this section is more granular: it affects only the resource group (or groups) that you specifically configure and then create (spawn) resources from.
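As a quick illustration of this difference, resources spawned from a resource group that does not reference the auxiliary property can still be placed on any node in the cluster. The resource group name otherRscGrp below is a hypothetical example:

    # no auxiliary property constraint: LINSTOR auto-placement can still
    # select any node for resources spawned from this resource group
    linstor resource-group create otherRscGrp --place-count 2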

IMPORTANT: This method of constraining automatic resource placement affects diskful and diskless resource placement. This could have important implications for your cluster, such as affecting resource quorum functionality. This is discussed at the end of this section.

The example steps below assume that you have a three-node LINSTOR cluster consisting of node-0, node-1, and node-2, and that you want to exclude new LINSTOR resources from being placed on node-1.

  1. Set an auxiliary property, deployable-node in this example, to 0 (false) on the node that you want to exclude from automatic resource placement:

    linstor node set-property --aux node-1 deployable-node 0
  2. Set the same auxiliary property on the other two nodes to 1 (true):

    for i in {0,2}; do linstor node set-property --aux node-$i deployable-node 1; done
  3. Verify the auxiliary property value on the nodes:

    for i in {0..2}; do linstor node list-properties node-$i; done
  4. Create a new resource group that you will create (spawn) new LINSTOR resources from:

    linstor resource-group create testRscGrp --place-count 2 --replicas-on-same deployable-node=1
  5. Create a volume group from the resource group:

    linstor volume-group create testRscGrp
  6. Create (spawn) a new 100MiB resource from the resource group:

    linstor resource-group spawn-resources testRscGrp testResource 100M
  7. Verify that the resource has not been placed on the node that you excluded (node-1):

    linstor resource list

Output from the resource list command should show that LINSTOR placed the resource on the other two nodes, node-0 and node-2:


╭──────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node   ┊ Port ┊ Usage  ┊ Conns ┊ State    ┊ [...] ┊
╞══════════════════════════════════════════════════════════════════╡
[...]
┊ testResource ┊ node-0 ┊ 7002 ┊ Unused ┊ Ok    ┊ UpToDate ┊ [...] ┊
┊ testResource ┊ node-2 ┊ 7002 ┊ Unused ┊ Ok    ┊ UpToDate ┊ [...] ┊
╰──────────────────────────────────────────────────────────────────╯
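If the resource list is long, you can narrow the output to the resource that you just spawned by filtering on its name:

    linstor resource list --resources testResource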

Pay attention to resource placement constraints in production clusters that might affect features such as DRBD® quorum. In this example, because the deployed resource is constrained to two nodes in the three-node cluster, there is no resource quorum safeguard against a split-brain situation. When you create a resource that is constrained in this way, the LINSTOR controller will give you a warning message about this:

 WARNING: Could not find suitable node to automatically create a tie breaking resource for 'testResource'.
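If the reason for excluding the node still allows a diskless DRBD resource to run on it, one possible remedy, sketched below rather than prescribed, is to manually create a diskless resource on the excluded node so that it can act as a quorum tiebreaker:

    # manually place a diskless replica on the excluded node
    linstor resource create node-1 testResource --drbd-diskless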

Written by: MAT - 2024-03-18

Reviewed by: GH - 2024-03-18