Some things to keep in mind regarding DRBD® with SSD backends. DRBD submits the local I/O and sends the data over the network in parallel, and completes the write toward the filesystem and application only once both operations have finished. With SSDs, you reduce the local latency by a factor of about 100 (rotating rust ~4 ms, SSDs ~0.04 ms), but the network latency stays the same, which means the relative DRBD overhead on write latency can grow considerably.
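The arithmetic behind that can be sketched as follows. This is a simplified model (synchronous replication completes when the slower of the two paths finishes), and the latency numbers are illustrative assumptions, not measurements:

```python
# Simplified model: a replicated write completes when BOTH the local
# write and the network round trip are done, i.e. max() of the two.
def replicated_write_latency(local_ms: float, network_ms: float) -> float:
    return max(local_ms, network_ms)

def relative_overhead(local_ms: float, network_ms: float) -> float:
    """Replicated latency divided by plain local latency."""
    return replicated_write_latency(local_ms, network_ms) / local_ms

# Illustrative numbers (assumptions, not benchmarks):
HDD_MS, SSD_MS, NET_MS = 4.0, 0.04, 0.2

print(relative_overhead(HDD_MS, NET_MS))  # HDD: network hides behind the disk
print(relative_overhead(SSD_MS, NET_MS))  # SSD: network dominates
```

With these numbers the HDD setup sees essentially no added latency (the 0.2 ms network trip is hidden behind the 4 ms disk write), while the SSD setup is slowed by a factor of five, purely because the fast local write now waits on the unchanged network.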
That effect may be much less noticeable if you already had decent controller caches, so the "felt" latency of your rotating disk set was not milliseconds but already down to a few tens of microseconds anyway.
For DRBD to allow discards, all peers have to support discards, preferably with "discard zeroes data" semantics. Because at resync time there is no efficient way to determine whether a range was discarded (and should now be discarded on the peer as well), it can be useful to set "rs-discard-granularity", which makes DRBD convert runs of zeroes read by the resync process from the sync source into discards on the sync target.
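A minimal sketch of how that option appears in a resource configuration, assuming a hypothetical resource name `r0` and backing devices; the granularity value shown is an example, and a value of 0 (the default) disables the zeroes-to-discard conversion:

```
resource r0 {
    disk {
        # Convert runs of zeroes read during resync into discards
        # on the sync target, in chunks of this many bytes.
        rs-discard-granularity 65536;
    }
    # ... device, disk, and connection definitions as usual ...
}
```

Check your DRBD version's documentation for supported value ranges; the effective granularity is also constrained by what the backing device reports.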
Edited 2021/12/14 – DJV