Migrating RDM Disk
This guide walks you through the steps required to migrate a VM with RDM (Raw Device Mapping) disks using vjailbreak. RDM disks are mostly used with Windows machines.
RDM disk migration is supported only on PCD version 2025.7 (July 2025) or later. For multipath support (connecting to a SAN array), PCD version 2025.10 (October 2025) or later is required.
Prerequisites
Before you begin, ensure the following:
- The RDM disk is attached to the Windows machine.
- vjailbreak is deployed in your cluster.
- PCD requirements:
  - Minimum version: July 2025 (2025.7).
  - For multipath support (connecting to a SAN array): October 2025 (2025.10), which includes the default libvirt and QEMU packages.
  - Check the PCD release notes for the exact version names/numbers.
- The volume type must have multi-attach support enabled in OpenStack (see the example after the commands below).
- All required fields (like `cinderBackendPool` and `volumeType`) are available from your `OpenstackCreds`.
You can fetch these values using:
openstack volume backend pool list
openstack volume type list
Or by describing the OpenStack credentials:
kubectl describe openstackcreds -n migration-system
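If multi-attach is not yet enabled on the volume type, it can be turned on via the standard Cinder `multiattach` property; a minimal sketch (the type name `multiattach-type` is only an example):

```bash
# Enable multi-attach on an existing volume type.
openstack volume type set --property multiattach="<is> True" <volume-type>

# Or create a new volume type with multi-attach enabled.
openstack volume type create multiattach-type --property multiattach="<is> True"
```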
On VMware
- Add the following annotation to the VMware Notes field for the VM:

  VJB_RDM:Hard Disk:volumeRef:"source-name"="abac111"

  Replace `Hard Disk` with the RDM disk name and `abac111` with the actual source ID.
- To obtain the `source-id`, `source-name`, or other source details, run the following command against the SAN array:

  openstack block storage volume manageable list SAN_Array_reference --os-volume-api-version 3.8
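For example, if the VM's RDM disk shows up as `Hard disk 2` in vCenter and the manageable list reports its source name as `lun-0042` (both values are hypothetical), the Notes annotation would read:

```
VJB_RDM:Hard disk 2:volumeRef:"source-name"="lun-0042"
```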
Migration Steps
1. Verify RDM Disk Resource
Check if the RDM disk resource is created in Kubernetes:
kubectl get rdmdisk -n migration-system
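To see each disk's phase alongside its name, you can project the `status.phase` field (the same field checked later in step 6):

```bash
kubectl get rdmdisk -n migration-system \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase
```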
2. Fetch RDM Disk Details
For each VM to be migrated, list its details:
kubectl describe vmwaremachine <vm-name> -n migration-system
This will show the RDM disk identifiers.
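If the describe output is long, a quick filter over the raw resource helps locate the identifiers (the exact field names depend on the vjailbreak CRD schema, so the pattern below is an assumption):

```bash
# Dump the VMwareMachine resource and show RDM-related fields with context.
kubectl get vmwaremachine <vm-name> -n migration-system -o yaml | grep -iA3 rdm
```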
3. Detach RDM Disk and Power Off VM in VMware
Since VMware does not allow snapshots of a VM with attached RDM disks, you must:
- Power off the VM to be migrated.
- Detach the RDM disk from the VM.
Steps to detach RDM disks in VMware:
- For each VM, go to Edit Settings, click the cross icon next to the RDM disks, and keep "Delete files from storage" unchecked.
- For each VM, go to Edit Settings and remove the SCSI controller used by these disks; it will be in Physical sharing mode.
This ensures the snapshot and migration can proceed without errors.
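The detach can also be scripted with VMware's govc CLI instead of the vSphere Client; a minimal sketch, assuming govc is already configured (GOVC_URL and credentials) and using placeholder VM/device names:

```bash
# Power off the VM before detaching the RDM disk.
govc vm.power -off my-windows-vm

# List devices to find the RDM disk and its SCSI controller labels.
govc device.ls -vm my-windows-vm

# Remove the disk; -keep leaves the backing files on storage,
# the equivalent of leaving "Delete files from storage" unchecked.
govc device.remove -keep -vm my-windows-vm disk-1001-0
```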
4. Patch RDM Disk With Required Fields
Edit the RDM disk to add `cinderBackendPool` and `volumeType`. Example:
kubectl patch rdmdisk <name_of_rdmdisk_resource> -n migration-system -p '{"spec":{"openstackVolumeRef":{"cinderBackendPool":"backendpool_name","volumeType":"volume_type"}}}' --type=merge
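To confirm the patch landed, read the field back; the jsonpath below mirrors the spec path used in the patch:

```bash
kubectl get rdmdisk <name_of_rdmdisk_resource> -n migration-system \
  -o jsonpath='{.spec.openstackVolumeRef}'
```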
5. Create MigrationPlan
Finally, create a migration plan using the CLI.
Follow the detailed CLI steps here:
Migrating Using CLI and Kubectl
6. Wait For Disk To Become Available
Confirm that the disk is in the Available state:
kubectl get rdmdisk <disk-id> -n migration-system -o yaml
Look for:
status:
  phase: Available
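Instead of polling manually, `kubectl wait` can block until the phase flips; the jsonpath matches the status snippet above:

```bash
# Wait up to 30 minutes for the disk to become Available.
kubectl wait rdmdisk/<disk-id> -n migration-system \
  --for=jsonpath='{.status.phase}'=Available --timeout=30m
```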
7. Ensure All VMs of the Cluster Are Migrated
- Check that the RDM disk is available as a volume in PCD or OpenStack (see the check below).
- Ensure all VMs of the cluster are migrated.
- Power on all the VMs together.
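One quick check that the RDM disk surfaced as a volume (assuming your OpenStack CLI is scoped to the target project):

```bash
# The managed volume should appear with the expected size, type, and name.
openstack volume list --long
```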
Rollback Plan - If Migration Fails
- Delete the VMs created in PCD or OpenStack.
- Delete the managed volume:

  openstack volume delete <volume-id> --remote

- Attach the RDM disks in VMware to the powered-off VMs:
  - Add the reference VMDK disks.
  - Add New Device > Existing Hard Disk. This will add the disk as a new hard disk.
  - Change the controller of this hard disk to the "New SCSI Controller" created in the first step.
  - Repeat the process for all the RDM disks.
- Power on all the VMs on VMware.