Generated 2026-02-14 14:51:16 UTC
| Property | Value |
|---|---|
| ODF Version | 4.19.9-rhodf |
| Ceph Version | 19.2.1 (squid) |
| Cluster Health | HEALTHY |
| Physical Disks (OSDs) | 24 |
| Raw Capacity | 70 TiB |
| Usable Capacity | ~35 TiB (raw / 2 replicas) |
| Failure Domain | rack |
| Storage Pool | nrt-2 |
| Replication | 2 copies of every data block |
| Compression | Enabled (aggressive mode) |
STORAGE ENVIRONMENT SUMMARY
======================================================================
Captured: 2026-02-14 11:33:52 GMT
OpenShift Data Foundation (ODF)
ODF Version: 4.19.9-rhodf
Ceph Version: 19.2.1 (squid)
Cluster Health: HEALTHY
ODF is the storage platform running on this OpenShift cluster.
It uses Ceph, an open-source distributed storage system, to
manage data across multiple disks.
Cluster Capacity
Physical Disks (OSDs): 24
Total Raw Capacity: 70 TiB
Usable Capacity: ~35 TiB (raw / 2 replicas)
The cluster has 24 storage devices (called OSDs). The raw capacity
is divided by the replication factor to give usable space.
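The capacity arithmetic above can be sanity-checked in a few lines (all figures taken from this report; the per-OSD weight comes from the OSD tree below):

```python
# Usable capacity = raw capacity / replication factor.
raw_tib = 70        # total raw capacity across all OSDs
replicas = 2        # the pool keeps 2 copies of every block
usable_tib = raw_tib / replicas
print(f"Usable: ~{usable_tib:.0f} TiB")          # ~35 TiB

# The raw figure also matches the CRUSH weights: 24 OSDs at
# weight 2.91089 each sum to ~69.86, reported as "70 TiB".
osd_weight = 69.86133 / 24
print(f"Per-OSD weight: {osd_weight:.5f}")       # 2.91089
```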
Failure Domain & Topology
Failure Domain: rack
CRUSH Rule: nrt-2
The failure domain determines how Ceph spreads replicas. With
"rack" as the failure domain, each copy of data is
placed in a different rack. This means losing an entire
rack will not cause data loss.
OSD Tree (which disks are in which racks):
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 69.86133 root default
-6 69.86133 region eu-de
-5 69.86133 zone eu-de-1
-12 23.28711 rack rack0
-11 23.28711 host kube-d62edtlf05426pegte60-ocpvirt420c-default-000001d7
1 ssd 2.91089 osd.1 up 1.00000 1.00000
4 ssd 2.91089 osd.4 up 1.00000 1.00000
7 ssd 2.91089 osd.7 up 1.00000 1.00000
10 ssd 2.91089 osd.10 up 1.00000 1.00000
13 ssd 2.91089 osd.13 up 1.00000 1.00000
16 ssd 2.91089 osd.16 up 1.00000 1.00000
19 ssd 2.91089 osd.19 up 1.00000 1.00000
22 ssd 2.91089 osd.22 up 1.00000 1.00000
-16 23.28711 rack rack1
-15 23.28711 host kube-d62edtlf05426pegte60-ocpvirt420c-default-000002b1
2 ssd 2.91089 osd.2 up 1.00000 1.00000
5 ssd 2.91089 osd.5 up 1.00000 1.00000
8 ssd 2.91089 osd.8 up 1.00000 1.00000
11 ssd 2.91089 osd.11 up 1.00000 1.00000
14 ssd 2.91089 osd.14 up 1.00000 1.00000
17 ssd 2.91089 osd.17 up 1.00000 1.00000
20 ssd 2.91089 osd.20 up 1.00000 1.00000
23 ssd 2.91089 osd.23 up 1.00000 1.00000
-4 23.28711 rack rack2
-3 23.28711 host kube-d62edtlf05426pegte60-ocpvirt420c-default-00000311
0 ssd 2.91089 osd.0 up 1.00000 1.00000
3 ssd 2.91089 osd.3 up 1.00000 1.00000
6 ssd 2.91089 osd.6 up 1.00000 1.00000
9 ssd 2.91089 osd.9 up 1.00000 1.00000
12 ssd 2.91089 osd.12 up 1.00000 1.00000
15 ssd 2.91089 osd.15 up 1.00000 1.00000
18 ssd 2.91089 osd.18 up 1.00000 1.00000
21 ssd 2.91089 osd.21 up 1.00000 1.00000
Storage Pool: nrt-2
Replication: 2 copies of every data block
Compression: Enabled (aggressive mode)
"2 replicas" means every piece of data is stored on 2 different
disks for redundancy. If one disk fails, no data is lost.
Compression squeezes data before writing to save physical space.
VM Disk Storage (StorageClass: nrt-2-rbd)
Type: Ceph RBD (block storage)
Clone Method: Copy-on-write via CSI
Ceph RBD provides virtual block devices - each VM gets a disk
backed by the Ceph pool above. Cloning uses copy-on-write,
meaning new VMs share the original data and only store their
differences (similar to VMware linked clones).
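As an illustration of the mechanism described above, copy-on-write can be modeled as clones that store only the blocks they overwrite, with reads of untouched blocks falling through to the parent. This is a toy model, not Ceph RBD's actual implementation:

```python
# Toy copy-on-write model: a clone stores only blocks it has written;
# reads of unwritten blocks fall through to the parent image.
# Illustrative only -- not how Ceph RBD works internally.

class Image:
    def __init__(self, parent=None):
        self.parent = parent
        self.blocks = {}          # block index -> data written to THIS image

    def write(self, idx, data):
        self.blocks[idx] = data   # CoW: writes land in the child only

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        return self.parent.read(idx) if self.parent else None

    def stored_blocks(self):
        return len(self.blocks)   # unique data this image actually holds

golden = Image()
for i in range(1000):             # golden image with 1000 written blocks
    golden.write(i, f"base-{i}")

clones = [Image(parent=golden) for _ in range(100)]
print(sum(c.stored_blocks() for c in clones))    # 0: fresh clones cost ~nothing

clones[0].write(7, "changed")     # divergence is stored per clone
print(clones[0].read(7), clones[1].read(7))      # changed base-7
```

Only the one written block consumes new space; the other 99 clones still share everything with the parent, which is exactly the behavior the pool-level measurements below capture.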
StorageClass Parameters
{"clusterID":"openshift-storage","csi.storage.k8s.io/controller-expand-secret-name":"rook-csi-rbd-provisioner","csi.storage.k8s.io/controller-expand-secret-namespace":"openshift-storage","csi.storage.k8s.io/fstype":"ext4","csi.storage.k8s.io/node-stage-secret-name":"rook-csi-rbd-node","csi.storage.k8s.io/node-stage-secret-namespace":"openshift-storage","csi.storage.k8s.io/provisioner-secret-name":"rook-csi-rbd-provisioner","csi.storage.k8s.io/provisioner-secret-namespace":"openshift-storage","imageFeatures":"layering,deep-flatten,exclusive-lock,object-map,fast-diff","imageFormat":"2","pool":"nrt-2"}
======================================================================
This test measures how efficiently OpenShift Data Foundation (ODF) stores VM disks when cloning at scale, using Ceph's copy-on-write capabilities.
A single "golden" VM template is created with a full OS installation and ~5 GB of test data on a 20 GB virtual disk. This becomes the baseline — the one master copy all clones will share.
Multiple VMs are cloned from the golden image using Ceph's copy-on-write (CoW) mechanism. Each clone initially uses near-zero additional storage because it shares the golden image data and only records differences.
Each clone writes new, unique files filled with random (incompressible) data to simulate real-world divergence. Drift levels are cumulative and additive — at each level, a new file is written alongside previous ones, so storage grows with every phase. Measurements are taken at 1%, 5%, 10%, and 25% of disk size.
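Because the levels are cumulative, each phase writes only the increment needed to reach its target. The schedule implied by the phase names works out as follows (a sketch; sizes in MB against the 20 GB disk):

```python
# Cumulative drift targets per clone, as fractions of the 20 GB disk.
# Each level writes only the increment needed to reach its target,
# alongside the files kept from earlier levels.
disk_mb = 20 * 1024
targets_mb = [200, 1024, 2048, 5120]   # ~1%, 5%, 10%, 25% of the disk

prev = 0
for t in targets_mb:
    increment = t - prev               # size of the new file at this level
    print(f"level {t} MB: write {increment} MB ({t / disk_mb:.1%} cumulative)")
    prev = t
```

This is why, after the 5% level, each clone holds both a 200 MB file and an 824 MB file.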
Note on test data: Both the golden image payload and all drift data are written
using /dev/urandom (random, incompressible data). This represents a worst case for
Ceph compression — real VM workloads containing logs, databases, and application data would see
significantly better compression ratios. The random data isolates copy-on-write efficiency without
compression masking the results.
Storage is measured at each stage using Ceph pool-level metrics. The key question: how much less storage does ODF use compared to making full copies of every VM?
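The efficiency ratios in this report can be reproduced approximately as "what full copies would cost" divided by "what was actually stored". The full-copy figure here is an assumption: 101 independent copies of the 5.735 GB golden disk plus the drift data each of the 100 clones wrote (intermediate phases may differ slightly from the table due to MB/MiB rounding):

```python
# Efficiency = (storage if every VM were a full copy) / (actual data stored).
# Assumes full copies = 101 independent 5.735 GB disks + per-clone drift.
GOLDEN_GB = 5.735
N_DISKS = 101       # golden image + 100 clones
N_CLONES = 100

def efficiency(drift_gb_per_clone, actual_stored_gb):
    full_copies = N_DISKS * GOLDEN_GB + N_CLONES * drift_gb_per_clone
    return full_copies / actual_stored_gb

print(round(efficiency(0.0, 5.736), 1))     # after-100-clones: ~101.0x
print(round(efficiency(5.12, 524.065), 1))  # 25% drift (5120 MB): ~2.1x
```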
Each row represents a measurement taken at a specific point in the test. "Data Stored" is the actual unique data; "Disk Used" includes replication overhead.
| Phase | Data Stored (GB) | Disk Used (GB) | Delta (GB) | VM Disks | Efficiency | Compress Saved (GB) |
|---|---|---|---|---|---|---|
| baseline | 5.734 | 11.346 | — | 0 | — | 0.12 |
| after-golden-image | 5.735 | 11.347 | +0.001 | 1 | — | 0.12 |
| after-100-clones | 5.736 | 11.350 | +0.002 | 101 | 101.0x | 0.12 |
| drift-200mb-.9pct | 32.296 | 59.978 | +26.562 | 101 | 18.8x | 4.62 |
| drift-1024mb-5.0pct | 116.304 | 225.373 | +110.570 | 101 | 5.9x | 7.24 |
| drift-2048mb-10.0pct | 218.404 | 428.458 | +212.670 | 101 | 3.6x | 8.35 |
| drift-5120mb-25.0pct | 524.065 | 1037.369 | +518.331 | 101 | 2.1x | 10.76 |
Charts: Baseline / Clone / Drift
- Savings vs full copies: how much storage CoW cloning actually used (green) versus what full copies would require (gray). The gap is your savings.
- Efficiency ratio at each phase: higher means ODF is using proportionally less storage than full copies would.
- Per-phase breakdown: at each phase, the green portion is actual storage used and the blue portion is space saved by CoW cloning.
- Compression impact: total data stored (green) vs space saved by compression (orange); the combined height shows what storage would be without compression.
Each VM is provisioned with a 20 GB virtual disk, but only blocks the VM has actually written consume storage. With 101 VMs, the total provisioned capacity is 2,020 GB, yet only 5.7 GB is actually stored — a 0.3% utilization rate.
Thin provisioning alone saves 2,014 GB. Copy-on-write cloning and Ceph compression provide additional reductions on top of this.
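The thin-provisioning arithmetic above, spelled out (figures from this report):

```python
# Thin provisioning: capacity promised to VMs vs. data actually stored.
vm_count = 101
disk_gb = 20
stored_gb = 5.7                       # actual unique data after cloning

provisioned_gb = vm_count * disk_gb   # 2020 GB promised to VMs
utilization = stored_gb / provisioned_gb
saved_gb = provisioned_gb - stored_gb

print(f"{provisioned_gb} GB provisioned, {utilization:.1%} utilized, "
      f"{saved_gb:.0f} GB saved by thin provisioning")
```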
After cloning 101 VMs from the golden image, the storage pool grew by just 0.00 GB — that's only 0.0% overhead on top of the original 5.7 GB golden image.
If every clone were a full, independent copy of the 5.7 GB disk, the total would be 579 GB. Instead, ODF's copy-on-write cloning brought the actual usage to just 5.7 GB, saving 573 GB of storage.
Clone method breakdown: all 100 clones used efficient CoW cloning.
As VMs run, each drift level writes a new file of random data to every clone — previous files are kept, so storage grows cumulatively. (For example, after the 5% level each clone holds both a 200 MB file and an 824 MB file.) The table below shows how efficiency decreases as data accumulates:
| Drift Level | New Data Added (GB) | Total Stored (GB) | Efficiency |
|---|---|---|---|
| drift-200mb-.9pct | 26.56 | 32.30 | 18.8x |
| drift-1024mb-5.0pct | 84.01 | 116.30 | 5.9x |
| drift-2048mb-10.0pct | 102.10 | 218.40 | 3.6x |
| drift-5120mb-25.0pct | 305.66 | 524.07 | 2.1x |
Even with significant drift, ODF still uses substantially less storage than full copies would require, because the majority of each VM's data (the OS, base packages, etc.) remains shared with the golden image.
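The "New Data Added" column above is simply the delta of "Total Stored" between consecutive measurements, starting from the 5.736 GB recorded after cloning:

```python
# "New Data Added" per drift level = delta of "Total Stored" between
# consecutive phases. Figures (GB) taken from the report's tables.
total_stored = [5.736, 32.296, 116.304, 218.404, 524.065]

new_data = [round(b - a, 2) for a, b in zip(total_stored, total_stored[1:])]
print(new_data)   # [26.56, 84.01, 102.1, 305.66]
```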
Ceph inline compression saved 10.8 GB, reducing overall storage by 2.1% on top of the CoW savings. (Of the 524.1 GB stored, 21.5 GB was eligible for compression and was reduced to 10.8 GB.)
Note: This test uses random data (/dev/urandom), which is incompressible by design — a worst-case scenario for compression. Production VMs with real application data (logs, databases, documents) would see substantially higher compression savings.
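As a check on the compression percentages above, the overall reduction is the space saved divided by the total stored (figures from this report):

```python
# Compression saving as a share of total data stored (report figures).
stored_gb = 524.065   # "Data Stored" at the final drift level
saved_gb = 10.76      # "Compress Saved" at the final drift level
eligible_gb = 21.5    # data that was compressible at all

print(f"overall reduction: {saved_gb / stored_gb:.1%}")          # ~2.1%
print(f"of eligible data:  {saved_gb / eligible_gb:.1%} saved")  # ~50.0%
```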
Organizations migrating from VMware often use linked clones to save storage. ODF's Ceph RBD copy-on-write cloning provides an equivalent capability. Here's how they compare:
| Feature | VMware Linked Clones | ODF / Ceph CoW Clones |
|---|---|---|
| Mechanism | VMDK redo logs (delta disks) | Ceph RBD layered images (CoW snapshots) |
| Initial clone cost | Near-zero (pointer to parent) | Near-zero (metadata reference to parent image) |
| Write behavior | New writes go to delta disk | New writes go to child image; reads fall through to parent |
| Dependency | Clone depends on parent snapshot | Clone depends on parent image (can be flattened later) |
| Replication | VMFS/vSAN handles replication | Ceph replicates across OSDs and failure domains |
| Compression | Depends on vSAN/datastore config | Inline compression at pool level (configurable) |
| Scale | Typically per-host or per-datastore | Cluster-wide, scales with OSD count |
Bottom line: In this test, ODF achieved 101x storage efficiency when cloning 101 VMs — comparable to what VMware linked clones provide. After maximum drift, efficiency remained at 2.1x. Both approaches fundamentally work the same way: clones share a common base and only store differences.
Plain-language definitions of storage terms used in this report.
Generated by ODF Storage Efficiency Test Harness · 2026-02-14 14:51:16 UTC