doc: add design for replication destination info #993
Madhu-1 wants to merge 2 commits into csi-addons:main
Conversation
Force-pushed b14e1dd to bfef138
```go
// handles for each PVC.
// The maximum number of allowed PVCs in the group is 100.
// +optional
PersistentVolumeClaimsRefList []VolumeGroupReplicationPVCStatus `json:"persistentVolumeClaimsRefList,omitempty"`
```
This is a new field replacing the existing PersistentVolumeClaimsRefList; we need to see how this behaves on upgrade. If we run into an upgrade problem, we will instead add new fields for the mapping and keep the existing `PersistentVolumeClaimsRefList []corev1.LocalObjectReference`.
We already store the volumeHandle in VGRContent, and in the proposal I see we plan to have the mapping in VGRContent too. In that case I think we can remove it from VGR.Status and keep it only on VGRContent.
As designed earlier, the VGR should contain only the source PVC names; all other SP/CSI-level details should live in the VGRContent.
Force-pushed bfef138 to 262a859
Force-pushed 262a859 to 9d1d519
```go
// DestinationVolumeGroupID is the volume group ID on the
// destination/target side.
// +optional
DestinationVolumeGroupID string `json:"destinationVolumeGroupID,omitempty"`
```
Since we already store the groupID on the VGRContent, we can remove it from here and keep it only on VGRContent.
The current reasons for adding this to the VGR status:

- VGRContent is a cluster-scoped resource; VGR is namespace-scoped. DR orchestrators (like Ramen) typically watch namespace-scoped resources. Requiring them to look up a cluster-scoped VGRContent just to read the destination group ID adds unnecessary cross-scope lookups and RBAC complexity.
- The single-writer principle is preserved. The VGRContent controller writes to VGRContent.Status; the VGR controller reads it and propagates it to VGR.Status. Each controller writes only to its own resource, so there is no dual-write problem.
- The VGR status is the consumer-facing API. The DestinationInfoAvailable condition on VGR tells consumers when the data is ready. If the group ID is only on VGRContent, consumers must watch two resources and coordinate readiness themselves.

This is propagation, not duplication: VGRContent is the source of truth, and VGR is the consumer-facing projection.
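The propagation argument above can be sketched in a few lines. The type shapes and the `propagate` helper here are hypothetical simplifications for illustration, not the actual controller code in csi-addons/kubernetes-csi-addons:

```go
package main

import "fmt"

// Hypothetical, simplified shapes of the two statuses.
type VGRContentStatus struct {
	DestinationVolumeGroupID string
}

type Condition struct {
	Type   string
	Status bool
}

type VGRStatus struct {
	DestinationVolumeGroupID string
	Conditions               []Condition
}

// propagate copies the destination group ID from VGRContent.Status (the
// source of truth) into VGR.Status (the consumer-facing projection) and
// sets the DestinationInfoAvailable condition accordingly. Only the VGR
// controller writes VGR.Status, so the single-writer rule holds.
func propagate(content VGRContentStatus, vgr *VGRStatus) {
	vgr.DestinationVolumeGroupID = content.DestinationVolumeGroupID
	vgr.Conditions = []Condition{{
		Type:   "DestinationInfoAvailable",
		Status: content.DestinationVolumeGroupID != "",
	}}
}

func main() {
	vgr := &VGRStatus{}
	propagate(VGRContentStatus{DestinationVolumeGroupID: "grp-42"}, vgr)
	fmt.Println(vgr.DestinationVolumeGroupID, vgr.Conditions[0].Status)
}
```

A namespace-scoped consumer such as Ramen then watches only the VGR and gates on the condition, never touching the cluster-scoped VGRContent.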
On each reconcile, the VGR controller reads the VGRContent status and validates that the destination map is complete. If complete, it populates the `VolumeHandleMapping` pointer on each PVC ref entry with both source and destination handles as a unit, so DR orchestrators can directly read per-PVC destination handles without cross-referencing PV objects:
I understand having this makes it easier for the DR orchestrator to fetch details, but it duplicates info on both VGR and VGRContent. We should plan to update only one of the two, and since VGRContent already contains the volumeHandles, the groupHandle, and the PV names, we should add the handle mapping in VGRContent only. The VGR should contain only PVC names: the VGR originally acts upon PVCs only, so it should carry only that info, and any other SP-level details should live in VGRContent, IMO.
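The reconcile step quoted above (validate completeness, then populate both handles as a unit) can be sketched as follows; the type and function names are illustrative, not the proposed API:

```go
package main

import "fmt"

// Hypothetical pairing of source and destination volume handles.
type VolumeHandleMapping struct {
	SourceVolumeHandle      string
	DestinationVolumeHandle string
}

// Hypothetical per-PVC status entry with the optional mapping pointer.
type PVCStatus struct {
	Name    string
	Mapping *VolumeHandleMapping
}

// populateMappings validates the destination map is complete before
// mutating anything; on any missing handle it returns false and leaves
// refs untouched, so DR orchestrators never observe a partial mapping.
func populateMappings(refs []PVCStatus, srcByPVC, dstBySrc map[string]string) bool {
	for _, r := range refs {
		src, ok := srcByPVC[r.Name]
		if !ok {
			return false
		}
		if _, ok := dstBySrc[src]; !ok {
			return false
		}
	}
	// Complete: populate source and destination handles as a unit.
	for i := range refs {
		src := srcByPVC[refs[i].Name]
		refs[i].Mapping = &VolumeHandleMapping{
			SourceVolumeHandle:      src,
			DestinationVolumeHandle: dstBySrc[src],
		}
	}
	return true
}

func main() {
	refs := []PVCStatus{{Name: "pvc-a"}, {Name: "pvc-b"}}
	src := map[string]string{"pvc-a": "vol-1", "pvc-b": "vol-2"}
	dst := map[string]string{"vol-1": "dst-1", "vol-2": "dst-2"}
	fmt.Println(populateMappings(refs, src, dst), refs[0].Mapping.DestinationVolumeHandle)
}
```

The two-pass shape is the point of the design text: readiness is all-or-nothing, which is what lets a single condition signal "the data is ready."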
Adding a design document on how to handle DR when the volumeIDs and volumeGroupIDs are not the same on the destination cluster.

Assisted-by: Claude Code
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Force-pushed 9d1d519 to fd178cf
@Nikhil-Ladha a few comments seem valid after thinking more about it. Added a new commit to address them; will squash all at the end.
Nikhil-Ladha left a comment
The updated design looks good to me, just a small nit on the mapping field name.
```go
// present, consumers SHOULD prefer this field.
// The maximum number of allowed PVs in the group is 100.
// +optional
PersistentVolumeStatuses []PersistentVolumeStatus `json:"persistentVolumeStatuses,omitempty"`
```
Maybe a better name could be PersistentVolumeMappingList, with the element type PersistentVolumeMapping, since the info stored in the structure is not exactly a status?
Force-pushed fd178cf to 9b43a22
Assisted-by: Claude Code
Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
Force-pushed 9b43a22 to b04dc91
@ShyamsundarR can we have a review on this one? I would like to get this merged before we merge the implementation PRs.