This document provides a comprehensive overview of currently implemented and tested features in the TNS CSI Driver.
The TNS CSI Driver is a Kubernetes Container Storage Interface (CSI) driver that enables dynamic provisioning and management of persistent storage volumes on TrueNAS systems. This driver is in active development with core features implemented and undergoing testing.
NFS:

- Status: ✅ Functional, testing in progress
- Access Modes: ReadWriteMany (RWX), ReadWriteOnce (RWO), ReadWriteOncePod (RWOP)
- Use Case: Shared filesystem storage, multi-pod access
- Mount Protocol: NFSv4.2 with nolock option
- TrueNAS Requirements:
- TrueNAS Scale 25.10+
- NFS service enabled
- Accessible NFS ports (111, 2049)
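A quick way to verify these prerequisites from a worker node (the hostname is a placeholder; requires netcat and the NFS client tools):

nc -zv truenas.local 111    # rpcbind reachable
nc -zv truenas.local 2049   # nfsd reachable
showmount -e truenas.local  # list exported shares (from nfs-common)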
NVMe-oF:

- Status: ✅ Functional, testing in progress
- Access Modes: ReadWriteOnce (RWO), ReadWriteOncePod (RWOP)
- Use Case: High-performance block storage, low-latency workloads
- Transport: TCP (nvme-tcp)
- TrueNAS Requirements:
- TrueNAS Scale 25.10+ (NVMe-oF feature introduced in this version)
- Static IP address configured (DHCP not supported)
- Pre-configured NVMe-oF port with TCP transport (default: 4420)
- Architecture: Dedicated subsystem model (1 subsystem per volume)
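Once the port is configured, reachability can be sanity-checked from a node with nvme-cli (the IP is a placeholder):

# Requires the nvme-tcp kernel module and nvme-cli on the node
sudo nvme discover -t tcp -a 192.168.1.50 -s 4420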
iSCSI:

- Status: ✅ Functional, testing in progress
- Access Modes: ReadWriteOnce (RWO), ReadWriteOncePod (RWOP)
- Use Case: Traditional block storage, broad compatibility
- Transport: TCP (default port: 3260)
- TrueNAS Requirements:
- TrueNAS Scale 25.10+
- iSCSI service enabled
- Pre-configured iSCSI portal
- Architecture: Dedicated target model (1 target per volume with 1 extent)
- Node Requirements:
  - `open-iscsi` package installed on Kubernetes nodes
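For example, on Debian/Ubuntu nodes (the portal IP is a placeholder):

# Install the initiator tools
sudo apt-get install -y open-iscsi
# Verify the TrueNAS portal responds
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260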
SMB:

- Status: ✅ Functional, testing in progress
- Access Modes: ReadWriteMany (RWX), ReadWriteOnce (RWO), ReadWriteOncePod (RWOP)
- Use Case: Authenticated file sharing, Windows-compatible storage
- Mount Protocol: CIFS (SMB 3.0 default)
- TrueNAS Requirements:
- TrueNAS Scale 25.10+
- SMB service enabled
- SMB user account configured
- Node Requirements:
  - `cifs-utils` package installed on Kubernetes nodes
- Authentication: Username/password via Kubernetes Secret (`nodeStageSecretRef`)
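A credentials Secret for nodeStageSecretRef typically looks like the following sketch (the username/password key names are an assumption; check the chart's smbCredentialsSecret documentation for the exact keys it expects):

apiVersion: v1
kind: Secret
metadata:
  name: smb-credentials
  namespace: kube-system
type: Opaque
stringData:
  username: smbuser   # hypothetical SMB account configured on TrueNAS
  password: changeme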
File Storage (NFS vs SMB):
- NFS: Best for Linux-native workloads, simpler setup, no authentication required
- SMB: Best when you need user-level authentication or Windows compatibility
Block Storage (NVMe-oF vs iSCSI):
- NVMe-oF: Higher performance, lower latency, better for modern NVMe SSDs
- iSCSI: Broader compatibility, works with any storage, well-established protocol
- Status: ✅ Fully implemented and functional
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Automatic creation of storage volumes when PVCs are created
- Implementation:
- NFS: Creates ZFS dataset and NFS share automatically
- NVMe-oF: Creates ZVOL, dedicated subsystem, and namespace
- iSCSI: Creates ZVOL, dedicated target, extent, and target-extent mapping
- SMB: Creates ZFS dataset and SMB share automatically
- Parameters:
  - `protocol`: nfs, nvmeof, iscsi, or smb
  - `pool`: ZFS pool name
  - `server`: TrueNAS IP/hostname
  - `port`: (NVMe-oF/iSCSI) target port number
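With a StorageClass in place, an ordinary PVC is all that is needed to trigger provisioning (the class name is a placeholder):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteMany          # RWX works for NFS/SMB classes; use RWO for block protocols
  storageClassName: truenas-nfs
  resources:
    requests:
      storage: 10Gi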
- Status: ✅ Fully implemented and functional
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Automatic cleanup when PVCs with reclaimPolicy: Delete are removed
- Implementation:
- NFS: Removes NFS share and deletes ZFS dataset
- NVMe-oF: Removes namespace, subsystem, and deletes ZVOL
- iSCSI: Removes target-extent, extent, target, and deletes ZVOL
- SMB: Removes SMB share and deletes ZFS dataset
- Idempotent operations (safe to retry)
- Supports the `deleteStrategy` parameter for volume retention (see below)
- Status: ✅ Implemented
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Control whether volumes are actually deleted or retained when a PVC is deleted
- Parameter: `deleteStrategy` in StorageClass parameters
- Values:
  - `delete` (default): Volume is deleted when the PVC is deleted
  - `retain`: Volume is kept on TrueNAS when the PVC is deleted (useful for data protection)
- Use Case: Protect important data from accidental deletion while still using `reclaimPolicy: Delete`
Example StorageClass with Delete Strategy:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-retained
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
deleteStrategy: "retain" # Volumes kept on TrueNAS when PVC deleted
allowVolumeExpansion: true
reclaimPolicy: Delete

- Status: ✅ Fully implemented and functional
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Attach volumes to nodes and detach when no longer needed
- Implementation:
- NFS: Handled by NFSv4 protocol
- NVMe-oF: Uses nvme-cli for discovery, connect, and disconnect operations
- iSCSI: Uses open-iscsi for target discovery, login, and logout operations
- SMB: CIFS mount with credentials file
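On a node, the resulting attachments can be inspected with the same standard tooling:

nvme list                   # NVMe-oF: connected namespaces
iscsiadm -m session         # iSCSI: active sessions
mount | grep -E 'nfs|cifs'  # NFS/SMB: active network mounts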
- Status: ✅ Fully implemented and functional
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Mount volumes into pod containers at specified paths
- Implementation:
- NFS: Standard NFSv4.2 mount with optimized options
- NVMe-oF: Block device formatting (ext4/xfs) and filesystem mount
- iSCSI: Block device formatting (ext4/xfs) and filesystem mount
- SMB: CIFS mount with configurable SMB version and options
- Proper cleanup on unmount
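A minimal pod that exercises this (the PVC name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data  # the volume is published here by the node plugin
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-data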
- Status: ✅ Implemented
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Customize mount options via the StorageClass `mountOptions` field
- Behavior: User-specified options are merged with sensible defaults, with user options taking precedence for conflicting keys
Default Mount Options:
| Protocol | Platform | Defaults |
|---|---|---|
| NFS | Linux | vers=4.2, nolock |
| NFS | macOS | vers=4, nolock |
| NVMe-oF | Linux | noatime |
| iSCSI | Linux | noatime, _netdev |
| SMB | Linux | vers=3.0, file_mode=0777, dir_mode=0777 |
Example StorageClass with Custom Mount Options:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-custom
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
mountOptions:
- hard
- nointr
- rsize=1048576
- wsize=1048576
allowVolumeExpansion: true
reclaimPolicy: Delete

NVMe-oF Mount Options Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nvmeof-custom
provisioner: tns.csi.io
parameters:
protocol: nvmeof
pool: tank
server: truenas.local
subsystemNQN: nqn.2025-01.com.truenas:csi
mountOptions:
- discard
- data=ordered
allowVolumeExpansion: true
reclaimPolicy: Delete

- Status: ✅ Fully implemented and functional
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Dynamically resize volumes without downtime
- Requirements: StorageClass must have `allowVolumeExpansion: true` (enabled by default in the Helm chart)
- Limitations:
- Only expansion supported (shrinking not possible)
- Volume must not be in use during expansion for some operations
- Implementation:
- NFS: Expands ZFS dataset quota
- NVMe-oF: Expands ZVOL size and resizes filesystem
- iSCSI: Expands ZVOL size and resizes filesystem
- SMB: Expands ZFS dataset quota
Example:
kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

- Status: ✅ Implemented, testing in progress
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Create point-in-time copies of volumes using ZFS snapshots
- Features:
- Near-instant snapshot creation
- Space-efficient (copy-on-write)
- Snapshot deletion with proper cleanup
- List snapshots
- Requirements:
- Kubernetes Snapshot CRDs (v1 API)
- External snapshot controller
- CSI snapshotter sidecar (included in Helm chart)
Key Operations:
- Create snapshot: ZFS snapshot created instantly
- Delete snapshot: Snapshot removed from ZFS
- Idempotent operations
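A minimal snapshot setup, assuming the snapshot CRDs and controller are installed (names are placeholders; the driver name tns.csi.io matches the examples elsewhere in this document):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: truenas-snapshots
driver: tns.csi.io
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: truenas-snapshots
  source:
    persistentVolumeClaimName: my-data   # placeholder PVC to snapshot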
- Status: ✅ Implemented, testing in progress
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Create new volumes from existing snapshots
- Features:
- Instant clone creation via ZFS clone
- Space-efficient (shares blocks with snapshot until modified)
- Full read/write access to cloned volume
- Detached clones (promoted) for independent volumes (see below)
- Limitations:
- Cannot clone across protocols (NFS snapshot → NFS volume only)
- Must restore to same or larger size
- Same ZFS pool required
- Status: ✅ Implemented
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Create clones that are independent from the source snapshot
- Features:
- Clone is promoted immediately after creation
- No dependency on parent snapshot
- Source snapshot can be deleted without affecting the clone
- Useful for snapshot rotation and cleanup
- Parameter: `detached: "true"` in StorageClass parameters
- Use Cases:
- Snapshot rotation policies where old snapshots need to be cleaned up
- Creating fully independent copies of data
- Avoiding clone dependency issues
Example StorageClass with Detached Clones:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-detached
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
detached: "true" # Clones will be promoted to break parent dependency
allowVolumeExpansion: true
reclaimPolicy: Delete

- Status: ✅ Implemented
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Create snapshots that survive deletion of the source volume
- Features:
  - Uses `zfs send | zfs receive` for a full data copy
  - Stored as independent datasets in a dedicated folder
  - Source volume can be deleted without affecting the snapshot
  - Snapshots are stored under a configurable parent dataset (default: `{pool}/csi-detached-snapshots`)
- Parameters:
  - `detachedSnapshots: "true"` in VolumeSnapshotClass
  - `detachedSnapshotsParentDataset` (optional): where snapshots are stored
- Use Cases:
- Backup/DR scenarios requiring snapshots that outlive source volumes
- Data migration where source will be deleted
- Long-term archival with independent snapshot lifecycle
- Compliance requirements for independent backup copies
Example VolumeSnapshotClass for Detached Snapshots:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: truenas-nfs-snapshot-detached
driver: tns.csi.io
deletionPolicy: Delete
parameters:
detachedSnapshots: "true"
  detachedSnapshotsParentDataset: "tank/backups/csi-snapshots" # optional

Note: Detached snapshots take longer to create than regular COW snapshots since they perform a full data copy via zfs send/receive. Use regular snapshots for fast point-in-time recovery, and detached snapshots when you need snapshots that survive source volume deletion.
Example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: restored-pvc
spec:
storageClassName: truenas-nfs
dataSource:
name: my-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteMany
resources:
requests:
      storage: 10Gi

- Status: ✅ Implemented
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Description: Report volume health status to Kubernetes via the CSI `ControllerGetVolume` capability
- CSI Capability: `GET_VOLUME` - enables Kubernetes to query volume health
- Features:
  - Reports `VolumeCondition` with an `Abnormal` flag and a descriptive `Message`
  - Health checks performed on-demand when Kubernetes queries volume status
  - Protocol-specific validation of underlying storage resources
Health Checks Performed:
| Protocol | Check | Abnormal If |
|---|---|---|
| NFS | Dataset exists | Dataset not found or inaccessible |
| NFS | NFS share enabled | Share disabled or missing |
| NVMe-oF | ZVOL exists | ZVOL not found |
| NVMe-oF | Subsystem exists | Subsystem missing |
| NVMe-oF | Namespace exists | Namespace not found in subsystem |
| iSCSI | ZVOL exists | ZVOL not found |
| iSCSI | Target exists | Target missing |
| iSCSI | Extent exists | Extent not found |
| SMB | Dataset exists | Dataset not found or inaccessible |
| SMB | SMB share enabled | Share disabled or missing |
Return Values:
- `Abnormal: false` - Volume is healthy, all checks passed
- `Abnormal: true` - Volume has issues; `Message` contains details
Use Cases:
- Kubernetes can detect storage issues before pods fail
- Operators can monitor volume health via CSI events
- Automated alerting on storage problems
Note: This is a controller-side capability. Kubernetes periodically queries volume health for volumes with GET_VOLUME capability enabled.
- Status: ✅ Implemented (since v0.15.2)
- Protocols: NVMe-oF, iSCSI
- Description: Block storage volumes support the `ReadWriteMany` access mode with `volumeMode: Block`, enabling KubeVirt live migration
KubeVirt requires RWX block volumes to live migrate VMs between worker nodes. Both NVMe-oF and iSCSI support this mode — the CSI driver exposes the raw block device without mounting a filesystem, and KubeVirt's QEMU driver handles concurrent access.
PVC for KubeVirt:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vm-disk
spec:
accessModes:
- ReadWriteMany
volumeMode: Block
resources:
requests:
storage: 20Gi
  storageClassName: tns-csi-nvmeof # or tns-csi-iscsi

StorageProfile for CDI:
KubeVirt's Containerized Data Importer (CDI) uses StorageProfiles to determine the default access mode and volume mode for DataVolumes. Configure a StorageProfile for your storage class:
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
name: tns-csi-nvmeof # must match the StorageClass name
spec:
claimPropertySets:
- accessModes:
- ReadWriteMany
      volumeMode: Block

Create one for each block protocol storage class you use with KubeVirt (e.g., tns-csi-nvmeof, tns-csi-iscsi).
See the KubeVirt live migration documentation for more details on requirements.
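As a sketch, a CDI DataVolume that imports a disk image onto one of these classes could look like this (the image URL and names are placeholders; the spec.storage form lets CDI fill in defaults from the StorageProfile above):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: vm-root-disk
spec:
  source:
    http:
      url: "https://example.com/images/disk.qcow2"  # placeholder image location
  storage:
    storageClassName: tns-csi-nvmeof
    resources:
      requests:
        storage: 20Gi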
- Status: ✅ Stable and functional
- Description: Resilient WebSocket client for TrueNAS API communication
- Features:
- Automatic reconnection with exponential backoff
- Ping/pong heartbeat (30-second intervals)
- Read/write deadline management
- Connection state tracking
- Graceful error handling
- Endpoints:
  - `wss://` for HTTPS (recommended)
  - `ws://` for HTTP (development only)
- Status: ✅ Implemented and tested
- Description: Automatic recovery from network disruptions
- Features:
- Exponential backoff for reconnections (1s → 2s → 4s → ... max 30s)
- Operation retries during connectivity issues
- State preservation across reconnections
- Connection health monitoring
- Testing: Validated with manual connection disruption tests
- Status: ✅ Supported
- Description: Multiple controller replicas for redundancy
- Implementation: Kubernetes leader election
- Default: Single controller (can be increased via Helm chart)
- Status: ✅ Fully implemented
- Endpoint: `/metrics` on port 8080 (configurable)
- Available Metrics:
  - `tns_csi_operations_total`: Counter of CSI operations by method and status
  - `tns_csi_operations_duration_seconds`: Histogram of operation durations
  - `tns_volume_operations_total`: Counter by protocol, operation, and status
  - `tns_volume_operations_duration_seconds`: Histogram of volume operation durations
  - `tns_volume_capacity_bytes`: Gauge of provisioned volume sizes
  - `tns_websocket_connected`: Connection status gauge (1=connected, 0=disconnected)
  - `tns_websocket_reconnects_total`: Counter of reconnection attempts
  - `tns_websocket_messages_total`: Counter by direction (sent/received)
  - `tns_websocket_message_duration_seconds`: Histogram of API call durations
  - `tns_websocket_connection_duration_seconds`: Current connection duration
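To eyeball the metrics without Prometheus (the deployment name below is an assumption; adjust it to your release):

kubectl -n kube-system port-forward deploy/tns-csi-controller 8080:8080 &
curl -s localhost:8080/metrics | grep '^tns_'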
- Status: ✅ Implemented
- Description: Automatic Prometheus Operator integration
- Configuration: Optional, enabled via Helm chart values
- Status: ✅ Comprehensive logging
- Levels: Standard klog verbosity levels (--v=1 to --v=10)
- Default: v=2 (info level)
- Components:
- Controller logs: Volume operations, API interactions
- Node logs: Mount/unmount operations, device management
- Structured logging with context
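To inspect the logs, locate the driver pods first (pod names vary by Helm release):

kubectl -n kube-system get pods | grep tns-csi
kubectl -n kube-system logs <controller-pod> --tail=100   # substitute a pod name from the listing above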
- Status: ✅ Fully implemented
- Port: 9090 (configurable)
- Enable: `controller.dashboard.enabled: true` in Helm values
- Features:
- Live volume inventory with protocol, capacity, and health status
- Volume health checks (verifies datasets, shares, subsystems, targets exist)
- Kubernetes enrichment (PV/PVC binding, namespace, attached pods)
- Snapshot and clone tracking
- Unmanaged volume discovery (non-CSI volumes on same pool)
- Parsed Prometheus metrics summary
- JSON API endpoints for programmatic access
- Access: `kubectl port-forward svc/tns-csi-driver-dashboard 9090:9090`, or configure Ingress
- Status: ✅ Fully implemented
- Port: 2137 (default, configurable via `--port`)
- Command: `kubectl tns-csi dashboard`
- Description: Local dashboard that connects directly to TrueNAS via WebSocket. Same UI as the in-cluster dashboard, but runs on your machine and auto-discovers credentials from the installed driver.
- Status: ✅ Fully implemented
- Enable: `grafana.dashboards.enabled: true` in Helm values
- Description: Pre-built Grafana dashboard (`tns-csi-overview.json`) provisioned via ConfigMap with sidecar auto-discovery
- Panels: WebSocket health, operation counts by protocol, operations by type (create/delete/expand), message throughput, per-protocol breakdown (NFS, NVMe-oF, iSCSI, SMB)
- Status: ✅ Production-ready chart
- Registry:
  - Docker Hub (recommended): `oci://registry-1.docker.io/bfenski/tns-csi-driver`
  - GitHub Container Registry: `oci://ghcr.io/fenio/tns-csi-driver`
- Features:
- Configurable resource limits
- Multiple storage class support (NFS, NVMe-oF, iSCSI, SMB)
- ServiceMonitor for Prometheus
- RBAC configuration
- Customizable mount options
- Volume expansion enabled by default
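A values.yaml sketch defining two classes at once (field names mirror the --set flags used in the install examples at the end of this document; pool and server are placeholders):

storageClasses:
  - name: tns-csi-nfs
    enabled: true
    protocol: nfs
    pool: tank
    server: truenas.local
  - name: tns-csi-nvmeof
    enabled: true
    protocol: nvmeof
    pool: tank
    server: truenas.local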
- Status: ✅ Flexible configuration
- Support: Multiple storage classes per driver installation
- Parameters:
  - Common: `protocol`, `pool`, `server`, `deleteStrategy`, `parentDataset`
  - Adoption: `markAdoptable`, `adoptExisting` (see "Volume Adoption" section)
  - NFS-specific: `path`
  - NVMe-oF specific: `subsystemNQN`, `fsType`, `transport`, `port`
  - SMB-specific: `smbCredentialsSecret` (name/namespace for nodeStageSecretRef)
  - ZFS properties: See "Configurable ZFS Properties" section below
- Mount Options: Configurable via the StorageClass `mountOptions` field (see "Configurable Mount Options" above)
- Status: ✅ Implemented
- Description: Configure ZFS dataset/ZVOL properties via StorageClass parameters
- Prefix: All ZFS properties use the `zfs.` prefix in StorageClass parameters
Dataset properties (NFS and SMB volumes):

| Parameter | Description | Valid Values |
|---|---|---|
| `zfs.compression` | Compression algorithm | off, lz4, gzip, gzip-1 to gzip-9, zstd, zstd-1 to zstd-19, lzjb, zle |
| `zfs.dedup` | Deduplication | off, on, verify, sha256, sha512 |
| `zfs.atime` | Access time updates | on, off |
| `zfs.sync` | Synchronous writes | standard, always, disabled |
| `zfs.recordsize` | Record size | 512, 1K, 2K, 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1M |
| `zfs.copies` | Number of data copies | 1, 2, 3 |
| `zfs.snapdir` | Snapshot directory visibility | hidden, visible |
| `zfs.readonly` | Read-only mode | on, off |
| `zfs.exec` | Executable files | on, off |
| `zfs.aclmode` | ACL mode | passthrough, restricted, discard, groupmask |
| `zfs.acltype` | ACL type | off, nfsv4, posix |
| `zfs.casesensitivity` | Case sensitivity (creation only) | sensitive, insensitive, mixed |
ZVOL properties (NVMe-oF and iSCSI volumes):

| Parameter | Description | Valid Values |
|---|---|---|
| `zfs.compression` | Compression algorithm | off, lz4, gzip, gzip-1 to gzip-9, zstd, zstd-1 to zstd-19, lzjb, zle |
| `zfs.dedup` | Deduplication | off, on, verify, sha256, sha512 |
| `zfs.sync` | Synchronous writes | standard, always, disabled |
| `zfs.copies` | Number of data copies | 1, 2, 3 |
| `zfs.readonly` | Read-only mode | on, off |
| `zfs.sparse` | Thin provisioning | true, false |
| `zfs.volblocksize` | Volume block size | 512, 1K, 2K, 4K, 8K, 16K, 32K, 64K, 128K |
Example StorageClass with ZFS Properties:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-compressed
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
# ZFS properties
zfs.compression: "lz4"
zfs.atime: "off"
zfs.recordsize: "128K"
allowVolumeExpansion: true
reclaimPolicy: Delete

Example NVMe-oF StorageClass with ZFS Properties:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nvmeof-compressed
provisioner: tns.csi.io
parameters:
protocol: nvmeof
pool: tank
server: truenas.local
transport: tcp
port: "4420"
# ZFS properties
zfs.compression: "lz4"
zfs.sparse: "true"
zfs.volblocksize: "16K"
allowVolumeExpansion: true
reclaimPolicy: Delete

- Status: ✅ Implemented
- Description: Enable ZFS native encryption for datasets and ZVOLs at creation time
- Protocols: NFS, NVMe-oF, iSCSI, SMB
ZFS native encryption provides transparent, at-rest encryption for your volumes. Once enabled, all data written to the volume is automatically encrypted using AES-256-GCM (default) or other supported algorithms.
| Parameter | Description | Default | Required |
|---|---|---|---|
| `encryption` | Enable encryption | `"false"` | Yes (to enable) |
| `encryptionAlgorithm` | Encryption algorithm | `"AES-256-GCM"` | No |
| `encryptionGenerateKey` | Auto-generate encryption key | `"false"` | One of the key source options |
Supported Algorithms:
- `AES-256-GCM` (recommended, default)
- `AES-128-CCM`
- `AES-192-CCM`
- `AES-256-CCM`
- `AES-128-GCM`
- `AES-192-GCM`
1. Auto-generate Key (simplest): TrueNAS generates and manages the encryption key

   encryption: "true"
   encryptionGenerateKey: "true"

2. Passphrase: Provide a passphrase via Kubernetes Secret (min 8 characters)

   encryption: "true"
   csi.storage.k8s.io/provisioner-secret-name: encryption-secret
   csi.storage.k8s.io/provisioner-secret-namespace: kube-system

   Secret contents:

   apiVersion: v1
   kind: Secret
   metadata:
     name: encryption-secret
     namespace: kube-system
   type: Opaque
   stringData:
     encryptionPassphrase: "my-secret-passphrase"

3. Hex Key: Provide a 256-bit hex-encoded key via Kubernetes Secret (64 hex chars)

   encryption: "true"
   csi.storage.k8s.io/provisioner-secret-name: encryption-secret
   csi.storage.k8s.io/provisioner-secret-namespace: kube-system

   Secret contents:

   apiVersion: v1
   kind: Secret
   metadata:
     name: encryption-secret
     namespace: kube-system
   type: Opaque
   stringData:
     encryptionKey: "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
Example: Encrypted NFS StorageClass (Auto-Generated Key):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-encrypted
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
encryption: "true"
encryptionGenerateKey: "true"
allowVolumeExpansion: true
reclaimPolicy: Delete

Example: Encrypted NVMe-oF StorageClass (Passphrase from Secret):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nvmeof-encrypted
provisioner: tns.csi.io
parameters:
protocol: nvmeof
pool: tank
server: truenas.local
transport: tcp
port: "4420"
encryption: "true"
encryptionAlgorithm: "AES-256-GCM"
csi.storage.k8s.io/provisioner-secret-name: encryption-secret
csi.storage.k8s.io/provisioner-secret-namespace: kube-system
allowVolumeExpansion: true
reclaimPolicy: Delete
---
apiVersion: v1
kind: Secret
metadata:
name: encryption-secret
namespace: kube-system
type: Opaque
stringData:
  encryptionPassphrase: "my-secret-passphrase-at-least-8-chars"

- Key Recovery: If using a passphrase or hex key, you are responsible for key backup. Losing the key means losing access to the encrypted data.
- Auto-Generated Keys: When using `encryptionGenerateKey: "true"`, TrueNAS manages the key. The key is stored on TrueNAS and is accessible to TrueNAS administrators.
- Snapshots: Snapshots of encrypted volumes inherit the encryption settings.
- Performance: Encryption has minimal performance impact with modern CPUs (AES-NI acceleration).
- Status: ✅ Implemented
- Description: All volumes are tagged with ZFS user properties for reliable identification and cross-cluster adoption
- Schema Version: `1` (versioned for future migrations)
The driver stores metadata as ZFS user properties on each volume's dataset/ZVOL. This enables:
- Reliable volume identification without searching by name/path
- Cross-cluster volume adoption
- Ownership verification before deletion
- Easy debugging via `zfs get all <dataset>`
| Property | Description | Example |
|---|---|---|
| `tns-csi:schema_version` | Schema version for migrations | `"1"` |
| `tns-csi:managed_by` | Ownership marker | `"tns-csi"` |
| `tns-csi:csi_volume_name` | CSI volume identifier | `"pvc-abc123"` |
| `tns-csi:protocol` | Storage protocol | `"nfs"`, `"nvmeof"`, or `"iscsi"` |
| `tns-csi:capacity_bytes` | Volume size in bytes | `"10737418240"` |
| `tns-csi:created_at` | Creation timestamp (RFC3339) | `"2024-01-15T10:30:00Z"` |
| `tns-csi:delete_strategy` | Retain/delete policy | `"delete"` or `"retain"` |
Kubernetes Context Properties:

| Property | Description | Example |
|---|---|---|
| `tns-csi:pvc_name` | Original PVC name | `"my-data"` |
| `tns-csi:pvc_namespace` | Original namespace | `"default"` |
| `tns-csi:storage_class` | Original StorageClass | `"truenas-nfs"` |
NFS Volumes:
| Property | Description | Mutable? |
|---|---|---|
| `tns-csi:nfs_share_path` | NFS export path (stable) | No |
| `tns-csi:nfs_share_id` | TrueNAS share ID | Yes (on re-share) |
NVMe-oF Volumes:
| Property | Description | Mutable? |
|---|---|---|
| `tns-csi:nvmeof_subsystem_nqn` | Subsystem NQN (stable) | No |
| `tns-csi:nvmeof_subsystem_id` | TrueNAS subsystem ID | Yes |
| `tns-csi:nvmeof_namespace_id` | TrueNAS namespace ID | Yes |
iSCSI Volumes:
| Property | Description | Mutable? |
|---|---|---|
| `tns-csi:iscsi_iqn` | Target IQN (stable) | No |
| `tns-csi:iscsi_target_id` | TrueNAS target ID | Yes |
| `tns-csi:iscsi_extent_id` | TrueNAS extent ID | Yes |
Clone/Content Source Properties (set automatically when cloning):
| Property | Description | Values |
|---|---|---|
| `tns-csi:content_source_type` | Type of clone source | `"snapshot"` or `"volume"` |
| `tns-csi:content_source_id` | Source snapshot/volume ID | CSI ID of source |
| `tns-csi:clone_mode` | Clone dependency mode | `"cow"`, `"promoted"`, or `"detached"` |
| `tns-csi:origin_snapshot` | ZFS origin (COW clones only) | ZFS snapshot path |
Clone modes determine dependency relationships:
- cow (Copy-on-Write): Clone depends on snapshot. Snapshot CANNOT be deleted while clone exists.
- promoted: After ZFS promote, dependency is reversed. Snapshot CAN be deleted.
- detached: Created via zfs send/receive. No dependency - both can be deleted independently.
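For reference, the underlying ZFS operations behind each mode (dataset names are placeholders):

zfs clone tank/csi/pvc-src@snap1 tank/csi/pvc-clone              # cow: clone depends on the snapshot
zfs promote tank/csi/pvc-clone                                   # promoted: dependency is reversed
zfs send tank/csi/pvc-src@snap1 | zfs receive tank/csi/pvc-copy  # detached: fully independent copy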
Viewing Volume Properties:
# On TrueNAS, view all properties for a volume
zfs get all tank/csi/pvc-12345678 | grep tns-csi

- Status: ✅ Fully Implemented
- Description: Import existing tns-csi managed volumes into a new Kubernetes cluster
- Use Cases:
- GitOps recovery - recreate cluster from same Git repo, volumes are automatically adopted
- Disaster recovery - restore volumes to a new cluster
- Cluster migration - move workloads between clusters
- Upgrade recovery - re-import retained volumes after breaking upgrades
- Migration from democratic-csi - move volumes without data loss
For step-by-step migration and adoption instructions, see ADOPTION.md.
New in v0.8.0: Volumes can be automatically adopted when a PVC with the same name is created.
StorageClass Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `markAdoptable` | bool | `false` | Mark new volumes as adoptable (`tns-csi:adoptable=true`) |
| `adoptExisting` | bool | `false` | Automatically adopt any managed volume with matching name |
Adoption Behavior Matrix:
| Volume has `adoptable=true` | StorageClass `adoptExisting` | Result |
|---|---|---|
| Yes | No | Volume is adopted |
| No | Yes | Volume is adopted |
| Yes | Yes | Volume is adopted |
| No | No | Volume is NOT adopted (new volume created) |
Example StorageClass for GitOps:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-gitops
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
parentDataset: csi
server: truenas.local
markAdoptable: "true" # New volumes can be adopted by future clusters
adoptExisting: "true" # Adopt existing volumes with matching names
reclaimPolicy: Retain # Keep volumes on PVC deletion
allowVolumeExpansion: true

Adoption Process:
- When `CreateVolume` is called, the driver searches for an existing volume by CSI name
- If found, it checks adoption eligibility (the `adoptable` property or the `adoptExisting` parameter)
- If eligible, it re-creates any missing TrueNAS resources (NFS share, NVMe-oF subsystem/namespace, or iSCSI target/extent)
- Volume capacity is expanded if requested size is larger than existing size
- Volume is returned as if newly created, but data is preserved
Capacity Handling:
- If requested capacity > existing capacity: volume is expanded
- If requested capacity ≤ existing capacity: existing capacity is used
A volume is adoptable if it has:

- `tns-csi:managed_by="tns-csi"` (ownership marker)
- Valid `tns-csi:schema_version` (currently `"1"`)
- `tns-csi:protocol` set to `"nfs"`, `"nvmeof"`, or `"iscsi"`
- Protocol-specific stable identifier:
  - NFS: `tns-csi:nfs_share_path`
  - NVMe-oF: `tns-csi:nvmeof_subsystem_nqn`
  - iSCSI: `tns-csi:iscsi_iqn`
Note: Dataset paths in examples use {pool}/{parentDataset}/{volume} format (e.g., tank/csi/my-volume).
Your actual paths depend on StorageClass configuration:

- `pool` parameter sets the ZFS pool (e.g., `tank`)
- `parentDataset` parameter sets an optional parent dataset (e.g., `csi`)
- If `parentDataset` is not set, volumes are created directly under the pool
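To check whether a specific dataset carries the adoption markers described above (the path is a placeholder):

zfs get tns-csi:managed_by,tns-csi:schema_version,tns-csi:protocol,tns-csi:nfs_share_path tank/csi/my-volume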
1. Identify adoptable volumes on TrueNAS:

   # List all tns-csi managed datasets (adjust path to match your pool/parentDataset)
   zfs list -o name,tns-csi:managed_by,tns-csi:csi_volume_name,tns-csi:protocol -r tank

2. Extract volume information:

   # Get all properties for a specific volume
   zfs get all tank/my-volume | grep tns-csi

3. Re-create the NFS share or NVMe-oF namespace if missing (TrueNAS UI or API)

4. Create a static PV in Kubernetes referencing the existing volume:

   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: adopted-volume
   spec:
     capacity:
       storage: 10Gi # Match tns-csi:capacity_bytes
     accessModes:
       - ReadWriteMany
     persistentVolumeReclaimPolicy: Retain
     storageClassName: truenas-nfs
     csi:
       driver: tns.csi.io
       volumeHandle: my-volume # Must match tns-csi:csi_volume_name
       volumeAttributes:
         protocol: nfs
         server: truenas.local
         share: /mnt/tank/my-volume # Must match tns-csi:nfs_share_path

5. Create a PVC bound to the static PV:

   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: adopted-pvc
   spec:
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 10Gi
     storageClassName: truenas-nfs
     volumeName: adopted-volume # Bind to static PV
The kubectl plugin provides CLI tooling for volume discovery and adoption:
# Discover orphaned volumes (volumes on TrueNAS without matching PVCs)
kubectl tns-csi list-orphaned
# Generate PV manifest to adopt a specific volume
kubectl tns-csi adopt <dataset-path> -o yaml > pv.yaml
kubectl apply -f pv.yaml
# Mark volumes as adoptable for future cluster recreation
kubectl tns-csi mark-adoptable --all

See kubectl Plugin Documentation for full details on adoption workflows.
- Status: ✅ Implemented
- Description: Customize volume/dataset names on TrueNAS using Go templates
- Protocols: NFS, NVMe-oF, iSCSI, SMB
- Use Cases:
- Use meaningful names instead of auto-generated PV UUIDs
- Include namespace/PVC name in dataset names for easier identification
- Organize volumes with consistent naming patterns
| Variable | Description | Example Value |
|---|---|---|
| `.PVCName` | PVC name | postgres-data |
| `.PVCNamespace` | PVC namespace | production |
| `.PVName` | PV name (CSI volume name) | pvc-abc123-def456 |
| Parameter | Description | Example |
|---|---|---|
| `nameTemplate` | Go template for full name | `{{ .PVCNamespace }}-{{ .PVCName }}` |
| `namePrefix` | Simple prefix | `prod-` |
| `nameSuffix` | Simple suffix | `-data` |
| `commentTemplate` | Go template for dataset comment (visible in TrueNAS UI) | `{{ .PVCNamespace }}/{{ .PVCName }}` |
Note: `nameTemplate` takes precedence over `namePrefix`/`nameSuffix` if both are specified.
Volume names are automatically sanitized for ZFS compatibility:
- Invalid characters replaced with hyphens
- Leading/trailing hyphens removed
- Multiple consecutive hyphens collapsed
- Truncated to 63 characters (K8s label compatibility)
Example StorageClass with Name Template:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-named
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
# Volume name templating
nameTemplate: "{{ .PVCNamespace }}-{{ .PVCName }}"
allowVolumeExpansion: true
reclaimPolicy: Delete

With this StorageClass, a PVC named postgres-data in namespace production would create a dataset named tank/production-postgres-data instead of tank/pvc-abc123-def456-789....
Example with Comment Template:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-commented
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
# Dataset comment visible in TrueNAS UI
commentTemplate: "{{ .PVCNamespace }}/{{ .PVCName }}"
allowVolumeExpansion: true
reclaimPolicy: Delete

With this StorageClass, datasets will have a comment like production/postgres-data visible in the TrueNAS web UI, making it easy to identify which PVC a dataset belongs to. Unlike volume names, comments are free-form text; no sanitization or length limits are applied.
Example with Simple Prefix/Suffix:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: truenas-nfs-prefixed
provisioner: tns.csi.io
parameters:
protocol: nfs
pool: tank
server: truenas.local
namePrefix: "k8s-"
nameSuffix: "-vol"
allowVolumeExpansion: true
reclaimPolicy: Delete

- Status: ✅ Complete RBAC configuration
- Components:
- ServiceAccounts for controller and node components
- ClusterRoles with minimal required permissions
- ClusterRoleBindings
- Status: ✅ Fully automated
- Platform: GitHub Actions with self-hosted runner
- Workflows:
- CI (lint, build, unit tests)
- Integration tests (NFS, NVMe-oF, iSCSI, and SMB)
- Release automation
- Dashboard generation
- Status: ✅ Comprehensive test suite
- Infrastructure: Self-hosted (k3s + real TrueNAS)
- Test Scenarios:
- Basic volume provisioning and deletion (NFS, NVMe-oF, iSCSI, SMB)
- Volume expansion (NFS, NVMe-oF, iSCSI, SMB)
- Concurrent volume operations
- StatefulSet workloads
- Snapshot creation and restoration (NFS, NVMe-oF, iSCSI, SMB)
- Volume adoption (GitOps workflows)
- Connection resilience
- Orphaned resource cleanup
- Persistence testing
- Execution: Automatic on every push to main branch and pull requests
- Status: ✅ CSI spec compliance testing
- Framework: csi-sanity test suite
- Coverage: Basic CSI operations validation
- Status: ✅ Live dashboard
- URL: https://fenio.github.io/tns-csi/dashboard/
- Features: Test results history, trend analysis
- Status: ✅ Secure API key authentication
- Storage: Kubernetes Secrets
- Support: TrueNAS API key authentication
- Status: ✅ Supported
- WebSocket: WSS (WebSocket Secure) protocol
- Recommended: Always use
wss://in production
- Status: ✅ Minimal privilege principle
- Configuration: Separate service accounts for controller and node components
- NFS:
- ✅ ReadWriteMany (RWX) - Multiple pods on multiple nodes
- ✅ ReadWriteOnce (RWO) - Single pod access
- ✅ ReadWriteOncePod (RWOP) - Single pod access with stricter enforcement
- NVMe-oF:
- ✅ ReadWriteOnce (RWO) - Block storage limitation
- ✅ ReadWriteOncePod (RWOP) - Single pod access with stricter enforcement
- iSCSI:
- ✅ ReadWriteOnce (RWO) - Block storage limitation
- ✅ ReadWriteOncePod (RWOP) - Single pod access with stricter enforcement
- SMB:
- ✅ ReadWriteMany (RWX) - Multiple pods on multiple nodes
- ✅ ReadWriteOnce (RWO) - Single pod access
- ✅ ReadWriteOncePod (RWOP) - Single pod access with stricter enforcement
- ✅ Immediate - Volume provisioned immediately when PVC created
- ✅ WaitForFirstConsumer - Volume provisioned when pod scheduled
- ✅ Delete - Volume deleted when PVC removed (default)
- ✅ Retain - Volume kept on TrueNAS after PVC deletion
- ✅ Multiple storage classes per driver
- ✅ Default storage class support
- ✅ Custom parameters per class
- ✅ Tested: k3s (self-hosted CI/CD)
- ✅ Supported: Standard Kubernetes 1.27+
- ⚠️ Should Work:
  - kind (local development)
  - K0s, K3s, RKE2
  - Managed Kubernetes (EKS, GKE, AKS) - untested
- Note: Earlier Kubernetes versions (< 1.27) may work but are not tested
- ✅ Linux: Primary platform
- Ubuntu 22.04+ (tested)
- Debian-based distributions
- RHEL/CentOS-based distributions
- ❌ Windows: Not supported (Linux-focused driver)
- ❌ macOS: Not supported as node OS (development on macOS works)
- ✅ amd64 (x86_64): Fully supported
- ✅ arm64: Fully supported (tested on Apple Silicon via UTM)
- ✅ containerd (primary)
- ✅ CRI-O
- ⚠️ Docker (should work, not extensively tested)
- NFS Support: TrueNAS Scale 25.10+
- NVMe-oF Support: TrueNAS Scale 25.10+ (feature introduced in this version)
- WebSocket API: v2.0 (current endpoint: `/api/current`)
- Authentication: API key-based
- NFS service enabled
- Network access from Kubernetes nodes
- ZFS pool with available space
- SMB service enabled
- SMB user account configured (Credentials > Local Users)
- Network access from Kubernetes nodes (port 445)
- ZFS pool with available space
- Static IP address (DHCP not supported)
- Pre-configured NVMe-oF subsystem with:
- At least one initial namespace (ZVOL)
- TCP port configured (default: 4420)
- Accessible from Kubernetes nodes
- NVMe-oF service enabled
- Production Readiness: Early development, not production-ready
- Testing Coverage: Core features functional, extensive validation needed
- Error Handling: Improving, some edge cases may not be covered
- Network latency affects performance
- NFSv4.2 required (older versions not tested)
- Firewall rules must allow NFS ports
- Requires TrueNAS Scale 25.10+ (not available on TrueNAS CORE)
- Static IP mandatory (DHCP interfaces not shown in configuration)
- Subsystem must be pre-configured (driver doesn't create subsystems)
- Block storage only (ReadWriteOnce access mode)
- TCP transport only (RDMA not implemented)
- Cross-protocol cloning not supported (NFS ↔ NVMe-oF ↔ iSCSI ↔ SMB)
- Cross-pool cloning not supported
- Restored volumes must be same size or larger
- Shrinking not supported (ZFS limitation)
- Some operations may require volume to be unmounted
- Multi-pool Support: Advanced scheduling across multiple TrueNAS pools
- Topology Awareness: Multi-zone deployments
- Volume Migration: Move volumes between protocols/pools
- Quota Management: Advanced quota and reservation features
- Windows Node Support: Linux-focused driver (SMB support is for Linux CIFS clients)
- Legacy Protocol Support: Focus on modern protocols only
- Throughput: Network-limited, typically 1-10 Gbps depending on network
- Latency: ~1-5ms additional latency vs local storage
- IOPS: Moderate (1000-10000 IOPS typical)
- Best For: Shared file storage, read-heavy workloads, multi-pod access
- Throughput: Higher than NFS, can approach local NVMe speeds
- Latency: Lower than NFS (~100-500µs additional latency)
- IOPS: High (10000-100000+ IOPS depending on storage)
- Best For: Databases, high-performance applications, latency-sensitive workloads
- Creation Time: Near-instant regardless of volume size
- Space Overhead: Minimal until data diverges
- Restore Time: Instant (clone operation)
- ✅ README.md - Project overview and quick start
- ✅ DEPLOYMENT.md - Detailed deployment guide
- ✅ KUBECTL-PLUGIN.md - kubectl plugin for volume management
- ✅ QUICKSTART.md - NFS quick start guide
- ✅ QUICKSTART-NVMEOF.md - NVMe-oF setup guide
- ✅ QUICKSTART-SMB.md - SMB/CIFS setup guide
- ✅ SNAPSHOTS.md - Snapshot and cloning guide
- ✅ ADOPTION.md - Volume adoption and migration guide (including democratic-csi migration)
- ✅ METRICS.md - Prometheus metrics documentation
- ✅ TESTING.md - Comprehensive testing guide and infrastructure details
- ✅ FEATURES.md - This document
- ✅ CONTRIBUTING.md - Contribution guidelines
- ✅ CONNECTION_RESILIENCE_TEST.md - Connection testing guide
- ✅ charts/tns-csi-driver/README.md - Complete Helm configuration reference
- ✅ charts/tns-csi-driver/values.yaml - Documented default values
All features are tested on real infrastructure - not mocks or simulators:
Test Environment:
- ✅ Self-hosted GitHub Actions runner (dedicated Akamai/Linode infrastructure)
- ✅ Real Kubernetes clusters (k3s) provisioned for each test run
- ✅ Real TrueNAS Scale 25.10+ server with actual storage pools
- ✅ Real protocol operations (NFS mounts, NVMe-oF connections, SMB shares, actual I/O)
CSI Specification Compliance:
- ✅ Passes kubernetes-csi/csi-test v5.4.0 sanity tests
- ✅ Full CSI specification compliance verified
Integration Test Coverage:
- ✅ Basic volume operations (NFS, NVMe-oF & iSCSI)
- ✅ Volume expansion testing
- ✅ Snapshot creation and restoration
- ✅ Volume adoption (GitOps workflows)
- ✅ StatefulSet volume management (3 replica testing)
- ✅ Data persistence across pod restarts
- ✅ Concurrent volume creation (5 simultaneous volumes)
- ✅ Connection resilience (WebSocket reconnection)
- ✅ Orphaned resource detection and cleanup
Test Results:
- View live dashboard: Test Dashboard
See TESTING.md for comprehensive testing documentation.
- Kubernetes cluster 1.27+
- TrueNAS Scale 25.10+
- TrueNAS API key
- Helm 3.0+
- Protocol-specific tools on nodes: NFS client (NFS), nvme-cli (NVMe-oF), open-iscsi (iSCSI), or cifs-utils (SMB)
helm install tns-csi oci://registry-1.docker.io/bfenski/tns-csi-driver \
--namespace kube-system \
--create-namespace \
--set truenas.url="wss://YOUR-TRUENAS-IP:443/api/current" \
--set truenas.apiKey="YOUR-API-KEY" \
--set storageClasses[0].name="tns-csi-nfs" \
--set storageClasses[0].enabled=true \
--set storageClasses[0].protocol="nfs" \
--set storageClasses[0].pool="YOUR-POOL-NAME" \
  --set storageClasses[0].server="YOUR-TRUENAS-IP"

# Pre-requisite: Configure NVMe-oF port in TrueNAS first!
helm install tns-csi oci://registry-1.docker.io/bfenski/tns-csi-driver \
--namespace kube-system \
--create-namespace \
--set truenas.url="wss://YOUR-TRUENAS-IP:443/api/current" \
--set truenas.apiKey="YOUR-API-KEY" \
--set storageClasses[0].name="tns-csi-nvmeof" \
--set storageClasses[0].enabled=true \
--set storageClasses[0].protocol="nvmeof" \
--set storageClasses[0].pool="YOUR-POOL-NAME" \
  --set storageClasses[0].server="YOUR-TRUENAS-IP"

# Pre-requisite: Configure iSCSI portal in TrueNAS and install open-iscsi on nodes!
helm install tns-csi oci://registry-1.docker.io/bfenski/tns-csi-driver \
--namespace kube-system \
--create-namespace \
--set truenas.url="wss://YOUR-TRUENAS-IP:443/api/current" \
--set truenas.apiKey="YOUR-API-KEY" \
--set storageClasses[0].name="tns-csi-iscsi" \
--set storageClasses[0].enabled=true \
--set storageClasses[0].protocol="iscsi" \
--set storageClasses[0].pool="YOUR-POOL-NAME" \
  --set storageClasses[0].server="YOUR-TRUENAS-IP"

- GitHub Issues: https://github.com/fenio/tns-csi/issues
- Include: Kubernetes version, TrueNAS version, logs, reproduction steps
- See CONTRIBUTING.md for guidelines
- Pull requests welcome
- Focus areas: Testing, documentation, bug fixes
- Test Dashboard: https://fenio.github.io/tns-csi/dashboard/
- GitHub Actions: https://github.com/fenio/tns-csi/actions
Last Updated: 2026-01-29
Driver Version: v0.17.5
Kubernetes Version Tested: 1.27+
Go Version: 1.26.0+