Version: v0.24 Stable

Snapshot and Restore

There are multiple ways to back up and restore a virtual cluster. vCluster provides a built-in method to create and restore snapshots using its CLI.

warning

If you use an external database, such as MySQL or PostgreSQL, that does not run in the same namespace as vCluster, you must create a separate backup for the datastore. For more information, refer to the relevant database documentation.

Use the vCluster CLI​

info

This method requires vCluster version v0.24.0 or higher.

The vCluster CLI is the recommended way to back up the etcd datastore. When you run a backup, vCluster creates a temporary pod to save the snapshot at the specified location and automatically determines the configured backing store. The snapshot includes:

  • Backing store data (for example, etcd or SQLite)
  • vCluster Helm release information
  • vCluster configuration (for example, vcluster.yaml)
info

The vCluster CLI backup method currently does not support backing up persistent volumes. To back up persistent volumes, use the Velero backup method.

Snapshot URL​

vCluster uses a snapshot URL to save the snapshot to a specific location. The snapshot URL contains the following information:

| Parameter | Description | Example |
|---|---|---|
| Protocol | Defines the storage type for the snapshot | `oci`, `s3`, `container` |
| Storage location | Specifies where to save the snapshot | `oci://ghcr.io/my-user/my-repo:my-tag`, `s3://my-s3-bucket/my-snapshot-key`, `container:///data/my-snapshot.tar.gz` |
| Optional flags | Additional options for snapshot storage | `skip-client-credentials=true` |
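As a sketch, a snapshot URL is assembled from these three parts; the values below are hypothetical examples, not defaults:

```shell
# Assemble a snapshot URL from its parts (hypothetical example values).
PROTOCOL="s3"                           # storage type: oci, s3, or container
LOCATION="my-s3-bucket/my-snapshot-key" # where to save the snapshot
FLAGS="skip-client-credentials=true"    # optional flags, appended as a query string

SNAPSHOT_URL="${PROTOCOL}://${LOCATION}?${FLAGS}"
echo "$SNAPSHOT_URL"
# s3://my-s3-bucket/my-snapshot-key?skip-client-credentials=true
```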

Supported protocols​

The following protocols are supported for storing snapshots:

  • oci – Stores snapshots in an OCI image registry, such as Docker Hub or GHCR.
  • s3 – Saves snapshots to an S3-compatible bucket, such as AWS S3 or MinIO.
  • container – Stores snapshots as a local file inside a vCluster container or another persistent volume claim (PVC).

For example, the following snapshot URL saves the snapshot to an OCI image registry:

vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

Store snapshots in OCI image registries​

You can save snapshots to OCI image registries.

You can authenticate in two ways: by using locally stored OCI credentials or by passing credentials directly in the snapshot URL.

To authenticate with local credentials, log in to your OCI registry and create the snapshot:

# Log in to the OCI registry using a password access token.
echo $PASSWORD_ACCESS_TOKEN | docker login ghcr.io -u $USERNAME --password-stdin

# Create a snapshot and push it to an OCI image registry.
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

Alternatively, you can pass authentication credentials directly in the snapshot URL and create the snapshot.

# Pass authentication credentials directly in the URL and create a snapshot.
export OCI_USERNAME=my-username
export OCI_PASSWORD=$(echo -n "my-password" | base64)
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag?username=$OCI_USERNAME&password=$OCI_PASSWORD&skip-client-credentials=true"

Use the following parameters to configure authentication when passing credentials directly in the URL:

| Parameter | Description | Required |
|---|---|---|
| `username` | Username for authenticating with the OCI registry | Yes, when not using local credentials |
| `password` | Base64-encoded password for authenticating with the OCI registry | Yes, when not using local credentials |
| `skip-client-credentials` | When set to `true`, ignores local Docker credentials | No, defaults to `false` |

Store snapshots in S3 buckets​

Store snapshots in an S3-compatible bucket using the s3 protocol.

You can authenticate in two ways: by using local environment credentials or by passing credentials directly in the URL.

To use local environment credentials, log in to AWS CLI or configure your environment with the required variables, then create and save the snapshot:

# Create a snapshot and store it in an S3 bucket.
vcluster snapshot my-vcluster "s3://my-s3-bucket/my-bucket-key"

Alternatively, you can pass authentication credentials directly in the snapshot URL. The following parameters are supported:

| Flag | Description |
|---|---|
| `access-key-id` | Base64-encoded S3 access key ID for authentication |
| `secret-access-key` | Base64-encoded S3 secret access key for authentication |
| `session-token` | Base64-encoded temporary session token for authentication |
| `region` | Region of the S3-compatible bucket |
| `profile` | AWS profile to use for authentication |
| `skip-client-credentials` | Skips use of local credentials for authentication |

Run the following command to create a snapshot and store it in an S3-compatible bucket, such as AWS S3 or MinIO:

# Pass S3 credentials in the URL and create a snapshot.
export ACCESS_KEY_ID=$(cat my-access-key-id.txt | base64)
export SECRET_ACCESS_KEY=$(cat my-secret-access-key.txt | base64)
export SESSION_TOKEN=$(cat my-session-token.txt | base64)
vcluster snapshot my-vcluster "s3://my-s3-bucket/my-bucket-key?access-key-id=$ACCESS_KEY_ID&secret-access-key=$SECRET_ACCESS_KEY&session-token=$SESSION_TOKEN"
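One gotcha with the commands above: if a credential file ends with a trailing newline, `base64` encodes that newline into the value, and the decoded credential no longer matches. A hedged sketch of stripping it first (the file name and key value are hypothetical):

```shell
# Strip any trailing newline before encoding, otherwise the decoded
# credential contains a stray newline and authentication fails.
printf 'AKIAEXAMPLE\n' > /tmp/my-access-key-id.txt  # hypothetical key file
export ACCESS_KEY_ID=$(tr -d '\n' < /tmp/my-access-key-id.txt | base64)

# Round-trip check: the decoded value matches the original key exactly.
printf '%s' "$ACCESS_KEY_ID" | base64 -d
# prints: AKIAEXAMPLE
```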

Store snapshots in containers​

Use the container protocol to save snapshots as local files inside a vCluster container or another PVC.

Run the following command to create a snapshot and store it in the specified path inside a container:

# Save to the vCluster container.
vcluster snapshot my-vcluster "container:///data/my-snapshot.tar.gz"

Create a snapshot​

You can create a snapshot using the following commands:

oci​

# Create an OCI snapshot with local credentials.
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag"

# Create an OCI snapshot and mount an external image pull secret.
vcluster snapshot my-vcluster "oci://ghcr.io/my-user/my-repo:my-tag?skip-client-credentials=true" --pod-image-pull-secret=my-image-pull-secret

s3​

# Create an S3 snapshot.
vcluster snapshot my-vcluster "s3://my-s3-bucket/my-snapshot-key"

container​

# Create a snapshot to local vCluster PVC (if using embedded storage).
vcluster snapshot my-vcluster "container:///data/my-snapshot.tar.gz"

# Create a snapshot to another PVC (needs to be in the same namespace as vCluster).
vcluster snapshot my-vcluster "container:///my-pvc/my-snapshot.tar.gz" --pod-mount "pvc:my-pvc:/my-pvc"

Each snapshot command creates a new pod (using the vCluster image) to back up the backing store. To run the snapshot process inside an existing pod using kubectl exec, add the --pod-exec flag instead of launching a separate pod.
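For example, to run the snapshot inside the existing vCluster pod rather than in a separate pod:

```shell
# Run the snapshot process inside the running vCluster pod via kubectl exec.
vcluster snapshot my-vcluster "container:///data/my-snapshot.tar.gz" --pod-exec
```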


Restore from a snapshot​

Restoring from a snapshot pauses the vCluster, stops all workload pods, and launches a temporary restore pod. Once the restore completes, the vCluster resumes, and workload pods are recreated. This process may result in temporary downtime while the restore is in progress.

You can restore a vCluster using the following commands:

# Restore from an OCI snapshot (use local credentials).
vcluster restore my-vcluster oci://ghcr.io/my-user/my-repo:my-tag

# Restore from an S3 snapshot.
vcluster restore my-vcluster s3://my-s3-bucket/my-snapshot-key

# Restore from a local PVC snapshot (if using embedded storage).
vcluster restore my-vcluster container:///data/my-snapshot.tar.gz
tip

You can also use snapshots to clone a vCluster or migrate it to a new cluster.

When creating a new vCluster, you can restore it directly from a snapshot:

# Restore a new vCluster from an OCI snapshot (uses local credentials).
vcluster create my-vcluster --restore oci://ghcr.io/my-user/my-repo:my-tag

# Migrate an existing vCluster by restoring from a snapshot and applying a new vcluster.yaml.
vcluster create my-vcluster --upgrade -f vcluster.yaml --restore oci://ghcr.io/my-user/my-repo:my-tag

If a restore fails:

  • When using vcluster create, the vCluster is automatically deleted.
  • When using vcluster restore, the restore process is aborted. You must retry the restore to avoid leaving the vCluster in an inconsistent or broken state.

Limitations​

The vCluster CLI has the following limitations when backing up and restoring a virtual cluster:

  • Backing up persistent volumes (PVs) or persistent volume claims (PVCs) is not supported.
  • Snapshotting a sleeping virtual cluster is not supported.
  • Snapshotting a k0s-based virtual cluster requires the --pod-exec flag.
  • Restoring a virtual cluster running the k0s distribution is not supported.
  • When restoring a vCluster that uses an external database backend, the database is not truncated before the snapshot is restored. Manual intervention is required.
  • Restoring a vCluster to a different namespace or with a different name changes the vCluster certificates. This prevents multiple virtual clusters from sharing the same certificates, which is expected behavior.

Use Velero​

You can use Velero to back up virtual clusters.

Make sure your cluster supports volume snapshots so that Velero can back up the persistent volumes and persistent volume claims that store the virtual cluster state. Alternatively, you can use Velero's restic integration to back up the virtual cluster state.

Back up a vCluster​

Install the Velero CLI and the Velero server components, then run the following command:

velero backup create <backup-name> --include-namespaces=my-vcluster-namespace
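For recurring backups, Velero also supports schedules. A sketch, assuming a daily backup at 02:00 (the schedule name and cron expression are example values):

```shell
# Create a daily backup schedule for the vCluster namespace.
velero schedule create vcluster-daily --schedule="0 2 * * *" --include-namespaces=my-vcluster-namespace
```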

Verify a backup was successfully created with the following command:

velero backup describe <backup-name>

The output is similar to the following:

Name:         <backup-name>
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.24.0
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=24

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  test
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

...

Restore a vCluster​

After creating a backup with the Velero CLI or a scheduled backup, you can restore a vCluster using the Velero CLI:

velero restore create <restore-name> --from-backup <backup-name>

Verify the restore process using the following command:

velero restore logs <restore-name>

This restores the vCluster workloads, configuration, and state in the virtual cluster namespace.

Moving vCluster

Moving a vCluster from one namespace to another can be challenging because some objects, such as cluster role bindings and persistent volumes, contain namespace references. Velero supports namespace mapping, which works in most cases, but use caution: this may not be compatible with all vCluster setups.
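With the Velero CLI, such a move uses the `--namespace-mappings` flag on restore. A hedged sketch with placeholder names:

```shell
# Restore the backup into a different namespace (old-namespace -> new-namespace).
velero restore create <restore-name> --from-backup <backup-name> --namespace-mappings old-namespace:new-namespace
```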

Use Velero inside vCluster​

To use Velero for backups, first enable the hostpath-mapper component in vCluster.

After enabling hostpath-mapper, install the Velero CLI (as described above), connect to your vCluster, and install Velero inside the virtual cluster:

velero install --provider <provider> --bucket <bucket_name> --secret-file <your_secret_file> --plugins velero/velero-plugin-for-<provider>:<semver> --use-restic
note

Ensure you replace `<provider>`, `<bucket_name>`, `<your_secret_file>`, and other placeholders with the correct values for your environment.

After installation is complete, you can check the status of the Velero resources:

$ kubectl get all -n velero
NAME                          READY   STATUS    RESTARTS   AGE
pod/restic-5szkb              1/1     Running   0          118s
pod/velero-75c5479dfd-4x7sl   1/1     Running   0          118s

NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/restic   1         1         1       1            1           <none>          118s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/velero   1/1     1            1           119s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/velero-75c5479dfd   1         1         1       119s

Now you're ready to create a backup using restic:

velero backup create test1 --default-volumes-to-restic

Wait for the backup to complete. Eventually, you should see output similar to the following:

$ velero backup describe test1
Name:         test1
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.25.0+k3s1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=25

Phase:  Completed

Errors:    0
Warnings:  0

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

...