This page walks through three real scenarios you will encounter when working with VESSL Cloud from the terminal. Each assumes you have already installed vesslctl and logged in.

Launch a GPU workspace

Launch an interactive GPU workspace, connect over SSH, and pause it when you are done.
1. Create the workspace

vesslctl workspace create \
  --name my-dev-box \
  --cluster <cluster-name> \
  --resource-spec <spec-name> \
  --image pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel
Replace <cluster-name> and <spec-name> with values from vesslctl cluster list and vesslctl resource-spec list. After creation, copy the workspace slug from vesslctl workspace list (for example, my-dev-box-abc123) — you will use it in the next steps.
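If you are unsure which values to pass, a quick pre-flight check looks like this, using only the list commands mentioned above:

```shell
# List what is available on your account first.
vesslctl cluster list
vesslctl resource-spec list

# After creating the workspace, confirm it and copy its slug for the next steps.
vesslctl workspace list
```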
2. Check workspace status

vesslctl workspace show <workspace-slug>
Wait until the status shows running.
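If you prefer to script the wait, here is a minimal polling sketch. It assumes `vesslctl workspace show` prints the status word somewhere in its output; adjust the grep if the format differs. The slug is the example from step 1.

```shell
# Poll every 10 seconds until the workspace reports "running".
until vesslctl workspace show my-dev-box-abc123 | grep -q "running"; do
  echo "still starting..."
  sleep 10
done
echo "workspace is ready"
```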
3. Connect using SSH

vesslctl workspace ssh <workspace-slug>
4. Pause to save cost

When you are finished for the day, pause the workspace. Your files and environment are preserved at a lower cost.
vesslctl workspace pause <workspace-slug>
5. Resume later

Pick up where you left off:
vesslctl workspace start <workspace-slug>
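The pause/resume cycle lends itself to a small daily wrapper. A sketch, assuming the commands behave exactly as documented above; the slug is the example from step 1:

```shell
# WS holds the workspace slug copied from `vesslctl workspace list`.
WS=my-dev-box-abc123

# Morning: resume the workspace, then connect.
vesslctl workspace start "$WS"
vesslctl workspace ssh "$WS"

# Evening: pause to stop paying for idle GPU time.
vesslctl workspace pause "$WS"
```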

Upload a dataset

Create a volume, upload data from the CLI, and verify the contents.
1. Create a volume

vesslctl volume create \
  --name training-data \
  --storage <storage-name> \
  --teams <team-name> \
  --description "ImageNet subset for fine-tuning"
Replace <storage-name> with a storage backend from vesslctl storage list and <team-name> with a team from vesslctl team list. --name, --storage, and --teams are all required. After creation, copy the volume slug from vesslctl volume list (for example, training-data-abc123).
2. Upload your dataset

vesslctl volume upload <volume-slug> ./dataset/
Upload individual files or entire directories in one command. Use --remote-prefix datasets/v1/ to place files under a specific path, or --exclude "*.pyc" to skip patterns.
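For example, combining both flags (the volume slug is the example slug from step 1):

```shell
# Upload the local dataset under datasets/v1/ in the volume,
# skipping compiled Python artifacts.
vesslctl volume upload training-data-abc123 ./dataset/ \
  --remote-prefix datasets/v1/ \
  --exclude "*.pyc"
```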
3. Verify the upload

vesslctl volume ls <volume-slug> --prefix /
Need S3-compatible access for DVC, aws s3 cp, or a custom pipeline? Run vesslctl volume token <volume-slug> to get temporary S3 credentials and an endpoint URL.
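A sketch of that flow with the AWS CLI. The output format of vesslctl volume token is not shown on this page, so the export values and the bucket/endpoint below are placeholders to be filled in from the command's actual output:

```shell
# Fetch temporary S3 credentials and an endpoint URL for the volume.
vesslctl volume token training-data-abc123

# Export the values printed by the command above (placeholders, not real keys).
export AWS_ACCESS_KEY_ID=<access-key-from-output>
export AWS_SECRET_ACCESS_KEY=<secret-key-from-output>
export AWS_SESSION_TOKEN=<session-token-from-output>

# Any S3-compatible tool then works, e.g. the AWS CLI:
aws s3 cp ./dataset/ s3://<bucket-from-output>/datasets/v1/ \
  --recursive \
  --endpoint-url <endpoint-url-from-output>
```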

Submit a batch job

Submit a batch job, watch its progress, and pull the logs.
1. Submit the job

vesslctl job create \
  --name nightly-train \
  --resource-spec <spec-name> \
  --image pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel \
  --cmd "python train.py --epochs 10 --lr 3e-4"
2. Check job status

vesslctl job list
3. Stream the logs

vesslctl job logs <job-slug> --follow
Copy the job slug from vesslctl job list (such as nightly-train-abc123). The --follow flag streams logs in real time until the job completes.
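Put together, the three steps can be scripted end to end. A sketch, assuming vesslctl job list prints one job per line with the slug in the first column (adjust the awk if the output format differs; <spec-name> is still a placeholder for a real resource spec):

```shell
# Submit the job.
vesslctl job create \
  --name nightly-train \
  --resource-spec <spec-name> \
  --image pytorch/pytorch:2.3.0-cuda12.1-cudnn8-devel \
  --cmd "python train.py --epochs 10 --lr 3e-4"

# Grab the first slug matching the job name and follow its logs.
JOB=$(vesslctl job list | awk '/nightly-train/ {print $1; exit}')
vesslctl job logs "$JOB" --follow
```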