Organization
The top-level entity that owns billing, clusters, teams, and policies. Admins manage organization-wide settings and resources.
Team
A logical grouping of members who share resources such as storage volumes and cost controls. Teams enable collaboration within an organization.
Admin
An organization-level role with permissions to manage members, teams, volumes, billing, and policies. Admins have organization-wide scope.
Member
A user role with permissions to create and use workspaces, use team volumes, and accept team invites. Members operate within their assigned teams.
Workspace
An isolated, containerized environment with GPU/CPU resources where you develop and run code. You can open Jupyter notebooks or connect over SSH in a workspace.
Volume
A persistent storage resource managed by VESSL Cloud. Volumes can be attached to workspaces to store data, datasets, models, and artifacts.
Cluster storage
High-availability distributed storage (CephFS/NVMe) bound to a specific cluster. Supports read-write-many (RWX) semantics — multiple workspaces on the same cluster can share the storage simultaneously. Data persists even after workspace termination. ~150 MB/s throughput. Replaces the legacy “Workspace volume.”
Object storage
S3-backed, POSIX-compatible storage that can be shared across teams and across workspaces on any cluster. Uses read-write-many (RWX) semantics for concurrent access. ~150 MB/s throughput. Previously called “Shared volume.”
Workspace volume (deprecated)
A legacy per-workspace persistent volume with read-write-once (RWO) semantics. Replaced by Cluster storage. Contact support@vessl.ai for migration assistance.
Temporary storage
Ephemeral storage included in every workspace session. Data is cleared when the workspace stops or terminates.
Cluster
The compute backend that schedules GPU/CPU resources for your workspaces. Clusters provide the infrastructure for running workspaces.
Jupyter Notebook
An interactive environment for writing and running code cells. On VESSL Cloud, Jupyter runs inside a workspace and is accessible in the browser.
SSH
Secure Shell, a protocol for terminal access to your workspace. Useful for CLI workflows and advanced debugging.
Connect
The workspace tab from which you open JupyterLab or fetch SSH connection instructions.
Billing states
Workspaces have three billing states that affect cost:
- Running: compute is billed while the workspace is active
- Paused: compute is stopped; Cluster storage charges continue to apply
- Terminated: the workspace is deleted; no further charges accrue
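The cost impact of the three states can be sketched with a small illustrative calculation. All rates below are hypothetical, and the assumption that storage continues to bill while Running is an inference from the Paused behavior; check the billing page for actual pricing:

```python
# Illustrative only: hypothetical rates, not actual VESSL Cloud pricing.
hourly_compute = 2.50      # $/hr of compute while Running
hourly_storage = 0.05      # $/hr for attached Cluster storage

running_hours = 4
paused_hours = 20

# Running bills compute (and, we assume, storage); Paused bills storage
# only; Terminated bills nothing further.
cost = (running_hours * (hourly_compute + hourly_storage)
        + paused_hours * hourly_storage)
print(round(cost, 2))  # 4 * 2.55 + 20 * 0.05 = 11.2
```

Note that pausing a workspace stops the large compute charge but not the storage charge; only termination stops all billing.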
GPU
Graphics Processing Unit, a specialized processor originally designed for rendering graphics but now widely used for machine learning. GPUs perform many calculations in parallel, making them ideal for training and running AI models. VESSL Cloud provides on-demand access to various GPU types (such as NVIDIA A100 and H100) for your workloads.
Docker
A container platform for packaging and running workspace environments. Docker images help ensure consistent environments across teams and projects.
Resource spec
A predefined hardware configuration specifying GPU type, CPU cores, memory, and Temporary storage. When creating a workspace, you select a resource spec in three steps — GPU product, region, and GPU count — to match your computational needs. Each option shows its availability status and estimated hourly cost.
Credit
Prepaid balance used to pay for VESSL Cloud resources. Credits are consumed based on workspace runtime and resource usage. You can top up credits through the billing page.
Credit buffer
A $10 negative-balance allowance before workspace termination. If your credits run out, workspaces continue running until the balance reaches -$10. The buffer amount is deducted from your next top-up.
Container image
A packaged environment containing the operating system, libraries, and tools needed to run your code. VESSL Cloud offers official images (PyTorch, CUDA, etc.), or you can use custom images. Images ensure consistent, reproducible environments.
Port
A network endpoint for accessing services running inside your workspace. You can expose custom ports (HTTP, TCP) to reach web servers, APIs, or other applications from outside the workspace.
Mount path
The directory location where storage is attached in your workspace file system. Cluster storage mount paths are user-configurable (common choices: /root, /data). Object storage must not be mounted at /root; use /shared or another separate path.
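A quick way to confirm that storage is actually attached at the path you expect, sketched in Python (the /data path here is an example; substitute your configured mount path):

```python
import os

def is_mounted(path: str) -> bool:
    """True if `path` is an active mount point, i.e. storage is attached there."""
    return os.path.ismount(path)

# "/data" is a hypothetical Cluster storage mount path; adjust to yours.
if is_mounted("/data"):
    print("storage attached at /data")
else:
    print("no storage at /data; writes there would land in Temporary storage")
```

Checking this before writing large outputs avoids silently filling Temporary storage, which is cleared when the workspace stops.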
PyTorch
A popular open-source deep learning framework developed by Meta. VESSL Cloud provides pre-configured PyTorch images so you can start training models immediately without setup.
CUDA
NVIDIA’s parallel computing platform and toolkit for GPU acceleration. CUDA enables software to use NVIDIA GPUs for general-purpose processing and is essential for most deep learning workloads.
pip
The standard package manager for Python. Use pip to install libraries and dependencies (e.g., pip install numpy). Install packages into a persistent volume to preserve them across workspace restarts.
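One way to follow that advice is to install onto a volume with pip's --target option and then put that directory on Python's import path at startup. A minimal sketch, assuming packages were installed with `pip install --target /data/python-packages <package>` (the path is an example, not a VESSL default):

```python
import sys

# Hypothetical directory on a persistent volume, populated earlier with:
#   pip install --target /data/python-packages <package>
PKG_DIR = "/data/python-packages"

# Prepend so volume-installed packages take priority over versions baked
# into the container image; a not-yet-existing path is harmless on sys.path.
if PKG_DIR not in sys.path:
    sys.path.insert(0, PKG_DIR)
```

Adding the same line to a startup file (or setting PYTHONPATH) makes the packages importable again after a restart without reinstalling.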
conda
An open-source environment and package manager for Python and other languages. Conda can create isolated environments with specific Python versions and dependencies, useful for managing complex ML projects.
OOMKilled
Out of Memory Killed: an error that occurs when your workspace exceeds its allocated memory, causing the system to terminate the process. If you see this error, consider a resource spec with more memory.
NVMe
Non-Volatile Memory Express, a high-performance storage protocol. Cluster storage uses NVMe/CephFS for fast read/write speeds (~150 MB/s), ideal for loading large datasets and model checkpoints.
S3
Amazon Simple Storage Service, a cloud object storage service. Object storage in VESSL Cloud is backed by S3, providing scalable, durable storage that can be accessed from multiple workspaces and clusters simultaneously.
Home dashboard
A personal overview page showing your active workloads, GPU usage, and spend rate across all teams you belong to.
Organization dashboard
An admin-only overview page showing organization-wide GPU utilization, spend trends, team breakdown, and workload status.
GPU Idle
A workload state where GPU utilization remains at 0% for three or more hours. Displayed as an “Idle (3hr)” badge on the Organization dashboard.
