Apptainer
Introduction
Apptainer is free software for containerizing applications. It allows users to package software along with all its dependencies, providing a consistent environment across different systems. Containers are particularly useful on HPC systems because they isolate software and libraries, preventing conflicts with other users’ applications.
Availability
Apptainer is available across all our systems via the module system. Using containers on our systems typically involves loading the Apptainer module and starting the container image.
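As a minimal sketch of that workflow (the image is an example; any registry image works similarly), on a compute node you would typically do:

```shell
# Load the Apptainer module provided by the module system.
module load apptainer

# Pull an image from a registry; this produces a read-only .sif file.
apptainer pull docker://sylabsio/lolcow

# Run a command inside the container.
apptainer exec lolcow_latest.sif cowsay "hello from a container"
```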
Container runtimes are not available on login nodes
If you try to use a container on a login node, you will get an error about namespaces similar to this one:
INFO : A system administrator may need to enable user namespaces, install
INFO : apptainer-suid, or compile with ./mconfig --with-suid
ERROR : Failed to create user namespace: maximum number of user namespaces exceeded, check /proc/sys/user/max_user_namespaces
The apptainer runtime will fail when attempting to use it on a cluster login node (e.g. owl1.arc.vt.edu or tinkercliffs1.arc.vt.edu) because user namespaces are disabled on login nodes. Some security controls are set more strictly on login nodes than on cluster compute nodes because the login nodes have greater exposure to security threats and are generally not an appropriate place to run computational workloads.
The solution is to use an interactive job to get dedicated resources and an interactive shell on a compute node, and to interact with the container there. For example:
[user@owl1 ~]$ interact --account=<your_account> --time=4:00:00 --cpus-per-task=4
--- Warning:
Your session consumes resources (CPUs, memory, and GPUs) while it remains open.
Close your session whenever you finish your work.
Other users cannot use the resources allocated to your job until you close your session.
Consider the use of batch jobs to optimize resources allocation.
srun: job 155192 queued and waiting for resources
srun: job 155192 has been allocated resources
[user@owl030 ~]$ module load apptainer
[user@owl030 ~]$ apptainer pull docker://pytorch/pytorch:latest
[user@owl030 ~]$ apptainer shell pytorch_latest.sif
Apptainer>
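As the warning above notes, batch jobs use resources more efficiently than open interactive sessions. A hypothetical batch script for the same PyTorch container (account and resource values are placeholders) might look like:

```shell
#!/bin/bash
#SBATCH --account=<your_account>
#SBATCH --time=4:00:00
#SBATCH --cpus-per-task=4

# Load the container runtime and run the workload non-interactively.
module load apptainer
apptainer exec pytorch_latest.sif python -c "import torch; print(torch.__version__)"
```

Submit it with sbatch and the job runs without holding an interactive session open.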
Tutorial: Creating Your Own Ubuntu Container with Root Privileges
This tutorial will guide you through creating an interactive Ubuntu 24.04 container, giving yourself root-like privileges to install packages, and setting up Ollama, an example application, for GPU use.
Step 1: Start an interactive GPU job
interact -A <your_account> --partition t4_normal_q --gres gpu:1 --time 4:00:00
module load apptainer
This allocates one T4 GPU on a compute node for 4 hours and loads the Apptainer module.
Step 2: Build a writable Ubuntu container
apptainer build --sandbox ubuntu24.04 docker://nvidia/cuda:13.0.2-devel-ubuntu24.04
--sandbox creates a writable directory instead of a read-only .sif file.
We start from NVIDIA’s CUDA-enabled Ubuntu 24.04 image.
Step 3: Enter the container with root privileges
apptainer shell --fakeroot --bind /home --writable ubuntu24.04
--fakeroot allows you to act as root inside the container.
--writable ensures changes persist in the container.
/home is bound so your user files are accessible.
Verify the OS:
cat /etc/os-release
You should see:
PRETTY_NAME="Ubuntu 24.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
...
Step 4: Update and install packages
Inside the container:
apt update
apt upgrade -y
apt install -y build-essential pkg-config cmake git \
ninja-build make autoconf automake libtool wget curl zstd
mkdir -p /localscratch
All packages are installed as root, but only inside the container; you do not need sudo.
/localscratch will be used as temporary storage inside the container.
Exit the container:
exit
Step 5: Install Ollama inside the container
Re-enter the container with write access, because Ollama’s installer requires root access inside the container:
apptainer shell --fakeroot --bind /localscratch --bind /home --writable ubuntu24.04
Run the Ollama installation script:
curl -fsSL https://ollama.com/install.sh | sh
Exit the --fakeroot instance of the container after the installation:
exit
Step 6: Use the container with GPU support
apptainer shell --nv \
--bind /localscratch --bind /home --bind /projects --bind /common \
ubuntu24.04
--nv provides access to the GPU(s) of the compute node.
--fakeroot is no longer needed when you’re not performing root-level actions.
--writable is no longer needed when you’re not modifying the container.
Verify GPU:
nvidia-smi
You should see the GPU(s) allocated to your node.
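Since the sandbox was built from NVIDIA’s -devel image, the CUDA toolkit should also be present inside the container. As an optional extra check:

```shell
# Confirm the CUDA compiler bundled in the -devel image is on the PATH.
nvcc --version
```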
Step 7: Start Ollama
Inside the container:
ollama serve
This starts a local Ollama server. To interact with it:
Open a second terminal and SSH into the same node.
Load Apptainer and re-enter the container:
module load apptainer
apptainer shell --nv \
--bind /localscratch --bind /home --bind /projects --bind /common \
ubuntu24.04
Run a model:
ollama run gemma3:1b
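While ollama serve is running, you can also query the server over its HTTP API, which listens on localhost:11434 by default. For example, from the second terminal:

```shell
# Send a one-shot prompt to the local Ollama server; assumes the
# gemma3:1b model from the example above and the default port.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3:1b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```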
Summary:
You now have an interactive and writable Ubuntu container with root privileges.
You can install any package you need without affecting the host system.
GPU-enabled containers allow you to run GPU workloads like Ollama.
Containers are portable and reproducible, ensuring your environment works across compute nodes.