Virtual Environments: How to Build Using pip and venv
This page is part of a collection of documentation on constructing and using virtual environments.
ARC suggests the use of MiniForge as one way to construct conda virtual environments (CVEs). (As noted elsewhere, Anaconda can no longer be used because of changes in Anaconda's terms of use that are affecting universities nation-wide and beyond.)
However, there may be times when you want a different approach to creating a virtual environment (VE). One approach is to use pip and venv, and that approach is described here. Note that the resulting VE is not a conda VE.
Steps for Building a Virtual Environment (VE)
The steps are given concisely here; the sections under Details below, named after the steps, provide additional detail if needed. A consolidated example session appears after the list of steps.
Steps to Build
1. Log onto the machine on which you wish to run code.
2. Identify the partition (i.e., queue) to which you will submit your job.
3. Request resources on a compute node of that partition. These resources will be used to build the VE. A sample form of a request, for TC or Owl:
salloc --account=<account> --partition=<partition_name> --nodes=<number of nodes> --ntasks-per-node=<number of tasks per node> --cpus-per-task=<number of cores per task> --time=<duration of resources>
This will return several pieces of information when slurm provides the resources to you. Two of the most important are the slurm JOB ID corresponding to this request and the compute node whose resources you will use.
4. ssh to the compute node returned from the salloc resource request. Enter:
ssh XXX
where XXX is the compute node name returned from the salloc command; it consists of the cluster name (e.g., owl for the Owl cluster) or an abbreviation of the cluster name (e.g., tc for the Tinkercliffs cluster) and a three-digit number. Examples for XXX are tc032 and owl007.
5. Identify the version of Python that you need. A key aspect of this approach is that you need a module, residing on the particular cluster, for the particular version of Python that you want in your VE. This is generally not too difficult a problem. For example, on Tinkercliffs, the modules that provide versions of Python from 3.7 through 3.11 are:
Python/3.7.2-GCCcore-8.2.0
Python/3.7.4-GCCcore-8.3.0
Python/3.8.2-GCCcore-9.3.0
Python/3.8.6-GCCcore-10.2.0
Python/3.9.5-GCCcore-10.3.0-bare
Python/3.9.5-GCCcore-10.3.0
Python/3.9.6-GCCcore-11.2.0-bare
Python/3.9.6-GCCcore-11.2.0
Python/3.10.4-GCCcore-11.3.0-bare
Python/3.10.4-GCCcore-11.3.0
Python/3.10.8-GCCcore-11.3.0-bare
Python/3.10.8-GCCcore-11.3.0
Python/3.10.8-GCCcore-12.2.0-bare
Python/3.10.8-GCCcore-12.2.0
Python/3.11.3-GCCcore-12.3.0
Python/3.11.5-GCCcore-13.2.0
6. Reset modules. Enter
module reset
7. Load a module for that version of Python. For example, if you need Python 3.9, you can load the following module from the list above by entering
module load Python/3.9.6-GCCcore-11.2.0
8. Create the virtual environment (VE). Enter
python -m venv /path/to/virt-env/<VE name>
9. Activate the VE. You must specify the activate script within the bin directory of your VE. Enter
source /path/to/virt-env/<VE name>/bin/activate
10. Check the Python version in the VE. Enter
python --version
and you should get the same version of Python as is in the module.
11. Add packages to your VE. Enter the following command as many times as you need, each time installing a package (<package_name>) that is not yet in the VE:
python -m pip install <package_name>
If the system prints a message to update pip, you can update it by entering
python -m pip install --upgrade pip
12. List the packages in the VE. Enter
pip list
13. Deactivate the VE. Enter
deactivate
14. Leave the compute node. After you are done building the VE, exit off the compute node by typing
exit
15. Relinquish resources. Enter:
scancel XXX
where XXX is the slurm JOB ID (i.e., an integer) corresponding to the resource request.
Note that if you find you want additional packages in your VE at this point, you merely repeat the steps starting with step 9 (activating the VE) above.
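Putting the steps together, a hypothetical end-to-end session on Tinkercliffs might look like the following. The account name (myaccount), partition (normal_q), VE path, compute node name, and JOB ID are illustrative placeholders; substitute your own values.
salloc --account=myaccount --partition=normal_q --nodes=1 --ntasks-per-node=1 --cpus-per-task=1 --time=2:00:00
ssh tc032
module reset
module load Python/3.9.6-GCCcore-11.2.0
python -m venv $HOME/virt-envs/myproject-pv
source $HOME/virt-envs/myproject-pv/bin/activate
python --version
python -m pip install numpy
pip list
deactivate
exit
scancel 123456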
Details
Log onto the machine on which you wish to run code
From a terminal, type
ssh <username>@<clustername>.arc.vt.edu
where <username> is your user name and <clustername> is the name of the cluster you are trying to log into. Examples of the latter are tinkercliffs2 and owl1.
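For example, a user with the (hypothetical) user name jdoe would log onto the Owl cluster by typing:
ssh jdoe@owl1.arc.vt.edu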
Identify the partition (i.e., queue) to which you will submit your job
To list the partitions on a cluster, type:
sinfo
or
sinfo --long
or
sinfo | awk -F " " 'NR > 1 { a[$1]++ } END { for (b in a) { print b } }'
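The awk one-liner above prints only the unique partition names from sinfo's output. An alternative sketch uses sinfo's own output formatting (the --format option and its %P partition field are standard slurm features):
sinfo --format="%P" | sort -u
In this output, the default partition is marked with a trailing asterisk.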
Request resources on a compute node of that partition
To build a VE, it is most likely that you will only need one core of one compute node. For the sample form of resource request, for TC or Owl,
salloc --account=<account> --partition=<partition_name> --nodes=<number of nodes> --ntasks-per-node=<number of tasks per node> --cpus-per-task=<number of cores per task> --time=<duration of resources>
one may take <number of nodes> as 1, <number of tasks per node> as 1, and <number of cores per task> as 1. A duration <duration of resources> of two hours, i.e., 2:00:00, will usually suffice.
When slurm returns with your resources, note the names of the compute node(s) given to you and the slurm JOB ID. The names of compute nodes are used to determine which nodes to ssh into. The slurm JOB ID is used to relinquish resources when done with them, as a last step in this process.
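If you need this information again while the allocation is active, the standard slurm command squeue will show it; the JOBID and NODELIST columns give the slurm JOB ID and the compute node name(s):
squeue -u $USER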
How to determine an appropriate Python module
To find all occurrences of Python (i.e., to find all Python modules), first write to a file all of the modules available on the cluster by typing:
module avail >& list.of.all.modules
Then open the file list.of.all.modules and search for Python (note the capital P) to find the available versions. Use the full module name in the module load command above.
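Alternatively, you can search without opening the file, e.g., by grepping the saved list; and on clusters whose module system is Lmod (assumed here), module spider can search the module tree directly:
grep "Python/" list.of.all.modules
module spider Python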
Create a VE
There are many ways to create a virtual environment (VE). If you use multiple ways to construct VEs, you might consider putting -pv- (or similar) in the name of the VE to denote that it was built using pip and venv. Different methods of generating VEs result in different ways to activate them.
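As an illustration of such a naming convention, a VE for a (hypothetical) project named myproject, built with this method around Python 3.9, might be created as:
python -m venv $HOME/virt-envs/myproject-pv-py39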
Use of VEs
You can use a VE only on the cluster, and only with the type of compute node, that was used to build the VE.
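To use the VE in a batch job, load the same Python module and activate the VE inside the job script. The following is a minimal sketch; the account, partition, time, and script name (my_script.py) are placeholders:
#!/bin/bash
#SBATCH --account=myaccount
#SBATCH --partition=normal_q
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --time=1:00:00
module reset
module load Python/3.9.6-GCCcore-11.2.0
source /path/to/virt-env/<VE name>/bin/activate
python my_script.py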