ARC System Changes: 2025-05

During mid-May, ARC systems will be offline for regular maintenance, and a number of major changes will be implemented during this window. This page presents a brief outline of the changes, explains the impact they may have on your use of ARC systems, and provides a FAQ regarding these changes.

If you have questions about this, there are several ways to get more information or request help:

  • submit a help request via https://arc.vt.edu/help

  • attend ARC office hours: https://arc.vt.edu/office-hours

  • return to this page for updates

[[TOC]]

Partition Changes

Consolidation

ARC clusters host a wide variety of resource types because Virginia Tech researchers have a wide variety of computational needs. But small, disconnected pools of resources suffer alternately from very low and very high demand. Grouping resources into larger pools gives jobs access to more resources, which can help them start faster.

  • dev_q partitions are being removed (see info on QOS’s for alternative priority scheduling for short jobs)

  • All CPU-only partitions within a cluster are combined into a single partition.

  • GPU partitions can be combined when the GPU devices are of the same model.

| Cluster | Node Types | Partitions | Notes |
| --- | --- | --- | --- |
| Falcon | A30 GPU nodes | a30_normal_q, a30_preemptable_q | a30_dev_q removed |
| Falcon | L40S GPU nodes | l40s_normal_q, l40s_preemptable_q | l40s_dev_q removed |
| Falcon | V100 GPU nodes | v100_normal_q, v100_preemptable_q | formerly part of the Infer cluster |
| Falcon | T4 GPU nodes | t4_normal_q, t4_preemptable_q | formerly part of the Infer cluster |
| Owl | AMD Zen4 “Genoa” nodes | normal_q, preemptable_q | dev_q removed |
| Owl | AMD Zen3 “Milan” large-memory nodes | normal_q, preemptable_q | formerly in largemem_q and hugemem_q |
| Tinkercliffs | AMD Zen2 “Rome” nodes | normal_q, preemptable_q | dev_q removed |
| Tinkercliffs | AMD Zen2 “Rome” large-memory nodes | normal_q, preemptable_q | formerly in largemem_q |
| Tinkercliffs | Intel “CascadeLake-AP” nodes | normal_q, preemptable_q | formerly in intel_q |
| Tinkercliffs | HPE 8x A100-80G GPU nodes | a100_normal_q, a100_preemptable_q | no change |
| Tinkercliffs | Nvidia DGX 8x A100-80G GPU nodes | a100_normal_q, a100_preemptable_q | formerly in dgx_normal_q |
| Tinkercliffs | Dell 8x H200 GPU nodes | h200_normal_q, h200_preemptable_q | FY25 new acquisition |

User-selectable features for heterogeneous partitions

| Cluster | Node Types | Partitions | How to select |
| --- | --- | --- | --- |
| Falcon | A30 GPU nodes | a30_normal_q, a30_preemptable_q | n/a - homogeneous partitions |
| Falcon | L40S GPU nodes | l40s_normal_q, l40s_preemptable_q | n/a - homogeneous partitions |
| Falcon | V100 GPU nodes | v100_normal_q, v100_preemptable_q | n/a - homogeneous partitions |
| Falcon | T4 GPU nodes | t4_normal_q, t4_preemptable_q | n/a - homogeneous partitions |
| Owl | AMD Zen4 “Genoa” nodes | normal_q, preemptable_q | --constraint=avx512 |
| Owl | AMD Zen3 “Milan” large-memory nodes | normal_q, preemptable_q | --mem=<size> larger than 768G |
| Tinkercliffs | AMD Zen2 “Rome” nodes | normal_q, preemptable_q | --constraint=amd |
| Tinkercliffs | AMD Zen2 “Rome” large-memory nodes | normal_q, preemptable_q | --constraint=amd and --mem=<size> larger than 256G |
| Tinkercliffs | Intel “CascadeLake-AP” nodes | normal_q, preemptable_q | --constraint=intel |
| Tinkercliffs | HPE 8x A100-80G GPU nodes | a100_normal_q, a100_preemptable_q | n/a |
| Tinkercliffs | Nvidia DGX 8x A100-80G GPU nodes | a100_normal_q, a100_preemptable_q | --constraint=dgx |
| Tinkercliffs | Dell 8x H200 GPU nodes | h200_normal_q, h200_preemptable_q | n/a - homogeneous partitions |
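As an illustrative sketch (the account name and program are placeholders; the partition, constraint, and memory values come from the tables above), a job targeting the Tinkercliffs AMD Zen2 large-memory nodes could be requested like this:

```shell
#!/bin/bash
# Hypothetical job script: target Tinkercliffs large-memory "Rome" nodes.
#SBATCH --account=myallocation    # placeholder allocation name
#SBATCH --partition=normal_q      # consolidated CPU partition
#SBATCH --constraint=amd          # restrict to AMD Zen2 "Rome" nodes
#SBATCH --mem=384G                # >256G steers the job to large-memory nodes
#SBATCH --ntasks=16
#SBATCH --time=12:00:00

./my_program                      # placeholder workload
```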

Decoupling CPU and Memory Requests to enable “right-sizing” of jobs

CPU and memory will continue to be allocated together by default as “slices” of a node, but users can now specify exactly how much of each resource their job needs. Decoupling CPU and memory requests provides several advantages:

  • no surprise CPU core additions to compensate for memory requests

  • job billing is no longer higher than expected because of those extra cores

  • since unrequested CPU cores are no longer added to jobs to provide requested memory, more CPU cores remain available for other jobs

  • more accurate billing incentivizes research groups to monitor resource utilization and to “right-size” their jobs

New Model

Default CPU/memory allocation behavior is unchanged to help minimize the impact of the changes and to provide a memory allocation scheme which avoids many accidental “out of memory” (OOM) situations. If you find that a job needs more memory, but you don’t need more cores, then simply request more memory.

Use the command seff <jobid> on a completed job to examine Slurm’s report of memory allocated versus used for a job and customize the memory allocation in future jobs with #SBATCH --mem=<size>[units] where <size> is an integer and [units] is one of M (default), G, or T.
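For example (the job ID and sizes below are illustrative):

```shell
# Review a completed job's allocated vs. used resources:
seff 1234567

# If memory utilization was near (or over) the allocation, raise only
# the memory in the next job script, leaving the core count alone:
#SBATCH --mem=24G
```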

Old Model

Between 2018 and 2025, the standard ARC resource request model was to allocate CPU and memory resources in fixed “slices” of a node. This meant that, conceptually, when 8 cores were requested on a Tinkercliffs normal_q node with 128 CPU cores and 256GB of memory, the job would get the 8 requested cores and a proportional amount of node memory ((8/128)*256=16GB). Likewise, if a job requested 1 CPU core and 16GB of memory, Slurm was configured to allocate the proportional share of CPU cores to go with the memory. The job would therefore get 8 CPU cores and 16GB of memory.
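The proportional-slice arithmetic from the Tinkercliffs example above can be checked directly:

```shell
# Old model: memory came in fixed per-core slices of the node.
cores_per_node=128
mem_per_node=256                                   # GB
mem_per_core=$((mem_per_node / cores_per_node))    # 2 GB per core

echo "8 cores implied $((8 * mem_per_core)) GB"    # prints: 8 cores implied 16 GB
echo "16 GB implied $((16 / mem_per_core)) cores"  # prints: 16 GB implied 8 cores
```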

Job Billing

Free Tier Increases to 1M units per PI monthly

ARC is implementing a unified billing model alongside the other changes, intended to make usage of the clusters more uniform and convenient. Because CPU, memory, and GPU resources are now billed separately, a job may consume more units than before.

For example, a full Tinkercliffs normal_q node will cost about 143 units per hour now that memory is billed, where previously it was 128 units per hour - an increase of about 12%. However, the free-tier monthly allocation is also increasing from 800,000 units per PI to 1,000,000 units, which more than compensates.
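The quoted increase can be verified with a quick calculation:

```shell
# Full-node units per hour on Tinkercliffs normal_q, before and after:
old=128
new=143
awk -v o="$old" -v n="$new" 'BEGIN { printf "%.1f%% increase\n", (n - o) / o * 100 }'
# prints: 11.7% increase
```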

Additionally, with the decoupling of memory and CPU, users can reduce their effective billing by examining the resource utilization of completed jobs and tuning future jobs to request precisely the CPUs and memory they need.

Billing Begins for Owl and Falcon clusters

New cluster resources are sometimes released with no billing to help encourage adoption and migration to the new resources and to provide a grace period while adapting jobs. The Owl and Falcon clusters were both released for general use in the Fall 2024 term with zero billing.

After the May maintenance, usage of these clusters will be billed against “free-tier” accounts.

Billing Reflects All Resources Allocated

Job billing will now take into account all the requested resources: CPU, memory, and GPU.

Adding QOS Options for Scheduling Flexibility

We are introducing some user-selectable Quality of Service (QOS) options to provide enhanced flexibility to balance tradeoffs in resource scale and scheduling priority.

Jobs are scheduled in order of priority with higher priority jobs generally being scheduled sooner. While multiple factors affect a job’s total priority calculation, the QOS factor is perhaps the most impactful.

Tradeoff examples:

  • get higher priority for a job, but with a reduced maximum timelimit

  • get an extended duration for a job, but with reduced scheduling priority and a smaller maximum job size

  • get a larger job than is normally allowed, but with a reduced maximum timelimit

Concept table for the available QOS options:

| QOS | Priority | Max Timelimit | Resource Scale |
| --- | --- | --- | --- |
| Base | 1000 | 7-00:00:00 | 100% of partition limits |
| Short | 2000 | 1-00:00:00 | 150% of partition limits |
| Long | 500 | 14-00:00:00 | 25% of partition limits |
| Preemptable | 0 | 30-00:00:00 | 12.5% of partition limits |
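For example, a short high-priority job can opt into the Short QOS at submission time (the script name is a placeholder):

```shell
# Request the Short QOS: higher priority, 1-day maximum timelimit.
sbatch --qos=short --time=0-04:00:00 my_job.sh

# Equivalently, inside the job script:
#SBATCH --qos=short
#SBATCH --time=0-04:00:00
```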

Cluster organization

Infer cluster being retired

The Infer cluster aggregated a variety of GPU resources. The P100 nodes had been in service for 9 years and have been fully eclipsed by resources in other clusters. As of the May maintenance, they are being removed from service.

The remaining T4 and V100 nodes are also aging (5 and 7 years old, respectively) but will be merged into the Falcon cluster, which aligns well with their current utility. Along the way, they will get updates to their operating systems and software stacks.

Operating System Upgrade on All Clusters

After the maintenance, all ARC clusters will be running the same operating system and a common set of OS packages. This will provide a more unified experience for accessing cluster resources.

Cluster data to be made “private”

With over a thousand active users and a million jobs per year, commands like squeue which display cluster status produce overwhelming amounts of output. To streamline these views and protect personal information, we are enabling Slurm features which limit the visibility of most job information so that you see only your own jobs.
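Listing your own jobs works as before, for example:

```shell
# Show only your own jobs (other users' jobs are hidden regardless):
squeue --me

# Equivalent explicit form:
squeue -u "$USER"
```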

ARC Provided Software and Modules Unification and Overhaul

To make using the clusters easier and more efficient for research, ARC provides pre-installed software modules for a large number of scientific applications and their dependencies. Most of these are built from source code, and we attempt to tune the code so that it makes full use of the architectural features of each node type, such as GPU devices and CPU microarchitecture instruction sets - particularly vectorization instruction variants like AVX, AVX2, and AVX-512.

We have historically performed software installations in an ad-hoc manner based on requests from researchers, but this has resulted in highly differentiated sets of available software depending on the cluster and node type. We are modifying this approach by standardizing on a common set of applications to be provided on all clusters. This should make it easier to move workloads among various cluster resources and generally reduce the likelihood of having to wait for software installations.

Mount point updates for software stacks

ARC has used several different mount points for the software we provide. Most included a reference to the system name and elements of the node micro-architecture. This made paths long and complex and also made it more complicated to search for and load some modules (e.g. module load tinkercliffs-rome/MATLAB/<version>). We are streamlining these mount points in a way that will provide a more consistent experience within node types and also across clusters.

| installation system | new standard mount | example of previous |
| --- | --- | --- |
| EasyBuild software | /apps/arch/software/ | /apps/easybuild/software/tinkercliffs-rome |
| EasyBuild modules | /apps/arch/modules/ | /apps/easybuild/modules/tinkercliffs-rome/all |
| manually installed software | /apps/arch/software/ | /apps/packages/tinkercliffs-rome/ |
| manually installed modules | /apps/arch/modules/ | /apps/modulefiles/tinkercliffs-rome/ |
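As a hypothetical before/after sketch (the new module name assumes the cluster/architecture prefix is simply dropped):

```shell
# Previously, module names carried the cluster/architecture prefix:
module load tinkercliffs-rome/MATLAB/<version>

# Under the new common tree mounted at /apps/arch/modules/:
module load MATLAB/<version>   # assumption: no prefix needed
```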

Answers for some frequently asked questions (FAQ)

Job Script Syntax and Parameters

sbatch: error: invalid partition specified: dev_q

We have consolidated partitions to make more resources available to jobs without having to guess and check multiple partitions. See the section on partition changes above for more details and a list of available partitions.

If you used dev_q partitions for increased job priority for a few short jobs, then you may consider using the --qos=short option as described above.

Software

“The software module I used before maintenance isn’t there now. Can you reinstall it for me?”

Yes - we preinstalled as many packages as we could, and software installations are receiving priority attention in the weeks after the clusters were reprovisioned. Additionally, ARC is making a concerted effort to standardize the software available across all clusters.

Use module spider <string> from the command line to search for packages which are already installed on Tinkercliffs nodes. If a package you need is not found, please submit a help request via https://arc.vt.edu/help. We will add it to our to-do list.
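For example, to check whether any MATLAB builds are already installed (MATLAB is just an illustrative package name):

```shell
# Case-insensitive search of the module tree:
module spider matlab

# Then view the exact versions and how to load one:
module spider MATLAB/<version>
```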

“What does ‘Legacy Apps’ on Open OnDemand mean?”

Before the maintenance, our Open OnDemand (OOD) apps were developed for the Tinkercliffs cluster with containerized implementations. Due to the containerization, much of the apps’ functionality remains intact after the operating-system update, so we are keeping them available while we continue to develop OOD. We have tested the more commonly used ones like RStudio, Matlab, and Desktop, but be advised that there may be some issues with Legacy apps since they were developed for the previous system.

We are actively developing new and improved apps. You can see the recently released apps in the “Interactive Apps” dropdown as before.

Billing