Platforms

WAVIhpc contains templates for the following platforms: ARCHER2, BAS HPC and JASMIN. But which one should you use?

This page aims to give a high-level comparison and quickstart guide to each platform.

Comparison

Below is a comparison table of usage and hardware specification of the different supported platforms. Number of nodes/cores are not exact, but are intended to be indicative.

| Feature | ARCHER2 | JASMIN | BAS HPC |
| --- | --- | --- | --- |
| Purpose | National HPC service for compute-intensive research | Data-intensive computing for climate/environmental science | BAS's internal HPC cluster |
| Permitted Use | EPSRC + NERC | NERC | BAS internal only |
| CPU | AMD EPYC 7742, 64-core Zen 2 processors | Various Intel Xeon CPUs | Various Intel Xeon CPUs |
| Number of Nodes | ~5,860 total | ~300 total | ~20 total |
| Number of Cores | ~750,000 total | ~19,000 total | ~430 total |
| Memory per Node | 256 GB to 512 GB (standard nodes) | 256 GB to 1024 GB (depending on host group) | 512 GB |

ARCHER2

ARCHER2 is the National HPC service for compute-intensive research, open to EPSRC- and NERC-funded projects. You can read more about its storage types and hardware specifications.

ARCHER2 is the most powerful HPC supported by WAVIhpc, and therefore well-suited to the most compute-intensive ensembles.

Recommended reading:

  1. Quickstart for users.
  2. Running jobs.
  3. Data Management & Transfer.
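ARCHER2 jobs are submitted through the Slurm scheduler. As a minimal sketch, a batch script for a parallel ensemble member might look like the following; the account code, job name, and executable (`wavi_driver`) are placeholders here, so substitute your own project's values and consult the Running jobs documentation for current partition and QoS names:

```shell
#!/bin/bash
# Minimal ARCHER2 Slurm batch script (sketch; replace placeholder values).
#SBATCH --job-name=wavi-ensemble
#SBATCH --account=n02-example    # hypothetical project code; use your own
#SBATCH --partition=standard
#SBATCH --qos=standard
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128    # ARCHER2 compute nodes have 128 cores
#SBATCH --time=01:00:00

# srun launches the executable across the allocated cores
srun ./wavi_driver
```

Submit with `sbatch script.slurm` from a login node and monitor the queue with `squeue --me`.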

BAS HPC

The BAS HPC is BAS's internal HPC cluster. You can read more about the hardware specifications (BAS internal only).

While not as powerful as ARCHER2, this cluster is open only to BAS users. It is well suited to day-to-day running and testing of ensembles, and to internal use.

Recommended reading:

  1. HPC User Guide.
  2. HPC Training.
  3. Linux Service Desk Solutions.

JASMIN

JASMIN is a data analysis facility for NERC funded research or related environmental science projects. It provides storage and compute. You can read more about its storage types, and hardware specifications.

JASMIN provides larger and more versatile storage, while ARCHER2 is the far more powerful compute facility. JASMIN is well suited to day-to-day running and testing of ensembles, to high-memory runs, or simply to trying your ensembles on a different platform.

Recommended reading:

  1. Getting started.
  2. Batch computing.
  3. Introduction to group workspaces.
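JASMIN's batch compute service (LOTUS) is also scheduled with Slurm, so a job script looks much like an ARCHER2 one. The partition, account, and executable names below are illustrative assumptions only; check the Batch computing documentation for the current LOTUS values:

```shell
#!/bin/bash
# Illustrative LOTUS (JASMIN) Slurm script; names below are placeholders.
#SBATCH --job-name=wavi-test
#SBATCH --partition=standard     # check current LOTUS partition names
#SBATCH --account=my-workspace   # hypothetical; use your own account/workspace
#SBATCH --ntasks=16
#SBATCH --mem=64G                # JASMIN suits high-memory runs
#SBATCH --time=02:00:00

srun ./wavi_driver
```

The higher per-node memory available on some LOTUS host groups is what makes this the natural home for the high-memory runs mentioned above.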

Run on ARCHER2, store on JASMIN

As described above, ARCHER2 offers the more powerful compute while JASMIN offers the more versatile storage. You can combine the two by following the Transfers from ARCHER2 documentation.
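Such a transfer can be sketched as a single `rsync` run from an ARCHER2 login node, assuming you hold a JASMIN account with transfer access. The hostname, usernames, and paths below are illustrative only; check the linked documentation for the current transfer servers:

```shell
# Push ensemble output from ARCHER2 work storage to a JASMIN group
# workspace via a JASMIN transfer server (hostnames/paths illustrative).
rsync -av --progress \
    /work/n02/n02/username/ensemble_output/ \
    jasmin_user@xfer-vm-01.jasmin.ac.uk:/gws/nopw/j04/my_gws/ensemble_output/
```

`rsync -av` preserves timestamps and permissions and skips files that are already up to date, so interrupted transfers of large ensembles can be safely resumed by re-running the same command.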