3clp_HREM and LIG_only_ligand_HREM from your workstation to the remote HPC working directory. For example, from the working directory where you launched the HPC_Drug command in Step 2, just issue
scp -r 3clp_HREM user@hpcaddress:USERHOME
scp -r LIG_only_ligand_HREM user@hpcaddress:USERHOME

where user@hpcaddress is the username and address of your HPC account and USERHOME is your home directory at the HPC front-end. On most HPC platforms, disk quotas on non-volatile storage are
limited. In this case it may be necessary to copy the two HREM
directories to the HPC scratch area prior to execution, as the present
computational Step 3 will generate several tens of GB of data on
the HPC:
  
cp -r USERHOME/3clp_HREM USER_SCRATCH/bound
cp -r USERHOME/LIG_only_ligand_HREM USER_SCRATCH/unbound
where USER_SCRATCH is the user scratch area on the HPC.
In each of the two folders, bound and unbound, you will find a script file named MAKETPRFILES.sh: one for the bound-state run (ligand annihilation in the complex) and one for the unbound-state run (ligand growth in the solvent). These scripts generate all the GROMACS tpr files needed to run the HREM simulations on the HPC, using the mdp, top, and gro files generated by HPC_Drug in Step 2.
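For reference, the following is a minimal sketch of what a tpr-generation script of this kind typically does; the replica count and the file names (HREM.mdp, topol_scaled_<i>.top, system.gro) are illustrative placeholders, not the actual names produced by HPC_Drug.

#!/bin/bash
# Illustrative sketch only -- not the HPC_Drug-generated MAKETPRFILES.sh.
# Builds one tpr file per HREM replica with gmx grompp, assuming hypothetical
# per-replica scaled topologies and a common mdp/gro pair.
NREPLICAS=8   # e.g. 8 replicas for the unbound state, 24 for the bound state
for i in $(seq 0 $((NREPLICAS - 1))); do
    gmx grompp -f HREM.mdp \
               -p "topol_scaled_${i}.top" \
               -c system.gro \
               -o "HREM_${i}.tpr" \
               -maxwarn 10
done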
Once you have interactively executed the MAKETPRFILES.sh scripts in the HPC_Drug-generated HREM directories, you are ready to submit your parallel jobs for vDSSB enhanced sampling on the HPC.
To this end, in Step 2 HPC_Drug also generated two tentative batch files for HPC submission, one in each of the two directories 3clp_HREM and LIG_only_ligand_HREM, based on the syntax of the SLURM workload manager. These two SLURM submission files, for the bound-state and unbound-state runs, perform the enhanced sampling of the vDSSB end-states for the PF-07321332-3CLpro complex on the heterogeneous Marconi100 HPC platform (CINECA), equipped with 4 Nvidia VOLTA GPUs per node.
cd USER_SCRATCH/bound
./MAKETPRFILES.sh
sbatch HREM_input.slr

cd USER_SCRATCH/unbound
./MAKETPRFILES.sh
sbatch HREM_input.slr
The bound-state and unbound-state jobs request 36 nodes (144 Nvidia VOLTA GPUs) and 8 nodes (32 Nvidia VOLTA GPUs), respectively. The bound-state job produces about 3.5 microseconds of simulation in total (142 ns on the target state) in 24 wall-clock hours, running six replicates of a 24-replica exchange simulation with a hot zone that includes the ligand and nearby residues. The unbound-state job produces about 250 ns in total (32 ns on the target state) in 4-5 wall-clock hours, running four replicates of an 8-replica exchange simulation with torsional tempering of the full ligand.
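For orientation only, a bound-state SLURM submission script along these lines might look as follows; the partition, account, and module names and the replica-directory layout are hypothetical placeholders, to be replaced with the actual content of the HPC_Drug-generated HREM_input.slr and the settings of your HPC platform.

#!/bin/bash
#SBATCH --job-name=HREM_bound
#SBATCH --nodes=36                    # 36 nodes x 4 GPUs = 144 Volta GPUs (bound state)
#SBATCH --ntasks-per-node=4           # one MPI rank per GPU
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:4
#SBATCH --time=24:00:00
#SBATCH --partition=<gpu_partition>   # placeholder: your platform's GPU partition
#SBATCH --account=<your_account>      # placeholder: your accounting project

module load gromacs-plumed            # placeholder: a PLUMED-patched GROMACS module

# One battery of exchanging replicas per -multidir set; directory names are placeholders.
srun gmx_mpi mdrun -multidir REPLICA_*/ -plumed plumed.dat -replex 100 -deffnm HREM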
N.B.(1): On HPC platforms, GROMACS is usually made available by issuing a specific module load directive prior to submission or directly within the batch submission script (see e.g. the bound-state SLURM script). The HREM execution requires GROMACS to be patched with PLUMED. If a GROMACS-PLUMED module is not available on the HPC, the user must compile and patch GROMACS with PLUMED on the HPC before submission, generating his/her own gmx_mpi executable, and edit the SLURM script so that it specifies the full path of the PLUMED-patched gmx_mpi command. Compiling and patching GROMACS with PLUMED is described here (section Patching your MD code).
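As a rough outline (not a substitute for the instructions linked above), the patching procedure typically proceeds as sketched below; version numbers, install paths, and build settings are illustrative and must be adapted to the HPC environment.

# Illustrative outline only; follow the linked instructions for your platform.
# 1) Build and install PLUMED, then put it on the PATH:
cd plumed-2.x.y
./configure --prefix=$HOME/opt/plumed && make -j 8 && make install
export PATH=$HOME/opt/plumed/bin:$PATH

# 2) Patch the GROMACS source tree with PLUMED (run from the GROMACS source root):
cd gromacs-20xx.y
plumed patch -p                  # interactively select the matching GROMACS engine

# 3) Build an MPI/GPU-enabled, PLUMED-patched gmx_mpi:
mkdir build && cd build
cmake .. -DGMX_MPI=ON -DGMX_GPU=ON -DCMAKE_INSTALL_PREFIX=$HOME/opt/gromacs-plumed
# (for GROMACS >= 2021 use -DGMX_GPU=CUDA instead of -DGMX_GPU=ON)
make -j 8 && make install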
N.B.(2): The provided SLURM files (for the bound-state and unbound-state runs) must be adapted by the end-user to the job scheduling/accounting system of the specific HPC platform. In the Zenodo repository, a PBS script for batch submission is also provided.
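For users on a PBS-based scheduler, the resource request of the bound-state job sketched above translates roughly as follows (PBS Pro syntax; queue and account names are placeholders, and the per-node core count must match your platform):

#!/bin/bash
#PBS -N HREM_bound
#PBS -l select=36:ncpus=32:ngpus=4:mpiprocs=4   # 36 nodes, each with 4 GPUs and 4 MPI ranks
#PBS -l walltime=24:00:00
#PBS -q <gpu_queue>          # placeholder: your platform's GPU queue
#PBS -A <your_account>       # placeholder: your accounting project

cd $PBS_O_WORKDIR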