Users can always get a brief summary of their projects and the amount of resource remaining in those projects with the budgets command:
bash-2.05a$ budgets
z001: 999999 AU 416666:15:0
Workspace for each user is located at:
where project_code is the code for your project (e.g. x01); group_code is the code for your project group if your project has groups, or the same as the project code if not (e.g. x01-a); and username is your login name.
You can use the temporary scratch space from a batch job by including the line JTMPDIR=`lljtmp` in your LoadLeveler script. The variable JTMPDIR then contains the name of the temporary directory and can be used with the standard UNIX commands, e.g. cd $JTMPDIR, as in the sketch below.
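For illustration, a minimal fragment of such a script; the program name my_prog and the file results.dat are placeholders, not HPCx utilities:

JTMPDIR=`lljtmp`
cd $JTMPDIR
$HOME/bin/my_prog > results.dat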
The temporary scratch space (4.5TB at the time of writing) is shared between all jobs running on the machine on a first-come, first-served basis. Unfortunately, no guarantees can be made: other jobs may already have taken it all.
Important note: the scratch space is erased once your job finishes. You must copy any data you wish to keep into your home or work space from inside your LoadLeveler script, before the job ends.
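Continuing the illustrative fragment above, the last lines of the script should therefore copy anything worth keeping out of the scratch directory:

cp $JTMPDIR/results.dat $HOME/results.dat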
User and group quotas can be interrogated with the mmlsquota command:
bash-2.05a$ mmlsquota -g z001
Disk quotas for group z001 (gid 900):
                        Block Limits                        |      File Limits
Filesystem type       KB  quota     limit  in_doubt  grace  |  files  quota  limit  in_doubt  grace
hsm        GRP   3900800      0  10485760    146624   none  |  14910      0      0       115   none
work       GRP   no limits
tmpchkpt   GRP   no limits
home       GRP   no limits
Note that this command is in /usr/lpp/mmfs/bin/, which should be added to your PATH before use.
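For example, in a bash shell:

bash-2.05a$ export PATH=$PATH:/usr/lpp/mmfs/bin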
All users have a login and password on the HPCx Administration Web Site (aka the SAF page):
Once logged into this web site, users can find out much about their usage of the HPCx system.
These features are largely self-documenting and are not described further here.
The standard charging policy on HPCx is to multiply the elapsed wall-clock time for each job by the number of nodes used. Since parallel production jobs are granted exclusive access to their nodes, each node is charged at the cost of all of its 16 CPUs, even if your job uses only a fraction of them. For example, a job lasting 1 hour on 50 CPUs spread over four nodes will be charged as 64 CPU hours. For most jobs it will therefore be most economical to ask for a multiple of 16 CPUs, i.e. 16, 32, 48, 64, ... CPUs.
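As a sketch of this arithmetic only (not an HPCx utility), the charge can be computed in the shell by rounding the CPU count up to whole 16-CPU nodes:

# illustrative values from the example above: 50 CPUs for 1 hour
ncpus=50
hours=1
nodes=$(( (ncpus + 15) / 16 ))   # rounds up to 4 nodes
echo $(( nodes * 16 * hours ))   # prints 64 charged CPU hours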
However, there are two exceptions to this rule: serial and interactive jobs. In both cases users are charged only for the CPU time actually used.
For serial jobs we would expect this to be almost identical to the wall-clock time, as we do not oversubscribe CPUs (i.e. there are never more serial processes than the number of CPUs dedicated to serial jobs). For interactive jobs we allow only one process to run on each CPU, so users will be charged for the number of processes multiplied by the elapsed wall-clock time.
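For example (illustrative figures only), an interactive session running four processes for half an hour of wall-clock time would be charged as 4 x 0.5 = 2 CPU hours.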