There is currently no quota or limit on the number of jobs that can be submitted to or executed by the Portable Batch System (PBS) queue on the Linux cluster. For optimal use of resources, users must follow the rules of the “Computing Resource Usage Policy” described below for running jobs.

It is assumed that the user has carefully read the 'Mini-Guide for New User'.
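As a quick refresher, jobs are submitted to PBS as batch scripts via qsub. Below is a minimal sketch of a serial job script; the job name, walltime, and program name are illustrative placeholders, and the actual defaults and limits are described in the Mini-Guide.

    #!/bin/bash
    # sample.pbs -- minimal serial PBS job (illustrative values)
    #PBS -N myjob                # job name (placeholder)
    #PBS -l nodes=1:ppn=1        # one core on one node
    #PBS -l walltime=02:00:00    # assumed walltime; check the Mini-Guide
    #PBS -j oe                   # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR            # run from the directory qsub was called in
    ./my_program > output.log    # my_program is a placeholder

Submit the script with qsub sample.pbs and monitor it with qstat.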

Cluster Resources Allowed
C10-Cluster (14 nodes): max 25% (4 nodes)
C11-Cluster (40 nodes): max 25% (10 nodes)
C12-Cluster (44 nodes): max 20% (7 nodes)
C13-Cluster (32 nodes): max 16% (4 nodes)
  1. If resources are free, users may submit jobs exceeding the above limits, as long as this does not cause jobs from other users to be queued. However, if a user's excess usage is found to keep other users' jobs queued, the jobs in excess of quota will be cancelled by the system administrator without prior warning; jobs submitted last will be cancelled first. (A sample job script that stays within these limits appears after this list.)

  2. Do not run more than 100 short sequential jobs at a time.

  3. It is strongly recommended that C10 and C11 be used for serial jobs, and C11, C12, and C13 for parallel shared-memory jobs.

  4. Users may request that the coordinators allocate larger-than-usual resources for a limited period of time.
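As a sketch of a request that stays within the caps above, the script below asks for a single node for a shared-memory job, as recommended for C11, C12, and C13; the ppn value is an assumed cores-per-node figure, not a confirmed hardware specification.

    #!/bin/bash
    # omp.pbs -- shared-memory (OpenMP) job on one node (illustrative)
    #PBS -N omp_job
    #PBS -l nodes=1:ppn=8        # ppn=8 is an assumed cores-per-node value
    #PBS -l walltime=12:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR
    export OMP_NUM_THREADS=8          # match the ppn requested above
    ./my_openmp_program > output.log  # program name is a placeholder

Before submitting beyond your quota on an idle cluster, qstat -q shows the per-queue load and qstat -u $USER lists the jobs (and hence nodes) you already hold.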

The Linux cluster is a shared computing resource. Jobs containing long wait or sleep loops are not allowed on the cluster, as they waste valuable computing time that could be used by other researchers; any such job may be terminated without advance notice. Additionally, any process that creates performance or load issues on the head node, or that interferes with other users' jobs, may be terminated. This includes anything other than batch-queue jobs running on the compute nodes.
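To confirm that none of your own jobs is idling in a wait or sleep loop, you can inspect and remove them yourself before the administrators do; the job ID below is a placeholder.

    qstat -u $USER     # list your running and queued jobs
    qstat -f 12345     # full details (state, resource usage) for job 12345
    qdel 12345         # delete a job you no longer need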

Disk Usage Policy

  1. Do not use your /home/$user area for installation of programs or for any serious computational I/O work. Users should not exceed 10.0 GB in their home area without giving advance notice.

  2. Use the /c$scratch/username partition of the respective cluster to run and install your programs.

  3. Users are responsible for backing up the data they generate in the /c$scratch/username area; it can be moved to /data$/username.

  4. Users shall occupy no more than 1.0 TB of /c$scratch storage on the respective cluster, including installed programs, generated data, etc.

  5. If you generate data related to your research work, it is your responsibility to BACK IT UP SOMEWHERE OTHER THAN HRI HPC STORAGE. HRI HPC will not be responsible for any loss of data in case of a crash or failure of the storage servers. (A sketch of the usage checks and backup steps appears below.)
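Below is a sketch of the usage checks and backup steps implied by the items above, written for a hypothetical cluster 12; the $ in the policy paths is taken to stand for the cluster number, and the backup host is a placeholder.

    # Check home-area usage against the 10.0 GB limit:
    du -sh /home/$USER

    # Check scratch usage against the 1.0 TB limit (c12scratch assumed):
    du -sh /c12scratch/$USER

    # Move finished results from scratch to the data area:
    mv /c12scratch/$USER/results /data12/$USER/

    # Copy irreplaceable data off HRI HPC storage entirely:
    rsync -av /data12/$USER/results/ user@backup.example.org:results/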

Access Policy

An account on any HRI-HPC system may only be used by the authorized account holder. Defective configurations, program errors, and other damaging or disruptive activities directed against the HRI-HPC facilities or other users are prohibited.

Violations of the rules mentioned in the “Computing Resource Usage Policy”, “Admissible Restrictions for Running Jobs”, “Disk Usage Policy”, “Access Policy”, or any other policy will attract the following actions:

  1. Termination of job/jobs without any warning.

  2. Locking of account.

  3. Statutory action as decided by the competent authority.

If you have any questions or concerns regarding any of these policies, please send an email to us.