There is currently no quota or limit on the number of jobs that can be submitted to or executed by the Portable Batch System (PBS) queue on the Linux cluster. For optimal use of resources, users must follow the rules in the “Computing Resource Usage Policy” as they apply to running jobs.

It is assumed that users have carefully read the 'Mini-Guide for New User'.

Type of Job             Resource Allowed
Requiring < 6 hours     25% in the first cluster, 10% in the others¹
Requiring 6-48 hours    25% in the first cluster, 10% in the others
Requiring > 48 hours    20% in the first cluster, 10% in the others¹
  1. In the < 6 hr category, a user may use up to 40% of the resources if a cluster is unoccupied. However, the user must cancel the jobs exceeding the above quota if other users submit jobs to the queue.

  2. No more than 100 short sequential jobs should be queued at a time.

  3. It is strongly recommended that C8 and C10 be used for serial jobs, and C9, C11 and C12 for parallel shared memory jobs.

  4. Users may request a larger-than-usual resource allocation from the coordinators for a limited period of time.
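As a sketch of how the walltime categories above map onto a submission, a PBS batch script for the < 6 hour category might request a matching walltime limit. The directives below use standard PBS syntax; the job name and executable are hypothetical, and the exact resource keywords may differ on this cluster:

```shell
#!/bin/bash
#PBS -N short_job            # hypothetical job name
#PBS -l walltime=05:00:00    # stay within the < 6 hour category
#PBS -l nodes=1:ppn=1        # one core, suitable for a serial job

cd "$PBS_O_WORKDIR"          # run from the directory where qsub was invoked
./my_program                 # hypothetical executable
```

The script would be submitted with `qsub script.sh`; jobs expected to run longer than the requested walltime should instead declare a limit in the 6-48 hour or > 48 hour category.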

The Linux cluster is a shared computing resource. Jobs containing long wait or sleep loops are not allowed on the cluster, as they waste valuable computing time that could be used by other researchers. Any job with a long wait or a sleep loop may be terminated without advance notice. Additionally, any process that creates performance or load issues on the head node, or interferes with other users' jobs, may be terminated. This includes processes run on the compute nodes outside the batch queue.

  1. Do not use your /home/$user area for installing programs or for any serious computational I/O work. Users should not exceed 10.0 GB in their home area without giving advance notice.

  2. Use the /c$scratch/username partition of the respective cluster to run and install your programs.

  3. Users are responsible for BACKING UP the data they generate in the /c$scratch/username area. Data may be moved to /data$/username.

  4. Users shall occupy no more than 1.0 TB of /c$scratch storage on the respective cluster, including installed programs, generated data, etc.

  5. If you generate data related to your research work, it is your responsibility to BACK IT UP SOMEWHERE OTHER THAN HRI HPC STORAGE. HRI HPC will not be responsible for any crash or failure of the storage servers or for any loss of DATA.
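The usage limits above can be checked with standard tools. The sketch below reports how much space a directory uses, so it can be compared against the 10.0 GB home and 1.0 TB scratch limits; the function name is hypothetical, and the policy paths such as /c$scratch/username are placeholders in which $ stands for the cluster number:

```shell
#!/bin/sh
# Print how much space a directory uses, in kilobytes (a sketch;
# report_usage is a hypothetical helper, not a cluster-provided command).
report_usage() {
    dir="$1"
    used_kb=$(du -sk "$dir" | cut -f1)   # total size of the tree, in KB
    echo "$dir uses $used_kb KB"
}

# Example: check the home area against the 10.0 GB policy limit.
# report_usage "/home/$USER"
```

Finished data can then be copied off scratch with a standard tool such as `rsync -av SRC/ DEST/` before being removed, keeping usage under the 1.0 TB limit.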

An account on any HRI-HPC system may be used only by the authorized account holder. Defective configurations, program errors, or other damaging/disruptive activities directed against the HRI-HPC facilities or other users are prohibited.

Violations of the rules in the “Computing Resource Usage Policy”, “Admissible Restrictions for Running Jobs”, “Disk Usage Policy”, “Access Policy”, or any other policy, once determined, will attract the following actions:

  1. Termination of job/jobs without any warning.

  2. Locking of account.

  3. Statutory action as decided by the competent authority.

If you have any questions or concerns regarding any of these policies, please send an email to us.