Cluster 9
Nodes Summary
Total Number of CPUs: 768
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
down | 33 | 528 | 0 | 0.00 |
down,offline | 15 | 240 | 0 | 0.00 |
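The "% of total CPUs free" column is just each state's free-CPU count as a share of the cluster's total CPUs. A minimal sketch of that arithmetic, using the Cluster 9 figures above:

```python
# "% of total CPUs free" = free CPUs in that state / total CPUs in the cluster.
def pct_free(free_cpus: int, total_cpus: int) -> float:
    return 100.0 * free_cpus / total_cpus

# Cluster 9: 768 CPUs total, no free CPUs in either node state.
for state, free in [("down", 0), ("down,offline", 0)]:
    print(f"{state}: {pct_free(free, 768):.2f}%")  # both print 0.00%
```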
Free CPUs (nodewise)
There is no free CPU available now.
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
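The two aggregate metrics defined above can be sketched from per-job records as follows (the job values below are illustrative, not taken from this report):

```python
# Aggregate efficiency metrics over running jobs, per the definitions above:
#   Avg. Efficiency per CPU = sum(cpu_time / walltime) / sum(ncpus)
#   Overall Efficiency      = sum(cpu_time) / sum(walltime * ncpus)
jobs = [
    # (cpu_time_hours, walltime_hours, ncpus) -- illustrative values
    (95.0, 4.0, 24),
    (47.5, 2.0, 24),
]

avg_eff_per_cpu = sum(c / w for c, w, _ in jobs) / sum(n for _, _, n in jobs)
overall_eff = sum(c for c, _, _ in jobs) / sum(w * n for _, w, n in jobs)

print(f"Avg. Efficiency per CPU: {avg_eff_per_cpu:.2%}")
print(f"Overall Efficiency:      {overall_eff:.2%}")
```

The two numbers coincide when all jobs use the same walltime-weighted CPU count; they diverge when large and small jobs mix, which is why the report shows both.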
There are no jobs running now.
Cluster 9
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute-0-0 | 16 | down | 0 | 0 | ||
compute-0-1 | 16 | down,offline | 0 | 0 | ||
compute-0-2 | 16 | down | 0 | 0 | ||
compute-0-3 | 16 | down,offline | 0 | 0 | ||
compute-0-4 | 16 | down | 0 | 0 | ||
compute-0-5 | 16 | down,offline | 0 | 0 | ||
compute-0-6 | 16 | down | 0 | 0 | ||
compute-0-7 | 16 | down | 0 | 0 | ||
compute-0-8 | 16 | down | 0 | 0 | ||
compute-0-9 | 16 | down | 0 | 0 | ||
compute-0-10 | 16 | down | 0 | 0 | ||
compute-0-11 | 16 | down | 0 | 0 | ||
compute-0-12 | 16 | down | 0 | 0 | ||
compute-0-13 | 16 | down | 0 | 0 | ||
compute-0-14 | 16 | down | 0 | 0 | ||
compute-0-15 | 16 | down,offline | 0 | 0 | ||
compute-0-16 | 16 | down | 0 | 0 | ||
compute-0-17 | 16 | down | 0 | 0 | ||
compute-0-18 | 16 | down | 0 | 0 | ||
compute-0-19 | 16 | down | 0 | 0 | ||
compute-0-20 | 16 | down | 0 | 0 | ||
compute-0-21 | 16 | down,offline | 0 | 0 | ||
compute-0-22 | 16 | down,offline | 0 | 0 | ||
compute-0-23 | 16 | down,offline | 0 | 0 | ||
compute-0-24 | 16 | down,offline | 0 | 0 | ||
compute-0-25 | 16 | down | 0 | 0 | ||
compute-0-26 | 16 | down | 0 | 0 | ||
compute-0-27 | 16 | down | 0 | 0 | ||
compute-0-28 | 16 | down | 0 | 0 | ||
compute-0-29 | 16 | down,offline | 0 | 0 | ||
compute-0-30 | 16 | down,offline | 0 | 0 | ||
compute-0-31 | 16 | down | 0 | 0 | ||
compute-0-32 | 16 | down | 0 | 0 | ||
compute-0-33 | 16 | down | 0 | 0 | ||
compute-0-34 | 16 | down | 0 | 0 | ||
compute-0-35 | 16 | down | 0 | 0 | ||
compute-0-36 | 16 | down,offline | 0 | 0 | ||
compute-0-37 | 16 | down | 0 | 0 | ||
compute-0-38 | 16 | down | 0 | 0 | ||
compute-0-39 | 16 | down,offline | 0 | 0 | ||
compute-0-40 | 16 | down | 0 | 0 | ||
compute-0-41 | 16 | down | 0 | 0 | ||
compute-0-42 | 16 | down | 0 | 0 | ||
compute-0-43 | 16 | down | 0 | 0 | ||
compute-0-44 | 16 | down,offline | 0 | 0 | ||
compute-0-45 | 16 | down,offline | 0 | 0 | ||
compute-0-46 | 16 | down | 0 | 0 | ||
compute-0-47 | 16 | down,offline | 0 | 0 | ||
Cluster 9
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
There are no jobs now.
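The per-job parallelization efficiency defined above is a single ratio; a minimal sketch, with illustrative values:

```python
def job_efficiency(cpu_time: float, walltime: float, ncpus: int) -> float:
    """Parallelization efficiency: CPU time / (walltime * CPUs assigned).

    cpu_time and walltime must be in the same units (e.g. hours).
    """
    return cpu_time / (walltime * ncpus)

# e.g. a 24-CPU job that accumulated 90 CPU-hours over 4 hours of walltime:
print(f"{job_efficiency(90.0, 4.0, 24):.2%}")  # 93.75%
```

Values slightly above 100%, as seen in later tables, arise from measurement granularity in the accounted CPU time.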
Cluster 10
Nodes Summary
Total Number of CPUs: 280
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
down | 12 | 240 | 0 | 0.00 |
down,offline | 1 | 20 | 0 | 0.00 |
free | 1 | 0 | 20 | 7.14 |
Free CPUs (nodewise)
Node name | No. of free CPUs |
---|---|
compute-0-8 | 20 |
Total | 20 |
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
There are no jobs running now.
Cluster 10
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute-0-0 | 20 | down | 0 | 0 | ||
compute-0-1 | 20 | down | 0 | 0 | ||
compute-0-2 | 20 | down | 0 | 0 | ||
compute-0-4 | 20 | down | 0 | 0 | ||
compute-0-5 | 20 | down | 0 | 0 | ||
compute-0-6 | 20 | down | 0 | 0 | ||
compute-0-7 | 20 | down,offline | 0 | 0 | ||
compute-0-8 | 20 | free | 0 | 20 | ||
compute-0-10 | 20 | down | 0 | 0 | ||
compute-0-11 | 20 | down | 0 | 0 | ||
compute-0-12 | 20 | down | 0 | 0 | ||
compute-0-13 | 20 | down | 0 | 0 | ||
compute-0-3 | 20 | down | 0 | 0 | ||
compute-0-9 | 20 | down | 0 | 0 | ||
Cluster 10
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
There are no jobs now.
Cluster 11
Nodes Summary
Total Number of CPUs: 960
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
free | 19 | 108 | 348 | 36.25 |
down | 8 | 192 | 0 | 0.00 |
down,job-exclusive | 1 | 24 | 0 | 0.00 |
job-exclusive | 12 | 288 | 0 | 0.00 |
Free CPUs (nodewise)
Node name | No. of free CPUs |
---|---|
compute000 | 24 |
compute001 | 24 |
compute002 | 24 |
compute003 | 24 |
compute006 | 24 |
compute007 | 24 |
compute008 | 4 |
compute009 | 4 |
compute010 | 4 |
compute011 | 4 |
compute012 | 24 |
compute013 | 24 |
compute014 | 24 |
compute016 | 24 |
compute019 | 24 |
compute031 | 14 |
compute034 | 24 |
compute035 | 24 |
compute039 | 6 |
Total | 348 |
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|
R | |||||||
slgupta | 2 | 48 | 5.00% | 1 day 20:16:36 hrs | 98.99% | 98.99% | |
debs | 2 | 240 | 25.00% | 1 day 05:55:13 hrs | 100.07% | 100.07% | |
tisita | 1 | 24 | 2.50% | 02:45:34 hrs | 100.06% | 100.06% | |
hbhillol | 10 | 100 | 10.42% | 14:57:27 hrs | 60.70% | 60.49% | |
sud98 | 8 | 8 | 0.83% | 13:40:30 hrs | 100.09% | 100.09% |
Cluster 11
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute000 | 24 | free | 0 | 24 | ||
compute001 | 24 | free | 0 | 24 | ||
compute002 | 24 | free | 0 | 24 | ||
compute003 | 24 | free | 0 | 24 | ||
compute004 | 24 | down | 0 | 0 | ||
compute005 | 24 | down,job-exclusive | 24 | 0 | ||
compute006 | 24 | free | 0 | 24 | ||
compute007 | 24 | free | 0 | 24 | ||
compute008 | 24 | free | 20 | 4 | ||
compute009 | 24 | free | 20 | 4 | ||
compute010 | 24 | free | 20 | 4 | ||
compute011 | 24 | free | 20 | 4 | ||
compute012 | 24 | free | 0 | 24 | ||
compute013 | 24 | free | 0 | 24 | ||
compute014 | 24 | free | 0 | 24 | ||
compute015 | 24 | job-exclusive | 24 | 0 | ||
compute016 | 24 | free | 0 | 24 | ||
compute017 | 24 | job-exclusive | 24 | 0 | ||
compute018 | 24 | down | 0 | 0 | ||
compute019 | 24 | free | 0 | 24 | ||
compute020 | 24 | job-exclusive | 24 | 0 | ||
compute021 | 24 | job-exclusive | 24 | 0 | ||
compute022 | 24 | job-exclusive | 24 | 0 | ||
compute023 | 24 | down | 0 | 0 | ||
compute024 | 24 | job-exclusive | 24 | 0 | ||
compute025 | 24 | job-exclusive | 24 | 0 | ||
compute026 | 24 | down | 0 | 0 | ||
compute027 | 24 | job-exclusive | 24 | 0 | ||
compute028 | 24 | down | 0 | 0 | ||
compute029 | 24 | down | 0 | 0 | ||
compute030 | 24 | job-exclusive | 24 | 0 | ||
compute031 | 24 | free | 10 | 14 | ||
compute032 | 24 | job-exclusive | 24 | 0 | ||
compute033 | 24 | job-exclusive | 24 | 0 | ||
compute034 | 24 | free | 0 | 24 | ||
compute035 | 24 | free | 0 | 24 | ||
compute036 | 24 | job-exclusive | 24 | 0 | ||
compute037 | 24 | down | 0 | 0 | ||
compute038 | 24 | down | 0 | 0 | ||
compute039 | 24 | free | 18 | 6 | ||
Cluster 11
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in Use | Efficiency† |
---|---|---|---|---|---|---|---|
661794 | slgupta | CVS_Ph3 | R | 1 day 20:22:56 hrs | 24 | 57.19 GB | 100.01% |
661795 | slgupta | CVS_Ph3_1 | R | 1 day 20:10:18 hrs | 24 | 57.35 GB | 97.96% |
661799 | debs | CsVI3_mig7 | R | 1 day 13:34:43 hrs | 168 | 11.02 GB | 100.07% |
661812 | tisita | BiSb_bader | R | 02:45:34 hrs | 24 | 45.52 GB | 100.06% |
661827 | hbhillol | 1900-12-1 | R | 15:09:17 hrs | 10 | 8.52 GB | 75.89% |
661828 | hbhillol | 1900-12-2 | R | 15:09:17 hrs | 10 | 8.51 GB | 75.86% |
661829 | hbhillol | 1900-12-3 | R | 15:09:42 hrs | 10 | 8.29 GB | 75.83% |
661830 | hbhillol | 1900-12-4 | R | 15:09:41 hrs | 10 | 8.30 GB | 75.83% |
661831 | hbhillol | 1901-24-4 | R | 15:09:40 hrs | 10 | 3.03 MB | 0.00% |
661832 | hbhillol | 1901-24-3 | R | 15:09:40 hrs | 10 | 3.02 MB | 0.00% |
661833 | hbhillol | 1901-24-2 | R | 15:09:32 hrs | 10 | 8.52 GB | 75.89% |
661834 | hbhillol | 1901-24-1 | R | 15:09:28 hrs | 10 | 8.53 GB | 75.90% |
661835 | sud98 | fabs_4.23 | R | 14:33:21 hrs | 1 | 14.45 MB | 100.08% |
661838 | sud98 | xabs_4.23 | R | 14:31:13 hrs | 1 | 14.46 MB | 100.09% |
661839 | sud98 | xabs_4.2 | R | 14:20:53 hrs | 1 | 14.46 MB | 100.09% |
661840 | sud98 | xrel_4.2 | R | 14:19:17 hrs | 1 | 14.46 MB | 100.09% |
661843 | hbhillol | 1901-24-3 | R | 14:09:35 hrs | 10 | 8.51 GB | 75.90% |
661845 | hbhillol | 1901-24-4 | R | 14:08:40 hrs | 10 | 8.53 GB | 75.91% |
661848 | sud98 | xabs_3.924 | R | 14:07:12 hrs | 1 | 14.46 MB | 100.09% |
661849 | sud98 | xabs_4.19 | R | 14:06:27 hrs | 1 | 14.46 MB | 100.09% |
661854 | sud98 | frel_3root2 | R | 12:10:20 hrs | 1 | 14.46 MB | 100.09% |
661855 | debs | CsVI3_mig9 | R | 12:03:05 hrs | 72 | 21.94 GB | 100.06% |
661856 | sud98 | frel_3root2 | R | 11:15:24 hrs | 1 | 14.46 MB | 100.09% |
Cluster 12
Nodes Summary
Total Number of CPUs: 1033
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
free | 16 | 1 | 383 | 37.08 |
job-busy | 27 | 648 | 0 | 0.00 |
state-unknown,down | 1 | 1 | 0 | 0.00 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
node2 | 24 | |
node4 | 23 | |
node11 | 24 | |
node14 | 24 | |
node6 | 24 | |
node7 | 24 | |
node18 | 24 | |
node32 | 24 | |
node34 | 24 | |
node35 | 24 | |
node38 | 24 | |
node39 | 24 | |
node40 | 24 | |
node33 | 24 | |
node43 | 24 | |
node44 | 24 | |
Total | 383 |
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
shuvam | 1 | 1 | 0.10% | 3 days 19:12:28 hrs | 100.08% | 100.08% | ||
souravmal | 6 | 144 | 13.94% | 3 days 05:39:54 hrs | 96.76% | 96.86% | ||
psen | 6 | 144 | 13.94% | 2 days 02:26:03 hrs | 98.62% | 98.76% | ||
junaidjami | 2 | 48 | 4.65% | 2 days 13:52:45 hrs | 100.08% | 100.08% | ||
bikashvbu | 2 | 48 | 4.65% | 1 day 14:18:23 hrs | 99.99% | 100.00% | ||
tonbiswa618 | 1 | 168 | 16.26% | 1 day 16:32:19 hrs | 99.89% | 99.89% | ||
shilendra | 1 | 96 | 9.29% | 14:32:16 hrs | 95.98% | 95.98% |
Cluster 12
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
node2 | workq | 24 | free | 0 | 24 | |
node3 | workq | 24 | job-busy | 24 | 0 | |
node4 | workq | 24 | free | 1 | 23 | |
node5 | workq | 24 | job-busy | 24 | 0 | |
node8 | workq | 24 | job-busy | 24 | 0 | |
node9 | workq | 24 | job-busy | 24 | 0 | |
node10 | workq | 24 | job-busy | 24 | 0 | |
node11 | workq | 24 | free | 0 | 24 | |
node12 | workq | 24 | job-busy | 24 | 0 | |
node14 | workq | 24 | free | 0 | 24 | |
node15 | workq | 24 | job-busy | 24 | 0 | |
node16 | workq | 24 | job-busy | 24 | 0 | |
node17 | workq | 24 | job-busy | 24 | 0 | |
node19 | workq | 24 | job-busy | 24 | 0 | |
node20 | workq | 24 | job-busy | 24 | 0 | |
node21 | workq | 24 | job-busy | 24 | 0 | |
node22 | workq | 24 | job-busy | 24 | 0 | |
node1 | workq | 24 | job-busy | 24 | 0 | |
node23 | workq | 24 | job-busy | 24 | 0 | |
node24 | workq | 24 | job-busy | 24 | 0 | |
node25 | workq | 24 | job-busy | 24 | 0 | |
node6 | workq | 24 | free | 0 | 24 | |
node7 | workq | 24 | free | 0 | 24 | |
node26 | workq | 1 | state-unknown,down | 0 | 0 | |
node27 | workq | 24 | job-busy | 24 | 0 | |
node13 | workq | 24 | job-busy | 24 | 0 | |
node28 | workq | 24 | job-busy | 24 | 0 | |
node29 | workq | 24 | job-busy | 24 | 0 | |
node18 | workq | 24 | free | 0 | 24 | |
node30 | workq | 24 | job-busy | 24 | 0 | |
node31 | workq | 24 | job-busy | 24 | 0 | |
node32 | workq | 24 | free | 0 | 24 | |
node34 | workq | 24 | free | 0 | 24 | |
node35 | workq | 24 | free | 0 | 24 | |
node36 | workq | 24 | job-busy | 24 | 0 | |
node37 | workq | 24 | job-busy | 24 | 0 | |
node38 | workq | 24 | free | 0 | 24 | |
node39 | workq | 24 | free | 0 | 24 | |
node40 | workq | 24 | free | 0 | 24 | |
node41 | workq | 24 | job-busy | 24 | 0 | |
node42 | workq | 24 | job-busy | 24 | 0 | |
node33 | workq | 24 | free | 0 | 24 | |
node43 | workq | 24 | free | 0 | 24 | |
node44 | workq | 24 | free | 0 | 24 | |
Cluster 12
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in Use | Efficiency† |
---|---|---|---|---|---|---|---|---|
115965 | shuvam | workq | two_qbit | R | 3 days 19:12:28 hrs | 1 | 33.46 MB | 100.08% |
119732 | souravmal | workq | batch-45 | R | 3 days 16:44:09 hrs | 24 | 14.11 GB | 91.90% |
119733 | souravmal | workq | batch-44 | R | 3 days 16:43:01 hrs | 24 | 12.36 GB | 99.99% |
119734 | souravmal | workq | batch-4 | R | 3 days 16:44:16 hrs | 24 | 12.85 GB | 100.00% |
119735 | souravmal | workq | batch-35 | R | 3 days 16:43:26 hrs | 24 | 13.46 GB | 100.00% |
119742 | psen | workq | batch-39 | R | 3 days 16:42:39 hrs | 24 | 17.29 GB | 100.00% |
119743 | psen | workq | batch-43 | R | 3 days 16:42:36 hrs | 24 | 17.33 GB | 100.00% |
119880 | junaidjami | workq | ZrMnP_Tc | R | 2 days 20:35:10 hrs | 24 | 15.67 GB | 100.07% |
119899 | bikashvbu | workq | Tuya | R | 2 days 18:20:58 hrs | 24 | 9.56 GB | 100.01% |
119901 | souravmal | workq | batch-24 | R | 2 days 17:33:34 hrs | 24 | 14.48 GB | 88.68% |
119915 | junaidjami | workq | ZnFeO | R | 2 days 07:10:20 hrs | 24 | 4.49 GB | 100.08% |
119918 | souravmal | workq | batch-8 | R | 1 day 21:31:03 hrs | 24 | 14.24 GB | 99.99% |
119919 | psen | workq | batch-9 | R | 1 day 21:29:28 hrs | 24 | 12.42 GB | 97.94% |
119920 | psen | workq | batch-10 | R | 1 day 21:28:51 hrs | 24 | 17.89 GB | 93.83% |
119932 | tonbiswa618 | workq | phonon_InS | R | 1 day 16:32:19 hrs | 168 | 25.58 GB | 99.89% |
120270 | psen | workq | batch-16 | R | 17:06:26 hrs | 24 | 16.25 GB | 99.98% |
120271 | psen | workq | batch-17 | R | 17:06:18 hrs | 24 | 10.88 GB | 99.98% |
120274 | shilendra | workq | opt_CdPb | R | 14:32:17 hrs | 96 | 10.01 GB | 95.98% |
120277 | bikashvbu | workq | Tuya | R | 10:15:49 hrs | 24 | 6.69 GB | 99.97% |
Cluster 13
Nodes Summary
Total Number of CPUs: 1024
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
job-busy | 25 | 800 | 0 | 0.00 |
free | 4 | 0 | 128 | 12.50 |
state-unknown,down | 2 | 64 | 0 | 0.00 |
offline | 1 | 32 | 0 | 0.00 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
c13node9 | 32 | |
neutrino | ||
c13node28 | 32 | |
c13node30 | 32 | |
c13node31 | 32 | |
Total | 128 |
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
psen | 4 | 128 | 12.50% | 2 days 07:37:56 hrs | 99.46% | 99.50% | ||
souravmal | 4 | 128 | 12.50% | 1 day 13:30:18 hrs | 97.90% | 98.14% | ||
vanshreep | 2 | 64 | 6.25% | 1 day 14:39:34 hrs | 99.67% | 99.67% | ||
shilendra | 2 | 128 | 12.50% | 21:06:53 hrs | 99.68% | 99.66% | ||
tanmoymondal | 24 | 96 | 9.38% | 20:22:22 hrs | 99.68% | 99.68% | ||
pradhi | 1 | 128 | 12.50% | 20:06:29 hrs | 99.49% | 99.49% | ||
prajna | 2 | 128 | 12.50% | 06:22:00 hrs | 99.48% | 99.65% |
Job State | Queue | User | No. of Jobs | No. of CPUs Requested |
---|---|---|---|---|
Q | ||||
workq | ||||
psen | 2 | 64 | ||
vanshreep | 2 | 256 | ||
tisita | 1 | 128 | ||
sankalpa | 1 | 128 |
Cluster 13
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
c13node1 | workq | 32 | job-busy | 32 | 0 | |
c13node2 | workq | 32 | job-busy | 32 | 0 | |
c13node3 | workq | 32 | job-busy | 32 | 0 | |
c13node4 | workq | 32 | job-busy | 32 | 0 | |
c13node5 | workq | 32 | job-busy | 32 | 0 | |
c13node7 | workq | 32 | job-busy | 32 | 0 | |
c13node8 | workq | 32 | job-busy | 32 | 0 | |
c13node9 | workq | 32 | free | 0 | 32 | |
c13node10 | workq | 32 | job-busy | 32 | 0 | |
c13node11 | workq | 32 | job-busy | 32 | 0 | |
c13node12 | workq | 32 | job-busy | 32 | 0 | |
c13node14 | workq | 32 | job-busy | 32 | 0 | |
c13node15 | workq | 32 | job-busy | 32 | 0 | |
c13node0 | workq | 32 | job-busy | 32 | 0 | |
c13node16 | workq | 32 | job-busy | 32 | 0 | |
c13node17 | workq | 32 | job-busy | 32 | 0 | |
c13node18 | workq | 32 | job-busy | 32 | 0 | |
c13node6 | workq | 32 | state-unknown,down | 0 | 0 | |
c13node19 | workq | 32 | job-busy | 32 | 0 | |
c13node20 | workq | 32 | job-busy | 32 | 0 | |
c13node22 | workq | 32 | offline | 0 | 0 | |
c13node13 | workq | 32 | job-busy | 32 | 0 | |
c13node23 | workq | 32 | job-busy | 32 | 0 | |
c13node21 | workq | 32 | job-busy | 32 | 0 | |
c13node24 | workq | 32 | job-busy | 32 | 0 | |
c13node25 | workq | 32 | job-busy | 32 | 0 | |
c13node26 | workq | 32 | job-busy | 32 | 0 | |
c13node27 | workq | 32 | job-busy | 32 | 0 | |
c13node28 | neutrino | 32 | free | 0 | 32 | |
c13node29 | neutrino | 32 | state-unknown,down | 0 | 0 | |
c13node30 | neutrino | 32 | free | 0 | 32 | |
c13node31 | neutrino | 32 | free | 0 | 32 | |
Cluster 13
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in Use | Efficiency† |
---|---|---|---|---|---|---|---|---|
406098 | psen | workq | batch-42 | R | 3 days 16:39:31 hrs | 32 | 14.76 GB | 99.55% |
406100 | psen | workq | batch-26 | R | 3 days 07:59:26 hrs | 32 | 13.69 GB | 99.53% |
406138 | souravmal | workq | batch-0 | R | 2 days 15:40:01 hrs | 32 | 16.09 GB | 99.50% |
406139 | souravmal | workq | batch-1 | R | 1 day 08:29:40 hrs | 32 | 13.45 GB | 94.72% |
406140 | souravmal | workq | batch-2 | R | 1 day 06:51:22 hrs | 32 | 14.36 GB | 98.00% |
406141 | souravmal | workq | batch-3 | R | 23:00:09 hrs | 32 | 16.21 GB | 99.40% |
406142 | psen | workq | batch-4 | R | 1 day 03:03:19 hrs | 32 | 14.62 GB | 99.29% |
406143 | psen | workq | batch-5 | R | 1 day 02:49:30 hrs | 32 | 15.95 GB | 99.48% |
406144 | psen | workq | batch-6 | Q | 32 | 0.00% | ||
406145 | psen | workq | batch-7 | Q | 32 | 0.00% | ||
406201 | vanshreep | workq | psb4.0t_pbt | R | 1 day 14:41:25 hrs | 32 | 98.00 GB | 99.67% |
406202 | vanshreep | workq | psb3.5t_pbt | R | 1 day 14:37:43 hrs | 32 | 98.05 GB | 99.67% |
406210 | vanshreep | workq | psb3.0t_pbt | Q | 128 | 0.00% | ||
406221 | shilendra | workq | opt_CsPbH | R | 1 day 05:41:22 hrs | 64 | 24.59 GB | 99.64% |
406225 | tanmoymondal | workq | sf_1_2.0_8_0.0008 | R | 20:24:49 hrs | 4 | 128.72 MB | 99.68% |
406226 | tanmoymondal | workq | sf_1_2.0_8_0.00125 | R | 20:24:45 hrs | 4 | 136.86 MB | 99.68% |
406227 | tanmoymondal | workq | sf_1_2.0_8_0.00175 | R | 20:24:32 hrs | 4 | 127.53 MB | 99.68% |
406228 | tanmoymondal | workq | sf_1_2.0_8_0.0025 | R | 20:24:19 hrs | 4 | 128.09 MB | 99.68% |
406229 | tanmoymondal | workq | sf_1_2.0_8_0.005 | R | 20:24:06 hrs | 4 | 133.14 MB | 99.68% |
406230 | tanmoymondal | workq | sf_1_2.0_8_0.01 | R | 20:24:03 hrs | 4 | 125.52 MB | 99.68% |
406231 | tanmoymondal | workq | sf_1_2.0_8_0.02 | R | 20:24:00 hrs | 4 | 128.90 MB | 99.68% |
406232 | tanmoymondal | workq | sf_1_2.0_8_0.05 | R | 20:23:47 hrs | 4 | 127.45 MB | 99.68% |
406233 | tanmoymondal | workq | sf_2_2.0_8_0.0008 | R | 20:22:14 hrs | 4 | 139.02 MB | 99.68% |
406234 | tanmoymondal | workq | sf_2_2.0_8_0.00125 | R | 20:22:11 hrs | 4 | 124.89 MB | 99.68% |
406235 | tanmoymondal | workq | sf_2_2.0_8_0.00175 | R | 20:22:07 hrs | 4 | 126.88 MB | 99.68% |
406236 | tanmoymondal | workq | sf_2_2.0_8_0.0025 | R | 20:22:04 hrs | 4 | 125.59 MB | 99.68% |
406237 | tanmoymondal | workq | sf_2_2.0_8_0.005 | R | 20:21:51 hrs | 4 | 128.83 MB | 99.68% |
406238 | tanmoymondal | workq | sf_2_2.0_8_0.01 | R | 20:21:38 hrs | 4 | 133.86 MB | 99.68% |
406239 | tanmoymondal | workq | sf_2_2.0_8_0.02 | R | 20:21:35 hrs | 4 | 130.11 MB | 99.68% |
406240 | tanmoymondal | workq | sf_2_2.0_8_0.05 | R | 20:21:32 hrs | 4 | 126.89 MB | 99.68% |
406241 | tanmoymondal | workq | sf_3_2.0_8_0.0008 | R | 20:21:20 hrs | 4 | 138.07 MB | 99.68% |
406242 | tanmoymondal | workq | sf_3_2.0_8_0.00125 | R | 20:21:17 hrs | 4 | 128.88 MB | 99.68% |
406243 | tanmoymondal | workq | sf_3_2.0_8_0.00175 | R | 20:21:13 hrs | 4 | 136.91 MB | 99.68% |
406244 | tanmoymondal | workq | sf_3_2.0_8_0.0025 | R | 20:21:02 hrs | 4 | 123.79 MB | 99.68% |
406245 | tanmoymondal | workq | sf_3_2.0_8_0.005 | R | 20:20:49 hrs | 4 | 137.78 MB | 99.67% |
406246 | tanmoymondal | workq | sf_3_2.0_8_0.01 | R | 20:20:36 hrs | 4 | 127.47 MB | 99.68% |
406247 | tanmoymondal | workq | sf_3_2.0_8_0.02 | R | 20:20:32 hrs | 4 | 130.01 MB | 99.68% |
406248 | tanmoymondal | workq | sf_3_2.0_8_0.05 | R | 20:20:29 hrs | 4 | 132.81 MB | 99.68% |
406259 | pradhi | workq | mos2v_p9 | R | 20:06:29 hrs | 128 | 26.03 GB | 99.49% |
406263 | vanshreep | workq | psb2.5t_pbt | Q | 128 | 0.00% | ||
406283 | shilendra | workq | opt_Cd | R | 12:32:25 hrs | 64 | 23.82 GB | 99.71% |
406295 | tisita | workq | test | Q | 128 | 0.00% | ||
406296 | sankalpa | workq | M-X | Q | 128 | 0.00% | ||
406297 | prajna | workq | lanbon2_010 | R | 12:32:31 hrs | 64 | 10.51 GB | 99.66% |
406298 | prajna | workq | lanbon2_100 | R | 00:11:29 hrs | 64 | 11.97 GB | 99.31% |
Cluster 14
Nodes Summary
Total Number of CPUs: 1040
State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
job-busy | 6 | 336 | 0 | 0.00 |
free | 13 | 106 | 598 | 57.50 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
node2 | 56 | |
node5 | 28 | |
node8 | 28 | |
node9 | 56 | |
node10 | 56 | |
node11 | 56 | |
node12 | 6 | |
node14 | 56 | |
node15 | 56 | |
node16 | 56 | |
node17 | 56 | |
node18 | 56 | |
Total | 598 |
Jobs Summary
† Avg. Efficiency per CPU = ∑ (CPU time / Walltime) / ∑ (No. of CPUs assigned)
†† Overall Efficiency = ∑ CPU time / ∑ (Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
swapnild | 1 | 112 | 10.77% | 3 days 15:41:54 hrs | 99.52% | 99.52% | ||
mab5 | 1 | 112 | 10.77% | 3 days 15:05:39 hrs | 99.21% | 99.21% | ||
abhishodh | 1 | 50 | 4.81% | 2 days 05:55:59 hrs | 34.99% | 34.99% | ||
tanoykanti | 1 | 56 | 5.38% | 18:42:28 hrs | 5.12% | 5.12% | ||
psen | 4 | 112 | 10.77% | 11:02:58 hrs | 99.34% | 99.27% |
Job State | Queue | User | No. of Jobs | No. of CPUs Requested |
---|---|---|---|---|
Q | ||||
workq | ||||
swapnild | 1 | 112 |
Cluster 14
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
node1 | workq | 56 | job-busy | 56 | 0 | |
node2 | workq | 56 | free | 0 | 56 | |
node3 | workq | 56 | job-busy | 56 | 0 | |
node4 | workq | 56 | job-busy | 56 | 0 | |
node5 | workq | 56 | free | 28 | 28 | |
node6 | workq | 56 | job-busy | 56 | 0 | |
node7 | workq | 56 | job-busy | 56 | 0 | |
node8 | workq | 56 | free | 28 | 28 | |
node9 | workq | 56 | free | 0 | 56 | |
node10 | workq | 56 | free | 0 | 56 | |
node11 | workq | 56 | free | 0 | 56 | |
node12 | workq | 56 | free | 50 | 6 | |
node13 | workq | 56 | job-busy | 56 | 0 | |
node14 | workq | 56 | free | 0 | 56 | |
node15 | workq | 56 | free | 0 | 56 | |
node16 | workq | 56 | free | 0 | 56 | |
node17 | workq | 56 | free | 0 | 56 | |
node18 | workq | 56 | free | 0 | 56 | |
gpu1 | neutrino | 32 | free | 0 | 32 | |
Cluster 14
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in Use | Efficiency† |
---|---|---|---|---|---|---|---|---|
23044.c14m1.clusternet | swapnild@c14m2.clusternet | workq | complex2 | R | 3 days 15:41:54 hrs | 112 | 66.57 GB | 99.52% |
23050.c14m1.clusternet | mab5@c14m2.clusternet | workq | _p1_SR_BR_ | R | 3 days 15:05:40 hrs | 112 | 78.35 GB | 99.21% |
23100.c14m1.clusternet | abhishodh@c14m2.clusternet | workq | FZ_KgFixed | R | 2 days 05:55:59 hrs | 50 | 3.72 GB | 34.99% |
23190.c14m1.clusternet | swapnild@c14m2.clusternet | workq | Si_intr1 | Q | 112 | 0.00% | ||
23195.c14m1.clusternet | tanoykanti@c14m2.clusternet | workq | a_1 | R | 18:42:28 hrs | 56 | 16.51 GB | 5.12% |
23201.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-12 | R | 17:09:02 hrs | 28 | 18.38 GB | 99.23% |
23202.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-13 | R | 17:09:02 hrs | 28 | 18.91 GB | 99.20% |
23203.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-14 | R | 04:56:59 hrs | 28 | 15.89 GB | 99.35% |
23204.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-15 | R | 04:56:53 hrs | 28 | 19.84 GB | 99.57% |