Cluster 9
Nodes Summary
Total Number of CPUs: 768

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
down | 33 | 528 | 0 | 0.00 |
down,offline | 15 | 240 | 0 | 0.00 |
Free CPUs (nodewise)
No free CPUs are available at the moment.
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
No jobs are currently running on this cluster.
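The two summary metrics defined above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical job numbers, not the report generator's actual code:

```python
# Each running job: (CPU time, walltime, No. of CPUs assigned), times in hours.
# Hypothetical values, for illustration only.
jobs = [
    (80.0, 100.0, 1),    # serial job running at 80% efficiency
    (950.0, 10.0, 96),   # 96-CPU job running at ~99% efficiency
]

# † Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
avg_eff_per_cpu = sum(c / w for c, w, _ in jobs) / sum(n for _, _, n in jobs)

# †† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
overall_eff = sum(c for c, _, _ in jobs) / sum(w * n for _, w, n in jobs)

print(f"{avg_eff_per_cpu:.2%} {overall_eff:.2%}")  # → 98.76% 97.17%
```

The two metrics differ when jobs of different widths run at different efficiencies: the first averages each job's CPU-usage rate over the assigned CPUs, while the second additionally weights by walltime.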
Cluster 9
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute-0-0 | 16 | down | 0 | 0 | ||
compute-0-1 | 16 | down,offline | 0 | 0 | ||
compute-0-2 | 16 | down | 0 | 0 | ||
compute-0-3 | 16 | down,offline | 0 | 0 | ||
compute-0-4 | 16 | down | 0 | 0 | ||
compute-0-5 | 16 | down,offline | 0 | 0 | ||
compute-0-6 | 16 | down | 0 | 0 | ||
compute-0-7 | 16 | down | 0 | 0 | ||
compute-0-8 | 16 | down | 0 | 0 | ||
compute-0-9 | 16 | down | 0 | 0 | ||
compute-0-10 | 16 | down | 0 | 0 | ||
compute-0-11 | 16 | down | 0 | 0 | ||
compute-0-12 | 16 | down | 0 | 0 | ||
compute-0-13 | 16 | down | 0 | 0 | ||
compute-0-14 | 16 | down | 0 | 0 | ||
compute-0-15 | 16 | down,offline | 0 | 0 | ||
compute-0-16 | 16 | down | 0 | 0 | ||
compute-0-17 | 16 | down | 0 | 0 | ||
compute-0-18 | 16 | down | 0 | 0 | ||
compute-0-19 | 16 | down | 0 | 0 | ||
compute-0-20 | 16 | down | 0 | 0 | ||
compute-0-21 | 16 | down,offline | 0 | 0 | ||
compute-0-22 | 16 | down,offline | 0 | 0 | ||
compute-0-23 | 16 | down,offline | 0 | 0 | ||
compute-0-24 | 16 | down,offline | 0 | 0 | ||
compute-0-25 | 16 | down | 0 | 0 | ||
compute-0-26 | 16 | down | 0 | 0 | ||
compute-0-27 | 16 | down | 0 | 0 | ||
compute-0-28 | 16 | down | 0 | 0 | ||
compute-0-29 | 16 | down,offline | 0 | 0 | ||
compute-0-30 | 16 | down,offline | 0 | 0 | ||
compute-0-31 | 16 | down | 0 | 0 | ||
compute-0-32 | 16 | down | 0 | 0 | ||
compute-0-33 | 16 | down | 0 | 0 | ||
compute-0-34 | 16 | down | 0 | 0 | ||
compute-0-35 | 16 | down | 0 | 0 | ||
compute-0-36 | 16 | down,offline | 0 | 0 | ||
compute-0-37 | 16 | down | 0 | 0 | ||
compute-0-38 | 16 | down | 0 | 0 | ||
compute-0-39 | 16 | down,offline | 0 | 0 | ||
compute-0-40 | 16 | down | 0 | 0 | ||
compute-0-41 | 16 | down | 0 | 0 | ||
compute-0-42 | 16 | down | 0 | 0 | ||
compute-0-43 | 16 | down | 0 | 0 | ||
compute-0-44 | 16 | down,offline | 0 | 0 | ||
compute-0-45 | 16 | down,offline | 0 | 0 | ||
compute-0-46 | 16 | down | 0 | 0 | ||
compute-0-47 | 16 | down,offline | 0 | 0 | ||
Cluster 9
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
No jobs are currently running or queued on this cluster.
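The per-job efficiency above can be computed directly from the columns of the jobs tables. A minimal sketch, assuming walltime strings follow the "D days HH:MM:SS hrs" format used in these reports (the helper names are hypothetical):

```python
import re

def walltime_to_hours(s: str) -> float:
    """Parse walltimes like '68 days 16:08:25 hrs' or '12:08:16 hrs' into hours."""
    m = re.match(r"(?:(\d+)\s+days?\s+)?(\d+):(\d+):(\d+)", s)
    if m is None:
        raise ValueError(f"unrecognized walltime: {s!r}")
    d, h, mnt, sec = (int(g) if g else 0 for g in m.groups())
    return d * 24 + h + mnt / 60 + sec / 3600

def job_efficiency(cpu_time_hours: float, walltime: str, ncpus: int) -> float:
    """† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)."""
    return cpu_time_hours / (walltime_to_hours(walltime) * ncpus)

# Hypothetical numbers: a 96-CPU job that ran 12h and accumulated 1150 CPU-hours.
print(f"{job_efficiency(1150.0, '12:00:00 hrs', 96):.2%}")  # → 99.83%
```

Values slightly above 100%, which appear in some rows of these reports, simply mean a job's processes accumulated more CPU time than walltime × assigned CPUs (e.g. extra helper threads or accounting granularity); the formula itself is unchanged.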
Cluster 10
Nodes Summary
Total Number of CPUs: 280

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
down | 12 | 240 | 0 | 0.00 |
down,offline | 1 | 20 | 0 | 0.00 |
free | 1 | 0 | 20 | 7.14 |
Free CPUs (nodewise)
Node name | No. of free CPUs |
---|---|
compute-0-8 | 20 |
Total | 20 |
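The "% of total CPUs free" column is plain arithmetic, free ÷ total; for this cluster:

```python
# Values taken from the Cluster 10 summary above.
total_cpus = 280
free_cpus = 20   # compute-0-8
print(f"{free_cpus / total_cpus:.2%}")  # → 7.14%
```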
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
No jobs are currently running on this cluster.
Cluster 10
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute-0-0 | 20 | down | 0 | 0 | ||
compute-0-1 | 20 | down | 0 | 0 | ||
compute-0-2 | 20 | down | 0 | 0 | ||
compute-0-4 | 20 | down | 0 | 0 | ||
compute-0-5 | 20 | down | 0 | 0 | ||
compute-0-6 | 20 | down | 0 | 0 | ||
compute-0-7 | 20 | down,offline | 0 | 0 | ||
compute-0-8 | 20 | free | 0 | 20 | ||
compute-0-10 | 20 | down | 0 | 0 | ||
compute-0-11 | 20 | down | 0 | 0 | ||
compute-0-12 | 20 | down | 0 | 0 | ||
compute-0-13 | 20 | down | 0 | 0 | ||
compute-0-3 | 20 | down | 0 | 0 | ||
compute-0-9 | 20 | down | 0 | 0 | ||
Cluster 10
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
No jobs are currently running or queued on this cluster.
Cluster 11
Nodes Summary
Total Number of CPUs: 960

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
free | 20 | 17 | 463 | 48.23 |
down | 11 | 264 | 0 | 0.00 |
job-exclusive | 8 | 192 | 0 | 0.00 |
down,job-exclusive | 1 | 24 | 0 | 0.00 |
Free CPUs (nodewise)
Node name | No. of free CPUs |
---|---|
compute000 | 24 |
compute001 | 24 |
compute002 | 24 |
compute003 | 24 |
compute005 | 24 |
compute007 | 24 |
compute008 | 24 |
compute009 | 24 |
compute010 | 24 |
compute011 | 24 |
compute012 | 24 |
compute013 | 24 |
compute021 | 24 |
compute022 | 24 |
compute024 | 24 |
compute026 | 24 |
compute027 | 24 |
compute028 | 19 |
compute035 | 18 |
compute036 | 18 |
Total | 463 |
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|
R | |||||||
tanoykanti | 12 | 12 | 1.25% | 68 days 16:05:30 hrs | 83.97% | 83.97% | |
sud98 | 5 | 5 | 0.52% | 43 days 07:24:09 hrs | 80.85% | 80.85% | |
tanmoymondal | 24 | 24 | 2.50% | 1 day 18:04:15 hrs | 100.11% | 100.11% | |
sganguly | 1 | 96 | 10.00% | 5 days 14:44:44 hrs | 100.08% | 100.08% | |
debs | 1 | 96 | 10.00% | 12:08:16 hrs | 100.06% | 100.06% |
Cluster 11
Nodes Status
Node | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|
compute000 | 24 | free | 0 | 24 | ||
compute001 | 24 | free | 0 | 24 | ||
compute002 | 24 | free | 0 | 24 | ||
compute003 | 24 | free | 0 | 24 | ||
compute004 | 24 | down | 0 | 0 | ||
compute005 | 24 | free | 0 | 24 | ||
compute006 | 24 | down | 0 | 0 | ||
compute007 | 24 | free | 0 | 24 | ||
compute008 | 24 | free | 0 | 24 | ||
compute009 | 24 | free | 0 | 24 | ||
compute010 | 24 | free | 0 | 24 | ||
compute011 | 24 | free | 0 | 24 | ||
compute012 | 24 | free | 0 | 24 | ||
compute013 | 24 | free | 0 | 24 | ||
compute014 | 24 | job-exclusive | 24 | 0 | ||
compute015 | 24 | job-exclusive | 24 | 0 | ||
compute016 | 24 | down | 0 | 0 | ||
compute017 | 24 | job-exclusive | 24 | 0 | ||
compute018 | 24 | down | 0 | 0 | ||
compute019 | 24 | down | 0 | 0 | ||
compute020 | 24 | job-exclusive | 24 | 0 | ||
compute021 | 24 | free | 0 | 24 | ||
compute022 | 24 | free | 0 | 24 | ||
compute023 | 24 | down | 0 | 0 | ||
compute024 | 24 | free | 0 | 24 | ||
compute025 | 24 | down | 0 | 0 | ||
compute026 | 24 | free | 0 | 24 | ||
compute027 | 24 | free | 0 | 24 | ||
compute028 | 24 | free | 5 | 19 | ||
compute029 | 24 | down,job-exclusive | 24 | 0 | ||
compute030 | 24 | job-exclusive | 24 | 0 | ||
compute031 | 24 | down | 0 | 0 | ||
compute032 | 24 | job-exclusive | 24 | 0 | ||
compute033 | 24 | down | 0 | 0 | ||
compute034 | 24 | job-exclusive | 24 | 0 | ||
compute035 | 24 | free | 6 | 18 | ||
compute036 | 24 | free | 6 | 18 | ||
compute037 | 24 | down | 0 | 0 | ||
compute038 | 24 | down | 0 | 0 | ||
compute039 | 24 | job-exclusive | 24 | 0 | ||
Cluster 11
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in use | Efficiency† |
---|---|---|---|---|---|---|---|
651705 | tanoykanti | fin07 | R | 68 days 16:08:25 hrs | 1 | 33.69 MB | 83.97% |
651706 | tanoykanti | fin08 | R | 68 days 16:07:55 hrs | 1 | 33.84 MB | 83.97% |
651707 | tanoykanti | fin09 | R | 68 days 16:07:21 hrs | 1 | 33.75 MB | 83.97% |
651708 | tanoykanti | fin10 | R | 68 days 16:06:37 hrs | 1 | 33.76 MB | 83.97% |
651709 | tanoykanti | fin11 | R | 68 days 16:06:05 hrs | 1 | 33.69 MB | 83.97% |
651710 | tanoykanti | fin12 | R | 68 days 16:05:36 hrs | 1 | 33.73 MB | 83.97% |
651711 | tanoykanti | fin13 | R | 68 days 16:05:19 hrs | 1 | 33.77 MB | 83.97% |
651712 | tanoykanti | fin14 | R | 68 days 16:04:49 hrs | 1 | 33.78 MB | 83.97% |
651713 | tanoykanti | fin15 | R | 68 days 16:04:21 hrs | 1 | 31.87 MB | 83.97% |
651714 | tanoykanti | fin16 | R | 68 days 16:03:45 hrs | 1 | 33.69 MB | 83.97% |
651715 | tanoykanti | fin17 | R | 68 days 16:03:15 hrs | 1 | 33.77 MB | 83.97% |
651716 | tanoykanti | fin18 | R | 68 days 16:02:41 hrs | 1 | 33.81 MB | 83.97% |
659000 | sud98 | 5_qubit_1 | R | 43 days 07:43:21 hrs | 1 | 14.04 MB | 80.84% |
659001 | sud98 | 5_qubit_2 | R | 43 days 07:42:39 hrs | 1 | 14.07 MB | 80.87% |
659002 | sud98 | 5_qubit_3 | R | 43 days 07:23:27 hrs | 1 | 14.07 MB | 80.86% |
659003 | sud98 | 5_qubit_4 | R | 43 days 07:06:02 hrs | 1 | 14.04 MB | 80.83% |
659004 | sud98 | 5_qubit_5 | R | 43 days 07:05:18 hrs | 1 | 14.04 MB | 80.84% |
659150 | tanmoymondal | sf_2_3.0_8_0.5_0.038593 | R | 1 day 18:05:07 hrs | 1 | 706.75 MB | 100.12% |
659151 | tanmoymondal | sf_2_3.0_8_0.5_0.035084 | R | 1 day 18:05:01 hrs | 1 | 654.37 MB | 100.31% |
659152 | tanmoymondal | sf_2_3.0_8_0.5_0.031895 | R | 1 day 18:04:58 hrs | 1 | 663.71 MB | 100.28% |
659153 | tanmoymondal | sf_2_3.0_8_0.5_0.028995 | R | 1 day 18:04:52 hrs | 1 | 651.44 MB | 100.04% |
659154 | tanmoymondal | sf_2_3.0_8_0.5_0.026359 | R | 1 day 18:04:49 hrs | 1 | 615.97 MB | 100.05% |
659155 | tanmoymondal | sf_2_3.0_8_0.5_0.023963 | R | 1 day 18:04:44 hrs | 1 | 650.77 MB | 99.99% |
659156 | tanmoymondal | sf_2_3.0_8_0.5_0.021785 | R | 1 day 18:04:40 hrs | 1 | 617.48 MB | 100.17% |
659157 | tanmoymondal | sf_2_3.0_8_0.5_0.019804 | R | 1 day 18:04:35 hrs | 1 | 613.68 MB | 100.12% |
659158 | tanmoymondal | sf_2_3.0_8_0.5_0.018004 | R | 1 day 18:04:30 hrs | 1 | 621.50 MB | 100.21% |
659159 | tanmoymondal | sf_2_3.0_8_0.5_0.016367 | R | 1 day 18:04:27 hrs | 1 | 640.98 MB | 100.17% |
659160 | tanmoymondal | sf_2_3.0_8_0.5_0.014879 | R | 1 day 18:04:21 hrs | 1 | 615.02 MB | 100.06% |
659161 | tanmoymondal | sf_2_3.0_8_0.5_0.013527 | R | 1 day 18:04:18 hrs | 1 | 607.94 MB | 99.95% |
659162 | tanmoymondal | sf_2_3.0_8_0.5_0.012297 | R | 1 day 18:04:14 hrs | 1 | 613.84 MB | 100.15% |
659163 | tanmoymondal | sf_2_3.0_8_0.5_0.011179 | R | 1 day 18:04:10 hrs | 1 | 618.93 MB | 99.97% |
659164 | tanmoymondal | sf_2_3.0_8_0.5_0.010163 | R | 1 day 18:04:05 hrs | 1 | 617.34 MB | 100.14% |
659165 | tanmoymondal | sf_2_3.0_8_0.5_0.009239 | R | 1 day 18:04:01 hrs | 1 | 627.28 MB | 100.18% |
659166 | tanmoymondal | sf_2_3.0_8_0.5_0.008399 | R | 1 day 18:03:57 hrs | 1 | 616.14 MB | 100.20% |
659167 | tanmoymondal | sf_2_3.0_8_0.5_0.007635 | R | 1 day 18:03:53 hrs | 1 | 619.93 MB | 100.17% |
659168 | tanmoymondal | sf_2_3.0_8_0.5_0.006941 | R | 1 day 18:03:48 hrs | 1 | 626.80 MB | 100.16% |
659169 | tanmoymondal | sf_2_3.0_8_0.5_0.006310 | R | 1 day 18:03:44 hrs | 1 | 615.64 MB | 99.94% |
659170 | tanmoymondal | sf_2_3.0_8_0.5_0.005737 | R | 1 day 18:03:36 hrs | 1 | 616.26 MB | 100.15% |
659171 | tanmoymondal | sf_2_3.0_8_0.5_0.005215 | R | 1 day 18:03:35 hrs | 1 | 618.10 MB | 99.92% |
659172 | tanmoymondal | sf_2_3.0_8_0.5_0.004741 | R | 1 day 18:03:31 hrs | 1 | 613.44 MB | 100.07% |
659173 | tanmoymondal | sf_2_3.0_8_0.5_0.004310 | R | 1 day 18:03:27 hrs | 1 | 613.11 MB | 100.17% |
660833 | sganguly | C11-test | R | 5 days 14:44:45 hrs | 96 | 1.94 GB | 100.08% |
661045 | debs | CsVI3_final | R | 12:08:16 hrs | 96 | 17.98 GB | 100.06% |
Cluster 12
Nodes Summary
Total Number of CPUs: 1056

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
job-busy | 25 | 600 | 0 | 0.00 |
free | 18 | 66 | 366 | 34.66 |
down | 1 | 24 | 0 | 0.00 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
node9 | 3 | |
node11 | 7 | |
node12 | 23 | |
node17 | 16 | |
node19 | 12 | |
node22 | 24 | |
node24 | 24 | |
node25 | 24 | |
node6 | 24 | |
node7 | 24 | |
node27 | 24 | |
node13 | 17 | |
node28 | 24 | |
node29 | 24 | |
node30 | 24 | |
node31 | 24 | |
node32 | 24 | |
node34 | 24 | |
Total | 366 |
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
debarupa11 | 86 | 86 | 8.14% | 64 days 04:17:39 hrs | 100.03% | 100.03% | ||
shuvam | 4 | 4 | 0.38% | 28 days 03:55:08 hrs | 99.77% | 99.85% | ||
souravmal | 7 | 168 | 15.91% | 2 days 07:09:20 hrs | 97.79% | 97.79% | ||
subhojit | 2 | 48 | 4.55% | 1 day 21:40:44 hrs | 100.06% | 100.06% | ||
bikashvbu | 4 | 96 | 9.09% | 21:35:47 hrs | 113.27% | 106.57% | ||
psen | 7 | 168 | 15.91% | 1 day 09:26:09 hrs | 99.94% | 99.94% | ||
arijeetsarangi | 2 | 96 | 9.09% | 0:02:10 hrs | 98.69% | 98.71% |
Cluster 12
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
node2 | workq | 24 | job-busy | 24 | 0 | |
node3 | workq | 24 | job-busy | 24 | 0 | |
node4 | workq | 24 | job-busy | 24 | 0 | |
node5 | workq | 24 | job-busy | 24 | 0 | |
node8 | workq | 24 | job-busy | 24 | 0 | |
node9 | workq | 24 | free | 21 | 3 | |
node10 | workq | 24 | job-busy | 24 | 0 | |
node11 | workq | 24 | free | 17 | 7 | |
node12 | workq | 24 | free | 1 | 23 | |
node14 | workq | 24 | job-busy | 24 | 0 | |
node15 | workq | 24 | job-busy | 24 | 0 | |
node16 | workq | 24 | job-busy | 24 | 0 | |
node17 | workq | 24 | free | 8 | 16 | |
node18 | workq | 24 | job-busy | 24 | 0 | |
node19 | workq | 24 | free | 12 | 12 | |
node20 | workq | 24 | job-busy | 24 | 0 | |
node21 | workq | 24 | job-busy | 24 | 0 | |
node22 | workq | 24 | free | 0 | 24 | |
node1 | workq | 24 | job-busy | 24 | 0 | |
node23 | workq | 24 | job-busy | 24 | 0 | |
node24 | workq | 24 | free | 0 | 24 | |
node25 | workq | 24 | free | 0 | 24 | |
node6 | workq | 24 | free | 0 | 24 | |
node7 | workq | 24 | free | 0 | 24 | |
node26 | workq | 24 | down | 0 | 0 | |
node27 | workq | 24 | free | 0 | 24 | |
node13 | workq | 24 | free | 7 | 17 | |
node28 | workq | 24 | free | 0 | 24 | |
node29 | workq | 24 | free | 0 | 24 | |
node30 | workq | 24 | free | 0 | 24 | |
node31 | workq | 24 | free | 0 | 24 | |
node32 | workq | 24 | free | 0 | 24 | |
node34 | workq | 24 | free | 0 | 24 | |
node35 | workq | 24 | job-busy | 24 | 0 | |
node36 | workq | 24 | job-busy | 24 | 0 | |
node37 | workq | 24 | job-busy | 24 | 0 | |
node38 | workq | 24 | job-busy | 24 | 0 | |
node39 | workq | 24 | job-busy | 24 | 0 | |
node40 | workq | 24 | job-busy | 24 | 0 | |
node41 | workq | 24 | job-busy | 24 | 0 | |
node42 | workq | 24 | job-busy | 24 | 0 | |
node33 | workq | 24 | job-busy | 24 | 0 | |
node43 | workq | 24 | job-busy | 24 | 0 | |
node44 | workq | 24 | job-busy | 24 | 0 | |
Cluster 12
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in use | Efficiency† |
---|---|---|---|---|---|---|---|---|
102710 | debarupa11 | workq | GMBF100000000 | R | 89 days 04:51:15 hrs | 1 | 6.76 MB | 99.46% |
104309 | debarupa11 | workq | Therm1.00 | R | 68 days 08:24:57 hrs | 1 | 7.28 MB | 100.05% |
104310 | debarupa11 | workq | Therm1.02 | R | 68 days 08:24:45 hrs | 1 | 9.28 MB | 100.05% |
104311 | debarupa11 | workq | Therm1.04 | R | 68 days 08:24:35 hrs | 1 | 9.26 MB | 100.05% |
104312 | debarupa11 | workq | Therm1.06 | R | 68 days 08:24:24 hrs | 1 | 7.27 MB | 100.05% |
104313 | debarupa11 | workq | Therm1.08 | R | 68 days 08:24:13 hrs | 1 | 9.28 MB | 100.06% |
104314 | debarupa11 | workq | Therm1.10 | R | 68 days 08:24:02 hrs | 1 | 9.27 MB | 100.05% |
104315 | debarupa11 | workq | Therm1.12 | R | 68 days 08:23:51 hrs | 1 | 7.28 MB | 100.05% |
104316 | debarupa11 | workq | Therm1.14 | R | 68 days 08:23:40 hrs | 1 | 7.27 MB | 100.05% |
104318 | debarupa11 | workq | Therm1.18 | R | 68 days 08:23:19 hrs | 1 | 7.27 MB | 100.05% |
104319 | debarupa11 | workq | Therm1.20 | R | 68 days 08:23:08 hrs | 1 | 7.28 MB | 100.05% |
104320 | debarupa11 | workq | Therm1.22 | R | 68 days 08:22:57 hrs | 1 | 7.28 MB | 100.05% |
104321 | debarupa11 | workq | Therm1.24 | R | 68 days 08:22:46 hrs | 1 | 7.28 MB | 100.05% |
104322 | debarupa11 | workq | Therm1.26 | R | 68 days 08:22:36 hrs | 1 | 9.26 MB | 100.05% |
104323 | debarupa11 | workq | Therm1.28 | R | 68 days 08:22:25 hrs | 1 | 7.27 MB | 100.05% |
104324 | debarupa11 | workq | Therm1.30 | R | 68 days 08:22:14 hrs | 1 | 7.27 MB | 100.05% |
104325 | debarupa11 | workq | Therm1.32 | R | 68 days 08:22:03 hrs | 1 | 9.27 MB | 100.05% |
104326 | debarupa11 | workq | Therm1.34 | R | 68 days 08:21:53 hrs | 1 | 9.27 MB | 100.05% |
104327 | debarupa11 | workq | Therm1.36 | R | 68 days 08:21:42 hrs | 1 | 9.27 MB | 100.05% |
104328 | debarupa11 | workq | Therm1.38 | R | 68 days 08:21:31 hrs | 1 | 7.28 MB | 100.05% |
104330 | debarupa11 | workq | Therm1.42 | R | 68 days 08:21:09 hrs | 1 | 7.28 MB | 100.06% |
104331 | debarupa11 | workq | Therm1.44 | R | 68 days 08:20:58 hrs | 1 | 7.29 MB | 100.05% |
104332 | debarupa11 | workq | Therm1.46 | R | 68 days 08:20:47 hrs | 1 | 9.28 MB | 100.05% |
104333 | debarupa11 | workq | Therm1.48 | R | 68 days 08:21:03 hrs | 1 | 9.30 MB | 100.06% |
104335 | debarupa11 | workq | Therm1.52 | R | 68 days 08:20:42 hrs | 1 | 7.29 MB | 100.06% |
104337 | debarupa11 | workq | Therm1.56 | R | 68 days 08:20:20 hrs | 1 | 7.30 MB | 100.06% |
104338 | debarupa11 | workq | Therm1.58 | R | 68 days 08:20:09 hrs | 1 | 7.30 MB | 100.06% |
104339 | debarupa11 | workq | Therm1.60 | R | 68 days 08:19:58 hrs | 1 | 9.30 MB | 100.06% |
104340 | debarupa11 | workq | Therm1.62 | R | 68 days 08:19:48 hrs | 1 | 7.29 MB | 100.06% |
104341 | debarupa11 | workq | Therm1.64 | R | 68 days 08:19:37 hrs | 1 | 7.29 MB | 100.06% |
104342 | debarupa11 | workq | Therm1.66 | R | 68 days 08:19:26 hrs | 1 | 7.30 MB | 100.06% |
104343 | debarupa11 | workq | Therm1.68 | R | 68 days 08:19:15 hrs | 1 | 7.30 MB | 100.07% |
104344 | debarupa11 | workq | Therm1.70 | R | 68 days 08:19:04 hrs | 1 | 9.29 MB | 100.06% |
104345 | debarupa11 | workq | Therm1.72 | R | 68 days 08:18:54 hrs | 1 | 9.29 MB | 100.06% |
104350 | debarupa11 | workq | Therm1.82 | R | 68 days 08:17:59 hrs | 1 | 7.29 MB | 100.07% |
105277 | debarupa11 | workq | T3.00 | R | 61 days 08:35:48 hrs | 1 | 6.99 MB | 100.06% |
105278 | debarupa11 | workq | T3.02 | R | 61 days 08:35:38 hrs | 1 | 8.98 MB | 100.06% |
105279 | debarupa11 | workq | T3.04 | R | 61 days 08:35:28 hrs | 1 | 8.97 MB | 100.06% |
105280 | debarupa11 | workq | T3.06 | R | 61 days 08:35:18 hrs | 1 | 6.99 MB | 100.07% |
105281 | debarupa11 | workq | T3.08 | R | 61 days 08:35:08 hrs | 1 | 6.99 MB | 100.06% |
105282 | debarupa11 | workq | T3.10 | R | 61 days 08:34:58 hrs | 1 | 6.99 MB | 100.06% |
105283 | debarupa11 | workq | T3.12 | R | 61 days 08:34:48 hrs | 1 | 6.98 MB | 100.07% |
105284 | debarupa11 | workq | T3.14 | R | 61 days 08:34:38 hrs | 1 | 8.99 MB | 100.06% |
105285 | debarupa11 | workq | T3.16 | R | 61 days 08:34:28 hrs | 1 | 8.98 MB | 100.06% |
105286 | debarupa11 | workq | T3.18 | R | 61 days 08:34:18 hrs | 1 | 7.00 MB | 100.07% |
105287 | debarupa11 | workq | T3.20 | R | 61 days 08:34:08 hrs | 1 | 6.97 MB | 100.07% |
105288 | debarupa11 | workq | T3.22 | R | 61 days 08:33:58 hrs | 1 | 6.99 MB | 100.07% |
105289 | debarupa11 | workq | T3.24 | R | 61 days 08:35:18 hrs | 1 | 6.98 MB | 99.87% |
105290 | debarupa11 | workq | T3.26 | R | 61 days 08:35:08 hrs | 1 | 6.97 MB | 99.88% |
105291 | debarupa11 | workq | T3.28 | R | 61 days 08:34:58 hrs | 1 | 6.98 MB | 99.88% |
105292 | debarupa11 | workq | T3.30 | R | 61 days 08:34:48 hrs | 1 | 6.98 MB | 99.87% |
105294 | debarupa11 | workq | T3.34 | R | 61 days 08:34:28 hrs | 1 | 6.98 MB | 99.88% |
105295 | debarupa11 | workq | T3.36 | R | 61 days 08:34:18 hrs | 1 | 8.97 MB | 99.88% |
105296 | debarupa11 | workq | T3.38 | R | 61 days 08:34:08 hrs | 1 | 6.97 MB | 99.88% |
105297 | debarupa11 | workq | T3.40 | R | 61 days 08:33:58 hrs | 1 | 6.99 MB | 99.87% |
105298 | debarupa11 | workq | T3.42 | R | 61 days 08:33:47 hrs | 1 | 8.98 MB | 99.88% |
105299 | debarupa11 | workq | T3.44 | R | 61 days 08:33:37 hrs | 1 | 6.99 MB | 99.88% |
105300 | debarupa11 | workq | T3.46 | R | 61 days 08:33:26 hrs | 1 | 6.98 MB | 99.87% |
105301 | debarupa11 | workq | T3.48 | R | 61 days 08:31:45 hrs | 1 | 6.99 MB | 100.07% |
105302 | debarupa11 | workq | T3.50 | R | 61 days 08:31:35 hrs | 1 | 6.98 MB | 100.07% |
105303 | debarupa11 | workq | T3.52 | R | 61 days 08:31:25 hrs | 1 | 6.98 MB | 100.07% |
105304 | debarupa11 | workq | T3.54 | R | 61 days 08:31:15 hrs | 1 | 6.98 MB | 100.07% |
105305 | debarupa11 | workq | T3.56 | R | 61 days 08:31:05 hrs | 1 | 6.99 MB | 100.07% |
105306 | debarupa11 | workq | T3.58 | R | 61 days 08:30:55 hrs | 1 | 6.97 MB | 100.07% |
105307 | debarupa11 | workq | T3.60 | R | 61 days 08:30:45 hrs | 1 | 6.98 MB | 100.07% |
105308 | debarupa11 | workq | T3.62 | R | 61 days 08:30:35 hrs | 1 | 6.98 MB | 100.07% |
105309 | debarupa11 | workq | T3.64 | R | 61 days 08:31:22 hrs | 1 | 6.98 MB | 100.07% |
105310 | debarupa11 | workq | T3.66 | R | 61 days 08:31:12 hrs | 1 | 6.98 MB | 100.07% |
105311 | debarupa11 | workq | T3.68 | R | 61 days 08:31:02 hrs | 1 | 6.98 MB | 100.07% |
105312 | debarupa11 | workq | T3.70 | R | 61 days 08:30:52 hrs | 1 | 8.98 MB | 100.07% |
105313 | debarupa11 | workq | T3.72 | R | 61 days 08:30:42 hrs | 1 | 6.97 MB | 100.07% |
105314 | debarupa11 | workq | T3.74 | R | 61 days 08:30:32 hrs | 1 | 6.98 MB | 100.07% |
105315 | debarupa11 | workq | T3.76 | R | 61 days 08:30:22 hrs | 1 | 6.98 MB | 100.07% |
105316 | debarupa11 | workq | T3.78 | R | 61 days 08:30:12 hrs | 1 | 6.98 MB | 100.07% |
105317 | debarupa11 | workq | T3.80 | R | 61 days 08:30:02 hrs | 1 | 8.96 MB | 100.07% |
105318 | debarupa11 | workq | T3.82 | R | 61 days 08:29:52 hrs | 1 | 6.99 MB | 100.07% |
105319 | debarupa11 | workq | T3.84 | R | 61 days 08:29:42 hrs | 1 | 6.99 MB | 100.07% |
105320 | debarupa11 | workq | T3.86 | R | 61 days 08:29:33 hrs | 1 | 6.98 MB | 100.07% |
105321 | debarupa11 | workq | T3.88 | R | 61 days 08:28:48 hrs | 1 | 7.00 MB | 100.07% |
105322 | debarupa11 | workq | T3.90 | R | 61 days 08:28:38 hrs | 1 | 8.99 MB | 100.07% |
105323 | debarupa11 | workq | T3.92 | R | 61 days 08:28:28 hrs | 1 | 6.98 MB | 100.07% |
105324 | debarupa11 | workq | T3.94 | R | 61 days 08:28:18 hrs | 1 | 7.00 MB | 100.07% |
105325 | debarupa11 | workq | T3.96 | R | 61 days 08:28:08 hrs | 1 | 6.99 MB | 100.07% |
105326 | debarupa11 | workq | T3.98 | R | 61 days 08:27:58 hrs | 1 | 7.00 MB | 100.07% |
105327 | debarupa11 | workq | T4.00 | R | 61 days 08:27:48 hrs | 1 | 6.98 MB | 100.07% |
109963 | debarupa11 | workq | 5dcheck0.00 | R | 38 days 13:18:07 hrs | 1 | 7.07 MB | 100.05% |
110602 | shuvam | workq | 0.40_0.80_0.10 | R | 34 days 17:28:13 hrs | 1 | 12.48 MB | 100.07% |
110629 | shuvam | workq | 0.30_0.60_0.20 | R | 34 days 17:20:57 hrs | 1 | 332.94 MB | 100.06% |
115965 | shuvam | workq | two_qbit | R | 27 days 02:36:05 hrs | 1 | 77.74 MB | 99.61% |
116471 | shuvam | workq | commutativity_check | R | 16 days 02:15:19 hrs | 1 | 12.70 MB | 99.34% |
117086 | souravmal | workq | batch-10 | R | 2 days 07:10:05 hrs | 24 | 13.49 GB | 100.00% |
117087 | souravmal | workq | batch-11 | R | 2 days 07:09:50 hrs | 24 | 14.53 GB | 93.77% |
117088 | souravmal | workq | batch-12 | R | 2 days 07:09:56 hrs | 24 | 13.80 GB | 100.01% |
117089 | souravmal | workq | batch-6 | R | 2 days 07:09:37 hrs | 24 | 12.22 GB | 100.01% |
117090 | souravmal | workq | batch-7 | R | 2 days 07:08:59 hrs | 24 | 16.70 GB | 100.00% |
117091 | souravmal | workq | batch-8 | R | 2 days 07:08:46 hrs | 24 | 15.51 GB | 93.07% |
117092 | souravmal | workq | batch-9 | R | 2 days 07:08:09 hrs | 24 | 12.16 GB | 97.65% |
117114 | subhojit | workq | test | R | 1 day 21:48:17 hrs | 24 | 3.76 GB | 100.04% |
117115 | subhojit | workq | test | R | 1 day 21:33:11 hrs | 24 | 3.60 GB | 100.08% |
117123 | bikashvbu | workq | bik1 | R | 1 day 15:36:06 hrs | 24 | 7.52 GB | 100.01% |
117139 | psen | workq | batch-18 | R | 1 day 10:12:01 hrs | 24 | 10.55 GB | 100.03% |
117140 | psen | workq | batch-19 | R | 1 day 09:46:40 hrs | 24 | 16.79 GB | 100.00% |
117141 | psen | workq | batch-20 | R | 1 day 09:45:02 hrs | 24 | 14.29 GB | 100.00% |
117142 | psen | workq | batch-21 | R | 1 day 09:45:30 hrs | 24 | 13.98 GB | 99.57% |
117143 | psen | workq | batch-22 | R | 1 day 09:07:20 hrs | 24 | 16.04 GB | 100.00% |
117144 | psen | workq | batch-23 | R | 1 day 09:06:45 hrs | 24 | 16.41 GB | 100.00% |
117145 | psen | workq | batch-24 | R | 1 day 08:19:46 hrs | 24 | 12.22 GB | 100.01% |
117149 | bikashvbu | workq | Tuya | R | 18:01:30 hrs | 48 | 5.00 GB | 99.87% |
117214 | bikashvbu | workq | Mousumi | R | 10:43:50 hrs | 16 | 5.09 GB | 179.94% |
117215 | bikashvbu | workq | Mousumi | R | 10:44:29 hrs | 8 | 3.06 GB | 100.07% |
117300 | arijeetsarangi | workq | test | R | 00:02:31 hrs | 48 | 11.14 GB | 98.79% |
117301 | arijeetsarangi | workq | test | R | 00:01:50 hrs | 48 | 10.42 GB | 98.60% |
Cluster 13
Nodes Summary
Total Number of CPUs: 1024

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
job-busy | 25 | 800 | 0 | 0.00 |
free | 5 | 32 | 128 | 12.50 |
down | 2 | 64 | 0 | 0.00 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
c13node5 | 4 | |
c13node12 | 28 | |
neutrino | ||
c13node28 | 32 | |
c13node30 | 32 | |
c13node31 | 32 | |
Total | 128 |
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
souravmal | 4 | 128 | 12.50% | 2 days 07:21:34 hrs | 97.83% | 97.83% | ||
prajna | 1 | 64 | 6.25% | 1 day 12:53:27 hrs | 99.67% | 99.67% | ||
psen | 4 | 128 | 12.50% | 1 day 10:41:06 hrs | 98.34% | 98.33% | ||
vanshreep | 1 | 128 | 12.50% | 01:44:21 hrs | 99.64% | 99.64% | ||
pradhi | 1 | 128 | 12.50% | 04:41:15 hrs | 98.43% | 98.43% | ||
ayushitripathi | 2 | 64 | 6.25% | 08:14:54 hrs | 99.64% | 99.64% | ||
shilendra | 1 | 64 | 6.25% | 01:58:21 hrs | 99.63% | 99.63% |
Job State | Queue | User | No. of Jobs | No. of CPUs requested |
---|---|---|---|---|
Q | ||||
workq | ||||
sankalpa | 1 | 128 | ||
swapnild | 1 | 128 | ||
shilendra | 1 | 64 |
Cluster 13
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
c13node1 | workq | 32 | job-busy | 32 | 0 | |
c13node2 | workq | 32 | job-busy | 32 | 0 | |
c13node3 | workq | 32 | job-busy | 32 | 0 | |
c13node4 | workq | 32 | job-busy | 32 | 0 | |
c13node5 | workq | 32 | free | 28 | 4 | |
c13node7 | workq | 32 | job-busy | 32 | 0 | |
c13node8 | workq | 32 | job-busy | 32 | 0 | |
c13node9 | workq | 32 | job-busy | 32 | 0 | |
c13node10 | workq | 32 | job-busy | 32 | 0 | |
c13node11 | workq | 32 | job-busy | 32 | 0 | |
c13node12 | workq | 32 | free | 4 | 28 | |
c13node14 | workq | 32 | job-busy | 32 | 0 | |
c13node15 | workq | 32 | job-busy | 32 | 0 | |
c13node0 | workq | 32 | job-busy | 32 | 0 | |
c13node16 | workq | 32 | job-busy | 32 | 0 | |
c13node17 | workq | 32 | job-busy | 32 | 0 | |
c13node18 | workq | 32 | job-busy | 32 | 0 | |
c13node6 | workq | 32 | down | 0 | 0 | |
c13node19 | workq | 32 | job-busy | 32 | 0 | |
c13node20 | workq | 32 | job-busy | 32 | 0 | |
c13node22 | workq | 32 | job-busy | 32 | 0 | |
c13node13 | workq | 32 | job-busy | 32 | 0 | |
c13node23 | workq | 32 | job-busy | 32 | 0 | |
c13node21 | workq | 32 | job-busy | 32 | 0 | |
c13node24 | workq | 32 | job-busy | 32 | 0 | |
c13node25 | workq | 32 | job-busy | 32 | 0 | |
c13node26 | workq | 32 | job-busy | 32 | 0 | |
c13node27 | workq | 32 | job-busy | 32 | 0 | |
c13node28 | neutrino | 32 | free | 0 | 32 | |
c13node29 | neutrino | 32 | down | 0 | 0 | |
c13node30 | neutrino | 32 | free | 0 | 32 | |
c13node31 | neutrino | 32 | free | 0 | 32 | |
Cluster 13
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in use | Efficiency† |
---|---|---|---|---|---|---|---|---|
405653[] | mab5 | workq | spawn_test | B | | 4 | | 0.00% |
405797 | souravmal | workq | batch-2 | R | 2 days 07:22:01 hrs | 32 | 19.99 GB | 96.26% |
405798 | souravmal | workq | batch-3 | R | 2 days 07:21:46 hrs | 32 | 21.24 GB | 99.36% |
405799 | souravmal | workq | batch-4 | R | 2 days 07:21:19 hrs | 32 | 17.16 GB | 99.57% |
405800 | souravmal | workq | batch-5 | R | 2 days 07:21:11 hrs | 32 | 20.44 GB | 96.11% |
405811 | prajna | workq | E_0.05 | R | 1 day 12:53:27 hrs | 64 | 162.32 GB | 99.67% |
405817 | psen | workq | batch-14 | R | 1 day 11:03:05 hrs | 32 | 18.22 GB | 94.95% |
405818 | psen | workq | batch-15 | R | 1 day 10:54:50 hrs | 32 | 21.81 GB | 99.50% |
405819 | psen | workq | batch-16 | R | 1 day 10:23:27 hrs | 32 | 16.11 GB | 99.41% |
405820 | psen | workq | batch-17 | R | 1 day 10:23:05 hrs | 32 | 16.48 GB | 99.50% |
405828 | vanshreep | workq | psb4.5t_pbt | R | 01:44:21 hrs | 128 | 34.63 GB | 99.64% |
405834 | pradhi | workq | mos2v_r9 | R | 04:41:15 hrs | 128 | 16.69 GB | 98.43% |
405836 | ayushitripathi | workq | CsCu2I3 | R | 08:14:52 hrs | 32 | 34.51 GB | 99.64% |
405837 | ayushitripathi | workq | CsCu2I3 | R | 08:14:58 hrs | 32 | 34.44 GB | 99.64% |
405839 | sankalpa | workq | M-X | Q | | 128 | | 0.00% |
405846 | swapnild | workq | wdodp321 | Q | | 128 | | 0.00% |
405848 | shilendra | workq | MMW_PH | R | 01:58:21 hrs | 64 | 46.32 GB | 99.63% |
405849 | shilendra | workq | opt_MBA2 | Q | | 64 | | 0.00% |
Cluster 14
Nodes Summary
Total Number of CPUs: 1040

State | No. of Nodes | No. of CPUs occupied/down | No. of CPUs free | % of total CPUs free |
---|---|---|---|---|
job-busy | 12 | 672 | 0 | 0.00 |
free | 7 | 0 | 368 | 35.38 |
Free CPUs (nodewise)
Queue | Node name | No. of free CPUs |
---|---|---|
workq | ||
node11 | 56 | |
node12 | 56 | |
node13 | 56 | |
node14 | 56 | |
node15 | 56 | |
node16 | 56 | |
neutrino | ||
gpu1 | 32 | |
Total | 368 |
Jobs Summary
† Avg. Efficiency per CPU = ∑(CPU time / Walltime) / ∑(No. of CPUs assigned)
†† Overall Efficiency = ∑(CPU time) / ∑(Walltime × No. of CPUs assigned)
[Sums are over all the running jobs.]
Job State | Queue | User | No. of Jobs | No. of CPUs in use | % of total CPUs in use | Avg. Walltime per CPU | Avg. Efficiency per CPU† | Overall Efficiency†† |
---|---|---|---|---|---|---|---|---|
R | ||||||||
workq | ||||||||
mab5 | 1 | 112 | 10.77% | 3 days 18:18:23 hrs | 99.06% | 99.06% | ||
psen | 2 | 112 | 10.77% | 2 days 07:27:06 hrs | 99.43% | 99.43% | ||
souravmal | 2 | 112 | 10.77% | 1 day 08:00:27 hrs | 99.45% | 99.39% | ||
prajna | 1 | 112 | 10.77% | 1 day 11:14:46 hrs | 99.50% | 99.50% | ||
swapnild | 1 | 112 | 10.77% | 14:54:52 hrs | 99.55% | 99.55% | ||
vanshreep | 1 | 112 | 10.77% | 03:20:56 hrs | 99.60% | 99.60% |
Job State | Queue | User | No. of Jobs | No. of CPUs requested |
---|---|---|---|---|
Q | ||||
workq | ||||
mab5 | 1 | 56 | ||
souravmal | 4 | 224 | ||
swapnild | 1 | 112 | ||
vanshreep | 1 | 112 |
Cluster 14
Nodes Status
Node | Queue | np | state | No. of CPUs occupied | No. of free CPUs |
---|---|---|---|---|---|
node1 | workq | 56 | job-busy | 56 | 0 | |
node2 | workq | 56 | job-busy | 56 | 0 | |
node3 | workq | 56 | job-busy | 56 | 0 | |
node4 | workq | 56 | job-busy | 56 | 0 | |
node5 | workq | 56 | job-busy | 56 | 0 | |
node6 | workq | 56 | job-busy | 56 | 0 | |
node7 | workq | 56 | job-busy | 56 | 0 | |
node8 | workq | 56 | job-busy | 56 | 0 | |
node9 | workq | 56 | job-busy | 56 | 0 | |
node10 | workq | 56 | job-busy | 56 | 0 | |
node11 | workq | 56 | free | 0 | 56 | |
node12 | workq | 56 | free | 0 | 56 | |
node13 | workq | 56 | free | 0 | 56 | |
node14 | workq | 56 | free | 0 | 56 | |
node15 | workq | 56 | free | 0 | 56 | |
node16 | workq | 56 | free | 0 | 56 | |
node17 | workq | 56 | job-busy | 56 | 0 | |
node18 | workq | 56 | job-busy | 56 | 0 | |
gpu1 | neutrino | 32 | free | 0 | 32 | |
Cluster 14
Jobs Status
† Efficiency (of parallelization) = CPU time / (Walltime × No. of CPUs assigned)
Job ID | User | Queue | Job Name | Job State | Walltime Used | No. of CPUs in use | Memory in use | Efficiency† |
---|---|---|---|---|---|---|---|---|
21516.c14m1.clusternet | mab5@c14m2.clusternet | workq | _p1_SR_BR_ | R | 3 days 18:18:23 hrs | 112 | 80.15 GB | 99.06% |
21607.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-0 | R | 2 days 07:27:07 hrs | 56 | 118.52 GB | 99.51% |
21608.c14m1.clusternet | psen@c14m2.clusternet | workq | batch-1 | R | 2 days 07:27:06 hrs | 56 | 69.36 GB | 99.35% |
21610.c14m1.clusternet | souravmal@c14m2.clusternet | workq | batch-13 | R | 2 days 07:05:08 hrs | 56 | 75.63 GB | 99.37% |
21654.c14m1.clusternet | prajna@c14m2.clusternet | workq | Prstn_HSE06 | R | 1 day 11:14:46 hrs | 112 | 46.20 GB | 99.50% |
21664.c14m1.clusternet | mab5@c14m2.clusternet | workq | _p4_SR_ | Q | | 56 | | 0.00% |
21694.c14m1.clusternet | souravmal@c14m2.clusternet | workq | site2 | R | 08:55:46 hrs | 56 | 38.13 GB | 99.53% |
21695.c14m1.clusternet | swapnild@c14m2.clusternet | workq | wdv | R | 14:54:52 hrs | 112 | 78.75 GB | 99.55% |
21696.c14m1.clusternet | souravmal@c14m2.clusternet | workq | site3 | Q | | 56 | | 0.00% |
21698.c14m1.clusternet | souravmal@c14m2.clusternet | workq | vcrelax | Q | | 56 | | 0.00% |
21699.c14m1.clusternet | souravmal@c14m2.clusternet | workq | site4 | Q | | 56 | | 0.00% |
21700.c14m1.clusternet | souravmal@c14m2.clusternet | workq | site5 | Q | | 56 | | 0.00% |
21706.c14m1.clusternet | swapnild@c14m2.clusternet | workq | wdp | Q | | 112 | | 0.00% |
21712.c14m1.clusternet | vanshreep@c14m2.clusternet | workq | Gr | R | 03:20:56 hrs | 112 | 134.31 GB | 99.60% |
21713.c14m1.clusternet | vanshreep@c14m2.clusternet | workq | Sxk | Q | | 112 | | 0.00% |