Review: world’s first Supermicro 2026TT chassis

10 March 2010, by: Development

Just after our house style was redesigned in blue, we technicians are going green. Our intention was to modernize our server park with exclusively 2.5″ chassis, and we had long been waiting for a 2U, 2.5″ chassis that would accommodate a lot of SSDs. We are already preparing a new database server (fitted with 24 x 2.5″ Intel X25-E SSDs) based on the Supermicro SC826 chassis, but our Xen cluster would love something similar too!

The wait is over: we were shipped the world’s very first Supermicro 2026TT-HIBXRF 2U server. While similar blades more or less existed before, the specialty of this configuration is that all 6 SATA channels are connected to the backplane. This lets us use every SATA port for LVM setups built on SSDs, for the sake of both performance and power savings.

[Photo: front1]

[Photo: front2]

[Photo: swap bays and switching panel details]

[Photo: rear1]

[Photo: rear connection details]

These are the goodies we placed in each blade:

  • CPU: 2 x Xeon X5570 (Nehalem, 4 cores, 8 threads, 2.93 GHz, 95 W)
  • RAM: 6 x Kingston KVR1333D3D4R9S/4G
  • HDD: 1 x Seagate ST9500530NS 500GB SATA
  • SSD: 4 x Intel X25-M Postville SSDSA2MH160G2C1 (@FW 02HD)

On the HDD we installed Debian Lenny with the Xen 3.2-1-amd64 hypervisor; dom0 and the domUs run 2.6.26-2-xen-amd64 kernels, and the virtual machines will live on LVM volumes created on the 4 SSDs. Since the Intel SSDs are based on MLC cells, we are taking a risk with potentially write-intensive workloads; however, testing on our workstations showed that the life expectancy of this setup should be a couple of years. Manufacturers have longer-lived MLC SSDs planned for the end of this year, so replacements will be at hand pretty soon.
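
How do those LVM volumes end up under a guest? As a minimal sketch (the guest name vm01 and the volume vm01-disk are made up for illustration, not taken from our actual cluster), a domU on this box is described roughly like this:

# /etc/xen/vm01.cfg - hypothetical example
kernel  = '/boot/vmlinuz-2.6.26-2-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-amd64'
memory  = 2048
vcpus   = 2
name    = 'vm01'
# the guest root disk is a logical volume striped over the 4 SSDs
disk    = ['phy:/dev/xenvg-ssd/vm01-disk,xvda,w']
vif     = ['bridge=xenbr0']
root    = '/dev/xvda1 ro'

Starting the guest then comes down to xm create /etc/xen/vm01.cfg.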

[Photo: BIOS]

What about power usage of such a server? I measured it quickly with no optimizations in the Linux kernel and with the Hyper-Threading option switched off in the BIOS:

  • STANDBY: 30 Watt
  • 1 BLADE: 210 Watt
  • 2 BLADES: 347 Watt
  • 3 BLADES: 499 Watt
  • 4 BLADES: 647 Watt

This comes down to ~150 Watt per idle blade (each extra blade adds between 137 and 152 Watt to the total), and ~50 Watt for 4 x HDD, 16 x SSD, and some case fans – pretty cool, isn’t it? What this setup will do under high load will be determined later; for now, the cooling conditions in our test room were far from optimal: with an ambient temperature of 27 degrees Celsius, the temperature within the chassis rose to 53 degrees. We will have to wait with stress testing until the server is at its final destination, with much better cooling conditions.

What about Xen, LVM and SSD performance? We are not done yet, but measurements in dom0 already showed pretty nice figures. Since a hardware RAID solution in these twin blades is more or less off limits, software RAID is the best alternative. From experience we know that the XFS file system performs best in benchmarks (with the I/O schedulers set to deadline). After some sweet-spot measurements with lvm2, striping across the 4 SSDs (effectively RAID 0), we found that the following settings work best:

# label the four SSDs as LVM physical volumes; 511K of metadata aligns the data start to 512K
pvcreate --metadatasize 511K /dev/sdb /dev/sdc /dev/sdd /dev/sde
# group them into a single volume group
vgcreate xenvg-ssd /dev/sdb /dev/sdc /dev/sdd /dev/sde
# carve out a 40 GB volume striped over all 4 PVs with a 256K stripe size
lvcreate -i4 -I256 -L40G -n benchmark xenvg-ssd
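
To complete the picture, this is roughly how the scheduler and file system side of that setup looks; take it as a sketch (the noatime mount option is our habit rather than a measured sweet spot), with /bench being the mount point used in the tests below:

# switch each SSD to the deadline I/O scheduler
for dev in sdb sdc sdd sde; do
    echo deadline > /sys/block/$dev/queue/scheduler
done
# create an XFS file system on the striped volume and mount it
mkfs.xfs /dev/xenvg-ssd/benchmark
mount -o noatime /dev/xenvg-ssd/benchmark /bench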

The figures below were not obtained with IOzone, IOMeter or similar benchmarks; we used our own tools, which do the trick just as well. For more information on those, see http://jdevelopment.nl/hardware/one-dvd-per-second/:

bm-flash:

Filling 4G before testing  ...   4096 MB done in 12 seconds (341 MB/sec).

Read Tests:

Block |   1 thread    |  10 threads   |  40 threads
 Size |  IOPS    BW   |  IOPS    BW   |  IOPS    BW
      |               |               |
 512B |  8695    4.2M | 58401   28.5M |153774   75.0M
   1K |  7712    7.5M | 54920   53.6M |148026  144.5M
   2K |  6455   12.6M | 46069   89.9M |134606  262.9M
   4K |  4909   19.1M | 35301  137.8M |103674  404.9M
   8K |  4516   35.2M | 32108  250.8M | 72833  569.0M
  16K |  3954   61.7M | 27518  429.9M | 43003  671.9M
  32K |  3262  101.9M | 19297  603.0M | 22875  714.8M
  64K |  2376  148.5M | 11136  696.0M | 11750  734.3M
 128K |  1665  208.1M |  5880  735.1M |  5933  741.7M
 256K |  1001  250.4M |  2979  744.7M |  2973  743.4M
 512K |   841  420.7M |  1415  707.5M |  1422  711.2M
   1M |   533  533.5M |   619  619.0M |   621  621.0M
   2M |   280  560.0M |   307  615.5M |   309  619.3M
   4M |   143  574.3M |   153  614.7M |   151  606.3M

Write Tests:

Block |   1 thread    |  10 threads   |  40 threads
 Size |  IOPS    BW   |  IOPS    BW   |  IOPS    BW
      |               |               |
 512B | 11062    5.4M | 21375   10.4M | 26693   13.0M
   1K |  6834    6.6M | 15384   15.0M | 22303   21.7M
   2K |  6244   12.1M | 13582   26.5M | 23145   45.2M
   4K |  7473   29.1M | 18849   73.6M | 25007   97.6M
   8K |  7106   55.5M | 24629  192.4M | 31830  248.6M
  16K |  7254  113.3M | 18285  285.7M | 23884  373.1M
  32K |  4842  151.3M |  8619  269.3M | 11580  361.8M
  64K |  2525  157.8M |  4604  287.7M |  5943  371.4M
 128K |  1319  164.8M |  2377  297.2M |  3048  381.0M
 256K |   561  140.4M |  1244  311.0M |  1531  382.7M
 512K |   368  184.0M |   745  372.8M |   778  389.3M
   1M |   335  335.2M |   381  381.8M |   401  401.5M
   2M |   174  348.1M |   192  385.7M |   210  421.0M
   4M |    91  364.7M |   103  414.0M |   107  428.3M

xdd:

Random READ tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  13639      6 | 120414     61 | 186044     95 |
     1024 |  14256     14 | 109734    112 | 181448    185 |
     2048 |  12669     25 |  95246    195 | 171345    350 |
     4096 |  10302     42 |  75704    310 | 132238    541 |
     8192 |   8591     70 |  55870    457 |  78980    647 |
    16384 |   7244    118 |  35797    586 |  43133    706 |
    32768 |   5786    189 |  21985    720 |  22711    744 |

Sequential READ tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  35796     18 | 119992     61 | 178309     91 |
     1024 |  34838     35 | 113584    116 | 170864    174 |
     2048 |  28590     58 |  97803    200 | 173524    355 |
     4096 |  19967     81 |  72748    297 | 134078    549 |
     8192 |  14151    115 |  57131    468 |  79959    655 |
    16384 |   9276    151 |  38128    624 |  43480    712 |
    32768 |   4460    146 |  22309    731 |  22812    747 |

Random WRITE tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  23271     11 |  33162     16 |  40870     20 |
     1024 |  16571     16 |  26695     27 |  37758     38 |
     2048 |  16747     34 |  25156     51 |  34664     70 |
     4096 |  14019     57 |  24817    101 |  29577    121 |
     8192 |  12817    104 |  25704    210 |  30310    248 |
    16384 |  11149    182 |  15612    255 |  23467    384 |
    32768 |   6613    216 |   8525    279 |  12281    402 |

Sequential WRITE tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  29471     15 |  36580     18 |  41892     21 |
     1024 |  26631     27 |  35478     36 |  36696     37 |
     2048 |  23431     47 |  32128     65 |  39953     81 |
     4096 |  22747     93 |  33924    138 |  40566    166 |
     8192 |  19811    162 |  23773    194 |  38880    318 |
    16384 |  12436    203 |  16751    274 |  24396    399 |
    32768 |   7470    244 |   8978    294 |  13039    427 |

Random READ/WRITE [90/10] tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  14961      7 |  57521     29 |  85189     43 |
     1024 |  12284     12 |  43737     44 |  73368     75 |
     2048 |   9762     19 |  33229     68 |  66863    136 |
     4096 |   7366     30 |  27530    112 |  58668    240 |
     8192 |   6298     51 |  25379    207 |  48998    401 |
    16384 |   5283     86 |  20828    341 |  29309    480 |
    32768 |   4019    131 |  15410    504 |  19318    633 |

Sequential READ/WRITE [90/10] tests:

          |      1 Thread |    10 Threads |    40 Threads |
Blocksize |   IOPS   MB/s |   IOPS   MB/s |   IOPS   MB/s |
          |               |               |               |
      512 |  14278      7 |  72588     37 |  88427     45 |
     1024 |  10767     11 |  49585     50 |  73693     75 |
     2048 |   9110     18 |  34068     69 |  72447    148 |
     4096 |   7592     31 |  27516    112 |  66647    272 |
     8192 |   6271     51 |  26221    214 |  53545    438 |
    16384 |   5512     90 |  22818    373 |  33087    542 |
    32768 |   4138    135 |  16400    537 |  20735    679 |

sequential (dd / cp):

# dd if=/dev/zero of=/bench/xdd/S1 bs=8K count=2M
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 34.8562 s, 493 MB/s
# dd of=/dev/zero if=/bench/xdd/S1 bs=8K
2097152+0 records in
2097152+0 records out
17179869184 bytes (17 GB) copied, 27.2719 s, 630 MB/s
#  time cp /bench/xdd/S1 /bench/xdd/S0
real	1m8.972s
user	0m0.468s
sys	0m13.049s

That cp pushes the 16 GiB file through the volume in 69 seconds, i.e. roughly 250 MB/s reading plus 250 MB/s writing at the same time.
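
One caveat with the dd read figure: each blade has 24 GB of RAM, so part of the 16 GiB test file may still be served from the page cache. GNU dd can bypass the cache with direct I/O, which gives more conservative numbers; something along these lines (not part of the run above):

dd if=/bench/xdd/S1 of=/dev/null bs=8K iflag=direct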

More to come…

One comment on “Review: world’s first Supermicro 2026TT chassis”

  1. Pete says:

    I noticed the 186044 IOPS number in one of the readings. 186044 IOPS! Think of this…

    A good year ago this was unthinkable to reach with anything but the highest-performing hardware. Less than 2 years ago we couldn’t obtain this at all, no matter what money we threw at it.

    If you’re sure these numbers are correct, maybe you should have made this the headline of your article 😉
