Currently in the process of building my second microserver to provide iSCSI storage to my ESXi lab. I have fitted a total of 13 disks inside, powered it up, and run initial benchmarks without it blowing up.
Using:
- IBM ServeRAID 8-port SAS controller in the PCIe x16 slot
- 4x 500GB 3.5" 7200RPM in the front bays, connected to the motherboard SAS port
- 6x hot-swap 500GB 2.5" 7200RPM in the 5.25" optical bay, ports 1-6 on the IBM SAS controller
- 2x 500GB 2.5" 7200RPM on top of the optical bay, ports 7-8 on the IBM SAS controller
- 1x 120GB SSD below the optical bay, connected to an onboard SATA port
Running ESXi with the 120GB SSD as a datastore; the 12x 500GB disks have been passed through to a VM as RDMs.
Initial benchmarks on a minimal CentOS Linux install showed positive results: all 12 disks in one large 5.5TB Linux software RAID 5 volume, with SCST as the iSCSI target. Local reads were around 800MB/s, writes around 450MB/s.
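As a quick sanity check on the figures above, the usable size of a RAID 5 array is the total capacity minus one disk's worth of parity. A minimal sketch (the function name is my own, and GB/TB here use decimal units as disk vendors do):

```python
def raid5_usable_gb(disk_count, disk_gb):
    """Usable capacity of a RAID 5 array in GB (one disk lost to parity)."""
    return (disk_count - 1) * disk_gb

# 12x 500GB disks in RAID 5:
usable = raid5_usable_gb(12, 500)
print(usable / 1000, "TB")  # 5.5 TB, matching the volume size quoted above
```

This is why 12x 500GB yields a 5.5TB rather than a 6TB volume.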
Testing iSCSI from a VM running on another ESXi host gave good results, completely maxing out the onboard gigabit NIC on sequential reads/writes, with decent random performance too. I have a dual-port Intel gigabit PCIe 2.0 x1 card on order for the x1 slot, which will allow much higher total throughput.
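Some back-of-envelope math on why the dual-port card helps and why the x1 slot is not a bottleneck (the 90% efficiency figure is an assumption for TCP/iSCSI overhead, not a measurement):

```python
def gigabit_mb_per_s(ports, efficiency=0.9):
    """Approximate usable MB/s across N gigabit ports.
    1 Gb/s = 125 MB/s raw; ~90% assumed after TCP/iSCSI overhead."""
    return ports * 125 * efficiency

single = gigabit_mb_per_s(1)  # ~112 MB/s - the ceiling the onboard NIC hits
dual = gigabit_mb_per_s(2)    # ~225 MB/s with both ports of the new card active

# PCIe 2.0 x1 provides roughly 500 MB/s per direction, so the slot
# comfortably feeds a dual-port gigabit card.
PCIE2_X1_MB_PER_S = 500
assert dual < PCIE2_X1_MB_PER_S
```

So even with both ports saturated, the single PCIe 2.0 lane has headroom to spare; the local array (800MB/s reads) will still be the faster side.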
Once I am happy with the build I will probably put something easier to manage on there, such as Synology DSM, as CentOS/SCST is command line only (although it is very lightweight and fast). I may also bypass ESXi and install on bare metal, but during testing ESXi makes it easy to try various different OSes.
The choice of 500GB disks came down to the fact that I already had most of them. I wanted to keep all disks the same size, and the larger disks I have are mixed sizes. I may remove two disks and add another two SSDs as cache drives, depending on which OS I settle on.