SAN guys: Anyone using iSCSI?

Discussion in 'OT Technology' started by critter783, Feb 26, 2009.

  1. critter783

    critter783 OT Supporter

    Joined:
    Jul 15, 2005
    Messages:
    1,785
    Likes Received:
    0
    We're getting ready to build a pretty big VM environment at work (5 Dell R900s with 128GB of RAM each) and we're looking at SAN space for the VMs. We currently use EMC CX3-20s over fiber, but for the VMs we're going to buy an Equallogic 6.4TB iSCSI SAN. Has anyone had the chance to compare iSCSI against fiber?
     
  2. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    No, but I'd like to.
     
  3. dissonance

    dissonance reset OT Supporter

    Joined:
    May 23, 2006
    Messages:
    5,652
    Likes Received:
    1
    Location:
    KS
    If you don't mind me asking, which models? Are they clustered? What software do you have licenses for?
     
  4. crontab

    crontab (uid = 0)

    Joined:
    Nov 14, 2000
    Messages:
    23,446
    Likes Received:
    12
    So you buy everything through Dell? Servers and storage?

    What is your underlying infrastructure like? That will dictate the performance and reliability of your iSCSI SAN.

    Will you be using the same network with tagged traffic, or will you get a dedicated set of switches, like one would with a fiber SAN?

    The SANs you're comparing are low to mid tier, so whether it's fiber or iSCSI, performance will all depend on the underlying architecture and infrastructure: dedicated switches, dedicated iSCSI cards or NICs. I assume that you will not be going down the 10G route.

    Price-wise, iSCSI *CAN* be cheaper, but some aspects will be sacrificed, most significantly performance. On the other hand, iSCSI can be made highly available for dirt cheap, which is one huge advantage over most fiber SANs.

    On the VM part, how big are your VMs? Do you plan to run a few large VMs or tons of small ones? If you plan to run tons of small ones, there is a limit based on service console memory. That maxes out at 800MB, and once it's used up, new VMs can't be powered on and the host can't accept VMotions. We have R900s with 64GB and we hit the SC memory limit at ~80 VMs on an ESX host, before we even tax the CPU or RAM.
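
    If you want to eyeball where a host sits against that ceiling, a quick check from the service console looks something like this (a rough sketch, assuming classic ESX 3.x, where each powered-on VM keeps a vmware-vmx helper process in SC memory):

        # Rough sanity check from the ESX 3.x service console.
        # Assumption: classic ESX, where each powered-on VM has a
        # vmware-vmx helper process living in service console memory.
        free -m                               # SC memory used vs. the 800MB ceiling
        ps -ef | grep "[v]mware-vmx" | wc -l  # rough count of powered-on VMs on this host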
     
  5. BlazinBlazer Guy

    BlazinBlazer Guy Witness to The De-Evolution of Mankind.

    Joined:
    Jul 24, 2002
    Messages:
    18,783
    Likes Received:
    0
    Location:
    Lansing, MI USA
    I'd make sure you check Equallogic's licensing for iSCSI. When we were pricing out SANs at my last job, we discovered that NetApp does a one-time iSCSI license for unlimited node connections, whereas EMC charges a per-node license fee for iSCSI (much like Terminal Services uses CALs).

    Depending on your application, that may be a big factor to consider.
     
  6. critter783

    critter783 OT Supporter

    Joined:
    Jul 15, 2005
    Messages:
    1,785
    Likes Received:
    0
    As far as the underlying infrastructure, we currently have a 48-port gig blade in a Cisco 6500E that we're using for the network connections. Traffic is going to be tagged, and all the other network blades in our data center of about 150 servers are connected to blades in this 6500E. The gig blade we're using now doesn't have a distributed forwarding daughter card, but we're going to get one with our last-quarter disbursement of money so the traffic won't have to go back to the Sup720.

    The ESX hosts are going to have dual iSCSI HBAs for SAN traffic, and we'll be doing one dedicated VMotion port, one dedicated management port, and four teamed ports for VMs.
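
    Roughly, the vSwitch layout per host will look something like this from the service console (a sketch only; the vmnic numbers, VLAN ID, and IP below are placeholders, not our real ones):

        # Planned vSwitch layout, ESX 3.x service console (placeholders throughout).

        # Dedicated VMotion port
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic1 vSwitch1
        esxcfg-vswitch -A VMotion vSwitch1
        esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 VMotion
        # (VMotion itself then gets enabled on that port group in VI Client)

        # Four teamed uplinks for VM traffic, VLAN-tagged at the port group
        esxcfg-vswitch -a vSwitch2
        esxcfg-vswitch -L vmnic2 vSwitch2
        esxcfg-vswitch -L vmnic3 vSwitch2
        esxcfg-vswitch -L vmnic4 vSwitch2
        esxcfg-vswitch -L vmnic5 vSwitch2
        esxcfg-vswitch -A "VM Network" vSwitch2
        esxcfg-vswitch -v 100 -p "VM Network" vSwitch2

        # The iSCSI HBAs show up as storage adapters (vmhba#), not vmnics,
        # so they don't hang off a vSwitch; management stays on vSwitch0.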

    As far as VM size, we'll probably have one of the R900s with about 40 or 50 little VMs on it, and another one hosting a couple of Exchange 2007 back ends. The little stuff will be mostly web servers and one-off application servers.
     
  7. trouphaz

    trouphaz New Member

    Joined:
    Sep 22, 2003
    Messages:
    2,666
    Likes Received:
    0
    So you're going to have dedicated cards for iSCSI? Do you really save much going with an iSCSI solution over a fiber-attached solution when doing that? That was the big issue I always had with iSCSI. I found it great for one-off systems that I just couldn't get attached to the fiber SAN for whatever reason, but I never found the savings enough to justify the compromise.

    What's the price difference between an iSCSI array and a fiber-attached array + a couple of 8-port fiber switches?
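
    For what it's worth, the truly dirt-cheap route is skipping the HBAs entirely and running the ESX software initiator over plain gig NICs. A minimal sketch, assuming ESX 3.x; the port group name, IPs, and adapter number are placeholders (the software adapter number varies by ESX version):

        # Software iSCSI over plain NICs on ESX 3.x (the no-HBA route).
        # Port group name, IPs, and vmhba number are placeholders.
        esxcfg-vswitch -A iSCSI vSwitch1
        esxcfg-vmknic -a -i 10.20.20.11 -n 255.255.255.0 iSCSI
        esxcfg-swiscsi -e                          # enable the software initiator
        vmkiscsi-tool -D -a 10.20.20.100 vmhba40   # add the send-targets discovery address
        esxcfg-rescan vmhba40                      # rescan for LUNs
        # (ESX 3.x also wants the service console able to reach the target network)

    But then you're burning host CPU on the SCSI encapsulation, which is exactly the compromise I'm talking about.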
     
  8. critter783

    critter783 OT Supporter

    Joined:
    Jul 15, 2005
    Messages:
    1,785
    Likes Received:
    0
    The SAN in question is $10k more than a 1TB ($25,000) enclosure for our EMC fiber SAN. That doesn't include any fiber switches, PowerPath licenses, or fiber HBAs for our servers.
     
  9. dissonance

    dissonance reset OT Supporter

    Joined:
    May 23, 2006
    Messages:
    5,652
    Likes Received:
    1
    Location:
    KS
    If you go up to the CX4-120, the PowerPath license is included, just like it was on the CX3-10. The only downside is that if you still want iSCSI, the modules for the CX4s are quite pricey for what they are. Hell, since you don't need much storage, if you can get away with a weaker array you could use an AX4-5.

    Or you could check out the IBM DS3000 family: comparable performance with the low-end CXs, and the only licensing you have to worry about, other than premium-feature shit, is partitioning.
     
  10. trouphaz

    trouphaz New Member

    Joined:
    Sep 22, 2003
    Messages:
    2,666
    Likes Received:
    0
    Is there that much of a cost difference for the fiber HBAs vs. the iSCSI HBAs (which I'm assuming are just network cards with controllers to offload the SCSI commands from the CPU)? Are you going to have a dedicated network just for the iSCSI traffic? I don't think you'd need dedicated switches, but if you're going for a large virtual environment I wouldn't put any other data on the same ports.

    One major reason I like fiber-based SAN for disk is that it forces you to segregate traffic. You rarely see IP over fiber, so you don't have any worries about your regular network traffic conflicting with your disk traffic.
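
    If you do end up sharing the 6500, at least carve the storage ports into their own VLAN so iSCSI never shares a broadcast domain with regular traffic. Something like this on the switch side (a hypothetical IOS sketch; the VLAN number and port are made up):

        ! Hypothetical IOS config -- VLAN number and interface are placeholders.
        ! Point is just to keep iSCSI off the general-purpose broadcast domain.
        vlan 250
         name iSCSI-SAN
        !
        interface GigabitEthernet2/1
         description ESX host iSCSI HBA
         switchport
         switchport mode access
         switchport access vlan 250
         spanning-tree portfast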
     
