


NetApp FAS2240-2 Review Discussion


10 replies to this topic

#1 Brian (SR Admin, 5,156 posts)

Posted 02 April 2014 - 02:46 PM

 

The NetApp FAS2240-2 is a versatile unified storage array that is extremely simple to deploy and manage, yet has the capability to grow with an organization's data needs. Equally at home as the central storage repository for an SMB or in a branch/remote office for a larger enterprise, the FAS2240-2 delivers a complete set of enterprise features at an aggressive starting price point.

NetApp FAS2240-2 Review


 

#2 dilidolo (Member, 51 posts)

Posted 03 April 2014 - 11:21 AM

There is no reason to split the disks into two aggregates in this setup. You are wasting disk space and limiting your performance. With the internal shelf only, you are not going to hit the controllers hard; an active/standby configuration is more than enough.
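
To put rough numbers on the space cost, here's a quick back-of-the-envelope sketch in Python (I'm assuming the full 24 internal disks and one hot spare per aggregate, which may not match the review config exactly):

    # Data disks left over when 24 drives are carved into one
    # RAID-DP aggregate versus two. RAID-DP consumes 2 parity
    # disks per RAID group, and each aggregate gets a hot spare.
    TOTAL_DISKS = 24
    PARITY_PER_AGGR = 2  # one parity disk + one diagonal-parity disk

    def data_disks(aggregates):
        return TOTAL_DISKS - aggregates * (PARITY_PER_AGGR + 1)  # +1 hot spare each

    print(data_disks(1))  # 21 data disks with a single aggregate
    print(data_disks(2))  # 18 data disks when split across both controllers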


#3 Kevin OBrien (StorageReview Editor, 1,411 posts)

Posted 03 April 2014 - 11:46 AM

It was actually the configuration NetApp wanted us to test the platform in. During many of our tests, controller CPU utilization was 80-90%.


#4 dalek (Member, 1 post)

Posted 03 April 2014 - 01:39 PM

Thanks for this review!

So, this configuration was basically RAID-6+0 (two 12-drive RAID-6 arrays of 10K 480GB drives, striped together)? Sorry if I missed this in the article.

If flash is added, is it used as a separate storage pool rather than as a read/write cache for the spinning disks?


#5 Brian (SR Admin, 5,156 posts)

Posted 03 April 2014 - 01:41 PM

Unfortunately, we did not get a flash configuration with this model. Our review relationship with NetApp is still young, though, and we're hopeful we'll see more of their hybrid and higher-end gear in the coming months.



 

#6 bobbbino (Member, 1 post)

Posted 04 April 2014 - 08:25 AM

Afternoon, folks.

 

In the interests of disclosure before going any further: I work for NetApp in the UK as a presales engineer. I'd like to start by saying great review. It's really in-depth, and it's great to get such positive independent feedback.

 

I did want to address the one item you highlighted as a con in your analysis of the pros and cons: the imbalance between block and file protocol performance. This is actually not down to the array, but rather to the SMB 2.x protocol itself. It simply isn't performant, which is why Microsoft waited for SMB 3 before supporting Hyper-V, SQL Server, and Exchange over SMB.

 

If you were to re-test the file protocols using SMB 3.0 on Windows Server 2012, or NFS from a Linux or VMware ESX host, you would find an almost immeasurable difference in performance between the block and file protocols.

 

Thanks


#7 Kevin OBrien (StorageReview Editor, 1,411 posts)

Posted 04 April 2014 - 01:59 PM

That is definitely something we will look at on our next NetApp project.


#8 mikey_b79 (Member, 1 post)

Posted 10 April 2014 - 09:33 AM

I had the 2240-4 in a previous role with 24x 1 TB drives. My configuration of choice was as follows:

 

Controller 1:

Aggr0 - 3 disk RAID-DP for vol0

Aggr1 - 16 disk RAID-DP for storage

1 hot spare

 

Controller 2:

Aggr0 - 3 disk RAID-DP for vol0

1 hot spare

 

This operated in active-passive fashion for a test lab environment. Even with 1TB 7.2K spindles the performance was reasonable (until someone did something dramatic). In the end it hosted a two-node vSphere 5.1 test cluster with 40 virtual machines, a standalone Oracle VM host with five virtual machines, and a pair of Oracle Database Appliances using it for various things.

 

Using an 8K 70/30 read/write benchmark I was able to get ~2,700 IOPS out of that aggregate, or roughly 170 IOPS per spindle. Not bad for 7.2K drives (although my testing was fairly unscientific).
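
That per-spindle figure is easy to sanity-check with a couple of lines of Python (the only assumption is that I'm dividing across all 16 spindles in the aggregate, parity disks included):

    # Back out per-spindle IOPS from the aggregate result above.
    aggregate_iops = 2700
    spindles = 16  # the 16-disk RAID-DP data aggregate on controller 1
    # A bare 7.2K drive typically manages only ~75-80 random IOPS,
    # so controller cache and write coalescing are clearly helping.
    print(aggregate_iops / spindles)  # ~169 IOPS/spindle, i.e. roughly 170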

 

The killer features in this environment were Virtual Storage Console, licensed for backup and recovery (my devs would regularly back up a VM, ruin it, and then restore it almost instantly on their own), plus deduplication and thin provisioning on the virtual machines (great savings, >30% with dedupe, and slim overhead from thin provisioning).

 

My only regret is using iSCSI for the vSphere cluster. I was going to experiment more with an NFS configuration, but a sudden urgency came up, so I went with what I already knew (we had a FAS3140 in production).

 

It's a good product and definitely the Swiss Army knife of the storage world. It's funny to think that the 2240-4 we had actually had more powerful controllers (4-core vs. 2-core CPUs) and more RAM (6GB vs. 4GB) than our production FAS3140 (which sadly never performed well, due to a bad implementation, and left a bad taste with management).



#9 Brian (SR Admin, 5,156 posts)

Posted 10 April 2014 - 10:06 AM

Mikey, thanks for checking in. It's always great to hear from users about their real-world experiences with arrays like this.



 

#10 Afrojazz (Member, 1 post)

Posted 10 April 2014 - 01:31 PM

Hi. Thanks for the review.

I think it lacks a detailed FAS2240-2 configuration, though. What size HDDs did you use? How were the aggregates and RAID groups configured? How much usable space did you utilize during your tests? As far as I know, NetApp has performance problems above 90% used space.

Some of your results look very unrealistic. What about performance under a 20ms average latency limit?

Also, 272,282 IOPS for writes looks like a typo.


#11 Kevin OBrien (StorageReview Editor, 1,411 posts)

Posted 12 April 2014 - 08:52 AM

The configuration we tested included 450GB HDDs, split into two pools with dual parity. Space consumed was 100GB per controller (50GB for iSCSI LUNs and the same for CIFS test files). Good catch on the typo as well; I've fixed the text. The chart itself had the correct IOPS value.
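
For anyone who wants to turn that into a rough usable-capacity figure, here's a quick sketch in Python (it assumes all 24 internal bays were populated and ignores hot spares, right-sizing, and WAFL overhead, so real usable space will be lower):

    # Two 12-disk dual-parity (RAID-DP) pools built from 24x 450GB drives.
    drives_per_pool = 12
    parity_per_pool = 2   # dual parity disks per pool
    drive_gb = 450
    data_gb = (drives_per_pool - parity_per_pool) * drive_gb
    print(data_gb)        # 4500 GB of data disks per pool, before overhead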


