Search the Community

Showing results for tags 'SAN'.

Found 17 results

  1. mike.christensen

     HPE MSA 2042 vs NetApp E-2724

     Hi all, I've got an HP G7 two-node cluster running on NetApp FAS at the moment. The site is not compute-intensive (some licence servers, AD, print, DNS, etc., plus Exchange), but it has a relatively large storage requirement: approx. 12TB, and it needs to be fast (not all-flash fast, but responsive). I'm contemplating renewing this with G9 hosts directly attached over 10G iSCSI to either an HPE MSA 2042 or a NetApp E-2724. Commercially the two are almost identical, and they seem fairly similar technically, although the MSA offers broader drive compatibility and uses redirect-on-write. Both specs include SSD cache. Anyone have any input? Thanks.
  2. How much performance do you lose by running NVMe devices over Ethernet or InfiniBand compared to running them natively? We did the test using ConnectX-3 adapters on Ethernet and InfiniBand. Here are the results:
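
As a back-of-the-envelope companion to that test, the relative latency penalty of a fabric hop can be sketched as below. All figures here are illustrative assumptions for the calculation, not the measured results from the post:

```python
# Rough estimate of NVMe-over-Fabrics latency overhead vs. native NVMe.
# The three figures below are assumed, illustrative values only.

NATIVE_READ_LATENCY_US = 80.0   # assumed 4K random-read latency of a local NVMe SSD
RDMA_ROUND_TRIP_US = 15.0       # assumed extra round trip for an RDMA fabric hop
TCP_ROUND_TRIP_US = 60.0        # assumed extra round trip for a TCP-based transport

def fabric_overhead_pct(native_us: float, added_us: float) -> float:
    """Percentage latency increase the fabric adds over the native device."""
    return 100.0 * added_us / native_us

print(f"RDMA adds ~{fabric_overhead_pct(NATIVE_READ_LATENCY_US, RDMA_ROUND_TRIP_US):.0f}%")
print(f"TCP adds ~{fabric_overhead_pct(NATIVE_READ_LATENCY_US, TCP_ROUND_TRIP_US):.0f}%")
```

The point of the sketch: the faster the native device, the larger the *relative* cost of the same fixed network round trip, which is why fabric overhead matters more for NVMe than it did for spinning disks.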
  3. Good morning! Are there any best practices, literature, or tools one could use to configure and size SAN tiers in an optimal way? Thank you very much!
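
One common starting point for sizing a tier is to translate the host-side IOPS requirement into back-end disk IOPS using the RAID write penalty. A minimal sketch, where the per-disk IOPS figures and the example workload mix are illustrative assumptions:

```python
import math

# Back-end I/Os generated per host write (standard RAID write penalties).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

# Assumed steady-state IOPS per disk type (illustrative rule-of-thumb figures).
DISK_IOPS = {"15k_sas": 180, "10k_sas": 140, "7.2k_nl": 80, "ssd": 5000}

def disks_needed(host_iops: int, read_pct: float, raid: str, disk: str) -> int:
    """Disks required to serve a host workload on a given RAID level."""
    reads = host_iops * read_pct
    writes = host_iops * (1 - read_pct)
    backend_iops = reads + writes * WRITE_PENALTY[raid]
    return math.ceil(backend_iops / DISK_IOPS[disk])

# Example: 5,000 host IOPS, 70% read, on RAID 5 with 10k SAS disks.
print(disks_needed(5000, 0.7, "raid5", "10k_sas"))  # → 68
```

This only covers the IOPS dimension; capacity and throughput need the same exercise, and the tier is sized by whichever dimension demands the most disks.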
  4. We are an innovation company looking to create a fast, simple, easy-to-use network. We are not IT people, and we don't want to be. That's why it is so important to us that our network can be managed with ease, performs as fast as possible, and is low-maintenance. We need help establishing our storage architecture (which servers do we need? which software? which brand? which configuration? which drives?) in a way that fits our current needs, is future-proof, and is as close to worry-free as possible.

     Goals:
       • Easy to manage and worry-free (almost no maintenance).
       • Scales out easily as we grow.
       • High performance.
       • Reliable.
       • Easy management of user permissions, shares, etc.
       • A simple way for us to get qualified support when needed.
       • Works seamlessly for Mac and Windows users.
       • Lets our employees work directly from the server, as if the files were on their own hard drives.

     Storage needs:
       • Virtual environment - extremely fast storage with high IOPS for our VMware hosts.
       • Business storage - fast storage of considerable size for our creative users (Mac) and standard users (Windows), built for accessing files as if they were on the user's own machine.
       • Security cameras - storage for 24 cameras recording 1920x1080 at 10fps around 10 hours per day, retaining the footage for 180 days.
       • DAM - database, index, caching… everything our Digital Asset Management solution will need.
       • Backup - everything we own and operate must be backed up daily (even with hourly increments in some cases).
       • Archive - for files that will be read infrequently.
       • BitBucket - a big store where we can dump everything (external drives, old hard drives, etc.), make sense of it, and move it to a better place (or leave it there).
       • Long-term backup - a way to back everything up on cheap but reliable media for a long time.

     Considerations: We are willing to pay more for a solution with a brand and a proven track record behind it, if it makes sense. Video storage will be dealt with later on; our current solution works for our current needs and will be revisited in the near future.

     Concerns: Having custom-built servers and hard drives with no brand behind them to support us and provide help when needed. Everywhere we read says that FreeNAS should NOT be used in an enterprise environment. Should we use SAS or SATA drives?

     I am attaching how we see our storage needs in terms of tiers, showing the relationship of size x speed x cost, plus a storage-flow diagram showing how we believe our data needs to move. Here is the solution initially proposed to us by other consultants:

     Tier 0 - Flash Storage (for VMs)
       Chassis: 1U SuperMicro
       CPU: 2x Intel 4-core Xeon E5-2609v2
       RAM: 128GB
       OS: FreeNAS 9.3
       RAID configuration: 2x 2-disk RAID 10 vdevs
       Bays: 8 (4 available)
       Disks: Samsung 850 PRO 1TB
       Usable storage: 1.5TB
       Max storage: 27TB (adding 2 more JBODs)
       Price: $6,350.00 ($2,823.33 per TB)

     Tier 1 - Business Storage
       Chassis: 4U SuperMicro
       CPU: 2x Intel 4-core Xeon E5-2609v2
       RAM: 128GB
       OS: FreeNAS 9.3
       RAID configuration: 3x 6-disk RAID-Z1 vdevs + 2 hot spares
       Bays: 24 (0 available)
       Disks: WD RE 4TB
       Cache: 2-disk read cache (L2ARC), 2-disk write log (ZIL) - Samsung 850 PRO 256GB
       Usable storage: 40TB
       Max storage: 2.46PB (adding 8 more JBODs)
       Price: $11,704.00 ($292.60 per TB)

     Tier 3 - Backup Storage
       Chassis: 4U SuperMicro
       CPU: 2x Intel Xeon E5-1650
       RAM: 256GB
       OS: FreeNAS 9.3
       RAID configuration: 13x 2-disk RAID 10 vdevs + 4 hot spares
       Bays: 36 (0 available)
       Disks: WD RE 6TB
       Cache: 2-disk read cache (L2ARC), 3-disk write log (ZIL) - Samsung 850 PRO 256GB
       Usable storage: 57TB
       Max storage: 475TB (adding 8 more JBODs)
       Price: $24,977.21 ($438.55 per TB)

     Tier 4 - Archive Storage
       Chassis: 4U SuperMicro
       CPU: 2x Intel Xeon E5-1650
       RAM: 256GB
       OS: FreeNAS 9.3
       RAID configuration: 13x 2-disk RAID 10 vdevs + 4 hot spares
       Bays: 36 (0 available)
       Disks: Seagate Archive HDD 8TB
       Cache: 2-disk write log (ZIL) - Samsung 850 PRO 256GB
       Usable storage: 160TB
       Max storage: 1.37PB (adding 8 more JBODs)
       Price: $17,716.75 ($110.73 per TB)

     What do you think about this configuration? Shoot holes in it! Is anything we are being suggested bad? Anything we should be aware of? What would your configuration be? What is your recommended storage solution? Drives? Servers? Please share any insights, ideas, recommendations, advice, examples - anything. We're frustrated and willing to pay for help.
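
Of the requirements above, the camera tier is one that can be sized directly with arithmetic. A quick worked estimate, assuming roughly 4 Mbit/s per 1080p/10fps H.264 stream (the bitrate is an assumption; it should be checked against the actual camera and codec settings):

```python
# Sizing the security-camera tier from the stated requirements.
CAMERAS = 24
BITRATE_MBPS = 4.0        # assumed H.264 bitrate per 1080p @ 10 fps stream
HOURS_PER_DAY = 10
RETENTION_DAYS = 180

bytes_per_day = CAMERAS * (BITRATE_MBPS * 1e6 / 8) * HOURS_PER_DAY * 3600
total_tb = bytes_per_day * RETENTION_DAYS / 1e12  # decimal TB

print(f"{bytes_per_day / 1e9:.0f} GB/day, ~{total_tb:.1f} TB for 180 days")
# → 432 GB/day, ~78 TB raw before RAID overhead
```

Doubling the assumed bitrate doubles the result, so pinning down the real per-camera bitrate is the first step before choosing hardware for this tier.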
  5. Good morning, this is my first post here, so thank you for reading! :-) We are using a software-defined SAN (SANsymphony) and have some serious performance problems running our processes. Now it's on me to tell whether the SAN is the bottleneck. Unfortunately I'm a database guy, not a SAN guy. Can someone share some best practices for telling whether a SAN is running hot?

     The SAN: some 15k-disk RAID sets on the lower tier, and two SSD RAID 5 sets for the first tier (each about 15k IOPS max for 8K random writes), connected to the hosts via 8Gbps Fibre Channel and all put together with SANsymphony. All servers and services run virtualized on VMware.

     What I can tell: disk latencies (shown in Perfmon on the servers) are pretty high - 2 to 20ms average under light load, and 100-400ms under heavy load, are typical for our MS SQL Server. IMHO these numbers would be horrific for physical database servers, but some consultants told us such high latencies are normal for virtualization + SAN, so we tried to ignore them.

     I grabbed some logs from our SAN, worked out the most frequent workload profiles (% read, % write, average read block size, average write block size), and set up an IOMeter scenario reflecting these workloads. I ran this IOMeter setup against our SAN during off-duty hours, measured the maximum IOPS per workload profile, and compared that to the IOPS occurring in the real world for each workload scenario. I put all these numbers together in some Excel sheets and now... I don't know how to go further.

     IOPS_rel.jpg shows two days in our company's life; each data point represents about 30 minutes. I set the maximum benchmarked IOPS as 100% and compared the real-world IOPS against it. What I can see: our SAN continuously runs at about 30% of max load, and the two peaks (1 to 13 and 51 to 59) shown in the diagram are the processes causing trouble. The first spike hits the 100% mark (the 120% I would put down to benchmark tolerance...); the second one does not touch the 80% line.

     So... shall we upgrade our SAN or not? I know that in the end this decision comes down to money, but what would you say from a technical point of view? Thank you very much for reading! Andre
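
The comparison described in that post - real-world IOPS per workload profile as a fraction of the benchmarked maximum - can be automated rather than assembled by hand in Excel. A minimal sketch; the profile names and all numbers are placeholders, not data from the post:

```python
# Benchmarked maximum IOPS per workload profile (e.g. from off-hours IOMeter runs).
max_iops = {"oltp_8k_70r": 15000, "seq_64k_write": 4000}

# Observed real-world IOPS samples per profile (e.g. one value per 30-minute slot).
observed = {
    "oltp_8k_70r": [4200, 5100, 14800],
    "seq_64k_write": [900, 3100, 2500],
}

def utilization(profile: str) -> list[float]:
    """Each sample's load as a percentage of that profile's benchmarked ceiling."""
    return [100.0 * v / max_iops[profile] for v in observed[profile]]

for p in observed:
    pct = utilization(p)
    print(p, [f"{x:.0f}%" for x in pct], "peak:", f"{max(pct):.0f}%")
```

Sustained peaks near 100% of the benchmarked ceiling during the problem windows would support the "SAN is the bottleneck" reading; peaks well below it point the investigation elsewhere (host queues, network, or the application itself).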
  6. Hello, I'm currently tasked with overhauling our data center from top to bottom. The first stage is to get our new storage fabric in place, and I'm looking for whatever views, experience, and knowledge the community is willing to share. I have spoken with HP, EMC, Dell, Gridstore, and Nutanix, and looked at Windows Storage Spaces. We are a small company going through growing pains as it transitions to medium size. We are a totally Hyper-V shop with about 25 servers currently running workloads including Exchange 2010, SQL 2005-2012, Microsoft CRM 2011, Microsoft GP, and a host of other servers, all Microsoft. No matter what solution I buy, my top concerns are flexibility and ease of expansion, performance, and cost-effectiveness. Right now we have about 7.5TB of live data, which could take a large jump if we land certain clients - hence my concern about ease of expansion. Thank you for taking the time to read this.
  7. Hello all, I just joined the forum as I have a couple of questions about our SAN and wondered if we are getting the best from it. I have tried to give as much information as possible without listing every possible setting (I'll wait for requests for those). In short, I think I might be having throughput/IOPS performance issues with my VMs. I'm not sure if it's iSCSI-network related, SAN related, or VMware related - or maybe there's no issue at all and I was just expecting more.

     The problem: We recently had a company in to upgrade our infrastructure (we were still on Server 2003). We are noticing a bit of lag in one program, an ERP system that is new to us, so we are not sure whether it's the hardware or the software, as we have no benchmarks for either.

     Information:
       • 3 hosts (HP DL380p Gen8, dual 3GHz 10-core CPUs, 64GB RAM) with the latest ESXi on SD card; no disks in any host.
       • A stacked Cisco C2960X switch connects all the hosts for networking, management, vMotion, etc. The iSCSI network is on 2 separate (non-stacked) Cisco C2960X switches.
       • Only 6 VMs so far (2 DCs, 1 Exchange, 1 vCenter, 1 ERP application, 1 ERP DB); the ERP software is not used by anyone yet.
       • SAN is 1 HP MSA 1040 (dual controller, dual 1GbE ports on each, I believe 2GB cache per controller) plus 1 HP D2700 enclosure.

     The disks are as follows:

       Name      Size       RAID    Disk Type  Current Owner  Disks
       vDisk01   2398.0GB   RAID50  SAS        A              6 (10k 600GB)
       vDisk02   4196.5GB   RAID5   SAS        B              8 (10k 600GB)
       vDisk03   3597.0GB   RAID5   SAS        A              7 (10k 600GB)
       vDisk04   2996.8GB   RAID50  SAS        B              12 (15k 300GB)

     The ERP is on vDisk04 (only the application and DB VMs are on it).

     Here are some stats from CrystalDiskMark. This is from one of the ERP VMs on vDisk04:

       Sequential Read:           108.898 MB/s
       Sequential Write:          107.679 MB/s
       Random Read 512KB:         102.853 MB/s
       Random Write 512KB:         98.401 MB/s
       Random Read 4KB (QD=1):      8.630 MB/s [ 2107.0 IOPS]
       Random Write 4KB (QD=1):     5.251 MB/s [ 1281.9 IOPS]
       Random Read 4KB (QD=32):    98.198 MB/s [23974.2 IOPS]
       Random Write 4KB (QD=32):   49.038 MB/s [11972.3 IOPS]
       Test: 500 MB [D: 8.1% (16.3/200.0 GB)] (x5)
       OS: Windows Server 2012 Standard Edition (full installation) [6.2 Build 9200] (x64)

     Running CrystalDiskMark gives almost identical results from any VM, no matter which vDisk the VM is on; I would have expected slightly different results given the different RAID levels. We also have 1 host running Server 2012 natively, and CrystalDiskMark against vDisk03 there gives somewhat better results. So this leads me to question whether the VMware settings might be part of the issue. I'm also not sure whether I should be seeing better throughput/IOPS in general or whether these figures are reasonable. This is the native result:

       Sequential Read:           205.442 MB/s
       Sequential Write:          189.707 MB/s
       Random Read 512KB:         172.833 MB/s
       Random Write 512KB:        169.600 MB/s
       Random Read 4KB (QD=1):      9.021 MB/s [ 2202.3 IOPS]
       Random Write 4KB (QD=1):     6.164 MB/s [ 1505.0 IOPS]
       Random Read 4KB (QD=32):   136.312 MB/s [33279.2 IOPS]
       Random Write 4KB (QD=32):   50.407 MB/s [12306.4 IOPS]
       Test: 500 MB [E: 22.5% (752.6/3349.4 GB)] (x5)
       OS: Windows Server 2012 Standard Edition (full installation) [6.2 Build 9200] (x64)

     Hopefully someone knowledgeable here can help me troubleshoot where any issues may be, or suggest a few settings to tweak for better performance - or, failing that, I can pinpoint the issue to the ERP software and know the hardware is fine. I hope the limitation isn't the newly implemented infrastructure. Thank you in advance.

     EDIT: Forgot to mention I have round-robin / MPIO set up. Thanks again.
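
One sanity check worth applying to those numbers: a single 1GbE iSCSI path tops out somewhere around 110-117 MB/s once protocol overhead is subtracted, and the ~108 MB/s sequential figures from inside the VM sit right at that ceiling. That pattern suggests the sequential tests are saturating one network path rather than the array itself. A quick sketch of the arithmetic (the 10% overhead factor is an assumed round number, not a measured value):

```python
# Estimate the usable throughput ceiling of a single 1 GbE iSCSI path.
LINK_GBPS = 1.0            # one 1 GbE path
PROTOCOL_OVERHEAD = 0.10   # assumed combined TCP/iSCSI framing overhead

raw_mb_s = LINK_GBPS * 1e9 / 8 / 1e6           # 125.0 MB/s raw line rate
effective_mb_s = raw_mb_s * (1 - PROTOCOL_OVERHEAD)

observed_seq_read = 108.898                    # from the CrystalDiskMark run above
print(f"usable ceiling ~{effective_mb_s:.1f} MB/s; "
      f"observed {observed_seq_read:.1f} MB/s "
      f"({100 * observed_seq_read / effective_mb_s:.0f}% of one path)")
```

If that reading is right, the small-block QD=1 results (which are latency-bound, not bandwidth-bound) are the more meaningful indicator of array performance, and the MPIO round-robin configuration is worth reviewing to confirm both paths are actually carrying I/O.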

  8. EMC VNX upgrade options

     Hi, a friend has asked for some informal advice about upgrading a data-archive system based around an EMC VNX 5100. It is configured with an external drive enclosure filled with 3TB SATA drives, so it's not a performance-oriented system - more bulk, entry-level online storage. What are the upgrade options? My understanding is that I can just add another DAE configured with drives. Another shelf of 3TB drives would be OK, but given data-growth expectations, I was wondering whether 4TB drives are available or suitable. The plan would be to add 8 drives (in RAID 6 + spare) initially, and then fill the shelf with another 7 drives in RAID 6 as demand increases. In the more distant future the 5100 itself might need upgrading - are the V3 DAEs/drives compatible with the 5200 series?
  9. HP StoreVirtual 4335 Hybrid Storage is ideal for SMBs and ROBO scenarios that need a quality full-featured enterprise storage product with a tilt toward ease of deployment, management and robust data services. That's not to detract from performance; the cluster we tested posted impressive results almost everywhere, with especially impressive numbers in Microsoft-oriented environments. HP StoreVirtual 4335 Hybrid Storage Review
  10. Dell believes that they've released a product that will absolutely kill it in the midmarket. With a little hands on time in the Dell Storage lab, we tend to agree. There's really not a system out there now that has the combination of data services, software tool integration, interface flexibility, support and cost structure that the SC4020 brings to bear. A midmarket Compellent was a gaping hole in the Dell Storage portfolio, which has clearly been repaired. Dell Announces SC4000 Mid-Tier Flash and Hybrid Arrays
  11. American Megatrends Inc. (AMI) has announced the release of StorTrends iDATA (Intelligent Data Analysis Tracking Application), a free software tool specifically designed to offer an accurate assessment of IT infrastructure performance, capacity, and throughput requirements. With these essential measurements, the StorTrends iDATA tool can assess pain points in an environment before they become an issue, all the while providing the details needed to make informed storage decisions. AMI StorTrends iDATA Performance Analytics Tool Now Available
  12. Nimble Storage has announced new performance analytics capabilities within Nimble Storage InfoSight. These new capabilities are key improvements to the predictive analytics InfoSight has offered, providing real-time monitoring, reporting, forecasting, and planning capabilities to customers. InfoSight eliminates the time-consuming, manual process of determining how and where performance bottlenecks originate by automatically diagnosing and correlating the leading factors that impact performance or latency. As a result, customers are better positioned to take specific actions in real time to keep their storage infrastructure running at full efficiency. Nimble Storage InfoSight Updated With New Performance Analytics Capabilities
  13. HP has announced the new HP StoreVirtual 4335 Storage system, a 1U software-defined hybrid storage solution for virtualized applications in SMB environments. StoreVirtual 4335 Storage incorporates HP’s Adaptive Optimization technology which profiles data access at 256KB granularity. High priority or high-demand data is migrated to SSD storage, while replicated data (RAID 10, RAID 10+1) and parity data (RAID 5 and 6) is stored on more economical HDDs. HP Announces 1U SMB StoreVirtual 4335 Storage with Auto-Tiering
  14. StorageReview recently spent a few days in Dell's facilities in Nashua, New Hampshire to take a close look at Dell's Compellent storage platform. Compellent is an all flash or hybrid system for large-scale enterprise storage with multiple tiers that can accommodate different drive types, speeds and interfaces and delivers automated tiering based on analysis of data access requirements. In this overview we'll break down Compellent's key tiering technology which is dubbed "Data Progression" and highlight other key features and configuration options that give Compellent its flexibility and performance. Dell Compellent Hands-On
  15. The Infortrend EonStor DS S16F-R2651 is a dual-controller, 16Gb/s Fibre Channel storage system that offers 16 internal drives with 6Gb/s SAS connectivity and is scalable to 240 drives via a JBOD expansion unit. EonStor DS is Infortrend’s lineup of entry-level family of SAN storage solutions for small and medium businesses, providing SAN counterparts to the company’s EonNAS product line which we examined in our October 2013 review of the EonNAS 3510. Infortrend EonStor DS S16F-R2651 Review
  16. Dot Hill today launched a lineup of AssuredSAN 4004 storage devices based on the company’s ninth-generation RAID architecture, including 12Gb SAS arrays and first-to-market 16Gb Gen 5 Fibre Channel/10Gb iSCSI converged interface models. Dot Hill Launches AssuredSAN 4004
  17. Review of Dell's hybrid array that features auto tiering, load balancing and full VMware integration. Dell EqualLogic PS6110XS Review