Posted 01 July 2014 - 12:44 AM
One of the things I have been wondering about: from a virtualization storage perspective, it would seem that the resulting large cluster sizes (e.g. on a 32 TB volume) would cause performance issues.
What are some of your experiences with this?
Posted 01 July 2014 - 09:12 AM
You're asking a very broad question... are you talking about building your own storage solution, or one of the hybrid solutions from name-brand vendors? All of the latter offer caching, tiering, or both.
Posted 03 July 2014 - 02:26 AM
As Brian said, you're asking a rather broad question. What exactly are you asking with regards to performance issues?
Did you have an application or actual proposed use in mind in sufficient detail to make this a question we can helpfully answer, or at least try to?
Have you talked to any of the relevant vendors in this space -- newcomers like Tegile, Nimble, Tintri, Nutanix, etc.? Or established ones, such as EqualLogic, EMC, NetApp, or Fusion-io?
Is commentary such as this sufficient...?
There is an increasing trend of large, slow discs cached with SSDs in a giant RAID 10.
The hard disks and the SSDs are usually organized into separate physical disk sets; it's not one straight RAID 10, if that's what you're asking. How the firmware manages the various controllers behind the hard disk arrays and the SSD arrays is the tricky part...
Posted 27 July 2014 - 04:22 AM
Slow archival hard drives and faster performance hard drives are normally segregated into separate disc shelves for management reasons. The same normally goes for any type of solid-state storage. Down at the array level, individual shelves may use two, three, or more RAID-5 or RAID-6 volumes each. Mirroring of those RAID volumes can also be done if it is deemed necessary.
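Since the RAID-5/RAID-6 volumes mentioned above come up a lot in these designs, here is a toy sketch of the single-parity idea behind RAID-5: data strips XORed together give a parity strip, and any one lost strip can be rebuilt from the survivors. Byte strings stand in for disc strips; this is an illustration, not any array's firmware.

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data strips
parity = xor_blocks(data)            # the parity strip

# Lose one data strip; rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

RAID-6 extends this with a second, differently-computed parity strip so two simultaneous failures are survivable, which is why it is popular for the large slow-disc tiers.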
Groups of these arrays, or slices carved from these arrays, can be presented as a single LUN to the end-user. The storage controllers simply construct a LUN with a compatible geometry and present it to the end-user. The end-user of the presented storage is normally unaware of the RAID level actually in use behind the LUN, unless they have management software that can query the array to report statistics and status.
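To make the "slices carved from arrays" idea concrete, here is a minimal sketch of what the controller's mapping amounts to: the LUN is one flat block range, internally concatenated from slices of several backing RAID groups. The array names, slice size, and offsets here are made up for illustration.

```python
SLICE_BLOCKS = 1024   # blocks per slice (illustrative)

# The LUN is built by concatenating slices from different RAID groups;
# the end-user just sees one contiguous block device.
lun_map = [
    ("raid6-shelf1", 0),      # serves LUN blocks 0..1023
    ("raid6-shelf2", 512),    # serves LUN blocks 1024..2047
    ("raid5-shelf3", 2048),   # serves LUN blocks 2048..3071
]

def resolve(lun_block):
    """Translate a LUN block address to (backing array, array block)."""
    idx, offset = divmod(lun_block, SLICE_BLOCKS)
    array, base = lun_map[idx]
    return array, base + offset

array, blk = resolve(1500)   # lands in the second slice
```

Note that nothing in the mapping exposes the RAID level of each backing group, which is exactly why the end-user can't see it without management software.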
In modern enterprise storage systems with both solid state and spinning magnetic disc storage, storage volumes are often presented to end-users that appear as unified storage, but are in reality tiered via storage management software that runs on storage controllers which are connected to the storage fabric switches.
Storage tiering is configured in a way that makes sense for the type of usage patterns that have been presumed by the storage management team that will be maintaining the storage system. You can configure the migration policies within the tiering to fit your usage patterns and tweak the policies over time by analysing storage reports. A storage tiering policy might be configured to write all new data to magnetic disc first, then migrate that data to solid-state storage if certain access patterns occur.
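The migration policy described above (write new data to magnetic disc first, promote to solid-state when access patterns warrant it) can be modelled in a few lines. This is a toy access-count policy; the tier names and threshold are illustrative, not any vendor's actual policy engine.

```python
PROMOTE_THRESHOLD = 3   # illustrative: promote after 3 reads

class TieredVolume:
    def __init__(self):
        self.tier = {}   # extent id -> "hdd" or "ssd"
        self.hits = {}   # extent id -> access count

    def write(self, extent):
        # Policy: all new data lands on magnetic disc first.
        self.tier.setdefault(extent, "hdd")
        self.hits.setdefault(extent, 0)

    def read(self, extent):
        self.hits[extent] = self.hits.get(extent, 0) + 1
        if self.tier.get(extent) == "hdd" and self.hits[extent] >= PROMOTE_THRESHOLD:
            self.tier[extent] = "ssd"   # hot extent: migrate up a tier
        return self.tier[extent]

vol = TieredVolume()
vol.write("ext-42")
for _ in range(3):
    served_from = vol.read("ext-42")   # third read triggers promotion
```

Real policy engines track access patterns over windows and migrate in the background rather than inline, but tweaking a threshold like this after analysing storage reports is essentially the knob being described.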
By the way, the storage controllers that sit between the end-user and the storage system have "large" (as in 32, 64, 256 GB, or larger) battery-protected global RAM buffers and caches that help smooth out storage traffic and keep the storage system running without contention issues.
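The traffic-smoothing effect of those battery-protected caches can be sketched with a toy write-back cache: writes are absorbed into a RAM buffer and flushed to the backing discs in coalesced batches. The capacity and batch behaviour here are simplified for illustration.

```python
from collections import OrderedDict

class WriteBackCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.dirty = OrderedDict()   # block id -> data awaiting flush
        self.backing = {}            # simulated backing discs
        self.flushes = 0             # batch flushes actually issued

    def write(self, block, data):
        self.dirty[block] = data     # absorbed by cache; no disc I/O yet
        self.dirty.move_to_end(block)
        if len(self.dirty) >= self.capacity:
            self.flush()

    def flush(self):
        self.backing.update(self.dirty)  # one coalesced batch to the discs
        self.dirty.clear()
        self.flushes += 1

cache = WriteBackCache(capacity=4)
for i in range(8):
    cache.write(f"blk{i}", i)   # 8 incoming writes, but only 2 batch flushes
```

The battery protection matters because acknowledged writes may still be sitting in `dirty` when power fails; without it, a controller could not safely acknowledge a write before it reaches disc.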
Storage tiering is usually separately licensed software, with fees typically based on total storage system capacity, number of tiers, types of storage devices in use, and so forth. As for the number and types of storage tiers, you could have something crazy like six tiers of storage:
Tier 1: SSD arrays based on SLC flash devices.
Tier 2: SSD based on "enterprise MLC" flash devices.
Tier 3: Storage arrays based on 15,000 RPM hard drives.
Tier 4: Storage arrays based on 5400 RPM hard drives.
Tier 5: A tape library using LTO-6 tape.
Tier 6: An offsite storage facility with LTO-6 tapes in storage containers.