kdkirmse

  1. kdkirmse

    Suggestions for SAN hardware

    The noise level will vary depending on the model. The highest noise level chassis I have ever dealt with was an Intel box. The Intel chassis/motherboard combos are decent technology, but they don't have the options that the Supermicro case line has.

    RAID levels and SATA vs SAS will depend on what kind of services you are running on your VMs. The highest reliability will come from RAID6 in an array of the size you are considering; RAID10 would give the highest IOPS. SAS vs SATA would be determined by your IOPS requirements. Cache is not a cure-all: if everyone logs in at the same time and hits their roaming profile, the various caches in the system are not going to do you much good. SATA/RAID6 is more than sufficient for many common servers. If you have some servers with a much higher disk I/O requirement, they can be put onto their own RAID set.

    One thing you should do before you commit to a system is to determine whether using more than one SANMelody box would be more cost effective. A single very high performance box is unlikely to be twice as fast as a more modest box.
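    As a rough illustration of the RAID10 vs RAID6 IOPS point, here is a back-of-the-envelope sketch using the usual write-penalty rule of thumb (2 back-end I/Os per random write for RAID10, 6 for RAID6). The per-drive IOPS figures and the 12-drive array size are my own assumptions, not measurements.

        # Rule-of-thumb random-write IOPS comparison. Per-drive IOPS values
        # and the drive count are illustrative assumptions only; cache and
        # controller effects are ignored.
        def usable_write_iops(drives, per_drive_iops, write_penalty):
            return drives * per_drive_iops // write_penalty

        SATA_IOPS = 80    # assumed ~80 random IOPS for a 7200 rpm SATA drive
        SAS_IOPS = 180    # assumed ~180 random IOPS for a 15k rpm SAS drive
        DRIVES = 12

        for name, penalty in (("RAID10", 2), ("RAID6", 6)):
            print(name,
                  "SATA:", usable_write_iops(DRIVES, SATA_IOPS, penalty),
                  "SAS:", usable_write_iops(DRIVES, SAS_IOPS, penalty))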
  2. kdkirmse

    Suggestions for SAN hardware

    I have built quite a few SANMelody/SANSymphony boxes. First, SANMelody does not require Windows 2003; it runs fine on Windows XP. What SANMelody supports is the Windows 2003 capabilities on the client machines, which is independent of the OS on the SANMelody boxes. The main limitation on a Windows XP box is the 3.5GB memory limit.

    My recommendation would be to purchase a Supermicro based machine. They offer bare systems which have the drive bays and slots that would fit your requirements. When I was spec'ing machines, the Dell, HP and other prebuilt machines did not offer sufficient drive bays and slots to make a good SDS (Storage Domain Server). This may have changed. It would take a very strange configuration on a SANMelody machine to see a major performance difference between 4GB and 8GB setups; if all you have is a modest number of volumes you won't even use 3.5GB.

    Both 3ware and Areca offer RAID6 capable RAID cards. Keep in mind that RAID6 writes have a tendency to be slow. I probably would not start out using the TOE cards. Since the box would be a dedicated data mover, the lower CPU utilization that a TOE might offer would be less of an advantage, and you would also need to check the compatibility of the TOE card.
  3. OLTP and making actual $$ are not always the same thing. An array optimised for database I/O like some of the better known models is sometimes a poor choice for things like video editing, video production and online archive. Trying to get large amounts of "plain old" reliable storage from some of the bigger vendors can be painful: vendors can be so scared of cannibalizing their higher end storage that they cripple or overprice their basic models. Having actually worked for a VAR integrating Hitachi storage, my biggest complaint with them is not their product but their corporate structure. The bureaucracy when dealing with them was at times insane. For a new scientific data storage system design in the 100s of TB range I would use some of the better midrange 3U FC-SATA chassis. The previous one I did used FC drives. All told, I would expect the modern SATA version to be more reliable than the earlier FC version. Six years makes a lot of difference.

    I am sorry, we do not allow the use of profanity in the SR forums. Please refrain from using the company name EMC in any future posts. Otherwise, you are absolutely correct. Hitachi is more about IO, rather than "plain old" storage. For 50TB of "plain old" storage, you can use just about any white box SATA->FC 4U arrays out there. That said, I'd never use them for an OLTP instance, and I'd never use them in a production environment where actual $$ are involved. BBH
  4. I have built 50TB arrays using much smaller drives than are currently available. Many of the products produced by the Hitachis and EMCs of the world are not that good an idea for storing scientific data: you pay a large price premium for storage that is heavily optimised for a different application. At least the modern drives available today aren't quite the room heaters that we had to deal with 5-6 years ago. A 1PB storage system would be a lot of work to put together, but you would not be doing anything exotic.
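    To put some rough numbers on that, here is a quick sizing sketch. The drive size, RAID6 set size and shelf capacity are assumptions picked for illustration, not figures from any particular build.

        # Back-of-the-envelope drive and shelf counts for a large
        # "plain old" storage build. All parameters are assumptions.
        import math

        DRIVE_TB = 0.75        # assumed per-drive capacity in TB
        SET_SIZE = 16          # assumed RAID6 set: 14 data + 2 parity drives
        BAYS_PER_SHELF = 16    # assumed 3U chassis with 16 drive bays

        def sizing(usable_tb):
            data_per_set = (SET_SIZE - 2) * DRIVE_TB
            sets = math.ceil(usable_tb / data_per_set)
            drives = sets * SET_SIZE
            shelves = math.ceil(drives / BAYS_PER_SHELF)
            return sets, drives, shelves

        for target_tb in (50, 1000):   # 50TB and roughly 1PB usable
            print(target_tb, "TB usable ->", sizing(target_tb))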
  5. In my experience most storage on SANs is not shared. A lot of this storage is just carved up into pieces and allocated to individual servers. These servers access the storage using industry standard protocols.
  6. A client machine accesses a SAN drive just like a physical drive, maintaining things like directory structures and block free lists. If two clients access the same drive at the same time, each machine could change these structures, and this concurrent access is what will likely corrupt the file system. There are software packages which use some form of communication between clients to make sure that file system integrity is maintained, but they can run anywhere from $500 - $4000 per machine. A properly set up SAN shared file system would be faster than a NAS system for objects like photos and videos; however, it may not be practical for a smaller environment. If the library consists of a large amount of fixed content, then a shared volume could work as long as access to the volume is restricted to read only.
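    A toy sketch of why the uncoordinated case goes wrong: two simulated clients each cache the same free-block list, allocate from their private copy, and write the whole list back. This is not real file system code, just an illustration of the race.

        # Two clients mount the same SAN volume with an ordinary (non-cluster)
        # file system. Each caches the on-disk free-block list, allocates a
        # block from its private copy, and writes the metadata back.
        disk_free = [1, 2, 3, 4, 5]      # free-block list as stored on disk

        cache_a = list(disk_free)        # client A caches the metadata
        cache_b = list(disk_free)        # client B caches the same metadata

        block_a = cache_a.pop(0)         # A allocates block 1 for its file
        disk_free = list(cache_a)        # A writes its view back to disk

        block_b = cache_b.pop(0)         # B never saw A's update, so it
        disk_free = list(cache_b)        # also allocates block 1 and then
                                         # overwrites A's metadata update

        print("A owns block", block_a, "/ B owns block", block_b)
        print("free list on disk:", disk_free)
        # Both files now point at block 1 -- the volume is corrupted.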
  7. There is a fundamental difference between NAS storage, of which Samba is an example, and SAN storage, of which iSCSI is an example. NAS storage deals with files and directories on the server; the file system and its integrity are maintained by the server. SAN storage deals with blocks; the file system and its integrity are maintained by the client(s). There are often many communication round trips associated with NAS traffic, while SAN traffic is much more basic and as such has lower overhead.

    If you only have a single client per volume there will not be many obvious differences between a SAN storage device and a NAS storage device. The SAN device will likely be faster, but it takes a certain amount of tuning of systems and applications to take advantage of it. Where things get hard is when you attempt to attach multiple clients to a common volume. In the case of NAS this is very simple: it just works, and the only time there is trouble is when more than one client attempts to access the same file. Just attaching multiple clients to a common volume on a SAN is a recipe for disaster. Unless there is special (read: expensive) software running on the clients, the data on that common volume will be corrupted in short order.

    Once an iSCSI target server is set up, attaching Linux or Windows initiators to the associated volume is not too difficult. In the case of Windows 2K you have to download and install the iSCSI initiator from Microsoft.
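    On the Linux side, the attach step is roughly the sequence below using the open-iscsi tools. The portal address and target IQN are placeholders; on Windows 2K you would use the downloadable Microsoft initiator instead.

        # Rough sketch of attaching a Linux initiator with open-iscsi.
        # Portal and target name are placeholders; run as root.
        import subprocess

        PORTAL = "192.168.1.50:3260"                # placeholder iSCSI portal
        TARGET = "iqn.2000-01.com.example:vol0"     # placeholder target IQN

        # Ask the portal which targets it exports.
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                        "-p", PORTAL], check=True)

        # Log in; the volume then shows up as an ordinary block device
        # (e.g. /dev/sdX) that you partition, format and mount locally.
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", PORTAL, "--login"], check=True)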
  8. kdkirmse, you are referring to the minimum number of failures needed to render the array nonviable. If you have both drives in a mirror pair in RAID 1+0 fail, then yes, the array is no longer accessible. With RAID 6, it requires 3 concurrent failures to render the array nonviable. However, you are either not considering or downplaying that with RAID 6, any three concurrent drive failures will have this effect, whereas with RAID 1+0, you must have both drives in a mirror pair fail. Looking at it strictly from a numbers viewpoint, the odds of the 2nd drive failure happening to the mirror of a failed drive before the failed drive is rebuilt decrease as more drives are added to the array.

    Once you have a failure in a RAID10 array, the reliability of the array drops to the reliability of the 2nd drive in the degraded pair. In the case of a RAID6 array with a failed drive, the reliability of the array drops to that of a RAID5 array.

    A first order calculation for the MTBF of a RAID10 array:

        MTBF^2 / (2 * N * MTTR10)

    where MTBF is the mean time between failures of a single drive, N is the number of mirror pairs in the array, and MTTR10 is the mean time to repair the RAID10 array.

    A first order calculation for the MTBF of a RAID6 array:

        MTBF^3 / (D * (D - 1) * (D - 2) * MTTR6^2)

    where D is the number of drives in the array and MTTR6 is the mean time to repair the RAID6 array.

    Normalizing the terms so the two arrays hold the same usable capacity:

        MTTR6 = K * MTTR10
        N = D - 2

    The crossover point where RAID10 becomes more reliable than RAID6:

        MTBF^2 / (2 * (D - 2) * MTTR10)  <>  MTBF^3 / (D * (D - 1) * (D - 2) * K^2 * MTTR10^2)

    which simplifies to

        1 / 2  <>  MTBF / (D * (D - 1) * K^2 * MTTR10)

    With MTBF = 10^6 hours, MTTR10 = 1 hour and K = 24 (a 1 hour RAID10 rebuild versus a 24 hour RAID6 rebuild), a 60 drive RAID6 array would be just less reliable than a 116 drive RAID10 array. There are enough 2nd order effects to make this calculation overly optimistic. For a modest size array it is going to be hard to argue against RAID6 from a reliability standpoint.
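    A quick sketch that plugs those numbers back into the two first-order formulas to check the crossover; the MTBF, MTTR and K values are just the assumptions stated above.

        # First-order array MTBF estimates from the post; times in hours.
        MTBF = 1e6        # single-drive mean time between failures
        MTTR10 = 1.0      # RAID10 rebuild time (assumed 1 hour)
        K = 24            # RAID6 rebuild assumed to take 24x longer
        MTTR6 = K * MTTR10

        def mtbf_raid10(pairs):
            return MTBF**2 / (2 * pairs * MTTR10)

        def mtbf_raid6(drives):
            return MTBF**3 / (drives * (drives - 1) * (drives - 2) * MTTR6**2)

        # Equal usable capacity: a D-drive RAID6 set vs D-2 mirror pairs.
        for D in (59, 60, 61):
            print(D, "drives:",
                  "RAID6", f"{mtbf_raid6(D):.2e} h,",
                  "RAID10", f"{mtbf_raid10(D - 2):.2e} h")
        # The crossover lands right around D = 60, matching the post.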
  9. Actually, a RAID10 array is not always more reliable than a RAID6 array, since a RAID6 array must suffer 3 concurrent failures before losing data while a RAID10 array only has to lose both drives in one mirror pair.
  10. kdkirmse

    Understanding ExpressCard

    Looks like all ExpressCard slots must provide both PCI-e and USB 2 support. Individual modules are likely to only support one or the other. From the FAQ "All ExpressCard slots accommodate modules designed to use either Universal Serial Bus (USB*) 2.0, or the emerging PCI*Express standards." "The ExpressCard interface uses high-speed PCI Express or USB 2.0 serial interfaces through a 26 contact high-performance beam-on-blade connector. "
  11. Let's say in 2 years they come out with such a drive. It will likely cost $1K - $2K for a 100GB drive, and it would likely be several years before it becomes cost effective for the general market. With that kind of timespan I suspect that you would be considering replacing a drive like a Raptor before then. Right now you can get compact flash based memory, with its inherent limitations, for around $400 for an 8GB module. Even with a significant breakthrough of some sort it will be years before a typical consumer is going to have access to much better; such a product would likely be released for the enterprise market first. For a personal machine I would only worry if a product had already been announced.
  12. While a virus could not damage a hard drive physically, it would be possible for a virus or the removal process to damage the file system on the drive. Such a damaged file system could cause your operating system to hang. Putting the drive into an alternate computer might work; a Windows 2K machine could work better. Booting a modern CD-ROM based Linux distribution might allow you to access your files as well.
  13. kdkirmse

    Areca 1230 startup problem

    Bus-powered means that a USB device does not have an external power supply and gets all of its power from the USB bus. I was not clear on that.
  14. kdkirmse

    Areca 1230 startup problem

    The first thing to do is to attempt the reboot process without the USB hardware attached; I have had enough problems with machines failing due to loose hardware. Second question: are the USB hubs externally powered? I have had some problems getting machines to boot when bus-powered devices are attached.
  15. Which version of Windows are you running? If you have managed to associate the wrong driver with a piece of hardware, you may need to remove the card, delete the device entry, and then delete the oem*.inf entry for the card in the Windows directory tree. I have managed to get myself into this bind a couple of times. Most often it has happened with my own hardware that I am in the process of developing, but I have also had it happen with some drivers that incorrectly detect certain hardware.
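    If you want to see which oem*.inf file belongs to the card before deleting it, a rough sketch like the one below can help. The inf directory path and the hardware ID string are placeholders; take the real ID from the card's properties in Device Manager.

        # List the oem*.inf files that mention a given hardware ID so you
        # know which one to remove after deleting the device entry.
        # Path and ID are illustrative placeholders.
        import glob
        import os

        INF_DIR = r"C:\WINDOWS\inf"            # assumed default inf location
        HARDWARE_ID = "VEN_XXXX&DEV_XXXX"      # placeholder PCI hardware ID

        for path in glob.glob(os.path.join(INF_DIR, "oem*.inf")):
            try:
                with open(path, "r", errors="ignore") as inf:
                    text = inf.read()
            except OSError:
                continue
            if HARDWARE_ID.lower() in text.lower():
                print("candidate:", path)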