SAN_guy

Member
  • Content Count: 153

Community Reputation: 0 Neutral

About SAN_guy

  • Rank: Member
  1. This is actually a huge growth industry for the online porn merchants - that's pocket porn. What better way can you think of to pass that subway commute or taxi ride than watching people get down and dirty? In Japan this is already a huge market. When the dot-com bubble burst, the only industry on the net that didn't take a hit was the porn merchants, and they've been growing quarter after quarter. SG
  2. I love the people who claim "tape is dead" but have not used the most recent high-end tape solutions. Right now LTO3 is probably the biggest standard. LTO3 allows 400GB of uncompressed data to be stored on a $75.00 tape cartridge, with 60MB/s sustained read/write performance. The biggest problem is having a disk subsystem that can sustain that level of throughput to keep the tape drives from wind-milling; this is typically accomplished by deep buffers on the tape drives themselves. We routinely see 100MB/s performance to LTO3 on typical user shares where we get about 1.6X compression.

    LTO4 is due out soon, which will double the above to 800GB uncompressed and 120MB/s read/write performance. Cartridges will be about $120 initially but will come down with time. Quantum has the SDLT600, which is similar in performance and capacity. Sony has AIT and SAIT, which are also competitive in performance and capacity. So with this said, we have three high-end tape formats that are all alive and prospering. This certainly doesn't sound like "tape is dead" to me.... SG

    Yeah, that helps to keep things in perspective. I think what you're saying is that we're just seeing a new layer being added into the storage hierarchy, but the old layers won't necessarily go away. Though I think in many deployments they will. A lot of people just back up HDDs by dup'ing them onto other HDDs now, forgoing tape or optical media. But there are probably still many more data centers using tape. Personally I haven't used any tape drives for backup since the SCSI Exabytes about 12 years ago, and don't see any reason to look in that direction. I played with backing up my stuff onto miniDV tapes over FireWire, but it's too slow, and DV tapes are too unreliable; you need an awful lot of redundancy to safely write a backup, which eats into the overall capacity. So anyway, I believe in due time the bottom (old/slow) layers of the storage hierarchy will vanish.
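
    The LTO numbers above lend themselves to a quick back-of-the-envelope check. Here is a small Python sketch of the arithmetic; the drive specs are the ones quoted in the post (LTO3: 400GB native at 60MB/s, LTO4: 800GB native at 120MB/s) and the 1.6X compression ratio is the one observed on typical user shares, so treat the output as illustrative rather than a benchmark.

    # Rough backup-window and cartridge-count estimate from the LTO figures above.
    DRIVES = {
        "LTO3": {"native_gb": 400, "native_mb_s": 60},
        "LTO4": {"native_gb": 800, "native_mb_s": 120},
    }
    COMPRESSION = 1.6  # ~ratio seen on typical user shares

    def backup_window_hours(dataset_gb, drive, compression=COMPRESSION):
        """Hours to stream dataset_gb to tape, assuming the disk side keeps up."""
        effective_mb_s = DRIVES[drive]["native_mb_s"] * compression  # ~100MB/s on LTO3
        return dataset_gb * 1024 / effective_mb_s / 3600

    def cartridges_needed(dataset_gb, drive, compression=COMPRESSION):
        """Cartridges to hold dataset_gb at the assumed compression ratio."""
        effective_gb = DRIVES[drive]["native_gb"] * compression
        return -(-dataset_gb // effective_gb)  # ceiling division

    for drive in DRIVES:
        print(f"{drive}: {backup_window_hours(2000, drive):.1f} h for a 2TB share, "
              f"{cartridges_needed(2000, drive):.0f} cartridge(s)")
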
  3. SAN_guy

    iSCSI, complete network upgrade

    $300 is either for an OEM or a recertified drive, and warranty will be a problem. In Canadian dollars from a large distributor the Raptor 150GB is $344 and a Seagate 10K 147GB is $470. Those are CDN dollars, but both are new with a 5 year warranty. Your $250 is either luck or you don't have a warranty from Seagate... SG
  4. SAN_guy

    iSCSI, complete network upgrade

    SATA isn't so bad for these applications because the price allows you to address its shortcomings. For example, we've set up a lot of clusters using a pair of iSCSI-attached arrays with SATA disks. The way to make this something you can sleep at night over is to run the arrays as RAID-10 with a hot spare, and also mirror in real time across the two arrays. This way you have no SPOF and can also take advantage of the mirrored arrays to help performance. SATA is cheap enough to allow this 100% redundancy while still coming in under SCSI drive pricing. A lot of the savings is that nobody does an iSCSI-attached SCSI drive enclosure for reasonable $$$$. SG

    I'm not sure that you fully appreciate that centralised storage is also a centralised point of failure. If there's a software burp or catastrophic hardware failure, all of your storage is out. I'd think that it would be preferable for some single function to be out for a while rather than everything being down until the storage server is back up. I am aghast that you would seriously consider using ATA drives in mission-critical systems -- especially a new, unproven model. Last time I checked, they're not even cheaper than comparable SCSI drives.
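
    To make the redundancy overhead of that layout concrete, here is a small Python sketch. The enclosure size, drive capacities and street prices are illustrative assumptions of mine, not figures from the post; the point is just how much usable space survives RAID-10 plus a hot spare plus an array-to-array mirror, and what that costs in SATA versus SCSI spindles.

    # Usable capacity after RAID-10 + hot spare per array, then a real-time
    # mirror across two identical arrays (the second array buys redundancy,
    # not capacity). All sizes and prices below are hypothetical examples.
    def raid10_usable_gb(bays, drive_gb, hot_spares=1):
        data_drives = bays - hot_spares
        if data_drives % 2:            # RAID-10 needs an even member count
            data_drives -= 1
        return (data_drives // 2) * drive_gb

    BAYS, SATA_GB, SATA_PRICE = 12, 400, 150   # assumed 400GB SATA drive
    SCSI_GB, SCSI_PRICE = 146, 500             # assumed 146GB 10K SCSI drive

    usable = raid10_usable_gb(BAYS, SATA_GB)   # cross-array mirror adds no space
    raw = 2 * BAYS * SATA_GB
    print(f"usable {usable} GB of {raw} GB raw ({usable / raw:.0%}), "
          f"drive cost ${2 * BAYS * SATA_PRICE}")

    # Same usable space out of 146GB SCSI in plain RAID-10 (no second array):
    scsi_drives = 2 * -(-usable // SCSI_GB) + 1   # mirrored pairs + hot spare
    print(f"~{scsi_drives} x 146GB SCSI drives, ${scsi_drives * SCSI_PRICE}")
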
  5. SAN_guy

    iSCSI, complete network upgrade

    This is true, but the iSCSI stacks written for embedded hardware are far more feature-rich. The software stacks written to run on a host OS like Windows are not. Perhaps this will change in time, but for now it is a huge weakness. Microsoft didn't do an iSCSI target driver for a reason, and I am sure they would have liked to have one for WSS2003. SG

    Actually there are few "hardware" iSCSI targets on the market. Some just hide their implementation better than others.

    It's a nice solution and would work well, albeit with Xeons and not Opterons. My experience shows Dell will always come down or throw in some freebies. SG

    Why not go with 2 AX100i DPs, which support iSCSI out of the box? More storage than 24 Raptors, and you will most likely never see a difference. A Dell blade chassis with 10 blades, 2GB RAM and dual 3GHz processors would cost about 30 and the 2 AX100s would go for about 10 each. These could be integrated with larger EMC solutions in the future. If you went to them with an order like this you could probably get them to throw in a couple of switches.
  6. SAN_guy

    iSCSI, complete network upgrade

    This will be your weak link. None of the iSCSI software targets fully support the iSCSI error recovery levels, which means they are not suitable for your application. The hardware targets do this properly, and this is a huge part of the iSCSI spec for the software guys to "ignore". Think of it this way: your Exchange server sends an IO that hits an error somewhere on the LAN, but it's *never* detected nor corrected by your software iSCSI target, so the Exchange box thinks it's all OK and goes on with life.....until the issue comes back in a very big, ugly way. As for String Bean supporting everything, that isn't the case - it is an iSCSI target-mode driver and that's it. All the other functionality comes from the OS and the fact that the OS "owns" the disk prior to it being exported by the iSCSI target driver. This is much different than having it done in specialized hardware designed for the task. You have been warned...... SG
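
    For background on what is being "ignored" here: ErrorRecoveryLevel (ERL) is a numeric key negotiated at iSCSI login, and per RFC 3720 the session runs at the minimum of what the initiator and target each offer. The tiny Python sketch below is purely illustrative of that negotiation, not any particular target's code.

    # iSCSI ErrorRecoveryLevel negotiation in miniature (RFC 3720):
    #   ERL 0 - session-level recovery only (tear down and restart the session)
    #   ERL 1 - adds recovery from digest (checksum) failures
    #   ERL 2 - adds connection-level recovery within a session
    ERL_MEANING = {
        0: "session recovery only",
        1: "digest-failure recovery",
        2: "connection recovery",
    }

    def negotiate_erl(initiator_offer: int, target_offer: int) -> int:
        """Operational ERL is the minimum of the two offered values."""
        return min(initiator_offer, target_offer)

    # A target that only implements ERL 0 drags the whole session down to 0
    # no matter how capable the initiator is -- which is the weakness being
    # called out for software targets above.
    erl = negotiate_erl(initiator_offer=2, target_offer=0)
    print(f"session runs at ERL {erl}: {ERL_MEANING[erl]}")
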
  7. SAN_guy

    iSCSI, complete network upgrade

    Shy away from your iSCSI server concept and go with a real hardware-target iSCSI array - performance will be much better and you'll gain functionality like snapshots, mirroring, etc, etc. Infortrend makes a nice 12-bay iSCSI hardware-target array and EqualLogic also makes some nice systems (albeit a bit more pricey). Paying for Raptors is of no use, as the latency of iSCSI will mask any perceived response benefit, so your better bet is to go with larger-capacity drives and set up the array as RAID-10. As for the HBAs on the servers, Alacritech is a good choice and performance is quite good. Also ensure you budget for a good wire-speed gig-e switch, preferably with layer-3 capabilities.

    What you propose is a workable solution if you do it correctly, but correctly doesn't mean cheap. You may want to consider a blade chassis setup for your servers and buying one spare blade. That way if you have a blade go down you can just zone its iSCSI LUN to the spare blade, reboot the blade, and presto, it's taken on the image of the downed server -- if the problem didn't toast the partition. If you want to go to the next level then it would involve VMware, virtualizing your servers to allow more flexibility. SG
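
    The spare-blade trick is easier to see as a sequence of steps. The Python sketch below is only an outline of that flow; the array_api and blade_api helpers are hypothetical placeholders for whatever management interface your array and chassis actually expose, not a real product API.

    # Sketch of "zone the failed blade's boot LUN to the spare and reboot it".
    # array_api / blade_api are hypothetical management-interface stand-ins.
    def fail_over_to_spare(failed_blade, spare_blade, array_api, blade_api):
        lun = array_api.lun_for(failed_blade)           # the downed server's image
        array_api.unzone(lun, initiator=failed_blade)   # make sure only one owner
        array_api.zone(lun, initiator=spare_blade)      # present the LUN to the spare
        blade_api.power_cycle(spare_blade)              # spare boots up as the failed
                                                        # server, assuming the problem
                                                        # didn't toast the partition
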
  8. In late 2006 Seagate will release a 1TB version of their NL series, with SATA, SAS, and FC interfaces. It will use perpendicular recording to get that capacity, and it will be a 2-platter design (250GB per side per platter). Hitachi will also have something in that range. SG
  9. SAN_guy

    SAS availability

    FC can easily be deployed as point-to-point, and in many cases that's how it's now done. Many semiconductor firms are building FC switch-on-a-chip ICs that provide for deploying point-to-point connections. This allows better performance and also better fault isolation (no more single device bringing down a loop). The beauty of this is that it leverages current FC technology that has gone through the maturity "baking" cycle. These SOCs add about $30 to the cost of a drive enclosure, so they are really nothing in the entire scheme of things. So in essence you can now have a fabric behind the RAID arrays, which really adds some nice possibilities.

    SAS should eventually do the same thing; the problem is the time for it to mature and scale out. With FC you have all the bits right now - FC technology all the way from the HBAs to the drives, including the fabric between. With SAS you have HBAs and you have drives, but practically none of the infrastructure in between those two is available. I think we will soon see SOCs that act as an FC-to-SAS bridge to allow SAS enclosures of drives to sit behind FC controllers and FC fabrics without anyone knowing. This can be done right now with FC-attached SATA enclosures, so SAS is the next logical step. They simply bridge the SATA drives to look like FC drives to the FC controllers.

    I actually think this whole FC/SAS/SATA/iSCSI thing will go away in the future, because for datacenters it's rapidly beginning not to matter -- just link everything with InfiniBand and virtualize whatever protocol you need on top of that. Then simply have an IB to FC/SAS/SATA/Gig-E/10Gig-E gateway to connect in the legacy products. We are already starting to deploy natively connected IB disk controllers that use FC on the back end. This has simplified our cluster computing dramatically - we only have a single interconnect/fabric to manage, and all the protocols run on it in a virtualized fashion. This is the future for the datacenter..... SG
  10. SAN_guy

    SAS availability

    FC isn't going anywhere anytime soon....

    1Gig stuff is EOL and selling for pennies on eBay; nobody is building 1Gig FC gear any longer.
    2Gig is the "commodity" value product and the standard, fully backward compatible with 1Gig.
    4Gig is on the ramp up (available in quantity, just at a slight price premium over 2Gig -- our last FC switch was an extra $100/port to go 4Gig over 2Gig, which is not a lot) and is backward compatible with 2/1Gig.
    8Gig is on the roadmap for 2007/2008 and will run 1/2/4/8Gig.
    There is also a derivative that isn't physical-layer compatible, but is protocol compatible, that will run on 10Gig (same physical layer as 10Gig Ethernet).

    As for SCSI, I know an Ultra-640 is on the roadmap, but Seagate has announced it is not going to go there; the current drives are the last of the parallel SCSI models and all the future holds is SAS/SATA/FC. I believe the Cheetah-5 is serial interfaces only. SG
  11. SAN_guy

    SAS availability

    SAS is in that period of a technology's life after it has been announced but before the manufacturers have everything worked out and products shipping in volume. Sun is about the only one I know shipping real SAS product (the Galaxy 4X00 servers use the 2.5" Savvio in SAS). Lots of people are working on products, and we've tested a few; give it 6 to 9 more months and we should start to see serious rollouts and availability. SG
  12. SAN_guy

    FC vs iSCSI

    Trusting Adaptec to launch a product on time and without serious bugs is a big risk. SanBloc comes from Eurologic, which they purchased a few years ago. Back then they had an "ok" product and it has not really gained any traction since.

    I believe SAS will be the death blow to SCSI drives, not native FC drives. SAS may also impact SATA in a big way, as a lot of the new SAS drives are rumoured to also speak SATA since the physical interconnects are the same. It is very cheap to add the proper controller that allows the drive to work with both SATA and SAS controllers -- far cheaper than the cost of making two different drive models! Seagate has announced that future Cheetah models will be SAS based, with parallel SCSI being phased out in the next 2 years.

    If you do want to be on the razor's edge with new technology, you can always get a SAS enclosure, attach it to a Windows server, and run the String Bean software iSCSI target driver to allow the SAS storage to be remotely mounted by iSCSI initiators. This works quite well and is simple to maintain. You can put multiple iSCSI TOEs in the String Bean box to get good performance out to your iSCSI SAN. You can serve out any storage that the Windows box can see as an iSCSI target, so even a server box with a bunch of SATA drives/arrays internally can be served out as iSCSI targets.... Hope this helps, SG
  13. SAN_guy

    FC vs iSCSI

    We do a lot of this stuff, and have almost 500TB of FC-attached SATA arrays right now for nearline storage. We also have almost 200TB of FC-attached FC arrays. The true limiting factor is the disk technology. Standard 7200 RPM SATA drives just don't perform well in highly random workloads. Some of the newer FC-attached SATA chassis support SATA-II, and when paired with SATA-II drives (that do NCQ) the performance improvement in random workloads can be significant, but still shy of 10K or 15K FC/SCSI drive performance.

    So the question is: what are your workloads? If they are highly sequential in nature then FC-attached SATA is great and can easily saturate a 4Gb FC channel (we see 386MB/s RAID5 writes off our 16-disk FC-SATA-II arrays), but if they are random you need to consider higher-end 15K drives. I am not a big fan of FC-SCSI arrays, as I feel that if you are going to spend the $$$ on high-performance drives you might as well go all out for an FC-FC solution.

    As for iSCSI, if you go with hardware TOEs and a dedicated enterprise-quality Ethernet switch (or a pair for redundant paths) you will find it's about the same price as 2Gb FC gear. Also the iSCSI targets are still limited in choices, whereas FC targets are plentiful and well tuned, and the iSCSI targets seem to be price targeted rather than performance targeted. I won't say stay away from iSCSI; I'll just say that if it's close to equal dollars, FC gives you far more vendor choices and is a proven technology. I think iSCSI will be the future, I just think we need to wait for onboard TOEs and cheap 10 gig Ethernet to become readily available to really make this shift happen. I also expect our datacenter will have FC in one form or another well into the next decade. Hope this helps.... SG

    PS: Check out www.infortrend.com for some great-performing array chassis available in FC or iSCSI. I like this product, have installed over 40 of their 16-bay units, and they are good, solid-performing arrays. I have not used their iSCSI target version but have heard good things.
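
    The "great sequential, weak random" point can be sanity-checked with rough numbers. In the Python sketch below, the seek times and RPM figures are typical ballpark assumptions of mine (only the 386MB/s over 16 disks comes from the post), so read it as an illustration of why spindle speed dominates random workloads, not as a benchmark.

    # Single-drive random IOPS ~= 1 / (average seek + half a rotation).
    # Seek times below are generic ballpark values, not measured figures.
    def random_iops(avg_seek_ms, rpm):
        service_ms = avg_seek_ms + 0.5 * 60_000 / rpm
        return 1000 / service_ms

    print(f"7200rpm SATA : ~{random_iops(8.5, 7_200):.0f} IOPS per drive")
    print(f"10K FC/SCSI  : ~{random_iops(4.5, 10_000):.0f} IOPS per drive")
    print(f"15K FC/SCSI  : ~{random_iops(3.5, 15_000):.0f} IOPS per drive")

    # Sequential is a different story: 386MB/s of RAID5 writes spread over a
    # 16-disk array is only ~26MB/s of streaming work per data spindle, which
    # even a 7200rpm SATA drive handles easily.
    print(f"per-spindle sequential share: ~{386 / 15:.0f} MB/s")
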
  14. SAN_guy

    LaCie reliability

    Take the cover off the LaCie and put a fan blowing on the guts to keep the drives cool. Then get your data off as quickly as you can and copy it somewhere else. We used to see that when the drives cooled down they would sometimes work for a while before the heat got to them again, so the above may work long enough to get your data off. SG
  15. SAN_guy

    LaCie reliability

    These things are pure crap. We bought over 20 of them to shuffle large data between offices and they have ALL failed. So we now use Maxtor OneTouch II 300GB drives and just split the data onto multiple drives. So far no problems, just a bit more data management and more stuff to ship around. LaCie is absolutely useless on this. On many occasions we've returned units still under warranty and received replacements that died within hours of coming out of the box. This is a company that deserves a class-action lawsuit, and deserves to be put out of business. They are knowingly shipping product that is defective, and they definitely know this. Form before function at LaCie..... SG