Spod

Patron
  • Content count

    1960

Community Reputation

0 Neutral

About Spod

  • Rank
    Guru > me > newbie

Profile Information

  • Location
    Leeds, UK
  • Interests
    Spodding
  1. There's an alternate riser card available with 2x PCIe 3.0 x16 (x16 electrical as well as physical) connectors on each riser, but one of those only supports half length cards due to the physical internal space in the server. You can have one of those risers per CPU installed, for up to 4 cards with 2 CPUs. As for 150W options, the quickspecs confirm it can do 4x 150W PCIe x16, but if you want them all to be full length, look at the ML350p Gen8, which has 4 full length, full height PCIe 3.0 x16 connectors, one of which is only x8 electrical. It's 5U (rack or tower), however, and takes up to 18 LFF or 24 SFF drives. It should be comparable in price and other specs to the DL380p Gen8.

    On a separate note, the DL100 series are definitely HP's lower end server offerings: fewer management features, not as tool-less, fewer hot swappable components, lower end drive controllers, and of course shelves instead of extendable rails. They still have their place, but for the difference in price, I think the DL300 series are a worthwhile investment (over the DL100 series, I'm not comparing with other brands here) for any organisation big enough to devote a significant amount of employee time to managing physical servers. That's just my personal opinion; I know I don't like working with them and have tried to persuade my employer to avoid buying them in future.

    We're more or less standardised on DL380s (with DL585s for virtualisation hosts) now. It simplifies everything if the majority of your server estate (we're into 3 figures) is based on one or two models. Of course, in terms of Server OS installs, the majority of our estate is virtual, which is even better... but it all has to run on something.
  2. HP do 200 GB and 400 GB SLC drives, though you can expect to pay £4-5K for the 200 GB and twice that for the 400 GB model. They're rebadged SanDisk Lightning SAS 6 Gbps SSDs, I think. There was an article on the press release here on SR a month or two back. I can't find the SR article, but here's SanDisk's press release. They also do 200 GB, 400 GB and 800 GB MLC drives, and there's a chance I'll get to play with the latter at work sometime in the next month or two, if our supplier gets some stock! I know it doesn't help with the Micron stuff, but since HP are a tier one vendor, it ought to be easier to find their stuff.
  3. Old thread, I know, but I wanted to mention that I've updated the firmware on my Vertex 3 (120 GB regular version, not the Max-IOPS version) boot drive by turning a USB stick into a bootable Linux stick, following instructions linked on the OCZ forums. I copied the Linux update utility onto the stick, booted from the stick, and ran the utility; it pulled the latest firmware off t'internet (Yorkshire-ism) and updated flawlessly. Not as easy as a utility that could update a boot drive in place would have been, but not as much of a faff as keeping a separate Windows PC just for firmware updates would have been.
  4. So what's the practical difference between one of these and a regular Green drive? Cost? Firmware? Testing?
  5. Put it this way - if you pick a 60 GB drive for a boot drive, you can make it work. You won't find yourself with 60 GB of stuff that can't possibly be moved or reinstalled onto a separate drive. You can clean up the drive, disable hibernate, move or reinstall less used programs on a hard disk, store some of your bigger user data on the hard disk, etc. At work, we often use 24 GB OS drives for Windows 2008, and that can get a bit tight once you've installed Office and a few updates, but 60 GB should be plenty to get by.

    Me personally, I'd choose something a little bigger just so that I'd know I wouldn't have to mess around, and could store more of my stuff on the SSD (I've only got about 150 GB of stuff in total on my main PC). From what you've said, you don't need to install a whole lot of programs and games on top of the OS, so 60 GB should be fine for you. It's certainly not worth getting another drive because you might possibly need the extra space. If you've got a 60 GB drive, use it. If you run out of space, and prefer to replace the drive rather than move some stuff off it, well you haven't lost out, and it'll probably be a little bit cheaper or better for having been bought a few months down the line.
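    Not from the original post, but if you want to sanity check the sizing before committing, a couple of lines of Python (the "C:\\" path is an assumption; adjust for your OS) will show how much your current install actually occupies:

      # Rough sketch: measure current usage to see whether a 60 GB (~55.9 GiB) SSD would hold it.
      import shutil

      total, used, free = shutil.disk_usage("C:\\")   # use "/" on Linux
      gib = 1024 ** 3
      print(f"Currently used: {used / gib:.1f} GiB of {total / gib:.1f} GiB")
      print("Fits on a 60 GB SSD" if used < 60e9 else "Would need trimming first")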
  6. So what's the actual difference between these? Same controller, cheaper flash and performance that's artificially limited in the firmware? Using fewer channels / modules per channel / die per module / whatever? The Solid series used to have a different controller, and the Agility used to be like a Vertex but with cheaper flash. And for some reason, the Vertex was often cheaper than the Agility, at least in the UK, presumably something to do with volumes sold. So now they've got three tiers based on the same controller? Seems excessive.
  7. Just to be clear, we're talking 150 MB/s (1.5 Gbps) SATA, right? Most good SSDs will saturate that for sequential transfers, but random transfers might not be interface limited. What size do you need? Might the drive survive a system upgrade and get used on a 6 Gbps SATA connection in future? If you're sticking with SATA 150 MB/s, then I'd say SandForce SF-1200 is where you want to be. The SF-2000 series isn't that much faster on legacy connections, and the SF-1200 is much more widely available, at a wider range of capacities. And it's more likely to fit in budget.
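    For reference - my arithmetic, not part of the post - the raw SATA link rates map onto usable bandwidth like this, because SATA uses 8b/10b encoding (10 bits on the wire per data byte):

      # Sketch: SATA line rate vs. maximum payload bandwidth (8b/10b encoding,
      # so each data byte costs 10 bits on the wire; protocol overhead ignored).
      for name, gbps in [("SATA 1.5 Gbps", 1.5), ("SATA 3 Gbps", 3.0), ("SATA 6 Gbps", 6.0)]:
          mb_per_s = gbps * 1e9 / 10 / 1e6   # bits/s -> bytes/s -> MB/s
          print(f"{name}: ~{mb_per_s:.0f} MB/s")
      # SATA 1.5 Gbps: ~150 MB/s
      # SATA 3 Gbps: ~300 MB/s
      # SATA 6 Gbps: ~600 MB/s

    That's why even a mid-range SSD will hit the ~150 MB/s ceiling on sequential transfers, while small random transfers usually sit well below it.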
  8. Upgrading?

    I wouldn't hang around for Windows 8, unless you've really no pressure to upgrade. I reckon it's about a year and a half away, from the usual rumour mills.
  9. I'm not sure RAID 0 was ever likely to stress either controller much. Something with parity, like RAID 5 or 50, might show up the differences better. And yes, more drives would help, though good luck in getting Intel to send you another 4 drives!
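    To illustrate why parity RAID gives the controller more to do (my toy example, not from the thread): every full-stripe write means XORing the data blocks together to generate parity, and partial-stripe writes turn into read-modify-write cycles, whereas RAID 0 just splits the data and writes it.

      # Toy sketch of RAID 5 full-stripe parity: parity block = XOR of all data blocks.
      def raid5_parity(data_blocks: list[bytes]) -> bytes:
          parity = bytearray(len(data_blocks[0]))
          for block in data_blocks:
              for i, byte in enumerate(block):
                  parity[i] ^= byte
          return bytes(parity)

      # Any single lost block can be rebuilt by XORing parity with the survivors.
      stripe = [b"AAAA", b"BBBB", b"CCCC"]
      p = raid5_parity(stripe)
      assert raid5_parity([p, stripe[1], stripe[2]]) == stripe[0]   # recovers block 0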
  10. So if the software depends on having the latest Intel storage drivers loaded... what happens if you're not using an Intel controller? I understand which OSes can use GPT for boot and data, but how do the dependencies for MBR use of 3 TB drives work? Is there a controller dependency for 3 TB drives even if the OS supports GPT?
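    For context (my arithmetic, not from the post), the reason MBR is the sticking point at 3 TB is that an MBR partition entry stores start and length as 32-bit sector counts, which caps out at 2 TiB with 512-byte sectors; anything past that needs GPT or some driver-level translation, which is where the controller dependency question comes in.

      # Why plain MBR can't span a 3 TB drive: 32-bit sector counts x 512-byte sectors.
      max_sectors = 2 ** 32
      sector_size = 512
      limit = max_sectors * sector_size
      print(f"MBR addressing limit: {limit / 1e12:.2f} TB ({limit / 2**40:.0f} TiB)")
      # MBR addressing limit: 2.20 TB (2 TiB)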
  11. That controller will only do RAID 0/1/10; it will not do any parity based RAID. If you want RAID 5, you have to upgrade the Smart Array controller, and it has to have a cache module (as I understand it, HP won't let you do RAID 5 unless you've got cache - this is an artificial limitation to stop you using entry level parts to do enterprise level jobs). If you want to do RAID 6, you additionally have to buy a Smart Array Advanced Pack, which is a licence key to unlock further features of the array controller you bought. Don't blame me, I'm just the messenger! Here's HP's official line:

    More bad news - neither the case nor the integrated Smart Array controller will take more than 4 internal drives in total, including the one it ships with. So your 4x 2 TB RAID 10 array will need a partition for the OS carved out of it, since you'll have to take out the 250 GB drive. The good news is that it will take any 3.5" SATA drive without modification, and HP should still warranty the rest of the server (with the possible exception of drive cables and anything that the drives could conceivably cause to break).

    You could also install a third party RAID controller if you really want RAID 5 or more than 4 ports (you could add a 5.25" drive bay adapter), though you're starting to water down the "everything's tested with everything else" advantage of sticking to HP parts. I don't work for or represent HP, I just use their stuff.
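    As a rough illustration of the capacity side (my sketch, generic RAID arithmetic rather than anything Smart Array specific):

      # Usable capacity for n identical drives at the common RAID levels.
      def usable_tb(n: int, size_tb: float, level: str) -> float:
          if level == "RAID0":  return n * size_tb
          if level == "RAID1":  return size_tb              # all drives mirror one
          if level == "RAID10": return n * size_tb / 2
          if level == "RAID5":  return (n - 1) * size_tb    # one drive's worth of parity
          if level == "RAID6":  return (n - 2) * size_tb    # two drives' worth of parity
          raise ValueError(level)

      print(usable_tb(4, 2.0, "RAID10"))   # 4.0 TB from 4x 2 TB, minus the OS partition
      print(usable_tb(4, 2.0, "RAID5"))    # 6.0 TB, but only with an upgraded controller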
  12. Shopping from HP, we're lucky to find out anything about the brands they use. I'm guessing that they'll eventually expand their SSD offerings beyond the 60 GB and 120 GB Samsung based SATA drives going for about four times the price of the equivalent consumer models. But it'll all be HP badged, and we might not even find out what controller's on there without buying one and taking it apart. Still, reading this, I'm expecting they'll offer 900 GB 10K and 300 GB 15K SFF drives sometime in the next few months. Here's hoping the new SSDs offer a bigger improvement over the previous generation's performance!
  13. "best" RAID configuration?

    6x 600 GB drives, including 1 hot spare. 5x 600 GB = 3 TB total storage, so you can spare one drive's worth of capacity, but not two. So none of the multiple physical array options will provide enough storage, even if they share a hot spare, as they'd each use at least one disk for mirror/parity and give you 1.8 TB at best.

    Now, there's an option with HP array controllers that might help you, if Dell do something similar (all my experience is with HP servers, but Dell are usually not too far behind on the technology front). It lets you have multiple logical arrays with different RAID levels in one physical array; I'll try to explain. HP will let you configure a physical array with 5 drives and a hot spare, but not specify a RAID level for the physical array. You can then split it into logical arrays with specified RAID levels and arbitrary sizes. So you could specify a RAID 1 and a RAID 5 logical array, and it would use part of each drive for the RAID 1 and part of each drive for the RAID 5.

    Performance is distributed between the two logical arrays depending on how active each one is. If the RAID 5 logical array is idle, the RAID 1 logical array will get all the performance of 5 drives. When the RAID 1 logical array is idle, the RAID 5 logical array will get all the performance of 5 drives. When both logical arrays are equally active, performance will be roughly equivalent to a 2 drive RAID 1 and a 3 drive RAID 5, but in practice it should be better as long as both logical arrays aren't constantly saturated with reads and writes. It's comparable to the way SANs virtualise storage, where you get RAID 1, 10 or 5 equivalent performance and redundancy for each LUN, but spread across maybe 60 disks that are shared with many other LUNs.

    Advantages - you can specify logical arrays that aren't a multiple of the drive size, without wasting space. For example, a 250 GB RAID 10 (which will use 500 GB of your total storage, 100 GB per disk) and a 2 TB RAID 5 array (which will use 2.5 TB of your total storage, 500 GB per disk). When one array is utilised much more than the other, it gets more drives' worth of performance. For logical arrays with parity, you lose proportionally less space because it's spread across more drives.

    Disadvantages - if you lose one drive, all logical arrays are degraded. If both logical arrays are heavily used at the same time, the performance advantage is reduced - it could even be a slight overhead vs. two separate physical arrays. But that's usually worthwhile for the added flexibility and capacity.

    Sorry if I've just given everyone a headache; it took me a while to get my own head round it while writing this!
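    To put numbers on the example above (my sketch, not an HP or Dell tool), here's the per-disk cost of carving those two logical arrays out of one 5-drive physical array:

      # Raw space consumed on each of n drives by a logical array of a given usable size.
      def raw_per_disk_gb(usable_gb: float, level: str, n_drives: int) -> float:
          if level == "RAID10":
              raw = usable_gb * 2                               # everything is mirrored
          elif level == "RAID5":
              raw = usable_gb * n_drives / (n_drives - 1)       # one drive's worth of parity
          else:
              raise ValueError(level)
          return raw / n_drives

      n = 5  # six drives minus the hot spare
      print(raw_per_disk_gb(250, "RAID10", n))    # 100.0 GB per disk
      print(raw_per_disk_gb(2000, "RAID5", n))    # 500.0 GB per disk
      # 600 GB per disk in total, which is exactly what each 600 GB drive can hold.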
  14. Haven't Indilinx got a new controller coming out for the 6 Gbps 25 nm generation? It would have to be something special to dethrone Sandforce, Crucial & Intel, but they've proven themselves capable of designing a controller that actually works well, so I'm still interested to see how that new controller pans out. Even if they are a value play, at least their name isn't mud (unlike JMicron).
  15. Intel's Light Peak I/O & beyond

    I think the lack of comments stems from the lack of products implementing this technology at the consumer level. So, silicon photonics makes optical connections cheaper. Great. Maybe they'll use it for USB4, or start wiring new buildings with optical fibre instead of copper networking. But this needs to be built into laptops and motherboards and devices before anyone will call it the latest must-have technology. And it will take longer to displace copper networking, given the installed base of copper and the cost of rewiring buildings for optical fibre networks. Still, if it speeds up the deployment of fibre to the home for broadband, faster networks in the datacentre, and displaces USB 3 / eSATA as the universal standard for connecting external storage devices, good luck to it!