Adde

Member
  • Content Count: 230
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Adde
  • Rank: Member
  1. My work laptop had a really f*cked up Vista SP1 installation. I decided to install Win 7 x64 just to try it out and then install XP or Windows Server 2008 for real use. I haven't managed to part with Win 7 though: everything works, no stability problems so far (previously the computer crashed a few times a day), and it's really fast! It feels faster than XP ever did (though that was 32-bit XP). I don't know about drivers in general, but any Vista driver is supposed to work. All hardware so far has been recognized automatically, including the chipset with LAN, WLAN, sound etc...
  2. Adde

    Need opinions on choice of RAID

    Without thinking too much about it, it seems to me that a 10-drive RAID 6 array would have better expected reliability than two 5-drive RAID 5 arrays. As for expansion, you could always perform an online capacity expansion, swapping out one drive at a time. I'm not sure exactly how the Areca works, but I assume that if you swapped out, say, 5 drives for larger ones you could put another array on the newly available space, making upgrades easy.
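    A minimal back-of-the-envelope sketch of that reliability comparison, assuming independent drive failures with an illustrative annual failure rate p and ignoring rebuild windows and unrecoverable read errors:

        from math import comb

        def p_loss_raid5(n, p):
            # RAID 5 loses data once 2 or more of its n drives fail
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2, n + 1))

        def p_loss_raid6(n, p):
            # RAID 6 loses data once 3 or more of its n drives fail
            return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))

        p = 0.03  # assumed per-drive annual failure rate, purely illustrative

        raid6_10 = p_loss_raid6(10, p)
        raid5_2x5 = 1 - (1 - p_loss_raid5(5, p)) ** 2  # either 5-drive array failing loses data

        print(f"10-drive RAID 6:   {raid6_10:.6f}")
        print(f"2x 5-drive RAID 5: {raid5_2x5:.6f}")

    For any realistic p this simple model favors the single RAID 6 array, which matches the gut feeling above.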
  3. Adde

    Disks bigger than 1TB - when?

    OK, in Sweden the Seagate 7200.11 1TB is currently the sweet spot, closely followed by Samsung's 750GB drive. They are both under the magical 1 SEK/GB. But with 2-platter designs the 750GB drives might become cheaper.
  4. Adde

    Disks bigger than 1TB - when?

    Nice, maybe they just want to get some experience with a simpler design first and will then launch a 1.5TB drive. After all, Hitachi has a known track record with multi-platter designs... they would be close to 2TB with 5 platters. Maybe 750GB will be the next GB/dollar sweet spot, although 1TB drives are quite good value currently too.
  5. Adde

    Disks bigger than 1TB - when?

    1.5TB!!! http://www.dailytech.com/article.aspx?newsid=12335 Nice... four platters... I didn't expect Seagate to be first... I just hope they don't merely announce it first. A link in the article points to a page on Seagate's site which says: "Shipments of the Barracuda 7200.11 1.5TB are set to begin August 2008", so hopefully they will be possible to purchase by October-November or so? Hitachi just announced a second-generation 1TB drive with higher platter density, so maybe they are on the right track for something larger too? Anyway, I've seen paper launches before, so let's not get too excited.
  6. Adde

    Disks bigger than 1TB - when?

    My guess is that 1.3TB drives or similar sizes are going to be "announced" soon. 1TB drives have been around for a good while, platter sizes have increased, and many manufacturers have reached 333GB platters, so at least 1.3TB is within reach. I think the manufacturers would be looking at introducing the next capacity soon, since the 1TB drives are already very aggressively priced (at least in Sweden), not costing that much more per gigabyte than smaller drives such as the 500GB models. Even if a 1.3TB drive arrived soon, though, I doubt it would push 1TB drive prices down that much; instead the 1.3TB drive would be more expensive per gigabyte. When you could actually buy a 1.3TB drive is another matter; I doubt you would find one much earlier than Christmas. Not based on any facts, just pure guesses.
  7. Hello! Prompted by some performance issues on a few SQL Servers belonging to a client, I've noticed that all the SQL Server data is placed on a 4-spindle RAID 5 array. The machine has an HP Smart Array 6i controller, which should handle online RAID migration, but since I want to change from RAID 5 to RAID 0+1 some space will be lost. There is plenty of free space, but how will Windows Server 2003 handle this? The current RAID 5 has a stripe size of 64KB and no write-back cache. If my logic is right this is a very poor configuration for SQL Server, which mainly performs 8KB I/Os. I'm planning to change it to RAID 0+1 with a 16KB stripe size, on the reasoning that a single 8KB I/O will then only require one pair of drives to be satisfied; am I right? The controller has 64MB of non-battery-backed memory, if that affects the choice. Even though this is supposed to be an online operation I'll be damn sure to have backups.
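    A rough sketch of that stripe reasoning, assuming "stripe size" here means the per-disk stripe unit (the offsets and sizes are just illustrations):

        def drives_touched(io_kb, stripe_unit_kb, start_offset_kb=0):
            # How many data drives a single I/O spans, given where it starts
            # relative to a stripe-unit boundary (mirror/parity writes not counted).
            first = start_offset_kb // stripe_unit_kb
            last = (start_offset_kb + io_kb - 1) // stripe_unit_kb
            return last - first + 1

        # SQL Server pages are 8KB and 8KB-aligned, so on a 16KB stripe unit a
        # single-page I/O stays on one drive (plus its mirror for writes):
        print(drives_touched(8, 16))    # -> 1
        # A 64KB read-ahead, by contrast, spreads across several drives:
        print(drives_touched(64, 16))   # -> 4

    So single-page random I/O stays on one mirrored pair, while larger sequential reads still fan out across the array.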
  8. So is it correct to draw this conclusion: SANs are great, and even though they are not cheap they will deliver, it just takes a larger scale? The 10-30 spindle SANs I've seen, used mostly for one or two databases, a few servers and some file serving, are not big enough to really justify the expense of a SAN, or at least there is not much point doing it for performance per price?
  9. A question that has been troubling me a lot lately comes from a few problems at a few clients. They were all related to databases, and without exception these databases all run on different types of SANs (some on HP, some on NetApp, with varying specs).

    The first major point is performance... this is where a SAN solution should really do well: impressive specs, loads of drives, NVRAM caches etc... but still they fail to impress me. Granted, the setups I've worked with have all been on the cheap side (example: a NetApp-something with 10 15k rpm drives, connected with FC though), but with the price tags on these pieces, buying anything but the cheap stuff will ruin you. Still, even with quite reasonable numbers of drives (10 15k rpm drives is not massive, but not that bad) performance doesn't seem to be that great. And for most installations that I've "fixed", the solution has been redesigning/optimizing software or, in the worst case, adding some more (sometimes quite a lot more) RAM, which seems to be a very cheap upgrade compared to buying the "one step better" SAN solution. It strikes me that RAM is probably the cheapest solution anyway, since even a few hours of optimization add up to real money.

    The next issue is reliability, which should also be an area where SANs excel. Although I've never seen one go completely down, I've seen the effects of serious controller/drive issues causing failures to write and read data. Maybe a local array of drives is even worse, but I just can't get rid of the feeling that having a single point of failure (the SAN) is worse than having two completely independent arrays of drives (one in each database server).

    The systems I've worked with have typically been anything from 2-8 cores with 4-16 GB memory, often but not always two servers using clustering (sharing the same data on the SAN, allowing one of the servers to fail) or similar technology. Is the SAN the best choice for some other reason? Or is it just that I'm working with the wrong type of setups (too small scale) for SANs to really show their best? In the setups I've seen, comparable performance could be reached with a few more drives (but split the bunch in two, one set for each server), using mirroring or some similar technology to get two completely redundant servers, and the money saved spent on extra memory. For mass storage a common file server can be used.

    I really need to know, because I get the feeling that I'll quite soon be in a position to advise customers on which solutions to choose, and I really don't want them to run into the problems that I've had to solve. Confused about SANs. All these emerging SSDs of different types make me even more confused... but that's maybe for another day...
  10. Adde

    Fusion IO

    Is it just me, or is stuff like this really cheap? I've seen several projects where really expensive SAN solutions have been used just to handle some quite small databases and to act as large-capacity file servers. With one or a few cards like this and a "normal" file server instead, better performance can be had for less money. Sure, SANs have some very nice features, but they cost an arm and a leg, and really large configurations are needed to deliver truly awesome performance. Are you thinking along similar lines?
  11. Adde

    raid array number of disks

    Since 25% of your load is writes, I wouldn't use RAID 5 unless the writes are very "non-random". I think your calculation is slightly off for the following reasons: 100-150 random I/Os per second per drive is a little low... if you are building a high-performance database server you are likely to use the fastest 15k rpm drives available; have a look in the performance database. You will probably not have truly random I/Os... if the database isn't huge you will only use a portion of the drives, and writing the transaction log is sequential, so using a separate mirror for that eliminates some of the "randomness". Many of the data pages required to get at data are cached (the upper part of the index tree). And many I/O patterns are not that random... for example scanning an entire non-clustered index, or scanning a range of a clustered index. Use the SQL Server trace tool... figure out the load... These numbers can't simply be calculated, and just because you can handle one database with that load on a given setup, it's not certain that you can handle two databases with twice the number of drives. If you are really uncertain, go for RAID 10 and make sure you can add more drives.
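    A minimal sketch of the kind of estimate I mean, assuming an illustrative random-IOPS figure per 15k rpm drive and the usual back-end write penalties; the numbers are placeholders, not values from the performance database:

        def host_iops(n_drives, drive_iops, read_frac, write_penalty):
            # Rough model: reads cost 1 back-end I/O, writes cost `write_penalty`
            # (2 for RAID 10 mirroring, 4 for a RAID 5 read-modify-write).
            write_frac = 1 - read_frac
            return n_drives * drive_iops / (read_frac + write_frac * write_penalty)

        drives, iops_each, reads = 8, 180, 0.75   # assumed values, 75% reads / 25% writes
        print(f"RAID 10: ~{host_iops(drives, iops_each, reads, 2):.0f} host IOPS")
        print(f"RAID 5:  ~{host_iops(drives, iops_each, reads, 4):.0f} host IOPS")

    With a 25% write share the RAID 5 penalty already eats a noticeable chunk of the array's random-I/O capacity.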
  12. A pretty good guess would be to look back in time to when the gigabyte drives appeared... similar capacities (but with T instead of G) are likely to appear.
  13. Adde

    photoshop and ssd for swapfile

    I agree; the nice side effect of very low latency is that building really fast storage solutions becomes slightly simpler... just add as many as you can in RAID 10 and run everything on that array. The bad part is that building high-capacity solutions will still require conventional drives, and what to store where will remain a problem.
  14. Adde

    2007, Coolest year in storage ever

    What's the purpose of cache on an SSD? I agree... if there is no significant improvement from using cache (and I doubt there is on an SSD), skip it... simplicity always rocks, and avoiding cache should lower the potential for data loss caused by sudden power loss.

    Quote: "The SuperTalent drives are dead slow. Putting a RAM buffer on that drive would hide a lot of that without driving up their cost. Even with the new Samsung and SanDisk drives at 60-some MB/sec, that's nowhere near the full potential of ATA5 or SATA. But put 16MB of buffer on those drives, and run a pair in RAID 0, and you could stream at 120MB/sec to them all day long. There's tons of large-scale server applications for storage at that speed. In the notebook space it's still a win because bursting data to the drive at full interface speed means overall I/O time is shorter, which means less time powering the I/O interface, which means more battery life. As for the whole power loss angle - you already take that risk with every hard drive out there, but in a notebook the risk of sudden power loss is near zero since you always have a battery and a power gauge..."

    I thought the point of the cache was more or less to have some storage available while repositioning the head... For your streaming example I can't really see why those drives wouldn't be capable of doing that without a 16MB buffer. For the notebook example maybe you have a point... but that RAM also consumes power, although once it has been emptied I guess it could be shut down entirely. So there might be a net gain in power consumption; anyway, I think it would take a pretty I/O-heavy notebook scenario for this to make any impact... even a rotating HD is not a big consumer in a notebook. Going SSD really removes the storage component from the "TODO list" for the time being.
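    To make that streaming objection concrete, here is a toy model (the 150MB/s bus speed and the fill-then-throttle behaviour are assumptions, not measurements): a write buffer only shortens the host-visible transfer while the burst fits in it; a sustained stream ends up limited by the drive either way.

        def host_write_time_s(data_mb, drive_mbps, buffer_mb=0.0, bus_mbps=150.0):
            # The host streams at bus speed while the buffer has room (it fills at
            # bus - drive rate); once the buffer is full the host is throttled to
            # drive speed.
            if buffer_mb <= 0 or bus_mbps <= drive_mbps:
                return data_mb / drive_mbps
            fill_time = buffer_mb / (bus_mbps - drive_mbps)
            sent_while_filling = bus_mbps * fill_time
            if sent_while_filling >= data_mb:
                return data_mb / bus_mbps   # the whole burst fits before throttling kicks in
            return fill_time + (data_mb - sent_while_filling) / drive_mbps

        for size_mb in (16, 1000):   # a small burst vs a long sustained stream
            plain = host_write_time_s(size_mb, 60)
            buffered = host_write_time_s(size_mb, 60, buffer_mb=16)
            print(f"{size_mb}MB: {plain:.2f}s without buffer, {buffered:.2f}s with 16MB buffer")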
  15. Adde

    2007, Coolest year in storage ever

    What's the purpose of cache on an SSD? I agree... if there is no significant improvement from using cache (and I doubt there is on an SSD), skip it... simplicity always rocks, and avoiding cache should lower the potential for data loss caused by sudden power loss.