Darking

Member · Content Count: 241 · Days Won: 1

Posts posted by Darking


  1. I will give my advice as well.

    As kremlar says, I wouldn't go for a hot spare either; you simply have too few disks. I would also follow the best practice of splitting the logs and the database volume on a database server.

    As I see it, you have two options:

    1) Go for the high-capacity option and do a 4-disk RAID 5 for data and a 2-disk RAID 1 for OS/logs.

    2) Go for the higher-speed option and do a 4-disk RAID 10 for data and a 2-disk RAID 1 for OS/logs.

    And yeah, just buy a disk to keep in reserve when you don't have hot spares.

    I believe SQL Server writes 8 KB pages; it can be googled fairly easily.
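
    For what it's worth, here is a rough back-of-envelope comparison of the two data-array options in Python. It is only a sketch; the 600 GB drive size and ~90 IOPS per disk are assumed figures, not numbers from this thread.

        # Rough comparison of the two data-array options above.
        # Assumed (not from the thread): 600 GB drives, ~90 random IOPS each.
        def raid5(disks, size_gb, disk_iops):
            usable = (disks - 1) * size_gb
            write_iops = disks * disk_iops / 4   # RAID 5 write penalty of 4
            return usable, write_iops

        def raid10(disks, size_gb, disk_iops):
            usable = disks // 2 * size_gb
            write_iops = disks * disk_iops / 2   # RAID 10 write penalty of 2
            return usable, write_iops

        size_gb, disk_iops = 600, 90             # hypothetical drives
        for name, (cap, wiops) in [("4-disk RAID 5 ", raid5(4, size_gb, disk_iops)),
                                   ("4-disk RAID 10", raid10(4, size_gb, disk_iops))]:
            print(f"{name}: {cap} GB usable, ~{wiops:.0f} random-write IOPS")

    The point is just the trade-off: option 1 buys capacity, option 2 buys roughly twice the random-write throughput from the same spindles.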


  2. What is the preferred network switch around these parts? I am guessing most readers here are probably using gigabit equipment by now, but does anyone have a specific brand preference, or a home/business preference? Personally I am using a Netgear GS108 at home (it plays a big role in all of our NAS tests) and hopefully something similar will exist in our office.

    A Cisco 3750 would probably be the obvious answer, but I must admit I find their whole pricing model a bit scary, so we use HP ProCurves.

    We use simple 2610-48PWR switches with 100 Mbit ports around the office, since bandwidth isn't really a major requirement when you run Citrix on all your clients (we could probably do fine with 10 Mbit, really). For our core switches we use the chassis-based ProCurve 5412ZL.

    It's a nice switch and can do a lot more than is really needed in our datacenter, but it handles both iSCSI and "normal" network traffic with ease, and supports 10GbE modules.

    The main reason for choosing HP is the lifetime warranty and a more "fair" price than Cisco can deliver.

    Oh, and at home I just use my D-Link DIR-655. It's capable enough for gigabit, though unfortunately there is no jumbo frame support.


  3. Yes, there are heavily used files, recently uploaded ones, and old files that are only used sometimes, so tiered storage could be one option.

    Will SAS give the needed performance gain? Is there really that much of a difference compared to SATA drives?

    There is one more affordable option, and that is lots of RAID 5 sets, e.g. 6x RAID 5 (arrays of 4 drives, 3+1 spare). There is storage loss, but less than with RAID 10. What do you think about this idea?

    And with SATA drives a failure will be less horrible if just one array drops off.

    Could you perhaps explain what the problem is with your current setup at the moment?

    You say you're not running RAID or anything on it; how is the data managed?

    SAS is about twice the speed of SATA: a decent SATA disk does around 90 IOPS, while a SAS disk can do around 180 IOPS. In raw MB/s they probably transfer about the same, but seek latency is a lot lower on SAS.

    I agree that it might make sense to make smaller RAID 5/6 sets, mainly because a 12- or 20-disk RAID 5 takes way, way too long to rebuild. You're probably better off making either four 6-disk RAID 5s without a hot spare (but then I would advise keeping disks on premises in case of failure), or three 6-disk and one 5-disk RAID 5 with one hot spare available.

    Or just make a RAID 60 with your controller, if the write penalty isn't too much.

    It's a bit hard to give sound advice when we know so little about your environment and what you really need.
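
    To put rough numbers on the layout trade-off, here is a small sketch. The 24 disks of 2 TB each are assumptions for illustration, not figures from this thread.

        # Back-of-envelope look at the RAID 5 layouts discussed above.
        # Assumed (not from the thread): 24 disks of 2 TB each.
        DISKS, SIZE_TB = 24, 2

        def raid5_layout(group_size, groups):
            usable = groups * (group_size - 1) * SIZE_TB
            # A rebuild has to re-read every surviving disk in the failed group.
            rebuild_read_tb = (group_size - 1) * SIZE_TB
            return usable, rebuild_read_tb

        for label, (size, count) in [("1 x 24-disk RAID 5", (24, 1)),
                                     ("4 x  6-disk RAID 5", (6, 4))]:
            cap, reread = raid5_layout(size, count)
            print(f"{label}: {cap} TB usable, {reread} TB re-read per rebuild")

    You give up a little capacity with the smaller groups, but the amount of data that has to be re-read on a rebuild (and with it the window where a second failure hurts) shrinks dramatically.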


  4. Thanks for the answers. No, 700 MB/s won't be needed for one server or a pair of servers in RAID 60/50; max 2 Gbit per server, or 4 per pair.

    It is used for file hosting, so IOPS is IMHO the thing we need most. We tried RAID 5 and 6 for one server with 22 TB, and it went down after a few minutes of operation.

    Are all the files accessed frequently, or would it be possible for you to tier your storage?

    You could probably build some sort of system that keeps highly accessed files on SAS and has some sort of routine that moves less-used files to a slower media type like SATA.
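
    Even a script run from cron could do that; here is a minimal sketch, assuming a plain filesystem layout. The paths and the 90-day threshold are made-up examples.

        # Minimal file-tiering sketch: move files not accessed for N days
        # from the fast (SAS) tier to the slow (SATA) tier.
        # The paths and the threshold are hypothetical examples.
        import os, shutil, time

        FAST_TIER = "/srv/tier-sas"
        SLOW_TIER = "/srv/tier-sata"
        MAX_IDLE_DAYS = 90

        cutoff = time.time() - MAX_IDLE_DAYS * 86400
        for root, _dirs, files in os.walk(FAST_TIER):
            for name in files:
                src = os.path.join(root, name)
                if os.path.getatime(src) < cutoff:      # not read recently
                    dst = os.path.join(SLOW_TIER, os.path.relpath(src, FAST_TIER))
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)               # demote to the SATA tier

    You would of course need something (a symlink, or the application itself) to find the file in its new location, and atime has to be enabled on the filesystem for the check to mean anything.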

    RAID 50/60 is fine if the I/O load is mostly reads, but if you're very write-bound I would go for RAID 10, though that of course leaves you with even less usable storage per disk.

    More professional arrays allow several RAID levels on the same disks, making it possible to utilize the I/O of a disk better, but I'm not sure any homebrew system allows for that in software.

    Honestly, I doubt you're a candidate for SSD in any form. Yes, you could probably get some IOPS out of it, but I have a feeling that if the amount of data written changes a lot, the performance degradation would be horrific over time with non-EFD drives.


  5. If you are running VMs I would not go with SATA; you should be looking into SAS instead. VMs use a lot of IOPS and SATA will be very painful. Also, you need to make sure the VMs are aligned.

    Sorry, I have to object a bit here.

    I'm running a VMware environment with around 50 VMs in it, and I'm not seeing above 500 IOPS even with SQL and Exchange servers running, so I won't agree with the claim that VMs are inherently big I/O hogs. What you primarily need is good latency and redundancy; pure IOPS and bandwidth, less so.

    If you're running VDI we are talking about something entirely different, and exactly the same goes if you're running multi-user environments like TS/Citrix.

    I agree on alignment, though; that is always important. Both VMware and Hyper-V run on aligned filesystems, so it's mainly a RAID issue.
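
    If you want to sanity-check alignment inside a guest, the usual test is whether the partition's starting offset divides evenly by the array's stripe-unit size (and by 4 KB). A small sketch of that arithmetic; the 64 KB stripe unit and the offsets are just examples:

        # Quick alignment check: a partition is aligned if its starting offset
        # is an even multiple of the RAID stripe-unit size (and of 4 KB).
        STRIPE_UNIT = 64 * 1024              # e.g. a 64 KB stripe unit

        def is_aligned(offset_bytes, boundary=STRIPE_UNIT):
            return offset_bytes % boundary == 0

        for offset in (32256, 1048576):      # XP-era default (63 * 512) vs. 1 MB
            print(f"offset {offset:>8} bytes -> aligned: {is_aligned(offset)}")

    On Windows the real offset can be read with something like wmic partition get Name, StartingOffset; Vista/2008 and later default to a 1 MB offset, while XP/2003-era partitions start at sector 63 and need to be aligned by hand.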


  6. Nobody to give me advice? :(

    If space is an issue I would probably go for RAID 50, with 1-2 hot spares per chassis.

    The write penalty on RAID 6(+0) can be a bit big, depending on the controller or hardware doing it.
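
    The usual rule-of-thumb write penalties make the difference easy to see. A quick sketch; the disk count and per-disk IOPS are assumed figures, not from this thread:

        # Effective random-write IOPS under the usual RAID write penalties.
        # Assumed: 24 disks at ~90 IOPS each (illustrative figures only).
        DISKS, DISK_IOPS = 24, 90

        WRITE_PENALTY = {"RAID 10": 2,   # write goes to both mirror members
                         "RAID 50": 4,   # read data + parity, write data + parity
                         "RAID 60": 6}   # as RAID 5, but with two parity blocks

        for level, penalty in WRITE_PENALTY.items():
            print(f"{level}: ~{DISKS * DISK_IOPS / penalty:.0f} random-write IOPS")

    That is only the parity-math side of it; a controller with a decent write-back cache can hide a fair amount of this in practice.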

    Another route would be to look into ZFS; I've heard great things about it, but I have no clue how it works.

    I would never advise you to buy standard desktop drives. They aren't rated for 24/7 operation, and I will almost bet money that you will see a higher failure rate on them than on enterprise-rated drives.

    More RAM doesn't do a thing unless you can use it as cache on the controllers, and I'm highly doubtful you can do that.

    Do you run anything that actually needs 700 MB/s reads or writes? If it's simple file storage, I highly doubt you're ever going to use more than 200 MB/s in any likely scenario.


  7. How would you rate this one? Assuming quality enterprise level drives?

    I would go for a server PSU instead of a high-end gaming PSU; gaming PSUs aren't rated for 24/7 operation, so I wouldn't trust them. I would look into a good redundant server PSU.

    As you said, the drives need to be rated for 24/7 as well, and I would look into how to secure my data from something disastrous happening, like data corruption.

    I would also look into how it's supposed to be connected to the network. Are you planning on running multipathed 1-gig NICs, 4-8 ports, or are you planning on running 10GbE?

    10GbE can be very good, but you also need to be aware of I/O control, so a single VM host doesn't drag the entire SAN down with it. If you run VMware Enterprise Plus, for example, enable Storage I/O Control.
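
    For a rough feel for the bandwidth side of that choice, here is the pure line-rate arithmetic (it ignores protocol overhead and how evenly multipathing actually spreads the load):

        # Line-rate comparison of multipathed 1 GbE vs. a single 10 GbE link.
        # Pure arithmetic; real iSCSI throughput will be lower due to overhead.
        def gbit_to_mb_per_s(gbit):
            return gbit * 1000 / 8           # 1 Gbit/s is roughly 125 MB/s on the wire

        for label, gbit in [("4 x 1 GbE ", 4), ("8 x 1 GbE ", 8), ("1 x 10 GbE", 10)]:
            print(f"{label}: ~{gbit_to_mb_per_s(gbit):.0f} MB/s aggregate")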


  8. I understand and totally agree with your sentiment. However, since I am unable to satisfactorily provide the details you need to give a definitive answer, how about providing a range of options? Such as: a so-and-so combination of hardware would support 15-20 super-high-bandwidth VMs, whereas such-and-such el-cheapo components would probably work better if you were only expecting to do low-traffic stuff.

    Is providing a range of options possible?

    I would probably just buy a server like a Dell R610, fit it with two PERC H800s, buy the needed number of MD1200 chassis, and try to get the disks somewhere cheap like Newegg. The main problem will be getting the drive caddies when you don't buy the disks from Dell.

    The major problem with cheap solutions is reliability. If you don't have every component redundant, that's where the system fails (and that's typically why enterprise storage costs a premium). In enterprise SANs every component should be redundant, from storage controllers to chassis interfaces to the switches and HBAs/NICs.

    I'm sure there are cheaper ways with something like Supermicro, but don't expect to build 2x12 disks for anything under $10k.


  9. Well, the server will be serving as a storage SAN for a small to medium sized virtualization project. Maybe between 15-30 VMs max? I don't really have a hard number for the budget, I'm still trying to get a feel for the various options and requirements for such a server. Obviously, the cheaper the better, but before putting together a proposal, I'd like to understand the pros and cons of the various hardware choices, etc.

    Oh, and thanks for starting the new section!

    The major problem with giving advice, especially about stuff like I/O (which is what you need to size for), is that 15-30 VMs could mean anything from 30 servers doing absolutely nothing to 15 large enterprise SQL or Exchange servers pushing thousands of I/Os per second.

    Basically, size, spindle count, etc. all need to be compared against the performance requirements you have. Once you have both kinds of information, you pretty much know what you need and can build a system from that knowledge.
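
    The usual back-of-envelope for that sizing step looks something like this. All of the input numbers are made-up examples, not a recommendation:

        # Classic spindle-count estimate from an IOPS requirement.
        # All inputs are illustrative; measure the real workload first.
        required_iops = 3000       # total front-end IOPS the VMs need
        read_ratio    = 0.7        # 70 % reads / 30 % writes
        write_penalty = 2          # RAID 10 (use 4 for RAID 5, 6 for RAID 6)
        disk_iops     = 180        # roughly a 15k SAS disk

        backend_iops = (required_iops * read_ratio
                        + required_iops * (1 - read_ratio) * write_penalty)
        print(f"Back-end IOPS: {backend_iops:.0f} "
              f"-> roughly {backend_iops / disk_iops:.0f} SAS spindles")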


  10. That PERC will not even begin to handle the power of those SSDs; it will hold them back more than anything.

    The fastest RAID card available for your needs is a 9260-8i with the FastPath key. 160K IOPS @ 4K random. It is UBER.

    Get a good SSD with good GC and say screw the TRIM. You will be fine without it.

    My 9260 with 8 crappy generation-1 30 GB Vertexes beat the pants off the Fusion I/O, not only in max read speed (I hit 1.875 GB/s; top is 2.8 read) but in IOPS and real-world testing as well. Imagine if you get 8 C300s on there.

    Here are results of this controller with several different arrays: Intels, C300s, Vertexes, and Acards.

    This is by far the fastest solution on the planet right now :)

    FASTPATH

    Check out post 83 for a nice summary.

    The PERC is an LSI MegaRAID controller.

    It's using the LSI 2108 ROC, which the 9260-8i cards also use.

    I see no reason why there shouldn't be a FastPath option.


  11. Do you mean to say that you think Windows Server 2003 is 4k sector aware? I keep reading that WHS doesn't like the AF drives _because_ it's based on Windows Server 2003.

    Do you mean with or without pins 7/8 jumpered? I think I've read that the align tool should not be used if the drive has been jumpered.

    Easily done. I run the WD Align tool after setting up the partition (I plan on using just one partition), but before formatting, correct?

    To give a reply to why WHS does not support EARS drives without the jumper: it's simply a matter of the Align utility refusing to run on WHS, since it doesn't recognize it as Windows XP, and I think the issue is the same for 2k3.

    Personally I use a WD15EARS in my WHS, with the jumper, without issue.

    It may be possible to align it according to http://forum.wegotserved.com/index.php/topic/14632-wd-ears-align-ok-on-whs/ but I must admit I have not personally tried it (I guess they fixed something in the utility).
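
    For what it's worth, the arithmetic behind the jumper is simple to sketch. This assumes the usual description of the EARS pins 7-8 jumper as shifting every LBA by one sector; check WD's documentation before relying on it:

        # Why XP/2003-style partitions are misaligned on 4K (Advanced Format)
        # drives, and why a +1 LBA shift happens to fix the default case.
        SECTORS_PER_4K = 4096 // 512         # 8 logical sectors per physical sector

        def aligned(start_lba):
            return start_lba % SECTORS_PER_4K == 0

        for label, lba in [("XP/2003 default, no jumper", 63),
                           ("XP/2003 default, jumper +1", 63 + 1),
                           ("Vista/Win7 default (1 MiB)", 2048)]:
            print(f"{label}: LBA {lba:>4} -> 4K-aligned: {aligned(lba)}")

    Which also matches the advice above not to combine the jumper with the Align tool: shifting a partition that is already on a boundary just moves it off again.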


  12. Nice revival of a nearly four-year-old thread! I love it... and love the wine too, BTW.

    Lately I've been trying to master the $12-$15 range of reds. I drink them frequently. Some recent favorites include Mettler Zin (heavy in alcohol) and Coppola's Zin - Director's Cut. Both are excellent Zins under $15 at my place in Kentucky (cheaper than Ohio).

    Anyone else have a sub-$15 red they like?

    Generally I kind of like the heavier red wines, but that's mostly because I normally only drink red wine with things like steak...

    The Ripasso and Amarone types of red wine are my personal favorites. Here in Denmark they go for anywhere between $10 and $20 for the 3-4 year old stuff.


  13. Team sent us this press release today, though it's nearly barren of detail. But the claim is a 10-second boot time into Windows 7. No details on the machine or whether or not they crippled Win 7 to boot more quickly.

    This led me to wonder, though, what people are seeing in the real world in terms of boot time. Please post your SSD and system details with OS and boot time.

    It's not really clearly defined what the rules are here...

    After the BIOS splash screen, my machine boots to the Windows login in somewhere between 8.5 and 9 seconds.

    This is a Core i7 875K with 4 GB of RAM and a 2nd-gen Intel X25-M disk.

    If the splash screen and ExpressGate from the BIOS need to be counted, I would say my time is probably 20... but then, what about people with a motherboard that checks memory before boot? ;-)

    I've even included a video, recorded from the press of the button to the Windows login.


  14. Here is a snapshot of my drive. Is this the driver I should be using? I don't think so... (two screenshots attached)

    I'm not certain you have AHCI enabled in your BIOS.

    Alternatively, you can try to update the driver for your SATA controller.

    What sort of computer or motherboard are you using?


  15. Hello, I'm planning on building a 48 TB server for movie storage.

    This would most likely be the configuration:

    1 areca ARC-1680IX-24-4G PCI-Express x8 SAS RAID Card

    24 Seagate Constellation ES 2TB 3.5" SATA 3.0Gb/s Enterprise Internal Hard Drive -Bare Drive

    1 SUPERMICRO MBD-X8DTi-F-O Extended ATX Server Motherboard

    2 Intel Xeon E5506 2.13GHz LGA 1366 80W Quad-Core Server Processor

    2 Crucial 6GB (3 x 2GB) 240-Pin DDR3 SDRAM Triple Channel Kit Server Memory Model CT3KIT25672BA1339

    1 SUPERMICRO CSE-846TQ-R900B Black 4U Rackmount Server Case

    This would be configured as RAID 6 under Windows Server 2008.

    What I would like to know is: what are the chances of a RAID like this failing, and what kind of environment should this equipment be placed in?

    Stop thinking of RAID at that data size.

    Use WHS and duplicate the shares that need to be secured onto more than one disk (the data you can't afford to lose to a disk failure).

    If data integrity is more important than that, buy two boxes ;-)
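
    To put some rough numbers behind that advice, here is a back-of-envelope sketch. The unrecoverable-read-error (URE) rates are the commonly quoted spec-sheet figures, not anything measured in this thread:

        # Rough odds of re-reading all surviving disks of a 24 x 2 TB array
        # during a rebuild without hitting an unrecoverable read error (URE).
        DISKS, SIZE_TB = 24, 2
        bits_read = (DISKS - 1) * SIZE_TB * 1e12 * 8   # ~46 TB re-read per rebuild

        for label, ure_per_bit in [("consumer-class spec (1e-14) ", 1e-14),
                                   ("enterprise spec      (1e-15)", 1e-15)]:
            p_clean = (1 - ure_per_bit) ** bits_read
            print(f"{label}: {p_clean:.0%} chance of a URE-free rebuild")

    With RAID 6 a single URE is survivable as long as only one disk is down, but it shows why rebuilds get nerve-wracking at this size, and why per-share duplication (or a second box) is the more relaxed option for a movie archive.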


  16. Hi All.

    We are a medium-sized company running a host of different software solutions:

    Sharepoint

    MSSQL

    Exchange

    Windows File servers

    Webservers

    VMware ESXi 4.1

    Also some Oracle hosted on Linux.

    We are considering insourcing our backup solution again, and are now in the market for a backup solution, including storage, that would allow us to do around 5 TB of backup across different systems in a full backup.

    Currently our backup is done as an outsourced TSM solution, and it works great, but the size of our data is making it a bit costly.

    I have used Backup Exec 10 before, and always felt it was prone to instability and that you could not be sure backups had completed successfully, something we generally do not see in our TSM solution. Are there similarly priced solutions you could recommend?

    Do any of you have recommendations for backup storage? Is disk an OK solution for keeping data "forever" (backups should be kept for 5 years in a grandfather-father-son kind of rotation), or is the safe solution to go with LTO-5 tapes in a library? And is it safe to keep tapes forever in a library, or should they be vaulted somewhere other than the backup location?

    In both the disk and the tape solution, the backup would be performed over a 1 Gbit fiber connection to a remote site, away from the primary site of the live data.

    Regards

    Darking


  17. I am running W7 on all my machines. I am a member of TechNet and wonder if my W7 version might be a tad different? I just popped in my other caddy that has the Corsair SSD in it. I looked at my services and, sure enough, defrag was not disabled. Interesting.....

    Most likely it's just that your machine either doesn't have AHCI enabled, or that you're using some sort of older nForce chipset where Windows decides the devices are SCSI (RAID controllers have a tendency to do this).

    It can be determined pretty quickly by looking at the name of the disk in your Device Manager.

    If it's named something with SCSI in the name, there's your problem.

    On Windows 7 with AHCI enabled, an Intel 2nd-gen drive is labeled 'INTEL SSDSA2M080G2GC' (in my case it's a 2nd-gen 80 GB SSD).


  18. The 50GB and 100GB SSD capacities may not be common in the consumer space, in fact I can't recall any that hit those numbers on the nose, but I suspect they're more common with enterprise-level SSDs. For instance, the Seagate Pulsar comes in capacities up to 200GB. I don't know, but would hazard a guess that it's reasonable that the Pulsar could also come in 50GB and 100GB capacities, which are what's used in that EqualLogic PS6000S.

    As to being sure about the drives used, I tried using Dell's chat, but no one answered and it failed. I tried calling the number, but only got voicemail. Sorry :( I did email a contact, though, to see if I could get a definitive answer or at least support for my enterprise SSD capacity theory above.

    Thanks Brian.

    There was talk when it was first released that it would probably be STEC drives, but the pricing doesn't really support it. I've gotten a quote on the box for around $40,000, which may sound like a lot but really isn't totally crazy for enterprise storage. I know from EMC dealings in the past that the STEC drives cost $5,000-6,000 a piece, so I'm fairly certain it's another vendor :)