Darking

Member
  • Content Count: 241
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by Darking

  1. Darking

    SSDs in equallogic PS6000S boxes

    Thanks, Brian. When it was first released there was talk that it would probably use STEC drives, but the pricing doesn't really support that. I've gotten a quote on the box for around $40,000, which may sound like a lot, but for enterprise storage it isn't totally crazy. I know from EMC dealings in the past that the STEC drives cost $5,000-6,000 apiece, so I'm fairly certain it's another vendor.
  2. Darking

    SAN advice

    Hi Tom. I assume it works in much the same way Equallogic does it, then. There you make storage pools containing some number of arrays; a pool can contain several different RAID levels, and it will automatically distribute the load to the fastest volumes if needed (logs etc.). Admittedly I've taken the old route and just made three pools as tiered storage: logs on RAID 10, data on RAID 50, and slow data on SATA RAID 50. It basically allows you, at any given time, to just add extra spindles behind whatever volumes (LUNs) are hosted in a storage pool, as it automatically restripes the data over the newly added storage (see the toy sketch below). There are some clear limitations in how EQL implements it, though; you don't have the fine-grained setup you have on EMC, where you can say "these 5 disks I want to run RAID 5, and the next 10 disks I want to run RAID 10". For my small environment (we have four PS6000XV and a PS6000E) it doesn't really matter, as long as the performance is delivered, to me at least :-)
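
    Just to illustrate the "add spindles and the pool restripes" point, here is a toy sketch in Python. It only splits a volume in proportion to spindle count; the real array also weighs capacity and load, so treat it as an illustration, not Equallogic's actual placement logic.

        def stripe_shares(member_spindles):
            # Toy model: a volume's data is spread over the pool members in
            # proportion to their spindle counts.
            total = sum(member_spindles.values())
            return {name: round(n / total, 2) for name, n in member_spindles.items()}

        pool = {"PS6000XV-1": 16, "PS6000XV-2": 16}
        print(stripe_shares(pool))      # {'PS6000XV-1': 0.5, 'PS6000XV-2': 0.5}

        pool["PS6000XV-3"] = 16         # add a third member to the pool
        print(stripe_shares(pool))      # each member now holds ~0.33 of every volume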
  3. Darking

    SAN advice

    Hi. I was kind of thinking the same thing about the budget. It really depends on your environment and what you can do out of the box when you look into prices. I've found a link to a 3PAR price list: http://storagemojo.com/storagemojos-pricin...par-price-list/ And it seems to me that, as with EMC for example, you have to pay extra for pretty much every function beyond the basic disk system. That's why I ended up selecting Equallogic; it's all included. (I'm starting to sound like a salesman here... :S ) Anyhow, as you correctly calculated, let's not get blinded by promises of 250,000 IOPS when a) it's not needed in your environment, even with a bit of growth, and b) it's normally not very true, or insanely expensive. With the 4.21 firmware, Equallogic supports up to sixteen PS6500X boxes in one disk group... that's 768 spindles ;-) But again, money can buy you pretty much anything. There are three reasons why I won't go back to EMC: a) all software functions are included, b) the SAN HQ (I/O analyzer) software rocks, and c) it performs just as well as a Clariion, if not better. Anyhow, good luck with your findings, and do look into SSD for your application; if it's Oracle or something like that, there can be real advantages to moving tempdbs, indexes and the like over to that.
  4. Darking

    SAN advice

    I've tested a bit using IOmeter. With a 50/50 read/write mix, 50% random, and a 64 KiB request size, I get 2,135 IOPS and 131 MB/s out of a 16-disk PS6000XV array (2 hot spares) running RAID 10. I've also tested a 3x 16-disk PS6000XV RAID 50 volume (6 hot spares), and there I got 4,387 IOPS and 272 MB/s (a quick sanity check of those numbers is below). The RAID 50 is doing other things at the same time, though, like running our Exchange environment and some SQL databases, so it might be a bit busier than the RAID 10 volume. From a functionality point of view, I'm very satisfied with our Equallogic boxes. There are some drawbacks, though: no 10GbE support at the moment (it will arrive late Q4, with 2x 10GbE ports per controller), and it's not super cheap. They do, however, deliver some nice PS6500X boxes with 48x 600GB 10k SAS disks. The biggest advantage with Equallogic is that they scale almost linearly, because each box you set up in a pool has its own controllers and logic, unlike e.g. EMC CX-series SANs. I'm sure Dell/Equallogic can and will help you set up a test environment for your load. Regards, Darking
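
    As a quick sanity check of those figures: throughput should roughly equal IOPS times request size. A few lines of Python (treating MB as MiB, which is an approximation):

        KIB = 1024
        MIB = 1024 * 1024

        def mb_per_s(iops, request_kib):
            # Throughput implied by an IOPS figure at a given request size.
            return iops * request_kib * KIB / MIB

        print(round(mb_per_s(2135, 64), 1))   # ~133.4, close to the measured 131 MB/s
        print(round(mb_per_s(4387, 64), 1))   # ~274.2, close to the measured 272 MB/s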
  5. Darking

    Hardened RAID

    Zipping and uploading a 4-6TB database... Priceless!
  6. I must admit I'm not an expert in virtualisation (yet!! we are starting a 50-server migration into VMware soon), but I still believe the safest bet is to place logs and data on two physically separate sets of disks; please explain why that assumption isn't valid? In my installation we are running LCR on the Exchange installation, but you have to understand it's a replication of the running data: you cannot place it on the same (physical) volumes. If you have a volume corruption, or two disks break at the same time (although it's unlikely), you will have to restore from the last good backup. The whole point of having log files is the ability to restore a destroyed database up to, or close to, the last write to the log. It is simply a matter of how important your users' mail is. If your business can handle a whole day's loss of data, then there basically isn't even a reason to have logs (other than the fact that it's a built-in function). CCR is "better" in that it allows you to have a second mail host, but it's double the Windows/Exchange licenses, though possibly still within budget. Again, it should not be placed on the same physical disks as the "primary" Exchange server's data. But Mitch808, I might be misunderstanding something entirely, so please enlighten me. LUNs are basically just data volumes the host can see; normally a LUN is one or more RAID groups presented to the host as one volume. Newer SAN equipment also allows for things like thin provisioning, which is basically telling the host it has 500 GB while only reserving, say, 200 GB on the SAN, and then expanding it as the space is needed (a toy sketch of the idea is below). I must admit I don't know much about HP MSA equipment; I've primarily worked with EMC Clariion equipment (Fibre Channel), and we are just switching to an iSCSI SAN platform (Dell Equallogic) for our new VMware platform.
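
    To make the thin provisioning idea concrete, here is a toy Python sketch with made-up numbers and a made-up growth granularity; it is only meant to show the principle, not any vendor's implementation.

        class ThinLun:
            # Toy model of a thin-provisioned LUN: the host sees the advertised
            # size, but the array only reserves backing space as data is written.
            def __init__(self, advertised_gb, initial_reserve_gb):
                self.advertised_gb = advertised_gb      # what the host sees
                self.allocated_gb = initial_reserve_gb  # what the array has reserved
                self.used_gb = 0                        # what has actually been written

            def write(self, gb):
                if self.used_gb + gb > self.advertised_gb:
                    raise ValueError("host sees the volume as full")
                self.used_gb += gb
                # Grow the reservation on demand, in 50 GB steps (made-up granularity).
                while self.allocated_gb < self.used_gb:
                    self.allocated_gb += 50

        lun = ThinLun(advertised_gb=500, initial_reserve_gb=200)
        lun.write(180)
        print(lun.allocated_gb, lun.used_gb)   # 200 180 - still within the initial reserve
        lun.write(120)
        print(lun.allocated_gb, lun.used_gb)   # 300 300 - the SAN expanded the reservation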
  7. Hi. A lot has been said about this already, but I just thought I'd give you my word of advice. With 12 disks, I feel the optimal storage layout you can make is the following: 1. an 8-disk RAID 5 for both file shares and the Exchange databases; 2. a 4-disk RAID 10 for the Exchange logs. It's not only best practice to separate logs and databases on Exchange; not doing so puts your data at severe risk. Of course you don't need to make one huge LUN out of all the space on the RAID 10; you can use whatever you feel you need and use the rest as extra data space for files and such. But with all database products, never mix DBs and logs on the same physical drives. I wouldn't worry too much about performance on the Exchange side; I run a 1,400-user installation on 24 drives, although it's RAID 50 for data and RAID 10 for logs. The second best advice I can give you: make sure to align your partitions to 64 KB, for both the log and the DB volumes. In my experience it gives up to 25%+ better performance, since the Exchange database writes 8 KB pages (a rough illustration of why is below). Regards
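
    A rough illustration of why the 64 KB alignment matters, assuming 8 KiB database pages on a 64 KiB RAID stripe and the old default partition offset of 31.5 KiB (63 sectors). The exact gain depends on the workload, so the 25% figure above is anecdotal; this only shows the mechanism.

        KIB = 1024

        def pages_crossing_stripes(offset_kib, page_kib=8, stripe_kib=64, pages=10_000):
            # Count sequential pages that straddle a stripe boundary; each of
            # those turns one disk I/O into two.
            crossings = 0
            for i in range(pages):
                start = int(offset_kib * KIB) + i * page_kib * KIB
                end = start + page_kib * KIB - 1
                if start // (stripe_kib * KIB) != end // (stripe_kib * KIB):
                    crossings += 1
            return crossings

        print(pages_crossing_stripes(31.5))   # 1250, i.e. 12.5% of pages get split
        print(pages_crossing_stripes(64))     # 0 once the partition is 64 KiB aligned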
  8. Darking

    fusion-io

    Hi Frank. A shame that the result was as expected. An old player on the market has released a similar product, though... you might want to check whether it provides more reasonable performance: http://www.texmemsys.com/files/f000257.pdf (the RamSan-20). Personally I'm afraid it won't prove much different from the Fusion-io. Darking
  9. Darking

    fusion-io

    A lot of nice information in that thread. I would take the review on ssdworld.ch with a huge grain of salt. Although Tom's Hardware is a bit iffy in some of their tests, the difference in write speeds between the different low-level formatted sizes is what is really interesting: the more room for garbage collection, the faster the drive is.
  10. Darking

    fusion-io

    Price? I'd suck ### for 40k IOPS! On a serious note, I simply cannot believe the spec and benchmark data. There HAS to be a downside, such as degraded performance with use or with different data patterns. The IOmeter tests I just received are showing me 32k IOPS on my OLTP workload. I need to get a PO to grab one of these things. Frank, I don't think I even want to know what you're running with that high an IOPS throughput. All I know is that IBM has started implementing them in their SVC system, and I don't believe it's the norm for them to just pick up vaporware... but I'm doubtful about 40k IOPS too. The sucking #### I'd leave up to you.
  11. Darking

    fusion-io

    Agreed, but for 40K database IOPS - per "disk"... And if you can stick 4 Fusion disks in a server... not 4,000 IOPS, not 14,000 IOPS, but 40,000... per disk. Forty MotherF'ing thousand. I have 'got' to be misreading something. They should get Samuel Jackson to do their advertisements. I was thinking the same... our Oracle installation would really love a RAID 1 of two 160 GB cards. I'm not sure what the price is for the 160 GB model, but let's guesstimate it's $10,000; that's still cheap for that kind of performance. By the way, we are running on a Dell R900 quad-socket server... with 7 PCI Express slots... hmmm, 7 cards?
  13. Darking

    RAM based SANs

    Well, marketing aside, you can't really compare enterprise SSD to what is on the consumer market today; they are two very different products. Nonetheless, I've gotten a quote for a flash disk for an EMC CX4 SAN... and the price is pretty damn scary: a bit over $24,500 for one 146 GB flash disk. That's the same price I can buy a whole drawer of fifteen 146 GB 15K Fibre Channel disks for, including three-year onsite 24/7 support and installation service.
  14. Darking

    3ware 9550 v. Adaptec 51245

    OK, interesting. Are you sure you should even be looking at a SATA solution for that (several TB across 12 disks sounds like SATA to me), or is there a small budget behind it? The reason I'm asking is that it sounds like it would be pretty I/O intensive with so many users/connections. If you're unlucky, seeks will hurt your performance far more than any partition offset or RAID stripe size ever will. Since you can't be any more specific about the application, it's hard to give specific answers, but normally applications don't swing that much in how they pull data (128 KB to a couple of MB is a huge span). I'm sure it would benefit you a lot to actually determine where the soft spot in the config is; raw transfer speed is probably the easiest thing to get out of a system, I/O is harder. Hope that made sense at all ;-)
  15. Darking

    3ware 9550 v. Adaptec 51245

    Hi mate. I'm trying to figure out why you would even want a default stripe size of 1 MB. What kind of application benefits from such a huge stripe size? Some media-oriented thingamajig?
  16. Darking

    Is SCA interface dead?

    I'm fairly sure the market has moved to 2.5-inch SAS for 1-2U servers, and 3.5-inch SAS disks for larger servers. I doubt the manufacturers will continue to make new drives for older hardware; it doesn't make sense with a 3-4 year replacement cycle. In the storage market it's SATA II, SAS or Fibre Channel these days, connected via either FC or iSCSI on the guests. My experience is mainly with EMC SAN storage, CX300-700 and CX3-80.
  17. Darking

    Real gigabit speeds

    Remember that when running jumbo frames, you can have no devices on the segment connecting at anything below the configured speed and MTU. Meaning that if you run 1 Gbit/s with a 9000-byte MTU, you cannot have a router attached to the same network that runs at a slower speed (VLANs on the other ports are fine, as far as I know). If you do, the network will fall back to 1 Gbit/s with a 1500-byte MTU, where you will see between 250-350 Mbit/s of throughput depending on the OS and TCP stack (rough per-frame overhead numbers below).
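
    For a rough sense of the per-frame overhead difference, assuming 40 bytes of TCP/IP headers and 38 bytes of Ethernet framing (preamble, header, FCS, interframe gap) per frame. The 250-350 Mbit/s figure above is mostly about per-packet CPU and TCP stack cost, so this only shows part of the picture.

        def payload_efficiency(mtu, ip_tcp_headers=40, ethernet_overhead=38):
            # Fraction of wire bandwidth left for TCP payload at a given MTU.
            payload = mtu - ip_tcp_headers
            on_wire = mtu + ethernet_overhead
            return payload / on_wire

        for mtu in (1500, 9000):
            print(mtu, f"{payload_efficiency(mtu):.1%}")   # 1500 -> ~94.9%, 9000 -> ~99.1%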
  18. I'm fairly sure the interface bandwidth won't be an issue with splitters/expanders; you'll get around 330-350 MB/s through each channel, and that should be enough for 2-3 disks per channel (a quick budget is sketched below). If you're sticking with SATA solutions I would advise you to look at 3ware too: http://www.3ware.com/products/serial_ata2-9650.asp I have fairly good experience with their controllers and their software, and the newer versions should perform just as well as the Areca controllers. -- I must admit my experience with SAS is nonexistent, but I know a bit about SATA devices, and SANs for that matter. When building a database server, the number of spindles (I/O) is normally more important than the size of the disks; the amount of cache on the controllers, and more specifically how it writes to the arrays, also matters. I must admit most larger database installations I've seen have been built on SANs, mostly for the flexibility Fibre Channel solutions give... of course you won't get that on a limited budget either. Darking
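
    A quick disks-per-channel budget, assuming roughly 100 MB/s sustained per SATA disk; that per-disk figure is a guess for drives of this generation, and random I/O workloads would comfortably allow more disks per channel.

        def disks_per_channel(channel_mb_s, per_disk_mb_s=100):
            # How many disks a shared channel can feed at full sequential speed.
            return channel_mb_s // per_disk_mb_s

        for channel_mb_s in (330, 350):
            print(channel_mb_s, "MB/s ->", disks_per_channel(channel_mb_s), "disks")   # 3 and 3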
  19. Darking

    WD740 gets 9Mb thorouput. Any Ideas?

    I had the exact same problem on my new XPS 700 (the Raptor I installed myself). Disable command queuing; that'll fix you up. On my XPS it can be done in Device Manager, under the relevant SATA RAID device. Good luck! Darking
  20. Darking

    Finally, dell to use AMD

    What do they buy? HPaq or IBM. Dell doesn't know how to build a decent server IMO, let alone provide decent support on one. Maybe it's just the American support that sucks? Otherwise your experience differs very much from mine. My experience with Dell Business (Gold) support is that it's excellent, fast and very service-minded. And believe me, having been through Compaq/HP, IBM and Dell servers, I can honestly say I haven't seen any difference in build quality at all. Regarding support tools, HP is still in the lead with the Insight Manager suite, followed by Dell's OpenManage... and IBM is far, far down at the bottom with Director, the worst software _EVER_. We have a medium-sized installation with 25 HP ProLiant servers and around 40 Dell servers, mainly 2850s and a lot of 1855s, hooked up to a Dell/EMC CX500 with 10 TB of storage. Darking
  21. Look for some of my older posts on the matter. Very impressive. Do you have any pictures, and/or can you describe the case/setup/layout?
  22. You will not have any trouble at all with a 600-watt power supply, as a user previously stated. I'm running a file server on an Antec 650-watt power supply with the following configuration: 1x 2.8 GHz Xeon CPU, 2 GB memory, 16x 250 GB WD disks, 2x 72 GB 15K SCSI disks, and 6x 4-bay disk cages, and I'm nowhere near drawing more than 500 watts (I'm on a UPS, so I can see the actual load).
  23. Darking

    Gbit connects at 100Mb

    Of all the Gbit network cards I've seen (Broadcom NetXtreme and Intel cards), I've never seen one able to use anything but auto-negotiation for Gbit speeds. I simply think it's in the standard that it's not supposed to be hard-set at any time. Darking
  24. Darking

    Dual Core Intel chip on eBay

    I'm not German, but I know enough to get around. What he is saying is that it's in super condition (on the outside), but he has no idea what type of processor it is. I'll try to translate the rest for you: here he is claiming it's a Pentium IV processor for sale. The processor was found when liquidating a warehouse, and he has no board for it, so it can't be tested. Here he is noting that he is selling the processor as defective; it could be a treasure, or schnapps... (I'm guessing the first). Already gave you that one. It has no pins on the back of the die, and it is produced in the typical Socket 775 layout, but it has two dies on top. Here he is speculating that it's some sort of server CPU, and that it runs at 2x 3000 GHz. ZZ----->>> I'm speculating it's some sort of custom IBM chip for a mainframe or something, but I'm not really that familiar with Socket 775 and what it has been used for. Darking
  25. Another way to fix the problems is to build a fully automated system, like we have in our metro here in Copenhagen. Sure, there are faults at times, but generally the trains run smoothly every one and a half minutes, day and night. We also have the plexiglass wall, with automatic doors that open when the metro train stops. I haven't seen it fail yet, but if it did, there are one or two metro personnel on each station who can take care of any problems that arise. It's an expensive system, and I can imagine it's not something you just implement in a city like New York overnight, but it's the way to go for smoother public transportation. Darking