Stoyan Varlyakov

Member
  • Content count

    48
Community Reputation

1 Neutral

About Stoyan Varlyakov

  • Rank
    Member
  • Birthday 07/27/85

Contact Methods

  • Skype
    sivaplus
  1. Hi, all the benchmark pics report the Hitachi drive as a 7K400. Besides that, 50 GB on 10 drives is really short-stroking them; maybe it makes more sense to use larger test sizes. Plus, 10 drives will set you back a good $3,000, and using RAID10 is a waste. SMBs will mainly be interested in RAID5/6 (SHR1/SHR2) performance, so if you have some information on rebuild times with a 10-drive RAID6, or even just a hint of performance data, that would be really useful. FYI: I did test that and found it extremely slow - rebuilding a 4x1 TB RAID5 array took a good 2 days, whereas the same test on an LSI controller took 1.5 hours. My test unit was really small, so I will be highly interested to see how the big guys cope with the load.
  2. Another thing: switch to IOMeter. Intel RST caches heavily into RAM and thus shows miraculous results. If this is the result using AHCI, then I wonder why that is the case. I have 2x Intel 520 in RAID1 on a 9271 and the picture is altogether different.
  3. In a word: caching. Such a test writes data first, which then gets read back, so you are running 100% in cached mode, and that is why it is blazing fast. For accurate testing it is commonly accepted to use a data size in the area of (all caches combined) x 2. If I have a client/server model where the client has 1 GB RAM, the server has 2 GB RAM plus a 1 GB controller cache plus 12 drives with 64 MB each... do the math, it's a lot (see the sketch after this list). If you have SSDs, use a multiplier of 3, or else the SSD will flush a lot of the operations to flash and then you will be running cached again...
  4. Hi, what you are looking for is CX4 cables. They should be the same for SAS expanders, but if you acquire CX4 InfiniBand-certified cables, you will be good to go.
  5. OK, so here is where the confusion came from. Once installed, select the host, then go to the "Configuration" tab and look at the ESX Server licence type. A freshly generated free ESXi key shows up as this: Product: VMware vSphere 5 Hypervisor, licensed for 1 physical CPU (unlimited cores per CPU).
  6. With RAID10, capacity is a multiple of the smallest drive in the group. If you have 3x 3 TB drives and 1x 1 TB drive, your RAID10 array will have a total capacity of (4 x 1 TB) / 2 = 2 TB. You have lost capacity, unless you destroy the RAID and build 2 separate RAID1 groups (see the sketch after this list).
  7. SAN vs NAS is servers vs clients. For client-centric use and small offices, a NAS is great. For enterprise or multi-server environments, a SAN is a must. SAN with an OS = Nexenta, Open-E, etc., but those are customized OSes; you cannot have a SAN-specialized OS co-exist with, say, a web server...
  8. To answer you directly: 1. go for a $500 budget, or 2. go for SW RAID. A proper HW RAID card will need a BBU for RAID5, or else the cache is limited to read-only operations. Your money is better spent on a UPS with decent software and SW RAID, especially if you are dealing with media files; large sequential reads/writes are handled not-that-badly by SW RAID. Another point: your drives are Greens. They suffer from the TLER problem, which means they will be rejected by a decent HW RAID controller every now and then. So if you want to replace the drives and buy a controller and a BBU, then $500 will be just the start of it... I am not sure the money is justified for that performance requirement.
  9. Modern SSDs have a wear-levelling algorithm to spread writes across blocks and avoid instant failures... Just get an SF-based (SandForce) SSD and you are all set. I would suggest the Intel 330/520, as they have no onboard RAM cache, which can be problematic in case of a power failure.
  10. This can be done, sure. But then you need clustering in order to access the *same* portions of the hard drive (volumes, blocks, etc.), or else you will have data corruption.
  11. Great info! About the access patterns: are those IOMeter tests? If yes, how exactly do you perform the 4 thread / 4 QD tests - 4 workers with 4 outstanding IOs each? About the 50 GB LBA: does this mean you configured IOMeter to use roughly 100,000,000 sectors of 512 B (see the sketch after this list)? If yes, did you time-limit the test? I hope the answer to the first question is not no. Cheers, SV
  12. Hi Brian, would it not be better to change the access patterns - 8K sequential 100% read, 100% write; 128K sequential 100% read, 100% write; 4K random 100% read, 100% write - to 50% read / 50% write mixes? Also, I wanted to ask whether you got the chance to test larger jumbo frames with MTU > 1500. I have tried that with MTU 4000 and got miserable stability over iSCSI.
  13. Hi Clicker, yes, that is possible. But when you look at the setup, you have 2 separate connections: 1. Openfiler to OS, which is a SAN-type connection; 2. OS to clients, which is a NAS-type connection. So yes, of course it is possible. But on a SAN, sometimes referred to as a "storage network", you need different NIC settings: larger MTU (9000 is the most compatible these days; see the sketch after this list), TOE options to a certain extent, link aggregation, etc. In a NAS environment, because you usually have a large mix of clients, it is best not to fiddle with the MTU size and to disable TOE altogether, because from my professional experience it is the number 1 cause of surprisingly low network performance. Therefore we still draw a fat red line between the two. Just my 2 cents.
  14. This is indeed possible, but I would only want to do that on a cluster file system OR with a cluster-aware application, like, say, the Microsoft clustering add-on. There you add the same LUN/disk to many servers, but only 1 has actual access to the LUN at a time. The most common scenario for FS corruption in the given example is when the heartbeat NIC goes bad/away and both nodes become active: instant FS lock-up/corruption. On large-scale cluster file systems you access the same block device from multiple nodes, but the write-locking algorithm is a very comprehensive one...
  15. You are absolutely right, I stand corrected: they changed the limits in free ESXi from 8 GB per VM to 32 GB per host, and the initial CPU limitation is now gone. Maybe I fooled myself with something else, I don't know. I am absolutely positive there is SSH, which you can use to log in to ESXi, because I use that to perform backups (see the sketch after this list). When you enable it you get a big fat "SSH is activated" warning on the home page of the vSphere client. I have not tested the Remote CLI, so maybe that is the "disabled feature". HTH, SV
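
The sketches below illustrate a few of the posts above; they are all Python, and anything not stated in the posts (names, sizes, hosts) is an assumption for illustration. First, the cache-sizing rule of thumb from post 3, using the hypothetical setup described there (client with 1 GB RAM, server with 2 GB RAM, 1 GB controller cache, 12 drives with 64 MB cache each):

```python
# Rough working-set calculation for the rule of thumb in post 3:
# test data set >= (sum of all caches in the IO path) * 2,
# or * 3 when SSDs are involved, so cache hits do not dominate the result.

GB = 1024 ** 3
MB = 1024 ** 2

caches = {
    "client RAM":       1 * GB,
    "server RAM":       2 * GB,
    "controller cache": 1 * GB,
    "drive caches":     12 * 64 * MB,  # 12 drives x 64 MB each
}

total_cache = sum(caches.values())

print(f"total cache in the path : {total_cache / GB:.2f} GB")
print(f"suggested HDD test size : {total_cache * 2 / GB:.2f} GB")
print(f"suggested SSD test size : {total_cache * 3 / GB:.2f} GB")
```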
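
Next, the RAID10 capacity arithmetic from post 6, a minimal sketch assuming the usual mirror-then-stripe layout where usable capacity = number of drives x smallest drive / 2:

```python
# RAID10 usable capacity: every member contributes only as much as the
# smallest drive in the group, and half of the total goes to mirroring.
def raid10_capacity_tb(drive_sizes_tb):
    return len(drive_sizes_tb) * min(drive_sizes_tb) / 2

# Example from post 6: 3x 3 TB drives plus 1x 1 TB drive.
print(raid10_capacity_tb([3, 3, 3, 1]))  # -> 2.0 TB usable

# Two separate RAID1 pairs instead (3+3 and 3+1) keep more space usable.
print(min(3, 3) + min(3, 1))             # -> 4 TB usable
```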
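
For the 50 GB LBA question in post 11, a small sector-count calculation, assuming the Iometer "Maximum Disk Size" field is interpreted as 512-byte sectors (which is how that field is commonly read):

```python
# Convert a test span in bytes into a 512-byte sector count, e.g. for
# Iometer's "Maximum Disk Size" field.
SECTOR = 512

def sectors_for(size_bytes):
    return size_bytes // SECTOR

print(sectors_for(50 * 10**9))  # 50 GB (decimal) -> 97,656,250 sectors
print(sectors_for(50 * 2**30))  # 50 GiB (binary) -> 104,857,600 sectors
```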
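
For the jumbo-frame discussion in posts 12 and 13, a minimal check, assuming a Linux host where each interface's MTU is exposed under /sys/class/net; the 9000-byte target comes from post 13, everything else is generic:

```python
# Report the MTU of every network interface on a Linux host by reading sysfs.
# A sanity check before relying on jumbo frames end-to-end: every NIC,
# switch port and target in the storage path has to agree on the MTU.
from pathlib import Path

TARGET_MTU = 9000  # post 13 suggests 9000 as the most compatible jumbo size

for iface in sorted(Path("/sys/class/net").iterdir()):
    mtu = int((iface / "mtu").read_text())
    note = "jumbo" if mtu >= TARGET_MTU else "standard"
    print(f"{iface.name:12s} mtu={mtu:5d} ({note})")
```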
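
Finally, for the SSH-based backups mentioned in post 15, a rough sketch only: the host name and key-based root login are assumptions, and listing the registered VMs is just to show that the SSH channel is usable for scripting before any files are copied off.

```python
# Minimal sketch of using the SSH access mentioned in post 15: ask an ESXi
# host for its registered VMs before pulling backups. The host name and
# key-based root login are assumptions for this example.
import subprocess

ESXI_HOST = "root@esxi-host.example.local"  # hypothetical host

# vim-cmd ships with the ESXi shell; "vmsvc/getallvms" lists registered VMs.
result = subprocess.run(
    ["ssh", ESXI_HOST, "vim-cmd", "vmsvc/getallvms"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```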