About outRIAAge

  1. TomsHardware finally put up a review of the RocketHybrid, but they fundamentally misunderstand what it's about! They test it only in safe mode, which reduces it to a pure read cache. Anand seemingly hasn't heard about it yet, but he at least understands the concept. Here's his take on the Intel Z68 chipset, which will allow motherboards to offer SSD caching: "Intel's SRT functions like an actual cache. Rather than caching individual files, Intel focuses on frequently accessed LBAs (logical block addresses). Read a block enough times or write to it enough times and those accesses will get pulled into the SSD cache until it's full. When full, the least recently used data gets evicted making room for new data." That is exactly what the RocketHybrid should do, without you having to buy a new motherboard and Intel processor. I'm currently combining my 6Gb/s Crucial C300 with a 6Gb/s WD Black drive, will do a full Win7 install, will optimize various folders, and I hope to finally get some meaningful numbers. Would somebody else please chip in?
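The LRU behavior Anand describes — pull frequently accessed LBAs into the SSD, evict the least recently used when the cache fills — can be sketched as a toy block cache. This is a hypothetical illustration of the policy, not Intel's or HighPoint's actual code:

```python
from collections import OrderedDict

class LRUBlockCache:
    """Toy model of an LBA-level cache with LRU eviction.

    Blocks are keyed by logical block address; when the cache is
    full, the least recently used block is evicted to make room.
    """

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()  # lba -> data, oldest first

    def access(self, lba, read_from_disk):
        if lba in self.blocks:
            self.blocks.move_to_end(lba)     # mark as recently used
            return self.blocks[lba]          # cache hit: SSD speed
        data = read_from_disk(lba)           # cache miss: HDD speed
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        return data
```

Accessing blocks 1, 2, 1, 3 on a two-block cache evicts block 2, because the re-read of block 1 made it the more recently used of the pair.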
  2. (I'll trim these posts down when I and everybody else figure the RocketHybrid out.) Finally the Pause and Stop buttons went away (1.5-2 hours later) and Analyze and Advanced lit up. Clicked on Advanced. The RHS pane showed columns Auto, Cache, Program (actually folders), Size, and Status. I clicked the Auto column check-box for the 467GB (indicated) folder I wanted optimized. There is also a check-box next to the Auto column title, so I clicked that too, then Submit. An hour went by with a running Optimize icon showing. I right-clicked the Marvell icon in the system tray and it put up a window with a progress bar, showing "0%". But then it displayed "No schedule running!" and the main program also lost the running Optimize icon.

I spent several hours repeatedly copying 20GB of files from my auto-optimized hybrid folder to my 6Gb/s system SSD, figuring it would speed up over time. It didn't. When I ran a control (copied 20GB from my system HDD to my system SSD), it was faster. Back on the hybrid, I gave up on auto-optimizing and explicitly cached the 20GB folder (which took a while). Copying from an explicitly-cached hybrid folder to my system SSD should be the same as copying from one SSD to another, right? Look for yourselves:

Source   Destination   Size   Time
HDD      SSD           20GB   3:27
SSD      HDD           20GB   4:34
HDD      SSD           20GB   3:27
Hybrid   SSD           20GB   6:07
Hybrid   SSD           20GB   7:06
Hybrid   SSD           20GB   7:07

So an explicitly-optimized hybrid folder is twice as slow as an HDD? Something is obviously wrong, so I unoptimized the 20GB folder and tried again. According to the dreadful documentation, that should turn it back into a 3Gb/s HDD. Here are the results:

Source   Destination   Size   Time
HDD      Hybrid        20GB   4:50
Hybrid   SSD           20GB   5:30
SSD      Hybrid        20GB   4:54

The interface has undocumented Analyze and Submit buttons. I tried pressing one or both of them between identical tests. The only way to tell anything is happening is to right-click the system tray icon and bring up the Accelerate Status window.
It shows 0% complete until it suddenly doesn't. Those buttons can make a difference in test times: they can raise or lower test times by about a minute, but not in any predictable way.

Just to see what would happen, I also ran a set of standard hard drive tests (PerformanceTest). On the hybrid drive, the test wrote and read on the root folder, which cannot be configured (only sub-folders can). The results are simply puzzling, given my above results, and indicate the hybrid, unoptimized, would be 3.5 times faster than my current HDD:

Drive    Written   Read     MB/s    Avg_WAR
SSD      2180MB    2173MB   72      0.08ms
HDD      103MB     105MB    3.46    0.28ms
Hybrid   379MB     379MB    12.65   0.84ms

Now I'm going to wait and see what other people have to say.
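For anyone wanting to reproduce the copy timings above, a minimal wall-clock wrapper is enough; the drive letters and paths below are placeholders for the setup described in the post:

```python
import shutil
import time

def time_copy(src, dst):
    """Copy a folder tree and return elapsed wall-clock seconds.

    Crude, but matches the test above: one timed bulk copy per
    run. Delete dst between runs so each copy starts fresh.
    """
    start = time.perf_counter()
    shutil.copytree(src, dst)  # dst must not already exist
    return time.perf_counter() - start

# Example (placeholder paths for the hybrid volume and system SSD):
# elapsed = time_copy(r"R:\TestData", r"C:\Bench\TestData")
# print("%d:%04.1f" % (elapsed // 60, elapsed % 60))
```

One caveat on methodology: a bulk sequential copy mostly measures streaming throughput, so it may understate whatever benefit a block-level cache gives to small random accesses.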
  3. Two hours later, and mystery is still the order of the day. I followed the instructions to tell it which folders to optimize. I have the HyperDuo "drive" configured as drive R:, but "Valid Volumes" showed only D: (my 2TB WD Black with my real data on it). Following instructions, I clicked "Advanced", but got scared when the folder-picker showed only folders on drive D, so I rebooted. Now it's showing R as the only "valid volume", but "Advanced" is greyed out, and the two lit buttons are Pause and Stop. After waiting 40 minutes, I tried clicking Pause. It indeed paused (whatever it is doing) and offered "Resume", so I resumed and went back to waiting. I'll wait all day and all of the night if necessary, but to be on the safe side I'll refresh the backup copy of my D drive on my external in the meantime (USB 3.0, purraise the Lord and pass the application!)
  4. (Posted in real time.) I now (04/30 11am PST) have a RocketHybrid in my box, combining (for test purposes only) a 1TB Hitachi 3Gb/s drive with a 128GB Crucial 6Gb/s SSD. The install documentation is (of course) dreadful, and spends 2/3 of its time explaining how to set up the BIOS, but since they don't ship ANY printed documentation, I skipped all that, booted, and RTFM. The software (seems to) duplicate every function available in the BIOS.

I got it to start creating a HyperDuo drive, "optimized for capacity" - the other mode is "safe" (keep a duplicate of the SSD data on the HDD, which is of no interest here). THERE IS NO INDICATION THAT ANYTHING IS HAPPENING. The docs say it can take 30 min, so I checked back and clicked on the HyperDuo. It claimed to be functional, and was showing the correct (1050.6 GB) capacity. The docs make no mention of formatting. I checked with WinExplorer, but there was no sign of the drive. I quick-formatted it NTFS, and am now loading 450 GB of Apple Lossless (m4a) files onto it.

I figure the real thing of interest here is the auto-optimization function for frequently-accessed files, so standard HDD tests with temporary, fake data are pointless. I discovered that if you don't configure this thing yourself, it will optimize nothing. You have to select folders and check:

Cache - if you want those folders put on the SSD (of no interest to me)
Auto - if you want those folders optimized automatically (duh!)

If you check neither, you get strictly HDD performance. (A simply awful choice for a default.) When it's done loading, I'll tell it to auto-optimize all 450 GB. I will then load 60GB of files into a MediaMonkey library, delete the library, and repeat, measuring the time taken for each iteration. After 10 repeats, I'll delete the library and load all 450 GB into it once, then start at the beginning again, measuring each time. Nowhere in the documentation is the TRIM command mentioned, so I expect mixed results over time.
Right now I have an hour to wait, so I'm going out for lunch.
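The repeat-and-measure plan in the post above can be sketched as a simple warm-up loop: if the auto-optimization is promoting hot files to the SSD, later iterations of the same read workload should get faster, and a flat curve means nothing is being cached. The folder path and iteration count are placeholders:

```python
import time
from pathlib import Path

def read_all(folder):
    """Read every file under folder once; returns total bytes read."""
    total = 0
    for p in Path(folder).rglob("*"):
        if p.is_file():
            total += len(p.read_bytes())
    return total

def warmup_curve(folder, iterations=10):
    """Time repeated full reads of the same data.

    A caching layer that promotes frequently accessed data
    should make later iterations faster than the first.
    """
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        read_all(folder)
        times.append(time.perf_counter() - start)
    return times
```

Note that the OS file cache will also warm up across iterations, so a real test of the controller needs data sets larger than RAM (the 60GB library above qualifies) or a cache flush between runs.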
  5. Thanks: I've been away from forums in general for a while. My day job is writing about this stuff, so this is a bit of a busman's holiday, but HighPoint's idea has me so intrigued that I've been trying to implement it in my head since I heard about it.

I'm stymied because I don't actually know enough about low-level NTFS communication: Win7 knows that a file is fragmented, but does the HDD even know that those fragments belong to one file? I doubt it, which would make native, transparent defrag impossible. (I apologize for being Win-specific, but that's my cross to bear :-)

I'm trying to step back and see if there's an approach that could work, and I can't see one using current file systems. What HighPoint is trying to do is exactly the same idea as transparently providing "memory" that is a combination of an on-chip cache and slower, actual memory. But the equivalence breaks down because on-chip cache and "actual" memory are both RAM, and they don't have different maintenance issues.

I'm starting to think that my early suspicion that this might not work properly until Win8 may not be far off. Say you have a file system that moves the file-system abstraction layer all the way down to the storage device. The OS tells the storage device to store a file: here it is, it's this big, give it back to me when I want it. (And why not? It would allow for innovations like this one and gawdonlyknows what else: there are lots of bright people in the storage field, even though as of just today most of them now work for WD.)

THAT would make proper implementation of the RocketHybrid trivial, but in the meantime (thinking while writing again) HighPoint could write a "maintenance" program that ran on the OS, communicated with and took over the drive, knew where everything was and where everything should be, and it could TRIM and defrag to its heart's content... I'm about ready to lay money that they're shipping that program with the controller.
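The fragmentation question above has a concrete answer under today's block interfaces: no. The filesystem keeps the file-to-extent mapping; the drive only ever sees anonymous block-range requests with no file identity attached. A toy illustration, with made-up extent numbers:

```python
# Toy illustration: the filesystem knows which extents make up a
# file; the block device only sees anonymous (lba, count) reads.

# Hypothetical extent table, as a filesystem might keep it
# (a fragmented file stored in three separate runs of blocks):
file_extents = {
    "song.m4a": [(1000, 64), (5000, 64), (9000, 32)],
}

def device_requests(filename):
    """What actually crosses the SATA link when the file is read:
    a stream of block-range reads, with no filename anywhere."""
    return [("READ", lba, count) for lba, count in file_extents[filename]]
```

Since nothing in the request stream ties the three reads together, the device cannot know they belong to one file, which is exactly why defrag has to live above the block layer.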
  6. Thanks for your post: those are excellent points that I hadn't thought of (but then it's only 30 minutes since I heard about the concept). As you write, the needs of the two kinds of drives are SO very different.

As to how Windows treats it? Well, Windows doesn't know about it; it thinks it's just looking at a hard drive. I wonder what rotational speed HighPoint tells Windows it's running at? A reported rotational speed of 0 would enable TRIM and disable defrag (in Win7), and anything other than 0 would do the opposite. But Windows has no idea what sort of disk everything is currently on: proper disk maintenance would HAVE to be done natively by HighPoint ... hang on ... thinking while I'm writing ... so HighPoint obviously tells Win7 it's running at 0 rpm so that Win7 will send it TRIM commands, which will be applied when that particular section is actually on the SSD, and ignored otherwise. But HighPoint would have to implement defrag natively, and do it transparently (not at all trivial). I wonder if HighPoint has thought this through all the way? I'll be impressed if they have.

We might have to wait until Win8 before the concept works properly, but RIGHT NOW I'm ready to plunk down the $60 and go play, and perhaps answer some of these questions myself. Except for one little vaporware problem: I CANNOT find a vendor.
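The guess above about what the controller reports to Windows can be made concrete. One detail: in the ATA IDENTIFY data, the "nominal media rotation rate" field reports 1 (not 0) for non-rotating media, with larger values giving the rpm of a spinning disk; but the policy the post is reasoning about looks roughly like this. A simplified sketch of the idea only, not Windows or HighPoint code:

```python
# Simplified sketch of classifying a drive from the ATA IDENTIFY
# "nominal media rotation rate" field (word 217):
#   0x0001          -> non-rotating media (SSD)
#   0x0401-0xFFFE   -> rotation rate in rpm (HDD)
#   0x0000 / 0xFFFF -> not reported / reserved

def maintenance_policy(rotation_rate):
    """Illustrates the OS-side policy the post speculates about:
    SSDs get TRIM and no scheduled defrag; HDDs get the opposite."""
    is_ssd = rotation_rate == 0x0001
    is_hdd = 0x0401 <= rotation_rate <= 0xFFFE
    return {
        "send_trim": is_ssd,
        "scheduled_defrag": is_hdd,
    }
```

Under this sketch, a controller that reports "non-rotating" gets TRIM commands it can honor when the blocks happen to live on the SSD, and it is left to handle HDD-side defrag on its own, which is the post's conclusion.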