Guest Eugene

Should notebook disks be compared to a desktop unit?


... at least in our initial roundup? The performance database, of course, will let you directly compare a notebook drive with the Fujitsu MAU if that kind of thing excites you. For this roundup's purposes, however, I'm trying to decide whether or not to include scores from a typical desktop unit.

Pro: This will reinforce just how much a notebook drive lags a desktop drive in overall performance and demonstrate why performance is so important here.

Con: Including an entry for a desktop drive tends to hog graph real estate, compressing the usable area the notebook drive bars span and making it more difficult to discern differences between them. I don't, after all, include a typical 10,000 RPM drive in a 7200 RPM drive review to show how much slower they are (or vice versa, given drives like the 7K500 and WD4000YR :P). A couple of examples:

notebook_office_old.png

notebook_iometer_old.png
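The compression effect described in the Con above is easy to demonstrate with a few lines of code. The scores below are invented for illustration (not taken from the actual roundup graphs); the point is only how a single large value rescales every other bar:

```python
# Hypothetical scores (made up for illustration, not from the roundup
# graphs) showing how one desktop entry compresses the notebook bars.
notebook_scores = {"Drive A": 410, "Drive B": 448, "Drive C": 437}
desktop_score = 980  # stand-in for a typical 7200 RPM desktop result

def bar_lengths(scores, width=60):
    """Scale each score to a bar length relative to the chart's longest bar."""
    top = max(scores.values())
    return {name: round(width * s / top) for name, s in scores.items()}

# Notebook drives alone: the bars spread across a visible range.
print(bar_lengths(notebook_scores))
# -> {'Drive A': 55, 'Drive B': 60, 'Drive C': 59}

# Add the desktop entry and everything rescales: Drive B and Drive C
# collapse to the same bar length, so their difference disappears.
print(bar_lengths({**notebook_scores, "Desktop": desktop_score}))
# -> {'Drive A': 25, 'Drive B': 27, 'Drive C': 27, 'Desktop': 60}
```

With the desktop bar present, an 11-point gap between two notebook drives becomes invisible at any reasonable chart width.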

Any thoughts?


The compression of the results is definitely a serious drawback. Even when you're being honest it's easy to lie with statistics, and the compression could definitely mislead less critical/aware readers. Combined with SR's tradition of comparing drives to others in the same class, this inclines me to agree with Olaf that the results should be presented without the context of desktop disks.

I think, however, that you should definitely reference their performance relative to their desktop brethren in the discussion after the results of the initial roundup. A roundup like this would definitely be lacking without that context, since it would be a point of curiosity for just about every reader. It's a better article for your readers with the reference.

Which desktop disk to use is a bit of a conundrum. My initial inclination was to include the current desktop leaderboard champion as the measuring stick, but it might be better to use both the fastest and slowest of today's generation. This provides much better context, since using the leader alone has a somewhat misleading effect. Using the average is an alternative, but it provides imprecise context: the reader would be left wondering whether the average obscures the presence of desktop disks slower than the laptop disks. It doesn't account for deviation, which is a no-no if we can avoid it. Of course, using two values for the desktop disks exacerbates the effect you note of occupying valuable graph real estate, and it also reduces the impact of the graph by crowding it with too many values. Reducing the impact detracts from its purpose.
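The point that a lone average hides deviation can be made concrete with toy numbers (these are invented, not measured results): two desktop fields with the same average can tell very different stories relative to a laptop score.

```python
# Invented scores: both desktop sets have the same average, but only
# one contains a disk slower than the laptop champion.
laptop_champion = 448
desktops_tight = [590, 610, 630]   # every disk faster than the laptop
desktops_spread = [320, 610, 900]  # slowest disk slower than the laptop

for field in (desktops_tight, desktops_spread):
    avg = sum(field) / len(field)
    print(avg, min(field) < laptop_champion)
# Both fields average 610, yet only the spread field dips below the
# laptop score: exactly the case a single "average" bar conceals.
```

Showing the fastest and slowest desktop disks instead of the average removes that ambiguity for the reader.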

Since this specific graph's express purpose would be to place all the results in the context of desktop performance, my final inclination would be to cut out all the laptop disks except the performance champion revealed earlier on the results page and compare it to the fastest and slowest desktop disks of this generation. I think this should go in the discussion of the results, not on the actual observations page itself, since it's more of a contextual cue than a directly relevant result. This keeps the graph to three bars, which gives it much more impact (general-purpose context data is always best with three values); it avoids the compression issues in the general observations, which is definitely a good idea; and, since a reader already knows the relation of the other desktop disks to the laptop champion, it says no more and no less than it needs to make its specific point as well as possible.

Why not use average laptop disk performance rather than the leader? Because of the values themselves: the fastest laptop disk is significantly slower than the average desktop disk, so the graph has more impact and provides better context this way. Using the top disk places all the laptop disks below it (as opposed to some above and some below). This method accounts for deviation, just as the reasoning for using the fastest and slowest desktop disks does.

Why not use the fastest and slowest laptop disks, then? The full results section of the review serves the purpose of thoroughly describing the relative performance of the laptop disks amongst themselves. The desktop-laptop comparison graph only needs one laptop value; any more are superfluous given the results section that makes up the meat of the roundup.

I bet that was more thought than you were expecting!


No.

Aside from the aforementioned compression issues, the simple fact remains that doing this would not be an apples-to-apples comparison.

I would LOVE to see some laptop hard disk shootouts though. :)

-Wayfarer


Pardon the double post; for some reason, I am not being allowed to edit my post. What I would LIKE my original post to say is:

No.

Aside from the aforementioned compression issues, the simple fact remains that including different form factor drives on the same chart would not be an apples-to-apples comparison. I agree with the statement that links to laptop drive reviews should be provided, should the reader wish to make the cross-reference. I feel that mixing statistics in a single review would undermine the great effort SR has put into maintaining a fair, balanced, uniform testing platform and protocols.

I would LOVE to see some laptop hard disk (both 1.8" and 2.5") "shootouts" though. :)

-Wayfarer

Sorry about that. :(

Edited by The Wayfarer


Eugene, I'd prefer you kept them separate from 3.5" units. Like Olaf said, it will make for some strange-looking graphs. Besides, notebook drives live totally different "lives" (i.e., working conditions) than desktop units. So why throw in an apple when comparing oranges...


Even though my next desktop drive will probably be a 2.5" drive, I don't think that the tests should compare with 3.5" drives (unless the 2.5" drive were actually so fast that comparing it only to other 2.5" drives wouldn't make sense).

I would prefer a comparison based on main characteristics such as price, capacity, and power/sound. Currently the 2.5" drives (the normal laptop ones) have very different capacity and power/sound characteristics from normal 3.5" desktop drives, which makes comparing across all drives quite meaningless.

When I buy my next drive for my desktop computer, I will have a look at test results from 2.5" drives and select the one that suits me best; then, as a final sanity check, I will compare its performance to my current drive to make sure I'm not losing too much performance.


Hmm, maybe not entirely apples-to-oranges: even though you can't use a desktop drive in a notebook, people do use notebook drives in desktops. There were a lot of people who stuck to using Barracuda IVs long after they became painfully slow, simply because they were so quiet. I believe many readers are like Adde and would like to easily see exactly how much that quietness would cost them in performance. For this reason it would be logical to include only "quiet" 3.5" disks in the comparison. And the scale issue can be handled by simply making the graphs much larger rather than compressing the scales.

After all, when the Raptor came out, it was compared here to both the fastest 7200rpm IDE drives at the time, and 10k rpm SCSI drives. Apples-to-oranges? All could be used in a desktop...

Actually, since SR is so late to the party, I'd really like to see some elderly models tested too, like the 7K60, to see if it's worth upgrading.

Eugene, I'd prefer you kept them separate from 3.5" units. Like Olaf said, it will make for some strange-looking graphs. Besides, notebook drives live totally different "lives" (i.e., working conditions) than desktop units. So why throw in an apple when comparing oranges...


2.5" is not equivalent to notebook class and 3.5" is not equivalent to desktop/server class.

Consider the Savvio for example. :)


On the other hand, it would illustrate that no matter what notebook HDD you're getting, it will always be dog slow.

Showing a graph where the most optimized laptop drive appears 130% or 140% faster than some of its competition might (will) delude some readers into believing that the "fastest" laptop drive is truly snappy, which is false. At the very least, include the two graphs above in your initial article on notebook drives, so that technically challenged readers can put things in perspective.

There's a trend toward small form factor PCs, and many are tempted to put laptop drives into them in order to save space and cut noise. That's fine, but I think it is a good idea to make it very clear that there's a significant performance trade-off in using a laptop drive instead of a regular desktop drive.


Why not create graphs that show the typical performance of each size/type of drive?

You could include 1.8", 2.5" "laptop", 5400 RPM "desktop", 7200 RPM, 10K RPM, and 15K RPM.

A chart for each of the performance categories used in typical reviews (except multi-user) would be very interesting and would allow quick assessment of the performance hit taken with each speed/size reduction.

The data for desktop drives is already in the reviews; just add laptop drive info and create the charts. Updates would only be required when one class of drive makes a major change.

Just an idea.


5400 RPM desktop drives are dead by now. Even $300 Dell cheapo boxes include 7200 RPM drives.

... at least in our initial roundup? The performance database, of course, will let you directly compare a notebook drive with the Fujitsu MAU if that kind of thing excites you. For this roundup's purposes, however, I'm trying to decide whether or not to include scores from a typical desktop unit.

Pro: This will reinforce just how much a notebook drive lags a desktop drive in overall performance and demonstrate why performance is so important here.

Con: Including an entry for a desktop drive tends to hog graph real estate, compressing the usable area the notebook drive bars span and making it more difficult to discern differences between them. I don't, after all, include a typical 10,000 RPM drive in a 7200 RPM drive review to show how much slower they are (or vice versa, given drives like the 7K500 and WD4000YR :P). A couple of examples:

Any thoughts?


Damn, am I the old man (odd man) out again! :-)

I don't mind the representations in the graphs above at all. Further, I guess I disagree with everyone else about the significance of the alleged huge disparities.

You guys have a funny bias in your interpretations of the results.

Case in point:

1. a.) The Seagate Momentus 100GB 7200.1 (remember, this is the very, very first iteration of this drive) does a 'numbers' score of 107 I/Os at a queue depth of 128. Not fast compared to the latest, greatest 3.5in drives, but quite fast if you compare it to 3.5in drives of just a few years ago! Nothing to sneeze at (oh, I know, FS will sneeze ;-) ).

b.) The Hitachi 7K100 100GB is a bit slower at 87 I/Os, but both of these are as fast as many new big-capacity 3.5in SATA drives w/o NCQ enabled. Holy s**t! The Seagate Momentus 100GB 7200.1 trails the spanking brand-new Seagate NL35 400GB w/NCQ enabled by just 9 freaking I/Os. Damn, that's impressive! (or maybe it's not? ;-) )

2. Let's then take a look at the other graph for Office DriveMark '06, and then go to the SR Performance Database again, for some other interesting comparisons.

a.) At 448 and 437, the 7.2k laptops compare favorably with Seagate's 10k Savvio (yeah, I know, it's not marketed as a desktop drive) or the NL35, and even surpass the 15k Hitachi Ultrastar. Two years ago, that would have been wicked fast for such a laptop drive! So what power-user programs are SR's members using that would seem so damned dog slow on the systems they used just two years ago, if they were to use these 7.2k laptop drives in silent-running desktop machines? In day-to-day use, what typical consumer (not that such a beast actually exists ;) ) will notice a difference of ~100 points between the lower-scoring desktop 7.2k drives and the 7.2k laptops?

b.) Dog slow is in the eye of the beholder. Two years ago, if I had come on SR and told everyone that in just over two years there would be 7.2k laptop drives that use the same amount of power as the fastest 5.4k drives or less, yet outperform a 15k RPM drive like the 147GB Hitachi Ultrastar, you'd have told me to stop taking so many hallucinogenic drugs :). Yes, 100GB laptop drives lag the latest, greatest 300GB+ desktop 7.2k SATA drives by a significant amount at the top end, but not by so much at the lower end of the desktop range. From my POV, these 7.2k laptop drives are in fact fairly swift.
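For what it's worth, the gaps being debated in this post are easy to restate as percentages. A quick sketch using only the scores quoted above (107 vs. 87 I/Os, 448 vs. 437 DriveMark points):

```python
def pct_faster(a, b):
    """How much faster score a is than score b, in percent."""
    return 100 * (a - b) / b

# IOMeter I/Os at queue depth 128: Momentus 7200.1 vs. Hitachi 7K100
print(round(pct_faster(107, 87), 1))   # about a 23% gap
# Office DriveMark '06: the two 7.2k laptop drives
print(round(pct_faster(448, 437), 1))  # about a 2.5% gap
```

A ~2.5% gap between the two 7.2k laptops is exactly the kind of difference that vanishes when a desktop bar compresses the chart scale.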

Fujitsu has a SATA drive for laptops (announced recently), and a number of these ATA laptop models also come in a blade-server-oriented SATA configuration (same performance specs, except 25% greater current draw and 24/7 operation). The 5.4k 160GB models with higher areal density are due to hit the market early next year, making for some interesting mobile RAID configurations that could run off FireWire 800 ports without an AC power adapter.

Question for Eugene or anyone else.

Other than the lower power consumption (and an assumption that higher-current actuator motors would be needed at present for parity in seek times), what are the reasons for the performance gaps between the desktop 3.5in drives and the 2.5in laptops?

After all, the Cheetah 15k drives use smaller-than-3.5in platters to get more speed. What I am wondering is: given that next year's Intel mobile processors/chipsets will support SATA drives, and that more and more hi-def material is going to be done on laptops (heavy-duty users of uncompressed 2K or greater true HD material will still have to use a high-end desktop with huge TB RAIDs), what are the near-term obstacles that laptop drives need to overcome to gain near parity in speed, if not capacity? I'm just wondering what accounts for these differences, and whether the next generation of laptop SATA drives will narrow the gap at all.

Keep in mind that the lower-capacity storage needs of laptops in late 2006 or early 2007 may be met by 100GB+ NAND solid-state memory. Apple's inexpensive Mac Mini (using laptop innards) still uses 5.4k laptop drives, and will probably do so as long as that model is offered.

I'm just glad SR will finally, after years of whining by me (and others) as far back as 2001, have laptop drive reviews, woohoo!!! Oh yeah, maybe you'd better not include the 15k Ultrastar in the comparison graphs with those 'dog slow' 7.2k laptop drives... it's just too embarrassing, lol.


Absolutely yes.

An entry for an average 7200 RPM drive is good.

No need to waste space on all the desktop drives, but one is good...

This is useful for people interested in building/buying SFF computers... and as a reminder of the gap between notebook and desktop drive performance.


I also would like to see a comparison to desktop drives. Why not?! It gives you very interesting comparison info, so you know the real-world difference from a "real" desktop drive. An average 7200 RPM desktop drive is fine for comparison!


If there's room to populate the standard-sized comparison boxes with laptop-class drives, I think that's a more appropriate comparison. Otherwise, including a representative desktop drive clearly labeled as such would be OK.

I really like the suggestion of separate comparison tables showing differences between drive classes, rather than individual drives. What would be particularly interesting would be a graph showing, for each test, the range of scores (perhaps omitting the highest and lowest scores?) turned in by each class of drive. Ultimately, the trends are fairly well known, but graphs showing the relative magnitude of difference between desktop and server class drives would be useful.

In an ideal world, such drive characteristics would be a part of the results database so users could generate graphs of their own, not just comparing individual drives, but comparing particular characteristics (NCQ/TCQ v. non- drives; laptop v. desktop; 2M buffer v. 8M buffer, etc.) of many drives at once. I realize this is pie-in-the-sky territory:)


Udaman, the performance gap is a consequence of several factors. In no particular order:

1. Caching algorithms optimized to save power.

2. Form factor limitations on the actuator assembly. (15K disks with 2.5" platters come in a 3.5" form factor because their actuator assemblies need that extra space; the magnets for the voice coil motor in particular.)

3. Smaller capacity.

4. Smaller platters.

All of these are necessary compromises given the disks' target application. They're unlikely to ever be overcome.


1. a.) Completely irrelevant.

b.) Completely irrelevant (again).

2. b.) Why would you use a drive that used to be fast two years ago if you can get one that's fast now? Also, access patterns change, data gets bigger, and processors get faster.

Fujitsu has a SATA drive for laptops (announced recently), and a number of these ATA laptop models also come in a blade-server-oriented SATA configuration (same performance specs, except 25% greater current draw and 24/7 operation).

25% more power without performance advantage? Why?

Other than the lower power consumption (and an assumption that higher-current actuator motors would be needed at present for parity in seek times), what are the reasons for the performance gaps between the desktop 3.5in drives and the 2.5in laptops?

Because larger drives pack much more data per track, they have a big advantage in the benchmarks.

Keep in mind that the lower-capacity storage needs of laptops in late 2006 or early 2007 may be met by 100GB+ NAND solid-state memory. Apple's inexpensive Mac Mini (using laptop innards) still uses 5.4k laptop drives, and will probably do so as long as that model is offered.

I wouldn't expect (cheap) big solid-state storage that early.

BTW, where did you get the laptop performance numbers from?

