sPECtre

New 500GB Hitachi drives announced


Note that SATA I (first generation) has triple the bandwidth that any regular drive can use, and double that of the fastest drive you can buy, so consumers probably won't saturate even SATA I for several years. My Western Digital 250 GB drive maxes out at 58 MB/s and averages only about 49 MB/s (see WD 250 GB benchmarked), while SATA I already provides 150 MB/s of usable bandwidth (1.5 Gbit/s on the wire, less 8b/10b encoding overhead). Clearly SATA II is mostly a marketing move.
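As a sanity check on those ratios, here's the arithmetic spelled out (a minimal sketch in Python; the 49/58 MB/s figures are the benchmark numbers above, and 150 MB/s is SATA I's usable rate after encoding overhead):

[code]
# Rough headroom check: SATA I usable bandwidth vs. one drive's media rate.
sata1_mb_s = 150.0   # 1.5 Gbit/s link, ~150 MB/s usable after 8b/10b encoding
drive_avg = 49.0     # WD 250 GB average transfer rate, MB/s
drive_max = 58.0     # WD 250 GB peak transfer rate, MB/s

print(f"Link vs. average rate: {sata1_mb_s / drive_avg:.1f}x")  # ~3.1x
print(f"Link vs. peak rate:    {sata1_mb_s / drive_max:.1f}x")  # ~2.6x
[/code]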

What I want to know is: where are the higher areal densities? That would increase throughput better than higher RPMs would. The new 500 GB drive announced yesterday by Hitachi uses 5 platters of only 100 GB each. Yet it was almost four years ago that IBM announced its "pixie dust":

Known technically as "antiferromagnetically-coupled (AFC) media," the new multilayer coating is expected to permit hard disk drives to store 100 billion bits (gigabits) of data per square inch of disk area by 2003.

(from IBM Pixie Dust press release)

There are about 30 square inches per platter side on a 3.5" drive, so each platter should now hold something like 30 × 100 × 2 / 8 = 750 GB. But the newest, biggest drive has 1/8th that density. What's the problem? Are they intentionally holding down the density in order to sell the high-RPM drives?

I want terabytes!

[quote]What I want to know is: where are the higher areal densities? That would increase throughput better than higher RPMs would. The new 500 GB drive announced yesterday by Hitachi uses 5 platters of only 100 GB each. Yet it was almost four years ago that IBM announced its "pixie dust".[/quote]

It's always a long, hard struggle to go from the lab and "proof of concept" to an economically viable product for mass production. It's no use making a drive that costs a few hundred bucks to build, since few will pay what it's worth. Ditto if yields are so poor that you have to cherry-pick components to assemble said drive.

The typical consumer does not need a terabyte drive. Those with large MP3/video collections might, as would folks building inexpensive file servers or TiVo-like devices. Is that market large enough to offset the costs of developing a product for it? I think we're starting to see that the answer is "yes," but will it grow quickly enough that we see 100%+ annual areal-density improvements again? Is the technology there to keep up that kind of pace? And more importantly, is the capital there to support it? The drive industry isn't exactly rolling in money to fund that kind of investment...

[quote]It's always a long, hard struggle to go from the lab and "proof of concept" to an economically viable product for mass production. It's no use making a drive that costs a few hundred bucks to build, since few will pay what it's worth. Ditto if yields are so poor that you have to cherry-pick components to assemble said drive.[/quote]

Well, actually, if you read the article, IBM was planning to put maximum-areal-density drives on the market in 2003. Obviously that didn't happen on the desktop, and it still hasn't happened years later. If it had, we'd have 2-platter 1.5 TB drives. I don't think it's like silicon, so it's not a matter of cherry-picking. I suspect they've shelved the technology until demand awakens. As you point out, most people haven't yet dreamed that they need terabytes of storage. But I don't think there's ANY technological or production barrier in the way; it's purely a business (read: greed) decision. I'd bet they're using the new density only on laptop (and smaller) drives, so they can charge a fortune.

I get the feeling they're not going to let us have terabyte drives for at least a couple more years, even though they've apparently been perfectly feasible to build for a few years now.

Bastards!

[quote]Well, actually, if you read the article, IBM was planning to put maximum-areal-density drives on the market in 2003. Obviously that didn't happen on the desktop, and it still hasn't happened years later. If it had, we'd have 2-platter 1.5 TB drives. I don't think it's like silicon, so it's not a matter of cherry-picking. I suspect they've shelved the technology until demand awakens. As you point out, most people haven't yet dreamed that they need terabytes of storage. But I don't think there's ANY technological or production barrier in the way; it's purely a business (read: greed) decision. I'd bet they're using the new density only on laptop (and smaller) drives, so they can charge a fortune.

I get the feeling they're not going to let us have terabyte drives for at least a couple more years, even though they've apparently been perfectly feasible to build for a few years now.

Bastards![/quote]

Then perhaps it's like diamonds, where they control how many are released every year... I guess the drive makers have their own Moore's Law here, and if they give out too much too soon, they'd have no market.

I remember reading about that "pixie dust" too, six or seven years ago or something like that... but that was when hard drives were 20 or 30 GB. They said back then that with this new "pixie dust" they'd be able to reach 400 GB... so I think they have kept that promise.


Now the question is THIS or Seagate's 7200.8... :X

Hopefully the Hitachi will be quieter than the SATA 7200.8...

[quote]There are about 30 square inches per platter side on a 3.5" drive, so each platter should now hold something like 30 × 100 × 2 / 8 = 750 GB. But the newest, biggest drive has 1/8th that density. What's the problem?[/quote]

Check your math.

The actually usable surface on a 3.5-inch drive is more like 5 square inches per platter side, maybe even less.

5 × 100 × 2 / 8 = 125 GB per platter.
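The same arithmetic as a small sketch (Python; the ~5 in² usable area is the estimate above, not a published spec, so treat the output as a rough bound):

[code]
# Capacity per platter = usable area (in^2) x areal density (Gbit/in^2)
#                        x sides / 8 bits per byte
def platter_capacity_gb(usable_in2, density_gbit_in2, sides=2):
    return usable_in2 * density_gbit_in2 * sides / 8

print(platter_capacity_gb(5, 100))   # 125.0 GB: close to Hitachi's 100 GB platters
print(platter_capacity_gb(30, 100))  # 750.0 GB: result of the mistaken 30 in^2 figure
[/code]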

[quote]I'm also a little disappointed by the fact that the 7K80 has slower seek times than the T7K250. I suppose that makes sense for a volume-oriented drive, but I am annoyed by the trend toward reducing the performance of the lowest-capacity drives. I have far less use for performance on giant storage disks than I do on low-capacity system disks.[/quote]

I've actually had a 7K80 (2 MB cache, PATA) for three months now. Overall, its access time really is about 0.3 ms higher than the 7K250's, but it's still better than my old 180GXP 180 GB drive, about on the level of my very old 75GXP 30 GB drive, and also better than any Maxtor, Samsung, or Seagate PATA :)

If its access time is higher, it's because the mechanics are simplified: the drive weighs nearly 200 g less than a 7K250 :D

[quote]I don't think it's like silicon, so it's not a matter of cherry-picking. I suspect they've shelved the technology until demand awakens. As you point out, most people haven't yet dreamed that they need terabytes of storage. But I don't think there's ANY technological or production barrier in the way; it's purely a business (read: greed) decision. I'd bet they're using the new density only on laptop (and smaller) drives, so they can charge a fortune.[/quote]

But it is, in many ways. I'm not sure how media is made, but I *do* know that heads are definitely cherry-picked. From a given wafer, you're going to get a certain number of heads that are "hot" (i.e., capable of higher TPI than nominal). These are the ones that get used in the denser capacities. Obviously the media has to have the capability too; IIRC, BPI depends mainly on the media and the read channel, while TPI depends on the head and the mechanics. It's not much good having heads capable of terabyte-per-platter density when the media isn't, and vice versa. Mechanics will likely have to move to dual-stage actuation for those high densities, but everyone has been postponing that switch because of significant costs, in both development and manufacturing.
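For anyone following along, areal density is just the product of those two numbers; a quick sketch with illustrative figures (not any vendor's actual specs):

[code]
# Areal density (bits/in^2) = BPI (bits per inch along the track)
#                           x TPI (tracks per inch across the platter)
bpi = 800_000   # illustrative linear density, bits/inch
tpi = 125_000   # illustrative track density, tracks/inch
print(f"{bpi * tpi / 1e9:.0f} Gbit/in^2")  # 100 Gbit/in^2
[/code]

Which is why the two knobs are semi-independent: better media and channel electronics buy you BPI without touching TPI, and vice versa.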

[quote]What ATAPI support SATA II brings, I don't know; can someone else fill us in?

Maybe optical drives with SATA connections? I haven't looked recently, but I'm under the impression they still don't exist?[/quote]

Plextor's new PX-716 16X DVD writer is available with both IDE (PX-716A) and SATA (PX-716SA) interfaces. Plextor tech support did tell me, though, that they didn't think the SATA version, used with my EPoX EP-8RDA3+ motherboard, would offer any performance improvement over the IDE version.

[quote]vrrrrrooooommmmm

I wonder how it will sound....

vvvvrooooommmmm?[/quote]

I heard that you can get it with a custom exhaust that will make it sound faster ;)

[quote]Plextor's new PX-716 16X DVD writer is available with both IDE (PX-716A) and SATA (PX-716SA) interfaces. Plextor tech support did tell me, though, that they didn't think the SATA version, used with my EPoX EP-8RDA3+ motherboard, would offer any performance improvement over the IDE version.[/quote]

My guess is that there's some kind of bridge solution in there, just because the SATA version is 3 cm longer. Yeah, I might be totally wrong...

They also have a third model, the PX-716AL, with slot loading!!!

[quote]Shouldn't the cache size be irrelevant with proper NCQ support, where the data can be sent after the command, only when it's really needed?[/quote]

There is more to cache size than that. Besides caching previously read sectors, that RAM also holds part of the drive's firmware as executable code (yes, part of a drive's firmware is stored on the media; that's why, when the utility area of the drive fails, you see a weird POST name instead of the actual model number).

Another use of the RAM is to let the drive read an entire track without waiting for the data to be sent back to the host. Even on a large 300+ GB drive, an outer track holds on the order of a megabyte of data (see the estimate below), and a drive doing read-ahead may buffer several tracks at once. With too small a buffer, the drive has to stop reading partway around the track and wait for the host to drain data, instead of capturing the whole track in one revolution.
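A quick way to bound the track size (a sketch using the thread's own numbers: a drive that peaks at 58 MB/s at 7,200 RPM can stream at most one track per revolution):

[code]
# Bytes on one outer track <= peak media rate / revolutions per second
peak_mb_s = 58          # peak transfer rate of the WD 250 GB drive above
rpm = 7200
revs_per_s = rpm / 60   # 120 revolutions per second

print(f"~{peak_mb_s / revs_per_s:.2f} MB per outer track")  # ~0.48 MB
[/code]

So even a few megabytes of cache is enough to hold several tracks' worth of read-ahead.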

