Remember when 1 MB was a lot?

OK, so I'm feeling old today. I just came across a post on Slashdot talking about a 1.5-petabyte system. A petabyte is 1,000 terabytes. What's a terabyte, you ask? That's 1,000 gigabytes. I have about 0.5 terabytes of storage attached to my computer (500 gigabytes), and that's a lot. Most people are happy with 40-gigabyte hard drives.
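The unit ladder used throughout the post (decimal SI units, where each step is 1,000x the last) can be sanity-checked in a few lines; the dictionary below is just an illustration:

```python
# Decimal (SI) storage units, as used in the post: each step is 1,000x the last.
units = {"MB": 10**6, "GB": 10**9, "TB": 10**12, "PB": 10**15}

assert units["PB"] // units["TB"] == 1_000           # 1 PB = 1,000 TB
assert units["TB"] // units["GB"] == 1_000           # 1 TB = 1,000 GB
assert units["TB"] // (10 * units["MB"]) == 100_000  # 1 TB = 100,000x a 10 MB drive
```

That last line is the ratio quoted below: a 1 TB drive holds 100,000 times the old 10 MB disk.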

I remember, somewhat fondly, the old PDP-11/70 that I worked with in graduate school back in the early '80s. It had a 10 MB hard drive that required two people to lift and put into the drive bay. I have no idea how expensive it was, but I imagine it cost thousands of dollars. Now a flash drive with 10+ times that capacity will sit lightly on your neck, and lighten your wallet by a mere $20.

And there are 1 TB hard drives (that's 100,000 times that old 10 MB drive) that will sit on your desk and that you can now take away for a mere $900.

No wonder people keep talking about how we'll stop deleting things. With tools like Spotlight or Google Desktop Search, you can find anything at any time. I have files on my hard drive now that I've carried along since my first computer in 1987. I won't be surprised if those files are still with me on the 1 PB hard drive I buy in 20 years' time.

Comments

20 years? I don't think so; expect 1 PB systems a lot sooner.

The world has continued to shrink, and 1 TB drives are now available for $200. However, we're increasingly limited not by density but by spindles, so I doubt we will ever see a petabyte drive based on current HDD technology: we'd need 1000x the spindle speed and bus throughput, and I don't think we can get to 10,000,000 RPM in a commercially viable implementation. Modern 10,000 RPM drives on a 2.7" platter have a velocity at the platter edge of 160 MPH... 160,000 MPH is 207x the speed of sound, and isn't rational. I see us instead likely going to 5 TB drives in arrays of 200 drives to reach a petabyte. I can't foresee applications past 10 TB drive size that don't make spindle limitations an incredible burden. I don't think we'll ever see a 100 TB single-spindle system.
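The commenter's back-of-the-envelope numbers can be reproduced with a short sketch. (Assumptions: the 2.7" figure is treated here as the platter radius, since that is what makes the 160 MPH edge speed come out, and the speed of sound is taken as roughly 761 MPH at sea level.)

```python
import math

def edge_mph(radius_in: float, rpm: float) -> float:
    """Linear speed at the platter edge, in miles per hour."""
    inches_per_hour = 2 * math.pi * radius_in * rpm * 60
    return inches_per_hour / 63_360  # 63,360 inches per mile

today = edge_mph(2.7, 10_000)        # ~160 MPH, as the comment says
scaled = edge_mph(2.7, 10_000_000)   # 1000x the spindle speed: ~160,000 MPH
mach = scaled / 761                  # roughly 200x the speed of sound
```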

And yes, I remember when 1 MB was big and 110 bps was fast.

I'm only 19 and I remember back when 5 1/4" diskettes were popular. I still can't see how anyone could ever have any real use for an entire petabyte of information though; sorta mind-blowing. Good article though.

CISSP-ISSAP, they used to say the same thing about the technology we have today, yet look what we have.

Keep an open mind, anything is possible.

CISSP-ISSAP, we don't need a math lesson; this was a friendly article meant to entertain. Please don't drown it out with stupid computer jargon that no one cares about anyway.

The old fundamental, "we don't care how it works, just as long as it works."

Save it for the classroom, or at least for a tech blog that cares more about it.

1 PB? I'll bet the movie industry would just eat these up like no tomorrow.

I was just on Canada Computers looking at 1 TB for $130 CAD.

$900 in 2005, $130 three years later, and under $100 by next year. I can't wait for an 80-yottabyte HD :D

Oh Ryan, does your head hurt? Was that maths too hard for you? This article's about where computing was and how it's changed. CISSP-ISSAP explained the upper limit of single-spindle technology and how far computing can go before the next bottleneck; if you don't like technology, don't stumble into a tech channel.

bring on holographic data storage!

well, just because you might not see such size on a single spindle doesn't mean you couldn't make a small efficient container some other way. they had stuff before these spindles, so who's to say we can't make smaller and more powerful stuff yet? we have yet to truly be at our technological pinnacle

It's interesting that I found this article on the cusp of 2009. Four years on from this article, you can get a 1 TB internal HDD from Tiger Direct for less than $100. Google pushes 20 PB of data a day. I think the argument for a single PB drive is moot; almost all drives will be solid state before 2025. So we will see a change from that big frumpy computer to a slick screen that can do whatever you want, wherever you want. If you think the future is not maintained by the movies, watch some older ones and see where we stand. I think Wall-E is talking to us. Now... what do we do with all this processing power and space?

"... what do we do with all this ..." ?!?

OK, I'll try to narrow the question a bit.

What records should expectant parents plan to keep in
their child's HD diary?
- Apgar score? Height, weight, 3D MRI? Other birth records?
- All formal assessments
  * physical, cognitive, etc.?
- All performances?
  (recall the broadcast clips of Sammy Davis Jr., Michael Jackson, and Tiger Woods from before age 10)
- All publications? (i.e., school work)

Then, presuming there's sufficient value to justify creating such a digital scrapbook ... HOW could such a scrapbook best be used?
- to scan the totality of one's own writings for usage patterns? (like Amazon's "key phrases")
- to score one's own work for "sophistication", using something like MS Word's "Flesch-Kincaid Grade Level" ... to track that score over time ... to give feedback on educational efforts

... and no, I would not be comfortable leaving that much on a Google server.

Ya, I can remember those old days when Zip drives came along like magic, then came broadband like super magic, and then pen drives and DVDs like Booom!

Nice Article :)

Yeah, this explosion of data processing is crazy! With photos running a few Meg and sites like Facebook so popular, we already have a goodly number of petabytes floating around. With multimedia becoming more accessible to the Everyman, we'll start getting video files coming in similar numbers. (For comparison, a packed DVD runs just under 20 Gig and CDs run well under a Gig. Think video games for PSPs etc. run in similar sizes.) Then, like the guy said earlier, Google's pushing about 20 Peta a day -- that's over 7 Exabytes in a year! And that's on the comparatively family-friendly web alone; between the ubiquitous porn sites and pirate-software networks, I wouldn't be surprised if we were fairly close to running Zettabytes over the entire net in that same year. If the numerous academic research libraries ever get their individual libraries (roughly a couple Terabytes each) scanned online and updated to multimedia presentations, we definitely will.
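The comment's yearly total checks out, and it also shows how far off a zettabyte still is at that rate (a quick sketch, using the decimal convention 1,000 EB = 1 ZB):

```python
PB_PER_DAY = 20                    # Google's reported daily volume
pb_per_year = PB_PER_DAY * 365     # 7,300 PB
eb_per_year = pb_per_year / 1_000  # 7.3 EB: "over 7 Exabytes in a year"

# How many years of that traffic would it take to reach one zettabyte?
years_to_zb = 1_000 / eb_per_year  # ~137 years
```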

Gotta wonder about that. The way future development tends to come at us faster and larger than we expect, if we can see Zettabytes becoming reachable in the near future, even larger numbers can't be far behind. And SI prefixes only go up to yotta (1000 zetta).

But then, we are still measuring data based on bits and bytes. That's essentially like measuring the universe in Angstroms when we need the equivalent of a lightyear or parsec. I know of an old term, "ana", which means a collection of works. Stretching that to equate with "library", maybe we ought to adopt it as a new unit. Say, 1 Ana = 1 Terabyte of data. Something like that ought to be less unwieldy than the current system.

Yo, whoever runs the SI system -- get on it! ;)

I agree with all of you to be honest.

But I don't think that the current system will be in use for that much longer. The physical limitations of actually reading all that data quickly enough are far too unwieldy.

I'm expecting that in the next 20 years or so most standard HDDs will be around the 100-terabyte mark, but made out of much smaller HDDs striped together like standard RAID setups today, on a much smaller and much more integrated level. The limitations on spindle speed, as stated above, would therefore be shared equally.

And in 50 years' time I expect (hopefully) to see the complete removal of any need for data storage whatsoever... just think what that means... would there be a central mainframe where all data is stored and accessible from anywhere and any place? Would the main storage be in outer space?

All I know is that whatever direction this industry is going in, there is fundamental change on the way... they have already started hitting walls with graphics and GPU capabilities...

1 TB $900!!??

Nowadays $95 =]

If there was a PB HD, you could probably download the internet and still have room...

Sir. Polaris: A whole YOTTABYTE?!?! Hell yeah my man!! If only time travel were possible... xD (ATM though, I don't even have one near as spacious as a TB...)

I do agree with Baldwin; using alternative terminology certainly would be more efficient! I don't believe we'll ever entirely get past having to reckon in bits and bytes at times, however. As for the limit being "yotta-", I'm fairly certain that as we approach zettabytes at least one more prefix beyond it will be defined.

Rob K: "And in 50 years' time I expect (hopefully) to see the complete removal of any need for data storage whatsoever... just think what that means... would there be a central mainframe where all data is stored and accessible from anywhere and any place? Would the main storage be in outer space?"

No offense, but I think that's not a very good idea. One chance hit from an asteroid (or man-made satellite) and it would all be wiped out. Not terribly safe.

No, it's *better* that data storage is decentralized as it is now. Having multiple copies distributed across multiple servers around the world helps to ensure that data is never lost by fire, flood, earthquake, or other such disasters, because somewhere on earth, there's always a spare.

That said, I also like to have at least one copy of my data *physically* close by, on a hard drive or flash drive that I personally own, so I know that no one can ever limit my access to my data, and I needn't worry if my internet connection goes down (although I do still have to worry about the electricity staying on, but that goes off for me far less frequently than my internet).

I think humanity's current system of data storage, as I described it, is quite good, and it will continue to be good well into the conceivable future.

Spindles. How quaint!
