2007/01/11

no moore (part 1)...

A recent string of comments on Alex's blog at Pythian got me thinking about the whole subject of "where is database technology heading".

Sure: it's true that Moore's law has been valid for a long time.

Fact is: it ain't no more! I don't often use Wikipedia as a reference, as there is far too much plainly incorrect content there. But in this case it's spot-on. Read the whole entry, not just the definition!

There are huge consequences for IT from this. But before anyone looks into that, what of data management and databases?

Moore's law dealt with density of fabrication. Not speed, not capacity. And no, the three are not inter-dependent although they are related!

The undeniable fact is that processing speed - directly dependent on density of packaging due to propagation and temperature constraints - has been increasing exponentially.

The other undeniable fact is that disk storage access speed has NOT been increasing at anywhere near the same rate. We now have disk storage systems that can sustain 100MB/s IO rates at best. Ignore buffer cache speeds - those are not sustainable.

That is hardly a fantastic improvement over the 10MB/s of 1996, and nowhere near the exponential growth rates of other electronic devices.
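The gap is easy to quantify. A quick back-of-envelope sketch - my arithmetic, using only the 10MB/s and 100MB/s figures above - compares the implied annual growth of disk throughput with Moore's-law-style doubling every two years:

```python
# Illustrative arithmetic only: compare the implied annual growth of
# disk throughput with density doubling every ~2 years.

def cagr(start, end, years):
    """Compound annual growth rate implied by start -> end over 'years'."""
    return (end / start) ** (1.0 / years) - 1.0

# Disk throughput: ~10 MB/s in 1996 -> ~100 MB/s in 2006
disk_growth = cagr(10, 100, 10)

# Density doubling every ~2 years, expressed as an annual rate
moore_growth = 2 ** (1.0 / 2) - 1.0

print(f"disk throughput growth:  {disk_growth:.1%}/year")
print(f"Moore-style density:     {moore_growth:.1%}/year")
```

Roughly 26%/year for disk throughput against roughly 41%/year for density - and that compounding difference is exactly the widening gap the post is about.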

Those with an electrical engineering background will know perfectly well why this is so: the physics of transmission lines and their limits in speed have been understood for many, many decades. In fact the main reason one needs to reduce the size of CPU components in order to jack up the clock frequency is PRECISELY these limits!



So, for databases and more generally data management, where to then?


Because undeniably, there is a need for this technology to progress at a rate that can supply with data all those speed-hungry CPUs and processing capacities created by the consequences of Moore's law over the last two decades.

There are those who propose that with the improvements in CPU speed, there will be less need for adequate indexing and optimisation of access methods: an approach based on simple brute force "full scan" of a solid state disk - SSD - storage will suffice, the hardware will be capable of returning results in usable time.

Anyone interested in this should check out the research carried out by Jim Gray of Microsoft. In one of his papers he proposes that the entire life history of a human being can be contained in a Petabyte (PB) of data - he calls it the "Personal Petabyte". Read it, it's very interesting and I believe he is 100% right. We are heading fast into a world where it will be possible to store that PB about a person in a finite storage element!

Any future marketing exercise wanting to address a given population had better be prepared to digest this sort of volume of information, in usable time! Because it will happen, very soon, scaled out to the size of their audience. Yes, Virginia: we're talking Exabytes - EB - here!

OK, so let's be incredibly optimistic and assume for a second that we'll see the same rate of growth for data access speed in the next 10 years. Yeah, that brings us to 1GB/sec by 2017, doesn't it?

Hang on, didn't good old Jim say 1PB? Well, that's 1 million (10**6) of those GB, boys and girls! Think you can fast-scan 1PB at these rates? Yeah, that's 1 million seconds, or approximately 11 days of solid data transfer to go through it. Don't forget as well that your marketing campaign will be remarkably ineffective if all you can look at is the data of one client... and you might as well be prepared to multiply all those times by 1000, as you may well be dealing with EB volumes, not PB!
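A quick sanity check of that arithmetic, using only the figures already quoted - 1 PB of data and an optimistic 1 GB/s of sustained transfer:

```python
# Scan-time arithmetic for a brute-force full scan, decimal units.

PB_IN_GB = 10 ** 6        # 1 PB = 10**6 GB
rate_gb_per_s = 1         # optimistic sustained rate for 2017

seconds = PB_IN_GB / rate_gb_per_s
days = seconds / 86400
print(f"full scan of 1 PB at 1 GB/s: {seconds:,.0f} s = {days:.1f} days")

# Scale out to exabyte volumes (1 EB = 1000 PB): a thousand times longer.
eb_days = days * 1000
print(f"full scan of 1 EB: {eb_days:,.0f} days (~{eb_days / 365:.0f} years)")
```

About 11.6 days for the petabyte, and on the order of 30 years for the exabyte case.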


'scuse the French folks, but "fast scan" my arse!


Of course: one can argue that disks will soon become solid state and much faster. That is so not the problem. The problem was NEVER the speed of the disk, but how fast that data can travel to the CPU and its memory!

That is the path that runs at 100MB/s now, ran at 10MB/s 10 years ago, and 10 years from now - with luck - will run at 1GB/s!

No, clearly we'll need MUCH MORE sophisticated indexing and cataloguing techniques to manage and derive any value out of these huge amounts of data.

"fast scans" and/or "full scans" are so not there, it's not even worth thinking about them!


So, what will these techniques be, you ask?


I'll blog about what I think they'll be very soon.


catchyalata, folks

10 Comments:

Blogger Jeff Moss said...

I look at it as a great career opportunity for me for the next few decades...efficiently dealing with big Oracle systems processing more data in ever more complex ways...bring it on!

Friday, January 12, 2007 1:25:00 am  
Blogger Alex Gorbachev said...

And I see even more opportunities for an IT specialist who can help avoid that complexity and instead of complex solutions will implement smart design so processing can be done in a simple and efficient way.
On the other hand, majority of projects will probably go wrong anyway and the demand for emergency problem-solvers will be high as usual.

Friday, January 12, 2007 4:04:00 am  
Anonymous Paul Vallee said...

Hi Nuno,

This could be the game changer you've been waiting for:

http://www.pythian.com/blogs/357/sandisks-600-solid-state-drive-could-be-a-game-changer-62mbs-7000-iops-in-its-first-rev

Paul

Friday, January 12, 2007 4:26:00 am  
Blogger Joel Garry said...

I think there will be a several orders of magnitude data transfer rate increase when optical computing takes off over the next decade. Not only will it be faster than copper, but frequency separation will allow extreme parallelization along the same data channels. 7 year old article. Something new.

Of course, this also will result in less optimization as people come up with "new paradigms of computing." But so what? Exabytes!

Friday, January 12, 2007 11:46:00 am  
Blogger Noons said...

Sorry for the delay folks. My son Sam decided to punch a hole through his leg while doing some x-country and I had to attend to the emergency. Stitches and all! Messy... All well now.

Jeff: yes, very much so. Even more for development than anything else.

Alex: spot-on, dude! But it'll need a better effort from the db makers than "you don't need a dba" and other such flights of fancy... More later.

Paul: Impressive progress, but that's really not in the ballpark. I can get 100MB/s sustained - not burst - and 10k IOPS from a $4K Apple SAN with 4TB capacity, now. Correction: I could last year.
Now it might be even better! These things change too fast...

SSD needs to deliver at least three orders of magnitude beyond those numbers before it can be price-competitive AND demonstrate the ability to handle the Personal PB. Again, more later.

Joel: 100% in agreement, optical is definitely one way to go. But they have been talking about that since the Cray days, when they had to use a cylindrical shape to reduce the transmission line losses. I think the stumbling block there has traditionally been the connectors. They can't just be plug-and-go, not at those light levels and frequencies: too much IR interference.

I think the solution is gonna have to be part "if Mohamed can't travel to the mountain, then bring the mountain to Mohamed" and part totally new db designs.

Not new from the point of view of relational theory or some other theory: too much invested already in relational for that to be viable.

Instead, from the point of view of mapping logical definition to physical storage. An area that has seen remarkably shallow R&D in the last 10 years: the last big development there was partitioning!
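To make that "mapping logical definition to physical storage" point concrete, here is a toy sketch - all names made up, and a drastic simplification of what a real engine does - of the range map a partitioned table keeps between partition keys and physical segments. The SQL never changes; only the routing and pruning below it does:

```python
# Toy range-partition map: exclusive upper bounds, in the style of
# "VALUES LESS THAN" range partitioning. Purely illustrative.

import bisect

class RangePartitionMap:
    def __init__(self, bounds, files):
        # bounds: sorted, exclusive upper bounds; one extra segment
        # catches everything at or above the last bound.
        assert len(files) == len(bounds) + 1
        self.bounds = bounds
        self.files = files

    def segment_for(self, key):
        """Physical segment holding rows with this partition key."""
        return self.files[bisect.bisect_right(self.bounds, key)]

    def prune(self, lo, hi):
        """Segments a scan of keys in [lo, hi] must touch; all others skipped."""
        i = bisect.bisect_right(self.bounds, lo)
        j = bisect.bisect_right(self.bounds, hi)
        return self.files[i:j + 1]

pmap = RangePartitionMap(bounds=[2006, 2007],
                         files=["p_pre2006.dbf", "p_2006.dbf", "p_max.dbf"])
print(pmap.segment_for(2006))      # routes to the 2006 segment
print(pmap.prune(2006, 2006))      # a one-year scan touches one segment
```

The point being that the logical schema stays put while the physical map underneath it can get arbitrarily smarter - and that map is where the R&D has been thin.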

But I'll add-on part2 very soon and hopefully my speculation will become a lot clearer then.

I stress that word: speculation.

But also based on what I learned from dealing with guys who ARE planning to store AND make use of those PBs, now.

Friday, January 12, 2007 7:46:00 pm  
Anonymous Steve Karam said...

"The problem was NEVER the speed of the disk but how fast that data can travel to the CPU and its memory! "

Remember that a true Personal PB would be mostly composed of non-correlated data. The movies I watch have no true relationship to the sandwich I ate for lunch, except that they are both related to me. If you were to try to form an ERD of a complete Personal PB (that is, every experience I have ever been through) you would have a central "Me" table surrounded by thousands of child tables that have no real bearing on each other except to form new instances of "Me."

The problem you mention above regarding the CPU and its memory being the true bottleneck is true if we're talking about a traditional Von Neumann architecture: a sequential flow of data between a CPU and its memory. However, using multiple CPUs with cache and branch prediction, we can achieve a high level of parallelism that can break through the conventional boundaries that you mention. Add in Solid State Disk and you have an extremely fast system that can tackle huge volumes of data.
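To put rough numbers on that parallelism argument - my figures, a simple back-of-envelope assuming independent channels each sustaining the 1 GB/s rate discussed above:

```python
# Parallelism divides scan time by the number of independent channels;
# even an aggressive degree of parallelism only divides, never removes,
# the brute-force scan cost. Illustrative numbers only.

def scan_days(volume_gb, rate_gb_per_s, channels):
    """Days to scan 'volume_gb' with 'channels' independent paths."""
    return volume_gb / (rate_gb_per_s * channels) / 86400

pb = 10 ** 6  # 1 PB in GB
for channels in (1, 16, 256):
    print(f"{channels:4d} channels: {scan_days(pb, 1, channels):8.2f} days")
```

256 channels brings the 1 PB scan down to about an hour - genuinely useful, but the same sum scaled to EB volumes is back to weeks, so parallel hardware alone doesn't close the argument.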

But even that still falls under the Von Neumann architecture, which has mostly been deemed inadequate to handle large amounts of non-correlated data such as a Personal PB. If the aim is to capture EVERY bit of data regarding a person's life (and not just the data that pertains to our business/marketing scheme), a different architecture entirely will be required.

I would say this is where neural networks come into play. Neural networks are made to store huge amounts of raw sensory data, then process it with multiple asynchronous systems to find patterns and correlations (e.g. "People who like beef, wear flip flops, and watch movies about ninjas are more apt to buy Tide Detergent than Gain"). The concept here is to mimic the human brain (and by cross-referencing your Personal PB with other people's Personal PBs, to simulate a Super Conscious) to figure out what the next Instance of You and ultimately the next Instance of Group Mind would like to buy.

I suppose the closest thing we would have in the current abilities of Oracle would be a massive snowflake schema based around a fact table called "HUMAN". All the data, forming together into every instance of a person, would be crunched and Materialized Views generated to store statistics regarding all the discovered correlations. A system such as this would absolutely require lightning fast disk resources such as SSD, coupled with a large amount of processors distributed amongst multiple systems in order to crunch the data. And because of the randomness of human experience, a large amount of the data would have to be self-identifying...SYS.ANYDATA galore!

SQL> CREATE NEURAL CLUSTER "PERSON"
2 (id number not null,
....
4389475394843220598 fingernail_clip_date date);

Sunday, January 14, 2007 11:16:00 am  
Anonymous Anonymous said...

Storing a single person in a RDBMS is a bit of a waste. Most people will have thousands of tables with only one row. How many times does the average person graduate from high school, get married, die? Most people only get one insert into their personal "DEATHS" table, and when that row gets committed, their personal DB is locked against writes for good...

Better to store a group of people in a RDBMS, preferably a large group, or a single person in a more suitable data structure.

More interesting would be ways to unify several RDBMS schemas in several RDBMS systems. We can expect storage systems to get more intelligent as Moore's law keeps putting faster processors and larger memories closer to storage devices. Why send an entire disk over a 1GB/s bus when you could compile an SQL query, execute it in the disk drive, and get only the matching rows at 1GB/s?
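A toy model of that last idea - all names and numbers hypothetical - showing why pushing the predicate down to the storage device pays: the drive evaluates the filter and ships only matching rows, instead of every block crossing the bus to be filtered at the CPU:

```python
# Predicate pushdown, modelled in miniature: compare bytes shipped over
# the bus for "ship everything, filter at CPU" versus "filter in drive".

ROW_BYTES = 100  # assumed average row size

def ship_all_then_filter(rows, predicate):
    """Conventional path: every row crosses the bus, the CPU filters."""
    shipped = len(rows) * ROW_BYTES
    return [r for r in rows if predicate(r)], shipped

def filter_in_drive(rows, predicate):
    """'Smart disk' path: drive evaluates the predicate, ships matches only."""
    matches = [r for r in rows if predicate(r)]
    return matches, len(matches) * ROW_BYTES

table = [{"id": i, "year": 2000 + i % 10} for i in range(100_000)]
wanted = lambda r: r["year"] == 2007

res_a, bytes_a = ship_all_then_filter(table, wanted)
res_b, bytes_b = filter_in_drive(table, wanted)
assert res_a == res_b  # same answer either way
print(f"bus traffic: {bytes_a:,} vs {bytes_b:,} bytes "
      f"({bytes_a // bytes_b}x reduction)")
```

With a 10%-selective predicate the bus carries ten times less data; the more selective the query, the bigger the win - which is the whole case for the "intelligent disk".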

Wednesday, January 17, 2007 11:16:00 am  
Blogger Noons said...

Steve:

apologies, I forgot to reply to you. Yes, neural network technology is IMHO the way to go for handling the "intelligent disk" control. Modified of course for the database domain. It needs to be able to understand the universe of rdbms, not any universe. Recall that we are after dedicated bulk processing performance, not research data.

Anon:

I think you completely misunderstood what the personal PB is all about. Please do read Jim's work on that subject.

Of course you store a "group of people" in a rdbms, instead of a "single person". Ever heard the expression "set-at-a-time"?

As for compiling the SQL query and executing it in the disk drive: that is PRECISELY what this entire blog is all about.

In fact, I think it will be a lot more than that. You'll see disk drives capable of merging and further optimizing SQL from many different meta-databases and the local indexing of its own data.

And able to also handle the intricacies of updating widely partitioned data, for example.

But it will require that other rdbms products get a partitioning feature even remotely resembling what Oracle does now: none do at the moment.

No, db2's "partitioning" is a completely different concept. And the one in Postgres doesn't even understand the most basic tenet of SQL: independence between syntax and physical location of data.

As for SQL Server partitioning, the less said the better. In fact, nothing can be said anyway: it's simply not there.

Wednesday, January 17, 2007 2:18:00 pm  
Blogger Don Burleson said...

Hey noons, I never said this, don't put words in my mouth!

"There are those who propose that with the improvements in CPU speed, there will be less need for adequate indexing and optimisation of access methods"

However, I AM saying that as hardware costs continue to fall, management will always choose the least risky, fastest and cheapest solution to Oracle tuning. It ain't elegant, it doesn't fix the core issue, but I see IT managers doing it all the time.

Dude, SQL tuning is expensive, fast hardware isn't . . .

Friday, March 30, 2007 8:43:00 am  
Blogger Noons said...

Actually, I got it from one of your articles somewhere. Pity I didn't capture a link to it back then. Took you a while to catch up, Don! :-)
Ah well: never mind, not important...


But you're quite right: *at this stage* of the game, tuning is expensive while hardware isn't.


The point I'm trying to make is that things change from now on.


Assertions that hardware is cheaper presume a linear relationship between the two: tuning costs and hardware costs.


IME, that is almost never the case: while you may enhance performance by an order of magnitude when changing hardware, degradation due to bad design and lack of basic tuning involves factors of two or more orders of magnitude.

I.e., degradation is "exponential" rather than "linear", if I may use somewhat imprecise commonplace terms.
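An illustration of the asymmetry - my numbers, comparing a one-order-of-magnitude hardware refresh against the cost of a bad access path whose work grows as n**2 instead of n*log2(n):

```python
# Illustrative only: how many times more work a quadratic plan does than
# a good n*log(n) one, versus a fixed 10x hardware speedup.

import math

def penalty(n):
    """Work ratio of the bad (n**2) plan over the good (n*log2(n)) plan."""
    return (n ** 2) / (n * math.log2(n))

hardware_gain = 10  # one order of magnitude from a hardware refresh

for n in (10 ** 4, 10 ** 6, 10 ** 8):
    p = penalty(n)
    verdict = "swamps" if p > hardware_gain else "within"
    print(f"n={n:>11,}: bad plan costs {p:,.0f}x more ({verdict} the 10x gain)")
```

Already at ten thousand rows the design penalty is hundreds of times the hardware gain, and it only widens as the data grows - which is why "buy a faster box" stops working precisely when the volumes get interesting.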

And hardware-based improvements ain't gonna be giving us one order of magnitude changes for much longer, Moore's law is finished: that is the point I'm making here.

Friday, March 30, 2007 9:59:00 pm  
