quick catch up on peoplesoft

It's been a while since I posted anything to do with my "beloved" Peoplesoft...

umm... well,

you know what I mean...

Some might recall this post from a while ago?

It's where I discussed our approach to this common problem with scratchpad tables in Peoplesoft.

Anyways: there have been some developments I reckon could be of use to anyone facing the same problem.

I've since had a good exchange with Dave Kurtz where he suggested we try making the Peopletools code itself analyze these tables as it populates them.

Do spend some time reading his blog: that's where we got a lot of ideas to improve our PS environment. Definitely worth your while.

Still, changing Peoplesoft code was totally outside of what I wanted to do.

First of all, I'm not a Peopletools coder: I could easily get into a "the cure is worse than the malaise" situation.

Second, none of our guys knows how to spell "Peopletools". As such, no go.

We had to stick with the database solution. Not perfect, but workable.

Recently a new guy joined us. Scott is a very experienced Peopletools developer and administrator. As soon as he looked at our problem, his first reaction was: "Let's turn on the analyze from inside the Peopletools code, it's silly to force the database to do this".

This is done by turning on the trace facility of Peopletools for the module in question and ignoring the trace output. Part of the "tracing" code does an analyze on all scratchpad tables after populating them with interim results.
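For the curious: the "analyze" that the tracing code performs boils down to a statistics gather on each working table, right after it is populated, so the optimizer sees realistic row counts instead of stale or empty stats. A minimal sketch of the equivalent database-side call — the schema and table names here are hypothetical, not the actual Peoplesoft objects — would be something like:

```sql
-- Rough equivalent of what the Peopletools tracing code does after
-- populating a scratchpad table: refresh optimizer statistics so
-- subsequent SQL against it gets sensible execution plans.
-- 'SYSADM' is the typical Peoplesoft schema owner;
-- 'PS_GP_WRK_TBL' is a made-up scratchpad table name.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_WRK_TBL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- gather on the table's indexes too
END;
/
```

The point is that the application issues this at exactly the right moment — after the interim load, before the heavy SQL runs — which is something a scheduled database-side stats job can never time correctly.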

We removed our database stats blocking code and let Peoplesoft do the analyze during the paycalc.

Our paycalc dropped from an already short 20 minutes to a 5-minute run for a full-month, all-employees calculation.

That's as good as we could possibly expect!

Lessons? Never, ever try to fix an application performance problem by tuning the database.

The problem is in the application. Fixing it by changing the db setup is akin to trying to fix a flat tyre in a car by revving the engine:
it won't work in the long run and it'll definitely do more damage!

It's been said many times, but it doesn't hurt to say it again:

find where the problem is, fix it there!

I recently watched Oracle's Graham Wood talk about performance monitoring and tuning with Grid at the last Sydney Oracle DBA Meetup.

He made a very strong point to which I can relate entirely: when tuning SQL performance problems, start by tuning the SQL. Do not tune the database.

Change the SQL - not the spfile - if all you have is a SQL performance problem.

Same principle, different conditions.

Assess the symptom, find the real problem and resolve it.

Stop fiddling with init.ora or spfile parameters. These affect the entire database instance, not just the problem you are seeing!

Anyways: just to let folks know the final outcome of this long saga.

Speaking of the Oracle Meetup, here is a fine bunch of folks at the last one:

Some of you might recognize, in the centre, Graham Wood from Oracle and Alex Gorbachev - the Meetup dad - from Pythian. And a lot of other fine folks from the Sydney dba gang, as well as some local Oracle folks.
Me? I'm behind the camera!

On a different note, I recently attended the first large format photography meeting in Sydney, with the APUG folks. Some of the fine equipment on display:

Man! I wish I had time to chase this form of photography.
Looks very promising, for gearhead geeks like me!

As is, I made use of old faithful: the Zeiss rangefinder. Here are some more shots:

Catchyalata, folks!


11gr2: it looks like someone is listening, after all...

Some of you folks might recall my 2008 wishlist for Oracle.

The number one pet peeve was the need to create the initial segment of any data object even when it is empty.

A big no-no for products such as Peoplesoft, where in a typical installation one gets 25000 tables and 35000 indexes of which only around 1000 are ever filled with any data.

Well, it appears someone at Oracle is reading this blog, after all:

This is it, right there in fresh 11gr2!
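For anyone who hasn't seen the feature yet: with 11gR2's deferred segment creation, a table gets no initial extent until the first row actually arrives. A quick sketch using the documented syntax — the table name below is made up for illustration:

```sql
-- Instance-wide default (11gR2 ships with this parameter set to TRUE):
ALTER SYSTEM SET deferred_segment_creation = TRUE;

-- Or spelled out per table: no segment is allocated at CREATE time.
CREATE TABLE ps_example_tbl (
  emplid   VARCHAR2(11),
  calc_amt NUMBER
) SEGMENT CREATION DEFERRED;

-- The segment only materializes once data turns up:
INSERT INTO ps_example_tbl VALUES ('KU0001', 100);
```

For a Peoplesoft install, that means the tens of thousands of tables and indexes that never hold a row simply never allocate storage.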

I had given up hope that Oracle development would stop adding useless new features and instead give dbas the ones they have been asking for.

It appears at least once, they listened!

Now, if only someone would listen again and give us a way to load Statspack historical information into AWR...

It might make the new AWR functionality mildly useful for us: we have nearly 3 years of Statspack data.

And no way to use it as a source for all the excellent Grid/EM AWR analysis tools!

Who knows, this might actually be heard?


No photos on this one, folks: too busy at the moment.