Monday, January 23, 2012

OBIEE 11g: Where's the Compare Repositories Dialog?

Recently I decided to try out the new metadata patching capabilities in OBIEE 11g. The big reason for me was to avoid having to reset my connection pools. Each time I did a three-way merge, the connection pools would get overwritten with their development credentials, and I could never find a way to isolate (leave out) those objects during the merge, so it was off to try patching.

What this patching does is create an XML file of the differences between two RPDs, which you can then edit if you so desire. Most of my duties over the last couple of years have centered around the RPD and front-end work, not nearly as much on the administrative side (i.e. migrations), so I need to catch up with the rest of the world.
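(Side note for the command-line inclined: OBIEE 11g also ships a comparerpd utility that should produce the same sort of patch XML without going through the Admin Tool. The sketch below is from memory, with placeholder passwords, so double-check the flags against your version's documentation; -C points at the modified repository, -G at the original, and -D names the patch file to write. Here I'm treating dev as the modified copy and prod as the original.)

comparerpd -P <dev_rpd_password> -C dev_20120122.rpd -W <prod_rpd_password> -G prod_20120122.rpd -D patch_20120122.xml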

Reading through the docs, I'm told that I need to use the "Compare repositories" dialog. OK, easy enough.



Where is my entry for it?

OK, let's try the Compare entry.



Since I'm using the prod_20120122 RPD, I select the dev_20120122 RPD.

I'm prompted for the Repository Password; it spins for a few seconds, then gives me this:



I select Yes.



OK...where's the Compare Repositories Dialog like the docs say?

I see the icons have changed, signifying differences, but there's no dialog like the docs mention.

What if I select No?



Ah, there it is.

Interestingly, if I maximize that Compare Repositories Dialog, the next time I run the Compare, I can see it plain as day.



Hopefully you'll find this post the next time you set out to learn how to patch your RPD and can't seem to find the Compare Repositories Dialog.

Thursday, December 15, 2011

Trace Data: Not Just For DBAs

On Wednesday, I attended Cary Millsap's Mastering Oracle Trace Data class here in Tampa.

Why?

Why would I go? I work with OBIEE, which is about 42 layers above the database...who cares about trace data? Well, performance is usually the number one item on everyone's list. The page loads too slowly. The report takes too long to run. Whether it's trying to tune the application server (WLS) or figure out why Report A takes so long, we do a lot of performance analysis. In most cases it comes down to the SQL that is run, and by that I mean it's usually bringing back quite a few records. I've seen people try to pull back millions of records and then pivot all that data, putting a nice load on the BI Server and hogging all the resources from everyone else.

On occasion though, there are other things that are going on (with the SQL) that we can't pinpoint.

Recently we had to back out a production implementation because one of the load processes seemed to just hang, threatening to slow down a larger process.

I asked the DBAs why.

Crickets.

Shouldn't that be an answer a DBA provides?

Disk? Network? CPU? Memory? Which one?

Crickets. (I didn't ask those exact questions, I think I said, "Reads? Writes? Network? Load?")

That is just one of the reasons I wanted to attend Mr. Millsap's class. That, and I've heard he's well regarded and does a pretty decent job presenting. OK, I admit it, I just want to show the DBA up. There, said it.

I really shouldn't have to, though. It's supposed to be a partnership: they don't know much about OBIEE, so I help them there, and I expect help with things like this.

Why? Part II

If you are a developer, understanding trace data will make you better. You'll no longer have to guess, you'll know.

Of course there's what I hinted at above, being able to go to your DBA(s) and prove something. No better feeling in the world.

How?

MR Trace is by far the easiest. It's integrated with SQL Developer. It's a great place to start.

MR Tools - For the more advanced professional. Mostly geared towards the DBA type, but incredibly useful to developers as well. It includes:

- mrskew: your trace file profiler and data miner
- mrls: your trace file lister
- mrcallrm: your trace file correction fluid
- mrtim: your trace file tim calculator
- mrtimfix: your trace file time value fixer

Method R Profiler:
The classic tool that started it all, the Method R Profiler is software that makes simple work of knowing exactly why your application consumes the response time it does. With minimal training, a Method R Profiler user can—in just minutes—identify the root cause of an Oracle-based application performance problem, propose sensible solutions to it, and predict the end-user response time impact of each proposed solution.
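All of these tools work on Oracle's extended SQL trace files, so you need a trace file to feed them. If you want to generate one to play with, something like this from SQL*Plus does the trick (a quick sketch; the session_id and serial_num values are placeholders you'd look up in V$SESSION):

SQL> ALTER SESSION SET tracefile_identifier = 'report_a';
SQL> EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
SQL> -- run the slow statement(s) here
SQL> EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

To trace somebody else's session (say, the BI Server's connection), DBMS_MONITOR does the same job:

SQL> EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567, waits => TRUE, binds => TRUE);
SQL> EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);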

There are, of course, other products; check them out here.

Ask Mr. Millsap to come to your town. Try out MR Trace. You won't be sorry you did.

Tuesday, October 18, 2011

KScope + DevOps

Last year I had the pleasure of putting the Sunday Symposium together for KScope 11; this year I have completed my takeover of the Database track by becoming the track lead.

I thought this was the best job ever, then I was attacked Nancy Kerrigan style by my handlers.

All that said, I think I've gathered a pretty good group of people to help review and select the abstracts for next year's conference (San Antonio, TX).

There will be 4 sub-tracks this year:
- Design/Data Modeling
- Maintenance (Performance, Tuning, Upgrades)
- MySQL
- (Dev)Operations

The one I am most excited about is the (Dev)Operations sub-track, aka, DevOps.

What is DevOps?

I'm glad you asked.

"DevOps" is an emerging set of principles, methods and practices for communication, collaboration and integration between software development (application/software engineering) and IT operations (systems administration/infrastructure) professionals.[1] It has developed in response to the emerging understanding of the interdependence and importance of both the development and operations disciplines in meeting an organization's goal of rapidly producing software products and services.

I am not necessarily a fan of the movement, but I am a fan of the principles behind it.

Every developer has a story about working with an evil DBA. Likewise, every DBA has a story about some application that went to production where they were left completely out of the process.

But it is more than just a simple "Can't we all just get along?" plea; this is about creating better software and streamlining processes.

My personal experience has been one of woefully little cooperation, at any level. Our thought, our hope, is that this will help give other Oracle professionals better ideas on how to start down this road.

If you are interested in this topic, sign up. If you want to present on this (or any other) topic, register here.

Tuesday, October 11, 2011

Good DBA, Bad DBA, Deadlock

By Enrique Aviles

A few days ago a fellow DBA asked me to review an email he received from a developer. In the email, the developer explained his application was affected by database errors and asked us to check the attached file for details. The error was a database deadlock. Attached to the email was the trace file Oracle generates whenever a deadlock occurs in the database. I don’t see deadlocks regularly so I hardly ever need to dissect one of those trace files. The trace file contained the following information:
Deadlock graph:
                       ---------Blocker(s)--------  ---------Waiter(s)---------
Resource Name          process session holds waits  process session holds waits
TX-002b0006-00000968        23     223     X             25      35           X
TX-002c0007-00000b13        25      35     X             23     223           X
 
session 223: DID 0001-0017-000163D0     session 35: DID 0001-0019-00002809
session 35: DID 0001-0019-00002809      session 223: DID 0001-0017-000163D0
 
Rows waited on:
  Session 223: obj - rowid = 0001FBA6 - AAAfumAAFAAAWikAAA
  (dictionary objn - 129958, file - 5, block - 92324, slot - 0)
  Session 35: obj - rowid = 0001FBA6 - AAAfumAAFAAAWikAAB
  (dictionary objn - 129958, file - 5, block - 92324, slot - 1)
Session 223 holds an exclusive lock on a row and session 35 holds an exclusive lock on a different row. Session 35 wants to lock session 223's row and vice versa. This clearly shows there is a deadlock. Immediately following this section, the SQLs involved in the deadlock are shown in the trace file. I was expecting to see two different queries, but the current session and the "other" session executed exactly the same SQL:
UPDATE PERSON_TAB SET PERSON_ID=:1, 
  NAME=:2, FIRST_NAME=:3, 
  MIDDLE_NAME=:4, 
  LAST_NAME=:5, 
  DATE_OF_BIRTH=:6, 
  PHONE_NUMBER=:7, 
  ALT_PHONE_NUMBER=:8, 
  FAX_NUMBER=:9, 
  E_MAIL=:10 
where DATA_SOURCE_ID=:11
The fact that the same UPDATE was executed by both sessions against the same table confused me for a moment. For some reason I wanted to see two different tables but found the same table in both UPDATEs. I started thinking one session updated a row and another session wanted to update the same row. In that scenario the second session would just wait for the first session to either commit or roll back the update; once that happens, the exclusive lock on the row is released and the UPDATE from the second session goes through. How can that cause a deadlock?

As you can tell, I didn't read the trace file closely enough. The rowids above are different, so the sessions were trying to update different rows. Once again I rushed to faulty reasoning, thinking two sessions updating two different rows should not cause a deadlock. Clearly, Oracle is able to handle two sessions updating two different rows with ease; they are completely independent transactions, so there shouldn't be a deadlock. (Remember, I don't analyze deadlock trace files on a daily basis, so that's my defense for not being able to immediately explain what caused it.)

After a few moments trying to imagine what could have caused the deadlock, I was able to see the full picture: the first session updates row 1 and the second session updates row 2; then the first session tries to update row 2 and the second session tries to update row 1. That sequence causes a deadlock. To validate my reasoning I opened two SQL*Plus sessions and ran the following:
On session #1:
SQL> CREATE TABLE T (N NUMBER);
SQL> INSERT INTO T VALUES (1);
SQL> INSERT INTO T VALUES (2);
SQL> COMMIT;
SQL> UPDATE T SET N = 10 WHERE N = 1;
On session #2:
SQL> UPDATE T SET N = 20 WHERE N = 2;
On session #1:
SQL> UPDATE T SET N = 20 WHERE N = 2; (this one blocks because it’s locked by session #2)
On session #2
SQL> UPDATE T SET N = 10 WHERE N = 1; (this one causes a deadlock)
After a few seconds the database reported a deadlock on session #1 and, as a result, the second update on session #1 was lost (the statement was rolled back). After issuing a commit on both sessions I noticed the table didn't contain the rows 1 and 2, but 10 and 20. No updates were lost, because both sessions were trying to update the table with the same values. The same would happen if the UPDATEs on PERSON_TAB contained the same values in all columns; if PHONE_NUMBER differed between the two sessions, one of the values would be lost as a result of the deadlock.

With this information in hand, my colleague replied to the email with a detailed explanation of what caused the deadlock and provided the small test case so the developer could reproduce the issue. We also supplied the SQL showing the table involved in the deadlock.
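The fix itself lives in the application: make every session take its row locks in the same order. Against the little demo table above, the deadlock disappears if both sessions run their UPDATEs in, say, ascending order of N (just a sketch, and certainly not the only way to do it):

On both sessions, in this order:
SQL> UPDATE T SET N = 10 WHERE N = 1;
SQL> UPDATE T SET N = 20 WHERE N = 2;
SQL> COMMIT;

Whichever session reaches row 1 first simply makes the other one wait there, so neither session can end up holding a lock the other needs.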

We acted as good DBAs (we think so) because we took the time to examine the trace file, compose a detailed explanation, and supply steps on how to reproduce the issue.

What would a bad DBA do if faced with the same request? The bad DBA would open the trace file and copy the following section into a reply email:
The following deadlock is not an ORACLE error. It is a
deadlock due to user error in the design of an application
or from issuing incorrect ad-hoc SQL.
The email would close with a simple “fix your code”.