Thursday, February 26, 2009

Google Code

I've created a project on Google Code where I plan to start putting everything that I create, whether I blog about it or not.

http://code.google.com/p/oraclenerd/

I'm trying to figure out whether I should organize it by project, by schema, or by application. I don't really know.

I'll probably start with the Poor Man's Data Vault.

Google has some pretty cool stuff in their apps department there:
http://code.google.com/more/#products-products-android

I'd definitely like to check out the Chart API


http://chart.apis.google.com/chart?cht=p3&chd=t:60,40&chs=250x100&chl=Hello|World

How cool is that?

Also want to check out their Visualization API. A nice example can be found here.

So I strayed?

Has anyone else out there used Google Code?

ORA-08177 can't serialize access for this transaction

We've been getting this error recently and we are in the process of researching the problem.

I figured I might as well share some of my reference material in case someone else may find it useful.

First, what is it? From the docs:
ORA-08177: can't serialize access for this transaction
Cause: Encountered data changed by an operation that occurred after the start of this serializable transaction.

Action: In read/write transactions, retry the intended operation or transaction.
There are four isolation levels when it comes to transaction processing, all defined by the ANSI/ISO SQL standard (SQL92):
  • READ UNCOMMITTED
  • READ COMMITTED
  • SERIALIZABLE
  • REPEATABLE READ
Oracle handles these in the following ways:

READ UNCOMMITTED:
Oracle Database never permits "dirty reads." Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle Database.

READ COMMITTED
Oracle Database meets the READ COMMITTED isolation standard. This is the default mode for all Oracle Database applications. Because an Oracle Database query only sees data that was committed at the beginning of the query (the snapshot time), Oracle Database actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

SERIALIZABLE
Oracle Database supports this isolation level. A serializable transaction sees only data committed as of the start of the transaction (plus its own changes); if it tries to modify data changed by another transaction that committed after it began, ORA-08177 is raised.

REPEATABLE READ
Oracle Database does not normally support this isolation level, except as provided by SERIALIZABLE.
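In Oracle you pick the level per transaction or per session. The two statements worth knowing (SERIALIZABLE is just the example here; READ COMMITTED works the same way):

```sql
-- As the first statement of a transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Or for every transaction in the session:
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;

-- And back to the default:
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED;
```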

I've learned more about transaction isolation levels in the last few days than I ever cared to know.

READ COMMITTED is the default for Oracle. There's a possibility that our calling applications are using SERIALIZABLE via the driver. I've never worked with SERIALIZABLE.

According to the trace files it is happening on both INSERTs and SELECTs and we've seen multiple occurrences on the same 2 statements.

I suspect, but can't prove, that it's (evil) trigger related. Any pointers on how to prove that, or on dealing with this issue in general, would be most helpful.
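Since the documented Action is "retry the intended operation or transaction," a retry wrapper is the usual stopgap while you research the root cause. A minimal sketch, with the table, column, and retry limit made up for illustration:

```sql
DECLARE
  e_cannot_serialize EXCEPTION;
  PRAGMA EXCEPTION_INIT( e_cannot_serialize, -8177 );
BEGIN
  FOR i IN 1 .. 3  -- arbitrary retry limit
  LOOP
    BEGIN
      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

      -- The real work goes here; this UPDATE is a placeholder.
      UPDATE accounts
         SET balance = balance - 100
       WHERE account_id = 1;

      COMMIT;
      EXIT;  -- success, stop retrying
    EXCEPTION
      WHEN e_cannot_serialize THEN
        ROLLBACK;  -- clean slate, then the loop restarts the transaction
    END;
  END LOOP;
END;
/
```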

Update
I did find this post where the author says:
Once more - in the vaste majority of cases usage of serializable isolation level is a design error that leads to non-scalable applications. Most databases enforce it with very restictive locks that kill concurrency and in case of Oracle and other databases that rely on multiversioning (PosgreSQL, Interbase) you have to be ready to failed ("Can not serialize") transactions. Latter is, usually, tolerable as soon as application is prepared for them (normally, it is enough just to restart transaction). For Oracle quite good discussion of this topic (as almost any other Oracle-related topic) may be found on asktom.oracle.com (http://asktom.oracle.com).
Unfortunately it was just a statement without the proof to back it up. I'm trying to get away from just making blanket statements without providing the evidence.

Wednesday, February 25, 2009

Coding is Easy - II

Yesterday I said Coding is Easy.

Some of the comments here and on reddit got me thinking a little bit (more).

I'll expand on that as it relates to the database world. First though, how do you measure code complexity? There has to be a scientific way, rather than just stating, "This code is too complex," right?

Cyclomatic Complexity
...(or conditional complexity) is a software metric (measurement). It was developed by Thomas J. McCabe in 1976 and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.

The concept, although not the method, is somewhat similar to that of general text complexity measured by the Flesch-Kincaid Readability Test.

Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to the commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command.
Later in the Key Concepts section:
The cyclomatic complexity of a section of source code is the count of the number of linearly independent paths through the source code. For instance, if the source code contained no decision points such as IF statements or FOR loops, the complexity would be 1, since there is only a single path through the code. If the code had a single IF statement containing a single condition there would be two paths through the code, one path where the IF statement is evaluated as TRUE and one path where the IF statement is evaluated as FALSE.
In the database world, specifically Oracle (PL/SQL), your control statements (IF/END IF, CASE, LOOP, etc.) increase the number of paths through the code.
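To make the quoted example concrete in PL/SQL terms (the function itself is made up): a single IF gives a complexity of 2.

```sql
-- Cyclomatic complexity 2: one decision point, two independent paths.
CREATE OR REPLACE FUNCTION sign_label( p_value IN NUMBER )
  RETURN VARCHAR2
AS
BEGIN
  IF p_value >= 0 THEN
    RETURN 'NON-NEGATIVE';  -- path 1: IF evaluates TRUE
  END IF;
  RETURN 'NEGATIVE';        -- path 2: IF evaluates FALSE
END sign_label;
/
```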

Have you ever seen a CASE statement like this?
CASE
  WHEN some_column = 'VALUE 1' THEN
    do_something;
  WHEN some_column = 'VALUE 2' THEN
    do_something_else;
  WHEN some_column = 'VALUE 3' THEN
    do_something_else_thrice;
and it goes on for another 20 lines or more? Is this a good time to possibly rethink the underlying model? I say it is.

It may lead to the creation of another (lookup) table.

It may mean reorganizing your data in other ways.

It may mean absolutely nothing.
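For the CASE statement above, the lookup-table version might look something like this (table and column names are invented):

```sql
-- Move the value-to-action mapping out of code and into data.
CREATE TABLE action_lookup
( some_column VARCHAR2(30) PRIMARY KEY,
  action_code VARCHAR2(30) NOT NULL );

INSERT INTO action_lookup VALUES ( 'VALUE 1', 'SOMETHING' );
INSERT INTO action_lookup VALUES ( 'VALUE 2', 'SOMETHING_ELSE' );
INSERT INTO action_lookup VALUES ( 'VALUE 3', 'SOMETHING_ELSE_THRICE' );

-- The 20-plus WHEN branches collapse into a single lookup:
SELECT action_code
  FROM action_lookup
 WHERE some_column = 'VALUE 2';
```

Each new value then becomes an INSERT instead of another WHEN branch.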

From Code Simplicity:
Often, if something is getting very complex, that means that there is an error somewhere far below the level that things are getting complex on.

For example, it’s very difficult to make a car move if it has square wheels. You’re going to be spending lots and lots of time figuring out how to make the car work, when really it should just have round wheels.

Any time there’s an “unsolvable complexity” in your program, it’s because there’s something fundamentally wrong with it. If the problem is “unsolvable” at one level, maybe you should back up and look at what’s underlying the problem. Maybe you put square wheels on the car, and now you’re trying to figure out how to make it go fast.

Programmers actually do this quite often. For example, "I have this terribly messy code, now it's really complex to add a new feature!" Well, your fundamental problem there is that the code is messy. Clean it up, make the already-existing code simple, and suddenly adding the new feature will be simple.
At the risk of just quoting the entire Code Simplicity article...ah, who am I kidding? He says it much better than I could.
So when things get complex, back up and look at the problem that you're trying to solve. Take a really big step back. You are allowed to question everything. Maybe you thought that adding 2 and 2 was the only way to get 4, and you didn't think about adding 1 and 3 instead, or just skipping the addition entirely and just putting "4" there. The "problem" is "How do I get 4?" Any method of solving that problem is acceptable, so figure out what the best method would be, for the situation that you're in.

Discard your assumptions. Really look at the problem that you’re trying to solve, and think about the simplest way to solve that problem. Not “How do I solve this problem using my current code?” Not “How did Professor Bob solve this problem in his program?” No, just how, in general, in a perfect world, should that problem be solved? From there, you might see how your code needs to be re-worked. Then you can re-work your code. Then you can solve the problem.
Back up and take a look at the problem you are trying to solve.

Indeed.

A good design WILL make your coding less complex. In my case, that means a good data model.


For other references see:
Linux Journal, Programming Tools: Code Complexity Metrics
CodeProject, Cyclomatic Code Complexity Analysis for Microsoft .NET Applications
ONJava, Code Improvement Through Cyclomatic Complexity
Google, Let Me Google That For You

Tuesday, February 24, 2009

Coding is Easy

There, I said it. What a relief too.

Coding is easy. Once you learn the syntax of your language of choice, hammering out code is easy. The hard part is the design onto which you apply your code. The better the design, the easier the code.

With a nod to Kathy Sierra and her wonderful graphs...



I don't mean to imply that there is a sweet spot there where design goodness and complexity of code meet, just bear with me.

My point is simple: the better the underlying design, the easier it is to code against. Naturally I see this as a database thing, as that's where my level of "expertise" lies, but it applies to all software development. Java, Ruby, .Net, whatever. In the OO world it would probably be class design. In my world, it's all about the data model.

If it were so easy, why the heck do they pay us so much? I ask myself that every day. Since I wasn't talented enough to play professional baseball and make oodles of money, I consider myself extremely fortunate to make the money I do at something I so thoroughly enjoy. I sometimes feel guilty about it too.

But why is it easy? I don't really know how to answer that. It's just been my experience. I would also imagine there is always an exception; as I don't write programs for the Space Shuttle, I can't attest to that, but the point is the same. If you have a good model (design), coding is easy.

Thursday, February 19, 2009

No DDL in Production Environment - III

Part I.

Part II.

Alright, I've officially hit scope creep.

We've recently implemented VPD to further protect sensitive data. We're using roles to manage the security as opposed to user-level privileges. We as DBAs don't have (nor want) access to that sensitive data, but we're DBAs, we can do whatever we want, right?

Anyway, by default we will NOT have these roles. They will only be enabled when needed. How do you manage that though? Auditing is already turned on so that's a good thing. But it would also be nice to know when someone granted one of these secure roles to a user or role. What better way than event based triggers?

I've come to appreciate triggers of late, at least the database or event level triggers. I still think application triggers are a means of last resort.

How does this tie in with No DDL in Production? A GRANT statement is DDL, of course, but it's more of an access or security type, outside the original intent of this "project." It would be nice to be able to track it though. And shouldn't you be required to have a ticket prior to giving someone one of these secure roles?
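The tracking piece could be a DDL event trigger. A hedged sketch: ORA_GRANTEE and ora_name_list_t are documented event attribute pieces, but I'm assuming ORA_DICT_OBJ_NAME returns the role being granted (verify before relying on it), and the table and column names are guesses at the model described below.

```sql
-- Sketch: log every GRANT of a role registered as "secure."
CREATE OR REPLACE TRIGGER track_secure_grants
AFTER GRANT ON DATABASE
DECLARE
  l_grantees ora_name_list_t;
  l_count    BINARY_INTEGER;
BEGIN
  l_count := ora_grantee( l_grantees );
  FOR i IN 1 .. l_count
  LOOP
    -- Only record grants of roles listed in SECURE_ROLES.
    INSERT INTO secure_grants ( role_name, grantee, granted_by, granted_date )
    SELECT sr.role_name, l_grantees( i ), ora_login_user, SYSDATE
      FROM secure_roles sr
     WHERE sr.role_name = ora_dict_obj_name;  -- assumed: the granted role
  END LOOP;
END track_secure_grants;
/
```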

So I've added 2 new tables, SECURE_ROLES and SECURE_GRANTS.

SECURE_ROLES
Stores the role name of...you guessed it, your secure roles.

    SECURE_GRANTS
    Tracks the history of GRANTs on those secure roles. If a grant is temporary, it would be easy enough to create a job to scan the table and send out alerts. Which just made me think of another attribute: the amount of time needed. Perhaps defaulting to 1 hour or something.

    Here's the model so far:



    I've actually left off 3 other tables I just added. Serious scope creep. The good thing is I haven't written a lick of code yet. Let's call this the requirements/design phase.

    I think I need a better name for this project too. Any suggestions out there?

    Wednesday, February 18, 2009

    OBIEE: Don't Change the Administrator Password!

    I learned the hard way (naturally) that you shouldn't go and change the password for the Administrator account haphazardly.

    Yes, I use the Administrator password and not a named account. Yes, I'm trying to change that.

    I've been handing out credentials to folks lately and decided to change the password after everyone had their new accounts. Got a call yesterday, "Hey, I'm not getting my reports via email any longer."

    Well, we had the big deployment last week and I figured it was somehow related. Log into the OBIEE web site (wtf is that called? Dashboard? Answers?), go to Delivers and see some alerts. Click through and read the error message: invalid username/password. What?

    Of course I had forgotten I had changed the Administrator password so I ran around trying to figure out why that could be. Thankfully I didn't run around too long.

    Could the password be stored in some config file (which I don't have access to)? I asked the DBA and he wasn't sure.

    I tried to use the "Run As" functionality on the iBot using the recipient's username. Wait one day...Nope. That didn't work. (Can you tell I like the hit-or-miss strategy?)

    DBA sends me this link.

    1. Shutdown OAS and OBIEE services.
    2. Change the password in the BI server repository offline.
    3. Run crypto tools for both Publisher and BI Scheduler.

    cryptotools credstore -add -infile D:\OracleBIData\web\config\credentialstore.xml
    BIPublisher Alias : bipublisheradmin
    BIScheduler Alias : admin

    4. Change password in Scheduler Config in job manager
    5. Start OAS server and OBIEE services and update the JDBC connections for BI Server Answers
    6. Update security configuration for the BI server security model with the new password.
    7. Restart all the servers again. If second restart is not done then BIPublisher reports based on answers will not work.
    Stop it!

    You really have to do all that to change the password? Kidding right?

    Needless to say, I reverted the password back to the original. I thought it would be quick and easy, but apparently not.

    No DDL in Production Environment - II

    Part I.

    I said I was going to flesh that out a bit more. After some more thought, I think it could be a nice feature for DBAs.

    I'll start with what needs to be stored.

    DEPLOYMENTS

    • DEPLOYMENTID - Surrogate key
    • NAME - Name to give the deployment. Could be a ticket number or a change control number. Your choice.
    • DESCRIPTION - What are you doing? Maintenance, the description from the change control ticket, etc.
    • START_DATE - Defaults to SYSDATE. Used to gather the AFFECTED_OBJECTS at the end.
    • STARTED_BY - Defaults to USER
    • END_DATE - Date the deployment ends.
    • ENDED_BY - Will typically be the same as the STARTED_BY, but we'll capture it at the end just in case.
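In DDL form, DEPLOYMENTS might look like this (datatypes and sizes are my guesses):

```sql
CREATE TABLE deployments
( deploymentid NUMBER         PRIMARY KEY,
  name         VARCHAR2(100)  NOT NULL,
  description  VARCHAR2(4000),
  start_date   DATE           DEFAULT SYSDATE NOT NULL,
  started_by   VARCHAR2(30)   DEFAULT USER NOT NULL,
  end_date     DATE,
  ended_by     VARCHAR2(30) );
```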
    INVALID_OBJECTS and STAGE_CODES (not to be confused with stage coach).


    STAGE_CODE
    • STAGECODE - Nothing really special about this one here. Just a way to maintain data integrity without throwing it into a CHECK CONSTRAINT.
    INVALID_OBJECTS
    • INVALIDOBJECTID - Surrogate Key
    • DEPLOYMENTID - Foreign Key from the DEPLOYMENTS table.
    • STAGECODE - Foreign key from the STAGE_CODES table.
    • OWNER - Self explanatory
    • OBJECT_NAME - Self explanatory
    And finally, AFFECTED_OBJECTS:


    AFFECTED_OBJECTS
    • AFFECTEDOBJECTID - Surrogate key.
    • DEPLOYMENTID - FK from DEPLOYMENTS
    • OWNER - Self explanatory
    • OBJECT_NAME - Self explanatory
    And here's how it all fits together:


    Any ideas on what else I can capture here?

    Monday, February 16, 2009

    No DDL in Production Environment

    There might be a couple of reasons you want to disable DDL in a production environment (other than the normal operations of the database). One being you don't have your schemas locked down (i.e. no service accounts). Another might be the need to track all changes in the environment for reporting/auditing purposes.

    It's fairly easy to do.

    First, create an Application Context:
    CREATE OR REPLACE
    CONTEXT secure_ddl_context
    USING secure_ddl
    ACCESSED GLOBALLY;
    The USING clause names the package/procedure/function that is allowed to set the context.

    Next, create a procedure that will set the context using DBMS_SESSION:
    CREATE OR REPLACE
    PROCEDURE secure_ddl( p_ticket IN VARCHAR2 )
    AS
      l_sessionid NUMBER := SYS_CONTEXT( 'USERENV', 'SESSIONID' );
    BEGIN
      dbms_session.set_identifier( l_sessionid );

      dbms_session.set_context
        ( namespace => 'SECURE_DDL_CONTEXT',
          attribute => 'CAN_DO_DDL',
          value     => 'Y',
          username  => SYS_CONTEXT( 'USERENV', 'SESSION_USER' ),
          client_id => l_sessionid );
    END secure_ddl;
    /
    Finally, create a trigger that fires BEFORE DDL.
    CREATE OR REPLACE
    TRIGGER enable_ddl
    BEFORE DDL ON DATABASE
    WHEN ( ora_dict_obj_owner = 'APPLICATION_OWNER' )
    BEGIN
      IF SYS_CONTEXT( 'SECURE_DDL_CONTEXT', 'CAN_DO_DDL' ) <> 'Y'
         OR SYS_CONTEXT( 'SECURE_DDL_CONTEXT', 'CAN_DO_DDL' ) IS NULL
      THEN
        raise_application_error( -20001, 'must enable DDL' );
      END IF;
    END enable_ddl;
    /
    So let's test it out.

    I'll create 2 users: APPLICATION_OWNER (see trigger) and TESTUSER. APPLICATION_OWNER is the schema that will store all your application objects and code. TESTUSER will be granted the DBA role, which allows it to create objects in the APPLICATION_OWNER schema.
    CREATE USER application_owner IDENTIFIED BY application_owner
    DEFAULT TABLESPACE users
    QUOTA 1G ON users;

    GRANT create session, resource TO application_owner;

    CREATE USER testuser IDENTIFIED BY testuser
    DEFAULT TABLESPACE users
    QUOTA 5M ON users;

    GRANT dba TO testuser;
    Let's test it out. Login as TESTUSER and create a table.
    TESTUSER@RMDEV2>set sqlprompt TESTUSER@11G>
    TESTUSER@11G>CREATE TABLE application_owner.t ( x NUMBER );
    CREATE TABLE application_owner.t ( x NUMBER )
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: must enable DDL
    ORA-06512: at line 5
    Nice. A user with DBA privileges cannot perform DDL in the APPLICATION_OWNER schema. Since the SECURE_DDL procedure is not in the SYS schema, TESTUSER can call it to enable DDL.
    TESTUSER@11G>EXEC CJUSTICE.SECURE_DDL( 'TICKET 1' );

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    TESTUSER@11G>CREATE TABLE application_owner.t( x NUMBER );

    Table created.
    So what about APPLICATION_OWNER? Can that user perform DDL on objects in its own schema (see #1 above)? I've logged into the APPLICATION_OWNER schema:
    APPLICATION_OWNER@11G>CREATE TABLE t ( x NUMBER );
    CREATE TABLE t ( x NUMBER )
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-20001: must enable DDL
    ORA-06512: at line 5
    Very cool.

    Again, this method could be helpful if:
    1. You don't have your schema (application) accounts locked down
    2. You want to track all DDL in your production environment.

    The procedure can be easily expanded into a series of calls, specifically ENABLE_DDL and DISABLE_DDL. When you call ENABLE_DDL, a record is written to a table with the ticket number and the start time. When you call DISABLE_DDL, the end time is captured and you can further query DBA_OBJECTS and capture all objects that changed during that window.
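A rough sketch of that pair, assuming a simple log table (the names and shapes here are mine, not a finished design):

```sql
CREATE TABLE ddl_windows
( ticket     VARCHAR2(30) NOT NULL,
  start_time DATE DEFAULT SYSDATE NOT NULL,
  end_time   DATE );

CREATE OR REPLACE PROCEDURE enable_ddl( p_ticket IN VARCHAR2 )
AS
BEGIN
  INSERT INTO ddl_windows ( ticket ) VALUES ( p_ticket );
  secure_ddl( p_ticket );  -- sets the context, as shown earlier
END enable_ddl;
/

CREATE OR REPLACE PROCEDURE disable_ddl( p_ticket IN VARCHAR2 )
AS
BEGIN
  UPDATE ddl_windows
     SET end_time = SYSDATE
   WHERE ticket = p_ticket
     AND end_time IS NULL;
  -- Changed objects can then be pulled from DBA_OBJECTS where
  -- LAST_DDL_TIME falls between start_time and end_time.
END disable_ddl;
/
```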

    In a future post, I'll work to expand on this concept to provide this kind of tracking.

    Friday, February 13, 2009

    Is the Relational Database Doomed?

    Straight from ReadWriteWeb, Is the Relational Database Doomed? Found via Mr. Eddie Awad.

    Mr. M I'm sure would say that it is. Find our conversations here and here.

    I don't know what the future may be. I do know that there are many people who don't understand, truly understand, how to use a database. My experience so far has been that those who don't understand far outnumber those who do.

    Mr. M didn't care, didn't want to know what the underlying technology was. He just wanted a set() and get() method. I can't blame him really either. I wish I had been able to work with him more closely on an application to prove to him what it could do, but alas it was never to be.

    I can also argue until I'm blue in the face (and I do) that it could be done far more easily and efficiently in Oracle 99% of the time.

    In the article above I did laugh a little bit at the Key/Value concept, reminds me of some articles I've read on AskTom and most recently his post here on the 4 table design.

    Anyway, it's an interesting read. I wonder if it will ever come true?

    Thursday, February 12, 2009

    SELECT INTO or LOOP?

    I've always used the SELECT INTO syntax to retrieve data into variables. Lately though, I've seen the LOOP structure used.

    Personally, I like the SELECT INTO, it seems clear and concise to me. I didn't want to just dismiss the LOOP offhand so I ran a couple of tests.

    First I create my table with random strings:
    CREATE TABLE t
    AS
    SELECT dbms_random.string( 'a', TRUNC( dbms_random.value( 5, 30 ) ) ) x
    FROM dual
    CONNECT BY level <= 10000000;
    Yes, that's 10 Million rows. I just wanted to have a nice number for total time. Everything small returned a big fat zero.

    I picked a string out of the table and performed the same test 5 times with each method searching for the same string.
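The d procedure used below isn't shown in the post; presumably it's just shorthand for DBMS_OUTPUT.PUT_LINE, something like:

```sql
CREATE OR REPLACE PROCEDURE d( p_text IN VARCHAR2 )
AS
BEGIN
  dbms_output.put_line( p_text );
END d;
/
```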

    SELECT INTO
    DECLARE
      l_start_time NUMBER;
      l_end_time   NUMBER;
      l_total_time NUMBER;
      l_dummy      VARCHAR2(1);
    BEGIN
      l_start_time := dbms_utility.get_time;

      d( l_start_time );

      SELECT 'Y'
        INTO l_dummy
        FROM t
       WHERE x = 'XFLCTdLXZjwlHBAqOgdddUCu';

      l_end_time := dbms_utility.get_time;

      d( l_end_time );
      l_total_time := l_end_time - l_start_time;
      d( 'Total Time: ' || l_total_time );
    END;
    /
    Here are the results:
    SQL>/
    Total Time: 86

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.85
    SQL>/
    Total Time: 87

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.87
    SQL>/
    Total Time: 86

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.87
    SQL>/
    Total Time: 87

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.87
    SQL>/
    Total Time: 87

    PL/SQL procedure successfully completed.
    Not bad.

    LOOP
    DECLARE
      l_start_time NUMBER;
      l_end_time   NUMBER;
      l_total_time NUMBER;
      l_dummy      VARCHAR2(1);
    BEGIN
      l_start_time := dbms_utility.get_time;

      d( l_start_time );

      FOR i IN ( SELECT 'Y' y
                   FROM t
                  WHERE x = 'XFLCTdLXZjwlHBAqOgdddUCu' )
      LOOP
        l_dummy := i.y;
        EXIT;
      END LOOP;

      l_end_time := dbms_utility.get_time;

      d( l_end_time );
      l_total_time := l_end_time - l_start_time;
      d( 'Total Time: ' || l_total_time );
    END;
    /
    And the results:
    SQL>/
    Total Time: 86

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.87
    SQL>/
    Total Time: 88

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.89
    SQL>/
    Total Time: 88

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.89
    SQL>/
    Total Time: 88

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.89
    SQL>/
    Total Time: 87

    PL/SQL procedure successfully completed.
    Negligibly slower. Nothing to write home about though.

    Let's throw an index on there for fun.
    CREATE INDEX t_idx
    ON t( x );
    SELECT INTO
    SQL>DECLARE
    2 l_start_time NUMBER;
    3 l_end_time NUMBER;
    4 l_total_time NUMBER;
    5 l_dummy VARCHAR2(1);
    6 BEGIN
    7 l_start_time := dbms_utility.get_time;
    8
    9 SELECT 'Y'
    10 INTO l_dummy
    11 FROM t
    12 WHERE x = 'XFLCTdLXZjwlHBAqOgdddUCu';
    13
    14 l_end_time := dbms_utility.get_time;
    15
    16 l_total_time := l_end_time - l_start_time;
    17 d( 'Total Time: ' || l_total_time );
    18 END;
    19 /
    Total Time: 1

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 1

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.00
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.00

    LOOP
    SQL>DECLARE
    2 l_start_time NUMBER;
    3 l_end_time NUMBER;
    4 l_total_time NUMBER;
    5 l_dummy VARCHAR2(1);
    6 BEGIN
    7 l_start_time := dbms_utility.get_time;
    8
    9 FOR i IN ( SELECT 'Y' y
    10 FROM t
    11 WHERE x = 'XFLCTdLXZjwlHBAqOgdddUCu' )
    12 LOOP
    13 l_dummy := i.y;
    14 EXIT;
    15 END LOOP;
    16
    17 l_end_time := dbms_utility.get_time;
    18
    19 l_total_time := l_end_time - l_start_time;
    20 d( 'Total Time: ' || l_total_time );
    21 END;
    22 /
    Total Time: 1

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.00
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.

    Elapsed: 00:00:00.01
    SQL>/
    Total Time: 0

    PL/SQL procedure successfully completed.
    OK, there doesn't seem to be much of a difference performance-wise.

    Then I got to thinking (look out!). What about errors, no_data_found or too_many_rows? With SELECT INTO those are thrown; with the LOOP, they are not.

    SELECT INTO, no_data_found
    SQL>DECLARE
    2 l_start_time NUMBER;
    3 l_end_time NUMBER;
    4 l_total_time NUMBER;
    5 l_dummy VARCHAR2(1);
    6 BEGIN
    7 l_start_time := dbms_utility.get_time;
    8
    9 SELECT 'Y'
    10 INTO l_dummy
    11 FROM t
    12 WHERE x = 'ORACLENERD';
    13
    14 l_end_time := dbms_utility.get_time;
    15
    16 l_total_time := l_end_time - l_start_time;
    17 d( 'Total Time: ' || l_total_time );
    18 END;
    19 /
    DECLARE
    *
    ERROR at line 1:
    ORA-01403: no data found
    ORA-06512: at line 9
    LOOP, no record found
    SQL>DECLARE
    2 l_start_time NUMBER;
    3 l_end_time NUMBER;
    4 l_total_time NUMBER;
    5 l_dummy VARCHAR2(1);
    6 BEGIN
    7 l_start_time := dbms_utility.get_time;
    8
    9 FOR i IN ( SELECT 'Y' y
    10 FROM t
    11 WHERE x = 'ORACLENERD' )
    12 LOOP
    13 l_dummy := i.y;
    14 EXIT;
    15 END LOOP;
    16
    17 l_end_time := dbms_utility.get_time;
    18
    19 l_total_time := l_end_time - l_start_time;
    20 d( 'Total Time: ' || l_total_time );
    21 END;
    22 /
    Total Time: 0

    PL/SQL procedure successfully completed.
    SELECT INTO, too_many_rows
    SQL>INSERT INTO t ( x ) VALUES ( 'ORACLENERD' );

    1 row created.

    Elapsed: 00:00:00.04
    SQL>INSERT INTO t ( x ) VALUES ( 'ORACLENERD' );

    1 row created.

    Elapsed: 00:00:00.01
    SQL>DECLARE
    2 l_start_time NUMBER;
    3 l_end_time NUMBER;
    4 l_total_time NUMBER;
    5 l_dummy VARCHAR2(1);
    6 BEGIN
    7 l_start_time := dbms_utility.get_time;
    8
    9 SELECT 'Y'
    10 INTO l_dummy
    11 FROM t
    12 WHERE x = 'ORACLENERD';
    13
    14 l_end_time := dbms_utility.get_time;
    15
    16 l_total_time := l_end_time - l_start_time;
    17 d( 'Total Time: ' || l_total_time );
    18 END;
    19 /
    DECLARE
    *
    ERROR at line 1:
    ORA-01422: exact fetch returns more than requested number of rows
    ORA-06512: at line 9
    If choosing between the 2 methods, performance is not necessarily a consideration. However, one method will throw errors and one will not. If you use the LOOP method, you'll have to add a check like IF l_dummy IS NOT NULL THEN, or something along those lines. If you use the SELECT INTO method, you can either nest the statement in a BEGIN END block or let the exception propagate.

    I prefer to let it complain loudly with SELECT INTO. If I add a BEGIN END block to catch no_data_found or too_many_rows, I want to know exactly why I'm doing it.

    One more thing about the LOOP method: what if more than one row matches the criteria (input) you specify? You'll get a row, but you can't be sure it's the row. That's not good.

    I would recommend SELECT INTO over the LOOP method. It's prettier (naturally) and it will complain loudly if you don't specify all of the proper predicates (input).
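For completeness, here's the nested-block shape I'd use when I do want to handle those exceptions myself (the error numbers and messages are arbitrary; T is the table from the tests above):

```sql
DECLARE
  l_dummy VARCHAR2(1);
BEGIN
  SELECT 'Y'
    INTO l_dummy
    FROM t
   WHERE x = 'XFLCTdLXZjwlHBAqOgdddUCu';
EXCEPTION
  WHEN no_data_found THEN
    raise_application_error( -20001, 'no matching row' );
  WHEN too_many_rows THEN
    raise_application_error( -20002, 'more than one row matched' );
END;
/
```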

    Wednesday, February 11, 2009

    Looking for Oracle Financials Support

    My company is currently seeking a company/individual to support and maintain our Oracle Financials application.

    The company is in St. Petersburg (check my LinkedIn profile for specifics).

    Leave a comment or email me.

    Fun with Deployments

    I'm sure many of you have had deployments such as this, but it was my first.

    The day started at 2:00 AM. Joy.

    In the office a little after 3 (40 minute drive) and the place is bustling. There are about 10 people in the office already (we have 50 employees) and 2 on the phone.

    Mr. DBA had a whopping 1 hour of sleep.

    I managed to get about 4 as I fell asleep putting LC (not Lewis Cunningham, Little Chet) to bed.

    To summarize the deployment, massive changes to our core OLTP system which needed to be propagated to the BI/DW environment. We started meeting/planning for this back in December as downtime was required.

    As part of the preparation, a code freeze was implemented last week as well as a deployment (fixes/patches) freeze.

    Before leaving yesterday, I disabled the production OLTP jobs that would run during the deployment window. I then disabled all jobs on the BI/DW side.

    Just before deployment of the database changes, Mr. DBA created a restore point.

    We set up a 2 hour window to deploy and validate. Deployment of the database changes took about 15 minutes. Validation took a little over an hour. Once the call was made that we weren't backing out, I began work on the BI/DW side.

    I had DROPped, ALTERed and CREATEd about 200 objects. Everything went smoothly (3 or 4 practice runs). I turned all the jobs back on.

    I then began copying the 187 OBIEE reports that were affected over to the production environment. I know there are better ways, or at least I've heard of them, but I was going with what I was sure of. Copied the old reports to a new folder and began replacing them.

    Once I was done with that, I went through my checklist of changed reports to make sure they were working (at least from a functional standpoint). Nine of them were still broken. I made fixes to those and moved forward.

    I then fixed and validated our Financial reports because we have a daily 11 AM deadline.

    I did, however, forget to include a few grants on objects I DROPped and reCREATEd. I fixed a few of them in the morning, then notified (and apologized to) the business that they might have issues, copying the Oracle DBA group (we don't really have any other kind) on the objects that might be affected.

    My piece from start to finish took about 4 hours. I was done* by 9.

    Had lunch with my parents (they bought!) and brought them back to meet everyone. I was laughed at (nothing new there) for bringing them in. I think my Mom cried too (might be an only child thing or the fact that it took me 10 years to graduate from college so she thought I would be a lawn man forever).

    I'll still have work to do on the reports as the Business folks begin to validate, but overall it was a big win. The planning and implementation on this deployment was excellent and I was proud to be a part of it.

    Monday, February 9, 2009

    Audit Failed Logon Attempts

    I've been looking at how to send out an alert to the DBA group on failed login attempts.

    In searching through the Database and Client events, the closest thing I could find initially was AFTER LOGON. That didn't make sense because of that little word AFTER. I then realized today that I could use SERVERERROR.

    The SERVERERROR event gives you access to the following Attribute Functions:
    ora_sysevent
    ora_login_user
    ora_instance_num
    ora_database_name
    ora_server_error
    ora_is_servererror
    space_error_info

    I created the trigger using ORA_LOGIN_USER but when I received the email, it would be blank.

    That makes sense, they're not logged in.

    What could I do?

    Google it.

    Well, nothing really there. Some people discussing the same problem, the inability to retrieve the username attempting to login.

    Then I went to Metalink and found Doc ID: 352389.1 (not sure how to link up now with the "new" site). The note talks about enabling auditing (done) and an init.ora parameter, but the username was still elusive.

    I don't know the internals, but Oracle needs to know the username; how else would it know who is logging in?

    Anyway, I settled on the following for the time being.
    CREATE OR REPLACE
    TRIGGER failed_logon_notifications
    AFTER SERVERERROR ON DATABASE
    DECLARE
      b         VARCHAR2(3)   := UTL_TCP.CRLF;
      l_subject VARCHAR2(40)  := 'Alert - Failed Login';
      l_message VARCHAR2(500);
    BEGIN
      IF ora_is_servererror( 1017 ) THEN
        l_message := 'Date/Time: ' || TO_CHAR( SYSDATE, 'mm/dd/yyyy hh24:mi:ss' ) || b;
        l_message := l_message || 'OS User: ' || SYS_CONTEXT( 'USERENV', 'OS_USER' ) || b;
        l_message := l_message || 'Host: ' || SYS_CONTEXT( 'USERENV', 'HOST' ) || b;
        l_message := l_message || 'Terminal: ' || SYS_CONTEXT( 'USERENV', 'TERMINAL' ) || b;
        l_message := l_message || 'IP Address: ' || SYS_CONTEXT( 'USERENV', 'IP_ADDRESS' ) || b;
        l_message := l_message || 'Protocol: ' || SYS_CONTEXT( 'USERENV', 'NETWORK_PROTOCOL' ) || b;
        l_message := l_message || 'Database Instance: ' || ora_instance_num || b;
        l_message := l_message || 'Database Name: ' || ora_database_name || b;

        utl_mail.send
          ( sender     => ora_database_name || '@revolutionmoney.com',
            recipients => 'dbas@email.com',
            subject    => l_subject,
            message    => l_message );
      END IF;
    END failed_logon_notifications;
    /
    Now, I didn't really want this to fire on every single server error, but I had problems with the WHEN clause.

    So if you know how to (1) capture the username or (2) apply the WHEN clause so the trigger only fires on ORA-01017, please comment.
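    One avenue I may explore in the meantime: standard auditing captures the attempted username on failed logins, assuming the AUDIT_TRAIL parameter is set to DB. A sketch of what that would look like:
    -- Record failed connection attempts in the audit trail
    AUDIT CREATE SESSION WHENEVER NOT SUCCESSFUL;

    -- ORA-01017 attempts show up with RETURNCODE = 1017,
    -- and USERNAME holds the name that was attempted
    SELECT username,
           os_username,
           userhost,
           terminal,
           timestamp
      FROM dba_audit_trail
     WHERE returncode = 1017
     ORDER BY timestamp DESC;
    That gets you the username after the fact; you'd still need a job or trigger to turn it into an alert.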

    Sunday, February 8, 2009

    Design: Storing Data

    Let's say you have the following data:
    NAME          200812     200901     200902
    --------- ---------- ---------- ----------
    Cable             75         75         75
    Groceries        250        225        300
    Internet          50         50         50
    Mortgage        1000       1000       1000
    Phone             25         25         25
    The table structure for that would look like the following:
    CREATE TABLE t
    (
      name     VARCHAR2(30),
      "200812" NUMBER(6,0),
      "200901" NUMBER(6,0),
      "200902" NUMBER(6,0)
    );
    Is this the best way to store it though?

    Are there times when you would want to store it this way?

    What happens when March (200903) rolls around? Well, you'll have to add another column. Not so bad you say? But you'll have to do that same thing the following month, the month after that and so on. Now you have 5 years (60 months/columns) worth of data. Your table becomes a bit unwieldy and difficult to maintain. Not to mention the eyesore of a 60+ column (and ever growing) table.
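    Concretely, every new month means running DDL like this against a live table:
    -- The monthly ritual this design forces on you
    ALTER TABLE t ADD ( "200903" NUMBER(6,0) );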

    What if you find a mistake in 200901, Groceries should have been $400. That's easy enough to update right?
    UPDATE t 
    SET "200901" = 400
    WHERE name = 'Groceries';
    That's OK for the occasional update I suppose, but what if you wanted an interface through PL/SQL? Each time you add a column you'd have to update the code as well. That sucks.

    I suppose you could write some Dynamic SQL to generate the UPDATE statement each and every time, but Dynamic SQL isn't very easy to maintain.
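    For illustration, the dynamic version might look something like this (the procedure name is mine). The column name gets concatenated into the statement, which is both the maintenance problem and a SQL injection risk:
    CREATE OR REPLACE
    PROCEDURE update_month
      ( p_name   IN VARCHAR2,
        p_column IN VARCHAR2,  -- e.g. '200901'; concatenated in, so sanitize it!
        p_amount IN NUMBER )
    IS
    BEGIN
      EXECUTE IMMEDIATE
        'UPDATE t SET "' || p_column || '" = :1 WHERE name = :2'
        USING p_amount, p_name;
    END update_month;
    /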

    Now, what if you needed to know the history for the table. What would you do?

    Well, you could create a history table and a trigger (blech) to populate it on UPDATE and DELETE. Not so bad right?

    You'd then be able to query both tables to find the history of a particular column. Your SQL (reports) will have to be updated and may get pretty complicated as well.

    But wait. You have to add a column for each month, which now needs to propagate to:

    1. Source Table
    2. PL/SQL (unless of the Dynamic variety)
    3. Trigger
    4. History Table
    5. Reports.

    How do you make this easy?

    I want to write this one time and just let it be. I don't want a history table (nor the accompanying trigger, blech). I don't want to rewrite my code or my SQL each and every time a new month comes.

    The underlying model should handle most circumstances and your reports should just work.

    First, I want to constrain my "name" list. I could use a CHECK constraint to do so, but I'd like to be able to add to it. Let's create a reference/lookup/crosswalk (not to be confused with an intersection) table.
    CREATE TABLE name_types
    (
      namecode VARCHAR2(30)
        CONSTRAINT pk_namecode PRIMARY KEY
    );

    INSERT INTO name_types ( namecode ) VALUES ( 'Groceries' );
    INSERT INTO name_types ( namecode ) VALUES ( 'Internet' );
    INSERT INTO name_types ( namecode ) VALUES ( 'Phone' );
    INSERT INTO name_types ( namecode ) VALUES ( 'Cable' );
    INSERT INTO name_types ( namecode ) VALUES ( 'Mortgage' );
    Next comes my main table. I'll use NAME_TYPES.NAMECODE as a Foreign Key in my main table.
    CREATE TABLE t
    (
      namecode VARCHAR2(30)
        CONSTRAINT fk_namecode_t REFERENCES name_types( namecode )
        CONSTRAINT nn_namecode_t NOT NULL,
      date_of DATE DEFAULT SYSDATE
        CONSTRAINT nn_dateof_t NOT NULL,
      amount NUMBER(16,2) DEFAULT 0
        CONSTRAINT nn_amount_t NOT NULL,
      date_from DATE DEFAULT SYSDATE
        CONSTRAINT nn_datefrom_t NOT NULL,
      date_to DATE
    );
    I'm not worried about a Primary Key as this is for demonstration purposes only.

    The DATE_TO field will typically be NULL but will allow me to maintain a history as mentioned above. If I no longer want a record to be displayed, I just "end date" it, or populate DATE_TO with the current date.

    To keep it mildly short, I'll use the "Groceries" data for starters.
    INSERT INTO t
    ( namecode,
    date_of,
    amount )
    VALUES
    ( 'Groceries',
    ADD_MONTHS( SYSDATE, -2 ),
    250 );

    INSERT INTO t
    ( namecode,
    date_of,
    amount )
    VALUES
    ( 'Groceries',
    ADD_MONTHS( SYSDATE, -1 ),
    225 );

    INSERT INTO t
    ( namecode,
    date_of,
    amount )
    VALUES
    ( 'Groceries',
    SYSDATE,
    300 );

    NAMECODE   DATE_OF       AMOUNT DATE_FROM DATE_TO
    ---------- --------- ---------- --------- ---------
    Groceries  09-DEC-08        250 09-FEB-09
    Groceries  09-JAN-09        225 09-FEB-09
    Groceries  09-FEB-09        300 09-FEB-09
    If I want to update something, I can write one procedure that updates the record I want.
    UPDATE t
    SET amount = 200
    WHERE namecode = p_namecode
    AND TRUNC( date_of, 'MONTH' ) = TRUNC( p_dateof, 'MONTH' );
    P_NAMECODE being 'Groceries' in this case and P_DATEOF being the year and month (in DATE format) of the record you want to modify.

    That won't satisfy our history requirement though. In order to do that, you'll need to UPDATE the DATE_TO column with the current date and then create a new record. It should look something like this:
    UPDATE t
    SET date_to = SYSDATE
    WHERE TRUNC( date_of, 'MONTH' ) = TRUNC( p_dateof, 'MONTH' )
    AND namecode = p_namecode;

    INSERT INTO t
    ( namecode,
      date_of,
      amount )
    VALUES
    ( p_namecode,
      p_dateof,
      p_amount );
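    Wrapped up together, that pair of statements might look like this (procedure and parameter names are my own):
    CREATE OR REPLACE
    PROCEDURE save_amount
      ( p_namecode IN t.namecode%TYPE,
        p_dateof   IN DATE,
        p_amount   IN NUMBER )
    IS
    BEGIN
      -- end-date the current record for that month, if one exists
      UPDATE t
         SET date_to = SYSDATE
       WHERE namecode = p_namecode
         AND TRUNC( date_of, 'MONTH' ) = TRUNC( p_dateof, 'MONTH' )
         AND date_to IS NULL;

      -- create the new "current" record
      INSERT INTO t ( namecode, date_of, amount )
      VALUES ( p_namecode, p_dateof, p_amount );
    END save_amount;
    /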
    Now any queries you write just need the WHERE date_to IS NULL predicate. If you need history, you just remove the predicate.
    SELECT *
    FROM
    (
      SELECT
        namecode nc,
        TO_CHAR( date_of, 'YYYYMM' ) d,
        amount
      FROM t
    )
    PIVOT
    (
      SUM( amount ) total
      FOR d IN ( '200812' AS "200812", '200901' AS "200901", '200902' AS "200902" )
    );
    (Though I still haven't found a great way to have the columns automatically appear as new dates get added, I'll leave that for a future exercise.) The results look like this:
    NC         200812_TOTAL 200901_TOTAL 200902_TOTAL
    ---------- ------------ ------------ ------------
    Groceries           250          225          300
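    For what it's worth, on a pre-11g database (no PIVOT) the same result can be had with conditional aggregation:
    SELECT namecode nc,
           SUM( CASE WHEN TO_CHAR( date_of, 'YYYYMM' ) = '200812' THEN amount END ) "200812_TOTAL",
           SUM( CASE WHEN TO_CHAR( date_of, 'YYYYMM' ) = '200901' THEN amount END ) "200901_TOTAL",
           SUM( CASE WHEN TO_CHAR( date_of, 'YYYYMM' ) = '200902' THEN amount END ) "200902_TOTAL"
      FROM t
     GROUP BY namecode;
    Same drawback, of course: each new month is another hand-written column.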


    Now you have an easy-to-maintain, robust design. Hardly any maintenance is required, and the queries stay as simple as the ones above.

    Thursday, February 5, 2009

    Random Things

    An old buddy of mine sent me this picture today. I have no credits unfortunately.

    Tim Tebow, University of Florida quarterback, does the Heisman pose with a baby. WIN!

    From oraclenerd


    What else?

    I promised Mr. Kuramoto ORACLENERD stickers quite some time ago...he sent me a few AppsLab stickers. Anyone else want one? Leave a comment if you do.

    I can't promise it will be soon, but the more people that leave a comment the more pressure I'll be under to come through. That's a good thing.

    Anyway, who in their right mind wouldn't want a sticker proclaiming their love of Oracle?

    What else?

    Need to work on my test harness, but it's kind of petered out.

    What about some sort of database security/auditing/reporting tool that sits on top of the built in auditing? Preventing DDL operations in certain accounts...limiting logons to a certain time frame...capture changes for deployments. We're trying to cook something like that up right now. If only I could get my DBA to configure ApEx correctly.

    What else?

    That's all I can think of now. Brain hurts from too much debugging and unraveling of complex (read: unintuitive) cross database applications (OLTP --> OLAP).

    Oh wait, if you haven't started reading the FAIL Blog, do so immediately. Hilarious. I'm seeing the WIN/FAIL thing everywhere now.

    OK, now that's it.

    Wednesday, February 4, 2009

    DBMS_SESSION.IS_ROLE_ENABLED

    Interesting (to me anyway) thing I learned recently.

    DBMS_SESSION.IS_ROLE_ENABLED

    The call looks like this:
    DBMS_SESSION.IS_ROLE_ENABLED (
      rolename VARCHAR2 )
    RETURN BOOLEAN;
    A few times in the past I've needed to check the role of the logged-in user. I would do that with something like this:
    CREATE OR REPLACE
    FUNCTION role_enabled( p_role_name IN VARCHAR2 ) RETURN BOOLEAN
    IS
      l_dummy VARCHAR2(1);
    BEGIN
      SELECT 'Y'
        INTO l_dummy
        FROM dba_role_privs
       WHERE grantee = USER
         AND role = p_role_name;

      RETURN TRUE;
    EXCEPTION
      WHEN no_data_found THEN
        RETURN FALSE;
    END role_enabled;
    /
    I did something similar here (and I can't believe no one told me about this function!).
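    A minimal sketch of using the built-in instead ('DBA' is just an example role; pass the role name in uppercase):
    SET SERVEROUTPUT ON

    BEGIN
      IF DBMS_SESSION.IS_ROLE_ENABLED( 'DBA' ) THEN
        DBMS_OUTPUT.PUT_LINE( 'DBA role is enabled in this session' );
      ELSE
        DBMS_OUTPUT.PUT_LINE( 'DBA role is not enabled in this session' );
      END IF;
    END;
    /
    Note one difference from my function: IS_ROLE_ENABLED checks whether the role is currently enabled in the session, while DBA_ROLE_PRIVS only tells you whether it was granted.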

    Lesson? Before building something on your own (even something this simple), search through the documentation to see if Oracle has already done it.

    Update: This was shown to me by my colleague, @serge_a_storms

    Monday, February 2, 2009

    How Not to Use Surrogate Keys

    There have been plenty of discussions about the use of surrogate keys vs. natural keys. This is not about that.

    For some reason lately I've been noticing surrogate keys...a lot. Specifically in analytic functions or in the ORDER BY clause.

    Here's an example of its use in an analytic function:
    SELECT
      id, --surrogate key
      col1,
      col2,
      col3,
      ROW_NUMBER() OVER ( PARTITION BY col1 ORDER BY id ) my_rownum
    FROM some_table;
    OK, maybe I should have said anywhere you see an ORDER BY clause. Shoot me.

    Like I was saying, don't use them in the ORDER BY clause.

    The natural assumption (yes I know about those), assuming (again, I know) it's done via a sequence, is to assume (man...) that the highest number was the last one created. Not necessarily.

    What if you have a 2 node RAC Cluster?
    CREATE TABLE t
    (
    id NUMBER,
    created TIMESTAMP DEFAULT SYSTIMESTAMP
    );

    CREATE SEQUENCE t_seq
    START WITH 100
    CACHE 100
    INCREMENT BY 1
    NOCYCLE;

    In my tnsnames.ora file, I created an entry for each node (as opposed to having each ADDRESS in one entry).

    I connect to node 1 and run the following:
    INSERT INTO t ( id ) VALUES ( t_seq.nextval );
    I then connect to node 2
    INSERT INTO t ( id ) VALUES ( t_seq.nextval );
    COMMIT;
    And back to node 1
    INSERT INTO t ( id ) VALUES ( t_seq.nextval );
    COMMIT;
    A simple SELECT
    SELECT * FROM t;

        ID CREATED
    ------ --------------------------------
       100 02-FEB-09 07.19.20.786144 PM
       101 02-FEB-09 07.19.32.562516 PM
       200 02-FEB-09 07.19.25.100219 PM
    Now let's order it by CREATED:
    SELECT * FROM t ORDER BY created;

        ID CREATED
    ------ --------------------------------
       100 02-FEB-09 07.19.20.786144 PM
       200 02-FEB-09 07.19.25.100219 PM
       101 02-FEB-09 07.19.32.562516 PM
    Lesson? Don't rely on the sequence generated surrogate key to be in ORDER of creation.
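    If you genuinely need values handed out in creation order across RAC nodes, the ORDER option on a sequence exists, though it forces the instances to coordinate on every NEXTVAL call and costs you performance:
    -- ORDER guarantees values are issued in request order across
    -- RAC instances, at the price of inter-instance coordination
    CREATE SEQUENCE t_seq_ordered
      START WITH 100
      INCREMENT BY 1
      ORDER
      NOCYCLE;
    Even then, the CREATED timestamp is the honest record of when a row was made.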

    Update: I've used the surrogate many times in these types of cases. I suppose the only "reliable" place to use the surrogate key would be in the child table (where the surrogate key is a foreign key and has "some" value). I've just now realized it and will move away from it going forward.

    Sunday, February 1, 2009

    Using Twitter...

    I've been using Twitter for a little over a year now.

    Initially, I didn't really get it. I was just trying it out since many of the people I read were using it. After that initial post, it was 3 months before I used it again. The 3rd time was 2 months after that.

    From my timeline, I started to really use it at the end of July after I was laid off at the behest of Jake. He suggested using it to search for jobs. Though I didn't find any that matched up well for me, it was my breakthrough.

    Last month I broke the 1000 tweet milestone.

    I've used Twitter to search for jobs, to meet new (Oracle) people, to ask questions, to scream from the rooftop and other things. I try and encourage others to use it as well, most recently my colleague @serge_a_storms.

    It's a great way to ask questions too. A couple of weeks ago I posted a question asking for scripts to download source. @neilkod reminded me of DBMS_METADATA and I was off. I really used to love the OTN Forums, participating heavily in the Application Express forum. I like this medium a lot better though. Just a quick tweet and I can get answers or pointers from a multitude of people. I don't think it replaces the forums just yet, but I like it.

    Probably my favorite part of it though is the "screaming from the rooftop" effect. @crisatunity seems to have taken this to a new level recently which is quite amusing. What do I mean by screaming from the rooftop?

    You know those times you are going through thousands of lines of code and you see something stupid? You want to scream right? Tweet it. Goes without saying though it is public and will stay around forever...but it's fun. Sometimes you'll get sympathy from others, sometimes you'll get crickets.

    Have you tried it out yet? What are your thoughts on it? If you do use it, do you blog as well?