Thursday, December 27, 2007

Asynchronous Distributed HotLog - Change Data Capture

I am currently researching the feasibility of getting Asynchronous Distributed HotLog CDC working for a Proof of Concept in our organization. The source database is a 9.2.0.7 on Sun Solaris and our target database is a 10gR2 on the same OS.

I have been trying, since I arrived, to get them to use more of the features provided by Oracle. From what I have found so far, all of our Oracle databases are severely underutilized.

I will post my findings here, along with a full How To once I have completed it, and hopefully whether or not it ends up being implemented.

I did want to post some of the sources I have used so far though:

Oracle Documentation - This is where I started. Most of the other links reference the docs.
Mark Rittman - 10g to 10g. Great example but from what I can tell so far, it's almost identical to the documentation.
Oracle How To - I just found this one and am very interested in reading through it.

If you have any examples, feel free to post the link.

Tuesday, December 11, 2007

SAS: Create SAS Dataset from Oracle

SAS is very prevalent in our environment. When testing, I need to create some datasets for the business users so that they can do their UAT (yes, I shouldn't be involved at this point...but I am).

Here's the syntax to do that (I use SAS EG):

LIBNAME dmart oracle path=<oracle_sid> schema=<schema_name> pw=<password> user=<db_username>;

PROC SQL;

CREATE TABLE sasave.mytable
AS
SELECT * FROM dmart.mytable;

QUIT;

You can also use the following syntax:

LIBNAME dmart oracle path=<oracle_sid> schema=<schema_name> pw=<password> user=<db_username>;

DATA sasave.mytable_filtered;
SET dmart.mytable
( WHERE = ( id IN ( 5, 6, 7, 8 ) ) );
RUN;

If you want to use Oracle SQL (SAS guys and gals call it passthrough SQL), here's the syntax for that:

PROC SQL NOPRINT;

CONNECT TO oracle( username=<username> pw=<password> path="<oracle_sid>");

CREATE TABLE sasave.dual_test
AS
SELECT *
FROM CONNECTION TO oracle

( SELECT * FROM DUAL );

DISCONNECT FROM oracle;

QUIT;

Wednesday, December 5, 2007

Instrumentation: DEBUG/LOGGING

In my previous entry on instrumenting code I detailed the way in which you could use the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure to help monitor your long running code.

In this one, I will detail my home grown version of Tom Kyte's debug routine. I know others have published similar code as well, but I can't seem to find the links right now.

You can find the source code here.

Contents:
2 tables
2 sequences
1 package
1 procedure
1 build file

debug_tab allows you to turn the debugging on and off. debug_details_tab will store each line that you write to the debug routine when turned on.
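
The linked source has the full implementation; to give you the flavor, here is a minimal sketch of the idea (the table and column names here are made up). The important piece is the autonomous transaction, which keeps the logging separate from the caller's transaction:

CREATE OR REPLACE
PROCEDURE debug( p_text IN VARCHAR2 )
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_enabled VARCHAR2(1) := 'N';
BEGIN
  --is debugging turned on? (hypothetical DEBUG_TAB layout)
  BEGIN
    SELECT enabled_flag
    INTO l_enabled
    FROM debug_tab
    WHERE username = USER;
  EXCEPTION
    WHEN no_data_found THEN
      NULL;--no row means debugging is off
  END;

  IF l_enabled = 'Y' THEN
    INSERT INTO debug_details_tab
      ( id, username, created_date, text )
    VALUES
      ( debug_details_seq.NEXTVAL, USER, SYSDATE, SUBSTR( p_text, 1, 4000 ) );
  END IF;

  --autonomous, so this COMMIT does not touch the caller's transaction
  COMMIT;
END debug;
/
show errors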

Here's an example of it in practice:

CREATE OR REPLACE
FUNCTION get_age_in_months( p_id IN NUMBER ) RETURN NUMBER
IS
l_age_in_months INTEGER;
BEGIN
--instrumentation calls
debug( 'GET_AGE_IN_MONTHS' );
debug( 'P_ID: ' || p_id );
debug( 'select value into variable' );

SELECT age_in_months
INTO l_age_in_months
FROM people_tab
WHERE id = p_id;

debug( 'L_AGE_IN_MONTHS: ' || l_age_in_months );

RETURN l_age_in_months;

EXCEPTION
WHEN no_data_found THEN
debug( 'no data found' );
RETURN NULL;
END get_age_in_months;
/
show errors


I mentioned in the previous article that I had had difficulty grasping this concept initially. I think once I related it to DBMS_OUTPUT.PUT_LINE it became much clearer to me.

This simple debug routine has helped me tremendously in the year or two I have used it, especially with nested levels of logic. It gets very hard to keep track of where you are, but with this you can run your procedure or function, then issue a SELECT against debug_details_tab and see what was happening when.

I even began using this in my APEX applications. I would preface each line with "APEX: " and then write whatever was necessary so that I could step through the various pieces of code. It became crucial when I was doing validation on a table of records in a collection...oh so much easier.
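
For example, from an APEX page process (the item name here is made up):

debug( 'APEX: begin validation' );
debug( 'APEX: P1_MEMBER_ID: ' || :P1_MEMBER_ID );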

On large systems this will generate a lot of data. I definitely would not put this inside a loop doing some sort of batch processing, but it is helpful for finding where things fail.

It can certainly be improved on. I basically took the idea of DBMS_OUTPUT.PUT_LINE and wrote it to a table, nothing fancy there. Mr. Kyte mentions writing it to the file system as well. Since I don't have ready access to the database file system, this was the easiest.

Make your life easier and use a debug/logging routine in your code. No longer will you have to comment; the debug statements will do it for you!

Sunday, December 2, 2007

Instrumentation: DBMS_APPLICATION_INFO

Instrumentation is something that I have come to rely on fairly heavily. I believe I first read about it on asktom, but the one that really spurred me on was this post on instrumentation on his personal blog.

Initially, I couldn't really wrap my head around instrumentation. I don't know why it was so difficult; I had a similar problem with sessions when I first started my career. I look back now and it just seems so obvious.

Now that I am doing datawarehouse work, nothing is fast. Fast to me is now one hour to load 30 or 40 million records. No more split second queries for me.

We currently use no tools; it's straight PL/SQL, so instrumentation of the code is ideal. Actually, it's more instrumentation to aid monitoring. The tool most easily used is the one Oracle provides in the DBMS_APPLICATION_INFO package.

There are three subprograms that I use most: SET_MODULE, SET_ACTION and, most importantly, SET_SESSION_LONGOPS. I hadn't started using the last one until this year; I mainly stuck to the first two. SET_SESSION_LONGOPS is now part of the procedure/function template I've created in JDeveloper.

What it allows you to do is set a row in the v$session_longops view (I know it's not actually putting the row in the view...it's the underlying table, but I digress). You can then monitor how your job is doing.

Here's an example:

dbms_application_info.set_session_longops
( rindex => g_index,
slno => g_slno,
op_name => 'GETTING MEMBER DATA',
sofar => 0,
totalwork => l_table.COUNT + 1,
target_desc => 'GETTING MEMBER DATA' );

g_index and g_slno are global variables in the package. l_table is a PL/SQL TABLE OF VARCHAR2.

Now you can monitor the progress of your job in v$session_longops!

Here's the query I use:

SELECT
username,
sid,
serial#,
TO_CHAR( start_time, 'MM/DD/YYYY HH24:MI:SS' ) start_ti,
time_remaining rem,
elapsed_seconds ela,
ROUND( ( sofar / NULLIF( totalwork, 0 ) ) * 100, 2 ) per,
sofar,
totalwork work,
message,
target_desc
FROM v$session_longops
WHERE start_time >= SYSDATE - 1
ORDER BY start_time DESC;


Now you too can sit for hours and watch your job move incrementally forward!

But seriously, it does help tremendously to know where a job is at. You can further use the SET_MODULE and SET_ACTION calls to see a specific point in the processing (inside a loop).
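
For example (the action names here are made up), you could add calls like these and then watch the MODULE and ACTION columns in v$session:

dbms_application_info.set_module
( module_name => 'GET_MEMBER_DATA',
  action_name => 'STARTING' );

--and inside the loop
dbms_application_info.set_action( action_name => 'LOADING ' || l_table(i) );

SELECT sid, module, action
FROM v$session
WHERE module = 'GET_MEMBER_DATA';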

Here's the code in context:


PROCEDURE get_member_data
IS
l_exists INTEGER;
TYPE table_of_lobs IS TABLE OF VARCHAR2(3);
l_table TABLE_OF_LOBS := TABLE_OF_LOBS( 'COM', 'ORG' );
l_count INTEGER := 0;
BEGIN
--check to see if there is enrollment data, if not, move on
SELECT COUNT(*)
INTO l_exists
FROM members
WHERE rownum < 2;

IF l_exists = 1 THEN--data exists, truncate and reload

g_index := dbms_application_info.set_session_longops_nohint;

EXECUTE IMMEDIATE 'TRUNCATE TABLE member_stg';

g_audit_key := p_audit.begin_load
( p_targettable => 'MEMBER_STG',
p_loadsource => 'MEMBER_SOURCE',
p_loadstatus => 'PRE',
p_loadprogram => 'GET_MEMBER_DATA',
p_commenttext => 'INSERT' );

dbms_application_info.set_session_longops
( rindex => g_index,
slno => g_slno,
op_name => 'GETTING MEMBERS',
sofar => 0,
totalwork => l_table.COUNT + 1,
target_desc => 'GETTING MEMBERS' );

FOR i IN 1..l_table.COUNT LOOP
l_count := l_count + 1;

INSERT INTO member_stg
SELECT *
FROM members;

g_total_rows_affected := g_total_rows_affected + sql%rowcount;

COMMIT;

dbms_application_info.set_session_longops
( rindex => g_index,
slno => g_slno,
op_name => 'GETTING MEMBERS',
sofar => l_count,
totalwork => l_table.COUNT + 1,
target_desc => 'GETTING MEMBERS' );

END LOOP;

p_audit.end_load
( p_auditkey => g_audit_key,
p_loadstatus => 'SUC',
p_rowsuccess => g_total_rows_affected );

gather_table_stats
( p_tablename => 'MEMBER_STG',
p_schemaname => 'MYHOME' );

dbms_application_info.set_session_longops
( rindex => g_index,
slno => g_slno,
op_name => 'GETTING MEMBERS',
sofar => l_count + 1,
totalwork => l_table.COUNT + 1,
target_desc => 'GETTING MEMBERS' );

END IF;

EXCEPTION
WHEN others THEN
p_audit.failed_load
( p_auditkey => g_audit_key,
p_comments => SQLCODE || ' ' || SQLERRM );
RAISE;

END get_member_data;

Friday, November 30, 2007

Ling Chi II

or Death By a Thousand Cuts. I wrote about it before.

Today was one of the worst days I've had at this job.

I got home last night knowing I had to run and test a new line of business in our test environment. This was a fairly small subset so it should run fairly fast, two or three hours. Around 1:30 AM, it completed. I did my quick sanity checks and realized that it just wasn't working right.

Crap, I forgot to load our new rates. That has to be it.

So I reran the rates and then reran our process.

It's now 3:00 AM. I run my sanity check and it's exactly the same as the last one.

I quickly realize that some of the rates hadn't been updated like I assumed they had. This test was null and void. I went to sleep...mad.

I woke up around 8:15 with my son telling me something about Transformers. What? Oh, OK, I'm in his bed. I take him to school and get to work around 9:15.

I'm cranky because the other developer didn't do his part, but probably more mad at myself for not making sure they completed their piece.

I throw a piece of candy. Better.

Of course my colleague isn't in today, so I have to make sure the DBAs deploy this to our test environment.

Finally I get the process working and it finishes around noon. I send out a note to the business and QA letting them know that they can begin.

I get a call from the business around 2:30, it's not working the way they expected. One of our rates overlaps another causing invalid results to return.

Thanks for telling us that sooner. The day before deployment and we're getting new requirements. Awesome!

I walk over to my boss' desk with my badge in hand, ready to quit, kind of, to tell him the news. He talks me off the ledge.

We then head over to their building to discuss and indeed it's something they didn't realize. OK, fair enough. Had they seemed to appreciate me/us a little bit more over the past eight months I probably wouldn't have been so mad. It was just one more thing though.

My boss decides how this will be remedied. I argue (gently) that this is not an application issue but a data issue. He agrees. We'll just put one job on hold and make the necessary changes to the [driving] data.

Two IT VPs, one business VP and the CIO need to sign off on two change requests, one for the driving data and one for the application.

We're slated to deploy tomorrow pending UAT signoff, though that shouldn't be a problem.

Ten to twenty percent of my time is spent writing code, the rest is paperwork. I believed I had mastered the administrative stuff only to get this requirement change at the last minute.

One more thing...just one more thing.

Ling Chi indeed.

Thursday, November 29, 2007

Keeping it Simple

One of my all time favorite articles is The Complicator's Gloves on Worse Than Failure (formerly the DailyWTF). It identifies the tendency of software developers in particular to come up with overly complex solutions, usually when there is a much simpler one available.

This was the context of my latest rant to my CIO. Actually, this theme seems to play out in all my rants. Funny how that works.

While web services and the like have their place, many times they are used just because they are the cool new thing, not because of a pressing need. I know I am not the first to mention that nor will I be the last.

Whether it was years of reading asktom (for pleasure no less) or the influence of my first boss, I have striven (sp?) to build applications that are scalable yet easy to maintain.

One of my proudest accomplishments as a developer was at my previous job, a small state-contracted agency where I was the lone developer. Thanks to a very trusting boss, I was allowed to install Oracle and soon after found Application Express (APEX). In 18 months I was able to create some 350 pages of forms and reports for the organization. One person, 350 pages. I once found a job ad for a web developer to help maintain a 100 page website on a team of six. What? Six people? Really? Must be java or something. ;)

I continued to work for them on a contract basis for about six months after I left. Mostly until the new guy got comfortable. Unfortunately for me, they didn't require my services a whole lot. Yes, I could be deluding myself, I realize that...but I just don't believe it. They WOULD tell me.

Back to my point. At our organization we seem to have quite a few architects. They talk of Ruby on Rails, Java, JBoss, etc. MySQL gets a brief mention on occasion.

We have a hard enough time writing good SQL or PL/SQL, so now we're going to introduce new languages and a new database platform?

If we were a company that made software, I would probably be [mostly] on board, but we are not. We store and manage data for the business to do their job.

I do hope I am wrong about them and that they do talk about the importance of data in our organization. I just haven't seen it yet.

So, put it in the database, use APEX when appropriate (95% of the time) and keep it simple.

Monday, November 26, 2007

Oops I've Done it Again

By it, I mean I've sent another rant to my CIO. Here's my first one that I sent to Dratz before starting my own blog.

Fortunately, we've developed a bit of a rapport. I still should not do this type of thing on a holiday when it could be days before I hear back...it just makes my mind wander and wonder if I will have a job come Monday.

Update


I still have a job...I really need to stop this.

Saturday, November 24, 2007

Parallel Processing using DBMS_JOB

I found this article by ProdLife through the OraNA feed; it talked about running a report based on multiple queries. It reminded me of something I did awhile back.

We have this multi-step process which loads data into 2 tables that the business would use to reconcile our money in the door and our membership. Membership is on a month granularity (member month) and our money is transactional (they may have multiple transactions within a given month).


One table stores the transactions joined with our members. Not the correct grain that the business needs but useful for research. The other table summarizes the transactions to the month level and then is joined with our membership so that both are at the same granularity. Currently we're pulling across about 27 million records for members and the same for their transactions.

On the right is a basic diagram of the process.

The process initially took 8 hours to complete. Part of it was the fact that it runs sequentially. However, not all parts of this process are dependent on one another. It isn't until the final 2 steps (Target Table 1 and Target Table 2, in yellow) that they need to run sequentially.


I wanted to speed this up and began thinking about ways to do so (assuming as much tuning as possible had already been done).

1. I could use our scheduler or unix shell scripts.

2. Use a table based approach as ProdLife did.

3. Utilize PL/SQL and DBMS_JOB.

I chose number 3 initially and that's the focus of this post. I'll detail why I didn't use this method at the end.

The first thing I had to figure out was how to get PL/SQL to wait. Having read a few posts on AskTom, I remembered the SLEEP procedure. After a quick scan of the site, I found that it was part of the DBMS_LOCK package. I asked the DBAs to give me access so that I could begin testing.

I figured that if I could wait long enough, it would be easy to "poll" the USER_JOBS view to see when it had finished. I'm just going to show code snippets as the whole thing can get quite long.

I first determined that the error Oracle returns for a job that doesn't exist is -23421. That will let me know when a job is complete. Next, I declared variables for each job to run.

DECLARE
no_job EXCEPTION;
PRAGMA EXCEPTION_INIT( no_job, -23421 );
l_exists NUMBER;
l_dollars_job NUMBER;
l_members_job NUMBER;

First thing I do in the body is create the jobs using DBMS_JOB.SUBMIT.

BEGIN
dbms_job.submit
( job => l_dollars_job,
what => 'BEGIN p_mypackage.get_dollars; COMMIT; END;',
next_date => SYSDATE );

dbms_job.submit
( job => l_members_job,
what => 'BEGIN p_mypackage.get_members; COMMIT; END;',
next_date => SYSDATE );

COMMIT;

Make sure you issue the COMMIT statement after the jobs have been submitted; they won't start running until you do.

Here's the fun part. I created a loop that calls DBMS_LOCK.SLEEP to wait for 60 seconds. After each wait, I check whether the job still appears in the USER_JOBS view. This gives the jobs up to 100 minutes to complete.

FOR i IN 1..100 LOOP
dbms_lock.sleep( 60 );

IF l_dollars_job IS NOT NULL THEN
BEGIN
SELECT 1
INTO l_exists
FROM user_jobs
WHERE job = l_dollars_job;

l_exists := NULL;
EXCEPTION
WHEN no_data_found THEN
l_dollars_job := NULL;--job is finished
END;
END IF;

IF l_members_job IS NOT NULL THEN
BEGIN
SELECT 1
INTO l_exists
FROM user_jobs
WHERE job = l_members_job;

l_exists := NULL;
EXCEPTION
WHEN no_data_found THEN
l_members_job := NULL;--job is finished
END;
END IF;

The next step is to determine when to exit the loop. Hopefully the jobs will finish in time and processing moves on, but if not, you want to exit gracefully. Well, semi-gracefully anyway.

IF l_dollars_job IS NULL
AND l_members_job IS NULL
THEN
EXIT;
ELSIF i = 100 THEN
BEGIN
dbms_job.remove( l_dollars_job );
EXCEPTION
WHEN no_job THEN
NULL;
END;

BEGIN
dbms_job.remove( l_members_job );
EXCEPTION
WHEN no_job THEN
NULL;
END;
--abort run, taking too long
raise_application_error( -20001, 'DOLLARS/MEMBERS data not loaded in a timely manner...' );
END IF;
END LOOP;
END;

That's all there is to it.

In the end though, I was convinced not to use this method as restartability would be difficult. Perhaps this method combined with the table-based approach would be the ideal. I'll leave that for another day though.

Tuesday, November 13, 2007

Death By a Thousand Cuts

In case I haven't mentioned it before, I work for a fairly immature IT organization. We're heading in the right direction mind you, but defined and documented processes don't really exist.

Yesterday I went back and forth with our QA department about two truncate table statements. We had performed this in production before and we didn't change the code. It needed to be tested and I needed to do a unit test plan. Today I needed UAT signoff. What? There's no business user.

Our EDI group is overtaxed right now, so I wrote a process to delete data in "their" database on tables that they created. Part of that process was to drop the constraints and then recreate them with the ON DELETE CASCADE rule. Amusingly, they created most of them with the SET NULL rule and then had NOT NULL constraints on those columns. Sweet. I needed to do all this and finally delete the data; of course, none of the environments (DEV/TEST/PROD) are the same, so what worked in DEV didn't work in TEST. I'm guessing that as soon as it is deployed in PROD, I will have missed something else. I'll look like the idiot too. Did I mention that I have to create indexes for all those foreign keys? No? Well, I did. Otherwise the process would take days to complete. There are some 13 tables in total: three child tables of the parent table and then multiple child tables of those child tables. Awesome!

I'm on my fifth day of trying to complete a "simple" delete now.

Since the FBI raided, QA has become more strict. OK, understandable, but there was never a notice saying so, nor what the new rules would be.

I finally get my final approval and the work orders are created. Release Management gives me the all clear (the files were staged for the DBAs). I run over to let the DBA know and he wants to go through the instructions before I leave to make sure everything's clear. We then go into the staging folder and only two of the eight files are there. Deep breath. I call the Release Management group and let them know. It will be an hour before they get home.

I know that was an honest mistake. When I went back to make sure I filled out all the forms correctly I realized that I had left one file off of my build list, so I originally only had seven.

DBA just called and he was slammed with other critical deployments/production problems and wasn't able to get to it tonight. It would happen in the morning.

A "simple" delete will have taken seven days to complete.

Slow slicing indeed...

Monday, November 5, 2007

I Want to Be Better Than Tom Kyte

OK, that got your attention. Somehow I knew it would.

I believe Mr. Kyte to be one of the foremost experts at Oracle development. His solutions are usually simple and concise. His philosophy is simple and concise; logic belongs with the data (in the database), don't reinvent the wheel if we've already created it (using supplied packages) and keep it simple. Of course Mr. Kyte may have objections to some of that, but that's the general idea I have gleaned over that past 5 years.

I am very competitive

1. One of the reasons I got into IT in the first place was that I didn't like this whole group of people knowing more than I did.

2. I grew up playing baseball; I liked being better than most of the other kids. I still believe that if I hadn't drunk away my opportunity in college, I'd still be playing.

3. I'm an only child, I'm used to the attention and crave it. How do I get it now? By being better than everyone else.

My goal is to be the best developer in the world

Will I ever achieve that? Could it even be measured? Is there some sort of Oracle developer competition out there?

Perhaps I should start small...be the best Oracle developer at WellCare, then Tampa, then Florida, the U.S., North America, the Northern Hemisphere and finally the World!

I am probably not the best Oracle developer at WellCare, so I have a ways to go. That's what drives me though. Trying to be the best. I'm surrounded by a lot of smart people which is a good thing. No...a great thing. I've been the lone wolf developer for too long. Now I have the opportunity to learn directly (as opposed to just reading) from others. There is give and take. Sometimes my solution is the best and sometimes it is not.

I don't believe my competitive nature interferes with my interpersonal relationships (I hope not anyway). It is more of an internal thing to me. Once upon a time I was skinny and in shape and I did triathlons. I wanted to be a pro (laughable). Each time though I tried to outdo my previous performance. Did I want to win? Sure I did, but it was more important for me to improve.

I believe that I am strong enough to take criticism from others. I can admit when I'm wrong (see countdown timer above).

I do want to be the best. I'll probably never have the opportunity (nor the time) to do what Mr. Kyte has done. I'm not going to strive to be mediocre though. Whether I realize that goal or not is mainly irrelevant, but that is my goal...to be better than Mr. Kyte.

Wednesday, October 31, 2007

Code Style: Functions

Functions

Functions are used to return a value. This value can take the form of a set of records (ref cursor), a collection, or a single value (NUMBER, VARCHAR2, etc.).

Functions (or procedures for that matter) should be logically grouped within a package. Only on the rare occasion should you store them individually.

That said, here is the syntax to create a function in Oracle:

CREATE OR REPLACE
FUNCTION get_age_in_months( p_id IN NUMBER ) RETURN NUMBER
IS
l_age_in_months INTEGER;
BEGIN
--instrumentation calls
debug( 'GET_AGE_IN_MONTHS' );
debug( 'P_ID: ' || p_id );
debug( 'select value into variable' );

SELECT age_in_months
INTO l_age_in_months
FROM people_tab
WHERE id = p_id;

debug( 'L_AGE_IN_MONTHS: ' || l_age_in_months );

RETURN l_age_in_months;

EXCEPTION
WHEN no_data_found THEN
debug( 'no data found' );
--here you need to decide what exactly you want to do. If this is
--used in a SQL statement then you probably don't want to halt
--processing, so returning NULL will work just fine. However, if
--this function is called by another function or procedure and you
--need this value to continue processing, you WOULD want to know
--that the value is not there and you would issue a RAISE after you've
--logged the error
RETURN NULL;
END get_age_in_months;
/
show errors

The "debug" call is just instrumentation of the code. It will allow you to step through the code more easily.

In the above function, you would also need to be aware of too_many_rows, but since I know that "id" is the primary key on the table, I have omitted it.

  1. Instrument your code.

  2. Name your function something descriptive. There is a 30 character limit so don't be lazy. I typically use get_ then whatever it is I'm doing: get_calculated_amount, get_as_of_date, etc.

  3. Name your variables something descriptive. There is a 30 character limit so don't be lazy. Please just spell out "no". Is it "No" or "Number?" Why make the next person think, just spell it out. They'll thank you for it.

  4. If there is more than one input parameter, start the parameter list on the next line.

  5. CREATE OR REPLACE goes on the top line, nothing else.

  6. Use spaces liberally.

  7. Comment where necessary. If you name things descriptively though, you'll find you won't need a lot of comments.


I tend to name variables thusly (I think I got that from Mr. Kyte):

  • p_ = parameters passed (though you could use i_ for in and o_ for out as well)

  • l_ = local variables

  • g_ = global variables

Monday, October 29, 2007

FBI and Others...

I did have a job when I got back into town. Unfortunately I let the news of the FBI raid ruin the remainder of my vacation.

There still isn't a whole lot of information available, most of it is just speculation of course, but it seems now everyone is piling on.

Class Action Lawsuit Against WellCare Health Plans
WellCare Says SEC Joins Probe

On Monday of last week the stock price hit an all-time high of over $128. It closed at $28.62 today. Um, yikes. We had purchased a few shares at around the $100 mark and lost about $200. If there's a bright side to all this, it's that we can actually afford to buy more than a couple of shares now...

Thursday, October 25, 2007

Looking for an Oracle Developer?

So I am on vacation (well, I wouldn't necessarily call being a pack mule vacation) at Disney World this week.

Yesterday I received a text message from one of my friends, our company was being overtaken by armed FBI agents, lots of 'em.

FBI raid shutters Medicare insurer

FBI Raids Tampa WellCare

I've never seen any of these types of activities, though my scope of duties has been fairly limited (I've been there less than a year). I find it hard to believe, as everyone I have worked with on the IT side of things is, I KNOW, on the up and up. I've never been asked to do anything nefarious...my hope is that, if the allegations are true, it involves only a select few individuals.

I thoroughly enjoy working there. I am surrounded by some great people, both personally and professionally, but I do have a family to feed so...

So if you are in the Tampa area and are looking for a hard-working Oracle developer (APEX, PL/SQL), either email me or post a comment (I won't post the comment, but I will receive it via email and I can contact you from there). I'll also work on getting my resume updated and online.

Thursday, October 11, 2007

Code Style: Package Specification

Package Specification

CREATE OR REPLACE
PACKAGE p_foo
AS
FUNCTION get_something
( p_id IN NUMBER,
p_thing_im_looking_for IN VARCHAR2 ) RETURN NUMBER;
PROCEDURE do_something( p_id IN NUMBER );
PROCEDURE insert_something
( p_first_name IN PEOPLE_TAB.FIRST_NAME%TYPE,
p_last_name IN PEOPLE_TAB.LAST_NAME%TYPE,
p_email_address IN PEOPLE_TAB.EMAIL_ADDRESS%TYPE,
p_middle_initial IN PEOPLE_TAB.MIDDLE_INITIAL%TYPE DEFAULT NULL,
p_prefix IN PEOPLE_TAB.PREFIX%TYPE DEFAULT NULL );
PROCEDURE update_something
( p_id IN NUMBER,
p_first_name IN VARCHAR2,
p_last_name IN VARCHAR2 );
PROCEDURE delete_something( p_id IN NUMBER );
END p_foo;
/
show errors

  1. Name your package something descriptive. There is a 30 character limit so don't be lazy.

  2. Name your procedures and functions something descriptive. There is a 30 character limit so don't be lazy. UPDATE this or INSERT that. Try to give a small clue as to what it does. If it's a function, I typically use get_ then whatever it is I'm doing. get_calculated_amount, get_as_of_date, etc.

  3. Name your variables something descriptive. There is a 30 character limit so don't be lazy. Please just spell out "no". Is it "No" or "Number?" Why make the next person think, just spell it out. They'll thank you for it.

  4. Use the package name after the END statement.

  5. If there is more than one input parameter, start the parameter list on the next line.

  6. CREATE OR REPLACE goes on the top line, nothing else

  7. Use spaces liberally

  8. Comment where necessary. If you name things descriptively though, you'll find you won't need a lot of comments.

  9. Declare parameters using %TYPE. If the parameter doesn't map to a table column declare SUBTYPEs somewhere.
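
For item 9, here is a sketch of what those SUBTYPEs might look like (the package and type names are made up):

CREATE OR REPLACE
PACKAGE p_types
AS
  SUBTYPE id_t IS NUMBER(10,0);
  SUBTYPE yorn_t IS VARCHAR2(1);
END p_types;
/
show errors

A parameter that doesn't map to a table column can then be declared as p_id IN p_types.id_t.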

Code Style: Index

My ongoing contribution to...pretty much myself. Perhaps you can find this useful, or amusing, whichever.

Code Style: Tables

Tables are easy.

CREATE TABLE t
(
col1 NUMBER(10,0)
CONSTRAINT pk_col1 PRIMARY KEY,
col2 VARCHAR2(32)
CONSTRAINT nn_col2_t NOT NULL
CONSTRAINT uq_col2_t UNIQUE,
col3 VARCHAR2(400),
col4 VARCHAR2(1) DEFAULT 'N'
CONSTRAINT ck_yorn_col4_t CHECK ( col4 IN ( 'Y', 'N' ) )
CONSTRAINT nn_col4_t NOT NULL
);
Remember to always name your constraints. While I am at it, use constraints as much as humanly possible, at least in your OLTP systems. You'll be able to reduce the amount of code you need to write and actually let the database do its job. I'd much rather let the database do it than rely on code to maintain my data integrity.

For the datawarehouse, you'll need to think about constraints a bit more, as they may slow down load times. I'm still all for constraints, but I would never say always use them.

For child tables:

CREATE TABLE s
(
col5 NUMBER(10,0)
CONSTRAINT pk_col5 PRIMARY KEY,
col1
CONSTRAINT fk_col1_s REFERENCES t( col1 )
CONSTRAINT nn_col1_s NOT NULL,
col6 VARCHAR2(30)
);
For Foreign Key constraints, you do not have to declare the type as it will be inherited from the parent table.

This would be helpful if someone up and decided to change the NUMBER(10,0) to a VARCHAR2(10) or something (please don't ever do that!).

As for STORAGE or other table options, I typically leave those up to the DBA or work with them to add them. They may have a particular setup for certain tables that you can't possibly know about (if you don't talk to them).

To recap:
  • Use constraints as much as possible

  • Always name your constraints

  • Work with your DBA for table options

  • Always name your constraints

Tuesday, October 9, 2007

Code Style: General Principles

I have always wanted to put together my personal code style guide. Now that I have a blog, I can do so rather easily.

This will be the first among many that details my approach to coding style. Yes, it is very particular to me. No, I don't expect anyone in their right mind to follow it. Though perhaps someone can use it as a general guideline.

I am fairly fanatical about style.

Parentheses go on separate lines, always (I'll eventually contradict myself I'm sure). None of this my-line-starts-with-a-comma crap. My only concession is that it makes commenting out lines easier. Otherwise, I can't stand it. It's ugly to me and I don't like ugly.

Tabs = 2 spaces

CREATE OR REPLACE is the top line followed by the particular object on the second line (I don't know why, it's my style though).

CREATE OR REPLACE
PACKAGE p_package
IS
...

Always put "show errors" at the bottom of procedure/function/package scripts.

Always end your procedure or function with the name after END.

CREATE OR REPLACE
PACKAGE p_package
IS
...
END p_package;
/
show errors

Align SELECT statements left, not right:

Evil

SELECT blah, t.id, s.x
  FROM t, s
 WHERE t.id = s.id
   AND t.x = s.x;

Correct

SELECT
  blah,
  t.id,
  s.x
FROM
  t,
  s
WHERE t.id = s.id
  AND t.x = s.x;

I believe but can't confirm that Toad Formatter is the main perpetrator of this travesty.

Instrumentation. Use lots of it. Home grown or Mr. Kyte's debug routine. You can also use the Oracle supplied package DBMS_APPLICATION_INFO. I have gotten into the habit of using set_session_longops and it helps tremendously in monitoring those pesky long running datawarehouse programs.

Be liberal in your use of spaces: ADD_MONTHS( TRUNC( SYSDATE, 'MONTH' ), 2 )
Capitalize keywords like SELECT, INSERT, UPDATE and DELETE.
Standard functions should also be capitalized.

I'm sure I've got more and I'll add them here as time goes on.

Wednesday, October 3, 2007

To Test or Not to Test?

OK, I'll stop with the Shakespearean silliness.

Why do we test everything?

I'm talking about one line of code like this:

Before
WHERE line_of_business IN ( 'XYZ', 'ABC' )


After

WHERE line_of_business IN ( 'XYZ', 'ABC', '123' )


(this was the cause of my first screwup)

I added one extra value to the IN list. Does that really require complete testing of the entire (say, 8 hour) process?

I've been in IT for 5+ years now and I understand the importance of testing, but this seems a tad ridiculous to me.

Is there a better way? Unit testing performed by another developer? Visual inspection by the DBA or group of peers?

Wednesday, September 26, 2007

To LOOP or Not To LOOP - Revisited

In a previous post, I pondered the decision of LOOPing (i.e. PL/SQL calls to a lookup). Given 8 days to code and a 90 hour week last week I think I came up with a decent solution.

Initially I went the easy route, writing PL/SQL functions that would populate collections and then do the matching. I tried to tune the functions so that the collection would be populated only once for each line of business, and then ordered the result set that would be passed through. In my mind, this would reduce the number of cursors opened. I wasn't sure if that would make a difference or not, but I was going to give it a try.

Well, it was dog slow. One of the lookup calls was estimated to take days to complete (I only know this because I have gotten into the habit of using dbms_application_info.set_session_longops).

So I rethought it.

I only needed to get the keys, right?

I first created a primary key on the staging table (same key that would be used in the dimension). I then created a "work" table that would reference that key and then store the other key that I needed to lookup. I also made the foreign key a unique key (one to one relationship) so that I would know immediately if I did something wrong. I then used straight SQL to populate that table. One pass for all the values that matched and then another pass with those that didn't have a matching record (padded rows).
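
All of the names below are made up and the padding rule is just for illustration, but the shape of the approach was something like this:

--the "work" table: one row per staging row, constrained so mistakes show up immediately
CREATE TABLE lookup_work
(
stage_id
  CONSTRAINT fk_stageid_lookupwork REFERENCES member_stg ( stage_id )
  CONSTRAINT nn_stageid_lookupwork NOT NULL
  CONSTRAINT uq_stageid_lookupwork UNIQUE,
lookup_id NUMBER(10,0)
);

--pass 1: straight matches
INSERT INTO lookup_work ( stage_id, lookup_id )
SELECT s.stage_id, l.lookup_id
FROM member_stg s, lookup_tab l
WHERE s.member_number = l.member_number;

--pass 2: rows that only match on the padded value
INSERT INTO lookup_work ( stage_id, lookup_id )
SELECT s.stage_id, l.lookup_id
FROM member_stg s, lookup_tab l
WHERE LPAD( s.member_number, 11, '0' ) = l.member_number
AND NOT EXISTS
  ( SELECT NULL FROM lookup_work w WHERE w.stage_id = s.stage_id );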

From days to minutes...2 minutes in fact. So I applied that technique to two other lookup situations.

The code is still pretty straight-forward and adding another line of business is relatively easy (I checked this today and it took all of 10 minutes). All the fancy PL/SQL and collection stuff I had created was thrown out the window.

So I did manage to use straight SQL, the performance is phenomenal and maintenance will be easy.

It even passed muster (albeit grudgingly) with our more seasoned datawarehouse folks. Ultimately, they couldn't complain when it ran in under two hours...

Wednesday, September 19, 2007

To CONSTRAINT or Not to CONSTRAINT

I've been having these long (good) drawn out conversations with a colleague of mine recently about constraints in a datawarehouse environment. Since I come from an OLTP environment, I want to put a constraint on everything. He on the other hand wants any constraints enforced through code.

I can understand that constraints may slow down table loads, but with the volume of data we are currently using (100 million row tables), I just don't see that as a huge impact on performance. For me, writing less code is better than any minimal performance gains we might achieve.

For instance, today he told me he wanted to add a new column whose possible values would be either M or G. I asked him (I'm lead on the project) to throw a CHECK constraint on the table limiting the values that can go in that field.
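
Something like this (the table and column names are made up):

ALTER TABLE fee_tab ADD CONSTRAINT ck_morg_fee_tab CHECK ( fee_type IN ( 'M', 'G' ) );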

For me, I guess it's part support issue and part database-purist issue. Use the database for more than a bucket; Oracle's expensive. A constraint forces developers to deal with the rule and will immediately tell them if they have done something wrong. It also makes support a tad easier as they won't have to wonder what actually goes in the field.

We talked at length about it but never really came to a decision (i.e. I couldn't convince him).

I know that a benchmark test would be the best way to prove either way, but I don't have the time right now to do it. Perhaps when things slow down again I will.

Any suggestions out there?

Tuesday, September 18, 2007

Zero Day is NOT Upon Me

I got a brief reprieve today, I guess.

Neither of my two deployments is going through tomorrow, so I have two more weeks to go until another opportunity to screw something up. ;)

Speaking of which...
I watch SpongeBob Squarepants quite a lot with a 5 year old boy in the house. I was watching this episode the other day where Patrick (the starfish and best buddy to SpongeBob) found out his parents were coming over to visit him.

He was not very happy as his parents treated him like he was stupid. SpongeBob told him that he would help out by acting even dumber than Patrick thus making Patrick actually look smart. The act worked, but too well. Patrick, even out of earshot of his parents, kept cracking stupid jokes at the expense of SpongeBob. This resulted in SpongeBob getting mad and reminding Patrick that he was only acting stupid to help him out.

Since I put my sign up (not counting my counter up top, no one here gives me a hard time), everyone has consistently beat me up over it. Today was it for me. Yes, I brought it on myself with the sign and all that, but I'm through with it. My boss mentioned that if I had to reset my sign, I wouldn't really be there to do so, as I would have been escorted out the door (he didn't say this in a mean-spirited way...he might be right next to me) and would be unable to reset it.

I started practicing how I would answer questions about my firing in my next interview.

Interviewer: "So, why did you get fired?"
Me: "Well, I screwed up production a couple of times."
Interviewer: "How did you do that?"
Me: "The first time I deployed code from development to production."
Interviewer: "Really?! Why did you have access to production?"
Me: "..."
Me: "On the second occurence, I deployed a bug into production; I changed a non-requirement driven piece of code"
Interviewer: "What about QA? Didn't they do regression testing?"
Me: "..."

I felt a little better.

I have never been happy about my mistakes; I tried to "man up" and own them. Perspective and context are good things though. I should learn from my mistakes and correct the behavior that led to them; that's all I can really do.

Thursday, September 13, 2007

To LOOP or Not to LOOP

I've already established my creds as a Tom Kyte follower.

Tom's mantra:

  • You should do it in a single SQL statement if at all possible.

  • If you cannot do it in a single SQL Statement, then do it in PL/SQL.

  • If you cannot do it in PL/SQL, try a Java Stored Procedure.

  • If you cannot do it in Java, do it in a C external procedure.

  • If you cannot do it in a C external routine, you might want to seriously think about why it is you need to do it…


I'm faced with a new datawarehouse load, eventually it will be about 100 million records. If I go the straight SQL way (embedded in PL/SQL), I'll need to create multiple objects to support that method and each time we add a new line of business, we'll have to add more objects and more code.

If I can encapsulate it in PL/SQL, I can eliminate most of the necessary supporting objects, but at a big cost: performance. It would make coding much quicker and the result ultimately more maintainable (in my opinion); adding a new line of business would be as simple as updating the packaged functions with an additional control check.

It's a difficult pill to swallow though. I'm originally an OLTP guy and I can write pretty good SQL. Being so new to datawarehousing concepts I feel like I am cheating somehow. Making that decision of when to move to slow-by-slow seems exceedingly difficult. Hopefully with more experience in the datawarehouse environment it will become easier...but it's still tough.

The Chicken Almost Came Home to Roost

We have a daily meeting with the Business concerning the current, long running project. This is the one I have screwed up a couple of times.

I have a scheduled deployment next week, mainly just performance improvements. I've managed to get this down from 10-12 hours to 7-8.

At the end of the meeting, the Business questions the need for the performance improvements - we're re-architecting my solution in parallel because mine was just a conversion from SAS (yuk!) - as we'll be live in just 2 months with the new one.

Obviously, I know why they're thinking that.

IT, me, can't be trusted to do it right. Since I've, umm...screwed up a couple of times. Why do this if it's not broken?

I just hung my head low, I knew, and I couldn't really argue with them. I had no ground to stand on. Our PM said that he'd take it back to our manager and let them know.

I went to lunch and thought about this blog entry I would be writing.

I got back from lunch and my PM informed me that this would be going into production...the Business' big boss, Miss VP, said so.

Woohoo! Someone has some semblance of confidence in IT (me)!

I've done everything I could to make sure that I didn't do something silly. We had a peer code review to compare the most recent build against that which was in production. My unit tests were much more thorough. I worked with QA to get them to look at specific points. Let's just hope all goes well.

This is a big test for me. Either I pass and gain some credibility back or fail and lose my job. Wish me luck!

Code Style

The importance of a specific coding style has become more apparent to me now that I work in a team environment. I have my way, which is of course the best, and everybody else has their way, which isn't nearly as good.

I'm sure if I could make the rules, I'd have everyone writing code according to my ways. Power corrupts and all that.

I have learned to accept others' ways though. It's been difficult at times. If the code is readable and it works, I usually just bite my tongue.

Strangely, I love to type. I manually create all of my scripts. I make my code "pretty." I don't use GUI tools to auto-generate table definitions, and I especially don't use tools to format my code.

Toad Formatter, or whatever it's called, is my enemy. I'm sure Toad is not the only one with some sort of auto-formatter and I don't like any of them. For some reason I believe it is an abomination.

When I get to do the technical interview, I scan their resume for SQL*Plus. If I don't see it, I ask them what tools they use. If they don't mention SQL*Plus, I'm skeptical of their abilities...until I get to the five constraints question.

This is not to say that people that use Toad or SQLDeveloper are bad coders or anything, it's just my preference.

So what's your preference?

Tuesday, September 11, 2007

Inline vs. Out-of-line Constraints?

Today one of my more feisty colleagues and I had a discussion about constraints. I had asked him if he wouldn't mind naming the NOT NULL constraints in a particular table definition. With this guy, nothing is easy. ;-)

So he came over and we talked about inline vs. out-of-line constraints. I asked him the advantage of out-of-line. He asked me the advantage of inline. I'm sure his answer was better than mine as he is much more articulate.

For me, it's mostly a style thing, except for FOREIGN KEY constraints which, if defined inline, will inherit the data type of the parent column. That makes life easier if you ever need to change the data type of a parent key (not a recommended best practice mind you). I just think it looks prettier (fact-based evidence). It's all in one file and I like to see how many constraints I can put on a single table. I'm shooting for at least one per column, to save myself the time of coding exceptions down the road. Let the database do its job!

Here's a sample of inline constraints:
DROP TABLE s;
DROP TABLE t;

CREATE TABLE t
(
id NUMBER(10)
CONSTRAINT pk_id PRIMARY KEY,
name VARCHAR2(30)
CONSTRAINT nn_name_t NOT NULL,
age NUMBER(2,0)
CONSTRAINT nn_age_t NOT NULL
CONSTRAINT ck_gtzero_age_t CHECK ( age >= 0 )
);

CREATE TABLE s
(
sid NUMBER(10)
CONSTRAINT pk_sid PRIMARY KEY,
id
CONSTRAINT fk_id_s REFERENCES t ( id ),
something_unique VARCHAR2(30)
CONSTRAINT uq_somethingunique_s UNIQUE
);

And out-of-line Constraints:

DROP TABLE s;
DROP TABLE t;

CREATE TABLE t
(
id NUMBER(10),
name VARCHAR2(30),
age NUMBER(2,0)
);

ALTER TABLE t ADD CONSTRAINT pk_id PRIMARY KEY ( id );

ALTER TABLE t ADD CONSTRAINT nn_name_t CHECK ( name IS NOT NULL );

ALTER TABLE t ADD CONSTRAINT nn_age_t CHECK ( age IS NOT NULL );

ALTER TABLE t ADD CONSTRAINT ck_gtzero_age_t CHECK ( age >= 0 );

CREATE TABLE s
(
sid NUMBER(10),
id NUMBER(10),
something_unique VARCHAR2(30)
);

ALTER TABLE s ADD CONSTRAINT pk_sid PRIMARY KEY ( sid );

ALTER TABLE s
ADD CONSTRAINT fk_id_s FOREIGN KEY ( id ) REFERENCES t ( id );

ALTER TABLE s
ADD CONSTRAINT uq_somethingunique_s UNIQUE ( something_unique );


So, what are the advantages of either method? I couldn't find much via google or asktom, but I probably just didn't search on the right terms. I find it hard to believe that this topic hasn't come up before.

* Note that I did use 5 constraint types

The Good Manager

I read Lewis Cunningham's article today What Keeps you at your Job?

This is something I think about quite often. I am currently in a very chaotic, immature organization. The process to deploy code changes about every other day and, of course, none of it is documented. Then there's the fact that I had complete control at my previous job: I was the DBA, Architect, Web Developer and Designer (suck at that), and most importantly Database Developer. I had a very good manager who literally just let me run wild (within reason of course).

For me, that was a perfect situation. I felt I was under-utilized at my previous job and that was the perfect opportunity to flex my muscles. I learned a great deal there and I am forever thankful for that.

One of the big reasons I took my current job was because of the chaos and the immaturity of the IT organization. There are countless opportunities to help shape the future, to build the foundation. I'd also get to experience life in the for-profit corporate world where performance is rewarded financially. There's also significant room to advance relatively quickly compared to more established environments.

I have learned things on the technical side, but far and away my biggest gain in knowledge is in how to do software development in a team environment and the peculiar politics of a company.

I have my manager to thank for that. He is a former military officer who attended one of the military academies. He has worked in our industry for a number of years and is our subject matter expert on the financial side of things.

  • He gives us (developers) the opportunity to voice our opinions.
  • He gives us a view into the politics.
  • He gives us the big picture view.
  • He is fair.
  • He does not do things just because that's the way they're done. He fights those battles so that we don't have to do it the wrong way.
  • He backs us up.
  • Shit doesn't roll downhill with him.
To me, those are all terrific qualities. When I've screwed up, he tells me; usually though, he asks me questions so that I will come to the realization. He's been an outstanding leader and most importantly (to me anyway), a teacher.

If he ever decided to leave, I might just have to follow him.

The Countdown Timer

So I found a handy little countdown timer from this site so that I can replicate my hand made sign in my cube. This all stems from a previous incident.

Now I'll have a reminder at work and on the blog not to touch that which is out of scope.

Monday, September 10, 2007

Oracle Tools I've Used

Tools:
SQL*Plus
APEX - Application Express (formerly HTMLDB)
JDeveloper
SQL Developer
Reports Builder
OC4J
Discoverer
Oracle Server 8i, 9i, 10g (Windows)
Oracle Application Server 9i, 10g (Windows)
rman
oradim
lsnrctl
tkprof

Coming Soon:
Oracle Warehouse Builder

Database Features:
Java
Object Types
Workflow
Advanced Queueing (in conjunction with Workflow)
Heterogeneous Services

Features I'd like to use:
Change Data Capture
XML
Spatial
interMedia
Regular Expressions

Database Supplied Packages:
DBMS_OUTPUT
DBMS_LOB
UTL_FILE
DBMS_APPLICATION_INFO
DBMS_CRYPTO
DBMS_UTILITY
DBMS_METADATA
DBMS_EPG
DBMS_HS_PASSTHROUGH
DBMS_JAVA
DBMS_JOB
DBMS_LOCK
DBMS_MVIEW
DBMS_OBFUSCATION_TOOLKIT
DBMS_RANDOM
DBMS_SESSION
DBMS_SQL
DBMS_STATS
DBMS_XDB
DBMS_XPLAN
HTMLDB_APPLICATION
HTMLDB_ITEM
HTMLDB_UTIL
HTP
OWA_COOKIE
UTL_MAIL
UTL_RAW
UTL_SMTP
WPG_DOCLOAD

Packages I'd like learn to use (10g):
UTL_DBWS
DBMS_CDC_PUBLISH
DBMS_CDC_SUBSCRIBE
DBMS_DATA_MINING
DBMS_DATAPUMP
DBMS_SCHEDULER
DBMS_OLAP
DBMS_PIPE
DBMS_STREAMS
DBMS_STREAMS_ADM
DBMS_STREAMS_AUTH
DBMS_STREAMS_MESSAGING

Wednesday, September 5, 2007

Working with the Business

I was once one of those annoying Business users who constantly pestered the IT department.

Probably one of the biggest reasons I got into IT was because I didn't like the control that IT (seemingly) had over me. I wanted that kind of control, but I also wanted to share it.

I have tried to carry my end user mentality with me to IT. I try and go out of my way to learn the business; from reading procedures to just sitting down with them and discussing what they do. This was fairly easy to do in the smaller companies I've worked with, but now that I am in a 3,000+ person company, it has gotten a bit more difficult.

I read an article by Steve Jones about getting closer to your business. I couldn't agree more.

I would love (and have asked) to be able to spend more time with the Business folks. Perhaps shadowing them for a week, or just writing reports so I can get a better feel for what they do with the data/information we provide them. Better yet, make it part of the on-boarding for new employees. Spend 6 weeks with various departments or something like that. Some companies have management programs that do just that.

Anyway, I believe time spent with the Business makes me a better developer and hopefully builds up a good working relationship (trust) between IT and the Business.

Business Logic: In the Database or in the Application

When I started in the Data Warehouse, I began subscribing to as many BI/Data Warehousing blogs as I could find (there don't seem to be a lot of them). Intelligent Enterprise (Ralph Kimball) and Bill Inmon seem to be the most popular or well known. I also read Mark Rittman (and Mr. Mead), David Aldridge, Lewis Cunningham and Dratz, who all seem to have more of an Oracle lean.

Today I read a post from Dan Linstedt titled "How Data Models can Impact Business." I followed the rather lengthy, but descriptive post, until I got to this part:

"Ok, I kind-of buy it, but what about Referential Integrity, when should that be enforced?
In two places:
1) When the data is captured within the application capturing it - it would clean up a LOT of these source systems, and put much more rigorous business logic (and cleaner data) into the source systems to begin with.
2) When the data is "released" for users, to reports, to screens, to output. This is when reusable common services / routines for getting data out are helpful. They implement the referential integrity in the application layer."

Now, I haven't been in IT all that long (5+ years now), but I put as much business logic into the database as I possibly can. I use the front end (APEX, woohoo!) to handle row color or things along those lines.

If you start with a good, flexible model, have a strong database team (DBAs, Developers) and you build a good API to your physical model (no INSERT, UPDATE or DELETE to any users, the only entry point to your tables being your API), flexibility and maintainability should not be a problem. Making changes should not entail a monumental effort. I suppose if the application in Mr. Linstedt's article is the only point of entry into the tables, I probably wouldn't really disagree (he's just moved it from my database API to his application), but then you have a giant bucket. Why not just use what you have paid for and build it in the database? That way, more than one application can use the same API over and over.
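
A minimal sketch of what I mean by the API being the only entry point (the names are made up):

CREATE OR REPLACE
PACKAGE people_api
AS
  PROCEDURE insert_person
  ( p_first_name IN PEOPLE_TAB.FIRST_NAME%TYPE,
    p_last_name IN PEOPLE_TAB.LAST_NAME%TYPE );
END people_api;
/
show errors

--applications get EXECUTE on the API, not DML on the tables
GRANT EXECUTE ON people_api TO app_user;
--note: no INSERT, UPDATE or DELETE granted on people_tab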

Wednesday, August 29, 2007

Fun with SQL

My group held its first Peer Code Review today. It was my code (the code I've screwed up a couple of times). One of the requirements (I stuck to them this time, yay for me!) was to limit the data we were pulling from another system to only the last 48 months. There were three different procedures that needed this date, but I didn't want to break the existing automated job, so I defaulted everything to:

p_date_from DATE DEFAULT ADD_MONTHS( TO_DATE( TO_CHAR( SYSDATE, 'YYYYMM' ), 'YYYYMM' ), -48 )

I wanted the last full 48 months and this worked just fine.

This is a fairly long process which takes anywhere from 8 to 12 hours to run, depending on system resources. One of my colleagues asked what would happen in the event of a crash (on the last day of the month) and the required re-start (on the first day of the month). One set of data would have everything going back 49 months and the other 48 months, which would create a lot of errors. A great catch.

Since we're in maintenance mode now (re-architecting the whole thing), we decided not to stop the deployment; we looked at the calendar to see when the 1st would next fall on a Sunday (the job is scheduled weekly on Sunday)...which, now that I think about it, was probably wrong. We should have been looking for a Sunday that was also the last day of the month. Anyway...

I like trying to answer questions with SQL. So I wanted to find the occurrences on which a Sunday was also the first day of the month (despite realizing, while writing this, that we should have been looking for the last day; again, digression). Here's my resulting query:

SELECT
  ADD_MONTHS( TO_DATE( TO_CHAR( SYSDATE, 'YYYYMM' ), 'YYYYMM' ), myrownum ) first_day_on_sunday
FROM
(
  SELECT rownum myrownum
  FROM dual
  CONNECT BY LEVEL < 100
)
WHERE TO_CHAR( ADD_MONTHS( TO_DATE( TO_CHAR( SYSDATE, 'YYYYMM' ), 'YYYYMM' ), myrownum ), 'D' ) = 1
AND TO_CHAR( ADD_MONTHS( TO_DATE( TO_CHAR( SYSDATE, 'YYYYMM' ), 'YYYYMM' ), myrownum ), 'DD' ) = 1
/

FIRST_DAY_
----------
06/01/2008
02/01/2009
03/01/2009
11/01/2009
08/01/2010
05/01/2011
01/01/2012
04/01/2012
07/01/2012
09/01/2013
12/01/2013
06/01/2014
02/01/2015
03/01/2015
11/01/2015

I'm not so sure it impressed my colleagues, but it certainly reinforced the notion that I'm a nerd.

Humor in the Workplace

In an effort to deal with my recent screwup (one of two recent ones), I decided to try and laugh at it a bit.

Our organization is very young and since the on-boarding of our new CIO, our directive has been to stabilize current processes. In that regard, the VP who heads up the Infrastructure team has placed placards on his office window signifying the number of days one or another system has been up with no interruption in service.

I ran into him in the hallway and suggested he put one up for me too, "Days since Chet 'messed' up Production code." He got a hearty laugh out of that.

I've always been able to laugh at myself, if not immediately, then soon after. So I put up a hand made sign in my cube that read the same thing; I'm at 21 days. Hopefully it will remind me, as I'm poring through code, not to touch that which is out of scope of the current requirements. That was my mistake on both occurrences.

It's definitely a talking piece and hopefully people can laugh at it (as I can...sort of), but it will be a constant reminder to me as well.

Thursday, August 23, 2007

Why I Blog

I never really had a true mentor, well, a physical one anyway. I could ask my first boss questions, but it was apparent rather quickly that they had better be good ones.

So I took to reading AskTom every day. Mostly the new posts at first, but as time went on, there were fewer and fewer of those. Tom was mostly answering old questions. But they were fun to read and you could really get insight into his thinking. That's what I needed the most: the Why of doing things. I think this post illustrates the Why more than most; I still read that one from time to time.

Then a couple of years ago he began blogging. That really gave me a view into his mindset. I learned about instrumentation there, how to ask questions, how to answer them and the rule of thumb, to name but a few.

I think that was the first blog I read consistently.

I found his advice and his technical expertise inspiring. More than anything though, I liked having a view into his world. I believe that alone helped me get over the technical hurdle, to begin to really understand what was going on and why I should do things a particular way.

I hope I can do the same. I want to share my experiences, challenges and mistakes so that others may learn from them. I also want to provide more technical content, even though much of it is answered in one place or another, just because I have a different take. Someone might relate to my style and find me inspiring (that would be way cool).

I now consume about 40 or 50 different blogs daily. Most of them are Oracle related, but I also read many business blogs, opinion blogs, security blogs and data warehousing blogs (though I haven't found one yet that really inspires).

I believe it is a good thing to be well rounded and I try to be.

So that's why I blog. I want to share. Hopefully you'll find something you can relate to here.

Wednesday, August 22, 2007

Good Day to Worse Day

Today I got the opportunity to have lunch with my CIO.

A few weeks ago I sent him a manifesto, which I would now classify as more of a philosophy. He kept promising me he would articulate a response via email. Being the CIO of a Fortune 400 company, I figured he had better things to do than write out an email to me. So I offered him a trade: lunch in exchange for the email. Surprisingly, he agreed.

Today was the day but I fully expected him to cancel; surely something else would come up. Nothing did, but he did move it back by 15 minutes.

Down to the cafeteria we went (I was really hoping to go out to lunch, just to get a ride in his Porsche Gemballa). We sat in a booth and started talking. We discussed everything from my group (excellent group of people, talented and fun), using MySQL databases for one-off projects (I was for putting them in a single Oracle database), to the state of our current OLTP application (crud).

He mentioned user-defined fields (OLTP) and I told him about one instance where someone in our company created those in an internal application. I didn't say anything at the time because it was not in my group and I didn't want to call someone out for something I thought was wrong. He told me I should have, that it would be in the best interests of the company. Then he said something that I have heard him say in our All-Hands meeting, "Let me be the one that makes the wrong decision."

That sealed the deal for me. I have liked him and what he has done since he got here, but that one comment told me that he took his job as leader seriously. I was thoroughly impressed.

So that was the good (great) part of the day. I felt great because our CIO let me prattle on for an hour and actually listened. He even used one of my analogies (well, not mine really) in his management meeting a short time later.

Now on to the worse part.

The application I have been working on for the last few months required an Emergency Fix (EFIX) because they had detected a bug. I realized quickly that I was the culprit. I had changed something that previously worked in an effort to re-factor the code. It wasn't broken. There was no specific business requirement to re-factor it; I just did it...and screwed it up. The ironic part is that I had just sent out an article to my co-workers about how discipline makes good developers.

I told the Business folks what I had done; apparently I hadn't earned any brownie points with them, because they escalated it to my boss and VP. Which of course had to go to the CIO as well...

I definitely screwed up. There's no way around that. I know better than that. Oh well. It was certainly a lesson in humility. I'm just thankful I still have a job...

Tuesday, August 21, 2007

SQL Puzzle, Sort of...

My group was once asked, "What's the first number that contains an 'A' in it?"

This has become a once-a-week exercise, just for fun.

I of course started counting from one to see where it might happen. Then I decided to use the magic of SQL to help me go through all the numbers.

On 10g:


SELECT num
FROM
(
  SELECT
  TO_CHAR( TO_DATE( '01/01/4712 BC', 'MM/DD/YYYY BC' ) + ( rownum - 1 ), 'JSP' ) num
  FROM dual
  CONNECT BY LEVEL <= 1000
)
WHERE INSTR( num, 'A' ) > 0;

NUM
-------------------
ONE THOUSAND

I found the format model in the documentation and moved on from there. I think my original query was a bit more elegant, but this works.
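
For anyone wondering why this works: 01/01/4712 BC is Julian day 1, and the J format model returns the Julian day number, so JSP spells out the count of days since then. A quick illustration:

SELECT TO_CHAR( TO_DATE( '01/01/4712 BC', 'MM/DD/YYYY BC' ) + 41, 'JSP' ) num
FROM dual;

NUM
-------------------
FORTY-TWO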


Pretty cool stuff.

Saturday, August 18, 2007

How I Got Started...

It all started 5 1/2 years ago, my first IT job as a reports developer. I was given a tnsnames file and SQL*Plus. What? What the hell am I supposed to do with this?

I learned about databases through Microsoft Access. While I would never want to go back to it, I am thankful it was there or I probably would never have learned. I was basically a secretary (administrative assistant for the PC). I got tired of entering the same thing every month and knew there had to be a better way. So I put it in Access to speed things up a bit. I began to annoy the crap out of the IT department. I now wanted raw feeds of the data so that I wouldn't have to type in anything...

Finally a friend came along and offered me a job.

That's when I met Oracle.

So there I sat on my first day wondering where the tables were. I couldn't "see" them. I needed to "see" them dammit. Within a week I had set up an Access database to be my front end to Oracle; I couldn't stand not "seeing" the tables.

That got old real quick though. The sheer volume of data caused that. Within weeks, I was using SQL*Plus and beginning to learn the data dictionary.

I'd ask my cube mate, "How do I see the source of the view?"

SELECT text FROM user_views WHERE view_name = 'MY_VIEW';

Oh yeah; SET LONG 100000

I was fortunate enough to have a helpful cube mate.

And so began my obsession...

I would say it took a good year before I really started to feel comfortable with SQL*Plus and Oracle, and 5 years later, I'm still using it.

I'm glad I was forced to learn Oracle that way. I believe that I have a far better understanding of how Oracle works because of my experience.

Thursday, August 16, 2007

What I Read...


Database

Oracle Documentation.

For over 5 years now I have been reading Tom Kyte's AskTom site hosted on Oracle's site. This is the first place I go when I have an Oracle related problem.

For the past 2 years or so, I have also been reading his blog.

The second source of information concerning Oracle is Metalink. I mostly find all my answers on AskTom, but for the more obscure problems, this is the best place to go.

Third would be the OTN forums, mainly the SQL and PL/SQL forum.

The C.D.O.S. group is next, followed by a general search on Google. Most of those lead to Morgan's Library on the PSOUG site.

Of late, I have found many blogs as well. Particularly Dratz, who has a very informative series on How Oracle Really Works. We've exchanged a few emails over the past couple of months and I believe we have a similar belief about the use of the database (particularly Oracle).

Lewis Cunningham is another good source of information. He talks about Oracle, EnterpriseDB and PostgreSQL. He is currently utilizing Streams (Advanced Queueing), has used XML in Oracle and is/was a database architect. I have been reading his blog for a couple of years now.

I also read Howard Rogers, David Aldridge, Mark Rittman, and Jonathan Lewis (though I don't understand much of what he posts).

APEX - Application Express (formerly HTMLDB)

The OTN forum is my first choice. I contributed quite a bit there before moving to Tampa late last year.

John Scott - He's a big contributor to the APEX community. In fact, he received the Oracle ACE award last year. Mainly though, he runs a site called shellprompt which allows you to host your APEX applications. I was able to utilize his service for about a year with my previous business and I still have a site hosted with shellprompt. John's service was second-to-none!

Patrick Wolf - I've been reading his blog for a few months now, but unfortunately, I don't work with APEX (at work) anymore.

Carl Backstrom - He works on the APEX team at Oracle. His sample/example stuff is excellent.

Business

37Signals - Very transparent business that posts about how they run a business, their development philosophy (getting real), and design decisions. I have used their Ta-da List (to do list) and Writeboard (collaborative writing).


I'll continue to add more links here as time goes on; or you can just check out my delicious links to Oracle.

Swapping Partitions

Recently I learned a pretty cool trick, swapping partitions.

I'm sure I read it originally on AskTom, but I never had a need for it. Now that I am in data warehousing though, moving millions and millions of records around takes time...a lot of time.

On this recent project, I had a process which brought in the data from a different database. There were some minor transformations on that data including the use of an analytic function to de-dup the data (I would assign a 1 to the first record matching my criteria and that's what I would use for everything downstream).
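
Something like this is what I mean (natural_key, load_date and staging_table are stand-in names for this sketch); ROW_NUMBER() hands a 1 to the first record in each group, and everything downstream only looks at the 1s:

SELECT *
FROM
(
  SELECT s.*,
         ROW_NUMBER() OVER ( PARTITION BY natural_key
                             ORDER BY load_date DESC ) rn
  FROM staging_table s
)
WHERE rn = 1;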

So I have essentially six tables: one which is the target for the source data, four which handle the transformed data, and one which holds all of that transformed data. Initially I would perform an INSERT INTO...SELECT FROM...but it would take upwards of three hours for this one particular set of data (~26 million records), and I was doing that twice (six hours).

Sitting with a colleague trying to figure out a way to speed up the process, we came across an article on AskTom which mentioned swapping (or exchanging) partitions with a regular table. The final table (of the six) is partitioned by line of business, a perfect candidate.

So I issue the statement found there:

ALTER TABLE target_table
EXCHANGE PARTITION gmd_lob
WITH TABLE source_table;

That didn't work because the table structures did not match. So I went about changing the table structure to match that of the target table.
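
A handy way to find the mismatches (just a sketch against the data dictionary, reusing the placeholder table names from the statement above) is to MINUS the column definitions; run it in both directions to catch columns missing from either side:

SELECT column_id, data_type, data_length, data_precision, data_scale
FROM user_tab_columns
WHERE table_name = 'TARGET_TABLE'
MINUS
SELECT column_id, data_type, data_length, data_precision, data_scale
FROM user_tab_columns
WHERE table_name = 'SOURCE_TABLE';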

I issued the statement again:

ALTER TABLE target_table
EXCHANGE PARTITION gmd_lob
WITH TABLE source_table;

Success! I then went on to change the other three tables' structures to match that of the target table. What had taken three to four hours now took less than five minutes!
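
Worth noting from the documentation (we didn't need it here): the statement also accepts INCLUDING INDEXES, to swap local index partitions along with the data, and WITHOUT VALIDATION, to skip the check that every row actually belongs in the partition. Only use the latter if you are certain the data qualifies:

ALTER TABLE target_table
EXCHANGE PARTITION gmd_lob
WITH TABLE source_table
INCLUDING INDEXES
WITHOUT VALIDATION;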

Tuesday, August 14, 2007

My Manifesto

I am an Oracle Nerd.

I am not an Oracle fanatic.

I believe that SQL Server, DB2, MySQL, et al. are all fine databases, but I don't know them like I know Oracle.

Regardless of your database, use it. Stop treating it like a bucket for your data. If you are going to do so, you might as well use flat files...or better yet, XML! Oracle is an incredibly powerful database and I believe in using what I (well, my employer) paid for.

At my current employer, we have numerous Oracle databases on site: Operational Data Store (ODS - mostly raw data from outside agencies), our production OLTP system and 2 data warehouse databases (we're currently trying to re-architect into one). Out of all those, we use exactly zero built-in features. No CDC (Change Data Capture), no Advanced Queueing (Streams), hardly any analytics.

I try to keep things as simple as possible. My experience has been that most people want to overcomplicate things...hopefully I can help to change that at my workplace now. It won't be easy, but I'll certainly try.

I want to share my experiences: my new role as a data warehouse developer, things I have learned designing, developing and maintaining OLTP systems, and my experience with the incredibly powerful APEX (formerly HTMLDB).