
Monday, August 27, 2012

ODTUG @ Oracle OpenWorld

I really need to learn how to say no.

I, somehow, got roped (pun intended) into helping to put together a Sunday Symposium for ODTUG at Oracle OpenWorld this year. If you've been following along for the last couple of years, you'd know that I have this thing for DevOps (devops to me). For KScope 2011, I tried to put together a Sunday Symposium on this very topic. Lewis slapped me down though, "Tools Justice, tools." So we amended the Sunday Symposium to focus on the tools while doing those devopsy kind of things. Much better.

This year I tried to do the same thing. (Side note: The reason I am so "obsessed" with this topic is that I think things like testing and source control are done very poorly in the database world, the Oracle database world specifically. I know there are places that do it right; my first organization did it right, and that made me a bitter old cynic at each successive organization I came across. I also realize that doing some of these things on/in the database is much more difficult than wrapping up a binary or a web application. Data. It usually comes down to the data. Blah blah blah.) Kris Rice of Oracle SQL Developer fame ran KScope 12's Sunday Symposium...I gave him the topic, and he made it happen. Bonus points for Kris as he won the ODTUG Volunteer Contributor of the Year Award. You can find the slides from that Symposium here.

So here I am again, pushing the methodology or process involved around Oracle Database development. I think it's important. Very important. Fortunately I have a mentor who has spent way more time thinking about this than I have. That would be Dominic Delmolino. Cary Millsap had sent me a presentation that Dominic had done at UKOUG in 2010. It was fantastic, and about 23 hours' worth of material. That was the original inspiration behind the 2011 KScope Sunday Symposium, and I have tried to keep it going ever since.

So, back to the ODTUG Sunday Symposium at Oracle OpenWorld this year.

When: Sunday September 30th, 2012, 8:00 AM - 4:30 PM
Where: Moscone West - 2009
Who: Anyone can attend (provided you have an OOW pass of course)
Title: From Dev to Ops and Back Again: Insight into the Development Lifecycle from the ODTUG Experts
Presenters:
  • Gwen Shapira - Pythian, Oracle ACE, Oak Table
  • Dominic Delmolino - Agilex, Oak Table
  • Timothy J. Gorman - Evergreen Database Technologies, Oracle ACE Director, Oak Table
  • Robyn Sands - Oracle, Oak Table
  • Stewart Bryson - RittmanMead, Oracle ACE
  • Kellyn Pot'vin - Enkitec, Oracle ACE
  • Dan Norris - Oracle, Oracle ACE Director Alumnus, Oak Table
  • Cary Millsap - Method R, Oracle ACE Director, Oak Table
Pretty impressive line-up, isn't it?

I'll be posting more details in the days and weeks to come. If you're heading to Oracle OpenWorld this year, check out one or more of these sessions, you won't be disappointed.

Thursday, December 15, 2011

Trace Data: Not Just For DBAs

On Wednesday, I attended Cary Millsap's Mastering Oracle Trace Data class here in Tampa.

Why?

Why would I go? I am working with OBIEE, which is about 42 layers above the database...who cares about trace data? Well, performance is usually the number 1 item on everyone's list. Page loads too slow. Report takes too long to run. Whether it's trying to tune the application server (WLS) or figure out why Report A takes so long to run, we do a lot of performance analysis. In most cases, it ends up being the SQL that is run. What I mean by the SQL is that it's usually bringing back quite a few records. I've seen people try to pull back millions of records and then pivot all that data, putting a nice load on the BI Server and hogging all the resources from everyone else.

On occasion though, there are other things that are going on (with the SQL) that we can't pinpoint.

Recently we had to back out a production implementation because one of the load processes seemed to just hang, threatening to slow down a larger process.

I asked the DBAs why.

Crickets.

Shouldn't that be an answer a DBA provides?

Disk? Network? CPU? Memory? Which one?

Crickets. (I didn't ask those exact questions, I think I said, "Reads? Writes? Network? Load?")

That is just one of the reasons I wanted to attend Mr. Millsap's class. That, and I've heard he's well regarded and does a pretty decent job presenting. OK, I admit it, I just want to show the DBA up. There, said it.

I really shouldn't have to though. It's supposed to be a partnership. They don't know much about OBIEE, so I help them there. I expect help in things like this.

Why? Part II

If you are a developer, understanding trace data will make you better. You'll no longer have to guess, you'll know.

Of course there's what I hinted at above, being able to go to your DBA(s) and prove something. No better feeling in the world.
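And on the "you'll know" front: the raw material is just a trace file, and you can turn on extended SQL trace for your own session yourself. A minimal sketch, assuming Oracle 10g or later and the ALTER SESSION privilege ('slow_report' is just a made-up tracefile label):

```sql
-- tag the trace file so you can find it in the trace directory
alter session set tracefile_identifier = 'slow_report';

begin
  -- waits => true captures wait events (reads? writes? network?),
  -- binds => true captures bind variable values
  dbms_session.session_trace_enable(waits => true, binds => true);
end;
/

-- ...run the slow statement here...

begin
  dbms_session.session_trace_disable;
end;
/
```

Hand the resulting trace file to a profiler (or a DBA) and the "crickets" questions above have answers.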

How?

MR Trace is by far the easiest. It's integrated with SQL Developer. It's a great place to start.

MR Tools - For the more advanced professional. Mostly geared towards the DBA type, but incredibly useful to developers as well. It includes:

- mrskew: your trace file profiler and data miner
- mrls: your trace file lister
- mrcallrm: your trace file correction fluid
- mrtrim: your trace file tim calculator
- mrtrimfix: your trace file time value fixer

Method R Profiler:
The classic tool that started it all, the Method R Profiler is software that makes simple work of knowing exactly why your application consumes the response time it does. With minimal training, a Method R Profiler user can—in just minutes—identify the root cause of an Oracle-based application performance problem, propose sensible solutions to it, and predict the end-user response time impact of each proposed solution.

There are of course other products, check them out here.

Ask Mr. Millsap to come to your town. Try out MR Trace. You won't be sorry you did.

Friday, July 22, 2011

SOUG: Thinking Clearly About Performance

Alternate Title: Cary Millsap is Awesomesauce.

Last night the Suncoast Oracle Users' Group hosted Cary Millsap of Method-R Corporation. You may have heard about Mr. Millsap, he's written a few books, a few papers, done a couple of presentations...

Thinking Clearly About Performance
Find his paper here.

While the paper is good, I don't think it compares to the presentation.

This was the second Millsap presentation I've gotten to see this year; the last was in June in Long Beach on My Case for Agile.

We were treated to 2+ hours ending a little after 9. There wasn't a dry eye in the house. Oh wait, that was something else. It was a packed house, approximately 50 attendees including Enrique Aviles (aka "evil Les" or @eaviles94) and his friend Eric, who drove down from Orlando.



One of my (many) takeaways was finally understanding what the prepare, execute and fetch were.

Here's the sequence diagram...



Here's the profile...



I had never thought of it in the context of what something like Java does. Once I did, easy. Here's some pseudo-code:
ps = conn.prepareStatement( "SELECT table_name FROM user_tables" ); // prepare
rs = ps.executeQuery();                                            // execute
while ( rs.next() ) { ... }                                        // fetch
OK, not the best example, but it helped me finally make the connection.

It's All About the Skew
So you have CPU, Network, IO and Something Else. Is it divided evenly among the 4 components (talking about performance here)? 25% CPU. 25% Network. 25% IO. 25% Something Else. No, typically it's not. When you see the spike, you know that's where you start to look (is that Axiomatic?). I can't do it justice (pun intended)...so read the paper.

The beauty of this approach though is that it's not just confined to database performance, it's software performance in general. I think the "software" people get it more than we database people do too. You have to instrument your code if you are ever going to be good at Performance Tuning.

If you ever have the opportunity to see Cary speak on this topic (really, on any topic, he's incredibly good at breaking down a complex topic and making it digestible by mortals), don't hesitate, go see it.

The Future
Cary's got some plans for the near future too, so you may just get that opportunity sooner than you think. I won't spoil it, but I will announce it after he goes public.

Monday, June 27, 2011

KScope 11: Cary Millsap

Today was a big day for me, my first Cary Millsap (@carymillsap) presentation.

I slept in, preparing my mind and body for the event (and missing the keynote) (it had nothing to do with beer consumed either).

My Case for Agile Methods

Not only is it my first time seeing him speak (yesterday counts, sort of), but it's something I can actually understand and have a very strong opinion about. Not that Thinking Clearly About Performance is way over my head, but it doesn't affect me as much as methodology does.

I fight with methodology on a daily basis, mostly resulting in me hitting my head against the closest wall.

I find that when I don't tweet or I'm not inclined to check my mail or twitter, something is going on that has my attention. That was the case this morning (minus a small family emergency that was eventually averted).

The slide deck itself was minimal, but very well done.

With his reputation as a technical leader in the Oracle community, you know you're going to get great content.

The good stuff is in Cary's style of delivery.

He tells stories.

I love stories. I live for stories. If you've met me, I'm sure I've told you a few as well.

It's easier, I think, to relate to stories. You can see yourself in the other person's shoes. Or something like that.

As to the content...I don't think it was anything groundbreaking, and I believe Cary knew that. What was groundbreaking was the audience.

Database developers and DBAs in the Oracle ecosystem seem to think "agile" is a dirty word. That's of course a gross generalization, but save for my very first IT employer, it's been true for me (1 for 8, batting .125 (thanks Cary)).

I want to see more of this type of talk, the "how" and "why" as opposed to just the technical details of doing a particular task. I can read the docs, I can figure that out. Putting it all into context is the hard part.

Friday, June 3, 2011

Developer Access to V$ Views in Production

I love The Twitter Machine.

(embedded Twitter conversation)

Assuming you made it all the way through...how freaking awesome. Got a question? The Twitter Machine and a lot of really smart people have the answer.

Tuesday, May 17, 2011

The Zone

By Enrique Aviles

When was the last time you were in "the zone"? I'm talking about that special moment when you are able to construct a mental representation of a complex task with none of its parts missing. You've managed to take a large problem and break it apart into smaller pieces that you fully understand, thus giving you a clearer picture of the original, bigger problem. At this stage you are completely focused, ideas are flowing, and you feel confident that what seemed like a daunting task is now under your control. This mental state is typically associated with programming or software development, but it applies to pretty much any task that requires deep focus and concentration. Programmers in the movie The Social Network were known to be "wired in" when they were in the zone.



For me, this is usually achieved when I'm alone at work or at home. The seating arrangement at work is great for collaboration since I share cubicles with three colleagues but makes it almost impossible for me to get in the zone.

Recently I had to optimize a large query that feeds a Crystal Report. Notice I said query, not queries, so all the data was gathered with a single query. This query was actually a UNION of several queries each joining tables, views, inline views, and a few IN clauses. It wasn't until I was able to enter the zone that I managed to make significant progress troubleshooting the monster query and develop a solution that made a significant impact.

What are some of the hurdles that prevent us from entering the zone? Are they limited to the office? Perhaps it is better to work from home, but is that even a plausible option? Emails, phone calls, instant messaging, impromptu meetings, people asking for help and casual conversations make it very difficult for someone working on a complex task to enter the zone. Working from home could aggravate the issue since the only means of communication are limited to phone calls, emails or instant messages. Nobody knows how busy you may be so interruptions might occur more frequently than if you were in the office.

How do you find the zone? As with everything else, this could be a matter of personal preference. Some people (like me) prefer silence and virtually no external stimuli, while others are content and thrive in the midst of chaos. Regardless of what your style may be, one thing is certain: it is almost impossible to achieve deep concentration if one is frequently interrupted. Unless you possess special mental abilities that allow you to context switch as quickly as a modern CPU, most everyone requires a special setting and the right conditions in order to enter the zone.

The average IT professional would be more productive and enjoy greater job satisfaction if he/she is able to enter the zone more often. We would be able to develop better designs, troubleshoot more efficiently, and devise smarter solutions. Entering the zone is a rewarding experience not only because it helps us solve complex problems but because it shows us the level of mental prowess we are able to attain.

Thursday, February 17, 2011

Design Documentation

I got bit recently. I didn't ask for requirements and I didn't create a design document. Bad Chet.

It made for a difficult time getting that first cut done...and I missed a few very important items.

Talking with a friend today about it, he kind of hammered the point home. Do the design document and you spend fewer cycles spinning your wheels. As an example he used a project we both worked on last summer. At first, he didn't do a design document and "struggled" for a couple of weeks. When it came time to do it again (it had been dropped from that first release), he started with the design document and it only took a week.

Forced him to think about how to do it, the possible roadblocks and the ability to raise those roadblocks to the appropriate people.

For whatever reason, I had never thought of the design document in terms of OBIEE. I don't know why, it just was. For OLTP type applications, I would always, at a minimum, create a Visio diagram. That would allow me to quickly and easily spot any problems. That world is much more intuitive to me, so I can visualize it easier. OBIEE, not as easy.

So here's my pledge: no matter how much I loathe formal design documents, I will go forth and create them. After all, this is development, just a different tool (and a layer of abstraction). No more ad-hoc development for me.

Tuesday, December 28, 2010

How To: Become a Professional Developer

Twitter = Discovery



That led to this article, Staging Servers, Source Control & Deploy Workflows, And Other Stuff Nobody Teaches You.

The first paragraph reads as follows:

I worked for almost three years as a cog in a Japanese megacorporation, and one of the best parts about that experience (perhaps even worth the 70 hour weeks) was that they taught me how to be a professional engineer. Prior to doing so, my workflow generally involved a whole lot of bubble gum, duct tape, and praying. I spent a lot of time firefighting broken software as a result, to the detriment of both my customers and myself. Talking to other software developers has made me realize that I’m not the only person who was never taught that there are options superior to bubblegum. If you aren’t lucky enough to work at a company that has good engineering in its very DNA, you’re likely to not know much about them.

I consider myself incredibly fortunate, I worked for a company, well, a friend, who had all these things in mind. Not all of the mentioned mechanisms were set up, mind you; it had been pretty much a one- or two-person shop before I arrived, way back in 2002. They were on the road though. Rational was the source control tool. Rational was the documentation tool. Rational was the functional testing tool. SQLUnit was the database testing tool.

My boss was this guy, Tom Wurzbach.

When he offered me a job, he sent me this link:

http://asktom.oracle.com/pls/apex/f?p=100:1:0

I saw Tom in the title and thought it was a joke of some sort. Little did I know.

I stayed at this company for just under 3 years. I am not proud of the way I left, but that's a whole other story. That was my longest IT stint to date (+1 year at the current, yippee!).

It took me about 2 years to start to realize what I had learned at that company. In fact, I'm still realizing things now that I learned way back then.

The title? How to become a Professional Developer? I know many who wouldn't call me a professional developer...

I'm going to list the things that I have done and read in my time at that first company. I believe that set a great foundation for me and has allowed me to do all sorts of things.

1
I mentioned AskTom. Funny story (of course). I was off reading some opinion site at work. I received an email from Tom, "What are you doing?" It was rhetorical of course, he knew exactly what I was doing. Please don't read personal stuff on company time.

Point taken.

After that, when I needed to take a break, I would read AskTom. Every day. If it wasn't AskTom, it was something else technical. It was mostly AskTom though. Lots of it. More than I care to remember. Actually, I would bet that I don't remember half of it, it's all been internalized.

2
Find a mentor be it in real life or virtual. I had one sitting in front of me and I had Tom Kyte virtually. The former I didn't realize (yes, long story) until I had gone, the latter, well, he's been a mentor to a great many of you out there.

3
Read. A lot.

Don't just read stuff you agree with, read opposing viewpoints. Don't read about stuff that just works either...if you watch Derek Jeter, he makes it look so easy, like you could go out there and do the same thing. You can't. I promise. Strangely, I have gotten a whole lot out of The Daily WTF. Mr. Kyte even took the helm for a little while some years ago. Learning what not to do is just as important as learning what to do.

Read the manuals. Read the concepts guide. Read the documentation. I'm shocked when I go somewhere new and they don't have an internal Oracle library set up. What if oracle.com is down? What if it's slow?

Read the New Features guide. It doesn't take long. See what's available.

Some things you should read early and often:
- The Complicator's Gloves
- Big Ball of Mud
- How To Ask Questions The Smart Way
- Why Good Programmers Are Lazy and Dumb

4
Practice. I can't speak to other vendors, but you can download any software you want from Oracle. That's enterprise class software on your little old 'puter. I've had so many databases installed I can't count. I've always had a running database, just for that quick SQL statement (like how old am I in days), or just for a quick sanity check.
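(That age-in-days check really is a one-liner; the birthdate below is obviously a made-up example.)

```sql
-- quick sanity-check query against DUAL; the birthdate is hypothetical
select trunc(sysdate - date '1975-03-01') as age_in_days
  from dual;
```

That's exactly the kind of thirty-second question a perpetually running local database answers.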

I've installed Oracle database version 8i through 11gR2, many, many times (only recently on a non-windows platform though).

I've installed and run Oracle Application Server 9i through 10g.

OBIEE? 10g and 11g.

JDeveloper.

SQL Developer.

Forms and Reports.

EBS even.

The point is, Oracle makes it super-easy to test-drive this stuff and learn it. You have no excuses, the means to your end are out there.

5
Test. I spent a good year at the first job writing nothing but SQLUnit tests. I hated it, every minute of it.

But guess what, it was good for me. You know why?

I had to build scripts to populate and teardown the database.
I had to build SQLUnit tests to login, because we used VPD and nothing worked without a valid login.
I had to build hundreds of little tests. 7 to 10 of them for each procedure or function.

What did all that testing buy me? Well, I know what errors are thrown when you pass in NULL. I know about no_data_found and a host of other errors.
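As a minimal sketch of the kind of thing those tests drilled in, here's what a SELECT INTO does when no rows (or too many rows) come back:

```sql
declare
  l_name user_tables.table_name%type;
begin
  select table_name
    into l_name
    from user_tables
   where table_name = 'NO_SUCH_TABLE';
exception
  when no_data_found then
    dbms_output.put_line('no rows -- ORA-01403');
  when too_many_rows then
    dbms_output.put_line('more than one row -- ORA-01422');
end;
/
```

Write a few hundred of those and you stop being surprised by your own procedures.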

If I were in charge and I got to hire a junior programmer, I would make them build test cases for their first year.

So I have 5 ways to become a professional developer. I have no doubt there are more and someday I'll probably add to the list. Feel free to find the comments and add your ideas on the topic.

Sunday, November 21, 2010

IT Meritocracy

Had to look up the word meritocracy, just so I didn't look stupid:

* a form of social system in which power goes to those with superior intellects
* the belief that rulers should be chosen for their superior abilities and not because of their wealth or birth
wordnetweb.princeton.edu/perl/webwn

* a system that rewards an individual based on their own talent and ability.
www.deanza.edu/faculty/leeethan/glossary.html

* A system in which the talented are chosen and moved ahead on the basis of their achievement.
www.talentrecessions.com/terminologycontent.html

IT is very meritocratic. I love that. It's partially what drew me to IT. It has allowed me, without a formal education, to gain acceptance and make a good living.

Let's face it though, IT, generally speaking, is the Wild West. It is nowhere near as established as Accounting, or Architecture, or Engineering...there is always more than one way to do a particular activity. Not so much in those other professions.

They have history behind the way things are done. It was proven long ago, in many instances, that doing something a particular way is the best way to do it. Those professions today stand on the shoulders of giants.

We really don't have that. Our industry is just too young.

I think this is one of the reasons I enjoy the DBA world, things are provable. DBAs can prove why something is better this way versus the other way. Developers have a harder time. Like I said, there is more than one way to do things.

Technology.
Which one? PL/SQL and put it in the database? (a resounding yes from me). What about Java and moving the business rules to the middle tier? Or Ruby? Or any other language for that matter?

Those decisions are mostly based on a developer's comfort zone, or what's hot at the moment. It's not provable though, that one is better than the other. We've had the discussion here on multiple occasions (here, here, here, here...and a bunch of other ones).

How does this relate to meritocracy?
Good question. I would say that most decisions to go a certain direction are based on experience. Experience = merit, in the world of common sense anyway. IT is different though, experience does not equal merit. Just because you have worked in IT for 30 years, that does not make you an expert in a given technology. Sure, for the very rare person, that 30 years is quite valuable...they have seen it all and done it all...and they have learned along the way. I can usually spot those types easily, as there is a sort of calm to them, humility. I tend to gravitate towards those people (which is why I love OOW and other conferences so much, there are a lot of super smart people in close proximity).

I'm not perfect of course. I can remember an incident a couple of years ago where I called a colleague and friend "complicated." In fact, he wasn't; I just didn't know what he was doing. I learned quite a lot from that gentleman.

How do you tell who is who?
Is it the person like me who isn't afraid to say "I don't know"? Or is it the person who says, unflinchingly, that we should use technology A over technology B? I usually have a good idea of who's talking out their ass (including myself, on those occasions where I let my emotions get the better of me)...but how do those not "in the know" know? Managers? Other colleagues who might not be versed in a particular technology? How can they tell?

I tend to end up on the wrong side of that. My manager, or someone else, believes I'm just a malcontent. I am working on that.

The meritocratic IT environment is equally fun and frustrating. I just need to figure out a way to minimize the frustrating part of it.

Monday, October 4, 2010

Never Use RAISE_APPLICATION_ERROR Again

By Michael O’Neill
http://twitter.com/cleverideanet (professional)
http://twitter.com/oraclenude (personal)
oraclenerd articles

If you write Oracle PL/SQL, you know what RAISE_APPLICATION_ERROR is. It is an abomination of hard-coding and poor practice. If you didn't know that, I'm sorry I was the one who told you. I've written and used extensively an ultra-simple framework to eliminate RAISE_APPLICATION_ERROR from my code forever.
Here's an example (assume 11gR2) of something we all know we can do:
begin  dbms_output.put_line(1/0); end;
This will throw an unhandled ORA-01476 exception. We could write some meaningful handling of that with this:
begin
  dbms_output.put_line(1/0);
exception
  when zero_divide
  then
    dbms_output.put_line('zero divide exception caught');
end;
This coding is elegant because Oracle has conveniently predefined an exception named ZERO_DIVIDE and a corresponding pragma for us. Unfortunately, Oracle has only 22 predefined exceptions. What happens when I do this:
declare
  d date;
begin
  d := to_date('2010-09-30', 'YYYY-MM-DD'); -- works
  dbms_output.put_line(d);
  d := to_date('12345-09-30', 'YYYY-MM-DD'); -- fails
  dbms_output.put_line(d);
end;
This will throw an unhandled ORA-01861 exception. My option to handle this is less than meaningful because this is not a predefined exception:
declare
  d date;
begin
  d := to_date('2010-09-30', 'YYYY-MM-DD'); -- works
  dbms_output.put_line(d);
  d := to_date('12345-09-30', 'YYYY-MM-DD'); -- fails
  dbms_output.put_line(d);
exception
  when others
  then
    case sqlcode
      when -1861
      then
        dbms_output.put_line('literal does not match exception caught');
      else
        raise;
    end case;
end;
This leads me to the inevitable desire to create my own named exception and pragma, so I could have code that looks like this instead:
declare
  d date;
begin
  d := to_date('2010-09-30', 'YYYY-MM-DD'); -- works
  dbms_output.put_line(d);
  d := to_date('12345-09-30', 'YYYY-MM-DD'); -- fails
  dbms_output.put_line(d);
exception
  when error.ora_literal_string_mismatch
  then
    dbms_output.put_line('literal does not match exception caught');
end;
Understanding this, creating my own ERROR package with a friendly named exception and pragma for ORA-01861 leads me to the pattern of how to handle my own application exceptions, namely defining an exception and pragma.

But how does this get RAISE_APPLICATION_ERROR out of my life? Consider the ERROR abbreviated package source I use (full source: error.pks and error.pkb):
create or replace package error is

  package_name constant varchar2(32) := 'error'; -- in case you want to change the package name

  -- application exceptions and pragmas

  (snip)

  not_one_based constant string(64) := package_name || '.app_not_one_based';
  app_not_one_based exception;
  pragma exception_init(app_not_one_based, -20004);

  sparsity_not_allowed constant string(64) := package_name || '.app_sparsity_not_allowed';
  app_sparsity_not_allowed exception;
  pragma exception_init(app_sparsity_not_allowed, -20003);

  parameter_cannot_be_null constant string(64) := package_name || '.app_parameter_cannot_be_null';
  app_parameter_cannot_be_null exception;
  pragma exception_init(app_parameter_cannot_be_null, -20002);

  string_too_large constant string(64) := package_name || '.app_string_too_large';
  app_string_too_large exception;
  pragma exception_init(app_string_too_large, -20001);

  application_exception constant string(64) := package_name || '.app_application_exception';
  app_application_exception exception;
  pragma exception_init(app_application_exception, -20000);

  -- rdbms exceptions and pragmas

  (snip)

  literal_string_mismatch constant string(64) := package_name || '.ora_literal_string_mismatch';
  ora_literal_string_mismatch exception;
  pragma exception_init(ora_literal_string_mismatch, -1861);

  (snip)

  procedure throw(p_exception in varchar2);

  procedure throw
  (
    p_exception in varchar2
   ,p_message   in varchar2
  );

end;
You can see several user-defined exceptions and pragmas as well as the ORA_LITERAL_STRING_MISMATCH used in the previous example. The full source has more defined, but is not relevant to understanding the concept I am presenting.

Notice there is just one (overloaded) method, THROW. THROW is what I use instead of RAISE_APPLICATION_ERROR.

So, instead of this:
declare
  s string(3) := 'abc';
begin
  if (instr(s, 'b') > 0)
  then
    raise_application_error(-20000, 'I hate the letter b');
  end if;
end;
I use this:
declare
  s string(3) := 'abc';
begin
  if (instr(s, 'b') > 0)
  then
    error.throw(error.application_exception, 'I hate the letter b');
  end if;
end;
On its surface this doesn’t seem terribly interesting or useful. Below the surface, several powerful advantages are gained:
  • A single ERROR package encapsulates a schema's application exceptions and pragmas, giving me consistent SQLCODEs returned to my C# code.
  • No more remembering which number to use in RAISE_APPLICATION_ERROR.
  • Easier-to-understand code.
  • I can effectively organize my exceptions without sprawling them throughout a schema's packages.
  • I can extend the ERROR package (and I have) to do many more things, like logging or default messages for exceptions, without writing that into my schema's application packages.
How does it work? Taking a look at the body for the THROW method reveals all:
procedure throw
(
  p_exception in varchar2
 ,p_message   in varchar2
) is
begin
  begin
    begin
      execute immediate ('begin raise ' || p_exception || '; end;');
      -- exception is raised and immediately trapped
    exception
      when ora_plsql_compilation_error then
        throw(error.exception_does_not_exist, p_exception);
    end;
  exception
    when others then
      if sqlcode between -20999 and -20000
      then
        raise_application_error(sqlcode, p_message);
        -- this is the best/only use of raise_application_error
        -- and eliminates the need for it in application code
      else
        raise;
        -- nothing extra to do for exceptions outside the raise_application_error range
      end if;
  end;
exception
  when others then
    raise; -- finally, bubbles up the original throw() call
end throw;
There are a few caveats I have using this development pattern. I don't consolidate every exception I write into my ERROR package, only those exceptions that I want to bubble up unhandled to my C# code. I don't feel it is necessary to have the same ERROR package in every application schema. In other words, I don't evolve every incarnation of my ERROR package when I'm adding exceptions and pragmas to one schema's ERROR package. Finally, my ERROR package has a multitude more bells and whistles I'm not sharing in this post for clarity's sake. If you are interested in a more extended version of my ERROR package, let me know via Twitter (@cleverideanet).

Copyright © 2010 Michael O'Neill
Published by Permission on oraclenerd

Monday, May 17, 2010

Thoughts on Exadata

I was exposed to Exadata a few weeks ago and my brain has been churning ever since.

I can't speak about specifics (you know, the first rule of Exadata don't you?) unfortunately. I can speak about how I think it can be applied though (I hope).

I came away from that experience in sheer awe. To the point where I am rethinking the entire database landscape.

You may have read somewhere that, in regards to Exadata, you have to relearn your use of indexes. In other words, you don't need them (well, I'm sure there are cases) other than referential integrity.

What about materialized views? Indexes (non-RI anyway) and materialized views are basically workarounds for a lack of raw power. I saw Exadata scan 45 million rows (in an Explain Plan) and immediately said that it should be materialized, that you don't, or shouldn't, normally scan that many records. Then I started to think about it...why? Why materialize it if you can scan it in mere seconds?

Materialized views and indexes both require support to some degree or another. Both take development time in one way or another. With Exadata, you might just not need them anymore. That's a good thing.

Taking it out one step further, do you even need a data warehouse any more? I'm not talking about the top 5% of shops out there, I'm talking about the smaller shops that are not processing thousands of transactions a second.

Think about why you build a data warehouse. Data warehouses are designed to make reporting easier by 1, creating a standalone instance that has its own resources; 2, creating a design (denormalization in some cases) that makes it very easy to get at the data.

I may be crazy, but why create all that extra work if you don't need to? By going the DW route, you now have ETL routines, a separate database, and a separate design, all of which need to be maintained. It's doubtful the same person will be doing everything, so you hire more staff.

(Remember, I'm talking about the 95% here).

The raw power of Exadata would allow you to do everything in a single location.

I've had private conversations with people about this very subject...some think I'm off my rocker (naturally), some haven't been exposed to it and are wary of speculation, and others see some merit in my rambling.

I'm very excited about the possibilities with Exadata. What say you?

Added January 26, 2011
A great post by Jeff McQuigg: OOW BI Imp Panel Questions #3: Can Exadata Replace a DW? - I asked this question at the OOW panel, but I'm not sure if this is directly related as mine was rushed to go see Larry's speech. Good read.

Friday, April 30, 2010

OBIEE: Physical Development Environment

I had contemplated using source control before.

Related:
OBIEE Software Configuration Management by Mark Rittman
- Part 1 : Initial Deployment from DEV to PROD
- Part 2 : Subsequent Deployments from DEV to PROD
- Part 3 : Version Controlling the Project
OBIEE Enterprise Methodology Group Discussion
- OBIEE development, test and deployment
- Multiple RPDs/WebCats

Those deal with more of the software side of things. I want to explore the physical set up a little. I don't have the definitive answer, just thinking out loud perhaps.

The client is considering a ramp-up of development efforts (good thing). We were asked to think about an environment to allow parallel development. This is no easy task in OBIEE as the RPD can be a bottleneck.

While talking about it though, it sounded a lot like classic software development and source control. You pull down the code locally, fix/create/update it, and then merge that back up into your trunk (Subversion lingo anyway), into a common repository.

What if you physically separated your environment? Here's my thinking:
  1. Find the common code (init blocks, connection pools, etc) shared by all subject areas. Especially relevant if setting some variables or something for security.
  2. Create virtual (or physical) machines with the full OBIEE environment (services, rpd, web cat) set up for each subject area and only that subject area. Of course this would include all components identified in step 1.
Something like this:

pretty environment picture

Pros
  1. Allows for parallel development
  2. Easier to identify changes
  3. Easier to import than merge?
  4. ????
Cons
  1. Separating RPD into sections, scary
  2. How do you handle physical tables shared across subject areas?
I need some good contrarians out there to discuss this with.

I don't know if this has merit or not, but I wanted to put it out there. Good or bad, I'm OK with. The goal is to give the client the most reasoned, researched, supportable analysis that I can.

Tuesday, April 13, 2010

APEX Architecture

I'm not talking about how APEX internally or within the database works, I'm talking about how to best manage it from an infrastructure point of view.

I recently raised the issue with my client about how they were using APEX. I'm not here as an APEX consultant, but they didn't have a lot of experience in-house and the DBA who got it implemented had been trying for years. So I wrote up my thoughts and passed them along to my supervisor who then shared them with others.

The current setup was a stand-alone instance that was only used for APEX applications using database links and synonyms to access the source tables. I was arguing to have APEX installed on the database where the data lives.

When I got on the call, I listened to their arguments and concerns.

Then I began to think...what do you do if you have, say, 100 databases? Do you install APEX on each and every one? That's just another piece that has to be supported and maintained right?

After the call, I began to do a little research on the subject...and couldn't find a whole lot of information out there. Perhaps I wasn't searching on the right terms...not really sure.

Thinking about it further and talking with some friends, I came up with a short list of potential architectures.
  1. Install it Locally - This is my preferred method. The APEX metadata (application) lives in the same database as the data it manipulates.
  2. Database Links - Target Tables Remote, Source Code Local - This is the configuration I spoke of above. There is a single database where APEX is installed which accesses tables across database links. Synonyms would be used for tables and views to "hide" the long names and make them visible. Any source code (PL/SQL) would be local.
  3. Database Links - Target Tables Remote, Source Code Remote - Same as #2, but the source code lives in the remote database. Again, synonyms would need to be used to make database objects visible to APEX.
Numbers 2 and 3 both limit the functionality of APEX. I am aware that Automated Row Fetch doesn't work across database links. I know, for #3, you can't use the RETURNING clause, which isn't a huge deal...just annoying.

For #2 and #3, the synonyms add additional maintenance and complexity to your application(s).

Also with #2 and #3, you have additional network traffic, which, depending on the environment and/or the number of users, could be significant.
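To make the trade-off concrete, here is a sketch of the plumbing that options #2 and #3 require on the central APEX instance. All names here (link, schema, password, objects) are hypothetical, for illustration only:

```sql
-- Hypothetical names throughout; the plumbing behind options #2 and #3.
CREATE DATABASE LINK hr_link
  CONNECT TO hr IDENTIFIED BY hr_password
  USING 'HRDB';

-- Synonyms "hide" the @hr_link suffix from the APEX application:
CREATE SYNONYM employees FOR employees@hr_link;

-- For option #3, the packaged code lives remotely too:
CREATE SYNONYM emp_pkg FOR emp_pkg@hr_link;
```

Every object the application touches needs one of these synonyms, which is exactly the maintenance overhead mentioned above.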

Like I said, my first thought was to install it locally. In addition, just put it on the databases that you will use it, not every single one. I still believe that, but am trying to give a balanced and reasoned answer.

Ultimately, maybe it just depends on how you use it. If this is going to be a "professionally" written application (as opposed to something a power user might build), I think it should be installed on the databases where your data lives. If it's going to be driven by those power users as a replacement for Excel and Access databases, then the single instance accessing tables remotely might just be the way to go.

Anyway, just my thoughts. Please scream (write) loudly if you have thoughts on the matter.

Monday, April 12, 2010

Developing in APEX

I see 3 possible ways to develop in APEX.
  1. Packaged Code: PL/SQL - With this method you take, possibly, an existing application and "skin" it with APEX. Using APEX's built in Form based on a Procedure wizard, this is quick and painless. You spend most of your time just moving buttons around to suit your needs. Maybe I've been reading AskTom for too long, but that's the general approach I take.
  2. Declarative - Using all the built-in functionality of APEX which creates all kinds of objects (triggers for example) that will be managed by APEX. This is probably most geared towards the power users who don't yet have the ability to write a lot of PL/SQL but have a pretty good understanding of building an application.
  3. Hybrid - This approach mixes and matches the 2 previous approaches. You use packaged code for the bulk of your application and the declarative functionality within APEX for everything else. For example, the multi row INSERT/UPDATE/DELETE. I can write that whole thing out using TYPEs in PL/SQL, populate an APEX collection, convert that collection to a TYPE and then pass that TYPE back to PL/SQL...but do I want to? It saves a boat load of time to use the built-in features. Ultimately it depends, of course, on many different factors.
I think I will always prefer #1...like I said, I read AskTom for years, mostly for kicks.
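To illustrate the hybrid trade-off in #3, here is a sketch of moving an APEX collection into a SQL type so that packaged code can take over. The collection name, types, and package are hypothetical:

```sql
-- Hypothetical names; a sketch of handing an APEX collection to packaged code.
CREATE OR REPLACE TYPE emp_rec AS OBJECT ( empno NUMBER, ename VARCHAR2(30) );
/
CREATE OR REPLACE TYPE emp_tab AS TABLE OF emp_rec;
/
DECLARE
  l_emps emp_tab;
BEGIN
  -- APEX stores collection members in generic c001..c050 columns
  SELECT emp_rec( TO_NUMBER( c001 ), c002 )
    BULK COLLECT INTO l_emps
    FROM apex_collections
   WHERE collection_name = 'EMP_EDITS';

  emp_pkg.save_all( l_emps );  -- the packaged code does the real work
END;
/
```

That's the plumbing you write by hand; the built-in multi row processing gives you the same net effect declaratively, which is why the hybrid approach is tempting.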

Mr. Kyte on where to put data, in the DB or in the web tier:
They belong right next to the data - allows you flexible access to the data - you can build a new application WITHOUT reinventing all of that stuff.

where does the logic to display a bunch of numbers in a pie chart belong? In the application.

where does the logic to ensure that if X=0, then Y must be >= 0? That belongs right next to the data.

So, programming logic - some of it in the application - user interface code, user interaction code, error handling, display of data, report generation - in the application.

Much of it in the database - security, access control, data integrity.
I love APEX, but will it be around forever?

Tuesday, March 16, 2010

The Case for the Bit Bucket

By Michael Cohen

Mr. Cohen is most famous here for the discussions we've had before about Application Developers vs. Database Developers. Part I is here, Part II is here. Mr. Cohen is a friend of mine. I have a great deal of respect for him. We obviously disagree in some areas, but I've learned to appreciate his push-back and learn from it.

He had left a comment yesterday on my most recent post, The "Database is a Bit Bucket" Mentality (Michael O'Neill posted his follow-up, Everything is a Bit Bucket, as well). I thought the comment would get short shrift, so I asked him to write up a full post. I deleted that comment, and here is his post.


Modern RDBMSs are quite powerful today. Pretty much every one of them has full support for SQL, including vendor extensions, all of the features we've come to expect from a relational database, a full-fledged programming language built in, and quite often support for extras like full-text search or native handling of XML. Most now also ship with highly feature-specific add-ons - PostgreSQL has a geospatial package that makes it the de facto standard in that domain, MySql has hot replication in a master-slave paradigm, Oracle has....well, Oracle has all kinds of things, a full object system and Java inside, a message broker, an HTTP server, a complete UI toolkit, among other things.

So the question arises as to how much of this capability one should use. I think it's becoming apparent that the answer to this is, "not much." Why shouldn't you take advantage of as much of the database's feature set as possible? The answer is performance and scalability. But wait, aren't stored procedures faster than ad hoc queries? Yes (theoretically). Won't it be more performant to execute business logic as close as possible to the data it operates on? Why should we introduce yet another component into the architecture when the database is perfectly capable of handling a particular task?

For one thing, the programming languages and environments offered by relational databases are now relatively long in the tooth, and have been eclipsed by modern OO languages. Developers are much more productive building applications with these new languages, and find it painful and tedious to work within the relational model, with SQL. You can see proof of this now with the overwhelming popularity of ORM frameworks in most of the popular OO languages out there. Java has Hibernate/EJB/JPA and many others. Ruby has ActiveRecord, DataMapper, and Sequel. Python has SqlAlchemy and Django's ORM. And it's not because these developers lack the skills to work with the database directly. Quite the contrary, actually; it takes intimate knowledge of the database to work effectively with an ORM. What's more, the ORM is often able to make runtime optimizations that would be difficult or prohibitively time consuming to hand code. Finally, clustered caches offer massive performance and scalability improvements, handling writes back to the database transparently behind the scenes, but for the most part they preclude implementing complex business logic in the database.

The overall trend is clear, across languages and platforms. It's the movement of data out of the database and into the application layer. Less and less reliance on the database, perhaps only for archival purposes. John Davies has a good comment on this. He's operating in a unique environment with extremely rigorous performance requirements, but we're now starting to see similar constraints imposed by the web. There's a whole class of software that has come about due to the inability to scale the relational database beyond a certain point. Facebook developed Cassandra, now used by Twitter, Reddit, and Digg, among others. LinkedIn built Voldemort. My employer doesn't deal with the massive scale of these companies, but we do large scale data processing with Hadoop. HBase, another non-relational persistent data store, is a natural fit, and just about the only option really. We use MySql less and less.

Of course, not everybody is building applications with such high scalability requirements. But even for applications with less intensive scalability requirements I would argue the same tendency to minimize the workload on the database should apply. Cameron Purdy has a good quote, "If you don't pick your bottlenecks, they'll pick you." Design your application to bottleneck, he says. What he means is, your application is going to bottleneck on something, so you need to explicitly decide what it will bottleneck on. Unfortunately, most applications bottleneck on the database, as this is the hardest layer to scale. It's pretty easy to scale the front end, we just throw more instances of Apache out there. It's a little bit harder, but not much, to scale the app server. But it's pretty hard to scale the database tier, particularly for write intensive applications. For well funded organizations, Oracle RAC is the standard. MySql's master-slave setup and hot replication saw it win out over PostgreSQL despite the fact that Postgres is a much better database in just about every other respect. The NoSql projects listed above grew out of the inability even to scale out MySql.

The trend is clear. We're collecting and processing more data than ever before, and this will only increase as we go forward. Unfortunately, the relational database (at least in its current form) isn't well suited to the scale of data processing an already significant and growing number of organizations deal with on a daily basis. We're now seeing new solutions come forth to address the shortcomings of the traditional RDBMS, and the same forces that have necessitated such developments are at work even in smaller organizations. At all levels, developers would do well to require as little functionality as possible from the database, essentially, to treat it as a bit bucket.

Tuesday, March 9, 2010

The "Database is a Bit Bucket" Mentality

Front and center again...I just woke up from a nap, I'm grumpy, so I must write. Besides, I haven't had a good rant in quite some time.

Friend of mine asked me last week for some advice, specifically asking if there was a tool to convert Oracle SQL Syntax to the ANSI SQL syntax. (A quick search turned up this (it was the first result), if you're interested).

I had to ask why.

Client is switching to an open source database, i.e. "free." Oracle licensing is way too pricey.

I'm sure Oracle costs a lot of money; it's pretty darn good software. Quite possibly the best in the world, especially in the database realm. I've written about the incredibly feature-rich goodness that is the Oracle database here and here...actually, just trust me. It's in my name.

Why is there even a comparison?

Could it be that everyone out there believes that the sole purpose of a database is to store data? That it can't do anything else? The storage and retrieval of data...that's all it does of course.

It's like saying the Democrats and Republicans are the same...at face value, perhaps, but the devil is in the details.

This, this "Bit Bucket" mentality is what is so incredibly frustrating.

I am in no position to argue the differences between the various flavors of database, I lack the experience. But if I were using SQL Server, I would leverage the shit out of its capabilities. If I were using MySQL, I would leverage the shit out of its capabilities. If I were using Firebird, I would leverage the shit out of its capabilities. Same goes for every single flavor out there. Get my point here?

The database is NOT a bit bucket!

Do I need to use more 4-letter words?

I know that Oracle is feature rich and that 99% of your code can live in the database...think APEX and PL/SQL. You could probably put ALL of your code inside the database if you wanted to put the javascript in BLOBs as well.

Please, please please quit telling me they are the same...they are not.

Follow up rant by Mr. O'Neill can be found on this following post Everything is a Bit Bucket

Sunday, February 14, 2010

PARALLEL Rant?

Let's say you have DEGREE set at the table and index level.

I ask you whether this is appropriate, offering instead that whatever tool accesses the data should provide the PARALLEL hint.

The DOP is set pretty low, given our current system.

But it's still set and can't be easily turned off.

I'm all for PARALLEL, but it's been beaten into my head that it should be used, specifically for batch operations. In other words, transforming or loading data.

With the DOP set at the table or index level, it is not (necessarily, see resource limits below) controllable. If you have 1 or 2 users issuing SELECT statements against the table, it's not a big deal. Let's say the DOP is set to 8: 8 processes are spun off for each user, so with 2 users that's 16 processes running that SELECT statement. Now let's say you have 1000 users. You probably won't make it to 8000 processes...your machine will probably keel over and die...or worse, just sit there forever.
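The difference between the two approaches looks something like this (the table name is made up for illustration):

```sql
-- Hypothetical table; object-level DOP vs. statement-level control.

-- Object-level: EVERY query against the table is a candidate for parallelism.
ALTER TABLE big_fact PARALLEL 8;

-- Back to serial by default:
ALTER TABLE big_fact NOPARALLEL;

-- Statement-level: only this (batch) query runs parallel, at the author's choosing.
SELECT /*+ PARALLEL(f 8) */ COUNT(*)
  FROM big_fact f;
```

With the hint, the decision to burn 8 slaves is made per statement by whoever wrote it, instead of being silently inherited by every user who touches the table.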

But we have to set DOP at the table/index level...our users don't know how to write SQL.

Fair enough...teach them how to write it.

That takes too much time.

How do you ever expect them to learn? It might be a good short term solution, but is it really a good long term solution? Teaching your users how to write better SQL would be in everyone's best interest.
  1. Initially, you'll be swamped with "How do I?" type questions.
  2. Then the questions will only trickle in.
  3. You'll have much more savvy business users who can now probably articulate their needs much better, which will lead to
    1. Better design documents
    2. Better requirements
  4. You can finally begin to push off more of this "reporting" type functionality out to the business (where it should be in my opinion).

OK, that might be a bit of a fantasy.

What about setting up resource profiles for the users?

I've never used them, but I was reading up on them tonight for this post.

What can you control with this feature?
  • Limiting the User Session Level
  • Limiting Database Call Levels
  • Limiting CPU Time
  • Limiting Logical Reads
  • Limiting Other Resources
    • You can limit the number of concurrent sessions for each user.
    • You can limit the idle time for a session.
    • You can limit the elapsed connect time for each session.
    • You can limit the amount of private System Global Area (SGA) space (used for private SQL areas) for a session.
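As a sketch, a profile along these lines could cap the damage (the limits below are invented for illustration, and the instance parameter RESOURCE_LIMIT must be TRUE for the kernel limits to be enforced):

```sql
-- Illustrative limits only; tune for your own system.
CREATE PROFILE reporting_users LIMIT
  sessions_per_user       2
  cpu_per_call            30000     -- hundredths of a second
  logical_reads_per_call  1000000   -- blocks
  idle_time               30        -- minutes
  connect_time            480;      -- minutes

ALTER USER report_user PROFILE reporting_users;
```

A runaway parallel query then fails with a resource-limit error instead of taking the machine down with it.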
What's the point of all this? I'm not too sure; it's a rant, I guess. I just got done reading Dom Brooks' recent post and it reminded me of this conversation...which I have all too often.

Wednesday, February 3, 2010

PRIMARY KEY and NOT NULL

I've seen this far too often. A table with a primary key (good) and a check constraint (NOT NULL) on the same column.

Stop doing it. Watch.
CREATE TABLE t
(
id NUMBER
CONSTRAINT pk_id PRIMARY KEY
);

SH@I_HAVE_NO_IDEA>INSERT INTO t ( id ) VALUES ( 1 );

1 row created.

Elapsed: 00:00:00.33
SH@I_HAVE_NO_IDEA>INSERT INTO t ( id ) VALUES ( NULL );
INSERT INTO t ( id ) VALUES ( NULL )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SH"."T"."ID")
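The data dictionary confirms why the extra CHECK constraint is redundant: the primary key already makes the column non-nullable.

```sql
-- ID shows NULLABLE = 'N' purely by virtue of the primary key.
SELECT column_name, nullable
  FROM user_tab_columns
 WHERE table_name = 'T';
```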
As HillbillyToad said,


It is better than no constraint, that's for sure. The heart was in the right place...

Tuesday, October 27, 2009

Free Oracle Developer/dba

There is a serious lack of work in the Tampa market and desperate times call for desperate measures.

Now, I've always wanted to do this, but was never in a position to do so financially...I'm still not, but something is way better than nothing.

I'm going to offer my services for free.

Not forever mind you, somewhere between 2 and 4 weeks. It is negotiable.

If it works out, i.e. you like what I can bring to your organization, I prove that I pick up new systems quickly and fit in well with your team, and I like working in your organization, then you pay me for the time worked and I sign on full-time, either as a permanent employee or under some sort of contract.

If it doesn't work out, we part ways and no one is worse for wear. You get free work from me and I get to experience a new environment (i.e. meet new people, expand my network, etc).

You can view my resume here (permanent link up top coming soon).

Some basic highlights:
PL/SQL: Expert (I don't like that term)
DBA: Junior to Mid-level (or DBA in lowercase, "dba")
APEX: Worked with up to version 3.2, admittedly rusty, but passionate about the product.
Design: Love modeling data. Model just about everything in my head (in normal life). Use Visio extensively for visual representations. Experience with SQL Developer Data Modeler, ERwin, etc.
Documentation: Give me a wiki or something similar and I'll document just about everything that I do.

That's it. Contact info is on my resume or up top on the "email" icon.

It's now posted on craigslist as well.

Tuesday, October 20, 2009

Wu Wei

...is an important concept of Taoism (Daoism), that involves knowing when to act and when not to act. Another perspective to this is that "Wu Wei" means natural action - as planets revolve around the sun, they "do" this revolving, but without "doing" it; or as trees grow, they "do", but without "doing". Thus knowing when (and how) to act is not knowledge in the sense that one would think "now" is the right time to do "this", but rather just doing it, doing the natural thing.
From Wikipedia.