Showing posts with label wtf. Show all posts

Wednesday, June 12, 2013

Required Reading

It's not often that I run across articles that really resonate with me. Last night was one of those rare occasions. What follows is a sampling of what I consider to be required reading for any IT professional with a slant towards database development.

Bad CaRMa


That led me to Bad CaRMa by Tim Gorman, an entry in Oracle Insights: Tales of the Oak Table, which I have not yet read.

A snippet:

...The basic premise was that just about all of the features of the relational database were eschewed, and instead it was used like a filing system for great big plastic bags of data. Why bother with other containers for the data—just jam it into a generic black plastic garbage bag. If all of those bags full of different types of data all look the same and are heaped into the same pile, don't worry! We'll be able to differentiate the data after we pull it off the big pile and look inside.

Amazingly, Randy and his crew thought this was incredibly clever. Database engineer after database engineer were struck dumb by the realization of what Vision was doing, but the builders of the one-table database were blissfully unaware that they were ushering in a new dawn in database design...

This is from 2006 (the book was published in 2004). I'm not sure how I missed that story, but I did.

Big Ball of Mud

I've read this one, and sent it out, many times over the years. I can't remember when I first encountered it, but I read this once every couple of months. I send it out to colleagues about as often. You can find the article here.

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.

Read it. Remember it.

How To Ask Questions The Smart Way

Ever been in a forum? Has anyone ever given you the "RTFM" answer? Here's how you can avoid it: How To Ask Questions The Smart Way. I originally read this about 9 or 10 years ago. I've sent it out countless times.

The first thing to understand is that hackers actually like hard problems and good, thought-provoking questions about them. If we didn't, we wouldn't be here. If you give us an interesting question to chew on we'll be grateful to you; good questions are a stimulus and a gift. Good questions help us develop our understanding, and often reveal problems we might not have noticed or thought about otherwise. Among hackers, “Good question!” is a strong and sincere compliment.

Despite this, hackers have a reputation for meeting simple questions with what looks like hostility or arrogance. It sometimes looks like we're reflexively rude to newbies and the ignorant. But this isn't really true.

What we are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions. People like that are time sinks — they take without giving back, and they waste time we could have spent on another question more interesting and another person more worthy of an answer. We call people like this “losers” (and for historical reasons we sometimes spell it “lusers”).

Business Logic - PL/SQL Vs Java - Reg

The article can be found here.

I don't believe this is the one that I would read just about every day during my first few years working with Oracle, but it's representative (I'll link up the original when I find it). I cut my teeth in the Oracle world by reading AskTom every single day for years. Some of my work at the time involved JavaServer Pages (JSP) - at least until I found APEX. I monkeyed around with BC4J for a while as well, but I believe these types of threads on AskTom kept me from going off the cliff. In fact, I got to the point where I would go to an interview and debate the interviewer on this same topic. Fun times.

if it touches data -- plsql.

If it is computing a fourier transformation -- java.

If it is processing data -- plsql.

If it is generating a graph -- java.

If it is doing a transaction of any size, shape or form against data -- plsql.

Thinking Clearly About Performance

Cary Millsap. Most people seem to know Cary from Optimizing Oracle Performance; I didn't. I first "met" Cary virtually, and he was gracious enough to help me with my questions about Logging, Debugging, Instrumentation and Profiling. Anyway, what I've learned over that time is that Cary doesn't think of himself as a DBA; he's a Developer. That was shocking for me to hear...I wonder how many others know that. So I've read this paper about 20 times over the last couple of years (mostly because I'm a little slow). I organize events around this topic (instrumentation, writing better software, etc.) and this fits in perfectly. My goal is to one day co-present with Cary, while playing catch, on this topic (I don't think he knows that, so don't tell him). A link to his paper can be found here. Enjoy!

The Complicator's Gloves

One of my all-time favorite articles from The Daily WTF. Find the article here. The gist of the story: on an internal forum, people were discussing how to warm a given individual's hands on his bike ride to work. The engineers proceeded to come up with all kinds of solutions...they spent all day doing this. Finally, someone posted, "wear gloves." End of discussion. Love it. I wrote about it years ago in Keeping it Simple. For a few years I considered buying thecomplicatorsgloves.com and trying to gather related stories, but I got lazy. You should read this often, or better, send it out to colleagues on a regular basis to remind them of their craziness.

I'll continue to add to this list as time goes on. If you have any suggestions, leave a comment and I'll add them to the list.

Wednesday, May 4, 2011

The &DBA

Pronounced, Ampersand DBA.

I'm hereby naming all my future DBAs that meet certain criteria &DBA.

Over the years, at about a million different organizations, I have met many DBAs. Many, at first glance, fit the DBA stereotype: cranky, condescending, arrogant, etc. After further review, many of those traits still apply. Some, however, turn out to be the opposite; they just put on a game face for the new guy or gal because they've been through this before...probably to be disappointed again when the developer will 1) leave them out of design discussions, 2) throw bad SQL over the wall, 3) never, ever use bind variables and 4) well, do all the other bad things we developers do.

The type that I have least encountered are the DBAs who actively seek out and engage developers. "Hey, can I help you with that?" "Hey, you aren't using bind variables; here are the reasons you should." They know that in the long run, the better the developer, the easier their job will be.

I love that aspect of my job. I like helping others learn to do something better or more efficiently. I like to encourage others to download and install the database. But that's not the point of this post...

I needed a name for those DBAs who 1) are less qualified than I am to be a DBA (a short list, I am under no illusions), 2) are wholly incompetent and 3) are jerks.

After a recent incident, as relayed by others, the &DBA was born.

What does the & mean?

In SQL*Plus, the & is a special character. If you run scripts you can parameterize them with &COLUMN_VALUE or something, right?

That's great for scripts.

It can go horribly wrong with code, though - at least the first time you try to load code via SQL*Plus. Here's the original:
CREATE OR REPLACE
FUNCTION foo RETURN VARCHAR2
AS
l_retval VARCHAR2(10);
l_value VARCHAR2(30) := 'Barnes & Noble';
BEGIN
SELECT 'IT WORKS!'
INTO l_retval
FROM dual
WHERE 'Barnes & Noble' = l_value;

RETURN l_retval;

END foo;
/
show errors
and here's what happens when I run it:
CJUSTICE@NO_CLUE>CREATE OR REPLACE
FUNCTION foo RETURN VARCHAR2
2 AS
3 l_retval VARCHAR2(10);
4 l_value VARCHAR2(30) := 'Barnes & Noble';
5 BEGIN
6 SELECT 'IT WORKS!'
7 INTO l_retval
8 FROM dual
9 WHERE 'Barnes & Noble' = l_value;
10
11 RETURN l_retval;
12
13 END foo;
14 /
Enter value for noble: show errors
old 3: l_value VARCHAR2(30) := 'Barnes & Noble';
new 3: l_value VARCHAR2(30) := 'Barnes show errors';
Enter value for noble: testing
old 8: WHERE 'Barnes & Noble' = l_value;
new 8: WHERE 'Barnes testing' = l_value;

Warning: Function created with compilation errors.
This is how it is stored in the database:
FUNCTION foo RETURN VARCHAR2
AS
l_retval VARCHAR2(10);
l_value VARCHAR2(30) := 'Barnes show errors';
BEGIN
SELECT 'IT WORKS!'
INTO l_retval
FROM dual
WHERE 'Barnes testing' = l_value;

RETURN l_retval;

END foo;

12 rows selected.
Yeah, that won't work.

To fix this, you simply set a system variable in your SQL*Plus session: SET DEFINE OFF (I believe there is another option, but this is the one I use). Like this:
CJUSTICE@NO_CLUE>SET DEFINE OFF
CJUSTICE@NO_CLUE>
CJUSTICE@NO_CLUE>CREATE OR REPLACE
2 FUNCTION foo RETURN VARCHAR2
3 AS
4 l_retval VARCHAR2(10);
5 l_value VARCHAR2(30) := 'Barnes & Noble';
6 BEGIN
7 SELECT 'IT WORKS!'
8 INTO l_retval
9 FROM dual
10 WHERE 'Barnes & Noble' = l_value;
11
12 RETURN l_retval;
13
14 END foo;
15 /

Function created.

Elapsed: 00:00:00.22
CJUSTICE@NO_CLUE>show errors
No errors.
CJUSTICE@NO_CLUE>
CJUSTICE@NO_CLUE>SET DEFINE ON
All is now well in the world.

Now this might be vexing in your first couple of weeks on the job, but you quickly pick it up.

This is where the name comes from...

SET DEFINE OFF
&DBA
SET DEFINE ON


This particular DBA, as the story was relayed to me, said that this was a known Oracle bug.

Really.

Monday, March 14, 2011

Fun with SQL - Part 578

by Enrique Aviles [twitter]
The first guest post in a while. I met Mr. Aviles via Twitter (yay for Twitter!) some time back. We got to meet in real life at our local SOUG meeting when Tanel Põder came to town. He also graced the front page for a while wearing his oraclenerd t-shirt (which is now his avatar on Twitter).

Part of my job consists of tuning SQL queries. Opportunities abound and present themselves in many different scenarios. A few weeks ago I was assigned a trouble ticket. I was the lucky winner because, after some investigation by the developers, the culprit was identified as an SQL issue that was causing a stack overflow in Hibernate. For those unfamiliar with it, Hibernate is an Object Relational Mapping library for Java (more info here). The stack overflow occurred during string manipulations probably during the code generation phase. I didn’t have to do much investigation of my own because the developers indicated the offending query was composed of 60 IN clauses, each clause containing 1000 elements. The query resembled something like this:
SELECT COL1, COL2, COL3
FROM TABLE
WHERE UNIQUE_ID IN (1, 2, 3,..., 1000)
OR UNIQUE_ID IN (1001, 1002, 1003,..., 2000)
OR UNIQUE_ID IN (2001, 2002, 2003,..., 3000)
...
OR UNIQUE_ID IN (59001, 59002, 59003,..., 60000)
I was happy to see the query generation failed before it reached the database. It is usually better to have the application fail with a glaring error during unit tests than to allow a potential failure to slip into production. Fortunately, we test against a database that's a slightly older copy of the production database, so we execute all our queries against real data volumes, not just artificial, tiny development databases. Had we only tested on a development database, this issue would have made it to production with two possible outcomes:
- The application fails with the Hibernate stack overflow error that triggered the trouble ticket.
- Given a smaller set of unique ids, Hibernate successfully generates the query and sends it to the database.

Since the query never reached the database, it is hard to tell how well it would have performed.

For a handful of IN clauses, it is safe to assume the query would have run in a reasonable amount of time but 60 IN clauses could have possibly taken an unacceptable amount of time. Even if the database is able to process 60 IN clauses with no impact to the user, generating such statements as
UNIQUE_ID IN (1001, 1002,..., 2000)
UNIQUE_ID IN (2001, 2002,..., 3000)
UNIQUE_ID IN (59001, 59002, 59003,..., 60000)
in a query is not a good idea for various reasons. In the absence of bind variables, a lot of CPU will be consumed hard parsing SQL statements, since potentially every generated query becomes a brand new query sent to the database, even if the number of IN clauses remains static. Using bind variables lessens the load but doesn't eliminate hard parsing as much as it should, because queries with a different number of IN clauses are still unique, thus unshareable. Both cases fill the shared pool with unshareable SQL, although the absence of bind variables fills it faster. This is a waste of CPU cycles and memory. We can significantly reduce hard parsing and eliminate the risk of generating a huge SQL statement by using a temporary table.

Instead of selecting and sending to the application server all values that eventually ended in a series of IN clauses, we could simply insert those values in a temporary table and modify the original query to join the temporary table. The following is a sample temporary table:
CREATE GLOBAL TEMPORARY TABLE IDS_TMP 
(
ID NUMBER
)
ON COMMIT PRESERVE ROWS;
The table was loaded with the same INSERT that returned all IDs that were used to build the IN clauses. Once the temporary table contained all desired IDs the application generated the following query:
SELECT COL1, COL2, COL3
FROM TABLE TAB
WHERE EXISTS
(SELECT TMP.ID
FROM IDS_TMP TMP
WHERE TMP.ID = TAB.ID);
Of course, we could have also generated a query with an INNER JOIN or with a single IN clause, but EXISTS worked fine for our case. This is obviously a much shorter SQL text. It remains static regardless of how many IDs are selected, since they are hidden in the temporary table. Additionally, there is no need to send all IDs to the application server, spend time building a long character string, and send them back as part of a SQL query. This translates into one hard parse and many executions, so the risk of wasting shared pool and CPU is eliminated.

The “magic” of temporary tables makes this a safe solution since data inserted is only available per session. User A cannot select data inserted by user B and vice versa. Temporary data disappears automatically after the session disconnects so there is no need to explicitly DELETE or TRUNCATE the table. At the end of the day, this proved to be a suitable solution.
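The temporary-table pattern is portable, too. Here's a minimal sketch of the same idea in Python against sqlite3 (the orders table, its contents, and the chosen IDs are all invented for illustration; sqlite temp tables are per-connection rather than per-session, but the shape of the solution is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, col1 TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 11)])

# Instead of building a giant IN (...) list in the application,
# load the wanted ids into a temporary table...
conn.execute("CREATE TEMP TABLE ids_tmp (id INTEGER)")
conn.executemany("INSERT INTO ids_tmp VALUES (?)", [(2,), (5,), (9,)])

# ...and join against it with one short, static, shareable statement.
rows = conn.execute("""
    SELECT o.id, o.col1
    FROM orders o
    WHERE EXISTS (SELECT 1 FROM ids_tmp t WHERE t.id = o.id)
    ORDER BY o.id
""").fetchall()
print(rows)  # [(2, 'row 2'), (5, 'row 5'), (9, 'row 9')]
```

The SQL text stays identical no matter how many IDs go into ids_tmp, which is exactly the shareability win described above.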

The fact that IN clauses only support 1000 elements was seen as a limitation. If the database imposes a limit, chances are that defeating it with an apparently clever workaround will produce slow queries and increase the risk of wasting system resources. When faced with similar limitations, we should ask ourselves if there are other means of achieving the same result that don't require circumventing what was considered a database shortcoming.

Wednesday, March 31, 2010

OBIEE: Dimension (Hierarchy) - SOLVED

Of course, 12 minutes after I finished writing this up, I figured it out...or at least I think I figured it out.

I began to think of this in terms of SQL. How would I do this if I had to write it all out by hand?

I started from the top most Level, Year:
SELECT DISTINCT
calendar_year,
calendar_year_id
FROM times
GROUP BY
calendar_year,
calendar_year_id
ORDER BY calendar_year DESC;
That returned the results I expected. Let's add quarter:
SELECT DISTINCT
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc
FROM times
GROUP BY
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc
ORDER BY
calendar_year DESC,
calendar_quarter_desc DESC;
OK, good. How about month?
SELECT DISTINCT
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc,
calendar_month_id,
calendar_month_desc
FROM times
GROUP BY
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc,
calendar_month_id,
calendar_month_desc
ORDER BY
calendar_year DESC,
calendar_quarter_desc DESC,
calendar_month_desc DESC;
That little exercise got me thinking...as you drill down, predicates are applied. After this first one, you would have:
SELECT DISTINCT
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc
FROM times
WHERE calendar_year_id = 1111
GROUP BY
calendar_year,
calendar_year_id,
calendar_quarter_id,
calendar_quarter_desc
ORDER BY
calendar_year DESC,
calendar_quarter_desc DESC;
So on and so forth as you drill down. This makes perfect sense. You don't need to define a level (Quarter, for example) as YYYYQQ, because it already knows the year. Likewise, the further down you go, the more predicates are added. That will allow you, at the Day level, to just display the Day of the Week or the Day Number of the Week...or some such nonsense.

I confirmed this by reviewing the physical SQL being submitted to the database.

Funny how stepping back for a second and taking the time to write down the problem enables you to see the problem more clearly.

I should do this more often. :)

OBIEE: Dimension (Hierarchy)

In relation to the previous post on Report Drilldown(s), I've been playing with the Dimensions to get a really good idea of how they affect drilling capabilities.

Conceptually, it's not a problem, it's just hierarchical data. I wasn't born with the understanding of hierarchical data...I just kept messing around with it.

Anyway, I've run across a little problem. After perusing the usual places for answers, I couldn't find exactly what I was looking for. It's probably due to the fact that I'm still learning the BI lingo (Levels, Dimensions, Hierarchy, etc.). I did find a useful article from Mr. Rittman, but it didn't help me.

As I build and rebuild these dimensions (specifically Time), when I get to a certain level, the records are not rolling up. Here's what it should look like:

Time
-->Total
---->Year
------>Quarter
-------->Month
---------->Week
------------>Day

Easy enough right?

For the purposes of this demonstration, I'm only going to go down to the month level. This is also to keep you somewhat engaged.

Let's start with a screenshot of the Time Dimension in the BMM:



Instead of doing a million screenshots, I'll just explain how they are set up.

I pulled CALENDAR_YEAR and CALENDAR_YEAR_ID into the Year Level. From the Logical Level --> Keys tab, CALENDAR_YEAR is set up as a key with "Use for Drilldown" checked. I create another logical key on CALENDAR_YEAR_ID but I uncheck "Use for Drilldown". I then select CALENDAR_YEAR_ID as the Primary key for the table and also check off the Chronological Key for CALENDAR_YEAR_ID.

I did the exact same thing for the Quarter Level pulling in CALENDAR_QUARTER_DESC and CALENDAR_QUARTER_ID.

I switch over to Answers and run the report.



Now I drill down and I should only have one record.



This is where the trouble starts for me.

I do the exact same thing with the Month Level, pulling in CALENDAR_MONTH_DESC and CALENDAR_MONTH_ID.

When I run the report and click on a Quarter, I get the following:



You might not be able to fully see what it's doing...it's giving me a record for each day in that given month (all 3 quarters are represented). Suppressing the values (Column Format --> Suppress), it does the same thing.

If I remove CALENDAR_MONTH_DESC as a logical key and recheck "Use for Drilldown" on CALENDAR_MONTH_ID, it works just fine.



Only I don't want the users to see CALENDAR_MONTH_ID; it's meaningless to them.

When I looked at the SQL being issued to the database, I did notice that the first query (from year to quarter) had a DISTINCT clause and the second (from quarter to month) did not.

I've been beating my head against the wall because I know there is something small I am missing.

Any help would be much appreciated.

Update (12 minutes later)
Check out this one for my resolution.

Tuesday, March 23, 2010

WHERE rownum = 1

Or its evil twin, WHERE rownum < 2

I've seen this a lot over the past few years. When I say a lot, I mean approaching infinity a lot. Well, it feels like it anyway. I'm allowed to exaggerate.

I'm pretty much convinced that this is a bug. I see it and immediately say, WTF?

The only thing I could see it being used for is some sort of EXISTS functionality, like this:
DECLARE
l_count INTEGER;
BEGIN
SELECT COUNT(*)
INTO l_count
FROM user_line_of_business
WHERE username = 'KPEDERSEN'
AND rownum < 2;

IF l_count = 1 THEN
do_something;
ELSE
raise_some_error;
END IF;
END;
I have multiple records in the table for KPEDERSEN; I just need to know if one exists. This adds a COUNT STOPKEY operation to the query plan, which forces it to stop fetching after the first row...(ah, who am I kidding, I don't know what I am talking about...yet).

BTW, here's the table definition and data if you want to try it out.
CREATE TABLE user_line_of_business
(
username VARCHAR2(30),
line_of_business VARCHAR2(3)
);

INSERT INTO user_line_of_business
( username,
line_of_business )
VALUES
( 'KPEDERSEN',
'TEN' );

INSERT INTO user_line_of_business
( username,
line_of_business )
VALUES
( 'KPEDERSEN',
'OSX' );

INSERT INTO user_line_of_business
( username,
line_of_business )
VALUES
( 'KPEDERSEN',
'BOL' );

INSERT INTO user_line_of_business
( username,
line_of_business )
VALUES
( 'JKURAMOT',
'OSX' );

INSERT INTO user_line_of_business
( username,
line_of_business )
VALUES
( 'JPIWOWAR',
'OSX' );
So the EXISTS functionality is OK. Not great, not something I'd really like to see, but it works.
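For what it's worth, the stop-early existence check is easy to demonstrate outside Oracle too. Here's a Python/sqlite sketch of the same idea, with LIMIT standing in for ROWNUM (same table and data as above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_line_of_business
                (username TEXT, line_of_business TEXT)""")
conn.executemany(
    "INSERT INTO user_line_of_business VALUES (?, ?)",
    [("KPEDERSEN", "TEN"), ("KPEDERSEN", "OSX"), ("KPEDERSEN", "BOL"),
     ("JKURAMOT", "OSX"), ("JPIWOWAR", "OSX")])

# COUNT over a LIMIT 1 subquery stops at the first match,
# analogous to COUNT(*) ... AND rownum < 2 in Oracle.
(count,) = conn.execute("""
    SELECT COUNT(*)
    FROM (SELECT 1
          FROM user_line_of_business
          WHERE username = 'KPEDERSEN'
          LIMIT 1)
""").fetchone()
print(count)  # 1
```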

How about this though?

What if, say, I was filtering queries based on a user's line of business? Just to add, I am using something like APPLICATION CONTEXT to set the variable, so it will hold only one value (please, just go with me on this...I know there is a way around it). My point is, the variable only holds one value. I promise, I am not making this scenario up.
DECLARE
l_lob VARCHAR2(5);
BEGIN
SELECT line_of_business
INTO l_lob
FROM user_line_of_business
WHERE username = 'KPEDERSEN'
AND rownum < 2;

dbms_output.put_line( 'l_lob: ' || l_lob );
END;
Which returns TEN. TEN might be the right answer.

What happens if you run that again? Well, you'll probably get the same result.

However, say this table grows and username KPEDERSEN gets more records. Do you think you could guarantee that TEN would be returned each and every time?

The short answer (and the one I am able to provide) is NO. You can't guarantee the order of the rows returned without explicitly putting an ORDER BY clause on there.
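If one specific row is genuinely wanted, order the set first and then take the first row from the ordered result - in Oracle that means applying ROWNUM outside an ordered subquery. A Python/sqlite sketch of that safe pattern, with LIMIT in place of ROWNUM (the ordering column here is my own arbitrary choice for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_line_of_business
                (username TEXT, line_of_business TEXT)""")
conn.executemany(
    "INSERT INTO user_line_of_business VALUES (?, ?)",
    [("KPEDERSEN", "TEN"), ("KPEDERSEN", "OSX"), ("KPEDERSEN", "BOL")])

# Order first, then cut: the answer stays deterministic no matter
# how the table grows. The Oracle equivalent is
#   SELECT * FROM (SELECT ... ORDER BY ...) WHERE rownum = 1
(lob,) = conn.execute("""
    SELECT line_of_business
    FROM user_line_of_business
    WHERE username = 'KPEDERSEN'
    ORDER BY line_of_business
    LIMIT 1
""").fetchone()
print(lob)  # BOL
```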

WHERE rownum = 1 or WHERE rownum < 2 are the devil (which contains evil).

Sunday, March 14, 2010

Afraid to COMMIT;

Going through some old documents, I found this little gem I had uncovered doing some analysis of the source code.

I found this crazy enough to save, so I hope you enjoy it.

I found 14 or 15 commits in a 115 line procedure. I was shocked and stunned. It was wrong on so many different levels. I share with you the pain I went through.

We'll start off with a call to the logging table (just a quick note, this was not a stored procedure but an INSERT statement).
log_something;
Creating a record in THIS_TABLE; it will have a status of A. Everything in there has a status of A.
INSERT INTO this_table (my_id, start_date) 
VALUES ( l_my_id, sysdate)
RETURNING table_id INTO l_table_id;
I know some could argue for COMMITs being inside stored procedures, but it was hammered into my head at an early age that the calling application should perform the COMMIT.
COMMIT;
THIS_TABLE had multiple "rules" tables. I understood the concept, but the implementation was not so good. This is the first "rules" table.
UPDATE rule_tab
SET status = 'D'
WHERE my_id = l_my_id
AND status = 'A';
Just so you are aware, that STATUS column had no constraint on it other than the size VARCHAR2(1).

Guess what time it is?
COMMIT;
Here is the second
UPDATE other_rules_table
SET status='D'
WHERE my_id = l_my_id
AND status='A';
Guess what?
COMMIT;
OK...here comes the other awesome part...to switch the status back to A (Active), we're going to create a job...to run in 30 minutes:
dbms_job.submit
( job => ln_jobno,
what => 'update rule_table set status=''A'' where my_id = '||l_my_id||' AND status = ''D'';',
next => sysdate+(.5/24));
COMMIT;
I kind of understand that one...for the job to go into the queue you have to issue the COMMIT.

For some reason, it's now cool to use a nested block.
BEGIN
dbms_job.submit
( job => ln_jobno,
what => 'update other_rule_table set status=''A'' where my_id = '||l_my_id||' AND status = ''D'';',
next => sysdate+(.5/24) );

COMMIT;
EXCEPTION
WHEN OTHERS THEN
insert into errors (name,error_date, text)
values ('it broke', sysdate, ' MY_ID '||l_my_id||' creating job to set to a ');
COMMIT;
END;
Now we have 2 jobs created, one for each of the "rules" tables. -10 for using DML in a job. -10 for not putting them into a single job that could fail together. -30 for not creating a stored procedure to do this.

But wait, it's not over yet. We're getting to the very best part I think.
BEGIN
dbms_job.submit
( job => ln_jobno,
what => 'insert into errors (name,error_date, text)
values (''wow'', sysdate, ''error turning it back on') ;',
next => sysdate+(.5/24));

EXCEPTION
WHEN others THEN
insert into errors (name,error_date, text)
values ('doing stuff', sysdate, ' MY_ID '||l_my_id||' creating job to log errors');

COMMIT;
END;
Did I catch a "niner" in there?

Did you catch what that final job did? It "logged" an error from the previous 2 jobs. Really? Do jobs work like that? I'm not really sure that they do.

This little snippet is just a snapshot of my life over the last few years. This kind of thing is everywhere (I know, everywhere!). I've mentioned it before, but I've been reading The Daily WTF since 2005. Daily. I've learned more from that site than perhaps any other, because you learn what not to do...which is just as important as what to do. It takes years to gain the necessary experience (read: screwing up) to know what not to do; The Daily WTF speeds that up significantly by allowing you to witness others' mistakes. We've all made them, to be sure. It's whether we learn from them that is important.

Tuesday, March 9, 2010

Code Comment WTF? Part 209

Found this in a snippet today:
-- ********************************
-- End of Package Body

END package_pkg ;
/
Seriously? Was that necessary? Could I possibly be under the illusion that it is not the end of the package?

Stop it.

Now.

Tuesday, March 2, 2010

DBA_TABLES vs DBA_OBJECTS

CJUSTICE@TESTING>SELECT * FROM V$VERSION;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
Last night I was installing the Unit Testing repository for SQL Developer for a fun little post. After installing the repository, I just did a brief sanity check.
CJUSTICE@TESTING>SELECT owner, COUNT(*) c
2 FROM dba_objects
3 GROUP BY owner
4 ORDER BY 1;

OWNER C
------------------------------ ----------
...snip
SI_INFORMTN_SCHEMA 8
SYS 22970
SYSMAN 1341
SYSTEM 454
S_NQ_SCHED 3
TSMSYS 3
WMSYS 242
XDB 682

27 rows selected.
Strange.

I created a user, UNIT_TESTING, to house the data and fed it to SQL Developer. Did I create the user?
CJUSTICE@TESTING>SELECT COUNT(*) 
FROM dba_users
WHERE username = 'UNIT_TESTING';

COUNT(*)
----------
1
Yeah.

I check DBA_OBJECTS using UNIT_TESTING as the predicate:
CJUSTICE@TESTING>SELECT * FROM dba_objects WHERE owner = 'UNIT_TESTING';

no rows selected
Really?
CJUSTICE@TESTING>SELECT table_name
2 FROM dba_tables
3 WHERE owner = 'UNIT_TESTING';

TABLE_NAME
------------------------------
UT_LIB_TEARDOWNS
UT_LOOKUP_CATEGORIES
UT_LOOKUP_DATATYPES
UT_LOOKUP_VALUES
UT_METADATA
UT_TEST
UT_TEST_ARGUMENTS
UT_TEST_IMPL
UT_VALIDATIONS
UT_TEST_IMPL_ARGUMENTS
UT_LIB_STARTUPS
UT_LIB_VALIDATIONS
UT_LIB_DYN_QUERIES
UT_SUITE
UT_SUITE_TEST
UT_TEST_IMPL_VAL_RESULTS
UT_TEST_IMPL_ARG_RESULTS
UT_TEST_IMPL_RESULTS
UT_TEST_COVERAGE_STATS
UT_TEST_RESULTS
UT_SUITE_RESULTS
UT_SUITE_TEST_RESULTS

22 rows selected.
WTF?

Does this mean that my data dictionary is corrupted? This is a sandbox, so it is very well possible...I've just never seen this kind of thing before.

Update 03/29/2010
I figured it out, user error...explanation is here.

Wednesday, February 3, 2010

PRIMARY KEY and NOT NULL

I've seen this far too often: a table with a primary key (good) and a check constraint (NOT NULL) on the same column.

Stop doing it. Watch.
CREATE TABLE t
(
id NUMBER
CONSTRAINT pk_id PRIMARY KEY
);

SH@I_HAVE_NO_IDEA>INSERT INTO t ( id ) VALUES ( 1 );

1 row created.

Elapsed: 00:00:00.33
SH@I_HAVE_NO_IDEA>INSERT INTO t ( id ) VALUES ( NULL );
INSERT INTO t ( id ) VALUES ( NULL )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("SH"."T"."ID")
As HillbillyToad said,

It is better than no constraint, that's for sure. The heart was in the right place...

Saturday, April 11, 2009

PL/SQL APIs

I was once told by a colleague, after I sent out a note encouraging their use, that they were not convinced of the effectiveness of PL/SQL APIs. I really didn't know what to say. Further, their past experience with them had been bad. Huh?

As I went to do a little research on PL/SQL APIs, the first result I got was



What do you do with a response like that? Just submit it to Oracle WTF and be done?

"...not convinced of the effectiveness of PL/SQL APIs..."

Oracle has been selling that for years, hasn't it? The Packages and Types Reference manual lists about 218 public packages, which obviously doesn't include the internal-only ones. Oracle seems to have bought into the PL/SQL API.

Steven Feuerstein wants you to build them. I personally don't like his style, but there is no arguing with results. TAPI, or table API, is an approach that suggests building a simple API that performs the INSERTs, UPDATEs and DELETEs for each and every table, and then using that TAPI in your APIs one layer up. My friend Daniel McGhan has even built a TAPI generator, tapiGen, which will create all the code for each table.

I'm pretty sure Tom Kyte is a fan as well. Unfortunately his site is down at the moment, so I cannot link to anything in particular. Hasn't he written some of the most prominent books on Oracle, specifically on developing applications with Oracle?

I am still stunned that I heard that. I would expect it from Mr. M perhaps, but I think even he would appreciate the advantages of not having to write any SQL.

So how do you respond? What do you say?

Tuesday, December 23, 2008

Exception Handling WTF

In homage to Oracle WTF.

Trying to remove references to a table that no longer exists (a view was created in its place), I found this little gem:
OPEN c FOR
SELECT 'INVALID USER'
FROM dual;

RETURN c;

It was nested in an IF-END IF block.

Instead of just throwing an exception or returning 0 (zero) rows, this was the preferred method.

Awesome!

Tuesday, November 18, 2008

A Java PL/SQL Port Scanner

Saturday night I started to wonder how I could get the movies off my DVR. Recently Verizon FIOS added a boatload of HD stations, so I've naturally begun to record those. Problem is, the 50 hours or so I have available are quickly eaten up by the HD versions.

We have the multi-room DVR setup, which means we have one DVR but we can view those movies from any TV (with a cable box). Naturally all these boxes are on my internal network (router provided by Verizon).

I understand that with some boxes you can go out and buy additional storage, but I wanted a DIY solution. So, if I could find an open port on one of these boxes, then perhaps I could figure out a way to get the movies onto my computer.

Java to the rescue. Well, not really, but I have always wanted to learn more about that kind of lower-level programming (sockets/ports/TCP/whatever). See, I don't really know what I'm talking about, which is exactly why I did it.

Since I use JDeveloper for writing SQL and PL/SQL, creating a java class (with the Main method) is easy.
import java.io.IOException;
import java.net.Socket;
import java.net.UnknownHostException;
import java.util.Date;

public class Class1
{
  private static void p ( String text )
  {
    System.out.println( text );
  }

  public static void main( String[] args )
  {
    Date startDate = new Date();
    String machine = "192.168.1.102";

    Socket socket;
    int j = 0;  // open ports found
    int x = 0;  // ports checked
    p( "start date/time: " + startDate.toString() );
    for ( int i = 1000; i < 10000; i++ )
    {
      x++;
      p( "checking port: " + machine + ":" + i );
      try
      {
        // If the connection succeeds, the port is open.
        socket = new Socket( machine, i );
        p( "Port open: " + machine + ":" + i );
        j++;
        socket.close();
      }
      catch (UnknownHostException e) { }
      catch (IOException e) { }
    }
    p( "ports checked: " + x );
    p( "ports open: " + j );
    p( "start date/time: " + startDate.toString() );
    p( "end date/time: " + new Date().toString() );
  }
}
As LC would say, "That's not so bad."

Only it was slow, 1 port at a time. I did learn that trying to open a port connection is s l o w. How could I speed this up?

I could make it threaded, right? No. That would take too much time.

But hey, I know how to do it ("threaded", that is) in PL/SQL. UTL_TCP should work nicely. I've never had to use it, so it's a great opportunity to familiarize myself with it (not that it's needed very often).

So I start Oracle XE and initially just try it out to see how it works:
DECLARE
  c UTL_TCP.CONNECTION;
BEGIN
  c := utl_tcp.open_connection
       ( remote_host => '192.168.1.2',
         remote_port => 1,
         tx_timeout  => 1 );

  utl_tcp.close_connection( c );
EXCEPTION
  WHEN others THEN
    utl_tcp.close_connection( c );
END;
/
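The jobs I submit next call a stored procedure, check_port, which records any open port in an OPEN_PORTS table. Neither made it into the post, but they'd look roughly like this (the table columns are my guess):

```sql
-- Hypothetical reconstruction of the pieces the jobs below depend on.
CREATE TABLE open_ports
( ip   VARCHAR2(15),
  port NUMBER );

CREATE OR REPLACE PROCEDURE check_port
( p_ip   IN VARCHAR2,
  p_port IN NUMBER )
AS
  c UTL_TCP.CONNECTION;
BEGIN
  c := utl_tcp.open_connection
       ( remote_host => p_ip,
         remote_port => p_port,
         tx_timeout  => 1 );

  -- If we got here, the connection succeeded: log the open port.
  INSERT INTO open_ports ( ip, port ) VALUES ( p_ip, p_port );

  utl_tcp.close_connection( c );
EXCEPTION
  WHEN others THEN
    NULL;  -- closed or filtered port; nothing to log
END check_port;
/
```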
Easy enough. Now let's use DBMS_JOB to "thread" this.
DECLARE
  TYPE r_record IS TABLE OF OPEN_PORTS.IP%TYPE INDEX BY BINARY_INTEGER;
  l_table    R_RECORD;
  job_count  PLS_INTEGER;
  job_number PLS_INTEGER;
BEGIN
  l_table(1) := '192.168.1.102';

  FOR i IN 1..l_table.COUNT LOOP
    FOR j IN 1..10000 LOOP
      SELECT COUNT(*)
        INTO job_count
        FROM user_jobs;

      WHILE job_count < 20 LOOP
        dbms_job.submit
        ( job  => job_number,
          what => 'BEGIN check_port( ''' || l_table(i) || ''',' || j || '); COMMIT; END;' );

        COMMIT;
      END LOOP;
    END LOOP;
  END LOOP;
END;
/
So it's running, and running, and running... I opened another session and did a count on user_jobs. 10,000. 12,000. 20,000. Uh, I think there's something wrong here.

I finally kill the session and there are 36,446 jobs in the queue. All checking the same port.
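In hindsight, the culprit is the inner WHILE loop: job_count is never re-queried inside it, so the moment it dips below 20 the loop submits the same port forever. A corrected sketch of that throttling step (same names as above) would re-check the queue on each pass and submit exactly one job per port:

```sql
-- Corrected throttle: wait until fewer than 20 jobs are pending,
-- re-querying the count each time, then submit one job for port j.
LOOP
  SELECT COUNT(*)
    INTO job_count
    FROM user_jobs;
  EXIT WHEN job_count < 20;
  dbms_lock.sleep( 1 );  -- give the queue a chance to drain
END LOOP;

dbms_job.submit
( job  => job_number,
  what => 'BEGIN check_port( ''' || l_table(i) || ''',' || j || '); COMMIT; END;' );
COMMIT;
```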

Then I can't even issue a simple SELECT statement. Maximum number of sessions reached or some such rubbish. Just a reminder, it was Saturday night and I was just trying to see if I could do this quickly.

Since I couldn't connect, I tried bouncing the database (a loud gasp from the DBAs out there). Come on! It's a local XE instance...there's nothing important there.

Start it back up, connect as SYS, and I still can't issue any queries. I do notice, as I hit / and Enter a bunch of times, that once in a while it lets a query through. So I write a quick PL/SQL block to remove all the jobs from the queue.
BEGIN
  FOR i IN ( SELECT job FROM user_jobs ) LOOP
    dbms_job.remove( i.job );
  END LOOP;
END;
/
I go through the / + Enter routine about 50 times until it finally runs. Success!

Or so I think. Still receiving the max sessions error. There are still 36,446 jobs in user_jobs, and I can't remove them anymore: I get "job doesn't exist" when I try to dbms_job.remove them again. Hmmm... how about?
DELETE FROM dba_jobs;

36,446 rows deleted.
What?

How can that be? I didn't expect that to work at all...but it did. I have yet to look up the particulars of my actions (deleting from dba_jobs) but it did the job.

And yes, I'm still planning on writing this (correctly) in the near future...I want more movies!

Tuesday, May 13, 2008

ApEx: Oracle Marketing WTF?

At the time of writing, the score is 68 who have not used ApEx versus only 8 who have. Obviously this poll isn't scientific, but it does have to represent some small slice of the community. Perhaps Oracle Mix would be a better place to ask the question?

I guess my followup poll would be why? Why haven't you used it?

Did you not know about it?

If that's the case, then it's definitely an Oracle Marketing WTF.

Did you know about it but just never got around to trying it?

I'm completely dumb(quit snickering)-founded.

I believe the very first time I heard anything about the product was when it was referred to as Marvel (Project Marvel?) on AskTom. Am I the only one who read the site for enjoyment? (Probably, but it was the only thing I could read at work.) That must have been 4 or 5 years ago. I started using it in May of 2005, three years ago now.

Yes, Oracle has boatloads of products. I still don't know what the whole Fusion thing is (please don't say middleware, I don't know what that is either).

As someone pointed out in the comment section, it's one of the busiest forums on OTN.

I wonder if John and Dmitri run into this? Probably makes cold calling difficult.

As an ApEx community, what can we do to help promote it?