
Monday, December 13, 2010

OBIEE 10g: Replace Column or Table in LTS

Even though I'm working hard on learning 11g, I still find little things in 10g that help.

Case in point, I was going through and trying to clean up the physical and BMM layers. Our physical model will now be shared between multiple projects. So how best to do that?

Using aliases along with naming conventions and display folders can help. By doing this, I will be able to "unhook" the source tables from any and all joins. The goal is to keep the base tables free of joins; the aliases will be used to join to their respective tables (other aliases).

The naming will be something like:
Dim_<source_table_name>_<subject_area>
Fact_<source_table_name>_<subject_area>
Outrigger_<source_table_name>_<subject_area>
Bridge_<source_table_name>_<subject_area>
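Applied to the SH tables used below, that convention would yield names like these (a hypothetical "Sales" subject area):

```
Fact_SALES_Sales
Dim_PRODUCTS_Sales
Dim_TIMES_Sales
```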
To further organize them, I will put them in a Display Folder.



I'm just going to create one called "New."

I'm using the SH schema here, so it should be easy to follow along.

Next up, I'm going to create aliases for all my tables. For the sake of brevity and my sanity, I'm only going to use SALES, PRODUCTS, and TIMES.



When I initially started to clean things up, I would screenshot the Column Mappings section of the Logical Table Source. Fun, right? There has to be a tool for this. Sure enough...

OK, this post has been sitting here all day...I'm cutting to the chase now.

So I've hooked up (joined) all my new tables. Now I just want to replace them. I could do it by hand, but what fun would that be? Here's the source for SALES.



If you go to Tools -> Utilities, you'll see this screen and the fun little tool we'll use for this exercise:



Now make sure you select "Replace whole table." On the left side, select the table you want to replace; on the right side, select the table you will replace it with.



Hit Next.



Blah blah blah, which translates into the column mappings. Since we are replacing the source table with a copy of itself, this part is easy.

Here you get a nice summary of what the tool will actually be doing.



Hit Next and voila!



Now, what does this buy you?

Since you are now sharing your physical layer with multiple Subject Areas, you've added a layer of abstraction to protect yourself. Let's say a new column was added to SALES. After you've updated your physical layer, you are now free to either keep the others the same or add it to that particular Subject Area. For my (mythical/theoretical) example, I just need that new column in one Subject Area, so I'll just need to pull that into the BMM and I'm done.

An even better example would involve joins. Maybe Subject Area 1 is based on one particular set of joins and the others are not. This should allow you to easily handle this kind of scenario.

I do apologize for this post breaking up as it did. I had the best intentions in the world this afternoon. Hopefully I got the gist of the tool across, if not, yell at me and I'll update it later.

Sunday, April 25, 2010

COLLABORATE: "I Hate Toad"

Saying something like that really endears you to a vendor.

Thankfully, I know Jeff Smith of Toad from his recent trip down to Tampa.

At dinner later that evening, I told him as much. He finally got it out of me that it isn't really Toad's fault, but the developers that use it. I agreed.

Unfortunately, when I said it at COLLABORATE (jokingly of course), one of his colleagues heard me. Uh oh. Anyway, I had to explain why it is I "hate" Toad, and I think he was mostly happy with my answer.

Ultimately, a tool that helps you increase your productivity is a good thing. I like and have used the DBA pack that comes with Toad. However, many "developers" I have worked with can't live without Toad...and that's where I have a problem.

So trying to be fair, I asked Mr. Smith to walk me through Quest Code Tester. I "like" testing and think it is one of the bigger problems in the database developer community, in that we don't do it very well. Java, .NET, and many others have it (mostly) built in to what they do. How many times have you seen a database developer run through unit tests on his or her code? I've seen exactly one shop (the one where I learned it).


From that demo, I would say it has as much of that functionality as SQL Developer, if not more. I haven't had a chance to really use that one yet either.

But if you would like to try it out, there is a free trial version (registration required).

Friday, March 19, 2010

SOUG + Toad Evangelist = Fun!

Twitter saves the world...again. On Monday I saw @hillbillytoad's announcement that he would be in Tampa.

the hillbillytoad announces trip to tampa

I proceeded to try and make plans to meet up with him along with @SQLChicken who resides in the area as well.

Then someone on the SOUG board decides to call a board meeting Monday night...at 9 PM...during 24. Jerk.

We discussed moving our March meeting to early April to accommodate another speaker. I told them that Jeff Smith (aka @hillbillytoad) would be in town the 23rd and 24th (our scheduled meeting is on the 25th) and that I would contact him to see if he was available to stay longer or present on the 24th. While in the meeting, I sent him an email (apparently Mr. Smith goes to bed at 9:15 or something).

Tuesday morning I received confirmation that he would love to speak on Wednesday the 24th.

Who is this Jeff Smith character?
...is a Solutions Architect at Quest Software. He has been with Quest since 2001, working in the Toad R&D, Product Management, and Sales organizations. Jeff is the primary author of the Toad Handbook, 2nd Edition. Jeff is a regular speaker at Toad User Groups and Oracle User Groups worldwide.
You can find his blog here.

If you use Toad or are interested in learning more about Toad, come on over to his presentation/talk on Wednesday, March 24th. Directions and stuff can be found at the SOUG website.

Monday, February 22, 2010

OBIEE: Call SAPurgeCacheByDatabase

So the reason I am investigating the ODBC Function extensions is that we are trying to tie this into our existing ETL process. After load, and within a designated time frame, we'd like to clear the cache for a given set of tables. Fortunately, all those tables exist in a single database. Unfortunately, I'm wasting your time by putting these into separate posts. :)

First, nothing up my sleeve.

cache is cleared

So you understand the mapping, I made a pretty picture for it (this isn't readily available in the Cache Manager).

mapped RPD

I go off and run the reports (no need for screenshots of those).

Refresh the cache manager

cache full!

So now I have cache entries for both "databases" (in quotes because they are actually the same physical database, just 2 separate connections).

I then create the SQL file and put the following:
CALL SAPurgeCacheByDatabase( 'local-sh' );
Then I just follow the same procedure as with SAPurgeAllCache()
nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s sapurgecachebydatabase.sql
The output is almost identical as well
-------------------------------------------------------------------------------
Oracle BI Server
Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

CALL SAPurgeCacheByDatabase( 'local-sh' )
CALL SAPurgeCacheByDatabase( 'local-sh' )
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
------------------------------
RESULT_CODE RESULT_MESSAGE

-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
------------------------------
1 [59118] Operation SAPurgeCacheByDatabase succeeded!

-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
------------------------------
Row count: 1
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
------------------------------

Processed: 1 queries
And voila! The Sales cache has been cleared.

cache cleared!
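Since the whole point was to tie this into our ETL process, a post-load step just needs to write the SQL file and shell out to nqcmd. Here's a minimal Python sketch of that idea; the DSN, credentials, and the 'local-sh' database name are the ones from this example, and the function name is my own invention:

```python
import subprocess

def purge_bi_cache(database, sql_path="purge_cache.sql",
                   dsn="AnalyticsWeb", user="Administrator", password="Administrator"):
    """Write a one-line purge script and build the nqcmd command to run it."""
    # nqcmd executes whatever statements are in the file passed via -s
    with open(sql_path, "w") as f:
        f.write("CALL SAPurgeCacheByDatabase( '%s' );\n" % database)
    return ["nqcmd", "-d", dsn, "-u", user, "-p", password, "-s", sql_path]

cmd = purge_bi_cache("local-sh")
# subprocess.call(cmd)  # uncomment on a machine with the BI Server client installed
```

Your scheduler (or the last step of the ETL job) would then invoke this after the load completes.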

OBIEE: Call SAPurgeAllCache()

Just a note to myself on how to clear the entire BI Server cache.

Mr. Minkjan has a great article here and Gerard Nico has one here.

Caching has been enabled (via <BI_SERVER_HOME>\server\Config\NQSConfig.ini)

For a list of ODBC Function extensions, read here.

First up, I run a simple report and then verify that there is an entry in the cache



I then create a small SQL script and put it in my c:\ folder:



(OK, just ignore what it says in the Title bar, I moved it)
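In case the screenshot doesn't survive: purgeallcache.sql contains just the single call (you can see it echoed in the output below):

```sql
CALL SAPurgeAllCache();
```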

Open up a cmd prompt and issue the following:
nqcmd -d AnalyticsWeb -u Administrator -p Administrator -s purgeallcache.sql
Hit enter and you should see the following:
-------------------------------------------------------------------------------
Oracle BI Server
Copyright (c) 1997-2009 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

CALL SAPurgeAllCache()

CALL SAPurgeAllCache()

------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
------------------------------
RESULT_CODE RESULT_MESSAGE

------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
------------------------------
1 [59118] Operation SAPurgeAllCache succeeded!

------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
------------------------------
Row count: 1
------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
------------------------------

Processed: 1 queries
Check the Cache Manager and voila!



For future reference, here are the options available for the nqcmd utility:
Command: nqcmd - a command line client which can issue SQL statements
against either Oracle BI server or a variety
of ODBC compliant backend databases.
SYNOPSIS
nqcmd [OPTION]...
DESCRIPTION
-d<data source name>
-u<user name>
-p<password>
-s<sql input file name>
-o<output result file name>
-D<Delimiter>
-C<# number of fetched rows by column-wise binding>
-R<# number of fetched rows by row-wise binding>
-a (a flag to enable async processing)
-f (a flag to enable to flush output file for each write)
-H (a flag to enable to open/close a request handle for each query)
-z (a flag to enable UTF8 instead of ACP)
-utf16 (a flag to enable UTF16 instead of ACP)
-q (a flag to turn off row output)
-NoFetch (a flag to disable data fetch with query execution)
-NotForwardCursor (a flag to disable forwardonly cursor)
-SessionVar <SessionVarName>=<SessionVarValue>

Wednesday, September 9, 2009

SQL Developer: Drill Down Reports

Finally, finally I've figured this out. I had googled "SQL Developer Drillable Reports" to no avail. The solution kept eluding me.

The first result you should get back is one from a fellow Tampan (Tampon?), Lewis Cunningham, from July 2006. OK, it's a bit old (I think it was still called Raptor back then), but I'll give it a try.

In it, Lewis talks about creating additional "pseudo" columns, SDEV_LINK_NAME, SDEV_LINK_OWNER, SDEV_LINK_OBJECT which appear to map to the corresponding columns in DBA_OBJECTS.



I tried that, and got...nothing. I tried changing the alias(es) to match the column I was using, again, to no avail.

Let me back up just a tad. I'm trying to create some reports based on the PLSQL_PROFILER_% tables:

* PLSQL_PROFILER_RUNS
* PLSQL_PROFILER_UNITS
* PLSQL_PROFILER_DATA

It's annoying to have to rewrite the SQL every time. I did create a @profile script, but I had to pass the RUNID; so first, I had to know the RUNID.

So I took to Twitter as I know Kris Rice hangs out there sometimes.







That was last week, and I have been unable to get this to work. I could have sworn Kris had a good tutorial on it, but I think I confused it with the extensions you can create.

Anyway, I'm at it again tonight and I end up back at the link Kris originally pointed me to. For some reason (cough) I missed this crucial little nugget the first time:
(the bind variable is case-sensitive)
Really? Could it be that easy? I UPPERed RUNID and voila! It worked!

To recap, go to the Reports tab, right click on a folder (I have one named "profiler") and select Add Report.



I fill out the Name, Description and Tooltip (optional)



Hit Apply, which saves my report. Now I want a report on PLSQL_PROFILER_UNITS that accepts the RUNID as an IN parameter.

First, create the report:



Go to the Binds tab and fill in the fields



Go to the Advanced tab and fill in the name of the report



Now, select your first report, right click, go to Reports and select the report you just created





Perfect!

Just a small reminder, the bind parameters are CASE SENSITIVE!
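Since the screenshots don't show the query text, here's roughly what the two reports boil down to (column names are from the standard PLSQL_PROFILER_% tables; note the uppercase bind):

```sql
-- Parent report: list the profiler runs
SELECT runid, run_date, run_comment
  FROM plsql_profiler_runs
 ORDER BY runid;

-- Child (drill-down) report: the bind name must be UPPERCASE
SELECT unit_owner, unit_name, unit_type
  FROM plsql_profiler_units
 WHERE runid = :RUNID;
```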

Tuesday, May 5, 2009

COLLABORATE 09: SwingBench

Session: Q352
Presenter: Debra Scarpelli
Time: 12:15-12:45
Room: 224 G
Title: A "Stress-Free" Free Stress Testing Tool From Oracle

I saw this on the schedule and "free" jumped out at me. I don't believe the schedule made any mention of the tool itself, but I could be wrong.

Swingbench

I've read about this somewhere before but never got around to investigating it.

For those that don't know, it's a load testing tool for the Oracle database. It was written by Dominic Giles (of Oracle). It has been used internally by Oracle for a time that I am unable to determine. You can find the page for it here. This tool is free but it is not supported by Oracle.

Specifics
- Currently works against 9i, 10g and 11g
- Built in Java and works on Unix, Linux and Windows
- Can be used against RAC, TimesTen and stand-alone databases.
- Three interfaces: command line (charbench), a mini GUI (minibench) and SwingBench (full-fledged GUI)
  - SwingBench interface provides real-time feedback

There are 4 supplied (pre-configured) benchmark tests:
- Calling Circle with a read/write ratio of 70/30
- Order Entry with a read/write ratio of 60/40
- Stress Test with a read/write ratio of 50/50
- Sales History with a read/write ratio of 100/0

You can also write your own.

To run it you need Java (not sure which version) and the Oracle client installed on your machine. The output is XML, so you can do pretty much anything you want with it.

There are some other tools that you can use in conjunction which you can find on Dominic Giles site.
- Data Generator
- BMCompare (yes, I snickered too) which compares the results of 2 benchmark tests.
- TraceAnalyzer

Pretty cool little tool. I'm definitely going to give it a try to see how it works and what it can teach me.

Friday, April 10, 2009

The Computer Setup

I get to setup my new computer.

Here's a list of tools I'm installing:

SQL Developer, for those moments when I want to be able to easily download data. Great database navigator as well. I create reports here and in JDeveloper.
JDeveloper, though I won't be using Subversion for work, I'm hoping there is a plugin for source control that I can use. Still my favorite editor of choice.
7zip
CubicExplorer, which is a tabbed windows explorer and fairly nice.
WinMerge, always nice to be able to easily compare files.
FastStone Capture - screen grab utility. You can also add arrows and stuff to your captured images.

Firefox was already installed so I imported my JSON file with my standard links (Oracle Documentation, Google Stuff, etc). I needed some FF Add-ons though:
Meebo, which I'm trying out for the first time.
ScribeFire, for the occasional quick post.
Shareaholic, for quickly sending links out.
Delicious, naturally.

Oracle Database was already installed along with BI Publisher, so I didn't have to worry about those. I'm sure I'll have others, but these are the basics.

Thursday, March 19, 2009

Scribefire

I was reading Jake's post on JotNot and somehow I came across ScribeFire, a FireFox add-on. I'm thinking man, this is very cool. I looked at the version number, 3.2. Hmmm...maybe I'm not really an early adopter here. Google for it and found this post which describes some of the functionality.

The thing I like best about it is the split window. As I'm linking stuff up I have to do either Alt+Tab to switch windows (work) or Ctrl+Tab to shift tabs (home).

Here's what it looks like:


Yes, I still write my posts in the source view. I like ScribeFire so far and will try it out for a few posts.

Tuesday, October 21, 2008

JDeveloper, SQL Developer, Subversion and Tortoise

I've been trying for a month now to get JDeveloper, SQL Developer, Tortoise and Subversion to play nicely together. They don't really fight, but it seems JDev and SQLDev won't recognize versioned files if I check them out using Tortoise.

Very annoying because I like both tools.

So I went searching and found this document (online) in the help files. I tried putting that DLL in the Tortoise bin directory, but of course that wouldn't work because they've renamed everything with a leading "T."

So I downloaded the actual SVN client from Subversion and put the DLL in that directory. Now I open up SQLDev and get an error that seay32.dll (or something like that) is missing. Awesome.

Back to searching, and I found this one. user594768 has only 2 posts, but this one did the trick.

First, download the 1.2.0 SVNKit from here.

Then, do the following:
copy
- jna.jar
- svnkit-javahl.jar (rename into svnjavahl.jar)
- svnkit.jar
- svnkitsrc.zip
- trilead.jar

into your /sqldeveloper/jdev/extensions/oracle.jdeveloper.subversion/ directory. If you rename svnkit-javahl.jar correctly the first time, you won't have to suffer like me for another week.

Start SQL Developer and go to Tools --> Preferences --> Versioning and it should read:
Subversion Client
SVN/1.5.2 SVNKit/1.2.0 (http://svnkit.com/) r4949

This works for both JDeveloper and SQL Developer.

Wednesday, February 27, 2008

EDI Fun

or Electronic Data Interchange, just a fancy phrase for sending and receiving files. We (IT) do love to complicate things don't we?

I've put the change data capture stuff on hold as my never-ending project goes into its 12th week past deadline. It's at nine 9s: 99.9999999. It's finance-related stuff and nothing less than 100% is acceptable. I'm tired.

Part of my project was to move from an already-built in-house table to the raw (file) tables. My feisty colleague took on that fun challenge for 6 months or so. He's heading up a new project though, so he's had to pass the baton on to me. I've accepted it...reluctantly. ;)

Anyway, we store these inbound files all over the place (seemingly to me). I started writing a little Java application that would scan a directory, read these x12-820 files and tell me the interchange date, control number, total amount and some other useful information.

I plan on either putting this in the database and wrapping it up in PL/SQL or creating a service (Java Service Wrapper) and pushing this useful data to a table. So if you have to deal with the wonderful x12/820 formats, you may want to check back soon for the code. I can't promise it will be good, but it will work!
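My version is in Java, but the core idea is simple enough to sketch in a few lines of Python: split each x12 segment on the segment terminator and each element on the element separator, then pull the interchange date (ISA09) and control number (ISA13) from the ISA segment and the total amount (BPR02) from the BPR segment. The positions come from the x12 standard, but treat this as a sketch; real files define their own separators in the ISA itself, and this assumes the common '*' and '~' defaults:

```python
def summarize_820(raw):
    """Pull a few interchange-level fields out of an x12-820 payment file."""
    # Assumes '*' as element separator and '~' as segment terminator,
    # which are common defaults (the ISA segment actually defines both).
    segments = [s.strip() for s in raw.split("~") if s.strip()]
    info = {}
    for seg in segments:
        elems = seg.split("*")
        if elems[0] == "ISA":
            info["interchange_date"] = elems[9]   # ISA09, YYMMDD
            info["control_number"] = elems[13]    # ISA13
        elif elems[0] == "BPR":
            info["total_amount"] = elems[2]       # BPR02, monetary amount
    return info
```

From there it's a short step to pushing the results into a table, whether from a service or from inside the database.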

Wednesday, December 5, 2007

Instrumentation: DEBUG/LOGGING

In my previous entry on instrumenting code I detailed the way in which you could use the DBMS_APPLICATION_INFO.SET_SESSION_LONGOPS procedure to help monitor your long running code.

In this one, I will detail my home-grown version of Tom Kyte's debug routine. I know that others have similar code, but I can't seem to find them right now.

You can find the source code here.

Contents:
2 tables
2 sequences
1 package
1 procedure
1 build file

debug_tab allows you to turn the debugging on and off. debug_details_tab will store each line that you write to the debug routine when turned on.
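The linked download has the full version; the heart of the routine is just an autonomous-transaction insert, something like this sketch (the table and column names here are illustrative, not necessarily those in the actual source):

```sql
CREATE OR REPLACE PROCEDURE debug( p_text IN VARCHAR2 )
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- log survives even if the caller rolls back
  l_enabled VARCHAR2(1);
BEGIN
  -- debug_tab acts as the on/off switch
  SELECT enabled_flag
    INTO l_enabled
    FROM debug_tab
   WHERE ROWNUM = 1;

  IF l_enabled = 'Y' THEN
    INSERT INTO debug_details_tab ( id, created_on, text )
    VALUES ( debug_details_seq.NEXTVAL, SYSTIMESTAMP, p_text );
  END IF;
  COMMIT;
END debug;
/
```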

Here's an example of it in practice:

CREATE OR REPLACE
FUNCTION get_age_in_months( p_id IN NUMBER ) RETURN NUMBER
IS
  l_age_in_months INTEGER;
BEGIN
  --instrumentation calls
  debug( 'GET_AGE_IN_MONTHS' );
  debug( 'P_ID: ' || p_id );
  debug( 'select value into variable' );

  SELECT age_in_months
    INTO l_age_in_months
    FROM people_tab
   WHERE id = p_id;

  debug( 'L_AGE_IN_MONTHS: ' || l_age_in_months );

  RETURN l_age_in_months;

EXCEPTION
  WHEN no_data_found THEN
    debug( 'no data found' );
    RETURN NULL;
END get_age_in_months;
/
show errors


I mentioned in the previous article that I had had difficulty grasping this concept initially. I think once I related it to DBMS_OUTPUT.PUT_LINE it became much clearer to me.

This simple debug routine has helped me tremendously in the year or two that I have used it, especially when you get into nested levels of logic. It gets very hard to keep track of where you are, but with this you can run your procedure or function and then issue a SELECT against debug_details_tab to see what was happening when.

I even began using this in my APEX applications. I would preface each line with "APEX: " and then write whatever was necessary so that I could step through the various pieces of code. It became crucial when I was doing validation on a table of records in a collection...oh so much easier.

On large systems this will generate a lot of data. I definitely would not put this inside a loop doing some sort of batch processing, but it is helpful for finding where things fail.

It can certainly be improved on. I basically took the idea of DBMS_OUTPUT.PUT_LINE and wrote it to a table, nothing fancy there. Mr. Kyte mentions writing it to the file system as well. Since I don't have ready access to the database file system, this was the easiest.

Make your life easier and use a debug/logging routine in your code. No longer will you have to comment; the debug statements should do it for you!

Thursday, September 13, 2007

Code Style

Having a specific coding style has become more apparent to me now that I work in a team environment. I have my way, which is of course the best, and everybody else has their way, which isn't nearly as good.

I'm sure if I could make the rules, I'd have everyone writing code according to my ways. Power corrupts and all that.

I have learned to accept others' ways though. It's been difficult at times. If the code is readable and it works, I usually just bite my tongue.

Strangely, I love to type. I manually create all of my scripts. I make my code "pretty." I don't use GUI tools to auto-generate table definitions, and I especially don't use tools to format my code.

Toad Formatter, or whatever it's called, is my enemy. I'm sure Toad is not the only tool with some sort of auto-formatter, and I don't like any of them. For some reason I believe they are an abomination.

When I get to do the technical interview, I scan their resume for SQL*Plus. If I don't see it, I ask them what tools they use. If they don't mention SQL*Plus, I'm skeptical of their abilities...until I get to the five constraints question.

This is not to say that people that use Toad or SQLDeveloper are bad coders or anything, it's just my preference.

So what's your preference?

Monday, September 10, 2007

Oracle Tools I've Used

Tools:
SQL*Plus
APEX - Application Express (formerly HTMLDB)
JDeveloper
SQL Developer
Reports Builder
OC4J
Discoverer
Oracle Server 8i, 9i, 10g (Windows)
Oracle Application Server 9i, 10g (Windows)
rman
oradim
lsnrctl
tkprof

Coming Soon:
Oracle Warehouse Builder

Database Features:
Java
Object Types
Workflow
Advanced Queueing (in conjunction with Workflow)
Heterogeneous Services

Features I'd like to use:
Change Data Capture
XML
Spatial
interMedia
Regular Expressions

Database Supplied Packages:
DBMS_OUTPUT
DBMS_LOB
UTL_FILE
DBMS_APPLICATION_INFO
DBMS_CRYPTO
DBMS_UTILITY
DBMS_METADATA
DBMS_EPG
DBMS_HS_PASSTHROUGH
DBMS_JAVA
DBMS_JOB
DBMS_LOCK
DBMS_MVIEW
DBMS_OBFUSCATION_TOOLKIT
DBMS_RANDOM
DBMS_SESSION
DBMS_SQL
DBMS_STATS
DBMS_XDB
DBMS_XPLAN
HTMLDB_APPLICATION
HTMLDB_ITEM
HTMLDB_UTIL
HTP
OWA_COOKIE
UTL_MAIL
UTL_RAW
UTL_SMTP
WPG_DOCLOAD

Packages I'd like to learn to use (10g):
UTL_DBWS
DBMS_CDC_PUBLISH
DBMS_CDC_SUBSCRIBE
DBMS_DATA_MINING
DBMS_DATAPUMP
DBMS_SCHEDULER
DBMS_OLAP
DBMS_PIPE
DBMS_STREAMS
DBMS_STREAMS_ADM
DBMS_STREAMS_AUTH
DBMS_STREAMS_MESSAGING