
Wednesday, October 20, 2010

DDL Auditing - Revisited

Of course I can't find the post(s) I read recently, but they all concerned auditing DDL. The most desirable solution is to use the built-in auditing provided by Oracle...the only problem with auditing, in general, is that no one really monitors it. I'm sure there are numbers out there (read about that too, but can't seem to find the post). Oh wait, I found it, it was a video that I got via the Database Insider newsletter. You can see it here.

The gist of that video (as cheesy as it may be) is that a broker figures out a way to submit trades through his account that don't really exist. He's automatically paid the commission, some $10 million. The Risk/Security Officer talks about the amount of time (months) they spent trying to track down how it was done. They were at least smart enough to have auditing turned on; however, they never monitored it. Somehow, all this led me to DDL auditing. No idea how, it just did.

After thinking about it for a few days, I realized the similarity between that and my PMDV project I did last year. It only made it to version 0.1, but this gave me the idea to revisit it. Maybe it was just poor naming/branding (PMDV stands for Poor Man's Data Vault)? Maybe it did just suck eggs? Tough to say for sure.

To sum it up quickly, it required a "ticket number" or "work order" to set an application context. Without that application context set, you could not perform any DDL. On top of that, it recorded all the DDL changes performed during that deployment window and gave you decent reporting through an APEX front end. Like I said in one of those other posts, a former company of mine had to provide the security team with audit reports for each deployment. The built-in auditing would capture all of this, but what I created at least captures it in one place and shows which objects changed.

So perhaps this falls into the Change Management landscape (which is on my radar again...big, big topic).

Guess what, I just remembered how I came to remember this...a conversation on Twitter with @hillbillytoad and @krisrice...that's what led me to the DDL trigger and auditing. I think Kris suggested that a DDL trigger should be installed to prevent little old me from doing damage. :)



That link led to here and then a few other places...

Wednesday, February 18, 2009

No DDL in Production Environment - II

Part I.

I said I was going to flesh that out a bit more. After some more thought, I think it could be a nice feature for DBAs.

I'll start with what needs to be stored.

DEPLOYMENTS

  • DEPLOYMENTID - Surrogate key
  • NAME - Name to give the deployment. Could be a ticket number or a change control number. Your choice.
  • DESCRIPTION - What are you doing? Maintenance, the description from the change control ticket, etc.
  • START_DATE - Defaults to SYSDATE. Used to gather the AFFECTED_OBJECTS at the end.
  • STARTED_BY - DEFAULTS to USER
  • END_DATE - Date the deployment ends.
  • ENDED_BY - Will typically be the same as the STARTED_BY, but we'll capture it at the end just in case.
Next up: INVALID_OBJECTS and STAGE_CODES (not to be confused with stage coach).


STAGE_CODE
  • STAGECODE - Nothing really special about this one here. Just a way to maintain data integrity without throwing it into a CHECK CONSTRAINT.
INVALID_OBJECTS
  • INVALIDOBJECTID - Surrogate Key
  • DEPLOYMENTID - Foreign Key from the DEPLOYMENTS table.
  • STAGECODE - Foreign Key from the STAGE_CODES table.
  • OWNER - Self explanatory
  • OBJECT_NAME - Self explanatory
And finally, AFFECTED_OBJECTS:


AFFECTED_OBJECTS
  • AFFECTEDOBJECTID - Surrogate key.
  • DEPLOYMENTID - FK from DEPLOYMENTS
  • OWNER - Self explanatory
  • OBJECT_NAME - Self explanatory
And here's how it all fits together:
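The original diagram aside, here's a minimal DDL sketch of the model based on the column lists above (the data types, sizes, and constraint details are my assumptions, not part of the original design):

```sql
CREATE TABLE stage_codes
( stagecode        VARCHAR2(30) PRIMARY KEY );

CREATE TABLE deployments
( deploymentid     NUMBER PRIMARY KEY,
  name             VARCHAR2(100) NOT NULL,            -- ticket or change control number
  description      VARCHAR2(4000),
  start_date       DATE DEFAULT SYSDATE NOT NULL,     -- used to gather AFFECTED_OBJECTS
  started_by       VARCHAR2(30) DEFAULT USER NOT NULL,
  end_date         DATE,
  ended_by         VARCHAR2(30) );

CREATE TABLE invalid_objects
( invalidobjectid  NUMBER PRIMARY KEY,
  deploymentid     NUMBER NOT NULL REFERENCES deployments ( deploymentid ),
  stagecode        VARCHAR2(30) NOT NULL REFERENCES stage_codes ( stagecode ),
  owner            VARCHAR2(30) NOT NULL,
  object_name      VARCHAR2(128) NOT NULL );

CREATE TABLE affected_objects
( affectedobjectid NUMBER PRIMARY KEY,
  deploymentid     NUMBER NOT NULL REFERENCES deployments ( deploymentid ),
  owner            VARCHAR2(30) NOT NULL,
  object_name      VARCHAR2(128) NOT NULL );
```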


Any ideas on what else I can capture here?

Monday, February 16, 2009

No DDL in Production Environment

There might be a couple of reasons you want to disable DDL in a production environment (other than the normal operations of the database). One being you don't have your schemas locked down (i.e. no service accounts). Another might be the need to track all changes in the environment for reporting/auditing purposes.

It's fairly easy to do.

First, create an Application Context:
CREATE OR REPLACE
CONTEXT secure_ddl_context
USING secure_ddl
ACCESSED GLOBALLY;
The USING clause names the package/procedure/function that is allowed to set the context. Note that the context name must match the namespace used in DBMS_SESSION.SET_CONTEXT and SYS_CONTEXT below (SECURE_DDL_CONTEXT), otherwise the SET_CONTEXT call will fail.

Next, create a procedure that will set the context using DBMS_SESSION:
CREATE OR REPLACE
PROCEDURE secure_ddl( p_ticket IN VARCHAR2 )
AS
  l_sessionid NUMBER := SYS_CONTEXT( 'USERENV', 'SESSIONID' );
BEGIN
  dbms_session.set_identifier( l_sessionid );

  dbms_session.set_context
    ( namespace => 'SECURE_DDL_CONTEXT',
      attribute => 'CAN_DO_DDL',
      value     => 'Y',
      username  => SYS_CONTEXT( 'USERENV', 'SESSION_USER' ),
      client_id => l_sessionid );
END secure_ddl;
/
Finally, create a trigger that fires BEFORE DDL.
CREATE OR REPLACE
TRIGGER enable_ddl
BEFORE DDL ON DATABASE
WHEN ( ora_dict_obj_owner = 'APPLICATION_OWNER' )
BEGIN
  IF SYS_CONTEXT( 'SECURE_DDL_CONTEXT', 'CAN_DO_DDL' ) <> 'Y'
  OR SYS_CONTEXT( 'SECURE_DDL_CONTEXT', 'CAN_DO_DDL' ) IS NULL
  THEN
    raise_application_error( -20001, 'must enable DDL' );
  END IF;
END enable_ddl;
/
So let's test it out.

I'll create two users, APPLICATION_OWNER (see trigger) and TESTUSER. APPLICATION_OWNER is the schema that will store all your application objects and code. TESTUSER will be granted the DBA role, which allows them to create objects in the APPLICATION_OWNER schema.
CREATE USER application_owner IDENTIFIED BY application_owner
DEFAULT TABLESPACE users
QUOTA 1G ON users;

GRANT create session, resource TO application_owner;

CREATE USER testuser IDENTIFIED BY testuser
DEFAULT TABLESPACE users
QUOTA 5M ON users;

GRANT dba TO testuser;
Now log in as TESTUSER and create a table.
TESTUSER@RMDEV2>set sqlprompt TESTUSER@11G>
TESTUSER@11G>CREATE TABLE application_owner.t ( x NUMBER );
CREATE TABLE application_owner.t ( x NUMBER )
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: must enable DDL
ORA-06512: at line 5
Nice. A user with DBA privileges cannot perform DDL in the APPLICATION_OWNER schema. Since the SECURE_DDL procedure is not in the SYS schema, TESTUSER can call it to enable DDL.
TESTUSER@11G>EXEC CJUSTICE.SECURE_DDL( 'TICKET 1' );

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
TESTUSER@11G>CREATE TABLE application_owner.t( x NUMBER );

Table created.
So what about APPLICATION_OWNER? Can that user perform DDL on objects in its own schema (see #1 above)? I've logged into the APPLICATION_OWNER schema:
APPLICATION_OWNER@11G>CREATE TABLE t ( x NUMBER );
CREATE TABLE t ( x NUMBER )
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: must enable DDL
ORA-06512: at line 5
Very cool.

Again, this method could be helpful if:
1. You don't have your schema (application) accounts locked down
2. You want to track all DDL in your production environment.

The procedure can be easily expanded into a series of calls, specifically ENABLE_DDL and DISABLE_DDL. When you call ENABLE_DDL, a record is written to a table with the ticket number and the start time. When you call DISABLE_DDL, the end time is captured and you can further query DBA_OBJECTS and capture all objects that changed during that window.
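That split could look something like this rough sketch (the DEPLOYMENTS and AFFECTED_OBJECTS tables come from the earlier design post; exact column usage, and populating DEPLOYMENTID via a trigger or sequence, are my assumptions):

```sql
CREATE OR REPLACE
PROCEDURE enable_ddl( p_ticket IN VARCHAR2 )
AS
BEGIN
  -- set the context exactly as SECURE_DDL does above
  dbms_session.set_identifier( SYS_CONTEXT( 'USERENV', 'SESSIONID' ) );
  dbms_session.set_context
    ( namespace => 'SECURE_DDL_CONTEXT',
      attribute => 'CAN_DO_DDL',
      value     => 'Y',
      username  => SYS_CONTEXT( 'USERENV', 'SESSION_USER' ),
      client_id => SYS_CONTEXT( 'USERENV', 'SESSIONID' ) );

  -- record the start of the deployment window
  INSERT INTO deployments ( name, start_date, started_by )
  VALUES ( p_ticket, SYSDATE, USER );
END enable_ddl;
/

CREATE OR REPLACE
PROCEDURE disable_ddl( p_ticket IN VARCHAR2 )
AS
BEGIN
  -- close the deployment window...
  UPDATE deployments
     SET end_date = SYSDATE,
         ended_by = USER
   WHERE name = p_ticket;

  -- ...and capture every object that changed during it
  INSERT INTO affected_objects ( deploymentid, owner, object_name )
  SELECT d.deploymentid, o.owner, o.object_name
    FROM deployments d
    JOIN dba_objects o
      ON o.last_ddl_time BETWEEN d.start_date AND d.end_date
   WHERE d.name = p_ticket;

  -- turn DDL back off for this user/session
  dbms_session.clear_context( 'SECURE_DDL_CONTEXT', attribute => 'CAN_DO_DDL' );
END disable_ddl;
/
```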

In a future post, I'll work to expand on this concept to provide this kind of tracking.

Monday, December 15, 2008

DDL Triggers and Security

I've mentioned in the past that I don't particularly care for triggers. That's a shocker, I know, because Tom Kyte loves them so much (here, here, and here).

We're trying to move to a named account approach (for developers too) and the only way to do that is to grant system level privileges. That's not good.

I had read about DDL triggers in the past but wasn't sure exactly how they could be used. Well, let's just say you can do quite a lot. There are database level events and client level events. The database level events include: STARTUP, SHUTDOWN, DB_ROLE_CHANGE and SERVERERROR. A few of the client level events include (BEFORE and AFTER): DDL, LOGOFF, LOGON, GRANT, DROP, CREATE, ALTER, etc.

Being a developer my entire career (with certain DBA skills), I had never quite wandered into this territory. I was rather impressed.

The three I thought would make the most sense were ALTER, DROP and CREATE.

If you have system level privileges my biggest concern would be CREATE ANY PROCEDURE and EXECUTE ANY PROCEDURE (though the latter is not one that is currently granted). I've used those 2 privileges to get the DBA role in the past.

Here's what I need to do:
1. Does the user have the DBA role?
SELECT grantee
INTO l_grantee
FROM dba_role_privs
WHERE grantee = ora_login_user
AND granted_role = 'DBA';

If they do have the role, it exits right then.
If not,
2. Where is the user trying to create/drop/alter an object? Is it in an approved schema? Their own?
3. raise_application_error if it's not approved or not their own object

Easy enough:
CREATE OR REPLACE
TRIGGER system.no_create_in_dba_schema
BEFORE CREATE
ON DATABASE
DECLARE
  l_user        VARCHAR2(30)  := ora_login_user;
  l_owner       VARCHAR2(30)  := ora_dict_obj_owner;
  l_object_name VARCHAR2(128) := ora_dict_obj_name;
  l_dba         VARCHAR2(30);
  l_can_create  BOOLEAN;
BEGIN
  --is the user a DBA? if so, fall through and allow the CREATE
  SELECT grantee
    INTO l_dba
    FROM dba_role_privs
   WHERE granted_role = 'DBA'
     AND grantee = l_user;
EXCEPTION
  WHEN no_data_found THEN
    --not a DBA: is this an approved schema, or their own?
    l_can_create := can_user_create_alter_drop
                      ( p_user  => l_user,
                        p_owner => l_owner );

    IF l_can_create = FALSE THEN
      raise_application_error( -20001, 'cannot CREATE objects in ' || l_owner );
    END IF;
END no_create_in_dba_schema;
/
show errors
In preparation for this post I found two very similar articles by Arup Nanda and Tom Kyte. I think they've trained me well.

Thursday, May 1, 2008

Validating a Process Part II

Continued from my previous post.

While discussing external tables with my feisty colleague some time back, I explained that I liked using them, but I couldn't figure out how to handle incoming files whose names didn't match the file name defined in the table definition.

Colleague to the rescue:

ALTER TABLE table_name LOCATION ( 'new_file_name.csv' );
Cool!

As I mentioned before, I had four file types to read: 820 and 835, both in the X12 format, and two custom file layouts (essentially flat files). Since no one in the group knows Java yet, I wanted to keep the Java portion of the application as small as possible. So for the two custom files, I decided to use external tables. I could then put the above ALTER TABLE statement into practice.

As I looped through the list of files to be processed, I would issue an EXECUTE IMMEDIATE so that I could then SELECT from the table in the next step. It worked like a charm.
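The loop looked roughly like this (the table and column names here are hypothetical stand-ins, not the actual ones I used):

```sql
DECLARE
  -- hypothetical driver table listing the incoming files
  CURSOR incoming_files IS
    SELECT file_name FROM files_to_process;
BEGIN
  FOR rec IN incoming_files LOOP
    -- repoint the external table at the next file;
    -- this is DDL, so it issues an implicit COMMIT
    EXECUTE IMMEDIATE
      'ALTER TABLE custom_feed_ext LOCATION ( ''' || rec.file_name || ''' )';

    -- now the external table reads the new file
    INSERT INTO custom_feed_staging
    SELECT * FROM custom_feed_ext;
  END LOOP;
END;
/
```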

As I was doing some testing, I would issue the ROLLBACK statement to clear the tables for the next run. When I verified, there was still data there. WTF?

Oh wait, there's an EXECUTE IMMEDIATE...which runs DDL...which COMMITs...barnacles!

So I couldn't use that new thing I learned, oh well. Fortunately UTL_FILE does have the ability to rename files, so I picked a name like 'external_table_file_name.txt', renamed each incoming file to that, then did the SELECT. Works like a charm.
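The rename-instead-of-ALTER approach is a one-liner with UTL_FILE.FRENAME (the directory object DATA_DIR and the incoming file name are assumptions for illustration):

```sql
BEGIN
  -- rename the incoming file to the name fixed in the external
  -- table definition; no DDL involved, so no implicit COMMIT
  UTL_FILE.FRENAME
    ( src_location  => 'DATA_DIR',
      src_filename  => 'incoming_20080501.csv',
      dest_location => 'DATA_DIR',
      dest_filename => 'external_table_file_name.txt',
      overwrite     => TRUE );
END;
/
```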