Thursday, March 31, 2011

Expert Oracle APEX

I'm a bit of a fan of APEX; haven't used it consistently in a while but I think it's a great tool. I've annoyed many managers/DBAs about getting it installed and configured for use...most of that to no avail.

Many years ago, almost 6 to be exact, I decided to port my business' J2EE app over to APEX and I needed hosting. APEX was still relatively new, so the market was small. Through the forums, I found John Scott and ShellPrompt. It had taken a few months to write the J2EE application from scratch; it took less than 2 weeks of off-hours work to port it. I hosted my site there for a year, maybe a little longer. During that time, John was pure awesome. I blogged about John's customer service once, but that was another life and the blog no longer exists. We have met IRL, but have yet to have a beer together; I'm pretty sure he owes me at least 4 now.

Anyway, John has put together a new book, Expert Oracle Application Express, which is a joint effort by some 14 different APEX authors.



While the content is sure to be outstanding, the best part is that all funds will be donated to the families of 2 men, Carl Backstrom and Scott Spadafore. Both men worked for Oracle on the APEX team. Both men were incredibly involved in the community. Both were highly regarded in that community.


John blogs about it here.

If you use APEX, are new to APEX, or just want to know WTF it is, go out and (pre)order this book now. Not only do you get a great resource, but you also get to help out the families of Carl and Scott.

Wednesday, March 30, 2011

The Nerd Defense

Really couldn't pass this one up. Friend and co-worker, Erica Baer [@skibaer] who has a Ph.D. in some sort of crazy thing (and she's not even 30!) sent me a link to Eyeglasses and Mock Juror Decisions...I'm pretty sure her Ph.D. is in something related to psychology (which is good for me, free counseling) and the selection of jurors. Wait, it's Forensic Psychology, LinkedIn told me so. Sadly, much of my charm is wasted on her, she sees right through it. Oh well.

So the link...had a great picture:



There are like 4 elements that apply to me there:
1. Nerd
2. Justice
3. Bald
4. Rollie Fingers-esque mustache
5. Glasses

Ok, 5, I can count, but you knew that.

Tuesday, March 29, 2011

OEL 6 + VirtualBox Guest Additions

I'm taking my second spin at Oracle Enterprise Linux 6. The first time around, I did not install a Desktop, so I was completely lost. I suppose I could get by without one now...but I'm not going there just yet.

One of the very first things I do when creating a VM is install the Guest Additions. The main reason is screen size. Out-of-the-box, it's 800x600 or some such nonsense. I'm sure I could cope if I had to, but I'd rather not; it's just annoying. I guess a little perspective is in order:



That's against the backdrop of a 23" monitor. So yeah, annoying.

After having done this about 20 times, I know you need to install some extra packages, usually the kernel* and make packages. I'll just skip straight to installing the Guest Additions though, and see what happens.
Nothing to see here.
Well, the bi-directional clipboard support comes with the Guest Additions (which, of course, aren't installed yet), and I'm not in the mood to type up the entire thing. Here's the specific error I received, however (typed by hand, you are welcome):
Building the VirtualBox Guest Additions kernel modules
The headers for the current running kernel were not found. If the following
module compilation fails then this could be the reason.
The missing package can be probably [sic] installed with
yum install kernel-devel-2.6.32-100.28.5.el6.x86_64
The output of the log file shows this:
[root@oel6test VBOXADDITIONS_4.0.4_70112]# tail /var/log/vboxadd-install.log -n 100
Makefile:23: *** Error: unable to find the sources of your current Linux kernel.
Specify KERN_DIR= and run Make again.. Stop.
OK, let's try that. The Google Machine told me the kernel lives in /usr/src/kernels/
[root@oel6test VBOXADDITIONS_4.0.4_70112]# export KERN_DIR=/usr/src/kernels/
Run the command again and it tells me that "KERN_DIR does not point to a directory. Stop." No you stop. Whatever.
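In hindsight, I suspect KERN_DIR has to point at the versioned directory itself (the one matching uname -r), not the parent, and that directory only exists once the matching kernel devel package is installed. Something like this, as a sketch:
# assumes the devel package matching the running kernel is installed
export KERN_DIR=/usr/src/kernels/$(uname -r)
ls $KERN_DIR    # should show Makefile, include/, etc. if the headers are really there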

I then go into the "additional" packages provided by the OEL6 DVD. Funny, now that I look at the kernel* packages, I wonder if they are outdated. uname -r shows me 2.6.32-100.28.5.el6.x86_64, which I believe is correct, but the kernel* packages on the disk look to be in the 2.6.32-71 range. Weird, since I seem to have the latest...

Now, it's time for some public yum, Oracle's public yum server to be exact. Awesome, instructions on the front page on how to set it up, makes it somewhat foolproof for someone like me:

Download and copy the appropriate yum configuration file in place, by running the following commands as root:

Oracle Linux 4, Update 6 or Newer

# cd /etc/yum.repos.d
# mv Oracle-Base.repo Oracle-Base.repo.disabled
# wget http://public-yum.oracle.com/public-yum-el4.repo
Oracle Linux 5

# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo
Oracle Linux 6

# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ol6.repo
Oracle VM 2

# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ovm2.repo
Enable the appropriate repository by editing the yum configuration file

Open the yum configuration file in a text editor
Locate the section in the file for the repository you plan to update from, e.g. [el4_u6_base]
Change enabled=0 to enabled=1
Begin using yum, for example:

yum list

yum install firefox
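Strung together, the OL6 case is only a handful of commands; a rough sketch (the repo section names inside the file may differ, so peek at it before enabling anything):
# as root, on the guest
cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo
# flip enabled=0 to enabled=1 for the repo(s) you want, either in an editor or:
sed -i 's/enabled=0/enabled=1/' public-yum-ol6.repo
yum install kernel-uek-devel gcc make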

And here is where I cheat, sort of. Instead of just using yum, I go to the Add/Remove Software GUI.



Wait just a second...those "*100*" packages have "uek" in them, I thought that was for the Unbreakable Enterprise Kernel. Did I install that? I thought I opted out of that...

So back to the DVD and there are those packages sitting there, of the "uek*100*" variety. Let's install those.
kernel-uek-2.6.32-100.28.5.el6.x86_64.rpm
kernel-uek-debug-2.6.32-100.28.5.el6.x86_64.rpm
kernel-uek-debug-devel-2.6.32-100.28.5.el6.x86_64.rpm
kernel-uek-devel-2.6.32-100.28.5.el6.x86_64.rpm
kernel-uek-doc-2.6.32-100.28.5.el6.noarch.rpm
kernel-uek-firmware-2.6.32-100.28.5.el6.noarch.rpm
kernel-uek-headers-2.6.32-100.28.5.el6.x86_64.rpm
I install the debug, the debug-devel, devel and headers. Reboot.

No joy. But I can see that it can't find the gcc command, so I just have to load that package. Of course, it isn't that easy; it has dependencies.
[root@oel6test Packages]# rpm -ivh gcc-4.4.4-13.el6.x86_64.rpm 
warning: gcc-4.4.4-13.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
cloog-ppl >= 0.15 is needed by gcc-4.4.4-13.el6.x86_64
cpp = 4.4.4-13.el6 is needed by gcc-4.4.4-13.el6.x86_64
First up:
[root@oel6test Packages]# rpm -ivh glibc-headers-2.12-1.7.el6.x86_64.rpm 
warning: glibc-headers-2.12-1.7.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:glibc-headers ########################################### [100%]
Now I can load the devel files for glibc.
[root@oel6test Packages]# rpm -ivh glibc-devel-2.12-1.7.el6.x86_64.rpm 
warning: glibc-devel-2.12-1.7.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:glibc-devel ########################################### [100%]
What else?
[root@oel6test Packages]# rpm -ivh cpp-4.4.4-13.el6.x86_64.rpm 
warning: cpp-4.4.4-13.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
libmpfr.so.1()(64bit) is needed by cpp-4.4.4-13.el6.x86_64
Seriously. Since this isn't the first time today, I know which package that .so file comes from.
[root@oel6test Packages]# rpm -ivh mpfr-2.4.1-6.el6.x86_64.rpm 
warning: mpfr-2.4.1-6.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:mpfr ########################################### [100%]
Now cloog-ppl.
[root@oel6test Packages]# rpm -ivh cloog-ppl-0.15.7-1.2.el6.x86_64.rpm 
warning: cloog-ppl-0.15.7-1.2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
error: Failed dependencies:
libppl.so.7()(64bit) is needed by cloog-ppl-0.15.7-1.2.el6.x86_64
libppl_c.so.2()(64bit) is needed by cloog-ppl-0.15.7-1.2.el6.x86_64
What about a ppl* package?
[root@oel6test Packages]# rpm -ivh ppl-0.10.2-11.el6.x86_64.rpm 
warning: ppl-0.10.2-11.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:ppl ########################################### [100%]
Almost there. Back to cloog.
[root@oel6test Packages]# rpm -ivh cloog-ppl-0.15.7-1.2.el6.x86_64.rpm 
warning: cloog-ppl-0.15.7-1.2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:cloog-ppl ########################################### [100%]
cpp
[root@oel6test Packages]# rpm -ivh cpp-4.4.4-13.el6.x86_64.rpm 
warning: cpp-4.4.4-13.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:cpp ########################################### [100%]
gcc?
[root@oel6test Packages]# rpm -ivh gcc-4.4.4-13.el6.x86_64.rpm 
warning: gcc-4.4.4-13.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:gcc ########################################### [100%]
Yes!

It's working...

Sweet!
[root@oel6test VBOXADDITIONS_4.0.4_70112]# /bin/bash ./VBoxLinuxAdditions.run install
Verifying archive integrity... All good.
Uncompressing VirtualBox 4.0.4 Guest Additions for Linux.........
VirtualBox Guest Additions installer
Removing installed version 4.0.4 of VirtualBox Guest Additions...
Removing existing VirtualBox DKMS kernel modules [ OK ]
Removing existing VirtualBox non-DKMS kernel modules [ OK ]
Building the VirtualBox Guest Additions kernel modules
Your guest system does not seem to have sufficient OpenGL support to enable
accelerated 3D effects (this requires Linux 2.6.27 or later in the guest
system). This Guest Additions feature will be disabled.


Building the main Guest Additions module [ OK ]
Building the shared folder support module [ OK ]
Doing non-kernel setup of the Guest Additions [ OK ]
Starting the VirtualBox Guest Additions [ OK ]
Installing the Window System drivers
Installing X.Org Server 1.7 modules [ OK ]
Setting up the Window System to use the Guest Additions [ OK ]
You may need to restart the hal service and the Window System (or just restart
the guest system) to enable the Guest Additions.

Installing graphics libraries and desktop services componen[ OK ]
Reboot (or just restart your X Windows).



Voila!

Update: 03/30/2011 20:50:00 EST
Thanks to Robin Moffatt [blog|@rnm1978] for providing a much simpler solution:
yum install kernel* dkms gcc
Reboot after that and install your Guest Additions. No more having to figure out the dependencies on your own (I'll leave it to you to decide whether that is a good exercise or not).

Update: 02/14/2013 12:00:00 EST
One more addition from Mr. Closson (comments, below).
# yum install kernel-uek-devel 

Monday, March 28, 2011

Decoding DECODE

I've never been a fan of DECODE. I find it difficult to read, especially past the first 4 positions, (EXPRESSION, MATCH_TO_EXPRESSION, RESULT_IF_MATCHED, ELSE_RETURN_VALUE). A friend of mine was trying to...decode a long DECODE expression, so I put on my Cary Millsap hat and tried to break it down.

Despite never having seen Cary speak, I love that he always includes Teacher in his bio. Those that present are in fact teachers, but with that inclusion, I think Cary takes it a step further.



(I plan on changing that streak this year, see Tools, Tools, Tools!)

So here's what I said, maybe it can help some newbie out there get started. Me, I encourage people to avoid DECODE and use the CASE expression instead...mostly for its readability (naturally I have seen that abused as well, oh well).

Here's a mock statement:
DECODE(my_expression,0,0,NULL,NULL,(another_expression/my_expression))


First, format the dang thing. Put some white space in there.
DECODE( my_expression, 0, 0, NULL, NULL, ( another_expression / my_expression ) )
We're not exactly tuning, but readability goes a long way.

Now, let's look at the picture defined in the docs:



Just for simplicity's sake, I'm going to spell it out:
DECODE( POSITION_1, POSITION_2, POSITION_3, POSITION_4, POSITION_5, POSITION_6 )
At least to me, that helps. Up to 4 places, this probably isn't necessary, but as it gets long (and I've seen some whoppers), it's good to be able to draw it out.

What does that mean? If POSITION_1 = POSITION_2, then use the value from POSITION_3. If POSITION_1 = POSITION_4, use the value from POSITION_5. The default will be POSITION_6.

Written out in CASE format, it would look like this:
CASE
WHEN my_expression /* POSITION_1 */ = 0 /* POSITION_2 */
THEN 0 /* POSITION_3 */
WHEN my_expression /* POSITION_1 */ IS NULL /* POSITION_4 */
THEN NULL /* POSITION_5 */
ELSE another_expression / my_expression /* POSITION_6 */
END
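If you want to convince yourself the two behave the same (including the NULL case, which DECODE matches natively), a quick sanity check against DUAL might look like this; just a sketch with made-up values:
SELECT my_expression,
       DECODE( my_expression, 0, 0, NULL, NULL, ( another_expression / my_expression ) ) decode_result,
       CASE
         WHEN my_expression = 0       THEN 0
         WHEN my_expression IS NULL   THEN NULL
         ELSE another_expression / my_expression
       END case_result
  FROM ( SELECT 10 another_expression, my_expression
           FROM ( SELECT 0 my_expression FROM dual
                  UNION ALL
                  SELECT NULL FROM dual
                  UNION ALL
                  SELECT 5 FROM dual ) );
Both columns come back identical for the 0, NULL and 5 rows, and neither evaluates the division when it isn't needed.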
I write this up because I got the "You should be a teacher" comment at the end, which I took as a great compliment.

Wednesday, March 23, 2011

Fun with Tuning!

It's been an interesting and fun night.

Started, innocently enough, with a tweet (go figure, reason #1042 why Twitter rocks) about my CTAS operation completing.



That resulted in a flurry of activity and some actual learning on my part.

Of course you have the smart-ass (tweet), Matt Topper [@topperge]



I quickly did the math; his was only 4,919 times faster than mine. Though after tonight, I would have a hard time believing anything I say.

My CTAS operation created 102 GB of data in a shade over 7 hours. He did 70 TB per hour. Whatever. Showoff.

I need to back up a little actually. I have been posting these numbers over the last few days. Yesterday, Martin Berger [@martinberx] sees one of those tweets (a cry for help?) and follows up via email (he rocks anyway, he sent katezilla a postcard from Vienna last year too).

We've exchanged a few emails, mostly me telling him I have no idea what he's talking about and then me trying to explain what I am talking about. Or something.

Tonight (yesterday?) he asked for an AWR report. I told him I disabled everything via DBCA. He told me I probably didn't, nicely. Then he pointed me to Tim Hall's post on running awrrpt.sql. Following those easy instructions, I ran the report. I guess I didn't turn it off.
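For anyone following along, generating the report is basically running the script from SQL*Plus as a privileged user; it prompts for the report type, snapshot range, and file name:
SQL> @?/rdbms/admin/awrrpt.sql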

So far, Matt's a smart-ass, and Martin rocks.

Then Greg Rahn (@gregrahn) joins.



So I have cruddy disks? (read it again Justice).

"Seems likely that the disk writes are the slow side of the execution. The read side probably faster. Got SQL Monitor report?"

I'm thinking, "...", actually, I wasn't. I was stuck on disk. But I could get him a SQL Dev report.

With that, he gets the SQL ID and tells me to do this:

Oracle 11g: Real-Time SQL Monitoring Using DBMS_SQLTUNE.REPORT_SQL_MONITOR

That's 3 years old. Wow.
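For reference, pulling a SQL Monitor report for a given SQL ID is roughly this (my own sketch, not Greg's exact command; the sql_id value here is made up):
SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 1000 PAGESIZE 0 TRIMSPOOL ON
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => '0abc123def456',
         type         => 'TEXT',
         report_level => 'ALL' ) AS report
  FROM dual;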

BTW, here's the SQL statement I was running:
CREATE TABLE claim
COMPRESS BASIC
NOLOGGING
AS
SELECT /*+ PARALLEL( c, 8 ) */
date_of_service,
date_of_payment,
claim_count,
units,
amount,
...
...
( CASE
WHEN application_year IS NULL THEN '9'
ELSE application_year
END ) application_year,
( CASE
WHEN accounting_code IS NULL THEN '9'
ELSE accounting_code
END ) accounting_code,
( CASE
WHEN claim_form_type IS NULL THEN '9'
ELSE claim_form_type
END ) claim_form_type,
( CASE
WHEN diagnosis_code_1 IS NULL THEN '-999'
ELSE diagnosis_code_1
END ) diagnosis_code_1
...
...
FROM claims c;
The night gets even crazier: old friend Tom Roach (@tomroachoracle) has now ssh'd into my VM. He's doing all kinds of craziness. He walked me through sar, iostat, and a few other tricks. At least now I know how to use the history command so I can replicate it all.

Meanwhile, Matt is still heckling me and I get an email from Greg after reviewing the SQL Monitor report.

(I'm paraphrasing here)

"Take a look at your CTAS again, anything jump out at you?"

Me: (To myself): "No"
Me: (staring at the email)

"Perhaps you have a parallel hint on the select but not on the table, like this"
CREATE TABLE claim
COMPRESS BASIC
NOLOGGING
PARALLEL 8
AS
SELECT /*+ PARALLEL( c, 8 ) */
date_of_service,
date_of_payment,
claim_count,
units,
amount,
...
...
Wow. Really? How did he do that? Impressive. Cool!

I admit to Greg that I thought he was talking about hardware in his original tweet. He said something pithy. I bowed.

So that information (kinda important huh?) coupled with Mr. Roach's look at what was happening using sar told me something...I remember this, CPU wasn't being fully utilized. I can't remember the exact numbers so let's call it 50%. I told him I was about to rerun (it's about 1:30 AM at this point), and he suggested upping the DOP to 16 from 8. Sure. I'll do what I'm told.

I reran the statement with the fix in place, upping the DOP on both the table and the SELECT. As I was putting this together, it finished. 2.5 hours. Waaaay better than 7.5 hours. Tolerable for me since I'm not as cool as Matt (who was only on a 1/4 rack).

I learned stuff...mostly about how little I do know. I'll try to write up more notes in the future so I don't forget everything and so no one who helped me will have wasted their time.

Thanks so much to Tom, Martin and Greg, your help is greatly appreciated.

Monday, March 21, 2011

Fun with NFS Mount

This is just an ongoing saga for me. Nothing is easy. I'm pretty sure there is a great quote for that.

After my ORA-27054: NFS file system where the file is created or resides is not mounted with correct options issues last night (this morning), they just keep coming.

After running the mount command and then starting the backup process, I let it go until this morning. When I checked, it was complete. Yeah.

Checking the System Resource Monitor though, showed differently. I had less space than I did when I started (and quickly approaching no space). Further investigation showed that the mount command didn't work, at all. It didn't fail either though, so I assumed (again, I know) all was right in the world.
[root@medicaid oradata]# umount /media/test
umount: /media/test: not mounted
Not good.
[root@medicaid oradata]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw,size=7g)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
data on /media/sf_data type vboxsf (gid=501,rw)
exp on /media/sf_exp type vboxsf (gid=501,rw)
It's not listed. The ls command displayed the files...but they weren't on the share.

So I started to remove items from the mount command, ending up with this:
[root@medicaid media]# mount -t nfs oraclenerd:/media/data_2/files/oradata/TESTING /media/test
mount: mount to NFS server 'oraclenerd' failed: Connection refused.
Ah, look at that, an error. Sweet. Where was that before? What did I do wrong?

After a bit of looking around, I found this Ubuntu guide on setting up NFS.

Looks like I was missing the nfs-kernel-server package on the host. NFS = Network File System. I'm slow.

From the introduction of the How To document:
NFS (Network File System) allows you to 'share' a directory located on one networked computer with other computers/devices on that network. The computer 'sharing' the directory is called the server and the computers or devices connecting to that server are called clients. The clients 'mount' the shared directory, it becomes part of their own directory structure.

NFS is perfect for a NAS (Networked Attached Storage) deployment in a Linux/Unix environment. It is a native Linux/Unix protocol as opposed to Samba which uses the SMB protocol developed by Microsoft. The Apple OS has good support for NFS. Windows 7 has some support for NFS.

Sounds like what I am trying to do.

Load the package and configure the exports file (/etc/exports) with the following:
/media/data_2 medicaid(rw,sync,no_subtree_check)
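For my own notes, the host (Ubuntu) side boiled down to roughly this; a sketch rather than a proper guide:
# on the Ubuntu host (oraclenerd)
sudo apt-get install nfs-kernel-server
# add the export line above to /etc/exports, then publish it
sudo exportfs -ra
sudo service nfs-kernel-server restart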
Then run the original mount command.
[root@medicaid media]# mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 -t nfs 
oraclenerd:/media/data_2/files/oradata/TESTING /media/test
Mounted.
[root@medicaid media]# touch test/text.txt
File created.

ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

All I seem to post are my issues...well, technical issues.

On Friday I encountered ORA-08103: object no longer exists. The result there: I had to re-re-re-re-re-load the files. Once I did that, I realized that the file name had actual meaning...In the table itself was a column called MONTH, which was relative to the fiscal year (July - June). Guess where the fiscal year was? Yup, the file name. Ugh.

So, as soon as I got the data loaded, I did this:
RMAN>BACKUP DATABASE;
I didn't want to go through the corrupted block issue for a 3rd time. After some more work, moving data around, I checked out the RMAN docs and decided to perform an incremental backup. Well, I'm still learning; I apparently did the whole thing.
RMAN>BACKUP DATABASE INCREMENTAL LEVEL 0;
OK, now I'm running out of space, I have 500 GB allocated to this VM.

I read through the RMAN docs some more and found BACKUP BACKUPSET. Yeah, backing up your backup. Inception.

I wanted to save this to my Shared Folders that now auto-mount (as of VirtualBox 4.0).
RMAN> BACKUP BACKUPSET 4
2> FORMAT '/media/sf_files/oradata/TESTING/20110320_%u'
3> DELETE INPUT;

Starting backup at 20-MAR-11
using channel ORA_DISK_1
channel ORA_DISK_1: input backup set: count=13, stamp=746274912, piece=1
channel ORA_DISK_1: starting piece 1 at 20-MAR-11
channel ORA_DISK_1: backup piece /u01/app/oracle/product/11.2.0/dbhome/dbs/0dm7mg30_1_1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/20/2011 21:36:53
ORA-19504: failed to create file "/media/sf_files/oradata/TESTING/20110320_0dm7mg30"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
ORA-19600: input file is backup piece (/u01/app/oracle/product/11.2.0/dbhome/dbs/0dm7mg30_1_1)
ORA-19601: output file is backup piece (/media/sf_files/oradata/TESTING/20110320_0dm7mg30)
Checking the Google Machine took me to Kevin Closson's post Mount Options for Oracle over NFS. It’s All About the Port.. Those options are the following:
rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0
OK, I'm still new to Linux and now my Shared Folders auto-mount, so WTF do I do? More Google Machine goodness took me to a post on the OTN Forums with the exact command I needed to run (which is, of course, why I am writing this up now...).
[root@medicaid media]# mount -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 -t nfs 
oraclenerd:/media/data_2/files/oradata/TESTING /media/test
See that little trick I picked up? I used oraclenerd:/media/data_2/... instead of relying on the VirtualBox magic. Nice.
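If I wanted that mount to survive a reboot, the same options should translate into an /etc/fstab entry on the guest, something like this (untested by me, so treat it as a sketch):
oraclenerd:/media/data_2/files/oradata/TESTING  /media/test  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0  0 0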

No errors, I then checked out the permissions:
[root@medicaid media]# ls -la
total 36
drwxr-xr-x 6 root root 4096 Mar 21 01:37 .
drwxr-xr-x 28 root root 4096 Mar 18 02:13 ..
-rw-r--r-- 1 root root 0 Mar 18 02:13 .hal-mtab
drwxrwx--- 1 root vboxsf 4096 Mar 4 04:18 sf_data
drwxrwx--- 1 root vboxsf 8192 Mar 8 13:59 sf_exp
drwxr-xr-x 1 root vboxsf 4096 Mar 20 21:18 sf_files
drwxr-xr-x 2 root root 4096 Mar 21 01:37 test
OK, let's change those:
[root@medicaid media]# chown oracle:dba test/ -R
[root@medicaid media]# ls -la
total 36
drwxr-xr-x 6 root root 4096 Mar 21 01:37 .
drwxr-xr-x 28 root root 4096 Mar 18 02:13 ..
-rw-r--r-- 1 root root 0 Mar 18 02:13 .hal-mtab
drwxrwx--- 1 root vboxsf 4096 Mar 4 04:18 sf_data
drwxrwx--- 1 root vboxsf 8192 Mar 8 13:59 sf_exp
drwxr-xr-x 1 root vboxsf 4096 Mar 20 21:18 sf_files
drwxr-xr-x 2 oracle dba 4096 Mar 21 01:37 test
Sweet. Now, run the RMAN backup of a backup again:
RMAN> BACKUP BACKUPSET 4
2> FORMAT '/media/test/2011_%u'
3> DELETE INPUT;

Starting backup at 21-MAR-11
using channel ORA_DISK_1
channel ORA_DISK_1: input backup set: count=13, stamp=746274912, piece=1
channel ORA_DISK_1: starting piece 1 at 21-MAR-11
channel ORA_DISK_1: backup piece /u01/app/oracle/product/11.2.0/dbhome/dbs/0dm7mg30_1_1
No errors...and it seems to be running fine. If it breaks, I'll update.

Update: 03/21/2011 02:41 AM
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 03/21/2011 02:39:39
ORA-19502: write error on file "/media/test/2011_0dm7mg30", block number 10173824 (block size=8192)
ORA-27072: File I/O error
Linux-x86_64 Error: 11: Resource temporarily unavailable
Additional information: 4
Additional information: 10173824
Additional information: 630784
ORA-19502: write error on file "/media/test/2011_0dm7mg30", block number 10173824 (block size=8192)
Barnacles.

Friday, March 18, 2011

ORA-08103: object no longer exists

From the docs:
ORA-08103: object no longer exists
Cause: The object has been deleted by another user since the operation began, or a prior incomplete recovery restored the database to a point in time during the deletion of the object.
Action: Delete the object if this is the result of an incomplete recovery.

It started innocently enough.



Weird, I can see it in the schema browser.



I can select it from the schema browser and view the table properties.



I can issue a SELECT *



Strangely, the Count Rows just fails silently.

Enough of this GUI crap.

From the server itself, but first, the specs:
Host: Ubuntu 10.10 Desktop
RAM: 24 GB
CPU: Dual Xeon QuadCore something or another
VirtualBox: 4.0.4 r70112

Guest: Oracle Enterprise Linux 5.5 (Carthage)
RAM: 12 GB
Processors: 4
Shared Folders: 3
Guest Additions installed.
Database: 11gR2

Describe the table:
sql>desc eligibility
Name Null? Type
----------------------- -------- ----------------
FILE_ID NUMBER
RECORD_ID NUMBER(38)
RECIPIENT_ID VARCHAR2(12)
MONTH_OF NUMBER(4)
DISTRICT_CODE VARCHAR2(2)
COUNTY VARCHAR2(2)
ELIGIBILITY_FLAG VARCHAR2(1)
HMO_FLAG VARCHAR2(1)
NURSING_HOME_FLAG VARCHAR2(1)
MEDIPASS_FLAG VARCHAR2(1)
AGE NUMBER(38)
PROGRAM_CODE VARCHAR2(4)
GENDER VARCHAR2(1)
Good.

SELECT COUNT(*)
sql>SELECT COUNT(*) FROM eligibility;
SELECT COUNT(*) FROM eligibility
*
ERROR at line 1:
ORA-08103: object no longer exists
OK.
sql>SELECT * FROM eligibility WHERE rownum < 11;

FILE_ID RECORD_ID RECIPIENT_ID MONTH_OF DI CO E H N M AGE PROG G
---------- ---------- ------------ ---------- -- -- - - - - ---------- ---- -
52 1892424743 ACF9ASfEJrVw 1 07 48 6 0 0 0 27 MM P F
52 1892424744 ACF9ASfEJrVw 2 07 48 6 0 0 0 27 MM P F
52 1892424745 ACF9ASfEJrVw 3 07 48 6 0 0 0 28 MM P F
52 1892424746 ACF9ASfEJrVw 4 07 48 6 0 0 0 28 MM P F
52 1892424747 ACF9ASfEJrVw 5 07 48 1 0 0 0 28 MA R F
52 1892424748 ACF9ASfEJrVw 6 07 48 1 0 0 0 28 MA R F
52 1892424749 ACF9ASfEJrVw 7 07 48 1 0 0 0 28 MA R F
52 1892424750 ACF9ASfEJrVw 8 07 48 1 0 0 0 28 MA R F
52 1892424751 ACF9ASfEJrVw 9 07 48 1 0 0 0 28 MA R F
52 1892424752 ACF9ASfEJrVw 10 07 48 1 0 0 0 28 MA R F

10 rows selected.
Hah. You are there, sort of. Try to fool it.
sql>SELECT COUNT(*) FROM ( SELECT * FROM eligibility );
SELECT COUNT(*) FROM ( SELECT * FROM eligibility )
*
ERROR at line 1:
ORA-08103: object no longer exists
Barnacles.

If I remember correctly, this table took about 10 hours to load. Sometime later, I had another corrupted block; fortunately it was on a small table, so I dropped it, the tablespace, and the associated data file. The next time I restarted the server, I was told to run fsck. Not good.

I run it with the -c switch. I can get back into it. Now this.

This is a DW type environment. ARCHIVELOG is not turned on. There is no backup.

So, should I drop and reload the dang thing? Or is it somehow recoverable?

Help. Winning answer will get a t-shirt.

Update 03/18/2011 10:46 PM EST
I tried Gary's method from below first, as it seemed the easiest to test out. There was a single bitmap index on the table, so I booted it:
DROP INDEX bmp_fileid_eligibility;
Now the test:
SELECT COUNT(*) FROM eligibility;
Waiting...
ORA-01578: ORACLE data block corrupted (file # 9, block # 2440127)
ORA-01110: data file 9: '/u01/app/oracle/oradata/TESTING/medicaid_01_04.dbf'
01578. 00000 - "ORACLE data block corrupted (file # %s, block # %s)"
*Cause: The data block indicated was corrupted, mostly due to software
errors.
*Action: Try to restore the segment containing the block indicated. This
may involve dropping the segment and recreating it. If there
is a trace file, report the errors in it to your ORACLE
representative.
Great. Same situation as last time. I ended up rebuilding the entire VM. I've never had these kinds of problems before...perhaps there is something wrong with my system setup or possibly how I set up Linux? I don't know. Sadly, I currently lack the skills necessary to figure it out. More importantly, I lack the time.

Following Martin's suggestion from last time, I have determined that the corrupt block belongs to the table and not the index (duh, I just dropped it and got that error). Next up, Joel's suggestion, CTAS.
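(For the record, mapping a corrupt block back to its owning segment is basically a lookup against DBA_EXTENTS using the file# and block# from the ORA-01578 message; a sketch using the numbers above:)
SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = 9
   AND 2440127 BETWEEN block_id AND block_id + blocks - 1;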

Update 03/19/2011 1:46 AM EST
No go with CTAS
Error starting at line 1 in command:
CREATE TABLE elig
COMPRESS BASIC
NOLOGGING
AS
SELECT *
FROM eligibility
Error at Command Line:6 Column:5
Error report:
SQL Error: ORA-01578: ORACLE data block corrupted (file # 9, block # 2440127)
ORA-01110: data file 9: '/u01/app/oracle/oradata/TESTING/medicaid_01_04.dbf'
01578. 00000 - "ORACLE data block corrupted (file # %s, block # %s)"
*Cause: The data block indicated was corrupted, mostly due to software
errors.
*Action: Try to restore the segment containing the block indicated. This
may involve dropping the segment and recreating it. If there
is a trace file, report the errors in it to your ORACLE
representative.
Update 03/19/2011 3:46 AM EST
After reviewing my notes, I decided to drop and reload. This wasn't the large table that I have; it was only 10 GB or so of compressed data. I was able to cut the load time by 67% by moving the parallel hint from the INSERT section to the SELECT section. Originally it looked like this:
INSERT /*+ append parallel( e, 8 ) */ INTO claims
SELECT
col1,
...,
col10
FROM ext_claims;

COMMIT;
Changing it to this sped things up dramatically
INSERT /*+ append */ INTO claims
SELECT /*+ parallel( e, 8 ) */
col1,
...,
col10
FROM ext_claims e;
After it was complete this time...
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;
SQL>host

[oracle@medicaid ~]$ rman target=sys/testing@testing

Recovery Manager: Release 11.2.0.1.0 - Production on Sat Mar 19 04:30:46 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: TESTING (DBID=89312249)


RMAN> backup database;

Starting backup at 19-MAR-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=91 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
...
...
...
input datafile file number=00001 name=/u01/app/oracle/oradata/TESTING/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/TESTING/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/TESTING/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/TESTING/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/TESTING/users01.dbf
channel ORA_DISK_1: starting piece 1 at 19-MAR-11
channel ORA_DISK_1: finished piece 1 at 19-MAR-11
piece handle=/u01/app/oracle/product/11.2.0/dbhome/dbs/07m7j6h6_1_1 tag=TAG20110319T043341 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 01:05:26
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 19-MAR-11
channel ORA_DISK_1: finished piece 1 at 19-MAR-11
piece handle=/u01/app/oracle/product/11.2.0/dbhome/dbs/08m7jabs_1_1 tag=TAG20110319T043341 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 19-MAR-11
Now I have a good backup. Will continue to do this after anything major changes.

Still need to figure out what is happening; Gary's suggestion below to check out the BLOCK-related init parameters will be a start.
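For my own reference, checking those parameters is a one-liner in SQL*Plus (what they should be set to is the real question):
SQL> show parameter db_block_check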

Thursday, March 17, 2011

Random Thought: #19

Nickelodeon
I am lucky enough to get to watch The Fresh Beat Band every single day on Nickelodeon. It's one of katezilla's faves, after Dora the Explorer and Diego.

Anyway, one of the actors, Yvette Gonzalez-Nacer, is obviously very talented. She has an operatic (sp -10?) voice which overpowers her colleagues on the show. Doubtful it is intentional, she's just that much better.

Using the Google Machine, she's had a fairly impressive career. What I can't figure out is how she ended up on this show. Nick does a good job at putting together shows with talented people (whether you agree with their...umm...whatever) who seem to be fairly normal (save Britney Spears' sister, but that should be obvious now (and holy schnikeys, I know that...OMG...WTF???)).

Is it a good career move for her? Does she do it because she cares about the kids? Sadly, I ponder these things.

Percentages
How would you store a percentage? Would you store it as 40, 50 or 60? Or .4, .5 and .6? I opt for the latter. Why? Well, it's a decimal, it's a fraction of 1, right? If you store it as a whole number, what happens in 3 years after you are gone and the new person doesn't know? It's not obvious, is it?
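A quick illustration of why the fraction-of-1 form wins for me: the math just works, with no magic divide-by-100 for the next person to remember (table and column names here are made up):
-- stored as 0.065, the rate multiplies cleanly
SELECT order_total * tax_rate AS tax_amount FROM orders;

-- stored as 6.5, every consumer has to remember the / 100
SELECT order_total * ( tax_rate / 100 ) AS tax_amount FROM orders;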

OBIEE
I ran into an OBIEE project for a single business model that consisted of 2 separate design patterns, horizontal, and vertical fragmentation. That was fun. It was supposed to be easy, but it was not. I'd like to re-live that one soon. Maybe.

Monday, March 14, 2011

Fun with SQL - Part 578

by Enrique Aviles [twitter]
The first guest post in a while. I met Mr. Aviles via twitter (yay for Twitter!) some time back. We got to meet in real life at our local SOUG meeting when Tanel Põder came to town. He's also graced the front page for a while wearing his oraclenerd t-shirt (which is now his avatar on Twitter).

Part of my job consists of tuning SQL queries. Opportunities abound and present themselves in many different scenarios. A few weeks ago I was assigned a trouble ticket. I was the lucky winner because, after some investigation by the developers, the culprit was identified as an SQL issue that was causing a stack overflow in Hibernate. For those unfamiliar with it, Hibernate is an Object Relational Mapping library for Java (more info here). The stack overflow occurred during string manipulations, probably during the code generation phase. I didn't have to do much investigation of my own because the developers indicated the offending query was composed of 60 IN clauses, each clause containing 1000 elements. The query resembled something like this:
SELECT COL1, COL2, COL3
FROM TABLE
WHERE UNIQUE_ID IN (1, 2, 3,..., 1000)
OR UNIQUE_ID IN (1001, 1002, 1003,..., 2000)
OR UNIQUE_ID IN (2001, 2002, 2003,..., 3000)
...
OR UNIQUE_ID IN (59001, 59002, 59003,..., 60000)
I was happy to see the query generation failed before it reached the database. It is usually better to have the application fail with a glaring error during unit tests rather than allowing a potential failure slip into production. Fortunately we test against a database that’s a slightly older copy of the production database so we execute all our queries using real data volumes, not just against artificial tiny development databases. Had we only tested on a development database, this issue would have made it to production with two possible outcomes:
- The application fails with the Hibernate stack overflow error that triggered the trouble ticket.
- Given a smaller set of unique ids, Hibernate successfully generates the query and sends it to the database.

Since the query never reached the database, it is hard to tell how well it would have performed.

For a handful of IN clauses, it is safe to assume the query would have run in a reasonable amount of time but 60 IN clauses could have possibly taken an unacceptable amount of time. Even if the database is able to process 60 IN clauses with no impact to the user, generating such statements as
UNIQUE_ID IN (1001, 1002,..., 2000)
UNIQUE_ID IN (2001, 2002,..., 3000)
UNIQUE_ID IN (59001, 59002, 59003,..., 60000)
in a query is not a good idea for various reasons. In the absence of bind variables, a lot of CPU will be consumed hard parsing SQL statements since potentially every generated query becomes a brand new query sent to the database, even if the number of IN clauses remains static. Using bind variables lessens the load but doesn't eliminate hard parsing as much as it should because queries with a different number of IN clauses become unique, thus unshareable. Both cases fill the shared pool with unshareable SQL, although the absence of bind variables would fill it faster. This is a waste of CPU cycles and memory. We can significantly reduce hard parsing and eliminate the risk of generating a huge SQL statement by implementing a temporary table.

Instead of selecting and sending to the application server all values that eventually ended in a series of IN clauses, we could simply insert those values in a temporary table and modify the original query to join the temporary table. The following is a sample temporary table:
CREATE GLOBAL TEMPORARY TABLE IDS_TMP 
(
ID NUMBER
)
ON COMMIT PRESERVE ROWS;
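Loading it is just an INSERT ... SELECT of whatever query produced the IDs in the first place; a sketch (the real driving query isn't shown here, so the source table and filter are made up):
INSERT INTO ids_tmp (id)
SELECT unique_id
  FROM some_source_table
 WHERE some_filter = 'Y';

COMMIT; -- rows survive the commit thanks to ON COMMIT PRESERVE ROWS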
The table was loaded, via an INSERT like the one sketched above, with the same query that returned the IDs used to build the IN clauses. Once the temporary table contained all desired IDs, the application generated the following query:
SELECT COL1, COL2, COL3
FROM TABLE TAB
WHERE EXISTS
(SELECT TMP.ID
FROM IDS_TMP TMP
WHERE TMP.ID = TAB.ID);
Of course, we could have also generated a query with an INNER JOIN or with a single IN clause, but EXISTS worked fine for our case. This is obviously a much shorter SQL text. It remains static regardless of how many IDs are selected since they are hidden in the temporary table. Additionally, there is no need to send all IDs to the application server, spend time building a long character string, and send them back as part of a SQL query. This translates into one hard parse and many executions, so the risk of wasting shared pool space and CPU is eliminated.

The “magic” of temporary tables makes this a safe solution since data inserted is only available per session. User A cannot select data inserted by user B and vice versa. Temporary data disappears automatically after the session disconnects so there is no need to explicitly DELETE or TRUNCATE the table. At the end of the day, this proved to be a suitable solution.

The fact that IN clauses only support 1000 elements was seen as a limitation. If the database imposes limits, chances are that defeating them with apparently clever solutions will result in slow-performing queries and will increase the risk of wasting system resources. When faced with similar limitations, we should ask ourselves if there are other means of achieving the same result that don't require circumventing what was considered a database shortcoming.

Friday, March 11, 2011

What is Big Data?

Had a question today about Big Data and I didn't really know how to define it.

Is it defined by volume? Size?

Structured, unstructured (content management systems)?

Naturally, the first thing I did was ask the Twitter Machine. It's like sitting in a giant room and yelling out questions to a wide variety of people; in other words, fun.

From the one year older John Piwowar:



Definitely agree with that sentiment. It is just ones and zeros. Lots of it, apparently.

I think my question goes deeper though. I have a pretty good understanding of structured data, i.e. that used by the majority of business applications. You know, you define a data model, it changes over time, no big deal. But...and it's a giant but, what about all that unstructured stuff?

Next up, future beer drinking buddy and obvious fellow smart-ass.



A couple of non-believers stroll in...







I sometimes wonder, do we, as in database people, who work in the data information business, somehow miss the boat on things like Big Data? I can't imagine that's possible. We know the importance of data...is there a disconnect? Is it just me? That ain't a swipe at the above people either.

Gary chimes in with a more...philosophical answer? I've read Gary's stuff for a few years and once in a while, he just goes way over my head...



Ted comes in and follows up to me:



There were definitely more, but I think these were the highlights for me.

Then it was off to the Google Machine. I love The Google.

Wikipedia to the rescue:

A Definition:
Big Data is a term applied to data sets whose size is beyond the ability of commonly used software tools to capture, manage, and process the data within a tolerable elapsed time. Big data sizes are a constantly moving target currently ranging from a few dozen terabytes to many petabytes of data in a single data set.

An Example:
Examples include web logs, RFID, sensor networks, social networks, Internet text and documents, Internet search indexing, call detail records, genomics, astronomy, biological research, military surveillance, medical records, photography archives, video archives, and large scale eCommerce.

Definitely coming together now...and we're back to John's point, it's just ones and zeros.

I'll let you use The Google Machine yourself as well. Lots of good articles.

For a really quick (4-minute) primer, here is some dude from O'Reilly:


Thanks to everyone for their responses on Twitter. They definitely helped to clear up some of my confusion.

Thursday, March 10, 2011

Follow Friday

I know this is a Twitter tradition and I know it is just a tad early, but here is my version of Follow Friday.

First up, Enrique Aviles. I met Mr. Aviles at Tanel Põder's visit back in January in Tampa. We had corresponded a bit prior via Twitter. For the longest time, I read his Twitter handle as "Evil Les 94"; I hope that's not me projecting. Super nice guy and apparently has some fun horror stories he promises to share.



Next up, Don Seiler. Don and I have been verbal sparring partners on Twitter for a couple of years now. We appear to have a similar sense of humor: smart-ass. That might be a company-wide trait at Pythian; I seem to encounter a lot of smart-asses from that particular organization. :)



Finally, Lisa Dobson. My first introduction to Mrs. Dobson was through Tom Kyte's blog a few years ago. Since I can't seem to find the exact post that I think I remember, I link to this one where Mr. Kyte talks about introducing her at her presentation for Newbie DBAs.

Now, I'm not one to spread rumors, so don't mention this to anyone, but rumor has it, she "bitch slapped" Tom Kyte at some past event. This is wholly unsubstantiated, so don't hold me to it.



You may see a trend in my Follow Friday suggestions, you may not. If you do, go here, all proceeds go to katezilla (well, technically, to us, her parents, to offset her costs. Though she is a girl and girls are supposed to cost more...). :p

OBIEE 11g: Fun With Folders!

I've had a bit of time to play with 11g lately. One thing that really jumps out is the directory structure.

In 10g, it was relatively easy. You had 2 folders created at the root, OracleBI and OracleBIData. Most of the server-specific stuff (logs, binaries, etc.) is in OracleBI. Pretty easy to navigate. With the introduction of 11g and the integration with WebLogic, the directory structure has exploded. With that, the relative ease of finding files for the new guy or gal is gone.

If you want a good overview of the current directory structure, read OBIEE 11g - Demystifying the directory structure by Jainnys (couldn't find any more information).

Now, the fun part.

The following group of folders is repeated no less than 7 times:

OracleBIClusterControllerComponent
OracleBIJavaHostComponent
OracleBIPresentationServicesComponent
OracleBISchedulerComponent
OracleBIServerComponent

Here are their locations:

\Oracle\middleware\instances\instance1

\Oracle\middleware\instances\instance1\bifoundation

Note that this one is only one level below the previous.

\Oracle\middleware\instances\instance1\tmp

\Oracle\middleware\instances\instance1\config

Ditto for those.

\Oracle\middleware\instances\instance1\diagnostics\logs

Two levels below that top one...

Let's switch it up a little; go back a couple of levels and start drilling again.

\Oracle\middleware\Oracle_BI1\bifoundation\admin\config

\Oracle\middleware\Oracle_BI1\bifoundation\admin\provisioning

Just a fun little observation. I would imagine there are legit reasons for this kind of directory structure...but I certainly don't understand it yet.

Tuesday, March 8, 2011

Detailed Design Documentation

Today I had to present my detailed design document for an OBIEE project.

The response was...not great. Not even good. So bad, in fact, my boss had to call me afterwards. Fortunately for me, he's a good boss and remembers the good with the bad.

I'm not sure if it is evident, but I'm not afraid to fail. I've done it before, I did it today, I'll do it again in the future. It's just life. As much as I would like to believe (and as much as I tell my wife and kids), I am not perfect. Never have been, never will be.

What should be evident though, through 686 posts here and the untold pages of documentation I have written for wikis, email, groups, blog comments, or some other medium, is that I don't have a problem with documentation. I like it. I find a lot of value in it. I usually take the lead on it if nothing exists.

Not too long ago I talked about Design Documentation, but that was more of the high level kind. The kind I was expected to deliver today was to be much more in depth. How much more so? Well...

Physical Layer
Each table and the columns that were to be used. This includes aliasing these tables and then their respective joins.

No, that's nothing terrible to ask. I don't argue that.

BMM (Business Model Mapping) Layer

1. I was to have every column defined (both sourced and calculated).
2. Each and every dimension and fact table, including all the Logical Table Sources (LTS) for each of those, including their joins (INNER/OUTER/FULL OUTER), Content, Fragmentation and Leveling defined.
3. The hierarchies (aka dimensions, but not of the table kind) defined by level with all the accompanying columns at each level.

This section gave me the most problems today. While I appreciate the high level design, I felt this encroached a bit on actual development. The more I think about it though, the more reasonable it sounds.

It is a departure for me though.

I tend to develop and design in tandem. Documenting everything up front feels...dirty to me.

Perhaps it's the fact that I was using Word? I mean, seriously, it's 2011. Why are we using binary files? Data should be free!

I prefer wikis. I prefer email over Word. I've used Atlassian products before and think they are great.

Where I have a problem is this (seeming) waterfall approach. How do you know what you don't know?

Then I'm back at the format of the documentation, Word, stored in...ugh...SharePoint. I don't believe I have ever met a less desirable solution. SharePoint is the suck.

Documentation should live and breathe. I have never seen that happen with a Word document. Never. I consider myself somewhat disciplined, and I couldn't fathom maintaining this. Let's make it more narrative (wiki). Let's have a place where the latest and greatest lives, with a possible story of how it got that way. The decisions were made because of A, B and C. You miss most, if not all, of that in the design documents.

Perhaps it's not even Word, or SharePoint. Perhaps it is the process. I'm probably not asking the right questions yet.

To sum it up, I failed today. I have to follow the guidelines set forth...but I can try to influence them...and of course you know I will.

Thursday, March 3, 2011

VirtualBox and Shared Folders

I have had problems with Shared Folders in the past, read here and here, for just 2 examples.

For my latest project, I decided to skip Shared Folders altogether and just FTP my files to the VM and then load them from a Directory (object). I started to use Telnet and FTP, but was quickly shamed into using ssh and SFTP. Mostly though, I think I was just lazy initially, or time-crunched, or whatever. I didn't want to bother.

I discovered some new functionality today which means I have to share.

Previously, you had to mount the Shared Folders manually, via something along the lines of:
[root@medicaid oracle]# mount -t vboxsf files oracleFiles
That worked like a charm, or so I thought. I started the process to load files and received an Oracle error (something about not being able to write the log file).
[root@medicaid oracle]# ls -la /u01/app
total 24
drwxrwxr-x 6 oracle oinstall 4096 Mar 3 12:46 .
drwxrwxr-x 3 oracle oinstall 4096 Feb 23 15:32 ..
drwxr-xr-x 8 oracle oinstall 4096 Feb 23 17:14 oracle
drwxr-xr-x 2 oracle dba 4096 Mar 3 12:52 oracleDirectory
drwxr-xr-x 1 root root 4096 Mar 3 16:51 oracleFiles
drwxrwx--- 5 oracle oinstall 4096 Feb 23 16:48 oraInventory
OK, let's change that.
[root@medicaid oracle]# chown oracle:dba -R /u01/app/oracleFiles
No errors.
[root@medicaid oracle]# ls -la /u01/app
total 24
drwxrwxr-x 6 oracle oinstall 4096 Mar 3 12:46 .
drwxrwxr-x 3 oracle oinstall 4096 Feb 23 15:32 ..
drwxr-xr-x 8 oracle oinstall 4096 Feb 23 17:14 oracle
drwxr-xr-x 2 oracle dba 4096 Mar 3 12:52 oracleDirectory
drwxr-xr-x 1 root root 4096 Mar 3 16:51 oracleFiles
drwxrwx--- 5 oracle oinstall 4096 Feb 23 16:48 oraInventory
WTF? Tried it again (yes, I know the definition of insanity). No joy.

What's this, a help button? Let's see what it says. From the docs:

Starting with version 4.0, VirtualBox can mount shared folders automatically, at your option. If automatic mounting is enabled for a specific shared folder, the Guest Additions will automatically mount that folder as soon as a user logs into the guest OS. The details depend on the guest OS type:

With Windows guests, any auto-mounted shared folder will receive its own drive letter (e.g. E:) depending on the free drive letters remaining in the guest.

If there are no free drive letters left, auto-mounting will fail; as a result, the number of auto-mounted shared folders is typically limited to 22 or less with Windows guests.

With Linux guests, auto-mounted shared folders are mounted into the /media directory, along with the prefix sf_. For example, the shared folder myfiles would be mounted to /media/sf_myfiles on Linux and /mnt/sf_myfiles on Solaris.

The guest property /VirtualBox/GuestAdd/SharedFolders/MountPrefix determines the prefix that is used. Change that guest property to a value other than "sf" to change that prefix; see the section called “Guest properties” for details.

Ah, there it is. Neat.
[oracle@medicaid]$ ls -la /media/sf_files
Permission denied
Oh, a note:

Note
Access to auto-mounted shared folders is only granted to the user group vboxsf, which is created by the VirtualBox Guest Additions installer. Hence guest users have to be member of that group to have read/write access or to have read-only access in case the folder is not mapped writable.

That helps.

Based on some feedback in the comments, I am not 100% sure the following command (usermod) is accurate. Until I can research this, please see the comments.
[root@medicaid oracle]#/usr/sbin/usermod -G vboxsf oracle
Exit out of root, still no joy. Issued the id command, and there is my group listed...So what did an ex-Windows user do? Reboot.

That worked. I now have access to these super cool auto-mounted Shared Folders. Thanks VirtualBox!
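(One more note on the usermod caveat above: if the concern raised in the comments is that -G without -a replaces the user's existing supplementary groups, the append form would be something like the following; a sketch, not what I actually ran.)
/usr/sbin/usermod -a -G vboxsf oracle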

Wednesday, March 2, 2011

Why I Virtualize

I made the permanent switch to Ubuntu almost 2 years ago. I haven't looked back.

While I don't believe it is quite ready for prime time, it gets close with every release. By prime time, I of course mean, will I have to support it. I have tried to talk my parents into it, but only half-heartedly; I know I'll have to do more than I already do.

Since I started using Ubuntu I have also been using VirtualBox extensively. Originally, it was just for work, i.e. no one seems to allow the easy integration of Linux distros onto their network. I say that based on just my experience, which is not extensive. There are also certain work-related tools that only run on Windows.

Here's the short list of tools that I use that require Windows:
- GoToMeeting - I love the tool, from a collaboration perspective. I hate that I have to use Windows to use it. Fortunately, I always have 1 or more Windows VMs up and running.
- Neoview Management Tools - this doesn't matter much anymore, but it was one of the original reasons.
- Quicken - I haven't moved to the cloud version yet. I probably will. Soon.

Wow. I think that might be it. Cisco VPN used to be a reason until I found vpnc. That's really not too bad. I could be down to 1, GoToMeeting. I did find that Adobe has a product that is OS independent, pretty sure it uses Flash. Definitely a future consideration...

I don't game, at all, so I have no need for Windows there.

The reason I virtualize? Compartmentalization.

Here's why:
  1. Last year, while on site at a client, my Windows VM got corrupted. I was back up and running within 2 hours. If that had been my host system, I am sure it would have taken much, much longer. I only put the software on the VMs that I need. Nothing more, nothing less. Microsoft Office. Notepad++. WinMerge. SQL Developer. Maybe a few others specific to the client.

    I don't store any data on the VMs, I use Shared Folders and write to my host disks. If I lose one, no big deal.
  2. Single Unit of Work. I can create a VM with an Oracle database, one with OBIEE, one with Subversion and a Wiki (ok, not a single unit of work, whatever). Since the purchase of my new computer, this is my reality. If I lose one of those VMs, I don't lose all the work I put into them. I just rebuild the one I need. Annoying, yes. Catastrophic, no.

    If I built a single system with all of these components, tools, etc, the loss of my computer would be catastrophic, even with the best backup policy in the world.
  3. Finally, there was last night's incident. I have a large VM (12 GB RAM allocated, 4 processors) for this medicaid database I am putting together. I had a 20 GB OS partition. At some point, VirtualBox decides to write that RAM to disk, all 12 GB of it. Space available: 0 (zero) bytes. I couldn't do anything. OpenOffice kept telling me I had no more space to auto-save, had to kill it. This even after I pointed 99% of the settings files away from the OS partition.

    I tried symbolic links. I was successful with the Desktop, but nothing else. I tried, very unsuccessfully, to point /dev/shm, /dev/pts, /var/run and /var/lock to a different location. If I had a better understanding of how the OS works, it might not have been a big deal, but for me...

    Since you can't expand a partition (please don't tell me you can), I decided to just wipe the host OS and re-partition my drives appropriately. Total time* to get everything back up and configured? 3 hours. VirtualBox was easy: pull in the vdi files, create a new VM and voila, everything is back to normal.
Those are my reasons for virtualizing. I suspect those of you who use VMWare, VirtualBox or any other virtualization tool for a living might have different or better reasons. Please share.

* Total Time = Does not include the fact that the first USB Flash Drive I was using didn't work properly. I spent 4+ hours trying to get GParted to work, to no avail. Research pointed to the possibility of using a different one. I went to BestBuy, bought a new one and everything went smoothly. By the way, to add to this frustrating day, my battery died at BestBuy. Awesome.