Thursday, December 11, 2008

Unix command examples


man get manual page on a UNIX command


example: man uniq




cut extract columns of data

example: cut -f -3,5,7-9 -d ' ' infile1 > outfile1
example: ls -l | awk '{print $8}' | cut -d '.' -f -2


-f 2,4-6 field
-c 35-44 character
-d ':' delimiter (default is a tab)
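As a quick self-contained sketch (the colon-delimited lines below are invented sample data, not files from this article), cut can pull a single field out of each line:

```shell
# Pull field 1 from colon-delimited, passwd-style input.
# The sample data is made up for the demo.
printf 'root:x:0\nkaram:x:501\n' | cut -d ':' -f 1
```

The same -f/-d options apply when reading from a file instead of a pipe.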



sort sort lines of a file (Warning: default delimiter is white space/character transition)

example: sort -nr infile1 | more
Linux example: ls -l | sort -nr +4 -5
Linux example: df -k '/b01/oradata/' | sort -nr +1 -2

-n numeric sort
-r reverse sort
-k 3,5 start key
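Note that the `+4 -5` key notation in the Linux examples above is the old pre-POSIX syntax; current sort versions use -k instead. A minimal sketch with invented data:

```shell
# Numeric, reverse sort on the 2nd whitespace-delimited field.
# -k 2,2 limits the sort key to that field alone.
printf 'alpha 30\nbeta 7\ngamma 100\n' | sort -nr -k 2,2
```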


wc count lines, words, and characters in a file

example: wc -l infile1

-l count lines
-w count words
-c count characters


paste reattach columns of data

example: paste infile1 infile2 > outfile2
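A runnable sketch using two throwaway files (names invented for the demo):

```shell
# paste glues the files together line by line, separated by a tab.
printf '1\n2\n' > /tmp/paste_demo_a
printf 'x\ny\n' > /tmp/paste_demo_b
paste /tmp/paste_demo_a /tmp/paste_demo_b
rm -f /tmp/paste_demo_a /tmp/paste_demo_b
```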


cat concatenate files together

example: cat infile1 infile2 > outfile2

-n number lines
-vet show non-printing characters (good for finding problems)


uniq remove duplicate lines (normally from a sorted file)

example: sort infile1 | uniq -c > outfile2

-c show count of lines
-d only show duplicate lines
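A self-contained sketch (input invented): sort first, since uniq only collapses adjacent duplicates; the awk stage just strips the leading spaces that uniq -c prints.

```shell
# Count occurrences of each distinct line.
printf 'ash\noak\nash\n' | sort | uniq -c | awk '{print $1, $2}'
```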


join perform a relational join on two files

example: join -1 1 -2 3 infile1 infile2 > outfile1

-1 FIELD join field of infile1
-2 FIELD join field of infile2
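A runnable sketch with two invented files; note that join expects both inputs to be sorted on the join field:

```shell
# Join river names to continents on field 1 of each file.
printf '1 Nile\n2 Congo\n' > /tmp/join_demo_a
printf '1 Africa\n2 Africa\n' > /tmp/join_demo_b
join -1 1 -2 1 /tmp/join_demo_a /tmp/join_demo_b
rm -f /tmp/join_demo_a /tmp/join_demo_b
```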


cmp compare two files

example: cmp infile1 infile2


diff or diff3 compare 2 or 3 files - show differences

example: diff infile1 infile2 | more
example: diff3 infile1 infile2 infile3 > outfile1


head extract lines from a file counting from the beginning

example: head -100 infile1 > outfile1


tail extract lines from a file counting from the end

example: tail +2 infile1 > outfile1

-n count from end of file (n is an integer)
+n count from beginning of file (n is an integer)
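The bare `+n` form shown above is historical syntax; POSIX and current GNU tail spell it `-n +n`. A sketch that skips a header line:

```shell
# Print everything from line 2 onward.
printf 'header\nrow1\nrow2\n' | tail -n +2
```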


dos2unix convert dos-based characters to UNIX format (the file is
overwritten).

example: dos2unix infile1


tr translate characters - example shows replacement of spaces
with newline character

example: tr " " "[\012*]" < infile1 > outfile
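A runnable equivalent (here '\n' stands in for the octal \012 escape used above):

```shell
# Break a space-separated line into one word per line.
printf 'one two three' | tr ' ' '\n'
```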


grep extract lines from a file based on search strings and
regular expressions

example: grep 'Basin1' infile1 > outfile2
example: grep -E '15:20|15:01' infile1 | more


sed search and replace parts of a file based on regular
expressions

example: sed -e 's/450/45/g' infile1 > outfile3


Regular Expressions

Regular expressions can be used with many programs including ls, grep, sed,
vi, emacs, perl, etc. Be aware that each program has variations on usage.

ls examples:

ls Data*.txt
ls Data4[5-9].ps list ps files beginning with Data numbered 45-49


sed examples: (these are the regex part of the sed command only)

s/450/45/g search for '450' replace with '45' everywhere
s/99/-9999\.00/g search for all '99' replace with '-9999.00'
s/Basin[0-9]//g remove the word Basin followed by a single digit
s/^12/12XX/ search for '12' at the beginning of a line,
insert XX
s/Basin$// remove the word Basin if it is at the end of
the line.
s/^Basin$// remove the word Basin if it is the only word on
the line.
s/[cC]/100/g search for 'c' or 'C' replace with 100

45,$s/\([0-9][0-9]\)\.\([0-9][0-9]\)/\2\.\1/g
on lines 45 to the end of file, search for two digits
followed by a '.' followed by two digits. replace
with the digit pairs reversed.

2,$s/,\([^,]*\),/,\"\1\",/
on all lines except the first, search for a comma,
followed by any text not containing a comma, followed by
a comma. replace with the found text surrounded by
double quotes.

s/\([0-9][0-9]\):\([0-9][0-9]\):\([0-9][0-9][0-9][0-9]\)/Year = \3, Month = \2, Day = \1/
search for 2 digits, followed by a colon, followed by 2 digits,
followed by a colon, followed by 4 digits. replace with
text plus values in a different order.
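The last rewrite can be tried directly on one invented value:

```shell
# DD:MM:YYYY -> labelled fields, using backreferences \1..\3.
echo '11:12:2008' | sed 's/\([0-9][0-9]\):\([0-9][0-9]\):\([0-9][0-9][0-9][0-9]\)/Year = \3, Month = \2, Day = \1/'
```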


Pipes, standard input, standard output:

Standard output, ">", places the results of a command into the file named
after the ">". A new file will be written (an old file with the same name
will be removed). In order to append to an existing file use ">>".

Pipes allow you to connect multiple commands together to form a data stream.
For example, to count the number of times the string "Nile" occurs in the
3rd column of a file run this:

cut -f 3 infile1 | sort | uniq -c | grep 'Nile'

or do this:

cut -f 3 infile1 | grep 'Nile' | wc -l
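A self-contained version of the second pipeline, with a throwaway tab-delimited file standing in for infile1:

```shell
# Build sample data, then count 'Nile' in column 3.
printf 'a\tb\tNile\nc\td\tCongo\ne\tf\tNile\n' > /tmp/rivers_demo
cut -f 3 /tmp/rivers_demo | grep 'Nile' | wc -l
rm -f /tmp/rivers_demo
```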


From a global STN Attributes data set (tab delimited):

- extract all North American basins draining into the Atlantic Ocean
- select only columns 2,3,4,5,11,12,13, and 17
- replace all missing data values (either -99 or -999) with -9999.0
- remove duplicate lines
- sort by the first column
- number all lines sequentially
- save to a new file

grep 'North America' STNAttributes.txt | grep 'Atlantic Ocean' | \
cut -f 2-5,11-13,17 | sed -e 's/-99\|-999/-9999\.0/g' | \
sort | uniq | cat -n > NewSTNAttributes.txt


Wednesday, November 19, 2008

FIND command examples...

Find command examples:
To use the find command, at the Unix prompt, enter:

find . -name "pattern" -print

Replace "pattern" with a filename or matching expression, such as "*.txt". (Leave the double quotes in.)
Options:
The general form of the command is:

find (starting directory) (matching criteria and actions)

The find command will begin looking in the starting directory you specify and proceed to search through all accessible subdirectories. You may specify more than one starting directory for searching.

You have several options for matching criteria:
-atime n File was accessed n days ago
-mtime n File was modified n days ago
-size +nM File is larger than n Mbytes
-type c Specifies file type: f=regular file, d=directory
-name nam The filename is nam
-user usr The file's owner is usr
-group grp The file's group owner is grp
-perm p The file's access mode is p (where p is an octal mode, e.g. 644)

You can use + (plus) and - (minus) modifiers with the atime, mtime, and size criteria to increase their usefulness, for example:

-mtime +7 Matches files modified more than seven days ago
-atime -2 Matches files accessed less than two days ago
-size +100M Matches files larger than 100 MB


By default, multiple options are joined by "and". You may specify "or" with the -o flag and the use of grouped parentheses. To match all files modified more than 7 days ago and accessed more than 30 days ago, use:
\( -mtime +7 -o -atime +30 \)

You may specify "not" with an exclamation point. To match all files ending in .txt except the file notme.txt, use:
\! -name notme.txt -name \*.txt
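A runnable sketch of that negation in a throwaway directory (the paths and file names are invented for the demo):

```shell
# Match every *.txt file except notme.txt.
mkdir -p /tmp/find_demo && cd /tmp/find_demo
touch a.txt b.txt notme.txt c.log
find . \! -name notme.txt -name '*.txt' | sort
cd / && rm -rf /tmp/find_demo
```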

You can specify the following actions for the list of files that the find command locates:
-print Display pathnames of matching files.
-exec cmd Execute command cmd on a file.
-ok cmd Prompt before executing the command cmd on a file.

Executed commands must end with \; (a backslash and semi-colon) and may use {} (curly braces) as a placeholder for each file that the find command locates. For example, for a long listing of each file found, use:
-exec ls -l {} \;

Matching criteria and actions may appear in any order and are evaluated from left to right.

Full examples

  • To find and report all C language source code files starting at the current directory, enter:

$find . -name \*.c -print

  • To report all files starting in the directories /mydir1 and /mydir2 larger than 2 Mbytes and that have not been accessed in over 30 days, enter:

$find /mydir1 /mydir2 -size +2M -atime +30 -print

  • To remove (with prompting) all files starting in the /mydir directory that have not been accessed in over 100 days, enter:

$find /mydir -atime +100 -ok rm {} \;

  • To show a long listing starting in /mydir of files not modified in over 20 days or not accessed in over 40 days, enter:

$find /mydir \( -mtime +20 -o -atime +40 \) -exec ls -l {} \;

  • To list and remove all regular files named core starting in the directory /prog that are larger than 500KB, enter:

$find /prog -type f -name core -size +500k -print -exec rm {} \;

  • To search for a file named foo across the whole system, specify the root directory (/) as the starting point. If you don't run this command as root, find will display an error message for each directory on which you don't have read permission. This can be a lot of messages, and the matching files that are found may scroll right off your screen. A good way to deal with this problem is to redirect the error messages so you don't have to see them at all:

$find / -name foo 2>/dev/null

Thursday, November 13, 2008

Debug plsql procedure/function in TOAD when executed by external application

Debug plsql procedure/function in TOAD when executed by external application
Step Over: Execute one line of code at a time without stepping into any called procedures or functions.
Trace Into: Execute one line of code at a time, stepping into any called procedure or function and executing its code one line at a time as well.
Trace Out: Return execution to the calling routine, stopping on the next line of code after the call statement.
Breakpoint: Stop execution at the line with the breakpoint.
Add Watch: Watches let you view the contents of specific variables.


For tutorials on google:

Toad for Oracle Tutorials - debug

Step 0:
grant DEBUG CONNECT SESSION to system;

revoke DEBUG CONNECT SESSION from system;

Step 1:

Anonymous Block
-- run this in your client session
-- first block gets the debug session ready
DECLARE
id VARCHAR2(255);
BEGIN
id := DBMS_DEBUG.INITIALIZE('TOAD_EXAMPLE');
DBMS_DEBUG.DEBUG_ON;
END;
/

Step 2:

--this fires off your stored pl/sql object so it can be debugged.
BEGIN
KARAM.LOOP_INSERT;
DBMS_DEBUG.DEBUG_OFF;
END;
/

Step 3:

Now attach session in TOAD

Thursday, August 28, 2008

check and set stty setting

check and set stty setting



UNIX$ stty -a


speed 38400 baud; line = 0;
rows = 45; columns = 126
min = 1; time = 1;
intr = ^C; quit = ^\; erase = ^H; kill = ^U
eof = ^D; eol = ^@; eol2 = ^@; swtch = ^@
stop = ^S; start = ^Q; susp = ^Z; dsusp = ^Y
werase = ^W; lnext = ^V
-parenb -parodd cs8 -cstopb hupcl cread -clocal -loblk -crts
-ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl -iuclc
ixon -ixany -ixoff imaxbel -rtsxoff -ctsxon -ienqak
isig icanon -iexten -xcase echo echoe echok -echonl -noflsh
echoctl -echoprt echoke -flusho -pendin
opost -olcuc onlcr -ocrnl -onocr -onlret -ofill -ofdel -tostop tab3


LINUX$ stty -a


speed 38400 baud; rows 45; columns 126; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = ; eol2 = ; swtch = ; start = ^Q; stop = ^S;
susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke


LINUX$ stty erase ^h


Thursday, July 24, 2008

How to: Linux/UNIX Delete or Remove Files With Inode Number

Sometimes we accidentally create a file with special characters in its name, and it's hard to remove such a file using the rm command.

  • Find out the inode number corresponding to the file name (:q)

    $ ls -il :q

    52155 -rw-rw-r-- 1 oracle dba 2310 Jul 24 14:52 :q
  • Use find command to remove file:
    find . -inum 52155 -exec rm -i {} \;

Wednesday, July 23, 2008

Blog Tips

Blog Tips

As you may have noticed by now, the codes are wrapped in the less-than (<) and greater-than (>) signs. The moment these are posted, they will be interpreted as code and will trigger an action by the browser. Should you want to display these as part of the text, use their character entities instead.

"    &quot;
&    &amp;
<    &lt;
>    &gt;
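If you prepare posts from a shell script, one hedged way to make that substitution is with sed; '&' must be converted first, and a literal '&' in a sed replacement has to be escaped as '\&':

```shell
# Escape the four special characters into their HTML entities.
printf '<a href="x">&</a>' \
  | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g' -e 's/"/\&quot;/g'
```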


Monday, July 21, 2008

Hiding Passwords From UNIX ps Command

Hiding Passwords From UNIX ps Command

When running an SQL script from the Unix OS, the Unix ps (process status) command can show the Oracle userid and password of the sqlplus session.

To get around this problem, consider using a pipe in the script to pass the password into sqlplus. Below are examples of piping in the password (userid system) for sqlplus, import, and export.

$ echo system_password | sqlplus -s system @MYSCRIPT
$ echo system_password | imp system file=myfile.dmp tables=mytable
$ echo system_password | exp system file=myfile.dmp tables=mytable

Alternatively, run the following commands:

$ sqlplus <<EOF
SYSTEM/system_password
SELECT ...
EXIT
EOF

alert file location in 11g

alert file location in 11g

Starting from 11g all the diagnostic data (traces, dumps, the alert log) will go under Automatic Diagnostic Repository.

ADR home would be in the following location:

ADR_BASE/diag/product_type/product_id/instance_id

where

product_type --> rdbms
product_id --> database name (my database name t801)
instance_id --> SID (my instance name t801)

ADR Home Subdirectories:

So Alert file location will be under following directory:

ADR_BASE/diag/rdbms/t801/t801/alert

If the ORACLE_BASE=/u01/app/oracle then alert file location would be:

/u01/app/oracle/diag/rdbms/t801/t801/alert

DIAGNOSTIC_DEST initialization parameter location would be:

DIAGNOSTIC_DEST=/u01/app/oracle/

For more detail, read the Oracle documentation.

Friday, July 18, 2008

Find processor information from different UNIX flavor

Find processor information from different UNIX flavor

  • On HP-UX
    You can use the info from the 'model' command to get the processor type (PA-RISC or Itanium) like so:

    Unix1$ grep -i $(model | cut -d"/" -f3) /usr/sam/lib/mo/sched.models

    which tells us that the box has a PA8700 processor type (PA-RISC)

    To get the speed, you can do:
    Unix1$ echo 'itick_per_usec/D' | adb /stand/vmunix /dev/kmem | grep itick_per_usec | tail -1

    which tells us it's a 750MHz processor

    To get whether the kernel is running in 32 or 64bit mode you can do:
    (Unix1:oracle)# getconf KERNEL_BITS
    64
    which says 64-bit mode

Find physical memory on Unix/Linux systems

HP-UX: Find physical memory
For the actual physical RAM, try this:
#echo "selclass qualifier memory;info;wait;infolog"|cstm

The above command will show how many memory slots there are and how much physical memory is plugged into each slot, in MB.

#dmesg | grep Phy
This output will show your memory in KB.

But I trust the first command (the one from cstm) more.

Thursday, July 17, 2008

AWR manual snapshot

AWR manual snapshot

To do stress testing or troubleshoot a performance-related issue, you can take a manual AWR snapshot.

AWR manual snapshot:

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

Friday, July 11, 2008

Using Putty PSCP and PSFTP on Windows

Using Putty PSCP and PSFTP on Windows
  • Using PSCP on Windows


  • C:\> cd "C:\Program Files\PuTTY"

    C:\Program Files\PuTTY> pscp -h # -h gives a short help listing
    ...short help listing displays here...

    C:\Program Files\PuTTY> pscp -P 2003 -l alleni99 \
    net2003.idallen.ca:.bashrc foo.txt
    ...you may be asked to accept the host key here (say yes)...
    alleni99@net2003.idallen.ca's password:
    ...file transfers ".bashrc" to "foo.txt" in the current directory...

    Remember to use the -P option and use -l to set your own userid.

  • Using PSFTP on Windows


C:\> cd "C:\Program Files\PuTTY"

C:\Program Files\PuTTY> psftp -h # -h gives a short help listing
...short help listing displays here...

C:\Program Files\PuTTY> psftp -P 2003 -l alleni99 net2003.idallen.ca
...you may be asked to accept the host key here (say yes)...
alleni99@net2003.idallen.ca's password:
Remote working directory is /home/alleni99
psftp> help
...short help listing displays here...
psftp> ls
...listing of directory displays here...
psftp> get .bashrc foo.txt
remote:/home/alleni99/.bashrc => local:foo.txt
psftp> quit
...file "foo.txt" is now in the current directory...

Remember to use the -P option and use -l to set your own userid when
using the PuTTY version of PSFTP.

Note that the options to the PuTTY Windows version of SFTP (named
PSFTP) are not the same as the options to the Unix/Linux version
of SFTP. In particular, the option "-P" has different meanings!



################################### My examples ##########################################################



************* windows:-**********************
From desktop to TESTNET database server:
pscp -P 2222 -l oracle swingbench230376.zip localhost:swingbench230376.zip

pscp -P 2222 -l oracle jre-6u7-solaris-sparc.sh localhost:jre-6u7-solaris-sparc.sh

pscp -P 2222 -l oracle jre-6u7-solaris-sparcv9.sh localhost:jre-6u7-solaris-sparcv9.sh

From TESTNET database server to desktop:
pscp -P 2222 -l oracle localhost:trap_test.ksh trap_test.ksh


C:\Program Files\Putty>pscp -P 2222 -l oracle swingbench230376.zip localhost:swingbench230376.zip
Using keyboard-interactive authentication.
Password:
swingbench230376.zip | 9005 kB | 1500.9 kB/s | ETA: 00:00:00 | 100%



If the tunnel is not set up correctly, we will get the following message:

C:\Program Files\PuTTY>pscp -P 2222 -l oracle swingbench230376.zip localhost:swingbench230376.zip
Fatal: Network error: Connection refused


Thursday, July 10, 2008

Understanding Unix AWK

Understanding Unix AWK

awk is field-oriented and ideal for manipulating database or comma-delimited files
  1. awk '{ print $1 }' address_book
  2. awk -F, '{ print $1 }' address_book
  3. awk -F, '{ print $1 " " $2 " " $8 }' address_book > address_book.bck
  4. awk -F, '/KARAM/{ print $1 " " $2 " " $8 }' address_book > address_book.bck
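Example 2 can be tried against a tiny invented address_book (the path and contents below are made up for the demo):

```shell
# Print the first comma-delimited field of each record.
printf 'KARAM,Singh,555-0101\nJones,Tom,555-0102\n' > /tmp/address_book_demo
awk -F, '{ print $1 }' /tmp/address_book_demo
rm -f /tmp/address_book_demo
```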

Understanding Unix ENVIRONMENT

Understanding the Unix ENVIRONMENT at login.

When you log in to a UNIX environment, keep in mind that the following configuration files are generally run in a specified order.

  • Systemwide configuration files (such as /etc/profile) run first upon login if they exist.
  • Configuration files specific to your unix account run next if they are available (such as ~/.profile and ~/.kshrc)
  • This means that personal configuration files override the systemwide ones

Wednesday, July 9, 2008

Duplicate Database on a different server using RMAN duplicate

Duplicate Database on a different server using RMAN duplicate.


  1. Copy the RMAN backup to the standby server (new server). The backup directory structure should be the same as on the primary side.

  2. Let's assume the password file is already created for the primary database, so make sure that we can connect as the sys user from the new server.
  3. Make sure that the db_file_name_convert and log_file_name_convert parameters are set correctly on the new AUXILIARY database.
  4. Start the new AUXILIARY database in NOMOUNT state.
    unix$ sqlplus / as sysdba
    SQLPLUS> startup nomount

  5. Start RMAN from the AUX side and connect to the RMAN catalog
    unix$ rman
    RMAN> connect catalog rman/rman1@primcatalog

  6. Connect to the target (primary) database as the sys user
    RMAN> connect target sys/password@primary_db


  7. Run the duplicate command
    RMAN> duplicate target database to AUX;

Friday, July 4, 2008

Recovery from Complete Loss (No Controlfile)

Recovery from complete loss (no control file and no pfile or spfile)


  1. Find out the DBID of the lost database. This can be done by checking the control file autobackup location (the DBID appears in the autobackup file name).

  2. $ export ORACLE_SID=TEST

  3. $ sqlplus / as sysdba
  4. SQL> startup force nomount
  5. Restore spfile
    $rman
    rman> set dbid=4885937635
    rman> connect catalog rman/rman@dbcatalog
    rman> restore spfile from autobackup;
    rman> shutdown immediate;
    rman> startup nomount

  6. Restore the control file and mount database from restored control file.
    rman> restore controlfile from autobackup;
    rman> alter database mount;
  7. Restore the datafile and perform media recovery.
    rman> restore database;
    rman> recover database;
    rman> alter database open resetlogs;

Tuesday, July 1, 2008

Making SQLPLUS connection to remote database

Making SQLPLUS connection to remote database:

  1. sqlplus system/manager@test.gdbsi.com (using a TNS connect string)
  2. sqlplus system/manager@//hostname:portname/dbname
  3. sqlplus system/manager@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SID=TEST)))

Drop Oracle Database Manually

Drop Oracle Database Manually:

  1. $export ORACLE_SID=TEST
  2. Connect to SQL*Plus as "/ as SYSDBA" privilege
  3. SQL> startup force mount
  4. SQL> alter system enable RESTRICTED session;
  5. SQL> drop database;

Import Multiple dump files using Data Pump

To import multiple dump files using Data Pump, use substitution variable (%u).

For example:

impdp / DIRECTORY=pump_dir LOGFILE=parallel_import.log DUMPFILE=par_exp%u.dmp PARALLEL=2 TABLE_EXISTS_ACTION=REPLACE

TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
The possible values have the following effects:
  • SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
  • APPEND loads rows from the source and leaves existing rows unchanged.
  • TRUNCATE deletes existing rows and then loads rows from the source.
  • REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
Create an update trigger on the Streams heartbeat table to test HEAVY insert/update activity.

CREATE OR REPLACE TRIGGER KARAM.UPDATE_AFTER_EACH_ROW
AFTER UPDATE
OF STM_UDT_TS
ON KARAM.TDM_STRMHB FOR EACH ROW
DECLARE
v_out_file UTL_FILE.FILE_TYPE;
begin
v_out_file := UTL_FILE.FOPEN('STRMADMIN_DIR','heart_beat.log', 'A');
UTL_FILE.PUT_LINE (v_out_file, 'Updated on ' || to_char(sysdate,'mm-dd-YYYY HH24:MI:SS'));
UTL_FILE.FCLOSE(v_out_file);
EXCEPTION
WHEN OTHERS THEN NULL;
end update_after_each_row;
/
ALTER TRIGGER KARAM.UPDATE_AFTER_EACH_ROW ENABLE;

Also, the trigger will fire for changes applied by Streams only if the following is executed:

SQLPLUS> begin
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY('TEST_SCHEMA','UPDATE_AFTER_EACH_ROW',FALSE);
end;
/




Monday, June 23, 2008

select YES, NO, NULL from a column

select YES, NO, NULL from a column:

decode(decode(:SCAN,'%','%',nvl(detail_schema.SCAN,'NULL')),:SCAN,1)=1

Thursday, June 19, 2008

select null values

select null values:

Today, I faced a strange problem. I have a table with a column which can have YES, NO, or NULL. We want to give the user a choice from the web interface: the user can select either 'YES', 'NO', or NULL.

I can write the where clause easily for the YES or NO values. But I had a hard time figuring out how to select only the null values. Please don't say use IS NULL: I want only one where condition to select either 'YES' or 'NO' or NULL, and we can't use IS NULL there.


Here is the solution finally I found:

decode(scan,:scan,1)=1

Sunday, June 15, 2008

Unix TIPS - Running one scripts at a time

Unix TIPS - Running one scripts at a time


A lockfile can be used when a particular program might be run more than once at the same time and you need to be sure that only one instance of the program can do something (modify some file, access a printer, etc.). To really do this right, the program needs to both test for the lockfile and create it (if it doesn't exist) in one atomic operation. If the test-and-set operation isn't atomic — for instance, if a program tests for the lock file in one command and then creates the lock file in the next command — there's a chance that another user's program could do its test at the precise moment between the first program's (non-atomic) test and set operations. The technique in this article lets you make a lockfile atomically from a shell script.


touch /tmp/xxx

ln -s /tmp/xxx /tmp/lockfile 2>/dev/null
ret=$?
rm /tmp/xxx
if [ $ret -ne 0 ];then
echo lockfile already exists
exit 1
else
echo success
fi

Race conditions:
It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:


if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null;
then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT

critical-section

rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Failed to acquire lockfile: $lockfile."
echo "Held by $(cat $lockfile)"
fi

Another solution: Thanks Power Tools book...

LOCKFILE="/tmp/lockfile"
ACTION_WAIT="0"
MAX_ACTION_WAIT="60"
trap 'rm -f "$LOCKFILE"; exit $?' INT TERM EXIT
until (umask 222; echo "$$" > "$LOCKFILE") 2>/dev/null
do
if [ "$ACTION_WAIT" -ge "$MAX_ACTION_WAIT" ]; then
echo "expiring the lock"
rm -f "$LOCKFILE"
return 1
else
echo "Waiting for current ACTION to finish"
ACTION_WAIT=$(($ACTION_WAIT + 1))
fi
sleep 1
done

Unix TIPS - Send mail from Unix with file attachments

Unix TIPS - Send mail from Unix with file attachments

#!/bin/ksh
# --------------------------------------------------------------------
# Script: unix_mail_withattachments.ksh
# Author: Ravin Maharaj
# Purpose: Use sendmail to e-mail messages from Unix with
# file attachments
# --------------------------------------------------------------------
SUBJ="Send mail from Unix with file attachments"
TO=someone@domain_name
CC=someoneelse_1@domain_name,someoneelse_2@domain_name
(
cat << !
To : ${TO}
Subject : ${SUBJ}
Cc : ${CC}
!
cat << !
HOPE THIS WORKS
This sample E-mail message demonstrates how one can attach files when sending messages with the Unix sendmail utility.
!
uuencode ${file_1} ${file_1}
uuencode ${file_2} ${file_2}
uuencode ${file_3} ${file_3}
) | sendmail -v ${TO} ${CC}

Thursday, June 12, 2008

ERRORSTACK- Oracle Debugging Event

Errorstack is used as a debugging event for the following reasons:
To extract the failing SQL statement

To get the Oracle function call stack (used by Oracle Support) for interpreting the cause of the error (e.g. ORA-4030, ORA-4031). It is also used in database hang scenarios. Refer to my earlier article on database hang.

It also contains the execution plan in some cases, which can be used to diagnose ORA-1652 errors (excessive temp usage).

Errorstack can be set at the system level and also at the session level.
e.g. to set errorstack for ORA-1652, the syntax will be

alter system set events '1652 trace name errorstack level 3';

If you want to set it at the session level, then

alter session set events '1652 trace name errorstack level 3';

You can also use oradebug to take an errorstack for a particular process (say ospid 1234) as below:
sqlplus "/ as sysdba"
oradebug setospid 1234
oradebug unlimit
oradebug dump errorstack 3

Setting an Oracle trace event
Another set of parameters that may be useful are events, so I'll discuss events a bit in this subsection. Setting an event means telling Oracle to generate information in the form of a trace file in the context of the event. The trace file is usually located in a directory specified by the initialization parameter USER_DUMP_DEST. By examining the resulting trace file, detailed information about the event traced can be deduced. The general format for an event is:
EVENT = "<event string>"
There are two types of events: session-events and process-events. Process-events are initialized in the parameter file; session-events are initialized with the ALTER SESSION... or ALTER SYSTEM command. When checking for posted events, the Oracle Server first checks for session-events then for process-events.


Oracle trace Event Classes
There are four traceable event classes:
Class 1: “Dump something.” Traces are generated upon so-called unconditioned, immediate events. This is the case when Oracle data has to be dumped; for example, the headers of all redolog files or the contents of the controlfile. These events cannot be set in the init.ora, but must be set using the ALTER SESSION or the DBMS_SESSION.SET_EV() procedure.

Class 2: “Trap on Error.” Setting this class of (error-) events causes Oracle to generate an errorstack every time the event occurs.

Class 3: “Change execution path.” Setting such an event will cause Oracle to change the execution path for some specific Oracle internal code segment. For example, setting event "10269" prevents SMON from doing free-space coalescing.

Class 4: “Trace something.” Events from this class are set to obtain traces that are used for, for example, SQL tuning. A common event is "10046", which will cause Oracle to trace the SQL access path on each SQL-statement.
The "set events" trace command settings
The SET EVENTS command in an init.ora file has generally been placed there at the command of Oracle support to perform specific functions. Usually, these alerts turn on more advanced levels of tracing and error detection than are commonly available. Source 2.6 lists some of the more common events.
The syntax to specify multiple events in the init.ora is:
EVENT="<event 1>:<event 2>:<event 3>"

You can also split the events across multiple lines by using the continuation backslash character (\) at the end of each event and continuing the next event on the next line. For example:

EVENT="<event 1>:\
<event 2>:\
<event 3>"

For example:

EVENT="\
10210 trace name context forever, level 10:\
10211 trace name context forever, level 10:\
10231 trace name context forever, level 10:\
10232 trace name context forever, level 10"

After setting the events in the initialization file, you need to stop and restart the instance. Be sure to check the alert.log and verify that the events are in effect. You can specify almost all EVENT settings at the session level using the ALTER SESSION command or a call to the DBMS_SYSTEM.SET_EV( ) procedure; doing so does not require an instance bounce for the EVENT to take effect.
The alert.log should show the events that are in effect; for example:
event = 10210 trace name context forever, level 10:10211 trace name context forever, level 10:10231 trace name context forever, level 10:10232 trace name context forever, level 10

Example Uses of the EVENT Initialization Parameter
To enable block header and trailer checking to detect corrupt blocks:
event="10210 trace name context forever, level 10" -- for tables
event="10211 trace name context forever, level 10" -- for indexes
event="10210 trace name context forever, level 2" -- data block checking
event="10211 trace name context forever, level 2" -- index block checking
event="10235 trace name context forever, level 1" -- memory heap checking
event="10049 trace name context forever, level 2" -- memory protect cursors
And to go with these, the undocumented parameter setting:
_db_block_cache_protect=TRUE
which will prevent corruption from getting to your disks (at the cost of a database crash).
For tracing of a MAX_CURSORS exceeded error:
event="1000 trace name ERRORSTACK level 3"

To get an error stack related to a SQLNET ORA-03120 error:
event="3120 trace name error stack"
To work around a space leak problem:
event="10262 trace name context forever, level x"
where x is the size of space leak to ignore.
To trace memory shortages:
event="10235 trace name context forever, level 4"
event="600 trace name heapdump, level 4"

To take a shared pool heapdump to track Ora-04031 as the error occurs, set the following event in your init.ora file:
event = "4031 trace name heapdump forever, level 2"
For ORA-04030 errors: Take a dump by setting this event in your INIT file and analyze the trace file. This will clearly pinpoint the problem.
event="4030 trace name errorstack level 3"
The following undocumented SQL statements can be used to obtain information about internal database structures:
* To dump the control file:
alter session set events 'immediate trace name CONTROLF level 10'
* To dump the file headers:
alter session set events 'immediate trace name FILE_HDRS level 10'
* To dump redo log headers:
alter session set events 'immediate trace name REDOHDR level 10'
* To dump the system state:
alter session set events 'immediate trace name SYSTEMSTATE level 10'
* To dump the optimizer statistics whenever a SQL statement is parsed:
alter session set events '10053 trace name context forever'
* To prevent db block corruptions:
event = "10210 trace name context forever, level 10"
event = "10211 trace name context forever, level 10"
event = "10231 trace name context forever, level 10"
* To enable the maximum level of SQL performance monitoring:
event = "10046 trace name context forever, level 12"
* To enable a memory-protect cursor:
event = "10049 trace name context forever, level 2"
* To perform data-block checks:
event = "10210 trace name context forever, level 2"
* To perform index-block checks:
event = "10211 trace name context forever, level 2"
* To perform memory-heap checks:
event = "10235 trace name context forever, level 1"
* To allow 300 bytes memory leak for each connection:
event = "10262 trace name context forever, level 300"
You should be noticing a pattern here for tracing events related to error codes: the first argument in the EVENT is the error code followed by the action you want to take upon receiving the code.
Events at the Session Level
Events are also used as the SESSION level using the ALTER SESSION command or calls to the DBMS_SYSTEM.SET_EV() procedure. The general format for the ALTER SESSION command is:
ALTER SESSION SET EVENTS 'ev_number ev_text level x';
where:
Ev_number is the event number.
Ev_text is any required text (usually "trace name context forever").
x is the required level setting corresponding to the desired action, file, or other required data.
For example, to provide more detailed SQL trace information:
ALTER SESSION SET EVENTS '10046 trace name context forever level NN'
where NN:

1 is same as a regular trace.
4 means also dump bind variables
8 means also dump wait information
12 means dump both bind and wait information

Example Uses of the ALTER SESSION Command to Set EVENT Codes
To coalesce free space in a tablespace pre-version 7.3:
ALTER SESSION SET EVENTS 'immediate trace name coalesce level XX'
where:
XX is the value of ts# from ts$ table for the tablespace.
To coalesce free space in a tablespace defined as temporary:
ALTER SESSION SET EVENTS 'immediate trace name drop_segments level &x';
where:
x is the value for file# from ts$ plus 1.
To get the information out of the db block buffers regarding order of LRU chains:
ALTER SESSION SET EVENTS 'immediate trace name buffers level x';
where:
x is 1-3 for buffer header order or 4-6 for LRU chain order.
To correct transportable tablespace export hanging (reported on 8.1.6, 8.1.7 on HPUX, a known bug):

ALTER SESSION SET EVENTS '10297 trace name context forever, level 1';
To cause "QKA Disable GBY sort elimination". This affects how Oracle will process sorts:
ALTER SESSION SET EVENTS '10119 trace name context forever';

* You can disable the Index FFS using the event 10156. In this case, CBO will lean toward FTS or Index scan.

* You can set the event 10092 if you want to disable the hash joins completely.
It is very easy to see how SMON cleans up rollback entries by using the event 10015. You can use event 10235 to check how the memory manager works internally.

This is an excerpt from Mike Ault, bestselling author of "Oracle 10g Grid and Real Application Clusters".