Tuesday, April 17, 2012

Rename Diskgroup.


1) Unmount Diskgroup on All Nodes.

RACG1@:/home/oracle :+ASM1 $asmcmd -p
ASMCMD [+] > umount RACG

RACG2@:/home/oracle :+ASM2 $asmcmd -p
ASMCMD [+] > umount RACG
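
Equivalently (a minimal alternative sketch, assuming a SYSASM connection to each ASM instance), the diskgroup can be dismounted from SQL*Plus:

SQL> alter diskgroup RACG dismount;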

2) Verify that the diskgroup is no longer mounted on any node.

RACG1@:/dev/oracleasm/disks :+ASM1 $crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRSDG.dg
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
ora.RACG.dg
               OFFLINE OFFLINE      racg1
               OFFLINE OFFLINE      racg2
ora.asm
               ONLINE  ONLINE       racg1                    Started
               ONLINE  ONLINE       racg2                    Started
ora.eons
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
ora.gsd
               OFFLINE OFFLINE      racg1
               OFFLINE OFFLINE      racg2
ora.net1.network
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
ora.ons
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
ora.registry.acfs
               ONLINE  ONLINE       racg1
               ONLINE  ONLINE       racg2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racg2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racg1.vip
      1        ONLINE  ONLINE       racg1
ora.racg2.vip
      1        ONLINE  ONLINE       racg2
ora.scan1.vip
      1        ONLINE  ONLINE       racg2

3) Now let's rename the diskgroup from +RACG to +RACD.

RACG1@:/dev/oracleasm/disks :+ASM1 $renamedg phase=both dgname=RACG newdgname=RACD verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : RACG
         New DG name          : RACD
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RACG newdgname=RACD verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
KFNDG-00407: file not found; arguments: []

Terminating kgfd context 0x2ba7587250a0

- It failed with KFNDG-00407: file not found; arguments: [].
- Let's try again, this time adding asm_diskstring='/dev/oracleasm/disks/*' to the command.

RACG1@:/dev/oracleasm/disks :+ASM1 $renamedg phase=both dgname=RACG newdgname=RACD verbose=true asm_diskstring='/dev/oracleasm/disks/*'
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : RACG
         New DG name          : RACD
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : /dev/oracleasm/disks/*
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RACG newdgname=RACD verbose=true asm_diskstring=/dev/oracleasm/disks/*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DISK2 with disk number:0 and timestamp (32969226 -1320570880)
Identified disk UFS:/dev/oracleasm/disks/DISK3 with disk number:1 and timestamp (32969226 -1320570880)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DISK2 with disk number:0 and timestamp (32969226 -1320570880)
Identified disk UFS:/dev/oracleasm/disks/DISK3 with disk number:1 and timestamp (32969226 -1320570880)
Checking if the diskgroup is mounted
Checking disk number:0
Checking disk number:1
Checking if diskgroup is used by CSS
Generating configuration file..
KFNDG-00305: file not found

Terminating kgfd context 0x2b03e7d4d0a0

- Again it failed, this time with KFNDG-00305: file not found.
- Let's add confirm=true config=/tmp/renamedg to the command.

RACG1@:/dev/oracleasm/disks :+ASM1 $renamedg phase=both dgname=RACG newdgname=RACD confirm=true config=/tmp/renamedg verbose=true asm_diskstring='/dev/oracleasm/disks/*'
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : RACG
         New DG name          : RACD
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : /dev/oracleasm/disks/*
         Confirm            : TRUE
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=RACG newdgname=RACD confirm=true config=/tmp/renamedg verbose=true asm_diskstring=/dev/oracleasm/disks/*
Executing phase 1
Discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DISK2 with disk number:0 and timestamp (32969226 -1320570880)
Identified disk UFS:/dev/oracleasm/disks/DISK3 with disk number:1 and timestamp (32969226 -1320570880)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/oracleasm/disks/*
Identified disk UFS:/dev/oracleasm/disks/DISK2 with disk number:0 and timestamp (32969226 -1320570880)
Identified disk UFS:/dev/oracleasm/disks/DISK3 with disk number:1 and timestamp (32969226 -1320570880)
Checking if the diskgroup is mounted
Checking disk number:0
Checking disk number:1
Checking if diskgroup is used by CSS
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/oracleasm/disks/DISK2
Modifying the header
Looking for /dev/oracleasm/disks/DISK3
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b2b6ce750a0

This time the diskgroup was renamed successfully.

4) Mount Renamed Diskgroup +RACD on All Nodes.

RACG1@:/home/oracle :+ASM1 $asmcmd mount RACD
RACG2@:/home/oracle :+ASM2 $asmcmd mount RACD

5) Confirm on All Nodes.

RACG1@:/home/oracle :+ASM1 $asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      1019      623                0             623              0             N  CRSDG/
MOUNTED  EXTERN  N         512   4096  1048576      2038     1941                0            1941              0             N  RACD/

RACG2@:/home/oracle :+ASM2 $asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      1019      623                0             623              0             N  CRSDG/
MOUNTED  EXTERN  N         512   4096  1048576      2038     1941                0            1941              0             N  RACD/
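
One follow-up worth noting (a hedged suggestion, not part of the run above): mounting +RACD through asmcmd should register a new ora.RACD.dg resource in Clusterware, but the old ora.RACG.dg resource is left behind and can be removed once nothing references the old name. Any database files that still point to +RACG paths would also have to be updated separately.

RACG1@:/home/oracle :+ASM1 $srvctl remove diskgroup -g RACG
RACG1@:/home/oracle :+ASM1 $crsctl stat res ora.RACD.dg -t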

Zero-downtime migration of an Oracle database from 8i to 11g using GoldenGate.

Hi,

A few months back I migrated an Oracle database from 8i to 11gR1 across platforms using GoldenGate V10; only about 3 minutes of downtime was required.
I would like to share an overview of the whole plan, which may be a helpful reference.

Fundamentally, this method works for most migrations that use GoldenGate.
The initial data load from source to target has to be decided.

It could be export/import, an ETL tool, transportable tablespaces, or a backup-based copy.

Note: one challenge we faced was capturing and replicating CLOB data.



We moved a 4 TB Oracle 8i database from New York to North Carolina. The sites were geographically separated but in the same time zone.

Some facts:
  • Data was moved across databases using the export/import utility.
  • FTP was used to move dump files from the source to the target server (NFS would be better).
  • GoldenGate was configured to replicate schema to schema (tables with CLOB data were excluded).
  • Indexes were created on the target database for big tables, or tables subject to heavy transactions, to speed up the replicat; these indexes were dropped after the migration.
  • Big or heavy-transaction tables can be split across several capture processes using the @RANGE function (see the sketch after this list).
  • Triggers were disabled for performance reasons; it is not mandatory to disable them.
  • As the source database was Oracle 8i, supplemental logging (ADD TRANDATA) had to be added for every table on the source database.
  • The HANDLECOLLISIONS parameter must be used on the replicat side.
  • 3 minutes of downtime was achieved.
  • There should be enough space to accommodate the remote trail files on the target server.
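
To make the @RANGE, TRANDATA, and HANDLECOLLISIONS points above concrete, here is a rough sketch (the process names, schema names, trail paths, and credentials are placeholders, not the ones from the actual migration):

GGSCI> DBLOGIN USERID gguser, PASSWORD ggpass
GGSCI> ADD TRANDATA SRCSCHEMA.*

-- ext1.prm: first half of a big table, split by @RANGE
EXTRACT ext1
USERID gguser, PASSWORD ggpass
EXTTRAIL ./dirdat/aa
TABLE SRCSCHEMA.BIGTAB, FILTER (@RANGE (1, 2));

-- ext2.prm: second half of the same table
EXTRACT ext2
USERID gguser, PASSWORD ggpass
EXTTRAIL ./dirdat/ab
TABLE SRCSCHEMA.BIGTAB, FILTER (@RANGE (2, 2));

-- rep1.prm on the target: HANDLECOLLISIONS absorbs rows already loaded by the import
REPLICAT rep1
USERID gguser, PASSWORD ggpass
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAP SRCSCHEMA.*, TARGET TGTSCHEMA.*;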
Below is the step-by-step migration plan from 8i to 11g.

Timeline (Source = New York, 8i; Target = North Carolina, 11g):

Sunday 8:00    - Source: configure GoldenGate capture and datapump processes. Target: configure the GoldenGate replicat processes. Test this configuration in Dev/QA first.
Monday 8:00    - Source: start the capture and datapump processes. Target: do not start the replicat yet. Trail files from capture are transferred by datapump and accumulate on the target server.
Monday 9:00    - Source: start the export from the 8i database (note the SCN when the export starts). Lag for the replicat will increase, but still do not start the replicat.
Tuesday 22:00  - Export completed; start the FTP of the dump files to the target server.
Wednesday 14:00 - FTP completed. Start the import on the target database using the dump files.
Friday 20:00   - Import finished. Indexes were created and triggers disabled after the import finished.
Friday 22:00   - Start the replicat using ATCSN/AFTERCSN and HANDLECOLLISIONS (sketch below); it applies all the changes from the trail files accumulated on the target server until now. Lag for the replicat decreases slowly.
Sunday 10:00   - Replicat has applied all changes from the trail files. Monitor lag for the capture and datapump processes.
Sunday 11:00 (cut-over time) - Wait for a window with the fewest transactions, or lock the database against further logins. When capture shows zero lag, stop capture. When datapump lag is zero, stop datapump. Wait until the replicat has applied all changes from the last trail file (zero lag for the replicat), then stop the replicat. This downtime depends on the transaction volume and the GoldenGate latency between source and target. Drop the temporary indexes and enable all triggers.
Sunday 11:03 (cut-over time) - Redirect the database connection point to the target database. Migration completed.
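
For the Friday 22:00 step, the replicat start looks roughly like this (a sketch; the replicat name and SCN are placeholders, the SCN being the one noted when the export began):

GGSCI> START REPLICAT rep1, AFTERCSN 6488359
GGSCI> LAG REPLICAT rep1

LAG REPLICAT reports how far behind the replicat is; the backlog of trail files accumulated since Monday is applied first, and the lag then drops toward zero.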



We started the capture process to capture all transactions on the source database until the export and import finished.

HANDLECOLLISIONS was used on the replicat side specifically so that we could avoid errors from duplicate transactions.

The cut-over is explained below.

Source side.
Monitor the lag of the capture processes on the source side.
Wait for a period with very few transactions, or alternatively lock the database against further logins. When all capture processes show zero lag, stop all capture processes. Once this is complete, wait until the datapump reaches zero lag, then stop the datapump too.
The idea behind this is to make sure that no transaction occurs after we stop capture.
At this point all transactions have been captured and transmitted to the remote trail files on the target side.

Target side.

Wait until the replicat has applied all changes to the end of the trail file; there should be zero lag for all replicat processes.
Once this is done, stop the replicat.
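
A hedged sketch of this cut-over sequence in GGSCI (process names are placeholders; stop each process only after its LAG report shows zero lag with nothing left to process):

GGSCI> LAG EXTRACT ext1
GGSCI> STOP EXTRACT ext1
GGSCI> LAG EXTRACT pmp1
GGSCI> STOP EXTRACT pmp1
GGSCI> LAG REPLICAT rep1
GGSCI> STOP REPLICAT rep1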


Now change the connection string to the target database, which redirects connections to the new 11g database.


Drop the indexes that were created to boost the replicat processes, and enable all triggers.

----------2nd approach-----------------

  • Configure the GoldenGate extract/pump/replicat.
  • Start the extract and pump (make sure the remote trail file is being generated). You can also take an SCN-based backup with RMAN.
  • Take an export of the whole source database as of that SCN.
  • FTP the dump files to the target database server.
  • Import into the target database.
  • Start the replicat with AFTERCSN.
  • Let the replicat catch up.
  • Once extract/pump/replicat show zero lag and current checkpoints, stop further logins to the source DB.
  • Switch the TNS entry to the target DB.

The whole cut-over time can be estimated, and it strongly depends on the transaction volume and the latency between capture and replicat (in GoldenGate terms). Measure the redo generation and latency at peak time.


Happy Migration.



Tuesday, April 10, 2012

How to "kill Semaphore of Oracle Instance" / "Remove Shared memory of Oracle Instnace"

We had an Oracle instance running, and PMON of the instance was visible.
Connecting to the Oracle database looked normal, but querying V$DATABASE threw an error.

So I took a look at the tail of the alert log: the instance had been cleanly shut down. An attempt to start it up gave the error below.

ORA-01081: cannot start already-running ORACLE - shut it down first

So,

  • PMON is visible,
  • the alert log shows a clean shutdown of the instance last time,
  • yet you cannot even query V$DATABASE or V$INSTANCE.

This whole scenario leads to the conclusion that the semaphores and shared memory of the instance are still allocated.

And here we go.

To see which Oracle semaphores and shared memory segments exist:
ipcs | grep oracle
m  916291602 0xc0a8ed2c --rw-rw----    oracle       dba
m  154730522 0xee8c7308 --rw-rw----    oracle       dba
m   34111516 0x8fe194f8 --rw-rw----    oracle       dba
m  975011880 0xa1b69514 --rw-rw----    oracle       dba
s  237666346 0xe0695054 --ra-ra----    oracle       dba
s 1828749356 0x74616ad0 --ra-ra----    oracle       dba
s 1802698797 0xf9afd36c --ra-ra----    oracle       dba
s 1491337262 0xf40cfa70 --ra-ra----    oracle       dba

To identify which semaphore and shared memory segment belong to which Oracle instance:

echo $ORACLE_SID
VVCBDMQ1
sysresv is the command to find the semaphores attached to an instance; it also shows the shared memory segment.

sysresv
IPC Resources for ORACLE_SID "VVCBDMQ1" :
Shared Memory:
ID              KEY
34111516        0x8fe194f8     
Semaphores:
ID              KEY
237666346       0xe0695054
Oracle Instance not alive for sid "VVCBDMQ1"

To confirm, run the command below, substituting the ID and KEY reported by sysresv:

ipcs -a | grep oracle | grep <ID> | grep <KEY>

ipcs -a | grep oracle | grep 34111516 | grep 0x8fe194f8
ipcs -a | grep oracle | grep 237666346 | grep 0xe0695054

To clean up the semaphore and the shared memory segment:

ipcrm -s 237666346   <- semaphore ID
ipcrm -M 0x8fe194f8  <- shared memory key
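
Alternatively, sysresv itself can clean these up for the ORACLE_SID set in the environment (a hedged note, assuming your Oracle version's sysresv supports the -i option):

sysresv -i    # prompts before removing the IPC resources, and only acts if the instance is not alive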
Check again to confirm it.
sysresv
IPC Resources for ORACLE_SID "VVCBDMQ1" :
Shared Memory
ID              KEY
No shared memory segments used
Semaphores:
ID              KEY
No semaphore resources used
Oracle Instance not alive for sid "VVCBDMQ1"

Monday, April 2, 2012

11GR2 Grid Infrastructure Processes


New 11GR2 Grid Infrastructure Processes.


COMPONENT                                    PROCESSES                      OWNER
Cluster Ready Service (CRS)                  crsd                           root
Cluster Synchronization Service (CSS)        ocssd, cssdmonitor, cssdagent  grid owner, root, root
Event Manager (EVM)                          evmd, evmlogger                grid owner
Cluster Time Synchronization Service (CTSS)  octssd                         root
Oracle Notification Service (ONS)            ons, eons                      grid owner
Oracle Agent                                 oraagent                       grid owner
Oracle Root Agent                            orarootagent                   root
Grid Naming Service (GNS)                    gnsd                           root
Grid Plug and Play (GPnP)                    gpnpd                          grid owner
Multicast Domain Name Service (mDNS)         mdnsd                          grid owner
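
To see these daemons on a running 11gR2 node, you can check the OS process list and the Clusterware startup ("init") resources, for example:

ps -ef | egrep 'crsd|ocssd|evmd|octssd|oraagent|orarootagent|gpnpd|mdnsd|gnsd' | grep -v grep
crsctl stat res -t -init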




Sunday, April 1, 2012

ASM disks are not visible at Installation


Hi,
During installation of 11gR2, the ASM disks were not visible.

The reason: permissions had not been set on the ASM disks.

ls -lart /dev/sd*
brw-r----- 1 root disk     8,  17 Apr  1 12:53 sdb1
brw-r----- 1 root disk     8,  33 Apr  1 12:53 sdc1
brw-r----- 1 root disk     8,  49 Apr  1 12:53 sdd1
brw-r----- 1 root disk     8,  65 Apr  1 12:53 sde1
brw-r----- 1 root disk     8,  81 Apr  1 12:53 sdf1
brw-r----- 1 root disk     8,  97 Apr  1 12:53 sdg1


Later on I remembered that I had forgotten to add the lines below to /etc/rc.d/rc.local:

chown oracle:dba /dev/sdb1

chown oracle:dba /dev/sdc1
chown oracle:dba /dev/sdd1
chown oracle:dba /dev/sde1       
chown oracle:dba /dev/sdf1      
chown oracle:dba /dev/sdg1       
chmod 660 /dev/sdb1
chmod 660 /dev/sdc1
chmod 660 /dev/sdd1
chmod 660 /dev/sde1
chmod 660 /dev/sdf1
chmod 660 /dev/sdg1
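
A more robust alternative (a hedged sketch of a different technique, not what was used here) is a udev rule, so the ownership and mode survive reboots and device re-enumeration; device names are examples only:

# /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sdb1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdc1", OWNER="oracle", GROUP="dba", MODE="0660"
KERNEL=="sdd1", OWNER="oracle", GROUP="dba", MODE="0660"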


Also define the scan order in the /etc/sysconfig/oracleasm config file. For example, if the multipathing device in use is /dev/md1, you have to force ASMLib to scan the /dev/md* paths before the /dev/sd* paths:


vi /etc/sysconfig/oracleasm

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="md sd"



Linux/Unix history commands via the arrow keys.

A few lines in .profile will make a DBA's life easier on a Unix platform.

Add the lines below to .profile:
-------
#  set Unix command prompt 
export PS1=`uname -n`@':$PWD :$ORACLE_SID $'
# To enable backspace. 
set -o emacs
stty erase ^?
#simple alias to make life easier & save 5 second each time you login to sqlplus  
alias ll='ls -lart'
alias lsd='ls -l | grep ^d'
alias home='cd $ORACLE_HOME'
alias psora='ps -ef | grep pmon'
alias orasql='sqlplus / as sysdba'

##### To enable Arrow key for History commands 

alias __A=$(print '\0020') # ^P = up = previous command
alias __B=$(print '\0016') # ^N = down = next command
alias __C=$(print '\0006') # ^F = right = forward a character
alias __D=$(print '\0002') # ^B = left = back a character
alias __H=$(print '\0001') # ^A = home = beginning of line
export COLUMNS=130
clear
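
To pick up the changes in the current session, re-read the profile (or simply log in again):

. ~/.profile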

ssh User equivalence



[oracle@RACG1 grid]$ ./runcluvfy.sh stage -pre crsinst -n RACG1,RACG2 -r 11gR2 -fixup -verbose

Performing pre-checks for cluster services setup

Checking node reachability...


Check: Node reachability from node "RACG1"

  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  RACG1                                 yes
  RACG2                                 yes
Result: Node reachability check passed from node "RACG1"

Checking user equivalence...


Check: User equivalence for user "oracle"

  Node Name                             Comment
  ------------------------------------  ------------------------
  RACG1                                 failed
  RACG2                                 failed
Result: PRVF-4007 : User equivalence check failed for user "oracle"

ERROR:

User equivalence unavailable on all the specified nodes
Verification cannot proceed

Pre-check for cluster services setup was unsuccessful on all the nodes.

-- To avoid "PRVF-4007 : User equivalence check failed for user "oracle"":

Setup ssh user equivalence in 11gR2

In 11gR2, ssh user equivalence can be set up as shown below.


[oracle@RACG1 grid]$ ll
total 40
drwxrwxrwx  9 oracle oinstall 4096 Apr  1 08:11 doc
drwxrwxrwx  4 oracle oinstall 4096 Apr  1 08:11 install
drwxrwxrwx  2 oracle oinstall 4096 Apr  1 08:11 response
drwxrwxrwx  2 oracle oinstall 4096 Apr  1 08:11 rpm
-rwxrwxrwx  1 oracle oinstall 3795 Apr  1 08:11 runcluvfy.sh
-rwxrwxrwx  1 oracle oinstall 3227 Apr  1 08:11 runInstaller
drwxrwxrwx  2 oracle oinstall 4096 Apr  1 09:05 sshsetup
drwxrwxrwx 14 oracle oinstall 4096 Apr  1 08:11 stage
-rwxrwxrwx  1 oracle oinstall 4228 Apr  1 08:11 welcome.html


cd sshsetup
./sshUserSetup.sh -user oracle -hosts NODE1,NODE2 -advanced -exverify -confirm

Below is the method to set up ssh manually, step by step.



ON NODE-1

TESTP1@:/home/oracle : $mkdir -p ~/.ssh
TESTP1@:/home/oracle : $chmod 700 ~/.ssh
TESTP1@:/home/oracle : $/usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
4a:3b:fe:ea:03:d3:cf:6f:d3:06:fb:1a:ed:1e:b0:6b oracle@TESTP1.localdomain.com
TESTP1@:/home/oracle : $/usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
5e:cb:46:cc:d8:f1:01:44:3e:7a:20:eb:ce:5d:d4:15 oracle@TESTP1.localdomain.com

ON NODE-2

TESTP2@:/home/oracle : $mkdir -p ~/.ssh
TESTP2@:/home/oracle : $chmod 700 ~/.ssh
TESTP2@:/home/oracle : $/usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
dc:4e:e7:c3:ee:71:84:e7:2e:72:99:3d:b0:0a:2b:f9 oracle@TESTP2.localdomain.com
TESTP2@:/home/oracle : $/usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
5a:49:d4:e2:b9:5d:e9:f8:f5:bb:ce:7c:4b:f4:dd:6c oracle@TESTP2.localdomain.com

ON NODE-1

TESTP1@:/home/oracle : $ssh TESTP1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'testp1 (192.168.100.181)' can't be established.
RSA key fingerprint is 27:ac:4b:9a:e3:d2:ae:6d:2b:71:99:8d:b9:c0:b1:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'testp1,192.168.100.181' (RSA) to the list of known hosts.
oracle@testp1's password:
TESTP1@:/home/oracle : $ssh TESTP1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
TESTP1@:/home/oracle : $scp /home/oracle/.ssh/authorized_keys oracle@TESTP2:~/.ssh/
oracle@testp2's password:
authorized_keys                                                                                                          100% 1030     1.0KB/s   00:00

ON NODE-2

TESTP2@:/home/oracle/.ssh : $ssh TESTP2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'testp2 (192.168.100.182)' can't be established.
RSA key fingerprint is 27:ac:4b:9a:e3:d2:ae:6d:2b:71:99:8d:b9:c0:b1:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'testp2,192.168.100.182' (RSA) to the list of known hosts.
oracle@testp2's password:
TESTP2@:/home/oracle/.ssh : $ssh TESTP2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
TESTP2@:/home/oracle/.ssh : $scp /home/oracle/.ssh/authorized_keys oracle@TESTP1:~/.ssh/
The authenticity of host 'testp1 (192.168.100.181)' can't be established.
RSA key fingerprint is 27:ac:4b:9a:e3:d2:ae:6d:2b:71:99:8d:b9:c0:b1:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'testp1,192.168.100.181' (RSA) to the list of known hosts.
oracle@testp1's password:
authorized_keys                                                                                                          100% 2060     2.0KB/s   00:00

--Confirm ssh on both Nodes. 

TESTP1@:/home/oracle : $ssh TESTP2 date
Mon Aug  6 09:20:48 EDT 2012
TESTP1@:/home/oracle : $ssh TESTP1 date
Mon Aug  6 09:20:43 EDT 2012

TESTP2@:/home/oracle/.ssh : $ssh TESTP2 date
Mon Aug  6 09:20:37 EDT 2012
TESTP2@:/home/oracle/.ssh : $ssh TESTP1 date
Mon Aug  6 09:20:31 EDT 2012



It should ask for a password only the first time. If it keeps asking, do the following as a temporary workaround:

------ perform the steps below on BOTH nodes.

exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

  • Log in as root.
  • vi /etc/ssh/sshd_config (on some systems the path is /etc/sshd_config).
  • Change the following line from yes to no:
PasswordAuthentication no
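
Note (assuming a RHEL/OEL-style service script): a change to sshd_config only takes effect after the SSH daemon is restarted.

service sshd restart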