Source: IT认证考试, September 24, 2010
Hardware Environment
Model: Dell Latitude D630
CPU: Core 2 Duo T7250 2 GHz
Memory: 2GB
Free disk space: 50GB
Software Environment
VMware Server 1.0.6
Daemon Tools 4.10 (x86)
Red Hat Enterprise Linux 4 AS (kernel 2.6.9-34)
Oracle 10gR2 clusterware
Oracle 10gR2 database
Architecture
Install 1st Guest OS (RHEL4)
● VM configuration
VM name: node1
Memory 1024MB
When adding the hard disks, you must select Advanced → choose SCSI 1:0, 1, 2, 3… → Independent, and check Persistent. You can uncheck "Allocate all disk space now" to avoid wasting disk space, but check it for the voting disks and OCR, because they are small and very important.
Note: Since we will configure the shared storage for ASM, we must choose a bus number different from the local disk's. For example, the local disk uses SCSI0, so we use another bus, SCSI1, for the shared storage.
HD   (SCSI 0:0)  RHEL4        15.0 GB
HD2  (SCSI 1:0)  ocr1          0.2 GB  Independent, persistent
HD3  (SCSI 1:1)  ocr2          0.2 GB  Independent, persistent
HD4  (SCSI 1:2)  votingdisk1   0.2 GB  Independent, persistent
HD5  (SCSI 1:3)  votingdisk2   0.2 GB  Independent, persistent
HD6  (SCSI 1:4)  votingdisk3   0.2 GB  Independent, persistent
HD7  (SCSI 1:5)  datadg1       2.0 GB  Independent, persistent
HD8  (SCSI 1:6)  datadg2       2.0 GB  Independent, persistent  (SCSI 1:7 is reserved for the SCSI controller itself)
HD9  (SCSI 1:8)  redodg1       0.5 GB  Independent, persistent
HD10 (SCSI 1:9)  redodg2       0.5 GB  Independent, persistent
HD11 (SCSI 1:10) flashdg1      0.5 GB  Independent, persistent
HD12 (SCSI 1:11) flashdg2      0.5 GB  Independent, persistent
Ethernet: Bridged
Ethernet2: Host-only
CD-ROM: Use the Daemon Tools virtual drive Y:
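If you prefer the command line to the GUI, the shared disks can also be pre-created on the Windows host with vmware-vdiskmanager (a sketch; the install path and target folder below are assumptions, and -t 2 creates a preallocated disk, matching "Allocate all disk space now"):
"C:\Program Files\VMware\VMware Server\vmware-vdiskmanager.exe" -c -s 200MB -a lsilogic -t 2 "E:\Virtual Machines\shared\ocr1.vmdk"
Repeat with the sizes from the table above: 200MB for ocr2 and votingdisk1-3, 2GB for datadg1-2, and 500MB for redodg1-2 and flashdg1-2.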
Open E:\Virtual Machines\node1\RHEL4.vmx with Notepad and make sure it contains the following lines (the shared-storage settings are the additions):
config.version = "8"
virtualHW.version = "4"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
memsize = "1024"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "RHEL4.vmdk"
ide1:0.present = "TRUE"
ide1:0.fileName = "Y:"
ide1:0.deviceType = "cdrom-raw"
floppy0.present = "FALSE"
Ethernet0.present = "TRUE"
displayName = "node1"
guestOS = "rhel4"
priority.grabbed = "normal"
priority.ungrabbed = "normal"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.sharedBus = "virtual"
#Add the following lines at the end of the file
scsi1:0.deviceType = "disk"
scsi1:1.deviceType = "disk"
scsi1:2.deviceType = "disk"
scsi1:3.deviceType = "disk"
scsi1:4.deviceType = "disk"
scsi1:5.deviceType = "disk"
scsi1:6.deviceType = "disk"
scsi1:8.deviceType = "disk"
scsi1:9.deviceType = "disk"
scsi1:10.deviceType = "disk"
scsi1:11.deviceType = "disk"
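To double-check that every shared-disk entry made it into the file, you can search it from a Windows command prompt (an optional check):
findstr "scsi1" "E:\Virtual Machines\node1\RHEL4.vmx"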
● Install RHEL4
Language: English (USA)
Disk partition: local disk only (do not configure the share storage)
/boot: 200MB ext3
swap: 1800MB swap
/: all remaining space, ext3
eth0: 192.168.10.101, 255.255.255.0 public IP
eth1: 10.10.10.101, 255.255.255.0 private IP
Gateway and DNS: 192.168.10.1 (the Windows host's IP)
Hostname: rac01
root’s PW: racrac
Software packages: Custom (check all packages except unneeded components, e.g. graphical development, games, sound, etc.)
● Install VMware tools for time sync
After Linux is up, click VM → Install VMware Tools in the VMware Server console; a VMware Tools icon will then appear on the Linux desktop. Double-click it, run VMware-Tools-xxxxx.i386.rpm, and click Continue. Right-click the desktop, open a terminal, and run "vmware-config-tools.pl", selecting 800x600 as the resolution. Then run "vmware-toolbox" and check "Time synchronization between the virtual machine and the host operating system".
vi /boot/grub/grub.conf
Append the following options after "quiet" on the same kernel line:
kernel /vmlinuz-2.6.9-34.EL ro root=LABEL=/ rhgb quiet clock=pit nosmp noapic nolapic
Reboot and run "date" to check whether the time matches the Windows host's time.
If the time is still not synchronized, open C:\Documents and Settings\All Users\Application Data\VMware\VMware Server\config.ini and add:
host.cpuKHz="2000000"      # your CPU speed (2.0 GHz here)
host.noTSC="TRUE"
ptsc.noTSC="TRUE"
Install 2nd Guest OS (RHEL4) (not used this time)
Most of the steps are the same as for the 1st node; only the differences are listed here:
VM name: node2
eth0: 192.168.10.102, 255.255.255.0 public IP
eth1: 10.10.10.102, 255.255.255.0 private IP
Gateway and DNS: 192.168.10.1
Hostname: rac02
root’s PW: racrac
Pre-installation of Oracle 10g
● Verify the required packages
rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common \
gnome-libs libstdc++ libstdc++-devel make pdksh sysstat xscreensaver
Confirm that all of these packages are installed.
● Change the /etc/hosts
Add the following lines:
vi /etc/hosts
127.0.0.1      localhost.localdomain localhost
#Public IP
192.168.10.101 rac01
192.168.10.102 rac02
#Virtual IP
192.168.10.10  rac01-vip
192.168.10.20  rac02-vip
#Private IP
10.10.10.101   rac01-priv
10.10.10.102   rac02-priv
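A quick sanity check that every public and private name resolves and answers (a minimal sketch; run it on each node). The VIP names are left out on purpose, because the VIPs are only brought online later by Clusterware:
for h in rac01 rac02 rac01-priv rac02-priv; do
  ping -c 1 $h > /dev/null && echo "$h OK" || echo "$h FAILED"
done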
● Create the Oracle Groups and User Account
[root@node1 ~]# groupadd oinstall
[root@node1 ~]# groupadd dba
[root@node1 ~]# useradd -d /home/oracle -g oinstall -G dba oracle
[root@node1 ~]# passwd oracle        (the password is "nhy67ujm")
[root@node1 ~]# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba)
The user ID and group IDs must be the same on all nodes, so create them explicitly on node2:
[root@node2 ~]# groupadd -g 502 oinstall
[root@node2 ~]# groupadd -g 503 dba
[root@node2 ~]# useradd -d /home/oracle -u 501 -g oinstall -G dba oracle
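To confirm that the IDs really match (an optional check), run "id oracle" on both nodes and compare the output; the uid and gid numbers must be identical:
[root@node2 ~]# id oracle
uid=501(oracle) gid=502(oinstall) groups=502(oinstall),503(dba)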
● Modify the profile of the oracle user
[root@node1 ~]# vi /home/oracle/.bash_profile
#Add the following:
#ORACLE Settings
export TMP=/tmp
export TEMP=$TMP
export TMPDIR=$TMP
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=pcard1
export ORACLE_TERM=xterm
export ORACLE_CRS_HOME=$ORACLE_BASE/crs_home
export CRS_HOME=$ORACLE_CRS_HOME
export NLS_LANGUAGE=AMERICAN
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$ORACLE_CRS_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib:/usr/X11R6/lib/
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
#ulimit -u 16384 -n 65536
umask 022

[root@node1 ~]# mkdir /oracle
[root@node1 ~]# chown oracle.oinstall /oracle
Modify the kernel parameters
[root@node1 ~]# vi /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
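Apply the new kernel parameters immediately, without a reboot:
[root@node1 ~]# sysctl -p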
Change the limits of the oracle user
[root@node1 ~]# vi /etc/security/limits.conf
#Add:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
[root@node1 ~]# vi /etc/pam.d/login
#Add:
session required /lib/security/pam_limits.so
Create Shared Disks
[root@node1 ~]# fdisk /dev/sdb
Answer the prompts as follows (the characters shown after each prompt are what you type):
Command (m for help): n
Command action (e/p): p
Partition number (1-4): 1
First cylinder: <Enter> (accept the default)
Last cylinder: <Enter> (accept the default)
Command (m for help): w
Repeat the above steps for the remaining disks (sdc, sdd, sde…). Once all the disks are partitioned, the result can be seen with the following command:
ls /dev/sd*
/dev/sda /dev/sdb /dev/sdd /dev/sdf /dev/sdh /dev/sdj /dev/sdl
/dev/sda1 /dev/sdb1 /dev/sdd1 /dev/sdf1 /dev/sdh1 /dev/sdj1 /dev/sdl1
/dev/sda2 /dev/sdc /dev/sde /dev/sdg /dev/sdi /dev/sdk
/dev/sda3 /dev/sdc1 /dev/sde1 /dev/sdg1 /dev/sdi1 /dev/sdk1
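Instead of answering the fdisk prompts by hand eleven times, the same keystrokes can be piped in with a loop (a sketch; double-check the device letters against your own system before running it):
# n = new partition, p = primary, 1 = partition number,
# two empty lines accept the default first/last cylinders, w = write and exit
for d in sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
done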
Edit /etc/sysconfig/rawdevices, add the following lines:
vi /etc/sysconfig/rawdevices
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdc1
/dev/raw/raw3 /dev/sdd1
/dev/raw/raw4 /dev/sde1
/dev/raw/raw5 /dev/sdf1
/dev/raw/raw6 /dev/sdg1
/dev/raw/raw7 /dev/sdh1
/dev/raw/raw8 /dev/sdi1
/dev/raw/raw9 /dev/sdj1
/dev/raw/raw10 /dev/sdk1
/dev/raw/raw11 /dev/sdl1
Restart the rawdevices service using the following command.
service rawdevices restart
Modify the default setting:
chkconfig rawdevices on
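You can verify that all eleven bindings are active with the raw utility:
raw -qa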
Run the following commands and add them to the /etc/rc.local file.
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chown oracle:oinstall /dev/raw/raw3
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chown oracle:oinstall /dev/raw/raw6
chown oracle:oinstall /dev/raw/raw7
chown oracle:oinstall /dev/raw/raw8
chown oracle:oinstall /dev/raw/raw9
chown oracle:oinstall /dev/raw/raw10
chown oracle:oinstall /dev/raw/raw11
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chmod 644 /dev/raw/raw3
chmod 644 /dev/raw/raw4
chmod 644 /dev/raw/raw5
chmod 660 /dev/raw/raw6
chmod 660 /dev/raw/raw7
chmod 660 /dev/raw/raw8
chmod 660 /dev/raw/raw9
chmod 660 /dev/raw/raw10
chmod 660 /dev/raw/raw11
Alternatively, modify /etc/udev/permissions.d/50-udev.permissions. Raw devices are remapped at boot, and by default their owner reverts to the root user at boot time. If the owner is not the oracle user, ASM will have problems accessing the shared partitions. In /etc/udev/permissions.d/50-udev.permissions, comment out the original line "raw/*:root:disk:0660" and add a new line "raw/*:oracle:dba:0660".
/etc/udev/permissions.d/50-udev.permissions
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
(The rule is:
ocr    660  root
vote   644  oracle
asm    660  oracle)
Configure SSH for user equivalence
Log in as the "oracle" user:
[root@rac01 ~]# su – oracle
[oracle@rac01 ~]$ mkdir ~/.ssh
[oracle@rac01 ~]$ chmod 700 ~/.ssh
[oracle@rac01 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
79:3d:2b:99:c4:19:36:fd:ef:82:21:14:04:3a:a2:86 oracle@rac01
[oracle@rac01 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
33:d9:2a:e2:63:8c:95:f1:e9:78:da:31:e0:e2:79:a4 oracle@rac01
[oracle@rac01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rac01 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
If there is another node, we should copy the keys to it so that we can ssh to the remote node without being prompted for a password. That means we perform the same steps (generate keys and append them to authorized_keys) on node2 as well:
ssh rac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh rac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac02:~/.ssh/authorized_keys
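Depending on the sshd configuration, passwordless login can silently fail if authorized_keys is too permissive, so tightening it on both nodes is a safe precaution:
chmod 600 ~/.ssh/authorized_keys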
[oracle@rac01 ~]$ ssh rac01 date
[oracle@rac01 ~]$ ssh rac02 date
[oracle@rac01 ~]$ ssh rac01-priv date
[oracle@rac01 ~]$ ssh rac02-priv date
Switch to rac02 and run:
[oracle@rac02 ~]$ ssh rac01 date
[oracle@rac02 ~]$ ssh rac02 date
[oracle@rac02 ~]$ ssh rac01-priv date
[oracle@rac02 ~]$ ssh rac02-priv date
################## BEGIN ##################
ASMLib could not be configured successfully in this experiment, and in the end ASM used the raw disks directly; the steps are recorded here for reference, and the cause will be investigated later.
[root@rac01 network-scripts]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Loading module “oracleasm”: [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
/etc/init.d/oracleasm createdisk VOL1 /dev/sdg1
/etc/init.d/oracleasm createdisk VOL2 /dev/sdh1
/etc/init.d/oracleasm createdisk VOL3 /dev/sdi1
/etc/init.d/oracleasm listdisks
################## END ##################
Test ssh on every node:
ssh rac01 date
ssh rac01-priv date
Install Oracle 10g Clusterware
Run the following as root:
[root@rac01 ~]# xhost +
Note: If you meet any xhost error, execute the command as root.
[root@rac01 ~]# su - oracle
Pre-verification:
[oracle@rac01 ~]$ /media/cdrom/cluvfy/runcluvfy.sh stage -pre crsinst -n rac01 -verbose
Error “Could not find a suitable set of interfaces for VIPs.” can be ignored.
Install the Clusterware:
[oracle@rac01 ~]$ /media/cdrom/runInstaller
Destination:
Name: crs
Path: /oracle/crs_home
Languages: Add Japanese
Specify network interface usage:
eth0: public IP 192.168.10.101 rac01
eth1: private IP 10.10.10.101 rac01-priv
Specify OCR location:
OCR: /dev/raw/raw1
Mirror: /dev/raw/raw2
Specify Voting disk:
Voting disk Location: /dev/raw/raw3
Voting disk 1 Location: /dev/raw/raw4
Voting disk 2 Location: /dev/raw/raw5
At the end of the installation, OUI asks you to run the following scripts as root:
[root@rac01 ~]# /oracle/oraInventory/orainstRoot.sh
[root@rac01 ~]# /oracle/crs_home/root.sh
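Once root.sh has completed on a node, a quick health check of the stack can be run (optional):
[root@rac01 ~]# /oracle/crs_home/bin/crsctl check crs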
Run vipca as root:
[root@rac01 ~]# /oracle/crs_home/bin/vipca
Enter the VIP "rac01-vip"; 192.168.10.10 will then be filled in automatically. Click Finish.
Click OK in the Clusterware OUI. The Clusterware installation is now finished.
When you find vip issues, edit this file $CRS_HOME/bin/racgvip
# set it to 0 for checkIf() to return success if default gateway is not found,
# otherwise set it to 1
FAIL_WHEN_DEFAULTGW_NOT_FOUND=0
CRS automatically pings the default gateway every few minutes, so the VIP will go OFFLINE if the gateway cannot be reached, which is the case here because the gateway we set (192.168.10.2) does not actually exist.
Part of an article by 三思 is quoted here:
When configuring RAC over SSH, the user equivalence check may fail.
After configuring SSH, verification of user equivalence failed:
$ ./runcluvfy.sh comp nodecon -n ahrac1,ahrac2 -verbose
Verifying node connectivity
ERROR: User equivalence unavailable on all the nodes. Verification cannot proceed.
Verification of node connectivity was unsuccessful on all the nodes.
A Metalink search shows this is a problem with Oracle's cluvfy tool; Oracle describes it in detail in Note:369598.1: User Equivalence Check Failed for User Oracle Using Ssh/scp.
In short, Oracle looks for the ssh command in /usr/local/bin, while ssh actually lives in /usr/bin.
The corresponding fix is simple: create links in /usr/local/bin pointing to /usr/bin/ssh and /usr/bin/scp.
root@ahrac1 # mkdir -p /usr/local/bin
root@ahrac1 # ln -s -f /usr/bin/ssh /usr/local/bin/ssh
root@ahrac1 # ln -s -f /usr/bin/scp /usr/local/bin/scp
However, trying again still produced the error:
$ ./runcluvfy.sh comp nodecon -n ahrac1,ahrac2 -verbose
Verifying node connectivity
ERROR: User equivalence unavailable on all the nodes. Verification cannot proceed.
Verification of node connectivity was unsuccessful on all the nodes.
It turned out that the oracle user must first load the keys into an ssh agent before the verification:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /export/home/oracle/.ssh/id_rsa:
Identity added: /export/home/oracle/.ssh/id_rsa (/export/home/oracle/.ssh/id_rsa)
Enter passphrase for /export/home/oracle/.ssh/id_dsa:
Identity added: /export/home/oracle/.ssh/id_dsa (/export/home/oracle/.ssh/id_dsa)
$ ./runcluvfy.sh comp nodecon -n ahrac1,ahrac2 -verbose
Run on node1: /opt/ora10g/oraInventory/orainstRoot.sh; run on node2: /opt/ora10g/oraInventory/orainstRoot.sh; then run on node1: /opt/ora10g/product/10.2.0/crs_1/root.sh.
It is best to modify vipca before running root.sh on node2, and to manually reconfigure rac1-vip and rac2-vip with /oracle/crs_home/bin/vipca. Then run on node2: /opt/ora10g/product/10.2.0/crs_1/root.sh.
Note: After running orainstRoot.sh on both nodes, verify that the permissions on "/etc/oraInst.loc" are 644 (-rw-r--r--) and that the owner is root. If the oracle user account does not have read permission on this file, problems may occur during the Oracle installation: "the location of the oraInventory directory cannot be determined". For example, after the Oracle Clusterware installation (when running the Oracle Cluster Verification Utility), the following error appears: "CRS is not installed on any of the nodes." If the permissions on /etc/oraInst.loc are set improperly, you cannot run orainstRoot.sh on both nodes before running root.sh. In addition, the umask setting may be wrong; it should be 0022. If the permissions on /etc/oraInst.loc are set improperly, run the following commands on both nodes of the Oracle RAC cluster to resolve the problem:
# chmod 644 /etc/oraInst.loc
# ls -l /etc/oraInst.loc
-rw-r--r-- 1 root root 56 Oct 12 21:52 /etc/oraInst.loc
Install Oracle 10g Database with ASM
[root@rac01 ~]$ su – oracle
Install the Database:
[oracle@rac01 ~]$ /media/cdrom/runInstaller
Installation Type: Enterprise
Product Languages: Add Japanese
Name: OraDB_Install
Path: /oracle/product/10.2.0/db_1
Product-Specific Prerequisite Checks: You can ignore "Checking Network Configuration requirements" here, because we use static IPs, not DHCP.
ASM’s SYS password: nhy67ujm
If you encounter the following errors when creating the ASM disk group:
ORA-15186: ASMLIB error function = [asm_open], error = [1],
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup
Solution: click ASM parameters and set:
asm_diskstring='ORCL:*'
If you then see:
ERROR: no PST quorum in group 1: required 2, found 0
Metalink (309815.1) explains: there are two issues with using multipath disks in ASM. ASM cannot handle seeing the same disk twice; if it does, it raises an error. Exclude the duplicates from the scan:
vi /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER="oracleasm"
ORACLEASM_SCANEXCLUDE="sdb sdc sdd sde sdf"
You can create any disk group here, because we will delete it and create new ones later.
Once the installation is completed, wait while the configuration assistants run. Execute "/oracle/product/10.2.0/db_1/root.sh" as root on all nodes (Enter the full pathname of the local bin directory: [/usr/local/bin]: /usr/local/bin), and then click the "OK" button.
[oracle@rac01 ~]$ export ORACLE_SID=+ASM1
[oracle@rac01 ~]$ sqlplus / as sysdba
SQL> drop diskgroup DATA including contents;
SQL> create diskgroup DATADG
normal redundancy
failgroup fgroup1 disk
'/dev/raw/raw6' name DATADG_ASM_1
failgroup fgroup2 disk
'/dev/raw/raw7' name DATADG_ASM_2;
SQL> create diskgroup REDODG
normal redundancy
failgroup fgroup1 disk
'/dev/raw/raw8' name REDODG_ASM_1
failgroup fgroup2 disk
'/dev/raw/raw9' name REDODG_ASM_2;
SQL> create diskgroup FLASHDG
normal redundancy
failgroup fgroup1 disk
'/dev/raw/raw10' name FLASHDG_ASM_1
failgroup fgroup2 disk
'/dev/raw/raw11' name FLASHDG_ASM_2;
After configuration, query the state of disk group.
SQL> set line 150
col path for a15
col G_NO for 99
col name for a12
col LABEL for a6
select dg.GROUP_NUMBER G_NO,dg.name,dg.state dg_stat,d.name name,d.path,d.state d_stat,d.label,d.header_status from v$asm_diskgroup dg,v$asm_disk d where dg.group_number=d.group_number;
NAME     DG_STAT  D_NAME         PATH            D_STAT
-------- -------- -------------- --------------- --------
DATADG   MOUNTED  DATADG_ASM_1   /dev/raw/raw6   NORMAL
DATADG   MOUNTED  DATADG_ASM_2   /dev/raw/raw7   NORMAL
REDODG   MOUNTED  REDODG_ASM_1   /dev/raw/raw8   NORMAL
REDODG   MOUNTED  REDODG_ASM_2   /dev/raw/raw9   NORMAL
FLASHDG  MOUNTED  FLASHDG_ASM_1  /dev/raw/raw10  NORMAL
FLASHDG  MOUNTED  FLASHDG_ASM_2  /dev/raw/raw11  NORMAL
If any disk group is not mounted, use "alter diskgroup XXXDG mount" to mount it.
Upgrade to 10.2.0.3
[oracle@rac01 FTP]$ srvctl stop asm -n rac01
[oracle@rac01 FTP]$ srvctl stop nodeapps -n rac01
[oracle@rac01 FTP]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora….SM1.asm application OFFLINE OFFLINE
ora….01.lsnr application OFFLINE OFFLINE
ora.rac01.gsd application OFFLINE OFFLINE
ora.rac01.ons application OFFLINE OFFLINE
ora.rac01.vip application OFFLINE OFFLINE
Upgrade Clusterware:
[oracle@rac01 FTP]$ /FTP/Disk1/runInstaller    (execute "xhost +" as root first)
At Specify Home Details, select Name: crs
Run as root:
[root@rac01 oracle]# /oracle/crs_home/bin/crsctl stop crs
[root@rac01 oracle]# /oracle/crs_home/install/root102.sh
Upgrade Database:
[oracle@rac01 FTP]$ /FTP/Disk1/runInstaller    (execute "xhost +" as root first)
At Specify Home Details, select Name: OraDB_Install
Run as root:
[root@rac01 FTP]# /oracle/product/10.2.0/db_1/root.sh
[oracle@rac01 FTP]$ srvctl start nodeapps -n rac01
[oracle@rac01 FTP]$ srvctl start asm -n rac01
[oracle@rac01 FTP]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora….SM1.asm application ONLINE ONLINE rac01
ora….01.lsnr application ONLINE ONLINE rac01
ora.rac01.gsd application ONLINE ONLINE rac01
ora.rac01.ons application ONLINE ONLINE rac01
ora.rac01.vip application ONLINE ONLINE rac01
Create a Database with DBCA
Pre-Database creation check:
cd /cdrom/clusterware/cluvfy
$ ./runcluvfy.sh stage -pre dbcfg -n rac01,rac02 -d /oracle/product/10.2.0
Log in to rac01 as the oracle user and start the Database Configuration Assistant.
[oracle@rac01 ~]$ dbca
On the “Welcome” screen, select the “Oracle Real Application Clusters database” option and click the “Next” button.
Select the “Custom Database” option and click the “Next” button.
Global Database Name: pcard
SID prefix: pcard
Password: nhy67ujm
Select ASM as storage management.
Use Oracle-Managed Files: +DATADG
Flash Recovery Area: +FLASHDG
SIZE: 512MB
Enable Archiving: edit the archive log format parameter: PCARD_%t_%r_%S.arc
Archive Log Destinations: +FLASHDG/PCARD/arch
Create the directory (+FLASHDG/PCARD/arch) first, to avoid errors during the database startup procedure.
SGA: 300MB
PGA: 100MB
Character Set: JA16SJISTILDE
National Character Set: AL16UTF16
Click All Init Parameters:
Edit the control files parameter: +REDODG/{db_name}/control01.ctl, +FLASHDG/{db_name}/control02.ctl, and then verify it.
Change the parameter file location:
SQL> create pfile='/oracle/product/10.2.0/db_1/dbs/backup.ora' from spfile;
SQL> create spfile='+FLASHDG/pcard/spfilepcard.ora' from pfile='/oracle/product/10.2.0/db_1/dbs/backup.ora';
SQL> host vi /oracle/product/10.2.0/db_1/dbs/initpcard1.ora
Set SPFILE='+FLASHDG/pcard/spfilepcard.ora', then save and exit.
SQL> shutdown immediate
Remove the original spfile in ASM:
ASMCMD> pwd
+DATADG/pcard
ASMCMD> rm spfilepcard.ora
SQL> startup pfile='/oracle/product/10.2.0/db_1/dbs/initpcard1.ora'
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string +FLASHDG/pcard/spfilepcard.ora
Check the Status of the RAC
There are several ways to check the status of the RAC. The srvctl utility shows the current configuration and status of the RAC database.
$ srvctl config database -d pcard
$ srvctl status database -d pcard
Instance pcard1 is running on node rac01
Configure the RAC database not to start automatically when the OS starts:
$ srvctl modify database -d pcard -y MANUAL
The general syntax is: srvctl modify database -d <dbname> -y AUTOMATIC|MANUAL
The V$ACTIVE_INSTANCES view can also display the current status of the instances.
$ sqlplus / as sysdba
SQL> SELECT * FROM v$active_instances;
INST_NUMBER INST_NAME
----------- ---------------
1 rac01:pcard1
Finally, the GV$ views allow you to display global information for the whole RAC.
SQL> SELECT inst_id, username, sid, serial# FROM gv$session WHERE username IS NOT NULL;
INST_ID USERNAME SID SERIAL#
---------- ------------------------------ ---------- ----------
1 SYS 128 2
1 SYS 129 2
1 SYS 130 1
1 SYS 132 4078
1 SYS 133 6
Create RMAN Catalog
Prepare the RMAN catalog DB creation script first and upload it to /oracle/admin/RMANUAT/scripts/.
[oracle@rac01 ~]$ ./RMANUAT.sh
Input the password of SYS and SYSTEM: rman
Configure RMAN DB auto startup when Linux starts:
Log in as root and add this entry to /etc/oratab:
RMANUAT:/oracle/product/10.2.0/db_1:N
[root@rac01 ~]# su - oracle
[oracle@rac01 scripts]$ vi $ORACLE_HOME/bin/dbstart
Find ORACLE_HOME_LISTNER and edit it like this:
ORACLE_HOME_LISTNER=$ORACLE_HOME
[oracle@rac01 scripts]$ vi $ORACLE_HOME/bin/dbshut
Find ORACLE_HOME_LISTNER and edit it like this:
ORACLE_HOME_LISTNER=$ORACLE_HOME
[oracle@rac01 scripts]$ su
[root@rac01]# cd /etc/rc.d/init.d/
[root@rac01 init.d]# vi rmandb

#!/bin/bash
# chkconfig: 345 99 10
# description: Startup Script for Oracle Databases
export ORACLE_SID=RMANUAT
export ORACLE_BASE=/oracle
export ORACLE_HOME=/oracle/product/10.2.0/db_1
export PATH=$PATH:$ORACLE_HOME/bin
case "$1" in
  start)
    su oracle -c $ORACLE_HOME/bin/dbstart
    touch /var/lock/oracle
    echo "OK"
    ;;
  stop)
    echo -n "Shutdown Oracle: "
    su oracle -c $ORACLE_HOME/bin/dbshut
    rm -f /var/lock/oracle
    echo "OK"
    ;;
  *)
    echo "Usage: `basename $0` start|stop"
    exit 1
esac
exit 0
[root@rac01 init.d]# chkconfig --add rmandb
[root@rac01 init.d]# chkconfig --list rmandb
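Test the script once by hand before relying on it at boot:
[root@rac01 init.d]# service rmandb stop
[root@rac01 init.d]# service rmandb start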
Register the pcard database to rman catalog:
[oracle@rac01 scripts]$ export ORACLE_SID=RMANUAT
[oracle@rac01 scripts]$ sqlplus / as sysdba
SQL> CREATE USER RMANUSR IDENTIFIED BY rman DEFAULT TABLESPACE RMANUATUSR_DAT;
SQL> GRANT connect,resource,recovery_catalog_owner TO RMANUSR;
[oracle@rac01 scripts]$ rman catalog rmanusr/rman
RMAN> create catalog tablespace RMANUATUSR_DAT;
RMAN> connect target sys/nhy67ujm@pcard1;
RMAN> register database;
RMAN> report schema;
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 480 SYSTEM YES +DATADG/pcard/datafile/system.259.663797429
2 200 UNDOTBS1 YES +DATADG/pcard/datafile/undotbs1.260.663797479
3 260 SYSAUX NO +DATADG/pcard/datafile/sysaux.261.663797495
4 5 USERS NO +DATADG/pcard/datafile/users.263.663797539
List of Temporary Files
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 29 TEMP 32767 +DATADG/pcard/tempfile/temp.262.663797505
Uninstall RAC
Shut down the DB instances, ASM, and CRS:
/etc/init.d/init.crs disable
/etc/init.d/init.crs stop
Edit /etc/inittab and remove the CRSD, EVMD, and CSSD entries.
reboot
Remove all Oracle-related directories and files under /etc (oracle, oraInst.loc, oratab).
rm -fr /etc/init.d/init.crs, init.crsd, init.evmd, init.cssd, /etc/rc0.d/K96init.crs, /etc/rc1.d/K96init.crs, /etc/rc2.d/K96init.crs, /etc/rc3.d/S96init.crs, /etc/rc4.d/K96init.crs, /etc/rc5.d/S96init.crs, /etc/rc6.d/K96init.crs
cd /oracle
rm -fr *
# OCR and Vote Disk
dd if=/dev/zero of=/dev/raw/raw1 bs=4096 count=128
dd if=/dev/zero of=/dev/raw/raw2 bs=4096 count=128
dd if=/dev/zero of=/dev/raw/raw3 bs=4096 count=128
dd if=/dev/zero of=/dev/raw/raw4 bs=4096 count=128
dd if=/dev/zero of=/dev/raw/raw5 bs=4096 count=128
# ASM disks
dd if=/dev/zero of=/dev/raw/raw6 bs=8192 count=128
dd if=/dev/zero of=/dev/raw/raw7 bs=8192 count=128
dd if=/dev/zero of=/dev/raw/raw8 bs=8192 count=128
dd if=/dev/zero of=/dev/raw/raw9 bs=8192 count=128
dd if=/dev/zero of=/dev/raw/raw10 bs=8192 count=128
dd if=/dev/zero of=/dev/raw/raw11 bs=8192 count=128
Create Physical Standby Database and Switchover
RAC01 side:
rman target sys/nhy67ujm@pcard1 catalog=RMANUSR/rman@RMANUAT
mkdir -p /oracle/backup/hotbackup
RMAN> run{
allocate channel d1 type disk;
backup format '/oracle/backup/hotbackup/df_t%t_s%s_p%p' database;
backup current controlfile for standby format
'/oracle/backup/hotbackup/sb_t%t_s%s_p%p';
sql 'alter system archive log current';
backup format '/oracle/backup/hotbackup/al_t%t_s%s_p%p' archivelog all;
release channel d1;
}
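Before copying the files to the standby host, you can confirm the backup pieces were cataloged (an optional check):
RMAN> list backup summary;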
Standby side:
mkdir -p /oracle/backup/hotbackup
cd /oracle/backup/hotbackup
sftp oracle@rac01:/oracle/backup/hotbackup/
Use “get” command to copy all backup files in this directory.
Create standby db directory
mkdir -p /oracle/admin/pcard/adump
mkdir -p /oracle/admin/pcard/bdump
mkdir -p /oracle/admin/pcard/cdump
mkdir -p /oracle/admin/pcard/udump
mkdir -p /oracle/oradata/pcard/arch/
mkdir -p /oracle/oradata/pcard/datafile
mkdir -p /oracle/oradata/pcard/onlinelog
Standby Side: Edit listener.ora and tnsnames.ora
listener.ora:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/product/10.2.0/db)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = standby)
(ORACLE_HOME = /oracle/product/10.2.0/db)
(SID_NAME = pcarddg)
)
)
LISTENER_STANDBY =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.201)(PORT = 1521))
)
)
tnsnames.ora (copy from RAC01 and add the following contents):
#####STANDBY SETTINGS##############
LISTENERS_STANDBY =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.201)(PORT = 1521))
PCARDDG =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.201)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = PCARDDG)(UR=A)
(INSTANCE_NAME = PCARDDG)
)
)
RAC01 Side: Create pfile from spfile and edit
SQL> create pfile from spfile;
SQL> host vi /oracle/product/10.2.0/db_1/dbs/initpcard1.ora
Edit or add the following parameters:
*.db_name='pcard'
*.DB_UNIQUE_NAME='pcard'
*.log_archive_config='DG_CONFIG=(pcard,pcarddg)'
*.log_archive_dest_1='LOCATION=+FLASHDG/pcard/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pcard'
*.log_archive_dest_2='SERVICE=pcarddg VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pcarddg'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_dest_state_2='ENABLE'
*.remote_login_passwordfile='exclusive'
# Add the following parameters for switchover
*.FAL_CLIENT='pcard'
*.FAL_SERVER='pcarddg'
*.DB_FILE_NAME_CONVERT='/oracle/oradata/pcard','+DATADG/PCARD'
*.LOG_FILE_NAME_CONVERT='/oracle/oradata/pcard/onlinelog','+DATADG/PCARD/ONLINELOG'
*.STANDBY_ARCHIVE_DEST='+FLASHDG/pcard/arch/'
*.STANDBY_FILE_MANAGEMENT='AUTO'
SQL> shutdown immediate
SQL> startup pfile='/oracle/product/10.2.0/db_1/dbs/initpcard1.ora';
SQL> create spfile='+FLASHDG/pcard/spfilepcard.ora' from pfile;
Standby Side: Edit pfile copied from RAC01, and Create spfile from pfile
Copy the above pfile (initpcard1.ora) to Standby (/oracle/product/10.2.0/db_1/dbs/pfilestandby.ora), edit or add the following parameters:
*.db_name='pcard'
*.DB_UNIQUE_NAME=pcarddg
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(pcard,pcarddg)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oracle/oradata/pcard/arch/ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pcarddg'
*.LOG_ARCHIVE_DEST_2='SERVICE=pcard VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pcard'
*.LOG_ARCHIVE_DEST_STATE_1=ENABLE
*.LOG_ARCHIVE_DEST_STATE_2=ENABLE
*.REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
# Add the following parameters for switchover
*.FAL_SERVER=pcard
*.FAL_CLIENT=pcarddg
*.STANDBY_ARCHIVE_DEST='/oracle/oradata/pcard/arch'
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.DB_FILE_NAME_CONVERT='+DATADG/PCARD/','/oracle/oradata/pcard/'
*.LOG_FILE_NAME_CONVERT='+DATADG/PCARD/ONLINELOG','/oracle/oradata/pcard/onlinelog'
export ORACLE_SID=pcarddg
startup nomount pfile='/oracle/product/10.2.0/db_1/dbs/pfilestandby.ora'
create spfile='/oracle/product/10.2.0/db_1/dbs/spfilepcarddg.ora' from pfile='/oracle/product/10.2.0/db_1/dbs/pfilestandby.ora';
[oracle@standby admin]$ orapwd file=/oracle/product/10.2.0/db_1/dbs/orapwpcarddg password=nhy67ujm entries=30 force=y
[oracle@standby admin]$ lsnrctl start
[oracle@standby admin]$ rman target sys/nhy67ujm@pcard1 auxiliary sys/nhy67ujm@pcarddg catalog=rmanusr/rman@rmanuat
RMAN> copy current controlfile for standby to '/oracle/oradata/control01.ctl';
Copy control01.ctl from RAC01 to /oracle/oradata/pcard on the standby.
RMAN》 duplicate target database for standby dorecover;
Then the standby DB will be mounted automatically.
SQL> alter database recover managed standby database disconnect from session;
Then the standby DB will receive the archived log from RAC01.
Please note: in most situations the standby DB can only be started to the MOUNT state, so each time we start the standby into mount and execute "alter database recover managed standby database disconnect from session" so that it can receive the primary's archived redo logs.
Check the status of standby DB:
RAC01 Side:
alter system switch logfile;
select max(sequence#) from v$log_history;
MAX(SEQUENCE#)
--------------
84
Standby Side:
select max(sequence#) from v$log_history;
MAX(SEQUENCE#)
--------------
84
It is the same as on the rac01 side, so the standby DB is configured successfully.
Switchover primary to standby
rac01:
SQL> select database_role,switchover_status from v$database;
DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PRIMARY          SESSIONS ACTIVE
(Alternatively, the whole switchover can be driven from the Data Guard broker: DGMGRL> SWITCHOVER TO xx)
standby:
SQL> select database_role,switchover_status from v$database;
DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PHYSICAL STANDBY SESSIONS ACTIVE
rac01:
SQL> alter database commit to switchover to physical standby with session shutdown;
SQL> shutdown immediate
standby:
SQL> select database_role,switchover_status from v$database;
DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PHYSICAL STANDBY TO PRIMARY
SQL> alter database commit to switchover to primary with session shutdown;
SQL> shutdown immediate
SQL> startup
SQL> select database_role,switchover_status from v$database;
DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PRIMARY          SESSIONS ACTIVE
rac01:
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;
SQL> select database_role,switchover_status from v$database;
DATABASE_ROLE    SWITCHOVER_STATUS
---------------- --------------------
PHYSICAL STANDBY SESSIONS ACTIVE
This proves that the switchover was executed successfully.