

Showing posts from April, 2011

Got problem Package :cvuqdisk - 1.0.9-1 Failed

Install the operating system package cvuqdisk. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run it. Use the cvuqdisk RPM built for your hardware (for example, x86_64 or i386). To install the cvuqdisk RPM, complete the following procedure:

1. Locate the cvuqdisk RPM package, which is in the rpm directory on the installation media. If you have already installed Oracle Grid Infrastructure, it is located in the directory grid_home/rpm.
2. Copy the cvuqdisk package to each node of the cluster. Ensure that each node is running the same version of Linux.
3. Log in as root.
4. Use the following command to check whether an existing version of the cvuqdisk package is installed:
   # rpm -qi cvuqdisk
5. If an existing version is installed, remove it with:
   # rpm -e cvuqdisk
6. Set the environment variable …
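Putting the steps together, a minimal install sequence as root on each node looks roughly like the following. This is a sketch: it assumes the inventory group is oinstall and that the package file is cvuqdisk-1.0.9-1.rpm (the version named in the error above); the cd path is a placeholder for wherever the rpm directory lives on your media or grid home.

# rpm -qi cvuqdisk                    # is an older cvuqdisk already installed?
# rpm -e cvuqdisk                     # remove it only if the query above found one
# export CVUQDISK_GRP=oinstall        # group that will own cvuqdisk (assumed: oinstall)
# cd /path/to/grid_home/rpm           # placeholder path; use your installation media or grid home
# rpm -iv cvuqdisk-1.0.9-1.rpm        # install the package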

Some new concepts in 11gR2 Rac

Oracle Clusterware and ASM are now installed into the same Oracle home, and this combined installation is now called the Grid Infrastructure install.

For new installs, raw devices are no longer supported for anything (that is, for the Oracle Cluster Registry, the voting disks, or ASM disks). The OCR and the voting disks can now be stored in ASM or on a certified cluster file system.

The redundancy level of the ASM diskgroup you choose to place the voting disks on determines the number of voting disks you get:
- only one voting disk on a diskgroup configured with external redundancy
- only three voting disks on a diskgroup configured with normal redundancy
- only five voting disks on a diskgroup configured with high redundancy

The contents of the voting disks are automatically backed up into the OCR.

ACFS (ASM Cluster File System) is only supported on Oracle Enterprise Linux 5 (and RHEL 5), not on OEL 4.

There is a new service called the Cluster Time Synchronization Service (CTSS) that can keep the clocks of the cluster nodes synchronized.
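To see where the voting disks and the OCR actually ended up on an installed 11gR2 cluster, the standard clusterware tools can be queried; a minimal sketch, run from the Grid Infrastructure home (ocrcheck gives its full integrity check only when run as root):

$ crsctl query css votedisk     # lists each voting disk and the ASM diskgroup holding it
# ocrcheck                      # shows the OCR location(s) and integrity status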

How to check the major number of the devices?

The major number should be the same on all the nodes:

ls -l /dev/oracleasm/disks
brw-rw---- 1 oracle oinstall 253, 66 Mar 4 15:08 ACIQENARCHLOG01
brw-rw---- 1 oracle oinstall 253, 67 Mar 4 15:08 ACIQENARCHLOG02
brw-rw---- 1 oracle oinstall 253, 59 Mar 4 15:08 ACIQENDATA01
brw-rw---- 1 oracle oinstall 253, 60 Mar 4 15:08 ACIQENDATA02
brw-rw---- 1 oracle oinstall 253, 61 Mar 4 15:08 ACIQENDATA03
brw-rw---- 1 oracle oinstall 253, 62 Mar 4 15:08 ACIQENDATA04
brw-rw---- 1 oracle oinstall 253, 64 Mar 4 15:08 ATGQENARCHLOG01
brw-rw---- 1 oracle oinstall 253, 65 Mar 4 15:08 ATGQENARCHLOG02
brw-rw---- 1 oracle oinstall 253, 51 Mar 4 15:08 ATGQENDATA01
brw-rw---- 1 oracle oinstall 253, 52 Mar 4 15:08 ATGQENDATA02
brw-rw---- 1 oracle oinstall 253, 53 Mar 4 15:08 ATGQENDATA03
brw-rw---- 1 oracle oinstall 253, 54 Mar 4 15:08 ATGQENDATA04
brw-rw---- 1 oracle oinstall 253, 43 Mar 4 15:08 CONTROL01
brw-rw---- 1 oracle oinstall 253, 44 Mar 4 15:08 CONTROL02
brw-rw---- 1 oracle oinstal…
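Since only the major number matters here (the first of the two comma-separated numbers, 253 in the listing above), it can be pulled out and compared more easily; a small sketch that relies on the ls -l column layout shown above:

ls -l /dev/oracleasm/disks | awk '/^b/ {print $5, $NF}' | sort    # prints the major number ("253,") and the disk name for every device

Run the same one-liner on every node; the major number column should match across the cluster.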

How to check the ASM configuration on all the nodes?

The ASM library driver configuration should be the same on all the nodes:

[oracle@tlexratgdb1b grid]$ cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver. It is generated
# By running /etc/init.d/oracleasm configure. Please use that method
# to modify this file
#

# ORACLEASM_ENABELED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=oinstall

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""
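A quick way to confirm the file really is identical everywhere is to checksum it on each node and compare the results; a small sketch (the hostnames are taken from the prompts in these posts, so add whatever other nodes your cluster has):

for node in tlexratgdb1b tlexratgdb3b; do
    ssh $node "md5sum /etc/sysconfig/oracleasm"    # identical checksums mean identical ASMLib configuration
done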

How to check the no. of ASM disks assigned on all the nodes of the RAC cluster?

The list of disks should be the same on all the nodes:

[oracle@tlexratgdb3b ~]$ /etc/init.d/oracleasm listdisks
ACIQENARCHLOG01
ACIQENARCHLOG02
ACIQENDATA01
ACIQENDATA02
ACIQENDATA03
ACIQENDATA04
ATGQENARCHLOG01
ATGQENARCHLOG02
ATGQENDATA01
ATGQENDATA02
ATGQENDATA03
ATGQENDATA04
CONTROL01
CONTROL02
REDOLOG01
TEMPREDO01
TEMPREDO02
TEMPREDO03
TEMPREDO04
VOTINGOCR01
VOTINGOCR02
VOTINGOCR03
VOTINGOCR04
VOTINGOCR05
VOTINGOCR06
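Since the question is about the number of disks, piping the listing through wc gives a quick count to compare between nodes; a minimal sketch:

[oracle@tlexratgdb3b ~]$ /etc/init.d/oracleasm listdisks | wc -l    # 25 for the listing above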