Posts

Showing posts from July, 2012

HugePages and Concerns in a Linux Environment

HugePages are crucial for faster Oracle database performance on Linux when you have a large amount of RAM and a large SGA. If the combined size of the database SGAs is large (more than 8 GB, though it can matter for smaller sizes too), you will need HugePages configured. Note that it is the size of the SGA that matters.

Applies to:
- Linux OS - Version Enterprise Linux 4.0 to Oracle Linux 6.0 with Unbreakable Enterprise Kernel [2.6.32] [Release RHEL4 to OL6]
- Oracle Server - Enterprise Edition - Version 9.2.0.1 and later
- Linux x86-64: Oracle Linux, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES)

HugePages is a delicate feature, and one should understand certain concerns before configuring it.

Concerns before setting HugePages in a Linux environment: the configuration is based on the RAM installed and the combined SGA size of the database instances you are running. It must therefore be revisited when:
- The amount of RAM installed for the Linux OS changes
- New database instance(s) are introduced
- SGA size / conf
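The sizing concern above can be sketched as a small calculation. This is an illustrative estimate only, assuming the common 2 MiB huge page size on x86-64 (check `Hugepagesize` in /proc/meminfo on the real host); the SGA sizes in the example are hypothetical.

```python
import math

HUGEPAGE_SIZE_MIB = 2  # assumed default; verify against /proc/meminfo

def nr_hugepages(sga_sizes_mib):
    """Estimate vm.nr_hugepages needed to back all instance SGAs,
    rounding each instance up to a whole huge page."""
    return sum(math.ceil(sga / HUGEPAGE_SIZE_MIB) for sga in sga_sizes_mib)

# Two instances with 8 GiB and 4 GiB SGAs:
# (8192 + 4096) MiB / 2 MiB per page = 6144 pages
print(nr_hugepages([8192, 4096]))  # -> 6144
```

Because the result depends on both total RAM and the combined SGAs, any of the changes listed above (more RAM, a new instance, a resized SGA) means recomputing and resetting vm.nr_hugepages.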

Memory Parameter usage in Oracle 11gR2

MEMORY_TARGET specifies the Oracle system-wide usable memory. With MEMORY_MAX_TARGET you decide on the maximum amount of memory you would want to allocate to the database for the foreseeable future. The database tunes memory to the MEMORY_TARGET value, reducing or enlarging the SGA and PGA as needed.

In a text initialization parameter file, if you omit the line for MEMORY_MAX_TARGET and include a value for MEMORY_TARGET, the database automatically sets MEMORY_MAX_TARGET to the value of MEMORY_TARGET. If you omit the line for MEMORY_TARGET and include a value for MEMORY_MAX_TARGET, the MEMORY_TARGET parameter defaults to zero.

Prior to Oracle 11g, the DBA set the SGA_TARGET and SGA_MAX_SIZE parameters, allowing Oracle to reallocate RAM within the SGA. The PGA was independent, governed by the PGA_AGGREGATE_TARGET parameter. Now in Oracle 11g we see the MEMORY_MAX_TARGET parameter, which g
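The defaulting rules for a text initialization parameter file can be sketched as follows. The function and its dict-based "pfile" are illustrative, not an Oracle API; values are in MiB for the example.

```python
def resolve_memory_params(pfile):
    """Apply the pfile defaulting rules described above:
    - missing MEMORY_MAX_TARGET inherits MEMORY_TARGET's value
    - missing MEMORY_TARGET defaults to zero"""
    target = pfile.get("memory_target")
    max_target = pfile.get("memory_max_target")
    if max_target is None and target is not None:
        max_target = target
    if target is None and max_target is not None:
        target = 0
    return {"memory_target": target, "memory_max_target": max_target}

print(resolve_memory_params({"memory_target": 4096}))
# -> {'memory_target': 4096, 'memory_max_target': 4096}
print(resolve_memory_params({"memory_max_target": 8192}))
# -> {'memory_target': 0, 'memory_max_target': 8192}
```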

Decision Criteria for db_file_multiblock_read_count

DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on factors such as the size of the table, the multiblock read count, and whether parallel execution is being used for the operation. The maximum value is the operating system's maximum I/O size expressed in Oracle blocks ((max I/O size) / DB_BLOCK_SIZE). According to Oracle, the formula for setting the parameter is:

    DB_FILE_MULTIBLOCK_READ_COUNT = (max I/O chunk size) / db_block_size

If you set this parameter to a value greater than the maximum, Oracle uses the maximum. The setting of DB_FILE_MULTIBLOCK_READ_COUNT dictates how many I/O calls will be required
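The formula and the capping behavior can be sketched numerically. This is a sketch under stated assumptions: the 1 MiB maximum I/O size is a hypothetical example value, not a measured OS limit.

```python
def effective_multiblock_read_count(requested, max_io_bytes, db_block_size):
    """Cap the requested multiblock read count at the OS maximum
    expressed in database blocks: (max I/O size) / DB_BLOCK_SIZE."""
    maximum = max_io_bytes // db_block_size
    # Oracle uses the maximum if the requested value exceeds it.
    return min(requested, maximum)

# Assumed 1 MiB max I/O with 8 KiB blocks -> cap of 128 blocks per read
print(effective_multiblock_read_count(256, 1_048_576, 8192))  # -> 128
print(effective_multiblock_read_count(64, 1_048_576, 8192))   # -> 64
```

A full scan of an N-block table then needs roughly N divided by this effective value I/O calls, which is why a larger (but OS-supported) setting reduces the call count.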

Decision Criteria for Memory Parameters in OLTP and Reporting Environments

In a transaction (OLTP) environment, multiple copies of a block exist in the Database Buffer Cache, so it is advisable to have a larger SGA_TARGET / MEMORY_TARGET than in a reporting environment. In a reporting environment, by contrast, multiple copies of a block do not exist in the Buffer Cache; instead, more sorting and searching operations take place, so a larger PGA is required.
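The trade-off above can be illustrated as a budget split. The 70/30 and 40/60 ratios are hypothetical starting points chosen for illustration, not Oracle recommendations; the function name and values are assumptions.

```python
def suggest_split(total_mib, workload):
    """Illustrative split of a memory budget between SGA and PGA targets:
    OLTP favours the buffer cache (SGA), reporting favours work areas (PGA)."""
    sga_percent = {"oltp": 70, "reporting": 40}[workload]
    sga = total_mib * sga_percent // 100  # integer math avoids float rounding
    return {"sga_target_mib": sga, "pga_aggregate_target_mib": total_mib - sga}

print(suggest_split(10240, "oltp"))
# -> {'sga_target_mib': 7168, 'pga_aggregate_target_mib': 3072}
print(suggest_split(10240, "reporting"))
# -> {'sga_target_mib': 4096, 'pga_aggregate_target_mib': 6144}
```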