System Requirements and Recommendations

The standby is maintained as a mirror image of the primary, so many aspects of the primary and standby databases need to be the same.

  • Primary and standby must have the same DB2 major version. For example, both on V10.1.
    • The standby database fix pack level must be the same as or higher than the primary's (otherwise, the primary could generate log records that the standby cannot replay).
    • The same fix pack level is recommended on primary and standby, to minimize compatibility risk. Different primary and standby fix pack levels usually occur only during a rolling update (a quick way to check levels on both hosts is sketched after this list).
  • Primary and standby must have the same platform.
    • "Platform" here is defined as the combination of OS type (software) and machine architecture (hardware). For example, the following are considered distinct platforms: Windows-x86, AIX-Power, HP-IA, Solaris-SPARC, Solaris-x86, Linux-PPC, Linux-Z, Linux-390, Linux-x86.
    • Primary and standby must have the same endianness (both big endian or both little endian). This is usually already satisfied by the platform requirement.
  • The same OS version (major and minor) is recommended on primary and standby. Different versions usually occur only during a rolling update. DB2 does not enforce any check on the OS version, but you should keep the window of differing versions as short as possible to minimize compatibility risk.
  • The DB2 software on primary and standby must have the same bit size (both 64 bit, or both 32 bit).
  • The same bit size on the host platform is recommended, to minimize compatibility risk.
    • The host platform bit size can differ. For example, DB2 is 32 bit on both machines, the primary host is 64 bit (able to run both 64 bit and 32 bit applications), and the standby host is 32 bit.
  • Primary and standby must have the same paths for tablespace containers, to support tablespace replication.
    • The container path requirement can often be satisfied with symbolic links. The standby devices should have the same or larger capacity (see the sketch after this list).
    • Redirected restore is not supported when creating the standby. However, changes to the database directory (which holds the database metadata files) and to the transaction log directory are supported during the restore. Table space containers created with relative paths will be restored to paths relative to the new database directory.
  • The same hardware (CPU, memory, disk, etc.) is recommended on the primary and standby, so that the standby has enough power for replay. Plan and test sufficiently before deploying a less powerful standby.
  • The same amount of memory is recommended on the primary and standby, so that buffer pool replication is less likely to fail.
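
The version, fix pack, and bit size requirements above can be checked with the db2level command on each host. A minimal sketch, run as the instance owner on each machine (nothing here is specific to any particular environment):

    # On the primary host:
    db2level    # reports the DB2 version, fix pack level, and whether the instance uses 32 or 64 bits

    # On the standby host:
    db2level    # the major version must match the primary; the fix pack level must be the same or higher

During a rolling update, repeat the check after each host is updated and keep the window in which the levels differ as short as possible.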
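
The container path and restore notes above can be illustrated with a short sketch. The database name, backup location, timestamp, and paths below are placeholders; TO (new database directory) and NEWLOGPATH (new log directory) are standard RESTORE DATABASE options, but verify the exact syntax against the Command Reference for your DB2 release:

    # On the standby host: if the primary's container path does not exist locally,
    # point it at an equivalent device (same or larger capacity) with a symbolic link.
    ln -s /standby_data/containers /primary_data/containers

    # Restore the backup image taken on the primary. Redirected restore is not supported,
    # but the database directory and the transaction log directory may be changed:
    db2 "RESTORE DATABASE SAMPLE FROM /backups TAKEN AT 20240101120000
         TO /standby_dbdir NEWLOGPATH /standby_logs WITHOUT PROMPTING"

Table space containers that were created with relative paths end up relative to the new database directory given in the TO clause.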
