
Wait Events - Analysis

Log Buffer Space

This wait occurs because sessions are writing redo into the log buffer faster than LGWR can write it to the redo log files.

Log Switches are very slow

Solution

1. Increase the log buffer (LOG_BUFFER)
2. Increase the redo log file size
3. Use faster disks for the redo logs
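The first step can be checked and applied as follows; this is a sketch, and the 32 MB value is illustrative, not a recommendation:

```sql
-- How often has the instance waited on this event since startup?
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'log buffer space';

-- Increase the log buffer (static parameter; takes effect after restart)
ALTER SYSTEM SET log_buffer = 33554432 SCOPE = SPFILE;
```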

Logfile Switch

All commit requests are waiting on log file switch (archiving needed) or log file switch (checkpoint incomplete).

Cause

1. The archive destination disk may be full or slow

2. DBWR may be too slow due to I/O

3. Even though a logfile has been archived, its contents still need to be
written to disk via a checkpoint before it can be reused. This ensures
that instance recovery is possible (otherwise you would have to do media
recovery after a shutdown abort in some cases).

4. So, what is happening is that your disks are not keeping up with the
checkpoint volume. Until the checkpoints can complete, the database will
simply stall and wait.

Solution

1. You may need to add more and/or larger redo log groups

2. Add more DBWR processes if DBWR is the bottleneck
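Both solutions can be sketched as below; group numbers, the file path, and sizes are illustrative:

```sql
-- Add a new, larger redo log group
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db/redo04.log') SIZE 512M;

-- An old group can be dropped only once it is INACTIVE (check v$log)
SELECT group#, status FROM v$log;
ALTER DATABASE DROP LOGFILE GROUP 1;

-- Add DBWR processes (static parameter; takes effect after restart)
ALTER SYSTEM SET db_writer_processes = 2 SCOPE = SPFILE;
```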

Log File Sync

When a user session commits, the session's redo information must be
flushed from memory (the log buffer) to the redo logfile on disk.
The committing session waits on the log file sync event until LGWR
completes the write.

Solution

1. Where possible, reduce the commit frequency, e.g. commit at batch intervals

2. Speed up redo writing, e.g. avoid RAID 5 for redo logs and use fast disks

3. Tune LGWR throughput to disk, e.g. place redo logs on dedicated devices
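Solution 1 can be sketched in PL/SQL; the table, row count, and batch size of 1,000 are illustrative:

```sql
-- Commit at batch intervals instead of once per row
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO t (id) VALUES (i);
    IF MOD(i, 1000) = 0 THEN
      COMMIT;  -- one log file sync wait per 1,000 rows, not per row
    END IF;
  END LOOP;
  COMMIT;  -- flush the final partial batch
END;
/
```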

Free Buffer

This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the buffer cache (DB_CACHE_SIZE), if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is flooding the buffer cache with index blocks.

Solution

1. Tune the offending queries

2. Accelerate checkpointing (more frequent or incremental checkpoints)

3. Add DBWR processes or increase the number of physical disks
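Before growing the cache, the buffer cache advisory can show whether it would actually help; the 2G value is illustrative:

```sql
-- Current free buffer waits
SELECT event, total_waits
FROM   v$system_event
WHERE  event = 'free buffer waits';

-- Estimated physical reads at candidate cache sizes
SELECT size_for_estimate, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT';

-- Grow the cache if the advisory supports it
ALTER SYSTEM SET db_cache_size = 2G SCOPE = BOTH;
```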

DB file scattered read

Indicates many full table scans (multiblock reads)

1. Tune the code

2. Cache small, frequently scanned tables
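Both steps can be sketched as follows; the table name is illustrative:

```sql
-- Statements doing the most disk reads are the usual full-scan suspects
SELECT *
FROM  (SELECT sql_id, disk_reads, executions, sql_text
       FROM   v$sql
       ORDER  BY disk_reads DESC)
WHERE  ROWNUM <= 10;

-- Keep a small, frequently scanned table at the hot end of the cache
ALTER TABLE small_lookup CACHE;
```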

DB file sequential read

Indicates many index scans (single-block reads)

1. Tune the code (especially joins)

2. Check to ensure index scans are necessary
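To check whether an index scan is actually appropriate, inspect the execution plan; the emp/dept tables here are illustrative:

```sql
EXPLAIN PLAN FOR
SELECT e.ename, d.dname
FROM   emp e JOIN dept d ON d.deptno = e.deptno;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```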

Buffer Busy Waits

Segment Header : add freelists or freelist groups

Data Block : separate hot data; use reverse key indexes; use a smaller block size; increase INITRANS and/or MAXTRANS

UNDO Header : add rollback segments

UNDO Block : commit more often; use larger rollback segments
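The data block remedies can be sketched as below; table and index names are illustrative, and note that raising INITRANS on an existing table affects only newly formatted blocks:

```sql
-- Reverse-key index: spreads sequence-driven inserts across leaf blocks
-- instead of hammering the rightmost block
CREATE INDEX emp_id_rix ON emp (emp_id) REVERSE;

-- More ITL slots per block for concurrent DML
ALTER TABLE emp INITRANS 4;
```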
