
Journals

Teradata Database supports tables that are devoted to journaling. A journal is a record of database activity that is used to maintain or restore data integrity. Teradata Database supports several kinds of journaling: some the system performs on its own, while for others you can specify whether journaling is performed.

1. Down AMP Recovery Journal (always occurs)

The Down AMP Recovery Journal (DARJ) is started on all remaining AMPs in the cluster when an AMP is down. Because most clusters contain four AMPs, and all fallback data for a particular AMP remains within its cluster, three AMPs hold fallback rows for the down AMP and can record changes on its behalf.

The DARJ, also known as the RECOVERY JOURNAL, is a special journal used only for FALLBACK rows while an AMP is out of service. Like the TRANSIENT JOURNAL, it gets its space from DBC's PERM space. When an AMP fails, the remaining AMPs in its cluster initiate a DARJ, which keeps track of any changes that would have been written to the failed AMP. When the AMP comes back online, the DARJ catches it up by applying the missed transactions. Once everything is caught up, the DARJ is dropped.

• Is active during an AMP failure only
• Journals fallback tables only (fallback protection is requested per table; see the sketch below)
• Is used to recover the AMP after it is repaired, then is discarded
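
Since the DARJ protects only fallback rows, a table is covered only if it was created with fallback protection. Below is a minimal sketch of a fallback-protected table; the database, table, and column names are illustrative assumptions, not from this post.

-- Hypothetical fallback-protected table. If an AMP in this table's
-- cluster goes down, the surviving AMPs in the cluster start a DARJ
-- and log the changes destined for the down AMP.
CREATE TABLE sales_db.orders ,FALLBACK
    (
      order_id  INTEGER NOT NULL,
      order_amt DECIMAL(12,2)
    )
UNIQUE PRIMARY INDEX (order_id);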

2. Transient Journal (always occurs)

The transient journal maintains data integrity when in-flight transactions are interrupted: data is returned to its original state after a transaction failure.

The transient journal is used during normal system operation to keep "before images" of changed rows so that data can be restored to its previous state if a transaction does not complete. This happens on each AMP as changes occur. When a transaction starts, the system automatically stores a before image of every row affected by the transaction in the transient journal until the transaction completes. Once the transaction completes, the before images are purged.

In the event of a transaction failure, the before images are reapplied to the affected tables and then deleted from the journal, completing the rollback operation (a short sketch follows the list below).


• Logs BEFORE images for transactions
• Is used by the system to roll back transactions aborted either by the user or by the system
Captures:
• Begin/End Transaction indicators
• Before row images for UPDATE and DELETE statements
• Row IDs for INSERT statements
• Control records for CREATE, DROP, DELETE, and ALTER statements
• Keeps each image on the same AMP as the row it describes
• Discards images when the transaction or rollback completes
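
A minimal sketch of this rollback behavior using explicit Teradata-mode transactions (BT/ET); the table and statements are illustrative assumptions, not from this post.

BT;  -- begin transaction: before images start accumulating in the transient journal
UPDATE employee SET salary = salary * 1.10 WHERE dept_no = 100;
DELETE FROM employee WHERE term_date IS NOT NULL;
-- If the transaction aborts here, the system reapplies the before
-- images, returning employee to its state as of BT.
ET;  -- end transaction: the before images are purged

The same journaling happens implicitly for every transaction; BT/ET merely makes the transaction boundaries visible.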

3. Permanent Journal (as specified by the user)

Permanent journals are an optional feature that provides an additional level of data protection. You specify the use of a permanent journal at the table level. It provides full-table recovery to a specific point in time, and it can also reduce the need for costly, time-consuming full-table backups.

Permanent journals are tables stored on the disk arrays just as user data is, so they take up additional disk space on the system. The database administrator maintains the permanent journal entries.

A database can have at most one permanent journal.

When creating a table with this option, you specify whether the journal captures (a DDL sketch appears after the lists below):

1. Before Images
2. After Images

• Is available for tables or databases
• Can contain before images, which permit rollback, or after images, which permit rollforward, or both before and after images
• Provides rollforward recovery
• Provides rollback recovery
• Provides full recovery of nonfallback tables
• Reduces need for frequent, full-table archives
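
A minimal sketch of how these options appear in DDL: the journal itself is a table defined at the database level and referenced by table-level journaling options. All names and sizes here are illustrative assumptions, not from this post.

-- Hypothetical database that owns a permanent journal table.
CREATE DATABASE sales_db AS
    PERM = 10000000000,
    DEFAULT JOURNAL TABLE = sales_db.sales_journal;

-- Hypothetical table that logs both image types: before images
-- (enabling rollback) and dual after images (enabling rollforward).
CREATE TABLE sales_db.orders ,FALLBACK,
    WITH JOURNAL TABLE = sales_db.sales_journal,
    BEFORE JOURNAL,
    DUAL AFTER JOURNAL
    (
      order_id  INTEGER NOT NULL,
      order_amt DECIMAL(12,2)
    )
UNIQUE PRIMARY INDEX (order_id);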
