
Autonomous Transaction

In Oracle's database products, an autonomous transaction is an independent transaction that is initiated from within another transaction.

You can declare a PL/SQL block as an autonomous transaction by including the PRAGMA AUTONOMOUS_TRANSACTION compiler directive in its declaration section.

When an autonomous transaction is called, the original transaction (calling transaction) is temporarily suspended. The autonomous transaction must commit or roll back before it returns control to the calling transaction.
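The pragma is most often placed in the declaration section of a standalone or packaged procedure. The following is only a minimal sketch of that pattern; the log_message procedure and the app_log table are hypothetical names used for illustration and are not part of the example that follows.

-- Hypothetical logging procedure; assumes an app_log table with (log_time, message) columns.
CREATE OR REPLACE PROCEDURE log_message (p_message IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO app_log (log_time, message)
  VALUES (SYSTIMESTAMP, p_message);
  COMMIT; -- commits only this autonomous transaction, not the caller's work
END log_message;
/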

The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.

CREATE TABLE at_test (
  id          NUMBER       NOT NULL,
  description VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2

2 rows selected.

SQL>
Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.

DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 3 .. 10 LOOP
    INSERT INTO at_test (id, description)
    VALUES (i, 'Description for ' || i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

10 rows selected.

SQL>
As expected, we now have 10 rows in the table. If we now issue a rollback statement, we get the following result.

ROLLBACK;
SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

8 rows selected.

SQL>
The two rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so its internal commit did not affect the calling session. As a result, the rollback was still able to undo the DML issued directly by the current session.
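
A common practical use of this behavior is error or audit logging that must survive a rollback of the main transaction. Assuming the hypothetical log_message procedure sketched earlier, the pattern looks like this:

BEGIN
  INSERT INTO at_test (id, description) VALUES (11, 'Description for 11');
  log_message('Inserted row 11');  -- committed independently by the autonomous transaction
  ROLLBACK;                        -- undoes the insert into at_test, but the log row remains
END;
/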
