
How Smart Flash Cache works in RAC

The Database Smart Flash Cache is not shared across RAC nodes; it is private to each RAC instance. For that reason, each RAC instance must define its own local flash cache file path. When the flash cache is managed by ASM, a separate diskgroup is needed for the flash cache of each instance.
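As a rough sketch of what this looks like in practice (the instance names, device paths, and size below are illustrative assumptions, not from any particular system), each instance gets its own flash cache file via SID-qualified parameters:

-- Hypothetical example: each RAC instance points to its own flash device
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash_disk1' SCOPE=SPFILE SID='orcl1';
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash_disk2' SCOPE=SPFILE SID='orcl2';
ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE=SPFILE SID='*';
-- The instances must be restarted for the flash cache settings to take effect.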

Oracle DB Smart Flash Cache - How it works

Oracle’s Database Smart Flash Cache functions as a victim cache, which means that it stores clean (unmodified) database blocks that have been evicted from the SGA buffer cache to make room for other blocks. If a block selected for eviction has been modified, the dirty block is first written to disk by one of the database writer (DBWR) processes, then it is written to the Database Smart Flash Cache. Blocks can later be retrieved from the Database Smart Flash Cache and returned to the SGA as required. If a block cannot be found either in the SGA buffer cache or the Database Smart Flash Cache, it will be retrieved from disk.

Oracle Database Smart Flash Cache

1. Oracle Database 11g Release 2 introduced a new database feature: Database Smart Flash Cache.
2. This feature is available on Solaris and Oracle Enterprise Linux and allows customers to increase the effective size of the Oracle database buffer cache without adding more main memory to the system.
3. The idea is to use a flash card as a second-level cache for the Oracle database.
4. Oracle looks for blocks in the flash cache before issuing a physical I/O.
5. It keeps only clean blocks. (A minimal configuration sketch follows.)
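A minimal single-instance sketch of enabling the feature (the device path and size are assumptions for illustration only):

-- Point the flash cache at a flash device/file and give it a size
ALTER SYSTEM SET db_flash_cache_file = '/dev/sdf1' SCOPE=SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 32G SCOPE=SPFILE;
-- After an instance restart, flash cache reads typically show up under
-- the 'db flash cache single block physical read' wait event in AWR.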

Teradata Parallel Data Pump

Teradata TPump uses standard SQL (not block transfers) to maintain data in tables. TPump also contains a resource-governing mechanism whereby you can control the use of system resources by specifying how many inserts and updates occur minute by minute. This allows background maintenance for insert, delete, and update operations to take place at any time of day while Teradata Database is in use. TPump provides the following capabilities:
• Has no limitations on the number of instances running concurrently.
• Uses conventional row hash locking, which provides some amount of concurrent read and write access to the target tables.
• Supports the same restart, portability, and scalability as Teradata MultiLoad.
• Performs data upserts (see the sketch below).
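Because TPump drives the database with ordinary SQL, the upsert it performs can be pictured as plain DML. A hand-written Teradata-style atomic upsert, using a made-up table and columns, might look like this:

-- Illustrative only: emp, emp_id, emp_name, and salary are hypothetical
UPDATE emp
SET salary = 55000
WHERE emp_id = 1001
ELSE INSERT INTO emp (emp_id, emp_name, salary)
VALUES (1001, 'Jones', 55000);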

Teradata MultiLoad

Teradata MultiLoad supports bulk inserts, updates, and deletes against initially unpopulated or populated database tables. Both the client and server environments support Teradata MultiLoad. Teradata MultiLoad can:
• Run against multiple tables.
• Perform block transfers with multisession parallelism.
• Load data from multiple input source files.
• Pack multiple SQL statements and associated data into a request.
• Perform data upserts.

Teradata FastLoad

Teradata FastLoad loads data into unpopulated tables only. Both the client and server environments support Teradata FastLoad. Teradata FastLoad can:
1. Load data into an empty table. FastLoad loads data into one table per job; if you want to load data into more than one table, you can submit multiple FastLoad jobs.
2. Perform block transfers with multisession parallelism.

Teradata FastExport

Teradata FastExport extracts large quantities of data in parallel from Teradata Database to a client. The utility is the functional complement of the FastLoad and MultiLoad utilities. Teradata FastExport can:
• Export tables to client files.
• Export data to an Output Modification (OUTMOD) routine. You can write an OUTMOD routine to select, validate, and preprocess exported data.
• Perform block transfers with multisession parallelism.

Journals

Teradata Database supports tables that are devoted to journaling. A journal is a record of some kind of activity. Teradata Database supports several kinds of journaling: the system does some journaling on its own, while you can specify whether to perform other journaling.

1. Down AMP Recovery Journal (always occurs). The Down AMP Recovery Journal (DARJ), also known as the RECOVERY JOURNAL, is a special journal used only for FALLBACK rows when an AMP is down. When an AMP fails, the remaining AMPs in its cluster start a DARJ. Since most clusters contain four AMPs and all fallback data for a particular AMP stays within its cluster, three AMPs hold the fallback rows for the down AMP and watch over it. Like the TRANSIENT JOURNAL, the DARJ gets its space from DBC's PERM space. The DARJ keeps track of any changes that would otherwise have been written to the down AMP, so they can be applied when that AMP returns to service.

Fallback Table

A fallback table is a duplicate copy of a primary table. Each fallback row is stored on an AMP different from the one to which the primary row hashes, so the fallback copy of any row is never on the same AMP as its primary copy. This is an entry-level fault tolerance strategy. The disadvantage of fallback is that it doubles the storage space and the I/O (on INSERT, UPDATE, and DELETE statements) for tables. The advantage is that data is almost never unavailable because of one down AMP: data remains fully available during an AMP or disk outage, and recovery is automatic after repairs have been made.

What are the different types of vprocs available in Teradata

• AMP: Access module processors perform database functions, such as executing database queries. Each AMP owns a portion of the overall database storage.
• GTW: Gateway vprocs provide a socket interface to Teradata Database.
• Node: The node vproc handles PDE and operating system functions not directly related to AMP and PE work. Node vprocs cannot be externally manipulated and do not appear in the output of the Vproc Manager utility.
• PE: Parsing engines perform session control, query parsing, security validation, query optimization, and query dispatch.
• RSG: Relay Services Gateway provides a socket interface for the replication agent, and for relaying dictionary changes to the Teradata Meta Data Services utility.
• TVS (formerly VSS): Manages Teradata Database storage. AMPs acquire their portions of database storage through the TVS vproc.

How to manage the Tuning and Diagnostic licenses

CONTROL_MANAGEMENT_PACK_ACCESS controls access to the Diagnostic Pack and Tuning Pack. This parameter can be set to one of three values (see the example below):
• DIAGNOSTIC+TUNING: Diagnostic Pack and Tuning Pack functionality is enabled in the database server.
• DIAGNOSTIC: Only Diagnostic Pack functionality is enabled in the server.
• NONE: Diagnostic Pack and Tuning Pack functionality is disabled in the database server.
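For example, the setting can be checked and changed with standard commands (licensing for the packs must of course be in place before enabling them):

-- Check the current setting (SQL*Plus)
SHOW PARAMETER control_management_pack_access

-- Enable both packs
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC+TUNING' SCOPE=BOTH;

-- Or disable both packs
ALTER SYSTEM SET control_management_pack_access = 'NONE' SCOPE=BOTH;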

Limitation on SQL Server 2008 Standard Edition w.r.t Enterprise Edition

1. CPU: SQL Server 2008 Standard Edition is limited to 4 CPUs, whereas Enterprise Edition is unlimited.
2. Multi-instance support: Standard Edition is limited to 16 instances, whereas Enterprise Edition supports 50.
3. Clustering: Standard Edition is limited to 2-node failover clustering, whereas Enterprise Edition supports 16 nodes.
4. Auditing and data encryption: these features are limited in Standard Edition compared to Enterprise Edition.

Workstation Types and Available Platforms

Workstations provide a window into the inner workings of Teradata Database. The available workstation types are described below.

System Console. The system console:
• Provides an input mechanism for the system and database administrators.
• Displays system status.
• Displays the current system configuration.
• Displays performance statistics.
• Allows you to control various utilities.

Administration Workstation (AWS). The AWS can do everything a System Console can do, plus:
• Provide a single-system view in the multinode environment.
• Monitor system performance.

Teradata Database File System

The file system is a layer of software between Teradata Database and PDE. File system service calls allow Teradata Database to store and retrieve data efficiently and with integrity without being concerned about the specific low-level operating system interfaces.

Parallel Database Extensions (PDE)

Parallel Database Extensions (PDE) is a software interface layer that lies between the operating system and Teradata Database. PDE supports the parallelism that gives Teradata Database its speed and linear scalability. The operating system can be Linux or Microsoft Windows. PDE provides Teradata Database with the ability to:
• Run in a parallel environment
• Execute vprocs
• Apply a flexible priority scheduler to Teradata Database sessions
• Consistently manage memory, I/O, and messaging system interfaces across multiple OS platforms

Hot Standby Nodes

Hot standby nodes allow spare nodes to be incorporated into the production environment. Teradata Database can use spare nodes to improve availability and maintain performance levels in the event of a node failure. A hot standby node is a node that:
• Is a member of a clique.
• Does not normally participate in Teradata Database operations.
• Can be brought in to participate in Teradata Database operations to compensate for the loss of a node in the clique.
Configuring a hot standby node can eliminate the system-wide performance degradation associated with the loss of a node. A hot standby node is added to each clique in the system. When a node fails, all AMPs and all LAN-attached PEs on the failed node migrate to the node designated as the hot standby. The hot standby node becomes a production node. When the failed node returns to service, it becomes the new hot standby node.

CLIQUE

The clique is a feature of some MPP systems that physically groups nodes together by multiported access to common disk array units. Inter-node disk array connections are made using FibreChannel (FC) buses. A clique is the mechanism that supports the migration of vprocs under PDE following a node failure. If a node in a clique fails, its vprocs migrate to other nodes in the clique and continue to operate while recovery occurs on their home node. PEs that manage physical channel connections cannot migrate because they are dependent on the hardware that is physically attached to the node to which they are assigned. PEs for LAN-attached connections do migrate when a node failure occurs, as do all AMPs.

VDISK

The group of cylinders currently assigned to an AMP is referred to as a vdisk, although the actual physical storage may derive from several different storage devices.

BYNET

At the most elementary level, you can look at the BYNET as a switched fabric that loosely couples all the SMP nodes in a multinode system. But the BYNET has capabilities that range far beyond those of a simple system bus. The BYNET possesses high-speed logic that provides bi-directional broadcast, multicast, and point-to-point communication and merge functions. A multinode system has at least two BYNETs. This creates a fault-tolerant environment and enhances interprocessor communication. Load-balancing software optimizes transmission of messages over the BYNETs. If one BYNET should fail, the second can handle the traffic.

Teradata is a shared nothing database architecture

Teradata is a shared nothing database architecture. It is this architecture that affords Teradata systems their scalability. The PE and AMP vprocs in the Teradata architecture share neither memory nor disk across CPU units. These platforms use virtual processors (vprocs) that run a set of software processes on a node under the Parallel Database Extensions (PDE).

What are the data centre savings using Oracle Advanced Compression Technology

To estimate savings from Advanced Compression, the following cost elements need to be considered:
• Size of the production database (a rough sizing query is sketched below)
• Size of other copies (staging, standbys, QA, development, backup, etc.)
• Excess storage capacity provisioned for future growth (this should also go down with compression; the excess capacity provisioned for a 6 TB database will be much more than the excess capacity provisioned for a 2 TB compressed database)
• Data center overheads (cooling, power, floor space, administration, etc.)
Apart from storage cost savings, Advanced Compression also improves memory efficiency and boosts query performance in many cases.
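As a rough starting point for the first element, the allocated space of the production database can be pulled from the data dictionary; a simple approximation (it counts allocated segment space only) is:

-- Approximate database footprint in GB, based on allocated segments
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) AS allocated_gb
FROM dba_segments;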

What is the benefit of using Oracle Advanced Compression Option while cost of storage is falling day by day

While the cost of storage on a per-unit (GB) basis is falling, data volumes are growing at a much faster rate. By industry estimates, databases are growing to two or three times their original size every couple of years. Thus, on a total basis, storage expenditure for organizations is still increasing. Further, data center overheads such as cooling, power, floor space, and storage administration continue to grow as hardware is added to handle higher data volumes. By reducing the need to add new hardware, Advanced Compression helps in managing these overheads. Another important benefit is in the performance area. A major bottleneck for many systems is I/O bandwidth. Advanced Compression can help alleviate that bottleneck in many cases by reducing the amount of data that needs to be transferred across the I/O channel, and it can further boost performance through improved memory efficiency.

Technology used in Oracle Advanced Compression

For compression of table data, Advanced Compression extends industry standard algorithms to compress data at a block level. Repeating data patterns are stored in the block header as symbols, and occurrences of such data in the block are replaced with pointers to symbols. For unstructured data compression, backup compression and network compression, Advanced Compression uses industry standard algorithms that work on a similar concept of eliminating data redundancies. Oracle Database does not need to decompress table blocks when reading data. Oracle can keep blocks compressed in memory and read them directly. Hence, more data can be packed in memory which results in improved cache hit ratio and reduced I/O.

Key Benefits of Oracle Advanced Compression

1. 2-4X reduction in storage across all environments such as production, standby, test, development, backup, etc.
2. Improved memory efficiency as data remains compressed in memory
3. Minimal or no performance impact on DML operations due to performance-optimized compression technology
4. Faster queries in many cases, due to improved I/O and memory efficiency
5. Better network bandwidth utilization
6. Transparent to applications - no application changes required

Oracle Advanced Compression

Advanced Compression, an option introduced in Oracle Database 11g Enterprise Edition, offers a comprehensive set of compression capabilities to help organizations reduce costs while maintaining or improving performance. It significantly reduces the storage footprint of databases through compression of structured data (numbers, characters) as well as unstructured data (documents, spreadsheets, XML and other files). It provides enhanced compression for database backups and also includes network compression capabilities for faster synchronization of standby databases. Compression can be applied at the following levels (examples follow the list):
1. Table level
2. SecureFiles level
3. RMAN level
4. Data Pump level
5. Data Guard network compression level
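A sketch of what enabling compression looks like at a few of these levels (object names are hypothetical, and the exact clauses vary by release):

-- 1. Table level: OLTP table compression (11g syntax)
CREATE TABLE orders_hist COMPRESS FOR OLTP
AS SELECT * FROM orders;

-- 2. SecureFiles level: compress LOB data
CREATE TABLE docs (id NUMBER, body CLOB)
LOB (body) STORE AS SECUREFILE (COMPRESS MEDIUM);

-- 3. RMAN level: compressed backup sets
-- RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;

-- 4. Data Pump level: compress both data and metadata in the dump file
-- expdp hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp COMPRESSION=ALL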

What is Times Ten

TimesTen is a high performance event-processing software component that enables applications to capture, store, use, and distribute information in real-time, while preserving transactional integrity and continuous availability. TimesTen is designed to operate most efficiently in an application’s address space. Using standard interfaces, TimesTen can be integrated into an application to serve as either a stand-alone relational database management system (RDBMS) or an application-tier cache that works in conjunction with a traditional disk-based RDBMS, such as the Oracle database. TimesTen can be configured to operate entirely in memory, or it can be configured for disk-based environments to log and checkpoint data to disk.

Oracle Times Ten

Oracle TimesTen In-Memory Database (TimesTen) is a full-featured, memory-optimized, relational database with persistence and recoverability. It provides applications with the instant responsiveness and very high throughput required by database-intensive applications. Deployed in the application tier, TimesTen operates on databases that fit entirely in physical memory (RAM). Applications access the TimesTen database using standard SQL interfaces. For customers with existing application data residing on the Oracle Database, TimesTen is deployed as an in-memory cache database with automatic data synchronization between TimesTen and the Oracle Database.

High 'cursor: pin S wait on X' and/or 'library cache lock' Waits.

Symptoms: A spike in "cursor: pin S wait on X" or "library cache lock" waits may be seen. This is more likely in an OLTP environment where both the shared pool and buffer cache are in demand. The problem happens randomly and intermittently.

Cause: With automatic memory management enabled, frequent alternating shrink and grow operations on the buffer cache and shared pool may be seen, causing various waits in sessions and concurrency issues. The frequent resizing of the shared pool and buffer cache causes contention, so waits on "cursor: pin S wait on X" and "library cache lock" may be seen.

Solution:
Option 1: Disable automatic memory management by setting SGA_TARGET=0.
Option 2: The following parameter can be set dynamically:
alter system set "_memory_broker_stat_interval"=999;
This increases the time between resize operations to at least 999 seconds, thereby reducing the number of resize operations.
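To confirm that resize churn is really the culprit before applying either option, the recent resize history can be checked (V$SGA_RESIZE_OPS for ASMM, or V$MEMORY_RESIZE_OPS when MEMORY_TARGET is in use); for example:

-- How often have the shared pool and buffer cache been resized?
SELECT component, oper_type, COUNT(*) AS ops
FROM v$sga_resize_ops
GROUP BY component, oper_type
ORDER BY ops DESC;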

DB Time - Performance Focus Area

DB Time = sum of DB time over all sessions: the time spent by clients in database calls.
• Foreground session time
• Active = on CPU or in a (non-idle) wait
DB time is also the time spent servicing calls, so it grows with load and with degraded performance:
• More users => more calls => DB time increases
• Larger transactions => longer calls => DB time increases
• I/O performance degrades => I/O time increases => DB time increases
• Application performance degrades => wait time increases => DB time increases
• Host is CPU-bound => foregrounds accumulate active run-queue time, wait event times are artificially inflated => DB time increases
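Instance-wide DB time can be read directly from the time model; a simple query (values are cumulative since instance startup and reported in microseconds):

-- DB time and DB CPU since startup, converted to seconds
SELECT stat_name, ROUND(value / 1000000) AS seconds
FROM v$sys_time_model
WHERE stat_name IN ('DB time', 'DB CPU');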