
Features not available with Oracle Standard Edition

1. Oracle Data Guard—Redo Apply
2. Oracle Data Guard—SQL Apply
3. Oracle Data Guard—Snapshot Standby
4. Oracle Active Data Guard
5. Rolling Upgrades—Patch Set, Database, and Operating System
6. Online index rebuild
7. Online index-organized table organization
8. Online table redefinition
9. Duplexed backup sets
10. Block change tracking for fast incremental backup
11. Unused block compression in backups
12. Block-level media recovery
13. Lost Write Protection
14. Parallel backup and recovery
15. Tablespace point-in-time recovery
16. Trial recovery
17. Fast-start fault recovery
18. Flashback Table
19. Flashback Database
20. Flashback Transaction
21. Flashback Transaction Query
22. Oracle Total Recall
23. Client Side Query Cache
24. Query Results Cache
25. PL/SQL Function Result Cache
26. In-Memory Database Cache
27. SQL Plan Management
28. Support for Oracle Exadata Storage Server
29. Support for Oracle Exadata Storage Server Software
30. Advanced Security Option
31. Oracle Label Security
32. Virtual Private Database
33. Fine-grained auditing
34. Oracle Database Vault
35. Secure External Password Store
36. Oracle Change Management Pack
37. Oracle Configuration Management Pack
38. Oracle Diagnostic Pack
39. Oracle Tuning Pack
40. Oracle Provisioning and Patch Automation Pack
41. Database Resource Manager
42. Oracle Real Application Testing
43. Oracle Partitioning
44. Oracle OLAP
45. Oracle Data Mining
46. Oracle Advanced Compression
47. Direct-Load Table Compression
48. Bitmapped index, bitmapped join index, and bitmap plan conversions
49. Parallel query/DML
50. Parallel statistics gathering
51. Parallel index build/scans
52. Parallel Data Pump Export/Import
53. Transportable tablespaces, including cross-platform
54. Summary management—Materialized View Query Rewrite
55. Asynchronous Change Data Capture
56. Advanced Replication
57. Oracle Connection Manager
58. InfiniBand Support
59. Oracle Spatial
60. Semantic Technologies (RDF/OWL)
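
Before planning a move from Enterprise Edition to Standard Edition, it helps to know which of these features a database has actually used. A minimal sketch below queries the DBA_FEATURE_USAGE_STATISTICS dictionary view; note that feature names recorded in that view do not map one-to-one to the list above, so treat the output as a starting point rather than a licensing verdict.

-- Show features the database has recorded as used
-- (run as a privileged user, e.g. one with the DBA role)
SELECT name,
       version,
       detected_usages,
       currently_used,
       last_usage_date
FROM   dba_feature_usage_statistics
WHERE  detected_usages > 0
ORDER  BY name;

Comparing the returned feature names against the Enterprise-only items above gives a quick first indication of whether a downgrade to Standard Edition would lose functionality that is actually in use.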
