
Best Practices when using Mount Points in SQL Server

Follow these best practices when using mount points with SQL Server standalone and failover cluster instances:

Use the root (host) volume exclusively for mount points. The root volume is the volume that hosts the mount points. Keeping it dedicated greatly reduces the time it takes to restore access to the mounted volumes if you have to run chkdsk, and it also reduces the time it takes to restore the host volume from backup.

If you use the root (host) volume exclusively for mount points, the host volume only needs to be a few MB in size. This reduces the probability that the root volume will be used for anything other than the mount points.
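
As a minimal sketch of attaching a volume as a mount point rather than giving it its own drive letter: the example below assumes the small dedicated root volume is already online as F: and that the data volume is disk 3, partition 2 (placeholder numbers for illustration). It uses the Storage module available on newer versions of Windows Server; on older systems, diskpart or mountvol accomplishes the same thing.

# Minimal sketch, assuming F: is the dedicated root (host) volume and the
# data LUN is disk 3, partition 2 (placeholder numbers; adjust for your storage).

# Create the empty folder on the root volume that will act as the mount point.
New-Item -ItemType Directory -Path "F:\SQL1" | Out-Null

# Attach the data volume under F:\SQL1 instead of assigning it a drive letter.
Add-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath "F:\SQL1"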

During failover cluster installation, use subdirectories under the root of mounted volumes to store database and backup files. For example, suppose you have a mounted volume F:\SQL1. This is the root of the mount point, and you should not store your database files directly in this location. Instead, create a subfolder such as F:\SQL1\USERDATA and use that location to store your database files, as sketched below.
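
A minimal sketch of this practice, reusing the F:\SQL1 mount point from the example above (the instance name SQLCLUSTER and the database name SalesDB are hypothetical placeholders):

# Create subfolders under the mount point root for data and backup files.
New-Item -ItemType Directory -Path "F:\SQL1\USERDATA" | Out-Null
New-Item -ItemType Directory -Path "F:\SQL1\BACKUP" | Out-Null

# Point the database files at the subfolder, not at F:\SQL1 itself
# (SQLCLUSTER and SalesDB are placeholder names).
Invoke-Sqlcmd -ServerInstance "SQLCLUSTER" -Query @"
CREATE DATABASE SalesDB
ON (NAME = SalesDB_data, FILENAME = 'F:\SQL1\USERDATA\SalesDB.mdf')
LOG ON (NAME = SalesDB_log, FILENAME = 'F:\SQL1\USERDATA\SalesDB_log.ldf');
"@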

Add missing dependencies after installing a SQL Server 2008 / R2 failover cluster with mount points. Make sure that each of the mounted volumes is dependent on the root (or host) drive. Additionally, make sure that SQL Server is dependent not just on the root (or host) drive, but also on each of the mounted volumes. A sketch of how to add these dependencies follows.
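
A minimal sketch using the FailoverClusters PowerShell module; the resource names "SQL Server (MSSQLSERVER)", "Cluster Disk 1 (Root F:)" and "Cluster Disk 2 (F:\SQL1)" are placeholders, so check Get-ClusterResource for the actual names in your cluster:

Import-Module FailoverClusters

# Each mounted volume should depend on the root (host) drive...
Add-ClusterResourceDependency -Resource "Cluster Disk 2 (F:\SQL1)" -Provider "Cluster Disk 1 (Root F:)"

# ...and SQL Server should depend on the root drive and on every mounted volume.
Add-ClusterResourceDependency -Resource "SQL Server (MSSQLSERVER)" -Provider "Cluster Disk 1 (Root F:)"
Add-ClusterResourceDependency -Resource "SQL Server (MSSQLSERVER)" -Provider "Cluster Disk 2 (F:\SQL1)"

# Verify the resulting dependency expression.
Get-ClusterResourceDependency -Resource "SQL Server (MSSQLSERVER)"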

If you configure MSDTC on a failover cluster, do not use mount points as storage for the MSDTC service. MSDTC currently does not support mount points.

Comments

Anonymous said…
I have a question:
In a clustered environment, can the root volume for all the mount points be on the server, or should it be a SAN volume?
I am thinking of using a 1 GB partition on my server1 for the root volume and, similarly, a 1 GB partition on the second server to host the drives for the second server.
In this case, what will happen and how will things work when a server goes down and a failover happens?
Arindam said…
It should be on a SAN volume.
