
What are the different statistical tests used for feature selection in Machine Learning?

The table below summarizes the most common options; a short Python sketch illustrating each test follows the table.

| Feature Type | Test Name | Description | Use Case |
| --- | --- | --- | --- |
| Numerical | Pearson's Correlation Coefficient | Measures the strength and direction of the linear relationship between two numerical variables; high absolute values indicate strong correlation. | Measure linear correlation |
| Numerical | Mutual Information | Measures the amount of information gained about one variable by observing another; useful for feature selection with numerical data. | Measure dependence between variables |
| Numerical | ANOVA (F-test) | Tests for differences in means among multiple groups; helps select numerical features whose means differ significantly across classes. | Compare means across multiple groups |
| Numerical | t-test | Tests whether the means of two groups are statistically different; useful for binary classification tasks. | Compare means between two groups |
| Categorical | Chi-Square Test | Tests whether two categorical variables are independent or related. | Test independence of categorical variables |
| Categorical | Fisher's Exact Test | Tests the association between two categorical variables in a 2x2 contingency table; appropriate when sample sizes are small. | Test independence in 2x2 contingency tables |
| Categorical | Gini Importance | Measures how much a feature reduces Gini impurity when used to split data in decision tree algorithms; higher values indicate more important features. | Assess feature importance in decision trees |
| Categorical | Information Gain | Measures the reduction in entropy (uncertainty) achieved by splitting on a feature in decision trees or random forests. | Measure reduction in entropy |
| Categorical | Cramér's V | Quantifies the association between two categorical variables in a contingency table; values range from 0 (no association) to 1 (complete association). | Measure association in contingency tables |
| Ordinal | Kendall's Tau and Spearman's Rank Correlation | Measure the strength and direction of monotonic relationships between ordinal or ranked variables; useful when data is not normally distributed. | Measure rank correlation |
| Binary target | Point-Biserial Correlation | Measures the relationship between a binary target variable and a continuous or ordinal feature; helps identify features strongly associated with the target. | Measure correlation with a binary target |
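Pearson's correlation: a minimal sketch using scipy.stats.pearsonr. The feature and target arrays here are synthetic, made up purely for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
x = rng.normal(size=100)                # hypothetical numerical feature
y = 2.0 * x + rng.normal(size=100)      # target with a linear dependence on x

r, p = pearsonr(x, y)                   # r in [-1, 1]; |r| near 1 => strong linear relation
print(f"Pearson r = {r:.3f}, p-value = {p:.3g}")
```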
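Mutual information: a sketch using scikit-learn's mutual_info_classif on synthetic data where, by construction, only the first feature carries information about the target.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                               # three hypothetical numerical features
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # target depends only on feature 0

mi = mutual_info_classif(X, y, random_state=0)
print(mi)   # feature 0 should score highest
```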
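ANOVA: scikit-learn exposes the per-feature ANOVA F-test as f_classif, which pairs naturally with SelectKBest for feature selection. The dataset below is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=300, n_features=10, n_informative=3, random_state=0)
selector = SelectKBest(score_func=f_classif, k=3).fit(X, y)  # ANOVA F-test per feature
print(selector.get_support(indices=True))                    # indices of the 3 best-scoring features
```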
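t-test: a sketch using scipy.stats.ttest_ind to compare a feature's values across the two classes of a binary target. The class means are deliberately shifted in this synthetic example.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(0.0, 1.0, 100),   # class 0 values
                          rng.normal(0.8, 1.0, 100)])  # class 1 values, shifted mean
target = np.array([0] * 100 + [1] * 100)

t, p = ttest_ind(feature[target == 0], feature[target == 1])
print(f"t = {t:.3f}, p = {p:.3g}")   # small p => the class means differ
```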
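Chi-square test: a sketch using scipy.stats.chi2_contingency on a made-up contingency table of feature category vs. class label counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical contingency table: rows = feature categories, columns = class labels
table = np.array([[30, 10],
                  [10, 40]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3g}, dof = {dof}")   # small p => variables are related
```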
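Fisher's exact test: a sketch using scipy.stats.fisher_exact on a hypothetical 2x2 table with counts small enough that the chi-square approximation would be unreliable.

```python
from scipy.stats import fisher_exact

# hypothetical 2x2 contingency table with small counts
table = [[8, 2],
         [1, 5]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.3g}")
```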
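Gini importance: scikit-learn's tree ensembles expose Gini-based importances via feature_importances_. The dataset below is synthetic, with only two informative features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=2, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.feature_importances_)   # Gini-based importances; they sum to 1
```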
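Information gain: one way to approximate this in scikit-learn is to fit a decision tree with the entropy criterion, so that feature_importances_ reflects entropy reduction at each split; this proxy is an assumption of the sketch, not a dedicated information-gain API.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=2, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(tree.feature_importances_)   # importances derived from entropy reduction at splits
```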
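Cramér's V: scipy has no built-in function for it, so the cramers_v helper below is a hypothetical implementation derived from the chi-square statistic of a contingency table.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V for a contingency table of observed counts (hypothetical helper)."""
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    r, k = table.shape
    return np.sqrt(chi2 / (n * (min(r, k) - 1)))

table = np.array([[30, 10],
                  [10, 40]])
print(f"Cramér's V = {cramers_v(table):.3f}")   # 0 = no association, 1 = complete association
```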
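Kendall's Tau and Spearman's rank correlation: a sketch using scipy.stats.kendalltau and scipy.stats.spearmanr on made-up rank data.

```python
from scipy.stats import kendalltau, spearmanr

x = [1, 2, 3, 4, 5, 6]   # hypothetical ordinal feature ranks
y = [2, 1, 4, 3, 6, 5]   # hypothetical target ranks

tau, p_tau = kendalltau(x, y)
rho, p_rho = spearmanr(x, y)
print(f"Kendall tau = {tau:.3f} (p = {p_tau:.3g})")
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3g})")
```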
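Point-biserial correlation: a sketch using scipy.stats.pointbiserialr, with a synthetic continuous feature whose mean is shifted by a binary target.

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=100)            # binary target
feature = 1.5 * target + rng.normal(size=100)    # continuous feature shifted by class

r, p = pointbiserialr(target, feature)
print(f"point-biserial r = {r:.3f}, p = {p:.3g}")
```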
