
Bernoulli Distribution


  • Definition: The Bernoulli Distribution is a discrete probability distribution that models a random experiment with two possible outcomes: success (usually denoted as 1) and failure (usually denoted as 0). It is named after the Swiss mathematician Jacob Bernoulli.


  • Probability Mass Function (PMF): The PMF of the Bernoulli Distribution is defined as f(k; p) = p^k (1 − p)^(1 − k) for k ∈ {0, 1}; that is, f(1; p) = p and f(0; p) = 1 − p.
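    A direct translation of this PMF into code looks like the following minimal Python sketch (the function name bernoulli_pmf is just an illustrative choice, not part of any library):

        def bernoulli_pmf(k, p):
            """Probability mass of a Bernoulli(p) variable at k, where k is 0 or 1."""
            if k not in (0, 1):
                raise ValueError("k must be 0 or 1")
            return p if k == 1 else 1 - p

        # Example: a biased coin with success probability p = 0.7
        print(bernoulli_pmf(1, 0.7))   # 0.7 (probability of success)
        print(bernoulli_pmf(0, 0.7))   # 0.3 (probability of failure)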


  • Mean and Variance: The mean (expected value) of the Bernoulli Distribution is p, and the variance is p(1 − p).


    Mean

    The expected value of a Bernoulli random variable X is

        E[X] = p

    This is due to the fact that for a Bernoulli distributed random variable X with Pr(X = 1) = p and Pr(X = 0) = q = 1 − p we find

        E[X] = Pr(X = 1) · 1 + Pr(X = 0) · 0 = p · 1 + q · 0 = p
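    As a quick numerical sanity check, here is a minimal sketch (using numpy, which this post does not otherwise assume) that computes E[X] as the probability-weighted sum over the support and confirms it by simulation:

        import numpy as np

        p = 0.3
        support = np.array([0, 1])
        pmf = np.array([1 - p, p])

        # Expected value as the probability-weighted sum over the support
        print(np.sum(support * pmf))                       # 0.3 (= p)

        # Monte Carlo check: the sample mean of many Bernoulli draws converges to p
        rng = np.random.default_rng(0)
        samples = rng.binomial(n=1, p=p, size=100_000)     # Bernoulli(p) = Binomial(1, p)
        print(samples.mean())                              # ≈ 0.3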



  • Variance

    The variance of a Bernoulli distributed X is

        Var[X] = p(1 − p) = pq

    We first find

        E[X²] = Pr(X = 1) · 1² + Pr(X = 0) · 0² = p · 1 + q · 0 = p = E[X]

    From this follows

        Var[X] = E[X²] − E[X]² = p − p² = p(1 − p) = pq

    With this result it is easy to prove that, for any Bernoulli distribution, its variance will have a value inside [0, 1/4]: the function p(1 − p) is zero at p = 0 and p = 1 and attains its maximum value 1/4 at p = 1/2.
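    The same facts can be checked numerically. The following sketch (again assuming numpy) compares the sample variance of simulated draws with p(1 − p) and confirms the maximum value of 1/4:

        import numpy as np

        rng = np.random.default_rng(1)
        for p in (0.1, 0.5, 0.9):
            samples = rng.binomial(n=1, p=p, size=200_000)
            print(f"p={p}: sample var={samples.var():.4f}, p(1-p)={p * (1 - p):.4f}")

        # The theoretical variance p(1 - p) never exceeds 1/4, attained at p = 0.5
        ps = np.linspace(0, 1, 101)
        print((ps * (1 - ps)).max())   # 0.25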


    Graphical Representation:

    Here's a bar graph illustrating the Bernoulli Distribution for different values of p:


    [Figure: probability mass function of the Bernoulli distribution, showing three example distributions with different values of p (bars at x = 0 and x = 1)]


    In this graph, you can see that the probability of success (p) is represented by the height of the bar at x = 1, and the probability of failure (1 − p) is represented by the height of the bar at x = 0. Since the distribution is discrete, these are the only two possible outcomes.
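    If you want to reproduce such a graph yourself, a minimal matplotlib sketch (the values of p below are only illustrative) could look like this:

        import matplotlib.pyplot as plt

        ps = [0.2, 0.5, 0.8]                       # illustrative success probabilities
        fig, axes = plt.subplots(1, len(ps), figsize=(9, 3), sharey=True)
        for ax, p in zip(axes, ps):
            ax.bar([0, 1], [1 - p, p], width=0.4)  # bar heights are P(X=0) and P(X=1)
            ax.set_xticks([0, 1])
            ax.set_xlabel("x")
            ax.set_title(f"p = {p}")
        axes[0].set_ylabel("P(X = x)")
        plt.tight_layout()
        plt.show()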


    Summary of key properties (writing q = 1 − p):

    Parameters: 0 ≤ p ≤ 1
    Support: k ∈ {0, 1}
    PMF: 1 − p if k = 0; p if k = 1
    CDF: 0 if k < 0; 1 − p if 0 ≤ k < 1; 1 if k ≥ 1
    Mean: p
    Median: 0 if p < 1/2; [0, 1] if p = 1/2; 1 if p > 1/2
    Mode: 0 if p < 1/2; 0 and 1 if p = 1/2; 1 if p > 1/2
    Variance: p(1 − p) = pq
    MAD (mean absolute deviation): 2pq
    Skewness: (q − p) / √(pq)
    Ex. kurtosis: (1 − 6pq) / (pq)
    Entropy: −q ln q − p ln p
    MGF: q + p·e^t
    CF: q + p·e^(it)
    PGF: q + p·z
    Fisher information: 1 / (pq)
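    Most of these quantities can be checked with scipy.stats.bernoulli, as in this short sketch (assuming scipy is available):

        from scipy.stats import bernoulli

        p = 0.3
        mean, var, skew, kurt = bernoulli.stats(p, moments="mvsk")
        print(mean, var, skew, kurt)      # 0.3, 0.21, skewness, excess kurtosis
        print(bernoulli.pmf([0, 1], p))   # [0.7, 0.3]
        print(bernoulli.entropy(p))       # -q*ln(q) - p*ln(p), in nats
        print(bernoulli.median(p))        # 0.0, since p < 1/2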

  • Use Cases:

    • The Bernoulli Distribution is commonly used to model random experiments with binary outcomes, such as:
      • Coin flips (success = heads, failure = tails).
      • Pass/fail experiments (success = pass, failure = fail).
      • Click-through rate (success = click, failure = no click).

    It serves as the building block for other important distributions like the Binomial Distribution and the Geometric Distribution.
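    To make that building-block relationship concrete, the following sketch (assuming numpy) shows that the sum of n independent Bernoulli(p) trials behaves like a Binomial(n, p) variable:

        import numpy as np

        rng = np.random.default_rng(42)
        n, p, reps = 10, 0.3, 100_000

        # Sum n independent Bernoulli(p) trials, repeated many times
        sums = rng.binomial(n=1, p=p, size=(reps, n)).sum(axis=1)

        # Compare with direct Binomial(n, p) draws
        binom = rng.binomial(n=n, p=p, size=reps)
        print(sums.mean(), binom.mean())   # both ≈ n*p = 3.0
        print(sums.var(), binom.var())     # both ≈ n*p*(1-p) = 2.1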

This distribution is fundamental in probability theory and statistics, especially when dealing with events that have only two possible outcomes.
