21  Gaussian Naive Bayes Classifier

21.1 Introduction

The Multinomial Naive Bayes Classifier that was discussed in Chapter 9 made the assumption that the likelihoods are based on Discrete Distributions. If, however, our dataset contains a variable \(X\) with a continuous distribution, then it is often convenient to assume that it is normally distributed.

This chapter appears in the Distributions part because the Gaussian Naive Bayes likelihood is a direct application of the Normal density. It complements the model-building chapters by showing how distributional assumptions translate into a concrete classifier.

Suppose there are \(K\) classes that we want to predict¹. For each class \(k \in \{1, 2, \ldots, K\}\) we can compute the mean \(\mu_k\) and variance \(\sigma_k^2\), so that the likelihood for any observed value \(X = \nu\) is

\[ f_{X \mid k}(\nu) = \frac{e^{-\frac{1}{2} \left( \frac{\nu - \mu_k}{\sigma_k} \right)^2} }{\sigma_k \sqrt{2 \pi}} \]

This is a class-conditional density value (not a point probability); for a continuous variable, \(\text{P}(X=\nu \mid k)=0\).

Even when there are multiple normally distributed variables, it is easy to compute the likelihoods that are needed to apply Bayes’ Theorem and obtain the posterior probability on which the prediction is based. Of course, it is also possible to combine the likelihoods from the Bernoulli, Multinomial, Poisson, and Normal distributions.
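To make this concrete, the following minimal sketch applies Bayes’ Theorem to two Gaussian features for a single observation. All numbers (class means, standard deviations, priors, and the observed values) are illustrative assumptions, not estimates from any dataset:

#minimal sketch: Gaussian Naive Bayes posterior for one observation
#(all numbers below are illustrative assumptions, not estimated from data)
mu    <- list(no = c(x1 = 2.0, x2 = 110), yes = c(x1 = 5.0, x2 = 140)) #class means
sigma <- list(no = c(x1 = 1.5, x2 = 20),  yes = c(x1 = 2.5, x2 = 30))  #class standard deviations
prior <- c(no = 0.66, yes = 0.34) #prior probabilities
nu <- c(x1 = 4.0, x2 = 150) #observed feature values
#class-conditional likelihood: product of the Normal densities evaluated at nu
likelihood <- sapply(names(prior), function(k) prod(dnorm(nu, mean = mu[[k]], sd = sigma[[k]])))
#Bayes' Theorem: the posterior is proportional to prior times likelihood
posterior <- prior * likelihood / sum(prior * likelihood)
round(posterior, 4)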

If the normality assumption for a continuous variable is not satisfied, one often uses the empirical Kernel Density instead. Since the Naive Bayes approach is easy to implement and cheap to compute, it is common practice to estimate the classifier for both scenarios (i.e. using Kernel Densities and Normal Distributions). Furthermore, it is also possible to recompute the model for various choices of \(\alpha\) (as was explained in Section 9.4).
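The following minimal sketch shows how a Kernel Density estimate can replace the Normal likelihood for a single, clearly non-normal feature; the simulated exponential samples are purely illustrative:

#minimal sketch: KDE-based likelihood for one non-normal feature
set.seed(1) #simulated data, purely illustrative
x_no  <- rexp(100, rate = 0.5) #feature values for class 'no'
x_yes <- rexp(100, rate = 0.2) #feature values for class 'yes'
#evaluate the class-conditional Kernel Density estimate at the observed value nu
kde_at <- function(s, nu) {
  d <- density(s, bw = 'SJ') #Sheather-Jones bandwidth, as in the script below
  approx(d$x, d$y, xout = nu)$y #interpolate the estimated density at nu
}
nu <- 3.5
c(no = kde_at(x_no, nu), yes = kde_at(x_yes, nu)) #KDE-based likelihoods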

21.2 R Module

The Naive Bayes Module can be found on the public website:

  • https://compute.wessa.net/rwasp_NaiveBayes.wasp

The Naive Bayes Module is also available in RFC under the menu “Models / Manual Model Building”.

If you prefer to compute a Gaussian Naive Bayes model on your local computer, the following code snippet can be used in the R console²:

library(mlbench) #this library contains the Pima Indian Diabetes dataset
library(caret)
library(naivebayes)
data(PimaIndiansDiabetes2) #load the dataset
par1 = 9 #column number of target variable
par2 = 'gaussian' #smoothing kernel to be used
par3 = 'nnnnnnnnc' #specify n (numeric) or c (categorical) for each column in the dataset (not used further in this snippet)
par4 = 'no' #use repeated cross-validation?
print.naive_bayes <- function (x,...) {
  model <- 'Naive Bayes'
  n_char <- getOption('width')
  str_left_right <- paste0(rep('=', floor((n_char - nchar(model)) / 2)), collapse = '')
  str_full <- paste0(str_left_right, ' ', model,' ',
  ifelse(n_char %% 2 != 0, '=', ''), str_left_right)
  len <- nchar(str_full)
  l <- paste0(rep('-', len), collapse = '')
  cat('\n')
  cat(str_full, '\n', '\n', 'Call:', '\n')
  print(x$call)
  cat('\n')
  cat(l, '\n', '\n')
  cat( 'Laplace smoothing:', x$laplace)
  cat('\n')
  cat('\n')
  cat(l, '\n', '\n')
  cat(' A priori probabilities:','\n')
  print(x$prior)
  cat('\n')
  cat(l, '\n', '\n')
  cat(' Tables:','\n')
  tabs <- x$tables
  n <- length(x$tables)
  indices <- seq_len(min(25,n))
  tabs <- tabs[indices]
  print(tabs)
  if (n > 25) {
    cat('\n\n')
    cat('# … and', n - 25, ifelse(n - 25 == 1, 'more table\n\n', 'more tables\n\n'))
    cat(l)
  }
  cat('\n\n')
}
x <- na.omit(PimaIndiansDiabetes2) #remove rows with missing data
k <- length(x[1,]) #we could also use ncol(x)
n <- length(x[,1]) #we could also use nrow(x)
myf <- formula(paste(colnames(x)[par1],' ~ .',sep=''))
nb_grid <- expand.grid(usekernel = c(TRUE, FALSE),
  laplace = c(0, 0.5, 1, 2, 3, 4),
  adjust = c(0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5))
fitControl <- trainControl(method = 'repeatedcv', number = 10, repeats = 5)
#please, be patient
if (par4 == 'no') {
  naive_bayes_via_caret <- train(myf, data = x, method = 'naive_bayes',
    kernel = par2, bw = 'SJ', usepoisson = TRUE, tuneGrid = nb_grid)
}
if (par4 == 'yes') {
  naive_bayes_via_caret <- train(myf, data = x, method = 'naive_bayes',
    kernel = par2, bw = 'SJ', usepoisson = TRUE, tuneGrid = nb_grid,
    trControl = fitControl)
}
#show results
naive_bayes_via_caret$results
naive_bayes_via_caret$finalModel$tuneValue
naive_bayes_via_caret$finalModel
z <- cbind(x, predict(naive_bayes_via_caret$finalModel, x, type = 'prob'))
head(z)

The R script uses a custom function (print.naive_bayes) to produce the output tables of the model. Furthermore, the data is cleaned with the na.omit function, which removes the rows that contain missing data (denoted by NA). Instead of calling the naive_bayes function (from the naivebayes library) directly, the R script uses the train function (from the caret package) to invoke the naive_bayes function for different combinations of kernels, Laplace parameters, and adjustments, laid out in a tuning grid that is created with the (base R) expand.grid function. Note that the formula function creates the model specification to be estimated (par1 indicates the column that is used as the target variable).

21.3 Example

We wish to build a model that allows us to predict diabetes, in a population of female patients of Pima Indian heritage (Smith et al. 1988), based on a series of diagnostic measurements.


The R Module uses a random selection of 200 female patients and shows the results of the standard Naive Bayes Classifier (i.e. without optimisation of the Gaussian Kernel Density and Laplace \(\alpha\) values). The output shows the model specification type ~ npreg+glu+bp, which means that the variable type is the binary target variable to be predicted (whether or not the patient has been diagnosed with diabetes) and is explained by three exogenous variables or features (i.e. npreg, glu, and bp). The value of \(\alpha\) is zero (Laplace: 0) and the number of rows in the dataset is 200 (Samples: 200). There are three features (Features: 3), all of which are treated as Gaussian/Normal distributions. The model uses data-based prior probabilities which are displayed as

A priori probabilities:
  No  Yes 
0.66 0.34 

implying that 34% of the patients contained in the dataset have the disease.

For each feature the output also shows the descriptive statistics and how they relate to the target variable. For example, the mean of the number of pregnancies npreg for women without diabetes is equal to 2.9167 versus 4.8382 for patients with the disease.

--------------------------------------------------------------------------------- 
 ::: npreg (Gaussian) 
--------------------------------------------------------------------------------- 
      
npreg        No      Yes
  mean 2.916667 4.838235
  sd   2.806866 3.972331

According to this result, npreg seems to be a good predictor for diabetes because the means are quite different. Near the bottom of the output page the ML Fitted Normal Densities of npreg (for both groups) are shown as well. The predictive power of the feature depends on how much the two densities overlap. In other words, the predictive power of the number of pregnancies depends on how far apart the likelihoods \(f_{npreg \mid \text{No}}(\nu)\) and \(f_{npreg \mid \text{Yes}}(\nu)\) are, which in turn depends on the difference between the two mean levels. How far apart the mean levels must be for the variable to make a meaningful contribution to the prediction of diabetes remains an open question for now; it will be answered in Hypothesis Testing.
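These two fitted densities can be reproduced locally with the estimates from the table above (the plotting range is an arbitrary choice):

#reproduce the ML Fitted Normal Densities of npreg from the table above
curve(dnorm(x, mean = 2.916667, sd = 2.806866), from = -5, to = 20,
  xlab = 'npreg', ylab = 'density', lty = 1) #class 'No'
curve(dnorm(x, mean = 4.838235, sd = 3.972331), add = TRUE, lty = 2) #class 'Yes'
legend('topright', legend = c('No', 'Yes'), lty = c(1, 2))

The considerable overlap of the two curves illustrates why the difference in means alone does not settle the question of predictive power.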

Changing the “Select Variable to Plot” field allows us to examine all the features that have been included in the model. For instance, the variable glu (plasma glucose concentration in an oral glucose tolerance test) promises to be a better predictor than the variable bp (diastolic blood pressure in mm Hg).

While it is interesting to examine the contribution of individual features, the ultimate purpose of creating a Naive Bayes Classifier is to generate useful predictions. The predictive performance can be assessed by setting the “Training set %” slider to 90%. This causes the model to be recomputed based on the first 90% of the rows in the dataset (we call this the “training set”). The remaining 10% (the “test set”) is used to test the quality of the predictions against the actual values. We already know how to evaluate binary classifiers from Chapter 8, which describes the concepts of sensitivity and specificity. After setting the training percentage, the output shows a table with the true positive/negative and false positive/negative counts (we call this a “confusion matrix”). In addition, the values for sensitivity and specificity are also displayed (if feasible), along with some other statistics that are discussed in Descriptive Statistics.
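The same split-and-evaluate procedure can be sketched locally with the naivebayes package, using the same dataset as the script above (the module’s own numbers will differ because it uses a random selection of 200 patients):

#minimal sketch: 90%/10% train/test split and a confusion matrix,
#mimicking the 'Training set %' slider
library(mlbench)
library(naivebayes)
data(PimaIndiansDiabetes2)
x <- na.omit(PimaIndiansDiabetes2) #remove rows with missing data
n_train <- floor(0.9 * nrow(x)) #the first 90% of the rows form the training set
train_set <- x[seq_len(n_train), ]
test_set  <- x[-seq_len(n_train), ]
fit  <- naive_bayes(diabetes ~ ., data = train_set) #Gaussian likelihoods by default
pred <- predict(fit, newdata = test_set[, names(test_set) != 'diabetes'])
cm <- table(Predicted = pred, Actual = test_set$diabetes) #confusion matrix
cm
c(sensitivity = cm['pos', 'pos'] / sum(cm[, 'pos']), #true positive rate
  specificity = cm['neg', 'neg'] / sum(cm[, 'neg'])) #true negative rate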

The Laplace field allows us to manually select various values for \(\alpha\) and observe how the results of the Naive Bayes Classifier change. In addition, we can also select the density functions with the “Kernel or Poisson” selector (the default is a Gaussian/Normal Density which can be replaced with Kernel functions and/or Poisson Densities).
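Locally, the analogous settings are available as arguments of the naive_bayes function. Continuing the sketch above (note that the laplace argument only affects categorical features, so it is inert here because all predictors in this dataset are numeric):

#Kernel Densities instead of Gaussian likelihoods; extra arguments such as
#bw are passed on to R's density function
fit_kde <- naive_bayes(diabetes ~ ., data = train_set,
  usekernel = TRUE, bw = 'SJ', laplace = 1)
pred_kde <- predict(fit_kde, newdata = test_set[, names(test_set) != 'diabetes'])
mean(pred_kde == test_set$diabetes) #test-set accuracy of the kernel variant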

21.4 Task

Use the Naive Bayes Classifier software shown above and improve its predictive performance. Try to add new features and investigate the effects of changing the Laplace and Kernel settings.

Smith, Jack W., James E. Everhart, W. C. Dickson, William C. Knowler, and Robert S. Johannes. 1988. “Using the ADAP Learning Algorithm to Forecast the Onset of Diabetes Mellitus.” In Proceedings of the Annual Symposium on Computer Application in Medical Care, 261–65.

  1. For instance, if we are interested in three classes (i.e. “real”, “fake”, and “mixed” news, the latter containing both truths and falsehoods), then we would use \(K = 3\).

  2. Make sure that the mlbench, caret, and naivebayes packages have been installed with the install.packages function before running the script.
