Implementing Gaussian Naive Bayes in Python: A Comprehensive Guide
Table of Contents
- Introduction to Gaussian Naive Bayes
- Understanding the Dataset
- Data Preprocessing
- Handling Missing Data
- Encoding Categorical Variables
- Feature Selection
- Feature Scaling
- Model Implementation
- K-Nearest Neighbors (KNN)
- Logistic Regression
- Gaussian Naive Bayes
- Model Evaluation
- Visualizing Decision Boundaries
- Hyperparameter Tuning
- Conclusion
- References
1. Introduction to Gaussian Naive Bayes
Gaussian Naive Bayes (GNB) is a probabilistic classification algorithm based on Bayes’ Theorem, with the additional assumption that each feature follows a normal (Gaussian) distribution within each class. It is well suited to continuous data, simple to implement, and computationally cheap to train. Despite its simplifying assumptions, Naive Bayes often performs remarkably well in practice, for example in text classification and medical diagnosis; a minimal from-scratch sketch follows the feature list below.
Key Features of Gaussian Naive Bayes:
- Probabilistic Model: Provides probabilities for predictions.
- Assumption of Feature Independence: Treats features as conditionally independent given the class, which greatly simplifies the likelihood calculation.
- Efficiency: Fast training and prediction phases.
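To make the scoring rule concrete, here is a minimal from-scratch sketch of the idea (not scikit-learn’s implementation): estimate a prior, a per-feature mean, and a per-feature variance for each class, then assign a sample to the class with the largest sum of log prior and per-feature Gaussian log-likelihoods. The helper names `gnb_fit`/`gnb_predict` and the small smoothing constant are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_iris

def gnb_fit(X, y):
    """Estimate per-class priors, feature means, and feature variances."""
    classes = np.unique(y)
    priors = np.array([np.mean(y == c) for c in classes])
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    # Small constant added to the variances for numerical stability (illustrative choice)
    variances = np.array([X[y == c].var(axis=0) for c in classes]) + 1e-9
    return classes, priors, means, variances

def gnb_predict(X, classes, priors, means, variances):
    """Assign each sample to the class with the highest joint log-probability."""
    # log N(x_j | mean_cj, var_cj), summed over features j under the independence assumption
    log_likelihood = -0.5 * (
        np.log(2 * np.pi * variances[None, :, :])
        + (X[:, None, :] - means[None, :, :]) ** 2 / variances[None, :, :]
    ).sum(axis=2)
    log_posterior = np.log(priors)[None, :] + log_likelihood  # add the log prior P(c)
    return classes[np.argmax(log_posterior, axis=1)]

# Quick illustrative check on the Iris data
X_demo, y_demo = load_iris(return_X_y=True)
params = gnb_fit(X_demo, y_demo)
preds = gnb_predict(X_demo, *params)
print("Training accuracy of the sketch:", np.mean(preds == y_demo))
```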
2. Understanding the Dataset
For our implementation, we’ll use two datasets:
- Iris Flower Dataset: A classic dataset in machine learning, comprising 150 samples of iris flowers from three different species (Setosa, Virginica, and Versicolor). Each sample has four features: sepal length, sepal width, petal length, and petal width.
- WeatherAUS Dataset: Obtained from Kaggle, this dataset contains meteorological data from Australian weather stations, including features like temperature, rainfall, humidity, and wind speed.
3. Data Preprocessing
Effective data preprocessing is crucial for building robust machine learning models. We’ll walk through the essential preprocessing steps applied to the WeatherAUS dataset.
a. Handling Missing Data
Missing data can skew the results of your analysis. We employ two strategies to handle missing values:
- Numeric Features: Imputed using the mean strategy.
- Categorical Features: Imputed using the most frequent strategy.
```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Load the dataset
data = pd.read_csv('weatherAUS.csv')

# Separate features and target
X = data.iloc[:, :-1]
y = data.iloc[:, -1]

# Identify numerical and categorical columns
numerical_cols = X.select_dtypes(include=['int64', 'float64']).columns
categorical_cols = X.select_dtypes(include=['object']).columns

# Impute numerical features with the mean
imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
X[numerical_cols] = imp_mean.fit_transform(X[numerical_cols])

# Impute categorical features with the most frequent value
imp_freq = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
X[categorical_cols] = imp_freq.fit_transform(X[categorical_cols])
```
b. Encoding Categorical Variables
Machine learning algorithms require numerical input. We apply Label Encoding and One-Hot Encoding to transform categorical variables.
```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
from sklearn.compose import ColumnTransformer

# Label Encoding for the binary target variable
le = LabelEncoder()
y = le.fit_transform(y)

# Choose an encoding per categorical column
def EncodingSelection(X, threshold=10):
    string_cols = list(X.select_dtypes(include=['object']).columns)
    one_hot_encoding_cols = []

    for col in string_cols:
        unique_vals = len(X[col].unique())
        if unique_vals == 2 or unique_vals > threshold:
            X[col] = le.fit_transform(X[col])
        else:
            one_hot_encoding_cols.append(col)

    # One-Hot Encoding for the remaining categorical variables
    if one_hot_encoding_cols:
        ct = ColumnTransformer(
            [('encoder', OneHotEncoder(), one_hot_encoding_cols)],
            remainder='passthrough'
        )
        X = ct.fit_transform(X)

    return X

X = EncodingSelection(X)
```
c. Feature Selection
To enhance model performance and reduce computational cost, we select the most relevant features using the SelectKBest method with the Chi-Squared (chi2) score function. Because chi2 requires non-negative inputs, the features are first scaled to the [0, 1] range with MinMaxScaler.
```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

# Scale features to the [0, 1] range (chi2 requires non-negative values)
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

# Select the top 2 features
kbest = SelectKBest(score_func=chi2, k=2)
X_selected = kbest.fit_transform(X_scaled, y)

print(f"Selected Features Shape: {X_selected.shape}")
```
d. Feature Scaling
Standardizing features ensures that each feature contributes equally to the result, which is especially important for distance-based algorithms like KNN.
```python
from sklearn.preprocessing import StandardScaler

# with_mean=False skips centering, which is required if the selected
# feature matrix is sparse (as it can be after one-hot encoding)
scaler = StandardScaler(with_mean=False)
X_scaled = scaler.fit_transform(X_selected)
```
4. Model Implementation
We’ll implement three classification models: K-Nearest Neighbors (KNN), Logistic Regression, and Gaussian Naive Bayes.
a. K-Nearest Neighbors (KNN)
KNN classifies a data point by the majority label among its k nearest neighbors in feature space.
```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.20, random_state=1
)

# Initialize and train KNN
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Predict and evaluate
y_pred_knn = knn.predict(X_test)
knn_accuracy = accuracy_score(y_test, y_pred_knn)
print(f"KNN Accuracy: {knn_accuracy:.2f}")
```
```
KNN Accuracy: 0.80
```
b. Logistic Regression
Logistic Regression models the probability of a categorical outcome as a function of the input features.
```python
from sklearn.linear_model import LogisticRegression

# Initialize and train Logistic Regression
lr = LogisticRegression(random_state=0, max_iter=200)
lr.fit(X_train, y_train)

# Predict and evaluate
y_pred_lr = lr.predict(X_test)
lr_accuracy = accuracy_score(y_test, y_pred_lr)
print(f"Logistic Regression Accuracy: {lr_accuracy:.2f}")
```
```
Logistic Regression Accuracy: 0.83
```
c. Gaussian Naive Bayes
GaussianNB assumes that continuous values associated with each class are distributed normally.
```python
from sklearn.naive_bayes import GaussianNB

# Initialize and train GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)

# Predict and evaluate
y_pred_gnb = gnb.predict(X_test)
gnb_accuracy = accuracy_score(y_test, y_pred_gnb)
print(f"Gaussian Naive Bayes Accuracy: {gnb_accuracy:.2f}")
```
```
Gaussian Naive Bayes Accuracy: 0.80
```
5. Model Evaluation
Model evaluation is essential to understand how well your models perform on unseen data. We use Accuracy Score as our primary metric.
| Model | Accuracy |
|---|---|
| K-Nearest Neighbors (KNN) | 80% |
| Logistic Regression | 83% |
| Gaussian Naive Bayes | 80% |
Among the models tested, Logistic Regression outperforms KNN and Gaussian Naive Bayes on this dataset, highlighting the importance of model selection based on data characteristics.
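Accuracy alone can also hide class imbalance, which is common in rainfall data such as WeatherAUS. As a supplementary check, per-class metrics and a cross-validated score can be computed; the sketch below assumes the `y_test`, `y_pred_gnb`, `X_scaled`, and `y` objects from the earlier sections are still in scope.

```python
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Per-class precision, recall, and F1 for the Gaussian Naive Bayes test predictions
print(confusion_matrix(y_test, y_pred_gnb))
print(classification_report(y_test, y_pred_gnb))

# A cross-validated accuracy estimate is more stable than a single train/test split
cv_scores = cross_val_score(GaussianNB(), X_scaled, y, cv=5)
print(f"GNB 5-fold CV accuracy: {cv_scores.mean():.2f} (+/- {cv_scores.std():.2f})")
```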
6. Visualizing Decision Boundaries
Visualizing decision boundaries helps in understanding how different classifiers separate the data. We’ll use the Iris Flower dataset for this purpose.
```python
from mlxtend.plotting import plot_decision_regions
import matplotlib.pyplot as plt
from sklearn import datasets

def visualize_decision_regions(X, y, model):
    plot_decision_regions(X, y, clf=model)
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.title(f'Decision Boundary for {model.__class__.__name__}')
    plt.show()

# Load the Iris dataset (first two features only, so the regions can be plotted in 2D)
iris = datasets.load_iris()
X_iris = iris.data[:, :2]
y_iris = iris.target

# Initialize and fit the classifiers
knn_iris = KNeighborsClassifier(n_neighbors=3)
knn_iris.fit(X_iris, y_iris)

lr_iris = LogisticRegression(random_state=0, max_iter=200)
lr_iris.fit(X_iris, y_iris)

gnb_iris = GaussianNB()
gnb_iris.fit(X_iris, y_iris)

# Visualize decision boundaries
visualize_decision_regions(X_iris, y_iris, knn_iris)
visualize_decision_regions(X_iris, y_iris, lr_iris)
visualize_decision_regions(X_iris, y_iris, gnb_iris)
```
- K-Nearest Neighbors (KNN): Captures more complex boundaries based on proximity.
- Logistic Regression: Linear decision boundaries.
- Gaussian Naive Bayes: Curved (quadratic) boundaries, because each class is modeled with its own per-feature variance estimates.
7. Hyperparameter Tuning
While our initial experiments provide a good starting point, fine-tuning hyperparameters can further enhance model performance. Techniques like Grid Search and Random Search can be employed to find the optimal set of hyperparameters for each classifier.
```python
from sklearn.model_selection import GridSearchCV

# Example: Hyperparameter tuning for KNN
param_grid = {'n_neighbors': range(1, 10)}
grid_knn = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid_knn.fit(X_train, y_train)

print(f"Best KNN Parameters: {grid_knn.best_params_}")
print(f"Best KNN Accuracy: {grid_knn.best_score_:.2f}")
```
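Gaussian Naive Bayes itself has little to tune; in scikit-learn its main hyperparameter is `var_smoothing`, which adds a fraction of the largest feature variance to all variances for numerical stability. A sketch of searching it on a log scale, reusing the `X_train`/`y_train` split from earlier (the search range is an illustrative choice):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import GaussianNB

# Search var_smoothing over several orders of magnitude
param_grid_gnb = {'var_smoothing': np.logspace(-12, -3, 10)}
grid_gnb = GridSearchCV(GaussianNB(), param_grid_gnb, cv=5)
grid_gnb.fit(X_train, y_train)

print(f"Best GNB Parameters: {grid_gnb.best_params_}")
print(f"Best GNB Accuracy: {grid_gnb.best_score_:.2f}")
```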
8. Conclusion
Implementing Gaussian Naive Bayes in Python is straightforward, thanks to libraries like scikit-learn. Despite its simplicity, GNB offers competitive performance, making it a valuable tool in the machine learning arsenal. However, as demonstrated, model performance is contingent on the dataset’s nature. Logistic Regression, for instance, outperformed GNB and KNN in our experiments with the WeatherAUS dataset.
Key Takeaways:
- Data Preprocessing: Handling missing data and encoding categorical variables are critical steps.
- Feature Selection: Selecting relevant features can enhance model performance and reduce computational overhead.
- Model Selection: Always experiment with multiple models to identify the best performer for your specific dataset.
- Visualization: Understanding decision boundaries provides insights into how models segregate data.
By following the steps outlined in this guide, you can effectively implement and evaluate Gaussian Naive Bayes alongside other classification algorithms to make informed decisions in your machine learning projects.