Question
Objective: The goal of this assignment is to implement the linear classifier algorithm for image classification on the CIFAR-10 dataset. You will:
1. Use built-in packages or libraries such as PyTorch and TensorFlow to perform the forward and backward computations (optional, not graded).
2. Implement the algorithm, including the forward and backward computations, yourself (required).
Tasks and Requirements:
1. Algorithm Implementation:
· Apply the linear classifier to image classification on the CIFAR-10 dataset using a self-coded implementation.
2. Performance Improvement Strategies:
· Analyze how to improve the performance of your implementation, including but not limited to tuning hyperparameters such as the number of training epochs, step size (learning rate), regularization terms, and regularization strength. Explore by yourself! (A minimal tuning sketch follows this list.)
3. Comprehensive Report:
· Prepare a detailed report encompassing the following sections:
· Background and Method Introduction: Provide an overview of the linear classifier and its application in image classification.
· Dataset and Tasks Description: Describe the CIFAR-10 dataset and outline the specific classification tasks undertaken.
· Algorithms Used: Elaborate on the implementation details of the algorithm. Attach screenshots of the code where necessary.
· Results: Present and discuss the classification results obtained.
· Methods of Improvement: Discuss the strategies employed to enhance the performance of your algorithm, focusing on accuracy and speed.
4. Submission Format:
· Submit your work as Jupyter Notebook (.ipynb) and HTML files, along with the final report.
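One common way to approach task 2 is a simple grid search over the learning rate and L2 regularization strength, scored on a held-out validation split. The sketch below is a minimal illustration only: it uses randomly generated stand-in data, and the names x_tr, y_tr, x_val, y_val, train_and_score, and the grid values are hypothetical choices, not part of the assignment or the sample code. Replace the random arrays with your real CIFAR-10 splits.

import numpy as np

# Hypothetical stand-in data; substitute your real training/validation splits.
rng = np.random.default_rng(0)
x_tr, y_tr = rng.normal(size=(500, 3072)), rng.integers(0, 10, 500)
x_val, y_val = rng.normal(size=(100, 3072)), rng.integers(0, 10, 100)

def train_and_score(lr, reg, epochs=50):
    """Train a softmax linear classifier and return validation accuracy."""
    W = rng.normal(size=(3072, 10)) * 1e-4
    b = np.zeros((1, 10))
    m = x_tr.shape[0]
    for _ in range(epochs):
        z = x_tr @ W + b
        p = np.exp(z - z.max(axis=1, keepdims=True))   # stable softmax
        p /= p.sum(axis=1, keepdims=True)
        p[range(m), y_tr] -= 1                          # dL/dz for cross-entropy
        W -= lr * (x_tr.T @ p / m + 2 * reg * W)        # data gradient + L2 term
        b -= lr * p.sum(axis=0, keepdims=True) / m
    preds = np.argmax(x_val @ W + b, axis=1)
    return np.mean(preds == y_val)

# Score every (lr, reg) pair and keep the best by validation accuracy.
best = max(((lr, reg, train_and_score(lr, reg))
            for lr in [1e-3, 1e-2, 1e-1]
            for reg in [0.0, 1e-4, 1e-3]),
           key=lambda t: t[2])
print(f"best lr={best[0]}, reg={best[1]}, val acc={best[2]:.3f}")

Tuning on a validation split rather than the test set keeps the final test accuracy an honest estimate of generalization.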
Grading Criteria:
· Implementation of the Algorithm without regularization (30%)
· Implementation of the Algorithm with regularization (30%) (a minimal L2 sketch follows this list)
· Algorithm Improvement (20%): Thoughtful consideration and implementation of ways to validate and improve your algorithm, including techniques like hyperparameter tuning and efficient coding practices.
· Report Quality (20%): Overall quality, clarity, organization, and thoroughness of the submitted report.
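The sample answer below omits regularization, yet the regularized implementation is worth 30% of the grade. A common choice is L2 regularization (weight decay): add reg * sum(W^2) to the loss and 2 * reg * W to the weight gradient. A minimal sketch follows, written against the cross_entropy_loss and linear_backward conventions of the sample notebook below; the function names and the reg parameter are hypothetical, and you should integrate the idea into your own code.

import numpy as np

def l2_regularized_loss(y_pred, y_true, weights, reg):
    # Data loss (average cross-entropy) plus the L2 penalty reg * sum(W^2).
    m = y_true.shape[0]
    data_loss = -np.sum(np.log(y_pred[range(m), y_true])) / m
    return data_loss + reg * np.sum(weights * weights)

def l2_regularized_grad_weights(X, y_true, y_pred, weights, reg):
    # Gradient of the data loss w.r.t. W, plus the L2 term 2 * reg * W.
    # The bias gradient is typically left unregularized.
    m = y_true.shape[0]
    dz = y_pred.copy()
    dz[range(m), y_true] -= 1
    dz /= m
    return X.T @ dz + 2 * reg * weights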
Please refer to the Lecture 3 slides for details of the linear classifier.
To help you get acquainted with fundamental coding techniques, such as acquiring datasets and performing basic operations, a sample answer is provided below (regularization not included). This sample serves as a guide to help you understand the basic framework. However, it is crucial that you develop and implement your own code and strategies. You are encouraged to experiment with different hyperparameters.
Linear Classifier Model
To build a linear classifier model using numpy and test it on the CIFAR-10 dataset, you'll follow these steps:
Load the CIFAR-10 dataset:
You'll need to load the dataset and preprocess it. CIFAR-10 contains 32x32 color images, and you'll need to flatten each image into a 1D vector, because the linear classifier operates on flat feature vectors rather than 3D image tensors.
Preprocess the data:
This includes reshaping the images and normalizing them.
Split the data:
Optionally, you can split off part of the training data as a validation set for hyperparameter tuning (a minimal split sketch follows these steps). CIFAR-10 comes with predefined training and test sets.
Train the linear model:
Train the linear classifier with gradient descent on the softmax cross-entropy loss.
Evaluate the model:
Finally, evaluate the model's accuracy on the test set.
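For the optional split step, one simple approach is to shuffle the training indices and reserve a fraction as validation data. This is a minimal sketch, assuming x_train and y_train numpy arrays shaped like those produced in the cell below; the function name, the 10% fraction, and the stand-in array sizes are illustrative assumptions.

import numpy as np

def train_val_split(x, y, val_fraction=0.1, seed=0):
    # Shuffle indices, then carve off the first val_fraction as validation.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_fraction)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return x[train_idx], y[train_idx], x[val_idx], y[val_idx]

# Example usage with small stand-in data:
x = np.zeros((1000, 3072)); y = np.zeros(1000, dtype=int)
x_tr, y_tr, x_val, y_val = train_val_split(x, y)
print(x_tr.shape, x_val.shape)  # (900, 3072) (100, 3072)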
In [1]:
import torch
import torchvision
import torchvision.transforms as transforms
import numpy as np

# Step 1: Load the CIFAR-10 dataset
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)

# Convert dataset to numpy arrays and flatten each image to a 1D vector
def to_numpy(dataset):
    data_loader = torch.utils.data.DataLoader(dataset, batch_size=len(dataset), shuffle=False)
    images, labels = next(iter(data_loader))
    images = images.numpy()
    labels = labels.numpy()
    images = images.reshape(images.shape[0], -1)
    return images, labels

x_train, y_train = to_numpy(trainset)
x_test, y_test = to_numpy(testset)

def initialize_parameters(input_size, num_classes):
    # Initialize weights with small random values and biases with zeros
    weights = np.random.randn(input_size, num_classes) * 0.0001
    bias = np.zeros((1, num_classes))
    return weights, bias

def linear_forward(X, weights, bias):
    # Linear forward computation: scores = XW + b
    return np.dot(X, weights) + bias

def softmax(z):
    # Softmax for multi-class classification (max-shifted for numerical stability)
    exp_z = np.exp(z - np.max(z, axis=1, keepdims=True))
    return exp_z / np.sum(exp_z, axis=1, keepdims=True)

def cross_entropy_loss(y_pred, y_true):
    # Average negative log-likelihood of the true classes
    m = y_true.shape[0]
    log_likelihood = -np.log(y_pred[range(m), y_true])
    loss = np.sum(log_likelihood) / m
    return loss

def linear_backward(X, y_true, y_pred, weights):
    # Backward computation: dL/dz = softmax(z) - one_hot(y), then chain to W and b
    m = y_true.shape[0]
    grad_softmax = y_pred
    grad_softmax[range(m), y_true] -= 1
    grad_softmax /= m
    grad_weights = np.dot(X.T, grad_softmax)
    grad_bias = np.sum(grad_softmax, axis=0, keepdims=True)
    return grad_weights, grad_bias

def update_parameters(weights, bias, grad_weights, grad_bias, learning_rate):
    # Gradient descent step
    weights -= learning_rate * grad_weights
    bias -= learning_rate * grad_bias
    return weights, bias

# Model dimensions
input_size = 32 * 32 * 3  # CIFAR-10 images are 32x32x3
num_classes = 10          # CIFAR-10 has 10 classes
weights, bias = initialize_parameters(input_size, num_classes)

def train(x_train, y_train, weights, bias, learning_rate, epochs):
    for epoch in range(epochs):
        # Forward pass
        output = linear_forward(x_train, weights, bias)
        y_pred = softmax(output)
        # Loss
        loss = cross_entropy_loss(y_pred, y_train)
        # Backward pass
        grad_weights, grad_bias = linear_backward(x_train, y_train, y_pred, weights)
        # Update parameters
        weights, bias = update_parameters(weights, bias, grad_weights, grad_bias, learning_rate)
        if (epoch + 1) % 10 == 0:
            print(f'Epoch {epoch + 1}, Loss: {loss}')
    return weights, bias

def test(x_test, y_test, weights, bias):
    # Forward pass on the test set
    output = linear_forward(x_test, weights, bias)
    y_pred = softmax(output)
    # Predicted class = index of the largest probability
    predictions = np.argmax(y_pred, axis=1)
    # Accuracy as a percentage
    accuracy = np.mean(predictions == y_test) * 100
    return accuracy

# Training parameters
learning_rate = 0.01
epochs = 100

# Training the model
weights, bias = train(x_train, y_train, weights, bias, learning_rate, epochs)

# Testing the model
accuracy = test(x_test, y_test, weights, bias)
accuracy
0.0%
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data\cifar-10-python.tar.gz
100.0%
Extracting ./data\cifar-10-python.tar.gz to ./data
Files already downloaded and verified
Epoch 10, Loss: 2.0969680140086897
Epoch 20, Loss: 2.024600949912772
Epoch 30, Loss: 1.982202292225029
Epoch 40, Loss: 1.952916619186967
Epoch 50, Loss: 1.931062103905428
Epoch 60, Loss: 1.9139495845819248
Epoch 70, Loss: 1.9000873992620306
Epoch 80, Loss: 1.8885638237262654
Epoch 90, Loss: 1.8787842588989903
Epoch 100, Loss: 1.8703425488034913
Out[1]:
36.64
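A note for the report's Algorithms Used section: the reason linear_backward subtracts 1 at the true-class entry is that, for softmax outputs, the gradient of the cross-entropy loss with respect to the raw scores collapses to the predicted probabilities minus the one-hot target. A short derivation sketch (a standard result; the symbols are generic, not taken from the lecture slides):

L = -\log p_y, \qquad p_k = \frac{e^{z_k}}{\sum_j e^{z_j}}

\frac{\partial L}{\partial z_k}
= -\frac{1}{p_y}\,\frac{\partial p_y}{\partial z_k}
= -\frac{1}{p_y}\, p_y\left(\mathbf{1}[k=y] - p_k\right)
= p_k - \mathbf{1}[k=y]

Averaging over a batch of m examples with scores Z = XW + b gives

\nabla_W L = \tfrac{1}{m}\, X^{\top}(P - Y), \qquad \nabla_b L = \tfrac{1}{m} \sum_i \left(p_i - y_i\right)

where P stacks the softmax rows and Y the one-hot labels; these correspond to grad_weights and grad_bias in the code above.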