Excel in Exams with Expert Deep Learning Homework Help Tutors.

Deep Learning, an advanced branch of machine learning, is revolutionizing the way we understand and interact with technology. With foundations rooted in simulating the human brain's ability to learn, Deep Learning algorithms drive groundbreaking advances in AI. However, mastering this complex subject and navigating its challenging assignments can be daunting for students. That's where Tutorbin steps in to illuminate your path to success.

Deep Learning is not just another subset of machine learning; it's an innovation inspired by our understanding of the human brain's architecture. These algorithms go beyond basic pattern recognition, venturing into the realm of intuition and decision-making by processing data in complex, layered networks. Whether it's recognizing speech, interpreting complex images, or making predictive analyses, Deep Learning algorithms are at the forefront of creating intelligent systems that mimic human thought processes.

The efficacy of Deep Learning hinges on its ability to process and analyze vast datasets. Artificial neural networks classify and interpret data by passing it through layers of weighted connections; each prediction is weighed against known, human-labeled outcomes, and the weights are continuously adjusted to enhance accuracy and mimic human cognition more closely.
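The weigh-and-adjust loop described above can be sketched in a few lines. This is an illustrative toy example only (the data, learning rate, and single-layer model are our own assumptions, not part of any course material): a one-layer network whose predictions are compared against known labels, with the weights nudged by gradient descent after each pass.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # known, "human-generated" labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # network's current predictions
    w -= lr * (X.T @ (p - y)) / len(y)      # weigh errors against the labels...
    b -= lr * np.mean(p - y)                # ...and adjust the parameters

accuracy = np.mean(((X @ w + b) > 0) == y.astype(bool))
```

After enough passes the predictions match the labels almost perfectly on this linearly separable toy set; deep networks repeat the same compare-and-adjust cycle across many stacked layers.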

Understanding the nuances of Deep Learning can be overwhelming. Tutorbin rises to the occasion, offering personalized assignment help tailored to demystify this intricate subject. Our pool of seasoned experts is dedicated to helping you comprehend both the theoretical underpinnings and the practical applications of Deep Learning. Beyond just completing assignments, our aim is to foster a deeper understanding, enabling you to focus on absorbing the practical aspects of this revolutionary field.

- Qualified Tutors: Our team consists of experts with extensive experience and qualifications in various disciplines, ready to offer unparalleled online assistance.
- Global Perspective: We're not just focused on one geographical area; our tutors understand and specialize in the educational needs of students from the USA, Canada, and beyond.
- Timely Deliveries: We're committed to providing your assignments on time, giving you ample opportunity to review and understand the material.
- Budget-Friendly Solutions: Our pricing model is designed with students in mind, ensuring our services are accessible without compromising on quality.
- 24/7 Support: Our experts are here to offer continuous support, ensuring you have the guidance you need, whenever you need it.

Getting the help you need is just a few clicks away:

- Sign Up: Register your query or assignment with ease.
- Expert Matching: We swiftly match you with the ideal expert for your specific needs.
- Solution Delivery: Engage with your tutor, solve your queries, and receive your polished assignment.

Whether you're grappling with the fundamentals or delving into advanced topics, Tutorbin stands ready to support your academic journey in Deep Learning and beyond. From Data Science to Programming and from Java to Python, our comprehensive assistance ensures you're always a step ahead.

Don't let the complexities of Deep Learning deter your academic and professional growth. With Tutorbin's expert guidance, embrace the opportunity to excel in this cutting-edge field. Contact us now through email or chat, and take the first step towards unlocking your full potential in Deep Learning.

**Q1:** Respond to the following in a minimum of 225 words: distinguish between deep learning, traditional machine learning, and artificial intelligence, and describe situations where you would utilize deep learning. Use APA style.

**Q2:** This week you learned about a variety of deep learning techniques. In the previous week, you considered 5 applications. This week, determine whether any of those applications are appropriate for deep learning techniques. Write a ½- to 1-page memo that justifies whether or not deep learning techniques should be used for each of these 5 applications:

- Diagnosis of a well-established but complex disease
- Price lookup subsystem for a high-volume merchandise seller
- Automated voice inquiry processing system
- Training of new employees
- Handwriting recognition

**Q3:** Use resources mentioned in the class and submit a screenshot of the "X-Y plot with a regression line" along with the rest of the desktop environment showing the system timestamp. You can submit either MATLAB or Python code. The dataset for the assignment is assignment_data.txt (do not use the dataset mentioned in the tutorial). Use the code from the hands-on tutorial; it is available here.

**Q4:** Please read the assignment instructions carefully:

1. The assignment is about Neural Networks and MLP (Topic 5).
2. You can work individually or in teams of 2 (no more than 2 students per team). Students from different sections can work together.
3. One submission per team is enough; both team members should be indicated on the Gradescope submission page. Multiple submissions by different team members may be considered copying the assignment (i.e., if both team members submit their work separately, it will be treated as a cheating attempt).
4. Before you solve the assignment, watch the video that explains the assignment (attached), and also watch the video that explains the Jupyter notebook in Topic 5: Neural Network (NNCircle).
The assignment relies on these videos.
5. Use the attached notebook to complete the assignment; you need to complete the code in the attached Jupyter notebook.
6. Complete the missing code as instructed in the code.
7. Upload your Jupyter notebook to Gradescope (Programming Assignment 2). Make sure to indicate your partner when you upload your code. Any other form of submission (e.g., email submission) is not accepted under any circumstances.
8. Try to start working on the assignment early. If you have any questions, search for the answer in the (assignment 2) channel in this team. If you cannot find the answer, post your question in the channel and the teaching assistant will answer it.

**Q5:** Objective: The goal of this assignment is to implement the K-Nearest Neighbors (KNN) algorithm for image classification using the CIFAR-10 dataset. You are required to approach this task using two methods:

1. Utilize a built-in function from an existing library, such as scikit-learn.
2. Implement the algorithm independently.

Tasks and Requirements:

1. Algorithm Implementation:
   - Employ the KNN algorithm for image classification on the CIFAR-10 dataset using both a pre-existing library function and a self-coded version.
   - Analyze and compare the performance of both implementations.
2. Performance Improvement Strategies:
   - Develop and apply strategies to enhance the performance of your self-implemented KNN algorithm, focusing on aspects like accuracy or computational efficiency.
3. Comprehensive Report: prepare a detailed report encompassing the following sections:
   - Background and Method Introduction: Provide an overview of the KNN algorithm and its application in image classification.
   - Dataset and Tasks Description: Describe the CIFAR-10 dataset and outline the specific classification tasks undertaken.
   - Algorithms Used: Elaborate on the implementation details of both the library-based and self-implemented algorithms.
     Attach screenshots of the code whenever necessary.
   - Results: Present and discuss the classification results obtained from both approaches.
   - Methods of Improvement: Discuss the strategies employed to enhance the performance of your algorithm, focusing on accuracy and speed.
4. Submission Format: Submit your work in the form of Jupyter Notebook (.ipynb) and HTML files, along with the final report.

Grading Criteria:

- Implementation of the Algorithm (40%): Demonstrated ability to effectively implement the KNN algorithm using both the library function and the self-coded method (20% for each).
- Preliminary Results (20%): Ability to achieve reasonable initial results from the implemented algorithms.
- Algorithm Improvement and Validation (20%): Thoughtful considerations and implementations for validating and improving your algorithm, including techniques like cross-validation, hyperparameter tuning, and efficient coding practices.
- Report Quality (20%): Overall quality, clarity, organization, and thoroughness of the submitted report.

Please reference the materials in the Lecture 2 slides for detailed information and instructions. To assist you in getting acquainted with fundamental coding techniques, such as acquiring datasets and performing basic operations, a sample answer is provided here. This sample serves as a guide to help you understand the basic framework; however, it is crucial that you develop and implement your own code and strategies. You are encouraged to experiment with different hyperparameters, and the implementation of cross-validation is highly recommended.

**Q6:** To do: This is related to our graduation project. We are using speech recognition algorithms in our system for error detection in the pronunciation of letters.
What we are confused about is the part after completing the acoustic model and the data processing pipeline: after extracting the results from the acoustic model and converting our audio data pipeline to the required format for the CRNN model, do we need to retrain the same model from scratch to achieve more accurate and faster results? After this step, how do we integrate this model into our program to receive audio data from users? We only need to understand how to achieve this part. We have already documented how the DNNs will work; we are not in the implementation phase yet, but still in the documentation phase, which involves describing and explaining how these algorithms will work.

**Q7:** Objective: This project aims to leverage the deep learning techniques explored throughout the course and to apply these methods to a domain of personal interest or academic relevance. Through this endeavor, you will demonstrate your ability to conceptualize, develop, and refine a deep learning model, showcasing your analytical and problem-solving skills.

Project Phases:

1. Dataset Selection and Task Definition:
   - Identify and select a dataset that aligns with your interests or field of study. Datasets can be found in public repositories (e.g., the UCI Machine Learning Repository) or as part of deep learning frameworks.
   - Clearly define the task you aim to address with your selected dataset. This could involve prediction, classification, detection, etc.
2. Model Construction:
   - Decide on an appropriate model architecture for your task. Consider whether a pre-existing model could be adapted through transfer learning, or if designing a new model from scratch is more suitable.
   - Justify your model choice based on its relevance and potential effectiveness for the task at hand.
3. Model Training:
   - Train your model using the chosen dataset, applying best practices in data preprocessing, augmentation (if applicable), and parameter tuning.
4. Model Evaluation and Refinement:
   - Evaluate your model's performance using appropriate metrics.
   - Implement strategies for performance improvement, such as hyperparameter tuning, regularization techniques, model ensembling, batch normalization, etc.

Tasks and Requirements:

- Algorithm Implementation: Detailed implementation of the chosen deep learning model, focusing on the practical application of theoretical concepts.
- Performance Evaluation: Comprehensive assessment of the model's performance, employing both quantitative metrics and qualitative analysis.
- Improvement Strategies: Exploration and application of various techniques to enhance model accuracy and efficiency.

Comprehensive Report: your final submission should include a thorough report covering:

- Background and Methodology: Overview of the chosen task, its relevance to your field, and an introduction to the methodology.
- Dataset and Task Description: Detailed description of the dataset and a clear definition of the specific task(s) undertaken.
- Implementation Details: Elaborate on the model architecture, training process, and any unique implementation challenges. Include code snippets or screenshots as necessary.
- Results: Presentation and discussion of the results achieved, including an analysis of model performance.
- Improvement Methods: Discussion of the applied strategies for performance enhancement, along with their outcomes.

Submission Guidelines:

- Submit your project as a Jupyter Notebook (.ipynb) along with an HTML export of the notebook.
- Include the comprehensive report as a separate document.
- Ensure each component is clearly labeled and organized.

Grading Criteria:

- Technical Implementation (40%): Complexity, creativity, and correctness of the deep learning model implementation.
- Analysis and Evaluation (30%): Depth and thoroughness of the performance evaluation, including the analysis of results and the implementation of improvements.
- Report and Presentation (30%): Clarity, comprehensiveness, and professionalism of the written report and code documentation. The ability to communicate complex ideas effectively will be key.

**Q8:** Introduction: In this lab, you will customize a convolutional neural network for image classification tasks. You will use the provided flowers-6 dataset. It consists of 480 color images of different sizes in 6 classes (80 images per class): buttercup, daisy, fritillary, snowdrop, sunflower, and windflower. For each class, there are 73 training images and 7 test images. Please download the dataset "flowers6.zip" from Canvas and then upload it to the Google Colab online computer.

**Q9:** Please try to use Google Colab to complete this lab. For the specific steps, please follow the template file (lab1_student.ipynb). There are six steps in total; the code for the first two steps has been provided to you, and you only need to follow the prompts to complete the remaining four. Four images (1.png, 2.png, 3.png, 4.png) will be used in this lab. You can download them from Canvas and upload them to Google Colab. Since this lab does not need a GPU, you can directly use the CPU mode.

Step 1: import the required libraries, then read all the images in the current directory into a list:

```python
import cv2
import torch
import glob
import numpy as np

img_list = []  # a list used to save all the images in the current directory
for img in glob.glob("*.png"):
    image = cv2.imread(img)
    img_list.append(image)
```

Step 2:
Read in the images (1.png, 2.png, 3.png, 4.png) in the current directory and save them to a list (so the length of this list is 4). Each element in this list is a numpy array (an image). All the images have the same shape (height, width, channel), which is (555, 695, 3) in this lab.

Submission requirements (all team members need to submit):

1. Please submit the .ipynb file and make sure the TA or the instructor can easily run your code. If you use Google Colab, you can download the .ipynb file by clicking "File → Download → Download .ipynb". Please include the names of all team members in the file name.
2. Please submit a PDF version as well. You can use this website (https://htmtopdf.herokuapp.com/ipynbviewer/) to convert your .ipynb file to PDF.
3. Please record a video to demo your work. In the video, please explain your code line by line; all team members must participate in the recording. If the team members cooperate remotely, they can submit multiple videos.

**Q10:** Objective: The goal of this assignment is to implement the linear classifier algorithm for image classification using the CIFAR-10 dataset. You will:

1. Utilize built-in packages or libraries such as PyTorch and TensorFlow to perform the forward and backward computations (optional, not graded).
2. Implement the algorithm, i.e., the forward and backward computations, independently (required).

Tasks and Requirements:

1. Algorithm Implementation:
   - Employ the linear classifier for image classification on the CIFAR-10 dataset using a self-coded version.
2. Performance Improvement Strategies:
   - Analyze how to improve the performance of your implementation, including but not limited to tuning hyperparameters such as the number of training epochs, the step size (learning rate), the regularization terms, and the strength of regularization. Explore by yourself!
3. Comprehensive Report: prepare a detailed report encompassing the following sections:
   - Background and Method Introduction: Provide an overview of the linear classifier and its application in image classification.
   - Dataset and Tasks Description: Describe the CIFAR-10 dataset and outline the specific classification tasks undertaken.
   - Algorithms Used: Elaborate on the implementation details of the algorithm. Attach screenshots of the code whenever necessary.
   - Results: Present and discuss the classification results obtained.
   - Methods of Improvement: Discuss the strategies employed to enhance the performance of your algorithm, focusing on accuracy and speed.
4. Submission Format:
   - Submit your work in the form of Jupyter Notebook (.ipynb) and HTML files, along with the final report.

Grading Criteria:

- Implementation of the Algorithm without regularization (30%)
- Implementation of the Algorithm with regularization (30%)
- Algorithm Improvement (20%): Thoughtful considerations and implementations for validating and improving your algorithm, including techniques like hyperparameter tuning and efficient coding practices.
- Report Quality (20%): Overall quality, clarity, organization, and thoroughness of the submitted report.

Please reference the materials in the Lecture 3 slides for details of the linear classifier. To assist you in getting acquainted with fundamental coding techniques, such as acquiring datasets and performing basic operations, a sample answer is provided here (regularization not included). This sample serves as a guide to help you understand the basic framework; however, it is crucial that you develop and implement your own code and strategies. You are encouraged to experiment with different hyperparameters.

Linear Classifier Model (sample notebook): to build a linear classifier model using numpy and test it on the CIFAR-10 dataset, follow these steps:

- Load the CIFAR-10 dataset: load the dataset and preprocess it. CIFAR-10 contains 32x32 color images, and you'll need to reshape them into a 1D array because the numpy model does not accept 3D data for training.
- Preprocess the data: this includes reshaping the images and normalizing them.
- Split the data: optionally, you can split the dataset into training and testing sets.
(CIFAR-10 usually comes with predefined training and testing sets.)

- Train the linear model: train the linear classifier.
- Evaluate the model: finally, evaluate the model's performance on the test dataset.

```python
import torch
import torchvision
import torchvision.transforms as transforms
import numpy as np

# Step 1: Load the CIFAR-10 dataset
transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)

# Convert the datasets to numpy arrays
def to_numpy(dataset):
    data_loader = torch.utils.data.DataLoader(dataset, batch_size=len(dataset), shuffle=False)
    images, labels = next(iter(data_loader))
    images = images.numpy()
    labels = labels.numpy()
    images = images.reshape(images.shape[0], -1)
    return images, labels

x_train, y_train = to_numpy(trainset)
x_test, y_test = to_numpy(testset)

def initialize_parameters(input_size, num_classes):
    # Initialize weights and biases
    weights = np.random.randn(input_size, num_classes) * 0.0001
    bias = np.zeros((1, num_classes))
    return weights, bias

def linear_forward(X, weights, bias):
    # Linear forward computation
    return np.dot(X, weights) + bias

def softmax(z):
    # Softmax function for multi-class classification
    exp_z = np.exp(z - np.max(z, axis=1, keepdims=True))
    return exp_z / np.sum(exp_z, axis=1, keepdims=True)

def cross_entropy_loss(y_pred, y_true):
    # Cross-entropy loss function
    m = y_true.shape[0]
    log_likelihood = -np.log(y_pred[range(m), y_true])
    return np.sum(log_likelihood) / m

def linear_backward(X, y_true, y_pred, weights):
    # Backward computation to get gradients
    m = y_true.shape[0]
    grad_softmax = y_pred
    grad_softmax[range(m), y_true] -= 1
    grad_softmax /= m
    grad_weights = np.dot(X.T, grad_softmax)
    grad_bias = np.sum(grad_softmax, axis=0, keepdims=True)
    return grad_weights, grad_bias

def update_parameters(weights, bias, grad_weights, grad_bias, learning_rate):
    # Update parameters using gradients
    weights -= learning_rate * grad_weights
    bias -= learning_rate * grad_bias
    return weights, bias

def train(x_train, y_train, weights, bias, learning_rate, epochs):
    for epoch in range(epochs):
        # Forward pass
        output = linear_forward(x_train, weights, bias)
        y_pred = softmax(output)
        # Loss
        loss = cross_entropy_loss(y_pred, y_train)
        # Backward pass
        grad_weights, grad_bias = linear_backward(x_train, y_train, y_pred, weights)
        # Update parameters
        weights, bias = update_parameters(weights, bias, grad_weights, grad_bias, learning_rate)
        if (epoch + 1) % 10 == 0:
            print(f'Epoch {epoch + 1}, Loss: {loss}')
    return weights, bias

def test(x_test, y_test, weights, bias):
    # Forward pass, then compute accuracy from the predicted label indexes
    output = linear_forward(x_test, weights, bias)
    y_pred = softmax(output)
    predictions = np.argmax(y_pred, axis=1)
    return np.mean(predictions == y_test) * 100

# Example parameters
input_size = 32 * 32 * 3  # CIFAR-10 images are 32x32x3
num_classes = 10          # CIFAR-10 has 10 classes
weights, bias = initialize_parameters(input_size, num_classes)

# Training parameters
learning_rate = 0.01
epochs = 100

# Training and testing the model
weights, bias = train(x_train, y_train, weights, bias, learning_rate, epochs)
accuracy = test(x_test, y_test, weights, bias)
accuracy
```

Sample output:

```
Epoch 10, Loss: 2.0969680140086897
Epoch 20, Loss: 2.024600949912772
Epoch 30, Loss: 1.982202292225029
Epoch 40, Loss: 1.952916619186967
Epoch 50, Loss: 1.931062103905428
Epoch 60, Loss: 1.9139495845819248
Epoch 70, Loss: 1.9000873992620306
Epoch 80, Loss: 1.8885638237262654
Epoch 90, Loss: 1.8787842588989903
Epoch 100, Loss: 1.8703425488034913
Out[1]: 36.64
```

**Q11:** Objective: The goal of this assignment is to implement a 2-layer neural network for image classification using the CIFAR-10 dataset. You will:

1. Utilize built-in packages or libraries such as PyTorch and TensorFlow to perform the forward and backward computations (optional, not graded).
2. Implement the algorithm, i.e., the forward and backward computations, independently (required).

Tasks and Requirements:

1. Algorithm Implementation:
   - Employ the 2-layer NN classifier for image classification on the CIFAR-10 dataset using a self-coded version.
2. Performance Improvement Strategies:
   - Analyze how to improve the performance of your implementation, including but not limited to tuning hyperparameters such as the number of nodes in the hidden layer, the regularization terms, and the strength of regularization. Explore by yourself!
3. Comprehensive Report: prepare a detailed report encompassing the following sections:
   - Background and Method Introduction: Provide an overview of the 2-layer NN and its application in image classification.
   - Dataset and Tasks Description: Describe the CIFAR-10 dataset and outline the specific classification tasks undertaken.
   - Algorithms Used: Elaborate on the implementation details of the algorithm. Attach screenshots of the code whenever necessary.
   - Results: Present and discuss the classification results obtained.
   - Methods of Improvement: Discuss the strategies employed to enhance the performance of your algorithm, focusing on hyperparameter tuning.
4. Submission Format:
   - Submit your work in the form of Jupyter Notebook (.ipynb) and HTML files, along with the final report. Submit each file separately.
Grading Criteria: · Implementation of the Algorithm with regularization (40%): · Algorithm Improvement (40%): Thoughtful considerations and implementations for validating and improving your algorithm, including techniques like hyper-parameter tuning, and efficient coding practices. . Report Quality (20%): Overall quality, clarity, organization, and thoroughness of the submitted report.See Answer**Q12:**ELEC 4806/5806 Introduction to Deep Learning and PyTorch/TensorFlow Lab 3 Customize a Convolutional Neural Network for image classification tasks 1. Experiment goal: . Understand the structure of Convolutional Neural Networks. · Master the use of hyperparameters. · Master various optimization, regularization and weight initialization methods. . Improve the performance of the model by fine-tuning the CNN network . TensorFlow code 2. Introduction: In this lab, you will customize a convolutional neural network for image classification tasks. You will use the provided flowers-6 dataset. It consists of 480 color images with different sizes in 6 classes (80 images per class). For each class, there are 73 training images and 7 test images. buttercup daisy fritillary snowdrop sunflower Windflower Please download the dataset "flowers6.zip" from Canvas and then upload it to the Google Colab online computer as shown below. + Code = Files X ó [1] sample_data flowers6.zip Then extract the zip file using the following command: [3] !unzip flowers6.zip You can find the "flower6" folder under the current directory. If you cannot see what shows below, please refresh it. flowers6 sample_data flowers6.zip Upload Refresh New file New folder And its sub-folders are as shown in the figure below. The first-level contains two subfolders "train" and "test", which are used to store data for training and testing respectively. Each of them has 6 secondary sub-folders, which are used to store 6 kinds of flower images. 
flowers6 test buttercup daisy fritillary snowdrop sunflower windflower train buttercup daisy 1 fritillary snowdrop sunflower windflower sample_data flowers6.zip You can use the following code to construct the train loader and the test loader. In this lab, the provided dataset is relatively small. However, training deep learning neural network models on more data can result in more skillful models. So here we introduce a powerful technique which is called "Data Augmentation". Image data augmentation is a technique that can be used to artificially expand the size of a training dataset by creating modified versions of images in the dataset. It can improve the ability of the fit models to generalize what they have learned to new images. train_dir = 'flowers6/train' test_dir = 'flowers6/test' train_transforms = transforms . Compose ( [transforms. RandomRotation(30), transforms . RandomResizedCrop (224), transforms . RandomHorizontalFlip (), transforms . ToTensor (), transforms. Normalize((0.485,0.456,0.406), (0.229,0.224,0.225))]) test_transforms = transforms . Compose ( [transforms . Resize (256), transforms . CenterCrop (224), transforms . ToTensor (), transforms . Normalize((0.485,0.456,0.406), (0.229,0.224,0.225))]) train_data = datasets. ImageFolder(train_dir, transform = train_transforms) test_data = datasets. ImageFolder(test_dir, transform = test_transforms) train_loader = torch. utils. data. DataLoader (train_data, batch_size = batch_size_train, shuffle = True) test_loader = torch.utils. data. DataLoader (test_data, batch_size = batch_size_test, shuffle=True) Pytorch provides very convenient API functions. You only need to add what the data augmentation you want in the transpose.compose function. Here we use three forms: random rotation, random crop and random horizontal flip. ImageFolder function assumes that all files are stored in folders, each folder stores images of the same category, and the folder name is the category name. 
The CNN model requires the input images to have the same size. So the images here are all resized to 224x224. Please refer to the CNN module on canvas to complete this lab. But you need to pay attention to the following: · The minimum requirement for model performance is that the test accuracy is greater than 82% (students registered ELEC 4806) and 85% (students who registered ELEC 5806) · To achieve good performance, you can consider: o Add more convolution layers and pooling layers. o You may have only 2-3 fully connected layers at the end of your model. o You can try to use a relatively smaller learning rate and train more epochs. Choose a smaller batch size, otherwise you will encounter out-of-memory issue. o o Try different combination of activation function, optimization method, and regularization method. o Please be patient fine tuning the model. This is the most basic requirement for engaging in deep learning work. · Since the image data in this lab are all color images, you also need to modify some other codes, such as the "plot_image" function, etc. · When evaluating the model, you must first use the .eval() function as shown in the figure below, otherwise, batch normalization and dropout will also be used in the test. The performance of the model will be very bad. correct = 0 net.eval() for data, target in test_loader: data, target = data.to(device), target.cuda() logits = net(data) pred = logits.argmax(dim=1) correct += pred.eq(target).float().sum().item() total_num = len(test_loader.dataset) acc = correct / total_num print('test acc:', acc) · Since the three channels (RGB) of the image are all normalized when constructing the test_loader, thus, it is necessary to inversely transform the data when displaying the image. Please refer to the code in the figure below. 
```python
inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.225],
)

x, y = next(iter(test_loader))
x, y = x.to(device), y.to(device)
out = net(x)
pred = out.argmax(dim=1)
x = inv_normalize(x)
plot_image(x.detach().cpu().numpy(), pred, y)
```

[Figure: a sample test image displayed after inverse normalization, with Prediction = 0 and Label = 0.]

· Please use only what you learned in the previous lectures to complete this lab. Plagiarizing code directly from other websites is not allowed. Likewise, the methods you use to improve model performance should come from the lectures. The purpose of this is to prevent plagiarism.
· Please try to use Google Colab to complete this lab. Without a GPU, the training process may become unacceptably long.

Submission requirements (all team members need to submit):
1. Please submit the .ipynb file. If you use Google Colab, you can download the .ipynb file by clicking "File->Download->Download .ipynb". Please include the names of all team members in the file name.
2. Please submit a pdf version as well. You can use this website (https://htmtopdf.herokuapp.com/ipynbviewer/) to convert your .ipynb file to pdf.
3. Please record a video to demo your work. In the video, please indicate where you have changed the code and state the reason for each change. In addition, please describe in detail the process of modifying the model, adjusting the parameters, the problems encountered, etc. All team members must participate in the recording. If the team members cooperate remotely, they can submit multiple videos.
4. When you are recording the video, if you just read the comments you wrote in the code, or read from a script, then you will not be able to prove that the work was done independently by you, and the lab will be judged as plagiarism.

ELEC 4806/5806 Introduction to Deep Learning and PyTorch/TensorFlow
The state-of-the-art CNNs
0. ILSVRC winners
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) evaluates algorithms for object detection and image classification at large scale.

1. LeNet
LeNet (here referring to LeNet-5) was one of the earliest convolutional neural networks and promoted the development of deep learning. LeNet possesses the basic units of a convolutional neural network, such as the convolutional layer, pooling layer, and fully connected layer, laying a foundation for the future development of convolutional neural networks.

2. AlexNet
AlexNet, an 8-layer CNN, won the ImageNet Large Scale Visual Recognition Challenge 2012 by a phenomenally large margin. This network showed, for the first time, that learned features can transcend manually designed features, breaking the previous paradigm in computer vision.
[Figure: from LeNet (left) to AlexNet (right)]

3. VGG-16
VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". The model achieves 92.7% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes, and was one of the famous models submitted to ILSVRC-2014. It improves over AlexNet by replacing large-kernel filters (11x11 and 5x5 in the first and second convolutional layers, respectively) with multiple 3x3 filters stacked one after another. VGG16 was trained for weeks using NVIDIA Titan Black GPUs.

4. GoogleNet
In this architecture, along with going deeper (22 layers, compared to VGG's 19), the researchers introduced a novel building block called the Inception module. Deep architectures, and specifically GoogLeNet (22 layers), are in danger of the vanishing gradient problem during training with the back-propagation algorithm.
The engineers of GoogLeNet addressed this issue by adding auxiliary classifiers in the intermediate layers as well, so that the final loss is a combination of the intermediate losses and the final loss. This is why the network has a total of three loss layers, unlike the usual single loss layer at the end of a network.

4. GoogleNet
An Inception module is an image-model block that aims to approximate an optimal local sparse structure in a CNN. Put simply, it allows us to use multiple filter sizes in a single block, instead of being restricted to one; the resulting feature maps are concatenated and passed on to the next layer. A 1x1 convolution simply maps an input pixel, with all its channels, to an output pixel, without looking at anything around it. It is often used to reduce the number of depth channels, since multiplying volumes with extremely large depths is slow.

5. ResNet
ResNet, short for Residual Network, is a type of neural network introduced in 2015. It won 1st place in the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, and also won 1st place in the ILSVRC and COCO 2015 competitions in ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. Replacing the VGG-16 layers in Faster R-CNN with ResNet-101, the authors observed relative improvements of 28%. ResNet also efficiently trained networks with 100 and even 1000 layers.

5. ResNet
Usually, to solve a complex problem, we stack additional layers in a deep neural network, which results in improved accuracy and performance. The intuition behind adding more layers is that these layers progressively learn more complex features. However, it has been found that there is a maximum threshold for depth with the traditional convolutional neural network model.
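ResNet's remedy for this depth problem is the residual block described on the following slides. A minimal PyTorch sketch (an illustration under the design described below, not the lab's exact model; sizes are made up) also shows the 1x1 channel-mapping trick from the Inception discussion reused in the shortcut path:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convs with batch norm; the input is added back before the final ReLU."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # When the shape changes, a 1x1 conv transforms the input for the addition.
        self.shortcut = None
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)

    def forward(self, x):
        identity = x if self.shortcut is None else self.shortcut(x)
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + identity)  # skip connection joins before the final ReLU

block = ResidualBlock(64, 128, stride=2)   # channel change -> 1x1 shortcut is used
y = block(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 128, 16, 16])
```

If the block learns to output zero from its convolutions, the skip connection makes it an identity mapping, which is exactly why extra residual layers do not degrade performance.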
5. ResNet
The problem of training very deep networks has been alleviated by the introduction of ResNet, or residual networks, which are built from residual blocks.
[Figure: a regular block (left) and a residual block (right)]
We want the deep network to perform at least as well as the shallow network, without the degradation we observed with plain neural networks (those without residual blocks). One way of achieving this is for the additional layers in the deep network to learn the identity function, so that their output equals their input; this prevents them from degrading performance even with extra layers.

5. ResNet
ResNet follows VGG's full 3x3 convolutional layer design. The residual block has two 3x3 convolutional layers with the same number of output channels, each followed by a batch normalization layer and a ReLU activation function. We then skip these two convolution operations and add the input directly before the final ReLU activation. This design requires that the output of the two convolutional layers have the same shape as the input, so that they can be added together. If we want to change the number of channels, we introduce an additional 1x1 convolutional layer to transform the input into the desired shape for the addition.
[Figure: ResNet block with and without 1x1 convolution]

6. Customize a ResNet-18 for CIFAR-10 classification task
6.1 ResNet-18 Architecture
Training Dataset: the sample of data used to fit the model, i.e., the actual data used to train the model (the weights and biases in the case of a neural network). The model sees and learns from this data.
Validation Dataset: the sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters.
The evaluation becomes more biased as skill on the validation dataset is incorporated into the model configuration.
Test Dataset: the sample of data used to provide an unbiased evaluation of a final model fit on the training dataset.

6.2 Prepare a validation dataset

6.3 Scheduling a learning rate decay strategy
Learning rate decay is a technique for training modern neural networks: training starts with a large learning rate, which is then slowly reduced (decayed) until a local minimum is reached. It is empirically observed to help both optimization and generalization.

6.4 Data Augmentation
Image data augmentation artificially expands the size of a training dataset by creating modified versions of its images, improving the performance of the model and its ability to generalize.

7. Transfer learning
Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. It is an optimization that allows rapid progress or improved performance when modeling the second task.
7. Transfer learning and other state-of-the-art models
SqueezeNet, DenseNet, Inception v3, ShuffleNet v2, MobileNetV2, MobileNetV3, ResNeXt, Wide ResNet, MNASNet

```python
import torchvision.models as models

resnet18 = models.resnet18(pretrained=True)
alexnet = models.alexnet(pretrained=True)
squeezenet = models.squeezenet1_0(pretrained=True)
vgg16 = models.vgg16(pretrained=True)
densenet = models.densenet161(pretrained=True)
inception = models.inception_v3(pretrained=True)
googlenet = models.googlenet(pretrained=True)
shufflenet = models.shufflenet_v2_x1_0(pretrained=True)
mobilenet_v2 = models.mobilenet_v2(pretrained=True)
mobilenet_v3_large = models.mobilenet_v3_large(pretrained=True)
mobilenet_v3_small = models.mobilenet_v3_small(pretrained=True)
resnext50_32x4d = models.resnext50_32x4d(pretrained=True)
wide_resnet50_2 = models.wide_resnet50_2(pretrained=True)
mnasnet = models.mnasnet1_0(pretrained=True)
```

ELEC 4806/5806 Introduction to Deep Learning and PyTorch/TensorFlow
Lab 4: Image Colorization

1. Experiment goal:
· Understand the structure of the Convolutional Autoencoder and the Generative Adversarial Network.
· Master the use of hyperparameters.
· Master various optimization, regularization, and weight initialization methods.
· Improve the performance of the model by fine-tuning the network.
· For this lab, use only PyTorch.

2. Introduction:
In this lab, you will first customize a Convolutional Autoencoder for an image colorization task. The input of the model is a 3-channel grayscale image (the three channels are identical), and the output is a colorized image that should be as close as possible to the original.
[Figure: AutoEncoder diagram, grayscale input to colorized output]
This is an open experiment; you can make any changes to the autoencoder. But please note that the input of the model is a 3-channel grayscale image, while the provided images are all color images, so you need to preprocess the data.
You must first convert each color image into a 3-channel grayscale image and use it as the input of the model; the color image output by the model should differ as little as possible from the original image. Alternatively, you can add a GAN discriminator to your autoencoder to improve the quality of the colorization. This is relatively difficult, so it is not a mandatory requirement.

You will use the provided face dataset. It consists of 3000 color images of size 512 x 512: 2700 images for training and 300 for testing. Please download the dataset "face-HQ.zip" from this link (https://drive.google.com/file/d/10Q4JI6ZZoJrlYje1w9hZEOvUM1-3uJUj/view?usp=sharing) and then upload it to your Google Colab machine. Since the file is larger than 1GB, it is better to upload it to your Google Drive first; then, each time you open Google Colab, you can mount your Google Drive folder and access the file directly from there (or add a shortcut to Drive; see https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/ ). Then extract the zip file as in the previous lab. The extracted folder contains two subfolders, "train" and "test", which store the data for training (2700 images) and testing (300 images) respectively. Please do not change the image size.

Please refer to the Deep Autoencoder and GAN modules to complete this lab, paying attention to the following:
· The grade will be given subjectively based on the quality of the image reconstruction. The instructor will take the reconstruction quality that 80% of the students can achieve as the minimum standard, based on the results submitted by all students.
· You can consider adding a GAN discriminator to improve the quality of the image colorization.
The discriminator can be used to discriminate against and penalize low-quality outputs (since incorrectly colorized images can be considered fake). You can add an adversarial loss to your original reconstruction loss, but its weight should be small. If you want to use WGAN, LSGAN, BEGAN, etc., you can refer to code on the Internet, but the source must be clearly stated.
· Plagiarizing code directly from other websites is not allowed.
· Please try to use Google Colab to complete this lab. Without a GPU, the training process may become unacceptably long.

Submission requirements (all team members need to submit):
1. Please submit the .ipynb file and make sure the TA or the instructor can easily run your code. If you use Google Colab, you can download the .ipynb file by clicking "File->Download->Download .ipynb". Please include the names of all team members in the file name.
2. Please submit a pdf version as well. You can use this website (https://htmtopdf.herokuapp.com/ipynbviewer/) to convert your .ipynb file to pdf.
3. Please record a video to demo your work. In the video, please indicate where you have changed the code and state the reason for each change. In addition, please describe in detail the process of modifying the model, adjusting the parameters, the problems encountered, etc. All team members must participate in the recording. If the team members cooperate remotely, they can submit multiple videos.
4. When you are recording the video, if you just read the comments you wrote in the code, or read from a script, then you will not be able to prove that the work was done independently by you, and the lab will be judged as plagiarism.
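The loss combination suggested in the lab above (reconstruction loss plus a lightly weighted adversarial term) can be sketched as follows. Everything here is an illustrative stand-in: netG, netD, the tensors, and the 0.01 weight are made-up placeholders, not the lab's provided code.

```python
import torch
import torch.nn as nn

netG = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))            # toy "autoencoder"
netD = nn.Sequential(nn.Conv2d(3, 1, 4, stride=2, padding=1),  # toy discriminator
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())

recon_criterion = nn.L1Loss()
adv_criterion = nn.BCEWithLogitsLoss()
adv_weight = 0.01  # keep the adversarial term small, per the lab notes

gray = torch.rand(2, 3, 64, 64)    # 3-channel grayscale input (stand-in)
target = torch.rand(2, 3, 64, 64)  # original color image (stand-in)

fake = netG(gray)
d_out = netD(fake)  # discriminator score for the generated images

# Generator objective: reconstruct the color image, plus a small bonus
# for fooling the discriminator (labels of 1 = "real").
g_loss = (recon_criterion(fake, target)
          + adv_weight * adv_criterion(d_out, torch.ones_like(d_out)))
g_loss.backward()
```

The discriminator would be trained in a separate step on real vs. generated images; only the generator-side combination is shown here.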

**Get** Instant **Deep Learning Solutions** From
**TutorBin App** Now!

Get personalized homework help in your pocket! Enjoy your $20 reward upon registration!

TutorBin believes that distance should never be a barrier to learning. With over 500,000 orders delivered and 100,000+ happy customers, TutorBin has become a name that keeps learning fun in the UK, USA, Canada, Australia, Singapore, and UAE.