ELEC 4806/5806 Introduction to Deep Learning and PyTorch/TensorFlow
Lab 4: Image Colorization

1. Experiment goals:
· Understand the structure of the Convolutional Autoencoder and the Generative Adversarial Network.
· Master the use of hyperparameters.
· Master various optimization, regularization, and weight-initialization methods.
· Improve the performance of the model by fine-tuning the network.
· For this lab, use PyTorch only.

2. Introduction:
In this lab, you will first customize a Convolutional Autoencoder for an image colorization task. The input of the model is a 3-channel grayscale image (all three channels are identical), and the output is a colorized image that should be as close as possible to the original color image.

This is an open-ended experiment; you may make any changes to the autoencoder. Note, however, that the model input must be a 3-channel grayscale image, while the provided images are all in color, so you need to preprocess the data: first convert each color image into a 3-channel grayscale image and use it as the model input, then train the model so that the colorized image it outputs differs as little as possible from the original image. As an alternative, you can add a GAN discriminator to your autoencoder to improve the quality of the colorization. This is relatively difficult, so it is not a mandatory requirement.

You will use the provided face dataset. It consists of 3000 color images of size 512 x 512: 2700 images for training and 300 images for testing. Please download the dataset "face-HQ.zip" from this link (https://drive.google.com/file/d/10Q4JI6ZZoJrlYje1w9hZEOvUM1-3uJUj/view?usp=sharing), and then upload it to your Google Colab online machine. Since this file is larger than 1 GB, it is better to upload it to your Google Drive first; then, each time you open Google Colab, you can mount your Google Drive folder and access the file directly from there (or add a shortcut to Drive).
(See this guide for mounting Drive: https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/.) Then extract the zip file as in the previous lab. The extracted folder contains two subfolders, "train" and "test", which store the training data (2700 images) and the test data (300 images), respectively. Please do not change the image size.

Please refer to the Deep Autoencoder and GAN modules to complete this lab, and pay attention to the following:
· The grade will be given subjectively based on the quality of the image reconstruction. The instructor will take the reconstruction quality that 80% of the students achieve as the minimum standard, based on the results submitted by all students.
· You can consider adding a GAN discriminator to improve the quality of the colorization. The discriminator can be used to discriminate and penalize low-quality outputs (since incorrectly colorized images can be considered fake images). You can add an adversarial loss to your original reconstruction loss, but its weight should be small. If you want to use WGAN, LSGAN, BEGAN, etc., you may refer to code on the Internet, but the source must be clearly stated. Plagiarizing code directly from other websites is not allowed.
· Please try to use Google Colab to complete this lab. Without a GPU, the training process may become unacceptably long.

Submission requirements (all team members need to submit):
1. Please submit the .ipynb file and make sure the TA or the instructor can easily run your code. If you use Google Colab, you can download the .ipynb file by clicking "File -> Download -> Download .ipynb". Please include the names of all team members in the file name.
2. Please submit a PDF version as well. You can use this website (https://htmtopdf.herokuapp.com/ipynbviewer/) to convert your .ipynb file to PDF.
3. Please record a video to demo your work.
In the video, please indicate where you have changed the code and state the reason for each change. In addition, please describe in detail the process of modifying the model, adjusting the parameters, the problems encountered, etc. All team members must participate in the recording; if the team members collaborate remotely, they may submit multiple videos.
4. When recording the video, if you simply read the comments you wrote in the code, or read from a script, you will not be able to demonstrate that the work was done independently by you, and the lab will therefore be judged as plagiarism.
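For the optional discriminator, the handout suggests adding a lightly weighted adversarial loss to the reconstruction loss. One way this combination might look, assuming a vanilla GAN objective on raw logits; the `autoencoder`/`discriminator` networks, the loss choices, and the 0.01 weight are all illustrative assumptions, not values prescribed by the lab:

```python
# Sketch of combining a reconstruction loss with a small adversarial loss.
# Network definitions and the weight value are illustrative assumptions.
import torch
import torch.nn as nn

recon_criterion = nn.L1Loss()           # reconstruction term (MSE also works)
adv_criterion = nn.BCEWithLogitsLoss()  # vanilla GAN loss on raw logits
adv_weight = 0.01                       # keep the adversarial term small

def generator_loss(discriminator, fake_color, real_color):
    # Reconstruction: the colorized output should match the original image.
    recon = recon_criterion(fake_color, real_color)
    # Adversarial: the discriminator should judge the output as real.
    logits = discriminator(fake_color)
    adv = adv_criterion(logits, torch.ones_like(logits))
    return recon + adv_weight * adv

def discriminator_loss(discriminator, fake_color, real_color):
    real_logits = discriminator(real_color)
    # detach() so this loss does not backpropagate into the autoencoder.
    fake_logits = discriminator(fake_color.detach())
    return (adv_criterion(real_logits, torch.ones_like(real_logits)) +
            adv_criterion(fake_logits, torch.zeros_like(fake_logits)))
```

In a training loop you would alternate: update the discriminator with `discriminator_loss`, then update the autoencoder with `generator_loss`. If you substitute WGAN, LSGAN, or BEGAN objectives from code found online, remember that the handout requires you to state the source clearly.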