Data Warehousing and Machine Learning

7 July 2021

What is CNN Part 2

Filed under: Data Warehousing — Vincent Rainardi @ 4:54 am

In the first part (link), after trying 28 different models, the conclusion was that the best models were #26 and #28. Model #28 has more validation fluctuation, but it has half the number of parameters.

But as we can see above, both model #26 and model #28 suffer from overfitting, meaning that the training accuracy is very high (about 90%) while the validation accuracy is very low (about 50%). This big gap of about 40% is a clear indication of overfitting. To solve this we need to do image augmentation, i.e. we need to randomly rotate, flip, zoom in and out, and shift the images, like this:

The top-left image is the original image. The other 11 images are generated using random rotation, random flip, random zoom and random contrast. I put in 3 sets so we can see how these 4 transformations (rotate, flip, zoom and contrast, applied in combination) interact on the augmented images. Jason Brownlee gave a good tutorial on this: link.
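As an illustration, here is a minimal sketch of those four augmentations using Keras preprocessing layers. It assumes a recent TensorFlow 2.x (where the layers live directly under tf.keras.layers), and the factor values are placeholders, not the ones used for the images in this post:

import tensorflow as tf

# The four random transformations described above, chained into one pipeline.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),       # rotate by up to +/-10% of a full turn
    tf.keras.layers.RandomFlip("horizontal"),  # randomly mirror left-right
    tf.keras.layers.RandomZoom(0.2),           # zoom in or out by up to 20%
    tf.keras.layers.RandomContrast(0.2),       # vary the contrast by up to 20%
])

def augmented_variants(image, n=11):
    """Generate n augmented variants of a single image tensor (height x width x channels)."""
    batch = tf.expand_dims(image, axis=0)      # the layers expect a batch dimension
    return [augment(batch, training=True)[0] for _ in range(n)]

Note that these layers only transform the images when training=True (or when they sit inside a model that is being trained), so the validation images are left untouched.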

After doing image augmentation the result is as follows:

The gap between them is closing, but both accuracies are still low! The best one, with the narrowest gap and the highest validation accuracy, is A3: training accuracy = 55% and validation accuracy = 54%.

In a situation like this (i.e. the accuracy is still low after doing image augmentation), we need to check the number of training images in each class. If one class has only a few images while another class has lots of images, the model training will suffer from the “class imbalance” problem. Shubrashankh Chatterjee explained this very well in his article: link.
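As a quick sanity check, here is a sketch of counting the images per class. It assumes the training set is stored in one subfolder per class (the usual layout for Keras' image_dataset_from_directory); the path "train" is just a placeholder:

from pathlib import Path

# Count the images in each class subfolder.
train_dir = Path("train")
counts = {d.name: sum(1 for f in d.iterdir() if f.is_file())
          for d in train_dir.iterdir() if d.is_dir()}

# Print the classes from smallest to largest so any imbalance is obvious.
for class_name, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{class_name}: {n} images")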

Basically we auto-generate additional images using image augmentation (rotate, flip, zoom, contrast, shift, etc.) so that each class has the same number of images; one way of doing this is sketched below.
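This is only a sketch, not the exact code used for this post. It reuses the hypothetical augment pipeline and counts dictionary from the earlier sketches, the same one-folder-per-class layout, and assumes a recent TensorFlow where load_img, img_to_array and save_img live under tf.keras.utils:

import tensorflow as tf
from pathlib import Path

train_dir = Path("train")          # placeholder path, as before
target = max(counts.values())      # size of the largest class

for class_name, n in counts.items():
    class_dir = train_dir / class_name
    files = [f for f in class_dir.iterdir() if f.is_file()]
    i = 0
    while n < target:
        # Load an existing image, augment it, and save it as a new file.
        img = tf.keras.utils.load_img(files[i % len(files)])
        arr = tf.keras.utils.img_to_array(img)
        aug = augment(tf.expand_dims(arr, 0), training=True)[0]
        tf.keras.utils.save_img(str(class_dir / f"aug_{n}.png"), aug.numpy())
        n += 1
        i += 1

After balancing the classes the result is like this (note that it is now 30 epochs, not 20):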

So both models still suffer from validation fluctuation, even after 30 epochs, even with batch normalisation and dropout, and even with dropout on the dense layer! I’m still finding out why, but I think it might be because of the type of augmentation, for example I didn’t change the colours of the images. To troubleshoot this we need to find out which classes are causing the low accuracy: is it just some particular classes or all of them? A sketch of one way to check that is below.
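This assumes a trained Keras model called model and a validation dataset val_ds that yields batches of (images, integer labels); both names are placeholders for whatever the real model and validation set are called:

import numpy as np
from sklearn.metrics import classification_report

# Collect the true and predicted labels over the validation set.
y_true, y_pred = [], []
for images, labels in val_ds:                 # val_ds is a placeholder name
    probs = model.predict(images, verbose=0)  # model is the trained CNN
    y_pred.extend(np.argmax(probs, axis=1))
    y_true.extend(labels.numpy())

# Per-class precision and recall show whether only a few classes are weak or all of them.
print(classification_report(y_true, y_pred))

But that investigation is for another time and another article. Happy learning!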
