training gets slow.
- As the network gets deeper (i.e., as more layers are added), the magnitudes of the gradients can become extremely small or extremely large, leading to vanishing or exploding gradients.
- Since deep learning models are complex by nature, they generally tend to over-fit the training data.
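The vanishing-gradient point above can be made concrete with a small numerical sketch (not from the source): backpropagating through a deep stack of sigmoid layers and watching the gradient norm shrink. The layer width, depth, and weight scale are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy network: 30 sigmoid layers of width 16.
n_layers, width = 30, 16
weights = [rng.normal(0.0, 0.5, (width, width)) for _ in range(n_layers)]

# Forward pass, storing each layer's output for the backward pass.
a = rng.normal(size=width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: the chain rule multiplies by the sigmoid derivative
# (at most 0.25) and W^T at every layer, so the gradient norm decays.
grad = np.ones(width)
norms = []
for W, act in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * act * (1.0 - act))
    norms.append(np.linalg.norm(grad))

print(f"grad norm after  1 layer : {norms[0]:.3e}")
print(f"grad norm after {n_layers} layers: {norms[-1]:.3e}")
```

Swapping the sigmoid for ReLU or initializing the weights more carefully (e.g., Xavier/He initialization) changes how quickly this decay sets in, which is one reason those choices matter in deep networks.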
Fig. 1