$(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, where $x_i$ is the input example and $y_i$ is the class label ($+1$ or $-1$). However, the training data is highly imbalanced (say, 90% of the examples are negative and 10% of the examples are positive) and we care more about accuracy on the positive
examples. How will you modify the perceptron algorithm to solve this learning problem?
Please justify your answer.
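
As a starting point (a sketch of one possible approach, not the required answer), a common modification is to make the mistake-driven perceptron update class-weighted, so that errors on the rare positive class produce larger updates than errors on the abundant negative class. The sketch below assumes NumPy; the function name `weighted_perceptron`, the `pos_weight` parameter, and the value 9.0 (the inverse of the 10%/90% class ratio) are illustrative choices, not part of the problem statement.

```python
import numpy as np

def weighted_perceptron(X, y, pos_weight=9.0, epochs=10):
    """Perceptron with asymmetric (class-weighted) updates.

    On a mistake, the update is scaled by pos_weight for positive
    examples and by 1 for negative examples, so errors on the rare
    positive class move the decision boundary more aggressively.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            # Mistake condition: the example is misclassified (or on the boundary).
            if y[i] * (X[i] @ w + b) <= 0:
                lr = pos_weight if y[i] == +1 else 1.0
                w += lr * y[i] * X[i]   # scaled perceptron update
                b += lr * y[i]
    return w, b
```

Scaling the update on positive mistakes is roughly analogous to oversampling the positive class, so the learned boundary pays proportionally more attention to errors on positive examples.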