In data science, the choice of dataset often matters more than the choice of ML model: the quality and composition of your data frequently outweigh the algorithm itself.
Are you facing real issues with imbalanced data? On a well-balanced dataset you hardly need to worry, since almost any classifier will do. Imbalanced data, however, is a genuine pain in the neck, and building a reliable classification model on it is a difficult task.
Imbalanced classification is challenging because one class (category) has far more examples than the other. Let us walk through an example to dig into it.
Suppose you have a dataset recording whether it rained in Sialkot on a given day. The label column takes two values, "Yes" and "No", indicating rain or no rain respectively. Out of 1,000 records, 950 are "Yes" and only 50 are "No".
This situation leads to the following problems.
(A) The model will learn "Yes" far more strongly simply because it occurs far more often. In the extreme case, the model predicts "Yes" every time and never outputs "No" at all, which is not a useful model in practice.
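A minimal sketch of problem (A), using the hypothetical rain counts from the example above: a "classifier" that always predicts the majority class already scores 95% accuracy while being useless on the rare class.

```python
# Hypothetical Sialkot rain dataset from the example: 950 "Yes", 50 "No".
labels = ["Yes"] * 950 + ["No"] * 50

# A degenerate model that always predicts the majority class.
predictions = ["Yes"] * len(labels)

# Accuracy looks impressive even though the model never detects a "No" day.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall_no = sum(p == "No" and y == "No" for p, y in zip(predictions, labels)) / labels.count("No")

print(accuracy)   # 0.95 — misleadingly high
print(recall_no)  # 0.0  — the rare class is never predicted
```

This is why plain accuracy is a poor metric on imbalanced data: a model can score 95% without learning anything about the minority class.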
(B) If you make a 70/30 train-test split without stratification, the training set may draw mostly from one class while the test set draws from the other. Once again, this is bad practice.
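A sketch of the fix for problem (B): a stratified split samples each class separately, so both the training and test sets preserve the original 95/5 class ratio. The `stratified_split` helper below is a hypothetical illustration, not a library function (in practice, `sklearn.model_selection.train_test_split` offers a `stratify` parameter for this).

```python
import random

random.seed(0)
# Hypothetical rain dataset from the example: 950 "Yes", 50 "No".
labels = ["Yes"] * 950 + ["No"] * 50

def stratified_split(labels, test_frac=0.3):
    """Split labels 70/30 while keeping each class's proportion intact."""
    train, test = [], []
    for cls in set(labels):
        members = [y for y in labels if y == cls]
        random.shuffle(members)
        cut = int(len(members) * test_frac)  # per-class test share
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test

train, test = stratified_split(labels)
# Both splits keep the 95/5 ratio: 35 "No" in train, 15 "No" in test.
print(train.count("No"), test.count("No"))  # 35 15
```

Without stratification, a random 70/30 cut of only 50 minority examples can easily leave one side with almost none of them.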
The question remains: what should you do with such a small, imbalanced dataset? The following material will help, so keep reading.