I have a supervised ML problem: a dataset with 24 features and ~1,600 observations.

I’ve trained a Linear SVC model on the data and get reasonable accuracy, about 13% above the majority-class baseline.
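For reference, a minimal sketch of how that baseline comparison might look, assuming scikit-learn's LinearSVC and placeholder arrays `X` and `y` (names are hypothetical, not my actual variables):

```python
# Hedged sketch: comparing a LinearSVC against a majority-class baseline.
# X (24 features, ~1,600 rows) and y are placeholders for the real data.
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

baseline = DummyClassifier(strategy="most_frequent")  # predicts the majority class
model = make_pipeline(StandardScaler(), LinearSVC())  # scale features before the SVM

print("baseline :", cross_val_score(baseline, X, y, cv=5).mean())
print("LinearSVC:", cross_val_score(model, X, y, cv=5).mean())
```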

However, if I remove 23 of the 24 features, I lose only about 1% accuracy. I’ve looked at the weights, and essentially all of the predictive power is coming from a single feature.
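One hedged sketch of how to inspect that, assuming the pipeline above and a hypothetical `feature_names` list of the 24 column names:

```python
# Hedged sketch: ranking features by the magnitude of the fitted weights.
import numpy as np

model.fit(X, y)
coefs = model.named_steps["linearsvc"].coef_  # shape: (1, 24) binary, (n_classes, 24) multiclass
importance = np.abs(coefs).sum(axis=0)        # aggregate weight magnitude across classes

for name, w in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {w:.3f}")
```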

In terms of building a model that can generalise to future unknown data points, I know a model that effectively relies on one feature isn’t acceptable. (This is for an academic publication, by the way.)

My current idea is to deliberately reduce the influence of that feature using some form of regularization.
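For example, one sketch of that direction: LinearSVC supports an L1 penalty, which shrinks weak weights toward zero, and sweeping `C` shows how accuracy trades off against reliance on the dominant feature. This is an assumed setup (`X`, `y` as above), not my actual code:

```python
# Hedged sketch: L1-penalized LinearSVC with a small sweep over C.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

for C in [0.01, 0.1, 1.0]:
    clf = make_pipeline(
        StandardScaler(),
        LinearSVC(penalty="l1", dual=False, C=C),  # L1 penalty requires dual=False
    )
    print(f"C={C}:", cross_val_score(clf, X, y, cv=5).mean())
```

Note that a plain L1/L2 penalty shrinks all weights uniformly rather than targeting the dominant one specifically, which is part of why I’m unsure this is the right approach.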

Does anyone have alternative suggestions?



Source: https://www.reddit.com/r//comments/9pu1av/d__on_a_single_feature/