I have a model taking in sequences of MLB baseball player plate appearances as a time series. Each sequence is paired with a single continuous number corresponding to the player's future OPS (a baseball statistic) over a following window of plate appearances. Essentially, I'm using a player's plate appearance outcomes to predict their future performance.

However, baseball is subject to a lot of random noise. A batter can go into a slump for no reason other than random chance, and future performance can vary significantly as well. If possible, I'd like the neural network to output its confidence in its prediction alongside the prediction itself.

Is there any documented way to do this? Currently, my output consists of a single number corresponding to the performance prediction, and I'm using squared error as my loss function. However, this doesn't capture the network's confidence in its own prediction.
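For concreteness, here is a minimal sketch of the current setup described above (the names are illustrative, not from any particular library): a single scalar output trained with squared error, which yields only a point estimate and carries no confidence information.

```python
# Hypothetical sketch of the current setup: one scalar prediction,
# squared-error loss, no notion of confidence.

def squared_error(pred: float, target: float) -> float:
    """Standard squared-error loss for a single scalar prediction."""
    return (pred - target) ** 2

# e.g. a predicted future OPS of 0.780 against an actual OPS of 0.810
loss = squared_error(0.780, 0.810)
```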

I thought about discretizing the output and using softmax over, say, a dozen or so buckets. However, softmax is TOO categorical for this purpose. If the NN says there is a 50% probability that future performance falls within [0.5, 0.6), and a 50% probability it falls within [0.6, 0.7), but actual performance is 0.45, then the loss from the [0.6, 0.7) bucket should be weighted higher than the loss from the [0.5, 0.6) bucket, because the values in that bucket are farther from the actual performance. Softmax with a standard cross-entropy loss doesn't capture this and would treat every bucket equally.
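The distance weighting described above can be sketched as an expected-distance loss over the buckets: each bucket's probability mass is penalized in proportion to how far that bucket lies from the actual outcome. This is only an illustrative sketch of the idea in the paragraph (the function and variable names are made up), not an endorsement of it as the right loss.

```python
# Hypothetical distance-weighted bucket loss: penalize each bucket's
# probability mass by its distance from the actual outcome.

def distance_weighted_loss(probs, bucket_centers, actual):
    """Expected absolute distance between the predicted bucket
    distribution and the actual continuous value."""
    return sum(p * abs(c - actual) for p, c in zip(probs, bucket_centers))

# The example from the text: 50% in [0.5, 0.6), 50% in [0.6, 0.7),
# actual performance 0.45 (bucket centers 0.55 and 0.65).
centers = [0.55, 0.65]
probs = [0.5, 0.5]
loss = distance_weighted_loss(probs, centers, 0.45)
# The [0.6, 0.7) bucket contributes 0.5 * 0.20 = 0.10, more than the
# [0.5, 0.6) bucket's 0.5 * 0.10 = 0.05, as the text asks for.
```

Unlike plain cross-entropy, this loss changes if probability mass shifts between buckets at different distances from the target, even when the mass on the "correct" bucket is unchanged.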

Is there a loss function engineered for such a purpose?



Source link
( https://www.reddit.com/r//comments/8mr4hm/d_is_there_a_way_for_a_neural_network_to_/)
