I spent some time trying to understand the practical implications of the no free lunch theorem, and I came to the conclusion that it has little practical significance. I wound up writing this blog post to get a better understanding of the theorem: http://blog.tabanpour.info/projects/2018/07/20/no-free-lunch.html
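For reference, my reading of the supervised-learning form of the theorem (the symbols E, d, A_1, A_2 are the usual ones from Wolpert's paper, not anything specific to my post): averaged uniformly over all target functions f, any two learning algorithms A_1 and A_2 have the same expected off-training-set error E given a training set d,

$$\sum_f P(E \mid f, d, A_1) = \sum_f P(E \mid f, d, A_2).$$

So any advantage one learner has over another has to come from assumptions about which f are likely, not from the training data alone.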

In light of the theorem, I'm still not sure how we actually ensure that models align well with the generating functions f, so that our models truly generalize (please don't say cross-validation or regularization if you haven't looked at the theorem).

Are we just doing lookups and never truly generalizing? What assumptions are we actually making in practice about the data-generating distribution that help us generalize? Let's take ImageNet models as an example. (A toy sketch of the averaging argument follows below.)
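To make the question concrete, here is a minimal sketch (my own toy example, not from the linked post) that enumerates every Boolean target on a tiny domain. For a learner that only sees the training labels, the average off-training-set accuracy over all targets comes out to exactly chance, which is why practical generalization has to lean on assumptions about f.

```python
import itertools

# Tiny domain: 4 binary inputs -> 16 points; enumerate every Boolean target on it.
# Illustration of the no-free-lunch setting: averaged uniformly over all possible
# target functions, this learner's off-training-set accuracy is exactly 1/2.
# (The "majority vote" learner below is an arbitrary illustrative choice.)

domain = list(itertools.product([0, 1], repeat=4))   # 16 inputs
train_x = domain[:8]                                 # fixed training inputs
test_x = domain[8:]                                  # off-training-set inputs

def majority_learner(train_pairs):
    """Predict the majority training label everywhere (one arbitrary learner)."""
    ones = sum(y for _, y in train_pairs)
    guess = 1 if ones * 2 >= len(train_pairs) else 0
    return lambda x: guess

total_acc = 0.0
n_targets = 0
# Enumerate every possible labelling of the 16 domain points (2**16 targets f).
for labels in itertools.product([0, 1], repeat=len(domain)):
    f = dict(zip(domain, labels))
    h = majority_learner([(x, f[x]) for x in train_x])
    acc = sum(h(x) == f[x] for x in test_x) / len(test_x)
    total_acc += acc
    n_targets += 1

print(total_acc / n_targets)   # -> 0.5: no better than chance off the training set
```

Swapping in any other deterministic learner gives the same 0.5 average, since its test-set predictions depend only on the training labels while the test labels range uniformly over all possibilities.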



Source link: https://www.reddit.com/r//comments/91467l/trying_to__practical_implications_of_no/
