There was an exuberant feeling around 2012-2014 that the hard problems of intelligence could really be solved by deep learning. Right now it seems like we’re back to grinding on benchmarks and hacking on tweaks, not new paradigms. Reward engineering has become the new feature engineering. ML models still don’t learn to do anything remotely sentient compared to humans; they make silly mistakes and are very brittle.

Is the expansion of narrow AI a step towards general intelligence, or have we jumped out of one local optimum into another?



Source link
(https://www.reddit.com/r//comments/9902bz/d_is__learning___near_a_local/)
