I’m looking into various automatic differentiation implementations, and I’m curious how the Python package autograd works, i.e. what the main algorithm at play is. For reverse-mode autodiff I’ve mostly seen Wengert lists used (my rough mental model of one is sketched below) – is some version of that concept used in the package?
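For context, here is a minimal Python sketch of what I understand a Wengert list (a "tape") to be: each primitive operation appends an entry recording its inputs and local partial derivatives as the function runs, and a reverse sweep over the tape accumulates adjoints. All of the names here (`Var`, `grad`, the tape layout) are mine for illustration, not autograd's actual internals:

```python
import math

class Var:
    """One entry in the Wengert list (the 'tape')."""
    def __init__(self, value, tape=None):
        self.value = value
        self.tape = tape if tape is not None else []
        self.index = len(self.tape)
        self.tape.append(self)
        self.parents = []   # (parent_index, local_partial) pairs

    def _child(self, value, parents):
        out = Var(value, self.tape)
        out.parents = parents
        return out

    def __add__(self, other):
        return self._child(self.value + other.value,
                           [(self.index, 1.0), (other.index, 1.0)])

    def __mul__(self, other):
        return self._child(self.value * other.value,
                           [(self.index, other.value),
                            (other.index, self.value)])

def tanh(x):
    t = math.tanh(x.value)
    return x._child(t, [(x.index, 1.0 - t * t)])   # d/dx tanh = 1 - tanh^2

def grad(output):
    # Reverse sweep: seed the output's adjoint with 1.0, then walk the
    # tape backwards, pushing each node's adjoint onto its parents.
    adjoints = [0.0] * len(output.tape)
    adjoints[output.index] = 1.0
    for node in reversed(output.tape):
        for parent_index, local_partial in node.parents:
            adjoints[parent_index] += adjoints[node.index] * local_partial
    return adjoints

x = Var(0.5)
y = tanh(x * x)                  # y = tanh(x^2)
print(grad(y)[x.index])          # 2x * (1 - tanh(x^2)^2) ≈ 0.9400
```

That's the version of the idea I keep seeing described; what I can't tell is whether autograd builds something like this list explicitly or represents the computation graph some other way.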

In general I understand the idea of decomposing a function into primitive operations, but from an implementation standpoint I'm curious how that decomposition actually happens. The example in autograd differentiates a tanh function defined in regular Python, so I imagine the package must have some way of breaking the Python code down into those kinds of primitive operations? Or maybe autograd doesn't use that approach? My best guess is operator overloading, which I've sketched below.
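To make that guess concrete: I imagine that if the value flowing through the user's function is a wrapper whose arithmetic operators are overloaded, then simply running a plain-Python tanh emits the primitive decomposition as a side effect, with no parsing of the source at all. Again, `TraceValue`, the `exp` wrapper, and the tape format below are my own hypothetical sketch, not autograd's API:

```python
import math

class TraceValue:
    def __init__(self, value, tape, name):
        self.value, self.tape, self.name = value, tape, name

    def _record(self, op, arg_names, result):
        # Every intercepted operation appends one line to the tape.
        out = TraceValue(result, self.tape, f"t{len(self.tape)}")
        self.tape.append(f"{out.name} = {op}({', '.join(arg_names)})")
        return out

    def __neg__(self):
        return self._record("neg", [self.name], -self.value)

    def __add__(self, other):
        return self._record("add", [self.name, other.name],
                            self.value + other.value)

    def __sub__(self, other):
        return self._record("sub", [self.name, other.name],
                            self.value - other.value)

    def __truediv__(self, other):
        return self._record("div", [self.name, other.name],
                            self.value / other.value)

def exp(x):
    # A wrapped primitive: does the real math, and records itself.
    return x._record("exp", [x.name], math.exp(x.value))

def my_tanh(x):
    # Written as ordinary Python; the author never sees the tape.
    return (exp(x) - exp(-x)) / (exp(x) + exp(-x))

tape = []
y = my_tanh(TraceValue(1.0, tape, "x"))
print("\n".join(tape))
# t0 = exp(x)
# t1 = neg(x)
# t2 = exp(t1)
# t3 = sub(t0, t2)
# t4 = exp(x)
# t5 = neg(x)
# t6 = exp(t5)
# t7 = add(t4, t6)
# t8 = div(t3, t7)
print(y.value, math.tanh(1.0))   # both ≈ 0.76159
```

If that's roughly right, the decomposition falls out of just executing the user's function (note the repeated `exp(x)` / `neg(x)` entries: the tape records what actually ran, not an optimized graph). But I'd like to confirm that this, rather than something like source or bytecode analysis, is what autograd does.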



Source: https://www.reddit.com/r//comments/8ep130/d_how_does_autograd_/
