I’m looking into various automatic differentiation implementations, and I’m curious how the Python package autograd works, i.e. what the main algorithm at play is. For reverse-mode autodiff I’ve mostly seen Wengert lists used – is some version of that concept used in the autograd package?
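To make the question concrete, here’s my own minimal sketch of what I mean by a Wengert list: the function is unrolled by hand into a linear sequence of primitive operations, and a reverse sweep then propagates adjoints back through that list. (This is purely my illustration of the concept, not anything from autograd itself.)

```python
import math

# f(x) = tanh(x), written out as (1 - e^(-2x)) / (1 + e^(-2x))
def tanh_via_wengert_list(x):
    # Forward sweep: one primitive operation per line, as in a Wengert list
    v1 = -2.0 * x        # v1 = -2 * x
    v2 = math.exp(v1)    # v2 = exp(v1)
    v3 = 1.0 - v2        # v3 = 1 - v2
    v4 = 1.0 + v2        # v4 = 1 + v2
    v5 = v3 / v4         # v5 = v3 / v4   (the output)

    # Reverse sweep: propagate adjoints d(v5)/d(v_i) back through the list
    d5 = 1.0                   # seed: dv5/dv5 = 1
    d3 = d5 / v4               # dv5/dv3 = 1/v4
    d4 = -d5 * v3 / v4**2      # dv5/dv4 = -v3/v4^2
    d2 = -d3 + d4              # v2 feeds both v3 (coeff -1) and v4 (coeff +1)
    d1 = d2 * v2               # d(exp(v1))/dv1 = exp(v1) = v2
    dx = -2.0 * d1             # dv1/dx = -2
    return v5, dx

print(tanh_via_wengert_list(1.0))  # ~ (0.76159, 0.41997), matching 1 - tanh(1)^2
```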
In general I understand the idea of decomposing a function into primitive operations for automatic differentiation, but from an implementation standpoint I’m curious how that decomposition actually happens. The example in autograd uses a tanh function defined in regular Python, so I imagine the package must have some way of breaking the Python code down into these kinds of primitive operations? Or maybe autograd doesn’t use that approach at all?
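For reference, the example I have in mind is (roughly) the one from the autograd README – an ordinary Python function differentiated by calling grad on it, with autograd.numpy standing in for numpy (comments are mine):

```python
import autograd.numpy as np   # autograd's wrapped version of numpy
from autograd import grad

# An ordinary Python function; nothing here is marked up for autodiff
def tanh(x):
    y = np.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)        # returns a function computing d(tanh)/dx
print(grad_tanh(1.0))         # ~0.41997434, i.e. 1 - tanh(1)^2
```

What puzzles me is that tanh looks like plain Python, so where in this pipeline does the decomposition into primitives occur – does autograd inspect the source/bytecode, or does it work some other way (e.g. by tracing the operations as they execute)?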