Here are some more details.
It seems to work in two modes:
One mode is passing a dense layer as the layer parameter, in which case filter_indices (terrible name?) is the class node/label.
The other is passing it a conv layer, in which case filter_indices is the index of the filter you wish to visualize within that conv layer.
I’d like to know if a 3rd (more general) option is available, where any arbitrary subset of nodes (possibly spanning multiple layers/filters) could be used as the target of a saliency visualization over an input image.
A naive approach would be to create k maps (one for each of the k nodes in the subset) and then combine them with some union operation (e.g. an element-wise max or sum).
A more elegant approach would be to do the backprop calculation once, but with respect to the whole subset of nodes: since gradients are linear, backpropagating from the sum of the selected activations yields the sum of the individual gradient maps in a single pass.
Does the math/code for this exist? Things get a bit weird when the subset of nodes spans multiple layers.
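For what it’s worth, the linearity argument can be checked on a toy linear network in pure NumPy (the network, weights, and node choices here are made up purely for illustration; this is not keras-vis code). The subset deliberately spans two layers: one hidden node and one output node.

```python
import numpy as np

# Toy 2-layer linear network: h = W1 @ x, y = W2 @ h.
# Target subset spans both layers: node h[0] and node y[1].
# Claim: grad_x(h[0] + y[1]) == grad_x(h[0]) + grad_x(y[1]),
# so one backward pass from the summed objective recovers the union map.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))   # hidden-layer weights
W2 = rng.standard_normal((2, 3))   # output-layer weights

# For a linear network the input gradients are rows of the Jacobians:
grad_h0 = W1[0]                    # d h[0] / d x
grad_y1 = (W2 @ W1)[1]             # d y[1] / d x

# "Naive": compute k maps, then take their (sum) union.
naive_union = grad_h0 + grad_y1

# "Elegant": one backward pass from the scalar s = h[0] + y[1].
# Upstream gradient arriving at h is e0 (direct term) plus W2^T e1
# (routed back through the output layer); one vector-Jacobian product
# through W1 then gives ds/dx.
upstream_h = np.array([1.0, 0.0, 0.0]) + W2.T @ np.array([0.0, 1.0])
single_pass = upstream_h @ W1

assert np.allclose(naive_union, single_pass)
```

With nonlinearities the same single-pass trick works (autodiff on a summed loss), though as noted above, activations from different layers can live on very different scales, so some per-node normalization of the objective may be needed before summing.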