I’d like to expand the notion of saliency maps so that they can be generated with respect to a subset of nodes that are not necessarily in the output layer, or even in the same layer.

Here are more details:
https://raghakot.github.io/keras-vis/visualizations/saliency/

It seems to work in two modes:

One mode is passing a dense layer as the layer parameter, with filter_indices (a poor name?) giving the node/label index.

The other is passing a conv layer, in which case filter_indices is the position of the filter you wish to visualize within that layer.
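A rough sketch of both modes, based on the keras-vis docs linked above; `model` and `img` are assumed to exist already, and the layer names and indices are placeholders:

```python
from vis.utils import utils
from vis.visualization import visualize_saliency

# Mode 1: a dense (output) layer, where filter_indices is the class/label index.
layer_idx = utils.find_layer_idx(model, 'predictions')   # assumes a layer named 'predictions'
grads_dense = visualize_saliency(model, layer_idx,
                                 filter_indices=20,       # e.g. class 20
                                 seed_input=img)

# Mode 2: a conv layer, where filter_indices selects which filter(s) to visualize.
layer_idx = utils.find_layer_idx(model, 'block5_conv3')  # assumes a VGG-style layer name
grads_conv = visualize_saliency(model, layer_idx,
                                 filter_indices=[5, 10],  # filters 5 and 10 in that layer
                                 seed_input=img)
```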

I’d like to know if a 3rd (more general) option is available, so that any subset of nodes (possibly spanning multiple layers/filters) could be used as the target of a saliency visualization over an input image.

A naive approach would be to compute k maps (one for each of the k nodes in the subset) and then apply some union operation, as in the sketch below.
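A minimal sketch of that naive route, assuming keras-vis is available; `subset_saliency_naive` is a hypothetical helper and `nodes` is a list of (layer_idx, filter_index) pairs that may span several layers. The "union" here is a per-pixel max:

```python
import numpy as np
from vis.visualization import visualize_saliency

def subset_saliency_naive(model, nodes, img):
    # One saliency map per node, each via a separate backprop pass.
    maps = [visualize_saliency(model, layer_idx, filter_indices=fi, seed_input=img)
            for layer_idx, fi in nodes]
    # Union the k maps with an element-wise max.
    return np.max(np.stack(maps, axis=0), axis=0)
```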

A more elegant approach would be to do the backprop calculation once, but with respect to a subset of nodes.

Does the math/code for this exist? Things get a bit weird when the subset of nodes spans multiple layers.
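I don't think keras-vis exposes this directly, but here is a from-scratch sketch of the single-backprop idea using the Keras backend (TF 1.x-style graph mode): define the objective as the sum of the chosen nodes' activations, so one gradient call covers nodes in multiple layers. `nodes` here is a list of (layer_name, flat_index) pairs, and the index is taken into the flattened layer output:

```python
import numpy as np
from keras import backend as K

def subset_saliency_single_pass(model, nodes, img):
    """nodes: list of (layer_name, index) pairs; img: (H, W, C) input array."""
    loss = 0.
    for layer_name, idx in nodes:
        out = model.get_layer(layer_name).output
        flat = K.batch_flatten(out)           # handles dense and conv outputs alike
        loss = loss + K.sum(flat[:, idx])     # add this node's activation to the objective
    grads = K.gradients(loss, model.input)[0] # one backprop pass for the whole subset
    fn = K.function([model.input], [grads])
    g = fn([img[None, ...]])[0][0]
    return np.max(np.abs(g), axis=-1)         # collapse channels into a single saliency map
```

Whether summing activations across layers of very different scales is the "right" objective is exactly the weird part; some per-layer normalization of the gradients may be needed.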



Source link: https://www.reddit.com/r//comments/9q2bpu/_maps_for_arbitrary_subset_of_nodes_not/
