Class Activation Maps (CAMs) from Scratch
How global average pooling layers lead to intrinsically interpretable neural networks

Interpretability by design is usually a conscientious effort: researchers devise new architectures, or adaptations of existing ones, that are interpretable by construction.
