### Truncated Neural Networks?

Recently, I’ve had good success with truncated neural networks, i.e. functions of the form $$ g = f \cdot \mathbf{1}_{[-M,M]^d}, $$ where $f:\mathbb{R}^d\to\mathbb{R}^n$ is a feed-forward neural network and $\mathbf{1}_{[-M,M]^d}$ is the indicator function of the cube $[-M,M]^d$ with half-width $M>0$. Has anyone come across a paper that uses these “truncated neural networks” instead of ordinary (un-truncated/classical) feed-forward neural networks?
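For concreteness, here is a minimal NumPy sketch of what I mean (the architecture, widths, and function names are my own choices for illustration, not from any particular paper): a small feed-forward network $f$ whose output is multiplied by the indicator of the cube, so $g$ vanishes identically outside $[-M,M]^d$.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Plain feed-forward network f: R^d -> R^n with one ReLU hidden layer.
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h + b2

def truncated_mlp(x, W1, b1, W2, b2, M):
    # g = f * 1_{[-M,M]^d}: equals f(x) when x lies in the cube, 0 otherwise.
    inside = np.all(np.abs(x) <= M)
    return mlp(x, W1, b1, W2, b2) if inside else np.zeros_like(b2)

# Toy example with d = 3, hidden width 5, n = 2 (weights are arbitrary).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

x_in = np.array([0.5, -0.2, 0.9])   # inside [-1, 1]^3
x_out = np.array([2.0, 0.0, 0.0])   # outside the cube
g_in = truncated_mlp(x_in, W1, b1, W2, b2, M=1.0)    # agrees with f(x_in)
g_out = truncated_mlp(x_out, W1, b1, W2, b2, M=1.0)  # identically zero
```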