How are small-scale features generated in an Inverse Graphics Network?

This post refers to Fig. 1 of the Deep Convolutional Inverse Graphics Network (DC-IGN) paper, hosted by Microsoft Research: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/kwkt_nips2015.pdf Having read the paper, I understand in general terms how the network functions. However, one detail has been bothering me: how does the network decoder (or "Renderer") generate small-scale features in the correct location as…
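The paper itself doesn't ship code, but the "correct location" question can be illustrated with a toy example: in a learned-upsampling (transposed-convolution) decoder, each latent activation paints a feature into an output window whose position is fixed by that activation's own index, so spatial information survives the upsampling. A minimal 1-D NumPy sketch (kernel, stride, and sizes here are invented for illustration and are not the paper's architecture):

```python
import numpy as np

def transposed_conv1d(latent, kernel, stride):
    """Toy 1-D transposed convolution: each latent unit 'paints' a copy
    of the kernel at an output position fixed by its own index."""
    out = np.zeros(len(latent) * stride + len(kernel) - stride)
    for i, a in enumerate(latent):
        out[i * stride : i * stride + len(kernel)] += a * kernel
    return out

# A single active latent unit at index 3 ...
latent = np.zeros(8)
latent[3] = 1.0
kernel = np.array([0.25, 1.0, 0.25])  # hypothetical small learned feature
out = transposed_conv1d(latent, kernel, stride=2)

# ... produces a small-scale feature anchored at output index 3 * stride.
print(np.nonzero(out)[0])  # → [6 7 8]
```

The same index-to-position coupling holds in 2-D: the decoder doesn't have to "decide" where a feature goes, the upsampling geometry carries the location for it.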

I can train on a set of any size using a neural network (MLP), but the test set always yields bad evaluation results. Any solution?

The data available for learning is pseudo-random, and it happens that I can train the neural network (a classical multilayer perceptron) with inputs of any size. The training error is very low. However, as soon as I evaluate on a test set, I get very bad results (the correlation coefficient is…
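This is the classic symptom of memorisation rather than learning: with pseudo-random targets there is no signal to generalise from, so any model with enough capacity drives training error toward zero while test performance stays at chance. The effect doesn't even need an MLP; a sketch using minimum-norm least squares with more parameters than samples (a stand-in for an over-parameterised network, with shapes chosen only for illustration) reproduces it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pseudo-random data: 20 samples, 50 features, random targets -> no real signal.
X_train = rng.standard_normal((20, 50))
y_train = rng.standard_normal(20)
X_test = rng.standard_normal((20, 50))
y_test = rng.standard_normal(20)

# Over-parameterised linear fit: with more parameters than samples the
# minimum-norm least-squares solution interpolates the training set exactly.
w = np.linalg.pinv(X_train) @ y_train

train_err = np.mean((X_train @ w - y_train) ** 2)
test_corr = np.corrcoef(X_test @ w, y_test)[0, 1]

print(f"train MSE: {train_err:.2e}")          # essentially zero: memorised
print(f"test correlation: {test_corr:.2f}")   # far from a good fit
```

Low training error on random targets is therefore expected and tells you nothing; the bad test correlation is the honest number.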

How to backpropagate when implementing Sequence-to-Sequence with multiple decoders

I am proposing a modified version of the Sequence-to-Sequence model with dual decoders. The problem I am trying to solve is neural machine translation into two languages at once. This is a simplified illustration of the model:

                               /--> Decoder 1 --> Language Output 1
    Language Input -> Encoder -|
                               \--> Decoder 2 --> Language Output…
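One way to see the answer is that nothing special is required for backpropagation here: with a total loss L = L1 + L2, the gradient reaching the shared encoder is simply the sum of the gradients flowing back from each decoder. A minimal linear sketch in NumPy (toy shapes and a squared-error loss, chosen only for illustration, not an actual seq2seq implementation) checks this additivity by hand:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "encoder" and two linear "decoders" (shapes are arbitrary).
x = rng.standard_normal(4)           # input
W_enc = rng.standard_normal((3, 4))  # shared encoder weights
W_d1 = rng.standard_normal((5, 3))   # decoder 1 weights
W_d2 = rng.standard_normal((2, 3))   # decoder 2 weights
t1 = rng.standard_normal(5)          # target for output 1
t2 = rng.standard_normal(2)          # target for output 2

h = W_enc @ x
y1, y2 = W_d1 @ h, W_d2 @ h

# Squared-error losses; the total loss is the sum L = L1 + L2.
dy1 = 2 * (y1 - t1)                  # dL1/dy1
dy2 = 2 * (y2 - t2)                  # dL2/dy2

# Encoder gradient contributed by each decoder separately ...
g_enc_from_d1 = np.outer(W_d1.T @ dy1, x)
g_enc_from_d2 = np.outer(W_d2.T @ dy2, x)

# ... and the gradient of the total loss: backprop through a shared
# module just accumulates the contributions of every branch.
g_enc_total = np.outer(W_d1.T @ dy1 + W_d2.T @ dy2, x)

assert np.allclose(g_enc_total, g_enc_from_d1 + g_enc_from_d2)
print("encoder gradient = sum of per-decoder gradients")
```

In an autodiff framework the same thing happens automatically: compute both decoder losses, add them, and call backward once; the shared encoder's parameters accumulate gradients from both branches.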