Trained a regression network and getting EXACT same result on validation set, on every epoch

I trained the network from this GitHub repo. Training went well, and the network returns nice results on new, unseen images. During training the loss decreased, so I must assume the weights changed as well. I saved a snapshot of the net after every epoch. When trying to run a validation set through each epoch's…
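One quick sanity check for this symptom (a sketch, with illustrative file names — the question does not say how the snapshots are stored) is to hash the snapshot files: if every epoch's file hashes identically, the same weights were written each time, which would explain identical validation results.

```python
import hashlib
import os
import tempfile


def file_digest(path):
    """Return the SHA-256 digest of a file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


# Simulate two per-epoch snapshot files with different "weights";
# in practice these would be the files saved during training.
tmp = tempfile.mkdtemp()
for epoch, blob in enumerate([b"weights-epoch-0", b"weights-epoch-1"]):
    with open(os.path.join(tmp, f"snapshot_{epoch}.bin"), "wb") as f:
        f.write(blob)

digests = [file_digest(os.path.join(tmp, f"snapshot_{e}.bin")) for e in range(2)]
# Two distinct digests: the snapshots really differ on disk.
print(len(set(digests)))
```

If the digests do differ, the problem is more likely in the evaluation code (e.g. the same checkpoint being loaded for every epoch) than in the saving.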

In $\log p_{\theta}(x^1,\dots,x^N)=D_{KL}(q_{\theta}(z|x^i)\,\|\,p_{\phi}(z|x^i))+\mathcal{L}(\phi,\theta;x^i)$, why is $\theta$ a parameter of both $p$ and $q$?

In $\log p_{\theta}(x^1,\dots,x^N)=D_{KL}(q_{\theta}(z|x^i)\,\|\,p_{\phi}(z|x^i))+\mathcal{L}(\phi,\theta;x^i)$, why is $\theta$ a parameter of both $p$ and $q$? Why do $p(x^1,\dots,x^N)$ and $q(z|x^i)$ have the same parameter $\theta$? Since $p$ is just the probability of the observed data and $q$ is the approximation of the posterior, shouldn't they be different distributions with different parameters?
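For context, here is the standard derivation of this identity, written without parameter subscripts to stay agnostic about which symbol parameterizes which distribution (the question's subscript convention differs from some sources):

```latex
\begin{aligned}
\log p(x^i)
  &= \mathbb{E}_{q(z|x^i)}\bigl[\log p(x^i)\bigr] \\
  &= \mathbb{E}_{q(z|x^i)}\!\left[\log \frac{p(x^i, z)}{p(z|x^i)}\right] \\
  &= \mathbb{E}_{q(z|x^i)}\!\left[\log \frac{p(x^i, z)}{q(z|x^i)}\right]
   + \mathbb{E}_{q(z|x^i)}\!\left[\log \frac{q(z|x^i)}{p(z|x^i)}\right] \\
  &= \mathcal{L}(x^i) + D_{KL}\bigl(q(z|x^i)\,\|\,p(z|x^i)\bigr),
\end{aligned}
```

with $\log p(x^1,\dots,x^N)=\sum_i \log p(x^i)$ for i.i.d. data. Note that the marginal $p(x^i)$ and the true posterior $p(z|x^i)$ come from the same generative model, so they share one set of parameters; the approximate posterior $q$ carries its own, separate parameters.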

Uniform Cost Search Algorithm from Russell & Norvig's Artificial Intelligence Book

On page 84 of Russell & Norvig's Artificial Intelligence book, 3rd ed., the algorithm for uniform cost search is given. I have provided a screenshot of it here for your convenience. I am having trouble understanding the highlighted line: if child.STATE is not in explored **or** frontier then. Shouldn't that be if child.STATE is not in explored **and**…
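The ambiguity in that line can be made concrete in a few lines of Python (names like `explored` and `frontier_states` are illustrative, not taken from the book's pseudocode). The book's phrasing reads as "not in (explored OR frontier)", i.e. not in the union of the two sets, which by De Morgan's law is the same as "not in explored AND not in frontier":

```python
# Illustrative sets: states already expanded, and states currently queued.
explored = {"A", "B"}
frontier_states = {"C"}


def should_add(child_state):
    # The book's "not in explored or frontier": not in the union of both sets.
    return child_state not in (explored | frontier_states)


def should_add_with_and(child_state):
    # De Morgan equivalent: "not in explored AND not in frontier".
    return child_state not in explored and child_state not in frontier_states


# The two readings agree on every state.
for s in ("A", "B", "C", "D"):
    assert should_add(s) == should_add_with_and(s)
```

So a genuinely new state like `"D"` is added, while `"A"` (explored) and `"C"` (already on the frontier) are not.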