I have something I would like to clarify regarding Generative Adversarial Imitation Learning (GAIL). Is the original GAIL applicable if the expert trajectories (sample data) are for the same task but come from a different environment (modified, but not completely different)? My gut feeling is yes; otherwise we could simply adopt behavioral cloning. Furthermore, since the expert trajectories come from a different environment, the dimension/length of the state-action pairs will most likely differ. Will those trajectories still be useful for GAIL training?
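To make the dimension issue concrete, here is a minimal sketch of the mismatch I mean (the specific dimensions are made up for illustration):

```python
import numpy as np

# Hypothetical dimensions: suppose the expert's (modified) environment
# exposes a 10-dimensional observation, while my agent's environment
# exposes only 8 dimensions; actions are 2-dimensional in both.
expert_state_dim = 10
agent_state_dim = 8
action_dim = 2

# One expert state-action pair vs one agent state-action pair,
# each flattened the way a GAIL discriminator would consume them.
expert_pair = np.concatenate([np.zeros(expert_state_dim), np.zeros(action_dim)])
agent_pair = np.concatenate([np.zeros(agent_state_dim), np.zeros(action_dim)])

# GAIL's discriminator takes a fixed-size (state, action) input, so these
# two vectors cannot be fed to the same network without some mapping
# between the two state spaces.
print(expert_pair.shape, agent_pair.shape)  # (12,) (10,)
```

So the question is really whether GAIL can tolerate such a mismatch directly, or whether the trajectories must first be projected into a shared state-action representation.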