In the original skip-gram/CBOW models, both the context word and the target word are represented as one-hot encodings. Does fastText also use a one-hot encoding for each subword when training its skip-gram/CBOW model? If so, is the length of the one-hot vector |vocab| + |all subwords|?
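For reference, here is a minimal Python sketch (helper names are made up, not fastText's actual code) of the representation the question is asking about: a word's input is effectively multi-hot, with one index for the word itself plus one index per character n-gram, all drawn from a single shared embedding table. Note that real fastText hashes n-grams into a fixed number of buckets instead of enumerating every subword, so its table has |vocab| + num_buckets rows rather than |vocab| + |all subwords|.

```python
def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word, using fastText's < > boundary markers."""
    marked = f"<{word}>"
    return [marked[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(marked) - n + 1)]

vocab = {"where": 0, "there": 1}   # toy word vocabulary
subword_index = {}                 # toy subword "vocabulary" (real fastText hashes instead)

def input_indices(word):
    """Indices that are 'hot' for this word: the word id plus its subword ids,
    offset by |vocab| so words and subwords share one embedding table."""
    ids = [vocab[word]] if word in vocab else []
    for gram in char_ngrams(word):
        if gram not in subword_index:
            subword_index[gram] = len(subword_index)
        ids.append(len(vocab) + subword_index[gram])
    return ids

print(char_ngrams("where")[:5])  # ['<wh', 'whe', 'her', 'ere', 're>']
print(input_indices("where"))    # word id 0 plus its subword ids
```

Under this view, the word vector used in training is the sum (or average) of the embedding rows at these indices, which is equivalent to multiplying a multi-hot vector of that length by the embedding matrix.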