
Does fastText still use one-hot encoding?

In the original skip-gram/CBOW, both the context word and the target word are represented as one-hot encodings. Does fastText also use a one-hot encoding for each subword when training the skip-gram/CBOW model? If so, is the length of the one-hot vector |vocab| + |all subwords|?
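For context on what that vector space looks like in practice: an embedding lookup by row index is mathematically equivalent to multiplying a one-hot vector by the embedding matrix, so "one-hot of length |vocab| + |subwords|" is a reasonable mental model, but fastText never materializes such a vector. It hashes character n-grams into a fixed number of buckets and sums the corresponding rows. Here is a minimal, hypothetical Python sketch of that scheme; the sizes and the `char_ngrams`/`word_vector` helpers are illustrative rather than fastText's actual code, though the FNV-1a hash and the sum-of-n-gram-vectors idea follow the fastText paper and implementation:

```python
# Illustrative sketch of fastText-style subword handling; sizes and
# helper names are hypothetical, not fastText's actual code.
import numpy as np

VOCAB_SIZE = 10_000   # hypothetical vocabulary size
NUM_BUCKETS = 20_000  # fastText's real default is 2,000,000 hash buckets
DIM = 100             # embedding dimension

# One row per word plus one per n-gram bucket: conceptually a "one-hot"
# input space of size VOCAB_SIZE + NUM_BUCKETS, but stored as a dense
# matrix that is only ever indexed, never multiplied by a one-hot vector.
rng = np.random.default_rng(0)
embeddings = rng.normal(scale=0.01, size=(VOCAB_SIZE + NUM_BUCKETS, DIM))

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of the word wrapped in boundary markers <...>."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def ngram_row(ngram):
    """Hash an n-gram to a bucket row (FNV-1a, as in fastText)."""
    h = 2166136261
    for byte in ngram.encode("utf-8"):
        h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
    return VOCAB_SIZE + h % NUM_BUCKETS  # bucket rows sit after word rows

def word_vector(word, word_id):
    """Input vector = word's own vector + sum of its subword vectors."""
    rows = [word_id] + [ngram_row(g) for g in char_ngrams(word)]
    return embeddings[rows].sum(axis=0)

print(word_vector("where", word_id=42).shape)  # -> (100,)
```

Because distinct n-grams can collide in the same bucket, the effective "subword vocabulary" is capped at the bucket count rather than growing with |all subwords|; the small bucket count above just keeps the example lightweight.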
