When classifying (as in Project 2: Text Classification), if the embeddings have the same length, can we assume that test time (i.e., the time to classify) will be the same regardless of the method used to create the vectors?
In other words, at test time, assuming the same embedding length, will GloVe, bag-of-words, and word2vec have similar running times?
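For intuition, here is a minimal sketch (the dimension, weights, and random stand-in vectors are illustrative assumptions): once training is done, a linear classifier only sees a fixed-length vector, so prediction costs the same d multiply-adds per example no matter which method produced the vector.

```python
import numpy as np

# Illustrative sketch: at test time the classifier receives a length-d
# vector, so predicting is O(d) per example, independent of whether the
# vector came from GloVe, word2vec, or bag-of-words (assuming equal d).
rng = np.random.default_rng(0)
d = 200                      # embedding dimension (same for all methods)
w = rng.standard_normal(d)   # stand-in for trained classifier weights

def predict(x):
    """Binary prediction from any length-d feature vector."""
    return 1 if w @ x > 0 else 0

# Stand-ins for embeddings produced by the three methods.
glove_vec = rng.standard_normal(d)
w2v_vec = rng.standard_normal(d)
bow_vec = rng.standard_normal(d)

# Each call performs the same d multiply-adds, so running time matches.
preds = [predict(glove_vec), predict(w2v_vec), predict(bow_vec)]
```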
In Week 13, Part 2 (minute 23:51), Professor Jaggi mentions that we could learn word vectors by filling the missing co-occurrence entries with zeros and then performing SVD. However, these word vectors are not GloVe word vectors (and do not perform as well).
To learn GloVe word vectors, must we use the weighting function together with SGD or ALS?
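For reference, a minimal sketch of what the weighted SGD version looks like, assuming the usual GloVe weighting f(x) = min(1, (x / x_max)^alpha); the tiny co-occurrence data, learning rate, and omission of the bias terms are all simplifying assumptions for illustration.

```python
import numpy as np

# Sketch of one SGD pass over the *nonzero* co-occurrence entries only,
# minimizing f(X_ij) * (w_i . c_j - log X_ij)^2 (biases omitted).
# x_max, alpha, lr, and the toy counts are illustrative assumptions.
rng = np.random.default_rng(0)
V, d = 10, 5                              # vocabulary size, embedding dim
W = 0.1 * rng.standard_normal((V, d))     # word vectors
C = 0.1 * rng.standard_normal((V, d))     # context vectors
X = {(0, 1): 4.0, (2, 3): 1.0, (1, 4): 7.0}  # nonzero co-occurrence counts

def f(x, x_max=10.0, alpha=0.75):
    """GloVe weighting: down-weights rare pairs, caps frequent ones."""
    return min(1.0, (x / x_max) ** alpha)

lr = 0.05
for (i, j), x_ij in X.items():            # SGD visits only observed entries
    err = W[i] @ C[j] - np.log(x_ij)
    grad = 2.0 * f(x_ij) * err
    W[i], C[j] = W[i] - lr * grad * C[j], C[j] - lr * grad * W[i]
```

Note the contrast with plain SVD: here unobserved entries are simply never visited, rather than being treated as hard zeros.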
GloVe
Could we learn GloVe word vectors using SVD?
If so, what modifications would we have to make?
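For context, a minimal sketch of the zero-fill-plus-SVD recipe mentioned in the lecture (the `log1p` transform to handle the zeros and the rank-d truncation are illustrative assumptions, not the course's exact setup):

```python
import numpy as np

# Fill missing co-occurrence entries with zeros, apply log(1 + X), and
# truncate the SVD to rank d. As noted in lecture, the resulting vectors
# are NOT GloVe vectors: SVD weights every entry equally (including the
# filled-in zeros) instead of using the GloVe weighting f(X_ij).
rng = np.random.default_rng(0)
V, d = 10, 5
X = np.zeros((V, V))                      # dense matrix, missing -> 0
for (i, j), count in {(0, 1): 4.0, (2, 3): 1.0, (1, 4): 7.0}.items():
    X[i, j] = count

U, s, Vt = np.linalg.svd(np.log1p(X))
word_vectors = U[:, :d] * np.sqrt(s[:d])  # rank-d factor as embeddings
```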