
Voleti et al. (2019) https://arxiv.org/abs/1811.07021: "Our results show that pre-trained neural sentence encoders are both robust to ASR errors and perform well on textual similarity tasks after errors are introduced. Meanwhile, unweighted averages of word vectors perform well with perfect transcriptions, but their performance degrades rapidly on textual similarity tasks for text with word substitution errors."
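A minimal sketch of the "unweighted average of word vectors" baseline the quote contrasts with sentence encoders: each sentence is represented as the mean of its word embeddings, and textual similarity is the cosine between those means. The toy embedding table and the example sentences below are hypothetical, purely for illustration; the paper itself uses pre-trained word and sentence representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical word vectors; in practice these would be pre-trained
# embeddings (e.g. GloVe or word2vec), not random toy vectors.
vocab = ["the", "cat", "sat", "on", "mat", "hat"]
emb = {w: rng.normal(size=50) for w in vocab}

def sentence_vector(tokens):
    """Unweighted average of the word vectors (out-of-vocabulary tokens skipped)."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

clean = "the cat sat on the mat".split()
asr   = "the cat sat on the hat".split()   # one ASR-style word substitution error

print(cosine(sentence_vector(clean), sentence_vector(clean)))  # 1.0 for identical transcripts
print(cosine(sentence_vector(clean), sentence_vector(asr)))    # similarity drops with the substitution
```

Because every word contributes equally to the average, a single substituted word shifts the whole sentence vector, which is one intuition for why this baseline degrades rapidly under ASR word substitution errors while trained sentence encoders remain more robust.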

See also
