📝 "Improving Temporal Generalization of Pre-trained Language Models with Lexical Semantic Change" (#emnlp)
https://arxiv.org/pdf/2210.17127.pdf
Clever idea: exploit models of lexical semantic change for continual adaptation of LLMs. Some words are very stable over time; others shift. So if you're just continually fine-tuning on random new data, you're burning a lot of cycles on words that aren't changing while missing the long tail of words that are. Would be nice to see other #inductiveBiases from linguistics.
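A minimal sketch of the underlying intuition (not the paper's actual method, and with made-up toy embeddings): score each word's drift as the cosine distance between its embeddings from an old and a new time slice, then turn those scores into per-token loss weights so continual fine-tuning spends its budget on the words that are actually changing. In practice the two embedding spaces would first need to be aligned (e.g. via orthogonal Procrustes), which is skipped here.

```python
import numpy as np

def semantic_change(v_old: np.ndarray, v_new: np.ndarray) -> float:
    """Cosine distance between a word's embeddings from two time slices.
    ~0 for a stable word, approaching 1 (or more) for a strong shift.
    Assumes the two embedding spaces are already aligned."""
    cos = v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
    return 1.0 - float(cos)

def token_loss_weights(vocab, emb_old, emb_new, floor=0.1):
    """Map drift scores to loss weights in [floor, 1]: shifted words get
    full weight, stable words keep a small floor so they still receive
    some gradient during continual fine-tuning."""
    scores = np.array([semantic_change(emb_old[w], emb_new[w]) for w in vocab])
    rng = scores.max() - scores.min()
    norm = (scores - scores.min()) / rng if rng > 0 else np.zeros_like(scores)
    return dict(zip(vocab, floor + (1.0 - floor) * norm))

# Toy example: "stable" keeps its vector, "shifted" rotates 90 degrees.
vocab = ["stable", "shifted"]
emb_old = {"stable": np.array([1.0, 0.0]), "shifted": np.array([1.0, 0.0])}
emb_new = {"stable": np.array([1.0, 0.0]), "shifted": np.array([0.0, 1.0])}
weights = token_loss_weights(vocab, emb_old, emb_new)
# "shifted" gets full weight 1.0; "stable" stays at the 0.1 floor
```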