WOODS: Benchmarks for Out-of-Distribution Generalization in Time Series
Jean-Christophe Gagnon-Audet, Kartik Ahuja, Mohammad Javad Darvishi Bayazi et al.
Action editor: Antoni Chan.
#generalization #generalize #datasets
I would argue that named entity recognition (#NER) is the simplest, best-defined #NLP task with a current, practical application. One might think that NER is a solved problem. However, despite decades of work, the best existing NER models still fail to #generalize well to named entities (names) that were not in their training data.
I don't know what #sentience is, but, on principle, I'm not willing to grant #personhood to something that can't identify names it has never seen before.
#ner #nlp #generalize #Sentience #personhood
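A toy illustration of the generalization gap described above (this is a hypothetical sketch, not any real NER system): a "model" that simply memorizes the names seen during training tags those names correctly but misses every novel one.

```python
# Hypothetical toy "NER model": pure memorization of training names.
# All names here are made up for illustration.
train_names = {"Alice", "Bob", "Carol"}
test_names = ["Alice", "Dmitri", "Xiulan"]  # one seen, two unseen

def lookup_tagger(token, known=train_names):
    """Tag a token as a person only if it was memorized from training."""
    return "PER" if token in known else "O"

recognized = [n for n in test_names if lookup_tagger(n) == "PER"]
# Only the name that appeared in training is recognized; the unseen
# names get no tag at all, i.e. recall on novel entities is zero.
print(recognized)  # ['Alice']
```

Real neural NER models are far better than a lookup table, but the complaint in the post is that they still degrade in this same direction on entities outside the training distribution.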
Perception people, I'd be very interested to learn about distributions of time constants in #PrimarySensoryCortices. There's a particular choice (one that we think we see in the #hippocampus) that lets #DeepNetworks #generalize in what seems like a pretty adaptive way. The distribution follows the #WeberFechnerLaw:
https://proceedings.mlr.press/v162/jacques22a.html
#weberfechnerlaw #generalize #DeepNetworks #hippocampus #primarysensorycortices
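A minimal sketch of what a Weber-Fechner-style distribution of time constants looks like, assuming it means geometric (log-uniform) rather than linear spacing; the function name and the range of time constants here are my own illustrative choices, not taken from the linked paper.

```python
import numpy as np

def log_spaced_time_constants(tau_min=1.0, tau_max=100.0, n=8):
    """Geometrically spaced time constants: equal *ratios* between
    neighbors, not equal differences (log-uniform / Weber-Fechner-like)."""
    return tau_min * (tau_max / tau_min) ** np.linspace(0.0, 1.0, n)

taus = log_spaced_time_constants()
ratios = taus[1:] / taus[:-1]
# Scale invariance: every consecutive pair of time constants differs by
# the same multiplicative factor, so the bank tiles short and long
# timescales evenly in log-time rather than crowding one end.
print(np.allclose(ratios, ratios[0]))  # True
```

The appeal of this spacing is that relative (percentage) changes in timescale are represented uniformly, which matches the Weber-Fechner observation that perception is sensitive to ratios rather than absolute differences.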