Predictions throughout training, across hyperparameters and architectures, are yet again shown to lie on a low-dimensional manifold,
which means different models learn their classification outputs in similar ways
https://arxiv.org/abs/2305.01604
Mao ... @pratikac
#MachineLearning #enough2skim
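To make the claim concrete, here is a toy sketch (my own construction, not the paper's code) of how you might test it: treat each checkpoint's predictions on a probe set as a point in a high-dimensional space, stack points from several runs, and see how few principal components explain most of the variance. The synthetic data below simply bakes in a shared 2-D trajectory to illustrate the analysis, not the result.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_ckpts, n_probe = 5, 20, 200

# Synthetic stand-in for per-checkpoint prediction vectors: all runs follow
# a shared 2-D "training curve" in prediction space plus run-specific noise.
basis = rng.normal(size=(2, n_probe))     # shared low-dimensional directions
t = np.linspace(0, 1, n_ckpts)
curve = np.stack([t, t**2], axis=1)       # shared trajectory coordinates
points = []
for _ in range(n_runs):
    noise = 0.01 * rng.normal(size=(n_ckpts, n_probe))
    points.append(curve @ basis + noise)
X = np.concatenate(points)                # (n_runs * n_ckpts, n_probe)

# PCA via SVD on the centered data: if trajectories share a manifold,
# a handful of components should capture nearly all the variance.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
var = s**2 / (s**2).sum()
print(f"variance explained by top 2 PCs: {var[:2].sum():.3f}")
```

With real networks you would replace the synthetic `X` with stacked softmax outputs from saved checkpoints; the PCA step stays the same.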
Few-shot learning almost matches traditional machine translation
https://arxiv.org/abs/2302.01398
#enough2skim #NLProc #neuralEmpty
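For readers unfamiliar with the setup: few-shot MT means prepending a handful of translation pairs to the prompt and letting a language model continue the pattern. A minimal sketch of the prompt construction only (my own toy format, no model call, not the paper's exact template):

```python
# Hypothetical few-shot translation prompt: a few demonstration pairs
# followed by the source sentence to translate.
examples = [
    ("The cat sleeps.", "Le chat dort."),
    ("I like tea.", "J'aime le thé."),
]
source = "The dog runs."

prompt = "".join(f"English: {en}\nFrench: {fr}\n\n" for en, fr in examples)
prompt += f"English: {source}\nFrench:"
print(prompt)
```

The model's continuation after the final `French:` is taken as the translation.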
Computers can now play 20 Questions
You probably all know @akinator_team@twitter.com, which guesses the character you are thinking of
https://arxiv.org/pdf/2301.08718.pdf
This paper proposes the other role:
the model picks a character and answers yes or no
(basically, QA over Wikipedia + tweaks)
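A toy sketch of that role (the computer answers, the human asks). The tiny hand-made fact dict below stands in for the Wikipedia-backed QA the paper uses; the entity, facts, and matching logic are all my own illustration, not the paper's method.

```python
# The answerer secretly commits to an entity and answers yes/no questions.
entity = "Marie Curie"   # hidden from the asker
facts = {                # attributes the answerer "knows" about the entity
    "is a person": True,
    "is alive": False,
    "is a scientist": True,
    "is fictional": False,
}

def answer(question: str) -> str:
    """Answer yes/no if the question matches a known fact, else 'unknown'."""
    q = question.lower().rstrip("?")
    for fact, truth in facts.items():
        if fact in q:
            return "yes" if truth else "no"
    return "unknown"

print(answer("is a scientist?"))  # yes
print(answer("is alive?"))        # no
```

The real system replaces the dict lookup with open-domain QA over Wikipedia, so it can answer arbitrary yes/no questions about the chosen entity.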