Interpolation Can Provably Preclude Invariance https://arxiv.org/abs/2211.15724
#Overfitting to the point of #interpolation can hinder invariance-inducing objectives: one cannot assume that a #DeepLearning model trained with an invariance penalty will actually achieve any form of #invariance… This suggests that "benign overfitting," in which models generalize well despite interpolating the training data, might not extend favorably to settings where #robustness or #fairness are desirable.
#fairness #robustness #invariance #DeepLearning #interpolation #overfitting
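To make "invariance penalty" concrete, here is a minimal sketch of one common choice, an IRMv1-style penalty in PyTorch. The model, the list of per-environment batches, and `penalty_weight` are hypothetical placeholders, not the paper's setup; this only illustrates the kind of objective the result is about.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1-style penalty: squared gradient of the risk w.r.t. a dummy
    # scale on the classifier output; a small gradient means the shared
    # classifier is (locally) optimal for this environment.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def invariant_objective(model, envs, penalty_weight=1.0):
    # envs: list of (inputs, labels) batches, one per environment (hypothetical).
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        risk = risk + F.cross_entropy(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    # If the model interpolates (training risk ~ 0), the penalty's training
    # gradient can vanish too -- which is why interpolation can leave the
    # penalty with nothing to enforce.
    return risk / len(envs) + penalty_weight * penalty / len(envs)
```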
What if the whole idea of #DeepLearning is actually dead wrong, and very deep networks are only necessary to counteract the implicit errors that such networks themselves create? What if the brain instead uses a family of generalized functions that simply spans a subspace large enough to cover that pesky manifold, and then places learned vectors on those generalized functions? That would fit the vectors to an existing net, rather than fitting the net to the vectors as deep learning does. #ArtificialIntelligence
#DeepLearning #artificialintelligence
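One way to read that idea is as a fixed-basis (random-features) model: the "net" is a frozen basis spanning a subspace, and only a readout vector is fit to it. A minimal sketch of that interpretation, with entirely made-up names and toy data, nothing here is a claim about the brain:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "generalized function" basis: a random, untrained feature map.
d_in, d_feat = 10, 512
W = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
b = rng.uniform(0, 2 * np.pi, size=d_feat)

def features(X):
    # Random Fourier-like features: the net stays fixed; only the
    # readout vector below is learned.
    return np.cos(X @ W + b)

# Toy target living on a simple low-dimensional structure (hypothetical).
X = rng.normal(size=(200, d_in))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Fit the learned vector to the existing net (ridge regression on the
# fixed features), rather than fitting the net to the vector.
Phi = features(X)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(d_feat), Phi.T @ y)

print("train MSE:", np.mean((features(X) @ w - y) ** 2))
```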
#Introduction : I'm Paul, a computational semanticist currently working in the #ClinicalNLP world. I teach at the #MedicalUniversityOfSouthCarolina (#MUSC) in a joint PhD program with #ClemsonUniversity. I run the #NLPCore, a service center that helps other researchers at MUSC gain access to the power of #NLP, #AI, #ML, #DeepLearning, and #ShallowLearning for their own agendas. My tech tree: #Linux, #RStats, #Python, #Java, #emacs, #OrgMode, #LaTeX, #git.
#git #latex #orgmode #emacs #java #python #rstats #linux #ShallowLearning #DeepLearning #ml #ai #nlp #NLPCore #ClemsonUniversity #musc #MedicalUniversityOfSouthCarolina #ClinicalNLP #introduction