@BenjaminHan and rephrasing my earlier statement:
Using #knowledge as engineering shorthand for #inductivebias is fine. Treat a #KG as one implementation of #datastorage for a #DNN, but not as a #representation of #knowledge.
Another viewpoint from which to consider the obvious deficits of #KG is the work of #Vervaeke et al., esp. his account of the “4 types of knowledge” (propositional, procedural, perspectival, participatory).
#knowledge #inductivebias #vervaeke #kg #datastorage #dnn #representation
@BenjaminHan also, coming back to ur OP ;-)
In practice, we find #KG useful to loosely couple databases across different “subcontexts”, bridging (not breaking) #organizational #silos. Yet, the shared context remains narrow and brittle.
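A minimal sketch of what “loosely coupling databases via a #KG” can look like: each silo keeps its own schema and keys, and the KG holds only identity links between them. All names here (crm_db, erp_db, the entity IDs) are illustrative assumptions, not a real system.

```python
# Hypothetical sketch: a KG as a thin identity layer that loosely couples
# two databases ("subcontexts") without merging their schemas.
# All names and IDs below are illustrative assumptions.

# Each silo keeps its own local keys and schema.
crm_db = {"c-17": {"name": "ACME Corp", "segment": "enterprise"}}
erp_db = {"4711": {"legal_name": "ACME Corporation", "country": "DE"}}

# The KG stores only sameAs-style links, not the payloads themselves.
kg_links = {
    "ex:AcmeCorp": {"crm": "c-17", "erp": "4711"},
}

def bridged_view(entity_id: str) -> dict:
    """Resolve one KG entity into a merged, read-only view across silos."""
    links = kg_links[entity_id]
    view = {}
    view.update(crm_db.get(links["crm"], {}))
    view.update(erp_db.get(links["erp"], {}))
    return view

print(bridged_view("ex:AcmeCorp"))
```

Note the brittleness is visible even here: the bridge only works for entities someone has explicitly linked, i.e. the shared context stays narrow.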
#Transformers, on the other hand, we find most useful for catching the long tail of at least a single subcontext in a practically robust way.
And yes, #inductivebias here is informed by the #KG, top-down; yet, as always, the #datamodel of the input is the determinant of predictive viability.
#kg #organizational #Silos #Transformers #datamodel #inductivebias
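One way to read “#inductivebias informed by #KG, top-down” concretely: enrich the model’s input with KG-derived type tags, so the bias enters through the input #datamodel itself. This is a hedged sketch under assumed names (kg_types, the marker format), not the poster’s actual pipeline.

```python
# Hypothetical sketch: injecting KG-derived entity-type tags into text
# before it reaches a transformer. The KG shapes the inductive bias
# top-down, while the input data model remains the real determinant.
# The dictionary and marker format are illustrative assumptions.

kg_types = {"ACME Corp": "Organization", "Berlin": "City"}

def tag_entities(text: str) -> str:
    """Wrap known KG entities in type markers prior to tokenization."""
    for surface, etype in kg_types.items():
        text = text.replace(surface, f"[{etype}] {surface} [/{etype}]")
    return text

print(tag_entities("ACME Corp opened an office in Berlin."))
```

The design choice: the transformer never queries the KG at inference time; the KG’s influence is frozen into the input representation, which is exactly why the input data model ends up deciding predictive viability.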