Humans with several VIAF ids in @wikidata: https://qlever.cs.uni-freiburg.de/wikidata/SRk7qK
This SPARQL query runs in #QLever because it cannot complete in the Wikidata Query Service, where it times out: https://w.wiki/6CPG 😞
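For readers who can't open the links: a query along these lines finds humans (instances of wd:Q5) with more than one VIAF ID (wdt:P214). This is a sketch of the shape of such a query, not necessarily the exact one linked above.

```sparql
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX wdt: <http://www.wikidata.org/prop/direct/>

# Humans with more than one VIAF ID (P214)
SELECT ?person (COUNT(?viaf) AS ?viafCount) WHERE {
  ?person wdt:P31 wd:Q5 ;     # instance of: human
          wdt:P214 ?viaf .    # VIAF ID
}
GROUP BY ?person
HAVING (COUNT(?viaf) > 1)
ORDER BY DESC(?viafCount)
```

Grouping over every human in Wikidata is exactly the kind of aggregation that tends to exceed the WDQS 60-second timeout, while QLever handles it.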
I moved on from #QLever to #QEndpoint, which seems more promising in terms of being updatable. However, the RAM requirement for the HDT build process is substantial: at least 200 GB to build the full #Wikidata set.
The challenge here isn't going to be initial setup, as far as I can tell; the index build is in progress. The real work will be taking the updater tech built for #Blazegraph and adapting it to #QLever. I suspect that's possible, since both speak #SPARQL, but I haven't actually tried it yet.
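The reason I suspect the updater can be ported: a Blazegraph-style updater ultimately applies each Wikidata edit as a SPARQL 1.1 Update batch, and any endpoint that accepts SPARQL Update could in principle receive the same batches. A hypothetical batch for a single edit might look like this (the entity and label values are illustrative, not from a real edit stream):

```sparql
PREFIX wd: <http://www.wikidata.org/entity/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Illustrative: replace a stale label with the current one
DELETE DATA { wd:Q42 rdfs:label "old label"@en . } ;
INSERT DATA { wd:Q42 rdfs:label "Douglas Adams"@en . }
```

Whether this works in practice depends on how completely the target engine implements SPARQL Update and how it performs under a continuous stream of small write batches, which is exactly what remains to be tested.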
After successfully experimenting with and deploying #Blazegraph for #Wikidata querying I am now experimenting with #QLever which is like a breath of fresh air.
Eventually I would like to offer QLever as an experimental service alongside Blazegraph, with the goal of retiring Blazegraph down the line.