You can sample nodes for scalable #GNN #training. But how do you do #scalable #inference?
In our latest paper (Oral at #LogConference) we introduce influence-based mini-batching (#IBMB) for both fast inference and training, achieving speedups of up to 130x and 17x, respectively!
1/8 🧵
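To make the question in the first tweet concrete, here is a minimal sketch (assuming PyTorch Geometric's standard NeighborLoader API; this is illustrative only and not the paper's IBMB code) of why sampling keeps training scalable while naive inference still touches the whole graph at once:

```python
# Sketch: neighbor-sampled mini-batch training vs. naive full-graph inference.
# Assumes PyTorch Geometric; dataset/model choices here are just for illustration.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import GCNConv

data = Planetoid(root="/tmp/Cora", name="Cora")[0]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid)
        self.conv2 = GCNConv(hid, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(data.num_features, 64, 7)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

# Training scales: each mini-batch only materializes a small sampled
# subgraph around its seed nodes.
train_loader = NeighborLoader(data, num_neighbors=[10, 10],
                              batch_size=128, input_nodes=data.train_mask)
model.train()
for batch in train_loader:
    opt.zero_grad()
    out = model(batch.x, batch.edge_index)
    # Seed nodes come first in each NeighborLoader batch.
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    opt.step()

# Inference is the pain point: a naive forward pass needs the full graph
# in memory at once — the bottleneck that influence-based mini-batches target.
model.eval()
with torch.no_grad():
    pred = model(data.x, data.edge_index).argmax(dim=-1)
```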