Using LM Studio on the GX10 has really nudged me toward the LM Link feature, which lets the GX10 be accessed as a secure API server from any remote machine that also has LM Studio installed (without having to forward any ports on your router). LM Link lets me use all the models on my GX10, with inference processed on the GX10's GPU and results delivered to any other machine running LM Studio.
To be clear: say you run Pi, for example, on several remote machines, and each Pi instance connects to the API server of the LM Studio instance installed on that same machine. The model list shown in each of those local LM Studio instances (connected to your LM Link account) includes all the models on the GX10, and they appear as if they were installed directly in the local instance. So Pi connects to the API served by the local LM Studio instance, while the inference actually runs on the GX10. That's pretty slick.
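To make the client side concrete: LM Studio's local server speaks an OpenAI-compatible API (by default on `http://localhost:1234/v1`), and with LM Link active the GX10's models simply show up in that local model list. A minimal sketch of what a client like Pi is effectively doing, assuming the default port and a placeholder model name (substitute one that actually appears in your model list):

```python
import json

# LM Studio's local server is OpenAI-compatible; default endpoint shown here.
# With LM Link active, models hosted on the remote GX10 appear in this local
# model list, so the client talks to localhost while inference runs remotely.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat payload for the local LM Studio server."""
    return {
        "model": model,  # placeholder; use a model from your own model list
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def send(payload: dict) -> dict:
    """POST the payload to the local endpoint (requires LM Studio running)."""
    import urllib.request
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_chat_request("qwen2.5-7b-instruct", "Say hello in one word.")
```

The point is that nothing in the request names the GX10 at all; the local LM Studio instance handles the routing, which is why any OpenAI-compatible client works unchanged.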
UPDATE: I've been running LM Link with one of the Strix Halo machines acting as the server, and that's also working reliably.