Description
I found it more convenient to just spawn the tool as a tool call augmenting an existing inference model. I also stopped relying on the SDK wrappers around the tool and just spawn it programmatically. But of course that gives only one render per spawn, and it takes quite a conglomeration of args to feed it. If the tool had a REST service, a single JSON payload could hold all the parameters, be more robust about types, and eliminate the huge StringBuilder currently required to do this.
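A rough sketch of the difference, with hypothetical parameter names (`prompt`, `width`, `steps`, etc. are illustrative assumptions, not the tool's actual flags): the spawn route flattens everything into one long argument list of strings, while a REST route carries the same parameters as one typed JSON object.

```python
import json

# Hypothetical render parameters -- names are illustrative, not the tool's real flags.
params = {
    "prompt": "a lighthouse at dusk",
    "width": 1024,        # stays an int in JSON, not a string
    "height": 1024,
    "steps": 28,
    "seed": 12345,
    "output": "render.png",
}

# Spawn route: every value coerced to a string (the "huge StringBuilder" problem).
argv = ["rendertool"]
for key, value in params.items():
    argv += [f"--{key}", str(value)]

# REST route: one payload, types preserved, easy to validate server-side.
payload = json.dumps(params)
print(payload)
```

The JSON route also makes it trivial to add or deprecate parameters without breaking the argument-parsing contract.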
What is more important, though, is that the tool could remain running, listening for future runs. It could either return a base64 (WebP or AVIF) string for the image, or just write the file directly as it does now. Since the process never exits, subsequent runs skip the initial boot-up entirely.
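A minimal sketch of that resident-service idea, assuming a `/render` endpoint and a stub in place of the actual model call: the server stays up between requests and hands back a base64 string for the image.

```python
import base64
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class RenderHandler(BaseHTTPRequestHandler):
    """Sketch of a render service that stays resident between runs.
    The real service would invoke the model where the stub bytes are made."""

    def do_POST(self):
        length = int(self.headers["Content-Length"])
        params = json.loads(self.rfile.read(length))
        # Stand-in for the actual render: real code would produce WebP/AVIF bytes.
        image_bytes = f"rendered:{params['prompt']}".encode()
        body = json.dumps(
            {"image_b64": base64.b64encode(image_bytes).decode()}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for a free port; the server thread keeps listening.
server = HTTPServer(("127.0.0.1", 0), RenderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "run" is now just a POST -- no process spawn, no boot-up cost.
req = Request(
    f"http://127.0.0.1:{server.server_port}/render",
    data=json.dumps({"prompt": "test"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    reply = json.loads(resp.read())
image = base64.b64decode(reply["image_b64"])
server.shutdown()
```

Each subsequent request reuses the warm process, which is exactly where the boot-up savings come from.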
If it were a REST service, then this tool could be "remote" and have a dedicated computer, whereas now it shares the machine with an active LLM. It could even run in the cloud and serve up Flux renders. The endpoint could be localhost or http://domain here.
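One nice consequence: the client code is identical whether the service is local or remote, since only the base URL changes. A small sketch (the `/render` path and parameter names are assumptions for illustration):

```python
import json
from urllib.request import Request

def make_render_request(base_url: str, params: dict) -> Request:
    # Same client code for a local or remote/cloud service; only base_url differs.
    # The /render endpoint path is an assumption for illustration.
    return Request(
        f"{base_url}/render",
        data=json.dumps(params).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

local = make_render_request("http://localhost:8080", {"prompt": "flux test"})
remote = make_render_request("https://render.example.com", {"prompt": "flux test"})
```

Moving the render box off the LLM host is then a one-line config change rather than a code change.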