LM Studio Connection for Local LLM

Hello,
I am running n8n locally in Docker on older Mac Mini hardware (Catalina!!). I also have a nice desktop computer that runs LM Studio, serving local LLMs to several machines on my LAN. Those machines use AnythingLLM, and the base URL they point at is the IP assigned by the ZeroTier client I run on the desktop and on most other machines I want easy access to.

ZeroTier is a great virtual-LAN system that is pretty simple to work with, but in this case maybe I should configure access in n8n's OpenAI Chat Model node differently and not use the ZeroTier IP as the base URL. It seems like it should work, but when I create the credential with that base URL and then try to select a model, the list of available models never loads.
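
One way I have seen suggested for narrowing this kind of thing down is to test the endpoint from inside the n8n container itself, since that is where the model-list request originates. A minimal sketch, assuming LM Studio's server is on its default port 1234, the container is named n8n, and 192.168.191.10 stands in for the desktop's ZeroTier IP (all placeholders, substitute your own values):

```bash
# Open a shell inside the running n8n container
docker exec -it n8n sh

# From inside the container, request LM Studio's OpenAI-compatible
# model list (the same call n8n makes to populate the dropdown).
# The n8n image is Alpine-based, so BusyBox wget is used here;
# use curl instead if it is available in your image.
wget -qO- http://192.168.191.10:1234/v1/models
```

If that returns JSON, the container can reach LM Studio and the problem is in the credential itself (double-check that the base URL includes the /v1 suffix, e.g. http://192.168.191.10:1234/v1). If it hangs or is refused, it is a networking issue, and it is also worth confirming that LM Studio's server is set to serve on the network rather than binding only to localhost, which I believe is its default.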
My main question is how best to get this Docker instance of n8n to reach the LM Studio models. If I wanted to keep using the ZeroTier option, it seems I would need to run ZeroTier in Docker on the same network as the n8n container, instead of in the client app on the Mac Mini as I do now. Hopefully there is a more direct way to do this that makes it much simpler for n8n to access the models from LM Studio on the desktop, while still allowing the other machines that use AnythingLLM to do the same.
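
In case it helps make the question concrete, here is roughly what I picture the ZeroTier-in-Docker route looking like: a ZeroTier sidecar container whose network namespace the n8n container joins. This is an untested sketch based on the zerotier/zerotier image; the volume names and <NETWORK_ID> are placeholders:

```bash
# ZeroTier sidecar: joins the virtual network and owns the network
# namespace. NET_ADMIN and /dev/net/tun let it create its virtual
# interface. Ports for n8n must be published here, since n8n will
# share this namespace. <NETWORK_ID> is a placeholder.
docker run -d --name zerotier \
  --cap-add NET_ADMIN --device /dev/net/tun \
  -p 5678:5678 \
  -v zerotier-data:/var/lib/zerotier-one \
  zerotier/zerotier <NETWORK_ID>

# n8n shares the sidecar's network stack, so it can reach the
# LM Studio desktop by its ZeroTier IP directly.
docker run -d --name n8n \
  --network container:zerotier \
  -v n8n-data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

Then again, Docker Desktop on the Mac routes outbound container traffic through the host's network stack, so it is possible the host's ZeroTier client already carries it and the sidecar is unnecessary. That is part of what I am hoping someone can confirm.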
Thanks for any insights or ideas on this. And for anyone using LM Studio with n8n at all, I would welcome hearing how it works for you in general and how you configured it. Maybe it needs its own custom node in the future?

Thanks for reading through this,
~Dubhead

Could you provide some more details on the credential setup, without any PII? Screenshots and error messages would help.