Totally new to running local LLMs. How are you connecting to your model from mobile?
I’m connecting from my phone to llama.cpp on my laptop via Tailscale, but when my laptop goes to sleep I can’t reach it from my phone anymore.
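For context, this is roughly how I hit the server from my phone while the laptop is awake. The Tailscale IP and port are placeholders for my actual setup:

```python
import requests

# Placeholder Tailscale address and port -- substitute your laptop's
# 100.x.y.z Tailscale IP and whatever port llama-server listens on.
LAPTOP = "http://100.64.0.1:8080"

# llama.cpp's server exposes an OpenAI-compatible chat endpoint.
resp = requests.post(
    f"{LAPTOP}/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello from my phone"}],
        "max_tokens": 64,
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```

This works fine until the laptop sleeps, then the requests just time out.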
What are y’all using for this? Thanks!