If you are looking for a hardened phone, I would consider trying GrapheneOS for a bit and seeing if it does what you are looking for. It uses SELinux and seccomp-bpf policies for app sandboxing, and runs a hardened kernel with a hardened memory allocator. The isolation approach is great, too: separate user profiles let you run apps on a 'completely different phone,' so to speak – think of each profile as a small version of the OS that keeps its apps entirely separate. Finally, if certain apps need them, you can run all Google services sandboxed like ordinary apps, so they don't get privileged access. It's a different approach from, say, microG.
GrapheneOS is all about hardening. Security is solid.
VPN-wise, Mullvad's WireGuard servers are also solid. You can do multihop, which helps you obfuscate your traffic to a degree. They have also been experimenting with packet shaping (if you use their app directly).
SIM cards can be swapped out freely if you use a VoIP service like jmp.chat for your number.
Hello! I recently deployed GPUStack, a self-hosted GPU resource manager.
It helps you deploy AI models across clusters of GPUs, regardless of network or device. Got a Mac? It can toss a model on there and route it into an interface. Got a VM on a server somewhere? Same. How about your home PC, with that beefy gaming GPU? No prob. GPUStack is great at scaling what you have on hand, without having to deploy a bunch of independent instances of ollama, llama.cpp, etc.
I use it to route already-deployed LLMs into Open WebUI, another self-hosted interface for AI interactions, via the OpenAI-compatible API that both GPUStack and Open WebUI support!
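Because both sides speak the OpenAI API, anything else that speaks it can talk to your GPUStack models too. Here is a minimal sketch of what an OpenAI-style chat completion request to a GPUStack instance looks like; the base URL, model name, and port are placeholder assumptions, so point them at wherever your own deployment exposes its OpenAI-compatible endpoint:

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON payload for an OpenAI-style chat completion.

    base_url and model are hypothetical placeholders -- substitute the
    address of your GPUStack instance's OpenAI-compatible API and the
    name of a model you have deployed on it.
    """
    # Standard OpenAI chat completions path, appended to the API base.
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload

# Example with made-up values; POST the payload (plus your API key as a
# Bearer token) to the URL with any HTTP client to get a completion back.
url, payload = build_chat_request("http://localhost:8080/v1", "my-llm", "Hello!")
print(url)
print(json.dumps(payload))
```

Open WebUI consumes the same shape of endpoint, which is why pointing it at GPUStack "just works": you register the base URL and key once, and every model GPUStack serves shows up as a selectable backend.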