The main limitation is the VRAM, but I doubt any model is going to be particularly fast. I think phi3:mini on ollama might be an ok-ish fit for Python: it's a small model, but it was trained on Python codebases.
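If it helps, here's a minimal sketch of how you could query it from Python once it's pulled, going through Ollama's local REST API (this assumes the default port 11434, that you've already done `ollama pull phi3:mini`, and that the prompt is just a placeholder):

```python
import requests

# Minimal sketch: ask phi3:mini (served by a local Ollama instance) to write some Python.
# Assumes Ollama is running on its default port and the model has already been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3:mini",
        "prompt": "Write a Python function that parses a CSV file into a list of dicts.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,  # small models on limited VRAM can still take a while
)
print(resp.json()["response"])
```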