← Back

Using proprietary LLMs feels like walking on quicksand. System prompts change. Quantizations change. Snapshots change. It’s hard to build a workflow around any model these days. It’s hard to commit to anything.

My big hope, if locally hosted models ever catch up with the proprietary ones, is that we might finally be able to freeze something and actually experiment with it.