Launch and chat
Easy-to-use API: the spec is simple enough that just a few lines of YAML deploy a model, and you can chat with it right away.
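As a minimal sketch, once a Model resource is running you can port-forward its Service and talk to Ollama's /api/chat endpoint; the Service name "phi" below is an assumption for illustration, so check kubectl get svc for the actual name.

# Forward the model's Service to localhost (11434 is Ollama's default port;
# the Service name "phi" is assumed here, verify with `kubectl get svc`)
$ kubectl port-forward svc/phi 11434:11434

# Chat with the deployed model through Ollama's /api/chat endpoint
$ curl http://localhost:11434/api/chat -d '{
    "model": "phi",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'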
Large language models, scaled, deployed
Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫
Dislike YAML?
Faster and better UX?
No worries, kollama to the rescue!
# as a regular binary CLI
$ kollama deploy phi

# or as a kubectl plugin
$ kubectl ollama deploy phi
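Either command creates the same Model resource under the hood, so it can be inspected with ordinary kubectl tooling; the resource names below assume the CRD follows the usual plural/singular convention for the Model kind.

# List deployed models (assuming the CRD registers the plural name "models")
$ kubectl get models

# Inspect a single model's status and events
$ kubectl describe model phi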
Fine-grained control over parameters?
GitOps and CI/CD?
The CRD is simple enough: just 6 lines!
apiVersion: ollama.ayaka.io/v1
kind: Model
metadata:
  name: phi
spec:
  image: phi
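As a sketch, save the manifest above to a file (the file name phi.yaml is arbitrary) and apply it; the operator then pulls the phi image through Ollama and serves it.

# Apply the 6-line manifest; the operator handles pulling and serving the model
$ kubectl apply -f phi.yaml

# Watch the Model resource while the operator reconciles it
$ kubectl get model phi -w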