Your AI, your way
Run everything locally for free, use our cloud, or mix both. Stimma adapts to your setup.
Built by enthusiasts
We run FLUX on our 4090s, swap LoRAs in ComfyUI, and argue about quantization on Reddit. We learned most of what we know from this community.
So the local tier is genuinely free. Not "free trial" free. Connect your ComfyUI, point to your local LLM, done.
Bring Your Own AI
Your hardware, your models, your data. Stimma connects to what you already run.
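As a rough sketch of what "connecting what you already run" could look like — the field names here are illustrative assumptions, not Stimma's actual settings; the ports are the common local defaults for ComfyUI (8188) and Ollama (11434):

```typescript
// Hypothetical backend configuration (assumption — not Stimma's real config schema).
const backends = {
  comfyui: { url: "http://localhost:8188" },                // your local ComfyUI instance
  llm: { url: "http://localhost:11434", model: "llama3" },  // a local LLM server, e.g. Ollama
  cloud: { enabled: true },                                  // optionally mix in cloud models
};

console.log(`Image backend: ${backends.comfyui.url}`);
```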
Cloud Models
Premium models, no setup. Always the latest with cloud GPU acceleration.
Mix cloud and local freely — use FLUX.2 Pro from our cloud while running your own LLM for chat. Pay only for what you generate.
New models added regularly. Pricing on our pricing page.
export default defineTool({
  name: "style-transfer",
  description: "Apply artistic style",
  parameters: {
    image: { type: "image" },
    style: { type: "string" },
  },
  async execute({ image, style }) {
    // Your implementation here
    return { image: result }
  },
})

Stimma Tools Protocol
Like MCP, but for visual creation. Build custom tools that plug directly into Stimma's workflow.
Custom filters, style transfer, batch operations, API integrations — if you can code it, it works in Stimma.
Well-documented, language-agnostic. TypeScript SDK available.
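A self-contained sketch of how a tool defined this way might be registered and invoked by a host. Note the assumptions: `defineTool` is modeled here as a simple identity helper and the grayscale "implementation" just tags the image reference — neither is the actual SDK or a real transform:

```typescript
// Minimal shape a Tools Protocol tool might have (assumption, not the real SDK types).
type ToolDef<P, R> = {
  name: string;
  description: string;
  parameters: Record<string, { type: string }>;
  execute: (args: P) => Promise<R>;
};

// Stand-in for the SDK's defineTool: it simply returns the definition.
function defineTool<P, R>(def: ToolDef<P, R>): ToolDef<P, R> {
  return def;
}

const grayscale = defineTool({
  name: "grayscale",
  description: "Convert an image to grayscale",
  parameters: { image: { type: "image" } },
  async execute({ image }: { image: string }) {
    // A real tool would transform pixel data; here we just tag the reference.
    return { image: `${image}?fx=grayscale` };
  },
});

// A host like Stimma would look the tool up by name and call execute:
grayscale.execute({ image: "local://photo.png" }).then((out) => console.log(out.image));
```

The tool is plain data plus an async function, which is what makes the protocol language-agnostic: any runtime that can describe parameters and answer an `execute` call can participate.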
Share tools and discover what others have built. Coming soon.
All Supported Models
Growing regularly. See pricing for cloud model costs.