Interactive testing environment for the Shimmy LLM inference API
Click any example to load it into the playground:
POST /api/generate
Generate text with any loaded model. Supports streaming and custom parameters.
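As a sketch, a request body for this endpoint might be built like the following. The field names ("model", "prompt", "stream") follow common Ollama-style conventions and are assumptions, not a confirmed Shimmy schema; the model name is a placeholder.

```python
import json

# Hypothetical body for POST /api/generate. Field names are assumed
# Ollama-style conventions; check your Shimmy version's schema.
payload = {
    "model": "llama3",               # placeholder -- list real names via GET /api/tags
    "prompt": "Why is the sky blue?",
    "stream": False,                 # True would request incremental chunks (assumed flag)
}

# Serialize to the JSON bytes you would POST to the server.
body = json.dumps(payload).encode("utf-8")
print(body.decode("utf-8"))
```

The serialized `body` is what a client would send with a `Content-Type: application/json` header.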
POST /api/chat
Chat-style completions with conversation history and system prompts.
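A chat request could carry the conversation history as a list of role/content messages. The "messages" shape below (including the "system" role for system prompts) is the common chat-completion convention and is an assumption, not confirmed Shimmy schema.

```python
import json

# Hypothetical body for POST /api/chat; role/content message pairs
# follow common chat-completion conventions (assumption).
payload = {
    "model": "llama3",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what Shimmy does."},
    ],
}
print(json.dumps(payload, indent=2))
```

Appending each assistant reply and the next user turn to `messages` is how the conversation history accumulates across requests.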
GET /api/tags
Get all available models and their metadata.
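A minimal client for this endpoint might look like the sketch below. The base URL and the assumption that models are nested under a "models" key (as in Ollama-style APIs) are guesses; adjust both to your deployment.

```python
import json
import urllib.request
import urllib.error

BASE_URL = "http://localhost:11435"  # assumed server address; adjust to your setup

def list_models(base_url: str = BASE_URL) -> list:
    """Fetch model metadata from GET /api/tags; return [] if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        # Ollama-style responses nest entries under "models" (assumption).
        return data.get("models", [])
    except (urllib.error.URLError, OSError):
        return []
```

Returning an empty list on connection failure keeps the helper usable in scripts that probe before loading a model.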
GET /health
Check if the Shimmy server is running and healthy.
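A health probe can simply check for an HTTP 200 from this endpoint. The default address below is an assumption; the response body format is not relied on at all.

```python
import urllib.request
import urllib.error

def is_healthy(base_url: str = "http://localhost:11435") -> bool:
    """Return True if GET /health answers with HTTP 200 (address assumed)."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

This is handy as a readiness gate, e.g. polling `is_healthy()` before sending the first generate request.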