# Dockerized FastAPI wrapper for Kokoro-82M text-to-speech model
This repository contains a simple Dockerfile for deploying the Kokoro-FastAPI text-to-speech service using the pre-built CPU image.
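As a rough sketch, such a Dockerfile only needs to reference the upstream image and expose the service port. The base image name below (`ghcr.io/remsky/kokoro-fastapi-cpu`) is an assumption based on the commonly published Kokoro-FastAPI CPU image; check the Dockerfile in this repository for the exact reference.

```dockerfile
# Minimal sketch: build on the pre-built CPU image of Kokoro-FastAPI.
# The exact base image/tag is an assumption; see this repo's Dockerfile for the real one.
FROM ghcr.io/remsky/kokoro-fastapi-cpu:latest

# The service listens on port 8880 by default.
EXPOSE 8880
```

Build and run the container: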
```bash
docker build -t kokoro-tts .
docker run -p 8880:8880 kokoro-tts
```
Once the container is running, the API is available at http://localhost:8880 and exposes OpenAI-compatible endpoints:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8880/v1",
    api_key="not-needed",
)

with client.audio.speech.with_streaming_response.create(
    model="kokoro",
    voice="af_bella",
    input="Hello world!",
    response_format="mp3",
) as response:
    response.stream_to_file("output.mp3")
```
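Clients without the OpenAI SDK can call the same endpoint over plain HTTP. The sketch below assumes the service follows the standard OpenAI `/v1/audio/speech` request body; the field names are taken from the SDK example above.

```python
# Minimal sketch: call the OpenAI-compatible speech endpoint directly with requests.
# Assumes the /v1/audio/speech route accepts the standard OpenAI JSON payload.
import requests

response = requests.post(
    "http://localhost:8880/v1/audio/speech",
    json={
        "model": "kokoro",
        "voice": "af_bella",
        "input": "Hello world!",
        "response_format": "mp3",
    },
)
response.raise_for_status()

# Writes the full audio payload at once (no streaming).
with open("output.mp3", "wb") as f:
    f.write(response.content)
```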
Useful local URLs once the service is up:

- http://localhost:8880 (API base)
- http://localhost:8880/docs (interactive API documentation)
- http://localhost:8880/web (web interface)

For comprehensive documentation, please refer to the main README.md in the repository.