Build LLM workflows like normal Python while keeping a full audit trail by default.
Visit https://getrepublic.org for concepts, guides, and API reference.
Republic is a tape-first LLM client: messages, tool calls, tool results, errors, and usage are all recorded as structured data. Write the workflow as explicit code first, then decide where model intelligence belongs.
```python
from __future__ import annotations

import os

from republic import LLM

api_key = os.getenv("LLM_API_KEY")
if not api_key:
    raise RuntimeError("Set LLM_API_KEY before running this example.")

llm = LLM(model="openrouter:openrouter/free", api_key=api_key)
result = llm.chat("Describe Republic in one sentence.", max_tokens=48)
if result.error:
    print(result.error.kind, result.error.message)
else:
    print(result.value)
```
- Plain Python: The main flow is regular functions and branches, no extra DSL.
- Structured results: Core interfaces return `StructuredOutput` with stable `ErrorKind` values.
- Tools without magic: Supports both automatic and manual tool execution with clear debugging and auditing.
- Tape-first memory: Use anchor/handoff to bound context windows and replay full evidence.
- Event streaming: Subscribe to text deltas, tool calls, tool results, usage, and final state.
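The structured-result bullet can be approximated in plain Python. The `ChatResult`, `ChatError`, and `ErrorKind` members below are hypothetical stand-ins, not Republic's real types; they only illustrate why stable error kinds are easier to branch on than message strings.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical sketch of a result type with stable error kinds.
# Names and enum members are illustrative, not Republic's real API.

class ErrorKind(Enum):
    RATE_LIMIT = "rate_limit"
    TIMEOUT = "timeout"
    INVALID_REQUEST = "invalid_request"

@dataclass
class ChatError:
    kind: ErrorKind
    message: str

@dataclass
class ChatResult:
    value: Optional[str] = None
    error: Optional[ChatError] = None

def handle(result: ChatResult) -> str:
    # Branch on the stable error kind instead of parsing message text.
    if result.error is None:
        return result.value or ""
    if result.error.kind is ErrorKind.RATE_LIMIT:
        return "retry later"
    return f"failed: {result.error.message}"

ok = ChatResult(value="Republic records every step.")
bad = ChatResult(error=ChatError(ErrorKind.RATE_LIMIT, "429"))
print(handle(ok))   # Republic records every step.
print(handle(bad))  # retry later
```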
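The event-streaming bullet can likewise be sketched with a plain subscriber pattern. The `EventBus` class and its `subscribe`/`emit` methods are invented for this sketch; Republic's actual subscription API may differ.

```python
from collections import defaultdict
from typing import Callable

# Hypothetical sketch of per-kind event subscription: handlers receive
# text deltas, tool calls, usage, and final state as they arrive.
# The EventBus name and method signatures are illustrative only.

class EventBus:
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, kind: str, handler: Callable[[dict], None]) -> None:
        self._handlers[kind].append(handler)

    def emit(self, kind: str, payload: dict) -> None:
        for handler in self._handlers[kind]:
            handler(payload)

bus = EventBus()
chunks: list[str] = []
bus.subscribe("text_delta", lambda p: chunks.append(p["text"]))

# Simulate a streamed reply arriving as text deltas.
for piece in ["Republic ", "is ", "tape-first."]:
    bus.emit("text_delta", {"text": piece})
bus.emit("final", {"value": "".join(chunks)})

print("".join(chunks))  # Republic is tape-first.
```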
See CONTRIBUTING.md for local setup, testing, and release guidance.
This project is derived from lightning-ai/litai and inspired by pydantic/pydantic-ai; we hope you like them too.