Module ai._providers

LLM provider implementations.

Each provider is a pair of functions: one for complete generation, one for streaming. Both use raw HTTP via httpx — no provider SDKs required.

Supported providers:

  • anthropic — Claude models (Messages API)
  • openai — GPT models (Chat Completions API)

Provider string format: ``provider:model``

  • **anthropic**: claude-sonnet-4-20250514
  • **openai**: gpt-4o
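The ``provider:model`` convention can be illustrated with a small sketch (a hypothetical helper; ``parse_provider`` below is the module's real entry point, and it additionally resolves API keys):

```python
# Hypothetical helper illustrating the "provider:model" convention.
def split_provider_string(provider_string: str) -> tuple[str, str]:
    # Split on the first colon only; everything after it is the model id.
    provider, sep, model = provider_string.partition(":")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider:model', got {provider_string!r}")
    return provider, model

print(split_provider_string("anthropic:claude-sonnet-4-20250514"))
# → ('anthropic', 'claude-sonnet-4-20250514')
```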

Classes

ProviderConfig

Parsed provider configuration.

Attributes

| Name     | Type | Description |
|----------|------|-------------|
| provider | str  |             |
| model    | str  |             |
| api_key  | str  |             |
| base_url | str  |             |

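The attribute table above maps naturally onto a frozen dataclass; a minimal sketch (the real class definition may differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderConfig:
    # Sketch mirroring the documented attributes; frozen so a parsed
    # config cannot be mutated after creation.
    provider: str
    model: str
    api_key: str
    base_url: str
```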
Functions

parse_provider
def parse_provider(provider_string: str, /, *, api_key: str | None = None) -> ProviderConfig

Parse a ``provider:model`` string into a config.

API keys are resolved in order:

1. Explicit ``api_key`` parameter
2. Environment variable (``ANTHROPIC_API_KEY`` / ``OPENAI_API_KEY``)
Parameters

| Name            | Type        | Description       |
|-----------------|-------------|-------------------|
| provider_string | str         |                   |
| api_key         | str \| None | Default: ``None`` |
Returns
ProviderConfig
_get_httpx
Import httpx or raise a clear error.
def _get_httpx() -> Any
Returns
Any
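A common shape for such a guarded import, as a sketch (the real function's error wording will differ):

```python
from typing import Any

def get_httpx() -> Any:
    # Lazy-import pattern: the module itself loads even when httpx is
    # absent; the dependency is only required when a provider is called.
    try:
        import httpx
    except ImportError as exc:
        raise ImportError(
            "httpx is required for LLM provider calls; "
            "install it with 'pip install httpx'"
        ) from exc
    return httpx
```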
anthropic_generate
Generate a complete response from Anthropic's Messages API.
async def anthropic_generate(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0, system: str | None = None) -> str
Parameters

| Name        | Type                 | Description       |
|-------------|----------------------|-------------------|
| config      | ProviderConfig       |                   |
| messages    | list[dict[str, str]] |                   |
| max_tokens  | int                  | Default: ``4096`` |
| temperature | float                | Default: ``0.0``  |
| system      | str \| None          | Default: ``None`` |
Returns
str
anthropic_stream
Stream text tokens from Anthropic's Messages API.
async def anthropic_stream(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0, system: str | None = None) -> AsyncIterator[str]
Parameters

| Name        | Type                 | Description       |
|-------------|----------------------|-------------------|
| config      | ProviderConfig       |                   |
| messages    | list[dict[str, str]] |                   |
| max_tokens  | int                  | Default: ``4096`` |
| temperature | float                | Default: ``0.0``  |
| system      | str \| None          | Default: ``None`` |
Returns
AsyncIterator[str]
openai_generate
Generate a complete response from OpenAI's Chat Completions API.
async def openai_generate(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0) -> str
Parameters

| Name        | Type                 | Description       |
|-------------|----------------------|-------------------|
| config      | ProviderConfig       |                   |
| messages    | list[dict[str, str]] |                   |
| max_tokens  | int                  | Default: ``4096`` |
| temperature | float                | Default: ``0.0``  |
Returns
str
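In a Chat Completions response the generated text sits at ``choices[0].message.content``; a sketch of the extraction (hypothetical helper):

```python
def extract_chat_completion_text(response_json: dict) -> str:
    # Non-streaming Chat Completions responses carry the assistant
    # message at choices[0].message.content.
    return response_json["choices"][0]["message"]["content"]
```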
openai_stream
Stream text tokens from OpenAI's Chat Completions API.
async def openai_stream(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0) -> AsyncIterator[str]
Parameters

| Name        | Type                 | Description       |
|-------------|----------------------|-------------------|
| config      | ProviderConfig       |                   |
| messages    | list[dict[str, str]] |                   |
| max_tokens  | int                  | Default: ``4096`` |
| temperature | float                | Default: ``0.0``  |
Returns
AsyncIterator[str]