Classes
ProviderConfig

Parsed provider configuration.
Attributes

| Name | Type | Description |
|---|---|---|
| provider | str | — |
| model | str | — |
| api_key | str | — |
| base_url | str | — |
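
A minimal sketch of the shape these attributes describe, assuming a plain dataclass-style container; the real class definition may differ:

```python
from dataclasses import dataclass

@dataclass
class ProviderConfig:
    """Sketch of the documented fields; the actual class may differ."""
    provider: str  # backend name, e.g. "anthropic" or "openai"
    model: str     # model identifier forwarded to the API
    api_key: str   # resolved API key
    base_url: str  # base URL of the provider's HTTP API
```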
Functions
parse_provider

def parse_provider(provider_string: str, /, *, api_key: str | None = None) -> ProviderConfig

Parse a ``provider:model`` string into a config.
API keys are resolved in order:
1. Explicit ``api_key`` parameter
2. Environment variable (``ANTHROPIC_API_KEY`` / ``OPENAI_API_KEY``)
Parameters

| Name | Type | Description |
|---|---|---|
| provider_string | str | — |
| api_key | str \| None | Default: None |
Returns
ProviderConfig
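
A brief usage sketch; the model names are illustrative, and importing ``parse_provider`` from this module is assumed:

```python
import os

# Assumes parse_provider is imported from the module documented here.

# An explicit key takes precedence over the environment.
cfg = parse_provider("anthropic:claude-sonnet", api_key="sk-ant-example")

# Without an explicit key, the matching environment variable is consulted.
os.environ.setdefault("OPENAI_API_KEY", "sk-example")
cfg = parse_provider("openai:gpt-4o")
print(cfg.provider, cfg.model, cfg.base_url)
```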
_get_httpx

def _get_httpx() -> Any

Import httpx or raise a clear error.
Returns
Any
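
A sketch of the usual lazy-import guard this helper's summary describes; the exact error type and message here are assumptions:

```python
from typing import Any

def _get_httpx() -> Any:
    """Lazily import httpx so the HTTP dependency stays optional."""
    try:
        import httpx
    except ImportError as exc:
        raise RuntimeError(
            "httpx is required for provider API calls; install it with `pip install httpx`"
        ) from exc
    return httpx
```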
anthropic_generate

async def anthropic_generate(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0, system: str | None = None) -> str

Generate a complete response from Anthropic's Messages API.
Parameters

| Name | Type | Description |
|---|---|---|
| config | ProviderConfig | — |
| messages | list[dict[str, str]] | — |
| max_tokens | int | Default: 4096 |
| temperature | float | Default: 0.0 |
| system | str \| None | Default: None |
Returns
str
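
A usage sketch, reusing ``parse_provider`` from above; the model name and prompt are illustrative:

```python
import asyncio

# Assumes parse_provider and anthropic_generate are imported from this module.

async def main() -> None:
    cfg = parse_provider("anthropic:claude-sonnet")  # key resolved from ANTHROPIC_API_KEY
    reply = await anthropic_generate(
        cfg,
        [{"role": "user", "content": "Summarize HTTP keep-alive in one sentence."}],
        max_tokens=256,
        temperature=0.2,
        system="Answer tersely.",
    )
    print(reply)

asyncio.run(main())
```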
anthropic_stream

async def anthropic_stream(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0, system: str | None = None) -> AsyncIterator[str]

Stream text tokens from Anthropic's Messages API.
Parameters

| Name | Type | Description |
|---|---|---|
| config | ProviderConfig | — |
| messages | list[dict[str, str]] | — |
| max_tokens | int | Default: 4096 |
| temperature | float | Default: 0.0 |
| system | str \| None | Default: None |
Returns
AsyncIterator[str]
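
A streaming sketch, assuming the function behaves as an async generator that can be iterated directly with ``async for``; if it instead returns the iterator from a coroutine, an extra ``await`` is needed:

```python
import asyncio

# Assumes parse_provider and anthropic_stream are imported from this module.

async def main() -> None:
    cfg = parse_provider("anthropic:claude-sonnet")
    async for token in anthropic_stream(
        cfg,
        [{"role": "user", "content": "Write a haiku about sockets."}],
        max_tokens=128,
    ):
        print(token, end="", flush=True)
    print()

asyncio.run(main())
```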
openai_generate

async def openai_generate(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0) -> str

Generate a complete response from OpenAI's Chat Completions API.
Parameters

| Name | Type | Description |
|---|---|---|
| config | ProviderConfig | — |
| messages | list[dict[str, str]] | — |
| max_tokens | int | Default: 4096 |
| temperature | float | Default: 0.0 |
Returns
str
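
A usage sketch; unlike ``anthropic_generate`` there is no ``system`` keyword, so a system prompt is passed as a regular chat message (model name illustrative):

```python
import asyncio

# Assumes parse_provider and openai_generate are imported from this module.

async def main() -> None:
    cfg = parse_provider("openai:gpt-4o")  # key resolved from OPENAI_API_KEY
    reply = await openai_generate(
        cfg,
        [
            # No `system` keyword on this function; put the system message in the list.
            {"role": "system", "content": "Answer tersely."},
            {"role": "user", "content": "Name three HTTP methods."},
        ],
        max_tokens=64,
    )
    print(reply)

asyncio.run(main())
```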
openai_stream

async def openai_stream(config: ProviderConfig, messages: list[dict[str, str]], *, max_tokens: int = 4096, temperature: float = 0.0) -> AsyncIterator[str]

Stream text tokens from OpenAI's Chat Completions API.
Parameters

| Name | Type | Description |
|---|---|---|
| config | ProviderConfig | — |
| messages | list[dict[str, str]] | — |
| max_tokens | int | Default: 4096 |
| temperature | float | Default: 0.0 |
Returns
AsyncIterator[str]
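
Because the two stream functions share a call shape, callers can dispatch on ``config.provider``; the helper below is an illustration, not part of the module, and assumes the provider field holds ``"anthropic"`` or ``"openai"``:

```python
from typing import AsyncIterator

# Assumes ProviderConfig, anthropic_stream, and openai_stream are imported from this module.

async def stream_any(
    cfg: ProviderConfig, messages: list[dict[str, str]]
) -> AsyncIterator[str]:
    """Illustrative helper: route to the matching provider's stream function."""
    if cfg.provider == "anthropic":
        stream = anthropic_stream(cfg, messages)
    elif cfg.provider == "openai":
        stream = openai_stream(cfg, messages)
    else:
        raise ValueError(f"unsupported provider: {cfg.provider!r}")
    async for token in stream:
        yield token
```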