mirror of
https://gitea.toothfairyai.com/ToothFairyAI/tf_code.git
synced 2026-04-04 16:13:11 +00:00
fix(core): add chunk timeout when processing llm stream (#16366)
@@ -244,7 +244,7 @@ You can configure the providers and models you want to use in your OpenCode conf
 
 The `small_model` option configures a separate model for lightweight tasks like title generation. By default, OpenCode tries to use a cheaper model if one is available from your provider, otherwise it falls back to your main model.
 
-Provider options can include `timeout` and `setCacheKey`:
+Provider options can include `timeout`, `chunkTimeout`, and `setCacheKey`:
 
 ```json title="opencode.json"
 {
@@ -253,6 +253,7 @@ Provider options can include `timeout` and `setCacheKey`:
   "anthropic": {
     "options": {
       "timeout": 600000,
+      "chunkTimeout": 30000,
       "setCacheKey": true
     }
   }
@@ -261,6 +262,7 @@ Provider options can include `timeout` and `setCacheKey`:
 ```
 
 - `timeout` - Request timeout in milliseconds (default: 300000). Set to `false` to disable.
+- `chunkTimeout` - Timeout in milliseconds between streamed response chunks. If no chunk arrives in time, the request is aborted.
 - `setCacheKey` - Ensure a cache key is always set for designated provider.
 
 You can also configure [local models](/docs/models#local). [Learn more](/docs/models).
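A per-chunk timeout of the kind this commit describes can be sketched as a wrapper around an async stream. This is a minimal TypeScript illustration under stated assumptions, not OpenCode's actual implementation; the helper name `withChunkTimeout` is hypothetical:

```typescript
// Minimal sketch: wrap an async stream so that if the next chunk does not
// arrive within `chunkTimeout` milliseconds, iteration aborts with an error.
async function* withChunkTimeout<T>(
  stream: AsyncIterable<T>,
  chunkTimeout: number,
): AsyncGenerator<T> {
  const it = stream[Symbol.asyncIterator]();
  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    // Rejects if the timer fires before the next chunk arrives.
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error(`no chunk received within ${chunkTimeout}ms`)),
        chunkTimeout,
      );
    });
    try {
      // Race the next chunk against the per-chunk deadline.
      const result = await Promise.race([it.next(), timeout]);
      if (result.done) return;
      yield result.value;
    } finally {
      // Clear the timer whether a chunk arrived or the stream ended,
      // so the pending rejection never fires.
      clearTimeout(timer);
    }
  }
}
```

The deadline resets on every chunk, so a long-running stream is fine as long as chunks keep flowing; only a stalled stream is aborted. This differs from `timeout`, which bounds the whole request.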
|
||||
|
||||