fix(core): add a chunk timeout when processing llm stream (#16366)

James Long
2026-03-10 11:12:14 -04:00
committed by GitHub
parent 4c4aed5a87
commit 69ddc91c35
4 changed files with 77 additions and 10 deletions


@@ -244,7 +244,7 @@ You can configure the providers and models you want to use in your OpenCode conf
 The `small_model` option configures a separate model for lightweight tasks like title generation. By default, OpenCode tries to use a cheaper model if one is available from your provider, otherwise it falls back to your main model.
-Provider options can include `timeout` and `setCacheKey`:
+Provider options can include `timeout`, `chunkTimeout`, and `setCacheKey`:
 ```json title="opencode.json"
 {
@@ -253,6 +253,7 @@ Provider options can include `timeout` and `setCacheKey`:
"anthropic": {
"options": {
"timeout": 600000,
"chunkTimeout": 30000,
"setCacheKey": true
}
}
@@ -261,6 +262,7 @@ Provider options can include `timeout` and `setCacheKey`:
 ```
 - `timeout` - Request timeout in milliseconds (default: 300000). Set to `false` to disable.
+- `chunkTimeout` - Timeout in milliseconds between streamed response chunks. If no chunk arrives in time, the request is aborted.
 - `setCacheKey` - Ensure a cache key is always set for the designated provider.
 You can also configure [local models](/docs/models#local). [Learn more](/docs/models).
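
As a rough illustration of the behavior the new `chunkTimeout` option describes, here is a minimal TypeScript sketch of a per-chunk timeout over a streamed LLM response. The helper names (`withChunkTimeout`, `ChunkTimeoutError`) and the generic `AsyncIterable` shape are assumptions for illustration, not OpenCode's actual internals: each chunk is raced against a timer, and if the timer fires first the in-flight request is aborted.

```ts
// Hypothetical sketch only: `withChunkTimeout` and `ChunkTimeoutError` are
// illustrative names, not OpenCode's actual internals.

/** Thrown when the gap between two streamed chunks exceeds the limit. */
export class ChunkTimeoutError extends Error {
  constructor(ms: number) {
    super(`no chunk received within ${ms}ms`)
    this.name = "ChunkTimeoutError"
  }
}

/**
 * Re-yields `stream`, requiring each chunk to arrive within `ms`
 * milliseconds of the previous one. If the timer fires first, the
 * underlying request is aborted and a ChunkTimeoutError is thrown.
 */
export async function* withChunkTimeout<T>(
  stream: AsyncIterable<T>,
  ms: number,
  abort: AbortController,
): AsyncGenerator<T> {
  const it = stream[Symbol.asyncIterator]()
  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined
    const deadline = new Promise<never>((_, reject) => {
      timer = setTimeout(() => {
        abort.abort() // cancel the in-flight provider request
        reject(new ChunkTimeoutError(ms))
      }, ms)
    })
    try {
      // Race the next chunk against the per-chunk timer.
      const next = await Promise.race([it.next(), deadline])
      if (next.done) return
      yield next.value
    } finally {
      clearTimeout(timer) // reset the clock for the next chunk
    }
  }
}
```

Under this sketch, the configuration above would correspond to wrapping the provider stream with `ms = 30000`, so a 30-second gap between chunks aborts the request even while the overall 600000 ms `timeout` has not yet elapsed.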