
LLM: add Volcengine and BytePlus presets #240

Open
Maaannnn wants to merge 2 commits into aiming-lab:main from Maaannnn:feat/volcano
Conversation

@Maaannnn

Summary

  • add support for four providers: volcengine, volcengine-coding-plan, byteplus, and byteplus-coding-plan
  • wire the new presets into researchclaw init, the quickstart wizard, and the example config
  • document the new provider/env-var options in the English and Chinese READMEs
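The preset-driven setup described above can be sketched as a simple provider table. The four provider names come from this PR; the base URLs, env-var names, and model IDs below are illustrative assumptions, not values copied from the diff.

```python
# Hedged sketch of a preset table for the four new providers.
# All base URLs, env-var names, and model IDs here are assumed
# placeholders; the real values live in the provider contract.
PRESETS = {
    "volcengine": {
        "base_url": "https://ark.cn-beijing.volces.com/api/v3",  # assumed
        "api_key_env": "ARK_API_KEY",                            # assumed
        "primary_model": "example-volcengine-model",             # placeholder
        "fallback_models": ["example-volcengine-fallback"],      # placeholder
    },
    "volcengine-coding-plan": {
        "base_url": "https://ark.cn-beijing.volces.com/api/v3",  # assumed
        "api_key_env": "ARK_API_KEY",                            # assumed
        "primary_model": "example-coding-model",                 # placeholder
        "fallback_models": [],
    },
    "byteplus": {
        "base_url": "https://ark.ap-southeast.bytepluses.com/api/v3",  # assumed
        "api_key_env": "BYTEPLUS_API_KEY",                             # assumed
        "primary_model": "example-byteplus-model",                     # placeholder
        "fallback_models": [],
    },
    "byteplus-coding-plan": {
        "base_url": "https://ark.ap-southeast.bytepluses.com/api/v3",  # assumed
        "api_key_env": "BYTEPLUS_API_KEY",                             # assumed
        "primary_model": "example-coding-model",                       # placeholder
        "fallback_models": [],
    },
}
```

Each entry carries everything the wizard needs to fill in (base URL, API key env var, primary model, fallback models), so adding a provider is one dict entry rather than new plumbing.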

Behavior Changes

  • researchclaw init now shows the four new providers after minimax, using the canonical base URLs and model catalogs from the provider contract
  • the quickstart wizard now offers the same provider family and fills in base URL, API key env var, primary model, and fallback models for named providers
  • LLMClient.from_rc_config() now resolves the Volcengine and BytePlus presets without requiring manual base URL entry
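The resolution behavior in the last bullet can be sketched as a preset lookup merged with the user's config, explicit keys winning. `resolve_provider` and the inline table are hypothetical illustrations, not the actual `LLMClient.from_rc_config()` internals.

```python
# Hedged sketch: resolve a named preset, letting explicit config
# override preset values. URLs and env-var names are assumptions.
PRESETS = {
    "volcengine": {
        "base_url": "https://ark.cn-beijing.volces.com/api/v3",  # assumed
        "api_key_env": "ARK_API_KEY",                            # assumed
    },
    "byteplus": {
        "base_url": "https://ark.ap-southeast.bytepluses.com/api/v3",  # assumed
        "api_key_env": "BYTEPLUS_API_KEY",                             # assumed
    },
}

def resolve_provider(config: dict) -> dict:
    """Merge a named preset with user config; explicit keys win."""
    preset = PRESETS.get(config.get("provider"), {})
    merged = {**preset, **{k: v for k, v in config.items() if v is not None}}
    # Without a matching preset, the user must still supply a base URL.
    if "base_url" not in merged:
        raise ValueError("unknown provider and no base_url given")
    return merged
```

Under this scheme a user who names a known provider gets the preset base URL for free, while a manual `base_url` entry still takes precedence for self-hosted or proxied endpoints.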

Codebase Search

  • searched for provider registration and config entry points with rg before editing
  • confirmed this repo uses preset-driven LLM config rather than a registry/plugin provider system

Tests

  • .venv/bin/pytest tests/test_rc_cli.py tests/test_volcano_provider.py tests/test_minimax_provider.py
  • .venv/bin/python -m py_compile researchclaw/llm/__init__.py researchclaw/cli.py researchclaw/wizard/quickstart.py tests/test_rc_cli.py tests/test_volcano_provider.py

Risks

  • only the main README and docs/README_CN.md were updated; the other translated READMEs still mention the older provider list
  • no live provider API calls were run because this branch does not include real provider credentials
