Commit fe5531e

Merge pull request #151 from bio-xyz/fix/gpt-5-model-updates

Updated models to gpt-5.4 and fixed reasoning_effort parameter

2 parents: 530b20b + 6a38eb8

6 files changed: 34 additions & 24 deletions

.env.example

Lines changed: 6 additions & 4 deletions

```diff
@@ -33,23 +33,25 @@ PRIMARY_LITERATURE_AGENT=
 BIO_LIT_AGENT_API_URL=
 BIO_LIT_AGENT_API_KEY=
 REPLY_LLM_PROVIDER=openai
-REPLY_LLM_MODEL=gpt-5
+REPLY_LLM_MODEL=gpt-5.4
 
 HYP_LLM_PROVIDER=openai
-HYP_LLM_MODEL=gpt-5
+HYP_LLM_MODEL=gpt-5.4
 
 PLANNING_LLM_PROVIDER=openai
-PLANNING_LLM_MODEL=gpt-5
+PLANNING_LLM_MODEL=gpt-5.4
 
 STRUCTURED_LLM_PROVIDER=openai
-STRUCTURED_LLM_MODEL=gpt-5
+STRUCTURED_LLM_MODEL=gpt-5.4
 
 
 GOOGLE_API_KEY=
 ANTHROPIC_API_KEY=
 OPENROUTER_API_KEY=
 
 # Chat Agent Loop (shared config for in-process and queue mode)
+# Note: chat mode currently uses the Anthropic SDK directly, so this model is
+# not provider-agnostic like the deep-research agents above.
 CHAT_AGENT_MODEL=claude-sonnet-4-6
 CHAT_AGENT_MAX_TOOL_CALLS=10
 CHAT_AGENT_MAX_TOKENS=4096
```
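The chat-agent block above is a model id plus two integer limits. A minimal sketch of how such a block might be parsed with the defaults from this example file; `loadChatAgentConfig` and its shape are hypothetical, not the repo's actual code:

```typescript
// Hypothetical env parsing; defaults mirror .env.example, but this helper
// is illustrative and not part of the repository.
interface ChatAgentConfig {
  model: string;
  maxToolCalls: number;
  maxTokens: number;
}

function loadChatAgentConfig(
  env: Record<string, string | undefined>, // e.g. pass process.env
): ChatAgentConfig {
  // Parse an integer env var, falling back when unset or malformed.
  const int = (v: string | undefined, fallback: number): number => {
    const n = Number.parseInt(v ?? '', 10);
    return Number.isNaN(n) ? fallback : n;
  };
  return {
    model: env.CHAT_AGENT_MODEL ?? 'claude-sonnet-4-6',
    maxToolCalls: int(env.CHAT_AGENT_MAX_TOOL_CALLS, 10),
    maxTokens: int(env.CHAT_AGENT_MAX_TOKENS, 4096),
  };
}
```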

.env.worker.example

Lines changed: 8 additions & 8 deletions

```diff
@@ -33,17 +33,17 @@ OPENROUTER_API_KEY=
 # OPTIONAL: Agent Configuration
 # --------------------------------------------------------------------------
 # LLM providers: openai, google, anthropic, openrouter
-REPLY_LLM_PROVIDER=google
-REPLY_LLM_MODEL=gemini-2.5-pro
+REPLY_LLM_PROVIDER=openai
+REPLY_LLM_MODEL=gpt-5.4
 
-HYP_LLM_PROVIDER=google
-HYP_LLM_MODEL=gemini-2.5-pro
+HYP_LLM_PROVIDER=openai
+HYP_LLM_MODEL=gpt-5.4
 
-PLANNING_LLM_PROVIDER=google
-PLANNING_LLM_MODEL=gemini-2.5-flash
+PLANNING_LLM_PROVIDER=openai
+PLANNING_LLM_MODEL=gpt-5.4
 
-STRUCTURED_LLM_PROVIDER=google
-STRUCTURED_LLM_MODEL=gemini-2.5-flash
+STRUCTURED_LLM_PROVIDER=openai
+STRUCTURED_LLM_MODEL=gpt-5.4
 
 # --------------------------------------------------------------------------
 # OPTIONAL: Literature & Analysis Services
```

documentation/content/getting-started/configuration.md

Lines changed: 8 additions & 4 deletions

````diff
@@ -36,21 +36,25 @@ Configure which models to use for different agents:
 ```bash
 # Reply generation
 REPLY_LLM_PROVIDER=openai
-REPLY_LLM_MODEL=gpt-4
+REPLY_LLM_MODEL=gpt-5.4
 
 # Hypothesis generation
 HYP_LLM_PROVIDER=openai
-HYP_LLM_MODEL=gpt-4
+HYP_LLM_MODEL=gpt-5.4
 
 # Planning
 PLANNING_LLM_PROVIDER=openai
-PLANNING_LLM_MODEL=gpt-4
+PLANNING_LLM_MODEL=gpt-5.4
 
 # Structured output
 STRUCTURED_LLM_PROVIDER=openai
-STRUCTURED_LLM_MODEL=gpt-4
+STRUCTURED_LLM_MODEL=gpt-5.4
 ```
 
+Deep Research agent selection is provider/model env-driven. The separate `/api/chat`
+agent loop currently uses the Anthropic SDK directly via `CHAT_AGENT_MODEL`, so moving
+that path to OpenAI requires code changes in addition to env changes.
+
 ## Embedding Configuration
 
 ```bash
````
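The pattern documented here is one provider/model pair of env vars per agent role. A minimal sketch of that resolution under the assumption that the role names mirror the env prefixes; `resolveLLM` is an illustrative name, not the repo's actual API:

```typescript
// Illustrative sketch of env-driven model selection; the repository's real
// resolution logic is not shown in this diff and may differ.
type AgentRole = 'REPLY' | 'HYP' | 'PLANNING' | 'STRUCTURED';

interface LLMSelection {
  provider: string; // openai, google, anthropic, or openrouter
  model: string;    // e.g. gpt-5.4
}

function resolveLLM(
  role: AgentRole,
  env: Record<string, string | undefined>, // e.g. pass process.env
): LLMSelection {
  // Fall back to the defaults this PR sets in .env.example.
  const provider = env[`${role}_LLM_PROVIDER`] ?? 'openai';
  const model = env[`${role}_LLM_MODEL`] ?? 'gpt-5.4';
  return { provider, model };
}
```

The `/api/chat` loop bypasses this kind of resolution entirely, which is why the added doc note singles out `CHAT_AGENT_MODEL` as Anthropic-only.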

documentation/docs/SETUP.md

Lines changed: 8 additions & 4 deletions

````diff
@@ -30,16 +30,16 @@ Configure which LLM provider to use for each agent:
 ```bash
 # Choose provider for each agent
 REPLY_LLM_PROVIDER=openai  # or google, anthropic, openrouter
-REPLY_LLM_MODEL=gpt-4o
+REPLY_LLM_MODEL=gpt-5.4
 
 HYP_LLM_PROVIDER=openai
-HYP_LLM_MODEL=gpt-4o
+HYP_LLM_MODEL=gpt-5.4
 
 PLANNING_LLM_PROVIDER=openai
-PLANNING_LLM_MODEL=gpt-4o
+PLANNING_LLM_MODEL=gpt-5.4
 
 STRUCTURED_LLM_PROVIDER=openai
-STRUCTURED_LLM_MODEL=gpt-4o
+STRUCTURED_LLM_MODEL=gpt-5.4
 
 # Add API keys for your chosen providers
 OPENAI_API_KEY=sk-...
@@ -48,6 +48,10 @@ ANTHROPIC_API_KEY=...  # If using Anthropic
 OPENROUTER_API_KEY=...  # If using OpenRouter
 ```
 
+Deep Research agent selection is provider/model env-driven. The separate `/api/chat`
+agent loop currently uses the Anthropic SDK directly via `CHAT_AGENT_MODEL`, so moving
+that path to OpenAI requires code changes in addition to env changes.
+
 **Database:**
 
 ```bash
````

src/llm/adapters/openai.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -238,10 +238,10 @@ export class OpenAIAdapter extends LLMAdapter {
       openaiRequest.temperature = request.temperature;
     }
 
-    // Map thinkingBudget to reasoning effort for GPT-5.2+ and o-series models
+    // Map thinkingBudget to Chat Completions reasoning_effort for GPT-5 family models.
     const reasoningEffort = this.mapThinkingBudgetToReasoningEffort(request.thinkingBudget);
     if (reasoningEffort) {
-      (openaiRequest as any).reasoning = { effort: reasoningEffort };
+      (openaiRequest as any).reasoning_effort = reasoningEffort;
     }
 
     const mappedTools = this.mapToolsForChat(request.tools);
```
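The fix swaps the nested Responses-style `reasoning: { effort }` object for the flat `reasoning_effort` string that Chat Completions expects. The helper `mapThinkingBudgetToReasoningEffort` is referenced but not shown in this diff; a minimal sketch of what such a mapping could look like, where the token thresholds are assumptions rather than the repo's actual values:

```typescript
// Hypothetical sketch of the helper referenced in the diff; the threshold
// values are assumed, not taken from src/llm/adapters/openai.ts.
type ReasoningEffort = 'low' | 'medium' | 'high';

function mapThinkingBudgetToReasoningEffort(
  thinkingBudget?: number,
): ReasoningEffort | undefined {
  if (thinkingBudget === undefined || thinkingBudget <= 0) return undefined;
  if (thinkingBudget <= 1024) return 'low';    // small budgets -> low effort
  if (thinkingBudget <= 8192) return 'medium'; // mid-range budgets
  return 'high';                               // anything larger
}

// Chat Completions takes a flat `reasoning_effort` string; nesting it as
// `reasoning: { effort }` (the Responses API shape) is the bug this PR fixes.
function applyReasoningEffort(
  request: Record<string, unknown>,
  thinkingBudget?: number,
): void {
  const effort = mapThinkingBudgetToReasoningEffort(thinkingBudget);
  if (effort) {
    request.reasoning_effort = effort;
  }
}
```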

src/llm/test/test-all-adapters.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -116,8 +116,8 @@ async function run(): Promise<void> {
       displayName: 'OpenAI',
       apiKey: openaiKey,
       baseUrl: process.env.OPENAI_BASE_URL,
-      chatModel: process.env.OPENAI_CHAT_MODEL ?? 'gpt-4o-mini',
-      webSearchModel: process.env.OPENAI_WEB_SEARCH_MODEL ?? 'gpt-5',
+      chatModel: process.env.OPENAI_CHAT_MODEL ?? 'gpt-5.4',
+      webSearchModel: process.env.OPENAI_WEB_SEARCH_MODEL ?? 'gpt-5.4',
       chatRequest: {
         systemInstruction:
           'You are a concise assistant that replies in one sentence, ending each sentence with "hahaha".',
```
