Merged
Codecov Report

```
@@            Coverage Diff             @@
##             main    #1807      +/-   ##
==========================================
- Coverage   70.74%   70.44%   -0.30%
==========================================
  Files         176      176
  Lines       30426    30578    +152
==========================================
+ Hits        21526    21542     +16
- Misses       8900     9036    +136
```

☔ View full report in Codecov by Sentry.
ahuang11 pushed a commit that referenced this pull request on Apr 13, 2026.
Description
Adds support for the OpenAI Responses API to the Lumen OpenAI LLM wrapper.
The existing integration is based on the Chat Completions API, which is now a legacy interface with limitations around structured outputs, tool usage, and multi-step interactions. The Responses API provides a more unified and extensible abstraction that better aligns with Lumen’s typed, agent-driven execution model.
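To make the shape of the change concrete, here is a minimal sketch of how a Chat Completions-style message list maps onto the Responses API's arguments. The helper `to_responses_kwargs` is illustrative only, not part of Lumen's actual wrapper; the key difference it shows is that the Responses API takes a top-level `instructions` string rather than a system message.

```python
# Hypothetical helper sketching the call-shape change; not Lumen's real code.

def to_responses_kwargs(messages):
    """Map Chat Completions-style messages to Responses API arguments.

    The system message becomes the top-level `instructions` string,
    and the remaining turns become the `input` list.
    """
    instructions = None
    input_items = []
    for msg in messages:
        if msg["role"] == "system":
            instructions = msg["content"]
        else:
            input_items.append({"role": msg["role"], "content": msg["content"]})
    return {"instructions": instructions, "input": input_items}


# With the OpenAI SDK this would be used roughly as:
#   client.responses.create(model="gpt-4o", **to_responses_kwargs(messages))
# versus the legacy interface:
#   client.chat.completions.create(model="gpt-4o", messages=messages)
```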
Motivation
Supporting the Responses API enables:
First-class tool calling
The Responses API natively supports multiple tool calls within a single response and more flexible tool invocation patterns. This maps cleanly to Lumen’s agent architecture, where tools and agents may be composed and invoked iteratively.
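As a sketch of what "multiple tool calls within a single response" looks like: a Responses API result carries an `output` list whose items include `function_call` entries, each with a `name`, JSON-encoded `arguments`, and a `call_id`. The helper below uses plain dicts in place of the SDK's typed objects and is illustrative, not Lumen's actual dispatch code.

```python
import json

# Illustrative sketch: one response.output list can carry several
# function_call items; dicts stand in for the SDK's typed objects.

def extract_tool_calls(output_items):
    """Collect every function call requested in a single response."""
    calls = []
    for item in output_items:
        if item.get("type") == "function_call":
            calls.append(
                {
                    "call_id": item["call_id"],
                    "name": item["name"],
                    "arguments": json.loads(item["arguments"]),
                }
            )
    return calls
```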
Structured, typed outputs
The API is designed to work well with structured outputs, improving reliability when generating artifacts such as SQL queries, Vega-Lite specs, and intermediate agent outputs.
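A brief sketch of how a typed artifact such as a SQL query could be requested via structured outputs. The schema shape follows the Responses API's `text.format` parameter; the field names (`query`, `explanation`) are illustrative assumptions, not Lumen's actual output models.

```python
import json

# Illustrative JSON schema for a structured SQL artifact (field names assumed).
SQL_QUERY_SCHEMA = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "explanation": {"type": "string"},
    },
    "required": ["query", "explanation"],
    "additionalProperties": False,
}

def sql_output_format():
    """Build the `text` argument for client.responses.create(...)."""
    return {
        "format": {
            "type": "json_schema",
            "name": "sql_query",
            "schema": SQL_QUERY_SCHEMA,
            "strict": True,
        }
    }

def parse_sql_output(output_text):
    """Parse the model's JSON reply and check the required keys are present."""
    data = json.loads(output_text)
    missing = [k for k in SQL_QUERY_SCHEMA["required"] if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data
```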
Unified interface across modalities and workflows
A single API handles text generation, tool use, and future extensions, reducing fragmentation in the LLM layer.
Improved control over execution flow
Responses can represent multi-step reasoning and tool usage in a single object, making it easier to integrate with Lumen’s execution pipeline and maintain visibility into intermediate steps.
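To illustrate the multi-step pattern: after executing a requested tool, the result is sent back to the model as a `function_call_output` input item keyed by `call_id`. The loop below is a sketch under the assumption of a plain `tools` dict of Python callables, with dicts standing in for SDK objects; it is not Lumen's actual execution pipeline.

```python
import json

def make_tool_result(call_id, result):
    """Build the follow-up input item carrying a tool's result."""
    return {
        "type": "function_call_output",
        "call_id": call_id,
        "output": json.dumps(result),
    }

def step(output_items, tools):
    """Execute every tool call in one response; return the follow-up items."""
    follow_ups = []
    for item in output_items:
        if item.get("type") == "function_call":
            fn = tools[item["name"]]
            result = fn(**json.loads(item["arguments"]))
            follow_ups.append(make_tool_result(item["call_id"], result))
    return follow_ups

# In the real loop, the follow-up items would be appended to the input and
# the model re-invoked, roughly:
#   response = client.responses.create(
#       model=..., input=input_items + follow_ups, tools=...
#   )
```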
Forward compatibility
The Responses API is the primary interface going forward, ensuring continued compatibility with new OpenAI features and reducing reliance on deprecated endpoints.
Summary of Changes
How Has This Been Tested?
Verified compatibility with existing agents (e.g. SQL and visualization flows) using the Responses API backend.
Manually tested end-to-end flows to confirm correct behavior.
AI Disclosure
This PR contains AI-generated content.
Tools and Models: Cursor + Codex 5.3