Conversation
- Add rate limiting middleware with configurable presets
- Cache Cloudflare Workers app instance to avoid recreation per request
- Fix timeout middleware memory leak by cleaning up event listeners
- Remove duplicate callGradioApi implementation, consolidate in utils
- Sanitize error logs to prevent sensitive data leakage
- Enhance Provider Registry with type-safe satisfies pattern
- Add size limit (10MB) and streaming to image proxy endpoint
- Add Retry-After header for video task polling guidance
- Extract magic numbers to named constants (MAX_INT32, etc.)
- Add Zod validation schemas and middleware helpers
- Add OpenAPI/Swagger documentation infrastructure

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

- Fix PromptCard Switch className bug (invalid conditional string)
- Add response.ok check for proxy fetch in useImageGenerator
- Refactor api.ts with shared apiRequest wrapper, reducing ~150 lines of duplicate code

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

- Refactor api.ts to use shared runWithTokenRotation function, reducing ~100 lines of duplicate token rotation logic
- Add basic token format validation in parseTokens (min 8 chars, alphanumeric with common special chars)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

- Create shared SettingsModal component with tab-based UI
- Add API Config and LLM tabs with direct content display
- Persist selected tab to localStorage across page refreshes
- Update Header component to match FlowPageV2 style
- Reuse SettingsModal in both ImageGenerator and FlowPageV2

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

- Add 'custom' provider type for OpenAI-compatible APIs
- Create custom LLM provider backend implementation
- Add /api/custom-models endpoint to fetch available models
- Update SettingsModal with custom provider config UI
- Auto-fetch models when baseUrl and apiKey are filled
- Show model selector dropdown when models are available
- Support separate custom configs for optimize and translate
- Add refresh button to manually re-fetch models

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

- Add i18n to Header.tsx (subtitle, flowMode)
- Add i18n to PromptCard.tsx (all labels, buttons, placeholders)
- Add i18n to ImageResultCard.tsx (all UI text, tooltips, toasts)
- Add i18n to StatusCard.tsx (title)
- Add translation keys to en.json and zh.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Deploying with Cloudflare Workers Builds

| Status | Name | Latest Commit | Preview URL | Updated (UTC) |
|---|---|---|---|---|
| ✅ Deployment successful! View logs | zenith-image-generator-api | c27f7fd | Commit Preview URL · Branch Preview URL | Dec 17 2025, 01:51 PM |
✅ Deploy Preview for zenith-image-generator ready!
Caution: Review failed. The pull request is closed.

Walkthrough

Adds custom OpenAI-compatible LLM provider support, centralized API constants and Zod schemas, sliding-window rate limiting and validation middleware, OpenAPI/Swagger endpoints, enhanced provider error parsing, client-side token rotation and custom model discovery, widespread i18n, a shared SettingsModal, and removal of legacy flow pages.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client
    participant API as API Server
    participant Validate as Zod Middleware
    participant RateLimit as RateLimit Middleware
    participant Custom as CustomLLMProvider
    participant Response

    Client->>API: POST /optimize { provider, customConfig?, prompt }
    API->>RateLimit: apply preset
    API->>Validate: validateJson(OptimizeRequestSchema)
    alt invalid
        Validate-->>Client: 400 Invalid params
    else
        Validate->>API: pass
        API->>Custom: validate customConfig (baseUrl, apiKey, model)
        alt invalid customConfig
            Custom-->>Client: 400 Invalid customConfig
        else
            Custom->>Custom: build /v1/chat/completions request
            Custom->>Custom: POST (apiKey auth)
            alt HTTP error (401/429/402)
                Custom-->>Response: map to auth/rate/quota error
                Response-->>Client: error (401/429/402)
            else success
                Custom->>API: { content, model }
                API-->>Client: 200 { optimized, provider: 'custom', model }
            end
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
📜 Recent review details

Configuration used: defaults
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (4)
- Update tokenRotation tests to use valid token formats (min 8 chars)
- Fix wrangler installation in GitHub Actions by pre-installing with -w flag
- Remove wranglerVersion parameter as wrangler is now installed manually

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Deploying zenith-image-generator with Cloudflare Pages

| | |
|---|---|
| Latest commit: | c27f7fd |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://8ee65079.zenith-image-generator.pages.dev |
| Branch Preview URL: | https://dev.zenith-image-generator.pages.dev |

🚀 Preview Deployment
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/web/src/pages/FlowPageV2.tsx (1)
237-252: Add missing 'custom' case and pass customConfig to optimizePrompt

The switch statement (lines 237-252) doesn't handle the 'custom' provider case, and the `optimizePrompt` call doesn't pass `customConfig`. Without this, custom provider validation in the API will fail.

Update the switch to include the custom case:

```diff
       case 'deepseek':
         token = await decryptTokenFromStore('deepseek')
         break
+      case 'custom':
+        // Custom provider uses apiKey from customConfig, not authToken
+        break
```

And pass customConfig to the API call:

```diff
 const result = await optimizePrompt(
   {
     prompt,
     provider: llmSettings.llmProvider,
-    model: llmSettings.llmModel,
+    model: llmSettings.llmProvider === 'custom'
+      ? llmSettings.customOptimizeConfig.model
+      : llmSettings.llmModel,
     lang: 'en',
     systemPrompt: getEffectiveSystemPrompt(llmSettings.customSystemPrompt),
+    customConfig: llmSettings.llmProvider === 'custom'
+      ? llmSettings.customOptimizeConfig
+      : undefined,
   },
   token
 )
```
🧹 Nitpick comments (18)
apps/web/src/hooks/usePromptTranslate.ts (1)
54-88: Consider extracting callback dependencies to reduce re-renders.

The `useCallback` dependency array includes the entire `options` object, which may cause the `translate` callback to be recreated on every render if the parent component doesn't memoize `options`.

Consider extracting the individual callbacks as dependencies:

```diff
 const translate = useCallback(
   async (prompt: string): Promise<TranslateResponse | null> => {
     // ... implementation
   },
-  [options]
+  [options.onSuccess, options.onError]
 )
```

This ensures the callback only recreates when the actual callback references change, not when a new options object is created.
packages/shared/src/constants/llm-providers.ts (1)
97-98: Consider using optional fields for better type safety.

The empty strings for `url` and `defaultModel` satisfy TypeScript's type system but represent semantically invalid values. While functional with the current `customConfig` pattern, this could lead to subtle bugs if code assumes all providers in `LLM_PROVIDER_CONFIGS` have valid values.

Consider one of these approaches for better type safety:

Option 1: Make fields optional in the type

```ts
type LLMProviderConfig = {
  // ... other fields
  url?: string;
  defaultModel?: string;
}
```

Option 2: Use placeholder constants

```ts
const CUSTOM_PROVIDER_PLACEHOLDER = '__USER_PROVIDED__';

custom: {
  id: 'custom',
  name: 'Custom (OpenAI Compatible)',
  url: CUSTOM_PROVIDER_PLACEHOLDER,
  defaultModel: CUSTOM_PROVIDER_PLACEHOLDER,
  // ...
}
```

This makes the special case more explicit and easier to validate against.
apps/web/src/lib/tokenRotation.ts (2)
83-83: Remove redundant length check.

The condition `t.length > 0` is redundant since `isValidTokenFormat(t)` already enforces `token.length >= 8`.

Apply this diff to simplify the filter:

```diff
- .filter((t) => t.length > 0 && isValidTokenFormat(t))
+ .filter(isValidTokenFormat)
```

68-73: Consider adding user feedback for rejected tokens during parsing.

The validation regex `/^[a-zA-Z0-9_\-:.]+$/` is actually appropriate and covers common API token formats (including OpenAI's `sk-proj-...` format with hyphens). However, `parseTokens()` silently filters tokens without notifying users, which could make troubleshooting difficult if someone provides a token with unexpected characters or length < 8.

Consider logging rejected tokens or adding a console warning to help users diagnose configuration issues.
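For instance, such a warning inside `parseTokens` might be sketched as follows (the split logic and message are illustrative; `isValidTokenFormat` is the existing helper):

```ts
// Assumed existing helper from tokenRotation.ts
declare function isValidTokenFormat(token: string): boolean

// Hypothetical variant of parseTokens that surfaces rejected tokens.
export function parseTokens(raw: string): string[] {
  const candidates = raw
    .split(/[\n,]/) // illustrative delimiter choice
    .map((t) => t.trim())
    .filter((t) => t.length > 0)

  const valid = candidates.filter(isValidTokenFormat)
  const rejected = candidates.length - valid.length
  if (rejected > 0) {
    // Log only the count, never the token values themselves
    console.warn(`parseTokens: ignored ${rejected} malformed token(s)`)
  }
  return valid
}
```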
apps/web/src/pages/ImageGenerator.tsx (1)
120-139: Consider refactoring to reduce prop drilling.

The SettingsModal receives 18 props, which creates tight coupling and makes the component harder to maintain. Consider using a compound component pattern or passing a settings context object instead.

Example refactoring approach:

```tsx
<SettingsModal
  isOpen={showSettings}
  onClose={() => setShowSettings(false)}
  settings={{
    provider: { value: provider, models: availableModels, onChange: setProvider },
    model: { value: model, onChange: setModel },
    token: { current: currentToken, onSave: saveToken },
    llm: llmSettings,
    // ... group related settings
  }}
/>
```

apps/api/src/openapi/index.ts (1)
77-92: Consider gating Swagger UI behind environment check.

The Swagger UI endpoint is exposed unconditionally. While the comment mentions it "can be removed in production," it's safer to gate it behind an environment variable to prevent accidental exposure of API documentation in production.

Apply this diff:

```diff
 export function createOpenAPIApp() {
   const app = new OpenAPIHono()

   // Add OpenAPI documentation endpoint
   app.doc('/doc', openApiConfig)

-  // Add Swagger UI (optional, can be removed in production)
-  app.get(
-    '/ui',
-    swaggerUI({
-      url: '/api/doc',
-    })
-  )
+  // Add Swagger UI in development only
+  if (process.env.NODE_ENV !== 'production') {
+    app.get(
+      '/ui',
+      swaggerUI({
+        url: '/api/doc',
+      })
+    )
+  }

   return app
 }
```

apps/api/src/middleware/validate.ts (1)
54-91: Consider extracting common validation logic.

The three validation functions share identical error handling logic, differing only in the validated target ('json', 'query', 'param') and default field names. Extracting this logic would reduce duplication and improve maintainability.

Example refactoring:

```ts
function createValidator<T extends z.ZodType>(
  target: 'json' | 'query' | 'param',
  defaultField: string
) {
  return (schema: T) =>
    zValidator(target, schema, (result, c) => {
      if (!result.success) {
        const firstIssue = result.error.issues[0]
        const field = firstIssue?.path.join('.') || defaultField
        const message = firstIssue?.message || `Invalid ${defaultField}`
        return sendError(c, Errors.invalidParams(field, message))
      }
    })
}

export const validateJson = createValidator('json', 'body')
export const validateQuery = createValidator('query', 'query')
export const validateParam = createValidator('param', 'param')
```

apps/api/src/middleware/rate-limit.ts (1)
29-46: In-memory rate limiting won't work correctly in distributed Cloudflare Workers.

While the comment acknowledges this limitation, the in-memory Map will result in inconsistent rate limiting across multiple Worker instances. Each instance maintains its own state, allowing clients to bypass limits by hitting different instances.
For production deployment:
- Use Cloudflare Rate Limiting API (simplest, built-in)
- Use Cloudflare KV for distributed state (eventual consistency)
- Use Durable Objects for strongly consistent rate limiting (recommended)
The current implementation is acceptable for development or single-instance deployments.
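As a rough illustration of the Durable Objects route, a minimal counter could look like the sketch below (the class name, limits, and fixed-window logic are illustrative assumptions, not the project's code):

```ts
// Hypothetical Durable Object rate limiter for Cloudflare Workers.
// Each client key maps to one DO instance, so counts stay strongly
// consistent regardless of which Worker instance handles the request.
export class RateLimiterDO {
  constructor(private state: DurableObjectState) {}

  async fetch(_req: Request): Promise<Response> {
    const LIMIT = 10
    const WINDOW_MS = 60_000
    const now = Date.now()

    let bucket =
      (await this.state.storage.get<{ count: number; resetAt: number }>('bucket')) ??
      { count: 0, resetAt: now + WINDOW_MS }

    if (now >= bucket.resetAt) {
      bucket = { count: 0, resetAt: now + WINDOW_MS } // start a fresh window
    }
    bucket.count++
    await this.state.storage.put('bucket', bucket)

    return Response.json({
      allowed: bucket.count <= LIMIT,
      retryAfter: Math.ceil((bucket.resetAt - now) / 1000),
    })
  }
}
```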
apps/web/src/components/feature/SettingsModal.tsx (3)
149-178: Auto-fetch effects may trigger multiple parallel requests.

The condition `optimizeCustomModels.length === 0 && !isLoadingOptimizeModels` creates a potential race: if `fetchOptimizeModels` sets loading state and the effect re-runs before state updates propagate, multiple requests could fire. Consider using a ref to track in-flight requests or debouncing.

193-202: Modal lacks keyboard accessibility.

The modal doesn't handle Escape key to close, which is a common accessibility pattern. Consider adding a keyboard event listener.

```diff
+ // Close on Escape key
+ useEffect(() => {
+   const handleKeyDown = (e: KeyboardEvent) => {
+     if (e.key === 'Escape') {
+       onClose()
+     }
+   }
+   if (isOpen) {
+     document.addEventListener('keydown', handleKeyDown)
+     return () => document.removeEventListener('keydown', handleKeyDown)
+   }
+ }, [isOpen, onClose])
+
  if (!isOpen) return null
```

194-194: Modal backdrop doesn't close modal on click.

Clicking outside the modal content (on the backdrop) should close it for better UX. The outer div with `bg-black/50` should have an onClick handler.

```diff
- <div className="fixed inset-0 bg-black/50 flex items-center justify-center z-50">
+ <div
+   className="fixed inset-0 bg-black/50 flex items-center justify-center z-50"
+   onClick={onClose}
+ >
- <div className="bg-zinc-900 border border-zinc-800 rounded-xl w-full max-w-md mx-4 max-h-[80vh] overflow-hidden flex flex-col">
+ <div
+   className="bg-zinc-900 border border-zinc-800 rounded-xl w-full max-w-md mx-4 max-h-[80vh] overflow-hidden flex flex-col"
+   onClick={(e) => e.stopPropagation()}
+ >
```

apps/api/src/llm-providers/custom.ts (2)
63-73: URL normalization logic has edge cases.

The normalization could fail for URLs like `https://api.example.com/api` (would become `/api/v1/chat/completions` incorrectly) or `https://api.example.com/v1/` (trailing slash after v1). Consider a more robust approach:

```diff
- // Normalize base URL - ensure it ends with /chat/completions
- let url = baseUrl.trim()
- if (url.endsWith('/')) {
-   url = url.slice(0, -1)
- }
- if (!url.endsWith('/chat/completions')) {
-   if (!url.endsWith('/v1')) {
-     url = `${url}/v1`
-   }
-   url = `${url}/chat/completions`
- }
+ // Normalize base URL - ensure it ends with /chat/completions
+ let url = baseUrl.trim().replace(/\/+$/, '') // Remove trailing slashes
+ if (url.endsWith('/chat/completions')) {
+   // Already complete
+ } else if (url.endsWith('/v1')) {
+   url = `${url}/chat/completions`
+ } else {
+   url = `${url}/v1/chat/completions`
+ }
```

75-90: Missing timeout for fetch request.

The fetch call has no timeout, which could cause requests to hang indefinitely if the custom provider is unresponsive. Consider using `AbortController` with a timeout.

```diff
+ const controller = new AbortController()
+ const timeoutId = setTimeout(() => controller.abort(), 30000) // 30s timeout
+
+ try {
   const response = await fetch(url, {
     method: 'POST',
     headers: {
       'Content-Type': 'application/json',
       Authorization: `Bearer ${apiKey}`,
     },
+    signal: controller.signal,
     body: JSON.stringify({
       model,
       messages: [
         { role: 'system', content: request.systemPrompt },
         { role: 'user', content: request.prompt },
       ],
       max_tokens: request.maxTokens || 1000,
       temperature: request.temperature ?? 0.7,
     }),
   })
+ } catch (err) {
+   if (err instanceof Error && err.name === 'AbortError') {
+     throw Errors.timeout(this.name)
+   }
+   throw err
+ } finally {
+   clearTimeout(timeoutId)
+ }
```

apps/api/src/openapi/routes.ts (2)
91-124: Missing 401 and 429 error responses for optimize and translate routes.

These routes can also fail with authentication errors (401) when using providers that require auth, and rate limiting (429). Consider adding these response schemas for completeness.

```diff
 responses: {
   200: {
     description: 'Prompt optimized successfully',
     content: {
       'application/json': {
         schema: OptimizeResponseSchema,
       },
     },
   },
   400: {
     description: 'Invalid request parameters',
     content: {
       'application/json': {
         schema: ErrorResponseSchema,
       },
     },
   },
+  401: {
+    description: 'Authentication required',
+    content: {
+      'application/json': {
+        schema: ErrorResponseSchema,
+      },
+    },
+  },
+  429: {
+    description: 'Rate limit exceeded',
+    content: {
+      'application/json': {
+        schema: ErrorResponseSchema,
+      },
+    },
+  },
 },
```

Also applies to: 130-163
291-343: Consider extracting inline schemas to the schemas module for consistency.

The provider and model list routes define inline schemas while other routes import from `../schemas`. For consistency and reusability, consider defining `ProvidersResponseSchema` and `ModelsResponseSchema` in the schemas module.

apps/api/src/app.ts (1)
300-326: Type assertion for custom provider complete() is fragile.

The type cast `as unknown as { complete: ... }` bypasses type safety. Consider defining a proper interface for the custom provider or using a discriminated union pattern.

```diff
- const result =
-   provider === 'custom'
-     ? await (
-         llmProvider as unknown as {
-           complete: (
-             req: Parameters<typeof llmProvider.complete>[0],
-             config?: typeof customConfig
-           ) => ReturnType<typeof llmProvider.complete>
-         }
-       ).complete(
-         {
-           prompt,
-           systemPrompt: finalSystemPrompt,
-           model: customConfig?.model,
-           maxTokens: 1000,
-         },
-         customConfig
-       )
-     : await llmProvider.complete({
+ // Consider creating a type-safe interface in the provider registry
+ const result = await llmProvider.complete({
+   prompt,
+   systemPrompt: finalSystemPrompt,
+   model: provider === 'custom' ? customConfig?.model : model,
+   authToken,
+   maxTokens: 1000,
+   ...(provider === 'custom' && customConfig ? { customConfig } : {}),
+ })
```

apps/web/src/lib/api.ts (1)
311-375: Consider extracting common LLM request pattern.

`optimizePrompt` and `translatePrompt` share nearly identical structure:
- Custom provider validation and direct call
- No-auth provider direct call
- Token rotation for auth-required providers
Consider extracting a generic helper:
```ts
async function llmRequestWithRotation<TOptions, TResponse>(
  options: TOptions & { provider?: LLMProviderType; customConfig?: CustomLLMConfig },
  tokenOrTokens: string | string[] | undefined,
  singleCallFn: (opts: TOptions, token: string | null) => Promise<TResponse>
): Promise<ApiResponse<TResponse>> {
  const { provider = 'pollinations', customConfig } = options
  const providerConfig = LLM_PROVIDER_CONFIGS[provider]
  const tokenProvider = getLLMTokenProvider(provider)
  const allTokens = Array.isArray(tokenOrTokens)
    ? tokenOrTokens
    : tokenOrTokens
      ? [tokenOrTokens]
      : []

  // Custom provider flow
  if (provider === 'custom') {
    if (!customConfig?.baseUrl || !customConfig?.apiKey || !customConfig?.model) {
      return { success: false, error: 'Please configure custom provider URL, API key, and model' }
    }
    try {
      return { success: true, data: await singleCallFn(options, null) }
    } catch (err) {
      return { success: false, error: err instanceof Error ? err.message : 'Network error' }
    }
  }

  // No-auth provider flow
  if (!providerConfig?.needsAuth) {
    try {
      return { success: true, data: await singleCallFn(options, null) }
    } catch (err) {
      return { success: false, error: err instanceof Error ? err.message : 'Network error' }
    }
  }

  // Token rotation flow
  if (allTokens.length === 0 || !tokenProvider) {
    return { success: false, error: `Please configure your ${provider} token first` }
  }
  const result = await runWithTokenRotation(
    tokenProvider,
    allTokens,
    (t) => singleCallFn(options, t),
    { allowAnonymous: false }
  )
  return result.success
    ? { success: true, data: result.data }
    : { success: false, error: result.error }
}
```

This would reduce ~130 lines to ~60 lines and ensure consistent behavior.
Also applies to: 420-484
apps/api/src/schemas/index.ts (1)
36-39: Use `PROMPT_LIMITS` constant for consistency.

The prompt max length is hardcoded to 4000 here, but `OptimizeRequestSchema` uses `PROMPT_LIMITS.OPTIMIZE`. Using the constant ensures consistency and easier maintenance if limits change.

```diff
 prompt: z
   .string()
   .min(1, 'Prompt is required')
-  .max(4000, 'Prompt must be less than 4000 characters'),
+  .max(PROMPT_LIMITS.OPTIMIZE, `Prompt must be less than ${PROMPT_LIMITS.OPTIMIZE} characters`),
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (42)
- `apps/api/package.json` (1 hunks)
- `apps/api/src/app.ts` (11 hunks)
- `apps/api/src/constants.ts` (1 hunks)
- `apps/api/src/index.ts` (1 hunks)
- `apps/api/src/llm-providers/custom.ts` (1 hunks)
- `apps/api/src/llm-providers/deepseek.ts` (1 hunks)
- `apps/api/src/llm-providers/gitee-llm.ts` (1 hunks)
- `apps/api/src/llm-providers/huggingface-llm.ts` (1 hunks)
- `apps/api/src/llm-providers/modelscope-llm.ts` (1 hunks)
- `apps/api/src/llm-providers/pollinations.ts` (1 hunks)
- `apps/api/src/llm-providers/registry.ts` (2 hunks)
- `apps/api/src/llm-providers/types.ts` (1 hunks)
- `apps/api/src/middleware/error-handler.ts` (1 hunks)
- `apps/api/src/middleware/index.ts` (1 hunks)
- `apps/api/src/middleware/rate-limit.ts` (1 hunks)
- `apps/api/src/middleware/timeout.ts` (1 hunks)
- `apps/api/src/middleware/validate.ts` (1 hunks)
- `apps/api/src/openapi/index.ts` (1 hunks)
- `apps/api/src/openapi/routes.ts` (1 hunks)
- `apps/api/src/providers/gitee.ts` (2 hunks)
- `apps/api/src/providers/huggingface.ts` (2 hunks)
- `apps/api/src/providers/registry.ts` (1 hunks)
- `apps/api/src/schemas/index.ts` (1 hunks)
- `apps/api/src/utils/gradio.ts` (4 hunks)
- `apps/web/src/components/feature/Header.tsx` (1 hunks)
- `apps/web/src/components/feature/ImageResultCard.tsx` (25 hunks)
- `apps/web/src/components/feature/PromptCard.tsx` (10 hunks)
- `apps/web/src/components/feature/SettingsModal.tsx` (1 hunks)
- `apps/web/src/components/feature/StatusCard.tsx` (2 hunks)
- `apps/web/src/hooks/useImageGenerator.ts` (7 hunks)
- `apps/web/src/hooks/usePromptTranslate.ts` (1 hunks)
- `apps/web/src/lib/api.ts` (14 hunks)
- `apps/web/src/lib/constants.ts` (3 hunks)
- `apps/web/src/lib/tokenRotation.ts` (1 hunks)
- `apps/web/src/locales/en.json` (2 hunks)
- `apps/web/src/locales/zh.json` (2 hunks)
- `apps/web/src/pages/ConversationFlowPage.tsx` (0 hunks)
- `apps/web/src/pages/FlowPage.tsx` (0 hunks)
- `apps/web/src/pages/FlowPageV2.tsx` (5 hunks)
- `apps/web/src/pages/ImageGenerator.tsx` (4 hunks)
- `packages/shared/src/constants/llm-providers.ts` (1 hunks)
- `packages/shared/src/types/llm.ts` (3 hunks)
💤 Files with no reviewable changes (2)
- apps/web/src/pages/ConversationFlowPage.tsx
- apps/web/src/pages/FlowPage.tsx
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx,js,jsx,json}
📄 CodeRabbit inference engine (CLAUDE.md)
Use Biome for all linting and formatting operations (pnpm lint, pnpm format, pnpm check)
Files:
`apps/api/src/providers/gitee.ts`, `apps/api/src/llm-providers/types.ts`, `apps/api/src/middleware/error-handler.ts`, `apps/api/src/llm-providers/huggingface-llm.ts`, `apps/api/src/middleware/timeout.ts`, `apps/api/src/middleware/validate.ts`, `apps/web/src/lib/tokenRotation.ts`, `apps/web/src/hooks/usePromptTranslate.ts`, `apps/api/src/llm-providers/deepseek.ts`, `apps/api/src/index.ts`, `apps/api/src/llm-providers/registry.ts`, `apps/api/src/middleware/index.ts`, `apps/web/src/locales/zh.json`, `apps/api/src/llm-providers/custom.ts`, `apps/api/src/llm-providers/gitee-llm.ts`, `apps/api/src/utils/gradio.ts`, `apps/web/src/pages/ImageGenerator.tsx`, `apps/web/src/components/feature/PromptCard.tsx`, `apps/web/src/components/feature/StatusCard.tsx`, `packages/shared/src/constants/llm-providers.ts`, `apps/api/src/llm-providers/pollinations.ts`, `packages/shared/src/types/llm.ts`, `apps/api/src/providers/huggingface.ts`, `apps/api/src/middleware/rate-limit.ts`, `apps/web/src/components/feature/SettingsModal.tsx`, `apps/web/src/lib/constants.ts`, `apps/web/src/components/feature/ImageResultCard.tsx`, `apps/api/src/app.ts`, `apps/api/src/constants.ts`, `apps/api/package.json`, `apps/api/src/providers/registry.ts`, `apps/api/src/openapi/routes.ts`, `apps/web/src/hooks/useImageGenerator.ts`, `apps/web/src/lib/api.ts`, `apps/web/src/components/feature/Header.tsx`, `apps/web/src/pages/FlowPageV2.tsx`, `apps/web/src/locales/en.json`, `apps/api/src/schemas/index.ts`, `apps/api/src/llm-providers/modelscope-llm.ts`, `apps/api/src/openapi/index.ts`
apps/api/src/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/api/src/**/*.ts: POST /api/generate endpoint must support 'gitee' and 'huggingface' as provider parameters, require X-API-Key header for Gitee AI, and support optional X-HF-Token header for HuggingFace
Use Hono API framework for Cloudflare Workers implementation in the API app (apps/api)
Files:
`apps/api/src/providers/gitee.ts`, `apps/api/src/llm-providers/types.ts`, `apps/api/src/middleware/error-handler.ts`, `apps/api/src/llm-providers/huggingface-llm.ts`, `apps/api/src/middleware/timeout.ts`, `apps/api/src/middleware/validate.ts`, `apps/api/src/llm-providers/deepseek.ts`, `apps/api/src/index.ts`, `apps/api/src/llm-providers/registry.ts`, `apps/api/src/middleware/index.ts`, `apps/api/src/llm-providers/custom.ts`, `apps/api/src/llm-providers/gitee-llm.ts`, `apps/api/src/utils/gradio.ts`, `apps/api/src/llm-providers/pollinations.ts`, `apps/api/src/providers/huggingface.ts`, `apps/api/src/middleware/rate-limit.ts`, `apps/api/src/app.ts`, `apps/api/src/constants.ts`, `apps/api/src/providers/registry.ts`, `apps/api/src/openapi/routes.ts`, `apps/api/src/schemas/index.ts`, `apps/api/src/llm-providers/modelscope-llm.ts`, `apps/api/src/openapi/index.ts`
apps/web/src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/src/**/*.{ts,tsx}: Use @/ path alias for imports in frontend code (maps to src/)
Persist settings and API keys to localStorage with AES-256-GCM encryption (use src/lib/crypto.ts)
Persist Flow mode state using IndexedDB storage (via src/lib/flow-storage.ts) for nodes, edges, and images
Configure API URL via VITE_API_URL environment variable, defaulting to relative path for same-origin deployment
Files:
`apps/web/src/lib/tokenRotation.ts`, `apps/web/src/hooks/usePromptTranslate.ts`, `apps/web/src/pages/ImageGenerator.tsx`, `apps/web/src/components/feature/PromptCard.tsx`, `apps/web/src/components/feature/StatusCard.tsx`, `apps/web/src/components/feature/SettingsModal.tsx`, `apps/web/src/lib/constants.ts`, `apps/web/src/components/feature/ImageResultCard.tsx`, `apps/web/src/hooks/useImageGenerator.ts`, `apps/web/src/lib/api.ts`, `apps/web/src/components/feature/Header.tsx`, `apps/web/src/pages/FlowPageV2.tsx`
apps/web/src/**/*.{tsx,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
Use React 19 with Vite, Tailwind CSS, and shadcn/ui components in the frontend (apps/web)
Files:
`apps/web/src/lib/tokenRotation.ts`, `apps/web/src/hooks/usePromptTranslate.ts`, `apps/web/src/pages/ImageGenerator.tsx`, `apps/web/src/components/feature/PromptCard.tsx`, `apps/web/src/components/feature/StatusCard.tsx`, `apps/web/src/components/feature/SettingsModal.tsx`, `apps/web/src/lib/constants.ts`, `apps/web/src/components/feature/ImageResultCard.tsx`, `apps/web/src/hooks/useImageGenerator.ts`, `apps/web/src/lib/api.ts`, `apps/web/src/components/feature/Header.tsx`, `apps/web/src/pages/FlowPageV2.tsx`
🧠 Learnings (7)
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/api/src/**/*.ts : POST /api/generate endpoint must support 'gitee' and 'huggingface' as provider parameters, require X-API-Key header for Gitee AI, and support optional X-HF-Token header for HuggingFace
Applied to files:
`apps/api/src/providers/gitee.ts`, `apps/api/src/llm-providers/custom.ts`, `apps/api/src/llm-providers/gitee-llm.ts`, `apps/api/src/utils/gradio.ts`, `packages/shared/src/types/llm.ts`, `apps/api/src/providers/huggingface.ts`, `apps/api/src/app.ts`, `apps/web/src/hooks/useImageGenerator.ts`, `apps/web/src/lib/api.ts`, `apps/api/src/schemas/index.ts`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Implement provider abstraction pattern for API: define provider interface in apps/api/src/providers/types.ts, implement specific providers in individual files, and register/lookup providers in apps/api/src/providers/registry.ts
Applied to files:
`apps/api/src/llm-providers/registry.ts`, `apps/api/src/llm-providers/custom.ts`, `packages/shared/src/constants/llm-providers.ts`, `apps/api/src/providers/registry.ts`, `apps/web/src/lib/api.ts`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Organize shared code in packages/shared with type definitions in src/types/, provider configs in src/constants/, and validation utilities in src/utils/
Applied to files:
`apps/api/src/middleware/index.ts`, `apps/api/src/providers/registry.ts`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{tsx,ts} : Use React 19 with Vite, Tailwind CSS, and shadcn/ui components in the frontend (apps/web)
Applied to files:
`apps/web/src/components/feature/PromptCard.tsx`, `apps/web/src/pages/FlowPageV2.tsx`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{ts,tsx} : Persist settings and API keys to localStorage with AES-256-GCM encryption (use src/lib/crypto.ts)
Applied to files:
`apps/web/src/components/feature/SettingsModal.tsx`, `apps/web/src/hooks/useImageGenerator.ts`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/api/src/**/*.ts : Use Hono API framework for Cloudflare Workers implementation in the API app (apps/api)
Applied to files:
`apps/api/package.json`, `apps/api/src/openapi/routes.ts`, `apps/api/src/openapi/index.ts`
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{ts,tsx} : Persist Flow mode state using IndexedDB storage (via src/lib/flow-storage.ts) for nodes, edges, and images
Applied to files:
apps/web/src/pages/FlowPageV2.tsx
🧬 Code graph analysis (16)
apps/api/src/providers/gitee.ts (1)
- apps/api/src/constants.ts (1): `MAX_INT32` (8-8)

apps/api/src/middleware/error-handler.ts (1)
- apps/api/src/middleware/request-id.ts (1): `requestId` (18-27)

apps/api/src/middleware/validate.ts (2)
- apps/api/src/middleware/error-handler.ts (1): `sendError` (28-32)
- packages/shared/src/types/error.ts (1): `Errors` (95-156)

apps/web/src/hooks/usePromptTranslate.ts (1)
- apps/web/src/lib/api.ts (1): `translatePrompt` (420-484)

apps/api/src/index.ts (2)
- apps/api/src/app.ts (1): `createApp` (56-767)
- apps/api/src/config.ts (1): `getCorsOriginsFromBindings` (48-56)

apps/api/src/llm-providers/registry.ts (1)
- apps/api/src/llm-providers/custom.ts (1): `customLLMProvider` (129-129)

apps/api/src/llm-providers/custom.ts (2)
- apps/api/src/llm-providers/types.ts (3): `LLMProvider` (30-37), `LLMCompleteRequest` (6-19), `LLMCompleteResult` (22-27)
- packages/shared/src/types/error.ts (1): `Errors` (95-156)

apps/api/src/utils/gradio.ts (1)
- packages/shared/src/types/error.ts (1): `Errors` (95-156)

packages/shared/src/types/llm.ts (1)
- apps/web/src/lib/constants.ts (2): `CustomLLMConfig` (104-111), `LLMProviderType` (22-22)

apps/api/src/providers/huggingface.ts (1)
- apps/api/src/constants.ts (1): `MAX_INT32` (8-8)

apps/api/src/middleware/rate-limit.ts (2)
- apps/api/src/middleware/error-handler.ts (1): `sendError` (28-32)
- packages/shared/src/types/error.ts (1): `Errors` (95-156)

apps/web/src/lib/constants.ts (1)
- packages/shared/src/types/llm.ts (2): `CustomLLMConfig` (15-22), `LLMProviderType` (6-12)

apps/web/src/components/feature/ImageResultCard.tsx (1)
- apps/web/src/components/ui/card.tsx (1): `CardTitle` (75-75)

apps/web/src/hooks/useImageGenerator.ts (4)
- apps/web/src/lib/constants.ts (4): `LLMProviderType` (22-22), `getDefaultLLMModel` (99-101), `saveLLMSettings` (188-192), `getEffectiveSystemPrompt` (195-198)
- packages/shared/src/types/llm.ts (1): `LLMProviderType` (6-12)
- apps/web/src/lib/crypto.ts (1): `loadTokensArray` (94-101)
- apps/web/src/lib/api.ts (1): `translatePrompt` (420-484)

apps/web/src/components/feature/Header.tsx (1)
- apps/web/src/components/ui/LanguageSwitcher.tsx (1): `LanguageSwitcher` (4-23)

apps/api/src/schemas/index.ts (1)
- apps/api/src/constants.ts (1): `PROMPT_LIMITS` (45-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Workers Builds: zenith-image-generator-api
- GitHub Check: Cloudflare Pages
🔇 Additional comments (68)
apps/api/src/middleware/timeout.ts (1)
17-38: Excellent refactor for proper resource cleanup!

The changes improve the timeout middleware by introducing explicit lifecycle management for the abort handler:

- Named abort handler (lines 20-23): Using a named function instead of an inline handler enables proper cleanup via `removeEventListener`.
- Stored reject callback (lines 17-18, 28): Capturing the reject function allows the abort handler to trigger timeout errors while maintaining the ability to clean up the reference later.
- Deterministic cleanup (lines 37-38): The finally block now properly removes the event listener and nulls out the reject reference, preventing memory leaks from accumulated listeners or dangling function references.
- Safe invocation (line 22): Optional chaining on `rejectTimeout?.()` gracefully handles the edge case where cleanup has already occurred but the abort event still fires.

This pattern ensures that whether the request completes successfully, times out, or errors, all resources are properly released.
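A minimal sketch of the pattern (not the repo's exact code; names are illustrative):

```ts
// Named handler plus a stored reject reference, so the finally block
// can deterministically release both after the race settles.
async function raceWithTimeout<T>(work: Promise<T>, signal: AbortSignal): Promise<T> {
  let rejectTimeout: (() => void) | null = null
  const onAbort = () => rejectTimeout?.() // safe even if cleanup already ran

  const timedOut = new Promise<never>((_, reject) => {
    rejectTimeout = () => reject(new Error('Request timed out'))
    signal.addEventListener('abort', onAbort)
  })

  try {
    return await Promise.race([work, timedOut])
  } finally {
    signal.removeEventListener('abort', onAbort) // no leaked listener
    rejectTimeout = null // no dangling reference
  }
}
```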
apps/web/src/hooks/usePromptTranslate.ts (1)
67-67: Verify that `TranslateOptions` includes the `prompt` field.

The change to pass `{ prompt }` aligns with the `TranslateOptions` API signature. The type definition in `apps/web/src/lib/api.ts` confirms that `TranslateOptions` includes `prompt` as a required string field, along with optional `provider` and other configuration options. The code is correct.

packages/shared/src/constants/llm-providers.ts (1)
94-101: Custom provider implementation properly validates configuration; no changes needed.

The `custom` provider configuration correctly uses empty strings as placeholders. The custom.ts implementation explicitly requires `customConfig` to be passed at runtime and validates that `baseUrl`, `apiKey`, and `model` are all provided before use. The provider never uses the empty values from `LLM_PROVIDER_CONFIGS` directly, and validation occurs at the entry point of the `complete()` method, ensuring safe handling of the special case.
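That entry-point guard can be sketched as follows (assuming the shared `Errors` helpers referenced throughout this review; field names come from the review's description of the config):

```ts
// Assumed shared error factory from packages/shared/src/types/error.ts
declare const Errors: { invalidParams(field: string, message: string): Error }

// Illustrative guard run at the top of complete()
function assertCustomConfig(cfg?: { baseUrl?: string; apiKey?: string; model?: string }): void {
  if (!cfg?.baseUrl || !cfg?.apiKey || !cfg?.model) {
    throw Errors.invalidParams(
      'customConfig',
      'baseUrl, apiKey, and model are required for the custom provider'
    )
  }
}
```

apps/api/src/middleware/error-handler.ts (1)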
59-61: Good security practice: sanitizing error logs.

The approach of sanitizing errors before logging is excellent and helps prevent accidental leakage of sensitive information (such as API keys or tokens) that might be embedded in error messages or stack traces.
apps/api/src/utils/gradio.ts (3)
7-47: LGTM! Well-structured error parsing.

The `parseHuggingFaceError` helper provides comprehensive error mapping with clear logic for rate limits, quota, authentication, timeouts, and service availability. The use of a `PROVIDER_NAME` constant ensures consistency across error messages.

52-85: LGTM! Improved error handling in SSE extraction.

The updates to `extractCompleteEventData` properly leverage `parseHuggingFaceError` for both parsed error data and syntax-error fallbacks, providing more specific error context than the previous generic provider errors.

90-121: LGTM! Consistent error handling in API calls.

The `callGradioApi` function now uses `parseHuggingFaceError` for queue failures with both error text and status code, enabling more accurate error classification. The event_id check maintains the existing provider error pattern.

apps/api/src/constants.ts (1)
1-50: LGTM! Excellent centralization of constants.

This new constants module follows best practices by:

- Eliminating magic numbers throughout the codebase
- Using `as const` for type safety and immutability
- Providing clear documentation for each constant
- Using readable numeric literals with underscores (e.g., `120_000`)

The values are all reasonable for their respective use cases. MAX_INT32 correctly represents 2³¹-1.
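For illustration, entries in that style look like the following (the `TIMEOUTS` names are hypothetical; only `MAX_INT32` and the `120_000` literal come from this review):

```ts
/** Maximum 32-bit signed integer (2^31 - 1), e.g. an upper bound for random seeds. */
export const MAX_INT32 = 2_147_483_647

/** Hypothetical timeout group; the real module defines its own set of constants. */
export const TIMEOUTS = {
  videoPollMs: 120_000, // 2 minutes
  requestMs: 30_000, // 30 seconds
} as const
```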
apps/web/src/locales/en.json (3)
16-17: LGTM! Clear flow mode labels added.

The new header keys `tryFlowMode` and `flowMode` provide clear navigation labels for the experimental flow feature.

25-37: LGTM! Enhanced prompt configuration labels.

The new prompt keys support inference steps, dimensions, and LLM-powered translation/optimization features, aligning with PR objectives for custom LLM provider support.

55-85: LGTM! Comprehensive result view enhancements.

The extensive additions to the result section support:
- Video generation workflow with status messages
- Image comparison and upscaling features
- Fullscreen viewer with keyboard shortcuts
- Detailed metadata labels
All strings are clear, grammatically correct, and align with the PR's UI improvement objectives.
apps/api/src/providers/gitee.ts (1)
7-7: LGTM! Clean refactor to use centralized constant.

Replacing the hardcoded `2147483647` with `MAX_INT32` improves maintainability and aligns with the new centralized constants module. The seed generation logic remains functionally identical.

Also applies to: 74-74
apps/api/src/llm-providers/huggingface-llm.ts (1)
60-60: LGTM! Added configurable temperature parameter.

The change from hardcoded `0.7` to `request.temperature ?? 0.7` adds flexibility while maintaining backward compatibility with the same default value. This aligns with the temperature support added across other LLM providers in the PR.

apps/api/src/llm-providers/gitee-llm.ts (1)
53-53: LGTM! Temperature parameter added consistently.

The addition of `temperature: request.temperature ?? 0.7` to the Gitee LLM request body provides configurable temperature control while defaulting to 0.7, maintaining consistency with other LLM providers in this PR.

apps/api/src/llm-providers/modelscope-llm.ts (1)
53-53: LGTM! Temperature parameter completes consistent API across providers.

The addition of `temperature: request.temperature ?? 0.7` to the ModelScope LLM request body completes the temperature parameter rollout across all LLM providers (HuggingFace, Gitee, and ModelScope), ensuring consistent configurability with a sensible default.

apps/api/package.json (1)
19-24: All package versions confirmed to exist and are secure.

@hono/swagger-ui@0.5.2 exists and is the latest version. zod@4.2.1 was published recently and no known vulnerabilities exist. hono@4.11.1 was published 3 days ago, and version 4.11.0 is available; importantly, security vulnerabilities in hono affecting versions >=1.1.0 <4.10.2 have been patched in >=4.10.2, so hono@4.11.0 is secure. The Hono middleware packages (@hono/zod-openapi, @hono/zod-validator) are part of the stable honojs ecosystem and present no security concerns.
apps/api/src/llm-providers/pollinations.ts (1)
39-39: LGTM! Temperature parameter added correctly.

The addition of the `temperature` parameter with a sensible default of 0.7 aligns with the updated `LLMCompleteRequest` interface and follows best practices for LLM generation control.

apps/api/src/llm-providers/deepseek.ts (1)
52-52: LGTM! Consistent temperature parameter implementation.

The temperature parameter addition matches the pattern used in other providers and provides appropriate control over generation determinism.
apps/web/src/components/feature/StatusCard.tsx (1)
1-14: LGTM! Clean internationalization implementation.

The i18n integration is straightforward and follows React best practices. The translation key `'status.title'` is appropriately scoped.

apps/api/src/llm-providers/types.ts (1)
17-18: LGTM! Well-documented type extension.

The optional `temperature` field is properly documented and enables consistent temperature control across all LLM providers while maintaining backward compatibility.

apps/api/src/llm-providers/registry.ts (1)
6-21: LGTM! Custom provider registration follows established patterns.

The addition of the custom LLM provider to the registry is consistent with the existing provider registration pattern and properly enables the new OpenAI-compatible provider feature.
apps/api/src/providers/huggingface.ts (3)
6-7: LGTM! Improved maintainability through centralized utilities.

Importing `MAX_INT32` and `callGradioApi` promotes code reuse and centralizes Gradio API interaction logic across providers.

50-50: LGTM! Consistent seed generation using shared constant.

Using `MAX_INT32` instead of a hardcoded magic number improves maintainability and ensures consistency across providers.

58-75: Error handling is properly centralized and covers all HuggingFace-specific scenarios.

The refactoring successfully delegates error handling to `callGradioApi` and `parseHuggingFaceError` without losing coverage. Verification confirms all critical error paths are handled:
- Rate limiting & quota: Detected via status codes (429) and message patterns (lines 16, 21)
- Authentication: 401/403 status codes and message patterns (lines 26-33)
- Timeout & service unavailability: Message patterns and status 503 (lines 36-43)
- SSE stream errors: Parsed from 'error' events with JSON fallback (lines 64-76)
- Queue initialization failures: Validated event_id presence (line 112)
- Network errors: Handled in queue POST request (lines 105-108)
All error types map to appropriate `Errors` helpers for consistent client-facing responses.

apps/api/src/index.ts (1)
15-36: The app caching optimization is correctly implemented.

The implementation properly identifies `CORS_ORIGINS` as the only environment binding affecting app behavior. The `Env` interface defines only `CORS_ORIGINS`, the `createApp` function receives pre-processed configuration and does not access environment bindings directly, and the cache invalidation logic correctly handles the only dynamic configuration that exists.
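A rough sketch of this caching shape, using the `createApp` and `getCorsOriginsFromBindings` names cited in the code graph above (signatures are assumed, not verified against the diff):

```ts
// Illustrative module-level cache in the Worker entry point: the app is
// rebuilt only when CORS_ORIGINS changes between requests.
import { createApp } from './app'
import { getCorsOriginsFromBindings } from './config'

interface Env {
  CORS_ORIGINS?: string
}

let cachedApp: ReturnType<typeof createApp> | null = null
let cachedOrigins: string | undefined

export default {
  fetch(request: Request, env: Env, ctx: ExecutionContext) {
    if (!cachedApp || cachedOrigins !== env.CORS_ORIGINS) {
      cachedOrigins = env.CORS_ORIGINS
      cachedApp = createApp(getCorsOriginsFromBindings(env))
    }
    return cachedApp.fetch(request, env, ctx)
  },
}
```

apps/api/src/middleware/index.ts (1)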
8-12: Middleware exports are properly implemented.

All five middleware modules (rate-limit, request-id, security, timeout, validate) exist and export expected functions. The rate-limit module exports the `rateLimit()` function and `rateLimitPresets` used throughout the app, and the validate module exports `validateJson()`, `validateQuery()`, and `validateParam()` for request validation. The barrel export pattern correctly expands the middleware API surface and integrates properly with the Hono API framework.

apps/web/src/pages/ImageGenerator.tsx (2)
1-10: LGTM!

The imports and state initialization follow React 19 best practices and the @/ path alias convention specified in the coding guidelines.

67-67: LGTM!

The Header integration with the settings modal follows a clean callback pattern.
apps/api/src/openapi/index.ts (2)
1-10: LGTM!

The imports and barrel export pattern are clean and follow modern ES module conventions.

15-54: LGTM!

The OpenAPI configuration correctly documents the API security schemes. The X-API-Key requirement for Gitee AI and optional X-HF-Token for HuggingFace align with the coding guidelines.
packages/shared/src/types/llm.ts (3)
12-22: LGTM!

The CustomLLMConfig interface appropriately defines the structure for custom OpenAI-compatible providers. Note that per the coding guidelines, consumers of this type must encrypt API keys using AES-256-GCM when persisting to storage.

53-66: LGTM!

The optional customConfig field is appropriately designed for the custom provider use case. The comment clearly documents when this field is required.

92-110: LGTM!

The TranslateRequest and TranslateResponse changes align the translation API with the optimization API, providing consistent support for custom LLM providers. The optional fields maintain backward compatibility.
apps/web/src/locales/zh.json (1)
1-175: LGTM!

The Chinese localization file is well-structured and syntactically valid. The new translation keys align with the i18n implementation across the web application.
apps/api/src/providers/registry.ts (2)
11-23: LGTM!

The use of TypeScript's `satisfies` operator with `as const` provides excellent type safety while preserving literal types. This prevents runtime errors from unregistered providers.

25-42: LGTM!

The registry functions maintain runtime safety checks while leveraging TypeScript's type system for compile-time safety. The type predicate in `hasProvider` enables proper type narrowing in consumer code.
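Condensed, the pattern reads like this (the provider values are stand-ins; the actual registry wires up real implementations):

```ts
// `satisfies` checks the object against Record<string, ImageProvider>
// while `as const` preserves the literal keys 'gitee' | 'huggingface'.
interface ImageProvider {
  generate(prompt: string): Promise<string>
}

declare const giteeProvider: ImageProvider // stand-in
declare const huggingfaceProvider: ImageProvider // stand-in

const providers = {
  gitee: giteeProvider,
  huggingface: huggingfaceProvider,
} as const satisfies Record<string, ImageProvider>

export type ProviderId = keyof typeof providers // 'gitee' | 'huggingface'

// Type predicate narrows plain strings to registered provider ids
export function hasProvider(id: string): id is ProviderId {
  return id in providers
}
```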
48-78: LGTM!The key generation strategy appropriately prioritizes API keys over IP addresses and hashes keys for privacy. The simple hash function (lines 70-78) is sufficient for rate limiting purposes and prevents logging sensitive API keys.
48-78: LGTM!

The key generation strategy appropriately prioritizes API keys over IP addresses and hashes keys for privacy. The simple hash function (lines 70-78) is sufficient for rate limiting purposes and prevents logging sensitive API keys.

83-133: LGTM!

The sliding window rate limiting implementation is correct. The middleware properly:
- Increments counts before checking limits
- Sets standard rate limit headers
- Returns appropriate errors with retry-after information
135-162: LGTM!

The rate limit presets are well-calibrated for different endpoint types, with stricter limits for resource-intensive operations (video: 5/min, generate: 10/min) and relaxed limits for lightweight operations (readonly: 60/min).
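The core mechanics described above fit in a few lines; a sketch of the counter only (the real middleware layers headers, presets, and key hashing on top):

```ts
// Minimal window-counter sketch mirroring the in-memory Map approach.
const windows = new Map<string, { count: number; resetAt: number }>()

function checkLimit(key: string, limit: number, windowMs: number) {
  const now = Date.now()
  let win = windows.get(key)
  if (!win || now >= win.resetAt) {
    win = { count: 0, resetAt: now + windowMs } // reset an expired window
    windows.set(key, win)
  }
  win.count++ // increment before checking, as noted in the review
  return {
    allowed: win.count <= limit,
    remaining: Math.max(0, limit - win.count),
    retryAfterSec: Math.ceil((win.resetAt - now) / 1000),
  }
}
```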
apps/web/src/components/feature/PromptCard.tsx (3)
2-2: LGTM!

The i18n integration using react-i18next is correctly implemented and follows React 19 best practices. The translation keys align with the locale files reviewed earlier.
Also applies to: 63-63
71-256: LGTM!

All user-facing strings are properly translated using i18next. The translations follow a consistent pattern and match the locale file keys.

170-170: Nice improvement with data attribute selectors!

The Switch className now uses Tailwind's data attribute selectors instead of conditional logic, making the styling more declarative and easier to maintain.
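For example, with the shadcn/ui Switch (the import path and color classes are assumed for illustration, not taken from the diff):

```tsx
import { Switch } from '@/components/ui/switch' // assumed shadcn/ui path

// The Radix-based Switch exposes data-state="checked" / "unchecked", so both
// states can be styled declaratively, without a ternary in className.
export function TranslateToggle(props: { checked: boolean; onChange: (v: boolean) => void }) {
  return (
    <Switch
      checked={props.checked}
      onCheckedChange={props.onChange}
      className="data-[state=checked]:bg-emerald-500 data-[state=unchecked]:bg-zinc-700"
    />
  )
}
```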
apps/web/src/components/feature/SettingsModal.tsx (2)
1-27: LGTM! Imports are well-organized and use the @/ path alias correctly.

The imports follow the coding guidelines for using @/ path alias for frontend code. Good separation of external libraries, UI components, and local utilities.

79-95: Tab persistence using localStorage is appropriate for non-sensitive UI state.

The tab selection persistence correctly uses plain localStorage since it only stores a UI preference ('api', 'optimize', or 'translate'), not sensitive data like API keys. Per coding guidelines, encryption is required for settings and API keys, but UI state like active tab doesn't require encryption.
apps/web/src/pages/FlowPageV2.tsx (2)
185-211: LGTM! Custom config setters correctly persist to storage.

The `setCustomOptimizeConfig` and `setCustomTranslateConfig` callbacks properly merge partial updates and persist to storage via `saveLLMSettings`. The empty dependency array is acceptable since these only use `setLLMSettings` (stable setter) and `saveLLMSettings` (module-level function).

587-612: LGTM! SettingsModal integration is complete.

All required props are properly passed to the SettingsModal component, including the new custom config setters. The wiring correctly connects the local state management to the modal's callbacks.
apps/web/src/components/feature/Header.tsx (2)
1-11: LGTM! Clean component structure with proper TypeScript interface.

The imports use the @/ path alias as required by coding guidelines. The `HeaderProps` interface clearly defines optional props with sensible defaults via optional chaining in usage.

13-59: LGTM! Well-structured header with full i18n support.

The component correctly uses the translation function for all user-facing text (`header.title`, `header.subtitle`, `common.api`, `header.flowMode`). The three-column flex layout is clean and responsive.

apps/api/src/llm-providers/custom.ts (2)
92-116: LGTM! Error handling is comprehensive.

Good mapping of HTTP status codes to appropriate error types (401→authInvalid, 429→rateLimited, 402→quotaExceeded). The fallback to providerError with parsed message provides useful debugging information.

118-126: LGTM! Response parsing with proper null checks.

The optional chaining on `data.choices?.[0]?.message?.content` correctly handles malformed responses, and the empty response case throws an appropriate error.

apps/api/src/openapi/routes.ts (3)
1-23: LGTM! Well-organized OpenAPI route setup.

The imports are clean, separating the route creation utility from the application schemas. The file header clearly documents its purpose.

28-85: LGTM! Generate route with comprehensive error responses.

The generate route correctly documents all relevant HTTP status codes (200, 400, 401, 429, 502) with appropriate descriptions. This aligns with the error handling in the provider implementations.

270-274: The response headers definition is correct for @hono/zod-openapi.

The `headers` property using `z.object()` is the intended pattern for this library. @hono/zod-openapi v1.1.5 uses `@asteasolutions/zod-to-openapi` to automatically convert Zod schemas to OpenAPI 3.1.0 format, including response headers. The code type-checks successfully under strict TypeScript mode and passes CI, confirming the approach is valid and properly typed.

Likely an incorrect or invalid review comment.
apps/web/src/components/feature/ImageResultCard.tsx (2)
18-56: LGTM! Clean i18n integration.

The `useTranslation` hook is properly imported and initialized. The `t` function is correctly added to dependency arrays where translation strings are used in callbacks (lines 124, 146, 156).

264-264: Consistent localization applied to all UI strings.

All user-facing strings throughout the component (card title, tooltips, info panel labels, status messages, keyboard shortcut hints, etc.) are properly localized using the `t()` function with appropriate translation keys.

apps/api/src/app.ts (3)
104-111: Good addition of rate limiting middleware.

Rate limiting is properly applied to resource-intensive endpoints with appropriate limits (generate: 10/min, optimize: 20/min, video: 5/min, readonly: 60/min). This helps prevent abuse and protects upstream providers.
644-677: Good defensive measures for image proxy.

The content-length check before streaming prevents memory exhaustion from large payloads, and streaming avoids buffering the entire image in memory. The 10MB limit is reasonable for images.
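In Hono, that combination can be sketched roughly as follows (the route path, query handling, and error shape are illustrative assumptions):

```ts
import { Hono } from 'hono'

const app = new Hono()
const MAX_PROXY_BYTES = 10 * 1024 * 1024 // 10MB

app.get('/proxy-image', async (c) => {
  const target = c.req.query('url')
  if (!target) return c.json({ error: 'url is required' }, 400)

  const upstream = await fetch(target)
  // Reject oversized payloads before reading the body; if the header is
  // absent this check is skipped, as a sketch-level simplification.
  const length = Number(upstream.headers.get('content-length') ?? 0)
  if (length > MAX_PROXY_BYTES) {
    return c.json({ error: 'Image too large' }, 413)
  }

  // Stream the upstream body through instead of buffering it in memory
  return new Response(upstream.body, {
    headers: { 'Content-Type': upstream.headers.get('content-type') ?? 'image/png' },
  })
})
```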
741-758: Helpful Retry-After header for polling guidance.

Adding a `Retry-After` header when status is pending/processing is a good practice that helps clients implement proper backoff without hardcoding intervals.
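In Hono this is a one-line header set before responding; a sketch with a hypothetical task lookup:

```ts
import { Hono } from 'hono'

const app = new Hono()

// Hypothetical task store, just for the sketch
async function lookupTask(id: string): Promise<{ id: string; status: string }> {
  return { id, status: 'processing' }
}

app.get('/video-task/:id', async (c) => {
  const task = await lookupTask(c.req.param('id'))
  if (task.status === 'pending' || task.status === 'processing') {
    c.header('Retry-After', '5') // seconds; illustrative polling interval
  }
  return c.json(task)
})
```

apps/web/src/lib/constants.ts (1)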
156-185: Well-structured settings migration with proper defaults.

The `loadLLMSettings` function correctly handles backwards compatibility by providing defaults for new fields (`translateProvider`, `translateModel`, `customOptimizeConfig`, `customTranslateConfig`) when loading old settings data.
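The usual shape of such a migration is to spread the parsed payload over a defaults object; a sketch under that assumption (the default values are invented):

```ts
// Illustrative backwards-compatible loader: any field missing from the
// stored JSON falls back to its default, so old payloads keep working.
interface LLMSettings {
  llmProvider: string
  llmModel: string
  translateProvider: string
  translateModel: string
  customOptimizeConfig: { baseUrl: string; apiKey: string; model: string }
  customTranslateConfig: { baseUrl: string; apiKey: string; model: string }
}

const DEFAULTS: LLMSettings = {
  llmProvider: 'pollinations',
  llmModel: '',
  translateProvider: 'pollinations',
  translateModel: '',
  customOptimizeConfig: { baseUrl: '', apiKey: '', model: '' },
  customTranslateConfig: { baseUrl: '', apiKey: '', model: '' },
}

function loadLLMSettingsSketch(raw: string | null): LLMSettings {
  if (!raw) return DEFAULTS
  try {
    return { ...DEFAULTS, ...(JSON.parse(raw) as Partial<LLMSettings>) }
  } catch {
    return DEFAULTS // corrupted storage falls back to defaults
  }
}
```

apps/web/src/hooks/useImageGenerator.ts (4)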
51-58: Good defensive parsing for localStorage.

Adding try/catch around localStorage parsing prevents crashes from corrupted or malformed stored data.
247-250: Good fix: only create blob URL on successful fetch.

Previously, the code would attempt to create a blob URL even if the fetch failed. Now it correctly checks `response.ok` before processing.
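The fixed pattern, as a standalone sketch:

```ts
// Only turn the proxied image into an object URL when the fetch succeeded;
// a failed response would otherwise yield a blob of the error page.
async function fetchImageAsObjectUrl(proxyUrl: string): Promise<string> {
  const response = await fetch(proxyUrl)
  if (!response.ok) {
    throw new Error(`Proxy fetch failed with status ${response.status}`)
  }
  return URL.createObjectURL(await response.blob())
}
```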
372-396: Clean abstraction for LLM token retrieval.

The `getTokensForLLMProvider` helper consolidates the mapping logic and `getTranslateTokens` properly uses it. The dependency arrays are correct.
441-480: Translation handler properly supports custom provider flow.

The `handleTranslate` callback correctly:

- Uses `getTranslateTokens()` for the translation provider
- Passes `customConfig` when provider is 'custom'
- Uses the custom model from config when applicable
apps/web/src/lib/api.ts (3)
56-82: Good centralized API request wrapper.

The `apiRequest` function provides:

- Consistent error parsing via `parseErrorResponse`
- Structured error objects with status and code
- Type-safe return values

This reduces boilerplate across all API calls.
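A plausible minimal shape for such a wrapper, inferred from this review rather than copied from the diff:

```ts
type ApiResponse<T> = { success: true; data: T } | { success: false; error: string }

// Sketch: one fetch path, one error-parsing path, typed results.
async function apiRequest<T>(path: string, init?: RequestInit): Promise<ApiResponse<T>> {
  try {
    const res = await fetch(path, init)
    if (!res.ok) {
      // Inferred error parsing: turn an error body into a readable message
      const body = (await res.json().catch(() => null)) as { error?: string } | null
      return {
        success: false,
        error: body?.error ?? `Request failed with status ${res.status}`,
      }
    }
    return { success: true, data: (await res.json()) as T }
  } catch (err) {
    return { success: false, error: err instanceof Error ? err.message : 'Network error' }
  }
}
```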
187-196: Clean token rotation integration.

The `generateImage` function now uses `runWithTokenRotation` for automatic failover on rate limits. The `allowAnonymous` option correctly reflects the provider's auth requirements.
486-513: Custom models endpoint integration looks correct.

`fetchCustomModels` properly uses the `apiRequest` wrapper and returns the expected `ApiResponse` shape. Error handling is consistent with other functions.

apps/api/src/schemas/index.ts (2)
138-145: Verify video provider restriction against HuggingFace support.

The schema restricts video generation to only the `'gitee'` provider using `z.literal()`, but the PR objectives mention "Image-to-Video Generation: New workflow to generate videos from images via HuggingFace." If HuggingFace should also support video generation, this schema needs updating.

If HuggingFace video support is planned:

```diff
 export const VideoGenerateRequestSchema = z.object({
-  provider: z.literal('gitee'),
+  provider: z.enum(['gitee', 'huggingface']),
   imageUrl: z.string().url('Invalid image URL'),
   prompt: z.string().min(1, 'Prompt is required'),
   width: z.number().int().min(256).max(2048),
   height: z.number().int().min(256).max(2048),
 })
```
1-9: Well-structured schema organization.

Good use of centralized Zod schemas with proper TypeScript type inference exports. The section comments and consistent naming conventions improve maintainability.
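In miniature, the schema-plus-inference pattern looks like this (the field constraints are illustrative):

```ts
import { z } from 'zod'

// The schema is the single source of truth; the request type is derived from it.
export const OptimizeRequestSchemaSketch = z.object({
  prompt: z.string().min(1, 'Prompt is required').max(4000),
  provider: z.enum(['pollinations', 'deepseek', 'custom']).optional(), // illustrative subset
})

export type OptimizeRequestSketch = z.infer<typeof OptimizeRequestSchemaSketch>
```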
```ts
// Fetch models from custom OpenAI-compatible provider
app.post('/custom-models', async (c) => {
  let body: { baseUrl: string; apiKey: string }
  try {
    body = await c.req.json()
  } catch {
    return sendError(c, Errors.invalidParams('body', 'Invalid JSON body'))
  }

  const { baseUrl, apiKey } = body

  if (!baseUrl || !apiKey) {
    return sendError(c, Errors.invalidParams('body', 'baseUrl and apiKey are required'))
  }

  // Normalize base URL to get /models endpoint
  let url = baseUrl.trim()
  if (url.endsWith('/')) {
    url = url.slice(0, -1)
  }
  // Remove /chat/completions if present
  if (url.endsWith('/chat/completions')) {
    url = url.slice(0, -'/chat/completions'.length)
  }
  // Ensure /v1 is present
  if (!url.endsWith('/v1')) {
    url = `${url}/v1`
  }
  url = `${url}/models`

  try {
    const response = await fetch(url, {
      method: 'GET',
      headers: {
        Authorization: `Bearer ${apiKey}`,
      },
    })

    if (!response.ok) {
      const errorText = await response.text().catch(() => 'Unknown error')
      if (response.status === 401) {
        return sendError(c, Errors.authInvalid('Custom Provider'))
      }
      return sendError(
        c,
        Errors.providerError('Custom Provider', `Failed to fetch models: ${errorText}`)
      )
    }

    const data = (await response.json()) as { data?: Array<{ id: string; owned_by?: string }> }
    const models = (data.data || []).map((m) => ({
      id: m.id,
      name: m.id,
      owned_by: m.owned_by,
    }))

    return c.json({ models })
  } catch (err) {
    return sendError(
      c,
      Errors.providerError(
        'Custom Provider',
        err instanceof Error ? err.message : 'Failed to fetch models'
      )
    )
  }
})
```
Potential SSRF vulnerability in custom-models endpoint.
The endpoint fetches from a user-provided baseUrl without validation. An attacker could use this to:
- Probe internal network services (e.g., `http://localhost:6379`, `http://169.254.169.254/`)
- Scan internal infrastructure
- Exfiltrate data through DNS or HTTP
Consider adding URL validation:
```diff
+// Allowlist for custom provider URLs
+function isAllowedCustomProviderUrl(url: string): boolean {
+  try {
+    const parsed = new URL(url)
+    // Block private/internal IPs
+    const hostname = parsed.hostname.toLowerCase()
+    if (
+      hostname === 'localhost' ||
+      hostname === '127.0.0.1' ||
+      hostname.startsWith('192.168.') ||
+      hostname.startsWith('10.') ||
+      hostname.startsWith('172.') ||
+      hostname === '169.254.169.254' ||
+      hostname.endsWith('.local')
+    ) {
+      return false
+    }
+    // Require HTTPS for external providers
+    return parsed.protocol === 'https:'
+  } catch {
+    return false
+  }
+}

 // Fetch models from custom OpenAI-compatible provider
 app.post('/custom-models', async (c) => {
   // ... validation ...
+
+  if (!isAllowedCustomProviderUrl(baseUrl)) {
+    return sendError(c, Errors.invalidParams('baseUrl', 'Invalid or disallowed URL'))
+  }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents

```
In apps/api/src/app.ts around lines 170 to 236, the custom-models endpoint
currently fetches a user-supplied baseUrl without validation which creates an
SSRF risk; validate and sanitize baseUrl before fetching by parsing it with URL,
rejecting non-http(s) schemes and any host that resolves to loopback,
link-local, private RFC1918 ranges, or known cloud metadata addresses (e.g.
169.254.169.254), and disallow IP-literal hosts if they resolve to private
ranges; perform a DNS lookup of the hostname and check the resolved IP(s)
against private/loopback ranges, and reject if any match, or alternatively
enforce a whitelist of allowed hosts; also reject URLs with credentials,
file://, or other schemes and ensure the path normalization still only targets
the /v1/models path; on validation failure return sendError(c,
Errors.invalidParams(...)) and only then perform the fetch.
```
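One caveat if the suggested allowlist is adopted: the bare `'172.'` prefix check also blocks public addresses outside the private 172.16.0.0/12 range. A tighter IPv4 test is a small amount of code; a sketch, assuming string matching on the hostname only (DNS rebinding and IPv6 literals remain out of scope):

```ts
// Sketch: match only the actual private/special IPv4 ranges, rather than
// any hostname that merely starts with '172.'.
function isPrivateIPv4(hostname: string): boolean {
  const m = hostname.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/)
  if (!m) return false // not an IPv4 literal
  const a = Number(m[1])
  const b = Number(m[2])
  if (a === 10 || a === 127) return true // 10.0.0.0/8 and loopback
  if (a === 192 && b === 168) return true // 192.168.0.0/16
  if (a === 172 && b >= 16 && b <= 31) return true // 172.16.0.0/12
  if (a === 169 && b === 254) return true // link-local, incl. cloud metadata
  return false
}

// e.g. isPrivateIPv4('172.20.0.1') === true, isPrivateIPv4('172.217.0.1') === false
```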
```ts
class CustomLLMProvider implements LLMProvider {
  readonly id = 'custom'
  readonly name = 'Custom (OpenAI Compatible)'

  /**
   * Complete a chat request using custom OpenAI-compatible API
   *
   * Note: For custom provider, the authToken is not used.
   * Instead, the API key is passed via customConfig.
   */
  async complete(
    request: LLMCompleteRequest,
    customConfig?: CustomProviderConfig
  ): Promise<LLMCompleteResult> {
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Find and read the types.ts file and custom.ts file to verify interface definition
echo "=== Checking llm-providers directory ==="
find apps/api/src -name "*types.ts" -o -name "*custom.ts" | head -20
echo ""
echo "=== Reading types.ts to verify LLMProvider interface ==="
if [ -f "apps/api/src/llm-providers/types.ts" ]; then
  cat -n apps/api/src/llm-providers/types.ts
else
  echo "File not found at expected location, searching..."
  find apps -name "types.ts" -path "*/llm-providers/*" -exec cat -n {} \;
fi
echo ""
echo "=== Reading custom.ts to see full implementation ==="
if [ -f "apps/api/src/llm-providers/custom.ts" ]; then
  cat -n apps/api/src/llm-providers/custom.ts | head -80
fi
```
**Method signature does not match `LLMProvider` interface.**

The `complete` method includes an additional `customConfig?: CustomProviderConfig` parameter that is not part of the `LLMProvider` interface definition. The interface specifies `complete(request: LLMCompleteRequest): Promise<LLMCompleteResult>`, but the implementation adds an extra parameter, causing a signature mismatch that will result in TypeScript compilation errors.
🤖 Prompt for AI Agents

```
apps/api/src/llm-providers/custom.ts lines 36-49: the complete method currently
has an extra parameter customConfig?: CustomProviderConfig which does not match
the LLMProvider interface signature and causes TypeScript errors; remove the
extra parameter so the method is declared as complete(request:
LLMCompleteRequest): Promise<LLMCompleteResult>, and obtain any
provider-specific config via the provider instance (constructor/fields) or from
the request object (e.g., request.meta or request.options) instead of a second
parameter.
```
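One conforming shape, as a sketch: the stub types below stand in for the real definitions in llm-providers/types.ts, and the request/response handling is illustrative rather than the PR's actual implementation.

```ts
// Stub types standing in for the real definitions in llm-providers/types.ts
interface LLMCompleteRequest { messages: Array<{ role: string; content: string }> }
interface LLMCompleteResult { text: string }
interface LLMProvider {
  readonly id: string
  readonly name: string
  complete(request: LLMCompleteRequest): Promise<LLMCompleteResult>
}
interface CustomProviderConfig { baseUrl: string; apiKey: string; model: string }

class CustomLLMProvider implements LLMProvider {
  readonly id = 'custom'
  readonly name = 'Custom (OpenAI Compatible)'

  // Config moves to the instance, so complete() matches the interface exactly
  constructor(private readonly config: CustomProviderConfig) {}

  async complete(request: LLMCompleteRequest): Promise<LLMCompleteResult> {
    const res = await fetch(`${this.config.baseUrl}/chat/completions`, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${this.config.apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ model: this.config.model, messages: request.messages }),
    })
    const data = (await res.json()) as { choices?: Array<{ message?: { content?: string } }> }
    return { text: data.choices?.[0]?.message?.content ?? '' }
  }
}
```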
```ts
function sanitizeErrorForLogging(err: unknown): { name: string; message: string; stack?: string } {
  if (err instanceof Error) {
    return {
      name: err.name,
      message: err.message,
      // Only include stack in development
      stack: process.env.NODE_ENV === 'development' ? err.stack : undefined,
    }
  }
  return {
    name: 'UnknownError',
    message: typeof err === 'string' ? err : 'An unknown error occurred',
  }
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check for wrangler configuration in apps/api
find apps/api -name "wrangler.toml" -o -name "wrangler.json" -o -name "wrangler.jsonc" | head -20
```

🏁 Script executed:

```bash
#!/bin/bash
# Read the error-handler.ts file
cat -n apps/api/src/middleware/error-handler.ts
```

🏁 Script executed:

```bash
#!/bin/bash
# Search for NODE_ENV and ENVIRONMENT usage in the codebase
rg -n "NODE_ENV|process\.env|c\.env" apps/api/src --type ts --type tsx
```

🏁 Script executed:

```bash
#!/bin/bash
# Search for NODE_ENV and process.env usage - retry without tsx flag
rg -n "NODE_ENV|process\.env|c\.env" apps/api/src -A 2 -B 2
```

🏁 Script executed:

```bash
#!/bin/bash
# Check wrangler.toml configuration
cat apps/api/wrangler.toml
```

🏁 Script executed:

```bash
#!/bin/bash
# Check if NODE_ENV is defined anywhere else (secrets, env file, etc.)
find apps/api -name ".dev.vars*" -o -name ".env*" | xargs cat 2>/dev/null || echo "No env files found"
```

🏁 Script executed:

```bash
#!/bin/bash
# Search for how other middleware or handlers access environment in Hono context
rg -n "c\.env|getConfig\(\)" apps/api/src -A 2 -B 1
```

🏁 Script executed:

```bash
#!/bin/bash
# Check the actual index.ts to see how the app is exported and whether Hono context is available
cat apps/api/src/index.ts
```

🏁 Script executed:

```bash
#!/bin/bash
# Check how errorHandler is used in the app
cat apps/api/src/app.ts | head -100
```
**Fix `process.env.NODE_ENV` access in Cloudflare Workers environment.**

In Cloudflare Workers, `process.env` is not available by default. When Node.js compatibility is turned on and the `nodejs_compat_populate_process_env` compatibility flag is set, `process.env` will contain any environment variables, but your wrangler.toml currently only enables `nodejs_compat` without the required flag. Since `NODE_ENV` is not defined in your vars and the compatibility flag is missing, `process.env.NODE_ENV` will be `undefined`, causing stack traces to never be included even in development.

To fix this, either:

- Add the compatibility flag and `NODE_ENV` to wrangler.toml:

  ```toml
  compatibility_flags = ["nodejs_compat", "nodejs_compat_populate_process_env"]

  [vars]
  NODE_ENV = "development"
  ```

- Or pass an environment flag to the handler: modify `errorHandler` to accept the environment context and update error-handler.ts to receive this parameter instead of relying on `process.env`.
🤖 Prompt for AI Agents

```
In apps/api/src/middleware/error-handler.ts around lines 37-50,
sanitizeErrorForLogging currently checks process.env.NODE_ENV which is not
available in Cloudflare Workers; update the code to stop reading process.env
directly and instead accept an explicit environment flag (e.g., pass a boolean
includeStack or an env string) from the errorHandler entry point, then have the
top-level handler obtain NODE_ENV from the runtime (either via wrangler.toml
[vars] or the Worker context) and pass it down; alternatively, if you prefer to
keep using process.env, update wrangler.toml to include compatibility_flags =
["nodejs_compat", "nodejs_compat_populate_process_env"] and add NODE_ENV =
"development" under [vars] so process.env.NODE_ENV is populated.
```
```ts
export function validateJson<T extends z.ZodType>(schema: T) {
  return zValidator('json', schema, (result, c) => {
    if (!result.success) {
      const firstIssue = result.error.issues[0]
      const field = firstIssue?.path.join('.') || 'body'
      const message = firstIssue?.message || 'Invalid request body'

      return sendError(c, Errors.invalidParams(field, message))
    }
  })
}
```
**Consider reporting all validation errors for better UX.**

The middleware only returns the first validation error (Line 30). This means clients must fix errors one at a time, which degrades developer experience. Consider returning all validation issues in a single response.
Example improvement:

```diff
 export function validateJson<T extends z.ZodType>(schema: T) {
   return zValidator('json', schema, (result, c) => {
     if (!result.success) {
-      const firstIssue = result.error.issues[0]
-      const field = firstIssue?.path.join('.') || 'body'
-      const message = firstIssue?.message || 'Invalid request body'
-
-      return sendError(c, Errors.invalidParams(field, message))
+      const errors = result.error.issues.map(issue => ({
+        field: issue.path.join('.') || 'body',
+        message: issue.message
+      }))
+      // You'd need to extend Errors.invalidParams to support multiple errors
+      return sendError(c, Errors.invalidParams('body', 'Validation failed', errors))
     }
   })
 }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents

```
In apps/api/src/middleware/validate.ts around lines 27–37, the validator
currently returns only the first Zod issue; instead collect all validation
issues by mapping result.error.issues to an array of {field, message} (use
issue.path.join('.') || 'body' and issue.message) and send them in the error
response (either by extending Errors.invalidParams to accept an array or
creating a new Errors.invalidParamsMultiple payload) via sendError(c, ...).
Ensure the returned response contains all issues so clients can fix them at
once and keep existing behavior when result.success is true.
```
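As a reference point, zod already exposes every issue on a failed parse; a standalone sketch (the schema and field names here are illustrative):

```ts
import { z } from 'zod'

const schema = z.object({
  prompt: z.string().min(1),
  width: z.number().int().positive(),
})

const result = schema.safeParse({ prompt: '', width: -1 })

if (!result.success) {
  // All issues at once, instead of only issues[0]
  const errors = result.error.issues.map((issue) => ({
    field: issue.path.join('.') || 'body',
    message: issue.message,
  }))
  console.log(errors) // one entry per invalid field
}
```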
```ts
/** LLM Provider type enum */
export const LLMProviderSchema = z.enum([
  'pollinations',
  'huggingface',
  'gitee',
  'modelscope',
  'deepseek',
])
```
**Missing 'custom' provider in `LLMProviderSchema`.**

The PR objectives mention adding a "Custom LLM Provider" for OpenAI-compatible endpoints, but `'custom'` is not included in this enum. This will cause validation failures when users attempt to use the custom provider for prompt optimization.
```diff
 /** LLM Provider type enum */
 export const LLMProviderSchema = z.enum([
   'pollinations',
   'huggingface',
   'gitee',
   'modelscope',
   'deepseek',
+  'custom',
 ])
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
/** LLM Provider type enum */
export const LLMProviderSchema = z.enum([
  'pollinations',
  'huggingface',
  'gitee',
  'modelscope',
  'deepseek',
  'custom',
])
```
🤖 Prompt for AI Agents

```
In apps/api/src/schemas/index.ts around lines 18 to 25, the LLMProviderSchema
enum is missing the 'custom' provider which will reject the new Custom LLM
Provider; update the z.enum array to include 'custom' alongside the other
provider strings, then run typechecks/tests to ensure validation and any
dependent types or usage sites compile and accept the new value.
```
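The failure mode is easy to reproduce in isolation; a quick sketch using zod's `safeParse`:

```ts
import { z } from 'zod'

const LLMProviderSchema = z.enum([
  'pollinations',
  'huggingface',
  'gitee',
  'modelscope',
  'deepseek',
])

// Without 'custom' in the enum, requests for the new provider are rejected
// at the validation layer before they ever reach the handler.
console.log(LLMProviderSchema.safeParse('custom').success) // false
console.log(LLMProviderSchema.safeParse('deepseek').success) // true
```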
```tsx
  }`}
>
  <Sparkles className="w-3.5 h-3.5" />
  <span>Optimize</span>
```
**Hardcoded English strings should use i18n.**

The tab labels "Optimize" and "Translate" are hardcoded in English while other UI text uses `t()` for internationalization.

```diff
- <span>Optimize</span>
+ <span>{t('settings.optimize')}</span>
```

```diff
- <span>Translate</span>
+ <span>{t('settings.translate')}</span>
```

Also applies to: 241-241
🤖 Prompt for AI Agents

```
In apps/web/src/components/feature/SettingsModal.tsx around lines 229 and 241
the tab labels "Optimize" and "Translate" are hardcoded in English; replace
these hardcoded strings with calls to the i18n translation function (e.g.,
t('...') with appropriate keys like 'settings.optimize' and
'settings.translate'), add or update the corresponding entries in the locale
resource files, and ensure the component imports/uses the t function (or
useTranslation hook) consistently with the surrounding UI text.
```
```tsx
<div className="space-y-4">
  {/* Optimize Provider Selection */}
  <div>
    <Label className="text-zinc-400 text-xs">Provider</Label>
```
**Multiple hardcoded English strings throughout the component.**

Several labels and descriptions are hardcoded in English rather than using the translation function. This affects the i18n support mentioned in the PR objectives. Examples include:

- "Provider" (lines 331, 507)
- "API Base URL" (lines 356, 532)
- "API Key" (lines 369, 545)
- "Model" (lines 383, 435, 559, 611)
- "Custom System Prompt" (line 458)
- "Leave empty to use the default system prompt." (line 482)
- "Auto-Translate" and its description (lines 493-496)
- "Reset" (line 470)
- "(Free)" (lines 344, 520)

Also applies to: 356-356, 369-369, 383-383, 435-435, 458-458, 482-482, 493-497, 507-507, 532-532, 545-545, 559-559, 611-611
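A minimal sketch of the replacement pattern, assuming the project wires `t()` through react-i18next (the key name is illustrative and would need matching entries in en.json and zh.json):

```tsx
import { useTranslation } from 'react-i18next'

export function ProviderLabel() {
  const { t } = useTranslation()
  // Replaces the hardcoded "Provider" string with a locale-aware lookup
  return <label className="text-zinc-400 text-xs">{t('settings.provider')}</label>
}
```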
```ts
const setCustomOptimizeConfig = useCallback(
  (config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
    setLLMSettings((prev) => {
      const newSettings = {
        ...prev,
        customOptimizeConfig: { ...prev.customOptimizeConfig, ...config },
      }
      saveLLMSettings(newSettings)
      return newSettings
    })
  },
  []
)

const setCustomTranslateConfig = useCallback(
  (config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
    setLLMSettings((prev) => {
      const newSettings = {
        ...prev,
        customTranslateConfig: { ...prev.customTranslateConfig, ...config },
      }
      saveLLMSettings(newSettings)
      return newSettings
    })
  },
  []
)
```
**API keys stored without encryption - violates security guideline.**

Per coding guidelines, API keys should be persisted with AES-256-GCM encryption using src/lib/crypto.ts. However, `setCustomOptimizeConfig` and `setCustomTranslateConfig` store the `apiKey` field via `saveLLMSettings`, which uses plain `localStorage.setItem`.

Consider encrypting the API key before storage:

```diff
+import { encryptAndStoreToken, decryptTokenFromStore } from '@/lib/crypto'

 const setCustomOptimizeConfig = useCallback(
   (config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
+    // Encrypt API key if provided
+    if (config.apiKey) {
+      encryptAndStoreToken('custom-optimize', config.apiKey)
+      config = { ...config, apiKey: '' } // Don't store plaintext
+    }
     setLLMSettings((prev) => {
       const newSettings = {
         ...prev,
         customOptimizeConfig: { ...prev.customOptimizeConfig, ...config },
       }
       saveLLMSettings(newSettings)
       return newSettings
     })
   },
   []
 )
```

You'll also need to decrypt when loading settings and when using the config in API calls. Based on learnings, this is a requirement for the codebase.

Committable suggestion skipped: line range outside the PR's diff.
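For orientation, the AES-256-GCM primitive the guideline names looks roughly like this with WebCrypto. The project's src/lib/crypto.ts helpers presumably wrap something similar; everything below is illustrative, not the actual helper:

```ts
// Illustrative AES-256-GCM encryption with WebCrypto; not the project's actual helper.
async function encryptApiKey(key: CryptoKey, plaintext: string): Promise<string> {
  const iv = crypto.getRandomValues(new Uint8Array(12)) // 96-bit nonce, per GCM convention
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(plaintext)
  )
  // Prepend the IV so decryption can recover it later
  const payload = new Uint8Array(iv.length + ciphertext.byteLength)
  payload.set(iv)
  payload.set(new Uint8Array(ciphertext), iv.length)
  return btoa(String.fromCharCode(...Array.from(payload)))
}

// Usage: generate (or derive) a 256-bit key; real code would derive and persist it
const key = await crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, true, [
  'encrypt',
  'decrypt',
])
const stored = await encryptApiKey(key, 'sk-example')
```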
```ts
/** Custom LLM provider configuration */
export interface CustomLLMConfig {
  /** API base URL (e.g., https://api.openai.com/v1) */
  baseUrl: string
  /** API key */
  apiKey: string
  /** Model name */
  model: string
}
```
🛠️ Refactor suggestion | 🟠 Major

**Duplicate type definition - import from shared package instead.**

`CustomLLMConfig` is already defined in packages/shared/src/types/llm.ts (lines 14-21). This duplication can lead to drift between definitions.

```diff
 import {
   type AspectRatioConfig,
+  type CustomLLMConfig,
   DEFAULT_OPTIMIZE_SYSTEM_PROMPT,
   getModelsByProvider,
   LLM_PROVIDER_CONFIGS,
   type LLMProviderType,
   MODEL_CONFIGS,
   PROVIDER_CONFIGS,
   type ProviderType,
   ASPECT_RATIOS as SHARED_ASPECT_RATIOS,
 } from '@z-image/shared'

-/** Custom LLM provider configuration */
-export interface CustomLLMConfig {
-  /** API base URL (e.g., https://api.openai.com/v1) */
-  baseUrl: string
-  /** API key */
-  apiKey: string
-  /** Model name */
-  model: string
-}
+// Re-export from shared
+export type { CustomLLMConfig }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents

```
In apps/web/src/lib/constants.ts around lines 103 to 111, remove the local
CustomLLMConfig type and replace its usage with the canonical type exported
from packages/shared/src/types/llm.ts: import { CustomLLMConfig } from 'shared'
(or the package's actual name/path), update any local references to use the
imported type, and ensure your tsconfig/paths or package.json points to the
shared package so the import resolves; run TypeScript typecheck to confirm no
other duplicates remain.
```
```ts
// Get token for translate provider
let token: string | undefined
switch (llmSettings.translateProvider) {
  case 'gitee-llm':
    token = await decryptTokenFromStore('gitee')
    break
  case 'modelscope-llm':
    token = await decryptTokenFromStore('modelscope')
    break
  case 'huggingface-llm':
    token = await decryptTokenFromStore('huggingface')
    break
  case 'deepseek':
    token = await decryptTokenFromStore('deepseek')
    break
}

const result = await translatePrompt(
  {
    prompt,
    provider: llmSettings.translateProvider,
    model: llmSettings.translateModel,
  },
  token
)
```
**Same issue: missing 'custom' case and customConfig for translate handler.**

The `handleTranslate` function has the same issue - it doesn't handle the `'custom'` provider case explicitly and doesn't pass `customConfig` to the API.
```diff
   case 'deepseek':
     token = await decryptTokenFromStore('deepseek')
     break
+  case 'custom':
+    // Custom provider uses apiKey from customConfig, not authToken
+    break
 }

 const result = await translatePrompt(
   {
     prompt,
     provider: llmSettings.translateProvider,
-    model: llmSettings.translateModel,
+    model: llmSettings.translateProvider === 'custom'
+      ? llmSettings.customTranslateConfig.model
+      : llmSettings.translateModel,
+    customConfig: llmSettings.translateProvider === 'custom'
+      ? llmSettings.customTranslateConfig
+      : undefined,
   },
   token
 )
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// Get token for translate provider
let token: string | undefined
switch (llmSettings.translateProvider) {
  case 'gitee-llm':
    token = await decryptTokenFromStore('gitee')
    break
  case 'modelscope-llm':
    token = await decryptTokenFromStore('modelscope')
    break
  case 'huggingface-llm':
    token = await decryptTokenFromStore('huggingface')
    break
  case 'deepseek':
    token = await decryptTokenFromStore('deepseek')
    break
  case 'custom':
    // Custom provider uses apiKey from customConfig, not authToken
    break
}

const result = await translatePrompt(
  {
    prompt,
    provider: llmSettings.translateProvider,
    model: llmSettings.translateProvider === 'custom'
      ? llmSettings.customTranslateConfig.model
      : llmSettings.translateModel,
    customConfig: llmSettings.translateProvider === 'custom'
      ? llmSettings.customTranslateConfig
      : undefined,
  },
  token
)
```
🤖 Prompt for AI Agents

```
In apps/web/src/pages/FlowPageV2.tsx around lines 289-313, the translate
handler fails to handle the 'custom' provider and doesn't forward customConfig
to the translate API; add a 'case "custom"' in the switch that retrieves the
stored token (e.g., await decryptTokenFromStore('custom')) like the other
providers, and include the custom config when calling translatePrompt by adding
the custom config value from llmSettings (e.g., translateCustomConfig or
similar) into the request object passed to translatePrompt so the API receives
provider, model, and customConfig along with the token.
```