
feat: major feature updates - i18n, LLM providers, video generation, and Flow improvements #60

Merged
WuMingDao merged 9 commits into main from dev
Dec 17, 2025
Conversation

WuMingDao (Owner) commented Dec 17, 2025

Summary

  • i18n Support: Added internationalization support to ImageGenerator page components with Chinese and English locales
  • Custom LLM Provider: Added OpenAI-compatible LLM provider with auto model fetching for prompt optimization
  • Image-to-Video Generation: New feature for generating videos from images via HuggingFace
  • Flow Page V2: Complete rewrite with conversation-style UI, branching support, and improved persistence
  • Token Rotation: Automatic failover on rate limits with token rotation support
  • UI Improvements:
    • Settings modal with tabs instead of accordion
    • Image comparison component for upscale preview
    • Fullscreen preview with zoom, pan, and keyboard shortcuts
    • Right-click context menu for node deletion
  • Storage Improvements: Dual-limit storage (500 images / 4GB) with LRU cleanup
  • Code Quality: Comprehensive code review improvements and refactoring
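
The dual-limit storage bullet above combines a count cap (500 images) with a byte cap (4GB). A minimal sketch of how such an LRU eviction pass could work, with illustrative types and names rather than the PR's actual implementation:

```typescript
// Illustrative sketch of dual-limit LRU eviction (not the PR's actual code):
// evict least-recently-used entries until both the count limit and the byte
// limit are satisfied.
type StoredImage = { id: string; bytes: number; lastUsed: number }

function planEviction(
  images: StoredImage[],
  maxImages = 500,
  maxBytes = 4 * 1024 ** 3 // 4 GB
): string[] {
  // Oldest first, so the least recently used entries are dropped first.
  const sorted = [...images].sort((a, b) => a.lastUsed - b.lastUsed)
  let count = sorted.length
  let total = sorted.reduce((sum, img) => sum + img.bytes, 0)
  const evicted: string[] = []
  for (const img of sorted) {
    if (count <= maxImages && total <= maxBytes) break
    evicted.push(img.id)
    count -= 1
    total -= img.bytes
  }
  return evicted
}
```

Both limits are checked together, so a store full of many small images and a store holding a few huge ones are handled by the same pass.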

Test plan

  • Verify i18n switching works correctly between Chinese and English
  • Test custom LLM provider configuration and model fetching
  • Test image-to-video generation workflow
  • Verify Flow page V2 functionality including branching and persistence
  • Test token rotation on rate limit scenarios
  • Verify storage limits and cleanup behavior

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Custom OpenAI-compatible LLM provider support for Optimize/Translate and model discovery
    • OpenAPI spec and interactive Swagger UI for the API
    • Settings modal with provider/model selection and custom provider configuration
    • Rate limiting for key API endpoints
  • Improvements

    • Wider internationalization (i18n) coverage across UI
    • Better validation, error messages, and timeout/retry guidance
    • Added temperature control for LLM requests


WuMingDao and others added 8 commits December 16, 2025 23:07
- Add rate limiting middleware with configurable presets
- Cache Cloudflare Workers app instance to avoid recreation per request
- Fix timeout middleware memory leak by cleaning up event listeners
- Remove duplicate callGradioApi implementation, consolidate in utils
- Sanitize error logs to prevent sensitive data leakage
- Enhance Provider Registry with type-safe satisfies pattern
- Add size limit (10MB) and streaming to image proxy endpoint
- Add Retry-After header for video task polling guidance
- Extract magic numbers to named constants (MAX_INT32, etc.)
- Add Zod validation schemas and middleware helpers
- Add OpenAPI/Swagger documentation infrastructure

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix PromptCard Switch className bug (invalid conditional string)
- Add response.ok check for proxy fetch in useImageGenerator
- Refactor api.ts with shared apiRequest wrapper, reducing ~150 lines of duplicate code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Refactor api.ts to use shared runWithTokenRotation function, reducing ~100 lines of duplicate token rotation logic
- Add basic token format validation in parseTokens (min 8 chars, alphanumeric with common special chars)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
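
As a rough illustration of the consolidation described in this commit, a shared rotation helper can look like the following sketch. The names and the retry predicate are assumptions, not the repo's exact API:

```typescript
// Illustrative shared token-rotation helper: try each token in turn and fall
// through to the next one only on rate-limit style failures.
type RotationResult<T> =
  | { success: true; data: T }
  | { success: false; error: string }

async function runWithTokenRotation<T>(
  tokens: string[],
  call: (token: string) => Promise<T>,
  isRetryable: (err: unknown) => boolean = (err) =>
    err instanceof Error && err.message.includes('429')
): Promise<RotationResult<T>> {
  let lastError = 'No tokens available'
  for (const token of tokens) {
    try {
      // Success on any token short-circuits the rotation.
      return { success: true, data: await call(token) }
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err)
      // Only rate-limit style failures fall through to the next token.
      if (!isRetryable(err)) break
    }
  }
  return { success: false, error: lastError }
}
```

Centralizing the loop like this is what lets the per-endpoint functions shrink to a single `call` closure each.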
- Create shared SettingsModal component with tab-based UI
- Add API Config and LLM tabs with direct content display
- Persist selected tab to localStorage across page refreshes
- Update Header component to match FlowPageV2 style
- Reuse SettingsModal in both ImageGenerator and FlowPageV2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add 'custom' provider type for OpenAI-compatible APIs
- Create custom LLM provider backend implementation
- Add /api/custom-models endpoint to fetch available models
- Update SettingsModal with custom provider config UI
- Auto-fetch models when baseUrl and apiKey are filled
- Show model selector dropdown when models are available
- Support separate custom configs for optimize and translate
- Add refresh button to manually re-fetch models

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add i18n to Header.tsx (subtitle, flowMode)
- Add i18n to PromptCard.tsx (all labels, buttons, placeholders)
- Add i18n to ImageResultCard.tsx (all UI text, tooltips, toasts)
- Add i18n to StatusCard.tsx (title)
- Add translation keys to en.json and zh.json

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@cloudflare-workers-and-pages

cloudflare-workers-and-pages Bot commented Dec 17, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

Status: ✅ Deployment successful! (View logs)
Name: zenith-image-generator-api
Latest commit: c27f7fd
Updated (UTC): Dec 17 2025, 01:51 PM

@vercel

vercel Bot commented Dec 17, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: zenith-image-generator-web
Deployment: Ready
Review: Preview, Comment
Updated (UTC): Dec 17, 2025 1:51pm

@netlify

netlify Bot commented Dec 17, 2025

Deploy Preview for zenith-image-generator ready!

🔨 Latest commit: c27f7fd
🔍 Latest deploy log: https://app.netlify.com/projects/zenith-image-generator/deploys/6942b54c7d16860008bcd731
😎 Deploy Preview: https://deploy-preview-60--zenith-image-generator.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.

@coderabbitai

coderabbitai Bot commented Dec 17, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds custom OpenAI‑compatible LLM provider support, centralized API constants and Zod schemas, sliding-window rate limiting and validation middleware, OpenAPI/Swagger endpoints, enhanced provider error parsing, client token rotation + custom model discovery, widespread i18n, SettingsModal, and removal of legacy flow pages.

Changes

  • API deps & constants — apps/api/package.json, apps/api/src/constants.ts: New dependencies for OpenAPI/Zod; added MAX_INT32, TIMEOUTS, BODY_LIMITS, IMAGE_PROXY, VIDEO.POLL_INTERVAL, PROMPT_LIMITS.
  • Custom LLM provider & registry — apps/api/src/llm-providers/custom.ts, apps/api/src/llm-providers/registry.ts, packages/shared/src/types/llm.ts: New CustomLLMProvider (singleton) with CustomProviderConfig; registry updated to include custom; shared types extended with CustomLLMConfig and custom provider.
  • LLM temperature & types — apps/api/src/llm-providers/*.ts, apps/api/src/llm-providers/types.ts: Added optional temperature?: number to the request shape and applied defaulting (0.7) across providers.
  • Middleware — apps/api/src/middleware/{rate-limit,validate,error-handler,timeout}.ts, apps/api/src/middleware/index.ts: Added in-memory sliding-window rate limiter with presets; Zod-based validateJson/validateQuery/validateParam; sanitized error logging; timeout teardown refactor; exports updated.
  • Schemas & OpenAPI — apps/api/src/schemas/index.ts, apps/api/src/openapi/{index,routes}.ts: Centralized Zod schemas for requests/responses; OpenAPI config, /doc and /ui endpoints, and route metadata definitions.
  • API endpoints & app bootstrap — apps/api/src/app.ts, apps/api/src/index.ts: Added POST /custom-models; /optimize and /translate extended to support custom provider flows and customConfig; image proxy size limit and streaming; video polling Retry-After; app instance/CORS caching; rate-limit middleware applied.
  • Provider implementations & utils — apps/api/src/providers/{gitee,huggingface}.ts, apps/api/src/providers/registry.ts, apps/api/src/utils/gradio.ts: Replaced hardcoded seed with MAX_INT32; HuggingFace now uses callGradioApi and improved error parsing; added parseHuggingFaceError and PROVIDER_NAME mapping; provider registry made type-safe.
  • Web API client & token rotation — apps/web/src/lib/api.ts, apps/web/src/lib/tokenRotation.ts: Introduced apiRequest wrapper, token-rotation helpers, translatePrompt with token rotation and custom provider support, fetchCustomModels; token parsing now validates token format.
  • Web settings, hooks & constants — apps/web/src/lib/constants.ts, apps/web/src/hooks/{useImageGenerator,usePromptTranslate}.ts: Added CustomLLMConfig and new LLM settings fields (translateProvider/model, custom configs); hooks expose setters for translate/custom configs and handle custom provider flows; robust localStorage handling.
  • UI components & i18n — apps/web/src/components/feature/{Header,ImageResultCard,PromptCard,StatusCard,SettingsModal}.tsx: Header refactor (props, LanguageSwitcher); SettingsModal added (api/optimize/translate tabs, dynamic model fetching); many components wired to i18n keys.
  • Pages & flow removal — apps/web/src/pages/{ImageGenerator,FlowPageV2,FlowPage,ConversationFlowPage}.tsx: Replaced accordions with SettingsModal in ImageGenerator and FlowPageV2; removed legacy FlowPage and ConversationFlowPage files.
  • Locales & tests — apps/web/src/locales/{en,zh}.json, apps/web/src/lib/__tests__/tokenRotation.test.ts: Added many new translation keys; updated tokenRotation tests to reflect stricter token validation.
  • CI workflows — .github/workflows/*.yml: Moved Wrangler v4 installation into an explicit pnpm step and removed wranglerVersion inputs.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant API as API Server
    participant Validate as Zod Middleware
    participant RateLimit as RateLimit Middleware
    participant Custom as CustomLLMProvider
    participant Response

    Client->>API: POST /optimize { provider, customConfig?, prompt }
    API->>RateLimit: apply preset
    API->>Validate: validateJson(OptimizeRequestSchema)
    alt invalid
        Validate-->>Client: 400 Invalid params
    else
        Validate->>API: pass
        API->>Custom: validate customConfig (baseUrl, apiKey, model)
        alt invalid customConfig
            Custom-->>Client: 400 Invalid customConfig
        else
            Custom->>Custom: build /v1/chat/completions request
            Custom->>Custom: POST (apiKey auth)
            alt HTTP error (401/429/402)
                Custom-->>Response: map to auth/rate/quota error
                Response-->>Client: error (401/429/402)
            else success
                Custom->>API: { content, model }
                API-->>Client: 200 { optimized, provider: 'custom', model }
            end
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

  • Areas requiring extra attention:
    • apps/api/src/llm-providers/custom.ts — URL normalization, auth header handling, and error mapping
    • apps/api/src/middleware/rate-limit.ts — correctness of sliding-window logic, cleanup interval, memory growth
    • apps/api/src/schemas/index.ts & openapi/routes.ts — ensure schema constraints and OpenAPI metadata match runtime behavior
    • apps/web/src/lib/api.ts — token rotation, error mapping, and custom provider request shapes
    • apps/web/src/components/feature/SettingsModal.tsx — async model fetching, state persistence, and integration with hooks

Suggested labels

documentation

Poem

🐰 A curious hop, a new config in paw,
I fetch custom models and tidy the law.
Rate limits patter like raindrops so neat,
Schemas and locales keep journeys complete.
Swagger lights paths the users will draw ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 79.17%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed — The PR title comprehensively summarizes the major features added: i18n support, LLM provider enhancements, video generation, and Flow improvements, which align with the substantial changes across multiple files.

📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 81833d3 and c27f7fd.

📒 Files selected for processing (4)
  • .github/workflows/deploy-cf-pages.yml (1 hunks)
  • .github/workflows/deploy-cf-workers.yml (1 hunks)
  • .github/workflows/preview.yml (1 hunks)
  • apps/web/src/lib/__tests__/tokenRotation.test.ts (2 hunks)


- Update tokenRotation tests to use valid token formats (min 8 chars)
- Fix wrangler installation in GitHub Actions by pre-installing with -w flag
- Remove wranglerVersion parameter as wrangler is now installed manually

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@cloudflare-workers-and-pages
Copy link
Copy Markdown

cloudflare-workers-and-pages Bot commented Dec 17, 2025

Deploying zenith-image-generator with Cloudflare Pages

Latest commit: c27f7fd
Status: ✅  Deploy successful!
Preview URL: https://8ee65079.zenith-image-generator.pages.dev
Branch Preview URL: https://dev.zenith-image-generator.pages.dev

View logs

@github-actions

🚀 Preview Deployment

Status: ✅ Ready
URL: https://adc056b2.zenith-image-generator.pages.dev

Commit: 813c890

@WuMingDao WuMingDao merged commit 2d937fc into main Dec 17, 2025
17 checks passed

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 12

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/web/src/pages/FlowPageV2.tsx (1)

237-252: Add missing 'custom' case and pass customConfig to optimizePrompt

The switch statement (line 237-252) doesn't handle the 'custom' provider case, and the optimizePrompt call doesn't pass customConfig. Without this, custom provider validation in the API will fail.

Update the switch to include the custom case:

        case 'deepseek':
          token = await decryptTokenFromStore('deepseek')
          break
+       case 'custom':
+         // Custom provider uses apiKey from customConfig, not authToken
+         break

And pass customConfig to the API call:

       const result = await optimizePrompt(
         {
           prompt,
           provider: llmSettings.llmProvider,
-          model: llmSettings.llmModel,
+          model: llmSettings.llmProvider === 'custom' 
+            ? llmSettings.customOptimizeConfig.model 
+            : llmSettings.llmModel,
           lang: 'en',
           systemPrompt: getEffectiveSystemPrompt(llmSettings.customSystemPrompt),
+          customConfig: llmSettings.llmProvider === 'custom' 
+            ? llmSettings.customOptimizeConfig 
+            : undefined,
         },
         token
       )
🧹 Nitpick comments (18)
apps/web/src/hooks/usePromptTranslate.ts (1)

54-88: Consider extracting callback dependencies to reduce re-renders.

The useCallback dependency array includes the entire options object, which may cause the translate callback to be recreated on every render if the parent component doesn't memoize options.

Consider extracting the individual callbacks as dependencies:

  const translate = useCallback(
    async (prompt: string): Promise<TranslateResponse | null> => {
      // ... implementation
    },
-   [options]
+   [options.onSuccess, options.onError]
  )

This ensures the callback only recreates when the actual callback references change, not when a new options object is created.

packages/shared/src/constants/llm-providers.ts (1)

97-98: Consider using optional fields for better type safety.

The empty strings for url and defaultModel satisfy TypeScript's type system but represent semantically invalid values. While functional with the current customConfig pattern, this could lead to subtle bugs if code assumes all providers in LLM_PROVIDER_CONFIGS have valid values.

Consider one of these approaches for better type safety:

Option 1: Make fields optional in the type

type LLMProviderConfig = {
  // ... other fields
  url?: string;
  defaultModel?: string;
}

Option 2: Use placeholder constants

const CUSTOM_PROVIDER_PLACEHOLDER = '__USER_PROVIDED__';

custom: {
  id: 'custom',
  name: 'Custom (OpenAI Compatible)',
  url: CUSTOM_PROVIDER_PLACEHOLDER,
  defaultModel: CUSTOM_PROVIDER_PLACEHOLDER,
  // ...
}

This makes the special case more explicit and easier to validate against.

apps/web/src/lib/tokenRotation.ts (2)

83-83: Remove redundant length check.

The condition t.length > 0 is redundant since isValidTokenFormat(t) already enforces token.length >= 8.

Apply this diff to simplify the filter:

-    .filter((t) => t.length > 0 && isValidTokenFormat(t))
+    .filter(isValidTokenFormat)

68-73: Consider adding user feedback for rejected tokens during parsing.

The validation regex /^[a-zA-Z0-9_\-:.]+$/ is actually appropriate and covers common API token formats (including OpenAI's sk-proj-... format with hyphens). However, parseTokens() silently filters tokens without notifying users, which could make troubleshooting difficult if someone provides a token with unexpected characters or length < 8.

Consider logging rejected tokens or adding a console warning to help users diagnose configuration issues.
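
A small sketch of the suggested change, reusing the validation rules quoted above (function names are illustrative, not the repo's exact exports):

```typescript
// Sketch of the suggestion: keep the existing format filter but surface
// rejected tokens via console.warn so misconfigurations are easier to spot.
const TOKEN_FORMAT = /^[a-zA-Z0-9_\-:.]+$/

function isValidTokenFormat(token: string): boolean {
  return token.length >= 8 && TOKEN_FORMAT.test(token)
}

function parseTokens(raw: string): string[] {
  const candidates = raw
    .split(/[\n,]/)
    .map((t) => t.trim())
    .filter((t) => t.length > 0)
  const valid = candidates.filter(isValidTokenFormat)
  const rejected = candidates.length - valid.length
  if (rejected > 0) {
    // Helps users diagnose why a configured token is being ignored.
    console.warn(`parseTokens: ignored ${rejected} malformed token(s)`)
  }
  return valid
}
```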

apps/web/src/pages/ImageGenerator.tsx (1)

120-139: Consider refactoring to reduce prop drilling.

The SettingsModal receives 18 props, which creates tight coupling and makes the component harder to maintain. Consider using a compound component pattern or passing a settings context object instead.

Example refactoring approach:

<SettingsModal
  isOpen={showSettings}
  onClose={() => setShowSettings(false)}
  settings={{
    provider: { value: provider, models: availableModels, onChange: setProvider },
    model: { value: model, onChange: setModel },
    token: { current: currentToken, onSave: saveToken },
    llm: llmSettings,
    // ... group related settings
  }}
/>
apps/api/src/openapi/index.ts (1)

77-92: Consider gating Swagger UI behind environment check.

The Swagger UI endpoint is exposed unconditionally. While the comment mentions it "can be removed in production," it's safer to gate it behind an environment variable to prevent accidental exposure of API documentation in production.

Apply this diff:

 export function createOpenAPIApp() {
   const app = new OpenAPIHono()
 
   // Add OpenAPI documentation endpoint
   app.doc('/doc', openApiConfig)
 
-  // Add Swagger UI (optional, can be removed in production)
-  app.get(
-    '/ui',
-    swaggerUI({
-      url: '/api/doc',
-    })
-  )
+  // Add Swagger UI in development only
+  if (process.env.NODE_ENV !== 'production') {
+    app.get(
+      '/ui',
+      swaggerUI({
+        url: '/api/doc',
+      })
+    )
+  }
 
   return app
 }
apps/api/src/middleware/validate.ts (1)

54-91: Consider extracting common validation logic.

The three validation functions share identical error handling logic, differing only in the validated target ('json', 'query', 'param') and default field names. Extracting this logic would reduce duplication and improve maintainability.

Example refactoring:

function createValidator<T extends z.ZodType>(
  target: 'json' | 'query' | 'param',
  defaultField: string
) {
  return (schema: T) =>
    zValidator(target, schema, (result, c) => {
      if (!result.success) {
        const firstIssue = result.error.issues[0]
        const field = firstIssue?.path.join('.') || defaultField
        const message = firstIssue?.message || `Invalid ${defaultField}`
        return sendError(c, Errors.invalidParams(field, message))
      }
    })
}

export const validateJson = createValidator('json', 'body')
export const validateQuery = createValidator('query', 'query')
export const validateParam = createValidator('param', 'param')
apps/api/src/middleware/rate-limit.ts (1)

29-46: In-memory rate limiting won't work correctly in distributed Cloudflare Workers.

While the comment acknowledges this limitation, the in-memory Map will result in inconsistent rate limiting across multiple Worker instances. Each instance maintains its own state, allowing clients to bypass limits by hitting different instances.

For production deployment:

  1. Use Cloudflare Rate Limiting API (simplest, built-in)
  2. Use Cloudflare KV for distributed state (eventual consistency)
  3. Use Durable Objects for strongly consistent rate limiting (recommended)

The current implementation is acceptable for development or single-instance deployments.
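
For reference, the per-instance sliding-window logic under review boils down to something like this sketch (names are illustrative, not the PR's actual API):

```typescript
// Illustrative sliding-window limiter: store recent request timestamps per
// key; allow a request only when fewer than `limit` fall inside the trailing
// window. Each Worker instance would hold its own `windows` map, which is
// exactly the distribution caveat discussed above.
const windows = new Map<string, number[]>()

function isAllowed(
  key: string,
  limit: number,
  windowMs: number,
  now = Date.now()
): boolean {
  const cutoff = now - windowMs
  // Drop timestamps that have aged out of the window.
  const recent = (windows.get(key) ?? []).filter((t) => t > cutoff)
  if (recent.length >= limit) {
    windows.set(key, recent)
    return false
  }
  recent.push(now)
  windows.set(key, recent)
  return true
}
```

Moving the same state into a Durable Object keyed by client would make the decision globally consistent without changing this core logic.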

apps/web/src/components/feature/SettingsModal.tsx (3)

149-178: Auto-fetch effects may trigger multiple parallel requests.

The condition optimizeCustomModels.length === 0 && !isLoadingOptimizeModels creates a potential race: if fetchOptimizeModels sets loading state and the effect re-runs before state updates propagate, multiple requests could fire. Consider using a ref to track in-flight requests or debouncing.
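
One way to implement the suggested in-flight guard is a generic single-flight wrapper, sketched here with illustrative names rather than the component's actual code:

```typescript
// Illustrative single-flight guard: while one call is in flight, repeated
// triggers get the same promise instead of firing a parallel request.
function singleFlight<T>(fn: () => Promise<T>): () => Promise<T> {
  let inFlight: Promise<T> | null = null
  return () => {
    if (!inFlight) {
      inFlight = fn().finally(() => {
        inFlight = null // allow a fresh fetch once this one settles
      })
    }
    return inFlight
  }
}
```

In the React effect, the wrapped function would be held in a ref so re-renders reuse the same guard instead of recreating it.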


193-202: Modal lacks keyboard accessibility.

The modal doesn't handle Escape key to close, which is a common accessibility pattern. Consider adding a keyboard event listener.

+  // Close on Escape key
+  useEffect(() => {
+    const handleKeyDown = (e: KeyboardEvent) => {
+      if (e.key === 'Escape') {
+        onClose()
+      }
+    }
+    if (isOpen) {
+      document.addEventListener('keydown', handleKeyDown)
+      return () => document.removeEventListener('keydown', handleKeyDown)
+    }
+  }, [isOpen, onClose])
+
   if (!isOpen) return null

194-194: Modal backdrop doesn't close modal on click.

Clicking outside the modal content (on the backdrop) should close it for better UX. The outer div with bg-black/50 should have an onClick handler.

-    <div className="fixed inset-0 bg-black/50 flex items-center justify-center z-50">
+    <div 
+      className="fixed inset-0 bg-black/50 flex items-center justify-center z-50"
+      onClick={onClose}
+    >
-      <div className="bg-zinc-900 border border-zinc-800 rounded-xl w-full max-w-md mx-4 max-h-[80vh] overflow-hidden flex flex-col">
+      <div 
+        className="bg-zinc-900 border border-zinc-800 rounded-xl w-full max-w-md mx-4 max-h-[80vh] overflow-hidden flex flex-col"
+        onClick={(e) => e.stopPropagation()}
+      >
apps/api/src/llm-providers/custom.ts (2)

63-73: URL normalization logic has edge cases.

The normalization could fail for URLs like https://api.example.com/api (would become /api/v1/chat/completions incorrectly) or https://api.example.com/v1/ (trailing slash after v1). Consider a more robust approach:

-    // Normalize base URL - ensure it ends with /chat/completions
-    let url = baseUrl.trim()
-    if (url.endsWith('/')) {
-      url = url.slice(0, -1)
-    }
-    if (!url.endsWith('/chat/completions')) {
-      if (!url.endsWith('/v1')) {
-        url = `${url}/v1`
-      }
-      url = `${url}/chat/completions`
-    }
+    // Normalize base URL - ensure it ends with /chat/completions
+    let url = baseUrl.trim().replace(/\/+$/, '') // Remove trailing slashes
+    if (url.endsWith('/chat/completions')) {
+      // Already complete
+    } else if (url.endsWith('/v1')) {
+      url = `${url}/chat/completions`
+    } else {
+      url = `${url}/v1/chat/completions`
+    }

75-90: Missing timeout for fetch request.

The fetch call has no timeout, which could cause requests to hang indefinitely if the custom provider is unresponsive. Consider using AbortController with a timeout.

+    const controller = new AbortController()
+    const timeoutId = setTimeout(() => controller.abort(), 30000) // 30s timeout
+
+    try {
       const response = await fetch(url, {
         method: 'POST',
         headers: {
           'Content-Type': 'application/json',
           Authorization: `Bearer ${apiKey}`,
         },
+        signal: controller.signal,
         body: JSON.stringify({
           model,
           messages: [
             { role: 'system', content: request.systemPrompt },
             { role: 'user', content: request.prompt },
           ],
           max_tokens: request.maxTokens || 1000,
           temperature: request.temperature ?? 0.7,
         }),
       })
+    } catch (err) {
+      if (err instanceof Error && err.name === 'AbortError') {
+        throw Errors.timeout(this.name)
+      }
+      throw err
+    } finally {
+      clearTimeout(timeoutId)
+    }
apps/api/src/openapi/routes.ts (2)

91-124: Missing 401 and 429 error responses for optimize and translate routes.

These routes can also fail with authentication errors (401) when using providers that require auth, and rate limiting (429). Consider adding these response schemas for completeness.

   responses: {
     200: {
       description: 'Prompt optimized successfully',
       content: {
         'application/json': {
           schema: OptimizeResponseSchema,
         },
       },
     },
     400: {
       description: 'Invalid request parameters',
       content: {
         'application/json': {
           schema: ErrorResponseSchema,
         },
       },
     },
+    401: {
+      description: 'Authentication required',
+      content: {
+        'application/json': {
+          schema: ErrorResponseSchema,
+        },
+      },
+    },
+    429: {
+      description: 'Rate limit exceeded',
+      content: {
+        'application/json': {
+          schema: ErrorResponseSchema,
+        },
+      },
+    },
   },

Also applies to: 130-163


291-343: Consider extracting inline schemas to the schemas module for consistency.

The provider and model list routes define inline schemas while other routes import from ../schemas. For consistency and reusability, consider defining ProvidersResponseSchema and ModelsResponseSchema in the schemas module.

apps/api/src/app.ts (1)

300-326: Type assertion for custom provider complete() is fragile.

The type cast as unknown as { complete: ... } bypasses type safety. Consider defining a proper interface for the custom provider or using a discriminated union pattern.

-      const result =
-        provider === 'custom'
-          ? await (
-              llmProvider as unknown as {
-                complete: (
-                  req: Parameters<typeof llmProvider.complete>[0],
-                  config?: typeof customConfig
-                ) => ReturnType<typeof llmProvider.complete>
-              }
-            ).complete(
-              {
-                prompt,
-                systemPrompt: finalSystemPrompt,
-                model: customConfig?.model,
-                maxTokens: 1000,
-              },
-              customConfig
-            )
-          : await llmProvider.complete({
+      // Consider creating a type-safe interface in the provider registry
+      const result = await llmProvider.complete({
+        prompt,
+        systemPrompt: finalSystemPrompt,
+        model: provider === 'custom' ? customConfig?.model : model,
+        authToken,
+        maxTokens: 1000,
+        ...(provider === 'custom' && customConfig ? { customConfig } : {}),
+      })
apps/web/src/lib/api.ts (1)

311-375: Consider extracting common LLM request pattern.

optimizePrompt and translatePrompt share nearly identical structure:

  1. Custom provider validation and direct call
  2. No-auth provider direct call
  3. Token rotation for auth-required providers

Consider extracting a generic helper:

async function llmRequestWithRotation<TOptions, TResponse>(
  options: TOptions & { provider?: LLMProviderType; customConfig?: CustomLLMConfig },
  tokenOrTokens: string | string[] | undefined,
  singleCallFn: (opts: TOptions, token: string | null) => Promise<TResponse>
): Promise<ApiResponse<TResponse>> {
  const { provider = 'pollinations', customConfig } = options
  const providerConfig = LLM_PROVIDER_CONFIGS[provider]
  const tokenProvider = getLLMTokenProvider(provider)
  const allTokens = Array.isArray(tokenOrTokens) ? tokenOrTokens : tokenOrTokens ? [tokenOrTokens] : []

  // Custom provider flow
  if (provider === 'custom') {
    if (!customConfig?.baseUrl || !customConfig?.apiKey || !customConfig?.model) {
      return { success: false, error: 'Please configure custom provider URL, API key, and model' }
    }
    try {
      return { success: true, data: await singleCallFn(options, null) }
    } catch (err) {
      return { success: false, error: err instanceof Error ? err.message : 'Network error' }
    }
  }

  // No-auth provider flow
  if (!providerConfig?.needsAuth) {
    try {
      return { success: true, data: await singleCallFn(options, null) }
    } catch (err) {
      return { success: false, error: err instanceof Error ? err.message : 'Network error' }
    }
  }

  // Token rotation flow
  if (allTokens.length === 0 || !tokenProvider) {
    return { success: false, error: `Please configure your ${provider} token first` }
  }

  const result = await runWithTokenRotation(tokenProvider, allTokens, (t) => singleCallFn(options, t), { allowAnonymous: false })
  return result.success ? { success: true, data: result.data } : { success: false, error: result.error }
}

This would reduce ~130 lines to ~60 lines and ensure consistent behavior.

Also applies to: 420-484

apps/api/src/schemas/index.ts (1)

36-39: Use PROMPT_LIMITS constant for consistency.

The prompt max length is hardcoded to 4000 here, but OptimizeRequestSchema uses PROMPT_LIMITS.OPTIMIZE. Using the constant ensures consistency and easier maintenance if limits change.

   prompt: z
     .string()
     .min(1, 'Prompt is required')
-    .max(4000, 'Prompt must be less than 4000 characters'),
+    .max(PROMPT_LIMITS.OPTIMIZE, `Prompt must be less than ${PROMPT_LIMITS.OPTIMIZE} characters`),
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 32c47be and 81833d3.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (42)
  • apps/api/package.json (1 hunks)
  • apps/api/src/app.ts (11 hunks)
  • apps/api/src/constants.ts (1 hunks)
  • apps/api/src/index.ts (1 hunks)
  • apps/api/src/llm-providers/custom.ts (1 hunks)
  • apps/api/src/llm-providers/deepseek.ts (1 hunks)
  • apps/api/src/llm-providers/gitee-llm.ts (1 hunks)
  • apps/api/src/llm-providers/huggingface-llm.ts (1 hunks)
  • apps/api/src/llm-providers/modelscope-llm.ts (1 hunks)
  • apps/api/src/llm-providers/pollinations.ts (1 hunks)
  • apps/api/src/llm-providers/registry.ts (2 hunks)
  • apps/api/src/llm-providers/types.ts (1 hunks)
  • apps/api/src/middleware/error-handler.ts (1 hunks)
  • apps/api/src/middleware/index.ts (1 hunks)
  • apps/api/src/middleware/rate-limit.ts (1 hunks)
  • apps/api/src/middleware/timeout.ts (1 hunks)
  • apps/api/src/middleware/validate.ts (1 hunks)
  • apps/api/src/openapi/index.ts (1 hunks)
  • apps/api/src/openapi/routes.ts (1 hunks)
  • apps/api/src/providers/gitee.ts (2 hunks)
  • apps/api/src/providers/huggingface.ts (2 hunks)
  • apps/api/src/providers/registry.ts (1 hunks)
  • apps/api/src/schemas/index.ts (1 hunks)
  • apps/api/src/utils/gradio.ts (4 hunks)
  • apps/web/src/components/feature/Header.tsx (1 hunks)
  • apps/web/src/components/feature/ImageResultCard.tsx (25 hunks)
  • apps/web/src/components/feature/PromptCard.tsx (10 hunks)
  • apps/web/src/components/feature/SettingsModal.tsx (1 hunks)
  • apps/web/src/components/feature/StatusCard.tsx (2 hunks)
  • apps/web/src/hooks/useImageGenerator.ts (7 hunks)
  • apps/web/src/hooks/usePromptTranslate.ts (1 hunks)
  • apps/web/src/lib/api.ts (14 hunks)
  • apps/web/src/lib/constants.ts (3 hunks)
  • apps/web/src/lib/tokenRotation.ts (1 hunks)
  • apps/web/src/locales/en.json (2 hunks)
  • apps/web/src/locales/zh.json (2 hunks)
  • apps/web/src/pages/ConversationFlowPage.tsx (0 hunks)
  • apps/web/src/pages/FlowPage.tsx (0 hunks)
  • apps/web/src/pages/FlowPageV2.tsx (5 hunks)
  • apps/web/src/pages/ImageGenerator.tsx (4 hunks)
  • packages/shared/src/constants/llm-providers.ts (1 hunks)
  • packages/shared/src/types/llm.ts (3 hunks)
💤 Files with no reviewable changes (2)
  • apps/web/src/pages/ConversationFlowPage.tsx
  • apps/web/src/pages/FlowPage.tsx
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{ts,tsx,js,jsx,json}

📄 CodeRabbit inference engine (CLAUDE.md)

Use Biome for all linting and formatting operations (pnpm lint, pnpm format, pnpm check)

Files:

  • apps/api/src/providers/gitee.ts
  • apps/api/src/llm-providers/types.ts
  • apps/api/src/middleware/error-handler.ts
  • apps/api/src/llm-providers/huggingface-llm.ts
  • apps/api/src/middleware/timeout.ts
  • apps/api/src/middleware/validate.ts
  • apps/web/src/lib/tokenRotation.ts
  • apps/web/src/hooks/usePromptTranslate.ts
  • apps/api/src/llm-providers/deepseek.ts
  • apps/api/src/index.ts
  • apps/api/src/llm-providers/registry.ts
  • apps/api/src/middleware/index.ts
  • apps/web/src/locales/zh.json
  • apps/api/src/llm-providers/custom.ts
  • apps/api/src/llm-providers/gitee-llm.ts
  • apps/api/src/utils/gradio.ts
  • apps/web/src/pages/ImageGenerator.tsx
  • apps/web/src/components/feature/PromptCard.tsx
  • apps/web/src/components/feature/StatusCard.tsx
  • packages/shared/src/constants/llm-providers.ts
  • apps/api/src/llm-providers/pollinations.ts
  • packages/shared/src/types/llm.ts
  • apps/api/src/providers/huggingface.ts
  • apps/api/src/middleware/rate-limit.ts
  • apps/web/src/components/feature/SettingsModal.tsx
  • apps/web/src/lib/constants.ts
  • apps/web/src/components/feature/ImageResultCard.tsx
  • apps/api/src/app.ts
  • apps/api/src/constants.ts
  • apps/api/package.json
  • apps/api/src/providers/registry.ts
  • apps/api/src/openapi/routes.ts
  • apps/web/src/hooks/useImageGenerator.ts
  • apps/web/src/lib/api.ts
  • apps/web/src/components/feature/Header.tsx
  • apps/web/src/pages/FlowPageV2.tsx
  • apps/web/src/locales/en.json
  • apps/api/src/schemas/index.ts
  • apps/api/src/llm-providers/modelscope-llm.ts
  • apps/api/src/openapi/index.ts
apps/api/src/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

apps/api/src/**/*.ts: POST /api/generate endpoint must support 'gitee' and 'huggingface' as provider parameters, require X-API-Key header for Gitee AI, and support optional X-HF-Token header for HuggingFace
Use Hono API framework for Cloudflare Workers implementation in the API app (apps/api)

Files:

  • apps/api/src/providers/gitee.ts
  • apps/api/src/llm-providers/types.ts
  • apps/api/src/middleware/error-handler.ts
  • apps/api/src/llm-providers/huggingface-llm.ts
  • apps/api/src/middleware/timeout.ts
  • apps/api/src/middleware/validate.ts
  • apps/api/src/llm-providers/deepseek.ts
  • apps/api/src/index.ts
  • apps/api/src/llm-providers/registry.ts
  • apps/api/src/middleware/index.ts
  • apps/api/src/llm-providers/custom.ts
  • apps/api/src/llm-providers/gitee-llm.ts
  • apps/api/src/utils/gradio.ts
  • apps/api/src/llm-providers/pollinations.ts
  • apps/api/src/providers/huggingface.ts
  • apps/api/src/middleware/rate-limit.ts
  • apps/api/src/app.ts
  • apps/api/src/constants.ts
  • apps/api/src/providers/registry.ts
  • apps/api/src/openapi/routes.ts
  • apps/api/src/schemas/index.ts
  • apps/api/src/llm-providers/modelscope-llm.ts
  • apps/api/src/openapi/index.ts
apps/web/src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

apps/web/src/**/*.{ts,tsx}: Use @/ path alias for imports in frontend code (maps to src/)
Persist settings and API keys to localStorage with AES-256-GCM encryption (use src/lib/crypto.ts)
Persist Flow mode state using IndexedDB storage (via src/lib/flow-storage.ts) for nodes, edges, and images
Configure API URL via VITE_API_URL environment variable, defaulting to relative path for same-origin deployment

Files:

  • apps/web/src/lib/tokenRotation.ts
  • apps/web/src/hooks/usePromptTranslate.ts
  • apps/web/src/pages/ImageGenerator.tsx
  • apps/web/src/components/feature/PromptCard.tsx
  • apps/web/src/components/feature/StatusCard.tsx
  • apps/web/src/components/feature/SettingsModal.tsx
  • apps/web/src/lib/constants.ts
  • apps/web/src/components/feature/ImageResultCard.tsx
  • apps/web/src/hooks/useImageGenerator.ts
  • apps/web/src/lib/api.ts
  • apps/web/src/components/feature/Header.tsx
  • apps/web/src/pages/FlowPageV2.tsx
apps/web/src/**/*.{tsx,ts}

📄 CodeRabbit inference engine (CLAUDE.md)

Use React 19 with Vite, Tailwind CSS, and shadcn/ui components in the frontend (apps/web)

Files:

  • apps/web/src/lib/tokenRotation.ts
  • apps/web/src/hooks/usePromptTranslate.ts
  • apps/web/src/pages/ImageGenerator.tsx
  • apps/web/src/components/feature/PromptCard.tsx
  • apps/web/src/components/feature/StatusCard.tsx
  • apps/web/src/components/feature/SettingsModal.tsx
  • apps/web/src/lib/constants.ts
  • apps/web/src/components/feature/ImageResultCard.tsx
  • apps/web/src/hooks/useImageGenerator.ts
  • apps/web/src/lib/api.ts
  • apps/web/src/components/feature/Header.tsx
  • apps/web/src/pages/FlowPageV2.tsx
🧠 Learnings (7)
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/api/src/**/*.ts : POST /api/generate endpoint must support 'gitee' and 'huggingface' as provider parameters, require X-API-Key header for Gitee AI, and support optional X-HF-Token header for HuggingFace

Applied to files:

  • apps/api/src/providers/gitee.ts
  • apps/api/src/llm-providers/custom.ts
  • apps/api/src/llm-providers/gitee-llm.ts
  • apps/api/src/utils/gradio.ts
  • packages/shared/src/types/llm.ts
  • apps/api/src/providers/huggingface.ts
  • apps/api/src/app.ts
  • apps/web/src/hooks/useImageGenerator.ts
  • apps/web/src/lib/api.ts
  • apps/api/src/schemas/index.ts
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Implement provider abstraction pattern for API: define provider interface in apps/api/src/providers/types.ts, implement specific providers in individual files, and register/lookup providers in apps/api/src/providers/registry.ts

Applied to files:

  • apps/api/src/llm-providers/registry.ts
  • apps/api/src/llm-providers/custom.ts
  • packages/shared/src/constants/llm-providers.ts
  • apps/api/src/providers/registry.ts
  • apps/web/src/lib/api.ts
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Organize shared code in packages/shared with type definitions in src/types/, provider configs in src/constants/, and validation utilities in src/utils/

Applied to files:

  • apps/api/src/middleware/index.ts
  • apps/api/src/providers/registry.ts
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{tsx,ts} : Use React 19 with Vite, Tailwind CSS, and shadcn/ui components in the frontend (apps/web)

Applied to files:

  • apps/web/src/components/feature/PromptCard.tsx
  • apps/web/src/pages/FlowPageV2.tsx
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{ts,tsx} : Persist settings and API keys to localStorage with AES-256-GCM encryption (use src/lib/crypto.ts)

Applied to files:

  • apps/web/src/components/feature/SettingsModal.tsx
  • apps/web/src/hooks/useImageGenerator.ts
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/api/src/**/*.ts : Use Hono API framework for Cloudflare Workers implementation in the API app (apps/api)

Applied to files:

  • apps/api/package.json
  • apps/api/src/openapi/routes.ts
  • apps/api/src/openapi/index.ts
📚 Learning: 2025-12-12T02:11:01.015Z
Learnt from: CR
Repo: WuMingDao/zenith-image-generator PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-12T02:11:01.015Z
Learning: Applies to apps/web/src/**/*.{ts,tsx} : Persist Flow mode state using IndexedDB storage (via src/lib/flow-storage.ts) for nodes, edges, and images

Applied to files:

  • apps/web/src/pages/FlowPageV2.tsx
🧬 Code graph analysis (16)
apps/api/src/providers/gitee.ts (1)
apps/api/src/constants.ts (1)
  • MAX_INT32 (8-8)
apps/api/src/middleware/error-handler.ts (1)
apps/api/src/middleware/request-id.ts (1)
  • requestId (18-27)
apps/api/src/middleware/validate.ts (2)
apps/api/src/middleware/error-handler.ts (1)
  • sendError (28-32)
packages/shared/src/types/error.ts (1)
  • Errors (95-156)
apps/web/src/hooks/usePromptTranslate.ts (1)
apps/web/src/lib/api.ts (1)
  • translatePrompt (420-484)
apps/api/src/index.ts (2)
apps/api/src/app.ts (1)
  • createApp (56-767)
apps/api/src/config.ts (1)
  • getCorsOriginsFromBindings (48-56)
apps/api/src/llm-providers/registry.ts (1)
apps/api/src/llm-providers/custom.ts (1)
  • customLLMProvider (129-129)
apps/api/src/llm-providers/custom.ts (2)
apps/api/src/llm-providers/types.ts (3)
  • LLMProvider (30-37)
  • LLMCompleteRequest (6-19)
  • LLMCompleteResult (22-27)
packages/shared/src/types/error.ts (1)
  • Errors (95-156)
apps/api/src/utils/gradio.ts (1)
packages/shared/src/types/error.ts (1)
  • Errors (95-156)
packages/shared/src/types/llm.ts (1)
apps/web/src/lib/constants.ts (2)
  • CustomLLMConfig (104-111)
  • LLMProviderType (22-22)
apps/api/src/providers/huggingface.ts (1)
apps/api/src/constants.ts (1)
  • MAX_INT32 (8-8)
apps/api/src/middleware/rate-limit.ts (2)
apps/api/src/middleware/error-handler.ts (1)
  • sendError (28-32)
packages/shared/src/types/error.ts (1)
  • Errors (95-156)
apps/web/src/lib/constants.ts (1)
packages/shared/src/types/llm.ts (2)
  • CustomLLMConfig (15-22)
  • LLMProviderType (6-12)
apps/web/src/components/feature/ImageResultCard.tsx (1)
apps/web/src/components/ui/card.tsx (1)
  • CardTitle (75-75)
apps/web/src/hooks/useImageGenerator.ts (4)
apps/web/src/lib/constants.ts (4)
  • LLMProviderType (22-22)
  • getDefaultLLMModel (99-101)
  • saveLLMSettings (188-192)
  • getEffectiveSystemPrompt (195-198)
packages/shared/src/types/llm.ts (1)
  • LLMProviderType (6-12)
apps/web/src/lib/crypto.ts (1)
  • loadTokensArray (94-101)
apps/web/src/lib/api.ts (1)
  • translatePrompt (420-484)
apps/web/src/components/feature/Header.tsx (1)
apps/web/src/components/ui/LanguageSwitcher.tsx (1)
  • LanguageSwitcher (4-23)
apps/api/src/schemas/index.ts (1)
apps/api/src/constants.ts (1)
  • PROMPT_LIMITS (45-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Workers Builds: zenith-image-generator-api
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (68)
apps/api/src/middleware/timeout.ts (1)

17-38: Excellent refactor for proper resource cleanup!

The changes improve the timeout middleware by introducing explicit lifecycle management for the abort handler:

  1. Named abort handler (lines 20-23): Using a named function instead of an inline handler enables proper cleanup via removeEventListener.

  2. Stored reject callback (lines 17-18, 28): Capturing the reject function allows the abort handler to trigger timeout errors while maintaining the ability to clean up the reference later.

  3. Deterministic cleanup (lines 37-38): The finally block now properly removes the event listener and nulls out the reject reference, preventing memory leaks from accumulated listeners or dangling function references.

  4. Safe invocation (line 22): Optional chaining on rejectTimeout?.() gracefully handles the edge case where cleanup has already occurred but the abort event still fires.

This pattern ensures that whether the request completes successfully, times out, or errors, all resources are properly released.
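The lifecycle steps above can be sketched as a standalone helper. This is an illustration of the pattern, not the actual middleware in apps/api/src/middleware/timeout.ts; the function name and shape are assumptions:

```typescript
// Sketch: named abort handler + stored reject callback, both released in finally.
async function withTimeout<T>(
  task: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController()
  let rejectTimeout: ((err: Error) => void) | null = null

  // Named handler so it can later be removed via removeEventListener.
  const onAbort = () => {
    // Optional chaining: safe even if cleanup already nulled the reference.
    rejectTimeout?.(new Error('Request timed out'))
  }
  controller.signal.addEventListener('abort', onAbort)

  const timer = setTimeout(() => controller.abort(), ms)
  try {
    return await Promise.race([
      task(controller.signal),
      new Promise<never>((_, reject) => {
        rejectTimeout = reject // captured so onAbort can trigger the timeout error
      }),
    ])
  } finally {
    clearTimeout(timer)
    controller.signal.removeEventListener('abort', onAbort) // no accumulated listeners
    rejectTimeout = null // drop the dangling function reference
  }
}
```

An inline arrow handler could not be removed later, which is exactly why the named function matters here.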

apps/web/src/hooks/usePromptTranslate.ts (1)

67-67: Verify that TranslateOptions includes the prompt field.

The change to pass { prompt } aligns with the TranslateOptions API signature. The type definition in apps/web/src/lib/api.ts confirms that TranslateOptions includes prompt as a required string field, along with optional provider and other configuration options. The code is correct.

packages/shared/src/constants/llm-providers.ts (1)

94-101: Custom provider implementation properly validates configuration - no changes needed.

The custom provider configuration correctly uses empty strings as placeholders. The custom.ts implementation explicitly requires customConfig to be passed at runtime and validates that baseUrl, apiKey, and model are all provided before use. The provider never uses the empty values from LLM_PROVIDER_CONFIGS directly, and validation occurs at the entry point of the complete() method, ensuring safe handling of the special case.

apps/api/src/middleware/error-handler.ts (1)

59-61: Good security practice: sanitizing error logs.

The approach of sanitizing errors before logging is excellent and helps prevent accidental leakage of sensitive information (such as API keys or tokens) that might be embedded in error messages or stack traces.

apps/api/src/utils/gradio.ts (3)

7-47: LGTM! Well-structured error parsing.

The parseHuggingFaceError helper provides comprehensive error mapping with clear logic for rate limits, quota, authentication, timeouts, and service availability. The use of a PROVIDER_NAME constant ensures consistency across error messages.


52-85: LGTM! Improved error handling in SSE extraction.

The updates to extractCompleteEventData properly leverage parseHuggingFaceError for both parsed error data and syntax-error fallbacks, providing more specific error context than the previous generic provider errors.


90-121: LGTM! Consistent error handling in API calls.

The callGradioApi function now uses parseHuggingFaceError for queue failures with both error text and status code, enabling more accurate error classification. The event_id check maintains the existing provider error pattern.

apps/api/src/constants.ts (1)

1-50: LGTM! Excellent centralization of constants.

This new constants module follows best practices by:

  • Eliminating magic numbers throughout the codebase
  • Using as const for type safety and immutability
  • Providing clear documentation for each constant
  • Using readable numeric literals with underscores (e.g., 120_000)

The values are all reasonable for their respective use cases. MAX_INT32 correctly represents 2³¹-1.
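A minimal sketch of those conventions (the timeout names and values are illustrative, not the module's actual contents; only MAX_INT32 is taken from the review):

```typescript
// MAX_INT32 is 2^31 - 1; underscore separators keep large literals readable.
const MAX_INT32 = 2_147_483_647

// `as const` makes the object deeply readonly and narrows each property
// to its literal type, so misuse is caught at compile time.
const TIMEOUTS = {
  /** Budget for generation requests (ms). */
  GENERATE: 120_000,
  /** Budget for lightweight reads (ms). */
  READONLY: 10_000,
} as const
```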

apps/web/src/locales/en.json (3)

16-17: LGTM! Clear flow mode labels added.

The new header keys tryFlowMode and flowMode provide clear navigation labels for the experimental flow feature.


25-37: LGTM! Enhanced prompt configuration labels.

The new prompt keys support inference steps, dimensions, and LLM-powered translation/optimization features, aligning with PR objectives for custom LLM provider support.


55-85: LGTM! Comprehensive result view enhancements.

The extensive additions to the result section support:

  • Video generation workflow with status messages
  • Image comparison and upscaling features
  • Fullscreen viewer with keyboard shortcuts
  • Detailed metadata labels

All strings are clear, grammatically correct, and align with the PR's UI improvement objectives.

apps/api/src/providers/gitee.ts (1)

7-7: LGTM! Clean refactor to use centralized constant.

Replacing the hardcoded 2147483647 with MAX_INT32 improves maintainability and aligns with the new centralized constants module. The seed generation logic remains functionally identical.

Also applies to: 74-74

apps/api/src/llm-providers/huggingface-llm.ts (1)

60-60: LGTM! Added configurable temperature parameter.

The change from hardcoded 0.7 to request.temperature ?? 0.7 adds flexibility while maintaining backward compatibility with the same default value. This aligns with the temperature support added across other LLM providers in the PR.

apps/api/src/llm-providers/gitee-llm.ts (1)

53-53: LGTM! Temperature parameter added consistently.

The addition of temperature: request.temperature ?? 0.7 to the Gitee LLM request body provides configurable temperature control while defaulting to 0.7, maintaining consistency with other LLM providers in this PR.

apps/api/src/llm-providers/modelscope-llm.ts (1)

53-53: LGTM! Temperature parameter completes consistent API across providers.

The addition of temperature: request.temperature ?? 0.7 to the ModelScope LLM request body completes the temperature parameter rollout across all LLM providers (HuggingFace, Gitee, and ModelScope), ensuring consistent configurability with a sensible default.

apps/api/package.json (1)

19-24: All package versions confirmed to exist and are secure.

@hono/swagger-ui@0.5.2 exists and is the latest version. zod@4.2.1 was published recently and has no known vulnerabilities. hono@4.11.1 was published 3 days ago; the security vulnerabilities affecting hono versions >=1.1.0 <4.10.2 were patched in >=4.10.2, so hono@4.11.0 and later are secure. The Hono middleware packages (@hono/zod-openapi, @hono/zod-validator) are part of the stable honojs ecosystem and present no security concerns.

apps/api/src/llm-providers/pollinations.ts (1)

39-39: LGTM! Temperature parameter added correctly.

The addition of the temperature parameter with a sensible default of 0.7 aligns with the updated LLMCompleteRequest interface and follows best practices for LLM generation control.

apps/api/src/llm-providers/deepseek.ts (1)

52-52: LGTM! Consistent temperature parameter implementation.

The temperature parameter addition matches the pattern used in other providers and provides appropriate control over generation determinism.

apps/web/src/components/feature/StatusCard.tsx (1)

1-14: LGTM! Clean internationalization implementation.

The i18n integration is straightforward and follows React best practices. The translation key 'status.title' is appropriately scoped.

apps/api/src/llm-providers/types.ts (1)

17-18: LGTM! Well-documented type extension.

The optional temperature field is properly documented and enables consistent temperature control across all LLM providers while maintaining backward compatibility.

apps/api/src/llm-providers/registry.ts (1)

6-21: LGTM! Custom provider registration follows established patterns.

The addition of the custom LLM provider to the registry is consistent with the existing provider registration pattern and properly enables the new OpenAI-compatible provider feature.

apps/api/src/providers/huggingface.ts (3)

6-7: LGTM! Improved maintainability through centralized utilities.

Importing MAX_INT32 and callGradioApi promotes code reuse and centralizes Gradio API interaction logic across providers.


50-50: LGTM! Consistent seed generation using shared constant.

Using MAX_INT32 instead of a hardcoded magic number improves maintainability and ensures consistency across providers.


58-75: Error handling is properly centralized and covers all HuggingFace-specific scenarios.

The refactoring successfully delegates error handling to callGradioApi and parseHuggingFaceError without losing coverage. Verification confirms all critical error paths are handled:

  • Rate limiting & quota: Detected via status codes (429) and message patterns (lines 16, 21)
  • Authentication: 401/403 status codes and message patterns (lines 26-33)
  • Timeout & service unavailability: Message patterns and status 503 (lines 36-43)
  • SSE stream errors: Parsed from 'error' events with JSON fallback (lines 64-76)
  • Queue initialization failures: Validated event_id presence (line 112)
  • Network errors: Handled in queue POST request (lines 105-108)

All error types map to appropriate Errors helpers for consistent client-facing responses.

apps/api/src/index.ts (1)

15-36: The app caching optimization is correctly implemented.

The implementation properly identifies CORS_ORIGINS as the only environment binding affecting app behavior. The Env interface defines only CORS_ORIGINS, the createApp function receives pre-processed configuration and does not access environment bindings directly, and the cache invalidation logic correctly handles the only dynamic configuration that exists.

apps/api/src/middleware/index.ts (1)

8-12: Middleware exports are properly implemented.

All five middleware modules (rate-limit, request-id, security, timeout, validate) exist and export expected functions. The rate-limit module exports the rateLimit() function and rateLimitPresets used throughout the app, and the validate module exports validateJson(), validateQuery(), and validateParam() for request validation. The barrel export pattern correctly expands the middleware API surface and integrates properly with the Hono API framework.

apps/web/src/pages/ImageGenerator.tsx (2)

1-10: LGTM!

The imports and state initialization follow React 19 best practices and the @/ path alias convention specified in the coding guidelines.


67-67: LGTM!

The Header integration with the settings modal follows a clean callback pattern.

apps/api/src/openapi/index.ts (2)

1-10: LGTM!

The imports and barrel export pattern are clean and follow modern ES module conventions.


15-54: LGTM!

The OpenAPI configuration correctly documents the API security schemes. The X-API-Key requirement for Gitee AI and optional X-HF-Token for HuggingFace align with the coding guidelines.

packages/shared/src/types/llm.ts (3)

12-22: LGTM!

The CustomLLMConfig interface appropriately defines the structure for custom OpenAI-compatible providers. Note that per the coding guidelines, consumers of this type must encrypt API keys using AES-256-GCM when persisting to storage.


53-66: LGTM!

The optional customConfig field is appropriately designed for the custom provider use case. The comment clearly documents when this field is required.


92-110: LGTM!

The TranslateRequest and TranslateResponse changes align the translation API with the optimization API, providing consistent support for custom LLM providers. The optional fields maintain backward compatibility.

apps/web/src/locales/zh.json (1)

1-175: LGTM!

The Chinese localization file is well-structured and syntactically valid. The new translation keys align with the i18n implementation across the web application.

apps/api/src/providers/registry.ts (2)

11-23: LGTM!

The use of TypeScript's satisfies operator with as const provides excellent type safety while preserving literal types. This prevents runtime errors from unregistered providers.


25-42: LGTM!

The registry functions maintain runtime safety checks while leveraging TypeScript's type system for compile-time safety. The type predicate in hasProvider enables proper type narrowing in consumer code.
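The pattern praised here can be sketched as follows. The Provider shape and provider entries are illustrative stand-ins, not the repo's actual types:

```typescript
interface Provider {
  generate(prompt: string): string
}

// `satisfies` checks each entry against Provider without widening,
// while `as const` preserves the literal key names.
const providers = {
  gitee: { generate: (p: string) => `gitee:${p}` },
  huggingface: { generate: (p: string) => `hf:${p}` },
} as const satisfies Record<string, Provider>

type ProviderName = keyof typeof providers // 'gitee' | 'huggingface'

// Type predicate: callers can narrow an arbitrary string to ProviderName.
function hasProvider(name: string): name is ProviderName {
  return name in providers
}

function getProvider(name: string): Provider {
  if (!hasProvider(name)) throw new Error(`Unknown provider: ${name}`)
  return providers[name]
}
```

The compile-time key union plus the runtime guard is what prevents lookups of unregistered providers from slipping through.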

apps/api/src/middleware/rate-limit.ts (3)

48-78: LGTM!

The key generation strategy appropriately prioritizes API keys over IP addresses and hashes keys for privacy. The simple hash function (lines 70-78) is sufficient for rate limiting purposes and prevents logging sensitive API keys.


83-133: LGTM!

The sliding window rate limiting implementation is correct. The middleware properly:

  • Resets windows when expired
  • Increments counts before checking limits
  • Sets standard rate limit headers
  • Returns appropriate errors with retry-after information
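The bullets above can be reduced to an in-memory counter sketch. This is a simplification under assumed names; the middleware's real store, key hashing, and header handling are richer:

```typescript
interface RateWindow { count: number; resetAt: number }
const windows = new Map<string, RateWindow>()

function checkRateLimit(key: string, limit: number, windowMs: number, now: number) {
  let w = windows.get(key)
  if (!w || now >= w.resetAt) {
    // Window expired: reset it before counting.
    w = { count: 0, resetAt: now + windowMs }
    windows.set(key, w)
  }
  w.count += 1 // increment before checking the limit, as noted above
  const allowed = w.count <= limit
  return {
    allowed,
    remaining: Math.max(0, limit - w.count),
    retryAfterMs: allowed ? 0 : w.resetAt - now, // feeds the Retry-After header
  }
}
```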

135-162: LGTM!

The rate limit presets are well-calibrated for different endpoint types, with stricter limits for resource-intensive operations (video: 5/min, generate: 10/min) and relaxed limits for lightweight operations (readonly: 60/min).

apps/web/src/components/feature/PromptCard.tsx (3)

2-2: LGTM!

The i18n integration using react-i18next is correctly implemented and follows React 19 best practices. The translation keys align with the locale files reviewed earlier.

Also applies to: 63-63


71-256: LGTM!

All user-facing strings are properly translated using i18next. The translations follow a consistent pattern and match the locale file keys.


170-170: Nice improvement with data attribute selectors!

The Switch className now uses Tailwind's data attribute selectors instead of conditional logic, making the styling more declarative and easier to maintain.

apps/web/src/components/feature/SettingsModal.tsx (2)

1-27: LGTM! Imports are well-organized and use the @/ path alias correctly.

The imports follow the coding guidelines for using @/ path alias for frontend code. Good separation of external libraries, UI components, and local utilities.


79-95: Tab persistence using localStorage is appropriate for non-sensitive UI state.

The tab selection persistence correctly uses plain localStorage since it only stores a UI preference ('api', 'optimize', or 'translate'), not sensitive data like API keys. Per coding guidelines, encryption is required for settings and API keys, but UI state like active tab doesn't require encryption.

apps/web/src/pages/FlowPageV2.tsx (2)

185-211: LGTM! Custom config setters correctly persist to storage.

The setCustomOptimizeConfig and setCustomTranslateConfig callbacks properly merge partial updates and persist to storage via saveLLMSettings. The empty dependency array is acceptable since these only use setLLMSettings (stable setter) and saveLLMSettings (module-level function).


587-612: LGTM! SettingsModal integration is complete.

All required props are properly passed to the SettingsModal component, including the new custom config setters. The wiring correctly connects the local state management to the modal's callbacks.

apps/web/src/components/feature/Header.tsx (2)

1-11: LGTM! Clean component structure with proper TypeScript interface.

The imports use the @/ path alias as required by coding guidelines. The HeaderProps interface clearly defines optional props with sensible defaults via optional chaining in usage.


13-59: LGTM! Well-structured header with full i18n support.

The component correctly uses the translation function for all user-facing text (header.title, header.subtitle, common.api, header.flowMode). The three-column flex layout is clean and responsive.

apps/api/src/llm-providers/custom.ts (2)

92-116: LGTM! Error handling is comprehensive.

Good mapping of HTTP status codes to appropriate error types (401→authInvalid, 429→rateLimited, 402→quotaExceeded). The fallback to providerError with parsed message provides useful debugging information.


118-126: LGTM! Response parsing with proper null checks.

The optional chaining on data.choices?.[0]?.message?.content correctly handles malformed responses, and the empty response case throws an appropriate error.

apps/api/src/openapi/routes.ts (3)

1-23: LGTM! Well-organized OpenAPI route setup.

The imports are clean, separating the route creation utility from the application schemas. The file header clearly documents its purpose.


28-85: LGTM! Generate route with comprehensive error responses.

The generate route correctly documents all relevant HTTP status codes (200, 400, 401, 429, 502) with appropriate descriptions. This aligns with the error handling in the provider implementations.


270-274: The response headers definition is correct for @hono/zod-openapi.

The headers property using z.object() is the intended pattern for this library. @hono/zod-openapi v1.1.5 uses @asteasolutions/zod-to-openapi to automatically convert Zod schemas to OpenAPI 3.1.0 format, including response headers. The code type-checks successfully under strict TypeScript mode and passes CI, confirming the approach is valid and properly typed.

Likely an incorrect or invalid review comment.

apps/web/src/components/feature/ImageResultCard.tsx (2)

18-56: LGTM! Clean i18n integration.

The useTranslation hook is properly imported and initialized. The t function is correctly added to dependency arrays where translation strings are used in callbacks (lines 124, 146, 156).


264-264: Consistent localization applied to all UI strings.

All user-facing strings throughout the component (card title, tooltips, info panel labels, status messages, keyboard shortcut hints, etc.) are properly localized using the t() function with appropriate translation keys.

apps/api/src/app.ts (3)

104-111: Good addition of rate limiting middleware.

Rate limiting is properly applied to resource-intensive endpoints with appropriate limits (generate: 10/min, optimize: 20/min, video: 5/min, readonly: 60/min). This helps prevent abuse and protects upstream providers.


644-677: Good defensive measures for image proxy.

The content-length check before streaming prevents memory exhaustion from large payloads, and streaming avoids buffering the entire image in memory. The 10MB limit is reasonable for images.


741-758: Helpful Retry-After header for polling guidance.

Adding Retry-After header when status is pending/processing is a good practice that helps clients implement proper backoff without hardcoding intervals.

apps/web/src/lib/constants.ts (1)

156-185: Well-structured settings migration with proper defaults.

The loadLLMSettings function correctly handles backwards compatibility by providing defaults for new fields (translateProvider, translateModel, customOptimizeConfig, customTranslateConfig) when loading old settings data.
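The migration strategy described above amounts to spreading stored data over a complete defaults object. A hedged sketch, with field names borrowed from the comment but defaults that are assumptions:

```typescript
interface LLMSettings {
  provider: string
  translateProvider: string
  translateModel: string
}

const DEFAULTS: LLMSettings = {
  provider: 'pollinations',
  translateProvider: 'pollinations',
  translateModel: 'default',
}

function loadLLMSettings(raw: string | null): LLMSettings {
  if (!raw) return { ...DEFAULTS }
  try {
    // Defaults first, stored values second: settings saved by older
    // versions (missing the newer fields) still yield a complete object.
    return { ...DEFAULTS, ...(JSON.parse(raw) as Partial<LLMSettings>) }
  } catch {
    return { ...DEFAULTS } // corrupted storage falls back to defaults
  }
}
```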

apps/web/src/hooks/useImageGenerator.ts (4)

51-58: Good defensive parsing for localStorage.

Adding try/catch around localStorage parsing prevents crashes from corrupted or malformed stored data.


247-250: Good fix - only create blob URL on successful fetch.

Previously, the code would attempt to create a blob URL even if the fetch failed. Now it correctly checks response.ok before processing.


372-396: Clean abstraction for LLM token retrieval.

The getTokensForLLMProvider helper consolidates the mapping logic and getTranslateTokens properly uses it. The dependency arrays are correct.


441-480: Translation handler properly supports custom provider flow.

The handleTranslate callback correctly:

  • Uses getTranslateTokens() for the translation provider
  • Passes customConfig when provider is 'custom'
  • Uses the custom model from config when applicable
apps/web/src/lib/api.ts (3)

56-82: Good centralized API request wrapper.

The apiRequest function provides:

  • Consistent error parsing via parseErrorResponse
  • Structured error objects with status and code
  • Type-safe return values

This reduces boilerplate across all API calls.


187-196: Clean token rotation integration.

The generateImage function now uses runWithTokenRotation for automatic failover on rate limits. The allowAnonymous option correctly reflects the provider's auth requirements.


486-513: Custom models endpoint integration looks correct.

fetchCustomModels properly uses the apiRequest wrapper and returns the expected ApiResponse shape. Error handling is consistent with other functions.

apps/api/src/schemas/index.ts (2)

138-145: Verify video provider restriction against HuggingFace support.

The schema restricts video generation to only 'gitee' provider using z.literal(), but the PR objectives mention "Image-to-Video Generation: New workflow to generate videos from images via HuggingFace." If HuggingFace should also support video generation, this schema needs updating.

If HuggingFace video support is planned:

 export const VideoGenerateRequestSchema = z.object({
-  provider: z.literal('gitee'),
+  provider: z.enum(['gitee', 'huggingface']),
   imageUrl: z.string().url('Invalid image URL'),
   prompt: z.string().min(1, 'Prompt is required'),
   width: z.number().int().min(256).max(2048),
   height: z.number().int().min(256).max(2048),
 })

1-9: Well-structured schema organization.

Good use of centralized Zod schemas with proper TypeScript type inference exports. The section comments and consistent naming conventions improve maintainability.

Comment thread apps/api/src/app.ts
Comment on lines +170 to +236
// Fetch models from custom OpenAI-compatible provider
app.post('/custom-models', async (c) => {
let body: { baseUrl: string; apiKey: string }
try {
body = await c.req.json()
} catch {
return sendError(c, Errors.invalidParams('body', 'Invalid JSON body'))
}

const { baseUrl, apiKey } = body

if (!baseUrl || !apiKey) {
return sendError(c, Errors.invalidParams('body', 'baseUrl and apiKey are required'))
}

// Normalize base URL to get /models endpoint
let url = baseUrl.trim()
if (url.endsWith('/')) {
url = url.slice(0, -1)
}
// Remove /chat/completions if present
if (url.endsWith('/chat/completions')) {
url = url.slice(0, -'/chat/completions'.length)
}
// Ensure /v1 is present
if (!url.endsWith('/v1')) {
url = `${url}/v1`
}
url = `${url}/models`

try {
const response = await fetch(url, {
method: 'GET',
headers: {
Authorization: `Bearer ${apiKey}`,
},
})

if (!response.ok) {
const errorText = await response.text().catch(() => 'Unknown error')
if (response.status === 401) {
return sendError(c, Errors.authInvalid('Custom Provider'))
}
return sendError(
c,
Errors.providerError('Custom Provider', `Failed to fetch models: ${errorText}`)
)
}

const data = (await response.json()) as { data?: Array<{ id: string; owned_by?: string }> }
const models = (data.data || []).map((m) => ({
id: m.id,
name: m.id,
owned_by: m.owned_by,
}))

return c.json({ models })
} catch (err) {
return sendError(
c,
Errors.providerError(
'Custom Provider',
err instanceof Error ? err.message : 'Failed to fetch models'
)
)
}
})

⚠️ Potential issue | 🟠 Major

Potential SSRF vulnerability in custom-models endpoint.

The endpoint fetches from a user-provided baseUrl without validation. An attacker could use this to:

  1. Probe internal network services (e.g., http://localhost:6379, http://169.254.169.254/)
  2. Scan internal infrastructure
  3. Exfiltrate data through DNS or HTTP

Consider adding URL validation:

+// Allowlist for custom provider URLs
+function isAllowedCustomProviderUrl(url: string): boolean {
+  try {
+    const parsed = new URL(url)
+    // Block private/internal IPs
+    const hostname = parsed.hostname.toLowerCase()
+    if (
+      hostname === 'localhost' ||
+      hostname === '127.0.0.1' ||
+      hostname.startsWith('192.168.') ||
+      hostname.startsWith('10.') ||
+      /^172\.(1[6-9]|2[0-9]|3[01])\./.test(hostname) || // 172.16.0.0/12 only, not all of 172.*
+      hostname === '169.254.169.254' ||
+      hostname.endsWith('.local')
+    ) {
+      return false
+    }
+    // Require HTTPS for external providers
+    return parsed.protocol === 'https:'
+  } catch {
+    return false
+  }
+}

 // Fetch models from custom OpenAI-compatible provider
 app.post('/custom-models', async (c) => {
   // ... validation ...
+
+  if (!isAllowedCustomProviderUrl(baseUrl)) {
+    return sendError(c, Errors.invalidParams('baseUrl', 'Invalid or disallowed URL'))
+  }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/api/src/app.ts around lines 170 to 236, the custom-models endpoint
currently fetches a user-supplied baseUrl without validation which creates an
SSRF risk; validate and sanitize baseUrl before fetching by parsing it with URL,
rejecting non-http(s) schemes and any host that resolves to loopback,
link-local, private RFC1918 ranges, or known cloud metadata addresses (e.g.
169.254.169.254), and disallow IP-literal hosts if they resolve to private
ranges; perform a DNS lookup of the hostname and check the resolved IP(s)
against private/loopback ranges, and reject if any match, or alternatively
enforce a whitelist of allowed hosts; also reject URLs with credentials,
file://, or other schemes and ensure the path normalization still only targets
the /v1/models path; on validation failure return sendError(c,
Errors.invalidParams(...)) and only then perform the fetch.

Comment on lines +36 to +49
class CustomLLMProvider implements LLMProvider {
readonly id = 'custom'
readonly name = 'Custom (OpenAI Compatible)'

/**
* Complete a chat request using custom OpenAI-compatible API
*
* Note: For custom provider, the authToken is not used.
* Instead, the API key is passed via customConfig.
*/
async complete(
request: LLMCompleteRequest,
customConfig?: CustomProviderConfig
): Promise<LLMCompleteResult> {

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find and read the types.ts file and custom.ts file to verify interface definition
echo "=== Checking llm-providers directory ==="
find apps/api/src -name "*types.ts" -o -name "*custom.ts" | head -20

echo ""
echo "=== Reading types.ts to verify LLMProvider interface ==="
if [ -f "apps/api/src/llm-providers/types.ts" ]; then
  cat -n apps/api/src/llm-providers/types.ts
else
  echo "File not found at expected location, searching..."
  find apps -name "types.ts" -path "*/llm-providers/*" -exec cat -n {} \;
fi

echo ""
echo "=== Reading custom.ts to see full implementation ==="
if [ -f "apps/api/src/llm-providers/custom.ts" ]; then
  cat -n apps/api/src/llm-providers/custom.ts | head -80
fi

Repository: WuMingDao/zenith-image-generator

Length of output: 4119


Method signature does not match LLMProvider interface.

The complete method includes an additional customConfig?: CustomProviderConfig parameter that is not part of the LLMProvider interface definition. The interface specifies complete(request: LLMCompleteRequest): Promise<LLMCompleteResult>, but the implementation adds an extra parameter, causing a signature mismatch that will result in TypeScript compilation errors.

🤖 Prompt for AI Agents
apps/api/src/llm-providers/custom.ts lines 36-49: the complete method currently
has an extra parameter customConfig?: CustomProviderConfig which does not match
the LLMProvider interface signature and causes TypeScript errors; remove the
extra parameter so the method is declared as complete(request:
LLMCompleteRequest): Promise<LLMCompleteResult>, and obtain any
provider-specific config via the provider instance (constructor/fields) or from
the request object (e.g., request.meta or request.options) instead of a second
parameter.

Comment on lines +37 to +50
function sanitizeErrorForLogging(err: unknown): { name: string; message: string; stack?: string } {
if (err instanceof Error) {
return {
name: err.name,
message: err.message,
// Only include stack in development
stack: process.env.NODE_ENV === 'development' ? err.stack : undefined,
}
}
return {
name: 'UnknownError',
message: typeof err === 'string' ? err : 'An unknown error occurred',
}
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for wrangler configuration in apps/api
find apps/api -name "wrangler.toml" -o -name "wrangler.json" -o -name "wrangler.jsonc" | head -20

Repository: WuMingDao/zenith-image-generator

Length of output: 98


🏁 Script executed:

#!/bin/bash
# Read the error-handler.ts file
cat -n apps/api/src/middleware/error-handler.ts

Repository: WuMingDao/zenith-image-generator

Length of output: 2644


🏁 Script executed:

#!/bin/bash
# Search for NODE_ENV and ENVIRONMENT usage in the codebase
rg -n "NODE_ENV|process\.env|c\.env" apps/api/src --type ts --type tsx

Repository: WuMingDao/zenith-image-generator

Length of output: 103


🏁 Script executed:

#!/bin/bash
# Search for NODE_ENV and process.env usage - retry without tsx flag
rg -n "NODE_ENV|process\.env|c\.env" apps/api/src -A 2 -B 2

Repository: WuMingDao/zenith-image-generator

Length of output: 1766


🏁 Script executed:

#!/bin/bash
# Check wrangler.toml configuration
cat apps/api/wrangler.toml

Repository: WuMingDao/zenith-image-generator

Length of output: 832


🏁 Script executed:

#!/bin/bash
# Check if NODE_ENV is defined anywhere else (secrets, env file, etc.)
find apps/api -name ".dev.vars*" -o -name ".env*" | xargs cat 2>/dev/null || echo "No env files found"

Repository: WuMingDao/zenith-image-generator

Length of output: 58


🏁 Script executed:

#!/bin/bash
# Search for how other middleware or handlers access environment in Hono context
rg -n "c\.env|getConfig\(\)" apps/api/src -A 2 -B 1

Repository: WuMingDao/zenith-image-generator

Length of output: 509


🏁 Script executed:

#!/bin/bash
# Check the actual index.ts to see how the app is exported and whether Hono context is available
cat apps/api/src/index.ts

Repository: WuMingDao/zenith-image-generator

Length of output: 1315


🏁 Script executed:

#!/bin/bash
# Check how errorHandler is used in the app
cat apps/api/src/app.ts | head -100

Repository: WuMingDao/zenith-image-generator

Length of output: 3005


Fix process.env.NODE_ENV access in Cloudflare Workers environment.

In Cloudflare Workers, process.env is not available by default. It is populated only when Node.js compatibility is enabled together with the nodejs_compat_populate_process_env compatibility flag, but your wrangler.toml currently enables only nodejs_compat. Since NODE_ENV is also not defined in your vars, process.env.NODE_ENV will be undefined, so stack traces will never be included, even in development.

To fix this, either:

  1. Add the compatibility flag and NODE_ENV to wrangler.toml:

     compatibility_flags = ["nodejs_compat", "nodejs_compat_populate_process_env"]

     [vars]
     NODE_ENV = "development"

  2. Or pass the environment to the handler: modify errorHandler to accept the environment context, and update error-handler.ts to receive this parameter instead of relying on process.env.
🤖 Prompt for AI Agents
In apps/api/src/middleware/error-handler.ts around lines 37-50,
sanitizeErrorForLogging currently checks process.env.NODE_ENV which is not
available in Cloudflare Workers; update the code to stop reading process.env
directly and instead accept an explicit environment flag (e.g., pass a boolean
includeStack or an env string) from the errorHandler entry point, then have the
top-level handler obtain NODE_ENV from the runtime (either via wrangler.toml
[vars] or the Worker context) and pass it down; alternatively, if you prefer to
keep using process.env, update wrangler.toml to include compatibility_flags =
["nodejs_compat", "nodejs_compat_populate_process_env"] and add NODE_ENV =
"development" under [vars] so process.env.NODE_ENV is populated.

Comment on lines +27 to +37
export function validateJson<T extends z.ZodType>(schema: T) {
return zValidator('json', schema, (result, c) => {
if (!result.success) {
const firstIssue = result.error.issues[0]
const field = firstIssue?.path.join('.') || 'body'
const message = firstIssue?.message || 'Invalid request body'

return sendError(c, Errors.invalidParams(field, message))
}
})
}

⚠️ Potential issue | 🟡 Minor

Consider reporting all validation errors for better UX.

The middleware only returns the first validation error (Line 30). This means clients must fix errors one at a time, which degrades developer experience. Consider returning all validation issues in a single response.

Example improvement:

 export function validateJson<T extends z.ZodType>(schema: T) {
   return zValidator('json', schema, (result, c) => {
     if (!result.success) {
-      const firstIssue = result.error.issues[0]
-      const field = firstIssue?.path.join('.') || 'body'
-      const message = firstIssue?.message || 'Invalid request body'
-
-      return sendError(c, Errors.invalidParams(field, message))
+      const errors = result.error.issues.map(issue => ({
+        field: issue.path.join('.') || 'body',
+        message: issue.message
+      }))
+      // You'd need to extend Errors.invalidParams to support multiple errors
+      return sendError(c, Errors.invalidParams('body', 'Validation failed', errors))
     }
   })
 }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/api/src/middleware/validate.ts around lines 27–37, the validator
currently returns only the first Zod issue; instead collect all validation
issues by mapping result.error.issues to an array of {field, message} (use
issue.path.join('.') || 'body' and issue.message) and send them in the error
response (either by extending Errors.invalidParams to accept an array or
creating a new Errors.invalidParamsMultiple payload) via sendError(c, ...).
Ensure the returned response contains all issues so clients can fix them at once
and keep existing behavior when result.success is true.

Comment on lines +18 to +25
/** LLM Provider type enum */
export const LLMProviderSchema = z.enum([
'pollinations',
'huggingface',
'gitee',
'modelscope',
'deepseek',
])

⚠️ Potential issue | 🔴 Critical

Missing 'custom' provider in LLMProviderSchema.

The PR objectives mention adding a "Custom LLM Provider" for OpenAI-compatible endpoints, but 'custom' is not included in this enum. This will cause validation failures when users attempt to use the custom provider for prompt optimization.

 /** LLM Provider type enum */
 export const LLMProviderSchema = z.enum([
   'pollinations',
   'huggingface',
   'gitee',
   'modelscope',
   'deepseek',
+  'custom',
 ])
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
/** LLM Provider type enum */
export const LLMProviderSchema = z.enum([
'pollinations',
'huggingface',
'gitee',
'modelscope',
'deepseek',
])
/** LLM Provider type enum */
export const LLMProviderSchema = z.enum([
'pollinations',
'huggingface',
'gitee',
'modelscope',
'deepseek',
'custom',
])
🤖 Prompt for AI Agents
In apps/api/src/schemas/index.ts around lines 18 to 25, the LLMProviderSchema
enum is missing the 'custom' provider which will reject the new Custom LLM
Provider; update the z.enum array to include 'custom' alongside the other
provider strings, then run typechecks/tests to ensure validation and any
dependent types or usage sites compile and accept the new value.

}`}
>
<Sparkles className="w-3.5 h-3.5" />
<span>Optimize</span>

⚠️ Potential issue | 🟡 Minor

Hardcoded English strings should use i18n.

The tab labels "Optimize" and "Translate" are hardcoded in English while other UI text uses t() for internationalization.

-            <span>Optimize</span>
+            <span>{t('settings.optimize')}</span>
-            <span>Translate</span>
+            <span>{t('settings.translate')}</span>

Also applies to: 241-241

🤖 Prompt for AI Agents
In apps/web/src/components/feature/SettingsModal.tsx around lines 229 and 241
the tab labels "Optimize" and "Translate" are hardcoded in English; replace
these hardcoded strings with calls to the i18n translation function (e.g.,
t('...') with appropriate keys like 'settings.optimize' and
'settings.translate'), add or update the corresponding entries in the locale
resource files, and ensure the component imports/uses the t function (or
useTranslation hook) consistently with the surrounding UI text.

<div className="space-y-4">
{/* Optimize Provider Selection */}
<div>
<Label className="text-zinc-400 text-xs">Provider</Label>

⚠️ Potential issue | 🟡 Minor

Multiple hardcoded English strings throughout the component.

Several labels and descriptions are hardcoded in English rather than using the translation function. This affects the i18n support mentioned in the PR objectives. Examples include:

  • "Provider" (lines 331, 507)
  • "API Base URL" (lines 356, 532)
  • "API Key" (lines 369, 545)
  • "Model" (lines 383, 435, 559, 611)
  • "Custom System Prompt" (line 458)
  • "Leave empty to use the default system prompt." (line 482)
  • "Auto-Translate" and its description (lines 493-496)
  • "Reset" (line 470)
  • "(Free)" (lines 344, 520)

Also applies to: 356-356, 369-369, 383-383, 435-435, 458-458, 482-482, 493-497, 507-507, 532-532, 545-545, 559-559, 611-611

Comment on lines +343 to +369
const setCustomOptimizeConfig = useCallback(
(config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
setLLMSettings((prev) => {
const newSettings = {
...prev,
customOptimizeConfig: { ...prev.customOptimizeConfig, ...config },
}
saveLLMSettings(newSettings)
return newSettings
})
},
[]
)

const setCustomTranslateConfig = useCallback(
(config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
setLLMSettings((prev) => {
const newSettings = {
...prev,
customTranslateConfig: { ...prev.customTranslateConfig, ...config },
}
saveLLMSettings(newSettings)
return newSettings
})
},
[]
)

⚠️ Potential issue | 🟠 Major

API keys stored without encryption - violates security guideline.

Per coding guidelines, API keys should be persisted with AES-256-GCM encryption using src/lib/crypto.ts. However, setCustomOptimizeConfig and setCustomTranslateConfig store the apiKey field via saveLLMSettings, which uses plain localStorage.setItem.

Consider encrypting the API key before storage:

+import { encryptAndStoreToken, decryptTokenFromStore } from '@/lib/crypto'

 const setCustomOptimizeConfig = useCallback(
   (config: Partial<{ baseUrl: string; apiKey: string; model: string }>) => {
+    // Encrypt API key if provided
+    if (config.apiKey) {
+      encryptAndStoreToken('custom-optimize', config.apiKey)
+      config = { ...config, apiKey: '' } // Don't store plaintext
+    }
     setLLMSettings((prev) => {
       const newSettings = {
         ...prev,
         customOptimizeConfig: { ...prev.customOptimizeConfig, ...config },
       }
       saveLLMSettings(newSettings)
       return newSettings
     })
   },
   []
 )

You'll also need to decrypt when loading settings and when using the config in API calls. Based on learnings, this is a requirement for the codebase.

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +103 to +111
/** Custom LLM provider configuration */
export interface CustomLLMConfig {
/** API base URL (e.g., https://api.openai.com/v1) */
baseUrl: string
/** API key */
apiKey: string
/** Model name */
model: string
}

🛠️ Refactor suggestion | 🟠 Major

Duplicate type definition - import from shared package instead.

CustomLLMConfig is already defined in packages/shared/src/types/llm.ts (lines 14-21). This duplication can lead to drift between definitions.

 import {
   type AspectRatioConfig,
+  type CustomLLMConfig,
   DEFAULT_OPTIMIZE_SYSTEM_PROMPT,
   getModelsByProvider,
   LLM_PROVIDER_CONFIGS,
   type LLMProviderType,
   MODEL_CONFIGS,
   PROVIDER_CONFIGS,
   type ProviderType,
   ASPECT_RATIOS as SHARED_ASPECT_RATIOS,
 } from '@z-image/shared'

-/** Custom LLM provider configuration */
-export interface CustomLLMConfig {
-  /** API base URL (e.g., https://api.openai.com/v1) */
-  baseUrl: string
-  /** API key */
-  apiKey: string
-  /** Model name */
-  model: string
-}
+// Re-export from shared
+export type { CustomLLMConfig }

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/src/lib/constants.ts around lines 103 to 111, remove the local
CustomLLMConfig type and replace its usage with the canonical type exported from
packages/shared/src/types/llm.ts: import { CustomLLMConfig } from 'shared' (or
the package's actual name/path), update any local references to use the imported
type, and ensure your tsconfig/paths or package.json points to the shared
package so the import resolves; run TypeScript typecheck to confirm no other
duplicates remain.

Comment on lines +289 to +313
// Get token for translate provider
let token: string | undefined
switch (llmSettings.translateProvider) {
case 'gitee-llm':
token = await decryptTokenFromStore('gitee')
break
case 'modelscope-llm':
token = await decryptTokenFromStore('modelscope')
break
case 'huggingface-llm':
token = await decryptTokenFromStore('huggingface')
break
case 'deepseek':
token = await decryptTokenFromStore('deepseek')
break
}

const result = await translatePrompt(
{
prompt,
provider: llmSettings.translateProvider,
model: llmSettings.translateModel,
},
token
)

⚠️ Potential issue | 🟠 Major

Same issue: missing 'custom' case and customConfig for translate handler.

The handleTranslate function has the same issue - it doesn't handle the 'custom' provider case explicitly and doesn't pass customConfig to the API.

         case 'deepseek':
           token = await decryptTokenFromStore('deepseek')
           break
+        case 'custom':
+          // Custom provider uses apiKey from customConfig, not authToken
+          break
       }

       const result = await translatePrompt(
         {
           prompt,
           provider: llmSettings.translateProvider,
-          model: llmSettings.translateModel,
+          model: llmSettings.translateProvider === 'custom'
+            ? llmSettings.customTranslateConfig.model
+            : llmSettings.translateModel,
+          customConfig: llmSettings.translateProvider === 'custom'
+            ? llmSettings.customTranslateConfig
+            : undefined,
         },
         token
       )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Get token for translate provider
let token: string | undefined
switch (llmSettings.translateProvider) {
case 'gitee-llm':
token = await decryptTokenFromStore('gitee')
break
case 'modelscope-llm':
token = await decryptTokenFromStore('modelscope')
break
case 'huggingface-llm':
token = await decryptTokenFromStore('huggingface')
break
case 'deepseek':
token = await decryptTokenFromStore('deepseek')
break
}
const result = await translatePrompt(
{
prompt,
provider: llmSettings.translateProvider,
model: llmSettings.translateModel,
},
token
)
// Get token for translate provider
let token: string | undefined
switch (llmSettings.translateProvider) {
case 'gitee-llm':
token = await decryptTokenFromStore('gitee')
break
case 'modelscope-llm':
token = await decryptTokenFromStore('modelscope')
break
case 'huggingface-llm':
token = await decryptTokenFromStore('huggingface')
break
case 'deepseek':
token = await decryptTokenFromStore('deepseek')
break
case 'custom':
// Custom provider uses apiKey from customConfig, not authToken
break
}
const result = await translatePrompt(
{
prompt,
provider: llmSettings.translateProvider,
model: llmSettings.translateProvider === 'custom'
? llmSettings.customTranslateConfig.model
: llmSettings.translateModel,
customConfig: llmSettings.translateProvider === 'custom'
? llmSettings.customTranslateConfig
: undefined,
},
token
)
🤖 Prompt for AI Agents
In apps/web/src/pages/FlowPageV2.tsx around lines 289-313, the translate handler
fails to handle the 'custom' provider and doesn't forward customConfig to the
translate API; add a 'case "custom"' in the switch that retrieves the stored
token (e.g., await decryptTokenFromStore('custom')) like the other providers,
and include the custom config when calling translatePrompt by adding the custom
config value from llmSettings (e.g., translateCustomConfig or similar) into the
request object passed to translatePrompt so the API receives provider, model,
and customConfig along with the token.
