feat: Implement AI Requirement Generation feature (ollama container) #64

johlju wants to merge 7 commits into viscalyx:main from
Conversation
- Added a new module for generating system requirements based on AI, adhering to ISO standards.
- Introduced interfaces for generated requirements and taxonomy data.
- Created functions to build prompts for AI generation in both English and Swedish.
- Integrated AI requirement generation into the MCP server with a new tool registration.
- Updated authorization service to include a new action for generating requirements.
- Enhanced the requirements service to handle generation requests and validate outputs.
- Added localization support for AI generation UI components in English and Swedish.
- Implemented unit tests for the new AI requirement generation functionality and related components.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings. Use the following commands to manage reviews:
Walkthrough

Adds an Ollama sidecar and devcontainer GPU option, an Ollama client, prompt/schema/taxonomy libraries, streaming server endpoints and a service method for AI-generated requirements, a client-side AI requirement-generator modal, MCP tool registration, translations, tests, and devcontainer/documentation updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Modal as AiRequirementGenerator
    participant API as /api/ai/generate-requirements
    participant Ollama as Ollama Service
    participant DB as Database
    participant ReqAPI as /api/requirements
    User->>Modal: open, enter topic/area, click Generate
    Modal->>API: POST {topic, customInstruction, locale, model}
    API->>DB: loadTaxonomy(locale)
    API->>Ollama: POST /api/chat (system+user prompts, stream: true)
    Ollama-->>API: stream "thinking" chunks
    API-->>Modal: SSE "thinking" events
    Ollama-->>API: stream "generating" content chunks
    API-->>Modal: SSE "generating" events
    Ollama-->>API: stream "done" with rawContent/stats
    API->>DB: validateGeneratedRequirements(requirements, taxonomy)
    API-->>Modal: SSE "done" with validated requirements
    User->>Modal: select items, click Create
    Modal->>ReqAPI: POST each requirement
    ReqAPI->>DB: insert requirement(s)
    DB-->>ReqAPI: confirm creation
    ReqAPI-->>Modal: success
    Modal->>User: close and refresh parent view
```
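The SSE phases in the diagram can be consumed client-side by splitting the stream on blank lines and parsing each block. The sketch below assumes the standard SSE wire format (`event:`/`data:` fields), not the PR's exact payload shapes:

```typescript
// Parse one SSE block (the text between blank-line separators) into an
// event name and its concatenated data payload. Returns null for empty
// or data-less blocks (e.g. comment-only keep-alives).
interface SseEvent {
  event: string
  data: string
}

function parseSseBlock(block: string): SseEvent | null {
  let event = 'message' // SSE default event name when no "event:" field is sent
  const data: string[] = []
  for (const line of block.split('\n')) {
    if (line.startsWith('event:')) event = line.slice(6).trim()
    else if (line.startsWith('data:')) data.push(line.slice(5).trim())
  }
  return data.length ? { event, data: data.join('\n') } : null
}
```

A client would feed each `thinking`/`generating`/`done` block through this parser and `JSON.parse` the `data` field.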
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 12
🧹 Nitpick comments (12)
lib/requirements/auth.ts (1)
86-88: Verify every authorization policy map includes `generate_requirements`.

Adding the new action kind here does not force existing `RoleBasedAuthorizationService` policy objects to grow, because they are still typed as `Partial<Record<...>>`. If any instantiation missed this key, the feature will compile and then fail at runtime with "No policy defined for action generate_requirements".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `lib/requirements/auth.ts` around lines 86 - 88, RoleBasedAuthorizationService policy maps can omit the new action kind generate_requirements because policies are typed as Partial<Record<...>>, causing runtime failures; update the types and any policy initializations to require generate_requirements by making the action map non-partial (e.g., change Partial<Record<...>> to Record<...>) or add explicit default entries for generate_requirements in every policy object, and audit any places where policies are constructed (RoleBasedAuthorizationService, policy constants) to ensure the generate_requirements key is present with an appropriate policy function.

app/[locale]/requirements/requirements-client.tsx (1)
826-834: Redundant API calls in `onCreated` callback.

The `onCreated` callback invokes both `fetchData()` and `refreshRows()`. However, `fetchData()` (lines 297-308) already calls `refreshRows()` internally. This results in duplicate API requests to `/api/requirements` after creating a requirement.

♻️ Proposed fix to remove redundant call
```diff
 <AiRequirementGenerator
   areas={areas}
   onClose={() => setAiModalOpen(false)}
-  onCreated={() => {
-    fetchData()
-    refreshRows()
-  }}
+  onCreated={fetchData}
   open={aiModalOpen}
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/[locale]/requirements/requirements-client.tsx` around lines 826 - 834, The onCreated handler for AiRequirementGenerator triggers duplicate API calls by calling both fetchData() and refreshRows(); since fetchData() already calls refreshRows() internally, remove the redundant refreshRows() call from the onCreated callback so onCreated only calls fetchData() (and setAiModalOpen(false) where applicable) to prevent duplicate requests—locate the AiRequirementGenerator component usage and update its onCreated prop to call only fetchData().

app/api/ai/system-prompt/route.ts (1)
29-56: Extract `loadTaxonomy` into shared code.

This mapper now exists here, in `app/api/ai/generate-requirements/route.ts` (lines 122-150), and in `lib/requirements/service.ts` (lines 2290-2313). The next taxonomy/localization change has to land in three places, which is an easy prompt-drift bug.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `app/api/ai/system-prompt/route.ts` around lines 29 - 56, The loadTaxonomy mapper is duplicated across multiple modules (function loadTaxonomy using listCategories, listTypes, listQualityCharacteristics, listRiskLevels, listScenarios and returning TaxonomyData), so extract it into a single shared helper module (e.g., export a function named loadTaxonomy and the TaxonomyData type from a new shared file) and replace the local copies by importing that shared function; update callers to use the shared loadTaxonomy and remove the duplicate implementations (ensure the shared module imports the same list* functions or accepts the db parameter and uses the same helper functions).

tests/unit/ai-requirement-generator.test.tsx (1)
118-130: Assert the close behavior, not only the button's existence.

This still passes if the button stops invoking `onClose`. Exercise the click path so the test protects the modal's actual exit behavior. As per coding guidelines: "When changing visible UI elements, labels, roles, or layout surfaces, update curated devMarker(...) calls or scanner heuristics, docs/developer-mode-overlay.md, and relevant unit and integration tests."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/unit/ai-requirement-generator.test.tsx` around lines 118 - 130, The test currently only checks the presence of the close button; update it to assert the close behavior by simulating a user click and verifying the onClose handler is invoked. In the test for AiRequirementGenerator, keep the onClose = vi.fn(), import/use userEvent (or fireEvent) to click the element found by screen.getByLabelText('close'), then assert expect(onClose).toHaveBeenCalled() (or toHaveBeenCalledTimes(1)) to exercise the click path and protect the modal exit behavior.

components/AiRequirementGenerator.tsx (6)
382-384: Redundant `open` check inside `AnimatePresence`.

The condition `{open && (...)}` on line 384 is redundant since line 380 already returns `null` when `!open`. This causes unnecessary re-evaluation.

♻️ Suggested fix
```diff
 return (
   <AnimatePresence>
-    {open && (
     <motion.div animate={{ opacity: 1 }} ... </motion.div>
-    )}
   </AnimatePresence>
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 382 - 384, In AiRequirementGenerator, the render contains a redundant `{open && (...)}` inside the <AnimatePresence> because the component already returns early when !open; remove the inner conditional and render the modal/children directly within AnimatePresence (locate the JSX returned in the AiRequirementGenerator component and remove the `open &&` guard around the content).
496-509: Toggle buttons lack minimum touch target size.

The "Advanced" toggle button and similar toggle buttons (lines 520-538, 557-592) don't have `min-h-[44px] min-w-[44px]` for accessible touch targets. Per coding guidelines, buttons need these minimum dimensions.

♻️ Suggested fix for the Advanced toggle
```diff
 <button
-  className="flex items-center gap-1 text-sm text-primary-600 hover:text-primary-700 dark:text-primary-400 dark:hover:text-primary-300"
+  className="flex min-h-[44px] items-center gap-1 text-sm text-primary-600 hover:text-primary-700 dark:text-primary-400 dark:hover:text-primary-300"
   disabled={inProgress}
   onClick={() => setShowAdvanced(!showAdvanced)}
   type="button"
 >
```

Apply similar changes to the "Show/Hide Default Instruction" button (lines 520-538), "Show/Hide System Prompt" button (lines 557-592), and "Select/Deselect All" button (lines 659-667).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 496 - 509, The toggle buttons (the Advanced toggle using showAdvanced / setShowAdvanced and the other toggles for "Show/Hide Default Instruction", "Show/Hide System Prompt", and the "Select/Deselect All" button) lack the required minimum touch target; update each button's className to include min-h-[44px] and min-w-[44px] (preserving existing classes like text-sm and flex) so the buttons meet the 44×44px accessibility guideline while keeping their existing behavior and handlers (e.g., the onClick that toggles showAdvanced and the corresponding state toggles for default instruction, system prompt, and selectAll).
441-441: Grid layout is not mobile-first.

The `grid-cols-2` class applies a two-column layout by default, which may cause cramped UI on narrow screens (320px). Per guidelines, use a mobile-first approach: single column by default, then expand on larger breakpoints.

♻️ Suggested fix
```diff
- <div className="grid grid-cols-2 gap-4">
+ <div className="grid grid-cols-1 gap-4 sm:grid-cols-2">
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` at line 441, The grid currently uses a two-column layout by default which is not mobile-first; update the div with className "grid grid-cols-2 gap-4" to use a single column by default and switch to two columns at a larger breakpoint (e.g., "grid grid-cols-1 sm:grid-cols-2 gap-4") so the layout is one column on narrow screens and becomes two columns on small/above screens.
314-340: Sequential requirement creation may be slow and lacks progress feedback.

Creating requirements sequentially with `for...of` and individual `await fetch` calls can be slow when many requirements are selected. Consider showing per-item progress or using `Promise.allSettled` for parallel creation.

For better UX, consider:
- Showing progress like "Creating 3/10..."
- Or batching requests with `Promise.allSettled` for parallelism (with appropriate rate limiting if the API requires it)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 314 - 340, The current sequential creation loop over selectedReqs (for...of awaiting fetch to '/api/requirements') is slow and gives no per-item feedback; refactor this to run requests in parallel using Promise.allSettled (map selectedReqs -> fetch POST) and collect failures into the existing errors array, while adding progress updates (e.g., a setCreateProgress or setCreatedCount state updated in each settled promise or via an incremental callback) to show "Creating X/Y" to the user; ensure you still parse error bodies (res.json().catch(() => null)) for failed responses and preserve existing logging (console.error/console.log) for compatibility.
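A minimal sketch of the suggested `Promise.allSettled` pattern; `createAll`, `postItem`, and `onProgress` are illustrative names, not identifiers from the PR:

```typescript
type Created = { ok: true } | { ok: false; error: string }

// Create all items in parallel, reporting per-item progress so the UI can
// render something like "Creating 3/10...". Failures are collected rather
// than aborting the whole batch.
async function createAll<T>(
  items: T[],
  postItem: (item: T) => Promise<void>, // e.g. a fetch POST to /api/requirements
  onProgress: (done: number, total: number) => void,
): Promise<Created[]> {
  let done = 0
  const settled = await Promise.allSettled(
    items.map(async (item) => {
      try {
        await postItem(item)
      } finally {
        onProgress(++done, items.length) // fires for successes and failures alike
      }
    }),
  )
  return settled.map((r) =>
    r.status === 'fulfilled' ? { ok: true } : { ok: false, error: String(r.reason) },
  )
}
```

If the API needs rate limiting, the same shape works with a small concurrency pool instead of a flat `map`.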
91-109: Model fetch lacks error handling feedback to user.

When the models fetch fails (line 107), `setModels([])` silently clears the list without user feedback. Combined with the "No models available" option on line 483, users won't know if this is due to a network error or genuinely no models.

♻️ Suggested improvement
```diff
-  .catch(() => setModels([]))
+  .catch(() => {
+    setModels([])
+    // Optionally set an error state to show "Failed to load models"
+  })
```

Consider adding a `modelsError` state to distinguish between "no models" and "fetch failed" scenarios.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 91 - 109, The models fetch currently swallows errors (fetch('/api/ai/models') .catch(() => setModels([]))) so users can't tell network failures from genuinely empty model lists; add a new state like modelsError (useState<string | null>) and in the catch block setModels([]) and setModelsError with a descriptive message (or the caught error.message), clear modelsError on successful response before calling setModels, and ensure setModelsLoading is still toggled in finally; update the UI branch that renders the "No models available" option to show the modelsError message when modelsError is set (instead of the no-models text) so users see fetch failure feedback.
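The suggested split between "fetch failed" and "no models" can be sketched as a plain async helper; `loadModels` and `modelsError` are assumed names, and the response shape mirrors the review's description of `/api/ai/models`:

```typescript
interface ModelsState {
  models: string[]
  modelsError: string | null // null means the fetch itself succeeded
}

// Wrap the models fetch so callers can tell a network failure apart from a
// genuinely empty model list (models: [], modelsError: null).
async function loadModels(
  fetchModels: () => Promise<{ models: string[] }>,
): Promise<ModelsState> {
  try {
    const data = await fetchModels()
    return { models: data.models, modelsError: null }
  } catch (err) {
    return {
      models: [],
      modelsError: err instanceof Error ? err.message : 'Failed to load models',
    }
  }
}
```

In the component, the UI branch would show `modelsError` when set and the "No models available" text only when `modelsError` is null.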
420-438: Form fields are missing help text buttons.

Per coding guidelines, every form field that accepts user input must include a help text button (the `?` icon) with corresponding translatable help text. The topic textarea, area select, model select, and custom instruction textarea are missing these help affordances.

Consider adding `helpButton()` and `helpPanel()` patterns as seen in `RequirementForm.tsx` for each input field.

Also applies to: 440-465, 510-554
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `components/AiRequirementGenerator.tsx` around lines 420 - 438, The topic textarea, area select, model select and custom instruction textarea in AiRequirementGenerator are missing the required help affordances; add the same helpButton() and helpPanel() pattern used in RequirementForm.tsx next to each field label (e.g., for the Topic textarea, Area select, Model select, and CustomInstruction textarea), wire each helpButton to a corresponding helpPanel component that renders a translatable help string (use keys like topicHelp, areaHelp, modelHelp, customInstructionHelp or the project's existing i18n keys), ensure the helpButton is disabled when inProgress like the inputs, and keep ARIA associations/IDs consistent with other form help implementations so keyboard/screen-reader users can open the help panels.

lib/ai/ollama-client.ts (2)
108-113: Consider making timeouts configurable.

The hardcoded timeouts (180s for non-streaming, 300s for streaming, 5s for model listing) may not suit all deployment scenarios. Large models or slow networks might need longer timeouts.

Consider accepting the timeout as an option parameter or reading it from environment variables:
```typescript
export async function generateChat<T>(options: {
  // ... existing options
  timeoutMs?: number
}): Promise<NonStreamingResult<T>> {
  // ...
  signal: options.signal ?? AbortSignal.timeout(options.timeoutMs ?? 180_000),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/ai/ollama-client.ts` around lines 108 - 113, The fetch call in generateChat hardcodes AbortSignal.timeout values (e.g., 180_000 for non-streaming) making timeouts inflexible; add a timeoutMs (or similar) option to the generateChat options and use it when building the request signal (falling back to the existing defaults), update the streaming path and model-listing call sites that use AbortSignal.timeout to also accept/propagate the new timeout option (or read from an env var if preferred), and ensure existing options.signal still takes precedence (i.e., use options.signal ?? AbortSignal.timeout(options.timeoutMs ?? 180_000)); reference generateChat, the fetch call constructing { signal: ... }, and any streaming or model listing helpers to make the change consistently.
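The precedence described above (caller-supplied signal, then an optional `timeoutMs`, then the existing 180s default) reduces to one expression; `resolveSignal` is a hypothetical helper name, not part of the PR:

```typescript
// Pick the request's AbortSignal: an explicit caller signal always wins;
// otherwise build a timeout signal from timeoutMs, falling back to the
// current non-streaming default of 180 seconds.
function resolveSignal(options: { signal?: AbortSignal; timeoutMs?: number }): AbortSignal {
  return options.signal ?? AbortSignal.timeout(options.timeoutMs ?? 180_000)
}
```

The streaming (300s) and model-listing (5s) call sites would use the same helper with different defaults.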
245-251: 'done' event is emitted even after errors or abort.

The `yield` for the `done` event at lines 245-251 is outside the `try` block and will execute even if an error occurred mid-stream (e.g., network interruption after partial data). This could emit a `done` event with incomplete/empty content. Consider tracking whether an error occurred.

♻️ Suggested fix
```diff
+let errorOccurred = false
+
 try {
   for (;;) {
     // ... existing loop
   }
+} catch (err) {
+  errorOccurred = true
+  // Re-throw or yield error - abort errors should not emit done
+  if ((err as Error).name !== 'AbortError') {
+    yield { message: (err as Error).message, phase: 'error' }
+  }
+  return
 } finally {
   reader.releaseLock()
 }
-yield {
-  phase: 'done',
-  rawContent: contentSoFar,
-  stats: finalStats,
-  thinking: thinkingSoFar,
-}
+if (!errorOccurred) {
+  yield {
+    phase: 'done',
+    rawContent: contentSoFar,
+    stats: finalStats,
+    thinking: thinkingSoFar,
+  }
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/ai/ollama-client.ts` around lines 245 - 251, The final "done" yield in the stream handler (the yield returning phase: 'done', rawContent: contentSoFar, stats: finalStats, thinking: thinkingSoFar) is emitted unconditionally outside the try/catch, so it fires even after errors or aborts; modify the surrounding logic in the function that produces these yields (the streaming generator in lib/ai/ollama-client.ts) to track an error/abort flag (e.g., hadError or aborted) set in the catch and abort paths, and only emit the 'done' yield when no error/abort occurred; also consider emitting a distinct 'error' or 'aborted' phase inside the catch/finally so downstream consumers get the correct terminal event instead of an incorrect 'done'.
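The error-flag idea can also be expressed as a small wrapper generator. This is a sketch of the pattern under assumed event shapes (`guardedStream` and `StreamEvent` are not the actual ollama-client code):

```typescript
type StreamEvent =
  | { phase: 'generating'; content: string }
  | { phase: 'done'; rawContent: string }
  | { phase: 'error'; message: string }

// Re-emit chunks from a source stream, but only yield a terminal 'done'
// event when the source completed cleanly; a mid-stream failure produces
// an 'error' event and no 'done'.
async function* guardedStream(source: AsyncIterable<string>): AsyncGenerator<StreamEvent> {
  let content = ''
  try {
    for await (const chunk of source) {
      content += chunk
      yield { phase: 'generating', content: chunk }
    }
  } catch (err) {
    yield { phase: 'error', message: err instanceof Error ? err.message : String(err) }
    return // never follow an error with 'done'
  }
  yield { phase: 'done', rawContent: content }
}
```

Downstream consumers can then treat 'done' as a guarantee of complete content.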
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/docker-compose.yml:
- Around line 44-45: Remove the fixed host port publish by deleting the ports
entry mapping "- \"11434:11434\"" and instead either add "expose: - \"11434\""
so the container port is available to the compose network only, or rely on VS
Code's devcontainer.json port forwarding; ensure the "ports:" block no longer
contains the "- \"11434:11434\"" mapping.
In @.devcontainer/ollama-entrypoint.sh:
- Around line 7-13: The readiness loop can hang forever; update the until ollama
list loop to implement a timeout/retry limit (configurable via an environment
variable like OLLAMA_START_TIMEOUT or a MAX_RETRIES variable) by counting
attempts and aborting if the limit is reached, printing a clear error message
(e.g., "Ollama failed to start after X seconds/attempts") and exiting non‑zero;
specifically modify the block containing the until ollama list > /dev/null 2>&1
loop and the echo messages to increment a counter, compare against the
timeout/retry limit, sleep between attempts, and call exit 1 with an error log
when the limit is exceeded so the container doesn't hang indefinitely.
- Around line 15-25: The current entrypoint unconditionally iterates
MODELS=("qwen3:1.7b" "qwen3:8b" "qwen3:14b") and pre-pulls all sizes; change the
logic to respect the OLLAMA_MODEL environment variable so we only pull the
intended model. Update the code that defines MODELS and the for MODEL in
"${MODELS[@]}" loop in .devcontainer/ollama-entrypoint.sh to: if OLLAMA_MODEL is
set, use that single value (or skip pulling if empty/“none”); otherwise fall
back to the existing array, then run the same ollama list | grep check and
ollama pull for only the selected model(s).
In @.devcontainer/strict/docker-compose.yml:
- Around line 41-42: Remove the explicit host port publish "11434:11434" from
the strict devcontainer docker-compose profile to avoid host-port collisions;
locate the ports entry under the strict profile (the ports: - "11434:11434"
mapping) and delete that line so the service uses an internal container port
only (or switch to "11434" without host binding or to a randomized port mapping
if explicit exposure is required).
In @.env.example:
- Around line 46-47: The client reads process.env.NEXT_PUBLIC_OLLAMA_MODEL in
AiRequirementGenerator.tsx but .env.example only defines OLLAMA_MODEL
(server-side), so add a public env var or change the client to get the default
from the server; to fix quickly, add a NEXT_PUBLIC_OLLAMA_MODEL=qwen3:14b entry
to .env.example so the client-side variable is defined, or alternatively update
AiRequirementGenerator.tsx to request the default model from your server API
(where OLLAMA_MODEL is defined) and use that response to initialize the
dropdown.
In `@app/api/ai/generate-requirements/route.ts`:
- Around line 20-42: Inside POST, guard the untrusted request.json() and its
fields before using them: wrap await request.json() in try/catch and return 400
on malformed JSON; validate that body.topic is a string and non-empty (avoid
calling body.topic.trim() without checking), ensure locale is one of 'en'|'sv'
(fallback to 'en' only after validation), verify optional customInstruction and
model are strings, and enforce max length limits on topic and customInstruction
before calling buildUserPrompt (and before any Ollama call) so oversized inputs
are rejected with a 400. Use the existing POST function, request.json(),
body.topic (and buildUserPrompt) identifiers when making these changes.
In `@components/AiRequirementGenerator.tsx`:
- Around line 674-682: The component uses requirements.map with key={i} causing
unstable keys; update the parsing step that produces the requirements list (the
function that converts AI output into GeneratedRequirement objects — e.g., the
parse/parseRequirements or the code that creates GeneratedRequirement instances)
to assign a stable unique id/key (UUID or hash of the description) to each
GeneratedRequirement (e.g., add a property like key or id), then replace key={i}
in the requirements.map rendering inside AiRequirementGenerator with that stable
property (e.g., key={req.key} or key={req.id}) so React can track items
correctly when reordering/filtering.
In `@lib/ai/ollama-client.ts`:
- Around line 196-239: The stream loop using reader.read() accumulates partial
lines in buffer but never processes the final incomplete line when the loop
exits; after the for(;;) loop completes, parse and handle the leftover buffer
(the last incomplete line) the same way as inside the loop: trim, JSON.parse to
an OllamaChatResponse, update thinkingSoFar/contentSoFar, yield any final
thinking/generating chunks, and set finalStats if chunk.done so the terminal
chunk (including done:true and stats) is not dropped; use the same logic as in
the inner per-line handling around buffer, chunk, thinkingSoFar, contentSoFar,
and finalStats to avoid duplicating errors.
In `@lib/ai/requirement-prompt.ts`:
- Around line 224-235: The buildUserPrompt function treats an empty string as a
valid customInstruction because it uses the nullish coalescing operator; change
the logic so blank/whitespace customInstruction is treated as missing — e.g., in
buildUserPrompt evaluate customInstruction (trimmed) and fall back to
getDefaultInstruction(locale) when it's undefined, null, or an empty/whitespace
string, ensuring the computed instruction variable always contains either the
user-provided non-empty instruction or the default.
In `@lib/mcp/server.ts`:
- Around line 1386-1418: The tool's output instructs clients to call
requirements_manage_requirement to create requirements but never provides an
area context or mentions requirement.areaId, which will make created records
invalid; update the inputSchema for the generator (the z.object used for
customInstruction/locale/model/topic) to accept an area identifier (e.g., areaId
or area) or require it, and update the generator's human-facing description and
the follow-up instruction text to explicitly tell clients to populate
requirement.areaId with that provided area when calling
requirements_manage_requirement; also apply the same human-text update in the
corresponding human-text generator in requirements service (references:
requirements_manage_requirement, requirement.areaId, the inputSchema object and
the generator description/follow-up text) so property names match service
expectations.
In `@lib/requirements/service.ts`:
- Around line 2351-2353: The success message hardcodes English ("Generated
${validated.length} requirements for topic: ${input.topic}") while the method
accepts locale; update the return to produce a localized human message based on
the locale parameter (e.g., use the service's localization utility or a small
locale->template map) so RequirementsService returns Swedish text for 'sv'
callers; ensure you reference validated.length, input.topic and model in the
localized template and fall back to English if the locale is unsupported.
- Around line 2277-2347: The generateRequirements handler currently passes
input.topic directly to the AI and assumes result.content.requirements exists;
trim and validate the input.topic (e.g. const topic = (input.topic ??
'').trim()) and reject empty/whitespace-only topics with an
authorization/validation error before building the prompts (references:
generateRequirements, input.topic, buildUserPrompt). After calling generateChat,
verify the model output contains a proper requirements array (check result &&
result.content && Array.isArray(result.content.requirements)) and throw/return a
controlled validation error if missing or malformed before calling
validateGeneratedRequirements (references: result.content.requirements,
validateGeneratedRequirements, REQUIREMENT_FORMAT_SCHEMA).
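The topic and model-output guards described for generateRequirements can be sketched as two small helpers. `validateTopic` and `extractRequirements` are hypothetical names; the `result.content.requirements` shape follows the review text, not verified service code:

```typescript
// Reject empty or non-string topics before any prompt is built.
function validateTopic(rawTopic: unknown): string {
  if (typeof rawTopic !== 'string' || rawTopic.trim() === '') {
    throw new Error('topic must be a non-empty string')
  }
  return rawTopic.trim()
}

// Verify the model output actually contains a requirements array before
// passing it on to validateGeneratedRequirements.
function extractRequirements(result: unknown): unknown[] {
  const content = (result as { content?: { requirements?: unknown } } | null)?.content
  if (!content || !Array.isArray(content.requirements)) {
    throw new Error('model output did not contain a requirements array')
  }
  return content.requirements
}
```

In the service these would throw a controlled validation error (with the appropriate error type) rather than a bare `Error`.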
---
Nitpick comments:
In `@app/`[locale]/requirements/requirements-client.tsx:
- Around line 826-834: The onCreated handler for AiRequirementGenerator triggers
duplicate API calls by calling both fetchData() and refreshRows(); since
fetchData() already calls refreshRows() internally, remove the redundant
refreshRows() call from the onCreated callback so onCreated only calls
fetchData() (and setAiModalOpen(false) where applicable) to prevent duplicate
requests—locate the AiRequirementGenerator component usage and update its
onCreated prop to call only fetchData().
In `@app/api/ai/system-prompt/route.ts`:
- Around line 29-56: The loadTaxonomy mapper is duplicated across multiple
modules (function loadTaxonomy using listCategories, listTypes,
listQualityCharacteristics, listRiskLevels, listScenarios and returning
TaxonomyData), so extract it into a single shared helper module (e.g., export a
function named loadTaxonomy and the TaxonomyData type from a new shared file)
and replace the local copies by importing that shared function; update callers
to use the shared loadTaxonomy and remove the duplicate implementations (ensure
the shared module imports the same list* functions or accepts the db parameter
and uses the same helper functions).
In `@components/AiRequirementGenerator.tsx`:
- Around line 382-384: In AiRequirementGenerator, the render contains a
redundant `{open && (...)}` inside the <AnimatePresence> because the component
already returns early when !open; remove the inner conditional and render the
modal/children directly within AnimatePresence (locate the JSX returned in the
AiRequirementGenerator component and remove the `open &&` guard around the
content).
- Around line 496-509: The toggle buttons (the Advanced toggle using
showAdvanced / setShowAdvanced and the other toggles for "Show/Hide Default
Instruction", "Show/Hide System Prompt", and the "Select/Deselect All" button)
lack the required minimum touch target; update each button's className to
include min-h-[44px] and min-w-[44px] (preserving existing classes like text-sm
and flex) so the buttons meet the 44×44px accessibility guideline while keeping
their existing behavior and handlers (e.g., the onClick that toggles
showAdvanced and the corresponding state toggles for default instruction, system
prompt, and selectAll).
- Line 441: The grid currently uses a two-column layout by default which is not
mobile-first; update the div with className "grid grid-cols-2 gap-4" to use a
single column by default and switch to two columns at a larger breakpoint (e.g.,
"grid grid-cols-1 sm:grid-cols-2 gap-4") so the layout is one column on narrow
screens and becomes two columns on small/above screens.
- Around line 314-340: The current sequential creation loop over selectedReqs
(for...of awaiting fetch to '/api/requirements') is slow and gives no per-item
feedback; refactor this to run requests in parallel using Promise.allSettled
(map selectedReqs -> fetch POST) and collect failures into the existing errors
array, while adding progress updates (e.g., a setCreateProgress or
setCreatedCount state updated in each settled promise or via an incremental
callback) to show "Creating X/Y" to the user; ensure you still parse error
bodies (res.json().catch(() => null)) for failed responses and preserve existing
logging (console.error/console.log) for compatibility.
- Around line 91-109: The models fetch currently swallows errors
(fetch('/api/ai/models') .catch(() => setModels([]))) so users can't tell
network failures from genuinely empty model lists; add a new state like
modelsError (useState<string | null>) and in the catch block setModels([]) and
setModelsError with a descriptive message (or the caught error.message), clear
modelsError on successful response before calling setModels, and ensure
setModelsLoading is still toggled in finally; update the UI branch that renders
the "No models available" option to show the modelsError message when
modelsError is set (instead of the no-models text) so users see fetch failure
feedback.
- Around line 420-438: The topic textarea, area select, model select and custom
instruction textarea in AiRequirementGenerator are missing the required help
affordances; add the same helpButton() and helpPanel() pattern used in
RequirementForm.tsx next to each field label (e.g., for the Topic textarea, Area
select, Model select, and CustomInstruction textarea), wire each helpButton to a
corresponding helpPanel component that renders a translatable help string (use
keys like topicHelp, areaHelp, modelHelp, customInstructionHelp or the project's
existing i18n keys), ensure the helpButton is disabled when inProgress like the
inputs, and keep ARIA associations/IDs consistent with other form help
implementations so keyboard/screen-reader users can open the help panels.
In `@lib/ai/ollama-client.ts`:
- Around line 108-113: The fetch call in generateChat hardcodes
AbortSignal.timeout values (e.g., 180_000 for non-streaming) making timeouts
inflexible; add a timeoutMs (or similar) option to the generateChat options and
use it when building the request signal (falling back to the existing defaults),
update the streaming path and model-listing call sites that use
AbortSignal.timeout to also accept/propagate the new timeout option (or read
from an env var if preferred), and ensure existing options.signal still takes
precedence (i.e., use options.signal ?? AbortSignal.timeout(options.timeoutMs ??
180_000)); reference generateChat, the fetch call constructing { signal: ... },
and any streaming or model listing helpers to make the change consistently.
- Around line 245-251: The final "done" yield in the stream handler (the yield
returning phase: 'done', rawContent: contentSoFar, stats: finalStats, thinking:
thinkingSoFar) is emitted unconditionally outside the try/catch, so it fires
even after errors or aborts; modify the surrounding logic in the function that
produces these yields (the streaming generator in lib/ai/ollama-client.ts) to
track an error/abort flag (e.g., hadError or aborted) set in the catch and abort
paths, and only emit the 'done' yield when no error/abort occurred; also
consider emitting a distinct 'error' or 'aborted' phase inside the catch/finally
so downstream consumers get the correct terminal event instead of an incorrect
'done'.
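The guarded terminal event described above can be sketched as follows. This is a minimal illustration using a synchronous generator for brevity (the real client in `lib/ai/ollama-client.ts` is an async generator), and the event shape is an assumption:

```typescript
// Only emit 'done' when the stream finished cleanly; surface errors as
// their own terminal phase instead.
type StreamEvent =
  | { phase: 'chunk'; content: string }
  | { phase: 'done'; rawContent: string }
  | { phase: 'error'; message: string }

function* streamEvents(chunks: Iterable<string>): Generator<StreamEvent> {
  let contentSoFar = ''
  let hadError = false
  try {
    for (const chunk of chunks) {
      contentSoFar += chunk
      yield { phase: 'chunk', content: chunk }
    }
  } catch (err) {
    hadError = true
    yield {
      phase: 'error',
      message: err instanceof Error ? err.message : String(err),
    }
  }
  // Guarded terminal event: skipped entirely on the error path.
  if (!hadError) {
    yield { phase: 'done', rawContent: contentSoFar }
  }
}
```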
In `@lib/requirements/auth.ts`:
- Around line 86-88: RoleBasedAuthorizationService policy maps can omit the new
action kind generate_requirements because policies are typed as
Partial<Record<...>>, causing runtime failures; update the types and any policy
initializations to require generate_requirements by making the action map
non-partial (e.g., change Partial<Record<...>> to Record<...>) or add explicit
default entries for generate_requirements in every policy object, and audit any
places where policies are constructed (RoleBasedAuthorizationService, policy
constants) to ensure the generate_requirements key is present with an
appropriate policy function.
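The typing change suggested for the policy maps can be sketched like this; the action and policy names are hypothetical (only `generate_requirements` comes from the PR):

```typescript
// With Record instead of Partial<Record>, omitting any action key,
// including the new generate_requirements, is a compile-time error.
type RequirementAction = 'read' | 'create' | 'generate_requirements'
type PolicyMap = Record<RequirementAction, (role: string) => boolean>

const defaultPolicy: PolicyMap = {
  read: () => true,
  create: role => role === 'editor' || role === 'admin',
  generate_requirements: role => role === 'admin',
}
```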
In `@tests/unit/ai-requirement-generator.test.tsx`:
- Around line 118-130: The test currently only checks the presence of the close
button; update it to assert the close behavior by simulating a user click and
verifying the onClose handler is invoked. In the test for
AiRequirementGenerator, keep the onClose = vi.fn(), import/use userEvent (or
fireEvent) to click the element found by screen.getByLabelText('close'), then
assert expect(onClose).toHaveBeenCalled() (or toHaveBeenCalledTimes(1)) to
exercise the click path and protect the modal exit behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ef47da72-525a-4d42-aeb7-af8c3a458139
📒 Files selected for processing (26)
- .devcontainer/devcontainer.json
- .devcontainer/docker-compose.gpu.yml
- .devcontainer/docker-compose.yml
- .devcontainer/ollama-entrypoint.sh
- .devcontainer/strict/docker-compose.yml
- .env.example
- CONTRIBUTING.md
- app/[locale]/requirements/requirements-client.tsx
- app/api/ai/generate-requirements/route.ts
- app/api/ai/models/route.ts
- app/api/ai/system-prompt/route.ts
- app/api/requirements/route.ts
- components/AiRequirementGenerator.tsx
- cspell.jsonc
- lib/ai/ollama-client.ts
- lib/ai/requirement-prompt.ts
- lib/mcp/server.ts
- lib/requirements/auth.ts
- lib/requirements/service.ts
- messages/en.json
- messages/sv.json
- tests/unit/ai-models-route.test.ts
- tests/unit/ai-requirement-generator.test.tsx
- tests/unit/ollama-client.test.ts
- tests/unit/requirement-prompt.test.ts
- tests/unit/requirements-client.test.tsx
…or handling
- Added `timeoutMs` option to `generateChat` and `generateChatStream` functions to customize request timeout.
- Enhanced error handling in `generateChatStream` to yield error messages on abort or other errors.
- Processed any leftover data in the buffer after the stream ends to ensure all responses are handled.

fix: trim custom instruction in user prompt builder
- Updated `buildUserPrompt` to trim the `customInstruction` input to avoid unnecessary whitespace.

feat: implement taxonomy loading functionality
- Introduced `loadTaxonomy` function to fetch and structure taxonomy data from various sources.

chore: update MCP server to include areaId in requirement generation
- Modified the input schema to accept an optional `areaId` for generated requirements.
- Updated documentation to reflect the new `areaId` parameter.

fix: enforce topic validation in requirements service
- Added validation to ensure `topic` is provided and not empty when generating requirements.

chore: enhance user feedback messages in requirements service
- Updated messages to include the topic in the response for better clarity.

docs: add help text for form fields in English and Swedish
- Included help descriptions for topic, area, model, and custom instruction fields in `en.json` and `sv.json`.

test: add unit tests for help buttons and close button functionality
- Implemented tests for help button toggling and close button interaction in `AiRequirementGenerator`.
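The `timeoutMs` fallback mentioned in the first commit can be sketched like this. `requestSignal` is a hypothetical helper name; the 180_000 ms default is taken from the review discussion, and the precedence rule (explicit signal wins) follows the reviewer's suggestion:

```typescript
// Caller-supplied signal takes precedence; otherwise build a timeout
// signal from the configurable timeoutMs, defaulting to 180_000 ms.
interface ChatRequestOptions {
  signal?: AbortSignal
  timeoutMs?: number
}

function requestSignal(options: ChatRequestOptions): AbortSignal {
  return options.signal ?? AbortSignal.timeout(options.timeoutMs ?? 180_000)
}
```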
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #64 +/- ##
==========================================
- Coverage 61.52% 60.67% -0.86%
==========================================
Files 185 192 +7
Lines 12495 13091 +596
Branches 4642 4804 +162
==========================================
+ Hits 7688 7943 +255
- Misses 4723 5062 +339
- Partials 84 86 +2
Actionable comments posted: 1
🧹 Nitpick comments (2)
lib/requirements/service.ts (1)
2293-2318: Consider reusing the `loadTaxonomy` helper instead of duplicating taxonomy loading logic.

The `loadTaxonomy` function in `lib/ai/taxonomy.ts` performs the same taxonomy loading and transformation. Reusing it would reduce duplication and ensure consistency.

♻️ Suggested refactor

```diff
+import { loadTaxonomy } from '@/lib/ai/taxonomy'
+
 async generateRequirements(context, input) {
   const locale = input.locale ?? 'en'
   const topic = (input.topic ?? '').trim()
   // ...
   return withLogging(
     // ...
     async () => {
-      const nameKey = locale === 'sv' ? 'nameSv' : 'nameEn'
-
-      const [categories, types, qcs, riskLevels, scenarios] =
-        await Promise.all([
-          listCategories(db),
-          listTypes(db),
-          listQualityCharacteristics(db),
-          listRiskLevels(db),
-          listScenarios(db),
-        ])
-
-      const qcMap = new Map(qcs.map(qc => [qc.id, qc]))
-
-      const taxonomy: import('@/lib/ai/requirement-prompt').TaxonomyData = {
-        categories: categories.map(c => ({ id: c.id, name: c[nameKey] })),
-        qualityCharacteristics: qcs.map(qc => ({
-          id: qc.id,
-          name: qc[nameKey],
-          parentName: qc.parentId
-            ? qcMap.get(qc.parentId)?.[nameKey]
-            : undefined,
-        })),
-        riskLevels: riskLevels.map(r => ({ id: r.id, name: r[nameKey] })),
-        scenarios: scenarios.map(s => ({ id: s.id, name: s[nameKey] })),
-        types: types.map(t => ({ id: t.id, name: t[nameKey] })),
-      }
+      const taxonomy = await loadTaxonomy(db, locale as 'en' | 'sv')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/requirements/service.ts` around lines 2293 - 2318, Replace the duplicated taxonomy loading logic that builds the taxonomy object with a call to the existing loadTaxonomy helper; specifically, remove the manual Promise.all + mapping block that produces the taxonomy variable and instead call loadTaxonomy(db, locale) (or the actual helper signature in lib/ai/taxonomy.ts) and assign its result to taxonomy, ensuring the parentName mapping for qualityCharacteristics is preserved by the helper; also add or update the import for loadTaxonomy from lib/ai/taxonomy.ts and keep the nameKey/locale usage consistent with the helper call.

components/AiRequirementGenerator.tsx (1)
765-784: Hardcoded type and risk level IDs reduce maintainability.

The display logic assumes `typeId === 1` is "Functional" and `riskLevelId` values 1/2/3 map to specific risk levels. If database IDs change, this code will display incorrect labels without any visible error.

Consider either:
- Passing type/risk level lookup data as props to resolve names dynamically
- Having the AI output string identifiers (e.g., `"functional"`) instead of numeric IDs
- Defining a shared constant mapping that can be validated against the database

♻️ Example approach: pass lookup data as props

```diff
 interface AiRequirementGeneratorProps {
   areas: Array<{ id: number; name: string }>
+  types: Array<{ id: number; name: string }>
+  riskLevels: Array<{ id: number; name: string; color: string }>
   onClose: () => void
   onCreated: () => void
   open: boolean
 }
```

Then resolve names via lookup:

```ts
const typeName = types.find(t => t.id === req.typeId)?.name ?? t('functional')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 765 - 784, The UI currently hardcodes meaning of numeric IDs (req.typeId and req.riskLevelId) which is brittle; change the component to resolve display names from a lookup instead: accept props like types and riskLevels (arrays of {id,name} or similar) or a shared constant map, then replace the inline checks for req.typeId and req.riskLevelId in the render block with lookups (e.g., find by id and fallback to t('functional')/t('riskLow') if missing) and derive the CSS class via the risk level key or lookup value rather than numeric comparisons; update any helper/prop types (AiRequirementGenerator props, and places that render the requirement item) to pass the lookup data.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 456-463: The close button's touch target is too small; update the
button element (the one with onClick={handleClose} and the <X .../> icon) to add
the required minimum touch target classes (min-h-[44px] and min-w-[44px]) to its
className alongside the existing padding and hover classes so it meets the
44×44px accessibility guideline.
---
Nitpick comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 765-784: The UI currently hardcodes meaning of numeric IDs
(req.typeId and req.riskLevelId) which is brittle; change the component to
resolve display names from a lookup instead: accept props like types and
riskLevels (arrays of {id,name} or similar) or a shared constant map, then
replace the inline checks for req.typeId and req.riskLevelId in the render block
with lookups (e.g., find by id and fallback to t('functional')/t('riskLow') if
missing) and derive the CSS class via the risk level key or lookup value rather
than numeric comparisons; update any helper/prop types (AiRequirementGenerator
props, and places that render the requirement item) to pass the lookup data.
In `@lib/requirements/service.ts`:
- Around line 2293-2318: Replace the duplicated taxonomy loading logic that
builds the taxonomy object with a call to the existing loadTaxonomy helper;
specifically, remove the manual Promise.all + mapping block that produces the
taxonomy variable and instead call loadTaxonomy(db, locale) (or the actual
helper signature in lib/ai/taxonomy.ts) and assign its result to taxonomy,
ensuring the parentName mapping for qualityCharacteristics is preserved by the
helper; also add or update the import for loadTaxonomy from lib/ai/taxonomy.ts
and keep the nameKey/locale usage consistent with the helper call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d6e827d3-e307-400d-aee8-7dcadff98ddd
📒 Files selected for processing (16)
- .devcontainer/docker-compose.yml
- .devcontainer/ollama-entrypoint.sh
- .devcontainer/strict/docker-compose.yml
- .env.example
- app/[locale]/requirements/requirements-client.tsx
- app/api/ai/generate-requirements/route.ts
- app/api/ai/system-prompt/route.ts
- components/AiRequirementGenerator.tsx
- lib/ai/ollama-client.ts
- lib/ai/requirement-prompt.ts
- lib/ai/taxonomy.ts
- lib/mcp/server.ts
- lib/requirements/service.ts
- messages/en.json
- messages/sv.json
- tests/unit/ai-requirement-generator.test.tsx
✅ Files skipped from review due to trivial changes (1)
- .env.example
🚧 Files skipped from review as they are similar to previous changes (5)
- app/api/ai/system-prompt/route.ts
- tests/unit/ai-requirement-generator.test.tsx
- .devcontainer/ollama-entrypoint.sh
- .devcontainer/strict/docker-compose.yml
- messages/en.json
Actionable comments posted: 8
🧹 Nitpick comments (2)
components/AiRequirementGenerator.tsx (2)
124-132: Consider preserving the original overflow value.

The cleanup resets `overflow` to `''` rather than the original value, which could cause issues if another component had already modified the body overflow.

♻️ Safer pattern

```diff
 useEffect(() => {
   if (open) {
+    const originalOverflow = document.body.style.overflow
     document.body.style.overflow = 'hidden'
+    return () => {
+      document.body.style.overflow = originalOverflow
+    }
   }
-  return () => {
-    document.body.style.overflow = ''
-  }
 }, [open])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 124 - 132, The useEffect in AiRequirementGenerator.tsx currently sets document.body.style.overflow='hidden' when open and restores it to '' on cleanup; instead capture the original overflow value before changing it (e.g., const previousOverflow = document.body.style.overflow) and in the cleanup restore document.body.style.overflow = previousOverflow so other components' settings are preserved; ensure this logic is tied to the same useEffect that reads the open prop and only restores the saved value when unmounting or when open flips.
740-747: Key collision risk with truncated description.

Using `req.description.slice(0, 40)` as key could cause collisions if two requirements share the same first 40 characters, leading to incorrect React reconciliation.

Consider generating a stable unique key when parsing the requirements (around lines 307-309):

♻️ Proposed approach

```diff
 case 'done': {
   const rawContent = payload.rawContent as string
   let parsed: { requirements: GeneratedRequirement[] }
   try {
     parsed = JSON.parse(rawContent) as {
       requirements: GeneratedRequirement[]
     }
   } catch {
     setPhase('error')
     setError(t('parseError'))
     return
   }
   const reqs = parsed.requirements ?? []
-  setRequirements(reqs)
+  const reqsWithKeys = reqs.map((r, idx) => ({
+    ...r,
+    _key: `${Date.now()}-${idx}`,
+  }))
+  setRequirements(reqsWithKeys)
```

Then update the key usage:

```diff
- key={`req-${req.description.slice(0, 40)}`}
+ key={req._key}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 740 - 747, The current React key uses a truncated description (key={`req-${req.description.slice(0, 40)}`}) which can collide; when you build/parse the requirements array (where requirement objects are created), add a stable unique id property to each requirement (e.g., generate a uuid or deterministic hash of the full description) and then change the map key to use that id (e.g., key={`req-${req.id}`}) instead of slicing the description to ensure stable reconciliation for requirements.map.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.devcontainer/strict/docker-compose.yml:
- Line 35: Replace the floating image tag "ollama/ollama:latest" with a pinned
version or digest in both docker-compose files (the image definition line
currently using ollama/ollama:latest); choose a specific stable tag (e.g.,
vX.Y.Z) or an immutable digest and update the image value in
.devcontainer/strict/docker-compose.yml and .devcontainer/docker-compose.yml so
both match the same pinned reference to ensure reproducible dev containers.
- Around line 30-32: The app's depends_on uses condition: service_started which
lets app proceed before Ollama is fully ready; add a Docker healthcheck to the
ollama service that runs the same readiness command used in the entrypoint
(e.g., "ollama list") and then change the app depends_on condition from
service_started to service_healthy so the app waits for ollama's healthcheck to
pass; update the ollama service block to include a healthcheck section (test,
interval, timeout, retries) and update the app's depends_on entry to reference
service_healthy for the ollama service.
In `@components/AiRequirementGenerator.tsx`:
- Around line 722-736: The "Select/Deselect all" button lacks a visible focus
ring for keyboard users; update the button rendered in AiRequirementGenerator
(the element that calls toggleAll and displays selected/requirements counts) to
include accessible focus styles — e.g., add focus-visible and focus classes such
as focus-visible:ring-2, focus-visible:ring-primary-500 (and optional
focus-visible:ring-offset-2 / focus-visible:outline-none or equivalent
theme-safe ring-offset class) so the button shows a clear focus ring in both
light and dark modes while preserving existing color classes.
- Around line 556-570: The Advanced toggle button in AiRequirementGenerator
toggles showAdvanced via setShowAdvanced but lacks ARIA attributes and a visible
focus ring; update the button (the element rendering ChevronDown/ChevronRight
and t('advancedLabel')) to include aria-expanded={showAdvanced} and
aria-controls="advanced-section", add a corresponding id="advanced-section" to
the controlled <div> that contains the expandable content, and ensure the button
has a visible keyboard focus style (e.g., add the project's focus utility
classes so a ring/outline appears on keyboard focus).
- Around line 803-850: Footer buttons (Cancel/Close, CreateSelected, Generate)
don't include visible focus rings for keyboard users; update the button
classNames in the blocks containing handleCancel, handleClose, handleCreate, and
handleGenerate to add focus-visible ring utilities—for primary actions
(Create/Generate) add something like focus-visible:ring-2
focus-visible:ring-primary-500 dark:focus-visible:ring-primary-400 (and
focus-visible:outline-none if needed), and for the border-style Cancel/Close add
focus-visible:ring-2 focus-visible:ring-secondary-300
dark:focus-visible:ring-secondary-600—ensuring the disabled state still respects
opacity and that the same changes are applied to both conditional render
branches.
- Around line 586-605: The toggle button that calls setShowDefaultInstruction
and reads showDefaultInstruction needs accessibility attributes and visible
focus styling: add aria-expanded={showDefaultInstruction} and
aria-controls="default-instruction-content" to that button, ensure the button
includes a visible focus ring class (e.g., focus:outline-none focus:ring or
project equivalent) so keyboard users see focus, and add
id="default-instruction-content" to the corresponding <pre> element that
contains the default instruction so the aria-controls target exists; keep the
existing onClick and icon logic unchanged.
- Around line 624-657: The toggle button that controls showSystemPrompt runs an
async fetch but isn't disabled during systemPromptLoading and lacks
accessibility attributes and a visible focus ring; update the button (the
element that calls setShowSystemPrompt/setSystemPromptLoading/setSystemPrompt
and renders Eye/EyeOff) to set disabled={systemPromptLoading}, short-circuit the
onClick when loading, add aria-pressed={showSystemPrompt} and
aria-controls="system-prompt-content" (and add id="system-prompt-content" to the
corresponding <pre>), and add keyboard-focus styling classes (e.g.,
focus:outline-none focus:ring focus:ring-primary-300 or your project's focus
ring utility) so it is not clickable during fetch and is accessible/focusable.
- Around line 430-439: The component currently returns null when open is false
which prevents AnimatePresence from running exit animations; remove the early
"if (!open) return null" and instead always render <AnimatePresence> and
conditionally render the animated node (e.g., replace the direct presence of
motion.div with {open && <motion.div ...>} inside AnimatePresence) so that
AnimatePresence can orchestrate enter/exit on the motion.div in
AiRequirementGenerator; keep the existing animate/initial/exit props on
motion.div.
---
Nitpick comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 124-132: The useEffect in AiRequirementGenerator.tsx currently
sets document.body.style.overflow='hidden' when open and restores it to '' on
cleanup; instead capture the original overflow value before changing it (e.g.,
const previousOverflow = document.body.style.overflow) and in the cleanup
restore document.body.style.overflow = previousOverflow so other components'
settings are preserved; ensure this logic is tied to the same useEffect that
reads the open prop and only restores the saved value when unmounting or when
open flips.
- Around line 740-747: The current React key uses a truncated description
(key={`req-${req.description.slice(0, 40)}`}) which can collide; when you
build/parse the requirements array (where requirement objects are created), add
a stable unique id property to each requirement (e.g., generate a uuid or
deterministic hash of the full description) and then change the map key to use
that id (e.g., key={`req-${req.id}`}) instead of slicing the description to
ensure stable reconciliation for requirements.map.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c117cd97-2817-4b47-a141-ee8eca75ff9a
📒 Files selected for processing (4)
- .devcontainer/docker-compose.yml
- .devcontainer/strict/docker-compose.yml
- components/AiRequirementGenerator.tsx
- lib/requirements/service.ts
✅ Files skipped from review due to trivial changes (1)
- lib/requirements/service.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- .devcontainer/docker-compose.yml
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/AiRequirementGenerator.tsx`:
- Line 170: The UI busy state currently uses inProgress = phase === 'thinking'
|| phase === 'generating' but omits the creating flag, allowing conflicting
actions during POST loops; update the component to derive a single isBusy (or
isLoading) boolean that combines inProgress and creating (e.g., isBusy =
inProgress || creating) and then use that isBusy to disable controls, guard
actions that start generation or close the modal, and show inline loading text
where phase/inProgress was previously used (affects the inProgress declaration
and usages around the AiRequirementGenerator component including the areas
referenced near lines 369-412).
- Around line 119-123: The catch handler in AiRequirementGenerator.tsx currently
sets a hardcoded English fallback ('Failed to load models') via setModelsError;
replace that string with a translation lookup using the i18n translator
(t('...')) so the UI uses localized copy—update the catch block that calls
setModelsError (and the similar hardcoded fallback at the other occurrence
around lines 661-663) to call t with a new/appropriate key (e.g.,
'errors.failedToLoadModels') and ensure that key is added to the translation
files; keep the err instanceof Error ? err.message branch intact and only
localize the fallback string.
- Around line 369-405: The loop currently POSTs each item in selectedReqs and on
any failure returns while leaving already-created items in selectedReqs, causing
duplicates on retry; change the logic in the create flow that iterates
selectedReqs (the fetch('/api/requirements') loop) to collect failed items into
a new array (e.g., failedReqs) and remove or mark successful items before
returning so retries only resend failedReqs, and ensure you update state
(selectedReqs or a failures state) and call setCreating(false) and
setPhase('error')/setError(...) accordingly instead of leaving the original
selection intact; alternatively implement idempotency on the API side, but for
this fix update the client to replace selectedReqs with failedReqs (or remove
successful ones) before surfacing the error and calling onCreated/onClose only
when errors.length === 0.
- Around line 447-462: The modal container (the motion.div that renders the
panel) must be exposed as a dialog for assistive tech: add role="dialog" and
aria-modal="true" to the motion.div and set aria-labelledby to the id of the
title element; give the h2 that renders {t('generateTitle')} a stable id (e.g.,
"ai-requirement-dialog-title" or similar) and reference that id from
aria-labelledby on the motion.div so the dialog is properly labeled.
- Around line 829-879: The footer's current "flex items-center justify-between"
layout doesn't collapse on small screens; change it to a mobile-first responsive
layout by making the root container a column on small screens and a row on
larger ones (e.g., replace/add classes to "flex flex-col sm:flex-row
items-center sm:justify-between gap-2 sm:gap-0"), then make the two inner button
group wrappers responsive (give the two inner divs classes like "w-full
sm:w-auto flex gap-2 justify-center sm:justify-start") and ensure action buttons
(those wired to handleCancel, handleClose, handleCreate, handleGenerate) become
full-width blocks on mobile (add "w-full sm:w-auto" / "block" to their
className) so buttons stack and expand on narrow viewports but return to inline
layout on sm+ screens.
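The retry-safe create loop described in the third inline comment above, sketched with a generic `post` callback standing in for the `fetch('/api/requirements')` call (the helper name and shape are assumptions):

```typescript
// Collect failures instead of bailing on the first error, so a retry
// only resends the items that did not get created.
async function createSelected<T>(
  selected: T[],
  post: (item: T) => Promise<boolean>,
): Promise<{ failed: T[] }> {
  const failed: T[] = []
  for (const item of selected) {
    // A rejected POST counts as a failure for that item only.
    const ok = await post(item).catch(() => false)
    if (!ok) failed.push(item)
  }
  return { failed }
}
```

The caller would then replace the selection with `failed` before surfacing the error, and invoke `onCreated`/`onClose` only when `failed.length === 0`.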
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1ac9939f-7e14-40cf-b710-b151b7416958
📒 Files selected for processing (4)
- .devcontainer/docker-compose.yml
- .devcontainer/strict/docker-compose.yml
- .gitignore
- components/AiRequirementGenerator.tsx
✅ Files skipped from review due to trivial changes (1)
- .gitignore
🚧 Files skipped from review as they are similar to previous changes (1)
- .devcontainer/docker-compose.yml
Actionable comments posted: 2
♻️ Duplicate comments (1)
components/AiRequirementGenerator.tsx (1)
213-232: ⚠️ Potential issue | 🟠 Major

Block close actions while create requests are in flight.

`handleClose` only treats `inProgress` as cancellable work. During `creating`, both close buttons still stay active, so the modal can unmount while the `/api/requirements` loop keeps posting in the background and later calls `onCreated()`/`onClose()` again. Fold `creating` into the close guard/disabled state too.

As per coding guidelines, "Track an `isLoading` boolean for async work and disable conflicting controls with `disabled={isLoading}` while work is pending; show inline loading feedback in UI text".

Also applies to: 481-485, 860-863
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 213 - 232, handleClose currently only checks inProgress before aborting and closing, but requests also run while creating, allowing the modal to unmount mid-request; update the guard to treat both inProgress and creating as "work pending" (e.g., use a derived isLoading = inProgress || creating) so the handler (handleClose) aborts active requests via abortRef and prevents onClose/onCreated from being called while work is pending, and also disable the close buttons/controls (disabled={isLoading}) and show inline loading feedback; apply the same pattern to the other close handlers/close buttons referenced around the other blocks (the handlers at the ranges noted) so all close paths check the unified isLoading flag before closing.
🧹 Nitpick comments (1)
messages/en.json (1)
1043-1046: Move the new help strings under the `help` namespace.

These `ai.*Help` entries bypass the repository's required structure for help content, so this modal won't follow the same translation layout as the existing `helpButton()`/`helpPanel()` surfaces. Please relocate them under `help.*` and update the component lookup to match.

As per coding guidelines, "Keep help keys relative to the `help` namespace in translation files".
Verify each finding against the current code and only fix it if needed. In `@messages/en.json` around lines 1043 - 1046, Move the four top-level ai.*Help keys into the help namespace (e.g., rename "topicHelp", "areaHelp", "modelHelp", "customInstructionHelp" to "help.topicHelp", "help.areaHelp", "help.modelHelp", "help.customInstructionHelp" or similar help.* keys) in messages/en.json, and update any component lookup that reads the old ai.*Help keys (the component that renders the modal / the code paths used by helpButton() / helpPanel()) to read the new help.* keys instead so translation resolution follows the repository’s help namespace convention.
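To illustrate the namespace move, here is a plain-object sketch. The help copy string is invented, and `t` is a toy stand-in for the project's translator, shown only to make the dotted-path lookup concrete:

```typescript
// After the move, help copy lives under the help namespace.
const messages = {
  help: {
    topicHelp: 'Describe what the requirement should cover.',
  },
} as const

// Toy dotted-path lookup; falls back to the key when nothing matches.
function t(key: string): string {
  const value = key
    .split('.')
    .reduce<unknown>(
      (obj, part) => (obj as Record<string, unknown> | undefined)?.[part],
      messages,
    )
  return typeof value === 'string' ? value : key
}
```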
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 411-414: The current flow sets setPhase('error') and removes
successfully-created items from selected after a partial failure, which hides
the results list and the "Create Selected" button and prevents retry; change the
error handling in the create flow (locations around
AiRequirementGenerator:setPhase/setError usages) to preserve the 'done' phase UI
when some items succeeded and some failed—e.g., introduce and use a separate
transient state or flag (createErrorBanner / createPhase = 'create-error')
instead of switching phase to 'error', call setError(...) to show the banner
while keeping phase === 'done' so the results list and the Create Selected
button remain visible for retry; apply the same change to the other occurrences
around the blocks noted (also near lines 740-755 and 870-880) so partial
failures are retryable.
- Around line 670-678: The UI fetch for `/api/ai/system-prompt?locale=${locale}`
in the component (see setSystemPromptLoading, fetch(...), setSystemPrompt)
points to a non-existent backend route; either implement and expose an API
handler that returns { prompt?: string } at /api/ai/system-prompt (respecting
locale query) or change the frontend to source the preview from an existing
shared prompt helper/endpoint (for example reuse the logic used by
/api/ai/generate-requirements or a shared getSystemPrompt utility) so
setSystemPrompt receives a valid string instead of the error fallback.
---
Duplicate comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 213-232: handleClose currently only checks inProgress before
aborting and closing, but requests also run while creating, allowing the modal
to unmount mid-request; update the guard to treat both inProgress and creating
as "work pending" (e.g., use a derived isLoading = inProgress || creating) so
the handler (handleClose) aborts active requests via abortRef and prevents
onClose/onCreated from being called while work is pending, and also disable the
close buttons/controls (disabled={isLoading}) and show inline loading feedback;
apply the same pattern to the other close handlers/close buttons referenced
around the other blocks (the handlers at the ranges noted) so all close paths
check the unified isLoading flag before closing.
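The unified busy flag described above can be sketched as a pure function; the phase names follow those quoted in the comments, while the function name is an assumption:

```typescript
type Phase = 'idle' | 'thinking' | 'generating' | 'done' | 'error'

// Both generation (phase-driven) and the create POST loop count as
// pending work, so close/cancel controls key off a single flag.
function isBusy(phase: Phase, creating: boolean): boolean {
  const inProgress = phase === 'thinking' || phase === 'generating'
  return inProgress || creating
}
```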
---
Nitpick comments:
In `@messages/en.json`:
- Around line 1043-1046: Move the four top-level ai.*Help keys into the help
namespace (e.g., rename "topicHelp", "areaHelp", "modelHelp",
"customInstructionHelp" to "help.topicHelp", "help.areaHelp", "help.modelHelp",
"help.customInstructionHelp" or similar help.* keys) in messages/en.json, and
update any component lookup that reads the old ai.*Help keys (the component that
renders the modal / the code paths used by helpButton() / helpPanel()) to read
the new help.* keys instead so translation resolution follows the repository’s
help namespace convention.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bc412034-c444-47cd-a65e-e3805305ac35
📒 Files selected for processing (3)
- components/AiRequirementGenerator.tsx
- messages/en.json
- messages/sv.json
🚧 Files skipped from review as they are similar to previous changes (1)
- messages/sv.json
|
@coderabbitai review |
✅ Actions performed: Review triggered.
|
|
@coderabbitai review |
✅ Actions performed: Review triggered.
|
🧹 Nitpick comments (2)
components/AiRequirementGenerator.tsx (2)
820-842: Consider extracting taxonomy ID constants for maintainability.

The hardcoded IDs (`typeId === 1` for functional, `riskLevelId === 1/2/3` for risk levels) rely on the database having these specific values. While the server-side validation in `/api/ai/generate-requirements` validates against the taxonomy, these magic numbers in the UI could silently break if taxonomy data changes.

This is low-risk since the taxonomy is controlled and the server validates IDs, but documenting the expected values or extracting them to constants would improve maintainability:

```ts
// At top of file or in a shared constants file
const TAXONOMY_IDS = {
  TYPE_FUNCTIONAL: 1,
  TYPE_NON_FUNCTIONAL: 2,
  RISK_LOW: 1,
  RISK_MEDIUM: 2,
  RISK_HIGH: 3,
} as const
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 820 - 842, Replace the magic numeric taxonomy IDs used in AiRequirementGenerator's rendering (checks against req.typeId and req.riskLevelId) with named constants to improve maintainability; add a constant object (e.g., TAXONOMY_IDS with TYPE_FUNCTIONAL, TYPE_NON_FUNCTIONAL, RISK_LOW, RISK_MEDIUM, RISK_HIGH) at the top of the file or import from a shared constants module, then update the conditional expressions that reference req.typeId and req.riskLevelId to use those constants instead of 1/2/3 and ensure translations (t('functional')/t('nonFunctional')/t('riskHigh') etc.) remain unchanged.
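A hedged sketch of how the suggested constants could replace the magic numbers in the rendering logic. The field names (`typeId`, `riskLevelId`) and the returned translation keys (`functional`, `nonFunctional`, `riskLow`, `riskMedium`, `riskHigh`) follow the review text; the real component may use different helper names.

```typescript
// Values assumed from the review; the server-side taxonomy remains
// the source of truth for these IDs.
const TAXONOMY_IDS = {
  TYPE_FUNCTIONAL: 1,
  TYPE_NON_FUNCTIONAL: 2,
  RISK_LOW: 1,
  RISK_MEDIUM: 2,
  RISK_HIGH: 3,
} as const

// Maps a type ID to the translation key used by t(...).
function typeLabel(typeId: number): string {
  return typeId === TAXONOMY_IDS.TYPE_FUNCTIONAL ? "functional" : "nonFunctional"
}

// Maps a risk-level ID to the translation key used by t(...).
function riskLabel(riskLevelId: number): string {
  switch (riskLevelId) {
    case TAXONOMY_IDS.RISK_LOW:
      return "riskLow"
    case TAXONOMY_IDS.RISK_MEDIUM:
      return "riskMedium"
    default:
      return "riskHigh"
  }
}
```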
663-666: Use `aria-expanded` instead of `aria-pressed` for consistency.

This button controls the visibility of expandable content (the system prompt preview), similar to the Advanced and default instruction toggles, which correctly use `aria-expanded`. The `aria-pressed` attribute is semantically for toggle buttons representing on/off state, while `aria-expanded` is for disclosure controls.

♿ Proposed fix

```diff
 <button
   aria-controls="system-prompt-content"
-  aria-pressed={showSystemPrompt}
+  aria-expanded={showSystemPrompt}
   className="flex min-h-11 min-w-11 items-center gap-1 text-xs text-primary-600 hover:text-primary-700 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-primary-500 disabled:opacity-50 dark:text-primary-400 dark:hover:text-primary-300 dark:focus-visible:ring-primary-400"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/AiRequirementGenerator.tsx` around lines 663 - 666, The button rendering the system prompt preview is using aria-pressed but should use aria-expanded because it controls expandable content; update the button element that references showSystemPrompt to replace aria-pressed={showSystemPrompt} with aria-expanded={showSystemPrompt} (keep aria-controls="system-prompt-content" and the existing className/props intact) so the disclosure semantics match the Advanced/default instruction toggles.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@components/AiRequirementGenerator.tsx`:
- Around line 820-842: Replace the magic numeric taxonomy IDs used in
AiRequirementGenerator's rendering (checks against req.typeId and
req.riskLevelId) with named constants to improve maintainability; add a constant
object (e.g., TAXONOMY_IDS with TYPE_FUNCTIONAL, TYPE_NON_FUNCTIONAL, RISK_LOW,
RISK_MEDIUM, RISK_HIGH) at the top of the file or import from a shared constants
module, then update the conditional expressions that reference req.typeId and
req.riskLevelId to use those constants instead of 1/2/3 and ensure translations
(t('functional')/t('nonFunctional')/t('riskHigh') etc.) remain unchanged.
- Around line 663-666: The button rendering the system prompt preview is using
aria-pressed but should use aria-expanded because it controls expandable
content; update the button element that references showSystemPrompt to replace
aria-pressed={showSystemPrompt} with aria-expanded={showSystemPrompt} (keep
aria-controls="system-prompt-content" and the existing className/props intact)
so the disclosure semantics match the Advanced/default instruction toggles.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fc1b4b05-55cc-4fb8-9a3f-6d26e184a1b0
📒 Files selected for processing (3)
- components/AiRequirementGenerator.tsx
- messages/en.json
- messages/sv.json
|
This was the first iteration of this functionality, but I'm keeping the PR open for a while since the feature works with Ollama in the devcontainer, though the functionality (UI/UX) on the main branch has progressed further than in this branch. |
Description
Screenshots (if applicable)
Related Issues
Type of Change
Testing
- `npm run check` passes locally

Checklist