diff --git a/.agent/rules/00-project-specs.md b/.agent/rules/00-project-specs.md
index 650356d..92e4080 100644
--- a/.agent/rules/00-project-specs.md
+++ b/.agent/rules/00-project-specs.md
@@ -30,10 +30,12 @@ Before generating code or planning a solution, you MUST conceptually load the co
4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
- _Action:_
- - **Read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`** for exact table structures and constraints.
+ - **Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** for exact table structures and constraints.
- **Consult `specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- - **Check `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql`** to understand initial data states.
- - **Check `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql`** to understand initial permissions states.
+ - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql`** to understand initial data states.
+ - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`** to understand initial permissions states.
+ - **Check `specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** for migration context (ADR-017).
+ - **Check `specs/03-Data-and-Storage/03-05-n8n-migration-setup-guide.md`** for n8n workflow setup.
- _Constraint:_ NEVER invent table names or columns. Use ONLY what is defined here.
5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
@@ -68,8 +70,9 @@ When proposing a change or writing code, you must explicitly reference the sourc
### 4. Schema Changes
- **DO NOT** create or run TypeORM migration files.
-- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`.
+- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`.
- Update `specs/03-Data-and-Storage/03-01-data-dictionary.md` if adding/changing columns.
- Notify the user so they can apply the SQL change to the live database manually.
+- **AI Isolation (ADR-018):** Ollama runs on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. All writes go through DMS API.
---
diff --git a/.agents/README.md b/.agents/README.md
index e430d5b..22ae7cb 100644
--- a/.agents/README.md
+++ b/.agents/README.md
@@ -1,8 +1,8 @@
# 🚀 Spec-Kit: Antigravity Skills & Workflows
> **The Event Horizon of Software Quality.**
-> *Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit).*
-> *Version: 1.1.0*
+> _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
+> _Version: 1.1.0_
---
@@ -11,6 +11,7 @@
Welcome to the **Antigravity Edition** of Spec-Kit. This system is architected to empower your AI pair programmer (Antigravity) to drive the entire Software Development Life Cycle (SDLC) using two powerful mechanisms: **Workflows** and **Skills**.
### 🔄 Dual-Mode Intelligence
+
In this edition, Spec-Kit commands have been split into two interactive layers:
1. **Workflows (`/command`)**: High-level orchestrations that guide the agent through a series of logical steps. **The easiest way to run a skill is by typing its corresponding workflow command.**
@@ -25,10 +26,27 @@ In this edition, Spec-Kit commands have been split into two interactive layers:
To enable these agent capabilities in your project:
-1. **Add the folder**: Drop the `.agent/` folder into the root of your project workspace.
-2. **That's it!** Antigravity automatically detects the `.agent/skills` and `.agent/workflows` directories. It will instantly gain the ability to perform Spec-Driven Development.
+1. **Add the folder**: Drop the `.agents/` folder into the root of your project workspace.
+2. **That's it!** Antigravity automatically detects the `.agents/skills` and `.agents/workflows` directories. It will instantly gain the ability to perform Spec-Driven Development.
-> **💡 Compatibility Note:** This toolkit is fully compatible with **Claude Code**. To use it with Claude, simply rename the `.agent` folder to `.claude`. The skills and workflows will function identically.
+> **💡 Compatibility Note:** This toolkit is compatible with multiple AI coding agents. To use it with Claude Code, rename the `.agents` folder to `.claude`. The skills and workflows will function identically.
+
+### Prerequisites (Optional)
+
+Some skills and scripts reference a `.specify/` directory for templates and project memory. If you want the full Spec-Kit experience (template-driven spec/plan creation), create this structure at repo root:
+
+```text
+.specify/
+├── templates/
+│ ├── spec-template.md # Template for /speckit.specify
+│ ├── plan-template.md # Template for /speckit.plan
+│ ├── tasks-template.md # Template for /speckit.tasks
+│ └── agent-file-template.md # Template for update-agent-context.sh
+└── memory/
+ └── constitution.md # Project governance rules (/speckit.constitution)
+```
+
+> **Note:** If `.specify/` is absent, skills will still function — they'll create blank files instead of using templates. The constitution workflow (`/speckit.constitution`) will create this structure for you on first run.
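The optional layout above can be scaffolded in one step. A minimal sketch using the exact file names from the tree (brace expansion assumes bash; file contents start empty, matching the note about blank files):

```shell
# Scaffold the optional .specify/ structure described above.
# File names are taken verbatim from the tree; all files start empty.
mkdir -p .specify/templates .specify/memory
touch .specify/templates/{spec,plan,tasks,agent-file}-template.md
touch .specify/memory/constitution.md
```

Running `/speckit.constitution` afterwards will populate `memory/constitution.md`.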
---
@@ -37,62 +55,84 @@ To enable these agent capabilities in your project:
The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.
```text
-.agent/
-├── skills/ # @ Mentions (Agent Intelligence)
-│ ├── speckit.analyze # Consistency Checker
-│ ├── speckit.checker # Static Analysis Aggregator
-│ ├── speckit.checklist # Requirements Validator
-│ ├── speckit.clarify # Ambiguity Resolver
-│ ├── speckit.constitution # Governance Manager
-│ ├── speckit.diff # Artifact Comparator
-│ ├── speckit.implement # Code Builder (Anti-Regression)
-│ ├── speckit.migrate # Legacy Code Migrator
-│ ├── speckit.plan # Technical Planner
-│ ├── speckit.quizme # Logic Challenger (Red Team)
-│ ├── speckit.reviewer # Code Reviewer
-│ ├── speckit.specify # Feature Definer
-│ ├── speckit.status # Progress Dashboard
-│ ├── speckit.tasks # Task Breaker
-│ ├── speckit.taskstoissues# Issue Tracker Syncer
-│ ├── speckit.tester # Test Runner & Coverage
-│ └── speckit.validate # Implementation Validator
+.agents/
+├── skills/ # @ Mentions (Agent Intelligence)
+│ ├── nestjs-best-practices/ # NestJS Architecture Patterns
+│ ├── next-best-practices/ # Next.js App Router Patterns
+│ ├── speckit.analyze/ # Consistency Checker
+│ ├── speckit.checker/ # Static Analysis Aggregator
+│ ├── speckit.checklist/ # Requirements Validator
+│ ├── speckit.clarify/ # Ambiguity Resolver
+│ ├── speckit.constitution/ # Governance Manager
+│ ├── speckit.diff/ # Artifact Comparator
+│ ├── speckit.implement/ # Code Builder (Anti-Regression)
+│ ├── speckit.migrate/ # Legacy Code Migrator
+│ ├── speckit.plan/ # Technical Planner
+│ ├── speckit.quizme/ # Logic Challenger (Red Team)
+│ ├── speckit.reviewer/ # Code Reviewer
+│ ├── speckit.security-audit/ # Security Auditor (OWASP/CASL/ClamAV)
+│ ├── speckit.specify/ # Feature Definer
+│ ├── speckit.status/ # Progress Dashboard
+│ ├── speckit.tasks/ # Task Breaker
+│ ├── speckit.taskstoissues/ # Issue Tracker Syncer (GitHub + Gitea)
+│ ├── speckit.tester/ # Test Runner & Coverage
+│ └── speckit.validate/ # Implementation Validator
│
-├── workflows/ # / Slash Commands (Orchestration)
-│ ├── 00-speckit.all.md # Full Pipeline
-│ ├── 01-speckit.constitution.md # Governance
-│ ├── 02-speckit.specify.md # Feature Spec
-│ ├── ... (Numbered 00-11)
-│ ├── speckit.prepare.md # Prep Pipeline
-│ └── util-speckit.*.md # Utilities
+├── workflows/ # / Slash Commands (Orchestration)
+│ ├── 00-speckit.all.md # Full Pipeline (10 steps: Specify → Validate)
+│ ├── 01–11-speckit.*.md # Individual phase workflows
+│ ├── speckit.prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
+│ ├── schema-change.md # DB Schema Change (ADR-009)
+│ ├── create-backend-module.md # NestJS Module Scaffolding
+│ ├── create-frontend-page.md # Next.js Page Scaffolding
+│ ├── deploy.md # Deployment via Gitea CI/CD
+│ └── util-speckit.*.md # Utilities (checklist, diff, migrate, etc.)
│
-└── scripts/ # Shared Bash Core (Kinetic logic)
+└── scripts/
+ ├── bash/ # Bash Core (Kinetic logic)
+ │ ├── common.sh # Shared utilities & path resolution
+ │ ├── check-prerequisites.sh # Prerequisite validation
+ │ ├── create-new-feature.sh # Feature branch creation
+ │ ├── setup-plan.sh # Plan template setup
+ │ ├── update-agent-context.sh # Agent file updater (main)
+ │ ├── plan-parser.sh # Plan data extraction (module)
+ │ ├── content-generator.sh # Language-specific templates (module)
+ │ └── agent-registry.sh # Registry of 17 agent types (module)
+ ├── powershell/ # PowerShell Equivalents (Windows-native)
+ │ ├── common.ps1 # Shared utilities & prerequisites
+ │ └── create-new-feature.ps1 # Feature branch creation
+ ├── fix_links.py # Spec link fixer
+ ├── verify_links.py # Spec link verifier
+ └── start-mcp.js # MCP server launcher
```
---
## 🗺️ Mapping: Commands to Capabilities
-| Phase | Workflow Trigger | Antigravity Skill | Role |
-| :--- | :--- | :--- | :--- |
-| **Pipeline** | `/00-speckit.all` | N/A | Runs the full SDLC pipeline. |
-| **Governance** | `/01-speckit.constitution` | `@speckit.constitution` | Establishes project rules & principles. |
-| **Definition** | `/02-speckit.specify` | `@speckit.specify` | Drafts structured `spec.md`. |
-| **Ambiguity** | `/03-speckit.clarify` | `@speckit.clarify` | Resolves gaps post-spec. |
-| **Architecture** | `/04-speckit.plan` | `@speckit.plan` | Generates technical `plan.md`. |
-| **Decomposition** | `/05-speckit.tasks` | `@speckit.tasks` | Breaks plans into atomic tasks. |
-| **Consistency** | `/06-speckit.analyze` | `@speckit.analyze` | Cross-checks Spec vs Plan vs Tasks. |
-| **Execution** | `/07-speckit.implement` | `@speckit.implement` | Builds implementation with safety protocols. |
-| **Quality** | `/08-speckit.checker` | `@speckit.checker` | Runs static analysis (Linting, Security, Types). |
-| **Testing** | `/09-speckit.tester` | `@speckit.tester` | Runs test suite & reports coverage. |
-| **Review** | `/10-speckit.reviewer` | `@speckit.reviewer` | Performs code review (Logic, Perf, Style). |
-| **Validation** | `/11-speckit.validate` | `@speckit.validate` | Verifies implementation matches Spec requirements. |
-| **Preparation** | `/speckit.prepare` | N/A | Runs Specify -> Analyze sequence. |
-| **Checklist** | `/util-speckit.checklist` | `@speckit.checklist` | Generates feature checklists. |
-| **Diff** | `/util-speckit.diff` | `@speckit.diff` | Compares artifact versions. |
-| **Migration** | `/util-speckit.migrate` | `@speckit.migrate` | Port existing code to Spec-Kit. |
-| **Red Team** | `/util-speckit.quizme` | `@speckit.quizme` | Challenges logical flaws. |
-| **Status** | `/util-speckit.status` | `@speckit.status` | Shows feature completion status. |
-| **Tracking** | `/util-speckit.taskstoissues`| `@speckit.taskstoissues`| Syncs tasks to GitHub/Jira/etc. |
+| Phase | Workflow Trigger | Antigravity Skill | Role |
+| :---------------- | :---------------------------- | :------------------------ | :------------------------------------------------------ |
+| **Full Pipeline** | `/00-speckit.all` | N/A | Runs full SDLC pipeline (10 steps: Specify → Validate). |
+| **Governance** | `/01-speckit.constitution` | `@speckit.constitution` | Establishes project rules & principles. |
+| **Definition** | `/02-speckit.specify` | `@speckit.specify` | Drafts structured `spec.md`. |
+| **Ambiguity** | `/03-speckit.clarify` | `@speckit.clarify` | Resolves gaps post-spec. |
+| **Architecture** | `/04-speckit.plan` | `@speckit.plan` | Generates technical `plan.md`. |
+| **Decomposition** | `/05-speckit.tasks` | `@speckit.tasks` | Breaks plans into atomic tasks. |
+| **Consistency** | `/06-speckit.analyze` | `@speckit.analyze` | Cross-checks Spec vs Plan vs Tasks. |
+| **Execution** | `/07-speckit.implement` | `@speckit.implement` | Builds implementation with safety protocols. |
+| **Quality** | `/08-speckit.checker` | `@speckit.checker` | Runs static analysis (Linting, Security, Types). |
+| **Testing** | `/09-speckit.tester` | `@speckit.tester` | Runs test suite & reports coverage. |
+| **Review** | `/10-speckit.reviewer` | `@speckit.reviewer` | Performs code review (Logic, Perf, Style). |
+| **Validation** | `/11-speckit.validate` | `@speckit.validate` | Verifies implementation matches Spec requirements. |
+| **Preparation** | `/speckit.prepare` | N/A | Runs Specify → Analyze prep sequence (5 steps). |
+| **Schema** | `/schema-change` | N/A | DB schema changes per ADR-009 (no migrations). |
+| **Security** | N/A | `@speckit.security-audit` | OWASP Top 10 + CASL + ClamAV audit. |
+| **Checklist** | `/util-speckit.checklist` | `@speckit.checklist` | Generates feature checklists. |
+| **Diff** | `/util-speckit.diff` | `@speckit.diff` | Compares artifact versions. |
+| **Migration** | `/util-speckit.migrate` | `@speckit.migrate` | Port existing code to Spec-Kit. |
+| **Red Team** | `/util-speckit.quizme` | `@speckit.quizme` | Challenges logical flaws. |
+| **Status** | `/util-speckit.status` | `@speckit.status` | Shows feature completion status. |
+| **Tracking** | `/util-speckit.taskstoissues` | `@speckit.taskstoissues` | Syncs tasks to GitHub/Gitea issues. |
---
@@ -100,20 +140,18 @@ The toolkit is organized into modular components that provide both the logic (Sc
The following skills are designed to work together as a comprehensive defense against regression and poor quality. Run them in this order:
-| Step | Skill | Core Question | Focus |
-| :--- | :--- | :--- | :--- |
-| **1. Checker** | `@speckit.checker` | *"Is the code compliant?"* | **Syntax & Security**. Runs compilation, linting (ESLint/GolangCI), and vulnerability scans (npm audit/govulncheck). Catches low-level errors first. |
-| **2. Tester** | `@speckit.tester` | *"Does it work?"* | **Functionality**. Executes your test suite (Jest/Pytest/Go Test) to ensure logic performs as expected and tests pass. |
-| **3. Reviewer** | `@speckit.reviewer` | *"Is the code written well?"* | **Quality & Maintainability**. Analyzes code structure for complexity, performance bottlenecks, and best practices, acting as a senior peer reviewer. |
-| **4. Validate** | `@speckit.validate` | *"Did we build the right thing?"* | **Requirements**. Semantically compares the implementation against the defined `spec.md` and `plan.md` to ensure all feature requirements are met. |
+| Step | Skill | Core Question | Focus |
+| :-------------- | :------------------ | :-------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **1. Checker** | `@speckit.checker` | _"Is the code compliant?"_ | **Syntax & Security**. Runs compilation, linting (ESLint/GolangCI), and vulnerability scans (npm audit/govulncheck). Catches low-level errors first. |
+| **2. Tester** | `@speckit.tester` | _"Does it work?"_ | **Functionality**. Executes your test suite (Jest/Pytest/Go Test) to ensure logic performs as expected and tests pass. |
+| **3. Reviewer** | `@speckit.reviewer` | _"Is the code written well?"_ | **Quality & Maintainability**. Analyzes code structure for complexity, performance bottlenecks, and best practices, acting as a senior peer reviewer. |
+| **4. Validate** | `@speckit.validate` | _"Did we build the right thing?"_ | **Requirements**. Semantically compares the implementation against the defined `spec.md` and `plan.md` to ensure all feature requirements are met. |
-> **🤖 Power User Tip:** You can amplify this pipeline by creating a custom **Claude Code (MCP) Server** or subagent that delegates heavy reasoning to **Gemini Pro 3** via the `gemini` CLI.
+> **🤖 Power User Tip:** You can amplify this pipeline by creating a custom **MCP Server** or subagent that delegates heavy reasoning to a dedicated LLM.
>
-> * **Use Case:** Bind the `@speckit.validate` and `@speckit.reviewer` steps to Gemini Pro 3.
-> * **Benefit:** Gemini's 1M+ token context and reasoning capabilities excel at analyzing the full project context against the Spec, finding subtle logical flaws that smaller models miss.
-> * **How:** Create a wrapper script `scripts/gemini-reviewer.sh` that pipes the `tasks.md` and codebase to `gemini chat`, then expose this as a tool to Claude.
-
----
+> - **Use Case:** Bind the `@speckit.validate` and `@speckit.reviewer` steps to a large-context model.
+> - **Benefit:** Large-context models (1M+ tokens) excel at analyzing the full project context against the Spec, finding subtle logical flaws that smaller models miss.
+> - **How:** Create a wrapper script `scripts/gemini-reviewer.sh` that pipes the `tasks.md` and codebase to an LLM, then expose this as a tool.
---
@@ -121,45 +159,48 @@ The following skills are designed to work together as a comprehensive defense ag
These workflows function as the "Control Plane" of the project, managing everything from idea inception to status tracking.
-| Step | Workflow | Core Question | Focus |
-| :--- | :--- | :--- | :--- |
-| **1. Preparation** | `/speckit.prepare` | *"Are we ready?"* | **The Macro-Workflow**. Runs Skills 02–06 (Specify $\to$ Clarify $\to$ Plan $\to$ Tasks $\to$ Analyze) in one sequence to go from "Idea" to "Ready to Code". |
-| **2. Migration** | `/util-speckit.migrate` | *"Can we import?"* | **Onboarding**. Reverse-engineers existing code into `spec.md`, `plan.md`, and `tasks.md`. |
-| **3. Red Team** | `/util-speckit.quizme` | *"What did we miss?"* | **Hardening**. Socratic questioning to find logical gaps in your specification before you plan. |
-| **4. Export** | `/util-speckit.taskstoissues` | *"Who does what?"* | **Handoff**. Converts your `tasks.md` into real GitHub/Jira issues for the team. |
-| **5. Status** | `/util-speckit.status` | *"Are we there yet?"* | **Tracking**. Scans all artifacts to report feature completion percentage. |
-| **6. Utilities** | `/util-speckit.diff`<br>`/util-speckit.checklist` | *"What changed?"* | **Support**. View artifact diffs or generate quick acceptance checklists. |
+| Step | Workflow | Core Question | Focus |
+| :----------------- | :-------------------------------------------------- | :-------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| **1. Preparation** | `/speckit.prepare` | _"Are we ready?"_ | **The Macro-Workflow**. Runs Skills 02–06 (Specify $\to$ Clarify $\to$ Plan $\to$ Tasks $\to$ Analyze) in one sequence to go from "Idea" to "Ready to Code". |
+| **2. Migration** | `/util-speckit.migrate` | _"Can we import?"_ | **Onboarding**. Reverse-engineers existing code into `spec.md`, `plan.md`, and `tasks.md`. |
+| **3. Red Team** | `/util-speckit.quizme` | _"What did we miss?"_ | **Hardening**. Socratic questioning to find logical gaps in your specification before you plan. |
+| **4. Export** | `/util-speckit.taskstoissues` | _"Who does what?"_ | **Handoff**. Converts your `tasks.md` into GitHub or Gitea issues with labels and milestones. |
+| **5. Status** | `/util-speckit.status` | _"Are we there yet?"_ | **Tracking**. Scans all artifacts to report feature completion percentage. |
+| **6. Utilities** | `/util-speckit.diff`<br>`/util-speckit.checklist` | _"What changed?"_ | **Support**. View artifact diffs or generate quick acceptance checklists. |
### 🔄 The Design Sequence
**Stage 1: Inception**
-* *Legacy Project?* $\to$ Run **`/util-speckit.migrate`**.
-* *New Feature?* $\to$ Run **`/speckit.prepare`**.
+
+- _Legacy Project?_ $\to$ Run **`/util-speckit.migrate`**.
+- _New Feature?_ $\to$ Run **`/speckit.prepare`**.
**Stage 2: Hardening**
-* Run **`/util-speckit.quizme`** to catch edge cases.
-* Run **`/speckit.prepare`** again to regenerate the Plan based on restricted specs.
+
+- Run **`/util-speckit.quizme`** to catch edge cases.
+- Run **`/speckit.prepare`** again to regenerate the Plan based on restricted specs.
**Stage 3: Handoff**
-* Run **`/util-speckit.taskstoissues`** to sync with your board.
+
+- Run **`/util-speckit.taskstoissues`** to sync with your board.
---
## 🚀 Getting Started in Antigravity
-Once the `.agent` folder is present, you can talk to your agent as a partner. You can trigger a full process using a slash command or reference a specific capability using a skill mention.
+Once the `.agents` folder is present, you can talk to your agent as a partner. You can trigger a full process using a slash command or reference a specific capability using a skill mention.
**Example Conversation:**
> **Developer**: "**/02-speckit.specify** Add a real-time notification system using WebSockets."
>
> **Antigravity**: "Understood. Executing the **@speckit.specify** skill. Creating feature branch `005-notifications` and drafting the initial specification..."
-> *(Antigravity creates branch, drafts spec.md)*
+> _(Antigravity creates branch, drafts spec.md)_
>
> **Antigravity**: "Specification ready. I've identified some ambiguity in the reconnection strategy. Should I use **@speckit.clarify** to resolve this?"
>
> **Developer**: "Yes, use **@speckit.clarify**."
-> *(Antigravity executes the skill logic, interviewing the developer and updating the spec)*
+> _(Antigravity executes the skill logic, interviewing the developer and updating the spec)_
>
> **Antigravity**: "The spec is now robust. I'm ready to trigger **/04-speckit.plan** and **/05-speckit.tasks** to prepare for implementation."
@@ -170,25 +211,33 @@ Once the `.agent` folder is present, you can talk to your agent as a partner. Yo
To get the most out of this system, follow these **Spec-Driven Development (SDD)** rules:
### 1. The Constitution is King 👑
+
**Never skip `/01-speckit.constitution`.**
-* This file is the "Context Window Anchor" for the AI.
-* It prevents hallucinations about tech stack (e.g., "Don't use jQuery" or "Always use TypeScript strict mode").
-* **Tip:** If Antigravity makes a style mistake, don't just fix the code—update the Constitution so it never happens again.
+
+- This file is the "Context Window Anchor" for the AI.
+- It prevents hallucinations about tech stack (e.g., "Don't use jQuery" or "Always use TypeScript strict mode").
+- **Tip:** If Antigravity makes a style mistake, don't just fix the code—update the Constitution so it never happens again.
### 2. The Layered Defense 🛡️
-Don't rush to code. The workflow exists to catch errors *cheaply* before they become expensive bugs.
-* **Ambiguity Layer**: `/03-speckit.clarify` catches misunderstandings.
-* **Logic Layer**: `/util-speckit.quizme` catches edge cases.
-* **Consistency Layer**: `/06-speckit.analyze` catches gaps between Spec and Plan.
+
+Don't rush to code. The workflow exists to catch errors _cheaply_ before they become expensive bugs.
+
+- **Ambiguity Layer**: `/03-speckit.clarify` catches misunderstandings.
+- **Logic Layer**: `/util-speckit.quizme` catches edge cases.
+- **Consistency Layer**: `/06-speckit.analyze` catches gaps between Spec and Plan.
### 3. The 15-Minute Rule ⏱️
+
When generating `tasks.md` (Skill 05), ensure tasks are **atomic**.
-* **Bad Task**: "Implement User Auth" (Too big, AI will get lost).
-* **Good Task**: "Create `User` Mongoose schema with email validation" (Perfect).
-* **Rule of Thumb**: If a task takes Antigravity more than 3 tool calls to finish, it's too big. Break it down.
+
+- **Bad Task**: "Implement User Auth" (Too big, AI will get lost).
+- **Good Task**: "Create `User` Mongoose schema with email validation" (Perfect).
+- **Rule of Thumb**: If a task takes Antigravity more than 3 tool calls to finish, it's too big. Break it down.
### 4. "Refine, Don't Rewind" ⏩
+
If you change your mind mid-project:
+
1. Don't just edit the code.
2. Edit the `spec.md` to reflect the new requirement.
3. Run `/util-speckit.diff` to see the drift.
@@ -198,9 +247,11 @@ If you change your mind mid-project:
## 🧩 Adaptation Notes
-* **Skill-Based Autonomy**: Mentions like `@speckit.plan` trigger the agent's internalized understanding of how to perform that role.
-* **Shared Script Core**: All logic resides in `.agent/scripts/bash` for consistent file and git operations.
-* **Agent-Native**: Designed to be invoked via Antigravity tool calls and reasoning rather than just terminal strings.
+- **Skill-Based Autonomy**: Mentions like `@speckit.plan` trigger the agent's internalized understanding of how to perform that role.
+- **Shared Script Core**: Logic resides in `.agents/scripts/bash/` (modular) with PowerShell equivalents in `.agents/scripts/powershell/` for Windows-native execution.
+- **Agent-Native**: Designed to be invoked via Antigravity tool calls and reasoning rather than just terminal strings.
+- **LCBP3-DMS Specific**: Includes project-specific skills (`nestjs-best-practices`, `next-best-practices`, `speckit.security-audit`) and workflows (`/schema-change`, `/create-backend-module`, `/deploy`).
---
-*Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity.*
+
+_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._
diff --git a/.agents/scripts/bash/agent-registry.sh b/.agents/scripts/bash/agent-registry.sh
new file mode 100644
index 0000000..cf253f4
--- /dev/null
+++ b/.agents/scripts/bash/agent-registry.sh
@@ -0,0 +1,95 @@
+#!/usr/bin/env bash
+# Agent registry — maps agent types to file paths and display names
+# Extracted from update-agent-context.sh for modularity
+#
+# Usage:
+# source agent-registry.sh
+# init_agent_registry "$REPO_ROOT"
+# get_agent_file "claude" # → /path/to/CLAUDE.md
+# get_agent_name "claude" # → "Claude Code"
+
+# Initialize agent file paths (call after REPO_ROOT is set)
+init_agent_registry() {
+ local repo_root="$1"
+
+ # Agent type → file path mapping
+ declare -gA AGENT_FILES=(
+ [claude]="$repo_root/CLAUDE.md"
+ [gemini]="$repo_root/GEMINI.md"
+ [copilot]="$repo_root/.github/agents/copilot-instructions.md"
+ [cursor-agent]="$repo_root/.cursor/rules/specify-rules.mdc"
+ [qwen]="$repo_root/QWEN.md"
+ [opencode]="$repo_root/AGENTS.md"
+ [codex]="$repo_root/AGENTS.md"
+ [windsurf]="$repo_root/.windsurf/rules/specify-rules.md"
+ [kilocode]="$repo_root/.kilocode/rules/specify-rules.md"
+ [auggie]="$repo_root/.augment/rules/specify-rules.md"
+ [roo]="$repo_root/.roo/rules/specify-rules.md"
+ [codebuddy]="$repo_root/CODEBUDDY.md"
+ [qoder]="$repo_root/QODER.md"
+ [amp]="$repo_root/AGENTS.md"
+ [shai]="$repo_root/SHAI.md"
+ [q]="$repo_root/AGENTS.md"
+ [bob]="$repo_root/AGENTS.md"
+ )
+
+ # Agent type → display name mapping
+ declare -gA AGENT_NAMES=(
+ [claude]="Claude Code"
+ [gemini]="Gemini CLI"
+ [copilot]="GitHub Copilot"
+ [cursor-agent]="Cursor IDE"
+ [qwen]="Qwen Code"
+ [opencode]="opencode"
+ [codex]="Codex CLI"
+ [windsurf]="Windsurf"
+ [kilocode]="Kilo Code"
+ [auggie]="Auggie CLI"
+ [roo]="Roo Code"
+ [codebuddy]="CodeBuddy CLI"
+ [qoder]="Qoder CLI"
+ [amp]="Amp"
+ [shai]="SHAI"
+ [q]="Amazon Q Developer CLI"
+ [bob]="IBM Bob"
+ )
+
+ # Template file path
+ TEMPLATE_FILE="$repo_root/.specify/templates/agent-file-template.md"
+}
+
+# Get file path for an agent type
+get_agent_file() {
+ local agent_type="$1"
+ echo "${AGENT_FILES[$agent_type]:-}"
+}
+
+# Get display name for an agent type
+get_agent_name() {
+ local agent_type="$1"
+ echo "${AGENT_NAMES[$agent_type]:-}"
+}
+
+# Get all registered agent types
+get_all_agent_types() {
+ echo "${!AGENT_FILES[@]}"
+}
+
+# Check if an agent type is valid
+is_valid_agent() {
+ local agent_type="$1"
+ [[ -n "${AGENT_FILES[$agent_type]:-}" ]]
+}
+
+# Get supported agent types as a pipe-separated string (for error messages)
+get_supported_agents_string() {
+ local result=""
+ for key in "${!AGENT_FILES[@]}"; do
+ if [[ -n "$result" ]]; then
+ result="$result|$key"
+ else
+ result="$key"
+ fi
+ done
+ echo "$result"
+}
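The registry above is a bash associative-array lookup with safe defaults. A self-contained sketch of the same pattern follows; the two entries and `/repo` paths are illustrative, not the real 17-agent table:

```shell
#!/usr/bin/env bash
# Sketch of the associative-array registry pattern used by agent-registry.sh.
# The entries and /repo paths are illustrative placeholders.
set -euo pipefail

declare -A AGENT_FILES=(
  [claude]="/repo/CLAUDE.md"
  [gemini]="/repo/GEMINI.md"
)

get_agent_file() {
  # "${arr[key]:-}" yields an empty string for unknown keys,
  # which keeps lookups safe under `set -u`.
  echo "${AGENT_FILES[$1]:-}"
}

is_valid_agent() {
  [[ -n "${AGENT_FILES[$1]:-}" ]]
}

get_agent_file claude                     # prints: /repo/CLAUDE.md
is_valid_agent mystery || echo "unknown"  # prints: unknown
```

The `:-` default is what lets `is_valid_agent` double as both validator and error-message source without tripping `set -u`.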
diff --git a/.agents/scripts/bash/content-generator.sh b/.agents/scripts/bash/content-generator.sh
new file mode 100644
index 0000000..266d3ad
--- /dev/null
+++ b/.agents/scripts/bash/content-generator.sh
@@ -0,0 +1,40 @@
+#!/usr/bin/env bash
+# Content generation functions for update-agent-context
+# Extracted from update-agent-context.sh for modularity
+# NOTE: some return strings carry sed-style escapes (e.g. \\n, \\&) so the
+# parent script can substitute them via sed replacement; do not unescape them here.
+
+# Get project directory structure based on project type
+get_project_structure() {
+ local project_type="$1"
+
+ if [[ "$project_type" == *"web"* ]]; then
+ echo "backend/\\nfrontend/\\ntests/"
+ else
+ echo "src/\\ntests/"
+ fi
+}
+
+# Get build/test commands for a given language
+get_commands_for_language() {
+ local lang="$1"
+
+ case "$lang" in
+ *"Python"*)
+ echo "cd src && pytest && ruff check ."
+ ;;
+ *"Rust"*)
+ echo "cargo test && cargo clippy"
+ ;;
+ *"JavaScript"*|*"TypeScript"*)
+ echo "npm test \\&\\& npm run lint"
+ ;;
+ *)
+ echo "# Add commands for $lang"
+ ;;
+ esac
+}
+
+# Get language-specific conventions string
+get_language_conventions() {
+ local lang="$1"
+ echo "$lang: Follow standard conventions"
+}
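The dispatch in `get_commands_for_language` is a plain `case` over glob patterns: any input containing the language name selects that arm, and everything else falls through to a placeholder. A runnable sketch of the same matching behavior (the version strings are made up):

```shell
# Sketch of the glob-based case dispatch used by content-generator.sh.
# Version strings in the calls below are illustrative.
get_commands_for_language() {
  case "$1" in
    *"Python"*) echo "cd src && pytest && ruff check ." ;;
    *"Rust"*) echo "cargo test && cargo clippy" ;;
    *"JavaScript"*|*"TypeScript"*) echo "npm test \\&\\& npm run lint" ;;
    *) echo "# Add commands for $1" ;;  # fallback arm for unknown languages
  esac
}

get_commands_for_language "Python 3.12"  # prints: cd src && pytest && ruff check .
get_commands_for_language "Zig 0.12"     # prints: # Add commands for Zig 0.12
```

Note the JavaScript arm emits `\&\&` rather than `&&`: the string is later passed through a sed replacement, where a bare `&` is special.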
diff --git a/.agents/scripts/bash/plan-parser.sh b/.agents/scripts/bash/plan-parser.sh
new file mode 100644
index 0000000..c5ffd85
--- /dev/null
+++ b/.agents/scripts/bash/plan-parser.sh
@@ -0,0 +1,72 @@
+#!/usr/bin/env bash
+# Plan parsing functions for update-agent-context
+# Extracted from update-agent-context.sh for modularity
+
+# Extract a field value from plan.md by pattern
+# Usage: extract_plan_field "Language/Version" "/path/to/plan.md"
+extract_plan_field() {
+ local field_pattern="$1"
+ local plan_file="$2"
+
+ grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
+ head -1 | \
+ sed "s|^\*\*${field_pattern}\*\*: ||" | \
+ sed 's/^[ \t]*//;s/[ \t]*$//' | \
+ grep -v "NEEDS CLARIFICATION" | \
+ grep -v "^N/A$" || echo ""
+}
+
+# Parse plan.md and set global variables: NEW_LANG, NEW_FRAMEWORK, NEW_DB, NEW_PROJECT_TYPE
+parse_plan_data() {
+ local plan_file="$1"
+
+ if [[ ! -f "$plan_file" ]]; then
+ log_error "Plan file not found: $plan_file"
+ return 1
+ fi
+
+ if [[ ! -r "$plan_file" ]]; then
+ log_error "Plan file is not readable: $plan_file"
+ return 1
+ fi
+
+ log_info "Parsing plan data from $plan_file"
+
+ NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
+ NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
+ NEW_DB=$(extract_plan_field "Storage" "$plan_file")
+ NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
+
+ # Log what we found
+ if [[ -n "$NEW_LANG" ]]; then
+ log_info "Found language: $NEW_LANG"
+ else
+ log_warning "No language information found in plan"
+ fi
+
+ [[ -n "$NEW_FRAMEWORK" ]] && log_info "Found framework: $NEW_FRAMEWORK"
+ [[ -n "$NEW_DB" && "$NEW_DB" != "N/A" ]] && log_info "Found database: $NEW_DB"
+ [[ -n "$NEW_PROJECT_TYPE" ]] && log_info "Found project type: $NEW_PROJECT_TYPE"
+}
+
+# Format technology stack string from language and framework
+format_technology_stack() {
+ local lang="$1"
+ local framework="$2"
+ local parts=()
+
+ [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
+ [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
+
+ if [[ ${#parts[@]} -eq 0 ]]; then
+ echo ""
+ elif [[ ${#parts[@]} -eq 1 ]]; then
+ echo "${parts[0]}"
+ else
+ local result="${parts[0]}"
+ for ((i=1; i<${#parts[@]}; i++)); do
+ result="$result + ${parts[i]}"
+ done
+ echo "$result"
+ fi
+}
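The `extract_plan_field` pipeline can be exercised against a throwaway `plan.md`. The field labels below mirror the ones `parse_plan_data` reads; the values are made up:

```shell
#!/usr/bin/env bash
# Exercise the grep/sed field-extraction pipeline from plan-parser.sh
# against a temporary plan.md. Field values here are illustrative.
set -euo pipefail

extract_plan_field() {
  local field_pattern="$1"
  local plan_file="$2"
  grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
    head -1 | \
    sed "s|^\*\*${field_pattern}\*\*: ||" | \
    sed 's/^[ \t]*//;s/[ \t]*$//' | \
    grep -v "NEEDS CLARIFICATION" | \
    grep -v "^N/A$" || echo ""
}

plan=$(mktemp)
cat > "$plan" <<'EOF'
**Language/Version**: TypeScript 5.x
**Primary Dependencies**: NEEDS CLARIFICATION
**Storage**: N/A
EOF

extract_plan_field "Language/Version" "$plan"      # prints: TypeScript 5.x
extract_plan_field "Primary Dependencies" "$plan"  # prints an empty line (value filtered out)
```

Both placeholder values ("NEEDS CLARIFICATION", "N/A") collapse to an empty result, which is why `parse_plan_data` can treat an empty variable as "not found".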
diff --git a/.agents/scripts/bash/update-agent-context.sh b/.agents/scripts/bash/update-agent-context.sh
index 6d3e0b3..630f926 100644
--- a/.agents/scripts/bash/update-agent-context.sh
+++ b/.agents/scripts/bash/update-agent-context.sh
@@ -2,7 +2,7 @@
# Update agent context files with information from plan.md
#
-# This script maintains AI agent context files by parsing feature specifications
+# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
@@ -52,13 +52,19 @@ set -o pipefail
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
+# Load modular components (extracted for maintainability)
+# See each file for documentation of the functions it provides
+source "$SCRIPT_DIR/plan-parser.sh" # extract_plan_field, parse_plan_data, format_technology_stack
+source "$SCRIPT_DIR/content-generator.sh" # get_project_structure, get_commands_for_language, get_language_conventions
+source "$SCRIPT_DIR/agent-registry.sh" # init_agent_registry, get_agent_file, get_agent_name, etc.
+
# Get all paths and variables from common functions
eval $(get_feature_paths)
NEW_PLAN="$IMPL_PLAN" # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"
-# Agent-specific file paths
+# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
@@ -131,7 +137,7 @@ validate_environment() {
fi
exit 1
fi
-
+
# Check if plan.md exists
if [[ ! -f "$NEW_PLAN" ]]; then
log_error "No plan.md found at $NEW_PLAN"
@@ -141,7 +147,7 @@ validate_environment() {
fi
exit 1
fi
-
+
# Check if template exists (needed for new files)
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_warning "Template file not found at $TEMPLATE_FILE"
@@ -156,7 +162,7 @@ validate_environment() {
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
-
+
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
@@ -167,39 +173,39 @@ extract_plan_field() {
parse_plan_data() {
local plan_file="$1"
-
+
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
-
+
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
-
+
log_info "Parsing plan data from $plan_file"
-
+
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
-
+
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
-
+
if [[ -n "$NEW_FRAMEWORK" ]]; then
log_info "Found framework: $NEW_FRAMEWORK"
fi
-
+
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
log_info "Found database: $NEW_DB"
fi
-
+
if [[ -n "$NEW_PROJECT_TYPE" ]]; then
log_info "Found project type: $NEW_PROJECT_TYPE"
fi
@@ -209,11 +215,11 @@ format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
-
+
# Add non-empty parts
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
-
+
# Join with proper formatting
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
@@ -235,7 +241,7 @@ format_technology_stack() {
get_project_structure() {
local project_type="$1"
-
+
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
@@ -245,7 +251,7 @@ get_project_structure() {
get_commands_for_language() {
local lang="$1"
-
+
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
@@ -272,40 +278,40 @@ create_new_agent_file() {
local temp_file="$2"
local project_name="$3"
local current_date="$4"
-
+
if [[ ! -f "$TEMPLATE_FILE" ]]; then
log_error "Template not found at $TEMPLATE_FILE"
return 1
fi
-
+
if [[ ! -r "$TEMPLATE_FILE" ]]; then
log_error "Template file is not readable: $TEMPLATE_FILE"
return 1
fi
-
+
log_info "Creating new agent context file from template..."
-
+
if ! cp "$TEMPLATE_FILE" "$temp_file"; then
log_error "Failed to copy template file"
return 1
fi
-
+
# Replace template placeholders
local project_structure
project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")
-
+
local commands
commands=$(get_commands_for_language "$NEW_LANG")
-
+
local language_conventions
language_conventions=$(get_language_conventions "$NEW_LANG")
-
+
# Perform substitutions with error checking using safer approach
# Escape special characters for sed by using a different delimiter or escaping
local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')
-
+
# Build technology stack and recent change strings conditionally
local tech_stack
if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
@@ -338,7 +344,7 @@ create_new_agent_file() {
"s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
"s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
)
-
+
for substitution in "${substitutions[@]}"; do
if ! sed -i.bak -e "$substitution" "$temp_file"; then
log_error "Failed to perform substitution: $substitution"
@@ -346,14 +352,14 @@ create_new_agent_file() {
return 1
fi
done
-
+
# Convert \n sequences to actual newlines
newline=$(printf '\n')
sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"
-
+
# Clean up backup files
rm -f "$temp_file.bak" "$temp_file.bak2"
-
+
return 0
}
@@ -363,49 +369,49 @@ create_new_agent_file() {
update_existing_agent_file() {
local target_file="$1"
local current_date="$2"
-
+
log_info "Updating existing agent context file..."
-
+
# Use a single temporary file for atomic update
local temp_file
temp_file=$(mktemp) || {
log_error "Failed to create temporary file"
return 1
}
-
+
# Process the file in one pass
local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
local new_tech_entries=()
local new_change_entry=""
-
+
# Prepare new technology entries
if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
fi
-
+
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
fi
-
+
# Prepare new change entry
if [[ -n "$tech_stack" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
fi
-
+
# Check if sections exist in the file
local has_active_technologies=0
local has_recent_changes=0
-
+
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
has_active_technologies=1
fi
-
+
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
has_recent_changes=1
fi
-
+
# Process file line by line
local in_tech_section=false
local in_changes_section=false
@@ -413,7 +419,7 @@ update_existing_agent_file() {
local changes_entries_added=false
local existing_changes_count=0
local file_ended=false
-
+
while IFS= read -r line || [[ -n "$line" ]]; do
# Handle Active Technologies section
if [[ "$line" == "## Active Technologies" ]]; then
@@ -438,7 +444,7 @@ update_existing_agent_file() {
echo "$line" >> "$temp_file"
continue
fi
-
+
# Handle Recent Changes section
if [[ "$line" == "## Recent Changes" ]]; then
echo "$line" >> "$temp_file"
@@ -461,7 +467,7 @@ update_existing_agent_file() {
fi
continue
fi
-
+
# Update timestamp
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
@@ -469,13 +475,13 @@ update_existing_agent_file() {
echo "$line" >> "$temp_file"
fi
done < "$target_file"
-
+
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
-
+
# If sections don't exist, add them at the end of the file
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
echo "" >> "$temp_file"
@@ -483,21 +489,21 @@ update_existing_agent_file() {
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
tech_entries_added=true
fi
-
+
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
echo "" >> "$temp_file"
echo "## Recent Changes" >> "$temp_file"
echo "$new_change_entry" >> "$temp_file"
changes_entries_added=true
fi
-
+
# Move temp file to target atomically
if ! mv "$temp_file" "$target_file"; then
log_error "Failed to update target file"
rm -f "$temp_file"
return 1
fi
-
+
return 0
}
#==============================================================================
@@ -507,19 +513,19 @@ update_existing_agent_file() {
update_agent_file() {
local target_file="$1"
local agent_name="$2"
-
+
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
log_error "update_agent_file requires target_file and agent_name parameters"
return 1
fi
-
+
log_info "Updating $agent_name context file: $target_file"
-
+
local project_name
project_name=$(basename "$REPO_ROOT")
local current_date
current_date=$(date +%Y-%m-%d)
-
+
# Create directory if it doesn't exist
local target_dir
target_dir=$(dirname "$target_file")
@@ -529,7 +535,7 @@ update_agent_file() {
return 1
fi
fi
-
+
if [[ ! -f "$target_file" ]]; then
# Create new file from template
local temp_file
@@ -537,7 +543,7 @@ update_agent_file() {
log_error "Failed to create temporary file"
return 1
}
-
+
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
if mv "$temp_file" "$target_file"; then
log_success "Created new $agent_name context file"
@@ -557,12 +563,12 @@ update_agent_file() {
log_error "Cannot read existing file: $target_file"
return 1
fi
-
+
if [[ ! -w "$target_file" ]]; then
log_error "Cannot write to existing file: $target_file"
return 1
fi
-
+
if update_existing_agent_file "$target_file" "$current_date"; then
log_success "Updated existing $agent_name context file"
else
@@ -570,7 +576,7 @@ update_agent_file() {
return 1
fi
fi
-
+
return 0
}
@@ -580,7 +586,7 @@ update_agent_file() {
update_specific_agent() {
local agent_type="$1"
-
+
case "$agent_type" in
claude)
update_agent_file "$CLAUDE_FILE" "Claude Code"
@@ -643,43 +649,43 @@ update_specific_agent() {
update_all_existing_agents() {
local found_agent=false
-
+
# Check each possible agent file and update if it exists
if [[ -f "$CLAUDE_FILE" ]]; then
update_agent_file "$CLAUDE_FILE" "Claude Code"
found_agent=true
fi
-
+
if [[ -f "$GEMINI_FILE" ]]; then
update_agent_file "$GEMINI_FILE" "Gemini CLI"
found_agent=true
fi
-
+
if [[ -f "$COPILOT_FILE" ]]; then
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
found_agent=true
fi
-
+
if [[ -f "$CURSOR_FILE" ]]; then
update_agent_file "$CURSOR_FILE" "Cursor IDE"
found_agent=true
fi
-
+
if [[ -f "$QWEN_FILE" ]]; then
update_agent_file "$QWEN_FILE" "Qwen Code"
found_agent=true
fi
-
+
if [[ -f "$AGENTS_FILE" ]]; then
update_agent_file "$AGENTS_FILE" "Codex/opencode"
found_agent=true
fi
-
+
if [[ -f "$WINDSURF_FILE" ]]; then
update_agent_file "$WINDSURF_FILE" "Windsurf"
found_agent=true
fi
-
+
if [[ -f "$KILOCODE_FILE" ]]; then
update_agent_file "$KILOCODE_FILE" "Kilo Code"
found_agent=true
@@ -689,7 +695,7 @@ update_all_existing_agents() {
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
found_agent=true
fi
-
+
if [[ -f "$ROO_FILE" ]]; then
update_agent_file "$ROO_FILE" "Roo Code"
found_agent=true
@@ -714,12 +720,12 @@ update_all_existing_agents() {
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
-
+
if [[ -f "$BOB_FILE" ]]; then
update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true
fi
-
+
# If no agent files exist, create a default Claude file
if [[ "$found_agent" == false ]]; then
log_info "No existing agent files found, creating default Claude file..."
@@ -729,19 +735,19 @@ update_all_existing_agents() {
print_summary() {
echo
log_info "Summary of changes:"
-
+
if [[ -n "$NEW_LANG" ]]; then
echo " - Added language: $NEW_LANG"
fi
-
+
if [[ -n "$NEW_FRAMEWORK" ]]; then
echo " - Added framework: $NEW_FRAMEWORK"
fi
-
+
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
echo " - Added database: $NEW_DB"
fi
-
+
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
@@ -754,18 +760,18 @@ print_summary() {
main() {
# Validate environment before proceeding
validate_environment
-
+
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
-
+
# Parse the plan file to extract project information
if ! parse_plan_data "$NEW_PLAN"; then
log_error "Failed to parse plan data"
exit 1
fi
-
+
# Process based on agent type argument
local success=true
-
+
if [[ -z "$AGENT_TYPE" ]]; then
# No specific agent provided - update all existing agent files
log_info "No agent specified, updating all existing agent files..."
@@ -779,10 +785,10 @@ main() {
success=false
fi
fi
-
+
# Print summary
print_summary
-
+
if [[ "$success" == true ]]; then
log_success "Agent context update completed successfully"
exit 0
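The `extract_plan_field` pipeline in this script (`grep | head -1 | sed`) pulls the value out of the first `**Field**: value` line in `plan.md`. A rough Python sketch of the same behavior, useful for checking what a given plan file will yield (the sample plan text is hypothetical):

```python
import re

def extract_plan_field(field: str, plan_text: str) -> str:
    """Return the value of the first '**Field**: value' line, like grep|head|sed."""
    for line in plan_text.splitlines():
        m = re.match(rf"^\*\*{re.escape(field)}\*\*: (.*)", line)
        if m:
            return m.group(1).strip()
    return ""

plan = "**Language/Version**: Python 3.12\n**Storage**: N/A\n"
print(extract_plan_field("Language/Version", plan))  # Python 3.12
```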
diff --git a/.agents/scripts/fix_links.py b/.agents/scripts/fix_links.py
index 72774ce..e2c2fa6 100644
--- a/.agents/scripts/fix_links.py
+++ b/.agents/scripts/fix_links.py
@@ -1,26 +1,28 @@
import os
import re
+import sys
from pathlib import Path
-# Configuration
-BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
+# Configuration - default base directory, can be overridden via CLI argument
+DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"
+
DIRECTORIES = [
- "00-overview",
- "01-requirements",
- "02-architecture",
- "03-implementation",
- "04-operations",
- "05-decisions",
- "06-tasks"
+ "00-Overview",
+ "01-Requirements",
+ "02-Architecture",
+ "03-Data-and-Storage",
+ "04-Infrastructure-OPS",
+ "05-Engineering-Guidelines",
+ "06-Decision-Records"
]
LINK_PATTERN = re.compile(r'(\[([^\]]+)\]\(([^)]+)\))')
-def get_file_map():
+def get_file_map(base_dir: Path):
"""Builds a map of {basename}.md -> {prefixed_name}.md across all dirs."""
file_map = {}
for dir_name in DIRECTORIES:
- directory = BASE_DIR / dir_name
+ directory = base_dir / dir_name
if not directory.exists():
continue
for file_path in directory.glob("*.md"):
@@ -53,41 +55,14 @@ def get_file_map():
if secondary_base:
file_map[secondary_base] = f"{dir_name}/{actual_name}"
- # Hardcoded specific overrides for versioning and common typos
- overrides = {
- "fullftack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
- "fullstack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
- "system-architecture.md": "02-architecture/02-01-system-architecture.md",
- "api-design.md": "02-architecture/02-02-api-design.md",
- "data-model.md": "02-architecture/02-03-data-model.md",
- "backend-guidelines.md": "03-implementation/03-02-backend-guidelines.md",
- "frontend-guidelines.md": "03-implementation/03-03-frontend-guidelines.md",
- "document-numbering.md": "03-implementation/03-04-document-numbering.md",
- "testing-strategy.md": "03-implementation/03-05-testing-strategy.md",
- "deployment-guide.md": "04-operations/04-01-deployment-guide.md",
- "environment-setup.md": "04-operations/04-02-environment-setup.md",
- "monitoring-alerting.md": "04-operations/04-03-monitoring-alerting.md",
- "backup-recovery.md": "04-operations/04-04-backup-recovery.md",
- "maintenance-procedures.md": "04-operations/04-05-maintenance-procedures.md",
- "security-operations.md": "04-operations/04-06-security-operations.md",
- "incident-response.md": "04-operations/04-07-incident-response.md",
- "document-numbering-operations.md": "04-operations/04-08-document-numbering-operations.md",
- # Missing task files - redirect to README or best match
- "task-be-011-notification-audit.md": "06-tasks/README.md",
- "task-be-001-database-migrations.md": "06-tasks/TASK-BE-015-schema-v160-migration.md", # Best match
- }
-
- for k, v in overrides.items():
- file_map[k] = v
-
return file_map
-def fix_links():
- file_map = get_file_map()
+def fix_links(base_dir: Path):
+ file_map = get_file_map(base_dir)
changes_made = 0
for dir_name in DIRECTORIES:
- directory = BASE_DIR / dir_name
+ directory = base_dir / dir_name
if not directory.exists():
continue
@@ -107,8 +82,12 @@ def fix_links():
if not target_path:
continue
- # Special case: file:///d:/nap-dms.lcbp3/specs/
- clean_target_path = target_path.replace("file:///d:/nap-dms.lcbp3/specs/", "").replace("file:///D:/nap-dms.lcbp3/specs/", "")
+ # Special case: file:/// absolute paths
+ clean_target_path = re.sub(
+ r'^file:///[a-zA-Z]:[/\\].*?specs[/\\]',
+ '',
+ target_path
+ )
resolved_locally = (file_path.parent / target_path).resolve()
if resolved_locally.exists() and resolved_locally.is_file():
@@ -119,7 +98,7 @@ def fix_links():
if target_filename in file_map:
correct_relative_to_specs = file_map[target_filename]
# Calculate relative path from current file's parent to the correct file
- correct_abs = (BASE_DIR / correct_relative_to_specs).resolve()
+ correct_abs = (base_dir / correct_relative_to_specs).resolve()
try:
new_relative_path = os.path.relpath(correct_abs, file_path.parent).replace(os.sep, "/")
@@ -143,4 +122,14 @@ def fix_links():
print(f"\nTotal files updated: {changes_made}")
if __name__ == "__main__":
- fix_links()
+ if len(sys.argv) > 1:
+ base_dir = Path(sys.argv[1])
+ else:
+ base_dir = DEFAULT_BASE_DIR
+
+ if not base_dir.exists():
+ print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
+ sys.exit(1)
+
+ print(f"Scanning specs directory: {base_dir}")
+ fix_links(base_dir)
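The new `re.sub` in `fix_links.py` generalizes the old hardcoded `file:///d:/...` replacement to any drive letter and either slash style. A quick demonstration of how that pattern strips `file:///` prefixes (example paths are illustrative):

```python
import re

# Same pattern as in fix_links.py: drop everything up to and including 'specs/'
FILE_URL_PREFIX = re.compile(r'^file:///[a-zA-Z]:[/\\].*?specs[/\\]')

assert FILE_URL_PREFIX.sub('', 'file:///d:/nap-dms.lcbp3/specs/02-Architecture/x.md') == '02-Architecture/x.md'
assert FILE_URL_PREFIX.sub('', 'file:///D:\\other\\specs\\y.md') == 'y.md'
# Relative links pass through unchanged
assert FILE_URL_PREFIX.sub('', '../relative/path.md') == '../relative/path.md'
```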
diff --git a/.agents/scripts/powershell/common.ps1 b/.agents/scripts/powershell/common.ps1
new file mode 100644
index 0000000..6346a26
--- /dev/null
+++ b/.agents/scripts/powershell/common.ps1
@@ -0,0 +1,157 @@
+# PowerShell equivalents for key .agents bash scripts
+# These provide Windows-native alternatives for the most commonly used functions
+
+<#
+.SYNOPSIS
+ Common utility functions for Spec-Kit PowerShell scripts.
+.DESCRIPTION
+ PowerShell equivalent of .agents/scripts/bash/common.sh
+ Provides repository root detection, branch identification, and feature path resolution.
+#>
+
+function Get-RepoRoot {
+ try {
+ $root = git rev-parse --show-toplevel 2>$null
+ if ($LASTEXITCODE -eq 0) { return $root.Trim() }
+ } catch {}
+ # Fallback: navigate up from script location
+ return (Resolve-Path "$PSScriptRoot\..\..\..").Path
+}
+
+function Get-CurrentBranch {
+ # Check environment variable first
+ if ($env:SPECIFY_FEATURE) { return $env:SPECIFY_FEATURE }
+
+ try {
+ $branch = git rev-parse --abbrev-ref HEAD 2>$null
+ if ($LASTEXITCODE -eq 0) { return $branch.Trim() }
+ } catch {}
+
+ # Fallback: find latest feature directory
+ $repoRoot = Get-RepoRoot
+ $specsDir = Join-Path $repoRoot "specs"
+ if (Test-Path $specsDir) {
+ $latest = Get-ChildItem -Path $specsDir -Directory |
+ Where-Object { $_.Name -match '^\d{3}-' } |
+ Sort-Object Name -Descending |
+ Select-Object -First 1
+ if ($latest) { return $latest.Name }
+ }
+ return "main"
+}
+
+function Test-HasGit {
+ try {
+ git rev-parse --show-toplevel 2>$null | Out-Null
+ return $LASTEXITCODE -eq 0
+ } catch { return $false }
+}
+
+function Test-FeatureBranch {
+ param([string]$Branch, [bool]$HasGit)
+ if (-not $HasGit) {
+ Write-Warning "[specify] Git repository not detected; skipped branch validation"
+ return $true
+ }
+ if ($Branch -notmatch '^\d{3}-') {
+ Write-Error "Not on a feature branch. Current branch: $Branch"
+ Write-Error "Feature branches should be named like: 001-feature-name"
+ return $false
+ }
+ return $true
+}
+
+function Find-FeatureDir {
+ param([string]$RepoRoot, [string]$BranchName)
+ $specsDir = Join-Path $RepoRoot "specs"
+
+ if ($BranchName -match '^(\d{3})-') {
+ $prefix = $Matches[1]
+        # Use a distinct name; assigning to $matches would shadow the automatic $Matches variable
+        $dirMatches = Get-ChildItem -Path $specsDir -Directory -Filter "$prefix-*" -ErrorAction SilentlyContinue
+        if ($dirMatches.Count -eq 1) { return $dirMatches[0].FullName }
+        if ($dirMatches.Count -gt 1) {
+            Write-Warning "Multiple spec dirs with prefix '$prefix': $($dirMatches.Name -join ', ')"
+ }
+ }
+ return Join-Path $specsDir $BranchName
+}
+
+function Get-FeaturePaths {
+ $repoRoot = Get-RepoRoot
+ $branch = Get-CurrentBranch
+ $hasGit = Test-HasGit
+ $featureDir = Find-FeatureDir -RepoRoot $repoRoot -BranchName $branch
+
+ return [PSCustomObject]@{
+ RepoRoot = $repoRoot
+ Branch = $branch
+ HasGit = $hasGit
+ FeatureDir = $featureDir
+ FeatureSpec = Join-Path $featureDir "spec.md"
+ ImplPlan = Join-Path $featureDir "plan.md"
+ Tasks = Join-Path $featureDir "tasks.md"
+ Research = Join-Path $featureDir "research.md"
+ DataModel = Join-Path $featureDir "data-model.md"
+ Quickstart = Join-Path $featureDir "quickstart.md"
+ ContractsDir = Join-Path $featureDir "contracts"
+ }
+}
+
+<#
+.SYNOPSIS
+ Check prerequisites for Spec-Kit workflows.
+.DESCRIPTION
+ PowerShell equivalent of .agents/scripts/bash/check-prerequisites.sh
+.PARAMETER RequireTasks
+ Require tasks.md to exist (for implementation phase)
+.PARAMETER IncludeTasks
+ Include tasks.md in available docs list
+.PARAMETER PathsOnly
+ Only output paths, no validation
+.EXAMPLE
+    . .\common.ps1
+ $result = Check-Prerequisites -RequireTasks
+#>
+function Check-Prerequisites {
+ param(
+ [switch]$RequireTasks,
+ [switch]$IncludeTasks,
+ [switch]$PathsOnly
+ )
+
+ $paths = Get-FeaturePaths
+ $valid = Test-FeatureBranch -Branch $paths.Branch -HasGit $paths.HasGit
+ if (-not $valid) { throw "Not on a feature branch" }
+
+ if ($PathsOnly) { return $paths }
+
+ # Validate required files
+ if (-not (Test-Path $paths.FeatureDir)) {
+ throw "Feature directory not found: $($paths.FeatureDir). Run /speckit.specify first."
+ }
+ if (-not (Test-Path $paths.ImplPlan)) {
+ throw "plan.md not found. Run /speckit.plan first."
+ }
+ if ($RequireTasks -and -not (Test-Path $paths.Tasks)) {
+ throw "tasks.md not found. Run /speckit.tasks first."
+ }
+
+ # Build available docs list
+ $docs = @()
+ if (Test-Path $paths.Research) { $docs += "research.md" }
+ if (Test-Path $paths.DataModel) { $docs += "data-model.md" }
+ if ((Test-Path $paths.ContractsDir) -and (Get-ChildItem $paths.ContractsDir -ErrorAction SilentlyContinue)) {
+ $docs += "contracts/"
+ }
+ if (Test-Path $paths.Quickstart) { $docs += "quickstart.md" }
+ if ($IncludeTasks -and (Test-Path $paths.Tasks)) { $docs += "tasks.md" }
+
+ return [PSCustomObject]@{
+ FeatureDir = $paths.FeatureDir
+ AvailableDocs = $docs
+ Paths = $paths
+ }
+}
+
+# Export functions when imported as a module (errors are suppressed when dot-sourced)
+Export-ModuleMember -Function * -ErrorAction SilentlyContinue 2>$null
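Both `common.sh` and this new `common.ps1` port gate the workflow on the same feature-branch naming convention: a three-digit prefix followed by a hyphen (`001-feature-name`). The check can be sketched as:

```python
import re

# Same convention as Test-FeatureBranch / check_feature_branch: NNN-slug
FEATURE_BRANCH = re.compile(r'^\d{3}-')

assert FEATURE_BRANCH.match('001-user-auth')
assert not FEATURE_BRANCH.match('main')
assert not FEATURE_BRANCH.match('42-too-short')  # prefix must be exactly three digits
```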
diff --git a/.agents/scripts/powershell/create-new-feature.ps1 b/.agents/scripts/powershell/create-new-feature.ps1
new file mode 100644
index 0000000..7d791a1
--- /dev/null
+++ b/.agents/scripts/powershell/create-new-feature.ps1
@@ -0,0 +1,138 @@
+<#
+.SYNOPSIS
+ Create a new feature branch and spec directory.
+.DESCRIPTION
+ PowerShell equivalent of .agents/scripts/bash/create-new-feature.sh
+ Creates a numbered feature branch and initializes the spec directory.
+.PARAMETER Description
+ Natural language description of the feature.
+.PARAMETER ShortName
+ Optional custom short name for the branch (2-4 words).
+.PARAMETER Number
+ Optional manual branch number (overrides auto-detection).
+.EXAMPLE
+ .\create-new-feature.ps1 -Description "Add user authentication" -ShortName "user-auth"
+#>
+param(
+ [Parameter(Mandatory = $true, Position = 0)]
+ [string]$Description,
+
+ [string]$ShortName,
+ [int]$Number = 0
+)
+
+$ErrorActionPreference = "Stop"
+
+# Load common functions
+. "$PSScriptRoot\common.ps1"
+
+$repoRoot = Get-RepoRoot
+$hasGit = Test-HasGit
+$specsDir = Join-Path $repoRoot "specs"
+if (-not (Test-Path $specsDir)) { New-Item -ItemType Directory -Path $specsDir | Out-Null }
+
+# Stop words for smart branch name generation
+$stopWords = @('i','a','an','the','to','for','of','in','on','at','by','with','from',
+ 'is','are','was','were','be','been','being','have','has','had',
+ 'do','does','did','will','would','should','could','can','may','might',
+ 'must','shall','this','that','these','those','my','your','our','their',
+ 'want','need','add','get','set')
+
+function ConvertTo-BranchName {
+ param([string]$Text)
+ $Text.ToLower() -replace '[^a-z0-9]', '-' -replace '-+', '-' -replace '^-|-$', ''
+}
+
+function Get-SmartBranchName {
+ param([string]$Desc)
+ $words = ($Desc.ToLower() -replace '[^a-z0-9]', ' ').Split(' ', [StringSplitOptions]::RemoveEmptyEntries)
+ $meaningful = $words | Where-Object { $_ -notin $stopWords -and $_.Length -ge 3 } | Select-Object -First 3
+ if ($meaningful.Count -gt 0) { return ($meaningful -join '-') }
+ return ConvertTo-BranchName $Desc
+}
+
+function Get-HighestNumber {
+ param([string]$Dir)
+ $highest = 0
+ if (Test-Path $Dir) {
+ Get-ChildItem -Path $Dir -Directory | ForEach-Object {
+ if ($_.Name -match '^(\d+)-') {
+ $num = [int]$Matches[1]
+ if ($num -gt $highest) { $highest = $num }
+ }
+ }
+ }
+ return $highest
+}
+
+# Generate branch suffix
+if ($ShortName) {
+ $branchSuffix = ConvertTo-BranchName $ShortName
+} else {
+ $branchSuffix = Get-SmartBranchName $Description
+}
+
+# Determine branch number
+if ($Number -gt 0) {
+ $branchNumber = $Number
+} else {
+ $highestSpec = Get-HighestNumber $specsDir
+ $highestBranch = 0
+ if ($hasGit) {
+ try {
+ git fetch --all --prune 2>$null | Out-Null
+ $branches = git branch -a 2>$null
+ foreach ($b in $branches) {
+ $clean = $b.Trim('* ') -replace '^remotes/[^/]+/', ''
+ if ($clean -match '^(\d{3})-') {
+ $num = [int]$Matches[1]
+ if ($num -gt $highestBranch) { $highestBranch = $num }
+ }
+ }
+ } catch {}
+ }
+ $branchNumber = [Math]::Max($highestSpec, $highestBranch) + 1
+}
+
+$featureNum = "{0:D3}" -f $branchNumber
+$branchName = "$featureNum-$branchSuffix"
+
+# Truncate if exceeding GitHub's 244-byte limit
+if ($branchName.Length -gt 244) {
+ $maxSuffix = 244 - 4 # 3 digits + 1 hyphen
+ $branchSuffix = $branchSuffix.Substring(0, $maxSuffix).TrimEnd('-')
+ Write-Warning "Branch name truncated to 244 bytes"
+ $branchName = "$featureNum-$branchSuffix"
+}
+
+# Create git branch
+if ($hasGit) {
+ git checkout -b $branchName
+} else {
+ Write-Warning "Git not detected; skipped branch creation for $branchName"
+}
+
+# Create feature directory and spec file
+$featureDir = Join-Path $specsDir $branchName
+New-Item -ItemType Directory -Path $featureDir -Force | Out-Null
+
+$templateFile = Join-Path $repoRoot ".specify" "templates" "spec-template.md"
+$specFile = Join-Path $featureDir "spec.md"
+if (Test-Path $templateFile) {
+ Copy-Item $templateFile $specFile
+} else {
+ New-Item -ItemType File -Path $specFile -Force | Out-Null
+}
+
+$env:SPECIFY_FEATURE = $branchName
+
+# Output
+[PSCustomObject]@{
+ BranchName = $branchName
+ SpecFile = $specFile
+ FeatureNum = $featureNum
+}
+
+Write-Host "BRANCH_NAME: $branchName"
+Write-Host "SPEC_FILE: $specFile"
+Write-Host "FEATURE_NUM: $featureNum"
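The `Get-SmartBranchName` logic above drops stop words and short tokens, then keeps the first three meaningful words. A Python sketch of the same heuristic (the example descriptions are hypothetical):

```python
import re

STOP_WORDS = {'i','a','an','the','to','for','of','in','on','at','by','with','from',
              'is','are','was','were','be','been','being','have','has','had',
              'do','does','did','will','would','should','could','can','may','might',
              'must','shall','this','that','these','those','my','your','our','their',
              'want','need','add','get','set'}

def smart_branch_name(description: str) -> str:
    """Keep up to three meaningful words; fall back to a full slug."""
    words = re.sub(r'[^a-z0-9]', ' ', description.lower()).split()
    meaningful = [w for w in words if w not in STOP_WORDS and len(w) >= 3][:3]
    if meaningful:
        return '-'.join(meaningful)
    slug = re.sub(r'-+', '-', re.sub(r'[^a-z0-9]', '-', description.lower()))
    return slug.strip('-')

print(smart_branch_name("Add user authentication"))  # user-authentication
```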
diff --git a/.agents/scripts/verify_links.py b/.agents/scripts/verify_links.py
index b095be9..3707521 100644
--- a/.agents/scripts/verify_links.py
+++ b/.agents/scripts/verify_links.py
@@ -1,30 +1,33 @@
import os
import re
+import sys
from pathlib import Path
-# Configuration
-BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
+# Configuration - default base directory, can be overridden via CLI argument
+DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"
+
DIRECTORIES = [
- "00-overview",
- "01-requirements",
- "02-architecture",
- "03-implementation",
- "04-operations",
- "05-decisions"
+ "00-Overview",
+ "01-Requirements",
+ "02-Architecture",
+ "03-Data-and-Storage",
+ "04-Infrastructure-OPS",
+ "05-Engineering-Guidelines",
+ "06-Decision-Records"
]
# Regex for Markdown links: [label](path)
# Handles relative paths, absolute file paths, and anchors
LINK_PATTERN = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')
-def verify_links():
+def verify_links(base_dir: Path):
results = {
"total_links": 0,
"broken_links": []
}
for dir_name in DIRECTORIES:
- directory = BASE_DIR / dir_name
+ directory = base_dir / dir_name
if not directory.exists():
print(f"Directory not found: {directory}")
continue
@@ -53,7 +56,7 @@ def verify_links():
# 2. Handle relative paths
# Remove anchor if present
clean_target_str = target.split("#")[0]
- if not clean_target_str: # It was just an anchor to another file but path is empty? Wait.
+ if not clean_target_str:
continue
# Resolve path relative to current file
@@ -71,8 +74,17 @@ def verify_links():
return results
if __name__ == "__main__":
- print(f"Starting link verification in {BASE_DIR}...")
- audit_results = verify_links()
+ if len(sys.argv) > 1:
+ base_dir = Path(sys.argv[1])
+ else:
+ base_dir = DEFAULT_BASE_DIR
+
+ if not base_dir.exists():
+ print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
+ sys.exit(1)
+
+ print(f"Starting link verification in {base_dir}...")
+ audit_results = verify_links(base_dir)
print(f"\nAudit Summary:")
print(f"Total Internal Links Scanned: {audit_results['total_links']}")
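`verify_links.py` scans with the `LINK_PATTERN` regex shown in the hunk above and strips anchors before checking that targets exist. An illustrative run of that extraction (sample text is made up):

```python
import re

# Same pattern as verify_links.py: [label](target)
LINK_PATTERN = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')

text = "See [the ADR](../06-Decision-Records/ADR-016-security.md#scope) and [home](README.md)."
links = LINK_PATTERN.findall(text)
assert links == [('the ADR', '../06-Decision-Records/ADR-016-security.md#scope'),
                 ('home', 'README.md')]
# Anchors are removed before the existence check
assert links[0][1].split('#')[0].endswith('ADR-016-security.md')
```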
diff --git a/.agents/skills/speckit.checklist/SKILL.md b/.agents/skills/speckit.checklist/SKILL.md
index a0145c7..0886f34 100644
--- a/.agents/skills/speckit.checklist/SKILL.md
+++ b/.agents/skills/speckit.checklist/SKILL.md
@@ -1,6 +1,7 @@
---
name: speckit.checklist
description: Generate a custom checklist for the current feature based on user requirements.
+version: 1.0.0
---
## Checklist Purpose: "Unit Tests for English"
@@ -212,7 +213,7 @@ You are the **Antigravity Quality Gatekeeper**. Your role is to validate the qua
b. **Structure Reference**: Generate the checklist following the canonical template in `templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### ` lines with globally incrementing IDs starting at CHK001.
-7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
+6. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
- Focus areas selected
- Depth level
- Actor/timing
diff --git a/.agents/skills/speckit.constitution/SKILL.md b/.agents/skills/speckit.constitution/SKILL.md
index c9fae15..1fc01fe 100644
--- a/.agents/skills/speckit.constitution/SKILL.md
+++ b/.agents/skills/speckit.constitution/SKILL.md
@@ -1,7 +1,8 @@
---
name: speckit.constitution
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
-handoffs:
+version: 1.0.0
+handoffs:
- label: Build Specification
agent: speckit.specify
prompt: Implement the feature specification based on the updated constitution. I want to build...
@@ -29,7 +30,7 @@ Follow this execution flow:
1. Load the existing constitution template at `memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
- **IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
+   **IMPORTANT**: The user might require fewer or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
diff --git a/.agents/skills/speckit.quizme/SKILL.md b/.agents/skills/speckit.quizme/SKILL.md
index 988c9f1..9403e80 100644
--- a/.agents/skills/speckit.quizme/SKILL.md
+++ b/.agents/skills/speckit.quizme/SKILL.md
@@ -1,7 +1,8 @@
---
name: speckit.quizme
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
-handoffs:
+version: 1.0.0
+handoffs:
- label: Clarify Spec Requirements
agent: speckit.clarify
prompt: Clarify specification requirements
@@ -38,8 +39,9 @@ Execution steps:
- Challenge security (e.g., "You rely on client-side validation here, but what if I curl the API?").
4. **The Quiz Loop**:
- - Present 3-5 challenging scenarios *one by one*.
+ - Present 3-5 challenging scenarios _one by one_.
- Format:
+
> **Scenario**: [Describe a plausible edge case or failure]
> **Current Spec**: [Quote where the spec implies behavior or is silent]
> **The Quiz**: What should the system do here?
@@ -62,4 +64,4 @@ Execution steps:
- **Be a Skeptic**: Don't assume the happy path works.
- **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
-- **Don't be annoying**: Focus on *critical* flaws, not nitpicks.
+- **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.
diff --git a/.agents/skills/speckit.security-audit/SKILL.md b/.agents/skills/speckit.security-audit/SKILL.md
new file mode 100644
index 0000000..a83b044
--- /dev/null
+++ b/.agents/skills/speckit.security-audit/SKILL.md
@@ -0,0 +1,199 @@
+---
+name: speckit.security-audit
+description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
+version: 1.0.0
+depends-on:
+ - speckit.checker
+---
+
+## Role
+
+You are the **Antigravity Security Sentinel**. Your mission is to identify security vulnerabilities, authorization gaps, and compliance issues specific to the LCBP3-DMS project before they reach production.
+
+## Task
+
+Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.
+
+## Context Loading
+
+Before auditing, load the security context:
+
+1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
+2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
+3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions
+4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)
+
+## Execution Steps
+
+### Phase 1: OWASP Top 10 Scan
+
+Scan the `backend/src/` directory for each OWASP category:
+
+| # | OWASP Category | What to Check | Files to Scan |
+| --- | ------------------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------- |
+| A01 | Broken Access Control | Missing `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on controllers, unprotected routes | `**/*.controller.ts` |
+| A02 | Cryptographic Failures | Hardcoded secrets, weak hashing, missing HTTPS enforcement | `**/*.ts`, `docker-compose*.yml` |
+| A03 | Injection | Raw SQL queries, unsanitized user input in TypeORM queries, template literals in queries | `**/*.service.ts`, `**/*.repository.ts` |
+| A04 | Insecure Design | Missing rate limiting on auth endpoints, no idempotency checks on mutations | `**/*.controller.ts`, `**/*.guard.ts` |
+| A05 | Security Misconfiguration | Missing Helmet.js, CORS misconfiguration, debug mode in production | `main.ts`, `app.module.ts`, `docker-compose*.yml` |
+| A06 | Vulnerable Components | Outdated dependencies with known CVEs | `package.json`, `pnpm-lock.yaml` |
+| A07 | Auth Failures | Missing brute-force protection, weak password policy, JWT misconfiguration | `auth/`, `**/*.strategy.ts` |
+| A08 | Data Integrity | Missing input validation, unvalidated file types, missing CSRF protection | `**/*.dto.ts`, `**/*.interceptor.ts` |
+| A09 | Logging Failures | Missing audit logs for security events, sensitive data in logs | `**/*.service.ts`, `**/*.interceptor.ts` |
+| A10 | SSRF | Unrestricted outbound requests, user-controlled URLs | `**/*.service.ts` |
+
+### Phase 2: CASL Authorization Audit
+
+1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`
+2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:
+
+ ```bash
+ # Find controllers without CASL guard
+ grep -rL --include='*.controller.ts' "CaslAbilityGuard" backend/src/modules/
+ ```
+
+3. **Verify 4-Level RBAC enforcement**:
+ - Level 1: System Admin (full access)
+ - Level 2: Project Admin (project-scoped)
+ - Level 3: Department Lead (department-scoped)
+ - Level 4: User (own-records only)
+
+4. **Check ability definitions** — ensure every endpoint has:
+ - `@CheckPolicies()` or `@Can()` decorator
+ - Correct action (`read`, `create`, `update`, `delete`, `manage`)
+ - Correct subject (entity class, not string)
+
+5. **Cross-reference with routes** — verify:
+ - No public endpoints that should be protected
+ - No endpoints with broader permissions than required (principle of least privilege)
+ - Query scoping: users can only query their own records (unless admin)
+
+### Phase 3: File Upload Security (ClamAV)
+
+Check LCBP3-DMS-specific file handling per ADR-016:
+
+1. **Two-Phase Storage verification**:
+ - Upload goes to temp directory first → scanned by ClamAV → moved to permanent
+ - Check for direct writes to permanent storage (violation)
+
+2. **ClamAV integration**:
+ - Verify ClamAV service is configured in `docker-compose*.yml`
+ - Check that file upload endpoints call ClamAV scan before commit
+ - Verify rejection flow for infected files
+
+3. **File type validation**:
+ - Check allowed MIME types against whitelist
+ - Verify file extension validation exists
+ - Check for double-extension attacks (e.g., `file.pdf.exe`)
+
+4. **File size limits**:
+ - Verify upload size limits are enforced
+ - Check for path traversal in filenames (`../`, `..\\`)
+
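The whitelist, double-extension, and path-traversal checks above can be sketched as a small validator. This is a sketch only: the function name and the extension whitelist are illustrative, not the project's actual configuration.

```typescript
// Illustrative whitelist; the real allowed-extensions list lives in project config.
const ALLOWED_EXTENSIONS = new Set(["pdf", "docx", "xlsx", "png", "jpg"]);

// Reject path traversal, double extensions, and non-whitelisted types.
export function isSafeFilename(filename: string): boolean {
  // Path traversal: any separator or ".." sequence is rejected outright.
  if (filename.includes("/") || filename.includes("\\") || filename.includes("..")) {
    return false;
  }
  const parts = filename.split(".");
  if (parts.length < 2 || parts[0] === "") return false; // no extension, or hidden file
  // Double-extension attack: every segment after the base name must be whitelisted,
  // so "file.pdf.exe" fails even though ".pdf" appears in the name.
  return parts.slice(1).every((ext) => ALLOWED_EXTENSIONS.has(ext.toLowerCase()));
}
```

Filename checks alone are not sufficient; MIME/magic-byte validation and the ClamAV scan described above remain necessary because the name is attacker-controlled.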
+### Phase 4: LCBP3-DMS-Specific Checks
+
+1. **Idempotency** — verify all POST/PUT/PATCH endpoints check `Idempotency-Key` header:
+
+ ```bash
+ # Find mutation endpoints without idempotency
+ grep -rn --include='*.controller.ts' -E "@(Post|Put|Patch)" backend/src/modules/
+ # Cross-reference with idempotency guard usage
+ grep -rn "IdempotencyGuard\|Idempotency-Key" backend/src/
+ ```
+
+2. **Optimistic Locking** — verify document entities use `@VersionColumn()`:
+
+ ```bash
+ grep -rn "VersionColumn" backend/src/modules/*/entities/*.entity.ts
+ ```
+
+3. **Redis Redlock** — verify document numbering uses distributed locks:
+
+ ```bash
+ grep -rn "Redlock\|redlock\|acquireLock" backend/src/
+ ```
+
+4. **Password Security** — verify bcrypt with 12+ salt rounds:
+
+ ```bash
+ grep -rn "bcrypt\|saltRounds\|genSalt" backend/src/
+ ```
+
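The 12+ salt-round rule can get a first automated pass with a regex heuristic like the following. The matched call shapes (`genSalt(n)`, `hash(pw, n)`, `saltRounds = n`) are assumptions; calls with dynamic round values still need manual review.

```typescript
// Heuristic scan: return any bcrypt salt-round literals below 12 found in a source string.
export function findWeakSaltRounds(source: string): number[] {
  const weak: number[] = [];
  const patterns = [
    /genSalt\(\s*(\d+)/g,        // bcrypt.genSalt(10)
    /hash\([^,]+,\s*(\d+)\s*\)/g, // bcrypt.hash(pw, 8)
    /saltRounds\s*=\s*(\d+)/g,   // const saltRounds = 10
  ];
  for (const re of patterns) {
    for (const m of source.matchAll(re)) {
      const rounds = Number(m[1]);
      if (rounds < 12) weak.push(rounds);
    }
  }
  return weak;
}
```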
+5. **Rate Limiting** — verify throttle guard on auth endpoints:
+
+ ```bash
+ grep -rn "ThrottlerGuard\|@Throttle" backend/src/modules/auth/
+ ```
+
+6. **Environment Variables** — ensure no `.env` files for production:
+ - Check for `.env` files committed to git
+ - Verify Docker compose uses `environment:` section, not `env_file:`
+
+## Severity Classification
+
+| Severity | Description | Response |
+| -------------- | ----------------------------------------------------- | ----------------------- |
+| 🔴 **Critical** | Exploitable vulnerability, data exposure, auth bypass | Immediate fix required |
+| 🟠 **High** | Missing security control, potential escalation path | Fix before next release |
+| 🟡 **Medium** | Best practice violation, defense-in-depth gap | Plan fix in sprint |
+| 🟢 **Low** | Informational, minor hardening opportunity | Track in backlog |
+
+## Report Format
+
+Generate a structured report:
+
+```markdown
+# 🔒 Security Audit Report
+
+**Date**:
+**Scope**:
+**Auditor**: Antigravity Security Sentinel
+
+## Summary
+
+| Severity | Count |
+| ---------- | ----- |
+| 🔴 Critical | X |
+| 🟠 High | X |
+| 🟡 Medium | X |
+| 🟢 Low | X |
+
+## Findings
+
+### [SEV-001] — 🔴 Critical
+
+**Category**: OWASP A01 / CASL / ClamAV / LCBP3-Specific
+**File**: `<path>:<line>`
+**Description**:
+**Impact**:
+**Recommendation**:
+**Code Example**:
+\`\`\`typescript
+// Before (vulnerable)
+...
+// After (fixed)
+...
+\`\`\`
+
+## CASL Coverage Matrix
+
+| Module | Controller | Guard? | Policies? | Level |
+| ------ | --------------- | ------ | --------- | ------------ |
+| auth | AuthController | ✅ | ✅ | N/A (public) |
+| users | UsersController | ✅ | ✅ | L1-L4 |
+| ... | ... | ... | ... | ... |
+
+## Recommendations Priority
+
+1.
+2.
+ ...
+```
+
+## Operating Principles
+
+- **Read-Only**: This skill only reads and reports. Never modify code.
+- **Evidence-Based**: Every finding must include the exact file path and line number.
+- **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
+- **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
+- **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.
diff --git a/.agents/skills/speckit.specify/SKILL.md b/.agents/skills/speckit.specify/SKILL.md
index da7d63f..c73c424 100644
--- a/.agents/skills/speckit.specify/SKILL.md
+++ b/.agents/skills/speckit.specify/SKILL.md
@@ -1,7 +1,8 @@
---
name: speckit.specify
description: Create or update the feature specification from a natural language feature description.
-handoffs:
+version: 1.0.0
+handoffs:
- label: Build Technical Plan
agent: speckit.plan
prompt: Create a plan for the spec. I am building with...
@@ -44,27 +45,28 @@ Given that feature description, do this:
- "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating new one**:
-
+
a. First, fetch all remote branches to ensure we have the latest information:
- ```bash
- git fetch --all --prune
- ```
-
+
+ ```bash
+ git fetch --all --prune
+ ```
+
b. Find the highest feature number across all sources for the short-name:
- - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-$'`
- - Local branches: `git branch | grep -E '^[* ]*[0-9]+-$'`
- - Specs directories: Check for directories matching `specs/[0-9]+-`
-
+ - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-$'`
+ - Local branches: `git branch | grep -E '^[* ]*[0-9]+-$'`
+ - Specs directories: Check for directories matching `specs/[0-9]+-`
+
c. Determine the next available number:
- - Extract all numbers from all three sources
- - Find the highest number N
- - Use N+1 for the new branch number
-
+ - Extract all numbers from all three sources
+ - Find the highest number N
+ - Use N+1 for the new branch number
+
d. Run the script `../scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
- - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
- - Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
- - PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
-
+ - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
+ - Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
+ - PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
+
**IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
- Only match branches/directories with the exact short-name pattern
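The highest-number calculation in step c can be sketched as a pure function over the names gathered from all three sources (the branch and directory names below are illustrative):

```typescript
// Given names from remote branches, local branches, and specs/ directories,
// extract leading feature numbers and return the next available one.
export function nextFeatureNumber(names: string[]): number {
  const numbers = names
    // Strip any ref/path prefix, then match a leading "<number>-".
    .map((name) => /^(\d+)-/.exec(name.replace(/^.*\//, "")))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => Number(m[1]));
  const highest = numbers.length > 0 ? Math.max(...numbers) : 0;
  return highest + 1;
}
```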
@@ -77,30 +79,29 @@ Given that feature description, do this:
3. Load `templates/spec-template.md` to understand required sections.
4. Follow this execution flow:
-
- 1. Parse user description from Input
- If empty: ERROR "No feature description provided"
- 2. Extract key concepts from description
- Identify: actors, actions, data, constraints
- 3. For unclear aspects:
- - Make informed guesses based on context and industry standards
- - Only mark with [NEEDS CLARIFICATION: specific question] if:
- - The choice significantly impacts feature scope or user experience
- - Multiple reasonable interpretations exist with different implications
- - No reasonable default exists
- - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
- 4. Fill User Scenarios & Testing section
- If no clear user flow: ERROR "Cannot determine user scenarios"
- 5. Generate Functional Requirements
- Each requirement must be testable
- Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
- 6. Define Success Criteria
- Create measurable, technology-agnostic outcomes
- Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
- Each criterion must be verifiable without implementation details
- 7. Identify Key Entities (if data involved)
- 8. Return: SUCCESS (spec ready for planning)
+ 1. Parse user description from Input
+ If empty: ERROR "No feature description provided"
+ 2. Extract key concepts from description
+ Identify: actors, actions, data, constraints
+ 3. For unclear aspects:
+ - Make informed guesses based on context and industry standards
+ - Only mark with [NEEDS CLARIFICATION: specific question] if:
+ - The choice significantly impacts feature scope or user experience
+ - Multiple reasonable interpretations exist with different implications
+ - No reasonable default exists
+ - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
+ - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
+ 4. Fill User Scenarios & Testing section
+ If no clear user flow: ERROR "Cannot determine user scenarios"
+ 5. Generate Functional Requirements
+ Each requirement must be testable
+ Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
+ 6. Define Success Criteria
+ Create measurable, technology-agnostic outcomes
+ Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
+ Each criterion must be verifiable without implementation details
+ 7. Identify Key Entities (if data involved)
+ 8. Return: SUCCESS (spec ready for planning)
5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
@@ -108,91 +109,90 @@ Given that feature description, do this:
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
- ```markdown
- # Specification Quality Checklist: [FEATURE NAME]
-
- **Purpose**: Validate specification completeness and quality before proceeding to planning
- **Created**: [DATE]
- **Feature**: [Link to spec.md]
-
- ## Content Quality
-
- - [ ] No implementation details (languages, frameworks, APIs)
- - [ ] Focused on user value and business needs
- - [ ] Written for non-technical stakeholders
- - [ ] All mandatory sections completed
-
- ## Requirement Completeness
-
- - [ ] No [NEEDS CLARIFICATION] markers remain
- - [ ] Requirements are testable and unambiguous
- - [ ] Success criteria are measurable
- - [ ] Success criteria are technology-agnostic (no implementation details)
- - [ ] All acceptance scenarios are defined
- - [ ] Edge cases are identified
- - [ ] Scope is clearly bounded
- - [ ] Dependencies and assumptions identified
-
- ## Feature Readiness
-
- - [ ] All functional requirements have clear acceptance criteria
- - [ ] User scenarios cover primary flows
- - [ ] Feature meets measurable outcomes defined in Success Criteria
- - [ ] No implementation details leak into specification
-
- ## Notes
-
- - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
- ```
+ ```markdown
+ # Specification Quality Checklist: [FEATURE NAME]
+
+ **Purpose**: Validate specification completeness and quality before proceeding to planning
+ **Created**: [DATE]
+ **Feature**: [Link to spec.md]
+
+ ## Content Quality
+
+ - [ ] No implementation details (languages, frameworks, APIs)
+ - [ ] Focused on user value and business needs
+ - [ ] Written for non-technical stakeholders
+ - [ ] All mandatory sections completed
+
+ ## Requirement Completeness
+
+ - [ ] No [NEEDS CLARIFICATION] markers remain
+ - [ ] Requirements are testable and unambiguous
+ - [ ] Success criteria are measurable
+ - [ ] Success criteria are technology-agnostic (no implementation details)
+ - [ ] All acceptance scenarios are defined
+ - [ ] Edge cases are identified
+ - [ ] Scope is clearly bounded
+ - [ ] Dependencies and assumptions identified
+
+ ## Feature Readiness
+
+ - [ ] All functional requirements have clear acceptance criteria
+ - [ ] User scenarios cover primary flows
+ - [ ] Feature meets measurable outcomes defined in Success Criteria
+ - [ ] No implementation details leak into specification
+
+ ## Notes
+
+ - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
+ ```
b. **Run Validation Check**: Review the spec against each checklist item:
- - For each item, determine if it passes or fails
- - Document specific issues found (quote relevant spec sections)
+ - For each item, determine if it passes or fails
+ - Document specific issues found (quote relevant spec sections)
c. **Handle Validation Results**:
+ - **If all items pass**: Mark checklist complete and proceed to step 6
- - **If all items pass**: Mark checklist complete and proceed to step 6
+ - **If items fail (excluding [NEEDS CLARIFICATION])**:
+ 1. List the failing items and specific issues
+ 2. Update the spec to address each issue
+ 3. Re-run validation until all items pass (max 3 iterations)
+ 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
- - **If items fail (excluding [NEEDS CLARIFICATION])**:
- 1. List the failing items and specific issues
- 2. Update the spec to address each issue
- 3. Re-run validation until all items pass (max 3 iterations)
- 4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
+ - **If [NEEDS CLARIFICATION] markers remain**:
+ 1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
+ 2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
+ 3. For each clarification needed (max 3), present options to user in this format:
- - **If [NEEDS CLARIFICATION] markers remain**:
- 1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
- 2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
- 3. For each clarification needed (max 3), present options to user in this format:
+ ```markdown
+ ## Question [N]: [Topic]
- ```markdown
- ## Question [N]: [Topic]
-
- **Context**: [Quote relevant spec section]
-
- **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
-
- **Suggested Answers**:
-
- | Option | Answer | Implications |
- |--------|--------|--------------|
- | A | [First suggested answer] | [What this means for the feature] |
- | B | [Second suggested answer] | [What this means for the feature] |
- | C | [Third suggested answer] | [What this means for the feature] |
- | Custom | Provide your own answer | [Explain how to provide custom input] |
-
- **Your choice**: _[Wait for user response]_
- ```
+ **Context**: [Quote relevant spec section]
- 4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- - Use consistent spacing with pipes aligned
- - Each cell should have spaces around content: `| Content |` not `|Content|`
- - Header separator must have at least 3 dashes: `|--------|`
- - Test that the table renders correctly in markdown preview
- 5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
- 6. Present all questions together before waiting for responses
- 7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
- 8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
- 9. Re-run validation after all clarifications are resolved
+ **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
+
+ **Suggested Answers**:
+
+ | Option | Answer | Implications |
+ | ------ | ------------------------- | ------------------------------------- |
+ | A | [First suggested answer] | [What this means for the feature] |
+ | B | [Second suggested answer] | [What this means for the feature] |
+ | C | [Third suggested answer] | [What this means for the feature] |
+ | Custom | Provide your own answer | [Explain how to provide custom input] |
+
+ **Your choice**: _[Wait for user response]_
+ ```
+
+ 4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
+ - Use consistent spacing with pipes aligned
+ - Each cell should have spaces around content: `| Content |` not `|Content|`
+ - Header separator must have at least 3 dashes: `|--------|`
+ - Test that the table renders correctly in markdown preview
+ 5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
+ 6. Present all questions together before waiting for responses
+ 7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
+ 8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
+ 9. Re-run validation after all clarifications are resolved
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
diff --git a/.agents/skills/speckit.taskstoissues/SKILL.md b/.agents/skills/speckit.taskstoissues/SKILL.md
index 69bc5dc..bb9fd68 100644
--- a/.agents/skills/speckit.taskstoissues/SKILL.md
+++ b/.agents/skills/speckit.taskstoissues/SKILL.md
@@ -1,6 +1,9 @@
---
name: speckit.taskstoissues
-description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
+description: Convert existing tasks into actionable, dependency-ordered issues for the feature based on available design artifacts.
+version: 1.1.0
+depends-on:
+ - speckit.tasks
tools: ['github/github-mcp-server/issue_write']
---
@@ -14,22 +17,190 @@ You **MUST** consider the user input before proceeding (if not empty).
## Role
-You are the **Antigravity Tracker Integrator**. Your role is to synchronize technical tasks with external project management systems like GitHub Issues. You ensure that every piece of work has a clear, tracked identity for collaborative execution.
+You are the **Antigravity Tracker Integrator**. Your role is to synchronize technical tasks with external project management systems (GitHub Issues or Gitea Issues). You ensure that every piece of work has a clear, tracked identity for collaborative execution.
## Task
### Outline
-1. Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
-1. From the executed script, extract the path to **tasks**.
-1. Get the Git remote by running:
+Convert all tasks from `tasks.md` into well-structured issues on the appropriate platform (GitHub or Gitea), preserving dependency order, phase grouping, and labels.
-```bash
-git config --get remote.origin.url
-```
+### Execution Steps
-**ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL**
+1. **Load Task Data**:
+ Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
+ For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
-1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.
+2. **Extract tasks path** from the executed script output.
-**UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
+3. **Detect Platform** — Get the Git remote and determine the platform:
+
+ ```bash
+ git config --get remote.origin.url
+ ```
+
+ | Remote URL Pattern | Platform | API |
+ | ---------------------------------------- | ----------- | --------------------------- |
+ | `github.com` | GitHub | GitHub MCP or REST API |
+ | `gitea.*`, custom domain with `/api/v1/` | Gitea | Gitea REST API |
+ | Other | Unsupported | **STOP** with error message |
+
+ **Platform Detection Rules**:
+ - If URL contains `github.com` → GitHub
+ - If URL contains a known Gitea domain (check `$ARGUMENTS` for hints, or try `/api/v1/version`) → Gitea
+ - If `$ARGUMENTS` explicitly specifies platform (e.g., `--platform gitea`) → use that
+ - If uncertain → **ASK** the user which platform to use
+
+ > **UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
+
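The detection rules above can be sketched as a small helper. The precedence (explicit flag first, then remote URL, otherwise fall through so the caller can ask the user) follows this skill's rules; the signature is illustrative.

```typescript
type Platform = "github" | "gitea" | "unknown";

// Resolve the target platform from an explicit --platform flag or the git remote URL.
export function detectPlatform(remoteUrl: string, args: string[] = []): Platform {
  // Explicit flag wins.
  const flagIndex = args.indexOf("--platform");
  if (flagIndex !== -1 && args[flagIndex + 1]) {
    const forced = args[flagIndex + 1];
    if (forced === "github" || forced === "gitea") return forced;
  }
  if (remoteUrl.includes("github.com")) return "github";
  if (/gitea/i.test(remoteUrl)) return "gitea";
  // Unknown: caller must ask the user rather than guess.
  return "unknown";
}
```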
+4. **Parse `tasks.md`** — Extract structured data for each task:
+
+ | Field | Source | Example |
+ | --------------- | ---------------------------- | -------------------------- |
+ | Task ID | `T001`, `T002`, etc. | `T001` |
+ | Phase | Phase heading | `Phase 1: Setup` |
+ | Description | Task text after ID | `Create project structure` |
+ | File paths | Paths in description | `src/models/user.py` |
+ | Parallel marker | `[P]` flag | `true`/`false` |
+ | User Story | `[US1]`, `[US2]`, etc. | `US1` |
+ | Dependencies | Sequential ordering in phase | `T001 → T002` |
+
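A parser for the fields in the table above might look like the following. The exact `tasks.md` line layout and the file-path heuristic are assumptions based on the examples in this skill:

```typescript
interface ParsedTask {
  id: string;
  parallel: boolean;
  userStory?: string;
  description: string;
  filePaths: string[];
}

// Parse one tasks.md line such as:
//   "- [ ] T001 [P] [US1] Create user model src/models/user.py"
export function parseTaskLine(line: string): ParsedTask | null {
  const m = /^- \[[ xX]\] (T\d+)\s+(\[P\]\s+)?(\[(US\d+)\]\s+)?(.*)$/.exec(line.trim());
  if (!m) return null;
  const description = m[5].trim();
  // Heuristic: anything that looks like a relative path with an extension is a file reference.
  const filePaths = description.match(/\b[\w./-]+\/[\w./-]+\.\w+\b/g) ?? [];
  return {
    id: m[1],
    parallel: m[2] !== undefined,
    userStory: m[4],
    description,
    filePaths,
  };
}
```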
+5. **Load Feature Context** (for issue body enrichment):
+ - Read `spec.md` for requirement references
+ - Read `plan.md` for architecture context (if exists)
+ - Map tasks to requirements where possible
+
+6. **Generate Issue Data** — For each task, create an issue with:
+
+ ### Issue Title Format
+
+ ```
+   [<Task ID>] <Task description>
+ ```
+
+ Example: `[T001] Create project structure per implementation plan`
+
+ ### Issue Body Template
+
+ ```markdown
+ ## Task Details
+
+ **Task ID**: <T###>
+ **Phase**: <phase name>
+ **Parallel**: <yes | no>
+ **User Story**: <US# or N/A>
+
+ ## Description
+
+ <Full task description from tasks.md>
+
+ ## File Paths
+
+ - `<file path 1>`
+ - `<file path 2>`
+
+ ## Acceptance Criteria
+
+ - [ ] Implementation complete per task description
+ - [ ] Relevant tests pass (if applicable)
+ - [ ] No regressions introduced
+
+ ## Context
+
+ **Feature**: <feature directory name>
+ **Spec Reference**: <relevant requirement(s) in spec.md>
+
+ ---
+
+ _Auto-generated by speckit.taskstoissues from `tasks.md`_
+ ```
+
+7. **Apply Labels** — Assign labels based on task metadata:
+
+ | Condition | Label |
+ | ---------------------------------- | ------------------ |
+ | Phase 1 (Setup) | `phase:setup` |
+ | Phase 2 (Foundation) | `phase:foundation` |
+ | Phase 3+ (User Stories) | `phase:story` |
+ | Final Phase (Polish) | `phase:polish` |
+ | Has `[P]` marker | `parallel` |
+ | Has `[US1]` marker | `story:US1` |
+ | Task creates test files | `type:test` |
+ | Task creates models/entities | `type:model` |
+ | Task creates services | `type:service` |
+ | Task creates controllers/endpoints | `type:api` |
+ | Task creates UI components | `type:ui` |
+
+ **Label Creation**: If labels don't exist on the repo, create them first before assigning.
+
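The label table above can be encoded as a lookup. The phase-name matching and the type heuristics below are assumptions that mirror this skill's conventions, not a fixed taxonomy:

```typescript
// Derive labels for a parsed task per the label table above.
export function labelsForTask(task: {
  phase: string;
  parallel: boolean;
  userStory?: string;
  description: string;
}): string[] {
  const labels: string[] = [];
  if (/setup/i.test(task.phase)) labels.push("phase:setup");
  else if (/foundation/i.test(task.phase)) labels.push("phase:foundation");
  else if (/polish/i.test(task.phase)) labels.push("phase:polish");
  else labels.push("phase:story");
  if (task.parallel) labels.push("parallel");
  if (task.userStory) labels.push(`story:${task.userStory}`);
  // Type heuristics keyed off the task description (first match wins).
  const d = task.description.toLowerCase();
  if (/\btest\b|\.spec\.|\.test\./.test(d)) labels.push("type:test");
  else if (/entit|\bmodel\b/.test(d)) labels.push("type:model");
  else if (/service/.test(d)) labels.push("type:service");
  else if (/controller|endpoint/.test(d)) labels.push("type:api");
  else if (/component|\bui\b/.test(d)) labels.push("type:ui");
  return labels;
}
```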
+8. **Set Milestone** (optional):
+   - If `$ARGUMENTS` includes `--milestone "<name>"`, assign all issues to that milestone
+ - If milestone doesn't exist, create it with the feature name as the title
+
+9. **Create Issues** — Execute in dependency order:
+
+ **For GitHub**: Use the GitHub MCP server tool `issue_write` to create issues.
+
+ **For Gitea**: Use the Gitea REST API:
+
+ ```bash
+ # Create issue
+   curl -s -X POST "https://<GITEA_HOST>/api/v1/repos/<OWNER>/<REPO>/issues" \
+     -H "Authorization: token ${GITEA_TOKEN}" \
+     -H "Content-Type: application/json" \
+     -d '{
+       "title": "[T001] Create project structure",
+       "body": "<issue body markdown>",
+       "labels": []
+     }'
+ ```
+
+ **Authentication**:
+ - GitHub: Uses MCP server (pre-authenticated)
+ - Gitea: Requires `GITEA_TOKEN` environment variable. If not set, **STOP** and ask user to provide it.
+
+ **Rate Limiting**:
+ - Create issues sequentially with a 500ms delay between requests
+ - If rate limited (HTTP 429), wait and retry with exponential backoff
+
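The sequential-with-backoff policy can be sketched as a wrapper around the platform call. Here `createIssue` is a hypothetical stand-in for the GitHub MCP tool or the Gitea REST request, and the 429 check assumes the error carries a `status` field:

```typescript
// Retry a single issue-creation call with exponential backoff on HTTP 429.
export async function createWithBackoff<T>(
  createIssue: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await createIssue();
    } catch (err) {
      const rateLimited = (err as { status?: number }).status === 429;
      if (!rateLimited || attempt >= maxRetries) throw err;
      // 500ms, 1s, 2s, 4s, ... until maxRetries is exhausted.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Callers would still insert the fixed 500ms pause between distinct issues; this wrapper only handles the rate-limited retry path for one request.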
+10. **Track Created Issues** — Maintain a mapping of `TaskID → IssueNumber`:
+
+ ```markdown
+ | Task ID | Issue # | Title | URL |
+ | ------- | ------- | ----------------------------- | ----- |
+ | T001    | #42     | Create project structure      | <issue URL> |
+ | T002    | #43     | Configure database connection | <issue URL> |
+ ```
+
+11. **Update `tasks.md`** (optional — ask user first):
+ - Append issue references to each task line:
+ ```
+ - [ ] T001 Create project structure (#42)
+ ```
+
+12. **Report Completion**:
+ - Total issues created
+ - Issues by phase
+ - Issues by label
+ - Any failures (with retry suggestions)
+ - Link to issue board/project
+ - Mapping table (Task ID → Issue #)
+
+## Arguments
+
+| Argument | Description | Default |
+| ---------------------------- | --------------------------------------- | ------------- |
+| `--platform <platform>`      | Force platform (`github` or `gitea`)    | Auto-detect   |
+| `--milestone "<name>"`       | Assign issues to milestone              | None          |
+| `--dry-run` | Preview issues without creating | `false` |
+| `--labels-only` | Only create labels, don't create issues | `false` |
+| `--update-tasks` | Auto-update tasks.md with issue refs | `false` (ask) |
+
+## Operating Principles
+
+- **Idempotency**: Check if an issue with the same title already exists before creating duplicates
+- **Dependency Order**: Create issues in task execution order so dependencies are naturally numbered
+- **Rich Context**: Include enough context in each issue body that it can be understood standalone
+- **Label Consistency**: Use a consistent label taxonomy across all issues
+- **Platform Safety**: Never create issues on repos that don't match the git remote
+- **Dry Run Support**: Always support `--dry-run` to preview before creating
diff --git a/.agents/workflows/00-speckit.all.md b/.agents/workflows/00-speckit.all.md
index 4872fae..d9736e7 100644
--- a/.agents/workflows/00-speckit.all.md
+++ b/.agents/workflows/00-speckit.all.md
@@ -4,34 +4,64 @@ description: Run the full speckit pipeline from specification to analysis in one
# Workflow: speckit.all
-This meta-workflow orchestrates the complete specification pipeline.
+This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.
-## Pipeline Steps
+## Preparation Phase (Steps 1-5)
1. **Specify** (`/speckit.specify`):
- - Use the `view_file` tool to read: `.agent/skills/speckit.specify/SKILL.md`
+ - Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
- Execute with user's feature description
- Creates: `spec.md`
2. **Clarify** (`/speckit.clarify`):
- - Use the `view_file` tool to read: `.agent/skills/speckit.clarify/SKILL.md`
+ - Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
- Execute to resolve ambiguities
- Updates: `spec.md`
3. **Plan** (`/speckit.plan`):
- - Use the `view_file` tool to read: `.agent/skills/speckit.plan/SKILL.md`
+ - Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
- Execute to create technical design
- Creates: `plan.md`
4. **Tasks** (`/speckit.tasks`):
- - Use the `view_file` tool to read: `.agent/skills/speckit.tasks/SKILL.md`
+ - Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
- Execute to generate task breakdown
- Creates: `tasks.md`
5. **Analyze** (`/speckit.analyze`):
- - Use the `view_file` tool to read: `.agent/skills/speckit.analyze/SKILL.md`
- - Execute to validate consistency
+ - Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
+ - Execute to validate consistency across spec, plan, and tasks
- Output: Analysis report
+ - **Gate**: If critical issues found, stop and fix before proceeding
+
+## Implementation Phase (Steps 6-7)
+
+6. **Implement** (`/speckit.implement`):
+ - Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
+ - Execute all tasks from `tasks.md` with anti-regression protocols
+ - Output: Working implementation
+
+7. **Check** (`/speckit.checker`):
+ - Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
+ - Run static analysis (linters, type checkers, security scanners)
+ - Output: Checker report
+
+## Verification Phase (Steps 8-10)
+
+8. **Test** (`/speckit.tester`):
+ - Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
+ - Run tests with coverage
+ - Output: Test + coverage report
+
+9. **Review** (`/speckit.reviewer`):
+ - Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
+ - Perform code review
+ - Output: Review report with findings
+
+10. **Validate** (`/speckit.validate`):
+ - Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
+ - Verify implementation matches spec requirements
+ - Output: Validation report (pass/fail)
## Usage
@@ -39,9 +69,17 @@ This meta-workflow orchestrates the complete specification pipeline.
/speckit.all "Build a user authentication system with OAuth2 support"
```
+## Pipeline Comparison
+
+| Pipeline | Steps | Use When |
+| ------------------ | ------------------------- | -------------------------------------- |
+| `/speckit.prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
+| `/speckit.all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |
+
## On Error
If any step fails, stop the pipeline and report:
+
- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")
diff --git a/.agents/workflows/01-speckit.constitution.md b/.agents/workflows/01-speckit.constitution.md
index ced7ec4..96544a0 100644
--- a/.agents/workflows/01-speckit.constitution.md
+++ b/.agents/workflows/01-speckit.constitution.md
@@ -8,11 +8,11 @@ description: Create or update the project constitution from interactive or provi
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.constitution/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- - If `.specify/` directory doesn't exist: Initialize the speckit structure first
\ No newline at end of file
+ - If `.specify/` directory doesn't exist: Initialize the speckit structure first
diff --git a/.agents/workflows/02-speckit.specify.md b/.agents/workflows/02-speckit.specify.md
index 7b5dafa..69fd061 100644
--- a/.agents/workflows/02-speckit.specify.md
+++ b/.agents/workflows/02-speckit.specify.md
@@ -9,7 +9,7 @@ description: Create or update the feature specification from a natural language
- This is typically the starting point of a new feature.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.specify/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/03-speckit.clarify.md b/.agents/workflows/03-speckit.clarify.md
index 7eb1a12..9217be3 100644
--- a/.agents/workflows/03-speckit.clarify.md
+++ b/.agents/workflows/03-speckit.clarify.md
@@ -8,7 +8,7 @@ description: Identify underspecified areas in the current feature spec by asking
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.clarify/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/04-speckit.plan.md b/.agents/workflows/04-speckit.plan.md
index 0af702f..456b83c 100644
--- a/.agents/workflows/04-speckit.plan.md
+++ b/.agents/workflows/04-speckit.plan.md
@@ -8,11 +8,11 @@ description: Execute the implementation planning workflow using the plan templat
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.plan/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- - If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
\ No newline at end of file
+ - If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
diff --git a/.agents/workflows/05-speckit.tasks.md b/.agents/workflows/05-speckit.tasks.md
index f1a6837..54967d0 100644
--- a/.agents/workflows/05-speckit.tasks.md
+++ b/.agents/workflows/05-speckit.tasks.md
@@ -8,7 +8,7 @@ description: Generate an actionable, dependency-ordered tasks.md for the feature
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.tasks/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
@@ -16,4 +16,4 @@ description: Generate an actionable, dependency-ordered tasks.md for the feature
4. **On Error**:
- If `plan.md` is missing: Run `/speckit.plan` first
- - If `spec.md` is missing: Run `/speckit.specify` first
\ No newline at end of file
+ - If `spec.md` is missing: Run `/speckit.specify` first
diff --git a/.agents/workflows/06-speckit.analyze.md b/.agents/workflows/06-speckit.analyze.md
index e4aa5fb..a177a65 100644
--- a/.agents/workflows/06-speckit.analyze.md
+++ b/.agents/workflows/06-speckit.analyze.md
@@ -10,7 +10,7 @@ description: Perform a non-destructive cross-artifact consistency and quality an
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.analyze/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/07-speckit.implement.md b/.agents/workflows/07-speckit.implement.md
index dd88763..9a23850 100644
--- a/.agents/workflows/07-speckit.implement.md
+++ b/.agents/workflows/07-speckit.implement.md
@@ -8,7 +8,7 @@ description: Execute the implementation plan by processing and executing all tas
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.implement/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/08-speckit.checker.md b/.agents/workflows/08-speckit.checker.md
index 92e7872..821544b 100644
--- a/.agents/workflows/08-speckit.checker.md
+++ b/.agents/workflows/08-speckit.checker.md
@@ -10,7 +10,7 @@ description: Run static analysis tools and aggregate results.
- The user may specify paths to check or run on entire project.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.checker/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/09-speckit.tester.md b/.agents/workflows/09-speckit.tester.md
index 5ca1c3a..80f1eab 100644
--- a/.agents/workflows/09-speckit.tester.md
+++ b/.agents/workflows/09-speckit.tester.md
@@ -10,7 +10,7 @@ description: Execute tests, measure coverage, and report results.
- The user may specify test paths, options, or just run all tests.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.tester/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/10-speckit.reviewer.md b/.agents/workflows/10-speckit.reviewer.md
index 96b1c4c..e5e18ef 100644
--- a/.agents/workflows/10-speckit.reviewer.md
+++ b/.agents/workflows/10-speckit.reviewer.md
@@ -8,7 +8,7 @@ description: Perform code review with actionable feedback and suggestions.
- The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.reviewer/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/11-speckit.validate.md b/.agents/workflows/11-speckit.validate.md
index 3a5d4f9..fbc20d1 100644
--- a/.agents/workflows/11-speckit.validate.md
+++ b/.agents/workflows/11-speckit.validate.md
@@ -8,7 +8,7 @@ description: Validate that implementation matches specification requirements.
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.validate/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/create-backend-module.md b/.agents/workflows/create-backend-module.md
index 9dfdd67..78cf13d 100644
--- a/.agents/workflows/create-backend-module.md
+++ b/.agents/workflows/create-backend-module.md
@@ -9,9 +9,11 @@ Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-00
## Steps
+// turbo
+
1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting
-2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables
+// turbo 2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql` for relevant tables
3. **Scaffold module folder**
@@ -40,10 +42,10 @@ backend/src/modules//
9. **Register in AppModule** — import the new module in `app.module.ts`.
-10. **Write unit test** — cover service methods with Jest mocks. Run:
+// turbo 10. **Write unit test** — cover service methods with Jest mocks. Run:
```bash
pnpm test:watch
```
-11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
+// turbo 11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
diff --git a/.agents/workflows/schema-change.md b/.agents/workflows/schema-change.md
new file mode 100644
index 0000000..ef5afb2
--- /dev/null
+++ b/.agents/workflows/schema-change.md
@@ -0,0 +1,108 @@
+---
+description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
+---
+
+# Schema Change Workflow
+
+Use this workflow when modifying database schema for LCBP3-DMS.
+Follows `specs/06-Decision-Records/ADR-009-database-strategy.md` — **NO TypeORM migrations**.
+
+## Pre-Change Checklist
+
+- [ ] Change is required by a spec in `specs/01-Requirements/`
+- [ ] Existing data impact has been assessed
+- [ ] No SQL triggers are being added (business logic in NestJS only)
+
+## Steps
+
+1. **Read current schema** — load the full schema file:
+
+```
+specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql
+```
+
+2. **Read data dictionary** — understand current field definitions:
+
+```
+specs/03-Data-and-Storage/03-01-data-dictionary.md
+```
+
+// turbo 3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:
+
+- Tables being modified/created
+- Columns being added/renamed/dropped
+- Foreign key relationships affected
+- Indexes being added/modified
+- Seed data impact (if any)
+
+4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`:
+ - Add/modify table definitions
+ - Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
+ - Add inline comments for new columns explaining purpose
+ - Ensure `DEFAULT` values and `NOT NULL` constraints are correct
+ - Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed
+
+> [!CAUTION]
+> **NEVER use SQL Triggers.** All business logic must live in NestJS services.
+
+5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
+ - Add new tables/columns with descriptions
+ - Update data types and constraints
+ - Document business rules for new fields
+ - Add enum value definitions if applicable
+
+6. **Update seed data** (if applicable):
+   - `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql` — for reference/lookup data
+   - `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` — for new CASL permissions
+
+7. **Update TypeORM entity** — modify the corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
+ - Map ONLY columns defined in schema SQL
+ - Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
+ - Add `@VersionColumn()` if optimistic locking is needed
+
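Conceptually, the version column turns every update into a compare-and-swap: a write succeeds only if the caller read the latest version. A decorator-free sketch of that behavior (illustrative names; the real entity relies on TypeORM's `@VersionColumn()` and the database's WHERE clause, with a retry loop in the service layer handling conflicts):

```typescript
// In-memory model of optimistic locking as enforced by a version column.
interface Row {
  id: number;
  docNo: string;
  version: number;
}

class OptimisticLockError extends Error {}

function updateRow(row: Row, expectedVersion: number, patch: Partial<Row>): Row {
  if (row.version !== expectedVersion) {
    // Another writer committed first; the caller must re-read and retry.
    throw new OptimisticLockError(
      `stale version ${expectedVersion}, current ${row.version}`,
    );
  }
  // Successful write bumps the version so concurrent stale writers fail.
  return { ...row, ...patch, version: row.version + 1 };
}
```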
+8. **Update DTOs** — if new columns are exposed via API:
+ - Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
+ - Add `class-validator` decorators for all new fields
+ - Never use `any` type
+
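As a rule of thumb, every field exposed on a DTO gets an explicit validation rule. A framework-free sketch of the intent (the real DTOs express the same rules with `class-validator` decorators; field names below are illustrative):

```typescript
// Decorator-free model of DTO validation: each field has an explicit rule,
// and an empty error list means the input is valid.
function validateCreateDocDto(input: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof input.title !== "string" || input.title.length === 0) {
    errors.push("title must be a non-empty string");
  }
  if (typeof input.revision !== "number" || !Number.isInteger(input.revision)) {
    errors.push("revision must be an integer");
  }
  return errors;
}
```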
+// turbo 9. **Run type check** — verify no TypeScript errors:
+
+```bash
+cd backend && npx tsc --noEmit
+```
+
+10. **Generate SQL diff** — create a summary of changes for the user to apply manually:
+
+```
+-- Schema Change Summary
+-- Date:
+-- Feature:
+-- Tables affected:
+--
+-- ⚠️ Apply this SQL to the live database manually:
+
+ALTER TABLE ...;
+-- or
+CREATE TABLE ...;
+```
+
+11. **Notify user** — present the SQL diff and remind them:
+ - Apply the SQL change to the live database manually
+ - Verify the change doesn't break existing data
+ - Run `pnpm test` after applying to confirm entity mappings work
+
+## Common Patterns
+
+| Change Type | Template |
+| ----------- | -------------------------------------------------------------- |
+| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
+| Add table | Full `CREATE TABLE` with constraints and indexes |
+| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
+| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
+| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |
+
+## On Error
+
+- If schema SQL has syntax errors → fix and re-validate by applying the file to a scratch MariaDB instance (`tsc --noEmit` checks only the TypeScript entities, not the SQL)
+- If entity mapping doesn't match schema → compare column-by-column against SQL
+- If seed data conflicts → check unique constraints and foreign keys
diff --git a/.agents/workflows/speckit.prepare.md b/.agents/workflows/speckit.prepare.md
index 9e15d14..d7fb5f7 100644
--- a/.agents/workflows/speckit.prepare.md
+++ b/.agents/workflows/speckit.prepare.md
@@ -8,20 +8,20 @@ This workflow orchestrates the sequential execution of the Speckit preparation p
1. **Step 1: Specify (Skill 02)**
- Goal: Create or update the `spec.md` based on user input.
- - Action: Read and execute `.agent/skills/speckit.specify/SKILL.md`.
+ - Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.
2. **Step 2: Clarify (Skill 03)**
- Goal: Refine the `spec.md` by identifying and resolving ambiguities.
- - Action: Read and execute `.agent/skills/speckit.clarify/SKILL.md`.
+ - Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.
3. **Step 3: Plan (Skill 04)**
- Goal: Generate `plan.md` from the finalized spec.
- - Action: Read and execute `.agent/skills/speckit.plan/SKILL.md`.
+ - Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.
4. **Step 4: Tasks (Skill 05)**
- - Goal: Generate actional `tasks.md` from the plan.
- - Action: Read and execute `.agent/skills/speckit.tasks/SKILL.md`.
+ - Goal: Generate actionable `tasks.md` from the plan.
+ - Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.
5. **Step 5: Analyze (Skill 06)**
- Goal: Validate consistency across all design artifacts (spec, plan, tasks).
- - Action: Read and execute `.agent/skills/speckit.analyze/SKILL.md`.
+ - Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.
diff --git a/.agents/workflows/util-speckit.checklist.md b/.agents/workflows/util-speckit.checklist.md
index 4c7c496..49aa2d9 100644
--- a/.agents/workflows/util-speckit.checklist.md
+++ b/.agents/workflows/util-speckit.checklist.md
@@ -8,7 +8,7 @@ description: Generate a custom checklist for the current feature based on user r
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.checklist/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/util-speckit.diff.md b/.agents/workflows/util-speckit.diff.md
index db7760b..da3dd20 100644
--- a/.agents/workflows/util-speckit.diff.md
+++ b/.agents/workflows/util-speckit.diff.md
@@ -8,7 +8,7 @@ description: Compare two versions of a spec or plan to highlight changes.
- The user has provided an input prompt (optional file paths or version references).
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.diff/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/util-speckit.migrate.md b/.agents/workflows/util-speckit.migrate.md
index 3aa16b2..cd2e5b4 100644
--- a/.agents/workflows/util-speckit.migrate.md
+++ b/.agents/workflows/util-speckit.migrate.md
@@ -8,7 +8,7 @@ description: Migrate existing projects into the speckit structure by generating
- The user has provided an input prompt (path to analyze, feature name).
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.migrate/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/util-speckit.quizme.md b/.agents/workflows/util-speckit.quizme.md
index 07b6098..11f70af 100644
--- a/.agents/workflows/util-speckit.quizme.md
+++ b/.agents/workflows/util-speckit.quizme.md
@@ -10,7 +10,7 @@ description: Challenge the specification with Socratic questioning to identify l
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.quizme/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/util-speckit.status.md b/.agents/workflows/util-speckit.status.md
index d819f4d..b2f5089 100644
--- a/.agents/workflows/util-speckit.status.md
+++ b/.agents/workflows/util-speckit.status.md
@@ -10,7 +10,7 @@ description: Display a dashboard showing feature status, completion percentage,
- The user may optionally specify a feature to focus on.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.status/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
diff --git a/.agents/workflows/util-speckit.taskstoissues.md b/.agents/workflows/util-speckit.taskstoissues.md
index 4a5ceec..0cdac6e 100644
--- a/.agents/workflows/util-speckit.taskstoissues.md
+++ b/.agents/workflows/util-speckit.taskstoissues.md
@@ -8,11 +8,11 @@ description: Convert existing tasks into actionable, dependency-ordered GitHub i
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.taskstoissues/SKILL.md`
+ - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- - If `tasks.md` is missing: Run `/speckit.tasks` first
\ No newline at end of file
+ - If `tasks.md` is missing: Run `/speckit.tasks` first
diff --git a/.gemini/GEMINI.md b/.gemini/GEMINI.md
index d00db24..6f8ccfa 100644
--- a/.gemini/GEMINI.md
+++ b/.gemini/GEMINI.md
@@ -12,12 +12,14 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
## 🏗️ Project Overview
-**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0
+**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
with complex multi-level approval workflows.
-- **Infrastructure:** QNAP Container Station (Docker Compose), Nginx Proxy Manager (Reverse Proxy),
- Gitea (Git + CI/CD), n8n (Workflow Automation), Prometheus + Loki + Grafana (Monitoring/Logging)
+- **Infrastructure:**
+ - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
+ - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
+ - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
## 💻 Tech Stack & Constraints
@@ -26,6 +28,7 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
+- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
## 🛡️ Security & Integrity Rules
@@ -36,32 +39,59 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
+7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
## 📋 Workflow & Spec Guidelines
- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
-- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`** before writing queries.
-- Adhere to ADRs: ADR-001 (Workflow Engine), ADR-002 (Doc Numbering), ADR-009 (DB Strategy),
- ADR-011 (App Router), ADR-013 (Form Handling), ADR-016 (Security).
+- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
+- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
+- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
+- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.
+
+### ADR Reference (All 17 + Patch)
+
+Adhere to all ADRs in `specs/06-Decision-Records/`:
+
+| ADR | Topic | Key Decision |
+| ------- | ------------------------- | -------------------------------------------------- |
+| ADR-001 | Workflow Engine | Unified state machine for document workflows |
+| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
+| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
+| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
+| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
+| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
+| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
+| ADR-011 | App Router | Next.js App Router with RSC patterns |
+| ADR-012 | UI Components | Shadcn/UI component library |
+| ADR-013 | Form Handling | React Hook Form + Zod validation |
+| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
+| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
+| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
+| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
+| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
## 🎯 Active Skills
- **`nestjs-best-practices`** — Apply when writing/reviewing any NestJS code (modules, services, controllers, guards, interceptors, DTOs)
- **`next-best-practices`** — Apply when writing/reviewing any Next.js code (App Router, RSC boundaries, async patterns, data fetching, error handling)
+- **`speckit.security-audit`** — Apply when auditing security (OWASP Top 10, CASL, ClamAV, LCBP3-specific checks)
## 🔄 Speckit Workflow Pipeline
Use `/slash-command` to trigger these workflows. Always prefer spec-driven development for new features.
-| Phase                | Command                                                    | When to Use                                          |
-| -------------------- | ---------------------------------------------------------- | ---------------------------------------------------- |
-| **Feature Design**   | `/speckit.prepare`                                         | New feature — run Specify→Clarify→Plan→Tasks→Analyze |
-| **Implement**        | `/07-speckit.implement`                                    | Write code per tasks.md with anti-regression         |
-| **QA**               | `/08-speckit.checker`                                      | Check TypeScript + ESLint + Security                 |
-| **Test**             | `/09-speckit.tester`                                       | Run Jest/Vitest + coverage report                    |
-| **Review**           | `/10-speckit.reviewer`                                     | Code review — Logic, Performance, Style              |
-| **Validate**         | `/11-speckit.validate`                                     | Confirm implementation matches spec.md               |
-| **Project-Specific** | `/create-backend-module` `/create-frontend-page` `/deploy` | Routine LCBP3-DMS tasks                              |
+| Phase                | Command                                                    | When to Use                                           |
+| -------------------- | ---------------------------------------------------------- | ----------------------------------------------------- |
+| **Full Pipeline**    | `/speckit.all`                                             | New feature — run Specify→...→Validate (10 steps)     |
+| **Feature Design**   | `/speckit.prepare`                                         | Preparation only — Specify→Clarify→Plan→Tasks→Analyze |
+| **Implement**        | `/07-speckit.implement`                                    | Write code per tasks.md with anti-regression          |
+| **QA**               | `/08-speckit.checker`                                      | Check TypeScript + ESLint + Security                  |
+| **Test**             | `/09-speckit.tester`                                       | Run Jest/Vitest + coverage report                     |
+| **Review**           | `/10-speckit.reviewer`                                     | Code review — Logic, Performance, Style               |
+| **Validate**         | `/11-speckit.validate`                                     | Confirm implementation matches spec.md                |
+| **Schema Change**    | `/schema-change`                                           | Edit schema SQL → data dictionary → notify user       |
+| **Project-Specific** | `/create-backend-module` `/create-frontend-page` `/deploy` | Routine LCBP3-DMS tasks                               |
## 🚫 Forbidden Actions
@@ -71,3 +101,5 @@ Use `/slash-command` to trigger these workflows. Always prefer spec-driven devel
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
+- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
+- DO NOT bypass StorageService for file operations — all file moves must go through the API.
diff --git a/.gitea/workflows/deploy.yaml b/.gitea/workflows/deploy.yaml
index 4663c7f..7158a2f 100644
--- a/.gitea/workflows/deploy.yaml
+++ b/.gitea/workflows/deploy.yaml
@@ -45,19 +45,19 @@ jobs:
# 4. Update Containers
echo "🔄 Updating Containers..."
# Sync compose file from repo → app directory
- cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3.yml /share/np-dms/app/docker-compose-lcbp3.yml
+ cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml /share/np-dms/app/docker-compose-app.yml
cd /share/np-dms/app
# ⚠️ Remove old containers that may have been created by Container Station
- docker rm -f backend frontend 2>/dev/null || true
+ docker rm -f lcbp3-backend lcbp3-frontend 2>/dev/null || true
# 4a. Start Backend first
echo "🟢 Starting Backend..."
- docker compose -f docker-compose-lcbp3.yml up -d backend
+ docker compose -f docker-compose-app.yml up -d backend
# 4b. Wait for Backend to be healthy (poll every 5s, up to 60s)
echo "⏳ Waiting for Backend health check..."
for i in $(seq 1 12); do
- if docker inspect --format='{{.State.Health.Status}}' backend 2>/dev/null | grep -q healthy; then
+ if docker inspect --format='{{.State.Health.Status}}' lcbp3-backend 2>/dev/null | grep -q healthy; then
echo "✅ Backend is healthy!"
break
fi
@@ -69,7 +69,7 @@ jobs:
# 4c. Start Frontend
echo "🟢 Starting Frontend..."
- docker compose -f docker-compose-lcbp3.yml up -d frontend
+ docker compose -f docker-compose-app.yml up -d frontend
# 5. Cleanup
echo "🧹 Cleaning up unused images..."
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..e0d75f9
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,77 @@
+# NAP-DMS Project Context & Rules
+
+> **For:** Codex CLI, opencode, Amp, Amazon Q Developer CLI, IBM Bob, and other AGENTS.md-compatible tools.
+
+## 🧠 Role & Persona
+
+Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
+You are a **Document Intelligence Engine** — not a general chatbot.
+You value **Data Integrity**, **Security**, and **Clean Architecture**.
+
+## 🏗️ Project Overview
+
+**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
+
+- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
+ with complex multi-level approval workflows.
+- **Infrastructure:**
+ - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
+ - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
+ - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
+
+## 💻 Tech Stack & Constraints
+
+- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
+ Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
+- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
+ TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
+- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
+- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
+- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
+
+## 🛡️ Security & Integrity Rules
+
+1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
+2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
+3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
+4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
+5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
+6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
+7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
+
+## 📋 Spec Guidelines
+
+- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
+- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
+- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
+
+### ADR Reference (All 17 + Patch)
+
+| ADR | Topic | Key Decision |
+| ------- | ------------------------- | -------------------------------------------------- |
+| ADR-001 | Workflow Engine | Unified state machine for document workflows |
+| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
+| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
+| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
+| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
+| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
+| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
+| ADR-011 | App Router | Next.js App Router with RSC patterns |
+| ADR-012 | UI Components | Shadcn/UI component library |
+| ADR-013 | Form Handling | React Hook Form + Zod validation |
+| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
+| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
+| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
+| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
+| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
+
+## 🚫 Forbidden Actions
+
+- DO NOT use SQL Triggers (Business logic must be in NestJS services).
+- DO NOT use `.env` files for production configuration (Use Docker environment variables).
+- DO NOT run database migrations — modify the schema SQL file directly.
+- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
+- DO NOT generate code that violates OWASP Top 10 security practices.
+- DO NOT use `any` TypeScript type anywhere.
+- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
+- DO NOT bypass StorageService for file operations — all file moves must go through the API.
diff --git a/CLAUDE.md b/CLAUDE.md
new file mode 100644
index 0000000..ce2ee15
--- /dev/null
+++ b/CLAUDE.md
@@ -0,0 +1,79 @@
+# NAP-DMS Project Context & Rules
+
+## 🧠 Role & Persona
+
+Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
+You are a **Document Intelligence Engine** — not a general chatbot.
+You value **Data Integrity**, **Security**, and **Clean Architecture**.
+
+## 🏗️ Project Overview
+
+**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
+
+- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
+ with complex multi-level approval workflows.
+- **Infrastructure:**
+ - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
+ - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
+ - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
+
+## 💻 Tech Stack & Constraints
+
+- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
+ Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
+- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
+ TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
+- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
+- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
+- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
+
+## 🛡️ Security & Integrity Rules
+
+1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
+2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
+3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
+4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
+5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
+6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
+7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
+
+## 📋 Workflow & Spec Guidelines
+
+- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
+- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
+- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
+- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
+- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.
+
+### ADR Reference (All 17 + Patch)
+
+Adhere to all ADRs in `specs/06-Decision-Records/`:
+
+| ADR | Topic | Key Decision |
+| ------- | ------------------------- | -------------------------------------------------- |
+| ADR-001 | Workflow Engine | Unified state machine for document workflows |
+| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
+| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
+| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
+| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
+| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
+| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
+| ADR-011 | App Router | Next.js App Router with RSC patterns |
+| ADR-012 | UI Components | Shadcn/UI component library |
+| ADR-013 | Form Handling | React Hook Form + Zod validation |
+| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
+| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
+| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
+| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
+| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
+
+## 🚫 Forbidden Actions
+
+- DO NOT use SQL Triggers (Business logic must be in NestJS services).
+- DO NOT use `.env` files for production configuration (Use Docker environment variables).
+- DO NOT run database migrations — modify the schema SQL file directly.
+- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
+- DO NOT generate code that violates OWASP Top 10 security practices.
+- DO NOT use `any` TypeScript type anywhere.
+- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
+- DO NOT bypass StorageService for file operations — all file moves must go through the API.
diff --git a/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml b/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml
index d045ddd..a4cf6cd 100644
--- a/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml
+++ b/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml
@@ -1,9 +1,10 @@
-# File: /share/np-dms/app/docker-compose.yml
+# File: /share/np-dms/app/docker-compose-app.yml
# DMS Container v1.8.0: Application Stack (Backend + Frontend)
# Application name: lcbp3-app
# ============================================================
# ⚠️ ใช้งานร่วมกับ services อื่นที่รันอยู่แล้วบน QNAP:
# - mariadb (lcbp3-db)
+# - redis (lcbp3-redis)
# - cache (services)
# - search (services)
# - npm (lcbp3-npm)
@@ -29,12 +30,12 @@ networks:
services:
# ----------------------------------------------------------------
# 1. Backend API (NestJS)
- # Service Name: backend (ตามที่ NPM อ้างอิง → backend:3000)
+ # Service Name: backend (ตามที่ NPM อ้างอิง → lcbp3-backend:3000)
# ----------------------------------------------------------------
backend:
<<: [*restart_policy, *default_logging]
image: lcbp3-backend:latest
- container_name: backend
+ container_name: lcbp3-backend
stdin_open: true
tty: true
deploy:
@@ -88,12 +89,12 @@ services:
# ----------------------------------------------------------------
# 2. Frontend Web App (Next.js)
- # Service Name: frontend (ตามที่ NPM อ้างอิง → frontend:3000)
+ # Service Name: frontend (ตามที่ NPM อ้างอิง → lcbp3-frontend:3000)
# ----------------------------------------------------------------
frontend:
<<: [*restart_policy, *default_logging]
image: lcbp3-frontend:latest
- container_name: frontend
+ container_name: lcbp3-frontend
stdin_open: true
tty: true
deploy:
diff --git a/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3.yml b/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3-x.yml
similarity index 100%
rename from specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3.yml
rename to specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3-x.yml