260304:1233 20260304:1200 update app to lcbp3
Some checks failed
Build and Deploy / deploy (push) Failing after 1m32s

This commit is contained in:
admin
2026-03-04 12:33:22 +07:00
parent 56b5d87abd
commit ad77a2ae94
43 changed files with 1708 additions and 434 deletions


@@ -30,10 +30,12 @@ Before generating code or planning a solution, you MUST conceptually load the co
4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
   - _Action:_
     - **Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** for exact table structures and constraints.
     - **Consult `specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql`** to understand initial data states.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`** to understand initial permissions states.
     - **Check `specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** for migration context (ADR-017).
     - **Check `specs/03-Data-and-Storage/03-05-n8n-migration-setup-guide.md`** for n8n workflow setup.
   - _Constraint:_ NEVER invent table names or columns. Use ONLY what is defined here.
5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
@@ -68,8 +70,9 @@ When proposing a change or writing code, you must explicitly reference the sourc
### 4. Schema Changes

- **DO NOT** create or run TypeORM migration files.
- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`.
- Update `specs/03-Data-and-Storage/03-01-data-dictionary.md` if adding/changing columns.
- Notify the user so they can apply the SQL change to the live database manually.
- **AI Isolation (ADR-018):** Ollama runs on ASUSTOR only. AI has NO direct DB access and NO write access to uploads. All writes go through the DMS API.

---


@@ -1,8 +1,8 @@
# 🚀 Spec-Kit: Antigravity Skills & Workflows

> **The Event Horizon of Software Quality.**
> _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
> _Version: 1.1.0_

---
@@ -11,6 +11,7 @@
Welcome to the **Antigravity Edition** of Spec-Kit. This system is architected to empower your AI pair programmer (Antigravity) to drive the entire Software Development Life Cycle (SDLC) using two powerful mechanisms: **Workflows** and **Skills**.

### 🔄 Dual-Mode Intelligence

In this edition, Spec-Kit commands have been split into two interactive layers:

1. **Workflows (`/command`)**: High-level orchestrations that guide the agent through a series of logical steps. **The easiest way to run a skill is by typing its corresponding workflow command.**
@@ -25,10 +26,27 @@ In this edition, Spec-Kit commands have been split into two interactive layers:
To enable these agent capabilities in your project:

1. **Add the folder**: Drop the `.agents/` folder into the root of your project workspace.
2. **That's it!** Antigravity automatically detects the `.agents/skills` and `.agents/workflows` directories. It will instantly gain the ability to perform Spec-Driven Development.

> **💡 Compatibility Note:** This toolkit is compatible with multiple AI coding agents. To use with Claude Code, rename the `.agents` folder to `.claude`. The skills and workflows will function identically.
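For example, switching the toolkit over to Claude Code is just a folder rename. A sketch, run in a scratch directory so nothing in a real repo is touched (in practice you would run only the `mv` at your repo root):

```shell
# Demo in a throwaway directory; the toolkit layout here is a minimal stand-in.
cd "$(mktemp -d)"
mkdir -p .agents/skills .agents/workflows
mv .agents .claude            # Claude Code discovers .claude/skills and .claude/workflows
ls -d .claude/skills .claude/workflows
```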
### Prerequisites (Optional)
Some skills and scripts reference a `.specify/` directory for templates and project memory. If you want the full Spec-Kit experience (template-driven spec/plan creation), create this structure at repo root:
```text
.specify/
├── templates/
│   ├── spec-template.md           # Template for /speckit.specify
│   ├── plan-template.md           # Template for /speckit.plan
│   ├── tasks-template.md          # Template for /speckit.tasks
│   └── agent-file-template.md     # Template for update-agent-context.sh
└── memory/
    └── constitution.md            # Project governance rules (/speckit.constitution)
```
> **Note:** If `.specify/` is absent, skills will still function — they'll create blank files instead of using templates. The constitution workflow (`/speckit.constitution`) will create this structure for you on first run.
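A minimal sketch of bootstrapping that structure by hand (file names taken from the tree above; the placeholder files start empty, and the `mktemp` directory stands in for your repo root):

```shell
# Create the optional .specify/ skeleton with empty placeholder files.
cd "$(mktemp -d)"             # demo location; use your repo root in practice
mkdir -p .specify/templates .specify/memory
touch .specify/templates/spec-template.md \
      .specify/templates/plan-template.md \
      .specify/templates/tasks-template.md \
      .specify/templates/agent-file-template.md \
      .specify/memory/constitution.md
find .specify -type f | sort
```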
---
@@ -37,62 +55,84 @@ To enable these agent capabilities in your project:
The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.
```text
.agents/
├── skills/                        # @ Mentions (Agent Intelligence)
│   ├── nestjs-best-practices/     # NestJS Architecture Patterns
│   ├── next-best-practices/       # Next.js App Router Patterns
│   ├── speckit.analyze/           # Consistency Checker
│   ├── speckit.checker/           # Static Analysis Aggregator
│   ├── speckit.checklist/         # Requirements Validator
│   ├── speckit.clarify/           # Ambiguity Resolver
│   ├── speckit.constitution/      # Governance Manager
│   ├── speckit.diff/              # Artifact Comparator
│   ├── speckit.implement/         # Code Builder (Anti-Regression)
│   ├── speckit.migrate/           # Legacy Code Migrator
│   ├── speckit.plan/              # Technical Planner
│   ├── speckit.quizme/            # Logic Challenger (Red Team)
│   ├── speckit.reviewer/          # Code Reviewer
│   ├── speckit.security-audit/    # Security Auditor (OWASP/CASL/ClamAV)
│   ├── speckit.specify/           # Feature Definer
│   ├── speckit.status/            # Progress Dashboard
│   ├── speckit.tasks/             # Task Breaker
│   ├── speckit.taskstoissues/     # Issue Tracker Syncer (GitHub + Gitea)
│   ├── speckit.tester/            # Test Runner & Coverage
│   └── speckit.validate/          # Implementation Validator
├── workflows/                     # / Slash Commands (Orchestration)
│   ├── 00-speckit.all.md          # Full Pipeline (10 steps: Specify → Validate)
│   ├── 01–11-speckit.*.md         # Individual phase workflows
│   ├── speckit.prepare.md         # Prep Pipeline (5 steps: Specify → Analyze)
│   ├── schema-change.md           # DB Schema Change (ADR-009)
│   ├── create-backend-module.md   # NestJS Module Scaffolding
│   ├── create-frontend-page.md    # Next.js Page Scaffolding
│   ├── deploy.md                  # Deployment via Gitea CI/CD
│   └── util-speckit.*.md          # Utilities (checklist, diff, migrate, etc.)
└── scripts/
    ├── bash/                      # Bash Core (Kinetic logic)
    │   ├── common.sh              # Shared utilities & path resolution
    │   ├── check-prerequisites.sh # Prerequisite validation
    │   ├── create-new-feature.sh  # Feature branch creation
    │   ├── setup-plan.sh          # Plan template setup
    │   ├── update-agent-context.sh  # Agent file updater (main)
    │   ├── plan-parser.sh         # Plan data extraction (module)
    │   ├── content-generator.sh   # Language-specific templates (module)
    │   └── agent-registry.sh      # 17-agent type registry (module)
    ├── powershell/                # PowerShell Equivalents (Windows-native)
    │   ├── common.ps1             # Shared utilities & prerequisites
    │   └── create-new-feature.ps1 # Feature branch creation
    ├── fix_links.py               # Spec link fixer
    ├── verify_links.py            # Spec link verifier
    └── start-mcp.js               # MCP server launcher
```
---
## 🗺️ Mapping: Commands to Capabilities
| Phase | Workflow Trigger | Antigravity Skill | Role |
| :--- | :--- | :--- | :--- |
| **Full Pipeline** | `/00-speckit.all` | N/A | Runs the full SDLC pipeline (10 steps: Specify → Validate). |
| **Governance** | `/01-speckit.constitution` | `@speckit.constitution` | Establishes project rules & principles. |
| **Definition** | `/02-speckit.specify` | `@speckit.specify` | Drafts structured `spec.md`. |
| **Ambiguity** | `/03-speckit.clarify` | `@speckit.clarify` | Resolves gaps post-spec. |
| **Architecture** | `/04-speckit.plan` | `@speckit.plan` | Generates technical `plan.md`. |
| **Decomposition** | `/05-speckit.tasks` | `@speckit.tasks` | Breaks plans into atomic tasks. |
| **Consistency** | `/06-speckit.analyze` | `@speckit.analyze` | Cross-checks Spec vs Plan vs Tasks. |
| **Execution** | `/07-speckit.implement` | `@speckit.implement` | Builds the implementation with safety protocols. |
| **Quality** | `/08-speckit.checker` | `@speckit.checker` | Runs static analysis (linting, security, types). |
| **Testing** | `/09-speckit.tester` | `@speckit.tester` | Runs the test suite & reports coverage. |
| **Review** | `/10-speckit.reviewer` | `@speckit.reviewer` | Performs code review (logic, performance, style). |
| **Validation** | `/11-speckit.validate` | `@speckit.validate` | Verifies the implementation matches Spec requirements. |
| **Preparation** | `/speckit.prepare` | N/A | Runs the Specify → Analyze prep sequence (5 steps). |
| **Schema** | `/schema-change` | N/A | DB schema changes per ADR-009 (no migrations). |
| **Security** | N/A | `@speckit.security-audit` | OWASP Top 10 + CASL + ClamAV audit. |
| **Checklist** | `/util-speckit.checklist` | `@speckit.checklist` | Generates feature checklists. |
| **Diff** | `/util-speckit.diff` | `@speckit.diff` | Compares artifact versions. |
| **Migration** | `/util-speckit.migrate` | `@speckit.migrate` | Ports existing code to Spec-Kit. |
| **Red Team** | `/util-speckit.quizme` | `@speckit.quizme` | Challenges logical flaws. |
| **Status** | `/util-speckit.status` | `@speckit.status` | Shows feature completion status. |
| **Tracking** | `/util-speckit.taskstoissues` | `@speckit.taskstoissues` | Syncs tasks to GitHub/Gitea issues. |
---
@@ -100,20 +140,18 @@ The toolkit is organized into modular components that provide both the logic (Sc
The following skills are designed to work together as a comprehensive defense against regression and poor quality. Run them in this order:
| Step | Skill | Core Question | Focus |
| :--- | :--- | :--- | :--- |
| **1. Checker** | `@speckit.checker` | _"Is the code compliant?"_ | **Syntax & Security.** Runs compilation, linting (ESLint/GolangCI), and vulnerability scans (npm audit/govulncheck). Catches low-level errors first. |
| **2. Tester** | `@speckit.tester` | _"Does it work?"_ | **Functionality.** Executes your test suite (Jest/Pytest/Go Test) to ensure logic performs as expected and tests pass. |
| **3. Reviewer** | `@speckit.reviewer` | _"Is the code written well?"_ | **Quality & Maintainability.** Analyzes code structure for complexity, performance bottlenecks, and best practices, acting as a senior peer reviewer. |
| **4. Validate** | `@speckit.validate` | _"Did we build the right thing?"_ | **Requirements.** Semantically compares the implementation against the defined `spec.md` and `plan.md` to ensure all feature requirements are met. |
> **🤖 Power User Tip:** You can amplify this pipeline by creating a custom **MCP Server** or subagent that delegates heavy reasoning to a dedicated LLM.
>
> - **Use Case:** Bind the `@speckit.validate` and `@speckit.reviewer` steps to a large-context model.
> - **Benefit:** Large-context models (1M+ tokens) excel at analyzing the full project context against the Spec, finding subtle logical flaws that smaller models miss.
> - **How:** Create a wrapper script `scripts/gemini-reviewer.sh` that pipes `tasks.md` and the codebase to an LLM, then expose this as a tool.
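A sketch of what such a wrapper could look like. Everything here is illustrative (the prompt layout, sample file contents, and the final CLI name are assumptions), and the actual LLM invocation is left as a commented placeholder:

```shell
# Hypothetical wrapper: bundle tasks.md plus sources into one review prompt.
cd "$(mktemp -d)"                                   # demo fixtures; use real files in practice
printf '%s\n' "- [ ] T001 Create User schema" > tasks.md
printf '%s\n' "export const x = 1;" > app.ts

{
  echo "Review the implementation below against these tasks:"
  echo "--- tasks.md ---"
  cat tasks.md
  echo "--- sources ---"
  cat app.ts
} > review-prompt.txt

# llm chat < review-prompt.txt   # placeholder: pipe to whichever LLM CLI you use
wc -l < review-prompt.txt
```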
---
@@ -121,45 +159,48 @@ The following skills are designed to work together as a comprehensive defense ag
These workflows function as the "Control Plane" of the project, managing everything from idea inception to status tracking.
| Step | Workflow | Core Question | Focus |
| :--- | :--- | :--- | :--- |
| **1. Preparation** | `/speckit.prepare` | _"Are we ready?"_ | **The Macro-Workflow.** Runs Skills 02–06 (Specify $\to$ Clarify $\to$ Plan $\to$ Tasks $\to$ Analyze) in one sequence to go from "Idea" to "Ready to Code". |
| **2. Migration** | `/util-speckit.migrate` | _"Can we import?"_ | **Onboarding.** Reverse-engineers existing code into `spec.md`, `plan.md`, and `tasks.md`. |
| **3. Red Team** | `/util-speckit.quizme` | _"What did we miss?"_ | **Hardening.** Socratic questioning to find logical gaps in your specification before you plan. |
| **4. Export** | `/util-speckit.taskstoissues` | _"Who does what?"_ | **Handoff.** Converts your `tasks.md` into GitHub or Gitea issues with labels and milestones. |
| **5. Status** | `/util-speckit.status` | _"Are we there yet?"_ | **Tracking.** Scans all artifacts to report feature completion percentage. |
| **6. Utilities** | `/util-speckit.diff` <br> `/util-speckit.checklist` | _"What changed?"_ | **Support.** View artifact diffs or generate quick acceptance checklists. |
### 🔄 The Design Sequence

**Stage 1: Inception**

- _Legacy Project?_ $\to$ Run **`/util-speckit.migrate`**.
- _New Feature?_ $\to$ Run **`/speckit.prepare`**.

**Stage 2: Hardening**

- Run **`/util-speckit.quizme`** to catch edge cases.
- Run **`/speckit.prepare`** again to regenerate the Plan based on the restricted specs.

**Stage 3: Handoff**

- Run **`/util-speckit.taskstoissues`** to sync with your board.
---
## 🚀 Getting Started in Antigravity

Once the `.agents` folder is present, you can talk to your agent as a partner. You can trigger a full process using a slash command or reference a specific capability using a skill mention.

**Example Conversation:**

> **Developer**: "**/02-speckit.specify** Add a real-time notification system using WebSockets."
>
> **Antigravity**: "Understood. Executing the **@speckit.specify** skill. Creating feature branch `005-notifications` and drafting the initial specification..."
> _(Antigravity creates branch, drafts spec.md)_
>
> **Antigravity**: "Specification ready. I've identified some ambiguity in the reconnection strategy. Should I use **@speckit.clarify** to resolve this?"
>
> **Developer**: "Yes, use **@speckit.clarify**."
> _(Antigravity executes the skill logic, interviewing the developer and updating the spec)_
>
> **Antigravity**: "The spec is now robust. I'm ready to trigger **/04-speckit.plan** and **/05-speckit.tasks** to prepare for implementation."
@@ -170,25 +211,33 @@ Once the `.agent` folder is present, you can talk to your agent as a partner. Yo
To get the most out of this system, follow these **Spec-Driven Development (SDD)** rules:

### 1. The Constitution is King 👑

**Never skip `/01-speckit.constitution`.**

- This file is the "Context Window Anchor" for the AI.
- It prevents hallucinations about the tech stack (e.g., "Don't use jQuery" or "Always use TypeScript strict mode").
- **Tip:** If Antigravity makes a style mistake, don't just fix the code—update the Constitution so it never happens again.

### 2. The Layered Defense 🛡️

Don't rush to code. The workflow exists to catch errors _cheaply_ before they become expensive bugs.

- **Ambiguity Layer**: `/03-speckit.clarify` catches misunderstandings.
- **Logic Layer**: `/util-speckit.quizme` catches edge cases.
- **Consistency Layer**: `/06-speckit.analyze` catches gaps between Spec and Plan.

### 3. The 15-Minute Rule ⏱️

When generating `tasks.md` (Skill 05), ensure tasks are **atomic**.

- **Bad Task**: "Implement User Auth" (too big; the AI will get lost).
- **Good Task**: "Create `User` Mongoose schema with email validation" (perfect).
- **Rule of Thumb**: If a task takes Antigravity more than 3 tool calls to finish, it's too big. Break it down.
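As a sketch, atomic entries in `tasks.md` might look like this (IDs, wording, and file paths are illustrative):

```text
- [ ] T012 Create `User` Mongoose schema with email validation (models/user.ts)
- [ ] T013 Add a unique index on `User.email` plus a failing-path unit test
```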
### 4. "Refine, Don't Rewind" ⏩

If you change your mind mid-project:

1. Don't just edit the code.
2. Edit the `spec.md` to reflect the new requirement.
3. Run `/util-speckit.diff` to see the drift.
@@ -198,9 +247,11 @@ If you change your mind mid-project:
## 🧩 Adaptation Notes

- **Skill-Based Autonomy**: Mentions like `@speckit.plan` trigger the agent's internalized understanding of how to perform that role.
- **Shared Script Core**: Logic resides in `.agents/scripts/bash` (modular) with PowerShell equivalents in `scripts/powershell/` for Windows-native execution.
- **Agent-Native**: Designed to be invoked via Antigravity tool calls and reasoning rather than just terminal strings.
- **LCBP3-DMS Specific**: Includes project-specific skills (`nestjs-best-practices`, `next-best-practices`, `speckit.security-audit`) and workflows (`/schema-change`, `/create-backend-module`, `/deploy`).

---

_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._


@@ -0,0 +1,95 @@
#!/usr/bin/env bash
# Agent registry — maps agent types to file paths and display names
# Extracted from update-agent-context.sh for modularity
#
# Usage:
# source agent-registry.sh
# init_agent_registry "$REPO_ROOT"
# get_agent_file "claude" # → /path/to/CLAUDE.md
# get_agent_name "claude" # → "Claude Code"
# Initialize agent file paths (call after REPO_ROOT is set)
init_agent_registry() {
local repo_root="$1"
# Agent type → file path mapping
declare -gA AGENT_FILES=(
[claude]="$repo_root/CLAUDE.md"
[gemini]="$repo_root/GEMINI.md"
[copilot]="$repo_root/.github/agents/copilot-instructions.md"
[cursor-agent]="$repo_root/.cursor/rules/specify-rules.mdc"
[qwen]="$repo_root/QWEN.md"
[opencode]="$repo_root/AGENTS.md"
[codex]="$repo_root/AGENTS.md"
[windsurf]="$repo_root/.windsurf/rules/specify-rules.md"
[kilocode]="$repo_root/.kilocode/rules/specify-rules.md"
[auggie]="$repo_root/.augment/rules/specify-rules.md"
[roo]="$repo_root/.roo/rules/specify-rules.md"
[codebuddy]="$repo_root/CODEBUDDY.md"
[qoder]="$repo_root/QODER.md"
[amp]="$repo_root/AGENTS.md"
[shai]="$repo_root/SHAI.md"
[q]="$repo_root/AGENTS.md"
[bob]="$repo_root/AGENTS.md"
)
# Agent type → display name mapping
declare -gA AGENT_NAMES=(
[claude]="Claude Code"
[gemini]="Gemini CLI"
[copilot]="GitHub Copilot"
[cursor-agent]="Cursor IDE"
[qwen]="Qwen Code"
[opencode]="opencode"
[codex]="Codex CLI"
[windsurf]="Windsurf"
[kilocode]="Kilo Code"
[auggie]="Auggie CLI"
[roo]="Roo Code"
[codebuddy]="CodeBuddy CLI"
[qoder]="Qoder CLI"
[amp]="Amp"
[shai]="SHAI"
[q]="Amazon Q Developer CLI"
[bob]="IBM Bob"
)
# Template file path
TEMPLATE_FILE="$repo_root/.specify/templates/agent-file-template.md"
}
# Get file path for an agent type
get_agent_file() {
local agent_type="$1"
echo "${AGENT_FILES[$agent_type]:-}"
}
# Get display name for an agent type
get_agent_name() {
local agent_type="$1"
echo "${AGENT_NAMES[$agent_type]:-}"
}
# Get all registered agent types
get_all_agent_types() {
echo "${!AGENT_FILES[@]}"
}
# Check if an agent type is valid
is_valid_agent() {
local agent_type="$1"
[[ -n "${AGENT_FILES[$agent_type]:-}" ]]
}
# Get supported agent types as a pipe-separated string (for error messages)
get_supported_agents_string() {
local result=""
for key in "${!AGENT_FILES[@]}"; do
if [[ -n "$result" ]]; then
result="$result|$key"
else
result="$key"
fi
done
echo "$result"
}
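The lookup pattern above boils down to a guarded associative-array read. A self-contained two-entry sketch of the same idea (a subset of the real registry; the relative paths are illustrative):

```shell
# Miniature version of the registry lookup used in agent-registry.sh.
declare -A AGENT_FILES=([claude]="CLAUDE.md" [gemini]="GEMINI.md")

is_valid_agent() { [[ -n "${AGENT_FILES[$1]:-}" ]]; }

is_valid_agent claude && echo "claude -> ${AGENT_FILES[claude]}"   # claude -> CLAUDE.md
is_valid_agent foo || echo "foo: unsupported agent type"           # foo: unsupported agent type
```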


@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Content generation functions for update-agent-context
# Extracted from update-agent-context.sh for modularity
# Get project directory structure based on project type
get_project_structure() {
local project_type="$1"
if [[ "$project_type" == *"web"* ]]; then
echo "backend/\\nfrontend/\\ntests/"
else
echo "src/\\ntests/"
fi
}
# Get build/test commands for a given language
get_commands_for_language() {
local lang="$1"
case "$lang" in
*"Python"*)
echo "cd src && pytest && ruff check ."
;;
*"Rust"*)
echo "cargo test && cargo clippy"
;;
*"JavaScript"*|*"TypeScript"*)
echo "npm test \\&\\& npm run lint"
;;
*)
echo "# Add commands for $lang"
;;
esac
}
# Get language-specific conventions string
get_language_conventions() {
local lang="$1"
echo "$lang: Follow standard conventions"
}
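For instance, the language dispatcher above can be exercised standalone (function reproduced verbatim so the snippet runs on its own; the version strings passed in are arbitrary):

```shell
# Copied from content-generator.sh for a runnable demo.
get_commands_for_language() {
  local lang="$1"
  case "$lang" in
    *"Python"*) echo "cd src && pytest && ruff check ." ;;
    *"Rust"*) echo "cargo test && cargo clippy" ;;
    *"JavaScript"*|*"TypeScript"*) echo "npm test \\&\\& npm run lint" ;;
    *) echo "# Add commands for $lang" ;;
  esac
}

get_commands_for_language "Python 3.12"   # -> cd src && pytest && ruff check .
get_commands_for_language "COBOL"         # -> # Add commands for COBOL
```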


@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# Plan parsing functions for update-agent-context
# Extracted from update-agent-context.sh for modularity
# Extract a field value from plan.md by pattern
# Usage: extract_plan_field "Language/Version" "/path/to/plan.md"
extract_plan_field() {
local field_pattern="$1"
local plan_file="$2"
grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
head -1 | \
sed "s|^\*\*${field_pattern}\*\*: ||" | \
sed 's/^[ \t]*//;s/[ \t]*$//' | \
grep -v "NEEDS CLARIFICATION" | \
grep -v "^N/A$" || echo ""
}
# Parse plan.md and set global variables: NEW_LANG, NEW_FRAMEWORK, NEW_DB, NEW_PROJECT_TYPE
parse_plan_data() {
local plan_file="$1"
if [[ ! -f "$plan_file" ]]; then
log_error "Plan file not found: $plan_file"
return 1
fi
if [[ ! -r "$plan_file" ]]; then
log_error "Plan file is not readable: $plan_file"
return 1
fi
log_info "Parsing plan data from $plan_file"
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
# Log what we found
if [[ -n "$NEW_LANG" ]]; then
log_info "Found language: $NEW_LANG"
else
log_warning "No language information found in plan"
fi
[[ -n "$NEW_FRAMEWORK" ]] && log_info "Found framework: $NEW_FRAMEWORK"
[[ -n "$NEW_DB" && "$NEW_DB" != "N/A" ]] && log_info "Found database: $NEW_DB"
[[ -n "$NEW_PROJECT_TYPE" ]] && log_info "Found project type: $NEW_PROJECT_TYPE"
}
# Format technology stack string from language and framework
format_technology_stack() {
local lang="$1"
local framework="$2"
local parts=()
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
if [[ ${#parts[@]} -eq 0 ]]; then
echo ""
elif [[ ${#parts[@]} -eq 1 ]]; then
echo "${parts[0]}"
else
local result="${parts[0]}"
for ((i=1; i<${#parts[@]}; i++)); do
result="$result + ${parts[i]}"
done
echo "$result"
fi
}


@@ -52,6 +52,12 @@ set -o pipefail
 SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 source "$SCRIPT_DIR/common.sh"
+# Load modular components (extracted for maintainability)
+# See each file for documentation of the functions it provides
+source "$SCRIPT_DIR/plan-parser.sh"        # extract_plan_field, parse_plan_data, format_technology_stack
+source "$SCRIPT_DIR/content-generator.sh"  # get_project_structure, get_commands_for_language, get_language_conventions
+source "$SCRIPT_DIR/agent-registry.sh"     # init_agent_registry, get_agent_file, get_agent_name, etc.
 # Get all paths and variables from common functions
 eval $(get_feature_paths)


@@ -1,26 +1,28 @@
 import os
 import re
+import sys
 from pathlib import Path
-# Configuration
-BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
+# Configuration - default base directory, can be overridden via CLI argument
+DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"
 DIRECTORIES = [
-    "00-overview",
-    "01-requirements",
-    "02-architecture",
-    "03-implementation",
-    "04-operations",
-    "05-decisions",
-    "06-tasks"
+    "00-Overview",
+    "01-Requirements",
+    "02-Architecture",
+    "03-Data-and-Storage",
+    "04-Infrastructure-OPS",
+    "05-Engineering-Guidelines",
+    "06-Decision-Records"
 ]
 LINK_PATTERN = re.compile(r'(\[([^\]]+)\]\(([^)]+)\))')
-def get_file_map():
+def get_file_map(base_dir: Path):
     """Builds a map of {basename}.md -> {prefixed_name}.md across all dirs."""
     file_map = {}
     for dir_name in DIRECTORIES:
-        directory = BASE_DIR / dir_name
+        directory = base_dir / dir_name
         if not directory.exists():
             continue
         for file_path in directory.glob("*.md"):
@@ -53,41 +55,14 @@ def get_file_map():
             if secondary_base:
                 file_map[secondary_base] = f"{dir_name}/{actual_name}"
-    # Hardcoded specific overrides for versioning and common typos
-    overrides = {
-        "fullftack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
-        "fullstack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
-        "system-architecture.md": "02-architecture/02-01-system-architecture.md",
-        "api-design.md": "02-architecture/02-02-api-design.md",
-        "data-model.md": "02-architecture/02-03-data-model.md",
-        "backend-guidelines.md": "03-implementation/03-02-backend-guidelines.md",
-        "frontend-guidelines.md": "03-implementation/03-03-frontend-guidelines.md",
-        "document-numbering.md": "03-implementation/03-04-document-numbering.md",
-        "testing-strategy.md": "03-implementation/03-05-testing-strategy.md",
-        "deployment-guide.md": "04-operations/04-01-deployment-guide.md",
-        "environment-setup.md": "04-operations/04-02-environment-setup.md",
-        "monitoring-alerting.md": "04-operations/04-03-monitoring-alerting.md",
-        "backup-recovery.md": "04-operations/04-04-backup-recovery.md",
-        "maintenance-procedures.md": "04-operations/04-05-maintenance-procedures.md",
-        "security-operations.md": "04-operations/04-06-security-operations.md",
-        "incident-response.md": "04-operations/04-07-incident-response.md",
-        "document-numbering-operations.md": "04-operations/04-08-document-numbering-operations.md",
-        # Missing task files - redirect to README or best match
-        "task-be-011-notification-audit.md": "06-tasks/README.md",
-        "task-be-001-database-migrations.md": "06-tasks/TASK-BE-015-schema-v160-migration.md",  # Best match
-    }
-    for k, v in overrides.items():
-        file_map[k] = v
     return file_map
-def fix_links():
-    file_map = get_file_map()
+def fix_links(base_dir: Path):
+    file_map = get_file_map(base_dir)
     changes_made = 0
     for dir_name in DIRECTORIES:
-        directory = BASE_DIR / dir_name
+        directory = base_dir / dir_name
         if not directory.exists():
             continue
@@ -107,8 +82,12 @@ def fix_links():
             if not target_path:
                 continue
-            # Special case: file:///d:/nap-dms.lcbp3/specs/
-            clean_target_path = target_path.replace("file:///d:/nap-dms.lcbp3/specs/", "").replace("file:///D:/nap-dms.lcbp3/specs/", "")
+            # Special case: file:/// absolute paths
+            clean_target_path = re.sub(
+                r'^file:///[a-zA-Z]:[/\\].*?specs[/\\]',
+                '',
+                target_path
+            )
             resolved_locally = (file_path.parent / target_path).resolve()
             if resolved_locally.exists() and resolved_locally.is_file():
@@ -119,7 +98,7 @@ def fix_links():
             if target_filename in file_map:
                 correct_relative_to_specs = file_map[target_filename]
                 # Calculate relative path from current file's parent to the correct file
-                correct_abs = (BASE_DIR / correct_relative_to_specs).resolve()
+                correct_abs = (base_dir / correct_relative_to_specs).resolve()
                 try:
                     new_relative_path = os.path.relpath(correct_abs, file_path.parent).replace(os.sep, "/")
@@ -143,4 +122,14 @@ def fix_links():
     print(f"\nTotal files updated: {changes_made}")
 if __name__ == "__main__":
-    fix_links()
+    if len(sys.argv) > 1:
+        base_dir = Path(sys.argv[1])
+    else:
+        base_dir = DEFAULT_BASE_DIR
+    if not base_dir.exists():
+        print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
+        sys.exit(1)
+    print(f"Scanning specs directory: {base_dir}")
+    fix_links(base_dir)


@@ -0,0 +1,157 @@
# PowerShell equivalents for key .agents bash scripts
# These provide Windows-native alternatives for the most commonly used functions
<#
.SYNOPSIS
Common utility functions for Spec-Kit PowerShell scripts.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/common.sh
Provides repository root detection, branch identification, and feature path resolution.
#>
function Get-RepoRoot {
try {
$root = git rev-parse --show-toplevel 2>$null
if ($LASTEXITCODE -eq 0) { return $root.Trim() }
} catch {}
# Fallback: navigate up from script location
return (Resolve-Path "$PSScriptRoot\..\..\..").Path
}
function Get-CurrentBranch {
# Check environment variable first
if ($env:SPECIFY_FEATURE) { return $env:SPECIFY_FEATURE }
try {
$branch = git rev-parse --abbrev-ref HEAD 2>$null
if ($LASTEXITCODE -eq 0) { return $branch.Trim() }
} catch {}
# Fallback: find latest feature directory
$repoRoot = Get-RepoRoot
$specsDir = Join-Path $repoRoot "specs"
if (Test-Path $specsDir) {
$latest = Get-ChildItem -Path $specsDir -Directory |
Where-Object { $_.Name -match '^\d{3}-' } |
Sort-Object Name -Descending |
Select-Object -First 1
if ($latest) { return $latest.Name }
}
return "main"
}
function Test-HasGit {
try {
git rev-parse --show-toplevel 2>$null | Out-Null
return $LASTEXITCODE -eq 0
} catch { return $false }
}
function Test-FeatureBranch {
param([string]$Branch, [bool]$HasGit)
if (-not $HasGit) {
Write-Warning "[specify] Git repository not detected; skipped branch validation"
return $true
}
if ($Branch -notmatch '^\d{3}-') {
Write-Error "Not on a feature branch. Current branch: $Branch"
Write-Error "Feature branches should be named like: 001-feature-name"
return $false
}
return $true
}
function Find-FeatureDir {
param([string]$RepoRoot, [string]$BranchName)
$specsDir = Join-Path $RepoRoot "specs"
if ($BranchName -match '^(\d{3})-') {
$prefix = $Matches[1]
        # Avoid assigning to $matches: it is PowerShell's automatic regex-match variable
        $dirs = Get-ChildItem -Path $specsDir -Directory -Filter "$prefix-*" -ErrorAction SilentlyContinue
        if ($dirs.Count -eq 1) { return $dirs[0].FullName }
        if ($dirs.Count -gt 1) {
            Write-Warning "Multiple spec dirs with prefix '$prefix': $($dirs.Name -join ', ')"
}
}
return Join-Path $specsDir $BranchName
}
function Get-FeaturePaths {
$repoRoot = Get-RepoRoot
$branch = Get-CurrentBranch
$hasGit = Test-HasGit
$featureDir = Find-FeatureDir -RepoRoot $repoRoot -BranchName $branch
return [PSCustomObject]@{
RepoRoot = $repoRoot
Branch = $branch
HasGit = $hasGit
FeatureDir = $featureDir
FeatureSpec = Join-Path $featureDir "spec.md"
ImplPlan = Join-Path $featureDir "plan.md"
Tasks = Join-Path $featureDir "tasks.md"
Research = Join-Path $featureDir "research.md"
DataModel = Join-Path $featureDir "data-model.md"
Quickstart = Join-Path $featureDir "quickstart.md"
ContractsDir = Join-Path $featureDir "contracts"
}
}
<#
.SYNOPSIS
Check prerequisites for Spec-Kit workflows.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/check-prerequisites.sh
.PARAMETER RequireTasks
Require tasks.md to exist (for implementation phase)
.PARAMETER IncludeTasks
Include tasks.md in available docs list
.PARAMETER PathsOnly
Only output paths, no validation
.EXAMPLE
. .\common.ps1
$result = Check-Prerequisites -RequireTasks
#>
function Check-Prerequisites {
param(
[switch]$RequireTasks,
[switch]$IncludeTasks,
[switch]$PathsOnly
)
$paths = Get-FeaturePaths
$valid = Test-FeatureBranch -Branch $paths.Branch -HasGit $paths.HasGit
if (-not $valid) { throw "Not on a feature branch" }
if ($PathsOnly) { return $paths }
# Validate required files
if (-not (Test-Path $paths.FeatureDir)) {
throw "Feature directory not found: $($paths.FeatureDir). Run /speckit.specify first."
}
if (-not (Test-Path $paths.ImplPlan)) {
throw "plan.md not found. Run /speckit.plan first."
}
if ($RequireTasks -and -not (Test-Path $paths.Tasks)) {
throw "tasks.md not found. Run /speckit.tasks first."
}
# Build available docs list
$docs = @()
if (Test-Path $paths.Research) { $docs += "research.md" }
if (Test-Path $paths.DataModel) { $docs += "data-model.md" }
if ((Test-Path $paths.ContractsDir) -and (Get-ChildItem $paths.ContractsDir -ErrorAction SilentlyContinue)) {
$docs += "contracts/"
}
if (Test-Path $paths.Quickstart) { $docs += "quickstart.md" }
if ($IncludeTasks -and (Test-Path $paths.Tasks)) { $docs += "tasks.md" }
return [PSCustomObject]@{
FeatureDir = $paths.FeatureDir
AvailableDocs = $docs
Paths = $paths
}
}
# Export functions when dot-sourced
Export-ModuleMember -Function * -ErrorAction SilentlyContinue 2>$null


@@ -0,0 +1,138 @@
<#
.SYNOPSIS
Create a new feature branch and spec directory.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/create-new-feature.sh
Creates a numbered feature branch and initializes the spec directory.
.PARAMETER Description
Natural language description of the feature.
.PARAMETER ShortName
Optional custom short name for the branch (2-4 words).
.PARAMETER Number
Optional manual branch number (overrides auto-detection).
.EXAMPLE
.\create-new-feature.ps1 -Description "Add user authentication" -ShortName "user-auth"
#>
param(
[Parameter(Mandatory = $true, Position = 0)]
[string]$Description,
[string]$ShortName,
[int]$Number = 0
)
$ErrorActionPreference = "Stop"
# Load common functions
. "$PSScriptRoot\common.ps1"
$repoRoot = Get-RepoRoot
$hasGit = Test-HasGit
$specsDir = Join-Path $repoRoot "specs"
if (-not (Test-Path $specsDir)) { New-Item -ItemType Directory -Path $specsDir | Out-Null }
# Stop words for smart branch name generation
$stopWords = @('i','a','an','the','to','for','of','in','on','at','by','with','from',
'is','are','was','were','be','been','being','have','has','had',
'do','does','did','will','would','should','could','can','may','might',
'must','shall','this','that','these','those','my','your','our','their',
'want','need','add','get','set')
function ConvertTo-BranchName {
param([string]$Text)
$Text.ToLower() -replace '[^a-z0-9]', '-' -replace '-+', '-' -replace '^-|-$', ''
}
function Get-SmartBranchName {
param([string]$Desc)
$words = ($Desc.ToLower() -replace '[^a-z0-9]', ' ').Split(' ', [StringSplitOptions]::RemoveEmptyEntries)
$meaningful = $words | Where-Object { $_ -notin $stopWords -and $_.Length -ge 3 } | Select-Object -First 3
if ($meaningful.Count -gt 0) { return ($meaningful -join '-') }
return ConvertTo-BranchName $Desc
}
function Get-HighestNumber {
param([string]$Dir)
$highest = 0
if (Test-Path $Dir) {
Get-ChildItem -Path $Dir -Directory | ForEach-Object {
if ($_.Name -match '^(\d+)-') {
$num = [int]$Matches[1]
if ($num -gt $highest) { $highest = $num }
}
}
}
return $highest
}
# Generate branch suffix
if ($ShortName) {
$branchSuffix = ConvertTo-BranchName $ShortName
} else {
$branchSuffix = Get-SmartBranchName $Description
}
# Determine branch number
if ($Number -gt 0) {
$branchNumber = $Number
} else {
$highestSpec = Get-HighestNumber $specsDir
$highestBranch = 0
if ($hasGit) {
try {
git fetch --all --prune 2>$null | Out-Null
$branches = git branch -a 2>$null
foreach ($b in $branches) {
$clean = $b.Trim('* ') -replace '^remotes/[^/]+/', ''
if ($clean -match '^(\d{3})-') {
$num = [int]$Matches[1]
if ($num -gt $highestBranch) { $highestBranch = $num }
}
}
} catch {}
}
$branchNumber = [Math]::Max($highestSpec, $highestBranch) + 1
}
$featureNum = "{0:D3}" -f $branchNumber
$branchName = "$featureNum-$branchSuffix"
# Truncate if exceeding GitHub's 244-byte limit
if ($branchName.Length -gt 244) {
$maxSuffix = 244 - 4 # 3 digits + 1 hyphen
$branchSuffix = $branchSuffix.Substring(0, $maxSuffix).TrimEnd('-')
Write-Warning "Branch name truncated to 244 bytes"
$branchName = "$featureNum-$branchSuffix"
}
# Create git branch
if ($hasGit) {
git checkout -b $branchName
} else {
Write-Warning "Git not detected; skipped branch creation for $branchName"
}
# Create feature directory and spec file
$featureDir = Join-Path $specsDir $branchName
New-Item -ItemType Directory -Path $featureDir -Force | Out-Null
$templateFile = Join-Path $repoRoot ".specify" "templates" "spec-template.md"
$specFile = Join-Path $featureDir "spec.md"
if (Test-Path $templateFile) {
Copy-Item $templateFile $specFile
} else {
New-Item -ItemType File -Path $specFile -Force | Out-Null
}
$env:SPECIFY_FEATURE = $branchName
# Output
[PSCustomObject]@{
BranchName = $branchName
SpecFile = $specFile
FeatureNum = $featureNum
}
Write-Host "BRANCH_NAME: $branchName"
Write-Host "SPEC_FILE: $specFile"
Write-Host "FEATURE_NUM: $featureNum"


@@ -1,30 +1,33 @@
 import os
 import re
+import sys
 from pathlib import Path
-# Configuration
-BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
+# Configuration - default base directory, can be overridden via CLI argument
+DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"
 DIRECTORIES = [
-    "00-overview",
-    "01-requirements",
-    "02-architecture",
-    "03-implementation",
-    "04-operations",
-    "05-decisions"
+    "00-Overview",
+    "01-Requirements",
+    "02-Architecture",
+    "03-Data-and-Storage",
+    "04-Infrastructure-OPS",
+    "05-Engineering-Guidelines",
+    "06-Decision-Records"
 ]
 # Regex for Markdown links: [label](path)
 # Handles relative paths, absolute file paths, and anchors
 LINK_PATTERN = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')
-def verify_links():
+def verify_links(base_dir: Path):
     results = {
         "total_links": 0,
         "broken_links": []
     }
     for dir_name in DIRECTORIES:
-        directory = BASE_DIR / dir_name
+        directory = base_dir / dir_name
         if not directory.exists():
             print(f"Directory not found: {directory}")
             continue
@@ -53,7 +56,7 @@ def verify_links():
             # 2. Handle relative paths
             # Remove anchor if present
             clean_target_str = target.split("#")[0]
-            if not clean_target_str:  # It was just an anchor to another file but path is empty? Wait.
+            if not clean_target_str:
                 continue
             # Resolve path relative to current file
@@ -71,8 +74,17 @@ def verify_links():
     return results
 if __name__ == "__main__":
-    print(f"Starting link verification in {BASE_DIR}...")
-    audit_results = verify_links()
+    if len(sys.argv) > 1:
+        base_dir = Path(sys.argv[1])
+    else:
+        base_dir = DEFAULT_BASE_DIR
+    if not base_dir.exists():
+        print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
+        sys.exit(1)
+    print(f"Starting link verification in {base_dir}...")
+    audit_results = verify_links(base_dir)
     print(f"\nAudit Summary:")
     print(f"Total Internal Links Scanned: {audit_results['total_links']}")


@@ -1,6 +1,7 @@
 ---
 name: speckit.checklist
 description: Generate a custom checklist for the current feature based on user requirements.
+version: 1.0.0
 ---
 ## Checklist Purpose: "Unit Tests for English"
@@ -212,7 +213,7 @@ You are the **Antigravity Quality Gatekeeper**. Your role is to validate the qua
    b. **Structure Reference**: Generate the checklist following the canonical template in `templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
-7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
+6. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
    - Focus areas selected
    - Depth level
    - Actor/timing


@@ -1,6 +1,7 @@
 ---
 name: speckit.constitution
 description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
+version: 1.0.0
 handoffs:
   - label: Build Specification
     agent: speckit.specify
@@ -29,7 +30,7 @@ Follow this execution flow:
 1. Load the existing constitution template at `memory/constitution.md`.
    - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
    **IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
 2. Collect/derive values for placeholders:
    - If user input (conversation) supplies a value, use it.


@@ -1,6 +1,7 @@
 ---
 name: speckit.quizme
 description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
+version: 1.0.0
 handoffs:
   - label: Clarify Spec Requirements
     agent: speckit.clarify
@@ -38,8 +39,9 @@ Execution steps:
    - Challenge security (e.g., "You rely on client-side validation here, but what if I curl the API?").
 4. **The Quiz Loop**:
-   - Present 3-5 challenging scenarios *one by one*.
+   - Present 3-5 challenging scenarios _one by one_.
    - Format:
      > **Scenario**: [Describe a plausible edge case or failure]
      > **Current Spec**: [Quote where the spec implies behavior or is silent]
      > **The Quiz**: What should the system do here?
@@ -62,4 +64,4 @@ Execution steps:
 - **Be a Skeptic**: Don't assume the happy path works.
 - **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
-- **Don't be annoying**: Focus on *critical* flaws, not nitpicks.
+- **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.


@@ -0,0 +1,199 @@
---
name: speckit.security-audit
description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
version: 1.0.0
depends-on:
- speckit.checker
---
## Role
You are the **Antigravity Security Sentinel**. Your mission is to identify security vulnerabilities, authorization gaps, and compliance issues specific to the LCBP3-DMS project before they reach production.
## Task
Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.
## Context Loading
Before auditing, load the security context:
1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions
4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)
## Execution Steps
### Phase 1: OWASP Top 10 Scan
Scan the `backend/src/` directory for each OWASP category:
| # | OWASP Category | What to Check | Files to Scan |
| --- | ------------------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------- |
| A01 | Broken Access Control | Missing `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on controllers, unprotected routes | `**/*.controller.ts` |
| A02 | Cryptographic Failures | Hardcoded secrets, weak hashing, missing HTTPS enforcement | `**/*.ts`, `docker-compose*.yml` |
| A03 | Injection | Raw SQL queries, unsanitized user input in TypeORM queries, template literals in queries | `**/*.service.ts`, `**/*.repository.ts` |
| A04 | Insecure Design | Missing rate limiting on auth endpoints, no idempotency checks on mutations | `**/*.controller.ts`, `**/*.guard.ts` |
| A05 | Security Misconfiguration | Missing Helmet.js, CORS misconfiguration, debug mode in production | `main.ts`, `app.module.ts`, `docker-compose*.yml` |
| A06 | Vulnerable Components | Outdated dependencies with known CVEs | `package.json`, `pnpm-lock.yaml` |
| A07 | Auth Failures | Missing brute-force protection, weak password policy, JWT misconfiguration | `auth/`, `**/*.strategy.ts` |
| A08 | Data Integrity | Missing input validation, unvalidated file types, missing CSRF protection | `**/*.dto.ts`, `**/*.interceptor.ts` |
| A09 | Logging Failures | Missing audit logs for security events, sensitive data in logs | `**/*.service.ts`, `**/*.interceptor.ts` |
| A10 | SSRF | Unrestricted outbound requests, user-controlled URLs | `**/*.service.ts` |
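The table rows above can be spot-checked mechanically. Below is a minimal sketch for the A02 row (hardcoded secrets); the regex, the `process.env` exclusion, and the file layout are assumptions to tune per project, and a throwaway fixture stands in for `backend/src` so the snippet runs anywhere:

```shell
# Build a throwaway fixture so the check is reproducible outside the repo.
src="$(mktemp -d)"
cat > "$src/config.ts" <<'EOF'
const apiKey = "sk-123456";          // hardcoded literal -- should be flagged
const dbPass = process.env.DB_PASS;  // read from environment -- fine
EOF

# Flag string literals assigned to secret-like identifiers, ignoring env reads.
grep -rnE --include='*.ts' '(password|secret|api[_-]?[Kk]ey)[[:space:]]*=[[:space:]]*"' "$src" \
  | grep -v 'process\.env' || echo "no obvious hardcoded secrets"

rm -rf "$src"
```

In a real audit, point the scan at `backend/src` and extend the identifier list (tokens, private keys, connection strings) before trusting a clean result.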
### Phase 2: CASL Authorization Audit
1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`
2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:
```bash
# Find controllers without CASL guard
   grep -rL --include='*.controller.ts' "CaslAbilityGuard" backend/src/modules/
```
3. **Verify 4-Level RBAC enforcement**:
- Level 1: System Admin (full access)
- Level 2: Project Admin (project-scoped)
- Level 3: Department Lead (department-scoped)
- Level 4: User (own-records only)
4. **Check ability definitions** — ensure every endpoint has:
- `@CheckPolicies()` or `@Can()` decorator
- Correct action (`read`, `create`, `update`, `delete`, `manage`)
- Correct subject (entity class, not string)
5. **Cross-reference with routes** — verify:
- No public endpoints that should be protected
- No endpoints with broader permissions than required (principle of least privilege)
- Query scoping: users can only query their own records (unless admin)
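The guard/policy cross-check in steps 2 and 4 can be sketched as a small shell loop. The decorator names follow the conventions listed above, and a fixture directory stands in for `backend/src/modules`, so treat this as an illustrative harness rather than the audit's canonical tooling:

```shell
# Fixture: one guarded controller that declares no policy decorators.
mods="$(mktemp -d)"
mkdir -p "$mods/users"
cat > "$mods/users/users.controller.ts" <<'EOF'
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
export class UsersController {}
EOF

# Report controllers that reference the guard but have no @CheckPolicies/@Can.
for f in $(grep -rl --include='*.controller.ts' "CaslAbilityGuard" "$mods"); do
  grep -q '@CheckPolicies\|@Can(' "$f" || echo "guarded but no policy decorators: $f"
done

rm -rf "$mods"
```

Any file the loop prints is a candidate finding: the guard runs, but no ability is actually checked on that route.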
### Phase 3: File Upload Security (ClamAV)
Check LCBP3-DMS-specific file handling per ADR-016:
1. **Two-Phase Storage verification**:
- Upload goes to temp directory first → scanned by ClamAV → moved to permanent
- Check for direct writes to permanent storage (violation)
2. **ClamAV integration**:
- Verify ClamAV service is configured in `docker-compose*.yml`
- Check that file upload endpoints call ClamAV scan before commit
- Verify rejection flow for infected files
3. **File type validation**:
- Check allowed MIME types against whitelist
- Verify file extension validation exists
- Check for double-extension attacks (e.g., `file.pdf.exe`)
4. **File size limits**:
- Verify upload size limits are enforced
- Check for path traversal in filenames (`../`, `..\\`)
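The extension and traversal rules in items 3 and 4 can be expressed as a compact predicate. This is an illustrative sketch, not the project's actual validator; a real upload pipeline must also verify the MIME type server-side rather than trusting the name:

```shell
# Returns success (0) when a filename looks dangerous.
is_suspicious_filename() {
  case "$1" in
    *../*|*..\\*|/*) return 0 ;;               # path traversal or absolute path
    *.pdf.exe|*.doc.exe|*.js.exe) return 0 ;;  # double-extension executables
    *) return 1 ;;
  esac
}

for name in "report.pdf" "../etc/passwd" "invoice.pdf.exe"; do
  if is_suspicious_filename "$name"; then
    echo "REJECT: $name"
  else
    echo "accept: $name"
  fi
done
```

Running the loop accepts only `report.pdf` and rejects the traversal and double-extension names.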
### Phase 4: LCBP3-DMS-Specific Checks
1. **Idempotency** — verify all POST/PUT/PATCH endpoints check `Idempotency-Key` header:
```bash
# Find mutation endpoints without idempotency
   grep -rn --include='*.controller.ts' "@Post\|@Put\|@Patch" backend/src/modules/
# Cross-reference with idempotency guard usage
grep -rn "IdempotencyGuard\|Idempotency-Key" backend/src/
```
2. **Optimistic Locking** — verify document entities use `@VersionColumn()`:
```bash
grep -rn "VersionColumn" backend/src/modules/*/entities/*.entity.ts
```
3. **Redis Redlock** — verify document numbering uses distributed locks:
```bash
grep -rn "Redlock\|redlock\|acquireLock" backend/src/
```
4. **Password Security** — verify bcrypt with 12+ salt rounds:
```bash
grep -rn "bcrypt\|saltRounds\|genSalt" backend/src/
```
5. **Rate Limiting** — verify throttle guard on auth endpoints:
```bash
grep -rn "ThrottlerGuard\|@Throttle" backend/src/modules/auth/
```
6. **Environment Variables** — ensure no `.env` files for production:
- Check for `.env` files committed to git
- Verify Docker compose uses `environment:` section, not `env_file:`
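Item 6 can be automated with `git ls-files`. A hedged sketch, assuming it runs from the repository root of a git work tree (it degrades cleanly otherwise):

```shell
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  # Any tracked .env, .env.local, config/.env.prod, etc. is a finding.
  tracked="$(git ls-files | grep -E '(^|/)\.env(\.|$)' || true)"
  if [ -n "$tracked" ]; then
    echo "WARNING: environment files are committed:"
    echo "$tracked"
  else
    echo "OK: no .env files tracked"
  fi
else
  echo "not inside a git work tree; skipping"
fi
```

Pair this with a `.gitignore` entry for `.env*` so the check stays green going forward.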
## Severity Classification
| Severity | Description | Response |
| -------------- | ----------------------------------------------------- | ----------------------- |
| 🔴 **Critical** | Exploitable vulnerability, data exposure, auth bypass | Immediate fix required |
| 🟠 **High** | Missing security control, potential escalation path | Fix before next release |
| 🟡 **Medium** | Best practice violation, defense-in-depth gap | Plan fix in sprint |
| 🟢 **Low** | Informational, minor hardening opportunity | Track in backlog |
## Report Format
Generate a structured report:
```markdown
# 🔒 Security Audit Report
**Date**: <date>
**Scope**: <backend/frontend/both>
**Auditor**: Antigravity Security Sentinel
## Summary
| Severity | Count |
| ---------- | ----- |
| 🔴 Critical | X |
| 🟠 High | X |
| 🟡 Medium | X |
| 🟢 Low | X |
## Findings
### [SEV-001] <Title> — 🔴 Critical
**Category**: OWASP A01 / CASL / ClamAV / LCBP3-Specific
**File**: `<path>:<line>`
**Description**: <what is wrong>
**Impact**: <what could happen>
**Recommendation**: <how to fix>
**Code Example**:
\`\`\`typescript
// Before (vulnerable)
...
// After (fixed)
...
\`\`\`
## CASL Coverage Matrix
| Module | Controller | Guard? | Policies? | Level |
| ------ | --------------- | ------ | --------- | ------------ |
| auth | AuthController | ✅ | ✅ | N/A (public) |
| users | UsersController | ✅ | ✅ | L1-L4 |
| ... | ... | ... | ... | ... |
## Recommendations Priority
1. <Critical fix 1>
2. <Critical fix 2>
...
```
## Operating Principles
- **Read-Only**: This skill only reads and reports. Never modify code.
- **Evidence-Based**: Every finding must include the exact file path and line number.
- **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
- **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
- **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.


@@ -1,6 +1,7 @@
--- ---
name: speckit.specify name: speckit.specify
description: Create or update the feature specification from a natural language feature description. description: Create or update the feature specification from a natural language feature description.
version: 1.0.0
handoffs: handoffs:
- label: Build Technical Plan - label: Build Technical Plan
agent: speckit.plan agent: speckit.plan
@@ -46,24 +47,25 @@ Given that feature description, do this:
2. **Check for existing branches before creating new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

      ```bash
      git fetch --all --prune
      ```

   b. Find the highest feature number across all sources for the short-name:
      - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
      - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
      - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
      - Extract all numbers from all three sources
      - Find the highest number N
      - Use N+1 for the new branch number

   d. Run the script `../scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
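The numbering logic in steps b–c can be sketched as a small shell helper (illustrative only, not part of the spec-kit scripts; the function name and argument convention are assumptions):

```shell
# Given a short-name and candidate names gathered from remote branches,
# local branches, and specs/ directories, print the next available
# feature number (highest matching N, plus one).
next_feature_number() {
  short_name="$1"; shift
  max=0
  for ref in "$@"; do
    case "$ref" in
      [0-9]*-"$short_name")
        n="${ref%%-*}"                         # digits before the first dash
        n=$(printf '%s' "$n" | sed 's/^0*//')  # strip leading zeros (e.g. 003)
        n="${n:-0}"
        [ "$n" -gt "$max" ] && max="$n"
        ;;
    esac
  done
  echo $((max + 1))
}
```

Names that do not match `<number>-<short-name>` exactly (e.g. a different feature) are ignored, mirroring the `$`-anchored grep patterns above.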
@@ -77,30 +79,29 @@ Given that feature description, do this:
3. Load `templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
@@ -108,91 +109,90 @@ Given that feature description, do this:
   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

      ```markdown
      # Specification Quality Checklist: [FEATURE NAME]

      **Purpose**: Validate specification completeness and quality before proceeding to planning
      **Created**: [DATE]
      **Feature**: [Link to spec.md]

      ## Content Quality

      - [ ] No implementation details (languages, frameworks, APIs)
      - [ ] Focused on user value and business needs
      - [ ] Written for non-technical stakeholders
      - [ ] All mandatory sections completed

      ## Requirement Completeness

      - [ ] No [NEEDS CLARIFICATION] markers remain
      - [ ] Requirements are testable and unambiguous
      - [ ] Success criteria are measurable
      - [ ] Success criteria are technology-agnostic (no implementation details)
      - [ ] All acceptance scenarios are defined
      - [ ] Edge cases are identified
      - [ ] Scope is clearly bounded
      - [ ] Dependencies and assumptions identified

      ## Feature Readiness

      - [ ] All functional requirements have clear acceptance criteria
      - [ ] User scenarios cover primary flows
      - [ ] Feature meets measurable outcomes defined in Success Criteria
      - [ ] No implementation details leak into specification

      ## Notes

      - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
      ```

   b. **Run Validation Check**: Review the spec against each checklist item:
      - For each item, determine if it passes or fails
      - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:
      - **If all items pass**: Mark checklist complete and proceed to step 6
      - **If items fail (excluding [NEEDS CLARIFICATION])**:
        1. List the failing items and specific issues
        2. Update the spec to address each issue
        3. Re-run validation until all items pass (max 3 iterations)
        4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
      - **If [NEEDS CLARIFICATION] markers remain**:
        1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
        2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
        3. For each clarification needed (max 3), present options to user in this format:

           ```markdown
           ## Question [N]: [Topic]

           **Context**: [Quote relevant spec section]

           **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

           **Suggested Answers**:

           | Option | Answer                    | Implications                          |
           | ------ | ------------------------- | ------------------------------------- |
           | A      | [First suggested answer]  | [What this means for the feature]     |
           | B      | [Second suggested answer] | [What this means for the feature]     |
           | C      | [Third suggested answer]  | [What this means for the feature]     |
           | Custom | Provide your own answer   | [Explain how to provide custom input] |

           **Your choice**: _[Wait for user response]_
           ```

        4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
           - Use consistent spacing with pipes aligned
           - Each cell should have spaces around content: `| Content |` not `|Content|`
           - Header separator must have at least 3 dashes: `|--------|`
           - Test that the table renders correctly in markdown preview
        5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
        6. Present all questions together before waiting for responses
        7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
        8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
        9. Re-run validation after all clarifications are resolved

   d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status


@@ -1,6 +1,9 @@
---
name: speckit.taskstoissues
description: Convert existing tasks into actionable, dependency-ordered issues for the feature based on available design artifacts.
version: 1.1.0
depends-on:
  - speckit.tasks
tools: ['github/github-mcp-server/issue_write']
---
@@ -14,22 +17,190 @@ You **MUST** consider the user input before proceeding (if not empty).
## Role

You are the **Antigravity Tracker Integrator**. Your role is to synchronize technical tasks with external project management systems (GitHub Issues or Gitea Issues). You ensure that every piece of work has a clear, tracked identity for collaborative execution.

## Task

### Outline

Convert all tasks from `tasks.md` into well-structured issues on the appropriate platform (GitHub or Gitea), preserving dependency order, phase grouping, and labels.

### Execution Steps

1. **Load Task Data**:
   Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
   For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Extract tasks path** from the executed script output.

3. **Detect Platform** — Get the Git remote and determine the platform:

   ```bash
   git config --get remote.origin.url
   ```
| Remote URL Pattern | Platform | API |
| ---------------------------------------- | ----------- | --------------------------- |
| `github.com` | GitHub | GitHub MCP or REST API |
| `gitea.*`, custom domain with `/api/v1/` | Gitea | Gitea REST API |
| Other | Unsupported | **STOP** with error message |
**Platform Detection Rules**:
- If URL contains `github.com` → GitHub
- If URL contains a known Gitea domain (check `$ARGUMENTS` for hints, or try `<host>/api/v1/version`) → Gitea
- If `$ARGUMENTS` explicitly specifies platform (e.g., `--platform gitea`) → use that
- If uncertain → **ASK** the user which platform to use
> **UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
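The detection rules above can be sketched as a shell helper (illustrative only; the `gitea.example.com` host below is hypothetical, and real detection may additionally probe `<host>/api/v1/version`):

```shell
# Map a git remote URL to a platform name per the detection table.
detect_platform() {
  url="$1"
  case "$url" in
    *github.com*) echo "github" ;;
    *gitea*)      echo "gitea" ;;
    *)            echo "unknown" ;;   # caller should STOP or ask the user
  esac
}
```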
4. **Parse `tasks.md`** — Extract structured data for each task:
| Field | Source | Example |
| --------------- | ---------------------------- | -------------------------- |
| Task ID | `T001`, `T002`, etc. | `T001` |
| Phase | Phase heading | `Phase 1: Setup` |
| Description | Task text after ID | `Create project structure` |
| File paths | Paths in description | `src/models/user.py` |
| Parallel marker | `[P]` flag | `true`/`false` |
| User Story | `[US1]`, `[US2]`, etc. | `US1` |
| Dependencies | Sequential ordering in phase | `T001 → T002` |
5. **Load Feature Context** (for issue body enrichment):
- Read `spec.md` for requirement references
- Read `plan.md` for architecture context (if exists)
- Map tasks to requirements where possible
6. **Generate Issue Data** — For each task, create an issue with:
### Issue Title Format
```
[<TaskID>] <Description>
```
Example: `[T001] Create project structure per implementation plan`
### Issue Body Template
```markdown
## Task Details
**Task ID**: <TaskID>
**Phase**: <Phase Name>
**Parallel**: <Yes/No>
**User Story**: <Story reference, if any>
## Description
<Full task description from tasks.md>
## File Paths
- `<file path 1>`
- `<file path 2>`
## Acceptance Criteria
- [ ] Implementation complete per task description
- [ ] Relevant tests pass (if applicable)
- [ ] No regressions introduced
## Context
**Feature**: <Feature name from spec.md>
**Spec Reference**: <Requirement ID if mapped>
---
_Auto-generated by speckit.taskstoissues from `tasks.md`_
```
7. **Apply Labels** — Assign labels based on task metadata:
| Condition | Label |
| ---------------------------------- | ------------------ |
| Phase 1 (Setup) | `phase:setup` |
| Phase 2 (Foundation) | `phase:foundation` |
| Phase 3+ (User Stories) | `phase:story` |
| Final Phase (Polish) | `phase:polish` |
| Has `[P]` marker | `parallel` |
| Has `[US1]` marker | `story:US1` |
| Task creates test files | `type:test` |
| Task creates models/entities | `type:model` |
| Task creates services | `type:service` |
| Task creates controllers/endpoints | `type:api` |
| Task creates UI components | `type:ui` |
**Label Creation**: If labels don't exist on the repo, create them first before assigning.
8. **Set Milestone** (optional):
- If `$ARGUMENTS` includes `--milestone "<name>"`, assign all issues to that milestone
- If milestone doesn't exist, create it with the feature name as the title
9. **Create Issues** — Execute in dependency order:
**For GitHub**: Use the GitHub MCP server tool `issue_write` to create issues.
**For Gitea**: Use the Gitea REST API:
```bash
# Create issue
curl -s -X POST "https://<gitea-host>/api/v1/repos/<owner>/<repo>/issues" \
-H "Authorization: token <GITEA_TOKEN>" \
-H "Content-Type: application/json" \
-d '{
"title": "[T001] Create project structure",
"body": "<issue body>",
"labels": [<label_ids>]
}'
```
**Authentication**:
- GitHub: Uses MCP server (pre-authenticated)
- Gitea: Requires `GITEA_TOKEN` environment variable. If not set, **STOP** and ask user to provide it.
**Rate Limiting**:
- Create issues sequentially with a 500ms delay between requests
- If rate limited (HTTP 429), wait and retry with exponential backoff
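A minimal sketch of the retry policy (the helper name and argument order are assumptions, not part of the skill):

```shell
# Run a command, retrying on failure with exponential backoff.
# Usage: retry_with_backoff <max_attempts> <initial_delay_seconds> <cmd...>
retry_with_backoff() {
  max_attempts="$1"; delay="$2"; shift 2
  attempt=1
  while true; do
    "$@" && return 0                              # success: stop retrying
    [ "$attempt" -ge "$max_attempts" ] && return 1  # give up
    sleep "$delay"
    delay=$((delay * 2))                          # double the wait each time
    attempt=$((attempt + 1))
  done
}
```

In practice the command would be the `curl` call above, with an HTTP 429 response treated as a failure.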
10. **Track Created Issues** — Maintain a mapping of `TaskID → IssueNumber`:
```markdown
| Task ID | Issue # | Title | URL |
| ------- | ------- | ----------------------------- | ----- |
| T001 | #42 | Create project structure | <url> |
| T002 | #43 | Configure database connection | <url> |
```
11. **Update `tasks.md`** (optional — ask user first):
- Append issue references to each task line:
```
- [ ] T001 Create project structure (#42)
```
12. **Report Completion**:
- Total issues created
- Issues by phase
- Issues by label
- Any failures (with retry suggestions)
- Link to issue board/project
- Mapping table (Task ID → Issue #)
## Arguments
| Argument | Description | Default |
| ---------------------------- | --------------------------------------- | ------------- |
| `--platform <github\|gitea>` | Force platform detection | Auto-detect |
| `--milestone "<name>"` | Assign issues to milestone | None |
| `--dry-run` | Preview issues without creating | `false` |
| `--labels-only` | Only create labels, don't create issues | `false` |
| `--update-tasks` | Auto-update tasks.md with issue refs | `false` (ask) |
## Operating Principles
- **Idempotency**: Check if an issue with the same title already exists before creating duplicates
- **Dependency Order**: Create issues in task execution order so dependencies are naturally numbered
- **Rich Context**: Include enough context in each issue body that it can be understood standalone
- **Label Consistency**: Use a consistent label taxonomy across all issues
- **Platform Safety**: Never create issues on repos that don't match the git remote
- **Dry Run Support**: Always support `--dry-run` to preview before creating
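The idempotency principle can be sketched as follows (function names are illustrative; the list of existing titles would come from the platform's issues API):

```shell
# Build the canonical issue title for a task.
issue_title() {
  printf '[%s] %s' "$1" "$2"
}

# Return success if the wanted title already exists among known titles.
issue_exists() {
  wanted="$1"; shift
  for title in "$@"; do
    [ "$title" = "$wanted" ] && return 0
  done
  return 1
}
```

Before creating an issue, compute `issue_title` for the task and skip creation when `issue_exists` succeeds.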


@@ -4,34 +4,64 @@ description: Run the full speckit pipeline from specification to analysis in one
# Workflow: speckit.all

This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.

## Preparation Phase (Steps 1-5)

1. **Specify** (`/speckit.specify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
   - Execute with user's feature description
   - Creates: `spec.md`

2. **Clarify** (`/speckit.clarify`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
   - Execute to resolve ambiguities
   - Updates: `spec.md`

3. **Plan** (`/speckit.plan`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
   - Execute to create technical design
   - Creates: `plan.md`

4. **Tasks** (`/speckit.tasks`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
   - Execute to generate task breakdown
   - Creates: `tasks.md`

5. **Analyze** (`/speckit.analyze`):
   - Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
   - Execute to validate consistency across spec, plan, and tasks
   - Output: Analysis report
   - **Gate**: If critical issues found, stop and fix before proceeding
## Implementation Phase (Steps 6-7)
6. **Implement** (`/speckit.implement`):
- Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
- Execute all tasks from `tasks.md` with anti-regression protocols
- Output: Working implementation
7. **Check** (`/speckit.checker`):
- Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
- Run static analysis (linters, type checkers, security scanners)
- Output: Checker report
## Verification Phase (Steps 8-10)
8. **Test** (`/speckit.tester`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
- Run tests with coverage
- Output: Test + coverage report
9. **Review** (`/speckit.reviewer`):
- Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
- Perform code review
- Output: Review report with findings
10. **Validate** (`/speckit.validate`):
- Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
- Verify implementation matches spec requirements
- Output: Validation report (pass/fail)
## Usage
@@ -39,9 +69,17 @@ This meta-workflow orchestrates the complete specification pipeline.
/speckit.all "Build a user authentication system with OAuth2 support"
```
## Pipeline Comparison
| Pipeline | Steps | Use When |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit.prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
| `/speckit.all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |
## On Error

If any step fails, stop the pipeline and report:

- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")


@@ -8,7 +8,7 @@ description: Create or update the project constitution from interactive or provi
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -9,7 +9,7 @@ description: Create or update the feature specification from a natural language
   - This is typically the starting point of a new feature.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Identify underspecified areas in the current feature spec by asking
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Execute the implementation planning workflow using the plan templat
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Generate an actionable, dependency-ordered tasks.md for the feature
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -10,7 +10,7 @@ description: Perform a non-destructive cross-artifact consistency and quality an
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Execute the implementation plan by processing and executing all tas
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -10,7 +10,7 @@ description: Run static analysis tools and aggregate results.
   - The user may specify paths to check or run on entire project.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -10,7 +10,7 @@ description: Execute tests, measure coverage, and report results.
   - The user may specify test paths, options, or just run all tests.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Perform code review with actionable feedback and suggestions.
   - The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -8,7 +8,7 @@ description: Validate that implementation matches specification requirements.
   - The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`

3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.


@@ -9,9 +9,11 @@ Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-00
## Steps

1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting

// turbo
2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables

3. **Scaffold module folder**
@@ -40,10 +42,10 @@ backend/src/modules/<module-name>/
9. **Register in AppModule** — import the new module in `app.module.ts`. 9. **Register in AppModule** — import the new module in `app.module.ts`.
10. **Write unit test** — cover service methods with Jest mocks. Run: // turbo 10. **Write unit test** — cover service methods with Jest mocks. Run:
```bash ```bash
pnpm test:watch pnpm test:watch
``` ```
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` // turbo 11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
View File
@@ -0,0 +1,108 @@
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---
# Schema Change Workflow
Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md` — **NO TypeORM migrations**.
## Pre-Change Checklist
- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)
## Steps
1. **Read current schema** — load the full schema file:
```
specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
```
2. **Read data dictionary** — understand current field definitions:
```
specs/03-Data-and-Storage/03-01-data-dictionary.md
```
// turbo
3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:
- Tables being modified/created
- Columns being added/renamed/dropped
- Foreign key relationships affected
- Indexes being added/modified
- Seed data impact (if any)
4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
- Add/modify table definitions
- Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
- Add inline comments for new columns explaining purpose
- Ensure `DEFAULT` values and `NOT NULL` constraints are correct
- Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed
> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.
5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
- Add new tables/columns with descriptions
- Update data types and constraints
- Document business rules for new fields
- Add enum value definitions if applicable
6. **Update seed data** (if applicable):
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions
7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
- Map ONLY columns defined in schema SQL
- Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
- Add `@VersionColumn()` if optimistic locking is needed
8. **Update DTOs** — if new columns are exposed via API:
- Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
- Add `class-validator` decorators for all new fields
- Never use `any` type
// turbo
9. **Run type check** — verify no TypeScript errors:
```bash
cd backend && npx tsc --noEmit
```
10. **Generate SQL diff** — create a summary of changes for the user to apply manually:
```
-- Schema Change Summary
-- Date: <current date>
-- Feature: <feature name>
-- Tables affected: <list>
--
-- ⚠️ Apply this SQL to the live database manually:
ALTER TABLE ...;
-- or
CREATE TABLE ...;
```
11. **Notify user** — present the SQL diff and remind them:
- Apply the SQL change to the live database manually
- Verify the change doesn't break existing data
- Run `pnpm test` after applying to confirm entity mappings work
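The optimistic-locking rule from steps 4 and 7 can be sketched without any TypeORM dependency. This is a minimal, hypothetical illustration of what `@VersionColumn()` enforces (TypeORM does the same check at the SQL level via `UPDATE ... WHERE id = ? AND version = ?`); the row shape and function names are assumptions for the example, not project code:

```typescript
// Sketch of version-based optimistic locking on an in-memory row.
// A writer must present the version it originally read; a mismatch
// means someone else wrote first, and the stale write is rejected.

interface DocumentRow {
  id: number;
  title: string;
  version: number; // incremented on every successful write
}

class OptimisticLockError extends Error {}

function updateWithVersionCheck(
  row: DocumentRow,
  expectedVersion: number,
  patch: Partial<Omit<DocumentRow, "id" | "version">>,
): DocumentRow {
  if (row.version !== expectedVersion) {
    throw new OptimisticLockError(
      `stale write: expected v${expectedVersion}, found v${row.version}`,
    );
  }
  return { ...row, ...patch, version: row.version + 1 };
}

// Two clients read version 1; only the first write wins.
let doc: DocumentRow = { id: 1, title: "RFA-001", version: 1 };
doc = updateWithVersionCheck(doc, 1, { title: "RFA-001 rev A" }); // ok → v2
try {
  updateWithVersionCheck(doc, 1, { title: "RFA-001 rev B" }); // stale
} catch (e) {
  console.log(e instanceof OptimisticLockError); // true
}
```

In the real schema this pairs with the Redis Redlock used for document numbering: the lock serializes number generation, while the version column catches any concurrent update that slips past.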
## Common Patterns
| Change Type | Template |
| ----------- | -------------------------------------------------------------- |
| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table | Full `CREATE TABLE` with constraints and indexes |
| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |
## On Error
- If schema SQL has syntax errors → fix and re-validate with `tsc --noEmit`
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
View File
@@ -8,20 +8,20 @@ This workflow orchestrates the sequential execution of the Speckit preparation p
1. **Step 1: Specify (Skill 02)**
   - Goal: Create or update the `spec.md` based on user input.
-   - Action: Read and execute `.agent/skills/speckit.specify/SKILL.md`.
+   - Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.
2. **Step 2: Clarify (Skill 03)**
   - Goal: Refine the `spec.md` by identifying and resolving ambiguities.
-   - Action: Read and execute `.agent/skills/speckit.clarify/SKILL.md`.
+   - Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.
3. **Step 3: Plan (Skill 04)**
   - Goal: Generate `plan.md` from the finalized spec.
-   - Action: Read and execute `.agent/skills/speckit.plan/SKILL.md`.
+   - Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.
4. **Step 4: Tasks (Skill 05)**
-   - Goal: Generate actional `tasks.md` from the plan.
+   - Goal: Generate actionable `tasks.md` from the plan.
-   - Action: Read and execute `.agent/skills/speckit.tasks/SKILL.md`.
+   - Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.
5. **Step 5: Analyze (Skill 06)**
   - Goal: Validate consistency across all design artifacts (spec, plan, tasks).
-   - Action: Read and execute `.agent/skills/speckit.analyze/SKILL.md`.
+   - Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.
View File
@@ -8,7 +8,7 @@ description: Generate a custom checklist for the current feature based on user r
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.checklist/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -8,7 +8,7 @@ description: Compare two versions of a spec or plan to highlight changes.
- The user has provided an input prompt (optional file paths or version references).
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.diff/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -8,7 +8,7 @@ description: Migrate existing projects into the speckit structure by generating
- The user has provided an input prompt (path to analyze, feature name).
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.migrate/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -10,7 +10,7 @@ description: Challenge the specification with Socratic questioning to identify l
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.quizme/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -10,7 +10,7 @@ description: Display a dashboard showing feature status, completion percentage,
- The user may optionally specify a feature to focus on.
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.status/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -8,7 +8,7 @@ description: Convert existing tasks into actionable, dependency-ordered GitHub i
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
-   - Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.taskstoissues/SKILL.md`
+   - Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`
3. **Execute**:
   - Follow the instructions in the `SKILL.md` exactly.
View File
@@ -12,12 +12,14 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
## 🏗️ Project Overview
-**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0
+**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
  with complex multi-level approval workflows.
-- **Infrastructure:** QNAP Container Station (Docker Compose), Nginx Proxy Manager (Reverse Proxy),
-  Gitea (Git + CI/CD), n8n (Workflow Automation), Prometheus + Loki + Grafana (Monitoring/Logging)
+- **Infrastructure:**
+  - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
+  - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
+  - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
## 💻 Tech Stack & Constraints
@@ -26,6 +28,7 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
  TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
+- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
## 🛡️ Security & Integrity Rules
@@ -36,32 +39,59 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
+7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
## 📋 Workflow & Spec Guidelines
- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
-- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`** before writing queries.
+- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
-- Adhere to ADRs: ADR-001 (Workflow Engine), ADR-002 (Doc Numbering), ADR-009 (DB Strategy),
-  ADR-011 (App Router), ADR-013 (Form Handling), ADR-016 (Security).
+- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
+- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.
### ADR Reference (All 17 + Patch)
Adhere to all ADRs in `specs/06-Decision-Records/`:
| ADR | Topic | Key Decision |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine | Unified state machine for document workflows |
| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
| ADR-011 | App Router | Next.js App Router with RSC patterns |
| ADR-012 | UI Components | Shadcn/UI component library |
| ADR-013 | Form Handling | React Hook Form + Zod validation |
| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
## 🎯 Active Skills
- **`nestjs-best-practices`** — Apply when writing/reviewing any NestJS code (modules, services, controllers, guards, interceptors, DTOs)
- **`next-best-practices`** — Apply when writing/reviewing any Next.js code (App Router, RSC boundaries, async patterns, data fetching, error handling)
+- **`speckit.security-audit`** — Apply when auditing security (OWASP Top 10, CASL, ClamAV, LCBP3-specific checks)
## 🔄 Speckit Workflow Pipeline
Use `/slash-command` to trigger these workflows. Always prefer spec-driven development for new features.
| Phase                | Command                                                    | When to use                                           |
| -------------------- | ---------------------------------------------------------- | ----------------------------------------------------- |
| **Full Pipeline**    | `/speckit.all`                                             | New feature — run Specify→...→Validate (10 steps)     |
| **Feature Design**   | `/speckit.prepare`                                         | Preparation only — Specify→Clarify→Plan→Tasks→Analyze |
| **Implement**        | `/07-speckit.implement`                                    | Write code per tasks.md, with anti-regression checks  |
| **QA**               | `/08-speckit.checker`                                      | Check TypeScript + ESLint + Security                  |
| **Test**             | `/09-speckit.tester`                                       | Run Jest/Vitest + coverage report                     |
| **Review**           | `/10-speckit.reviewer`                                     | Code review — Logic, Performance, Style               |
| **Validate**         | `/11-speckit.validate`                                     | Confirm the implementation matches spec.md            |
| **Schema Change**    | `/schema-change`                                           | Edit schema SQL → data dictionary → notify user       |
| **Project-Specific** | `/create-backend-module` `/create-frontend-page` `/deploy` | Routine LCBP3-DMS tasks                               |
## 🚫 Forbidden Actions
@@ -71,3 +101,5 @@ Use `/slash-command` to trigger these workflows. Always prefer spec-driven devel
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
View File
@@ -45,19 +45,19 @@ jobs:
# 4. Update Containers
echo "🔄 Updating Containers..."
# Sync compose file from repo → app directory
-cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3.yml /share/np-dms/app/docker-compose-lcbp3.yml
+cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml /share/np-dms/app/docker-compose-app.yml
cd /share/np-dms/app
# ⚠️ Remove old containers that may have been created by Container Station
-docker rm -f backend frontend 2>/dev/null || true
+docker rm -f lcbp3-backend lcbp3-frontend 2>/dev/null || true
# 4a. Start Backend first
echo "🟢 Starting Backend..."
-docker compose -f docker-compose-lcbp3.yml up -d backend
+docker compose -f docker-compose-app.yml up -d backend
# 4b. Wait for Backend healthy (check every 5 s, up to 60 s)
echo "⏳ Waiting for Backend health check..."
for i in $(seq 1 12); do
-  if docker inspect --format='{{.State.Health.Status}}' backend 2>/dev/null | grep -q healthy; then
+  if docker inspect --format='{{.State.Health.Status}}' lcbp3-backend 2>/dev/null | grep -q healthy; then
    echo "✅ Backend is healthy!"
    break
  fi
@@ -69,7 +69,7 @@ jobs:
# 4c. Start Frontend
echo "🟢 Starting Frontend..."
-docker compose -f docker-compose-lcbp3.yml up -d frontend
+docker compose -f docker-compose-app.yml up -d frontend
# 5. Cleanup
echo "🧹 Cleaning up unused images..."
AGENTS.md Normal file
View File
@@ -0,0 +1,77 @@
# NAP-DMS Project Context & Rules
> **For:** Codex CLI, opencode, Amp, Amazon Q Developer CLI, IBM Bob, and other AGENTS.md-compatible tools.
## 🧠 Role & Persona
Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
You are a **Document Intelligence Engine** — not a general chatbot.
You value **Data Integrity**, **Security**, and **Clean Architecture**.
## 🏗️ Project Overview
**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
with complex multi-level approval workflows.
- **Infrastructure:**
- **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
- **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
- **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
## 💻 Tech Stack & Constraints
- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
## 🛡️ Security & Integrity Rules
1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
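Rule 1 above can be sketched as a minimal in-memory guard. This is a hypothetical illustration only — in LCBP3-DMS the store would be Redis with a TTL and the check would live in a NestJS interceptor, not a plain class:

```typescript
// Sketch of Idempotency-Key handling for a critical POST/PUT/PATCH.
// The first request with a given key runs the handler and caches the
// response; retries with the same key replay it without a second write.

type CachedResponse = { status: number; body: unknown };

class IdempotencyStore {
  private seen = new Map<string, CachedResponse>();

  handle(key: string | undefined, handler: () => CachedResponse): CachedResponse {
    if (!key) {
      // Critical mutations MUST carry the header per rule 1.
      return { status: 400, body: { error: "Idempotency-Key header required" } };
    }
    const cached = this.seen.get(key);
    if (cached) return cached; // retry → replay stored result, no double write
    const result = handler();
    this.seen.set(key, result);
    return result;
  }
}

const store = new IdempotencyStore();
let writes = 0;
const create = () => ({ status: 201, body: { id: ++writes } });

store.handle("abc-123", create);
store.handle("abc-123", create); // replayed; handler not called again
console.log(writes); // 1
```

A production version would also need to handle the window where the first request is still in flight (e.g. a "pending" marker), which the sketch omits.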
## 📋 Spec Guidelines
- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
### ADR Reference (All 17 + Patch)
| ADR | Topic | Key Decision |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine | Unified state machine for document workflows |
| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
| ADR-011 | App Router | Next.js App Router with RSC patterns |
| ADR-012 | UI Components | Shadcn/UI component library |
| ADR-013 | Form Handling | React Hook Form + Zod validation |
| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
## 🚫 Forbidden Actions
- DO NOT use SQL Triggers (Business logic must be in NestJS services).
- DO NOT use `.env` files for production configuration (Use Docker environment variables).
- DO NOT run database migrations — modify the schema SQL file directly.
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
CLAUDE.md Normal file
View File
@@ -0,0 +1,79 @@
# NAP-DMS Project Context & Rules
## 🧠 Role & Persona
Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
You are a **Document Intelligence Engine** — not a general chatbot.
You value **Data Integrity**, **Security**, and **Clean Architecture**.
## 🏗️ Project Overview
**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)
- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
with complex multi-level approval workflows.
- **Infrastructure:**
- **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
- **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
- **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)
## 💻 Tech Stack & Constraints
- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**
## 🛡️ Security & Integrity Rules
1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
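Rule 2's Two-Phase Storage can be sketched with plain Node.js file operations. The directory layout and function names here are assumptions for illustration; the real flow goes through the project's StorageService:

```typescript
// Sketch of Two-Phase Storage: an upload lands in a temp area first and is
// only moved into permanent storage on an explicit commit (after virus scan
// and validation). Uses throwaway tmp directories so the sketch is runnable.
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "dms-temp-"));
const permDir = fs.mkdtempSync(path.join(os.tmpdir(), "dms-perm-"));

// Phase 1: upload → temp only. Permanent storage is untouched.
function uploadToTemp(name: string, content: string): string {
  const tempPath = path.join(tempDir, name);
  fs.writeFileSync(tempPath, content);
  return tempPath;
}

// Phase 2: commit → rename into permanent storage.
// rename is atomic when both paths are on the same filesystem.
function commitToPermanent(tempPath: string): string {
  const finalPath = path.join(permDir, path.basename(tempPath));
  fs.renameSync(tempPath, finalPath);
  return finalPath;
}

const staged = uploadToTemp("rfa-001.pdf", "dummy bytes");
const committed = commitToPermanent(staged);
console.log(fs.existsSync(committed)); // true
console.log(fs.existsSync(staged)); // false — moved, not copied
```

If the scan or validation fails between the two phases, the temp file is simply deleted and permanent storage never sees a partial or infected upload.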
## 📋 Workflow & Spec Guidelines
- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.
### ADR Reference (All 17 + Patch)
Adhere to all ADRs in `specs/06-Decision-Records/`:
| ADR | Topic | Key Decision |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine | Unified state machine for document workflows |
| ADR-002 | Doc Numbering | Redis Redlock + DB optimistic locking |
| ADR-005 | Technology Stack | NestJS + Next.js + MariaDB + Redis |
| ADR-006 | Redis Caching | Cache strategy and invalidation patterns |
| ADR-008 | Email Notification | BullMQ queue-based email/LINE/in-app |
| ADR-009 | DB Strategy | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring | Prometheus + Loki + Grafana stack |
| ADR-011 | App Router | Next.js App Router with RSC patterns |
| ADR-012 | UI Components | Shadcn/UI component library |
| ADR-013 | Form Handling | React Hook Form + Zod validation |
| ADR-014 | State Management | TanStack Query (server) + Zustand (client) |
| ADR-015 | Deployment | Docker Compose + Gitea CI/CD |
| ADR-016 | Security | JWT + CASL RBAC + Helmet.js + ClamAV |
| ADR-017 | Ollama Migration | Local AI + n8n for legacy data import |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access |
## 🚫 Forbidden Actions
- DO NOT use SQL Triggers (Business logic must be in NestJS services).
- DO NOT use `.env` files for production configuration (Use Docker environment variables).
- DO NOT run database migrations — modify the schema SQL file directly.
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
View File
@@ -1,9 +1,10 @@
-# File: /share/np-dms/app/docker-compose.yml
+# File: /share/np-dms/app/docker-compose-app.yml
# DMS Container v1.8.0: Application Stack (Backend + Frontend)
# Application name: lcbp3-app
# ============================================================
# ⚠️ Used together with other services already running on QNAP:
# - mariadb (lcbp3-db)
+# - redis (lcbp3-redis)
# - cache (services)
# - search (services)
# - npm (lcbp3-npm)
@@ -29,12 +30,12 @@ networks:
services:
  # ----------------------------------------------------------------
  # 1. Backend API (NestJS)
-  # Service Name: backend (as referenced by NPM → backend:3000)
+  # Service Name: backend (as referenced by NPM → lcbp3-backend:3000)
  # ----------------------------------------------------------------
  backend:
    <<: [*restart_policy, *default_logging]
    image: lcbp3-backend:latest
-    container_name: backend
+    container_name: lcbp3-backend
    stdin_open: true
    tty: true
    deploy:
@@ -88,12 +89,12 @@ services:
  # ----------------------------------------------------------------
  # 2. Frontend Web App (Next.js)
-  # Service Name: frontend (as referenced by NPM → frontend:3000)
+  # Service Name: frontend (as referenced by NPM → lcbp3-frontend:3000)
  # ----------------------------------------------------------------
  frontend:
    <<: [*restart_policy, *default_logging]
    image: lcbp3-frontend:latest
-    container_name: frontend
+    container_name: lcbp3-frontend
    stdin_open: true
    tty: true
    deploy: