260304:1233 20260304:1200 update app to lcbp3
Some checks failed
Build and Deploy / deploy (push) Failing after 1m32s
@@ -30,10 +30,12 @@ Before generating code or planning a solution, you MUST conceptually load the co

4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
   - _Action:_
     - **Read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`** for exact table structures and constraints.
     - **Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** for exact table structures and constraints.
     - **Consult `specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql`** to understand initial data states.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql`** to understand initial permissions states.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql`** to understand initial data states.
     - **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`** to understand initial permissions states.
     - **Check `specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** for migration context (ADR-017).
     - **Check `specs/03-Data-and-Storage/03-05-n8n-migration-setup-guide.md`** for n8n workflow setup.
   - _Constraint:_ NEVER invent table names or columns. Use ONLY what is defined here.

5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
@@ -68,8 +70,9 @@ When proposing a change or writing code, you must explicitly reference the sourc
### 4. Schema Changes

- **DO NOT** create or run TypeORM migration files.
- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`.
- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`.
- Update `specs/03-Data-and-Storage/03-01-data-dictionary.md` if adding/changing columns.
- Notify the user so they can apply the SQL change to the live database manually.
- **AI Isolation (ADR-018):** Ollama runs on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. All writes go through DMS API.

---

@@ -1,8 +1,8 @@
# 🚀 Spec-Kit: Antigravity Skills & Workflows

> **The Event Horizon of Software Quality.**
> *Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit).*
> *Version: 1.1.0*
> _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
> _Version: 1.1.0_

---

@@ -11,6 +11,7 @@
Welcome to the **Antigravity Edition** of Spec-Kit. This system is architected to empower your AI pair programmer (Antigravity) to drive the entire Software Development Life Cycle (SDLC) using two powerful mechanisms: **Workflows** and **Skills**.

### 🔄 Dual-Mode Intelligence

In this edition, Spec-Kit commands have been split into two interactive layers:

1. **Workflows (`/command`)**: High-level orchestrations that guide the agent through a series of logical steps. **The easiest way to run a skill is by typing its corresponding workflow command.**
@@ -25,10 +26,27 @@ In this edition, Spec-Kit commands have been split into two interactive layers:

To enable these agent capabilities in your project:

1. **Add the folder**: Drop the `.agent/` folder into the root of your project workspace.
2. **That's it!** Antigravity automatically detects the `.agent/skills` and `.agent/workflows` directories. It will instantly gain the ability to perform Spec-Driven Development.
1. **Add the folder**: Drop the `.agents/` folder into the root of your project workspace.
2. **That's it!** Antigravity automatically detects the `.agents/skills` and `.agents/workflows` directories. It will instantly gain the ability to perform Spec-Driven Development.

> **💡 Compatibility Note:** This toolkit is fully compatible with **Claude Code**. To use it with Claude, simply rename the `.agent` folder to `.claude`. The skills and workflows will function identically.
> **💡 Compatibility Note:** This toolkit is compatible with multiple AI coding agents. To use with Claude Code, rename the `.agents` folder to `.claude`. The skills and workflows will function identically.
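The compatibility note above boils down to a single copy. A minimal sketch — directory names as described in the note; copying instead of renaming keeps both agents working:

```shell
# Demo in a scratch directory; in a real repo, run the cp from the repo root.
mkdir -p demo-repo/.agents/skills demo-repo/.agents/workflows
# Expose the same toolkit to Claude Code by duplicating it as .claude/
cp -r demo-repo/.agents demo-repo/.claude
```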

### Prerequisites (Optional)

Some skills and scripts reference a `.specify/` directory for templates and project memory. If you want the full Spec-Kit experience (template-driven spec/plan creation), create this structure at repo root:

```text
.specify/
├── templates/
│   ├── spec-template.md         # Template for /speckit.specify
│   ├── plan-template.md         # Template for /speckit.plan
│   ├── tasks-template.md        # Template for /speckit.tasks
│   └── agent-file-template.md   # Template for update-agent-context.sh
└── memory/
    └── constitution.md          # Project governance rules (/speckit.constitution)
```

> **Note:** If `.specify/` is absent, skills will still function — they'll create blank files instead of using templates. The constitution workflow (`/speckit.constitution`) will create this structure for you on first run.
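If you prefer to bootstrap `.specify/` by hand rather than wait for the constitution workflow, a minimal sketch (the template files start empty — fill them with your own content):

```shell
# Create the skeleton the skills look for; empty files behave like the
# "no templates" fallback described in the note above.
mkdir -p .specify/templates .specify/memory
touch .specify/templates/spec-template.md \
      .specify/templates/plan-template.md \
      .specify/templates/tasks-template.md \
      .specify/templates/agent-file-template.md \
      .specify/memory/constitution.md
```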

---

@@ -37,35 +55,55 @@ To enable these agent capabilities in your project:
The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.

```text
.agent/
.agents/
├── skills/ # @ Mentions (Agent Intelligence)
│ ├── speckit.analyze # Consistency Checker
│ ├── speckit.checker # Static Analysis Aggregator
│ ├── speckit.checklist # Requirements Validator
│ ├── speckit.clarify # Ambiguity Resolver
│ ├── speckit.constitution # Governance Manager
│ ├── speckit.diff # Artifact Comparator
│ ├── speckit.implement # Code Builder (Anti-Regression)
│ ├── speckit.migrate # Legacy Code Migrator
│ ├── speckit.plan # Technical Planner
│ ├── speckit.quizme # Logic Challenger (Red Team)
│ ├── speckit.reviewer # Code Reviewer
│ ├── speckit.specify # Feature Definer
│ ├── speckit.status # Progress Dashboard
│ ├── speckit.tasks # Task Breaker
│ ├── speckit.taskstoissues # Issue Tracker Syncer
│ ├── speckit.tester # Test Runner & Coverage
│ └── speckit.validate # Implementation Validator
│ ├── nestjs-best-practices/ # NestJS Architecture Patterns
│ ├── next-best-practices/ # Next.js App Router Patterns
│ ├── speckit.analyze/ # Consistency Checker
│ ├── speckit.checker/ # Static Analysis Aggregator
│ ├── speckit.checklist/ # Requirements Validator
│ ├── speckit.clarify/ # Ambiguity Resolver
│ ├── speckit.constitution/ # Governance Manager
│ ├── speckit.diff/ # Artifact Comparator
│ ├── speckit.implement/ # Code Builder (Anti-Regression)
│ ├── speckit.migrate/ # Legacy Code Migrator
│ ├── speckit.plan/ # Technical Planner
│ ├── speckit.quizme/ # Logic Challenger (Red Team)
│ ├── speckit.reviewer/ # Code Reviewer
│ ├── speckit.security-audit/ # Security Auditor (OWASP/CASL/ClamAV)
│ ├── speckit.specify/ # Feature Definer
│ ├── speckit.status/ # Progress Dashboard
│ ├── speckit.tasks/ # Task Breaker
│ ├── speckit.taskstoissues/ # Issue Tracker Syncer (GitHub + Gitea)
│ ├── speckit.tester/ # Test Runner & Coverage
│ └── speckit.validate/ # Implementation Validator
│
├── workflows/ # / Slash Commands (Orchestration)
│ ├── 00-speckit.all.md # Full Pipeline
│ ├── 01-speckit.constitution.md # Governance
│ ├── 02-speckit.specify.md # Feature Spec
│ ├── ... (Numbered 00-11)
│ ├── speckit.prepare.md # Prep Pipeline
│ └── util-speckit.*.md # Utilities
│ ├── 00-speckit.all.md # Full Pipeline (10 steps: Specify → Validate)
│ ├── 01–11-speckit.*.md # Individual phase workflows
│ ├── speckit.prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
│ ├── schema-change.md # DB Schema Change (ADR-009)
│ ├── create-backend-module.md # NestJS Module Scaffolding
│ ├── create-frontend-page.md # Next.js Page Scaffolding
│ ├── deploy.md # Deployment via Gitea CI/CD
│ └── util-speckit.*.md # Utilities (checklist, diff, migrate, etc.)
│
└── scripts/ # Shared Bash Core (Kinetic logic)
└── scripts/
    ├── bash/ # Bash Core (Kinetic logic)
    │ ├── common.sh # Shared utilities & path resolution
    │ ├── check-prerequisites.sh # Prerequisite validation
    │ ├── create-new-feature.sh # Feature branch creation
    │ ├── setup-plan.sh # Plan template setup
    │ ├── update-agent-context.sh # Agent file updater (main)
    │ ├── plan-parser.sh # Plan data extraction (module)
    │ ├── content-generator.sh # Language-specific templates (module)
    │ └── agent-registry.sh # 17-agent type registry (module)
    ├── powershell/ # PowerShell Equivalents (Windows-native)
    │ ├── common.ps1 # Shared utilities & prerequisites
    │ └── create-new-feature.ps1 # Feature branch creation
    ├── fix_links.py # Spec link fixer
    ├── verify_links.py # Spec link verifier
    └── start-mcp.js # MCP server launcher
```

---
@@ -73,8 +111,8 @@ The toolkit is organized into modular components that provide both the logic (Sc
## 🗺️ Mapping: Commands to Capabilities

| Phase | Workflow Trigger | Antigravity Skill | Role |
| :--- | :--- | :--- | :--- |
| **Pipeline** | `/00-speckit.all` | N/A | Runs the full SDLC pipeline. |
| :---------------- | :---------------------------- | :------------------------ | :------------------------------------------------------ |
| **Full Pipeline** | `/00-speckit.all` | N/A | Runs full SDLC pipeline (10 steps: Specify → Validate). |
| **Governance** | `/01-speckit.constitution` | `@speckit.constitution` | Establishes project rules & principles. |
| **Definition** | `/02-speckit.specify` | `@speckit.specify` | Drafts structured `spec.md`. |
| **Ambiguity** | `/03-speckit.clarify` | `@speckit.clarify` | Resolves gaps post-spec. |
@@ -86,13 +124,15 @@ The toolkit is organized into modular components that provide both the logic (Sc
| **Testing** | `/09-speckit.tester` | `@speckit.tester` | Runs test suite & reports coverage. |
| **Review** | `/10-speckit.reviewer` | `@speckit.reviewer` | Performs code review (Logic, Perf, Style). |
| **Validation** | `/11-speckit.validate` | `@speckit.validate` | Verifies implementation matches Spec requirements. |
| **Preparation** | `/speckit.prepare` | N/A | Runs Specify -> Analyze sequence. |
| **Preparation** | `/speckit.prepare` | N/A | Runs Specify → Analyze prep sequence (5 steps). |
| **Schema** | `/schema-change` | N/A | DB schema changes per ADR-009 (no migrations). |
| **Security** | N/A | `@speckit.security-audit` | OWASP Top 10 + CASL + ClamAV audit. |
| **Checklist** | `/util-speckit.checklist` | `@speckit.checklist` | Generates feature checklists. |
| **Diff** | `/util-speckit.diff` | `@speckit.diff` | Compares artifact versions. |
| **Migration** | `/util-speckit.migrate` | `@speckit.migrate` | Port existing code to Spec-Kit. |
| **Red Team** | `/util-speckit.quizme` | `@speckit.quizme` | Challenges logical flaws. |
| **Status** | `/util-speckit.status` | `@speckit.status` | Shows feature completion status. |
| **Tracking** | `/util-speckit.taskstoissues` | `@speckit.taskstoissues` | Syncs tasks to GitHub/Jira/etc. |
| **Tracking** | `/util-speckit.taskstoissues` | `@speckit.taskstoissues` | Syncs tasks to GitHub/Gitea issues. |

---

@@ -101,19 +141,17 @@ The toolkit is organized into modular components that provide both the logic (Sc
The following skills are designed to work together as a comprehensive defense against regression and poor quality. Run them in this order:

| Step | Skill | Core Question | Focus |
| :--- | :--- | :--- | :--- |
| **1. Checker** | `@speckit.checker` | *"Is the code compliant?"* | **Syntax & Security**. Runs compilation, linting (ESLint/GolangCI), and vulnerability scans (npm audit/govulncheck). Catches low-level errors first. |
| **2. Tester** | `@speckit.tester` | *"Does it work?"* | **Functionality**. Executes your test suite (Jest/Pytest/Go Test) to ensure logic performs as expected and tests pass. |
| **3. Reviewer** | `@speckit.reviewer` | *"Is the code written well?"* | **Quality & Maintainability**. Analyzes code structure for complexity, performance bottlenecks, and best practices, acting as a senior peer reviewer. |
| **4. Validate** | `@speckit.validate` | *"Did we build the right thing?"* | **Requirements**. Semantically compares the implementation against the defined `spec.md` and `plan.md` to ensure all feature requirements are met. |
| :-------------- | :------------------ | :-------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
| **1. Checker** | `@speckit.checker` | _"Is the code compliant?"_ | **Syntax & Security**. Runs compilation, linting (ESLint/GolangCI), and vulnerability scans (npm audit/govulncheck). Catches low-level errors first. |
| **2. Tester** | `@speckit.tester` | _"Does it work?"_ | **Functionality**. Executes your test suite (Jest/Pytest/Go Test) to ensure logic performs as expected and tests pass. |
| **3. Reviewer** | `@speckit.reviewer` | _"Is the code written well?"_ | **Quality & Maintainability**. Analyzes code structure for complexity, performance bottlenecks, and best practices, acting as a senior peer reviewer. |
| **4. Validate** | `@speckit.validate` | _"Did we build the right thing?"_ | **Requirements**. Semantically compares the implementation against the defined `spec.md` and `plan.md` to ensure all feature requirements are met. |

> **🤖 Power User Tip:** You can amplify this pipeline by creating a custom **Claude Code (MCP) Server** or subagent that delegates heavy reasoning to **Gemini Pro 3** via the `gemini` CLI.
> **🤖 Power User Tip:** You can amplify this pipeline by creating a custom **MCP Server** or subagent that delegates heavy reasoning to a dedicated LLM.
>
> * **Use Case:** Bind the `@speckit.validate` and `@speckit.reviewer` steps to Gemini Pro 3.
> * **Benefit:** Gemini's 1M+ token context and reasoning capabilities excel at analyzing the full project context against the Spec, finding subtle logical flaws that smaller models miss.
> * **How:** Create a wrapper script `scripts/gemini-reviewer.sh` that pipes the `tasks.md` and codebase to `gemini chat`, then expose this as a tool to Claude.

---
> - **Use Case:** Bind the `@speckit.validate` and `@speckit.reviewer` steps to a large-context model.
> - **Benefit:** Large-context models (1M+ tokens) excel at analyzing the full project context against the Spec, finding subtle logical flaws that smaller models miss.
> - **How:** Create a wrapper script `scripts/gemini-reviewer.sh` that pipes the `tasks.md` and codebase to an LLM, then expose this as a tool.

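One possible shape for such a wrapper, sketched as a function. The `gemini chat` invocation and the `LLM_CMD` override are illustrative assumptions, not part of this toolkit:

```shell
# Sketch of scripts/gemini-reviewer.sh: pipe tasks plus a prompt to an LLM CLI.
# LLM_CMD is a hypothetical override so any stdin-reading CLI can be swapped in.
review_tasks() {
  local tasks_file="${1:-tasks.md}"
  local llm="${LLM_CMD:-gemini chat}"   # default assumes a `gemini` CLI on PATH
  {
    echo "Review the implementation below against these tasks:"
    cat "$tasks_file"
  } | $llm
}
```

Expose the function (or a script form of it) as a tool to your agent; binding it to the `@speckit.validate` step is one option.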
---

@@ -122,44 +160,47 @@ The following skills are designed to work together as a comprehensive defense ag
These workflows function as the "Control Plane" of the project, managing everything from idea inception to status tracking.

| Step | Workflow | Core Question | Focus |
| :--- | :--- | :--- | :--- |
| **1. Preparation** | `/speckit.prepare` | *"Are we ready?"* | **The Macro-Workflow**. Runs Skills 02–06 (Specify $\to$ Clarify $\to$ Plan $\to$ Tasks $\to$ Analyze) in one sequence to go from "Idea" to "Ready to Code". |
| **2. Migration** | `/util-speckit.migrate` | *"Can we import?"* | **Onboarding**. Reverse-engineers existing code into `spec.md`, `plan.md`, and `tasks.md`. |
| **3. Red Team** | `/util-speckit.quizme` | *"What did we miss?"* | **Hardening**. Socratic questioning to find logical gaps in your specification before you plan. |
| **4. Export** | `/util-speckit.taskstoissues` | *"Who does what?"* | **Handoff**. Converts your `tasks.md` into real GitHub/Jira issues for the team. |
| **5. Status** | `/util-speckit.status` | *"Are we there yet?"* | **Tracking**. Scans all artifacts to report feature completion percentage. |
| **6. Utilities** | `/util-speckit.diff` <br> `/util-speckit.checklist` | *"What changed?"* | **Support**. View artifact diffs or generate quick acceptance checklists. |
| :----------------- | :-------------------------------------------------- | :-------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **1. Preparation** | `/speckit.prepare` | _"Are we ready?"_ | **The Macro-Workflow**. Runs Skills 02–06 (Specify $\to$ Clarify $\to$ Plan $\to$ Tasks $\to$ Analyze) in one sequence to go from "Idea" to "Ready to Code". |
| **2. Migration** | `/util-speckit.migrate` | _"Can we import?"_ | **Onboarding**. Reverse-engineers existing code into `spec.md`, `plan.md`, and `tasks.md`. |
| **3. Red Team** | `/util-speckit.quizme` | _"What did we miss?"_ | **Hardening**. Socratic questioning to find logical gaps in your specification before you plan. |
| **4. Export** | `/util-speckit.taskstoissues` | _"Who does what?"_ | **Handoff**. Converts your `tasks.md` into GitHub or Gitea issues with labels and milestones. |
| **5. Status** | `/util-speckit.status` | _"Are we there yet?"_ | **Tracking**. Scans all artifacts to report feature completion percentage. |
| **6. Utilities** | `/util-speckit.diff` <br> `/util-speckit.checklist` | _"What changed?"_ | **Support**. View artifact diffs or generate quick acceptance checklists. |

### 🔄 The Design Sequence

**Stage 1: Inception**
* *Legacy Project?* $\to$ Run **`/util-speckit.migrate`**.
* *New Feature?* $\to$ Run **`/speckit.prepare`**.

- _Legacy Project?_ $\to$ Run **`/util-speckit.migrate`**.
- _New Feature?_ $\to$ Run **`/speckit.prepare`**.

**Stage 2: Hardening**
* Run **`/util-speckit.quizme`** to catch edge cases.
* Run **`/speckit.prepare`** again to regenerate the Plan based on restricted specs.

- Run **`/util-speckit.quizme`** to catch edge cases.
- Run **`/speckit.prepare`** again to regenerate the Plan based on restricted specs.

**Stage 3: Handoff**
* Run **`/util-speckit.taskstoissues`** to sync with your board.

- Run **`/util-speckit.taskstoissues`** to sync with your board.

---

## 🚀 Getting Started in Antigravity

Once the `.agent` folder is present, you can talk to your agent as a partner. You can trigger a full process using a slash command or reference a specific capability using a skill mention.
Once the `.agents` folder is present, you can talk to your agent as a partner. You can trigger a full process using a slash command or reference a specific capability using a skill mention.

**Example Conversation:**

> **Developer**: "**/02-speckit.specify** Add a real-time notification system using WebSockets."
>
> **Antigravity**: "Understood. Executing the **@speckit.specify** skill. Creating feature branch `005-notifications` and drafting the initial specification..."
> *(Antigravity creates branch, drafts spec.md)*
> _(Antigravity creates branch, drafts spec.md)_
>
> **Antigravity**: "Specification ready. I've identified some ambiguity in the reconnection strategy. Should I use **@speckit.clarify** to resolve this?"
>
> **Developer**: "Yes, use **@speckit.clarify**."
> *(Antigravity executes the skill logic, interviewing the developer and updating the spec)*
> _(Antigravity executes the skill logic, interviewing the developer and updating the spec)_
>
> **Antigravity**: "The spec is now robust. I'm ready to trigger **/04-speckit.plan** and **/05-speckit.tasks** to prepare for implementation."

@@ -170,25 +211,33 @@ Once the `.agent` folder is present, you can talk to your agent as a partner. Yo
To get the most out of this system, follow these **Spec-Driven Development (SDD)** rules:

### 1. The Constitution is King 👑

**Never skip `/01-speckit.constitution`.**
* This file is the "Context Window Anchor" for the AI.
* It prevents hallucinations about tech stack (e.g., "Don't use jQuery" or "Always use TypeScript strict mode").
* **Tip:** If Antigravity makes a style mistake, don't just fix the code—update the Constitution so it never happens again.

- This file is the "Context Window Anchor" for the AI.
- It prevents hallucinations about tech stack (e.g., "Don't use jQuery" or "Always use TypeScript strict mode").
- **Tip:** If Antigravity makes a style mistake, don't just fix the code—update the Constitution so it never happens again.

### 2. The Layered Defense 🛡️
Don't rush to code. The workflow exists to catch errors *cheaply* before they become expensive bugs.
* **Ambiguity Layer**: `/03-speckit.clarify` catches misunderstandings.
* **Logic Layer**: `/util-speckit.quizme` catches edge cases.
* **Consistency Layer**: `/06-speckit.analyze` catches gaps between Spec and Plan.

Don't rush to code. The workflow exists to catch errors _cheaply_ before they become expensive bugs.

- **Ambiguity Layer**: `/03-speckit.clarify` catches misunderstandings.
- **Logic Layer**: `/util-speckit.quizme` catches edge cases.
- **Consistency Layer**: `/06-speckit.analyze` catches gaps between Spec and Plan.

### 3. The 15-Minute Rule ⏱️

When generating `tasks.md` (Skill 05), ensure tasks are **atomic**.
* **Bad Task**: "Implement User Auth" (Too big, AI will get lost).
* **Good Task**: "Create `User` Mongoose schema with email validation" (Perfect).
* **Rule of Thumb**: If a task takes Antigravity more than 3 tool calls to finish, it's too big. Break it down.

- **Bad Task**: "Implement User Auth" (Too big, AI will get lost).
- **Good Task**: "Create `User` Mongoose schema with email validation" (Perfect).
- **Rule of Thumb**: If a task takes Antigravity more than 3 tool calls to finish, it's too big. Break it down.

### 4. "Refine, Don't Rewind" ⏩

If you change your mind mid-project:

1. Don't just edit the code.
2. Edit the `spec.md` to reflect the new requirement.
3. Run `/util-speckit.diff` to see the drift.
@@ -198,9 +247,11 @@ If you change your mind mid-project:

## 🧩 Adaptation Notes

* **Skill-Based Autonomy**: Mentions like `@speckit.plan` trigger the agent's internalized understanding of how to perform that role.
* **Shared Script Core**: All logic resides in `.agent/scripts/bash` for consistent file and git operations.
* **Agent-Native**: Designed to be invoked via Antigravity tool calls and reasoning rather than just terminal strings.
- **Skill-Based Autonomy**: Mentions like `@speckit.plan` trigger the agent's internalized understanding of how to perform that role.
- **Shared Script Core**: Logic resides in `.agents/scripts/bash` (modular) with PowerShell equivalents in `scripts/powershell/` for Windows-native execution.
- **Agent-Native**: Designed to be invoked via Antigravity tool calls and reasoning rather than just terminal strings.
- **LCBP3-DMS Specific**: Includes project-specific skills (`nestjs-best-practices`, `next-best-practices`, `speckit.security-audit`) and workflows (`/schema-change`, `/create-backend-module`, `/deploy`).

---
*Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity.*

_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._

95 .agents/scripts/bash/agent-registry.sh (Normal file)
@@ -0,0 +1,95 @@
#!/usr/bin/env bash
# Agent registry — maps agent types to file paths and display names
# Extracted from update-agent-context.sh for modularity
#
# Usage:
#   source agent-registry.sh
#   init_agent_registry "$REPO_ROOT"
#   get_agent_file "claude"   # → /path/to/CLAUDE.md
#   get_agent_name "claude"   # → "Claude Code"

# Initialize agent file paths (call after REPO_ROOT is set)
init_agent_registry() {
    local repo_root="$1"

    # Agent type → file path mapping
    declare -gA AGENT_FILES=(
        [claude]="$repo_root/CLAUDE.md"
        [gemini]="$repo_root/GEMINI.md"
        [copilot]="$repo_root/.github/agents/copilot-instructions.md"
        [cursor-agent]="$repo_root/.cursor/rules/specify-rules.mdc"
        [qwen]="$repo_root/QWEN.md"
        [opencode]="$repo_root/AGENTS.md"
        [codex]="$repo_root/AGENTS.md"
        [windsurf]="$repo_root/.windsurf/rules/specify-rules.md"
        [kilocode]="$repo_root/.kilocode/rules/specify-rules.md"
        [auggie]="$repo_root/.augment/rules/specify-rules.md"
        [roo]="$repo_root/.roo/rules/specify-rules.md"
        [codebuddy]="$repo_root/CODEBUDDY.md"
        [qoder]="$repo_root/QODER.md"
        [amp]="$repo_root/AGENTS.md"
        [shai]="$repo_root/SHAI.md"
        [q]="$repo_root/AGENTS.md"
        [bob]="$repo_root/AGENTS.md"
    )

    # Agent type → display name mapping
    declare -gA AGENT_NAMES=(
        [claude]="Claude Code"
        [gemini]="Gemini CLI"
        [copilot]="GitHub Copilot"
        [cursor-agent]="Cursor IDE"
        [qwen]="Qwen Code"
        [opencode]="opencode"
        [codex]="Codex CLI"
        [windsurf]="Windsurf"
        [kilocode]="Kilo Code"
        [auggie]="Auggie CLI"
        [roo]="Roo Code"
        [codebuddy]="CodeBuddy CLI"
        [qoder]="Qoder CLI"
        [amp]="Amp"
        [shai]="SHAI"
        [q]="Amazon Q Developer CLI"
        [bob]="IBM Bob"
    )

    # Template file path
    TEMPLATE_FILE="$repo_root/.specify/templates/agent-file-template.md"
}

# Get file path for an agent type
get_agent_file() {
    local agent_type="$1"
    echo "${AGENT_FILES[$agent_type]:-}"
}

# Get display name for an agent type
get_agent_name() {
    local agent_type="$1"
    echo "${AGENT_NAMES[$agent_type]:-}"
}

# Get all registered agent types
get_all_agent_types() {
    echo "${!AGENT_FILES[@]}"
}

# Check if an agent type is valid
is_valid_agent() {
    local agent_type="$1"
    [[ -n "${AGENT_FILES[$agent_type]:-}" ]]
}

# Get supported agent types as a pipe-separated string (for error messages)
get_supported_agents_string() {
    local result=""
    for key in "${!AGENT_FILES[@]}"; do
        if [[ -n "$result" ]]; then
            result="$result|$key"
        else
            result="$key"
        fi
    done
    echo "$result"
}
40 .agents/scripts/bash/content-generator.sh (Normal file)
@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Content generation functions for update-agent-context
# Extracted from update-agent-context.sh for modularity

# Get project directory structure based on project type
get_project_structure() {
    local project_type="$1"

    if [[ "$project_type" == *"web"* ]]; then
        echo "backend/\\nfrontend/\\ntests/"
    else
        echo "src/\\ntests/"
    fi
}

# Get build/test commands for a given language
get_commands_for_language() {
    local lang="$1"

    case "$lang" in
        *"Python"*)
            echo "cd src && pytest && ruff check ."
            ;;
        *"Rust"*)
            echo "cargo test && cargo clippy"
            ;;
        *"JavaScript"*|*"TypeScript"*)
            echo "npm test \\&\\& npm run lint"
            ;;
        *)
            echo "# Add commands for $lang"
            ;;
    esac
}

# Get language-specific conventions string
get_language_conventions() {
    local lang="$1"
    echo "$lang: Follow standard conventions"
}
72 .agents/scripts/bash/plan-parser.sh (Normal file)
@@ -0,0 +1,72 @@
#!/usr/bin/env bash
# Plan parsing functions for update-agent-context
# Extracted from update-agent-context.sh for modularity

# Extract a field value from plan.md by pattern
# Usage: extract_plan_field "Language/Version" "/path/to/plan.md"
extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}
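A quick demonstration of what the grep/sed pipeline in `extract_plan_field` pulls out of a plan file (the sample plan line below is hypothetical):

```shell
# Write a one-line sample plan and extract the field the same way
# extract_plan_field does.
printf '**Language/Version**: TypeScript 5.x (NestJS 11)\n' > /tmp/plan-demo.md
grep '^\*\*Language/Version\*\*: ' /tmp/plan-demo.md | head -1 | \
  sed 's|^\*\*Language/Version\*\*: ||'
# → TypeScript 5.x (NestJS 11)
```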
|
||||
# Parse plan.md and set global variables: NEW_LANG, NEW_FRAMEWORK, NEW_DB, NEW_PROJECT_TYPE
|
||||
parse_plan_data() {
|
||||
local plan_file="$1"
|
||||
|
||||
if [[ ! -f "$plan_file" ]]; then
|
||||
log_error "Plan file not found: $plan_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -r "$plan_file" ]]; then
|
||||
log_error "Plan file is not readable: $plan_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Parsing plan data from $plan_file"
|
||||
|
||||
NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
|
||||
NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
|
||||
NEW_DB=$(extract_plan_field "Storage" "$plan_file")
|
||||
NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")
|
||||
|
||||
# Log what we found
|
||||
if [[ -n "$NEW_LANG" ]]; then
|
||||
log_info "Found language: $NEW_LANG"
|
||||
else
|
||||
log_warning "No language information found in plan"
|
||||
fi
|
||||
|
||||
[[ -n "$NEW_FRAMEWORK" ]] && log_info "Found framework: $NEW_FRAMEWORK"
|
||||
[[ -n "$NEW_DB" && "$NEW_DB" != "N/A" ]] && log_info "Found database: $NEW_DB"
|
||||
[[ -n "$NEW_PROJECT_TYPE" ]] && log_info "Found project type: $NEW_PROJECT_TYPE"
|
||||
}
|
||||
|
||||
# Format technology stack string from language and framework
|
||||
format_technology_stack() {
|
||||
local lang="$1"
|
||||
local framework="$2"
|
||||
local parts=()
|
||||
|
||||
[[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
|
||||
[[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")
|
||||
|
||||
if [[ ${#parts[@]} -eq 0 ]]; then
|
||||
echo ""
|
||||
elif [[ ${#parts[@]} -eq 1 ]]; then
|
||||
echo "${parts[0]}"
|
||||
else
|
||||
local result="${parts[0]}"
|
||||
for ((i=1; i<${#parts[@]}; i++)); do
|
||||
result="$result + ${parts[i]}"
|
||||
done
|
||||
echo "$result"
|
||||
fi
|
||||
}
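The joining behavior above reduces to a few lines in Python (an illustrative sketch only, with the same placeholder rules assumed):

```python
def format_technology_stack(lang: str, framework: str) -> str:
    """Join the non-placeholder parts with ' + ', as the bash helper does."""
    parts = []
    if lang and lang != "NEEDS CLARIFICATION":
        parts.append(lang)
    if framework and framework not in ("NEEDS CLARIFICATION", "N/A"):
        parts.append(framework)
    return " + ".join(parts)
```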

@@ -52,6 +52,12 @@ set -o pipefail
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Load modular components (extracted for maintainability)
# See each file for documentation of the functions it provides
source "$SCRIPT_DIR/plan-parser.sh"       # extract_plan_field, parse_plan_data, format_technology_stack
source "$SCRIPT_DIR/content-generator.sh" # get_project_structure, get_commands_for_language, get_language_conventions
source "$SCRIPT_DIR/agent-registry.sh"    # init_agent_registry, get_agent_file, get_agent_name, etc.

# Get all paths and variables from common functions
eval $(get_feature_paths)

@@ -1,26 +1,28 @@
import os
import re
import sys
from pathlib import Path

# Configuration
BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
# Configuration - default base directory, can be overridden via CLI argument
DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"

DIRECTORIES = [
    "00-overview",
    "01-requirements",
    "02-architecture",
    "03-implementation",
    "04-operations",
    "05-decisions",
    "06-tasks"
    "00-Overview",
    "01-Requirements",
    "02-Architecture",
    "03-Data-and-Storage",
    "04-Infrastructure-OPS",
    "05-Engineering-Guidelines",
    "06-Decision-Records"
]

LINK_PATTERN = re.compile(r'(\[([^\]]+)\]\(([^)]+)\))')

def get_file_map():
def get_file_map(base_dir: Path):
    """Builds a map of {basename}.md -> {prefixed_name}.md across all dirs."""
    file_map = {}
    for dir_name in DIRECTORIES:
        directory = BASE_DIR / dir_name
        directory = base_dir / dir_name
        if not directory.exists():
            continue
        for file_path in directory.glob("*.md"):
@@ -53,41 +55,14 @@ def get_file_map():
            if secondary_base:
                file_map[secondary_base] = f"{dir_name}/{actual_name}"

    # Hardcoded specific overrides for versioning and common typos
    overrides = {
        "fullftack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
        "fullstack-js-v1.5.0.md": "03-implementation/03-01-fullftack-js-v1.7.0.md",
        "system-architecture.md": "02-architecture/02-01-system-architecture.md",
        "api-design.md": "02-architecture/02-02-api-design.md",
        "data-model.md": "02-architecture/02-03-data-model.md",
        "backend-guidelines.md": "03-implementation/03-02-backend-guidelines.md",
        "frontend-guidelines.md": "03-implementation/03-03-frontend-guidelines.md",
        "document-numbering.md": "03-implementation/03-04-document-numbering.md",
        "testing-strategy.md": "03-implementation/03-05-testing-strategy.md",
        "deployment-guide.md": "04-operations/04-01-deployment-guide.md",
        "environment-setup.md": "04-operations/04-02-environment-setup.md",
        "monitoring-alerting.md": "04-operations/04-03-monitoring-alerting.md",
        "backup-recovery.md": "04-operations/04-04-backup-recovery.md",
        "maintenance-procedures.md": "04-operations/04-05-maintenance-procedures.md",
        "security-operations.md": "04-operations/04-06-security-operations.md",
        "incident-response.md": "04-operations/04-07-incident-response.md",
        "document-numbering-operations.md": "04-operations/04-08-document-numbering-operations.md",
        # Missing task files - redirect to README or best match
        "task-be-011-notification-audit.md": "06-tasks/README.md",
        "task-be-001-database-migrations.md": "06-tasks/TASK-BE-015-schema-v160-migration.md",  # Best match
    }

    for k, v in overrides.items():
        file_map[k] = v

    return file_map

def fix_links():
    file_map = get_file_map()
def fix_links(base_dir: Path):
    file_map = get_file_map(base_dir)
    changes_made = 0

    for dir_name in DIRECTORIES:
        directory = BASE_DIR / dir_name
        directory = base_dir / dir_name
        if not directory.exists():
            continue

@@ -107,8 +82,12 @@ def fix_links():
            if not target_path:
                continue

            # Special case: file:///d:/nap-dms.lcbp3/specs/
            clean_target_path = target_path.replace("file:///d:/nap-dms.lcbp3/specs/", "").replace("file:///D:/nap-dms.lcbp3/specs/", "")
            # Special case: file:/// absolute paths
            clean_target_path = re.sub(
                r'^file:///[a-zA-Z]:[/\\].*?specs[/\\]',
                '',
                target_path
            )

            resolved_locally = (file_path.parent / target_path).resolve()
            if resolved_locally.exists() and resolved_locally.is_file():
@@ -119,7 +98,7 @@ def fix_links():
            if target_filename in file_map:
                correct_relative_to_specs = file_map[target_filename]
                # Calculate relative path from current file's parent to the correct file
                correct_abs = (BASE_DIR / correct_relative_to_specs).resolve()
                correct_abs = (base_dir / correct_relative_to_specs).resolve()

                try:
                    new_relative_path = os.path.relpath(correct_abs, file_path.parent).replace(os.sep, "/")
@@ -143,4 +122,14 @@ def fix_links():
    print(f"\nTotal files updated: {changes_made}")

if __name__ == "__main__":
    fix_links()
    if len(sys.argv) > 1:
        base_dir = Path(sys.argv[1])
    else:
        base_dir = DEFAULT_BASE_DIR

    if not base_dir.exists():
        print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
        sys.exit(1)

    print(f"Scanning specs directory: {base_dir}")
    fix_links(base_dir)
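The relative-path computation at the heart of `fix_links` can be illustrated standalone (a sketch; the paths used here are made up, not real spec files):

```python
import os
from pathlib import Path

def relink(current_file: Path, specs_root: Path, correct_relative_to_specs: str) -> str:
    """Compute a corrected link the way fix_links does: resolve the canonical
    target under the specs root, then express it relative to the directory
    of the file being rewritten, with forward slashes for Markdown."""
    correct_abs = (specs_root / correct_relative_to_specs).resolve()
    return os.path.relpath(correct_abs, current_file.parent.resolve()).replace(os.sep, "/")
```

So a link inside `01-Requirements/` that should point at a file in `02-Architecture/` comes out with a single `../` hop, regardless of where the script is run from.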

157 .agents/scripts/powershell/common.ps1 (Normal file)
@@ -0,0 +1,157 @@
# PowerShell equivalents for key .agents bash scripts
# These provide Windows-native alternatives for the most commonly used functions

<#
.SYNOPSIS
Common utility functions for Spec-Kit PowerShell scripts.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/common.sh
Provides repository root detection, branch identification, and feature path resolution.
#>

function Get-RepoRoot {
    try {
        $root = git rev-parse --show-toplevel 2>$null
        if ($LASTEXITCODE -eq 0) { return $root.Trim() }
    } catch {}
    # Fallback: navigate up from script location
    return (Resolve-Path "$PSScriptRoot\..\..\..").Path
}

function Get-CurrentBranch {
    # Check environment variable first
    if ($env:SPECIFY_FEATURE) { return $env:SPECIFY_FEATURE }

    try {
        $branch = git rev-parse --abbrev-ref HEAD 2>$null
        if ($LASTEXITCODE -eq 0) { return $branch.Trim() }
    } catch {}

    # Fallback: find latest feature directory
    $repoRoot = Get-RepoRoot
    $specsDir = Join-Path $repoRoot "specs"
    if (Test-Path $specsDir) {
        $latest = Get-ChildItem -Path $specsDir -Directory |
            Where-Object { $_.Name -match '^\d{3}-' } |
            Sort-Object Name -Descending |
            Select-Object -First 1
        if ($latest) { return $latest.Name }
    }
    return "main"
}

function Test-HasGit {
    try {
        git rev-parse --show-toplevel 2>$null | Out-Null
        return $LASTEXITCODE -eq 0
    } catch { return $false }
}

function Test-FeatureBranch {
    param([string]$Branch, [bool]$HasGit)
    if (-not $HasGit) {
        Write-Warning "[specify] Git repository not detected; skipped branch validation"
        return $true
    }
    if ($Branch -notmatch '^\d{3}-') {
        Write-Error "Not on a feature branch. Current branch: $Branch"
        Write-Error "Feature branches should be named like: 001-feature-name"
        return $false
    }
    return $true
}

function Find-FeatureDir {
    param([string]$RepoRoot, [string]$BranchName)
    $specsDir = Join-Path $RepoRoot "specs"

    if ($BranchName -match '^(\d{3})-') {
        $prefix = $Matches[1]
        # Named $found to avoid clobbering the automatic $Matches variable
        $found = Get-ChildItem -Path $specsDir -Directory -Filter "$prefix-*" -ErrorAction SilentlyContinue
        if ($found.Count -eq 1) { return $found[0].FullName }
        if ($found.Count -gt 1) {
            Write-Warning "Multiple spec dirs with prefix '$prefix': $($found.Name -join ', ')"
        }
    }
    return Join-Path $specsDir $BranchName
}

function Get-FeaturePaths {
    $repoRoot = Get-RepoRoot
    $branch = Get-CurrentBranch
    $hasGit = Test-HasGit
    $featureDir = Find-FeatureDir -RepoRoot $repoRoot -BranchName $branch

    return [PSCustomObject]@{
        RepoRoot     = $repoRoot
        Branch       = $branch
        HasGit       = $hasGit
        FeatureDir   = $featureDir
        FeatureSpec  = Join-Path $featureDir "spec.md"
        ImplPlan     = Join-Path $featureDir "plan.md"
        Tasks        = Join-Path $featureDir "tasks.md"
        Research     = Join-Path $featureDir "research.md"
        DataModel    = Join-Path $featureDir "data-model.md"
        Quickstart   = Join-Path $featureDir "quickstart.md"
        ContractsDir = Join-Path $featureDir "contracts"
    }
}

<#
.SYNOPSIS
Check prerequisites for Spec-Kit workflows.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/check-prerequisites.sh
.PARAMETER RequireTasks
Require tasks.md to exist (for implementation phase)
.PARAMETER IncludeTasks
Include tasks.md in available docs list
.PARAMETER PathsOnly
Only output paths, no validation
.EXAMPLE
. .\common.ps1
$result = Check-Prerequisites -RequireTasks
#>
function Check-Prerequisites {
    param(
        [switch]$RequireTasks,
        [switch]$IncludeTasks,
        [switch]$PathsOnly
    )

    $paths = Get-FeaturePaths
    $valid = Test-FeatureBranch -Branch $paths.Branch -HasGit $paths.HasGit
    if (-not $valid) { throw "Not on a feature branch" }

    if ($PathsOnly) { return $paths }

    # Validate required files
    if (-not (Test-Path $paths.FeatureDir)) {
        throw "Feature directory not found: $($paths.FeatureDir). Run /speckit.specify first."
    }
    if (-not (Test-Path $paths.ImplPlan)) {
        throw "plan.md not found. Run /speckit.plan first."
    }
    if ($RequireTasks -and -not (Test-Path $paths.Tasks)) {
        throw "tasks.md not found. Run /speckit.tasks first."
    }

    # Build available docs list
    $docs = @()
    if (Test-Path $paths.Research) { $docs += "research.md" }
    if (Test-Path $paths.DataModel) { $docs += "data-model.md" }
    if ((Test-Path $paths.ContractsDir) -and (Get-ChildItem $paths.ContractsDir -ErrorAction SilentlyContinue)) {
        $docs += "contracts/"
    }
    if (Test-Path $paths.Quickstart) { $docs += "quickstart.md" }
    if ($IncludeTasks -and (Test-Path $paths.Tasks)) { $docs += "tasks.md" }

    return [PSCustomObject]@{
        FeatureDir    = $paths.FeatureDir
        AvailableDocs = $docs
        Paths         = $paths
    }
}

# Export functions when dot-sourced
Export-ModuleMember -Function * -ErrorAction SilentlyContinue 2>$null

138 .agents/scripts/powershell/create-new-feature.ps1 (Normal file)
@@ -0,0 +1,138 @@
<#
.SYNOPSIS
Create a new feature branch and spec directory.
.DESCRIPTION
PowerShell equivalent of .agents/scripts/bash/create-new-feature.sh
Creates a numbered feature branch and initializes the spec directory.
.PARAMETER Description
Natural language description of the feature.
.PARAMETER ShortName
Optional custom short name for the branch (2-4 words).
.PARAMETER Number
Optional manual branch number (overrides auto-detection).
.EXAMPLE
.\create-new-feature.ps1 -Description "Add user authentication" -ShortName "user-auth"
#>
param(
    [Parameter(Mandatory = $true, Position = 0)]
    [string]$Description,

    [string]$ShortName,
    [int]$Number = 0
)

$ErrorActionPreference = "Stop"

# Load common functions
. "$PSScriptRoot\common.ps1"

$repoRoot = Get-RepoRoot
$hasGit = Test-HasGit
$specsDir = Join-Path $repoRoot "specs"
if (-not (Test-Path $specsDir)) { New-Item -ItemType Directory -Path $specsDir | Out-Null }

# Stop words for smart branch name generation
$stopWords = @('i','a','an','the','to','for','of','in','on','at','by','with','from',
    'is','are','was','were','be','been','being','have','has','had',
    'do','does','did','will','would','should','could','can','may','might',
    'must','shall','this','that','these','those','my','your','our','their',
    'want','need','add','get','set')

function ConvertTo-BranchName {
    param([string]$Text)
    $Text.ToLower() -replace '[^a-z0-9]', '-' -replace '-+', '-' -replace '^-|-$', ''
}

function Get-SmartBranchName {
    param([string]$Desc)
    $words = ($Desc.ToLower() -replace '[^a-z0-9]', ' ').Split(' ', [StringSplitOptions]::RemoveEmptyEntries)
    $meaningful = $words | Where-Object { $_ -notin $stopWords -and $_.Length -ge 3 } | Select-Object -First 3
    if ($meaningful.Count -gt 0) { return ($meaningful -join '-') }
    return ConvertTo-BranchName $Desc
}
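The stop-word slug logic above translates directly to Python (an illustrative sketch, not shipped with the repo; the stop-word list is copied from the script):

```python
import re

STOP_WORDS = {"i","a","an","the","to","for","of","in","on","at","by","with","from",
              "is","are","was","were","be","been","being","have","has","had",
              "do","does","did","will","would","should","could","can","may","might",
              "must","shall","this","that","these","those","my","your","our","their",
              "want","need","add","get","set"}

def smart_branch_name(desc: str) -> str:
    """Keep up to three meaningful words (not stop words, at least 3 chars)."""
    words = re.sub(r"[^a-z0-9]", " ", desc.lower()).split()
    meaningful = [w for w in words if w not in STOP_WORDS and len(w) >= 3][:3]
    if meaningful:
        return "-".join(meaningful)
    # Fallback mirrors ConvertTo-BranchName: slugify the whole description
    return re.sub(r"[^a-z0-9]+", "-", desc.lower()).strip("-")
```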

function Get-HighestNumber {
    param([string]$Dir)
    $highest = 0
    if (Test-Path $Dir) {
        Get-ChildItem -Path $Dir -Directory | ForEach-Object {
            if ($_.Name -match '^(\d+)-') {
                $num = [int]$Matches[1]
                if ($num -gt $highest) { $highest = $num }
            }
        }
    }
    return $highest
}
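The numeric-prefix scan is the same in any language; a Python sketch for reference (names hypothetical):

```python
import re

def highest_feature_number(dir_names) -> int:
    """Largest numeric prefix among directory names like '003-user-auth'."""
    highest = 0
    for name in dir_names:
        m = re.match(r"(\d+)-", name)
        if m:
            highest = max(highest, int(m.group(1)))
    return highest
```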

# Generate branch suffix
if ($ShortName) {
    $branchSuffix = ConvertTo-BranchName $ShortName
} else {
    $branchSuffix = Get-SmartBranchName $Description
}

# Determine branch number
if ($Number -gt 0) {
    $branchNumber = $Number
} else {
    $highestSpec = Get-HighestNumber $specsDir
    $highestBranch = 0
    if ($hasGit) {
        try {
            git fetch --all --prune 2>$null | Out-Null
            $branches = git branch -a 2>$null
            foreach ($b in $branches) {
                $clean = $b.Trim('* ') -replace '^remotes/[^/]+/', ''
                if ($clean -match '^(\d{3})-') {
                    $num = [int]$Matches[1]
                    if ($num -gt $highestBranch) { $highestBranch = $num }
                }
            }
        } catch {}
    }
    $branchNumber = [Math]::Max($highestSpec, $highestBranch) + 1
}

$featureNum = "{0:D3}" -f $branchNumber
$branchName = "$featureNum-$branchSuffix"

# Truncate if exceeding GitHub's 244-byte limit
if ($branchName.Length -gt 244) {
    $maxSuffix = 244 - 4  # 3 digits + 1 hyphen
    $branchSuffix = $branchSuffix.Substring(0, $maxSuffix).TrimEnd('-')
    Write-Warning "Branch name truncated to 244 bytes"
    $branchName = "$featureNum-$branchSuffix"
}

# Create git branch
if ($hasGit) {
    git checkout -b $branchName
} else {
    Write-Warning "Git not detected; skipped branch creation for $branchName"
}

# Create feature directory and spec file
$featureDir = Join-Path $specsDir $branchName
New-Item -ItemType Directory -Path $featureDir -Force | Out-Null

# Single child path keeps Windows PowerShell 5.1 compatibility (multi-segment Join-Path is PowerShell 7+)
$templateFile = Join-Path $repoRoot ".specify\templates\spec-template.md"
$specFile = Join-Path $featureDir "spec.md"
if (Test-Path $templateFile) {
    Copy-Item $templateFile $specFile
} else {
    New-Item -ItemType File -Path $specFile -Force | Out-Null
}

$env:SPECIFY_FEATURE = $branchName

# Output
[PSCustomObject]@{
    BranchName = $branchName
    SpecFile   = $specFile
    FeatureNum = $featureNum
}

Write-Host "BRANCH_NAME: $branchName"
Write-Host "SPEC_FILE: $specFile"
Write-Host "FEATURE_NUM: $featureNum"
@@ -1,30 +1,33 @@
import os
import re
import sys
from pathlib import Path

# Configuration
BASE_DIR = Path(r"d:\nap-dms.lcbp3\specs")
# Configuration - default base directory, can be overridden via CLI argument
DEFAULT_BASE_DIR = Path(__file__).resolve().parent.parent.parent / "specs"

DIRECTORIES = [
    "00-overview",
    "01-requirements",
    "02-architecture",
    "03-implementation",
    "04-operations",
    "05-decisions"
    "00-Overview",
    "01-Requirements",
    "02-Architecture",
    "03-Data-and-Storage",
    "04-Infrastructure-OPS",
    "05-Engineering-Guidelines",
    "06-Decision-Records"
]

# Regex for Markdown links: [label](path)
# Handles relative paths, absolute file paths, and anchors
LINK_PATTERN = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')

def verify_links():
def verify_links(base_dir: Path):
    results = {
        "total_links": 0,
        "broken_links": []
    }

    for dir_name in DIRECTORIES:
        directory = BASE_DIR / dir_name
        directory = base_dir / dir_name
        if not directory.exists():
            print(f"Directory not found: {directory}")
            continue
@@ -53,7 +56,7 @@ def verify_links():
            # 2. Handle relative paths
            # Remove anchor if present
            clean_target_str = target.split("#")[0]
            if not clean_target_str:  # It was just an anchor to another file but path is empty? Wait.
            if not clean_target_str:
                continue

            # Resolve path relative to current file
@@ -71,8 +74,17 @@ def verify_links():
    return results

if __name__ == "__main__":
    print(f"Starting link verification in {BASE_DIR}...")
    audit_results = verify_links()
    if len(sys.argv) > 1:
        base_dir = Path(sys.argv[1])
    else:
        base_dir = DEFAULT_BASE_DIR

    if not base_dir.exists():
        print(f"Error: Directory not found: {base_dir}", file=sys.stderr)
        sys.exit(1)

    print(f"Starting link verification in {base_dir}...")
    audit_results = verify_links(base_dir)

    print(f"\nAudit Summary:")
    print(f"Total Internal Links Scanned: {audit_results['total_links']}")
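The link extraction and anchor stripping in the hunks above can be demonstrated in isolation (a sketch; the external-link skip shown here is an assumption, since the hunk that handles absolute links is not visible in this diff):

```python
import re

LINK_PATTERN = re.compile(r'\[([^\]]+)\]\(([^)]+)\)')

def internal_targets(markdown: str):
    """Yield link targets with any '#anchor' fragment removed, skipping
    pure-anchor links the way verify_links does."""
    for _label, target in LINK_PATTERN.findall(markdown):
        if target.startswith(("http://", "https://", "mailto:")):
            continue  # assumed handled elsewhere in the full script
        path = target.split("#")[0]
        if path:
            yield path
```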

@@ -1,6 +1,7 @@
---
name: speckit.checklist
description: Generate a custom checklist for the current feature based on user requirements.
version: 1.0.0
---

## Checklist Purpose: "Unit Tests for English"
@@ -212,7 +213,7 @@ You are the **Antigravity Quality Gatekeeper**. Your role is to validate the qua

b. **Structure Reference**: Generate the checklist following the canonical template in `templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
6. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
@@ -1,6 +1,7 @@
---
name: speckit.constitution
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
version: 1.0.0
handoffs:
  - label: Build Specification
    agent: speckit.specify

@@ -1,6 +1,7 @@
---
name: speckit.quizme
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
version: 1.0.0
handoffs:
  - label: Clarify Spec Requirements
    agent: speckit.clarify
@@ -38,8 +39,9 @@ Execution steps:
   - Challenge security (e.g., "You rely on client-side validation here, but what if I curl the API?").

4. **The Quiz Loop**:
   - Present 3-5 challenging scenarios *one by one*.
   - Present 3-5 challenging scenarios _one by one_.
   - Format:

     > **Scenario**: [Describe a plausible edge case or failure]
     > **Current Spec**: [Quote where the spec implies behavior or is silent]
     > **The Quiz**: What should the system do here?
@@ -62,4 +64,4 @@ Execution steps:

- **Be a Skeptic**: Don't assume the happy path works.
- **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
- **Don't be annoying**: Focus on *critical* flaws, not nitpicks.
- **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.
199 .agents/skills/speckit.security-audit/SKILL.md (Normal file)
@@ -0,0 +1,199 @@

---
name: speckit.security-audit
description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
version: 1.0.0
depends-on:
  - speckit.checker
---

## Role

You are the **Antigravity Security Sentinel**. Your mission is to identify security vulnerabilities, authorization gaps, and compliance issues specific to the LCBP3-DMS project before they reach production.

## Task

Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.

## Context Loading

Before auditing, load the security context:

1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions
4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)

## Execution Steps

### Phase 1: OWASP Top 10 Scan

Scan the `backend/src/` directory for each OWASP category:

| # | OWASP Category | What to Check | Files to Scan |
| --- | ------------------------- | ---------------------------------------------------------------------------------------- | ------------------------------------------------- |
| A01 | Broken Access Control | Missing `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on controllers, unprotected routes | `**/*.controller.ts` |
| A02 | Cryptographic Failures | Hardcoded secrets, weak hashing, missing HTTPS enforcement | `**/*.ts`, `docker-compose*.yml` |
| A03 | Injection | Raw SQL queries, unsanitized user input in TypeORM queries, template literals in queries | `**/*.service.ts`, `**/*.repository.ts` |
| A04 | Insecure Design | Missing rate limiting on auth endpoints, no idempotency checks on mutations | `**/*.controller.ts`, `**/*.guard.ts` |
| A05 | Security Misconfiguration | Missing Helmet.js, CORS misconfiguration, debug mode in production | `main.ts`, `app.module.ts`, `docker-compose*.yml` |
| A06 | Vulnerable Components | Outdated dependencies with known CVEs | `package.json`, `pnpm-lock.yaml` |
| A07 | Auth Failures | Missing brute-force protection, weak password policy, JWT misconfiguration | `auth/`, `**/*.strategy.ts` |
| A08 | Data Integrity | Missing input validation, unvalidated file types, missing CSRF protection | `**/*.dto.ts`, `**/*.interceptor.ts` |
| A09 | Logging Failures | Missing audit logs for security events, sensitive data in logs | `**/*.service.ts`, `**/*.interceptor.ts` |
| A10 | SSRF | Unrestricted outbound requests, user-controlled URLs | `**/*.service.ts` |

### Phase 2: CASL Authorization Audit

1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`
2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:

   ```bash
   # Find controllers without CASL guard
   grep -rL "CaslAbilityGuard" backend/src/modules/*/*.controller.ts
   ```

3. **Verify 4-Level RBAC enforcement**:
   - Level 1: System Admin (full access)
   - Level 2: Project Admin (project-scoped)
   - Level 3: Department Lead (department-scoped)
   - Level 4: User (own-records only)

4. **Check ability definitions** — ensure every endpoint has:
   - `@CheckPolicies()` or `@Can()` decorator
   - Correct action (`read`, `create`, `update`, `delete`, `manage`)
   - Correct subject (entity class, not string)

5. **Cross-reference with routes** — verify:
   - No public endpoints that should be protected
   - No endpoints with broader permissions than required (principle of least privilege)
   - Query scoping: users can only query their own records (unless admin)
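
The coverage scan in steps 2 and 4 can be automated with a rough script like the following (a sketch only; the file layout and decorator names are taken from the steps above, and real controllers may declare guards in ways this regex misses):

```python
import re
from pathlib import Path

GUARD_RE = re.compile(r"CaslAbilityGuard")
POLICY_RE = re.compile(r"@CheckPolicies\(|@Can\(")

def audit_controller(source: str) -> dict:
    """Flag a controller source that lacks the CASL guard or policy decorators."""
    return {
        "has_guard": bool(GUARD_RE.search(source)),
        "has_policies": bool(POLICY_RE.search(source)),
    }

def audit_tree(root: Path) -> dict:
    """Run the check over every *.controller.ts file under root."""
    return {p.name: audit_controller(p.read_text()) for p in root.rglob("*.controller.ts")}
```

A file reported with `has_guard: False` is a candidate A01 finding, but still needs manual review (the guard may be applied globally or at the module level).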

### Phase 3: File Upload Security (ClamAV)

Check LCBP3-DMS-specific file handling per ADR-016:

1. **Two-Phase Storage verification**:
   - Upload goes to temp directory first → scanned by ClamAV → moved to permanent
   - Check for direct writes to permanent storage (violation)

2. **ClamAV integration**:
   - Verify ClamAV service is configured in `docker-compose*.yml`
   - Check that file upload endpoints call ClamAV scan before commit
   - Verify rejection flow for infected files

3. **File type validation**:
   - Check allowed MIME types against whitelist
   - Verify file extension validation exists
   - Check for double-extension attacks (e.g., `file.pdf.exe`)

4. **File size limits**:
   - Verify upload size limits are enforced
   - Check for path traversal in filenames (`../`, `..\\`)
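
The filename checks in steps 3 and 4 can be sketched as a single validator (illustrative only; the extension whitelist here is an assumption, not the project's actual MIME/extension policy):

```python
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".xlsx", ".png", ".jpg"}  # hypothetical whitelist

def is_safe_filename(filename: str) -> bool:
    """Reject path traversal, double extensions, and non-whitelisted types."""
    # Path traversal: no separators or parent references allowed
    if "/" in filename or "\\" in filename or ".." in filename:
        return False
    parts = filename.lower().split(".")
    # Exactly one extension: rejects double-extension names like file.pdf.exe
    if len(parts) != 2 or not parts[0]:
        return False
    return "." + parts[1] in ALLOWED_EXTENSIONS
```

Note this validates the name only; ClamAV scanning and MIME sniffing of the actual bytes remain separate checks.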

### Phase 4: LCBP3-DMS-Specific Checks

1. **Idempotency** — verify all POST/PUT/PATCH endpoints check the `Idempotency-Key` header:

   ```bash
   # Find mutation endpoints without idempotency
   grep -rn "@Post\|@Put\|@Patch" backend/src/modules/*/*.controller.ts
   # Cross-reference with idempotency guard usage
   grep -rn "IdempotencyGuard\|Idempotency-Key" backend/src/
   ```

2. **Optimistic Locking** — verify document entities use `@VersionColumn()`:

   ```bash
   grep -rn "VersionColumn" backend/src/modules/*/entities/*.entity.ts
   ```

3. **Redis Redlock** — verify document numbering uses distributed locks:

   ```bash
   grep -rn "Redlock\|redlock\|acquireLock" backend/src/
   ```

4. **Password Security** — verify bcrypt with 12+ salt rounds:

   ```bash
   grep -rn "bcrypt\|saltRounds\|genSalt" backend/src/
   ```

5. **Rate Limiting** — verify throttle guard on auth endpoints:

   ```bash
   grep -rn "ThrottlerGuard\|@Throttle" backend/src/modules/auth/
   ```

6. **Environment Variables** — ensure no `.env` files for production:
   - Check for `.env` files committed to git
   - Verify Docker compose uses `environment:` section, not `env_file:`

## Severity Classification

| Severity | Description | Response |
| -------------- | ----------------------------------------------------- | ----------------------- |
| 🔴 **Critical** | Exploitable vulnerability, data exposure, auth bypass | Immediate fix required |
| 🟠 **High** | Missing security control, potential escalation path | Fix before next release |
| 🟡 **Medium** | Best practice violation, defense-in-depth gap | Plan fix in sprint |
| 🟢 **Low** | Informational, minor hardening opportunity | Track in backlog |

## Report Format

Generate a structured report:

```markdown
# 🔒 Security Audit Report

**Date**: <date>
**Scope**: <backend/frontend/both>
**Auditor**: Antigravity Security Sentinel

## Summary

| Severity | Count |
| ---------- | ----- |
| 🔴 Critical | X |
| 🟠 High | X |
| 🟡 Medium | X |
| 🟢 Low | X |

## Findings

### [SEV-001] <Title> — 🔴 Critical

**Category**: OWASP A01 / CASL / ClamAV / LCBP3-Specific
**File**: `<path>:<line>`
**Description**: <what is wrong>
**Impact**: <what could happen>
**Recommendation**: <how to fix>
**Code Example**:

\`\`\`typescript
// Before (vulnerable)
...
// After (fixed)
...
\`\`\`

## CASL Coverage Matrix

| Module | Controller | Guard? | Policies? | Level |
| ------ | --------------- | ------ | --------- | ------------ |
| auth | AuthController | ✅ | ✅ | N/A (public) |
| users | UsersController | ✅ | ✅ | L1-L4 |
| ... | ... | ... | ... | ... |

## Recommendations Priority

1. <Critical fix 1>
2. <Critical fix 2>
...
```

## Operating Principles

- **Read-Only**: This skill only reads and reports. Never modify code.
- **Evidence-Based**: Every finding must include the exact file path and line number.
- **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
- **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
- **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.
@@ -1,6 +1,7 @@
---
name: speckit.specify
description: Create or update the feature specification from a natural language feature description.
version: 1.0.0
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
@@ -46,6 +47,7 @@ Given that feature description, do this:
2. **Check for existing branches before creating new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

   ```bash
   git fetch --all --prune
   ```
@@ -77,7 +79,6 @@ Given that feature description, do this:
3. Load `templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
@@ -150,7 +151,6 @@ Given that feature description, do this:
   - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:

   - **If all items pass**: Mark checklist complete and proceed to step 6

   - **If items fail (excluding [NEEDS CLARIFICATION])**:
@@ -174,7 +174,7 @@ Given that feature description, do this:
   **Suggested Answers**:

   | Option | Answer | Implications |
   |--------|--------|--------------|
   | ------ | ------------------------- | ------------------------------------- |
   | A | [First suggested answer] | [What this means for the feature] |
   | B | [Second suggested answer] | [What this means for the feature] |
   | C | [Third suggested answer] | [What this means for the feature] |
@@ -1,6 +1,9 @@
---
name: speckit.taskstoissues
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
description: Convert existing tasks into actionable, dependency-ordered issues for the feature based on available design artifacts.
version: 1.1.0
depends-on:
- speckit.tasks
tools: ['github/github-mcp-server/issue_write']
---
@@ -14,22 +17,190 @@ You **MUST** consider the user input before proceeding (if not empty).

## Role

You are the **Antigravity Tracker Integrator**. Your role is to synchronize technical tasks with external project management systems like GitHub Issues. You ensure that every piece of work has a clear, tracked identity for collaborative execution.
You are the **Antigravity Tracker Integrator**. Your role is to synchronize technical tasks with external project management systems (GitHub Issues or Gitea Issues). You ensure that every piece of work has a clear, tracked identity for collaborative execution.

## Task

### Outline

1. Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:
Convert all tasks from `tasks.md` into well-structured issues on the appropriate platform (GitHub or Gitea), preserving dependency order, phase grouping, and labels.

```bash
git config --get remote.origin.url
```
### Execution Steps

**ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL**
1. **Load Task Data**:
Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute.
For single quotes in args like "I'm Groot", use escape syntax: e.g 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.
2. **Extract tasks path** from the executed script output.

**UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
3. **Detect Platform** — Get the Git remote and determine the platform:

```bash
git config --get remote.origin.url
```

| Remote URL Pattern | Platform | API |
| ---------------------------------------- | ----------- | --------------------------- |
| `github.com` | GitHub | GitHub MCP or REST API |
| `gitea.*`, custom domain with `/api/v1/` | Gitea | Gitea REST API |
| Other | Unsupported | **STOP** with error message |

**Platform Detection Rules**:
- If URL contains `github.com` → GitHub
- If URL contains a known Gitea domain (check `$ARGUMENTS` for hints, or try `<host>/api/v1/version`) → Gitea
- If `$ARGUMENTS` explicitly specifies platform (e.g., `--platform gitea`) → use that
- If uncertain → **ASK** the user which platform to use

> **UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL**
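The detection rules above can be sketched as a small shell helper. This is a minimal sketch: the function name, the sample URLs, and the bare `*gitea*` pattern are illustrative assumptions, not part of the skill; a real check would also probe `<host>/api/v1/version` and fall back to asking the user.

```shell
# Hypothetical classifier mirroring the detection rules above.
# "unknown" means: stop and ask the user which platform to use.
detect_platform() {
  url="$1"
  case "$url" in
    *github.com*) echo "github" ;;
    *gitea*)      echo "gitea" ;;
    *)            echo "unknown" ;;
  esac
}

detect_platform "git@github.com:owner/repo.git"       # github
detect_platform "https://gitea.example.com/o/r.git"   # gitea
```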

4. **Parse `tasks.md`** — Extract structured data for each task:

| Field | Source | Example |
| --------------- | ---------------------------- | -------------------------- |
| Task ID | `T001`, `T002`, etc. | `T001` |
| Phase | Phase heading | `Phase 1: Setup` |
| Description | Task text after ID | `Create project structure` |
| File paths | Paths in description | `src/models/user.py` |
| Parallel marker | `[P]` flag | `true`/`false` |
| User Story | `[US1]`, `[US2]`, etc. | `US1` |
| Dependencies | Sequential ordering in phase | `T001 → T002` |
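As an illustration of the extraction, one line of `tasks.md` can be picked apart with standard tools. The sample line and the exact token shapes are assumptions based on the table above.

```shell
# Hypothetical tasks.md line in the shape described by the table above.
line='- [ ] T001 [P] [US1] Create project structure in src/models/user.py'

# Task ID: first T followed by digits.
task_id=$(printf '%s' "$line" | grep -o 'T[0-9][0-9]*' | head -n1)

# Parallel marker: literal [P] present?
parallel=false
printf '%s' "$line" | grep -q '\[P\]' && parallel=true

# User story reference: first US followed by digits.
story=$(printf '%s' "$line" | grep -o 'US[0-9][0-9]*' | head -n1)

echo "$task_id parallel=$parallel story=$story"
```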

5. **Load Feature Context** (for issue body enrichment):
- Read `spec.md` for requirement references
- Read `plan.md` for architecture context (if exists)
- Map tasks to requirements where possible

6. **Generate Issue Data** — For each task, create an issue with:

### Issue Title Format

```
[<TaskID>] <Description>
```

Example: `[T001] Create project structure per implementation plan`

### Issue Body Template

```markdown
## Task Details

**Task ID**: <TaskID>
**Phase**: <Phase Name>
**Parallel**: <Yes/No>
**User Story**: <Story reference, if any>

## Description

<Full task description from tasks.md>

## File Paths

- `<file path 1>`
- `<file path 2>`

## Acceptance Criteria

- [ ] Implementation complete per task description
- [ ] Relevant tests pass (if applicable)
- [ ] No regressions introduced

## Context

**Feature**: <Feature name from spec.md>
**Spec Reference**: <Requirement ID if mapped>

---

_Auto-generated by speckit.taskstoissues from `tasks.md`_
```

7. **Apply Labels** — Assign labels based on task metadata:

| Condition | Label |
| ---------------------------------- | ------------------ |
| Phase 1 (Setup) | `phase:setup` |
| Phase 2 (Foundation) | `phase:foundation` |
| Phase 3+ (User Stories) | `phase:story` |
| Final Phase (Polish) | `phase:polish` |
| Has `[P]` marker | `parallel` |
| Has `[US1]` marker | `story:US1` |
| Task creates test files | `type:test` |
| Task creates models/entities | `type:model` |
| Task creates services | `type:service` |
| Task creates controllers/endpoints | `type:api` |
| Task creates UI components | `type:ui` |
**Label Creation**: If labels don't exist on the repo, create them first before assigning.
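One way to read the phase rows of the table is as a mapping from phase heading to label. A sketch, where the pattern order and heading shapes are assumptions (the `*Polish*` case must be tried before the catch-all `Phase*` case):

```shell
# Hypothetical phase-to-label mapping mirroring the table above.
phase_label() {
  case "$1" in
    "Phase 1"*) echo "phase:setup" ;;
    "Phase 2"*) echo "phase:foundation" ;;
    *Polish*)   echo "phase:polish" ;;      # final phase, any number
    "Phase"*)   echo "phase:story" ;;       # Phase 3+ user stories
    *)          echo "" ;;
  esac
}

phase_label "Phase 3: User Story 1"   # phase:story
```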

8. **Set Milestone** (optional):
- If `$ARGUMENTS` includes `--milestone "<name>"`, assign all issues to that milestone
- If milestone doesn't exist, create it with the feature name as the title

9. **Create Issues** — Execute in dependency order:

**For GitHub**: Use the GitHub MCP server tool `issue_write` to create issues.

**For Gitea**: Use the Gitea REST API:

```bash
# Create issue
curl -s -X POST "https://<gitea-host>/api/v1/repos/<owner>/<repo>/issues" \
  -H "Authorization: token <GITEA_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "[T001] Create project structure",
    "body": "<issue body>",
    "labels": [<label_ids>]
  }'
```

**Authentication**:
- GitHub: Uses MCP server (pre-authenticated)
- Gitea: Requires `GITEA_TOKEN` environment variable. If not set, **STOP** and ask user to provide it.
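A minimal guard for the Gitea path could look like the following. The variable name `GITEA_TOKEN` comes from the rule above; the function name and message are illustrative.

```shell
# Fail fast when the Gitea token is missing, per the authentication rule above.
require_gitea_token() {
  if [ -z "${GITEA_TOKEN:-}" ]; then
    echo "GITEA_TOKEN is not set; stopping. Ask the user to provide it." >&2
    return 1
  fi
}
```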

**Rate Limiting**:
- Create issues sequentially with a 500ms delay between requests
- If rate limited (HTTP 429), wait and retry with exponential backoff
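The retry rule can be sketched as a generic wrapper around any failing command. This is a sketch under the stated policy; the attempt cap and names are illustrative, and a real implementation would inspect the HTTP status so that only 429 responses trigger the backoff.

```shell
# Run a command; on failure sleep 1s, 2s, 4s, ... up to max_attempts tries.
retry_with_backoff() {
  max_attempts="$1"; shift
  delay=1
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```

In the Gitea path this would wrap each `curl` call from step 9.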

10. **Track Created Issues** — Maintain a mapping of `TaskID → IssueNumber`:

```markdown
| Task ID | Issue # | Title | URL |
| ------- | ------- | ----------------------------- | ----- |
| T001 | #42 | Create project structure | <url> |
| T002 | #43 | Configure database connection | <url> |
```

11. **Update `tasks.md`** (optional — ask user first):
- Append issue references to each task line:
```
- [ ] T001 Create project structure (#42)
```

12. **Report Completion**:
- Total issues created
- Issues by phase
- Issues by label
- Any failures (with retry suggestions)
- Link to issue board/project
- Mapping table (Task ID → Issue #)

## Arguments

| Argument | Description | Default |
| ---------------------------- | --------------------------------------- | ------------- |
| `--platform <github\|gitea>` | Force platform detection | Auto-detect |
| `--milestone "<name>"` | Assign issues to milestone | None |
| `--dry-run` | Preview issues without creating | `false` |
| `--labels-only` | Only create labels, don't create issues | `false` |
| `--update-tasks` | Auto-update tasks.md with issue refs | `false` (ask) |

## Operating Principles

- **Idempotency**: Check if an issue with the same title already exists before creating duplicates
- **Dependency Order**: Create issues in task execution order so dependencies are naturally numbered
- **Rich Context**: Include enough context in each issue body that it can be understood standalone
- **Label Consistency**: Use a consistent label taxonomy across all issues
- **Platform Safety**: Never create issues on repos that don't match the git remote
- **Dry Run Support**: Always support `--dry-run` to preview before creating
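For the idempotency principle, a duplicate check against an already-fetched list of titles might look like this sketch; in practice the title list would come from the platform's issue-listing or search API, and everything here is illustrative.

```shell
# Exact-title membership test over a list of existing issue titles.
issue_exists() {
  title="$1"; shift
  printf '%s\n' "$@" | grep -Fxq "$title"
}

if issue_exists "[T001] Create project structure" "[T001] Create project structure"; then
  echo "skip: duplicate title"
fi
```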

@@ -4,34 +4,64 @@ description: Run the full speckit pipeline from specification to analysis in one

# Workflow: speckit.all

This meta-workflow orchestrates the complete specification pipeline.
This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.

## Pipeline Steps
## Preparation Phase (Steps 1-5)

1. **Specify** (`/speckit.specify`):
- Use the `view_file` tool to read: `.agent/skills/speckit.specify/SKILL.md`
- Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
- Execute with user's feature description
- Creates: `spec.md`

2. **Clarify** (`/speckit.clarify`):
- Use the `view_file` tool to read: `.agent/skills/speckit.clarify/SKILL.md`
- Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
- Execute to resolve ambiguities
- Updates: `spec.md`

3. **Plan** (`/speckit.plan`):
- Use the `view_file` tool to read: `.agent/skills/speckit.plan/SKILL.md`
- Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
- Execute to create technical design
- Creates: `plan.md`

4. **Tasks** (`/speckit.tasks`):
- Use the `view_file` tool to read: `.agent/skills/speckit.tasks/SKILL.md`
- Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
- Execute to generate task breakdown
- Creates: `tasks.md`

5. **Analyze** (`/speckit.analyze`):
- Use the `view_file` tool to read: `.agent/skills/speckit.analyze/SKILL.md`
- Execute to validate consistency
- Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
- Execute to validate consistency across spec, plan, and tasks
- Output: Analysis report
- **Gate**: If critical issues found, stop and fix before proceeding

## Implementation Phase (Steps 6-7)

6. **Implement** (`/speckit.implement`):
- Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
- Execute all tasks from `tasks.md` with anti-regression protocols
- Output: Working implementation

7. **Check** (`/speckit.checker`):
- Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
- Run static analysis (linters, type checkers, security scanners)
- Output: Checker report

## Verification Phase (Steps 8-10)

8. **Test** (`/speckit.tester`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
- Run tests with coverage
- Output: Test + coverage report

9. **Review** (`/speckit.reviewer`):
- Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
- Perform code review
- Output: Review report with findings

10. **Validate** (`/speckit.validate`):
- Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
- Verify implementation matches spec requirements
- Output: Validation report (pass/fail)

## Usage

@@ -39,9 +69,17 @@ This meta-workflow orchestrates the complete specification pipeline.
/speckit.all "Build a user authentication system with OAuth2 support"
```

## Pipeline Comparison

| Pipeline | Steps | Use When |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit.prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
| `/speckit.all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |

## On Error

If any step fails, stop the pipeline and report:

- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")
@@ -8,7 +8,7 @@ description: Create or update the project constitution from interactive or provi
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.constitution/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -9,7 +9,7 @@ description: Create or update the feature specification from a natural language
- This is typically the starting point of a new feature.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.specify/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Identify underspecified areas in the current feature spec by asking
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.clarify/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Execute the implementation planning workflow using the plan templat
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.plan/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Generate an actionable, dependency-ordered tasks.md for the feature
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.tasks/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -10,7 +10,7 @@ description: Perform a non-destructive cross-artifact consistency and quality an
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.analyze/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Execute the implementation plan by processing and executing all tas
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.implement/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -10,7 +10,7 @@ description: Run static analysis tools and aggregate results.
- The user may specify paths to check or run on entire project.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.checker/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -10,7 +10,7 @@ description: Execute tests, measure coverage, and report results.
- The user may specify test paths, options, or just run all tests.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.tester/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Perform code review with actionable feedback and suggestions.
- The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.reviewer/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Validate that implementation matches specification requirements.
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.validate/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
@@ -9,9 +9,11 @@ Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-00

## Steps

// turbo

1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting

2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables
// turbo 2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables

3. **Scaffold module folder**

@@ -40,10 +42,10 @@ backend/src/modules/<module-name>/

9. **Register in AppModule** — import the new module in `app.module.ts`.

10. **Write unit test** — cover service methods with Jest mocks. Run:
// turbo 10. **Write unit test** — cover service methods with Jest mocks. Run:

```bash
pnpm test:watch
```

11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
// turbo 11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
108
.agents/workflows/schema-change.md
Normal file
@@ -0,0 +1,108 @@
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---

# Schema Change Workflow

Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md` — **NO TypeORM migrations**.

## Pre-Change Checklist

- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)

## Steps

1. **Read current schema** — load the full schema file:

```
specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
```

2. **Read data dictionary** — understand current field definitions:

```
specs/03-Data-and-Storage/03-01-data-dictionary.md
```

// turbo 3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:

- Tables being modified/created
- Columns being added/renamed/dropped
- Foreign key relationships affected
- Indexes being added/modified
- Seed data impact (if any)

4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
- Add/modify table definitions
- Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
- Add inline comments for new columns explaining purpose
- Ensure `DEFAULT` values and `NOT NULL` constraints are correct
- Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed

> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.

5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
- Add new tables/columns with descriptions
- Update data types and constraints
- Document business rules for new fields
- Add enum value definitions if applicable

6. **Update seed data** (if applicable):
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions

7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
- Map ONLY columns defined in schema SQL
- Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
- Add `@VersionColumn()` if optimistic locking is needed

8. **Update DTOs** — if new columns are exposed via API:
- Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
- Add `class-validator` decorators for all new fields
- Never use `any` type

// turbo 9. **Run type check** — verify no TypeScript errors:

```bash
cd backend && npx tsc --noEmit
```

10. **Generate SQL diff** — create a summary of changes for the user to apply manually:

```
-- Schema Change Summary
-- Date: <current date>
-- Feature: <feature name>
-- Tables affected: <list>
--
-- ⚠️ Apply this SQL to the live database manually:

ALTER TABLE ...;
-- or
CREATE TABLE ...;
```

11. **Notify user** — present the SQL diff and remind them:
- Apply the SQL change to the live database manually
- Verify the change doesn't break existing data
- Run `pnpm test` after applying to confirm entity mappings work

## Common Patterns

| Change Type | Template |
| ----------- | -------------------------------------------------------------- |
| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table | Full `CREATE TABLE` with constraints and indexes |
| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |

## On Error

- If schema SQL has syntax errors → fix and re-validate with `tsc --noEmit`
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
@@ -8,20 +8,20 @@ This workflow orchestrates the sequential execution of the Speckit preparation p

1. **Step 1: Specify (Skill 02)**
- Goal: Create or update the `spec.md` based on user input.
- Action: Read and execute `.agent/skills/speckit.specify/SKILL.md`.
- Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.

2. **Step 2: Clarify (Skill 03)**
- Goal: Refine the `spec.md` by identifying and resolving ambiguities.
- Action: Read and execute `.agent/skills/speckit.clarify/SKILL.md`.
- Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.

3. **Step 3: Plan (Skill 04)**
- Goal: Generate `plan.md` from the finalized spec.
- Action: Read and execute `.agent/skills/speckit.plan/SKILL.md`.
- Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.

4. **Step 4: Tasks (Skill 05)**
- Goal: Generate actional `tasks.md` from the plan.
- Action: Read and execute `.agent/skills/speckit.tasks/SKILL.md`.
- Goal: Generate actionable `tasks.md` from the plan.
- Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.

5. **Step 5: Analyze (Skill 06)**
- Goal: Validate consistency across all design artifacts (spec, plan, tasks).
- Action: Read and execute `.agent/skills/speckit.analyze/SKILL.md`.
- Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.

@@ -8,7 +8,7 @@ description: Generate a custom checklist for the current feature based on user r
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.checklist/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Compare two versions of a spec or plan to highlight changes.
- The user has provided an input prompt (optional file paths or version references).

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.diff/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Migrate existing projects into the speckit structure by generating
- The user has provided an input prompt (path to analyze, feature name).

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.migrate/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -10,7 +10,7 @@ description: Challenge the specification with Socratic questioning to identify l
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.quizme/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -10,7 +10,7 @@ description: Display a dashboard showing feature status, completion percentage,
- The user may optionally specify a feature to focus on.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.status/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.

@@ -8,7 +8,7 @@ description: Convert existing tasks into actionable, dependency-ordered GitHub i
- The user has provided an input prompt. Treat this as the primary input for the skill.

2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agent/skills/speckit.taskstoissues/SKILL.md`
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`

3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
@@ -12,12 +12,14 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.

## 🏗️ Project Overview

**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0
**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)

- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
  with complex multi-level approval workflows.
- **Infrastructure:** QNAP Container Station (Docker Compose), Nginx Proxy Manager (Reverse Proxy),
  Gitea (Git + CI/CD), n8n (Workflow Automation), Prometheus + Loki + Grafana (Monitoring/Logging)
- **Infrastructure:**
  - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
  - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
  - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)

## 💻 Tech Stack & Constraints

@@ -26,6 +28,7 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.

- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
  TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**

## 🛡️ Security & Integrity Rules

@@ -36,31 +39,58 @@ You value **Data Integrity**, **Security**, and **Clean Architecture**.

4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
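Rule 2's two-phase upload flow can be sketched as follows. This is a minimal in-memory sketch, not the project's actual StorageService API: the `Map` store, the `temp/` and `uploads/` path prefixes, and the function names are all illustrative assumptions, and the ClamAV scan that would gate phase 2 is omitted.

```typescript
// Sketch of Two-Phase Storage: files land in a temp area first and are
// only moved to permanent storage after validation succeeds.
// The store shape and path prefixes are illustrative assumptions.
type FileStore = Map<string, string>;

// Phase 1: accept the upload into a temp location only.
function uploadToTemp(store: FileStore, name: string, data: string): string {
  const tempKey = `temp/${name}`;
  store.set(tempKey, data);
  return tempKey;
}

// Phase 2: commit the temp file to permanent storage (e.g. after a
// successful virus scan), then remove the temp copy.
function commitToPermanent(store: FileStore, tempKey: string): string {
  const data = store.get(tempKey);
  if (data === undefined) throw new Error(`no temp file at ${tempKey}`);
  const permKey = tempKey.replace(/^temp\//, "uploads/");
  store.set(permKey, data);
  store.delete(tempKey);
  return permKey;
}
```

A side benefit of this split is that orphaned temp files can be garbage-collected on a schedule without ever touching committed documents.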

## 📋 Workflow & Spec Guidelines

- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`** before writing queries.
- Adhere to ADRs: ADR-001 (Workflow Engine), ADR-002 (Doc Numbering), ADR-009 (DB Strategy),
  ADR-011 (App Router), ADR-013 (Form Handling), ADR-016 (Security).
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.

### ADR Reference (All 17 + Patch)

Adhere to all ADRs in `specs/06-Decision-Records/`:

| ADR     | Topic                     | Key Decision                                       |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine           | Unified state machine for document workflows       |
| ADR-002 | Doc Numbering             | Redis Redlock + DB optimistic locking              |
| ADR-005 | Technology Stack          | NestJS + Next.js + MariaDB + Redis                 |
| ADR-006 | Redis Caching             | Cache strategy and invalidation patterns           |
| ADR-008 | Email Notification        | BullMQ queue-based email/LINE/in-app               |
| ADR-009 | DB Strategy               | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring        | Prometheus + Loki + Grafana stack                  |
| ADR-011 | App Router                | Next.js App Router with RSC patterns               |
| ADR-012 | UI Components             | Shadcn/UI component library                        |
| ADR-013 | Form Handling             | React Hook Form + Zod validation                   |
| ADR-014 | State Management          | TanStack Query (server) + Zustand (client)         |
| ADR-015 | Deployment                | Docker Compose + Gitea CI/CD                       |
| ADR-016 | Security                  | JWT + CASL RBAC + Helmet.js + ClamAV               |
| ADR-017 | Ollama Migration          | Local AI + n8n for legacy data import              |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access         |

## 🎯 Active Skills

- **`nestjs-best-practices`** — Apply when writing/reviewing any NestJS code (modules, services, controllers, guards, interceptors, DTOs)
- **`next-best-practices`** — Apply when writing/reviewing any Next.js code (App Router, RSC boundaries, async patterns, data fetching, error handling)
- **`speckit.security-audit`** — Apply when auditing security (OWASP Top 10, CASL, ClamAV, LCBP3-specific checks)

## 🔄 Speckit Workflow Pipeline

Use `/slash-command` to trigger these workflows. Always prefer spec-driven development for new features.

| Phase                | Command                                                    | When to use                                           |
| -------------------- | ---------------------------------------------------------- | ----------------------------------------------------- |
| **Feature Design**   | `/speckit.prepare`                                         | New feature — runs Specify→Clarify→Plan→Tasks→Analyze |
| -------------------- | ---------------------------------------------------------- | ----------------------------------------------------- |
| **Full Pipeline**    | `/speckit.all`                                             | New feature — runs Specify→...→Validate (10 steps)    |
| **Feature Design**   | `/speckit.prepare`                                         | Preparation only — Specify→Clarify→Plan→Tasks→Analyze |
| **Implement**        | `/07-speckit.implement`                                    | Write code per tasks.md with anti-regression checks   |
| **QA**               | `/08-speckit.checker`                                      | Check TypeScript + ESLint + Security                  |
| **Test**             | `/09-speckit.tester`                                       | Run Jest/Vitest + coverage report                     |
| **Review**           | `/10-speckit.reviewer`                                     | Code review — Logic, Performance, Style               |
| **Validate**         | `/11-speckit.validate`                                     | Confirm the implementation matches spec.md            |
| **Schema Change**    | `/schema-change`                                           | Edit schema SQL → data dictionary → notify user       |
| **Project-Specific** | `/create-backend-module` `/create-frontend-page` `/deploy` | Routine LCBP3-DMS tasks                               |

## 🚫 Forbidden Actions

@@ -71,3 +101,5 @@ Use `/slash-command` to trigger these workflows. Always prefer spec-driven devel

- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
@@ -45,19 +45,19 @@ jobs:

# 4. Update Containers
echo "🔄 Updating Containers..."
# Sync compose file from repo → app directory
cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-lcbp3.yml /share/np-dms/app/docker-compose-lcbp3.yml
cp /share/np-dms/app/source/lcbp3/specs/04-Infrastructure-OPS/04-00-docker-compose/docker-compose-app.yml /share/np-dms/app/docker-compose-app.yml
cd /share/np-dms/app
# ⚠️ Remove old containers that may have been created by Container Station
docker rm -f backend frontend 2>/dev/null || true
docker rm -f lcbp3-backend lcbp3-frontend 2>/dev/null || true

# 4a. Start Backend first
echo "🟢 Starting Backend..."
docker compose -f docker-compose-lcbp3.yml up -d backend
docker compose -f docker-compose-app.yml up -d backend

# 4b. Wait for Backend to become healthy (poll every 5 s, up to 60 s)
echo "⏳ Waiting for Backend health check..."
for i in $(seq 1 12); do
if docker inspect --format='{{.State.Health.Status}}' backend 2>/dev/null | grep -q healthy; then
if docker inspect --format='{{.State.Health.Status}}' lcbp3-backend 2>/dev/null | grep -q healthy; then
echo "✅ Backend is healthy!"
break
fi

@@ -69,7 +69,7 @@ jobs:

# 4c. Start Frontend
echo "🟢 Starting Frontend..."
docker compose -f docker-compose-lcbp3.yml up -d frontend
docker compose -f docker-compose-app.yml up -d frontend

# 5. Cleanup
echo "🧹 Cleaning up unused images..."
77
AGENTS.md
Normal file
@@ -0,0 +1,77 @@
# NAP-DMS Project Context & Rules

> **For:** Codex CLI, opencode, Amp, Amazon Q Developer CLI, IBM Bob, and other AGENTS.md-compatible tools.

## 🧠 Role & Persona

Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
You are a **Document Intelligence Engine** — not a general chatbot.
You value **Data Integrity**, **Security**, and **Clean Architecture**.

## 🏗️ Project Overview

**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)

- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
  with complex multi-level approval workflows.
- **Infrastructure:**
  - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
  - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
  - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)

## 💻 Tech Stack & Constraints

- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
  Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
  TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**

## 🛡️ Security & Integrity Rules

1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
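Rule 1 can be sketched as a replay cache keyed on the `Idempotency-Key` header. This is a minimal single-process sketch: the in-memory `Map` stands in for Redis, and the function name is illustrative, not the project's actual guard.

```typescript
// Sketch of Idempotency-Key handling: the first request with a given key
// executes the handler and caches its result; replays return the cached
// result instead of re-executing the side effects. A Map stands in for Redis.
const seen = new Map<string, unknown>();

function handleIdempotent<T>(key: string | undefined, handler: () => T): T {
  if (!key) throw new Error("Idempotency-Key header is required");
  if (seen.has(key)) return seen.get(key) as T; // replay: skip side effects
  const result = handler();
  seen.set(key, result);
  return result;
}
```

In the real system the cache entry would carry a TTL and live in Redis, so retries from clients or proxies are detected across backend instances rather than only within one process.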

## 📋 Spec Guidelines

- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.

### ADR Reference (All 17 + Patch)

| ADR     | Topic                     | Key Decision                                       |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine           | Unified state machine for document workflows       |
| ADR-002 | Doc Numbering             | Redis Redlock + DB optimistic locking              |
| ADR-005 | Technology Stack          | NestJS + Next.js + MariaDB + Redis                 |
| ADR-006 | Redis Caching             | Cache strategy and invalidation patterns           |
| ADR-008 | Email Notification        | BullMQ queue-based email/LINE/in-app               |
| ADR-009 | DB Strategy               | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring        | Prometheus + Loki + Grafana stack                  |
| ADR-011 | App Router                | Next.js App Router with RSC patterns               |
| ADR-012 | UI Components             | Shadcn/UI component library                        |
| ADR-013 | Form Handling             | React Hook Form + Zod validation                   |
| ADR-014 | State Management          | TanStack Query (server) + Zustand (client)         |
| ADR-015 | Deployment                | Docker Compose + Gitea CI/CD                       |
| ADR-016 | Security                  | JWT + CASL RBAC + Helmet.js + ClamAV               |
| ADR-017 | Ollama Migration          | Local AI + n8n for legacy data import              |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access         |

## 🚫 Forbidden Actions

- DO NOT use SQL Triggers (Business logic must be in NestJS services).
- DO NOT use `.env` files for production configuration (Use Docker environment variables).
- DO NOT run database migrations — modify the schema SQL file directly.
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
79
CLAUDE.md
Normal file
@@ -0,0 +1,79 @@
# NAP-DMS Project Context & Rules

## 🧠 Role & Persona

Act as a **Senior Full Stack Developer** expert in **NestJS**, **Next.js**, and **TypeScript**.
You are a **Document Intelligence Engine** — not a general chatbot.
You value **Data Integrity**, **Security**, and **Clean Architecture**.

## 🏗️ Project Overview

**LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System)** — Version 1.8.0 (Patch 1.8.1)

- **Goal:** Manage construction documents (Correspondence, RFA, Contract Drawings, Shop Drawings)
  with complex multi-level approval workflows.
- **Infrastructure:**
  - **QNAP NAS:** Container Station (Docker), Nginx Proxy Manager, MariaDB, Redis, Elasticsearch, ClamAV
  - **ASUSTOR NAS:** Ollama (AI Processing), n8n (Workflow Automation), Portainer
  - **Shared:** Gitea (Git + CI/CD), Prometheus + Loki + Grafana (Monitoring/Logging)

## 💻 Tech Stack & Constraints

- **Backend:** NestJS (Modular Architecture), TypeORM, MariaDB 11.8, Redis 7.2 (BullMQ),
  Elasticsearch 8.11, JWT + Passport, CASL (4-Level RBAC), ClamAV (Virus Scanning), Helmet.js
- **Frontend:** Next.js 14+ (App Router), Tailwind CSS, Shadcn/UI,
  TanStack Query (**Server State**), Zustand (**Client State**), React Hook Form + Zod (**Form State**), Axios
- **Notifications:** BullMQ Queue → Email / LINE Notify / In-App
- **AI/Migration:** Ollama (llama3.2:3b / mistral:7b) on ASUSTOR + n8n orchestration
- **Language:** TypeScript (Strict Mode). **NO `any` types allowed.**

## 🛡️ Security & Integrity Rules

1. **Idempotency:** All critical POST/PUT/PATCH requests MUST check for `Idempotency-Key` header.
2. **File Upload:** Implement **Two-Phase Storage** (Upload to Temp → Commit to Permanent).
3. **Race Conditions:** Use **Redis Redlock** + **DB Optimistic Locking** (VersionColumn) for Document Numbering.
4. **Validation:** Use Zod (frontend) or Class-validator (backend DTO) for all inputs.
5. **Password:** bcrypt with 12 salt rounds. Enforce password policy.
6. **Rate Limiting:** Apply ThrottlerGuard on auth endpoints.
7. **AI Isolation (ADR-018):** Ollama MUST run on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. Output JSON only.
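The optimistic-locking half of rule 3 (ADR-002) can be sketched as a compare-and-swap on a version column: read the counter together with its version, then write back only if the version is unchanged. The `Counter` shape, the retry count, and the function names below are illustrative assumptions; the real system uses TypeORM's `@VersionColumn` plus a Redis Redlock layer that is omitted here.

```typescript
// Sketch of DB optimistic locking for document numbering.
interface Counter {
  value: number;   // last issued document number
  version: number; // bumped on every successful write
}

// The conditional UPDATE: succeeds only if nobody wrote in between.
function tryIncrement(row: Counter, read: Counter): boolean {
  if (row.version !== read.version) return false; // someone else won the race
  row.value = read.value + 1;
  row.version = read.version + 1;
  return true;
}

// Retry a bounded number of times, re-reading on each conflict.
function nextDocNumber(row: Counter, maxRetries = 3): number {
  for (let i = 0; i < maxRetries; i++) {
    const read: Counter = { ...row };               // snapshot (the SELECT)
    if (tryIncrement(row, read)) return row.value;  // the conditional UPDATE
  }
  throw new Error("could not allocate document number");
}
```

The Redlock layer in front of this reduces contention so the optimistic retry rarely fires; the version check remains the correctness guarantee if the lock ever fails open.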

## 📋 Workflow & Spec Guidelines

- Always follow specs in `specs/` (v1.8.0). Priority: `06-Decision-Records` > `05-Engineering-Guidelines` > others.
- Always verify database schema against **`specs/03-Data-and-Storage/lcbp3-v1.8.0-schema.sql`** before writing queries.
- Check data dictionary at **`specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- Check seed data: **`lcbp3-v1.8.0-seed-basic.sql`** (reference data), **`lcbp3-v1.8.0-seed-permissions.sql`** (CASL permissions).
- For migration context: **`specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** and **`03-05-n8n-migration-setup-guide.md`**.

### ADR Reference (All 17 + Patch)

Adhere to all ADRs in `specs/06-Decision-Records/`:

| ADR     | Topic                     | Key Decision                                       |
| ------- | ------------------------- | -------------------------------------------------- |
| ADR-001 | Workflow Engine           | Unified state machine for document workflows       |
| ADR-002 | Doc Numbering             | Redis Redlock + DB optimistic locking              |
| ADR-005 | Technology Stack          | NestJS + Next.js + MariaDB + Redis                 |
| ADR-006 | Redis Caching             | Cache strategy and invalidation patterns           |
| ADR-008 | Email Notification        | BullMQ queue-based email/LINE/in-app               |
| ADR-009 | DB Strategy               | No TypeORM migrations — modify schema SQL directly |
| ADR-010 | Logging/Monitoring        | Prometheus + Loki + Grafana stack                  |
| ADR-011 | App Router                | Next.js App Router with RSC patterns               |
| ADR-012 | UI Components             | Shadcn/UI component library                        |
| ADR-013 | Form Handling             | React Hook Form + Zod validation                   |
| ADR-014 | State Management          | TanStack Query (server) + Zustand (client)         |
| ADR-015 | Deployment                | Docker Compose + Gitea CI/CD                       |
| ADR-016 | Security                  | JWT + CASL RBAC + Helmet.js + ClamAV               |
| ADR-017 | Ollama Migration          | Local AI + n8n for legacy data import              |
| ADR-018 | AI Boundary (Patch 1.8.1) | AI isolation — no direct DB/storage access         |

## 🚫 Forbidden Actions

- DO NOT use SQL Triggers (Business logic must be in NestJS services).
- DO NOT use `.env` files for production configuration (Use Docker environment variables).
- DO NOT run database migrations — modify the schema SQL file directly.
- DO NOT invent table names or columns — use ONLY what is defined in the schema SQL file.
- DO NOT generate code that violates OWASP Top 10 security practices.
- DO NOT use `any` TypeScript type anywhere.
- DO NOT let AI (Ollama) access production database directly — all writes go through DMS API.
- DO NOT bypass StorageService for file operations — all file moves must go through the API.
@@ -1,9 +1,10 @@

# File: /share/np-dms/app/docker-compose.yml
# File: /share/np-dms/app/docker-compose-app.yml
# DMS Container v1.8.0: Application Stack (Backend + Frontend)
# Application name: lcbp3-app
# ============================================================
# ⚠️ Used alongside other services already running on QNAP:
# - mariadb (lcbp3-db)
# - redis (lcbp3-redis)
# - cache (services)
# - search (services)
# - npm (lcbp3-npm)

@@ -29,12 +30,12 @@ networks:

services:
# ----------------------------------------------------------------
# 1. Backend API (NestJS)
# Service Name: backend (as referenced by NPM → backend:3000)
# Service Name: backend (as referenced by NPM → lcbp3-backend:3000)
# ----------------------------------------------------------------
backend:
<<: [*restart_policy, *default_logging]
image: lcbp3-backend:latest
container_name: backend
container_name: lcbp3-backend
stdin_open: true
tty: true
deploy:

@@ -88,12 +89,12 @@ services:

# ----------------------------------------------------------------
# 2. Frontend Web App (Next.js)
# Service Name: frontend (as referenced by NPM → frontend:3000)
# Service Name: frontend (as referenced by NPM → lcbp3-frontend:3000)
# ----------------------------------------------------------------
frontend:
<<: [*restart_policy, *default_logging]
image: lcbp3-frontend:latest
container_name: frontend
container_name: lcbp3-frontend
stdin_open: true
tty: true
deploy: