690404:1139 Modify ADR
CI / CD Pipeline / build (push) Successful in 4m34s
CI / CD Pipeline / deploy (push) Successful in 7m33s

This commit is contained in:
2026-04-04 11:39:56 +07:00
parent d775d5ad85
commit c95e0f537e
87 changed files with 7046 additions and 422 deletions
@@ -0,0 +1,60 @@
---
trigger: always_on
---
# NAP-DMS Project Context
## Role & Persona
Act as a **Senior Full Stack Developer** specialized in:
- NestJS, Next.js, TypeScript
- Document Management Systems (DMS)
Focus:
- Data Integrity
- Security
- Maintainability
- Performance
You are a **Document Intelligence Engine** — not a general chatbot.
Every response must be **precise**, **spec-compliant**, and **production-ready**.
## Project Information
- **Project:** NAP-DMS (LCBP3)
- **Version:** 1.8.5
- **Stack:** NestJS + Next.js + TypeScript + MariaDB + Ollama (AI)
- **Repo:** https://git.np-dms.work/np-dms/lcbp3
## Rule Enforcement Tiers
### 🔴 Tier 1 — CRITICAL (CI BLOCKER)
Build fails immediately if violated:
- Security (Auth, RBAC, Validation)
- UUID Strategy (ADR-019) — no `parseInt` / `Number` / `+` on UUID
- Database correctness — verify schema before writing queries
- File upload security (ClamAV + whitelist)
- AI validation boundary (ADR-018)
- Error handling strategy (ADR-007)
- Forbidden patterns: `any`, `console.log`, UUID misuse
### 🟡 Tier 2 — IMPORTANT (CODE REVIEW)
Must fix before merge:
- Architecture patterns (thin controller, business logic in service)
- Test coverage (80%+ business logic, 70%+ backend overall)
- Cache invalidation
- Naming conventions
### 🟢 Tier 3 — GUIDELINES
Best practice — follow when possible:
- Code style / formatting (Prettier handles)
- Comment completeness
- Minor optimizations
@@ -1,88 +0,0 @@
---
trigger: always_on
---
# Project Specifications & Context Protocol
Description: Enforces strict adherence to the project's documentation structure for all agent activities.
Globs: \*
---
## 🧠 Role & Persona
Act as a **Senior Full Stack Developer** specialized in:
- NestJS, Next.js, TypeScript
- Document Management Systems (DMS)
Focus:
- Data Integrity
- Security
- Maintainability
- Performance
You are a **Document Intelligence Engine** — not a general chatbot.
Every response must be **precise**, **spec-compliant**, and **production-ready**.
## 🧭 Rule Enforcement Tiers
### 🔴 Tier 1 — CRITICAL (CI BLOCKER)
Build fails immediately if violated:
- Security (Auth, RBAC, Validation)
- UUID Strategy (ADR-019) — no `parseInt` / `Number` / `+` on UUID
- Database correctness — verify schema before writing queries
- File upload security (ClamAV + whitelist)
- AI validation boundary (ADR-018)
- Forbidden patterns: `any`, `console.log`, UUID misuse
### 🟡 Tier 2 — IMPORTANT (CODE REVIEW)
Must fix before merge:
- Architecture patterns (thin controller, business logic in service)
- Test coverage (80%+ business logic, 70%+ backend overall)
- Cache invalidation
- Naming conventions
### 🟢 Tier 3 — GUIDELINES
Best practice — follow when possible:
- Code style / formatting (Prettier handles)
- Comment completeness
- Minor optimizations
## 📖 The Context Loading Protocol
Before generating code or planning a solution, you MUST conceptually load the context in this specific order:
1. **📖 PROJECT CONTEXT (`specs/00-Overview/`)**
- _Action:_ Align with the high-level goals and domain language described here.
2. **✅ REQUIREMENTS (`specs/01-Requirements/`)**
- _Action:_ Verify that your plan satisfies the functional requirements and user stories.
3. **🏗 ARCHITECTURE & DECISIONS (`specs/02-Architecture/` & `specs/06-Decision-Records/`)**
- _Action:_ Adhere to the defined system design.
- _Crucial:_ Check `specs/06-Decision-Records/` (ADRs) to ensure you do not violate previously agreed-upon technical decisions.
4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
- _Action:_ Read schema SQL files and data dictionary. Use only defined names.
5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
- _Action:_ Follow Tech Stack, Naming Conventions, and Code Patterns.
6. **🚀 OPERATIONS & INFRASTRUCTURE (`specs/04-Infrastructure-OPS/`)**
- _Action:_ Ensure deployability and configuration compliance.
### 🗂️ Key Spec Files (Priority: ADRs > Engineering Guidelines > others)
| Document | Path | Use When |
| ----------------------- | ----------------------------------------------------------------- | ------------------------------- |
| **Glossary** | `specs/00-overview/00-02-glossary.md` | Verify domain terminology |
| **Schema Tables** | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` | Before writing any query |
| **Data Dictionary** | `specs/03-Data-and-Storage/03-01-data-dictionary.md` | Field meanings + business rules |
| **Edge Cases** | `specs/01-Requirements/01-06-edge-cases-and-rules.md` | Prevent bugs in flows |
| **ADR-019 UUID** | `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md` | UUID-related work |
| **Backend Guidelines** | `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` | NestJS patterns |
| **Frontend Guidelines** | `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md` | Next.js patterns |
| **Testing Strategy** | `specs/05-Engineering-Guidelines/05-04-testing-strategy.md` | Coverage goals |
@@ -0,0 +1,71 @@
---
trigger: always_on
---
# ADR-019 UUID Strategy
## CRITICAL RULES
- **NEVER** use `parseInt()` on UUID values
- **NEVER** use `Number()` on UUID values
- **NEVER** use `+` operator on UUID values
- **ALWAYS** use `publicId` (string UUID) for API responses
- **NEVER** expose internal INT `id` in API responses (use `@Exclude()`)
## Identifier Types
| Context | Type | Notes |
| ---------------- | ------------------------- | ------------------------------------------- |
| Internal / DB FK | `INT AUTO_INCREMENT` | Never exposed in API |
| Public API / URL | `UUIDv7` (MariaDB native) | Stored as BINARY(16), no transformer needed |
| Entity Property | `publicId: string` | Exposed directly in API (no transformation) |
| API Response | `publicId: string` (UUID) | INT `id` has `@Exclude()` — never appears |
## Backend Pattern (NestJS/TypeORM)
```typescript
// Entity
@Entity()
class Project extends UuidBaseEntity {
@Column({ type: 'uuid' })
publicId: string; // UUID string, no transformation needed
@PrimaryGeneratedColumn()
@Exclude()
id: number; // Internal INT, never exposed
}
// API Response → { id: "019505a1-7c3e-7000-8000-abc123def456" }
// Uses publicId directly, no @Expose({ name: 'id' }) needed
```
## Frontend Pattern (Next.js)
```typescript
// ✅ CORRECT — Use publicId only
type ProjectOption = {
publicId?: string; // No uuid, no id fallback
projectName?: string;
};
// ❌ WRONG — Multiple identifiers cause confusion
type ProjectOption = {
publicId?: string;
uuid?: string; // Don't do this
id?: number; // Don't do this
};
// ❌ NEVER use parseInt on UUID
parseInt(projectId); // "019505a1-…" → 19505, not a UUID (WRONG!)
// ❌ NEVER use id ?? '' fallback
const value = c.publicId ?? c.id ?? ''; // Wrong!
// ✅ CORRECT — Use publicId only
const value = c.publicId; // "019505a1-7c3e-7000-8000-abc123def456"
```
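The rule above can be expressed as a runnable check — validate the string shape of `publicId` instead of ever coercing it numerically. A minimal sketch; the regex and function name are illustrative, not part of the spec:

```typescript
// Hedged sketch: accept a publicId only when it is a well-formed UUIDv7 string.
// Version nibble must be 7 and the variant nibble 8/9/a/b (RFC 9562 layout).
const UUID_V7 = /^[0-9a-f]{8}-[0-9a-f]{4}-7[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function isUuidV7(value: string): boolean {
  return UUID_V7.test(value);
}
```

A check like this belongs wherever an identifier crosses a boundary (route params, form values) — it fails loudly where `parseInt` would silently truncate.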
## Related Documents
- `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md`
- `specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md`
@@ -1,41 +0,0 @@
---
trigger: always_on
description: Control which shell commands the agent may run automatically.
allowAuto:
- 'pnpm test:watch'
- 'pnpm test:debug'
- 'pnpm test:e2e'
- 'git status'
- 'git log --oneline'
- 'git diff'
- 'git branch'
- 'tsc --noEmit'
denyAuto:
- 'rm -rf'
- 'Remove-Item'
- 'git push --force'
- 'git reset --hard'
- 'git clean -fd'
- 'curl | bash'
- 'docker compose down'
- 'DROP TABLE'
- 'TRUNCATE'
- 'DELETE FROM'
- 'pnpm migration:*'
- 'npm run migration:*'
- 'npx auth secret'
alwaysReview: true
scopes:
- 'backend/src/**'
- 'backend/test/**'
- 'frontend/app/**'
---
# Execution Rules
- Only auto-execute commands that are explicitly listed in `allowAuto`.
- Commands in `denyAuto` must always be blocked, even if manually requested.
- All shell operations that create, modify, or delete files in `backend/src/`, `backend/test/`, or `frontend/app/` require human review.
- Alert before running any SQL that modifies data (INSERT/UPDATE/DELETE/DROP/TRUNCATE).
- Alert if environment variables related to DB connection or secrets (DATABASE_URL, JWT_SECRET, passwords) would be displayed or logged.
- Never auto-execute commands that expose sensitive credentials via MCP tools or shell output.
@@ -1,18 +0,0 @@
---
trigger: always_on
---
# 🆔 Identifier Strategy (ADR-019) — CRITICAL
| Context | Type | Notes |
| ---------------- | ------------------------- | ------------------------------------------- |
| Internal / DB FK | `INT AUTO_INCREMENT` | Never exposed in API |
| Public API / URL | `UUIDv7` (MariaDB native) | Stored as BINARY(16), no transformer needed |
| Entity Property | `publicId: string` | Exposed directly in API (no transformation) |
**CRITICAL RULES:**
- **NEVER** use `parseInt`, `Number()`, or `+` on UUID values.
- **NEVER** use `id ?? ''` fallback for identifiers in the frontend.
- Use `publicId` only in frontend and public API responses.
- `INT id` has `@Exclude()` on the backend — it must never appear in API responses.
@@ -0,0 +1,36 @@
---
trigger: always_on
---
# Security Rules (Non-Negotiable)
## Mandatory Security Requirements
1. **Idempotency:** All critical `POST`/`PUT`/`PATCH` MUST validate `Idempotency-Key` header
2. **Two-Phase File Upload:** Upload → Temp → Commit → Permanent
3. **Race Conditions:** Redis Redlock + TypeORM `@VersionColumn` for Document Numbering
4. **Validation:** Zod (frontend) + class-validator (backend DTO)
5. **Password:** bcrypt 12 salt rounds, min 8 chars, rotate every 90 days
6. **Rate Limiting:** `ThrottlerGuard` on all auth endpoints
7. **File Upload:** Whitelist PDF/DWG/DOCX/XLSX/ZIP, max 50MB, ClamAV scan
8. **AI Isolation (ADR-018):** Ollama on Admin Desktop ONLY — NO direct DB/storage access
9. **Error Handling (ADR-007):** Use layered error classification with user-friendly messages
10. **AI Integration (ADR-020):** RFA-First approach with unified pipeline architecture
11. **AI Audit Trail:** Log all AI interactions and human validations
12. **Rate Limiting:** Apply to AI endpoints to prevent abuse
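Requirement 7 can be sketched as a first-pass upload gate (extension whitelist + size cap) that runs before the ClamAV scan. All names here are illustrative assumptions; a production check would also verify the MIME type, not just the extension:

```typescript
// Hedged sketch: pre-ClamAV upload gate — whitelist + 50MB cap (names assumed).
const ALLOWED_EXTENSIONS = ['.pdf', '.dwg', '.docx', '.xlsx', '.zip'];
const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50 MB

function isAllowedUpload(filename: string, sizeBytes: number): boolean {
  const dot = filename.lastIndexOf('.');
  if (dot < 0) return false; // no extension — reject outright
  const ext = filename.slice(dot).toLowerCase();
  return ALLOWED_EXTENSIONS.includes(ext) && sizeBytes <= MAX_FILE_SIZE;
}
```

Files passing this gate would then go to the temp area of the two-phase upload and be ClamAV-scanned before commit.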
## Full Documentation
`specs/06-Decision-Records/ADR-016-security-authentication.md`
## Security Checklist (Before Every Commit)
- [ ] Input validation implemented (Zod/class-validator)
- [ ] RBAC/CASL permissions checked
- [ ] No SQL injection vulnerabilities
- [ ] File upload validation (whitelist + ClamAV)
- [ ] Rate limiting applied to auth endpoints
- [ ] AI boundary enforcement (ADR-018) - no direct DB/storage access
- [ ] AI audit logging implemented for AI interactions
- [ ] Error handling follows ADR-007 layered classification
- [ ] OWASP Top 10 review passed
@@ -1,23 +0,0 @@
---
trigger: always_on
---
# 🛡️ Security Rules (Non-Negotiable)
1. **Idempotency:** All critical `POST`/`PUT`/`PATCH` MUST validate `Idempotency-Key` header
2. **Two-Phase File Upload:** Upload → Temp → Commit → Permanent
3. **Race Conditions:** Redis Redlock + TypeORM `@VersionColumn` for Document Numbering
4. **Validation:** Zod (frontend) + class-validator (backend DTO)
5. **AI Isolation (ADR-018):** Ollama on Admin Desktop ONLY — NO direct DB/storage access
## 🚫 Forbidden Actions
| ❌ Forbidden | ✅ Correct Approach |
| ----------------------------------------------- | --------------------------------------------- |
| SQL Triggers for business logic | NestJS Service methods |
| TypeORM migration files | Edit schema SQL directly (ADR-009) |
| `any` TypeScript type | Proper types / generics |
| `console.log` in committed code | NestJS Logger (backend) / remove (frontend) |
| Direct file operations bypassing StorageService | `StorageService` for all file moves |
| Inline email/notification sending | BullMQ queue job |
| `parseInt()` on UUID values | Use UUID string directly (ADR-019) |
@@ -0,0 +1,32 @@
---
trigger: always_on
---
# TypeScript Rules
## Strict Requirements
- **Strict Mode** — all strict checks enforced
- **ZERO `any` types** — use proper types or `unknown` + narrowing
- **ZERO `console.log`** — NestJS `Logger` (backend); remove before commit (frontend)
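The `unknown` + narrowing rule in practice — a minimal sketch (the function name is illustrative):

```typescript
// Hedged sketch: narrow `unknown` with type guards instead of reaching for `any`.
function getErrorMessage(err: unknown): string {
  if (err instanceof Error) return err.message; // narrowed to Error
  if (typeof err === 'string') return err; // narrowed to string
  return 'Unknown error'; // everything else stays opaque
}
```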
## Comment Language Policy
- **Comments:** Thai (easy for the Thai team to read)
- **Code Identifiers:** English (variables, functions, classes)
## Error Handling Pattern
```typescript
// Backend (NestJS)
import { HttpException, HttpStatus, Logger } from '@nestjs/common';
const logger = new Logger('ServiceName');
// Use logger instead of console.log
logger.error('Error message', error.stack);
throw new HttpException('Message', HttpStatus.BAD_REQUEST);
// Frontend (Next.js)
// Remove all console.log before commit
// Use proper error boundaries and toast notifications
```
@@ -1,24 +0,0 @@
---
trigger: always_on
---
# 📐 TypeScript Rules
- **Strict Mode** — all strict checks enforced
- **ZERO `any` types** — use proper types or `unknown` + narrowing
- **ZERO `console.log`** — NestJS `Logger` (backend); remove before commit (frontend)
## 🏷️ Domain Terminology (Thai Comments, English Code)
| ✅ Use | ❌ Don't Use |
| ------------------ | ------------------------------------- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
## 🔄 Development Flow (Tiered)
- **🔴 Critical (DB/API/Security):** MUST follow all Context Protocol steps.
- **🟡 Normal (UI/Feature):** Follow existing patterns, check spec for relevant module.
- **🟢 Quick Fix:** Fix directly, check forbidden patterns before commit.
@@ -0,0 +1,38 @@
---
trigger: always_on
---
# Domain Terminology
## DMS Glossary
| ✅ Use | ❌ Don't Use |
| ------------------ | ------------------------------------- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Transmittal | Delivery Note, Cover Letter |
| Circulation | Distribution, Routing |
| Shop Drawing | Construction Drawing |
| Contract Drawing | Design Drawing, Blueprint |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
| RBAC | Permission System (generic) |
## Full Glossary
`specs/00-overview/00-02-glossary.md`
## Key Spec Files Priority
Spec priority: **`06-Decision-Records`** > **`05-Engineering-Guidelines`** > others
| Document | Path | Use When |
| ----------------------- | ----------------------------------------------------------------- | ------------------------------- |
| **Glossary** | `specs/00-overview/00-02-glossary.md` | Verify domain terminology |
| **Schema Tables** | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` | Before writing any query |
| **Data Dictionary** | `specs/03-Data-and-Storage/03-01-data-dictionary.md` | Field meanings + business rules |
| **Edge Cases** | `specs/01-Requirements/01-06-edge-cases-and-rules.md` | Prevent bugs in flows |
| **ADR-019 UUID** | `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md` | UUID-related work |
| **Backend Guidelines** | `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` | NestJS patterns |
| **Frontend Guidelines** | `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md` | Next.js patterns |
| **Testing Strategy** | `specs/05-Engineering-Guidelines/05-04-testing-strategy.md` | Coverage goals |
@@ -0,0 +1,41 @@
---
trigger: always_on
---
# Forbidden Actions
## ❌ Never Do This
| ❌ Forbidden | ✅ Correct Approach |
| ----------------------------------------------- | ----------------------------------------------- |
| SQL Triggers for business logic | NestJS Service methods |
| `.env` files in production | `docker-compose.yml` environment section |
| TypeORM migration files | Edit schema SQL directly (ADR-009) |
| Inventing table/column names | Verify against `schema-02-tables.sql` |
| `any` TypeScript type | Proper types / generics |
| `console.log` in committed code | NestJS Logger (backend) / remove (frontend) |
| `req: any` in controllers | `RequestWithUser` typed interface |
| `parseInt()` on UUID values | Use UUID string directly (ADR-019) |
| Exposing INT PK in API responses | UUIDv7 (ADR-019) |
| AI accessing DB/storage directly | AI → DMS API → DB (ADR-018) |
| Direct file operations bypassing StorageService | `StorageService` for all file moves |
| Inline email/notification sending | BullMQ queue job |
| Deploying without Release Gates | Complete `04-08-release-management-policy.md` |
| AI direct cloud API calls | On-premises Ollama only (ADR-018) |
| AI outputs without human validation | Human-in-the-loop validation required (ADR-020) |
## Schema Changes (ADR-009)
- **NO TypeORM migrations** — edit SQL schema directly
- Always check `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` before writing queries
- Update Data Dictionary when changing fields
## UUID Handling
See `01-adr-019-uuid.md` for complete UUID rules.
Quick reminder:
- ❌ `parseInt(uuid)` → NEVER
- ❌ `Number(uuid)` → NEVER
- ✅ Use UUID string directly
@@ -1,13 +0,0 @@
---
trigger: always_on
---
# ✅ Quick Reference Checklist (Before Every Commit)
- [ ] UUID pattern verified (no parseInt on UUID)
- [ ] No `any` types in TypeScript
- [ ] No `console.log` in committed code
- [ ] Comments in Thai, Code identifiers in English
- [ ] Schema changes via SQL directly (not migration)
- [ ] Relevant ADRs checked (ADR-009, ADR-018, ADR-019)
- [ ] i18n keys used instead of hardcode text
@@ -0,0 +1,63 @@
---
trigger: always_on
globs:
- "backend/**/*.service.ts"
- "backend/**/*.controller.ts"
- "backend/**/*.dto.ts"
- "backend/**/*.entity.ts"
---
# Backend Patterns (NestJS)
## Architecture
- **Thin Controller** — business logic in Service layer
- **DTO Validation** — class-validator + class-transformer
- **RBAC** — CASL for authorization
- **Error Handling** — Logger + HttpException
## UUID Resolution Pattern
```typescript
// Controller - accept UUID in DTO
@Post()
async create(@Body() dto: CreateCorrespondenceDto) {
// Resolve UUID to internal ID
const contract = await this.contractService.findOneByUuid(dto.contractUuid);
const contractId = contract.id; // Internal INT for DB queries
return this.service.create(dto, contractId);
}
// Service - use internal ID for DB operations
async create(dto: CreateCorrespondenceDto, contractId: number) {
// Use contractId (INT) for database queries
const correspondence = this.repo.create({
contractId, // FK is INT
// ... other fields
});
return this.repo.save(correspondence);
}
```
## API Response Pattern
```typescript
// Entity
@Entity()
class Contract extends UuidBaseEntity {
@Column({ type: 'uuid' })
publicId: string;
@PrimaryGeneratedColumn()
@Exclude()
id: number;
}
// Response automatically includes publicId as 'id'
// { id: "019505a1-7c3e-7000-8000-abc123def456", ... }
```
## Full Guidelines
`specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
@@ -0,0 +1,54 @@
---
trigger: always_on
globs:
- "frontend/**/*.tsx"
- "frontend/**/*.ts"
- "frontend/**/*.css"
---
# Frontend Patterns (Next.js)
## Form Handling
- **RHF** (React Hook Form) for form management
- **Zod** for validation schema
- **TanStack Query** for server state
## UUID Handling
```typescript
// ✅ CORRECT - Use publicId only
interface ProjectOption {
publicId?: string;
projectName?: string;
}
// Select options
const options = contracts.map(c => ({
label: `${c.contractName} (${c.contractCode})`,
value: c.publicId!, // Use publicId, no fallback to id
}));
// ❌ WRONG - Never use these patterns
const value = c.publicId ?? c.id ?? ''; // Wrong!
const id = parseInt(projectId); // Wrong - parseInt on UUID!
```
## API Client Pattern
```typescript
// Use publicId directly in API calls
const contract = await contractService.getById(publicId);
// Form submission with UUID
const onSubmit = async (data: FormData) => {
await correspondenceService.create({
contractUuid: selectedContract.publicId!, // UUID string
// ... other fields
});
};
```
## Full Guidelines
`specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
@@ -0,0 +1,42 @@
---
trigger: always_on
---
# Development Flow
## 🔴 Critical Work — DB / API / Security / Workflow Engine
**MUST complete all steps:**
1. **Glossary check** — verify domain terms in `00-02-glossary.md`
2. **Read the spec** — select from Key Spec Files table
3. **Check schema** — verify table/column in `schema-02-tables.sql`
4. **Check data dictionary** — confirm field meanings + business rules
5. **Scan edge cases** — `01-06-edge-cases-and-rules.md`
6. **Check ADRs** — verify decisions align (ADR-009, ADR-018, ADR-019)
7. **Write code** — TypeScript strict, no `any`, no `console.log`
## 🟡 Normal Work — UI / Feature / Integration
- Follow existing patterns in codebase
- Check spec for relevant module only
- No need to read all specs
## 🟢 Quick Fix — Bug Fix / Typo / Style
- Fix directly
- Add minimal test if logic changed
- Check forbidden patterns before commit
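The "check forbidden patterns" step can be automated. A minimal sketch — the pattern list is abbreviated and the names are illustrative; a real setup would run this via ESLint/lint-staged rather than a hand-rolled scanner:

```typescript
// Hedged sketch: scan a source string for Tier-1 forbidden patterns.
const FORBIDDEN_PATTERNS: RegExp[] = [
  /console\.log\(/, // use NestJS Logger instead
  /:\s*any\b/, // no `any` types
  /parseInt\(\s*\w*[Uu]uid/, // no parseInt on UUID values (ADR-019)
];

function findForbidden(source: string): string[] {
  return FORBIDDEN_PATTERNS.filter((re) => re.test(source)).map((re) => re.source);
}
```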
## Context-Aware Triggers
| Request | Files to Check | Expected Response |
| -------------------- | ------------------------------------------------------- | --------------------------------------------------- |
| "สร้าง API ใหม่" (create a new API) | `05-02-backend-guidelines.md`, `schema-02-tables.sql` | NestJS Controller + Service + DTO + CASL Guard |
| "แก้ฟอร์ม frontend" (fix a frontend form) | `05-03-frontend-guidelines.md`, `01-06-edge-cases.md` | RHF+Zod + TanStack Query + Thai comments |
| "เพิ่ม field ใหม่" (add a new field) | `ADR-009`, `data-dictionary.md`, `schema-02-tables.sql` | Edit SQL directly + update Data Dictionary + Entity |
| "ตรวจสอบ UUID" (check UUID usage) | `ADR-019`, `05-07-hybrid-uuid-implementation-plan.md` | UUIDv7 MariaDB native UUID + TransformInterceptor |
| "สร้าง migration" (create a migration) | `ADR-009`, `03-06-migration-business-scope.md` | Edit SQL schema directly + n8n workflow |
| "ตรวจสอบ permission" (check permissions) | `seed-permissions.sql`, `ADR-016` | CASL 4-Level RBAC matrix |
| "deploy production" | `04-08-release-management-policy.md`, `ADR-015` | Release Gates + Blue-Green strategy |
| "เพิ่ม test" (add tests) | `05-04-testing-strategy.md` | Coverage goals + test patterns |
@@ -0,0 +1,36 @@
---
trigger: always_on
---
# Commit Checklist
## Pre-Commit Verification
- [ ] UUID pattern verified (no parseInt on UUID)
- [ ] No `any` types in TypeScript
- [ ] No `console.log` in committed code
- [ ] Comments in Thai
- [ ] Code identifiers in English
- [ ] Schema changes via SQL directly (not migration)
- [ ] Test coverage meets targets (Backend 70%+, Business Logic 80%+)
- [ ] Relevant ADRs checked (ADR-009, ADR-018, ADR-019)
- [ ] Glossary terms used correctly
- [ ] Error handling complete (Logger + HttpException)
- [ ] i18n keys used instead of hardcode text
- [ ] Cache invalidation when data modified
- [ ] Security checklist passed (OWASP Top 10)
## Commit Message Format
```
type(scope): description
[optional body]
```
Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
Examples:
- `feat(correspondence): add originator organization validation`
- `fix(uuid): correct parseInt usage to string comparison`
- `spec(agents): bump to v1.8.5 - refactor structure`
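The header rule above can be checked mechanically. A hedged sketch — it assumes only the listed types and treats the scope as optional; the function and constant names are illustrative:

```typescript
// Hedged sketch: validate "type(scope): description" commit headers
// against the types listed above (scope optional).
const COMMIT_HEADER = /^(feat|fix|docs|style|refactor|test|chore)(\([a-z0-9-]+\))?: .+$/;

function isValidCommitHeader(header: string): boolean {
  return COMMIT_HEADER.test(header);
}
```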
@@ -0,0 +1,78 @@
---
trigger: always_on
---
# ADR-007 Error Handling Strategy
## CRITICAL RULES
- **ALWAYS** use layered error classification (Validation, Business, System)
- **NEVER** expose technical details to end users
- **ALWAYS** provide user-friendly error messages with recovery guidance
- **ALWAYS** log technical details for debugging
- **NEVER** use generic error messages without context
## Error Classification
| Error Type | Description | User Message | Technical Log |
|------------|-------------|--------------|---------------|
| **Validation** | Input validation failures | Clear field-level errors | Full validation details |
| **Business** | Business rule violations | Actionable guidance | Business context + user ID |
| **System** | Infrastructure failures | Generic "try again" | Full stack trace + metrics |
## Backend Pattern (NestJS)
```typescript
import { ArgumentsHost, Catch, ExceptionFilter, HttpException, Logger } from '@nestjs/common';

// Custom Exception Hierarchy
export class BusinessException extends HttpException {
constructor(
message: string,
userMessage: string,
recoveryAction?: string,
errorCode?: string
) {
super({ message, userMessage, recoveryAction, errorCode }, 400);
}
}
// Global Exception Filter — classify, log technical details, return only the safe message
@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
  private readonly logger = new Logger(GlobalExceptionFilter.name);

  catch(exception: unknown, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse();
    const status = exception instanceof HttpException ? exception.getStatus() : 500;
    // Log full technical details for debugging (ADR-010)
    this.logger.error(exception instanceof Error ? exception.stack : String(exception));
    // Return only the user-facing payload — never the stack trace
    response.status(status).json(
      exception instanceof HttpException
        ? exception.getResponse()
        : { userMessage: 'เกิดข้อผิดพลาด กรุณาลองใหม่ภายหลัง' },
    );
  }
}
```
## Frontend Pattern (Next.js)
```typescript
// Error Display Component — typed props; shows user message + optional recovery action
type AppError = { userMessage?: string; recoveryAction?: string };

const ErrorDisplay = ({ error, onRetry }: { error: AppError; onRetry?: () => void }) => {
  const userMessage = error.userMessage || 'เกิดข้อผิดพลาด'; // "An error occurred"
  const recoveryAction = error.recoveryAction;
  return (
    <div>
      <p>{userMessage}</p>
      {recoveryAction && <p>{recoveryAction}</p>}
      {onRetry && <button onClick={onRetry}>ลองใหม่</button>} {/* "Try again" */}
    </div>
  );
};
```
## Required Implementation
- [ ] Global Exception Filter with layered classification
- [ ] Custom exception hierarchy (Validation, Business, System)
- [ ] Standardized error response DTOs
- [ ] Frontend error display components
- [ ] Error recovery mechanisms where applicable
## Related Documents
- `specs/06-Decision-Records/ADR-007-error-handling-strategy.md`
- `specs/06-Decision-Records/ADR-010-logging-monitoring-strategy.md`
@@ -0,0 +1,100 @@
---
trigger: always_on
---
# ADR-020 AI Integration Architecture
## CRITICAL RULES
- **ALWAYS** follow ADR-018 AI boundary policy (isolation on Admin Desktop)
- **ALWAYS** use RFA-First approach for AI implementation
- **NEVER** allow AI direct database/storage access
- **ALWAYS** implement human-in-the-loop validation
- **NEVER** send sensitive data to cloud AI services
## AI Integration Patterns
### Architecture Overview
```
Frontend → AI Gateway API → Admin Desktop (Ollama) → Backend Validation
```
### Key Components
| Component | Location | Purpose |
|-----------|----------|---------|
| **AI Gateway** | Backend (NestJS) | API endpoints, validation, audit logging |
| **Ollama Engine** | Admin Desktop (Desk-5439) | LLM inference (Gemma 4) |
| **OCR Engine** | Admin Desktop (Desk-5439) | Thai/English text extraction |
| **Orchestrator** | QNAP NAS (n8n) | Workflow management |
## Backend Implementation (NestJS)
```typescript
// AI Module with boundary enforcement
@Module({
controllers: [AiController],
providers: [AiService, AiGateway],
exports: [AiService],
})
export class AiModule {
constructor() {
// Enforce ADR-018 boundaries
}
}
// AI Service with validation
@Injectable()
export class AiService {
async extractMetadata(documentId: string): Promise<AIMetadata> {
// 1. Validate permissions
// 2. Send to Admin Desktop AI
// 3. Validate AI response
// 4. Log audit trail
// 5. Return validated results
}
}
```
## Frontend Pattern (Next.js)
```typescript
// Document Review Form (reusable component)
const DocumentReviewForm = ({ document, aiSuggestions }) => {
return (
<form>
<Field label="Document Type" suggestions={aiSuggestions.documentType} />
<Field label="Project Code" suggestions={aiSuggestions.projectCode} />
<Field label="Discipline" suggestions={aiSuggestions.discipline} />
<ConfidenceScore score={aiSuggestions.confidence} />
<HumanValidationActions />
</form>
);
};
```
## Security Requirements
- **AI Isolation:** All AI processing on Admin Desktop only
- **Data Privacy:** No cloud AI services, on-premises only
- **Audit Trail:** Log all AI interactions and human validations
- **Rate Limiting:** Prevent AI abuse and resource exhaustion
- **Validation:** All AI outputs must be validated before use
## Required Implementation
- [ ] AiModule with ADR-018 boundary enforcement
- [ ] AI Gateway API endpoints with validation
- [ ] DocumentReviewForm reusable component
- [ ] Admin Desktop Ollama + PaddleOCR setup
- [ ] n8n workflow orchestration
- [ ] AI audit logging and monitoring
- [ ] Human-in-the-loop validation workflows
## Related Documents
- `specs/06-Decision-Records/ADR-018-ai-boundary.md`
- `specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md`
- `specs/06-Decision-Records/ADR-017-ollama-data-migration.md`
@@ -0,0 +1,85 @@
---
description: Run the full speckit pipeline from specification to analysis in one command.
---
# Workflow: speckit.all
This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.
## Preparation Phase (Steps 1-5)
1. **Specify** (`/speckit.specify`):
- Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
- Execute with user's feature description
- Creates: `spec.md`
2. **Clarify** (`/speckit.clarify`):
- Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
- Execute to resolve ambiguities
- Updates: `spec.md`
3. **Plan** (`/speckit.plan`):
- Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
- Execute to create technical design
- Creates: `plan.md`
4. **Tasks** (`/speckit.tasks`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
- Execute to generate task breakdown
- Creates: `tasks.md`
5. **Analyze** (`/speckit.analyze`):
- Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
- Execute to validate consistency across spec, plan, and tasks
- Output: Analysis report
- **Gate**: If critical issues found, stop and fix before proceeding
## Implementation Phase (Steps 6-7)
6. **Implement** (`/speckit.implement`):
- Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
- Execute all tasks from `tasks.md` with anti-regression protocols
- Output: Working implementation
7. **Check** (`/speckit.checker`):
- Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
- Run static analysis (linters, type checkers, security scanners)
- Output: Checker report
## Verification Phase (Steps 8-10)
8. **Test** (`/speckit.tester`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
- Run tests with coverage
- Output: Test + coverage report
9. **Review** (`/speckit.reviewer`):
- Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
- Perform code review
- Output: Review report with findings
10. **Validate** (`/speckit.validate`):
- Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
- Verify implementation matches spec requirements
- Output: Validation report (pass/fail)
## Usage
```
/speckit.all "Build a user authentication system with OAuth2 support"
```
## Pipeline Comparison
| Pipeline | Steps | Use When |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit.prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
| `/speckit.all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |
## On Error
If any step fails, stop the pipeline and report:
- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")
@@ -0,0 +1,18 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---
# Workflow: speckit.constitution
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `.specify/` directory doesn't exist: Initialize the speckit structure first
---
description: Create or update the feature specification from a natural language feature description.
---
# Workflow: speckit.specify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
- This is typically the starting point of a new feature.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the feature description for the skill's logic.
4. **On Error**:
- If no feature description provided: Ask the user to describe the feature they want to specify
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---
# Workflow: speckit.clarify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
# Workflow: speckit.plan
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---
# Workflow: speckit.tasks
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `plan.md` is missing: Run `/speckit.plan` first
- If `spec.md` is missing: Run `/speckit.specify` first
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
// turbo-all
# Workflow: speckit.analyze
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first
- If `plan.md` is missing: Run `/speckit.plan` first
- If `tasks.md` is missing: Run `/speckit.tasks` first
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
# Workflow: speckit.implement
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first
- If `plan.md` is missing: Run `/speckit.plan` first
- If `spec.md` is missing: Run `/speckit.specify` first
---
description: Run static analysis tools and aggregate results.
---
// turbo-all
# Workflow: speckit.checker
1. **Context Analysis**:
- The user may specify paths to check or run on entire project.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no linting tools available: Report which tools to install based on project type
- If tools fail: Show raw error and suggest config fixes
---
description: Execute tests, measure coverage, and report results.
---
// turbo-all
# Workflow: speckit.tester
1. **Context Analysis**:
- The user may specify test paths, options, or just run all tests.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no test framework detected: Report "No test framework found. Install Jest, Vitest, Pytest, or similar."
- If tests fail: Show failure details and suggest fixes
---
description: Perform code review with actionable feedback and suggestions.
---
# Workflow: speckit.reviewer
1. **Context Analysis**:
- The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to review: Ask user to stage changes or specify file paths
- If not a git repo: Review current directory files instead
---
description: Validate that implementation matches specification requirements.
---
# Workflow: speckit.validate
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first
- If implementation not started: Run `/speckit.implement` first
---
description: Create a new NestJS backend feature module following project standards
---
# Create NestJS Backend Module
Use this workflow when creating a new feature module in `backend/src/modules/`.
Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-005.
## Steps
// turbo
1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting
// turbo
2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables
3. **Scaffold module folder**
```
backend/src/modules/<module-name>/
├── <module-name>.module.ts
├── <module-name>.controller.ts
├── <module-name>.service.ts
├── dto/
│ ├── create-<module-name>.dto.ts
│ └── update-<module-name>.dto.ts
├── entities/
│ └── <module-name>.entity.ts
└── <module-name>.controller.spec.ts
```
4. **Create Entity** — map ONLY columns defined in the schema SQL. Use TypeORM decorators. Add `@VersionColumn()` if the entity needs optimistic locking.
5. **Create DTOs** — use `class-validator` decorators. Never use `any`. Validate all inputs.
6. **Create Service** — inject repository via constructor DI. Use transactions for multi-step writes. Add `Idempotency-Key` guard for POST/PUT/PATCH operations.
7. **Create Controller** — apply `@UseGuards(JwtAuthGuard, CaslAbilityGuard)`. Use proper HTTP status codes. Document with `@ApiTags` and `@ApiOperation`.
8. **Register in Module** — add to `imports`, `providers`, `controllers`, `exports` as needed.
9. **Register in AppModule** — import the new module in `app.module.ts`.
// turbo
10. **Write unit test** — cover service methods with Jest mocks. Run:
```bash
pnpm test:watch
```
// turbo
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
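Step 6's `Idempotency-Key` guard can be sketched in plain TypeScript. This is an illustrative in-memory model only: the real guard would be a NestJS interceptor backed by Redis or the database per ADR-016, it would key per user and route, and it must also handle concurrent in-flight requests. All names below are hypothetical.

```typescript
// Illustrative sketch only -- not the project's actual guard.
type CachedResponse = { status: number; body: unknown };

class IdempotencyStore {
  private seen = new Map<string, CachedResponse>();

  // Returns the recorded response for a replayed key, or runs the
  // handler once and records its result. (Real version: async, with
  // a lock or "in-progress" marker for concurrent duplicates.)
  execute(key: string, handler: () => CachedResponse): CachedResponse {
    const hit = this.seen.get(key);
    if (hit) return hit; // replayed key: same response, no second write
    const result = handler();
    this.seen.set(key, result);
    return result;
  }
}
```

A retried POST carrying the same `Idempotency-Key` header then gets the first response back instead of creating a duplicate row.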
---
description: Create a new Next.js App Router page following project standards
---
# Create Next.js Frontend Page
Use this workflow when creating a new page in `frontend/app/`.
Follows `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`, ADR-011, ADR-012, ADR-013, ADR-014.
## Steps
1. **Determine route** — decide the route path, e.g. `app/(dashboard)/documents/page.tsx`
2. **Classify components** — decide what is Server Component (default) vs Client Component (`'use client'`)
- Server Component: initial data load, static content, SEO
- Client Component: interactivity, forms, TanStack Query hooks, Zustand
3. **Create page file** — Server Component by default:
```typescript
// app/(dashboard)/<route>/page.tsx
import { Metadata } from 'next';
export const metadata: Metadata = {
title: '<Page Title> | LCBP3-DMS',
};
export default async function <PageName>Page() {
return (
<div>
{/* Page content */}
</div>
);
}
```
4. **Create API hook** (if client-side data needed) — add to `hooks/use-<feature>.ts`:
```typescript
'use client';
import { useQuery } from '@tanstack/react-query';
import { apiClient } from '@/lib/api-client';
export function use<Feature>() {
return useQuery({
queryKey: ['<feature>'],
queryFn: () => apiClient.get('<endpoint>'),
});
}
```
5. **Build UI components** — use Shadcn/UI primitives. Place reusable components in `components/<feature>/`.
6. **Handle forms** — use React Hook Form + Zod schema validation. Never access form values without validation.
7. **Handle errors** — add `error.tsx` alongside `page.tsx` for route-level error boundaries.
8. **Add loading state** — add `loading.tsx` for Suspense fallback if page does async work.
9. **Add to navigation** — update sidebar/nav config if the page should appear in the menu.
10. **Access control** — ensure page checks CASL permissions. Redirect unauthorized users via middleware or `notFound()`.
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
---
description: Deploy the application via Gitea Actions to QNAP Container Station
---
# Deploy to Production
Use this workflow to deploy updated backend and/or frontend to QNAP via Gitea Actions CI/CD.
Follows `specs/04-Infrastructure-OPS/` and ADR-015.
## Pre-deployment Checklist
- [ ] All tests pass locally (`pnpm test`)
- [ ] No TypeScript errors (`tsc --noEmit`)
- [ ] No `any` types introduced
- [ ] Schema changes applied to `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`
- [ ] Environment variables documented (NOT in `.env` files)
## Steps
1. **Commit and push to Gitea**
```bash
git status
git add .
git commit -m "feat(<scope>): <description>"
git push origin main
```
2. **Monitor Gitea Actions** — open Gitea web UI → Actions tab → verify pipeline starts
3. **Pipeline stages (automatic)**
- `build-backend` → Docker image build + push to registry
- `build-frontend` → Docker image build + push to registry
- `deploy` → SSH to QNAP → `docker compose pull` + `docker compose up -d`
4. **Verify backend health**
```bash
curl http://<QNAP_IP>:3000/health
# Expected: { "status": "ok" }
```
5. **Verify frontend**
```bash
curl -I http://<QNAP_IP>:3001
# Expected: HTTP 200
```
6. **Check logs in Grafana** — navigate to Grafana → Loki → filter by container name
- Backend: `container_name="lcbp3-backend"`
- Frontend: `container_name="lcbp3-frontend"`
7. **Verify database** — confirm schema changes are reflected (if any)
8. **Rollback (if needed)**
```bash
# SSH into QNAP, pin the previous image tag for <service> in docker-compose.yml, then:
docker compose pull <service>
docker compose up -d <service>
```
## Common Issues
| Symptom | Cause | Fix |
| ----------------- | --------------------- | ----------------------------------- |
| Backend unhealthy | DB connection failed | Check MariaDB container + env vars |
| Frontend blank | Build error | Check Next.js build logs in Grafana |
| 502 Bad Gateway | Container not started | `docker compose ps` to check status |
| Pipeline stuck | Gitea runner offline | Restart runner on QNAP |
---
auto_execution_mode: 0
description: Review code changes for bugs, security issues, and improvements
---
You are a senior software engineer performing a thorough code review.
Your task is to find all potential bugs and code improvements in the code changes. Focus on:
1. Logic errors and incorrect behavior
2. Edge cases that aren't handled
3. Null/undefined reference issues
4. Race conditions or concurrency issues
5. Security vulnerabilities
6. Improper resource management or resource leaks
7. API contract violations
8. Incorrect caching behavior, including cache staleness issues, cache key-related bugs, incorrect cache invalidation, and ineffective caching
9. Violations of existing code patterns or conventions
## 🔴 Tier 1 Critical Rules (CI Blockers)
The following are **CI-blocking issues** that must be caught in code review. These align with project specs in `specs/05-Engineering-Guidelines/` and `specs/06-Decision-Records/`:
### ADR-019: UUID Handling
- **❌ NEVER use `parseInt()`, `Number()`, or `+` operator on UUID values**
- Example of violation: `parseInt(projectId)` where `projectId` is a UUID string
- ✅ Correct: Use UUID string directly without conversion
- **❌ NEVER expose internal INT PK in API responses**
- API must expose only `publicId` (transformed to `id` via `@Expose()`)
- Verify DTOs have `@Exclude()` on `id: number` field
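The UUID rule above can be made concrete with a small guard. This is an illustrative sketch, not project code: the point is that a UUID is an opaque string that must be validated as a string and passed through unchanged, because numeric coercion silently truncates it.

```typescript
// Illustrative guard only. A UUID is an opaque string.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertUuid(value: string): string {
  if (!UUID_RE.test(value)) {
    throw new Error(`Invalid UUID: ${value}`);
  }
  return value; // never parseInt()/Number()/+ on this
}

// Why the rule exists: parseInt stops at the first non-digit, so
// parseInt('550e8400-e29b-41d4-a716-446655440000', 10) yields just 550.
```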
### TypeScript Strict Rules
- **❌ ZERO `any` types allowed** — use proper types or `unknown` + narrowing
- **❌ ZERO `console.log`** — must use NestJS `Logger` (backend) or remove (frontend)
- **❌ NO `req: any` in controllers** — use `RequestWithUser` typed interface
### Database & Architecture
- **❌ NO SQL Triggers for business logic** — use NestJS Service methods instead
- **❌ NO `.env` files in production** — use Docker environment variables
- **❌ NO direct table/column name invention** — verify against `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
### Security (ADR-016)
- Idempotency validation for critical `POST`/`PUT`/`PATCH` endpoints
- Two-phase file upload pattern (Upload → Temp → Commit → Permanent)
- Input validation with class-validator (backend) and Zod (frontend)
### Test Coverage Requirements
- **Backend Services:** 80% minimum
- **Backend Overall:** 70% minimum
- **Business Logic:** 80% minimum
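The thresholds above can be enforced in CI rather than by review alone. A possible `jest.config.ts` fragment follows; the per-path glob for services is an assumption about this repo's layout, not confirmed project config.

```typescript
// jest.config.ts (fragment, hypothetical) -- makes the coverage tiers CI-enforced.
export default {
  collectCoverage: true,
  coverageThreshold: {
    // Backend overall: 70% minimum
    global: {
      branches: 70,
      functions: 70,
      lines: 70,
      statements: 70,
    },
    // Stricter bar for business logic in services: 80% minimum
    './src/modules/**/*.service.ts': {
      lines: 80,
      statements: 80,
    },
  },
};
```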
Make sure to:
1. If exploring the codebase, call multiple tools in parallel for increased efficiency. Do not spend too much time exploring.
2. If you find any pre-existing bugs in the code, you should also report those since it's important for us to maintain general code quality for the user.
3. Do NOT report issues that are speculative or low-confidence. All your conclusions should be based on a complete understanding of the codebase.
4. Remember that if you were given a specific git commit, it may not be checked out and local code states may be different.
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---
# Schema Change Workflow
Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md`: **NO TypeORM migrations**.
## Pre-Change Checklist
- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)
## Steps
1. **Read current schema** — load the full schema file:
```
specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
```
2. **Read data dictionary** — understand current field definitions:
```
specs/03-Data-and-Storage/03-01-data-dictionary.md
```
// turbo
3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:
- Tables being modified/created
- Columns being added/renamed/dropped
- Foreign key relationships affected
- Indexes being added/modified
- Seed data impact (if any)
4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
- Add/modify table definitions
- Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
- Add inline comments for new columns explaining purpose
- Ensure `DEFAULT` values and `NOT NULL` constraints are correct
- Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed
> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.
5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
- Add new tables/columns with descriptions
- Update data types and constraints
- Document business rules for new fields
- Add enum value definitions if applicable
6. **Update seed data** (if applicable):
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions
7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
- Map ONLY columns defined in schema SQL
- Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
- Add `@VersionColumn()` if optimistic locking is needed
8. **Update DTOs** — if new columns are exposed via API:
- Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
- Add `class-validator` decorators for all new fields
- Never use `any` type
// turbo
9. **Run type check** — verify no TypeScript errors:
```bash
cd backend && npx tsc --noEmit
```
10. **Generate SQL diff** — create a summary of changes for the user to apply manually:
```
-- Schema Change Summary
-- Date: <current date>
-- Feature: <feature name>
-- Tables affected: <list>
--
-- ⚠️ Apply this SQL to the live database manually:
ALTER TABLE ...;
-- or
CREATE TABLE ...;
```
11. **Notify user** — present the SQL diff and remind them:
- Apply the SQL change to the live database manually
- Verify the change doesn't break existing data
- Run `pnpm test` after applying to confirm entity mappings work
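Steps 4 and 7 above both mention `@VersionColumn()`. What optimistic locking buys can be modeled in a few lines of plain TypeScript; TypeORM performs the equivalent version check inside `save()`, and all names below are illustrative only.

```typescript
// Illustrative model of optimistic locking. TypeORM does the equivalent
// of: UPDATE ... SET version = version + 1 WHERE id = ? AND version = ?
interface VersionedRow {
  id: string;
  version: number;
  title: string;
}

class OptimisticLockError extends Error {}

function applyUpdate(
  row: VersionedRow,
  patch: { title: string },
  expectedVersion: number,
): VersionedRow {
  if (row.version !== expectedVersion) {
    // Someone else committed first; the caller must reload and retry.
    throw new OptimisticLockError(
      `version mismatch: expected ${expectedVersion}, found ${row.version}`,
    );
  }
  return { ...row, ...patch, version: row.version + 1 };
}
```

A stale write (one based on an old `version`) is rejected instead of silently overwriting the newer data, which is exactly the failure mode the checklist is guarding against.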
## Common Patterns
| Change Type | Template |
| ----------- | -------------------------------------------------------------- |
| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table | Full `CREATE TABLE` with constraints and indexes |
| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |
## On Error
- If schema SQL has syntax errors → fix the SQL directly (note: `tsc --noEmit` validates only the TypeScript entities, not the SQL file)
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
---
description: Execute the full preparation pipeline (Specify -> Clarify -> Plan -> Tasks -> Analyze) in sequence.
---
# Workflow: speckit.prepare
This workflow orchestrates the sequential execution of the Speckit preparation phase skills (02-06).
1. **Step 1: Specify (Skill 02)**
- Goal: Create or update the `spec.md` based on user input.
- Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.
2. **Step 2: Clarify (Skill 03)**
- Goal: Refine the `spec.md` by identifying and resolving ambiguities.
- Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.
3. **Step 3: Plan (Skill 04)**
- Goal: Generate `plan.md` from the finalized spec.
- Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.
4. **Step 4: Tasks (Skill 05)**
- Goal: Generate actionable `tasks.md` from the plan.
- Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.
5. **Step 5: Analyze (Skill 06)**
- Goal: Validate consistency across all design artifacts (spec, plan, tasks).
- Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.
---
description: Generate a custom checklist for the current feature based on user requirements.
---
# Workflow: speckit.checklist
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
---
description: Compare two versions of a spec or plan to highlight changes.
---
# Workflow: speckit.diff
1. **Context Analysis**:
- The user has provided an input prompt (optional file paths or version references).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to compare: Use current feature's `spec.md` vs git HEAD
- If `spec.md` doesn't exist: Run `/speckit.specify` first
---
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
---
# Workflow: speckit.migrate
1. **Context Analysis**:
- The user has provided an input prompt (path to analyze, feature name).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If path doesn't exist: Ask user to provide valid directory path
- If no code found: Report that no analyzable code was detected
---
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
---
// turbo-all
# Workflow: speckit.quizme
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If required files don't exist, inform the user which prerequisite workflow to run first (e.g., `/speckit.specify` to create `spec.md`).
---
description: Display a dashboard showing feature status, completion percentage, and blockers.
---
// turbo-all
# Workflow: speckit.status
1. **Context Analysis**:
- The user may optionally specify a feature to focus on.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no features exist: Report "No features found. Run `/speckit.specify` to create your first feature."
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
---
# Workflow: speckit.taskstoissues
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first