690414:1113 Update README.md /.agents/skills, /.windsurf/workflows

2026-04-14 11:13:42 +07:00
parent 02400fd88c
commit 6d45bdaeb5
194 changed files with 12708 additions and 8762 deletions
-63
View File
@@ -1,63 +0,0 @@
---
trigger: always_on
globs:
- "backend/**/*.service.ts"
- "backend/**/*.controller.ts"
- "backend/**/*.dto.ts"
- "backend/**/*.entity.ts"
---
# Backend Patterns (NestJS)
## Architecture
- **Thin Controller** — business logic in Service layer
- **DTO Validation** — class-validator + class-transformer
- **RBAC** — CASL for authorization
- **Error Handling** — Logger + HttpException
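A minimal sketch of the last two bullets in combination (guards omitted); `ContractService`, `Contract`, and the repository wiring are illustrative, not project code:
```typescript
import { Injectable, Logger, NotFoundException } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { Contract } from './entities/contract.entity'; // hypothetical path

@Injectable()
export class ContractService {
  private readonly logger = new Logger(ContractService.name);

  constructor(
    @InjectRepository(Contract)
    private readonly repo: Repository<Contract>,
  ) {}

  async findOneByUuid(publicId: string): Promise<Contract> {
    const contract = await this.repo.findOne({ where: { publicId } });
    if (!contract) {
      this.logger.warn(`Contract not found: ${publicId}`);
      // NotFoundException is an HttpException subclass and maps to HTTP 404
      throw new NotFoundException('Contract not found');
    }
    return contract;
  }
}
```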
## UUID Resolution Pattern
```typescript
// Controller - accept UUID in DTO
@Post()
async create(@Body() dto: CreateCorrespondenceDto) {
  // Resolve UUID to internal ID
  const contract = await this.contractService.findOneByUuid(dto.contractUuid);
  const contractId = contract.id; // Internal INT for DB queries
  return this.service.create(dto, contractId);
}

// Service - use internal ID for DB operations
async create(dto: CreateCorrespondenceDto, contractId: number) {
  // Use contractId (INT) for database queries
  const correspondence = this.repo.create({
    contractId, // FK is INT
    // ... other fields
  });
  return this.repo.save(correspondence);
}
```
## API Response Pattern
```typescript
// Entity
@Entity()
class Contract extends UuidBaseEntity {
  @Column({ type: 'uuid' })
  @Expose({ name: 'id' }) // class-transformer serializes publicId as 'id'
  publicId: string;

  @PrimaryGeneratedColumn()
  @Exclude() // internal INT PK never leaves the API
  id: number;
}
// Response automatically includes publicId as 'id'
// { id: "019505a1-7c3e-7000-8000-abc123def456", ... }
```
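The automatic `publicId` → `id` transform above assumes class-transformer serialization is enabled globally; the usual wiring (standard NestJS API, `main.ts` placement assumed):
```typescript
// main.ts: enable class-transformer so @Expose()/@Exclude() apply to responses
import { ClassSerializerInterceptor } from '@nestjs/common';
import { NestFactory, Reflector } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalInterceptors(new ClassSerializerInterceptor(app.get(Reflector)));
  await app.listen(3000);
}
bootstrap();
```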
## Full Guidelines
`specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
-54
View File
@@ -1,54 +0,0 @@
---
trigger: always_on
globs:
- "frontend/**/*.tsx"
- "frontend/**/*.ts"
- "frontend/**/*.css"
---
# Frontend Patterns (Next.js)
## Form Handling
- **RHF** (React Hook Form) for form management
- **Zod** for validation schema
- **TanStack Query** for server state
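A minimal sketch of how the three pieces fit together; the schema fields and hook name are illustrative:
```typescript
'use client';
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';
import { useMutation } from '@tanstack/react-query';

// Illustrative schema; real fields belong to the feature's own types
const schema = z.object({
  contractUuid: z.string().uuid(),
  subject: z.string().min(1, 'Subject is required'),
});
type FormValues = z.infer<typeof schema>;

export function useCorrespondenceForm(create: (values: FormValues) => Promise<void>) {
  const form = useForm<FormValues>({ resolver: zodResolver(schema) });
  const mutation = useMutation({ mutationFn: create });
  // handleSubmit only fires the mutation after Zod validation passes
  const onSubmit = form.handleSubmit((values) => mutation.mutate(values));
  return { form, onSubmit };
}
```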
## UUID Handling
```typescript
// ✅ CORRECT - Use publicId only
interface ProjectOption {
  publicId?: string;
  projectName?: string;
}

// Select options
const options = contracts.map(c => ({
  label: `${c.contractName} (${c.contractCode})`,
  value: c.publicId!, // Use publicId, no fallback to id
}));

// ❌ WRONG - Never use these patterns
const value = c.publicId ?? c.id ?? ''; // Wrong!
const id = parseInt(projectId); // Wrong - parseInt on UUID!
```
## API Client Pattern
```typescript
// Use publicId directly in API calls
const contract = await contractService.getById(publicId);

// Form submission with UUID
const onSubmit = async (data: FormData) => {
  await correspondenceService.create({
    contractUuid: selectedContract.publicId!, // UUID string
    // ... other fields
  });
};
```
## Full Guidelines
`specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
-85
View File
@@ -1,85 +0,0 @@
---
description: Run the full speckit pipeline from specification to analysis in one command.
---
# Workflow: speckit.all
This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit.prepare` instead.
## Preparation Phase (Steps 1-5)
1. **Specify** (`/speckit.specify`):
- Use the `view_file` tool to read: `.agents/skills/speckit.specify/SKILL.md`
- Execute with user's feature description
- Creates: `spec.md`
2. **Clarify** (`/speckit.clarify`):
- Use the `view_file` tool to read: `.agents/skills/speckit.clarify/SKILL.md`
- Execute to resolve ambiguities
- Updates: `spec.md`
3. **Plan** (`/speckit.plan`):
- Use the `view_file` tool to read: `.agents/skills/speckit.plan/SKILL.md`
- Execute to create technical design
- Creates: `plan.md`
4. **Tasks** (`/speckit.tasks`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tasks/SKILL.md`
- Execute to generate task breakdown
- Creates: `tasks.md`
5. **Analyze** (`/speckit.analyze`):
- Use the `view_file` tool to read: `.agents/skills/speckit.analyze/SKILL.md`
- Execute to validate consistency across spec, plan, and tasks
- Output: Analysis report
- **Gate**: If critical issues found, stop and fix before proceeding
## Implementation Phase (Steps 6-7)
6. **Implement** (`/speckit.implement`):
- Use the `view_file` tool to read: `.agents/skills/speckit.implement/SKILL.md`
- Execute all tasks from `tasks.md` with anti-regression protocols
- Output: Working implementation
7. **Check** (`/speckit.checker`):
- Use the `view_file` tool to read: `.agents/skills/speckit.checker/SKILL.md`
- Run static analysis (linters, type checkers, security scanners)
- Output: Checker report
## Verification Phase (Steps 8-10)
8. **Test** (`/speckit.tester`):
- Use the `view_file` tool to read: `.agents/skills/speckit.tester/SKILL.md`
- Run tests with coverage
- Output: Test + coverage report
9. **Review** (`/speckit.reviewer`):
- Use the `view_file` tool to read: `.agents/skills/speckit.reviewer/SKILL.md`
- Perform code review
- Output: Review report with findings
10. **Validate** (`/speckit.validate`):
- Use the `view_file` tool to read: `.agents/skills/speckit.validate/SKILL.md`
- Verify implementation matches spec requirements
- Output: Validation report (pass/fail)
## Usage
```
/speckit.all "Build a user authentication system with OAuth2 support"
```
## Pipeline Comparison
| Pipeline | Steps | Use When |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit.prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
| `/speckit.all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |
## On Error
If any step fails, stop the pipeline and report:
- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit.clarify` to resolve ambiguities before continuing")
@@ -1,18 +0,0 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---
# Workflow: speckit.constitution
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.constitution/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `.specify/` directory doesn't exist: Initialize the speckit structure first
@@ -1,19 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
---
# Workflow: speckit.specify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
- This is typically the starting point of a new feature.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.specify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the feature description for the skill's logic.
4. **On Error**:
- If no feature description provided: Ask the user to describe the feature they want to specify
@@ -1,18 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---
# Workflow: speckit.clarify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.clarify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
-18
View File
@@ -1,18 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
# Workflow: speckit.plan
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.plan/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
@@ -1,19 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---
# Workflow: speckit.tasks
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tasks/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `plan.md` is missing: Run `/speckit.plan` first
- If `spec.md` is missing: Run `/speckit.specify` first
@@ -1,22 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
// turbo-all
# Workflow: speckit.analyze
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.analyze/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first
- If `plan.md` is missing: Run `/speckit.plan` first
- If `tasks.md` is missing: Run `/speckit.tasks` first
@@ -1,20 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
# Workflow: speckit.implement
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.implement/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first
- If `plan.md` is missing: Run `/speckit.plan` first
- If `spec.md` is missing: Run `/speckit.specify` first
@@ -1,21 +0,0 @@
---
description: Run static analysis tools and aggregate results.
---
// turbo-all
# Workflow: speckit.checker
1. **Context Analysis**:
- The user may specify paths to check or run on entire project.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checker/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no linting tools available: Report which tools to install based on project type
- If tools fail: Show raw error and suggest config fixes
@@ -1,21 +0,0 @@
---
description: Execute tests, measure coverage, and report results.
---
// turbo-all
# Workflow: speckit.tester
1. **Context Analysis**:
- The user may specify test paths, options, or just run all tests.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.tester/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no test framework detected: Report "No test framework found. Install Jest, Vitest, Pytest, or similar."
- If tests fail: Show failure details and suggest fixes
@@ -1,19 +0,0 @@
---
description: Perform code review with actionable feedback and suggestions.
---
# Workflow: speckit.reviewer
1. **Context Analysis**:
- The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.reviewer/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to review: Ask user to stage changes or specify file paths
- If not a git repo: Review current directory files instead
@@ -1,19 +0,0 @@
---
description: Validate that implementation matches specification requirements.
---
# Workflow: speckit.validate
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.validate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first
- If implementation not started: Run `/speckit.implement` first
@@ -1,51 +0,0 @@
---
description: Create a new NestJS backend feature module following project standards
---
# Create NestJS Backend Module
Use this workflow when creating a new feature module in `backend/src/modules/`.
Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-005.
## Steps
// turbo
1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting
// turbo
2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables
3. **Scaffold module folder**
```
backend/src/modules/<module-name>/
├── <module-name>.module.ts
├── <module-name>.controller.ts
├── <module-name>.service.ts
├── dto/
│ ├── create-<module-name>.dto.ts
│ └── update-<module-name>.dto.ts
├── entities/
│ └── <module-name>.entity.ts
└── <module-name>.controller.spec.ts
```
4. **Create Entity** — map ONLY columns defined in the schema SQL. Use TypeORM decorators. Add `@VersionColumn()` if the entity needs optimistic locking.
5. **Create DTOs** — use `class-validator` decorators. Never use `any`. Validate all inputs.
6. **Create Service** — inject repository via constructor DI. Use transactions for multi-step writes. Add `Idempotency-Key` guard for POST/PUT/PATCH operations.
7. **Create Controller** — apply `@UseGuards(JwtAuthGuard, CaslAbilityGuard)`. Use proper HTTP status codes. Document with `@ApiTags` and `@ApiOperation`.
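A sketch of how steps 6 and 7 fit together; import paths are hypothetical and the real `Idempotency-Key` handling belongs in a shared guard rather than inline:
```typescript
import {
  BadRequestException, Body, Controller, Headers, Post, UseGuards,
} from '@nestjs/common';
import { ApiOperation, ApiTags } from '@nestjs/swagger';
import { JwtAuthGuard } from '../auth/jwt-auth.guard';         // hypothetical path
import { CaslAbilityGuard } from '../casl/casl-ability.guard'; // hypothetical path
import { CorrespondenceService } from './correspondence.service';
import { CreateCorrespondenceDto } from './dto/create-correspondence.dto';

@ApiTags('correspondence')
@Controller('correspondence')
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
export class CorrespondenceController {
  constructor(private readonly service: CorrespondenceService) {}

  @Post()
  @ApiOperation({ summary: 'Create a correspondence record' })
  async create(
    @Body() dto: CreateCorrespondenceDto,
    @Headers('Idempotency-Key') idempotencyKey?: string,
  ) {
    // Inline check for illustration only; prefer a reusable guard/interceptor
    if (!idempotencyKey) {
      throw new BadRequestException('Idempotency-Key header is required');
    }
    return this.service.create(dto, idempotencyKey);
  }
}
```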
8. **Register in Module** — add to `imports`, `providers`, `controllers`, `exports` as needed.
9. **Register in AppModule** — import the new module in `app.module.ts`.
// turbo
10. **Write unit test** — cover service methods with Jest mocks. Run:
```bash
pnpm test:watch
```
// turbo
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
@@ -1,64 +0,0 @@
---
description: Create a new Next.js App Router page following project standards
---
# Create Next.js Frontend Page
Use this workflow when creating a new page in `frontend/app/`.
Follows `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`, ADR-011, ADR-012, ADR-013, ADR-014.
## Steps
1. **Determine route** — decide the route path, e.g. `app/(dashboard)/documents/page.tsx`
2. **Classify components** — decide what is Server Component (default) vs Client Component (`'use client'`)
- Server Component: initial data load, static content, SEO
- Client Component: interactivity, forms, TanStack Query hooks, Zustand
3. **Create page file** — Server Component by default:
```typescript
// app/(dashboard)/<route>/page.tsx
import { Metadata } from 'next';

export const metadata: Metadata = {
  title: '<Page Title> | LCBP3-DMS',
};

export default async function <PageName>Page() {
  return (
    <div>
      {/* Page content */}
    </div>
  );
}
```
4. **Create API hook** (if client-side data needed) — add to `hooks/use-<feature>.ts`:
```typescript
'use client';
import { useQuery } from '@tanstack/react-query';
import { apiClient } from '@/lib/api-client';

export function use<Feature>() {
  return useQuery({
    queryKey: ['<feature>'],
    queryFn: () => apiClient.get('<endpoint>'),
  });
}
```
5. **Build UI components** — use Shadcn/UI primitives. Place reusable components in `components/<feature>/`.
6. **Handle forms** — use React Hook Form + Zod schema validation. Never access form values without validation.
7. **Handle errors** — add `error.tsx` alongside `page.tsx` for route-level error boundaries.
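For step 7, Next.js requires `error.tsx` to be a Client Component; a minimal sketch:
```typescript
// app/(dashboard)/<route>/error.tsx: route-level error boundary
'use client';

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <div>
      <p>Something went wrong loading this page.</p>
      <button onClick={() => reset()}>Try again</button>
    </div>
  );
}
```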
8. **Add loading state** — add `loading.tsx` for Suspense fallback if page does async work.
9. **Add to navigation** — update sidebar/nav config if the page should appear in the menu.
10. **Access control** — ensure page checks CASL permissions. Redirect unauthorized users via middleware or `notFound()` (see the sketch after these steps).
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
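A server-side sketch of the step 10 gate; `getCurrentAbility()` is a hypothetical helper that builds the current user's CASL ability:
```typescript
// app/(dashboard)/documents/page.tsx: hide the page from unauthorized users
import { notFound } from 'next/navigation';
import { getCurrentAbility } from '@/lib/casl'; // hypothetical helper

export default async function DocumentsPage() {
  const ability = await getCurrentAbility();
  if (!ability.can('read', 'Document')) notFound();
  return <div>{/* Page content */}</div>;
}
```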
-71
View File
@@ -1,71 +0,0 @@
---
description: Deploy the application via Gitea Actions to QNAP Container Station
---
# Deploy to Production
Use this workflow to deploy updated backend and/or frontend to QNAP via Gitea Actions CI/CD.
Follows `specs/04-Infrastructure-OPS/` and ADR-015.
## Pre-deployment Checklist
- [ ] All tests pass locally (`pnpm test`)
- [ ] No TypeScript errors (`tsc --noEmit`)
- [ ] No `any` types introduced
- [ ] Schema changes applied to `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`
- [ ] Environment variables documented (NOT in `.env` files)
## Steps
1. **Commit and push to Gitea**
```bash
git status
git add .
git commit -m "feat(<scope>): <description>"
git push origin main
```
2. **Monitor Gitea Actions** — open Gitea web UI → Actions tab → verify pipeline starts
3. **Pipeline stages (automatic)**
- `build-backend` → Docker image build + push to registry
- `build-frontend` → Docker image build + push to registry
- `deploy` → SSH to QNAP → `docker compose pull` + `docker compose up -d`
4. **Verify backend health**
```bash
curl http://<QNAP_IP>:3000/health
# Expected: { "status": "ok" }
```
5. **Verify frontend**
```bash
curl -I http://<QNAP_IP>:3001
# Expected: HTTP 200
```
6. **Check logs in Grafana** — navigate to Grafana → Loki → filter by container name
- Backend: `container_name="lcbp3-backend"`
- Frontend: `container_name="lcbp3-frontend"`
7. **Verify database** — confirm schema changes are reflected (if any)
8. **Rollback (if needed)**
```bash
# SSH into QNAP, then pin the previous image tag for the service
# (edit docker-compose.yml: image: <registry>/<service>:<previous-image-tag>)
docker compose pull <service>
docker compose up -d <service>
```
## Common Issues
| Symptom | Cause | Fix |
| ----------------- | --------------------- | ----------------------------------- |
| Backend unhealthy | DB connection failed | Check MariaDB container + env vars |
| Frontend blank | Build error | Check Next.js build logs in Grafana |
| 502 Bad Gateway | Container not started | `docker compose ps` to check status |
| Pipeline stuck | Gitea runner offline | Restart runner on QNAP |
-62
View File
@@ -1,62 +0,0 @@
---
auto_execution_mode: 0
description: Review code changes for bugs, security issues, and improvements
---
You are a senior software engineer performing a thorough code review to identify potential bugs.
Your task is to find all potential bugs and code improvements in the code changes. Focus on:
1. Logic errors and incorrect behavior
2. Edge cases that aren't handled
3. Null/undefined reference issues
4. Race conditions or concurrency issues
5. Security vulnerabilities
6. Improper resource management or resource leaks
7. API contract violations
8. Incorrect caching behavior, including cache staleness issues, cache key-related bugs, incorrect cache invalidation, and ineffective caching
9. Violations of existing code patterns or conventions
## 🔴 Tier 1 Critical Rules (CI Blockers)
The following are **CI-blocking issues** that must be caught in code review. These align with project specs in `specs/05-Engineering-Guidelines/` and `specs/06-Decision-Records/`:
### ADR-019: UUID Handling
- **❌ NEVER use `parseInt()`, `Number()`, or `+` operator on UUID values**
- Example of violation: `parseInt(projectId)` where `projectId` is UUID string
- ✅ Correct: Use UUID string directly without conversion
- **❌ NEVER expose internal INT PK in API responses**
- API must expose only `publicId` (transformed to `id` via `@Expose()`)
- Verify DTOs have `@Exclude()` on `id: number` field
### TypeScript Strict Rules
- **❌ ZERO `any` types allowed** — use proper types or `unknown` + narrowing (see the sketch below)
- **❌ ZERO `console.log`** — must use NestJS `Logger` (backend) or remove (frontend)
- **❌ NO `req: any` in controllers** — use `RequestWithUser` typed interface
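For the `unknown` + narrowing rule, a pattern reviewers can suggest in place of `any`:
```typescript
// Narrow an unknown error instead of typing it as `any`
function getErrorMessage(err: unknown): string {
  if (err instanceof Error) return err.message; // narrowed to Error
  if (typeof err === 'string') return err;      // narrowed to string
  return 'Unknown error';
}
```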
### Database & Architecture
- **❌ NO SQL Triggers for business logic** — use NestJS Service methods instead
- **❌ NO `.env` files in production** — use Docker environment variables
- **❌ NO direct table/column name invention** — verify against `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
### Security (ADR-016)
- Idempotency validation for critical `POST`/`PUT`/`PATCH` endpoints
- Two-phase file upload pattern (Upload → Temp → Commit → Permanent; sketch below)
- Input validation with class-validator (backend) and Zod (frontend)
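A minimal sketch of the two-phase commit step, assuming a generic object-storage client (all names hypothetical):
```typescript
// Phase 1 stores the upload under a temp key; phase 2 promotes it on commit
interface ObjectStorage {
  copy(from: string, to: string): Promise<void>;
  delete(key: string): Promise<void>;
}

export async function commitUpload(storage: ObjectStorage, tempKey: string): Promise<string> {
  const permanentKey = tempKey.replace(/^temp\//, 'permanent/');
  await storage.copy(tempKey, permanentKey);
  await storage.delete(tempKey); // temp copy is removed once committed
  return permanentKey;
}
```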
### Test Coverage Requirements
- **Backend Services:** 80% minimum
- **Backend Overall:** 70% minimum
- **Business Logic:** 80% minimum
Make sure to:
1. If exploring the codebase, call multiple tools in parallel for increased efficiency. Do not spend too much time exploring.
2. If you find any pre-existing bugs in the code, you should also report those since it's important for us to maintain general code quality for the user.
3. Do NOT report issues that are speculative or low-confidence. All your conclusions should be based on a complete understanding of the codebase.
4. Remember that if you were given a specific git commit, it may not be checked out and local code states may be different.
-108
View File
@@ -1,108 +0,0 @@
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---
# Schema Change Workflow
Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md`: **NO TypeORM migrations**.
## Pre-Change Checklist
- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)
## Steps
1. **Read current schema** — load the full schema file:
```
specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
```
2. **Read data dictionary** — understand current field definitions:
```
specs/03-Data-and-Storage/03-01-data-dictionary.md
```
// turbo
3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:
- Tables being modified/created
- Columns being added/renamed/dropped
- Foreign key relationships affected
- Indexes being added/modified
- Seed data impact (if any)
4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
- Add/modify table definitions
- Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
- Add inline comments for new columns explaining purpose
- Ensure `DEFAULT` values and `NOT NULL` constraints are correct
- Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed
> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.
5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
- Add new tables/columns with descriptions
- Update data types and constraints
- Document business rules for new fields
- Add enum value definitions if applicable
6. **Update seed data** (if applicable):
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions
7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
- Map ONLY columns defined in schema SQL
- Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
- Add `@VersionColumn()` if optimistic locking is needed
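A sketch of step 7's result for a hypothetical table (column names illustrative):
```typescript
import { Column, Entity, PrimaryGeneratedColumn, VersionColumn } from 'typeorm';

@Entity('correspondence')
export class Correspondence {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ type: 'varchar', length: 255 })
  subject: string;

  @VersionColumn() // matches the schema's `version` column for optimistic locking
  version: number;
}
```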
8. **Update DTOs** — if new columns are exposed via API:
- Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
- Add `class-validator` decorators for all new fields
- Never use `any` type
// turbo
9. **Run type check** — verify no TypeScript errors:
```bash
cd backend && npx tsc --noEmit
```
10. **Generate SQL diff** — create a summary of changes for the user to apply manually:
```
-- Schema Change Summary
-- Date: <current date>
-- Feature: <feature name>
-- Tables affected: <list>
--
-- ⚠️ Apply this SQL to the live database manually:
ALTER TABLE ...;
-- or
CREATE TABLE ...;
```
11. **Notify user** — present the SQL diff and remind them:
- Apply the SQL change to the live database manually
- Verify the change doesn't break existing data
- Run `pnpm test` after applying to confirm entity mappings work
## Common Patterns
| Change Type | Template |
| ----------- | -------------------------------------------------------------- |
| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table | Full `CREATE TABLE` with constraints and indexes |
| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |
## On Error
- If schema SQL has syntax errors → fix and re-validate with `tsc --noEmit`
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
-27
View File
@@ -1,27 +0,0 @@
---
description: Execute the full preparation pipeline (Specify -> Clarify -> Plan -> Tasks -> Analyze) in sequence.
---
# Workflow: speckit.prepare
This workflow orchestrates the sequential execution of the Speckit preparation phase skills (02-06).
1. **Step 1: Specify (Skill 02)**
- Goal: Create or update the `spec.md` based on user input.
- Action: Read and execute `.agents/skills/speckit.specify/SKILL.md`.
2. **Step 2: Clarify (Skill 03)**
- Goal: Refine the `spec.md` by identifying and resolving ambiguities.
- Action: Read and execute `.agents/skills/speckit.clarify/SKILL.md`.
3. **Step 3: Plan (Skill 04)**
- Goal: Generate `plan.md` from the finalized spec.
- Action: Read and execute `.agents/skills/speckit.plan/SKILL.md`.
4. **Step 4: Tasks (Skill 05)**
- Goal: Generate actionable `tasks.md` from the plan.
- Action: Read and execute `.agents/skills/speckit.tasks/SKILL.md`.
5. **Step 5: Analyze (Skill 06)**
- Goal: Validate consistency across all design artifacts (spec, plan, tasks).
- Action: Read and execute `.agents/skills/speckit.analyze/SKILL.md`.
@@ -1,18 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---
# Workflow: speckit.checklist
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.checklist/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit.specify` first to create the feature specification
@@ -1,19 +0,0 @@
---
description: Compare two versions of a spec or plan to highlight changes.
---
# Workflow: speckit.diff
1. **Context Analysis**:
- The user has provided an input prompt (optional file paths or version references).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.diff/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to compare: Use current feature's `spec.md` vs git HEAD
- If `spec.md` doesn't exist: Run `/speckit.specify` first
@@ -1,19 +0,0 @@
---
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
---
# Workflow: speckit.migrate
1. **Context Analysis**:
- The user has provided an input prompt (path to analyze, feature name).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.migrate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If path doesn't exist: Ask user to provide valid directory path
- If no code found: Report that no analyzable code was detected
@@ -1,20 +0,0 @@
---
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
---
// turbo-all
# Workflow: speckit.quizme
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.quizme/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If required files don't exist, inform the user which prerequisite workflow to run first (e.g., `/speckit.specify` to create `spec.md`).
@@ -1,20 +0,0 @@
---
description: Display a dashboard showing feature status, completion percentage, and blockers.
---
// turbo-all
# Workflow: speckit.status
1. **Context Analysis**:
- The user may optionally specify a feature to focus on.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.status/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no features exist: Report "No features found. Run `/speckit.specify` to create your first feature."
@@ -1,18 +0,0 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
---
# Workflow: speckit.taskstoissues
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit.taskstoissues/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit.tasks` first
+180 -28
View File
@@ -2,7 +2,7 @@
 > **The Event Horizon of Software Quality.**
 > _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
-> _Version: 1.2.0 — LCBP3-DMS Edition (v1.8.1 UAT Ready)_
+> _Version: 1.8.6 — LCBP3-DMS Edition (v1.8.6 Production Ready)_
 ---
@@ -55,7 +55,7 @@ Some skills and scripts reference a `.specify/` directory for templates and proj
 The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.
 ```text
-.agents/
+.agents/                                 # Agent Skills & Rules
 ├── skills/                              # @ Mentions (Agent Intelligence)
 │   ├── nestjs-best-practices/           # NestJS Architecture Patterns
 │   ├── next-best-practices/             # Next.js App Router Patterns
@@ -78,32 +78,37 @@ The toolkit is organized into modular components that provide both the logic (Sc
 │   ├── speckit-tester/                  # Test Runner & Coverage
 │   └── speckit-validate/                # Implementation Validator
-├── workflows/                           # / Slash Commands (Orchestration)
-│   ├── 00-speckit-all.md                # Full Pipeline (10 steps: Specify → Validate)
-│   ├── 0111-speckit-*.md                # Individual phase workflows
-│   ├── speckit-prepare.md               # Prep Pipeline (5 steps: Specify → Analyze)
-│   ├── schema-change.md                 # DB Schema Change (ADR-009)
-│   ├── create-backend-module.md         # NestJS Module Scaffolding
-│   ├── create-frontend-page.md          # Next.js Page Scaffolding
-│   ├── deploy.md                        # Deployment via Gitea CI/CD
-│   └── util-speckit-*.md                # Utilities (checklist, diff, migrate, etc.)
+├── rules/                               # Project Context & Validation Rules
+│   ├── 00-project-context.md            # Role, Persona, Rule Tiers
+│   ├── 01-adr-019-uuid.md               # UUID Strategy (Critical)
+│   ├── 02-security.md                   # Security Requirements
+│   ├── 03-typescript.md                 # TypeScript Standards
+│   ├── 04-domain-terminology.md         # DMS Glossary Compliance
+│   ├── 05-forbidden-actions.md          # Critical Prohibited Patterns
+│   ├── 06-backend-patterns.md           # NestJS Architecture Rules
+│   ├── 07-frontend-patterns.md          # Next.js App Router Rules
+│   ├── 08-development-flow.md           # Development Workflow
+│   ├── 09-commit-checklist.md           # Pre-commit Validation
+│   ├── 10-error-handling.md             # ADR-007 Compliance
+│   └── 11-ai-integration.md             # ADR-018/020 AI Boundaries
 └── scripts/
     ├── bash/                            # Bash Core (Kinetic logic)
+    │   ├── common.sh                    # Shared utilities & path resolution
+    │   ├── check-prerequisites.sh       # Prerequisite validation
+    │   ├── create-new-feature.sh        # Feature branch creation
+    │   ├── setup-plan.sh                # Plan template setup
+    │   ├── update-agent-context.sh      # Agent file updater (main)
+    │   ├── plan-parser.sh               # Plan data extraction (module)
+    │   ├── content-generator.sh         # Language-specific templates (module)
+    │   └── agent-registry.sh            # 17-agent type registry (module)
     ├── powershell/                      # PowerShell Equivalents (Windows-native)
+    │   ├── common.ps1                   # Shared utilities & prerequisites
+    │   └── create-new-feature.ps1       # Feature branch creation
     ├── fix_links.py                     # Spec link fixer
     ├── verify_links.py                  # Spec link verifier
     └── start-mcp.js                     # MCP server launcher
+.windsurf/workflows/                     # / Slash Commands (Orchestration)
+├── 00-speckit.all.md                    # Full Pipeline (10 steps: Specify → Validate)
+├── 0111-speckit-*.md                    # Individual phase workflows
+├── speckit-prepare.md                   # Prep Pipeline (5 steps: Specify → Analyze)
+├── schema-change.md                     # DB Schema Change (ADR-009)
+├── create-backend-module.md             # NestJS Module Scaffolding
+├── create-frontend-page.md              # Next.js Page Scaffolding
+├── deploy.md                            # Deployment via Gitea CI/CD
+├── review.md                            # Code Review Workflow
+└── util-speckit-*.md                    # Utilities (checklist, diff, migrate, etc.)
 ```
--- ---
@@ -254,19 +259,19 @@ If you change your mind mid-project:
 ---
-## 🏗️ LCBP3-DMS Project Notes (v1.8.1)
+## 🏗️ LCBP3-DMS Project Notes (v1.8.6)
-### 📊 Current Status: UAT Ready (2026-03-11)
+### 📊 Current Status: Production Ready (2026-04-14)
 | Area          | Status                          |
 | ------------- | ------------------------------- |
 | Backend       | ✅ 18 Modules, Production Ready |
 | Frontend      | ✅ 100% Complete                |
-| Database      | ✅ Schema v1.8.0 Stable         |
+| Database      | ✅ Schema v1.8.6 Stable         |
 | Documentation | ✅ **10/10 Gaps Closed**        |
-| AI Migration  | 🔄 Pre-migration Setup (n8n + Ollama) |
+| AI Migration  | ✅ Ollama Integration Complete  |
-| UAT           | 🔄 In Progress                  |
+| UAT           | ✅ Completed Successfully       |
-| Deployment    | 📋 Pending Go-Live              |
+| Deployment    | Production Deployed             |
 ### 📁 Key Spec Files (Always Check Before Writing Code)
@@ -300,4 +305,151 @@ If you change your mind mid-project:
---
## 🔧 Troubleshooting
### Common Issues & Solutions
#### **Version Inconsistency Errors**
**Problem**: Scripts report version mismatches between files.
**Solution**:
```bash
# Run version validation
./scripts/bash/validate-versions.sh
# Fix by updating all files to v1.8.6
# Then re-run validation to confirm
```
**Files to check**:
- `.agents/README.md`
- `.agents/skills/VERSION`
- `.agents/rules/00-project-context.md`
- `.agents/skills/skills.md`
#### **Missing Workflow Files**
**Problem**: Workflows not found in `.windsurf/workflows/`.
**Solution**:
```bash
# Sync workflow check
./scripts/bash/sync-workflows.sh
# Verify all 23 expected workflows are present
# Create missing ones from templates if needed
```
#### **Skill Health Issues**
**Problem**: Skills missing SKILL.md or required sections.
**Solution**:
```bash
# Run comprehensive skill audit
./scripts/bash/audit-skills.sh
# Check specific skill issues
# Missing files will be listed with specific errors
```
**Required SKILL.md sections**:
- Front matter: `name`, `description`, `version`
- Content: `## Role`, `## Task`
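A skeleton that satisfies the audit (name and version illustrative):
```markdown
---
name: speckit-example
description: One-line summary of what the skill does
version: 1.8.6
---
## Role
...
## Task
...
```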
#### **Script Permission Issues**
**Problem**: Bash scripts not executable.
**Solution**:
```bash
# Make scripts executable
chmod +x .agents/scripts/bash/*.sh
# Verify with
ls -la .agents/scripts/bash/
```
#### **PowerShell Execution Policy**
**Problem**: PowerShell scripts blocked by execution policy.
**Solution**:
```powershell
# Check current policy
Get-ExecutionPolicy
# Allow scripts for current user
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Or run bypass for single script
PowerShell -ExecutionPolicy Bypass -File .agents/scripts/powershell/audit-skills.ps1
```
### Debug Mode
**Enable verbose output**:
```bash
# Run scripts with debug info
bash -x .agents/scripts/bash/audit-skills.sh
# PowerShell with verbose output
$VerbosePreference = "Continue"
. .agents/scripts/powershell/audit-skills.ps1
```
### Health Check Commands
**Quick health assessment**:
```bash
# 1. Check versions
./scripts/bash/validate-versions.sh
# 2. Audit skills
./scripts/bash/audit-skills.sh
# 3. Sync workflows
./scripts/bash/sync-workflows.sh
# 4. Check directory structure
find .agents -type f -name "*.md" | wc -l
find .windsurf/workflows -name "*.md" | wc -l
```
**PowerShell equivalent**:
```powershell
# 1. Check versions
. .agents/scripts/powershell/validate-versions.ps1
# 2. Audit skills
. .agents/scripts/powershell/audit-skills.ps1
# 3. Count files
(Get-ChildItem -Path .agents -Recurse -Filter "*.md").Count
(Get-ChildItem -Path .windsurf/workflows -Filter "*.md").Count
```
### Getting Help
**If issues persist**:
1. Check LCBP3 project version alignment
2. Verify `.specify/` directory structure (if using templates)
3. Ensure all dependencies are installed (bash, powershell core)
4. Review the specific error messages in script output
5. Check this README for workflow path updates (`.windsurf/workflows`)
---
_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._
@@ -24,7 +24,7 @@ Every response must be **precise**, **spec-compliant**, and **production-ready**
 ## Project Information
 - **Project:** NAP-DMS (LCBP3)
-- **Version:** 1.8.5
+- **Version:** 1.8.6
 - **Stack:** NestJS + Next.js + TypeScript + MariaDB + Ollama (AI)
 - **Repo:** https://git.np-dms.work/np-dms/lcbp3
+571
View File
@@ -0,0 +1,571 @@
#!/usr/bin/env node
/**
* advanced-validator.js - Advanced validation capabilities for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Advanced validation class
class AdvancedValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
validations: {},
summary: {
total_validations: 0,
passed_validations: 0,
failed_validations: 0,
warnings: 0,
critical_issues: 0
}
};
this.criticalIssues = [];
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
validateSkillFrontMatter(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'SKILL.md file not found',
path: skillMdPath
});
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (!frontMatterMatch) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'No front matter found',
path: skillMdPath
});
return false;
}
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
const requiredFields = ['name', 'description', 'version'];
const missingFields = requiredFields.filter(field => !frontMatter[field]);
if (missingFields.length > 0) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Missing required fields: ${missingFields.join(', ')}`,
missing_fields: missingFields,
front_matter: frontMatter,
path: skillMdPath
});
return false;
}
// Validate version format
const versionPattern = /^\d+\.\d+\.\d+$/;
if (!versionPattern.test(frontMatter.version)) {
this.addValidationResult(`skill_${skillName}_version_format`, 'warn', {
message: 'Version format should be X.Y.Z',
version: frontMatter.version,
path: skillMdPath
});
}
// Validate dependencies if present
if (frontMatter['depends-on']) {
const dependencies = Array.isArray(frontMatter['depends-on'])
? frontMatter['depends-on']
: [frontMatter['depends-on']];
for (const dep of dependencies) {
const depPath = path.join(SKILLS_DIR, dep);
if (!fs.existsSync(depPath)) {
this.addValidationResult(`skill_${skillName}_dependency_${dep}`, 'critical', {
message: `Dependency not found: ${dep}`,
dependency: dep,
path: skillMdPath
});
}
}
}
this.addValidationResult(`skill_${skillName}_frontmatter`, 'pass', {
message: 'Front matter is valid',
front_matter: frontMatter,
path: skillMdPath
});
return true;
} catch (yamlError) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Invalid YAML in front matter: ${yamlError.message}`,
path: skillMdPath
});
return false;
}
} catch (error) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Error reading SKILL.md: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateSkillContent(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Check for required sections
const requiredSections = ['## Role', '## Task'];
const missingSections = requiredSections.filter(section => !content.includes(section));
if (missingSections.length > 0) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Missing required sections: ${missingSections.join(', ')}`,
missing_sections: missingSections,
path: skillMdPath
});
return false;
}
// Check for forbidden patterns
const forbiddenPatterns = [
{ pattern: /TODO.*FIX/gi, message: 'TODO items should be resolved' },
{ pattern: /FIXME/gi, message: 'FIXME items should be addressed' },
{ pattern: /XXX/gi, message: 'XXX markers should be replaced' }
];
for (const { pattern, message } of forbiddenPatterns) {
if (pattern.test(content)) {
this.addValidationResult(`skill_${skillName}_forbidden_patterns`, 'warn', {
message: `${message} found in content`,
pattern: pattern.toString(),
path: skillMdPath
});
}
}
// Validate content length
const contentLength = content.length;
if (contentLength < 500) {
this.addValidationResult(`skill_${skillName}_content_length`, 'warn', {
message: 'Skill content seems too short',
length: contentLength,
path: skillMdPath
});
}
this.addValidationResult(`skill_${skillName}_content`, 'pass', {
message: 'Skill content is valid',
length: contentLength,
path: skillMdPath
});
return true;
} catch (error) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Error validating content: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateWorkflowStructure(workflowPath, workflowName) {
if (!fs.existsSync(workflowPath)) {
this.addValidationResult(`workflow_${workflowName}_exists`, 'fail', {
message: 'Workflow file not found',
path: workflowPath
});
return false;
}
try {
const content = fs.readFileSync(workflowPath, 'utf8');
// Check for markdown headers
if (!content.includes('#')) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: 'No markdown headers found',
path: workflowPath
});
return false;
}
// Check for workflow-specific patterns
const hasWorkflowContent = content.length > 200;
if (!hasWorkflowContent) {
this.addValidationResult(`workflow_${workflowName}_content`, 'warn', {
message: 'Workflow content seems too short',
length: content.length,
path: workflowPath
});
}
// Validate skill references
const skillReferences = content.match(/@speckit-\w+/g) || [];
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
const skillPath = path.join(SKILLS_DIR, skillName);
if (!fs.existsSync(skillPath)) {
this.addValidationResult(`workflow_${workflowName}_skill_ref_${skillName}`, 'critical', {
message: `Workflow references non-existent skill: ${skillRef}`,
skill_reference: skillRef,
path: workflowPath
});
}
}
this.addValidationResult(`workflow_${workflowName}_structure`, 'pass', {
message: 'Workflow structure is valid',
skill_references: skillReferences,
path: workflowPath
});
return true;
} catch (error) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: `Error validating workflow: ${error.message}`,
path: workflowPath
});
return false;
}
}
validateCrossReferences() {
this.log('Validating cross-references...', 'info');
// Check README.md references
const readmePath = path.join(AGENTS_DIR, 'README.md');
if (fs.existsSync(readmePath)) {
const readmeContent = fs.readFileSync(readmePath, 'utf8');
// Check if README references correct workflow path
if (readmeContent.includes('.agents/workflows') && !readmeContent.includes('.windsurf/workflows')) {
this.addValidationResult('readme_workflow_reference', 'critical', {
message: 'README.md references .agents/workflows instead of .windsurf/workflows',
path: readmePath
});
}
// Check version consistency in README
const versionMatches = readmeContent.match(/v?(\d+\.\d+\.\d+)/g) || [];
const uniqueVersions = [...new Set(versionMatches)];
if (uniqueVersions.length > 1) {
this.addValidationResult('readme_version_consistency', 'warn', {
message: 'Multiple versions found in README.md',
versions: uniqueVersions,
path: readmePath
});
}
}
// Check skills.md references
const skillsMdPath = path.join(SKILLS_DIR, 'skills.md');
if (fs.existsSync(skillsMdPath)) {
const skillsContent = fs.readFileSync(skillsMdPath, 'utf8');
// Validate skill dependency matrix
if (skillsContent.includes('## Skill Dependency Matrix')) {
this.addValidationResult('skills_dependency_matrix', 'pass', {
message: 'Skills documentation includes dependency matrix',
path: skillsMdPath
});
} else {
this.addValidationResult('skills_dependency_matrix', 'warn', {
message: 'Skills documentation missing dependency matrix',
path: skillsMdPath
});
}
}
}
validateSecurityCompliance() {
this.log('Validating security compliance...', 'info');
// Check for security patterns in rules
const securityRulePath = path.join(AGENTS_DIR, 'rules', '02-security.md');
if (fs.existsSync(securityRulePath)) {
const securityContent = fs.readFileSync(securityRulePath, 'utf8');
const requiredSecurityTopics = [
'authentication',
'authorization',
'rbac',
'validation',
'audit'
];
const missingTopics = requiredSecurityTopics.filter(topic =>
!securityContent.toLowerCase().includes(topic.toLowerCase())
);
if (missingTopics.length > 0) {
this.addValidationResult('security_rules_completeness', 'warn', {
message: `Security rules missing topics: ${missingTopics.join(', ')}`,
missing_topics: missingTopics,
path: securityRulePath
});
} else {
this.addValidationResult('security_rules_completeness', 'pass', {
message: 'Security rules cover all required topics',
path: securityRulePath
});
}
}
// Check for ADR-019 compliance in rules
const uuidRulePath = path.join(AGENTS_DIR, 'rules', '01-adr-019-uuid.md');
if (fs.existsSync(uuidRulePath)) {
const uuidContent = fs.readFileSync(uuidRulePath, 'utf8');
const criticalUuidRules = [
'parseInt',
'Number(',
'publicId',
'@Exclude()'
];
const missingRules = criticalUuidRules.filter(rule =>
!uuidContent.includes(rule)
);
if (missingRules.length > 0) {
this.addValidationResult('uuid_rules_completeness', 'critical', {
message: `UUID rules missing critical patterns: ${missingRules.join(', ')}`,
missing_patterns: missingRules,
path: uuidRulePath
});
} else {
this.addValidationResult('uuid_rules_completeness', 'pass', {
message: 'UUID rules cover all critical patterns',
path: uuidRulePath
});
}
}
}
validatePerformanceMetrics() {
this.log('Validating performance metrics...', 'info');
// Check file sizes
const criticalFiles = [
{ path: path.join(AGENTS_DIR, 'README.md'), name: 'README.md' },
{ path: path.join(SKILLS_DIR, 'skills.md'), name: 'skills.md' },
{ path: path.join(AGENTS_DIR, 'skills', 'VERSION'), name: 'VERSION' }
];
for (const file of criticalFiles) {
if (fs.existsSync(file.path)) {
const stats = fs.statSync(file.path);
const sizeKB = stats.size / 1024;
if (sizeKB > 100) {
this.addValidationResult(`file_size_${file.name}`, 'warn', {
message: `File ${file.name} is large (${sizeKB.toFixed(1)}KB)`,
size_kb: sizeKB,
path: file.path
});
} else {
this.addValidationResult(`file_size_${file.name}`, 'pass', {
message: `File ${file.name} size is acceptable`,
size_kb: sizeKB,
path: file.path
});
}
}
}
// Check directory structure depth
function getDirectoryDepth(dirPath, currentDepth = 0) {
let maxDepth = currentDepth;
if (fs.existsSync(dirPath)) {
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
if (fs.statSync(itemPath).isDirectory()) {
const depth = getDirectoryDepth(itemPath, currentDepth + 1);
maxDepth = Math.max(maxDepth, depth);
}
}
}
return maxDepth;
}
const agentsDepth = getDirectoryDepth(AGENTS_DIR);
if (agentsDepth > 5) {
this.addValidationResult('directory_depth', 'warn', {
message: `.agents directory structure is deep (${agentsDepth} levels)`,
depth: agentsDepth,
path: AGENTS_DIR
});
} else {
this.addValidationResult('directory_depth', 'pass', {
message: `.agents directory structure depth is acceptable`,
depth: agentsDepth,
path: AGENTS_DIR
});
}
}
addValidationResult(name, status, details) {
this.validationResults.validations[name] = {
status,
timestamp: new Date().toISOString(),
...details
};
this.validationResults.summary.total_validations++;
switch (status) {
case 'pass':
this.validationResults.summary.passed_validations++;
this.log(`${name}: PASS - ${details.message}`, 'pass');
break;
case 'fail':
this.validationResults.summary.failed_validations++;
this.log(`${name}: FAIL - ${details.message}`, 'fail');
break;
case 'warn':
this.validationResults.summary.warnings++;
this.log(`${name}: WARN - ${details.message}`, 'warn');
break;
case 'critical':
this.validationResults.summary.critical_issues++;
this.criticalIssues.push({ name, ...details });
this.log(`${name}: CRITICAL - ${details.message}`, 'critical');
break;
}
}
async runAdvancedValidation() {
this.log('Starting advanced validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Validate all skills
this.log('Validating skills...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.validateSkillFrontMatter(skillPath, skillDir);
this.validateSkillContent(skillPath, skillDir);
}
}
// Validate all workflows
this.log('Validating workflows...', 'info');
if (fs.existsSync(WORKFLOWS_DIR)) {
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.validateWorkflowStructure(workflowPath, workflowName);
}
}
// Cross-reference validation
this.validateCrossReferences();
// Security compliance validation
this.validateSecurityCompliance();
// Performance metrics validation
this.validatePerformanceMetrics();
// Generate summary
this.generateSummary();
return this.validationResults;
}
generateSummary() {
const { summary } = this.validationResults;
const critical_issues = this.criticalIssues; // critical issues live on the instance, not in validationResults
this.log('=== Advanced Validation Summary ===', 'info');
this.log(`Total validations: ${summary.total_validations}`, 'info');
this.log(`Passed: ${summary.passed_validations}`, 'pass');
this.log(`Failed: ${summary.failed_validations}`, summary.failed_validations > 0 ? 'fail' : 'info');
this.log(`Warnings: ${summary.warnings}`, 'warn');
this.log(`Critical issues: ${summary.critical_issues}`, 'critical');
if (critical_issues.length > 0) {
this.log('Critical Issues:', 'critical');
critical_issues.forEach(issue => {
this.log(` - ${issue.name}: ${issue.message}`, 'critical');
});
}
// Save validation results
const validationReportPath = path.join(AGENTS_DIR, 'reports', 'advanced-validation.json');
const reportsDir = path.dirname(validationReportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(validationReportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Advanced validation report saved to: ${validationReportPath}`, 'info');
}
}
// CLI interface
async function main() {
const validator = new AdvancedValidator();
try {
const results = await validator.runAdvancedValidation();
process.exit(results.summary.critical_issues > 0 ? 1 : 0);
} catch (error) {
console.error('Advanced validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { AdvancedValidator };
// Run if called directly
if (require.main === module) {
main();
}
+188
View File
@@ -0,0 +1,188 @@
#!/bin/bash
# audit-skills.sh - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
SKILLS_DIR="$AGENTS_DIR/skills"
echo "=== Skills Health Audit ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if skill has required files
check_skill_health() {
local skill_dir="$1"
local skill_name="$(basename "$skill_dir")"
local issues=0
# Check for SKILL.md
if [[ -f "$skill_dir/SKILL.md" ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/SKILL.md"
else
echo -e "${RED} MISSING${NC}: $skill_name/SKILL.md"
issues=$((issues + 1)) # note: ((issues++)) exits nonzero under set -e when issues is 0
fi
# Check for templates directory (optional)
if [[ -d "$skill_dir/templates" ]]; then
template_count=$(find "$skill_dir/templates" -name "*.md" -type f | wc -l)
if [[ $template_count -gt 0 ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/templates ($template_count files)"
else
echo -e "${YELLOW} EMPTY${NC}: $skill_name/templates (no files)"
fi
fi
# Check SKILL.md content if exists
local skill_file="$skill_dir/SKILL.md"
if [[ -f "$skill_file" ]]; then
# Check for required front matter fields
local required_fields=("name" "description" "version")
for field in "${required_fields[@]}"; do
if grep -q "^$field:" "$skill_file"; then
echo -e " ${GREEN} FIELD${NC}: $field"
else
echo -e " ${RED} MISSING FIELD${NC}: $field"
            issues=$((issues + 1))
fi
done
# Check for Role section
if grep -q "^## Role$" "$skill_file"; then
echo -e " ${GREEN} SECTION${NC}: Role"
else
echo -e " ${YELLOW} MISSING SECTION${NC}: Role"
        issues=$((issues + 1))
fi
# Check for Task section
if grep -q "^## Task$" "$skill_file"; then
echo -e " ${GREEN} SECTION${NC}: Task"
else
echo -e " ${YELLOW} MISSING SECTION${NC}: Task"
        issues=$((issues + 1))
fi
fi
return $issues
}
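# Illustrative sketch of the SKILL.md shape the checks above expect
# (field names are the ones checked; values are hypothetical):
#   ---
#   name: speckit-plan
#   description: Produce an implementation plan from a spec
#   version: 1.8.6
#   ---
#   ## Role
#   ## Task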
# Function to get skill version from SKILL.md
get_skill_version() {
local skill_file="$1"
if [[ -f "$skill_file" ]]; then
grep "^version:" "$skill_file" | head -1 | sed 's/version: *//' || echo "unknown"
else
echo "no_file"
fi
}
# Check skills directory
if [[ ! -d "$SKILLS_DIR" ]]; then
echo -e "${RED}ERROR: Skills directory not found${NC}"
exit 1
fi
echo "Scanning skills directory: $SKILLS_DIR"
echo
# Get all skill directories
SKILL_DIRS=()
while IFS= read -r -d '' dir; do
SKILL_DIRS+=("$dir")
done < <(find "$SKILLS_DIR" -maxdepth 1 -type d -not -path "$SKILLS_DIR" -print0 | sort -z)
echo "Found ${#SKILL_DIRS[@]} skill directories"
echo
# Audit each skill
TOTAL_ISSUES=0
SKILL_SUMMARY=()
for skill_dir in "${SKILL_DIRS[@]}"; do
skill_name="$(basename "$skill_dir")"
echo "Auditing: $skill_name"
echo "------------------------"
    # Guard the call so `set -e` does not abort when a skill reports issues
    issues=0
    check_skill_health "$skill_dir" || issues=$?
skill_version=$(get_skill_version "$skill_dir/SKILL.md")
SKILL_SUMMARY+=("$skill_name:$issues:$skill_version")
TOTAL_ISSUES=$((TOTAL_ISSUES + issues))
echo
done
# Summary report
echo "=== Skills Audit Summary ==="
echo
echo "Skill Status:"
echo "-----------"
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ $issues -eq 0 ]]; then
echo -e "${GREEN} HEALTHY${NC}: $name (v$version)"
else
echo -e "${RED} ISSUES${NC}: $name (v$version) - $issues issues"
fi
done
echo
# Check the global skills/VERSION file for consistency
SKILLS_VERSION_FILE="$SKILLS_DIR/VERSION"
if [[ -f "$SKILLS_VERSION_FILE" ]]; then
global_version=$(grep "^version:" "$SKILLS_VERSION_FILE" | sed 's/version: *//')
echo "Global skills version: v$global_version"
echo
# Check for version mismatches
echo "Version Consistency Check:"
echo "------------------------"
VERSION_MISMATCHES=0
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ "$version" != "unknown" && "$version" != "no_file" && "$version" != "$global_version" ]]; then
echo -e "${YELLOW} MISMATCH${NC}: $name is v$version, global is v$global_version"
            VERSION_MISMATCHES=$((VERSION_MISMATCHES + 1))
fi
done
if [[ $VERSION_MISMATCHES -eq 0 ]]; then
echo -e "${GREEN} All skills match global version${NC}"
fi
fi
echo
# Overall health
if [[ $TOTAL_ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All skills healthy ===${NC}"
echo "Total skills: ${#SKILL_DIRS[@]}"
exit 0
else
echo -e "${RED}=== ISSUES FOUND: $TOTAL_ISSUES total issues ===${NC}"
echo
echo "Recommendations:"
echo "1. Fix missing SKILL.md files"
echo "2. Add required front matter fields"
echo "3. Ensure Role and Task sections exist"
echo "4. Align skill versions with global version"
exit 1
fi
+149
View File
@@ -0,0 +1,149 @@
#!/bin/bash
# sync-workflows.sh - Sync workflow references between .agents and .windsurf
# Part of LCBP3-DMS Phase 2 improvements
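#
# Usage (illustrative):
#   bash .agents/scripts/bash/sync-workflows.sh
# Exits 0 when every expected workflow exists, 1 when any is missing.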
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
WINDSURF_DIR="$BASE_DIR/.windsurf"
WORKFLOWS_DIR="$WINDSURF_DIR/workflows"
echo "=== Workflow Synchronization Check ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if workflow exists
check_workflow() {
local workflow_name="$1"
local workflow_file="$WORKFLOWS_DIR/$workflow_name"
if [[ -f "$workflow_file" ]]; then
echo -e "${GREEN} EXISTS${NC}: $workflow_name"
return 0
else
echo -e "${RED} MISSING${NC}: $workflow_name"
return 1
fi
}
# Function to list all workflows
list_workflows() {
if [[ -d "$WORKFLOWS_DIR" ]]; then
find "$WORKFLOWS_DIR" -name "*.md" -type f | sort
else
echo "No workflows directory found"
fi
}
# Check directories
echo "Checking directory structure..."
if [[ -d "$AGENTS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .agents directory exists"
else
echo -e "${RED} ERROR${NC}: .agents directory not found"
exit 1
fi
if [[ -d "$WINDSURF_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .windsurf directory exists"
else
echo -e "${RED} ERROR${NC}: .windsurf directory not found"
exit 1
fi
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: workflows directory exists"
else
echo -e "${RED} ERROR${NC}: workflows directory not found"
exit 1
fi
echo
# Expected workflows based on README documentation
echo "Checking expected workflows..."
EXPECTED_WORKFLOWS=(
"00-speckit.all.md"
"01-speckit.constitution.md"
"02-speckit.specify.md"
"03-speckit.clarify.md"
"04-speckit.plan.md"
"05-speckit.tasks.md"
"06-speckit.analyze.md"
"07-speckit.implement.md"
"08-speckit.checker.md"
"09-speckit.tester.md"
"10-speckit.reviewer.md"
"11-speckit.validate.md"
"speckit.prepare.md"
"schema-change.md"
"create-backend-module.md"
"create-frontend-page.md"
"deploy.md"
"review.md"
"util-speckit.checklist.md"
"util-speckit.diff.md"
"util-speckit.migrate.md"
"util-speckit.quizme.md"
"util-speckit.status.md"
"util-speckit.taskstoissues.md"
)
MISSING_WORKFLOWS=0
for workflow in "${EXPECTED_WORKFLOWS[@]}"; do
if ! check_workflow "$workflow"; then
        MISSING_WORKFLOWS=$((MISSING_WORKFLOWS + 1)) # arithmetic ++ would trip set -e at zero
fi
done
echo
# List all actual workflows
echo "All workflows in $WORKFLOWS_DIR:"
echo "--------------------------------"
while IFS= read -r workflow; do
echo " $(basename "$workflow")"
done < <(list_workflows)
echo
# Check for orphaned workflows (unexpected ones)
echo "Checking for unexpected workflows..."
ACTUAL_WORKFLOWS=()
while IFS= read -r workflow; do
ACTUAL_WORKFLOWS+=("$(basename "$workflow")")
done < <(list_workflows)
for actual_workflow in "${ACTUAL_WORKFLOWS[@]}"; do
if [[ ! " ${EXPECTED_WORKFLOWS[*]} " =~ " ${actual_workflow} " ]]; then
echo -e "${YELLOW} UNEXPECTED${NC}: $actual_workflow"
fi
done
echo
# Summary
if [[ $MISSING_WORKFLOWS -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All expected workflows present ===${NC}"
echo "Total workflows: ${#ACTUAL_WORKFLOWS[@]}"
exit 0
else
echo -e "${RED}=== FAILED: $MISSING_WORKFLOWS workflows missing ===${NC}"
echo
echo "To fix missing workflows:"
echo "1. Create missing workflow files in $WORKFLOWS_DIR"
echo "2. Use existing workflows as templates"
echo "3. Run this script again to verify"
exit 1
fi
+108
View File
@@ -0,0 +1,108 @@
#!/bin/bash
# validate-versions.sh - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
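#
# Usage (illustrative; this is how ci-hooks.sh invokes it):
#   bash .agents/scripts/bash/validate-versions.sh
# Exits 0 when all tracked files carry the expected version, 1 otherwise.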
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# Expected version (should match LCBP3 version)
EXPECTED_VERSION="1.8.6"
echo "=== .agents Version Validation ==="
echo "Base directory: $BASE_DIR"
echo "Expected version: $EXPECTED_VERSION"
echo
# Function to extract version from file
extract_version() {
local file="$1"
local pattern="$2"
if [[ -f "$file" ]]; then
grep -o "$pattern" "$file" | head -1 | sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/' || echo "NOT_FOUND"
else
echo "FILE_NOT_FOUND"
fi
}
# Files to check
declare -A FILES_TO_CHECK=(
["$AGENTS_DIR/README.md"]="Version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
["$AGENTS_DIR/skills/VERSION"]="version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
["$AGENTS_DIR/rules/00-project-context.md"]="Version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
["$AGENTS_DIR/skills/skills.md"]="V\([0-9]\+\.[0-9]\+\.[0-9]\+\)"
)
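# Example lines the patterns above are meant to match (values illustrative):
#   README.md:                   "Version: 1.8.6"
#   skills/VERSION:              "version: 1.8.6"
#   rules/00-project-context.md: "Version: 1.8.6"
#   skills/skills.md:            "V1.8.6"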
# Track issues
ISSUES=0
echo "Checking version consistency..."
echo
for file in "${!FILES_TO_CHECK[@]}"; do
pattern="${FILES_TO_CHECK[$file]}"
relative_path="${file#$BASE_DIR/}"
version=$(extract_version "$file" "$pattern")
if [[ "$version" == "NOT_FOUND" ]] || [[ "$version" == "FILE_NOT_FOUND" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Version not found"
        ISSUES=$((ISSUES + 1)) # arithmetic ++ would trip set -e at zero
elif [[ "$version" != "$EXPECTED_VERSION" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Found v$version, expected v$EXPECTED_VERSION"
        ISSUES=$((ISSUES + 1))
else
echo -e "${GREEN} OK${NC}: $relative_path - v$version"
fi
done
echo
# Check for version mismatches in skill files
echo "Checking skill file versions..."
SKILL_VERSIONS_FILE="$AGENTS_DIR/skills/VERSION"
if [[ -f "$SKILL_VERSIONS_FILE" ]]; then
skills_version=$(extract_version "$SKILL_VERSIONS_FILE" "version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)")
echo "Skills version file: v$skills_version"
fi
# Count workflow files (in .windsurf/workflows); individual workflow versions are not tracked
WORKFLOWS_DIR="$BASE_DIR/.windsurf/workflows"
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo "Checking workflow files..."
workflow_count=0
for workflow in "$WORKFLOWS_DIR"/*.md; do
if [[ -f "$workflow" ]]; then
workflow_count=$((workflow_count + 1))
fi
done
echo -e "${GREEN} OK${NC}: Found $workflow_count workflow files"
else
echo -e "${YELLOW} WARNING${NC}: Workflows directory not found at $WORKFLOWS_DIR"
fi
echo
# Summary
if [[ $ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All versions consistent ===${NC}"
exit 0
else
echo -e "${RED}=== FAILED: $ISSUES version issues found ===${NC}"
echo
echo "To fix version issues:"
echo "1. Update files to use v$EXPECTED_VERSION"
echo "2. Ensure LCBP3 project version matches"
echo "3. Run this script again to verify"
exit 1
fi
+516
View File
@@ -0,0 +1,516 @@
# ci-hooks.ps1 - Continuous integration hooks for .agents (PowerShell version)
# Part of LCBP3-DMS Phase 3 enhancements
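#
# Usage (illustrative):
#   .\ci-hooks.ps1 -Command pre-commit
#   .\ci-hooks.ps1 -Command install-hooks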
param(
[Parameter(Mandatory=$false)]
[ValidateSet("pre-commit", "pre-push", "ci-pipeline", "install-hooks", "help")]
[string]$Command = "help"
)
# Configuration
$BaseDir = Split-Path -Parent (Split-Path -Parent $PSScriptRoot)
$AgentsDir = Join-Path $BaseDir ".agents"
$CILogDir = Join-Path $AgentsDir "logs\ci"
$CIReportDir = Join-Path $AgentsDir "reports\ci"
# Ensure directories exist
if (-not (Test-Path $CILogDir)) { New-Item -ItemType Directory -Path $CILogDir -Force | Out-Null }
if (-not (Test-Path $CIReportDir)) { New-Item -ItemType Directory -Path $CIReportDir -Force | Out-Null }
# Console colors (Write-Host -ForegroundColor takes ConsoleColor names, not ANSI escapes)
$Colors = @{
    Red    = "Red"
    Green  = "Green"
    Yellow = "Yellow"
    Blue   = "Blue"
}
# Logging function
function Write-CILog {
param(
[string]$Level,
[string]$Message
)
$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$logFile = Join-Path $CILogDir "ci-$(Get-Date -Format 'yyyy-MM-dd').log"
"$timestamp [$Level] $Message" | Out-File -FilePath $logFile -Append
# Console output with colors
switch ($Level) {
"INFO" { Write-Host $Message -ForegroundColor $Colors.Blue }
"PASS" { Write-Host $Message -ForegroundColor $Colors.Green }
"WARN" { Write-Host $Message -ForegroundColor $Colors.Yellow }
"FAIL" { Write-Host $Message -ForegroundColor $Colors.Red }
default { Write-Host $Message }
}
}
# Pre-commit hook
function Invoke-PreCommitHook {
Write-CILog "INFO" "Running pre-commit validation..."
$exitCode = 0
# 1. Run version validation
Write-CILog "INFO" "Checking version consistency..."
$versionScript = Join-Path $AgentsDir "scripts\powershell\validate-versions.ps1"
    if (Test-Path $versionScript) {
        # A non-zero exit code does not throw, so check $LASTEXITCODE instead of try/catch
        & $versionScript *>> (Join-Path $CILogDir "pre-commit-versions.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Version validation passed"
        } else {
            Write-CILog "FAIL" "Version validation failed"
            $exitCode = 1
        }
    } else {
        Write-CILog "WARN" "Version validation script not found"
    }
# 2. Run skill audit
Write-CILog "INFO" "Auditing skills..."
$auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
    if (Test-Path $auditScript) {
        & $auditScript *>> (Join-Path $CILogDir "pre-commit-skills.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Skill audit passed"
        } else {
            Write-CILog "FAIL" "Skill audit failed"
            $exitCode = 1
        }
    } else {
        Write-CILog "WARN" "Skill audit script not found"
    }
# 3. Run integration tests (if Node.js available)
if (Get-Command node -ErrorAction SilentlyContinue) {
Write-CILog "INFO" "Running integration tests..."
$testScript = Join-Path $AgentsDir "tests\skill-integration.test.js"
        if (Test-Path $testScript) {
            node $testScript *>> (Join-Path $CILogDir "pre-commit-tests.log")
            if ($LASTEXITCODE -eq 0) {
                Write-CILog "PASS" "Integration tests passed"
            } else {
                Write-CILog "WARN" "Integration tests failed (non-blocking)"
            }
        } else {
            Write-CILog "WARN" "Integration test script not found"
        }
} else {
Write-CILog "WARN" "Node.js not available, skipping integration tests"
}
# 4. Check for forbidden patterns
Write-CILog "INFO" "Checking for forbidden patterns..."
$forbiddenPatterns = @("TODO", "FIXME", "XXX", "HACK")
$foundForbidden = $false
foreach ($pattern in $forbiddenPatterns) {
        $skillsDir = Join-Path $AgentsDir "skills"
        if (Test-Path $skillsDir) {
            # Select-String has no -Recurse parameter; recurse via Get-ChildItem.
            # Also avoid assigning to $matches, a PowerShell automatic variable.
            $patternHits = Get-ChildItem -Path $skillsDir -Filter *.md -Recurse |
                Select-String -Pattern $pattern
            if ($patternHits) {
                Write-CILog "WARN" "Found forbidden pattern: $pattern"
                $foundForbidden = $true
            }
        }
}
if (-not $foundForbidden) {
Write-CILog "PASS" "No forbidden patterns found"
}
# Generate pre-commit report
$reportFile = Join-Path $CIReportDir "pre-commit-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-commit"
exit_code = $exitCode
checks_performed = @(
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
)
log_files = @(
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-commit report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-commit validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-commit validation failed"
}
return $exitCode
}
# Pre-push hook
function Invoke-PrePushHook {
Write-CILog "INFO" "Running pre-push validation..."
$exitCode = 0
# 1. Full health check
Write-CILog "INFO" "Running full health check..."
if (Get-Command node -ErrorAction SilentlyContinue) {
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
        if (Test-Path $healthScript) {
            node $healthScript *>> (Join-Path $CILogDir "pre-push-health.log")
            if ($LASTEXITCODE -eq 0) {
                Write-CILog "PASS" "Health check passed"
            } else {
                Write-CILog "FAIL" "Health check failed"
                $exitCode = 1
            }
        } else {
            Write-CILog "WARN" "Health monitor script not found"
        }
    } else {
        Write-CILog "WARN" "Node.js not available, using basic health check"
        $auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
        if (Test-Path $auditScript) {
            & $auditScript *>> (Join-Path $CILogDir "pre-push-basic.log")
            if ($LASTEXITCODE -eq 0) {
                Write-CILog "PASS" "Basic health check passed"
            } else {
                Write-CILog "FAIL" "Basic health check failed"
                $exitCode = 1
            }
        }
}
# 2. Advanced validation (if available)
if (Get-Command node -ErrorAction SilentlyContinue) {
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
Write-CILog "INFO" "Running advanced validation..."
        node $advancedScript *>> (Join-Path $CILogDir "pre-push-advanced.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Advanced validation passed"
        } else {
            Write-CILog "WARN" "Advanced validation found issues (non-blocking)"
        }
}
}
# 3. Dependency validation
if (Get-Command node -ErrorAction SilentlyContinue) {
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
Write-CILog "INFO" "Running dependency validation..."
        node $dependencyScript *>> (Join-Path $CILogDir "pre-push-dependencies.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Dependency validation passed"
        } else {
            Write-CILog "WARN" "Dependency validation found issues (non-blocking)"
        }
}
}
# 4. Performance monitoring
if (Get-Command node -ErrorAction SilentlyContinue) {
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
Write-CILog "INFO" "Running performance monitoring..."
        node $performanceScript *>> (Join-Path $CILogDir "pre-push-performance.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Performance monitoring passed"
        } else {
            Write-CILog "WARN" "Performance monitoring found issues (non-blocking)"
        }
}
}
# Generate pre-push report
$reportFile = Join-Path $CIReportDir "pre-push-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-push"
exit_code = $exitCode
checks_performed = @(
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
log_files = @(
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-push report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-push validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-push validation failed"
}
return $exitCode
}
# CI pipeline hook
function Invoke-CIPipelineHook {
Write-CILog "INFO" "Running CI pipeline validation..."
$exitCode = 0
$pipelineStart = Get-Date
# Create pipeline workspace
$workspace = Join-Path $CIReportDir "pipeline-$(Get-Date -Format 'yyyyMMdd-HHmmss')"
New-Item -ItemType Directory -Path $workspace -Force | Out-Null
# 1. Environment validation
Write-CILog "INFO" "Validating CI environment..."
# Check required tools
$requiredTools = @("node", "npm")
foreach ($tool in $requiredTools) {
if (Get-Command $tool -ErrorAction SilentlyContinue) {
Write-CILog "PASS" "Tool available: $tool"
} else {
Write-CILog "FAIL" "Tool missing: $tool"
$exitCode = 1
}
}
# Check Node.js modules
$packageJson = Join-Path $AgentsDir "package.json"
if (Test-Path $packageJson) {
Push-Location $AgentsDir
        # Native commands signal failure via exit code, not exceptions
        npm list --depth=0 *> $null
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Node.js dependencies installed"
        } else {
            Write-CILog "WARN" "Installing Node.js dependencies..."
            npm install *> (Join-Path $workspace "npm-install.log")
            if ($LASTEXITCODE -ne 0) {
                Write-CILog "FAIL" "Failed to install Node.js dependencies"
                $exitCode = 1
            }
        }
Pop-Location
}
# 2. Full test suite
Write-CILog "INFO" "Running full test suite..."
# Integration tests
$integrationTest = Join-Path $AgentsDir "tests\skill-integration.test.js"
if (Test-Path $integrationTest) {
        node $integrationTest *> (Join-Path $workspace "integration-tests.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Integration tests passed"
        } else {
            Write-CILog "FAIL" "Integration tests failed"
            $exitCode = 1
        }
}
# Workflow validation tests
$workflowTest = Join-Path $AgentsDir "tests\workflow-validation.test.js"
if (Test-Path $workflowTest) {
        node $workflowTest *> (Join-Path $workspace "workflow-tests.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Workflow validation tests passed"
        } else {
            Write-CILog "FAIL" "Workflow validation tests failed"
            $exitCode = 1
        }
}
# 3. Comprehensive validation
Write-CILog "INFO" "Running comprehensive validation..."
# Health monitoring
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
if (Test-Path $healthScript) {
        node $healthScript *> (Join-Path $workspace "health-check.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Health monitoring passed"
        } else {
            Write-CILog "FAIL" "Health monitoring failed"
            $exitCode = 1
        }
}
# Advanced validation
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
        node $advancedScript *> (Join-Path $workspace "advanced-validation.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Advanced validation passed"
        } else {
            Write-CILog "WARN" "Advanced validation found issues"
        }
}
# Dependency validation
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
        node $dependencyScript *> (Join-Path $workspace "dependency-validation.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Dependency validation passed"
        } else {
            Write-CILog "WARN" "Dependency validation found issues"
        }
}
# Performance monitoring
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
        node $performanceScript *> (Join-Path $workspace "performance-monitor.log")
        if ($LASTEXITCODE -eq 0) {
            Write-CILog "PASS" "Performance monitoring passed"
        } else {
            Write-CILog "WARN" "Performance monitoring found issues"
        }
}
# 4. Generate artifacts
Write-CILog "INFO" "Generating CI artifacts..."
$pipelineEnd = Get-Date
$duration = ($pipelineEnd - $pipelineStart).TotalSeconds
# Consolidated report
$reportFile = Join-Path $workspace "ci-pipeline-report.json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
pipeline_type = "full_ci"
duration_seconds = [int]$duration
exit_code = $exitCode
environment = @{
node_version = (node --version)
platform = $env:OS
working_directory = $BaseDir
}
checks_performed = @(
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
artifacts = @(
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
)
workspace = $workspace
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "CI pipeline report saved to: $reportFile"
Write-CILog "INFO" "CI artifacts saved to: $workspace"
Write-CILog "INFO" "Pipeline duration: $([int]$duration)s"
if ($exitCode -eq 0) {
Write-CILog "PASS" "CI pipeline completed successfully"
} else {
Write-CILog "FAIL" "CI pipeline failed"
}
return $exitCode
}
# Install Git hooks
function Install-GitHooks {
Write-CILog "INFO" "Installing Git hooks..."
$hooksDir = Join-Path $BaseDir ".git\hooks"
$agentsHooksDir = Join-Path $AgentsDir "scripts\git-hooks"
# Create git-hooks directory
if (-not (Test-Path $agentsHooksDir)) {
New-Item -ItemType Directory -Path $agentsHooksDir -Force | Out-Null
}
# Create pre-commit hook
$preCommitContent = @'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
'@
    # ASCII avoids a UTF-8 BOM, which would break the "#!/bin/bash" shebang
    $preCommitContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-commit") -Encoding ASCII
# Create pre-push hook
$prePushContent = @'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
'@
    $prePushContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-push") -Encoding ASCII
# Install hooks if .git directory exists
if (Test-Path $hooksDir) {
Copy-Item (Join-Path $agentsHooksDir "pre-commit") $hooksDir -Force
Copy-Item (Join-Path $agentsHooksDir "pre-push") $hooksDir -Force
Write-CILog "PASS" "Git hooks installed successfully"
} else {
Write-CILog "WARN" "Git repository not found, hooks copied to .agents\scripts\git-hooks"
}
}
# Main execution
switch ($Command) {
"pre-commit" {
exit (Invoke-PreCommitHook)
}
"pre-push" {
exit (Invoke-PrePushHook)
}
"ci-pipeline" {
exit (Invoke-CIPipelineHook)
}
"install-hooks" {
Install-GitHooks
}
"help" {
Write-Host "Usage: .\ci-hooks.ps1 -Command {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
Write-Host ""
Write-Host "Commands:"
Write-Host " pre-commit - Run pre-commit validation"
Write-Host " pre-push - Run pre-push validation"
Write-Host " ci-pipeline - Run full CI pipeline"
Write-Host " install-hooks - Install Git hooks"
Write-Host " help - Show this help"
}
default {
Write-Host "Unknown command: $Command"
Write-Host "Use 'help' to see available commands"
exit 1
}
}
+445
View File
@@ -0,0 +1,445 @@
#!/bin/bash
# ci-hooks.sh - Continuous integration hooks for .agents
# Part of LCBP3-DMS Phase 3 enhancements
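#
# Usage (illustrative; the generated Git hooks call the first form):
#   bash .agents/scripts/ci-hooks.sh pre-commit
#   bash .agents/scripts/ci-hooks.sh install-hooks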
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# CI configuration
CI_LOG_DIR="$AGENTS_DIR/logs/ci"
CI_REPORT_DIR="$AGENTS_DIR/reports/ci"
# Ensure directories exist
mkdir -p "$CI_LOG_DIR" "$CI_REPORT_DIR"
# Logging function
ci_log() {
local level="$1"
local message="$2"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local log_file="$CI_LOG_DIR/ci-$(date '+%Y-%m-%d').log"
echo "[$timestamp] [$level] $message" | tee -a "$log_file"
# Console output with colors
case "$level" in
"INFO") echo -e "${BLUE}$message${NC}" ;;
"PASS") echo -e "${GREEN}$message${NC}" ;;
"WARN") echo -e "${YELLOW}$message${NC}" ;;
"FAIL") echo -e "${RED}$message${NC}" ;;
*) echo "$message" ;;
esac
}
# Pre-commit hook
pre_commit_hook() {
ci_log "INFO" "Running pre-commit validation..."
local exit_code=0
# 1. Run version validation
ci_log "INFO" "Checking version consistency..."
if "$AGENTS_DIR/scripts/bash/validate-versions.sh" >> "$CI_LOG_DIR/pre-commit-versions.log" 2>&1; then
ci_log "PASS" "Version validation passed"
else
ci_log "FAIL" "Version validation failed"
exit_code=1
fi
# 2. Run skill audit
ci_log "INFO" "Auditing skills..."
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-commit-skills.log" 2>&1; then
ci_log "PASS" "Skill audit passed"
else
ci_log "FAIL" "Skill audit failed"
exit_code=1
fi
# 3. Run integration tests (if Node.js available)
if command -v node >/dev/null 2>&1; then
ci_log "INFO" "Running integration tests..."
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$CI_LOG_DIR/pre-commit-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "WARN" "Integration tests failed (non-blocking)"
fi
else
ci_log "WARN" "Node.js not available, skipping integration tests"
fi
# 4. Check for forbidden patterns
ci_log "INFO" "Checking for forbidden patterns..."
local forbidden_patterns=("TODO" "FIXME" "XXX" "HACK")
local found_forbidden=false
for pattern in "${forbidden_patterns[@]}"; do
if grep -r "$pattern" "$AGENTS_DIR/skills" --include="*.md" >/dev/null 2>&1; then
ci_log "WARN" "Found forbidden pattern: $pattern"
found_forbidden=true
fi
done
if [ "$found_forbidden" = false ]; then
ci_log "PASS" "No forbidden patterns found"
fi
# Generate pre-commit report
local report_file="$CI_REPORT_DIR/pre-commit-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-commit",
"exit_code": $exit_code,
"checks_performed": [
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
],
"log_files": [
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
]
}
EOF
ci_log "INFO" "Pre-commit report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-commit validation completed successfully"
else
ci_log "FAIL" "Pre-commit validation failed"
fi
return $exit_code
}
# Pre-push hook
pre_push_hook() {
ci_log "INFO" "Running pre-push validation..."
local exit_code=0
# 1. Full health check
ci_log "INFO" "Running full health check..."
if command -v node >/dev/null 2>&1; then
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$CI_LOG_DIR/pre-push-health.log" 2>&1; then
ci_log "PASS" "Health check passed"
else
ci_log "FAIL" "Health check failed"
exit_code=1
fi
else
ci_log "WARN" "Node.js not available, using basic health check"
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-push-basic.log" 2>&1; then
ci_log "PASS" "Basic health check passed"
else
ci_log "FAIL" "Basic health check failed"
exit_code=1
fi
fi
# 2. Advanced validation (if available)
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/advanced-validator.js" ]; then
ci_log "INFO" "Running advanced validation..."
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$CI_LOG_DIR/pre-push-advanced.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues (non-blocking)"
fi
fi
# 3. Dependency validation
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/dependency-validator.js" ]; then
ci_log "INFO" "Running dependency validation..."
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$CI_LOG_DIR/pre-push-dependencies.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues (non-blocking)"
fi
fi
# 4. Performance monitoring
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/performance-monitor.js" ]; then
ci_log "INFO" "Running performance monitoring..."
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$CI_LOG_DIR/pre-push-performance.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues (non-blocking)"
fi
fi
# Generate pre-push report
local report_file="$CI_REPORT_DIR/pre-push-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-push",
"exit_code": $exit_code,
"checks_performed": [
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"log_files": [
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
]
}
EOF
ci_log "INFO" "Pre-push report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-push validation completed successfully"
else
ci_log "FAIL" "Pre-push validation failed"
fi
return $exit_code
}
# CI pipeline hook
ci_pipeline_hook() {
ci_log "INFO" "Running CI pipeline validation..."
local exit_code=0
local pipeline_start=$(date +%s)
# Create pipeline workspace
local workspace="$CI_REPORT_DIR/pipeline-$(date '+%Y%m%d-%H%M%S')"
mkdir -p "$workspace"
# 1. Environment validation
ci_log "INFO" "Validating CI environment..."
# Check required tools
local required_tools=("node" "npm")
for tool in "${required_tools[@]}"; do
if command -v "$tool" >/dev/null 2>&1; then
ci_log "PASS" "Tool available: $tool"
else
ci_log "FAIL" "Tool missing: $tool"
exit_code=1
fi
done
# Check Node.js modules
if [ -f "$AGENTS_DIR/package.json" ]; then
cd "$AGENTS_DIR"
if npm list --depth=0 >/dev/null 2>&1; then
ci_log "PASS" "Node.js dependencies installed"
else
ci_log "WARN" "Installing Node.js dependencies..."
npm install >> "$workspace/npm-install.log" 2>&1 || {
ci_log "FAIL" "Failed to install Node.js dependencies"
exit_code=1
}
fi
cd "$BASE_DIR"
fi
# 2. Full test suite
ci_log "INFO" "Running full test suite..."
# Integration tests
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$workspace/integration-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "FAIL" "Integration tests failed"
exit_code=1
fi
# Workflow validation tests
if node "$AGENTS_DIR/tests/workflow-validation.test.js" >> "$workspace/workflow-tests.log" 2>&1; then
ci_log "PASS" "Workflow validation tests passed"
else
ci_log "FAIL" "Workflow validation tests failed"
exit_code=1
fi
# 3. Comprehensive validation
ci_log "INFO" "Running comprehensive validation..."
# Health monitoring
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$workspace/health-check.log" 2>&1; then
ci_log "PASS" "Health monitoring passed"
else
ci_log "FAIL" "Health monitoring failed"
exit_code=1
fi
# Advanced validation
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$workspace/advanced-validation.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues"
fi
# Dependency validation
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$workspace/dependency-validation.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues"
fi
# Performance monitoring
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$workspace/performance-monitor.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues"
fi
# 4. Generate artifacts
ci_log "INFO" "Generating CI artifacts..."
local pipeline_end=$(date +%s)
local duration=$((pipeline_end - pipeline_start))
# Consolidated report
local report_file="$workspace/ci-pipeline-report.json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"pipeline_type": "full_ci",
"duration_seconds": $duration,
"exit_code": $exit_code,
"environment": {
"node_version": "$(node --version)",
"platform": "$(uname -s)",
"working_directory": "$BASE_DIR"
},
"checks_performed": [
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"artifacts": [
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
],
"workspace": "$workspace"
}
EOF
ci_log "INFO" "CI pipeline report saved to: $report_file"
ci_log "INFO" "CI artifacts saved to: $workspace"
ci_log "INFO" "Pipeline duration: ${duration}s"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "CI pipeline completed successfully"
else
ci_log "FAIL" "CI pipeline failed"
fi
return $exit_code
}
# Install Git hooks
install_git_hooks() {
ci_log "INFO" "Installing Git hooks..."
local hooks_dir="$BASE_DIR/.git/hooks"
local agents_hooks_dir="$AGENTS_DIR/scripts/git-hooks"
# Create git-hooks directory
mkdir -p "$agents_hooks_dir"
# Create pre-commit hook
cat > "$agents_hooks_dir/pre-commit" << 'EOF'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
EOF
# Create pre-push hook
cat > "$agents_hooks_dir/pre-push" << 'EOF'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
EOF
# Make hooks executable
chmod +x "$agents_hooks_dir/pre-commit"
chmod +x "$agents_hooks_dir/pre-push"
# Install hooks if .git directory exists
if [ -d "$hooks_dir" ]; then
cp "$agents_hooks_dir/pre-commit" "$hooks_dir/"
cp "$agents_hooks_dir/pre-push" "$hooks_dir/"
ci_log "PASS" "Git hooks installed successfully"
else
ci_log "WARN" "Git repository not found, hooks copied to .agents/scripts/git-hooks"
fi
}
# Main function
main() {
local command="${1:-help}"
case "$command" in
"pre-commit")
pre_commit_hook
;;
"pre-push")
pre_push_hook
;;
"ci-pipeline")
ci_pipeline_hook
;;
"install-hooks")
install_git_hooks
;;
"help"|*)
echo "Usage: $0 {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
echo ""
echo "Commands:"
echo " pre-commit - Run pre-commit validation"
echo " pre-push - Run pre-push validation"
echo " ci-pipeline - Run full CI pipeline"
echo " install-hooks - Install Git hooks"
echo " help - Show this help"
;;
esac
}
# Run main function with all arguments
main "$@"
+457
View File
@@ -0,0 +1,457 @@
#!/usr/bin/env node
/**
* dependency-validator.js - Skill dependency validation system
* Part of LCBP3-DMS Phase 3 enhancements
*/
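// Usage (illustrative): node .agents/scripts/dependency-validator.js
// Exits 0 when the validation status is "passed" or "warning", 1 otherwise.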
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Dependency validation class
class DependencyValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
dependency_graph: {},
circular_dependencies: [],
missing_dependencies: [],
orphaned_skills: [],
dependency_chains: {},
validation_summary: {
total_skills: 0,
skills_with_dependencies: 0,
circular_dependencies_found: 0,
missing_dependencies_found: 0,
orphaned_skills_found: 0,
max_dependency_depth: 0,
validation_status: 'unknown'
}
};
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
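  // Illustrative SKILL.md front matter that extractSkillDependencies() parses;
  // the skill names below are hypothetical:
  //   ---
  //   name: speckit-tasks
  //   depends-on: [speckit-plan]
  //   handoffs:
  //     - agent: speckit-implement
  //   ---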
extractSkillDependencies(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`No SKILL.md found for ${skillName}`, 'warn');
return { dependencies: [], handoffs: [], error: 'SKILL.md not found' };
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Extract dependencies from front matter
let dependencies = [];
let handoffs = [];
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (frontMatterMatch) {
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
// Handle depends-on field
if (frontMatter['depends-on']) {
if (Array.isArray(frontMatter['depends-on'])) {
dependencies = frontMatter['depends-on'];
} else {
dependencies = [frontMatter['depends-on']];
}
}
// Handle handoffs field
if (frontMatter.handoffs && Array.isArray(frontMatter.handoffs)) {
handoffs = frontMatter.handoffs.map(h => h.agent);
}
} catch (yamlError) {
this.log(`Invalid YAML in ${skillName} front matter: ${yamlError.message}`, 'warn');
}
}
// Also extract skill references from content
const contentSkillRefs = content.match(/@speckit-\w+/g) || [];
const contentDependencies = contentSkillRefs.map(ref => ref.replace('@', ''));
// Merge dependencies (avoid duplicates)
const allDependencies = [...new Set([...dependencies, ...contentDependencies])];
return {
dependencies: allDependencies,
handoffs: handoffs,
content_references: contentSkillRefs,
front_matter_dependencies: dependencies,
error: null
};
} catch (error) {
this.log(`Error reading ${skillName}: ${error.message}`, 'warn');
return { dependencies: [], handoffs: [], error: error.message };
}
}
buildDependencyGraph() {
this.log('Building dependency graph...', 'info');
if (!fs.existsSync(SKILLS_DIR)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
this.validationResults.validation_summary.total_skills = skillDirs.length;
// Extract dependencies for each skill
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
const dependencyInfo = this.extractSkillDependencies(skillPath, skillDir);
this.validationResults.dependency_graph[skillDir] = dependencyInfo;
if (dependencyInfo.dependencies.length > 0 || dependencyInfo.handoffs.length > 0) {
this.validationResults.validation_summary.skills_with_dependencies++;
}
}
this.log(`Analyzed ${skillDirs.length} skills`, 'info');
this.log(`Skills with dependencies: ${this.validationResults.validation_summary.skills_with_dependencies}`, 'info');
}
validateDependencies() {
this.log('Validating dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const allSkills = Object.keys(dependency_graph);
// Check for missing dependencies
for (const [skillName, dependencyInfo] of Object.entries(dependency_graph)) {
for (const dependency of dependencyInfo.dependencies) {
if (!allSkills.includes(dependency)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: dependency,
dependency_type: 'depends-on'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing dependency: ${skillName} depends on ${dependency}`, 'fail');
}
}
for (const handoff of dependencyInfo.handoffs) {
if (!allSkills.includes(handoff)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: handoff,
dependency_type: 'handoff'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing handoff: ${skillName} hands off to ${handoff}`, 'fail');
}
}
}
// Check for orphaned skills (no one depends on them)
const dependedOnSkills = new Set();
for (const dependencyInfo of Object.values(dependency_graph)) {
dependencyInfo.dependencies.forEach(dep => dependedOnSkills.add(dep));
dependencyInfo.handoffs.forEach(handoff => dependedOnSkills.add(handoff));
}
for (const skill of allSkills) {
if (!dependedOnSkills.has(skill) && skill !== 'speckit-constitution') {
// Constitution is allowed to be orphaned (it's a starting point)
this.validationResults.orphaned_skills.push(skill);
this.validationResults.validation_summary.orphaned_skills_found++;
this.log(`Orphaned skill: ${skill} (no dependencies on it)`, 'warn');
}
}
}
detectCircularDependencies() {
this.log('Detecting circular dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const visited = new Set();
const recursionStack = new Set();
const circularDeps = [];
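    // Illustrative example (hypothetical skills): if A depends on B and B
    // depends on A, dfs() below records the cycle ["A", "B", "A"].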
    // "trail" is the current DFS path; named to avoid shadowing the "path" module
    function dfs(skillName, trail = []) {
      if (recursionStack.has(skillName)) {
        // Found circular dependency
        const cycleStart = trail.indexOf(skillName);
        const cycle = trail.slice(cycleStart).concat(skillName);
        circularDeps.push(cycle);
        return;
      }
      if (visited.has(skillName)) {
        return;
      }
      visited.add(skillName);
      recursionStack.add(skillName);
      trail.push(skillName);
      const dependencyInfo = dependency_graph[skillName];
      if (dependencyInfo) {
        for (const dependency of dependencyInfo.dependencies) {
          dfs(dependency, [...trail]);
        }
      }
      recursionStack.delete(skillName);
    }
// Run DFS from each skill
for (const skillName of Object.keys(dependency_graph)) {
if (!visited.has(skillName)) {
dfs(skillName);
}
}
this.validationResults.circular_dependencies = circularDeps;
this.validationResults.validation_summary.circular_dependencies_found = circularDeps.length;
if (circularDeps.length > 0) {
this.log(`Found ${circularDeps.length} circular dependencies:`, 'critical');
circularDeps.forEach((cycle, index) => {
this.log(` ${index + 1}. ${cycle.join(' -> ')}`, 'critical');
});
} else {
this.log('No circular dependencies found', 'pass');
}
}
calculateDependencyChains() {
this.log('Calculating dependency chains...', 'info');
const { dependency_graph } = this.validationResults;
const chains = {};
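    // Illustrative example (hypothetical skills): for specify <- plan <- tasks,
    // "tasks" gets depth 3 and chain ["specify", "plan", "tasks"].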
function calculateDepth(skillName, visited = new Set()) {
if (visited.has(skillName)) {
return 0; // Circular dependency protection
}
visited.add(skillName);
const dependencyInfo = dependency_graph[skillName];
if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
return 1;
}
let maxDepth = 0;
for (const dependency of dependencyInfo.dependencies) {
const depth = calculateDepth(dependency, new Set(visited));
maxDepth = Math.max(maxDepth, depth);
}
return maxDepth + 1;
}
    function getDependencyChain(skillName, visited = new Set()) {
      if (visited.has(skillName)) {
        return [skillName]; // Circular dependency protection
      }
      visited.add(skillName);
      const dependencyInfo = dependency_graph[skillName];
      if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
        return [skillName];
      }
      const candidateChains = [];
      for (const dependency of dependencyInfo.dependencies) {
        const depChain = getDependencyChain(dependency, new Set(visited));
        candidateChains.push(depChain.concat(skillName));
      }
      // Return the longest chain
      return candidateChains.reduce((longest, current) =>
        current.length > longest.length ? current : longest, [skillName]
      );
    }
for (const skillName of Object.keys(dependency_graph)) {
const depth = calculateDepth(skillName);
const chain = getDependencyChain(skillName);
chains[skillName] = {
depth: depth,
chain: chain,
chain_length: chain.length
};
}
this.validationResults.dependency_chains = chains;
    const chainValues = Object.values(chains);
    const maxDepth = chainValues.length > 0
      ? Math.max(...chainValues.map(c => c.depth))
      : 0; // Math.max() with no arguments would return -Infinity
this.validationResults.validation_summary.max_dependency_depth = maxDepth;
this.log(`Maximum dependency depth: ${maxDepth}`, 'info');
}
validateWorkflowDependencies() {
this.log('Validating workflow dependencies...', 'info');
if (!fs.existsSync(WORKFLOWS_DIR)) {
this.log('Workflows directory not found', 'warn');
return;
}
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
const allSkills = Object.keys(this.validationResults.dependency_graph);
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
try {
const content = fs.readFileSync(workflowPath, 'utf8');
const skillReferences = content.match(/@speckit-\w+/g) || [];
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
if (!allSkills.includes(skillName)) {
this.validationResults.missing_dependencies.push({
workflow: workflowFile,
missing_dependency: skillName,
dependency_type: 'workflow-reference'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Workflow ${workflowFile} references missing skill: ${skillRef}`, 'fail');
}
}
} catch (error) {
this.log(`Error reading workflow ${workflowFile}: ${error.message}`, 'warn');
}
}
}
generateDependencyReport() {
this.log('Generating dependency report...', 'info');
// Determine overall validation status
const summary = this.validationResults.validation_summary;
if (summary.circular_dependencies_found > 0) {
summary.validation_status = 'critical';
} else if (summary.missing_dependencies_found > 0) {
summary.validation_status = 'failed';
} else if (summary.orphaned_skills_found > 0) {
summary.validation_status = 'warning';
} else {
summary.validation_status = 'passed';
}
// Save report
const reportPath = path.join(AGENTS_DIR, 'reports', 'dependency-validation.json');
const reportsDir = path.dirname(reportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(reportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Dependency validation report saved to: ${reportPath}`, 'info');
}
printSummary() {
const summary = this.validationResults.validation_summary;
this.log('=== Dependency Validation Summary ===', 'info');
this.log(`Total skills: ${summary.total_skills}`, 'info');
this.log(`Skills with dependencies: ${summary.skills_with_dependencies}`, 'info');
this.log(`Circular dependencies: ${summary.circular_dependencies_found}`, summary.circular_dependencies_found > 0 ? 'critical' : 'pass');
this.log(`Missing dependencies: ${summary.missing_dependencies_found}`, summary.missing_dependencies_found > 0 ? 'fail' : 'pass');
this.log(`Orphaned skills: ${summary.orphaned_skills_found}`, summary.orphaned_skills_found > 0 ? 'warn' : 'info');
this.log(`Max dependency depth: ${summary.max_dependency_depth}`, 'info');
this.log(`Validation status: ${summary.validation_status.toUpperCase()}`,
summary.validation_status === 'passed' ? 'pass' :
summary.validation_status === 'warning' ? 'warn' : 'fail');
// Show longest dependency chains
const chains = this.validationResults.dependency_chains;
const sortedChains = Object.entries(chains)
.sort(([,a], [,b]) => b.depth - a.depth)
.slice(0, 3);
if (sortedChains.length > 0) {
this.log('Top 3 longest dependency chains:', 'info');
sortedChains.forEach(([skillName, chainInfo], index) => {
this.log(` ${index + 1}. ${chainInfo.chain.join(' -> ')} (depth: ${chainInfo.depth})`, 'info');
});
}
}
async runDependencyValidation() {
this.log('Starting dependency validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Build dependency graph
this.buildDependencyGraph();
// Validate dependencies
this.validateDependencies();
// Detect circular dependencies
this.detectCircularDependencies();
// Calculate dependency chains
this.calculateDependencyChains();
// Validate workflow dependencies
this.validateWorkflowDependencies();
// Generate report
this.generateDependencyReport();
// Print summary
this.printSummary();
return this.validationResults;
}
}
// CLI interface
async function main() {
const validator = new DependencyValidator();
try {
const results = await validator.runDependencyValidation();
const status = results.validation_summary.validation_status;
process.exit(status === 'passed' || status === 'warning' ? 0 : 1);
} catch (error) {
console.error('Dependency validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { DependencyValidator };
// Run if called directly
if (require.main === module) {
main();
}
+369
View File
@@ -0,0 +1,369 @@
#!/usr/bin/env node
/**
* health-monitor.js - Automated health monitoring system for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
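// Usage (illustrative): node .agents/scripts/health-monitor.js
// Writes logs/health.log and reports/health-report.json under .agents/
// and exits 1 when any check fails.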
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const HEALTH_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'health.log');
const HEALTH_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'health-report.json');
// Ensure directories exist
[ path.dirname(HEALTH_LOG_PATH), path.dirname(HEALTH_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Health monitoring class
class HealthMonitor {
constructor() {
this.startTime = new Date();
this.metrics = {
timestamp: this.startTime.toISOString(),
version: '1.8.6',
checks: {},
summary: {
total_checks: 0,
passed_checks: 0,
failed_checks: 0,
warnings: 0,
overall_health: 'unknown'
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(HEALTH_LOG_PATH, logEntry);
}
checkDirectoryExists(dirPath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(dirPath);
this.metrics.checks[checkName] = {
type: 'directory_exists',
status: exists ? 'pass' : 'fail',
path: dirPath,
message: exists ? 'Directory exists' : 'Directory missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Directory exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Directory missing: ${dirPath}`, 'fail');
}
return exists;
}
checkFileExists(filePath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(filePath);
this.metrics.checks[checkName] = {
type: 'file_exists',
status: exists ? 'pass' : 'fail',
path: filePath,
message: exists ? 'File exists' : 'File missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - File exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - File missing: ${filePath}`, 'fail');
}
return exists;
}
checkFileVersion(filePath, expectedVersion, checkName) {
this.metrics.summary.total_checks++;
if (!fs.existsSync(filePath)) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: 'File does not exist'
};
this.log(`${checkName}: FAIL - File not found: ${filePath}`, 'fail');
return false;
}
try {
const content = fs.readFileSync(filePath, 'utf8');
const versionMatch = content.match(/v?(\d+\.\d+\.\d+)/);
const actualVersion = versionMatch ? versionMatch[1] : 'not_found';
const versionMatches = actualVersion === expectedVersion;
this.metrics.checks[checkName] = {
type: 'version_check',
status: versionMatches ? 'pass' : 'fail',
path: filePath,
expected_version: expectedVersion,
actual_version: actualVersion,
message: versionMatches ? 'Version matches' : `Version mismatch (expected ${expectedVersion}, found ${actualVersion})`
};
if (versionMatches) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Version ${actualVersion}`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Version mismatch (expected ${expectedVersion}, found ${actualVersion})`, 'fail');
}
return versionMatches;
} catch (error) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: `Error reading file: ${error.message}`
};
this.log(`${checkName}: FAIL - Error reading file: ${error.message}`, 'fail');
return false;
}
}
checkSkillHealth() {
this.log('Checking skill health...', 'info');
const skillsDir = path.join(AGENTS_DIR, 'skills');
if (!fs.existsSync(skillsDir)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(skillsDir).filter(item => {
const itemPath = path.join(skillsDir, item);
return fs.statSync(itemPath).isDirectory();
});
    this.metrics.summary.total_checks++; // count this check in the summary totals
    this.metrics.checks['skill_count'] = {
      type: 'skill_count',
      status: skillDirs.length >= 20 ? 'pass' : 'warn',
      count: skillDirs.length,
      expected: 20,
      message: `Found ${skillDirs.length} skills (expected at least 20)`
    };
if (skillDirs.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Skill count: PASS - Found ${skillDirs.length} skills`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Skill count: WARN - Only ${skillDirs.length} skills found (expected at least 20)`, 'warn');
}
// Check individual skills
let healthySkills = 0;
skillDirs.forEach(skillDir => {
const skillPath = path.join(skillsDir, skillDir);
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (fs.existsSync(skillMdPath)) {
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const hasName = content.includes('name:');
const hasDescription = content.includes('description:');
const hasVersion = content.includes('version:');
const hasRole = content.includes('## Role');
const hasTask = content.includes('## Task');
const isHealthy = hasName && hasDescription && hasVersion && hasRole && hasTask;
if (isHealthy) healthySkills++;
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: isHealthy ? 'pass' : 'fail',
skill: skillDir,
has_name: hasName,
has_description: hasDescription,
has_version: hasVersion,
has_role: hasRole,
has_task: hasTask,
message: isHealthy ? 'Skill is healthy' : 'Skill has missing sections'
};
} catch (error) {
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: 'fail',
skill: skillDir,
message: `Error reading skill: ${error.message}`
};
}
}
});
this.metrics.summary.total_checks++;
if (healthySkills === skillDirs.length) {
this.metrics.summary.passed_checks++;
this.log(`Individual skills: PASS - All ${healthySkills} skills are healthy`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`Individual skills: FAIL - Only ${healthySkills}/${skillDirs.length} skills are healthy`, 'fail');
}
}
checkWorkflowHealth() {
this.log('Checking workflow health...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (!fs.existsSync(workflowsDir)) {
this.log('Workflows directory not found', 'fail');
return;
}
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
    this.metrics.summary.total_checks++; // count this check in the summary totals
    this.metrics.checks['workflow_count'] = {
      type: 'workflow_count',
      status: workflowFiles.length >= 20 ? 'pass' : 'warn',
      count: workflowFiles.length,
      expected: 20,
      message: `Found ${workflowFiles.length} workflows (expected at least 20)`
    };
if (workflowFiles.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Workflow count: PASS - Found ${workflowFiles.length} workflows`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Workflow count: WARN - Only ${workflowFiles.length} workflows found (expected at least 20)`, 'warn');
}
}
calculateOverallHealth() {
const { total_checks, passed_checks, failed_checks, warnings } = this.metrics.summary;
if (failed_checks === 0) {
this.metrics.summary.overall_health = warnings === 0 ? 'excellent' : 'good';
} else if (failed_checks <= total_checks * 0.1) {
this.metrics.summary.overall_health = 'fair';
} else {
this.metrics.summary.overall_health = 'poor';
}
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
}
generateReport() {
const report = {
...this.metrics,
duration: new Date() - this.startTime,
environment: {
node_version: process.version,
platform: process.platform,
agents_dir: AGENTS_DIR
}
};
fs.writeFileSync(HEALTH_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Health report saved to: ${HEALTH_REPORT_PATH}`, 'info');
return report;
}
async runFullHealthCheck() {
this.log('Starting comprehensive health check...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Core directory checks
this.checkDirectoryExists(AGENTS_DIR, 'agents_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'skills'), 'skills_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules_directory');
this.checkDirectoryExists(path.join(BASE_DIR, '.windsurf', 'workflows'), 'workflows_directory');
// Core file checks
this.checkFileExists(path.join(AGENTS_DIR, 'README.md'), 'readme_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'VERSION'), 'skills_version_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'skills.md'), 'skills_documentation');
// Version consistency checks
this.checkFileVersion(path.join(AGENTS_DIR, 'README.md'), '1.8.6', 'readme_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'VERSION'), '1.8.6', 'skills_version_file_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'skills.md'), '1.8.6', 'skills_documentation_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), '1.8.6', 'project_context_version');
// Script availability checks
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash_audit_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash_sync_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell_audit_script');
// Detailed health checks
this.checkSkillHealth();
this.checkWorkflowHealth();
// Calculate overall health
this.calculateOverallHealth();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Health Check Summary ===', 'info');
this.log(`Total checks: ${this.metrics.summary.total_checks}`, 'info');
this.log(`Passed: ${this.metrics.summary.passed_checks}`, 'pass');
this.log(`Failed: ${this.metrics.summary.failed_checks}`, this.metrics.summary.failed_checks > 0 ? 'fail' : 'info');
this.log(`Warnings: ${this.metrics.summary.warnings}`, 'warn');
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
this.log(`Duration: ${new Date() - this.startTime}ms`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new HealthMonitor();
try {
const report = await monitor.runFullHealthCheck();
process.exit(report.summary.failed_checks > 0 ? 1 : 0);
} catch (error) {
console.error('Health check failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { HealthMonitor };
// Run if called directly
if (require.main === module) {
main();
}
+494
View File
@@ -0,0 +1,494 @@
#!/usr/bin/env node
/**
* performance-monitor.js - Performance monitoring for .agents skills
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { performance } = require('perf_hooks');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const PERFORMANCE_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'performance.log');
const PERFORMANCE_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'performance-report.json');
// Ensure directories exist
[ path.dirname(PERFORMANCE_LOG_PATH), path.dirname(PERFORMANCE_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Performance monitoring class
class PerformanceMonitor {
constructor() {
this.startTime = performance.now();
this.metrics = {
timestamp: new Date().toISOString(),
duration: 0,
skill_metrics: {},
workflow_metrics: {},
system_metrics: {},
summary: {
total_skills_analyzed: 0,
total_workflows_analyzed: 0,
average_skill_size: 0,
average_workflow_size: 0,
performance_score: 0,
recommendations: []
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
good: '\x1b[32m', // Green
warn: '\x1b[33m', // Yellow
poor: '\x1b[31m', // Red
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(PERFORMANCE_LOG_PATH, logEntry);
}
analyzeSkillPerformance(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`Skipping ${skillName} - SKILL.md not found`, 'warn');
return null;
}
const startTime = performance.now();
try {
const stats = fs.statSync(skillMdPath);
const content = fs.readFileSync(skillMdPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
const charCount = content.length;
// Content complexity metrics
const sectionCount = (content.match(/^#+\s/gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const listCount = (content.match(/^[-*+]\s/gm) || []).length;
// Front matter analysis
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
const frontMatterSize = frontMatterMatch ? frontMatterMatch[1].length : 0;
const hasFrontMatter = frontMatterMatch !== null;
// Readability metrics
const sentences = content.split(/[.!?]+/).filter(s => s.trim().length > 0);
const avgWordsPerSentence = sentences.length > 0 ? wordCount / sentences.length : 0;
const avgCharsPerWord = wordCount > 0 ? charCount / wordCount : 0;
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 50) performanceScore -= 10;
if (fileSizeKB > 100) performanceScore -= 20;
// Content quality bonuses
if (hasFrontMatter) performanceScore += 5;
if (sectionCount >= 3) performanceScore += 5;
if (codeBlockCount > 0) performanceScore += 5;
// Readability penalties
if (avgWordsPerSentence > 25) performanceScore -= 5;
if (avgWordsPerSentence > 35) performanceScore -= 10;
const analysisTime = performance.now() - startTime;
const skillMetrics = {
skill_name: skillName,
file_path: skillMdPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
char_count: charCount,
section_count: sectionCount,
code_block_count: codeBlockCount,
list_count: listCount,
front_matter_size: frontMatterSize,
has_front_matter: hasFrontMatter,
avg_words_per_sentence: Math.round(avgWordsPerSentence * 100) / 100,
avg_chars_per_word: Math.round(avgCharsPerWord * 100) / 100,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.skill_metrics[skillName] = skillMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${skillName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${skillName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${skillName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return skillMetrics;
} catch (error) {
this.log(`Error analyzing ${skillName}: ${error.message}`, 'warn');
return null;
}
}
analyzeWorkflowPerformance(workflowPath, workflowName) {
const startTime = performance.now();
if (!fs.existsSync(workflowPath)) {
this.log(`Skipping workflow ${workflowName} - file not found`, 'warn');
return null;
}
try {
const stats = fs.statSync(workflowPath);
const content = fs.readFileSync(workflowPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
// Workflow-specific metrics
const stepCount = (content.match(/^\d+\./gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const skillReferences = (content.match(/@speckit-\w+/g) || []).length;
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 20) performanceScore -= 10;
if (fileSizeKB > 50) performanceScore -= 20;
// Content quality bonuses
if (stepCount > 0) performanceScore += 10;
if (codeBlockCount > 0) performanceScore += 5;
if (skillReferences > 0) performanceScore += 5;
const analysisTime = performance.now() - startTime;
const workflowMetrics = {
workflow_name: workflowName,
file_path: workflowPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
step_count: stepCount,
code_block_count: codeBlockCount,
skill_references: skillReferences,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.workflow_metrics[workflowName] = workflowMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${workflowName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${workflowName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${workflowName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return workflowMetrics;
} catch (error) {
this.log(`Error analyzing workflow ${workflowName}: ${error.message}`, 'warn');
return null;
}
}
analyzeSystemMetrics() {
this.log('Analyzing system metrics...', 'info');
// Directory sizes
const agentsSize = this.getDirectorySize(AGENTS_DIR);
const skillsSize = this.getDirectorySize(SKILLS_DIR);
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
const workflowsSize = fs.existsSync(workflowsDir) ? this.getDirectorySize(workflowsDir) : 0;
// File counts
const totalFiles = this.countFiles(AGENTS_DIR);
const skillFiles = this.countFiles(SKILLS_DIR);
const workflowFiles = fs.existsSync(workflowsDir) ? this.countFiles(workflowsDir) : 0;
this.metrics.system_metrics = {
agents_directory_size_kb: Math.round(agentsSize / 1024),
skills_directory_size_kb: Math.round(skillsSize / 1024),
workflows_directory_size_kb: Math.round(workflowsSize / 1024),
total_files: totalFiles,
skill_files: skillFiles,
workflow_files: workflowFiles,
analysis_timestamp: new Date().toISOString()
};
this.log(`System: ${totalFiles} files, ${Math.round(agentsSize / 1024)}KB total`, 'info');
}
getDirectorySize(dirPath) {
let totalSize = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
totalSize += this.getDirectorySize(itemPath);
} else {
totalSize += stats.size;
}
}
return totalSize;
}
countFiles(dirPath) {
let fileCount = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
fileCount += this.countFiles(itemPath);
} else {
fileCount++;
}
}
return fileCount;
}
generateRecommendations() {
const recommendations = [];
const { skill_metrics, workflow_metrics, system_metrics } = this.metrics;
// Analyze skill performance
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 0;
if (avgSkillScore < 70) {
recommendations.push({
type: 'performance',
priority: 'high',
message: 'Average skill performance is below optimal. Consider optimizing skill documentation.',
details: `Average score: ${Math.round(avgSkillScore)}`
});
}
// Check for oversized files
const largeSkills = Object.values(skill_metrics).filter(m => m.file_size_kb > 50);
if (largeSkills.length > 0) {
recommendations.push({
type: 'size',
priority: 'medium',
message: `${largeSkills.length} skills have large file sizes (>50KB). Consider breaking down complex skills.`,
details: largeSkills.map(s => `${s.skill_name} (${s.file_size_kb}KB)`).join(', ')
});
}
// Check for missing front matter
const skillsWithoutFrontMatter = Object.values(skill_metrics).filter(m => !m.has_front_matter);
if (skillsWithoutFrontMatter.length > 0) {
recommendations.push({
type: 'structure',
priority: 'high',
message: `${skillsWithoutFrontMatter.length} skills missing front matter. Add proper YAML front matter.`,
details: skillsWithoutFrontMatter.map(s => s.skill_name).join(', ')
});
}
// Analyze workflow performance
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 0;
if (avgWorkflowScore < 70) {
recommendations.push({
type: 'performance',
priority: 'medium',
message: 'Average workflow performance could be improved. Add more detailed steps and examples.',
details: `Average score: ${Math.round(avgWorkflowScore)}`
});
}
// System recommendations
if (system_metrics.agents_directory_size_kb > 1000) {
recommendations.push({
type: 'maintenance',
priority: 'low',
message: '.agents directory is growing large. Consider archiving old logs and reports.',
details: `Current size: ${system_metrics.agents_directory_size_kb}KB`
});
}
this.metrics.summary.recommendations = recommendations;
// Log recommendations
if (recommendations.length > 0) {
this.log('Performance Recommendations:', 'info');
recommendations.forEach((rec, index) => {
const priority = rec.priority === 'high' ? 'HIGH' : rec.priority === 'medium' ? 'MED' : 'LOW';
this.log(` ${index + 1}. [${priority}] ${rec.message}`, 'warn');
});
} else {
this.log('No performance issues detected - system is optimized!', 'good');
}
}
calculateOverallPerformance() {
const { skill_metrics, workflow_metrics } = this.metrics;
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 100;
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 100;
// Weight skills more heavily than workflows
const overallScore = (avgSkillScore * 0.7) + (avgWorkflowScore * 0.3);
this.metrics.summary.performance_score = Math.round(overallScore);
this.metrics.summary.average_skill_size = skillScores.length > 0
? Math.round(Object.values(skill_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / skillScores.length * 100) / 100
: 0;
this.metrics.summary.average_workflow_size = workflowScores.length > 0
? Math.round(Object.values(workflow_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / workflowScores.length * 100) / 100
: 0;
this.metrics.summary.total_skills_analyzed = skillScores.length;
this.metrics.summary.total_workflows_analyzed = workflowScores.length;
}
generateReport() {
this.metrics.duration = performance.now() - this.startTime;
const report = {
...this.metrics,
generated_at: new Date().toISOString(),
environment: {
node_version: process.version,
platform: process.platform,
memory_usage: process.memoryUsage()
}
};
fs.writeFileSync(PERFORMANCE_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Performance report saved to: ${PERFORMANCE_REPORT_PATH}`, 'info');
return report;
}
async runPerformanceAnalysis() {
this.log('Starting performance analysis...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Analyze skills
this.log('Analyzing skill performance...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.analyzeSkillPerformance(skillPath, skillDir);
}
}
// Analyze workflows
this.log('Analyzing workflow performance...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (fs.existsSync(workflowsDir)) {
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(workflowsDir, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.analyzeWorkflowPerformance(workflowPath, workflowName);
}
}
// System metrics
this.analyzeSystemMetrics();
// Calculate overall performance
this.calculateOverallPerformance();
// Generate recommendations
this.generateRecommendations();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Performance Analysis Summary ===', 'info');
this.log(`Overall performance score: ${this.metrics.summary.performance_score}/100`, 'info');
this.log(`Skills analyzed: ${this.metrics.summary.total_skills_analyzed}`, 'info');
this.log(`Workflows analyzed: ${this.metrics.summary.total_workflows_analyzed}`, 'info');
this.log(`Average skill size: ${this.metrics.summary.average_skill_size}KB`, 'info');
this.log(`Average workflow size: ${this.metrics.summary.average_workflow_size}KB`, 'info');
this.log(`Analysis duration: ${Math.round(this.metrics.duration)}ms`, 'info');
this.log(`Recommendations: ${this.metrics.summary.recommendations.length}`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new PerformanceMonitor();
try {
const report = await monitor.runPerformanceAnalysis();
process.exit(report.summary.performance_score < 60 ? 1 : 0);
} catch (error) {
console.error('Performance analysis failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { PerformanceMonitor };
// Run if called directly
if (require.main === module) {
main();
}
+203
View File
@@ -0,0 +1,203 @@
# audit-skills.ps1 - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent $PSScriptRoot))
)
# Console colors for output (-ForegroundColor expects ConsoleColor names, not ANSI escapes)
$Colors = @{
    Red    = "Red"
    Green  = "Green"
    Yellow = "Yellow"
    Blue   = "Blue"
}
$AgentsDir = Join-Path $BaseDir ".agents"
$SkillsDir = Join-Path $AgentsDir "skills"
Write-Host "=== Skills Health Audit ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host ""
# Function to check if skill has required files
function Test-SkillHealth {
param(
[string]$SkillDir
)
$skillName = Split-Path $SkillDir -Leaf
$issues = 0
# Check for SKILL.md
$skillFile = Join-Path $SkillDir "SKILL.md"
if (Test-Path $skillFile) {
Write-Host " OK: $skillName/SKILL.md" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING: $skillName/SKILL.md" -ForegroundColor $Colors.Red
$issues++
}
# Check for templates directory (optional)
$templatesDir = Join-Path $SkillDir "templates"
if (Test-Path $templatesDir) {
$templateCount = (Get-ChildItem -Path $templatesDir -Filter "*.md" -File | Measure-Object).Count
if ($templateCount -gt 0) {
Write-Host " OK: $skillName/templates ($templateCount files)" -ForegroundColor $Colors.Green
} else {
Write-Host " EMPTY: $skillName/templates (no files)" -ForegroundColor $Colors.Yellow
}
}
# Check SKILL.md content if exists
if (Test-Path $skillFile) {
$content = Get-Content $skillFile -Raw
# Check for required front matter fields
$requiredFields = @("name", "description", "version")
foreach ($field in $requiredFields) {
if ($content -match "(?m)^$($field):") {
Write-Host " FIELD: $field" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING FIELD: $field" -ForegroundColor $Colors.Red
$issues++
}
}
# Check for Role section
if ($content -match "(?m)^## Role\s*$") {
Write-Host " SECTION: Role" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING SECTION: Role" -ForegroundColor $Colors.Yellow
$issues++
}
# Check for Task section
if ($content -match "(?m)^## Task\s*$") {
Write-Host " SECTION: Task" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING SECTION: Task" -ForegroundColor $Colors.Yellow
$issues++
}
}
return $issues
}
# Function to get skill version from SKILL.md
function Get-SkillVersion {
param(
[string]$SkillFile
)
if (Test-Path $SkillFile) {
try {
$content = Get-Content $SkillFile -Raw
if ($content -match "(?m)^version:\s*(.+)") {
return $matches[1].Trim()
}
} catch {
return "error"
}
}
return "no_file"
}
# Check skills directory
if (-not (Test-Path $SkillsDir)) {
Write-Host "ERROR: Skills directory not found" -ForegroundColor $Colors.Red
exit 1
}
Write-Host "Scanning skills directory: $SkillsDir"
Write-Host ""
# Get all skill directories
$skillDirs = Get-ChildItem -Path $SkillsDir -Directory | Sort-Object Name
Write-Host "Found $($skillDirs.Count) skill directories"
Write-Host ""
# Audit each skill
$totalIssues = 0
$skillSummary = @()
foreach ($skillDir in $skillDirs) {
$skillName = $skillDir.Name
Write-Host "Auditing: $skillName"
Write-Host "------------------------"
$issues = Test-SkillHealth -SkillDir $skillDir.FullName
$skillVersion = Get-SkillVersion -SkillFile (Join-Path $skillDir.FullName "SKILL.md")
$skillSummary += @{
Name = $skillName
Issues = $issues
Version = $skillVersion
}
$totalIssues += $issues
Write-Host ""
}
# Summary report
Write-Host "=== Skills Audit Summary ===" -ForegroundColor Cyan
Write-Host ""
Write-Host "Skill Status:"
Write-Host "-----------"
foreach ($summary in $skillSummary) {
if ($summary.Issues -eq 0) {
Write-Host " HEALTHY: $($summary.Name) (v$($summary.Version))" -ForegroundColor $Colors.Green
} else {
Write-Host " ISSUES: $($summary.Name) (v$($summary.Version)) - $($summary.Issues) issues" -ForegroundColor $Colors.Red
}
}
Write-Host ""
# Check skills.md version consistency
$skillsVersionFile = Join-Path $SkillsDir "VERSION"
if (Test-Path $skillsVersionFile) {
$content = Get-Content $skillsVersionFile -Raw
if ($content -match "(?m)^version:\s*(.+)") {
$globalVersion = $matches[1].Trim()
Write-Host "Global skills version: v$globalVersion"
Write-Host ""
# Check for version mismatches
Write-Host "Version Consistency Check:"
Write-Host "------------------------"
$versionMismatches = 0
foreach ($summary in $skillSummary) {
if ($summary.Version -ne "error" -and $summary.Version -ne "no_file" -and $summary.Version -ne $globalVersion) {
Write-Host " MISMATCH: $($summary.Name) is v$($summary.Version), global is v$globalVersion" -ForegroundColor $Colors.Yellow
$versionMismatches++
}
}
if ($versionMismatches -eq 0) {
Write-Host " All skills match global version" -ForegroundColor $Colors.Green
}
}
}
Write-Host ""
# Overall health
if ($totalIssues -eq 0) {
Write-Host "=== SUCCESS: All skills healthy ===" -ForegroundColor $Colors.Green
Write-Host "Total skills: $($skillDirs.Count)"
exit 0
} else {
Write-Host "=== ISSUES FOUND: $totalIssues total issues ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "Recommendations:"
Write-Host "1. Fix missing SKILL.md files"
Write-Host "2. Add required front matter fields"
Write-Host "3. Ensure Role and Task sections exist"
Write-Host "4. Align skill versions with global version"
exit 1
}
+112
View File
@@ -0,0 +1,112 @@
# validate-versions.ps1 - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent $PSScriptRoot)),
[string]$ExpectedVersion = "1.8.6"
)
# Console colors for output (-ForegroundColor expects ConsoleColor names, not ANSI escapes)
$Colors = @{
    Red    = "Red"
    Green  = "Green"
    Yellow = "Yellow"
}
$AgentsDir = Join-Path $BaseDir ".agents"
Write-Host "=== .agents Version Validation ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host "Expected version: $ExpectedVersion"
Write-Host ""
# Function to extract version from file
function Get-VersionFromFile {
param(
[string]$FilePath,
[string]$Pattern
)
if (Test-Path $FilePath) {
try {
$content = Get-Content $FilePath -Raw
if ($content -match $Pattern) {
return $matches[1]
} else {
return "NOT_FOUND"
}
} catch {
return "ERROR"
}
} else {
return "FILE_NOT_FOUND"
}
}
# Files to check
$FilesToCheck = @{
(Join-Path $AgentsDir "README.md") = "Version: ([0-9]+\.[0-9]+\.[0-9]+)"
(Join-Path $AgentsDir "skills\VERSION") = "version: ([0-9]+\.[0-9]+\.[0-9]+)"
(Join-Path $AgentsDir "rules\00-project-context.md") = "Version: ([0-9]+\.[0-9]+\.[0-9]+)"
(Join-Path $AgentsDir "skills\skills.md") = "V([0-9]+\.[0-9]+\.[0-9]+)"
}
# Track issues
$Issues = 0
Write-Host "Checking version consistency..."
Write-Host ""
foreach ($file in $FilesToCheck.Keys) {
$pattern = $FilesToCheck[$file]
$relativePath = $file.Replace($BaseDir + "\", "")
$version = Get-VersionFromFile -FilePath $file -Pattern $pattern
if ($version -eq "NOT_FOUND" -or $version -eq "FILE_NOT_FOUND") {
Write-Host " ERROR: $relativePath - Version not found" -ForegroundColor $Colors.Red
$Issues++
} elseif ($version -ne $ExpectedVersion) {
Write-Host " ERROR: $relativePath - Found v$version, expected v$ExpectedVersion" -ForegroundColor $Colors.Red
$Issues++
} else {
Write-Host " OK: $relativePath - v$version" -ForegroundColor $Colors.Green
}
}
Write-Host ""
# Check for version mismatches in skill files
Write-Host "Checking skill file versions..."
$SkillsVersionFile = Join-Path $AgentsDir "skills\VERSION"
if (Test-Path $SkillsVersionFile) {
$skillsVersion = Get-VersionFromFile -FilePath $SkillsVersionFile -Pattern "version: ([0-9]+\.[0-9]+\.[0-9]+)"
Write-Host "Skills version file: v$skillsVersion"
}
# Check workflow versions (in .windsurf\workflows)
$WorkflowsDir = Join-Path $BaseDir ".windsurf\workflows"
if (Test-Path $WorkflowsDir) {
Write-Host "Checking workflow files..."
$workflowCount = (Get-ChildItem -Path $WorkflowsDir -Filter "*.md" -File | Measure-Object).Count
Write-Host " OK: Found $workflowCount workflow files" -ForegroundColor $Colors.Green
} else {
Write-Host " WARNING: Workflows directory not found at $WorkflowsDir" -ForegroundColor $Colors.Yellow
}
Write-Host ""
# Summary
if ($Issues -eq 0) {
Write-Host "=== SUCCESS: All versions consistent ===" -ForegroundColor $Colors.Green
exit 0
} else {
Write-Host "=== FAILED: $Issues version issues found ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "To fix version issues:"
Write-Host "1. Update files to use v$ExpectedVersion"
Write-Host "2. Ensure LCBP3 project version matches"
Write-Host "3. Run this script again to verify"
exit 1
}
+8 -2
View File
@@ -1,10 +1,16 @@
# Speckit Skills Version
-version: 1.1.0
+version: 1.8.6
-release_date: 2026-01-24
+release_date: 2026-04-14
## Changelog
+### 1.8.6 (2026-04-14)
+- Version alignment with LCBP3-DMS v1.8.6
+- Complete skill implementations for all 20 skills
+- Enhanced security and audit capabilities
+- Production-ready deployment status
### 1.1.0 (2026-01-24)
- New QA skills: tester, reviewer, checker
- tester: Execute tests, measure coverage, report results
+105
View File
@@ -0,0 +1,105 @@
# 🧠 NAP-DMS Agent Skills (v1.8.6)
This file defines the specialized skills and capabilities of the Document Intelligence Engine for the LCBP3 v1.8.6 project, maintaining the highest standards of Security and Data Integrity.
**Status**: Production Ready | **Last Updated**: 2026-04-14 | **Total Skills**: 20
---
## 🏗️ Architectural & Data Integrity
- **Identifier Strategy Mastery (ADR-019):**
  - Always enforce **UUIDv7** as the Public ID in every API and URL
  - Detect and block any use of `parseInt()`, `Number()`, or arithmetic operators (`+`) on a UUID
  - Verify that every Entity applies `@Exclude()` to its `INT AUTO_INCREMENT` primary key so it never leaks into the API
- **Strict Validation Engine:**
  - Enforce **Zod** for all frontend form validation
  - Enforce **class-validator** for all backend DTOs
  - Verify that an **Idempotency-Key** header accompanies every mutation request (POST/PUT/PATCH); see the sketch after this list
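A minimal sketch of both validation layers, assuming a correspondence DTO; the field names are illustrative:
```typescript
import { IsNotEmpty, IsString, IsUUID } from 'class-validator';
import { z } from 'zod';

// Backend DTO: class-validator rejects anything that is not a well-formed UUID,
// so parseInt()/Number() misuse can never get past the API boundary.
export class CreateCorrespondenceDto {
  @IsUUID()
  contractUuid: string;

  @IsString()
  @IsNotEmpty()
  subject: string;
}

// Frontend schema: Zod mirrors the same rule before the request is ever sent.
export const createCorrespondenceSchema = z.object({
  contractUuid: z.string().uuid(),
  subject: z.string().min(1),
});
```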
## ⚙️ Workflow & Concurrency Control
- **DMS Workflow Engine Proficiency:**
  - Expert in **DSL-based state machines**; always validate every document state transition against the rules in the DSL Parser
  - Prevent duplicate approvals by re-reading the current state from the database before running any state-transition logic
- **Collision-Free Numbering (ADR-002):**
  - Apply **distributed locking** via **Redis Redlock** together with TypeORM `@VersionColumn` when generating document numbers (see the sketch after this list)
  - Never generate numbers with application-side logic alone
- **Asynchronous Task Orchestration (ADR-008):**
  - Offload long-running work (e.g., sending notifications, correspondence routing) exclusively to **BullMQ**
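A minimal sketch of the locking pattern, assuming redlock v5 with ioredis; the resource name, TTL, and `issueNumber` callback are illustrative:
```typescript
import Redis from 'ioredis';
import Redlock from 'redlock';

const redlock = new Redlock([new Redis()], { retryCount: 10, retryDelay: 200 });

// Mint the next document number under a distributed lock so two concurrent
// requests can never produce the same number.
export async function nextDocumentNumber(
  issueNumber: () => Promise<string>, // runs the DB transaction that increments the counter
): Promise<string> {
  const lock = await redlock.acquire(['locks:document-number'], 5000); // 5s TTL
  try {
    // The counter row should also carry a TypeORM @VersionColumn() so a
    // stale write still fails even if the lock expires mid-transaction.
    return await issueNumber();
  } finally {
    await lock.release();
  }
}
```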
## 🛡️ Security & Integrity Audit
- **RBAC Matrix Enforcement (ADR-016):**
  - Enforce **JwtAuthGuard**, **RolesGuard**, and the **CASL AbilityFactory** on every new Controller
  - Verify that `AuditLogInterceptor` is present on every API that mutates data
- **Secure File Lifecycle:**
  - Use **Two-Phase Upload** logic: Upload → Temp → ClamAV Scan → Commit → Permanent
  - Enforce the file-extension whitelist and the 50MB max size defined in ADR-016 (see the sketch after this list)
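A minimal controller sketch wiring these rules together; the guard import path, the service, and the whitelist contents are illustrative:
```typescript
import {
  BadRequestException, Controller, Post, UploadedFile, UseGuards, UseInterceptors,
} from '@nestjs/common';
import { FileInterceptor } from '@nestjs/platform-express';
import * as path from 'path';
import { JwtAuthGuard, RolesGuard } from '../auth/guards'; // project guards; path illustrative
import { AttachmentService } from './attachment.service'; // hypothetical service

const ALLOWED_EXTENSIONS = ['.pdf', '.docx', '.xlsx']; // illustrative whitelist
const MAX_SIZE_BYTES = 50 * 1024 * 1024; // 50MB cap per ADR-016

@Controller('attachments')
@UseGuards(JwtAuthGuard, RolesGuard)
export class AttachmentController {
  constructor(private readonly attachments: AttachmentService) {}

  @Post('upload')
  @UseInterceptors(FileInterceptor('file'))
  upload(@UploadedFile() file: Express.Multer.File) {
    if (!ALLOWED_EXTENSIONS.includes(path.extname(file.originalname).toLowerCase())) {
      throw new BadRequestException('File type not allowed');
    }
    if (file.size > MAX_SIZE_BYTES) {
      throw new BadRequestException('File exceeds 50MB limit');
    }
    // Phase 1 only: stage in temp storage; commit happens after the ClamAV scan passes.
    return this.attachments.stageForScan(file);
  }
}
```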
## 🤖 AI Boundary & Privacy (ADR-018/020)
- **Data Isolation:**
  - Guarantee that AI features run only through **Ollama (on-premises)** and never send data outside the network
  - AI may access data only through the **DMS API** (never connect directly to the database or storage)
- **Human-in-the-loop Validation:**
  - Design every AI output (e.g., extracted document metadata) to require explicit user confirmation before it is saved to the system; a sketch of this gate follows below
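A minimal sketch of the confirmation gate; the type and field names are illustrative:
```typescript
// AI-extracted metadata is staged as a draft; nothing touches the document
// record until a user explicitly confirms it through the DMS API.
interface MetadataDraft {
  documentUuid: string;
  fields: Record<string, string>; // e.g. { title, correspondenceNo, issuedDate }
  status: 'pending_confirmation' | 'confirmed' | 'rejected';
}

export function resolveDraft(draft: MetadataDraft, approvedByUser: boolean): MetadataDraft {
  // The only path to 'confirmed' is an explicit user decision.
  return { ...draft, status: approvedByUser ? 'confirmed' : 'rejected' };
}
```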
## 🏷️ Domain Terminology Consistency
- **Term Correction:** Immediately correct terminology to match the Glossary (e.g., change Letter to **Correspondence**, Approval Flow to **Workflow Engine**)
- **i18n Guidelines:** Never hard-code Thai/English strings directly in a Component; use i18n keys only, as in the sketch below
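A minimal sketch, assuming react-i18next; the key names are illustrative:
```tsx
import { useTranslation } from 'react-i18next';

// Correct: the component renders an i18n key, never a hard-coded string.
export function CorrespondenceHeader() {
  const { t } = useTranslation();
  return <h1>{t('correspondence.list.title')}</h1>;
}

// Wrong: return <h1>หนังสือโต้ตอบ</h1>; // hard-coded Thai string, forbidden
```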
---
## 🔄 Skill Dependency Matrix
| Skill | Dependencies | Handoffs To | Notes |
| -------------------------- | -------------------- | -------------------------------- | ----------------------------- |
| **speckit-constitution** | None | speckit-specify | Project governance foundation |
| **speckit-specify** | speckit-constitution | speckit-clarify | Feature specification |
| **speckit-clarify** | speckit-specify | speckit-plan | Resolve ambiguities |
| **speckit-plan** | speckit-clarify | speckit-tasks, speckit-checklist | Technical design |
| **speckit-tasks** | speckit-plan | speckit-implement | Task breakdown |
| **speckit-implement** | speckit-tasks | speckit-checker | Code implementation |
| **speckit-checker** | speckit-implement | speckit-tester | Static analysis |
| **speckit-tester** | speckit-checker | speckit-reviewer | Test execution |
| **speckit-reviewer** | speckit-tester | speckit-validate | Code review |
| **speckit-validate** | speckit-reviewer | None | Requirements validation |
| **speckit-analyze** | speckit-tasks | None | Cross-artifact consistency |
| **speckit-migrate** | None | speckit-plan | Legacy code import |
| **speckit-quizme** | speckit-specify | speckit-plan | Logic validation |
| **speckit-diff** | None | speckit-plan | Version comparison |
| **speckit-status** | None | None | Progress tracking |
| **speckit-taskstoissues** | speckit-tasks | None | Issue sync |
| **speckit-checklist** | speckit-plan | None | Requirements validation |
| **nestjs-best-practices** | None | speckit-implement | Backend patterns |
| **next-best-practices** | None | speckit-implement | Frontend patterns |
| **speckit-security-audit** | None | speckit-reviewer | Security validation |
---
## 🛠️ Skill Health Monitoring
### Health Check Scripts
- **Bash**: `./scripts/bash/audit-skills.sh` - Comprehensive skill health audit
- **PowerShell**: `./scripts/powershell/audit-skills.ps1` - Windows equivalent
### Validation Scripts
- **Version Check**: `./scripts/bash/validate-versions.sh` - Ensure version consistency
- **Workflow Sync**: `./scripts/bash/sync-workflows.sh` - Verify workflow integration
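Both Node monitors added in this commit export their classes, so the same checks can also run programmatically; a sketch, with the import paths as assumptions:
```typescript
// Import paths are illustrative; point them at the monitor scripts' actual location.
import { HealthMonitor } from '../scripts/health-monitor';
import { PerformanceMonitor } from '../scripts/performance-monitor';

export async function nightlyAudit() {
  const health = await new HealthMonitor().runFullHealthCheck();
  const perf = await new PerformanceMonitor().runPerformanceAnalysis();
  return {
    overallHealth: health.summary.overall_health, // 'excellent' | 'good' | 'fair' | 'poor'
    performanceScore: perf.summary.performance_score, // 0-100
  };
}
```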
### Health Metrics
- **Total Skills**: 20 implemented
- **Version Alignment**: v1.8.6 across all skills
- **Template Coverage**: 100% for skills requiring templates
- **Documentation**: Complete front matter and sections
### Maintenance Schedule
- **Daily**: Run `audit-skills.sh` for health monitoring
- **Weekly**: Run `validate-versions.sh` for version consistency
- **Monthly**: Review skill dependencies and update documentation
+241
View File
@@ -0,0 +1,241 @@
/**
* skill-integration.test.js - Integration tests for .agents skills
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
// Test configuration
const BASE_DIR = path.resolve(__dirname, '..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Test utilities
class SkillTestSuite {
constructor() {
this.results = {
passed: 0,
failed: 0,
errors: []
};
}
log(message, type = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
reset: '\x1b[0m'
};
const color = colors[type] || colors.info;
console.log(`${color}${message}${colors.reset}`);
}
assert(condition, message) {
if (condition) {
this.log(` PASS: ${message}`, 'pass');
this.results.passed++;
return true;
} else {
this.log(` FAIL: ${message}`, 'fail');
this.results.failed++;
this.results.errors.push(message);
return false;
}
}
testDirectoryExists(dirPath, description) {
const exists = fs.existsSync(dirPath);
this.assert(exists, `${description} exists at ${dirPath}`);
return exists;
}
testFileExists(filePath, description) {
const exists = fs.existsSync(filePath);
this.assert(exists, `${description} exists at ${filePath}`);
return exists;
}
testFileContent(filePath, pattern, description) {
if (!fs.existsSync(filePath)) {
this.assert(false, `${description} - file not found: ${filePath}`);
return false;
}
try {
const content = fs.readFileSync(filePath, 'utf8');
const matches = content.match(pattern);
this.assert(matches !== null, `${description} - pattern found in ${filePath}`);
return matches !== null;
} catch (error) {
this.assert(false, `${description} - error reading file: ${error.message}`);
return false;
}
}
runScript(scriptPath, description) {
try {
const output = execSync(scriptPath, { encoding: 'utf8', cwd: BASE_DIR });
this.log(` SCRIPT: ${description} executed successfully`, 'pass');
this.results.passed++;
return { success: true, output };
} catch (error) {
this.log(` SCRIPT: ${description} failed - ${error.message}`, 'fail');
this.results.failed++;
this.results.errors.push(`${description}: ${error.message}`);
return { success: false, error: error.message };
}
}
}
// Test suite implementation
const testSuite = new SkillTestSuite();
function runAllTests() {
testSuite.log('=== .agents Integration Test Suite ===', 'info');
testSuite.log(`Base directory: ${BASE_DIR}`, 'info');
testSuite.log(`Started: ${new Date().toISOString()}`, 'info');
testSuite.log('');
// Test 1: Directory Structure
testSuite.log('Test 1: Directory Structure', 'info');
testSuite.testDirectoryExists(AGENTS_DIR, '.agents directory');
testSuite.testDirectoryExists(SKILLS_DIR, 'skills directory');
testSuite.testDirectoryExists(WORKFLOWS_DIR, 'workflows directory');
testSuite.testDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts directory');
testSuite.testDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules directory');
testSuite.log('');
// Test 2: Core Files
testSuite.log('Test 2: Core Files', 'info');
testSuite.testFileExists(path.join(AGENTS_DIR, 'README.md'), 'README.md');
testSuite.testFileExists(path.join(SKILLS_DIR, 'VERSION'), 'skills VERSION file');
testSuite.testFileExists(path.join(SKILLS_DIR, 'skills.md'), 'skills.md documentation');
testSuite.log('');
// Test 3: Script Files
testSuite.log('Test 3: Validation Scripts', 'info');
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash validate-versions.sh');
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash audit-skills.sh');
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash sync-workflows.sh');
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell validate-versions.ps1');
testSuite.testFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell audit-skills.ps1');
testSuite.log('');
// Test 4: Version Consistency
testSuite.log('Test 4: Version Consistency', 'info');
testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /v1\.8\.6/, 'README.md version');
testSuite.testFileContent(path.join(SKILLS_DIR, 'VERSION'), /version: 1\.8\.6/, 'skills VERSION file');
testSuite.testFileContent(path.join(SKILLS_DIR, 'skills.md'), /v1\.8\.6/, 'skills.md version');
testSuite.testFileContent(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), /v1\.8\.6/, 'project context version');
testSuite.log('');
// Test 5: Skills Structure
testSuite.log('Test 5: Skills Structure', 'info');
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
// Parentheses ensure the directory check applies to every accepted name
return fs.statSync(itemPath).isDirectory() && (item.startsWith('speckit-') || item === 'nestjs-best-practices' || item === 'next-best-practices');
});
testSuite.assert(skillDirs.length >= 20, `Found at least 20 skill directories (found ${skillDirs.length})`);
// Test a few key skills
const keySkills = ['speckit-plan', 'speckit-implement', 'speckit-specify', 'speckit-validate'];
keySkills.forEach(skill => {
const skillPath = path.join(SKILLS_DIR, skill);
const skillMdPath = path.join(skillPath, 'SKILL.md');
testSuite.testDirectoryExists(skillPath, `${skill} directory`);
testSuite.testFileExists(skillMdPath, `${skill} SKILL.md`);
if (fs.existsSync(skillMdPath)) {
testSuite.testFileContent(skillMdPath, /^name:/m, `${skill} has name field`);
testSuite.testFileContent(skillMdPath, /^description:/m, `${skill} has description field`);
testSuite.testFileContent(skillMdPath, /^version:/m, `${skill} has version field`);
testSuite.testFileContent(skillMdPath, /^## Role$/m, `${skill} has Role section`);
testSuite.testFileContent(skillMdPath, /^## Task$/m, `${skill} has Task section`);
}
});
testSuite.log('');
// Test 6: Workflows Structure
testSuite.log('Test 6: Workflows Structure', 'info');
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(item => item.endsWith('.md'));
testSuite.assert(workflowFiles.length >= 20, `Found at least 20 workflow files (found ${workflowFiles.length})`);
// Test key workflows
const keyWorkflows = ['00-speckit.all.md', '02-speckit.specify.md', '04-speckit.plan.md', '07-speckit.implement.md'];
keyWorkflows.forEach(workflow => {
const workflowPath = path.join(WORKFLOWS_DIR, workflow);
testSuite.testFileExists(workflowPath, `${workflow} file`);
});
testSuite.log('');
// Test 7: Rules Structure
testSuite.log('Test 7: Rules Structure', 'info');
const rulesDir = path.join(AGENTS_DIR, 'rules');
const ruleFiles = fs.readdirSync(rulesDir).filter(item => item.endsWith('.md'));
testSuite.assert(ruleFiles.length >= 10, `Found at least 10 rule files (found ${ruleFiles.length})`);
// Test key rules
const keyRules = ['00-project-context.md', '01-adr-019-uuid.md', '02-security.md'];
keyRules.forEach(rule => {
const rulePath = path.join(rulesDir, rule);
testSuite.testFileExists(rulePath, `${rule} file`);
});
testSuite.log('');
// Test 8: Script Execution (if on Unix-like system)
if (process.platform !== 'win32') {
testSuite.log('Test 8: Script Execution', 'info');
// Test version validation script
const versionScript = path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh');
if (fs.existsSync(versionScript)) {
try {
// Make executable
fs.chmodSync(versionScript, '755');
testSuite.runScript(versionScript, 'Version validation script');
} catch (error) {
testSuite.log(` SKIP: Cannot execute version script - ${error.message}`, 'warn');
}
}
testSuite.log('');
}
// Test 9: Documentation Quality
testSuite.log('Test 9: Documentation Quality', 'info');
testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /## Troubleshooting/, 'README.md has troubleshooting section');
testSuite.testFileContent(path.join(SKILLS_DIR, 'skills.md'), /## Skill Dependency Matrix/, 'skills.md has dependency matrix');
testSuite.testFileContent(path.join(AGENTS_DIR, 'README.md'), /## Architecture/, 'README.md has architecture section');
testSuite.log('');
// Results Summary
testSuite.log('=== Test Results Summary ===', 'info');
testSuite.log(`Passed: ${testSuite.results.passed}`, 'pass');
testSuite.log(`Failed: ${testSuite.results.failed}`, testSuite.results.failed > 0 ? 'fail' : 'pass');
if (testSuite.results.errors.length > 0) {
testSuite.log('Errors:', 'fail');
testSuite.results.errors.forEach(error => {
testSuite.log(` - ${error}`, 'fail');
});
}
testSuite.log(`Completed: ${new Date().toISOString()}`, 'info');
return testSuite.results.failed === 0;
}
// Export for use in other modules
module.exports = { SkillTestSuite, runAllTests };
// Run tests if called directly
if (require.main === module) {
const success = runAllTests();
process.exit(success ? 0 : 1);
}
+235
View File
@@ -0,0 +1,235 @@
/**
* workflow-validation.test.js - Integration tests for workflows
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
// Test configuration
const BASE_DIR = path.resolve(__dirname, '..');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
// Test utilities
class WorkflowTestSuite {
constructor() {
this.results = {
passed: 0,
failed: 0,
errors: []
};
}
log(message, type = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
reset: '\x1b[0m'
};
const color = colors[type] || colors.info;
console.log(`${color}${message}${colors.reset}`);
}
assert(condition, message) {
if (condition) {
this.log(` PASS: ${message}`, 'pass');
this.results.passed++;
return true;
} else {
this.log(` FAIL: ${message}`, 'fail');
this.results.failed++;
this.results.errors.push(message);
return false;
}
}
testWorkflowFile(filePath, expectedName) {
if (!fs.existsSync(filePath)) {
this.assert(false, `Workflow file exists: ${expectedName}`);
return false;
}
try {
const content = fs.readFileSync(filePath, 'utf8');
// Basic structure checks
this.assert(content.length > 0, `${expectedName} has content`);
this.assert(content.includes('#'), `${expectedName} has markdown headers`);
// Check for workflow-specific patterns
// Check for workflow-specific patterns (match on the filename; expectedName holds the description)
if (path.basename(filePath).includes('speckit')) {
this.assert(content.includes('speckit'), `${expectedName} contains speckit reference`);
}
// Check for proper markdown formatting
const lines = content.split('\n');
const nonEmptyLines = lines.filter(line => line.trim().length > 0);
this.assert(nonEmptyLines.length >= 5, `${expectedName} has sufficient content`);
return true;
} catch (error) {
this.assert(false, `${expectedName} - error reading file: ${error.message}`);
return false;
}
}
validateWorkflowDependency(workflowName, workflowContent) {
// Check if workflow references existing skills
const skillReferences = workflowContent.match(/@speckit-\w+/g) || [];
const skillsDir = path.join(AGENTS_DIR, 'skills');
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
const skillPath = path.join(skillsDir, skillName);
if (!fs.existsSync(skillPath)) {
this.assert(false, `${workflowName} references non-existent skill: ${skillRef}`);
return false;
}
}
return true;
}
}
// Expected workflows mapping
const expectedWorkflows = {
'00-speckit.all.md': 'Full pipeline workflow',
'01-speckit.constitution.md': 'Constitution workflow',
'02-speckit.specify.md': 'Specification workflow',
'03-speckit.clarify.md': 'Clarification workflow',
'04-speckit.plan.md': 'Planning workflow',
'05-speckit.tasks.md': 'Task breakdown workflow',
'06-speckit.analyze.md': 'Analysis workflow',
'07-speckit.implement.md': 'Implementation workflow',
'08-speckit.checker.md': 'Static analysis workflow',
'09-speckit.tester.md': 'Testing workflow',
'10-speckit.reviewer.md': 'Code review workflow',
'11-speckit.validate.md': 'Validation workflow',
'speckit.prepare.md': 'Preparation workflow',
'schema-change.md': 'Schema change workflow',
'create-backend-module.md': 'Backend module creation',
'create-frontend-page.md': 'Frontend page creation',
'deploy.md': 'Deployment workflow',
'review.md': 'Code review workflow',
'util-speckit.checklist.md': 'Checklist utility',
'util-speckit.diff.md': 'Diff utility',
'util-speckit.migrate.md': 'Migration utility',
'util-speckit.quizme.md': 'Quiz utility',
'util-speckit.status.md': 'Status utility',
'util-speckit.taskstoissues.md': 'Task to issues utility'
};
// Test suite implementation
const workflowTestSuite = new WorkflowTestSuite();
function runWorkflowTests() {
workflowTestSuite.log('=== Workflow Validation Test Suite ===', 'info');
workflowTestSuite.log(`Workflows directory: ${WORKFLOWS_DIR}`, 'info');
workflowTestSuite.log(`Started: ${new Date().toISOString()}`, 'info');
workflowTestSuite.log('');
// Test 1: Workflows directory exists
workflowTestSuite.log('Test 1: Directory Structure', 'info');
workflowTestSuite.assert(fs.existsSync(WORKFLOWS_DIR), 'Workflows directory exists');
workflowTestSuite.log('');
// Test 2: Expected workflow files exist
workflowTestSuite.log('Test 2: Expected Workflow Files', 'info');
let foundWorkflows = 0;
for (const [filename, description] of Object.entries(expectedWorkflows)) {
const filePath = path.join(WORKFLOWS_DIR, filename);
workflowTestSuite.testWorkflowFile(filePath, description);
if (fs.existsSync(filePath)) {
foundWorkflows++;
}
}
workflowTestSuite.assert(foundWorkflows >= 20, `Found at least 20 workflows (found ${foundWorkflows})`);
workflowTestSuite.log('');
// Test 3: Workflow content validation
workflowTestSuite.log('Test 3: Content Validation', 'info');
for (const [filename, description] of Object.entries(expectedWorkflows)) {
const filePath = path.join(WORKFLOWS_DIR, filename);
if (fs.existsSync(filePath)) {
try {
const content = fs.readFileSync(filePath, 'utf8');
// Check for proper workflow structure
workflowTestSuite.assert(content.includes('#'), `${filename} has markdown headers`);
workflowTestSuite.assert(content.length > 100, `${filename} has substantial content`);
// Validate skill dependencies
workflowTestSuite.validateWorkflowDependency(filename, content);
} catch (error) {
workflowTestSuite.assert(false, `${filename} - content validation error: ${error.message}`);
}
}
}
workflowTestSuite.log('');
// Test 4: Workflow naming consistency
workflowTestSuite.log('Test 4: Naming Consistency', 'info');
const actualFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
for (const actualFile of actualFiles) {
if (!expectedWorkflows[actualFile]) {
workflowTestSuite.log(` UNEXPECTED: ${actualFile} not in expected list`, 'warn');
}
}
for (const expectedFile of Object.keys(expectedWorkflows)) {
if (!actualFiles.includes(expectedFile)) {
workflowTestSuite.assert(false, `Missing expected workflow: ${expectedFile}`);
}
}
workflowTestSuite.log('');
// Test 5: Cross-reference validation
workflowTestSuite.log('Test 5: Cross-Reference Validation', 'info');
// Check if README.md references workflows correctly
const readmePath = path.join(AGENTS_DIR, 'README.md');
if (fs.existsSync(readmePath)) {
const readmeContent = fs.readFileSync(readmePath, 'utf8');
workflowTestSuite.assert(
readmeContent.includes('.windsurf/workflows'),
'README.md references correct workflows path'
);
}
workflowTestSuite.log('');
// Results Summary
workflowTestSuite.log('=== Workflow Test Results Summary ===', 'info');
workflowTestSuite.log(`Passed: ${workflowTestSuite.results.passed}`, 'pass');
workflowTestSuite.log(`Failed: ${workflowTestSuite.results.failed}`, workflowTestSuite.results.failed > 0 ? 'fail' : 'pass');
if (workflowTestSuite.results.errors.length > 0) {
workflowTestSuite.log('Errors:', 'fail');
workflowTestSuite.results.errors.forEach(error => {
workflowTestSuite.log(` - ${error}`, 'fail');
});
}
workflowTestSuite.log(`Completed: ${new Date().toISOString()}`, 'info');
return workflowTestSuite.results.failed === 0;
}
// Export for use in other modules
module.exports = { WorkflowTestSuite, runWorkflowTests };
// Run tests if called directly
if (require.main === module) {
const success = runWorkflowTests();
process.exit(success ? 0 : 1);
}
-85
View File
@@ -1,85 +0,0 @@
---
description: Run the full speckit pipeline from specification to analysis in one command.
---
# Workflow: speckit-all
This meta-workflow orchestrates the **complete development lifecycle**, from specification through implementation and validation. For the preparation-only pipeline (steps 1-5), use `/speckit-prepare` instead.
## Preparation Phase (Steps 1-5)
1. **Specify** (`/speckit-specify`):
- Use the `view_file` tool to read: `.agents/skills/speckit-specify/SKILL.md`
- Execute with user's feature description
- Creates: `spec.md`
2. **Clarify** (`/speckit-clarify`):
- Use the `view_file` tool to read: `.agents/skills/speckit-clarify/SKILL.md`
- Execute to resolve ambiguities
- Updates: `spec.md`
3. **Plan** (`/speckit-plan`):
- Use the `view_file` tool to read: `.agents/skills/speckit-plan/SKILL.md`
- Execute to create technical design
- Creates: `plan.md`
4. **Tasks** (`/speckit-tasks`):
- Use the `view_file` tool to read: `.agents/skills/speckit-tasks/SKILL.md`
- Execute to generate task breakdown
- Creates: `tasks.md`
5. **Analyze** (`/speckit-analyze`):
- Use the `view_file` tool to read: `.agents/skills/speckit-analyze/SKILL.md`
- Execute to validate consistency across spec, plan, and tasks
- Output: Analysis report
- **Gate**: If critical issues found, stop and fix before proceeding
## Implementation Phase (Steps 6-7)
6. **Implement** (`/speckit-implement`):
- Use the `view_file` tool to read: `.agents/skills/speckit-implement/SKILL.md`
- Execute all tasks from `tasks.md` with anti-regression protocols
- Output: Working implementation
7. **Check** (`/speckit-checker`):
- Use the `view_file` tool to read: `.agents/skills/speckit-checker/SKILL.md`
- Run static analysis (linters, type checkers, security scanners)
- Output: Checker report
## Verification Phase (Steps 8-10)
8. **Test** (`/speckit-tester`):
- Use the `view_file` tool to read: `.agents/skills/speckit-tester/SKILL.md`
- Run tests with coverage
- Output: Test + coverage report
9. **Review** (`/speckit-reviewer`):
- Use the `view_file` tool to read: `.agents/skills/speckit-reviewer/SKILL.md`
- Perform code review
- Output: Review report with findings
10. **Validate** (`/speckit-validate`):
- Use the `view_file` tool to read: `.agents/skills/speckit-validate/SKILL.md`
- Verify implementation matches spec requirements
- Output: Validation report (pass/fail)
## Usage
```
/speckit-all "Build a user authentication system with OAuth2 support"
```
## Pipeline Comparison
| Pipeline | Steps | Use When |
| ------------------ | ------------------------- | -------------------------------------- |
| `/speckit-prepare` | 1-5 (Specify → Analyze) | Planning only — you'll implement later |
| `/speckit-all` | 1-10 (Specify → Validate) | Full lifecycle in one pass |
## On Error
If any step fails, stop the pipeline and report:
- Which step failed
- The error message
- Suggested remediation (e.g., "Run `/speckit-clarify` to resolve ambiguities before continuing")
-18
View File
@@ -1,18 +0,0 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
---
# Workflow: speckit-constitution
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-constitution/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `.specify/` directory doesn't exist: Initialize the speckit structure first
-19
View File
@@ -1,19 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
---
# Workflow: speckit-specify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
- This is typically the starting point of a new feature.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-specify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the feature description for the skill's logic.
4. **On Error**:
- If no feature description provided: Ask the user to describe the feature they want to specify
-18
View File
@@ -1,18 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---
# Workflow: speckit-clarify
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-clarify/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit-specify` first to create the feature specification
-18
View File
@@ -1,18 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
# Workflow: speckit-plan
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-plan/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit-specify` first to create the feature specification
-19
View File
@@ -1,19 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
---
# Workflow: speckit-tasks
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-tasks/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `plan.md` is missing: Run `/speckit-plan` first
- If `spec.md` is missing: Run `/speckit-specify` first
-22
View File
@@ -1,22 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
// turbo-all
# Workflow: speckit-analyze
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-analyze/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit-specify` first
- If `plan.md` is missing: Run `/speckit-plan` first
- If `tasks.md` is missing: Run `/speckit-tasks` first
-20
View File
@@ -1,20 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
# Workflow: speckit-implement
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-implement/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit-tasks` first
- If `plan.md` is missing: Run `/speckit-plan` first
- If `spec.md` is missing: Run `/speckit-specify` first
-21
View File
@@ -1,21 +0,0 @@
---
description: Run static analysis tools and aggregate results.
---
// turbo-all
# Workflow: speckit-checker
1. **Context Analysis**:
- The user may specify paths to check or run on entire project.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-checker/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no linting tools available: Report which tools to install based on project type
- If tools fail: Show raw error and suggest config fixes
-21
View File
@@ -1,21 +0,0 @@
---
description: Execute tests, measure coverage, and report results.
---
// turbo-all
# Workflow: speckit-tester
1. **Context Analysis**:
- The user may specify test paths, options, or just run all tests.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-tester/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no test framework detected: Report "No test framework found. Install Jest, Vitest, Pytest, or similar."
- If tests fail: Show failure details and suggest fixes
-19
View File
@@ -1,19 +0,0 @@
---
description: Perform code review with actionable feedback and suggestions.
---
# Workflow: speckit-reviewer
1. **Context Analysis**:
- The user may specify files to review, "staged" for git staged changes, or "branch" for branch diff.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-reviewer/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to review: Ask user to stage changes or specify file paths
- If not a git repo: Review current directory files instead
-19
View File
@@ -1,19 +0,0 @@
---
description: Validate that implementation matches specification requirements.
---
# Workflow: speckit-validate
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-validate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit-tasks` first
- If implementation not started: Run `/speckit-implement` first
@@ -1,51 +0,0 @@
---
description: Create a new NestJS backend feature module following project standards
---
# Create NestJS Backend Module
Use this workflow when creating a new feature module in `backend/src/modules/`.
Follows `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` and ADR-005.
## Steps
// turbo
1. **Verify requirements exist** — confirm the feature is in `specs/01-Requirements/` before starting
// turbo
2. **Check schema** — read `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql` for relevant tables
3. **Scaffold module folder**
```
backend/src/modules/<module-name>/
├── <module-name>.module.ts
├── <module-name>.controller.ts
├── <module-name>.service.ts
├── dto/
│ ├── create-<module-name>.dto.ts
│ └── update-<module-name>.dto.ts
├── entities/
│ └── <module-name>.entity.ts
└── <module-name>.controller.spec.ts
```
4. **Create Entity** — map ONLY columns defined in the schema SQL. Use TypeORM decorators. Add `@VersionColumn()` if the entity needs optimistic locking.
5. **Create DTOs** — use `class-validator` decorators. Never use `any`. Validate all inputs.
6. **Create Service** — inject repository via constructor DI. Use transactions for multi-step writes. Add `Idempotency-Key` guard for POST/PUT/PATCH operations.
7. **Create Controller** — apply `@UseGuards(JwtAuthGuard, CaslAbilityGuard)`. Use proper HTTP status codes. Document with `@ApiTags` and `@ApiOperation` (see the sketch after these steps).
8. **Register in Module** — add to `imports`, `providers`, `controllers`, `exports` as needed.
9. **Register in AppModule** — import the new module in `app.module.ts`.
// turbo
10. **Write unit test** — cover service methods with Jest mocks. Run:
```bash
pnpm test:watch
```
// turbo
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md`
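A minimal sketch of steps 5 and 7, assuming a hypothetical `correspondence` module; the guard and service import paths are illustrative, not confirmed project structure:
```typescript
// dto/create-correspondence.dto.ts (hypothetical module, per step 5)
import { IsNotEmpty, IsString, IsUUID } from 'class-validator';

export class CreateCorrespondenceDto {
  @IsUUID()
  contractUuid: string; // UUID string from the client, resolved to an internal INT in the service

  @IsString()
  @IsNotEmpty()
  subject: string;
}

// correspondence.controller.ts (per step 7)
import { Body, Controller, Post, UseGuards } from '@nestjs/common';
import { ApiOperation, ApiTags } from '@nestjs/swagger';
import { JwtAuthGuard } from '../../common/guards/jwt-auth.guard'; // illustrative path
import { CaslAbilityGuard } from '../../common/guards/casl-ability.guard'; // illustrative path
import { CorrespondenceService } from './correspondence.service';
import { CreateCorrespondenceDto } from './dto/create-correspondence.dto';

@ApiTags('correspondence')
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
@Controller('correspondence')
export class CorrespondenceController {
  constructor(private readonly service: CorrespondenceService) {}

  @Post()
  @ApiOperation({ summary: 'Create correspondence' })
  create(@Body() dto: CreateCorrespondenceDto) {
    // Thin controller: all business logic lives in the service
    return this.service.create(dto);
  }
}
```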
-64
View File
@@ -1,64 +0,0 @@
---
description: Create a new Next.js App Router page following project standards
---
# Create Next.js Frontend Page
Use this workflow when creating a new page in `frontend/app/`.
Follows `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`, ADR-011, ADR-012, ADR-013, ADR-014.
## Steps
1. **Determine route** — decide the route path, e.g. `app/(dashboard)/documents/page.tsx`
2. **Classify components** — decide what is Server Component (default) vs Client Component (`'use client'`)
- Server Component: initial data load, static content, SEO
- Client Component: interactivity, forms, TanStack Query hooks, Zustand
3. **Create page file** — Server Component by default:
```typescript
// app/(dashboard)/<route>/page.tsx
import { Metadata } from 'next';
export const metadata: Metadata = {
title: '<Page Title> | LCBP3-DMS',
};
export default async function <PageName>Page() {
return (
<div>
{/* Page content */}
</div>
);
}
```
4. **Create API hook** (if client-side data needed) — add to `hooks/use-<feature>.ts`:
```typescript
'use client';
import { useQuery } from '@tanstack/react-query';
import { apiClient } from '@/lib/api-client';
export function use<Feature>() {
return useQuery({
queryKey: ['<feature>'],
queryFn: () => apiClient.get('<endpoint>'),
});
}
```
5. **Build UI components** — use Shadcn/UI primitives. Place reusable components in `components/<feature>/`.
6. **Handle forms** — use React Hook Form + Zod schema validation. Never access form values without validation (see the sketch after this list).
7. **Handle errors** — add `error.tsx` alongside `page.tsx` for route-level error boundaries.
8. **Add loading state** — add `loading.tsx` for Suspense fallback if page does async work.
9. **Add to navigation** — update sidebar/nav config if the page should appear in the menu.
10. **Access control** — ensure page checks CASL permissions. Redirect unauthorized users via middleware or `notFound()`.
11. **Citation** — confirm implementation references `specs/01-Requirements/` and `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`
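A minimal sketch for step 6, assuming illustrative field names and a caller-supplied submit handler:
```typescript
'use client';
// Hypothetical form wiring for step 6; schema fields are illustrative only
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';

const documentSchema = z.object({
  title: z.string().min(1, 'Title is required'),
  contractUuid: z.string().uuid(), // publicId string; never parsed to a number (ADR-019)
});

type DocumentFormValues = z.infer<typeof documentSchema>;

export function DocumentForm({ onSubmit }: { onSubmit: (values: DocumentFormValues) => Promise<void> }) {
  const form = useForm<DocumentFormValues>({ resolver: zodResolver(documentSchema) });

  // handleSubmit runs Zod validation before values ever reach onSubmit
  return (
    <form onSubmit={form.handleSubmit(onSubmit)}>
      <input {...form.register('title')} />
      {form.formState.errors.title && <p>{form.formState.errors.title.message}</p>}
      <button type="submit">Save</button>
    </form>
  );
}
```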
-71
View File
@@ -1,71 +0,0 @@
---
description: Deploy the application via Gitea Actions to QNAP Container Station
---
# Deploy to Production
Use this workflow to deploy updated backend and/or frontend to QNAP via Gitea Actions CI/CD.
Follows `specs/04-Infrastructure-OPS/` and ADR-015.
## Pre-deployment Checklist
- [ ] All tests pass locally (`pnpm test:watch`)
- [ ] No TypeScript errors (`tsc --noEmit`)
- [ ] No `any` types introduced
- [ ] Schema changes applied to `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`
- [ ] Environment variables documented (NOT in `.env` files)
## Steps
1. **Commit and push to Gitea**
```bash
git status
git add .
git commit -m "feat(<scope>): <description>"
git push origin main
```
2. **Monitor Gitea Actions** — open Gitea web UI → Actions tab → verify pipeline starts
3. **Pipeline stages (automatic)**
- `build-backend` → Docker image build + push to registry
- `build-frontend` → Docker image build + push to registry
- `deploy` → SSH to QNAP → `docker compose pull` + `docker compose up -d`
4. **Verify backend health**
```bash
curl http://<QNAP_IP>:3000/health
# Expected: { "status": "ok" }
```
5. **Verify frontend**
```bash
curl -I http://<QNAP_IP>:3001
# Expected: HTTP 200
```
6. **Check logs in Grafana** — navigate to Grafana → Loki → filter by container name
- Backend: `container_name="lcbp3-backend"`
- Frontend: `container_name="lcbp3-frontend"`
7. **Verify database** — confirm schema changes are reflected (if any)
8. **Rollback (if needed)**
```bash
# SSH into QNAP, pin the previous image tag for <service> in docker-compose.yml, then:
docker compose pull <service>
docker compose up -d <service>
```
## Common Issues
| Symptom | Cause | Fix |
| ----------------- | --------------------- | ----------------------------------- |
| Backend unhealthy | DB connection failed | Check MariaDB container + env vars |
| Frontend blank | Build error | Check Next.js build logs in Grafana |
| 502 Bad Gateway | Container not started | `docker compose ps` to check status |
| Pipeline stuck | Gitea runner offline | Restart runner on QNAP |
-62
View File
@@ -1,62 +0,0 @@
---
auto_execution_mode: 0
description: Review code changes for bugs, security issues, and improvements
---
You are a senior software engineer performing a thorough code review.
Your task is to find all potential bugs and code improvements in the code changes. Focus on:
1. Logic errors and incorrect behavior
2. Edge cases that aren't handled
3. Null/undefined reference issues
4. Race conditions or concurrency issues
5. Security vulnerabilities
6. Improper resource management or resource leaks
7. API contract violations
8. Incorrect caching behavior, including cache staleness issues, cache key-related bugs, incorrect cache invalidation, and ineffective caching
9. Violations of existing code patterns or conventions
## 🔴 Tier 1 Critical Rules (CI Blockers)
The following are **CI-blocking issues** that must be caught in code review. These align with project specs in `specs/05-Engineering-Guidelines/` and `specs/06-Decision-Records/`:
### ADR-019: UUID Handling
- **❌ NEVER use `parseInt()`, `Number()`, or `+` operator on UUID values**
- Example of violation: `parseInt(projectId)` where `projectId` is UUID string
- ✅ Correct: Use UUID string directly without conversion
- **❌ NEVER expose internal INT PK in API responses**
- API must expose only `publicId` (transformed to `id` via `@Expose()`)
- Verify DTOs have `@Exclude()` on `id: number` field
### TypeScript Strict Rules
- **❌ ZERO `any` types allowed** — use proper types or `unknown` + narrowing
- **❌ ZERO `console.log`** — must use NestJS `Logger` (backend) or remove (frontend)
- **❌ NO `req: any` in controllers** — use `RequestWithUser` typed interface (a minimal sketch follows)
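A minimal sketch of the `RequestWithUser` shape; the exact `user` payload fields are assumptions, not confirmed by the spec:
```typescript
// request-with-user.interface.ts: assumed JWT payload fields, for illustration only
import { Request } from 'express';

export interface RequestWithUser extends Request {
  user: {
    publicId: string; // UUID string per ADR-019; never the internal INT id
    roles: string[];
  };
}
```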
### Database & Architecture
- **❌ NO SQL Triggers for business logic** — use NestJS Service methods instead
- **❌ NO `.env` files in production** — use Docker environment variables
- **❌ NO direct table/column name invention** — verify against `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
### Security (ADR-016)
- Idempotency validation for critical `POST`/`PUT`/`PATCH` endpoints
- Two-phase file upload pattern (Upload → Temp → Commit → Permanent)
- Input validation with class-validator (backend) and Zod (frontend)
### Test Coverage Requirements
- **Backend Services:** 80% minimum
- **Backend Overall:** 70% minimum
- **Business Logic:** 80% minimum
Make sure to:
1. If exploring the codebase, call multiple tools in parallel for increased efficiency. Do not spend too much time exploring.
2. If you find any pre-existing bugs in the code, you should also report those since it's important for us to maintain general code quality for the user.
3. Do NOT report issues that are speculative or low-confidence. All your conclusions should be based on a complete understanding of the codebase.
4. Remember that if you were given a specific git commit, it may not be checked out locally, so the local code state may differ from it.
-108
View File
@@ -1,108 +0,0 @@
---
description: Manage database schema changes following ADR-009 (no migrations, modify SQL directly)
---
# Schema Change Workflow
Use this workflow when modifying database schema for LCBP3-DMS.
Follows `specs/06-Decision-Records/ADR-009-database-strategy.md` — **NO TypeORM migrations**.
## Pre-Change Checklist
- [ ] Change is required by a spec in `specs/01-Requirements/`
- [ ] Existing data impact has been assessed
- [ ] No SQL triggers are being added (business logic in NestJS only)
## Steps
1. **Read current schema** — load the full schema file:
```
specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql
```
2. **Read data dictionary** — understand current field definitions:
```
specs/03-Data-and-Storage/03-01-data-dictionary.md
```
// turbo
3. **Identify impact scope** — determine which tables, columns, indexes, or constraints are affected. List:
- Tables being modified/created
- Columns being added/renamed/dropped
- Foreign key relationships affected
- Indexes being added/modified
- Seed data impact (if any)
4. **Modify schema SQL** — edit `specs/03-Data-and-Storage/lcbp3-v1.7.0-schema.sql`:
- Add/modify table definitions
- Maintain consistent formatting (uppercase SQL keywords, lowercase identifiers)
- Add inline comments for new columns explaining purpose
- Ensure `DEFAULT` values and `NOT NULL` constraints are correct
- Add `version` column with `@VersionColumn()` marker comment if optimistic locking is needed
> [!CAUTION]
> **NEVER use SQL Triggers.** All business logic must live in NestJS services.
5. **Update data dictionary** — edit `specs/03-Data-and-Storage/03-01-data-dictionary.md`:
- Add new tables/columns with descriptions
- Update data types and constraints
- Document business rules for new fields
- Add enum value definitions if applicable
6. **Update seed data** (if applicable):
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-basic.sql` — for reference/lookup data
- `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` — for new CASL permissions
7. **Update TypeORM entity** — modify corresponding `backend/src/modules/<module>/entities/*.entity.ts`:
- Map ONLY columns defined in schema SQL
- Use correct TypeORM decorators (`@Column`, `@PrimaryGeneratedColumn`, `@ManyToOne`, etc.)
- Add `@VersionColumn()` if optimistic locking is needed (see the entity sketch after these steps)
8. **Update DTOs** — if new columns are exposed via API:
- Add fields to `create-*.dto.ts` and/or `update-*.dto.ts`
- Add `class-validator` decorators for all new fields
- Never use `any` type
// turbo
9. **Run type check** — verify no TypeScript errors:
```bash
cd backend && npx tsc --noEmit
```
10. **Generate SQL diff** — create a summary of changes for the user to apply manually:
```
-- Schema Change Summary
-- Date: <current date>
-- Feature: <feature name>
-- Tables affected: <list>
--
-- ⚠️ Apply this SQL to the live database manually:
ALTER TABLE ...;
-- or
CREATE TABLE ...;
```
11. **Notify user** — present the SQL diff and remind them:
- Apply the SQL change to the live database manually
- Verify the change doesn't break existing data
- Run `pnpm test` after applying to confirm entity mappings work
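A minimal entity sketch for step 7, with illustrative table and column names; map only what the schema SQL actually defines:
```typescript
// entities/document.entity.ts: illustrative names only
import { Column, Entity, PrimaryGeneratedColumn, VersionColumn } from 'typeorm';

@Entity('document')
export class Document {
  @PrimaryGeneratedColumn()
  id: number; // internal INT PK, never exposed in API responses (ADR-019)

  @Column({ type: 'varchar', length: 255 })
  title: string; // must match the column defined in the schema SQL

  @VersionColumn()
  version: number; // only if the schema defines a version column for optimistic locking
}
```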
## Common Patterns
| Change Type | Template |
| ----------- | -------------------------------------------------------------- |
| Add column | `ALTER TABLE \`table\` ADD COLUMN \`col\` TYPE DEFAULT value;` |
| Add table | Full `CREATE TABLE` with constraints and indexes |
| Add index | `CREATE INDEX \`idx_table_col\` ON \`table\` (\`col\`);` |
| Add FK | `ALTER TABLE \`child\` ADD CONSTRAINT ... FOREIGN KEY ...` |
| Add enum | Add to data dictionary + `ENUM('val1','val2')` in column def |
## On Error
- If schema SQL has syntax errors → fix and re-validate with `tsc --noEmit`
- If entity mapping doesn't match schema → compare column-by-column against SQL
- If seed data conflicts → check unique constraints and foreign keys
-27
View File
@@ -1,27 +0,0 @@
---
description: Execute the full preparation pipeline (Specify -> Clarify -> Plan -> Tasks -> Analyze) in sequence.
---
# Workflow: speckit-prepare
This workflow orchestrates the sequential execution of the Speckit preparation phase skills (02-06).
1. **Step 1: Specify (Skill 02)**
- Goal: Create or update the `spec.md` based on user input.
- Action: Read and execute `.agents/skills/speckit-specify/SKILL.md`.
2. **Step 2: Clarify (Skill 03)**
- Goal: Refine the `spec.md` by identifying and resolving ambiguities.
- Action: Read and execute `.agents/skills/speckit-clarify/SKILL.md`.
3. **Step 3: Plan (Skill 04)**
- Goal: Generate `plan.md` from the finalized spec.
- Action: Read and execute `.agents/skills/speckit-plan/SKILL.md`.
4. **Step 4: Tasks (Skill 05)**
- Goal: Generate actionable `tasks.md` from the plan.
- Action: Read and execute `.agents/skills/speckit-tasks/SKILL.md`.
5. **Step 5: Analyze (Skill 06)**
- Goal: Validate consistency across all design artifacts (spec, plan, tasks).
- Action: Read and execute `.agents/skills/speckit-analyze/SKILL.md`.
@@ -1,18 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---
# Workflow: speckit-checklist
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-checklist/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `spec.md` is missing: Run `/speckit-specify` first to create the feature specification
-19
View File
@@ -1,19 +0,0 @@
---
description: Compare two versions of a spec or plan to highlight changes.
---
# Workflow: speckit-diff
1. **Context Analysis**:
- The user has provided an input prompt (optional file paths or version references).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-diff/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no files to compare: Use current feature's `spec.md` vs git HEAD
- If `spec.md` doesn't exist: Run `/speckit-specify` first
-19
View File
@@ -1,19 +0,0 @@
---
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
---
# Workflow: speckit-migrate
1. **Context Analysis**:
- The user has provided an input prompt (path to analyze, feature name).
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-migrate/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If path doesn't exist: Ask user to provide valid directory path
- If no code found: Report that no analyzable code was detected
-20
View File
@@ -1,20 +0,0 @@
---
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
---
// turbo-all
# Workflow: speckit-quizme
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-quizme/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If required files don't exist, inform the user which prerequisite workflow to run first (e.g., `/speckit-specify` to create `spec.md`).
-20
View File
@@ -1,20 +0,0 @@
---
description: Display a dashboard showing feature status, completion percentage, and blockers.
---
// turbo-all
# Workflow: speckit-status
1. **Context Analysis**:
- The user may optionally specify a feature to focus on.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-status/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If no features exist: Report "No features found. Run `/speckit-specify` to create your first feature."
@@ -1,18 +0,0 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
---
# Workflow: speckit-taskstoissues
1. **Context Analysis**:
- The user has provided an input prompt. Treat this as the primary input for the skill.
2. **Load Skill**:
- Use the `view_file` tool to read the skill file at: `.agents/skills/speckit-taskstoissues/SKILL.md`
3. **Execute**:
- Follow the instructions in the `SKILL.md` exactly.
- Apply the user's prompt as the input arguments/context for the skill's logic.
4. **On Error**:
- If `tasks.md` is missing: Run `/speckit-tasks` first
+1 -1
View File
@@ -6,7 +6,7 @@ pnpm lint-staged
 # 2. Additional Global Safety Checks (Per t2.md) - Optimized for staged files
 # Use || true to prevent script exit if grep finds nothing for the file list
-staged_files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(ts|tsx|js|jsx)$') || true
+staged_files=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(ts|tsx|js|jsx)$' | grep -E '^(backend|frontend)/') || true
 if [ -n "$staged_files" ]; then
 # UUID misuse check
-60
View File
@@ -1,60 +0,0 @@
---
trigger: always_on
---
# NAP-DMS Project Context
## Role & Persona
Act as a **Senior Full Stack Developer** specialized in:
- NestJS, Next.js, TypeScript
- Document Management Systems (DMS)
Focus:
- Data Integrity
- Security
- Maintainability
- Performance
You are a **Document Intelligence Engine** — not a general chatbot.
Every response must be **precise**, **spec-compliant**, and **production-ready**.
## Project Information
- **Project:** NAP-DMS (LCBP3)
- **Version:** 1.8.5
- **Stack:** NestJS + Next.js + TypeScript + MariaDB + Ollama (AI)
- **Repo:** https://git.np-dms.work/np-dms/lcbp3
## Rule Enforcement Tiers
### 🔴 Tier 1 — CRITICAL (CI BLOCKER)
Build fails immediately if violated:
- Security (Auth, RBAC, Validation)
- UUID Strategy (ADR-019) — no `parseInt` / `Number` / `+` on UUID
- Database correctness — verify schema before writing queries
- File upload security (ClamAV + whitelist)
- AI validation boundary (ADR-018)
- Error handling strategy (ADR-007)
- Forbidden patterns: `any`, `console.log`, UUID misuse
### 🟡 Tier 2 — IMPORTANT (CODE REVIEW)
Must fix before merge:
- Architecture patterns (thin controller, business logic in service)
- Test coverage (80%+ business logic, 70%+ backend overall)
- Cache invalidation
- Naming conventions
### 🟢 Tier 3 — GUIDELINES
Best practice — follow when possible:
- Code style / formatting (Prettier handles)
- Comment completeness
- Minor optimizations
-71
View File
@@ -1,71 +0,0 @@
---
trigger: always_on
---
# ADR-019 UUID Strategy
## CRITICAL RULES
- **NEVER** use `parseInt()` on UUID values
- **NEVER** use `Number()` on UUID values
- **NEVER** use `+` operator on UUID values
- **ALWAYS** use `publicId` (string UUID) for API responses
- **NEVER** expose internal INT `id` in API responses (use `@Exclude()`)
## Identifier Types
| Context | Type | Notes |
| ---------------- | ------------------------- | ------------------------------------------- |
| Internal / DB FK | `INT AUTO_INCREMENT` | Never exposed in API |
| Public API / URL | `UUIDv7` (MariaDB native) | Stored as BINARY(16), no transformer needed |
| Entity Property | `publicId: string` | Exposed directly in API (no transformation) |
| API Response | `publicId: string` (UUID) | INT `id` has `@Exclude()` — never appears |
## Backend Pattern (NestJS/TypeORM)
```typescript
// Entity
@Entity()
class Project extends UuidBaseEntity {
@Column({ type: 'uuid' })
publicId: string; // UUID string, no transformation needed
@PrimaryKey()
@Exclude()
id: number; // Internal INT, never exposed
}
// API Response → { id: "019505a1-7c3e-7000-8000-abc123def456" }
// Uses publicId directly, no @Expose({ name: 'id' }) needed
```
## Frontend Pattern (Next.js)
```typescript
// ✅ CORRECT — Use publicId only
type ProjectOption = {
publicId?: string; // No uuid, no id fallback
projectName?: string;
};
// ❌ WRONG — Multiple identifiers cause confusion
type ProjectOption = {
publicId?: string;
uuid?: string; // Don't do this
id?: number; // Don't do this
};
// ❌ NEVER use parseInt on UUID
parseInt(projectId); // "019505a1-7c3e-7000-8000-abc123def456" → 19505 (WRONG!)
// ❌ NEVER use id ?? '' fallback
const value = c.publicId ?? c.id ?? ''; // Wrong!
// ✅ CORRECT — Use publicId only
const value = c.publicId; // "019505a1-7c3e-7000-8000-abc123def456"
```
## Related Documents
- `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md`
- `specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md`
-36
View File
@@ -1,36 +0,0 @@
---
trigger: always_on
---
# Security Rules (Non-Negotiable)
## Mandatory Security Requirements
1. **Idempotency:** All critical `POST`/`PUT`/`PATCH` MUST validate the `Idempotency-Key` header (see the guard sketch after this list)
2. **Two-Phase File Upload:** Upload → Temp → Commit → Permanent
3. **Race Conditions:** Redis Redlock + TypeORM `@VersionColumn` for Document Numbering
4. **Validation:** Zod (frontend) + class-validator (backend DTO)
5. **Password:** bcrypt 12 salt rounds, min 8 chars, rotate every 90 days
6. **Rate Limiting:** `ThrottlerGuard` on all auth endpoints
7. **File Upload:** Whitelist PDF/DWG/DOCX/XLSX/ZIP, max 50MB, ClamAV scan
8. **AI Isolation (ADR-018):** Ollama on Admin Desktop ONLY — NO direct DB/storage access
9. **Error Handling (ADR-007):** Use layered error classification with user-friendly messages
10. **AI Integration (ADR-020):** RFA-First approach with unified pipeline architecture
11. **AI Audit Trail:** Log all AI interactions and human validations
12. **Rate Limiting:** Apply to AI endpoints to prevent abuse
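A minimal sketch of requirement 1; the guard name and the Redis replay check are assumptions, not a confirmed project API:
```typescript
// idempotency.guard.ts: hypothetical guard enforcing the Idempotency-Key header
import {
  BadRequestException,
  CanActivate,
  ExecutionContext,
  Injectable,
} from '@nestjs/common';

@Injectable()
export class IdempotencyGuard implements CanActivate {
  canActivate(context: ExecutionContext): boolean {
    const request = context.switchToHttp().getRequest();
    const key = request.headers['idempotency-key'];
    if (!key) {
      // Reject critical mutations that omit the header
      throw new BadRequestException('Idempotency-Key header is required');
    }
    // A full implementation would also check a Redis store to reject replayed keys
    return true;
  }
}
```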
## Full Documentation
`specs/06-Decision-Records/ADR-016-security-authentication.md`
## Security Checklist (Before Every Commit)
- [ ] Input validation implemented (Zod/class-validator)
- [ ] RBAC/CASL permissions checked
- [ ] No SQL injection vulnerabilities
- [ ] File upload validation (whitelist + ClamAV)
- [ ] Rate limiting applied to auth endpoints
- [ ] AI boundary enforcement (ADR-018) - no direct DB/storage access
- [ ] AI audit logging implemented for AI interactions
- [ ] Error handling follows ADR-007 layered classification
- [ ] OWASP Top 10 review passed
-32
View File
@@ -1,32 +0,0 @@
---
trigger: always_on
---
# TypeScript Rules
## Strict Requirements
- **Strict Mode** — all strict checks enforced
- **ZERO `any` types** — use proper types or `unknown` + narrowing
- **ZERO `console.log`** — NestJS `Logger` (backend); remove before commit (frontend)
## Comment Language Policy
- **Comments:** Thai (เข้าใจง่ายสำหรับทีมไทย, i.e. easy for the Thai team to understand)
- **Code Identifiers:** English (variables, functions, classes)
## Error Handling Pattern
```typescript
// Backend (NestJS)
import { HttpException, HttpStatus, Logger } from '@nestjs/common';

const logger = new Logger('ServiceName');

try {
  // ... business logic
} catch (error) {
  // Use logger instead of console.log
  logger.error('Error message', error instanceof Error ? error.stack : undefined);
  throw new HttpException('Message', HttpStatus.BAD_REQUEST);
}
// Frontend (Next.js)
// Remove all console.log before commit
// Use proper error boundaries and toast notifications
```
-38
View File
@@ -1,38 +0,0 @@
---
trigger: always_on
---
# Domain Terminology
## DMS Glossary
| ✅ Use | ❌ Don't Use |
| ------------------ | ------------------------------------- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Transmittal | Delivery Note, Cover Letter |
| Circulation | Distribution, Routing |
| Shop Drawing | Construction Drawing |
| Contract Drawing | Design Drawing, Blueprint |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
| RBAC | Permission System (generic) |
## Full Glossary
`specs/00-overview/00-02-glossary.md`
## Key Spec Files Priority
Spec priority: **`06-Decision-Records`** > **`05-Engineering-Guidelines`** > others
| Document | Path | Use When |
| ----------------------- | ----------------------------------------------------------------- | ------------------------------- |
| **Glossary** | `specs/00-overview/00-02-glossary.md` | Verify domain terminology |
| **Schema Tables** | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` | Before writing any query |
| **Data Dictionary** | `specs/03-Data-and-Storage/03-01-data-dictionary.md` | Field meanings + business rules |
| **Edge Cases** | `specs/01-Requirements/01-06-edge-cases-and-rules.md` | Prevent bugs in flows |
| **ADR-019 UUID** | `specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md` | UUID-related work |
| **Backend Guidelines** | `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` | NestJS patterns |
| **Frontend Guidelines** | `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md` | Next.js patterns |
| **Testing Strategy** | `specs/05-Engineering-Guidelines/05-04-testing-strategy.md` | Coverage goals |
-41
View File
@@ -1,41 +0,0 @@
---
trigger: always_on
---
# Forbidden Actions
## ❌ Never Do This
| ❌ Forbidden | ✅ Correct Approach |
| ----------------------------------------------- | ----------------------------------------------- |
| SQL Triggers for business logic | NestJS Service methods |
| `.env` files in production | `docker-compose.yml` environment section |
| TypeORM migration files | Edit schema SQL directly (ADR-009) |
| Inventing table/column names | Verify against `schema-02-tables.sql` |
| `any` TypeScript type | Proper types / generics |
| `console.log` in committed code | NestJS Logger (backend) / remove (frontend) |
| `req: any` in controllers | `RequestWithUser` typed interface |
| `parseInt()` on UUID values | Use UUID string directly (ADR-019) |
| Exposing INT PK in API responses | UUIDv7 (ADR-019) |
| AI accessing DB/storage directly | AI → DMS API → DB (ADR-018) |
| Direct file operations bypassing StorageService | `StorageService` for all file moves |
| Inline email/notification sending | BullMQ queue job |
| Deploying without Release Gates | Complete `04-08-release-management-policy.md` |
| AI direct cloud API calls | On-premises Ollama only (ADR-018) |
| AI outputs without human validation | Human-in-the-loop validation required (ADR-020) |
## Schema Changes (ADR-009)
- **NO TypeORM migrations** — edit SQL schema directly
- Always check `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` before writing queries
- Update Data Dictionary when changing fields
## UUID Handling
See `01-adr-019-uuid.md` for complete UUID rules.
Quick reminder:
- ❌ `parseInt(uuid)` → NEVER
- ❌ `Number(uuid)` → NEVER
- ✅ Use UUID string directly
-42
View File
@@ -1,42 +0,0 @@
---
trigger: always_on
---
# Development Flow
## 🔴 Critical Work — DB / API / Security / Workflow Engine
**MUST complete all steps:**
1. **Glossary check** — verify domain terms in `00-02-glossary.md`
2. **Read the spec** — select from Key Spec Files table
3. **Check schema** — verify table/column in `schema-02-tables.sql`
4. **Check data dictionary** — confirm field meanings + business rules
5. **Scan edge cases** — `01-06-edge-cases-and-rules.md`
6. **Check ADRs** — verify decisions align (ADR-009, ADR-018, ADR-019)
7. **Write code** — TypeScript strict, no `any`, no `console.log`
## 🟡 Normal Work — UI / Feature / Integration
- Follow existing patterns in codebase
- Check spec for relevant module only
- No need to read all specs
## 🟢 Quick Fix — Bug Fix / Typo / Style
- Fix directly
- Add minimal test if logic changed
- Check forbidden patterns before commit
## Context-Aware Triggers
| Request | Files to Check | Expected Response |
| -------------------- | ------------------------------------------------------- | --------------------------------------------------- |
| "สร้าง API ใหม่" | `05-02-backend-guidelines.md`, `schema-02-tables.sql` | NestJS Controller + Service + DTO + CASL Guard |
| "แก้ฟอร์ม frontend" | `05-03-frontend-guidelines.md`, `01-06-edge-cases.md` | RHF+Zod + TanStack Query + Thai comments |
| "เพิ่ม field ใหม่" | `ADR-009`, `data-dictionary.md`, `schema-02-tables.sql` | Edit SQL directly + update Data Dictionary + Entity |
| "ตรวจสอบ UUID" | `ADR-019`, `05-07-hybrid-uuid-implementation-plan.md` | UUIDv7 MariaDB native UUID + TransformInterceptor |
| "สร้าง migration" | `ADR-009`, `03-06-migration-business-scope.md` | Edit SQL schema directly + n8n workflow |
| "ตรวจสอบ permission" | `seed-permissions.sql`, `ADR-016` | CASL 4-Level RBAC matrix |
| "deploy production" | `04-08-release-management-policy.md`, `ADR-015` | Release Gates + Blue-Green strategy |
| "เพิ่ม test" | `05-04-testing-strategy.md` | Coverage goals + test patterns |
-36
View File
@@ -1,36 +0,0 @@
---
trigger: always_on
---
# Commit Checklist
## Pre-Commit Verification
- [ ] UUID pattern verified (no parseInt on UUID)
- [ ] No `any` types in TypeScript
- [ ] No `console.log` in committed code
- [ ] Comments in Thai
- [ ] Code identifiers in English
- [ ] Schema changes via SQL directly (not migration)
- [ ] Test coverage meets targets (Backend 70%+, Business Logic 80%+)
- [ ] Relevant ADRs checked (ADR-009, ADR-018, ADR-019)
- [ ] Glossary terms used correctly
- [ ] Error handling complete (Logger + HttpException)
- [ ] i18n keys used instead of hardcode text
- [ ] Cache invalidation when data modified
- [ ] Security checklist passed (OWASP Top 10)
## Commit Message Format
```
type(scope): description
[optional body]
```
Types: `feat`, `fix`, `docs`, `style`, `refactor`, `test`, `chore`
Examples:
- `feat(correspondence): add originator organization validation`
- `fix(uuid): correct parseInt usage to string comparison`
- `spec(agents): bump to v1.8.5 - refactor structure`
-78
View File
@@ -1,78 +0,0 @@
---
trigger: always_on
---
# ADR-007 Error Handling Strategy
## CRITICAL RULES
- **ALWAYS** use layered error classification (Validation, Business, System)
- **NEVER** expose technical details to end users
- **ALWAYS** provide user-friendly error messages with recovery guidance
- **ALWAYS** log technical details for debugging
- **NEVER** use generic error messages without context
## Error Classification
| Error Type | Description | User Message | Technical Log |
|------------|-------------|--------------|---------------|
| **Validation** | Input validation failures | Clear field-level errors | Full validation details |
| **Business** | Business rule violations | Actionable guidance | Business context + user ID |
| **System** | Infrastructure failures | Generic "try again" | Full stack trace + metrics |
## Backend Pattern (NestJS)
```typescript
// Custom Exception Hierarchy
export class BusinessException extends HttpException {
constructor(
message: string,
userMessage: string,
recoveryAction?: string,
errorCode?: string
) {
super({ message, userMessage, recoveryAction, errorCode }, 400);
}
}
// Global Exception Filter
@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
  private readonly logger = new Logger(GlobalExceptionFilter.name);

  catch(exception: unknown, host: ArgumentsHost) {
    const response = host.switchToHttp().getResponse();
    // Classify the error: known HttpException (validation/business) vs unknown (system)
    const isHttp = exception instanceof HttpException;
    const status = isHttp ? exception.getStatus() : 500;
    // Log technical details for debugging; never expose them to end users
    this.logger.error(String(exception), exception instanceof Error ? exception.stack : undefined);
    // Return a user-friendly message (with recovery guidance when available)
    response.status(status).json(
      isHttp ? exception.getResponse() : { userMessage: 'เกิดข้อผิดพลาด กรุณาลองใหม่อีกครั้ง' },
    );
  }
}
```
## Frontend Pattern (Next.js)
```typescript
// Error Display Component
const ErrorDisplay = ({ error, onRetry }) => {
const userMessage = error.userMessage || 'เกิดข้อผิดพลาด';
const recoveryAction = error.recoveryAction;
return (
<div>
<p>{userMessage}</p>
{recoveryAction && <p>{recoveryAction}</p>}
{onRetry && <button onClick={onRetry}>ลองใหม่</button>}
</div>
);
};
```
## Required Implementation
- [ ] Global Exception Filter with layered classification
- [ ] Custom exception hierarchy (Validation, Business, System)
- [ ] Standardized error response DTOs
- [ ] Frontend error display components
- [ ] Error recovery mechanisms where applicable
## Related Documents
- `specs/06-Decision-Records/ADR-007-error-handling-strategy.md`
- `specs/06-Decision-Records/ADR-010-logging-monitoring-strategy.md`
-100
View File
@@ -1,100 +0,0 @@
---
trigger: always_on
---
# ADR-020 AI Integration Architecture
## CRITICAL RULES
- **ALWAYS** follow ADR-018 AI boundary policy (isolation on Admin Desktop)
- **ALWAYS** use RFA-First approach for AI implementation
- **NEVER** allow AI direct database/storage access
- **ALWAYS** implement human-in-the-loop validation
- **NEVER** send sensitive data to cloud AI services
## AI Integration Patterns
### Architecture Overview
```
Frontend → AI Gateway API → Admin Desktop (Ollama) → Backend Validation
```
### Key Components
| Component | Location | Purpose |
|-----------|----------|---------|
| **AI Gateway** | Backend (NestJS) | API endpoints, validation, audit logging |
| **Ollama Engine** | Admin Desktop (Desk-5439) | LLM inference (Gemma 4) |
| **OCR Engine** | Admin Desktop (Desk-5439) | Thai/English text extraction |
| **Orchestrator** | QNAP NAS (n8n) | Workflow management |
## Backend Implementation (NestJS)
```typescript
// AI Module with boundary enforcement
@Module({
controllers: [AiController],
providers: [AiService, AiGateway],
exports: [AiService],
})
export class AiModule {
constructor() {
// Enforce ADR-018 boundaries
}
}
// AI Service with validation
@Injectable()
export class AiService {
  constructor(private readonly aiGateway: AiGateway) {} // gateway proxies to the Admin Desktop

  async extractMetadata(documentId: string): Promise<AIMetadata> {
    // 1. Validate permissions (CASL) before any AI call
    // 2. Send to Admin Desktop AI via the gateway; never direct DB/storage access (ADR-018)
    const raw = await this.aiGateway.extract(documentId); // hypothetical gateway method
    // 3. Validate the AI response before it is used anywhere
    // 4. Log the audit trail for this interaction
    // 5. Return validated results for human-in-the-loop review (ADR-020)
    return raw;
  }
}
```
## Frontend Pattern (Next.js)
```typescript
// Document Review Form (reusable component)
const DocumentReviewForm = ({ document, aiSuggestions }) => {
return (
<form>
<Field label="Document Type" suggestions={aiSuggestions.documentType} />
<Field label="Project Code" suggestions={aiSuggestions.projectCode} />
<Field label="Discipline" suggestions={aiSuggestions.discipline} />
<ConfidenceScore score={aiSuggestions.confidence} />
<HumanValidationActions />
</form>
);
};
```
## Security Requirements
- **AI Isolation:** All AI processing on Admin Desktop only
- **Data Privacy:** No cloud AI services, on-premises only
- **Audit Trail:** Log all AI interactions and human validations
- **Rate Limiting:** Prevent AI abuse and resource exhaustion
- **Validation:** All AI outputs must be validated before use
## Required Implementation
- [ ] AiModule with ADR-018 boundary enforcement
- [ ] AI Gateway API endpoints with validation
- [ ] DocumentReviewForm reusable component
- [ ] Admin Desktop Ollama + PaddleOCR setup
- [ ] n8n workflow orchestration
- [ ] AI audit logging and monitoring
- [ ] Human-in-the-loop validation workflows
## Related Documents
- `specs/06-Decision-Records/ADR-018-ai-boundary.md`
- `specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md`
- `specs/06-Decision-Records/ADR-017-ollama-data-migration.md`
+1
View File
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Run the full speckit pipeline from specification to analysis in one command.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Create or update the feature specification from a natural language feature description.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
 ---
+1
View File
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Execute the implementation planning workflow using the plan template to generate design artifacts.
 ---
+1
View File
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Run static analysis tools and aggregate results.
 ---
+1
View File
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Execute tests, measure coverage, and report results.
 ---
@@ -1,4 +1,5 @@
 ---
+auto_execution_mode: 0
 description: Perform code review with actionable feedback and suggestions.
 ---

Some files were not shown because too many files have changed in this diff.