Compare commits

...

140 Commits

Author SHA1 Message Date
admin 83b6620093 260427:1659 update Infras #02
CI / CD Pipeline / build (push) Successful in 5m51s
CI / CD Pipeline / deploy (push) Successful in 2m47s
2026-04-27 16:59:38 +07:00
admin a57fef4d44 690427:0812 Update Infras #01
CI / CD Pipeline / build (push) Successful in 5m51s
CI / CD Pipeline / deploy (push) Successful in 2m9s
2026-04-27 08:12:28 +07:00
admin 9384581aee 690421:1652 Update document-chunk Module #01
CI / CD Pipeline / build (push) Successful in 4m51s
CI / CD Pipeline / deploy (push) Successful in 3m17s
2026-04-21 16:52:58 +07:00
admin 3143dd7263 690421:1628 Update RAG Module #01
CI / CD Pipeline / build (push) Successful in 4m53s
CI / CD Pipeline / deploy (push) Failing after 5m7s
2026-04-21 16:28:23 +07:00
admin cf78e14709 690421:1611 Update ClamAV# #02
CI / CD Pipeline / build (push) Successful in 4m48s
CI / CD Pipeline / deploy (push) Failing after 1m21s
2026-04-21 16:11:22 +07:00
admin 72f28184ff fix(infra): resolve container startup failures with minimal capabilities
CI / CD Pipeline / build (push) Successful in 5m0s
CI / CD Pipeline / deploy (push) Failing after 56s
- Add CHOWN, SETUID, SETGID capabilities to backend container
- Add CHOWN, SETUID, SETGID capabilities to frontend container
- Maintain security hardening while allowing health checks to function
- Fix 'cannot start a stopped process: unknown' Docker error
- Containers need minimal capabilities for health checks and logging
2026-04-21 15:49:13 +07:00
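The capability set described in this commit would look roughly like the following Compose fragment. This is a sketch only: the service names, image names, ports, and health-check command are assumptions, not taken from the repository.

```yaml
services:
  backend:
    image: registry.internal/dms-backend:latest   # hypothetical image name
    cap_drop:
      - ALL              # keep the hardening baseline: start from zero capabilities
    cap_add:
      - CHOWN            # entrypoint adjusts ownership of mounted volumes
      - SETUID           # allows dropping from root to the app user
      - SETGID
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 30s
```

Dropping ALL and re-adding only these three is what the commit message means by "minimal capabilities": enough for ownership fixes, privilege dropping, and the health-check process to spawn, while everything else stays denied.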
admin 486aca08a8 690421:1536 Update ClamAV
CI / CD Pipeline / build (push) Successful in 4m54s
CI / CD Pipeline / deploy (push) Failing after 1m15s
2026-04-21 15:36:59 +07:00
admin 1549098eac fix(infra): update ClamAV image tag from 1.3 to 1.4.4
CI / CD Pipeline / build (push) Successful in 4m54s
CI / CD Pipeline / deploy (push) Failing after 1m16s
- Fix deployment failure due to non-existent clamav/clamav:1.3 image
- Update to latest available tag clamav/clamav:1.4.4
- Resolves manifest unknown error in CI/CD deployment
2026-04-21 15:01:48 +07:00
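The fix itself is a one-line image pin. As a sketch (surrounding Compose keys assumed):

```yaml
services:
  clamav:
    image: clamav/clamav:1.4.4   # the 1.3 tag no longer resolves ("manifest unknown")
```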
admin 486bf3b9a4 feat(infra-ops): finalize infrastructure configurations before merge
CI / CD Pipeline / build (push) Successful in 6m38s
CI / CD Pipeline / deploy (push) Failing after 47s
- Update ASUSTOR gitea-runner and registry configurations
- Add environment examples for registry services
- Clean up MariaDB configuration files
- Prepare for merge to main branch
2026-04-21 13:33:12 +07:00
admin e2753e4eac 690420:2332 Refactor QNAP service 2026-04-20 23:32:30 +07:00
admin 2e89761b0f fix(ci): update pnpm-lock.yaml to resolve dependency mismatches
CI / CD Pipeline / build (push) Successful in 13m43s
CI / CD Pipeline / deploy (push) Failing after 6m51s
2026-04-19 20:58:39 +07:00
admin 13745e5874 690419:1831 feat: update CI/CD to use SSH key authentication #05
CI / CD Pipeline / build (push) Failing after 4m57s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-19 18:31:30 +07:00
admin 733f3c3987 690419:1411 feat: update CI/CD to use SSH key authentication #05
CI / CD Pipeline / build (push) Successful in 9m10s
CI / CD Pipeline / deploy (push) Failing after 4m10s
2026-04-19 14:11:51 +07:00
admin c894c08fb8 690419:1329 feat: update CI/CD to use SSH key authentication #04
CI / CD Pipeline / build (push) Successful in 8m10s
CI / CD Pipeline / deploy (push) Failing after 14m39s
2026-04-19 13:29:42 +07:00
admin 657698558b 690419:1310 feat: update CI/CD to use SSH key authentication #03
CI / CD Pipeline / build (push) Successful in 10m31s
CI / CD Pipeline / deploy (push) Failing after 52s
2026-04-19 13:10:01 +07:00
admin 844caf477d 690419:1109 feat: update CI/CD to use SSH key authentication #02
CI / CD Pipeline / deploy (push) Has been cancelled
CI / CD Pipeline / build (push) Has been cancelled
2026-04-19 11:09:35 +07:00
admin feb1319fb3 690419:1035 feat: update CI/CD to use SSH key authentication
CI / CD Pipeline / build (push) Successful in 8m22s
CI / CD Pipeline / deploy (push) Failing after 31s
2026-04-19 10:35:23 +07:00
admin d422b040d9 690419:1012 Refactor Infra gitea #02
CI / CD Pipeline / build (push) Successful in 9m32s
CI / CD Pipeline / deploy (push) Failing after 54s
2026-04-19 10:12:58 +07:00
admin 29a6509c58 690418:1638 Refactor Infra gitea
CI / CD Pipeline / build (push) Has been cancelled
CI / CD Pipeline / deploy (push) Has been cancelled
2026-04-18 16:38:04 +07:00
admin 8b658e8530 fix(ci): use self-hosted runner for build job
CI / CD Pipeline / build (push) Failing after 7m32s
CI / CD Pipeline / deploy (push) Has been skipped
- Change from ubuntu-latest to self-hosted runner
- Self-hosted runner is on same network as QNAP
- Use standard actions/checkout@v4 with HTTPS
2026-04-18 08:49:58 +07:00
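The runner switch in this commit amounts to a one-line change in the workflow job definition. A sketch (job and workflow names are assumptions):

```yaml
jobs:
  build:
    runs-on: self-hosted          # was: ubuntu-latest; this runner is on the same LAN as the QNAP
    steps:
      - uses: actions/checkout@v4 # standard HTTPS checkout works from inside the network
```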
admin 0b7dd466ec fix(ci): add debugging and correct SSH host key
CI / CD Pipeline / build (push) Failing after 4m48s
CI / CD Pipeline / deploy (push) Has been skipped
- Add verbose logging to see exact error
- Use actual RSA host key from git.np-dms.work:2222
- Test SSH connection before clone
2026-04-18 08:40:01 +07:00
admin e5db7511c6 fix(ci): use SSH checkout directly (no HTTPS fallback)
CI / CD Pipeline / build (push) Failing after 19s
CI / CD Pipeline / deploy (push) Has been skipped
- SSH port 2222 is proven to work from push tests
- Simpler and faster than HTTPS with timeout fallback
2026-04-18 08:32:29 +07:00
admin b7d637642a fix(ci): use SSH fallback when HTTPS checkout times out
CI / CD Pipeline / build (push) Failing after 56s
CI / CD Pipeline / deploy (push) Has been skipped
- Replace actions/checkout with manual git clone
- Try HTTPS with 30s timeout, fallback to SSH port 2222
- Uses DEPLOY_KEY secret for SSH authentication
2026-04-18 08:24:42 +07:00
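The HTTPS-with-SSH-fallback checkout described above can be sketched as a workflow step like the one below. Hostname and port come from the commit messages; the repo variable, clone target, and exact ssh options are assumptions for illustration.

```yaml
steps:
  - name: Checkout (HTTPS with SSH fallback)
    run: |
      # Try HTTPS first with a 30 s cap; fall back to SSH on port 2222 using DEPLOY_KEY.
      if ! timeout 30 git clone "https://git.np-dms.work/${REPO}.git" src; then
        eval "$(ssh-agent -s)"
        echo "${DEPLOY_KEY}" | ssh-add -
        GIT_SSH_COMMAND="ssh -p 2222 -o StrictHostKeyChecking=accept-new" \
          git clone "ssh://git@git.np-dms.work/${REPO}.git" src
      fi
```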
admin 5e4e0444ed 690417:1707 Refactor Work flow ADR-021
CI / CD Pipeline / build (push) Failing after 8m55s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-17 17:07:41 +07:00
admin d7e48448e0 feat(adr-021): integrate workflow context into Transmittals & Circulation (v1.8.7)
CI / CD Pipeline / build (push) Failing after 7m24s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-17 16:33:26 +07:00
admin 5977e48e38 fix(workflow): ADR-021 code review fixes (8 bugs)
- fix(transmittal): guard duplicate workflow instance on submit()
- fix(workflow-guard): add organizationId to context so Level-2 RBAC works
- fix(circulation): organizationId context passed relation object not INT FK
- fix(transmittal): require Idempotency-Key header on POST submit endpoint
- fix(workflow): userId non-optional in processTransition controller
- fix(circulation): auto-close counts PENDING and IN_PROGRESS tasks
- fix(transmittal): status badge uses workflowState/DRAFT not purpose field
- fix(workflow): log cache invalidation failures instead of swallowing
- fix(workflow): implement getAvailableActions endpoint stub
- fix(i18n): add removeFile key to EN/TH locales
2026-04-17 16:25:51 +07:00
admin 3a5fc8d4af 690417:1538 Refactor Work flow ADR-021 2026-04-17 15:38:20 +07:00
admin 6d45bdaeb5 690414:1113 Update README.md /.agents/skills, /.windsurf/workflows 2026-04-14 11:13:42 +07:00
admin 02400fd88c 690412:1716 Done Task-FE-AI-03
CI / CD Pipeline / build (push) Failing after 7m53s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-12 17:16:37 +07:00
admin ca0454a043 690409:1012 Done Task-FE-AI-03
CI / CD Pipeline / build (push) Successful in 4m46s
CI / CD Pipeline / deploy (push) Successful in 5m47s
2026-04-09 10:12:23 +07:00
admin 99c8d61856 690409:0953 Done Task-BE-AI-02
CI / CD Pipeline / build (push) Successful in 4m30s
CI / CD Pipeline / deploy (push) Successful in 1m6s
2026-04-09 09:53:57 +07:00
admin 4f34aeae6b 690408:0909 Done Task BE-ERR-02
CI / CD Pipeline / build (push) Successful in 5m26s
CI / CD Pipeline / deploy (push) Successful in 8m19s
2026-04-08 09:09:12 +07:00
admin 961ee72343 690406:2310 Done Task BE-ERR-01
CI / CD Pipeline / build (push) Failing after 4m53s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-06 23:10:56 +07:00
admin c95e0f537e 690404:1139 Modify ADR
CI / CD Pipeline / build (push) Successful in 4m34s
CI / CD Pipeline / deploy (push) Successful in 7m33s
2026-04-04 11:39:56 +07:00
admin d775d5ad85 690403:2205 Modify AI (Add Gemma4 & PaddleOCR
CI / CD Pipeline / build (push) Successful in 4m43s
CI / CD Pipeline / deploy (push) Successful in 1m15s
2026-04-03 22:05:34 +07:00
admin 9c835ec4ac 690403:1632 fix dashboard cir and trans
CI / CD Pipeline / build (push) Successful in 5m0s
CI / CD Pipeline / deploy (push) Successful in 8m34s
2026-04-03 16:32:15 +07:00
admin d4f0d02c62 690402:2240 fix dashboard
CI / CD Pipeline / build (push) Failing after 4m18s
CI / CD Pipeline / deploy (push) Has been skipped
2026-04-02 22:40:11 +07:00
admin c188219e28 690402:2046 fix correspondence ATG Gemini Flash
CI / CD Pipeline / build (push) Successful in 7m10s
CI / CD Pipeline / deploy (push) Failing after 10m1s
2026-04-02 20:46:56 +07:00
Nattanin 92a9b6898b 690402:1503 Fix Property 'nonce' does not exist
CI / CD Pipeline / build (push) Successful in 5m12s
CI / CD Pipeline / deploy (push) Successful in 5m26s
2026-04-02 15:03:05 +07:00
admin 9e40fcc118 690401:2223 fix double warp by cluade opus local work
CI / CD Pipeline / build (push) Successful in 26m9s
CI / CD Pipeline / deploy (push) Failing after 9m4s
2026-04-01 22:23:06 +07:00
admin 1d868d10b3 690401:1326 fix secutities uuid
CI / CD Pipeline / build (push) Successful in 28m24s
CI / CD Pipeline / deploy (push) Failing after 16m23s
2026-04-01 13:26:19 +07:00
admin 83b04773f7 690401:0842 fix setting pagre
CI / CD Pipeline / build (push) Successful in 22m43s
CI / CD Pipeline / deploy (push) Successful in 9m11s
2026-04-01 08:42:53 +07:00
admin 6b89df874e 690401:0823 Update agent rules
CI / CD Pipeline / deploy (push) Has been cancelled
CI / CD Pipeline / build (push) Has been cancelled
2026-04-01 08:23:31 +07:00
admin b5960ba24c 690331:2336 Change to use .env
CI / CD Pipeline / build (push) Successful in 24m12s
CI / CD Pipeline / deploy (push) Failing after 7m3s
2026-03-31 23:36:03 +07:00
admin fb73d1c5b5 690331:1652 Correspondence Page Refactor by GPT-5.3-Codex Medium #04
CI / CD Pipeline / build (push) Successful in 12m6s
CI / CD Pipeline / deploy (push) Failing after 4m37s
2026-03-31 16:52:24 +07:00
admin bf5c67fc7e 690331:1616 Correspondence Page Refactor by GPT-5.3-Codex Medium #03
CI / CD Pipeline / build (push) Successful in 15m34s
CI / CD Pipeline / deploy (push) Successful in 6m11s
2026-03-31 16:16:12 +07:00
admin 156c28f49e fix: add robots.txt and remove *.txt from gitignore
CI / CD Pipeline / deploy (push) Has been cancelled
CI / CD Pipeline / build (push) Has been cancelled
- Add robots.txt to repository for SEO compliance
- Remove *.txt from .gitignore as it was blocking robots.txt
- Robots.txt allows crawling except for /api/ paths
2026-03-31 16:07:10 +07:00
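A robots.txt matching the description above (crawling allowed everywhere except `/api/` paths) can be as small as:

```text
User-agent: *
Disallow: /api/
```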
admin edbf516cdd 690331:1506 Correspondence Page Refactor by GPT-5.3-Codex Medium #02
CI / CD Pipeline / build (push) Successful in 20m59s
CI / CD Pipeline / deploy (push) Successful in 5m32s
2026-03-31 15:06:52 +07:00
admin 7231870e02 690331:1259 Correspondence Page Refactor by GPT-5.3-Codex Medium #01
CI / CD Pipeline / build (push) Successful in 22m7s
CI / CD Pipeline / deploy (push) Failing after 9m6s
2026-03-31 12:59:30 +07:00
admin 4538c83010 260330:1630 Addied correspondence_revieion_attcahments table table #04
CI / CD Pipeline / build (push) Successful in 17m50s
CI / CD Pipeline / deploy (push) Successful in 7m12s
2026-03-30 16:30:24 +07:00
admin f9dbff7811 fix(frontend): add discipline to InitialCorrespondenceData to resolve TS build error
CI / CD Pipeline / build (push) Successful in 18m2s
CI / CD Pipeline / deploy (push) Successful in 5m59s
2026-03-30 15:14:49 +07:00
admin 2d9bbdbfa4 260330:1424 Addied correspondence_revieion_attcahments table table #03
CI / CD Pipeline / build (push) Successful in 18m20s
CI / CD Pipeline / deploy (push) Failing after 10m30s
2026-03-30 14:24:18 +07:00
admin 7080a37a82 260330:1327 Addied correspondence_revieion_attcahments table table #02
CI / CD Pipeline / build (push) Failing after 23m26s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-30 13:27:38 +07:00
admin c83588ab43 260330:1011 Addied correspondence_revieion_attcahments table table #01
CI / CD Pipeline / build (push) Failing after 21m19s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-30 10:11:40 +07:00
admin 1c6fec6c65 690329:2252 Fixing refactor Correspondence GPT-5.3-Codex #05
CI / CD Pipeline / build (push) Successful in 22m17s
CI / CD Pipeline / deploy (push) Successful in 7m49s
2026-03-29 22:52:42 +07:00
admin abbdebf2b9 690329:2209 Fixing refactor Correspondence GPT-5.3-Codex #04
CI / CD Pipeline / build (push) Successful in 20m35s
CI / CD Pipeline / deploy (push) Successful in 10m59s
2026-03-29 22:09:40 +07:00
admin df3020012d 690329:2102 Fixing refactor Correspondence GPT-5.3-Codex #03
CI / CD Pipeline / build (push) Successful in 11m36s
CI / CD Pipeline / deploy (push) Successful in 8m41s
2026-03-29 21:02:40 +07:00
admin 0a52bd830d 690329:2041 Fixing refactor Correspondence GPT-5.3-Codex #02
CI / CD Pipeline / build (push) Failing after 15m15s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-29 20:41:08 +07:00
admin 2c0fcc0ac9 690329:2019 Fixing refactor Correspondence GPT-5.3-Codex #01
CI / CD Pipeline / build (push) Successful in 17m23s
CI / CD Pipeline / deploy (push) Failing after 2m29s
2026-03-29 20:19:22 +07:00
admin 43e380e5c1 690329:1904 Fixing revision number Kimi K2.5 #01
CI / CD Pipeline / build (push) Successful in 18m7s
CI / CD Pipeline / deploy (push) Failing after 3m15s
2026-03-29 19:04:24 +07:00
admin 65aaae9d90 690329:1621 Fixing superadmin by GPT-5.3 #01
CI / CD Pipeline / build (push) Successful in 14m25s
CI / CD Pipeline / deploy (push) Successful in 4m41s
2026-03-29 16:21:57 +07:00
admin 2074654c18 690329:1537 Fixing bugs uuid by Kimi K2.5 #06
CI / CD Pipeline / build (push) Successful in 11m14s
CI / CD Pipeline / deploy (push) Successful in 8m15s
2026-03-29 15:37:05 +07:00
admin a44d9296b5 690329:1444 Fixing bugs uuid by Kimi K2.5 #06
CI / CD Pipeline / build (push) Successful in 15m55s
CI / CD Pipeline / deploy (push) Successful in 5m7s
2026-03-29 14:44:55 +07:00
admin adb20daf3d 690329:1322 Fixing bugs uuid by Kimi K2.5 #05
CI / CD Pipeline / build (push) Successful in 9m32s
CI / CD Pipeline / deploy (push) Successful in 4m45s
2026-03-29 13:22:37 +07:00
admin 06b897ec8e 690329:1250 Fixing bugs uuid by Kimi K2.5 #04
CI / CD Pipeline / build (push) Successful in 12m8s
CI / CD Pipeline / deploy (push) Successful in 8m52s
2026-03-29 12:50:14 +07:00
admin e8965658b1 690329:1114 Fixing bugs uuid by GPT-5.3 #03
CI / CD Pipeline / build (push) Successful in 11m44s
CI / CD Pipeline / deploy (push) Successful in 7m37s
2026-03-29 11:14:06 +07:00
admin 6d873f016d 690329:1035 Fixing bugs uuid by Kimi #02
CI / CD Pipeline / build (push) Successful in 9m38s
CI / CD Pipeline / deploy (push) Successful in 11m45s
2026-03-29 10:35:47 +07:00
admin d296076705 690329:0005 Fixing bugr uuid by Kimi #01
CI / CD Pipeline / build (push) Successful in 7m27s
CI / CD Pipeline / deploy (push) Successful in 11m30s
2026-03-29 00:05:04 +07:00
admin 2993131496 690328:2128 Fixing Refactor uuid by Kimi #12
CI / CD Pipeline / build (push) Successful in 6m14s
CI / CD Pipeline / deploy (push) Successful in 7m51s
2026-03-28 21:28:53 +07:00
admin 7a9a15560b 690328:1703 Fixing Refactor uuid by Kimi #11
CI / CD Pipeline / build (push) Successful in 8m10s
CI / CD Pipeline / deploy (push) Successful in 4m26s
2026-03-28 17:03:12 +07:00
admin 57a3ed2d37 690328:1547 Fixing Refactor uuid by Kimi #10
CI / CD Pipeline / build (push) Successful in 6m16s
CI / CD Pipeline / deploy (push) Failing after 7m15s
2026-03-28 15:47:07 +07:00
admin 3f78f0ec00 690328:1328 Fixing Refactor uuid by Kimi #08
CI / CD Pipeline / build (push) Successful in 7m22s
CI / CD Pipeline / deploy (push) Failing after 12m2s
2026-03-28 13:29:14 +07:00
admin 3510a10ca0 chore(backend): add .jest-cache to .gitignore and remove tracked cache files
- Jest cache files should not be committed to version control
2026-03-28 13:13:57 +07:00
admin 919934e701 690328:1311 Fixing Refactor uuid by Kimi #07
CI / CD Pipeline / build (push) Successful in 6m40s
CI / CD Pipeline / deploy (push) Failing after 9m21s
2026-03-28 13:11:49 +07:00
admin 33274adab0 fix(backend): update transformIgnorePatterns for ESM modules in pnpm
- Fix uuid and @nestjs/elasticsearch ESM parsing errors

- Use flexible regex pattern: node_modules/(?!.*(uuid|@nestjs\+elasticsearch).*/)

- Supports pnpm's nested .pnpm structure
2026-03-28 13:09:39 +07:00
admin e1773481e2 fix(backend): resolve ESLint errors for Jest config and test setup files
- Add allowDefaultProject for JS config files in eslint.config.mjs

- Add no-console: off for test setup files

- Fix async arrow function without await in jest-e2e.setup.ts

- Remove unused eslint-disable directives
2026-03-28 12:56:04 +07:00
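In flat-config terms, the first two bullets above map to something like the fragment below. This is a sketch: the file globs and setup-file paths are assumptions, and only the `allowDefaultProject` / `no-console` pieces come from the commit message.

```javascript
// eslint.config.mjs (fragment)
import tseslint from 'typescript-eslint';

export default tseslint.config(
  {
    languageOptions: {
      parserOptions: {
        projectService: {
          // Lint JS config files without requiring them in a tsconfig project
          allowDefaultProject: ['*.js', '*.mjs'],
        },
      },
    },
  },
  {
    files: ['test/setup/**'],            // hypothetical path to the test setup files
    rules: { 'no-console': 'off' },      // setup files may log freely
  },
);
```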
admin a2720e9dc9 690328:1217 Fixing Refactor uuid by Kimi #04
CI / CD Pipeline / build (push) Failing after 5m10s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-28 12:17:01 +07:00
admin 5e65a161f4 690328:1129 Fixing Refactor uuid by Kimi #03
CI / CD Pipeline / build (push) Failing after 5m40s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-28 11:29:52 +07:00
admin 0879f14187 690328:1122 Fixing Refactor uuid by Kimi #02
CI / CD Pipeline / build (push) Failing after 4m9s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-28 11:22:03 +07:00
admin da8579d21b 690328:1106 Fixing Refactor uuid by Kimi #01
CI / CD Pipeline / build (push) Successful in 5m11s
CI / CD Pipeline / deploy (push) Failing after 4m28s
2026-03-28 11:06:25 +07:00
admin 76b18e7c37 690327:1651 Fixing Project list by ATG Opus #01
CI / CD Pipeline / build (push) Successful in 5m2s
CI / CD Pipeline / deploy (push) Successful in 5m1s
2026-03-27 16:51:57 +07:00
admin 5925ac8314 690327:1611 Fixing Refactor ADR-019 Naming convention uuid #187
CI / CD Pipeline / build (push) Successful in 5m51s
CI / CD Pipeline / deploy (push) Successful in 5m31s
2026-03-27 16:11:48 +07:00
admin fc61ff2491 690327:1426 Fixing Refactor ADR-019 Naming convention uuid #17
CI / CD Pipeline / build (push) Failing after 5m3s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-27 14:26:52 +07:00
admin 47f12508f4 690327:1345 Fixing Refactor ADR-019 Naming convention uuid #16
CI / CD Pipeline / build (push) Failing after 5m11s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-27 13:45:29 +07:00
admin bb33e542c7 690327:1235 Fixing Refactor ADR-019 Naming convention uuid #15
CI / CD Pipeline / build (push) Successful in 5m27s
CI / CD Pipeline / deploy (push) Successful in 11m21s
2026-03-27 12:35:08 +07:00
admin 2eab2e73d6 690327:1118 Fixing Refactor ADR-019 Naming convention uuid #14
CI / CD Pipeline / build (push) Successful in 5m40s
CI / CD Pipeline / deploy (push) Failing after 7m13s
2026-03-27 11:18:04 +07:00
admin 63d906a02a 690327:0841 Fixing Refactor ADR-019 Naming convention uuid #13
CI / CD Pipeline / build (push) Successful in 5m22s
CI / CD Pipeline / deploy (push) Successful in 7m38s
2026-03-27 08:41:05 +07:00
admin 437758ba50 690327:0758 Fixing Refactor ADR-019 Naming convention uuid #12
CI / CD Pipeline / build (push) Successful in 6m41s
CI / CD Pipeline / deploy (push) Failing after 5m44s
2026-03-27 07:58:39 +07:00
admin 9c5ac74ce5 690327:0024 Fixing Refactor ADR-019 Naming convention uuid #11
CI / CD Pipeline / build (push) Successful in 6m35s
CI / CD Pipeline / deploy (push) Failing after 12m21s
2026-03-27 00:24:16 +07:00
admin 50b6a0f901 690326:2306 Fixing Refactor ADR-019 Naming convention uuid #10
CI / CD Pipeline / build (push) Successful in 5m37s
CI / CD Pipeline / deploy (push) Successful in 2m30s
2026-03-26 23:06:09 +07:00
admin 54f6044e93 690326:2236 Fixing Refactor ADR-019 Naming convention uuid #09
CI / CD Pipeline / build (push) Successful in 5m42s
CI / CD Pipeline / deploy (push) Failing after 20m17s
2026-03-26 22:36:32 +07:00
admin 740c116b95 690326:2212 Fixing Refactor ADR-019 Naming convention uuid #08
CI / CD Pipeline / build (push) Successful in 6m25s
CI / CD Pipeline / deploy (push) Failing after 39s
2026-03-26 22:12:55 +07:00
admin 0a1ea1e4bb 690326:2139 Fixing Refactor ADR-019 Naming convention uuid #07
CI / CD Pipeline / build (push) Successful in 10m6s
CI / CD Pipeline / deploy (push) Failing after 1m16s
2026-03-26 21:39:03 +07:00
admin 25ea2fcd0f 260326:1726 Fixing Refactor ADR-019 Naming convention uuid #06
CI / CD Pipeline / build (push) Successful in 15m6s
CI / CD Pipeline / deploy (push) Successful in 8m56s
2026-03-26 17:26:28 +07:00
admin 29922aec1f 260326:1634 Fixing Refactor ADR-019 Naming convention uuid #05
CI / CD Pipeline / build (push) Successful in 13m18s
CI / CD Pipeline / deploy (push) Failing after 4m3s
2026-03-26 16:34:17 +07:00
admin 59cb928dd7 260326:1547 Fixing Refactor ADR-019 Naming convention uuid #04
CI / CD Pipeline / build (push) Successful in 10m26s
CI / CD Pipeline / deploy (push) Failing after 4m6s
2026-03-26 15:47:45 +07:00
admin 968fc1f462 260326:1523 Fixing Refactor ADR-019 Naming convention uuid #03
CI / CD Pipeline / build (push) Successful in 10m26s
CI / CD Pipeline / deploy (push) Failing after 1m0s
2026-03-26 15:23:56 +07:00
admin e97d0fa5ef 260326:1455 Fixing Refactor ADR-019 Naming convention uuid #02
CI / CD Pipeline / build (push) Failing after 16m2s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-26 14:55:55 +07:00
admin 1aff83214f 260326:1347 Fixing Refactor ADR-019 Naming convention uuid #01
CI / CD Pipeline / build (push) Failing after 17m29s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-26 13:47:07 +07:00
admin 978d66e49e 260326:0907 Fixing Naming convention missunderstand #03
CI / CD Pipeline / build (push) Successful in 11m23s
CI / CD Pipeline / deploy (push) Failing after 4m12s
2026-03-26 09:07:31 +07:00
admin c1eb79511a 260326:0842 Fixing Naming convention missunderstand #0
CI / CD Pipeline / build (push) Successful in 15m38s
CI / CD Pipeline / deploy (push) Failing after 1m26s
2026-03-26 08:42:54 +07:00
admin d36d4b0bf4 690325:2132 Fixing Naming convention missunderstand #01
CI / CD Pipeline / build (push) Failing after 38m8s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-25 21:32:47 +07:00
admin 509fe7b597 260325:1349 Refactor correspondence & rfa #04
CI / CD Pipeline / build (push) Successful in 16m44s
CI / CD Pipeline / deploy (push) Successful in 7m33s
2026-03-25 13:49:56 +07:00
admin 883aeb4590 260325:1333 Refactor correspondence & rfa #03
CI / CD Pipeline / build (push) Failing after 11m25s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-25 13:33:51 +07:00
admin 7865064388 260325:1324 Refactor correspondence & rfa #02
CI / CD Pipeline / build (push) Failing after 6m6s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-25 13:24:50 +07:00
admin 89e0ea0567 260325:1223 Refactor correspondence & rfa #01
CI / CD Pipeline / build (push) Failing after 27m23s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-25 12:23:40 +07:00
admin aa82b890a5 260324:2133 Refactor correspondence & rfa
CI / CD Pipeline / build (push) Failing after 17m3s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-24 21:33:59 +07:00
admin 42fc9fa502 260324:1439 Refactor RFA :correct ci-deploy #03
CI / CD Pipeline / build (push) Successful in 23m28s
CI / CD Pipeline / deploy (push) Successful in 5m48s
2026-03-24 14:39:09 +07:00
admin cb9e2e4e26 260324:1357 Refactor RFA :correct ci-deploy #02
CI / CD Pipeline / build (push) Successful in 25m36s
CI / CD Pipeline / deploy (push) Failing after 3m32s
2026-03-24 13:57:07 +07:00
admin 4cd0952482 260324:1349 Refactor RFA #01
CI / CD Pipeline / build (push) Failing after 1m52s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-24 13:49:30 +07:00
admin a3e3206b06 260324:1033 fix ci-deploy : optimize #05
CI / CD Pipeline / build (push) Successful in 6m43s
CI / CD Pipeline / deploy (push) Successful in 1m5s
2026-03-24 10:33:25 +07:00
admin 1a2b7ead65 260324:1018 fix ci-deploy : optimize #04
CI / CD Pipeline / build (push) Successful in 7m45s
CI / CD Pipeline / deploy (push) Failing after 1m13s
2026-03-24 10:18:19 +07:00
admin 2d13618154 260324:0956 fix ci-deploy : optimize #03
CI / CD Pipeline / build (push) Successful in 8m56s
CI / CD Pipeline / deploy (push) Failing after 34s
2026-03-24 09:56:24 +07:00
admin a60bb0ba71 260324:0945 fix ci-deploy : optimize #02
CI / CD Pipeline / build (push) Successful in 8m5s
CI / CD Pipeline / deploy (push) Failing after 34s
2026-03-24 09:45:34 +07:00
admin 837bea4237 260324:0918 fix ci-deploy : optimize #01
CI / CD Pipeline / build (push) Successful in 10m43s
CI / CD Pipeline / deploy (push) Failing after 3m29s
2026-03-24 09:18:58 +07:00
admin fab2f43944 260323:1702 fix CI : Deploy : Turbopack Build Error #02
CI / CD Pipeline / build (push) Successful in 9m20s
CI / CD Pipeline / deploy (push) Failing after 3m59s
2026-03-23 17:02:49 +07:00
admin 8548de9a94 260323:1443 fix CI : Deploy : Turbopack Build Error #01
CI / CD Pipeline / build (push) Successful in 8m6s
CI / CD Pipeline / deploy (push) Failing after 3m42s
2026-03-23 14:43:57 +07:00
admin d20eb945fa 260323:1423 fix CI : Verify relaase : 🐋 Login to Internal Registry #08
CI / CD Pipeline / build (push) Successful in 8m40s
CI / CD Pipeline / deploy (push) Failing after 7m22s
2026-03-23 14:23:30 +07:00
admin 7018713f1a 260323:1406 fix CI : Verify relaase : 🐋 Login to Internal Registry #07
CI / CD Pipeline / build (push) Successful in 8m56s
CI / CD Pipeline / deploy (push) Failing after 15s
2026-03-23 14:06:09 +07:00
admin 148dfb5507 260323:1404 fix CI : Verify relaase : 🐋 Login to Internal Registry #06
CI / CD Pipeline / build (push) Failing after 26s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 14:04:57 +07:00
admin 463f1c7224 260323:1353 fix CI : Verify relaase : 🐋 Login to Internal Registry #05
CI / CD Pipeline / build (push) Successful in 9m48s
CI / CD Pipeline / deploy (push) Failing after 17s
2026-03-23 13:53:38 +07:00
admin 47228506dc 260323:1333 fix CI : Verify relaase : 🐋 Login to Internal Registry #04
CI / CD Pipeline / build (push) Successful in 8m22s
CI / CD Pipeline / deploy (push) Failing after 19s
2026-03-23 13:33:10 +07:00
admin f82a32fbe4 260323:1316 fix CI : Verify relaase : 🐋 Login to Internal Registry #03
CI / CD Pipeline / build (push) Successful in 12m35s
CI / CD Pipeline / release (push) Failing after 27s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 13:16:30 +07:00
admin 80c404a27a 260323:1132 fix CI : Verify relaase : 🐋 Login to Internal Registry #02
CI / CD Pipeline / build (push) Successful in 14m22s
CI / CD Pipeline / release (push) Failing after 1m4s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 11:32:23 +07:00
admin 4ef6679e72 260323:1113 fix CI : Verify relaase : 🐋 Login to Internal Registry #01
CI / CD Pipeline / build (push) Successful in 10m39s
CI / CD Pipeline / release (push) Failing after 4m28s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 11:13:48 +07:00
admin e3c476f011 260323:1050 fix CI : Verify Build frontend #02 correct _???
CI / CD Pipeline / build (push) Successful in 15m14s
CI / CD Pipeline / release (push) Failing after 20s
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 10:50:20 +07:00
admin 32141f519a 260323:0954 fix CI : Run Tests frontend #01
CI / CD Pipeline / build (push) Failing after 15m27s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 09:54:31 +07:00
admin 4422c68894 260323:0917 fix CI : Run Tests #01
CI / CD Pipeline / build (push) Failing after 10m45s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 09:17:26 +07:00
admin 55116abe5a 260323:0833 fix CI : lint #01 existing is assigned a value but never used.
CI / CD Pipeline / build (push) Failing after 17m41s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-23 08:33:03 +07:00
admin 2da4907991 690322:2200 Fixing Deployment Errors #03
CI / CD Pipeline / build (push) Failing after 9m39s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-22 22:00:18 +07:00
admin ae019f243a 690322:2132 Fixing Deployment Errors #02
CI / CD Pipeline / build (push) Failing after 6m50s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-22 21:32:35 +07:00
admin 4dc14aba5b 690322:2123 Fixing Deployment Errors
CI / CD Pipeline / build (push) Failing after 7m56s
CI / CD Pipeline / release (push) Has been skipped
CI / CD Pipeline / deploy (push) Has been skipped
2026-03-22 21:23:08 +07:00
admin 5dad7ab7c1 690322:2047 Fixing Deployment Blocking TypeScript Errors
CI Pipeline / build (push) Failing after 9m47s
Build and Deploy / deploy (push) Failing after 4m42s
2026-03-22 20:47:22 +07:00
admin 11984bfa29 260322:1648 Correct Coresspondence / Doing RFA / Correct CI
CI Pipeline / build (push) Failing after 12m41s
Build and Deploy / deploy (push) Failing after 2m44s
2026-03-22 16:48:12 +07:00
admin e5deedb42e chore: setup husky, lint-staged and ci pipeline (infrastructure) 2026-03-22 10:02:48 +07:00
admin a91127e296 260322:0907 Correct Coresspondence / Doing RFA / Corret Deploy
Build and Deploy / deploy (push) Failing after 6m53s
2026-03-22 09:07:47 +07:00
admin ee469b467c 260322:0903 Correct Coresspondence / Doing RFA/ fetch
Build and Deploy / deploy (push) Failing after 54s
2026-03-22 09:03:08 +07:00
admin 8f24698957 260322:0859 Correct Coresspondence / Doing RFA
Build and Deploy / deploy (push) Failing after 9s
2026-03-22 08:59:22 +07:00
admin 36d078ae24 260322:0849 Correct Coresspondence / Doing RFA
Build and Deploy / deploy (push) Failing after 52s
2026-03-22 08:49:50 +07:00
admin 03d16cfd64 260321:1700 Correct Coresspondence / Doing RFA 2026-03-21 17:00:41 +07:00
1199 changed files with 169632 additions and 36086 deletions
@@ -1,78 +0,0 @@
---
trigger: always_on
---
# Project Specifications & Context Protocol
Description: Enforces strict adherence to the project's documentation structure for all agent activities.
Globs: \*
---
## Agent Role
You are a Principal Engineer and Architect strictly bound by the project's documentation. You do not improvise outside of the defined specifications.
## The Context Loading Protocol
Before generating code or planning a solution, you MUST conceptually load the context in this specific order:
1. **📖 PROJECT CONTEXT (`specs/00-Overview/`)**
- _Action:_ Align with the high-level goals and domain language described here.
2. **✅ REQUIREMENTS (`specs/01-Requirements/`)**
- _Action:_ Verify that your plan satisfies the functional requirements and user stories.
- _Constraint:_ If a requirement is ambiguous, stop and ask.
3. **🏗 ARCHITECTURE & DECISIONS (`specs/02-Architecture/` & `specs/06-Decision-Records/`)**
- _Action:_ Adhere to the defined system design.
- _Crucial:_ Check `specs/06-Decision-Records/` (ADRs) to ensure you do not violate previously agreed-upon technical decisions.
4. **💾 DATABASE & SCHEMA (`specs/03-Data-and-Storage/`)**
- _Action:_
- **Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`** for exact table structures and constraints. (Schema split: `01-drop`, `02-tables`, `03-views-indexes`)
- **Consult `specs/03-Data-and-Storage/03-01-data-dictionary.md`** for field meanings and business rules.
- **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-basic.sql`** to understand initial data states.
- **Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`** to understand initial permissions states.
- **Check `specs/03-Data-and-Storage/03-04-legacy-data-migration.md`** for migration context (ADR-017).
- **Check `specs/03-Data-and-Storage/03-05-n8n-migration-setup-guide.md`** for n8n workflow setup.
- _Constraint:_ NEVER invent table names or columns. Use ONLY what is defined here.
5. **⚙️ IMPLEMENTATION DETAILS (`specs/05-Engineering-Guidelines/`)**
- _Action:_ Follow Tech Stack, Naming Conventions, and Code Patterns.
6. **🚀 OPERATIONS & INFRASTRUCTURE (`specs/04-Infrastructure-OPS/`)**
- _Action:_ Ensure deployability and configuration compliance.
- _Constraint:_ Ensure deployment paths, port mappings, and volume mounts are consistent with this documentation.
## Execution Rules
### 1. Citation Requirement
When proposing a change or writing code, you must explicitly reference the source of truth:
> "Implementing feature X per `specs/01-Requirements/` using pattern defined in `specs/05-Engineering-Guidelines/`."
### 2. Conflict Resolution
- **Spec vs. Training Data:** The `specs/` folder ALWAYS supersedes your general training data.
- **Spec vs. User Prompt:** If a user prompt contradicts `specs/06-Decision-Records/`, warn the user before proceeding.
### 3. File Generation
- Do not create new files outside of the established project structure:
- Backend: `backend/src/modules/<name>/`, `backend/src/common/`
- Frontend: `frontend/app/`, `frontend/components/`, `frontend/hooks/`, `frontend/lib/`
- Specs: `specs/` subdirectories only
- Keep the code style consistent with `specs/05-Engineering-Guidelines/`.
- New modules MUST follow the workflow in `.agents/workflows/create-backend-module.md` or `.agents/workflows/create-frontend-page.md`.
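The allowed-path rule above can be expressed as a small guard. This is an illustrative sketch only, not part of the project's tooling; the prefix list simply mirrors the bullets above and would need to be kept in sync with the real structure.

```shell
#!/usr/bin/env bash
# Sketch: check whether a proposed new file path falls inside the
# structure allowed by rule "3. File Generation" above.
is_allowed_path() {
  local p="$1"
  case "$p" in
    backend/src/modules/*|backend/src/common/*) return 0 ;;
    frontend/app/*|frontend/components/*|frontend/hooks/*|frontend/lib/*) return 0 ;;
    specs/*) return 0 ;;
    *) return 1 ;;
  esac
}

is_allowed_path "backend/src/modules/rag/rag.service.ts" && echo "allowed"
is_allowed_path "backend/scratch/tmp.ts" || echo "rejected"
```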
### 4. Schema Changes
- **DO NOT** create or run TypeORM migration files.
- Modify the schema directly in `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` (or `01-drop`/`03-views-indexes` as appropriate).
- Update `specs/03-Data-and-Storage/03-01-data-dictionary.md` if adding/changing columns.
- Notify the user so they can apply the SQL change to the live database manually.
- **AI Isolation (ADR-018):** Ollama runs on ASUSTOR only. AI has NO direct DB access, NO write access to uploads. All writes go through DMS API.
---
-38
@@ -1,38 +0,0 @@
---
trigger: always_on
description: Control which shell commands the agent may run automatically.
allowAuto:
- 'pnpm test:watch'
- 'pnpm test:debug'
- 'pnpm test:e2e'
- 'git status'
- 'git log --oneline'
- 'git diff'
- 'git branch'
- 'tsc --noEmit'
denyAuto:
- 'rm -rf'
- 'Remove-Item'
- 'git push --force'
- 'git reset --hard'
- 'git clean -fd'
- 'curl | bash'
- 'docker compose down'
- 'DROP TABLE'
- 'TRUNCATE'
- 'DELETE FROM'
alwaysReview: true
scopes:
- 'backend/src/**'
- 'backend/test/**'
- 'frontend/app/**'
---
# Execution Rules
- Only auto-execute commands that are explicitly listed in `allowAuto`.
- Commands in `denyAuto` must always be blocked, even if manually requested.
- All shell operations that create, modify, or delete files in `backend/src/`, `backend/test/`, or `frontend/app/` require human review.
- Alert before running any SQL that modifies data (INSERT/UPDATE/DELETE/DROP/TRUNCATE).
- Alert if environment variables related to DB connection or secrets (DATABASE_URL, JWT_SECRET, passwords) would be displayed or logged.
- Never auto-execute commands that expose sensitive credentials via MCP tools or shell output.
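The gate described by `allowAuto`/`denyAuto` could be sketched as follows. This is a minimal illustration, not the agent's actual enforcement code: the lists are abbreviated copies of the front matter (a real implementation would parse them from the rule file), deny entries match as substrings so variants like `rm -rf /tmp` are still blocked, and allow entries match exactly; everything else falls through to human review.

```shell
#!/usr/bin/env bash
# Sketch of the allow/deny command gate described above.
ALLOW_AUTO=("pnpm test:e2e" "git status" "git diff" "tsc --noEmit")
DENY_AUTO=("rm -rf" "git push --force" "DROP TABLE")

classify_command() {
  local cmd="$1" pat
  # Deny first: substring match, so "rm -rf node_modules" is still caught.
  for pat in "${DENY_AUTO[@]}"; do
    if [[ "$cmd" == *"$pat"* ]]; then echo "deny"; return; fi
  done
  # Allow only on exact match; anything else requires review.
  for pat in "${ALLOW_AUTO[@]}"; do
    if [[ "$cmd" == "$pat" ]]; then echo "auto"; return; fi
  done
  echo "review"
}

classify_command "git status"           # exact allow-list hit -> auto
classify_command "rm -rf node_modules"  # substring deny hit  -> deny
```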
+200 -48
@@ -2,7 +2,7 @@
> **The Event Horizon of Software Quality.**
> _Adapted for Google Antigravity IDE from [github/spec-kit](https://github.com/github/spec-kit)._
> _Version: 1.2.0 — LCBP3-DMS Edition (v1.8.1 UAT Ready)_
> _Version: 1.8.6 — LCBP3-DMS Edition (v1.8.6 Production Ready)_
---
@@ -55,7 +55,7 @@ Some skills and scripts reference a `.specify/` directory for templates and proj
The toolkit is organized into modular components that provide both the logic (Scripts) and the structure (Templates) for the agent.
```text
.agents/
.agents/ # Agent Skills & Rules
├── skills/ # @ Mentions (Agent Intelligence)
│ ├── nestjs-best-practices/ # NestJS Architecture Patterns
│ ├── next-best-practices/ # Next.js App Router Patterns
@@ -78,32 +78,37 @@ The toolkit is organized into modular components that provide both the logic (Sc
│ ├── speckit-tester/ # Test Runner & Coverage
│ └── speckit-validate/ # Implementation Validator
├── workflows/ # / Slash Commands (Orchestration)
│ ├── 00-speckit-all.md # Full Pipeline (10 steps: Specify → Validate)
│ ├── 01-11-speckit-*.md # Individual phase workflows
│ ├── speckit-prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
│ ├── schema-change.md # DB Schema Change (ADR-009)
│ ├── create-backend-module.md # NestJS Module Scaffolding
│ ├── create-frontend-page.md # Next.js Page Scaffolding
│ ├── deploy.md # Deployment via Gitea CI/CD
│ └── util-speckit-*.md # Utilities (checklist, diff, migrate, etc.)
├── rules/ # Project Context & Validation Rules
│ ├── 00-project-context.md # Role, Persona, Rule Tiers
│ ├── 01-adr-019-uuid.md # UUID Strategy (Critical)
│ ├── 02-security.md # Security Requirements
│ ├── 03-typescript.md # TypeScript Standards
│ ├── 04-domain-terminology.md # DMS Glossary Compliance
│ ├── 05-forbidden-actions.md # Critical Prohibited Patterns
│ ├── 06-backend-patterns.md # NestJS Architecture Rules
│ ├── 07-frontend-patterns.md # Next.js App Router Rules
│ ├── 08-development-flow.md # Development Workflow
│ ├── 09-commit-checklist.md # Pre-commit Validation
│ ├── 10-error-handling.md # ADR-007 Compliance
│ └── 11-ai-integration.md # ADR-018/020 AI Boundaries
└── scripts/
├── bash/ # Bash Core (Kinetic logic)
│ ├── common.sh # Shared utilities & path resolution
│ ├── check-prerequisites.sh # Prerequisite validation
│ ├── create-new-feature.sh # Feature branch creation
│ ├── setup-plan.sh # Plan template setup
│ ├── update-agent-context.sh # Agent file updater (main)
│ ├── plan-parser.sh # Plan data extraction (module)
│ ├── content-generator.sh # Language-specific templates (module)
│ └── agent-registry.sh # 17-agent type registry (module)
├── powershell/ # PowerShell Equivalents (Windows-native)
│ ├── common.ps1 # Shared utilities & prerequisites
│ └── create-new-feature.ps1 # Feature branch creation
├── fix_links.py # Spec link fixer
├── verify_links.py # Spec link verifier
└── start-mcp.js # MCP server launcher
.windsurf/workflows/ # / Slash Commands (Orchestration)
├── 00-speckit-all.md # Full Pipeline (10 steps: Specify → Validate)
├── 01-11-speckit-*.md # Individual phase workflows
├── speckit-prepare.md # Prep Pipeline (5 steps: Specify → Analyze)
├── schema-change.md # DB Schema Change (ADR-009)
├── create-backend-module.md # NestJS Module Scaffolding
├── create-frontend-page.md # Next.js Page Scaffolding
├── deploy.md # Deployment via Gitea CI/CD
├── review.md # Code Review Workflow
└── util-speckit-*.md # Utilities (checklist, diff, migrate, etc.)
```
---
@@ -254,41 +259,41 @@ If you change your mind mid-project:
---
## 🏗️ LCBP3-DMS Project Notes (v1.8.1)
## 🏗️ LCBP3-DMS Project Notes (v1.8.6)
### 📊 Current Status: UAT Ready (2026-03-11)
### 📊 Current Status: Production Ready (2026-04-14)
| Area | Status |
|------|--------|
| Backend | ✅ 18 Modules, Production Ready |
| Frontend | ✅ 100% Complete |
| Database | ✅ Schema v1.8.0 Stable |
| Documentation | ✅ **10/10 Gaps Closed** |
| AI Migration | 🔄 Pre-migration Setup (n8n + Ollama) |
| UAT | 🔄 In Progress |
| Deployment | 📋 Pending Go-Live |
| Area | Status |
| ------------- | ------------------------------- |
| Backend | ✅ 18 Modules, Production Ready |
| Frontend | ✅ 100% Complete |
| Database | ✅ Schema v1.8.6 Stable |
| Documentation | ✅ **10/10 Gaps Closed** |
| AI Migration | ✅ Ollama Integration Complete |
| UAT | ✅ Completed Successfully |
| Deployment    | ✅ Production Deployed          |
### 📁 Key Spec Files (Always Check Before Writing Code)
| Document | Path | When to Use |
|--------|------|--------|
| Schema Tables | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` | Before writing queries |
| Data Dictionary | `specs/03-Data-and-Storage/03-01-data-dictionary.md` | Checking business rules |
| Edge Cases | `specs/01-Requirements/01-06-edge-cases-and-rules.md` | 37 Rules |
| Migration Scope | `specs/03-Data-and-Storage/03-06-migration-business-scope.md` | Migration Bot |
| Release Policy | `specs/04-Infrastructure-OPS/04-08-release-management-policy.md` | Before deploying |
| UAT Criteria | `specs/01-Requirements/01-05-acceptance-criteria.md` | Feature verification |
| Document        | Path                                                             | When to Use             |
| --------------- | ---------------------------------------------------------------- | ----------------------- |
| Schema Tables   | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`    | Before writing queries  |
| Data Dictionary | `specs/03-Data-and-Storage/03-01-data-dictionary.md`             | Checking business rules |
| Edge Cases      | `specs/01-Requirements/01-06-edge-cases-and-rules.md`            | 37 Rules                |
| Migration Scope | `specs/03-Data-and-Storage/03-06-migration-business-scope.md`    | Migration Bot           |
| Release Policy  | `specs/04-Infrastructure-OPS/04-08-release-management-policy.md` | Before deploying        |
| UAT Criteria    | `specs/01-Requirements/01-05-acceptance-criteria.md`             | Feature verification    |
### ⚡ Project-Specific Workflow Cheatsheet
| Task | Workflow / Command | Notes |
|------|--------------------|-------|
| Create Backend Module | `/create-backend-module` | Scaffolds NestJS module |
| Create Frontend Page | `/create-frontend-page` | Next.js App Router page |
| Schema Change | `/schema-change` | ADR-009: No migrations |
| Deploy | `/deploy` | Blue-Green via Gitea CI/CD |
| UAT Feature Check | `/11-speckit-validate` | vs `01-05-acceptance-criteria.md` |
| Security Audit | `@speckit-security-audit` | OWASP + CASL + ClamAV |
| Task | Workflow / Command | Notes |
| --------------------- | ------------------------- | --------------------------------- |
| Create Backend Module | `/create-backend-module` | Scaffolds NestJS module |
| Create Frontend Page | `/create-frontend-page` | Next.js App Router page |
| Schema Change | `/schema-change` | ADR-009: No migrations |
| Deploy | `/deploy` | Blue-Green via Gitea CI/CD |
| UAT Feature Check | `/11-speckit-validate` | vs `01-05-acceptance-criteria.md` |
| Security Audit | `@speckit-security-audit` | OWASP + CASL + ClamAV |
### 🚫 Critical Forbidden Actions
@@ -300,4 +305,151 @@ If you change your mind mid-project:
---
## 🔧 Troubleshooting
### Common Issues & Solutions
#### **Version Inconsistency Errors**
**Problem**: Scripts report version mismatches between files.
**Solution**:
```bash
# Run version validation
./scripts/bash/validate-versions.sh
# Fix by updating all files to v1.8.6
# Then re-run validation to confirm
```
**Files to check**:
- `.agents/README.md`
- `.agents/skills/VERSION`
- `.agents/rules/00-project-context.md`
- `.agents/skills/skills.md`
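The core of such a version pass can be sketched in a few lines. This is a rough illustration of what `validate-versions.sh` likely does, not its actual contents: pull the first X.Y.Z version string out of each tracked file and compare it against the expected release.

```shell
#!/usr/bin/env bash
# Sketch: flag files whose first X.Y.Z version string differs from EXPECTED.
EXPECTED="1.8.6"

check_version() {
  local file="$1" found
  # First semver-looking token in the file; empty if none found.
  found=$(grep -oE '[0-9]+\.[0-9]+\.[0-9]+' "$file" | head -1 || true)
  if [ "$found" = "$EXPECTED" ]; then
    echo "OK $file ($found)"
  else
    echo "MISMATCH $file (found: ${found:-none}, expected: $EXPECTED)"
  fi
}

for f in .agents/README.md .agents/skills/VERSION \
         .agents/rules/00-project-context.md .agents/skills/skills.md; do
  [ -f "$f" ] && check_version "$f" || true
done
```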
#### **Missing Workflow Files**
**Problem**: Workflows not found in `.windsurf/workflows/`.
**Solution**:
```bash
# Sync workflow check
./scripts/bash/sync-workflows.sh
# Verify all 23 expected workflows are present
# Create missing ones from templates if needed
```
#### **Skill Health Issues**
**Problem**: Skills missing SKILL.md or required sections.
**Solution**:
```bash
# Run comprehensive skill audit
./scripts/bash/audit-skills.sh
# Check specific skill issues
# Missing files will be listed with specific errors
```
**Required SKILL.md sections**:
- Front matter: `name`, `description`, `version`
- Content: `## Role`, `## Task`
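For a quick one-off check of a single skill before running the full audit, the required pieces listed above can be tested directly. A minimal sketch, assuming front-matter fields sit at the start of a line; the real `audit-skills.sh` performs a richer validation.

```shell
#!/usr/bin/env bash
# Sketch: does a SKILL.md contain the required front-matter fields
# and the Role/Task sections listed above?
has_required_sections() {
  local f="$1"
  [ -f "$f" ] || return 1
  grep -q '^name:' "$f" &&
  grep -q '^description:' "$f" &&
  grep -q '^version:' "$f" &&
  grep -q '^## Role' "$f" &&
  grep -q '^## Task' "$f"
}

has_required_sections .agents/skills/speckit-tester/SKILL.md || echo "incomplete or missing"
```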
#### **Script Permission Issues**
**Problem**: Bash scripts not executable.
**Solution**:
```bash
# Make scripts executable
chmod +x .agents/scripts/bash/*.sh
# Verify with
ls -la .agents/scripts/bash/
```
#### **PowerShell Execution Policy**
**Problem**: PowerShell scripts blocked by execution policy.
**Solution**:
```powershell
# Check current policy
Get-ExecutionPolicy
# Allow scripts for current user
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Or run bypass for single script
PowerShell -ExecutionPolicy Bypass -File .agents/scripts/powershell/audit-skills.ps1
```
### Debug Mode
**Enable verbose output**:
```bash
# Run scripts with debug info
bash -x .agents/scripts/bash/audit-skills.sh
# PowerShell with verbose output
$VerbosePreference = "Continue"
. .agents/scripts/powershell/audit-skills.ps1
```
### Health Check Commands
**Quick health assessment**:
```bash
# 1. Check versions
./scripts/bash/validate-versions.sh
# 2. Audit skills
./scripts/bash/audit-skills.sh
# 3. Sync workflows
./scripts/bash/sync-workflows.sh
# 4. Check directory structure
find .agents -type f -name "*.md" | wc -l
find .windsurf/workflows -name "*.md" | wc -l
```
**PowerShell equivalent**:
```powershell
# 1. Check versions
. .agents/scripts/powershell/validate-versions.ps1
# 2. Audit skills
. .agents/scripts/powershell/audit-skills.ps1
# 3. Count files
(Get-ChildItem -Path .agents -Recurse -Filter "*.md").Count
(Get-ChildItem -Path .windsurf/workflows -Filter "*.md").Count
```
### Getting Help
**If issues persist**:
1. Check LCBP3 project version alignment
2. Verify `.specify/` directory structure (if using templates)
3. Ensure all dependencies are installed (bash, powershell core)
4. Review the specific error messages in script output
5. Check this README for workflow path updates (`.windsurf/workflows`)
---
_Built with logic from [Spec-Kit](https://github.com/github/spec-kit). Powered by Antigravity._
+571
@@ -0,0 +1,571 @@
#!/usr/bin/env node
/**
* advanced-validator.js - Advanced validation capabilities for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Advanced validation class
class AdvancedValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
validations: {},
summary: {
total_validations: 0,
passed_validations: 0,
failed_validations: 0,
warnings: 0,
critical_issues: 0
}
};
this.criticalIssues = [];
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
validateSkillFrontMatter(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'SKILL.md file not found',
path: skillMdPath
});
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (!frontMatterMatch) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: 'No front matter found',
path: skillMdPath
});
return false;
}
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
const requiredFields = ['name', 'description', 'version'];
const missingFields = requiredFields.filter(field => !frontMatter[field]);
if (missingFields.length > 0) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Missing required fields: ${missingFields.join(', ')}`,
missing_fields: missingFields,
front_matter: frontMatter,
path: skillMdPath
});
return false;
}
// Validate version format
const versionPattern = /^\d+\.\d+\.\d+$/;
if (!versionPattern.test(frontMatter.version)) {
this.addValidationResult(`skill_${skillName}_version_format`, 'warn', {
message: 'Version format should be X.Y.Z',
version: frontMatter.version,
path: skillMdPath
});
}
// Validate dependencies if present
if (frontMatter['depends-on']) {
const dependencies = Array.isArray(frontMatter['depends-on'])
? frontMatter['depends-on']
: [frontMatter['depends-on']];
for (const dep of dependencies) {
const depPath = path.join(SKILLS_DIR, dep);
if (!fs.existsSync(depPath)) {
this.addValidationResult(`skill_${skillName}_dependency_${dep}`, 'critical', {
message: `Dependency not found: ${dep}`,
dependency: dep,
path: skillMdPath
});
}
}
}
this.addValidationResult(`skill_${skillName}_frontmatter`, 'pass', {
message: 'Front matter is valid',
front_matter: frontMatter,
path: skillMdPath
});
return true;
} catch (yamlError) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Invalid YAML in front matter: ${yamlError.message}`,
path: skillMdPath
});
return false;
}
} catch (error) {
this.addValidationResult(`skill_${skillName}_frontmatter`, 'fail', {
message: `Error reading SKILL.md: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateSkillContent(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
return false;
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Check for required sections
const requiredSections = ['## Role', '## Task'];
const missingSections = requiredSections.filter(section => !content.includes(section));
if (missingSections.length > 0) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Missing required sections: ${missingSections.join(', ')}`,
missing_sections: missingSections,
path: skillMdPath
});
return false;
}
// Check for forbidden patterns
const forbiddenPatterns = [
{ pattern: /TODO.*FIX/gi, message: 'TODO items should be resolved' },
{ pattern: /FIXME/gi, message: 'FIXME items should be addressed' },
{ pattern: /XXX/gi, message: 'XXX markers should be replaced' }
];
for (const { pattern, message } of forbiddenPatterns) {
if (pattern.test(content)) {
this.addValidationResult(`skill_${skillName}_forbidden_patterns`, 'warn', {
message: `${message} found in content`,
pattern: pattern.toString(),
path: skillMdPath
});
}
}
// Validate content length
const contentLength = content.length;
if (contentLength < 500) {
this.addValidationResult(`skill_${skillName}_content_length`, 'warn', {
message: 'Skill content seems too short',
length: contentLength,
path: skillMdPath
});
}
this.addValidationResult(`skill_${skillName}_content`, 'pass', {
message: 'Skill content is valid',
length: contentLength,
path: skillMdPath
});
return true;
} catch (error) {
this.addValidationResult(`skill_${skillName}_content`, 'fail', {
message: `Error validating content: ${error.message}`,
path: skillMdPath
});
return false;
}
}
validateWorkflowStructure(workflowPath, workflowName) {
if (!fs.existsSync(workflowPath)) {
this.addValidationResult(`workflow_${workflowName}_exists`, 'fail', {
message: 'Workflow file not found',
path: workflowPath
});
return false;
}
try {
const content = fs.readFileSync(workflowPath, 'utf8');
// Check for markdown headers
if (!content.includes('#')) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: 'No markdown headers found',
path: workflowPath
});
return false;
}
// Check for workflow-specific patterns
const hasWorkflowContent = content.length > 200;
if (!hasWorkflowContent) {
this.addValidationResult(`workflow_${workflowName}_content`, 'warn', {
message: 'Workflow content seems too short',
length: content.length,
path: workflowPath
});
}
// Validate skill references
const skillReferences = content.match(/@speckit-\w+/g) || [];
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
const skillPath = path.join(SKILLS_DIR, skillName);
if (!fs.existsSync(skillPath)) {
this.addValidationResult(`workflow_${workflowName}_skill_ref_${skillName}`, 'critical', {
message: `Workflow references non-existent skill: ${skillRef}`,
skill_reference: skillRef,
path: workflowPath
});
}
}
this.addValidationResult(`workflow_${workflowName}_structure`, 'pass', {
message: 'Workflow structure is valid',
skill_references: skillReferences,
path: workflowPath
});
return true;
} catch (error) {
this.addValidationResult(`workflow_${workflowName}_structure`, 'fail', {
message: `Error validating workflow: ${error.message}`,
path: workflowPath
});
return false;
}
}
validateCrossReferences() {
this.log('Validating cross-references...', 'info');
// Check README.md references
const readmePath = path.join(AGENTS_DIR, 'README.md');
if (fs.existsSync(readmePath)) {
const readmeContent = fs.readFileSync(readmePath, 'utf8');
// Check if README references correct workflow path
if (readmeContent.includes('.agents/workflows') && !readmeContent.includes('.windsurf/workflows')) {
this.addValidationResult('readme_workflow_reference', 'critical', {
message: 'README.md references .agents/workflows instead of .windsurf/workflows',
path: readmePath
});
}
// Check version consistency in README
const versionMatches = readmeContent.match(/v?(\d+\.\d+\.\d+)/g) || [];
const uniqueVersions = [...new Set(versionMatches)];
if (uniqueVersions.length > 1) {
this.addValidationResult('readme_version_consistency', 'warn', {
message: 'Multiple versions found in README.md',
versions: uniqueVersions,
path: readmePath
});
}
}
// Check skills.md references
const skillsMdPath = path.join(SKILLS_DIR, 'skills.md');
if (fs.existsSync(skillsMdPath)) {
const skillsContent = fs.readFileSync(skillsMdPath, 'utf8');
// Validate skill dependency matrix
if (skillsContent.includes('## Skill Dependency Matrix')) {
this.addValidationResult('skills_dependency_matrix', 'pass', {
message: 'Skills documentation includes dependency matrix',
path: skillsMdPath
});
} else {
this.addValidationResult('skills_dependency_matrix', 'warn', {
message: 'Skills documentation missing dependency matrix',
path: skillsMdPath
});
}
}
}
validateSecurityCompliance() {
this.log('Validating security compliance...', 'info');
// Check for security patterns in rules
const securityRulePath = path.join(AGENTS_DIR, 'rules', '02-security.md');
if (fs.existsSync(securityRulePath)) {
const securityContent = fs.readFileSync(securityRulePath, 'utf8');
const requiredSecurityTopics = [
'authentication',
'authorization',
'rbac',
'validation',
'audit'
];
const missingTopics = requiredSecurityTopics.filter(topic =>
!securityContent.toLowerCase().includes(topic.toLowerCase())
);
if (missingTopics.length > 0) {
this.addValidationResult('security_rules_completeness', 'warn', {
message: `Security rules missing topics: ${missingTopics.join(', ')}`,
missing_topics: missingTopics,
path: securityRulePath
});
} else {
this.addValidationResult('security_rules_completeness', 'pass', {
message: 'Security rules cover all required topics',
path: securityRulePath
});
}
}
// Check for ADR-019 compliance in rules
const uuidRulePath = path.join(AGENTS_DIR, 'rules', '01-adr-019-uuid.md');
if (fs.existsSync(uuidRulePath)) {
const uuidContent = fs.readFileSync(uuidRulePath, 'utf8');
const criticalUuidRules = [
'parseInt',
'Number(',
'publicId',
'@Exclude()'
];
const missingRules = criticalUuidRules.filter(rule =>
!uuidContent.includes(rule)
);
if (missingRules.length > 0) {
this.addValidationResult('uuid_rules_completeness', 'critical', {
message: `UUID rules missing critical patterns: ${missingRules.join(', ')}`,
missing_patterns: missingRules,
path: uuidRulePath
});
} else {
this.addValidationResult('uuid_rules_completeness', 'pass', {
message: 'UUID rules cover all critical patterns',
path: uuidRulePath
});
}
}
}
validatePerformanceMetrics() {
this.log('Validating performance metrics...', 'info');
// Check file sizes
const criticalFiles = [
{ path: path.join(AGENTS_DIR, 'README.md'), name: 'README.md' },
{ path: path.join(SKILLS_DIR, 'skills.md'), name: 'skills.md' },
{ path: path.join(AGENTS_DIR, 'skills', 'VERSION'), name: 'VERSION' }
];
for (const file of criticalFiles) {
if (fs.existsSync(file.path)) {
const stats = fs.statSync(file.path);
const sizeKB = stats.size / 1024;
if (sizeKB > 100) {
this.addValidationResult(`file_size_${file.name}`, 'warn', {
message: `File ${file.name} is large (${sizeKB.toFixed(1)}KB)`,
size_kb: sizeKB,
path: file.path
});
} else {
this.addValidationResult(`file_size_${file.name}`, 'pass', {
message: `File ${file.name} size is acceptable`,
size_kb: sizeKB,
path: file.path
});
}
}
}
// Check directory structure depth
function getDirectoryDepth(dirPath, currentDepth = 0) {
let maxDepth = currentDepth;
if (fs.existsSync(dirPath)) {
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
if (fs.statSync(itemPath).isDirectory()) {
const depth = getDirectoryDepth(itemPath, currentDepth + 1);
maxDepth = Math.max(maxDepth, depth);
}
}
}
return maxDepth;
}
const agentsDepth = getDirectoryDepth(AGENTS_DIR);
if (agentsDepth > 5) {
this.addValidationResult('directory_depth', 'warn', {
message: `.agents directory structure is deep (${agentsDepth} levels)`,
depth: agentsDepth,
path: AGENTS_DIR
});
} else {
this.addValidationResult('directory_depth', 'pass', {
message: `.agents directory structure depth is acceptable`,
depth: agentsDepth,
path: AGENTS_DIR
});
}
}
addValidationResult(name, status, details) {
this.validationResults.validations[name] = {
status,
timestamp: new Date().toISOString(),
...details
};
this.validationResults.summary.total_validations++;
switch (status) {
case 'pass':
this.validationResults.summary.passed_validations++;
this.log(`${name}: PASS - ${details.message}`, 'pass');
break;
case 'fail':
this.validationResults.summary.failed_validations++;
this.log(`${name}: FAIL - ${details.message}`, 'fail');
break;
case 'warn':
this.validationResults.summary.warnings++;
this.log(`${name}: WARN - ${details.message}`, 'warn');
break;
case 'critical':
this.validationResults.summary.critical_issues++;
this.criticalIssues.push({ name, ...details });
this.log(`${name}: CRITICAL - ${details.message}`, 'critical');
break;
}
}
async runAdvancedValidation() {
this.log('Starting advanced validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Validate all skills
this.log('Validating skills...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.validateSkillFrontMatter(skillPath, skillDir);
this.validateSkillContent(skillPath, skillDir);
}
}
// Validate all workflows
this.log('Validating workflows...', 'info');
if (fs.existsSync(WORKFLOWS_DIR)) {
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.validateWorkflowStructure(workflowPath, workflowName);
}
}
// Cross-reference validation
this.validateCrossReferences();
// Security compliance validation
this.validateSecurityCompliance();
// Performance metrics validation
this.validatePerformanceMetrics();
// Generate summary
this.generateSummary();
return this.validationResults;
}
generateSummary() {
// Critical issues are tracked on the instance, not inside validationResults.
const { summary } = this.validationResults;
const critical_issues = this.criticalIssues;
this.log('=== Advanced Validation Summary ===', 'info');
this.log(`Total validations: ${summary.total_validations}`, 'info');
this.log(`Passed: ${summary.passed_validations}`, 'pass');
this.log(`Failed: ${summary.failed_validations}`, summary.failed_validations > 0 ? 'fail' : 'info');
this.log(`Warnings: ${summary.warnings}`, 'warn');
this.log(`Critical issues: ${summary.critical_issues}`, 'critical');
if (critical_issues.length > 0) {
this.log('Critical Issues:', 'critical');
critical_issues.forEach(issue => {
this.log(` - ${issue.name}: ${issue.message}`, 'critical');
});
}
// Save validation results
const validationReportPath = path.join(AGENTS_DIR, 'reports', 'advanced-validation.json');
const reportsDir = path.dirname(validationReportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(validationReportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Advanced validation report saved to: ${validationReportPath}`, 'info');
}
}
// CLI interface
async function main() {
const validator = new AdvancedValidator();
try {
const results = await validator.runAdvancedValidation();
process.exit(results.summary.critical_issues > 0 ? 1 : 0);
} catch (error) {
console.error('Advanced validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { AdvancedValidator };
// Run if called directly
if (require.main === module) {
main();
}
+195
@@ -0,0 +1,195 @@
#!/bin/bash
# audit-skills.sh - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
set -uo pipefail
# Note: no -e — we let per-skill checks accumulate issues without terminating
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
SKILLS_DIR="$AGENTS_DIR/skills"
echo "=== Skills Health Audit ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if skill has required files
check_skill_health() {
local skill_dir="$1"
local skill_name="$(basename "$skill_dir")"
local issues=0
# Check for SKILL.md
if [[ -f "$skill_dir/SKILL.md" ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/SKILL.md"
else
echo -e "${RED} MISSING${NC}: $skill_name/SKILL.md"
((issues++))
fi
# Check for templates directory (optional)
if [[ -d "$skill_dir/templates" ]]; then
template_count=$(find "$skill_dir/templates" -name "*.md" -type f | wc -l)
if [[ $template_count -gt 0 ]]; then
echo -e "${GREEN} OK${NC}: $skill_name/templates ($template_count files)"
else
echo -e "${YELLOW} EMPTY${NC}: $skill_name/templates (no files)"
fi
fi
# Check SKILL.md content if exists
local skill_file="$skill_dir/SKILL.md"
if [[ -f "$skill_file" ]]; then
# Check for required front matter fields
local required_fields=("name" "description" "version")
for field in "${required_fields[@]}"; do
if grep -q "^$field:" "$skill_file"; then
echo -e " ${GREEN} FIELD${NC}: $field"
else
echo -e " ${RED} MISSING FIELD${NC}: $field"
((issues++)) || true
fi
done
# Check for LCBP3 context reference (speckit-* skills only)
if [[ "$skill_name" == speckit-* ]]; then
if grep -q '_LCBP3-CONTEXT\.md' "$skill_file"; then
echo -e " ${GREEN} CONTEXT${NC}: LCBP3 appendix referenced"
else
echo -e " ${YELLOW} MISSING${NC}: LCBP3 context reference"
((issues++)) || true
fi
fi
fi
return $issues
}
# Function to get skill version from SKILL.md
get_skill_version() {
local skill_file="$1"
if [[ -f "$skill_file" ]]; then
# Match 'version: X.Y.Z' (or quoted) at a LINE START only; ignore nested ` version:` fields.
# Output: bare X.Y.Z with no quotes/whitespace.
local raw
raw=$(grep -E "^version:[[:space:]]*['\"]?[0-9]+\.[0-9]+\.[0-9]+" "$skill_file" | head -1 || true)
if [[ -n "$raw" ]]; then
printf '%s' "$raw" | sed -E "s/^version:[[:space:]]*['\"]?([0-9]+\.[0-9]+\.[0-9]+).*/\1/"
else
echo "unknown"
fi
else
echo "no_file"
fi
}
# Check skills directory
if [[ ! -d "$SKILLS_DIR" ]]; then
echo -e "${RED}ERROR: Skills directory not found${NC}"
exit 1
fi
echo "Scanning skills directory: $SKILLS_DIR"
echo
# Get all skill directories
SKILL_DIRS=()
while IFS= read -r -d '' dir; do
SKILL_DIRS+=("$dir")
done < <(find "$SKILLS_DIR" -maxdepth 1 -type d -not -path "$SKILLS_DIR" -print0 | sort -z)
echo "Found ${#SKILL_DIRS[@]} skill directories"
echo
# Audit each skill
TOTAL_ISSUES=0
SKILL_SUMMARY=()
for skill_dir in "${SKILL_DIRS[@]}"; do
skill_name="$(basename "$skill_dir")"
# Skip non-skill entries (e.g. _LCBP3-CONTEXT.md would not match here; safe)
[[ "$skill_name" == _* ]] && continue
echo "Auditing: $skill_name"
echo "------------------------"
# errexit is off (see the set line in the header), so a nonzero return from
# the health check simply lands in $? without terminating the loop
check_skill_health "$skill_dir"
issues=$?
skill_version=$(get_skill_version "$skill_dir/SKILL.md")
SKILL_SUMMARY+=("$skill_name:$issues:$skill_version")
TOTAL_ISSUES=$((TOTAL_ISSUES + issues))
echo
done
# Summary report
echo "=== Skills Audit Summary ==="
echo
echo "Skill Status:"
echo "-----------"
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ $issues -eq 0 ]]; then
echo -e "${GREEN} HEALTHY${NC}: $name (v$version)"
else
echo -e "${RED} ISSUES${NC}: $name (v$version) - $issues issues"
fi
done
echo
# Check global VERSION file consistency
SKILLS_VERSION_FILE="$SKILLS_DIR/VERSION"
if [[ -f "$SKILLS_VERSION_FILE" ]]; then
global_version=$(grep "^version:" "$SKILLS_VERSION_FILE" | sed 's/version: *//' | tr -d '\r\n ')
echo "Global skills version: v$global_version"
echo
# Check for version mismatches
echo "Version Consistency Check:"
echo "------------------------"
VERSION_MISMATCHES=0
for summary in "${SKILL_SUMMARY[@]}"; do
IFS=':' read -r name issues version <<< "$summary"
if [[ "$version" != "unknown" && "$version" != "no_file" && "$version" != "$global_version" ]]; then
echo -e "${YELLOW} MISMATCH${NC}: $name is v$version, global is v$global_version"
((VERSION_MISMATCHES++)) || true
fi
done
if [[ $VERSION_MISMATCHES -eq 0 ]]; then
echo -e "${GREEN} All skills match global version${NC}"
fi
fi
echo
# Overall health
if [[ $TOTAL_ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All skills healthy ===${NC}"
echo "Total skills: ${#SKILL_DIRS[@]}"
exit 0
else
echo -e "${RED}=== ISSUES FOUND: $TOTAL_ISSUES total issues ===${NC}"
echo
echo "Recommendations:"
echo "1. Fix missing SKILL.md files"
echo "2. Add required front matter fields"
echo "3. Ensure Role and Task sections exist"
echo "4. Align skill versions with global version"
exit 1
fi
@@ -0,0 +1,149 @@
#!/bin/bash
# sync-workflows.sh - Sync workflow references between .agents and .windsurf
# Part of LCBP3-DMS Phase 2 improvements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
WINDSURF_DIR="$BASE_DIR/.windsurf"
WORKFLOWS_DIR="$WINDSURF_DIR/workflows"
echo "=== Workflow Synchronization Check ==="
echo "Base directory: $BASE_DIR"
echo
# Function to check if workflow exists
check_workflow() {
local workflow_name="$1"
local workflow_file="$WORKFLOWS_DIR/$workflow_name"
if [[ -f "$workflow_file" ]]; then
echo -e "${GREEN} EXISTS${NC}: $workflow_name"
return 0
else
echo -e "${RED} MISSING${NC}: $workflow_name"
return 1
fi
}
# Function to list all workflows
list_workflows() {
if [[ -d "$WORKFLOWS_DIR" ]]; then
find "$WORKFLOWS_DIR" -name "*.md" -type f | sort
else
echo "No workflows directory found"
fi
}
# Check directories
echo "Checking directory structure..."
if [[ -d "$AGENTS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .agents directory exists"
else
echo -e "${RED} ERROR${NC}: .agents directory not found"
exit 1
fi
if [[ -d "$WINDSURF_DIR" ]]; then
echo -e "${GREEN} OK${NC}: .windsurf directory exists"
else
echo -e "${RED} ERROR${NC}: .windsurf directory not found"
exit 1
fi
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo -e "${GREEN} OK${NC}: workflows directory exists"
else
echo -e "${RED} ERROR${NC}: workflows directory not found"
exit 1
fi
echo
# Expected workflows based on README documentation
echo "Checking expected workflows..."
EXPECTED_WORKFLOWS=(
"00-speckit.all.md"
"01-speckit.constitution.md"
"02-speckit.specify.md"
"03-speckit.clarify.md"
"04-speckit.plan.md"
"05-speckit.tasks.md"
"06-speckit.analyze.md"
"07-speckit.implement.md"
"08-speckit.checker.md"
"09-speckit.tester.md"
"10-speckit.reviewer.md"
"11-speckit.validate.md"
"speckit.prepare.md"
"schema-change.md"
"create-backend-module.md"
"create-frontend-page.md"
"deploy.md"
"review.md"
"util-speckit.checklist.md"
"util-speckit.diff.md"
"util-speckit.migrate.md"
"util-speckit.quizme.md"
"util-speckit.status.md"
"util-speckit.taskstoissues.md"
)
MISSING_WORKFLOWS=0
for workflow in "${EXPECTED_WORKFLOWS[@]}"; do
if ! check_workflow "$workflow"; then
((MISSING_WORKFLOWS++)) || true
fi
done
echo
# List all actual workflows
echo "All workflows in $WORKFLOWS_DIR:"
echo "--------------------------------"
while IFS= read -r workflow; do
echo " $(basename "$workflow")"
done < <(list_workflows)
echo
# Check for orphaned workflows (unexpected ones)
echo "Checking for unexpected workflows..."
ACTUAL_WORKFLOWS=()
while IFS= read -r workflow; do
ACTUAL_WORKFLOWS+=("$(basename "$workflow")")
done < <(list_workflows)
for actual_workflow in "${ACTUAL_WORKFLOWS[@]}"; do
if [[ ! " ${EXPECTED_WORKFLOWS[*]} " =~ " ${actual_workflow} " ]]; then
echo -e "${YELLOW} UNEXPECTED${NC}: $actual_workflow"
fi
done
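The space-padded `=~` membership test used above works for filenames without embedded spaces; sketched in isolation:

```shell
EXPECTED=("deploy.md" "review.md")
item="review.md"
# Pad both sides with spaces so "review.md" cannot match inside "code-review.md"
if [[ " ${EXPECTED[*]} " =~ " ${item} " ]]; then
echo "expected"
else
echo "unexpected"
fi
# → expected
```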
echo
# Summary
if [[ $MISSING_WORKFLOWS -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All expected workflows present ===${NC}"
echo "Total workflows: ${#ACTUAL_WORKFLOWS[@]}"
exit 0
else
echo -e "${RED}=== FAILED: $MISSING_WORKFLOWS workflows missing ===${NC}"
echo
echo "To fix missing workflows:"
echo "1. Create missing workflow files in $WORKFLOWS_DIR"
echo "2. Use existing workflows as templates"
echo "3. Run this script again to verify"
exit 1
fi
@@ -0,0 +1,106 @@
#!/bin/bash
# validate-versions.sh - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# Expected version (should match LCBP3 version)
EXPECTED_VERSION="1.8.9"
echo "=== .agents Version Validation ==="
echo "Base directory: $BASE_DIR"
echo "Expected version: $EXPECTED_VERSION"
echo
# Function to extract version from file
extract_version() {
local file="$1"
local pattern="$2"
if [[ -f "$file" ]]; then
grep -o "$pattern" "$file" | head -1 | sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/' || echo "NOT_FOUND"
else
echo "FILE_NOT_FOUND"
fi
}
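For reference, the BRE pattern plus sed cleanup behaves like this on a sample VERSION line (GNU grep/sed assumed for the `\+` extension; the `/tmp` path is illustrative):

```shell
printf 'version: 1.8.9\n' > /tmp/version-demo
# grep -o prints the whole match; sed then strips everything but the X.Y.Z capture
grep -o "version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)" /tmp/version-demo | head -1 |
sed 's/.*\([0-9]\+\.[0-9]\+\.[0-9]\+\).*/\1/'
# → 1.8.9
```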
# Files to check
declare -A FILES_TO_CHECK=(
["$AGENTS_DIR/skills/VERSION"]="version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)"
["$AGENTS_DIR/skills/skills.md"]="[Vv]\([0-9]\+\.[0-9]\+\.[0-9]\+\)"
)
# Track issues
ISSUES=0
echo "Checking version consistency..."
echo
for file in "${!FILES_TO_CHECK[@]}"; do
pattern="${FILES_TO_CHECK[$file]}"
relative_path="${file#$BASE_DIR/}"
version=$(extract_version "$file" "$pattern")
if [[ "$version" == "NOT_FOUND" ]] || [[ "$version" == "FILE_NOT_FOUND" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Version not found"
((ISSUES++)) || true
elif [[ "$version" != "$EXPECTED_VERSION" ]]; then
echo -e "${RED} ERROR${NC}: $relative_path - Found v$version, expected v$EXPECTED_VERSION"
((ISSUES++)) || true
else
echo -e "${GREEN} OK${NC}: $relative_path - v$version"
fi
done
echo
# Check for version mismatches in skill files
echo "Checking skill file versions..."
SKILL_VERSIONS_FILE="$AGENTS_DIR/skills/VERSION"
if [[ -f "$SKILL_VERSIONS_FILE" ]]; then
skills_version=$(extract_version "$SKILL_VERSIONS_FILE" "version: \([0-9]\+\.[0-9]\+\.[0-9]\+\)")
echo "Skills version file: v$skills_version"
fi
# Check workflow versions (in .windsurf/workflows)
WORKFLOWS_DIR="$BASE_DIR/.windsurf/workflows"
if [[ -d "$WORKFLOWS_DIR" ]]; then
echo "Checking workflow files..."
workflow_count=0
for workflow in "$WORKFLOWS_DIR"/*.md; do
if [[ -f "$workflow" ]]; then
workflow_count=$((workflow_count + 1))
fi
done
echo -e "${GREEN} OK${NC}: Found $workflow_count workflow files"
else
echo -e "${YELLOW} WARNING${NC}: Workflows directory not found at $WORKFLOWS_DIR"
fi
echo
# Summary
if [[ $ISSUES -eq 0 ]]; then
echo -e "${GREEN}=== SUCCESS: All versions consistent ===${NC}"
exit 0
else
echo -e "${RED}=== FAILED: $ISSUES version issues found ===${NC}"
echo
echo "To fix version issues:"
echo "1. Update files to use v$EXPECTED_VERSION"
echo "2. Ensure LCBP3 project version matches"
echo "3. Run this script again to verify"
exit 1
fi
@@ -0,0 +1,516 @@
# ci-hooks.ps1 - Continuous integration hooks for .agents (PowerShell version)
# Part of LCBP3-DMS Phase 3 enhancements
param(
[Parameter(Mandatory=$false)]
[ValidateSet("pre-commit", "pre-push", "ci-pipeline", "install-hooks", "help")]
[string]$Command = "help"
)
# Configuration
$BaseDir = Split-Path -Parent (Split-Path -Parent $PSScriptRoot)
$AgentsDir = Join-Path $BaseDir ".agents"
$CILogDir = Join-Path $AgentsDir "logs\ci"
$CIReportDir = Join-Path $AgentsDir "reports\ci"
# Ensure directories exist
if (-not (Test-Path $CILogDir)) { New-Item -ItemType Directory -Path $CILogDir -Force | Out-Null }
if (-not (Test-Path $CIReportDir)) { New-Item -ItemType Directory -Path $CIReportDir -Force | Out-Null }
# Console colors for output (Write-Host -ForegroundColor expects ConsoleColor names, not ANSI escapes)
$Colors = @{
Red = "Red"
Green = "Green"
Yellow = "Yellow"
Blue = "Blue"
}
# Logging function
function Write-CILog {
param(
[string]$Level,
[string]$Message
)
$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$logFile = Join-Path $CILogDir "ci-$(Get-Date -Format 'yyyy-MM-dd').log"
"$timestamp [$Level] $Message" | Out-File -FilePath $logFile -Append
# Console output with colors
switch ($Level) {
"INFO" { Write-Host $Message -ForegroundColor $Colors.Blue }
"PASS" { Write-Host $Message -ForegroundColor $Colors.Green }
"WARN" { Write-Host $Message -ForegroundColor $Colors.Yellow }
"FAIL" { Write-Host $Message -ForegroundColor $Colors.Red }
default { Write-Host $Message }
}
}
# Pre-commit hook
function Invoke-PreCommitHook {
Write-CILog "INFO" "Running pre-commit validation..."
$exitCode = 0
# 1. Run version validation
Write-CILog "INFO" "Checking version consistency..."
$versionScript = Join-Path $AgentsDir "scripts\powershell\validate-versions.ps1"
if (Test-Path $versionScript) {
# Invoked .ps1 scripts do not throw on non-zero exit; check $LASTEXITCODE (assumes the script calls exit)
& $versionScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-versions.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Version validation passed"
} else {
Write-CILog "FAIL" "Version validation failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Version validation script not found"
}
# 2. Run skill audit
Write-CILog "INFO" "Auditing skills..."
$auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
if (Test-Path $auditScript) {
& $auditScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-skills.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Skill audit passed"
} else {
Write-CILog "FAIL" "Skill audit failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Skill audit script not found"
}
# 3. Run integration tests (if Node.js available)
if (Get-Command node -ErrorAction SilentlyContinue) {
Write-CILog "INFO" "Running integration tests..."
$testScript = Join-Path $AgentsDir "tests\skill-integration.test.js"
if (Test-Path $testScript) {
# Native commands like node do not throw on failure; check $LASTEXITCODE
node $testScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-commit-tests.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Integration tests passed"
} else {
Write-CILog "WARN" "Integration tests failed (non-blocking)"
}
} else {
Write-CILog "WARN" "Integration test script not found"
}
} else {
Write-CILog "WARN" "Node.js not available, skipping integration tests"
}
# 4. Check for forbidden patterns
Write-CILog "INFO" "Checking for forbidden patterns..."
$forbiddenPatterns = @("TODO", "FIXME", "XXX", "HACK")
$foundForbidden = $false
foreach ($pattern in $forbiddenPatterns) {
$skillsDir = Join-Path $AgentsDir "skills"
if (Test-Path $skillsDir) {
# Select-String has no -Recurse switch; recurse via Get-ChildItem instead.
# Avoid assigning to $matches, which is an automatic variable populated by -match.
$hits = Get-ChildItem -Path $skillsDir -Recurse -Filter *.md |
Select-String -Pattern $pattern
if ($hits) {
Write-CILog "WARN" "Found forbidden pattern: $pattern"
$foundForbidden = $true
}
}
}
if (-not $foundForbidden) {
Write-CILog "PASS" "No forbidden patterns found"
}
# Generate pre-commit report
$reportFile = Join-Path $CIReportDir "pre-commit-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-commit"
exit_code = $exitCode
checks_performed = @(
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
)
log_files = @(
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-commit report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-commit validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-commit validation failed"
}
return $exitCode
}
# Pre-push hook
function Invoke-PrePushHook {
Write-CILog "INFO" "Running pre-push validation..."
$exitCode = 0
# 1. Full health check
Write-CILog "INFO" "Running full health check..."
if (Get-Command node -ErrorAction SilentlyContinue) {
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
if (Test-Path $healthScript) {
node $healthScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-health.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Health check passed"
} else {
Write-CILog "FAIL" "Health check failed"
$exitCode = 1
}
} else {
Write-CILog "WARN" "Health monitor script not found"
}
} else {
Write-CILog "WARN" "Node.js not available, using basic health check"
$auditScript = Join-Path $AgentsDir "scripts\powershell\audit-skills.ps1"
if (Test-Path $auditScript) {
& $auditScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-basic.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Basic health check passed"
} else {
Write-CILog "FAIL" "Basic health check failed"
$exitCode = 1
}
}
}
# 2. Advanced validation (if available)
if (Get-Command node -ErrorAction SilentlyContinue) {
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
Write-CILog "INFO" "Running advanced validation..."
node $advancedScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-advanced.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Advanced validation passed"
} else {
Write-CILog "WARN" "Advanced validation found issues (non-blocking)"
}
}
}
# 3. Dependency validation
if (Get-Command node -ErrorAction SilentlyContinue) {
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
Write-CILog "INFO" "Running dependency validation..."
node $dependencyScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-dependencies.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Dependency validation passed"
} else {
Write-CILog "WARN" "Dependency validation found issues (non-blocking)"
}
}
}
# 4. Performance monitoring
if (Get-Command node -ErrorAction SilentlyContinue) {
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
Write-CILog "INFO" "Running performance monitoring..."
node $performanceScript 2>&1 | Out-File -FilePath (Join-Path $CILogDir "pre-push-performance.log") -Append
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Performance monitoring passed"
} else {
Write-CILog "WARN" "Performance monitoring found issues (non-blocking)"
}
}
}
# Generate pre-push report
$reportFile = Join-Path $CIReportDir "pre-push-$(Get-Date -Format 'yyyyMMdd-HHmmss').json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
hook_type = "pre-push"
exit_code = $exitCode
checks_performed = @(
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
log_files = @(
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
)
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "Pre-push report saved to: $reportFile"
if ($exitCode -eq 0) {
Write-CILog "PASS" "Pre-push validation completed successfully"
} else {
Write-CILog "FAIL" "Pre-push validation failed"
}
return $exitCode
}
# CI pipeline hook
function Invoke-CIPipelineHook {
Write-CILog "INFO" "Running CI pipeline validation..."
$exitCode = 0
$pipelineStart = Get-Date
# Create pipeline workspace
$workspace = Join-Path $CIReportDir "pipeline-$(Get-Date -Format 'yyyyMMdd-HHmmss')"
New-Item -ItemType Directory -Path $workspace -Force | Out-Null
# 1. Environment validation
Write-CILog "INFO" "Validating CI environment..."
# Check required tools
$requiredTools = @("node", "npm")
foreach ($tool in $requiredTools) {
if (Get-Command $tool -ErrorAction SilentlyContinue) {
Write-CILog "PASS" "Tool available: $tool"
} else {
Write-CILog "FAIL" "Tool missing: $tool"
$exitCode = 1
}
}
# Check Node.js modules
$packageJson = Join-Path $AgentsDir "package.json"
if (Test-Path $packageJson) {
Push-Location $AgentsDir
# npm does not throw on failure; check $LASTEXITCODE after each call
npm list --depth=0 2>&1 | Out-Null
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Node.js dependencies installed"
} else {
Write-CILog "WARN" "Installing Node.js dependencies..."
npm install 2>&1 | Out-File -FilePath (Join-Path $workspace "npm-install.log")
if ($LASTEXITCODE -ne 0) {
Write-CILog "FAIL" "Failed to install Node.js dependencies"
$exitCode = 1
}
}
Pop-Location
}
# 2. Full test suite
Write-CILog "INFO" "Running full test suite..."
# Integration tests
$integrationTest = Join-Path $AgentsDir "tests\skill-integration.test.js"
if (Test-Path $integrationTest) {
node $integrationTest 2>&1 | Out-File -FilePath (Join-Path $workspace "integration-tests.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Integration tests passed"
} else {
Write-CILog "FAIL" "Integration tests failed"
$exitCode = 1
}
}
# Workflow validation tests
$workflowTest = Join-Path $AgentsDir "tests\workflow-validation.test.js"
if (Test-Path $workflowTest) {
node $workflowTest 2>&1 | Out-File -FilePath (Join-Path $workspace "workflow-tests.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Workflow validation tests passed"
} else {
Write-CILog "FAIL" "Workflow validation tests failed"
$exitCode = 1
}
}
# 3. Comprehensive validation
Write-CILog "INFO" "Running comprehensive validation..."
# Health monitoring
$healthScript = Join-Path $AgentsDir "scripts\health-monitor.js"
if (Test-Path $healthScript) {
node $healthScript 2>&1 | Out-File -FilePath (Join-Path $workspace "health-check.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Health monitoring passed"
} else {
Write-CILog "FAIL" "Health monitoring failed"
$exitCode = 1
}
}
# Advanced validation
$advancedScript = Join-Path $AgentsDir "scripts\advanced-validator.js"
if (Test-Path $advancedScript) {
node $advancedScript 2>&1 | Out-File -FilePath (Join-Path $workspace "advanced-validation.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Advanced validation passed"
} else {
Write-CILog "WARN" "Advanced validation found issues"
}
}
# Dependency validation
$dependencyScript = Join-Path $AgentsDir "scripts\dependency-validator.js"
if (Test-Path $dependencyScript) {
node $dependencyScript 2>&1 | Out-File -FilePath (Join-Path $workspace "dependency-validation.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Dependency validation passed"
} else {
Write-CILog "WARN" "Dependency validation found issues"
}
}
# Performance monitoring
$performanceScript = Join-Path $AgentsDir "scripts\performance-monitor.js"
if (Test-Path $performanceScript) {
node $performanceScript 2>&1 | Out-File -FilePath (Join-Path $workspace "performance-monitor.log")
if ($LASTEXITCODE -eq 0) {
Write-CILog "PASS" "Performance monitoring passed"
} else {
Write-CILog "WARN" "Performance monitoring found issues"
}
}
# 4. Generate artifacts
Write-CILog "INFO" "Generating CI artifacts..."
$pipelineEnd = Get-Date
$duration = ($pipelineEnd - $pipelineStart).TotalSeconds
# Consolidated report
$reportFile = Join-Path $workspace "ci-pipeline-report.json"
$report = @{
timestamp = (Get-Date -Format "yyyy-MM-ddTHH:mm:sszzz")
pipeline_type = "full_ci"
duration_seconds = [int]$duration
exit_code = $exitCode
environment = @{
node_version = if (Get-Command node -ErrorAction SilentlyContinue) { node --version } else { "not available" }
platform = $env:OS
working_directory = $BaseDir
}
checks_performed = @(
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
)
artifacts = @(
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
)
workspace = $workspace
}
$report | ConvertTo-Json -Depth 10 | Out-File -FilePath $reportFile
Write-CILog "INFO" "CI pipeline report saved to: $reportFile"
Write-CILog "INFO" "CI artifacts saved to: $workspace"
Write-CILog "INFO" "Pipeline duration: $([int]$duration)s"
if ($exitCode -eq 0) {
Write-CILog "PASS" "CI pipeline completed successfully"
} else {
Write-CILog "FAIL" "CI pipeline failed"
}
return $exitCode
}
# Install Git hooks
function Install-GitHooks {
Write-CILog "INFO" "Installing Git hooks..."
$hooksDir = Join-Path $BaseDir ".git\hooks"
$agentsHooksDir = Join-Path $AgentsDir "scripts\git-hooks"
# Create git-hooks directory
if (-not (Test-Path $agentsHooksDir)) {
New-Item -ItemType Directory -Path $agentsHooksDir -Force | Out-Null
}
# Create pre-commit hook
$preCommitContent = @'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
'@
$preCommitContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-commit") -Encoding UTF8
# Create pre-push hook
$prePushContent = @'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
'@
$prePushContent | Out-File -FilePath (Join-Path $agentsHooksDir "pre-push") -Encoding UTF8
# Install hooks if .git directory exists
if (Test-Path $hooksDir) {
Copy-Item (Join-Path $agentsHooksDir "pre-commit") $hooksDir -Force
Copy-Item (Join-Path $agentsHooksDir "pre-push") $hooksDir -Force
Write-CILog "PASS" "Git hooks installed successfully"
} else {
Write-CILog "WARN" "Git repository not found, hooks copied to .agents\scripts\git-hooks"
}
}
# Main execution
switch ($Command) {
"pre-commit" {
exit (Invoke-PreCommitHook)
}
"pre-push" {
exit (Invoke-PrePushHook)
}
"ci-pipeline" {
exit (Invoke-CIPipelineHook)
}
"install-hooks" {
Install-GitHooks
}
"help" {
Write-Host "Usage: .\ci-hooks.ps1 -Command {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
Write-Host ""
Write-Host "Commands:"
Write-Host " pre-commit - Run pre-commit validation"
Write-Host " pre-push - Run pre-push validation"
Write-Host " ci-pipeline - Run full CI pipeline"
Write-Host " install-hooks - Install Git hooks"
Write-Host " help - Show this help"
}
default {
Write-Host "Unknown command: $Command"
Write-Host "Use 'help' to see available commands"
exit 1
}
}
@@ -0,0 +1,445 @@
#!/bin/bash
# ci-hooks.sh - Continuous integration hooks for .agents
# Part of LCBP3-DMS Phase 3 enhancements
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Base directory
BASE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
AGENTS_DIR="$BASE_DIR/.agents"
# CI configuration
CI_LOG_DIR="$AGENTS_DIR/logs/ci"
CI_REPORT_DIR="$AGENTS_DIR/reports/ci"
# Ensure directories exist
mkdir -p "$CI_LOG_DIR" "$CI_REPORT_DIR"
# Logging function: plain line to the daily log file, colored line to the console
# (tee would print each message twice on the console: once plain, once colored)
ci_log() {
local level="$1"
local message="$2"
local timestamp
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
local log_file="$CI_LOG_DIR/ci-$(date '+%Y-%m-%d').log"
echo "[$timestamp] [$level] $message" >> "$log_file"
case "$level" in
"INFO") echo -e "${BLUE}$message${NC}" ;;
"PASS") echo -e "${GREEN}$message${NC}" ;;
"WARN") echo -e "${YELLOW}$message${NC}" ;;
"FAIL") echo -e "${RED}$message${NC}" ;;
*) echo "$message" ;;
esac
}
# Pre-commit hook
pre_commit_hook() {
ci_log "INFO" "Running pre-commit validation..."
local exit_code=0
# 1. Run version validation
ci_log "INFO" "Checking version consistency..."
if "$AGENTS_DIR/scripts/bash/validate-versions.sh" >> "$CI_LOG_DIR/pre-commit-versions.log" 2>&1; then
ci_log "PASS" "Version validation passed"
else
ci_log "FAIL" "Version validation failed"
exit_code=1
fi
# 2. Run skill audit
ci_log "INFO" "Auditing skills..."
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-commit-skills.log" 2>&1; then
ci_log "PASS" "Skill audit passed"
else
ci_log "FAIL" "Skill audit failed"
exit_code=1
fi
# 3. Run integration tests (if Node.js available)
if command -v node >/dev/null 2>&1; then
ci_log "INFO" "Running integration tests..."
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$CI_LOG_DIR/pre-commit-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "WARN" "Integration tests failed (non-blocking)"
fi
else
ci_log "WARN" "Node.js not available, skipping integration tests"
fi
# 4. Check for forbidden patterns
ci_log "INFO" "Checking for forbidden patterns..."
local forbidden_patterns=("TODO" "FIXME" "XXX" "HACK")
local found_forbidden=false
for pattern in "${forbidden_patterns[@]}"; do
if grep -r "$pattern" "$AGENTS_DIR/skills" --include="*.md" >/dev/null 2>&1; then
ci_log "WARN" "Found forbidden pattern: $pattern"
found_forbidden=true
fi
done
if [ "$found_forbidden" = false ]; then
ci_log "PASS" "No forbidden patterns found"
fi
# Generate pre-commit report
local report_file="$CI_REPORT_DIR/pre-commit-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-commit",
"exit_code": $exit_code,
"checks_performed": [
"version_validation",
"skill_audit",
"integration_tests",
"forbidden_patterns"
],
"log_files": [
"pre-commit-versions.log",
"pre-commit-skills.log",
"pre-commit-tests.log"
]
}
EOF
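Interpolating shell variables straight into a JSON heredoc, as above, breaks if a value ever contains quotes or backslashes. Where `jq` is available, building the report with `--arg`/`--argjson` is safer; a sketch of the same report shape, not what the script currently does:

```shell
exit_code=0
# jq escapes string values for us; --argjson keeps the exit code numeric
jq -n \
--arg ts "$(date -Iseconds)" \
--argjson rc "$exit_code" \
'{timestamp: $ts, hook_type: "pre-commit", exit_code: $rc}'
```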
ci_log "INFO" "Pre-commit report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-commit validation completed successfully"
else
ci_log "FAIL" "Pre-commit validation failed"
fi
return $exit_code
}
# Pre-push hook
pre_push_hook() {
ci_log "INFO" "Running pre-push validation..."
local exit_code=0
# 1. Full health check
ci_log "INFO" "Running full health check..."
if command -v node >/dev/null 2>&1; then
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$CI_LOG_DIR/pre-push-health.log" 2>&1; then
ci_log "PASS" "Health check passed"
else
ci_log "FAIL" "Health check failed"
exit_code=1
fi
else
ci_log "WARN" "Node.js not available, using basic health check"
if "$AGENTS_DIR/scripts/bash/audit-skills.sh" >> "$CI_LOG_DIR/pre-push-basic.log" 2>&1; then
ci_log "PASS" "Basic health check passed"
else
ci_log "FAIL" "Basic health check failed"
exit_code=1
fi
fi
# 2. Advanced validation (if available)
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/advanced-validator.js" ]; then
ci_log "INFO" "Running advanced validation..."
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$CI_LOG_DIR/pre-push-advanced.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues (non-blocking)"
fi
fi
# 3. Dependency validation
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/dependency-validator.js" ]; then
ci_log "INFO" "Running dependency validation..."
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$CI_LOG_DIR/pre-push-dependencies.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues (non-blocking)"
fi
fi
# 4. Performance monitoring
if command -v node >/dev/null 2>&1 && [ -f "$AGENTS_DIR/scripts/performance-monitor.js" ]; then
ci_log "INFO" "Running performance monitoring..."
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$CI_LOG_DIR/pre-push-performance.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues (non-blocking)"
fi
fi
# Generate pre-push report
local report_file="$CI_REPORT_DIR/pre-push-$(date '+%Y%m%d-%H%M%S').json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"hook_type": "pre-push",
"exit_code": $exit_code,
"checks_performed": [
"health_check",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"log_files": [
"pre-push-health.log",
"pre-push-advanced.log",
"pre-push-dependencies.log",
"pre-push-performance.log"
]
}
EOF
ci_log "INFO" "Pre-push report saved to: $report_file"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "Pre-push validation completed successfully"
else
ci_log "FAIL" "Pre-push validation failed"
fi
return $exit_code
}
# CI pipeline hook
ci_pipeline_hook() {
ci_log "INFO" "Running CI pipeline validation..."
local exit_code=0
local pipeline_start=$(date +%s)
# Create pipeline workspace
local workspace="$CI_REPORT_DIR/pipeline-$(date '+%Y%m%d-%H%M%S')"
mkdir -p "$workspace"
# 1. Environment validation
ci_log "INFO" "Validating CI environment..."
# Check required tools
local required_tools=("node" "npm")
for tool in "${required_tools[@]}"; do
if command -v "$tool" >/dev/null 2>&1; then
ci_log "PASS" "Tool available: $tool"
else
ci_log "FAIL" "Tool missing: $tool"
exit_code=1
fi
done
# Check Node.js modules
if [ -f "$AGENTS_DIR/package.json" ]; then
cd "$AGENTS_DIR"
if npm list --depth=0 >/dev/null 2>&1; then
ci_log "PASS" "Node.js dependencies installed"
else
ci_log "WARN" "Installing Node.js dependencies..."
npm install >> "$workspace/npm-install.log" 2>&1 || {
ci_log "FAIL" "Failed to install Node.js dependencies"
exit_code=1
}
fi
cd "$BASE_DIR"
fi
# 2. Full test suite
ci_log "INFO" "Running full test suite..."
# Integration tests
if node "$AGENTS_DIR/tests/skill-integration.test.js" >> "$workspace/integration-tests.log" 2>&1; then
ci_log "PASS" "Integration tests passed"
else
ci_log "FAIL" "Integration tests failed"
exit_code=1
fi
# Workflow validation tests
if node "$AGENTS_DIR/tests/workflow-validation.test.js" >> "$workspace/workflow-tests.log" 2>&1; then
ci_log "PASS" "Workflow validation tests passed"
else
ci_log "FAIL" "Workflow validation tests failed"
exit_code=1
fi
# 3. Comprehensive validation
ci_log "INFO" "Running comprehensive validation..."
# Health monitoring
if node "$AGENTS_DIR/scripts/health-monitor.js" >> "$workspace/health-check.log" 2>&1; then
ci_log "PASS" "Health monitoring passed"
else
ci_log "FAIL" "Health monitoring failed"
exit_code=1
fi
# Advanced validation
if node "$AGENTS_DIR/scripts/advanced-validator.js" >> "$workspace/advanced-validation.log" 2>&1; then
ci_log "PASS" "Advanced validation passed"
else
ci_log "WARN" "Advanced validation found issues"
fi
# Dependency validation
if node "$AGENTS_DIR/scripts/dependency-validator.js" >> "$workspace/dependency-validation.log" 2>&1; then
ci_log "PASS" "Dependency validation passed"
else
ci_log "WARN" "Dependency validation found issues"
fi
# Performance monitoring
if node "$AGENTS_DIR/scripts/performance-monitor.js" >> "$workspace/performance-monitor.log" 2>&1; then
ci_log "PASS" "Performance monitoring passed"
else
ci_log "WARN" "Performance monitoring found issues"
fi
# 4. Generate artifacts
ci_log "INFO" "Generating CI artifacts..."
local pipeline_end=$(date +%s)
local duration=$((pipeline_end - pipeline_start))
# Consolidated report
local report_file="$workspace/ci-pipeline-report.json"
cat > "$report_file" << EOF
{
"timestamp": "$(date -Iseconds)",
"pipeline_type": "full_ci",
"duration_seconds": $duration,
"exit_code": $exit_code,
"environment": {
"node_version": "$(node --version)",
"platform": "$(uname -s)",
"working_directory": "$BASE_DIR"
},
"checks_performed": [
"environment_validation",
"integration_tests",
"workflow_validation_tests",
"health_monitoring",
"advanced_validation",
"dependency_validation",
"performance_monitoring"
],
"artifacts": [
"integration-tests.log",
"workflow-tests.log",
"health-check.log",
"advanced-validation.log",
"dependency-validation.log",
"performance-monitor.log",
"npm-install.log"
],
"workspace": "$workspace"
}
EOF
ci_log "INFO" "CI pipeline report saved to: $report_file"
ci_log "INFO" "CI artifacts saved to: $workspace"
ci_log "INFO" "Pipeline duration: ${duration}s"
if [ $exit_code -eq 0 ]; then
ci_log "PASS" "CI pipeline completed successfully"
else
ci_log "FAIL" "CI pipeline failed"
fi
return $exit_code
}
# Install Git hooks
install_git_hooks() {
ci_log "INFO" "Installing Git hooks..."
local hooks_dir="$BASE_DIR/.git/hooks"
local agents_hooks_dir="$AGENTS_DIR/scripts/git-hooks"
# Create git-hooks directory
mkdir -p "$agents_hooks_dir"
# Create pre-commit hook
cat > "$agents_hooks_dir/pre-commit" << 'EOF'
#!/bin/bash
# Pre-commit hook for .agents validation
echo "Running .agents pre-commit validation..."
if bash .agents/scripts/ci-hooks.sh pre-commit; then
echo "Pre-commit validation passed"
exit 0
else
echo "Pre-commit validation failed"
exit 1
fi
EOF
# Create pre-push hook
cat > "$agents_hooks_dir/pre-push" << 'EOF'
#!/bin/bash
# Pre-push hook for .agents validation
echo "Running .agents pre-push validation..."
if bash .agents/scripts/ci-hooks.sh pre-push; then
echo "Pre-push validation passed"
exit 0
else
echo "Pre-push validation failed"
exit 1
fi
EOF
# Make hooks executable
chmod +x "$agents_hooks_dir/pre-commit"
chmod +x "$agents_hooks_dir/pre-push"
# Install hooks if .git directory exists
if [ -d "$hooks_dir" ]; then
cp "$agents_hooks_dir/pre-commit" "$hooks_dir/"
cp "$agents_hooks_dir/pre-push" "$hooks_dir/"
ci_log "PASS" "Git hooks installed successfully"
else
ci_log "WARN" "Git repository not found, hooks copied to .agents/scripts/git-hooks"
fi
}
# Main function
main() {
local command="${1:-help}"
case "$command" in
"pre-commit")
pre_commit_hook
;;
"pre-push")
pre_push_hook
;;
"ci-pipeline")
ci_pipeline_hook
;;
"install-hooks")
install_git_hooks
;;
"help"|*)
echo "Usage: $0 {pre-commit|pre-push|ci-pipeline|install-hooks|help}"
echo ""
echo "Commands:"
echo " pre-commit - Run pre-commit validation"
echo " pre-push - Run pre-push validation"
echo " ci-pipeline - Run full CI pipeline"
echo " install-hooks - Install Git hooks"
echo " help - Show this help"
;;
esac
}
# Run main function with all arguments
main "$@"
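The shell entry point above routes a single subcommand to the matching hook function. For illustration only, the same dispatch pattern as a small Node CLI — the command names mirror the script, but the handlers here are invented stubs, not the real hook logic:

```javascript
// Sketch of the ci-hooks.sh subcommand dispatch in Node.
// Handler bodies are placeholder strings, not the real validation logic.
const commands = {
  'pre-commit': () => 'pre-commit validation',
  'pre-push': () => 'pre-push validation',
  'ci-pipeline': () => 'full CI pipeline',
  'install-hooks': () => 'installing Git hooks',
};

function dispatch(args) {
  const command = args[0] || 'help';
  const handler = commands[command];
  // Unknown or missing commands fall through to usage, like the "help"|*) branch above.
  if (!handler) {
    return `Usage: ci-hooks {${Object.keys(commands).join('|')}|help}`;
  }
  return handler();
}
```

Invoked as a script, `dispatch(process.argv.slice(2))` would select the handler the same way `main "$@"` does in the bash version.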
@@ -0,0 +1,457 @@
#!/usr/bin/env node
/**
* dependency-validator.js - Skill dependency validation system
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const WORKFLOWS_DIR = path.join(BASE_DIR, '.windsurf', 'workflows');
// Dependency validation class
class DependencyValidator {
constructor() {
this.validationResults = {
timestamp: new Date().toISOString(),
dependency_graph: {},
circular_dependencies: [],
missing_dependencies: [],
orphaned_skills: [],
dependency_chains: {},
validation_summary: {
total_skills: 0,
skills_with_dependencies: 0,
circular_dependencies_found: 0,
missing_dependencies_found: 0,
orphaned_skills_found: 0,
max_dependency_depth: 0,
validation_status: 'unknown'
}
};
}
log(message, level = 'info') {
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
critical: '\x1b[35m', // Magenta
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}[${level.toUpperCase()}] ${message}${colors.reset}`);
}
extractSkillDependencies(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`No SKILL.md found for ${skillName}`, 'warn');
return { dependencies: [], handoffs: [], error: 'SKILL.md not found' };
}
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
// Extract dependencies from front matter
let dependencies = [];
let handoffs = [];
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (frontMatterMatch) {
try {
const frontMatter = yaml.load(frontMatterMatch[1]);
// Handle depends-on field
if (frontMatter['depends-on']) {
if (Array.isArray(frontMatter['depends-on'])) {
dependencies = frontMatter['depends-on'];
} else {
dependencies = [frontMatter['depends-on']];
}
}
// Handle handoffs field
if (frontMatter.handoffs && Array.isArray(frontMatter.handoffs)) {
handoffs = frontMatter.handoffs.map(h => h.agent);
}
} catch (yamlError) {
this.log(`Invalid YAML in ${skillName} front matter: ${yamlError.message}`, 'warn');
}
}
// Also extract skill references from content ([\w-] so multi-part names like @speckit-plan-review match fully, not just their first segment)
const contentSkillRefs = content.match(/@speckit-[\w-]+/g) || [];
const contentDependencies = contentSkillRefs.map(ref => ref.replace('@', ''));
// Merge dependencies (avoid duplicates)
const allDependencies = [...new Set([...dependencies, ...contentDependencies])];
return {
dependencies: allDependencies,
handoffs: handoffs,
content_references: contentSkillRefs,
front_matter_dependencies: dependencies,
error: null
};
} catch (error) {
this.log(`Error reading ${skillName}: ${error.message}`, 'warn');
return { dependencies: [], handoffs: [], error: error.message };
}
}
buildDependencyGraph() {
this.log('Building dependency graph...', 'info');
if (!fs.existsSync(SKILLS_DIR)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
this.validationResults.validation_summary.total_skills = skillDirs.length;
// Extract dependencies for each skill
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
const dependencyInfo = this.extractSkillDependencies(skillPath, skillDir);
this.validationResults.dependency_graph[skillDir] = dependencyInfo;
if (dependencyInfo.dependencies.length > 0 || dependencyInfo.handoffs.length > 0) {
this.validationResults.validation_summary.skills_with_dependencies++;
}
}
this.log(`Analyzed ${skillDirs.length} skills`, 'info');
this.log(`Skills with dependencies: ${this.validationResults.validation_summary.skills_with_dependencies}`, 'info');
}
validateDependencies() {
this.log('Validating dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const allSkills = Object.keys(dependency_graph);
// Check for missing dependencies
for (const [skillName, dependencyInfo] of Object.entries(dependency_graph)) {
for (const dependency of dependencyInfo.dependencies) {
if (!allSkills.includes(dependency)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: dependency,
dependency_type: 'depends-on'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing dependency: ${skillName} depends on ${dependency}`, 'fail');
}
}
for (const handoff of dependencyInfo.handoffs) {
if (!allSkills.includes(handoff)) {
this.validationResults.missing_dependencies.push({
skill: skillName,
missing_dependency: handoff,
dependency_type: 'handoff'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Missing handoff: ${skillName} hands off to ${handoff}`, 'fail');
}
}
}
// Check for orphaned skills (no one depends on them)
const dependedOnSkills = new Set();
for (const dependencyInfo of Object.values(dependency_graph)) {
dependencyInfo.dependencies.forEach(dep => dependedOnSkills.add(dep));
dependencyInfo.handoffs.forEach(handoff => dependedOnSkills.add(handoff));
}
for (const skill of allSkills) {
if (!dependedOnSkills.has(skill) && skill !== 'speckit-constitution') {
// Constitution is allowed to be orphaned (it's a starting point)
this.validationResults.orphaned_skills.push(skill);
this.validationResults.validation_summary.orphaned_skills_found++;
this.log(`Orphaned skill: ${skill} (no dependencies on it)`, 'warn');
}
}
}
detectCircularDependencies() {
this.log('Detecting circular dependencies...', 'info');
const { dependency_graph } = this.validationResults;
const visited = new Set();
const recursionStack = new Set();
const circularDeps = [];
function dfs(skillName, path = []) {
if (recursionStack.has(skillName)) {
// Found circular dependency
const cycleStart = path.indexOf(skillName);
const cycle = path.slice(cycleStart).concat(skillName);
circularDeps.push(cycle);
return;
}
if (visited.has(skillName)) {
return;
}
visited.add(skillName);
recursionStack.add(skillName);
path.push(skillName);
const dependencyInfo = dependency_graph[skillName];
if (dependencyInfo) {
for (const dependency of dependencyInfo.dependencies) {
dfs(dependency, [...path]);
}
}
recursionStack.delete(skillName);
}
// Run DFS from each skill
for (const skillName of Object.keys(dependency_graph)) {
if (!visited.has(skillName)) {
dfs(skillName);
}
}
this.validationResults.circular_dependencies = circularDeps;
this.validationResults.validation_summary.circular_dependencies_found = circularDeps.length;
if (circularDeps.length > 0) {
this.log(`Found ${circularDeps.length} circular dependencies:`, 'critical');
circularDeps.forEach((cycle, index) => {
this.log(` ${index + 1}. ${cycle.join(' -> ')}`, 'critical');
});
} else {
this.log('No circular dependencies found', 'pass');
}
}
calculateDependencyChains() {
this.log('Calculating dependency chains...', 'info');
const { dependency_graph } = this.validationResults;
const chains = {};
function calculateDepth(skillName, visited = new Set()) {
if (visited.has(skillName)) {
return 0; // Circular dependency protection
}
visited.add(skillName);
const dependencyInfo = dependency_graph[skillName];
if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
return 1;
}
let maxDepth = 0;
for (const dependency of dependencyInfo.dependencies) {
const depth = calculateDepth(dependency, new Set(visited));
maxDepth = Math.max(maxDepth, depth);
}
return maxDepth + 1;
}
function getDependencyChain(skillName, visited = new Set()) {
if (visited.has(skillName)) {
return [skillName]; // Circular dependency protection, mirroring calculateDepth; without it a cycle recurses forever
}
visited.add(skillName);
const dependencyInfo = dependency_graph[skillName];
if (!dependencyInfo || dependencyInfo.dependencies.length === 0) {
return [skillName];
}
const chains = [];
for (const dependency of dependencyInfo.dependencies) {
const depChain = getDependencyChain(dependency, new Set(visited));
chains.push(depChain.concat(skillName));
}
// Return the longest chain
return chains.reduce((longest, current) =>
current.length > longest.length ? current : longest, [skillName]
);
}
for (const skillName of Object.keys(dependency_graph)) {
const depth = calculateDepth(skillName);
const chain = getDependencyChain(skillName);
chains[skillName] = {
depth: depth,
chain: chain,
chain_length: chain.length
};
}
this.validationResults.dependency_chains = chains;
const depths = Object.values(chains).map(c => c.depth);
const maxDepth = depths.length > 0 ? Math.max(...depths) : 0; // Guard: Math.max() with no arguments returns -Infinity
this.validationResults.validation_summary.max_dependency_depth = maxDepth;
this.log(`Maximum dependency depth: ${maxDepth}`, 'info');
}
validateWorkflowDependencies() {
this.log('Validating workflow dependencies...', 'info');
if (!fs.existsSync(WORKFLOWS_DIR)) {
this.log('Workflows directory not found', 'warn');
return;
}
const workflowFiles = fs.readdirSync(WORKFLOWS_DIR).filter(file => file.endsWith('.md'));
const allSkills = Object.keys(this.validationResults.dependency_graph);
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(WORKFLOWS_DIR, workflowFile);
try {
const content = fs.readFileSync(workflowPath, 'utf8');
const skillReferences = content.match(/@speckit-[\w-]+/g) || []; // [\w-] so hyphenated skill names match fully
for (const skillRef of skillReferences) {
const skillName = skillRef.replace('@', '');
if (!allSkills.includes(skillName)) {
this.validationResults.missing_dependencies.push({
workflow: workflowFile,
missing_dependency: skillName,
dependency_type: 'workflow-reference'
});
this.validationResults.validation_summary.missing_dependencies_found++;
this.log(`Workflow ${workflowFile} references missing skill: ${skillRef}`, 'fail');
}
}
} catch (error) {
this.log(`Error reading workflow ${workflowFile}: ${error.message}`, 'warn');
}
}
}
generateDependencyReport() {
this.log('Generating dependency report...', 'info');
// Determine overall validation status
const summary = this.validationResults.validation_summary;
if (summary.circular_dependencies_found > 0) {
summary.validation_status = 'critical';
} else if (summary.missing_dependencies_found > 0) {
summary.validation_status = 'failed';
} else if (summary.orphaned_skills_found > 0) {
summary.validation_status = 'warning';
} else {
summary.validation_status = 'passed';
}
// Save report
const reportPath = path.join(AGENTS_DIR, 'reports', 'dependency-validation.json');
const reportsDir = path.dirname(reportPath);
if (!fs.existsSync(reportsDir)) {
fs.mkdirSync(reportsDir, { recursive: true });
}
fs.writeFileSync(reportPath, JSON.stringify(this.validationResults, null, 2));
this.log(`Dependency validation report saved to: ${reportPath}`, 'info');
}
printSummary() {
const summary = this.validationResults.validation_summary;
this.log('=== Dependency Validation Summary ===', 'info');
this.log(`Total skills: ${summary.total_skills}`, 'info');
this.log(`Skills with dependencies: ${summary.skills_with_dependencies}`, 'info');
this.log(`Circular dependencies: ${summary.circular_dependencies_found}`, summary.circular_dependencies_found > 0 ? 'critical' : 'pass');
this.log(`Missing dependencies: ${summary.missing_dependencies_found}`, summary.missing_dependencies_found > 0 ? 'fail' : 'pass');
this.log(`Orphaned skills: ${summary.orphaned_skills_found}`, summary.orphaned_skills_found > 0 ? 'warn' : 'info');
this.log(`Max dependency depth: ${summary.max_dependency_depth}`, 'info');
this.log(`Validation status: ${summary.validation_status.toUpperCase()}`,
summary.validation_status === 'passed' ? 'pass' :
summary.validation_status === 'warning' ? 'warn' : 'fail');
// Show longest dependency chains
const chains = this.validationResults.dependency_chains;
const sortedChains = Object.entries(chains)
.sort(([,a], [,b]) => b.depth - a.depth)
.slice(0, 3);
if (sortedChains.length > 0) {
this.log('Top 3 longest dependency chains:', 'info');
sortedChains.forEach(([skillName, chainInfo], index) => {
this.log(` ${index + 1}. ${chainInfo.chain.join(' -> ')} (depth: ${chainInfo.depth})`, 'info');
});
}
}
async runDependencyValidation() {
this.log('Starting dependency validation...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Build dependency graph
this.buildDependencyGraph();
// Validate dependencies
this.validateDependencies();
// Detect circular dependencies
this.detectCircularDependencies();
// Calculate dependency chains
this.calculateDependencyChains();
// Validate workflow dependencies
this.validateWorkflowDependencies();
// Generate report
this.generateDependencyReport();
// Print summary
this.printSummary();
return this.validationResults;
}
}
// CLI interface
async function main() {
const validator = new DependencyValidator();
try {
const results = await validator.runDependencyValidation();
const status = results.validation_summary.validation_status;
process.exit(status === 'passed' || status === 'warning' ? 0 : 1);
} catch (error) {
console.error('Dependency validation failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { DependencyValidator };
// Run if called directly
if (require.main === module) {
main();
}
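The cycle detection in dependency-validator.js can be exercised in isolation. Below is a reduced sketch of the same recursion-stack DFS used by `detectCircularDependencies`, run on a toy graph; the graph shape and node names are made up for the example:

```javascript
// Toy dependency graph with one deliberate cycle: a -> b -> c -> a.
const graph = {
  a: ['b'],
  b: ['c'],
  c: ['a'],
  d: [], // depends on nothing; nothing depends on it (an "orphan")
};

// Same recursion-stack DFS as detectCircularDependencies(), reduced to essentials.
function findCycles(graph) {
  const visited = new Set();
  const stack = new Set();
  const cycles = [];
  function dfs(node, path) {
    if (stack.has(node)) {
      // Node already on the current path: slice out the cycle.
      const start = path.indexOf(node);
      cycles.push(path.slice(start).concat(node));
      return;
    }
    if (visited.has(node)) return;
    visited.add(node);
    stack.add(node);
    path.push(node);
    for (const dep of graph[node] || []) dfs(dep, [...path]);
    stack.delete(node);
  }
  for (const node of Object.keys(graph)) {
    if (!visited.has(node)) dfs(node, []);
  }
  return cycles;
}
```

For this toy graph the function reports the single cycle `a -> b -> c -> a`; node `d`, which nothing depends on, is what `validateDependencies` would flag as orphaned.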
@@ -0,0 +1,369 @@
#!/usr/bin/env node
/**
* health-monitor.js - Automated health monitoring system for .agents
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const HEALTH_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'health.log');
const HEALTH_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'health-report.json');
// Ensure directories exist
[ path.dirname(HEALTH_LOG_PATH), path.dirname(HEALTH_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Health monitoring class
class HealthMonitor {
constructor() {
this.startTime = new Date();
this.metrics = {
timestamp: this.startTime.toISOString(),
version: '1.8.6',
checks: {},
summary: {
total_checks: 0,
passed_checks: 0,
failed_checks: 0,
warnings: 0,
overall_health: 'unknown'
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
pass: '\x1b[32m', // Green
fail: '\x1b[31m', // Red
warn: '\x1b[33m', // Yellow
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(HEALTH_LOG_PATH, logEntry);
}
checkDirectoryExists(dirPath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(dirPath);
this.metrics.checks[checkName] = {
type: 'directory_exists',
status: exists ? 'pass' : 'fail',
path: dirPath,
message: exists ? 'Directory exists' : 'Directory missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Directory exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Directory missing: ${dirPath}`, 'fail');
}
return exists;
}
checkFileExists(filePath, checkName) {
this.metrics.summary.total_checks++;
const exists = fs.existsSync(filePath);
this.metrics.checks[checkName] = {
type: 'file_exists',
status: exists ? 'pass' : 'fail',
path: filePath,
message: exists ? 'File exists' : 'File missing'
};
if (exists) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - File exists`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - File missing: ${filePath}`, 'fail');
}
return exists;
}
checkFileVersion(filePath, expectedVersion, checkName) {
this.metrics.summary.total_checks++;
if (!fs.existsSync(filePath)) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: 'File does not exist'
};
this.log(`${checkName}: FAIL - File not found: ${filePath}`, 'fail');
return false;
}
try {
const content = fs.readFileSync(filePath, 'utf8');
const versionMatch = content.match(/v?(\d+\.\d+\.\d+)/);
const actualVersion = versionMatch ? versionMatch[1] : 'not_found';
const versionMatches = actualVersion === expectedVersion;
this.metrics.checks[checkName] = {
type: 'version_check',
status: versionMatches ? 'pass' : 'fail',
path: filePath,
expected_version: expectedVersion,
actual_version: actualVersion,
message: versionMatches ? 'Version matches' : `Version mismatch (expected ${expectedVersion}, found ${actualVersion})`
};
if (versionMatches) {
this.metrics.summary.passed_checks++;
this.log(`${checkName}: PASS - Version ${actualVersion}`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`${checkName}: FAIL - Version mismatch (expected ${expectedVersion}, found ${actualVersion})`, 'fail');
}
return versionMatches;
} catch (error) {
this.metrics.summary.failed_checks++;
this.metrics.checks[checkName] = {
type: 'version_check',
status: 'fail',
path: filePath,
message: `Error reading file: ${error.message}`
};
this.log(`${checkName}: FAIL - Error reading file: ${error.message}`, 'fail');
return false;
}
}
checkSkillHealth() {
this.log('Checking skill health...', 'info');
const skillsDir = path.join(AGENTS_DIR, 'skills');
if (!fs.existsSync(skillsDir)) {
this.log('Skills directory not found', 'fail');
return;
}
const skillDirs = fs.readdirSync(skillsDir).filter(item => {
const itemPath = path.join(skillsDir, item);
return fs.statSync(itemPath).isDirectory();
});
this.metrics.checks['skill_count'] = {
type: 'skill_count',
status: skillDirs.length >= 20 ? 'pass' : 'warn',
count: skillDirs.length,
expected: 20,
message: `Found ${skillDirs.length} skills (expected at least 20)`
};
this.metrics.summary.total_checks++; // Count this check so the pass/warn tallies stay consistent with total_checks
if (skillDirs.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Skill count: PASS - Found ${skillDirs.length} skills`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Skill count: WARN - Only ${skillDirs.length} skills found (expected at least 20)`, 'warn');
}
// Check individual skills
let healthySkills = 0;
skillDirs.forEach(skillDir => {
const skillPath = path.join(skillsDir, skillDir);
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (fs.existsSync(skillMdPath)) {
try {
const content = fs.readFileSync(skillMdPath, 'utf8');
const hasName = content.includes('name:');
const hasDescription = content.includes('description:');
const hasVersion = content.includes('version:');
const hasRole = content.includes('## Role');
const hasTask = content.includes('## Task');
const isHealthy = hasName && hasDescription && hasVersion && hasRole && hasTask;
if (isHealthy) healthySkills++;
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: isHealthy ? 'pass' : 'fail',
skill: skillDir,
has_name: hasName,
has_description: hasDescription,
has_version: hasVersion,
has_role: hasRole,
has_task: hasTask,
message: isHealthy ? 'Skill is healthy' : 'Skill has missing sections'
};
} catch (error) {
this.metrics.checks[`skill_${skillDir}_health`] = {
type: 'skill_health',
status: 'fail',
skill: skillDir,
message: `Error reading skill: ${error.message}`
};
}
}
});
this.metrics.summary.total_checks++;
if (healthySkills === skillDirs.length) {
this.metrics.summary.passed_checks++;
this.log(`Individual skills: PASS - All ${healthySkills} skills are healthy`, 'pass');
} else {
this.metrics.summary.failed_checks++;
this.log(`Individual skills: FAIL - Only ${healthySkills}/${skillDirs.length} skills are healthy`, 'fail');
}
}
checkWorkflowHealth() {
this.log('Checking workflow health...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (!fs.existsSync(workflowsDir)) {
this.log('Workflows directory not found', 'fail');
return;
}
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
this.metrics.checks['workflow_count'] = {
type: 'workflow_count',
status: workflowFiles.length >= 20 ? 'pass' : 'warn',
count: workflowFiles.length,
expected: 20,
message: `Found ${workflowFiles.length} workflows (expected at least 20)`
};
this.metrics.summary.total_checks++; // Count this check so the pass/warn tallies stay consistent with total_checks
if (workflowFiles.length >= 20) {
this.metrics.summary.passed_checks++;
this.log(`Workflow count: PASS - Found ${workflowFiles.length} workflows`, 'pass');
} else {
this.metrics.summary.warnings++;
this.log(`Workflow count: WARN - Only ${workflowFiles.length} workflows found (expected at least 20)`, 'warn');
}
}
calculateOverallHealth() {
const { total_checks, passed_checks, failed_checks, warnings } = this.metrics.summary;
if (failed_checks === 0) {
this.metrics.summary.overall_health = warnings === 0 ? 'excellent' : 'good';
} else if (failed_checks <= total_checks * 0.1) {
this.metrics.summary.overall_health = 'fair';
} else {
this.metrics.summary.overall_health = 'poor';
}
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
}
generateReport() {
const report = {
...this.metrics,
duration: new Date() - this.startTime,
environment: {
node_version: process.version,
platform: process.platform,
agents_dir: AGENTS_DIR
}
};
fs.writeFileSync(HEALTH_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Health report saved to: ${HEALTH_REPORT_PATH}`, 'info');
return report;
}
async runFullHealthCheck() {
this.log('Starting comprehensive health check...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Core directory checks
this.checkDirectoryExists(AGENTS_DIR, 'agents_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'skills'), 'skills_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'scripts'), 'scripts_directory');
this.checkDirectoryExists(path.join(AGENTS_DIR, 'rules'), 'rules_directory');
this.checkDirectoryExists(path.join(BASE_DIR, '.windsurf', 'workflows'), 'workflows_directory');
// Core file checks
this.checkFileExists(path.join(AGENTS_DIR, 'README.md'), 'readme_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'VERSION'), 'skills_version_file');
this.checkFileExists(path.join(AGENTS_DIR, 'skills', 'skills.md'), 'skills_documentation');
// Version consistency checks
this.checkFileVersion(path.join(AGENTS_DIR, 'README.md'), '1.8.6', 'readme_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'VERSION'), '1.8.6', 'skills_version_file_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'skills', 'skills.md'), '1.8.6', 'skills_documentation_version');
this.checkFileVersion(path.join(AGENTS_DIR, 'rules', '00-project-context.md'), '1.8.6', 'project_context_version');
// Script availability checks
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'validate-versions.sh'), 'bash_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'audit-skills.sh'), 'bash_audit_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'bash', 'sync-workflows.sh'), 'bash_sync_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'validate-versions.ps1'), 'powershell_version_script');
this.checkFileExists(path.join(AGENTS_DIR, 'scripts', 'powershell', 'audit-skills.ps1'), 'powershell_audit_script');
// Detailed health checks
this.checkSkillHealth();
this.checkWorkflowHealth();
// Calculate overall health
this.calculateOverallHealth();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Health Check Summary ===', 'info');
this.log(`Total checks: ${this.metrics.summary.total_checks}`, 'info');
this.log(`Passed: ${this.metrics.summary.passed_checks}`, 'pass');
this.log(`Failed: ${this.metrics.summary.failed_checks}`, this.metrics.summary.failed_checks > 0 ? 'fail' : 'info');
this.log(`Warnings: ${this.metrics.summary.warnings}`, 'warn');
this.log(`Overall health: ${this.metrics.summary.overall_health}`, 'info');
this.log(`Duration: ${new Date() - this.startTime}ms`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new HealthMonitor();
try {
const report = await monitor.runFullHealthCheck();
process.exit(report.summary.failed_checks > 0 ? 1 : 0);
} catch (error) {
console.error('Health check failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { HealthMonitor };
// Run if called directly
if (require.main === module) {
main();
}
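`calculateOverallHealth` in health-monitor.js reduces the counters to a four-level grade. A self-contained sketch of the same thresholds, with made-up check counts as inputs:

```javascript
// Same grading as calculateOverallHealth(): zero failures is 'excellent'
// (or 'good' when there are warnings); failures at or below 10% of all
// checks is 'fair'; anything worse is 'poor'.
function overallHealth({ total_checks, failed_checks, warnings }) {
  if (failed_checks === 0) {
    return warnings === 0 ? 'excellent' : 'good';
  }
  if (failed_checks <= total_checks * 0.1) {
    return 'fair';
  }
  return 'poor';
}
```

For example, 2 failures out of 30 checks lands inside the 10% band and grades 'fair', while 4 out of 30 grades 'poor'.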
@@ -0,0 +1,494 @@
#!/usr/bin/env node
/**
* performance-monitor.js - Performance monitoring for .agents skills
* Part of LCBP3-DMS Phase 3 enhancements
*/
const fs = require('fs');
const path = require('path');
const { performance } = require('perf_hooks');
// Configuration
const BASE_DIR = path.resolve(__dirname, '../..');
const AGENTS_DIR = path.join(BASE_DIR, '.agents');
const SKILLS_DIR = path.join(AGENTS_DIR, 'skills');
const PERFORMANCE_LOG_PATH = path.join(AGENTS_DIR, 'logs', 'performance.log');
const PERFORMANCE_REPORT_PATH = path.join(AGENTS_DIR, 'reports', 'performance-report.json');
// Ensure directories exist
[ path.dirname(PERFORMANCE_LOG_PATH), path.dirname(PERFORMANCE_REPORT_PATH) ].forEach(dir => {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
});
// Performance monitoring class
class PerformanceMonitor {
constructor() {
this.startTime = performance.now();
this.metrics = {
timestamp: new Date().toISOString(),
duration: 0,
skill_metrics: {},
workflow_metrics: {},
system_metrics: {},
summary: {
total_skills_analyzed: 0,
total_workflows_analyzed: 0,
average_skill_size: 0,
average_workflow_size: 0,
performance_score: 0,
recommendations: []
}
};
}
log(message, level = 'info') {
const timestamp = new Date().toISOString();
const logEntry = `[${timestamp}] [${level.toUpperCase()}] ${message}\n`;
// Console output with colors
const colors = {
info: '\x1b[36m', // Cyan
good: '\x1b[32m', // Green
warn: '\x1b[33m', // Yellow
poor: '\x1b[31m', // Red
reset: '\x1b[0m'
};
const color = colors[level] || colors.info;
console.log(`${color}${logEntry.trim()}${colors.reset}`);
// File logging
fs.appendFileSync(PERFORMANCE_LOG_PATH, logEntry);
}
analyzeSkillPerformance(skillPath, skillName) {
const skillMdPath = path.join(skillPath, 'SKILL.md');
if (!fs.existsSync(skillMdPath)) {
this.log(`Skipping ${skillName} - SKILL.md not found`, 'warn');
return null;
}
const startTime = performance.now();
try {
const stats = fs.statSync(skillMdPath);
const content = fs.readFileSync(skillMdPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
const charCount = content.length;
// Content complexity metrics
const sectionCount = (content.match(/^#+\s/gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const listCount = (content.match(/^[-*+]\s/gm) || []).length;
// Front matter analysis
const frontMatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
const frontMatterSize = frontMatterMatch ? frontMatterMatch[1].length : 0;
const hasFrontMatter = frontMatterMatch !== null;
// Readability metrics
const sentences = content.split(/[.!?]+/).filter(s => s.trim().length > 0);
const avgWordsPerSentence = sentences.length > 0 ? wordCount / sentences.length : 0;
const avgCharsPerWord = wordCount > 0 ? charCount / wordCount : 0;
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 50) performanceScore -= 10;
if (fileSizeKB > 100) performanceScore -= 20;
// Content quality bonuses
if (hasFrontMatter) performanceScore += 5;
if (sectionCount >= 3) performanceScore += 5;
if (codeBlockCount > 0) performanceScore += 5;
// Readability penalties
if (avgWordsPerSentence > 25) performanceScore -= 5;
if (avgWordsPerSentence > 35) performanceScore -= 10;
const analysisTime = performance.now() - startTime;
const skillMetrics = {
skill_name: skillName,
file_path: skillMdPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
char_count: charCount,
section_count: sectionCount,
code_block_count: codeBlockCount,
list_count: listCount,
front_matter_size: frontMatterSize,
has_front_matter: hasFrontMatter,
avg_words_per_sentence: Math.round(avgWordsPerSentence * 100) / 100,
avg_chars_per_word: Math.round(avgCharsPerWord * 100) / 100,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.skill_metrics[skillName] = skillMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${skillName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${skillName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${skillName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return skillMetrics;
} catch (error) {
this.log(`Error analyzing ${skillName}: ${error.message}`, 'warn');
return null;
}
}
analyzeWorkflowPerformance(workflowPath, workflowName) {
const startTime = performance.now();
if (!fs.existsSync(workflowPath)) {
this.log(`Skipping workflow ${workflowName} - file not found`, 'warn');
return null;
}
try {
const stats = fs.statSync(workflowPath);
const content = fs.readFileSync(workflowPath, 'utf8');
// Basic metrics
const fileSizeKB = stats.size / 1024;
const lineCount = content.split('\n').length;
const wordCount = content.split(/\s+/).filter(word => word.length > 0).length;
// Workflow-specific metrics
const stepCount = (content.match(/^\d+\./gm) || []).length;
const codeBlockCount = (content.match(/```[\s\S]*?```/g) || []).length;
const skillReferences = (content.match(/@speckit-[\w-]+/g) || []).length; // [\w-] so hyphenated skill names match fully
// Performance score calculation
let performanceScore = 100;
// Size penalties
if (fileSizeKB > 20) performanceScore -= 10;
if (fileSizeKB > 50) performanceScore -= 20;
// Content quality bonuses
if (stepCount > 0) performanceScore += 10;
if (codeBlockCount > 0) performanceScore += 5;
if (skillReferences > 0) performanceScore += 5;
const analysisTime = performance.now() - startTime;
const workflowMetrics = {
workflow_name: workflowName,
file_path: workflowPath,
file_size_kb: Math.round(fileSizeKB * 100) / 100,
line_count: lineCount,
word_count: wordCount,
step_count: stepCount,
code_block_count: codeBlockCount,
skill_references: skillReferences,
performance_score: Math.max(0, Math.min(100, performanceScore)),
analysis_time_ms: Math.round(analysisTime * 100) / 100,
last_modified: stats.mtime.toISOString()
};
this.metrics.workflow_metrics[workflowName] = workflowMetrics;
// Log performance assessment
if (performanceScore >= 80) {
this.log(`${workflowName}: GOOD performance (score: ${performanceScore})`, 'good');
} else if (performanceScore >= 60) {
this.log(`${workflowName}: OK performance (score: ${performanceScore})`, 'info');
} else {
this.log(`${workflowName}: POOR performance (score: ${performanceScore})`, 'poor');
}
return workflowMetrics;
} catch (error) {
this.log(`Error analyzing workflow ${workflowName}: ${error.message}`, 'warn');
return null;
}
}
analyzeSystemMetrics() {
this.log('Analyzing system metrics...', 'info');
// Directory sizes
const agentsSize = this.getDirectorySize(AGENTS_DIR);
const skillsSize = this.getDirectorySize(SKILLS_DIR);
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
const workflowsSize = fs.existsSync(workflowsDir) ? this.getDirectorySize(workflowsDir) : 0;
// File counts
const totalFiles = this.countFiles(AGENTS_DIR);
const skillFiles = this.countFiles(SKILLS_DIR);
const workflowFiles = fs.existsSync(workflowsDir) ? this.countFiles(workflowsDir) : 0;
this.metrics.system_metrics = {
agents_directory_size_kb: Math.round(agentsSize / 1024),
skills_directory_size_kb: Math.round(skillsSize / 1024),
workflows_directory_size_kb: Math.round(workflowsSize / 1024),
total_files: totalFiles,
skill_files: skillFiles,
workflow_files: workflowFiles,
analysis_timestamp: new Date().toISOString()
};
this.log(`System: ${totalFiles} files, ${Math.round(agentsSize / 1024)}KB total`, 'info');
}
getDirectorySize(dirPath) {
let totalSize = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
totalSize += this.getDirectorySize(itemPath);
} else {
totalSize += stats.size;
}
}
return totalSize;
}
countFiles(dirPath) {
let fileCount = 0;
if (!fs.existsSync(dirPath)) {
return 0;
}
const items = fs.readdirSync(dirPath);
for (const item of items) {
const itemPath = path.join(dirPath, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
fileCount += this.countFiles(itemPath);
} else {
fileCount++;
}
}
return fileCount;
}
generateRecommendations() {
const recommendations = [];
const { skill_metrics, workflow_metrics, system_metrics } = this.metrics;
// Analyze skill performance
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 0;
if (avgSkillScore < 70) {
recommendations.push({
type: 'performance',
priority: 'high',
message: 'Average skill performance is below optimal. Consider optimizing skill documentation.',
details: `Average score: ${Math.round(avgSkillScore)}`
});
}
// Check for oversized files
const largeSkills = Object.values(skill_metrics).filter(m => m.file_size_kb > 50);
if (largeSkills.length > 0) {
recommendations.push({
type: 'size',
priority: 'medium',
message: `${largeSkills.length} skills have large file sizes (>50KB). Consider breaking down complex skills.`,
details: largeSkills.map(s => `${s.skill_name} (${s.file_size_kb}KB)`).join(', ')
});
}
// Check for missing front matter
const skillsWithoutFrontMatter = Object.values(skill_metrics).filter(m => !m.has_front_matter);
if (skillsWithoutFrontMatter.length > 0) {
recommendations.push({
type: 'structure',
priority: 'high',
message: `${skillsWithoutFrontMatter.length} skills missing front matter. Add proper YAML front matter.`,
details: skillsWithoutFrontMatter.map(s => s.skill_name).join(', ')
});
}
// Analyze workflow performance
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 0;
if (avgWorkflowScore < 70) {
recommendations.push({
type: 'performance',
priority: 'medium',
message: 'Average workflow performance could be improved. Add more detailed steps and examples.',
details: `Average score: ${Math.round(avgWorkflowScore)}`
});
}
// System recommendations
if (system_metrics.agents_directory_size_kb > 1000) {
recommendations.push({
type: 'maintenance',
priority: 'low',
message: '.agents directory is growing large. Consider archiving old logs and reports.',
details: `Current size: ${system_metrics.agents_directory_size_kb}KB`
});
}
this.metrics.summary.recommendations = recommendations;
// Log recommendations
if (recommendations.length > 0) {
this.log('Performance Recommendations:', 'info');
recommendations.forEach((rec, index) => {
const priority = rec.priority === 'high' ? 'HIGH' : rec.priority === 'medium' ? 'MED' : 'LOW';
this.log(` ${index + 1}. [${priority}] ${rec.message}`, 'warn');
});
} else {
this.log('No performance issues detected - system is optimized!', 'good');
}
}
calculateOverallPerformance() {
const { skill_metrics, workflow_metrics } = this.metrics;
const skillScores = Object.values(skill_metrics).map(m => m.performance_score);
const workflowScores = Object.values(workflow_metrics).map(m => m.performance_score);
const avgSkillScore = skillScores.length > 0 ? skillScores.reduce((a, b) => a + b, 0) / skillScores.length : 100;
const avgWorkflowScore = workflowScores.length > 0 ? workflowScores.reduce((a, b) => a + b, 0) / workflowScores.length : 100;
// Weight skills more heavily than workflows
const overallScore = (avgSkillScore * 0.7) + (avgWorkflowScore * 0.3);
this.metrics.summary.performance_score = Math.round(overallScore);
this.metrics.summary.average_skill_size = skillScores.length > 0
? Math.round(Object.values(skill_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / skillScores.length * 100) / 100
: 0;
this.metrics.summary.average_workflow_size = workflowScores.length > 0
? Math.round(Object.values(workflow_metrics).reduce((sum, m) => sum + m.file_size_kb, 0) / workflowScores.length * 100) / 100
: 0;
this.metrics.summary.total_skills_analyzed = skillScores.length;
this.metrics.summary.total_workflows_analyzed = workflowScores.length;
}
generateReport() {
this.metrics.duration = performance.now() - this.startTime;
const report = {
...this.metrics,
generated_at: new Date().toISOString(),
environment: {
node_version: process.version,
platform: process.platform,
memory_usage: process.memoryUsage()
}
};
fs.writeFileSync(PERFORMANCE_REPORT_PATH, JSON.stringify(report, null, 2));
this.log(`Performance report saved to: ${PERFORMANCE_REPORT_PATH}`, 'info');
return report;
}
async runPerformanceAnalysis() {
this.log('Starting performance analysis...', 'info');
this.log(`Base directory: ${BASE_DIR}`, 'info');
// Analyze skills
this.log('Analyzing skill performance...', 'info');
if (fs.existsSync(SKILLS_DIR)) {
const skillDirs = fs.readdirSync(SKILLS_DIR).filter(item => {
const itemPath = path.join(SKILLS_DIR, item);
return fs.statSync(itemPath).isDirectory();
});
for (const skillDir of skillDirs) {
const skillPath = path.join(SKILLS_DIR, skillDir);
this.analyzeSkillPerformance(skillPath, skillDir);
}
}
// Analyze workflows
this.log('Analyzing workflow performance...', 'info');
const workflowsDir = path.join(BASE_DIR, '.windsurf', 'workflows');
if (fs.existsSync(workflowsDir)) {
const workflowFiles = fs.readdirSync(workflowsDir).filter(file => file.endsWith('.md'));
for (const workflowFile of workflowFiles) {
const workflowPath = path.join(workflowsDir, workflowFile);
const workflowName = workflowFile.replace('.md', '');
this.analyzeWorkflowPerformance(workflowPath, workflowName);
}
}
// System metrics
this.analyzeSystemMetrics();
// Calculate overall performance
this.calculateOverallPerformance();
// Generate recommendations
this.generateRecommendations();
// Generate report
const report = this.generateReport();
// Summary
this.log('=== Performance Analysis Summary ===', 'info');
this.log(`Overall performance score: ${this.metrics.summary.performance_score}/100`, 'info');
this.log(`Skills analyzed: ${this.metrics.summary.total_skills_analyzed}`, 'info');
this.log(`Workflows analyzed: ${this.metrics.summary.total_workflows_analyzed}`, 'info');
this.log(`Average skill size: ${this.metrics.summary.average_skill_size}KB`, 'info');
this.log(`Average workflow size: ${this.metrics.summary.average_workflow_size}KB`, 'info');
this.log(`Analysis duration: ${Math.round(this.metrics.duration)}ms`, 'info');
this.log(`Recommendations: ${this.metrics.summary.recommendations.length}`, 'info');
return report;
}
}
// CLI interface
async function main() {
const monitor = new PerformanceMonitor();
try {
const report = await monitor.runPerformanceAnalysis();
process.exit(report.summary.performance_score < 60 ? 1 : 0);
} catch (error) {
console.error('Performance analysis failed:', error);
process.exit(1);
}
}
// Export for use in other modules
module.exports = { PerformanceMonitor };
// Run if called directly
if (require.main === module) {
main();
}
@@ -0,0 +1,198 @@
# audit-skills.ps1 - Verify skill completeness and health
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot)))
)
# Map to ConsoleColor enum (Write-Host expects enum, not ANSI strings)
$Colors = @{
Red = 'Red'
Green = 'Green'
Yellow = 'Yellow'
Blue = 'Blue'
NoColor = 'Gray'
}
$AgentsDir = Join-Path $BaseDir ".agents"
$SkillsDir = Join-Path $AgentsDir "skills"
Write-Host "=== Skills Health Audit ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host ""
# Function to check if skill has required files
function Test-SkillHealth {
param(
[string]$SkillDir
)
$skillName = Split-Path $SkillDir -Leaf
$issues = 0
# Check for SKILL.md
$skillFile = Join-Path $SkillDir "SKILL.md"
if (Test-Path $skillFile) {
Write-Host " OK: $skillName/SKILL.md" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING: $skillName/SKILL.md" -ForegroundColor $Colors.Red
$issues++
}
# Check for templates directory (optional)
$templatesDir = Join-Path $SkillDir "templates"
if (Test-Path $templatesDir) {
$templateCount = (Get-ChildItem -Path $templatesDir -Filter "*.md" -File | Measure-Object).Count
if ($templateCount -gt 0) {
Write-Host " OK: $skillName/templates ($templateCount files)" -ForegroundColor $Colors.Green
} else {
Write-Host " EMPTY: $skillName/templates (no files)" -ForegroundColor $Colors.Yellow
}
}
# Check SKILL.md content if exists
if (Test-Path $skillFile) {
$content = Get-Content $skillFile -Raw
# Check for required front matter fields
$requiredFields = @('name', 'description', 'version')
foreach ($field in $requiredFields) {
$pattern = "(?m)^${field}:"
if ($content -match $pattern) {
Write-Host " FIELD: $field" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING FIELD: $field" -ForegroundColor $Colors.Red
$issues++
}
}
# Check for LCBP3 context reference (speckit-* skills)
if ($skillName -like 'speckit-*') {
if ($content -match '_LCBP3-CONTEXT\.md') {
Write-Host " CONTEXT: LCBP3 appendix referenced" -ForegroundColor $Colors.Green
} else {
Write-Host " MISSING: LCBP3 context reference" -ForegroundColor $Colors.Yellow
$issues++
}
}
}
return $issues
}
# Function to get skill version from SKILL.md
function Get-SkillVersion {
param(
[string]$SkillFile
)
if (Test-Path $SkillFile) {
try {
$content = Get-Content $SkillFile -Raw
if ($content -match "(?m)^version:\s*['""]?([0-9]+\.[0-9]+\.[0-9]+)['""]?") {
return $matches[1].Trim()
}
} catch {
return "error"
}
}
return "no_file"
}
# Check skills directory
if (-not (Test-Path $SkillsDir)) {
Write-Host "ERROR: Skills directory not found" -ForegroundColor $Colors.Red
exit 1
}
Write-Host "Scanning skills directory: $SkillsDir"
Write-Host ""
# Get all skill directories
$skillDirs = Get-ChildItem -Path $SkillsDir -Directory | Sort-Object Name
Write-Host "Found $($skillDirs.Count) skill directories"
Write-Host ""
# Audit each skill
$totalIssues = 0
$skillSummary = @()
foreach ($skillDir in $skillDirs) {
$skillName = $skillDir.Name
Write-Host "Auditing: $skillName"
Write-Host "------------------------"
$issues = Test-SkillHealth -SkillDir $skillDir.FullName
$skillVersion = Get-SkillVersion -SkillFile (Join-Path $skillDir.FullName "SKILL.md")
$skillSummary += @{
Name = $skillName
Issues = $issues
Version = $skillVersion
}
$totalIssues += $issues
Write-Host ""
}
# Summary report
Write-Host "=== Skills Audit Summary ===" -ForegroundColor Cyan
Write-Host ""
Write-Host "Skill Status:"
Write-Host "-----------"
foreach ($summary in $skillSummary) {
if ($summary.Issues -eq 0) {
Write-Host " HEALTHY: $($summary.Name) (v$($summary.Version))" -ForegroundColor $Colors.Green
} else {
Write-Host " ISSUES: $($summary.Name) (v$($summary.Version)) - $($summary.Issues) issues" -ForegroundColor $Colors.Red
}
}
Write-Host ""
# Check global VERSION file consistency
$skillsVersionFile = Join-Path $SkillsDir "VERSION"
if (Test-Path $skillsVersionFile) {
$content = Get-Content $skillsVersionFile -Raw
if ($content -match "(?m)^version:\s*(.+)") {
$globalVersion = $matches[1].Trim()
Write-Host "Global skills version: v$globalVersion"
Write-Host ""
# Check for version mismatches
Write-Host "Version Consistency Check:"
Write-Host "------------------------"
$versionMismatches = 0
foreach ($summary in $skillSummary) {
if ($summary.Version -ne "error" -and $summary.Version -ne "no_file" -and $summary.Version -ne $globalVersion) {
Write-Host " MISMATCH: $($summary.Name) is v$($summary.Version), global is v$globalVersion" -ForegroundColor $Colors.Yellow
$versionMismatches++
}
}
if ($versionMismatches -eq 0) {
Write-Host " All skills match global version" -ForegroundColor $Colors.Green
}
}
}
Write-Host ""
# Overall health
if ($totalIssues -eq 0) {
Write-Host "=== SUCCESS: All skills healthy ===" -ForegroundColor $Colors.Green
Write-Host "Total skills: $($skillDirs.Count)"
exit 0
} else {
Write-Host "=== ISSUES FOUND: $totalIssues total issues ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "Recommendations:"
Write-Host "1. Fix missing SKILL.md files"
Write-Host "2. Add required front matter fields"
Write-Host "3. Reference _LCBP3-CONTEXT.md from all speckit-* skills"
Write-Host "4. Align skill versions with global version"
exit 1
}
@@ -0,0 +1,110 @@
# validate-versions.ps1 - Check version consistency across .agents files
# Part of LCBP3-DMS Phase 2 improvements
param(
[string]$BaseDir = (Split-Path -Parent (Split-Path -Parent (Split-Path -Parent $PSScriptRoot))),
[string]$ExpectedVersion = "1.8.9"
)
# Map to ConsoleColor enum (Write-Host expects enum, not ANSI)
$Colors = @{
Red = 'Red'
Green = 'Green'
Yellow = 'Yellow'
NoColor = 'Gray'
}
$AgentsDir = Join-Path $BaseDir ".agents"
Write-Host "=== .agents Version Validation ===" -ForegroundColor Cyan
Write-Host "Base directory: $BaseDir"
Write-Host "Expected version: $ExpectedVersion"
Write-Host ""
# Function to extract version from file
function Get-VersionFromFile {
param(
[string]$FilePath,
[string]$Pattern
)
if (Test-Path $FilePath) {
try {
$content = Get-Content $FilePath -Raw
if ($content -match $Pattern) {
return $matches[1]
} else {
return "NOT_FOUND"
}
} catch {
return "ERROR"
}
} else {
return "FILE_NOT_FOUND"
}
}
# Files to check
$FilesToCheck = @{
(Join-Path $AgentsDir "skills\VERSION") = "version: ([0-9]+\.[0-9]+\.[0-9]+)"
(Join-Path $AgentsDir "skills\skills.md") = "V([0-9]+\.[0-9]+\.[0-9]+)"
}
# Track issues
$Issues = 0
Write-Host "Checking version consistency..."
Write-Host ""
foreach ($file in $FilesToCheck.Keys) {
$pattern = $FilesToCheck[$file]
$relativePath = $file.Replace($BaseDir + "\", "")
$version = Get-VersionFromFile -FilePath $file -Pattern $pattern
if ($version -eq "NOT_FOUND" -or $version -eq "FILE_NOT_FOUND" -or $version -eq "ERROR") {
Write-Host " ERROR: $relativePath - Version not found" -ForegroundColor $Colors.Red
$Issues++
} elseif ($version -ne $ExpectedVersion) {
Write-Host " ERROR: $relativePath - Found v$version, expected v$ExpectedVersion" -ForegroundColor $Colors.Red
$Issues++
} else {
Write-Host " OK: $relativePath - v$version" -ForegroundColor $Colors.Green
}
}
Write-Host ""
# Check for version mismatches in skill files
Write-Host "Checking skill file versions..."
$SkillsVersionFile = Join-Path $AgentsDir "skills\VERSION"
if (Test-Path $SkillsVersionFile) {
$skillsVersion = Get-VersionFromFile -FilePath $SkillsVersionFile -Pattern "version: ([0-9]+\.[0-9]+\.[0-9]+)"
Write-Host "Skills version file: v$skillsVersion"
}
# Check workflow versions (in .windsurf\workflows)
$WorkflowsDir = Join-Path $BaseDir ".windsurf\workflows"
if (Test-Path $WorkflowsDir) {
Write-Host "Checking workflow files..."
$workflowCount = (Get-ChildItem -Path $WorkflowsDir -Filter "*.md" -File | Measure-Object).Count
Write-Host " OK: Found $workflowCount workflow files" -ForegroundColor $Colors.Green
} else {
Write-Host " WARNING: Workflows directory not found at $WorkflowsDir" -ForegroundColor $Colors.Yellow
}
Write-Host ""
# Summary
if ($Issues -eq 0) {
Write-Host "=== SUCCESS: All versions consistent ===" -ForegroundColor $Colors.Green
exit 0
} else {
Write-Host "=== FAILED: $Issues version issues found ===" -ForegroundColor $Colors.Red
Write-Host ""
Write-Host "To fix version issues:"
Write-Host "1. Update files to use v$ExpectedVersion"
Write-Host "2. Ensure LCBP3 project version matches"
Write-Host "3. Run this script again to verify"
exit 1
}
@@ -0,0 +1,109 @@
# `.agents/skills/` — LCBP3 Agent Skill Pack
**Version:** 1.8.9 | **Last Updated:** 2026-04-22 | **Total Skills:** 20
Agent skills for AI-assisted development in **Windsurf IDE** (and compatible agents: Codex CLI, opencode, Amp, Antigravity, AGENTS.md-aware tools).
---
## 📂 Layout
```
.agents/skills/
├── VERSION # Single source of truth for skill-pack version
├── skills.md # Overview + dependency matrix + health monitoring
├── _LCBP3-CONTEXT.md # Shared LCBP3 context injected into every speckit-* skill
├── README.md # (this file)
├── nestjs-best-practices/ # Backend rules (40 rules across 10 categories)
├── next-best-practices/ # Frontend rules (Next.js 15+)
└── speckit-*/ # 18 workflow skills (spec → plan → tasks → implement → …)
```
Each skill directory contains:
- `SKILL.md` — frontmatter (`name`, `description`, `version: 1.8.9`, `scope`, `depends-on`, `handoffs`) + instructions
- `templates/` _(optional)_ — artifact templates (spec/plan/tasks/checklist)
- `rules/` _(nestjs only)_ — individual rule files grouped by prefix (`arch-`, `security-`, `db-`, etc.)
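A minimal `SKILL.md` header consistent with the fields listed above might look like this (all values other than `version` are illustrative):

```yaml
---
name: speckit-example            # hypothetical skill name
description: One-line summary of when Cascade should invoke this skill.
version: 1.8.9                   # must match .agents/skills/VERSION
scope: backend                   # or frontend, per existing skills
depends-on: [speckit-specify]    # hypothetical dependency
handoffs: [speckit-plan]         # hypothetical handoff target
---
```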
---
## 🚀 How Windsurf Invokes These Skills
Windsurf exposes two entry points:
1. **Skill tool** — Windsurf discovers skills by scanning `.agents/skills/*/SKILL.md` frontmatter. Skills marked `user-invocable: false` are used silently by Cascade.
2. **Slash commands** — `.windsurf/workflows/*.md` wraps each skill as a slash command (e.g. `/04-speckit.plan`). The workflow file is short; the heavy lifting is delegated to the skill via the `skill` tool.
Both paths end up executing the same `SKILL.md` instructions.
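As a sketch, a slash-command wrapper under `.windsurf/workflows/` is typically just a thin pointer to the skill (file contents and wording hypothetical):

```markdown
---
description: Generate the implementation plan for the current feature
---

Invoke the `speckit-plan` skill from `.agents/skills/speckit-plan/SKILL.md`
and follow its instructions for the active feature branch.
```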
---
## 🧭 Typical Flow
```
/01-speckit.constitution → AGENTS.md / product vision
/02-speckit.specify → specs/feat-XXX/spec.md
/03-speckit.clarify → updates spec.md (up to 5 targeted questions)
/04-speckit.plan → specs/feat-XXX/plan.md + data-model.md + contracts/
/05-speckit.tasks → specs/feat-XXX/tasks.md
/06-speckit.analyze → cross-artifact consistency report (read-only)
/07-speckit.implement → executes tasks with Ironclad Protocols (Blast Radius + Strangler + TDD)
/08-speckit.checker → pnpm lint / typecheck / markdown-lint
/09-speckit.tester → pnpm test + coverage gates (Backend 70%+, Business Logic 80%+)
/10-speckit.reviewer → code review with Tier 1/2/3 classification
/11-speckit.validate → UAT / acceptance-criteria.md
```
Use `/00-speckit.all` to run specify → clarify → plan → tasks → analyze in one go.
---
## 🛠️ Helper Scripts
From repo root:
| Script | Purpose |
| --- | --- |
| `./.agents/scripts/bash/check-prerequisites.sh --json` | Emit `FEATURE_DIR` + `AVAILABLE_DOCS` for a feature branch |
| `./.agents/scripts/bash/setup-plan.sh --json` | Emit `FEATURE_SPEC`, `IMPL_PLAN`, `SPECS_DIR`, `BRANCH` |
| `./.agents/scripts/bash/update-agent-context.sh windsurf` | Append tech entries to `AGENTS.md` |
| `./.agents/scripts/bash/audit-skills.sh` | Validate all `SKILL.md` frontmatter + presence |
| `./.agents/scripts/bash/validate-versions.sh` | Version consistency check |
| `./.agents/scripts/bash/sync-workflows.sh` | Verify every skill has a `.windsurf/workflows/*.md` wrapper |
All scripts mirror to `.agents/scripts/powershell/*.ps1` for Windows.
---
## ⚠️ Tier 1 Non-Negotiables (auto-enforced)
- ADR-019 — `publicId` exposed directly; no `parseInt` / `Number` / `+` on UUID; no `id ?? ''` fallback
- ADR-009 — edit SQL schema directly, no TypeORM migrations
- ADR-016 — JWT + CASL on every mutation; `Idempotency-Key` required; ClamAV two-phase upload
- ADR-018 — AI via DMS API only (Ollama on Admin Desktop; no direct DB/storage)
- ADR-007 — layered error classification (Validation / Business / System)
- Zero `any`, zero `console.log` (use `Logger`)
See [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md) for the complete list.
---
## 🤝 Extending
To add a new skill:
1. Create `NAME/SKILL.md` with frontmatter: `name`, `description`, `version: 1.8.9`, `scope`, `depends-on`.
2. Append an LCBP3 context reference pointing to `_LCBP3-CONTEXT.md`.
3. Wrap with `.windsurf/workflows/NAME.md` so it becomes a slash command.
4. Update [`skills.md`](./skills.md) dependency matrix.
5. Run `./.agents/scripts/bash/audit-skills.sh` → must pass.
---
## 📚 References
- **Canonical rules:** `AGENTS.md` (repo root)
- **Product vision:** `specs/00-Overview/00-03-product-vision.md`
- **ADRs:** `specs/06-Decision-Records/`
- **Engineering guidelines:** `specs/05-Engineering-Guidelines/`
- **Contributing:** `CONTRIBUTING.md`
@@ -1,10 +1,25 @@
# Speckit Skills Version
version: 1.1.0
release_date: 2026-01-24
version: 1.8.9
release_date: 2026-04-22
## Changelog
### 1.8.9 (2026-04-22)
- Full LCBP3-native rebuild of `.agents/skills/`
- Fixed ADR-019 drift (removed `@Expose({ name: 'id' })` and `id ?? ''` fallback patterns)
- Replaced all dead references (`GEMINI.md` → `AGENTS.md`, v1.7.0 → v1.8.0 schema, `.specify/memory/` → `AGENTS.md`)
- Added real helper scripts under `.agents/scripts/bash/` and `.agents/scripts/powershell/`
- Added ADR-007/008/020/021 coverage
- New rules: workflow-engine, file-two-phase-upload, ai-boundary, i18n, file-upload, workflow-banner
- Standardized frontmatter across all 20 skills (`version: 1.8.9`)
### 1.8.6 (2026-04-14)
- Version alignment with LCBP3-DMS v1.8.6
- Complete skill implementations for all 20 skills
- Enhanced security and audit capabilities
- Production-ready deployment status
### 1.1.0 (2026-01-24)
- New QA skills: tester, reviewer, checker
- tester: Execute tests, measure coverage, report results
@@ -0,0 +1,91 @@
# 🧭 LCBP3-DMS Context Appendix (Shared)
> This file is included/referenced by every Speckit skill as the authoritative project context.
> Skills **must** load it (or the files it links to) before generating any artifact.
**Project:** NAP-DMS (LCBP3) — Laem Chabang Port Phase 3 Document Management System
**Stack:** NestJS 11 + Next.js 16 + TypeScript + MariaDB 11.8 + Redis + BullMQ + Elasticsearch + Ollama (on-prem AI)
**Version:** 1.8.9 (2026-04-18)
---
## 📌 Canonical Rule Sources (read in this order)
1. **`AGENTS.md`** (repo root) — primary rule file for AI agents; supersedes legacy `GEMINI.md`.
2. **`specs/06-Decision-Records/`** — architectural decisions (22 ADRs); ADR priority > Engineering Guidelines.
3. **`specs/05-Engineering-Guidelines/`** — backend/frontend/testing/i18n/git patterns.
4. **`specs/00-Overview/00-02-glossary.md`** — domain terminology (Correspondence / RFA / Transmittal / Circulation).
5. **`specs/00-Overview/00-03-product-vision.md`** — project constitution (Vision, Strategic Pillars, Guardrails).
6. **`CONTRIBUTING.md`** — spec writing standards, PR template, review levels.
7. **`README.md`** — technology stack + getting started.
---
## 🔴 Tier 1 Non-Negotiables
- **ADR-019 UUID:** `publicId: string` exposed directly — **no** `@Expose({ name: 'id' })` rename; **no** `parseInt`/`Number`/`+` on UUID; **no** `id ?? ''` fallback in frontend.
- **ADR-009:** No TypeORM migrations — edit `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` or add a `deltas/*.sql` file.
- **ADR-016 Security:** JWT + CASL 4-Level RBAC; `@UseGuards(JwtAuthGuard, CaslAbilityGuard)` on every mutation controller; `ThrottlerGuard` on auth; bcrypt 12 rounds; `Idempotency-Key` required on POST/PUT/PATCH.
- **ADR-002 Document Numbering:** Redis Redlock + TypeORM `@VersionColumn` (double-lock). Never use application-side counter alone.
- **ADR-008 Notifications:** BullMQ queue — never inline email/notification in a request thread.
- **ADR-018 AI Boundary:** Ollama on Admin Desktop only; AI → DMS API → DB (never direct DB/storage). Human-in-the-loop validation required.
- **ADR-007 Error Handling:** Layered (Validation / Business / System); `BusinessException` hierarchy; user-friendly `userMessage` + `recoveryAction`; technical stack only in logs.
- **TypeScript Strict:** Zero `any`, zero `console.log` (use NestJS `Logger`).
- **i18n:** No hardcoded Thai/English strings in components — use i18n keys (see `05-08-i18n-guidelines.md`).
- **File Upload:** Two-phase (Temp → ClamAV → Permanent), whitelist `PDF/DWG/DOCX/XLSX/ZIP`, max 50MB, `StorageService` only.
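The ADR-007 layered classification above can be sketched as follows. Class and field names are taken from the summary in this list, not from the actual LCBP3 codebase, so treat them as assumptions:

```typescript
// Hypothetical sketch of ADR-007 layered errors; not copied from the codebase.
class BusinessException extends Error {
  constructor(
    message: string,                        // technical detail, logs only
    public readonly userMessage: string,    // safe, user-facing text
    public readonly recoveryAction: string  // what the user can do next
  ) {
    super(message);
    this.name = 'BusinessException';
  }
}

// One concrete business error (name illustrative)
class DocumentNumberConflictError extends BusinessException {
  constructor(docNumber: string) {
    super(
      `Document number collision: ${docNumber}`,
      'This document number is already in use.',
      'Retry the submission; a fresh number will be allocated.'
    );
  }
}
```

The split keeps stack traces and technical detail in server logs, while the client only ever sees `userMessage` and `recoveryAction`.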
---
## 🏷️ Domain Glossary (reject generic terms)
| ✅ Use | ❌ Don't Use |
| --- | --- |
| Correspondence | Letter, Communication, Document |
| RFA | Approval Request, Submit for Approval |
| Transmittal | Delivery Note, Cover Letter |
| Circulation | Distribution, Routing |
| Shop Drawing | Construction Drawing |
| Contract Drawing | Design Drawing, Blueprint |
| Workflow Engine | Approval Flow, Process Engine |
| Document Numbering | Document ID, Auto Number |
---
## 📁 Key Files for Generating / Validating Artifacts
| When you need... | Read |
| --- | --- |
| A new feature spec | `.agents/skills/speckit-specify/templates/spec-template.md` + `specs/01-Requirements/01-06-edge-cases-and-rules.md` |
| A plan | `.agents/skills/speckit-plan/templates/plan-template.md` + relevant ADRs |
| Task breakdown | `.agents/skills/speckit-tasks/templates/tasks-template.md` + existing patterns in `specs/08-Tasks/` |
| Acceptance criteria / UAT | `specs/01-Requirements/01-05-acceptance-criteria.md` |
| Schema / table definition | `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql` + `03-01-data-dictionary.md` |
| RBAC / permissions | `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` + `01-02-01-rbac-matrix.md` |
| Release / hotfix | `specs/04-Infrastructure-OPS/04-08-release-management-policy.md` |
---
## 🛠️ Helper Scripts (real paths in this repo)
- `./.agents/scripts/bash/check-prerequisites.sh` / `powershell/*.ps1`
- `./.agents/scripts/bash/setup-plan.sh`
- `./.agents/scripts/bash/update-agent-context.sh windsurf`
- `./.agents/scripts/bash/audit-skills.sh`
- `./.agents/scripts/bash/validate-versions.sh`
- `./.agents/scripts/bash/sync-workflows.sh`
---
## ✅ Commit Checklist (applied automatically by speckit-implement)
- [ ] UUID pattern verified (no `parseInt` / `Number` / `+` on UUID, no `id ?? ''` fallback)
- [ ] No `any`, no `console.log` in committed code
- [ ] Business comments in Thai, code identifiers in English
- [ ] Schema changes via SQL directly (not migration)
- [ ] Test coverage meets targets (Backend 70%+, Business Logic 80%+)
- [ ] Relevant ADRs referenced (007/008/009/016/018/019/020/021)
- [ ] Domain glossary terms used correctly
- [ ] Error handling: `Logger` + `HttpException` / `BusinessException`
- [ ] i18n keys used (no hardcode text)
- [ ] Cache invalidation when data mutated
- [ ] OWASP Top 10 review passed
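The first checklist item (UUID pattern) can be sketched as the pattern to follow; the DTO shape here is illustrative:

```typescript
// ADR-019: treat publicId as an opaque string end-to-end.
interface ProjectDto {
  publicId: string; // UUIDv7, exposed directly; never renamed to `id`
  projectCode: string;
}

function projectUrl(p: ProjectDto): string {
  // pass the UUID through untouched
  return `/projects/${p.publicId}`;
  // forbidden: parseInt(p.publicId), Number(p.publicId), +p.publicId
  // forbidden: p.publicId ?? '' (no silent fallback; surface the missing id)
}
```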
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -36,11 +36,12 @@ npx skills add Kadajett/agent-nestjs-skills -a claude-code -a cursor
- `area-description.md` - Individual rule files
- `scripts/` - Build scripts and utilities
- `metadata.json` - Document metadata (version, organization, abstract)
- __`AGENTS.md`__ - Compiled output (generated)
- **`AGENTS.md`** - Compiled output (generated)
## Getting Started
1. Install dependencies:
```bash
cd scripts && npm install
```
@@ -74,7 +75,7 @@ npx skills add Kadajett/agent-nestjs-skills -a claude-code -a cursor
Each rule file should follow this structure:
```markdown
````markdown
---
title: Rule Title Here
impact: MEDIUM
@@ -91,6 +92,7 @@ Brief explanation of the rule and why it matters.
```typescript
// Bad code example
```
````
**Correct (description of what's right):**
@@ -102,7 +104,6 @@ Optional explanatory text after examples.
Reference: [NestJS Documentation](https://docs.nestjs.com)
## File Naming Convention
- Files starting with `_` are special (excluded from build)
@@ -113,13 +114,13 @@ Reference: [NestJS Documentation](https://docs.nestjs.com)
## Impact Levels
| Level | Description |
|-------|-------------|
| CRITICAL | Violations cause runtime errors, security vulnerabilities, or architectural breakdown |
| HIGH | Significant impact on reliability, security, or maintainability |
| MEDIUM-HIGH | Notable impact on quality and developer experience |
| MEDIUM | Moderate impact on code quality and best practices |
| LOW-MEDIUM | Minor improvements for consistency and maintainability |
| Level | Description |
| ----------- | ------------------------------------------------------------------------------------- |
| CRITICAL | Violations cause runtime errors, security vulnerabilities, or architectural breakdown |
| HIGH | Significant impact on reliability, security, or maintainability |
| MEDIUM-HIGH | Notable impact on quality and developer experience |
| MEDIUM | Moderate impact on code quality and best practices |
| LOW-MEDIUM | Minor improvements for consistency and maintainability |
## Scripts
@@ -160,4 +161,3 @@ These NestJS skills work with:
- [Claude Code](https://claude.ai/code) - Anthropic's official CLI
- [AdaL](https://sylph.ai/adal) - Self-evolving AI coding agent with MCP support
@@ -1,10 +1,12 @@
---
name: nestjs-best-practices
description: NestJS best practices and architecture patterns for building production-ready applications. This skill should be used when writing, reviewing, or refactoring NestJS code to ensure proper patterns for modules, dependency injection, security, and performance.
description: NestJS best practices and architecture patterns for building production-ready LCBP3-DMS backend code. Enforces ADR-009 (no TypeORM migrations), ADR-019 (hybrid UUID), ADR-016 (security), ADR-007 (error handling), ADR-008 (BullMQ), ADR-001/002 (workflow + numbering), ADR-018/020 (AI boundary), and ADR-021 (workflow context).
version: 1.8.9
scope: backend
user-invocable: false
license: MIT
metadata:
author: Kadajett
version: "1.1.0"
upstream: 'Kadajett/nestjs-best-practices v1.1.0 (forked + LCBP3-aligned)'
---
# NestJS Best Practices
@@ -24,18 +26,18 @@ Reference these guidelines when:
## Rule Categories by Priority
| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Architecture | CRITICAL | `arch-` |
| 2 | Dependency Injection | CRITICAL | `di-` |
| 3 | Error Handling | HIGH | `error-` |
| 4 | Security | HIGH | `security-` |
| 5 | Performance | HIGH | `perf-` |
| 6 | Testing | MEDIUM-HIGH | `test-` |
| 7 | Database & ORM | MEDIUM-HIGH | `db-` |
| 8 | API Design | MEDIUM | `api-` |
| 9 | Microservices | MEDIUM | `micro-` |
| 10 | DevOps & Deployment | LOW-MEDIUM | `devops-` |
| Priority | Category | Impact | Prefix |
| -------- | -------------------- | ----------- | ----------- |
| 1 | Architecture | CRITICAL | `arch-` |
| 2 | Dependency Injection | CRITICAL | `di-` |
| 3 | Error Handling | HIGH | `error-` |
| 4 | Security | HIGH | `security-` |
| 5 | Performance | HIGH | `perf-` |
| 6 | Testing | MEDIUM-HIGH | `test-` |
| 7 | Database & ORM | MEDIUM-HIGH | `db-` |
| 8 | API Design | MEDIUM | `api-` |
| 9 | Microservices | MEDIUM | `micro-` |
| 10 | DevOps & Deployment | LOW-MEDIUM | `devops-` |
## Quick Reference
@@ -86,9 +88,10 @@ Reference these guidelines when:
### 7. Database & ORM (MEDIUM-HIGH)
- `db-use-transactions` - Transaction management
- `db-avoid-n-plus-one` - Avoid N+1 query problems
- `db-use-migrations` - Use migrations for schema changes
- `db-hybrid-identifier` - **CRITICAL** ADR-019: INT PK + UUID public API
- `db-avoid-n-plus-one` - HIGH N+1 query prevention
- `db-use-transactions` - HIGH Transaction management
- `db-no-typeorm-migrations` - **CRITICAL** ADR-009: No TypeORM migrations - use SQL files
### 8. API Design (MEDIUM)
@@ -109,7 +112,134 @@ Reference these guidelines when:
- `devops-use-logging` - Structured logging
- `devops-graceful-shutdown` - Zero-downtime deployments
## How to Use
### 11. LCBP3-Specific (CRITICAL — Project Overrides)
- `db-no-typeorm-migrations` - **CRITICAL** ADR-009: edit SQL directly
- `lcbp3-workflow-engine` - **CRITICAL** ADR-001/002/021: DSL state machine + double-lock numbering + workflow context
- `security-file-two-phase-upload` - **CRITICAL** ADR-016: Upload → Temp → ClamAV → Commit
- `lcbp3-ai-boundary` - **CRITICAL** ADR-018/020: Ollama on-prem only, human-in-the-loop
## NAP-DMS Project-Specific Rules (MUST FOLLOW)
These rules override general NestJS best practices for the NAP-DMS project:
### ADR-009: No TypeORM Migrations
- **Do not create TypeORM migration files**
- Edit the schema directly at: `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
- Use an n8n workflow for data migration if one is needed
### ADR-019: Hybrid Identifier Strategy (CRITICAL — March 2026 Pattern)
> **Updated pattern:** `UuidBaseEntity` exposes `publicId` **directly**. Never use `@Expose({ name: 'id' })`; the API returns `publicId` as the field name unchanged.
```typescript
// ✅ CORRECT: extend UuidBaseEntity
@Entity()
export class Project extends UuidBaseEntity {
// publicId (string UUIDv7) + id (INT, @Exclude) are inherited from UuidBaseEntity
// API response → { publicId: "019505a1-7c3e-7000-8000-abc123..." }
@Column()
projectCode: string;
@Column()
projectName: string;
}
```
```typescript
// ❌ WRONG: legacy pattern, do not use
@Entity()
export class OldProject {
@PrimaryGeneratedColumn()
@Exclude()
id: number;
@Column({ type: 'uuid' })
@Expose({ name: 'id' }) // ❌ do not rename publicId to 'id'
publicId: string;
}
```
**DTO Input (accepting a UUID from the frontend):**
```typescript
export class CreateContractDto {
@IsUUID('7')
projectUuid: string; // accepts a UUID string from the client
}
// Controller resolves UUID → INT internally
@Post()
async create(@Body() dto: CreateContractDto) {
const projectId = await this.projectService.resolveInternalId(dto.projectUuid);
return this.contractService.create({ ...dto, projectId });
}
```
**Strictly forbidden (CI blocker):**
- ❌ `parseInt(projectPublicId)` — "019505…" → 19505 (silently wrong)
- ❌ `Number(publicId)` / `+publicId` — NaN
- ❌ `@Expose({ name: 'id' })` on `publicId` (legacy pattern)
- ❌ Exposing the INT `id` in an API response (always `@Exclude()` it)
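The first two bans can be demonstrated directly: numeric coercion on a UUIDv7 string either truncates at the first non-digit or yields `NaN`, never a valid key (the UUID below is an illustrative value, not a real record id):

```typescript
// Demonstration: numeric coercion on a UUIDv7 string produces garbage.
const publicId = '019505a1-7c3e-7000-8000-0123456789ab';

const viaParseInt = parseInt(publicId, 10); // reads "019505", stops at 'a'
const viaNumber = Number(publicId);         // not a numeric literal -> NaN
const viaUnaryPlus = +publicId;             // same coercion as Number()

console.log(viaParseInt);  // 19505 - a plausible-looking but wrong INT id
console.log(viaNumber);    // NaN
console.log(viaUnaryPlus); // NaN
```

Because `parseInt` returns a number that looks like a legitimate primary key, the corruption is silent, which is why these patterns are blocked in CI rather than left to review.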
### Two-Phase File Upload
```typescript
// Phase 1: Upload to temp
@Post('upload')
async uploadFile(@UploadedFile() file: Express.Multer.File) {
await this.virusScan(file);
const tempId = await this.fileStorage.saveToTemp(file);
return { temp_id: tempId, expires_at: addHours(new Date(), 24) };
}
// Phase 2: Commit in transaction
async createEntity(dto: CreateDto, tempIds: string[]) {
return this.dataSource.transaction(async (manager) => {
const entity = await manager.save(Entity, dto);
await this.fileStorage.commitFiles(tempIds, entity.id, manager);
return entity;
});
}
```
### Idempotency Requirement
- Every significant POST/PUT/PATCH must carry an `Idempotency-Key` header
- Use the existing `IdempotencyInterceptor`
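The real `IdempotencyInterceptor` already exists in the codebase; the sketch below only illustrates the contract it enforces, with assumed names (`IdempotencyStore`, `replay`) that are not the actual API:

```typescript
// Sketch of the idempotency contract: the first request with a key executes
// the handler; any retry with the same key gets the stored response back.
type StoredResponse = { status: number; body: unknown };

class IdempotencyStore {
  private cache = new Map<string, StoredResponse>();

  // First call with a key returns undefined; retries return the saved response.
  replay(key: string): StoredResponse | undefined {
    return this.cache.get(key);
  }

  save(key: string, response: StoredResponse): void {
    this.cache.set(key, response);
  }
}

const store = new IdempotencyStore();
const key = 'client-generated-key-123'; // value of the Idempotency-Key header

let response = store.replay(key);
if (!response) {
  // First attempt: run the handler, then persist the outcome under the key.
  response = { status: 201, body: { created: true } };
  store.save(key, response);
}
// A network retry with the same header now receives the stored response
// instead of creating a duplicate record.
const retried = store.replay(key);
```

In production the store would live in Redis with a TTL rather than in process memory, so retries survive restarts and keys expire.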
### Document Numbering (Double-Lock)
```typescript
async generateNextNumber(context: NumberingContext): Promise<string> {
const lockKey = `doc_num:${context.projectId}:${context.typeId}`;
const lock = await this.redisLock.acquire(lockKey, 3000);
try {
    const counter = await this.counterRepo.findOne({
      where: context,
      lock: { mode: 'optimistic' },
    });
    if (!counter) throw new NotFoundException('Numbering counter not found');
    counter.last_number++;
    return this.formatNumber(await this.counterRepo.save(counter));
} finally {
await lock.release();
}
}
```
### Anti-Patterns (Forbidden)
- ❌ SQL triggers for business logic
- ❌ `.env` files in production (use Docker ENV)
- ❌ The `any` type (strict mode is enforced)
- ❌ `console.log` (use the NestJS Logger)
- ❌ Separate routing tables (use the Workflow Engine)
---
Read individual rule files for detailed explanations and code examples:
@@ -120,6 +250,7 @@ rules/_sections.md
```
Each rule file contains:
- Brief explanation of why it matters
- Incorrect code example with explanation
- Correct code example with explanation
@@ -0,0 +1,24 @@
{
"version": "1.8.9",
"organization": "**NAP-DMS / LCBP3** — Laem Chabang Port Phase 3 Document Management System",
"date": "2026-04-22",
"abstract": "Comprehensive NestJS best-practices guide compiled for the LCBP3-DMS backend. Contains 40+ rules across 11 categories (10 general + 1 project-specific), prioritized by impact. Forked from Kadajett/nestjs-best-practices (v1.1.0) and aligned to LCBP3 ADRs: ADR-001 (workflow engine), ADR-002 (document numbering), ADR-007 (error handling), ADR-008 (notifications/BullMQ), ADR-009 (no TypeORM migrations), ADR-016 (security), ADR-018/020 (AI boundary), ADR-019 (hybrid UUID identifier — March 2026 pattern), and ADR-021 (workflow context).\n\nThis document is the single, consolidated reference used by Cascade and other AI coding agents when writing, reviewing, or refactoring backend code in this repository. All LCBP3-specific overrides live in section 11.",
"references": [
"[AGENTS.md (root)](../../../AGENTS.md) — canonical AI agent rules",
"[CONTRIBUTING.md](../../../CONTRIBUTING.md) — spec authoring + PR process",
"[ADR-001 Unified Workflow Engine](../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)",
"[ADR-002 Document Numbering Strategy](../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)",
"[ADR-007 Error Handling Strategy](../../../specs/06-Decision-Records/ADR-007-error-handling-strategy.md)",
"[ADR-008 Email/Notification Strategy](../../../specs/06-Decision-Records/ADR-008-email-notification-strategy.md)",
"[ADR-009 Database Migration Strategy](../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)",
"[ADR-016 Security & Authentication](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)",
"[ADR-018 AI Boundary](../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)",
"[ADR-019 Hybrid Identifier Strategy](../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)",
"[ADR-020 AI Intelligence Integration](../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)",
"[ADR-021 Workflow Context](../../../specs/06-Decision-Records/ADR-021-workflow-context.md)",
"[Backend Engineering Guidelines](../../../specs/05-Engineering-Guidelines/05-02-backend-guidelines.md)",
"[Schema — v1.8.0 Tables](../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)",
"[Data Dictionary](../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)",
"Upstream: [Kadajett/nestjs-best-practices](https://github.com/Kadajett/nestjs-best-practices) v1.1.0"
]
}
@@ -126,7 +126,7 @@ export class UsersController {
@SerializeOptions({ type: UserResponseDto })
async findAll(): Promise<UserResponseDto[]> {
const users = await this.usersService.findAll();
return users.map(u => plainToInstance(UserResponseDto, u));
return users.map((u) => plainToInstance(UserResponseDto, u));
}
@Get(':id')
@@ -159,10 +159,7 @@ export class UsersService {
@Controller('users')
export class UsersController {
@Get(':id')
async findOne(
@Param('id') id: string,
@Headers('X-API-Version') version: string = '1',
): Promise<any> {
async findOne(@Param('id') id: string, @Headers('X-API-Version') version: string = '1'): Promise<any> {
return this.usersService.findOne(id, version);
}
}
@@ -1,7 +1,7 @@
---
title: Avoid Circular Dependencies
impact: CRITICAL
impactDescription: "#1 cause of runtime crashes"
impactDescription: '#1 cause of runtime crashes'
tags: architecture, modules, dependencies
---
@@ -1,7 +1,7 @@
---
title: Organize by Feature Modules
impact: CRITICAL
impactDescription: "3-5x faster onboarding and development"
impactDescription: '3-5x faster onboarding and development'
tags: architecture, modules, organization
---
@@ -1,7 +1,7 @@
---
title: Single Responsibility for Services
impact: CRITICAL
impactDescription: "40%+ improvement in testability"
impactDescription: '40%+ improvement in testability'
tags: architecture, services, single-responsibility
---
@@ -19,7 +19,7 @@ export class UserAndOrderService {
private userRepo: UserRepository,
private orderRepo: OrderRepository,
private mailer: MailService,
private payment: PaymentService,
private payment: PaymentService
) {}
async createUser(dto: CreateUserDto) {
@@ -90,7 +90,7 @@ export class OrdersController {
constructor(
private orders: OrdersService,
private payment: PaymentService,
private notifications: NotificationService,
private notifications: NotificationService
) {}
@Post()
@@ -20,7 +20,7 @@ export class OrdersService {
private emailService: EmailService,
private analyticsService: AnalyticsService,
private notificationService: NotificationService,
private loyaltyService: LoyaltyService,
private loyaltyService: LoyaltyService
) {}
async createOrder(dto: CreateOrderDto): Promise<Order> {
@@ -51,7 +51,7 @@ export class OrderCreatedEvent {
public readonly orderId: string,
public readonly userId: string,
public readonly items: OrderItem[],
public readonly total: number,
public readonly total: number
) {}
}
@@ -60,17 +60,14 @@ export class OrderCreatedEvent {
export class OrdersService {
constructor(
private eventEmitter: EventEmitter2,
private repo: Repository<Order>,
private repo: Repository<Order>
) {}
async createOrder(dto: CreateOrderDto): Promise<Order> {
const order = await this.repo.save(dto);
// Emit event - no knowledge of consumers
this.eventEmitter.emit(
'order.created',
new OrderCreatedEvent(order.id, order.userId, order.items, order.total),
);
this.eventEmitter.emit('order.created', new OrderCreatedEvent(order.id, order.userId, order.items, order.total));
return order;
}
@@ -15,9 +15,7 @@ Create custom repositories to encapsulate complex queries and database logic. Th
// Complex queries in services
@Injectable()
export class UsersService {
constructor(
@InjectRepository(User) private repo: Repository<User>,
) {}
constructor(@InjectRepository(User) private repo: Repository<User>) {}
async findActiveWithOrders(minOrders: number): Promise<User[]> {
// Complex query logic mixed with business logic
@@ -42,9 +40,7 @@ export class UsersService {
// Custom repository with encapsulated queries
@Injectable()
export class UsersRepository {
constructor(
@InjectRepository(User) private repo: Repository<User>,
) {}
constructor(@InjectRepository(User) private repo: Repository<User>) {}
async findById(id: string): Promise<User | null> {
return this.repo.findOne({ where: { id } });
@@ -0,0 +1,229 @@
---
title: Hybrid Identifier Strategy (ADR-019)
impact: CRITICAL
impactDescription: Use INT PK internally + UUID for public API per project ADR-019
tags: database, uuid, identifier, adr-019, api-design, typeorm
---
## Hybrid Identifier Strategy (ADR-019) — March 2026 Pattern
**This project follows ADR-019: INT Primary Key (internal) + UUIDv7 (public API)**
Unlike standard practices that use UUID as the primary key, this project uses a **hybrid approach** optimized for MariaDB performance and API consistency.
> **Updated pattern (March 2026):** Entities extend `UuidBaseEntity`. The `publicId` column is exposed **directly** in API responses; never use `@Expose({ name: 'id' })` to rename it.
### The Strategy
| Layer | Field | Type | Usage |
| --------------- | ---------- | ----------------------------------- | ------------------------------------------------- |
| **Database PK** | `id` | `INT AUTO_INCREMENT` | Internal foreign keys only (marked `@Exclude()`) |
| **Public API** | `publicId` | `MariaDB UUID` (native, BINARY(16)) | External references, URLs — exposed as-is |
| **DTO Input** | `xxxUuid` | `string` (UUIDv7) | Accept UUID in create/update DTOs |
| **DTO Output** | `publicId` | `string` (UUIDv7) | API returns `publicId` field directly (no rename) |
### Why Hybrid IDs?
- **Performance**: INT PK is faster for joins and indexing than UUID
- **Security**: Internal IDs never exposed in API (enumerable IDs are a risk)
- **Compatibility**: UUID works well with distributed systems and external integrations
- **MariaDB Native**: Uses MariaDB's native UUID type (stored as BINARY(16), auto-converts to string)
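A further property worth noting: UUIDv7 front-loads a 48-bit Unix millisecond timestamp (per RFC 9562), so `publicId` values sort roughly by creation time. A minimal sketch of extracting that timestamp, assuming the RFC layout:

```typescript
// Extract the embedded creation time from a UUIDv7: the first 48 bits
// (12 hex chars) are milliseconds since the Unix epoch, per RFC 9562.
function uuidV7Millis(uuid: string): number {
  const hex = uuid.replace(/-/g, '').slice(0, 12);
  return parseInt(hex, 16);
}

// Two ids minted in order compare in order - useful for time-range scans
// on publicId without touching createdAt.
const earlier = '01950000-0000-7000-8000-000000000000';
const later = '01950001-0000-7000-8000-000000000000';
console.log(uuidV7Millis(earlier) < uuidV7Millis(later)); // true
```

This time-ordering is also what keeps UUIDv7 index-friendly compared to random UUIDv4, complementing the INT PK performance argument above.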
### Entity Definition (Current Pattern)
```typescript
import { Entity, Column } from 'typeorm';
import { UuidBaseEntity } from '@/common/entities/uuid-base.entity';
@Entity('contracts')
export class Contract extends UuidBaseEntity {
// publicId (string UUIDv7) + id (INT, @Exclude) are inherited from UuidBaseEntity
// API response → { publicId: "019505a1-7c3e-7000-8000-abc123...", contractCode: ..., ... }
@Column()
contractCode: string;
@Column()
contractName: string;
@Column({ name: 'project_id' })
projectId: number; // INT FK: internal only; mark it @Exclude() so it is never serialized
}
```
**`UuidBaseEntity` (shared base):**
```typescript
import { PrimaryGeneratedColumn, Column, CreateDateColumn, UpdateDateColumn } from 'typeorm';
import { Exclude } from 'class-transformer';
export abstract class UuidBaseEntity {
@PrimaryGeneratedColumn()
@Exclude() // ❗ CRITICAL: INT id must never leak to API
id: number;
@Column({ type: 'uuid', unique: true, generated: 'uuid' })
publicId: string; // UUIDv7, exposed as-is
@CreateDateColumn()
createdAt: Date;
@UpdateDateColumn()
updatedAt: Date;
}
```
### DTO Pattern (Accept UUID, Resolve to INT Internally)
```typescript
// dto/create-contract.dto.ts
import { IsUUID, IsNotEmpty } from 'class-validator';
export class CreateContractDto {
@IsNotEmpty()
@IsUUID('7') // UUIDv7 (MariaDB native)
projectUuid: string; // Accept UUID from client
@IsNotEmpty()
contractCode: string;
@IsNotEmpty()
contractName: string;
}
// ❌ No response DTO with an @Expose rename is needed.
// class-transformer (via the TransformInterceptor) serializes the entity's publicId directly.
```
### Service/Controller Pattern
```typescript
@Controller('contracts')
@UseGuards(JwtAuthGuard, CaslAbilityGuard)
export class ContractsController {
constructor(
private contractsService: ContractsService,
private uuidResolver: UuidResolver
) {}
@Post()
async create(@Body() dto: CreateContractDto) {
// Resolve UUID → INT PK for FK relationship
const projectId = await this.uuidResolver.resolveProject(dto.projectUuid);
const contract = await this.contractsService.create({
...dto,
projectId,
});
// Response: TransformInterceptor + @Exclude on id → publicId exposed directly
return contract;
}
@Get(':publicId')
async findOne(@Param('publicId', ParseUuidPipe) publicId: string) {
return this.contractsService.findOneByPublicId(publicId);
}
}
```
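The `ParseUuidPipe` used above is assumed to live in the shared common module. Stripped of the NestJS plumbing, its core check might look like the following (the regex pins version 7 and the RFC variant; in the real pipe the throw would be a `BadRequestException` returning HTTP 400):

```typescript
// Minimal sketch of the validation inside a ParseUuidPipe-style guard.
const UUID_V7_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-7[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function parseUuidV7(value: string): string {
  if (!UUID_V7_RE.test(value)) {
    // Real pipe: throw new BadRequestException(...) instead.
    throw new Error(`Validation failed: "${value}" is not a UUIDv7`);
  }
  return value.toLowerCase(); // normalize before hitting the publicId index
}
```

Rejecting malformed ids at the pipe keeps garbage out of the `publicId` lookup and gives the client a 400 instead of a misleading 404.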
### UUID Resolver Helper
```typescript
@Injectable()
export class UuidResolver {
constructor(
@InjectRepository(Project)
private projectRepo: Repository<Project>,
@InjectRepository(Contract)
private contractRepo: Repository<Contract>
) {}
async resolveProject(publicId: string): Promise<number> {
const project = await this.projectRepo.findOne({
where: { publicId },
select: ['id'], // Only INT PK for FK
});
if (!project) throw new NotFoundException('Project not found');
return project.id;
}
async resolveContract(publicId: string): Promise<number> {
const contract = await this.contractRepo.findOne({
where: { publicId },
select: ['id'],
});
if (!contract) throw new NotFoundException('Contract not found');
return contract.id;
}
}
```
### TransformInterceptor (Required — register ONCE)
```typescript
// Register via APP_INTERCEPTOR in CommonModule; do not register it again in main.ts
@Injectable()
export class TransformInterceptor implements NestInterceptor {
intercept(context: ExecutionContext, next: CallHandler): Observable<any> {
return next.handle().pipe(
map((data) => instanceToPlain(data)) // Applies @Exclude / @Expose
);
}
}
// common.module.ts
@Module({
providers: [
{
provide: APP_INTERCEPTOR,
useClass: TransformInterceptor,
},
],
})
export class CommonModule {}
```
> **Warning:** Do not also call `app.useGlobalInterceptors(new TransformInterceptor())` in `main.ts`; registering the interceptor twice double-wraps the response as `{ data: { data: ... } }`.
### Critical: NEVER ParseInt on UUID
```typescript
// ❌ WRONG - parseInt on UUID gives garbage value
const id = parseInt(projectPublicId); // "0195a1b2-..." → 195 (wrong!)
// ❌ WRONG - Number() on UUID
const id = Number(projectPublicId); // NaN
// ❌ WRONG - Unary plus on UUID
const id = +projectPublicId; // NaN
// ✅ CORRECT - Resolve via database lookup
const projectId = await uuidResolver.resolveProject(projectPublicId);
// ✅ CORRECT - Use TypeORM find with publicId column
const project = await projectRepo.findOne({ where: { publicId: projectPublicId } });
const id = project.id; // Get INT PK from entity
```
### Query with publicId (No Resolution Needed)
```typescript
// Direct UUID lookup in TypeORM
const project = await this.projectRepo.findOne({
where: { publicId: projectPublicId },
});
// Relations use INT FK internally
const contracts = await this.contractRepo.find({
where: { projectId: project.id }, // INT for FK query
});
```
### Reference
- [ADR-019 Hybrid Identifier Strategy](../../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)
- [UUID Implementation Plan](../../../../specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
> **Warning**: Using `parseInt()`, `Number()`, or unary `+` on UUID values violates ADR-019 and will cause data corruption. Always resolve UUIDs via database lookup.
@@ -0,0 +1,100 @@
---
title: No TypeORM Migrations (ADR-009)
impact: CRITICAL
impactDescription: Edit SQL schema files directly; n8n handles data migration. Do not generate TypeORM migration files.
tags: database, schema, migration, adr-009, sql, n8n
---
## No TypeORM Migrations (ADR-009)
**This project does NOT use TypeORM migration files.**
All schema changes must be made **directly** in the canonical SQL file:
- `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
Delta scripts (for incremental rollout to existing environments) go under:
- `specs/03-Data-and-Storage/deltas/YYYY-MM-DD-descriptive-name.sql`
Data migration (e.g., backfilling a new column) is handled by **n8n workflows**, not TypeORM's `QueryRunner`.
---
## Why No Migrations?
1. **Single source of truth** — The full SQL schema is always readable as one file. No need to replay a migration chain to understand current state.
2. **Review friendly** — Schema diff = git diff on the SQL file. Reviewers see the complete picture.
3. **Ops alignment** — DBAs and operators work in SQL, not TypeScript.
4. **n8n for data** — Business-meaningful data transforms live in n8n where they can be versioned, retried, and orchestrated with monitoring.
---
## ✅ Workflow for a Schema Change
1. **Update Data Dictionary** first:
- `specs/03-Data-and-Storage/03-01-data-dictionary.md` — add field meaning + business rules.
2. **Update the canonical schema**:
- Edit `lcbp3-v1.8.0-schema-02-tables.sql` — add/alter column, constraint, index.
3. **Add a delta script** (if deploying to existing env):
- `specs/03-Data-and-Storage/deltas/2026-04-22-add-rfa-revision-column.sql`
```sql
-- Delta: Add revision column to rfa table
ALTER TABLE rfa
ADD COLUMN revision INT NOT NULL DEFAULT 1 AFTER status;
CREATE INDEX idx_rfa_revision ON rfa(revision);
```
4. **Update the Entity** (`backend/src/.../entities/rfa.entity.ts`):
```typescript
@Column({ type: 'int', default: 1 })
revision: number;
```
5. **If data backfill needed** → create n8n workflow, not TypeScript migration.
---
## ❌ Forbidden
```bash
# ❌ DO NOT generate migrations
pnpm typeorm migration:generate ./src/migrations/AddRevision
# ❌ DO NOT run migrations
pnpm typeorm migration:run
```
```typescript
// ❌ DO NOT write migration classes
export class AddRevision1730000000000 implements MigrationInterface {
async up(queryRunner: QueryRunner): Promise<void> { /* ... */ }
async down(queryRunner: QueryRunner): Promise<void> { /* ... */ }
}
```
---
## ✅ TypeORM Config (runtime only)
```typescript
// ormconfig.ts
export default {
type: 'mariadb',
// ...
synchronize: false, // ❗ NEVER true (would auto-sync entity ↔ schema)
migrationsRun: false, // ❗ NEVER true
// ❌ Do NOT specify `migrations:` entries
};
```
`synchronize: false` is mandatory because the canonical SQL file is authoritative — TypeORM should never mutate the schema.
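As a defense in depth, a bootstrap assertion can fail fast if either flag is ever flipped. This helper is a hypothetical addition, not an existing utility in the codebase:

```typescript
// Hypothetical guard: refuse to boot if the TypeORM options violate ADR-009.
interface SchemaFlags {
  synchronize?: boolean;
  migrationsRun?: boolean;
}

function assertAdr009(options: SchemaFlags): void {
  if (options.synchronize || options.migrationsRun) {
    throw new Error('ADR-009 violation: synchronize and migrationsRun must stay false');
  }
}

assertAdr009({ synchronize: false, migrationsRun: false }); // passes silently
```

Calling this with the resolved TypeORM options during bootstrap turns a config regression into an immediate startup failure instead of a silent schema mutation.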
---
## Reference
- [ADR-009 Database Migration Strategy](../../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
- [Schema Tables](../../../../specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql)
@@ -1,129 +1,128 @@
---
title: Use Database Migrations
title: No TypeORM Migrations (ADR-009)
impact: HIGH
impactDescription: Enables safe, repeatable database schema changes
tags: database, migrations, typeorm, schema
impactDescription: Use direct SQL schema files instead of TypeORM migrations per project ADR
tags: database, schema, typeorm, migrations, adr-009
---
## Use Database Migrations
## No TypeORM Migrations (ADR-009)
Never use `synchronize: true` in production. Use migrations for all schema changes. Migrations provide version control for your database, enable safe rollbacks, and ensure consistency across all environments.
**This project follows ADR-009: Direct SQL Schema Management**
**Incorrect (using synchronize or manual SQL):**
Unlike standard NestJS/TypeORM practices, this project does **NOT** use TypeORM migrations. Instead, we manage database schema through direct SQL files.
```typescript
// Use synchronize in production
TypeOrmModule.forRoot({
type: 'postgres',
synchronize: true, // DANGEROUS in production!
// Can drop columns, tables, or data
});
### Why No Migrations?
// Manual SQL in production
@Injectable()
export class DatabaseService {
async addColumn(): Promise<void> {
await this.dataSource.query('ALTER TABLE users ADD COLUMN age INT');
// No version control, no rollback, inconsistent across envs
}
}
- **ADR-009 Decision**: Explicit schema control over auto-generated migrations
- **MariaDB-specific features**: Native UUID type, virtual columns, custom indexing
- **Team workflow**: Schema changes reviewed as SQL, not TypeORM migration classes
- **Audit trail**: Single source of truth in `specs/03-Data-and-Storage/`
// Modify entities without migration
@Entity()
export class User {
@Column()
email: string;
### Schema File Locations
@Column() // Added without migration
newField: string; // Will crash in production if synchronize is false
}
```
specs/03-Data-and-Storage/
├── lcbp3-v1.8.0-schema-01-drop.sql # Drop statements (dev only)
├── lcbp3-v1.8.0-schema-02-tables.sql # CREATE TABLE statements
├── lcbp3-v1.8.0-schema-03-views-indexes.sql # Views, indexes, constraints
└── deltas/ # Incremental changes
├── 01-add-reference-date.sql
├── 02-add-rbac-bulk-permission.sql
└── 03-fix-numbering-enums.sql
```
**Correct (use migrations for all schema changes):**
### Correct: Using SQL Schema Files
```typescript
// Configure TypeORM for migrations
// data-source.ts
export const dataSource = new DataSource({
type: 'postgres',
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT),
username: process.env.DB_USERNAME,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
entities: ['dist/**/*.entity.js'],
migrations: ['dist/migrations/*.js'],
synchronize: false, // Always false in production
migrationsRun: true, // Run migrations on startup
});
// app.module.ts
// TypeORM configuration - NO migrationsRun
TypeOrmModule.forRootAsync({
inject: [ConfigService],
useFactory: (config: ConfigService) => ({
type: 'postgres',
type: 'mariadb',
host: config.get('DB_HOST'),
synchronize: config.get('NODE_ENV') === 'development', // Only in dev
migrations: ['dist/migrations/*.js'],
migrationsRun: true,
port: config.get('DB_PORT'),
username: config.get('DB_USERNAME'),
password: config.get('DB_PASSWORD'),
database: config.get('DB_NAME'),
entities: ['dist/**/*.entity.js'],
synchronize: false, // NEVER true, even in development
migrationsRun: false, // Disabled per ADR-009
// Migrations are managed via SQL files, not TypeORM
}),
});
```
// migrations/1705312800000-AddUserAge.ts
import { MigrationInterface, QueryRunner } from 'typeorm';
### Schema Change Process (ADR-009)
export class AddUserAge1705312800000 implements MigrationInterface {
name = 'AddUserAge1705312800000';
1. **Modify SQL file directly**:
public async up(queryRunner: QueryRunner): Promise<void> {
// Add column with default to handle existing rows
await queryRunner.query(`
ALTER TABLE "users" ADD "age" integer DEFAULT 0
`);
```sql
-- specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql
ALTER TABLE correspondences
ADD COLUMN priority VARCHAR(20) DEFAULT 'normal';
```
// Add index for frequently queried columns
await queryRunner.query(`
CREATE INDEX "IDX_users_age" ON "users" ("age")
`);
}
2. **Create delta for existing databases**:
public async down(queryRunner: QueryRunner): Promise<void> {
// Always implement down for rollback
await queryRunner.query(`DROP INDEX "IDX_users_age"`);
await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "age"`);
}
}
```sql
-- specs/03-Data-and-Storage/deltas/04-add-priority-column.sql
ALTER TABLE correspondences
ADD COLUMN priority VARCHAR(20) DEFAULT 'normal';
```
// Safe column rename (two-step)
export class RenameNameToFullName1705312900000 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
// Step 1: Add new column
await queryRunner.query(`
ALTER TABLE "users" ADD "full_name" varchar(255)
`);
3. **Apply to database manually or via deployment script**:
```bash
mysql -u root -p lcbp3 < specs/03-Data-and-Storage/deltas/04-add-priority-column.sql
```
// Step 2: Copy data
await queryRunner.query(`
UPDATE "users" SET "full_name" = "name"
`);
### Entity Definition (No Migration Needed)
// Step 3: Add NOT NULL constraint
await queryRunner.query(`
ALTER TABLE "users" ALTER COLUMN "full_name" SET NOT NULL
`);
```typescript
@Entity('correspondences')
export class Correspondence {
@PrimaryGeneratedColumn()
id: number; // Internal INT PK
// Step 4: Drop old column (after verifying app works)
await queryRunner.query(`
ALTER TABLE "users" DROP COLUMN "name"
`);
}
@Column({ type: 'uuid' })
uuid: string; // Public UUID
public async down(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "users" ADD "name" varchar(255)`);
await queryRunner.query(`UPDATE "users" SET "name" = "full_name"`);
await queryRunner.query(`ALTER TABLE "users" DROP COLUMN "full_name"`);
}
@Column({ name: 'priority', default: 'normal' })
priority: string;
// No migration class needed - schema managed via SQL
}
```
Reference: [TypeORM Migrations](https://typeorm.io/migrations)
### Anti-Pattern: TypeORM Migrations (Do NOT Use)
```typescript
// ❌ WRONG - Do not create migration files
// migrations/1705312800000-AddUserAge.ts
export class AddUserAge1705312800000 implements MigrationInterface {
public async up(queryRunner: QueryRunner): Promise<void> {
await queryRunner.query(`ALTER TABLE "users" ADD "age" integer`);
}
}
// ❌ WRONG - Do not enable migrationsRun
TypeOrmModule.forRoot({
migrationsRun: true, // Disabled per ADR-009
migrations: ['dist/migrations/*.js'],
});
```
### When You Need Schema Changes
1. Check `specs/03-Data-and-Storage/lcbp3-v1.8.0-schema-02-tables.sql`
2. Add your DDL to the appropriate SQL file
3. Create delta file in `deltas/` directory
4. Apply SQL to your database
5. Update corresponding Entity class
### Reference
- [ADR-009 Database Migration Strategy](../../../../specs/06-Decision-Records/ADR-009-database-migration-strategy.md)
- [Schema SQL Files](../../../../specs/03-Data-and-Storage/)
- [Data Dictionary](../../../../specs/03-Data-and-Storage/03-01-data-dictionary.md)
> **Warning**: Attempting to use TypeORM migrations in this project violates ADR-009 and will be rejected in code review.
@@ -47,12 +47,7 @@ export class OrdersService {
for (const item of items) {
await manager.save(OrderItem, { orderId: order.id, ...item });
await manager.decrement(
Inventory,
{ productId: item.productId },
'stock',
item.quantity,
);
await manager.decrement(Inventory, { productId: item.productId }, 'stock', item.quantity);
}
// If this throws, everything rolls back
@@ -75,12 +70,7 @@ export class TransferService {
try {
// Debit source account
await queryRunner.manager.decrement(
Account,
{ id: fromId },
'balance',
amount,
);
await queryRunner.manager.decrement(Account, { id: fromId }, 'balance', amount);
// Verify sufficient funds
const source = await queryRunner.manager.findOne(Account, {
@@ -91,12 +81,7 @@ export class TransferService {
}
// Credit destination account
await queryRunner.manager.increment(
Account,
{ id: toId },
'balance',
amount,
);
await queryRunner.manager.increment(Account, { id: toId }, 'balance', amount);
// Log the transaction
await queryRunner.manager.save(TransactionLog, {
@@ -121,13 +106,10 @@ export class TransferService {
export class UsersRepository {
constructor(
@InjectRepository(User) private repo: Repository<User>,
private dataSource: DataSource,
private dataSource: DataSource
) {}
async createWithProfile(
userData: CreateUserDto,
profileData: CreateProfileDto,
): Promise<User> {
async createWithProfile(userData: CreateUserDto, profileData: CreateProfileDto): Promise<User> {
return this.dataSource.transaction(async (manager) => {
const user = await manager.save(User, userData);
await manager.save(Profile, { ...profileData, userId: user.id });
@@ -79,9 +79,7 @@ export class DatabaseService implements OnApplicationShutdown {
console.log(`Database service shutting down on ${signal}`);
// Close all connections gracefully
await Promise.all(
this.connections.map((conn) => conn.close()),
);
await Promise.all(this.connections.map((conn) => conn.close()));
console.log('All database connections closed');
}
@@ -150,9 +148,7 @@ export class HealthController {
throw new ServiceUnavailableException('Shutting down');
}
return this.health.check([
() => this.db.pingCheck('database'),
]);
return this.health.check([() => this.db.pingCheck('database')]);
}
}
@@ -208,10 +204,7 @@ export class RequestTracker implements NestMiddleware, OnApplicationShutdown {
});
// Wait with timeout
await Promise.race([
this.shutdownPromise,
new Promise((resolve) => setTimeout(resolve, 30000)),
]);
await Promise.race([this.shutdownPromise, new Promise((resolve) => setTimeout(resolve, 30000))]);
}
console.log('All requests completed');
@@ -61,9 +61,7 @@ export const appConfig = registerAs('app', () => ({
// config/validation.schema.ts
export const validationSchema = Joi.object({
NODE_ENV: Joi.string()
.valid('development', 'production', 'test')
.default('development'),
NODE_ENV: Joi.string().valid('development', 'production', 'test').default('development'),
PORT: Joi.number().default(3000),
DB_HOST: Joi.string().required(),
DB_PORT: Joi.number().default(5432),
@@ -137,7 +135,7 @@ export class AppService {
export class DatabaseService {
constructor(
@Inject(databaseConfig.KEY)
private dbConfig: ConfigType<typeof databaseConfig>,
private dbConfig: ConfigType<typeof databaseConfig>
) {
// Full type inference!
const host = this.dbConfig.host; // string
@@ -147,12 +145,7 @@ export class DatabaseService {
// Environment files support
ConfigModule.forRoot({
envFilePath: [
`.env.${process.env.NODE_ENV}.local`,
`.env.${process.env.NODE_ENV}`,
'.env.local',
'.env',
],
envFilePath: [`.env.${process.env.NODE_ENV}.local`, `.env.${process.env.NODE_ENV}`, '.env.local', '.env'],
});
// .env.development
@@ -45,9 +45,7 @@ logger.log('User ' + userId + ' created at ' + new Date());
async function bootstrap() {
const app = await NestFactory.create(AppModule, {
logger:
process.env.NODE_ENV === 'production'
? ['error', 'warn', 'log']
: ['error', 'warn', 'log', 'debug', 'verbose'],
process.env.NODE_ENV === 'production' ? ['error', 'warn', 'log'] : ['error', 'warn', 'log', 'debug', 'verbose'],
});
}
@@ -82,7 +80,7 @@ export class JsonLogger implements LoggerService {
timestamp: new Date().toISOString(),
message,
...context,
}),
})
);
}
@@ -94,7 +92,7 @@ export class JsonLogger implements LoggerService {
message,
trace,
...context,
}),
})
);
}
@@ -105,7 +103,7 @@ export class JsonLogger implements LoggerService {
timestamp: new Date().toISOString(),
message,
...context,
}),
})
);
}
@@ -116,7 +114,7 @@ export class JsonLogger implements LoggerService {
timestamp: new Date().toISOString(),
message,
...context,
}),
})
);
}
}
@@ -166,7 +164,7 @@ export class ContextLogger {
userId: this.cls.get('userId'),
message,
...data,
}),
})
);
}
@@ -181,7 +179,7 @@ export class ContextLogger {
error: error.message,
stack: error.stack,
...data,
}),
})
);
}
}
@@ -194,10 +192,7 @@ import { LoggerModule } from 'nestjs-pino';
LoggerModule.forRoot({
pinoHttp: {
level: process.env.NODE_ENV === 'production' ? 'info' : 'debug',
transport:
process.env.NODE_ENV !== 'production'
? { target: 'pino-pretty' }
: undefined,
transport: process.env.NODE_ENV !== 'production' ? { target: 'pino-pretty' } : undefined,
redact: ['req.headers.authorization', 'req.body.password'],
serializers: {
req: (req) => ({
@@ -55,7 +55,7 @@ export class OrdersService {
constructor(
private usersService: UsersService,
private inventoryService: InventoryService,
private paymentService: PaymentService,
private paymentService: PaymentService
) {}
async createOrder(dto: CreateOrderDto): Promise<Order> {
@@ -28,14 +28,14 @@ interface NotificationService {
@Injectable()
export class OrdersService {
constructor(
private notifications: NotificationService, // Depends on 8 methods, uses 1
private notifications: NotificationService // Depends on 8 methods, uses 1
) {}
async confirmOrder(order: Order): Promise<void> {
await this.notifications.sendEmail(
order.customer.email,
'Order Confirmed',
`Your order ${order.id} has been confirmed.`,
`Your order ${order.id} has been confirmed.`
);
}
}
@@ -43,12 +43,12 @@ export class OrdersService {
// Testing is painful - must mock unused methods
const mockNotificationService = {
sendEmail: jest.fn(),
sendSms: jest.fn(), // Never used, but required
sendPush: jest.fn(), // Never used, but required
sendSlack: jest.fn(), // Never used, but required
logNotification: jest.fn(), // Never used, but required
sendSms: jest.fn(), // Never used, but required
sendPush: jest.fn(), // Never used, but required
sendSlack: jest.fn(), // Never used, but required
logNotification: jest.fn(), // Never used, but required
getDeliveryStatus: jest.fn(), // Never used, but required
retryFailed: jest.fn(), // Never used, but required
retryFailed: jest.fn(), // Never used, but required
scheduleNotification: jest.fn(), // Never used, but required
};
```
@@ -105,14 +105,14 @@ export class SendGridEmailService implements EmailSender {
@Injectable()
export class OrdersService {
constructor(
@Inject(EMAIL_SENDER) private emailSender: EmailSender, // Minimal dependency
@Inject(EMAIL_SENDER) private emailSender: EmailSender // Minimal dependency
) {}
async confirmOrder(order: Order): Promise<void> {
await this.emailSender.sendEmail(
order.customer.email,
'Order Confirmed',
`Your order ${order.id} has been confirmed.`,
`Your order ${order.id} has been confirmed.`
);
}
}
@@ -150,7 +150,7 @@ type MultiChannelSender = EmailSender & SmsSender & PushSender;
export class AlertService {
constructor(
@Inject(MULTI_CHANNEL_SENDER)
private sender: EmailSender & SmsSender,
private sender: EmailSender & SmsSender
) {}
async sendCriticalAlert(user: User, message: string): Promise<void> {
@@ -178,9 +178,7 @@ export class OrdersService {
```typescript
// Shared test suite that any implementation must pass
function testPaymentGatewayContract(
createGateway: () => PaymentGateway,
) {
function testPaymentGatewayContract(createGateway: () => PaymentGateway) {
describe('PaymentGateway contract', () => {
let gateway: PaymentGateway;
@@ -197,13 +195,11 @@ function testPaymentGatewayContract(
});
it('throws InvalidCurrencyException for unsupported currency', async () => {
await expect(gateway.charge(1000, 'INVALID'))
.rejects.toThrow(InvalidCurrencyException);
await expect(gateway.charge(1000, 'INVALID')).rejects.toThrow(InvalidCurrencyException);
});
it('throws TransactionNotFoundException for invalid refund', async () => {
await expect(gateway.refund('nonexistent'))
.rejects.toThrow(TransactionNotFoundException);
await expect(gateway.refund('nonexistent')).rejects.toThrow(TransactionNotFoundException);
});
});
}
@@ -40,7 +40,7 @@ export class UsersService {
export class UsersService {
constructor(
private readonly userRepo: UserRepository,
@Inject('CONFIG') private readonly config: ConfigType,
@Inject('CONFIG') private readonly config: ConfigType
) {}
async findAll(): Promise<User[]> {
@@ -19,7 +19,9 @@ interface PaymentGateway {
@Injectable()
export class StripeService implements PaymentGateway {
charge(amount: number) { /* ... */ }
charge(amount: number) {
/* ... */
}
}
@Injectable()
@@ -58,9 +60,7 @@ export class MockPaymentService implements PaymentGateway {
providers: [
{
provide: PAYMENT_GATEWAY,
useClass: process.env.NODE_ENV === 'test'
? MockPaymentService
: StripeService,
useClass: process.env.NODE_ENV === 'test' ? MockPaymentService : StripeService,
},
],
exports: [PAYMENT_GATEWAY],
@@ -70,9 +70,7 @@ export class PaymentModule {}
// Injection
@Injectable()
export class OrdersService {
constructor(
@Inject(PAYMENT_GATEWAY) private payment: PaymentGateway,
) {}
constructor(@Inject(PAYMENT_GATEWAY) private payment: PaymentGateway) {}
async createOrder(dto: CreateOrderDto) {
await this.payment.charge(dto.amount);
@@ -88,7 +88,7 @@ export class UsersController {
export class EntityNotFoundException extends Error {
constructor(
public readonly entity: string,
public readonly id: string,
public readonly id: string
) {
super(`${entity} with ID "${id}" not found`);
}
@@ -95,20 +95,11 @@ export class AllExceptionsFilter implements ExceptionFilter {
const response = ctx.getResponse<Response>();
const request = ctx.getRequest<Request>();
const status =
exception instanceof HttpException
? exception.getStatus()
: HttpStatus.INTERNAL_SERVER_ERROR;
const status = exception instanceof HttpException ? exception.getStatus() : HttpStatus.INTERNAL_SERVER_ERROR;
const message =
exception instanceof HttpException
? exception.message
: 'Internal server error';
const message = exception instanceof HttpException ? exception.message : 'Internal server error';
this.logger.error(
`${request.method} ${request.url}`,
exception instanceof Error ? exception.stack : exception,
);
this.logger.error(`${request.method} ${request.url}`, exception instanceof Error ? exception.stack : exception);
response.status(status).json({
statusCode: status,
@@ -120,10 +111,7 @@ export class AllExceptionsFilter implements ExceptionFilter {
}
// Register globally in main.ts
app.useGlobalFilters(
new AllExceptionsFilter(app.get(Logger)),
new DomainExceptionFilter(),
);
app.useGlobalFilters(new AllExceptionsFilter(app.get(Logger)), new DomainExceptionFilter());
// Or via module
@Module({
@@ -0,0 +1,157 @@
---
title: AI Integration Boundary (ADR-018 / ADR-020)
impact: CRITICAL
impactDescription: AI runs on Admin Desktop only; AI → DMS API → DB (never direct); human-in-the-loop validation mandatory; full audit trail.
tags: ai, ollama, boundary, adr-018, adr-020, privacy, audit
---
## AI Integration Boundary
LCBP3 uses **on-premises AI only** (Ollama on Admin Desktop) with strict isolation from data layers.
---
## The Boundary
```
┌────────────────────────────────────────────────────────────┐
│ User Browser (Next.js) │
└─────────────────────────┬──────────────────────────────────┘
│ (authenticated HTTPS)
┌─────────────────────────▼──────────────────────────────────┐
│ DMS API (NestJS) ◀── enforces CASL, validation, audit │
│ ├─ AiGateway (proxies to Ollama) │
│ └─ DB + Storage (Elasticsearch, MariaDB, File System) │
└─────────────────────────┬──────────────────────────────────┘
│ (HTTP → Admin Desktop, internal)
┌─────────────────────────▼──────────────────────────────────┐
│ Admin Desktop (Desk-5439) │
│ ├─ Ollama (Gemma 4) │
│ ├─ PaddleOCR (Thai + English) │
│ └─ n8n orchestration │
└────────────────────────────────────────────────────────────┘
```
**❗ Admin Desktop has NO network access to MariaDB, no SMB to storage, and no shared secrets.** It receives base64-encoded file bytes over the internal HTTP channel shown above and returns extracted text plus suggestions.
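The request/response shape across this boundary can be sketched as follows; the field names (`file_b64`, `text`, `suggestions`) are illustrative assumptions, not the actual AiGateway contract:

```typescript
// Hypothetical payload shapes for the DMS API → Admin Desktop hop.
interface ExtractRequest {
  file_b64: string; // base64-encoded file bytes
  mimeType: string;
}

interface ExtractResponse {
  text: string; // OCR-extracted text
  suggestions: Record<string, unknown>;
  model: string;
  confidence: number;
}

// Build the request body from raw bytes; the DMS API side does this
// before POSTing to the Admin Desktop over the internal channel.
function buildExtractPayload(bytes: Uint8Array, mimeType: string): ExtractRequest {
  return {
    file_b64: Buffer.from(bytes).toString('base64'),
    mimeType,
  };
}
```

Because only encoded bytes cross the boundary, the Admin Desktop never needs DB credentials or storage mounts.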
---
## Required Patterns
### 1. AiGateway Module (backend)
```typescript
@Module({
controllers: [AiController],
providers: [AiService, AiGateway, AiAuditLogger],
exports: [AiService],
})
export class AiModule {}
@Injectable()
export class AiService {
async extractMetadata(fileId: number, user: User): Promise<ExtractedMetadata> {
// 1. Authorize (CASL: user can read this file)
await this.ability.ensureCan(user, 'read', File, fileId);
// 2. Load file (DMS API, inside the boundary)
const fileBytes = await this.storageService.read(fileId);
// 3. Call Admin Desktop AI over HTTP
const raw = await this.aiGateway.extract(fileBytes);
// 4. Validate AI output schema (Zod)
const parsed = ExtractedMetadataSchema.parse(raw);
// 5. Audit log (who, what, when, model, confidence)
await this.auditLogger.log({
userId: user.id,
action: 'ai.extract_metadata',
fileId,
model: raw.model,
confidence: parsed.confidence,
});
// 6. Return — frontend MUST render for human confirmation
return parsed;
}
}
```
### 2. Human-in-the-Loop
AI output is **never persisted directly**. Users must confirm via `DocumentReviewForm`:
```tsx
<DocumentReviewForm
document={doc}
aiSuggestions={suggestions}
onConfirm={(reviewed) => saveMetadata(reviewed)} // user edits applied
/>
```
The `user_confirmed_at` timestamp and diff (AI suggestion → final value) are stored in the audit log.
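That suggestion → final diff can be pictured as a minimal field-by-field comparison; the helper name and record shape here are assumptions, not the project's actual audit code:

```typescript
// Compute which fields the reviewer changed relative to the AI suggestion.
// The resulting map is the kind of diff stored alongside user_confirmed_at.
function suggestionDiff(
  suggested: Record<string, unknown>,
  confirmed: Record<string, unknown>,
): Record<string, { from: unknown; to: unknown }> {
  const diff: Record<string, { from: unknown; to: unknown }> = {};
  const keys = Array.from(new Set([...Object.keys(suggested), ...Object.keys(confirmed)]));
  for (const key of keys) {
    if (suggested[key] !== confirmed[key]) {
      diff[key] = { from: suggested[key], to: confirmed[key] };
    }
  }
  return diff;
}
```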
### 3. Rate Limiting
```typescript
@Post('ai/extract')
@UseGuards(JwtAuthGuard, CaslAbilityGuard, ThrottlerGuard)
@Throttle({ default: { limit: 10, ttl: 60_000 } }) // 10 req/min/user
async extract(@Body() dto: ExtractDto) { /* ... */ }
```
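The 10 req/min semantics enforced by `ThrottlerGuard` amount to a per-user fixed window; a framework-free sketch of that idea (not the actual throttler internals):

```typescript
// Per-key fixed-window counter: allow `limit` hits per `ttlMs` window.
class FixedWindowLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(private limit: number, private ttlMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.ttlMs) {
      // Window expired (or first hit): start a fresh window.
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}
```

The real guard also keys on route and handles distributed state, but the accept/reject decision is the same shape.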
---
## ❌ Forbidden
```typescript
// ❌ AI container connecting to DB
// docker-compose.yml inside ai-service:
// environment:
// DATABASE_URL: mysql://... ← NEVER
// ❌ AI SDK calling cloud API
import OpenAI from 'openai'; // ❌ No cloud AI SDKs in production code
const client = new OpenAI({ apiKey: ... });
// ❌ Persisting AI output without human confirm
async extractAndSave(fileId: number) {
const metadata = await this.ai.extract(fileId);
await this.repo.save({ fileId, ...metadata }); // ❌ skips human review
}
// ❌ Skipping audit log
const result = await this.aiGateway.extract(bytes); // no logging
return result;
```
---
## Audit Log Schema
```sql
CREATE TABLE ai_audit_log (
id INT AUTO_INCREMENT PRIMARY KEY,
public_id UUID UNIQUE NOT NULL,
user_id INT NOT NULL,
action VARCHAR(64) NOT NULL, -- 'ai.extract_metadata', 'ai.classify', etc.
file_id INT,
model VARCHAR(64), -- 'gemma-4:7b', 'paddleocr-v3'
confidence DECIMAL(4,3),
input_hash CHAR(64), -- SHA-256 of input for replay detection
output_summary JSON,
human_confirmed_at DATETIME,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
INDEX idx_user_created (user_id, created_at),
INDEX idx_file (file_id)
);
```
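The `input_hash` column above can be produced like this; a sketch only, since where the project actually computes the hash is not shown here:

```typescript
import { createHash } from 'node:crypto';

// SHA-256 over the raw input bytes: identical inputs yield identical
// hashes, so replay detection reduces to an equality check on this column.
function inputHash(bytes: Uint8Array): string {
  return createHash('sha256').update(bytes).digest('hex'); // 64 hex chars → CHAR(64)
}
```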
---
## Reference
- [ADR-018 AI Boundary](../../../../specs/06-Decision-Records/ADR-018-ai-boundary.md)
- [ADR-020 AI Intelligence Integration](../../../../specs/06-Decision-Records/ADR-020-ai-intelligence-integration.md)
- [ADR-017 Ollama Data Migration](../../../../specs/06-Decision-Records/ADR-017-ollama-data-migration.md)
@@ -0,0 +1,181 @@
---
title: Workflow Engine + Document Numbering + Workflow Context (ADR-001 / 002 / 021)
impact: CRITICAL
impactDescription: DSL-based state machine; double-lock numbering; integrated workflow context exposed to clients.
tags: workflow, numbering, redlock, version-column, adr-001, adr-002, adr-021
---
## Workflow Engine + Numbering + Context
LCBP3 uses a **unified workflow engine** (DSL-based state machine) across RFA, Transmittal, Correspondence, Circulation, and Shop Drawing. Every state transition goes through the same engine — no per-type routing tables.
---
## ADR-001: Unified Workflow Engine
### State Transition Pattern
```typescript
@Injectable()
export class WorkflowEngine {
async transition(
instanceId: string,
action: WorkflowAction,
actor: User,
context?: WorkflowContext,
): Promise<WorkflowInstance> {
// 1. Load current state from DB (never trust client-provided state)
const instance = await this.repo.findOneByPublicId(instanceId);
if (!instance) throw new NotFoundException();
// 2. Validate transition against DSL
const dsl = await this.dslService.load(instance.workflowTypeId);
const nextState = dsl.resolve(instance.currentState, action);
if (!nextState) {
throw new BusinessException(
`Action ${action} not allowed from state ${instance.currentState}`,
'ไม่สามารถดำเนินการนี้ได้ในสถานะปัจจุบัน',
'กรุณาตรวจสอบขั้นตอนการอนุมัติ',
'WF_INVALID_TRANSITION',
);
}
// 3. Apply transition atomically (optimistic lock via @VersionColumn)
instance.currentState = nextState;
await this.repo.save(instance); // throws OptimisticLockVersionMismatchError on race
// 4. Emit event for listeners (notifications via BullMQ — ADR-008)
this.eventBus.publish(new WorkflowTransitionedEvent(instance, action, actor));
return instance;
}
}
```
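The `dsl.resolve(currentState, action)` step above can be pictured as a transition-table lookup; a minimal sketch in which the state and action names are illustrative, not the real DSL:

```typescript
// Transition table: state → action → next state.
type TransitionTable = Record<string, Record<string, string>>;

const rfaTable: TransitionTable = {
  draft: { submit: 'pending_review' },
  pending_review: { approve: 'approved', reject: 'rejected' },
};

// Returns the next state, or null when the action is not allowed,
// mirroring the WF_INVALID_TRANSITION branch in the engine above.
function resolveTransition(table: TransitionTable, state: string, action: string): string | null {
  return table[state]?.[action] ?? null;
}
```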
### ❌ Anti-Patterns
- ❌ Hard-coded `switch (state)` in controllers/services
- ❌ Trusting `currentState` from request body
- ❌ Creating separate routing tables per document type
---
## ADR-002: Document Numbering (Double-Lock)
Concurrent requests for a new document number **must** use both:
1. **Redis Redlock** — distributed lock across app instances
2. **TypeORM `@VersionColumn`** — optimistic lock on counter row
### Counter Entity
```typescript
@Entity('document_number_counters')
@Unique(['projectId', 'documentTypeId'])
export class DocumentNumberCounter extends UuidBaseEntity {
@Column({ name: 'project_id' })
projectId: number;
@Column({ name: 'document_type_id' })
documentTypeId: number;
@Column({ name: 'last_number', default: 0 })
lastNumber: number;
@VersionColumn()
version: number; // ❗ Optimistic lock — do not rename, do not remove
}
```
### Service Pattern
```typescript
@Injectable()
export class DocumentNumberingService {
constructor(
@InjectRepository(DocumentNumberCounter)
private counterRepo: Repository<DocumentNumberCounter>,
private redlock: RedlockService,
private readonly logger: Logger,
) {}
async generateNext(ctx: NumberingContext): Promise<string> {
const lockKey = `doc_num:${ctx.projectId}:${ctx.documentTypeId}`;
// Distributed lock — 3s TTL, up to 5 retries
const lock = await this.redlock.acquire([lockKey], 3000);
try {
// Optimistic lock via @VersionColumn
const counter = await this.counterRepo.findOne({
where: { projectId: ctx.projectId, documentTypeId: ctx.documentTypeId },
});
if (!counter) {
throw new NotFoundException('Counter not initialized for this project/type');
}
counter.lastNumber += 1;
await this.counterRepo.save(counter); // may throw OptimisticLockVersionMismatchError
return this.formatNumber(ctx, counter.lastNumber);
} catch (err) {
if (err instanceof OptimisticLockVersionMismatchError) {
this.logger.warn(`Numbering race detected for ${lockKey}, retrying`);
// Let caller retry via BullMQ retry policy
}
throw err;
} finally {
await lock.release();
}
}
private formatNumber(ctx: NumberingContext, seq: number): string {
// e.g. "LCBP3-RFA-0042"
return `${ctx.projectCode}-${ctx.typeCode}-${String(seq).padStart(4, '0')}`;
}
}
```
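The "let caller retry" note above could, outside a BullMQ retry policy, look like a small bounded-retry wrapper; a sketch under assumed semantics, not the project's actual retry code:

```typescript
// Retry an operation a bounded number of times when it fails with a
// version-conflict error; rethrow any other error immediately.
async function retryOnConflict<T>(
  op: () => Promise<T>,
  isConflict: (err: unknown) => boolean,
  maxAttempts = 3,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (!isConflict(err)) throw err;
      lastErr = err; // conflict: fall through and try again
    }
  }
  throw lastErr;
}
```

With BullMQ the same effect comes from the job's `attempts`/`backoff` options; this wrapper is only for synchronous call sites.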
### ❌ Anti-Patterns
- ❌ App-side counter only (`let counter = 0; counter++`)
- ❌ Using `findOne` + `update` without `@VersionColumn`
- ❌ Using only Redis lock without DB optimistic lock (race if Redis fails)
---
## ADR-021: Integrated Workflow Context
Every workflow-aware API response **must** expose:
```typescript
export class WorkflowEnvelope<T> {
data: T;
workflow: {
instancePublicId: string;
currentState: string; // e.g. 'pending_review'
availableActions: string[]; // e.g. ['approve', 'reject', 'request-revision']
canEdit: boolean; // computed from CASL + current state
lastTransitionAt: string; // ISO 8601
};
stepAttachments?: Array<{ // files produced by the current/previous step
publicId: string;
fileName: string;
stepCode: string;
downloadUrl: string;
}>;
}
```
Frontend uses `workflow.availableActions` to render buttons — no client-side state machine logic.
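A sketch of deriving buttons purely from `availableActions`; the label map and helper name are illustrative assumptions:

```typescript
// Map server-provided action codes to button descriptors. Anything the
// server did not list is simply not rendered; there is no client-side
// state machine deciding what is allowed.
const ACTION_LABELS: Record<string, string> = {
  approve: 'Approve',
  reject: 'Reject',
  'request-revision': 'Request Revision',
};

function actionButtons(availableActions: string[], canEdit: boolean) {
  return availableActions.map((action) => ({
    action,
    label: ACTION_LABELS[action] ?? action,
    disabled: !canEdit,
  }));
}
```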
---
## Reference
- [ADR-001 Unified Workflow Engine](../../../../specs/06-Decision-Records/ADR-001-unified-workflow-engine.md)
- [ADR-002 Document Numbering Strategy](../../../../specs/06-Decision-Records/ADR-002-document-numbering-strategy.md)
- [ADR-021 Workflow Context](../../../../specs/06-Decision-Records/ADR-021-workflow-context.md)
@@ -64,11 +64,7 @@ import { BullModule } from '@nestjs/bullmq';
},
},
}),
BullModule.registerQueue(
{ name: 'email' },
{ name: 'reports' },
{ name: 'notifications' },
),
BullModule.registerQueue({ name: 'email' }, { name: 'reports' }, { name: 'notifications' }),
],
})
export class QueueModule {}
@@ -76,9 +72,7 @@ export class QueueModule {}
// Producer: Add jobs to queue
@Injectable()
export class ReportsService {
constructor(
@InjectQueue('reports') private reportsQueue: Queue,
) {}
constructor(@InjectQueue('reports') private reportsQueue: Queue) {}
async requestReport(dto: GenerateReportDto): Promise<{ jobId: string }> {
// Return immediately, process in background
@@ -176,7 +170,7 @@ export class NotificationService {
{
attempts: 5,
backoff: { type: 'exponential', delay: 5000 },
},
}
);
}
}
@@ -194,7 +188,7 @@ export class ScheduledJobsService implements OnModuleInit {
{
repeat: { cron: '0 0 * * *' },
jobId: 'daily-cleanup', // Prevent duplicates
},
}
);
// Send digest every hour
@@ -204,7 +198,7 @@ export class ScheduledJobsService implements OnModuleInit {
{
repeat: { every: 60 * 60 * 1000 },
jobId: 'hourly-digest',
},
}
);
}
}
@@ -64,7 +64,7 @@ export class DatabaseService implements OnModuleInit {
export class CacheWarmerService implements OnApplicationBootstrap {
constructor(
private cache: CacheService,
private products: ProductsService,
private products: ProductsService
) {}
async onApplicationBootstrap(): Promise<void> {
@@ -81,10 +81,7 @@ export class ModuleLoaderService {
constructor(private lazyModuleLoader: LazyModuleLoader) {}
async load<T>(
key: string,
importFn: () => Promise<{ default: Type<T> } | Type<T>>,
): Promise<ModuleRef> {
async load<T>(key: string, importFn: () => Promise<{ default: Type<T> } | Type<T>>): Promise<ModuleRef> {
if (!this.loadedModules.has(key)) {
const module = await importFn();
const moduleType = 'default' in module ? module.default : module;
@@ -51,9 +51,7 @@ export class UsersService {
imports: [ConfigModule],
inject: [ConfigService],
useFactory: (config: ConfigService) => ({
stores: [
new KeyvRedis(config.get('REDIS_URL')),
],
stores: [new KeyvRedis(config.get('REDIS_URL'))],
ttl: 60 * 1000, // Default 60s
}),
}),
@@ -66,7 +64,7 @@ export class AppModule {}
export class ProductsService {
constructor(
@Inject(CACHE_MANAGER) private cache: Cache,
private productsRepo: ProductRepository,
private productsRepo: ProductRepository
) {}
async getPopular(): Promise<Product[]> {
@@ -117,10 +115,7 @@ export class CacheInvalidationService {
@OnEvent('product.updated')
@OnEvent('product.deleted')
async invalidateProductCaches(event: ProductEvent) {
await Promise.all([
this.cache.del('products:popular'),
this.cache.del(`product:${event.productId}`),
]);
await Promise.all([this.cache.del('products:popular'), this.cache.del(`product:${event.productId}`)]);
}
}
```
@@ -111,7 +111,7 @@ export class AuthService {
export class JwtStrategy extends PassportStrategy(Strategy) {
constructor(
private config: ConfigService,
private usersService: UsersService,
private usersService: UsersService
) {
super({
jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
@@ -0,0 +1,137 @@
---
title: Two-Phase File Upload + ClamAV (ADR-016)
impact: CRITICAL
impactDescription: Upload → Temp → ClamAV scan → Commit → Permanent. Whitelist + 50MB cap. StorageService only.
tags: file-upload, clamav, security, adr-016, storage
---
## Two-Phase File Upload (ADR-016)
**Never write uploaded files directly to permanent storage.** All uploads must go through:
```
Client → Upload endpoint → Temp storage → ClamAV scan → Commit endpoint → Permanent storage
```
---
## Constraints (non-negotiable)
| Rule | Value |
| --- | --- |
| Allowed MIME types | `application/pdf`, `image/vnd.dwg`, `application/vnd.openxmlformats-officedocument.wordprocessingml.document`, `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`, `application/zip` |
| Allowed extensions | `.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip` |
| Max size | 50 MB |
| Temp TTL | 24 h (purged by cron) |
| Virus scan | ClamAV (blocking) |
| Mover | `StorageService` only — never `fs.rename` directly from controller |
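The MIME + extension whitelist from the table can be enforced like this; a sketch of what `fileValidator.assertAllowed` might check, with helper names assumed:

```typescript
const ALLOWED_MIME = new Set([
  'application/pdf',
  'image/vnd.dwg',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
  'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
  'application/zip',
]);
const ALLOWED_EXT = new Set(['.pdf', '.dwg', '.docx', '.xlsx', '.zip']);
const MAX_SIZE = 50 * 1024 * 1024; // 50 MB, matching the table

// Defense in depth: MIME type, extension, and size must all pass.
function isAllowedUpload(name: string, mime: string, size: number): boolean {
  const ext = name.slice(name.lastIndexOf('.')).toLowerCase();
  return ALLOWED_MIME.has(mime) && ALLOWED_EXT.has(ext) && size <= MAX_SIZE;
}
```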
---
## Phase 1: Upload to Temp
```typescript
@Post('upload')
@UseGuards(JwtAuthGuard, ThrottlerGuard)
@UseInterceptors(FileInterceptor('file', {
limits: { fileSize: 50 * 1024 * 1024 }, // 50 MB
}))
async uploadTemp(
@UploadedFile() file: Express.Multer.File,
@CurrentUser() user: User,
): Promise<{ tempId: string; expiresAt: string }> {
// 1. Validate MIME + extension (defense in depth)
this.fileValidator.assertAllowed(file);
// 2. Scan with ClamAV
const scanResult = await this.clamavService.scan(file.buffer);
if (!scanResult.clean) {
throw new BusinessException(
`ClamAV rejected: ${scanResult.signature}`,
'ไฟล์ไม่ปลอดภัย ระบบตรวจพบความเสี่ยง',
'กรุณาตรวจสอบไฟล์และลองใหม่อีกครั้ง',
'FILE_INFECTED',
);
}
// 3. Save to temp (encrypted at rest)
const tempId = await this.storageService.saveToTemp(file, user.id);
return {
tempId,
expiresAt: addHours(new Date(), 24).toISOString(),
};
}
```
---
## Phase 2: Commit in Transaction
The business operation (e.g., creating a Correspondence) promotes temp files to permanent **in the same DB transaction**.
```typescript
async createCorrespondence(dto: CreateCorrespondenceDto, user: User) {
return this.dataSource.transaction(async (manager) => {
// 1. Create domain entity
const entity = await manager.save(Correspondence, {
...dto,
createdById: user.id,
});
// 2. Commit temp files → permanent (ACID together with entity)
await this.storageService.commitFiles(
dto.tempFileIds,
{ entityId: entity.id, entityType: 'correspondence' },
manager,
);
return entity;
});
}
```
If the transaction rolls back, temp files remain and expire in 24h — no orphaned permanent files.
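The 24 h purge that makes rollback safe can be sketched as a filter over temp records; the record shape and helper name are assumptions:

```typescript
interface TempFileRecord {
  tempId: string;
  createdAt: number; // epoch ms
}

const TEMP_TTL_MS = 24 * 60 * 60 * 1000; // 24 h temp TTL

// Returns the tempIds a purge cron should delete from temp storage.
// Anything younger than the TTL is left alone so an in-flight commit
// can still promote it.
function expiredTempIds(records: TempFileRecord[], now: number): string[] {
  return records
    .filter((r) => now - r.createdAt >= TEMP_TTL_MS)
    .map((r) => r.tempId);
}
```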
---
## StorageService Contract
```typescript
export interface StorageService {
saveToTemp(file: Express.Multer.File, ownerId: number): Promise<string>;
commitFiles(
tempIds: string[],
target: { entityId: number; entityType: string },
manager: EntityManager,
): Promise<FileRecord[]>;
purgeExpiredTemp(): Promise<number>; // called by cron
getPermanentPath(fileId: number): Promise<string>;
}
```
---
## ❌ Forbidden
```typescript
// ❌ Direct write to permanent
fs.writeFileSync(`/var/storage/${file.originalname}`, file.buffer);
// ❌ Skip ClamAV
await this.storageService.savePermanent(file);
// ❌ Non-whitelist MIME
@UseInterceptors(FileInterceptor('file')) // no size or type limit
// ❌ Commit outside transaction
const entity = await this.repo.save(...);
await this.storageService.commitFiles(tempIds, ...); // race: entity exists, files may fail
```
---
## Reference
- [ADR-016 Security & Authentication](../../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- [Edge Cases](../../../../specs/01-Requirements/01-06-edge-cases-and-rules.md) — file upload scenarios
@@ -47,15 +47,12 @@ export class AdminController {
export class JwtAuthGuard implements CanActivate {
constructor(
private jwtService: JwtService,
private reflector: Reflector,
private reflector: Reflector
) {}
async canActivate(context: ExecutionContext): Promise<boolean> {
// Check for @Public() decorator
const isPublic = this.reflector.getAllAndOverride<boolean>('isPublic', [
context.getHandler(),
context.getClass(),
]);
const isPublic = this.reflector.getAllAndOverride<boolean>('isPublic', [context.getHandler(), context.getClass()]);
if (isPublic) return true;
const request = context.switchToHttp().getRequest();
@@ -85,10 +82,7 @@ export class RolesGuard implements CanActivate {
constructor(private reflector: Reflector) {}
canActivate(context: ExecutionContext): boolean {
const requiredRoles = this.reflector.getAllAndOverride<Role[]>('roles', [
context.getHandler(),
context.getClass(),
]);
const requiredRoles = this.reflector.getAllAndOverride<Role[]>('roles', [context.getHandler(), context.getClass()]);
if (!requiredRoles) return true;
@@ -30,9 +30,9 @@ export class UsersController {
// DTOs without validation decorators
export class CreateUserDto {
name: string; // No validation
email: string; // Could be "not-an-email"
age: number; // Could be "abc" or -999
name: string; // No validation
email: string; // Could be "not-an-email"
age: number; // Could be "abc" or -999
}
```
@@ -45,13 +45,13 @@ async function bootstrap() {
app.useGlobalPipes(
new ValidationPipe({
whitelist: true, // Strip unknown properties
forbidNonWhitelisted: true, // Throw on unknown properties
transform: true, // Auto-transform to DTO types
whitelist: true, // Strip unknown properties
forbidNonWhitelisted: true, // Throw on unknown properties
transform: true, // Auto-transform to DTO types
transformOptions: {
enableImplicitConversion: true,
},
}),
})
);
await app.listen(3000);
@@ -61,7 +61,7 @@ describe('UsersController (e2e)', () => {
whitelist: true,
transform: true,
forbidNonWhitelisted: true,
}),
})
);
await app.init();
@@ -97,9 +97,7 @@ describe('UsersController (e2e)', () => {
describe('/users/:id (GET)', () => {
it('should return 404 for non-existent user', () => {
return request(app.getHttpServer())
.get('/users/non-existent-id')
.expect(404);
return request(app.getHttpServer()).get('/users/non-existent-id').expect(404);
});
});
});
@@ -127,9 +125,7 @@ describe('Protected Routes (e2e)', () => {
});
it('should return 401 without token', () => {
return request(app.getHttpServer())
.get('/users/me')
.expect(401);
return request(app.getHttpServer()).get('/users/me').expect(401);
});
it('should return user profile with valid token', () => {
@@ -84,9 +84,7 @@ describe('WeatherService', () => {
});
it('should handle API timeout', async () => {
httpService.get.mockReturnValue(
throwError(() => new Error('ETIMEDOUT')),
);
httpService.get.mockReturnValue(throwError(() => new Error('ETIMEDOUT')));
await expect(service.getWeather('NYC')).rejects.toThrow('Weather service unavailable');
});
@@ -95,7 +93,7 @@ describe('WeatherService', () => {
httpService.get.mockReturnValue(
throwError(() => ({
response: { status: 429, data: { message: 'Rate limited' } },
})),
}))
);
await expect(service.getWeather('NYC')).rejects.toThrow(TooManyRequestsException);
@@ -117,10 +115,7 @@ describe('UsersService', () => {
};
const module = await Test.createTestingModule({
providers: [
UsersService,
{ provide: getRepositoryToken(User), useValue: mockRepo },
],
providers: [UsersService, { provide: getRepositoryToken(User), useValue: mockRepo }],
}).compile();
service = module.get(UsersService);
@@ -86,9 +86,7 @@ describe('UsersService', () => {
it('should throw on duplicate email', async () => {
repo.findOne.mockResolvedValue({ id: '1', email: 'test@test.com' });
await expect(
service.create({ name: 'Test', email: 'test@test.com' }),
).rejects.toThrow(ConflictException);
await expect(service.create({ name: 'Test', email: 'test@test.com' })).rejects.toThrow(ConflictException);
});
});
@@ -32,6 +32,7 @@ const CATEGORIES = [
{ prefix: 'api-', name: 'API Design', impact: 'MEDIUM', section: 8 },
{ prefix: 'micro-', name: 'Microservices', impact: 'MEDIUM', section: 9 },
{ prefix: 'devops-', name: 'DevOps & Deployment', impact: 'LOW-MEDIUM', section: 10 },
{ prefix: 'lcbp3-', name: 'LCBP3 Project-Specific', impact: 'CRITICAL', section: 11 },
];
interface RuleFrontmatter {
@@ -50,8 +51,10 @@ interface Rule {
}
function parseFrontmatter(content: string): { frontmatter: RuleFrontmatter | null; body: string } {
// Normalize CRLF → LF so the regex works on Windows-authored files
const normalized = content.replace(/\r\n/g, '\n');
const frontmatterRegex = /^---\n([\s\S]*?)\n---\n([\s\S]*)$/;
const match = content.match(frontmatterRegex);
const match = normalized.match(frontmatterRegex);
if (!match) {
return { frontmatter: null, body: content };
@@ -98,7 +101,7 @@ function parseFrontmatter(content: string): { frontmatter: RuleFrontmatter | nul
return {
frontmatter: frontmatter as RuleFrontmatter,
body: body.trim()
body: body.trim(),
};
}
@@ -118,8 +121,7 @@ function readMetadata(): any {
function readRules(): Rule[] {
const rulesDir = path.join(__dirname, '..', 'rules');
const files = fs.readdirSync(rulesDir)
.filter(f => f.endsWith('.md') && !f.startsWith('_'));
const files = fs.readdirSync(rulesDir).filter((f) => f.endsWith('.md') && !f.startsWith('_'));
const rules: Rule[] = [];
@@ -144,7 +146,7 @@ function readRules(): Rule[] {
frontmatter,
content: body,
category: category.name,
categorySection: category.section
categorySection: category.section,
});
}
@@ -1,6 +1,8 @@
---
name: next-best-practices
description: Next.js best practices - file conventions, RSC boundaries, data patterns, async APIs, metadata, error handling, route handlers, image/font optimization, bundling
description: Next.js best practices for LCBP3-DMS frontend. Enforces ADR-019 (publicId only, no parseInt/id fallback), TanStack Query + RHF + Zod, shadcn/ui, i18n, ADR-007 error UX, ADR-021 IntegratedBanner/WorkflowLifecycle, two-phase file upload.
version: 1.8.9
scope: frontend
user-invocable: false
---
@@ -11,6 +13,7 @@ Apply these rules when writing or reviewing Next.js code.
## File Conventions
See [file-conventions.md](./file-conventions.md) for:
- Project structure and special files
- Route segments (dynamic, catch-all, groups)
- Parallel and intercepting routes
@@ -21,6 +24,7 @@ See [file-conventions.md](./file-conventions.md) for:
Detect invalid React Server Component patterns.
See [rsc-boundaries.md](./rsc-boundaries.md) for:
- Async client component detection (invalid)
- Non-serializable props detection
- Server Action exceptions
@@ -30,6 +34,7 @@ See [rsc-boundaries.md](./rsc-boundaries.md) for:
Next.js 15+ async API changes.
See [async-patterns.md](./async-patterns.md) for:
- Async `params` and `searchParams`
- Async `cookies()` and `headers()`
- Migration codemod
@@ -37,18 +42,21 @@ See [async-patterns.md](./async-patterns.md) for:
## Runtime Selection
See [runtime-selection.md](./runtime-selection.md) for:
- Default to Node.js runtime
- When Edge runtime is appropriate
## Directives
See [directives.md](./directives.md) for:
- `'use client'`, `'use server'` (React)
- `'use cache'` (Next.js)
## Functions
See [functions.md](./functions.md) for:
- Navigation hooks: `useRouter`, `usePathname`, `useSearchParams`, `useParams`
- Server functions: `cookies`, `headers`, `draftMode`, `after`
- Generate functions: `generateStaticParams`, `generateMetadata`
@@ -56,6 +64,7 @@ See [functions.md](./functions.md) for:
## Error Handling
See [error-handling.md](./error-handling.md) for:
- `error.tsx`, `global-error.tsx`, `not-found.tsx`
- `redirect`, `permanentRedirect`, `notFound`
- `forbidden`, `unauthorized` (auth errors)
@@ -63,7 +72,10 @@ See [error-handling.md](./error-handling.md) for:
## Data Patterns
Project-specific: See [uuid-handling.md](./uuid-handling.md) for ADR-019 UUID handling patterns.
See [data-patterns.md](./data-patterns.md) for:
- Server Components vs Server Actions vs Route Handlers
- Avoiding data waterfalls (`Promise.all`, Suspense, preload)
- Client component data fetching
## Route Handlers
See [route-handlers.md](./route-handlers.md) for:
- `route.ts` basics
- GET handler conflicts with `page.tsx`
- Environment behavior (no React DOM)
## Metadata & OG Images
See [metadata.md](./metadata.md) for:
- Static and dynamic metadata
- `generateMetadata` function
- OG image generation with `next/og`
## Image Optimization
See [image.md](./image.md) for:
- Always use `next/image` over `<img>`
- Remote images configuration
- Responsive `sizes` attribute
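Remote images must be allow-listed before `next/image` will serve them; a minimal sketch (the hostname is illustrative):

```javascript
// next.config.js
module.exports = {
  images: {
    // Only allow-listed origins can be optimized by next/image
    remotePatterns: [{ protocol: 'https', hostname: 'cdn.example.com' }],
  },
};
```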
## Font Optimization
See [font.md](./font.md) for:
- `next/font` setup
- Google Fonts, local fonts
- Tailwind CSS integration
## Bundling
See [bundling.md](./bundling.md) for:
- Server-incompatible packages
- CSS imports (not link tags)
- Polyfills (already included)
## Scripts
See [scripts.md](./scripts.md) for:
- `next/script` vs native script tags
- Inline scripts need `id`
- Loading strategies
## Hydration Errors
See [hydration-error.md](./hydration-error.md) for:
- Common causes (browser APIs, dates, invalid HTML)
- Debugging with error overlay
- Fixes for each cause
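A frequent cause is locale- or timezone-dependent date rendering; one fix is a deterministic formatter so server HTML matches what the client hydrates. A small sketch (the helper name is ours, not from the codebase):

```typescript
// Pin locale and timezone so the server-rendered text and the
// client-hydrated text are byte-identical.
export function formatDateStable(iso: string): string {
  return new Intl.DateTimeFormat('en-GB', {
    timeZone: 'UTC',
    dateStyle: 'medium',
  }).format(new Date(iso));
}
```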
## Suspense Boundaries
See [suspense-boundaries.md](./suspense-boundaries.md) for:
- CSR bailout with `useSearchParams` and `usePathname`
- Which hooks require Suspense boundaries
## Parallel & Intercepting Routes
See [parallel-routes.md](./parallel-routes.md) for:
- Modal patterns with `@slot` and `(.)` interceptors
- `default.tsx` for fallbacks
- Closing modals correctly with `router.back()`
## i18n (Thai / English)
See [i18n.md](./i18n.md) for:
- `useTranslations('namespace')` pattern
- Key naming (kebab-case, feature-namespaced)
- When Zod messages stay inline vs i18n
- Server-side `userMessage` passthrough
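The namespace + kebab-case key convention can be sketched with a plain lookup (hypothetical catalog shape; the real app resolves keys through its i18n library):

```typescript
// Hypothetical message catalog keyed by namespace, then kebab-case key.
const messages: Record<string, Record<string, string>> = {
  correspondence: {
    'create-title': 'Create correspondence',
    'submit-button': 'Submit',
  },
};

// Resolve a key; fall back to the dotted key path when missing,
// which makes untranslated keys visible in the UI.
export function t(namespace: string, key: string): string {
  return messages[namespace]?.[key] ?? `${namespace}.${key}`;
}
```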
## Two-Phase File Upload
See [two-phase-upload.md](./two-phase-upload.md) for:
- `useDropzone` + `useMutation` hook
- `tempFileIds` form-state pattern
- Whitelist MIME / max-size (must mirror backend)
- Clear-on-submit / expired-temp handling
## Self-Hosting
See [self-hosting.md](./self-hosting.md) for:
- `output: 'standalone'` for Docker
- Cache handlers for multi-instance ISR
- What works vs needs extra setup
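The Docker setup hinges on one config flag; a minimal sketch:

```javascript
// next.config.js
module.exports = {
  // Emits .next/standalone with a self-contained server.js,
  // so the Docker image does not need the full node_modules tree.
  output: 'standalone',
};
```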
## NAP-DMS Project-Specific Rules (MUST FOLLOW)
These rules are mandatory for the NAP-DMS LCBP3 frontend project:
### State Management (mandatory)
**Server State - TanStack Query (React Query)**
```tsx
import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
// ❌ Do not fetch with useEffect directly
// ✅ Use TanStack Query
export function useCorrespondences(projectId: string) {
return useQuery({
queryKey: ['correspondences', projectId],
queryFn: () => correspondenceService.getAll(projectId),
staleTime: 5 * 60 * 1000,
});
}
```
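Mutations must invalidate the same keys the queries use; a key-factory sketch keeps the two in sync (module and names are illustrative, not the project's actual code):

```typescript
// Centralized query keys: useQuery and invalidateQueries share one source.
export const correspondenceKeys = {
  all: ['correspondences'] as const,
  byProject: (projectId: string) => ['correspondences', projectId] as const,
};
```

A mutation would then call something like `queryClient.invalidateQueries({ queryKey: correspondenceKeys.byProject(projectId) })` in its `onSuccess`.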
**Form State - React Hook Form + Zod**
```tsx
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
const schema = z.object({
title: z.string().min(1, 'กรุณาระบุหัวเรื่อง'),
projectUuid: z.string().uuid('กรุณาเลือกโปรเจกต์'),
});
const form = useForm({
resolver: zodResolver(schema),
});
```
### ADR-019 UUID Handling (CRITICAL, March 2026 Pattern)
> **Updated:** use `publicId` directly. Never use an `id ?? ''` fallback or a parallel `uuid` field.
```tsx
// ✅ CORRECT: the interface exposes only publicId
interface Contract {
  publicId?: string; // UUID from the API; use this field
  contractCode: string;
  contractName: string;
}

// ✅ CORRECT: select options (no id fallback)
const options = contracts.map((c) => ({
  label: `${c.contractName} (${c.contractCode})`,
  value: c.publicId ?? '', // publicId only
  key: c.publicId ?? c.contractCode, // falling back to a business field is allowed
}));

// ❌ WRONG: legacy pattern (forbidden)
interface OldContract {
  id?: number; // ❌ do not expose the INT id
  uuid?: string; // ❌ do not use a `uuid` field
  publicId?: string;
}
const oldValue = String(c.publicId ?? c.id ?? ''); // ❌ the `id ?? ''` fallback is forbidden

// ❌ NEVER parseInt on a UUID
// const badId = parseInt(projectPublicId); // "019505..." → 19 (WRONG!)

// ✅ Pass the UUID string straight to the API
apiClient.get(`/projects/${projectPublicId}`);
```
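The `parseInt` failure mode can also be guarded at runtime; a sketch (the guard name is ours, not from the project):

```typescript
// RFC 4122 textual layout: 8-4-4-4-12 hex digits.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Type guard: accept only well-formed UUID strings as publicId values.
export function isPublicId(value: unknown): value is string {
  return typeof value === 'string' && UUID_RE.test(value);
}

// parseInt silently truncates a UUID at the first '-':
// parseInt('01950573-aaaa-bbbb-cccc-123456789abc', 10) === 1950573
```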
### Naming Conventions
**Code identifiers - English**
```tsx
// ✅ Correct
interface Correspondence {
documentNumber: string;
createdAt: string;
}
// ❌ Wrong
interface เอกสาร {
เลขที่: string;
}
```
**Comments - Thai**
```tsx
// ✅ Correct - explain the logic in Thai
// ตรวจสอบว่ามีการระบุ projectUuid หรือไม่
if (!data.projectUuid) {
throw new Error('กรุณาเลือกโปรเจกต์');
}
// ❌ Wrong - do not write comments in English
// Check if projectUuid is provided
```
### UI Components
**shadcn/ui is mandatory**
```tsx
// ✅ Correct
import { Button } from '@/components/ui/button';
import { Card, CardContent } from '@/components/ui/card';
// ❌ Wrong - do not build your own component when shadcn/ui already has one
const MyButton = () => <button className="...">Click</button>;
```
### File Upload Pattern
```tsx
import { useCallback, useState } from 'react';
import { useDropzone } from 'react-dropzone';

// Two-phase upload
const [tempFileIds, setTempFileIds] = useState<string[]>([]);

// Phase 1: upload to temp storage on drop
const onDrop = useCallback(async (files: File[]) => {
  const tempFiles = await Promise.all(files.map((file) => uploadService.uploadTemp(file)));
  setTempFileIds(tempFiles.map((f) => f.tempId));
}, []);

// Phase 2: commit the temp files together with the record on form submit
const onSubmit = async (data: FormValues) => {
  await correspondenceService.create({
    ...data,
    tempFileIds,
  });
};
```
### API Client Setup
```typescript
// lib/api/client.ts
import axios from 'axios';
import { v4 as uuidv4 } from 'uuid';

const apiClient = axios.create({
baseURL: process.env.NEXT_PUBLIC_API_URL,
timeout: 30000,
});
// Auto-add Idempotency-Key
apiClient.interceptors.request.use((config) => {
if (['post', 'put', 'patch'].includes(config.method?.toLowerCase() || '')) {
config.headers['Idempotency-Key'] = uuidv4();
}
return config;
});
```
### Anti-Patterns (forbidden)
- ❌ Fetching data directly in `useEffect` (use TanStack Query)
- ❌ Props drilling deeper than 3 levels
- ❌ Inline styles (use Tailwind)
- ❌ `console.log` in committed code
- ❌ `parseInt()` / `Number()` / `+` on UUID values (ADR-019)
- ❌ `id ?? ''` fallback on `publicId` (use `publicId ?? ''` or fall back to a business field)
- ❌ Exposing `uuid` alongside `publicId` in an interface (use `publicId` only)
- ❌ Using the array index as a list key
- ❌ snake_case form field names (use camelCase)
- ❌ Hardcoded Thai/English strings in components (use i18n keys)
- ❌ `any` type (strict mode)
---

## Debug Tricks

See [debug-tricks.md](./debug-tricks.md) for:
- MCP endpoint for AI-assisted debugging
- Rebuild specific routes with `--debug-build-paths`
Always type `params` and `searchParams` as `Promise<...>` and await them.
### Pages and Layouts
```tsx
type Props = { params: Promise<{ slug: string }> };
export default async function Page({ params }: Props) {
const { slug } = await params;
}
```
### Route Handlers
```tsx
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id } = await params;
}
```
@@ -31,13 +28,13 @@ export async function GET(
```tsx
type Props = {
params: Promise<{ slug: string }>;
searchParams: Promise<{ query?: string }>;
};
export default async function Page({ params, searchParams }: Props) {
const { slug } = await params;
const { query } = await searchParams;
}
```
@@ -46,37 +43,37 @@ export default async function Page({ params, searchParams }: Props) {
Use `React.use()` for non-async components:
```tsx
import { use } from 'react';
type Props = { params: Promise<{ slug: string }> };
export default function Page({ params }: Props) {
const { slug } = use(params);
}
```
### generateMetadata
```tsx
type Props = { params: Promise<{ slug: string }> };
export async function generateMetadata({ params }: Props): Promise<Metadata> {
const { slug } = await params;
return { title: slug };
}
```
## Async Cookies and Headers
```tsx
import { cookies, headers } from 'next/headers';
export default async function Page() {
const cookieStore = await cookies();
const headersList = await headers();
const theme = cookieStore.get('theme');
const userAgent = headersList.get('user-agent');
}
```
If the package is only needed on the client:
```tsx
// Bad: Fails - package uses window
import SomeChart from 'some-chart-library';
export default function Page() {
return <SomeChart />;
}
// Good: Use dynamic import with ssr: false
import dynamic from 'next/dynamic';
const SomeChart = dynamic(() => import('some-chart-library'), {
ssr: false,
});
export default function Page() {
return <SomeChart />;
}
```
For packages that should run on the server but have bundling issues:

```js
// next.config.js
module.exports = {
serverExternalPackages: ['problematic-package'],
};
```
Use this for:
- Packages with native bindings (sharp, bcrypt)
- Packages that don't bundle well (some ORMs)
- Packages with circular dependencies
Wrap the entire usage in a client component:
```tsx
// components/ChartWrapper.tsx
'use client';
import { Chart } from 'chart-library';
export function ChartWrapper(props) {
return <Chart {...props} />;
}
// app/page.tsx (server component)
import { ChartWrapper } from '@/components/ChartWrapper';
export default function Page() {
return <ChartWrapper data={data} />;
}
```
Import CSS files instead of using `<link>` tags. Next.js handles bundling and optimization.
```tsx
// Bad: Manual link tag
<link rel="stylesheet" href="/styles.css" />
// Good: Import CSS
import './styles.css';
// Good: CSS Modules
import styles from './Button.module.css';
```
## Polyfills
`Module not found`: ESM-only packages may need to be transpiled:

```js
// next.config.js
module.exports = {
transpilePackages: ['some-esm-package', 'another-package'],
};
```
## Common Problematic Packages
| Package | Issue | Solution |
| --------------- | --------------- | --------------------------------------------------------------- |
| `sharp` | Native bindings | `serverExternalPackages: ['sharp']` |
| `bcrypt` | Native bindings | `serverExternalPackages: ['bcrypt']` or use `bcryptjs` |
| `canvas` | Native bindings | `serverExternalPackages: ['canvas']` |
| `recharts` | Uses window | `dynamic(() => import('recharts'), { ssr: false })` |
| `react-quill` | Uses document | `dynamic(() => import('react-quill'), { ssr: false })` |
| `mapbox-gl` | Uses window | `dynamic(() => import('mapbox-gl'), { ssr: false })` |
| `monaco-editor` | Uses window | `dynamic(() => import('@monaco-editor/react'), { ssr: false })` |
| `lottie-web` | Uses document | `dynamic(() => import('lottie-react'), { ssr: false })` |
## Bundle Analysis
```bash
next experimental-analyze
```
This opens an interactive UI to:
- Filter by route, environment (client/server), and type
- Inspect module sizes and import chains
- View treemap visualization
```js
// next.config.js
module.exports = {
  webpack: (config) => {
    // custom webpack config
  },
};
```
Reference: https://nextjs.org/docs/app/building-your-application/upgrading/from-webpack-to-turbopack
```tsx
async function UsersPage() {
  const users = await db.user.findMany();
  // Or fetch from an external API
  const posts = await fetch('https://api.example.com/posts').then((r) => r.json());
return (
<ul>
{users.map((user) => (
<li key={user.id}>{user.name}</li>
))}
</ul>
);
}
```
**Benefits**:
- No API to maintain
- No client-server waterfall
- Secrets stay on server
@@ -89,12 +92,14 @@ export default function NewPost() {
```
**Benefits**:
- End-to-end type safety
- Progressive enhancement (works without JS)
- Automatic request handling
- Integrated with React transitions
**Constraints**:
- POST only (no GET caching semantics)
- Internal use only (no external access)
- Cannot return non-serializable data
@@ -122,12 +127,14 @@ export async function POST(request: NextRequest) {
```
**When to use**:
- External API access (mobile apps, third parties)
- Webhooks from external services
- GET endpoints that need HTTP caching
- OpenAPI/Swagger documentation needed
**When NOT to use**:
- Internal data fetching (use Server Components)
- Mutations from your UI (use Server Actions)
@@ -138,8 +145,8 @@ export async function POST(request: NextRequest) {
```tsx
// Bad: Sequential waterfalls
async function Dashboard() {
const user = await getUser(); // Wait...
const posts = await getPosts(); // Then wait...
const comments = await getComments(); // Then wait...
return <div>...</div>;
}
```
```tsx
// Good: Parallel fetching
async function Dashboard() {
const [user, posts, comments] = await Promise.all([getUser(), getPosts(), getComments()]);
return <div>...</div>;
}
```

```tsx
// Server Component passes initial data down
async function Page() {
  const initialData = await getData();
  return <ClientComponent initialData={initialData} />;
}

// Client Component
'use client';
function ClientComponent({ initialData }) {
  const [data, setData] = useState(initialData);
  // ...
}
```

```tsx
function ClientComponent() {
  useEffect(() => {
    fetch('/api/data')
      .then((r) => r.json())
      .then(setData);
  }, []);
  // ...
}
```
## Quick Reference
| Pattern | Use Case | HTTP Method | Caching |
| ---------------------- | --------------------------- | ----------- | -------------------- |
| Server Component fetch | Internal reads | Any | Full Next.js caching |
| Server Action | Mutations, form submissions | POST only | No |
| Route Handler | External APIs, webhooks | Any | GET can be cached |
| Client fetch to API | Client-side reads | Any | HTTP cache headers |
### Available Tools
#### `get_errors`
Get current errors from dev server (build errors, runtime errors with source-mapped stacks):
```json
{ "name": "get_errors", "arguments": {} }
```
#### `get_routes`
Discover all routes by scanning filesystem:
```json
{ "name": "get_routes", "arguments": {} }
// Optional: { "name": "get_routes", "arguments": { "routerType": "app" } }
```
Returns: `{ "appRouter": ["/", "/api/users/[id]", ...], "pagesRouter": [...] }`
#### `get_project_metadata`
Get project path and dev server URL:
```json
{ "name": "get_project_metadata", "arguments": {} }
```
Returns: `{ "projectPath": "/path/to/project", "devServerUrl": "http://localhost:3000" }`
#### `get_page_metadata`
Get runtime metadata about current page render (requires active browser session):
```json
{ "name": "get_page_metadata", "arguments": {} }
```
Returns segment trie data showing layouts, boundaries, and page components.
#### `get_logs`
Get path to Next.js development log file:
```json
{ "name": "get_logs", "arguments": {} }
```
Returns path to `<distDir>/logs/next-development.log`
#### `get_server_action_by_id`
Locate a Server Action by ID:
```json
{ "name": "get_server_action_by_id", "arguments": { "actionId": "<action-id>" } }
```
```bash
next build --debug-build-paths "/blog/[slug]"
```
Use this to:
- Quickly verify a build fix without full rebuild
- Debug static generation issues for specific pages
- Iterate faster on build errors
These are React directives, not Next.js specific.
### `'use client'`
Marks a component as a Client Component. Required for:
- React hooks (`useState`, `useEffect`, etc.)
- Event handlers (`onClick`, `onChange`)
- Browser APIs (`window`, `localStorage`)
```tsx
'use client';
import { useState } from 'react';
export function Counter() {
const [count, setCount] = useState(0);
return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```
Reference: https://react.dev/reference/rsc/use-client

### `'use server'`
Marks a function as a Server Action. Can be passed to Client Components.
```tsx
'use server';
export async function submitForm(formData: FormData) {
// Runs on server
Or inline within a Server Component:
```tsx
export default function Page() {
async function submit() {
'use server';
// Runs on server
}
return <form action={submit}>...</form>;
}
```
Reference: https://react.dev/reference/rsc/use-server

### `'use cache'`
Marks a function or component for caching. Part of Next.js Cache Components.
```tsx
'use cache';
export async function getCachedData() {
return await fetchData();
}
```
Reference: https://nextjs.org/docs/app/getting-started/error-handling

### `error.tsx`
Catches errors in a route segment and its children:
```tsx
'use client';
export default function Error({ error, reset }: { error: Error & { digest?: string }; reset: () => void }) {
return (
<div>
<h2>Something went wrong!</h2>
<button onClick={() => reset()}>Try again</button>
</div>
);
}
```
### `global-error.tsx`
Catches errors in root layout:
```tsx
'use client';
export default function GlobalError({ error, reset }: { error: Error & { digest?: string }; reset: () => void }) {
return (
<html>
<body>
        <h2>Something went wrong!</h2>
<button onClick={() => reset()}>Try again</button>
</body>
</html>
);
}
```
`redirect()` works by throwing a special internal error, so it must not be swallowed by a surrounding try/catch.
Same applies to:
- `redirect()` - 307 temporary redirect
- `permanentRedirect()` - 308 permanent redirect
- `notFound()` - 404 not found
Use `unstable_rethrow()` to re-throw these errors in catch blocks:
```tsx
import { unstable_rethrow } from 'next/navigation';
async function action() {
try {
// ...
redirect('/success');
} catch (error) {
unstable_rethrow(error); // Re-throws Next.js internal errors
return { error: 'Something went wrong' };
}
}
```
## Redirects
```tsx
import { redirect, permanentRedirect } from 'next/navigation';
// 307 Temporary - use for most cases
redirect('/new-path');
// 308 Permanent - use for URL migrations (cached by browsers)
permanentRedirect('/new-url');
```
## Auth Errors
Trigger auth-related error pages:
```tsx
import { forbidden, unauthorized } from 'next/navigation';
async function Page() {
const session = await getSession();
if (!session) {
unauthorized(); // Renders unauthorized.tsx (401)
}
if (!session.hasAccess) {
forbidden(); // Renders forbidden.tsx (403)
}
return <Dashboard />;
}
```
Create corresponding error pages:
```tsx
// app/forbidden.tsx
export default function Forbidden() {
return <div>You don't have access to this resource</div>;
}
// app/unauthorized.tsx
export default function Unauthorized() {
return <div>Please log in to continue</div>;
}
```
## Not Found

```tsx
// app/not-found.tsx
export default function NotFound() {
  return (
    <div>
      <h2>Not Found</h2>
      <p>Could not find the requested resource</p>
    </div>
  );
}
```
### Triggering Not Found
```tsx
import { notFound } from 'next/navigation';
export default async function Page({ params }: { params: Promise<{ id: string }> }) {
const { id } = await params;
const post = await getPost(id);
if (!post) {
notFound(); // Renders closest not-found.tsx
}
return <div>{post.title}</div>;
}
```
## Special Files
| File | Purpose |
| --------------- | ---------------------------------------- |
| `page.tsx` | UI for a route segment |
| `layout.tsx` | Shared UI for segment and children |
| `loading.tsx` | Loading UI (Suspense boundary) |
| `error.tsx` | Error UI (Error boundary) |
| `not-found.tsx` | 404 UI |
| `route.ts` | API endpoint |
| `template.tsx` | Like layout but re-renders on navigation |
| `default.tsx` | Fallback for parallel routes |
## Route Segments
## Intercepting Routes
Conventions:
- `(.)` - same level
- `(..)` - one level up
- `(..)(..)` - two levels up
## Middleware / Proxy
| Version | File | Export | Config |
| ------- | --------------- | -------------- | ------------- |
| v14-15 | `middleware.ts` | `middleware()` | `config` |
| v16+ | `proxy.ts` | `proxy()` | `proxyConfig` |
**Migration**: Run `npx @next/codemod@latest upgrade` to auto-rename.
Use `next/font` for automatic font optimization with zero layout shift.
```tsx
// app/layout.tsx
import { Inter } from 'next/font/google';
const inter = Inter({ subsets: ['latin'] });
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html lang="en" className={inter.className}>
<body>{children}</body>
</html>
);
}
```
## Multiple Fonts
```tsx
import { Inter, Roboto_Mono } from 'next/font/google';
const inter = Inter({
subsets: ['latin'],
variable: '--font-inter',
});
const robotoMono = Roboto_Mono({
subsets: ['latin'],
variable: '--font-roboto-mono',
});
export default function RootLayout({ children }: { children: React.ReactNode }) {
return (
<html lang="en" className={`${inter.variable} ${robotoMono.variable}`}>
<body>{children}</body>
</html>
);
}
```
Use in CSS:
```css
body {
font-family: var(--font-inter);
}

code {
  font-family: var(--font-roboto-mono);
}
```

## Font Weights

```tsx
const inter = Inter({
subsets: ['latin'],
weight: '400',
});
// Multiple weights
const inter = Inter({
subsets: ['latin'],
weight: ['400', '500', '700'],
});
// Variable font (recommended) - includes all weights
const inter = Inter({
subsets: ['latin'],
// No weight needed - variable fonts support all weights
});
// With italic
const inter = Inter({
subsets: ['latin'],
style: ['normal', 'italic'],
});
```
## Local Fonts
```tsx
import localFont from 'next/font/local';
const myFont = localFont({
src: './fonts/MyFont.woff2',
});
// Multiple files for different weights
const myFont = localFont({
  src: [
    {
      path: './fonts/MyFont-Regular.woff2', // illustrative path
      weight: '400',
style: 'normal',
},
],
});
// Variable font
const myFont = localFont({
src: './fonts/MyFont-Variable.woff2',
variable: '--font-my-font',
});
```
## Tailwind CSS Integration
```tsx
// app/layout.tsx
import { Inter } from 'next/font/google';
const inter = Inter({
subsets: ['latin'],
variable: '--font-inter',
});
export default function RootLayout({ children }) {
return (
<html lang="en" className={inter.variable}>
<body>{children}</body>
</html>
);
}
```
```js
// tailwind.config.js
module.exports = {
  theme: {
    extend: {
      fontFamily: {
        sans: ['var(--font-inter)'],
      },
    },
  },
};
```
## Preloading Subsets
Only load the needed character subsets:
```tsx
// Latin only (most common)
const inter = Inter({ subsets: ['latin'] });
// Multiple subsets
const inter = Inter({ subsets: ['latin', 'latin-ext', 'cyrillic'] });
```
## Display Strategy
Control font loading behavior:

```tsx
const inter = Inter({
  subsets: ['latin'],
  display: 'swap', // Default - shows fallback, swaps when loaded
});

// Options:
// 'auto' - the browser decides
// 'block' - hide text briefly until the font loads
// 'swap' - show the fallback immediately, swap when loaded
// 'fallback' - short block period, then a short swap window
// 'optional' - use the font only if it is cached or loads very fast
```
```tsx
// For component-specific fonts, export from a shared file
// lib/fonts.ts
import { Inter, Playfair_Display } from 'next/font/google'
import { Inter, Playfair_Display } from 'next/font/google';
export const inter = Inter({ subsets: ['latin'], variable: '--font-inter' });
export const playfair = Playfair_Display({ subsets: ['latin'], variable: '--font-playfair' });
// components/Heading.tsx
import { playfair } from '@/lib/fonts';
export function Heading({ children }) {
return <h1 className={playfair.className}>{children}</h1>;
}
```
Reference: https://nextjs.org/docs/app/api-reference/functions
## Navigation Hooks (Client)
| Hook | Purpose | Reference |
| --------------------------- | -------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
| `useRouter` | Programmatic navigation (`push`, `replace`, `back`, `refresh`) | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-router) |
| `usePathname` | Get current pathname | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-pathname) |
| `useSearchParams` | Read URL search parameters | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-search-params) |
| `useParams` | Access dynamic route parameters | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-params) |
| `useSelectedLayoutSegment` | Active child segment (one level) | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-selected-layout-segment) |
| `useSelectedLayoutSegments` | All active segments below layout | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-selected-layout-segments) |
| `useLinkStatus` | Check link prefetch status | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-link-status) |
| `useReportWebVitals` | Report Core Web Vitals metrics | [Docs](https://nextjs.org/docs/app/api-reference/functions/use-report-web-vitals) |
## Server Functions
| Function | Purpose | Reference |
| ------------ | -------------------------------------------- | ---------------------------------------------------------------------- |
| `cookies` | Read/write cookies | [Docs](https://nextjs.org/docs/app/api-reference/functions/cookies) |
| `headers` | Read request headers | [Docs](https://nextjs.org/docs/app/api-reference/functions/headers) |
| `draftMode` | Enable preview of unpublished CMS content | [Docs](https://nextjs.org/docs/app/api-reference/functions/draft-mode) |
| `after` | Run code after response finishes streaming | [Docs](https://nextjs.org/docs/app/api-reference/functions/after) |
| `connection` | Wait for connection before dynamic rendering | [Docs](https://nextjs.org/docs/app/api-reference/functions/connection) |
| `userAgent` | Parse User-Agent header | [Docs](https://nextjs.org/docs/app/api-reference/functions/userAgent) |
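As a minimal sketch of the server functions above in a route handler (the cookie and header names are illustrative; `cookies()`/`headers()` are async as of Next 15):

```tsx
// app/api/context/route.ts — illustrative path
import { cookies, headers } from 'next/headers';

export async function GET() {
  const cookieStore = await cookies();
  const theme = cookieStore.get('theme')?.value ?? 'light';

  const headerList = await headers();
  const locale = headerList.get('accept-language') ?? 'en';

  return Response.json({ theme, locale });
}
```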
## Generate Functions
| Function | Purpose | Reference |
| ----------------------- | --------------------------------------- | ----------------------------------------------------------------------------------- |
| `generateStaticParams` | Pre-render dynamic routes at build time | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-static-params) |
| `generateMetadata` | Dynamic metadata | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-metadata) |
| `generateViewport` | Dynamic viewport config | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-viewport) |
| `generateSitemaps` | Multiple sitemaps for large sites | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-sitemaps) |
| `generateImageMetadata` | Multiple OG images per route | [Docs](https://nextjs.org/docs/app/api-reference/functions/generate-image-metadata) |
## Request/Response
| Function | Purpose | Reference |
| --------------- | ------------------------------ | -------------------------------------------------------------------------- |
| `NextRequest` | Extended Request with helpers | [Docs](https://nextjs.org/docs/app/api-reference/functions/next-request) |
| `NextResponse` | Extended Response with helpers | [Docs](https://nextjs.org/docs/app/api-reference/functions/next-response) |
| `ImageResponse` | Generate OG images | [Docs](https://nextjs.org/docs/app/api-reference/functions/image-response) |
## Common Examples
@@ -54,30 +54,30 @@ Use `next/link` for internal navigation instead of `<a>` tags.
```tsx
// Bad: Plain anchor tag
<a href="/about">About</a>;
// Good: Next.js Link
import Link from 'next/link';
<Link href="/about">About</Link>;
```
Active link styling:
```tsx
'use client';
import Link from 'next/link';
import { usePathname } from 'next/navigation';
export function NavLink({ href, children }) {
const pathname = usePathname();
return (
<Link href={href} className={pathname === href ? 'active' : ''}>
{children}
</Link>
);
}
```
@@ -86,23 +86,23 @@ export function NavLink({ href, children }) {
```tsx
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
const posts = await getPosts();
return posts.map((post) => ({ slug: post.slug }));
}
```
### After Response
```tsx
import { after } from 'next/server';
export async function POST(request: Request) {
const data = await processRequest(request);
after(async () => {
await logAnalytics(data);
});
return Response.json({ success: true });
}
```
@@ -17,16 +17,16 @@ In development, click the hydration error to see the server/client diff.
```tsx
// Bad: Causes mismatch - window doesn't exist on server
<div>{window.innerWidth}</div>;
// Good: Use client component with mounted check
'use client';
import { useState, useEffect } from 'react';
export function ClientOnly({ children }: { children: React.ReactNode }) {
const [mounted, setMounted] = useState(false);
useEffect(() => setMounted(true), []);
return mounted ? children : null;
}
```
@@ -36,12 +36,12 @@ Server and client may be in different timezones:
```tsx
// Bad: Causes mismatch
<span>{new Date().toLocaleString()}</span>;
// Good: Render on client only
'use client';
const [time, setTime] = useState<string>();
useEffect(() => setTime(new Date().toLocaleString()), []);
```
### Random Values or IDs
@@ -78,14 +78,9 @@ Scripts that modify DOM during hydration.
```tsx
// Good: Use next/script with afterInteractive
import Script from 'next/script';
export default function Page() {
return <Script src="https://example.com/script.js" strategy="afterInteractive" />;
}
```
@@ -0,0 +1,79 @@
# i18n (Thai / English)
LCBP3 frontend **must not** hardcode Thai or English UI strings in components.
## Rules
1. **All user-facing strings go through the i18n layer** (`next-intl` / `i18next` — check `frontend/package.json`).
2. **Keys use kebab-case**, namespaced by feature:
- `correspondence.list.title`
- `correspondence.form.submit`
- `common.actions.cancel`
3. **Comments in code remain Thai** (business logic explanation); **only UI copy** goes through i18n.
4. **Error messages** from backend (via ADR-007 `userMessage`) are already localized server-side — render them directly, don't translate client-side.
---
## ❌ Wrong
```tsx
export function CorrespondenceHeader() {
return <h1>รายการหนังสือติดต่อ</h1>; // ❌ hardcoded Thai
}
toast.success('บันทึกสำเร็จ'); // ❌ hardcoded
```
---
## ✅ Right
```tsx
import { useTranslations } from 'next-intl';
export function CorrespondenceHeader() {
const t = useTranslations('correspondence.list');
return <h1>{t('title')}</h1>;
}
toast.success(t('save.success'));
```
Translation files:
```json
// messages/th.json
{
"correspondence": {
"list": { "title": "รายการหนังสือติดต่อ" },
"save": { "success": "บันทึกสำเร็จ" }
}
}
// messages/en.json
{
"correspondence": {
"list": { "title": "Correspondence List" },
"save": { "success": "Saved successfully" }
}
}
```
---
## Zod Error Messages
Zod error messages shown in forms **do** stay in Thai inline (per `specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md`), because they're schema-bound and rarely need translation. If dual-language support becomes required, wrap with an i18n-aware resolver:
```ts
const schema = z.object({
projectUuid: z.string().uuid(t('validation.project.required')),
});
```
---
## Reference
- [i18n Guidelines](../../../specs/05-Engineering-Guidelines/05-08-i18n-guidelines.md)
- [Frontend Guidelines](../../../specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md)
@@ -6,11 +6,11 @@ Use `next/image` for automatic image optimization.
```tsx
// Bad: Avoid native img
<img src="/hero.png" alt="Hero" />;
// Good: Use next/image
import Image from 'next/image';
<Image src="/hero.png" alt="Hero" width={800} height={400} />;
```
## Required Props
@@ -51,7 +51,7 @@ module.exports = {
},
],
},
};
```
## Responsive Images
@@ -155,19 +155,19 @@ When using `output: 'export'`, use `unoptimized` or custom loader:
```tsx
// Option 1: Disable optimization
<Image src="/hero.png" alt="Hero" width={800} height={400} unoptimized />;
// Option 2: Global config
// next.config.js
module.exports = {
output: 'export',
images: { unoptimized: true },
};
// Option 3: Custom loader (Cloudinary, Imgix, etc.)
const cloudinaryLoader = ({ src, width, quality }) => {
return `https://res.cloudinary.com/demo/image/upload/w_${width},q_${quality || 75}/${src}`;
};
<Image loader={cloudinaryLoader} src="sample.jpg" alt="Sample" width={800} height={400} />;
```
@@ -7,6 +7,7 @@ Add SEO metadata to Next.js pages using the Metadata API.
The `metadata` object and `generateMetadata` function are **only supported in Server Components**. They cannot be used in Client Components.
If the target page has `'use client'`:
1. Remove `'use client'` if possible, move client logic to child components
2. Or extract metadata to a parent Server Component layout
3. Or split the file: Server Component with metadata imports Client Components
@@ -14,25 +15,25 @@ If the target page has `'use client'`:
## Static Metadata
```tsx
import type { Metadata } from 'next';
export const metadata: Metadata = {
title: 'Page Title',
description: 'Page description for search engines',
};
```
## Dynamic Metadata
```tsx
import type { Metadata } from 'next';
type Props = { params: Promise<{ slug: string }> };
export async function generateMetadata({ params }: Props): Promise<Metadata> {
const { slug } = await params;
const post = await getPost(slug);
return { title: post.title, description: post.description };
}
```
@@ -41,11 +42,11 @@ export async function generateMetadata({ params }: Props): Promise<Metadata> {
Use React `cache()` when the same data is needed for both metadata and page:
```tsx
import { cache } from 'react';
export const getPost = cache(async (slug: string) => {
return await db.posts.findFirst({ where: { slug } });
});
```
## Viewport
@@ -53,17 +54,17 @@ export const getPost = cache(async (slug: string) => {
Separate from metadata for streaming support:
```tsx
import type { Viewport } from 'next';
export const viewport: Viewport = {
width: 'device-width',
initialScale: 1,
themeColor: '#000000',
};
// Or dynamic
export function generateViewport({ params }): Viewport {
return { themeColor: getThemeColor(params) };
}
```
@@ -74,7 +75,7 @@ In root layout for consistent naming:
```tsx
export const metadata: Metadata = {
title: { default: 'Site Name', template: '%s | Site Name' },
};
```
## Metadata File Conventions
@@ -83,16 +84,16 @@ Reference: https://nextjs.org/docs/app/getting-started/project-structure#metadat
Place these files in `app/` directory (or route segments):
| File | Purpose |
| ------------------------------- | --------------------------------------------- |
| `favicon.ico` | Favicon |
| `icon.png` / `icon.svg` | App icon |
| `apple-icon.png` | Apple app icon |
| `opengraph-image.png` | OG image |
| `twitter-image.png` | Twitter card image |
| `sitemap.ts` / `sitemap.xml` | Sitemap (use `generateSitemaps` for multiple) |
| `robots.ts` / `robots.txt` | Robots directives |
| `manifest.ts` / `manifest.json` | Web app manifest |
## SEO Best Practice: Static Files Are Often Enough
@@ -108,6 +109,7 @@ app/
```
**Tips:**
- A single `opengraph-image.png` covers both Open Graph and Twitter (Twitter falls back to OG)
- Static `title` and `description` in layout metadata is sufficient for most pages
- Only use dynamic `generateMetadata` when content varies per page
@@ -126,7 +128,7 @@ Generate dynamic Open Graph images using `next/og`.
```tsx
// Good
import { ImageResponse } from 'next/og';
// Bad
// import { ImageResponse } from '@vercel/og'
@@ -137,11 +139,11 @@ import { ImageResponse } from 'next/og'
```tsx
// app/opengraph-image.tsx
import { ImageResponse } from 'next/og';
export const alt = 'Site Name';
export const size = { width: 1200, height: 630 };
export const contentType = 'image/png';
export default function Image() {
return new ImageResponse(
@@ -161,7 +163,7 @@ export default function Image() {
</div>
),
{ ...size }
);
}
```
@@ -169,17 +171,17 @@ export default function Image() {
```tsx
// app/blog/[slug]/opengraph-image.tsx
import { ImageResponse } from 'next/og';
export const alt = 'Blog Post';
export const size = { width: 1200, height: 630 };
export const contentType = 'image/png';
type Props = { params: Promise<{ slug: string }> };
export default async function Image({ params }: Props) {
const { slug } = await params;
const post = await getPost(slug);
return new ImageResponse(
(
@@ -202,33 +204,26 @@ export default async function Image({ params }: Props) {
</div>
),
{ ...size }
);
}
```
## Custom Fonts
```tsx
import { ImageResponse } from 'next/og';
import { join } from 'path';
import { readFile } from 'fs/promises';
export default async function Image() {
const fontPath = join(process.cwd(), 'assets/fonts/Inter-Bold.ttf');
const fontData = await readFile(fontPath);
return new ImageResponse(<div style={{ fontFamily: 'Inter', fontSize: 64 }}>Custom Font Text</div>, {
width: 1200,
height: 630,
fonts: [{ name: 'Inter', data: fontData, style: 'normal' }],
});
}
```
@@ -240,6 +235,7 @@ export default async function Image() {
## Styling Notes
ImageResponse uses Flexbox layout:
- Use `display: 'flex'`
- No CSS Grid support
- Styles must be inline objects
@@ -250,22 +246,22 @@ Use `generateImageMetadata` for multiple images per route:
```tsx
// app/blog/[slug]/opengraph-image.tsx
import { ImageResponse } from 'next/og';
export async function generateImageMetadata({ params }) {
const images = await getPostImages(params.slug);
return images.map((img, idx) => ({
id: idx,
alt: img.alt,
size: { width: 1200, height: 630 },
contentType: 'image/png',
}));
}
export default async function Image({ params, id }) {
const images = await getPostImages(params.slug);
const image = images[id];
return new ImageResponse(/* ... */);
}
```
@@ -275,26 +271,22 @@ Use `generateSitemaps` for large sites:
```tsx
// app/sitemap.ts
import type { MetadataRoute } from 'next';
export async function generateSitemaps() {
// Return array of sitemap IDs
return [{ id: 0 }, { id: 1 }, { id: 2 }];
}
export default async function sitemap({ id }: { id: number }): Promise<MetadataRoute.Sitemap> {
const start = id * 50000;
const end = start + 50000;
const products = await getProducts(start, end);
return products.map((product) => ({
url: `https://example.com/product/${product.id}`,
lastModified: product.updatedAt,
}));
}
```
@@ -24,13 +24,7 @@ app/
```tsx
// app/layout.tsx
export default function RootLayout({ children, modal }: { children: React.ReactNode; modal: React.ReactNode }) {
return (
<html>
<body>
@@ -63,11 +57,7 @@ The `(.)` prefix intercepts routes at the same level.
// app/@modal/(.)photos/[id]/page.tsx
import { Modal } from '@/components/modal';
export default async function PhotoModal({ params }: { params: Promise<{ id: string }> }) {
const { id } = await params;
const photo = await getPhoto(id);
@@ -83,11 +73,7 @@ export default async function PhotoModal({
```tsx
// app/photos/[id]/page.tsx
export default async function PhotoPage({ params }: { params: Promise<{ id: string }> }) {
const { id } = await params;
const photo = await getPhoto(id);
@@ -127,11 +113,14 @@ export function Modal({ children }: { children: React.ReactNode }) {
}, [router]);
// Close on overlay click
const handleOverlayClick = useCallback(
(e: React.MouseEvent) => {
if (e.target === overlayRef.current) {
router.back(); // Correct
}
},
[router]
);
return (
<div
@@ -156,11 +145,13 @@ export function Modal({ children }: { children: React.ReactNode }) {
### Why NOT `router.push('/')` or `<Link href="/">`?
Using `push` or `Link` to "close" a modal:
1. Adds a new history entry (back button shows modal again)
2. Doesn't properly clear the intercepted route
3. Can cause the modal to flash or persist unexpectedly
`router.back()` correctly:
1. Removes the intercepted route from history
2. Returns to the previous page
3. Properly unmounts the modal
@@ -169,18 +160,19 @@ Using `push` or `Link` to "close" a modal:
Matchers match **route segments**, not filesystem paths:
| Matcher | Matches | Example |
| ---------- | ------------- | --------------------------------------------------------------------- |
| `(.)` | Same level | `@modal/(.)photos` intercepts `/photos` |
| `(..)` | One level up | `@modal/(..)settings` from `/dashboard/@modal` intercepts `/settings` |
| `(..)(..)` | Two levels up | Rarely used |
| `(...)` | From root | `@modal/(...)photos` intercepts `/photos` from anywhere |
**Common mistake**: thinking `(..)` means "parent folder"; it means "parent route segment", so count route levels, not directories.
## Handling Hard Navigation
When users directly visit `/photos/123` (bookmark, refresh, shared link):
- The intercepting route is bypassed
- The full `photos/[id]/page.tsx` renders
- Modal doesn't appear (expected behavior)
@@ -230,6 +222,7 @@ app/
### 4. Intercepted Route Shows Wrong Content
Check your matcher:
- `(.)photos` intercepts `/photos` from the same route level
- If your `@modal` is in `app/dashboard/@modal`, use `(.)photos` to intercept `/dashboard/photos`, not `/photos`
@@ -272,7 +265,7 @@ export default async function Gallery() {
return (
<div className="grid grid-cols-3 gap-4">
{photos.map((photo) => (
<Link key={photo.id} href={`/photos/${photo.id}`}>
<img src={photo.thumbnail} alt={photo.title} />
</Link>
@@ -7,14 +7,14 @@ Create API endpoints with `route.ts` files.
```tsx
// app/api/users/route.ts
export async function GET() {
const users = await getUsers();
return Response.json(users);
}
export async function POST(request: Request) {
const body = await request.json();
const user = await createUser(body);
return Response.json(user, { status: 201 });
}
```
@@ -60,11 +60,11 @@ Route handlers run in a **Server Component-like environment**:
```tsx
// Bad: This won't work - no React DOM in route handlers
import { renderToString } from 'react-dom/server';
export async function GET() {
const html = renderToString(<Component />); // Error!
return new Response(html);
}
```
@@ -72,18 +72,15 @@ export async function GET() {
```tsx
// app/api/users/[id]/route.ts
export async function GET(request: Request, { params }: { params: Promise<{ id: string }> }) {
const { id } = await params;
const user = await getUser(id);
if (!user) {
return Response.json({ error: 'Not found' }, { status: 404 });
}
return Response.json(user);
}
```
@@ -92,17 +89,17 @@ export async function GET(
```tsx
export async function GET(request: Request) {
// URL and search params
const { searchParams } = new URL(request.url);
const query = searchParams.get('q');
// Headers
const authHeader = request.headers.get('authorization');
// Cookies (Next.js helper)
const cookieStore = await cookies();
const token = cookieStore.get('token');
return Response.json({ query, token });
}
```
@@ -110,37 +107,37 @@ export async function GET(request: Request) {
```tsx
// JSON response
return Response.json({ data });
// With status
return Response.json({ error: 'Not found' }, { status: 404 });
// With headers
return Response.json(data, {
headers: {
'Cache-Control': 'max-age=3600',
},
});
// Redirect
return Response.redirect(new URL('/login', request.url));
// Stream
return new Response(stream, {
headers: { 'Content-Type': 'text/event-stream' },
});
```
## When to Use Route Handlers vs Server Actions
| Use Case | Route Handlers | Server Actions |
| ------------------------ | -------------- | -------------- |
| Form submissions | No | Yes |
| Data mutations from UI | No | Yes |
| Third-party webhooks | Yes | No |
| External API consumption | Yes | No |
| Public REST API | Yes | No |
| File uploads | Both work | Both work |
**Prefer Server Actions** for mutations triggered from your UI.
**Use Route Handlers** for external integrations and public APIs.
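A hedged sketch of the webhook case (the header name and the `verifySignature`/`handleEvent` helpers are hypothetical placeholders; real signature schemes are provider-specific):

```tsx
// app/api/webhooks/route.ts — illustrative path
// Hypothetical helpers; implementations depend on the provider.
declare function verifySignature(payload: string, signature: string): boolean;
declare function handleEvent(event: unknown): Promise<void>;

export async function POST(request: Request) {
  const signature = request.headers.get('x-webhook-signature');
  const payload = await request.text(); // verify the raw body, not parsed JSON

  if (!signature || !verifySignature(payload, signature)) {
    return Response.json({ error: 'Invalid signature' }, { status: 401 });
  }

  await handleEvent(JSON.parse(payload));
  return Response.json({ received: true });
}
```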
@@ -12,33 +12,33 @@ Client components **cannot** be async functions. Only Server Components can be a
```tsx
// Bad: async client component
'use client';
export default async function UserProfile() {
const user = await getUser(); // Cannot await in client component
return <div>{user.name}</div>;
}
// Good: Remove async, fetch data in parent server component
// page.tsx (server component - no 'use client')
export default async function Page() {
const user = await getUser();
return <UserProfile user={user} />;
}
// UserProfile.tsx (client component)
'use client';
export function UserProfile({ user }: { user: User }) {
return <div>{user.name}</div>;
}
```
```tsx
// Bad: async arrow function client component
'use client';
const Dashboard = async () => {
const data = await fetchDashboard();
return <div>{data}</div>;
};
// Good: Fetch in server component, pass data down
```
@@ -48,6 +48,7 @@ const Dashboard = async () => {
Props passed from Server → Client must be JSON-serializable.
**Detect:** Server component passes these to a client component:
- Functions (except Server Actions with `'use server'`)
- `Date` objects
- `Map`, `Set`, `WeakMap`, `WeakSet`
@@ -59,16 +60,16 @@ Props passed from Server → Client must be JSON-serializable.
// Bad: Function prop
// page.tsx (server)
export default function Page() {
const handleClick = () => console.log('clicked');
return <ClientButton onClick={handleClick} />;
}
// Good: Define function inside client component
// ClientButton.tsx
'use client';
export function ClientButton() {
const handleClick = () => console.log('clicked');
return <button onClick={handleClick}>Click</button>;
}
```
@@ -76,28 +77,28 @@ export function ClientButton() {
// Bad: Date object (silently becomes string, then crashes)
// page.tsx (server)
export default async function Page() {
const post = await getPost();
return <PostCard createdAt={post.createdAt} />; // Date object
}
// PostCard.tsx (client) - will crash on .getFullYear()
'use client';
export function PostCard({ createdAt }: { createdAt: Date }) {
return <span>{createdAt.getFullYear()}</span>; // Runtime error!
}
// Good: Serialize to string on server
// page.tsx (server)
export default async function Page() {
const post = await getPost();
return <PostCard createdAt={post.createdAt.toISOString()} />;
}
// PostCard.tsx (client)
'use client';
export function PostCard({ createdAt }: { createdAt: string }) {
const date = new Date(createdAt);
return <span>{date.getFullYear()}</span>;
}
```
@@ -127,33 +128,33 @@ Functions marked with `'use server'` CAN be passed to client components.
```tsx
// Valid: Server Action can be passed
// actions.ts
'use server';
export async function submitForm(formData: FormData) {
// server-side logic
}
// page.tsx (server)
import { submitForm } from './actions';
export default function Page() {
return <ClientForm onSubmit={submitForm} />; // OK!
}
// ClientForm.tsx (client)
'use client';
export function ClientForm({ onSubmit }: { onSubmit: (data: FormData) => Promise<void> }) {
return <form action={onSubmit}>...</form>;
}
```
## Quick Reference
| Pattern | Valid? | Fix |
| --------------------------------- | ------ | ------------------------------------- |
| `'use client'` + `async function` | No | Fetch in server parent, pass data |
| Pass `() => {}` to client | No | Define in client or use server action |
| Pass `new Date()` to client | No | Use `.toISOString()` |
| Pass `new Map()` to client | No | Convert to object/array |
| Pass class instance to client | No | Pass plain object |
| Pass server action to client | Yes | - |
| Pass `string/number/boolean` | Yes | - |
| Pass plain object/array | Yes | - |
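The `Map` and `Date` rows above reduce to plain conversions before props cross the server/client boundary; a minimal sketch (values are illustrative):

```typescript
// Server side: convert non-serializable values to JSON-safe shapes.
const tags = new Map<string, number>([['next', 2], ['react', 5]]);
const createdAt = new Date('2026-04-21T00:00:00Z');

const serializableTags = Array.from(tags.entries()); // plain array of pairs
const serializableDate = createdAt.toISOString();    // plain string

// Client side: rebuild the rich values from the serialized props.
const restoredTags = new Map(serializableTags);
const restoredDate = new Date(serializableDate);
```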
@@ -32,6 +32,7 @@ export const runtime = 'edge'
## Detection
**Before adding `runtime = 'edge'`**, check:
1. Does the project already use Edge runtime?
2. Is there a specific latency requirement?
3. Are all dependencies Edge-compatible?
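Once the checklist passes, the opt-in itself is a one-line export; a sketch (the route path is illustrative, and the geo header is a platform-specific assumption, e.g. on Vercel):

```tsx
// app/api/geo/route.ts — illustrative path
export const runtime = 'edge';

export async function GET(request: Request) {
  // Edge runtime exposes Web APIs only (fetch, Request/Response, URL, crypto).
  const country = request.headers.get('x-vercel-ip-country') ?? 'unknown';
  return Response.json({ country });
}
```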
@@ -8,12 +8,12 @@ Always use `next/script` instead of native `<script>` tags for better performanc
```tsx
// Bad: Native script tag
<script src="https://example.com/script.js"></script>;
// Good: Next.js Script component
import Script from 'next/script';
<Script src="https://example.com/script.js" />;
```
## Inline Scripts Need ID
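A minimal sketch of the rule (the `id` value and inline snippet are illustrative): inline `<Script>` bodies need a stable `id` so Next.js can track and deduplicate them across renders.

```tsx
import Script from 'next/script';

// The id lets Next.js track this inline script across renders.
<Script id="analytics-bootstrap" strategy="afterInteractive">
  {`window.dataLayer = window.dataLayer || [];`}
</Script>;
```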
@@ -100,7 +100,7 @@ export default function Layout({ children }) {
## Google Tag Manager
```tsx
import { GoogleTagManager } from '@next/third-parties/google';
export default function Layout({ children }) {
return (
@@ -108,7 +108,7 @@ export default function Layout({ children }) {
<GoogleTagManager gtmId="GTM-XXXXX" />
<body>{children}</body>
</html>
);
}
```
@@ -116,26 +116,22 @@ export default function Layout({ children }) {
```tsx
// YouTube embed
import { YouTubeEmbed } from '@next/third-parties/google';
<YouTubeEmbed videoid="dQw4w9WgXcQ" />;
// Google Maps
import { GoogleMapsEmbed } from '@next/third-parties/google';
<GoogleMapsEmbed apiKey="YOUR_API_KEY" mode="place" q="Brooklyn+Bridge,New+York,NY" />;
```
## Quick Reference
| Pattern | Issue | Fix |
|---------|-------|-----|
| `<script src="...">` | No optimization | Use `next/script` |
| `<Script>` without id | Can't track inline scripts | Add `id` attribute |
| `<Script>` inside `<Head>` | Wrong placement | Move outside Head |
| Inline GA/GTM scripts | No optimization | Use `@next/third-parties` |
| `strategy="beforeInteractive"` outside layout | Won't work | Only use in root layout |
| Pattern | Issue | Fix |
| --------------------------------------------- | -------------------------- | ------------------------- |
| `<script src="...">` | No optimization | Use `next/script` |
| `<Script>` without id | Can't track inline scripts | Add `id` attribute |
| `<Script>` inside `<Head>` | Wrong placement | Move outside Head |
| Inline GA/GTM scripts | No optimization | Use `@next/third-parties` |
| `strategy="beforeInteractive"` outside layout | Won't work | Only use in root layout |
@@ -77,12 +77,12 @@ services:
web:
build: .
ports:
- "3000:3000"
- '3000:3000'
environment:
- NODE_ENV=production
restart: unless-stopped
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:3000/api/health"]
test: ['CMD', 'wget', '-q', '--spider', 'http://localhost:3000/api/health']
interval: 30s
timeout: 10s
retries: 3
@@ -95,16 +95,18 @@ For traditional server deployments:
```js
// ecosystem.config.js
module.exports = {
apps: [{
name: 'nextjs',
script: '.next/standalone/server.js',
instances: 'max',
exec_mode: 'cluster',
env: {
NODE_ENV: 'production',
PORT: 3000,
apps: [
{
name: 'nextjs',
script: '.next/standalone/server.js',
instances: 'max',
exec_mode: 'cluster',
env: {
NODE_ENV: 'production',
PORT: 3000,
},
},
}],
],
};
```
@@ -168,11 +170,7 @@ module.exports = class CacheHandler {
// Set TTL based on revalidate option
if (ctx?.revalidate) {
await redis.setex(
CACHE_PREFIX + key,
ctx.revalidate,
JSON.stringify(cacheData)
);
await redis.setex(CACHE_PREFIX + key, ctx.revalidate, JSON.stringify(cacheData));
} else {
await redis.set(CACHE_PREFIX + key, JSON.stringify(cacheData));
}
@@ -197,10 +195,12 @@ const BUCKET = process.env.CACHE_BUCKET;
module.exports = class CacheHandler {
async get(key) {
try {
const response = await s3.send(new GetObjectCommand({
Bucket: BUCKET,
Key: `cache/${key}`,
}));
const response = await s3.send(
new GetObjectCommand({
Bucket: BUCKET,
Key: `cache/${key}`,
})
);
const body = await response.Body.transformToString();
return JSON.parse(body);
} catch (err) {
@@ -210,32 +210,34 @@ module.exports = class CacheHandler {
}
async set(key, data, ctx) {
await s3.send(new PutObjectCommand({
Bucket: BUCKET,
Key: `cache/${key}`,
Body: JSON.stringify({
value: data,
lastModified: Date.now(),
}),
ContentType: 'application/json',
}));
await s3.send(
new PutObjectCommand({
Bucket: BUCKET,
Key: `cache/${key}`,
Body: JSON.stringify({
value: data,
lastModified: Date.now(),
}),
ContentType: 'application/json',
})
);
}
};
```
## What Works vs What Needs Setup
| Feature | Single Instance | Multi-Instance | Notes |
|---------|----------------|----------------|-------|
| SSR | Yes | Yes | No special setup |
| SSG | Yes | Yes | Built at deploy time |
| ISR | Yes | Needs cache handler | Filesystem cache breaks |
| Image Optimization | Yes | Yes | CPU-intensive, consider CDN |
| Middleware | Yes | Yes | Runs on Node.js |
| Edge Runtime | Limited | Limited | Some features Node-only |
| `revalidatePath/Tag` | Yes | Needs cache handler | Must share cache |
| `next/font` | Yes | Yes | Fonts bundled at build |
| Draft Mode | Yes | Yes | Cookie-based |
| Feature | Single Instance | Multi-Instance | Notes |
| -------------------- | --------------- | ------------------- | --------------------------- |
| SSR | Yes | Yes | No special setup |
| SSG | Yes | Yes | Built at deploy time |
| ISR | Yes | Needs cache handler | Filesystem cache breaks |
| Image Optimization | Yes | Yes | CPU-intensive, consider CDN |
| Middleware | Yes | Yes | Runs on Node.js |
| Edge Runtime | Limited | Limited | Some features Node-only |
| `revalidatePath/Tag` | Yes | Needs cache handler | Must share cache |
| `next/font` | Yes | Yes | Fonts bundled at build |
| Draft Mode | Yes | Yes | Cookie-based |
## Image Optimization
@@ -244,6 +246,7 @@ Next.js Image Optimization works out of the box but is CPU-intensive.
### Option 1: Built-in (Simple)
Works automatically, but consider:
- Set `deviceSizes` and `imageSizes` in config to limit variants
- Use `minimumCacheTTL` to reduce regeneration
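A sketch of those two knobs (the values below are illustrative assumptions, not recommendations; tune them to the sizes your layouts actually render):

```js
// next.config.js (illustrative values)
module.exports = {
  images: {
    deviceSizes: [640, 828, 1200, 1920], // limit full-width variants generated
    imageSizes: [32, 64, 128, 256], // limit fixed-size variants generated
    minimumCacheTTL: 86400, // cache optimized images for at least a day
  },
};
```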
@@ -317,6 +320,7 @@ npx @opennextjs/aws build
```
Supports:
- AWS Lambda + CloudFront
- Cloudflare Workers
- Netlify Functions
@@ -8,27 +8,27 @@ Always requires Suspense boundary in static routes. Without it, the entire page
```tsx
// Bad: Entire page becomes CSR
'use client'
'use client';
import { useSearchParams } from 'next/navigation'
import { useSearchParams } from 'next/navigation';
export default function SearchBar() {
const searchParams = useSearchParams()
return <div>Query: {searchParams.get('q')}</div>
const searchParams = useSearchParams();
return <div>Query: {searchParams.get('q')}</div>;
}
```
```tsx
// Good: Wrap in Suspense
import { Suspense } from 'react'
import SearchBar from './search-bar'
import { Suspense } from 'react';
import SearchBar from './search-bar';
export default function Page() {
return (
<Suspense fallback={<div>Loading...</div>}>
<SearchBar />
</Suspense>
)
);
}
```
@@ -39,12 +39,12 @@ Requires Suspense boundary when route has dynamic parameters.
```tsx
// In dynamic route [slug]
// Bad: No Suspense
'use client'
import { usePathname } from 'next/navigation'
'use client';
import { usePathname } from 'next/navigation';
export function Breadcrumb() {
const pathname = usePathname()
return <nav>{pathname}</nav>
const pathname = usePathname();
return <nav>{pathname}</nav>;
}
```
@@ -59,9 +59,9 @@ If you use `generateStaticParams`, Suspense is optional.
## Quick Reference
| Hook | Suspense Required |
|------|-------------------|
| `useSearchParams()` | Yes |
| `usePathname()` | Yes (dynamic routes) |
| `useParams()` | No |
| `useRouter()` | No |
| Hook | Suspense Required |
| ------------------- | -------------------- |
| `useSearchParams()` | Yes |
| `usePathname()` | Yes (dynamic routes) |
| `useParams()` | No |
| `useRouter()` | No |
@@ -0,0 +1,100 @@
# Two-Phase File Upload (Frontend)
Pair with [backend two-phase upload rule](../nestjs-best-practices/rules/security-file-two-phase-upload.md).
## Flow
```
User drops file
→ POST /files/upload (temp) → { tempId, expiresAt }
→ store tempId in form state
→ user submits form
→ POST /correspondences (with tempFileIds) → backend commits in transaction
```
## Hook Pattern
```tsx
'use client';
import { useDropzone } from 'react-dropzone';
import { useMutation } from '@tanstack/react-query';
export function useTwoPhaseUpload() {
const uploadTemp = useMutation({
mutationFn: async (file: File) => {
const fd = new FormData();
fd.append('file', file);
const { data } = await apiClient.post<{ tempId: string; expiresAt: string }>(
'/files/upload',
fd,
);
return data;
},
});
return uploadTemp;
}
```
## Form Integration (RHF)
```tsx
export function CorrespondenceForm() {
const form = useForm<FormData>({ resolver: zodResolver(schema) });
const uploadTemp = useTwoPhaseUpload();
const [tempFileIds, setTempFileIds] = useState<string[]>([]);
const { getRootProps, getInputProps } = useDropzone({
accept: {
'application/pdf': ['.pdf'],
'image/vnd.dwg': ['.dwg'],
'application/vnd.openxmlformats-officedocument.wordprocessingml.document': ['.docx'],
'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet': ['.xlsx'],
'application/zip': ['.zip'],
},
maxSize: 50 * 1024 * 1024, // 50 MB — must match backend
onDrop: async (files) => {
const results = await Promise.all(files.map((f) => uploadTemp.mutateAsync(f)));
setTempFileIds((prev) => [...prev, ...results.map((r) => r.tempId)]);
},
});
const onSubmit = async (values: FormData) => {
await correspondenceService.create({
...values,
tempFileIds, // committed server-side in the same DB transaction
});
setTempFileIds([]);
};
return (
<form onSubmit={form.handleSubmit(onSubmit)}>
<div {...getRootProps()} className="dropzone">
<input {...getInputProps()} />
<p>{t('upload.dragDrop')}</p>
</div>
{/* other fields */}
</form>
);
}
```
## Rules
- **Whitelist MIME types** — must mirror backend ADR-016 whitelist (`.pdf`, `.dwg`, `.docx`, `.xlsx`, `.zip`).
- **50 MB cap** — enforce client-side too (better UX) plus server-side (authoritative).
- **Show temp-file pills** with remove button — users see what will be attached.
- **Clear `tempFileIds` on success/cancel** — prevent stale IDs on subsequent submits.
- **No retry of expired temps** — if `expiresAt` passed, prompt re-upload.
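The expiry rule can be sketched as a small helper (hypothetical names, not part of the codebase): before submitting, split temp files into still-committable IDs and expired ones that need re-upload.

```typescript
// Hypothetical shape of one temp upload result from POST /files/upload.
interface TempFile {
  tempId: string;
  expiresAt: string; // ISO timestamp returned by the backend
}

// True while the backend can still commit this temp upload.
function isTempFileValid(file: TempFile, now: Date = new Date()): boolean {
  return new Date(file.expiresAt).getTime() > now.getTime();
}

// Partition into committable IDs and files the user must re-upload.
function splitByExpiry(files: TempFile[], now: Date = new Date()) {
  const valid = files.filter((f) => isTempFileValid(f, now));
  const expired = files.filter((f) => !isTempFileValid(f, now));
  return { tempFileIds: valid.map((f) => f.tempId), expired };
}
```

On submit, send only `tempFileIds` and prompt re-upload for anything in `expired`.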
## ❌ Forbidden
- ❌ Uploading directly to permanent storage endpoint (no commit phase)
- ❌ Hardcoded MIME list in component (keep in shared constant file mirrored from backend)
- ❌ Ignoring `maxSize` — backend will reject but UX suffers
## Reference
- [ADR-016 Security](../../../specs/06-Decision-Records/ADR-016-security-authentication.md)
- Backend rule: [`security-file-two-phase-upload.md`](../nestjs-best-practices/rules/security-file-two-phase-upload.md)
@@ -0,0 +1,257 @@
# UUID Handling (ADR-019) — March 2026 Pattern
**Project-specific: Hybrid Identifier Strategy for NAP-DMS**
This project uses ADR-019: INT Primary Key (internal) + UUIDv7 (public API). Frontend code must handle this correctly.
> **Updated pattern:** The backend now exposes `publicId` directly; the `@Expose({ name: 'id' })` rename is gone. The frontend uses `publicId` as-is and must never fall back to `id`.
## The Pattern
| Source | Field Name | Type | Notes |
| ------------------------ | ------------------- | ----------------- | ----------------------------------------------------------- |
| **API Response** | `publicId` | `string` (UUIDv7) | Exposed directly (no rename) |
| **TypeScript Interface** | `publicId?: string` | UUID string       | Use this field only                                         |
| **Form DTO** | `xxxUuid` | `string` | DTO field names: `projectUuid`, `contractUuid` (input only) |
| **URL param** | `[publicId]` | `string` (UUID) | e.g. `/correspondences/[publicId]/page.tsx` |
## Critical Rules
### 1. NEVER Use `parseInt()` on UUID
```tsx
// ❌ WRONG - parseInt on UUID gives garbage
const id = parseInt(projectId); // "0195a1b2-..." → 195 (wrong!)
// ❌ WRONG - Number() on UUID
const id = Number(projectId); // NaN
// ❌ WRONG - Unary plus
const id = +projectId; // NaN
// ✅ CORRECT - Send UUID string directly to API
apiClient.get(`/projects/${projectId}`); // projectId is already UUID string
```
### 2. Use `publicId` Only — NO `id ?? ''` Fallback
```tsx
// ✅ CORRECT — types/project.ts
interface Project {
  publicId?: string; // UUID from API; use this field only
projectCode: string;
projectName: string;
}
// ✅ CORRECT — Component usage
const projectOptions = projects.map((p) => ({
label: `${p.projectName} (${p.projectCode})`,
  value: p.publicId ?? '', // ADR-019: no String() wrapper, no fallback to id
  key: p.publicId ?? p.projectCode, // falling back to a business field is fine
}));
// ❌ WRONG: the old pattern
const oldOptions = projects.map((p) => ({
value: String(p.publicId ?? p.id ?? ''), // ❌ `id ?? ''` fallback
}));
```
### 3. Form Field Names (camelCase)
```tsx
// ❌ WRONG - snake_case doesn't match TypeScript interface
fields={[{ name: 'project_id', label: 'Project' }]}
// ✅ CORRECT - camelCase matches interface
fields={[{ name: 'projectUuid', label: 'Project' }]}
// Form submission
const onSubmit = (data: { projectUuid: string }) => {
// projectUuid is UUID string - send as-is
await apiClient.post('/contracts', data);
};
```
## Select Component Pattern
```tsx
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/select';
interface ContractSelectProps {
contracts: Contract[];
value: string;
onChange: (value: string) => void;
}
export function ContractSelect({ contracts, value, onChange }: ContractSelectProps) {
return (
<Select value={value} onValueChange={onChange}>
<SelectTrigger>
<SelectValue placeholder="เลือกสัญญา" />
</SelectTrigger>
<SelectContent>
{contracts
        .filter((c) => !!c.publicId) // keep only contracts that have a publicId
.map((c) => (
<SelectItem key={c.publicId} value={c.publicId!}>
{c.contractName} ({c.contractCode})
</SelectItem>
))}
</SelectContent>
</Select>
);
}
```
## Data Table Pattern
```tsx
// Show relation columns with UUID entities
const columns: ColumnDef<Discipline>[] = [
{
accessorKey: 'disciplineCode',
header: 'Code',
},
{
accessorKey: 'contract',
header: 'Contract',
cell: ({ row }) => {
const contract = row.original.contract;
return contract ? (
<span>
{contract.contractName} ({contract.contractCode})
</span>
) : (
<span className="text-muted-foreground">-</span>
);
},
},
];
```
## API Service Pattern
```tsx
// lib/services/contract.service.ts
export const contractService = {
async getById(uuid: string): Promise<Contract> {
// Send UUID string directly - backend resolves to INT
const { data } = await apiClient.get(`/contracts/${uuid}`);
return data;
},
async create(dto: CreateContractDto): Promise<Contract> {
// DTO contains projectUuid (UUID string)
const { data } = await apiClient.post('/contracts', dto);
return data;
},
async update(uuid: string, dto: Partial<CreateContractDto>): Promise<Contract> {
const { data } = await apiClient.put(`/contracts/${uuid}`, dto);
return data;
},
async delete(uuid: string): Promise<void> {
await apiClient.delete(`/contracts/${uuid}`);
},
};
```
## TypeScript Interfaces
```tsx
// ✅ CORRECT — types/entities.ts
export interface BaseEntity {
  publicId?: string; // UUID; use this field only (no INT id in the interface)
createdAt?: string;
updatedAt?: string;
}
export interface Project extends BaseEntity {
projectCode: string;
projectName: string;
description?: string;
}
export interface Contract extends BaseEntity {
contractCode: string;
contractName: string;
project?: Project; // Relation (nested entity)
}
// DTO (input only; receives the UUID from the form)
export interface CreateContractDto {
projectUuid: string; // UUID string from select
contractCode: string;
contractName: string;
}
```
## Form with React Hook Form + Zod
```tsx
import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import * as z from 'zod';
const formSchema = z.object({
projectUuid: z.string().uuid('กรุณาเลือกโปรเจกต์'),
contractCode: z.string().min(1, 'กรุณาระบุรหัสสัญญา'),
contractName: z.string().min(1, 'กรุณาระบุชื่อสัญญา'),
});
type FormData = z.infer<typeof formSchema>;
export function ContractForm() {
const form = useForm<FormData>({
resolver: zodResolver(formSchema),
defaultValues: {
projectUuid: '',
contractCode: '',
contractName: '',
},
});
const onSubmit = async (data: FormData) => {
// Send UUID strings directly
await contractService.create(data);
};
return (
<Form {...form}>
<form onSubmit={form.handleSubmit(onSubmit)}>{/* Form fields */}</form>
</Form>
);
}
```
## URL Parameters
```tsx
// app/contracts/[publicId]/page.tsx (URL param is the UUID publicId, per the table above)
export default async function ContractPage({ params }: { params: Promise<{ publicId: string }> }) {
  const { publicId } = await params;
  // publicId is a UUID string from the URL
  const contract = await contractService.getById(publicId);
return <ContractDetail contract={contract} />;
}
```
## Common Pitfalls
| Pitfall | ❌ Wrong | ✅ Right |
| ---------------------------- | ------------------------------------------------ | --------------------------------- |
| Using INT `id` | `key={entity.id}` | `key={entity.publicId}` |
| parseInt on UUID | `parseInt(projectId)` | `projectId` (string) |
| Field name mismatch | `name="project_id"` | `name="projectUuid"` |
| `id ?? ''` fallback | `value={publicId ?? id ?? ''}` | `value={publicId ?? ''}` |
| `uuid` + `publicId` together | `interface { uuid?: string; publicId?: string }` | `interface { publicId?: string }` |
## Reference
- [ADR-019 Hybrid Identifier Strategy](../../../../specs/06-Decision-Records/ADR-019-hybrid-identifier-strategy.md)
- [Frontend Guidelines](../../../../specs/05-Engineering-Guidelines/05-03-frontend-guidelines.md)
- [UUID Implementation Plan](../../../../specs/05-Engineering-Guidelines/05-07-hybrid-uuid-implementation-plan.md)
> **Warning**: Using `parseInt()` on UUID values causes data corruption. Always use UUID strings directly in API calls.
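As a defensive complement, a route param can be shape-checked before it reaches the API, so a malformed value fails fast instead of surfacing as a confusing 404. This is a hypothetical helper, not an existing project API; the regex accepts RFC 4122-style UUIDs, including UUIDv7.

```typescript
// Hypothetical guard: version nibble 1-8 (covers v7), variant nibble 8/9/a/b.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[1-8][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i;

function isUuid(value: string): boolean {
  return UUID_RE.test(value);
}
```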
@@ -0,0 +1,108 @@
# 🧠 NAP-DMS Agent Skills (v1.8.9)
This file defines the specialized skills and capabilities of the Document Intelligence Engine for the LCBP3 v1.8.9 project, in order to uphold the highest standards of security and data integrity.
**Status**: Production Ready | **Last Updated**: 2026-04-22 | **Total Skills**: 20
> 📌 Shared context for all speckit-\* skills: see [`_LCBP3-CONTEXT.md`](./_LCBP3-CONTEXT.md).
---
## 🏗️ Architectural & Data Integrity
- **Identifier Strategy Mastery (ADR-019 — March 2026):**
  - Enforce **UUIDv7** as the public ID; entities inherit from `UuidBaseEntity` and expose `publicId` **directly** (never use the `@Expose({ name: 'id' })` rename)
  - Detect and prevent any use of `parseInt()`, `Number()`, or unary `+` on UUIDs, on both backend and frontend
  - Verify that entities apply `@Exclude()` to the `INT AUTO_INCREMENT` primary key so it never leaks into the API
  - The frontend uses `publicId` directly; an `id ?? ''` fallback, or a `uuid?: string` field alongside `publicId` in an interface, is **forbidden**
- **Strict Validation Engine:**
  - Enforce **Zod** for all frontend form validation
  - Enforce **class-validator** for backend DTOs
  - Verify that every mutation request (POST/PUT/PATCH) carries an **Idempotency-Key** header
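The Idempotency-Key rule above can be sketched as a small header helper (hypothetical names; key generation is left to the caller, e.g. `crypto.randomUUID()` in modern runtimes):

```typescript
// Hypothetical helper: attach an Idempotency-Key header to mutation requests only.
const MUTATION_METHODS = new Set(['post', 'put', 'patch']);

function withIdempotencyKey(
  method: string,
  headers: Record<string, string>,
  key: string,
): Record<string, string> {
  // GET/HEAD/DELETE etc. pass through untouched.
  if (!MUTATION_METHODS.has(method.toLowerCase())) return headers;
  return { ...headers, 'Idempotency-Key': key };
}
```

In practice this would live in an API-client request interceptor so every mutation gets a fresh key automatically.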
## ⚙️ Workflow & Concurrency Control
- **DMS Workflow Engine Proficiency:**
  - Expert in **DSL-based state machines**; always validate every document state transition against the rules in the DSL parser
  - Prevent duplicate approvals by checking the current state in the database before starting any state-transition logic
- **Collision-Free Numbering (ADR-002):**
  - Apply **distributed locking** via **Redis Redlock** together with TypeORM `@VersionColumn` when generating document numbers
  - Never generate numbers using application-side logic alone
- **Asynchronous Task Orchestration (ADR-008):**
  - Offload long-running work (e.g. sending notifications, correspondence routing) exclusively to **BullMQ**
## 🛡️ Security & Integrity Audit
- **RBAC Matrix Enforcement (ADR-016):**
  - Enforce **JwtAuthGuard**, **RolesGuard**, and the **CASL AbilityFactory** on every new controller
  - Verify that `AuditLogInterceptor` is present on every API that mutates data
- **Secure File Lifecycle:**
  - Apply the **two-phase upload** flow: Upload → Temp → ClamAV Scan → Commit → Permanent
  - Enforce the file-extension whitelist and the 50 MB max size defined in ADR-016
## 🤖 AI Boundary & Privacy (ADR-018/020)
- **Data Isolation:**
  - Guarantee that AI features run only through **Ollama (on-premises)** and never send data outside the network
  - AI accesses data only through the **DMS API** (direct connections to the database or storage are forbidden)
- **Human-in-the-loop Validation:**
  - Design AI outputs (e.g. extracted document metadata) so that they always require user confirmation before being saved to the system
## 🏷️ Domain Terminology Consistency
- **Term Correction:** Immediately correct terminology per the glossary (e.g. change Letter to **Correspondence**, Approval Flow to **Workflow Engine**)
- **i18n Guidelines:** Never write Thai/English strings directly in components; use i18n keys only
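The i18n rule can be illustrated with a minimal lookup (a toy sketch; the real app would use an i18n library with locale JSON files, and the key/message values here are assumptions):

```typescript
// Toy message table; in the real app these live in per-locale JSON files.
const messages: Record<string, string> = {
  'upload.dragDrop': 'Drag and drop files here',
};

// Falls back to the key itself so missing translations are visible, not blank.
function t(key: string): string {
  return messages[key] ?? key;
}
```

Components then call `t('upload.dragDrop')` instead of embedding the string literal.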
---
## 🔄 Skill Dependency Matrix
| Skill | Dependencies | Handoffs To | Notes |
| -------------------------- | -------------------- | -------------------------------- | ----------------------------- |
| **speckit-constitution** | None | speckit-specify | Project governance foundation |
| **speckit-specify** | speckit-constitution | speckit-clarify | Feature specification |
| **speckit-clarify** | speckit-specify | speckit-plan | Resolve ambiguities |
| **speckit-plan** | speckit-clarify | speckit-tasks, speckit-checklist | Technical design |
| **speckit-tasks** | speckit-plan | speckit-implement | Task breakdown |
| **speckit-implement** | speckit-tasks | speckit-checker | Code implementation |
| **speckit-checker** | speckit-implement | speckit-tester | Static analysis |
| **speckit-tester** | speckit-checker | speckit-reviewer | Test execution |
| **speckit-reviewer** | speckit-tester | speckit-validate | Code review |
| **speckit-validate** | speckit-reviewer | None | Requirements validation |
| **speckit-analyze** | speckit-tasks | None | Cross-artifact consistency |
| **speckit-migrate** | None | speckit-plan | Legacy code import |
| **speckit-quizme** | speckit-specify | speckit-plan | Logic validation |
| **speckit-diff** | None | speckit-plan | Version comparison |
| **speckit-status** | None | None | Progress tracking |
| **speckit-taskstoissues** | speckit-tasks | None | Issue sync |
| **speckit-checklist** | speckit-plan | None | Requirements validation |
| **nestjs-best-practices** | None | speckit-implement | Backend patterns |
| **next-best-practices** | None | speckit-implement | Frontend patterns |
| **speckit-security-audit** | None | speckit-reviewer | Security validation |
---
## 🛠️ Skill Health Monitoring
### Health Check Scripts (from repo root)
- **Bash**: `./.agents/scripts/bash/audit-skills.sh` - Comprehensive skill health audit
- **PowerShell**: `./.agents/scripts/powershell/audit-skills.ps1` - Windows equivalent
### Validation Scripts
- **Version Check**: `./.agents/scripts/bash/validate-versions.sh` - Ensure version consistency
- **Workflow Sync**: `./.agents/scripts/bash/sync-workflows.sh` - Verify workflow integration
### Health Metrics
- **Total Skills**: 20 implemented
- **Version Alignment**: v1.8.9 across all skills
- **Template Coverage**: 100% for skills requiring templates
- **Documentation**: Complete front matter + shared `_LCBP3-CONTEXT.md` appendix
### Maintenance Schedule
- **Daily**: Run `audit-skills.sh` for health monitoring
- **Weekly**: Run `validate-versions.sh` for version consistency
- **Monthly**: Review skill dependencies and update documentation
@@ -1,7 +1,7 @@
---
name: speckit-analyze
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-tasks
---
@@ -21,13 +21,14 @@ You are the **Antigravity Consistency Analyst**. Your role is to identify incons
## Task
### Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit-tasks` has successfully produced a complete `tasks.md`.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.
**Constitution Authority**: The project constitution (`AGENTS.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit-analyze`.
### Steps
@@ -71,7 +72,7 @@ Load only the minimal necessary context from each artifact:
**From constitution:**
- Load `.specify/memory/constitution.md` for principle validation
- Load `AGENTS.md` for principle validation
### 3. Build Semantic Models
@@ -135,16 +136,16 @@ Output a Markdown report (no file writes) with the following structure:
## Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
| ID | Category | Severity | Location(s) | Summary | Recommendation |
| --- | ----------- | -------- | ---------------- | ---------------------------- | ------------------------------------ |
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
**Coverage Summary Table:**
| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|
| --------------- | --------- | -------- | ----- |
**Constitution Alignment Issues:** (if any)
@@ -191,3 +192,15 @@ Ask the user: "Would you like me to suggest concrete remediation edits for the t
## Context
{{args}}
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-checker
description: Run static analysis tools and aggregate results.
version: 1.0.0
version: 1.8.9
depends-on: []
---
@@ -26,119 +26,124 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
### Execution Steps
1. **Detect Project Type and Tools**:
```bash
# Check for config files
ls -la | grep -E "(package.json|pyproject.toml|go.mod|Cargo.toml|pom.xml)"
# Check for linter configs
ls -la | grep -E "(eslint|prettier|pylint|golangci|rustfmt)"
```
| Config | Tools to Run |
|--------|-------------|
| `package.json` | ESLint, TypeScript, npm audit |
| `pyproject.toml` | Pylint/Ruff, mypy, bandit |
| `go.mod` | golangci-lint, go vet |
| `Cargo.toml` | clippy, cargo audit |
| `pom.xml` | SpotBugs, PMD |
| Config | Tools to Run |
| ---------------- | ----------------------------- |
| `package.json` | ESLint, TypeScript, npm audit |
| `pyproject.toml` | Pylint/Ruff, mypy, bandit |
| `go.mod` | golangci-lint, go vet |
| `Cargo.toml` | clippy, cargo audit |
| `pom.xml` | SpotBugs, PMD |
2. **Run Linting**:
| Stack | Command |
|-------|---------|
| Node/TS | `npx eslint . --format json 2>/dev/null` |
| Python | `ruff check . --output-format json 2>/dev/null || pylint --output-format=json **/*.py` |
| Go | `golangci-lint run --out-format json` |
| Rust | `cargo clippy --message-format=json` |
| Stack   | Command                                                                                  |
| ------- | ---------------------------------------------------------------------------------------- |
| Node/TS | `npx eslint . --format json 2>/dev/null`                                                 |
| Python  | `ruff check . --output-format json 2>/dev/null \|\| pylint --output-format=json **/*.py` |
| Go      | `golangci-lint run --out-format json`                                                    |
| Rust    | `cargo clippy --message-format=json`                                                     |
3. **Run Type Checking**:
| Stack | Command |
|-------|---------|
| TypeScript | `npx tsc --noEmit 2>&1` |
| Python | `mypy . --no-error-summary 2>&1` |
| Go | `go build ./... 2>&1` (types are built-in) |
| Stack | Command |
| ---------- | ------------------------------------------ |
| TypeScript | `npx tsc --noEmit 2>&1` |
| Python | `mypy . --no-error-summary 2>&1` |
| Go | `go build ./... 2>&1` (types are built-in) |
4. **Run Security Scanning**:
| Stack | Command |
|-------|---------|
| Node | `npm audit --json` |
| Python | `bandit -r . -f json 2>/dev/null || safety check --json` |
| Go | `govulncheck ./... 2>&1` |
| Rust | `cargo audit --json` |
| Stack  | Command                                                    |
| ------ | ---------------------------------------------------------- |
| Node   | `npm audit --json`                                         |
| Python | `bandit -r . -f json 2>/dev/null \|\| safety check --json` |
| Go     | `govulncheck ./... 2>&1`                                   |
| Rust   | `cargo audit --json`                                       |
5. **Aggregate and Prioritize**:
| Category | Priority |
|----------|----------|
| Security (Critical/High) | 🔴 P1 |
| Type Errors | 🟠 P2 |
| Security (Medium/Low) | 🟡 P3 |
| Lint Errors | 🟡 P3 |
| Lint Warnings | 🟢 P4 |
| Style Issues | ⚪ P5 |
| Category | Priority |
| ------------------------ | -------- |
| Security (Critical/High) | 🔴 P1 |
| Type Errors | 🟠 P2 |
| Security (Medium/Low) | 🟡 P3 |
| Lint Errors | 🟡 P3 |
| Lint Warnings | 🟢 P4 |
| Style Issues | ⚪ P5 |
6. **Generate Report**:
```markdown
````markdown
# Static Analysis Report
**Date**: [timestamp]
**Project**: [name from package.json/pyproject.toml]
**Status**: CLEAN | ISSUES FOUND
## Tools Run
| Tool | Status | Issues |
|------|--------|--------|
| ESLint | ✅ | 12 |
| TypeScript | ✅ | 3 |
| npm audit | ⚠️ | 2 vulnerabilities |
| Tool | Status | Issues |
| ---------- | ------ | ----------------- |
| ESLint | ✅ | 12 |
| TypeScript | ✅ | 3 |
| npm audit | ⚠️ | 2 vulnerabilities |
## Summary by Priority
| Priority | Count |
|----------|-------|
| 🔴 P1 Critical | X |
| 🟠 P2 High | X |
| 🟡 P3 Medium | X |
| 🟢 P4 Low | X |
| Priority | Count |
| -------------- | ----- |
| 🔴 P1 Critical | X |
| 🟠 P2 High | X |
| 🟡 P3 Medium | X |
| 🟢 P4 Low | X |
## Issues
### 🔴 P1: Security Vulnerabilities
| Package | Severity | Issue | Fix |
|---------|----------|-------|-----|
| lodash | HIGH | Prototype Pollution | Upgrade to 4.17.21 |
| Package | Severity | Issue | Fix |
| ------- | -------- | ------------------- | ------------------ |
| lodash | HIGH | Prototype Pollution | Upgrade to 4.17.21 |
### 🟠 P2: Type Errors
| File | Line | Error |
|------|------|-------|
| src/api.ts | 45 | Type 'string' is not assignable to type 'number' |
| File | Line | Error |
| ---------- | ---- | ------------------------------------------------ |
| src/api.ts | 45 | Type 'string' is not assignable to type 'number' |
### 🟡 P3: Lint Issues
| File | Line | Rule | Message |
|------|------|------|---------|
| src/utils.ts | 12 | no-unused-vars | 'foo' is defined but never used |
| File | Line | Rule | Message |
| ------------ | ---- | -------------- | ------------------------------- |
| src/utils.ts | 12 | no-unused-vars | 'foo' is defined but never used |
## Quick Fixes
```bash
# Fix security issues
npm audit fix
# Auto-fix lint issues
npx eslint . --fix
```
````
## Recommendations
1. **Immediate**: Fix P1 security issues
2. **Before merge**: Fix P2 type errors
3. **Tech debt**: Address P3/P4 lint issues
```
```
7. **Output**:
@@ -152,3 +157,15 @@ Auto-detect available tools, run them, and aggregate results into a prioritized
- **Be Actionable**: Every issue should have a clear fix path
- **Don't Duplicate**: Dedupe issues found by multiple tools
- **Respect Configs**: Honor project's existing linter configs
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-checklist
description: Generate a custom checklist for the current feature based on user requirements.
version: 1.0.0
version: 1.8.9
---
## Checklist Purpose: "Unit Tests for English"
@@ -300,3 +300,15 @@ Sample items:
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -6,16 +6,16 @@
**Note**: This checklist is generated by the `/speckit-checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit-checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
@@ -1,10 +1,10 @@
---
name: speckit-clarify
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-specify
handoffs:
- label: Build Technical Plan
agent: speckit-plan
prompt: Create a plan for the spec. I am building with...
@@ -96,69 +96,69 @@ Execution steps:
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 10 total questions across the whole session.
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
- Maximum of 10 total questions across the whole session.
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2–5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact \* Uncertainty) heuristic.
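The (Impact * Uncertainty) selection heuristic above might be sketched as a simple ranked cut; the 0–1 score ranges are an assumption for illustration, not something the skill prescribes.

```python
def top_questions(candidates, limit=5):
    """candidates: list of (question, impact, uncertainty), scores assumed in [0, 1].

    Rank by impact * uncertainty, descending; keep at most `limit` questions.
    """
    ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
    return [question for question, _, _ in ranked[:limit]]

queue = top_questions([
    ("Which auth model applies?", 0.9, 0.8),        # 0.72
    ("Preferred date format?", 0.2, 0.9),           # 0.18
    ("Retention period for audit logs?", 0.7, 0.7), # 0.49
])
print(queue[0])  # → "Which auth model applies?"
```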
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple-choice questions:
- **Analyze all options** and determine the **most suitable option** based on:
- Best practices for the project type
- Common patterns in similar implementations
- Risk reduction (security, performance, maintainability)
- Alignment with any explicit project goals or constraints visible in the spec
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- Format as: `**Recommended:** Option [X] - <reasoning>`
- Then render all options as a Markdown table:
- Present EXACTLY ONE question at a time.
- For multiple-choice questions:
- **Analyze all options** and determine the **most suitable option** based on:
- Best practices for the project type
- Common patterns in similar implementations
- Risk reduction (security, performance, maintainability)
- Alignment with any explicit project goals or constraints visible in the spec
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- Format as: `**Recommended:** Option [X] - <reasoning>`
- Then render all options as a Markdown table:
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
| Option | Description |
| ------ | --------------------------------------------------------------------------------------------------- |
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- For short-answer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- After the user answers:
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- User signals completion ("done", "good", "no more"), OR
- You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
- For short-answer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- After the user answers:
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- User signals completion ("done", "good", "no more"), OR
- You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
- Functional ambiguity → Update or add a bullet in Functional Requirements.
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
- Functional ambiguity → Update or add a bullet in Functional Requirements.
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
@@ -189,3 +189,15 @@ Behavior rules:
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: {{args}}
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-constitution
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
version: 1.0.0
version: 1.8.9
handoffs:
- label: Build Specification
agent: speckit-specify
@@ -24,11 +24,11 @@ You are the **Antigravity Governance Architect**. Your role is to establish and
### Outline
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
You are updating the project constitution at `AGENTS.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
Follow this execution flow:
1. Load the existing constitution template at `memory/constitution.md`.
1. Load the existing constitution template at `AGENTS.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
@@ -49,10 +49,10 @@ Follow this execution flow:
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read `.agents/skills/speckit-plan/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.agents/skills/speckit-specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.agents/skills/speckit-tasks/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.agents/skills/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
@@ -69,7 +69,7 @@ Follow this execution flow:
- Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
7. Write the completed constitution back to `AGENTS.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
@@ -87,4 +87,16 @@ If the user supplies partial updates (e.g., only one principle revision), still
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
Do not create a new template; always operate on the existing `AGENTS.md` file.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-diff
description: Compare two versions of a spec or plan to highlight changes.
version: 1.0.0
version: 1.8.9
depends-on: []
---
@@ -31,10 +31,12 @@ Compare two versions of a specification artifact and produce a structured diff r
- If no arguments: Use `check-prerequisites.sh` to find current feature's spec.md and compare with HEAD
2. **Load Files**:
```bash
# For git comparison
git show HEAD:<relative-path> > /tmp/old_version.md
```
- Read both versions into memory
3. **Semantic Diff Analysis**:
@@ -45,26 +47,29 @@ Compare two versions of a specification artifact and produce a structured diff r
- **Moved**: Reorganized content (same meaning, different location)
4. **Generate Report**:
```markdown
# Diff Report: [filename]
**Compared**: [version A] → [version B]
**Date**: [timestamp]
## Summary
- X additions, Y removals, Z modifications
## Changes by Section
### [Section Name]
| Type | Content | Impact |
|------|---------|--------|
| + Added | [new text] | [what this means] |
| - Removed | [old text] | [what this means] |
| Type | Content | Impact |
| ---------- | ------------------ | ----------------- |
| + Added | [new text] | [what this means] |
| - Removed | [old text] | [what this means] |
| ~ Modified | [before] → [after] | [what this means] |
## Risk Assessment
- Breaking changes: [list any]
- Scope changes: [list any]
```
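The semantic diff categories used in the report above (Added / Removed / Modified / Moved) could be approximated by comparing section bodies between the two versions. This is a minimal sketch under the assumption that each version can be parsed into a `section name → body` mapping; real content matching would need fuzzier comparison.

```python
def classify_sections(old, new):
    """old/new: dicts mapping section name -> body text.

    Returns (category, section) pairs using the report's categories.
    """
    changes = []
    for name in sorted(set(old) | set(new)):
        if name not in old:
            # Same body already present under another heading? Then it moved.
            kind = "Moved" if new[name] in old.values() else "+ Added"
            changes.append((kind, name))
        elif name not in new:
            if old[name] not in new.values():  # moved bodies reported once, above
                changes.append(("- Removed", name))
        elif old[name] != new[name]:
            changes.append(("~ Modified", name))
    return changes

print(classify_sections(
    {"Scope": "MVP only", "Risks": "low"},
    {"Scope": "MVP plus admin", "Risks": "low"},
))  # → [('~ Modified', 'Scope')]
```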
@@ -79,3 +84,15 @@ Compare two versions of a specification artifact and produce a structured diff r
- **Highlight Impact**: Explain what each change means for implementation
- **Flag Breaking Changes**: Any change that invalidates existing work
- **Ignore Whitespace**: Focus on semantic changes, not formatting
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-implement
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md (with Ironclad Anti-Regression Protocols)
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-tasks
---
@@ -53,7 +53,7 @@ If a file is critical, complex, or has high dependencies (>2 affected files):
4. **SWITCH** the imports in the consuming files one by one.
5. **ANNOUNCE**: "Applying Strangler Pattern to avoid regression."
*Benefit: If it breaks, we simply revert the import, not the whole logic.*
_Benefit: If it breaks, we simply revert the import, not the whole logic._
### Protocol 3: Reproduction Script First (TDD)
@@ -81,7 +81,7 @@ At the start of execution and after every 3 modifications:
### Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
1. Run `../scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
- Scan all checklist files in the checklists/ directory
@@ -136,12 +136,12 @@ At the start of execution and after every 3 modifications:
git rev-parse --git-dir 2>/dev/null
```
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc* exists → create/verify .eslintignore
- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
- Check if .prettierrc* exists → create/verify .prettierignore
- Check if Dockerfile\* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc\* exists → create/verify .eslintignore
- Check if eslint.config.\* exists → ensure the config's `ignores` entries cover required patterns
- Check if .prettierrc\* exists → create/verify .prettierignore
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- Check if terraform files (*.tf) exist → create/verify .terraformignore
- Check if terraform files (\*.tf) exist → create/verify .terraformignore
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
**If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
@@ -179,35 +179,35 @@ At the start of execution and after every 3 modifications:
7. **Execute implementation following the task plan with Ironclad Protocols**:
**For EACH task**, follow this sequence:
a. **Blast Radius Analysis (Protocol 1)**:
- Identify all files that will be modified
- Run `grep` to find all dependents
- Report the blast radius
- Identify all files that will be modified
- Run `grep` to find all dependents
- Report the blast radius
b. **Strategy Decision**:
- If LOW risk (≤2 affected files): Proceed with inline modification
- If MEDIUM/HIGH risk (>2 files): Apply Strangler Pattern (Protocol 2)
- If LOW risk (≤2 affected files): Proceed with inline modification
- If MEDIUM/HIGH risk (>2 files): Apply Strangler Pattern (Protocol 2)
c. **Reproduction Script (Protocol 3)**:
- Create `repro_task_[ID].ts` that demonstrates expected behavior
- Run it to confirm current state (should fail for new features, or fail for bugs)
- Create `repro_task_[ID].ts` that demonstrates expected behavior
- Run it to confirm current state (should fail for new features, or fail for bugs)
d. **Implementation**:
- Execute the task according to plan
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- Execute the task according to plan
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
e. **Verification**:
- Run the reproduction script again (should now pass)
- Run existing tests to ensure no regression
- If any test fails: **STOP** and report the regression
- Run the reproduction script again (should now pass)
- Run existing tests to ensure no regression
- If any test fails: **STOP** and report the regression
f. **Cleanup**:
- Delete temporary repro scripts OR convert to permanent tests
- Mark task as complete `[X]` in tasks.md
- Delete temporary repro scripts OR convert to permanent tests
- Mark task as complete `[X]` in tasks.md
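Protocol 1's blast-radius step in (a) above — find every file that mentions a symbol before editing it — is roughly a recursive `grep`. A sketch of that scan follows; the extensions, skip list, and risk threshold are illustrative assumptions, not part of the protocol text.

```python
import os

def blast_radius(symbol, root=".", exts=(".ts", ".tsx")):
    """Return files under `root` that mention `symbol` (rough grep -r equivalent)."""
    hits = []
    for dirpath, _, files in os.walk(root):
        if "node_modules" in dirpath or ".git" in dirpath:
            continue  # skip vendored / VCS directories
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if symbol in fh.read():
                        hits.append(path)
    return hits

# Per step (b): >2 affected files → MEDIUM/HIGH risk → apply the Strangler Pattern.
```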
8. **Progress tracking and error handling**:
- Report progress after each completed task with this format:
@@ -246,3 +246,15 @@ At the start of execution and after every 3 modifications:
---
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit-tasks` first to regenerate the task list.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-migrate
description: Migrate existing projects into the speckit structure by generating spec.md, plan.md, and tasks.md from existing code.
version: 1.0.0
version: 1.8.9
depends-on: []
---
@@ -31,10 +31,11 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
- `--depth <n>`: Analysis depth (1=overview, 2=detailed, 3=exhaustive)
2. **Codebase Discovery**:
```bash
# Get project structure
tree -L 3 --dirsfirst -I 'node_modules|.git|dist|build' > /tmp/structure.txt
# Find key files
find . -name "*.md" -o -name "package.json" -o -name "*.config.*" | head -50
```
@@ -47,48 +48,59 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
- Map API endpoints (if applicable)
4. **Generate spec.md** (reverse-engineered):
```markdown
# [Feature Name] - Specification (Migrated)
> This specification was auto-generated from existing code.
> Review and refine before using for future development.
## Overview
[Inferred from README, comments, and code structure]
## Functional Requirements
[Extracted from existing functionality]
## Key Entities
[From data models, schemas, types]
```
5. **Generate plan.md** (reverse-engineered):
```markdown
# [Feature Name] - Technical Plan (Migrated)
## Current Architecture
[Documented from codebase analysis]
## Technology Stack
[From package.json, imports, configs]
## Component Map
[Directory → responsibility mapping]
```
6. **Generate tasks.md** (completion status):
```markdown
# [Feature Name] - Tasks (Migrated)
All tasks marked [x] represent existing implemented functionality.
Tasks marked [ ] are inferred gaps or TODOs found in code.
## Existing Implementation
- [x] [Component A] - Implemented in `src/componentA/`
- [x] [Component B] - Implemented in `src/componentB/`
## Identified Gaps
- [ ] [Missing tests for X]
- [ ] [TODO comment at Y]
```
@@ -104,3 +116,15 @@ Analyze an existing codebase and generate speckit artifacts (spec.md, plan.md, t
- **Preserve Intent**: Use code comments and naming to understand purpose
- **Flag TODOs**: Any TODO/FIXME/HACK in code becomes an open task
- **Be Conservative**: When unsure, ask rather than assume
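The "Flag TODOs" rule above — every TODO/FIXME/HACK in code becomes an open task — can be sketched as a small comment scan that emits unchecked task lines for the "Identified Gaps" section. The regex and output format here are illustrative; `src/example.ts` is a placeholder path.

```python
import re

TODO_RE = re.compile(r"\b(TODO|FIXME|HACK)\b[:\s]*(.*)")

def open_tasks(source, path="src/example.ts"):
    """Turn TODO/FIXME/HACK comments into unchecked tasks.md lines."""
    tasks = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = TODO_RE.search(line)
        if m:
            note = m.group(2).strip() or m.group(1)  # fall back to the marker itself
            tasks.append(f"- [ ] {note} ({path}:{lineno})")
    return tasks

print(open_tasks("const x = 1; // TODO: add input validation")[0])
# → "- [ ] add input validation (src/example.ts:1)"
```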
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,10 +1,10 @@
---
name: speckit-plan
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-specify
handoffs:
- label: Create Tasks
agent: speckit-tasks
prompt: Break the plan into tasks
@@ -32,7 +32,7 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
1. **Setup**: Run `../scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
2. **Load context**: Read FEATURE_SPEC and `AGENTS.md`. Load IMPL_PLAN template from `templates/plan-template.md`.
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -85,15 +85,27 @@ You are the **Antigravity System Architect**. Your role is to bridge the gap bet
- Output OpenAPI/GraphQL schema to `/contracts/`
3. **Agent context update**:
- Run `../scripts/bash/update-agent-context.sh gemini`
- Run `../scripts/bash/update-agent-context.sh windsurf`
- These scripts detect which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
**Output**: data-model.md, /contracts/\*, quickstart.md, agent-specific file
## Key rules
- Use absolute paths
- ERROR on gate failures or unresolved clarifications
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -3,7 +3,7 @@
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit-plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
**Note**: This template is filled in by the `/speckit-plan` command. See `.agents/skills/plan.md` for the execution workflow.
## Summary
@@ -29,7 +29,7 @@
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
_GATE: Must pass before Phase 0 research. Re-check after Phase 1 design._
[Gates determined based on constitution file]
@@ -48,6 +48,7 @@ specs/[###-feature]/
```
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
@@ -98,7 +99,7 @@ directories captured above]
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
| Violation | Why Needed | Simpler Alternative Rejected Because |
| -------------------------- | ------------------ | ------------------------------------ |
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
@@ -1,7 +1,7 @@
---
name: speckit-quizme
description: Challenge the specification with Socratic questioning to identify logical gaps, unhandled edge cases, and robustness issues.
version: 1.0.0
version: 1.8.9
handoffs:
- label: Clarify Spec Requirements
agent: speckit-clarify
@@ -65,3 +65,15 @@ Execution steps:
- **Be a Skeptic**: Don't assume the happy path works.
- **Focus on "When" and "If"**: When high load, If network drops, When concurrent edits.
- **Don't be annoying**: Focus on _critical_ flaws, not nitpicks.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-reviewer
description: Perform code review with actionable feedback and suggestions.
version: 1.0.0
version: 1.8.9
depends-on: []
---
@@ -33,7 +33,7 @@ Review code changes and provide structured feedback with severity levels.
```bash
# Get staged changes
git diff --cached --name-only
# Get branch changes
git diff main...HEAD --name-only
```
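Beyond file names, `git diff --numstat` reports per-file churn that can drive review order. A minimal triage sketch (the parsing helper and the churn-first heuristic are illustrative assumptions, not part of this skill):

```typescript
// Parse `git diff --numstat` output (added<TAB>deleted<TAB>path per line)
// into per-file change counts so large diffs can be reviewed first.
interface FileChange {
  added: number;
  deleted: number;
  path: string;
}

function parseNumstat(numstat: string): FileChange[] {
  return numstat
    .trim()
    .split("\n")
    .filter((line) => line.length > 0)
    .map((line) => {
      const [added, deleted, path] = line.split("\t");
      return {
        added: added === "-" ? 0 : Number(added), // "-" marks binary files
        deleted: deleted === "-" ? 0 : Number(deleted),
        path,
      };
    });
}

// Files with the largest churn surface first in the review queue.
function byChurn(changes: FileChange[]): FileChange[] {
  return [...changes].sort(
    (a, b) => b.added + b.deleted - (a.added + a.deleted),
  );
}
```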
@@ -44,14 +44,14 @@ Review code changes and provide structured feedback with severity levels.
3. **Review Categories**:
| Category | What to Check |
|----------|--------------|
| **Correctness** | Logic errors, off-by-one, null handling |
| **Security** | SQL injection, XSS, secrets in code |
| **Performance** | N+1 queries, unnecessary loops, memory leaks |
| **Maintainability** | Complexity, duplication, naming |
| **Best Practices** | Error handling, logging, typing |
| **Style** | Consistency, formatting (if no linter) |
| Category | What to Check |
| ------------------- | -------------------------------------------- |
| **Correctness** | Logic errors, off-by-one, null handling |
| **Security** | SQL injection, XSS, secrets in code |
| **Performance** | N+1 queries, unnecessary loops, memory leaks |
| **Maintainability** | Complexity, duplication, naming |
| **Best Practices** | Error handling, logging, typing |
| **Style** | Consistency, formatting (if no linter) |
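The Performance row above frequently comes down to spotting N+1 query shapes. A self-contained sketch of the pattern to flag, where in-memory helpers stand in for real database calls (all names and the query-counting stub are illustrative assumptions):

```typescript
// Illustrative N+1 shape: one query per user vs. one batched query.
type Order = { userId: number; total: number };
const orders: Order[] = [
  { userId: 1, total: 10 },
  { userId: 2, total: 20 },
  { userId: 1, total: 5 },
];

let queryCount = 0;
const ordersFor = (userId: number): Order[] => {
  queryCount++; // each call stands in for one SQL round trip
  return orders.filter((o) => o.userId === userId);
};
const ordersForAll = (userIds: number[]): Order[] => {
  queryCount++; // a single `WHERE user_id IN (...)` round trip
  return orders.filter((o) => userIds.includes(o.userId));
};

// N+1: the loop issues one query per id — flag this in review.
function totalsNPlusOne(userIds: number[]): number {
  return userIds.flatMap(ordersFor).reduce((s, o) => s + o.total, 0);
}

// Batched: one query covers all ids.
function totalsBatched(userIds: number[]): number {
  return ordersForAll(userIds).reduce((s, o) => s + o.total, 0);
}
```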
4. **Analyze Each File**:
For each file, check:
@@ -64,63 +64,71 @@ Review code changes and provide structured feedback with severity levels.
5. **Severity Levels**:
| Level | Meaning | Block Merge? |
|-------|---------|--------------|
| 🔴 CRITICAL | Security issue, data loss risk | Yes |
| 🟠 HIGH | Bug, logic error | Yes |
| 🟡 MEDIUM | Code smell, maintainability | Maybe |
| 🟢 LOW | Style, minor improvement | No |
| 💡 SUGGESTION | Nice-to-have, optional | No |
| Level | Meaning | Block Merge? |
| ------------- | ------------------------------ | ------------ |
| 🔴 CRITICAL | Security issue, data loss risk | Yes |
| 🟠 HIGH | Bug, logic error | Yes |
| 🟡 MEDIUM | Code smell, maintainability | Maybe |
| 🟢 LOW | Style, minor improvement | No |
| 💡 SUGGESTION | Nice-to-have, optional | No |
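The "Block Merge?" column can be encoded directly when automating the gate; a minimal sketch (the type and helper names are assumptions — MEDIUM's "Maybe" is treated here as non-blocking):

```typescript
type Severity = "CRITICAL" | "HIGH" | "MEDIUM" | "LOW" | "SUGGESTION";

// Only CRITICAL and HIGH findings hard-block a merge per the table;
// MEDIUM is surfaced for discussion but does not block on its own.
function blocksMerge(findings: Severity[]): boolean {
  return findings.some((s) => s === "CRITICAL" || s === "HIGH");
}
```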
6. **Generate Review Report**:
```markdown
````markdown
# Code Review Report
**Date**: [timestamp]
**Scope**: [files reviewed]
**Overall**: APPROVE | REQUEST CHANGES | NEEDS DISCUSSION
## Summary
| Severity | Count |
|----------|-------|
| 🔴 Critical | X |
| 🟠 High | X |
| 🟡 Medium | X |
| 🟢 Low | X |
| 💡 Suggestions | X |
| Severity | Count |
| -------------- | ----- |
| 🔴 Critical | X |
| 🟠 High | X |
| 🟡 Medium | X |
| 🟢 Low | X |
| 💡 Suggestions | X |
## Findings
### 🔴 CRITICAL: SQL Injection Risk
**File**: `src/db/queries.ts:45`
**Code**:
```typescript
const query = `SELECT * FROM users WHERE id = ${userId}`;
```
````
**Issue**: User input directly concatenated into SQL query
**Fix**: Use parameterized queries:
```typescript
const query = 'SELECT * FROM users WHERE id = $1';
await db.query(query, [userId]);
```
### 🟡 MEDIUM: Complex Function
**File**: `src/auth/handler.ts:120`
**Issue**: Function has cyclomatic complexity of 15
**Suggestion**: Extract into smaller functions
## What's Good
- Clear naming conventions
- Good test coverage
- Proper TypeScript types
## Recommended Actions
1. **Must fix before merge**: [critical/high items]
2. **Should address**: [medium items]
3. **Consider for later**: [low/suggestions]
```
```
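The parameterized-query fix shown in the sample report can be sanity-checked end to end; a sketch using an in-memory stand-in for a pg-style `query(text, params)` call (the stub, table, and data are illustrative assumptions, not a real driver):

```typescript
// Minimal stand-in for a pg-style client: with `$1` placeholders the
// parameter is bound as a value, so user input never joins the SQL text.
const users = [{ id: "1", name: "Ada" }];

function query(
  text: string,
  params: string[],
): { rows: { id: string; name: string }[] } {
  if (text === "SELECT * FROM users WHERE id = $1") {
    // The bound parameter is compared as a literal id value.
    return { rows: users.filter((u) => u.id === params[0]) };
  }
  return { rows: [] };
}

// A classic injection payload is treated as an id string, not as SQL.
const injected = query("SELECT * FROM users WHERE id = $1", ["1 OR 1=1"]);
```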
7. **Output**:
@@ -134,3 +142,15 @@ Review code changes and provide structured feedback with severity levels.
- **Be Balanced**: Mention what's good, not just what's wrong
- **Prioritize**: Focus on real issues, not style nitpicks
- **Be Educational**: Explain WHY something is an issue
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-security-audit
description: Perform a security-focused audit of the codebase against OWASP Top 10, CASL authorization, and LCBP3-DMS security requirements.
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-checker
---
@@ -12,16 +12,16 @@ You are the **Antigravity Security Sentinel**. Your mission is to identify secur
## Task
Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security.md`.
Perform a comprehensive security audit covering OWASP Top 10, CASL permission enforcement, file upload safety, and project-specific security rules defined in `specs/06-Decision-Records/ADR-016-security-authentication.md`.
## Context Loading
Before auditing, load the security context:
1. Read `specs/06-Decision-Records/ADR-016-security.md` for project security decisions
1. Read `specs/06-Decision-Records/ADR-016-security-authentication.md` for project security decisions
2. Read `specs/05-Engineering-Guidelines/05-02-backend-guidelines.md` for backend security patterns
3. Read `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql` for CASL permission definitions
4. Read `GEMINI.md` for security rules (Section: Security & Integrity Rules)
3. Read `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql` for CASL permission definitions
4. Read `AGENTS.md` for security rules (Section: Security Rules Non-Negotiable + Security & Integrity Audit Protocol)
## Execution Steps
@@ -44,7 +44,7 @@ Scan the `backend/src/` directory for each OWASP category:
### Phase 2: CASL Authorization Audit
1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.7.0-seed-permissions.sql`
1. **Load permission matrix** from `specs/03-Data-and-Storage/lcbp3-v1.8.0-seed-permissions.sql`
2. **Scan all controllers** for `@UseGuards(CaslAbilityGuard)` coverage:
```bash
@@ -131,8 +131,8 @@ Check LCBP3-DMS-specific file handling per ADR-016:
## Severity Classification
| Severity | Description | Response |
| -------------- | ----------------------------------------------------- | ----------------------- |
| Severity | Description | Response |
| --------------- | ----------------------------------------------------- | ----------------------- |
| 🔴 **Critical** | Exploitable vulnerability, data exposure, auth bypass | Immediate fix required |
| 🟠 **High** | Missing security control, potential escalation path | Fix before next release |
| 🟡 **Medium** | Best practice violation, defense-in-depth gap | Plan fix in sprint |
@@ -151,8 +151,8 @@ Generate a structured report:
## Summary
| Severity | Count |
| ---------- | ----- |
| Severity | Count |
| ----------- | ----- |
| 🔴 Critical | X |
| 🟠 High | X |
| 🟡 Medium | X |
@@ -179,8 +179,8 @@ Generate a structured report:
| Module | Controller | Guard? | Policies? | Level |
| ------ | --------------- | ------ | --------- | ------------ |
| auth | AuthController | ✅ | ✅ | N/A (public) |
| users | UsersController | ✅ | ✅ | L1-L4 |
| auth | AuthController | ✅ | ✅ | N/A (public) |
| users | UsersController | ✅ | ✅ | L1-L4 |
| ... | ... | ... | ... | ... |
## Recommendations Priority
@@ -197,3 +197,15 @@ Generate a structured report:
- **No False Confidence**: If a check is inconclusive, mark it as "⚠️ Needs Manual Review" rather than passing.
- **LCBP3-Specific**: Prioritize project-specific rules (idempotency, ClamAV, Redlock) over generic checks.
- **Frontend Too**: If scope includes frontend, also check for XSS in React components, unescaped user data, and exposed API keys.
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,7 +1,7 @@
---
name: speckit-specify
description: Create or update the feature specification from a natural language feature description.
version: 1.0.0
version: 1.8.9
handoffs:
- label: Build Technical Plan
agent: speckit-plan
@@ -64,8 +64,8 @@ Given that feature description, do this:
d. Run the script `../scripts/bash/create-new-feature.sh --json "{{args}}"` with the calculated number and short-name:
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
- Bash example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" --json --number 5 --short-name "user-auth" "Add user authentication"`
- PowerShell example: `.specify/scripts/bash/create-new-feature.sh --json "{{args}}" -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
- Bash example: `.agents/scripts/bash/create-new-feature.sh --json "{{args}}" --number 5 --short-name "user-auth" "Add user authentication"`
- PowerShell example: `.agents/scripts/powershell/create-new-feature.ps1 -Json -Args '{{args}}' -Number 5 -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
@@ -262,3 +262,15 @@ Success criteria must be:
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../\_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -5,13 +5,13 @@
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
## User Scenarios & Testing _(mandatory)_
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
@@ -75,7 +75,7 @@
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
## Requirements _(mandatory)_
<!--
ACTION REQUIRED: The content in this section represents placeholders.
@@ -85,22 +85,22 @@
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
_Example of marking unclear requirements:_
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Key Entities *(include if feature involves data)*
### Key Entities _(include if feature involves data)_
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
## Success Criteria _(mandatory)_
<!--
ACTION REQUIRED: Define measurable success criteria.
@@ -1,7 +1,7 @@
---
name: speckit-status
description: Display a dashboard showing feature status, completion percentage, and blockers.
version: 1.0.0
version: 1.8.9
depends-on: []
---
@@ -26,6 +26,7 @@ Generate a dashboard view of all features and their completion status.
### Execution Steps
1. **Discover Features**:
```bash
# Find all feature directories
find .specify/features -maxdepth 1 -type d 2>/dev/null || echo "No features found"
@@ -33,14 +34,15 @@ Generate a dashboard view of all features and their completion status.
2. **For Each Feature, Gather Metrics**:
| Artifact | Check | Metric |
|----------|-------|--------|
| spec.md | Exists? | Has [NEEDS CLARIFICATION]? |
| plan.md | Exists? | All sections complete? |
| tasks.md | Exists? | Count [x] vs [ ] vs [/] |
| checklists/*.md | All items checked? | Checklist completion % |
| Artifact | Check | Metric |
| ---------------- | ------------------ | -------------------------- |
| spec.md | Exists? | Has [NEEDS CLARIFICATION]? |
| plan.md | Exists? | All sections complete? |
| tasks.md | Exists? | Count [x] vs [ ] vs [/] |
| checklists/\*.md | All items checked? | Checklist completion % |
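The tasks.md metric in the table above ([x] vs [ ] vs [/]) can be computed with a small counter; a sketch assuming exactly the checkbox syntax shown (the helper name is illustrative):

```typescript
// Count task checkbox states in a tasks.md body:
// [x] done, [/] in progress, [ ] pending.
function countTasks(md: string): {
  done: number;
  inProgress: number;
  pending: number;
} {
  const count = (re: RegExp) => (md.match(re) ?? []).length;
  return {
    done: count(/^\s*- \[x\]/gim), // `i` also accepts "[X]"
    inProgress: count(/^\s*- \[\/\]/gm),
    pending: count(/^\s*- \[ \]/gm),
  };
}
```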
3. **Calculate Completion**:
```
Phase 1 (Specify): spec.md exists & no clarifications needed
Phase 2 (Plan): plan.md exists & complete
@@ -56,40 +58,42 @@ Generate a dashboard view of all features and their completion status.
- Missing dependencies
5. **Generate Dashboard**:
```markdown
# Speckit Status Dashboard
**Generated**: [timestamp]
**Total Features**: X
## Overview
| Feature | Phase | Progress | Blockers | Next Action |
|---------|-------|----------|----------|-------------|
| auth-system | Implement | 75% | 0 | Complete remaining tasks |
| payment-flow | Plan | 40% | 2 | Resolve clarifications |
| Feature | Phase | Progress | Blockers | Next Action |
| ------------ | --------- | -------- | -------- | ------------------------ |
| auth-system | Implement | 75% | 0 | Complete remaining tasks |
| payment-flow | Plan | 40% | 2 | Resolve clarifications |
## Feature Details
### [Feature Name]
```
Spec: ████████░░ 80%
Plan: ██████████ 100%
Spec: ████████░░ 80%
Plan: ██████████ 100%
Tasks: ██████░░░░ 60%
```
**Blockers**:
- [ ] Clarification needed: "What payment providers?"
**Recent Activity**:
- Last modified: [date]
- Files changed: [list]
---
## Summary
- Features Ready for Implementation: X
- Features Blocked: Y
- Overall Project Completion: Z%
@@ -105,3 +109,15 @@ Generate a dashboard view of all features and their completion status.
- **Be Visual**: Use progress bars and tables
- **Be Actionable**: Every status should have a "next action"
- **Be Fast**: Cache nothing, always recalculate
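The progress bars in the dashboard template can be rendered deterministically; a minimal sketch (the 10-cell width matches the template but is an assumption):

```typescript
// Render an N-cell progress bar like "████████░░ 80%".
function progressBar(percent: number, width = 10): string {
  const clamped = Math.max(0, Math.min(100, percent));
  const filled = Math.round((clamped / 100) * width);
  return "█".repeat(filled) + "░".repeat(width - filled) + ` ${clamped}%`;
}
```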
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,10 +1,10 @@
---
name: speckit-tasks
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
version: 1.0.0
version: 1.8.9
depends-on:
- speckit-plan
handoffs:
handoffs:
- label: Analyze For Consistency
agent: speckit-analyze
prompt: Run a project analysis for consistency
@@ -96,7 +96,7 @@ Every task MUST strictly follow this format:
4. **[Story] label**: REQUIRED for user story phase tasks only
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
- Setup phase: NO story label
- Foundational phase: NO story label
- Foundational phase: NO story label
- User Story phases: MUST have story label
- Polish phase: NO story label
5. **Description**: Clear action with exact file path
@@ -145,3 +145,15 @@ Every task MUST strictly follow this format:
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
---
## LCBP3-DMS Context (MUST LOAD)
Before executing, load **[../_LCBP3-CONTEXT.md](../_LCBP3-CONTEXT.md)** to get:
- Canonical rule sources (AGENTS.md, specs/06-Decision-Records/, specs/05-Engineering-Guidelines/)
- Tier 1 non-negotiables (ADR-019 UUID, ADR-009 schema, ADR-016 security, ADR-002 numbering, ADR-008 BullMQ, ADR-018/020 AI boundary, ADR-007 errors)
- Domain glossary (Correspondence / RFA / Transmittal / Circulation)
- Helper script real paths
- Commit checklist
@@ -1,6 +1,5 @@
---
description: "Task list template for feature implementation"
description: 'Task list template for feature implementation'
---
# Tasks: [FEATURE NAME]
@@ -25,21 +24,21 @@ description: "Task list template for feature implementation"
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure
<!--
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit-tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
@@ -83,8 +82,8 @@ Examples of foundational tasks (adjust based on your project):
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test\_[name].py
### Implementation for User Story 1
@@ -107,8 +106,8 @@ Examples of foundational tasks (adjust based on your project):
### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test\_[name].py
### Implementation for User Story 2
@@ -129,8 +128,8 @@ Examples of foundational tasks (adjust based on your project):
### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test\_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test\_[name].py
### Implementation for User Story 3

Some files were not shown because too many files have changed in this diff.