251217:1704 Document Number: Update to 1.6.2

This commit is contained in:
admin
2025-12-17 17:04:06 +07:00
parent 48ed74a27b
commit aaa5da3ec1
121 changed files with 8072 additions and 2103 deletions

File diff suppressed because it is too large.


@@ -1,103 +0,0 @@
## 3.11.15. Database Schema Requirements
### 3.11.15.1. Counter Table Schema
The `document_number_counters` table must have the following structure:
```sql
CREATE TABLE document_number_counters (
  project_id INT NOT NULL,
  originator_organization_id INT NOT NULL,
  recipient_organization_id INT NOT NULL DEFAULT 0, -- 0 for RFA (no specific recipient)
  correspondence_type_id INT NOT NULL,
  sub_type_id INT NOT NULL DEFAULT 0, -- for TRANSMITTAL
  rfa_type_id INT NOT NULL DEFAULT 0, -- for RFA
  discipline_id INT NOT NULL DEFAULT 0, -- for RFA
  current_year INT NOT NULL,
  version INT NOT NULL DEFAULT 0, -- optimistic lock
  last_number INT NOT NULL DEFAULT 0,
  PRIMARY KEY (
    project_id,
    originator_organization_id,
    recipient_organization_id,
    correspondence_type_id,
    sub_type_id,
    rfa_type_id,
    discipline_id,
    current_year
  ),
  FOREIGN KEY (project_id) REFERENCES projects(id) ON DELETE CASCADE,
  FOREIGN KEY (originator_organization_id) REFERENCES organizations(id) ON DELETE CASCADE,
  -- No FK on recipient_organization_id: 0 is a sentinel (RFA), not a row in organizations
  FOREIGN KEY (correspondence_type_id) REFERENCES correspondence_types(id) ON DELETE CASCADE
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_general_ci
COMMENT = 'Running number counters';
```
### 3.11.15.2. Index Requirements
```sql
-- Index for lookup performance
CREATE INDEX idx_counter_lookup
  ON document_number_counters (
    project_id,
    correspondence_type_id,
    current_year
  );

-- Index for originator lookups
CREATE INDEX idx_counter_org
  ON document_number_counters (
    originator_organization_id,
    current_year
  );
```
### 3.11.15.3. Important Notes
> **💡 Counter Key Design**
> - `recipient_organization_id` uses the sentinel value `0` (rather than NULL) so it can participate in the composite Primary Key; MariaDB does not allow nullable columns or `COALESCE()` expressions in a Primary Key
> - The `version` column is used for Optimistic Locking (prevents race conditions)
> - `last_number` starts at 0 and increments by 1
> - Counters reset every year (when `current_year` changes)
>
> **⚠️ Migration Notes**
> - There is no legacy data, so backward compatibility is not required
> - The table can be created directly from the schema above
> - Seed data for `correspondence_types`, `rfa_types`, and `disciplines` must exist first
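The optimistic-locking scheme described above (compare-and-swap on the `version` column, retried on conflict) can be sketched as follows. All names and the in-memory store are illustrative assumptions, not the production code.

```typescript
// Simulated storage standing in for one document_number_counters row.
interface CounterRow {
  lastNumber: number;
  version: number;
}

const store: CounterRow = { lastNumber: 0, version: 0 };

function fetchCounter(): CounterRow {
  return { ...store }; // SELECT last_number, version FROM document_number_counters WHERE ...
}

// Stands in for:
//   UPDATE document_number_counters
//   SET last_number = ?, version = version + 1
//   WHERE ... AND version = ?   -- succeeds only if no one raced us
function tryUpdate(expectedVersion: number, nextValue: number): boolean {
  if (store.version !== expectedVersion) return false;
  store.lastNumber = nextValue;
  store.version += 1;
  return true;
}

function nextNumber(maxRetries = 3): number {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const row = fetchCounter();
    const candidate = row.lastNumber + 1;
    if (tryUpdate(row.version, candidate)) return candidate;
    // Another request incremented the counter first; re-read and retry.
  }
  throw new Error("Optimistic lock contention: retries exhausted");
}
```

Under concurrency, a losing writer simply re-reads the row and retries, so no number is ever handed out twice.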
### 3.11.15.4. Example Counter Records
```sql
-- Example: LETTER from คคง. to สคฉ.3 in LCBP3-C2, year 2025
INSERT INTO document_number_counters (
  project_id, originator_organization_id, recipient_organization_id,
  correspondence_type_id, sub_type_id, rfa_type_id, discipline_id,
  current_year, version, last_number
) VALUES (
  2,    -- LCBP3-C2
  22,   -- คคง.
  10,   -- สคฉ.3
  6,    -- LETTER
  0, 0, 0,
  2025, 0, 0
);

-- Example: RFA from ผรม.2 in LCBP3-C2, discipline TER, type RPT, year 2025
INSERT INTO document_number_counters (
  project_id, originator_organization_id, recipient_organization_id,
  correspondence_type_id, sub_type_id, rfa_type_id, discipline_id,
  current_year, version, last_number
) VALUES (
  2,    -- LCBP3-C2
  42,   -- ผรม.2
  0,    -- RFA has no specific recipient (sentinel 0)
  1,    -- RFA
  0,
  18,   -- RPT (Report)
  5,    -- TER (Terminal)
  2025, 0, 0
);
```


@@ -10,9 +10,9 @@
| Attribute | Value |
| ------------------ | -------------------------------- |
| **Version** | 1.6.0 |
| **Version** | 1.6.2 |
| **Status** | Active |
| **Last Updated** | 2025-12-13 |
| **Last Updated** | 2025-12-17 |
| **Owner** | Nattanin Peancharoen |
| **Classification** | Internal Technical Documentation |


@@ -3,15 +3,15 @@
---
**title:** 'System Architecture'
**version:** 1.5.0
**version:** 1.6.2
**status:** first-draft
**owner:** Nattanin Peancharoen
**last_updated:** 2025-11-30
**last_updated:** 2025-12-17
**related:**
- specs/01-requirements/02-architecture.md
- specs/01-requirements/06-non-functional.md
- specs/03-implementation/fullftack-js-v1.5.0.md
- specs/03-implementation/fullftack-js-v1.6.2.md
---
@@ -19,9 +19,50 @@
This document describes the architecture of LCBP3-DMS (Laem Chabang Port Phase 3 - Document Management System), which follows a **Headless/API-First Architecture** and is deployed on a QNAP server via Container Station.
## 🎯 Architecture Principles
## 1. 🎯 Architecture Principles
### 1.1 Core Principles
### 1.1 Component Overview
```
┌────────────────────────────────────────────────────┐
│                   Load Balancer                    │
│               (Nginx Proxy Manager)                │
└──────────────────────────┬─────────────────────────┘
                           │
        ┌────────────┬─────┴──────┬────────────┐
        │            │            │            │
    ┌───▼────┐   ┌───▼────┐   ┌───▼────┐   ┌───▼────┐
    │Backend │   │Backend │   │Backend │   │Backend │
    │ Node 1 │   │ Node 2 │   │ Node 3 │   │ Node 4 │
    └───┬────┘   └───┬────┘   └───┬────┘   └───┬────┘
        │            │            │            │
        └────────────┴─────┬──────┴────────────┘
                           │
        ┌────────────┬─────┴──────┬────────────┐
        │            │            │            │
    ┌───▼─────┐  ┌───▼──┐    ┌───▼────┐   ┌───▼────┐
    │ MariaDB │  │Redis │    │ Redis  │   │ Redis  │
    │ Primary │  │Node 1│    │ Node 2 │   │ Node 3 │
    └───┬─────┘  └──────┘    └────────┘   └────────┘
        │
    ┌───▼─────┐
    │ MariaDB │
    │Replicas │
    └─────────┘
```
### 1.2 Component Responsibilities
| Component | Purpose | Critical? |
| --------------- | --------------------------------- | --------- |
| Backend Nodes | API processing, number generation | YES |
| MariaDB Primary | Persistent sequence storage | YES |
| Redis Cluster | Distributed locking, reservations | YES |
| Load Balancer | Traffic distribution | YES |
| Prometheus | Metrics collection | NO |
| Grafana | Monitoring dashboard | NO |
---
### 1.3 Core Principles
1. **Data Integrity First:** correctness of data comes before everything else
2. **Security by Design:** security is enforced at every layer
@@ -29,13 +70,13 @@
4. **Resilience:** tolerant of failure, with fast recovery
5. **Observability:** system state is easy to monitor and analyze
### 1.2 Architecture Style
### 1.4 Architecture Style
- **Headless CMS Architecture:** Frontend and Backend are fully decoupled
- **API-First:** the Backend is an API server that the Frontend or third parties can call
- **Microservices-Ready:** designed as a modular architecture, ready to be split into microservices later
## 🏢 Infrastructure & Deployment
## 2. 🏢 Infrastructure & Deployment
### 2.1 Server Infrastructure
@@ -95,7 +136,7 @@ graph TB
- Validate environment variables with Joi/Zod at app start
- Throw an error immediately if a required variable is missing
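A minimal hand-rolled sketch of this fail-fast validation (the project itself uses Joi/Zod; the function and parameter names here are illustrative):

```typescript
// Fail-fast env validation: collect all missing keys, then throw once at startup.
// An empty string is treated as missing, matching typical Joi/Zod .min(1) rules.
function validateEnv(
  env: Record<string, string | undefined>,
  required: string[],
): Record<string, string> {
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(required.map((key) => [key, env[key]!]));
}
```

At startup this would be called once, e.g. `validateEnv(process.env, ["DB_HOST", "DB_PASSWORD"])`, so a misconfigured container fails immediately instead of at first query.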
## 🔧 Core Services
## 3. 🔧 Core Services
### 3.1 Service Overview
@@ -195,7 +236,7 @@ graph TB
- Documents are indexed automatically on Create/Update
- Async indexing via queue (does not block the main request)
## 🧱 Backend Module Architecture
## 4. 🧱 Backend Module Architecture
### 4.1 Modular Design
@@ -378,7 +419,7 @@ graph TB
- Dynamic Schema Generation
- Data Transformation
## 📊 Data Flow Architecture
## 5. 📊 Data Flow Architecture
### 5.1 Main Request Flow
@@ -518,7 +559,7 @@ sequenceDiagram
end
```
## 🛡️ Security Architecture
## 6. 🛡️ Security Architecture
### 6.1 Security Layers
@@ -643,7 +684,7 @@ graph TB
| Insecure Deserialization | Input Validation |
| Using Known Vulnerable Components | Regular Dependency Updates |
## 📈 Performance & Scalability
## 7. 📈 Performance & Scalability
### 7.1 Caching Strategy
@@ -686,7 +727,7 @@ graph TB
| Cache Hit Ratio | > 80% | Master Data |
| Application Startup | < 30s | Cold Start |
## 🔄 Resilience & Error Handling
## 8. 🔄 Resilience & Error Handling
### 8.1 Resilience Patterns
@@ -725,7 +766,7 @@ graph TB
- Fallback UI Components
- Retry Mechanisms for Failed Requests
## 📊 Monitoring & Observability
## 9. 📊 Monitoring & Observability
### 9.1 Health Checks
@@ -819,7 +860,7 @@ GET /health/live # Liveness probe
- `ip_address`, `user_agent`
- `timestamp`
## 💾 Backup & Disaster Recovery
## 10. 💾 Backup & Disaster Recovery
### 10.1 Backup Strategy
@@ -867,7 +908,7 @@ GET /health/live # Liveness probe
- Run consistency checks
- Verify critical business data
## 🏗️ Deployment Architecture
## 11. 🏗️ Deployment Architecture
### 11.1 Container Deployment
@@ -916,7 +957,7 @@ graph LR
ProdDeploy --> Monitor[Monitor & Alert]
```
## 🎯 Future Enhancements
## 12. 🎯 Future Enhancements
### 12.1 Scalability Improvements
@@ -932,7 +973,7 @@ graph LR
- [ ] Mobile Native Apps
- [ ] Blockchain Integration for Document Integrity
### 12.3 Infrastructure
### 12.3 Infrastructure Enhancements
- [ ] Multi-Region Deployment
- [ ] CDN for Static Assets
@@ -943,9 +984,9 @@ graph LR
**Document Control:**
- **Version:** 1.6.0
- **Version:** 1.6.2
- **Status:** Active
- **Last Updated:** 2025-12-13
- **Last Updated:** 2025-12-17
- **Owner:** Nattanin Peancharoen
```

File diff suppressed because it is too large.


@@ -1,8 +1,8 @@
# 📝 **Documents Management System Version 1.5.0: FullStackJS Development Guidelines**
# 📝 **Documents Management System Version 1.6.1: FullStackJS Development Guidelines**
**Status:** first-draft
**Date:** 2025-12-01
**Reference:** Requirements Specification v1.5.0
**Date:** 2025-12-17
**Reference:** Requirements Specification v1.6.1
**Classification:** Internal Technical Documentation
## 🧠 **1. General Philosophy**
@@ -1082,9 +1082,9 @@ Views เหล่านี้ทำหน้าที่เป็นแหล
## **Document Control:**
- **Document:** FullStackJS v1.5.0
- **Version:** 1.5
- **Date:** 2025-12-01
- **Document:** FullStackJS v1.6.1
- **Version:** 1.6
- **Date:** 2025-12-17
- **Author:** NAP LCBP3-DMS & Gemini
- **Status:** first-draft
- **Classification:** Internal Technical Documentation
@@ -1092,4 +1092,4 @@ Views เหล่านี้ทำหน้าที่เป็นแหล
---
`End of FullStackJS Guidelines v1.5.0`
`End of FullStackJS Guidelines v1.6.1`


@@ -2,14 +2,15 @@
---
title: 'Operations Guide: Document Numbering System'
version: 1.6.0
status: draft
version: 1.6.2
status: APPROVED
owner: Operations Team
last_updated: 2025-12-02
last_updated: 2025-12-17
related:
- specs/01-requirements/03.11-document-numbering.md
- specs/03-implementation/document-numbering.md
- specs/04-operations/monitoring-alerting.md
- specs/05-decisions/ADR-002-document-numbering-strategy.md
---
## Overview
@@ -678,7 +679,8 @@ See: [Backup & Recovery Guide](file:///e:/np-dms/lcbp3/specs/04-operations/backu
## References
- [Requirements](file:///e:/np-dms/lcbp3/specs/01-requirements/03.11-document-numbering.md)
- [Implementation Guide](file:///e:/np-dms/lcbp3/specs/03-implementation/document-numbering.md)
- [Monitoring & Alerting](file:///e:/np-dms/lcbp3/specs/04-operations/monitoring-alerting.md)
- [Incident Response](file:///e:/np-dms/lcbp3/specs/04-operations/incident-response.md)
- [Requirements](file:///d:/nap-dms.lcbp3/specs/01-requirements/03.11-document-numbering.md)
- [Implementation Guide](file:///d:/nap-dms.lcbp3/specs/03-implementation/document-numbering.md)
- [ADR-002 Document Numbering Strategy](file:///d:/nap-dms.lcbp3/specs/05-decisions/ADR-002-document-numbering-strategy.md)
- [Monitoring & Alerting](file:///d:/nap-dms.lcbp3/specs/04-operations/monitoring-alerting.md)
- [Incident Response](file:///d:/nap-dms.lcbp3/specs/04-operations/incident-response.md)


@@ -214,11 +214,11 @@ The system resolves the numbering format using the following priority:
2. **Default Format:** If not found, search for a record with matching `project_id` where `correspondence_type_id` is `NULL`.
3. **System Fallback:** If neither exists, use the hardcoded system default: `{ORG}-{RECIPIENT}-{SEQ:4}-{YEAR:BE}`.
| Priority | Scenario | Template Source | Counter Scope (Key) | Reset Behavior |
| --- | --- | --- | --- | --- |
| 1 | Specific Format Found | Database (project_id, type_id) | Specific Type (type_id) | Based on reset_sequence_yearly flag |
| 2 | Default Format Found | Database (project_id, type_id=NULL) | Shared Counter (type_id=NULL) | Based on reset_sequence_yearly flag |
| 3 | Fallback (No Config) | System Default: {ORG}-{RECIPIENT}-{SEQ:4}-{YEAR:BE} | Shared Counter (type_id=NULL) | Reset Yearly (Default: True) |
| Priority | Scenario | Template Source | Counter Scope (Key) | Reset Behavior |
| -------- | --------------------- | --------------------------------------------------- | ----------------------------- | ----------------------------------- |
| 1 | Specific Format Found | Database (project_id, type_id) | Specific Type (type_id) | Based on reset_sequence_yearly flag |
| 2 | Default Format Found | Database (project_id, type_id=NULL) | Shared Counter (type_id=NULL) | Based on reset_sequence_yearly flag |
| 3 | Fallback (No Config) | System Default: {ORG}-{RECIPIENT}-{SEQ:4}-{YEAR:BE} | Shared Counter (type_id=NULL) | Reset Yearly (Default: True) |
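The three-level resolution order above can be sketched as follows; `FormatRecord` and `resolveTemplate` are illustrative names, not the actual service API.

```typescript
// A format record as stored per project; typeId === null marks the project default.
interface FormatRecord {
  projectId: number;
  typeId: number | null;
  template: string;
}

const SYSTEM_DEFAULT = "{ORG}-{RECIPIENT}-{SEQ:4}-{YEAR:BE}";

function resolveTemplate(records: FormatRecord[], projectId: number, typeId: number): string {
  // Priority 1: specific format for (project_id, correspondence_type_id)
  const specific = records.find((r) => r.projectId === projectId && r.typeId === typeId);
  if (specific) return specific.template;
  // Priority 2: project default (correspondence_type_id IS NULL)
  const projectDefault = records.find((r) => r.projectId === projectId && r.typeId === null);
  if (projectDefault) return projectDefault.template;
  // Priority 3: hardcoded system fallback
  return SYSTEM_DEFAULT;
}
```

Note that priorities 2 and 3 share a counter scope (`type_id = NULL`), so falling through the levels changes the template but not necessarily the sequence.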
### Format Examples by Document Type
@@ -934,9 +934,9 @@ ensure:
Conforms to:
- ✅ [Requirements 3.11](../01-requirements/03.11-document-numbering.md) - Document Numbering Management (v1.5.0)
- ✅ [Backend Plan Section 4.2.10](../../docs/2_Backend_Plan_V1_4_5.md) - DocumentNumberingModule
- ✅ [Data Dictionary](../../docs/4_Data_Dictionary_V1_4_4.md) - Counter Tables
- ✅ [Requirements 3.11](../01-requirements/03.11-document-numbering.md) - Document Numbering Management (v1.6.2)
- ✅ [Implementation Guide](../03-implementation/document-numbering.md) - DocumentNumberingModule (v1.6.1)
- ✅ [Operations Guide](../04-operations/document-numbering-operations.md) - Monitoring & Troubleshooting
- ✅ [Security Best Practices](../02-architecture/security-architecture.md) - Rate Limiting, Audit Logging
---
@@ -961,7 +961,8 @@ ensure:
## Version History
| Version | Date | Changes |
| ------- | ---------- | ------------------------------------------------------------------------------------- |
| 1.0 | 2025-11-30 | Initial decision |
| 2.0 | 2025-12-02 | Updated with comprehensive error scenarios, monitoring, security, and all token types |
| Version | Date | Changes |
| ------- | ---------- | ------------------------------------------------------------------------------------------------- |
| 1.0 | 2025-11-30 | Initial decision |
| 2.0 | 2025-12-02 | Updated with comprehensive error scenarios, monitoring, security, and all token types |
| 3.0 | 2025-12-17 | Aligned with Requirements v1.6.2: updated counter schema, token definitions, Number State Machine |


@@ -0,0 +1,166 @@
# TASK-BE-017: Document Numbering Backend Refactor
---
status: TODO
priority: HIGH
estimated_effort: 3-5 days
dependencies:
- specs/01-requirements/03.11-document-numbering.md (v1.6.2)
- specs/03-implementation/document-numbering.md (v1.6.2)
related_task: TASK-FE-017-document-numbering-refactor.md
---
## Objective
Refactor the Document Numbering module to match specification v1.6.2, focusing on:
- Number State Machine (RESERVED → CONFIRMED → VOID → CANCELLED)
- Idempotency-Key support
- Counter Key alignment with the requirements
---
## Implementation Checklist
### 1. Entity Updates
#### 1.1 DocumentNumberCounter Entity
- [ ] Rename `current_year` → use the `reset_scope` pattern (YEAR_2025, NONE)
- [ ] Ensure FK columns match: `correspondence_type_id`, `originator_organization_id`, `recipient_organization_id`
- [ ] Add `rfa_type_id`, `sub_type_id`, `discipline_id` columns if missing
- [ ] Update the Primary Key to match the requirements spec
```typescript
// Expected Counter Key structure
interface CounterKey {
projectId: number;
originatorOrganizationId: number;
recipientOrganizationId: number; // 0 for RFA
correspondenceTypeId: number;
subTypeId: number; // 0 if not applicable
rfaTypeId: number; // 0 if not applicable
disciplineId: number; // 0 if not applicable
resetScope: string; // 'YEAR_2025', 'NONE'
}
```
#### 1.2 DocumentNumberAudit Entity
- [ ] Add `operation` enum: `RESERVE`, `CONFIRM`, `CANCEL`, `MANUAL_OVERRIDE`, `VOID`, `GENERATE`
- [ ] Ensure `counter_key` is stored as JSON
- [ ] Add `idempotency_key` column
#### 1.3 DocumentNumberReservation Entity (NEW if not exists)
- [ ] Create entity for Two-Phase Commit reservations
- [ ] Fields: `token`, `document_number`, `status`, `expires_at`, `metadata`
---
### 2. Service Updates
#### 2.1 DocumentNumberingService
- [ ] Implement `reserveNumber()` - Phase 1 of Two-Phase Commit
- [ ] Implement `confirmNumber()` - Phase 2 of Two-Phase Commit
- [ ] Implement `cancelNumber()` - Explicit cancel reservation
- [ ] Add Idempotency-Key checking logic
- [ ] Update `generateNextNumber()` to use new CounterKey structure
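The reserve/confirm flow above can be sketched roughly as follows, with the 5-minute reservation TTL enforced at confirm time; the in-memory store and all names are illustrative assumptions, not the real service.

```typescript
type ReservationStatus = "RESERVED" | "CONFIRMED" | "CANCELLED";

interface Reservation {
  token: string;
  documentNumber: string;
  status: ReservationStatus;
  expiresAt: number; // epoch ms
}

const reservations = new Map<string, Reservation>();
const TTL_MS = 5 * 60 * 1000; // 5-minute TTL for the RESERVED state

let seq = 0;

// Phase 1: hand out a number but do not commit it yet.
function reserveNumber(now: number = Date.now()): Reservation {
  seq += 1;
  const reservation: Reservation = {
    token: `tok-${seq}`,
    documentNumber: `DOC-${String(seq).padStart(4, "0")}`,
    status: "RESERVED",
    expiresAt: now + TTL_MS,
  };
  reservations.set(reservation.token, reservation);
  return reservation;
}

// Phase 2: commit the reservation, unless it has lapsed.
function confirmNumber(token: string, now: number = Date.now()): Reservation {
  const reservation = reservations.get(token);
  if (!reservation || reservation.status !== "RESERVED") {
    throw new Error("No active reservation for token");
  }
  if (now > reservation.expiresAt) {
    reservation.status = "CANCELLED"; // TTL expired: the number can be reclaimed
    throw new Error("Reservation expired");
  }
  reservation.status = "CONFIRMED";
  return reservation;
}
```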
#### 2.2 Counter Key Builder
- [ ] Create helper to build counter key based on document type:
- Global (LETTER, MEMO, RFI): `(project, orig, recip, type, 0, 0, 0, YEAR_XXXX)`
- TRANSMITTAL: `(project, orig, recip, type, subType, 0, 0, YEAR_XXXX)`
- RFA: `(project, orig, 0, type, 0, rfaType, discipline, NONE)`
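The three counter-key rules above, sketched with the `CounterKey` shape from section 1.1 (the builder function and its parameters are assumptions, not the real service API):

```typescript
interface CounterKey {
  projectId: number;
  originatorOrganizationId: number;
  recipientOrganizationId: number; // 0 for RFA
  correspondenceTypeId: number;
  subTypeId: number;
  rfaTypeId: number;
  disciplineId: number;
  resetScope: string; // 'YEAR_2025', 'NONE'
}

type DocKind = "GLOBAL" | "TRANSMITTAL" | "RFA";

function buildCounterKey(kind: DocKind, input: {
  projectId: number;
  originatorOrganizationId: number;
  recipientOrganizationId?: number;
  correspondenceTypeId: number;
  subTypeId?: number;
  rfaTypeId?: number;
  disciplineId?: number;
  year: number;
}): CounterKey {
  const base: CounterKey = {
    projectId: input.projectId,
    originatorOrganizationId: input.originatorOrganizationId,
    recipientOrganizationId: input.recipientOrganizationId ?? 0,
    correspondenceTypeId: input.correspondenceTypeId,
    subTypeId: 0,
    rfaTypeId: 0,
    disciplineId: 0,
    resetScope: `YEAR_${input.year}`,
  };
  switch (kind) {
    case "GLOBAL": // LETTER, MEMO, RFI
      return base;
    case "TRANSMITTAL": // adds the sub-type dimension
      return { ...base, subTypeId: input.subTypeId ?? 0 };
    case "RFA": // no recipient, no yearly reset; keyed by RFA type + discipline
      return {
        ...base,
        recipientOrganizationId: 0,
        rfaTypeId: input.rfaTypeId ?? 0,
        disciplineId: input.disciplineId ?? 0,
        resetScope: "NONE",
      };
  }
}
```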
#### 2.3 ManualOverrideService
- [ ] Implement `manualOverride()` with validation
- [ ] Auto-update counter if manual number > current
#### 2.4 VoidReplaceService
- [ ] Implement `voidAndReplace()` workflow
- [ ] Link new document to voided document
---
### 3. Controller Updates
#### 3.1 DocumentNumberingController
- [ ] Add `POST /reserve` endpoint
- [ ] Add `POST /confirm` endpoint
- [ ] Add `POST /cancel` endpoint
- [ ] Add `Idempotency-Key` header validation middleware
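A hedged sketch of the Idempotency-Key check: the first request with a given key executes and stores its result, and replays return the stored result unchanged. An in-memory `Map` stands in for the Redis/DB store used in practice; names are illustrative.

```typescript
// key → previously returned response body
const idempotencyStore = new Map<string, string>();

function handleWithIdempotency(key: string, execute: () => string): string {
  const prior = idempotencyStore.get(key);
  if (prior !== undefined) {
    return prior; // duplicate request: same response, no side effects
  }
  const result = execute();
  idempotencyStore.set(key, result);
  return result;
}
```

A retried `POST /reserve` with the same `Idempotency-Key` header therefore cannot burn a second number.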
#### 3.2 DocumentNumberingAdminController
- [ ] Add `POST /manual-override` endpoint
- [ ] Add `POST /void-and-replace` endpoint
- [ ] Add `POST /bulk-import` endpoint
- [ ] Add `GET /metrics` endpoint for monitoring dashboard
---
### 4. Number State Machine
```mermaid
stateDiagram-v2
[*] --> RESERVED: reserve()
RESERVED --> CONFIRMED: confirm()
RESERVED --> CANCELLED: cancel() or TTL expired
CONFIRMED --> VOID: void()
CANCELLED --> [*]
VOID --> [*]
```
#### 4.1 State Transitions
- [ ] Implement state validation before transitions
- [ ] Log all transitions to audit table
- [ ] TTL 5 minutes for RESERVED state
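A minimal sketch of the state machine above, with the legal transitions encoded as a lookup table that is checked before any state change is persisted (names are illustrative):

```typescript
type NumberState = "RESERVED" | "CONFIRMED" | "CANCELLED" | "VOID";

const ALLOWED: Record<NumberState, NumberState[]> = {
  RESERVED: ["CONFIRMED", "CANCELLED"], // cancel() or TTL expiry
  CONFIRMED: ["VOID"],
  CANCELLED: [], // terminal
  VOID: [],      // terminal
};

function canTransition(from: NumberState, to: NumberState): boolean {
  return ALLOWED[from].includes(to);
}
```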
---
### 5. Testing
#### 5.1 Unit Tests
- [ ] CounterService.incrementCounter()
- [ ] ReservationService.reserve/confirm/cancel()
- [ ] TemplateValidator.validate()
- [ ] CounterKeyBuilder
#### 5.2 Integration Tests
- [ ] Two-Phase Commit flow
- [ ] Idempotency-Key duplicate prevention
- [ ] Redis lock + DB optimistic lock
#### 5.3 Load Tests
- [ ] Concurrent number generation (1000 req/s)
- [ ] Zero duplicates verification
---
## Files to Create/Modify
| Action | Path |
| ------ | ------------------------------------------------------------------------------------------- |
| MODIFY | `backend/src/modules/document-numbering/entities/document-number-counter.entity.ts` |
| MODIFY | `backend/src/modules/document-numbering/entities/document-number-audit.entity.ts` |
| CREATE | `backend/src/modules/document-numbering/entities/document-number-reservation.entity.ts` |
| MODIFY | `backend/src/modules/document-numbering/services/document-numbering.service.ts` |
| CREATE | `backend/src/modules/document-numbering/services/reservation.service.ts` |
| CREATE | `backend/src/modules/document-numbering/services/manual-override.service.ts` |
| MODIFY | `backend/src/modules/document-numbering/controllers/document-numbering.controller.ts` |
| MODIFY | `backend/src/modules/document-numbering/controllers/document-numbering-admin.controller.ts` |
| CREATE | `backend/src/modules/document-numbering/guards/idempotency.guard.ts` |
---
## Acceptance Criteria
- [ ] All counter keys match requirements v1.6.2
- [ ] Number State Machine behaves correctly
- [ ] Idempotency-Key prevents duplicate requests
- [ ] Zero duplicate numbers in the concurrent load test
- [ ] Audit logs record every operation
---
## References
- [Requirements v1.6.2](file:///d:/nap-dms.lcbp3/specs/01-requirements/03.11-document-numbering.md)
- [Implementation Guide v1.6.2](file:///d:/nap-dms.lcbp3/specs/03-implementation/document-numbering.md)
- [ADR-002](file:///d:/nap-dms.lcbp3/specs/05-decisions/ADR-002-document-numbering-strategy.md)


@@ -0,0 +1,192 @@
# TASK-FE-017: Document Numbering Frontend Refactor
---
status: TODO
priority: HIGH
estimated_effort: 2-3 days
dependencies:
- TASK-BE-017-document-numbering-refactor.md
- specs/01-requirements/03.11-document-numbering.md (v1.6.2)
- specs/03-implementation/document-numbering.md (v1.6.2)
---
## Objective
Refactor the Frontend Document Numbering per specification v1.6.2:
- Prevent users from editing document numbers
- Build an Admin Dashboard with metrics
- Implement Admin Tools (Manual Override, Void/Replace)
---
## Implementation Checklist
### 1. User Mode Forms (Create/Edit)
#### 1.1 Correspondence Form
- [ ] **Create Mode**: show "Auto Generated" or a preview of the document number
- [ ] **Edit Mode**: the Document No field is always **Read-Only**
- [ ] **API Integration**: stop sending the `documentNumber` field to the Backend in Edit mode
#### 1.2 RFA Form
- [ ] Same as above - Read-Only document number
#### 1.3 Transmittal Form
- [ ] Same as above - Read-Only document number
**Files:**
- `frontend/components/correspondences/form.tsx`
- `frontend/components/rfas/form.tsx`
- `frontend/components/transmittals/form.tsx`
---
### 2. Admin Dashboard (`/admin/numbering`)
#### 2.1 Tab Structure
```
/admin/numbering
├── Templates (existing - keep as is)
├── Metrics & Audit (NEW)
└── Admin Tools (NEW)
```
#### 2.2 Templates Tab (Existing)
- [ ] Keep current functionality
- [ ] Make it the first tab (default)
#### 2.3 Metrics & Audit Tab (NEW)
- [ ] Fetch metrics from `GET /admin/document-numbering/metrics`
- [ ] Display:
- Sequence utilization gauge
- Lock wait time chart
- Generation rate chart
- Recent errors table
- Audit logs table with filters
#### 2.4 Admin Tools Tab (NEW)
- [ ] **Manual Override Form**:
- Input: document_type, document_number, reason
- Calls `POST /admin/document-numbering/manual-override`
- [ ] **Void & Replace Form**:
- Input: document_id, reason
- Calls `POST /admin/document-numbering/void-and-replace`
- [ ] **Bulk Import Form**:
- Upload CSV/Excel file
- Preview before import
- Calls `POST /admin/document-numbering/bulk-import`
---
### 3. API Integration
#### 3.1 New API Endpoints
```typescript
// services/document-numbering.service.ts (frontend)
interface NumberingMetrics {
sequenceUtilization: number;
lockWaitTimeP95: number;
generationRate: number;
recentErrors: ErrorEntry[];
}
// GET /admin/document-numbering/metrics
getMetrics(): Promise<NumberingMetrics>
// POST /admin/document-numbering/manual-override
manualOverride(dto: ManualOverrideDto): Promise<void>
// POST /admin/document-numbering/void-and-replace
voidAndReplace(dto: VoidReplaceDto): Promise<{ newDocumentNumber: string }>
// POST /admin/document-numbering/bulk-import
bulkImport(file: File): Promise<ImportResult>
// GET /document-numbering/logs/audit
getAuditLogs(params: AuditQueryParams): Promise<PaginatedAuditLogs>
```
#### 3.2 DTOs
```typescript
interface ManualOverrideDto {
documentType: string;
documentNumber: string;
reason: string;
}
interface VoidReplaceDto {
documentId: number;
reason: string;
}
interface AuditQueryParams {
operation?: 'RESERVE' | 'CONFIRM' | 'CANCEL' | 'MANUAL_OVERRIDE' | 'VOID' | 'GENERATE';
dateFrom?: string;
dateTo?: string;
userId?: number;
page?: number;
limit?: number;
}
```
---
### 4. Components to Create
| Component | Path | Description |
| ------------------ | ----------------------------------------------- | --------------------------- |
| MetricsDashboard | `components/numbering/metrics-dashboard.tsx` | Metrics charts and gauges |
| AuditLogsTable | `components/numbering/audit-logs-table.tsx` | Filterable audit log viewer |
| ManualOverrideForm | `components/numbering/manual-override-form.tsx` | Admin tool form |
| VoidReplaceForm | `components/numbering/void-replace-form.tsx` | Admin tool form |
| BulkImportForm | `components/numbering/bulk-import-form.tsx` | CSV/Excel uploader |
---
### 5. UI/UX Requirements
#### 5.1 Document Number Display
- Use a Badge or Chip style for the Document Number
- Color: Info (blue) for auto-generated numbers
- Color: Warning (amber) for manual overrides
- Color: Destructive (red) for voided numbers
#### 5.2 Admin Tools Access Control
- Hide the Admin Tools tab from users without the `system.manage_settings` permission
- Show a confirmation dialog before Manual Override / Void
---
## Files to Create/Modify
| Action | Path |
| ------ | -------------------------------------------------------- |
| MODIFY | `frontend/app/(admin)/admin/numbering/page.tsx` |
| CREATE | `frontend/components/numbering/metrics-dashboard.tsx` |
| CREATE | `frontend/components/numbering/audit-logs-table.tsx` |
| CREATE | `frontend/components/numbering/manual-override-form.tsx` |
| CREATE | `frontend/components/numbering/void-replace-form.tsx` |
| CREATE | `frontend/components/numbering/bulk-import-form.tsx` |
| MODIFY | `frontend/services/document-numbering.service.ts` |
| MODIFY | `frontend/components/correspondences/form.tsx` |
---
## Acceptance Criteria
- [ ] Document Number is Read-Only in Edit mode on every form
- [ ] Admin Dashboard displays metrics correctly
- [ ] Manual Override works and records an audit entry
- [ ] Void/Replace generates a new number and links it to the original document
- [ ] Permission checks are correct for Admin Tools
---
## References
- [Requirements v1.6.2](file:///d:/nap-dms.lcbp3/specs/01-requirements/03.11-document-numbering.md)
- [Frontend Guidelines](file:///d:/nap-dms.lcbp3/specs/03-implementation/frontend-guidelines.md)
- [REQ-009 Original Task](file:///d:/nap-dms.lcbp3/specs/06-tasks/REQ-009-DocumentNumbering.md)


@@ -0,0 +1,233 @@
# **Complete Gitea Setup Commands + Daily Usage / Troubleshooting / Branch Commands**
---
📘 Git + Gitea (QNAP / Container Station) Cheat Sheet
This guide covers:
- Full Gitea setup commands
- Day-to-day Git commands
- Repository troubleshooting
- Working with branches
- Reset / clone / merge / rebase
---
## 🧩 SECTION 1: Setting Up Gitea from Scratch
🔹 1) Clear the old host key (use when Gitea has been reset or the IP/key has changed)
```bash
ssh-keygen -R "[git.np-dms.work]:2222"
```
🔹 2) Connect for the first time (you will be prompted about the fingerprint)
```bash
ssh -T git@git.np-dms.work -p 2222
```
🔹 3) Display the SSH public key to add to Gitea
```bash
cat /root/.ssh/id_ed25519.pub
cat /root/.ssh/id_rsa.pub
```
🔹 4) Add a new remote (if not yet added)
```bash
git remote add origin ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```
🔹 5) Remove the old remote if it is wrong
```bash
git remote remove origin
```
🔹 6) First push after setup
```bash
git push -u origin main
```
🔹 7) Clone the whole repo fresh
```bash
git clone ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```
---
## 🧩 SECTION 2: Daily Git Commands
🟦 Check working-tree status
```bash
git status
```
🟦 See which files were modified
```bash
git diff
```
🟦 Stage all files
```bash
git add .
```
🟦 Commit the changes
```bash
git commit -m "message"
```
🟦 Push
```bash
git push
```
🟦 Pull (fetch the latest work)
```bash
git pull
```
🟦 Pull (fetch the latest work) with rebase
```bash
git pull --rebase
```
🟦 View the log
```bash
git log
```
---
## 🧩 SECTION 3: Working with Branches
### List all branches
```bash
git branch
```
### Create a new branch
```bash
git checkout -b feature/login-page
```
### Switch branches
```bash
git checkout main
```
### Push a branch to Gitea
```bash
git push -u origin feature/login-page
```
### Delete a local branch
```bash
git branch -d feature/login-page
```
### Delete a branch on Gitea
```bash
git push origin --delete feature/login-page
```
### Merge a branch into main
```bash
git checkout main
git pull
git merge feature/login-page
git push
```
### Rebase for a clean history
```bash
git checkout feature/login-page
git rebase main
git checkout main
git merge feature/login-page
git push
```
---
## 🧩 SECTION 4: Fixing Repository Problems
🔴 (1) Reset the repo to match the remote exactly
⚠ Use when local files are broken or changes have become unmanageable
```bash
git fetch --all
git reset --hard origin/main
```
🔴 (2) Resolve conflicts when pulling
```bash
git pull --rebase
```
🔴 (3) Check where the remote points
```bash
git remote -v
```
🔴 (4) Switch to a new remote
```bash
git remote remove origin
git remote add origin ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```
🔴 (5) Fix a wrong commit message
```bash
git commit --amend
```
🔴 (6) Undo the last commit (keeps the files)
```bash
git reset --soft HEAD~1
```
🔴 (7) View a condensed log
```bash
git log --oneline --graph
```
🔴 (8) Clone the whole repo fresh (when badly broken)
```bash
rm -rf lcbp3
git clone ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```
---
## 📌 END


@@ -0,0 +1,91 @@
# Installing Gitea in Docker
* Gitea user id:
* uid=1000(git) gid=1000(git) groups=1000(git)
## Setting Permissions
```bash
# Hand ownership of the Gitea data to uid/gid 1000
chown -R 1000:1000 /share/Container/gitea/
ls -l /share/Container/gitea/etc/app.ini
setfacl -R -m u:1000:rwx /share/Container/gitea/
setfacl -R -m u:70:rwx /share/Container/git/postgres/
getfacl /share/Container/git/etc/app.ini

# Clear ACLs, then reapply ownership and ACLs selectively
setfacl -R -b /share/Container/gitea/
chgrp -R administrators /share/Container/gitea/
chown -R 1000:1000 /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
setfacl -m u:1000:rwx -m g:1000:rwx /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
```
## Docker Compose File
```yml
# File: share/Container/git/docker-compose.yml
# DMS Container v1_4_1: services and folders separated; application name: git, service: gitea
networks:
lcbp3:
external: true
giteanet:
external: true
name: gitnet
services:
gitea:
image: gitea/gitea:latest-rootless
container_name: gitea
restart: always
stdin_open: true
tty: true
environment:
# ---- File ownership in QNAP ----
USER_UID: "1000"
USER_GID: "1000"
TZ: Asia/Bangkok
# ---- Server / Reverse proxy (NPM) ----
GITEA__server__ROOT_URL: https://git.np-dms.work/
GITEA__server__DOMAIN: git.np-dms.work
GITEA__server__SSH_DOMAIN: git.np-dms.work
GITEA__server__START_SSH_SERVER: "true"
GITEA__server__SSH_PORT: "22"
GITEA__server__SSH_LISTEN_PORT: "22"
GITEA__server__LFS_START_SERVER: "true"
GITEA__server__HTTP_ADDR: "0.0.0.0"
GITEA__server__HTTP_PORT: "3000"
GITEA__server__TRUSTED_PROXIES: "127.0.0.1/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
# --- Database settings
GITEA__database__DB_TYPE: mysql
GITEA__database__HOST: mariadb:3306
GITEA__database__NAME: "gitea"
GITEA__database__USER: "gitea"
GITEA__database__PASSWD: "Center#2025"
# --- repos
GITEA__repository__ROOT: /var/lib/gitea/git/repositories
DISABLE_HTTP_GIT: "false"
ENABLE_BASIC_AUTHENTICATION: "true"
# --- Enable Package Registry ---
GITEA__packages__ENABLED: "true"
GITEA__packages__REGISTRY__ENABLED: "true"
GITEA__packages__REGISTRY__STORAGE_TYPE: local
GITEA__packages__REGISTRY__STORAGE_PATH: /data/registry
# Optional: lock install after setup (set to "true" once onboarding is complete)
GITEA__security__INSTALL_LOCK: "true"
volumes:
- /share/Container/gitea/backup:/backup
- /share/Container/gitea/etc:/etc/gitea
- /share/Container/gitea/lib:/var/lib/gitea
# Serve the repo root from /share/dms-data/gitea_repos
- /share/dms-data/gitea_repos:/var/lib/gitea/git/repositories
- /share/dms-data/gitea_registry:/data/registry
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3003:3000" # HTTP (behind NPM)
- "2222:22" # SSH for git clone/push
networks:
- lcbp3
- giteanet
```


@@ -0,0 +1,957 @@
# Infrastructure Setup
## 1. Redis Cluster Configuration
### 1.1 Docker Compose Setup
```yaml
# docker-compose-redis.yml
version: '3.8'
services:
redis-1:
image: redis:7-alpine
container_name: lcbp3-redis-1
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
ports:
- "6379:6379"
- "16379:16379"
volumes:
- redis-1-data:/data
networks:
- lcbp3-network
restart: unless-stopped
redis-2:
image: redis:7-alpine
container_name: lcbp3-redis-2
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
ports:
- "6380:6379"
- "16380:16379"
volumes:
- redis-2-data:/data
networks:
- lcbp3-network
restart: unless-stopped
redis-3:
image: redis:7-alpine
container_name: lcbp3-redis-3
command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
ports:
- "6381:6379"
- "16381:16379"
volumes:
- redis-3-data:/data
networks:
- lcbp3-network
restart: unless-stopped
volumes:
redis-1-data:
redis-2-data:
redis-3-data:
networks:
lcbp3-network:
external: true
```
#### Initialize Cluster
```bash
# Start Redis nodes
docker-compose -f docker-compose-redis.yml up -d
# Wait for nodes to start
sleep 10
# Create cluster
docker exec -it lcbp3-redis-1 redis-cli --cluster create \
172.20.0.2:6379 \
172.20.0.3:6379 \
172.20.0.4:6379 \
--cluster-replicas 0
# Verify cluster
docker exec -it lcbp3-redis-1 redis-cli cluster info
docker exec -it lcbp3-redis-1 redis-cli cluster nodes
```
#### Health Check Script
```bash
#!/bin/bash
# scripts/check-redis-cluster.sh
echo "🔍 Checking Redis Cluster Health..."
for port in 6379 6380 6381; do
  echo -e "\n📍 Node on port $port:"
  # Check if node is up
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 ping
  # Check cluster status
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 cluster info | grep cluster_state
  # Check memory usage
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 info memory | grep used_memory_human
done
echo -e "\n✅ Cluster check complete"
```
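The backend containers (see the compose file in the Backend Service Configuration section) receive this cluster's node list as `REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379`. A minimal sketch of turning that string into the node array that ioredis' `Cluster` constructor expects; `parseClusterNodes` is a hypothetical helper name, not part of ioredis:

```typescript
interface ClusterNode {
  host: string;
  port: number;
}

// Parse "host1:port1,host2:port2,..." into ioredis cluster nodes,
// defaulting to 6379 when a port is omitted.
function parseClusterNodes(env: string): ClusterNode[] {
  return env
    .split(',')
    .map((entry) => entry.trim())
    .filter(Boolean)
    .map((entry) => {
      const [host, port] = entry.split(':');
      return { host, port: port ? Number(port) : 6379 };
    });
}

const nodes = parseClusterNodes('redis-1:6379,redis-2:6379,redis-3:6379');
console.log(nodes.length); // 3
// The list would then feed e.g. `new Redis.Cluster(nodes)` in the backend.
```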
---
## 2. Database Configuration
### 2.1 MariaDB Optimization for Numbering
```ini
# /etc/mysql/mariadb.conf.d/50-numbering.cnf
[mysqld]
# Connection pool
max_connections = 200
thread_cache_size = 50
# Query cache (disabled for InnoDB)
query_cache_type = 0
query_cache_size = 0
# InnoDB settings
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
# Performance Schema
performance_schema = ON
performance_schema_instrument = 'wait/lock/%=ON'
# Binary logging
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 7
max_binlog_size = 100M
# Slow query log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow-query.log
long_query_time = 1
```
### 2.2 Monitoring Locks
```sql
-- Check for lock contention
SELECT
r.trx_id waiting_trx_id,
r.trx_mysql_thread_id waiting_thread,
r.trx_query waiting_query,
b.trx_id blocking_trx_id,
b.trx_mysql_thread_id blocking_thread,
b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;
-- Check active transactions
SELECT * FROM information_schema.innodb_trx;
-- Kill long-running transaction (if needed)
KILL <thread_id>;
```
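When a counter row is locked (e.g. via `SELECT ... FOR UPDATE`), callers should retry against a bounded deadline instead of piling up on `innodb_lock_wait_timeout`. A minimal retry sketch in TypeScript, assuming a `NUMBERING_LOCK_TIMEOUT`-style budget in milliseconds; `withLockRetry` and the `attempt` callback are hypothetical, not part of the backend:

```typescript
// Retry `attempt` until it yields a value or the time budget elapses.
// `attempt` resolves to null while the lock is held elsewhere.
async function withLockRetry<T>(
  attempt: () => Promise<T | null>,
  timeoutMs = 5000, // cf. NUMBERING_LOCK_TIMEOUT
  retryDelayMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = await attempt();
    if (result !== null) return result;
    if (Date.now() + retryDelayMs > deadline) {
      throw new Error(`lock not acquired within ${timeoutMs} ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
  }
}

// Example: the third attempt succeeds.
let attempts = 0;
withLockRetry(async () => (++attempts < 3 ? null : 'counter-lock'), 1000, 10)
  .then((token) => console.log(`acquired ${token} after ${attempts} attempts`));
```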
---
## 3. Backend Service Configuration
### 3.1 Backend Service Deployment
#### Docker Compose
```yaml
# docker-compose-backend.yml
version: '3.8'
services:
backend-1:
image: lcbp3-backend:latest
container_name: lcbp3-backend-1
environment:
- NODE_ENV=production
- DB_HOST=mariadb-primary
- REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379
- NUMBERING_LOCK_TIMEOUT=5000
- NUMBERING_RESERVATION_TTL=300
ports:
- "3001:3000"
depends_on:
- mariadb-primary
- redis-1
- redis-2
- redis-3
networks:
- lcbp3-network
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
backend-2:
image: lcbp3-backend:latest
container_name: lcbp3-backend-2
environment:
- NODE_ENV=production
- DB_HOST=mariadb-primary
- REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379
ports:
- "3002:3000"
depends_on:
- mariadb-primary
- redis-1
networks:
- lcbp3-network
restart: unless-stopped
networks:
lcbp3-network:
external: true
```
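The environment block above tunes the numbering service via `NUMBERING_LOCK_TIMEOUT` (milliseconds) and `NUMBERING_RESERVATION_TTL` (seconds). A defensive sketch of reading them with fallbacks, so a missing or malformed variable cannot produce `NaN`; `readIntEnv` is a hypothetical helper, not NestJS API:

```typescript
// Read an integer environment variable, falling back when absent or malformed.
function readIntEnv(
  env: Record<string, string | undefined>,
  key: string,
  fallback: number,
): number {
  const raw = env[key];
  const value = raw === undefined ? NaN : Number.parseInt(raw, 10);
  return Number.isFinite(value) ? value : fallback;
}

// Defaults mirror the values set in the compose file above.
const numberingConfig = {
  lockTimeoutMs: readIntEnv(process.env, 'NUMBERING_LOCK_TIMEOUT', 5000),
  reservationTtlSec: readIntEnv(process.env, 'NUMBERING_RESERVATION_TTL', 300),
};
console.log(numberingConfig);
```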
#### Health Check Endpoint
```typescript
// health/numbering.health.ts
import { Injectable } from '@nestjs/common';
import { HealthIndicator, HealthIndicatorResult } from '@nestjs/terminus';
import { Redis } from 'ioredis';
import { DataSource } from 'typeorm';
@Injectable()
export class NumberingHealthIndicator extends HealthIndicator {
constructor(
private redis: Redis,
private dataSource: DataSource,
) {
super();
}
async isHealthy(key: string): Promise<HealthIndicatorResult> {
const checks = await Promise.all([
this.checkRedis(),
this.checkDatabase(),
this.checkSequenceIntegrity(),
]);
const isHealthy = checks.every((check) => check.status === 'up');
return this.getStatus(key, isHealthy, { checks });
}
private async checkRedis(): Promise<any> {
try {
await this.redis.ping();
return { name: 'redis', status: 'up' };
} catch (error) {
return { name: 'redis', status: 'down', error: error.message };
}
}
private async checkDatabase(): Promise<any> {
try {
await this.dataSource.query('SELECT 1');
return { name: 'database', status: 'up' };
} catch (error) {
return { name: 'database', status: 'down', error: error.message };
}
}
private async checkSequenceIntegrity(): Promise<any> {
try {
const result = await this.dataSource.query(`
SELECT COUNT(*) as count
FROM document_numbering_sequences
WHERE current_value > (
SELECT max_value FROM document_numbering_configs
WHERE id = config_id
)
`);
const hasIssue = result[0].count > 0;
return {
name: 'sequence_integrity',
status: hasIssue ? 'degraded' : 'up',
exceeded_sequences: result[0].count,
};
} catch (error) {
return { name: 'sequence_integrity', status: 'down', error: error.message };
}
}
}
```
---
## 4. Monitoring & Alerting
### 4.1 Prometheus Configuration
```yaml
# prometheus.yml
global:
scrape_interval: 15s
evaluation_interval: 15s
alerting:
alertmanagers:
- static_configs:
- targets:
- alertmanager:9093
rule_files:
- "/etc/prometheus/alerts/numbering.yml"
scrape_configs:
- job_name: 'backend'
static_configs:
- targets:
- 'backend-1:3000'
- 'backend-2:3000'
metrics_path: '/metrics'
- job_name: 'redis-numbering'
static_configs:
- targets:
- 'redis-1:6379'
- 'redis-2:6379'
- 'redis-3:6379'
metrics_path: '/metrics'
- job_name: 'mariadb'
static_configs:
- targets:
- 'mariadb-exporter:9104'
```
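The `prometheus.yml` above loads `/etc/prometheus/alerts/numbering.yml`, and the runbooks later in this document reference alerts named `SequenceWarning`, `HighLockWaitTime`, and `RedisUnavailable`. A minimal sketch of that rule file is below; the metric names (`numbering_sequence_utilization_percent`, `numbering_lock_wait_seconds_bucket`) are assumptions and must match whatever the backend actually exports:

```yaml
# /etc/prometheus/alerts/numbering.yml (sketch; metric names are assumptions)
groups:
  - name: numbering
    rules:
      - alert: SequenceWarning
        expr: numbering_sequence_utilization_percent > 90
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Sequence {{ $labels.document_type }} above 90% utilization"
      - alert: HighLockWaitTime
        expr: histogram_quantile(0.95, rate(numbering_lock_wait_seconds_bucket[5m])) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "p95 numbering lock wait above 1s"
      - alert: RedisUnavailable
        expr: up{job="redis-numbering"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Redis numbering node {{ $labels.instance }} down"
```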
### 4.2 Alert Manager Configuration
```yaml
# alertmanager.yml
global:
resolve_timeout: 5m
route:
receiver: 'default'
group_by: ['alertname', 'severity']
group_wait: 10s
group_interval: 10s
repeat_interval: 12h
routes:
- match:
severity: critical
receiver: 'critical'
continue: true
- match:
severity: warning
receiver: 'warning'
receivers:
- name: 'default'
slack_configs:
- api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
channel: '#lcbp3-alerts'
title: '{{ .GroupLabels.alertname }}'
text: '{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}'
- name: 'critical'
email_configs:
- to: 'devops@lcbp3.com'
from: 'alerts@lcbp3.com'
smarthost: 'smtp.gmail.com:587'
auth_username: 'alerts@lcbp3.com'
auth_password: 'your-password'
headers:
Subject: '🚨 CRITICAL: {{ .GroupLabels.alertname }}'
pagerduty_configs:
- service_key: 'YOUR_PAGERDUTY_KEY'
- name: 'warning'
slack_configs:
- api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
channel: '#lcbp3-warnings'
```
### 4.3 Grafana Dashboards
#### Import Dashboard JSON
```bash
# Download dashboard template
curl -o numbering-dashboard.json \
https://raw.githubusercontent.com/lcbp3/grafana-dashboards/main/numbering.json
# Import to Grafana
curl -X POST http://admin:admin@localhost:3000/api/dashboards/db \
-H "Content-Type: application/json" \
-d @numbering-dashboard.json
```
#### Key Panels to Monitor
1. **Numbers Generated per Minute** - Rate of number creation
2. **Sequence Utilization** - Current usage vs max (alert >90%)
3. **Lock Wait Time (p95)** - Performance indicator
4. **Lock Failures** - System health indicator
5. **Redis Cluster Health** - Node status
6. **Database Connection Pool** - Resource usage
---
## 5. Backup & Recovery
### 5.1 Database Backup Strategy
#### Automated Backup Script
```bash
#!/bin/bash
# scripts/backup-numbering-db.sh
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/numbering"
DB_NAME="lcbp3_production"
echo "🔄 Starting backup at $DATE"
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup numbering tables only
docker exec lcbp3-mariadb mysqldump \
--single-transaction \
--routines \
--triggers \
$DB_NAME \
document_numbering_configs \
document_numbering_sequences \
document_numbering_audit_logs \
> $BACKUP_DIR/numbering_$DATE.sql
# Compress backup
gzip $BACKUP_DIR/numbering_$DATE.sql
# Keep only last 30 days
find $BACKUP_DIR -name "numbering_*.sql.gz" -mtime +30 -delete
echo "✅ Backup complete: numbering_$DATE.sql.gz"
```
#### Cron Schedule
```cron
# Run backup daily at 2 AM
0 2 * * * /opt/lcbp3/scripts/backup-numbering-db.sh >> /var/log/numbering-backup.log 2>&1
# Run integrity check weekly on Sunday at 3 AM
0 3 * * 0 /opt/lcbp3/scripts/check-sequence-integrity.sh >> /var/log/numbering-integrity.log 2>&1
```
### 5.2 Redis Backup
#### Enable RDB Persistence
```conf
# redis.conf
save 900 1      # snapshot after 900 s if at least 1 key changed
save 300 10     # snapshot after 300 s if at least 10 keys changed
save 60 10000   # snapshot after 60 s if at least 10000 keys changed
dbfilename dump.rdb
dir /data
# Enable AOF for durability
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
```
#### Backup Script
```bash
#!/bin/bash
# scripts/backup-redis.sh
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/redis"
mkdir -p $BACKUP_DIR
for i in 1 2 3; do
echo "Backing up redis-$i..."
# Trigger BGSAVE
docker exec lcbp3-redis-$i redis-cli -p 6379 BGSAVE
# Wait for save to complete
sleep 10
# Copy RDB file
docker cp lcbp3-redis-$i:/data/dump.rdb \
$BACKUP_DIR/redis-${i}_${DATE}.rdb
# Copy AOF file
docker cp lcbp3-redis-$i:/data/appendonly.aof \
$BACKUP_DIR/redis-${i}_${DATE}.aof
done
# Compress
tar -czf $BACKUP_DIR/redis_cluster_${DATE}.tar.gz $BACKUP_DIR/*_${DATE}.*
# Cleanup
rm $BACKUP_DIR/*_${DATE}.rdb $BACKUP_DIR/*_${DATE}.aof
echo "✅ Redis backup complete"
```
### 5.3 Recovery Procedures
#### Scenario 1: Restore from Database Backup
```bash
#!/bin/bash
# scripts/restore-numbering-db.sh
BACKUP_FILE=$1
if [ -z "$BACKUP_FILE" ]; then
echo "Usage: ./restore-numbering-db.sh <backup_file>"
exit 1
fi
echo "⚠️ WARNING: This will overwrite current numbering data!"
read -p "Continue? (yes/no): " confirm
if [ "$confirm" != "yes" ]; then
echo "Aborted"
exit 0
fi
# Decompress if needed
if [[ $BACKUP_FILE == *.gz ]]; then
gunzip -c $BACKUP_FILE > /tmp/restore.sql
RESTORE_FILE="/tmp/restore.sql"
else
RESTORE_FILE=$BACKUP_FILE
fi
# Restore
docker exec -i lcbp3-mariadb mysql lcbp3_production < $RESTORE_FILE
echo "✅ Restore complete"
echo "🔄 Please verify sequence integrity"
```
#### Scenario 2: Redis Node Failure
```bash
# Automatically handled by cluster
# Node will rejoin cluster when restarted
# Check cluster status
docker exec lcbp3-redis-1 redis-cli cluster info
# If node is failed, remove and add back
docker exec lcbp3-redis-1 redis-cli --cluster del-node <node-id>
docker exec lcbp3-redis-1 redis-cli --cluster add-node <new-node-ip>:6379 <cluster-ip>:6379
```
---
## 6. Maintenance Procedures
### 6.1 Sequence Adjustment
#### Increase Max Value
```sql
-- Check current utilization
SELECT
dc.document_type,
ds.current_value,
dc.max_value,
ROUND((ds.current_value * 100.0 / dc.max_value), 2) as utilization
FROM document_numbering_sequences ds
JOIN document_numbering_configs dc ON ds.config_id = dc.id
WHERE ds.current_value > dc.max_value * 0.8;
-- Increase max_value for type approaching limit
UPDATE document_numbering_configs
SET max_value = max_value * 10,
updated_at = CURRENT_TIMESTAMP
WHERE document_type = 'COR'
AND max_value < 9999999;
-- Audit log
INSERT INTO document_numbering_audit_logs (
operation, document_type, old_value, new_value,
user_id, metadata
) VALUES (
'ADJUST_MAX_VALUE', 'COR', '999999', '9999999',
1, '{"reason": "Approaching limit", "automated": false}'
);
```
#### Reset Yearly Sequence
```sql
-- For document types with yearly reset
-- Run on January 1st
START TRANSACTION;
-- Create new sequence for new year
INSERT INTO document_numbering_sequences (
config_id,
scope_value,
current_value,
last_used_at
)
SELECT
id as config_id,
YEAR(CURDATE()) as scope_value,
0 as current_value,
NULL as last_used_at
FROM document_numbering_configs
WHERE scope = 'YEARLY';
-- Verify
SELECT * FROM document_numbering_sequences
WHERE scope_value = YEAR(CURDATE());
COMMIT;
```
### 6.2 Cleanup Old Audit Logs
```sql
-- Archive logs older than 2 years
-- Run monthly
START TRANSACTION;
-- Create archive table (if not exists)
CREATE TABLE IF NOT EXISTS document_numbering_audit_logs_archive
LIKE document_numbering_audit_logs;
-- Move old logs to archive
INSERT INTO document_numbering_audit_logs_archive
SELECT * FROM document_numbering_audit_logs
WHERE timestamp < DATE_SUB(CURDATE(), INTERVAL 2 YEAR);
-- Delete from main table
DELETE FROM document_numbering_audit_logs
WHERE timestamp < DATE_SUB(CURDATE(), INTERVAL 2 YEAR);
-- Optimize table
OPTIMIZE TABLE document_numbering_audit_logs;
COMMIT;
-- Export archive to file (optional)
SELECT * FROM document_numbering_audit_logs_archive
INTO OUTFILE '/tmp/audit_archive_2023.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```
### 6.3 Redis Maintenance
#### Flush Orphaned Reservations
```bash
#!/bin/bash
# scripts/cleanup-expired-reservations.sh
echo "🧹 Cleaning up orphaned reservations..."
# Collect reservation keys from every node (KEYS is O(N); acceptable in a maintenance window)
KEYS=$(docker exec lcbp3-redis-1 redis-cli --cluster call 172.20.0.2:6379 KEYS "reservation:*" | grep -v "(error)")
COUNT=0
for KEY in $KEYS; do
  # TTL returns -1 for a key with no expiry (an orphaned reservation) and -2 if the key is already gone
  TTL=$(docker exec lcbp3-redis-1 redis-cli TTL "$KEY")
  if [ "$TTL" -eq -1 ]; then
    # Remove the orphaned reservation
    docker exec lcbp3-redis-1 redis-cli DEL "$KEY"
    ((COUNT++))
  fi
done
echo "✅ Cleaned up $COUNT orphaned reservations"
```
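The TTL branch in the script above relies on Redis' return-code convention: a TTL of `>= 0` means the key will expire on its own, `-1` means it has no expiry set (a reservation that lost its TTL), and `-2` means the key is already gone. A tiny TypeScript classifier (hypothetical, for a Node-based cleanup job) makes those cases explicit:

```typescript
type ReservationState = 'live' | 'orphaned' | 'gone';

// Map a Redis TTL reply onto the cleanup decision.
function classifyReservation(ttl: number): ReservationState {
  if (ttl >= 0) return 'live';       // expiry pending; Redis will remove it
  if (ttl === -1) return 'orphaned'; // no TTL set; safe to DEL
  return 'gone';                     // -2: key no longer exists
}

console.log(classifyReservation(-1)); // orphaned
```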
---
## 7. Disaster Recovery
### 7.1 Total System Failure
#### Recovery Steps
```bash
#!/bin/bash
# scripts/disaster-recovery.sh
echo "🚨 Starting disaster recovery..."
# 1. Start Redis cluster
echo "[1/7] Starting Redis cluster..."
docker-compose -f docker-compose-redis.yml up -d
sleep 30
# 2. Restore Redis backups
echo "[2/7] Restoring Redis backups..."
./scripts/restore-redis.sh /backups/redis/latest.tar.gz
# 3. Start database
echo "[3/7] Starting MariaDB..."
docker-compose -f docker-compose-db.yml up -d
sleep 30
# 4. Restore database
echo "[4/7] Restoring database..."
./scripts/restore-numbering-db.sh /backups/db/latest.sql.gz
# 5. Verify sequence integrity
echo "[5/7] Verifying sequence integrity..."
./scripts/check-sequence-integrity.sh
# 6. Start backend services
echo "[6/7] Starting backend services..."
docker-compose -f docker-compose-backend.yml up -d
# 7. Run health checks
echo "[7/7] Running health checks..."
sleep 60
for i in 1 2; do
  curl -f http://localhost:300$i/health || echo "Backend $i not healthy"
done
echo "✅ Disaster recovery complete"
echo "⚠️ Please verify system functionality manually"
```
### 7.2 RTO/RPO Targets
| Scenario | RTO | RPO | Priority |
| ---------------------------- | ------- | ------ | -------- |
| Single backend node failure | 0 min | 0 | P0 |
| Single Redis node failure | 0 min | 0 | P0 |
| Database primary failure | 5 min | 0 | P0 |
| Complete data center failure | 1 hour | 15 min | P1 |
| Data corruption | 4 hours | 1 day | P2 |
---
## 8. Runbooks
### 8.1 High Sequence Utilization (>90%)
**Alert**: `SequenceWarning` or `SequenceCritical`
**Steps**:
1. Check current utilization
```sql
SELECT document_type, current_value, max_value,
ROUND((current_value * 100.0 / max_value), 2) as pct
FROM document_numbering_sequences s
JOIN document_numbering_configs c ON s.config_id = c.id
WHERE current_value > max_value * 0.9;
```
2. Assess impact
- How many numbers left?
- Daily usage rate?
- Days until exhaustion?
3. Take action
```sql
-- Option A: Increase max_value
UPDATE document_numbering_configs
SET max_value = max_value * 10
WHERE document_type = 'COR';
-- Option B: Reset sequence (yearly types only)
-- Schedule for next year/month
```
4. Notify stakeholders
5. Update monitoring thresholds if needed
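Step 2's "days until exhaustion" estimate can be computed directly from the utilization query's columns plus an observed daily usage rate (a hypothetical helper, not part of the backend):

```typescript
// Estimate how many days remain before a sequence hits its configured max.
function daysUntilExhaustion(
  currentValue: number,
  maxValue: number,
  dailyRate: number,
): number {
  if (dailyRate <= 0) return Infinity; // no measurable usage
  return Math.max(0, (maxValue - currentValue) / dailyRate);
}

// e.g. 950,000 of 999,999 consumed at ~1,200 numbers/day:
console.log(Math.floor(daysUntilExhaustion(950_000, 999_999, 1_200))); // 41
```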
---
### 8.2 High Lock Wait Time
**Alert**: `HighLockWaitTime`
**Steps**:
1. Check Redis cluster health
```bash
docker exec lcbp3-redis-1 redis-cli cluster info
docker exec lcbp3-redis-1 redis-cli cluster nodes
```
2. Check database locks
```sql
SELECT * FROM information_schema.innodb_lock_waits;
SELECT * FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 30 SECOND;
```
3. Identify bottleneck
- Redis slow?
- Database slow?
- High concurrent load?
4. Take action based on cause:
- **Redis**: Add more nodes, check network latency
- **Database**: Optimize queries, increase connection pool
- **High load**: Scale horizontally (add backend nodes)
5. Monitor improvements
---
### 8.3 Redis Cluster Down
**Alert**: `RedisUnavailable`
**Steps**:
1. Verify all nodes down
```bash
for i in {1..3}; do
docker exec lcbp3-redis-$i redis-cli ping || echo "Node $i DOWN"
done
```
2. Check system falls back to DB-only mode
```bash
curl http://localhost:3001/health/numbering
# Should show: fallback_mode: true
```
3. Restart Redis cluster
```bash
docker-compose -f docker-compose-redis.yml restart
sleep 30
./scripts/check-redis-cluster.sh
```
4. If restart fails, restore from backup
```bash
./scripts/restore-redis.sh /backups/redis/latest.tar.gz
```
5. Verify numbering system back to normal
```bash
curl http://localhost:3001/health/numbering
# Should show: fallback_mode: false
```
6. Review logs for root cause
---
## 9. Performance Tuning
### 9.1 Slow Number Generation
**Diagnosis**:
```sql
-- Check slow queries (requires log_output=TABLE; otherwise inspect slow-query.log directly)
SELECT * FROM mysql.slow_log
WHERE sql_text LIKE '%document_numbering%'
ORDER BY query_time DESC
LIMIT 10;
-- Check index usage
EXPLAIN SELECT * FROM document_numbering_sequences
WHERE config_id = 1 AND scope_value = '2025'
FOR UPDATE;
```
**Optimizations**:
```sql
-- Add missing indexes
CREATE INDEX idx_sequence_lookup
ON document_numbering_sequences(config_id, scope_value);
-- Optimize table
OPTIMIZE TABLE document_numbering_sequences;
-- Update statistics
ANALYZE TABLE document_numbering_sequences;
```
### 9.2 Redis Memory Optimization
```bash
# Check memory usage
docker exec lcbp3-redis-1 redis-cli INFO memory
# If memory high, check keys
docker exec lcbp3-redis-1 redis-cli --bigkeys
# Set maxmemory policy
docker exec lcbp3-redis-1 redis-cli CONFIG SET maxmemory 2gb
docker exec lcbp3-redis-1 redis-cli CONFIG SET maxmemory-policy allkeys-lru
```
---
## 10. Security Hardening
### 10.1 Redis Security
```conf
# redis.conf
requirepass your-strong-redis-password
bind 0.0.0.0
protected-mode yes
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG "CONFIG_abc123"
```
### 10.2 Database Security
```sql
-- Create dedicated numbering user
CREATE USER 'numbering'@'%' IDENTIFIED BY 'strong-password';
-- Grant minimal permissions (MySQL/MariaDB does not support table-name wildcards
-- in GRANT, so each numbering table is granted explicitly)
GRANT SELECT, INSERT, UPDATE ON lcbp3_production.document_numbering_configs TO 'numbering'@'%';
GRANT SELECT, INSERT, UPDATE ON lcbp3_production.document_numbering_sequences TO 'numbering'@'%';
GRANT SELECT, INSERT ON lcbp3_production.document_numbering_audit_logs TO 'numbering'@'%';
GRANT SELECT ON lcbp3_production.users TO 'numbering'@'%';
FLUSH PRIVILEGES;
```
### 10.3 Network Security
```yaml
# docker-compose-network.yml
networks:
lcbp3-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
driver_opts:
com.docker.network.bridge.name: lcbp3-br
com.docker.network.bridge.enable_icc: "true"
com.docker.network.bridge.enable_ip_masquerade: "true"
```
---
## 11. Compliance & Audit
### 11.1 Audit Log Retention
```sql
-- Export audit logs for compliance
SELECT *
FROM document_numbering
```

# Installing MariaDB and phpMyAdmin in Docker
* user id of mariadb:
* uid=0(root) gid=0(root) groups=0(root)
## Set Permissions
```bash
chown -R 999:999 /share/Container/mariadb/init
chmod 755 /share/Container/mariadb/init
setfacl -R -m u:999:r-x /share/Container/mariadb/init
setfacl -R -d -m u:999:r-x /share/Container/mariadb/init
chown -R 33:33 /share/Container/pma/tmp
chmod 755 /share/Container/pma/tmp
setfacl -R -m u:33:rwx /share/Container/pma/tmp
setfacl -R -d -m u:33:rwx /share/Container/pma/tmp
chown -R 33:33 /share/dms-data/logs/pma
chmod 755 /share/dms-data/logs/pma
setfacl -R -m u:33:rwx /share/dms-data/logs/pma
setfacl -R -d -m u:33:rwx /share/dms-data/logs/pma
setfacl -R -m u:1000:rwx /share/Container/gitea
setfacl -R -m u:1000:rwx /share/dms-data/gitea_repos
setfacl -R -m u:1000:rwx /share/dms-data/gitea_registry
```
## Add a database & user for Nginx Proxy Manager (NPM)
```bash
docker exec -it mariadb mysql -u root -p
CREATE DATABASE npm;
CREATE USER 'npm'@'%' IDENTIFIED BY 'npm';
GRANT ALL PRIVILEGES ON npm.* TO 'npm'@'%';
FLUSH PRIVILEGES;
```
## Add a database & user for Gitea
```bash
docker exec -it mariadb mysql -u root -p
CREATE DATABASE gitea CHARACTER SET 'utf8mb4' COLLATE 'utf8mb4_unicode_ci';
CREATE USER 'gitea'@'%' IDENTIFIED BY 'Center#2025';
GRANT ALL PRIVILEGES ON gitea.* TO 'gitea'@'%';
FLUSH PRIVILEGES;
```
## Docker file
```yml
# File: share/Container/mariadb/docker-compose.yml
# DMS Container v1_4_1: services and folders split per application. Application name: lcbp3-db, Services: mariadb, pma
x-restart: &restart_policy
restart: unless-stopped
x-logging: &default_logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "5"
services:
mariadb:
<<: [*restart_policy, *default_logging]
image: mariadb:11.8
container_name: mariadb
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "2.0"
memory: 4G
reservations:
cpus: "0.5"
memory: 1G
environment:
MYSQL_ROOT_PASSWORD: "Center#2025"
MYSQL_DATABASE: "lcbp3"
MYSQL_USER: "center"
MYSQL_PASSWORD: "Center#2025"
TZ: "Asia/Bangkok"
ports:
- "3306:3306"
volumes:
- "/share/Container/mariadb/data:/var/lib/mysql"
- "/share/Container/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
- "/share/Container/mariadb/init:/docker-entrypoint-initdb.d:ro"
- "/share/dms-data/mariadb/backup:/backup"
healthcheck:
test:
["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -pCenter#2025 || exit 1"]
interval: 10s
timeout: 5s
retries: 15
networks:
lcbp3: {}
pma:
<<: [*restart_policy, *default_logging]
image: phpmyadmin:5-apache
container_name: pma
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "0.25"
memory: 256M
environment:
TZ: "Asia/Bangkok"
PMA_HOST: "mariadb"
PMA_PORT: "3306"
PMA_ABSOLUTE_URI: "https://pma.np-dms.work/"
UPLOAD_LIMIT: "1G"
MEMORY_LIMIT: "512M"
ports:
- "89:80"
# expose:
# - "80"
volumes:
- "/share/Container/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
- "/share/Container/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
- "/share/Container/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
- "/share/dms-data/logs/pma:/var/log/apache2"
depends_on:
mariadb:
condition: service_healthy
networks:
lcbp3: {}
networks:
lcbp3:
external: true
```

# Installing Nginx Proxy Manager (NPM) in Docker
* Default credentials: Email: admin@example.com, Password: changeme
* user id of NPM:
* uid=0(root) gid=0(root) groups=0(root)
---
## Set Permissions
```bash
# Check the NPM container's user id
docker exec -it npm id
chown -R 0:0 /share/Container/npm
setfacl -R -m u:0:rwx /share/Container/npm
```
## Note: Configurations
| Domain Names | Forward Hostname / IP | Forward Port | Cache Assets | Block Common Exploits | Websockets | Force SSL | HTTP/2 Support | HSTS Enabled |
| :----------- | :-------------------- | :----------- | :----------- | :-------------------- | :--------- | :-------- | :------------- | :----------- |
| backend.np-dms.work | backend | 3000 | [ ] | [x] | [ ] | [x] | [x] | [ ] |
| lcbp3.np-dms.work | frontend | 3000 | [x] | [x] | [x] | [x] | [x] | [ ] |
| db.np-dms.work | mariadb | 3306 | [x] | [x] | [x] | [x] | [x] | [ ] |
| git.np-dms.work | gitea | 3000 | [x] | [x] | [x] | [x] | [x] | [ ] |
| n8n.np-dms.work | n8n | 5678 | [x] | [x] | [x] | [x] | [x] | [ ] |
| npm.np-dms.work | npm | 81 | [ ] | [x] | [x] | [x] | [x] | [ ] |
| pma.np-dms.work | pma | 80 | [x] | [x] | [ ] | [x] | [x] | [ ] |
| np-dms.work, www.np-dms.work | localhost | 80 | [x] | [x] | [ ] | [x] | [x] | [ ] |
## Docker file
```yml
# File: share/Container/npm/docker-compose-npm.yml
# DMS Container v1_4_1: services and folders split per application. Application name: lcbp3-npm, Service: npm
x-restart: &restart_policy
restart: unless-stopped
x-logging: &default_logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "5"
services:
npm:
<<: [*restart_policy, *default_logging]
image: jc21/nginx-proxy-manager:latest
container_name: npm
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0" # 50% CPU
memory: 512M
ports:
- "80:80" # HTTP
- "443:443" # HTTPS
- "81:81" # NPM Admin UI
environment:
TZ: "Asia/Bangkok"
DB_MYSQL_HOST: "mariadb"
DB_MYSQL_PORT: 3306
DB_MYSQL_USER: "npm"
DB_MYSQL_PASSWORD: "npm"
DB_MYSQL_NAME: "npm"
# Uncomment this if IPv6 is not enabled on your host
DISABLE_IPV6: "true"
networks:
- lcbp3
- giteanet
volumes:
- "/share/Container/npm/data:/data"
      - "/share/dms-data/logs/npm:/data/logs" # <-- logging volume
      - "/share/Container/npm/letsencrypt:/etc/letsencrypt"
      - "/share/Container/npm/custom:/data/nginx/custom" # <-- required for http_top.conf
# - "/share/Container/lcbp3/npm/landing:/data/landing:ro"
landing:
image: nginx:1.27-alpine
container_name: landing
restart: unless-stopped
volumes:
- "/share/Container/npm/landing:/usr/share/nginx/html:ro"
networks:
- lcbp3
networks:
lcbp3:
external: true
giteanet:
external: true
name: gitnet
```

Setting up network segmentation and firewall rules is a smart move, especially since some services are exposed to the public (e.g. `lcbp3.np-dms.work`) while others are internal-only (e.g. `db.np-dms.work`).
On Omada equipment (ER7206 + OC200) the core strategy is to use **VLANs (Virtual LANs)** to group devices, and **Firewall ACLs (Access Control Lists)** to control traffic between those groups.
The recommendations below follow a "Zero Trust" approach adapted to this architecture.
---
## 1. 🌐 VLAN Segmentation
In the Omada Controller (OC200), go to `Settings > Wired Networks > LAN` and create the following networks (VLANs):
* **VLAN 1 (Default): Management**
  * **IP Range:** 192.168.1.x
  * **Purpose:** Network equipment only (ER7206, OC200, switches) plus the administrator's PC.
* **VLAN 10: Servers (DMZ)**
  * **IP Range:** 192.168.10.x
  * **Purpose:** The VLAN the **QNAP NAS** plugs into. The QNAP receives an IP in this range (e.g. `192.168.10.100`).
* **VLAN 20: Office / Trusted**
  * **IP Range:** 192.168.20.x
  * **Purpose:** Staff PCs, notebooks, and Wi-Fi clients that need to use the system (e.g. `lcbp3.np-dms.work`).
* **VLAN 30: Guests / Untrusted**
  * **IP Range:** 192.168.30.x
  * **Purpose:** Guest Wi-Fi. Must never reach the internal networks.
**Switch port configuration:**
After creating the VLANs, go to `Devices` > select your switch > `Ports` > assign Port Profiles:
* Port connected to the QNAP NAS: profile **VLAN 10**
* Ports connected to staff PCs: profile **VLAN 20**
---
## 2. 🔥 Firewall Rules (ACLs)
This is the heart of the setup. Go to `Settings > Network Security > ACL (Access Control)`.
Firewall rules are evaluated top-down (rule 1 before rule 2).
### A. Deny Rules - the most important part
**Rule 1: Isolate Guests (VLAN 30) from everything internal**
* **Name:** Isolate-Guests
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 30`
* **Destination:** `Network` -> `VLAN 1`, `VLAN 10`, `VLAN 20`
* *(Guests can only reach the internet; no cross-VLAN traffic.)*
**Rule 2: Stop Servers (VLAN 10) from attacking other segments**
* **Name:** Isolate-Servers
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 10`
* **Destination:** `Network` -> `VLAN 20`
* *(If the server (QNAP) is ever compromised, it cannot initiate connections to staff PCs (VLAN 20) to spread malware.)*
**Rule 3: Block Office from the admin interfaces**
* **Name:** Block-Office-to-Management
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 20`
* **Destination:** `Network` -> `VLAN 1`
* *(Prevents ordinary staff from reaching the router or controller configuration pages.)*
### B. Allow Rules
**Rule 4: Allow Office (VLAN 20) to reach the required services**
* **Name:** Allow-Office-to-Services
* **Policy:** Allow
* **Source:** `Network` -> `VLAN 20`
* **Destination:** `IP Group` -> (create a group named `QNAP_Services` pointing at `192.168.10.100`, the QNAP's IP)
* **Port:** `Service` -> (create a Port Group named `Web_Services`):
  * TCP 443 (HTTPS - for every service: lcbp3, git, pma, ...)
  * TCP 80 (HTTP - for NPM redirects)
  * TCP 81 (NPM Admin UI)
  * TCP 2222 (Gitea SSH)
  * (Ports 3000, 3003, 5678, and 89 do not need to be opened; NPM proxies them.)
### C. Final (Default) Rule
Omada usually ships with an "Allow All" rule at the bottom. You can leave it, or for maximum security (Zero Trust) change it to "Deny All" - but only once you are sure your Allow rules cover everything.
---
## 3. 🚪 Port Forwarding (exposing services to the public)
This part is not a firewall ACL, but it is required so that external users can reach the system.
Go to `Settings > Transmission > Port Forwarding`.
Create rules that forward WAN (internet) traffic to the Nginx Proxy Manager (NPM) running on the QNAP (VLAN 10):
* **Name:** Allow-NPM-HTTPS
  * **External Port:** 443
  * **Internal Port:** 443
  * **Internal IP:** `192.168.10.100` (the QNAP's IP)
  * **Protocol:** TCP
* **Name:** Allow-NPM-HTTP (for Let's Encrypt)
  * **External Port:** 80
  * **Internal Port:** 80
  * **Internal IP:** `192.168.10.100` (the QNAP's IP)
  * **Protocol:** TCP
### Connection Flow Summary
1. **External user** -> `https://lcbp3.np-dms.work`
2. **ER7206** receives the request on port 443
3. **Port forwarding** passes it to `192.168.10.100:443` (QNAP NPM)
4. **NPM** (on the QNAP) forwards it to `backend:3000` or `frontend:3000` inside Docker
5. **Internal user (Office)** -> `https://lcbp3.np-dms.work`
6. **Firewall ACL** (rule 4) allows VLAN 20 to talk to `192.168.10.100:443`
7. (Steps 3-4 proceed as above)
This configuration clearly separates the servers from the staff network, which is far safer than keeping everything in a single flat LAN.

# Installing the Redis (cache) and Elasticsearch (search) Services in Docker
---
## **📝 Notes and Considerations**
* 1 Redis (Service: cache)
  * Image: redis:7-alpine is small and up to date.
  * Port: port 6379 is not exposed on the QNAP host. Per the architecture, the backend service (NestJS) talks to cache (Redis) directly over the internal lcbp3 network, which is safer.
  * Volume: data is mapped to /share/Container/cache/data in case Redis is used as a persistent cache (if it is only used for locking, the volume mapping is optional).
  * User ID: the redis:7-alpine image runs as user redis (UID 999).
* 2 Elasticsearch (Service: search)
  * Image: elasticsearch:8.11.1 - an explicitly pinned v8 release (not latest) for stability.
  * Port: port 9200 is likewise not exposed on the host; per NPM_setting.md, npm (Nginx Proxy Manager) forwards search.np-dms.work to the search service on port 9200 over the lcbp3 network.
  * Environment (very important):
    * discovery.type: "single-node": required - Elasticsearch v8 will not start if it cannot find other nodes in the cluster.
    * xpack.security.enabled: "false": keeps early development simple, so NestJS can call the API on port 9200 directly (enabling it requires SSL and token setup, which is considerably more complex).
    * ES_JAVA_OPTS: "-Xms1g -Xmx1g": best practice - always set an explicit heap size for Elasticsearch (1 GB here).
  * User ID: the elasticsearch image runs as user elasticsearch (UID 1000).
---
## Set Permissions
```bash
# Create the directories
mkdir -p /share/Container/services/cache/data
mkdir -p /share/Container/services/search/data
# Set ownership to match the in-container user IDs
# Redis (UID 999)
chown -R 999:999 /share/Container/services/cache/data
chmod -R 750 /share/Container/services/cache/data
# Elasticsearch (UID 1000)
chown -R 1000:1000 /share/Container/services/search/data
chmod -R 750 /share/Container/services/search/data
```
## Docker file
```yml
# File: /share/Container/services/docker-compose.yml (or whichever combined compose file you use)
# DMS Container v1_4_1: adds application name: services, with services 'cache' (Redis) and 'search' (Elasticsearch)
x-restart: &restart_policy
restart: unless-stopped
x-logging: &default_logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "5"
networks:
lcbp3:
external: true
services:
# ----------------------------------------------------------------
  # 1. Redis (for caching and distributed locks)
  # Service name: cache (as referenced by NPM and the backend plan)
# ----------------------------------------------------------------
cache:
<<: [*restart_policy, *default_logging]
    image: redis:7-alpine # Alpine image keeps the footprint small
container_name: cache
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0"
          memory: 2G # Redis is in-memory; allow enough headroom
reservations:
cpus: "0.25"
memory: 512M
environment:
TZ: "Asia/Bangkok"
networks:
- lcbp3 # เชื่อมต่อ network ภายในเท่านั้น
volumes:
- "/share/Container/cache/data:/data" # Map volume สำหรับเก็บข้อมูล (ถ้าต้องการ persistence)
healthcheck:
test: ["CMD", "redis-cli", "ping"] # ตรวจสอบว่า service พร้อมใช้งาน
interval: 10s
timeout: 5s
retries: 5
# ----------------------------------------------------------------
# 2. Elasticsearch (สำหรับ Advanced Search)
# Service Name: search (ตามที่ NPM และ Backend Plan อ้างอิง)
# ----------------------------------------------------------------
search:
<<: [*restart_policy, *default_logging]
image: elasticsearch:8.11.1 # แนะนำให้ระบุเวอร์ชันชัดเจน (V.8)
container_name: search
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "2.0" # Elasticsearch ใช้ CPU และ Memory ค่อนข้างหนัก
memory: 4G
reservations:
cpus: "0.5"
memory: 2G
environment:
TZ: "Asia/Bangkok"
# --- Critical Settings for Single-Node ---
discovery.type: "single-node" # สำคัญมาก: กำหนดให้รันแบบ 1 node
# --- Security (Disable for Development) ---
# ปิด xpack security เพื่อให้ NestJS เชื่อมต่อง่าย (backend -> search:9200)
# หากเป็น Production จริง ควรเปิดใช้งานและตั้งค่า token/cert ครับ
xpack.security.enabled: "false"
# --- Performance Tuning ---
# กำหนด Heap size (1GB) ให้เหมาะสมกับ memory limit (4GB)
ES_JAVA_OPTS: "-Xms1g -Xmx1g"
networks:
- lcbp3 # เชื่อมต่อ network ภายใน (NPM จะ proxy port 9200 จากภายนอก)
volumes:
- "/share/Container/search/data:/usr/share/elasticsearch/data" # Map volume สำหรับเก็บ data/indices
healthcheck:
# รอจนกว่า cluster health จะเป็น yellow หรือ green
test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -q '\"status\":\"green\"\\|\\\"status\":\"yellow\"'"]
interval: 30s
timeout: 10s
retries: 5
```
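The Elasticsearch healthcheck shells out to `curl` and `grep`, so its intended matching logic — accept a cluster status of `green` or `yellow`, reject anything else — can be sanity-checked offline. The JSON samples below are trimmed, hypothetical `_cluster/health` responses, not captured output:

```shell
# Pattern mirroring the healthcheck: accept "green" or "yellow" status.
pattern='"status":"(green|yellow)"'

# Trimmed, hypothetical _cluster/health responses:
healthy='{"cluster_name":"docker-cluster","status":"yellow","number_of_nodes":1}'
unhealthy='{"cluster_name":"docker-cluster","status":"red","number_of_nodes":1}'

echo "$healthy"   | grep -qE "$pattern" && echo "healthy: pass"
echo "$unhealthy" | grep -qE "$pattern" || echo "unhealthy: rejected"
```

Using `grep -qE` with a single alternation avoids the shell-escaping pitfalls of embedding two separate quoted patterns inside the compose `test` string.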
# Installing n8n in Docker
* Container user ID (the `node` user inside the n8n image):
* uid=1000(node) gid=1000(node) groups=1000(node)
## Set permissions
```bash
# n8n volumes
chown -R 1000:1000 /share/Container/n8n
chmod -R 755 /share/Container/n8n
```
## Docker Compose file
```yml
# File: /share/Container/n8n/docker-compose.yml
# DMS Container v1_4_1: services and folders split out; application name: n8n, service: n8n

x-restart: &restart_policy
  restart: unless-stopped

x-logging: &default_logging
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"

services:
  n8n:
    <<: [*restart_policy, *default_logging]
    image: n8nio/n8n:latest
    container_name: n8n
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "1.5"
          memory: 2G
        reservations:
          cpus: "0.25"
          memory: 512M
    environment:
      TZ: "Asia/Bangkok"
      NODE_ENV: "production"
      # N8N_PATH: "/n8n/"
      N8N_PUBLIC_URL: "https://n8n.np-dms.work/"
      WEBHOOK_URL: "https://n8n.np-dms.work/"
      N8N_EDITOR_BASE_URL: "https://n8n.np-dms.work/"
      N8N_PROTOCOL: "https"
      N8N_HOST: "n8n.np-dms.work"
      N8N_PORT: 5678
      N8N_PROXY_HOPS: "1"
      N8N_DIAGNOSTICS_ENABLED: 'false'
      N8N_SECURE_COOKIE: 'true'
      N8N_ENCRYPTION_KEY: "9AAIB7Da9DW1qAhJE5/Bz4SnbQjeAngI"
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_USER: "admin"
      N8N_BASIC_AUTH_PASSWORD: "Center#2025"
      N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: 'true'
      GENERIC_TIMEZONE: "Asia/Bangkok"
      DB_TYPE: mysqldb
      DB_MYSQLDB_DATABASE: "n8n"
      DB_MYSQLDB_USER: "center"
      DB_MYSQLDB_PASSWORD: "Center#2025"
      DB_MYSQLDB_HOST: "mariadb"
      DB_MYSQLDB_PORT: 3306
    ports:
      - "5678:5678"
    networks:
      lcbp3: {}
    volumes:
      - "/share/Container/n8n:/home/node/.n8n"
      - "/share/Container/n8n/cache:/home/node/.cache"
      - "/share/Container/n8n/scripts:/scripts"
      - "/share/Container/n8n/data:/data"
      - "/var/run/docker.sock:/var/run/docker.sock"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:5678/"]
      # test: ["CMD-SHELL", "curl -f http://127.0.0.1:5678/ || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 30

networks:
  lcbp3:
    external: true
```
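`N8N_ENCRYPTION_KEY` must be a strong random secret (n8n uses it to encrypt stored credentials, so the value committed in this file should be rotated if the file is ever shared). One common way to generate a suitable value, assuming `openssl` is available on the NAS:

```shell
# Generate 24 random bytes, base64-encoded (yields a 32-character string),
# suitable as a value for N8N_ENCRYPTION_KEY.
key=$(openssl rand -base64 24)
echo "N8N_ENCRYPTION_KEY=\"$key\""
```

Keeping the generated value in an `.env` file referenced by the compose file, rather than hard-coding it, avoids leaking the secret through version control.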
# **🗺️ Network Architecture & Firewall Map (LCBP3-DMS)**
This map shows the network segmentation (VLANs) and firewall rules (ACLs) for TP-Link Omada (ER7206/OC200), protecting the QNAP NAS and its Docker services.
## **1. Connection Flow Diagram**
```mermaid
graph TD
    subgraph Flow1 ["<b>External access (Public WAN)</b>"]
        User["External user (Internet)"]
    end

    subgraph Router ["<b>Router (ER7206)</b> - Gateway"]
        ER7206("<b>Port Forwarding</b><br/>TCP 80 → 192.168.10.100:80<br/>TCP 443 → 192.168.10.100:443")
    end
    User -- "Port 80/443 (HTTP/HTTPS)" --> ER7206

    subgraph VLANs ["<b>Internal networks (VLANs & Firewall Rules)</b>"]
        direction LR
        subgraph VLAN10 ["<b>VLAN 10: Servers (DMZ)</b><br/>192.168.10.x"]
            QNAP["<b>QNAP NAS (192.168.10.100)</b>"]
        end
        subgraph VLAN20 ["<b>VLAN 20: Office</b><br/>192.168.20.x"]
            OfficePC["Staff PCs / Wi-Fi"]
        end
        subgraph VLAN30 ["<b>VLAN 30: Guests</b><br/>192.168.30.x"]
            GuestPC["Guest Wi-Fi"]
        end
        subgraph Firewall ["<b>Firewall ACLs (managed by OC200)</b>"]
            direction TB
            rule1("<b>Rule 1: DENY</b><br/>Guest (VLAN 30) → All VLANs")
            rule2("<b>Rule 2: DENY</b><br/>Server (VLAN 10) → Office (VLAN 20)")
            rule3("<b>Rule 3: ALLOW</b><br/>Office (VLAN 20) → QNAP (192.168.10.100)<br/>Ports: 443, 80, 81, 2222")
        end
        %% --- Firewall rule edges ---
        GuestPC -.-x|rule1| QNAP
        QNAP -.-x|rule2| OfficePC
        OfficePC -- "https://lcbp3.np-dms.work" -->|rule3| QNAP
    end

    %% --- Router to QNAP ---
    ER7206 --> QNAP

    subgraph Docker ["<b>Docker network 'lcbp3' (inside QNAP)</b>"]
        direction TB
        subgraph PublicServices ["Services exposed via NPM"]
            direction LR
            NPM["<b>NPM (Nginx Proxy Manager)</b><br/>receives traffic from QNAP"]
            Frontend("frontend:3000")
            Backend("backend:3000")
            Gitea("gitea:3000")
            PMA("pma:80")
            N8N("n8n:5678")
        end
        subgraph InternalServices ["Internal services (reached by the backend only)"]
            direction LR
            DB("mariadb:3306")
            Cache("cache:6379")
            Search("search:9200")
        end
        %% --- Connections inside Docker ---
        NPM -- "lcbp3.np-dms.work" --> Frontend
        NPM -- "backend.np-dms.work" --> Backend
        NPM -- "git.np-dms.work" --> Gitea
        NPM -- "pma.np-dms.work" --> PMA
        NPM -- "n8n.np-dms.work" --> N8N
        Backend -- "lcbp3 network" --> DB
        Backend -- "lcbp3 network" --> Cache
        Backend -- "lcbp3 network" --> Search
    end

    %% --- QNAP to Docker ---
    QNAP --> NPM

    %% --- Styling ---
    classDef default fill:#f9f9f9,stroke:#333,stroke-width:2px;
    classDef router fill:#e6f7ff,stroke:#0056b3,stroke-width:2px;
    classDef vlan fill:#fffbe6,stroke:#d46b08,stroke-width:2px;
    classDef docker fill:#e6ffed,stroke:#096dd9,stroke-width:2px;
    classDef internal fill:#f0f0f0,stroke:#595959,stroke-width:2px,stroke-dasharray: 5 5;
    classDef fw fill:#fff0f0,stroke:#d9363e,stroke-width:2px,stroke-dasharray: 3 3;
    class Router,ER7206 router;
    class VLANs,VLAN10,VLAN20,VLAN30 vlan;
    class Docker,PublicServices,InternalServices docker;
    class DB,Cache,Search internal;
    class Firewall,rule1,rule2,rule3 fw;
```
## **2. Firewall ACL Summary (Omada OC200)**
These are the rules to create under Settings > Network Security > ACL, ordered top to bottom:
| Order | Name | Policy | Source | Destination | Ports |
| :---- | :---- | :---- | :---- | :---- | :---- |
| **1** | Isolate-Guests | **Deny** | Network → VLAN 30 (Guests) | Network → VLAN 1, 10, 20 | All |
| **2** | Isolate-Servers | **Deny** | Network → VLAN 10 (Servers) | Network → VLAN 20 (Office) | All |
| **3** | Block-Office-to-Mgmt | **Deny** | Network → VLAN 20 (Office) | Network → VLAN 1 (Mgmt) | All |
| **4** | Allow-Office-to-Services | **Allow** | Network → VLAN 20 (Office) | IP Group → QNAP_Services (192.168.10.100) | Port Group → Web_Services (443, 80, 81, 2222) |
| **5** | (Default) | Allow | Any | Any | All |
## **3. Port Forwarding Summary (Omada ER7206)**
These are the rules to create under Settings > Transmission > Port Forwarding:
| Name | External Port | Internal IP | Internal Port | Protocol |
| :---- | :---- | :---- | :---- | :---- |
| Allow-NPM-HTTPS | 443 | 192.168.10.100 | 443 | TCP |
| Allow-NPM-HTTP | 80 | 192.168.10.100 | 80 | TCP |
# 2025-12-17: Document Numbering Specs v1.6.2 Alignment
**Date:** 2025-12-17
**Type:** Specification Refactoring
**Related:** REQ-009-DocumentNumbering
---
## Summary
Updated the Document Numbering specification files to align with Requirements v1.6.2.
---
## Changes Made
### Updated Specifications
| File | From | To | Key Changes |
| ----------------------------------------------------- | ------ | ------ | --------------------------------------- |
| `05-decisions/ADR-002-document-numbering-strategy.md` | v2.0 | v3.0 | Version refs, compliance links, history |
| `04-operations/document-numbering-operations.md` | v1.6.0 | v1.6.2 | Status→APPROVED, file paths fixed |
| `03-implementation/document-numbering.md` | v1.6.1 | v1.6.2 | ADR reference fixed |
### New Task Files
| File | Purpose |
| ----------------------------------------------------- | ----------------------------- |
| `06-tasks/TASK-BE-017-document-numbering-refactor.md` | Backend implementation tasks |
| `06-tasks/TASK-FE-017-document-numbering-refactor.md` | Frontend implementation tasks |
---
## Key Decisions
1. **Single Source of Truth**: `document_number_counters` is the authoritative counter system
2. **Counter Key Structure**: Unified to 8 fields (project, orig, recip, type, sub_type, rfa_type, discipline, reset_scope)
3. **Number State Machine**: RESERVED → CONFIRMED → VOID/CANCELLED
4. **Deprecated Tokens**: `{ORG}`, `{TYPE}` replaced with explicit `{ORIGINATOR}`, `{RECIPIENT}`, `{CORR_TYPE}`
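The number state machine above can be captured as a tiny transition check. This is only a sketch: the exact allowed transitions (for example, whether VOID applies to RESERVED or to CONFIRMED numbers) are assumptions here and should be settled by TASK-BE-017:

```shell
# valid_transition <from> <to>
# Assumed transition table: RESERVED may become CONFIRMED or VOID;
# CONFIRMED may become CANCELLED. Everything else is rejected.
valid_transition() {
  case "$1:$2" in
    RESERVED:CONFIRMED|RESERVED:VOID|CONFIRMED:CANCELLED) return 0 ;;
    *) return 1 ;;
  esac
}

valid_transition RESERVED CONFIRMED && echo "RESERVED -> CONFIRMED: ok"
valid_transition CONFIRMED RESERVED || echo "CONFIRMED -> RESERVED: rejected"
```

Encoding the table explicitly, rather than allowing arbitrary status updates, is what makes the "Number State Machine" decision enforceable in the backend.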
---
## Next Actions
- [ ] Execute TASK-BE-017 (Backend team)
- [ ] Execute TASK-FE-017 (Frontend team, after BE ready)