260130:1446 Update Infrastructure
Some checks failed
Spec Validation / validate-markdown (push) Has been cancelled
Spec Validation / validate-diagrams (push) Has been cancelled
Spec Validation / check-todos (push) Has been cancelled

admin
2026-01-30 14:46:06 +07:00
parent cd73cc1549
commit 9e8bd25e1d
12 changed files with 1903 additions and 255 deletions

View File

@@ -35,4 +35,3 @@ if ($LASTEXITCODE -ne 0) {
}
Write-Host "✅ Done!" -ForegroundColor Green
pause

View File

@@ -3,10 +3,10 @@
---
title: 'System Architecture'
version: 1.5.0
version: 1.8.0
status: first-draft
owner: Nattanin Peancharoen
last_updated: 2025-11-30
last_updated: 2026-01-26
related:
  - specs/01-objectives.md
@@ -18,13 +18,375 @@ specs/01-objectives.md
- Domain: `np-dms.work`, `www.np-dms.work`
- IP: 159.192.126.103
- Server: QNAP (Model: TS-473A, RAM: 32GB, CPU: AMD Ryzen V1500B)
- Server: QNAP TS-473A, RAM: 32GB, CPU: AMD Ryzen V1500B, HDD: 4x 4TB RAID 5, SSD: 1TB used as cache, 2x 2.5Gbps ports
- Server: ASUSTOR AS5304T, RAM: 16GB, CPU: Intel Celeron @ 2.00GHz, HDD: 3x 6TB RAID 5, SSD: 1TB used as cache, 2x 2.5Gbps ports
- Router: TP-LINK ER7206, port 1 WAN/LAN SFP, port 2 WAN, ports 3-6 WAN/LAN 10/100/1000
- Core Switch: TP-LINK TL-SG2428P, LAN port 1-24 10/100/1000, SFP port 25-28 1Gbps
- Server Switch: AMPCOM, LAN port 1-8 10/100/1000/2500, SFP+ port 9 10Gbps
- Admin Switch: TP-LINK ES205G, LAN port 1-5 10/100/1000
- CCTV Switch: TP-LINK TL-SL1226P, ports 1-24 PoE+ 100Mbps, SFP ports 25-26 1Gbps
- IP Phone Switch: TP-LINK TL-SG1210P, ports 1-8 PoE+ 100Mbps, Uplink1 10/100/1000, Uplink2 SFP 1Gbps
- Controller: TP-LINK OC200
- Wireless Access Point: TP-LINK EAP610 x16
- CCTV: HikVision NVR (DS-7732NXI-K4) + 6 cameras
- IP Phone: Yealink x8
- Admin Desktop: Windows 11, LAN port 10/100/1000/2500
- Printer: Kyocera CS 3554ci, LAN port 10/100/1000
- Containerization: Container Station (Docker & Docker Compose); configuration and docker commands are run primarily through the Container Station UI
- Development Environment: VS Code/Cursor on Windows 11
- Data Storage: /share/dms-data on QNAP
- Constraint: external variables cannot be supplied via .env; they must be defined in docker-compose.yml only
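Because of this constraint, values that would normally come from a `.env` file have to be written literally into the compose file. A minimal sketch (the service name and variables here are hypothetical, not part of the actual stack):

```yml
services:
  api:                    # hypothetical service name
    image: node:20-alpine
    environment:
      DB_HOST: mariadb    # literal value — Container Station does not read .env
      TZ: "Asia/Bangkok"
```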
## **2.2 Configuration Management (Revised):**
## **2.2 Network Configuration**
**VLAN Networks**
| VLAN ID | Name | Purpose | Gateway/Subnet | DHCP | IP Range | DNS | Lease Time | ARP Detection | IGMP Snooping | MLD Snooping | Notes |
| ------- | ------ | --------- | --------------- | ---- | ------------------ | ------- | ---------- | ------------- | ------------- | ------------ | --------------- |
| 10 | SERVER | Interface | 192.168.10.1/24 | No | - | Custom | - | - | - | - | Static servers |
| 20 | MGMT | Interface | 192.168.20.1/24 | No | - | Custom | - | Enable | Enable | - | Management only |
| 30 | USER | Interface | 192.168.30.1/24 | Yes | 192.168.30.10-254 | Auto | 7 Days | - | Enable | - | User devices |
| 40 | CCTV | Interface | 192.168.40.1/24 | Yes | 192.168.40.100-150 | Auto | 7 Days | - | Enable | - | CCTV & NVR |
| 50 | VOICE | Interface | 192.168.50.1/24 | Yes | 192.168.50.201-250 | Auto | 7 Days | - | - | - | IP Phones |
| 60 | DMZ | Interface | 192.168.60.1/24 | No | - | 1.1.1.1 | - | - | - | - | Public services |
| 70 | GUEST | Interface | 192.168.70.1/24 | Yes | 192.168.70.200-250 | Auto | 1 Day | - | - | - | Guest |
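The addressing in the table follows one convention — VLAN ID N maps to subnet 192.168.N.0/24 with the gateway at .1 — which a small helper can make explicit (a sketch only; names mirror the table above):

```shell
#!/bin/bash
# Each VLAN's gateway/subnet is derived from its ID: 192.168.<ID>.1/24.
declare -A vlan_name=([10]=SERVER [20]=MGMT [30]=USER [40]=CCTV [50]=VOICE [60]=DMZ [70]=GUEST)

vlan_gateway() { echo "192.168.$1.1/24"; }

for id in 10 20 30 40 50 60 70; do
  printf '%-4s %-7s %s\n' "$id" "${vlan_name[$id]}" "$(vlan_gateway "$id")"
done
```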
**Switch Profiles**
| Profile Name | Native Network | Tagged Networks | Untagged Networks | Voice Network | Loopback Control | Usage |
| ---------------- | -------------- | --------------------- | ----------------- | ------------- | ---------------- | ----------------------- |
| 01_CORE_TRUNK | MGMT (20) | 10,30,40,50,60,70 | MGMT (20) | - | Spanning Tree | Router & switch uplinks |
| 02_MGMT_ONLY | MGMT (20) | MGMT (20) | - | - | Spanning Tree | Management only |
| 03_SERVER_ACCESS | SERVER (10) | MGMT (20) | SERVER (10) | - | Spanning Tree | QNAP / ASUSTOR |
| 04_CCTV_ACCESS | CCTV (40) | - | CCTV (40) | - | Spanning Tree | CCTV cameras |
| 05_USER_ACCESS | USER (30) | - | USER (30) | - | Spanning Tree | PC / Printer |
| 06_AP_TRUNK | MGMT (20) | USER (30), GUEST (70) | MGMT (20) | - | Spanning Tree | EAP610 Access Points |
| 07_VOICE_ACCESS | USER (30) | VOICE (50) | USER (30) | VOICE (50) | Spanning Tree | IP Phones |
**ER7206 Port Mapping**
| Port | Connected Device | Port | Description |
| ---- | ---------------- | ------------- | ----------- |
| 1 | - | - | - |
| 2 | WAN | - | Internet |
| 3 | SG2428P | PVID MGMT(20) | Core Switch |
| 4 | - | - | - |
| 5 | - | - | - |
| 6 | - | - | - |
**AMPCOM Port Aggregate Setting**
| Aggregate Group ID | Type | Member port | Aggregated Port |
| ------------------ | ---- | ----------- | --------------- |
| Trunk1 | LACP | 3,4 | 3,4 |
| Trunk2 | LACP | 5,6 | 5,6 |
**AMPCOM Port VLAN Mapping**
| Port | Connected Device | Port vlan type | Access VLAN | Native VLAN | Trunk vlan |
| ------ | ---------------- | -------------- | ----------- | ----------- | -------------------- |
| 1 | SG2428P | Trunk | - | 20 | 10,20,30,40,50,60,70 |
| 2 | - | Trunk | - | 20 | 10,20,30,40,50,60,70 |
| 7 | - | Access | 20 | - | - |
| 8 | Admin Desktop | Access | 20 | - | - |
| Trunk1 | QNAP | Trunk | - | 10 | 10,20,30,40,50,60,70 |
| Trunk2 | ASUSTOR | Trunk | - | 10 | 10,20,30,40,50,60,70 |
**NAS NIC Bonding Configuration**
| Device | Bonding Mode | Member Ports | VLAN Mode | Tagged VLAN | IP Address | Gateway | Notes |
| ------- | ------------------- | ------------ | --------- | ----------- | --------------- | ------------ | ---------------------- |
| QNAP | IEEE 802.3ad (LACP) | Adapter 1, 2 | Untagged | 10 (SERVER) | 192.168.10.8/24 | 192.168.10.1 | Primary NAS for DMS |
| ASUSTOR | IEEE 802.3ad (LACP) | Port 1, 2 | Untagged | 10 (SERVER) | 192.168.10.9/24 | 192.168.10.1 | Backup / Secondary NAS |
> **Note**: Both NAS units use LACP bonding for additional bandwidth and redundancy; the bond settings must match the AMPCOM switch aggregation (Trunk1/Trunk2)
**SG2428P Port Mapping**
| Port | Connected Device | Switch Profile | Description |
| ---- | ------------------------- | -------------------- | ------------- |
| 1 | ER7206 | 01_CORE_TRUNK | Internet |
| 2 | OC200 | 01_CORE_TRUNK | Controller |
| 3 | Ampcom 2.5G Switch Port 1 | LAG1 (01_CORE_TRUNK) | Uplink |
| 4 | - | LAG1 (01_CORE_TRUNK) | Reserved |
| 5 | EAP610-01 | 06_AP_TRUNK | Access Point |
| 6 | EAP610-02 | 06_AP_TRUNK | Access Point |
| 7 | EAP610-03 | 06_AP_TRUNK | Access Point |
| 8 | EAP610-04 | 06_AP_TRUNK | Access Point |
| 9 | EAP610-05 | 06_AP_TRUNK | Access Point |
| 10 | EAP610-06 | 06_AP_TRUNK | Access Point |
| 11 | EAP610-07 | 06_AP_TRUNK | Access Point |
| 12 | EAP610-08 | 06_AP_TRUNK | Access Point |
| 13 | EAP610-09 | 06_AP_TRUNK | Access Point |
| 14 | EAP610-10 | 06_AP_TRUNK | Access Point |
| 15 | EAP610-11 | 06_AP_TRUNK | Access Point |
| 16 | EAP610-12 | 06_AP_TRUNK | Access Point |
| 17 | EAP610-13 | 06_AP_TRUNK | Access Point |
| 18 | EAP610-14 | 06_AP_TRUNK | Access Point |
| 19 | EAP610-15 | 06_AP_TRUNK | Access Point |
| 20 | EAP610-16 | 06_AP_TRUNK | Access Point |
| 21 | Reserved | 01_CORE_TRUNK | |
| 22 | Reserved | 01_CORE_TRUNK | |
| 23 | Printer | 05_USER_ACCESS | Printer |
| 24 | ES205G | 01_CORE_TRUNK | Management PC |
| 25 | TL-SL1226P | 01_CORE_TRUNK | Uplink |
| 26 | SG1210P | 01_CORE_TRUNK | Uplink |
| 27 | Reserved | 01_CORE_TRUNK | |
| 28 | Reserved | 01_CORE_TRUNK | |
**ES205G Port Mapping (Admin Switch)**
| Port | Connected Device | VLAN | Description |
| ---- | ---------------- | ----------- | ----------- |
| 1 | SG2428P Port 24 | Trunk (All) | Uplink |
| 2 | Admin Desktop | MGMT (20) | Admin PC |
| 3 | Reserved | MGMT (20) | |
| 4 | Reserved | MGMT (20) | |
| 5 | Reserved | MGMT (20) | |
> **Note**: The ES205G is an unmanaged switch and does not support VLAN tagging, so every port sits in the uplink's native VLAN (20)
**TL-SL1226P Port Mapping (CCTV Switch)**
| Port | Connected Device | PoE | VLAN | Description |
| ---- | ---------------- | ---- | --------- | ----------- |
| 1 | Camera-01 | PoE+ | CCTV (40) | CCTV Camera |
| 2 | Camera-02 | PoE+ | CCTV (40) | CCTV Camera |
| 3 | Camera-03 | PoE+ | CCTV (40) | CCTV Camera |
| 4 | Camera-04 | PoE+ | CCTV (40) | CCTV Camera |
| 5 | Camera-05 | PoE+ | CCTV (40) | CCTV Camera |
| 6 | Camera-06 | PoE+ | CCTV (40) | CCTV Camera |
| 7-23 | Reserved | PoE+ | CCTV (40) | |
| 24 | HikVision NVR | - | CCTV (40) | NVR |
| 25 | SG2428P Port 25 | - | Trunk | SFP Uplink |
| 26 | Reserved | - | Trunk | SFP |
**SG1210P Port Mapping (IP Phone Switch)**
| Port | Connected Device | PoE | Data VLAN | Voice VLAN | Description |
| ------- | ---------------- | ---- | --------- | ---------- | ----------- |
| 1 | IP Phone-01 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 2 | IP Phone-02 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 3 | IP Phone-03 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 4 | IP Phone-04 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 5 | IP Phone-05 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 6 | IP Phone-06 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 7 | IP Phone-07 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| 8 | IP Phone-08 | PoE+ | USER (30) | VOICE (50) | IP Phone |
| Uplink1 | Reserved | - | Trunk | - | RJ45 Uplink |
| Uplink2 | SG2428P Port 26 | - | Trunk | - | SFP Uplink |
> **Note**: The SG1210P supports Voice VLAN: IP phones use VLAN 50 for voice traffic while VLAN 30 passes through to a PC connected behind the phone
**Static IP Allocation**
| VLAN | Device | IP Address | MAC Address | Notes |
| ---------- | --------------- | ------------------ | ----------- | ---------------- |
| SERVER(10) | QNAP | 192.168.10.8 | - | Primary NAS |
| SERVER(10) | ASUSTOR | 192.168.10.9 | - | Backup NAS |
| SERVER(10) | Docker Host | 192.168.10.10 | - | Containers |
| MGMT(20) | ER7206 | 192.168.20.1 | - | Gateway/Router |
| MGMT(20) | SG2428P | 192.168.20.2 | - | Core Switch |
| MGMT(20) | AMPCOM | 192.168.20.3 | - | Server Switch |
| MGMT(20) | TL-SL1226P | 192.168.20.4 | - | CCTV Switch |
| MGMT(20) | SG1210P | 192.168.20.5 | - | Phone Switch |
| MGMT(20) | OC200 | 192.168.20.250 | - | Omada Controller |
| MGMT(20) | Admin Desktop | 192.168.20.100 | - | Admin PC |
| USER(30) | Printer | 192.168.30.222 | - | Kyocera CS3554ci |
| CCTV(40) | NVR | 192.168.40.100 | - | HikVision NVR |
| CCTV(40) | Camera-01 to 06 | 192.168.40.101-106 | - | CCTV Cameras |
| USER(30) | Admin Desktop | 192.168.30.100 | - | Admin PC (USER) |
**DHCP Reservation (MAC Mapping)**
**CCTV MAC Address Mapping (VLAN 40)**
| Device Name | IP Address | MAC Address | Port (Switch) | Notes |
| ------------- | -------------- | ----------- | ------------- | ---------- |
| HikVision NVR | 192.168.40.100 | | Port 24 | Master NVR |
| Camera-01 | 192.168.40.101 | | Port 1 | |
| Camera-02 | 192.168.40.102 | | Port 2 | |
| Camera-03 | 192.168.40.103 | | Port 3 | |
| Camera-04 | 192.168.40.104 | | Port 4 | |
| Camera-05 | 192.168.40.105 | | Port 5 | |
| Camera-06 | 192.168.40.106 | | Port 6 | |
**IP Phone MAC Address Mapping (VLAN 50)**
| Device Name | IP Address | MAC Address | Port (Switch) | Notes |
| ----------- | -------------- | ----------- | ------------- | ------- |
| IP Phone-01 | 192.168.50.201 | | Port 1 | Yealink |
| IP Phone-02 | 192.168.50.202 | | Port 2 | Yealink |
| IP Phone-03 | 192.168.50.203 | | Port 3 | Yealink |
| IP Phone-04 | 192.168.50.204 | | Port 4 | Yealink |
| IP Phone-05 | 192.168.50.205 | | Port 5 | Yealink |
| IP Phone-06 | 192.168.50.206 | | Port 6 | Yealink |
| IP Phone-07 | 192.168.50.207 | | Port 7 | Yealink |
| IP Phone-08 | 192.168.50.208 | | Port 8 | Yealink |
**Wireless SSID Mapping (OC200 Controller)**
| SSID Name | Band | VLAN | Security | Portal Auth | Notes |
| --------- | ------- | ---------- | --------- | ----------- | ----------------------- |
| PSLCBP3 | 2.4G/5G | USER (30) | WPA2/WPA3 | No | Staff WiFi |
| GUEST | 2.4G/5G | GUEST (70) | WPA2 | Yes | Guest WiFi with Captive |
> **Note**: Every SSID is broadcast from all 16 EAP610s via the 06_AP_TRUNK profile, which tags VLANs 30 and 70
**Gateway ACL (ER7206 Firewall Rules)**
*Inter-VLAN Routing Policy*
| # | Name | Source | Destination | Service | Action | Log | Notes |
| --- | ----------------- | --------------- | ---------------- | -------------- | ------ | --- | --------------------------- |
| 1 | MGMT-to-ALL | VLAN20 (MGMT) | Any | Any | Allow | No | Admin full access |
| 2 | SERVER-to-ALL | VLAN10 (SERVER) | Any | Any | Allow | No | Servers outbound access |
| 3 | USER-to-SERVER | VLAN30 (USER) | VLAN10 (SERVER) | HTTP/HTTPS/SSH | Allow | No | Users access web apps |
| 4 | USER-to-DMZ | VLAN30 (USER) | VLAN60 (DMZ) | HTTP/HTTPS | Allow | No | Users access DMZ services |
| 5 | USER-to-MGMT | VLAN30 (USER) | VLAN20 (MGMT) | Any | Deny | Yes | Block users from management |
| 6 | USER-to-CCTV | VLAN30 (USER) | VLAN40 (CCTV) | Any | Deny | Yes | Isolate CCTV |
| 7 | USER-to-VOICE | VLAN30 (USER) | VLAN50 (VOICE) | Any | Deny | No | Isolate Voice |
| 8 | USER-to-GUEST | VLAN30 (USER) | VLAN70 (GUEST) | Any | Deny | No | Isolate Guest |
| 9 | CCTV-to-INTERNET | VLAN40 (CCTV) | WAN | HTTPS (443) | Allow | No | NVR cloud backup (optional) |
| 10 | CCTV-to-ALL | VLAN40 (CCTV) | Any (except WAN) | Any | Deny | Yes | CCTV isolated |
| 11 | VOICE-to-SIP | VLAN50 (VOICE) | SIP Server IP | SIP/RTP | Allow | No | Voice to SIP trunk |
| 12 | VOICE-to-ALL | VLAN50 (VOICE) | Any | Any | Deny | No | Voice isolated |
| 13 | DMZ-to-ALL | VLAN60 (DMZ) | Any (internal) | Any | Deny | Yes | DMZ cannot reach internal |
| 14 | GUEST-to-INTERNET | VLAN70 (GUEST) | WAN | HTTP/HTTPS/DNS | Allow | No | Guest internet only |
| 15 | GUEST-to-ALL | VLAN70 (GUEST) | Any (internal) | Any | Deny | Yes | Guest isolated |
| 99 | DEFAULT-DENY | Any | Any | Any | Deny | Yes | Catch-all deny |
*WAN Inbound Rules (Port Forwarding)*
| # | Name | WAN Port | Internal IP | Internal Port | Protocol | Notes |
| --- | --------- | -------- | ------------ | ------------- | -------- | ------------------- |
| 1 | HTTPS-NPM | 443 | 192.168.10.8 | 443 | TCP | Nginx Proxy Manager |
| 2 | HTTP-NPM | 80 | 192.168.10.8 | 80 | TCP | HTTP redirect |
> **Note**: The ER7206 follows a default-deny principle; rules are evaluated top-down
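The top-down, first-match evaluation above can be sketched as a plain function (a deliberate simplification: it matches only source and destination VLAN and ignores service/port, so SIP and DMZ exceptions are omitted):

```shell
#!/bin/bash
# First matching rule wins; anything that falls through hits rule 99 (DEFAULT-DENY).
acl_decision() {  # usage: acl_decision <src_vlan> <dst_vlan|WAN>
  case "$1:$2" in
    20:*)        echo Allow ;;  # 1   MGMT-to-ALL
    10:*)        echo Allow ;;  # 2   SERVER-to-ALL
    30:10|30:60) echo Allow ;;  # 3-4 USER to SERVER/DMZ
    30:*)        echo Deny  ;;  # 5-8 USER isolation
    40:WAN)      echo Allow ;;  # 9   NVR cloud backup
    40:*)        echo Deny  ;;  # 10  CCTV isolated
    70:WAN)      echo Allow ;;  # 14  Guest internet only
    *)           echo Deny  ;;  # 99  DEFAULT-DENY
  esac
}
```

Reordering the cases changes the outcome, which is exactly why the rule numbering in the table matters.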
**Switch ACL (SG2428P Layer 2 Rules)**
*Port-Based Access Control*
| # | Name | Source Port | Source MAC/VLAN | Destination | Action | Notes |
| --- | --------------- | --------------- | --------------- | ------------------- | ------ | ------------------------ |
| 1 | CCTV-Isolation | Port 25 (CCTV) | VLAN 40 | VLAN 10,20,30 | Deny | CCTV cannot reach others |
| 2 | Guest-Isolation | Port 5-20 (APs) | VLAN 70 | VLAN 10,20,30,40,50 | Deny | Guest isolation |
| 3 | Voice-QoS | Port 26 (Phone) | VLAN 50 | Any | Allow | QoS priority DSCP EF |
*Storm Control (per port)*
| Port Range | Broadcast | Multicast | Unknown Unicast | Notes |
| ---------- | --------- | --------- | --------------- | ----------------------- |
| 1-28 | 10% | 10% | 10% | Prevent broadcast storm |
*Spanning Tree Configuration*
| Setting | Value | Notes |
| -------------------- | --------- | ------------------------------ |
| STP Mode | RSTP | Rapid Spanning Tree |
| Root Bridge Priority | 4096 | SG2428P as root |
| Port Fast | Port 5-24 | Edge ports (APs, endpoints) |
| BPDU Guard | Port 5-24 | Protect against rogue switches |
> **Note**: The SG2428P is an L2+ switch with limited ACL capability; use the ER7206 as the primary firewall
**EAP ACL (Omada Controller - Wireless Rules)**
*SSID: PSLCBP3 (Staff WiFi)*
| # | Name | Source | Destination | Service | Action | Schedule | Notes |
| --- | ------------------- | ---------- | ---------------- | -------- | ------ | -------- | ----------------- |
| 1 | Allow-DNS | Any Client | 8.8.8.8, 1.1.1.1 | DNS (53) | Allow | Always | DNS resolution |
| 2 | Allow-Server | Any Client | 192.168.10.0/24 | Any | Allow | Always | Access to servers |
| 3 | Allow-Printer | Any Client | 192.168.30.222 | 9100,631 | Allow | Always | Print services |
| 4 | Allow-Internet | Any Client | WAN | Any | Allow | Always | Internet access |
| 5 | Block-MGMT | Any Client | 192.168.20.0/24 | Any | Deny | Always | No management |
| 6 | Block-CCTV | Any Client | 192.168.40.0/24 | Any | Deny | Always | No CCTV access |
| 7 | Block-Voice | Any Client | 192.168.50.0/24 | Any | Deny | Always | No Voice access |
| 8 | Block-Client2Client | Any Client | Any Client | Any | Deny | Always | Client isolation |
*SSID: GUEST (Guest WiFi)*
| # | Name | Source | Destination | Service | Action | Schedule | Notes |
| --- | ------------------- | ---------- | ---------------- | ---------- | ------ | -------- | ------------------ |
| 1 | Allow-DNS | Any Client | 8.8.8.8, 1.1.1.1 | DNS (53) | Allow | Always | DNS resolution |
| 2 | Allow-HTTP | Any Client | WAN | HTTP/HTTPS | Allow | Always | Web browsing |
| 3 | Block-RFC1918 | Any Client | 10.0.0.0/8 | Any | Deny | Always | No private IPs |
| 4 | Block-RFC1918-2 | Any Client | 172.16.0.0/12 | Any | Deny | Always | No private IPs |
| 5 | Block-RFC1918-3 | Any Client | 192.168.0.0/16 | Any | Deny | Always | No internal access |
| 6 | Block-Client2Client | Any Client | Any Client | Any | Deny | Always | Client isolation |
*Rate Limiting*
| SSID | Download Limit | Upload Limit | Notes |
| ------- | -------------- | ------------ | ----------------------- |
| PSLCBP3 | Unlimited | Unlimited | Staff full speed |
| GUEST | 10 Mbps | 5 Mbps | Guest bandwidth control |
*Captive Portal (GUEST SSID)*
| Setting | Value | Notes |
| ---------------- | --------------- | ---------------------- |
| Portal Type | Simple Password | Single shared password |
| Session Timeout | 8 Hours | Re-auth after 8 hours |
| Idle Timeout | 30 Minutes | Disconnect if idle |
| Terms of Service | Enabled | User must accept ToS |
> **Note**: EAP ACLs are enforced at Layer 3 by the Omada Controller, which reduces load on the ER7206
**Network Topology Diagram**
```mermaid
graph TB
subgraph Internet
WAN[("🌐 Internet<br/>WAN")]
end
subgraph Router["ER7206 Router"]
R[("🔲 ER7206<br/>192.168.20.1")]
end
subgraph CoreSwitch["SG2428P Core Switch"]
CS[("🔲 SG2428P<br/>192.168.20.2")]
end
subgraph ServerSwitch["AMPCOM 2.5G Switch"]
SS[("🔲 AMPCOM<br/>192.168.20.3")]
end
subgraph Servers["VLAN 10 - Servers"]
QNAP[("💾 QNAP<br/>192.168.10.8")]
ASUSTOR[("💾 ASUSTOR<br/>192.168.10.9")]
end
subgraph AccessPoints["EAP610 x16"]
AP[("📶 WiFi APs")]
end
subgraph OtherSwitches["Distribution"]
CCTV_SW[("🔲 TL-SL1226P<br/>CCTV")]
PHONE_SW[("🔲 SG1210P<br/>IP Phone")]
ADMIN_SW[("🔲 ES205G<br/>Admin")]
end
WAN --> R
R -->|Port 3| CS
CS -->|LAG Port 3-4| SS
SS -->|Port 3-4 LACP| QNAP
SS -->|Port 5-6 LACP| ASUSTOR
SS -->|Port 7| ADMIN_SW
CS -->|Port 5-20| AP
CS -->|SFP 25| CCTV_SW
CS -->|SFP 26| PHONE_SW
CS -->|Port 24| ADMIN_SW
```
**OC200 Omada Controller Configuration**
| Setting | Value | Notes |
| --------------- | -------------------------- | ------------------------------ |
| Controller IP | 192.168.20.10 | Static IP in MGMT VLAN |
| Controller Port | 8043 (HTTPS) | Management Web UI |
| Adoption URL | https://192.168.20.10:8043 | URL for AP adoption |
| Site Name | LCBP3 | Single site configuration |
| Managed Devices | 16x EAP610 | All APs managed centrally |
| Firmware Update | Manual | Test before production rollout |
| Backup Schedule | Weekly (Sunday 2AM) | Auto backup to QNAP |
## **2.3 Configuration Management (Revised):**
- Use docker-compose.yml for environment variables, per the QNAP constraint
- Secrets Management:
@@ -40,7 +402,7 @@ specs/01-objectives.md
- Configuration must be separated per environment (development, staging, production)
- Docker Network: every service connects through a shared network named lcbp3 so services can communicate
## **2.3 Core Services:**
## **2.4 Core Services:**
- Code Hosting: Gitea (Self-hosted on QNAP)
@@ -102,29 +464,29 @@ specs/01-objectives.md
- Search Engine: Elasticsearch
- Cache: Redis
## **2.4 Business Logic & Consistency (Revised):**
## **2.5 Business Logic & Consistency (Revised):**
- 2.4.1 Unified Workflow Engine (Core):
- 2.5.1 Unified Workflow Engine (Core):
- The entire document workflow (Correspondence, RFA, Circulation) must use a single shared engine, with logic defined through a Workflow DSL (JSON configuration) instead of hard-coding it into tables
- Workflow Versioning (new): the system must support versioned workflow definitions; documents already in progress (in-progress instances) must keep their original workflow version until the process ends or an Admin issues a migrate command, to prevent state conflicts
- 2.4.2 Separation of Concerns:
- 2.5.2 Separation of Concerns:
- Modules (Correspondence, RFA, Circulation) store only document data; state and state transitions are managed by the Workflow Engine
- 2.4.3 Idempotency & Locking:
- 2.5.3 Idempotency & Locking:
- Use the existing mechanism to prevent duplicate transactions
- 2.4.4 Optimistic Locking:
- 2.5.4 Optimistic Locking:
- Use a version column in the database together with a Redis lock for document-number generation, as a final safety net
- 2.4.5 No SQL Triggers
- 2.5.5 No SQL Triggers
- To avoid hidden logic and debugging complexity
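As an illustration only (the actual DSL schema is not specified here, so field names are hypothetical), a versioned workflow definition in the JSON Workflow DSL might look like:

```json
{
  "name": "correspondence-approval",
  "version": 3,
  "states": ["DRAFT", "REVIEW", "APPROVED", "REJECTED"],
  "transitions": [
    { "from": "DRAFT",  "to": "REVIEW",   "action": "submit"  },
    { "from": "REVIEW", "to": "APPROVED", "action": "approve" },
    { "from": "REVIEW", "to": "REJECTED", "action": "reject"  }
  ]
}
```

An in-progress document instance would stay pinned to `"version": 3` until it terminates or an Admin migrates it, matching the versioning rule above.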
## **2.5 Data Migration and Schema Versioning:**
## **2.6 Data Migration and Schema Versioning:**
- Every schema change must have a database migration script, using TypeORM migrations
- Migrations must support rollback
@@ -133,10 +495,11 @@ specs/01-objectives.md
- Migration scripts must be tested in a staging environment before production
- A database backup must be taken before running migrations in production
## **2.6 Resilience & Error Handling Strategy**
## **2.7 Resilience & Error Handling Strategy**
- 2.6.1 Circuit Breaker Pattern: used for external service calls (Email, LINE, Elasticsearch)
- 2.6.2 Retry Mechanism: exponential backoff for transient failures
- 2.6.3 Fallback Strategies: graceful degradation when external services fail
- 2.6.4 Error Handling: error messages must not expose sensitive information
- 2.7.1 Circuit Breaker Pattern: used for external service calls (Email, LINE, Elasticsearch)
- 2.7.2 Retry Mechanism: exponential backoff for transient failures
- 2.7.3 Fallback Strategies: graceful degradation when external services fail
- 2.7.4 Error Handling: error messages must not expose sensitive information
- 2.7.5 Monitoring: centralized error monitoring and alerting system
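The retry policy in 2.7.2 can be sketched as a generic shell helper (an illustration only, not the actual NestJS implementation): delays double on every failed attempt.

```shell
#!/bin/bash
# Retry a command with exponential backoff: 1s, 2s, 4s, ... between attempts.
retry() {  # usage: retry <max_attempts> <command...>
  local max=$1 attempt=1 delay=1
  shift
  until "$@"; do
    (( attempt >= max )) && return 1   # transient failure persisted — give up
    sleep "$delay"
    delay=$(( delay * 2 ))
    attempt=$(( attempt + 1 ))
  done
}
```

Paired with a circuit breaker (2.7.1), the breaker should open before the retry budget keeps hammering a service that is clearly down.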

View File

@@ -7,25 +7,26 @@
## Set permissions
```bash
chown -R 1000:1000 /share/Container/gitea/
chown -R 1000:1000 /share/np-dms/gitea/
ls -l /share/Container/gitea/etc/app.ini
setfacl -R -m u:1000:rwx /share/Container/gitea/
setfacl -R -m u:70:rwx /share/Container/git/postgres/
getfacl /share/Container/git/etc/app.ini
chown -R 1000:1000 /share/Container/gitea/
# clear ACLs
setfacl -R -b /share/Container/gitea/
getfacl /share/np-dms/git/etc/app.ini
chown -R 1000:1000 /share/np-dms/gitea/
# clear ACLs
setfacl -R -b /share/np-dms/gitea/
chgrp -R administrators /share/Container/gitea/
chown -R 1000:1000 /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
setfacl -m u:1000:rwx -m g:1000:rwx /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
chgrp -R administrators /share/np-dms/gitea/
chown -R 1000:1000 /share/np-dms/gitea/etc /share/np-dms/gitea/lib /share/np-dms/gitea/backup
setfacl -m u:1000:rwx -m g:1000:rwx /share/np-dms/gitea/etc /share/np-dms/gitea/lib /share/np-dms/gitea/backup
```
## Docker file
```yml
# File: share/Container/git/docker-compose.yml
# DMS Container v1_4_1 : split services and folders; Application name: git, Service: gitea
# File: share/np-dms/git/docker-compose.yml
# DMS Container v1_7_0 : split services and folders
# Application name: git, Service: gitea
networks:
lcbp3:
external: true
@@ -74,12 +75,12 @@ services:
# Optional: lock install after setup (set to true once onboarding is complete)
GITEA__security__INSTALL_LOCK: "true"
volumes:
- /share/Container/gitea/backup:/backup
- /share/Container/gitea/etc:/etc/gitea
- /share/Container/gitea/lib:/var/lib/gitea
- /share/np-dms/gitea/backup:/backup
- /share/np-dms/gitea/etc:/etc/gitea
- /share/np-dms/gitea/lib:/var/lib/gitea
# serve the repo root from /share/dms-data/gitea_repos
- /share/dms-data/gitea_repos:/var/lib/gitea/git/repositories
- /share/dms-data/gitea_registry:/data/registry
- /share/np-dms/gitea/gitea_repos:/var/lib/gitea/git/repositories
- /share/np-dms/gitea/gitea_registry:/data/registry
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:

View File

@@ -1,6 +1,27 @@
# Infrastructure Setup
## 1. Redis Cluster Configuration
> 📍 **Document Version:** v1.8.0
> 🖥️ **Primary Server:** QNAP TS-473A (Application & Database)
> 💾 **Backup Server:** ASUSTOR AS5403T (Infrastructure & Backup)
---
## Server Role Overview
| Component | QNAP TS-473A | ASUSTOR AS5403T |
| :-------------------- | :---------------------------- | :--------------------------- |
| **Redis/Cache** | ✅ Primary (Section 1) | ❌ Not deployed |
| **Database** | ✅ Primary MariaDB (Section 2) | ❌ Not deployed |
| **Backend Service** | ✅ NestJS API (Section 3) | ❌ Not deployed |
| **Monitoring** | ❌ Exporters only | ✅ Prometheus/Grafana |
| **Backup Target** | ❌ Source only | ✅ Backup storage (Section 5) |
| **Disaster Recovery** | ✅ Recovery target | ✅ Backup source (Section 7) |
> 📖 See [monitoring.md](monitoring.md) for ASUSTOR-specific monitoring setup
---
## 1. Redis Configuration (Standalone + Persistence)
### 1.1 Docker Compose Setup
```yaml
@@ -8,99 +29,29 @@
# --- removed (Redis Cluster) ---
version: '3.8'
services:
  redis-1:
    image: redis:7-alpine
    container_name: lcbp3-redis-1
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6379:6379"
      - "16379:16379"
    volumes:
      - redis-1-data:/data
    networks:
      - lcbp3-network
  redis-2:
    image: redis:7-alpine
    container_name: lcbp3-redis-2
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6380:6379"
      - "16380:16379"
    volumes:
      - redis-2-data:/data
    networks:
      - lcbp3-network
    restart: unless-stopped
  redis-3:
    image: redis:7-alpine
    container_name: lcbp3-redis-3
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6381:6379"
      - "16381:16379"
    volumes:
      - redis-3-data:/data
    networks:
      - lcbp3-network
    restart: unless-stopped
volumes:
  redis-1-data:
  redis-2-data:
  redis-3-data:
networks:
  lcbp3-network:
    external: true
# --- added (Standalone Redis + Persistence) ---
version: '3.8'
services:
  redis:
    image: 'redis:7.2-alpine'
    container_name: lcbp3-redis
    restart: unless-stopped
    # AOF: Enabled for durability
    # Maxmemory: Prevent OOM
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD} --maxmemory 1gb --maxmemory-policy noeviction
    volumes:
      - ./redis/data:/data
    ports:
      - '6379:6379'
    networks:
      - lcbp3
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1.5G
networks:
  lcbp3:
    external: true
```
#### Initialize Cluster
```bash
# Start Redis nodes
docker-compose -f docker-compose-redis.yml up -d
# Wait for nodes to start
sleep 10
# Create cluster
docker exec -it lcbp3-redis-1 redis-cli --cluster create \
172.20.0.2:6379 \
172.20.0.3:6379 \
172.20.0.4:6379 \
--cluster-replicas 0
# Verify cluster
docker exec -it lcbp3-redis-1 redis-cli cluster info
docker exec -it lcbp3-redis-1 redis-cli cluster nodes
```
#### Health Check Script
```bash
#!/bin/bash
# scripts/check-redis-cluster.sh
echo "🔍 Checking Redis Cluster Health..."
for port in 6379 6380 6381; do
  echo -e "\n📍 Node on port $port:"
  # Check if node is up
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 ping
  # Check cluster status
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 cluster info | grep cluster_state
  # Check memory usage
  docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 info memory | grep used_memory_human
done
echo -e "\n✅ Cluster check complete"
```
---
## 2. Database Configuration

View File

@@ -7,24 +7,30 @@
## Set permissions
```bash
chown -R 999:999 /share/nap-dms/mariadb/init
chmod 755 /share/nap-dms/mariadb/init
setfacl -R -m u:999:r-x /share/nap-dms/mariadb/init
setfacl -R -d -m u:999:r-x /share/nap-dms/mariadb/init
chown -R 33:33 /share/Container/pma/tmp
chmod 755 /share/Container/pma/tmp
setfacl -R -m u:33:rwx /share/Container/pma/tmp
setfacl -R -d -m u:33:rwx /share/Container/pma/tmp
chown -R 999:999 /share/np-dms/mariadb
chmod -R 755 /share/np-dms/mariadb
setfacl -R -m u:999:rwx /share/np-dms/mariadb
setfacl -R -d -m u:999:rwx /share/np-dms/mariadb
chown -R 999:999 /share/np-dms/mariadb/init
chmod 755 /share/np-dms/mariadb/init
setfacl -R -m u:999:r-x /share/np-dms/mariadb/init
setfacl -R -d -m u:999:r-x /share/np-dms/mariadb/init
chown -R 33:33 /share/np-dms/pma/tmp
chmod 755 /share/np-dms/pma/tmp
setfacl -R -m u:33:rwx /share/np-dms/pma/tmp
setfacl -R -d -m u:33:rwx /share/np-dms/pma/tmp
chown -R 33:33 /share/dms-data/logs/pma
chmod 755 /share/dms-data/logs/pma
setfacl -R -m u:33:rwx /share/dms-data/logs/pma
setfacl -R -d -m u:33:rwx /share/dms-data/logs/pma
setfacl -R -m u:1000:rwx /share/Container/gitea
setfacl -R -m u:1000:rwx /share/dms-data/gitea_repos
setfacl -R -m u:1000:rwx /share/dms-data/gitea_registry
setfacl -R -m u:1000:rwx /share/np-dms/gitea
setfacl -R -m u:1000:rwx /share/np-dms/gitea/gitea_repos
setfacl -R -m u:1000:rwx /share/np-dms/gitea/gitea_registry
```
## Add database & user for Nginx Proxy Manager (NPM)
@@ -50,8 +56,9 @@ docker exec -it mariadb mysql -u root -p
## Docker file
```yml
# File: share/Container/mariadb/docker-compose.yml
# DMS Container v1_4_1 : split services and folders; Application name: lcbp3-db, Service: mariadb, pma
# File: share/np-dms/mariadb/docker-compose.yml
# DMS Container v1_7_0 : moved folders to share/np-dms/
# Application name: lcbp3-db, Service: mariadb, pma
x-restart: &restart_policy
restart: unless-stopped
@@ -85,19 +92,19 @@ services:
TZ: "Asia/Bangkok"
ports:
- "3306:3306"
networks:
- lcbp3
volumes:
- "/share/nap-dms/mariadb/data:/var/lib/mysql"
- "/share/nap-dms/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
- "/share/nap-dms/mariadb/init:/docker-entrypoint-initdb.d:ro"
- "/share/np-dms/mariadb/data:/var/lib/mysql"
- "/share/np-dms/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
- "/share/np-dms/mariadb/init:/docker-entrypoint-initdb.d:ro"
- "/share/dms-data/mariadb/backup:/backup"
healthcheck:
test:
["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -pCenter#2025 || exit 1"]
test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
interval: 10s
timeout: 5s
retries: 15
networks:
lcbp3: {}
retries: 3
start_period: 30s
pma:
<<: [*restart_policy, *default_logging]
@@ -119,20 +126,46 @@ services:
MEMORY_LIMIT: "512M"
ports:
- "89:80"
networks:
- lcbp3
# expose:
# - "80"
volumes:
- "/share/Container/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
- "/share/Container/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
- "/share/Container/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
- "/share/np-dms/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
- "/share/np-dms/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
- "/share/np-dms/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
- "/share/dms-data/logs/pma:/var/log/apache2"
depends_on:
mariadb:
condition: service_healthy
networks:
lcbp3: {}
networks:
lcbp3:
external: true
# docker exec -it mariadb mysql -u root -p
# CREATE DATABASE npm;
# CREATE USER 'npm'@'%' IDENTIFIED BY 'npm';
# GRANT ALL PRIVILEGES ON npm.* TO 'npm'@'%';
# FLUSH PRIVILEGES;
```
@@ -33,8 +33,9 @@ setfacl -R -m u:0:rwx /share/Container/npm
## Docker file
```yml
# File: share/Container/npm/docker-compose-npm.yml
# DMS Container v1_4_1: separated services and folders, Application name: lcbp3-npm, Service: npm
# File: share/np-dms/npm/docker-compose-npm.yml
# DMS Container v1_7_0: moved folders to share/np-dms/
# Application name: lcbp3-npm, Service: npm
x-restart: &restart_policy
restart: unless-stopped
@@ -73,17 +74,17 @@ services:
- lcbp3
- giteanet
volumes:
- "/share/Container/npm/data:/data"
- "/share/np-dms/npm/data:/data"
- "/share/dms-data/logs/npm:/data/logs" # <-- added logging volume
- "/share/Container/npm/letsencrypt:/etc/letsencrypt"
- "/share/Container/npm/custom:/data/nginx/custom" # <-- required for http_top.conf
- "/share/np-dms/npm/letsencrypt:/etc/letsencrypt"
- "/share/np-dms/npm/custom:/data/nginx/custom" # <-- required for http_top.conf
# - "/share/Container/lcbp3/npm/landing:/data/landing:ro"
landing:
image: nginx:1.27-alpine
container_name: landing
restart: unless-stopped
volumes:
- "/share/Container/npm/landing:/usr/share/nginx/html:ro"
- "/share/np-dms/npm/landing:/usr/share/nginx/html:ro"
networks:
- lcbp3
networks:
@@ -0,0 +1,499 @@
# 08-Infrastructure
Infrastructure setup guide for **NAP-DMS LCBP3** (Laem Chabang Port Phase 3 - Document Management System)
> 📍 **Platform:** QNAP (Container Station) + ASUSTOR (Portainer)
> 🌐 **Domain:** `*.np-dms.work` (IP: 159.192.126.103)
> 🔒 **Network:** `lcbp3` (Docker External Network)
> 📄 **Version:** v1.8.0 (aligned with 01-02-architecture.md)
---
## 🏢 Hardware Infrastructure
### Server Role Separation
#### QNAP TS-473A
| (Application & Database Server) | | |
| :------------------------------ | :---------------- | :-------------------- |
| ✔ Application Runtime | ✔ API / Web | ✔ Database (Primary) |
| ✔ High CPU / RAM usage | ✔ Worker / Queue | ✖ No long-term backup |
| Container Station (UI) | 32GB RAM (Capped) | AMD Ryzen V1500B |
#### ASUSTOR AS5304T
| (Infrastructure & Backup Server) | | |
| :------------------------------- | :---------------- | :------------------- |
| ✔ File Storage | ✔ Backup Target | ✔ Docker Infra |
| ✔ Monitoring / Registry | ✔ Log Aggregation | ✖ No heavy App logic |
| Portainer (Manage All) | 16GB RAM | Intel Celeron @2GHz |
### Servers Specification
| Device | Model | CPU | RAM | Resource Policy | Role |
| :---------- | :------ | :---------------------- | :--- | :------------------ | :--------------------- |
| **QNAP** | TS-473A | AMD Ryzen V1500B | 32GB | **Strict Limits** | Application, DB, Cache |
| **ASUSTOR** | AS5304T | Intel Celeron @ 2.00GHz | 16GB | **Moderate Limits** | Infra, Backup, Monitor |
### Service Distribution by Server
#### QNAP TS-473A (Application Stack)
| Category | Service | Strategy | Resource Limit (Est.) |
| :-------------- | :------------------------ | :------------------------------ | :-------------------- |
| **Web App** | Next.js (Frontend) | Single Instance | 2.0 CPU / 2GB RAM |
| **Backend API** | NestJS | **2 Replicas** (Load Balanced) | 2.0 CPU / 1.5GB RAM |
| **Database** | MariaDB (Primary) | Performance Tuned (Buffer Pool) | 4.0 CPU / 5GB RAM |
| **Worker** | Redis + BullMQ Worker | **Standalone + AOF** | 2.0 CPU / 1.5GB RAM |
| **Search** | Elasticsearch | **Heap Locked (2GB)** | 2.0 CPU / 4GB RAM |
| **API Gateway** | NPM (Nginx Proxy Manager) | SSL Termination | 1.0 CPU / 512MB RAM |
| **Workflow** | n8n | Automation | 1.0 CPU / 1GB RAM |
| **Code** | Gitea | Git Repository | 1.0 CPU / 1GB RAM |
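As a quick sanity check on the **Strict Limits** policy, the estimated RAM limits in the table above can be totalled (a rough sketch; the assumption that the 1.5 GB NestJS limit applies to each of the 2 replicas is ours):

```shell
#!/bin/sh
# Rough RAM budget for the QNAP stack (GB), taken from the table above.
# Assumption: the 1.5 GB NestJS limit applies to each of the 2 replicas.
# frontend + backend*2 + mariadb + redis/worker + elasticsearch + npm + n8n + gitea
total=$(awk 'BEGIN { print 2 + 1.5*2 + 5 + 1.5 + 4 + 0.5 + 1 + 1 }')
echo "Estimated peak commitment: ${total} GB of 32 GB"
```

This leaves roughly 14 GB of headroom for the OS, filesystem cache, and bursts, which is the point of capping each container.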
#### ASUSTOR AS5304T (Infrastructure Stack)
| Category | Service | Notes |
| :--------------- | :------------------ | :------------------------------ |
| **File Storage** | NFS / SMB | Shared volumes for backup |
| **Backup** | Restic / Borg | Pull-based Backup (More Safe) |
| **Docker Infra** | Registry, Portainer | Container image registry, mgmt |
| **Monitoring** | Uptime Kuma | Service availability monitoring |
| **Metrics** | Prometheus, Grafana | Cross-Server Scraping |
| **Log** | Loki / Syslog | Centralized logging |
---
## 🔄 Data Flow Architecture
```
┌──────────────┐
│ User │
└──────┬───────┘
│ HTTPS (443)
┌──────────────────────────────────────────────────────────────┐
│ QNAP TS-473A │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Nginx Proxy Manager (NPM) │ │
│ │ SSL Termination + Round Robin LB │ │
│ └───────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────▼─────────────────────────────────┐ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Next.js │───▶│ NestJS │ │ NestJS │ │ │
│ │ │ (Frontend) │ │ (Replica 1)│ │ (Replica 2)│ │ │
│ │ └──────────────┘ └──────┬───────┘ └──────┬──────┘ │ │
│ │ │ │ │ │
│ │ ┌────────────────────────┼─────────────────┼ │ │
│ │ ▼ ▼ ▼ │ │
│ │ ┌──────────┐ ┌────────────┐ ┌─────────────┐ │ │
│ │ │ MariaDB │ │ Redis │ │Elasticsearch│ │ │
│ │ │(Primary) | │ (Persist.) │ │ (Search) │ │ │
│ │ └────┬─────┘ └────────────┘ └─────────────┘ │ │
│ └───────┼─────────────────────────────────────────────────┘ │
│ │ │
└──────────┼───────────────────────────────────────────────────┘
│ Local Dump -> Restic Pull (Cross-Server)
┌──────────────────────────────────────────────────────────────┐
│ ASUSTOR AS5403T │
│ ┌──────────────────────────────────────────────────────────┐│
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ││
│ │ │ Backup │ │ Registry │ │ Uptime │ ││
│ │ │ (Restic) │ │ (Docker) │ │ Kuma │ ││
│ │ └──────────┘ └──────────┘ └──────────┘ ││
│ │ ││
│ │ ┌──────────┐ ┌────────────┐ ┌──────────┐ ││
│ │ │Prometheus│ ──▶│ Grafana │ │ Loki │ ││
│ │ │(Metrics) │ │(Dashboard) │ │ (Logs) │ ││
│ │ └──────────┘ └────────────┘ └──────────┘ ││
│ │ ││
│ │ ┌───────────────────────────────────────────┐ ││
│ │ │ NFS / SMB Shared Storage │ ││
│ │ │ (Backup Volume) │ ││
│ │ └───────────────────────────────────────────┘ ││
│ └──────────────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────────────┘
```
---
## 🖥️ Docker Management Architecture
```
┌─────────────────────────────────────────────────────────────────────────┐
│ Portainer (ASUSTOR) │
│ https://portainer.np-dms.work │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────┐ ┌─────────────────────────────┐ │
│ │ Manage Infra Stack │ │ Remote Docker Endpoint │ │
│ │ (Local - ASUSTOR) │ │ (QNAP App Stack) │ │
│ ├─────────────────────────────┤ ├─────────────────────────────┤ │
│ │ • Registry │ │ • Next.js (Frontend) │ │
│ │ • Prometheus │ │ • NestJS (Backend) │ │
│ │ • Grafana │ │ • MariaDB │ │
│ │ • Uptime Kuma │ │ • Redis │ │
│ │ • Loki │ │ • Elasticsearch │ │
│ │ • Backup (Restic) │ │ • NPM │ │
│ │ • ClamAV │ │ • Gitea │ │
│ │ • node-exporter │ │ • n8n │ │
│ │ • cAdvisor │ │ • phpMyAdmin │ │
│ └─────────────────────────────┘ └─────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Container Station (QNAP): local UI management only
Portainer (ASUSTOR): centralized management for both servers
```
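For the ASUSTOR Portainer to manage the QNAP endpoint, one common approach is running the Portainer Agent on QNAP and adding it under *Environments* as `192.168.10.8:9001`. A minimal sketch (the image tag and published port follow Portainer's defaults; deploying it via Container Station is an assumption):

```yaml
# Sketch: Portainer Agent on QNAP, registered in the ASUSTOR Portainer
# as an "Agent" environment at 192.168.10.8:9001
services:
  agent:
    image: portainer/agent:latest
    container_name: portainer_agent
    restart: unless-stopped
    ports:
      - "9001:9001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
```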
---
## 🔐 Security Zones
```
┌─────────────────────────────────────────────────────────────────────────┐
│ SECURITY ZONES │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🌐 PUBLIC ZONE │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Nginx Proxy Manager (NPM) │ │
│ │ • HTTPS (Port 443 only) │ │
│ │ • SSL/TLS Termination │ │
│ │ • Rate Limiting │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 📱 APPLICATION ZONE (QNAP - VLAN 10) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Next.js (Frontend) │ │
│ │ • NestJS (Backend API) │ │
│ │ • n8n Workflow │ │
│ │ • Gitea │ │
│ │ • Internal API communication only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 💾 DATA ZONE (QNAP - Internal Only) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • MariaDB (Primary Database) │ │
│ │ • Redis (Cache/Queue) │ │
│ │ • Elasticsearch (Search) │ │
│ │ • No public access - Backend only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🛠️ INFRASTRUCTURE ZONE (ASUSTOR - VLAN 10) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Backup (Restic/Borg) │ │
│ │ • Docker Registry │ │
│ │ • Prometheus + Grafana │ │
│ │ • Uptime Kuma │ │
│ │ • Loki (Logs) │ │
│ │ • NFS/SMB Storage │ │
│ │ • Access via MGMT VLAN only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## 🌐 Network Architecture (VLAN)
### VLAN Networks
| VLAN ID | Name | Gateway/Subnet | DHCP Range | Purpose |
| :------ | :----- | :-------------- | :----------------- | :-------------------- |
| 10 | SERVER | 192.168.10.1/24 | Static | Servers (NAS, Docker) |
| 20 | MGMT | 192.168.20.1/24 | Static | Network Management |
| 30 | USER | 192.168.30.1/24 | .10-.254 (7 days) | Staff Devices |
| 40 | CCTV | 192.168.40.1/24 | .100-.150 (7 days) | Surveillance |
| 50 | VOICE | 192.168.50.1/24 | .201-.250 (7 days) | IP Phones |
| 60 | DMZ | 192.168.60.1/24 | Static | Public Services |
| 70 | GUEST | 192.168.70.1/24 | .200-.250 (1 day) | Guest WiFi |
### Static IP Allocation (Key Devices)
| VLAN | Device | IP Address | Role |
| :--------- | :------ | :------------- | :------------------ |
| SERVER(10) | QNAP | 192.168.10.8 | App/DB Server |
| SERVER(10) | ASUSTOR | 192.168.10.9 | Infra/Backup Server |
| MGMT(20) | ER7206 | 192.168.20.1 | Gateway/Router |
| MGMT(20) | SG2428P | 192.168.20.2 | Core Switch |
| MGMT(20) | AMPCOM | 192.168.20.3 | Server Switch |
| MGMT(20) | OC200 | 192.168.20.250 | Omada Controller |
| USER(30) | Printer | 192.168.30.222 | Kyocera CS3554ci |
| CCTV(40) | NVR | 192.168.40.100 | HikVision NVR |
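Because every VLAN above is a /24 on `192.168.<vlan>.0`, membership of a static address can be sanity-checked with a simple third-octet comparison (a throwaway sketch, not part of the deployment):

```shell
#!/bin/sh
# Check that an IP's third octet matches its intended VLAN ID
# (valid only because all subnets here are 192.168.<vlan>.0/24)
in_vlan() {
  case "$1" in
    192.168."$2".*) return 0 ;;
    *) return 1 ;;
  esac
}
in_vlan 192.168.10.8 10   && echo "QNAP: OK (VLAN 10)"
in_vlan 192.168.40.100 40 && echo "NVR:  OK (VLAN 40)"
in_vlan 192.168.30.222 20 || echo "Printer is NOT in MGMT (VLAN 20)"
```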
### Network Equipment
| Device | Model | Ports | IP Address | Role |
| :------------------ | :----------------- | :---------------------- | :------------- | :--------------- |
| **Router** | TP-LINK ER7206 | 1 SFP + WAN + 4×GbE | 192.168.20.1 | Gateway/Firewall |
| **Core Switch** | TP-LINK SG2428P | 24×GbE PoE+ + 4×SFP | 192.168.20.2 | Core/PoE Switch |
| **Server Switch** | AMPCOM | 8×2.5GbE + 1×10G SFP+ | 192.168.20.3 | Server Uplink |
| **Admin Switch** | TP-LINK ES205G | 5×GbE (Unmanaged) | N/A | Admin PC |
| **CCTV Switch** | TP-LINK TL-SL1226P | 24×PoE+ 100Mbps + 2×SFP | 192.168.20.4 | CCTV PoE |
| **IP Phone Switch** | TP-LINK SG1210P | 8×PoE+ + 1×GbE + 1×SFP | 192.168.20.5 | VoIP |
| **Controller** | TP-LINK OC200 | Omada Controller | 192.168.20.250 | AP Management |
> 📖 Detailed port mappings and ACL rules: see [Securities.md](Securities.md) and [แผนผัง Network.md](แผนผัง%20Network.md)
---
## 🔗 Network Topology
```mermaid
graph TB
subgraph Internet
WAN[("🌐 Internet<br/>WAN")]
end
subgraph Router["ER7206 Router"]
R[("🔲 ER7206<br/>192.168.20.1")]
end
subgraph CoreSwitch["SG2428P Core Switch"]
CS[("🔲 SG2428P<br/>192.168.20.2")]
end
subgraph ServerSwitch["AMPCOM 2.5G Switch"]
SS[("🔲 AMPCOM<br/>192.168.20.3")]
end
subgraph Servers["VLAN 10 - Servers"]
QNAP[("💾 QNAP (App/DB)<br/>192.168.10.8")]
ASUSTOR[("💾 ASUSTOR (Infra)<br/>192.168.10.9")]
end
subgraph AccessPoints["EAP610 x16"]
AP[("📶 WiFi APs")]
end
subgraph OtherSwitches["Distribution"]
CCTV_SW[("🔲 TL-SL1226P<br/>CCTV")]
PHONE_SW[("🔲 SG1210P<br/>IP Phone")]
ADMIN_SW[("🔲 ES205G<br/>Admin")]
end
WAN --> R
R -->|Port 3| CS
CS -->|LAG Port 3-4| SS
SS -->|Port 3-4 LACP| QNAP
SS -->|Port 5-6 LACP| ASUSTOR
SS -->|Port 8| ADMIN_SW
CS -->|Port 5-20| AP
CS -->|SFP 25| CCTV_SW
CS -->|SFP 26| PHONE_SW
CS -->|Port 24| ADMIN_SW
```
---
## 📁 Document Index
| File | Description |
| :--------------------------------------------------- | :--------------------------------------------------------------------------- |
| [Infrastructure Setup.md](Infrastructure%20Setup.md) | Infrastructure setup overview (Redis, MariaDB, Backend, Monitoring, Backup, DR) |
| [แผนผัง Network.md](แผนผัง%20Network.md) | Network architecture and container services diagrams |
| [Securities.md](Securities.md) | VLAN Segmentation, Firewall Rules, ACL (ER7206, SG2428P, EAP) |
---
## 🐳 Docker Compose Files
### Core Services (QNAP)
| File | Application | Services | Path on QNAP |
| :--------------------------------------- | :---------- | :---------------------------------------- | :------------------------ |
| [MariaDB_setting.md](MariaDB_setting.md) | `lcbp3-db` | `mariadb`, `pma` | `/share/np-dms/mariadb/` |
| [NPM_setting.md](NPM_setting.md) | `lcbp3-npm` | `npm`, `landing` | `/share/np-dms/npm/` |
| [Service_setting.md](Service_setting.md) | `services` | `cache` (Redis), `search` (Elasticsearch) | `/share/np-dms/services/` |
| [Gitea_setting.md](Gitea_setting.md) | `git` | `gitea` | `/share/np-dms/gitea/` |
| [n8n_setting.md](n8n_setting.md) | `n8n` | `n8n` | `/share/np-dms/n8n/` |
### Infrastructure Services (ASUSTOR)
| File | Application | Services | Path on ASUSTOR |
| :----------------------------- | :----------------- | :--------------------------------------------------- | :---------------------------- |
| [monitoring.md](monitoring.md) | `lcbp3-monitoring` | `prometheus`, `grafana`, `node-exporter`, `cadvisor` | `/volume1/np-dms/monitoring/` |
| *(NEW)* backup.md | `lcbp3-backup` | `restic`, `borg` | `/volume1/np-dms/backup/` |
| *(NEW)* registry.md | `lcbp3-registry` | `registry` | `/volume1/np-dms/registry/` |
| *(NEW)* uptime-kuma.md | `lcbp3-uptime` | `uptime-kuma` | `/volume1/np-dms/uptime/` |
---
## 🌐 Domain Mapping (NPM Proxy)
### Application Domains (QNAP)
| Domain | Service | Port | Host | Description |
| :-------------------- | :------- | :--- | :--- | :------------------------ |
| `lcbp3.np-dms.work` | frontend | 3000 | QNAP | Frontend Next.js |
| `backend.np-dms.work` | backend | 3000 | QNAP | Backend NestJS API |
| `pma.np-dms.work` | pma | 80 | QNAP | phpMyAdmin |
| `git.np-dms.work` | gitea | 3000 | QNAP | Gitea Git Server |
| `n8n.np-dms.work` | n8n | 5678 | QNAP | n8n Workflow Automation |
| `npm.np-dms.work` | npm | 81 | QNAP | Nginx Proxy Manager Admin |
### Infrastructure Domains (ASUSTOR)
| Domain | Service | Port | Host | Description |
| :----------------------- | :---------- | :--- | :------ | :----------------- |
| `grafana.np-dms.work` | grafana | 3000 | ASUSTOR | Grafana Dashboard |
| `prometheus.np-dms.work` | prometheus | 9090 | ASUSTOR | Prometheus Metrics |
| `uptime.np-dms.work` | uptime-kuma | 3001 | ASUSTOR | Uptime Monitoring |
| `portainer.np-dms.work` | portainer | 9443 | ASUSTOR | Docker Management |
| `registry.np-dms.work` | registry | 5000 | ASUSTOR | Docker Registry |
---
## ⚙️ Core Services Summary
### QNAP Services (Application)
| Service | Technology | Port | Purpose |
| :---------------- | :----------------- | :----- | :------------------------------------------- |
| **Reverse Proxy** | NPM | 80/443 | SSL Termination, Domain Routing |
| **Backend API** | NestJS | 3000 | REST API, Business Logic, Workflow Engine |
| **Frontend** | Next.js | 3000 | Web UI (App Router, React, Tailwind, Shadcn) |
| **Database** | MariaDB 11.8 | 3306 | Primary Relational Database |
| **Cache** | Redis 7.2 | 6379 | Caching, Session, BullMQ |
| **Search** | Elasticsearch 8.11 | 9200 | Full-text Search |
| **Code Hosting** | Gitea | 3000 | Git Repository (Self-hosted) |
| **Workflow** | n8n | 5678 | Automation, Integrations (LINE, Email) |
### ASUSTOR Services (Infrastructure)
| Service | Technology | Port | Purpose |
| :--------------- | :-------------- | :--- | :---------------------------- |
| **Metrics** | Prometheus | 9090 | Metrics Collection |
| **Dashboard** | Grafana | 3000 | Visualization, Alerting |
| **Uptime** | Uptime Kuma | 3001 | Service Availability Monitor |
| **Registry** | Docker Registry | 5000 | Private Container Images |
| **Management** | Portainer | 9443 | Centralized Docker Management |
| **Host Metrics** | node-exporter | 9100 | CPU, Memory, Disk metrics |
| **Container** | cAdvisor | 8080 | Container resource metrics |
| **Backup** | Restic/Borg | N/A | Automated Backups |
---
## 🔧 Quick Reference
### Docker Commands (QNAP - Container Station)
```bash
# List all containers
docker ps -a
# Follow logs
docker logs -f <container_name>
# Open a shell inside a container
docker exec -it <container_name> sh
# Restart service
docker restart <container_name>
```
### Docker Commands (ASUSTOR - Portainer)
```bash
# Remote Docker endpoint connection
# Configure via Portainer UI: Settings > Environments > Add Environment
# Direct SSH to ASUSTOR
ssh admin@192.168.10.9
# Portainer API (Optional)
curl -X GET https://portainer.np-dms.work/api/endpoints \
-H "X-API-Key: <your-api-key>"
```
### Network
```bash
# Create the external network (first run) - required on both servers
# On QNAP:
docker network create lcbp3
# On ASUSTOR:
docker network create lcbp3
# List and inspect networks
docker network ls
docker network inspect lcbp3
```
### MariaDB
```bash
# Open the MySQL CLI (QNAP)
docker exec -it mariadb mysql -u root -p
# Backup database (QNAP -> ASUSTOR)
docker exec mariadb mysqldump -u root -p lcbp3 > backup.sql
# Copy to ASUSTOR via NFS/SCP
```
---
## ⚙️ Environment Variables
Key variables shared across all services:
| Variable | Value | Description |
| :----------------------- | :------------- | :--------------------- |
| `TZ` | `Asia/Bangkok` | Timezone |
| `MYSQL_HOST` / `DB_HOST` | `mariadb` | MariaDB hostname |
| `MYSQL_PORT` / `DB_PORT` | `3306` | MariaDB port |
| `REDIS_HOST` | `cache` | Redis hostname |
| `ELASTICSEARCH_HOST` | `search` | Elasticsearch hostname |
> ⚠️ **Security Note:** Sensitive secrets (passwords, keys) must be supplied via `docker-compose.override.yml` (gitignored) or Docker secrets - never hard-code them in the main `docker-compose.yml`.
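A minimal override sketch (the service and variable names here are illustrative; adapt them to the actual compose files):

```yaml
# docker-compose.override.yml  (gitignored -- never commit real secrets)
services:
  mariadb:
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:?set in .env}
  grafana:
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:?set in .env}
```

The `:?` form makes Compose fail fast when the variable is missing from `.env`, instead of falling back to an empty or default password.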
---
## 📚 Supplementary Documents
| File | Description |
| :------------------------------- | :------------------------------------------------ |
| [Git_command.md](Git_command.md) | Git + Gitea command cheat sheet |
| [lcbp3-db.md](lcbp3-db.md) | Docker Compose for MariaDB (alternative version) |
---
## 📋 Fresh-Installation Checklist
### Phase 1: Network & Infrastructure
1. [ ] Configure VLANs on ER7206 Router
2. [ ] Configure Switch Profiles on SG2428P
3. [ ] Configure Static IPs (QNAP: .8, ASUSTOR: .9)
### Phase 2: ASUSTOR Setup (Infra)
1. [ ] Create Docker Network: `docker network create lcbp3`
2. [ ] Deploy Portainer & Registry
3. [ ] Deploy Monitoring Stack (`prometheus.yml` with QNAP IP target)
4. [ ] Verify Prometheus can reach QNAP services
### Phase 3: QNAP Setup (App)
1. [ ] Create Docker Network: `docker network create lcbp3`
2. [ ] Create `.env` file with secure passwords
3. [ ] Deploy **MariaDB** (Wait for init)
4. [ ] Deploy **Redis Standalone** (Check AOF is active)
5. [ ] Deploy **Elasticsearch** (Check Heap limit)
6. [ ] Deploy **NPM** & App Services (Backend/Frontend)
7. [ ] Verify Internal Load Balancing (Backend Replicas)
### Phase 4: Backup & Security
1. [ ] Configure Restic on ASUSTOR to pull from QNAP
2. [ ] Set Resource Limits (Check `docker stats`)
3. [ ] Configure Firewall ACL Rules
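Item 1 of Phase 4 can be sketched as cron entries on ASUSTOR (the repository path, password file, and an NFS mount of the QNAP dump share are all assumptions):

```
# /etc/crontab fragment on ASUSTOR -- nightly pull of the MariaDB dump
# Assumes /share/dms-data/mariadb/backup (QNAP) is NFS-mounted at /mnt/qnap-backup
0 3 * * * root RESTIC_PASSWORD_FILE=/volume1/np-dms/backup/.pass restic -r /volume1/np-dms/backup/repo backup /mnt/qnap-backup --tag mariadb
30 3 * * * root RESTIC_PASSWORD_FILE=/volume1/np-dms/backup/.pass restic -r /volume1/np-dms/backup/repo forget --keep-daily 7 --keep-weekly 4 --prune
```

Pull-based scheduling like this keeps the backup credentials on ASUSTOR only, so a compromised QNAP cannot delete its own backups.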
---
> 📝 **Note:** All documents reference Architecture Document **v1.8.0** and DMS Container Schema **v1.7.0**.
@@ -0,0 +1,106 @@
# 08-Infrastructure
Infrastructure setup guide for **NAP-DMS LCBP3** (Laem Chabang Port Phase 3 - Document Management System)
> 📍 **Platform:** QNAP (Container Station) + ASUSTOR (Portainer)
> 🌐 **Domain:** `*.np-dms.work` (IP: 159.192.126.103)
> 🔒 **Network:** `lcbp3` (Docker External Network)
> 📄 **Version:** v2.0.0 (Refactored for Stability)
---
## 🏢 Hardware Infrastructure
### Server Role Separation
#### QNAP TS-473A
| (Application & Database Server) |                   |                       |
| :------------------------------ | :---------------- | :-------------------- |
| ✔ Application Runtime           | ✔ API / Web       | ✔ Database (Primary)  |
| ✔ High CPU / RAM usage          | ✔ Worker / Queue  | ✖ No long-term backup |
| Container Station (UI)          | 32GB RAM (Capped) | AMD Ryzen V1500B      |
#### ASUSTOR AS5304T
| (Infrastructure & Backup Server) |                   |                      |
| :------------------------------- | :---------------- | :------------------- |
| ✔ File Storage                   | ✔ Backup Target   | ✔ Docker Infra       |
| ✔ Monitoring / Registry          | ✔ Log Aggregation | ✖ No heavy App logic |
| Portainer (Manage All)           | 16GB RAM          | Intel Celeron @2GHz  |
### Servers Specification & Resource Allocation
| Device | Model | CPU | RAM | Resource Policy | Role |
| :---------- | :------ | :---------------------- | :--- | :------------------ | :--------------------- |
| **QNAP** | TS-473A | AMD Ryzen V1500B | 32GB | **Strict Limits** | Application, DB, Cache |
| **ASUSTOR** | AS5304T | Intel Celeron @ 2.00GHz | 16GB | **Moderate Limits** | Infra, Backup, Monitor |
### Service Distribution by Server
#### QNAP TS-473A (Application Stack)
| Category | Service | Strategy | Resource Limit (Est.) |
| :-------------- | :------------------------ | :------------------------------ | :-------------------- |
| **Web App** | Next.js (Frontend) | Single Instance | 2.0 CPU / 2GB RAM |
| **Backend API** | NestJS | **2 Replicas** (Load Balanced) | 2.0 CPU / 1.5GB RAM |
| **Database** | MariaDB (Primary) | Performance Tuned (Buffer Pool) | 4.0 CPU / 5GB RAM |
| **Worker** | Redis + BullMQ Worker | **Standalone + AOF** | 2.0 CPU / 1.5GB RAM |
| **Search** | Elasticsearch | **Heap Locked (2GB)** | 2.0 CPU / 4GB RAM |
| **API Gateway** | NPM (Nginx Proxy Manager) | SSL Termination | 1.0 CPU / 512MB RAM |
| **Workflow** | n8n | Automation | 1.0 CPU / 1GB RAM |
| **Code** | Gitea | Git Repository | 1.0 CPU / 1GB RAM |
#### ASUSTOR AS5304T (Infrastructure Stack)
| Category | Service | Notes |
| :--------------- | :------------------ | :------------------------------ |
| **File Storage** | NFS / SMB | Shared volumes for backup |
| **Backup** | Restic / Borg | Pull-based Backup (More Safe) |
| **Docker Infra** | Registry, Portainer | Container image registry, mgmt |
| **Monitoring** | Uptime Kuma | Service availability monitoring |
| **Metrics** | Prometheus, Grafana | Cross-Server Scraping |
| **Log** | Loki / Syslog | Centralized logging |
---
## 🔄 Data Flow Architecture
```
┌──────────────┐
│     User     │
└──────┬───────┘
       │ HTTPS (443)
┌────────────────────────────────────────────────────────────┐
│                        QNAP TS-473A                        │
│  ┌───────────────────────────────────────────────────────┐ │
│  │              Nginx Proxy Manager (NPM)                │ │
│  │         SSL Termination + Round Robin LB              │ │
│  └───────────────────────┬───────────────────────────────┘ │
│                          │                                 │
│  ┌───────────────────────▼───────────────────────────────┐ │
│  │  ┌──────────────┐   ┌──────────────┐  ┌──────────────┐│ │
│  │  │   Next.js    │──▶│   NestJS     │  │   NestJS     ││ │
│  │  │  (Frontend)  │   │ (Replica 1)  │  │ (Replica 2)  ││ │
│  │  └──────────────┘   └──────┬───────┘  └──────┬───────┘│ │
│  │                            │                 │        │ │
│  │       ┌────────────────────┼─────────────────┤        │ │
│  │       ▼                    ▼                 ▼        │ │
│  │  ┌──────────┐         ┌──────────┐    ┌─────────────┐ │ │
│  │  │ MariaDB  │         │  Redis   │    │Elasticsearch│ │ │
│  │  │ (Primary)│         │(Persist.)│    │  (Search)   │ │ │
│  │  └────┬─────┘         └──────────┘    └─────────────┘ │ │
│  └───────┼───────────────────────────────────────────────┘ │
│          │                                                 │
└──────────┼─────────────────────────────────────────────────┘
           │ Local Dump -> Restic Pull (Cross-Server)
┌────────────────────────────────────────────────────────────┐
│                      ASUSTOR AS5304T                       │
│ ┌────────────────────────────────────────────────────────┐ │
│ │ ┌──────────┐    ┌──────────┐    ┌──────────┐           │ │
│ │ │  Backup  │    │ Registry │    │  Uptime  │           │ │
│ │ │ (Restic) │    │ (Docker) │    │   Kuma   │           │ │
│ │ └──────────┘    └──────────┘    └──────────┘           │ │
│ │                                                        │ │
│ │ ┌──────────┐    ┌───────────┐    ┌──────────┐          │ │
│ │ │Prometheus│───▶│  Grafana  │    │   Loki   │          │ │
│ │ │(Scraper) │    │(Dashboard)│    │  (Logs)  │          │ │
│ │ └──────────┘    └───────────┘    └──────────┘          │ │
│ └────────────────────────────────────────────────────────┘ │
└────────────────────────────────────────────────────────────┘
```
@@ -36,24 +36,25 @@
```bash
# Create directories
mkdir -p /share/Container/services/cache/data
mkdir -p /share/Container/services/search/data
mkdir -p /share/np-dms/services/cache/data
mkdir -p /share/np-dms/services/search/data
# Set ownership to match each container's user ID
# Redis (UID 999)
chown -R 999:999 /share/Container/services/cache/data
chmod -R 750 /share/Container/services/cache/data
chown -R 999:999 /share/np-dms/services/cache/data
chmod -R 750 /share/np-dms/services/cache/data
# Elasticsearch (UID 1000)
chown -R 1000:1000 /share/Container/services/search/data
chmod -R 750 /share/Container/services/search/data
chown -R 1000:1000 /share/np-dms/services/search/data
chmod -R 750 /share/np-dms/services/search/data
```
## Docker file
```yml
# File: /share/Container/services/docker-compose.yml (or your combined compose file)
# DMS Container v1_4_1: added Application name: services, services 'cache' (Redis) and 'search' (Elasticsearch)
# File: /share/np-dms/services/docker-compose.yml (or your combined compose file)
# DMS Container v1_7_0: added Application name: services
# Services 'cache' (Redis) and 'search' (Elasticsearch)
x-restart: &restart_policy
restart: unless-stopped
@@ -93,7 +94,7 @@ services:
networks:
- lcbp3 # internal network only
volumes:
- "/share/Container/cache/data:/data" # data volume (if persistence is needed)
- "/share/np-dms/services/cache/data:/data" # data volume (if persistence is needed)
healthcheck:
test: ["CMD", "redis-cli", "ping"] # verify the service is ready
interval: 10s
@@ -132,7 +133,7 @@ services:
networks:
- lcbp3 # internal network (NPM proxies port 9200 from outside)
volumes:
- "/share/Container/search/data:/usr/share/elasticsearch/data" # volume for data/indices
- "/share/np-dms/services/search/data:/usr/share/elasticsearch/data" # volume for data/indices
healthcheck:
# wait until cluster health is yellow or green
test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -q '\"status\":\"green\"\\|\"status\":\"yellow\"'"]
@@ -0,0 +1,455 @@
# Installing the Monitoring Stack on ASUSTOR
## **📝 Description and Considerations**
> ⚠️ **Note**: The entire monitoring stack runs on **ASUSTOR AS5304T**, not QNAP,
> keeping the application workload separate from the infrastructure/monitoring workload.
The monitoring stack consists of:
| Service | Port | Purpose | Host |
| :---------------- | :--- | :-------------------------------- | :------ |
| **Prometheus** | 9090 | Metrics and time-series storage | ASUSTOR |
| **Grafana** | 3000 | Dashboards for metrics | ASUSTOR |
| **Node Exporter** | 9100 | Host system metrics | Both |
| **cAdvisor** | 8080 | Docker container metrics | Both |
| **Uptime Kuma** | 3001 | Service availability monitoring | ASUSTOR |
| **Loki** | 3100 | Log aggregation | ASUSTOR |
---
## 🏗️ Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────┐
│ ASUSTOR AS5403T (Monitoring Hub) │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Prometheus │───▶│ Grafana │ │ Uptime Kuma │ │
│ │ :9090 │ │ :3000 │ │ :3001 │ │
│ └──────┬──────┘ └─────────────┘ └─────────────┘ │
│ │ │
│ │ Scrape Metrics │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │node-exporter│ │ cAdvisor │ │
│ │ :9100 │ │ :8080 │ │
│ │ (Local) │ │ (Local) │ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│ Remote Scrape
┌─────────────────────────────────────────────────────────────────────────┐
│ QNAP TS-473A (App Server) │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │node-exporter│ │ cAdvisor │ │ Backend │ │
│ │ :9100 │ │ :8080 │ │ /metrics │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
```
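The remote-scrape path above implies a `prometheus.yml` along these lines (a sketch: job names are illustrative, the QNAP exporter ports must actually be published on `192.168.10.8`, and the Backend `/metrics` port is an assumption):

```yaml
# /volume1/np-dms/monitoring/prometheus/config/prometheus.yml (sketch)
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: "asustor-node"
    static_configs:
      - targets: ["node-exporter:9100"]
  - job_name: "asustor-cadvisor"
    static_configs:
      - targets: ["cadvisor:8080"]
  - job_name: "qnap-node"
    static_configs:
      - targets: ["192.168.10.8:9100"]
  - job_name: "qnap-cadvisor"
    static_configs:
      - targets: ["192.168.10.8:8080"]
  - job_name: "backend"
    metrics_path: /metrics
    static_configs:
      - targets: ["192.168.10.8:3000"]
```

After editing, `curl -X POST http://localhost:9090/-/reload` applies the config without a restart, since the compose file starts Prometheus with `--web.enable-lifecycle`.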
---
## Setting Permissions (on ASUSTOR)
```bash
# SSH into ASUSTOR
ssh admin@192.168.10.9
# Create directories
mkdir -p /volume1/np-dms/monitoring/prometheus/data
mkdir -p /volume1/np-dms/monitoring/prometheus/config
mkdir -p /volume1/np-dms/monitoring/grafana/data
mkdir -p /volume1/np-dms/monitoring/uptime-kuma/data
mkdir -p /volume1/np-dms/monitoring/loki/data
# Set ownership to match each container's user ID
# Prometheus (UID 65534 - nobody)
chown -R 65534:65534 /volume1/np-dms/monitoring/prometheus
chmod -R 750 /volume1/np-dms/monitoring/prometheus
# Grafana (UID 472)
chown -R 472:472 /volume1/np-dms/monitoring/grafana/data
chmod -R 750 /volume1/np-dms/monitoring/grafana/data
# Uptime Kuma (UID 1000)
chown -R 1000:1000 /volume1/np-dms/monitoring/uptime-kuma/data
chmod -R 750 /volume1/np-dms/monitoring/uptime-kuma/data
# Loki (UID 10001)
chown -R 10001:10001 /volume1/np-dms/monitoring/loki/data
chmod -R 750 /volume1/np-dms/monitoring/loki/data
```
---
## Note: NPM Proxy Configuration (if running NPM on ASUSTOR)
| Domain Names | Forward Hostname/IP | Forward Port | Cache Assets | Block Common Exploits | Websockets | Force SSL | HTTP/2 |
| :--------------------- | :------------------ | :----------- | :----------- | :-------------------- | :--------- | :-------- | :----- |
| grafana.np-dms.work | grafana | 3000 | [ ] | [x] | [x] | [x] | [x] |
| prometheus.np-dms.work | prometheus | 9090 | [ ] | [x] | [ ] | [x] | [x] |
| uptime.np-dms.work | uptime-kuma | 3001 | [ ] | [x] | [x] | [x] | [x] |
> **Note**: If only the QNAP NPM instance is used, forward to the ASUSTOR IP (192.168.10.9).
---
## Docker Compose File (ASUSTOR)
```yaml
# File: /volume1/np-dms/monitoring/docker-compose.yml
# DMS Container v1.8.0: Application name: lcbp3-monitoring
# Deploy on: ASUSTOR AS5304T
# Services: prometheus, grafana, node-exporter, cadvisor, uptime-kuma, loki
x-restart: &restart_policy
restart: unless-stopped
x-logging: &default_logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "5"
networks:
lcbp3:
external: true
services:
# ----------------------------------------------------------------
# 1. Prometheus (Metrics Collection & Storage)
# ----------------------------------------------------------------
prometheus:
<<: [*restart_policy, *default_logging]
image: prom/prometheus:v2.48.0
container_name: prometheus
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0"
memory: 1G
reservations:
cpus: "0.25"
memory: 256M
environment:
TZ: "Asia/Bangkok"
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/prometheus/config:/etc/prometheus:ro"
- "/volume1/np-dms/monitoring/prometheus/data:/prometheus"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 2. Grafana (Dashboard & Visualization)
# ----------------------------------------------------------------
grafana:
<<: [*restart_policy, *default_logging]
image: grafana/grafana:10.2.2
container_name: grafana
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0"
memory: 512M
reservations:
cpus: "0.25"
memory: 128M
environment:
TZ: "Asia/Bangkok"
GF_SECURITY_ADMIN_USER: admin
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-Center#2025}
GF_SERVER_ROOT_URL: "https://grafana.np-dms.work"
GF_INSTALL_PLUGINS: grafana-clock-panel,grafana-piechart-panel
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/grafana/data:/var/lib/grafana"
depends_on:
- prometheus
healthcheck:
test: ["CMD-SHELL", "wget --spider -q http://localhost:3000/api/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 3. Uptime Kuma (Service Availability Monitoring)
# ----------------------------------------------------------------
uptime-kuma:
<<: [*restart_policy, *default_logging]
image: louislam/uptime-kuma:1
container_name: uptime-kuma
deploy:
resources:
limits:
cpus: "0.5"
memory: 256M
environment:
TZ: "Asia/Bangkok"
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/uptime-kuma/data:/app/data"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3001"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 4. Node Exporter (Host Metrics - ASUSTOR)
# ----------------------------------------------------------------
node-exporter:
<<: [*restart_policy, *default_logging]
image: prom/node-exporter:v1.7.0
container_name: node-exporter
deploy:
resources:
limits:
cpus: "0.5"
memory: 128M
environment:
TZ: "Asia/Bangkok"
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- lcbp3
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:9100/metrics"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 5. cAdvisor (Container Metrics - ASUSTOR)
# ----------------------------------------------------------------
cadvisor:
<<: [*restart_policy, *default_logging]
image: gcr.io/cadvisor/cadvisor:v0.47.2
container_name: cadvisor
deploy:
resources:
limits:
cpus: "0.5"
memory: 256M
environment:
TZ: "Asia/Bangkok"
networks:
- lcbp3
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/healthz"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 6. Loki (Log Aggregation)
# ----------------------------------------------------------------
loki:
<<: [*restart_policy, *default_logging]
image: grafana/loki:2.9.0
container_name: loki
deploy:
resources:
limits:
cpus: "0.5"
memory: 512M
environment:
TZ: "Asia/Bangkok"
command: -config.file=/etc/loki/local-config.yaml
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/loki/data:/loki"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3100/ready"]
interval: 30s
timeout: 10s
retries: 3
```
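The `loki` service above points at `/etc/loki/local-config.yaml` but mounts only the `/loki` data directory, so it runs on the image's built-in config. To make sure the index and chunks actually persist under the mounted `/loki` path, a minimal config can be mounted in its place. This is a sketch, not part of the original setup; the file location and the extra volume line are assumptions:

```yaml
# Assumed file: /volume1/np-dms/monitoring/loki/config/local-config.yaml
# Assumed extra volume on the loki service:
#   - "/volume1/np-dms/monitoring/loki/config/local-config.yaml:/etc/loki/local-config.yaml:ro"
auth_enabled: false

server:
  http_listen_port: 3100

common:
  path_prefix: /loki          # keep all state under the mounted volume
  replication_factor: 1
  storage:
    filesystem:
      chunks_directory: /loki/chunks
      rules_directory: /loki/rules
  ring:
    kvstore:
      store: inmemory

schema_config:
  configs:
    - from: "2020-10-24"
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
```

This mirrors the image's default single-binary config, changed only to keep state under `/loki` so the bind mount above captures it.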
---
## QNAP Node Exporter & cAdvisor
Install node-exporter and cAdvisor on the QNAP so that Prometheus on the ASUSTOR can scrape their metrics:
```yaml
# File: /share/np-dms/monitoring/docker-compose.yml (QNAP)
# Exporters only; metrics are scraped by Prometheus on the ASUSTOR
version: '3.8'
networks:
lcbp3:
external: true
services:
node-exporter:
image: prom/node-exporter:v1.7.0
container_name: node-exporter
restart: unless-stopped
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- lcbp3
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.47.2
container_name: cadvisor
restart: unless-stopped
networks:
- lcbp3
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
```
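One caveat (my note, not from the original): the two services above attach only to the `lcbp3` Docker network and publish no ports, while the Prometheus config scrapes them at `192.168.10.8:9100` and `192.168.10.8:8080`. Unless `lcbp3` is a macvlan-type network that exposes containers directly on the LAN, the ports need to be published on the QNAP host, for example:

```yaml
# Illustrative addition to the QNAP compose file above
services:
  node-exporter:
    ports:
      - "9100:9100"
  cadvisor:
    ports:
      - "8080:8080"
```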
---
## Prometheus Configuration
Create the file `/volume1/np-dms/monitoring/prometheus/config/prometheus.yml` on the ASUSTOR:
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
# Prometheus self-monitoring (ASUSTOR)
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# ============================================
# ASUSTOR Metrics (Local)
# ============================================
# Host metrics from Node Exporter (ASUSTOR)
- job_name: 'asustor-node'
static_configs:
- targets: ['node-exporter:9100']
labels:
host: 'asustor'
# Container metrics from cAdvisor (ASUSTOR)
- job_name: 'asustor-cadvisor'
static_configs:
- targets: ['cadvisor:8080']
labels:
host: 'asustor'
# ============================================
# QNAP Metrics (Remote - 192.168.10.8)
# ============================================
# Host metrics from Node Exporter (QNAP)
- job_name: 'qnap-node'
static_configs:
- targets: ['192.168.10.8:9100']
labels:
host: 'qnap'
# Container metrics from cAdvisor (QNAP)
- job_name: 'qnap-cadvisor'
static_configs:
- targets: ['192.168.10.8:8080']
labels:
host: 'qnap'
# Backend NestJS application (QNAP)
- job_name: 'backend'
static_configs:
- targets: ['192.168.10.8:3000']
labels:
host: 'qnap'
metrics_path: '/metrics'
# MariaDB Exporter (optional - QNAP)
# - job_name: 'mariadb'
# static_configs:
# - targets: ['192.168.10.8:9104']
# labels:
# host: 'qnap'
```
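The `evaluation_interval` above only matters once rule files are loaded, and the config as shown defines none. As an illustration (the file name, thresholds, and the `rule_files` entry are assumptions, not part of the original setup), a minimal alert file could sit next to `prometheus.yml`:

```yaml
# Assumed file: /volume1/np-dms/monitoring/prometheus/config/alerts.yml
# Referenced from prometheus.yml via:
#   rule_files:
#     - /etc/prometheus/alerts.yml
groups:
  - name: host-alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Target {{ $labels.job }} ({{ $labels.host }}) is unreachable"
      - alert: HighMemoryUsage
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Memory usage above 90% on {{ $labels.host }}"
```

Without an Alertmanager these alerts only appear in the Prometheus UI; Grafana can also alert on the same expressions.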
---
## Uptime Kuma Monitors
Once Uptime Kuma is running, add the following monitors:
| Monitor Name | Type | URL / Host | Interval |
| :------------ | :--- | :--------------------------------- | :------- |
| QNAP NPM | HTTP | https://npm.np-dms.work | 60s |
| Frontend | HTTP | https://lcbp3.np-dms.work | 60s |
| Backend API | HTTP | https://backend.np-dms.work/health | 60s |
| MariaDB | TCP | 192.168.10.8:3306 | 60s |
| Redis | TCP | 192.168.10.8:6379 | 60s |
| Elasticsearch | HTTP | http://192.168.10.8:9200 | 60s |
| Gitea | HTTP | https://git.np-dms.work | 60s |
| n8n | HTTP | https://n8n.np-dms.work | 60s |
| Grafana | HTTP | https://grafana.np-dms.work | 60s |
| QNAP Host | Ping | 192.168.10.8 | 60s |
| ASUSTOR Host | Ping | 192.168.10.9 | 60s |
---
## Grafana Dashboards
### Recommended Dashboards to Import
| Dashboard ID | Name | Purpose |
| :----------- | :--------------------------- | :------------------ |
| 1860 | Node Exporter Full | Host system metrics |
| 14282 | cAdvisor exporter | Container metrics |
| 11074 | Node Exporter for Prometheus | Node overview |
| 7362 | Docker and Host Monitoring | Combined view |
### Import Dashboard via Grafana UI
1. Go to **Dashboards → Import**
2. Enter Dashboard ID (e.g., `1860`)
3. Select Prometheus data source
4. Click **Import**
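Data sources can also be provisioned from a file instead of the UI. A sketch, assuming the file below is mounted into the container at `/etc/grafana/provisioning/datasources/datasources.yml` (that mount is not in the compose file above and would need to be added):

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
```

Container names resolve on the shared `lcbp3` network, so the URLs use service names rather than host IPs.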
---
> 📝 **Note**: This document follows Architecture Document **v1.8.0**; the monitoring stack is deployed on the ASUSTOR AS5304T.

---
```bash
# For the n8n volumes
chown -R 1000:1000 /share/np-dms/n8n
chmod -R 755 /share/np-dms/n8n
```
## Docker Compose File

```yml
# File: /share/np-dms/n8n/docker-compose.yml
# DMS Container v1_7_0, services and folders split per application; Application name: n8n, service: n8n
x-restart: &restart_policy
restart: unless-stopped
# ... (excerpt from the n8n service definition; earlier lines abridged) ...
networks:
lcbp3: {}
volumes:
- "/share/np-dms/n8n:/home/node/.n8n"
- "/share/np-dms/n8n/cache:/home/node/.cache"
- "/share/np-dms/n8n/scripts:/scripts"
- "/share/np-dms/n8n/data:/data"
- "/var/run/docker.sock:/var/run/docker.sock"
    healthcheck:
      # ... (healthcheck definition abridged) ...
```

---
# 🗺️ Network Architecture & Container Services (LCBP3-DMS)

This document shows the network segmentation (VLANs), the firewall rules (ACLs), and the roles of the two servers (QNAP: Application, ASUSTOR: Infrastructure).
---
## 1. Server Role Overview
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ LCBP3-DMS INFRASTRUCTURE │
├────────────────────────────────┬─────────────────────────────────────────────┤
│ QNAP TS-473A                   │ ASUSTOR AS5304T                             │
│ (Application & Database) │ (Infrastructure & Backup) │
├────────────────────────────────┼─────────────────────────────────────────────┤
│ ✔ Application Runtime │ ✔ File Storage (NFS/SMB) │
│ ✔ API / Web (NestJS, Next.js) │ ✔ Backup Target (Restic/Borg) │
│ ✔ Database (MariaDB Primary) │ ✔ Docker Infra (Registry, Portainer) │
│ ✔ High CPU / RAM usage │ ✔ Monitoring (Prometheus, Grafana) │
│ ✔ Worker / Queue (Redis) │ ✔ Log Aggregation (Loki) │
│ ✔ API Gateway (NPM) │ ✔ Uptime Monitoring (Uptime Kuma) │
│ ✖ No long-term backups         │ ✖ No heavy application logic                │
├────────────────────────────────┼─────────────────────────────────────────────┤
│ Container: Container Station │ Container: Portainer │
│ IP: 192.168.10.8 │ IP: 192.168.10.9 │
│ Storage: 4TB×4 RAID5 + 1TB SSD │ Storage: 6TB×3 RAID5 + 1TB SSD │
└────────────────────────────────┴─────────────────────────────────────────────┘
```
---
## 2. Data Flow Diagram
```mermaid
flowchart TB
subgraph Internet["🌐 Internet"]
User[("👤 User")]
end
subgraph QNAP["💾 QNAP TS-473A (App Server)"]
NPM["🔲 NPM<br/>(Reverse Proxy)"]
Frontend["📱 Next.js<br/>(Frontend)"]
Backend["⚙️ NestJS<br/>(Backend API)"]
DB["🗄️ MariaDB"]
Redis["📦 Redis"]
ES["🔍 Elasticsearch"]
end
    subgraph ASUSTOR["💾 ASUSTOR AS5304T (Infra Server)"]
Portainer["🐳 Portainer"]
Registry["📦 Registry"]
Prometheus["📊 Prometheus"]
Grafana["📈 Grafana"]
Uptime["⏱️ Uptime Kuma"]
Backup["💾 Restic/Borg"]
NFS["📁 NFS Storage"]
end
User -->|HTTPS 443| NPM
NPM --> Frontend
NPM --> Backend
Frontend --> Backend
Backend --> DB
Backend --> Redis
Backend --> ES
DB -.->|Scheduled Backup| Backup
Backup --> NFS
Portainer -.->|Manage| QNAP
Prometheus -.->|Collect Metrics| Backend
Prometheus -.->|Collect Metrics| DB
Uptime -.->|Health Check| NPM
```
---
## 3. Docker Management View
```mermaid
flowchart TB
subgraph Portainer["🐳 Portainer (ASUSTOR - Central Management)"]
direction TB
subgraph LocalStack["📦 Local Infra Stack"]
Registry["Docker Registry"]
Prometheus["Prometheus"]
Grafana["Grafana"]
Uptime["Uptime Kuma"]
Backup["Restic/Borg"]
Loki["Loki (Logs)"]
ClamAV["ClamAV"]
end
subgraph RemoteStack["🔗 Remote: QNAP App Stack"]
Frontend["Next.js"]
Backend["NestJS"]
MariaDB["MariaDB"]
Redis["Redis"]
ES["Elasticsearch"]
NPM["NPM"]
Gitea["Gitea"]
N8N["n8n"]
PMA["phpMyAdmin"]
end
end
```
---
## 4. Security Zones Diagram
```mermaid
flowchart TB
subgraph PublicZone["🌐 PUBLIC ZONE"]
direction LR
NPM["NPM (Reverse Proxy)"]
SSL["SSL/TLS Termination"]
end
subgraph AppZone["📱 APPLICATION ZONE (QNAP)"]
direction LR
Frontend["Next.js"]
Backend["NestJS"]
N8N["n8n"]
Gitea["Gitea"]
end
subgraph DataZone["💾 DATA ZONE (QNAP - Internal Only)"]
direction LR
MariaDB["MariaDB"]
Redis["Redis"]
ES["Elasticsearch"]
end
subgraph InfraZone["🛠️ INFRASTRUCTURE ZONE (ASUSTOR)"]
direction LR
Backup["Backup Services"]
Registry["Docker Registry"]
Monitoring["Prometheus + Grafana"]
Logs["Loki / Syslog"]
end
PublicZone -->|HTTPS Only| AppZone
AppZone -->|Internal API| DataZone
DataZone -.->|Backup| InfraZone
AppZone -.->|Metrics| InfraZone
```
---
## 5. Network Connection Diagram (Network Flow)
```mermaid
graph TD
    subgraph Flow1["External Connections (Public WAN)"]
        User["External Users (Internet)"]
end
    subgraph Router["Router (ER7206) - Gateway"]
        User -- "Port 80/443 (HTTP/HTTPS)" --> ER7206
        ER7206["Port Forwarding<br/>TCP 80 → 192.168.10.8:80<br/>TCP 443 → 192.168.10.8:443"]
end
    subgraph VLANs["Internal Network (VLANs & Firewall Rules)"]
direction LR
subgraph VLAN10["VLAN 10: Servers<br/>192.168.10.x"]
QNAP["QNAP NAS<br/>(192.168.10.8)"]
ASUSTOR["ASUSTOR NAS<br/>(192.168.10.9)"]
end
subgraph VLAN20["VLAN 20: MGMT<br/>192.168.20.x"]
AdminPC["Admin PC / Switches"]
end
        subgraph VLAN30["VLAN 30: USER<br/>192.168.30.x"]
            OfficePC["Staff PCs / Wi-Fi"]
end
subgraph VLAN70["VLAN 70: GUEST<br/>192.168.70.x"]
GuestPC["Guest Wi-Fi"]
end
subgraph Firewall["Firewall ACLs (OC200/ER7206)"]
direction TB
rule1["Rule 1: DENY<br/>Guest (VLAN 70) → All VLANs"]
rule2["Rule 2: DENY<br/>Server (VLAN 10) → User (VLAN 30)"]
rule3["Rule 3: ALLOW<br/>User (VLAN 30) → QNAP<br/>Ports: 443, 80"]
rule4["Rule 4: ALLOW<br/>MGMT (VLAN 20) → All"]
end
        %% Firewall rule edges
        GuestPC -.-x|rule1| QNAP
        QNAP -.-x|rule2| OfficePC
        OfficePC -->|"rule3: https://lcbp3.np-dms.work"| QNAP
        AdminPC -->|rule4| QNAP
        AdminPC -->|rule4| ASUSTOR
end
    %% Router to QNAP connection
ER7206 --> QNAP
subgraph DockerQNAP["Docker 'lcbp3' (QNAP - Applications)"]
direction TB
        subgraph PublicServices["Services Exposed via NPM"]
direction LR
NPM["NPM (Nginx Proxy Manager)"]
FrontendC["frontend:3000"]
BackendC["backend:3000"]
GiteaC["gitea:3000"]
PMAC["pma:80"]
N8NC["n8n:5678"]
end
subgraph InternalServices["Internal Services (Backend Only)"]
direction LR
DBC["mariadb:3306"]
CacheC["cache:6379"]
SearchC["search:9200"]
end
        %% Internal Docker connections
NPM -- "lcbp3.np-dms.work" --> FrontendC
NPM -- "backend.np-dms.work" --> BackendC
NPM -- "git.np-dms.work" --> GiteaC
NPM -- "pma.np-dms.work" --> PMAC
NPM -- "n8n.np-dms.work" --> N8NC
BackendC -- "lcbp3 Network" --> DBC
BackendC -- "lcbp3 Network" --> CacheC
BackendC -- "lcbp3 Network" --> SearchC
end
subgraph DockerASUSTOR["Docker 'lcbp3' (ASUSTOR - Infrastructure)"]
direction TB
subgraph InfraServices["Infrastructure Services"]
direction LR
PortainerC["portainer:9443"]
RegistryC["registry:5000"]
PrometheusC["prometheus:9090"]
GrafanaC["grafana:3000"]
UptimeC["uptime-kuma:3001"]
end
subgraph BackupServices["Backup & Storage"]
direction LR
ResticC["restic/borg"]
NFSC["NFS Share"]
end
PortainerC -.->|"Remote Endpoint"| NPM
PrometheusC -.->|"Scrape Metrics"| BackendC
ResticC --> NFSC
end
QNAP --> NPM
ASUSTOR --> PortainerC
DBC -.->|"Scheduled Backup"| ResticC
```
---
## 6. Firewall ACL Configuration (Omada OC200)

Create the following rules (ordered top to bottom) under **Settings > Network Security > ACL**:
| Order | Name                   | Policy    | Source            | Destination               | Ports                                |
| :---- | :--------------------- | :-------- | :---------------- | :------------------------ | :----------------------------------- |
| **1** | Isolate-Guests | **Deny** | Network → VLAN 70 | Network → VLAN 10, 20, 30 | All |
| **2** | Isolate-Servers | **Deny** | Network → VLAN 10 | Network → VLAN 30 (USER) | All |
| **3** | Block-User-to-Mgmt | **Deny** | Network → VLAN 30 | Network → VLAN 20 (MGMT) | All |
| **4** | Allow-User-to-Services | **Allow** | Network → VLAN 30 | IP → QNAP (192.168.10.8) | Port Group → Web (443, 80, 81, 2222) |
| **5** | Allow-MGMT-to-All | **Allow** | Network → VLAN 20 | Any | All |
| **6** | Allow-Server-Internal | **Allow** | IP → 192.168.10.8 | IP → 192.168.10.9 | All (QNAP ↔ ASUSTOR) |
| **7** | (Default) | Deny | Any | Any | All |
---
## 7. Port Forwarding Configuration (Omada ER7206)

Create the following rules under **Settings > Transmission > Port Forwarding**:
| Name | External Port | Internal IP | Internal Port | Protocol |
| :-------------- | :------------ | :----------- | :------------ | :------- |
| Allow-NPM-HTTPS | 443 | 192.168.10.8 | 443 | TCP |
| Allow-NPM-HTTP | 80 | 192.168.10.8 | 80 | TCP |
> **Note**: Forward ports only to the QNAP (NPM); the ASUSTOR must not accept traffic from the Internet.
---
## 8. Container Service Distribution
### QNAP (192.168.10.8) - Application Services
| Container | Port | Domain | Network |
| :------------ | :--- | :------------------ | :------ |
| npm | 81 | npm.np-dms.work | lcbp3 |
| frontend | 3000 | lcbp3.np-dms.work | lcbp3 |
| backend | 3000 | backend.np-dms.work | lcbp3 |
| mariadb | 3306 | (internal) | lcbp3 |
| cache (redis) | 6379 | (internal) | lcbp3 |
| search (es) | 9200 | (internal) | lcbp3 |
| gitea | 3000 | git.np-dms.work | lcbp3 |
| n8n | 5678 | n8n.np-dms.work | lcbp3 |
| pma | 80 | pma.np-dms.work | lcbp3 |
### ASUSTOR (192.168.10.9) - Infrastructure Services
| Container | Port | Domain | Network |
| :------------ | :--- | :--------------------- | :------ |
| portainer | 9443 | portainer.np-dms.work | lcbp3 |
| prometheus | 9090 | prometheus.np-dms.work | lcbp3 |
| grafana | 3000 | grafana.np-dms.work | lcbp3 |
| uptime-kuma | 3001 | uptime.np-dms.work | lcbp3 |
| registry | 5000 | registry.np-dms.work | lcbp3 |
| node-exporter | 9100 | (internal) | lcbp3 |
| cadvisor | 8080 | (internal) | lcbp3 |
| loki | 3100 | (internal) | lcbp3 |
| restic/borg | N/A | (scheduled job) | host |
---
## 9. Backup Flow
```
┌────────────────────────────────────────────────────────────────────────┐
│ BACKUP STRATEGY │
├────────────────────────────────────────────────────────────────────────┤
│ │
│ QNAP (Source) ASUSTOR (Target) │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ MariaDB │ ──── Daily 2AM ────▶ │ /volume1/backup/db/ │ │
│ │ (mysqldump) │ │ (Restic Repository) │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ Redis RDB │ ──── Daily 3AM ────▶ │ /volume1/backup/ │ │
│ │ + AOF │ │ redis/ │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ App Config │ ──── Weekly ───────▶ │ /volume1/backup/ │ │
│ │ + Volumes │ Sunday 4AM │ config/ │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ Retention Policy: │
│ • Daily: 7 days │
│ • Weekly: 4 weeks │
│ • Monthly: 6 months │
│ │
└────────────────────────────────────────────────────────────────────────┘
```
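The schedule and retention policy above map naturally onto cron plus restic's `--keep-*` flags. A sketch as crontab entries on the ASUSTOR; every repository path, staging path, and script name here is an assumption, not taken from this document:

```
# Illustrative crontab (ASUSTOR); adjust repos and paths to your layout
# Daily 02:00 - MariaDB dump pulled from QNAP, then snapshotted
0 2 * * *  /volume1/backup/scripts/db-backup.sh
# Daily 03:00 - Redis RDB + AOF
0 3 * * *  restic -r /volume1/backup/redis backup /volume1/backup/staging/redis
# Weekly Sunday 04:00 - app config + volumes
0 4 * * 0  restic -r /volume1/backup/config backup /volume1/backup/staging/config
# Apply retention: 7 daily, 4 weekly, 6 monthly (matches the policy above)
30 4 * * * restic -r /volume1/backup/db forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

`restic forget --prune` both expires old snapshots per the keep rules and reclaims the space in one pass.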
---
> 📝 **Note**: This document follows Architecture Document **v1.8.0**. Last updated: 2026-01-28.