260130:1446 Update Infrastructure

This commit is contained in:
admin
2026-01-30 14:46:06 +07:00
parent cd73cc1549
commit 9e8bd25e1d
12 changed files with 1903 additions and 255 deletions


@@ -7,25 +7,26 @@
## Set permissions
```bash
-chown -R 1000:1000 /share/Container/gitea/
+chown -R 1000:1000 /share/np-dms/gitea/
-[/share/Container/git] # ls -l /share/Container/gitea/etc/app.ini
-[/share/Container/git] # setfacl -R -m u:1000:rwx /share/Container/gitea/
-[/share/Container/git] # setfacl -R -m u:70:rwx /share/Container/git/postgres/
-getfacl /share/Container/git/etc/app.ini
-chown -R 1000:1000 /share/Container/gitea/
-# clear permissions
-setfacl -R -b /share/Container/gitea/
+getfacl /share/np-dms/git/etc/app.ini
+chown -R 1000:1000 /share/np-dms/gitea/
+# clear permissions
+setfacl -R -b /share/np-dms/gitea/
-chgrp -R administrators /share/Container/gitea/
-chown -R 1000:1000 /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
-setfacl -m u:1000:rwx -m g:1000:rwx /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
+chgrp -R administrators /share/np-dms/gitea/
+chown -R 1000:1000 /share/np-dms/gitea/etc /share/np-dms/gitea/lib /share/np-dms/gitea/backup
+setfacl -m u:1000:rwx -m g:1000:rwx /share/np-dms/gitea/etc /share/np-dms/gitea/lib /share/np-dms/gitea/backup
```
## Docker file
```yml
-# File: share/Container/git/docker-compose.yml
-# DMS Container v1_4_1 : separate service and folder, Application name: git, Service: gitea
+# File: share/np-dms/git/docker-compose.yml
+# DMS Container v1_7_0 : separate service and folder
+# Application name: git, Service: gitea
networks:
lcbp3:
external: true
@@ -74,12 +75,12 @@ services:
# Optional: lock install after setup (change to true once onboarding is done)
GITEA__security__INSTALL_LOCK: "true"
volumes:
-      - /share/Container/gitea/backup:/backup
-      - /share/Container/gitea/etc:/etc/gitea
-      - /share/Container/gitea/lib:/var/lib/gitea
+      - /share/np-dms/gitea/backup:/backup
+      - /share/np-dms/gitea/etc:/etc/gitea
+      - /share/np-dms/gitea/lib:/var/lib/gitea
-      # repo root is served from /share/dms-data/gitea_repos
-      - /share/dms-data/gitea_repos:/var/lib/gitea/git/repositories
-      - /share/dms-data/gitea_registry:/data/registry
+      - /share/np-dms/gitea/gitea_repos:/var/lib/gitea/git/repositories
+      - /share/np-dms/gitea/gitea_registry:/data/registry
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:


@@ -1,6 +1,27 @@
# Infrastructure Setup
> 📍 **Document Version:** v1.8.0
> 🖥️ **Primary Server:** QNAP TS-473A (Application & Database)
> 💾 **Backup Server:** ASUSTOR AS5403T (Infrastructure & Backup)
---
## Server Role Overview
| Component | QNAP TS-473A | ASUSTOR AS5403T |
| :-------------------- | :---------------------------- | :--------------------------- |
| **Redis/Cache** | ✅ Primary (Section 1) | ❌ Not deployed |
| **Database** | ✅ Primary MariaDB (Section 2) | ❌ Not deployed |
| **Backend Service** | ✅ NestJS API (Section 3) | ❌ Not deployed |
| **Monitoring** | ❌ Exporters only | ✅ Prometheus/Grafana |
| **Backup Target** | ❌ Source only | ✅ Backup storage (Section 5) |
| **Disaster Recovery** | ✅ Recovery target | ✅ Backup source (Section 7) |
> 📖 See [monitoring.md](monitoring.md) for ASUSTOR-specific monitoring setup
---
## 1. Redis Configuration (Standalone + Persistence)
### 1.1 Docker Compose Setup
```yaml
@@ -8,99 +29,29 @@
 version: '3.8'
 services:
-  redis-1:
-    image: redis:7-alpine
-    container_name: lcbp3-redis-1
-    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
-    ports:
-      - "6379:6379"
-      - "16379:16379"
-    volumes:
-      - redis-1-data:/data
-    networks:
-      - lcbp3-network
-  redis-2:
-    image: redis:7-alpine
-    container_name: lcbp3-redis-2
-    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
-    ports:
-      - "6380:6379"
-      - "16380:16379"
-    volumes:
-      - redis-2-data:/data
-    networks:
-      - lcbp3-network
-    restart: unless-stopped
-  redis-3:
-    image: redis:7-alpine
-    container_name: lcbp3-redis-3
-    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
-    ports:
-      - "6381:6379"
-      - "16381:16379"
-    volumes:
-      - redis-3-data:/data
-    networks:
-      - lcbp3-network
-    restart: unless-stopped
-volumes:
-  redis-1-data:
-  redis-2-data:
-  redis-3-data:
+  redis:
+    image: 'redis:7.2-alpine'
+    container_name: lcbp3-redis
+    restart: unless-stopped
+    # AOF: Enabled for durability
+    # Maxmemory: Prevent OOM
+    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD} --maxmemory 1gb --maxmemory-policy noeviction
+    volumes:
+      - ./redis/data:/data
+    ports:
+      - '6379:6379'
+    networks:
+      - lcbp3
+    deploy:
+      resources:
+        limits:
+          cpus: '2.0'
+          memory: 1.5G
 networks:
   lcbp3-network:
     external: true
```
#### Initialize Cluster
```bash
# Start Redis nodes
docker-compose -f docker-compose-redis.yml up -d
# Wait for nodes to start
sleep 10
# Create cluster
docker exec -it lcbp3-redis-1 redis-cli --cluster create \
172.20.0.2:6379 \
172.20.0.3:6379 \
172.20.0.4:6379 \
--cluster-replicas 0
# Verify cluster
docker exec -it lcbp3-redis-1 redis-cli cluster info
docker exec -it lcbp3-redis-1 redis-cli cluster nodes
```
#### Health Check Script
```bash
#!/bin/bash
# scripts/check-redis-cluster.sh
echo "🔍 Checking Redis Cluster Health..."
for port in 6379 6380 6381; do
echo -e "\n📍 Node on port $port:"
# Check if node is up
docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 ping
# Check cluster status
docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 cluster info | grep cluster_state
# Check memory usage
docker exec lcbp3-redis-$(($port - 6378)) redis-cli -p 6379 info memory | grep used_memory_human
done
echo -e "\n✅ Cluster check complete"
```
---
## 2. Database Configuration


@@ -7,24 +7,30 @@
## Set permissions
```bash
-chown -R 999:999 /share/nap-dms/mariadb/init
-chmod 755 /share/nap-dms/mariadb/init
-setfacl -R -m u:999:r-x /share/nap-dms/mariadb/init
-setfacl -R -d -m u:999:r-x /share/nap-dms/mariadb/init
-chown -R 33:33 /share/Container/pma/tmp
-chmod 755 /share/Container/pma/tmp
-setfacl -R -m u:33:rwx /share/Container/pma/tmp
-setfacl -R -d -m u:33:rwx /share/Container/pma/tmp
+chown -R 999:999 /share/np-dms/mariadb
+chmod -R 755 /share/np-dms/mariadb
+setfacl -R -m u:999:rwx /share/np-dms/mariadb
+setfacl -R -d -m u:999:rwx /share/np-dms/mariadb
+chown -R 999:999 /share/np-dms/mariadb/init
+chmod 755 /share/np-dms/mariadb/init
+setfacl -R -m u:999:r-x /share/np-dms/mariadb/init
+setfacl -R -d -m u:999:r-x /share/np-dms/mariadb/init
+chown -R 33:33 /share/np-dms/pma/tmp
+chmod 755 /share/np-dms/pma/tmp
+setfacl -R -m u:33:rwx /share/np-dms/pma/tmp
+setfacl -R -d -m u:33:rwx /share/np-dms/pma/tmp
+chown -R 33:33 /share/dms-data/logs/pma
+chmod 755 /share/dms-data/logs/pma
+setfacl -R -m u:33:rwx /share/dms-data/logs/pma
+setfacl -R -d -m u:33:rwx /share/dms-data/logs/pma
-setfacl -R -m u:1000:rwx /share/Container/gitea
-setfacl -R -m u:1000:rwx /share/dms-data/gitea_repos
-setfacl -R -m u:1000:rwx /share/dms-data/gitea_registry
+setfacl -R -m u:1000:rwx /share/nap-dms/gitea
+setfacl -R -m u:1000:rwx /share/nap-dms/gitea/gitea_repos
+setfacl -R -m u:1000:rwx /share/nap-dms/gitea/gitea_registry
```
## Add a database & user for Nginx Proxy Manager (NPM)
@@ -50,8 +56,9 @@ docker exec -it mariadb mysql -u root -p
## Docker file
```yml
-# File: share/Container/mariadb/docker-compose.yml
-# DMS Container v1_4_1 : separate service and folder, Application name: lcbp3-db, Service: mariadb, pma
+# File: share/nap-dms/mariadb/docker-compose.yml
+# DMS Container v1_7_0 : move folder to share/nap-dms/
+# Application name: lcbp3-db, Service: mariadb, pma
x-restart: &restart_policy
restart: unless-stopped
@@ -85,19 +92,19 @@ services:
TZ: "Asia/Bangkok"
ports:
- "3306:3306"
+    networks:
+      - lcbp3
     volumes:
-      - "/share/nap-dms/mariadb/data:/var/lib/mysql"
-      - "/share/nap-dms/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
-      - "/share/nap-dms/mariadb/init:/docker-entrypoint-initdb.d:ro"
+      - "/share/np-dms/mariadb/data:/var/lib/mysql"
+      - "/share/np-dms/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
+      - "/share/np-dms/mariadb/init:/docker-entrypoint-initdb.d:ro"
+      - "/share/dms-data/mariadb/backup:/backup"
     healthcheck:
-      test:
-        ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -pCenter#2025 || exit 1"]
+      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
       interval: 10s
       timeout: 5s
-      retries: 15
-    networks:
-      lcbp3: {}
+      retries: 3
+      start_period: 30s
pma:
<<: [*restart_policy, *default_logging]
@@ -119,20 +126,46 @@ services:
MEMORY_LIMIT: "512M"
ports:
- "89:80"
+    networks:
+      - lcbp3
     # expose:
     #   - "80"
     volumes:
-      - "/share/Container/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
-      - "/share/Container/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
-      - "/share/Container/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
+      - "/share/np-dms/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
+      - "/share/np-dms/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
+      - "/share/np-dms/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
+      - "/share/dms-data/logs/pma:/var/log/apache2"
     depends_on:
       mariadb:
         condition: service_healthy
-    networks:
-      lcbp3: {}
 networks:
   lcbp3:
     external: true
# chown -R 999:999 /share/np-dms/mariadb/init
# chmod 755 /share/np-dms/mariadb/init
# setfacl -R -m u:999:r-x /share/np-dms/mariadb/init
# setfacl -R -d -m u:999:r-x /share/np-dms/mariadb/init
# chown -R 33:33 /share/np-dms/pma/tmp
# chmod 755 /share/np-dms/pma/tmp
# setfacl -R -m u:33:rwx /share/np-dms/pma/tmp
# setfacl -R -d -m u:33:rwx /share/np-dms/pma/tmp
# chown -R 33:33 /share/dms-data/logs/pma
# chmod 755 /share/dms-data/logs/pma
# setfacl -R -m u:33:rwx /share/dms-data/logs/pma
# setfacl -R -d -m u:33:rwx /share/dms-data/logs/pma
# setfacl -R -m u:1000:rwx /share/Container/gitea
# setfacl -R -m u:1000:rwx /share/dms-data/gitea_repos
# setfacl -R -m u:1000:rwx /share/dms-data/gitea_registry
# docker exec -it mariadb mysql -u root -p
# CREATE DATABASE npm;
# CREATE USER 'npm'@'%' IDENTIFIED BY 'npm';
# GRANT ALL PRIVILEGES ON npm.* TO 'npm'@'%';
# FLUSH PRIVILEGES;
```


@@ -33,8 +33,9 @@ setfacl -R -m u:0:rwx /share/Container/npm
## Docker file
```yml
-# File: share/Container/npm/docker-compose-npm.yml
-# DMS Container v1_4_1 : separate service and folder, Application name: lcbp3-npm, Service: npm
+# File: share/np-dms/npm/docker-compose-npm.yml
+# DMS Container v1_7_0 : move folder to share/np-dms/
+# Application name: lcbp3-npm, Service: npm
x-restart: &restart_policy
restart: unless-stopped
@@ -73,17 +74,17 @@ services:
- lcbp3
- giteanet
volumes:
-      - "/share/Container/npm/data:/data"
+      - "/share/np-dms/npm/data:/data"
       - "/share/dms-data/logs/npm:/data/logs" # <-- added logging volume
-      - "/share/Container/npm/letsencrypt:/etc/letsencrypt"
-      - "/share/Container/npm/custom:/data/nginx/custom" # <-- important for http_top.conf
+      - "/share/np-dms/npm/letsencrypt:/etc/letsencrypt"
+      - "/share/np-dms/npm/custom:/data/nginx/custom" # <-- important for http_top.conf
# - "/share/Container/lcbp3/npm/landing:/data/landing:ro"
landing:
image: nginx:1.27-alpine
container_name: landing
restart: unless-stopped
volumes:
-      - "/share/Container/npm/landing:/usr/share/nginx/html:ro"
+      - "/share/np-dms/npm/landing:/usr/share/nginx/html:ro"
networks:
- lcbp3
networks:
@@ -92,7 +93,7 @@ networks:
giteanet:
external: true
name: gitnet


@@ -0,0 +1,499 @@
# 08-Infrastructure
Infrastructure setup guide for **NAP-DMS LCBP3** (Laem Chabang Port Phase 3 - Document Management System)
> 📍 **Platform:** QNAP (Container Station) + ASUSTOR (Portainer)
> 🌐 **Domain:** `*.np-dms.work` (IP: 159.192.126.103)
> 🔒 **Network:** `lcbp3` (Docker External Network)
> 📄 **Version:** v1.8.0 (aligned with 01-02-architecture.md)
---
## 🏢 Hardware Infrastructure
### Server Role Separation
#### QNAP TS-473A
| (Application & Database Server) | | |
| :------------------------------ | :---------------- | :-------------------- |
| ✔ Application Runtime | ✔ API / Web | ✔ Database (Primary) |
| ✔ High CPU / RAM usage | ✔ Worker / Queue | ✖ No long-term backup |
| Container Station (UI) | 32GB RAM (Capped) | AMD Ryzen V1500B |
#### ASUSTOR AS5403T
| (Infrastructure & Backup Server) | | |
| :------------------------------- | :---------------- | :------------------- |
| ✔ File Storage | ✔ Backup Target | ✔ Docker Infra |
| ✔ Monitoring / Registry | ✔ Log Aggregation | ✖ No heavy App logic |
| Portainer (Manage All) | 16GB RAM | Intel Celeron @2GHz |
### Servers Specification
| Device | Model | CPU | RAM | Resource Policy | Role |
| :---------- | :------ | :---------------------- | :--- | :------------------ | :--------------------- |
| **QNAP** | TS-473A | AMD Ryzen V1500B | 32GB | **Strict Limits** | Application, DB, Cache |
| **ASUSTOR** | AS5403T | Intel Celeron @ 2.00GHz | 16GB | **Moderate Limits** | Infra, Backup, Monitor |
### Service Distribution by Server
#### QNAP TS-473A (Application Stack)
| Category | Service | Strategy | Resource Limit (Est.) |
| :-------------- | :------------------------ | :------------------------------ | :-------------------- |
| **Web App** | Next.js (Frontend) | Single Instance | 2.0 CPU / 2GB RAM |
| **Backend API** | NestJS | **2 Replicas** (Load Balanced) | 2.0 CPU / 1.5GB RAM |
| **Database** | MariaDB (Primary) | Performance Tuned (Buffer Pool) | 4.0 CPU / 5GB RAM |
| **Worker** | Redis + BullMQ Worker | **Standalone + AOF** | 2.0 CPU / 1.5GB RAM |
| **Search** | Elasticsearch | **Heap Locked (2GB)** | 2.0 CPU / 4GB RAM |
| **API Gateway** | NPM (Nginx Proxy Manager) | SSL Termination | 1.0 CPU / 512MB RAM |
| **Workflow** | n8n | Automation | 1.0 CPU / 1GB RAM |
| **Code** | Gitea | Git Repository | 1.0 CPU / 1GB RAM |
#### ASUSTOR AS5403T (Infrastructure Stack)
| Category | Service | Notes |
| :--------------- | :------------------ | :------------------------------ |
| **File Storage** | NFS / SMB | Shared volumes for backup |
| **Backup** | Restic / Borg | Pull-based Backup (Safer) |
| **Docker Infra** | Registry, Portainer | Container image registry, mgmt |
| **Monitoring** | Uptime Kuma | Service availability monitoring |
| **Metrics** | Prometheus, Grafana | Cross-Server Scraping |
| **Log** | Loki / Syslog | Centralized logging |
---
## 🔄 Data Flow Architecture
```
┌──────────────┐
│ User │
└──────┬───────┘
│ HTTPS (443)
┌──────────────────────────────────────────────────────────────┐
│ QNAP TS-473A │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Nginx Proxy Manager (NPM) │ │
│ │ SSL Termination + Round Robin LB │ │
│ └───────────────────────┬─────────────────────────────────┘ │
│ │ │
│ ┌───────────────────────▼─────────────────────────────────┐ │
│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │
│ │ │ Next.js │───▶│ NestJS │ │ NestJS │ │ │
│ │ │ (Frontend) │ │ (Replica 1)│ │ (Replica 2)│ │ │
│ │ └──────────────┘ └──────┬───────┘ └──────┬──────┘ │ │
│ │ │ │ │ │
│ │ ┌────────────────────────┼─────────────────┼ │ │
│ │ ▼ ▼ ▼ │ │
│ │ ┌──────────┐ ┌────────────┐ ┌─────────────┐ │ │
│ │ │ MariaDB │ │ Redis │ │Elasticsearch│ │ │
│ │ │(Primary) | │ (Persist.) │ │ (Search) │ │ │
│ │ └────┬─────┘ └────────────┘ └─────────────┘ │ │
│ └───────┼─────────────────────────────────────────────────┘ │
│ │ │
└──────────┼───────────────────────────────────────────────────┘
│ Local Dump -> Restic Pull (Cross-Server)
┌──────────────────────────────────────────────────────────────┐
│ ASUSTOR AS5403T │
│ ┌──────────────────────────────────────────────────────────┐│
│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ││
│ │ │ Backup │ │ Registry │ │ Uptime │ ││
│ │ │ (Restic) │ │ (Docker) │ │ Kuma │ ││
│ │ └──────────┘ └──────────┘ └──────────┘ ││
│ │ ││
│ │ ┌──────────┐ ┌────────────┐ ┌──────────┐ ││
│ │ │Prometheus│ ──▶│ Grafana │ │ Loki │ ││
│ │ │(Metrics) │ │(Dashboard) │ │ (Logs) │ ││
│ │ └──────────┘ └────────────┘ └──────────┘ ││
│ │ ││
│ │ ┌───────────────────────────────────────────┐ ││
│ │ │ NFS / SMB Shared Storage │ ││
│ │ │ (Backup Volume) │ ││
│ │ └───────────────────────────────────────────┘ ││
│ └──────────────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────────────┘
```
---
## 🖥️ Docker Management Architecture
```
┌─────────────────────────────────────────────────────────────────────────┐
│ Portainer (ASUSTOR) │
│ https://portainer.np-dms.work │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────┐ ┌─────────────────────────────┐ │
│ │ Manage Infra Stack │ │ Remote Docker Endpoint │ │
│ │ (Local - ASUSTOR) │ │ (QNAP App Stack) │ │
│ ├─────────────────────────────┤ ├─────────────────────────────┤ │
│ │ • Registry │ │ • Next.js (Frontend) │ │
│ │ • Prometheus │ │ • NestJS (Backend) │ │
│ │ • Grafana │ │ • MariaDB │ │
│ │ • Uptime Kuma │ │ • Redis │ │
│ │ • Loki │ │ • Elasticsearch │ │
│ │ • Backup (Restic) │ │ • NPM │ │
│ │ • ClamAV │ │ • Gitea │ │
│ │ • node-exporter │ │ • n8n │ │
│ │ • cAdvisor │ │ • phpMyAdmin │ │
│ └─────────────────────────────┘ └─────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Container Station (QNAP): used for local UI management only
Portainer (ASUSTOR): used as centralized management for both servers
```
---
## 🔐 Security Zones
```
┌─────────────────────────────────────────────────────────────────────────┐
│ SECURITY ZONES │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🌐 PUBLIC ZONE │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Nginx Proxy Manager (NPM) │ │
│ │ • HTTPS (Port 443 only) │ │
│ │ • SSL/TLS Termination │ │
│ │ • Rate Limiting │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 📱 APPLICATION ZONE (QNAP - VLAN 10) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Next.js (Frontend) │ │
│ │ • NestJS (Backend API) │ │
│ │ • n8n Workflow │ │
│ │ • Gitea │ │
│ │ • Internal API communication only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 💾 DATA ZONE (QNAP - Internal Only) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • MariaDB (Primary Database) │ │
│ │ • Redis (Cache/Queue) │ │
│ │ • Elasticsearch (Search) │ │
│ │ • No public access - Backend only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🛠️ INFRASTRUCTURE ZONE (ASUSTOR - VLAN 10) │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ • Backup (Restic/Borg) │ │
│ │ • Docker Registry │ │
│ │ • Prometheus + Grafana │ │
│ │ • Uptime Kuma │ │
│ │ • Loki (Logs) │ │
│ │ • NFS/SMB Storage │ │
│ │ • Access via MGMT VLAN only │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## 🌐 Network Architecture (VLAN)
### VLAN Networks
| VLAN ID | Name | Gateway/Subnet | DHCP Range | Purpose |
| :------ | :----- | :-------------- | :----------------- | :-------------------- |
| 10 | SERVER | 192.168.10.1/24 | Static | Servers (NAS, Docker) |
| 20 | MGMT | 192.168.20.1/24 | Static | Network Management |
| 30 | USER | 192.168.30.1/24 | .10-.254 (7 days) | Staff Devices |
| 40 | CCTV | 192.168.40.1/24 | .100-.150 (7 days) | Surveillance |
| 50 | VOICE | 192.168.50.1/24 | .201-.250 (7 days) | IP Phones |
| 60 | DMZ | 192.168.60.1/24 | Static | Public Services |
| 70 | GUEST | 192.168.70.1/24 | .200-.250 (1 day) | Guest WiFi |
### Static IP Allocation (Key Devices)
| VLAN | Device | IP Address | Role |
| :--------- | :------ | :------------- | :------------------ |
| SERVER(10) | QNAP | 192.168.10.8 | App/DB Server |
| SERVER(10) | ASUSTOR | 192.168.10.9 | Infra/Backup Server |
| MGMT(20) | ER7206 | 192.168.20.1 | Gateway/Router |
| MGMT(20) | SG2428P | 192.168.20.2 | Core Switch |
| MGMT(20) | AMPCOM | 192.168.20.3 | Server Switch |
| MGMT(20) | OC200 | 192.168.20.250 | Omada Controller |
| USER(30) | Printer | 192.168.30.222 | Kyocera CS3554ci |
| CCTV(40) | NVR | 192.168.40.100 | HikVision NVR |
### Network Equipment
| Device | Model | Ports | IP Address | Role |
| :------------------ | :----------------- | :---------------------- | :------------- | :--------------- |
| **Router** | TP-LINK ER7206 | 1 SFP + WAN + 4×GbE | 192.168.20.1 | Gateway/Firewall |
| **Core Switch** | TP-LINK SG2428P | 24×GbE PoE+ + 4×SFP | 192.168.20.2 | Core/PoE Switch |
| **Server Switch** | AMPCOM | 8×2.5GbE + 1×10G SFP+ | 192.168.20.3 | Server Uplink |
| **Admin Switch** | TP-LINK ES205G | 5×GbE (Unmanaged) | N/A | Admin PC |
| **CCTV Switch** | TP-LINK TL-SL1226P | 24×PoE+ 100Mbps + 2×SFP | 192.168.20.4 | CCTV PoE |
| **IP Phone Switch** | TP-LINK SG1210P | 8×PoE+ + 1×GbE + 1×SFP | 192.168.20.5 | VoIP |
| **Controller** | TP-LINK OC200 | Omada Controller | 192.168.20.250 | AP Management |
> 📖 Detailed port mappings and ACL rules: see [Securities.md](Securities.md) and [แผนผัง Network.md](แผนผัง%20Network.md)
---
## 🔗 Network Topology
```mermaid
graph TB
subgraph Internet
WAN[("🌐 Internet<br/>WAN")]
end
subgraph Router["ER7206 Router"]
R[("🔲 ER7206<br/>192.168.20.1")]
end
subgraph CoreSwitch["SG2428P Core Switch"]
CS[("🔲 SG2428P<br/>192.168.20.2")]
end
subgraph ServerSwitch["AMPCOM 2.5G Switch"]
SS[("🔲 AMPCOM<br/>192.168.20.3")]
end
subgraph Servers["VLAN 10 - Servers"]
QNAP[("💾 QNAP (App/DB)<br/>192.168.10.8")]
ASUSTOR[("💾 ASUSTOR (Infra)<br/>192.168.10.9")]
end
subgraph AccessPoints["EAP610 x16"]
AP[("📶 WiFi APs")]
end
subgraph OtherSwitches["Distribution"]
CCTV_SW[("🔲 TL-SL1226P<br/>CCTV")]
PHONE_SW[("🔲 SG1210P<br/>IP Phone")]
ADMIN_SW[("🔲 ES205G<br/>Admin")]
end
WAN --> R
R -->|Port 3| CS
CS -->|LAG Port 3-4| SS
SS -->|Port 3-4 LACP| QNAP
SS -->|Port 5-6 LACP| ASUSTOR
SS -->|Port 8| ADMIN_SW
CS -->|Port 5-20| AP
CS -->|SFP 25| CCTV_SW
CS -->|SFP 26| PHONE_SW
CS -->|Port 24| ADMIN_SW
```
---
## 📁 Document Index
| File | Description |
| :--------------------------------------------------- | :--------------------------------------------------------------------------- |
| [Infrastructure Setup.md](Infrastructure%20Setup.md) | Infrastructure setup overview (Redis, MariaDB, Backend, Monitoring, Backup, DR) |
| [แผนผัง Network.md](แผนผัง%20Network.md) | Network Architecture and Container Services diagrams |
| [Securities.md](Securities.md) | VLAN Segmentation, Firewall Rules, ACL (ER7206, SG2428P, EAP) |
---
## 🐳 Docker Compose Files
### Core Services (QNAP)
| File | Application | Services | Path on QNAP |
| :--------------------------------------- | :---------- | :---------------------------------------- | :------------------------ |
| [MariaDB_setting.md](MariaDB_setting.md) | `lcbp3-db` | `mariadb`, `pma` | `/share/np-dms/mariadb/` |
| [NPM_setting.md](NPM_setting.md) | `lcbp3-npm` | `npm`, `landing` | `/share/np-dms/npm/` |
| [Service_setting.md](Service_setting.md) | `services` | `cache` (Redis), `search` (Elasticsearch) | `/share/np-dms/services/` |
| [Gitea_setting.md](Gitea_setting.md) | `git` | `gitea` | `/share/np-dms/gitea/` |
| [n8n_setting.md](n8n_setting.md) | `n8n` | `n8n` | `/share/np-dms/n8n/` |
### Infrastructure Services (ASUSTOR)
| File | Application | Services | Path on ASUSTOR |
| :----------------------------- | :----------------- | :--------------------------------------------------- | :---------------------------- |
| [monitoring.md](monitoring.md) | `lcbp3-monitoring` | `prometheus`, `grafana`, `node-exporter`, `cadvisor` | `/volume1/np-dms/monitoring/` |
| *(NEW)* backup.md | `lcbp3-backup` | `restic`, `borg` | `/volume1/np-dms/backup/` |
| *(NEW)* registry.md | `lcbp3-registry` | `registry` | `/volume1/np-dms/registry/` |
| *(NEW)* uptime-kuma.md | `lcbp3-uptime` | `uptime-kuma` | `/volume1/np-dms/uptime/` |
---
## 🌐 Domain Mapping (NPM Proxy)
### Application Domains (QNAP)
| Domain | Service | Port | Host | Description |
| :-------------------- | :------- | :--- | :--- | :------------------------ |
| `lcbp3.np-dms.work` | frontend | 3000 | QNAP | Frontend Next.js |
| `backend.np-dms.work` | backend | 3000 | QNAP | Backend NestJS API |
| `pma.np-dms.work` | pma | 80 | QNAP | phpMyAdmin |
| `git.np-dms.work` | gitea | 3000 | QNAP | Gitea Git Server |
| `n8n.np-dms.work` | n8n | 5678 | QNAP | n8n Workflow Automation |
| `npm.np-dms.work` | npm | 81 | QNAP | Nginx Proxy Manager Admin |
### Infrastructure Domains (ASUSTOR)
| Domain | Service | Port | Host | Description |
| :----------------------- | :---------- | :--- | :------ | :----------------- |
| `grafana.np-dms.work` | grafana | 3000 | ASUSTOR | Grafana Dashboard |
| `prometheus.np-dms.work` | prometheus | 9090 | ASUSTOR | Prometheus Metrics |
| `uptime.np-dms.work` | uptime-kuma | 3001 | ASUSTOR | Uptime Monitoring |
| `portainer.np-dms.work` | portainer | 9443 | ASUSTOR | Docker Management |
| `registry.np-dms.work` | registry | 5000 | ASUSTOR | Docker Registry |
---
## ⚙️ Core Services Summary
### QNAP Services (Application)
| Service | Technology | Port | Purpose |
| :---------------- | :----------------- | :----- | :------------------------------------------- |
| **Reverse Proxy** | NPM | 80/443 | SSL Termination, Domain Routing |
| **Backend API** | NestJS | 3000 | REST API, Business Logic, Workflow Engine |
| **Frontend** | Next.js | 3000 | Web UI (App Router, React, Tailwind, Shadcn) |
| **Database** | MariaDB 11.8 | 3306 | Primary Relational Database |
| **Cache** | Redis 7.2 | 6379 | Caching, Session, BullMQ |
| **Search** | Elasticsearch 8.11 | 9200 | Full-text Search |
| **Code Hosting** | Gitea | 3000 | Git Repository (Self-hosted) |
| **Workflow** | n8n | 5678 | Automation, Integrations (LINE, Email) |
### ASUSTOR Services (Infrastructure)
| Service | Technology | Port | Purpose |
| :--------------- | :-------------- | :--- | :---------------------------- |
| **Metrics** | Prometheus | 9090 | Metrics Collection |
| **Dashboard** | Grafana | 3000 | Visualization, Alerting |
| **Uptime** | Uptime Kuma | 3001 | Service Availability Monitor |
| **Registry** | Docker Registry | 5000 | Private Container Images |
| **Management** | Portainer | 9443 | Centralized Docker Management |
| **Host Metrics** | node-exporter | 9100 | CPU, Memory, Disk metrics |
| **Container** | cAdvisor | 8080 | Container resource metrics |
| **Backup** | Restic/Borg | N/A | Automated Backups |
---
## 🔧 Quick Reference
### Docker Commands (QNAP - Container Station)
```bash
# List all containers
docker ps -a
# Follow logs
docker logs -f <container_name>
# Open a shell inside a container
docker exec -it <container_name> sh
# Restart a service
docker restart <container_name>
```
### Docker Commands (ASUSTOR - Portainer)
```bash
# Remote Docker endpoint connection
# Configure via Portainer UI: Settings > Environments > Add Environment
# Direct SSH to ASUSTOR
ssh admin@192.168.10.9
# Portainer API (Optional)
curl -X GET https://portainer.np-dms.work/api/endpoints \
-H "X-API-Key: <your-api-key>"
```
### Network
```bash
# Create the external network (first time only) - must be done on both servers
# On QNAP:
docker network create lcbp3
# On ASUSTOR:
docker network create lcbp3
# Inspect networks
docker network ls
docker network inspect lcbp3
```
### MariaDB
```bash
# Open the MySQL CLI (QNAP)
docker exec -it mariadb mysql -u root -p
# Backup database (QNAP -> ASUSTOR)
docker exec mariadb mysqldump -u root -p lcbp3 > backup.sql
# Copy to ASUSTOR via NFS/SCP
```
---
## ⚙️ Environment Variables
Key variables shared by every service:
| Variable | Value | Description |
| :----------------------- | :------------- | :--------------------- |
| `TZ` | `Asia/Bangkok` | Timezone |
| `MYSQL_HOST` / `DB_HOST` | `mariadb` | MariaDB hostname |
| `MYSQL_PORT` / `DB_PORT` | `3306` | MariaDB port |
| `REDIS_HOST` | `cache` | Redis hostname |
| `ELASTICSEARCH_HOST` | `search` | Elasticsearch hostname |
> ⚠️ **Security Note:** Sensitive secrets (passwords, keys) must go in `docker-compose.override.yml` (gitignored) or Docker secrets - never in the main docker-compose.yml
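A minimal sketch of that override pattern (service and variable names follow the tables above; the values are placeholders):

```yml
# docker-compose.override.yml (gitignored) - keeps secrets out of the main file
services:
  mariadb:
    environment:
      MYSQL_ROOT_PASSWORD: "change-me"   # placeholder
  cache:
    command: redis-server --appendonly yes --requirepass "change-me"  # placeholder
```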
---
## 📚 Supplementary Documents
| File | Description |
| :------------------------------- | :------------------------------------------------ |
| [Git_command.md](Git_command.md) | Git + Gitea command cheat sheet |
| [lcbp3-db.md](lcbp3-db.md) | Docker Compose for MariaDB (alternative version) |
---
## 📋 Fresh Installation Checklist
### Phase 1: Network & Infrastructure
1. [ ] Configure VLANs on ER7206 Router
2. [ ] Configure Switch Profiles on SG2428P
3. [ ] Configure Static IPs (QNAP: .8, ASUSTOR: .9)
### Phase 2: ASUSTOR Setup (Infra)
1. [ ] Create Docker Network: `docker network create lcbp3`
2. [ ] Deploy Portainer & Registry
3. [ ] Deploy Monitoring Stack (`prometheus.yml` with QNAP IP target)
4. [ ] Verify Prometheus can reach QNAP services
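Step 3's `prometheus.yml` could look like the following sketch. The static IP comes from the allocation table above; job names are arbitrary, and the exporter ports (9100 for node-exporter, 8080 for cAdvisor) are the usual defaults, assumed here:

```yml
# Hypothetical prometheus.yml for cross-server scraping (ASUSTOR -> QNAP)
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: 'qnap-node'          # node-exporter on QNAP (192.168.10.8)
    static_configs:
      - targets: ['192.168.10.8:9100']
  - job_name: 'qnap-cadvisor'      # cAdvisor on QNAP
    static_configs:
      - targets: ['192.168.10.8:8080']
  - job_name: 'asustor-node'       # local node-exporter on ASUSTOR
    static_configs:
      - targets: ['node-exporter:9100']
```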
### Phase 3: QNAP Setup (App)
1. [ ] Create Docker Network: `docker network create lcbp3`
2. [ ] Create `.env` file with secure passwords
3. [ ] Deploy **MariaDB** (Wait for init)
4. [ ] Deploy **Redis Standalone** (Check AOF is active)
5. [ ] Deploy **Elasticsearch** (Check Heap limit)
6. [ ] Deploy **NPM** & App Services (Backend/Frontend)
7. [ ] Verify Internal Load Balancing (Backend Replicas)
### Phase 4: Backup & Security
1. [ ] Configure Restic on ASUSTOR to pull from QNAP
2. [ ] Set Resource Limits (Check `docker stats`)
3. [ ] Configure Firewall ACL Rules
---
> 📝 **Note:** All documents are based on Architecture Document **v1.8.0** and DMS Container Schema **v1.7.0**


@@ -0,0 +1,106 @@
# 08-Infrastructure
Infrastructure setup guide for **NAP-DMS LCBP3** (Laem Chabang Port Phase 3 - Document Management System)
> 📍 **Platform:** QNAP (Container Station) + ASUSTOR (Portainer)
> 🌐 **Domain:** `*.np-dms.work` (IP: 159.192.126.103)
> 🔒 **Network:** `lcbp3` (Docker External Network)
> 📄 **Version:** v2.0.0 (Refactored for Stability)
---
## 🏢 Hardware Infrastructure
### Server Role Separation
#### QNAP TS-473A
| (Application & Database Server) | | |
| :--------------------- | :---------------- | :-------------------- |
| ✔ Application Runtime | ✔ API / Web | ✔ Database (Primary) |
| ✔ High CPU / RAM usage | ✔ Worker / Queue | ✖ No long-term backup |
| Container Station (UI) | 32GB RAM (Capped) | AMD Ryzen V1500B |
#### ASUSTOR AS5403T
| (Infrastructure & Backup Server) | | |
| :--------------------- | :---------------- | :------------------- |
| ✔ File Storage | ✔ Backup Target | ✔ Docker Infra |
| ✔ Monitoring / Registry | ✔ Log Aggregation | ✖ No heavy App logic |
| Portainer (Manage All) | 16GB RAM | Intel Celeron @2GHz |
### Servers Specification & Resource Allocation
| Device | Model | CPU | RAM | Resource Policy | Role |
| :---------- | :------ | :---------------------- | :--- | :------------------ | :--------------------- |
| **QNAP** | TS-473A | AMD Ryzen V1500B | 32GB | **Strict Limits** | Application, DB, Cache |
| **ASUSTOR** | AS5403T | Intel Celeron @ 2.00GHz | 16GB | **Moderate Limits** | Infra, Backup, Monitor |
### Service Distribution by Server
#### QNAP TS-473A (Application Stack)
| Category | Service | Strategy | Resource Limit (Est.) |
| :-------------- | :------------------------ | :------------------------------ | :-------------------- |
| **Web App** | Next.js (Frontend) | Single Instance | 2.0 CPU / 2GB RAM |
| **Backend API** | NestJS | **2 Replicas** (Load Balanced) | 2.0 CPU / 1.5GB RAM |
| **Database** | MariaDB (Primary) | Performance Tuned (Buffer Pool) | 4.0 CPU / 5GB RAM |
| **Worker** | Redis + BullMQ Worker | **Standalone + AOF** | 2.0 CPU / 1.5GB RAM |
| **Search** | Elasticsearch | **Heap Locked (2GB)** | 2.0 CPU / 4GB RAM |
| **API Gateway** | NPM (Nginx Proxy Manager) | SSL Termination | 1.0 CPU / 512MB RAM |
| **Workflow** | n8n | Automation | 1.0 CPU / 1GB RAM |
| **Code** | Gitea | Git Repository | 1.0 CPU / 1GB RAM |
#### ASUSTOR AS5403T (Infrastructure Stack)
| Category | Service | Notes |
| :--------------- | :------------------ | :------------------------------ |
| **File Storage** | NFS / SMB | Shared volumes for backup |
| **Backup**       | Restic / Borg       | Pull-based backup (safer)       |
| **Docker Infra** | Registry, Portainer | Container image registry, mgmt |
| **Monitoring** | Uptime Kuma | Service availability monitoring |
| **Metrics** | Prometheus, Grafana | Cross-Server Scraping |
| **Log** | Loki / Syslog | Centralized logging |
---
## 🔄 Data Flow Architecture
```
┌──────────────┐
│     User     │
└──────┬───────┘
       │ HTTPS (443)
┌─────────────────────────────────────────────────────────────┐
│                        QNAP TS-473A                         │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │                Nginx Proxy Manager (NPM)                │ │
│ │            SSL Termination + Round Robin LB             │ │
│ └────────────────────────────┬────────────────────────────┘ │
│                              │                              │
│ ┌────────────────────────────▼────────────────────────────┐ │
│ │ ┌────────────┐   ┌─────────────┐   ┌─────────────┐      │ │
│ │ │  Next.js   │──▶│   NestJS    │   │   NestJS    │      │ │
│ │ │ (Frontend) │   │ (Replica 1) │   │ (Replica 2) │      │ │
│ │ └────────────┘   └──────┬──────┘   └──────┬──────┘      │ │
│ │                         │                 │             │ │
│ │ ┌───────────────────────┼─────────────────┼────┐        │ │
│ │ ▼                       ▼                 ▼    ▼        │ │
│ │ ┌──────────┐  ┌──────────┐  ┌─────────────┐             │ │
│ │ │ MariaDB  │  │  Redis   │  │Elasticsearch│             │ │
│ │ │ (Primary)│  │(Persist.)│  │  (Search)   │             │ │
│ │ └────┬─────┘  └──────────┘  └─────────────┘             │ │
│ └──────┼──────────────────────────────────────────────────┘ │
└────────┼────────────────────────────────────────────────────┘
         │ Local Dump -> Restic Pull (Cross-Server)
┌────────▼────────────────────────────────────────────────────┐
│                      ASUSTOR AS5403T                        │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ ┌──────────┐   ┌──────────┐   ┌──────────┐              │ │
│ │ │  Backup  │   │ Registry │   │  Uptime  │              │ │
│ │ │ (Restic) │   │ (Docker) │   │   Kuma   │              │ │
│ │ └──────────┘   └──────────┘   └──────────┘              │ │
│ │                                                         │ │
│ │ ┌──────────┐    ┌───────────┐   ┌──────────┐            │ │
│ │ │Prometheus│───▶│  Grafana  │   │   Loki   │            │ │
│ │ │(Scraper) │    │(Dashboard)│   │  (Logs)  │            │ │
│ │ └──────────┘    └───────────┘   └──────────┘            │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
---
```bash
# Create directories
mkdir -p /share/np-dms/services/cache/data
mkdir -p /share/np-dms/services/search/data
# Set ownership to match the user IDs inside the containers
# Redis (UID 999)
chown -R 999:999 /share/np-dms/services/cache/data
chmod -R 750 /share/np-dms/services/cache/data
# Elasticsearch (UID 1000)
chown -R 1000:1000 /share/np-dms/services/search/data
chmod -R 750 /share/np-dms/services/search/data
```
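A quick way to confirm that the ownership above actually took effect (run on the QNAP host; paths as used in this guide):

```shell
# Print owner UID:GID for each data directory
# Expected: 999:999 for the Redis path, 1000:1000 for Elasticsearch
for dir in /share/np-dms/services/cache/data /share/np-dms/services/search/data; do
  printf '%s -> %s\n' "$dir" "$(stat -c '%u:%g' "$dir")"
done
```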
## Docker file
```yml
# File: /share/np-dms/services/docker-compose.yml (or your combined compose file)
# DMS Container v1_7_0: Application name: services
# Services: 'cache' (Redis) and 'search' (Elasticsearch)
x-restart: &restart_policy
restart: unless-stopped
# ...
networks:
- lcbp3 # internal network only
volumes:
- "/share/np-dms/services/cache/data:/data" # data volume (if persistence is needed)
healthcheck:
test: ["CMD", "redis-cli", "ping"] # ตรวจสอบว่า service พร้อมใช้งาน
interval: 10s
# ...
networks:
- lcbp3 # internal network (NPM proxies port 9200 from outside)
volumes:
- "/share/np-dms/services/search/data:/usr/share/elasticsearch/data" # data/indices volume
healthcheck:
# wait until cluster health is yellow or green
test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -q '\"status\":\"green\"\\|\"status\":\"yellow\"'"]

```
---
# Installing the Monitoring Stack on ASUSTOR
## 📝 Overview and Considerations
> ⚠️ **Note**: The entire monitoring stack is deployed on **ASUSTOR AS5403T**, not QNAP,
> to keep the application workload separate from the infrastructure/monitoring workload.
The monitoring stack consists of:
| Service | Port | Purpose | Host |
| :---------------- | :--- | :-------------------------------- | :------ |
| **Prometheus**    | 9090 | Metrics and time-series storage      | ASUSTOR |
| **Grafana**       | 3000 | Dashboards for metrics visualization | ASUSTOR |
| **Node Exporter** | 9100 | Host system metrics                  | Both    |
| **cAdvisor**      | 8080 | Docker container metrics             | Both    |
| **Uptime Kuma** | 3001 | Service Availability Monitoring | ASUSTOR |
| **Loki** | 3100 | Log aggregation | ASUSTOR |
---
## 🏗️ Architecture Overview
```
┌─────────────────────────────────────────────────────────────────────────┐
│ ASUSTOR AS5403T (Monitoring Hub) │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Prometheus │───▶│ Grafana │ │ Uptime Kuma │ │
│ │ :9090 │ │ :3000 │ │ :3001 │ │
│ └──────┬──────┘ └─────────────┘ └─────────────┘ │
│ │ │
│ │ Scrape Metrics │
│ ▼ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │node-exporter│ │ cAdvisor │ │
│ │ :9100 │ │ :8080 │ │
│ │ (Local) │ │ (Local) │ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│ Remote Scrape
┌─────────────────────────────────────────────────────────────────────────┐
│ QNAP TS-473A (App Server) │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │node-exporter│ │ cAdvisor │ │ Backend │ │
│ │ :9100 │ │ :8080 │ │ /metrics │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Set Permissions (on ASUSTOR)
```bash
# SSH into ASUSTOR
ssh admin@192.168.10.9
# Create directories
mkdir -p /volume1/np-dms/monitoring/prometheus/data
mkdir -p /volume1/np-dms/monitoring/prometheus/config
mkdir -p /volume1/np-dms/monitoring/grafana/data
mkdir -p /volume1/np-dms/monitoring/uptime-kuma/data
mkdir -p /volume1/np-dms/monitoring/loki/data
# Set ownership to match the user IDs inside the containers
# Prometheus (UID 65534 - nobody)
chown -R 65534:65534 /volume1/np-dms/monitoring/prometheus
chmod -R 750 /volume1/np-dms/monitoring/prometheus
# Grafana (UID 472)
chown -R 472:472 /volume1/np-dms/monitoring/grafana/data
chmod -R 750 /volume1/np-dms/monitoring/grafana/data
# Uptime Kuma (UID 1000)
chown -R 1000:1000 /volume1/np-dms/monitoring/uptime-kuma/data
chmod -R 750 /volume1/np-dms/monitoring/uptime-kuma/data
# Loki (UID 10001)
chown -R 10001:10001 /volume1/np-dms/monitoring/loki/data
chmod -R 750 /volume1/np-dms/monitoring/loki/data
```
---
## Note: NPM Proxy Configuration (if running NPM on ASUSTOR)
| Domain Names | Forward Hostname | IP Forward Port | Cache Assets | Block Common Exploits | Websockets | Force SSL | HTTP/2 |
| :--------------------- | :--------------- | :-------------- | :----------- | :-------------------- | :--------- | :-------- | :----- |
| grafana.np-dms.work | grafana | 3000 | [ ] | [x] | [x] | [x] | [x] |
| prometheus.np-dms.work | prometheus | 9090 | [ ] | [x] | [ ] | [x] | [x] |
| uptime.np-dms.work | uptime-kuma | 3001 | [ ] | [x] | [x] | [x] | [x] |
> **Note**: If you run a single NPM instance on QNAP, forward these hosts to the ASUSTOR IP (192.168.10.9)
---
## Docker Compose File (ASUSTOR)
```yaml
# File: /volume1/np-dms/monitoring/docker-compose.yml
# DMS Container v1.8.0: Application name: lcbp3-monitoring
# Deploy on: ASUSTOR AS5403T
# Services: prometheus, grafana, node-exporter, cadvisor, uptime-kuma, loki
x-restart: &restart_policy
restart: unless-stopped
x-logging: &default_logging
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "5"
networks:
lcbp3:
external: true
services:
# ----------------------------------------------------------------
# 1. Prometheus (Metrics Collection & Storage)
# ----------------------------------------------------------------
prometheus:
<<: [*restart_policy, *default_logging]
image: prom/prometheus:v2.48.0
container_name: prometheus
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0"
memory: 1G
reservations:
cpus: "0.25"
memory: 256M
environment:
TZ: "Asia/Bangkok"
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/prometheus/config:/etc/prometheus:ro"
- "/volume1/np-dms/monitoring/prometheus/data:/prometheus"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:9090/-/healthy"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 2. Grafana (Dashboard & Visualization)
# ----------------------------------------------------------------
grafana:
<<: [*restart_policy, *default_logging]
image: grafana/grafana:10.2.2
container_name: grafana
stdin_open: true
tty: true
deploy:
resources:
limits:
cpus: "1.0"
memory: 512M
reservations:
cpus: "0.25"
memory: 128M
environment:
TZ: "Asia/Bangkok"
GF_SECURITY_ADMIN_USER: admin
GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-Center#2025}
GF_SERVER_ROOT_URL: "https://grafana.np-dms.work"
GF_INSTALL_PLUGINS: grafana-clock-panel,grafana-piechart-panel
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/grafana/data:/var/lib/grafana"
depends_on:
- prometheus
healthcheck:
test: ["CMD-SHELL", "wget --spider -q http://localhost:3000/api/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 3. Uptime Kuma (Service Availability Monitoring)
# ----------------------------------------------------------------
uptime-kuma:
<<: [*restart_policy, *default_logging]
image: louislam/uptime-kuma:1
container_name: uptime-kuma
deploy:
resources:
limits:
cpus: "0.5"
memory: 256M
environment:
TZ: "Asia/Bangkok"
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/uptime-kuma/data:/app/data"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3001"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 4. Node Exporter (Host Metrics - ASUSTOR)
# ----------------------------------------------------------------
node-exporter:
<<: [*restart_policy, *default_logging]
image: prom/node-exporter:v1.7.0
container_name: node-exporter
deploy:
resources:
limits:
cpus: "0.5"
memory: 128M
environment:
TZ: "Asia/Bangkok"
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
networks:
- lcbp3
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:9100/metrics"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 5. cAdvisor (Container Metrics - ASUSTOR)
# ----------------------------------------------------------------
cadvisor:
<<: [*restart_policy, *default_logging]
image: gcr.io/cadvisor/cadvisor:v0.47.2
container_name: cadvisor
deploy:
resources:
limits:
cpus: "0.5"
memory: 256M
environment:
TZ: "Asia/Bangkok"
networks:
- lcbp3
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:8080/healthz"]
interval: 30s
timeout: 10s
retries: 3
# ----------------------------------------------------------------
# 6. Loki (Log Aggregation)
# ----------------------------------------------------------------
loki:
<<: [*restart_policy, *default_logging]
image: grafana/loki:2.9.0
container_name: loki
deploy:
resources:
limits:
cpus: "0.5"
memory: 512M
environment:
TZ: "Asia/Bangkok"
command: -config.file=/etc/loki/local-config.yaml
networks:
- lcbp3
volumes:
- "/volume1/np-dms/monitoring/loki/data:/loki"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "http://localhost:3100/ready"]
interval: 30s
timeout: 10s
retries: 3
```
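To bring the stack up, the external `lcbp3` network must already exist on ASUSTOR. A minimal bring-up and verification sketch (the `curlimages/curl` image is an assumption; any image with curl works, since these services publish no host ports and are only reachable on the `lcbp3` network):

```shell
# One-time: create the shared external network if it is missing
docker network inspect lcbp3 >/dev/null 2>&1 || docker network create lcbp3

# Start the stack and watch container health states
cd /volume1/np-dms/monitoring
docker compose up -d
docker compose ps    # each service should eventually report "healthy"

# Query a service from inside the lcbp3 network
docker run --rm --network lcbp3 curlimages/curl -fsS http://prometheus:9090/-/healthy
```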
---
## QNAP Node Exporter & cAdvisor
Install node-exporter and cAdvisor on QNAP so that Prometheus on ASUSTOR can scrape their metrics:
```yaml
# File: /share/np-dms/monitoring/docker-compose.yml (QNAP)
# Exporters only; metrics are scraped by Prometheus on ASUSTOR
version: '3.8'
networks:
lcbp3:
external: true
services:
node-exporter:
image: prom/node-exporter:v1.7.0
container_name: node-exporter
restart: unless-stopped
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
ports:
- "9100:9100" # publish on the host so Prometheus on ASUSTOR can scrape across servers
networks:
- lcbp3
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.47.2
container_name: cadvisor
restart: unless-stopped
ports:
- "8080:8080" # publish on the host so Prometheus on ASUSTOR can scrape across servers
networks:
- lcbp3
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
```
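Before pointing Prometheus at QNAP, it is worth confirming the exporters are reachable across hosts (this assumes ports 9100/8080 are published on the QNAP host; if the containers are attached only to the internal `lcbp3` network, the targets will not be reachable from ASUSTOR):

```shell
# Run from ASUSTOR: both endpoints should return Prometheus text-format metrics
curl -fsS http://192.168.10.8:9100/metrics | head -n 5   # node-exporter
curl -fsS http://192.168.10.8:8080/metrics | head -n 5   # cAdvisor
```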
---
## Prometheus Configuration
Create the file `/volume1/np-dms/monitoring/prometheus/config/prometheus.yml` on ASUSTOR:
```yaml
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
# Prometheus self-monitoring (ASUSTOR)
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
# ============================================
# ASUSTOR Metrics (Local)
# ============================================
# Host metrics from Node Exporter (ASUSTOR)
- job_name: 'asustor-node'
static_configs:
- targets: ['node-exporter:9100']
labels:
host: 'asustor'
# Container metrics from cAdvisor (ASUSTOR)
- job_name: 'asustor-cadvisor'
static_configs:
- targets: ['cadvisor:8080']
labels:
host: 'asustor'
# ============================================
# QNAP Metrics (Remote - 192.168.10.8)
# ============================================
# Host metrics from Node Exporter (QNAP)
- job_name: 'qnap-node'
static_configs:
- targets: ['192.168.10.8:9100']
labels:
host: 'qnap'
# Container metrics from cAdvisor (QNAP)
- job_name: 'qnap-cadvisor'
static_configs:
- targets: ['192.168.10.8:8080']
labels:
host: 'qnap'
# Backend NestJS application (QNAP)
- job_name: 'backend'
static_configs:
- targets: ['192.168.10.8:3000']
labels:
host: 'qnap'
metrics_path: '/metrics'
# MariaDB Exporter (optional - QNAP)
# - job_name: 'mariadb'
# static_configs:
# - targets: ['192.168.10.8:9104']
# labels:
# host: 'qnap'
```
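Because Prometheus runs with `--web.enable-lifecycle`, edits to this file can be validated and applied without restarting the container (a sketch, using the `prometheus` container name from the compose file above):

```shell
# Validate the mounted config, then trigger a hot reload (HTTP POST)
docker exec prometheus promtool check config /etc/prometheus/prometheus.yml
docker exec prometheus wget -q --post-data='' -O- http://localhost:9090/-/reload
```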
---
## Uptime Kuma Monitors
Once Uptime Kuma is up, add the following monitors:
| Monitor Name | Type | URL / Host | Interval |
| :------------ | :--- | :--------------------------------- | :------- |
| QNAP NPM | HTTP | https://npm.np-dms.work | 60s |
| Frontend | HTTP | https://lcbp3.np-dms.work | 60s |
| Backend API | HTTP | https://backend.np-dms.work/health | 60s |
| MariaDB | TCP | 192.168.10.8:3306 | 60s |
| Redis | TCP | 192.168.10.8:6379 | 60s |
| Elasticsearch | HTTP | http://192.168.10.8:9200 | 60s |
| Gitea | HTTP | https://git.np-dms.work | 60s |
| n8n | HTTP | https://n8n.np-dms.work | 60s |
| Grafana | HTTP | https://grafana.np-dms.work | 60s |
| QNAP Host | Ping | 192.168.10.8 | 60s |
| ASUSTOR Host | Ping | 192.168.10.9 | 60s |
---
## Grafana Dashboards
### Recommended Dashboards to Import
| Dashboard ID | Name | Purpose |
| :----------- | :--------------------------- | :------------------ |
| 1860 | Node Exporter Full | Host system metrics |
| 14282 | cAdvisor exporter | Container metrics |
| 11074 | Node Exporter for Prometheus | Node overview |
| 7362 | Docker and Host Monitoring | Combined view |
### Import Dashboard via Grafana UI
1. Go to **Dashboards → Import**
2. Enter Dashboard ID (e.g., `1860`)
3. Select Prometheus data source
4. Click **Import**
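As an alternative to the UI, the Prometheus data source can be provisioned from a file. A sketch under assumptions: the provisioning directory is not mounted in the compose file above, so an extra volume such as `.../grafana/provisioning:/etc/grafana/provisioning` would be needed.

```yaml
# File: provisioning/datasources/prometheus.yml (hypothetical path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # container name on the lcbp3 network
    isDefault: true
```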
---
> 📝 **Note**: This document follows Architecture Document **v1.8.0** - the monitoring stack is deployed on ASUSTOR AS5403T

---
```bash
# n8n volumes
chown -R 1000:1000 /share/np-dms/n8n
chmod -R 755 /share/np-dms/n8n
```
## Docker file
```yml
# File: share/np-dms/n8n/docker-compose.yml
# DMS Container v1_7_0: separate service and folder, Application name: n8n, Service: n8n
x-restart: &restart_policy
restart: unless-stopped
# ...
networks:
lcbp3: {}
volumes:
- "/share/np-dms/n8n:/home/node/.n8n"
- "/share/np-dms/n8n/cache:/home/node/.cache"
- "/share/np-dms/n8n/scripts:/scripts"
- "/share/np-dms/n8n/data:/data"
- "/var/run/docker.sock:/var/run/docker.sock"
healthcheck:

```
---
# 🗺️ Network Architecture & Container Services (LCBP3-DMS)
This diagram set shows the network segmentation (VLANs), the firewall rules (ACLs) for TP-Link Omada (ER7206/OC200), and the roles of the two servers (QNAP: Application, ASUSTOR: Infrastructure).
---
## 1. Server Role Overview
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ LCBP3-DMS INFRASTRUCTURE │
├────────────────────────────────┬─────────────────────────────────────────────┤
│ QNAP TS-473A │ ASUSTOR AS5403T │
│ (Application & Database) │ (Infrastructure & Backup) │
├────────────────────────────────┼─────────────────────────────────────────────┤
│ ✔ Application Runtime │ ✔ File Storage (NFS/SMB) │
│ ✔ API / Web (NestJS, Next.js) │ ✔ Backup Target (Restic/Borg) │
│ ✔ Database (MariaDB Primary) │ ✔ Docker Infra (Registry, Portainer) │
│ ✔ High CPU / RAM usage │ ✔ Monitoring (Prometheus, Grafana) │
│ ✔ Worker / Queue (Redis) │ ✔ Log Aggregation (Loki) │
│ ✔ API Gateway (NPM) │ ✔ Uptime Monitoring (Uptime Kuma) │
│ ✖ No long-term backup          │ ✖ No heavy App logic                        │
├────────────────────────────────┼─────────────────────────────────────────────┤
│ Container: Container Station │ Container: Portainer │
│ IP: 192.168.10.8 │ IP: 192.168.10.9 │
│ Storage: 4TB×4 RAID5 + 1TB SSD │ Storage: 6TB×3 RAID5 + 1TB SSD │
└────────────────────────────────┴─────────────────────────────────────────────┘
```
---
## 2. Data Flow Diagram
```mermaid
flowchart TB
subgraph Internet["🌐 Internet"]
User[("👤 User")]
end
subgraph QNAP["💾 QNAP TS-473A (App Server)"]
NPM["🔲 NPM<br/>(Reverse Proxy)"]
Frontend["📱 Next.js<br/>(Frontend)"]
Backend["⚙️ NestJS<br/>(Backend API)"]
DB["🗄️ MariaDB"]
Redis["📦 Redis"]
ES["🔍 Elasticsearch"]
end
subgraph ASUSTOR["💾 ASUSTOR AS5403T (Infra Server)"]
Portainer["🐳 Portainer"]
Registry["📦 Registry"]
Prometheus["📊 Prometheus"]
Grafana["📈 Grafana"]
Uptime["⏱️ Uptime Kuma"]
Backup["💾 Restic/Borg"]
NFS["📁 NFS Storage"]
end
User -->|HTTPS 443| NPM
NPM --> Frontend
NPM --> Backend
Frontend --> Backend
Backend --> DB
Backend --> Redis
Backend --> ES
DB -.->|Scheduled Backup| Backup
Backup --> NFS
Portainer -.->|Manage| QNAP
Prometheus -.->|Collect Metrics| Backend
Prometheus -.->|Collect Metrics| DB
Uptime -.->|Health Check| NPM
```
---
## 3. Docker Management View
```mermaid
flowchart TB
subgraph Portainer["🐳 Portainer (ASUSTOR - Central Management)"]
direction TB
subgraph LocalStack["📦 Local Infra Stack"]
Registry["Docker Registry"]
Prometheus["Prometheus"]
Grafana["Grafana"]
Uptime["Uptime Kuma"]
Backup["Restic/Borg"]
Loki["Loki (Logs)"]
ClamAV["ClamAV"]
end
subgraph RemoteStack["🔗 Remote: QNAP App Stack"]
Frontend["Next.js"]
Backend["NestJS"]
MariaDB["MariaDB"]
Redis["Redis"]
ES["Elasticsearch"]
NPM["NPM"]
Gitea["Gitea"]
N8N["n8n"]
PMA["phpMyAdmin"]
end
end
```
---
## 4. Security Zones Diagram
```mermaid
flowchart TB
subgraph PublicZone["🌐 PUBLIC ZONE"]
direction LR
NPM["NPM (Reverse Proxy)"]
SSL["SSL/TLS Termination"]
end
subgraph AppZone["📱 APPLICATION ZONE (QNAP)"]
direction LR
Frontend["Next.js"]
Backend["NestJS"]
N8N["n8n"]
Gitea["Gitea"]
end
subgraph DataZone["💾 DATA ZONE (QNAP - Internal Only)"]
direction LR
MariaDB["MariaDB"]
Redis["Redis"]
ES["Elasticsearch"]
end
subgraph InfraZone["🛠️ INFRASTRUCTURE ZONE (ASUSTOR)"]
direction LR
Backup["Backup Services"]
Registry["Docker Registry"]
Monitoring["Prometheus + Grafana"]
Logs["Loki / Syslog"]
end
PublicZone -->|HTTPS Only| AppZone
AppZone -->|Internal API| DataZone
DataZone -.->|Backup| InfraZone
AppZone -.->|Metrics| InfraZone
```
---
## 5. Network Connection Flow
```mermaid
graph TD
    subgraph Flow1["External connections (Public WAN)"]
        User["External users (Internet)"]
    end
    subgraph Router["Router (ER7206) - Gateway"]
        ER7206["Port Forwarding<br/>TCP 80 → 192.168.10.8:80<br/>TCP 443 → 192.168.10.8:443"]
    end
    User -- "Port 80/443 (HTTP/HTTPS)" --> ER7206
    subgraph VLANs["Internal network (VLANs & Firewall Rules)"]
        direction LR
        subgraph VLAN10["VLAN 10: Servers<br/>192.168.10.x"]
            QNAP["QNAP NAS<br/>(192.168.10.8)"]
            ASUSTOR["ASUSTOR NAS<br/>(192.168.10.9)"]
        end
        subgraph VLAN20["VLAN 20: MGMT<br/>192.168.20.x"]
            AdminPC["Admin PC / Switches"]
        end
        subgraph VLAN30["VLAN 30: USER<br/>192.168.30.x"]
            OfficePC["Staff PC / Wi-Fi"]
        end
        subgraph VLAN70["VLAN 70: GUEST<br/>192.168.70.x"]
            GuestPC["Guest Wi-Fi"]
        end
        subgraph Firewall["Firewall ACLs (OC200/ER7206)"]
            direction TB
            rule1["Rule 1: DENY<br/>Guest (VLAN 70) → All VLANs"]
            rule2["Rule 2: DENY<br/>Server (VLAN 10) → User (VLAN 30)"]
            rule3["Rule 3: ALLOW<br/>User (VLAN 30) → QNAP<br/>Ports: 443, 80"]
            rule4["Rule 4: ALLOW<br/>MGMT (VLAN 20) → All"]
        end
        GuestPC -.-x|rule1| QNAP
        QNAP -.-x|rule2| OfficePC
        OfficePC -- "https://lcbp3.np-dms.work (rule3)" --> QNAP
        AdminPC -->|rule4| QNAP
        AdminPC -->|rule4| ASUSTOR
    end
    ER7206 --> QNAP
    subgraph DockerQNAP["Docker 'lcbp3' (QNAP - Applications)"]
        direction TB
        subgraph PublicServices["Services exposed through NPM"]
            direction LR
            NPM["NPM (Nginx Proxy Manager)"]
            FrontendC["frontend:3000"]
            BackendC["backend:3000"]
            GiteaC["gitea:3000"]
            PMAC["pma:80"]
            N8NC["n8n:5678"]
        end
        subgraph InternalServices["Internal Services (Backend only)"]
            direction LR
            DBC["mariadb:3306"]
            CacheC["cache:6379"]
            SearchC["search:9200"]
        end
        NPM -- "lcbp3.np-dms.work" --> FrontendC
        NPM -- "backend.np-dms.work" --> BackendC
        NPM -- "git.np-dms.work" --> GiteaC
        NPM -- "pma.np-dms.work" --> PMAC
        NPM -- "n8n.np-dms.work" --> N8NC
        BackendC -- "lcbp3 Network" --> DBC
        BackendC -- "lcbp3 Network" --> CacheC
        BackendC -- "lcbp3 Network" --> SearchC
    end
    subgraph DockerASUSTOR["Docker 'lcbp3' (ASUSTOR - Infrastructure)"]
        direction TB
        subgraph InfraServices["Infrastructure Services"]
            direction LR
            PortainerC["portainer:9443"]
            RegistryC["registry:5000"]
            PrometheusC["prometheus:9090"]
            GrafanaC["grafana:3000"]
            UptimeC["uptime-kuma:3001"]
        end
        subgraph BackupServices["Backup & Storage"]
            direction LR
            ResticC["restic/borg"]
            NFSC["NFS Share"]
        end
        PortainerC -.->|"Remote Endpoint"| NPM
        PrometheusC -.->|"Scrape Metrics"| BackendC
        ResticC --> NFSC
    end
    QNAP --> NPM
    ASUSTOR --> PortainerC
    DBC -.->|"Scheduled Backup"| ResticC
```
---
## 6. Firewall ACL Configuration (Omada OC200)
These are the rules to create under **Settings > Network Security > ACL** (evaluated top to bottom):
| #     | Name                   | Policy    | Source            | Destination               | Ports                                |
| :---- | :--------------------- | :-------- | :---------------- | :------------------------ | :----------------------------------- |
| **1** | Isolate-Guests | **Deny** | Network → VLAN 70 | Network → VLAN 10, 20, 30 | All |
| **2** | Isolate-Servers | **Deny** | Network → VLAN 10 | Network → VLAN 30 (USER) | All |
| **3** | Block-User-to-Mgmt | **Deny** | Network → VLAN 30 | Network → VLAN 20 (MGMT) | All |
| **4** | Allow-User-to-Services | **Allow** | Network → VLAN 30 | IP → QNAP (192.168.10.8) | Port Group → Web (443, 80, 81, 2222) |
| **5** | Allow-MGMT-to-All | **Allow** | Network → VLAN 20 | Any | All |
| **6** | Allow-Server-Internal | **Allow** | IP → 192.168.10.8 | IP → 192.168.10.9 | All (QNAP ↔ ASUSTOR) |
| **7** | (Default) | Deny | Any | Any | All |
---
## 7. Port Forwarding Configuration (Omada ER7206)
These are the rules to create under **Settings > Transmission > Port Forwarding**:
| Name | External Port | Internal IP | Internal Port | Protocol |
| :-------------- | :------------ | :----------- | :------------ | :------- |
| Allow-NPM-HTTPS | 443 | 192.168.10.8 | 443 | TCP |
| Allow-NPM-HTTP | 80 | 192.168.10.8 | 80 | TCP |
> **Note**: Port forwarding targets QNAP (NPM) only; ASUSTOR should not accept traffic from the Internet
---
## 8. Container Service Distribution
### QNAP (192.168.10.8) - Application Services
| Container | Port | Domain | Network |
| :------------ | :--- | :------------------ | :------ |
| npm | 81 | npm.np-dms.work | lcbp3 |
| frontend | 3000 | lcbp3.np-dms.work | lcbp3 |
| backend | 3000 | backend.np-dms.work | lcbp3 |
| mariadb | 3306 | (internal) | lcbp3 |
| cache (redis) | 6379 | (internal) | lcbp3 |
| search (es) | 9200 | (internal) | lcbp3 |
| gitea | 3000 | git.np-dms.work | lcbp3 |
| n8n | 5678 | n8n.np-dms.work | lcbp3 |
| pma | 80 | pma.np-dms.work | lcbp3 |
### ASUSTOR (192.168.10.9) - Infrastructure Services
| Container | Port | Domain | Network |
| :------------ | :--- | :--------------------- | :------ |
| portainer | 9443 | portainer.np-dms.work | lcbp3 |
| prometheus | 9090 | prometheus.np-dms.work | lcbp3 |
| grafana | 3000 | grafana.np-dms.work | lcbp3 |
| uptime-kuma | 3001 | uptime.np-dms.work | lcbp3 |
| registry | 5000 | registry.np-dms.work | lcbp3 |
| node-exporter | 9100 | (internal) | lcbp3 |
| cadvisor | 8080 | (internal) | lcbp3 |
| loki | 3100 | (internal) | lcbp3 |
| restic/borg | N/A | (scheduled job) | host |
---
## 9. Backup Flow
```
┌────────────────────────────────────────────────────────────────────────┐
│ BACKUP STRATEGY │
├────────────────────────────────────────────────────────────────────────┤
│ │
│ QNAP (Source) ASUSTOR (Target) │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ MariaDB │ ──── Daily 2AM ────▶ │ /volume1/backup/db/ │ │
│ │ (mysqldump) │ │ (Restic Repository) │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ Redis RDB │ ──── Daily 3AM ────▶ │ /volume1/backup/ │ │
│ │ + AOF │ │ redis/ │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────────────┐ │
│ │ App Config │ ──── Weekly ───────▶ │ /volume1/backup/ │ │
│ │ + Volumes │ Sunday 4AM │ config/ │ │
│ └──────────────┘ └──────────────────────┘ │
│ │
│ Retention Policy: │
│ • Daily: 7 days │
│ • Weekly: 4 weeks │
│ • Monthly: 6 months │
│ │
└────────────────────────────────────────────────────────────────────────┘
```
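The nightly database leg of this flow could be driven from ASUSTOR with restic. A sketch under assumptions (an SSH alias `qnap` for 192.168.10.8, MySQL credentials supplied via `~/.my.cnf` on QNAP, and `RESTIC_PASSWORD` exported), with retention flags mirroring the policy above:

```shell
# One-time: initialize the repository on the ASUSTOR volume
restic --repo /volume1/backup/db init

# Nightly cron job: pull a dump from QNAP, snapshot it, apply retention
ssh qnap "mysqldump --all-databases --single-transaction" > /tmp/db.sql
restic --repo /volume1/backup/db backup /tmp/db.sql
restic --repo /volume1/backup/db forget --prune \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```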
---
> 📝 **Note**: This document follows Architecture Document **v1.8.0** - Last updated: 2026-01-28