251217:1704 Document Number: Update to 1.6.2

File: specs/08-infrastructure/Git_command.md
# **Gitea Setup from Scratch + Daily Git Commands / Troubleshooting / Branches**

---

📘 Git + Gitea (QNAP / Container Station) – Cheat Sheet

This guide covers:

- Setting up Gitea from scratch
- Everyday Git commands
- Repository troubleshooting
- Working with branches
- reset / clone / merge / rebase

---

## 🧩 SECTION 1 – Setting Up Gitea from Scratch

🔹 1) Remove the old host key. Use this after Gitea has been reset, or when the IP / host key has changed.

```bash
ssh-keygen -R "[git.np-dms.work]:2222"
```

🔹 2) Connect for the first time (you will be asked to confirm the fingerprint)

```bash
ssh -T git@git.np-dms.work -p 2222
```

🔹 3) Show the SSH public key to add in Gitea

```bash
cat /root/.ssh/id_ed25519.pub
cat /root/.ssh/id_rsa.pub
```

🔹 4) Add the remote (if not added yet)

```bash
git remote add origin ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```

🔹 5) Remove the old remote if it is wrong

```bash
git remote remove origin
```

🔹 6) First push after setup

```bash
git push -u origin main
```

🔹 7) Clone the repo from scratch

```bash
git clone ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```

---
## 🧩 SECTION 2 – Everyday Git Commands

🟦 Check working-tree status

```bash
git status
```

🟦 See which files changed

```bash
git diff
```

🟦 Stage all files

```bash
git add .
```

🟦 Commit the changes

```bash
git commit -m "message"
```

🟦 Push

```bash
git push
```

🟦 Pull (fetch the latest work)

```bash
git pull
```

🟦 Pull (fetch the latest work) with rebase

```bash
git pull --rebase
```

🟦 View the log

```bash
git log
```

---
## 🧩 SECTION 3 – Working with Branches

### List branches

```bash
git branch
```

### Create a new branch

```bash
git checkout -b feature/login-page
```

### Switch branches

```bash
git checkout main
```

### Push a branch to Gitea

```bash
git push -u origin feature/login-page
```

### Delete a local branch

```bash
git branch -d feature/login-page
```

### Delete a branch on Gitea

```bash
git push origin --delete feature/login-page
```

### Merge branch → main

```bash
git checkout main
git pull
git merge feature/login-page
git push
```

### Rebase for a clean history

```bash
git checkout feature/login-page
git rebase main
git checkout main
git merge feature/login-page
git push
```

---
## 🧩 SECTION 4 – Repository Troubleshooting

🔴 (1) Reset the whole repo to match the remote

⚠ Use when local files are broken or edits have gone beyond repair

```bash
git fetch --all
git reset --hard origin/main
```

🔴 (2) Resolve conflicts during pull

```bash
git pull --rebase
```

🔴 (3) Check where the remote points

```bash
git remote -v
```

🔴 (4) Replace the remote

```bash
git remote remove origin
git remote add origin ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```

🔴 (5) Fix a wrong commit message

```bash
git commit --amend
```

🔴 (6) Undo the last commit (keeps the files)

```bash
git reset --soft HEAD~1
```

🔴 (7) View a condensed log

```bash
git log --oneline --graph
```

🔴 (8) Re-clone the repo (when badly broken)

```bash
rm -rf lcbp3
git clone ssh://git@git.np-dms.work:2222/np-dms/lcbp3.git
```

---

## 📌 END
---

File: specs/08-infrastructure/Gitea_setting.md
# Installing Gitea in Docker

* gitea user id: uid=1000(git) gid=1000(git) groups=1000(git)

## Setting Permissions

```bash
chown -R 1000:1000 /share/Container/gitea/
ls -l /share/Container/gitea/etc/app.ini
setfacl -R -m u:1000:rwx /share/Container/gitea/
setfacl -R -m u:70:rwx /share/Container/git/postgres/
getfacl /share/Container/git/etc/app.ini

# Clear all ACLs
setfacl -R -b /share/Container/gitea/

chgrp -R administrators /share/Container/gitea/
chown -R 1000:1000 /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
setfacl -m u:1000:rwx -m g:1000:rwx /share/Container/gitea/etc /share/Container/gitea/lib /share/Container/gitea/backup
```
## Docker Compose File

```yml
# File: share/Container/git/docker-compose.yml
# DMS Container v1_4_1 : separate services and folders. Application name: git, Service: gitea

networks:
  lcbp3:
    external: true
  giteanet:
    external: true
    name: gitnet

services:
  gitea:
    image: gitea/gitea:latest-rootless
    container_name: gitea
    restart: always
    stdin_open: true
    tty: true
    environment:
      # ---- File ownership in QNAP ----
      USER_UID: "1000"
      USER_GID: "1000"
      TZ: Asia/Bangkok
      # ---- Server / Reverse proxy (NPM) ----
      GITEA__server__ROOT_URL: https://git.np-dms.work/
      GITEA__server__DOMAIN: git.np-dms.work
      GITEA__server__SSH_DOMAIN: git.np-dms.work
      GITEA__server__START_SSH_SERVER: "true"
      GITEA__server__SSH_PORT: "22"
      GITEA__server__SSH_LISTEN_PORT: "22"
      GITEA__server__LFS_START_SERVER: "true"
      GITEA__server__HTTP_ADDR: "0.0.0.0"
      GITEA__server__HTTP_PORT: "3000"
      GITEA__server__TRUSTED_PROXIES: "127.0.0.1/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
      # ---- Database settings ----
      GITEA__database__DB_TYPE: mysql
      GITEA__database__HOST: mariadb:3306
      GITEA__database__NAME: "gitea"
      GITEA__database__USER: "gitea"
      GITEA__database__PASSWD: "Center#2025"
      # ---- Repositories ----
      GITEA__repository__ROOT: /var/lib/gitea/git/repositories
      DISABLE_HTTP_GIT: "false"
      ENABLE_BASIC_AUTHENTICATION: "true"
      # ---- Enable Package Registry ----
      GITEA__packages__ENABLED: "true"
      GITEA__packages__REGISTRY__ENABLED: "true"
      GITEA__packages__REGISTRY__STORAGE_TYPE: local
      GITEA__packages__REGISTRY__STORAGE_PATH: /data/registry
      # Optional: lock the installer after setup (set to true once onboarding is finished)
      GITEA__security__INSTALL_LOCK: "true"
    volumes:
      - /share/Container/gitea/backup:/backup
      - /share/Container/gitea/etc:/etc/gitea
      - /share/Container/gitea/lib:/var/lib/gitea
      # repo root is served from /share/dms-data/gitea_repos
      - /share/dms-data/gitea_repos:/var/lib/gitea/git/repositories
      - /share/dms-data/gitea_registry:/data/registry
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3003:3000" # HTTP (behind NPM)
      - "2222:22"   # SSH for git clone/push
    networks:
      - lcbp3
      - giteanet
```
---

File: specs/08-infrastructure/Infrastructure Setup.md
# Infrastructure Setup

## 1. Redis Cluster Configuration

### 1.1 Docker Compose Setup
```yaml
# docker-compose-redis.yml
version: '3.8'

services:
  redis-1:
    image: redis:7-alpine
    container_name: lcbp3-redis-1
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6379:6379"
      - "16379:16379"
    volumes:
      - redis-1-data:/data
    networks:
      - lcbp3-network
    restart: unless-stopped

  redis-2:
    image: redis:7-alpine
    container_name: lcbp3-redis-2
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6380:6379"
      - "16380:16379"
    volumes:
      - redis-2-data:/data
    networks:
      - lcbp3-network
    restart: unless-stopped

  redis-3:
    image: redis:7-alpine
    container_name: lcbp3-redis-3
    command: redis-server --port 6379 --cluster-enabled yes --cluster-config-file nodes.conf
    ports:
      - "6381:6379"
      - "16381:16379"
    volumes:
      - redis-3-data:/data
    networks:
      - lcbp3-network
    restart: unless-stopped

volumes:
  redis-1-data:
  redis-2-data:
  redis-3-data:

networks:
  lcbp3-network:
    external: true
```
#### Initialize Cluster
```bash
# Start Redis nodes
docker-compose -f docker-compose-redis.yml up -d

# Wait for nodes to start
sleep 10

# Create cluster
docker exec -it lcbp3-redis-1 redis-cli --cluster create \
  172.20.0.2:6379 \
  172.20.0.3:6379 \
  172.20.0.4:6379 \
  --cluster-replicas 0

# Verify cluster
docker exec -it lcbp3-redis-1 redis-cli cluster info
docker exec -it lcbp3-redis-1 redis-cli cluster nodes
```
#### Health Check Script
```bash
#!/bin/bash
# scripts/check-redis-cluster.sh

echo "🔍 Checking Redis Cluster Health..."

for port in 6379 6380 6381; do
  echo -e "\n📍 Node on port $port:"

  # Map host port to container name (6379 -> lcbp3-redis-1, ...)
  node=lcbp3-redis-$(($port - 6378))

  # Check if node is up
  docker exec $node redis-cli -p 6379 ping

  # Check cluster status
  docker exec $node redis-cli -p 6379 cluster info | grep cluster_state

  # Check memory usage
  docker exec $node redis-cli -p 6379 info memory | grep used_memory_human
done

echo -e "\n✅ Cluster check complete"
```

---
## 2. Database Configuration

### 2.1 MariaDB Optimization for Numbering
```ini
# /etc/mysql/mariadb.conf.d/50-numbering.cnf

[mysqld]
# Connection pool
max_connections = 200
thread_cache_size = 50

# Query cache (disabled for InnoDB)
query_cache_type = 0
query_cache_size = 0

# InnoDB settings
innodb_buffer_pool_size = 4G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

# Performance Schema
performance_schema = ON
performance_schema_instrument = 'wait/lock/%=ON'

# Binary logging
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 7
max_binlog_size = 100M

# Slow query log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow-query.log
long_query_time = 1
```
### 2.2 Monitoring Locks
```sql
-- Check for lock contention
SELECT
  r.trx_id waiting_trx_id,
  r.trx_mysql_thread_id waiting_thread,
  r.trx_query waiting_query,
  b.trx_id blocking_trx_id,
  b.trx_mysql_thread_id blocking_thread,
  b.trx_query blocking_query
FROM information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id;

-- Check active transactions
SELECT * FROM information_schema.innodb_trx;

-- Kill long-running transaction (if needed)
KILL <thread_id>;
```

---
## 3. Backend Service Configuration

### 3.1 Backend Service Deployment

#### Docker Compose
```yaml
# docker-compose-backend.yml
version: '3.8'

services:
  backend-1:
    image: lcbp3-backend:latest
    container_name: lcbp3-backend-1
    environment:
      - NODE_ENV=production
      - DB_HOST=mariadb-primary
      - REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379
      - NUMBERING_LOCK_TIMEOUT=5000
      - NUMBERING_RESERVATION_TTL=300
    ports:
      - "3001:3000"
    depends_on:
      - mariadb-primary
      - redis-1
      - redis-2
      - redis-3
    networks:
      - lcbp3-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  backend-2:
    image: lcbp3-backend:latest
    container_name: lcbp3-backend-2
    environment:
      - NODE_ENV=production
      - DB_HOST=mariadb-primary
      - REDIS_CLUSTER_NODES=redis-1:6379,redis-2:6379,redis-3:6379
    ports:
      - "3002:3000"
    depends_on:
      - mariadb-primary
      - redis-1
    networks:
      - lcbp3-network
    restart: unless-stopped

networks:
  lcbp3-network:
    external: true
```
#### Health Check Endpoint
```typescript
// health/numbering.health.ts
import { Injectable } from '@nestjs/common';
import { HealthIndicator, HealthIndicatorResult } from '@nestjs/terminus';
import { Redis } from 'ioredis';
import { DataSource } from 'typeorm';

@Injectable()
export class NumberingHealthIndicator extends HealthIndicator {
  constructor(
    private redis: Redis,
    private dataSource: DataSource,
  ) {
    super();
  }

  async isHealthy(key: string): Promise<HealthIndicatorResult> {
    const checks = await Promise.all([
      this.checkRedis(),
      this.checkDatabase(),
      this.checkSequenceIntegrity(),
    ]);

    const isHealthy = checks.every((check) => check.status === 'up');

    return this.getStatus(key, isHealthy, { checks });
  }

  private async checkRedis(): Promise<any> {
    try {
      await this.redis.ping();
      return { name: 'redis', status: 'up' };
    } catch (error) {
      return { name: 'redis', status: 'down', error: error.message };
    }
  }

  private async checkDatabase(): Promise<any> {
    try {
      await this.dataSource.query('SELECT 1');
      return { name: 'database', status: 'up' };
    } catch (error) {
      return { name: 'database', status: 'down', error: error.message };
    }
  }

  private async checkSequenceIntegrity(): Promise<any> {
    try {
      const result = await this.dataSource.query(`
        SELECT COUNT(*) as count
        FROM document_numbering_sequences
        WHERE current_value > (
          SELECT max_value FROM document_numbering_configs
          WHERE id = config_id
        )
      `);

      const hasIssue = result[0].count > 0;

      return {
        name: 'sequence_integrity',
        status: hasIssue ? 'degraded' : 'up',
        exceeded_sequences: result[0].count,
      };
    } catch (error) {
      return { name: 'sequence_integrity', status: 'down', error: error.message };
    }
  }
}
```

---
## 4. Monitoring & Alerting

### 4.1 Prometheus Configuration

```yaml
# prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

rule_files:
  - "/etc/prometheus/alerts/numbering.yml"

scrape_configs:
  - job_name: 'backend'
    static_configs:
      - targets:
          - 'backend-1:3000'
          - 'backend-2:3000'
    metrics_path: '/metrics'

  - job_name: 'redis-numbering'
    static_configs:
      - targets:
          - 'redis-1:6379'
          - 'redis-2:6379'
          - 'redis-3:6379'
    metrics_path: '/metrics'

  - job_name: 'mariadb'
    static_configs:
      - targets:
          - 'mariadb-exporter:9104'
```
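The runbooks in Section 8 reference alerts named `SequenceWarning`, `SequenceCritical`, `HighLockWaitTime`, and `RedisUnavailable`, but the rule file `/etc/prometheus/alerts/numbering.yml` loaded above is not shown anywhere. A minimal sketch follows; the metric names (`numbering_sequence_utilization_percent`, `numbering_lock_wait_seconds_bucket`) are assumptions and must be replaced with whatever the backend actually exports.

```yaml
# /etc/prometheus/alerts/numbering.yml (sketch — metric names are placeholders)
groups:
  - name: numbering
    rules:
      - alert: SequenceWarning
        expr: numbering_sequence_utilization_percent > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Sequence {{ $labels.document_type }} above 80% utilization"

      - alert: SequenceCritical
        expr: numbering_sequence_utilization_percent > 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Sequence {{ $labels.document_type }} above 90% utilization"

      - alert: HighLockWaitTime
        expr: histogram_quantile(0.95, rate(numbering_lock_wait_seconds_bucket[5m])) > 1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "p95 numbering lock wait above 1s"

      - alert: RedisUnavailable
        expr: up{job="redis-numbering"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Redis node {{ $labels.instance }} is down"
```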
### 4.2 Alert Manager Configuration

```yaml
# alertmanager.yml
global:
  resolve_timeout: 5m

route:
  receiver: 'default'
  group_by: ['alertname', 'severity']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 12h

  routes:
    - match:
        severity: critical
      receiver: 'critical'
      continue: true

    - match:
        severity: warning
      receiver: 'warning'

receivers:
  - name: 'default'
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
        channel: '#lcbp3-alerts'
        title: '{{ .GroupLabels.alertname }}'
        text: '{{ range .Alerts }}{{ .Annotations.summary }}\n{{ end }}'

  - name: 'critical'
    email_configs:
      - to: 'devops@lcbp3.com'
        from: 'alerts@lcbp3.com'
        smarthost: 'smtp.gmail.com:587'
        auth_username: 'alerts@lcbp3.com'
        auth_password: 'your-password'
        headers:
          Subject: '🚨 CRITICAL: {{ .GroupLabels.alertname }}'

    pagerduty_configs:
      - service_key: 'YOUR_PAGERDUTY_KEY'

  - name: 'warning'
    slack_configs:
      - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
        channel: '#lcbp3-warnings'
```
### 4.3 Grafana Dashboards

#### Import Dashboard JSON
```bash
# Download dashboard template
curl -o numbering-dashboard.json \
  https://raw.githubusercontent.com/lcbp3/grafana-dashboards/main/numbering.json

# Import to Grafana
curl -X POST http://admin:admin@localhost:3000/api/dashboards/db \
  -H "Content-Type: application/json" \
  -d @numbering-dashboard.json
```

#### Key Panels to Monitor
1. **Numbers Generated per Minute** - Rate of number creation
2. **Sequence Utilization** - Current usage vs max (alert >90%)
3. **Lock Wait Time (p95)** - Performance indicator
4. **Lock Failures** - System health indicator
5. **Redis Cluster Health** - Node status
6. **Database Connection Pool** - Resource usage
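The "Sequence Utilization" panel uses the same thresholds as the Section 8 runbooks (warning above 80%, critical above 90%). As a standalone sketch, the classification can be expressed in a few lines of bash; the function name `utilization_level` is ours, not part of the stack:

```shell
#!/bin/bash
# Classify sequence utilization the way the dashboard/alerts do:
# prints "ok", "warning" (>80%), or "critical" (>90%).
utilization_level() {
  local current=$1 max=$2
  # integer percent, rounded down
  local pct=$(( current * 100 / max ))
  if   [ "$pct" -gt 90 ]; then echo "critical"
  elif [ "$pct" -gt 80 ]; then echo "warning"
  else echo "ok"
  fi
}

utilization_level 450000 999999   # prints "ok"
utilization_level 950000 999999   # prints "critical"
```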
---

## 5. Backup & Recovery

### 5.1 Database Backup Strategy

#### Automated Backup Script
```bash
#!/bin/bash
# scripts/backup-numbering-db.sh

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/numbering"
DB_NAME="lcbp3_production"

echo "🔄 Starting backup at $DATE"

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup numbering tables only
docker exec lcbp3-mariadb mysqldump \
  --single-transaction \
  --routines \
  --triggers \
  $DB_NAME \
  document_numbering_configs \
  document_numbering_sequences \
  document_numbering_audit_logs \
  > $BACKUP_DIR/numbering_$DATE.sql

# Compress backup
gzip $BACKUP_DIR/numbering_$DATE.sql

# Keep only last 30 days
find $BACKUP_DIR -name "numbering_*.sql.gz" -mtime +30 -delete

echo "✅ Backup complete: numbering_$DATE.sql.gz"
```

#### Cron Schedule
```cron
# Run backup daily at 2 AM
0 2 * * * /opt/lcbp3/scripts/backup-numbering-db.sh >> /var/log/numbering-backup.log 2>&1

# Run integrity check weekly on Sunday at 3 AM
0 3 * * 0 /opt/lcbp3/scripts/check-sequence-integrity.sh >> /var/log/numbering-integrity.log 2>&1
```
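The cron job above (and the disaster-recovery script in Section 7) calls `scripts/check-sequence-integrity.sh`, which is not shown in this document. A minimal sketch follows, assuming the `lcbp3-mariadb` container and the table names used elsewhere in this document; the real database call is left commented out so the sketch is self-contained, with a stubbed count instead:

```shell
#!/bin/bash
# scripts/check-sequence-integrity.sh (sketch)
# Counts sequences whose current_value exceeds the configured max_value.

INTEGRITY_SQL="
SELECT COUNT(*) FROM document_numbering_sequences s
JOIN document_numbering_configs c ON s.config_id = c.id
WHERE s.current_value > c.max_value;"

integrity_verdict() {
  # $1 = number of sequences past their max_value
  if [ "$1" -eq 0 ]; then
    echo "OK: no exceeded sequences"
  else
    echo "FAIL: $1 sequence(s) past max_value"
    return 1
  fi
}

# In production, feed the real count from MariaDB, e.g.:
#   COUNT=$(docker exec lcbp3-mariadb mysql -N lcbp3_production -e "$INTEGRITY_SQL")
# Stubbed here so the sketch runs standalone:
COUNT=0
integrity_verdict "$COUNT"   # prints "OK: no exceeded sequences"
```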
### 5.2 Redis Backup

#### Enable RDB Persistence
```conf
# redis.conf
save 900 1      # snapshot after 900 s if at least 1 key changed
save 300 10     # snapshot after 300 s if at least 10 keys changed
save 60 10000   # snapshot after 60 s if at least 10000 keys changed

dbfilename dump.rdb
dir /data

# Enable AOF for durability
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
```
#### Backup Script
```bash
#!/bin/bash
# scripts/backup-redis.sh

DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/backups/redis"

mkdir -p $BACKUP_DIR

for i in 1 2 3; do
  echo "Backing up redis-$i..."

  # Trigger BGSAVE
  docker exec lcbp3-redis-$i redis-cli -p 6379 BGSAVE

  # Wait for save to complete
  sleep 10

  # Copy RDB file
  docker cp lcbp3-redis-$i:/data/dump.rdb \
    $BACKUP_DIR/redis-${i}_${DATE}.rdb

  # Copy AOF files (Redis 7 stores the AOF as a directory of base + incr files)
  docker cp lcbp3-redis-$i:/data/appendonlydir \
    $BACKUP_DIR/redis-${i}_${DATE}_aof
done

# Compress
tar -czf $BACKUP_DIR/redis_cluster_${DATE}.tar.gz $BACKUP_DIR/*_${DATE}*

# Cleanup
rm -rf $BACKUP_DIR/*_${DATE}.rdb $BACKUP_DIR/*_${DATE}_aof

echo "✅ Redis backup complete"
```
### 5.3 Recovery Procedures

#### Scenario 1: Restore from Database Backup
```bash
#!/bin/bash
# scripts/restore-numbering-db.sh

BACKUP_FILE=$1

if [ -z "$BACKUP_FILE" ]; then
  echo "Usage: ./restore-numbering-db.sh <backup_file>"
  exit 1
fi

echo "⚠️ WARNING: This will overwrite current numbering data!"
read -p "Continue? (yes/no): " confirm

if [ "$confirm" != "yes" ]; then
  echo "Aborted"
  exit 0
fi

# Decompress if needed
if [[ $BACKUP_FILE == *.gz ]]; then
  gunzip -c "$BACKUP_FILE" > /tmp/restore.sql
  RESTORE_FILE="/tmp/restore.sql"
else
  RESTORE_FILE=$BACKUP_FILE
fi

# Restore
docker exec -i lcbp3-mariadb mysql lcbp3_production < "$RESTORE_FILE"

echo "✅ Restore complete"
echo "🔄 Please verify sequence integrity"
```
#### Scenario 2: Redis Node Failure
```bash
# Automatically handled by the cluster:
# a node rejoins the cluster when restarted.

# Check cluster status
docker exec lcbp3-redis-1 redis-cli cluster info

# If a node stays failed, remove it and add it back
# (del-node takes an existing cluster node address plus the failed node's id)
docker exec lcbp3-redis-1 redis-cli --cluster del-node <cluster-ip>:6379 <node-id>
docker exec lcbp3-redis-1 redis-cli --cluster add-node <new-node-ip>:6379 <cluster-ip>:6379
```

---
## 6. Maintenance Procedures

### 6.1 Sequence Adjustment

#### Increase Max Value
```sql
-- Check current utilization
SELECT
  dc.document_type,
  ds.current_value,
  dc.max_value,
  ROUND((ds.current_value * 100.0 / dc.max_value), 2) as utilization
FROM document_numbering_sequences ds
JOIN document_numbering_configs dc ON ds.config_id = dc.id
WHERE ds.current_value > dc.max_value * 0.8;

-- Increase max_value for a type approaching its limit
UPDATE document_numbering_configs
SET max_value = max_value * 10,
    updated_at = CURRENT_TIMESTAMP
WHERE document_type = 'COR'
  AND max_value < 9999999;

-- Audit log
INSERT INTO document_numbering_audit_logs (
  operation, document_type, old_value, new_value,
  user_id, metadata
) VALUES (
  'ADJUST_MAX_VALUE', 'COR', '999999', '9999999',
  1, '{"reason": "Approaching limit", "automated": false}'
);
```
#### Reset Yearly Sequence
```sql
-- For document types with yearly reset
-- Run on January 1st

START TRANSACTION;

-- Create new sequence rows for the new year
INSERT INTO document_numbering_sequences (
  config_id,
  scope_value,
  current_value,
  last_used_at
)
SELECT
  id as config_id,
  YEAR(CURDATE()) as scope_value,
  0 as current_value,
  NULL as last_used_at
FROM document_numbering_configs
WHERE scope = 'YEARLY';

-- Verify
SELECT * FROM document_numbering_sequences
WHERE scope_value = YEAR(CURDATE());

COMMIT;
```
### 6.2 Cleanup Old Audit Logs

```sql
-- Archive logs older than 2 years
-- Run monthly

START TRANSACTION;

-- Create archive table (if not exists)
CREATE TABLE IF NOT EXISTS document_numbering_audit_logs_archive
LIKE document_numbering_audit_logs;

-- Move old logs to archive
INSERT INTO document_numbering_audit_logs_archive
SELECT * FROM document_numbering_audit_logs
WHERE timestamp < DATE_SUB(CURDATE(), INTERVAL 2 YEAR);

-- Delete from main table
DELETE FROM document_numbering_audit_logs
WHERE timestamp < DATE_SUB(CURDATE(), INTERVAL 2 YEAR);

COMMIT;

-- Reclaim space (runs after COMMIT: OPTIMIZE is DDL and commits implicitly)
OPTIMIZE TABLE document_numbering_audit_logs;

-- Export archive to file (optional)
SELECT * FROM document_numbering_audit_logs_archive
INTO OUTFILE '/tmp/audit_archive_2023.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```
### 6.3 Redis Maintenance

#### Flush Expired Reservations
```bash
#!/bin/bash
# scripts/cleanup-expired-reservations.sh

echo "🧹 Cleaning up orphaned reservations..."

# Get all reservation keys across the cluster
# (KEYS is O(N); acceptable here only because reservation keys are few)
KEYS=$(docker exec lcbp3-redis-1 redis-cli --cluster call 172.20.0.2:6379 KEYS "reservation:*" | grep -v "(error)")

COUNT=0
for KEY in $KEYS; do
  # Check TTL: -1 = key has no expiry (an orphaned reservation), -2 = already gone.
  # Keys with a TTL expire on their own and need no cleanup.
  TTL=$(docker exec lcbp3-redis-1 redis-cli TTL "$KEY")

  if [ "$TTL" -eq -1 ]; then
    # Delete orphaned key
    docker exec lcbp3-redis-1 redis-cli DEL "$KEY"
    ((COUNT++))
  fi
done

echo "✅ Cleaned up $COUNT orphaned reservations"
```

---
## 7. Disaster Recovery

### 7.1 Total System Failure

#### Recovery Steps
```bash
#!/bin/bash
# scripts/disaster-recovery.sh

echo "🚨 Starting disaster recovery..."

# 1. Start Redis cluster
echo "1️⃣ Starting Redis cluster..."
docker-compose -f docker-compose-redis.yml up -d
sleep 30

# 2. Restore Redis backups
echo "2️⃣ Restoring Redis backups..."
./scripts/restore-redis.sh /backups/redis/latest.tar.gz

# 3. Start database
echo "3️⃣ Starting MariaDB..."
docker-compose -f docker-compose-db.yml up -d
sleep 30

# 4. Restore database
echo "4️⃣ Restoring database..."
./scripts/restore-numbering-db.sh /backups/db/latest.sql.gz

# 5. Verify sequence integrity
echo "5️⃣ Verifying sequence integrity..."
./scripts/check-sequence-integrity.sh

# 6. Start backend services
echo "6️⃣ Starting backend services..."
docker-compose -f docker-compose-backend.yml up -d

# 7. Run health checks (one per backend port)
echo "7️⃣ Running health checks..."
sleep 60
for port in 3001 3002; do
  curl -f http://localhost:$port/health || echo "Backend on port $port not healthy"
done

echo "✅ Disaster recovery complete"
echo "⚠️ Please verify system functionality manually"
```
### 7.2 RTO/RPO Targets

| Scenario | RTO | RPO | Priority |
| ---------------------------- | ------- | ------ | -------- |
| Single backend node failure | 0 min | 0 | P0 |
| Single Redis node failure | 0 min | 0 | P0 |
| Database primary failure | 5 min | 0 | P0 |
| Complete data center failure | 1 hour | 15 min | P1 |
| Data corruption | 4 hours | 1 day | P2 |

---
## 8. Runbooks

### 8.1 High Sequence Utilization (>90%)

**Alert**: `SequenceWarning` or `SequenceCritical`

**Steps**:
1. Check current utilization
```sql
SELECT document_type, current_value, max_value,
       ROUND((current_value * 100.0 / max_value), 2) as pct
FROM document_numbering_sequences s
JOIN document_numbering_configs c ON s.config_id = c.id
WHERE current_value > max_value * 0.9;
```

2. Assess impact
   - How many numbers left?
   - Daily usage rate?
   - Days until exhaustion?

3. Take action
```sql
-- Option A: Increase max_value
UPDATE document_numbering_configs
SET max_value = max_value * 10
WHERE document_type = 'COR';

-- Option B: Reset sequence (yearly types only)
-- Schedule for next year/month
```

4. Notify stakeholders
5. Update monitoring thresholds if needed
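The "days until exhaustion" estimate in step 2 is simple integer arithmetic; a throwaway helper like the one below (the function name `days_until_exhaustion` is ours, not part of the stack) avoids doing it by hand during an incident:

```shell
#!/bin/bash
# Estimate days until sequence exhaustion (step 2 of the runbook).
# Inputs: current value, max value, average numbers issued per day.
days_until_exhaustion() {
  local current=$1 max=$2 daily_rate=$3
  local remaining=$(( max - current ))
  echo $(( remaining / daily_rate ))
}

# Example: 950,000 of 999,999 used, ~500 numbers/day
days_until_exhaustion 950000 999999 500   # prints 99
```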
---
### 8.2 High Lock Wait Time

**Alert**: `HighLockWaitTime`

**Steps**:
1. Check Redis cluster health
```bash
docker exec lcbp3-redis-1 redis-cli cluster info
docker exec lcbp3-redis-1 redis-cli cluster nodes
```

2. Check database locks
```sql
SELECT * FROM information_schema.innodb_lock_waits;
SELECT * FROM information_schema.innodb_trx
WHERE trx_started < NOW() - INTERVAL 30 SECOND;
```

3. Identify bottleneck
   - Redis slow?
   - Database slow?
   - High concurrent load?

4. Take action based on cause:
   - **Redis**: Add more nodes, check network latency
   - **Database**: Optimize queries, increase connection pool
   - **High load**: Scale horizontally (add backend nodes)

5. Monitor improvements

---
### 8.3 Redis Cluster Down
|
||||
|
||||
**Alert**: `RedisUnavailable`
|
||||
|
||||
**Steps**:
|
||||
1. Verify all nodes down
|
||||
```bash
|
||||
for i in {1..3}; do
|
||||
docker exec lcbp3-redis-$i redis-cli ping || echo "Node $i DOWN"
|
||||
done
|
||||
```
|
||||
|
||||
2. Check system falls back to DB-only mode
|
||||
```bash
|
||||
curl http://localhost:3001/health/numbering
|
||||
# Should show: fallback_mode: true
|
||||
```
|
||||
|
||||
3. Restart Redis cluster
|
||||
```bash
|
||||
docker-compose -f docker-compose-redis.yml restart
|
||||
sleep 30
|
||||
./scripts/check-redis-cluster.sh
|
||||
```
|
||||
|
||||
4. If restart fails, restore from backup
|
||||
```bash
|
||||
./scripts/restore-redis.sh /backups/redis/latest.tar.gz
|
||||
```
|
||||
|
||||
5. Verify numbering system back to normal
|
||||
```bash
|
||||
curl http://localhost:3001/health/numbering
|
||||
# Should show: fallback_mode: false
|
||||
```
|
||||
|
||||
6. Review logs for root cause
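While watching the recovery, the `fallback_mode` check from steps 2 and 5 can be wrapped in a small polling helper. A sketch only: the exact JSON shape of `/health/numbering` is an assumption, so adjust the pattern to the real response:

```shell
# Succeeds when the health payload (on stdin) reports fallback_mode: false.
# ASSUMPTION: the endpoint returns JSON containing "fallback_mode": true/false.
is_fallback_off() {
  grep -q '"fallback_mode":[[:space:]]*false'
}

# Poll until the numbering service leaves fallback mode (max ~5 minutes).
wait_for_normal_mode() {
  local url="${1:-http://localhost:3001/health/numbering}"
  for _ in $(seq 1 60); do
    if curl -fsS "$url" 2>/dev/null | is_fallback_off; then
      echo "normal"; return 0
    fi
    sleep 5
  done
  echo "still in fallback"; return 1
}
```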
---

## 9. Performance Tuning

### 9.1 Slow Number Generation

**Diagnosis**:
```sql
-- Check slow queries (requires log_output=TABLE)
SELECT * FROM mysql.slow_log
WHERE sql_text LIKE '%document_numbering%'
ORDER BY query_time DESC
LIMIT 10;

-- Check index usage
EXPLAIN SELECT * FROM document_numbering_sequences
WHERE config_id = 1 AND scope_value = '2025'
FOR UPDATE;
```

**Optimizations**:
```sql
-- Add missing indexes
CREATE INDEX idx_sequence_lookup
ON document_numbering_sequences(config_id, scope_value);

-- Optimize table
OPTIMIZE TABLE document_numbering_sequences;

-- Update statistics
ANALYZE TABLE document_numbering_sequences;
```
### 9.2 Redis Memory Optimization

```bash
# Check memory usage
docker exec lcbp3-redis-1 redis-cli INFO memory

# If memory is high, find the biggest keys
docker exec lcbp3-redis-1 redis-cli --bigkeys

# Set maxmemory policy
# (if CONFIG has been renamed per section 10.1, use the renamed command)
docker exec lcbp3-redis-1 redis-cli CONFIG SET maxmemory 2gb
docker exec lcbp3-redis-1 redis-cli CONFIG SET maxmemory-policy allkeys-lru
```
---

## 10. Security Hardening

### 10.1 Redis Security

```conf
# redis.conf
requirepass your-strong-redis-password
# Note: bind 0.0.0.0 listens on all interfaces; restrict this to the
# container network address where possible.
bind 0.0.0.0
protected-mode yes
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command CONFIG "CONFIG_abc123"
```
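A value for `requirepass` can be generated on the host. A quick sketch using only `/dev/urandom`; the 32-character length and the `gen_redis_password` helper name are arbitrary choices, not part of the original setup:

```shell
# Generate an alphanumeric password suitable for requirepass.
gen_redis_password() {
  head -c 48 /dev/urandom | base64 | tr -d '=+/\n' | cut -c1-32
}
gen_redis_password
```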
### 10.2 Database Security

```sql
-- Create dedicated numbering user
CREATE USER 'numbering'@'%' IDENTIFIED BY 'strong-password';

-- Grant minimal permissions.
-- Note: MariaDB/MySQL GRANT does not support table-name wildcards,
-- so each numbering table must be granted explicitly.
GRANT SELECT, INSERT, UPDATE ON lcbp3_production.document_numbering_configs TO 'numbering'@'%';
GRANT SELECT, INSERT, UPDATE ON lcbp3_production.document_numbering_sequences TO 'numbering'@'%';
GRANT SELECT ON lcbp3_production.users TO 'numbering'@'%';

FLUSH PRIVILEGES;
```
### 10.3 Network Security

```yaml
# docker-compose-network.yml
networks:
  lcbp3-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
    driver_opts:
      com.docker.network.bridge.name: lcbp3-br
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"
```
---

## 11. Compliance & Audit

### 11.1 Audit Log Retention

```sql
-- Export audit logs for compliance
SELECT *
FROM document_numbering
```
---

`specs/08-infrastructure/MariaDB_setting.md`

# Installing MariaDB and phpMyAdmin in Docker
* MariaDB container user id:

* uid=0(root) gid=0(root) groups=0(root)

## Setting Permissions

```bash
# MariaDB init scripts (mysql user, UID 999)
chown -R 999:999 /share/Container/mariadb/init
chmod 755 /share/Container/mariadb/init
setfacl -R -m u:999:r-x /share/Container/mariadb/init
setfacl -R -d -m u:999:r-x /share/Container/mariadb/init

# phpMyAdmin temp dir (www-data, UID 33)
chown -R 33:33 /share/Container/pma/tmp
chmod 755 /share/Container/pma/tmp
setfacl -R -m u:33:rwx /share/Container/pma/tmp
setfacl -R -d -m u:33:rwx /share/Container/pma/tmp

# phpMyAdmin logs (www-data, UID 33)
chown -R 33:33 /share/dms-data/logs/pma
chmod 755 /share/dms-data/logs/pma
setfacl -R -m u:33:rwx /share/dms-data/logs/pma
setfacl -R -d -m u:33:rwx /share/dms-data/logs/pma

# Gitea volumes (git user, UID 1000)
setfacl -R -m u:1000:rwx /share/Container/gitea
setfacl -R -m u:1000:rwx /share/dms-data/gitea_repos
setfacl -R -m u:1000:rwx /share/dms-data/gitea_registry
```
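Before starting the containers, you can verify that the ownership changes actually took effect. A small sketch; the `check_owner` helper is illustrative, not part of the original setup:

```shell
# Return success if the path is owned by the expected UID.
# Tries GNU stat first, then falls back to BSD stat.
check_owner() {
  local path="$1" want_uid="$2" got_uid
  got_uid=$(stat -c '%u' "$path" 2>/dev/null || stat -f '%u' "$path" 2>/dev/null)
  [ "$got_uid" = "$want_uid" ]
}

# Example:
# check_owner /share/Container/mariadb/init 999 && echo "init dir OK"
```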
## Add database & user for Nginx Proxy Manager (NPM)

```bash
docker exec -it mariadb mysql -u root -p
# then, at the MySQL prompt:
CREATE DATABASE npm;
CREATE USER 'npm'@'%' IDENTIFIED BY 'npm';
GRANT ALL PRIVILEGES ON npm.* TO 'npm'@'%';
FLUSH PRIVILEGES;
```
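For scripting, the interactive `mysql -u root -p` session above can be replaced by generating the SQL and piping it into the container. A minimal sketch; the `make_db_sql` helper and the `IF NOT EXISTS` idempotence guards are additions, not part of the original setup:

```shell
# Build provisioning SQL for a database + user (idempotent via IF NOT EXISTS).
make_db_sql() {
  local db="$1" user="$2" pass="$3"
  cat <<SQL
CREATE DATABASE IF NOT EXISTS \`$db\`;
CREATE USER IF NOT EXISTS '$user'@'%' IDENTIFIED BY '$pass';
GRANT ALL PRIVILEGES ON \`$db\`.* TO '$user'@'%';
FLUSH PRIVILEGES;
SQL
}

# Usage (prompts for the root password):
# make_db_sql npm npm npm | docker exec -i mariadb mysql -u root -p
```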
## Add database & user for Gitea

```bash
docker exec -it mariadb mysql -u root -p
# then, at the MySQL prompt:
CREATE DATABASE gitea CHARACTER SET 'utf8mb4' COLLATE 'utf8mb4_unicode_ci';
CREATE USER 'gitea'@'%' IDENTIFIED BY 'Center#2025';
GRANT ALL PRIVILEGES ON gitea.* TO 'gitea'@'%';
FLUSH PRIVILEGES;
```
## Docker file

```yml
# File: share/Container/mariadb/docker-compose.yml
# DMS Container v1_4_1: separate service and folder; application name: lcbp3-db; services: mariadb, pma
x-restart: &restart_policy
  restart: unless-stopped

x-logging: &default_logging
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"

services:
  mariadb:
    <<: [*restart_policy, *default_logging]
    image: mariadb:11.8
    container_name: mariadb
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 4G
        reservations:
          cpus: "0.5"
          memory: 1G
    environment:
      MYSQL_ROOT_PASSWORD: "Center#2025"
      MYSQL_DATABASE: "lcbp3"
      MYSQL_USER: "center"
      MYSQL_PASSWORD: "Center#2025"
      TZ: "Asia/Bangkok"
    ports:
      - "3306:3306"
    volumes:
      - "/share/Container/mariadb/data:/var/lib/mysql"
      - "/share/Container/mariadb/my.cnf:/etc/mysql/conf.d/my.cnf:ro"
      - "/share/Container/mariadb/init:/docker-entrypoint-initdb.d:ro"
      - "/share/dms-data/mariadb/backup:/backup"
    healthcheck:
      test:
        ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -pCenter#2025 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 15
    networks:
      lcbp3: {}

  pma:
    <<: [*restart_policy, *default_logging]
    image: phpmyadmin:5-apache
    container_name: pma
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 256M
    environment:
      TZ: "Asia/Bangkok"
      PMA_HOST: "mariadb"
      PMA_PORT: "3306"
      PMA_ABSOLUTE_URI: "https://pma.np-dms.work/"
      UPLOAD_LIMIT: "1G"
      MEMORY_LIMIT: "512M"
    ports:
      - "89:80"
    # expose:
    #   - "80"
    volumes:
      - "/share/Container/pma/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php:ro"
      - "/share/Container/pma/zzz-custom.ini:/usr/local/etc/php/conf.d/zzz-custom.ini:ro"
      - "/share/Container/pma/tmp:/var/lib/phpmyadmin/tmp:rw"
      - "/share/dms-data/logs/pma:/var/log/apache2"
    depends_on:
      mariadb:
        condition: service_healthy
    networks:
      lcbp3: {}

networks:
  lcbp3:
    external: true
```
---

`specs/08-infrastructure/NPM_setting.md`

# Installing Nginx Proxy Manager (NPM) in Docker

* Default credentials: Email: `admin@example.com`, Password: `changeme`

* NPM container user id:

* uid=0(root) gid=0(root) groups=0(root)
---

## Setting Permissions

```bash
# Check the NPM container user id
docker exec -it npm id

chown -R 0:0 /share/Container/npm
setfacl -R -m u:0:rwx /share/Container/npm
```
## Note: Configurations

| Domain Names | Forward Hostname / IP | Forward Port | Cache Assets | Block Common Exploits | Websockets | Force SSL | HTTP/2 Support | HSTS Enabled |
| :----------------------------- | :--------------- | :-------------- | :----------- | :-------------------- | :--------- | :-------- | :----- | :------------------ |
| backend.np-dms.work | backend | 3000 | [ ] | [x] | [ ] | [x] | [x] | [ ] |
| lcbp3.np-dms.work | frontend | 3000 | [x] | [x] | [x] | [x] | [x] | [ ] |
| db.np-dms.work | mariadb | 3306 | [x] | [x] | [x] | [x] | [x] | [ ] |
| git.np-dms.work | gitea | 3000 | [x] | [x] | [x] | [x] | [x] | [ ] |
| n8n.np-dms.work | n8n | 5678 | [x] | [x] | [x] | [x] | [x] | [ ] |
| npm.np-dms.work | npm | 81 | [ ] | [x] | [x] | [x] | [x] | [ ] |
| pma.np-dms.work | pma | 80 | [x] | [x] | [ ] | [x] | [x] | [ ] |
| np-dms.work, www.np-dms.work | localhost | 80 | [x] | [x] | [ ] | [x] | [x] | [ ] |
## Docker file

```yml
# File: share/Container/npm/docker-compose-npm.yml
# DMS Container v1_4_1: separate service and folder; application name: lcbp3-npm; service: npm
x-restart: &restart_policy
  restart: unless-stopped

x-logging: &default_logging
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"

services:
  npm:
    <<: [*restart_policy, *default_logging]
    image: jc21/nginx-proxy-manager:latest
    container_name: npm
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    ports:
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
      - "81:81"    # NPM Admin UI
    environment:
      TZ: "Asia/Bangkok"
      DB_MYSQL_HOST: "mariadb"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
      # Uncomment this if IPv6 is not enabled on your host
      DISABLE_IPV6: "true"
    networks:
      - lcbp3
      - giteanet
    volumes:
      - "/share/Container/npm/data:/data"
      - "/share/dms-data/logs/npm:/data/logs"             # logging volume
      - "/share/Container/npm/letsencrypt:/etc/letsencrypt"
      - "/share/Container/npm/custom:/data/nginx/custom"  # required for http_top.conf
      # - "/share/Container/lcbp3/npm/landing:/data/landing:ro"

  landing:
    image: nginx:1.27-alpine
    container_name: landing
    restart: unless-stopped
    volumes:
      - "/share/Container/npm/landing:/usr/share/nginx/html:ro"
    networks:
      - lcbp3

networks:
  lcbp3:
    external: true
  giteanet:
    external: true
    name: gitnet
```
---

`specs/08-infrastructure/Securities.md`

Setting up network segmentation and firewall rules is a smart move, especially when you run both public-facing services (e.g. `lcbp3.np-dms.work`) and internal ones (e.g. `db.np-dms.work`).

For Omada gear (ER7206 + OC200), the core strategy is to use **VLANs (Virtual LANs)** to group devices and **Firewall ACLs (Access Control Lists)** to control traffic between those groups.

The recommendations below follow a "Zero Trust" approach adapted to your architecture.
---

## 1. 🌐 VLAN Segmentation

In the Omada Controller (OC200), go to `Settings > Wired Networks > LAN` and create the following VLANs:

* **VLAN 1 (Default): Management**
  * **IP Range:** 192.168.1.x
  * **Purpose:** Network devices (ER7206, OC200, switches) and admin PCs only.

* **VLAN 10: Servers (DMZ)**
  * **IP Range:** 192.168.10.x
  * **Purpose:** This is the VLAN the **QNAP NAS** plugs into; the QNAP gets an IP in this range (e.g. `192.168.10.100`).

* **VLAN 20: Office / Trusted**
  * **IP Range:** 192.168.20.x
  * **Purpose:** Staff PCs, notebooks, and Wi-Fi that need access to the system (e.g. `lcbp3.np-dms.work`).

* **VLAN 30: Guests / Untrusted**
  * **IP Range:** 192.168.30.x
  * **Purpose:** Guest Wi-Fi; must never reach the internal network.

**Switch port settings:**
After creating the VLANs, go to `Devices` > select your switch > `Ports` > assign port profiles:

* Port connected to the QNAP NAS: profile **VLAN 10**
* Ports for staff PCs: profile **VLAN 20**
---

## 2. 🔥 Firewall Rules (ACLs)

This is the heart of the setup. Go to `Settings > Network Security > ACL (Access Control)`.

Firewall rules are evaluated top-down (rule 1 runs before rule 2).

### A. Deny Rules (most important)

**Rule 1: Block guests (VLAN 30) from everything internal**

* **Name:** Isolate-Guests
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 30`
* **Destination:** `Network` -> `VLAN 1`, `VLAN 10`, `VLAN 20`
* *(This leaves guests with internet access only; they cannot talk across VLANs.)*

**Rule 2: Block servers (VLAN 10) from attacking other segments**

* **Name:** Isolate-Servers
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 10`
* **Destination:** `Network` -> `VLAN 20`
* *(This prevents a compromised server (QNAP) from initiating connections to staff PCs (VLAN 20) to spread malware.)*

**Rule 3: Block the office from the admin interfaces**

* **Name:** Block-Office-to-Management
* **Policy:** Deny
* **Source:** `Network` -> `VLAN 20`
* **Destination:** `Network` -> `VLAN 1`
* *(This keeps ordinary staff away from the router/controller configuration pages.)*

### B. Allow Rules

**Rule 4: Allow the office (VLAN 20) to reach the required services**

* **Name:** Allow-Office-to-Services
* **Policy:** Allow
* **Source:** `Network` -> `VLAN 20`
* **Destination:** `IP Group` -> (create a group named `QNAP_Services` pointing at `192.168.10.100`, the QNAP IP)
* **Port:** `Service` -> (create a port group named `Web_Services`):
  * TCP 443 (HTTPS, used by every service: lcbp3, git, pma, etc.)
  * TCP 80 (HTTP, for NPM redirects)
  * TCP 81 (NPM Admin UI)
  * TCP 2222 (Gitea SSH)
  * (Ports 3000, 3003, 5678, and 89 do not need to be opened; NPM handles them.)

### C. Final Rule (Default)

Omada usually has an implicit "Allow All" rule at the bottom. You can leave it in place, or, for maximum security (Zero Trust), change the final rule to "Deny All" (but only once you are sure your Allow rules cover everything).
---

## 3. 🚪 Port Forwarding (Exposing Services to the Public)

This part is not a firewall ACL, but it is required so external users can reach the system. Go to `Settings > Transmission > Port Forwarding`.

Create rules that forward traffic from the WAN (internet) to the Nginx Proxy Manager (NPM) running on the QNAP (VLAN 10):

* **Name:** Allow-NPM-HTTPS
  * **External Port:** 443
  * **Internal Port:** 443
  * **Internal IP:** `192.168.10.100` (the QNAP IP)
  * **Protocol:** TCP

* **Name:** Allow-NPM-HTTP (for Let's Encrypt)
  * **External Port:** 80
  * **Internal Port:** 80
  * **Internal IP:** `192.168.10.100` (the QNAP IP)
  * **Protocol:** TCP

### Connection Flow Summary

1. **External user** -> `https://lcbp3.np-dms.work`
2. **ER7206** receives the traffic on port 443
3. **Port forwarding** passes it to `192.168.10.100:443` (QNAP NPM)
4. **NPM** (on the QNAP) forwards it to `backend:3000` or `frontend:3000` inside Docker
5. **Internal user (Office)** -> `https://lcbp3.np-dms.work`
6. **Firewall ACL** (rule 4) allows VLAN 20 to reach `192.168.10.100:443`
7. (Steps 3-4 proceed as before)

Following this setup cleanly separates your servers from the staff network, which is far safer than putting everything on a single flat LAN.
---

`specs/08-infrastructure/Service_setting.md`

# Installing the Redis and Elasticsearch Services in Docker

---

## **📝 Notes and Considerations**
* 1. Redis (Service: cache)

  * Image: redis:7-alpine (small and up to date).

  * Port: 6379 is deliberately not exposed on the QNAP host; per the architecture, the backend service (NestJS) talks to cache (Redis) directly over the internal lcbp3 network, which is safer.

  * Volume: data is mapped to /share/Container/cache/data in case Redis is used as a persistent cache (if you only need locking, the volume mapping is optional).

  * User ID: the redis:7-alpine image runs as user redis (UID 999).

* 2. Elasticsearch (Service: search)

  * Image: elasticsearch:8.11.1, i.e. version 8, pinned explicitly (not latest) for stability.

  * Port: 9200 is likewise not exposed on the host, because NPM_setting.md forwards search.np-dms.work to the search service on port 9200 over the lcbp3 network.

  * Environment (very important):

    * discovery.type: "single-node": required; Elasticsearch 8 will not start if it cannot find other nodes in a cluster.

    * xpack.security.enabled: "false": for convenience during early development, so NestJS can talk to the API on port 9200 directly (enabling it requires SSL and token setup, which is considerably more complex).

    * ES_JAVA_OPTS: "-Xms1g -Xmx1g": best practice is to pin the Elasticsearch heap size (1 GB here).

  * User ID: the elasticsearch image runs as user elasticsearch (UID 1000).
---

## Setting Permissions

```bash
# Create directories
mkdir -p /share/Container/services/cache/data
mkdir -p /share/Container/services/search/data

# Set ownership to match the user IDs inside the containers
# Redis (UID 999)
chown -R 999:999 /share/Container/services/cache/data
chmod -R 750 /share/Container/services/cache/data

# Elasticsearch (UID 1000)
chown -R 1000:1000 /share/Container/services/search/data
chmod -R 750 /share/Container/services/search/data
```
## Docker file

```yml
# File: /share/Container/services/docker-compose.yml (or whichever file you aggregate into)
# DMS Container v1_4_1: adds application name: services, with services 'cache' (Redis) and 'search' (Elasticsearch)

x-restart: &restart_policy
  restart: unless-stopped

x-logging: &default_logging
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"

networks:
  lcbp3:
    external: true

services:
  # ----------------------------------------------------------------
  # 1. Redis (for caching and distributed locks)
  # Service name: cache (as referenced by NPM and the backend plan)
  # ----------------------------------------------------------------
  cache:
    <<: [*restart_policy, *default_logging]
    image: redis:7-alpine # Alpine image for a small footprint
    container_name: cache
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 2G # Redis is in-memory; allow enough headroom
        reservations:
          cpus: "0.25"
          memory: 512M
    environment:
      TZ: "Asia/Bangkok"
    networks:
      - lcbp3 # internal network only
    volumes:
      - "/share/Container/cache/data:/data" # data volume (if persistence is needed)
    healthcheck:
      test: ["CMD", "redis-cli", "ping"] # confirm the service is ready
      interval: 10s
      timeout: 5s
      retries: 5

  # ----------------------------------------------------------------
  # 2. Elasticsearch (for advanced search)
  # Service name: search (as referenced by NPM and the backend plan)
  # ----------------------------------------------------------------
  search:
    <<: [*restart_policy, *default_logging]
    image: elasticsearch:8.11.1 # pin the version explicitly (v8)
    container_name: search
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "2.0" # Elasticsearch is fairly CPU- and memory-hungry
          memory: 4G
        reservations:
          cpus: "0.5"
          memory: 2G
    environment:
      TZ: "Asia/Bangkok"
      # --- Critical settings for single-node ---
      discovery.type: "single-node" # very important: run as a single node
      # --- Security (disabled for development) ---
      # xpack security is off so NestJS can connect directly (backend -> search:9200).
      # For real production, enable it and configure tokens/certificates.
      xpack.security.enabled: "false"
      # --- Performance tuning ---
      # Heap size (1 GB) sized against the 4 GB memory limit
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    networks:
      - lcbp3 # internal network (NPM proxies port 9200 from outside)
    volumes:
      - "/share/Container/search/data:/usr/share/elasticsearch/data" # data/indices volume
    healthcheck:
      # wait until cluster health is yellow or green
      test: ["CMD-SHELL", "curl -s http://localhost:9200/_cluster/health | grep -qE '\"status\":\"(green|yellow)\"'"]
      interval: 30s
      timeout: 10s
      retries: 5
```
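The healthcheck's grep can also be used interactively. A small helper that extracts the `status` field from the `_cluster/health` response using only sed (no jq); the `es_status` helper name is an addition, though the `status` field itself comes from the Elasticsearch health API:

```shell
# Print the cluster status (green/yellow/red) from ES health JSON on stdin.
es_status() {
  sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

# Example:
# curl -s http://localhost:9200/_cluster/health | es_status
```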
---

`specs/08-infrastructure/n8n_setting.md`

# Installing n8n in Docker

* n8n container user id:

* uid=1000(node) gid=1000(node) groups=1000(node)
## Setting Permissions

```bash
# For the n8n volumes
chown -R 1000:1000 /share/Container/n8n
chmod -R 755 /share/Container/n8n
```
## Docker file

```yml
# File: share/Container/n8n/docker-compose.yml
# DMS Container v1_4_1: separate service and folder; application name: n8n; service: n8n
x-restart: &restart_policy
  restart: unless-stopped

x-logging: &default_logging
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "5"

services:
  n8n:
    <<: [*restart_policy, *default_logging]
    image: n8nio/n8n:latest
    container_name: n8n
    stdin_open: true
    tty: true
    deploy:
      resources:
        limits:
          cpus: "1.5"
          memory: 2G
        reservations:
          cpus: "0.25"
          memory: 512M
    environment:
      TZ: "Asia/Bangkok"
      NODE_ENV: "production"
      # N8N_PATH: "/n8n/"
      N8N_PUBLIC_URL: "https://n8n.np-dms.work/"
      WEBHOOK_URL: "https://n8n.np-dms.work/"
      N8N_EDITOR_BASE_URL: "https://n8n.np-dms.work/"
      N8N_PROTOCOL: "https"
      N8N_HOST: "n8n.np-dms.work"
      N8N_PORT: 5678
      N8N_PROXY_HOPS: "1"
      N8N_DIAGNOSTICS_ENABLED: 'false'
      N8N_SECURE_COOKIE: 'true'
      N8N_ENCRYPTION_KEY: "9AAIB7Da9DW1qAhJE5/Bz4SnbQjeAngI"
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_USER: admin
      N8N_BASIC_AUTH_PASSWORD: "Center#2025"
      N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS: 'true'
      GENERIC_TIMEZONE: "Asia/Bangkok"
      DB_TYPE: mysqldb
      DB_MYSQLDB_DATABASE: "n8n"
      DB_MYSQLDB_USER: "center"
      DB_MYSQLDB_PASSWORD: "Center#2025"
      DB_MYSQLDB_HOST: "mariadb"
      DB_MYSQLDB_PORT: 3306

    ports:
      - "5678:5678"
    networks:
      lcbp3: {}
    volumes:
      - "/share/Container/n8n:/home/node/.n8n"
      - "/share/Container/n8n/cache:/home/node/.cache"
      - "/share/Container/n8n/scripts:/scripts"
      - "/share/Container/n8n/data:/data"
      - "/var/run/docker.sock:/var/run/docker.sock"

    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://127.0.0.1:5678/"]
      # test: ["CMD", "curl", "-f", "http://127.0.0.1:5678/ || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 30

networks:
  lcbp3:
    external: true
```
---

`specs/08-infrastructure/แผนผัง Network.md`

# **🗺️ Network Architecture & Firewall Diagram (LCBP3-DMS)**

This diagram shows the network segmentation (VLANs) and firewall rules (ACLs) for TP-Link Omada (ER7206/OC200), protecting the QNAP NAS and its Docker services.
## **1. Connection Flow Diagram**

```mermaid
graph TD
    direction TB

    subgraph Flow1 ["<b>External Connections (Public WAN)</b>"]
        User["External user (Internet)"]
    end

    subgraph Router ["<b>Router (ER7206)</b> - Gateway"]
        ER7206("<b>Port Forwarding</b><br/>TCP 80 -> 192.168.10.100:80<br/>TCP 443 -> 192.168.10.100:443")
    end

    User -- "Port 80/443 (HTTP/HTTPS)" --> ER7206

    subgraph VLANs ["<b>Internal Networks (VLANs & Firewall Rules)</b>"]
        direction LR

        subgraph VLAN10 ["<b>VLAN 10: Servers (DMZ)</b><br/>192.168.10.x"]
            QNAP["<b>QNAP NAS (192.168.10.100)</b>"]
        end

        subgraph VLAN20 ["<b>VLAN 20: Office</b><br/>192.168.20.x"]
            OfficePC["Staff PCs / Wi-Fi"]
        end

        subgraph VLAN30 ["<b>VLAN 30: Guests</b><br/>192.168.30.x"]
            GuestPC["Guest Wi-Fi"]
        end

        subgraph Firewall ["<b>Firewall ACLs (managed by OC200)</b>"]
            direction TB
            rule1("<b>Rule 1: DENY</b><br/>Guest (VLAN 30) -> All VLANs")
            rule2("<b>Rule 2: DENY</b><br/>Server (VLAN 10) -> Office (VLAN 20)")
            rule3("<b>Rule 3: ALLOW</b><br/>Office (VLAN 20) -> QNAP (192.168.10.100)<br/>Ports: 443, 80, 81, 2222")
        end

        %% --- Firewall rule edges ---
        GuestPC -. rule1 .-x QNAP
        QNAP -. rule2 .-x OfficePC
        OfficePC -->|"rule3: https://lcbp3.np-dms.work"| QNAP
    end

    %% --- Router to QNAP ---
    ER7206 --> QNAP

    subgraph Docker ["<b>Docker Network 'lcbp3' (inside QNAP)</b>"]
        direction TB

        subgraph PublicServices ["Services exposed via NPM"]
            direction LR
            NPM["<b>NPM (Nginx Proxy Manager)</b><br/>receives traffic from QNAP"]
            Frontend("frontend:3000")
            Backend("backend:3000")
            Gitea("gitea:3000")
            PMA("pma:80")
            N8N("n8n:5678")
        end

        subgraph InternalServices ["Internal services (backend only)"]
            direction LR
            DB("mariadb:3306")
            Cache("cache:6379")
            Search("search:9200")
        end

        %% --- Internal Docker connections ---
        NPM -- "lcbp3.np-dms.work" --> Frontend
        NPM -- "backend.np-dms.work" --> Backend
        NPM -- "git.np-dms.work" --> Gitea
        NPM -- "pma.np-dms.work" --> PMA
        NPM -- "n8n.np-dms.work" --> N8N

        Backend -- "lcbp3 network" --> DB
        Backend -- "lcbp3 network" --> Cache
        Backend -- "lcbp3 network" --> Search
    end

    %% --- QNAP to Docker ---
    QNAP --> NPM

    %% --- Styling ---
    classDef default fill:#f9f9f9,stroke:#333,stroke-width:2px;
    classDef router fill:#e6f7ff,stroke:#0056b3,stroke-width:2px;
    classDef vlan fill:#fffbe6,stroke:#d46b08,stroke-width:2px;
    classDef docker fill:#e6ffed,stroke:#096dd9,stroke-width:2px;
    classDef internal fill:#f0f0f0,stroke:#595959,stroke-width:2px,stroke-dasharray: 5 5;
    classDef fw fill:#fff0f0,stroke:#d9363e,stroke-width:2px,stroke-dasharray: 3 3;

    class Router,ER7206 router;
    class VLANs,VLAN10,VLAN20,VLAN30 vlan;
    class Docker,PublicServices,InternalServices docker;
    class DB,Cache,Search internal;
    class Firewall,rule1,rule2,rule3 fw;
```
## **2. Firewall ACL Summary (Omada OC200)**

These are the rules to create under Settings > Network Security > ACL (ordered top to bottom):

| Order | Name | Policy | Source | Destination | Ports |
| :---- | :---- | :---- | :---- | :---- | :---- |
| **1** | Isolate-Guests | **Deny** | Network -> VLAN 30 (Guests) | Network -> VLAN 1, 10, 20 | All |
| **2** | Isolate-Servers | **Deny** | Network -> VLAN 10 (Servers) | Network -> VLAN 20 (Office) | All |
| **3** | Block-Office-to-Mgmt | **Deny** | Network -> VLAN 20 (Office) | Network -> VLAN 1 (Mgmt) | All |
| **4** | Allow-Office-to-Services | **Allow** | Network -> VLAN 20 (Office) | IP Group -> QNAP_Services (192.168.10.100) | Port Group -> Web_Services (443, 80, 81, 2222) |
| **5** | (Default) | Allow | Any | Any | All |

## **3. Port Forwarding Summary (Omada ER7206)**

These are the rules to create under Settings > Transmission > Port Forwarding:

| Name | External Port | Internal IP | Internal Port | Protocol |
| :---- | :---- | :---- | :---- | :---- |
| Allow-NPM-HTTPS | 443 | 192.168.10.100 | 443 | TCP |
| Allow-NPM-HTTP | 80 | 192.168.10.100 | 80 | TCP |