docs: fix remaining warnings in SaaS design spec (round 2)
- Fix metadata.json shown as unencrypted in tree (now .enc)
- Fix admin bypass order in checkPlanLimits (moved before status check)
- Add PM2 cross-worker cache invalidation via process messaging
- Fix fiel_credentials "no changes" contradiction with per-component IV
- Backup all tenant DBs regardless of active status

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@@ -186,16 +186,16 @@ When a client uploads their FIEL (.cer + .key + password):
 ├── CAS2408138W2/
 │   ├── certificate.cer.enc    ← AES-256-GCM encrypted
 │   ├── private_key.key.enc    ← AES-256-GCM encrypted
-│   └── metadata.json          ← serial, validity dates, upload date
+│   └── metadata.json.enc      ← serial, validity dates, upload date (also encrypted)
 └── ROEM691011EZ4/
     ├── certificate.cer.enc
     ├── private_key.key.enc
-    └── metadata.json
+    └── metadata.json.enc
 ```
 
 **B. Central DB (`fiel_credentials` table):**
 - Existing structure: `cer_data`, `key_data`, `key_password_encrypted`, `encryption_iv`, `encryption_tag`
-- No changes needed to the table structure
+- **Schema change required:** Add per-component IV/tag columns (`cer_iv`, `cer_tag`, `key_iv`, `key_tag`, `password_iv`, `password_tag`) to support independent encryption per component. Alternatively, use a single JSON column for all encryption metadata. The existing `encryption_iv` and `encryption_tag` columns can be dropped after migration.
 
 ### Encryption
@@ -554,7 +554,7 @@ mkdir -p $DAILY_DIR $WEEKLY_DIR
 pg_dump -h localhost -U postgres horux360 | gzip > $DAILY_DIR/horux360_$DATE.sql.gz
 
 # Backup each tenant DB
-for db in $(psql -h localhost -U postgres -t -c "SELECT database_name FROM tenants WHERE active = true" horux360); do
+for db in $(psql -h localhost -U postgres -t -c "SELECT database_name FROM tenants WHERE database_name IS NOT NULL" horux360); do
   db_trimmed=$(echo $db | xargs)  # trim whitespace
   pg_dump -h localhost -U postgres "$db_trimmed" | gzip > $DAILY_DIR/${db_trimmed}_${DATE}.sql.gz
 done
@@ -618,6 +618,12 @@ async function checkPlanLimits(req, res, next) {
   const tenant = await getTenantWithCache(req.user.tenantId); // cached 5 min
   const subscription = await getActiveSubscription(tenant.id);
 
+  // Admin-impersonated requests bypass subscription check
+  // (admin needs to complete client setup regardless of payment status)
+  if (req.headers['x-view-tenant'] && req.user.role === 'admin') {
+    return next();
+  }
+
   // Allowed statuses: 'authorized' (paid) or 'pending' (grace period for new clients)
   const allowedStatuses = ['authorized', 'pending'];
@@ -631,12 +637,6 @@ async function checkPlanLimits(req, res, next) {
     }
   }
 
-  // Admin-impersonated requests bypass subscription check
-  // (admin needs to complete client setup regardless of payment status)
-  if (req.headers['x-view-tenant'] && req.user.role === 'admin') {
-    return next();
-  }
-
   next();
 }
 ```
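Assembled from the diff fragments, the reordered middleware would read roughly as below. The stubbed helpers and the 402 error shape are placeholders so the sketch runs standalone; the spec's real `getTenantWithCache`/`getActiveSubscription` hit the cache and DB:

```javascript
// Stub helpers standing in for the real cached DB lookups.
const getTenantWithCache = async (tenantId) => ({ id: tenantId });
const getActiveSubscription = async (tenantId) => ({ status: 'pending' });

async function checkPlanLimits(req, res, next) {
  const tenant = await getTenantWithCache(req.user.tenantId); // cached 5 min
  const subscription = await getActiveSubscription(tenant.id);

  // Admin bypass comes FIRST, so an unpaid subscription never blocks
  // admin-impersonated setup requests.
  if (req.headers['x-view-tenant'] && req.user.role === 'admin') {
    return next();
  }

  // 'authorized' (paid) or 'pending' (grace period for new clients)
  const allowedStatuses = ['authorized', 'pending'];
  if (!subscription || !allowedStatuses.includes(subscription.status)) {
    return res.status(402).json({ error: 'subscription_inactive' }); // placeholder shape
  }

  next();
}
```

The ordering matters: with the bypass after the status check (the pre-fix layout), an admin impersonating an unpaid tenant would be rejected before the bypass ever ran.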
@@ -733,6 +733,8 @@ Pages also show an "upgrade" message if accessed directly via URL without the re
 Plan checks and CFDI counts are cached in-memory with 5-minute TTL to avoid database queries on every request.
 
+**Cache invalidation across PM2 workers:** Since each PM2 cluster worker has its own in-memory cache, subscription status changes (via webhook) must invalidate the cache in all workers. The webhook handler writes the status to the DB, then sends a `process.send()` message to the PM2 master, which broadcasts to all workers to invalidate the specific tenant's cache entry. This ensures all workers reflect subscription changes within seconds, not minutes.
+
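The worker side of that scheme could look like the sketch below; the cache shape, `invalidateTenant`, and the `cache:invalidate` topic are assumptions. PM2's `pm2.sendDataToProcessId(pm_id, { type: 'process:msg', topic, data }, cb)` delivers such packets to the target process as ordinary `'message'` events, so a webhook handler can enumerate workers with `pm2.list()` and hit each one:

```javascript
// Worker-side sketch: per-worker cache plus a listener that drops one
// tenant's entry when an invalidation packet arrives. Names illustrative.
const TTL_MS = 5 * 60 * 1000; // matches the 5-minute TTL above
const tenantCache = new Map(); // tenantId -> { value, expiresAt }

function setCached(tenantId, value) {
  tenantCache.set(tenantId, { value, expiresAt: Date.now() + TTL_MS });
}

function getCached(tenantId) {
  const hit = tenantCache.get(tenantId);
  return hit && hit.expiresAt > Date.now() ? hit.value : undefined;
}

function invalidateTenant(tenantId) {
  tenantCache.delete(tenantId);
}

// PM2 delivers sendDataToProcessId() packets as 'message' events.
process.on('message', (packet) => {
  if (packet && packet.topic === 'cache:invalidate') {
    invalidateTenant(packet.data.tenantId);
  }
});
```

A stale entry also ages out via the TTL regardless, so a lost invalidation message degrades to the old 5-minute behavior rather than serving stale data indefinitely.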
---
## Architecture Diagram