Horux360 SaaS Transformation — Design Spec
Date: 2026-03-15 Status: Approved Author: Carlos Horux + Claude
Overview
Transform Horux360 from an internal multi-tenant accounting tool into a production-ready SaaS platform. Client registration remains manual (sales-led). Each client gets a fully isolated PostgreSQL database. Payments via MercadoPago. Transactional emails via Gmail SMTP (@horuxfin.com). Production deployment on existing server (192.168.10.212).
Target scale: 10-50 clients within 6 months.
Starting from scratch: No data migration. Existing schemas/data will be archived. Fresh setup.
Section 1: Database-Per-Tenant Architecture
Rationale
Clients sign NDAs requiring complete data isolation. Schema-per-tenant (current approach) shares a single database. Database-per-tenant provides:
- Independent backup/restore per client
- No risk of cross-tenant data leakage
- Each DB can be moved to a different server if needed
Structure
PostgreSQL Server (max_connections: 300)
├── horux360 ← Central DB (Prisma-managed)
├── horux_cas2408138w2 ← Client DB (raw SQL)
├── horux_roem691011ez4 ← Client DB
└── ...
Central DB (horux360) — Prisma-managed tables
Existing tables (modified):
- tenants — add database_name column, remove schema_name
- users — no changes
- refresh_tokens — flush all existing tokens at migration cutover (invalidates all sessions)
- fiel_credentials — schema change: add per-component IV/tag columns (see Section 2)
New tables:
- subscriptions — MercadoPago subscription tracking
- payments — payment history
Prisma schema migration
The Prisma schema (apps/api/prisma/schema.prisma) must be updated:
- Replace schema_name String @unique @map("schema_name") with database_name String @unique @map("database_name") on the Tenant model
- Add Subscription and Payment models
- Run prisma migrate dev to generate and apply the migration
- Update the Tenant type in packages/shared/src/types/tenant.ts: replace schemaName with databaseName
JWT payload migration
The current JWT payload embeds schemaName. This must change:
- Update JWTPayload in packages/shared/src/types/auth.ts: replace schemaName with databaseName
- Update token generation in auth.service.ts: read tenant.databaseName instead of tenant.schemaName
- Update the refreshTokens function to embed databaseName
- At migration cutover: flush the refresh_tokens table to invalidate all existing sessions (forces re-login)
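A minimal sketch of the updated payload and token-building step. Only databaseName comes from the spec; the other fields and the buildPayload helper are illustrative assumptions.

```typescript
// Sketch of the updated JWTPayload (packages/shared/src/types/auth.ts).
// Fields other than databaseName are assumptions about the existing shape.
interface JWTPayload {
  userId: string;
  tenantId: string;
  role: 'admin' | 'user';
  databaseName: string; // was: schemaName
}

// Token generation now reads tenant.databaseName instead of tenant.schemaName:
function buildPayload(
  user: { id: string; role: 'admin' | 'user' },
  tenant: { id: string; databaseName: string }
): JWTPayload {
  return {
    userId: user.id,
    tenantId: tenant.id,
    role: user.role,
    databaseName: tenant.databaseName,
  };
}
```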
Client DB naming
Formula: horux_<rfc_normalized>
RFC "CAS2408138W2" → horux_cas2408138w2
RFC "TPR840604D98" → horux_tpr840604d98
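The formula can be sketched as a small helper. The validation regex is an assumption (standard 12-13 character Mexican RFC); RFCs containing Ñ or & would need extra mapping before use as a PostgreSQL database name.

```typescript
// Normalize an RFC into its tenant database name: horux_<rfc_lowercase>.
// The RFC shape check is an assumption, not part of the spec.
function tenantDatabaseName(rfc: string): string {
  const normalized = rfc.trim().toUpperCase();
  if (!/^[A-ZÑ&]{3,4}\d{6}[A-Z0-9]{3}$/.test(normalized)) {
    throw new Error(`Invalid RFC: ${rfc}`);
  }
  return `horux_${normalized.toLowerCase()}`;
}
```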
Client DB tables (created via raw SQL)
Each client database contains these tables (no schema prefix, direct public schema):
- cfdis — indexes: fecha_emision DESC, tipo, rfc_emisor, rfc_receptor, pg_trgm on nombre_emisor/nombre_receptor, unique on uuid_fiscal
- iva_mensual
- isr_mensual
- alertas
- calendario_fiscal
TenantConnectionManager
class TenantConnectionManager {
private pools: Map<string, { pool: pg.Pool; lastAccess: Date }>;
private cleanupInterval: NodeJS.Timeout; // NodeJS.Timer is deprecated
// Get or create a pool for a tenant
getPool(tenantId: string, databaseName: string): pg.Pool;
// Create a new tenant database with all tables and indexes
provisionDatabase(rfc: string): Promise<string>;
// Drop a tenant database (soft-delete: rename to horux_deleted_<rfc>_<timestamp>)
deprovisionDatabase(databaseName: string): Promise<void>;
// Cleanup idle pools (called every 60s, removes pools idle > 5min)
private cleanupIdlePools(): void;
}
Pool configuration per tenant:
- max: 3 connections (with 2 PM2 cluster instances this means 6 connections/tenant max; at 50 tenants = 300, matching max_connections)
- idleTimeoutMillis: 300000 (5 min)
- connectionTimeoutMillis: 10000 (10 sec)
Note on PM2 cluster mode: Each PM2 worker is a separate Node.js process with its own TenantConnectionManager instance. With instances: 2 and max: 3 per pool, worst case is 50 tenants × 3 connections × 2 workers = 300 connections, which matches max_connections = 300. If scaling beyond 50 tenants, either increase max_connections or reduce pool max to 2.
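The idle-pool eviction described above can be sketched as follows. `Closable` stands in for pg.Pool so the example is self-contained; the class and method names are assumptions, but the timings match the spec (sweep every 60s, evict pools idle longer than 5 minutes).

```typescript
// Minimal sketch of cleanupIdlePools(). Pools are tracked with their last
// access time; a periodic sweep ends and removes any pool idle > 5 min.
interface Closable { end(): void }

class IdlePoolReaper<P extends Closable> {
  private pools = new Map<string, { pool: P; lastAccess: number }>();

  // Called from getPool() on every access to refresh lastAccess.
  touch(tenantId: string, pool: P): void {
    this.pools.set(tenantId, { pool, lastAccess: Date.now() });
  }

  // Wired up as: setInterval(() => reaper.cleanup(), 60_000)
  cleanup(now: number = Date.now(), maxIdleMs = 5 * 60_000): string[] {
    const evicted: string[] = [];
    for (const [tenantId, entry] of this.pools) {
      if (now - entry.lastAccess > maxIdleMs) {
        entry.pool.end();             // close all sockets held by this pool
        this.pools.delete(tenantId);
        evicted.push(tenantId);
      }
    }
    return evicted;
  }
}
```

Evicting idle pools is what keeps the 300-connection budget honest: only actively used tenants hold connections between sweeps.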
Tenant middleware change
Current: Sets search_path on a shared connection.
New: Returns a dedicated pool connected to the tenant's own database.
// Before
req.tenantSchema = schema;
await pool.query(`SET search_path TO "${schema}", public`);
// After
req.tenantPool = tenantConnectionManager.getPool(tenant.id, tenant.databaseName);
All tenant service functions change from using a shared pool with schema prefix to using req.tenantPool with direct table names.
Admin impersonation (X-View-Tenant)
The current X-View-Tenant header support for admin "view-as" functionality is preserved. The new middleware resolves the databaseName for the viewed tenant:
// If admin is viewing another tenant
if (req.headers['x-view-tenant'] && req.user.role === 'admin') {
const viewedTenant = await getTenantByRfc(req.headers['x-view-tenant']);
req.tenantPool = tenantConnectionManager.getPool(viewedTenant.id, viewedTenant.databaseName);
} else {
req.tenantPool = tenantConnectionManager.getPool(tenant.id, tenant.databaseName);
}
Provisioning flow (new client)
1. Admin creates tenant via UI → POST /api/tenants/
2. Insert record in horux360.tenants with database_name
3. Execute CREATE DATABASE horux_<rfc>
4. Connect to new DB, create all tables + indexes
5. Create admin user in horux360.users linked to tenant
6. Send welcome email with temporary credentials
7. Generate MercadoPago subscription link
Rollback on partial failure: if any of steps 3-7 fails:
- Drop the created database if it exists (DROP DATABASE IF EXISTS horux_<rfc>)
- Delete the tenants row
- Delete the users row if created
- Return an error to the admin identifying the step that failed
The entire provisioning is wrapped in a try/catch with explicit cleanup.
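The try/catch-with-cleanup shape can be sketched like this. The step functions are injected so the flow is testable in isolation; in the real service they would call pg/Prisma. All function and step names here are assumptions.

```typescript
// Sketch of provisioning with explicit rollback. Each injected function
// corresponds to one step of the flow above.
interface ProvisionSteps {
  insertTenant(): Promise<string>;     // step 2, returns tenant id
  createDatabase(): Promise<void>;     // step 3
  createTables(): Promise<void>;       // step 4
  createAdminUser(): Promise<string>;  // step 5, returns user id
  sendWelcomeEmail(): Promise<void>;   // step 6
  createPaymentLink(): Promise<void>;  // step 7
  dropDatabase(): Promise<void>;       // DROP DATABASE IF EXISTS ...
  deleteTenant(id: string): Promise<void>;
  deleteUser(id: string): Promise<void>;
}

async function provisionTenant(s: ProvisionSteps): Promise<void> {
  const tenantId = await s.insertTenant();
  let userId: string | undefined;
  let step = 'create_database';
  try {
    await s.createDatabase();
    step = 'create_tables';  await s.createTables();
    step = 'create_admin';   userId = await s.createAdminUser();
    step = 'welcome_email';  await s.sendWelcomeEmail();
    step = 'payment_link';   await s.createPaymentLink();
  } catch (err) {
    // Explicit cleanup, mirroring the rollback list above
    await s.dropDatabase();
    if (userId) await s.deleteUser(userId);
    await s.deleteTenant(tenantId);
    throw new Error(`Provisioning failed at step "${step}": ${(err as Error).message}`);
  }
}
```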
PostgreSQL tuning
max_connections = 300
shared_buffers = 4GB
work_mem = 16MB
effective_cache_size = 16GB
maintenance_work_mem = 512MB
Server disk
Expand from 29 GB to 100 GB to accommodate:
- 25-50 client databases (~2-3 GB total)
- Daily backups with 7-day retention (~15 GB)
- FIEL encrypted files (<100 MB)
- Logs, builds, OS (~10 GB)
Section 2: SAT Credential Storage (FIEL)
Dual storage strategy
When a client uploads their FIEL (.cer + .key + password):
A. Filesystem (for manual linking):
/var/horux/fiel/
├── CAS2408138W2/
│ ├── certificate.cer.enc ← AES-256-GCM encrypted
│ ├── private_key.key.enc ← AES-256-GCM encrypted
│ └── metadata.json.enc ← serial, validity dates, upload date (also encrypted)
└── ROEM691011EZ4/
├── certificate.cer.enc
├── private_key.key.enc
└── metadata.json.enc
B. Central DB (fiel_credentials table):
- Existing structure: cer_data, key_data, key_password_encrypted, encryption_iv, encryption_tag
- Schema change required: add per-component IV/tag columns (cer_iv, cer_tag, key_iv, key_tag, password_iv, password_tag) to support independent encryption per component. Alternatively, use a single JSON column for all encryption metadata. The existing encryption_iv and encryption_tag columns can be dropped after migration.
Encryption
- Algorithm: AES-256-GCM
- Key: FIEL_ENCRYPTION_KEY environment variable (separate from other secrets)
- Code change required: sat-crypto.service.ts currently derives the key from JWT_SECRET via createHash('sha256').update(env.JWT_SECRET).digest(). It must instead read FIEL_ENCRYPTION_KEY, and the env.ts Zod schema must declare FIEL_ENCRYPTION_KEY as required.
- Each component (certificate, private key, password) is encrypted separately with its own IV and auth tag; the filesystem likewise stores each file independently encrypted.
- Code change required: the current sat-crypto.service.ts shares a single IV/tag across all three components. Refactor to encrypt each component independently and store per-component IV/tags in fiel_credentials (the new cer_iv/cer_tag, key_iv/key_tag, password_iv/password_tag columns, or a single JSON column).
- The password is encrypted, never stored in plaintext.
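The per-component scheme can be sketched with Node's built-in crypto module. This is a sketch, not the actual sat-crypto.service.ts; it assumes FIEL_ENCRYPTION_KEY decodes to 32 bytes.

```typescript
// Per-component AES-256-GCM: each of the three FIEL components gets a
// fresh random IV and its own auth tag.
import { randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

interface Encrypted { data: Buffer; iv: Buffer; tag: Buffer }

function encryptComponent(key: Buffer, plaintext: Buffer): Encrypted {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const data = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { data, iv, tag: cipher.getAuthTag() };
}

function decryptComponent(key: Buffer, enc: Encrypted): Buffer {
  const decipher = createDecipheriv('aes-256-gcm', key, enc.iv);
  decipher.setAuthTag(enc.tag); // GCM verifies integrity on final()
  return Buffer.concat([decipher.update(enc.data), decipher.final()]);
}

// Encrypt all three components independently (never a shared IV/tag):
function encryptFiel(key: Buffer, cer: Buffer, keyFile: Buffer, password: string) {
  return {
    cer: encryptComponent(key, cer),
    key: encryptComponent(key, keyFile),
    password: encryptComponent(key, Buffer.from(password, 'utf8')),
  };
}
```

Reusing one IV across components under the same key is what makes the current shared-IV code unsafe for GCM; generating a fresh IV per component is the core of the refactor.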
Manual decryption CLI
node scripts/decrypt-fiel.js --rfc CAS2408138W2
- Decrypts files to /tmp/horux-fiel-<rfc>/
- Files auto-delete after 30 minutes (via setTimeout or tmpwatch)
- Requires SSH access to server
Security
- /var/horux/fiel/ permissions: 700 (root only)
- Encrypted files are useless without FIEL_ENCRYPTION_KEY
- metadata.json is also encrypted (it contains the serial number and RFC, which could be used to query SAT's certificate validation service, violating NDA confidentiality requirements)
Upload flow
- Client navigates to /configuracion/sat
- Uploads the .cer + .key files and enters the password
- Encrypts and stores in both filesystem and database
- Sends notification email to admin team: "Cliente X subió su FIEL"
Section 3: Payment System (MercadoPago)
Integration approach
Using MercadoPago's Preapproval (Subscription) API for recurring payments.
New tables in central DB
CREATE TABLE subscriptions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
plan VARCHAR(20) NOT NULL,
mp_preapproval_id VARCHAR(100),
status VARCHAR(20) NOT NULL DEFAULT 'pending',
-- status: pending | authorized | paused | cancelled
amount DECIMAL(10,2) NOT NULL,
frequency VARCHAR(10) NOT NULL DEFAULT 'monthly',
-- frequency: monthly | yearly
current_period_start TIMESTAMP,
current_period_end TIMESTAMP,
created_at TIMESTAMP NOT NULL DEFAULT NOW(),
updated_at TIMESTAMP NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_subscriptions_tenant_id ON subscriptions(tenant_id);
CREATE INDEX idx_subscriptions_status ON subscriptions(status);
CREATE TABLE payments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES tenants(id),
subscription_id UUID REFERENCES subscriptions(id),
mp_payment_id VARCHAR(100),
amount DECIMAL(10,2) NOT NULL,
status VARCHAR(20) NOT NULL DEFAULT 'pending',
-- status: approved | pending | rejected | refunded
payment_method VARCHAR(50),
paid_at TIMESTAMP,
created_at TIMESTAMP NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_payments_tenant_id ON payments(tenant_id);
CREATE INDEX idx_payments_subscription_id ON payments(subscription_id);
Plans and pricing
Defined in packages/shared/src/constants/plans.ts (update existing):
| Plan | Monthly price (MXN) | CFDIs | Users | Features |
|---|---|---|---|---|
| starter | Configurable | 100 | 1 | dashboard, cfdi_basic, iva_isr |
| business | Configurable | 500 | 3 | + reportes, alertas, calendario |
| professional | Configurable | 2,000 | 10 | + xml_sat, conciliacion, forecasting |
| enterprise | Configurable | Unlimited | Unlimited | + api, multi_empresa |
Prices are configured from admin panel, not hardcoded.
Subscription flow
- Admin creates tenant and assigns plan
- Admin clicks "Generate payment link" → API creates MercadoPago Preapproval
- Link is sent to client via email
- Client pays → MercadoPago sends webhook
- System activates subscription, records payment
Webhook endpoint
POST /api/webhooks/mercadopago (public, no auth)
Validates webhook signature using x-signature header and x-request-id.
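The signature check can be sketched as below. The manifest template (id:<data.id>;request-id:<x-request-id>;ts:<ts>;) follows MercadoPago's documented scheme at the time of writing; verify it against the current docs before shipping. The function name and options shape are assumptions.

```typescript
// Validate MercadoPago's x-signature header: "ts=<unix>,v1=<hmac-sha256-hex>".
// The HMAC is computed over a manifest built from data.id, x-request-id and ts.
import { createHmac, timingSafeEqual } from 'node:crypto';

function verifyMpSignature(opts: {
  xSignature: string; // e.g. "ts=1704908010,v1=618c85..."
  xRequestId: string;
  dataId: string;     // the data.id query parameter
  secret: string;     // MP_WEBHOOK_SECRET
}): boolean {
  const parts = Object.fromEntries(
    opts.xSignature.split(',').map(p => p.trim().split('=') as [string, string])
  );
  if (!parts.ts || !parts.v1) return false;
  const manifest = `id:${opts.dataId};request-id:${opts.xRequestId};ts:${parts.ts};`;
  const expected = createHmac('sha256', opts.secret).update(manifest).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(parts.v1);
  // Constant-time comparison to avoid leaking the expected HMAC byte-by-byte
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Requests failing this check should be answered with 401 and never touch the database.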
Events handled:
- payment → query the MercadoPago API for payment details → insert into payments, update the subscription period
- subscription_preapproval → update subscription status (authorized, paused, cancelled)
On payment failure or subscription cancellation:
- Mark tenant active = false
- Client gets read-only access (can view data but not upload CFDIs, generate reports, etc.)
Admin panel additions
- View subscription status per client (active, amount, next billing date)
- Generate payment link button
- "Mark as paid manually" button (for bank transfer payments)
- Payment history per client
Client panel additions
- New section in /configuracion: "Mi suscripción"
- Shows: current plan, next billing date, payment history
- Client cannot change plan themselves (admin does it)
Environment variables
MP_ACCESS_TOKEN=<mercadopago_access_token>
MP_WEBHOOK_SECRET=<webhook_signature_secret>
MP_NOTIFICATION_URL=https://horux360.consultoria-as.com/api/webhooks/mercadopago
Section 4: Transactional Emails
Transport
Nodemailer with Gmail SMTP (Google Workspace).
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=<user>@horuxfin.com
SMTP_PASS=<google_app_password>
SMTP_FROM=Horux360 <noreply@horuxfin.com>
Requires generating an App Password in Google Workspace admin.
Email types
| Event | Recipient | Subject |
|---|---|---|
| Client registered | Client | Bienvenido a Horux360 |
| FIEL uploaded | Admin team | [Cliente] subió su FIEL |
| Payment received | Client | Confirmación de pago - Horux360 |
| Payment failed | Client + Admin | Problema con tu pago - Horux360 |
| Subscription expiring | Client | Tu suscripción vence en 5 días |
| Subscription cancelled | Client + Admin | Suscripción cancelada - Horux360 |
Template approach
HTML templates as TypeScript template literal functions. No external template engine.
// services/email/templates/welcome.ts
export function welcomeEmail(data: { nombre: string; email: string; tempPassword: string; loginUrl: string }): string {
return `<!DOCTYPE html>...`;
}
Each template:
- Responsive HTML email (inline CSS)
- Horux360 branding (logo, colors)
- Plain text fallback
Email service
class EmailService {
sendWelcome(to: string, data: WelcomeData): Promise<void>;
sendFielNotification(data: FielNotificationData): Promise<void>;
sendPaymentConfirmation(to: string, data: PaymentData): Promise<void>;
sendPaymentFailed(to: string, data: PaymentData): Promise<void>;
sendSubscriptionExpiring(to: string, data: SubscriptionData): Promise<void>;
sendSubscriptionCancelled(to: string, data: SubscriptionData): Promise<void>;
}
Limits
Gmail Workspace: 500 emails/day. Expected volume for 25 clients: ~50-100 emails/month. Well within limits.
Section 5: Production Deployment
Build pipeline
API:
cd apps/api && pnpm build # tsc → dist/
pnpm start # node dist/index.js
Web:
cd apps/web && pnpm build # next build → .next/
pnpm start # next start (optimized server)
PM2 configuration
// ecosystem.config.js
module.exports = {
apps: [
{
name: 'horux-api',
script: 'dist/index.js',
cwd: '/root/Horux/apps/api',
instances: 2,
exec_mode: 'cluster',
env: { NODE_ENV: 'production' }
},
{
name: 'horux-web',
script: 'node_modules/.bin/next',
args: 'start',
cwd: '/root/Horux/apps/web',
instances: 1,
exec_mode: 'fork',
env: { NODE_ENV: 'production' }
}
]
};
Auto-restart on crash. Log rotation via pm2-logrotate.
Nginx reverse proxy
# Rate limiting zone definitions (in http block of nginx.conf)
limit_req_zone $binary_remote_addr zone=auth:10m rate=5r/m;
limit_req_zone $binary_remote_addr zone=webhooks:10m rate=30r/m;
server {
listen 80;
server_name horux360.consultoria-as.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name horux360.consultoria-as.com;
ssl_certificate /etc/letsencrypt/live/horux360.consultoria-as.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/horux360.consultoria-as.com/privkey.pem;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
# Gzip
gzip on;
gzip_types text/plain application/json application/javascript text/css;
# Health check (for monitoring)
location /api/health {
proxy_pass http://127.0.0.1:4000;
}
# Rate limiting for public endpoints
location /api/auth/ {
limit_req zone=auth burst=5 nodelay;
proxy_pass http://127.0.0.1:4000;
}
location /api/webhooks/ {
limit_req zone=webhooks burst=10 nodelay;
proxy_pass http://127.0.0.1:4000;
}
# API
location /api/ {
proxy_pass http://127.0.0.1:4000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
client_max_body_size 200M; # Bulk XML uploads (200MB is enough for ~50k XML files)
}
# Next.js
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Health check endpoint
The existing GET /health endpoint returns { status: 'ok', timestamp }. Since Nginx proxies the path unchanged, it must be reachable as /api/health in production (i.e., mounted under the /api prefix) to match the Nginx location above. PM2 uses this for liveness checks. Nginx can optionally use it for upstream health monitoring.
SSL
Let's Encrypt with certbot. Auto-renewal via cron.
certbot --nginx -d horux360.consultoria-as.com
Firewall
ufw allow 22/tcp # SSH
ufw allow 80/tcp # HTTP (redirect to HTTPS)
ufw allow 443/tcp # HTTPS
ufw enable
PostgreSQL only on localhost (no external access).
Backups
Cron job at 1:00 AM daily (runs before SAT cron at 3:00 AM, with enough gap to complete):
Authentication: Create a .pgpass file at /root/.pgpass with localhost:5432:*:postgres:<password> and chmod 600. This allows pg_dump to authenticate without inline passwords.
#!/bin/bash
# /var/horux/scripts/backup.sh
set -euo pipefail
BACKUP_DIR=/var/horux/backups
DATE=$(date +%Y-%m-%d)
DOW=$(date +%u) # Day of week: 1=Monday, 7=Sunday
DAILY_DIR=$BACKUP_DIR/daily
WEEKLY_DIR=$BACKUP_DIR/weekly
mkdir -p $DAILY_DIR $WEEKLY_DIR
# Backup central DB
pg_dump -h localhost -U postgres horux360 | gzip > $DAILY_DIR/horux360_$DATE.sql.gz
# Backup each tenant DB
for db in $(psql -h localhost -U postgres -t -c "SELECT database_name FROM tenants WHERE database_name IS NOT NULL" horux360); do
db_trimmed=$(echo $db | xargs) # trim whitespace
pg_dump -h localhost -U postgres "$db_trimmed" | gzip > $DAILY_DIR/${db_trimmed}_${DATE}.sql.gz
done
# On Sundays, copy to weekly directory
if [ "$DOW" -eq 7 ]; then
cp $DAILY_DIR/*_${DATE}.sql.gz $WEEKLY_DIR/
fi
# Remove daily backups older than 7 days
find $DAILY_DIR -name "*.sql.gz" -mtime +7 -delete
# Remove weekly backups older than 28 days
find $WEEKLY_DIR -name "*.sql.gz" -mtime +28 -delete
# Verify backup files are not empty (catch silent pg_dump failures)
for f in $DAILY_DIR/*_${DATE}.sql.gz; do
if [ ! -s "$f" ]; then
echo "WARNING: Empty backup file: $f" >&2
fi
done
Schedule separation: Backups run at 1:00 AM, SAT cron runs at 3:00 AM. With 50 clients, backup should complete in ~15-30 minutes, leaving ample gap before SAT sync starts.
Environment variables (production)
NODE_ENV=production
PORT=4000
DATABASE_URL=postgresql://postgres:<strong_password>@localhost:5432/horux360?schema=public
JWT_SECRET=<cryptographically_secure_random_64_chars>
JWT_EXPIRES_IN=24h
JWT_REFRESH_EXPIRES_IN=30d
CORS_ORIGIN=https://horux360.consultoria-as.com
FIEL_ENCRYPTION_KEY=<separate_32_byte_hex_key>
MP_ACCESS_TOKEN=<mercadopago_production_token>
MP_WEBHOOK_SECRET=<webhook_secret>
MP_NOTIFICATION_URL=https://horux360.consultoria-as.com/api/webhooks/mercadopago
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=<user>@horuxfin.com
SMTP_PASS=<google_app_password>
SMTP_FROM=Horux360 <noreply@horuxfin.com>
ADMIN_EMAIL=admin@horuxfin.com
SAT cron
Already implemented. Runs at 3:00 AM when NODE_ENV=production. Will activate automatically with the environment change.
Section 6: Plan Enforcement & Feature Gating
Enforcement middleware
// middleware: checkPlanLimits
async function checkPlanLimits(req, res, next) {
// Admin-impersonated requests bypass the subscription check entirely,
// before any lookups (admin needs to complete client setup regardless
// of payment status)
if (req.headers['x-view-tenant'] && req.user.role === 'admin') {
return next();
}
const tenant = await getTenantWithCache(req.user.tenantId); // cached 5 min
const subscription = await getActiveSubscription(tenant.id);
// Allowed statuses: 'authorized' (paid) or 'pending' (grace period for new clients)
const allowedStatuses = ['authorized', 'pending'];
// Check subscription status
if (!subscription || !allowedStatuses.includes(subscription.status)) {
// Allow read-only access for cancelled/paused subscriptions
if (req.method !== 'GET') {
return res.status(403).json({
message: 'Suscripción inactiva. Contacta soporte para reactivar.'
});
}
}
next();
}
Grace period: New clients start with status: 'pending' and have full write access (can upload FIEL, upload CFDIs, etc.). Once the subscription moves to 'cancelled' or 'paused' (e.g., failed payment), write access is revoked. Admin can also manually set status to 'authorized' for clients who pay by bank transfer.
CFDI limit check
Applied on POST /api/cfdi/ and POST /api/cfdi/bulk:
async function checkCfdiLimit(req, res, next) {
const tenant = await getTenantWithCache(req.user.tenantId);
if (tenant.cfdiLimit === -1) return next(); // unlimited
const currentCount = await getCfdiCountWithCache(req.tenantPool); // cached 5 min
const newCount = Array.isArray(req.body) ? req.body.length : 1;
if (currentCount + newCount > tenant.cfdiLimit) {
return res.status(403).json({
message: `Límite de CFDIs alcanzado (${currentCount}/${tenant.cfdiLimit}). Contacta soporte para upgrade.`
});
}
next();
}
User limit check
Applied on POST /api/usuarios/invite (already partially exists):
const userCount = await getUserCountForTenant(tenantId);
if (userCount >= tenant.usersLimit && tenant.usersLimit !== -1) {
return res.status(403).json({
message: `Límite de usuarios alcanzado (${userCount}/${tenant.usersLimit}).`
});
}
Feature gating
Applied per route using the existing hasFeature() function from shared:
function requireFeature(feature: string) {
return async (req, res, next) => {
const tenant = await getTenantWithCache(req.user.tenantId);
if (!hasFeature(tenant.plan, feature)) {
return res.status(403).json({
message: 'Tu plan no incluye esta función. Contacta soporte para upgrade.'
});
}
next();
};
}
// Usage in routes:
router.get('/reportes', authenticate, requireFeature('reportes'), reportesController);
router.get('/alertas', authenticate, requireFeature('alertas'), alertasController);
Feature matrix
| Feature key | Starter | Business | Professional | Enterprise |
|---|---|---|---|---|
| dashboard | Yes | Yes | Yes | Yes |
| cfdi_basic | Yes | Yes | Yes | Yes |
| iva_isr | Yes | Yes | Yes | Yes |
| reportes | No | Yes | Yes | Yes |
| alertas | No | Yes | Yes | Yes |
| calendario | No | Yes | Yes | Yes |
| xml_sat | No | No | Yes | Yes |
| conciliacion | No | No | Yes | Yes |
| forecasting | No | No | Yes | Yes |
| multi_empresa | No | No | No | Yes |
| api_externa | No | No | No | Yes |
Frontend feature gating
The sidebar/navigation hides menu items based on plan:
const tenant = useTenantInfo(); // new hook
const menuItems = allMenuItems.filter(item =>
!item.requiredFeature || hasFeature(tenant.plan, item.requiredFeature)
);
Pages also show an "upgrade" message if accessed directly via URL without the required plan.
Caching
Plan checks and CFDI counts are cached in-memory with 5-minute TTL to avoid database queries on every request.
Cache invalidation across PM2 workers: Since each PM2 cluster worker has its own in-memory cache, subscription status changes (via webhook) must invalidate the cache in all workers. The webhook handler writes the status to the DB, then sends a process.send() message to the PM2 master which broadcasts to all workers to invalidate the specific tenant's cache entry. This ensures all workers reflect subscription changes within seconds, not minutes.
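The worker side of this can be sketched as below. Note an assumption: PM2 does not broadcast process.send() natively, so the webhook worker would fan the message out via the pm2 API (e.g., pm2.list plus pm2.sendDataToProcessId) or a small helper; the message shape and class names here are illustrative.

```typescript
// In-memory tenant cache with 5-minute TTL, plus the 'message' handler each
// worker registers so a broadcast invalidates its local entry.
type CacheEntry<T> = { value: T; expiresAt: number };

class TenantCache<T> {
  private entries = new Map<string, CacheEntry<T>>();
  constructor(private ttlMs = 5 * 60_000) {}

  get(tenantId: string, now = Date.now()): T | undefined {
    const e = this.entries.get(tenantId);
    if (!e || e.expiresAt <= now) return undefined; // expired or missing
    return e.value;
  }
  set(tenantId: string, value: T, now = Date.now()): void {
    this.entries.set(tenantId, { value, expiresAt: now + this.ttlMs });
  }
  invalidate(tenantId: string): void {
    this.entries.delete(tenantId);
  }
}

const planCache = new TenantCache<{ plan: string; status: string }>();

// PM2 delivers inter-process messages via the normal 'message' event;
// each worker registers this once at startup.
process.on('message', (msg: any) => {
  if (msg?.type === 'cache:invalidate' && typeof msg.tenantId === 'string') {
    planCache.invalidate(msg.tenantId);
  }
});
```

With this in place, a webhook landing on one worker updates the DB and then invalidates the stale cache entry in the other worker instead of waiting out the 5-minute TTL.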
Architecture Diagram
┌─────────────────────┐
│ Nginx (443/80) │
│ SSL + Rate Limit │
└──────────┬──────────┘
│
┌──────────────┼──────────────┐
│ │ │
┌─────▼─────┐ ┌────▼────┐ ┌──────▼──────┐
│ Next.js │ │ Express │ │ Webhook │
│ :3000 │ │ API x2 │ │ Handler │
│ (fork) │ │ :4000 │ │ (no auth) │
└───────────┘ │ (cluster)│ └──────┬──────┘
└────┬────┘ │
│ │
┌─────────▼──────────┐ │
│ TenantConnection │ │
│ Manager │ │
│ (pool per tenant) │ │
└─────────┬──────────┘ │
│ │
┌──────────────────┼──────┐ │
│ │ │ │
┌─────▼─────┐ ┌───────▼┐ ┌──▼──┐ │
│ horux360 │ │horux_ │ │horux│ │
│ (central) │ │client1 │ │_... │ │
│ │ └────────┘ └─────┘ │
│ tenants │ │
│ users │◄────────────────────────┘
│ subs │ (webhook updates)
│ payments │
└───────────┘
┌───────────────┐ ┌─────────────┐
│ /var/horux/ │ │ Gmail SMTP │
│ fiel/<rfc>/ │ │ @horuxfin │
│ backups/ │ └─────────────┘
└───────────────┘
┌───────────────┐
│ MercadoPago │
│ Preapproval │
│ API │
└───────────────┘
Out of Scope
- Landing page (already exists separately)
- Self-service registration (clients are registered manually by admin)
- Automatic SAT connector (manual FIEL linking for now)
- Plan change by client (admin handles upgrades/downgrades)
- Mobile app
- Multi-region deployment