feat(phase-6): Complete testing and deployment setup

Testing:
- Add pytest configuration (pytest.ini)
- Add test fixtures (tests/conftest.py)
- Add ContentGenerator tests (13 tests)
- Add ContentScheduler tests (16 tests)
- Add PublisherManager tests (16 tests)
- All 45 tests passing

Production Docker:
- Add docker-compose.prod.yml with healthchecks, resource limits
- Add Dockerfile.prod with multi-stage build, non-root user
- Add nginx.prod.conf with SSL, rate limiting, security headers
- Add .env.prod.example template

Maintenance Scripts:
- Add backup.sh for database and media backups
- Add restore.sh for database restoration
- Add cleanup.sh for log rotation and Docker cleanup
- Add healthcheck.sh with Telegram alerts

Documentation:
- Add DEPLOY.md with complete deployment guide

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Commit 85bda6abcf (parent 354270be98), 2026-01-28 02:12:34 +00:00
15 changed files with 2296 additions and 0 deletions

.env.prod.example (new file)

@@ -0,0 +1,73 @@
# ===========================================
# Production Environment Variables
# Copy to .env.prod and fill in values
# ===========================================
# ─────────────────────────────────────────
# DATABASE
# ─────────────────────────────────────────
POSTGRES_USER=social_automation
POSTGRES_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
POSTGRES_DB=social_automation
# ─────────────────────────────────────────
# APPLICATION
# ─────────────────────────────────────────
SECRET_KEY=GENERATE_A_SECURE_RANDOM_KEY_HERE
ENVIRONMENT=production
# ─────────────────────────────────────────
# BUSINESS INFO
# ─────────────────────────────────────────
BUSINESS_NAME="Consultoría AS"
BUSINESS_LOCATION="Tijuana, México"
BUSINESS_WEBSITE="https://consultoria-as.com"
CONTENT_TONE="Profesional pero accesible, técnico cuando es necesario"
# ─────────────────────────────────────────
# DEEPSEEK API
# ─────────────────────────────────────────
DEEPSEEK_API_KEY=your_deepseek_api_key
DEEPSEEK_BASE_URL=https://api.deepseek.com
# ─────────────────────────────────────────
# X (TWITTER) API
# ─────────────────────────────────────────
X_API_KEY=your_x_api_key
X_API_SECRET=your_x_api_secret
X_ACCESS_TOKEN=your_x_access_token
X_ACCESS_SECRET=your_x_access_secret
X_BEARER_TOKEN=your_x_bearer_token
# ─────────────────────────────────────────
# META (FACEBOOK, INSTAGRAM, THREADS)
# ─────────────────────────────────────────
META_ACCESS_TOKEN=your_meta_access_token
META_APP_ID=your_meta_app_id
META_APP_SECRET=your_meta_app_secret
# Facebook
FACEBOOK_PAGE_ID=your_facebook_page_id
# Instagram
INSTAGRAM_ACCOUNT_ID=your_instagram_account_id
# Threads
THREADS_USER_ID=your_threads_user_id
# ─────────────────────────────────────────
# IMAGE UPLOAD (ImgBB)
# ─────────────────────────────────────────
IMGBB_API_KEY=your_imgbb_api_key
# ─────────────────────────────────────────
# TELEGRAM NOTIFICATIONS
# ─────────────────────────────────────────
TELEGRAM_BOT_TOKEN=your_telegram_bot_token
TELEGRAM_CHAT_ID=your_telegram_chat_id
# ─────────────────────────────────────────
# FLOWER (Celery Monitor)
# ─────────────────────────────────────────
FLOWER_USER=admin
FLOWER_PASSWORD=CHANGE_THIS_STRONG_PASSWORD

DEPLOY.md (new file)

@@ -0,0 +1,358 @@
# Deployment Guide
## Social Media Automation System
Complete guide for deploying the system to production.
---
## System Requirements
### Minimum Hardware
- **CPU**: 2 cores
- **RAM**: 4 GB
- **Disk**: 20 GB SSD
- **Network**: stable internet connection
### Software
- Docker 24.0+
- Docker Compose 2.20+
- Git
### Required Ports
- **80**: HTTP (redirects to HTTPS)
- **443**: HTTPS (main application)
---
## Quick Install
```bash
# 1. Clone the repository
git clone https://git.consultoria-as.com/consultoria-as/social-media-automation.git
cd social-media-automation
# 2. Create the configuration file
cp .env.prod.example .env.prod
# 3. Edit the configuration
nano .env.prod
# 4. Generate SSL certificates (see the SSL section)
# 5. Start the services
docker-compose -f docker-compose.prod.yml up -d
# 6. Check status
docker-compose -f docker-compose.prod.yml ps
```
---
## Detailed Configuration
### Environment Variables (.env.prod)
#### Database
```bash
POSTGRES_USER=social_automation
POSTGRES_PASSWORD=<strong_password>
POSTGRES_DB=social_automation
```
#### Application
```bash
SECRET_KEY=<generate with: openssl rand -hex 32>
ENVIRONMENT=production
```
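Before starting the stack, it is worth confirming that no template placeholders survived the edit. A minimal sketch (the patterns match the markers used in `.env.prod.example`; the `check_env` helper name is illustrative, not part of the repo):
```shell
# check_env FILE — list any leftover template placeholders and fail if found.
# The patterns match the placeholder markers used in .env.prod.example.
check_env() {
    env_file="${1:-.env.prod}"
    if grep -nE 'CHANGE_THIS|GENERATE_A_SECURE|=your_' "$env_file"; then
        echo "ERROR: placeholder values found in $env_file" >&2
        return 1
    fi
    echo "OK: no placeholders left in $env_file"
}
```
Running `check_env` before the first `up -d` catches the most common cause of a broken first boot.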
#### Social Media APIs
**X (Twitter)**
1. Go to https://developer.twitter.com
2. Create an app with read/write permissions
3. Generate access tokens
```bash
X_API_KEY=<api_key>
X_API_SECRET=<api_secret>
X_ACCESS_TOKEN=<access_token>
X_ACCESS_SECRET=<access_secret>
X_BEARER_TOKEN=<bearer_token>
```
**Meta (Facebook, Instagram, Threads)**
1. Go to https://developers.facebook.com
2. Create a "Business" type app
3. Add products: Facebook Login, Instagram Graph API
4. Generate a page token with these permissions:
- `pages_manage_posts`
- `pages_read_engagement`
- `instagram_basic`
- `instagram_content_publish`
```bash
META_ACCESS_TOKEN=<page_access_token>
FACEBOOK_PAGE_ID=<page_id>
INSTAGRAM_ACCOUNT_ID=<instagram_business_id>
THREADS_USER_ID=<threads_user_id>
```
**DeepSeek API**
1. Go to https://platform.deepseek.com
2. Generate an API key
```bash
DEEPSEEK_API_KEY=<api_key>
DEEPSEEK_BASE_URL=https://api.deepseek.com
```
**ImgBB (image hosting)**
1. Go to https://api.imgbb.com
2. Generate an API key
```bash
IMGBB_API_KEY=<api_key>
```
**Telegram (notifications)**
1. Talk to @BotFather on Telegram
2. Create a bot with `/newbot`
3. Get your chat_id from @userinfobot
```bash
TELEGRAM_BOT_TOKEN=<bot_token>
TELEGRAM_CHAT_ID=<chat_id>
```
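A quick way to confirm the token and chat_id actually work is to post a test message through the Bot API's `sendMessage` method. A sketch (the `DRY_RUN` switch and the `tg_test` name are illustrative):
```shell
# tg_test — send a test notification via the Telegram Bot API.
# With DRY_RUN=1 it only prints the request instead of calling the API.
tg_test() {
    api_url="https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "POST $api_url chat_id=${TELEGRAM_CHAT_ID}"
    else
        curl -s -X POST "$api_url" \
            -d "chat_id=${TELEGRAM_CHAT_ID}" \
            -d "text=Deployment test: Telegram notifications are working"
    fi
}
```
A JSON response with `"ok":true` confirms both values are correct.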
---
## SSL Certificates
### Option 1: Let's Encrypt (recommended)
```bash
# Install certbot
apt install certbot
# Generate certificates
certbot certonly --standalone -d your-domain.com
# Copy to nginx
cp /etc/letsencrypt/live/your-domain.com/fullchain.pem nginx/ssl/
cp /etc/letsencrypt/live/your-domain.com/privkey.pem nginx/ssl/
# Automatic renewal (add to crontab)
0 0 1 * * certbot renew --quiet && docker-compose -f docker-compose.prod.yml restart nginx
```
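Certificates expiring silently are a common failure mode, so it can help to monitor the installed certificate's remaining lifetime. A sketch (assumes GNU `date` for parsing; the `cert_days_left` name is illustrative):
```shell
# cert_days_left FILE — print the number of whole days until a PEM
# certificate expires (parses the notAfter field with GNU date)
cert_days_left() {
    end=$(openssl x509 -enddate -noout -in "${1:-nginx/ssl/fullchain.pem}" | cut -d= -f2)
    echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}
```
Run from cron with a threshold (for example, alert below 14 days) this complements the monthly renewal job.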
### Option 2: Self-Signed Certificate (development only)
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout nginx/ssl/privkey.pem \
  -out nginx/ssl/fullchain.pem \
  -subj "/CN=localhost"
```
---
## Useful Commands
### Service Management
```bash
# Start all services
docker-compose -f docker-compose.prod.yml up -d
# Stop all services
docker-compose -f docker-compose.prod.yml down
# Tail logs in real time
docker-compose -f docker-compose.prod.yml logs -f
# Tail logs for a specific service
docker-compose -f docker-compose.prod.yml logs -f app
# Restart a service
docker-compose -f docker-compose.prod.yml restart app
# Show container status
docker-compose -f docker-compose.prod.yml ps
```
### Database
```bash
# Run migrations
docker-compose -f docker-compose.prod.yml exec app alembic upgrade head
# Open a PostgreSQL shell
docker exec -it social-automation-db psql -U social_automation
# Manual backup
./scripts/maintenance/backup.sh
# Restore a backup
./scripts/maintenance/restore.sh backups/database/db_backup_YYYYMMDD.sql.gz
```
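Before relying on a backup for a restore, it is cheap to confirm the archive is actually readable. A sketch (`verify_backup` is an illustrative name, not part of the shipped scripts):
```shell
# verify_backup FILE — confirm a compressed dump is a readable gzip archive
verify_backup() {
    if gzip -t "$1" 2>/dev/null; then
        echo "OK: $1 is a valid gzip archive"
    else
        echo "CORRUPT: $1 failed the gzip integrity check" >&2
        return 1
    fi
}
```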
### Celery
```bash
# List active tasks
docker exec social-automation-flower celery -A app.worker.celery_app inspect active
# List scheduled tasks
docker exec social-automation-flower celery -A app.worker.celery_app inspect scheduled
# Purge the queue
docker exec social-automation-worker celery -A app.worker.celery_app purge -f
```
---
## Maintenance
### Scheduled Tasks (Crontab)
```bash
# Edit the crontab
crontab -e
# Add the following lines:
# Daily backup at 2 AM
0 2 * * * /path/to/project/scripts/maintenance/backup.sh >> /var/log/backup.log 2>&1
# Weekly cleanup on Sundays at 3 AM
0 3 * * 0 /path/to/project/scripts/maintenance/cleanup.sh >> /var/log/cleanup.log 2>&1
# Health check every 5 minutes
*/5 * * * * /path/to/project/scripts/maintenance/healthcheck.sh > /dev/null 2>&1
```
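Cron only runs the jobs; it does not tell you when a job has silently stopped producing backups. A small check on the age of the newest dump can close that gap (assumes GNU `stat`; the helper name is illustrative):
```shell
# latest_backup_age_h DIR — print the age in hours of the newest .sql.gz
# backup in DIR; fails if the directory contains no backups at all
latest_backup_age_h() {
    newest=$(ls -t "$1"/*.sql.gz 2>/dev/null | head -n 1)
    if [ -z "$newest" ]; then
        echo "no backups found in $1" >&2
        return 1
    fi
    echo $(( ($(date +%s) - $(stat -c %Y "$newest")) / 3600 ))
}
```
Wired into healthcheck.sh (or its own cron line), an age above roughly 48 hours would indicate the daily backup job has stopped working.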
### Updates
```bash
# 1. Take a backup
./scripts/maintenance/backup.sh
# 2. Pull changes
git pull origin main
# 3. Rebuild images
docker-compose -f docker-compose.prod.yml build
# 4. Restart services on the new images
docker-compose -f docker-compose.prod.yml up -d
# 5. Apply migrations (after the restart, so they run against the new code)
docker-compose -f docker-compose.prod.yml exec app alembic upgrade head
# 6. Verify
./scripts/maintenance/healthcheck.sh
```
---
## Troubleshooting
### The application does not start
```bash
# Inspect detailed logs
docker-compose -f docker-compose.prod.yml logs app
# Check environment variables
docker-compose -f docker-compose.prod.yml exec app env | grep -E "(DATABASE|REDIS|SECRET)"
# Test the DB connection
docker exec social-automation-db pg_isready -U social_automation
```
### Database connection errors
```bash
# Verify the DB container is running
docker ps | grep db
# Inspect its health check
docker inspect social-automation-db | grep -A 10 Health
# Restart the DB
docker-compose -f docker-compose.prod.yml restart db
```
### Worker is not processing tasks
```bash
# Inspect the worker logs
docker-compose -f docker-compose.prod.yml logs worker
# Check Redis
docker exec social-automation-redis redis-cli ping
# Restart worker and beat
docker-compose -f docker-compose.prod.yml restart worker beat
```
### 502 Bad Gateway
```bash
# Verify the app is responding
curl http://localhost:8000/api/health
# Inspect nginx logs
docker-compose -f docker-compose.prod.yml logs nginx
# Restart nginx
docker-compose -f docker-compose.prod.yml restart nginx
```
### High disk usage
```bash
# Run the cleanup script
./scripts/maintenance/cleanup.sh
# Prune Docker (removes ALL unused images and volumes)
docker system prune -a --volumes
# Check backup sizes
du -sh backups/
```
---
## Security
### Production Checklist
- [ ] Change all default passwords
- [ ] Generate a unique SECRET_KEY
- [ ] Install valid SSL certificates
- [ ] Configure the firewall (ports 80 and 443 only)
- [ ] Configure automatic backups
- [ ] Configure monitoring and alerts
- [ ] Enable authentication on Flower
### Firewall (UFW)
```bash
# Basic firewall setup
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 80
ufw allow 443
ufw enable
```
---
## Support
- **Repository**: https://git.consultoria-as.com/consultoria-as/social-media-automation
- **API documentation**: https://your-domain.com/docs
- **Celery monitor**: https://your-domain.com/flower/

Dockerfile.prod (new file)

@@ -0,0 +1,88 @@
# ===========================================
# Production Dockerfile
# Multi-stage build for a smaller image
# ===========================================

# Stage 1: Build
FROM python:3.11-slim AS builder
WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc \
        libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Create virtualenv
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Production
FROM python:3.11-slim
WORKDIR /app

# Install runtime dependencies only (Chromium and its libraries are needed
# for headless rendering)
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpq5 \
        chromium \
        chromium-driver \
        fonts-liberation \
        libasound2 \
        libatk-bridge2.0-0 \
        libatk1.0-0 \
        libatspi2.0-0 \
        libcups2 \
        libdbus-1-3 \
        libdrm2 \
        libgbm1 \
        libgtk-3-0 \
        libnspr4 \
        libnss3 \
        libxcomposite1 \
        libxdamage1 \
        libxfixes3 \
        libxkbcommon0 \
        libxrandr2 \
        xdg-utils \
        curl \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Copy virtualenv from builder
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Create non-root user for security
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

# Copy application code
COPY --chown=appuser:appgroup . .

# Create writable directories
RUN mkdir -p /app/uploads /app/logs \
    && chown -R appuser:appgroup /app/uploads /app/logs

# Environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV CHROME_BIN=/usr/bin/chromium
ENV CHROMIUM_FLAGS="--no-sandbox --disable-dev-shm-usage"

# Switch to non-root user
USER appuser

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:8000/api/health || exit 1

# Default command (can be overridden)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

docker-compose.prod.yml (new file)

@@ -0,0 +1,240 @@
version: '3.8'
# ===========================================
# PRODUCTION - Social Media Automation
# Usage: docker-compose -f docker-compose.prod.yml up -d
# ===========================================
services:
  # ===========================================
  # MAIN APPLICATION (FastAPI)
  # ===========================================
  app:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: social-automation-app
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      - REDIS_URL=redis://redis:6379/0
    env_file:
      - .env.prod
    volumes:
      - uploaded_images:/app/uploads
      - logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: always
    networks:
      - social-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
        reservations:
          cpus: '0.25'
          memory: 256M

  # ===========================================
  # CELERY WORKER (task processing)
  # ===========================================
  worker:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: social-automation-worker
    command: celery -A app.worker.celery_app worker --loglevel=warning --concurrency=4
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      - REDIS_URL=redis://redis:6379/0
    env_file:
      - .env.prod
    volumes:
      - uploaded_images:/app/uploads
      - logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: always
    networks:
      - social-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # ===========================================
  # CELERY BEAT (task scheduler)
  # ===========================================
  beat:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: social-automation-beat
    command: celery -A app.worker.celery_app beat --loglevel=warning
    environment:
      - DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@db:5432/${POSTGRES_DB}
      - REDIS_URL=redis://redis:6379/0
    env_file:
      - .env.prod
    volumes:
      - logs:/app/logs
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: always
    networks:
      - social-network
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M

  # ===========================================
  # FLOWER (Celery monitor) - internal access only
  # ===========================================
  flower:
    build:
      context: .
      dockerfile: Dockerfile.prod
    container_name: social-automation-flower
    command: celery -A app.worker.celery_app flower --port=5555 --basic_auth=${FLOWER_USER}:${FLOWER_PASSWORD}
    environment:
      - REDIS_URL=redis://redis:6379/0
    env_file:
      - .env.prod
    depends_on:
      - redis
      - worker
    restart: always
    networks:
      - social-network
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M

  # ===========================================
  # POSTGRESQL (database)
  # ===========================================
  db:
    image: postgres:15-alpine
    container_name: social-automation-db
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./backups:/backups
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      - social-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  # ===========================================
  # REDIS (message broker)
  # ===========================================
  redis:
    image: redis:7-alpine
    container_name: social-automation-redis
    command: redis-server --appendonly yes --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: always
    networks:
      - social-network
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 256M

  # ===========================================
  # NGINX (reverse proxy + SSL)
  # ===========================================
  nginx:
    image: nginx:alpine
    container_name: social-automation-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.prod.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - ./dashboard/static:/usr/share/nginx/html/static:ro
      - nginx_logs:/var/log/nginx
    depends_on:
      app:
        condition: service_healthy
    restart: always
    networks:
      - social-network
    healthcheck:
      test: ["CMD", "nginx", "-t"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '0.25'
          memory: 128M

# ===========================================
# VOLUMES
# ===========================================
volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  uploaded_images:
    driver: local
  logs:
    driver: local
  nginx_logs:
    driver: local

# ===========================================
# NETWORKS
# ===========================================
networks:
  social-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16

nginx/nginx.prod.conf (new file)

@@ -0,0 +1,195 @@
# ===========================================
# NGINX Production Configuration
# Social Media Automation System
# ===========================================
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/json application/javascript
               application/xml application/xml+rss text/javascript application/x-font-ttf
               font/opentype image/svg+xml;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=5r/m;

    # Upstreams
    upstream app {
        server app:8000;
        keepalive 32;
    }
    upstream flower {
        server flower:5555;
    }

    # HTTP - Redirect to HTTPS
    server {
        listen 80;
        server_name _;

        location /.well-known/acme-challenge/ {
            root /var/www/certbot;
        }
        location /health {
            proxy_pass http://app/api/health;
            proxy_connect_timeout 5s;
            proxy_read_timeout 5s;
        }
        location / {
            return 301 https://$host$request_uri;
        }
    }

    # HTTPS - Main server
    server {
        listen 443 ssl http2;
        server_name _;

        # SSL Configuration
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        # SSL Security
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers off;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 1d;
        ssl_session_tickets off;

        # HSTS + security headers (add_header directives from the http block
        # are not inherited once a block declares its own, so repeat them here)
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;

        # Client body size (for image uploads)
        client_max_body_size 10M;

        # Static files
        location /static {
            alias /usr/share/nginx/html/static;
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }

        # API endpoints
        location /api {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Connection "";
            proxy_connect_timeout 30s;
            proxy_send_timeout 30s;
            proxy_read_timeout 60s;
        }

        # Login rate limiting
        location /api/auth/login {
            limit_req zone=login burst=5 nodelay;
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Dashboard
        location /dashboard {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        # Flower (Celery monitor) - Protected access
        location /flower/ {
            # Optional: IP whitelist
            # allow 192.168.1.0/24;
            # deny all;
            proxy_pass http://flower/;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
        }

        # Health check
        location /health {
            proxy_pass http://app/api/health;
            proxy_connect_timeout 5s;
            proxy_read_timeout 5s;
            access_log off;
        }

        # Root
        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Error pages
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
}

pytest.ini (new file)

@@ -0,0 +1,10 @@
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
asyncio_mode = auto
addopts = -v --tb=short
filterwarnings =
ignore::DeprecationWarning
ignore::PendingDeprecationWarning

scripts/maintenance/backup.sh (new executable file)

@@ -0,0 +1,121 @@
#!/bin/bash
# ===========================================
# Backup Script for Social Media Automation
# Run daily via cron:
# 0 2 * * * /path/to/backup.sh >> /var/log/backup.log 2>&1
# ===========================================
# Fail on errors, including failures inside pipelines (pg_dump | gzip)
set -eo pipefail
# Configuration
BACKUP_DIR="${BACKUP_DIR:-/root/Facebook-X-Threads-Automation/backups}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CONTAINER_NAME="${CONTAINER_NAME:-social-automation-db}"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
    echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}
error() {
    log "${RED}ERROR: $1${NC}"
    exit 1
}
success() {
    log "${GREEN}$1${NC}"
}
warning() {
    log "${YELLOW}$1${NC}"
}
# Create backup directory if not exists
mkdir -p "$BACKUP_DIR"/{database,media}
log "Starting backup process..."
# ===========================================
# 1. DATABASE BACKUP
# ===========================================
log "Backing up PostgreSQL database..."
DB_BACKUP_FILE="$BACKUP_DIR/database/db_backup_$TIMESTAMP.sql.gz"
# Check if container is running
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
    error "Database container '$CONTAINER_NAME' is not running"
fi
# Get database credentials from container (fallbacks match .env.prod.example)
POSTGRES_USER=$(docker exec $CONTAINER_NAME printenv POSTGRES_USER 2>/dev/null || echo "social_automation")
POSTGRES_DB=$(docker exec $CONTAINER_NAME printenv POSTGRES_DB 2>/dev/null || echo "social_automation")
# Perform backup
if docker exec $CONTAINER_NAME pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB" | gzip > "$DB_BACKUP_FILE"; then
    DB_SIZE=$(du -h "$DB_BACKUP_FILE" | cut -f1)
    success "Database backup completed: $DB_BACKUP_FILE ($DB_SIZE)"
else
    error "Database backup failed"
fi
# ===========================================
# 2. MEDIA FILES BACKUP
# ===========================================
log "Backing up media files..."
MEDIA_BACKUP_FILE="$BACKUP_DIR/media/media_backup_$TIMESTAMP.tar.gz"
UPLOADS_DIR="/root/Facebook-X-Threads-Automation/uploads"
if [ -d "$UPLOADS_DIR" ] && [ "$(ls -A $UPLOADS_DIR 2>/dev/null)" ]; then
    if tar -czf "$MEDIA_BACKUP_FILE" -C "$(dirname $UPLOADS_DIR)" "$(basename $UPLOADS_DIR)"; then
        MEDIA_SIZE=$(du -h "$MEDIA_BACKUP_FILE" | cut -f1)
        success "Media backup completed: $MEDIA_BACKUP_FILE ($MEDIA_SIZE)"
    else
        warning "Media backup failed or partially completed"
    fi
else
    warning "No media files to backup"
fi
# ===========================================
# 3. CLEANUP OLD BACKUPS
# ===========================================
log "Cleaning up backups older than $RETENTION_DAYS days..."
# Count files before cleanup
DB_BEFORE=$(find "$BACKUP_DIR/database" -name "*.sql.gz" -type f 2>/dev/null | wc -l)
MEDIA_BEFORE=$(find "$BACKUP_DIR/media" -name "*.tar.gz" -type f 2>/dev/null | wc -l)
# Delete old files
find "$BACKUP_DIR/database" -name "*.sql.gz" -type f -mtime +$RETENTION_DAYS -delete 2>/dev/null || true
find "$BACKUP_DIR/media" -name "*.tar.gz" -type f -mtime +$RETENTION_DAYS -delete 2>/dev/null || true
# Count files after cleanup
DB_AFTER=$(find "$BACKUP_DIR/database" -name "*.sql.gz" -type f 2>/dev/null | wc -l)
MEDIA_AFTER=$(find "$BACKUP_DIR/media" -name "*.tar.gz" -type f 2>/dev/null | wc -l)
DB_DELETED=$((DB_BEFORE - DB_AFTER))
MEDIA_DELETED=$((MEDIA_BEFORE - MEDIA_AFTER))
if [ $DB_DELETED -gt 0 ] || [ $MEDIA_DELETED -gt 0 ]; then
    log "Deleted $DB_DELETED database backup(s) and $MEDIA_DELETED media backup(s)"
fi
# ===========================================
# 4. SUMMARY
# ===========================================
log "─────────────────────────────────────────"
log "Backup Summary:"
log " Database backups: $DB_AFTER"
log " Media backups: $MEDIA_AFTER"
log " Total size: $(du -sh $BACKUP_DIR | cut -f1)"
log "─────────────────────────────────────────"
success "Backup process completed successfully!"

scripts/maintenance/cleanup.sh (new executable file)

@@ -0,0 +1,121 @@
#!/bin/bash
# ===========================================
# Cleanup Script for Social Media Automation
# Run weekly via cron:
# 0 3 * * 0 /path/to/cleanup.sh >> /var/log/cleanup.log 2>&1
# ===========================================
set -e
# Configuration
PROJECT_DIR="${PROJECT_DIR:-/root/Facebook-X-Threads-Automation}"
LOG_RETENTION_DAYS="${LOG_RETENTION_DAYS:-30}"
DOCKER_LOG_MAX_SIZE="${DOCKER_LOG_MAX_SIZE:-100m}"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
    echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}
success() {
    log "${GREEN}$1${NC}"
}
warning() {
    log "${YELLOW}$1${NC}"
}
log "Starting cleanup process..."
# ===========================================
# 1. CLEAN DOCKER LOGS
# ===========================================
log "Cleaning Docker container logs..."
# Truncate Docker logs (requires root)
if [ -d /var/lib/docker/containers ]; then
    for container_dir in /var/lib/docker/containers/*/; do
        log_file="${container_dir}*-json.log"
        for f in $log_file; do
            if [ -f "$f" ]; then
                size_before=$(du -h "$f" | cut -f1)
                if truncate -s 0 "$f" 2>/dev/null; then
                    log "  Truncated: $(basename $(dirname $f)) ($size_before)"
                fi
            fi
        done
    done
    success "Docker logs cleaned"
else
    warning "Docker log directory not found (might need sudo)"
fi
# ===========================================
# 2. CLEAN APPLICATION LOGS
# ===========================================
log "Cleaning application logs older than $LOG_RETENTION_DAYS days..."
if [ -d "$PROJECT_DIR/logs" ]; then
    count=$(find "$PROJECT_DIR/logs" -name "*.log" -type f -mtime +$LOG_RETENTION_DAYS 2>/dev/null | wc -l)
    find "$PROJECT_DIR/logs" -name "*.log" -type f -mtime +$LOG_RETENTION_DAYS -delete 2>/dev/null || true
    log "  Deleted $count old log file(s)"
fi
# ===========================================
# 3. CLEAN NGINX LOGS
# ===========================================
log "Rotating nginx logs..."
if docker ps --format '{{.Names}}' | grep -q "social-automation-nginx"; then
    docker exec social-automation-nginx nginx -s reopen 2>/dev/null && \
        success "Nginx logs rotated" || warning "Could not rotate nginx logs"
fi
# ===========================================
# 4. CLEAN DOCKER SYSTEM
# ===========================================
log "Cleaning Docker system..."
# Remove unused containers, networks, and build cache (--volumes also removes
# volumes not attached to any container)
docker system prune -f --volumes 2>/dev/null && \
    success "Docker system cleaned" || warning "Could not clean Docker system"
# Remove dangling images
dangling=$(docker images -f "dangling=true" -q 2>/dev/null | wc -l)
if [ $dangling -gt 0 ]; then
    docker rmi $(docker images -f "dangling=true" -q) 2>/dev/null || true
    log "  Removed $dangling dangling image(s)"
fi
# ===========================================
# 5. CLEAN TEMP FILES
# ===========================================
log "Cleaning temporary files..."
# Python cache
find "$PROJECT_DIR" -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
find "$PROJECT_DIR" -type f -name "*.pyc" -delete 2>/dev/null || true
find "$PROJECT_DIR" -type f -name "*.pyo" -delete 2>/dev/null || true
# Pytest cache
rm -rf "$PROJECT_DIR/.pytest_cache" 2>/dev/null || true
success "Temporary files cleaned"
# ===========================================
# 6. DISK USAGE REPORT
# ===========================================
log "─────────────────────────────────────────"
log "Disk Usage Report:"
log " Project: $(du -sh $PROJECT_DIR 2>/dev/null | cut -f1)"
log " Backups: $(du -sh $PROJECT_DIR/backups 2>/dev/null | cut -f1 || echo 'N/A')"
log " Docker: $(docker system df --format '{{.Size}}' 2>/dev/null | head -1 || echo 'N/A')"
log " Disk: $(df -h / | awk 'NR==2 {print $4 " free of " $2}')"
log "─────────────────────────────────────────"
success "Cleanup process completed!"

scripts/maintenance/healthcheck.sh (new executable file)

@@ -0,0 +1,154 @@
#!/bin/bash
# ===========================================
# Health Check Script for Social Media Automation
# Run every 5 minutes via cron:
# */5 * * * * /path/to/healthcheck.sh
# ===========================================
# Configuration
APP_URL="${APP_URL:-http://localhost:8000}"
TELEGRAM_BOT_TOKEN="${TELEGRAM_BOT_TOKEN:-}"
TELEGRAM_CHAT_ID="${TELEGRAM_CHAT_ID:-}"
ALERT_FILE="/tmp/social_automation_alert_sent"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
    echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}
send_telegram() {
    if [ -n "$TELEGRAM_BOT_TOKEN" ] && [ -n "$TELEGRAM_CHAT_ID" ]; then
        message="$1"
        curl -s -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
            -d "chat_id=${TELEGRAM_CHAT_ID}" \
            -d "text=${message}" \
            -d "parse_mode=HTML" > /dev/null 2>&1
    fi
}
check_service() {
    local name=$1
    local container=$2
    if docker ps --format '{{.Names}}' | grep -q "^${container}$"; then
        echo -e "${GREEN}✓${NC} $name"
        return 0
    else
        echo -e "${RED}✗${NC} $name"
        return 1
    fi
}
log "Running health checks..."
ERRORS=0
STATUS=""
# ===========================================
# 1. CHECK DOCKER CONTAINERS
# ===========================================
echo ""
echo "Container Status:"
check_service "App (FastAPI)" "social-automation-app" || ((ERRORS++))
check_service "Worker (Celery)" "social-automation-worker" || ((ERRORS++))
check_service "Beat (Scheduler)" "social-automation-beat" || ((ERRORS++))
check_service "Database (PostgreSQL)" "social-automation-db" || ((ERRORS++))
check_service "Redis" "social-automation-redis" || ((ERRORS++))
check_service "Nginx" "social-automation-nginx" || ((ERRORS++))
# ===========================================
# 2. CHECK API HEALTH
# ===========================================
echo ""
echo "API Status:"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "$APP_URL/api/health" 2>/dev/null || echo "000")
if [ "$HTTP_CODE" = "200" ]; then
    echo -e "${GREEN}✓${NC} API responding (HTTP $HTTP_CODE)"
else
    echo -e "${RED}✗${NC} API not responding (HTTP $HTTP_CODE)"
    ((ERRORS++))
fi
# ===========================================
# 3. CHECK DATABASE CONNECTION
# ===========================================
echo ""
echo "Database Status:"
# Read the user from the container so this matches whatever .env.prod sets
DB_USER=$(docker exec social-automation-db printenv POSTGRES_USER 2>/dev/null || echo "social_user")
if docker exec social-automation-db pg_isready -U "$DB_USER" -d social_automation > /dev/null 2>&1; then
    echo -e "${GREEN}✓${NC} PostgreSQL accepting connections"
else
    echo -e "${RED}✗${NC} PostgreSQL not accepting connections"
    ((ERRORS++))
fi
# ===========================================
# 4. CHECK REDIS
# ===========================================
echo ""
echo "Redis Status:"
if docker exec social-automation-redis redis-cli ping 2>/dev/null | grep -q "PONG"; then
    echo -e "${GREEN}✓${NC} Redis responding"
else
    echo -e "${RED}✗${NC} Redis not responding"
    ((ERRORS++))
fi
# ===========================================
# 5. CHECK DISK SPACE
# ===========================================
echo ""
echo "System Resources:"
DISK_USAGE=$(df / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$DISK_USAGE" -lt 90 ]; then
    echo -e "${GREEN}✓${NC} Disk usage: ${DISK_USAGE}%"
else
    echo -e "${RED}✗${NC} Disk usage: ${DISK_USAGE}% (CRITICAL)"
    ((ERRORS++))
fi
# Memory
MEM_USAGE=$(free | awk 'NR==2 {printf "%.0f", $3/$2*100}')
if [ "$MEM_USAGE" -lt 90 ]; then
    echo -e "${GREEN}✓${NC} Memory usage: ${MEM_USAGE}%"
else
    echo -e "${YELLOW}!${NC} Memory usage: ${MEM_USAGE}% (HIGH)"
fi
# ===========================================
# 6. SUMMARY & ALERTS
# ===========================================
echo ""
echo "─────────────────────────────────────────"
if [ $ERRORS -eq 0 ]; then
echo -e "${GREEN}All systems operational${NC}"
# Clear alert file if exists (system recovered)
if [ -f "$ALERT_FILE" ]; then
rm "$ALERT_FILE"
send_telegram "✅ <b>Social Media Automation - RECOVERED</b>%0A%0AAll systems are back to normal."
fi
else
echo -e "${RED}$ERRORS error(s) detected${NC}"
# Send alert only if not already sent
if [ ! -f "$ALERT_FILE" ]; then
touch "$ALERT_FILE"
send_telegram "🚨 <b>Social Media Automation - ALERT</b>%0A%0A$ERRORS service(s) are down!%0ACheck server immediately."
fi
fi
echo "─────────────────────────────────────────"
exit $ERRORS

scripts/maintenance/restore.sh Executable file

@@ -0,0 +1,94 @@
#!/bin/bash
# ===========================================
# Restore Script for Social Media Automation
# Usage: ./restore.sh [backup_file]
# ===========================================
set -e
# Configuration
BACKUP_DIR="${BACKUP_DIR:-/root/Facebook-X-Threads-Automation/backups}"
CONTAINER_NAME="${CONTAINER_NAME:-social-automation-db}"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
log() {
echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1"
}
error() {
log "${RED}ERROR: $1${NC}"
exit 1
}
success() {
log "${GREEN}$1${NC}"
}
warning() {
log "${YELLOW}$1${NC}"
}
# Check if backup file provided
if [ -z "$1" ]; then
log "Available database backups:"
echo ""
ls -lh "$BACKUP_DIR/database/"*.sql.gz 2>/dev/null || echo " No backups found"
echo ""
log "Usage: $0 <backup_file.sql.gz>"
exit 1
fi
BACKUP_FILE="$1"
# Check if file exists
if [ ! -f "$BACKUP_FILE" ]; then
# Try with backup dir prefix
if [ -f "$BACKUP_DIR/database/$BACKUP_FILE" ]; then
BACKUP_FILE="$BACKUP_DIR/database/$BACKUP_FILE"
else
error "Backup file not found: $BACKUP_FILE"
fi
fi
log "Backup file: $BACKUP_FILE"
# Confirm restore
warning "WARNING: This will overwrite the current database!"
read -p "Are you sure you want to continue? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
log "Restore cancelled"
exit 0
fi
# Check if container is running
if ! docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
error "Database container '$CONTAINER_NAME' is not running"
fi
# Get database credentials
POSTGRES_USER=$(docker exec $CONTAINER_NAME printenv POSTGRES_USER 2>/dev/null || echo "social_user")
POSTGRES_DB=$(docker exec $CONTAINER_NAME printenv POSTGRES_DB 2>/dev/null || echo "social_automation")
log "Restoring database..."
# Terminate existing connections to the target database
docker exec $CONTAINER_NAME psql -U "$POSTGRES_USER" -c "
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = '$POSTGRES_DB'
AND pid <> pg_backend_pid();" postgres 2>/dev/null || true
# Restore
if gunzip -c "$BACKUP_FILE" | docker exec -i $CONTAINER_NAME psql -U "$POSTGRES_USER" "$POSTGRES_DB"; then
success "Database restored successfully!"
else
error "Database restore failed"
fi
log "Restore completed. Please restart the application containers."

tests/__init__.py Normal file

@@ -0,0 +1 @@
"""Tests for the social media automation system."""

tests/conftest.py Normal file

@@ -0,0 +1,124 @@
"""
Test fixtures and configuration.
"""
import pytest
from unittest.mock import MagicMock, AsyncMock, patch
from datetime import datetime, timedelta
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from app.core.database import Base
# In-memory SQLite for testing
TEST_DATABASE_URL = "sqlite:///:memory:"
@pytest.fixture
def test_engine():
"""Create an in-memory SQLite engine for testing."""
engine = create_engine(
TEST_DATABASE_URL,
connect_args={"check_same_thread": False}
)
Base.metadata.create_all(bind=engine)
yield engine
Base.metadata.drop_all(bind=engine)
@pytest.fixture
def test_session(test_engine):
"""Create a test database session."""
TestSessionLocal = sessionmaker(
autocommit=False, autoflush=False, bind=test_engine
)
session = TestSessionLocal()
yield session
session.close()
@pytest.fixture
def mock_openai_client():
"""Mock OpenAI client for DeepSeek API tests."""
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.choices = [MagicMock()]
mock_response.choices[0].message.content = "Generated test content #TechTip #AI"
mock_client.chat.completions.create.return_value = mock_response
return mock_client
@pytest.fixture
def mock_httpx_client():
"""Mock httpx client for API calls."""
mock_client = AsyncMock()
mock_response = AsyncMock()
mock_response.status_code = 200
mock_response.json.return_value = {"id": "123", "success": True}
mock_client.get.return_value = mock_response
mock_client.post.return_value = mock_response
return mock_client
@pytest.fixture
def sample_product():
"""Sample product data for testing."""
return {
"name": "Laptop HP Pavilion",
"description": "Laptop potente para trabajo y gaming",
"price": 15999.00,
"category": "laptops",
"specs": {
"processor": "Intel Core i5",
"ram": "16GB",
"storage": "512GB SSD"
},
"highlights": ["Alta velocidad", "Diseño compacto", "Garantía 2 años"]
}
@pytest.fixture
def sample_service():
"""Sample service data for testing."""
return {
"name": "Automatización con IA",
"description": "Automatiza tus procesos con inteligencia artificial",
"category": "ai_automation",
"target_sectors": ["retail", "manufactura", "servicios"],
"benefits": ["Reduce costos", "Aumenta productividad", "24/7 operación"],
"call_to_action": "Agenda una demo gratuita"
}
@pytest.fixture
def sample_interaction():
"""Sample interaction data for testing."""
return {
"content": "¿Qué procesador recomiendas para edición de video?",
"type": "comment",
"platform": "x",
"author": "user123"
}
@pytest.fixture
def mock_settings():
"""Mock settings for testing."""
with patch('app.core.config.settings') as mock:
mock.DEEPSEEK_API_KEY = "test-api-key"
mock.DEEPSEEK_BASE_URL = "https://api.deepseek.com"
mock.BUSINESS_NAME = "Consultoría AS"
mock.BUSINESS_LOCATION = "Tijuana, México"
mock.BUSINESS_WEBSITE = "https://consultoria-as.com"
mock.CONTENT_TONE = "Profesional pero accesible"
yield mock
@pytest.fixture
def fixed_datetime():
"""Fixed datetime for consistent testing."""
return datetime(2024, 6, 15, 10, 0, 0) # Saturday 10:00


@@ -0,0 +1,180 @@
"""
Tests for ContentGenerator service.
"""
import pytest
from unittest.mock import MagicMock, patch, AsyncMock
class TestContentGenerator:
"""Tests for the ContentGenerator class."""
@pytest.fixture
def generator(self, mock_settings):
"""Create a ContentGenerator instance with mocked client."""
with patch('app.services.content_generator.OpenAI') as mock_openai:
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.choices = [MagicMock()]
mock_response.choices[0].message.content = "Test content #AI #Tech"
mock_client.chat.completions.create.return_value = mock_response
mock_openai.return_value = mock_client
from app.services.content_generator import ContentGenerator
gen = ContentGenerator()
gen._client = mock_client
yield gen
@pytest.mark.asyncio
async def test_generate_tip_tech(self, generator):
"""Test generating a tech tip."""
result = await generator.generate_tip_tech(
category="seguridad",
platform="x"
)
assert result is not None
assert len(result) > 0
generator.client.chat.completions.create.assert_called_once()
@pytest.mark.asyncio
async def test_generate_tip_tech_with_template(self, generator):
"""Test generating a tech tip with a template."""
result = await generator.generate_tip_tech(
category="productividad",
platform="threads",
template="Tip del día: {tip}"
)
assert result is not None
call_args = generator.client.chat.completions.create.call_args
        assert "template" in str(call_args).lower()
@pytest.mark.asyncio
async def test_generate_product_post(self, generator, sample_product):
"""Test generating a product post."""
result = await generator.generate_product_post(
product=sample_product,
platform="instagram"
)
assert result is not None
call_args = generator.client.chat.completions.create.call_args
messages = call_args.kwargs.get('messages', call_args[1].get('messages', []))
# Verify product info was included in prompt
user_message = messages[-1]['content']
assert sample_product['name'] in user_message
# Price is formatted with commas, check for the value
assert "15,999" in user_message or "15999" in user_message
@pytest.mark.asyncio
async def test_generate_service_post(self, generator, sample_service):
"""Test generating a service post."""
result = await generator.generate_service_post(
service=sample_service,
platform="facebook"
)
assert result is not None
call_args = generator.client.chat.completions.create.call_args
messages = call_args.kwargs.get('messages', call_args[1].get('messages', []))
user_message = messages[-1]['content']
assert sample_service['name'] in user_message
@pytest.mark.asyncio
async def test_generate_thread(self, generator):
"""Test generating a thread."""
generator.client.chat.completions.create.return_value.choices[0].message.content = \
"1/ Post uno\n2/ Post dos\n3/ Post tres"
result = await generator.generate_thread(
topic="Cómo proteger tu contraseña",
num_posts=3
)
assert isinstance(result, list)
assert len(result) == 3
@pytest.mark.asyncio
async def test_generate_response_suggestion(self, generator, sample_interaction):
"""Test generating response suggestions."""
generator.client.chat.completions.create.return_value.choices[0].message.content = \
"1. Respuesta corta\n2. Respuesta media\n3. Respuesta larga"
result = await generator.generate_response_suggestion(
interaction_content=sample_interaction['content'],
interaction_type=sample_interaction['type']
)
assert isinstance(result, list)
assert len(result) <= 3
@pytest.mark.asyncio
async def test_adapt_content_for_platform(self, generator):
"""Test adapting content for different platforms."""
original = "Este es un tip muy largo sobre seguridad informática con muchos detalles"
result = await generator.adapt_content_for_platform(
content=original,
target_platform="x"
)
assert result is not None
call_args = generator.client.chat.completions.create.call_args
messages = call_args.kwargs.get('messages', call_args[1].get('messages', []))
user_message = messages[-1]['content']
assert "280" in user_message # X character limit
def test_get_system_prompt(self, generator, mock_settings):
"""Test that system prompt includes business info."""
prompt = generator._get_system_prompt()
assert mock_settings.BUSINESS_NAME in prompt
assert mock_settings.BUSINESS_LOCATION in prompt
def test_lazy_initialization_without_api_key(self):
"""Test that client raises error without API key."""
with patch('app.services.content_generator.settings') as mock:
mock.DEEPSEEK_API_KEY = None
from app.services.content_generator import ContentGenerator
gen = ContentGenerator()
with pytest.raises(ValueError, match="DEEPSEEK_API_KEY"):
_ = gen.client
class TestCharacterLimits:
"""Tests for character limit handling."""
@pytest.mark.parametrize("platform,expected_limit", [
("x", 280),
("threads", 500),
("instagram", 2200),
("facebook", 500),
])
@pytest.mark.asyncio
async def test_platform_character_limits(self, platform, expected_limit, mock_settings):
"""Test that correct character limits are used per platform."""
with patch('app.services.content_generator.OpenAI') as mock_openai:
mock_client = MagicMock()
mock_response = MagicMock()
mock_response.choices = [MagicMock()]
mock_response.choices[0].message.content = "Test"
mock_client.chat.completions.create.return_value = mock_response
mock_openai.return_value = mock_client
from app.services.content_generator import ContentGenerator
gen = ContentGenerator()
gen._client = mock_client
await gen.generate_tip_tech("test", platform)
call_args = mock_client.chat.completions.create.call_args
messages = call_args.kwargs.get('messages', call_args[1].get('messages', []))
user_message = messages[-1]['content']
assert str(expected_limit) in user_message


@@ -0,0 +1,278 @@
"""
Tests for PublisherManager service.
"""
import pytest
from unittest.mock import MagicMock, AsyncMock, patch
class TestPublisherManager:
"""Tests for the PublisherManager class."""
@pytest.fixture
def mock_publishers(self):
"""Create mock publishers."""
x_publisher = MagicMock()
x_publisher.char_limit = 280
x_publisher.validate_content.return_value = True
x_publisher.client = MagicMock()
x_publisher.publish = AsyncMock(return_value=MagicMock(
success=True, post_id="123", url="https://x.com/post/123"
))
threads_publisher = MagicMock()
threads_publisher.char_limit = 500
threads_publisher.validate_content.return_value = True
threads_publisher.access_token = "token"
threads_publisher.user_id = "user123"
threads_publisher.publish = AsyncMock(return_value=MagicMock(
success=True, post_id="456", url="https://threads.net/post/456"
))
fb_publisher = MagicMock()
fb_publisher.char_limit = 63206
fb_publisher.validate_content.return_value = True
fb_publisher.access_token = "token"
fb_publisher.page_id = "page123"
fb_publisher.publish = AsyncMock(return_value=MagicMock(
success=True, post_id="789"
))
ig_publisher = MagicMock()
ig_publisher.char_limit = 2200
ig_publisher.validate_content.return_value = True
ig_publisher.access_token = "token"
ig_publisher.account_id = "acc123"
ig_publisher.publish = AsyncMock(return_value=MagicMock(
success=True, post_id="101"
))
return {
"x": x_publisher,
"threads": threads_publisher,
"facebook": fb_publisher,
"instagram": ig_publisher
}
@pytest.fixture
def manager(self, mock_publishers):
"""Create a PublisherManager with mocked publishers."""
with patch('app.publishers.manager.XPublisher', return_value=mock_publishers["x"]), \
patch('app.publishers.manager.ThreadsPublisher', return_value=mock_publishers["threads"]), \
patch('app.publishers.manager.FacebookPublisher', return_value=mock_publishers["facebook"]), \
patch('app.publishers.manager.InstagramPublisher', return_value=mock_publishers["instagram"]):
from app.publishers.manager import PublisherManager, Platform
mgr = PublisherManager()
# Override with mocks
mgr._publishers = {
Platform.X: mock_publishers["x"],
Platform.THREADS: mock_publishers["threads"],
Platform.FACEBOOK: mock_publishers["facebook"],
Platform.INSTAGRAM: mock_publishers["instagram"],
}
return mgr
def test_init(self, manager):
"""Test manager initialization."""
assert manager._publishers is not None
assert len(manager._publishers) == 4
def test_get_publisher(self, manager):
"""Test getting a specific publisher."""
from app.publishers.manager import Platform
publisher = manager.get_publisher(Platform.X)
assert publisher is not None
def test_get_available_platforms(self, manager):
"""Test getting available platforms."""
available = manager.get_available_platforms()
assert isinstance(available, list)
assert "x" in available
assert "threads" in available
@pytest.mark.asyncio
async def test_publish_single_platform(self, manager):
"""Test publishing to a single platform."""
from app.publishers.manager import Platform
result = await manager.publish(
platform=Platform.X,
content="Test post #Testing"
)
assert result.success is True
assert result.post_id == "123"
@pytest.mark.asyncio
async def test_publish_content_too_long(self, manager, mock_publishers):
"""Test that too long content fails validation."""
from app.publishers.manager import Platform
mock_publishers["x"].validate_content.return_value = False
result = await manager.publish(
platform=Platform.X,
content="x" * 300 # Exceeds 280 limit
)
assert result.success is False
assert "límite" in result.error_message.lower() or "excede" in result.error_message.lower()
@pytest.mark.asyncio
async def test_publish_unsupported_platform(self, manager):
"""Test publishing to unsupported platform."""
result = await manager.publish(
platform=MagicMock(value="unsupported"),
content="Test"
)
assert result.success is False
assert "no soportada" in result.error_message.lower()
@pytest.mark.asyncio
async def test_publish_to_multiple_parallel(self, manager):
"""Test publishing to multiple platforms in parallel."""
from app.publishers.manager import Platform
result = await manager.publish_to_multiple(
platforms=[Platform.X, Platform.THREADS],
content="Multi-platform test #Test",
parallel=True
)
assert result.success is True
assert len(result.successful_platforms) >= 1
assert "x" in result.results
assert "threads" in result.results
@pytest.mark.asyncio
async def test_publish_to_multiple_sequential(self, manager):
"""Test publishing to multiple platforms sequentially."""
from app.publishers.manager import Platform
result = await manager.publish_to_multiple(
platforms=[Platform.X, Platform.FACEBOOK],
content="Sequential test",
parallel=False
)
assert result.success is True
@pytest.mark.asyncio
async def test_publish_to_multiple_with_dict_content(self, manager):
"""Test publishing with platform-specific content."""
from app.publishers.manager import Platform
content = {
"x": "Short post for X #X",
"threads": "Longer post for Threads with more details #Threads"
}
result = await manager.publish_to_multiple(
platforms=[Platform.X, Platform.THREADS],
content=content
)
assert result.success is True
@pytest.mark.asyncio
async def test_publish_with_image_meta_platforms(self, manager, mock_publishers):
"""Test that Meta platforms get public image URL."""
from app.publishers.manager import Platform
with patch('app.publishers.manager.image_upload') as mock_upload:
mock_upload.upload_from_path = AsyncMock(
return_value="https://imgbb.com/image.jpg"
)
result = await manager.publish(
platform=Platform.THREADS,
content="Post with image",
image_path="/local/image.jpg"
)
mock_upload.upload_from_path.assert_called_once_with("/local/image.jpg")
@pytest.mark.asyncio
async def test_test_connection(self, manager, mock_publishers):
"""Test connection testing."""
from app.publishers.manager import Platform
mock_publishers["x"].client.get_me.return_value = MagicMock(
data=MagicMock(username="testuser", name="Test", id=123)
)
result = await manager.test_connection(Platform.X)
assert result["platform"] == "x"
assert result["configured"] is True
assert result["connected"] is True
@pytest.mark.asyncio
async def test_test_all_connections(self, manager, mock_publishers):
"""Test testing all connections."""
mock_publishers["x"].client.get_me.return_value = MagicMock(
data=MagicMock(username="test", name="Test", id=123)
)
results = await manager.test_all_connections()
assert len(results) == 4
assert "x" in results
assert "threads" in results
class TestMultiPublishResult:
"""Tests for MultiPublishResult dataclass."""
def test_successful_platforms(self):
"""Test getting successful platforms."""
from app.publishers.manager import MultiPublishResult
from app.publishers.base import PublishResult
results = {
"x": PublishResult(success=True, post_id="123"),
"threads": PublishResult(success=False, error_message="Error"),
"facebook": PublishResult(success=True, post_id="456")
}
multi = MultiPublishResult(success=True, results=results, errors=[])
assert set(multi.successful_platforms) == {"x", "facebook"}
def test_failed_platforms(self):
"""Test getting failed platforms."""
from app.publishers.manager import MultiPublishResult
from app.publishers.base import PublishResult
results = {
"x": PublishResult(success=True, post_id="123"),
"threads": PublishResult(success=False, error_message="Error"),
}
multi = MultiPublishResult(success=True, results=results, errors=[])
assert multi.failed_platforms == ["threads"]
class TestPlatformEnum:
"""Tests for Platform enum."""
def test_platform_values(self):
"""Test platform enum values."""
from app.publishers.manager import Platform
assert Platform.X.value == "x"
assert Platform.THREADS.value == "threads"
assert Platform.FACEBOOK.value == "facebook"
assert Platform.INSTAGRAM.value == "instagram"
def test_platform_is_string_enum(self):
"""Test platform enum is string."""
from app.publishers.manager import Platform
assert isinstance(Platform.X, str)
assert Platform.X == "x"

tests/test_scheduler.py Normal file

@@ -0,0 +1,259 @@
"""
Tests for ContentScheduler service.
"""
import pytest
from unittest.mock import MagicMock, patch
from datetime import datetime, timedelta
class TestContentScheduler:
"""Tests for the ContentScheduler class."""
@pytest.fixture
def mock_db_session(self):
"""Create a mock database session."""
mock_session = MagicMock()
mock_query = MagicMock()
mock_query.filter.return_value = mock_query
mock_query.first.return_value = None # No existing posts
mock_query.all.return_value = []
mock_query.order_by.return_value = mock_query
mock_session.query.return_value = mock_query
return mock_session
@pytest.fixture
def scheduler(self, mock_db_session):
"""Create a ContentScheduler with mocked database."""
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
from app.services.scheduler import ContentScheduler
return ContentScheduler()
def test_init(self, scheduler):
"""Test scheduler initialization."""
assert scheduler.posting_times is not None
assert "x" in scheduler.posting_times
assert "threads" in scheduler.posting_times
def test_get_next_available_slot_weekday(self, scheduler, mock_db_session, fixed_datetime):
"""Test getting next slot on a weekday."""
# Monday 10:00
weekday = datetime(2024, 6, 17, 10, 0, 0)
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.get_next_available_slot("x", after=weekday)
assert result is not None
assert result > weekday
def test_get_next_available_slot_weekend(self, scheduler, mock_db_session):
"""Test getting next slot on a weekend."""
# Saturday 10:00
weekend = datetime(2024, 6, 15, 10, 0, 0)
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.get_next_available_slot("x", after=weekend)
assert result is not None
def test_get_next_available_slot_late_night(self, scheduler, mock_db_session):
"""Test that late night moves to next day."""
# 11 PM
late_night = datetime(2024, 6, 17, 23, 0, 0)
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.get_next_available_slot("x", after=late_night)
# Should be next day
assert result.date() > late_night.date()
def test_get_available_slots(self, scheduler, mock_db_session, fixed_datetime):
"""Test getting all available slots."""
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
slots = scheduler.get_available_slots(
platform="x",
start_date=fixed_datetime,
days=3
)
assert len(slots) > 0
for slot in slots:
assert slot.platform == "x"
assert slot.available is True
def test_schedule_post(self, scheduler, mock_db_session):
"""Test scheduling a post."""
mock_post = MagicMock()
mock_post.id = 1
mock_post.platforms = ["x"]
mock_post.status = "draft"
mock_db_session.query.return_value.filter.return_value.first.return_value = mock_post
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.schedule_post(
post_id=1,
scheduled_at=datetime(2024, 6, 20, 12, 0, 0)
)
assert result == datetime(2024, 6, 20, 12, 0, 0)
assert mock_post.status == "scheduled"
mock_db_session.commit.assert_called_once()
def test_schedule_post_auto_time(self, scheduler, mock_db_session):
"""Test scheduling with auto-selected time."""
mock_post = MagicMock()
mock_post.id = 1
mock_post.platforms = ["x"]
# First call returns the post, second returns None (no conflicts)
mock_db_session.query.return_value.filter.return_value.first.side_effect = [
mock_post, None, None, None, None, None, None
]
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.schedule_post(post_id=1)
assert result is not None
assert mock_post.scheduled_at is not None
def test_schedule_post_not_found(self, scheduler, mock_db_session):
"""Test scheduling a non-existent post."""
mock_db_session.query.return_value.filter.return_value.first.return_value = None
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
with pytest.raises(ValueError, match="not found"):
scheduler.schedule_post(post_id=999)
def test_reschedule_post(self, scheduler, mock_db_session):
"""Test rescheduling a post."""
mock_post = MagicMock()
mock_post.id = 1
mock_post.status = "scheduled"
mock_db_session.query.return_value.filter.return_value.first.return_value = mock_post
new_time = datetime(2024, 6, 25, 14, 0, 0)
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.reschedule_post(post_id=1, new_time=new_time)
assert result is True
assert mock_post.scheduled_at == new_time
mock_db_session.commit.assert_called_once()
def test_reschedule_published_post_fails(self, scheduler, mock_db_session):
"""Test that published posts cannot be rescheduled."""
mock_post = MagicMock()
mock_post.status = "published"
mock_db_session.query.return_value.filter.return_value.first.return_value = mock_post
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.reschedule_post(
post_id=1,
new_time=datetime(2024, 6, 25, 14, 0, 0)
)
assert result is False
def test_cancel_scheduled(self, scheduler, mock_db_session):
"""Test canceling a scheduled post."""
mock_post = MagicMock()
mock_post.id = 1
mock_post.status = "scheduled"
mock_post.scheduled_at = datetime(2024, 6, 20, 12, 0, 0)
mock_db_session.query.return_value.filter.return_value.first.return_value = mock_post
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.cancel_scheduled(post_id=1)
assert result is True
assert mock_post.status == "draft"
assert mock_post.scheduled_at is None
def test_get_calendar(self, scheduler, mock_db_session):
"""Test getting calendar view."""
mock_posts = [
MagicMock(
id=1,
content="Test post 1",
platforms=["x"],
status="scheduled",
scheduled_at=datetime(2024, 6, 17, 12, 0, 0),
content_type="tip"
),
MagicMock(
id=2,
content="Test post 2",
platforms=["threads"],
status="scheduled",
scheduled_at=datetime(2024, 6, 17, 14, 0, 0),
content_type="product"
)
]
mock_query = MagicMock()
mock_query.filter.return_value = mock_query
mock_query.order_by.return_value = mock_query
mock_query.all.return_value = mock_posts
mock_db_session.query.return_value = mock_query
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
result = scheduler.get_calendar(
start_date=datetime(2024, 6, 15),
end_date=datetime(2024, 6, 20)
)
assert "2024-06-17" in result
assert len(result["2024-06-17"]) == 2
def test_auto_fill_calendar(self, scheduler, mock_db_session, fixed_datetime):
"""Test auto-filling calendar with suggested slots."""
with patch('app.services.scheduler.SessionLocal', return_value=mock_db_session):
slots = scheduler.auto_fill_calendar(
start_date=fixed_datetime,
days=3,
platforms=["x", "threads"]
)
assert len(slots) > 0
# Should be sorted by datetime
for i in range(1, len(slots)):
assert slots[i].datetime >= slots[i-1].datetime
class TestOptimalTimes:
"""Tests for optimal posting times configuration."""
def test_x_has_weekday_times(self):
"""Test that X platform has weekday times defined."""
from app.data.content_templates import OPTIMAL_POSTING_TIMES
assert "x" in OPTIMAL_POSTING_TIMES
assert "weekday" in OPTIMAL_POSTING_TIMES["x"]
assert len(OPTIMAL_POSTING_TIMES["x"]["weekday"]) > 0
def test_all_platforms_have_times(self):
"""Test all platforms have posting times."""
from app.data.content_templates import OPTIMAL_POSTING_TIMES
expected_platforms = ["x", "threads", "instagram", "facebook"]
for platform in expected_platforms:
assert platform in OPTIMAL_POSTING_TIMES
assert "weekday" in OPTIMAL_POSTING_TIMES[platform]
assert "weekend" in OPTIMAL_POSTING_TIMES[platform]
def test_time_format(self):
"""Test that times are in correct HH:MM format."""
from app.data.content_templates import OPTIMAL_POSTING_TIMES
import re
time_pattern = re.compile(r'^([01]?[0-9]|2[0-3]):[0-5][0-9]$')
for platform, times in OPTIMAL_POSTING_TIMES.items():
for day_type in ["weekday", "weekend"]:
for time_str in times.get(day_type, []):
assert time_pattern.match(time_str), f"Invalid time format: {time_str}"