Deployment Guide (All Environments)
This is the authoritative guide for deploying Meo Mai Moi in development, staging, and production.
There are now three deployment entrypoints:

- Manual/operator deploys use `./utils/deploy.sh [--seed] [--fresh] [--no-cache] [--skip-build] [--no-interactive] [--quiet] [--auto-backup] [--restore]`.
- CI-driven development deploys use `./utils/deploy-ci-dev-ab.sh`.
- CI-driven production deploys use `./utils/deploy-ci-prod-ab.sh`.

See `./utils/deploy.sh --help` for the full manual/operator options.
Prerequisites
- Docker and Docker Compose installed
- Git installed and configured on the server
- Production: HTTPS terminated at your reverse proxy (nginx/caddy/traefik/Cloudflare)
- No host-level Bun installation is required for docs builds
Environment configuration
The deploy script uses a dual-file approach:
- Root `.env`: Docker Compose variables (build args like `VAPID_PUBLIC_KEY`, database credentials for the container)
- `backend/.env`: Laravel runtime configuration (`APP_KEY`, mail settings, etc.)
If these files don't exist, the deploy script will create them interactively (or non‑interactively with defaults when --no-interactive is used).
Root .env important variables:
- `VAPID_PUBLIC_KEY`, `VAPID_PRIVATE_KEY` (for push notifications; generate with `bun x web-push generate-vapid-keys`)
- Optional Umami analytics for the frontend SPA:
  - `VITE_UMAMI_URL`, `VITE_UMAMI_WEBSITE_ID`
  - `VITE_UMAMI_DOMAINS` (comma-separated allowlist, optional)
  - `VITE_UMAMI_DEBUG`, `VITE_UMAMI_LAZY_LOAD` (optional flags)
- `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD` (must match `backend/.env` `DB_*` values)
- Optional host port bindings for shared servers:
  - `BACKEND_HOST_BIND`, `BACKEND_HOST_PORT`
  - `REVERB_HOST_BIND`, `REVERB_HOST_PORT`
  - `DB_HOST_BIND`, `DB_HOST_PORT`
  - `HTTPS_HTTP_HOST_BIND`, `HTTPS_HTTP_HOST_PORT`
  - `HTTPS_HTTPS_HOST_BIND`, `HTTPS_HTTPS_HOST_PORT`
- Optional: `DEPLOY_NOTIFY_ENABLED=true`, `DEPLOY_NOTIFY_TELEGRAM_BOT_TOKEN`, `DEPLOY_NOTIFY_TELEGRAM_CHAT_ID` for deployment and monitoring notifications
- Telegram user-bot runtime config lives in `backend/.env`, not the root `.env`: `TELEGRAM_USER_BOT_TOKEN`, `TELEGRAM_USER_BOT_USERNAME`
- Optional: `DOCS_STRICT_LINKS` controls whether dead links in the docs fail the build (`false` by default in development, `true` by default in staging/production)
Umami note: these VITE_UMAMI_* values are build-time inputs for the frontend bundle. After changing them, rebuild/redeploy the backend image so the SPA assets are regenerated with the new analytics configuration.
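For example, assuming the compose service name `backend` used elsewhere in this guide, a rebuild after changing the Umami values looks like:

```bash
# Rebuild the image so the new VITE_UMAMI_* build args are baked into the
# SPA bundle, then restart the service with the fresh image.
docker compose build backend
docker compose up -d backend
```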
backend/.env important variables:
- `APP_ENV` (development|staging|production)
- `APP_URL` (e.g., https://example.com or https://localhost)
- `DB_*` (DB host, name, user, password; must match root `.env` `POSTGRES_*` values)
- Optional: `DEPLOY_HOST_PORT` to override the host port used by deployment verification. If omitted, deploy verification follows `BACKEND_HOST_PORT` from the root `.env`, then falls back to `8000`.
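The described precedence can be sketched as a tiny shell helper (illustrative; the deploy script's actual `.env` parsing may differ):

```shell
# resolve_verify_port: DEPLOY_HOST_PORT wins, then BACKEND_HOST_PORT, then 8000.
resolve_verify_port() {
  echo "${DEPLOY_HOST_PORT:-${BACKEND_HOST_PORT:-8000}}"
}
```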
Documentation build contract
- Docs are built in a disposable Bun Docker container (`oven/bun:1`) during deploy.
- The backend serves docs by bind-mounting `docs/.vitepress/dist` to `/var/www/public/docs`.
- Deploy validates docs artifacts before starting containers.
  - In `staging` and `production`, deployment fails if `docs/.vitepress/dist/index.html` is missing.
  - In `production`, deployment also fails if the docs mount source is empty.
- Dead-link policy:
  - `production` and `staging`: strict by default (`DOCS_STRICT_LINKS=true` behavior).
  - `development`: non-strict by default (`DOCS_STRICT_LINKS=false`), so deploy can continue with the existing docs artifact if the docs rebuild fails.
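The disposable-container build is roughly equivalent to the following (image name from the contract above; the exact bun script name is an assumption):

```bash
# Build the docs inside a throwaway Bun container.
docker run --rm -v "$PWD/docs:/docs" -w /docs oven/bun:1 \
  sh -c 'bun install && bun run build'

# Mirror the deploy validation: fail if the expected artifact is missing.
test -f docs/.vitepress/dist/index.html || {
  echo 'docs artifact missing: docs/.vitepress/dist/index.html' >&2
  exit 1
}
```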
Deployments
Development
```bash
./utils/deploy.sh               # migrate only, preserves data
./utils/deploy.sh --seed        # migrate + seed sample data
./utils/deploy.sh --auto-backup # create backup before deploying
./utils/deploy.sh --skip-build  # skip Docker image builds (uses existing images)
```

For CI-driven development deployment on the server, use:

```bash
./utils/deploy-ci-dev-ab.sh
```

`deploy-ci-dev-ab.sh` is the preferred Woodpecker entrypoint for dev.meo-mai-moi.com. It deploys into the inactive slot, verifies that slot, then switches NGINX over only after the new slot is healthy. It intentionally skips the old self-updating git sync flow because CI already decides which commit is being deployed.
For catarchy2, the recommended root .env values are:
```
BACKEND_HOST_BIND=127.0.0.1
BACKEND_HOST_PORT=8010
REVERB_HOST_BIND=127.0.0.1
REVERB_HOST_PORT=8090
SLOT_A_BACKEND_HOST_BIND=127.0.0.1
SLOT_A_BACKEND_HOST_PORT=8001
SLOT_A_REVERB_HOST_BIND=127.0.0.1
SLOT_A_REVERB_HOST_PORT=8081
SLOT_B_BACKEND_HOST_BIND=127.0.0.1
SLOT_B_BACKEND_HOST_PORT=8002
SLOT_B_REVERB_HOST_BIND=127.0.0.1
SLOT_B_REVERB_HOST_PORT=8082
DB_HOST_BIND=127.0.0.1
DB_HOST_PORT=5433
DB_SERVICE_MODE=external
DB_EXTERNAL_CONTAINER=shared-postgres
SHARED_SERVICES_NETWORK_EXTERNAL=true
SHARED_SERVICES_NETWORK_NAME=shared-services
```

And in `backend/.env`:
```
APP_URL=https://dev.meo-mai-moi.com
ENABLE_HTTPS=false
DB_HOST=shared-postgres
DB_PORT=5432
DB_DATABASE=meo_mai_moi_dev
DB_USERNAME=meo_mai_moi_dev
DB_PASSWORD=replace-me
```

This keeps Docker ports private to the host and lets host NGINX on catarchy2 own public 80/443.
In this mode, the backend joins the Docker network shared-services and uses shared PostgreSQL on catarchy2 instead of starting its own long-lived local db service.
Production A/B slots on meo
Production now uses the same slot-based rollout shape as development, but with a dedicated production slot helper:
```bash
./utils/deploy-ci-prod-ab.sh
```

Recommended root `.env` values on meo:
```
SLOT_A_BACKEND_HOST_BIND=127.0.0.1
SLOT_A_BACKEND_HOST_PORT=8011
SLOT_A_REVERB_HOST_BIND=127.0.0.1
SLOT_A_REVERB_HOST_PORT=8091
SLOT_B_BACKEND_HOST_BIND=127.0.0.1
SLOT_B_BACKEND_HOST_PORT=8012
SLOT_B_REVERB_HOST_BIND=127.0.0.1
SLOT_B_REVERB_HOST_PORT=8092
DB_SERVICE_MODE=external
DB_EXTERNAL_CONTAINER=shared-postgres
SHARED_SERVICES_NETWORK_EXTERNAL=true
SHARED_SERVICES_NETWORK_NAME=shared-services
```

And in `backend/.env`:
```
APP_ENV=production
APP_URL=https://meo-mai-moi.com
ENABLE_HTTPS=false
DB_HOST=shared-postgres
DB_PORT=5432
DB_DATABASE=meo_mai_moi
DB_USERNAME=user
DB_PASSWORD=replace-me
```

The active production slot is tracked in `/srv/meo-mai-moi/.deploy-active-slot-prod`.

The production A/B flow is:
- determine the inactive slot
- build and start only that target slot
- verify that target slot on its host-bound port
- rewrite the production NGINX vhost from `deploy/nginx/meo-mai-moi.com.conf.template`
- reload NGINX and mark the new slot active
- stop the legacy single-backend service after the first successful slot rollout
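The slot selection at the heart of this flow is small enough to sketch (marker path from above; the helper name is illustrative, not the script's actual function):

```shell
# inactive_slot ACTIVE: the deploy target is whichever slot is not active.
inactive_slot() {
  if [ "$1" = "a" ]; then echo "b"; else echo "a"; fi
}

# Read the marker; default to "a" when no rollout has happened yet.
active="$(cat /srv/meo-mai-moi/.deploy-active-slot-prod 2>/dev/null || echo a)"
target="$(inactive_slot "$active")"
echo "deploying into slot $target (active: $active)"
```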
Important reverse-proxy note:

- the host NGINX vhost on `meo` must be a pure reverse proxy to the active slot
- do not keep `root /srv/meo-mai-moi/backend/public` or host-side `try_files` rules in `/etc/nginx/conf.d/meo-mai-moi.com.conf`
- otherwise the host can serve `public/index.php` as a static file instead of forwarding to PHP-FPM inside the active backend container
- slot activation should always be followed by `nginx -t` before reload
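A compliant vhost is essentially just a proxy block; a sketch (certificate directives omitted, upstream port as rendered by the template for the active slot):

```nginx
server {
    listen 443 ssl;
    server_name meo-mai-moi.com;
    # ssl_certificate / ssl_certificate_key as already configured

    location / {
        # Rewritten by deploy to the active slot port (8011 or 8012).
        proxy_pass http://127.0.0.1:8011;
    }
    # No root/try_files pointing at /srv/meo-mai-moi/backend/public here.
}
```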
Development A/B slots on catarchy2
dev.meo-mai-moi.com now uses two backend slots on the same host:
- slot `a` -> `backend_a` on `127.0.0.1:8001` and Reverb on `127.0.0.1:8081`
- slot `b` -> `backend_b` on `127.0.0.1:8002` and Reverb on `127.0.0.1:8082`

The active slot is tracked in `/opt/meo-mai-moi-dev/.deploy-active-slot`.

Useful operational commands on catarchy2:
```bash
cd /opt/meo-mai-moi-dev
./utils/dev-slot.sh status
./utils/dev-slot.sh active
./utils/dev-slot.sh inactive
```

The A/B deploy flow is:

- determine the inactive slot
- build and start only that target slot
- run migrations and application checks against the target slot
- rewrite the NGINX vhost from `deploy/nginx/dev.meo-mai-moi.com.conf.template`
- reload NGINX and mark the new slot active
This keeps the previous slot available as a rollback target and avoids the old blanket docker compose stop behavior during development slot deploys.
Note: Use --skip-build for faster deployments when you have already built the Docker images and just need to restart containers or run migrations.
Memory Optimization: In development environments (APP_ENV=development), the legacy single-slot deploy stops containers before build to reduce peak memory usage. In A/B mode, the deploy keeps the active slot running and only stops the inactive target service if needed. Production and staging environments build images while services are still running to minimize downtime.
HTTPS in development is handled by the https-proxy service (compose profile https).
To enable HTTPS locally:
- Set in `backend/.env`:

  ```
  APP_ENV=development
  ENABLE_HTTPS=true
  ```

- Generate self‑signed certificates (one time): `./utils/generate-dev-certs.sh`
- Deploy: `./utils/deploy.sh`

Access:

- App: http://localhost:8000 or https://localhost
- Admin: http(s)://localhost/admin
- Docs: http(s)://localhost/docs
Staging / Production
Use the same command on the server:
```bash
./utils/deploy.sh --no-interactive --quiet --auto-backup
```

Notes:

- The backend container serves HTTP on port 80. In production A/B mode, terminate HTTPS at your reverse proxy and forward to the active slot host port (`8011` or `8012`) via the generated NGINX vhost.
- CI-based production rollout prefers the A/B entrypoint above, which verifies the inactive slot before the public switch.
- Migrations run via the deploy script only (the container's entrypoint has `RUN_MIGRATIONS=false` to avoid race conditions).
- The `--auto-backup` flag automatically creates a backup before deployment for safety.
- For production environments, consider setting up automated daily backups using the backup scheduler.
- Deploy fails fast if docs artifacts are missing or invalid in staging and production.
- In external PostgreSQL mode, backup and restore helpers must use client tools compatible with the shared server version; prefer the shared DB container over the app container for `pg_dump`/`psql`.
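For example, a manual dump through the shared container (container and credential names as configured in the examples above) might look like:

```bash
# Run pg_dump inside the shared Postgres container so client and server
# versions always match, then compress on the host.
docker exec shared-postgres pg_dump -U user meo_mai_moi \
  | gzip > backups/manual-backup.sql.gz
```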
Branch strategy
Deployment target branch is determined by environment and can be customized:
- Defaults:
  - production → `main`
  - staging → `staging`
  - development → `dev`
- Project‑level overrides: create a `.deploy-config` file in the repo root, or base it on the example:

  ```
  # .deploy-config.example
  DEPLOY_BRANCH_PRODUCTION=main
  DEPLOY_BRANCH_STAGING=staging
  DEPLOY_BRANCH_DEVELOPMENT=dev
  ```

- One‑off override: set the `DEPLOY_BRANCH_OVERRIDE` env var when invoking the script.
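For example:

```bash
# Deploy from a non-default branch for this run only.
DEPLOY_BRANCH_OVERRIDE=staging ./utils/deploy.sh --no-interactive --quiet
```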
Webhook / CI automation
Common ways to automate deployments:

- CI-driven development deployment should SSH into the server and run `./utils/deploy-ci-dev-ab.sh`. This is the preferred path for Woodpecker-based dev deployments because it performs a slot-aware A/B rollout. Woodpecker decides the commit; the server-side script only deploys the already-checked-out code.
- Manual or legacy automation can still SSH into the server and run `./utils/deploy.sh --no-interactive --quiet`. This remains useful for operator-driven deploys and older webhook-style flows where something else has already updated the checkout on the target host.
- A webhook receiver on the server (already installed in your environment) validates the payload signature and triggers the same command above. Ensure the deploy user has the repository checked out with proper permissions.
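Signature validation in such a receiver typically boils down to recomputing an HMAC over the payload; a minimal sketch (the secret variable and signature format are assumptions, not this project's actual receiver):

```shell
# verify_payload FILE SIG: recompute the HMAC-SHA256 of FILE with the shared
# secret and compare it to the hex signature sent by the webhook caller.
verify_payload() {
  secret="${WEBHOOK_SECRET:-replace-me}"
  calc="$(openssl dgst -sha256 -hmac "$secret" < "$1" | awk '{print $NF}')"
  [ "$calc" = "$2" ]
}
```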
Woodpecker dev pipeline on catarchy2
The repository now includes a starter .woodpecker.yml for dev deployments.
Current intended flow:
- A push to `dev` triggers Woodpecker.
- Woodpecker SSHes into `catarchy2`.
- On the server, the long-lived checkout at `DEV_DEPLOY_PATH` is reset to the pushed commit.
- The server runs `./utils/deploy-ci-dev-ab.sh`.
Current dev checkout and ports on catarchy2:
- checkout path: `/opt/meo-mai-moi-dev`
- backend: `127.0.0.1:8001`
- reverb: `127.0.0.1:8081`
- database: shared PostgreSQL on Docker network `shared-services` (`shared-postgres:5432`)
Woodpecker secrets are intentionally split by scope:
- shared/global admin secrets: `CATARCHY2_HOST`, `CATARCHY2_USER`, `CATARCHY2_SSH_KEY`
- repo-local secrets for `meo-mai-moi`: `DEV_DEPLOY_PATH`

Recommended values:

- `CATARCHY2_HOST=10.23.0.1` - SSH host for catarchy2 over WireGuard
- `CATARCHY2_USER=ubuntu` - SSH user on catarchy2
- `CATARCHY2_SSH_KEY` - base64-encoded private deploy key
- `DEV_DEPLOY_PATH=/opt/meo-mai-moi-dev` - absolute path to the dev checkout on catarchy2
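A key can be encoded to the required single line with, for example (GNU coreutils; the key path is an example, not the project's actual key location):

```bash
# Produce a one-line base64 value suitable for pasting into the secret store.
base64 -w0 < ~/.ssh/ci_deploy_key && echo
```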
Why CATARCHY2_HOST is not 127.0.0.1:
- Woodpecker steps run inside containers.
- Inside a CI container, `127.0.0.1` means the container itself, not the VPS host.
- Use the host's real reachable address instead. In this setup, the preferred address is the WireGuard IP `10.23.0.1`.
The pipeline intentionally deploys via SSH into a host checkout instead of using host-path volumes inside Woodpecker steps. That keeps the repo compatible with non-trusted Woodpecker project settings and matches the existing deployment scripts more naturally.
Woodpecker main pipeline on meo
Current intended flow:
- A push to `main` triggers Woodpecker.
- Woodpecker SSHes into `meo`.
- On the server, the long-lived checkout at `/srv/meo-mai-moi` is reset to the pushed commit.
- The server runs `./utils/deploy-ci-prod-ab.sh`.
Current production checkout and slots on meo:
- checkout path: `/srv/meo-mai-moi`
- active slot marker: `/srv/meo-mai-moi/.deploy-active-slot-prod`
- slot `a`: backend `127.0.0.1:8011`, reverb `127.0.0.1:8091`
- slot `b`: backend `127.0.0.1:8012`, reverb `127.0.0.1:8092`
- database: shared PostgreSQL on Docker network `shared-services` (`shared-postgres:5432`)
Woodpecker secrets for production:
- shared/global admin secrets: `MEO_HOST`, `MEO_USER`, `MEO_SSH_KEY`

Operational notes:

- `MEO_SSH_KEY` should be a one-line base64 encoding of the private deploy key content
- manual reruns are allowed in addition to push-triggered runs
- stale `deploy.lock` files should be treated as interrupted-deploy residue, not as proof that a deploy is still active
- CI deploy entrypoints now wait and retry for a short window if another deploy is actively holding the lock, instead of failing immediately on the first contention
- lock contention messages should report the holder's original start time and PID, rather than the retrying process's own start time
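The wait-and-retry behavior amounts to a bounded poll on the lock file; a sketch (interval and retry limit are illustrative, not the scripts' actual values):

```shell
# wait_for_deploy_lock LOCKFILE TRIES DELAY: poll until the lock disappears,
# returning failure if it is still held after TRIES attempts.
wait_for_deploy_lock() {
  i=0
  while [ -e "$1" ] && [ "$i" -lt "$2" ]; do
    sleep "$3"
    i=$((i + 1))
  done
  # Succeed only if the lock is gone, so the caller can report the holder.
  [ ! -e "$1" ]
}
```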
Reading CI-safe deploy logs
For the current catarchy2 dev setup, these log lines are expected informational skips, not deployment problems:
```
Bun not installed on host, skipping API generation check
Bun not installed on host, skipping i18n check
php not found on host; skipping OpenAPI spec generation
```
They appear because the actual build happens inside Docker rather than relying on host-installed Bun or PHP.
Logs and retention
- Per‑run logs are written to `.deploy/deploy-YYYYMMDD-HHMMSS.log` and `.deploy/deploy-YYYYMMDD-HHMMSS.json`.
- Convenience symlinks `.deploy.log` and `.deploy.log.json` point to the latest run.
- Logs older than 30 days are cleaned up automatically.
- Volume deletion events are logged to `.deploy/volume-deletions.log` for an audit trail.
Volume safety and debugging
Database volume protection
The deploy script includes several safeguards against accidental data loss:
- Empty database detection: deployment fails if the database is empty (unless `--allow-empty-db` or `--seed` is specified)
- Volume fingerprinting: tracks the database volume creation timestamp in `.db_volume_fingerprint` to detect unexpected volume recreation
- Volume deletion logging: all `--fresh` deployments log volume deletion events to `.deploy/volume-deletions.log`
Investigating data loss
If you encounter unexpected database emptiness or data loss, use these tools:
Check volume creation time vs fingerprint:

```bash
docker volume inspect meo-mai-moi_pgdata --format '{{ .CreatedAt }}'
cat .db_volume_fingerprint
```

If these don't match, the volume was recreated outside of tracked deployments.

Check volume deletion history:

```bash
cat .deploy/volume-deletions.log
```

Monitor volume events in real time (run in a separate terminal):

```bash
docker events --filter 'type=volume' --format '{{.Time}} {{.Action}} {{.Actor.Attributes.name}}'
```

Review historical volume events:

```bash
./utils/check-volume-events.sh [days-back]   # Check last N days (default: 7)
```

Note: Docker event logs are ephemeral and may be cleared or rotated. For persistent tracking, rely on `.deploy/volume-deletions.log`.
Common causes of volume deletion
- Running `docker compose down -v` (the `-v` flag deletes volumes)
- Running `docker system prune -a --volumes`
- Using `./utils/deploy.sh --fresh` (intentional, but logged)
- External tools or scripts that manage Docker resources
Telegram Notifications
The system supports Telegram notifications for:
- Deployment start/success/failure
- Database monitoring alerts (empty database, query failures)
Setup
Create a Telegram bot (one-time):
- Message @BotFather on Telegram
- Send `/newbot` and follow the instructions
- Copy the bot token (format: `123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11`)
Get your Chat ID:
- Message @userinfobot on Telegram
- Copy your Chat ID (numeric, e.g., `127529747`)
Configure in the root `.env`:

```bash
DEPLOY_NOTIFY_ENABLED=true
DEPLOY_NOTIFY_TELEGRAM_BOT_TOKEN=123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11
DEPLOY_NOTIFY_TELEGRAM_CHAT_ID=127529747
```

Test notifications:

```bash
./utils/deploy_notify_test.sh
```

Rebuild backend (required for monitoring alerts):

```bash
docker compose up -d --build backend
```
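Independently of the helper script, the token and chat ID can be sanity-checked with a raw Telegram Bot API call (requires network access from the host):

```bash
# sendMessage is the standard Bot API method; a JSON reply with "ok":true
# confirms the credentials work.
curl -s "https://api.telegram.org/bot${DEPLOY_NOTIFY_TELEGRAM_BOT_TOKEN}/sendMessage" \
  -d chat_id="${DEPLOY_NOTIFY_TELEGRAM_CHAT_ID}" \
  -d text="meo-mai-moi notification test"
```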
Database Monitoring
A continuous monitoring script runs inside the backend container, checking every 60 seconds for:
- Empty database (all data lost)
- Database query failures
Alerts are sent to Telegram with diagnostic information. Check logs:
```bash
docker compose exec backend tail -f /var/www/storage/logs/db-monitor.log
```

🔑 Seeder Overrides
Configure the initial Super Admin credentials via environment variables in backend/.env*:
```
SEED_ADMIN_EMAIL=admin@catarchy.space
SEED_ADMIN_PASSWORD=password
# Optional: SEED_ADMIN_NAME="Super Admin"
```

DatabaseSeeder and deploy.sh will honor these values when seeding and when checking for the admin user during deployments.
Demo account seeding is also configurable:
```
DEMO_USER_EMAIL=demo@catarchy.space
DEMO_USER_NAME="Demo Caregiver"
DEMO_USER_PASSWORD=password
DEMO_LOGIN_TOKEN_TTL_SECONDS=120
# Optional: DEMO_LOGIN_REDIRECT_PATH=/
```

When DatabaseSeeder runs in non-production environments, it ensures this demo user exists and seeds a curated set of pets, health records, foster relationships, microchip data, and in-app notifications for the public demo flow.
🌱 Safe Production Seeders
When deploying to production, you may need to update basic reference data (categories, cities, pet types, etc.) without creating test users or pets. The following seeders are safe to run on production as they only populate essential reference data:
Safe Seeders to Run on Production
```bash
# Core reference data
docker compose exec backend php artisan db:seed --class=CitySeeder
docker compose exec backend php artisan db:seed --class=PetTypeSeeder
docker compose exec backend php artisan db:seed --class=CategorySeeder

# Authentication & permissions
docker compose exec backend php artisan db:seed --class=ShieldSeeder
docker compose exec backend php artisan db:seed --class=RolesAndPermissionsSeeder

# Configuration & notifications
docker compose exec backend php artisan db:seed --class=SettingsSeeder
docker compose exec backend php artisan db:seed --class=NotificationPreferenceSeeder
docker compose exec backend php artisan db:seed --class=NotificationTemplateSeeder
```

What These Seeders Provide
- CitySeeder: Creates city entries for various countries (reference data only)
- PetTypeSeeder: Creates pet types (cat, dog, bird, etc.) with their configurations
- CategorySeeder: Creates pet categories/breeds and characteristics for each pet type
- ShieldSeeder: Sets up Laravel Shield authentication/authorization data
- RolesAndPermissionsSeeder: Creates roles and permissions structure
- SettingsSeeder: Sets basic application settings (invite-only mode, email verification)
- NotificationPreferenceSeeder: Creates notification preference templates
- EmailConfigurationSeeder: Sets up email configuration options
- NotificationTemplateSeeder: Creates notification templates for the system
Important Notes
- These seeders use `updateOrCreate()`, so they're safe to run multiple times without duplicating data
- They only create reference/configuration data, not test users, pets, or other entities
- Avoid running `DatabaseSeeder` on production, as it calls multiple seeders including test data creation
- Unsafe seeders to avoid: `UserSeeder`, `HelperProfileSeeder`, `PlacementRequestSeeder`, `ReviewSeeder`, `E2ETestingSeeder`, `E2EEmailConfigurationSeeder`
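If you run the safe list often, it can be wrapped in one idempotent pass (same class names as listed above):

```bash
# Run every production-safe seeder in order; updateOrCreate() makes reruns safe.
for seeder in CitySeeder PetTypeSeeder CategorySeeder \
              ShieldSeeder RolesAndPermissionsSeeder \
              SettingsSeeder NotificationPreferenceSeeder NotificationTemplateSeeder; do
  docker compose exec backend php artisan db:seed --class="$seeder"
done
```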
💾 Backup & Restore System
The backup system supports both database and user uploads, with comprehensive safety features and automated scheduling options.
Creating Backups
Manual Backups
```bash
./utils/backup.sh all        # Create both database and uploads backup
./utils/backup.sh database   # Create only database backup
./utils/backup.sh uploads    # Create only uploads backup
./utils/backup.sh --list     # List all available backups
./utils/backup.sh --clean    # Remove backups older than 30 days
```

Automated Backups

```bash
./utils/backup-scheduler.sh             # Run scheduled backup (respects schedule)
./utils/backup-scheduler.sh --run-now   # Force immediate backup
./utils/backup-scheduler.sh --dry-run   # Test backup configuration without running
```

Cron Job Setup

```bash
./utils/setup-backup-cron.sh --interactive   # Interactive cron setup
./utils/setup-backup-cron.sh --add-daily     # Add daily backup cron job
./utils/setup-backup-cron.sh --add-weekly    # Add weekly backup cron job
./utils/setup-backup-cron.sh --remove        # Remove backup cron jobs
```

Backup Features:
- Comprehensive Coverage: Database + user uploads in coordinated backups
- Compressed Formats:
  - Database: `backups/backup-YYYY-MM-DD_HH-MM-SS.sql.gz`
  - Uploads: `backups/uploads_backup-YYYY-MM-DD_HH-MM-SS.tar.gz`
- Integrity Verification: SHA256 checksums for all backups
- Automatic Cleanup: Configurable retention (default: 30 days)
- Health Checks: Container status and connectivity validation
- Flexible Scheduling: Hourly, daily, weekly, monthly options
- Smart Scheduling: Only runs when needed based on last backup time
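For reference, a cron entry installed by `setup-backup-cron.sh` looks roughly like this (the schedule, checkout path, and log path are assumptions):

```
# Daily backup at 02:00, logging alongside the deploy logs
0 2 * * * cd /srv/meo-mai-moi && ./utils/backup-scheduler.sh >> .deploy/backup-cron.log 2>&1
```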
Restoring from Backups
Individual Component Restoration
```bash
./utils/backup.sh --restore-database backups/backup-2026-01-22_14-51-10.sql.gz
./utils/backup.sh --restore-uploads backups/uploads_backup-2026-01-22_14-51-10.tar.gz
```

Coordinated Restoration (Recommended)

```bash
./utils/backup.sh --restore-all 2026-01-22_14-51-10   # Restore both by timestamp
```

During Deployment (Automated)

```bash
./utils/deploy.sh --auto-backup       # Create backup before deploying
./utils/deploy.sh --restore-db        # Restore database before deploying
./utils/deploy.sh --restore-uploads   # Restore uploads before deploying
./utils/deploy.sh --restore           # Restore both database and uploads
```

When using `--no-interactive` together with restore flags, you must provide the restore target explicitly:

```bash
# Restore both (timestamp)
DEPLOY_RESTORE_TIMESTAMP=2026-01-22_14-51-10 ./utils/deploy.sh --no-interactive --restore

# Restore DB only (file path)
DEPLOY_RESTORE_DB_FILE=backups/backup-2026-01-22_14-51-10.sql.gz ./utils/deploy.sh --no-interactive --restore-db

# Restore uploads only (file path)
DEPLOY_RESTORE_UPLOADS_FILE=backups/uploads_backup-2026-01-22_14-51-10.tar.gz ./utils/deploy.sh --no-interactive --restore-uploads
```

Legacy Interactive Method

```bash
./utils/restore.sh   # Interactive menu (database, uploads, or both)
```

Safety Features
- Pre-restoration Validation: Disk space, connectivity, checksum verification
- Confirmation Prompts: Prevent accidental data loss with clear warnings
- Post-restoration Verification: Database connectivity and file count validation
- Detailed Logging: All operations logged with timestamps and error details
- Non-destructive Testing: Dry-run modes for backup scheduler
Rollback vs Restore
- Rollback (`rollback.sh`): revert code changes to a previous deployment snapshot while preserving database data
- Restore: replace current data with data from a backup file (destructive operation)
Use rollback for code issues, use restore for data recovery.
Configuration Options
Environment Variables for Backup Scheduler
```
BACKUP_SCHEDULE=daily         # hourly, daily, weekly, monthly
BACKUP_RETENTION_DAYS=30      # Days to keep backups
BACKUP_TYPE=all               # all, database, uploads
BACKUP_NOTIFICATION=true      # Enable Telegram notifications
LOG_FILE=/path/to/logfile     # Custom log file path
```

Environment Variables for Manual Backups

```
BACKUP_RETENTION_DAYS=7       # Override default retention
DB_USERNAME=user              # Database username
DB_DATABASE=meo_mai_moi       # Database name
```

Utility Scripts
- backup.sh - Comprehensive backup creation and restoration utility
- backup-scheduler.sh - Automated backup scheduler with health checks
- setup-backup-cron.sh - Cron job setup and management
- restore.sh - Legacy interactive restore utility (still supported)
- rollback.sh - Code rollback utility (preserves data)
Production Recommendations
- Enable Automated Backups: Set up daily or weekly cron jobs for production
- Use Coordinated Backups: Always backup both database and uploads together
- Test Restore Procedures: Regularly test restoration in staging environments
- Monitor Backup Health: Check logs and backup file integrity
- Configure Notifications: Enable Telegram alerts for backup failures
- Plan Retention Policy: Balance storage costs with recovery needs
- Secure Backup Storage: Consider off-site backup storage for critical data
Production HTTPS
Terminate HTTPS at your reverse proxy (nginx/caddy/traefik/Cloudflare) and forward to the backend’s HTTP port.
Set these headers:

- `X-Forwarded-Proto`
- `X-Forwarded-For`
- `X-Forwarded-Host`
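With nginx, for example, the forwarding block could set them like this (sketch; the upstream port depends on the active slot):

```nginx
location / {
    proxy_pass http://127.0.0.1:8011;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
}
```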
Do not use self‑signed certificates in production.
Migration strategy
- Migrations are run explicitly by the deploy script after the container is healthy.
- This prevents startup races and ensures orderly seeding and verification.
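The explicit migration step is roughly equivalent to running, once the backend reports healthy:

```bash
# --force skips Laravel's interactive production confirmation prompt.
docker compose exec backend php artisan migrate --force
```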