DevOps - GoHorseJobs (Development Environment)
Infrastructure, CI/CD, and deployment for the GoHorseJobs project on the apolo server.
Last Updated: 2026-02-18
Servers: Apolo VPS (Podman), Redbull VPS (Coolify)
Tech Stack: Podman, Systemd (Quadlet), Traefik, PostgreSQL, Coolify
☁️ Cloudflare DNS Zone
Zone Info
| Property | Value |
|---|---|
| Zone ID | 5e7e9286849525abf7f30b451b7964ac |
| Domain | gohorsejobs.com |
| Account | gohorsejobs |
| Email | yamamoto@rede5.com.br |
| Plan | Free Website |
| Name Servers | chuck.ns.cloudflare.com, fatima.ns.cloudflare.com |
API Access
# Token location: ~/.ssh/cloudflare-token
export CF_AUTH_EMAIL="yamamoto@rede5.com.br"
export CF_AUTH_KEY="5dcfd89a9d4ec330dede0d4074a518f26818e"
# List zones
curl -s -H "X-Auth-Email: $CF_AUTH_EMAIL" -H "X-Auth-Key: $CF_AUTH_KEY" \
"https://api.cloudflare.com/client/v4/zones"
# List DNS records
curl -s -H "X-Auth-Email: $CF_AUTH_EMAIL" -H "X-Auth-Key: $CF_AUTH_KEY" \
"https://api.cloudflare.com/client/v4/zones/5e7e9286849525abf7f30b451b7964ac/dns_records"
# Purge cache
curl -s -X DELETE -H "X-Auth-Email: $CF_AUTH_EMAIL" -H "X-Auth-Key: $CF_AUTH_KEY" \
-H "Content-Type: application/json" \
"https://api.cloudflare.com/client/v4/zones/5e7e9286849525abf7f30b451b7964ac/purge_cache" \
-d '{"purge_everything":true}'
Active DNS Records (gohorsejobs.com)
| Subdomain | Type | IP/Target | Proxied |
|---|---|---|---|
| dev.gohorsejobs.com | A | 38.19.201.52 | No |
| api.gohorsejobs.com | A | 86.48.29.139 | Yes |
| api-dev.gohorsejobs.com | A | 86.48.29.139 | Yes |
| api-local.gohorsejobs.com | A | 38.19.201.52 | No |
| b-local.gohorsejobs.com | A | 38.19.201.52 | No |
| s-local.gohorsejobs.com | A | 38.19.201.52 | No |
| coolify-dev.gohorsejobs.com | A | 185.194.141.70 | No |
| local.gohorsejobs.com | A | 185.194.141.70 | No |
| api-local.gohorsejobs.com | A | 185.194.141.70 | No |
| b-local.gohorsejobs.com | A | 185.194.141.70 | No |
| s-local.gohorsejobs.com | A | 185.194.141.70 | No |
| panel.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| pipe.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| alert.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| task.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| stats.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| storage.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| base.gohorsejobs.com | A | Multiple | No |
| reg.gohorsejobs.com | A | Multiple (Load Balanced) | Yes |
| gohorsejobs.com | CNAME | gohorsejobs.pages.dev | Yes |
| *.gohorsejobs.com | CNAME | 8a3f435b-f374-4268-90f7-5610f577c706.cfargotunnel.com | Yes |
| mail.gohorsejobs.com | CNAME | everest.mxrouting.net | No |
Total: 190 DNS records (paginated)
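Pulling the full record set means walking the pages. A minimal sketch, assuming the standard Cloudflare `page`/`per_page` query parameters and the `result_info.total_pages` response field; `fetch_page` is stubbed here so the loop can be exercised offline — replace the stub body with the real curl call shown above:

```shell
fetch_page() { # $1 = page number
  # Real call (replace this stub body; zone ID from the table above):
  # curl -s -H "X-Auth-Email: $CF_AUTH_EMAIL" -H "X-Auth-Key: $CF_AUTH_KEY" \
  #   "https://api.cloudflare.com/client/v4/zones/5e7e9286849525abf7f30b451b7964ac/dns_records?page=$1&per_page=100"
  echo "records-page-$1" # stub output so the loop can run offline
}

TOTAL_PAGES=2 # real code: read .result_info.total_pages from the first response (e.g. with jq)
page=1
pages_fetched=""
while [ "$page" -le "$TOTAL_PAGES" ]; do
  pages_fetched="$pages_fetched$(fetch_page "$page") "
  page=$((page + 1))
done
echo "$pages_fetched"
```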
☁️ Coolify DEV Environment (Redbull)
Development environment on Coolify for automated deployment via Git.
Server Info
| Property | Value |
|---|---|
| Host | redbull (185.194.141.70) |
| Coolify URL | https://redbull.rede5.com.br |
| API Token | ~/.ssh/coolify-redbull-token |
| SSH Key | ~/.ssh/civo |
| Project UUID | gkgksco0ow4kgwo8ow4cgs8c |
| Environment | dev |
Resources Created
| Resource | UUID | Port | Domain | Status |
|---|---|---|---|---|
| Backend | iw4sow8s0kkg4cccsk08gsoo | 8521 | https://api-local.gohorsejobs.com | running |
| Frontend | ao8g40scws0w4cgo8coc8o40 | 3000 | https://local.gohorsejobs.com | running |
| Backoffice | hg48wkw4wggwsswcwc8sooo4 | 3001 | https://b-local.gohorsejobs.com | running |
| Seeder | q4w48gos8cgssso00o8w8gck | 8080 | https://s-local.gohorsejobs.com | running:healthy |
| Database (PostgreSQL) | bgws48os8wgwk08o48wg8k80 | 5432 | Internal only | running:healthy |
API Reference
Base URL: https://redbull.rede5.com.br/api/v1
Server UUID: m844o4gkwkwcc0k48swgs8c8
# List applications
curl -s -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/applications"
# Update domains (requires http:// or https://)
curl -s -X PATCH -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
-H "Content-Type: application/json" \
"https://redbull.rede5.com.br/api/v1/applications/<UUID>" \
-d '{"domains":"http://local.gohorsejobs.com","instant_deploy":true}'
# Deploy an application
curl -s -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/deploy?uuid=<UUID>"
# View server domains
curl -s -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/servers/m844o4gkwkwcc0k48swgs8c8/domains"
Architecture
GitHub (rede5/gohorsejobs.git) ←→ Forgejo (pipe.gohorsejobs.com)
│ │
▼ ▼
Coolify (redbull.rede5.com.br)
├── Traefik (reverse proxy + TLS via Let's Encrypt)
│
├── gohorsejobs-backend-dev → https://api-local.gohorsejobs.com
├── gohorsejobs-frontend-dev → https://local.gohorsejobs.com
├── gohorsejobs-backoffice-dev → https://b-local.gohorsejobs.com
├── gohorsejobs-seeder-dev → https://s-local.gohorsejobs.com
│
└── PostgreSQL 16 (gohorsejobs-dev) → Internal network only
Environment Variables
Configured via Coolify UI or API:
DATABASE_URL=postgres://gohorsejobs:gohorsejobs123@bgws48os8wgwk08o48wg8k80:5432/gohorsejobs?sslmode=disable
BACKEND_PORT=8521
ENV=development
JWT_SECRET=gohorsejobs-dev-jwt-secret-2024-very-secure-key-32ch
JWT_EXPIRATION=7d
PASSWORD_PEPPER=gohorse-pepper
COOKIE_SECRET=gohorsejobs-cookie-secret-dev
CORS_ORIGINS=https://local.gohorsejobs.com,https://b-local.gohorsejobs.com,http://localhost:3000,http://localhost:8521
⚠️ `PASSWORD_PEPPER` is critical. All migration seeds and the seeder-api use `gohorse-pepper`. If this value is changed in Coolify without rewriting the hashes in the database, every login will fail with `invalid credentials`. See the troubleshooting section below.
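The dependency is easy to demonstrate offline. A minimal sketch using `sha256sum` as a stand-in for bcrypt (the backend itself uses `bcrypt(password + PASSWORD_PEPPER)`); the only point is that the stored hash is a function of password + pepper, so a pepper change breaks every stored hash at once:

```shell
# sha256 stands in for bcrypt here purely for illustration: a different
# pepper yields a different stored hash, so all comparisons fail.
hash_with_pepper() {
  printf '%s%s' "$1" "$2" | sha256sum | cut -d' ' -f1
}
h1=$(hash_with_pepper 'Admin@2025!' 'gohorse-pepper')
h2=$(hash_with_pepper 'Admin@2025!' 'another-pepper')
echo "$h1"
echo "$h2"
```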
⚠️ Troubleshooting: Login returns `invalid credentials`
Cause: the `PASSWORD_PEPPER` in Coolify does not match the pepper used to generate the hashes in the database.
Diagnosis via SSH:
ssh redbull
# Check the pepper in the running container:
docker inspect <backend_container_name> --format '{{range .Config.Env}}{{println .}}{{end}}' | grep PEPPER
# Test login directly (without shell escaping):
cat > /tmp/login.json <<'EOF'
{"email":"lol","password":"Admin@2025!"}
EOF
docker run --rm --network coolify -v /tmp/login.json:/tmp/login.json \
curlimages/curl:latest -s -X POST \
http://<backend_container>:8521/api/v1/auth/login \
-H 'Content-Type: application/json' -d @/tmp/login.json
Fix, option 1 (preferred): correct the pepper in Coolify and re-run the seeder:
TOKEN=$(cat ~/.ssh/coolify-redbull-token)
# 1. Update PASSWORD_PEPPER
curl -s -X PATCH \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"key":"PASSWORD_PEPPER","value":"gohorse-pepper"}' \
"https://redbull.rede5.com.br/api/v1/applications/iw4sow8s0kkg4cccsk08gsoo/envs"
# 2. Restart the backend (so it picks up the new pepper)
curl -s -H "Authorization: Bearer $TOKEN" \
"https://redbull.rede5.com.br/api/v1/applications/iw4sow8s0kkg4cccsk08gsoo/restart"
# 3. Re-run the seeder (it rewrites the hash with the correct pepper automatically)
curl -s -H "Authorization: Bearer $TOKEN" \
"https://redbull.rede5.com.br/api/v1/deploy?uuid=q4w48gos8cgssso00o8w8gck"
The seeder (`seedUsers()`) always upserts the superadmin/`lol` user with `bcrypt(password + PEPPER)`. Changing the pepper and re-running the seeder is enough — no migration needs to be touched.
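The shape of that upsert can be sketched as SQL. This is an illustration only, not the seeder's actual code: the column names are assumed from the emergency fix in this section, and the `ON CONFLICT (identifier)` target assumes a unique constraint on `identifier`:

```shell
# Hypothetical shape of the upsert seedUsers() performs.
cat > /tmp/seed_superadmin.sql <<'EOF'
-- <hash> is computed at runtime as bcrypt(password + env.PASSWORD_PEPPER)
INSERT INTO users (identifier, password_hash, status)
VALUES ('lol', '<hash>', 'active')
ON CONFLICT (identifier) DO UPDATE
  SET password_hash = EXCLUDED.password_hash, status = 'active';
EOF
grep -c 'ON CONFLICT' /tmp/seed_superadmin.sql
```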
Fix, option 2 (emergency, without the seeder): rewrite the hash directly in the database:
ssh redbull
# Generate a new hash with node (using a file to avoid $ expansion by the shell):
mkdir -p /tmp/hashgen && cat > /tmp/hashgen/gen.js <<'EOF'
const b = require("./node_modules/bcryptjs");
console.log(b.hashSync("Admin@2025!" + process.env.PEPPER, 10));
EOF
docker run --rm -v /tmp/hashgen:/app -w /app -e PEPPER=gohorse-pepper \
node:20-alpine sh -c "npm install bcryptjs -s && node gen.js"
# Apply to the database (ALWAYS use -f, never -c, to preserve the $ in the hash):
cat > /tmp/fix_hash.sql <<'EOF'
UPDATE users SET password_hash = '<hash_gerado_acima>', status = 'active'
WHERE identifier IN ('lol', 'superadmin');
EOF
docker cp /tmp/fix_hash.sql bgws48os8wgwk08o48wg8k80:/tmp/fix_hash.sql
docker exec bgws48os8wgwk08o48wg8k80 psql -U gohorsejobs -d gohorsejobs -f /tmp/fix_hash.sql
⚠️ Never pass a bcrypt hash via `-c '...'` on the command line — the shell expands the `$` characters and silently corrupts the hash. Always use a file and `-f`.
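The corruption is reproducible offline. A sketch using `eval` to simulate what happens when the hash ends up inside a double-quoted command string (a bcrypt hash starts with `$2a$10$...`, so the shell resolves `$2`, `$1`, and the following `$name` as empty parameters/variables):

```shell
# Simulate shell expansion of a bcrypt hash pasted into a double-quoted
# command: $2 and $1 expand to empty positional parameters, and the long
# $N9qo... token is treated as an (unset) variable name.
HASH='$2a$10$N9qo8uLOickgx2ZMRZoMye' # sample prefix of a bcrypt hash
corrupted=$(eval "echo \"$HASH\"")
echo "original : $HASH"
echo "corrupted: $corrupted"
```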
Deploy via API
# Deploy application
curl -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/deploy?uuid=iw4sow8s0kkg4cccsk08gsoo"
# Check deployment status
curl -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/deployments/<deployment_uuid>"
# List applications
curl -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/applications"
# List databases
curl -H "Authorization: Bearer $(cat ~/.ssh/coolify-redbull-token)" \
"https://redbull.rede5.com.br/api/v1/databases"
Coolify Reference
- Docs: https://coolify.io/docs/get-started/introduction
- API Reference: https://coolify.io/docs/api-reference/authorization
- GitHub Integration: Uses SSH deploy key for private repo access
🏗️ Architecture Diagrams
Full Infrastructure Overview
graph TB
subgraph Clients ["Clients"]
Browser["Browser / Mobile"]
end
subgraph CF ["Cloudflare (DNS + CDN)"]
DNS["DNS Zone: gohorsejobs.com"]
end
subgraph Redbull ["Redbull VPS (185.194.141.70) — Coolify DEV"]
TraefikR("Traefik + Let's Encrypt")
subgraph CoolifyApps ["Coolify Applications"]
FE_C["Frontend (:3000)"]
BE_C["Backend API (:8521)"]
BO_C["Backoffice (:3001)"]
SE_C["Seeder API (:8080)"]
end
PG_C[("PostgreSQL 16\ngohorsejobs-dev")]
end
subgraph Apolo ["Apolo VPS (38.19.201.52) — Podman/Quadlet"]
TraefikA("Traefik")
subgraph PodmanApps ["Podman Containers (Systemd/Quadlet)"]
FE_A["Frontend (:3000)"]
BE_A["Backend API (:8521)"]
BO_A["Backoffice (:3001)"]
SE_A["Seeder API (:8080)"]
end
PG_A[("PostgreSQL\npostgres-main")]
Storage["/mnt/data\n(configs + DB data)"]
end
subgraph Git ["Git Repositories"]
GH["GitHub\nrede5/gohorsejobs"]
FJ["Forgejo (pipe)\npipe.gohorsejobs.com"]
end
subgraph External ["External Services"]
Stripe["Stripe (Payments)"]
Firebase["Firebase (FCM)"]
R2["Cloudflare R2 (Storage)"]
LavinMQ["LavinMQ (AMQP)"]
Resend["Resend (Email)"]
end
%% Client Flow
Browser --> DNS
DNS -- "local.gohorsejobs.com" --> TraefikR
DNS -- "dev.gohorsejobs.com" --> TraefikA
%% Redbull Routing
TraefikR -- "local.gohorsejobs.com" --> FE_C
TraefikR -- "api-local.gohorsejobs.com" --> BE_C
TraefikR -- "b-local.gohorsejobs.com" --> BO_C
TraefikR -- "s-local.gohorsejobs.com" --> SE_C
BE_C --> PG_C
BO_C --> PG_C
SE_C --> PG_C
%% Apolo Routing
TraefikA -- "dev.gohorsejobs.com" --> FE_A
TraefikA -- "api-tmp.gohorsejobs.com" --> BE_A
TraefikA -- "b-tmp.gohorsejobs.com" --> BO_A
BE_A --> PG_A
BO_A --> PG_A
SE_A --> PG_A
PG_A -.-> Storage
%% Git Flow
GH <--> FJ
%% External
BE_C -.-> Stripe
BE_C -.-> Firebase
BE_C -.-> R2
BO_C -.-> LavinMQ
BO_C -.-> Resend
style PG_C fill:#336791,stroke:#fff,color:#fff
style PG_A fill:#336791,stroke:#fff,color:#fff
style TraefikR fill:#f5a623,stroke:#fff,color:#fff
style TraefikA fill:#f5a623,stroke:#fff,color:#fff
style CF fill:#f48120,stroke:#fff,color:#fff
Apolo VPS (Podman/Quadlet) Detail
graph TD
subgraph Host ["Apolo VPS (Host)"]
subgraph FS ["File System (/mnt/data)"]
EnvBE["/gohorsejobs/backend/.env"]
EnvBO["/gohorsejobs/backoffice/.env"]
EnvSE["/gohorsejobs/seeder-api/.env"]
DBData[("postgres-general")]
end
subgraph Net ["Network: web_proxy"]
Traefik("Traefik")
subgraph App ["Application Containers"]
BE["Backend API (:8521)"]
BO["Backoffice (:3001)"]
SE["Seeder API (:8080)"]
FE["Frontend (:3000)"]
end
PG[("postgres-main (:5432)")]
end
end
%% Ingress
Internet((Internet)) --> Traefik
%% Routing
Traefik -- "dev.gohorsejobs.com" --> FE
Traefik -- "api-tmp.gohorsejobs.com" --> BE
Traefik -- "b-tmp.gohorsejobs.com" --> BO
Traefik -- "seeder.gohorsejobs.com" --> SE
%% Config Mounts
EnvBE -.-> BE
EnvBO -.-> BO
EnvSE -.-> SE
%% Data Persistence
PG -.-> DBData
%% Database Connections
BE --> PG
BO --> PG
SE --> PG
style PG fill:#336791,stroke:#fff,color:#fff
style Traefik fill:#f5a623,stroke:#fff,color:#fff
Coolify DEV (Redbull) Detail
graph TD
subgraph Redbull ["Redbull VPS — Coolify (redbull.rede5.com.br)"]
Traefik("Traefik + Let's Encrypt")
subgraph Apps ["Applications (auto-deploy via Git)"]
BE["Backend Go\n:8521"]
FE["Frontend Next.js\n:3000"]
BO["Backoffice NestJS\n:3001"]
SE["Seeder API\n:8080"]
end
PG[("PostgreSQL 16\ngohorsejobs\n:5432")]
end
GH["GitHub (rede5/gohorsejobs)"] --> |"push dev"| Traefik
Internet((Internet)) --> Traefik
Traefik -- "api-local.gohorsejobs.com" --> BE
Traefik -- "local.gohorsejobs.com" --> FE
Traefik -- "b-local.gohorsejobs.com" --> BO
Traefik -- "s-local.gohorsejobs.com" --> SE
BE --> PG
BO --> PG
SE --> PG
style PG fill:#336791,stroke:#fff,color:#fff
style Traefik fill:#f5a623,stroke:#fff,color:#fff
CI/CD Flow (Dual Pipeline)
There are two independent pipelines triggered simultaneously on every push:
graph TD
Dev["Developer\ngit push dev"]
subgraph Pipeline1 ["Pipeline 1: GitHub → Coolify"]
GH["GitHub\n(origin)"]
Webhook["GitHub Webhook\n(push event)"]
Coolify["Coolify\n(redbull.rede5.com.br)"]
Redbull["Redbull VPS\nFrontend + Backend + Backoffice + Seeder"]
end
subgraph Pipeline2 ["Pipeline 2: Forgejo → K3s Cluster"]
FJ["Forgejo\n(pipe.gohorsejobs.com)"]
Runner["Forgejo Actions Runner\n(self-hosted, K3s)"]
Registry["Container Registry\npipe.gohorsejobs.com"]
K3s["K3s Cluster\nBackend + Backoffice"]
end
Dev --> GH
Dev --> FJ
GH --> Webhook --> Coolify --> |"Docker build"| Redbull
FJ --> |"push triggers"| Runner
Runner --> |"docker build & push"| Registry
Runner --> |"kubectl apply"| K3s
| Pipeline | Trigger | Services | Target |
|---|---|---|---|
| GitHub → Coolify | Webhook (push) | Frontend, Backend, Backoffice, Seeder | Redbull VPS (Docker) |
| Forgejo → K3s | Forgejo Actions (push) | Backend, Backoffice | K3s Cluster (Kubernetes) |
🔄 Forgejo CI/CD Pipeline (pipe.gohorsejobs.com)
The pipeline runs automatically via Forgejo Actions on every push to the dev branch.
Workflow: .forgejo/workflows/deploy.yaml
| Job | Description | Current Status |
|---|---|---|
| build-and-push | Build Docker images (backend + backoffice), push to registry | OK |
| deploy | Deploy to K3s via kubectl (requires KUBECONFIG secret) | OK (fix: KUBE_CONFIG → KUBECONFIG) |
Pipeline Steps
- build-and-push (OK):
  - Checkout code
  - Docker login to the `pipe.gohorsejobs.com` registry
  - Build & push backend: `pipe.gohorsejobs.com/bohessefm/gohorsejobs:latest`
  - Build & push backoffice: `pipe.gohorsejobs.com/bohessefm/backoffice:latest`
- deploy (FAIL: K3s not configured):
  - Install kubectl
  - Configure kubeconfig (via `secrets.KUBE_CONFIG`)
  - Sync secrets and vars to the `gohorsejobs-dev` namespace
  - `kubectl apply -f k8s/dev/`
  - Set image to the commit SHA
  - Rollout restart deployments
Note: the deploy job used `secrets.KUBE_CONFIG`, but the secret is named `KUBECONFIG`. Fixed in the current commit.
Forgejo Actions Secrets & Variables
Secrets (configured in Settings > Actions > Secrets):
- `FORGEJO_TOKEN`: container registry login
- `KUBECONFIG`: kubeconfig for access to the K3s cluster

Variables (configured in Settings > Actions > Variables):
- `DATABASE_URL`, `JWT_SECRET`, `PASSWORD_PEPPER`, `COOKIE_SECRET`, `COOKIE_DOMAIN`
- `BACKEND_PORT`, `BACKEND_HOST`, `ENV`, `CORS_ORIGINS`, `MTU`
- `AMQP_URL`, `S3_BUCKET`, `AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_ENDPOINT`
- `RSA_PRIVATE_KEY_BASE64`, `JWT_EXPIRATION`
Forgejo API
# Token location: ~/.ssh/forgejo-token
FORGEJO_TOKEN="03d23c54672519c8473bd9c46ae7820b13c8b287"
# List pipeline runs
curl -s -H "Authorization: token $FORGEJO_TOKEN" \
"https://pipe.gohorsejobs.com/api/v1/repos/bohessefm/gohorsejobs/actions/tasks?limit=5"
# List repositories
curl -s -H "Authorization: token $FORGEJO_TOKEN" \
"https://pipe.gohorsejobs.com/api/v1/user/repos"
GitHub Webhooks (Auto-deploy Coolify)
Webhooks configured on GitHub pointing to Coolify:
| App | Webhook URL |
|---|---|
| Backend | https://redbull.rede5.com.br/webhooks/source/github/events/manual?uuid=iw4sow8s0kkg4cccsk08gsoo&secret=... |
| Frontend | https://redbull.rede5.com.br/webhooks/source/github/events/manual?uuid=ao8g40scws0w4cgo8coc8o40&secret=... |
| Backoffice | https://redbull.rede5.com.br/webhooks/source/github/events/manual?uuid=hg48wkw4wggwsswcwc8sooo4&secret=... |
| Seeder | https://redbull.rede5.com.br/webhooks/source/github/events/manual?uuid=q4w48gos8cgssso00o8w8gck&secret=... |
🔑 Credentials & Tokens (References)
All tokens are stored in ~/.ssh/:
| File | Service | Use |
|---|---|---|
| `~/.ssh/coolify-redbull-token` | Coolify API | App deployment and management |
| `~/.ssh/forgejo-token` | Forgejo API (pipe) | CI/CD, webhooks, repos |
| `~/.ssh/github-token` | GitHub API | Webhooks, repos |
| `~/.ssh/cloudflare-token` | Cloudflare API | DNS, cache |
| `~/.ssh/absam-token` | Absam Cloud API | VPS management |
| `~/.ssh/forgejo-gohorsejobs` | SSH key | Forgejo Git operations |
| `~/.ssh/civo` | SSH key | Redbull VPS access |
| `~/.ssh/github` | SSH key | GitHub Git operations |
💾 Storage & Persistence (/mnt/data)
All persistent data and configuration files are stored in /mnt/data on the host.
| Host Path | Container Path | Purpose | Type |
|---|---|---|---|
| `/mnt/data/gohorsejobs/backend/.env` | (Injected Env) | Backend config: secrets, DB URL, port settings | File |
| `/mnt/data/gohorsejobs/backoffice/.env` | (Injected Env) | Backoffice config: secrets, DB URL | File |
| `/mnt/data/gohorsejobs/seeder-api/.env` | (Injected Env) | Seeder config: secrets, DB URL | File |
| `/mnt/data/postgres-general` | `/var/lib/postgresql/data` | Database storage: main storage for the postgres-main container; contains the gohorsejobs_dev DB | Directory |
Backup Note: To back up the environment, ensure `/mnt/data/gohorsejobs` and `/mnt/data/postgres-general` are included in snapshots.
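That snapshot step can be rehearsed as a dry run. A minimal sketch — the archive path is an illustrative assumption, and `DATA_ROOT` is parameterized so the command can be checked off the server:

```shell
# Dry-run sketch: builds the tar command for the two paths named above.
# Run $CMD directly (or pipe it to sh) on the Apolo host to create the archive.
DATA_ROOT="${DATA_ROOT:-/mnt/data}"
STAMP=$(date +%Y%m%d)
CMD="tar czf /tmp/gohorsejobs-backup-$STAMP.tar.gz $DATA_ROOT/gohorsejobs $DATA_ROOT/postgres-general"
echo "$CMD"
```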
🌍 Service Maps & Networking
🚦 Traefik Routing
Services are exposed via Traefik labels defined in the Quadlet .container files.
| Domain | Service | Internal Port | Host Port (Debug) |
|---|---|---|---|
| `dev.gohorsejobs.com` | `gohorsejobs-frontend-dev` | 3000 | 8523 |
| `api-tmp.gohorsejobs.com` | `gohorsejobs-backend-dev` | 8521 | 8521 |
| `b-tmp.gohorsejobs.com` | `gohorsejobs-backoffice-dev` | 3001 | - |
| `seeder.gohorsejobs.com` | `gohorsejobs-seeder-dev` | 8080 | 8522 |
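For reference, a hypothetical Quadlet unit showing how such Traefik labels are typically attached. The file path, router names, and label values below are assumptions for illustration, not copied from the server:

```ini
# Hypothetical sketch: /etc/containers/systemd/gohorsejobs-backend-dev.container
[Container]
Image=forgejo-gru.rede5.com.br/rede5/gohorsejobs-backend:latest
Network=web_proxy
PublishPort=8521:8521
EnvironmentFile=/mnt/data/gohorsejobs/backend/.env
Label=traefik.enable=true
Label=traefik.http.routers.backend-dev.rule=Host(`api-tmp.gohorsejobs.com`)
Label=traefik.http.services.backend-dev.loadbalancer.server.port=8521

[Service]
Restart=always

[Install]
WantedBy=default.target
```

Quadlet translates this file into a systemd service, which is why `systemctl restart gohorsejobs-backend-dev` recreates the container.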
🛑 Security
- Backend/Seeder/Frontend expose ports to the host (`85xx`) for debugging/direct access if needed.
- Backoffice is only accessible via Traefik (internal network).
- PostgreSQL is only accessible internally via the `web_proxy` network (no host port binding).
🛠️ Operational Guide
1. View & Manage Configs
Configurations are not inside containers. Edit them on the host:
# Edit Backend Config
vim /mnt/data/gohorsejobs/backend/.env
# Apply changes
systemctl restart gohorsejobs-backend-dev
2. Full Environment Restart
To restart all GoHorseJobs related services (excluding Database):
systemctl restart gohorsejobs-backend-dev gohorsejobs-backoffice-dev gohorsejobs-seeder-dev gohorsejobs-frontend-dev
3. Database Access
Access the local database directly via the postgres-main container:
# Internal Connection
docker exec -it postgres-main psql -U yuki -d gohorsejobs_dev
🚀 Deployment Pipeline (Manual)
Current workflow uses Local Build -> Forgejo Registry -> Server Pull.
1. Build & Push (Local Machine)
# Login
podman login forgejo-gru.rede5.com.br
# Build
cd backend
podman build -t forgejo-gru.rede5.com.br/rede5/gohorsejobs-backend:latest .
# Push
podman push forgejo-gru.rede5.com.br/rede5/gohorsejobs-backend:latest
2. Deploy (On Apolo Server)
ssh root@apolo
# Pull new image
podman pull forgejo-gru.rede5.com.br/rede5/gohorsejobs-backend:latest
# Restart service (Systemd handles container recreation)
systemctl restart gohorsejobs-backend-dev