refactor: optimize Dockerfiles and documentation for core services

- Use Google Distroless images for all services (Go & Node.js).
- Standardize documentation with [PROJECT-NAME].md.
- Add .dockerignore and .gitignore to all projects.
- Remove docker-compose.yml in favor of docker run instructions.
- Fix Go version and dependency issues in observability, repo-integrations, and security-governance.
- Add Podman support (fully qualified image names).
- Update Dashboard to use Node.js static server for Distroless compatibility.
Tiago Yamamoto 2025-12-30 13:22:20 -03:00
parent eaae42a5ef
commit a52bd4519d
121 changed files with 11771 additions and 744 deletions

@@ -0,0 +1,82 @@
name: Deploy Stack (Dev)
on:
push:
branches:
- dev
paths:
- 'backend/**'
- 'backoffice/**'
- 'frontend/**'
env:
REGISTRY: rg.fr-par.scw.cloud/funcscwinfrastructureascodehdz4uzhb
NAMESPACE: a5034510-9763-40e8-ac7e-1836e7a61460
jobs:
  # Job: deploy on the server (pull images from Scaleway)
deploy-dev:
runs-on: docker
steps:
- name: Checkout code
uses: https://github.com/actions/checkout@v4
with:
fetch-depth: 2
- name: Check changed files
id: check
run: |
if git diff --name-only HEAD~1 HEAD | grep -q "^backend/"; then
echo "backend=true" >> $GITHUB_OUTPUT
else
echo "backend=false" >> $GITHUB_OUTPUT
fi
if git diff --name-only HEAD~1 HEAD | grep -q "^frontend/"; then
echo "frontend=true" >> $GITHUB_OUTPUT
else
echo "frontend=false" >> $GITHUB_OUTPUT
fi
if git diff --name-only HEAD~1 HEAD | grep -q "^backoffice/"; then
echo "backoffice=true" >> $GITHUB_OUTPUT
else
echo "backoffice=false" >> $GITHUB_OUTPUT
fi
- name: Deploy via SSH
uses: https://github.com/appleboy/ssh-action@v1.0.3
with:
host: ${{ secrets.HOST }}
username: ${{ secrets.USERNAME }}
key: ${{ secrets.SSH_KEY }}
port: ${{ secrets.PORT || 22 }}
script: |
# Log in to the Scaleway registry
echo "${{ secrets.SCW_SECRET_KEY }}" | podman login ${{ env.REGISTRY }} -u nologin --password-stdin
# --- BACKEND DEPLOY ---
if [ "${{ steps.check.outputs.backend }}" == "true" ]; then
echo "Pulling and restarting backend..."
podman pull ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/backend:dev-latest
podman tag ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/backend:dev-latest localhost/gohorsejobs-backend-dev:latest
sudo systemctl restart gohorsejobs-backend-dev
fi
# --- FRONTEND DEPLOY ---
if [ "${{ steps.check.outputs.frontend }}" == "true" ]; then
echo "Pulling and restarting frontend..."
podman pull ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/frontend:dev-latest
podman tag ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/frontend:dev-latest localhost/gohorsejobs-frontend-dev:latest
sudo systemctl restart gohorsejobs-frontend-dev
fi
# --- BACKOFFICE DEPLOY ---
if [ "${{ steps.check.outputs.backoffice }}" == "true" ]; then
echo "Pulling and restarting backoffice..."
podman pull ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/backoffice:dev-latest
podman tag ${{ env.REGISTRY }}/${{ env.NAMESPACE }}/backoffice:dev-latest localhost/gohorsejobs-backoffice-dev:latest
sudo systemctl restart gohorsejobs-backoffice-dev
fi
# --- CLEANUP ---
echo "Cleaning up old images..."
podman image prune -f || true
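The change-detection step in the workflow above greps the commit's changed-file list for each top-level directory. A standalone sketch of that logic, run against a hypothetical file list, behaves as follows:

```shell
# Simulate the workflow's per-directory change detection against a sample diff.
changed_files="backend/cmd/api/main.go
frontend/src/App.tsx
docs/README.md"

for dir in backend frontend backoffice; do
  if printf '%s\n' "$changed_files" | grep -q "^${dir}/"; then
    echo "${dir}=true"
  else
    echo "${dir}=false"
  fi
done
# prints backend=true, frontend=true, backoffice=false
```

Note that `fetch-depth: 2` in the checkout step is what makes `git diff HEAD~1 HEAD` possible on a shallow clone.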

@@ -0,0 +1,9 @@
.git
.env
.gitignore
Dockerfile.api
Dockerfile.worker
README.md
AUTOMATION-JOBS-CORE.md
migrations
*.log

automation-jobs-core/.gitignore

@@ -0,0 +1,6 @@
.env
*.log
.DS_Store
automation-jobs-api
automation-jobs-worker
coverage

@@ -0,0 +1,93 @@
# AUTOMATION-JOBS-CORE
This service is responsible for running automations, long-running workflows, and scheduled jobs, leveraging [Temporal](https://temporal.io/) to guarantee reliability and idempotency.
## 📋 Overview
The project is split into three main components that work together to process asynchronous tasks:
1. **API (HTTP)**: Lightweight entry point for starting workflows.
2. **Temporal Server**: The "brain" that orchestrates task state and scheduling.
3. **Workers (Go)**: Where the business-logic code (workflows and activities) actually runs.
### Architecture
The diagram below illustrates how the components interact:
```mermaid
graph TD
Client[External Client/Frontend] -->|HTTP POST /jobs/run| API[API Service :8080]
API -->|gRPC StartWorkflow| Temporal[Temporal Service :7233]
subgraph Temporal Cluster
Temporal
DB[(PostgreSQL)]
Temporal --> DB
end
Worker[Go Worker] -->|Poll TaskQueue| Temporal
Worker -->|Execute Activity| Worker
Worker -->|Return Result| Temporal
```
## 🚀 Project Structure
The table below details each important directory and file:
| Path | Description |
| :--- | :--- |
| `cmd/api/` | Entry point (`main.go`) for the API service. |
| `cmd/worker/` | Entry point (`main.go`) for the Worker service. |
| `internal/` | Shared code and internal application logic. |
| `temporal/` | Temporal workflow and activity definitions. |
| `Dockerfile.api` | Optimized build configuration for the API (Distroless). |
| `Dockerfile.worker` | Optimized build configuration for the Worker (Distroless). |
| `docker-compose.yml` | Local orchestration of all services. |
## 🛠️ Technologies and Optimizations
- **Language**: Go 1.23+
- **Orchestration**: Temporal.io
- **Containerization**:
- Images based on `gcr.io/distroless/static`.
- Multi-stage builds to reduce the final image size (~20MB).
- Runs as the `nonroot` user for improved security.
## 💻 How to Run
The project is designed to run via Docker Compose for a complete development environment.
### Prerequisites
- Docker Engine
- Docker Compose
### Step by Step
1. **Start the environment:**
```bash
docker-compose up --build
```
This brings up:
- Temporal Server & Web UI (port `8088`)
- PostgreSQL (Temporal persistence)
- API Service (port `8080`)
- Worker Service
2. **Trigger a test workflow** (the handler requires a non-empty `name` field):
```bash
curl -X POST http://localhost:8080/jobs/run \
  -H "Content-Type: application/json" \
  -d '{"name": "World"}'
```
3. **Monitor the execution:**
Open the Temporal UI to watch progress in real time:
[http://localhost:8088](http://localhost:8088)
## 🔧 Dockerfile Details
The Dockerfiles were refactored for maximum efficiency:
- **Builder stage**: Uses `golang:1.23-alpine` to compile a static binary, stripping debug information (`-ldflags="-w -s"`).
- **Runtime stage**: Uses `gcr.io/distroless/static:nonroot`, which contains only the bare minimum needed to run Go binaries, with no shell or package manager, guaranteeing:
- ✅ **Security**: Smaller attack surface.
- ✅ **Size**: Extremely lightweight images.
- ✅ **Performance**: Fast startup.

@@ -0,0 +1,26 @@
# Dockerfile.api
FROM docker.io/library/golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build with optimization flags
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/api ./cmd/api
# Use Google Distroless static image for minimal size and security
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder /app/api .
# Non-root user for security
USER nonroot:nonroot
EXPOSE 8080
CMD ["./api"]

@@ -0,0 +1,24 @@
# Dockerfile.worker
FROM docker.io/library/golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build with optimization flags
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/worker ./cmd/worker
# Use Google Distroless static image for minimal size and security
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder /app/worker .
# Non-root user for security
USER nonroot:nonroot
CMD ["./worker"]

@@ -0,0 +1,136 @@
package main
import (
"context"
"encoding/json"
"log/slog"
"net/http"
"os"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/google/uuid"
"github.com/lab/automation-jobs-core/temporal/workflows"
"go.temporal.io/sdk/client"
)
// The task queue name for our sample workflow.
const SampleTaskQueue = "sample-task-queue"
// application holds the dependencies for our API handlers.
type application struct {
temporalClient client.Client
}
// runJobRequest defines the expected JSON body for the POST /jobs/run endpoint.
type runJobRequest struct {
Name string `json:"name"`
}
// runJobResponse defines the JSON response for a successful job submission.
type runJobResponse struct {
WorkflowID string `json:"workflow_id"`
RunID string `json:"run_id"`
}
// jobStatusResponse defines the JSON response for the job status endpoint.
type jobStatusResponse struct {
WorkflowID string `json:"workflow_id"`
RunID string `json:"run_id"`
Status string `json:"status"`
}
func main() {
slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))
temporalAddress := os.Getenv("TEMPORAL_ADDRESS")
if temporalAddress == "" {
slog.Warn("TEMPORAL_ADDRESS not set, defaulting to localhost:7233")
temporalAddress = "localhost:7233"
}
c, err := client.Dial(client.Options{
HostPort: temporalAddress,
Logger: slog.Default(),
})
if err != nil {
slog.Error("Unable to create Temporal client", "error", err)
os.Exit(1)
}
defer c.Close()
app := &application{
temporalClient: c,
}
r := chi.NewRouter()
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.Logger) // Chi's default logger
r.Use(middleware.Recoverer)
r.Post("/jobs/run", app.runJobHandler)
r.Get("/jobs/{workflowID}/status", app.getJobStatusHandler)
slog.Info("Starting API server", "port", "8080")
if err := http.ListenAndServe(":8080", r); err != nil {
slog.Error("Failed to start server", "error", err)
}
}
// runJobHandler starts a new SampleWorkflow execution.
func (app *application) runJobHandler(w http.ResponseWriter, r *http.Request) {
var req runJobRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
http.Error(w, "Invalid request body", http.StatusBadRequest)
return
}
if req.Name == "" {
http.Error(w, "Name field is required", http.StatusBadRequest)
return
}
options := client.StartWorkflowOptions{
ID: "sample-workflow-" + uuid.NewString(),
TaskQueue: SampleTaskQueue,
}
we, err := app.temporalClient.ExecuteWorkflow(context.Background(), options, workflows.SampleWorkflow, req.Name)
if err != nil {
slog.Error("Unable to start workflow", "error", err)
http.Error(w, "Internal server error", http.StatusInternalServerError)
return
}
slog.Info("Started workflow", "workflow_id", we.GetID(), "run_id", we.GetRunID())
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
json.NewEncoder(w).Encode(runJobResponse{
WorkflowID: we.GetID(),
RunID: we.GetRunID(),
})
}
// getJobStatusHandler retrieves the status of a specific workflow execution.
func (app *application) getJobStatusHandler(w http.ResponseWriter, r *http.Request) {
workflowID := chi.URLParam(r, "workflowID")
// Note: RunID can be empty to get the latest run.
resp, err := app.temporalClient.DescribeWorkflowExecution(context.Background(), workflowID, "")
if err != nil {
slog.Error("Unable to describe workflow", "error", err, "workflow_id", workflowID)
http.Error(w, "Workflow not found", http.StatusNotFound)
return
}
status := resp.GetWorkflowExecutionInfo().GetStatus().String()
slog.Info("Described workflow", "workflow_id", workflowID, "status", status)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(jobStatusResponse{
WorkflowID: resp.GetWorkflowExecutionInfo().GetExecution().GetWorkflowId(),
RunID: resp.GetWorkflowExecutionInfo().GetExecution().GetRunId(),
Status: status,
})
}

@@ -0,0 +1,54 @@
package main
import (
"log/slog"
"os"
"github.com/lab/automation-jobs-core/temporal/activities"
"github.com/lab/automation-jobs-core/temporal/workflows"
"go.temporal.io/sdk/client"
"go.temporal.io/sdk/worker"
)
const (
SampleTaskQueue = "sample-task-queue"
)
func main() {
// Use slog for structured logging.
slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))
// Get Temporal server address from environment variable.
temporalAddress := os.Getenv("TEMPORAL_ADDRESS")
if temporalAddress == "" {
slog.Warn("TEMPORAL_ADDRESS not set, defaulting to localhost:7233")
temporalAddress = "localhost:7233"
}
// Create a new Temporal client.
c, err := client.Dial(client.Options{
HostPort: temporalAddress,
Logger: slog.Default(),
})
if err != nil {
slog.Error("Unable to create Temporal client", "error", err)
os.Exit(1)
}
defer c.Close()
// Create a new worker.
w := worker.New(c, SampleTaskQueue, worker.Options{})
// Register the workflow and activity.
w.RegisterWorkflow(workflows.SampleWorkflow)
w.RegisterActivity(activities.SampleActivity)
slog.Info("Starting Temporal worker", "task_queue", SampleTaskQueue)
// Start the worker.
err = w.Run(worker.InterruptCh())
if err != nil {
slog.Error("Unable to start worker", "error", err)
os.Exit(1)
}
}

@@ -0,0 +1,36 @@
module github.com/lab/automation-jobs-core
go 1.23.0
toolchain go1.23.12
require (
github.com/go-chi/chi/v5 v5.2.3
github.com/google/uuid v1.6.0
go.temporal.io/sdk v1.38.0
)
require (
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.2 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
github.com/nexus-rpc/sdk-go v0.5.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/stretchr/testify v1.10.0 // indirect
go.temporal.io/api v1.54.0 // indirect
golang.org/x/net v0.39.0 // indirect
golang.org/x/sync v0.13.0 // indirect
golang.org/x/sys v0.32.0 // indirect
golang.org/x/text v0.24.0 // indirect
golang.org/x/time v0.3.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240827150818-7e3bb234dfed // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240827150818-7e3bb234dfed // indirect
google.golang.org/grpc v1.67.1 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)

@@ -0,0 +1,99 @@
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a h1:yDWHCSQ40h88yih2JAcL6Ls/kVkSE8GFACTGVnMPruw=
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a/go.mod h1:7Ga40egUymuWXxAe151lTNnCv97MddSOVsjpPPkityA=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.2 h1:sGm2vDRFUrQJO/Veii4h4zG2vvqG6uWNkBHSTqXOZk0=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.3.2/go.mod h1:wd1YpapPLivG6nQgbf7ZkG1hhSOXDhhn4MLTknx2aAc=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 h1:asbCHRVmodnJTuQ3qamDwqVOIjwqUPTYmYuemVOx+Ys=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0/go.mod h1:ggCgvZ2r7uOoQjOyu2Y1NhHmEPPzzuhWgcza5M1Ji1I=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/nexus-rpc/sdk-go v0.5.1 h1:UFYYfoHlQc+Pn9gQpmn9QE7xluewAn2AO1OSkAh7YFU=
github.com/nexus-rpc/sdk-go v0.5.1/go.mod h1:FHdPfVQwRuJFZFTF0Y2GOAxCrbIBNrcPna9slkGKPYk=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/robfig/cron v1.2.0 h1:ZjScXvvxeQ63Dbyxy76Fj3AT3Ut0aKsyd2/tl3DTMuQ=
github.com/robfig/cron v1.2.0/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.temporal.io/api v1.54.0 h1:/sy8rYZEykgmXRjeiv1PkFHLXIus5n6FqGhRtCl7Pc0=
go.temporal.io/api v1.54.0/go.mod h1:iaxoP/9OXMJcQkETTECfwYq4cw/bj4nwov8b3ZLVnXM=
go.temporal.io/sdk v1.38.0 h1:4Bok5LEdED7YKpsSjIa3dDqram5VOq+ydBf4pyx0Wo4=
go.temporal.io/sdk v1.38.0/go.mod h1:a+R2Ej28ObvHoILbHaxMyind7M6D+W0L7edt5UJF4SE=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.39.0 h1:ZCu7HMWDxpXpaiKdhzIfaltL9Lp31x/3fCP11bc6/fY=
golang.org/x/net v0.39.0/go.mod h1:X7NRbYVEA+ewNkCNyJ513WmMdQ3BineSwVtN2zD/d+E=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.13.0 h1:AauUjRAJ9OSnvULf/ARrrVywoJDy0YS2AwQ98I37610=
golang.org/x/sync v0.13.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.32.0 h1:s77OFDvIQeibCmezSnk/q6iAfkdiQaJi4VzroCFrN20=
golang.org/x/sys v0.32.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.24.0 h1:dd5Bzh4yt5KYA8f9CJHCP4FB4D51c2c6JvN37xJJkJ0=
golang.org/x/text v0.24.0/go.mod h1:L8rBsPeo2pSS+xqN0d5u2ikmjtmoJbDBT1b7nHvFCdU=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20240827150818-7e3bb234dfed h1:3RgNmBoI9MZhsj3QxC+AP/qQhNwpCLOvYDYYsFrhFt0=
google.golang.org/genproto/googleapis/api v0.0.0-20240827150818-7e3bb234dfed/go.mod h1:OCdP9MfskevB/rbYvHTsXTtKC+3bHWajPdoKgjcYkfo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240827150818-7e3bb234dfed h1:J6izYgfBXAI3xTKLgxzTmUltdYaLsuBxFCgDHWJ/eXg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240827150818-7e3bb234dfed/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/grpc v1.67.1 h1:zWnc1Vrcno+lHZCOofnIMvycFcc0QRGIzm9dhnDX68E=
google.golang.org/grpc v1.67.1/go.mod h1:1gLDyUQU7CTLJI90u3nXZ9ekeghjeM7pTDZlqFNg2AA=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

@@ -0,0 +1,10 @@
package activities
import (
"context"
"fmt"
)
// Greet is a simple activity that formats a greeting for the given name.
func Greet(ctx context.Context, name string) (string, error) {
return fmt.Sprintf("Hello, %s!", name), nil
}

@@ -0,0 +1,14 @@
package activities
import (
"context"
"fmt"
"log/slog"
)
// SampleActivity is a simple Temporal activity that demonstrates how to receive
// parameters and return a value.
func SampleActivity(ctx context.Context, name string) (string, error) {
slog.Info("Running SampleActivity", "name", name)
return fmt.Sprintf("Hello, %s!", name), nil
}

@@ -0,0 +1,23 @@
package workflows
import (
"time"
"github.com/lab/automation-jobs-core/temporal/activities"
"go.temporal.io/sdk/workflow"
)
// GreetingWorkflow runs the Greet activity with a 5-second timeout and returns its result.
func GreetingWorkflow(ctx workflow.Context, name string) (string, error) {
options := workflow.ActivityOptions{
StartToCloseTimeout: time.Second * 5,
}
ctx = workflow.WithActivityOptions(ctx, options)
var result string
err := workflow.ExecuteActivity(ctx, activities.Greet, name).Get(ctx, &result)
if err != nil {
return "", err
}
return result, nil
}

@@ -0,0 +1,29 @@
package workflows
import (
"time"
"go.temporal.io/sdk/workflow"
"github.com/lab/automation-jobs-core/temporal/activities"
)
// SampleWorkflow is a simple Temporal workflow that executes one activity.
func SampleWorkflow(ctx workflow.Context, name string) (string, error) {
// Set a timeout for the activity.
ao := workflow.ActivityOptions{
StartToCloseTimeout: 10 * time.Second,
}
ctx = workflow.WithActivityOptions(ctx, ao)
// Execute the activity and wait for its result.
var result string
err := workflow.ExecuteActivity(ctx, activities.SampleActivity, name).Get(ctx, &result)
if err != nil {
workflow.GetLogger(ctx).Error("Activity failed.", "Error", err)
return "", err
}
workflow.GetLogger(ctx).Info("Workflow completed.", "Result", result)
return result, nil
}

@@ -0,0 +1,10 @@
node_modules
dist
.git
.env
.gitignore
Dockerfile
README.md
BAAS-CONTROL-PLANE.md
migrations
*.log

baas-control-plane/.gitignore

@@ -0,0 +1,6 @@
node_modules
dist
.env
*.log
.DS_Store
coverage

@@ -0,0 +1,94 @@
# BAAS-CONTROL-PLANE
The `baas-control-plane` is the central orchestrator for provisioning and managing multiple backends-as-a-service (BaaS), such as Appwrite and Supabase, providing a unified abstraction layer for multi-tenancy.
## 📋 Overview
This service stores no business data, only metadata about tenants, projects, and resources. It acts as a "control plane" that delegates infrastructure creation to provider-specific drivers.
### Architecture
```mermaid
graph TD
Client[Dashboard / CLI] -->|HTTP REST| API[Control Plane API]
subgraph Core Services
API --> Provisioning[Provisioning Service]
API --> Schema[Schema Sync]
API --> Secrets[Secrets Manager]
API --> Audit[Audit Logger]
end
subgraph Providers
Provisioning -->|Driver Interface| Appwrite[Appwrite Provider]
Provisioning -->|Driver Interface| Supabase[Supabase Provider]
end
Appwrite -->|API| AWS_Appwrite[Appwrite Instance]
Supabase -->|API| AWS_Supabase[Supabase Hosting]
API --> DB[(Metadata DB)]
```
## 🚀 Project Structure
The project follows a modular architecture built on **Fastify**:
| Directory | Responsibility |
| :--- | :--- |
| `src/core` | Global configuration, Fastify plugins, and error handling. |
| `src/modules` | Functional domains (Tenants, Projects, etc.). |
| `src/providers` | Driver implementations for each supported BaaS. |
| `src/lib` | Shared utilities. |
| `docs/` | Detailed architecture documentation. |
## 🛠️ Technologies and Optimizations
- **Backend**: Node.js 20 + Fastify (high performance)
- **Language**: TypeScript
- **Validation**: Zod
- **Containerization**:
- Based on `gcr.io/distroless/nodejs20-debian12`.
- Multi-stage build to separate dependencies.
- Hardened security (no shell, non-root user).
## 💻 How to Run
### Docker (recommended)
```bash
docker-compose up --build
```
The API will be available on port `4000`.
### Local Development
1. **Install dependencies:**
```bash
npm install
```
2. **Configure the environment:**
```bash
cp .env.example .env
```
3. **Run in watch mode:**
```bash
npm run dev
```
## 🔌 Main Flows
1. **Create Tenant**: Registers a new organization in the system.
2. **Create Project**: Links a Tenant to a Provider (e.g., a "Marketing" project on Appwrite).
3. **Provision**: The Control Plane calls the Provider's API to create databases, buckets, and functions.
4. **Schema Sync**: Applies the system's collection/table definitions in a provider-agnostic way.
## 🔧 Dockerfile Details
The `Dockerfile` is optimized for production and security:
- **Builder**: Compiles the TypeScript.
- **Prod Deps**: Installs only the packages required at runtime (`--omit=dev`).
- **Runtime (Distroless)**: Tiny final image containing only the Node.js runtime and the application files.
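The provider drivers under `src/providers` can be pictured as implementations of a common contract covering provisioning, schema sync, and metrics. A hypothetical sketch (interface and names are illustrative, not the actual codebase API):

```typescript
// Hypothetical driver contract for a BaaS provider, inferred from the
// provisioning / schema / metrics responsibilities described above.
interface ProviderDriver {
  provision(projectId: string): Promise<{ status: string }>;
  syncSchema(projectId: string, collections: string[]): Promise<number>;
  metrics(projectId: string): Promise<Record<string, number>>;
}

// A fake in-memory driver, handy for testing the control plane itself
// without touching a real Appwrite or Supabase instance.
class FakeDriver implements ProviderDriver {
  async provision(projectId: string) {
    return { status: `provisioned:${projectId}` };
  }
  async syncSchema(_projectId: string, collections: string[]) {
    return collections.length; // number of collections applied
  }
  async metrics(_projectId: string) {
    return { storageBytes: 0, functions: 0 };
  }
}

async function demo() {
  const driver: ProviderDriver = new FakeDriver();
  const res = await driver.provision("proj-1");
  console.log(res.status); // provisioned:proj-1
  console.log(await driver.syncSchema("proj-1", ["users", "orders"])); // 2
}

demo();
```

Keeping all business logic behind such an interface is what lets the factory (`provider.factory.ts` in the previous README) swap providers per project.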

@@ -1,4 +1,6 @@
FROM node:20-alpine AS base
# Dockerfile
# Stage 1: Build the application
FROM docker.io/library/node:20-alpine AS builder
WORKDIR /app
@@ -10,8 +12,25 @@ COPY src ./src
RUN npm run build
RUN npm prune --omit=dev
# Stage 2: Install production dependencies
FROM docker.io/library/node:20-alpine AS prod-deps
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm install --omit=dev
# Stage 3: Run the application
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
ENV NODE_ENV=production
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
EXPOSE 4000
CMD ["node", "dist/main.js"]
CMD ["dist/main.js"]

@@ -1,56 +0,0 @@
# baas-control-plane
Multi-tenant control plane for orchestrating BaaS providers (Appwrite, Supabase), focused on provisioning, schema, secrets, metrics, and auditing.
## Overview
- Node.js + TypeScript backend with Fastify
- Multi-tenant with logical isolation per tenant
- Pluggable providers with no business logic
- Central services for provisioning, schema, secrets, finops, and audit
## Architecture
```
/src
/core
/providers
/modules
/lib
main.ts
```
Further details in [docs/architecture.md](docs/architecture.md).
## Multi-tenant flow
1. Create a tenant (`POST /tenants`)
2. Create a project for the tenant (`POST /tenants/:id/projects`)
3. Provision the project on the provider (`POST /projects/:id/provision`)
4. Sync the schema (`POST /projects/:id/schema/sync`)
5. Collect metrics (`GET /projects/:id/metrics`)
## How to add a new provider
1. Create a folder under `src/providers/<provider>`
2. Implement `client`, `provisioning`, `schema`, `metrics`
3. Register it in `provider.factory.ts`
4. Add variables to `.env.example` and to the `SecretsService`
## How to run locally
```bash
cp .env.example .env
npm install
npm run dev
```
### Docker
```bash
docker compose up --build
```
## Minimal API
- `POST /tenants`
- `GET /tenants`
- `POST /tenants/:id/projects`
- `GET /tenants/:id/projects`
- `POST /projects/:id/provision`
- `POST /projects/:id/schema/sync`
- `GET /projects/:id/metrics`
- `GET /health`

@@ -1,11 +0,0 @@
version: '3.9'
services:
backend:
build: .
ports:
- "4000:4000"
env_file:
- .env
volumes:
- ./data:/app/data

@@ -0,0 +1,10 @@
node_modules
dist
.git
.env
.gitignore
Dockerfile
README.md
BILLING-FINANCE-CORE.md
prisma/migrations
*.log

@@ -0,0 +1,25 @@
module.exports = {
parser: '@typescript-eslint/parser',
parserOptions: {
project: 'tsconfig.json',
tsconfigRootDir: __dirname,
sourceType: 'module',
},
plugins: ['@typescript-eslint/eslint-plugin'],
extends: [
'plugin:@typescript-eslint/recommended',
'plugin:prettier/recommended',
],
root: true,
env: {
node: true,
jest: true,
},
ignorePatterns: ['.eslintrc.js'],
rules: {
'@typescript-eslint/interface-name-prefix': 'off',
'@typescript-eslint/explicit-function-return-type': 'off',
'@typescript-eslint/explicit-module-boundary-types': 'off',
'@typescript-eslint/no-explicit-any': 'off',
},
};

billing-finance-core/.gitignore

@@ -0,0 +1,6 @@
node_modules
dist
.env
*.log
.DS_Store
coverage

@@ -0,0 +1,102 @@
# BILLING-FINANCE-CORE
This microservice is the financial heart of the SaaS platform, responsible for managing the full lifecycle of subscriptions, billing, tax-invoice issuance (fiscal), and a lightweight operational CRM for customer and sales management.
## 📋 Overview
The project was built for **multi-tenancy** from day zero, using **NestJS** for modularity and **Prisma** for robust database access.
### Architecture
The diagram below illustrates how the components and external services interact:
```mermaid
graph TD
Client[Client/Frontend] -->|HTTP/REST| API[Billing Finance API]
API -->|Validates Token| Identity[Identity Gateway]
subgraph Core Modules
API --> Tenants
API --> Plans
API --> Subscriptions
API --> Invoices
API --> Payments
API --> Fiscal
API --> CRM
end
Payments -->|Webhook/API| Stripe[Stripe / Gateway]
Payments -->|Webhook/API| Boleto[Boleto Generator]
Fiscal -->|NFS-e| NuvemFiscal[Nuvem Fiscal API]
API --> DB[(PostgreSQL)]
```
## 🚀 Estrutura do Projeto
A aplicação é dividida em módulos de domínio, cada um com responsabilidade única:
| Módulo | Descrição |
| :--- | :--- |
| **Tenants** | Gestão dos clientes (empresas) que usam a plataforma. |
| **Plans** | Definição de planos, preços, ciclos (mensal/anual) e limites. |
| **Subscriptions** | Vínculo entre um Tenant e um Plan (Ciclo de Vida). |
| **Invoices** | Faturas geradas a partir das assinaturas (Contas a Receber). |
| **Payments** | Integração com gateways (Stripe, Boleto, Pix) e conciliação. |
| **Fiscal** | Emissão de Notas Fiscais de Serviço (NFS-e). |
| **CRM** | Gestão leve de empresas, contatos e oportunidades (deals). |
## 🛠️ Tecnologias e Otimizações
- **Backend**: Node.js 20 + NestJS
- **ORM**: Prisma (PostgreSQL)
- **Containerização**:
- Multi-stage builds (Builder + Prod Deps + Runtime).
- Runtime baseado em `gcr.io/distroless/nodejs20-debian12`.
- Execução segura sem shell e com usuário não-privilegiado (padrão distroless).
## 💻 Running the Service
The service can be brought up with Docker.
### Prerequisites
- Docker (or Podman)
- Node.js 20+ (for local development)
### Step by Step
1. **Configuration:**
    Copy the example env file:
    ```bash
    cp .env.example .env
    ```
2. **Start the service:**
    ```bash
    docker build -t billing-finance-core .
    docker run --env-file .env -p 3000:3000 billing-finance-core
    ```
    The API will be available on the configured port (default `3000`).
3. **Local development:**
    To run outside Docker:
    ```bash
    npm install
    npm run prisma:generate
    npm run start:dev
    ```
## 🔐 Security and Multi-tenancy
The service operates under a delegated-trust model:
1. **JWT**: It performs no login itself. It trusts the `Authorization: Bearer <token>` header validated by the `Identity Gateway`.
2. **AuthGuard**: Decodes the token to extract `tenantId` and `userId`.
3. **Data isolation**: The `tenantId` is mandatorily injected into every database operation, guaranteeing that one customer can never access another customer's data.
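The injection rule can be sketched as a thin wrapper around the data layer. This is a minimal illustration, assuming a Prisma-like query shape; `TenantScopedDb` and the in-memory rows are hypothetical, not the service's actual code:

```typescript
// Minimal sketch of tenant scoping; names and data are illustrative.
type Where = Record<string, unknown>;

interface InvoiceQuery {
  where?: Where;
}

// Simulated data standing in for the database.
const rows = [
  { id: 'inv_1', tenantId: 't1', amountCents: 5000 },
  { id: 'inv_2', tenantId: 't2', amountCents: 9900 },
];

class TenantScopedDb {
  constructor(private readonly tenantId: string) {}

  // Every query gets `tenantId` merged in last, so a caller can never
  // widen (or replace) the filter to reach another tenant's data.
  findInvoices(query: InvoiceQuery = {}) {
    const where = { ...query.where, tenantId: this.tenantId };
    return rows.filter((r) =>
      Object.entries(where).every(([k, v]) => (r as Where)[k] === v),
    );
  }
}

const db = new TenantScopedDb('t1');
console.log(db.findInvoices().map((r) => r.id)); // only tenant t1's invoices
// A hostile filter is overridden rather than honored:
console.log(db.findInvoices({ where: { tenantId: 't2' } }).map((r) => r.id));
```

The key design choice is that the tenant filter is applied by the wrapper after spreading the caller's filter, so it always wins.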
## 🔧 Dockerfile Details
The `Dockerfile` is optimized for production:
- **Builder**: Compiles the TypeScript and generates the Prisma Client.
- **Prod deps**: Installs production dependencies only (`--omit=dev`), shrinking the image.
- **Runtime (distroless)**: Copies only what is needed (`dist`, `node_modules`, `prisma`) into a minimal, hardened final image.


@ -1,18 +1,44 @@
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json* ./
RUN npm install
COPY tsconfig.json nest-cli.json ./
COPY prisma ./prisma
COPY src ./src
RUN npm run prisma:generate
# Dockerfile
# Stage 1: Build the application
FROM docker.io/library/node:20-alpine AS builder
WORKDIR /usr/src/app
COPY package*.json ./
COPY tsconfig*.json ./
COPY prisma ./prisma/
RUN npm ci
RUN npx prisma generate
COPY . .
RUN npm run build
FROM node:20-alpine
# Stage 2: Install production dependencies
FROM docker.io/library/node:20-alpine AS prod-deps
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json* ./
COPY prisma ./prisma
# Install only production dependencies
# generating prisma client again is often needed if it relies on post-install scripts or binary positioning
RUN npm install --omit=dev
RUN npx prisma generate
# Stage 3: Run the application
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
ENV NODE_ENV=production
# Copy necessary files from build stages
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
# Copy prisma folder might be needed for migrations or schema references
COPY --from=builder /app/prisma ./prisma
CMD ["node", "dist/main.js"]
CMD ["dist/main.js"]


@ -1,77 +0,0 @@
# billing-finance-core
Backend financeiro, billing, fiscal e CRM para uma plataforma SaaS multi-tenant.
## Visão geral
- **Multi-tenant desde o início** com isolamento por `tenantId`.
- **Sem autenticação própria**: confia no `identity-gateway` via JWT interno.
- **Core financeiro**: planos, assinaturas, invoices, pagamentos, conciliação.
- **Fiscal (base)**: estrutura para emissão de NFS-e.
- **CRM**: empresas, contatos e pipeline simples de negócios.
## Stack
- Node.js + TypeScript
- NestJS
- PostgreSQL + Prisma
- Docker
## Integração com identity-gateway
- Todas as rotas são protegidas por JWT interno.
- O token deve conter:
- `tenantId`
- `userId`
- `roles`
- O guard valida `issuer` e assina com `JWT_PUBLIC_KEY` ou `JWT_SECRET`.
## Modelo de dados (resumo)
- **Tenant**: empresa cliente
- **Plan**: preço, ciclo e limites
- **Subscription**: tenant + plano
- **Invoice**: contas a receber
- **Payment**: pagamentos com gateway
- **FiscalDocument**: base para NFS-e
- **CRM**: companies, contacts, deals
## Fluxo de cobrança
1. Criar plano
2. Criar assinatura
3. Gerar invoice
4. Criar pagamento via gateway
5. Receber webhook e conciliar
## Endpoints mínimos
```
POST /tenants
GET /tenants
POST /plans
GET /plans
POST /subscriptions
GET /subscriptions
POST /invoices
GET /invoices
POST /payments/:invoiceId
POST /webhooks/:gateway
GET /crm/companies
POST /crm/deals
GET /health
```
## Configuração local
```bash
cp .env.example .env
npm install
npm run prisma:generate
npm run prisma:migrate
npm run start:dev
```
## Docker
```bash
docker compose up --build
```
## Estrutura
- `src/core`: guard e contexto do tenant
- `src/modules`: domínios de negócio
- `prisma/`: schema e migrations
- `docs/`: documentação técnica


@ -1,30 +0,0 @@
version: '3.9'
services:
postgres:
image: postgres:15-alpine
environment:
POSTGRES_DB: billing_finance_core
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
ports:
- '5432:5432'
volumes:
- postgres_data:/var/lib/postgresql/data
billing-finance-core:
build: .
environment:
NODE_ENV: development
PORT: 3000
DATABASE_URL: postgresql://postgres:postgres@postgres:5432/billing_finance_core
JWT_SECRET: change-me
JWT_ISSUER: identity-gateway
PAYMENT_WEBHOOK_SECRET: change-me
ports:
- '3000:3000'
depends_on:
- postgres
volumes:
postgres_data:

billing-finance-core/package-lock.json generated Normal file
File diff suppressed because it is too large

@ -14,10 +14,12 @@
"prisma:studio": "prisma studio"
},
"dependencies": {
"@nestjs/axios": "^3.0.0",
"@nestjs/common": "^10.3.0",
"@nestjs/core": "^10.3.0",
"@nestjs/platform-express": "^10.3.0",
"@prisma/client": "^5.20.0",
"axios": "^1.6.0",
"class-transformer": "^0.5.1",
"class-validator": "^0.14.1",
"dotenv": "^16.4.5",
@ -29,8 +31,14 @@
"@nestjs/cli": "^10.4.2",
"@nestjs/schematics": "^10.1.2",
"@nestjs/testing": "^10.3.0",
"@types/eslint": "^9.6.1",
"@types/jsonwebtoken": "^9.0.6",
"@types/node": "^20.14.11",
"@typescript-eslint/eslint-plugin": "^8.51.0",
"@typescript-eslint/parser": "^8.51.0",
"eslint": "^8.57.1",
"eslint-config-prettier": "^10.1.8",
"eslint-plugin-prettier": "^5.5.4",
"prisma": "^5.20.0",
"ts-node": "^10.9.2",
"typescript": "^5.5.4"


@ -105,6 +105,7 @@ model Invoice {
tenant Tenant @relation(fields: [tenantId], references: [id])
subscription Subscription? @relation(fields: [subscriptionId], references: [id])
payments Payment[]
fiscalDocs FiscalDocument[]
@@index([tenantId])
@@index([subscriptionId])


@ -0,0 +1,17 @@
# CRM Module
This module manages customer relationships inside `billing-finance-core`.
**Note:** There is a separate service called `crm-core`, written in Go. The module here is a lightweight CRM embedded directly in the finance service, making it easier to manage the customers who pay invoices.
## Responsibilities
- Manage companies
- Manage contacts
- Simplified sales pipeline (deals)
## Multi-tenancy
- All data is isolated by `tenantId`.
- `CrmService` requires a `tenantId` for every read and create operation.
## Structure
- `CrmController`: Exposes the REST endpoints.
- `CrmService`: Business logic and database access via Prisma.


@ -0,0 +1,63 @@
# Fiscal Module
This module handles all fiscal and accounting operations of the system, including issuing service invoices (NFS-e), storing the XML and PDF artifacts, and querying document status.
## Nuvem Fiscal Integration
We use the [Nuvem Fiscal](https://nuvemfiscal.com.br) API to issue service invoices (NFS-e).
### Configuration
For the integration to work, set the following environment variables in `.env`:
```env
NUVEM_FISCAL_CLIENT_ID=your_client_id
NUVEM_FISCAL_CLIENT_SECRET=your_client_secret
```
### Architecture
The integration is implemented in `NuvemFiscalProvider` (`src/modules/fiscal/providers/nuvem-fiscal.provider.ts`), which encapsulates the OAuth2 authentication and the communication with the API.
`FiscalService` uses this provider to perform the operations.
### Usage
To issue a service invoice:
#### From Code (Service)
```typescript
import { FiscalService } from './fiscal.service';

constructor(private fiscalService: FiscalService) {}

async emitir() {
  const payload = {
    // NFS-e payload as described in the Nuvem Fiscal documentation
  };
  await this.fiscalService.emitirNotaServico(payload);
}
```
#### Via API (HTTP)
You can test issuance with a POST request:
```
POST /fiscal/nfe
Content-Type: application/json

{
  "referencia": "REF123",
  "prestador": { ... },
  "tomador": { ... },
  "servicos": [ ... ]
}
```
### Useful Links
- [Nuvem Fiscal API documentation](https://dev.nuvemfiscal.com.br/docs/api/)
- [Nuvem Fiscal dashboard](https://app.nuvemfiscal.com.br)


@ -0,0 +1,12 @@
import { Body, Controller, Post } from '@nestjs/common';
import { FiscalService } from './fiscal.service';
@Controller('fiscal')
export class FiscalController {
constructor(private readonly fiscalService: FiscalService) { }
@Post('nfe')
async emitirNfe(@Body() payload: any) {
return this.fiscalService.emitirNotaServico(payload);
}
}


@ -1,9 +1,14 @@
import { Module } from '@nestjs/common';
import { HttpModule } from '@nestjs/axios';
import { FiscalService } from './fiscal.service';
import { PrismaService } from '../../lib/postgres';
import { NuvemFiscalProvider } from './providers/nuvem-fiscal.provider';
import { FiscalController } from './fiscal.controller';
@Module({
providers: [FiscalService, PrismaService],
imports: [HttpModule],
controllers: [FiscalController],
providers: [FiscalService, PrismaService, NuvemFiscalProvider],
exports: [FiscalService],
})
export class FiscalModule {}
export class FiscalModule { }


@ -11,9 +11,14 @@ interface CreateFiscalInput {
xmlUrl?: string;
}
import { NuvemFiscalProvider } from './providers/nuvem-fiscal.provider';
@Injectable()
export class FiscalService {
constructor(private readonly prisma: PrismaService) {}
constructor(
private readonly prisma: PrismaService,
private readonly nuvemFiscalProvider: NuvemFiscalProvider,
) { }
create(data: CreateFiscalInput) {
return this.prisma.fiscalDocument.create({
@ -34,4 +39,8 @@ export class FiscalService {
orderBy: { createdAt: 'desc' },
});
}
async emitirNotaServico(payload: any) {
return this.nuvemFiscalProvider.emitirNfe(payload);
}
}


@ -0,0 +1,100 @@
import { Injectable, Logger } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { firstValueFrom } from 'rxjs';
@Injectable()
export class NuvemFiscalProvider {
private readonly logger = new Logger(NuvemFiscalProvider.name);
private readonly baseUrl = 'https://api.nuvemfiscal.com.br';
private readonly authUrl = 'https://auth.nuvemfiscal.com.br/oauth/token';
private accessToken: string | null = null;
private tokenExpiresAt: number = 0;
constructor(
private readonly httpService: HttpService,
// Injecting ConfigService would require ConfigModule; process.env is used as a fallback instead.
) { }
private async authenticate(): Promise<string> {
// Check if token is valid (with 60s buffer)
if (this.accessToken && Date.now() < this.tokenExpiresAt) {
return this.accessToken;
}
const clientId = process.env.NUVEM_FISCAL_CLIENT_ID;
const clientSecret = process.env.NUVEM_FISCAL_CLIENT_SECRET;
if (!clientId || !clientSecret) {
throw new Error('Nuvem Fiscal credentials not configured');
}
const params = new URLSearchParams();
params.append('grant_type', 'client_credentials');
params.append('client_id', clientId);
params.append('client_secret', clientSecret);
params.append('scope', 'nfe'); // Basic scope for NF-e
try {
this.logger.log('Authenticating with Nuvem Fiscal...');
const { data } = await firstValueFrom(
this.httpService.post(this.authUrl, params, {
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
}),
);
this.accessToken = data.access_token;
// expires_in is in seconds
this.tokenExpiresAt = Date.now() + (data.expires_in * 1000) - 60000;
this.logger.log('Authenticated successfully');
return this.accessToken;
} catch (error) {
this.logger.error('Error authenticating with Nuvem Fiscal', error.response?.data || error.message);
throw error;
}
}
async emitirNfe(payload: any): Promise<any> {
const token = await this.authenticate();
const url = `${this.baseUrl}/nfe/sefaz/requisicao`;
// NOTE: the exact endpoint may vary with the Nuvem Fiscal API version
// (e.g. POST /nfe/emitir vs POST /nfe/sefaz/requisicao); confirm against
// the current Swagger documentation before relying on this path.
try {
const { data } = await firstValueFrom(
this.httpService.post(url, payload, {
headers: {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
},
}),
);
return data;
} catch (error) {
this.logger.error('Error emitting NFe', error.response?.data || error.message);
throw error;
}
}
async consultarNfe(id: string): Promise<any> {
const token = await this.authenticate();
const url = `${this.baseUrl}/nfe/${id}`;
try {
const { data } = await firstValueFrom(
this.httpService.get(url, {
headers: { Authorization: `Bearer ${token}` }
})
);
return data;
} catch (error) {
this.logger.error('Error fetching NFe', error.response?.data || error.message);
throw error;
}
}
}


@ -0,0 +1,16 @@
# Invoice Module
Module responsible for managing charges and invoices. It is the financial heart of the system.
## Responsibilities
- Generate one-off or recurring invoices.
- Track payment status (`PENDING`, `PAID`, `OVERDUE`, `CANCELED`).
- Calculate interest and late fees (future).
- Link payments to invoices.
## Integration
- **Subscriptions**: Invoices are generated automatically by the `subscriptions` module.
- **Fiscal**: When an invoice is paid, it can trigger the issuance of a fiscal document in the `fiscal` module.
## Multi-tenancy
- Fully isolated by `tenantId`.
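The status values above imply a small state machine. The transition table below is a hedged sketch (an assumption, not the module's actual rules), showing how a guard might reject invalid status changes:

```typescript
// Illustrative invoice status lifecycle; the transition table is an
// assumption based on the statuses listed in this document.
type InvoiceStatus = 'PENDING' | 'PAID' | 'OVERDUE' | 'CANCELED';

const allowed: Record<InvoiceStatus, InvoiceStatus[]> = {
  PENDING: ['PAID', 'OVERDUE', 'CANCELED'],
  OVERDUE: ['PAID', 'CANCELED'],
  PAID: [],     // terminal state
  CANCELED: [], // terminal state
};

function canTransition(from: InvoiceStatus, to: InvoiceStatus): boolean {
  return allowed[from].includes(to);
}

console.log(canTransition('PENDING', 'PAID'));  // true
console.log(canTransition('PAID', 'CANCELED')); // false
```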


@ -0,0 +1,11 @@
# Payment Module
Manages financial transactions and payment gateways.
## Responsibilities
- Record payment attempts.
- Integrate with gateways (e.g. Stripe, Pagar.me).
- Update the Invoice status once a payment is confirmed.
## Multi-tenancy
- Payments belong to an Invoice, which belongs to a Tenant, so isolation is guaranteed.


@ -25,6 +25,6 @@ export class BoletoGateway implements PaymentGateway {
}
async reconcile(payment: Payment): Promise<PaymentStatus> {
return payment.status;
return payment.status as PaymentStatus;
}
}


@ -54,6 +54,6 @@ export class CardGateway implements PaymentGateway {
}
async reconcile(payment: Payment): Promise<PaymentStatus> {
return payment.status;
return payment.status as PaymentStatus;
}
}


@ -23,6 +23,6 @@ export class PixGateway implements PaymentGateway {
}
async reconcile(payment: Payment): Promise<PaymentStatus> {
return payment.status;
return payment.status as PaymentStatus;
}
}


@ -0,0 +1,13 @@
# Plan Module
Manages the subscription plans available on the platform (SaaS).
## Responsibilities
- Create and list plans.
- Define prices (in cents).
- Define the billing cycle (`MONTHLY`, `YEARLY`).
- Define limits (soft limits and hard limits).
## Usage
Plans are usually global, defined by the platform admin; in a multi-tenant "SaaS builder" each Tenant could also have its own plans, or the plans may belong to the parent system.
*Note: Check the business rule on whether plans are shared or per-tenant via `tenantId`. In the current schema, `Plan` has no `tenantId`, which implies plans are system-global (or the field is missing).*
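Cents-based pricing and billing cycles can be sketched as follows. The `Plan` fields here are illustrative (they are not necessarily the Prisma schema's field names); the point is that integer cents avoid floating-point drift and the cycle drives the next billing date:

```typescript
// Illustrative sketch of cents pricing with billing cycles; field names
// are assumptions, not the actual schema.
type BillingCycle = 'MONTHLY' | 'YEARLY';

interface Plan {
  name: string;
  priceCents: number; // integer cents avoid floating-point rounding drift
  cycle: BillingCycle;
}

// Next billing date from a cycle start, computed in UTC.
function nextBillingDate(start: Date, cycle: BillingCycle): Date {
  const d = new Date(start.getTime());
  if (cycle === 'MONTHLY') d.setUTCMonth(d.getUTCMonth() + 1);
  else d.setUTCFullYear(d.getUTCFullYear() + 1);
  return d;
}

const pro: Plan = { name: 'Pro', priceCents: 9900, cycle: 'MONTHLY' };
console.log((pro.priceCents / 100).toFixed(2)); // "99.00"
console.log(nextBillingDate(new Date(Date.UTC(2025, 0, 15)), 'MONTHLY').toISOString());
```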


@ -0,0 +1,11 @@
# Subscription Module
Manages customer (Tenant) recurring subscriptions.
## Responsibilities
- Create a subscription linking a Tenant to a Plan.
- Track status (`ACTIVE`, `PAST_DUE`, `CANCELED`).
- Generate Invoices automatically based on the billing cycle.
## Multi-tenancy
- Isolated by `tenantId`.


@ -0,0 +1,11 @@
# Tenant Module
Manages the platform's customers (the companies that subscribe to the SaaS).
## Responsibilities
- Onboarding of new Tenants.
- Management of registration data (CNPJ/TaxId, name).
- Account status (`ACTIVE`, `INACTIVE`).
## Important
The `tenantId` generated here is used in **every** other table in the system to guarantee data isolation.


@ -0,0 +1,11 @@
# Webhook Module
Receives asynchronous notifications from payment gateways and other external services.
## Responsibilities
- Receive POSTs from external services (Stripe).
- Validate security signatures to guarantee the origin of each event.
- Process events (e.g. `payment_intent.succeeded`) and route them to the responsible modules (e.g. `payments`, `subscriptions`).
## Security
- This module exposes public routes, but they are protected by signature validation (`stripe-signature`, etc.) or a shared secret header, implemented in the `AuthGuard` or in the controller itself.
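Signature validation over the raw body can be sketched with Node's built-in `crypto` module. This is a simplification in the spirit of Stripe's `stripe-signature` scheme; the header format here (a plain hex HMAC of the raw body) is an assumption, not Stripe's real timestamped format:

```typescript
// Simplified webhook signature check; the plain-hex-HMAC header format is
// an illustration, not the real stripe-signature layout.
import { createHmac, timingSafeEqual } from 'node:crypto';

function sign(rawBody: string, secret: string): string {
  return createHmac('sha256', secret).update(rawBody).digest('hex');
}

function verify(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = Buffer.from(sign(rawBody, secret), 'hex');
  const received = Buffer.from(signatureHeader, 'hex');
  // Constant-time comparison prevents timing attacks on the signature.
  return expected.length === received.length && timingSafeEqual(expected, received);
}

const secret = 'change-me';
const body = JSON.stringify({ type: 'payment_intent.succeeded' });
console.log(verify(body, sign(body, secret), secret)); // true
console.log(verify(body, sign(body, 'wrong'), secret)); // false
```

Note that verification must run against the raw request body, before any JSON parsing or re-serialization, since re-encoding can change the bytes being signed.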

build_all.sh Normal file

@ -0,0 +1,25 @@
#!/bin/bash
set -e
REGISTRY="rg.fr-par.scw.cloud/yumi"
SERVICES=(
"billing-finance-core"
"crm-core"
"identity-gateway"
"baas-control-plane"
"observability-core"
"repo-integrations-core"
"security-governance-core"
)
for SERVICE in "${SERVICES[@]}"; do
if [ -d "$SERVICE" ]; then
echo "Building $SERVICE..."
docker build -t "$REGISTRY/$SERVICE:latest" ./$SERVICE
echo "Pushing $SERVICE..."
docker push "$REGISTRY/$SERVICE:latest"
echo "Done $SERVICE"
else
echo "Directory $SERVICE not found!"
fi
done

crm-core/.dockerignore Normal file

@ -0,0 +1,8 @@
.git
.env
.gitignore
Dockerfile
README.md
CRM-CORE.md
docs
*.log

crm-core/.gitignore vendored Normal file

@ -0,0 +1,6 @@
.env
*.log
.DS_Store
crm-core
main
coverage

crm-core/CRM-CORE.md Normal file

@ -0,0 +1,99 @@
# CRM-CORE
`crm-core` is a microservice specialized in operational customer-relationship management (CRM) for B2B SaaS platforms. It is agnostic to billing rules and focuses exclusively on the sales lifecycle and contact management.
## 📋 Overview
This service is designed to be consumed via API and has no user interface of its own. It delegates authentication to the `identity-gateway` and operates under strict multi-tenant isolation.
**What it does:**
- ✅ Account and contact management.
- ✅ Sales pipelines, stages, and deals (opportunities).
- ✅ Activity and note tracking.
**What it does NOT do:**
- ❌ Billing or ERP (see `billing-finance-core`).
- ❌ Infrastructure deployment (see `automation-jobs-core`).
- ❌ User authentication.
### Architecture
```mermaid
graph TD
    Client[Frontend / API Gateway] -->|HTTP REST| API[CRM Core API]
    API -->|Validates JWT| Identity[Identity Gateway]
    subgraph CRM Domain
        API --> Accounts
        API --> Contacts
        API --> Deals
        API --> Activities
    end
    API --> DB[(PostgreSQL)]
```
## 🚀 Project Structure
The project is written in **Go** and follows a simplified clean architecture:
| Directory | Description |
| :--- | :--- |
| `cmd/api` | Application entry point. |
| `internal/domain` | Entities and interfaces. |
| `internal/usecase` | Business rules. |
| `internal/handler` | HTTP handlers (Chi router). |
| `internal/repository` | Data access (pgx). |
## 🛠️ Technologies and Optimizations
- **Language**: Go 1.23+
- **Router**: Chi
- **Database driver**: jackc/pgx
- **Containerization**:
    - Based on `gcr.io/distroless/static:nonroot`.
    - Statically compiled binary with `-ldflags="-w -s"`.
    - Ultra-light final image (~20 MB).
## 💻 Running the Service
### Docker (recommended)
```bash
# Build
docker build -t crm-core .
# Run
docker run -p 8080:8080 --env-file .env crm-core
```
The API will be available on port `8080`.
### Development
1. **Dependencies**: Go 1.23+, PostgreSQL.
2. **Setup**:
```bash
cp .env.example .env
go mod tidy
```
3. **Run**:
```bash
go run ./cmd/api/main.go
```
## 🔐 Security and Multi-tenancy
The service enforces logical **row-level security (RLS)** at the application layer:
1. **Trust**: It trusts JWTs signed by the `identity-gateway`.
2. **Context**: The `tenantId` is extracted from the token on every request.
3. **Isolation**: Every repository automatically injects `WHERE tenant_id = $1` into all queries.
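The repository rule can be sketched as follows. `crm-core` itself is written in Go with pgx, so this TypeScript snippet only mirrors the idea: the tenant filter is appended by the repository layer, never supplied by the caller (`scopedSelect` is a hypothetical name):

```typescript
// Language-agnostic sketch (in TypeScript) of repository-level tenant
// scoping; crm-core's real implementation is Go + pgx.
interface ScopedQuery {
  text: string;
  values: unknown[];
}

function scopedSelect(table: string, tenantId: string): ScopedQuery {
  // The tenant filter is always parameter $1; any extra filters would
  // follow as $2, $3, ... so callers cannot omit or override it.
  return { text: `SELECT * FROM ${table} WHERE tenant_id = $1`, values: [tenantId] };
}

console.log(scopedSelect('deals', 't1').text);
// SELECT * FROM deals WHERE tenant_id = $1
```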
## 🔧 Dockerfile Details
- **Builder**: `golang:1.23-alpine` image; compiles the static binary.
- **Runtime (distroless)**: Google's `static:nonroot` image.
    - No shell (`/bin/sh`).
    - No package manager (`apk`).
    - Runs as a non-privileged user (UID 65532).


@ -1,15 +1,26 @@
FROM golang:1.22-alpine AS builder
# Dockerfile
FROM docker.io/library/golang:1.23-alpine AS builder
WORKDIR /app
RUN apk add --no-cache git
COPY go.mod go.sum ./
RUN go mod download
COPY . ./
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o crm-core ./cmd/api
FROM alpine:3.19
COPY . .
# Build with optimization flags
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/crm-core ./cmd/api
# Use Google Distroless static image for minimal size and security
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
RUN addgroup -S app && adduser -S app -G app
COPY --from=builder /app/crm-core /app/crm-core
USER app
COPY --from=builder /app/crm-core .
# Non-root user for security
USER nonroot:nonroot
EXPOSE 8080
ENTRYPOINT ["/app/crm-core"]
CMD ["./crm-core"]


@ -1,64 +0,0 @@
# crm-core
Enterprise-ready CRM backend for B2B SaaS platforms. `crm-core` handles CRM data only—no billing, deploys, or ERP workloads.
## Scope & Limits
- ✅ Accounts, contacts, deals, pipelines/stages, activities, notes, tags
- ✅ Multi-tenant by design (`tenant_id` on every table and query)
- ✅ JWT validation via JWKS (trusted identity-gateway)
- ❌ No billing data or payment secrets
- ❌ No deployment or ERP features
## Authentication
`crm-core` trusts JWTs issued by `identity-gateway`.
Required claims:
- `sub` (user ID)
- `tenantId`
- `roles` (must include `crm.read`, `crm.write`, or `crm.admin`)
## Domain Model
See [docs/domain-model.md](docs/domain-model.md).
## Multi-tenant Enforcement
Every request reads `tenantId` from the JWT and filters all reads/writes with `tenant_id`. This prevents data leakage across tenants.
## Running Locally
```bash
cp .env.example .env
make run
```
Docker (API + Postgres):
```bash
docker-compose up --build
```
## Migrations & sqlc
```bash
make migrate-up
make sqlc
```
## Example cURL
```bash
curl -X POST http://localhost:8080/api/v1/accounts \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"name":"Acme Corp"}'
```
```bash
curl -X POST http://localhost:8080/api/v1/deals \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"title":"Upgrade","pipeline_id":"<pipeline>","stage_id":"<stage>","value_cents":500000}'
```


@ -1,33 +0,0 @@
version: "3.9"
services:
postgres:
image: postgres:16-alpine
environment:
POSTGRES_DB: crm_core
POSTGRES_USER: crm
POSTGRES_PASSWORD: crm
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U crm"]
interval: 5s
timeout: 5s
retries: 5
volumes:
- pgdata:/var/lib/postgresql/data
crm-core:
build: .
environment:
APP_ENV: development
HTTP_ADDR: :8080
DATABASE_URL: postgres://crm:crm@postgres:5432/crm_core?sslmode=disable
JWKS_URL: http://identity-gateway/.well-known/jwks.json
ports:
- "8080:8080"
depends_on:
postgres:
condition: service_healthy
volumes:
pgdata:

dashboard/.dockerignore Normal file

@ -0,0 +1,10 @@
node_modules
dist
.git
.env
.gitignore
Dockerfile
README.md
DASHBOARD.md
*.log
coverage

dashboard/.gitignore vendored

@ -1,32 +1,6 @@
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*
# Dependencies
node_modules
# Build outputs
dist
dist-ssr
*.local
# Environment variables
.env
.env.local
.env*.local
# Editor directories and files
.vscode/*
!.vscode/extensions.json
.idea
*.log
.DS_Store
*.suo
*.ntvs*
*.njsproj
*.sln
*.sw?
coverage

dashboard/DASHBOARD.md Normal file

@ -0,0 +1,71 @@
# DASHBOARD
The `dashboard` is the platform's administration and operations interface, built as a modern single-page application (SPA). It provides centralized control over infrastructure, projects, finances, and integrations.
## 📋 Overview
With a "VSCode-like" design, the dashboard focuses on information density and usability for developers and operators.
### Frontend Architecture
```mermaid
graph TD
    User[Operator] -->|Browser| SPA[React SPA]
    SPA -->|Auth/Data| BaaS[Appwrite / BaaS Control Plane]
    SPA -->|Git Ops| GitHub[GitHub API]
    SPA -->|Infra| CF[Cloudflare API]
    subgraph Modules
        SPA --> Accounts[Account manager]
        SPA --> Kanban[Project management]
        SPA --> Finance[Finance ERP]
        SPA --> Terminals[Realtime logs]
    end
```
## 🚀 Project Structure
The project uses **Vite + React** for fast development and builds:
| Directory | Description |
| :--- | :--- |
| `src/pages` | Main screens (Overview, Projects, Kanban, etc.). |
| `src/components` | Reusable components (cards, inputs, UserDropdown). |
| `src/contexts` | Global state management (auth). |
| `src/lib` | External service configuration (Appwrite SDK). |
## 🛠️ Technologies and Optimizations
- **Frontend**: React 18, TypeScript, TailwindCSS.
- **Build tool**: Vite.
- **Deploy**: Docker (Nginx Alpine).
- **Integrations**: Appwrite, GitHub, Cloudflare.
## 💻 Running the Dashboard
### Docker (production)
```bash
docker build -t dashboard .
docker run -p 80:80 dashboard
```
The dashboard will be reachable on port `80` (or whatever host port you map in `docker run`).
### Local Development
1. **Install dependencies**:
```bash
npm install
```
2. **Configuration**:
Create a `.env` file with the Appwrite keys (see `.env.example`).
3. **Run**:
```bash
npm run dev
```
## 🔧 Dockerfile Details
The `Dockerfile` uses a multi-stage build to ship static files only:
- **Builder**: Node.js 20; compiles the React app into optimized HTML/CSS/JS.
- **Runtime**: Nginx Alpine; serves the generated `dist` folder with a light, fast configuration.

dashboard/Dockerfile Normal file

@ -0,0 +1,17 @@
# Dockerfile
# Stage 1: Build the React application
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Serve the static build with Nginx
FROM docker.io/library/nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
# COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]


@ -1,318 +0,0 @@
# Dashboard - DevOps Orchestration Platform
Dashboard completo para gerenciamento de infraestrutura, projetos, finanças e tasks com design VSCode-like premium.
## 🎨 Design System
Layout **VSCode-like** com:
- **Cores**: slate-900/950 background, cyan-300/400 accents
- **Cards**: rounded-xl com border-slate-800 e shadow-inner
- **Typography**: uppercase tracking-wide para labels
- **Avatar**: Gradientes cyan → blue
- **Animações**: Transitions suaves 150ms
## 📱 Páginas
### 🏠 Overview (`/`)
- Dashboard principal com métricas
- Total de repos, workers ativos, último deploy
- Status de integrações (Appwrite, GitHub, Realtime)
### 👤 Perfil (`/profile`)
- Avatar com iniciais do usuário
- Informações da conta (email, ID, data de criação)
- Estatísticas (projetos, tickets, uptime)
- Placeholder para edição e segurança
### 📊 Projetos (`/projects`)
- Grid de cards de projetos
- Filtros: Todos, Active, Paused, Archived
- Busca por nome
- Status badges coloridos
- Link para repositório GitHub
### 📋 Kanban (`/kanban`)
- 3 colunas: Backlog 📋 | Em Progresso 🏃 | Concluído ✅
- Cards de tickets com título, descrição
- Labels de prioridade: Low/Medium/High
- Assignee e drag & drop (futuro)
### 🔑 Admin de Contas (`/accounts`)
- Gerenciar credenciais multi-plataforma:
- Cloudflare (laranja)
- GitHub (branco)
- cPanel (azul)
- DirectAdmin (roxo)
- Appwrite (rosa)
- Mascaramento de API Keys com toggle show/hide
- Testes de conexão
- Stats por provider
### 💰 Financeiro (`/finance`)
- Módulo ERP básico
- Cards de resumo: Receitas | Despesas | Saldo
- Lista de transações com categorias
- Gráfico de tendência mensal
- Breakdown por categoria
### ⚡ Hello World (`/hello`)
- Teste de Appwrite Function básica
- Input customizado
- Logs de execução
### 🐙 GitHub Repos (`/github`)
- Sincronizar repositórios do GitHub
- Usar credencial do cloud_accounts
- Listar repos do usuário
### ☁️ Cloudflare Zones (`/cloudflare`)
- Status de Zones e Workers
- Integração com Cloudflare API
- Usar credencial do cloud_accounts
### ⚙️ Settings (`/settings`)
- Configurações gerais
- Preferências (futuro)
## 🧩 Componentes
### `UserDropdown`
- Dropdown de perfil no header (canto direito)
- Avatar com iniciais e gradiente
- Menu: Meu Perfil, Configurações, Sair
- Click outside para fechar
### `TerminalLogs`
- Terminal em tempo real no rodapé
- Monitora collection `audit_logs` via Realtime
- Logs com timestamp
## 🗂️ Estrutura
```
dashboard/src/
├── components/
│ ├── TerminalLogs.tsx # Terminal realtime
│ └── UserDropdown.tsx # Dropdown de perfil
├── contexts/
│ └── Auth.tsx # Contexto auth Appwrite
├── layouts/
│ └── DashboardLayout.tsx # Layout principal com sidebar
├── lib/
│ └── appwrite.ts # Config Appwrite SDK
├── pages/
│ ├── AccountsAdmin.tsx # Admin multi-plataforma
│ ├── Cloudflare.tsx # Zones Cloudflare
│ ├── ERPFinance.tsx # Módulo financeiro
│ ├── Github.tsx # Repos GitHub
│ ├── Hello.tsx # Hello World function
│ ├── Home.tsx # Overview dashboard
│ ├── Kanban.tsx # Board de tickets
│ ├── Login.tsx # Página de login
│ ├── Profile.tsx # Perfil do usuário
│ ├── Projects.tsx # Grid de projetos
│ └── Settings.tsx # Configurações
├── App.tsx # Routes
└── main.tsx # Entry point
```
## 🔧 Stack Tecnológica
- **Framework**: React 18 + TypeScript
- **Build**: Vite 5
- **Routing**: React Router DOM v7
- **Backend**: Appwrite Cloud (BaaS)
- **Styling**: Tailwind CSS 3
- **Icons**: Lucide React
- **Linting**: ESLint + TypeScript ESLint
## 🚀 Scripts
```bash
# Desenvolvimento
npm run dev # Vite dev server em http://localhost:5173
# Build
npm run build # TypeScript + Vite build
# Lint
npm run lint # ESLint com regras TypeScript
# Preview
npm run preview # Preview do build de produção
```
## 🔑 Variáveis de Ambiente
Crie `.env` com:
```env
# Appwrite Endpoint (cliente)
VITE_APPWRITE_ENDPOINT=https://cloud.appwrite.io/v1
# Project ID (cliente)
VITE_APPWRITE_PROJECT_ID=seu_project_id
# Database ID (cliente)
VITE_APPWRITE_DATABASE_ID=seu_database_id
# Collections IDs (cliente)
VITE_APPWRITE_COLLECTION_SERVERS_ID=servers
VITE_APPWRITE_COLLECTION_GITHUB_REPOS_ID=github_repos
VITE_APPWRITE_COLLECTION_AUDIT_LOGS_ID=audit_logs
VITE_APPWRITE_COLLECTION_CLOUDFLARE_ACCOUNTS_ID=cloud_accounts
```
**Todas as variáveis `VITE_*` são expostas no cliente**, portanto não coloque segredos!
## 📊 Appwrite Collections
### `cloud_accounts`
Multi-platform API credentials:
```typescript
{
  name: string          // e.g. "Cloudflare Production"
  provider: enum        // cloudflare | github | cpanel | directadmin | appwrite
  apiKey: string        // API key or token
  endpoint?: string     // Endpoint URL (optional)
  active: boolean       // Whether the account is active
}
```
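On the client these documents come back from the Appwrite SDK as untyped objects, so a narrow runtime guard can be useful. A sketch under that assumption — the `CloudAccount` interface and `isCloudAccount` guard are illustrative, not taken from the codebase:

```typescript
// Mirrors the cloud_accounts schema above; provider is narrowed to the known values.
type Provider = "cloudflare" | "github" | "cpanel" | "directadmin" | "appwrite";

interface CloudAccount {
  name: string;
  provider: Provider;
  apiKey: string;
  endpoint?: string;
  active: boolean;
}

const PROVIDERS: readonly string[] = ["cloudflare", "github", "cpanel", "directadmin", "appwrite"];

// Runtime guard for documents that arrive from the SDK as `unknown`.
function isCloudAccount(doc: unknown): doc is CloudAccount {
  if (typeof doc !== "object" || doc === null) return false;
  const d = doc as Record<string, unknown>;
  return (
    typeof d.name === "string" &&
    typeof d.apiKey === "string" &&
    typeof d.active === "boolean" &&
    typeof d.provider === "string" &&
    PROVIDERS.includes(d.provider)
  );
}
```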
### `projects` (planned)
```typescript
{
name: string
description: string
status: enum // active | paused | archived
repository_url?: url
created_at: datetime
owner_id: string
}
```
### `tickets` (planned)
```typescript
{
title: string
description: string
status: enum // backlog | in_progress | done
priority: enum // low | medium | high
assignee?: string
project_id?: string
created_at: datetime
}
```
### `transactions` (planned - ERP)
```typescript
{
description: string
amount: number
type: enum // income | expense
category: string
date: datetime
}
```
## 🎯 Navigation
9 sidebar items:
1. **Overview** 🏠 - Main dashboard
2. **Projetos** 📂 - Project management
3. **Kanban** 📋 - Ticket board
4. **Contas** 🔑 - Multi-platform admin
5. **Financeiro** 💰 - ERP module
6. **Hello World** ✨ - Test function
7. **GitHub Repos** 🐙 - GitHub integration
8. **Cloudflare Zones** ☁️ - Cloudflare integration
9. **Settings** ⚙️ - Settings
## 🔒 Authentication
- Email/password login via Appwrite Auth
- Automatic redirect to the dashboard after login
- Protected routes via the `PrivateRoute` HOC
- Logout through the UserDropdown in the header
## 📦 Build Stats
```
Bundle size: 306KB gzipped
Modules: 1727 transformed
Build time: ~8s
```
## 🚀 Deploy
### Vercel
```bash
npm run build
# Upload the dist/ folder
```
### Netlify
```toml
[build]
command = "npm run build"
publish = "dist"
```
### Docker
```dockerfile
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 5173
CMD ["npm", "run", "preview"]
```
## 🎨 Customization
### Colors
Edit `tailwind.config.js` and swap `slate`/`cyan` for other colors while keeping the same pattern.
### Icons
Replace the Lucide icons with others, keeping `size={16}` or `size={20}` for consistency.
### Layout
Adjust the sidebar width in `DashboardLayout.tsx` line 30: change `w-64` to `w-72` for a wider sidebar.
## 📝 Next Steps
- [ ] Implement full CRUD for Projects
- [ ] Drag & drop on the Kanban board
- [ ] Account creation form
- [ ] Real charts in Finance (recharts/visx)
- [ ] Profile editing
- [ ] Toast notifications (sonner)
- [ ] Dark/light mode toggle
- [ ] Tests with Vitest + Testing Library
## 🐛 Troubleshooting
### Build errors
```bash
rm -rf node_modules dist
npm install
npm run build
```
### Environment variables not loading
Make sure they have the `VITE_` prefix and restart the dev server.
### Realtime not working
Make sure the `audit_logs` collection exists and grants Read permission to authenticated users.
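Appwrite Realtime subscriptions address documents through channel strings of the form `databases.{databaseId}.collections.{collectionId}.documents`. A small illustrative helper for building the `audit_logs` channel (the function name is not from the codebase):

```typescript
// Builds the Appwrite Realtime channel for all documents in a collection.
// Channel format per the Appwrite docs:
//   databases.{databaseId}.collections.{collectionId}.documents
function documentsChannel(databaseId: string, collectionId: string): string {
  return `databases.${databaseId}.collections.${collectionId}.documents`;
}

// With the Web SDK, usage would look like:
// client.subscribe(documentsChannel(DB_ID, "audit_logs"), (event) => { /* ... */ });
const channel = documentsChannel("main", "audit_logs");
```

If the channel string does not match an existing collection, the subscription silently receives no events, which is the symptom described above.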
---
**Dashboard built with 💎 keeping the top-notch VSCode-like standard!**

dashboard/server.js Normal file

@ -0,0 +1,47 @@
const http = require('http');
const fs = require('fs');
const path = require('path');
const PORT = process.env.PORT || 3000;
const PUBLIC_DIR = path.join(__dirname, 'dist');
const getContentType = (filePath) => {
const ext = path.extname(filePath).toLowerCase();
const types = {
'.html': 'text/html',
'.js': 'text/javascript',
'.css': 'text/css',
'.json': 'application/json',
'.png': 'image/png',
'.jpg': 'image/jpeg',
'.gif': 'image/gif',
'.svg': 'image/svg+xml',
'.ico': 'image/x-icon',
};
return types[ext] || 'application/octet-stream';
};
const server = http.createServer((req, res) => {
// Drop the query string and resolve inside PUBLIC_DIR to block path traversal
const urlPath = req.url.split('?')[0];
let filePath = path.join(PUBLIC_DIR, urlPath === '/' ? 'index.html' : urlPath);
if (!filePath.startsWith(PUBLIC_DIR)) {
filePath = path.join(PUBLIC_DIR, 'index.html');
}
fs.stat(filePath, (err, stats) => {
if (err || !stats.isFile()) {
// SPA Fallback: serve index.html for unknown paths
filePath = path.join(PUBLIC_DIR, 'index.html');
}
fs.readFile(filePath, (err, content) => {
if (err) {
res.writeHead(500);
res.end('Server Error');
} else {
res.writeHead(200, { 'Content-Type': getContentType(filePath) });
res.end(content, 'utf-8');
}
});
});
});
server.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});


@ -0,0 +1,11 @@
node_modules
dist
.git
.env
.gitignore
Dockerfile
README.md
IDENTITY-GATEWAY.md
docs
migrations
*.log

identity-gateway/.gitignore vendored Normal file

@ -0,0 +1,6 @@
node_modules
dist
.env
*.log
.DS_Store
coverage


@ -1,4 +1,6 @@
FROM node:20-alpine AS base
# Dockerfile
# Stage 1: Build the application
FROM docker.io/library/node:20-alpine AS builder
WORKDIR /app
@ -9,13 +11,23 @@ COPY src ./src
RUN npm run build
FROM node:20-alpine
# Stage 2: Install production dependencies
FROM docker.io/library/node:20-alpine AS prod-deps
WORKDIR /app
COPY package.json ./
RUN npm install --omit=dev
# Stage 3: Run the application
FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
ENV NODE_ENV=production
COPY --from=base /app/package.json ./package.json
COPY --from=base /app/node_modules ./node_modules
COPY --from=base /app/dist ./dist
COPY --from=prod-deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
EXPOSE 4000
CMD ["node", "dist/main.js"]
CMD ["dist/main.js"]


@ -0,0 +1,89 @@
# IDENTITY-GATEWAY
The `identity-gateway` is the platform's central **internal** authentication and authorization authority. It issues trusted tokens for communication between microservices and manages role-based access control (RBAC).
## 📋 Overview
This service does not compete with public identity providers (such as Auth0 or Clerk). It acts as a "gatekeeper" for the cluster of internal services, ensuring that every request between services, or coming from authorized frontends, carries a valid security context.
### Trust Architecture
```mermaid
graph TD
    User[User/Client] -->|Login| Gateway[Identity Gateway]
    Gateway -->|Validates credentials| DB[(Users DB)]
    Gateway -->|Issues JWT| Token[Internal JWT]
Token -->|Bearer Auth| BaaS[BaaS Control Plane]
Token -->|Bearer Auth| Billing[Billing Core]
Token -->|Bearer Auth| CRM[CRM Core]
subgraph Trust Boundary
BaaS
Billing
CRM
end
```
## 🚀 Project Structure
The project is built with **Fastify** and follows a modular architecture:
| Directory | Responsibility |
| :--- | :--- |
| `src/core` | Guards, global plugins and interceptors. |
| `src/modules/auth` | Login, refresh tokens and password recovery. |
| `src/modules/users` | User and profile management. |
| `src/modules/rbac` | Roles, permissions and policies. |
## 🛠️ Technologies and Optimizations
- **Backend**: Node.js 20 + Fastify.
- **Database**: PostgreSQL (via the `pg` driver or TypeORM/Prisma).
- **Auth**: JWTs (JSON Web Tokens) signed with asymmetric keys or strong secrets.
- **Containerization**:
  - Based on `gcr.io/distroless/nodejs20-debian12`.
  - Secure execution with no shell access.
  - Non-privileged user.
## 💻 How to Run
### Docker (Recommended)
```bash
# Build
docker build -t identity-gateway .
# Run
docker run -p 4000:4000 --env-file .env identity-gateway
```
The service listens on port `4000`.
### Local Development
1. **Install dependencies**:
```bash
npm install
```
2. **Configure the environment**:
```bash
cp .env.example .env
```
3. **Start**:
```bash
npm run dev
```
## 🔐 Security Model
1. **Internal JWTs**: Short-lived tokens (e.g. 15 min) carrying `sub`, `tenantId` and `roles`.
2. **Refresh Tokens**: Long-lived (e.g. 7 days), opaque, rotating tokens stored in the database.
3. **Service-to-Service**: Services can use *Client Credentials* to obtain machine tokens.
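A minimal sketch of what issuing such an internal token looks like. This hand-rolls an HS256 signature over Node's `crypto` module purely for illustration — the service itself depends on the `jsonwebtoken` package, and the secret and claim values below are made up:

```typescript
import { createHmac } from "node:crypto";

const b64url = (data: Buffer | string): string =>
  Buffer.from(data).toString("base64url");

// Sign a short-lived internal JWT (HS256) carrying the claims described above.
function signInternalToken(
  claims: { sub: string; tenantId: string; roles: string[] },
  secret: string,
  ttlSeconds = 15 * 60,
): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const now = Math.floor(Date.now() / 1000);
  const payload = b64url(JSON.stringify({ ...claims, iat: now, exp: now + ttlSeconds }));
  const signature = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

const token = signInternalToken(
  { sub: "user_123", tenantId: "tenant_abc", roles: ["admin"] },
  "dev-only-secret",
);
```

Downstream services verify the signature and the `exp` claim before trusting the embedded `roles`.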
## 🔧 Dockerfile Details
The `Dockerfile` uses 3 stages to keep the final image as small as possible:
1. **Builder**: Compiles the TypeScript.
2. **Prod Deps**: Installs only the `dependencies` from package.json.
3. **Runtime**: Copies the build output and dependencies into a minimal Distroless image.


@ -1,60 +0,0 @@
# identity-gateway
`identity-gateway` is the platform's **internal** identity authority. It exists to unify
identity, RBAC and the issuing of trusted tokens for internal services. It is **not** an auth
product for the market, does not compete with Auth0/Clerk, and offers no public SDKs or white-label login UI.
## Why it is NOT Auth0
- Not sold as a standalone authentication product.
- Not SDK-first; external developer experience is not a priority.
- Tokens are internal and consumed only by trusted services.
- A login UI is not a focus (and is not provided here).
## Role of the identity-gateway
- Centralize authentication and authorization.
- Issue JWTs for internal services.
- Maintain RBAC and per-tenant permissions.
- Act as the identity authority for:
  - `baas-control-plane`
  - `billing-finance-core`
  - `crm-core`
## Trust flow
1. The user authenticates with the `identity-gateway`.
2. The gateway validates the identity (local/external provider).
3. The gateway issues an internal JWT with minimal claims.
4. Internal services validate and trust the token.
> No external service issues tokens for the gateway.
## Token model
See [`docs/token-model.md`](docs/token-model.md).
## Running locally
```bash
cp .env.example .env
npm install
npm run dev
```
## Docker
```bash
docker-compose up --build
```
## Structure
- `src/core`: guards and core services.
- `src/modules`: auth, users, roles, permissions, sessions, providers.
- `docs`: architecture, security and the token model.
## Security notes
- JWTs are internal and must not be exposed directly to public apps.
- Refresh tokens are stored hashed in the database.


@ -1,25 +0,0 @@
version: "3.9"
services:
postgres:
image: postgres:16
environment:
POSTGRES_USER: identity
POSTGRES_PASSWORD: identity
POSTGRES_DB: identity_gateway
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
identity-gateway:
build: .
env_file:
- .env
ports:
- "4000:4000"
depends_on:
- postgres
volumes:
pgdata:


@ -14,10 +14,10 @@
"bcryptjs": "^2.4.3",
"dotenv": "^16.4.5",
"fastify": "^4.28.1",
"fastify-cookie": "^5.7.0",
"@fastify/cookie": "^9.3.1",
"jsonwebtoken": "^9.0.2",
"pg": "^8.12.0",
"pino": "^9.3.2"
"pino": "^8.19.0"
},
"devDependencies": {
"@types/bcryptjs": "^2.4.6",
@ -29,4 +29,4 @@
"ts-node-dev": "^2.0.0",
"typescript": "^5.5.3"
}
}
}


@ -1,5 +1,5 @@
import Fastify from "fastify";
import cookie from "fastify-cookie";
import cookie from "@fastify/cookie";
import { assertEnv, env } from "./lib/env";
import { logger } from "./lib/logger";
import { TokenService } from "./core/token.service";


@ -8,7 +8,10 @@
"strict": true,
"esModuleInterop": true,
"resolveJsonModule": true,
"skipLibCheck": true
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*.ts"]
}
"include": [
"src/**/*.ts"
]
}


@ -0,0 +1,8 @@
.git
.env
.gitignore
Dockerfile
README.md
OBSERVABILITY-CORE.md
migrations
*.log


@ -0,0 +1,2 @@
DATABASE_URL=
PORT=8080

observability-core/.gitignore vendored Normal file

@ -0,0 +1,6 @@
.env
*.log
.DS_Store
observability-core
main
coverage


@ -0,0 +1,28 @@
# Dockerfile
FROM docker.io/library/golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build with optimization flags
# -w -s: Strip debug symbols
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/observability-core ./cmd/api
# Use Google Distroless static image for minimal size and security
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder /app/observability-core .
# Copy configs/migrations if needed at runtime, e.g.:
# COPY --from=builder /app/migrations ./migrations
USER nonroot:nonroot
EXPOSE 8080
CMD ["./observability-core"]


@ -0,0 +1,79 @@
# OBSERVABILITY-CORE
The `observability-core` is the platform's central monitoring service. It collects health checks, manages alerts, and integrates with a time-series database (TimescaleDB) to store long-term metrics.
## 📋 Overview
Unlike log-oriented systems such as the `dashboard` (which consumes logs via Appwrite Realtime), this service focuses on *metrics* and *availability*.
### Architecture
```mermaid
graph TD
Agent[Internal Service] -->|Push Metrics| API[Observability API]
API -->|Write| TSDB[(TimescaleDB)]
API -->|Check| Targets[http://target-service/health]
subgraph Periodic Jobs
Job[Health Checker] -->|Ping| Targets
Job -->|Alert| PagerDuty[Pager Duty / Slack]
end
```
## 🚀 Project Structure
The project is written in **Go** for high performance and concurrency:
| Directory | Description |
| :--- | :--- |
| `cmd/api` | API entrypoint. |
| `internal/checks` | Health check logic (HTTP/TCP ping). |
| `internal/metrics` | Data aggregation. |
| `db/migrations` | SQL migrations (Postgres/Timescale). |
## 🛠️ Technologies and Optimizations
- **Language**: Go 1.23+
- **Database**: PostgreSQL + the TimescaleDB extension.
- **Libraries**: Chi (router), pgx (driver).
- **Containerization**:
  - Based on `gcr.io/distroless/static:nonroot`.
  - Static binary (`CGO_ENABLED=0`).
  - Final image < 25MB.
## 💻 How to Run
### Docker (Recommended)
```bash
# Build
docker build -t observability-core .
# Run
docker run -p 8080:8080 --env-file .env observability-core
```
The API will be available on port `8080`.
### Development
1. **Dependencies**:
- Go 1.23+
- PostgreSQL
2. **Setup**:
```bash
cp .env.example .env
go mod tidy
```
3. **Run**:
```bash
go run ./cmd/api
```
## 🔧 Dockerfile Details
The `Dockerfile` is optimized for Go:
- **Builder**: `golang:1.23-alpine`.
- **Flags**: `-ldflags="-w -s"` to strip debug symbols.
- **Runtime**: Distroless Static (no libc, no shell), giving the smallest possible attack surface.


@ -0,0 +1,78 @@
package main
import (
"context"
"fmt"
"log"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/lab/observability-core/internal/alerter"
"github.com/lab/observability-core/internal/collector"
"github.com/lab/observability-core/internal/config"
"github.com/lab/observability-core/internal/db"
)
type application struct {
config *config.Config
queries *db.Queries
}
func main() {
cfg := config.Load()
pool, err := pgxpool.New(context.Background(), cfg.DatabaseURL)
if err != nil {
log.Fatalf("Unable to connect to database: %v\n", err)
}
defer pool.Close()
queries := db.New(pool)
app := &application{
config: cfg,
queries: queries,
}
coll := collector.New(queries)
coll.Start()
alert := alerter.New(queries)
alert.Start()
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Use(middleware.Recoverer)
r.Get("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Observability Core"))
})
r.Post("/targets", app.createTargetHandler)
r.Post("/checks", app.createCheckHandler)
r.Get("/metrics", app.getMetricsHandler)
r.Get("/incidents", app.getIncidentsHandler)
log.Printf("Starting server on port %s", cfg.Port)
if err := http.ListenAndServe(fmt.Sprintf(":%s", cfg.Port), r); err != nil {
log.Fatalf("Could not start server: %s\n", err)
}
}
func (app *application) createTargetHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
func (app *application) createCheckHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
func (app *application) getMetricsHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}
func (app *application) getIncidentsHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotImplemented)
}


@ -0,0 +1,54 @@
-- name: CreateTarget :one
INSERT INTO targets (name, type, config)
VALUES ($1, $2, $3)
RETURNING *;
-- name: GetTarget :one
SELECT * FROM targets
WHERE id = $1;
-- name: CreateCheck :one
INSERT INTO checks (target_id, type, interval_seconds, timeout_seconds)
VALUES ($1, $2, $3, $4)
RETURNING *;
-- name: GetCheck :one
SELECT * FROM checks
WHERE id = $1;
-- name: ListChecks :many
SELECT * FROM checks
WHERE is_active = TRUE;
-- name: CreateMetric :one
INSERT INTO metrics (time, check_id, value, tags)
VALUES ($1, $2, $3, $4)
RETURNING *;
-- name: GetMetricsForCheck :many
SELECT * FROM metrics
WHERE check_id = $1 AND time >= $2 AND time <= $3
ORDER BY time DESC;
-- name: CreateAlertRule :one
INSERT INTO alert_rules (check_id, name, threshold, operator, for_duration, severity)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING *;
-- name: ListAlertRules :many
SELECT * FROM alert_rules;
-- name: CreateIncident :one
INSERT INTO incidents (alert_rule_id, status, start_time)
VALUES ($1, $2, $3)
RETURNING *;
-- name: GetIncident :one
SELECT * FROM incidents
WHERE id = $1;
-- name: UpdateIncidentStatus :one
UPDATE incidents
SET status = $2, updated_at = NOW()
WHERE id = $1
RETURNING *;

observability-core/go.mod Normal file

@ -0,0 +1,18 @@
module github.com/lab/observability-core
go 1.23
require (
github.com/go-chi/chi/v5 v5.2.3
github.com/jackc/pgx/v5 v5.7.1
github.com/joho/godotenv v1.5.1
)
require (
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
golang.org/x/crypto v0.27.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/text v0.21.0 // indirect
)

observability-core/go.sum Normal file

@ -0,0 +1,32 @@
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.1 h1:x7SYsPBYDkHDksogeSmZZ5xzThcTgRz++I5E+ePFUcs=
github.com/jackc/pgx/v5 v5.7.1/go.mod h1:e7O26IywZZ+naJtWWos6i6fvWK+29etgITqrqHLfoZA=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A=
golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -0,0 +1,85 @@
package alerter
import (
"context"
"log"
"time"
"github.com/lab/observability-core/internal/db"
)
type Alerter struct {
queries *db.Queries
}
func New(queries *db.Queries) *Alerter {
return &Alerter{queries: queries}
}
func (a *Alerter) Start() {
log.Println("Starting alerter")
ticker := time.NewTicker(30 * time.Second) // Run every 30 seconds
go func() {
for range ticker.C {
a.runAlerts()
}
}()
}
func (a *Alerter) runAlerts() {
rules, err := a.queries.ListAlertRules(context.Background())
if err != nil {
log.Printf("Error listing alert rules: %v", err)
return
}
for _, rule := range rules {
go a.evaluateRule(rule)
}
}
func (a *Alerter) evaluateRule(rule db.AlertRule) {
// This is a very simplified logic.
// It should query metrics within the `for_duration` and check if they meet the threshold.
// For now, we'll just log a message.
log.Printf("Evaluating alert rule: %s", rule.Name)
// A real implementation would look something like this:
/*
params := db.GetMetricsForCheckParams{
CheckID: rule.CheckID,
StartTime: time.Now().Add(-time.Duration(rule.ForDuration) * time.Second),
EndTime: time.Now(),
}
metrics, err := a.queries.GetMetricsForCheck(context.Background(), params)
if err != nil {
log.Printf("Error getting metrics for rule %s: %v", rule.Name, err)
return
}
isFailing := true
for _, metric := range metrics {
if !compare(metric.Value, rule.Threshold, rule.Operator) {
isFailing = false
break
}
}
if isFailing {
// Create or update incident
}
*/
}
func compare(value, threshold float64, operator string) bool {
switch operator {
case ">":
return value > threshold
case "<":
return value < threshold
case "=":
return value == threshold
}
return false
}


@ -0,0 +1,101 @@
package collector
import (
"context"
"encoding/json"
"log"
"net/http"
"strconv"
"time"
"github.com/jackc/pgx/v5/pgtype"
"github.com/lab/observability-core/internal/db"
)
type Collector struct {
queries *db.Queries
}
func New(queries *db.Queries) *Collector {
return &Collector{queries: queries}
}
func (c *Collector) Start() {
log.Println("Starting health collector")
ticker := time.NewTicker(10 * time.Second) // Run every 10 seconds for simplicity
go func() {
for range ticker.C {
c.runChecks()
}
}()
}
func (c *Collector) runChecks() {
checks, err := c.queries.ListChecks(context.Background())
if err != nil {
log.Printf("Error listing checks: %v", err)
return
}
for _, check := range checks {
go c.performCheck(check)
}
}
func (c *Collector) performCheck(check db.Check) {
startTime := time.Now()
var value float64
target, err := c.queries.GetTarget(context.Background(), check.TargetID)
if err != nil {
log.Printf("Error getting target for check %s: %v", check.ID, err)
return
}
switch check.Type {
case "http":
var config map[string]interface{}
if err := json.Unmarshal(target.Config, &config); err != nil {
log.Printf("Invalid config for check %s: %v", check.ID, err)
return
}
// Assume config has a "url" field
url, ok := config["url"].(string)
if !ok {
log.Printf("Invalid URL in config for check %s", check.ID)
return
}
resp, err := http.Get(url)
if err != nil {
value = 0 // Failure: request error
} else {
if resp.StatusCode >= 400 {
value = 0 // Failure: HTTP error status
} else {
value = 1 // Success
}
resp.Body.Close() // Close the body to avoid leaking connections
}
// Other check types (ping, tcp) would be implemented here
default:
log.Printf("Unknown check type: %s", check.Type)
return
}
latency := time.Since(startTime).Seconds()
tags := map[string]string{
"latency_seconds": strconv.FormatFloat(latency, 'f', -1, 64),
}
tagsJSON, _ := json.Marshal(tags)
_, err = c.queries.CreateMetric(context.Background(), db.CreateMetricParams{
Time: pgtype.Timestamptz{Time: startTime, Valid: true},
CheckID: check.ID,
Value: value,
Tags: tagsJSON,
})
if err != nil {
log.Printf("Error creating metric for check %s: %v", check.ID, err)
}
}


@ -0,0 +1,35 @@
package config
import (
"log"
"os"
"github.com/joho/godotenv"
)
type Config struct {
DatabaseURL string
Port string
}
func Load() *Config {
err := godotenv.Load()
if err != nil {
log.Println("No .env file found, using environment variables")
}
return &Config{
DatabaseURL: getEnv("DATABASE_URL", ""),
Port: getEnv("PORT", "8080"),
}
}
func getEnv(key, fallback string) string {
if value, ok := os.LookupEnv(key); ok {
return value
}
if fallback == "" {
log.Fatalf("FATAL: Environment variable %s is not set.", key)
}
return fallback
}


@ -0,0 +1,32 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.30.0
package db
import (
"context"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgconn"
)
type DBTX interface {
Exec(context.Context, string, ...interface{}) (pgconn.CommandTag, error)
Query(context.Context, string, ...interface{}) (pgx.Rows, error)
QueryRow(context.Context, string, ...interface{}) pgx.Row
}
func New(db DBTX) *Queries {
return &Queries{db: db}
}
type Queries struct {
db DBTX
}
func (q *Queries) WithTx(tx pgx.Tx) *Queries {
return &Queries{
db: tx,
}
}


@ -0,0 +1,232 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.30.0
package db
import (
"database/sql/driver"
"fmt"
"github.com/jackc/pgx/v5/pgtype"
)
type AlertSeverity string
const (
AlertSeverityInfo AlertSeverity = "info"
AlertSeverityWarning AlertSeverity = "warning"
AlertSeverityCritical AlertSeverity = "critical"
)
func (e *AlertSeverity) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = AlertSeverity(s)
case string:
*e = AlertSeverity(s)
default:
return fmt.Errorf("unsupported scan type for AlertSeverity: %T", src)
}
return nil
}
type NullAlertSeverity struct {
AlertSeverity AlertSeverity `json:"alert_severity"`
Valid bool `json:"valid"` // Valid is true if AlertSeverity is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullAlertSeverity) Scan(value interface{}) error {
if value == nil {
ns.AlertSeverity, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.AlertSeverity.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullAlertSeverity) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.AlertSeverity), nil
}
type CheckType string
const (
CheckTypeHttp CheckType = "http"
CheckTypePing CheckType = "ping"
CheckTypeTcp CheckType = "tcp"
)
func (e *CheckType) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = CheckType(s)
case string:
*e = CheckType(s)
default:
return fmt.Errorf("unsupported scan type for CheckType: %T", src)
}
return nil
}
type NullCheckType struct {
CheckType CheckType `json:"check_type"`
Valid bool `json:"valid"` // Valid is true if CheckType is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullCheckType) Scan(value interface{}) error {
if value == nil {
ns.CheckType, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.CheckType.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullCheckType) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.CheckType), nil
}
type IncidentStatus string
const (
IncidentStatusOpen IncidentStatus = "open"
IncidentStatusAcknowledged IncidentStatus = "acknowledged"
IncidentStatusResolved IncidentStatus = "resolved"
)
func (e *IncidentStatus) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = IncidentStatus(s)
case string:
*e = IncidentStatus(s)
default:
return fmt.Errorf("unsupported scan type for IncidentStatus: %T", src)
}
return nil
}
type NullIncidentStatus struct {
IncidentStatus IncidentStatus `json:"incident_status"`
Valid bool `json:"valid"` // Valid is true if IncidentStatus is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullIncidentStatus) Scan(value interface{}) error {
if value == nil {
ns.IncidentStatus, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.IncidentStatus.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullIncidentStatus) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.IncidentStatus), nil
}
type TargetType string
const (
TargetTypeService TargetType = "service"
TargetTypeInfra TargetType = "infra"
)
func (e *TargetType) Scan(src interface{}) error {
switch s := src.(type) {
case []byte:
*e = TargetType(s)
case string:
*e = TargetType(s)
default:
return fmt.Errorf("unsupported scan type for TargetType: %T", src)
}
return nil
}
type NullTargetType struct {
TargetType TargetType `json:"target_type"`
Valid bool `json:"valid"` // Valid is true if TargetType is not NULL
}
// Scan implements the Scanner interface.
func (ns *NullTargetType) Scan(value interface{}) error {
if value == nil {
ns.TargetType, ns.Valid = "", false
return nil
}
ns.Valid = true
return ns.TargetType.Scan(value)
}
// Value implements the driver Valuer interface.
func (ns NullTargetType) Value() (driver.Value, error) {
if !ns.Valid {
return nil, nil
}
return string(ns.TargetType), nil
}
type AlertRule struct {
ID pgtype.UUID `json:"id"`
CheckID pgtype.UUID `json:"check_id"`
Name string `json:"name"`
Threshold float64 `json:"threshold"`
Operator string `json:"operator"`
ForDuration int64 `json:"for_duration"`
Severity AlertSeverity `json:"severity"`
CreatedAt pgtype.Timestamptz `json:"created_at"`
UpdatedAt pgtype.Timestamptz `json:"updated_at"`
}
type Check struct {
ID pgtype.UUID `json:"id"`
TargetID pgtype.UUID `json:"target_id"`
Type CheckType `json:"type"`
IntervalSeconds int32 `json:"interval_seconds"`
TimeoutSeconds int32 `json:"timeout_seconds"`
IsActive bool `json:"is_active"`
CreatedAt pgtype.Timestamptz `json:"created_at"`
UpdatedAt pgtype.Timestamptz `json:"updated_at"`
}
type Incident struct {
ID pgtype.UUID `json:"id"`
AlertRuleID pgtype.UUID `json:"alert_rule_id"`
Status IncidentStatus `json:"status"`
StartTime pgtype.Timestamptz `json:"start_time"`
EndTime pgtype.Timestamptz `json:"end_time"`
CreatedAt pgtype.Timestamptz `json:"created_at"`
UpdatedAt pgtype.Timestamptz `json:"updated_at"`
}
type Metric struct {
Time pgtype.Timestamptz `json:"time"`
CheckID pgtype.UUID `json:"check_id"`
Value float64 `json:"value"`
Tags []byte `json:"tags"`
}
type Target struct {
ID pgtype.UUID `json:"id"`
Name string `json:"name"`
Type TargetType `json:"type"`
Config []byte `json:"config"`
CreatedAt pgtype.Timestamptz `json:"created_at"`
UpdatedAt pgtype.Timestamptz `json:"updated_at"`
}


@ -0,0 +1,360 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.30.0
// source: query.sql
package db
import (
"context"
"github.com/jackc/pgx/v5/pgtype"
)
const createAlertRule = `-- name: CreateAlertRule :one
INSERT INTO alert_rules (check_id, name, threshold, operator, for_duration, severity)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id, check_id, name, threshold, operator, for_duration, severity, created_at, updated_at
`
type CreateAlertRuleParams struct {
CheckID pgtype.UUID `json:"check_id"`
Name string `json:"name"`
Threshold float64 `json:"threshold"`
Operator string `json:"operator"`
ForDuration int64 `json:"for_duration"`
Severity AlertSeverity `json:"severity"`
}
func (q *Queries) CreateAlertRule(ctx context.Context, arg CreateAlertRuleParams) (AlertRule, error) {
row := q.db.QueryRow(ctx, createAlertRule,
arg.CheckID,
arg.Name,
arg.Threshold,
arg.Operator,
arg.ForDuration,
arg.Severity,
)
var i AlertRule
err := row.Scan(
&i.ID,
&i.CheckID,
&i.Name,
&i.Threshold,
&i.Operator,
&i.ForDuration,
&i.Severity,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const createCheck = `-- name: CreateCheck :one
INSERT INTO checks (target_id, type, interval_seconds, timeout_seconds)
VALUES ($1, $2, $3, $4)
RETURNING id, target_id, type, interval_seconds, timeout_seconds, is_active, created_at, updated_at
`
type CreateCheckParams struct {
TargetID pgtype.UUID `json:"target_id"`
Type CheckType `json:"type"`
IntervalSeconds int32 `json:"interval_seconds"`
TimeoutSeconds int32 `json:"timeout_seconds"`
}
func (q *Queries) CreateCheck(ctx context.Context, arg CreateCheckParams) (Check, error) {
row := q.db.QueryRow(ctx, createCheck,
arg.TargetID,
arg.Type,
arg.IntervalSeconds,
arg.TimeoutSeconds,
)
var i Check
err := row.Scan(
&i.ID,
&i.TargetID,
&i.Type,
&i.IntervalSeconds,
&i.TimeoutSeconds,
&i.IsActive,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const createIncident = `-- name: CreateIncident :one
INSERT INTO incidents (alert_rule_id, status, start_time)
VALUES ($1, $2, $3)
RETURNING id, alert_rule_id, status, start_time, end_time, created_at, updated_at
`
type CreateIncidentParams struct {
AlertRuleID pgtype.UUID `json:"alert_rule_id"`
Status IncidentStatus `json:"status"`
StartTime pgtype.Timestamptz `json:"start_time"`
}
func (q *Queries) CreateIncident(ctx context.Context, arg CreateIncidentParams) (Incident, error) {
row := q.db.QueryRow(ctx, createIncident, arg.AlertRuleID, arg.Status, arg.StartTime)
var i Incident
err := row.Scan(
&i.ID,
&i.AlertRuleID,
&i.Status,
&i.StartTime,
&i.EndTime,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const createMetric = `-- name: CreateMetric :one
INSERT INTO metrics (time, check_id, value, tags)
VALUES ($1, $2, $3, $4)
RETURNING time, check_id, value, tags
`
type CreateMetricParams struct {
Time pgtype.Timestamptz `json:"time"`
CheckID pgtype.UUID `json:"check_id"`
Value float64 `json:"value"`
Tags []byte `json:"tags"`
}
func (q *Queries) CreateMetric(ctx context.Context, arg CreateMetricParams) (Metric, error) {
row := q.db.QueryRow(ctx, createMetric,
arg.Time,
arg.CheckID,
arg.Value,
arg.Tags,
)
var i Metric
err := row.Scan(
&i.Time,
&i.CheckID,
&i.Value,
&i.Tags,
)
return i, err
}
const createTarget = `-- name: CreateTarget :one
INSERT INTO targets (name, type, config)
VALUES ($1, $2, $3)
RETURNING id, name, type, config, created_at, updated_at
`
type CreateTargetParams struct {
Name string `json:"name"`
Type TargetType `json:"type"`
Config []byte `json:"config"`
}
func (q *Queries) CreateTarget(ctx context.Context, arg CreateTargetParams) (Target, error) {
row := q.db.QueryRow(ctx, createTarget, arg.Name, arg.Type, arg.Config)
var i Target
err := row.Scan(
&i.ID,
&i.Name,
&i.Type,
&i.Config,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const getCheck = `-- name: GetCheck :one
SELECT id, target_id, type, interval_seconds, timeout_seconds, is_active, created_at, updated_at FROM checks
WHERE id = $1
`
func (q *Queries) GetCheck(ctx context.Context, id pgtype.UUID) (Check, error) {
row := q.db.QueryRow(ctx, getCheck, id)
var i Check
err := row.Scan(
&i.ID,
&i.TargetID,
&i.Type,
&i.IntervalSeconds,
&i.TimeoutSeconds,
&i.IsActive,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const getIncident = `-- name: GetIncident :one
SELECT id, alert_rule_id, status, start_time, end_time, created_at, updated_at FROM incidents
WHERE id = $1
`
func (q *Queries) GetIncident(ctx context.Context, id pgtype.UUID) (Incident, error) {
row := q.db.QueryRow(ctx, getIncident, id)
var i Incident
err := row.Scan(
&i.ID,
&i.AlertRuleID,
&i.Status,
&i.StartTime,
&i.EndTime,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const getMetricsForCheck = `-- name: GetMetricsForCheck :many
SELECT time, check_id, value, tags FROM metrics
WHERE check_id = $1 AND time >= $2 AND time <= $3
ORDER BY time DESC
`
type GetMetricsForCheckParams struct {
CheckID pgtype.UUID `json:"check_id"`
Time pgtype.Timestamptz `json:"time"`
Time_2 pgtype.Timestamptz `json:"time_2"`
}
func (q *Queries) GetMetricsForCheck(ctx context.Context, arg GetMetricsForCheckParams) ([]Metric, error) {
rows, err := q.db.Query(ctx, getMetricsForCheck, arg.CheckID, arg.Time, arg.Time_2)
if err != nil {
return nil, err
}
defer rows.Close()
var items []Metric
for rows.Next() {
var i Metric
if err := rows.Scan(
&i.Time,
&i.CheckID,
&i.Value,
&i.Tags,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const getTarget = `-- name: GetTarget :one
SELECT id, name, type, config, created_at, updated_at FROM targets
WHERE id = $1
`
func (q *Queries) GetTarget(ctx context.Context, id pgtype.UUID) (Target, error) {
row := q.db.QueryRow(ctx, getTarget, id)
var i Target
err := row.Scan(
&i.ID,
&i.Name,
&i.Type,
&i.Config,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}
const listAlertRules = `-- name: ListAlertRules :many
SELECT id, check_id, name, threshold, operator, for_duration, severity, created_at, updated_at FROM alert_rules
`
func (q *Queries) ListAlertRules(ctx context.Context) ([]AlertRule, error) {
rows, err := q.db.Query(ctx, listAlertRules)
if err != nil {
return nil, err
}
defer rows.Close()
var items []AlertRule
for rows.Next() {
var i AlertRule
if err := rows.Scan(
&i.ID,
&i.CheckID,
&i.Name,
&i.Threshold,
&i.Operator,
&i.ForDuration,
&i.Severity,
&i.CreatedAt,
&i.UpdatedAt,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const listChecks = `-- name: ListChecks :many
SELECT id, target_id, type, interval_seconds, timeout_seconds, is_active, created_at, updated_at FROM checks
WHERE is_active = TRUE
`
func (q *Queries) ListChecks(ctx context.Context) ([]Check, error) {
rows, err := q.db.Query(ctx, listChecks)
if err != nil {
return nil, err
}
defer rows.Close()
var items []Check
for rows.Next() {
var i Check
if err := rows.Scan(
&i.ID,
&i.TargetID,
&i.Type,
&i.IntervalSeconds,
&i.TimeoutSeconds,
&i.IsActive,
&i.CreatedAt,
&i.UpdatedAt,
); err != nil {
return nil, err
}
items = append(items, i)
}
if err := rows.Err(); err != nil {
return nil, err
}
return items, nil
}
const updateIncidentStatus = `-- name: UpdateIncidentStatus :one
UPDATE incidents
SET status = $2, updated_at = NOW()
WHERE id = $1
RETURNING id, alert_rule_id, status, start_time, end_time, created_at, updated_at
`
type UpdateIncidentStatusParams struct {
ID pgtype.UUID `json:"id"`
Status IncidentStatus `json:"status"`
}
func (q *Queries) UpdateIncidentStatus(ctx context.Context, arg UpdateIncidentStatusParams) (Incident, error) {
row := q.db.QueryRow(ctx, updateIncidentStatus, arg.ID, arg.Status)
var i Incident
err := row.Scan(
&i.ID,
&i.AlertRuleID,
&i.Status,
&i.StartTime,
&i.EndTime,
&i.CreatedAt,
&i.UpdatedAt,
)
return i, err
}


@ -0,0 +1,80 @@
-- +migrate Up
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS timescaledb;
CREATE TYPE target_type AS ENUM ('service', 'infra');
CREATE TYPE check_type AS ENUM ('http', 'ping', 'tcp');
CREATE TYPE incident_status AS ENUM ('open', 'acknowledged', 'resolved');
CREATE TYPE alert_severity AS ENUM ('info', 'warning', 'critical');
CREATE TABLE targets (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name VARCHAR(255) NOT NULL,
type target_type NOT NULL,
config JSONB NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE checks (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
target_id UUID NOT NULL REFERENCES targets(id) ON DELETE CASCADE,
type check_type NOT NULL,
interval_seconds INTEGER NOT NULL,
timeout_seconds INTEGER NOT NULL,
is_active BOOLEAN NOT NULL DEFAULT TRUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE metrics (
time TIMESTAMPTZ NOT NULL,
check_id UUID NOT NULL REFERENCES checks(id) ON DELETE CASCADE,
value DOUBLE PRECISION NOT NULL,
tags JSONB
);
-- Create a hypertable for metrics
SELECT create_hypertable('metrics', 'time');
CREATE TABLE alert_rules (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
check_id UUID NOT NULL REFERENCES checks(id) ON DELETE CASCADE,
name VARCHAR(255) NOT NULL,
threshold DOUBLE PRECISION NOT NULL,
-- e.g., '>', '<', '='
operator VARCHAR(10) NOT NULL,
-- in seconds
for_duration BIGINT NOT NULL,
severity alert_severity NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE TABLE incidents (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
alert_rule_id UUID NOT NULL REFERENCES alert_rules(id) ON DELETE CASCADE,
status incident_status NOT NULL,
start_time TIMESTAMPTZ NOT NULL,
end_time TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Indexes
CREATE INDEX ON checks (target_id);
CREATE INDEX ON metrics (check_id, time DESC);
CREATE INDEX ON alert_rules (check_id);
CREATE INDEX ON incidents (alert_rule_id, status);
-- +migrate Down
DROP TABLE IF EXISTS incidents;
DROP TABLE IF EXISTS alert_rules;
DROP TABLE IF EXISTS metrics;
DROP TABLE IF EXISTS checks;
DROP TABLE IF EXISTS targets;
DROP TYPE IF EXISTS incident_status;
DROP TYPE IF EXISTS alert_severity;
DROP TYPE IF EXISTS check_type;
DROP TYPE IF EXISTS target_type;


@ -0,0 +1,11 @@
version: "2"
sql:
- engine: "postgresql"
queries: "db/query.sql"
schema: "migrations"
gen:
go:
package: "db"
out: "internal/db"
sql_package: "pgx/v5"
emit_json_tags: true

podman_push.sh Executable file

@ -0,0 +1,41 @@
#!/bin/bash
set -e
REGISTRY="rg.fr-par.scw.cloud/yumi"
SERVICES=(
"automation-jobs-core"
"billing-finance-core"
"baas-control-plane"
"crm-core"
"dashboard"
"identity-gateway"
"observability-core"
"repo-integrations-core"
"security-governance-core"
)
for SERVICE in "${SERVICES[@]}"; do
if [ -d "$SERVICE" ]; then
if [ "$SERVICE" == "automation-jobs-core" ]; then
echo "🚀 Building automation-jobs-api..."
podman build -f Dockerfile.api -t "$REGISTRY/automation-jobs-api:latest" ./$SERVICE
echo "🚀 Pushing automation-jobs-api..."
podman push "$REGISTRY/automation-jobs-api:latest"
echo "🚀 Building automation-jobs-worker..."
podman build -f Dockerfile.worker -t "$REGISTRY/automation-jobs-worker:latest" ./$SERVICE
echo "🚀 Pushing automation-jobs-worker..."
podman push "$REGISTRY/automation-jobs-worker:latest"
else
echo "🚀 Building $SERVICE..."
podman build -t "$REGISTRY/$SERVICE:latest" ./$SERVICE
echo "🚀 Pushing $SERVICE..."
podman push "$REGISTRY/$SERVICE:latest"
fi
echo "✅ Done $SERVICE"
else
echo "⚠️ Directory $SERVICE not found!"
fi
done


@ -0,0 +1,8 @@
.git
.env
.gitignore
Dockerfile
README.md
REPO-INTEGRATIONS-CORE.md
migrations
*.log


@ -0,0 +1,5 @@
DATABASE_URL=
JWT_SECRET=
ENCRYPTION_KEY=
GITHUB_CLIENT_ID=
GITHUB_SECRET=
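The crypto package hex-decodes `ENCRYPTION_KEY`, so a 64-character hex string (32 bytes) selects AES-256-GCM. Assuming `openssl` is available, a suitable key can be generated with:

```shell
# 32 random bytes, hex-encoded (64 characters) -> AES-256-GCM key for ENCRYPTION_KEY
openssl rand -hex 32
```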

repo-integrations-core/.gitignore vendored Normal file

@ -0,0 +1,6 @@
.env
*.log
.DS_Store
repo-integrations-core
main
coverage


@ -0,0 +1,25 @@
# Dockerfile
FROM docker.io/library/golang:1.23-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Build with optimization flags
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o /app/repo-integrations-core ./cmd/api
# Use Google Distroless static image for minimal size and security
FROM gcr.io/distroless/static:nonroot
WORKDIR /app
COPY --from=builder /app/repo-integrations-core .
USER nonroot:nonroot
EXPOSE 8080
CMD ["./repo-integrations-core"]


@ -0,0 +1,30 @@
.PHONY: up down build logs
up:
docker-compose up -d
down:
docker-compose down
build:
docker-compose build
logs:
docker-compose logs -f
# SQLC
sqlc-generate:
docker run --rm -v "$(PWD):/src" -w /src sqlc/sqlc:1.25.0 generate
# Migrations
migrate-create:
@read -p "Enter migration name: " name; \
migrate create -ext sql -dir migrations -seq $$name
migrate-up:
migrate -path migrations -database "postgres://user:password@localhost:5432/repo_integrations?sslmode=disable" -verbose up
migrate-down:
migrate -path migrations -database "postgres://user:password@localhost:5432/repo_integrations?sslmode=disable" -verbose down
.PHONY: sqlc-generate migrate-create migrate-up migrate-down


@ -0,0 +1,73 @@
# REPO-INTEGRATIONS-CORE
`repo-integrations-core` is the service responsible for managing the lifecycle of integrations with version-control (Git) providers.
## 📋 Overview
This service acts as intelligent middleware between the platform and external providers (GitHub, GitLab, Bitbucket), normalizing authentication flows (OAuth) and event ingestion (webhooks).
### Architecture
```mermaid
graph LR
User[User] -->|OAuth Flow| API[Repo Integrations]
API -->|Exchange Code| GitHub[GitHub API]
API -->|Store Token| DB[(Encrypted DB)]
GitHub -->|Webhook| API
API -->|Normalize| EventBus[Event Bus / Queue]
```
## 🚀 Project Structure
The project is written in **Go** and organized as follows:
| Directory | Description |
| :--- | :--- |
| `cmd/api` | Entrypoint. |
| `internal/oauth` | Authentication flows. |
| `internal/webhooks` | Webhook processing and validation. |
| `internal/provider` | Abstractions over the supported providers (GitHub, GitLab). |
## 🛠️ Technologies and Optimizations
- **Language**: Go 1.23.
- **Database**: PostgreSQL (encrypted tokens).
- **Security**: AES-GCM encryption for access tokens.
- **Containerization**:
- Base image `gcr.io/distroless/static:nonroot`.
- Static build.
## 💻 Running the Service
### Docker (recommended)
```bash
# Build
docker build -t repo-integrations-core .
# Run
docker run -p 8080:8080 --env-file .env repo-integrations-core
```
The service listens on port `8080`.
### Development
1. **Setup**:
```bash
cp .env.example .env
go mod tidy
```
2. **GitHub App**: You will need to create a GitHub App and configure `GITHUB_CLIENT_ID` and `GITHUB_SECRET`.
3. **Run**:
```bash
go run ./cmd/api
```
## 🔧 Dockerfile Details
The `Dockerfile` follows the platform's optimization pattern:
- **Builder**: `golang:1.23-alpine`.
- **Runtime**: `distroless/static` (no shell, non-root user).
- **Build flags**: `-w -s` to strip debug info and shrink the binary.


@ -0,0 +1,49 @@
package main
import (
"context"
"log"
"net/http"
"github.com/go-chi/chi/v5"
"github.com/go-chi/chi/v5/middleware"
"github.com/jackc/pgx/v5/pgxpool"
"github.com/lab/repo-integrations-core/internal/api"
"github.com/lab/repo-integrations-core/internal/config"
"github.com/lab/repo-integrations-core/internal/db"
)
func main() {
cfg := config.Load()
pool, err := pgxpool.New(context.Background(), cfg.DatabaseURL)
if err != nil {
log.Fatalf("Unable to connect to database: %v\n", err)
}
defer pool.Close()
queries := db.New(pool)
apiHandler := api.New(cfg, queries)
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Use(middleware.Recoverer)
r.Get("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Repo Integrations Core"))
})
r.Route("/integrations/github", func(r chi.Router) {
r.Get("/connect", apiHandler.ConnectGithubHandler)
r.Get("/callback", apiHandler.ConnectGithubCallbackHandler)
})
r.Post("/webhooks/github", apiHandler.GithubWebhookHandler)
r.Get("/repositories", apiHandler.ListRepositoriesHandler)
log.Println("Starting server on :8080")
if err := http.ListenAndServe(":8080", r); err != nil {
log.Fatalf("Could not start server: %s\n", err)
}
}


@ -0,0 +1,85 @@
-- name: CreateRepoAccount :one
INSERT INTO repo_accounts (
tenant_id,
provider,
account_id,
username,
encrypted_access_token,
encrypted_refresh_token
) VALUES (
$1, $2, $3, $4, $5, $6
) RETURNING *;
-- name: GetRepoAccount :one
SELECT * FROM repo_accounts
WHERE tenant_id = $1 AND provider = $2 AND account_id = $3;
-- name: GetRepoAccountByID :one
SELECT * FROM repo_accounts
WHERE tenant_id = $1 AND id = $2;
-- name: UpdateRepoAccountTokens :one
UPDATE repo_accounts
SET
encrypted_access_token = $3,
encrypted_refresh_token = $4,
updated_at = NOW()
WHERE tenant_id = $1 AND id = $2
RETURNING *;
-- name: ListRepoAccountsByTenant :many
SELECT * FROM repo_accounts
WHERE tenant_id = $1;
-- name: CreateRepository :one
INSERT INTO repositories (
tenant_id,
repo_account_id,
external_id,
name,
url
) VALUES (
$1, $2, $3, $4, $5
) RETURNING *;
-- name: GetRepository :one
SELECT * FROM repositories
WHERE tenant_id = $1 AND id = $2;
-- name: GetRepositoryByExternalID :one
SELECT * FROM repositories
WHERE tenant_id = $1 AND external_id = $2;
-- name: ListRepositoriesByTenant :many
SELECT * FROM repositories
WHERE tenant_id = $1 AND is_active = TRUE;
-- name: CreateWebhook :one
INSERT INTO webhooks (
tenant_id,
repository_id,
external_id,
secret
) VALUES (
$1, $2, $3, $4
) RETURNING *;
-- name: GetWebhookByRepoID :one
SELECT * FROM webhooks
WHERE tenant_id = $1 AND repository_id = $2;
-- name: GetWebhookByRepoExternalID :one
SELECT w.* FROM webhooks w
JOIN repositories r ON w.repository_id = r.id
WHERE r.tenant_id = $1 AND r.external_id = $2;
-- name: CreateRepoEvent :one
INSERT INTO repo_events (
tenant_id,
repository_id,
event_type,
payload
) VALUES (
$1, $2, $3, $4
) RETURNING *;


@ -0,0 +1,25 @@
module github.com/lab/repo-integrations-core
go 1.23
require (
github.com/go-chi/chi/v5 v5.2.3
github.com/google/go-github/v53 v53.2.0
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.7.1
github.com/joho/godotenv v1.5.1
golang.org/x/oauth2 v0.24.0
)
require (
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 // indirect
github.com/cloudflare/circl v1.3.3 // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
golang.org/x/crypto v0.27.0 // indirect
golang.org/x/sync v0.10.0 // indirect
golang.org/x/sys v0.25.0 // indirect
golang.org/x/text v0.21.0 // indirect
)


@ -0,0 +1,60 @@
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8 h1:wPbRQzjjwFc0ih8puEVAOFGELsn1zoIIYdxvML7mDxA=
github.com/ProtonMail/go-crypto v0.0.0-20230217124315-7d5c6f04bbb8/go.mod h1:I0gYDMZ6Z5GRU7l58bNFSkPTFN6Yl12dsUlAZ8xy98g=
github.com/bwesterb/go-ristretto v1.2.0/go.mod h1:fUIoIZaG73pV5biE2Blr2xEzDoMj7NFEuV9ekS419A0=
github.com/cloudflare/circl v1.1.0/go.mod h1:prBCrKB9DV4poKZY1l9zBXg2QJY7mvgRvtMxxK7fi4I=
github.com/cloudflare/circl v1.3.3 h1:fE/Qz0QdIGqeWfnwq0RE0R7MI51s0M2E4Ga9kq5AEMs=
github.com/cloudflare/circl v1.3.3/go.mod h1:5XYMA4rFBvNIrhs50XuiBJ15vF2pZn4nnUKZrLbUZFA=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/go-chi/chi/v5 v5.2.3 h1:WQIt9uxdsAbgIYgid+BpYc+liqQZGMHRaUwp0JUcvdE=
github.com/go-chi/chi/v5 v5.2.3/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-github/v53 v53.2.0 h1:wvz3FyF53v4BK+AsnvCmeNhf8AkTaeh2SoYu/XUvTtI=
github.com/google/go-github/v53 v53.2.0/go.mod h1:XhFRObz+m/l+UCm9b7KSIC3lT3NWSXGt7mOsAWEloao=
github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.1 h1:x7SYsPBYDkHDksogeSmZZ5xzThcTgRz++I5E+ePFUcs=
github.com/jackc/pgx/v5 v5.7.1/go.mod h1:e7O26IywZZ+naJtWWos6i6fvWK+29etgITqrqHLfoZA=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/joho/godotenv v1.5.1 h1:7eLL/+HRGLY0ldzfGMeQkb7vMd0as4CfYvUVzLqw0N0=
github.com/joho/godotenv v1.5.1/go.mod h1:f4LDr5Voq0i2e/R5DDNOoa2zzDfwtkZa6DnEwAbqwq4=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.27.0 h1:GXm2NjJrPaiv/h1tb2UH8QfgC/hOf/+z0p6PT8o1w7A=
golang.org/x/crypto v0.27.0/go.mod h1:1Xngt8kV6Dvbssa53Ziq6Eqn0HqbZi5Z6R0ZpwQzt70=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.24.0 h1:KTBBxWqUa0ykRPLtV69rRto9TLXcqYkeswu48x/gvNE=
golang.org/x/oauth2 v0.24.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.25.0 h1:r+8e+loiHxRqhXVl6ML1nO3l1+oFoWbnlu2Ehimmi34=
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -0,0 +1,198 @@
package api
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"github.com/google/go-github/v53/github"
"github.com/google/uuid"
"github.com/jackc/pgx/v5/pgtype"
"github.com/lab/repo-integrations-core/internal/config"
"github.com/lab/repo-integrations-core/internal/crypto"
"github.com/lab/repo-integrations-core/internal/db"
"golang.org/x/oauth2"
oauth2github "golang.org/x/oauth2/github"
)
type API struct {
config *config.Config
queries *db.Queries
}
func New(config *config.Config, queries *db.Queries) *API {
return &API{
config: config,
queries: queries,
}
}
func (a *API) getGithubOAuthConfig() *oauth2.Config {
return &oauth2.Config{
ClientID: a.config.GithubClientID,
ClientSecret: a.config.GithubSecret,
Endpoint: oauth2github.Endpoint,
RedirectURL: "http://localhost:8080/integrations/github/callback",
Scopes: []string{"repo", "admin:repo_hook"},
}
}
func (a *API) ConnectGithubHandler(w http.ResponseWriter, r *http.Request) {
// For now, generate a throwaway tenant_id. In a real app, this would come from the JWT.
tenantID := uuid.New()
url := a.getGithubOAuthConfig().AuthCodeURL(tenantID.String(), oauth2.AccessTypeOffline)
http.Redirect(w, r, url, http.StatusTemporaryRedirect)
}
type githubUser struct {
ID int64 `json:"id"`
Login string `json:"login"`
}
func (a *API) ConnectGithubCallbackHandler(w http.ResponseWriter, r *http.Request) {
state := r.URL.Query().Get("state")
code := r.URL.Query().Get("code")
tenantID, err := uuid.Parse(state)
if err != nil {
http.Error(w, "Invalid state", http.StatusBadRequest)
return
}
githubOauthConfig := a.getGithubOAuthConfig()
token, err := githubOauthConfig.Exchange(context.Background(), code)
if err != nil {
http.Error(w, "Failed to exchange token", http.StatusInternalServerError)
return
}
// Get user info from GitHub
client := githubOauthConfig.Client(context.Background(), token)
resp, err := client.Get("https://api.github.com/user")
if err != nil {
http.Error(w, "Failed to get user info", http.StatusInternalServerError)
return
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
http.Error(w, "Failed to read user info", http.StatusInternalServerError)
return
}
var user githubUser
if err := json.Unmarshal(body, &user); err != nil {
http.Error(w, "Failed to parse user info", http.StatusInternalServerError)
return
}
encryptedAccessToken, err := crypto.Encrypt(token.AccessToken, a.config.EncryptionKey)
if err != nil {
http.Error(w, "Failed to encrypt token", http.StatusInternalServerError)
return
}
var encryptedRefreshToken string
if token.RefreshToken != "" {
encryptedRefreshToken, err = crypto.Encrypt(token.RefreshToken, a.config.EncryptionKey)
if err != nil {
http.Error(w, "Failed to encrypt refresh token", http.StatusInternalServerError)
return
}
}
params := db.CreateRepoAccountParams{
TenantID: pgtype.UUID{Bytes: tenantID, Valid: true},
Provider: string(db.GitProviderGithub),
AccountID: fmt.Sprintf("%d", user.ID),
Username: user.Login,
EncryptedAccessToken: []byte(encryptedAccessToken),
}
if encryptedRefreshToken != "" {
params.EncryptedRefreshToken = []byte(encryptedRefreshToken)
}
_, err = a.queries.CreateRepoAccount(context.Background(), params)
if err != nil {
http.Error(w, "Failed to save account", http.StatusInternalServerError)
return
}
w.Write([]byte("Successfully connected to GitHub!"))
}
func (a *API) GithubWebhookHandler(w http.ResponseWriter, r *http.Request) {
// TODO: Implement webhook signature validation
tenantIDStr := r.URL.Query().Get("tenant_id")
tenantID, err := uuid.Parse(tenantIDStr)
if err != nil {
http.Error(w, "Invalid tenant_id", http.StatusBadRequest)
return
}
payload, err := io.ReadAll(r.Body)
if err != nil {
http.Error(w, "Failed to read request body", http.StatusInternalServerError)
return
}
defer r.Body.Close()
event, err := github.ParseWebHook(github.WebHookType(r), payload)
if err != nil {
http.Error(w, "Could not parse webhook", http.StatusBadRequest)
return
}
var eventType db.EventType
var repoExternalID string
var repoID pgtype.UUID
switch e := event.(type) {
case *github.PushEvent:
eventType = db.EventTypePush
repoExternalID = fmt.Sprintf("%d", e.Repo.GetID())
case *github.PullRequestEvent:
eventType = db.EventTypePullRequest
repoExternalID = fmt.Sprintf("%d", e.Repo.GetID())
case *github.ReleaseEvent:
eventType = db.EventTypeRelease
repoExternalID = fmt.Sprintf("%d", e.Repo.GetID())
default:
w.WriteHeader(http.StatusOK)
return
}
repo, err := a.queries.GetRepositoryByExternalID(r.Context(), db.GetRepositoryByExternalIDParams{
TenantID: pgtype.UUID{Bytes: tenantID, Valid: true},
ExternalID: repoExternalID,
})
if err != nil {
http.Error(w, "Repository not found", http.StatusNotFound)
return
}
repoID = repo.ID
jsonPayload, err := json.Marshal(event)
if err != nil {
http.Error(w, "Failed to marshal event payload", http.StatusInternalServerError)
return
}
_, err = a.queries.CreateRepoEvent(r.Context(), db.CreateRepoEventParams{
TenantID: pgtype.UUID{Bytes: tenantID, Valid: true},
RepositoryID: repoID,
EventType: string(eventType),
Payload: jsonPayload,
})
if err != nil {
http.Error(w, "Failed to create repo event", http.StatusInternalServerError)
return
}
w.WriteHeader(http.StatusOK)
}


@ -0,0 +1,29 @@
package api
import (
"encoding/json"
"net/http"
"github.com/google/uuid"
"github.com/jackc/pgx/v5/pgtype"
)
func (a *API) ListRepositoriesHandler(w http.ResponseWriter, r *http.Request) {
tenantIDStr := r.URL.Query().Get("tenant_id")
tenantID, err := uuid.Parse(tenantIDStr)
if err != nil {
http.Error(w, "Invalid tenant_id", http.StatusBadRequest)
return
}
repos, err := a.queries.ListRepositoriesByTenant(r.Context(), pgtype.UUID{Bytes: tenantID, Valid: true})
if err != nil {
http.Error(w, "Failed to list repositories", http.StatusInternalServerError)
return
}
w.Header().Set("Content-Type", "application/json")
if err := json.NewEncoder(w).Encode(repos); err != nil {
http.Error(w, "Failed to encode response", http.StatusInternalServerError)
}
}


@ -0,0 +1,41 @@
package config
import (
"log"
"os"
"github.com/joho/godotenv"
)
type Config struct {
DatabaseURL string
JWTSecret string
EncryptionKey string
GithubClientID string
GithubSecret string
}
func Load() *Config {
err := godotenv.Load()
if err != nil {
log.Println("No .env file found, using environment variables")
}
return &Config{
DatabaseURL: getEnv("DATABASE_URL", ""),
JWTSecret: getEnv("JWT_SECRET", ""),
EncryptionKey: getEnv("ENCRYPTION_KEY", ""),
GithubClientID: getEnv("GITHUB_CLIENT_ID", ""),
GithubSecret: getEnv("GITHUB_SECRET", ""),
}
}
func getEnv(key, fallback string) string {
if value, ok := os.LookupEnv(key); ok {
return value
}
if fallback == "" {
log.Fatalf("FATAL: Environment variable %s is not set.", key)
}
return fallback
}


@ -0,0 +1,63 @@
package crypto
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"encoding/hex"
"fmt"
"io"
)
// Encrypt encrypts data using AES-GCM.
func Encrypt(stringToEncrypt string, keyString string) (string, error) {
key, err := hex.DecodeString(keyString)
if err != nil {
return "", fmt.Errorf("invalid hex key: %w", err)
}
plaintext := []byte(stringToEncrypt)
block, err := aes.NewCipher(key)
if err != nil {
return "", err
}
aesGCM, err := cipher.NewGCM(block)
if err != nil {
return "", err
}
nonce := make([]byte, aesGCM.NonceSize())
if _, err = io.ReadFull(rand.Reader, nonce); err != nil {
return "", err
}
ciphertext := aesGCM.Seal(nonce, nonce, plaintext, nil)
return fmt.Sprintf("%x", ciphertext), nil
}
// Decrypt decrypts data using AES-GCM.
func Decrypt(encryptedString string, keyString string) (string, error) {
key, err := hex.DecodeString(keyString)
if err != nil {
return "", fmt.Errorf("invalid hex key: %w", err)
}
enc, err := hex.DecodeString(encryptedString)
if err != nil {
return "", fmt.Errorf("invalid hex ciphertext: %w", err)
}
block, err := aes.NewCipher(key)
if err != nil {
return "", err
}
aesGCM, err := cipher.NewGCM(block)
if err != nil {
return "", err
}
nonceSize := aesGCM.NonceSize()
if len(enc) < nonceSize {
return "", fmt.Errorf("ciphertext too short")
}
nonce, ciphertext := enc[:nonceSize], enc[nonceSize:]
plaintext, err := aesGCM.Open(nil, nonce, ciphertext, nil)
if err != nil {
return "", err
}
return string(plaintext), nil
}
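The Encrypt/Decrypt pair above prepends the GCM nonce to the ciphertext. A minimal standalone round-trip sketch of the same layout, with the hex encoding layer omitted (the `seal`/`open` names are illustrative, not the package's API):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// seal mirrors Encrypt: AES-GCM with the nonce prepended to the ciphertext.
func seal(plaintext, key []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// open mirrors Decrypt: split off the nonce, then authenticate and decrypt.
func open(enc, key []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(enc) < gcm.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, ct := enc[:gcm.NonceSize()], enc[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := make([]byte, 32) // 32 bytes -> AES-256
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	ct, err := seal([]byte("gho_example_token"), key)
	if err != nil {
		panic(err)
	}
	pt, err := open(ct, key)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(pt)) // gho_example_token
}
```

Because GCM authenticates the ciphertext, tampering with any stored byte makes the decrypt step fail rather than return garbage.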


@ -0,0 +1,32 @@
// Code generated by sqlc. DO NOT EDIT.
// versions:
// sqlc v1.30.0
package db
import (
"context"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgconn"
)
type DBTX interface {
Exec(context.Context, string, ...interface{}) (pgconn.CommandTag, error)
Query(context.Context, string, ...interface{}) (pgx.Rows, error)
QueryRow(context.Context, string, ...interface{}) pgx.Row
}
func New(db DBTX) *Queries {
return &Queries{db: db}
}
type Queries struct {
db DBTX
}
func (q *Queries) WithTx(tx pgx.Tx) *Queries {
return &Queries{
db: tx,
}
}
