Multi-Tenant Club Work Manager
TL;DR
Quick Summary: Build a greenfield multi-tenant SaaS application for clubs (tennis, cycling, etc.) to manage work items (tasks + time-slot shifts) across their members. Credential-based multi-tenancy with PostgreSQL RLS, Keycloak auth, .NET 10 backend, Next.js frontend, deployed to Kubernetes.
Deliverables:
- .NET 10 REST API with multi-tenant data isolation (RLS + Finbuckle)
- Next.js frontend with Tailwind + shadcn/ui, club-switcher, task & shift management
- Keycloak integration for authentication with multi-club membership
- PostgreSQL schema with RLS policies and EF Core migrations
- Docker Compose for local development (hot reload, Keycloak, PostgreSQL)
- Kubernetes manifests (Kustomize base + dev overlay)
- Gitea CI pipeline (`.gitea/workflows/ci.yml`) for backend/frontend/infrastructure validation
- Gitea CD bootstrap + deployment pipelines (`.gitea/workflows/cd-bootstrap.yml`, `.gitea/workflows/cd-deploy.yml`)
- Comprehensive TDD test suite (xUnit + Testcontainers, Vitest + RTL, Playwright E2E)
- Seed data for development (2 clubs, 5 users, sample tasks + shifts)
Estimated Effort: Large
Parallel Execution: YES — 7 waves
Critical Path: Monorepo scaffold → Auth pipeline → Multi-tenancy layer → Domain model → API endpoints → Frontend → K8s manifests
Context
Original Request
Build a multi-tenant internet application for managing work items over several members of a club (e.g. Tennis club, cycling club). Backend in .NET + PostgreSQL, frontend in Next.js + Bun, deployed to Kubernetes with local Docker Compose for development.
Interview Summary
Key Discussions:
- Tenant identification: Credential-based (not subdomain). User logs in, JWT claims identify club memberships. Active club selected via X-Tenant-Id header.
- Work items: Hybrid — task-based (5-state: Open → Assigned → In Progress → Review → Done) AND time-slot shift scheduling with member sign-up.
- Auth: Keycloak (self-hosted), single realm, user attributes for club membership, custom protocol mapper for JWT claims.
- Multi-club: Users can belong to multiple clubs with different roles (Admin/Manager/Member/Viewer per club).
- Scale: MVP — 1-5 clubs, <100 users.
- Testing: TDD approach (tests first).
- Notifications: None for MVP.
- CI extension: Add Gitea-hosted CI pipeline for this repository.
- Pipeline scope (updated): CI + CD. CI handles build/test/lint/manifest validation; CD bootstrap publishes multi-arch images; CD deploy applies Kubernetes manifests.
Research Findings:
- Finbuckle.MultiTenant: ClaimStrategy + HeaderStrategy fallback is production-proven (fullstackhero/dotnet-starter-kit pattern).
- Middleware order: `UseAuthentication()` → `UseMultiTenant()` → `UseAuthorization()` (single Keycloak realm; ClaimStrategy needs claims).
- RLS safety: Use `SET LOCAL` (transaction-scoped), not `SET`, to prevent stale tenant context in pooled connections.
- EF Core + RLS: Do NOT use Finbuckle's `IsMultiTenant()` shadow property — use explicit `TenantId` column + RLS only (avoids double-filtering).
- Bun: P99 latency issues with Next.js SSR (340ms vs 120ms Node.js). Use Bun for dev/package management, Node.js for production.
- .NET 10: Built-in `AddOpenApi()` (no Swashbuckle). 49% faster than .NET 8.
- Keycloak: Use `--import-realm` for dev bootstrapping. User attributes + custom protocol mapper for club claims.
- Kustomize: Simpler than Helm for MVP. Base + overlays structure.
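The `SET LOCAL` pattern from the findings can be sketched as follows. This is a minimal illustration, assuming a `work_items` table with a `tenant_id` column and a custom session setting named `app.tenant_id` (all names illustrative, not final schema):

```sql
-- Hypothetical RLS setup for a tenant-scoped table (names illustrative):
ALTER TABLE work_items ENABLE ROW LEVEL SECURITY;
ALTER TABLE work_items FORCE ROW LEVEL SECURITY;  -- enforce even for the table owner

CREATE POLICY tenant_isolation ON work_items
    USING (tenant_id = current_setting('app.tenant_id', true));

-- Per request, inside a transaction, so the setting dies with the transaction
-- and cannot leak to the next request reusing the pooled connection:
BEGIN;
SET LOCAL app.tenant_id = 'club-1-uuid';
SELECT * FROM work_items;  -- only club-1 rows are visible
COMMIT;
```

The second argument to `current_setting(..., true)` makes a missing setting return NULL instead of erroring, so a connection without tenant context sees zero rows rather than failing, which is exactly the "RLS blocks data access at DB level without tenant context" behavior required below.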
Metis Review
Identified Gaps (all resolved):
- Club data model: Club is both tenant container AND domain entity (name, sport type). Provisioned via Keycloak admin + seed script.
- User-Club DB relationship: `UserClubMembership` junction table needed for assignment queries. Synced from Keycloak on first login.
- Task workflow rules: Admin/Manager assigns. Assignee transitions through states. Only backward: Review → In Progress. Concurrency via EF Core `RowVersion`.
- Shift semantics: First-come-first-served sign-up. Cancel anytime before start. No waitlist. Optimistic concurrency for last-slot race.
- Seed data: 2 clubs, 5 users (1 admin, 1 manager, 2 members, 1 viewer), sample tasks + shifts.
- API style: REST + OpenAPI (built-in). URL: `/api/tasks` with tenant from header, not path.
- Data fetching: TanStack Query (client) + Server Components (initial load).
- First login UX: 1 club → auto-select. Multiple → picker. Zero → "Contact admin".
- RLS migration safety: `bypass_rls_policy` on all RLS-enabled tables for migrations.
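The first-login rule above is small enough to sketch directly. A hypothetical helper, assuming the decoded `clubs` claim maps club IDs to role names (names illustrative):

```typescript
// Hypothetical first-login routing rule based on the JWT `clubs` claim.
type ClubsClaim = Record<string, string>; // clubId -> role, e.g. {"club-1-uuid": "admin"}

type FirstLoginAction =
  | { kind: "auto-select"; clubId: string } // exactly one club: skip the picker
  | { kind: "picker" }                      // multiple clubs: show club picker
  | { kind: "contact-admin" };              // no memberships: dead-end screen

function firstLoginAction(clubs: ClubsClaim): FirstLoginAction {
  const ids = Object.keys(clubs);
  if (ids.length === 0) return { kind: "contact-admin" };
  if (ids.length === 1) return { kind: "auto-select", clubId: ids[0] };
  return { kind: "picker" };
}
```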
Work Objectives
Core Objective
Deliver a working multi-tenant club work management application where authenticated members can manage tasks and sign up for shifts within their club context, with full data isolation between tenants.
Concrete Deliverables
- `/backend/WorkClub.sln` — .NET 10 solution (Api, Application, Domain, Infrastructure, Tests.Unit, Tests.Integration)
- `/frontend/` — Next.js 15 App Router project with Tailwind + shadcn/ui
- `/docker-compose.yml` — Local dev stack (PostgreSQL, Keycloak, .NET API, Next.js)
- `/infra/k8s/` — Kustomize manifests (base + dev overlay)
- `/.gitea/workflows/ci.yml` — Gitea Actions CI pipeline (parallel backend/frontend/infra checks)
- `/.gitea/workflows/cd-bootstrap.yml` — Gitea Actions CD bootstrap workflow (manual multi-arch image publish)
- `/.gitea/workflows/cd-deploy.yml` — Gitea Actions CD deployment workflow (Kubernetes deploy with Kustomize overlay)
- PostgreSQL database with RLS policies on all tenant-scoped tables
- Keycloak realm configuration with test users and club memberships
- Seed data for development
Definition of Done
- `docker compose up` starts all 4 services healthy within 90s
- Keycloak login returns JWT with club claims
- API enforces tenant isolation (cross-tenant requests return 403)
- RLS blocks data access at DB level without tenant context
- Tasks follow 5-state workflow with invalid transitions rejected (422)
- Shifts support sign-up with capacity enforcement (409 when full)
- Frontend shows club-switcher, task list, shift list
- `dotnet test` passes all unit + integration tests
- `bun run test` passes all frontend tests
- `kustomize build infra/k8s/overlays/dev` produces valid YAML
- Gitea Actions CI passes on push/PR with backend + frontend + infra jobs
Must Have
- Credential-based multi-tenancy (JWT claims + X-Tenant-Id header)
- PostgreSQL RLS with `SET LOCAL` for connection pooling safety
- Finbuckle.MultiTenant with ClaimStrategy + HeaderStrategy fallback
- Backend validates JWT `clubs` claim against X-Tenant-Id → 403 on mismatch
- 4-role authorization per club (Admin, Manager, Member, Viewer)
- 5-state task workflow with state machine validation
- Time-slot shift scheduling with capacity and first-come-first-served sign-up
- Club-switcher in frontend for multi-club users
- EF Core concurrency tokens on Task and Shift entities
- Seed data: 2 clubs, 5 users, sample tasks + shifts
- Docker Compose with hot reload for .NET and Next.js
- Kubernetes Kustomize manifests (base + dev overlay)
- TDD: all backend features have tests BEFORE implementation
- Gitea-hosted CI pipeline for this repository (`code.hal9000.damnserver.com/MasterMito/work-club-manager`)
- CI jobs run in parallel (backend, frontend, infrastructure validation)
- Gitea-hosted CD bootstrap workflow for private registry image publication (`workclub-api`, `workclub-frontend`)
- Gitea-hosted CD deployment workflow for Kubernetes dev namespace rollout (`workclub-dev`)
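The tenant validation must-have (JWT `clubs` claim checked against the `X-Tenant-Id` header) reduces to a small decision function. A hypothetical sketch, shown in TypeScript for brevity even though the real check lives in the .NET middleware (all names illustrative):

```typescript
// Hypothetical sketch of the clubs-claim vs X-Tenant-Id check.
type ClubsClaim = Record<string, string>; // clubId -> role from the JWT `clubs` claim

type TenantResolution =
  | { status: 200; tenantId: string; role: string }
  | { status: 400 | 403 };

function resolveTenant(clubs: ClubsClaim, headerTenantId: string | null): TenantResolution {
  if (!headerTenantId) return { status: 400 };   // no active club selected
  const role = clubs[headerTenantId];
  if (!role) return { status: 403 };             // user is not a member of that club
  return { status: 200, tenantId: headerTenantId, role };
}
```

On a 200, the resolved `tenantId` is what the request pipeline would hand to the RLS session setting; a mismatch never reaches the database.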
Must NOT Have (Guardrails)
- No CQRS/MediatR — Direct service injection from controllers/endpoints
- No generic repository pattern — Use DbContext directly (EF Core IS the unit of work)
- No event sourcing — Enum column + domain validation for task states
- No Finbuckle `IsMultiTenant()` shadow property — Explicit `TenantId` column + RLS only
- No Swashbuckle — Use .NET 10 built-in `AddOpenApi()`
- No abstraction over shadcn/ui — Use components as-is
- No custom HTTP client wrapper — Use `fetch` with auth headers
- No database-per-tenant or schema-per-tenant — Single DB, shared schema, RLS
- No social login, self-registration, custom Keycloak themes, or 2FA
- No recurring shifts, waitlists, swap requests, or approval flows
- No custom roles or field-level permissions — 4 fixed roles
- No notifications (email/push) — Users check the app
- No API versioning, rate limiting, or HATEOAS — MVP scope
- No in-memory database for tests — Real PostgreSQL via Testcontainers
- No billing, subscriptions, or analytics dashboard
- No mobile app
- No single-step build-and-deploy coupling — keep image bootstrap and cluster deployment as separate workflows
Verification Strategy
ZERO HUMAN INTERVENTION — ALL verification is agent-executed. No exceptions. Acceptance criteria requiring "user manually tests/confirms" are FORBIDDEN.
Test Decision
- Infrastructure exists: NO (greenfield)
- Automated tests: YES (TDD — tests first)
- Backend framework: xUnit + Testcontainers (postgres:16-alpine) + WebApplicationFactory
- Frontend framework: Vitest + React Testing Library
- E2E framework: Playwright
- If TDD: Each task follows RED (failing test) → GREEN (minimal impl) → REFACTOR
QA Policy
Every task MUST include agent-executed QA scenarios (see TODO template below).
Evidence saved to .sisyphus/evidence/task-{N}-{scenario-slug}.{ext}.
- Frontend/UI: Use Playwright (playwright skill) — Navigate, interact, assert DOM, screenshot
- TUI/CLI: Use interactive_bash (tmux) — Run command, send keystrokes, validate output
- API/Backend: Use Bash (curl) — Send requests, assert status + response fields
- Library/Module: Use Bash (dotnet test / bun test) — Run tests, compare output
- Infrastructure: Use Bash (docker compose / kustomize build) — Validate configs
Execution Strategy
Parallel Execution Waves
Wave 1 (Start Immediately — scaffolding + infrastructure):
├── Task 1: Monorepo structure + git init + solution scaffold [quick]
├── Task 2: Docker Compose (PostgreSQL + Keycloak) [quick]
├── Task 3: Keycloak realm configuration + test users [unspecified-high]
├── Task 4: Domain entities + value objects [quick]
├── Task 5: Next.js project scaffold + Tailwind + shadcn/ui [quick]
└── Task 6: Kustomize base manifests [quick]
Wave 2 (After Wave 1 — data layer + auth):
├── Task 7: PostgreSQL schema + EF Core migrations + RLS policies (depends: 1, 2, 4) [deep]
├── Task 8: Finbuckle multi-tenant middleware + tenant validation (depends: 1, 3) [deep]
├── Task 9: Keycloak JWT auth in .NET + role-based authorization (depends: 1, 3) [deep]
├── Task 10: NextAuth.js Keycloak integration (depends: 3, 5) [unspecified-high]
├── Task 11: Seed data script (depends: 2, 3, 4) [quick]
└── Task 12: Backend test infrastructure (xUnit + Testcontainers + WebApplicationFactory) (depends: 1) [unspecified-high]
Wave 3 (After Wave 2 — core API):
├── Task 13: RLS integration tests — multi-tenant isolation proof (depends: 7, 8, 12) [deep]
├── Task 14: Task CRUD API endpoints + 5-state workflow (depends: 7, 8, 9) [deep]
├── Task 15: Shift CRUD API + sign-up/cancel endpoints (depends: 7, 8, 9) [deep]
├── Task 16: Club + Member API endpoints (depends: 7, 8, 9) [unspecified-high]
└── Task 17: Frontend test infrastructure (Vitest + RTL + Playwright) (depends: 5) [quick]
Wave 4 (After Wave 3 — frontend pages):
├── Task 18: App layout + club-switcher + auth guard (depends: 10, 16, 17) [visual-engineering]
├── Task 19: Task list + task detail + status transitions UI (depends: 14, 17, 18) [visual-engineering]
├── Task 20: Shift list + shift detail + sign-up UI (depends: 15, 17, 18) [visual-engineering]
└── Task 21: Login page + first-login club picker (depends: 10, 17) [visual-engineering]
Wave 5 (After Wave 4 — polish + Docker):
├── Task 22: Docker Compose full stack (backend + frontend + hot reload) (depends: 14, 15, 18) [unspecified-high]
├── Task 23: Backend Dockerfiles (dev + prod multi-stage) (depends: 14) [quick]
├── Task 24: Frontend Dockerfiles (dev + prod standalone) (depends: 18) [quick]
└── Task 25: Kustomize dev overlay + resource limits + health checks (depends: 6, 23, 24) [unspecified-high]
Wave 6 (After Wave 5 — E2E + CI/CD integration):
├── Task 26: Playwright E2E tests — auth flow + club switching (depends: 21, 22) [unspecified-high]
├── Task 27: Playwright E2E tests — task management flow (depends: 19, 22) [unspecified-high]
├── Task 28: Playwright E2E tests — shift sign-up flow (depends: 20, 22) [unspecified-high]
└── Task 29: Gitea CI/CD workflows (CI checks + image bootstrap + Kubernetes deploy) (depends: 12, 17, 23, 24, 25) [unspecified-high]
Wave FINAL (After ALL tasks — independent review, 4 parallel):
├── Task F1: Plan compliance audit (oracle)
├── Task F2: Code quality review (unspecified-high)
├── Task F3: Real manual QA (unspecified-high)
└── Task F4: Scope fidelity check (deep)
Critical Path: Task 1 → Task 5 → Task 17 → Task 18 → Task 24 → Task 25 → Task 29 → F1-F4
Parallel Speedup: ~68% faster than sequential
Max Concurrent: 6 (Wave 1)
Dependency Matrix
| Task | Depends On | Blocks | Wave |
|---|---|---|---|
| 1 | — | 7, 8, 9, 12 | 1 |
| 2 | — | 7, 11, 22 | 1 |
| 3 | — | 8, 9, 10, 11 | 1 |
| 4 | — | 7, 11 | 1 |
| 5 | — | 10, 17 | 1 |
| 6 | — | 25 | 1 |
| 7 | 1, 2, 4 | 13, 14, 15, 16 | 2 |
| 8 | 1, 3 | 13, 14, 15, 16 | 2 |
| 9 | 1, 3 | 14, 15, 16 | 2 |
| 10 | 3, 5 | 18, 21 | 2 |
| 11 | 2, 3, 4 | 22 | 2 |
| 12 | 1 | 13 | 2 |
| 13 | 7, 8, 12 | 14, 15 | 3 |
| 14 | 7, 8, 9 | 19, 22, 23 | 3 |
| 15 | 7, 8, 9 | 20, 22 | 3 |
| 16 | 7, 8, 9 | 18 | 3 |
| 17 | 5 | 18, 19, 20, 21 | 3 |
| 18 | 10, 16, 17 | 19, 20, 22 | 4 |
| 19 | 14, 17, 18 | 27 | 4 |
| 20 | 15, 17, 18 | 28 | 4 |
| 21 | 10, 17 | 26 | 4 |
| 22 | 14, 15, 18 | 26, 27, 28 | 5 |
| 23 | 14 | 25 | 5 |
| 24 | 18 | 25 | 5 |
| 25 | 6, 23, 24 | — | 5 |
| 26 | 21, 22 | — | 6 |
| 27 | 19, 22 | — | 6 |
| 28 | 20, 22 | — | 6 |
| 29 | 12, 17, 23, 24, 25 | F1-F4 | 6 |
| F1-F4 | ALL | — | FINAL |
Agent Dispatch Summary
- Wave 1 (6 tasks): T1 → `quick`, T2 → `quick`, T3 → `unspecified-high`, T4 → `quick`, T5 → `quick`, T6 → `quick`
- Wave 2 (6 tasks): T7 → `deep`, T8 → `deep`, T9 → `deep`, T10 → `unspecified-high`, T11 → `quick`, T12 → `unspecified-high`
- Wave 3 (5 tasks): T13 → `deep`, T14 → `deep`, T15 → `deep`, T16 → `unspecified-high`, T17 → `quick`
- Wave 4 (4 tasks): T18 → `visual-engineering`, T19 → `visual-engineering`, T20 → `visual-engineering`, T21 → `visual-engineering`
- Wave 5 (4 tasks): T22 → `unspecified-high`, T23 → `quick`, T24 → `quick`, T25 → `unspecified-high`
- Wave 6 (4 tasks): T26 → `unspecified-high`, T27 → `unspecified-high`, T28 → `unspecified-high`, T29 → `unspecified-high`
- FINAL (4 tasks): F1 → `oracle`, F2 → `unspecified-high`, F3 → `unspecified-high`, F4 → `deep`
TODOs
- 1. Monorepo Structure + Git Repository + .NET Solution Scaffold
What to do:
- Initialize git repository: `git init` at repo root
- Create monorepo directory structure: `/backend/`, `/frontend/`, `/infra/`
- Initialize .NET 10 solution at `/backend/WorkClub.sln`
- Create projects: `WorkClub.Api` (web), `WorkClub.Application` (classlib), `WorkClub.Domain` (classlib), `WorkClub.Infrastructure` (classlib), `WorkClub.Tests.Unit` (xunit), `WorkClub.Tests.Integration` (xunit)
- Configure project references: Api → Application → Domain ← Infrastructure. Tests → all.
- Add `.gitignore` (combined dotnet + node + IDE entries), `.editorconfig` (C# conventions), `global.json` (pin .NET 10 SDK)
- Create initial commit with repo structure and configuration files
- Add NuGet packages to Api: `Npgsql.EntityFrameworkCore.PostgreSQL`, `Finbuckle.MultiTenant`, `Microsoft.AspNetCore.Authentication.JwtBearer`
- Add NuGet packages to Infrastructure: `Npgsql.EntityFrameworkCore.PostgreSQL`, `Microsoft.EntityFrameworkCore.Design`
- Add NuGet packages to Tests.Integration: `Testcontainers.PostgreSql`, `Microsoft.AspNetCore.Mvc.Testing`
- Verify `dotnet build` compiles with zero errors
Must NOT do:
- Do NOT add MediatR, AutoMapper, or FluentValidation
- Do NOT create generic repository interfaces
- Do NOT add Swashbuckle — will use built-in `AddOpenApi()` later
Recommended Agent Profile:
- Category: `quick` — Reason: Straightforward scaffolding with well-known dotnet CLI commands
- Skills: []
- No special skills needed — standard dotnet CLI operations
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 2, 3, 4, 5, 6)
- Blocks: Tasks 7, 8, 9, 12
- Blocked By: None
References:
Pattern References:
- None (greenfield — no existing code)
External References:
- .NET 10 SDK: `dotnet new sln`, `dotnet new webapi`, `dotnet new classlib`, `dotnet new xunit`
- Clean Architecture layout: Api → Application → Domain ← Infrastructure (standard .NET layering)
- `global.json` format: `{ "sdk": { "version": "10.0.100", "rollForward": "latestFeature" } }`
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Git repository initialized
- Tool: Bash
- Preconditions: Repo root exists
- Steps:
  1. Run `git rev-parse --is-inside-work-tree`
  2. Assert output is "true"
  3. Run `git log --oneline -1`
  4. Assert initial commit exists
  5. Run `cat .gitignore`
  6. Assert contains "bin/", "obj/", "node_modules/", ".next/"
- Expected Result: Git repo initialized with comprehensive .gitignore
- Failure Indicators: Not a git repo, missing .gitignore entries
- Evidence: .sisyphus/evidence/task-1-git-init.txt

Scenario: Solution builds successfully
- Tool: Bash
- Preconditions: .NET 10 SDK installed
- Steps:
  1. Run `dotnet build backend/WorkClub.sln --configuration Release`
  2. Check exit code is 0
  3. Verify all 6 projects are listed in build output
- Expected Result: Exit code 0, output contains "6 succeeded, 0 failed"
- Failure Indicators: Any "error" or non-zero exit code
- Evidence: .sisyphus/evidence/task-1-solution-build.txt

Scenario: Project references are correct
- Tool: Bash
- Preconditions: Solution exists
- Steps:
  1. Run `dotnet list backend/src/WorkClub.Api/WorkClub.Api.csproj reference`
  2. Assert output contains "WorkClub.Application" and "WorkClub.Infrastructure"
  3. Run `dotnet list backend/src/WorkClub.Application/WorkClub.Application.csproj reference`
  4. Assert output contains "WorkClub.Domain"
  5. Run `dotnet list backend/src/WorkClub.Infrastructure/WorkClub.Infrastructure.csproj reference`
  6. Assert output contains "WorkClub.Domain"
- Expected Result: All project references match Clean Architecture dependency graph
- Failure Indicators: Missing references or circular dependencies
- Evidence: .sisyphus/evidence/task-1-project-refs.txt

Commit: YES
- Message: `chore(scaffold): initialize git repo and monorepo with .NET solution`
- Files: `backend/**/*.csproj`, `backend/WorkClub.sln`, `.gitignore`, `.editorconfig`, `global.json`
- Pre-commit: `dotnet build backend/WorkClub.sln`
- 2. Docker Compose — PostgreSQL + Keycloak
What to do:
- Create `/docker-compose.yml` at repo root with services:
  - `postgres`: PostgreSQL 16 Alpine, port 5432, volume `postgres-data`, healthcheck via `pg_isready`
  - `keycloak`: Keycloak 26.x (`quay.io/keycloak/keycloak:26.1`), port 8080, depends on postgres, `--import-realm` flag with volume mount to `/opt/keycloak/data/import`
- Configure PostgreSQL: database `workclub`, user `app`, password `devpass` (dev only)
- Configure Keycloak: admin user `admin`/`admin`, PostgreSQL as Keycloak's own database (separate DB `keycloak`)
- Create placeholder realm file at `/infra/keycloak/realm-export.json` (will be populated in Task 3)
- Add `app-network` bridge network connecting all services
- Verify both services start and pass health checks
Must NOT do:
- Do NOT add backend or frontend services yet (Task 22)
- Do NOT use deprecated `KEYCLOAK_IMPORT` env var — use `--import-realm` CLI flag
Recommended Agent Profile:
- Category: `quick` — Reason: Standard Docker Compose YAML, well-documented services
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 3, 4, 5, 6)
- Blocks: Tasks 7, 11, 22
- Blocked By: None
References:
External References:
- Keycloak Docker: `quay.io/keycloak/keycloak:26.1` with `start-dev` command for local
- Keycloak realm import: `--import-realm` flag + volume `/opt/keycloak/data/import`
- PostgreSQL healthcheck: `pg_isready -U app -d workclub`
- Keycloak healthcheck: `curl -sf http://localhost:8080/health/ready || exit 1`
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: PostgreSQL starts and accepts connections
- Tool: Bash
- Preconditions: Docker installed, no conflicting port 5432
- Steps:
  1. Run `docker compose up -d postgres`
  2. Wait up to 30s: `docker compose exec postgres pg_isready -U app -d workclub`
  3. Assert exit code 0
  4. Run `docker compose exec postgres psql -U app -d workclub -c "SELECT 1"`
  5. Assert output contains "1"
- Expected Result: PostgreSQL healthy and accepting queries
- Failure Indicators: pg_isready fails, connection refused
- Evidence: .sisyphus/evidence/task-2-postgres-health.txt

Scenario: Keycloak starts and serves OIDC discovery
- Tool: Bash
- Preconditions: docker compose up -d (postgres + keycloak)
- Steps:
  1. Run `docker compose up -d`
  2. Wait up to 120s for Keycloak health: poll `curl -sf http://localhost:8080/health/ready`
  3. Curl `http://localhost:8080/realms/master/.well-known/openid-configuration`
  4. Assert HTTP 200 and JSON contains "issuer" field
- Expected Result: Keycloak healthy, OIDC endpoint responding
- Failure Indicators: Keycloak fails to start, OIDC endpoint returns non-200
- Evidence: .sisyphus/evidence/task-2-keycloak-health.txt

Commit: YES
- Message: `infra(docker): add Docker Compose with PostgreSQL and Keycloak`
- Files: `docker-compose.yml`, `infra/keycloak/realm-export.json`
- Pre-commit: `docker compose config`
- 3. Keycloak Realm Configuration + Test Users
What to do:
- Create Keycloak realm `workclub` with:
  - Client `workclub-api` (confidential, service account enabled) for backend
  - Client `workclub-app` (public, PKCE) for frontend — redirect URIs: `http://localhost:3000/*`
  - Custom user attribute `clubs` (JSON string: `{"club-1-uuid": "admin", "club-2-uuid": "member"}`)
  - Custom protocol mapper `club-membership` (type: `Script Mapper` or `User Attribute` → JWT claim `clubs`)
- Create 5 test users:
  - `admin@test.com`/`testpass123` — Admin of club-1, Member of club-2
  - `manager@test.com`/`testpass123` — Manager of club-1
  - `member1@test.com`/`testpass123` — Member of club-1, Member of club-2
  - `member2@test.com`/`testpass123` — Member of club-1
  - `viewer@test.com`/`testpass123` — Viewer of club-1
- Export realm to `/infra/keycloak/realm-export.json` including users (with hashed passwords)
- Test: obtain token for admin@test.com, decode JWT, verify `clubs` claim is present
Must NOT do:
- Do NOT add social login providers
- Do NOT add self-registration flow
- Do NOT customize Keycloak login theme
- Do NOT use Keycloak Organizations feature (too new for MVP)
Recommended Agent Profile:
- Category: `unspecified-high` — Reason: Keycloak realm configuration requires understanding OIDC flows, custom mappers, and realm export format
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 2, 4, 5, 6)
- Blocks: Tasks 8, 9, 10, 11
- Blocked By: None (creates realm export JSON file; Docker Compose from Task 2 will use it, but the file can be created independently)
References:
External References:
- Keycloak Admin REST API: `https://www.keycloak.org/docs-api/latest/rest-api/` — for programmatic realm creation
- Keycloak realm export format: JSON with `realm`, `clients`, `users`, `protocolMappers` arrays
- Custom protocol mapper: `oidc-usermodel-attribute-mapper` type with `claim.name: clubs`, `user.attribute: clubs`
- PKCE client config: `publicClient: true`, `standardFlowEnabled: true`, `directAccessGrantsEnabled: true` (for dev token testing)
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Token contains club membership claims
- Tool: Bash
- Preconditions: docker compose up -d (with realm imported)
- Steps:
  1. Obtain token: `curl -sf -X POST http://localhost:8080/realms/workclub/protocol/openid-connect/token -d "client_id=workclub-app&username=admin@test.com&password=testpass123&grant_type=password"`
  2. Extract access_token from JSON response
  3. Decode JWT payload: `echo $TOKEN | cut -d. -f2 | base64 -d 2>/dev/null | jq '.clubs'`
  4. Assert clubs claim contains `{"club-1-uuid": "admin", "club-2-uuid": "member"}`
- Expected Result: JWT has `clubs` claim with correct role mappings
- Failure Indicators: Missing `clubs` claim, wrong roles, token request fails
- Evidence: .sisyphus/evidence/task-3-jwt-claims.txt

Scenario: All 5 test users can authenticate
- Tool: Bash
- Preconditions: Keycloak running with realm imported
- Steps:
  1. For each user (admin@test.com, manager@test.com, member1@test.com, member2@test.com, viewer@test.com): curl token endpoint with username/password
  2. Assert HTTP 200 for all 5 users
  3. Assert each response contains "access_token" field
- Expected Result: All 5 users authenticate successfully
- Failure Indicators: Any user returns non-200, missing access_token
- Evidence: .sisyphus/evidence/task-3-user-auth.txt

Commit: YES
- Message: `infra(keycloak): configure realm with test users and club memberships`
- Files: `infra/keycloak/realm-export.json`
- Pre-commit: —
- 4. Domain Entities + Value Objects
What to do:
- Create domain entities in `WorkClub.Domain/Entities/`:
  - `Club`: Id (Guid), TenantId (string), Name, SportType (enum), Description, CreatedAt, UpdatedAt
  - `Member`: Id (Guid), TenantId (string), ExternalUserId (string — Keycloak sub), DisplayName, Email, Role (enum: Admin/Manager/Member/Viewer), ClubId (FK), CreatedAt, UpdatedAt
  - `WorkItem` (Task): Id (Guid), TenantId (string), Title, Description, Status (enum: Open/Assigned/InProgress/Review/Done), AssigneeId (nullable FK), CreatedById (FK), ClubId (FK), DueDate (DateTimeOffset?), CreatedAt, UpdatedAt, RowVersion (byte[] — concurrency token)
  - `Shift`: Id (Guid), TenantId (string), Title, Description, Location (string?), StartTime (DateTimeOffset), EndTime (DateTimeOffset), Capacity (int, default 1), ClubId (FK), CreatedById (FK), CreatedAt, UpdatedAt, RowVersion (byte[])
  - `ShiftSignup`: Id (Guid), ShiftId (FK), MemberId (FK), SignedUpAt (DateTimeOffset)
- Create enums in `WorkClub.Domain/Enums/`:
  - `SportType`: Tennis, Cycling, Swimming, Football, Other
  - `ClubRole`: Admin, Manager, Member, Viewer
  - `WorkItemStatus`: Open, Assigned, InProgress, Review, Done
- Create `ITenantEntity` marker interface: `string TenantId { get; set; }`
- Create `WorkItemStatus` state machine validation method on `WorkItem` entity:
  - Valid transitions: Open→Assigned, Assigned→InProgress, InProgress→Review, Review→Done, Review→InProgress
  - Methods: `bool CanTransitionTo(WorkItemStatus newStatus)` + `void TransitionTo(WorkItemStatus newStatus)` (throws on invalid)
- Write unit tests FIRST (TDD): test state machine — valid transitions succeed, invalid transitions throw
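The transition table above can be sketched as a lookup map. Shown in TypeScript for brevity; the task itself implements this as C# methods on the `WorkItem` entity, so treat the names here as illustrative:

```typescript
// Sketch of the 5-state WorkItem transition table (not the C# implementation).
type Status = "Open" | "Assigned" | "InProgress" | "Review" | "Done";

const allowed: Record<Status, Status[]> = {
  Open: ["Assigned"],
  Assigned: ["InProgress"],
  InProgress: ["Review"],
  Review: ["Done", "InProgress"], // Review -> InProgress is the only backward move
  Done: [],                       // terminal: no transitions out
};

function canTransitionTo(from: Status, to: Status): boolean {
  return allowed[from].includes(to);
}

function transitionTo(from: Status, to: Status): Status {
  if (!canTransitionTo(from, to)) throw new Error(`Invalid transition ${from} -> ${to}`);
  return to;
}
```

A lookup map keeps the valid-transition unit tests trivially enumerable, which suits the TDD requirement.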
Must NOT do:
- Do NOT add event sourcing or domain events
- Do NOT create repository interfaces — will use DbContext directly
- Do NOT add navigation properties yet (EF Core configuration is Task 7)
Recommended Agent Profile:
- Category: `quick` — Reason: Simple POCO classes and enum definitions with straightforward unit tests
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 2, 3, 5, 6)
- Blocks: Tasks 7, 11
- Blocked By: None
References:
External References:
- C# 14 record types for value objects: `public record SportType(string Value)`
- State machine pattern: simple `switch` expression in entity method — no external library needed
- `ITenantEntity` interface pattern from Metis research: explicit TenantId property (not Finbuckle shadow)
Acceptance Criteria:
If TDD:
- Test file created: `backend/tests/WorkClub.Tests.Unit/Domain/WorkItemStatusTests.cs`
- `dotnet test backend/tests/WorkClub.Tests.Unit` → PASS (all state machine tests)
QA Scenarios (MANDATORY):
Scenario: Valid state transitions succeed
- Tool: Bash
- Preconditions: Unit test project compiles
- Steps:
  1. Run `dotnet test backend/tests/WorkClub.Tests.Unit --filter "WorkItemStatus" --verbosity normal`
  2. Assert tests cover: Open→Assigned, Assigned→InProgress, InProgress→Review, Review→Done, Review→InProgress
  3. Assert all pass
- Expected Result: 5 valid transition tests pass
- Failure Indicators: Any test fails, missing test cases
- Evidence: .sisyphus/evidence/task-4-state-machine-valid.txt

Scenario: Invalid state transitions throw
- Tool: Bash
- Preconditions: Unit test project compiles
- Steps:
  1. Run `dotnet test backend/tests/WorkClub.Tests.Unit --filter "InvalidTransition" --verbosity normal`
  2. Assert tests cover: Open→Done, Open→InProgress, Assigned→Done, InProgress→Open, Done→anything
  3. Assert all pass (each asserts exception thrown)
- Expected Result: Invalid transition tests pass (exceptions correctly thrown)
- Failure Indicators: Missing invalid transition tests, test failures
- Evidence: .sisyphus/evidence/task-4-state-machine-invalid.txt

Commit: YES
- Message: `feat(domain): add core entities — Club, Member, WorkItem, Shift with state machine`
- Files: `backend/src/WorkClub.Domain/**/*.cs`, `backend/tests/WorkClub.Tests.Unit/Domain/*.cs`
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Unit`
- 5. Next.js Project Scaffold + Tailwind + shadcn/ui
What to do:
- Initialize Next.js 15 project in `/frontend/` using `bunx create-next-app@latest` with:
  - App Router (not Pages), TypeScript, Tailwind CSS, ESLint, `src/` directory
  - `output: 'standalone'` in `next.config.ts`
- Install and configure shadcn/ui: `bunx shadcn@latest init`
- Install initial components: Button, Card, Badge, Input, Label, Select, Dialog, DropdownMenu, Table, Toast
- Configure path aliases in `tsconfig.json`: `@/*` → `src/*`
- Create directory structure:
  - `src/app/` — App Router pages
  - `src/components/` — shared components
  - `src/lib/` — utilities
  - `src/hooks/` — custom hooks
  - `src/types/` — TypeScript types
- Add environment variables to `.env.local.example`:
  - `NEXT_PUBLIC_API_URL=http://localhost:5000`
  - `NEXTAUTH_URL=http://localhost:3000`
  - `NEXTAUTH_SECRET=dev-secret-change-me`
  - `KEYCLOAK_ISSUER=http://localhost:8080/realms/workclub`
  - `KEYCLOAK_CLIENT_ID=workclub-app`
  - `KEYCLOAK_CLIENT_SECRET=<from-keycloak>`
- Verify `bun run build` succeeds
Must NOT do:
- Do NOT install Bun-specific Next.js plugins — use standard Next.js
- Do NOT create custom component wrappers over shadcn/ui
- Do NOT add auth pages yet (Task 21)
Recommended Agent Profile:
- Category: `quick` — Reason: Standard Next.js scaffolding with well-known CLI tools
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 2, 3, 4, 6)
- Blocks: Tasks 10, 17
- Blocked By: None
References:
External References:
- Next.js create-next-app: `bunx create-next-app@latest frontend --typescript --tailwind --eslint --app --src-dir --use-bun`
- shadcn/ui init: `bunx shadcn@latest init` then `bunx shadcn@latest add button card badge input label select dialog dropdown-menu table toast`
- `next.config.ts`: `{ output: 'standalone' }` for Docker deployment
- Standalone output: copies only needed `node_modules`, creates `server.js` entry point
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Next.js builds successfully
- Tool: Bash
- Preconditions: Bun installed, frontend/ directory exists
- Steps:
  1. Run `bun run build` in frontend/
  2. Assert exit code 0
  3. Verify `.next/standalone/server.js` exists
- Expected Result: Build succeeds with standalone output
- Failure Indicators: TypeScript errors, build failure, missing standalone
- Evidence: .sisyphus/evidence/task-5-nextjs-build.txt

Scenario: Dev server starts
- Tool: Bash
- Preconditions: frontend/ exists with dependencies installed
- Steps:
  1. Start `bun run dev` in background
  2. Wait 10s, then curl `http://localhost:3000`
  3. Assert HTTP 200
  4. Kill dev server
- Expected Result: Dev server responds on port 3000
- Failure Indicators: Server fails to start, non-200 response
- Evidence: .sisyphus/evidence/task-5-dev-server.txt

Commit: YES
- Message: `chore(frontend): initialize Next.js project with Tailwind and shadcn/ui`
- Files: `frontend/**`
- Pre-commit: `bun run build` (in frontend/)
- 6. Kustomize Base Manifests
What to do:
- Create `/infra/k8s/base/kustomization.yaml` referencing all base resources
- Create base manifests:
  - `backend-deployment.yaml`: Deployment for dotnet-api (1 replica, port 8080, health probes at `/health/live`, `/health/ready`, `/health/startup`)
  - `backend-service.yaml`: ClusterIP Service (port 80 → 8080)
  - `frontend-deployment.yaml`: Deployment for nextjs (1 replica, port 3000, health probe at `/api/health`)
  - `frontend-service.yaml`: ClusterIP Service (port 80 → 3000)
  - `postgres-statefulset.yaml`: StatefulSet (1 replica, port 5432, PVC 10Gi, healthcheck via `pg_isready`)
  - `postgres-service.yaml`: Headless + primary ClusterIP service
  - `keycloak-deployment.yaml`: Deployment (1 replica, port 8080). Set `KC_HEALTH_ENABLED=true` and point probes at `/health/started` (startupProbe), `/health/live`, and `/health/ready` — note that recent Keycloak versions (25+) serve these on the management port 9000, not 8080. For dev, start with `--import-realm` so the realm JSON mounted under `/opt/keycloak/data/import` is imported on boot; give the startupProbe enough headroom for the import to finish.
  - `keycloak-service.yaml`: ClusterIP Service
  - `configmap.yaml`: App configuration (non-sensitive: log level, CORS origins, API URLs)
  - `ingress.yaml`: Basic Ingress for single-domain routing (frontend on `/`, backend on `/api`)
- Use resource requests/limits placeholders (overridden per environment in overlays)
- Use the image tag placeholder `latest` (overridden per environment)
- Verify `kustomize build infra/k8s/base` produces valid YAML
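The Keycloak probe/import wiring described above can be sketched as follows. This is a minimal sketch, not the project's actual manifest: the image version, ConfigMap name (`workclub-keycloak-realm`), and probe timings are assumptions, and the management-port-9000 health endpoints apply to Keycloak 25+.

```yaml
# Sketch: keycloak-deployment.yaml probe and realm-import wiring (dev overlay).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: workclub-keycloak
spec:
  replicas: 1
  selector:
    matchLabels: { app: workclub-keycloak }
  template:
    metadata:
      labels: { app: workclub-keycloak }
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:26.0   # assumed version
          args: ["start-dev", "--import-realm"]    # import realm JSON on boot (dev only)
          env:
            - name: KC_HEALTH_ENABLED              # without this, /health/* returns 404
              value: "true"
          ports:
            - containerPort: 8080                  # HTTP
            - containerPort: 9000                  # management interface (health/metrics)
          startupProbe:                            # realm import can take a while
            httpGet: { path: /health/started, port: 9000 }
            failureThreshold: 30
            periodSeconds: 5
          livenessProbe:
            httpGet: { path: /health/live, port: 9000 }
          readinessProbe:
            httpGet: { path: /health/ready, port: 9000 }
          volumeMounts:
            - name: realm-import
              mountPath: /opt/keycloak/data/import
      volumes:
        - name: realm-import
          configMap: { name: workclub-keycloak-realm }   # hypothetical ConfigMap name
```

Probing port 8080 instead of 9000 (or forgetting `KC_HEALTH_ENABLED`) is the usual cause of Keycloak crash-looping on the dev cluster.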
Must NOT do:
- Do NOT create Helm charts
- Do NOT add wildcard TLS/cert-manager (single domain, no subdomains)
- Do NOT add HPA, PDB, or NetworkPolicies (production concerns, not MVP)
Recommended Agent Profile:
- Category: `quick`
- Reason: Standard Kubernetes YAML manifests with well-known patterns
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 1 (with Tasks 1, 2, 3, 4, 5)
- Blocks: Task 25
- Blocked By: None
References:
External References:
- Kustomize: `kustomization.yaml` with a `resources:` list
- .NET health probes: `/health/startup` (startupProbe), `/health/live` (livenessProbe), `/health/ready` (readinessProbe)
- Next.js health: `/api/health` (custom route handler)
- PostgreSQL StatefulSet: headless Service + volumeClaimTemplates
- Keycloak health: `/health/started`, `/health/live`, `/health/ready` — served on the management port (9000) in Keycloak 25+ and only when `KC_HEALTH_ENABLED=true`
- Ingress single-domain: path-based routing (`/` → frontend, `/api` → backend)
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Kustomize base builds valid YAML
- Tool: Bash
- Preconditions: kustomize CLI installed
- Steps:
  1. Run `kustomize build infra/k8s/base`
  2. Assert exit code 0
  3. Pipe output to `grep "kind:" | sort -u`
  4. Assert contains: ConfigMap, Deployment (x3), Ingress, Service (x4), StatefulSet
- Expected Result: Valid YAML with all expected resource kinds
- Failure Indicators: kustomize build fails, missing resource kinds
- Evidence: `.sisyphus/evidence/task-6-kustomize-base.txt`

Scenario: Resource names are consistent
- Tool: Bash
- Preconditions: kustomize build succeeds
- Steps:
  1. Run `kustomize build infra/k8s/base | grep "name:" | head -20`
  2. Verify the naming convention is consistent (e.g., workclub-api, workclub-frontend, workclub-postgres)
- Expected Result: All resources follow a consistent naming pattern
- Failure Indicators: Inconsistent or missing names
- Evidence: `.sisyphus/evidence/task-6-resource-names.txt`

Commit: YES
- Message: `infra(k8s): add Kustomize base manifests for all services`
- Files: `infra/k8s/base/**/*.yaml`
- Pre-commit: `kustomize build infra/k8s/base`
7. PostgreSQL Schema + EF Core Migrations + RLS Policies
What to do:
- Create `AppDbContext` in `WorkClub.Infrastructure/Data/`:
  - DbSets: Clubs, Members, WorkItems, Shifts, ShiftSignups
  - Configure entity mappings via `IEntityTypeConfiguration<T>` classes
  - Configure `RowVersion` as a concurrency token on WorkItem and Shift (`UseXminAsConcurrencyToken()`)
  - Configure indexes: `TenantId` on all tenant-scoped tables, `ClubId` on Member/WorkItem/Shift, `Status` on WorkItem
  - Configure relationships: Club → Members, Club → WorkItems, Club → Shifts, Shift → ShiftSignups, Member → ShiftSignups
- Create the initial EF Core migration
- Create a SQL migration script for RLS policies (run after the EF migration):
  - `ALTER TABLE ... ENABLE ROW LEVEL SECURITY` on: clubs, members, work_items, shifts, shift_signups
  - Create a `tenant_isolation` policy on each table: `USING (tenant_id = current_setting('app.current_tenant_id', true)::text)`
  - Create a `bypass_rls_policy` on each table for the migrations role: `USING (true)` for role `app_admin`
  - Create the `app_user` role (used by the application) — RLS applies
  - Create the `app_admin` role (used by migrations) — RLS bypassed
- Create `TenantDbConnectionInterceptor` (implements `DbConnectionInterceptor`):
  - On `ConnectionOpenedAsync`: execute `SET LOCAL app.current_tenant_id = '{tenantId}'`. Caveat: `SET LOCAL` is a no-op outside a transaction, so the statement must run inside the transaction that wraps each request's queries (or be issued at transaction start rather than at connection open).
  - Get the tenant ID from `IHttpContextAccessor` → `ITenantInfo` (Finbuckle)
- Create a `SaveChangesInterceptor` for automatic TenantId assignment on new entities (implements `ITenantEntity`)
- Write integration tests FIRST (TDD):
  - Test: migration applies cleanly to fresh PostgreSQL
  - Test: RLS blocks queries without `SET LOCAL` context
  - Test: RLS allows queries with the correct tenant context
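The RLS DDL described above can be sketched for one table as follows (repeat per table). Table and role names follow the plan; treat this as a sketch, not the final migration script.

```sql
-- Sketch of the RLS policy DDL for one tenant-scoped table.
ALTER TABLE work_items ENABLE ROW LEVEL SECURITY;
-- Note: without FORCE ROW LEVEL SECURITY the table owner bypasses RLS,
-- so app_user must not own the tables.

-- Reads AND writes are constrained: with no WITH CHECK clause, a FOR ALL
-- policy reuses the USING expression for inserts/updates.
CREATE POLICY tenant_isolation ON work_items
  USING (tenant_id = current_setting('app.current_tenant_id', true)::text);

-- Migrations/seeding role bypasses RLS explicitly.
CREATE POLICY bypass_rls_policy ON work_items
  FOR ALL TO app_admin
  USING (true);

-- Transaction-scoped tenant context; resets automatically at COMMIT/ROLLBACK,
-- which is what makes it safe with pooled connections.
BEGIN;
SET LOCAL app.current_tenant_id = 'club-1-uuid';
SELECT count(*) FROM work_items;  -- only rows for club-1-uuid are visible
COMMIT;
```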
Must NOT do:
- Do NOT use Finbuckle's `IsMultiTenant()` fluent API — use an explicit `TenantId` property
- Do NOT use plain `SET` — MUST use `SET LOCAL` for transaction-scoped tenant context
- Do NOT use the in-memory database provider for tests
- Do NOT create generic repository pattern
Recommended Agent Profile:
- Category: `deep`
- Reason: Complex EF Core configuration with RLS policies, connection interceptors, and connection-pooling safety under concurrency — the highest-risk task in the project
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 8, 9, 10, 11, 12)
- Blocks: Tasks 13, 14, 15, 16
- Blocked By: Tasks 1 (solution), 2 (Docker Compose for PostgreSQL), 4 (domain entities)
References:
External References:
- EF Core Npgsql: `UseXminAsConcurrencyToken()` for PostgreSQL optimistic concurrency
- PostgreSQL RLS: `ALTER TABLE x ENABLE ROW LEVEL SECURITY; CREATE POLICY ... USING (...)`
- `SET LOCAL`: transaction-scoped session variable — safe with connection pooling because it resets at transaction end
- `current_setting('app.current_tenant_id', true)`: the second param `true` returns NULL instead of an error when unset
- `DbConnectionInterceptor`: `ConnectionOpenedAsync(DbConnection, ConnectionEndEventData, CancellationToken)` — execute SQL after the connection is opened
- `bypass_rls_policy`: `CREATE POLICY bypass ON table FOR ALL TO app_admin USING (true)` — allows migrations
- Finbuckle `ITenantInfo.Id`: access the current tenant from DI
Acceptance Criteria:
If TDD:
- Test file: `backend/tests/WorkClub.Tests.Integration/Data/MigrationTests.cs`
- Test file: `backend/tests/WorkClub.Tests.Integration/Data/RlsTests.cs`
QA Scenarios (MANDATORY):
Scenario: Migration applies to fresh PostgreSQL
- Tool: Bash (Testcontainers in test)
- Preconditions: Testcontainers, Docker running
- Steps:
  1. Run `dotnet test backend/tests/WorkClub.Tests.Integration --filter "Migration" --verbosity normal`
  2. The test spins up a PostgreSQL container, applies the migration, verifies all tables exist
- Expected Result: All tables created (clubs, members, work_items, shifts, shift_signups)
- Failure Indicators: Migration fails, missing tables
- Evidence: `.sisyphus/evidence/task-7-migration.txt`

Scenario: RLS blocks access without tenant context
- Tool: Bash (integration test)
- Preconditions: Migration applied, seed data inserted
- Steps:
  1. Run an integration test that:
     a. Inserts data for tenant-1 and tenant-2 using the admin role (bypasses RLS)
     b. Opens a connection as app_user WITHOUT `SET LOCAL`
     c. `SELECT * FROM work_items` → assert 0 rows
     d. Opens a connection as app_user WITH `SET LOCAL app.current_tenant_id = 'tenant-1'`
     e. `SELECT * FROM work_items` → assert only tenant-1 rows
- Expected Result: RLS correctly filters by tenant, returns 0 rows without context
- Failure Indicators: Data leaks across tenants, queries return all rows
- Evidence: `.sisyphus/evidence/task-7-rls-isolation.txt`

Scenario: RLS allows correct tenant access
- Tool: Bash (integration test)
- Preconditions: Migration applied, multi-tenant data seeded
- Steps:
  1. Run an integration test with Testcontainers:
     a. Insert 5 work items for tenant-1, 3 for tenant-2
     b. `SET LOCAL` to tenant-1 → `SELECT count(*)` → assert 5
     c. `SET LOCAL` to tenant-2 → `SELECT count(*)` → assert 3
- Expected Result: Each tenant sees only their data
- Failure Indicators: Wrong counts, cross-tenant data visible
- Evidence: `.sisyphus/evidence/task-7-rls-correct-access.txt`

Commit: YES (groups with Task 8)
- Message: `feat(data): add EF Core DbContext, migrations, RLS policies, and multi-tenant middleware`
- Files: `backend/src/WorkClub.Infrastructure/Data/**/*.cs`, SQL migration files
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Integration --filter "Migration|Rls"`
8. Finbuckle Multi-Tenant Middleware + Tenant Validation
What to do:
- Configure Finbuckle in `Program.cs`:
  - `builder.Services.AddMultiTenant<TenantInfo>().WithClaimStrategy("tenant_id").WithHeaderStrategy("X-Tenant-Id")`
  - Middleware order: `UseAuthentication()` → `UseMultiTenant()` → `UseAuthorization()`
- Create `TenantValidationMiddleware`:
  - Extract the `clubs` claim from the JWT
  - Extract the `X-Tenant-Id` header from the request
  - Validate: X-Tenant-Id MUST be present in the JWT `clubs` claim → 403 on mismatch
  - Set the Finbuckle tenant context
- Create an `ITenantProvider` service (scoped):
  - `GetTenantId()`: returns the current tenant ID from the Finbuckle context
  - `GetUserRole()`: returns the user's role in the current tenant from the JWT `clubs` claim
- Register `TenantDbConnectionInterceptor` (from Task 7) in DI
- Write integration tests FIRST (TDD):
  - Test: request with valid JWT + matching X-Tenant-Id → 200
  - Test: request with valid JWT + non-member X-Tenant-Id → 403
  - Test: request without X-Tenant-Id header → 400
  - Test: unauthenticated request → 401
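The wiring above can be sketched in `Program.cs` roughly as follows — a sketch assuming Finbuckle's ASP.NET Core package (strategy method names per its docs) and an in-memory store as a stand-in for whatever tenant store the project actually uses:

```csharp
// Sketch: Finbuckle registration and middleware order (Program.cs).
using Finbuckle.MultiTenant;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddMultiTenant<TenantInfo>()
    .WithClaimStrategy("tenant_id")      // resolve the tenant from the JWT claim
    .WithHeaderStrategy("X-Tenant-Id")   // fall back to the request header
    .WithInMemoryStore();                // placeholder — a real store backs this

var app = builder.Build();

// Order matters: authentication populates the claims principal that the
// claim strategy reads, so UseMultiTenant() must come after it.
app.UseAuthentication();
app.UseMultiTenant();
app.UseAuthorization();

app.Run();
```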
Must NOT do:
- Do NOT use Finbuckle's RouteStrategy (tenant comes from credentials, not URL)
- Do NOT call `UseMultiTenant()` before `UseAuthentication()` (ClaimStrategy needs the claims)
Recommended Agent Profile:
- Category: `deep`
- Reason: Finbuckle middleware integration with custom validation; the cross-tenant check is security-critical
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 7, 9, 10, 11, 12)
- Blocks: Tasks 13, 14, 15, 16
- Blocked By: Tasks 1 (solution), 3 (Keycloak for JWT claims)
References:
External References:
- Finbuckle docs: `WithClaimStrategy(claimType)` — reads the tenant from the specified JWT claim
- Finbuckle docs: `WithHeaderStrategy(headerName)` — reads from an HTTP header as fallback
- Middleware order for a single realm: `UseAuthentication()` → `UseMultiTenant()` → `UseAuthorization()`
- `IMultiTenantContextAccessor<TenantInfo>`: injected service to access the resolved tenant
Acceptance Criteria:
If TDD:
- Test file: `backend/tests/WorkClub.Tests.Integration/Middleware/TenantValidationTests.cs`
QA Scenarios (MANDATORY):
Scenario: Valid tenant request succeeds
- Tool: Bash (integration test with WebApplicationFactory)
- Preconditions: WebApplicationFactory configured with a test Keycloak token
- Steps:
  1. Create a JWT with clubs: {"club-1": "admin"}
  2. Send GET /api/tasks with Authorization: Bearer + X-Tenant-Id: club-1
  3. Assert HTTP 200
- Expected Result: Request passes tenant validation
- Failure Indicators: 401 or 403 on a valid request
- Evidence: `.sisyphus/evidence/task-8-valid-tenant.txt`

Scenario: Cross-tenant access denied
- Tool: Bash (integration test)
- Preconditions: WebApplicationFactory with a test JWT
- Steps:
  1. Create a JWT with clubs: {"club-1": "admin"} (no club-2)
  2. Send GET /api/tasks with X-Tenant-Id: club-2
  3. Assert HTTP 403
- Expected Result: 403 Forbidden — user is not a member of the requested club
- Failure Indicators: Returns 200 (data leak) or the wrong error code
- Evidence: `.sisyphus/evidence/task-8-cross-tenant-denied.txt`

Scenario: Missing tenant header returns 400
- Tool: Bash (integration test)
- Preconditions: WebApplicationFactory
- Steps:
  1. Send an authenticated request WITHOUT the X-Tenant-Id header
  2. Assert HTTP 400
- Expected Result: 400 Bad Request
- Failure Indicators: 200 or 500
- Evidence: `.sisyphus/evidence/task-8-missing-header.txt`

Commit: YES (groups with Task 7)
- Message: `feat(data): add EF Core DbContext, migrations, RLS policies, and multi-tenant middleware`
- Files: `backend/src/WorkClub.Api/Middleware/*.cs`, `Program.cs` updates
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Integration --filter "TenantValidation"`
9. Keycloak JWT Auth in .NET + Role-Based Authorization
What to do:
- Configure JWT Bearer authentication in `Program.cs`:
  - `builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme).AddJwtBearer(options => { options.Authority = keycloakIssuer; options.Audience = "workclub-api"; })`
  - In-cluster note: the backend reaches Keycloak via the internal Service URL, while tokens carry the public issuer URL. Set `Authority` to the reachable (internal) URL for metadata/JWKS discovery, pin `TokenValidationParameters.ValidIssuer` to the public issuer, and allow `RequireHttpsMetadata = false` in Development only — otherwise token validation fails with an issuer mismatch on the dev cluster.
- Create a custom `ClaimsPrincipal` transformation:
  - Parse the `clubs` claim from the JWT → extract the role for the current tenant → add role claims
  - Map the club role to ASP.NET `ClaimTypes.Role` (e.g., "Admin", "Manager", "Member", "Viewer")
- Create authorization policies:
  - `RequireAdmin`: requires the "Admin" role
  - `RequireManager`: requires "Admin" OR "Manager"
  - `RequireMember`: requires "Admin" OR "Manager" OR "Member"
  - `RequireViewer`: any authenticated user with a valid club membership
- Add `builder.Services.AddOpenApi()` for API documentation
- Map health check endpoints: `/health/live`, `/health/ready`, `/health/startup`
- Write integration tests FIRST (TDD):
  - Test: Admin can access admin-only endpoints
  - Test: Member cannot access admin-only endpoints → 403
  - Test: Viewer can only read, not write → 403 on POST/PUT/DELETE
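The authority/issuer split above can be sketched as follows. The configuration keys and URLs are assumptions — the point is that discovery uses the in-cluster URL while issuer validation uses the public one:

```csharp
// Sketch: JWT bearer config where the in-cluster Keycloak URL differs from the
// public issuer embedded in tokens. Keys and URLs are illustrative assumptions.
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

// e.g. http://keycloak:8080/realms/workclub — reachable inside the cluster
var authority = builder.Configuration["Keycloak:Authority"];
// e.g. https://dev.example.com/realms/workclub — what the tokens actually contain
var publicIssuer = builder.Configuration["Keycloak:PublicIssuer"];

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.Authority = authority;           // used for metadata + JWKS discovery
        options.Audience = "workclub-api";
        // The dev cluster talks plain HTTP to the Keycloak Service.
        options.RequireHttpsMetadata = !builder.Environment.IsDevelopment();
        // Validate the issuer the tokens really carry, not the internal URL.
        options.TokenValidationParameters.ValidIssuer = publicIssuer;
    });
```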
Must NOT do:
- Do NOT add Swashbuckle — use the built-in `AddOpenApi()`
- Do NOT implement custom JWT validation — let ASP.NET handle it
- Do NOT add 2FA or password policies
Recommended Agent Profile:
- Category: `deep`
- Reason: JWT authentication with custom claims transformation and role-based authorization requires a careful security implementation
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 7, 8, 10, 11, 12)
- Blocks: Tasks 14, 15, 16
- Blocked By: Tasks 1 (solution), 3 (Keycloak config)
References:
External References:
- ASP.NET JWT Bearer: `AddJwtBearer` with `Authority` pointing to the Keycloak realm
- .NET 10 OpenAPI: `builder.Services.AddOpenApi()` + `app.MapOpenApi()`
- Claims transformation: `IClaimsTransformation.TransformAsync(ClaimsPrincipal)`
- Authorization policies: `builder.Services.AddAuthorizationBuilder().AddPolicy("RequireAdmin", p => p.RequireRole("Admin"))`
- Health checks: `builder.Services.AddHealthChecks().AddNpgSql(connectionString)`
Acceptance Criteria:
If TDD:
- Test file: `backend/tests/WorkClub.Tests.Integration/Auth/AuthorizationTests.cs`
QA Scenarios (MANDATORY):
Scenario: Role-based access control works
- Tool: Bash (integration test)
- Preconditions: WebApplicationFactory with mocked JWT claims
- Steps:
  1. Test: JWT with Admin role → GET /api/clubs → 200
  2. Test: JWT with Viewer role → POST /api/tasks → 403
  3. Test: JWT with Manager role → POST /api/tasks → 200
  4. Test: No token → any endpoint → 401
- Expected Result: Each role gets the correct access level
- Failure Indicators: Wrong HTTP status codes, privilege escalation
- Evidence: `.sisyphus/evidence/task-9-rbac.txt`

Scenario: Health endpoints respond without auth
- Tool: Bash
- Preconditions: API running
- Steps:
  1. `curl http://localhost:5000/health/live` → assert 200
  2. `curl http://localhost:5000/health/ready` → assert 200
- Expected Result: Health endpoints are public (no auth required) — the Kubernetes probes from Task 6 depend on this
- Failure Indicators: 401 on health endpoints
- Evidence: `.sisyphus/evidence/task-9-health.txt`

Commit: YES
- Message: `feat(auth): add Keycloak JWT authentication and role-based authorization`
- Files: `backend/src/WorkClub.Api/Auth/*.cs`, `Program.cs` updates
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Integration --filter "Authorization"`
10. NextAuth.js Keycloak Integration
What to do:
- Install Auth.js v5: `bun add next-auth@beta @auth/core`
- Create a `/frontend/src/auth/` directory with:
  - `auth.ts`: NextAuth config with the Keycloak OIDC provider
  - Configure the JWT callback to include the `clubs` claim from the Keycloak token
  - Configure the session callback to expose clubs + the active club to the client
- Create `/frontend/src/middleware.ts`:
  - Protect all routes except `/`, `/login`, `/api/auth/*`
  - Redirect unauthenticated users to `/login`
- Create auth utility hooks:
  - `useSession()` wrapper that includes club membership
  - `useActiveClub()` — reads from local storage / cookie, provides club context
- Create an API fetch utility:
  - Wraps `fetch` with `Authorization: Bearer <token>` and `X-Tenant-Id: <activeClub>` headers
  - Used by all frontend API calls
- Write tests FIRST (TDD):
  - Test: `useActiveClub` returns the stored club
  - Test: the API fetch utility includes the correct headers
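The header logic of the fetch utility can be sketched as below. The function names (`buildTenantHeaders`, `apiFetch`) are assumptions — the real utility would pull the token from the NextAuth session and the active club from the `useActiveClub` hook:

```typescript
// Minimal sketch of the fetch wrapper's header logic (names are hypothetical).
export function buildTenantHeaders(
  accessToken: string,
  activeClubId: string,
  extra: Record<string, string> = {},
): Record<string, string> {
  return {
    ...extra,
    Authorization: `Bearer ${accessToken}`, // backend JWT bearer auth
    "X-Tenant-Id": activeClubId,            // tenant resolution (Task 8)
  };
}

// Thin wrapper every frontend API call would go through.
export async function apiFetch(
  url: string,
  accessToken: string,
  activeClubId: string,
  init: RequestInit = {},
): Promise<Response> {
  return fetch(url, {
    ...init,
    headers: buildTenantHeaders(
      accessToken,
      activeClubId,
      (init.headers ?? {}) as Record<string, string>,
    ),
  });
}
```

Keeping the header construction in a pure function makes the TDD scenario above ("API fetch utility includes correct headers") a trivial unit test with no network mocking.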
Must NOT do:
- Do NOT create custom HTTP client class wrapper
- Do NOT add social login providers
- Do NOT store tokens in localStorage (use HTTP-only cookies via NextAuth)
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: Auth.js v5 configuration with OIDC + custom callbacks requires understanding of token flows
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 7, 8, 9, 11, 12)
- Blocks: Tasks 18, 21
- Blocked By: Tasks 3 (Keycloak realm), 5 (Next.js scaffold)
References:
External References:
- Auth.js v5 Keycloak: `import KeycloakProvider from "next-auth/providers/keycloak"`
- Auth.js JWT callback: `jwt({ token, account })` → add the clubs claim from `account.access_token`
- Auth.js session callback: `session({ session, token })` → expose clubs to the client
- Next.js middleware: `export { auth as middleware } from "@/auth"` — or a custom matcher
- HTTP-only cookies: the default in Auth.js — tokens are never exposed to client JS
Acceptance Criteria:
If TDD:
- Test file: `frontend/src/hooks/__tests__/useActiveClub.test.ts`
- Test file: `frontend/src/lib/__tests__/api.test.ts`
QA Scenarios (MANDATORY):
Scenario: Auth utility includes correct headers
- Tool: Bash (bun test)
- Preconditions: Frontend tests configured
- Steps:
  1. Run `bun run test -- --filter "api fetch"` in frontend/
  2. Assert: the test verifies the Authorization header contains the Bearer token
  3. Assert: the test verifies the X-Tenant-Id header contains the active club ID
- Expected Result: API utility correctly attaches auth and tenant headers
- Failure Indicators: Missing headers, wrong header names
- Evidence: `.sisyphus/evidence/task-10-api-headers.txt`

Scenario: Protected routes redirect to login
- Tool: Bash (bun test)
- Preconditions: Frontend middleware configured
- Steps:
  1. Test middleware: unauthenticated request to /dashboard → redirect to /login
  2. Test middleware: request to /api/auth/* → pass through
- Expected Result: Middleware protects routes correctly
- Failure Indicators: Protected route accessible without auth
- Evidence: `.sisyphus/evidence/task-10-middleware.txt`

Commit: YES
- Message: `feat(frontend-auth): add NextAuth.js Keycloak integration`
- Files: `frontend/src/auth/**`, `frontend/src/middleware.ts`, `frontend/src/lib/api.ts`, `frontend/src/hooks/useActiveClub.ts`
- Pre-commit: `bun run build` (in frontend/)
11. Seed Data Script
What to do:
- Create a `SeedDataService` in `WorkClub.Infrastructure/Seed/`:
  - Seed 2 clubs:
- Club 1: "Sunrise Tennis Club" (Tennis), tenant_id: "club-1-uuid"
- Club 2: "Valley Cycling Club" (Cycling), tenant_id: "club-2-uuid"
- Seed 5 members (matching Keycloak test users):
- Admin user → Admin in Club 1, Member in Club 2
- Manager user → Manager in Club 1
- Member1 user → Member in Club 1, Member in Club 2
- Member2 user → Member in Club 1
- Viewer user → Viewer in Club 1
- Seed sample work items per club:
- Club 1: 5 tasks in various states (Open, Assigned, InProgress, Review, Done)
- Club 2: 3 tasks (Open, Assigned, InProgress)
- Seed sample shifts per club:
- Club 1: 3 shifts (past, today, future) with varying capacity and sign-ups
- Club 2: 2 shifts (today, future)
- Register seed service to run on startup in Development environment only
- Use an `app_admin` role connection for seeding (bypasses RLS)
- Make the seed idempotent (check whether data exists before inserting)
Must NOT do:
- Do NOT seed in Production — guard with `IHostEnvironment.IsDevelopment()`
- Do NOT hard-code GUIDs — generate deterministic GUIDs from names for consistency
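A deterministic-GUID helper could look like the sketch below. Note this truncates a SHA-1 hash of the name — stable across runs, which is all idempotent seeding needs, but not a strict RFC 4122 version-5 UUID; the `"workclub:"` prefix is an assumed namespace:

```csharp
// Sketch: deterministic GUIDs for idempotent seed data (not RFC 4122 v5).
using System;
using System.Security.Cryptography;
using System.Text;

public static class SeedIds
{
    public static Guid FromName(string name)
    {
        // SHA-1 yields 20 bytes; a GUID needs 16, so keep the first 16.
        byte[] hash = SHA1.HashData(Encoding.UTF8.GetBytes("workclub:" + name));
        byte[] guidBytes = new byte[16];
        Array.Copy(hash, guidBytes, 16);
        return new Guid(guidBytes);
    }
}

// Usage: the same name always yields the same ID, so a re-run of the seed
// can check for existing rows by primary key instead of duplicating them.
// var club1Id = SeedIds.FromName("Sunrise Tennis Club");
```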
Recommended Agent Profile:
- Category: `quick`
- Reason: Straightforward data insertion with known entities
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 7, 8, 9, 10, 12)
- Blocks: Task 22
- Blocked By: Tasks 2 (Docker PostgreSQL), 3 (Keycloak users), 4 (domain entities)
References:
External References:
- EF Core seeding: `context.Database.EnsureCreated()` or explicit `HasData()` in model config
- Idempotent seeding: `if (!context.Clubs.Any()) { context.Clubs.AddRange(...) }`
- Deterministic GUIDs: fixed literals such as `new Guid("00000000-0000-0000-0000-000000000001")`, or name-based GUIDs via a small RFC 4122 helper — note the BCL has no `Guid.CreateVersion5`; the built-in `Guid.CreateVersion7()` is time-based, not deterministic
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Seed data populates database
- Tool: Bash
- Preconditions: Docker Compose up (postgres + keycloak), migrations applied
- Steps:
  1. Run the backend in Development mode
  2. Query: `docker compose exec postgres psql -U app_admin -d workclub -c "SELECT count(*) FROM clubs"` → assert 2
  3. Query: `SELECT count(*) FROM members` → assert ≥ 7 (5 users, some in multiple clubs)
  4. Query: `SELECT count(*) FROM work_items` → assert 8
  5. Query: `SELECT count(*) FROM shifts` → assert 5
- Expected Result: All seed data present
- Failure Indicators: Missing data, wrong counts
- Evidence: `.sisyphus/evidence/task-11-seed-data.txt`

Scenario: Seed is idempotent
- Tool: Bash
- Preconditions: Seed already applied
- Steps:
  1. Run the backend again (triggers the seed again)
  2. Assert the same counts as before (no duplicates)
- Expected Result: No duplicate records after re-running the seed
- Failure Indicators: Doubled counts
- Evidence: `.sisyphus/evidence/task-11-seed-idempotent.txt`

Commit: YES
- Message: `feat(seed): add development seed data script`
- Files: `backend/src/WorkClub.Infrastructure/Seed/*.cs`
- Pre-commit: `dotnet build`
12. Backend Test Infrastructure (xUnit + Testcontainers + WebApplicationFactory)
What to do:
- Configure the `WorkClub.Tests.Integration` project:
  - Create `CustomWebApplicationFactory<T>` extending `WebApplicationFactory<Program>`:
    - Override `ConfigureWebHost`: replace the PostgreSQL connection with Testcontainers
    - Spin up the PostgreSQL container, apply migrations, configure test services
  - Create a `TestAuthHandler` for mocking JWT auth in tests:
    - Allows tests to set custom claims (clubs, roles) without a real Keycloak
    - Register it as the default auth scheme in the test factory
  - Create a base test class `IntegrationTestBase`:
    - Provides an `HttpClient` with auth headers
    - Helper: `AuthenticateAs(email, clubs)` → sets JWT claims
    - Helper: `SetTenant(tenantId)` → sets the X-Tenant-Id header
    - Implements `IAsyncLifetime` for test setup/teardown
  - Create a `DatabaseFixture` (collection fixture):
    - Shares a single PostgreSQL container across the test class
    - Resets data between tests (truncate tables, re-seed)
- Configure the `WorkClub.Tests.Unit` project:
  - Add references to `WorkClub.Domain` and `WorkClub.Application`
  - No database dependency (pure unit tests)
- Verify the infrastructure works:
  - Write 1 smoke test: GET /health/live → 200
  - Run: `dotnet test backend/tests/WorkClub.Tests.Integration`
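The shared-container fixture can be sketched as below, assuming the `Testcontainers.PostgreSql` package; the collection name and wiring into `CustomWebApplicationFactory` are illustrative:

```csharp
// Sketch: one PostgreSQL container shared across a test collection.
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

public sealed class DatabaseFixture : IAsyncLifetime
{
    public PostgreSqlContainer Container { get; } = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    // Handed to CustomWebApplicationFactory to replace the real connection.
    public string ConnectionString => Container.GetConnectionString();

    public Task InitializeAsync() => Container.StartAsync();  // once per collection
    public Task DisposeAsync() => Container.DisposeAsync().AsTask();
}

[CollectionDefinition("database")]
public sealed class DatabaseCollection : ICollectionFixture<DatabaseFixture> { }
```

Test classes then declare `[Collection("database")]` and truncate/re-seed between tests instead of paying container startup per test.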
Must NOT do:
- Do NOT use in-memory database — MUST use real PostgreSQL for RLS testing
- Do NOT mock EF Core DbContext — use real DbContext with Testcontainers
- Do NOT share mutable state between test classes
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: Test infrastructure setup with Testcontainers + a custom WebApplicationFactory requires careful DI configuration
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 2 (with Tasks 7, 8, 9, 10, 11)
- Blocks: Task 13
- Blocked By: Task 1 (solution structure)
References:
External References:
- Testcontainers .NET: `new PostgreSqlBuilder().WithImage("postgres:16-alpine").Build()`
- WebApplicationFactory: `builder.ConfigureServices(services => { /* replace DB */ })`
- Test auth handler: `services.AddAuthentication("Test").AddScheme<AuthenticationSchemeOptions, TestAuthHandler>("Test", o => {})`
- `IAsyncLifetime`: `InitializeAsync`/`DisposeAsync` for the container lifecycle
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Smoke test passes with Testcontainers
- Tool: Bash
- Preconditions: Docker running (for Testcontainers)
- Steps:
  1. Run `dotnet test backend/tests/WorkClub.Tests.Integration --filter "SmokeTest" --verbosity normal`
  2. Assert exit code 0
  3. Assert the output shows 1 test passed
- Expected Result: Health endpoint smoke test passes
- Failure Indicators: Testcontainer fails to start, test timeout
- Evidence: `.sisyphus/evidence/task-12-smoke-test.txt`

Scenario: Test auth handler works
- Tool: Bash
- Preconditions: Test infrastructure configured
- Steps:
  1. Run an integration test that:
     a. Creates a client with TestAuthHandler claims: {"clubs": {"club-1": "admin"}}
     b. Sends a request with X-Tenant-Id: club-1
     c. Asserts the controller can read the claims correctly
- Expected Result: Custom claims available in the controller
- Failure Indicators: Claims missing, auth fails
- Evidence: `.sisyphus/evidence/task-12-test-auth.txt`

Commit: YES
- Message: `test(infra): add xUnit + Testcontainers + WebApplicationFactory base`
- Files: `backend/tests/WorkClub.Tests.Integration/**/*.cs`, `backend/tests/WorkClub.Tests.Unit/**/*.cs`
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Integration --filter "SmokeTest"`
13. RLS Integration Tests — Multi-Tenant Isolation Proof
What to do:
- Write comprehensive integration tests proving RLS data isolation:
- Test 1: Complete isolation — Insert data for tenant-1 and tenant-2. Query as tenant-1 → see only tenant-1 data. Query as tenant-2 → see only tenant-2 data. Zero overlap.
- Test 2: No context = no data — query without `SET LOCAL` → 0 rows returned (RLS blocks everything)
- Test 3: Insert protection — try to INSERT with tenant-2 context but a tenant-1 tenant_id value → RLS blocks it
- Test 4: Concurrent requests — Fire 50 parallel HTTP requests alternating tenant-1 and tenant-2 → each response contains ONLY the correct tenant's data (tests connection pool safety)
- Test 5: Cross-tenant header spoof — JWT has clubs: {club-1: admin}, send X-Tenant-Id: club-2 → 403
- Test 6: Tenant context in interceptor — verify `TenantDbConnectionInterceptor` correctly issues `SET LOCAL` for each request
- All tests use Testcontainers + WebApplicationFactory from Task 12
- This is the CRITICAL RISK validation — must pass before building API endpoints
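The shape of Test 4 (the connection-pool safety check) could look like this sketch — the endpoint path, DTO shape, and `CreateClientAs` helper are assumptions standing in for the Task 12 infrastructure:

```csharp
// Sketch of Test 4: parallel requests alternating tenants, each response
// checked for cross-tenant leakage via the pooled connections.
[Fact]
public async Task ConcurrentRequests_NeverLeakAcrossTenants()
{
    var tasks = Enumerable.Range(0, 50).Select(async i =>
    {
        var tenant = i % 2 == 0 ? "club-1" : "club-2";
        // Hypothetical helper: builds a client with TestAuthHandler claims
        // for this tenant and sets the X-Tenant-Id header.
        var client = CreateClientAs("user@example.com", tenant);

        var items = await client.GetFromJsonAsync<List<TaskListDto>>("/api/tasks");

        // Every row must belong to the tenant that made the request.
        Assert.All(items!, item => Assert.Equal(tenant, item.TenantId));
    });

    await Task.WhenAll(tasks);  // any cross-contamination fails an assertion above
}
```

Alternating tenants across pooled connections is the point: if the interceptor's tenant context ever outlives its transaction, a `club-2` request reusing a `club-1` connection surfaces here.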
Must NOT do:
- Do NOT skip concurrent request test — this validates connection pool safety
- Do NOT use in-memory database
- Do NOT mock the RLS layer
Recommended Agent Profile:
- Category: `deep`
- Reason: Security-critical tests that validate the foundation of multi-tenancy; the concurrent test requires careful async handling
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 3 (with Tasks 14, 15, 16, 17)
- Blocks: Tasks 14, 15 (unblocks API development once isolation is proven)
- Blocked By: Tasks 7 (schema + RLS), 8 (middleware), 12 (test infra)
References:
Pattern References:
- Task 7: `TenantDbConnectionInterceptor` — the interceptor being tested
- Task 8: `TenantValidationMiddleware` — the cross-tenant validation being tested
- Task 12: `CustomWebApplicationFactory` + `IntegrationTestBase` — the test infrastructure

External References:
- `Task.WhenAll()` for concurrent request testing
- `Parallel.ForEachAsync()` for parallel HTTP requests
- EF Core concurrency: `DbUpdateConcurrencyException` when RowVersion conflicts
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: All 6 RLS isolation tests pass
- Tool: Bash
- Preconditions: Testcontainers + test infrastructure from Task 12
- Steps:
  1. Run `dotnet test backend/tests/WorkClub.Tests.Integration --filter "RlsIsolation" --verbosity normal`
  2. Assert 6 tests pass (complete isolation, no context, insert protection, concurrent, spoof, interceptor)
- Expected Result: All 6 pass, 0 failures
- Failure Indicators: ANY failure means multi-tenancy is BROKEN — stop all downstream work
- Evidence: `.sisyphus/evidence/task-13-rls-isolation.txt`

Scenario: Concurrent request test proves pool safety
- Tool: Bash
- Preconditions: Same as above
- Steps:
  1. Run the concurrent test specifically: `dotnet test --filter "ConcurrentRequests" --verbosity detailed`
  2. Review output: 50 requests, each response verified for the correct tenant
  3. Assert 0 cross-contamination events
- Expected Result: 50/50 requests return the correct tenant's data
- Failure Indicators: Any request returns the wrong tenant's data
- Evidence: `.sisyphus/evidence/task-13-concurrent-safety.txt`

Commit: YES
- Message: `test(rls): add multi-tenant isolation integration tests`
- Files: `backend/tests/WorkClub.Tests.Integration/MultiTenancy/*.cs`
- Pre-commit: `dotnet test backend/tests/WorkClub.Tests.Integration --filter "RlsIsolation"`
14. Task CRUD API Endpoints + 5-State Workflow
What to do:
- Create application services in `WorkClub.Application/Tasks/`:
  - `TaskService`: CRUD operations + state transitions
  - Uses `AppDbContext` directly (no generic repo)
  - Validates state transitions using the domain entity's `CanTransitionTo()` method
  - Enforces role permissions: Create/Assign (Admin/Manager), Transition (assignee + Admin/Manager), Delete (Admin)
- Create minimal API endpoints in `WorkClub.Api/Endpoints/Tasks/`:
  - `GET /api/tasks` — list tasks for the current tenant (filtered by RLS). Supports `?status=Open&page=1&pageSize=20`
  - `GET /api/tasks/{id}` — single task detail
  - `POST /api/tasks` — create a new task (status: Open). Requires Manager+ role.
  - `PATCH /api/tasks/{id}` — update a task (title, description, assignee, status). Enforces the state machine.
  - `DELETE /api/tasks/{id}` — soft-delete or hard-delete. Requires Admin role.
- Handle concurrency: catch `DbUpdateConcurrencyException` → return 409
- DTOs: `TaskListDto`, `TaskDetailDto`, `CreateTaskRequest`, `UpdateTaskRequest`
- Write tests FIRST (TDD):
- Test: CRUD operations work correctly
- Test: Invalid state transition → 422
- Test: Concurrency conflict → 409
- Test: Role enforcement (Viewer can't create)
Must NOT do:
- Do NOT use MediatR or CQRS — direct service injection
- Do NOT create generic CRUD base class
- Do NOT add full-text search
- Do NOT add sub-tasks or task dependencies
Recommended Agent Profile:
- Category: `deep`
- Reason: Core business logic with a state machine, concurrency handling, and role-based access
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 3 (with Tasks 13, 15, 16, 17)
- Blocks: Tasks 19, 22, 23
- Blocked By: Tasks 7 (schema), 8 (multi-tenancy), 9 (auth)
References:
Pattern References:
- Task 4: `WorkItem` entity with the `CanTransitionTo()`/`TransitionTo()` state machine
- Task 7: `AppDbContext` with `UseXminAsConcurrencyToken()`
- Task 8: `ITenantProvider.GetTenantId()` for the current tenant
- Task 9: authorization policies (`RequireManager`, `RequireAdmin`)

External References:
- .NET 10 Minimal APIs: `app.MapGet("/api/tasks", handler).RequireAuthorization("RequireMember")`
- `TypedResults.Ok()`, `TypedResults.NotFound()`, `TypedResults.UnprocessableEntity()`, `TypedResults.Conflict()`
- Offset pagination: `Skip((page - 1) * pageSize).Take(pageSize)`
- Concurrency: `catch (DbUpdateConcurrencyException) → TypedResults.Conflict()`
Acceptance Criteria:
If TDD:
- Test files in `backend/tests/WorkClub.Tests.Integration/Tasks/`
- `dotnet test --filter "Tasks"` → all pass
QA Scenarios (MANDATORY):
Scenario: Full task lifecycle via API
- Tool: Bash (curl)
- Preconditions: Docker Compose up, seed data loaded, token obtained
- Steps:
  1. POST /api/tasks with admin token + X-Tenant-Id: club-1 → assert 201, status "Open"
  2. PATCH /api/tasks/{id} with status "Assigned" + assigneeId → assert 200, status "Assigned"
  3. PATCH status "InProgress" → assert 200
  4. PATCH status "Review" → assert 200
  5. PATCH status "Done" → assert 200
  6. GET /api/tasks/{id} → assert status "Done"
- Expected Result: Task progresses through all 5 states
- Failure Indicators: Any transition rejected, wrong status
- Evidence: `.sisyphus/evidence/task-14-task-lifecycle.txt`

Scenario: Invalid state transition rejected
- Tool: Bash (curl)
- Preconditions: Task in "Open" status
- Steps:
  1. PATCH /api/tasks/{id} with status "Done" (skipping states) → assert 422
  2. PATCH /api/tasks/{id} with status "InProgress" (skipping Assigned) → assert 422
- Expected Result: Invalid transitions return 422 Unprocessable Entity
- Failure Indicators: Returns 200 (state machine bypassed)
- Evidence: `.sisyphus/evidence/task-14-invalid-transition.txt`

Scenario: Role enforcement on tasks
- Tool: Bash (curl)
- Preconditions: Viewer token obtained
- Steps:
  1. GET /api/tasks with viewer token → assert 200 (can read)
  2. POST /api/tasks with viewer token → assert 403 (cannot create)
- Expected Result: Viewer can read but not create tasks
- Failure Indicators: Viewer can create tasks (privilege escalation)
- Evidence: `.sisyphus/evidence/task-14-role-enforcement.txt`

Commit: YES
- Message: `feat(tasks): add Task CRUD API with 5-state workflow`
- Files: `backend/src/WorkClub.Api/Endpoints/Tasks/*.cs`, `backend/src/WorkClub.Application/Tasks/*.cs`
- Pre-commit: `dotnet test backend/tests/ --filter "Tasks"`
15. Shift CRUD API + Sign-Up/Cancel Endpoints
What to do:
- Create application services in
WorkClub.Application/Shifts/:ShiftService: CRUD + sign-up/cancel logic- Sign-up: check capacity (count existing sign-ups < shift.Capacity), prevent duplicates, prevent sign-up for past shifts
- Cancel: remove sign-up record, allow only before shift starts
- Uses
AppDbContextdirectly - Handles concurrency for last-slot race: use optimistic concurrency with retry (catch
DbUpdateConcurrencyException, retry once)
- Create minimal API endpoints in
`WorkClub.Api/Endpoints/Shifts/`:
  - `GET /api/shifts` — list shifts for current tenant. Supports `?from=date&to=date&page=1&pageSize=20`
  - `GET /api/shifts/{id}` — shift detail including sign-up list
  - `POST /api/shifts` — create shift. Requires Manager+ role.
  - `PUT /api/shifts/{id}` — update shift details. Requires Manager+ role.
  - `DELETE /api/shifts/{id}` — delete shift. Requires Admin role.
  - `POST /api/shifts/{id}/signup` — member signs up for shift
  - `DELETE /api/shifts/{id}/signup` — member cancels their sign-up
- DTOs:
ShiftListDto,ShiftDetailDto,CreateShiftRequest,UpdateShiftRequest - Write tests FIRST (TDD):
- Test: Sign-up succeeds when capacity available
- Test: Sign-up rejected when capacity full → 409
- Test: Sign-up rejected for past shift → 422
- Test: Cancel succeeds before shift starts
- Test: Duplicate sign-up rejected → 409
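The sign-up rules above (past shift, duplicate, capacity) can be isolated in a pure function that the service evaluates before persisting; this TypeScript sketch uses assumed names (`validateSignUp`, `ShiftSnapshot`) and only illustrates the rule ordering, while the real C# service additionally wraps `SaveChanges` in the optimistic-concurrency retry.

```typescript
// Hypothetical pure validation behind ShiftService.SignUpAsync.
// Rule order matters: past-shift (422) is checked before the 409 cases.
export type SignUpError = "PastShift" | "Duplicate" | "Full";

export interface ShiftSnapshot {
  startTime: Date;
  capacity: number;
  signedUpMemberIds: string[];
}

export function validateSignUp(
  shift: ShiftSnapshot,
  memberId: string,
  now: Date,
): SignUpError | null {
  if (shift.startTime <= now) return "PastShift";            // maps to 422
  if (shift.signedUpMemberIds.includes(memberId)) return "Duplicate"; // 409
  if (shift.signedUpMemberIds.length >= shift.capacity) return "Full"; // 409
  return null; // sign-up allowed
}
```

Keeping this logic pure lets the capacity and duplicate branches be unit-tested without a database; the concurrency retry only protects the final write.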
Must NOT do:
- Do NOT add recurring shift patterns
- Do NOT add waitlists or swap requests
- Do NOT add approval workflow for sign-ups (first-come-first-served)
- Do NOT add overlapping shift detection
Recommended Agent Profile:
- Category:
deep- Reason: Concurrency-sensitive capacity management with race condition handling
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 3 (with Tasks 13, 14, 16, 17)
- Blocks: Tasks 20, 22
- Blocked By: Tasks 7 (schema), 8 (multi-tenancy), 9 (auth)
References:
Pattern References:
- Task 4:
ShiftandShiftSignupentities - Task 7:
AppDbContextwith concurrency tokens - Task 9: Authorization policies
External References:
- Optimistic concurrency retry:
try { SaveChanges } catch (DbUpdateConcurrencyException) { retry once } - Date filtering:
`shifts.Where(s => s.StartTime >= from && s.StartTime <= to)`
- `DateTimeOffset.UtcNow` for past-shift check
Acceptance Criteria:
If TDD:
- Test files in `backend/tests/WorkClub.Tests.Integration/Shifts/`
- `dotnet test --filter "Shifts"` → all pass
QA Scenarios (MANDATORY):
Scenario: Sign-up with capacity enforcement
- Tool: Bash (curl)
- Preconditions: Shift created with capacity=2, 0 sign-ups
- Steps:
  1. POST /api/shifts/{id}/signup with member1 token → assert 200
  2. POST /api/shifts/{id}/signup with member2 token → assert 200
  3. POST /api/shifts/{id}/signup with admin token → assert 409 (capacity full)
  4. GET /api/shifts/{id} → assert signupCount: 2, capacity: 2
- Expected Result: Third sign-up rejected, capacity enforced
- Failure Indicators: Third sign-up succeeds, wrong signup count
- Evidence: `.sisyphus/evidence/task-15-capacity.txt`

Scenario: Past shift sign-up rejected
- Tool: Bash (curl)
- Preconditions: Shift with startTime in the past
- Steps:
  1. POST /api/shifts/{id}/signup → assert 422
- Expected Result: Cannot sign up for past shifts
- Failure Indicators: Sign-up succeeds for past shift
- Evidence: `.sisyphus/evidence/task-15-past-shift.txt`

Scenario: Cancel sign-up before shift
- Tool: Bash (curl)
- Preconditions: Member signed up for future shift
- Steps:
  1. DELETE /api/shifts/{id}/signup with member token → assert 200
  2. GET /api/shifts/{id} → assert signupCount decreased by 1
- Expected Result: Sign-up removed, capacity freed
- Failure Indicators: Sign-up not removed
- Evidence: `.sisyphus/evidence/task-15-cancel.txt`

Commit: YES
- Message: `feat(shifts): add Shift CRUD API with sign-up and capacity`
- Files: `backend/src/WorkClub.Api/Endpoints/Shifts/*.cs`, `backend/src/WorkClub.Application/Shifts/*.cs`
- Pre-commit: `dotnet test backend/tests/ --filter "Shifts"`
16. Club + Member API Endpoints
What to do:
- Create endpoints in
`WorkClub.Api/Endpoints/Clubs/`:
  - `GET /api/clubs/me` — list clubs the current user belongs to (from JWT claims + DB)
  - `GET /api/clubs/current` — current club details (from X-Tenant-Id context)
  - `GET /api/members` — list members of current club. Requires Member+ role.
  - `GET /api/members/{id}` — member detail
  - `GET /api/members/me` — current user's membership in current club
- Create
MemberSyncService:- On first API request from a user, check if their Keycloak
subexists in Members table - If not, create Member record from JWT claims (name, email, role)
- This keeps DB in sync with Keycloak without separate sync process
- Write tests FIRST (TDD):
- Test: `/api/clubs/me` returns only clubs user belongs to
- Test: `/api/members` returns only members of current tenant
- Test: Member auto-sync creates record on first request
Must NOT do:
- Do NOT add member management CRUD (invite, remove) — managed in Keycloak
- Do NOT add club settings or logo upload
- Do NOT add member search or filtering beyond basic list
Recommended Agent Profile:
- Category:
unspecified-high- Reason: Member sync logic requires understanding Keycloak-DB relationship
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 3 (with Tasks 13, 14, 15, 17)
- Blocks: Task 18
- Blocked By: Tasks 7 (schema), 8 (multi-tenancy), 9 (auth)
References:
Pattern References:
- Task 4:
Memberentity - Task 8:
ITenantProviderfor current tenant - Task 9: JWT claims parsing for user info
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Club list returns only user's clubs
- Tool: Bash (curl)
- Preconditions: Admin user (member of club-1 + club-2)
- Steps:
  1. GET /api/clubs/me with admin token → assert 200
  2. Assert response contains exactly 2 clubs
  3. GET /api/clubs/me with manager token → assert 1 club (club-1 only)
- Expected Result: Each user sees only their clubs
- Failure Indicators: Wrong club count, sees other users' clubs
- Evidence: `.sisyphus/evidence/task-16-clubs-me.txt`

Scenario: Member auto-sync on first request
- Tool: Bash (curl + psql)
- Preconditions: New user authenticated but no Member record in DB
- Steps:
  1. GET /api/members/me with new user token → assert 200
  2. Query DB: `SELECT * FROM members WHERE external_user_id = '{sub}'` → assert 1 row
- Expected Result: Member record created automatically
- Failure Indicators: 404 or no DB record
- Evidence: `.sisyphus/evidence/task-16-member-sync.txt`

Commit: YES
- Message: `feat(clubs): add Club and Member API endpoints with auto-sync`
- Files: `backend/src/WorkClub.Api/Endpoints/Clubs/*.cs`, `backend/src/WorkClub.Application/Members/*.cs`
- Pre-commit: `dotnet test backend/tests/ --filter "Clubs|Members"`
17. Frontend Test Infrastructure (Vitest + RTL + Playwright)
What to do:
- Install and configure Vitest:
bun add -D vitest @testing-library/react @testing-library/jest-dom @vitejs/plugin-react jsdom - Create
frontend/vitest.config.tswith React + jsdom environment - Install and configure Playwright:
bun add -D @playwright/test && bunx playwright install chromium - Create
frontend/playwright.config.tswith:- Base URL:
http://localhost:3000 - Chromium only (faster for development)
- Screenshot on failure
- Create test helpers:
`frontend/tests/helpers/render.tsx` — custom render with providers (session, tenant context)
- `frontend/tests/helpers/mock-session.ts` — mock NextAuth session for component tests
- Write 1 smoke test each:
- Vitest: render a shadcn Button component, assert it renders
- Playwright: navigate to homepage, assert page loads
- Add scripts to
package.json:"test": "vitest run","test:watch": "vitest","test:e2e": "playwright test"
Must NOT do:
- Do NOT install Jest (use Vitest)
- Do NOT install Cypress (use Playwright)
Recommended Agent Profile:
- Category:
quick- Reason: Standard test tool setup with config files
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 3 (with Tasks 13, 14, 15, 16)
- Blocks: Tasks 18, 19, 20, 21
- Blocked By: Task 5 (Next.js scaffold)
References:
External References:
- Vitest + React:
@vitejs/plugin-reactplugin,jsdomenvironment - RTL custom render: wrap with providers for consistent test setup
- Playwright config:
baseURL,use.screenshot: 'only-on-failure'
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Vitest smoke test passes
- Tool: Bash
- Preconditions: Vitest configured in frontend/
- Steps:
  1. Run `bun run test` in frontend/
  2. Assert exit code 0, 1 test passed
- Expected Result: Vitest runs and passes smoke test
- Failure Indicators: Config errors, test failure
- Evidence: `.sisyphus/evidence/task-17-vitest-smoke.txt`

Scenario: Playwright smoke test passes
- Tool: Bash
- Preconditions: Playwright + Chromium installed, dev server running
- Steps:
  1. Start dev server in background
  2. Run `bunx playwright test tests/e2e/smoke.spec.ts`
  3. Assert exit code 0
- Expected Result: Playwright navigates to app and page loads
- Failure Indicators: Browser launch failure, navigation timeout
- Evidence: `.sisyphus/evidence/task-17-playwright-smoke.txt`

Commit: YES
- Message: `test(frontend): add Vitest + RTL + Playwright setup`
- Files: `frontend/vitest.config.ts`, `frontend/playwright.config.ts`, `frontend/tests/**`
- Pre-commit: `bun run test` (in frontend/)
18. App Layout + Club-Switcher + Auth Guard
What to do:
- Create root layout (
frontend/src/app/layout.tsx):- SessionProvider wrapper (NextAuth)
- TenantProvider wrapper (active club context)
- Sidebar navigation: Dashboard, Tasks, Shifts, Members
- Top bar: Club-switcher dropdown, user avatar/name, logout button
- Create
ClubSwitchercomponent (frontend/src/components/club-switcher.tsx):- Dropdown (shadcn DropdownMenu) showing user's clubs
- On switch: update local storage + cookie, refetch all data (TanStack Query
queryClient.invalidateQueries()) - Show current club name + sport type badge
- Create
AuthGuardcomponent:- Wraps protected pages
- If not authenticated → redirect to /login
- If authenticated with no active club and exactly one club → auto-select it
- If authenticated with no active club and multiple clubs → redirect to /select-club
- If authenticated with 0 clubs → show "Contact admin" message
- Install TanStack Query:
bun add @tanstack/react-query - Create
QueryProviderwrapper withQueryClient - Create
TenantContext(React Context) withactiveClubId,setActiveClub(),userRole - Write tests FIRST (TDD):
- Test: ClubSwitcher renders clubs from session
- Test: ClubSwitcher calls setActiveClub on selection
- Test: AuthGuard redirects when no session
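The AuthGuard decision table above can be kept as a pure helper that the component calls before performing any redirect; this is a sketch with assumed names (`guardDecision`, `GuardDecision`), not the component itself.

```typescript
// Hypothetical pure decision function behind AuthGuard.
// Each branch maps one row of the decision table to an action.
type GuardDecision =
  | { kind: "redirect"; to: "/login" | "/select-club" }
  | { kind: "autoSelect"; clubId: string }
  | { kind: "noClubs" }
  | { kind: "allow" };

export function guardDecision(
  authenticated: boolean,
  activeClubId: string | null,
  clubIds: string[],
): GuardDecision {
  if (!authenticated) return { kind: "redirect", to: "/login" };
  if (activeClubId) return { kind: "allow" };
  if (clubIds.length === 0) return { kind: "noClubs" };         // "Contact admin"
  if (clubIds.length === 1) return { kind: "autoSelect", clubId: clubIds[0] };
  return { kind: "redirect", to: "/select-club" };
}
```

Separating the decision from the redirect side effect makes the "AuthGuard redirects when no session" test a plain function assertion rather than a router mock.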
Must NOT do:
- Do NOT add settings pages
- Do NOT add theme customization per club
- Do NOT create custom component wrappers over shadcn/ui
Recommended Agent Profile:
- Category:
visual-engineering- Reason: Layout design, responsive sidebar, dropdown component — frontend UI work
- Skills: [
frontend-ui-ux]frontend-ui-ux: Crafts clean UI layout even without mockups
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 4 (with Tasks 19, 20, 21)
- Blocks: Tasks 19, 20, 22
- Blocked By: Tasks 10 (NextAuth), 16 (Club API), 17 (test infra)
References:
Pattern References:
- Task 10: NextAuth session with clubs claim
- Task 16:
GET /api/clubs/meendpoint for club list
External References:
- shadcn/ui DropdownMenu:
<DropdownMenu><DropdownMenuTrigger>pattern - TanStack Query:
useQuery({ queryKey: ['clubs'], queryFn: fetchClubs }) - React Context:
createContext<TenantContextType>() - Next.js App Router layout: shared across all pages in directory
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Club switcher renders and switches
- Tool: Bash (bun test — Vitest + RTL)
- Preconditions: Mock session with 2 clubs
- Steps:
  1. Render ClubSwitcher with mock session containing clubs: [{name: "Tennis"}, {name: "Cycling"}]
  2. Assert both club names visible in dropdown
  3. Click "Cycling" → assert setActiveClub called with cycling club ID
- Expected Result: Switcher renders clubs and handles selection
- Failure Indicators: Clubs not rendered, click handler not called
- Evidence: `.sisyphus/evidence/task-18-club-switcher.txt`

Scenario: Layout renders with navigation
- Tool: Playwright
- Preconditions: Frontend running with mock auth
- Steps:
  1. Navigate to /dashboard
  2. Assert sidebar contains links: "Tasks", "Shifts", "Members"
  3. Assert top bar shows club name and user name
  4. Take screenshot
- Expected Result: Full layout visible with navigation
- Failure Indicators: Missing sidebar, broken layout
- Evidence: `.sisyphus/evidence/task-18-layout.png`

Commit: YES (groups with Tasks 19, 20, 21)
- Message: `feat(ui): add layout, club-switcher, login, task and shift pages`
- Files: `frontend/src/app/layout.tsx`, `frontend/src/components/club-switcher.tsx`, `frontend/src/components/auth-guard.tsx`
- Pre-commit: `bun run build && bun run test` (in frontend/)
19. Task List + Task Detail + Status Transitions UI
What to do:
- Create
/frontend/src/app/(protected)/tasks/page.tsx:- Task list view using shadcn Table component
- Columns: Title, Status (Badge with color per status), Assignee, Due Date, Actions
- Filter by status (DropdownMenu with status options)
- Pagination (offset-based)
- "New Task" button (visible for Manager+ role)
- Create
/frontend/src/app/(protected)/tasks/[id]/page.tsx:- Task detail view
- Status transition buttons (only valid next states shown)
- Assign member dropdown (for Manager+ role)
- Edit title/description (for Manager+ or assignee)
- Create
/frontend/src/app/(protected)/tasks/new/page.tsx:- New task form: title, description, due date (optional)
- Form validation with shadcn form components
- Use TanStack Query hooks:
useTasks(filters),useTask(id),useCreateTask(),useUpdateTask(),useTransitionTask()
- Write tests FIRST (TDD):
- Test: Task list renders with mock data
- Test: Status badge shows correct color per status
- Test: Only valid transition buttons are shown
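The "only valid next states shown" rule can live in one small pure map shared by the detail page and its tests; a sketch with assumed names, mirroring the 5-state workflow defined by the backend.

```typescript
// Hypothetical transition map for the 5-state workflow.
// The detail page renders one button per entry in validTransitions(status).
type TaskStatus = "Open" | "Assigned" | "InProgress" | "Review" | "Done";

const NEXT_STATES: Record<TaskStatus, TaskStatus[]> = {
  Open: ["Assigned"],
  Assigned: ["InProgress"],
  InProgress: ["Review"],
  Review: ["Done"],
  Done: [], // terminal state, no buttons
};

export function validTransitions(status: TaskStatus): TaskStatus[] {
  return NEXT_STATES[status];
}
```

Driving the buttons from this map keeps the UI consistent with the server-side state machine: an invalid transition is simply never rendered, and the 422 path only guards against stale clients.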
Must NOT do:
- Do NOT add drag-and-drop Kanban board
- Do NOT add inline editing
- Do NOT add bulk actions
Recommended Agent Profile:
- Category:
visual-engineering- Reason: Data table, form design, interactive status transitions
- Skills: [
frontend-ui-ux]
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 4 (with Tasks 18, 20, 21)
- Blocks: Task 27
- Blocked By: Tasks 14 (Task API), 17 (test infra), 18 (layout)
References:
Pattern References:
- Task 14: Task API endpoints (GET /api/tasks, PATCH /api/tasks/{id})
- Task 18: Layout with TenantContext for API headers
- Task 4: WorkItemStatus enum values and valid transitions
External References:
- shadcn Table:
<Table><TableHeader><TableBody>with mapped rows - shadcn Badge:
<Badge variant="outline">Open</Badge>with color variants - TanStack Query mutations:
useMutation({ mutationFn: updateTask, onSuccess: invalidate })
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Task list renders with data
- Tool: Bash (bun test)
- Preconditions: Mock API response with 5 tasks
- Steps:
  1. Render TaskListPage with mocked TanStack Query
  2. Assert 5 table rows rendered
  3. Assert each row has title, status badge, assignee
- Expected Result: All tasks displayed in table
- Failure Indicators: Missing rows, wrong data
- Evidence: `.sisyphus/evidence/task-19-task-list.txt`

Scenario: Status transition buttons shown correctly
- Tool: Bash (bun test)
- Preconditions: Task with status "InProgress"
- Steps:
  1. Render TaskDetailPage with task in "InProgress" status
  2. Assert "Move to Review" button is visible
  3. Assert "Mark as Done" button is NOT visible (invalid transition)
- Expected Result: Only valid transitions shown
- Failure Indicators: Invalid transitions displayed
- Evidence: `.sisyphus/evidence/task-19-transitions.txt`

Commit: YES (groups with Tasks 18, 20, 21)
- Message: (grouped in Task 18 commit)
- Files: `frontend/src/app/(protected)/tasks/**/*.tsx`, `frontend/src/hooks/useTasks.ts`
- Pre-commit: `bun run build && bun run test`
20. Shift List + Shift Detail + Sign-Up UI
What to do:
- Create
/frontend/src/app/(protected)/shifts/page.tsx:- Shift list view using shadcn Card components (not table — more visual for schedules)
- Each card: Title, Date/Time, Location, Capacity bar (X/Y signed up), Sign-up button
- Filter by date range (DatePicker)
- "New Shift" button (visible for Manager+ role)
- Create
/frontend/src/app/(protected)/shifts/[id]/page.tsx:- Shift detail: full info + list of signed-up members
- "Sign Up" button (if capacity available and shift is future)
- "Cancel Sign-up" button (if user is signed up)
- Visual capacity indicator (progress bar)
- Create
/frontend/src/app/(protected)/shifts/new/page.tsx:- New shift form: title, description, location, start time, end time, capacity
- TanStack Query hooks:
useShifts(dateRange),useShift(id),useSignUp(),useCancelSignUp() - Write tests FIRST (TDD):
- Test: Shift card shows capacity correctly (2/3)
- Test: Sign-up button disabled when full
- Test: Past shift shows "Past" label, no sign-up button
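The capacity label, sign-up button state, and "Past" marker above all derive from the same three inputs, so they can come from one pure helper; names here (`shiftCardState`) are assumptions for illustration.

```typescript
// Hypothetical view-model helper behind ShiftCard; the component only renders
// what this function returns, so the test cases above become plain assertions.
export interface ShiftCardState {
  label: string;      // e.g. "2/3 spots filled"
  canSignUp: boolean; // false when full or in the past
  isPast: boolean;    // drives the "Past" label
}

export function shiftCardState(
  capacity: number,
  signedUpCount: number,
  startTime: Date,
  now: Date,
): ShiftCardState {
  const isPast = startTime <= now;
  return {
    label: `${signedUpCount}/${capacity} spots filled`,
    canSignUp: !isPast && signedUpCount < capacity,
    isPast,
  };
}
```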
Must NOT do:
- Do NOT add calendar view (list/card view only for MVP)
- Do NOT add recurring shift creation
- Do NOT add shift swap functionality
Recommended Agent Profile:
- Category:
visual-engineering- Reason: Card-based UI, capacity visualization, date/time components
- Skills: [
frontend-ui-ux]
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 4 (with Tasks 18, 19, 21)
- Blocks: Task 28
- Blocked By: Tasks 15 (Shift API), 17 (test infra), 18 (layout)
References:
Pattern References:
- Task 15: Shift API endpoints
- Task 18: Layout with TenantContext
- Task 4: Shift entity fields
External References:
- shadcn Card:
<Card><CardHeader><CardContent>for shift cards - Progress component: shadcn Progress for capacity bar
- Date picker: shadcn Calendar + Popover for date range filter
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Shift card shows capacity and sign-up state
- Tool: Bash (bun test)
- Preconditions: Mock shift with capacity 3, 2 signed up
- Steps:
  1. Render ShiftCard with mock data
  2. Assert "2/3 spots filled" text visible
  3. Assert "Sign Up" button is enabled (1 spot left)
  4. Re-render with capacity 3, 3 signed up
  5. Assert "Sign Up" button is disabled
- Expected Result: Capacity displayed correctly, button state matches availability
- Failure Indicators: Wrong count, button enabled when full
- Evidence: `.sisyphus/evidence/task-20-capacity-display.txt`

Scenario: Past shift cannot be signed up for
- Tool: Bash (bun test)
- Preconditions: Shift with startTime in the past
- Steps:
  1. Render ShiftCard with past shift
  2. Assert "Past" label visible
  3. Assert "Sign Up" button not rendered
- Expected Result: Past shifts clearly marked, no sign-up option
- Failure Indicators: Sign-up button on past shift
- Evidence: `.sisyphus/evidence/task-20-past-shift.txt`

Commit: YES (groups with Tasks 18, 19, 21)
- Message: (grouped in Task 18 commit)
- Files: `frontend/src/app/(protected)/shifts/**/*.tsx`, `frontend/src/hooks/useShifts.ts`
- Pre-commit: `bun run build && bun run test`
21. Login Page + First-Login Club Picker
What to do:
- Create
/frontend/src/app/login/page.tsx:- Clean login page with "Sign in with Keycloak" button
- Uses NextAuth
signIn("keycloak")function - Shows app name/logo placeholder
- Create
/frontend/src/app/select-club/page.tsx:- Club selection page for multi-club users
- Shows cards for each club with name and sport type
- Clicking a club → sets active club → redirects to /dashboard
- Only shown when user has 2+ clubs and no active club
- Create
/frontend/src/app/(protected)/dashboard/page.tsx:- Simple dashboard showing:
- Active club name
- My open tasks count
- My upcoming shifts count
- Quick links to Tasks and Shifts pages
- Write tests FIRST (TDD):
- Test: Login page renders sign-in button
- Test: Club picker shows correct number of clubs
- Test: Dashboard shows summary counts
Must NOT do:
- Do NOT add custom login form (use Keycloak hosted login)
- Do NOT add registration page
- Do NOT add charts or analytics on dashboard
Recommended Agent Profile:
- Category:
visual-engineering- Reason: Login page design, club selection cards, dashboard layout
- Skills: [
frontend-ui-ux]
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 4 (with Tasks 18, 19, 20)
- Blocks: Task 26
- Blocked By: Tasks 10 (NextAuth), 17 (test infra)
References:
Pattern References:
- Task 10:
signIn("keycloak")and session with clubs claim - Task 18: AuthGuard redirects to /select-club or /login
External References:
- NextAuth signIn:
import { signIn } from "next-auth/react"→signIn("keycloak") - shadcn Card: for club selection cards
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Login page renders
- Tool: Bash (bun test)
- Steps:
  1. Render LoginPage
  2. Assert "Sign in" button visible
- Expected Result: Login page renders with sign-in button
- Failure Indicators: Missing button, render error
- Evidence: `.sisyphus/evidence/task-21-login-page.txt`

Scenario: Club picker for multi-club user
- Tool: Bash (bun test)
- Preconditions: Mock session with 2 clubs, no active club
- Steps:
  1. Render SelectClubPage with mock session
  2. Assert 2 club cards rendered
  3. Click first card → assert redirect to /dashboard
- Expected Result: Club picker shows clubs and handles selection
- Failure Indicators: Wrong club count, no redirect
- Evidence: `.sisyphus/evidence/task-21-club-picker.txt`

Commit: YES (groups with Tasks 18, 19, 20)
- Message: (grouped in Task 18 commit)
- Files: `frontend/src/app/login/page.tsx`, `frontend/src/app/select-club/page.tsx`, `frontend/src/app/(protected)/dashboard/page.tsx`
- Pre-commit: `bun run build && bun run test`
22. Docker Compose Full Stack (Backend + Frontend + Hot Reload)
What to do:
- Update
`/docker-compose.yml` to add:
  - `dotnet-api` service: build from `backend/Dockerfile.dev`, port 5000→8080, volume mount `/backend:/app` (hot reload via `dotnet watch`), depends on postgres + keycloak
  - `nextjs` service: build from `frontend/Dockerfile.dev`, port 3000, volume mount `/frontend:/app` (hot reload via `bun run dev`), depends on dotnet-api
- Configure environment variables:
- Backend:
ConnectionStrings__DefaultConnection,Keycloak__Issuer,ASPNETCORE_ENVIRONMENT=Development - Frontend:
NEXT_PUBLIC_API_URL=http://localhost:5000,API_INTERNAL_URL=http://dotnet-api:8080,KEYCLOAK_ISSUER
- Ensure service startup order: postgres → keycloak → dotnet-api → nextjs
- Add
wait-for-itor health-check-based depends_on conditions - Backend runs migrations + seed on startup in Development mode
- Verify:
docker compose up→ all 4 services healthy → frontend accessible at localhost:3000 → can authenticate via Keycloak
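A sketch of the two new services with health-check-based ordering; service names follow this plan, while ports, the connection string, and credentials are illustrative placeholders (real secrets belong in an env file).

```yaml
# Sketch for docker-compose.yml additions; postgres and keycloak (from Task 2)
# are assumed to define their own healthchecks.
services:
  dotnet-api:
    build:
      context: ./backend
      dockerfile: Dockerfile.dev
    ports: ["5000:8080"]
    volumes: ["./backend:/app"]   # hot reload via dotnet watch
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      ConnectionStrings__DefaultConnection: "Host=postgres;Database=workclub;Username=workclub;Password=dev-only-placeholder"
      Keycloak__Issuer: http://keycloak:8080/realms/workclub
    depends_on:
      postgres:
        condition: service_healthy
      keycloak:
        condition: service_healthy

  nextjs:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    ports: ["3000:3000"]
    volumes: ["./frontend:/app"]  # hot reload via bun run dev
    environment:
      NEXT_PUBLIC_API_URL: http://localhost:5000
      API_INTERNAL_URL: http://dotnet-api:8080
    depends_on:
      dotnet-api:
        condition: service_started
```

Using `condition: service_healthy` (rather than bare `depends_on`) is what enforces the postgres → keycloak → dotnet-api → nextjs startup order.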
Must NOT do:
- Do NOT use production Dockerfiles for local dev (use Dockerfile.dev with hot reload)
- Do NOT hardcode production secrets
Recommended Agent Profile:
- Category:
unspecified-high- Reason: Multi-service Docker Compose orchestration with health checks and dependency ordering
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 5 (with Tasks 23, 24, 25)
- Blocks: Tasks 26, 27, 28
- Blocked By: Tasks 14 (API), 15 (Shifts API), 18 (Frontend layout)
References:
Pattern References:
- Task 2: Docker Compose base (postgres + keycloak)
- Task 11: Seed data service (runs on Development startup)
External References:
dotnet watch run: Hot reload in Docker with volume mount- Docker Compose
depends_onwithcondition: service_healthy - Volume mount caching:
:cachedon macOS for performance
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Full stack starts from clean state
- Tool: Bash
- Preconditions: Docker installed, no conflicting ports
- Steps:
  1. Run `docker compose down -v` (clean slate)
  2. Run `docker compose up -d`
  3. Wait up to 180s for all services healthy: `docker compose ps`
  4. Assert 4 services running: postgres, keycloak, dotnet-api, nextjs
  5. curl http://localhost:5000/health/live → assert 200
  6. curl http://localhost:3000 → assert 200
  7. curl Keycloak OIDC discovery → assert 200
- Expected Result: All 4 services healthy and responding
- Failure Indicators: Service fails to start, health check fails
- Evidence: `.sisyphus/evidence/task-22-full-stack.txt`

Scenario: Authentication works end-to-end
- Tool: Bash (curl)
- Preconditions: Full stack running
- Steps:
  1. Get token from Keycloak for admin@test.com
  2. Call GET /api/tasks with token + X-Tenant-Id → assert 200
  3. Assert response contains seed data tasks
- Expected Result: Auth + API + data all connected
- Failure Indicators: Auth fails, API returns 401/403, no data
- Evidence: `.sisyphus/evidence/task-22-e2e-auth.txt`

Commit: YES (groups with Tasks 23, 24, 25)
- Message: `infra(deploy): add full Docker Compose stack, Dockerfiles, and Kustomize dev overlay`
- Files: `docker-compose.yml`
- Pre-commit: `docker compose config`
23. Backend Dockerfiles (Dev + Prod Multi-Stage)
What to do:
- Create
/backend/Dockerfile.dev:- Base:
mcr.microsoft.com/dotnet/sdk:10.0 - Install dotnet-ef tool
- WORKDIR /app, copy csproj + restore, copy source
- ENTRYPOINT:
dotnet watch run --project src/WorkClub.Api/WorkClub.Api.csproj
- Base:
- Create
/backend/Dockerfile:- Multi-stage build:
- Stage 1 (build):
sdk:10.0, restore + build + publish - Stage 2 (runtime):
aspnet:10.0-alpine, copy published output, non-root user
- HEALTHCHECK:
`curl -sf http://localhost:8080/health/live || exit 1` (note: alpine-based images do not ship curl; install it via `apk add --no-cache curl` or use wget)
- Final image ~110MB
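A possible shape for `/backend/Dockerfile` following the stages above; the csproj path, the curl install, and the `app` user are assumptions to adapt (the non-root `app` user ships with recent .NET images).

```dockerfile
# Sketch of the production multi-stage build (paths assumed).
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src
# Restore in a separate layer so dependency downloads are cached
COPY src/WorkClub.Api/WorkClub.Api.csproj src/WorkClub.Api/
RUN dotnet restore src/WorkClub.Api/WorkClub.Api.csproj
COPY . .
RUN dotnet publish src/WorkClub.Api/WorkClub.Api.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:10.0-alpine AS runtime
WORKDIR /app
# curl is needed for the HEALTHCHECK below (not included in alpine)
RUN apk add --no-cache curl
COPY --from=build /app/publish .
USER app
EXPOSE 8080
HEALTHCHECK CMD curl -sf http://localhost:8080/health/live || exit 1
ENTRYPOINT ["dotnet", "WorkClub.Api.dll"]
```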
Must NOT do:
- Do NOT use full SDK image for production
- Do NOT run as root in production image
Recommended Agent Profile:
- Category:
quick- Reason: Standard .NET Docker patterns
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 5 (with Tasks 22, 24, 25)
- Blocks: Task 25
- Blocked By: Task 14 (working API to build)
References:
External References:
- .NET 10 Docker images:
mcr.microsoft.com/dotnet/aspnet:10.0-alpine(runtime),sdk:10.0(build) - Multi-stage: restore in separate layer for caching
- Non-root:
USER app(built into .NET Docker images)
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Production image builds and runs
- Tool: Bash
- Steps:
  1. Run `docker build -t workclub-api:test backend/`
  2. Assert build succeeds
  3. Run `docker run --rm -d -p 18080:8080 --name test-api workclub-api:test`
  4. Wait 10s, curl http://localhost:18080/health/live
  5. Assert response (may fail without DB — that's OK, just verify container starts)
  6. Check image size: `docker image inspect workclub-api:test --format='{{.Size}}'`
  7. Assert < 200MB
  8. Cleanup: `docker stop test-api`
- Expected Result: Image builds, starts, is <200MB
- Failure Indicators: Build fails, image too large
- Evidence: `.sisyphus/evidence/task-23-backend-docker.txt`

Commit: YES (groups with Tasks 22, 24, 25)
- Message: (grouped in Task 22 commit)
- Files: `backend/Dockerfile`, `backend/Dockerfile.dev`
- Pre-commit: `docker build -t test backend/`
24. Frontend Dockerfiles (Dev + Prod Standalone)
What to do:
- Create
/frontend/Dockerfile.dev:- Base:
node:22-alpine - Install bun:
npm install -g bun - Copy package.json + bun.lock, install deps
- CMD:
bun run dev
- Base:
- Create
/frontend/Dockerfile:- Multi-stage build (3 stages):
- Stage 1 (deps):
node:22-alpine, install bun,bun install --frozen-lockfile - Stage 2 (build): copy deps + source,
bun run build - Stage 3 (runner):
node:22-alpine, copy.next/standalone+.next/static+public, non-root user
- CMD:
node server.js - HEALTHCHECK:
`curl -sf http://localhost:3000/api/health || exit 1` (note: `node:22-alpine` has no curl; use BusyBox `wget -q --spider` or install curl)
- Final image ~180MB
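The three stages above could be sketched as follows; this assumes `next.config` sets `output: "standalone"` and uses BusyBox wget for the health check, since `node:22-alpine` does not ship curl.

```dockerfile
# Sketch of the production frontend build (standalone output, Node runtime).
FROM node:22-alpine AS deps
RUN npm install -g bun
WORKDIR /app
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile

FROM deps AS build
COPY . .
RUN bun run build   # produces .next/standalone with output: "standalone"

FROM node:22-alpine AS runner
WORKDIR /app
RUN adduser --system --uid 1001 nextjs
COPY --from=build /app/.next/standalone ./
COPY --from=build /app/.next/static ./.next/static
COPY --from=build /app/public ./public
USER nextjs
EXPOSE 3000
HEALTHCHECK CMD wget -q --spider http://localhost:3000/api/health || exit 1
CMD ["node", "server.js"]
```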
Must NOT do:
- Do NOT use Bun as production runtime (latency issues) — use Node.js
- Do NOT include dev dependencies in production image
Recommended Agent Profile:
- Category:
quick- Reason: Standard Next.js Docker patterns
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 5 (with Tasks 22, 23, 25)
- Blocks: Task 25
- Blocked By: Task 18 (working frontend)
References:
External References:
- Next.js standalone Docker: copy
.next/standalone,.next/static,public node server.jsas entry point (standalone output)- Non-root user:
adduser --system --uid 1001 nextjs
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Production image builds and starts
- Tool: Bash
- Steps:
  1. Run `docker build -t workclub-frontend:test frontend/`
  2. Assert build succeeds
  3. Run `docker run --rm -d -p 13000:3000 --name test-frontend workclub-frontend:test`
  4. Wait 10s, curl http://localhost:13000
  5. Assert HTTP 200
  6. Check image size < 250MB
  7. Cleanup: `docker stop test-frontend`
- Expected Result: Frontend image builds, starts, serves pages
- Failure Indicators: Build fails, no response
- Evidence: `.sisyphus/evidence/task-24-frontend-docker.txt`

Commit: YES (groups with Tasks 22, 23, 25)
- Message: (grouped in Task 22 commit)
- Files: `frontend/Dockerfile`, `frontend/Dockerfile.dev`
- Pre-commit: `docker build -t test frontend/`
25. Kustomize Dev Overlay + Resource Limits + Health Checks
What to do:
- Create
/infra/k8s/overlays/dev/kustomization.yaml:- Reference
../../base - Override replicas: 1 for all deployments
- Override resource limits: lower CPU/memory for dev
- Add dev-specific ConfigMap values (Development env, debug logging)
- Add dev-specific image tags
- Create
`/infra/k8s/overlays/dev/patches/`:
  - `backend-resources.yaml`: Lower resource requests/limits for dev
  - `frontend-resources.yaml`: Lower resources
- Update base manifests if needed:
- Ensure health check endpoints match actual implementations
- Backend:
/health/startup,/health/live,/health/ready(from Task 9) - Frontend:
/api/health(from Task 5 or Task 18)
- Verify:
kustomize build infra/k8s/overlays/devproduces valid YAML with dev overrides
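One way the overlay `kustomization.yaml` could look; the deployment and image names are placeholders, assuming the base defines matching resources.

```yaml
# Possible infra/k8s/overlays/dev/kustomization.yaml (names assumed).
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
replicas:              # replicas: 1 for all dev deployments
  - name: workclub-api
    count: 1
  - name: workclub-frontend
    count: 1
patches:               # lower CPU/memory for dev
  - path: patches/backend-resources.yaml
    target:
      kind: Deployment
      name: workclub-api
  - path: patches/frontend-resources.yaml
    target:
      kind: Deployment
      name: workclub-frontend
images:                # dev-specific image tags
  - name: workclub-api
    newTag: dev
  - name: workclub-frontend
    newTag: dev
```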
Must NOT do:
- Do NOT add HPA or PDB (not needed for dev)
- Do NOT add production secrets
Recommended Agent Profile:
- Category:
unspecified-high- Reason: Kustomize overlay configuration with patches
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 5 (with Tasks 22, 23, 24)
- Blocks: None
- Blocked By: Tasks 6 (base manifests), 23 (backend Dockerfile), 24 (frontend Dockerfile)
References:
Pattern References:
- Task 6: Kustomize base manifests
- Task 9: Health check endpoints
External References:
- Kustomize overlays:
resources: [../../base]+patches:list replicas:directive for scaling
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Dev overlay builds with correct overrides
- Tool: Bash
- Steps:
  1. Run `kustomize build infra/k8s/overlays/dev`
  2. Assert exit code 0
  3. Verify replicas are 1 for all deployments
  4. Verify resource limits are lower than base
- Expected Result: Valid YAML with dev-specific overrides
- Failure Indicators: Build fails, wrong values
- Evidence: `.sisyphus/evidence/task-25-kustomize-dev.txt`

Commit: YES (groups with Tasks 22, 23, 24)
- Message: (grouped in Task 22 commit)
- Files: `infra/k8s/overlays/dev/**/*.yaml`
- Pre-commit: `kustomize build infra/k8s/overlays/dev`
26. Playwright E2E Tests — Auth Flow + Club Switching
What to do:
- Create
frontend/tests/e2e/auth.spec.ts:- Test: Navigate to protected page → redirected to login
- Test: Click "Sign in" → redirected to Keycloak login → enter credentials → redirected back with session
- Test: After login, club picker shown (if multi-club user)
- Test: Select club → redirected to dashboard → club name visible in header
- Test: Switch club via dropdown → data refreshes → new club name visible
- Test: Logout → redirected to login page → protected page no longer accessible
- Configure Playwright to work with Docker Compose stack:
webServerconfig points to Docker Composenextjsservice- Use
globalSetupto ensure Docker Compose is running
- Save screenshots and trace on failure
Must NOT do:
- Do NOT test Keycloak admin console
- Do NOT test direct API calls (covered in backend tests)
Recommended Agent Profile:
- Category:
unspecified-high- Reason: E2E browser automation with Keycloak OIDC redirect flow
- Skills: [
playwright]playwright: Browser automation for E2E testing
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 6 (with Tasks 27, 28)
- Blocks: None
- Blocked By: Tasks 21 (login page), 22 (Docker Compose full stack)
References:
Pattern References:
- Task 21: Login page with Keycloak sign-in
- Task 18: Club-switcher component
- Task 3: Test user credentials (admin@test.com / testpass123)
External References:
- Playwright Keycloak login: `page.goto('/dashboard')` → assert redirect to Keycloak → `page.fill('#username')` → `page.fill('#password')` → `page.click('#kc-login')`
- Playwright screenshot: `await page.screenshot({ path: 'evidence/...' })`
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Full auth flow E2E
Tool: Playwright
Preconditions: Docker Compose full stack running
Steps:
1. Navigate to http://localhost:3000/dashboard
2. Assert redirected to /login page
3. Click "Sign in" button
4. Assert redirected to Keycloak login (http://localhost:8080/realms/workclub/...)
5. Fill username: "admin@test.com", password: "testpass123"
6. Click Login
7. Assert redirected back to /select-club (multi-club user)
8. Click first club card ("Sunrise Tennis Club")
9. Assert URL is /dashboard
10. Assert text "Sunrise Tennis Club" visible in header
11. Screenshot
Expected Result: Complete login + club selection flow works
Failure Indicators: Redirect loop, Keycloak errors, stuck on club picker
Evidence: `.sisyphus/evidence/task-26-auth-flow.png`

Scenario: Club switching refreshes data
Tool: Playwright
Preconditions: Logged in, club-1 active
Steps:
1. Navigate to /tasks → assert tasks visible (club-1 data)
2. Open club switcher dropdown
3. Click "Valley Cycling Club"
4. Assert tasks list updates (different data)
5. Assert header shows "Valley Cycling Club"
Expected Result: Switching clubs changes visible data
Failure Indicators: Data doesn't refresh, old club's data shown
Evidence: `.sisyphus/evidence/task-26-club-switch.png`

Commit: YES (groups with Tasks 27, 28)
- Message: `test(e2e): add Playwright E2E tests for auth, tasks, and shifts`
- Files: `frontend/tests/e2e/auth.spec.ts`
- Pre-commit: `bunx playwright test tests/e2e/auth.spec.ts`
27. Playwright E2E Tests — Task Management Flow
What to do:
- Create `frontend/tests/e2e/tasks.spec.ts`:
- Test: Create new task → appears in list
- Test: View task detail → all fields displayed
- Test: Transition task through all states (Open → Assigned → InProgress → Review → Done)
- Test: Viewer role cannot see "New Task" button
- Test: Task list filters by status
- Use authenticated session from Task 26 setup
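The 5-state workflow exercised here is strictly linear in the plan (Open → Assigned → In Progress → Review → Done). A minimal sketch of the transition table the UI and API are expected to enforce; the type and function names are illustrative, not taken from the actual backend:

```typescript
// Illustrative sketch only: names are not from the real codebase.
type TaskStatus = "Open" | "Assigned" | "InProgress" | "Review" | "Done";

// The plan specifies a strictly linear forward path; no back-transitions
// are listed, so none are allowed here.
const allowedTransitions: Record<TaskStatus, TaskStatus[]> = {
  Open: ["Assigned"],
  Assigned: ["InProgress"],
  InProgress: ["Review"],
  Review: ["Done"],
  Done: [], // terminal state
};

// The API rejects transitions this predicate rejects (422 per the plan).
function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```

The viewer-role check ("New Task" button hidden) is an authorization concern layered on top of this state machine, not part of it.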
Must NOT do:
- Do NOT test API directly (use UI interactions only)
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: Complex E2E flow with multiple pages and state transitions
- Skills: [`playwright`]
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 6 (with Tasks 26, 28)
- Blocks: None
- Blocked By: Tasks 19 (task UI), 22 (Docker Compose)
References:
Pattern References:
- Task 19: Task list page, task detail page, task creation form
- Task 14: Task API behavior (valid/invalid transitions)
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Full task lifecycle via UI
Tool: Playwright
Preconditions: Logged in as admin, club-1 active
Steps:
1. Navigate to /tasks
2. Click "New Task" button
3. Fill title: "Replace court net", description: "Net on court 3 is torn"
4. Click Submit → assert redirected to task detail
5. Assert status badge shows "Open"
6. Click "Assign" → select member → assert status "Assigned"
7. Click "Start" → assert status "In Progress"
8. Click "Submit for Review" → assert status "Review"
9. Click "Approve" → assert status "Done"
10. Screenshot final state
Expected Result: Task flows through all 5 states via UI
Failure Indicators: Transition fails, wrong status shown
Evidence: `.sisyphus/evidence/task-27-task-lifecycle.png`

Scenario: Viewer cannot create tasks
Tool: Playwright
Preconditions: Logged in as viewer@test.com
Steps:
1. Navigate to /tasks
2. Assert "New Task" button is NOT visible
Expected Result: Viewer sees task list but no create button
Failure Indicators: Create button visible for viewer
Evidence: `.sisyphus/evidence/task-27-viewer-no-create.png`

Commit: YES (groups with Tasks 26, 28)
- Message: (grouped in Task 26 commit)
- Files: `frontend/tests/e2e/tasks.spec.ts`
- Pre-commit: `bunx playwright test tests/e2e/tasks.spec.ts`
28. Playwright E2E Tests — Shift Sign-Up Flow
What to do:
- Create `frontend/tests/e2e/shifts.spec.ts`:
- Test: Create new shift → appears in list
- Test: View shift detail → capacity bar shows 0/N
- Test: Sign up for shift → capacity updates, user listed
- Test: Cancel sign-up → capacity decreases
- Test: Full capacity → sign-up button disabled
- Test: Past shift → no sign-up button
- Use authenticated session from Task 26 setup
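The sign-up states the UI must reflect (past shift hides the button, full shift disables it, duplicates are rejected) can be sketched as one predicate; the names here are hypothetical, not from the real codebase:

```typescript
// Hypothetical sketch of the sign-up rules the shift UI reflects.
interface Shift {
  startsAt: Date;   // shift start time
  capacity: number; // maximum number of sign-ups
  signedUp: number; // current sign-up count
}

type SignUpCheck = "ok" | "past" | "full" | "already-signed-up";

// Order matters: a past shift is never signable regardless of capacity.
function canSignUp(shift: Shift, alreadySignedUp: boolean, now: Date): SignUpCheck {
  if (shift.startsAt.getTime() <= now.getTime()) return "past";
  if (alreadySignedUp) return "already-signed-up";
  if (shift.signedUp >= shift.capacity) return "full";
  return "ok";
}
```

On the API side the "full" case maps to 409 per the success criteria; the concurrent-sign-up race is deliberately left to the backend integration tests.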
Must NOT do:
- Do NOT test concurrent sign-ups via UI (covered in backend integration tests)
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: E2E flow with capacity state tracking
- Skills: [`playwright`]
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 6 (with Tasks 26, 27)
- Blocks: None
- Blocked By: Tasks 20 (shift UI), 22 (Docker Compose)
References:
Pattern References:
- Task 20: Shift list page, shift detail page, sign-up button
- Task 15: Shift API behavior (capacity, past shift rejection)
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: Sign up and cancel for shift
Tool: Playwright
Preconditions: Logged in as member1, future shift with capacity 3, 0 signed up
Steps:
1. Navigate to /shifts
2. Click on future shift card
3. Assert "0/3 spots filled"
4. Click "Sign Up" → assert "1/3 spots filled"
5. Assert user name appears in sign-up list
6. Click "Cancel Sign-up" → assert "0/3 spots filled"
7. Screenshot
Expected Result: Sign-up and cancel update capacity correctly
Failure Indicators: Count doesn't update, name not shown
Evidence: `.sisyphus/evidence/task-28-shift-signup.png`

Scenario: Full capacity disables sign-up
Tool: Playwright
Preconditions: Shift with capacity 1, 1 already signed up
Steps:
1. Navigate to shift detail for full shift
2. Assert "1/1 spots filled"
3. Assert "Sign Up" button is disabled or not present
Expected Result: Cannot sign up for full shift
Failure Indicators: Sign-up button still active
Evidence: `.sisyphus/evidence/task-28-full-capacity.png`

Commit: YES (groups with Tasks 26, 27)
- Message: (grouped in Task 26 commit)
- Files: `frontend/tests/e2e/shifts.spec.ts`
- Pre-commit: `bunx playwright test tests/e2e/shifts.spec.ts`
29. Gitea CI/CD Pipelines — CI Validation + Image Bootstrap + Kubernetes Deploy
What to do:
- Maintain `.gitea/workflows/ci.yml` for repository `code.hal9000.damnserver.com/MasterMito/work-club-manager`
- Maintain `.gitea/workflows/cd-bootstrap.yml` for manual multi-arch image publishing to the private registry
- Maintain `.gitea/workflows/cd-deploy.yml` for Kubernetes deployment using Kustomize overlays
- Configure CI triggers:
  - `push` on `main` and feature branches
  - `pull_request` targeting `main`
  - `workflow_dispatch` for manual reruns
- CI workflow structure (parallel validation jobs):
  - `backend-ci`: setup .NET 10 SDK, restore, build, run backend unit/integration tests
  - `frontend-ci`: setup Bun, install deps, run lint, type-check, unit tests, production build
  - `infra-ci`: validate Docker Compose and Kustomize manifests
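The trigger and job layout described above can be sketched as a workflow skeleton; action versions, branch globs, and the assumption that `docker` and `kustomize` are available on the runner are guesses, not pinned decisions.

```yaml
# .gitea/workflows/ci.yml -- skeleton sketch only
name: ci
on:
  push:
    branches: [main, "feature/**"]  # assumed feature-branch naming
  pull_request:
    branches: [main]
  workflow_dispatch:
jobs:
  backend-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: "10.0.x"
      - run: dotnet restore backend/WorkClub.sln
      - run: dotnet build backend/WorkClub.sln --no-restore
      - run: dotnet test backend/tests/ --no-build
  frontend-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: oven-sh/setup-bun@v2
      - run: bun install --frozen-lockfile
        working-directory: frontend
      - run: bun run lint && bun run test && bun run build
        working-directory: frontend
  infra-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose config        # assumes docker on the runner
      - run: kustomize build infra/k8s/overlays/dev > /dev/null
```

The three jobs have no `needs:` edges, so they run in parallel, which is what the branch-protection checks below rely on.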
- CD bootstrap workflow behavior:
  - Manual trigger with `image_tag` + build flags
  - Buildx multi-arch build (`linux/amd64`, `linux/arm64`) for `workclub-api` and `workclub-frontend`
  - Push image tags to `192.168.241.13:8080` and emit task-31/task-32/task-33 evidence artifacts
- CD deploy workflow behavior:
  - Triggered by successful bootstrap (`workflow_run`) or manual dispatch (`image_tag` input)
  - Install kubectl + kustomize on runner
  - Run `kustomize edit set image` in `infra/k8s/overlays/dev`
  - Apply manifests with `kubectl apply -k infra/k8s/overlays/dev`
  - Ensure namespace `workclub-dev` exists and perform deployment diagnostics
- Enforce branch protection expectation in plan notes:
  - Required checks: `backend-ci`, `frontend-ci`, `infra-ci`
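The deploy behavior can be sketched as follows; the registry address and image names come from the bootstrap notes above, while kubeconfig/secret handling and tag resolution for `workflow_run`-triggered runs are deliberately elided.

```yaml
# .gitea/workflows/cd-deploy.yml -- trigger/apply skeleton, not a full workflow
name: cd-deploy
on:
  workflow_run:
    workflows: ["cd-bootstrap"]   # must match the bootstrap workflow's name
    types: [completed]
  workflow_dispatch:
    inputs:
      image_tag:
        required: true
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # kubectl + kustomize installation and kubeconfig secret wiring omitted
      - name: Pin images to the requested tag
        run: |
          cd infra/k8s/overlays/dev
          kustomize edit set image workclub-api=192.168.241.13:8080/workclub-api:${{ inputs.image_tag }}
          kustomize edit set image workclub-frontend=192.168.241.13:8080/workclub-frontend:${{ inputs.image_tag }}
      - name: Apply overlay
        run: |
          kubectl get ns workclub-dev || kubectl create ns workclub-dev
          kubectl apply -k infra/k8s/overlays/dev
```

Pinning via `kustomize edit set image` keeps the deploy step aligned with the overlay instead of templating raw manifests, which is the "no opaque pipeline stage" constraint below.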
Must NOT do:
- Do NOT collapse bootstrap and deployment into one opaque pipeline stage
- Do NOT bypass image-tag pinning in deployment
- Do NOT remove CI validation gates (`backend-ci`, `frontend-ci`, `infra-ci`)
Recommended Agent Profile:
- Category: `unspecified-high`
- Reason: CI pipeline design spans backend/frontend/infra validation and requires careful runner orchestration
- Skills: []
Parallelization:
- Can Run In Parallel: YES
- Parallel Group: Wave 6 (with Tasks 26, 27, 28)
- Blocks: Final Verification Wave (F1-F4)
- Blocked By: Tasks 12, 17, 23, 24, 25
References:
Pattern References:
- `.gitea/workflows/ci.yml` — Source of truth for CI checks
- `.gitea/workflows/cd-bootstrap.yml` — Source of truth for image publish bootstrap
- `.gitea/workflows/cd-deploy.yml` — Source of truth for deployment apply logic
- `docker-compose.yml` — Source of truth for `docker compose config` validation
- `infra/k8s/base/kustomization.yaml` and `infra/k8s/overlays/dev/kustomization.yaml` — Kustomize build/apply inputs
- `backend/WorkClub.sln` — Backend restore/build/test entrypoint for .NET job
- `frontend/package.json` + `frontend/bun.lock` — Frontend scripts and cache key anchor
External References:
- Gitea Actions docs: workflow syntax and trigger model (`.gitea/workflows/*.yml`)
- `actions/setup-dotnet` usage for .NET 10 SDK installation
- `oven-sh/setup-bun` usage for Bun runtime setup
- Upload artifact action compatible with Gitea Actions runner implementation
Acceptance Criteria:
QA Scenarios (MANDATORY):
Scenario: CI workflow validates backend/frontend/infra in parallel
Tool: Bash (Gitea API)
Preconditions: `.gitea/workflows/ci.yml` pushed to repository, `GITEA_TOKEN` available
Steps:
1. Trigger workflow via API or push a CI-test branch commit
2. Query latest workflow run status for `ci.yml`
3. Assert jobs `backend-ci`, `frontend-ci`, and `infra-ci` all executed
4. Assert final workflow conclusion is `success`
Expected Result: All three CI jobs pass in one run
Failure Indicators: Missing job, skipped required job, or non-success conclusion
Evidence: `.sisyphus/evidence/task-29-gitea-ci-success.json`

Scenario: CD bootstrap and deploy workflows are present and wired
Tool: Bash
Preconditions: Repository contains workflow files
Steps:
1. Assert `.gitea/workflows/cd-bootstrap.yml` exists
2. Assert `.gitea/workflows/cd-deploy.yml` exists
3. Grep bootstrap workflow for buildx multi-arch publish step
4. Grep deploy workflow for `workflow_run`, `kustomize edit set image`, and `kubectl apply -k`
Expected Result: Both CD workflows exist with expected bootstrap and deploy steps
Failure Indicators: Missing file, missing trigger, or missing deploy commands
Evidence: `.sisyphus/evidence/task-29-gitea-cd-workflows.txt`

Scenario: Pipeline fails on intentional backend break
Tool: Bash (git + Gitea API)
Preconditions: Temporary branch available, ability to push test commit
Steps:
1. Create a temporary branch with an intentional backend compile break
2. Push branch and wait for CI run
3. Assert `backend-ci` fails
4. Assert workflow conclusion is `failure`
5. Revert test commit / delete branch
Expected Result: CI correctly rejects broken code and reports failure
Failure Indicators: Broken backend still reports success
Evidence: `.sisyphus/evidence/task-29-gitea-ci-failure.json`

Commit: YES
- Message: `ci(cd): add CI validation plus bootstrap and Kubernetes deployment workflows`
- Files: `.gitea/workflows/ci.yml`, `.gitea/workflows/cd-bootstrap.yml`, `.gitea/workflows/cd-deploy.yml`
- Pre-commit: `docker compose config && kustomize build infra/k8s/overlays/dev > /dev/null`
Final Verification Wave
4 review agents run in PARALLEL. ALL must APPROVE. Rejection → fix → re-run.
F1. Plan Compliance Audit — `oracle`
Read the plan end-to-end. For each "Must Have": verify the implementation exists (read file, curl endpoint, run command). For each "Must NOT Have": search the codebase for forbidden patterns (`MediatR`, `IRepository<T>`, `Swashbuckle`, `IsMultiTenant()`, `SET app.current_tenant` without `LOCAL`) — reject with file:line if found. Validate the CI appendix by checking that `.gitea/workflows/ci.yml` exists and includes `backend-ci`, `frontend-ci`, `infra-ci` jobs with push + pull_request triggers. Check evidence files exist in `.sisyphus/evidence/`. Compare deliverables against the plan. Output: `Must Have [N/N] | Must NOT Have [N/N] | Tasks [N/N] | VERDICT: APPROVE/REJECT`
F2. Code Quality Review — `unspecified-high`
Run `dotnet build` + `dotnet format --verify-no-changes` + `dotnet test` + `bun run build` + `bun run lint`. Validate CI config integrity by running a YAML lint/syntax check on `.gitea/workflows/ci.yml` and verifying all referenced commands exist in repo scripts/paths. Review all changed files for: `as any`/`@ts-ignore`, empty catches, `console.log` in prod, commented-out code, unused imports, `// TODO` without ticket. Check for AI slop: excessive comments, over-abstraction, generic names (data/result/item/temp), unnecessary null checks on non-nullable types. Output: `Build [PASS/FAIL] | Format [PASS/FAIL] | Tests [N pass/N fail] | Lint [PASS/FAIL] | Files [N clean/N issues] | VERDICT`
F3. Real Manual QA — `unspecified-high` (+ `playwright` skill)
Start `docker compose up` from a clean state. Execute EVERY QA scenario from EVERY task — follow exact steps, capture evidence. Test cross-task integration: login → pick club → create task → assign → transition through states → switch club → verify isolation → create shift → sign up → verify capacity. Test edge cases: invalid JWT, expired token, cross-tenant header spoof, concurrent sign-up. Save to `.sisyphus/evidence/final-qa/`. Output: `Scenarios [N/N pass] | Integration [N/N] | Edge Cases [N tested] | VERDICT`
F4. Scope Fidelity Check — `deep`
For each task: read "What to do", read the actual diff (`git log`/`git diff`). Verify 1:1 — everything in spec was built (no missing), nothing beyond spec was built (no creep). Check "Must NOT do" compliance across ALL tasks. Detect cross-task contamination: Task N touching Task M's files. Flag unaccounted changes. Verify no CQRS, no MediatR, no generic repo, no Swashbuckle, no social login, no recurring shifts, no notifications. Output: `Tasks [N/N compliant] | Contamination [CLEAN/N issues] | Unaccounted [CLEAN/N files] | VERDICT`
Commit Strategy
| Wave | Commit | Message | Files | Pre-commit |
|---|---|---|---|---|
| 1 | T1 | chore(scaffold): initialize git repo and monorepo with .NET solution | `backend/**/*.csproj`, `backend/WorkClub.sln`, `.gitignore`, `.editorconfig` | `dotnet build` |
| 1 | T2 | infra(docker): add Docker Compose with PostgreSQL and Keycloak | `docker-compose.yml`, `backend/Dockerfile.dev` | `docker compose config` |
| 1 | T3 | infra(keycloak): configure realm with test users and club memberships | `infra/keycloak/realm-export.json` | — |
| 1 | T4 | feat(domain): add core entities — Club, Member, Task, Shift | `backend/src/WorkClub.Domain/**/*.cs` | `dotnet build` |
| 1 | T5 | chore(frontend): initialize Next.js project with Tailwind and shadcn/ui | `frontend/**`, `package.json`, `next.config.ts`, `tailwind.config.ts` | `bun run build` |
| 1 | T6 | infra(k8s): add Kustomize base manifests | `infra/k8s/base/**/*.yaml` | `kustomize build infra/k8s/base` |
| 2 | T7+T8 | feat(data): add EF Core DbContext, migrations, RLS policies, and multi-tenant middleware | `backend/src/WorkClub.Infrastructure/**/*.cs`, `backend/src/WorkClub.Api/Middleware/*.cs` | `dotnet build` |
| 2 | T9 | feat(auth): add Keycloak JWT authentication and role-based authorization | `backend/src/WorkClub.Api/Auth/*.cs`, `Program.cs` | `dotnet build` |
| 2 | T10 | feat(frontend-auth): add NextAuth.js Keycloak integration | `frontend/src/auth/**`, `frontend/src/middleware.ts` | `bun run build` |
| 2 | T11 | feat(seed): add development seed data script | `backend/src/WorkClub.Infrastructure/Seed/*.cs` | `dotnet build` |
| 2 | T12 | test(infra): add xUnit + Testcontainers + WebApplicationFactory base | `backend/tests/**/*.cs` | `dotnet test` |
| 3 | T13 | test(rls): add multi-tenant isolation integration tests | `backend/tests/WorkClub.Tests.Integration/MultiTenancy/*.cs` | `dotnet test` |
| 3 | T14 | feat(tasks): add Task CRUD API with 5-state workflow | `backend/src/WorkClub.Api/Endpoints/Tasks/*.cs`, `backend/src/WorkClub.Application/Tasks/*.cs` | `dotnet test` |
| 3 | T15 | feat(shifts): add Shift CRUD API with sign-up and capacity | `backend/src/WorkClub.Api/Endpoints/Shifts/*.cs`, `backend/src/WorkClub.Application/Shifts/*.cs` | `dotnet test` |
| 3 | T16 | feat(clubs): add Club and Member API endpoints | `backend/src/WorkClub.Api/Endpoints/Clubs/*.cs` | `dotnet test` |
| 3 | T17 | test(frontend): add Vitest + RTL + Playwright setup | `frontend/vitest.config.ts`, `frontend/playwright.config.ts` | `bun run test` |
| 4 | T18-T21 | feat(ui): add layout, club-switcher, login, task and shift pages | `frontend/src/app/**/*.tsx`, `frontend/src/components/**/*.tsx` | `bun run build && bun run test` |
| 5 | T22-T25 | infra(deploy): add full Docker Compose stack, Dockerfiles, and Kustomize dev overlay | `docker-compose.yml`, `**/Dockerfile`, `infra/k8s/overlays/dev/**/*.yaml` | `docker compose config && kustomize build infra/k8s/overlays/dev` |
| 6 | T26-T28 | test(e2e): add Playwright E2E tests for auth, tasks, and shifts | `frontend/tests/e2e/**/*.spec.ts` | `bunx playwright test` |
| 6 | T29 | ci(cd): add CI validation plus bootstrap and Kubernetes deployment workflows | `.gitea/workflows/ci.yml`, `.gitea/workflows/cd-bootstrap.yml`, `.gitea/workflows/cd-deploy.yml` | `docker compose config && kustomize build infra/k8s/overlays/dev > /dev/null` |
Success Criteria
Verification Commands
# All services start and are healthy
docker compose up -d && docker compose ps # Expected: 4 services running
# Backend health
curl -sf http://localhost:5000/health/live # Expected: 200, "Healthy"
# Keycloak OIDC discovery
curl -sf http://localhost:8080/realms/workclub/.well-known/openid-configuration # Expected: 200, JSON with "issuer"
# Get auth token
TOKEN=$(curl -sf -X POST http://localhost:8080/realms/workclub/protocol/openid-connect/token \
-d "client_id=workclub-app&username=admin@test.com&password=testpass123&grant_type=password" | jq -r '.access_token')
# Tenant isolation
curl -sf -H "Authorization: Bearer $TOKEN" -H "X-Tenant-Id: club-1-uuid" http://localhost:5000/api/tasks # Expected: 200, only club-1 tasks
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $TOKEN" -H "X-Tenant-Id: nonexistent-club" http://localhost:5000/api/tasks # Expected: 403
# Backend tests
dotnet test backend/tests/ --no-build # Expected: All pass
# Frontend build + tests
bun run build # Expected: Exit 0
bun run test # Expected: All pass
# K8s manifests valid
kustomize build infra/k8s/overlays/dev > /dev/null # Expected: Exit 0
# CI workflow file present and includes required jobs
grep -E "backend-ci|frontend-ci|infra-ci" .gitea/workflows/ci.yml # Expected: all 3 job names present
# CD bootstrap workflow present with multi-arch publish
grep -E "buildx|linux/amd64,linux/arm64|workclub-api|workclub-frontend" .gitea/workflows/cd-bootstrap.yml
# CD deploy workflow present with deploy trigger and apply step
grep -E "workflow_run|kustomize edit set image|kubectl apply -k" .gitea/workflows/cd-deploy.yml
Final Checklist
- All "Must Have" items present and verified
- All "Must NOT Have" items absent (no MediatR, no generic repo, no Swashbuckle, etc.)
- All backend tests pass (`dotnet test`)
- All frontend tests pass (`bun run test`)
- All E2E tests pass (`bunx playwright test`)
- Docker Compose stack starts clean and healthy
- Kustomize manifests build without errors
- Gitea CI workflow exists and references backend-ci/frontend-ci/infra-ci
- Gitea CD bootstrap and deploy workflows exist and are wired to image publish/deploy steps
- RLS isolation proven at database level
- Cross-tenant access returns 403
- Task state machine rejects invalid transitions (422)
- Shift sign-up respects capacity (409 when full)